Compare commits

...

25 Commits

Author SHA1 Message Date
satushh
18cced70ec add bullet points in changelog file 2025-08-22 11:16:35 +05:30
satushh
196e457450 Merge branch 'develop' into flag-sync-from-genesis 2025-08-22 11:12:09 +05:30
satushh
00f441e7e2 changelog 2025-08-22 11:10:04 +05:30
satushh
6f7e7f5885 bazel run //:gazelle -- fix 2025-08-22 10:24:19 +05:30
satushh
bb666833c5 node: check for genesis block in validateSyncFlags 2025-08-21 21:41:35 +05:30
Justin Traglia
ee03c7cce2 Add spec references, a mapping of spec to implementation (#15592)
* Add spec references, a mapping of spec to implementation

* Add changelog fragment

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-21 14:54:04 +00:00
satushh
334eb40576 bazel run //:gazelle -- fix 2025-08-21 17:07:31 +05:30
satushh
097605b45d node: avoid duplicate registration in test 2025-08-21 15:28:18 +05:30
satushh
8f68e224d9 node: tests and usage 2025-08-21 11:32:13 +05:30
kasey
c5135f6995 enforce schedule alignment when next_fork_epoch matches (#15604)
* enforce schedule alignment when next_fork_epoch matches

* lint & typo

* James feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-20 23:57:30 +00:00
Pop Chunhapanya
29aedac113 Fix subnet peer discovery (#15603)
* Fix subnet peer discovery

Currently computeAllNeededSubnets is called only once when the subnets
are subscribed. It should have been called regularly.

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-20 16:52:11 +00:00
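The subnet discovery fix above amounts to recomputing the needed subnets on a schedule instead of a single time at subscription. A minimal sketch of that pattern follows; apart from the computeAllNeededSubnets name taken from the commit message, every type and function here is a hypothetical stand-in rather than Prysm's actual code.

package main

import (
	"context"
	"fmt"
	"time"
)

// subnetService is a stand-in for the p2p service; it models only what the sketch needs.
type subnetService struct{}

// computeAllNeededSubnets is a placeholder for the function named in the commit;
// it returns a fixed set purely for illustration.
func (s *subnetService) computeAllNeededSubnets() []uint64 { return []uint64{1, 5, 9} }

// ensureSubscriptions stands in for (re)subscribing to the given subnets.
func (s *subnetService) ensureSubscriptions(subnets []uint64) {
	fmt.Println("subscribed to subnets:", subnets)
}

// runSubnetRefresh re-evaluates the needed subnets on every tick rather than
// only once, which is the gist of the fix.
func (s *subnetService) runSubnetRefresh(ctx context.Context, every time.Duration) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			s.ensureSubscriptions(s.computeAllNeededSubnets())
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	(&subnetService{}).runSubnetRefresh(ctx, time.Second)
}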
kasey
08fb3812b7 provide data column storage to rpc handlers (#15606)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-20 15:51:11 +00:00
satushh
9b2ee0f720 node: make flags more robust 2025-08-20 20:43:21 +05:30
satushh
dcf9379dd2 flag: support sync-from-genesis flag 2025-08-19 20:47:45 +05:30
kasey
07738dd9a4 improve pubsub topic subscription failure logging (#15600)
* improve pubsub topic subscription failure logging

* Errorf doesn't support %w, so use %v

* log capitalization

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-19 12:24:07 +00:00
james-prysm
53df29b07f Fix find peers regression (#15578)
* adding what I think could be a fix for find peer

* removing unneeded comment

* unit tests

* linting

* gofmt

* changelog

* Update beacon-chain/p2p/discovery_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update changelog/james-prysm_fix-find-peers.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fixing test import

* applying suggestions

* fixing typo

* manu feedback

* accidentally checked in files

* addressing manu's edgecase, old bug

* moving tests from service-test.go to subnets_test.go and adding coverage for receiving bad existing node with higher seq

* cleanup

* updating for clarity

* missingPeerCount should increment if we are removing the peer from map

* manu's recommendations on defective subnet rollback edge case

* rollback introduced too much complication as well as a new bug so we are removing it

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-18 19:41:32 +00:00
Manu NALEPA
00cf1f2507 Implement PeerDAS sync (#15564)
* PeerDAS: Implement sync

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Satyajit's comment.

* Partially fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Add tests for `sendDataColumnSidecarsRequest`.

* Fix Satyajit's comment.

* Implement `TestSendDataColumnSidecarsRequest`.

* Implement `TestFetchDataColumnSidecarsFromPeers`.

* Implement `TestUpdateResults`.

* Implement `TestSelectPeers`.

* Implement `TestCategorizeIndices`.

* Fix James' comment.

* Fix James's comment.

* Fix James' commit.

* Fix James' comment.

* Fix James' comment.

* Fix flakiness in `TestSelectPeers`.

* Update cmd/beacon-chain/flags/config.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix Preston's comment.

* Fix James's comment.

* Implement `TestFetchDataColumnSidecars`.

* Revert "Fix Potuz's comment."

This reverts commit c45230b455.

* Fix Potuz's comment.

* Revert "Fix James' comment."

This reverts commit a3f919205a.

* Fix James' comment.

* Fix Preston's comment.

* Fix James' comment.

* `selectPeers`: Avoid map with key but empty value.

* Fix typo.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix James' comment.

* Add DataColumnStorage and SubscribeAllDataSubnets flag.

* Add extra flags

* Fix Potuz's and Preston's comment.

* Add rate limiter check.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-18 14:36:07 +00:00
terence
6528fb9cea Update consensus spec to v1.6.0-alpha.4 and implement data column support (#15590)
* Update consensus spec to v1.6.0-alpha.4 and implement data column support for forkchoice spectests

* Apply suggestion from @prestonvanloon

Co-Authored-By: Preston Van Loon <pvanloon@offchainlabs.com>

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-08-16 15:49:12 +00:00
terence
5021131811 Fix NewSignedBeaconBlock calls to use Block field for equivocation handling (#15595) 2025-08-16 14:19:11 +00:00
kasey
26cec9d9c7 omit NetworkScheduleEntry fields that are not part of BlobScheduleEntry (#15557)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-14 22:45:47 +00:00
Justin Traglia
4ed90a02ef Rename various variables/functions to be more clear (#15529)
* Rename various variables/functions to be more clear

* Add changelog fragment

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-08-14 11:06:22 +00:00
trinadh61
7d528c75bb adding user agent validator beacon client (#15574)
* adding user agent validator beacon client

* Update runtime/version/version.go

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>

* test cases

* contribution readme

* setting user agent to build version data

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-08-13 21:36:17 +00:00
Preston Van Loon
e7b2953d5a Address out of bounds concern in beacon-chain/core/peerdas/das_core.go (#15586) 2025-08-13 15:07:26 +00:00
Muzry
acf35e849e Update endpoint to return 404 after isOptimistic check (#15559)
* Update endpoint to return 404 after isOptimistic check

* Fix error handling by using predefined errors

* fix: helpers build.bazel

* remove the StateIdDecodeError

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-13 14:40:20 +00:00
Preston Van Loon
c826d334a1 Add missing Fulu block type in stream handler (grpc StreamBlocksAltair) (#15583) 2025-08-12 22:26:26 +00:00
136 changed files with 17925 additions and 1349 deletions

.github/workflows/check-specrefs.yml (new file, 43 lines)

@@ -0,0 +1,43 @@
name: Check Spec References
on: [push, pull_request]
jobs:
  check-specrefs:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Check version consistency
        run: |
          WORKSPACE_VERSION=$(grep 'consensus_spec_version = ' WORKSPACE | sed 's/.*"\(.*\)"/\1/')
          ETHSPECIFY_VERSION=$(grep '^version:' specrefs/.ethspecify.yml | sed 's/version: //')
          if [ "$WORKSPACE_VERSION" != "$ETHSPECIFY_VERSION" ]; then
            echo "Version mismatch between WORKSPACE and ethspecify"
            echo "  WORKSPACE: $WORKSPACE_VERSION"
            echo "  specrefs/.ethspecify.yml: $ETHSPECIFY_VERSION"
            exit 1
          else
            echo "Versions match: $WORKSPACE_VERSION"
          fi
      - name: Install ethspecify
        run: python3 -mpip install ethspecify
      - name: Update spec references
        run: ethspecify process --path=specrefs
      - name: Check for differences
        run: |
          if ! git diff --exit-code specrefs >/dev/null; then
            echo "Spec references are out-of-date!"
            echo ""
            git --no-pager diff specrefs
            exit 1
          else
            echo "Spec references are up-to-date!"
          fi
      - name: Check spec references
        run: ethspecify check --path=specrefs
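The "Check version consistency" step above is just a string comparison between two pinned versions. The Go sketch below reproduces that check outside CI; the file names and extraction patterns are taken from the workflow, everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// specVersions extracts the consensus spec version pinned in WORKSPACE and the
// one recorded in specrefs/.ethspecify.yml, mirroring the grep/sed commands in
// the workflow step above.
func specVersions() (string, string, error) {
	ws, err := os.ReadFile("WORKSPACE")
	if err != nil {
		return "", "", err
	}
	m := regexp.MustCompile(`consensus_spec_version = "([^"]+)"`).FindSubmatch(ws)
	if m == nil {
		return "", "", fmt.Errorf("consensus_spec_version not found in WORKSPACE")
	}
	spec, err := os.ReadFile("specrefs/.ethspecify.yml")
	if err != nil {
		return "", "", err
	}
	n := regexp.MustCompile(`(?m)^version:\s*(\S+)`).FindSubmatch(spec)
	if n == nil {
		return "", "", fmt.Errorf("version not found in specrefs/.ethspecify.yml")
	}
	return string(m[1]), strings.TrimSpace(string(n[1])), nil
}

func main() {
	wsVersion, ethspecifyVersion, err := specVersions()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if wsVersion != ethspecifyVersion {
		fmt.Printf("Version mismatch: WORKSPACE=%s, specrefs/.ethspecify.yml=%s\n", wsVersion, ethspecifyVersion)
		os.Exit(1)
	}
	fmt.Println("Versions match:", wsVersion)
}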


@@ -253,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.1"
consensus_spec_version = "v1.6.0-alpha.4"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-o4t9p3R+fQHF4KOykGmwlG3zDw5wUdVWprkzId8aIsk=",
"minimal": "sha256-sU7ToI8t3MR8x0vVjC8ERmAHZDWpEmnAC9FWIpHi5x4=",
"mainnet": "sha256-YKS4wngg0LgI9Upp4MYJ77aG+8+e/G4YeqEIlp06LZw=",
"general": "sha256-MaN4zu3o0vWZypUHS5r4D8WzJF4wANoadM8qm6iyDs4=",
"minimal": "sha256-aZGNPp/bBvJgq3Wf6vyR0H6G3DOkbSuggEmOL4jEmtg=",
"mainnet": "sha256-C7jjosvpzUgw3GPajlsWBV02ZbkZ5Uv4ikmOqfDGajI=",
},
version = consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-Nv4TEuEJPQIM4E6T9J0FOITsmappmXZjGtlhe1HEXnU=",
integrity = "sha256-qreawRS77l8CebiNww8z727qUItw7KlHY1Xqj7IrPdk=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)


@@ -6,20 +6,20 @@ import (
)
// Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
func Verify(sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {
func Verify(blobSidecars ...blocks.ROBlob) error {
if len(blobSidecars) == 0 {
return nil
}
if len(sidecars) == 1 {
if len(blobSidecars) == 1 {
return kzgContext.VerifyBlobKZGProof(
bytesToBlob(sidecars[0].Blob),
bytesToCommitment(sidecars[0].KzgCommitment),
bytesToKZGProof(sidecars[0].KzgProof))
bytesToBlob(blobSidecars[0].Blob),
bytesToCommitment(blobSidecars[0].KzgCommitment),
bytesToKZGProof(blobSidecars[0].KzgProof))
}
blobs := make([]GoKZG.Blob, len(sidecars))
cmts := make([]GoKZG.KZGCommitment, len(sidecars))
proofs := make([]GoKZG.KZGProof, len(sidecars))
for i, sidecar := range sidecars {
blobs := make([]GoKZG.Blob, len(blobSidecars))
cmts := make([]GoKZG.KZGCommitment, len(blobSidecars))
proofs := make([]GoKZG.KZGProof, len(blobSidecars))
for i, sidecar := range blobSidecars {
blobs[i] = *bytesToBlob(sidecar.Blob)
cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)


@@ -22,8 +22,8 @@ func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZG
}
func TestVerify(t *testing.T) {
sidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(sidecars...))
blobSidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(blobSidecars...))
}
func TestBytesToAny(t *testing.T) {


@@ -240,9 +240,10 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
}
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability at slot %d", b.Block().Slot())
if err := s.areSidecarsAvailable(ctx, avs, b); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability for block %#x at slot %d", b.Root(), b.Block().Slot())
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
JustifiedCheckpoint: jCheckpoints[i],
FinalizedCheckpoint: fCheckpoints[i]}
@@ -308,6 +309,30 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityStore, roBlock consensusblocks.ROBlock) error {
blockVersion := roBlock.Version()
block := roBlock.Block()
slot := block.Slot()
if blockVersion >= version.Fulu {
if err := s.areDataColumnsAvailable(ctx, roBlock.Root(), block); err != nil {
return errors.Wrapf(err, "are data columns available for block %#x with slot %d", roBlock.Root(), slot)
}
return nil
}
if blockVersion >= version.Deneb {
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), roBlock); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability at slot %d", slot)
}
return nil
}
return nil
}
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
@@ -584,7 +609,7 @@ func (s *Service) runLateBlockTasks() {
// It returns a map where each key represents a missing BlobSidecar index.
// An empty map means we have all indices; a non-empty map can be used to compare incoming
// BlobSidecars against the set of known missing sidecars.
func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength]byte, expected [][]byte, slot primitives.Slot) (map[uint64]bool, error) {
func missingBlobIndices(store *filesystem.BlobStorage, root [fieldparams.RootLength]byte, expected [][]byte, slot primitives.Slot) (map[uint64]bool, error) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(expected) == 0 {
return nil, nil
@@ -592,7 +617,7 @@ func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength
if len(expected) > maxBlobsPerBlock {
return nil, errMaxBlobsExceeded
}
indices := bs.Summary(root)
indices := store.Summary(root)
missing := make(map[uint64]bool, len(expected))
for i := range expected {
if len(expected[i]) > 0 && !indices.HasIndex(uint64(i)) {
@@ -607,7 +632,7 @@ func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength
// It returns a map where each key represents a missing DataColumnSidecar index.
// An empty map means we have all indices; a non-empty map can be used to compare incoming
// DataColumns against the set of known missing sidecars.
func missingDataColumnIndices(bs *filesystem.DataColumnStorage, root [fieldparams.RootLength]byte, expected map[uint64]bool) (map[uint64]bool, error) {
func missingDataColumnIndices(store *filesystem.DataColumnStorage, root [fieldparams.RootLength]byte, expected map[uint64]bool) (map[uint64]bool, error) {
if len(expected) == 0 {
return nil, nil
}
@@ -619,7 +644,7 @@ func missingDataColumnIndices(bs *filesystem.DataColumnStorage, root [fieldparam
}
// Get a summary of the data columns stored in the database.
summary := bs.Summary(root)
summary := store.Summary(root)
// Check all expected data columns against the summary.
missing := make(map[uint64]bool)
@@ -717,7 +742,7 @@ func (s *Service) areDataColumnsAvailable(
summary := s.dataColumnStorage.Summary(root)
storedDataColumnsCount := summary.Count()
minimumColumnCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
minimumColumnCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
// As soon as we have enough data column sidecars, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
@@ -820,7 +845,7 @@ func (s *Service) areDataColumnsAvailable(
missingIndices = uint64MapToSortedSlice(missingMap)
}
return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndices)
return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing: %v", block.Slot(), root, missingIndices)
}
}
}


@@ -2889,7 +2889,7 @@ func TestIsDataAvailable(t *testing.T) {
})
t.Run("Fulu - more than half of the columns in custody", func(t *testing.T) {
minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
minimumColumnsCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct)
for i := range minimumColumnsCountToReconstruct {
indices = append(indices, i)
@@ -2974,7 +2974,7 @@ func TestIsDataAvailable(t *testing.T) {
startWaiting := make(chan bool)
minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
minimumColumnsCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct-missingColumns)
for i := range minimumColumnsCountToReconstruct - missingColumns {


@@ -17,7 +17,7 @@ func (s *Service) ReceiveDataColumns(dataColumnSidecars []blocks.VerifiedRODataC
// ReceiveDataColumn receives a single data column.
func (s *Service) ReceiveDataColumn(dataColumnSidecar blocks.VerifiedRODataColumn) error {
if err := s.dataColumnStorage.Save([]blocks.VerifiedRODataColumn{dataColumnSidecar}); err != nil {
return errors.Wrap(err, "save data column sidecars")
return errors.Wrap(err, "save data column sidecar")
}
return nil


@@ -89,7 +89,7 @@ func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumn(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
func (mb *mockBroadcaster) BroadcastDataColumnSidecar(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
mb.broadcastCalled = true
return nil
}


@@ -78,6 +78,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -264,6 +265,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
}
func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
params.SetupTestConfigCleanup(t)
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)


@@ -223,6 +223,14 @@ func dataColumnsSidecars(
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
// Validate that we have enough cells and proofs for this column index
if columnIndex >= uint64(len(cellsForRow)) {
return nil, errors.Errorf("column index %d exceeds cells length %d for blob %d", columnIndex, len(cellsForRow), rowIndex)
}
if columnIndex >= uint64(len(proofsForRow)) {
return nil, errors.Errorf("column index %d exceeds proofs length %d for blob %d", columnIndex, len(proofsForRow), rowIndex)
}
cell := cellsForRow[columnIndex]
column = append(column, cell)


@@ -67,6 +67,55 @@ func TestDataColumnSidecars(t *testing.T) {
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorIs(t, err, peerdas.ErrSizeMismatch)
})
t.Run("cells array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with insufficient cells for the number of columns.
// This simulates a scenario where cellsAndProofs has fewer cells than expected columns.
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, 10), // Only 10 cells
Proofs: make([]kzg.Proof, 10), // Only 10 proofs
},
}
// This should fail because the function will try to access columns up to NumberOfColumns
// but we only have 10 cells/proofs.
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorContains(t, "column index", err)
require.ErrorContains(t, "exceeds cells length", err)
})
t.Run("proofs array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, 5), // Only 5 proofs, less than columns
},
}
// This should fail when trying to access proof beyond index 4.
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorContains(t, "column index", err)
require.ErrorContains(t, "exceeds proofs length", err)
})
}
func TestComputeCustodyGroupForColumn(t *testing.T) {


@@ -18,8 +18,8 @@ var (
ErrBlobsCellsProofsMismatch = errors.New("blobs and cells proofs mismatch")
)
// MinimumColumnsCountToReconstruct return the minimum number of columns needed to proceed to a reconstruction.
func MinimumColumnsCountToReconstruct() uint64 {
// MinimumColumnCountToReconstruct return the minimum number of columns needed to proceed to a reconstruction.
func MinimumColumnCountToReconstruct() uint64 {
// If the number of columns is odd, then we need total / 2 + 1 columns to reconstruct.
// If the number of columns is even, then we need total / 2 columns to reconstruct.
return (params.BeaconConfig().NumberOfColumns + 1) / 2
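For concreteness, the renamed helper's arithmetic can be checked with a tiny standalone program. This is a sketch, not the Prysm package itself; the function body is copied from the hunk above with the config lookup replaced by a parameter.

package main

import "fmt"

// Same arithmetic as MinimumColumnCountToReconstruct above: (total + 1) / 2
// yields total/2 for an even total and total/2 + 1 for an odd one.
func minimumColumnCountToReconstruct(numberOfColumns uint64) uint64 {
	return (numberOfColumns + 1) / 2
}

func main() {
	fmt.Println(minimumColumnCountToReconstruct(128)) // 64: even total, half the columns suffice
	fmt.Println(minimumColumnCountToReconstruct(7))   // 4: odd total, half plus one
}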
@@ -58,7 +58,7 @@ func ReconstructDataColumnSidecars(inVerifiedRoSidecars []blocks.VerifiedRODataC
// Check if there is enough sidecars to reconstruct the missing columns.
sidecarCount := len(sidecarByIndex)
if uint64(sidecarCount) < MinimumColumnsCountToReconstruct() {
if uint64(sidecarCount) < MinimumColumnCountToReconstruct() {
return nil, ErrNotEnoughDataColumnSidecars
}


@@ -48,7 +48,7 @@ func TestMinimumColumnsCountToReconstruct(t *testing.T) {
params.OverrideBeaconConfig(cfg)
// Compute the minimum number of columns needed to reconstruct.
actual := peerdas.MinimumColumnsCountToReconstruct()
actual := peerdas.MinimumColumnCountToReconstruct()
require.Equal(t, tc.expected, actual)
})
}
@@ -100,7 +100,7 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
t.Run("not enough columns to enable reconstruction", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)
minimum := peerdas.MinimumColumnsCountToReconstruct()
minimum := peerdas.MinimumColumnCountToReconstruct()
_, err := peerdas.ReconstructDataColumnSidecars(verifiedRoSidecars[:minimum-1])
require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
})


@@ -4,7 +4,6 @@ go_library(
name = "go_default_library",
srcs = [
"availability_blobs.go",
"availability_columns.go",
"blob_cache.go",
"data_column_cache.go",
"iface.go",
@@ -13,7 +12,6 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -23,7 +21,6 @@ go_library(
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -33,7 +30,6 @@ go_test(
name = "go_default_test",
srcs = [
"availability_blobs_test.go",
"availability_columns_test.go",
"blob_cache_test.go",
"data_column_cache_test.go",
],
@@ -49,7 +45,6 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -53,30 +53,25 @@ func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchV
// Persist adds blobs to the working blob cache. Blobs stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all blobs referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {
return nil
}
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sidecars)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
if len(blobSidecars) > 1 {
firstRoot := blobSidecars[0].BlockRoot()
for _, sidecar := range blobSidecars[1:] {
if len(sidecars) > 1 {
firstRoot := sidecars[0].BlockRoot()
for _, sidecar := range sidecars[1:] {
if sidecar.BlockRoot() != firstRoot {
return errMixedRoots
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(blobSidecars[0].Slot()), slots.ToEpoch(current)) {
if !params.WithinDAPeriod(slots.ToEpoch(sidecars[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromSidecar(blobSidecars[0])
key := keyFromSidecar(sidecars[0])
entry := s.cache.ensure(key)
for _, blobSidecar := range blobSidecars {
for _, blobSidecar := range sidecars {
if err := entry.stash(&blobSidecar); err != nil {
return err
}


@@ -118,23 +118,21 @@ func TestLazilyPersistent_Missing(t *testing.T) {
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[2]))
require.NoError(t, as.Persist(1, blobSidecars[2]))
err := as.IsDataAvailable(ctx, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All but one persisted, return missing idx
require.NoError(t, as.Persist(1, scs[0]))
require.NoError(t, as.Persist(1, blobSidecars[0]))
err = as.IsDataAvailable(ctx, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All persisted, return nil
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.Persist(1, blobSidecars...))
require.NoError(t, as.IsDataAvailable(ctx, 1, blk))
}
@@ -149,10 +147,8 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[0]))
require.NoError(t, as.Persist(1, blobSidecars[0]))
err := as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
require.ErrorIs(t, err, errCommitmentMismatch)
@@ -161,29 +157,25 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
func TestLazyPersistOnceCommitted(t *testing.T) {
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
// stashes as expected
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.Persist(1, blobSidecars...))
// ignores duplicates
require.ErrorIs(t, as.Persist(1, scs...), ErrDuplicateSidecar)
require.ErrorIs(t, as.Persist(1, blobSidecars...), ErrDuplicateSidecar)
// ignores index out of bound
blobSidecars[0].Index = 6
require.ErrorIs(t, as.Persist(1, blocks.NewSidecarFromBlobSidecar(blobSidecars[0])), errIndexOutOfBounds)
require.ErrorIs(t, as.Persist(1, blobSidecars[0]), errIndexOutOfBounds)
_, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
more := blocks.NewSidecarsFromBlobSidecars(moreBlobSidecars)
// ignores sidecars before the retention period
slotOOB, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
require.NoError(t, as.Persist(32+slotOOB, more[0]))
require.NoError(t, as.Persist(32+slotOOB, moreBlobSidecars[0]))
// doesn't ignore new sidecars with a different block root
require.NoError(t, as.Persist(1, more...))
require.NoError(t, as.Persist(1, moreBlobSidecars...))
}
type mockBlobBatchVerifier struct {


@@ -1,213 +0,0 @@
package das
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to Persist until the IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.DataColumnStorage
nodeID enode.ID
cache *dataColumnCache
newDataColumnsVerifier verification.NewDataColumnsVerifier
custodyGroupCount uint64
}
var _ AvailabilityStore = &LazilyPersistentStoreColumn{}
// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(
store *filesystem.DataColumnStorage,
nodeID enode.ID,
newDataColumnsVerifier verification.NewDataColumnsVerifier,
custodyGroupCount uint64,
) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
nodeID: nodeID,
cache: newDataColumnCache(),
newDataColumnsVerifier: newDataColumnsVerifier,
custodyGroupCount: custodyGroupCount,
}
}
// PersistColumns adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
if len(sidecars) == 0 {
return nil
}
dataColumnSidecars, err := blocks.DataColumnSidecarsFromSidecars(sidecars)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
// It is safe to retrieve the first sidecar.
firstSidecar := dataColumnSidecars[0]
if len(sidecars) > 1 {
firstRoot := firstSidecar.BlockRoot()
for _, sidecar := range dataColumnSidecars[1:] {
if sidecar.BlockRoot() != firstRoot {
return errMixedRoots
}
}
}
firstSidecarEpoch, currentEpoch := slots.ToEpoch(firstSidecar.Slot()), slots.ToEpoch(current)
if !params.WithinDAPeriod(firstSidecarEpoch, currentEpoch) {
return nil
}
key := cacheKey{slot: firstSidecar.Slot(), root: firstSidecar.BlockRoot()}
entry := s.cache.ensure(key)
for _, sidecar := range dataColumnSidecars {
if err := entry.stash(&sidecar); err != nil {
return errors.Wrap(err, "stash DataColumnSidecar")
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnsSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, currentSlot primitives.Slot, block blocks.ROBlock) error {
blockCommitments, err := s.fullCommitmentsToCheck(s.nodeID, block, currentSlot)
if err != nil {
return errors.Wrapf(err, "full commitments to check with block root `%#x` and current slot `%d`", block.Root(), currentSlot)
}
// Return early for blocks that do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
// Get the root of the block.
blockRoot := block.Root()
// Build the cache key for the block.
key := cacheKey{slot: block.Block().Slot(), root: blockRoot}
// Retrieve the cache entry for the block, or create an empty one if it doesn't exist.
entry := s.cache.ensure(key)
// Delete the cache entry for the block at the end.
defer s.cache.delete(key)
// Set the disk summary for the block in the cache entry.
entry.setDiskSummary(s.store.Summary(blockRoot))
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
roDataColumns, err := entry.filter(blockRoot, blockCommitments)
if err != nil {
return errors.Wrap(err, "entry filter")
}
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
verifier := s.newDataColumnsVerifier(roDataColumns, verification.ByRangeRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return errors.Wrap(err, "valid fields")
}
if err := verifier.SidecarInclusionProven(); err != nil {
return errors.Wrap(err, "sidecar inclusion proven")
}
if err := verifier.SidecarKzgProofVerified(); err != nil {
return errors.Wrap(err, "sidecar KZG proof verified")
}
verifiedRoDataColumns, err := verifier.VerifiedRODataColumns()
if err != nil {
return errors.Wrap(err, "verified RO data columns - should never happen")
}
if err := s.store.Save(verifiedRoDataColumns); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
// fullCommitmentsToCheck returns the commitments to check for a given block.
func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
// Return early for blocks that are pre-Fulu.
if block.Version() < version.Fulu {
return &safeCommitmentsArray{}, nil
}
// Compute the block epoch.
blockSlot := block.Block().Slot()
blockEpoch := slots.ToEpoch(blockSlot)
// Compute the current epoch.
currentEpoch := slots.ToEpoch(currentSlot)
// Return early if the request is out of the MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS window.
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return &safeCommitmentsArray{}, nil
}
// Retrieve the KZG commitments for the block.
kzgCommitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Return early if there are no commitments in the block.
if len(kzgCommitments) == 0 {
return &safeCommitmentsArray{}, nil
}
// Retrieve peer info.
samplingSize := max(s.custodyGroupCount, samplesPerSlot)
peerInfo, _, err := peerdas.Info(nodeID, samplingSize)
if err != nil {
return nil, errors.Wrap(err, "peer info")
}
// Create a safe commitments array for the custody columns.
commitmentsArray := &safeCommitmentsArray{}
commitmentsArraySize := uint64(len(commitmentsArray))
for column := range peerInfo.CustodyColumns {
if column >= commitmentsArraySize {
return nil, errors.Errorf("custody column index %d too high (max allowed %d) - should never happen", column, commitmentsArraySize)
}
commitmentsArray[column] = kzgCommitments
}
return commitmentsArray, nil
}


@@ -1,313 +0,0 @@
package das
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
)
var commitments = [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
func TestPersist(t *testing.T) {
t.Run("no sidecars", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(0)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("mixed roots", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
{Slot: 2, Index: 2},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
require.ErrorIs(t, err, errMixedRoots)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("outside DA period", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(1_000_000, roSidecars...)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("nominal", func(t *testing.T) {
const slot = 42
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: slot, Index: 1},
{Slot: slot, Index: 5},
}
roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(slot, roSidecars...)
require.NoError(t, err)
require.Equal(t, 1, len(lazilyPersistentStoreColumns.cache.entries))
key := cacheKey{slot: slot, root: roDataColumns[0].BlockRoot()}
entry, ok := lazilyPersistentStoreColumns.cache.entries[key]
require.Equal(t, true, ok)
// A call to Persist does NOT save the sidecars to disk.
require.Equal(t, uint64(0), entry.diskSummary.Count())
require.DeepSSZEqual(t, roDataColumns[0], *entry.scs[1])
require.DeepSSZEqual(t, roDataColumns[1], *entry.scs[5])
for i, roDataColumn := range entry.scs {
if map[int]bool{1: true, 5: true}[i] {
continue
}
require.IsNil(t, roDataColumn)
}
})
}
func TestIsDataAvailable(t *testing.T) {
newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
}
ctx := t.Context()
t.Run("without commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, 0)
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
require.NoError(t, err)
})
t.Run("with commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
block := signedRoBlock.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
root := signedRoBlock.Root()
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, 0)
indices := [...]uint64{1, 17, 19, 42, 75, 87, 102, 117}
dataColumnsParams := make([]util.DataColumnParam, 0, len(indices))
for _, index := range indices {
dataColumnParams := util.DataColumnParam{
Index: index,
KzgCommitments: commitments,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dataColumnsParams = append(dataColumnsParams, dataColumnParams)
}
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParams)
key := cacheKey{root: root}
entry := lazilyPersistentStoreColumns.cache.ensure(key)
defer lazilyPersistentStoreColumns.cache.delete(key)
for _, verifiedRoDataColumn := range verifiedRoDataColumns {
err := entry.stash(&verifiedRoDataColumn.RODataColumn)
require.NoError(t, err)
}
err = lazilyPersistentStoreColumns.IsDataAvailable(ctx, slot, signedRoBlock)
require.NoError(t, err)
actual, err := dataColumnStorage.Get(root, indices[:])
require.NoError(t, err)
summary := dataColumnStorage.Summary(root)
require.Equal(t, uint64(len(indices)), summary.Count())
require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
})
}
func TestFullCommitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
require.NoError(t, err)
testCases := []struct {
name string
commitments [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
}{
{
name: "Pre-Fulu block",
block: func(t *testing.T) blocks.ROBlock {
return newSignedRoBlock(t, util.NewBeaconBlockElectra())
},
},
{
name: "Commitments outside data availability window",
block: func(t *testing.T) blocks.ROBlock {
beaconBlockElectra := util.NewBeaconBlockElectra()
// Block is from slot 0, "current slot" is window size +1 (so outside the window)
beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments
return newSignedRoBlock(t, beaconBlockElectra)
},
slot: windowSlots + 1,
},
{
name: "Commitments within data availability window",
block: func(t *testing.T) blocks.ROBlock {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedBeaconBlockFulu.Block.Slot = 100
return newSignedRoBlock(t, signedBeaconBlockFulu)
},
commitments: commitments,
slot: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
b := tc.block(t)
s := NewLazilyPersistentStoreColumn(nil, enode.ID{}, nil, numberOfColumns)
commitmentsArray, err := s.fullCommitmentsToCheck(enode.ID{}, b, tc.slot)
require.NoError(t, err)
for _, commitments := range commitmentsArray {
require.DeepEqual(t, tc.commitments, commitments)
}
})
}
}
func roSidecarsFromDataColumnParamsByBlockRoot(t *testing.T, parameters []util.DataColumnParam) ([]blocks.ROSidecar, []blocks.RODataColumn) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, parameters)
roSidecars := make([]blocks.ROSidecar, 0, len(roDataColumns))
for _, roDataColumn := range roDataColumns {
roSidecars = append(roSidecars, blocks.NewSidecarFromDataColumnSidecar(roDataColumn))
}
return roSidecars, roDataColumns
}
func newSignedRoBlock(t *testing.T, signedBeaconBlock interface{}) blocks.ROBlock {
sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
}
type mockDataColumnsVerifier struct {
t *testing.T
dataColumnSidecars []blocks.RODataColumn
validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}
var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}
func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)
verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
for _, dataColumnSidecar := range m.dataColumnSidecars {
verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
}
return verifiedDataColumnSidecars, nil
}
func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (m *mockDataColumnsVerifier) ValidFields() error {
m.validCalled = true
return nil
}
func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
return nil
}
func (m *mockDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }
func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
m.SidecarInclusionProvenCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
m.SidecarKzgProofVerifiedCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }


@@ -15,5 +15,5 @@ import (
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, sc ...blocks.ROSidecar) error
Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}


@@ -5,13 +5,12 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
errors "github.com/pkg/errors"
)
// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
type MockAvailabilityStore struct {
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
PersistBlobsCallback func(current primitives.Slot, sc ...blocks.ROBlob) error
PersistBlobsCallback func(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}
var _ AvailabilityStore = &MockAvailabilityStore{}
@@ -25,13 +24,9 @@ func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current pri
}
// Persist satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROSidecar) error {
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sc)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
func (m *MockAvailabilityStore) Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error {
if m.PersistBlobsCallback != nil {
return m.PersistBlobsCallback(current, blobSidecars...)
return m.PersistBlobsCallback(current, blobSidecar...)
}
return nil
}


@@ -100,6 +100,14 @@ type (
}
)
// DataColumnStorageReader is an interface to read data column sidecars from the filesystem.
type DataColumnStorageReader interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
Get(root [fieldparams.RootLength]byte, indices []uint64) ([]blocks.VerifiedRODataColumn, error)
}
var _ DataColumnStorageReader = &DataColumnStorage{}
// WithDataColumnBasePath is a required option that sets the base path of data column storage.
func WithDataColumnBasePath(base string) DataColumnStorageOption {
return func(b *DataColumnStorage) error {


@@ -84,12 +84,6 @@ func (s DataColumnStorageSummary) Stored() map[uint64]bool {
return stored
}
// DataColumnStorageSummarizer can be used to receive a summary of metadata about data columns on disk for a given root.
// The DataColumnStorageSummary can be used to check which indices (if any) are available for a given block by root.
type DataColumnStorageSummarizer interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
}
type dataColumnStorageSummaryCache struct {
mu sync.RWMutex
dataColumnCount float64
@@ -98,8 +92,6 @@ type dataColumnStorageSummaryCache struct {
cache map[[fieldparams.RootLength]byte]DataColumnStorageSummary
}
var _ DataColumnStorageSummarizer = &dataColumnStorageSummaryCache{}
func newDataColumnStorageSummaryCache() *dataColumnStorageSummaryCache {
return &dataColumnStorageSummaryCache{
cache: make(map[[fieldparams.RootLength]byte]DataColumnStorageSummary),


@@ -144,14 +144,3 @@ func NewEphemeralDataColumnStorageWithMocker(t testing.TB) (*DataColumnMocker, *
fs, dcs := NewEphemeralDataColumnStorageAndFs(t)
return &DataColumnMocker{fs: fs, dcs: dcs}, dcs
}
func NewMockDataColumnStorageSummarizer(t *testing.T, set map[[fieldparams.RootLength]byte][]uint64) DataColumnStorageSummarizer {
c := newDataColumnStorageSummaryCache()
for root, indices := range set {
if err := c.set(DataColumnsIdent{Root: root, Epoch: 0, Indices: indices}); err != nil {
t.Fatal(err)
}
}
return c
}


@@ -115,6 +115,17 @@ type NoHeadAccessDatabase interface {
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
DeleteHistoricalDataBeforeSlot(ctx context.Context, slot primitives.Slot, batchSize int) (int, error)
// Genesis operations.
LoadGenesis(ctx context.Context, stateBytes []byte) error
SaveGenesisData(ctx context.Context, state state.BeaconState) error
EnsureEmbeddedGenesis(ctx context.Context) error
// Support for checkpoint sync and backfill.
SaveOriginCheckpointBlockRoot(ctx context.Context, blockRoot [32]byte) error
SaveOrigin(ctx context.Context, serState, serBlock []byte) error
SaveBackfillStatus(context.Context, *dbval.BackfillStatus) error
BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error
// Custody operations.
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
@@ -131,16 +142,6 @@ type HeadAccessDatabase interface {
HeadBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error)
HeadBlockRoot() ([32]byte, error)
SaveHeadBlockRoot(ctx context.Context, blockRoot [32]byte) error
// Genesis operations.
LoadGenesis(ctx context.Context, stateBytes []byte) error
SaveGenesisData(ctx context.Context, state state.BeaconState) error
EnsureEmbeddedGenesis(ctx context.Context) error
// Support for checkpoint sync and backfill.
SaveOrigin(ctx context.Context, serState, serBlock []byte) error
SaveBackfillStatus(context.Context, *dbval.BackfillStatus) error
BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error
}
// SlasherDatabase interface for persisting data related to detecting slashable offenses on Ethereum.


@@ -86,7 +86,9 @@ go_test(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/monitor:go_default_library",
@@ -99,6 +101,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],


@@ -550,6 +550,11 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
return errors.Wrap(err, "could not ensure embedded genesis")
}
// Validate sync options when starting with an empty database
if err := b.validateSyncFlags(); err != nil {
return err
}
if b.CheckpointInitializer != nil {
log.Info("Checkpoint sync - Downloading origin state and block")
if err := b.CheckpointInitializer.Initialize(b.ctx, b.db); err != nil {
@@ -564,6 +569,52 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
log.WithField("address", depositAddress).Info("Deposit contract")
return nil
}
// validateSyncFlags ensures that when starting with an empty database,
// the user has explicitly chosen either genesis sync or checkpoint sync.
func (b *BeaconNode) validateSyncFlags() error {
// Check if database has an origin checkpoint (indicating it's not empty)
_, err := b.db.OriginCheckpointBlockRoot(b.ctx)
if err == nil {
// Database is not empty, validation is not needed
return nil
}
if !errors.Is(err, db.ErrNotFoundOriginBlockRoot) {
// Some other error occurred
return errors.Wrap(err, "could not check origin checkpoint block root")
}
// if genesis exists, also consider DB non-empty.
if gb, err := b.db.GenesisBlock(b.ctx); err == nil && gb != nil && !gb.IsNil() {
return nil
}
// Database is empty, check if user has provided required flags
syncFromGenesis := b.cliCtx.Bool(flags.SyncFromGenesis.Name)
hasCheckpointSync := b.CheckpointInitializer != nil
if !syncFromGenesis && !hasCheckpointSync {
return errors.New("when starting with an empty database, you must specify either:\n" +
" --sync-from-genesis (to sync from genesis)\n" +
" --checkpoint-sync-url <url> (to sync from a remote beacon node)\n" +
" --checkpoint-state <path> and --checkpoint-block <path> (to sync from local files)\n\n" +
"Checkpoint sync is recommended for faster syncing.")
}
// Check for conflicting sync options
if syncFromGenesis && hasCheckpointSync {
return errors.New("conflicting sync options: cannot use both --sync-from-genesis and checkpoint sync flags. " +
"Please choose either genesis sync or checkpoint sync, not both.")
}
if syncFromGenesis {
log.Warn("Syncing from genesis is enabled. This will take a very long time and is not recommended. " +
"Consider using checkpoint sync instead with --checkpoint-sync-url.")
}
return nil
}
func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context, clearer *dbClearer) error {
if !b.slasherEnabled {
return nil
@@ -845,6 +896,7 @@ func (b *BeaconNode) registerInitialSyncService(complete chan struct{}) error {
ClockWaiter: b.clockWaiter,
InitialSyncComplete: complete,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
}, opts...)
return b.services.RegisterService(is)
}
@@ -966,6 +1018,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
Router: router,
ClockWaiter: b.clockWaiter,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
TrackedValidatorsCache: b.trackedValidatorsCache,
PayloadIDCache: b.payloadIDCache,
LCStore: b.lcStore,


@@ -7,6 +7,7 @@ import (
"net/http"
"net/http/httptest"
"path/filepath"
"strings"
"testing"
"time"
@@ -14,15 +15,19 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
mockExecution "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/monitor"
"github.com/OffchainLabs/prysm/v6/cmd"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/runtime"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prometheus/client_golang/prometheus"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli/v2"
)
@@ -49,6 +54,7 @@ func TestNodeClose_OK(t *testing.T) {
set.Bool("demo-config", true, "demo configuration")
set.String("deposit-contract", "0x0000000000000000000000000000000000000000", "deposit contract address")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
set.Bool("sync-from-genesis", true, "sync from genesis")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
cmd.ValidatorMonitorIndicesFlag.Value = &cli.IntSlice{}
cmd.ValidatorMonitorIndicesFlag.Value.SetInt(1)
@@ -74,6 +80,7 @@ func TestNodeStart_Ok(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
set.Bool("sync-from-genesis", true, "sync from genesis")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
ctx, cancel := newCliContextWithCancel(&app, set)
@@ -104,6 +111,7 @@ func TestNodeStart_SyncChecker(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
set.Bool("sync-from-genesis", true, "sync from genesis")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
ctx, cancel := newCliContextWithCancel(&app, set)
@@ -143,6 +151,7 @@ func TestClearDB(t *testing.T) {
set.String("datadir", tmp, "node data directory")
set.Bool(cmd.ForceClearDB.Name, true, "force clear db")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
set.Bool("sync-from-genesis", true, "sync from genesis")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
context, cancel := newCliContextWithCancel(&app, set)
@@ -262,3 +271,128 @@ func TestCORS(t *testing.T) {
})
}
}
// TestValidateSyncFlags tests the validateSyncFlags function with real database instances
func TestValidateSyncFlags(t *testing.T) {
tests := []struct {
expectWarning bool
expectError bool
hasCheckpointInitializer bool
syncFromGenesis bool
dbHasOriginCheckpoint bool
expectedErrorContains string
name string
}{
{
name: "Database not empty - validation skipped",
dbHasOriginCheckpoint: true,
syncFromGenesis: false,
expectError: false,
},
{
name: "Empty DB, no sync flags - should fail",
dbHasOriginCheckpoint: false,
syncFromGenesis: false,
expectError: true,
expectedErrorContains: "when starting with an empty database, you must specify either",
},
{
name: "Empty DB, sync from genesis - should succeed with warning",
dbHasOriginCheckpoint: false,
syncFromGenesis: true,
expectError: false,
expectWarning: true,
},
{
name: "Empty DB, checkpoint sync - should succeed",
dbHasOriginCheckpoint: false,
hasCheckpointInitializer: true,
expectError: false,
},
{
name: "Empty DB, conflicting sync options - should fail",
dbHasOriginCheckpoint: false,
syncFromGenesis: true,
hasCheckpointInitializer: true,
expectError: true,
expectedErrorContains: "conflicting sync options",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Isolate Prometheus metrics per subtest to avoid duplicate registration across DB setups.
reg := prometheus.NewRegistry()
prometheus.DefaultRegisterer = reg
prometheus.DefaultGatherer = reg
ctx := context.Background()
// Set up real database for testing (empty to start).
beaconDB := testDB.SetupDB(t)
// Populate database if needed (simulate "non-empty" via origin checkpoint).
if tt.dbHasOriginCheckpoint {
err := beaconDB.SaveOriginCheckpointBlockRoot(ctx, [32]byte{0x01})
require.NoError(t, err)
}
// Set up CLI flags
flagSet := flag.NewFlagSet("test", flag.ContinueOnError)
flagSet.Bool(flags.SyncFromGenesis.Name, tt.syncFromGenesis, "")
app := cli.App{}
cliCtx := cli.NewContext(&app, flagSet, nil)
// Create BeaconNode with test setup
beaconNode := &BeaconNode{
ctx: ctx,
db: beaconDB,
cliCtx: cliCtx,
}
// Set CheckpointInitializer if needed
if tt.hasCheckpointInitializer {
beaconNode.CheckpointInitializer = &mockCheckpointInitializer{}
}
// Capture log output for warning detection
hook := logTest.NewGlobal()
defer hook.Reset()
// Call the function under test
err := beaconNode.validateSyncFlags()
// Validate results
if tt.expectError {
require.NotNil(t, err)
if tt.expectedErrorContains != "" {
require.ErrorContains(t, tt.expectedErrorContains, err)
}
} else {
require.NoError(t, err)
}
// Check for warning log if expected
if tt.expectWarning {
found := false
for _, entry := range hook.Entries {
if entry.Level.String() == "warning" &&
strings.Contains(entry.Message, "Syncing from genesis is enabled") {
found = true
break
}
}
require.Equal(t, true, found, "Expected warning log about genesis sync")
}
})
}
}
// mockCheckpointInitializer is a simple mock for testing
type mockCheckpointInitializer struct{}
func (m *mockCheckpointInitializer) Initialize(ctx context.Context, db db.Database) error {
return nil
}

View File

@@ -195,7 +195,6 @@ go_test(
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],

View File

@@ -305,15 +305,15 @@ func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update
return nil
}
// BroadcastDataColumn broadcasts a data column to the p2p network, the message is assumed to be
// BroadcastDataColumnSidecar broadcasts a data column to the p2p network, the message is assumed to be
// broadcasted to the current fork and to the input column subnet.
func (s *Service) BroadcastDataColumn(
func (s *Service) BroadcastDataColumnSidecar(
root [fieldparams.RootLength]byte,
dataColumnSubnet uint64,
dataColumnSidecar *ethpb.DataColumnSidecar,
) error {
// Add tracing to the function.
ctx, span := trace.StartSpan(s.ctx, "p2p.BroadcastDataColumn")
ctx, span := trace.StartSpan(s.ctx, "p2p.BroadcastDataColumnSidecar")
defer span.End()
// Ensure the data column sidecar is not nil.
@@ -330,12 +330,12 @@ func (s *Service) BroadcastDataColumn(
}
// Non-blocking broadcast, with attempts to discover a column subnet peer if none available.
go s.internalBroadcastDataColumn(ctx, root, dataColumnSubnet, dataColumnSidecar, forkDigest)
go s.internalBroadcastDataColumnSidecar(ctx, root, dataColumnSubnet, dataColumnSidecar, forkDigest)
return nil
}
func (s *Service) internalBroadcastDataColumn(
func (s *Service) internalBroadcastDataColumnSidecar(
ctx context.Context,
root [fieldparams.RootLength]byte,
columnSubnet uint64,
@@ -343,7 +343,7 @@ func (s *Service) internalBroadcastDataColumn(
forkDigest [fieldparams.VersionLength]byte,
) {
// Add tracing to the function.
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastDataColumn")
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastDataColumnSidecar")
defer span.End()
// Increase the number of broadcast attempts.

View File

@@ -716,7 +716,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
// Attempt to broadcast nil object should fail.
var emptyRoot [fieldparams.RootLength]byte
err = service.BroadcastDataColumn(emptyRoot, subnet, nil)
err = service.BroadcastDataColumnSidecar(emptyRoot, subnet, nil)
require.ErrorContains(t, "attempted to broadcast nil", err)
// Subscribe to the topic.
@@ -727,7 +727,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
time.Sleep(50 * time.Millisecond)
// Broadcast to peers and wait.
err = service.BroadcastDataColumn(emptyRoot, subnet, sidecar)
err = service.BroadcastDataColumnSidecar(emptyRoot, subnet, sidecar)
require.NoError(t, err)
// Receive the message.

View File

@@ -443,20 +443,27 @@ func (s *Service) findPeers(ctx context.Context, missingPeerCount uint) ([]*enod
return peersToDial, ctx.Err()
}
// Skip peer not matching the filter.
node := iterator.Node()
if !s.filterPeer(node) {
continue
}
// Remove duplicates, keeping the node with higher seq.
existing, ok := nodeByNodeID[node.ID()]
if ok && existing.Seq() > node.Seq() {
if ok && existing.Seq() >= node.Seq() {
continue // keep existing and skip.
}
// A node already present in nodeByNodeID that reappears with a higher seq is treated as a new peer.
// Skip peer not matching the filter.
if !s.filterPeer(node) {
if ok {
// this means the existing peer with the lower sequence number is no longer valid
delete(nodeByNodeID, existing.ID())
missingPeerCount++
}
continue
}
nodeByNodeID[node.ID()] = node
// We found a new peer. Decrease the missing peer count.
nodeByNodeID[node.ID()] = node
missingPeerCount--
}

View File

@@ -1,9 +1,11 @@
package p2p
import (
"bytes"
"context"
"crypto/ecdsa"
"crypto/rand"
"crypto/sha256"
"fmt"
mathRand "math/rand"
"net"
@@ -58,6 +60,81 @@ func createAddrAndPrivKey(t *testing.T) (net.IP, *ecdsa.PrivateKey) {
return ipAddr, pkey
}
// createTestNodeWithID creates a LocalNode for testing with deterministic private key
// This is needed for deduplication tests where we need the same node ID across different sequence numbers
func createTestNodeWithID(t *testing.T, id string) *enode.LocalNode {
// Create a deterministic reader based on the ID for consistent key generation
h := sha256.New()
h.Write([]byte(id))
seedBytes := h.Sum(nil)
// Create a deterministic reader using the seed
deterministicReader := bytes.NewReader(seedBytes)
// Generate the private key using the same approach as the production code
privKey, _, err := crypto.GenerateSecp256k1Key(deterministicReader)
require.NoError(t, err)
// Convert to ECDSA private key for enode usage
ecdsaPrivKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(privKey)
require.NoError(t, err)
db, err := enode.OpenDB("")
require.NoError(t, err)
t.Cleanup(func() { db.Close() })
localNode := enode.NewLocalNode(db, ecdsaPrivKey)
// Set basic properties
localNode.SetStaticIP(net.ParseIP("127.0.0.1"))
localNode.Set(enr.TCP(3000))
localNode.Set(enr.UDP(3000))
localNode.Set(enr.WithEntry(eth2EnrKey, make([]byte, 16)))
return localNode
}
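// Hypothetical usage sketch (not part of the tests): the same seed string always yields the
// same secp256k1 key, and therefore the same enode ID, which is the property the
// deduplication tests below rely on.
//
//	a := createTestNodeWithID(t, "node1")
//	b := createTestNodeWithID(t, "node1")
//	require.Equal(t, a.Node().ID(), b.Node().ID()) // identical IDs, independent records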
// createTestNodeRandom creates a LocalNode for testing using the existing createAddrAndPrivKey function
func createTestNodeRandom(t *testing.T) *enode.LocalNode {
_, privKey := createAddrAndPrivKey(t)
db, err := enode.OpenDB("")
require.NoError(t, err)
t.Cleanup(func() { db.Close() })
localNode := enode.NewLocalNode(db, privKey)
// Set basic properties
localNode.SetStaticIP(net.ParseIP("127.0.0.1"))
localNode.Set(enr.TCP(3000))
localNode.Set(enr.UDP(3000))
localNode.Set(enr.WithEntry(eth2EnrKey, make([]byte, 16)))
return localNode
}
// setNodeSeq updates a LocalNode to have the specified sequence number
func setNodeSeq(localNode *enode.LocalNode, seq uint64) {
// Force the sequence number: each record update increments the seq (which starts at 1),
// so keep setting dummy entries until the target value is reached.
currentSeq := localNode.Node().Seq()
for currentSeq < seq {
localNode.Set(enr.WithEntry("dummy", currentSeq))
currentSeq++
}
}
// setNodeSubnets sets the attestation subnets for a LocalNode
func setNodeSubnets(localNode *enode.LocalNode, attSubnets []uint64) {
if len(attSubnets) > 0 {
bitV := bitfield.NewBitvector64()
for _, subnet := range attSubnets {
bitV.SetBitAt(subnet, true)
}
localNode.Set(enr.WithEntry(attSubnetEnrKey, &bitV))
}
}
func TestCreateListener(t *testing.T) {
port := 1024
ipAddr, pkey := createAddrAndPrivKey(t)
@@ -241,7 +318,7 @@ func TestCreateLocalNode(t *testing.T) {
// Check fork is set.
fork := new([]byte)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(eth2ENRKey, fork)))
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(eth2EnrKey, fork)))
require.NotEmpty(t, *fork)
// Check att subnets.
@@ -492,7 +569,7 @@ func TestMultipleDiscoveryAddresses(t *testing.T) {
node := enode.NewLocalNode(db, key)
node.Set(enr.IPv4{127, 0, 0, 1})
node.Set(enr.IPv6{0x20, 0x01, 0x48, 0x60, 0, 0, 0x20, 0x01, 0, 0, 0, 0, 0, 0, 0x00, 0x68})
s := &Service{dv5Listener: mockListener{localNode: node}}
s := &Service{dv5Listener: testp2p.NewMockListener(node, nil)}
multiAddresses, err := s.DiscoveryAddresses()
require.NoError(t, err)
@@ -517,7 +594,7 @@ func TestDiscoveryV5_SeqNumber(t *testing.T) {
node := enode.NewLocalNode(db, key)
node.Set(enr.IPv4{127, 0, 0, 1})
currentSeq := node.Seq()
s := &Service{dv5Listener: mockListener{localNode: node}}
s := &Service{dv5Listener: testp2p.NewMockListener(node, nil)}
_, err = s.DiscoveryAddresses()
require.NoError(t, err)
newSeq := node.Seq()
@@ -529,7 +606,7 @@ func TestDiscoveryV5_SeqNumber(t *testing.T) {
nodeTwo.Set(enr.IPv6{0x20, 0x01, 0x48, 0x60, 0, 0, 0x20, 0x01, 0, 0, 0, 0, 0, 0, 0x00, 0x68})
seqTwo := nodeTwo.Seq()
assert.NotEqual(t, seqTwo, newSeq)
sTwo := &Service{dv5Listener: mockListener{localNode: nodeTwo}}
sTwo := &Service{dv5Listener: testp2p.NewMockListener(nodeTwo, nil)}
_, err = sTwo.DiscoveryAddresses()
require.NoError(t, err)
assert.Equal(t, seqTwo+1, nodeTwo.Seq())
@@ -886,3 +963,291 @@ func TestRefreshPersistentSubnets(t *testing.T) {
// Reset the config.
params.OverrideBeaconConfig(defaultCfg)
}
func TestFindPeers_NodeDeduplication(t *testing.T) {
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
// Create LocalNodes and manipulate sequence numbers
localNode1 := createTestNodeWithID(t, "node1")
localNode2 := createTestNodeWithID(t, "node2")
localNode3 := createTestNodeWithID(t, "node3")
// Create different sequence versions of node1
setNodeSeq(localNode1, 1)
node1_seq1 := localNode1.Node()
setNodeSeq(localNode1, 2)
node1_seq2 := localNode1.Node() // Same ID, higher seq
setNodeSeq(localNode1, 3)
node1_seq3 := localNode1.Node() // Same ID, even higher seq
// Other nodes with seq 1
node2_seq1 := localNode2.Node()
node3_seq1 := localNode3.Node()
tests := []struct {
name string
nodes []*enode.Node
missingPeers uint
expectedCount int
description string
eval func(t *testing.T, result []*enode.Node)
}{
{
name: "No duplicates - all unique nodes",
nodes: []*enode.Node{
node2_seq1,
node3_seq1,
},
missingPeers: 2,
expectedCount: 2,
description: "Should return all unique nodes without deduplication",
eval: nil, // No special validation needed
},
{
name: "Duplicate with lower seq comes first - should replace",
nodes: []*enode.Node{
node1_seq1,
node1_seq2, // Higher seq, should replace
node2_seq1, // Different node added after duplicates are processed
},
missingPeers: 2, // Need 2 peers so we process all nodes
expectedCount: 2, // Should get node1 (with higher seq) and node2
description: "Should keep node with higher sequence number when duplicate found",
eval: func(t *testing.T, result []*enode.Node) {
// Should have node2 and node1 with higher seq (node1_seq2)
foundNode1WithHigherSeq := false
for _, node := range result {
if node.ID() == node1_seq2.ID() {
require.Equal(t, node1_seq2.Seq(), node.Seq(), "Node1 should have higher seq")
foundNode1WithHigherSeq = true
}
}
require.Equal(t, true, foundNode1WithHigherSeq, "Should have node1 with higher seq")
},
},
{
name: "Duplicate with higher seq comes first - should keep existing",
nodes: []*enode.Node{
node1_seq3, // Higher seq
node1_seq2, // Lower seq, should be skipped (continue branch)
node1_seq1, // Even lower seq, should also be skipped (continue branch)
node2_seq1, // Different node added after duplicates are processed
},
missingPeers: 2,
expectedCount: 2,
description: "Should keep existing node when it has higher sequence number and skip all lower seq duplicates",
eval: func(t *testing.T, result []*enode.Node) {
// Should have kept the node with highest seq (node1_seq3)
foundNode1WithHigherSeq := false
for _, node := range result {
if node.ID() == node1_seq3.ID() {
require.Equal(t, node1_seq3.Seq(), node.Seq(), "Node1 should have highest seq")
foundNode1WithHigherSeq = true
}
}
require.Equal(t, true, foundNode1WithHigherSeq, "Should have node1 with highest seq")
},
},
{
name: "Multiple duplicates with increasing seq",
nodes: []*enode.Node{
node1_seq1,
node1_seq2, // Should replace seq1
node1_seq3, // Should replace seq2
node2_seq1, // Different node added after duplicates are processed
},
missingPeers: 2,
expectedCount: 2,
description: "Should keep updating to highest sequence number",
eval: func(t *testing.T, result []*enode.Node) {
// Should have the node with highest seq (node1_seq3)
foundNode1WithHigherSeq := false
for _, node := range result {
if node.ID() == node1_seq3.ID() {
require.Equal(t, node1_seq3.Seq(), node.Seq(), "Node1 should have highest seq")
foundNode1WithHigherSeq = true
}
}
require.Equal(t, true, foundNode1WithHigherSeq, "Should have node1 with highest seq")
},
},
{
name: "Duplicate with equal seq comes after - should skip",
nodes: []*enode.Node{
node1_seq2, // First occurrence
node1_seq2, // Same exact node instance, should be skipped (continue branch for >= case)
node2_seq1, // Different node
},
missingPeers: 2,
expectedCount: 2,
description: "Should skip duplicate with equal sequence number",
eval: func(t *testing.T, result []*enode.Node) {
// Should have exactly one instance of node1_seq2 and one instance of node2_seq1
foundNode1 := false
foundNode2 := false
for _, node := range result {
if node.ID() == node1_seq2.ID() {
require.Equal(t, node1_seq2.Seq(), node.Seq(), "Node1 should have the expected seq")
require.Equal(t, false, foundNode1, "Should have only one instance of node1") // Ensure no duplicates
foundNode1 = true
}
if node.ID() == node2_seq1.ID() {
foundNode2 = true
}
}
require.Equal(t, true, foundNode1, "Should have node1")
require.Equal(t, true, foundNode2, "Should have node2")
},
},
{
name: "Mix of unique and duplicate nodes",
nodes: []*enode.Node{
node1_seq1,
node2_seq1,
node1_seq2, // Should replace node1_seq1
node3_seq1,
node1_seq3, // Should replace node1_seq2
},
missingPeers: 3,
expectedCount: 3,
description: "Should handle mix of unique nodes and duplicates correctly",
eval: nil, // Basic count validation is sufficient
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fakePeer := testp2p.NewTestP2P(t)
s := &Service{
cfg: &Config{
MaxPeers: 30,
},
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
localNode := createTestNodeRandom(t)
mockIter := testp2p.NewMockIterator(tt.nodes)
s.dv5Listener = testp2p.NewMockListener(localNode, mockIter)
ctxWithTimeout, cancel := context.WithTimeout(ctx, 1*time.Second)
defer cancel()
result, err := s.findPeers(ctxWithTimeout, tt.missingPeers)
require.NoError(t, err, tt.description)
require.Equal(t, tt.expectedCount, len(result), tt.description)
if tt.eval != nil {
tt.eval(t, result)
}
})
}
}
// callbackIterator allows us to execute callbacks at specific points during iteration
type callbackIterator struct {
nodes []*enode.Node
index int
callbacks map[int]func() // map from index to callback function
}
func (c *callbackIterator) Next() bool {
// Execute callback before checking if we can continue (if one exists)
if callback, exists := c.callbacks[c.index]; exists {
callback()
}
return c.index < len(c.nodes)
}
func (c *callbackIterator) Node() *enode.Node {
if c.index >= len(c.nodes) {
return nil
}
node := c.nodes[c.index]
c.index++
return node
}
func (c *callbackIterator) Close() {
// Nothing to clean up for this simple implementation
}
func TestFindPeers_received_bad_existing_node(t *testing.T) {
// This test successfully triggers delete(nodeByNodeID, node.ID()) in subnets.go by:
// 1. Processing node1_seq1 first (passes filterPeer, gets added to the map).
// 2. Marking the peer as bad via the callback before processing node1_seq2.
// 3. Processing node1_seq2 (fails filterPeer, triggers the delete since ok=true).
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
// Create LocalNode with same ID but different sequences
localNode1 := createTestNodeWithID(t, "testnode")
node1_seq1 := localNode1.Node() // Get current node
currentSeq := node1_seq1.Seq()
setNodeSeq(localNode1, currentSeq+1) // Increment sequence by 1
node1_seq2 := localNode1.Node() // This should have higher seq
// Additional node to ensure we have enough peers to process
localNode2 := createTestNodeWithID(t, "othernode")
node2 := localNode2.Node()
fakePeer := testp2p.NewTestP2P(t)
service := &Service{
cfg: &Config{
MaxPeers: 30,
},
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
// Create iterator with callback that marks peer as bad before processing node1_seq2
iter := &callbackIterator{
nodes: []*enode.Node{node1_seq1, node1_seq2, node2},
index: 0,
callbacks: map[int]func(){
1: func() { // Before processing node1_seq2 (index 1)
// Mark peer as bad before processing node1_seq2
peerData, _, _ := convertToAddrInfo(node1_seq2)
if peerData != nil {
service.peers.Add(node1_seq2.Record(), peerData.ID, nil, network.DirUnknown)
// Mark as bad peer - need enough increments to exceed threshold (6)
for i := 0; i < 10; i++ {
service.peers.Scorers().BadResponsesScorer().Increment(peerData.ID)
}
}
},
},
}
localNode := createTestNodeRandom(t)
service.dv5Listener = testp2p.NewMockListener(localNode, iter)
// Run findPeers - node1_seq1 gets processed first, then callback marks peer bad, then node1_seq2 fails
ctxWithTimeout, cancel := context.WithTimeout(ctx, 1*time.Second)
defer cancel()
result, err := service.findPeers(ctxWithTimeout, 3)
require.NoError(t, err)
require.Equal(t, 1, len(result))
}

View File

@@ -13,10 +13,17 @@ import (
"github.com/sirupsen/logrus"
)
var errEth2ENRDigestMismatch = errors.New("fork digest of peer does not match local value")
var (
errForkScheduleMismatch = errors.New("peer fork schedule incompatible")
errCurrentDigestMismatch = errors.Wrap(errForkScheduleMismatch, "current_fork_digest mismatch")
errNextVersionMismatch = errors.Wrap(errForkScheduleMismatch, "next_fork_version mismatch")
errNextDigestMismatch = errors.Wrap(errForkScheduleMismatch, "nfd (next fork digest) mismatch")
)
// ENR key used for Ethereum consensus-related fork data.
var eth2ENRKey = params.BeaconNetworkConfig().ETH2Key
const (
eth2EnrKey = "eth2" // The `eth2` ENR entry advertizes the node's view of the fork schedule with an ssz-encoded ENRForkID value.
nfdEnrKey = "nfd" // The `nfd` ENR entry separately advertizes the "next fork digest" aspect of the fork schedule.
)
// ForkDigest returns the current fork digest of
// the node according to the local clock.
@@ -33,44 +40,86 @@ func (s *Service) currentForkDigest() ([4]byte, error) {
// Compares fork ENRs between an incoming peer's record and our node's
// local record values for current and next fork version/epoch.
func compareForkENR(self, peer *enr.Record) error {
peerForkENR, err := forkEntry(peer)
peerEntry, err := forkEntry(peer)
if err != nil {
return err
}
currentForkENR, err := forkEntry(self)
selfEntry, err := forkEntry(self)
if err != nil {
return err
}
enrString, err := SerializeENR(peer)
peerString, err := SerializeENR(peer)
if err != nil {
return err
}
// Clients SHOULD connect to peers with current_fork_digest, next_fork_version,
// and next_fork_epoch that match local values.
if !bytes.Equal(peerForkENR.CurrentForkDigest, currentForkENR.CurrentForkDigest) {
return errors.Wrapf(errEth2ENRDigestMismatch,
if !bytes.Equal(peerEntry.CurrentForkDigest, selfEntry.CurrentForkDigest) {
return errors.Wrapf(errCurrentDigestMismatch,
"fork digest of peer with ENR %s: %v, does not match local value: %v",
enrString,
peerForkENR.CurrentForkDigest,
currentForkENR.CurrentForkDigest,
peerString,
peerEntry.CurrentForkDigest,
selfEntry.CurrentForkDigest,
)
}
// Clients MAY connect to peers with the same current_fork_version but a
// different next_fork_version/next_fork_epoch. Unless ENRForkID is manually
// updated to matching prior to the earlier next_fork_epoch of the two clients,
// these type of connecting clients will be unable to successfully interact
// starting at the earlier next_fork_epoch.
if peerForkENR.NextForkEpoch != currentForkENR.NextForkEpoch {
if peerEntry.NextForkEpoch != selfEntry.NextForkEpoch {
log.WithFields(logrus.Fields{
"peerNextForkEpoch": peerForkENR.NextForkEpoch,
"peerENR": enrString,
"peerNextForkEpoch": peerEntry.NextForkEpoch,
"peerNextForkVersion": peerEntry.NextForkVersion,
"peerENR": peerString,
}).Trace("Peer matches fork digest but has different next fork epoch")
// We allow the connection even though we have a different view of the next fork epoch. This
// could be due to peers that have not yet upgraded ahead of a fork or BPO schedule change, so
// we allow the connection to continue until the fork boundary.
return nil
}
if !bytes.Equal(peerForkENR.NextForkVersion, currentForkENR.NextForkVersion) {
log.WithFields(logrus.Fields{
"peerNextForkVersion": peerForkENR.NextForkVersion,
"peerENR": enrString,
}).Trace("Peer matches fork digest but has different next fork version")
// Since we agree on the next fork epoch, we require next fork version to also be in agreement.
if !bytes.Equal(peerEntry.NextForkVersion, selfEntry.NextForkVersion) {
return errors.Wrapf(errNextVersionMismatch,
"next fork version of peer with ENR %s: %#x, does not match local value: %#x",
peerString, peerEntry.NextForkVersion, selfEntry.NextForkVersion)
}
// Fulu adds the following to the spec:
// ---
// A new entry is added to the ENR under the key nfd, short for next fork digest. This entry
// communicates the digest of the next scheduled fork, regardless of whether it is a regular
// or a Blob-Parameters-Only fork. This new entry MUST be added once FULU_FORK_EPOCH is assigned
// any value other than FAR_FUTURE_EPOCH. Adding this entry prior to the Fulu fork will not
// impact peering as nodes will ignore unknown ENR entries and nfd mismatches do not cause
// disconnects.
// When discovering and interfacing with peers, nodes MUST evaluate nfd alongside their existing
// consideration of the ENRForkID::next_* fields under the eth2 key, to form a more accurate
// view of the peer's intended next fork for the purposes of sustained peering. If there is a
// mismatch, the node MUST NOT disconnect before the fork boundary, but it MAY disconnect
// at/after the fork boundary.
// Nodes unprepared to follow the Fulu fork will be unaware of nfd entries. However, their
// existing comparison of eth2 entries (concretely next_fork_epoch) is sufficient to detect
// upcoming divergence.
// ---
// Because this is a new inbound connection, we lean into the pre-Fulu point that clients
// MAY connect to peers with the same current_fork_version but a different
// next_fork_version/next_fork_epoch, which implies we can choose not to connect to them when these
// don't match.
//
// Given that the next_fork_epoch matches, we will require the next_fork_digest to match.
if !params.FuluEnabled() {
return nil
}
peerNFD, selfNFD := nfd(peer), nfd(self)
if peerNFD != selfNFD {
return errors.Wrapf(errNextDigestMismatch,
"next fork digest of peer with ENR %s: %v, does not match local value: %v",
peerString, peerNFD, selfNFD)
}
return nil
}
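// Summary of the outcomes above (illustrative recap, mirroring the checks in compareForkENR):
//
//	current_fork_digest mismatch                       -> errCurrentDigestMismatch
//	next_fork_epoch differs                            -> connection allowed (trace log only)
//	next_fork_epoch matches, next_fork_version differs -> errNextVersionMismatch
//	Fulu enabled and nfd (next fork digest) differs    -> errNextDigestMismatch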
@@ -102,7 +151,7 @@ func updateENR(node *enode.LocalNode, entry, next params.NetworkScheduleEntry) e
if err != nil {
return err
}
forkEntry := enr.WithEntry(eth2ENRKey, enc)
forkEntry := enr.WithEntry(eth2EnrKey, enc)
node.Set(forkEntry)
return nil
}
@@ -111,7 +160,7 @@ func updateENR(node *enode.LocalNode, entry, next params.NetworkScheduleEntry) e
// under the Ethereum consensus EnrKey
func forkEntry(record *enr.Record) (*pb.ENRForkID, error) {
sszEncodedForkEntry := make([]byte, 16)
entry := enr.WithEntry(eth2ENRKey, &sszEncodedForkEntry)
entry := enr.WithEntry(eth2EnrKey, &sszEncodedForkEntry)
err := record.Load(entry)
if err != nil {
return nil, err
@@ -122,3 +171,15 @@ func forkEntry(record *enr.Record) (*pb.ENRForkID, error) {
}
return forkEntry, nil
}
// nfd retrieves the value of the `nfd` ("next fork digest") key from an ENR record.
func nfd(record *enr.Record) [4]byte {
digest := [4]byte{}
entry := enr.WithEntry(nfdEnrKey, &digest)
if err := record.Load(entry); err != nil {
// Treat a missing nfd entry as an empty digest.
// We do this to avoid errors when checking peers that have not upgraded for fulu.
return [4]byte{}
}
return digest
}
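// Illustrative read of a peer's advertised next fork digest (record would be the *enr.Record
// of a discovered node; the variable name is hypothetical):
//
//	digest := nfd(record)
//	if digest == ([4]byte{}) {
//		// The peer publishes no nfd entry, e.g. a client that has not upgraded for Fulu.
//	}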

View File

@@ -16,14 +16,12 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/sirupsen/logrus"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestCompareForkENR(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096
params.BeaconConfig().InitializeForkSchedule()
logrus.SetLevel(logrus.TraceLevel)
db, err := enode.OpenDB("")
assert.NoError(t, err)
@@ -61,10 +59,10 @@ func TestCompareForkENR(t *testing.T) {
require.NoError(t, updateENR(peer, currentCopy, next))
return peer.Node()
},
expectErr: errEth2ENRDigestMismatch,
expectErr: errCurrentDigestMismatch,
},
{
name: "next fork version mismatch",
name: "next_fork_epoch match, next_fork_version mismatch",
node: func(t *testing.T) *enode.Node {
// Create a peer with the same current fork digest and next fork version/epoch.
peer := enode.NewLocalNode(db, k)
@@ -75,25 +73,44 @@ func TestCompareForkENR(t *testing.T) {
require.NoError(t, updateENR(peer, current, nextCopy))
return peer.Node()
},
expectLog: "Peer matches fork digest but has different next fork version",
expectErr: errNextVersionMismatch,
},
{
name: "next fork epoch mismatch",
name: "next fork epoch mismatch, next fork digest mismatch",
node: func(t *testing.T) *enode.Node {
// Create a peer with the same current fork digest and next fork version/epoch.
peer := enode.NewLocalNode(db, k)
nextCopy := next
// next epoch does not match, and neither does the next fork digest.
nextCopy.Epoch = nextCopy.Epoch + 1
nfd := [4]byte{0xFF, 0xFF, 0xFF, 0xFF}
require.NotEqual(t, next.ForkDigest, nfd)
//peer.Set(enr.WithEntry(nfdEnrKey, nfd[:]))
nextCopy.ForkDigest = nfd
require.NoError(t, updateENR(peer, current, nextCopy))
return peer.Node()
},
expectLog: "Peer matches fork digest but has different next fork epoch",
// no error because we allow a different next fork version / digest if the next fork epoch does not match
},
{
name: "next fork epoch -match-, next fork digest mismatch",
node: func(t *testing.T) *enode.Node {
peer := enode.NewLocalNode(db, k)
nextCopy := next
nfd := [4]byte{0xFF, 0xFF, 0xFF, 0xFF}
// next epoch *does match*, but the next fork digest doesn't - so we should get an error.
require.NotEqual(t, next.ForkDigest, nfd)
nextCopy.ForkDigest = nfd
//peer.Set(enr.WithEntry(nfdEnrKey, nfd[:]))
require.NoError(t, updateENR(peer, current, nextCopy))
return peer.Node()
},
expectErr: errNextDigestMismatch,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
hook := logTest.NewGlobal()
peer := c.node(t)
err := compareForkENR(self.Node().Record(), peer.Record())
if c.expectErr != nil {
@@ -101,13 +118,27 @@ func TestCompareForkENR(t *testing.T) {
} else {
require.NoError(t, err, "Expected no error comparing fork ENRs")
}
if c.expectLog != "" {
require.LogsContain(t, hook, c.expectLog, "Expected log message not found")
}
})
}
}
func TestNfdSetAndLoad(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096
params.BeaconConfig().InitializeForkSchedule()
db, err := enode.OpenDB("")
assert.NoError(t, err)
_, k := createAddrAndPrivKey(t)
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
current := params.GetNetworkScheduleEntry(clock.CurrentEpoch())
next := params.NextNetworkScheduleEntry(clock.CurrentEpoch())
next.ForkDigest = [4]byte{0xFF, 0xFF, 0xFF, 0xFF} // Ensure a unique digest for testing.
self := enode.NewLocalNode(db, k)
require.NoError(t, updateENR(self, current, next))
n := nfd(self.Node().Record())
assert.Equal(t, next.ForkDigest, n, "Expected nfd to match next fork digest")
}
func TestDiscv5_AddRetrieveForkEntryENR(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
@@ -122,7 +153,7 @@ func TestDiscv5_AddRetrieveForkEntryENR(t *testing.T) {
}
enc, err := enrForkID.MarshalSSZ()
require.NoError(t, err)
entry := enr.WithEntry(eth2ENRKey, enc)
entry := enr.WithEntry(eth2EnrKey, enc)
temp := t.TempDir()
randNum := rand.Int()
tempPath := path.Join(temp, strconv.Itoa(randNum))

View File

@@ -51,7 +51,7 @@ type (
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error
BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error
BroadcastDataColumn(root [fieldparams.RootLength]byte, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error
BroadcastDataColumnSidecar(root [fieldparams.RootLength]byte, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.

View File

@@ -42,7 +42,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, 10.0, scorer.Score("peer1"), "Unexpected score")
assert.Equal(t, nil, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
assert.NoError(t, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
_, _, topicMap, err := scorer.GossipData("peer1")
assert.NoError(t, err)
assert.Equal(t, uint64(100), topicMap["a"].TimeInMesh, "incorrect time in mesh")

View File

@@ -10,6 +10,7 @@ import (
pubsub "github.com/libp2p/go-libp2p-pubsub"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -40,49 +41,68 @@ func (s *Service) setAllForkDigests() {
}
}
var (
errNotReadyToSubscribe = fmt.Errorf("not ready to subscribe, service is not initialized")
errMissingLeadingSlash = fmt.Errorf("topic is missing leading slash")
errTopicMissingProtocolVersion = fmt.Errorf("topic is missing protocol version (eth2)")
errTopicPathWrongPartCount = fmt.Errorf("topic path has wrong part count")
errDigestInvalid = fmt.Errorf("digest is invalid")
errDigestUnexpected = fmt.Errorf("digest is unexpected")
errSnappySuffixMissing = fmt.Errorf("snappy suffix is missing")
errTopicNotFound = fmt.Errorf("topic not found in gossip topic mappings")
)
// CanSubscribe returns true if the topic is of interest and we could subscribe to it.
func (s *Service) CanSubscribe(topic string) bool {
if !s.isInitialized() {
if err := s.checkSubscribable(topic); err != nil {
if !errors.Is(err, errNotReadyToSubscribe) {
logrus.WithError(err).WithField("topic", topic).Debug("CanSubscribe failed")
}
return false
}
return true
}
func (s *Service) checkSubscribable(topic string) error {
if !s.isInitialized() {
return errNotReadyToSubscribe
}
parts := strings.Split(topic, "/")
if len(parts) != 5 {
return false
return errTopicPathWrongPartCount
}
// The topic must start with a slash, which means the first part will be empty.
if parts[0] != "" {
return false
return errMissingLeadingSlash
}
if parts[1] != "eth2" {
return false
protocol, rawDigest, suffix := parts[1], parts[2], parts[4]
if protocol != "eth2" {
return errTopicMissingProtocolVersion
}
if suffix != encoder.ProtocolSuffixSSZSnappy {
return errSnappySuffixMissing
}
var digest [4]byte
dl, err := hex.Decode(digest[:], []byte(parts[2]))
if err == nil && dl != 4 {
err = fmt.Errorf("expected 4 bytes, got %d", dl)
}
dl, err := hex.Decode(digest[:], []byte(rawDigest))
if err != nil {
log.WithError(err).WithField("topic", topic).WithField("digest", parts[2]).Error("CanSubscribe failed to parse message")
return false
return errors.Wrapf(errDigestInvalid, "%v", err)
}
if dl != 4 {
return errors.Wrapf(errDigestInvalid, "wrong byte length")
}
if _, ok := s.allForkDigests[digest]; !ok {
log.WithField("topic", topic).WithField("digest", fmt.Sprintf("%#x", digest)).Error("CanSubscribe failed to find digest in allForkDigests")
return false
}
if parts[4] != encoder.ProtocolSuffixSSZSnappy {
return false
return errDigestUnexpected
}
// Check the incoming topic matches any topic mapping. This includes a check for part[3].
for gt := range gossipTopicMappings {
if _, err := scanfcheck(strings.Join(parts[0:4], "/"), gt); err == nil {
return true
return nil
}
}
return false
return errTopicNotFound
}
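// Illustrative example of a topic that passes every check above (the digest value is made up;
// real digests come from the local fork schedule):
//
//	/eth2/6a95a1a9/beacon_block/ssz_snappy
//
//	parts[0]=""   parts[1]="eth2"   parts[2]="6a95a1a9" (4-byte hex digest)
//	parts[3]="beacon_block"   parts[4]="ssz_snappy"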
// FilterIncomingSubscriptions is invoked for all RPCs containing subscription notifications.
@@ -100,7 +120,22 @@ func (s *Service) FilterIncomingSubscriptions(peerID peer.ID, subs []*pubsubpb.R
return nil, pubsub.ErrTooManySubscriptions
}
return pubsub.FilterSubscriptions(subs, s.CanSubscribe), nil
return pubsub.FilterSubscriptions(subs, s.logCheckSubscribableError(peerID)), nil
}
func (s *Service) logCheckSubscribableError(pid peer.ID) func(string) bool {
return func(topic string) bool {
if err := s.checkSubscribable(topic); err != nil {
if !errors.Is(err, errNotReadyToSubscribe) {
log.WithError(err).WithFields(logrus.Fields{
"peerID": pid,
"topic": topic,
}).Debug("Peer subscription rejected")
}
return false
}
return true
}
}
// scanfcheck uses fmt.Sscanf to check that a given string matches expected format. This method

View File

@@ -169,7 +169,7 @@ var (
RPCDataColumnSidecarsByRangeTopicV1: new(pb.DataColumnSidecarsByRangeRequest),
// DataColumnSidecarsByRoot v1 Message
RPCDataColumnSidecarsByRootTopicV1: new(p2ptypes.DataColumnsByRootIdentifiers),
RPCDataColumnSidecarsByRootTopicV1: p2ptypes.DataColumnsByRootIdentifiers{},
}
// Maps all registered protocol prefixes.

View File

@@ -13,13 +13,13 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
testp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
@@ -30,48 +30,6 @@ import (
const testPingInterval = 100 * time.Millisecond
type mockListener struct {
localNode *enode.LocalNode
}
func (m mockListener) Self() *enode.Node {
return m.localNode.Node()
}
func (mockListener) Close() {
// no-op
}
func (mockListener) Lookup(enode.ID) []*enode.Node {
panic("implement me")
}
func (mockListener) ReadRandomNodes(_ []*enode.Node) int {
panic("implement me")
}
func (mockListener) Resolve(*enode.Node) *enode.Node {
panic("implement me")
}
func (mockListener) Ping(*enode.Node) error {
panic("implement me")
}
func (mockListener) RequestENR(*enode.Node) (*enode.Node, error) {
panic("implement me")
}
func (mockListener) LocalNode() *enode.LocalNode {
panic("implement me")
}
func (mockListener) RandomNodes() enode.Iterator {
panic("implement me")
}
func (mockListener) RebootListener() error { panic("implement me") }
func createHost(t *testing.T, port uint) (host.Host, *ecdsa.PrivateKey, net.IP) {
_, pkey := createAddrAndPrivKey(t)
ipAddr := net.ParseIP("127.0.0.1")
@@ -87,7 +45,7 @@ func TestService_Stop_SetsStartedToFalse(t *testing.T) {
s, err := NewService(t.Context(), &Config{StateNotifier: &mock.MockStateNotifier{}, DB: testDB.SetupDB(t)})
require.NoError(t, err)
s.started = true
s.dv5Listener = &mockListener{}
s.dv5Listener = testp2p.NewMockListener(nil, nil)
assert.NoError(t, s.Stop())
assert.Equal(t, false, s.started)
}
@@ -113,7 +71,7 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
}
s, err := NewService(t.Context(), cfg)
require.NoError(t, err)
s.dv5Listener = &mockListener{}
s.dv5Listener = testp2p.NewMockListener(nil, nil)
s.custodyInfo = &custodyInfo{}
exitRoutine := make(chan bool)
go func() {
@@ -133,14 +91,14 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
func TestService_Status_NotRunning(t *testing.T) {
params.SetupTestConfigCleanup(t)
s := &Service{started: false}
s.dv5Listener = &mockListener{}
s.dv5Listener = testp2p.NewMockListener(nil, nil)
assert.ErrorContains(t, "not running", s.Status(), "Status returned wrong error")
}
func TestService_Status_NoGenesisTimeSet(t *testing.T) {
params.SetupTestConfigCleanup(t)
s := &Service{started: true}
s.dv5Listener = &mockListener{}
s.dv5Listener = testp2p.NewMockListener(nil, nil)
assert.ErrorContains(t, "no genesis time set", s.Status(), "Status returned wrong error")
s.genesisTime = time.Now()

View File

@@ -27,8 +27,6 @@ import (
"github.com/prysmaticlabs/go-bitfield"
)
const nfdEnrKey = "nfd" // The ENR record key for "nfd" (Next Fork Digest).
var (
attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
@@ -136,6 +134,24 @@ func (s *Service) FindAndDialPeersWithSubnets(
return nil
}
// updateDefectiveSubnets updates the defective subnets map when a node with matching subnets is found.
// It decrements the defective count for each subnet the node satisfies and removes subnets
// that are fully satisfied (count reaches 0).
func updateDefectiveSubnets(
nodeSubnets map[uint64]bool,
defectiveSubnets map[uint64]int,
) {
for subnet := range defectiveSubnets {
if !nodeSubnets[subnet] {
continue
}
defectiveSubnets[subnet]--
if defectiveSubnets[subnet] == 0 {
delete(defectiveSubnets, subnet)
}
}
}
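// Illustrative walk-through with hypothetical values: a discovered node subscribed to subnets
// 1 and 3 reduces the outstanding demand; subnet 3 becomes fully satisfied and is dropped from
// the map, while subnet 1 still needs one more peer.
//
//	defective := map[uint64]int{1: 2, 3: 1}
//	updateDefectiveSubnets(map[uint64]bool{1: true, 3: true}, defective)
//	// defective == map[uint64]int{1: 1}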
// findPeersWithSubnets finds peers subscribed to defective subnets in batches
// until enough peers are found or the context is canceled.
// It returns new peers found during the search.
@@ -171,6 +187,7 @@ func (s *Service) findPeersWithSubnets(
// Crawl the network for peers subscribed to the defective subnets.
nodeByNodeID := make(map[enode.ID]*enode.Node)
for len(defectiveSubnets) > 0 && iterator.Next() {
if err := ctx.Err(); err != nil {
// Convert the map to a slice.
@@ -182,14 +199,28 @@ func (s *Service) findPeersWithSubnets(
return peersToDial, err
}
// Get all needed subnets that the node is subscribed to.
// Skip nodes that are not subscribed to any of the defective subnets.
node := iterator.Node()
// Remove duplicates, keeping the node with higher seq.
existing, ok := nodeByNodeID[node.ID()]
if ok && existing.Seq() >= node.Seq() {
continue // keep existing and skip.
}
// A node already present in nodeByNodeID that reappears with a higher seq is treated as a new peer.
// Skip peer not matching the filter.
if !s.filterPeer(node) {
if ok {
// this means the existing peer with the lower sequence number is no longer valid
delete(nodeByNodeID, existing.ID())
// Note: We choose not to roll back changes to the defective subnets map, in favor of calling s.defectiveSubnets once again after dialing peers.
// This case should rarely happen and is handled through a second iteration in FindAndDialPeersWithSubnets.
}
continue
}
// Get all needed subnets that the node is subscribed to.
// Skip nodes that are not subscribed to any of the defective subnets.
nodeSubnets, err := filter(node)
if err != nil {
return nil, errors.Wrap(err, "filter node")
@@ -198,30 +229,14 @@ func (s *Service) findPeersWithSubnets(
continue
}
// Remove duplicates, keeping the node with higher seq.
existing, ok := nodeByNodeID[node.ID()]
if ok && existing.Seq() > node.Seq() {
continue
}
nodeByNodeID[node.ID()] = node
// We found a new peer. Modify the defective subnets map
// and the filter accordingly.
for subnet := range defectiveSubnets {
if !nodeSubnets[subnet] {
continue
}
nodeByNodeID[node.ID()] = node
defectiveSubnets[subnet]--
if defectiveSubnets[subnet] == 0 {
delete(defectiveSubnets, subnet)
}
filter, err = s.nodeFilter(topicFormat, defectiveSubnets)
if err != nil {
return nil, errors.Wrap(err, "node filter")
}
updateDefectiveSubnets(nodeSubnets, defectiveSubnets)
filter, err = s.nodeFilter(topicFormat, defectiveSubnets)
if err != nil {
return nil, errors.Wrap(err, "node filter")
}
}

View File

@@ -10,14 +10,19 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
testp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
ecdsaprysm "github.com/OffchainLabs/prysm/v6/crypto/ecdsa"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
"github.com/prysmaticlabs/go-bitfield"
)
@@ -541,3 +546,552 @@ func TestInitializePersistentSubnets(t *testing.T) {
assert.Equal(t, 2, len(subs))
assert.Equal(t, true, expTime.After(time.Now()))
}
func TestFindPeersWithSubnets_NodeDeduplication(t *testing.T) {
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
localNode1 := createTestNodeWithID(t, "node1")
localNode2 := createTestNodeWithID(t, "node2")
localNode3 := createTestNodeWithID(t, "node3")
// Create different sequence versions of node1 with subnet 1
setNodeSubnets(localNode1, []uint64{1})
setNodeSeq(localNode1, 1)
node1_seq1_subnet1 := localNode1.Node()
setNodeSeq(localNode1, 2)
node1_seq2_subnet1 := localNode1.Node() // Same ID, higher seq
setNodeSeq(localNode1, 3)
node1_seq3_subnet1 := localNode1.Node() // Same ID, even higher seq
// Node2 with different sequences and subnets
setNodeSubnets(localNode2, []uint64{1})
node2_seq1_subnet1 := localNode2.Node()
setNodeSubnets(localNode2, []uint64{2}) // Different subnet
setNodeSeq(localNode2, 2)
node2_seq2_subnet2 := localNode2.Node()
// Node3 with multiple subnets
setNodeSubnets(localNode3, []uint64{1, 2})
node3_seq1_subnet1_2 := localNode3.Node()
tests := []struct {
name string
nodes []*enode.Node
defectiveSubnets map[uint64]int
expectedCount int
description string
eval func(t *testing.T, result []*enode.Node) // Custom validation function
}{
{
name: "No duplicates - unique nodes with same subnet",
nodes: []*enode.Node{
node2_seq1_subnet1,
node3_seq1_subnet1_2,
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 2,
description: "Should return all unique nodes subscribed to subnet",
eval: nil, // No special validation needed
},
{
name: "Duplicate with lower seq first - should replace",
nodes: []*enode.Node{
node1_seq1_subnet1,
node1_seq2_subnet1, // Higher seq, should replace
node2_seq1_subnet1, // Different node to ensure we process enough nodes
},
defectiveSubnets: map[uint64]int{1: 2}, // Need 2 peers for subnet 1
expectedCount: 2,
description: "Should replace with higher seq node for same subnet",
eval: func(t *testing.T, result []*enode.Node) {
found := false
for _, node := range result {
if node.ID() == node1_seq2_subnet1.ID() && node.Seq() == node1_seq2_subnet1.Seq() {
found = true
break
}
}
require.Equal(t, true, found, "Should have node with higher seq")
},
},
{
name: "Duplicate with higher seq first - should keep existing",
nodes: []*enode.Node{
node1_seq3_subnet1, // Higher seq
node1_seq2_subnet1, // Lower seq, should be skipped (continue branch)
node1_seq1_subnet1, // Even lower seq, should also be skipped (continue branch)
node2_seq1_subnet1, // Different node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 2,
description: "Should keep existing node with higher seq and skip lower seq duplicates",
eval: func(t *testing.T, result []*enode.Node) {
found := false
for _, node := range result {
if node.ID() == node1_seq3_subnet1.ID() && node.Seq() == node1_seq3_subnet1.Seq() {
found = true
break
}
}
require.Equal(t, true, found, "Should have node with highest seq")
},
},
{
name: "Multiple updates for same node",
nodes: []*enode.Node{
node1_seq1_subnet1,
node1_seq2_subnet1, // Should replace seq1
node1_seq3_subnet1, // Should replace seq2
node2_seq1_subnet1, // Different node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 2,
description: "Should keep updating to highest seq",
eval: func(t *testing.T, result []*enode.Node) {
found := false
for _, node := range result {
if node.ID() == node1_seq3_subnet1.ID() && node.Seq() == node1_seq3_subnet1.Seq() {
found = true
break
}
}
require.Equal(t, true, found, "Should have node with highest seq")
},
},
{
name: "Duplicate with equal seq in subnets - should skip",
nodes: []*enode.Node{
node1_seq2_subnet1, // First occurrence
node1_seq2_subnet1, // Same exact node instance, should be skipped (continue branch)
node2_seq1_subnet1, // Different node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 2,
description: "Should skip duplicate with equal sequence number in subnet search",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
node1Count := 0
for _, node := range result {
if node.ID() == node1_seq2_subnet1.ID() {
require.Equal(t, node1_seq2_subnet1.Seq(), node.Seq(), "Node1 should have expected seq")
foundNode1 = true
node1Count++
}
if node.ID() == node2_seq1_subnet1.ID() {
foundNode2 = true
}
}
require.Equal(t, true, foundNode1, "Should have node1")
require.Equal(t, true, foundNode2, "Should have node2")
require.Equal(t, 1, node1Count, "Should have exactly one instance of node1")
},
},
{
name: "Mix with different subnets",
nodes: []*enode.Node{
node2_seq1_subnet1,
node2_seq2_subnet2, // Higher seq but different subnet
node3_seq1_subnet1_2,
},
defectiveSubnets: map[uint64]int{1: 2, 2: 1},
expectedCount: 2, // node2 (latest) and node3
description: "Should handle nodes with different subnet subscriptions",
eval: nil, // Basic count validation is sufficient
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
fakePeer := testp2p.NewTestP2P(t)
s := &Service{
cfg: &Config{
MaxPeers: 30,
DB: db,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
localNode := createTestNodeRandom(t)
mockIter := testp2p.NewMockIterator(tt.nodes)
s.dv5Listener = testp2p.NewMockListener(localNode, mockIter)
digest, err := s.currentForkDigest()
require.NoError(t, err)
ctxWithTimeout, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
defer cancel()
result, err := s.findPeersWithSubnets(
ctxWithTimeout,
AttestationSubnetTopicFormat,
digest,
1,
tt.defectiveSubnets,
)
require.NoError(t, err, tt.description)
require.Equal(t, tt.expectedCount, len(result), tt.description)
if tt.eval != nil {
tt.eval(t, result)
}
})
}
}
func TestFindPeersWithSubnets_FilterPeerRemoval(t *testing.T) {
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
localNode1 := createTestNodeWithID(t, "node1")
localNode2 := createTestNodeWithID(t, "node2")
localNode3 := createTestNodeWithID(t, "node3")
// Create versions of node1 with subnet 1
setNodeSubnets(localNode1, []uint64{1})
setNodeSeq(localNode1, 1)
node1_seq1_valid_subnet1 := localNode1.Node()
// Create bad version (higher seq)
setNodeSeq(localNode1, 2)
node1_seq2_bad_subnet1 := localNode1.Node()
// Create another valid version
setNodeSeq(localNode1, 3)
node1_seq3_valid_subnet1 := localNode1.Node()
// Node2 with subnet 1
setNodeSubnets(localNode2, []uint64{1})
node2_seq1_valid_subnet1 := localNode2.Node()
// Node3 with subnet 1 and 2
setNodeSubnets(localNode3, []uint64{1, 2})
node3_seq1_valid_subnet1_2 := localNode3.Node()
tests := []struct {
name string
nodes []*enode.Node
defectiveSubnets map[uint64]int
expectedCount int
description string
eval func(t *testing.T, result []*enode.Node)
}{
{
name: "Valid node in subnet followed by bad version - should remove",
nodes: []*enode.Node{
node1_seq1_valid_subnet1, // First add valid node with subnet 1
node1_seq2_bad_subnet1, // Invalid version with higher seq - should delete
node2_seq1_valid_subnet1, // Different valid node with subnet 1
},
defectiveSubnets: map[uint64]int{1: 2}, // Need 2 peers for subnet 1
expectedCount: 1, // Only node2 should remain
description: "Should remove node from map when bad version arrives, even if it has required subnet",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
for _, node := range result {
if node.ID() == node1_seq1_valid_subnet1.ID() {
foundNode1 = true
}
if node.ID() == node2_seq1_valid_subnet1.ID() {
foundNode2 = true
}
}
require.Equal(t, false, foundNode1, "Node1 should have been removed despite having subnet")
require.Equal(t, true, foundNode2, "Node2 should be present")
},
},
{
name: "Bad node with subnet stays bad even with higher seq",
nodes: []*enode.Node{
node1_seq2_bad_subnet1, // First bad node - not added
node1_seq3_valid_subnet1, // Higher seq but same bad peer ID
node2_seq1_valid_subnet1, // Different valid node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 1, // Only node2 (node1 remains bad)
description: "Bad peer with subnet remains bad even with higher seq",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
for _, node := range result {
if node.ID() == node1_seq3_valid_subnet1.ID() {
foundNode1 = true
}
if node.ID() == node2_seq1_valid_subnet1.ID() {
foundNode2 = true
}
}
require.Equal(t, false, foundNode1, "Node1 should remain bad despite having subnet")
require.Equal(t, true, foundNode2, "Node2 should be present")
},
},
{
name: "Mixed valid and bad nodes with subnets",
nodes: []*enode.Node{
node1_seq1_valid_subnet1, // Add valid node1 with subnet
node2_seq1_valid_subnet1, // Add valid node2 with subnet
node1_seq2_bad_subnet1, // Invalid update for node1 - should remove
node3_seq1_valid_subnet1_2, // Add valid node3 with multiple subnets
},
defectiveSubnets: map[uint64]int{1: 3}, // Need 3 peers for subnet 1
expectedCount: 2, // Only node2 and node3 should remain
description: "Should handle removal of nodes with subnets when they become bad",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
foundNode3 := false
for _, node := range result {
if node.ID() == node1_seq1_valid_subnet1.ID() {
foundNode1 = true
}
if node.ID() == node2_seq1_valid_subnet1.ID() {
foundNode2 = true
}
if node.ID() == node3_seq1_valid_subnet1_2.ID() {
foundNode3 = true
}
}
require.Equal(t, false, foundNode1, "Node1 should have been removed")
require.Equal(t, true, foundNode2, "Node2 should be present")
require.Equal(t, true, foundNode3, "Node3 should be present")
},
},
{
name: "Node with subnet marked bad stays bad for all sequences",
nodes: []*enode.Node{
node1_seq1_valid_subnet1, // Add valid node1 with subnet
node1_seq2_bad_subnet1, // Bad update - should remove and mark bad
node1_seq3_valid_subnet1, // Higher seq but still same bad peer ID
node2_seq1_valid_subnet1, // Different valid node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 1, // Only node2 (node1 stays bad)
description: "Once marked bad, subnet peer stays bad for all sequences",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
for _, node := range result {
if node.ID() == node1_seq3_valid_subnet1.ID() {
foundNode1 = true
}
if node.ID() == node2_seq1_valid_subnet1.ID() {
foundNode2 = true
}
}
require.Equal(t, false, foundNode1, "Node1 should stay bad")
require.Equal(t, true, foundNode2, "Node2 should be present")
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Initialize flags for subnet operations
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
// Create test P2P instance
fakePeer := testp2p.NewTestP2P(t)
// Create mock service
s := &Service{
cfg: &Config{
MaxPeers: 30,
DB: db,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
// Mark specific node versions as "bad" to simulate filterPeer failures
for _, node := range tt.nodes {
if node == node1_seq2_bad_subnet1 {
// Get peer ID from the node to mark it as bad
peerData, _, _ := convertToAddrInfo(node)
if peerData != nil {
s.peers.Add(node.Record(), peerData.ID, nil, network.DirUnknown)
// Mark as bad peer - this will make filterPeer return false
s.peers.Scorers().BadResponsesScorer().Increment(peerData.ID)
s.peers.Scorers().BadResponsesScorer().Increment(peerData.ID)
s.peers.Scorers().BadResponsesScorer().Increment(peerData.ID)
}
}
}
localNode := createTestNodeRandom(t)
mockIter := testp2p.NewMockIterator(tt.nodes)
s.dv5Listener = testp2p.NewMockListener(localNode, mockIter)
digest, err := s.currentForkDigest()
require.NoError(t, err)
ctxWithTimeout, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
defer cancel()
result, err := s.findPeersWithSubnets(
ctxWithTimeout,
AttestationSubnetTopicFormat,
digest,
1,
tt.defectiveSubnets,
)
require.NoError(t, err, tt.description)
require.Equal(t, tt.expectedCount, len(result), tt.description)
if tt.eval != nil {
tt.eval(t, result)
}
})
}
}
// callbackIteratorForSubnets allows us to execute callbacks at specific points during iteration
type callbackIteratorForSubnets struct {
nodes []*enode.Node
index int
callbacks map[int]func() // map from index to callback function
}
func (c *callbackIteratorForSubnets) Next() bool {
// Execute callback before checking if we can continue (if one exists)
if callback, exists := c.callbacks[c.index]; exists {
callback()
}
return c.index < len(c.nodes)
}
func (c *callbackIteratorForSubnets) Node() *enode.Node {
if c.index >= len(c.nodes) {
return nil
}
node := c.nodes[c.index]
c.index++
return node
}
func (c *callbackIteratorForSubnets) Close() {
// Nothing to clean up for this simple implementation
}
func TestFindPeersWithSubnets_received_bad_existing_node(t *testing.T) {
// This test successfully triggers delete(nodeByNodeID, node.ID()) in subnets.go by:
// 1. Processing node1_seq1 first (it passes filterPeer and gets added to the map).
// 2. Marking the peer as bad via the callback before node1_seq2 is processed.
// 3. Processing node1_seq2 (it fails filterPeer, triggering the delete since ok=true).
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
// Create LocalNode with same ID but different sequences
localNode1 := createTestNodeWithID(t, "testnode")
setNodeSubnets(localNode1, []uint64{1})
node1_seq1 := localNode1.Node() // Get current node
currentSeq := node1_seq1.Seq()
setNodeSeq(localNode1, currentSeq+1) // Increment sequence by 1
node1_seq2 := localNode1.Node() // This should have higher seq
// Additional node to ensure we have enough peers to process
localNode2 := createTestNodeWithID(t, "othernode")
setNodeSubnets(localNode2, []uint64{1})
node2 := localNode2.Node()
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
fakePeer := testp2p.NewTestP2P(t)
service := &Service{
cfg: &Config{
MaxPeers: 30,
DB: db,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
// Create iterator with callback that marks peer as bad before processing node1_seq2
iter := &callbackIteratorForSubnets{
nodes: []*enode.Node{node1_seq1, node1_seq2, node2},
index: 0,
callbacks: map[int]func(){
1: func() { // Before processing node1_seq2 (index 1)
// Mark peer as bad before processing node1_seq2
peerData, _, _ := convertToAddrInfo(node1_seq2)
if peerData != nil {
service.peers.Add(node1_seq2.Record(), peerData.ID, nil, network.DirUnknown)
// Mark as bad peer - need enough increments to exceed threshold (6)
for i := 0; i < 10; i++ {
service.peers.Scorers().BadResponsesScorer().Increment(peerData.ID)
}
}
},
},
}
localNode := createTestNodeRandom(t)
service.dv5Listener = testp2p.NewMockListener(localNode, iter)
digest, err := service.currentForkDigest()
require.NoError(t, err)
// Run findPeersWithSubnets - node1_seq1 gets processed first, then callback marks peer bad, then node1_seq2 fails
ctxWithTimeout, cancel := context.WithTimeout(ctx, 1*time.Second)
defer cancel()
result, err := service.findPeersWithSubnets(
ctxWithTimeout,
AttestationSubnetTopicFormat,
digest,
1,
map[uint64]int{1: 2}, // Need 2 peers for subnet 1
)
require.NoError(t, err)
require.Equal(t, 1, len(result))
require.Equal(t, localNode2.Node().ID(), result[0].ID()) // only node2 should remain
}
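The scenario exercised above reduces to sequence-aware bookkeeping over discovered ENRs: keep at most one record per node ID, prefer the highest sequence number, and evict an already-accepted node if its newer record fails the peer filter. The following is a minimal sketch of that bookkeeping, assuming a nodeByNodeID map and a filterPeer predicate as referenced by the test comments; the actual logic in subnets.go may differ in detail.

func trackDiscoveredNode(nodeByNodeID map[enode.ID]*enode.Node, node *enode.Node, filterPeer func(*enode.Node) bool) {
	existing, ok := nodeByNodeID[node.ID()]
	if ok && existing.Seq() >= node.Seq() {
		// An equal or newer record is already tracked for this node ID.
		return
	}
	if !filterPeer(node) {
		if ok {
			// A previously accepted node published a newer record but now fails the
			// filter (e.g. it was marked as a bad peer): drop the stale entry.
			delete(nodeByNodeID, node.ID())
		}
		return
	}
	nodeByNodeID[node.ID()] = node
}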

View File

@@ -7,6 +7,7 @@ go_library(
"fuzz_p2p.go",
"mock_broadcaster.go",
"mock_host.go",
"mock_listener.go",
"mock_metadataprovider.go",
"mock_peermanager.go",
"mock_peersprovider.go",

View File

@@ -167,8 +167,8 @@ func (*FakeP2P) BroadcastLightClientFinalityUpdate(_ context.Context, _ interfac
return nil
}
// BroadcastDataColumn -- fake.
func (*FakeP2P) BroadcastDataColumn(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
// BroadcastDataColumnSidecar -- fake.
func (*FakeP2P) BroadcastDataColumnSidecar(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
return nil
}

View File

@@ -62,8 +62,8 @@ func (m *MockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
return nil
}
// BroadcastDataColumn broadcasts a data column for mock.
func (m *MockBroadcaster) BroadcastDataColumn([fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
// BroadcastDataColumnSidecar broadcasts a data column for mock.
func (m *MockBroadcaster) BroadcastDataColumnSidecar([fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
m.BroadcastCalled.Store(true)
return nil
}

View File

@@ -0,0 +1,128 @@
package testing
import (
"github.com/ethereum/go-ethereum/p2p/enode"
)
// MockListener is a mock implementation of the Listener and ListenerRebooter interfaces
// that can be used in tests. It provides configurable behavior for all methods.
type MockListener struct {
LocalNodeFunc func() *enode.LocalNode
SelfFunc func() *enode.Node
RandomNodesFunc func() enode.Iterator
LookupFunc func(enode.ID) []*enode.Node
ResolveFunc func(*enode.Node) *enode.Node
PingFunc func(*enode.Node) error
RequestENRFunc func(*enode.Node) (*enode.Node, error)
RebootFunc func() error
CloseFunc func()
// Default implementations
localNode *enode.LocalNode
iterator enode.Iterator
}
// NewMockListener creates a new MockListener with default implementations
func NewMockListener(localNode *enode.LocalNode, iterator enode.Iterator) *MockListener {
return &MockListener{
localNode: localNode,
iterator: iterator,
}
}
func (m *MockListener) LocalNode() *enode.LocalNode {
if m.LocalNodeFunc != nil {
return m.LocalNodeFunc()
}
return m.localNode
}
func (m *MockListener) Self() *enode.Node {
if m.SelfFunc != nil {
return m.SelfFunc()
}
if m.localNode != nil {
return m.localNode.Node()
}
return nil
}
func (m *MockListener) RandomNodes() enode.Iterator {
if m.RandomNodesFunc != nil {
return m.RandomNodesFunc()
}
return m.iterator
}
func (m *MockListener) Lookup(id enode.ID) []*enode.Node {
if m.LookupFunc != nil {
return m.LookupFunc(id)
}
return nil
}
func (m *MockListener) Resolve(node *enode.Node) *enode.Node {
if m.ResolveFunc != nil {
return m.ResolveFunc(node)
}
return nil
}
func (m *MockListener) Ping(node *enode.Node) error {
if m.PingFunc != nil {
return m.PingFunc(node)
}
return nil
}
func (m *MockListener) RequestENR(node *enode.Node) (*enode.Node, error) {
if m.RequestENRFunc != nil {
return m.RequestENRFunc(node)
}
return nil, nil
}
func (m *MockListener) RebootListener() error {
if m.RebootFunc != nil {
return m.RebootFunc()
}
return nil
}
func (m *MockListener) Close() {
if m.CloseFunc != nil {
m.CloseFunc()
}
}
// MockIterator is a mock implementation of enode.Iterator for testing
type MockIterator struct {
Nodes []*enode.Node
Position int
Closed bool
}
func NewMockIterator(nodes []*enode.Node) *MockIterator {
return &MockIterator{
Nodes: nodes,
}
}
func (m *MockIterator) Next() bool {
if m.Closed || m.Position >= len(m.Nodes) {
return false
}
m.Position++
return true
}
func (m *MockIterator) Node() *enode.Node {
if m.Position == 0 || m.Position > len(m.Nodes) {
return nil
}
return m.Nodes[m.Position-1]
}
func (m *MockIterator) Close() {
m.Closed = true
}
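As a usage sketch (placeholder names such as nodeA, nodeB and svc are assumptions; the tests above wire the mock into Service.dv5Listener in the same way), a test can feed a fixed set of ENRs to discovery and override individual listener methods:

iter := testp2p.NewMockIterator([]*enode.Node{nodeA, nodeB})
listener := testp2p.NewMockListener(localNode, iter)
svc.dv5Listener = listener // discovery now yields exactly nodeA and nodeB

// Per-test behavior can be overridden through the exported function fields.
listener.PingFunc = func(*enode.Node) error { return errors.New("unreachable") }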

View File

@@ -228,8 +228,8 @@ func (p *TestP2P) BroadcastLightClientFinalityUpdate(_ context.Context, _ interf
return nil
}
// BroadcastDataColumn broadcasts a data column for mock.
func (p *TestP2P) BroadcastDataColumn([fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
// BroadcastDataColumnSidecar broadcasts a data column for mock.
func (p *TestP2P) BroadcastDataColumnSidecar([fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
p.BroadcastCalled.Store(true)
return nil
}

View File

@@ -72,6 +72,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"handlers_equivocation_test.go",
"handlers_pool_test.go",
"handlers_state_test.go",
"handlers_test.go",

View File

@@ -701,7 +701,7 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
// Validate and optionally broadcast sidecars on equivocation.
if err := s.validateBroadcast(ctx, r, genericBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(genericBlock)
b, err := blocks.NewSignedBeaconBlock(genericBlock.Block)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
@@ -855,7 +855,7 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
// Validate and optionally broadcast sidecars on equivocation.
if err := s.validateBroadcast(ctx, r, genericBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(genericBlock)
b, err := blocks.NewSignedBeaconBlock(genericBlock.Block)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
@@ -1220,7 +1220,7 @@ func (s *Server) GetStateFork(w http.ResponseWriter, r *http.Request) {
fork := st.Fork()
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status"+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1331,7 +1331,7 @@ func (s *Server) GetCommittees(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
@@ -1512,7 +1512,7 @@ func (s *Server) GetFinalityCheckpoints(w http.ResponseWriter, r *http.Request)
}
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1686,7 +1686,7 @@ func (s *Server) GetPendingConsolidations(w http.ResponseWriter, r *http.Request
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1742,7 +1742,7 @@ func (s *Server) GetPendingDeposits(w http.ResponseWriter, r *http.Request) {
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1798,7 +1798,7 @@ func (s *Server) GetPendingPartialWithdrawals(w http.ResponseWriter, r *http.Req
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1851,7 +1851,7 @@ func (s *Server) GetProposerLookahead(w http.ResponseWriter, r *http.Request) {
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()

View File

@@ -0,0 +1,35 @@
package beacon
import (
"encoding/json"
"testing"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
rpctesting "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared/testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
// TestBlocks_NewSignedBeaconBlock_EquivocationFix tests that blocks.NewSignedBeaconBlock
// correctly handles the fixed case where genericBlock.Block is passed instead of genericBlock
func TestBlocks_NewSignedBeaconBlock_EquivocationFix(t *testing.T) {
// Parse the Phase0 JSON block
var block structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &block)
require.NoError(t, err)
// Convert to generic format
genericBlock, err := block.ToGeneric()
require.NoError(t, err)
// Test the FIX: pass genericBlock.Block instead of genericBlock
// This is what our fix changed in handlers.go lines 704 and 858
_, err = blocks.NewSignedBeaconBlock(genericBlock.Block)
require.NoError(t, err, "NewSignedBeaconBlock should work with genericBlock.Block")
// Test the BROKEN version: pass genericBlock directly (this should fail)
_, err = blocks.NewSignedBeaconBlock(genericBlock)
if err == nil {
t.Errorf("NewSignedBeaconBlock should fail with whole genericBlock but succeeded")
}
}

View File

@@ -56,7 +56,7 @@ func (s *Server) GetStateRoot(w http.ResponseWriter, r *http.Request) {
}
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -125,7 +125,7 @@ func (s *Server) GetRandao(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
@@ -227,7 +227,7 @@ func (s *Server) GetSyncCommittees(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}

View File

@@ -44,7 +44,7 @@ func (s *Server) GetValidators(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -222,7 +222,7 @@ func (s *Server) GetValidator(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -258,7 +258,7 @@ func (s *Server) GetValidatorBalances(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -419,7 +419,7 @@ func (s *Server) getValidatorIdentitiesJSON(
) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()

View File

@@ -46,7 +46,7 @@ func (s *Server) getBeaconStateV2(ctx context.Context, w http.ResponseWriter, id
isOptimistic, err := helpers.IsOptimistic(ctx, id, s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check if state is optimistic: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()

View File

@@ -12,6 +12,7 @@ go_library(
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
@@ -21,6 +22,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/httputil:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -40,6 +42,7 @@ go_test(
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
@@ -57,5 +60,6 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -2,11 +2,14 @@ package helpers
import (
"errors"
"net/http"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/lookup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/network/httputil"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
@@ -28,6 +31,22 @@ func PrepareStateFetchGRPCError(err error) error {
return status.Errorf(codes.Internal, "Invalid state ID: %v", err)
}
// HandleIsOptimisticError handles errors from IsOptimistic function calls and writes appropriate HTTP responses.
func HandleIsOptimisticError(w http.ResponseWriter, err error) {
var fetchErr *lookup.FetchStateError
if errors.As(err, &fetchErr) {
shared.WriteStateFetchError(w, err)
return
}
var blockRootsNotFoundErr *lookup.BlockRootsNotFoundError
if errors.As(err, &blockRootsNotFoundErr) {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusNotFound)
return
}
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
}
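// Handlers in this change delegate the error branch of IsOptimistic to this helper.
// Illustrative call site, mirroring the handler diffs above:
//
//	isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
//	if err != nil {
//		helpers.HandleIsOptimisticError(w, err)
//		return
//	}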
// IndexedVerificationFailure represents a collection of verification failures.
type IndexedVerificationFailure struct {
Failures []*SingleIndexedVerificationFailure `json:"failures"`

View File

@@ -56,7 +56,8 @@ func IsOptimistic(
if bytesutil.IsHex(stateId) {
id, err := hexutil.Decode(stateIdString)
if err != nil {
return false, err
e := lookup.NewStateIdParseError(err)
return false, &e
}
return isStateRootOptimistic(ctx, id, optimisticModeFetcher, stateFetcher, chainInfo, database)
} else if len(stateId) == 32 {
@@ -127,7 +128,7 @@ func isStateRootOptimistic(
) (bool, error) {
st, err := stateFetcher.State(ctx, stateId)
if err != nil {
return true, errors.Wrap(err, "could not fetch state")
return true, lookup.NewFetchStateError(err)
}
if st.Slot() == chainInfo.HeadSlot() {
return optimisticModeFetcher.IsOptimistic(ctx)
@@ -137,7 +138,7 @@ func isStateRootOptimistic(
return true, errors.Wrapf(err, "could not get block roots for slot %d", st.Slot())
}
if !has {
return true, errors.New("no block roots returned from the database")
return true, lookup.NewBlockRootsNotFoundError()
}
for _, r := range roots {
b, err := database.Block(ctx, r)

View File

@@ -1,12 +1,15 @@
package helpers
import (
"net/http"
"net/http/httptest"
"strconv"
"testing"
chainmock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/lookup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
@@ -21,6 +24,7 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
)
func TestIsOptimistic(t *testing.T) {
@@ -226,7 +230,67 @@ func TestIsOptimistic(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, true, o)
})
t.Run("State not found", func(t *testing.T) {
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
b.SetStateRoot(bytesutil.PadTo([]byte("root"), 32))
db := dbtest.SetupDB(t)
require.NoError(t, db.SaveBlock(ctx, b))
chainSt, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, chainSt.SetSlot(fieldparams.SlotsPerEpoch))
bRoot, err := b.Block().HashTreeRoot()
require.NoError(t, err)
cs := &chainmock.ChainService{State: chainSt, OptimisticRoots: map[[32]byte]bool{bRoot: true}}
mf := &testutil.MockStater{
CustomError: lookup.NewFetchStateError(nil),
}
_, err = IsOptimistic(ctx, []byte(hexutil.Encode(bytesutil.PadTo([]byte("root"), 32))), cs, mf, cs, db)
var fetchErr *lookup.FetchStateError
require.Equal(t, true, errors.As(err, &fetchErr))
})
t.Run("stateId invalid", func(t *testing.T) {
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
b.SetStateRoot(bytesutil.PadTo([]byte("root"), 32))
db := dbtest.SetupDB(t)
require.NoError(t, db.SaveBlock(ctx, b))
chainSt, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, chainSt.SetSlot(fieldparams.SlotsPerEpoch))
bRoot, err := b.Block().HashTreeRoot()
require.NoError(t, err)
cs := &chainmock.ChainService{State: chainSt, OptimisticRoots: map[[32]byte]bool{bRoot: true}}
mf := &testutil.MockStater{
CustomError: lookup.NewFetchStateError(nil),
}
_, err = IsOptimistic(ctx, []byte("0xabc"), cs, mf, cs, db)
var fetchErr *lookup.FetchStateError
require.Equal(t, false, errors.As(err, &fetchErr))
})
t.Run("block roots not found", func(t *testing.T) {
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
b.SetStateRoot(bytesutil.PadTo([]byte("root"), 32))
db := dbtest.SetupDB(t)
require.NoError(t, db.SaveBlock(ctx, b))
chainSt, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, chainSt.SetSlot(fieldparams.SlotsPerEpoch))
bRoot, err := b.Block().HashTreeRoot()
require.NoError(t, err)
cs := &chainmock.ChainService{State: chainSt, OptimisticRoots: map[[32]byte]bool{bRoot: true}}
st, err := util.NewBeaconState()
require.NoError(t, st.SetSlot(primitives.Slot(fieldparams.SlotsPerEpoch+1)))
require.NoError(t, err)
mf := &testutil.MockStater{BeaconState: st}
_, err = IsOptimistic(ctx, []byte(hexutil.Encode(bytesutil.PadTo([]byte("root"), 32))), cs, mf, cs, db)
var blockRootsNotFoundErr *lookup.BlockRootsNotFoundError
require.Equal(t, true, errors.As(err, &blockRootsNotFoundErr))
})
})
t.Run("slot", func(t *testing.T) {
t.Run("head is not optimistic", func(t *testing.T) {
cs := &chainmock.ChainService{Optimistic: false}
@@ -319,6 +383,36 @@ func TestIsOptimistic(t *testing.T) {
})
}
func TestHandleIsOptimisticError(t *testing.T) {
t.Run("fetch-state error handled as 404", func(t *testing.T) {
rr := httptest.NewRecorder()
notFoundErr := lookup.StateNotFoundError{}
fetchErr := lookup.NewFetchStateError(&notFoundErr)
HandleIsOptimisticError(rr, fetchErr)
require.Equal(t, http.StatusNotFound, rr.Code)
require.StringContains(t, notFoundErr.Error(), rr.Body.String())
})
t.Run("no block roots error handled as 404", func(t *testing.T) {
rr := httptest.NewRecorder()
blockRootsErr := lookup.NewBlockRootsNotFoundError()
HandleIsOptimisticError(rr, blockRootsErr)
require.Equal(t, http.StatusNotFound, rr.Code)
require.StringContains(t, blockRootsErr.Error(), rr.Body.String())
})
t.Run("generic error handled as 500", func(t *testing.T) {
rr := httptest.NewRecorder()
genericErr := errors.New("boom")
HandleIsOptimisticError(rr, genericErr)
require.Equal(t, http.StatusInternalServerError, rr.Code)
require.StringContains(t, "Could not check optimistic status: boom", rr.Body.String())
})
}
// prepareForkchoiceState prepares a beacon state with the given data to mock
// insert into forkchoice
func prepareForkchoiceState(

View File

@@ -23,6 +23,20 @@ import (
"github.com/pkg/errors"
)
type BlockRootsNotFoundError struct {
message string
}
func NewBlockRootsNotFoundError() *BlockRootsNotFoundError {
return &BlockRootsNotFoundError{
message: "no block roots returned from the database",
}
}
func (e BlockRootsNotFoundError) Error() string {
return e.message
}
// BlockIdParseError represents an error scenario where a block ID could not be parsed.
type BlockIdParseError struct {
message string
@@ -341,7 +355,7 @@ func (p *BeaconDbBlocker) blobsFromStoredDataColumns(block blocks.ROBlock, indic
stored := summary.Stored()
count := uint64(len(stored))
if count < peerdas.MinimumColumnsCountToReconstruct() {
if count < peerdas.MinimumColumnCountToReconstruct() {
// There is no way to reconstruct the data columns.
return nil, &core.RpcError{
Err: errors.Errorf("the node does not custody enough data columns to reconstruct blobs - please start the beacon node with the `--%s` flag to ensure this call to succeed, or retry later if it is already the case", flags.SubscribeAllDataSubnets.Name),

View File

@@ -443,7 +443,7 @@ func TestGetBlob(t *testing.T) {
setupFulu(t)
_, dataColumnStorage := filesystem.NewEphemeralDataColumnStorageAndFs(t)
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars[1 : peerdas.MinimumColumnsCountToReconstruct()+1])
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars[1 : peerdas.MinimumColumnCountToReconstruct()+1])
require.NoError(t, err)
blocker := &BeaconDbBlocker{

View File

@@ -21,6 +21,27 @@ import (
"github.com/pkg/errors"
)
type FetchStateError struct {
message string
cause error
}
func NewFetchStateError(cause error) *FetchStateError {
return &FetchStateError{
message: "could not fetch state",
cause: cause,
}
}
func (e *FetchStateError) Error() string {
if e.cause != nil {
return e.message + ": " + e.cause.Error()
}
return e.message
}
func (e *FetchStateError) Unwrap() error { return e.cause }
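// Callers can detect this error type with errors.As, as the updated helpers and
// tests in this change do:
//
//	var fetchErr *lookup.FetchStateError
//	if errors.As(err, &fetchErr) {
//		// treat the unfetchable state as a 404
//	}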
// StateIdParseError represents an error scenario where a state ID could not be parsed.
type StateIdParseError struct {
message string

View File

@@ -56,11 +56,7 @@ func (s *Server) GetValidatorCount(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateID), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
errJson := &httputil.DefaultJsonError{
Message: fmt.Sprintf("could not check if slot's block is optimistic: %v", err),
Code: http.StatusInternalServerError,
}
httputil.WriteError(w, errJson)
helpers.HandleIsOptimisticError(w, err)
return
}

View File

@@ -248,6 +248,8 @@ func (vs *Server) sendBlocks(stream ethpb.BeaconNodeValidator_StreamBlocksAltair
b.Block = &ethpb.StreamBlocksResponse_DenebBlock{DenebBlock: p}
case *ethpb.SignedBeaconBlockElectra:
b.Block = &ethpb.StreamBlocksResponse_ElectraBlock{ElectraBlock: p}
case *ethpb.SignedBeaconBlockFulu:
b.Block = &ethpb.StreamBlocksResponse_FuluBlock{FuluBlock: p}
default:
log.Errorf("Unknown block type %T", p)
}

View File

@@ -381,3 +381,92 @@ func TestServer_StreamSlotsVerified_OnHeadUpdated(t *testing.T) {
}
<-exitRoutine
}
func TestServer_StreamBlocksVerified_FuluBlock(t *testing.T) {
db := dbTest.SetupDB(t)
ctx := t.Context()
beaconState, privs := util.DeterministicGenesisStateFulu(t, 32)
c, err := altair.NextSyncCommittee(ctx, beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(c))
b, err := util.GenerateFullBlockFulu(beaconState, privs, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wrappedBlk := util.SaveBlock(t, ctx, db, b)
chainService := &chainMock.ChainService{State: beaconState}
server := &Server{
Ctx: ctx,
StateNotifier: chainService.StateNotifier(),
HeadFetcher: chainService,
}
exitRoutine := make(chan bool)
ctrl := gomock.NewController(t)
defer ctrl.Finish()
mockStream := mock.NewMockBeaconNodeValidatorAltair_StreamBlocksServer(ctrl)
mockStream.EXPECT().Send(&ethpb.StreamBlocksResponse{Block: &ethpb.StreamBlocksResponse_FuluBlock{FuluBlock: b}}).Do(func(arg0 interface{}) {
exitRoutine <- true
})
mockStream.EXPECT().Context().Return(ctx).AnyTimes()
go func(tt *testing.T) {
err := server.StreamBlocksAltair(&ethpb.StreamBlocksRequest{VerifiedOnly: true}, mockStream)
if s, _ := status.FromError(err); s.Code() != codes.Canceled {
assert.NoError(tt, err)
}
}(t)
// Send in a loop to ensure it is delivered (busy wait for the service to subscribe to the state feed).
for sent := 0; sent == 0; {
sent = server.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{Slot: b.Block.Slot, BlockRoot: r, SignedBlock: wrappedBlk},
})
}
<-exitRoutine
}
func TestServer_StreamBlocks_FuluBlock(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.BeaconConfig())
ctx := t.Context()
beaconState, privs := util.DeterministicGenesisStateFulu(t, 64)
c, err := altair.NextSyncCommittee(ctx, beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(c))
b, err := util.GenerateFullBlockFulu(beaconState, privs, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
chainService := &chainMock.ChainService{State: beaconState}
server := &Server{
Ctx: ctx,
BlockNotifier: chainService.BlockNotifier(),
HeadFetcher: chainService,
}
exitRoutine := make(chan bool)
ctrl := gomock.NewController(t)
defer ctrl.Finish()
mockStream := mock.NewMockBeaconNodeValidatorAltair_StreamBlocksServer(ctrl)
mockStream.EXPECT().Send(&ethpb.StreamBlocksResponse{Block: &ethpb.StreamBlocksResponse_FuluBlock{FuluBlock: b}}).Do(func(arg0 interface{}) {
exitRoutine <- true
})
mockStream.EXPECT().Context().Return(ctx).AnyTimes()
go func(tt *testing.T) {
err := server.StreamBlocksAltair(&ethpb.StreamBlocksRequest{}, mockStream)
if s, _ := status.FromError(err); s.Code() != codes.Canceled {
assert.NoError(tt, err)
}
}(t)
wrappedBlk, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
// Send in a loop to ensure it is delivered (busy wait for the service to subscribe to the state feed).
for sent := 0; sent == 0; {
sent = server.BlockNotifier.BlockFeed().Send(&feed.Event{
Type: blockfeed.ReceivedBlock,
Data: &blockfeed.ReceivedBlockData{SignedBlock: wrappedBlk},
})
}
<-exitRoutine
}

View File

@@ -120,6 +120,7 @@ type Config struct {
Router *http.ServeMux
ClockWaiter startup.ClockWaiter
BlobStorage *filesystem.BlobStorage
DataColumnStorage *filesystem.DataColumnStorage
TrackedValidatorsCache *cache.TrackedValidatorsCache
PayloadIDCache *cache.PayloadIDCache
LCStore *lightClient.Store
@@ -196,6 +197,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
BlobStorage: s.cfg.BlobStorage,
DataColumnStorage: s.cfg.DataColumnStorage,
}
rewardFetcher := &rewards.BlockRewardService{Replayer: ch, DB: s.cfg.BeaconDB}
coreService := &core.Service{

View File

@@ -15,10 +15,14 @@ type MockStater struct {
BeaconStateRoot []byte
StatesBySlot map[primitives.Slot]state.BeaconState
StatesByRoot map[[32]byte]state.BeaconState
CustomError error
}
// State --
func (m *MockStater) State(ctx context.Context, id []byte) (state.BeaconState, error) {
if m.CustomError != nil {
return nil, m.CustomError
}
if m.StateProviderFunc != nil {
return m.StateProviderFunc(ctx, id)
}

View File

@@ -8,6 +8,7 @@ go_library(
"broadcast_bls_changes.go",
"context.go",
"custody.go",
"data_column_sidecars.go",
"data_columns_reconstruct.go",
"deadlines.go",
"decode_pubsub.go",
@@ -167,6 +168,7 @@ go_test(
"broadcast_bls_changes_test.go",
"context_test.go",
"custody_test.go",
"data_column_sidecars_test.go",
"data_columns_reconstruct_test.go",
"decode_pubsub_test.go",
"error_test.go",
@@ -281,6 +283,7 @@ go_test(
"@com_github_golang_snappy//:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p//core:go_default_library",
"@com_github_libp2p_go_libp2p//core/crypto:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",

View File

@@ -91,9 +91,7 @@ func (bs *blobSync) validateNext(rb blocks.ROBlob) error {
return err
}
sc := blocks.NewSidecarFromBlobSidecar(rb)
if err := bs.store.Persist(bs.current, sc); err != nil {
if err := bs.store.Persist(bs.current, rb); err != nil {
return err
}

View File

@@ -0,0 +1,878 @@
package sync
import (
"bytes"
"context"
"math"
"slices"
"sync"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
prysmP2P "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
leakybucket "github.com/OffchainLabs/prysm/v6/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v6/crypto/rand"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
goPeer "github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// DataColumnSidecarsParams stores the common parameters needed to
// fetch data column sidecars from peers.
type DataColumnSidecarsParams struct {
Ctx context.Context // Context
Tor blockchain.TemporalOracle // Temporal oracle, useful to get the current slot
P2P prysmP2P.P2P // P2P network interface
RateLimiter *leakybucket.Collector // Rate limiter for outgoing requests
CtxMap ContextByteVersions // Context map, useful to know if a message is mapped to the correct fork
Storage filesystem.DataColumnStorageReader // Data columns storage
NewVerifier verification.NewDataColumnsVerifier // Data columns verifier to check to conformity of incoming data column sidecars
}
// FetchDataColumnSidecars retrieves data column sidecars from storage and peers for the given
// blocks and requested data column indices. It employs a multi-step strategy:
//
// 1. Direct retrieval: If all requested columns are available in storage, they are
// retrieved directly without reconstruction.
// 2. Reconstruction-based retrieval: If some requested columns are missing but sufficient
// stored columns exist (at least the minimum required for reconstruction), the function
// reconstructs all columns and extracts the requested indices.
// 3. Peer retrieval: If storage and reconstruction fail, missing columns are requested
// from connected peers that are expected to custody the required data.
//
// The function returns a map of block roots to their corresponding verified read-only data
// columns. It returns an error if data column storage is unavailable, if storage/reconstruction
// operations fail unexpectedly, or if not all requested columns could be retrieved from peers.
func FetchDataColumnSidecars(
params DataColumnSidecarsParams,
roBlocks []blocks.ROBlock,
indicesMap map[uint64]bool,
) (map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn, error) {
if len(roBlocks) == 0 || len(indicesMap) == 0 {
return nil, nil
}
indices := sortedSliceFromMap(indicesMap)
slotsWithCommitments := make(map[primitives.Slot]bool)
indicesByRootToQuery := make(map[[fieldparams.RootLength]byte]map[uint64]bool)
indicesByRootStored := make(map[[fieldparams.RootLength]byte]map[uint64]bool)
result := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn)
for _, roBlock := range roBlocks {
// Filter out blocks without commitments.
block := roBlock.Block()
commitments, err := block.Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrapf(err, "get blob kzg commitments for block root %#x", roBlock.Root())
}
if len(commitments) == 0 {
continue
}
slotsWithCommitments[block.Slot()] = true
root := roBlock.Root()
// Step 1: Get the requested sidecars for this root if available in storage
requestedColumns, err := tryGetDirectColumns(params.Storage, root, indices)
if err != nil {
return nil, errors.Wrapf(err, "try get direct columns for root %#x", root)
}
if requestedColumns != nil {
result[root] = requestedColumns
continue
}
// Step 2: If step 1 failed, reconstruct the requested sidecars from what is available in storage
requestedColumns, err = tryGetReconstructedColumns(params.Storage, root, indices)
if err != nil {
return nil, errors.Wrapf(err, "try get reconstructed columns for root %#x", root)
}
if requestedColumns != nil {
result[root] = requestedColumns
continue
}
// Step 3a: If steps 1 and 2 failed, keep track of the sidecars that need to be queried from peers
// and those that are already stored.
indicesToQueryMap, indicesStoredMap := categorizeIndices(params.Storage, root, indices)
if len(indicesToQueryMap) > 0 {
indicesByRootToQuery[root] = indicesToQueryMap
}
if len(indicesStoredMap) > 0 {
indicesByRootStored[root] = indicesStoredMap
}
}
// Early return if no sidecars need to be queried from peers.
if len(indicesByRootToQuery) == 0 {
return result, nil
}
// Step 3b: Request missing sidecars from peers.
start, count := time.Now(), computeTotalCount(indicesByRootToQuery)
fromPeersResult, err := tryRequestingColumnsFromPeers(params, roBlocks, slotsWithCommitments, indicesByRootToQuery)
if err != nil {
return nil, errors.Wrap(err, "request from peers")
}
log.WithFields(logrus.Fields{"duration": time.Since(start), "count": count}).Debug("Requested data column sidecars from peers")
for root, verifiedSidecars := range fromPeersResult {
result[root] = append(result[root], verifiedSidecars...)
}
// Step 3c: Load the stored sidecars.
for root, indicesStored := range indicesByRootStored {
requestedColumns, err := tryGetDirectColumns(params.Storage, root, sortedSliceFromMap(indicesStored))
if err != nil {
return nil, errors.Wrapf(err, "try get direct columns for root %#x", root)
}
result[root] = append(result[root], requestedColumns...)
}
return result, nil
}
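// Illustrative usage sketch (not part of this file; names such as clock, p2pService,
// rateLimiter, ctxMap, dataColumnStorage and newVerifier are placeholders): callers
// assemble DataColumnSidecarsParams from the sync service's dependencies and fetch
// the columns they need for a batch of blocks.
//
//	p := DataColumnSidecarsParams{
//		Ctx:         ctx,
//		Tor:         clock,
//		P2P:         p2pService,
//		RateLimiter: rateLimiter,
//		CtxMap:      ctxMap,
//		Storage:     dataColumnStorage,
//		NewVerifier: newVerifier,
//	}
//	columnsByRoot, err := FetchDataColumnSidecars(p, roBlocks, map[uint64]bool{0: true, 1: true})
//	if err != nil {
//		// Not all requested columns could be retrieved from storage, reconstruction, or peers.
//	}
//	_ = columnsByRoot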
// tryGetDirectColumns attempts to retrieve all requested columns directly from storage
// if they are all available. Returns the columns if successful, and nil if at least one
// requested sidecar is not available in storage.
func tryGetDirectColumns(storage filesystem.DataColumnStorageReader, blockRoot [fieldparams.RootLength]byte, indices []uint64) ([]blocks.VerifiedRODataColumn, error) {
// Check if all requested indices are present in cache
storedIndices := storage.Summary(blockRoot).Stored()
allRequestedPresent := true
for _, requestedIndex := range indices {
if !storedIndices[requestedIndex] {
allRequestedPresent = false
break
}
}
if !allRequestedPresent {
return nil, nil
}
// All requested data is present, retrieve directly from DB
requestedColumns, err := storage.Get(blockRoot, indices)
if err != nil {
return nil, errors.Wrapf(err, "failed to get data columns for block root %#x", blockRoot)
}
return requestedColumns, nil
}
// tryGetReconstructedColumns attempts to retrieve columns using reconstruction
// if sufficient columns are available. Returns the columns if successful, (nil, nil) if
// there are not enough stored columns, or (nil, err) if an error occurs.
func tryGetReconstructedColumns(storage filesystem.DataColumnStorageReader, blockRoot [fieldparams.RootLength]byte, indices []uint64) ([]blocks.VerifiedRODataColumn, error) {
// Check if we have enough columns for reconstruction
summary := storage.Summary(blockRoot)
if summary.Count() < peerdas.MinimumColumnCountToReconstruct() {
return nil, nil
}
// Retrieve all stored columns for reconstruction
allStoredColumns, err := storage.Get(blockRoot, nil)
if err != nil {
return nil, errors.Wrapf(err, "failed to get all stored columns for reconstruction for block root %#x", blockRoot)
}
// Attempt reconstruction
reconstructedColumns, err := peerdas.ReconstructDataColumnSidecars(allStoredColumns)
if err != nil {
return nil, errors.Wrapf(err, "failed to reconstruct data columns for block root %#x", blockRoot)
}
// Health check: ensure we have the expected number of columns
numberOfColumns := params.BeaconConfig().NumberOfColumns
if uint64(len(reconstructedColumns)) != numberOfColumns {
return nil, errors.Errorf("reconstructed %d columns but expected %d for block root %#x", len(reconstructedColumns), numberOfColumns, blockRoot)
}
// Extract only the requested indices from reconstructed data using direct indexing
requestedColumns := make([]blocks.VerifiedRODataColumn, 0, len(indices))
for _, requestedIndex := range indices {
if requestedIndex >= numberOfColumns {
return nil, errors.Errorf("requested column index %d exceeds maximum %d for block root %#x", requestedIndex, numberOfColumns-1, blockRoot)
}
requestedColumns = append(requestedColumns, reconstructedColumns[requestedIndex])
}
return requestedColumns, nil
}
// categorizeIndices separates indices into those that need to be queried from peers
// and those that are already stored.
func categorizeIndices(storage filesystem.DataColumnStorageReader, blockRoot [fieldparams.RootLength]byte, indices []uint64) (map[uint64]bool, map[uint64]bool) {
indicesToQuery := make(map[uint64]bool, len(indices))
indicesStored := make(map[uint64]bool, len(indices))
allStoredIndices := storage.Summary(blockRoot).Stored()
for _, index := range indices {
if allStoredIndices[index] {
indicesStored[index] = true
continue
}
indicesToQuery[index] = true
}
return indicesToQuery, indicesStored
}
// tryRequestingColumnsFromPeers attempts to request missing data column sidecars from connected peers.
// It explores the connected peers to find those that are expected to custody the requested columns
// and returns only when every requested column has either been retrieved or been requested
// from all possible peers.
// Returns a map of block roots to their verified read-only data column sidecars.
// Returns an error if at least one requested column could not be retrieved.
// WARNING: This function alters `missingIndicesByRoot`. The caller should NOT use it after running this function.
func tryRequestingColumnsFromPeers(
p DataColumnSidecarsParams,
roBlocks []blocks.ROBlock,
slotsWithCommitments map[primitives.Slot]bool,
missingIndicesByRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
) (map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn, error) {
// Create a new random source for peer selection.
randomSource := rand.NewGenerator()
// Compute slots by block root.
slotByRoot := computeSlotByBlockRoot(roBlocks)
// Determine all sidecars each peer is expected to custody.
connectedPeersSlice := p.P2P.Peers().Connected()
connectedPeers := make(map[goPeer.ID]bool, len(connectedPeersSlice))
for _, peer := range connectedPeersSlice {
connectedPeers[peer] = true
}
indicesByRootByPeer, err := computeIndicesByRootByPeer(p.P2P, slotByRoot, missingIndicesByRoot, connectedPeers)
if err != nil {
return nil, errors.Wrap(err, "explore peers")
}
verifiedColumnsByRoot := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn)
for len(missingIndicesByRoot) > 0 && len(indicesByRootByPeer) > 0 {
// Select peers to query the missing sidecars from.
indicesByRootByPeerToQuery, err := selectPeers(p, randomSource, len(missingIndicesByRoot), indicesByRootByPeer)
if err != nil {
return nil, errors.Wrap(err, "select peers")
}
// Remove selected peers from the maps.
for peer := range indicesByRootByPeerToQuery {
delete(connectedPeers, peer)
}
// Fetch the sidecars from the chosen peers.
roDataColumnsByPeer := fetchDataColumnSidecarsFromPeers(p, slotByRoot, slotsWithCommitments, indicesByRootByPeerToQuery)
// Verify the received data column sidecars.
verifiedRoDataColumnSidecars, err := verifyDataColumnSidecarsByPeer(p.P2P, p.NewVerifier, roDataColumnsByPeer)
if err != nil {
return nil, errors.Wrap(err, "verify data columns sidecars by peer")
}
// Remove the verified sidecars from the missing indices map and compute the new verified columns by root.
newMissingIndicesByRoot, localVerifiedColumnsByRoot := updateResults(verifiedRoDataColumnSidecars, missingIndicesByRoot)
missingIndicesByRoot = newMissingIndicesByRoot
for root, verifiedRoDataColumns := range localVerifiedColumnsByRoot {
verifiedColumnsByRoot[root] = append(verifiedColumnsByRoot[root], verifiedRoDataColumns...)
}
// Compute indices by root by peers with the updated missing indices and connected peers.
indicesByRootByPeer, err = computeIndicesByRootByPeer(p.P2P, slotByRoot, missingIndicesByRoot, connectedPeers)
if err != nil {
return nil, errors.Wrap(err, "explore peers")
}
}
if len(missingIndicesByRoot) > 0 {
return nil, errors.New("not all requested data column sidecars were retrieved from peers")
}
return verifiedColumnsByRoot, nil
}
// selectPeers selects peers to query the sidecars.
// It begins by randomly selecting a peer in `origIndicesByRootByPeer` that has enough bandwidth,
// and assigns to it all its available sidecars. Then, it randomly selects another peer, until
// all sidecars in `missingIndicesByRoot` are covered.
func selectPeers(
p DataColumnSidecarsParams,
randomSource *rand.Rand,
count int,
origIndicesByRootByPeer map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool,
) (map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool, error) {
const randomPeerTimeout = 30 * time.Second
// Select peers to query the missing sidecars from.
indicesByRootByPeer := copyIndicesByRootByPeer(origIndicesByRootByPeer)
internalIndicesByRootByPeer := copyIndicesByRootByPeer(indicesByRootByPeer)
indicesByRootByPeerToQuery := make(map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool)
for len(internalIndicesByRootByPeer) > 0 {
// Randomly select a peer with enough bandwidth.
peer, err := func() (goPeer.ID, error) {
ctx, cancel := context.WithTimeout(p.Ctx, randomPeerTimeout)
defer cancel()
peer, err := randomPeer(ctx, randomSource, p.RateLimiter, count, internalIndicesByRootByPeer)
if err != nil {
return "", errors.Wrap(err, "select random peer")
}
return peer, err
}()
if err != nil {
return nil, err
}
// Query all the sidecars that peer can offer us.
newIndicesByRoot, ok := internalIndicesByRootByPeer[peer]
if !ok {
return nil, errors.Errorf("peer %s not found in internal indices by root by peer map", peer)
}
indicesByRootByPeerToQuery[peer] = newIndicesByRoot
// Remove this peer from the maps to avoid re-selection.
delete(indicesByRootByPeer, peer)
delete(internalIndicesByRootByPeer, peer)
// Delete the corresponding sidecars from other peers in the internal map
// to avoid re-selection during this iteration.
for peer, indicesByRoot := range internalIndicesByRootByPeer {
for root, indices := range indicesByRoot {
newIndices := newIndicesByRoot[root]
for index := range newIndices {
delete(indices, index)
}
if len(indices) == 0 {
delete(indicesByRoot, root)
}
}
if len(indicesByRoot) == 0 {
delete(internalIndicesByRootByPeer, peer)
}
}
}
return indicesByRootByPeerToQuery, nil
}
// updateResults updates the missing indices and verified sidecars maps based on the newly verified sidecars.
func updateResults(
verifiedSidecars []blocks.VerifiedRODataColumn,
origMissingIndicesByRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
) (map[[fieldparams.RootLength]byte]map[uint64]bool, map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn) {
// Copy the original map to avoid modifying it directly.
missingIndicesByRoot := copyIndicesByRoot(origMissingIndicesByRoot)
verifiedSidecarsByRoot := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn)
for _, verifiedSidecar := range verifiedSidecars {
blockRoot := verifiedSidecar.BlockRoot()
index := verifiedSidecar.Index
// Add to the result map grouped by block root
verifiedSidecarsByRoot[blockRoot] = append(verifiedSidecarsByRoot[blockRoot], verifiedSidecar)
if indices, ok := missingIndicesByRoot[blockRoot]; ok {
delete(indices, index)
if len(indices) == 0 {
delete(missingIndicesByRoot, blockRoot)
}
}
}
return missingIndicesByRoot, verifiedSidecarsByRoot
}
// fetchDataColumnSidecarsFromPeers retrieves data column sidecars from peers.
func fetchDataColumnSidecarsFromPeers(
params DataColumnSidecarsParams,
slotByRoot map[[fieldparams.RootLength]byte]primitives.Slot,
slotsWithCommitments map[primitives.Slot]bool,
indicesByRootByPeer map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool,
) map[goPeer.ID][]blocks.RODataColumn {
var (
wg sync.WaitGroup
mut sync.Mutex
)
roDataColumnsByPeer := make(map[goPeer.ID][]blocks.RODataColumn)
wg.Add(len(indicesByRootByPeer))
for peerID, indicesByRoot := range indicesByRootByPeer {
go func(peerID goPeer.ID, indicesByRoot map[[fieldparams.RootLength]byte]map[uint64]bool) {
defer wg.Done()
requestedCount := 0
for _, indices := range indicesByRoot {
requestedCount += len(indices)
}
log := log.WithFields(logrus.Fields{
"peerID": peerID,
"agent": agentString(peerID, params.P2P.Host()),
"blockCount": len(indicesByRoot),
"totalRequestedCount": requestedCount,
})
roDataColumns, err := sendDataColumnSidecarsRequest(params, slotByRoot, slotsWithCommitments, peerID, indicesByRoot)
if err != nil {
log.WithError(err).Warning("Failed to send data column sidecars request")
return
}
mut.Lock()
defer mut.Unlock()
roDataColumnsByPeer[peerID] = roDataColumns
}(peerID, indicesByRoot)
}
wg.Wait()
return roDataColumnsByPeer
}
func sendDataColumnSidecarsRequest(
params DataColumnSidecarsParams,
slotByRoot map[[fieldparams.RootLength]byte]primitives.Slot,
slotsWithCommitments map[primitives.Slot]bool,
peerID goPeer.ID,
indicesByRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
) ([]blocks.RODataColumn, error) {
const batchSize = 32
rootCount := int64(len(indicesByRoot))
requestedSidecarsCount := 0
for _, indices := range indicesByRoot {
requestedSidecarsCount += len(indices)
}
log := log.WithFields(logrus.Fields{
"peerID": peerID,
"agent": agentString(peerID, params.P2P.Host()),
"requestedSidecars": requestedSidecarsCount,
})
// Try to build a by-range request first.
byRangeRequests, err := buildByRangeRequests(slotByRoot, slotsWithCommitments, indicesByRoot, batchSize)
if err != nil {
return nil, errors.Wrap(err, "craft by range request")
}
// If we have a valid by range request, send it.
if len(byRangeRequests) > 0 {
count := 0
for _, indices := range indicesByRoot {
count += len(indices)
}
start := time.Now()
roDataColumns := make([]blocks.RODataColumn, 0, count)
for _, request := range byRangeRequests {
if params.RateLimiter != nil {
params.RateLimiter.Add(peerID.String(), rootCount)
}
localRoDataColumns, err := SendDataColumnSidecarsByRangeRequest(params, peerID, request)
if err != nil {
return nil, errors.Wrapf(err, "send data column sidecars by range request to peer %s", peerID)
}
roDataColumns = append(roDataColumns, localRoDataColumns...)
}
log.WithFields(logrus.Fields{
"respondedSidecars": len(roDataColumns),
"requests": len(byRangeRequests),
"type": "byRange",
"duration": time.Since(start),
}).Debug("Received data column sidecars")
return roDataColumns, nil
}
// Build identifiers for the by root request.
byRootRequest := buildByRootRequest(indicesByRoot)
// Send the by root request.
start := time.Now()
if params.RateLimiter != nil {
params.RateLimiter.Add(peerID.String(), rootCount)
}
roDataColumns, err := SendDataColumnSidecarsByRootRequest(params, peerID, byRootRequest)
if err != nil {
return nil, errors.Wrapf(err, "send data column sidecars by root request to peer %s", peerID)
}
log.WithFields(logrus.Fields{
"respondedSidecars": len(roDataColumns),
"requests": 1,
"type": "byRoot",
"duration": time.Since(start),
}).Debug("Received data column sidecars")
return roDataColumns, nil
}
// buildByRangeRequests constructs by-range requests from the given indices,
// only if the indices are the same for all blocks and if the blocks are contiguous.
// (Missing blocks or blocks without commitments count as contiguous.)
// If one of these conditions is not met, it returns nil.
func buildByRangeRequests(
slotByRoot map[[fieldparams.RootLength]byte]primitives.Slot,
slotsWithCommitments map[primitives.Slot]bool,
indicesByRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
batchSize uint64,
) ([]*ethpb.DataColumnSidecarsByRangeRequest, error) {
if len(indicesByRoot) == 0 {
return nil, nil
}
var reference map[uint64]bool
slots := make([]primitives.Slot, 0, len(slotByRoot))
for root, indices := range indicesByRoot {
if reference == nil {
reference = indices
}
if !compareIndices(reference, indices) {
return nil, nil
}
slot, ok := slotByRoot[root]
if !ok {
return nil, errors.Errorf("slot not found for block root %#x", root)
}
slots = append(slots, slot)
}
slices.Sort(slots)
for i := 1; i < len(slots); i++ {
previous, current := slots[i-1], slots[i]
if current == previous+1 {
continue
}
for j := previous + 1; j < current; j++ {
if slotsWithCommitments[j] {
return nil, nil
}
}
}
columns := sortedSliceFromMap(reference)
startSlot, endSlot := slots[0], slots[len(slots)-1]
totalCount := uint64(endSlot - startSlot + 1)
requests := make([]*ethpb.DataColumnSidecarsByRangeRequest, 0, totalCount/batchSize)
for start := startSlot; start <= endSlot; start += primitives.Slot(batchSize) {
end := min(start+primitives.Slot(batchSize)-1, endSlot)
request := &ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: start,
Count: uint64(end - start + 1),
Columns: columns,
}
requests = append(requests, request)
}
return requests, nil
}
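// Worked example (assumed values, for illustration only): with batchSize = 32 and
// blocks whose slots span 100..170 (the same column indices for every block, and no
// skipped slot in between carrying commitments), buildByRangeRequests yields three
// requests covering slots 100-131 (count 32), 132-163 (count 32) and 164-170
// (count 7), each carrying the shared sorted column list.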
// buildByRootRequest constructs a by root request from the given indices.
func buildByRootRequest(indicesByRoot map[[fieldparams.RootLength]byte]map[uint64]bool) p2ptypes.DataColumnsByRootIdentifiers {
identifiers := make(p2ptypes.DataColumnsByRootIdentifiers, 0, len(indicesByRoot))
for root, indices := range indicesByRoot {
identifier := &eth.DataColumnsByRootIdentifier{
BlockRoot: root[:],
Columns: sortedSliceFromMap(indices),
}
identifiers = append(identifiers, identifier)
}
// Sort identifiers to have a deterministic output.
slices.SortFunc(identifiers, func(left, right *eth.DataColumnsByRootIdentifier) int {
if cmp := bytes.Compare(left.BlockRoot, right.BlockRoot); cmp != 0 {
return cmp
}
return slices.Compare(left.Columns, right.Columns)
})
return identifiers
}
// verifyDataColumnSidecarsByPeer verifies the received data column sidecars.
// If at least one sidecar from a peer is invalid, the peer is downscored and
// all its sidecars are rejected. (Sidecars from other peers are still accepted.)
func verifyDataColumnSidecarsByPeer(
p2p prysmP2P.P2P,
newVerifier verification.NewDataColumnsVerifier,
roDataColumnsByPeer map[goPeer.ID][]blocks.RODataColumn,
) ([]blocks.VerifiedRODataColumn, error) {
// First optimistically verify all received data columns in a single batch.
count := 0
for _, columns := range roDataColumnsByPeer {
count += len(columns)
}
roDataColumnSidecars := make([]blocks.RODataColumn, 0, count)
for _, columns := range roDataColumnsByPeer {
roDataColumnSidecars = append(roDataColumnSidecars, columns...)
}
verifiedRoDataColumnSidecars, err := verifyByRootDataColumnSidecars(newVerifier, roDataColumnSidecars)
if err == nil {
// This is the happy path where all sidecars are verified.
return verifiedRoDataColumnSidecars, nil
}
// An error occurred during verification, which means that at least one sidecar is invalid.
// Reverify peer by peer to identify the faulty peer(s), reject all of their sidecars, and downscore them.
verifiedRoDataColumnSidecars = make([]blocks.VerifiedRODataColumn, 0, count)
for peer, columns := range roDataColumnsByPeer {
peerVerifiedRoDataColumnSidecars, err := verifyByRootDataColumnSidecars(newVerifier, columns)
if err != nil {
// This peer has invalid sidecars.
log := log.WithError(err).WithField("peerID", peer)
newScore := p2p.Peers().Scorers().BadResponsesScorer().Increment(peer)
log.Warning("Peer returned invalid data column sidecars")
log.WithFields(logrus.Fields{"reason": "invalidDataColumnSidecars", "newScore": newScore}).Debug("Downscore peer")
}
verifiedRoDataColumnSidecars = append(verifiedRoDataColumnSidecars, peerVerifiedRoDataColumnSidecars...)
}
return verifiedRoDataColumnSidecars, nil
}
// verifyByRootDataColumnSidecars verifies the provided read-only data columns against the
// requirements for data column sidecars received via the by root request.
func verifyByRootDataColumnSidecars(newVerifier verification.NewDataColumnsVerifier, roDataColumns []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error) {
verifier := newVerifier(roDataColumns, verification.ByRootRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return nil, errors.Wrap(err, "valid fields")
}
if err := verifier.SidecarInclusionProven(); err != nil {
return nil, errors.Wrap(err, "sidecar inclusion proven")
}
if err := verifier.SidecarKzgProofVerified(); err != nil {
return nil, errors.Wrap(err, "sidecar KZG proof verified")
}
verifiedRoDataColumns, err := verifier.VerifiedRODataColumns()
if err != nil {
return nil, errors.Wrap(err, "verified RO data columns - should never happen")
}
return verifiedRoDataColumns, nil
}
// computeIndicesByRootByPeer returns a peer->root->indices map restricted to the
// roots and indices given in `indicesByBlockRoot`. A peer is selected for a given
// root only if it custodies the requested columns and its advertised head slot is
// not lower than the block slot.
func computeIndicesByRootByPeer(
p2p prysmP2P.P2P,
slotByBlockRoot map[[fieldparams.RootLength]byte]primitives.Slot,
indicesByBlockRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
peers map[goPeer.ID]bool,
) (map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool, error) {
// First, compute custody columns for all peers
peersByIndex := make(map[uint64]map[goPeer.ID]bool)
headSlotByPeer := make(map[goPeer.ID]primitives.Slot)
for peer := range peers {
// Compute the custody columns for each peer
nodeID, err := prysmP2P.ConvertPeerIDToNodeID(peer)
if err != nil {
return nil, errors.Wrapf(err, "convert peer ID to node ID for peer %s", peer)
}
custodyGroupCount := p2p.CustodyGroupCountFromPeer(peer)
dasInfo, _, err := peerdas.Info(nodeID, custodyGroupCount)
if err != nil {
return nil, errors.Wrapf(err, "peerdas info for peer %s", peer)
}
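// Index the peer under every column it custodies so that, for a given column,
// all candidate peers can be looked up directly.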
for column := range dasInfo.CustodyColumns {
if _, exists := peersByIndex[column]; !exists {
peersByIndex[column] = make(map[goPeer.ID]bool)
}
peersByIndex[column][peer] = true
}
// Compute the head slot for each peer
peerChainState, err := p2p.Peers().ChainState(peer)
if err != nil {
return nil, errors.Wrapf(err, "get chain state for peer %s", peer)
}
if peerChainState == nil {
return nil, errors.Errorf("chain state is nil for peer %s", peer)
}
headSlotByPeer[peer] = peerChainState.HeadSlot
}
// For each block root and its indices, find suitable peers
indicesByRootByPeer := make(map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool)
for blockRoot, indices := range indicesByBlockRoot {
blockSlot, ok := slotByBlockRoot[blockRoot]
if !ok {
return nil, errors.Errorf("slot not found for block root %#x", blockRoot)
}
for index := range indices {
peers := peersByIndex[index]
for peer := range peers {
peerHeadSlot, ok := headSlotByPeer[peer]
if !ok {
return nil, errors.Errorf("head slot not found for peer %s", peer)
}
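// Skip peers whose advertised head slot is behind the block slot, since they
// are unlikely to serve sidecars for this block yet.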
if peerHeadSlot < blockSlot {
continue
}
// Build peers->root->indices map
if _, exists := indicesByRootByPeer[peer]; !exists {
indicesByRootByPeer[peer] = make(map[[fieldparams.RootLength]byte]map[uint64]bool)
}
if _, exists := indicesByRootByPeer[peer][blockRoot]; !exists {
indicesByRootByPeer[peer][blockRoot] = make(map[uint64]bool)
}
indicesByRootByPeer[peer][blockRoot][index] = true
}
}
}
return indicesByRootByPeer, nil
}
// randomPeer selects a random peer. If no peer has enough bandwidth, it waits and retries.
// Returns the selected peer ID and any error.
func randomPeer(
ctx context.Context,
randomSource *rand.Rand,
rateLimiter *leakybucket.Collector,
count int,
indicesByRootByPeer map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool,
) (goPeer.ID, error) {
const waitPeriod = 5 * time.Second
peerCount := len(indicesByRootByPeer)
if peerCount == 0 {
return "", errors.New("no peers available")
}
for ctx.Err() == nil {
nonRateLimitedPeers := make([]goPeer.ID, 0, len(indicesByRootByPeer))
for peer := range indicesByRootByPeer {
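// A nil rate limiter is treated as unlimited remaining bandwidth.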
remaining := int64(math.MaxInt64)
if rateLimiter != nil {
remaining = rateLimiter.Remaining(peer.String())
}
if remaining >= int64(count) {
nonRateLimitedPeers = append(nonRateLimitedPeers, peer)
}
}
if len(nonRateLimitedPeers) == 0 {
log.WithFields(logrus.Fields{
"peerCount": peerCount,
"delay": waitPeriod,
}).Debug("Waiting for a peer with enough bandwidth for data column sidecars")
time.Sleep(waitPeriod)
continue
}
randomIndex := randomSource.Intn(len(nonRateLimitedPeers))
return nonRateLimitedPeers[randomIndex], nil
}
return "", ctx.Err()
}
// copyIndicesByRootByPeer creates a deep copy of the given nested map.
// Returns a new map with the same structure and contents.
func copyIndicesByRootByPeer(original map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool) map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool {
copied := make(map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool, len(original))
for peer, indicesByRoot := range original {
copied[peer] = copyIndicesByRoot(indicesByRoot)
}
return copied
}
// copyIndicesByRoot creates a deep copy of the given nested map.
// Returns a new map with the same structure and contents.
func copyIndicesByRoot(original map[[fieldparams.RootLength]byte]map[uint64]bool) map[[fieldparams.RootLength]byte]map[uint64]bool {
copied := make(map[[fieldparams.RootLength]byte]map[uint64]bool, len(original))
for root, indexMap := range original {
copied[root] = make(map[uint64]bool, len(indexMap))
for index, value := range indexMap {
copied[root][index] = value
}
}
return copied
}
// compareIndices compares two map[uint64]bool and returns true if they are equal.
func compareIndices(left, right map[uint64]bool) bool {
if len(left) != len(right) {
return false
}
for key, leftValue := range left {
rightValue, exists := right[key]
if !exists || leftValue != rightValue {
return false
}
}
return true
}
// sortedSliceFromMap converts a map[uint64]bool to a sorted slice of keys.
func sortedSliceFromMap(m map[uint64]bool) []uint64 {
result := make([]uint64, 0, len(m))
for k := range m {
result = append(result, k)
}
slices.Sort(result)
return result
}
// computeSlotByBlockRoot maps each block root to its corresponding slot.
func computeSlotByBlockRoot(roBlocks []blocks.ROBlock) map[[fieldparams.RootLength]byte]primitives.Slot {
slotByBlockRoot := make(map[[fieldparams.RootLength]byte]primitives.Slot, len(roBlocks))
for _, roBlock := range roBlocks {
slotByBlockRoot[roBlock.Root()] = roBlock.Block().Slot()
}
return slotByBlockRoot
}
// computeTotalCount calculates the total count of indices across all roots.
func computeTotalCount(input map[[fieldparams.RootLength]byte]map[uint64]bool) int {
totalCount := 0
for _, indices := range input {
totalCount += len(indices)
}
return totalCount
}

@@ -0,0 +1,984 @@
package sync
import (
"context"
"fmt"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
testp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
leakybucket "github.com/OffchainLabs/prysm/v6/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v6/crypto/rand"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
)
func TestFetchDataColumnSidecars(t *testing.T) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Slot 1: All needed sidecars are available in storage
// Slot 2: No commitments
// Slot 3: All sidecars are saved except the needed ones
// Slot 4: Some sidecars are in the storage, others have to be retrieved from peers.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
storage := filesystem.NewEphemeralDataColumnStorage(t)
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)
const blobCount = 3
indices := map[uint64]bool{31: true, 81: true, 106: true}
// Block 1
block1, _, verifiedSidecars1 := util.GenerateTestFuluBlockWithSidecars(t, blobCount, util.WithSlot(1))
root1 := block1.Root()
toStore1 := make([]blocks.VerifiedRODataColumn, 0, len(indices))
for index := range indices {
sidecar := verifiedSidecars1[index]
toStore1 = append(toStore1, sidecar)
}
err = storage.Save(toStore1)
require.NoError(t, err)
// Block 2
block2, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 0, util.WithSlot(2))
// Block 3
block3, _, verifiedSidecars3 := util.GenerateTestFuluBlockWithSidecars(t, blobCount, util.WithSlot(3))
root3 := block3.Root()
toStore3 := make([]blocks.VerifiedRODataColumn, 0, numberOfColumns-uint64(len(indices)))
for i := range numberOfColumns {
if !indices[i] {
sidecar := verifiedSidecars3[i]
toStore3 = append(toStore3, sidecar)
}
}
err = storage.Save(toStore3)
require.NoError(t, err)
// Block 4
block4, _, verifiedSidecars4 := util.GenerateTestFuluBlockWithSidecars(t, blobCount, util.WithSlot(4))
root4 := block4.Root()
toStore4 := []blocks.VerifiedRODataColumn{verifiedSidecars4[106]}
err = storage.Save(toStore4)
require.NoError(t, err)
privateKeyBytes := [32]byte{1}
privateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes[:])
require.NoError(t, err)
// Peers
protocol := fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1)
p2p, other := testp2p.NewTestP2P(t), testp2p.NewTestP2P(t, libp2p.Identity(privateKey))
p2p.Peers().SetConnectionState(other.PeerID(), peers.Connected)
p2p.Connect(other)
p2p.Peers().SetChainState(other.PeerID(), &ethpb.StatusV2{
HeadSlot: 4,
})
expectedRequest := &ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 4,
Count: 1,
Columns: []uint64{31, 81},
}
clock := startup.NewClock(time.Now(), [fieldparams.RootLength]byte{})
gs := startup.NewClockSynchronizer()
err = gs.SetClock(startup.NewClock(time.Unix(4113849600, 0), [fieldparams.RootLength]byte{}))
require.NoError(t, err)
waiter := verification.NewInitializerWaiter(gs, nil, nil)
initializer, err := waiter.WaitForInitializer(t.Context())
require.NoError(t, err)
newDataColumnsVerifier := newDataColumnsVerifierFromInitializer(initializer)
other.SetStreamHandler(protocol, func(stream network.Stream) {
actualRequest := new(ethpb.DataColumnSidecarsByRangeRequest)
err := other.Encoding().DecodeWithMaxLength(stream, actualRequest)
assert.NoError(t, err)
assert.DeepEqual(t, expectedRequest, actualRequest)
err = WriteDataColumnSidecarChunk(stream, clock, other.Encoding(), verifiedSidecars4[31].DataColumnSidecar)
assert.NoError(t, err)
err = WriteDataColumnSidecarChunk(stream, clock, other.Encoding(), verifiedSidecars4[81].DataColumnSidecar)
assert.NoError(t, err)
err = stream.CloseWrite()
assert.NoError(t, err)
})
params := DataColumnSidecarsParams{
Ctx: t.Context(),
Tor: clock,
P2P: p2p,
RateLimiter: leakybucket.NewCollector(1., 10, time.Second, false /* deleteEmptyBuckets */),
CtxMap: ctxMap,
Storage: storage,
NewVerifier: newDataColumnsVerifier,
}
expected := map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn{
root1: {verifiedSidecars1[31], verifiedSidecars1[81], verifiedSidecars1[106]},
// no root2 (no commitments in this block)
root3: {verifiedSidecars3[31], verifiedSidecars3[81], verifiedSidecars3[106]},
root4: {verifiedSidecars4[31], verifiedSidecars4[81], verifiedSidecars4[106]},
}
blocks := []blocks.ROBlock{block1, block2, block3, block4}
actual, err := FetchDataColumnSidecars(params, blocks, indices)
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
for root := range expected {
require.Equal(t, len(expected[root]), len(actual[root]))
for i := range expected[root] {
require.DeepSSZEqual(t, expected[root][i], actual[root][i])
}
}
}
func TestCategorizeIndices(t *testing.T) {
storage := filesystem.NewEphemeralDataColumnStorage(t)
_, verifiedRoSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 1, Index: 14, Column: [][]byte{{1}, {2}, {3}}},
})
err := storage.Save(verifiedRoSidecars)
require.NoError(t, err)
expectedToQuery := map[uint64]bool{13: true}
expectedStored := map[uint64]bool{12: true, 14: true}
actualToQuery, actualStored := categorizeIndices(storage, verifiedRoSidecars[0].BlockRoot(), []uint64{12, 13, 14})
require.Equal(t, len(expectedToQuery), len(actualToQuery))
require.Equal(t, len(expectedStored), len(actualStored))
for index := range expectedToQuery {
require.Equal(t, true, actualToQuery[index])
}
for index := range expectedStored {
require.Equal(t, true, actualStored[index])
}
}
func TestSelectPeers(t *testing.T) {
const (
count = 3
seed = 46
)
params := DataColumnSidecarsParams{
Ctx: t.Context(),
RateLimiter: leakybucket.NewCollector(1., 10, time.Second, false /* deleteEmptyBuckets */),
}
randomSource := rand.NewGenerator()
indicesByRootByPeer := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
"peer1": {
{1}: {12: true, 13: true},
{2}: {13: true, 14: true, 15: true},
{3}: {14: true, 15: true},
},
"peer2": {
{1}: {13: true, 14: true},
{2}: {13: true, 14: true, 15: true},
{3}: {14: true, 16: true},
},
}
expected_1 := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
"peer1": {
{1}: {12: true, 13: true},
{2}: {13: true, 14: true, 15: true},
{3}: {14: true, 15: true},
},
"peer2": {
{1}: {14: true},
{3}: {16: true},
},
}
expected_2 := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
"peer1": {
{1}: {12: true},
{3}: {15: true},
},
"peer2": {
{1}: {13: true, 14: true},
{2}: {13: true, 14: true, 15: true},
{3}: {14: true, 16: true},
},
}
actual, err := selectPeers(params, randomSource, count, indicesByRootByPeer)
expected := expected_1
if len(actual["peer1"]) == 2 {
expected = expected_2
}
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
for peerID := range expected {
require.Equal(t, len(expected[peerID]), len(actual[peerID]))
for root := range expected[peerID] {
require.Equal(t, len(expected[peerID][root]), len(actual[peerID][root]))
for indices := range expected[peerID][root] {
require.Equal(t, expected[peerID][root][indices], actual[peerID][root][indices])
}
}
}
}
func TestUpdateResults(t *testing.T) {
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 1, Index: 13, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 2, Index: 13, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 2, Index: 14, Column: [][]byte{{1}, {2}, {3}}},
})
missingIndicesByRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
verifiedSidecars[0].BlockRoot(): {12: true, 13: true},
verifiedSidecars[2].BlockRoot(): {13: true, 14: true, 15: true},
}
expectedMissingIndicesByRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
verifiedSidecars[2].BlockRoot(): {15: true},
}
expectedVerifiedSidecarsByRoot := map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn{
verifiedSidecars[0].BlockRoot(): {verifiedSidecars[0], verifiedSidecars[1]},
verifiedSidecars[2].BlockRoot(): {verifiedSidecars[2], verifiedSidecars[3]},
}
actualMissingIndicesByRoot, actualVerifiedSidecarsByRoot := updateResults(verifiedSidecars, missingIndicesByRoot)
require.DeepEqual(t, expectedMissingIndicesByRoot, actualMissingIndicesByRoot)
require.DeepEqual(t, expectedVerifiedSidecarsByRoot, actualVerifiedSidecarsByRoot)
}
func TestFetchDataColumnSidecarsFromPeers(t *testing.T) {
const count = 4
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
clock := startup.NewClock(time.Now(), [fieldparams.RootLength]byte{})
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)
kzgCommitmentsInclusionProof := make([][]byte, 0, count)
for range count {
kzgCommitmentsInclusionProof = append(kzgCommitmentsInclusionProof, make([]byte, 32))
}
expectedResponseSidecarPb := &ethpb.DataColumnSidecar{
Index: 2,
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: 1,
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
expectedResponseSidecar, err := blocks.NewRODataColumn(expectedResponseSidecarPb)
require.NoError(t, err)
slotByRoot := map[[fieldparams.RootLength]byte]primitives.Slot{
{1}: 1,
{3}: 3,
{4}: 4,
{7}: 7,
}
slotsWithCommitments := map[primitives.Slot]bool{
1: true,
3: true,
4: true,
7: true,
}
expectedRequest := &ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 1,
Count: 7,
Columns: []uint64{1, 2},
}
protocol := fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1)
p2p, other := testp2p.NewTestP2P(t), testp2p.NewTestP2P(t)
p2p.Connect(other)
other.SetStreamHandler(protocol, func(stream network.Stream) {
receivedRequest := new(ethpb.DataColumnSidecarsByRangeRequest)
err := other.Encoding().DecodeWithMaxLength(stream, receivedRequest)
assert.NoError(t, err)
assert.DeepEqual(t, expectedRequest, receivedRequest)
err = WriteDataColumnSidecarChunk(stream, clock, other.Encoding(), expectedResponseSidecarPb)
assert.NoError(t, err)
err = stream.CloseWrite()
assert.NoError(t, err)
})
indicesByRootByPeer := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
other.PeerID(): {
{1}: {1: true, 2: true},
{3}: {1: true, 2: true},
{4}: {1: true, 2: true},
{7}: {1: true, 2: true},
},
}
params := DataColumnSidecarsParams{
Ctx: t.Context(),
Tor: clock,
P2P: p2p,
CtxMap: ctxMap,
RateLimiter: leakybucket.NewCollector(1., 1, time.Second, false /* deleteEmptyBuckets */),
}
expectedResponse := map[peer.ID][]blocks.RODataColumn{
other.PeerID(): {expectedResponseSidecar},
}
actualResponse := fetchDataColumnSidecarsFromPeers(params, slotByRoot, slotsWithCommitments, indicesByRootByPeer)
require.Equal(t, len(expectedResponse), len(actualResponse))
for peerID := range expectedResponse {
require.DeepSSZEqual(t, expectedResponse[peerID], actualResponse[peerID])
}
}
func TestSendDataColumnSidecarsRequest(t *testing.T) {
const count = 4
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
kzgCommitmentsInclusionProof := make([][]byte, 0, count)
for range count {
kzgCommitmentsInclusionProof = append(kzgCommitmentsInclusionProof, make([]byte, 32))
}
expectedResponsePb := &ethpb.DataColumnSidecar{
Index: 2,
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: 1,
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
expectedResponse, err := blocks.NewRODataColumn(expectedResponsePb)
require.NoError(t, err)
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)
t.Run("contiguous", func(t *testing.T) {
indicesByRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
{1}: {1: true, 2: true},
{3}: {1: true, 2: true},
{4}: {1: true, 2: true},
{7}: {1: true, 2: true},
}
slotByRoot := map[[fieldparams.RootLength]byte]primitives.Slot{
{1}: 1,
{3}: 3,
{4}: 4,
{7}: 7,
}
slotsWithCommitments := map[primitives.Slot]bool{
1: true,
3: true,
4: true,
7: true,
}
expectedRequest := &ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 1,
Count: 7,
Columns: []uint64{1, 2},
}
protocol := fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1)
p2p, other := testp2p.NewTestP2P(t), testp2p.NewTestP2P(t)
p2p.Connect(other)
other.SetStreamHandler(protocol, func(stream network.Stream) {
receivedRequest := new(ethpb.DataColumnSidecarsByRangeRequest)
err := other.Encoding().DecodeWithMaxLength(stream, receivedRequest)
assert.NoError(t, err)
assert.DeepEqual(t, expectedRequest, receivedRequest)
err = WriteDataColumnSidecarChunk(stream, clock, other.Encoding(), expectedResponsePb)
assert.NoError(t, err)
err = stream.CloseWrite()
assert.NoError(t, err)
})
params := DataColumnSidecarsParams{
Ctx: t.Context(),
Tor: clock,
P2P: p2p,
CtxMap: ctxMap,
RateLimiter: leakybucket.NewCollector(1., 1, time.Second, false /* deleteEmptyBuckets */),
}
actualResponse, err := sendDataColumnSidecarsRequest(params, slotByRoot, slotsWithCommitments, other.PeerID(), indicesByRoot)
require.NoError(t, err)
require.DeepEqual(t, expectedResponse, actualResponse[0])
})
t.Run("non contiguous", func(t *testing.T) {
indicesByRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
expectedResponse.BlockRoot(): {1: true, 2: true},
{4}: {1: true, 2: true},
{7}: {1: true, 2: true},
}
slotByRoot := map[[fieldparams.RootLength]byte]primitives.Slot{
expectedResponse.BlockRoot(): 1,
{4}: 4,
{7}: 7,
}
slotsWithCommitments := map[primitives.Slot]bool{
1: true,
3: true,
4: true,
7: true,
}
roots := [...][fieldparams.RootLength]byte{expectedResponse.BlockRoot(), {4}, {7}}
expectedRequest := &p2ptypes.DataColumnsByRootIdentifiers{
{
BlockRoot: roots[1][:],
Columns: []uint64{1, 2},
},
{
BlockRoot: roots[2][:],
Columns: []uint64{1, 2},
},
{
BlockRoot: roots[0][:],
Columns: []uint64{1, 2},
},
}
protocol := fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRootTopicV1)
p2p, other := testp2p.NewTestP2P(t), testp2p.NewTestP2P(t)
p2p.Connect(other)
other.SetStreamHandler(protocol, func(stream network.Stream) {
receivedRequest := new(p2ptypes.DataColumnsByRootIdentifiers)
err := other.Encoding().DecodeWithMaxLength(stream, receivedRequest)
assert.NoError(t, err)
assert.DeepSSZEqual(t, *expectedRequest, *receivedRequest)
err = WriteDataColumnSidecarChunk(stream, clock, other.Encoding(), expectedResponsePb)
assert.NoError(t, err)
err = stream.CloseWrite()
assert.NoError(t, err)
})
params := DataColumnSidecarsParams{
Ctx: t.Context(),
Tor: clock,
P2P: p2p,
CtxMap: ctxMap,
RateLimiter: leakybucket.NewCollector(1., 1, time.Second, false /* deleteEmptyBuckets */),
}
actualResponse, err := sendDataColumnSidecarsRequest(params, slotByRoot, slotsWithCommitments, other.PeerID(), indicesByRoot)
require.NoError(t, err)
require.DeepEqual(t, expectedResponse, actualResponse[0])
})
}
func TestBuildByRangeRequests(t *testing.T) {
const nullBatchSize = 0
t.Run("empty", func(t *testing.T) {
actual, err := buildByRangeRequests(nil, nil, nil, nullBatchSize)
require.NoError(t, err)
require.Equal(t, 0, len(actual))
})
t.Run("missing Root", func(t *testing.T) {
indicesByRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
{1}: {1: true, 2: true},
}
_, err := buildByRangeRequests(nil, nil, indicesByRoot, nullBatchSize)
require.NotNil(t, err)
})
t.Run("indices differ", func(t *testing.T) {
indicesByRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
{1}: {1: true, 2: true},
{2}: {1: true, 2: true},
{3}: {2: true, 3: true},
}
slotByRoot := map[[fieldparams.RootLength]byte]primitives.Slot{
{1}: 1,
{2}: 2,
{3}: 3,
}
actual, err := buildByRangeRequests(slotByRoot, nil, indicesByRoot, nullBatchSize)
require.NoError(t, err)
require.Equal(t, 0, len(actual))
})
t.Run("slots non contiguous", func(t *testing.T) {
indicesByRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
{1}: {1: true, 2: true},
{2}: {1: true, 2: true},
}
slotByRoot := map[[fieldparams.RootLength]byte]primitives.Slot{
{1}: 1,
{2}: 3,
}
slotsWithCommitments := map[primitives.Slot]bool{
1: true,
2: true,
3: true,
}
actual, err := buildByRangeRequests(slotByRoot, slotsWithCommitments, indicesByRoot, nullBatchSize)
require.NoError(t, err)
require.Equal(t, 0, len(actual))
})
t.Run("nominal", func(t *testing.T) {
const batchSize = 3
indicesByRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
{1}: {1: true, 2: true},
{3}: {1: true, 2: true},
{4}: {1: true, 2: true},
{7}: {1: true, 2: true},
}
slotByRoot := map[[fieldparams.RootLength]byte]primitives.Slot{
{1}: 1,
{3}: 3,
{4}: 4,
{7}: 7,
}
slotsWithCommitments := map[primitives.Slot]bool{
1: true,
3: true,
4: true,
7: true,
}
expected := []*ethpb.DataColumnSidecarsByRangeRequest{
{
StartSlot: 1,
Count: 3,
Columns: []uint64{1, 2},
},
{
StartSlot: 4,
Count: 3,
Columns: []uint64{1, 2},
},
{
StartSlot: 7,
Count: 1,
Columns: []uint64{1, 2},
},
}
actual, err := buildByRangeRequests(slotByRoot, slotsWithCommitments, indicesByRoot, batchSize)
require.NoError(t, err)
require.DeepEqual(t, expected, actual)
})
}
func TestBuildByRootRequest(t *testing.T) {
root1 := [fieldparams.RootLength]byte{1}
root2 := [fieldparams.RootLength]byte{2}
input := map[[fieldparams.RootLength]byte]map[uint64]bool{
root1: {1: true, 2: true},
root2: {3: true},
}
expected := p2ptypes.DataColumnsByRootIdentifiers{
{
BlockRoot: root1[:],
Columns: []uint64{1, 2},
},
{
BlockRoot: root2[:],
Columns: []uint64{3},
},
}
actual := buildByRootRequest(input)
require.DeepSSZEqual(t, expected, actual)
}
func TestVerifyDataColumnSidecarsByPeer(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("nominal", func(t *testing.T) {
const (
start, stop = 0, 15
blobCount = 1
)
p2p := testp2p.NewTestP2P(t)
// Setup test data and expectations
_, roDataColumnSidecars, expected := util.GenerateTestFuluBlockWithSidecars(t, blobCount)
roDataColumnsByPeer := map[peer.ID][]blocks.RODataColumn{
"peer1": roDataColumnSidecars[start:5],
"peer2": roDataColumnSidecars[5:9],
"peer3": roDataColumnSidecars[9:stop],
}
gs := startup.NewClockSynchronizer()
err := gs.SetClock(startup.NewClock(time.Unix(4113849600, 0), [fieldparams.RootLength]byte{}))
require.NoError(t, err)
waiter := verification.NewInitializerWaiter(gs, nil, nil)
initializer, err := waiter.WaitForInitializer(t.Context())
require.NoError(t, err)
newDataColumnsVerifier := newDataColumnsVerifierFromInitializer(initializer)
actual, err := verifyDataColumnSidecarsByPeer(p2p, newDataColumnsVerifier, roDataColumnsByPeer)
require.NoError(t, err)
require.Equal(t, stop-start, len(actual))
for i := range actual {
actualSidecar := actual[i]
index := actualSidecar.Index
expectedSidecar := expected[index]
require.DeepEqual(t, expectedSidecar, actualSidecar)
}
})
t.Run("one rogue peer", func(t *testing.T) {
const (
start, middle, stop = 0, 5, 15
blobCount = 1
)
p2p := testp2p.NewTestP2P(t)
// Setup test data and expectations
_, roDataColumnSidecars, expected := util.GenerateTestFuluBlockWithSidecars(t, blobCount)
// Modify one sidecar to ensure proof verification fails.
if roDataColumnSidecars[middle].KzgProofs[0][0] == 0 {
roDataColumnSidecars[middle].KzgProofs[0][0]++
} else {
roDataColumnSidecars[middle].KzgProofs[0][0]--
}
roDataColumnsByPeer := map[peer.ID][]blocks.RODataColumn{
"peer1": roDataColumnSidecars[start:middle],
"peer2": roDataColumnSidecars[5:middle],
"peer3": roDataColumnSidecars[middle:stop],
}
gs := startup.NewClockSynchronizer()
err := gs.SetClock(startup.NewClock(time.Unix(4113849600, 0), [fieldparams.RootLength]byte{}))
require.NoError(t, err)
waiter := verification.NewInitializerWaiter(gs, nil, nil)
initializer, err := waiter.WaitForInitializer(t.Context())
require.NoError(t, err)
newDataColumnsVerifier := newDataColumnsVerifierFromInitializer(initializer)
actual, err := verifyDataColumnSidecarsByPeer(p2p, newDataColumnsVerifier, roDataColumnsByPeer)
require.NoError(t, err)
require.Equal(t, middle-start, len(actual))
for i := range actual {
actualSidecar := actual[i]
index := actualSidecar.Index
expectedSidecar := expected[index]
require.DeepEqual(t, expectedSidecar, actualSidecar)
}
})
}
func TestComputeIndicesByRootByPeer(t *testing.T) {
peerIdStrs := []string{
"16Uiu2HAm3k5Npu6EaYWxiEvzsdLseEkjVyoVhvbxWEuyqdBgBBbq", // Custodies 89, 94, 97 & 122
"16Uiu2HAmTwQPAwzTr6hTgBmKNecCfH6kP3Kbzxj36ZRyyQ46L6gf", // Custodies 1, 11, 37 & 86
"16Uiu2HAmMDB5uUePTpN7737m78ehePfWPtBL9qMGdH8kCygjzNA8", // Custodies 2, 37, 38 & 68
"16Uiu2HAmTAE5Vxf7Pgfk7eWpmCvVJdSba4C9xg4xkYuuvnVbgfFx", // Custodies 10, 29, 36 & 108
}
headSlotByPeer := map[string]primitives.Slot{
"16Uiu2HAm3k5Npu6EaYWxiEvzsdLseEkjVyoVhvbxWEuyqdBgBBbq": 89,
"16Uiu2HAmTwQPAwzTr6hTgBmKNecCfH6kP3Kbzxj36ZRyyQ46L6gf": 10,
"16Uiu2HAmMDB5uUePTpN7737m78ehePfWPtBL9qMGdH8kCygjzNA8": 12,
"16Uiu2HAmTAE5Vxf7Pgfk7eWpmCvVJdSba4C9xg4xkYuuvnVbgfFx": 9,
}
p2p := testp2p.NewTestP2P(t)
peers := p2p.Peers()
peerIDs := make([]peer.ID, 0, len(peerIdStrs))
for _, peerIdStr := range peerIdStrs {
peerID, err := peer.Decode(peerIdStr)
require.NoError(t, err)
peers.SetChainState(peerID, &ethpb.StatusV2{
HeadSlot: headSlotByPeer[peerIdStr],
})
peerIDs = append(peerIDs, peerID)
}
slotByBlockRoot := map[[fieldparams.RootLength]byte]primitives.Slot{
[fieldparams.RootLength]byte{1}: 8,
[fieldparams.RootLength]byte{2}: 10,
[fieldparams.RootLength]byte{3}: 9,
[fieldparams.RootLength]byte{4}: 50,
}
indicesByBlockRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
[fieldparams.RootLength]byte{1}: {3: true, 4: true, 5: true},
[fieldparams.RootLength]byte{2}: {1: true, 10: true, 37: true, 80: true},
[fieldparams.RootLength]byte{3}: {10: true, 38: true, 39: true, 40: true},
[fieldparams.RootLength]byte{4}: {89: true, 108: true, 122: true},
}
expected := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
peerIDs[0]: {
[fieldparams.RootLength]byte{4}: {89: true, 122: true},
},
peerIDs[1]: {
[fieldparams.RootLength]byte{2}: {1: true, 37: true},
},
peerIDs[2]: {
[fieldparams.RootLength]byte{2}: {37: true},
[fieldparams.RootLength]byte{3}: {38: true},
},
peerIDs[3]: {
[fieldparams.RootLength]byte{3}: {10: true},
},
}
peerIDsMap := make(map[peer.ID]bool, len(peerIDs))
for _, id := range peerIDs {
peerIDsMap[id] = true
}
actual, err := computeIndicesByRootByPeer(p2p, slotByBlockRoot, indicesByBlockRoot, peerIDsMap)
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
for peer, indicesByRoot := range expected {
require.Equal(t, len(indicesByRoot), len(actual[peer]))
for root, indices := range indicesByRoot {
require.Equal(t, len(indices), len(actual[peer][root]))
for index := range indices {
require.Equal(t, actual[peer][root][index], true)
}
}
}
}
func TestRandomPeer(t *testing.T) {
// Fixed seed.
const seed = 42
randomSource := rand.NewGenerator()
t.Run("no peers", func(t *testing.T) {
pid, err := randomPeer(t.Context(), randomSource, leakybucket.NewCollector(4, 8, time.Second, false /* deleteEmptyBuckets */), 1, nil)
require.NotNil(t, err)
require.Equal(t, peer.ID(""), pid)
})
t.Run("context cancelled", func(t *testing.T) {
ctx, cancel := context.WithCancel(t.Context())
cancel()
indicesByRootByPeer := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{peer.ID("peer1"): {}}
pid, err := randomPeer(ctx, randomSource, leakybucket.NewCollector(4, 8, time.Second, false /* deleteEmptyBuckets */), 1, indicesByRootByPeer)
require.NotNil(t, err)
require.Equal(t, peer.ID(""), pid)
})
t.Run("nominal", func(t *testing.T) {
const count = 1
collector := leakybucket.NewCollector(4, 8, time.Second, false /* deleteEmptyBuckets */)
peer1, peer2, peer3 := peer.ID("peer1"), peer.ID("peer2"), peer.ID("peer3")
indicesByRootByPeer := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
peer1: {},
peer2: {},
peer3: {},
}
pid, err := randomPeer(t.Context(), randomSource, collector, count, indicesByRootByPeer)
require.NoError(t, err)
require.Equal(t, true, map[peer.ID]bool{peer1: true, peer2: true, peer3: true}[pid])
})
}
func TestCopyIndicesByRootByPeer(t *testing.T) {
original := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
peer.ID("peer1"): {
[fieldparams.RootLength]byte{1}: {1: true, 3: true},
[fieldparams.RootLength]byte{2}: {2: true},
},
peer.ID("peer2"): {
[fieldparams.RootLength]byte{1}: {1: true},
},
}
copied := copyIndicesByRootByPeer(original)
require.Equal(t, len(original), len(copied))
for peer, indicesByRoot := range original {
require.Equal(t, len(indicesByRoot), len(copied[peer]))
for root, indices := range indicesByRoot {
require.Equal(t, len(indices), len(copied[peer][root]))
for index := range indices {
require.Equal(t, copied[peer][root][index], true)
}
}
}
}
func TestCompareIndices(t *testing.T) {
left := map[uint64]bool{3: true, 5: true, 7: true}
right := map[uint64]bool{5: true}
require.Equal(t, false, compareIndices(left, right))
left = map[uint64]bool{3: true, 5: true, 7: true}
right = map[uint64]bool{3: true, 6: true, 7: true}
require.Equal(t, false, compareIndices(left, right))
left = map[uint64]bool{3: true, 5: true, 7: true}
right = map[uint64]bool{5: true, 7: true, 3: true}
require.Equal(t, true, compareIndices(left, right))
}
func TestSortedSliceFromMap(t *testing.T) {
input := map[uint64]bool{54: true, 23: true, 35: true}
expected := []uint64{23, 35, 54}
actual := sortedSliceFromMap(input)
require.DeepEqual(t, expected, actual)
}
func TestComputeSlotByBlockRoot(t *testing.T) {
const (
count = 3
multiplier = 10
)
roBlocks := make([]blocks.ROBlock, 0, count)
for i := range count {
signedBlock := util.NewBeaconBlock()
signedBlock.Block.Slot = primitives.Slot(i).Mul(multiplier)
roSignedBlock, err := blocks.NewSignedBeaconBlock(signedBlock)
require.NoError(t, err)
roBlock, err := blocks.NewROBlockWithRoot(roSignedBlock, [fieldparams.RootLength]byte{byte(i)})
require.NoError(t, err)
roBlocks = append(roBlocks, roBlock)
}
expected := map[[fieldparams.RootLength]byte]primitives.Slot{
[fieldparams.RootLength]byte{0}: primitives.Slot(0),
[fieldparams.RootLength]byte{1}: primitives.Slot(10),
[fieldparams.RootLength]byte{2}: primitives.Slot(20),
}
actual := computeSlotByBlockRoot(roBlocks)
require.Equal(t, len(expected), len(actual))
for k, v := range expected {
require.Equal(t, v, actual[k])
}
}
func TestComputeTotalCount(t *testing.T) {
input := map[[fieldparams.RootLength]byte]map[uint64]bool{
[fieldparams.RootLength]byte{1}: {1: true, 3: true},
[fieldparams.RootLength]byte{2}: {2: true},
}
const expected = 3
actual := computeTotalCount(input)
require.Equal(t, expected, actual)
}

@@ -44,7 +44,7 @@ func (s *Service) reconstructSaveBroadcastDataColumnSidecars(
numberOfColumns := params.BeaconConfig().NumberOfColumns
// If reconstruction is not possible or if all columns are already stored, exit early.
if storedColumnsCount < peerdas.MinimumColumnsCountToReconstruct() || storedColumnsCount == numberOfColumns {
if storedColumnsCount < peerdas.MinimumColumnCountToReconstruct() || storedColumnsCount == numberOfColumns {
return nil
}
@@ -198,7 +198,7 @@ func (s *Service) broadcastMissingDataColumnSidecars(
subnet := peerdas.ComputeSubnetForDataColumnSidecar(verifiedRODataColumn.Index)
// Broadcast the missing data column.
if err := s.cfg.p2p.BroadcastDataColumn(root, subnet, verifiedRODataColumn.DataColumnSidecar); err != nil {
if err := s.cfg.p2p.BroadcastDataColumnSidecar(root, subnet, verifiedRODataColumn.DataColumnSidecar); err != nil {
log.WithError(err).Error("Broadcast data column")
}

@@ -30,7 +30,7 @@ func TestReconstructDataColumns(t *testing.T) {
root, block := roBlock.Root(), roBlock.Block()
slot, proposerIndex := block.Slot(), block.ProposerIndex()
minimumCount := peerdas.MinimumColumnsCountToReconstruct()
minimumCount := peerdas.MinimumColumnCountToReconstruct()
t.Run("not enough stored sidecars", func(t *testing.T) {
storage := filesystem.NewEphemeralDataColumnStorage(t)
@@ -61,7 +61,7 @@ func TestReconstructDataColumns(t *testing.T) {
const cgc = 8
storage := filesystem.NewEphemeralDataColumnStorage(t)
minimumCount := peerdas.MinimumColumnsCountToReconstruct()
minimumCount := peerdas.MinimumColumnCountToReconstruct()
err := storage.Save(verifiedRoDataColumns[:minimumCount])
require.NoError(t, err)

@@ -20,6 +20,7 @@ go_library(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
@@ -72,7 +73,9 @@ go_test(
deps = [
"//async/abool:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
@@ -89,6 +92,7 @@ go_test(
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",

@@ -3,11 +3,13 @@ package initialsync
import (
"context"
"fmt"
"slices"
"sort"
"strings"
"sync"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
@@ -15,6 +17,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
prysmsync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/verify"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -34,7 +37,6 @@ import (
)
const (
// maxPendingRequests limits how many concurrent fetch requests one can initiate.
maxPendingRequests = 64
// peersPercentagePerRequest caps percentage of peers to be used in a request.
@@ -78,6 +80,8 @@ type blocksFetcherConfig struct {
peerFilterCapacityWeight float64
mode syncMode
bs filesystem.BlobStorageSummarizer
dcs filesystem.DataColumnStorageReader
cv verification.NewDataColumnsVerifier
}
// blocksFetcher is a service to fetch chain data from peers.
@@ -94,6 +98,8 @@ type blocksFetcher struct {
p2p p2p.P2P
db db.ReadOnlyDatabase
bs filesystem.BlobStorageSummarizer
dcs filesystem.DataColumnStorageReader
cv verification.NewDataColumnsVerifier
blocksPerPeriod uint64
rateLimiter *leakybucket.Collector
peerLocks map[peer.ID]*peerLock
@@ -124,7 +130,7 @@ type fetchRequestResponse struct {
blobsFrom peer.ID
start primitives.Slot
count uint64
bwb []blocks.BlockWithROBlobs
bwb []blocks.BlockWithROSidecars
err error
}
@@ -162,6 +168,8 @@ func newBlocksFetcher(ctx context.Context, cfg *blocksFetcherConfig) *blocksFetc
p2p: cfg.p2p,
db: cfg.db,
bs: cfg.bs,
dcs: cfg.dcs,
cv: cfg.cv,
blocksPerPeriod: uint64(blocksPerPeriod),
rateLimiter: rateLimiter,
peerLocks: make(map[peer.ID]*peerLock),
@@ -298,7 +306,7 @@ func (f *blocksFetcher) handleRequest(ctx context.Context, start primitives.Slot
response := &fetchRequestResponse{
start: start,
count: count,
bwb: []blocks.BlockWithROBlobs{},
bwb: []blocks.BlockWithROSidecars{},
err: nil,
}
@@ -317,30 +325,114 @@ func (f *blocksFetcher) handleRequest(ctx context.Context, start primitives.Slot
if f.mode == modeStopOnFinalizedEpoch {
highestFinalizedSlot := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(targetEpoch + 1))
if start > highestFinalizedSlot {
response.err = fmt.Errorf("%w, slot: %d, highest finalized slot: %d",
errSlotIsTooHigh, start, highestFinalizedSlot)
response.err = fmt.Errorf(
"%w, slot: %d, highest finalized slot: %d",
errSlotIsTooHigh, start, highestFinalizedSlot,
)
return response
}
}
response.bwb, response.blocksFrom, response.err = f.fetchBlocksFromPeer(ctx, start, count, peers)
if response.err == nil {
pid, bwb, err := f.fetchBlobsFromPeer(ctx, response.bwb, response.blocksFrom, peers)
pid, err := f.fetchSidecars(ctx, response.blocksFrom, peers, response.bwb)
if err != nil {
log.WithError(err).Error("Failed to fetch sidecars")
response.err = err
}
response.bwb = bwb
response.blobsFrom = pid
}
return response
}
// fetchBlocksFromPeer fetches blocks from a single randomly selected peer.
// fetchSidecars fetches the sidecars corresponding to the blocks in `bwScs`.
// It mutates the `Blobs` and `Columns` fields of `bwScs` with the fetched sidecars.
// `pid` is the initial peer to request blobs from (usually the peer from which the blocks originated),
// `peers` is a list of peers to fall back to for the blob requests if `pid` fails.
// `bwScs` must be sorted by slot.
// It returns the peer ID from which blobs were fetched (if any).
func (f *blocksFetcher) fetchSidecars(ctx context.Context, pid peer.ID, peers []peer.ID, bwScs []blocks.BlockWithROSidecars) (peer.ID, error) {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
if len(bwScs) == 0 {
return "", nil
}
firstFuluIndex, err := findFirstFuluIndex(bwScs)
if err != nil {
return "", errors.Wrap(err, "find first Fulu index")
}
preFulu := bwScs[:firstFuluIndex]
postFulu := bwScs[firstFuluIndex:]
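// Pre-Fulu blocks carry blob sidecars while Fulu (and later) blocks carry data column
// sidecars, so the two ranges are handled separately.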
var blobsPid peer.ID
if len(preFulu) > 0 {
// Fetch blob sidecars.
blobsPid, err = f.fetchBlobsFromPeer(ctx, preFulu, pid, peers)
if err != nil {
return "", errors.Wrap(err, "fetch blobs from peer")
}
}
if len(postFulu) == 0 {
return blobsPid, nil
}
// Compute the columns to request.
custodyGroupCount, err := f.p2p.CustodyGroupCount()
if err != nil {
return blobsPid, errors.Wrap(err, "custody group count")
}
samplingSize := max(custodyGroupCount, samplesPerSlot)
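// Sample at least SamplesPerSlot custody groups, even if our own custody group count is lower.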
info, _, err := peerdas.Info(f.p2p.NodeID(), samplingSize)
if err != nil {
return blobsPid, errors.Wrap(err, "custody info")
}
params := prysmsync.DataColumnSidecarsParams{
Ctx: ctx,
Tor: f.clock,
P2P: f.p2p,
RateLimiter: f.rateLimiter,
CtxMap: f.ctxMap,
Storage: f.dcs,
NewVerifier: f.cv,
}
roBlocks := make([]blocks.ROBlock, 0, len(postFulu))
for _, block := range postFulu {
roBlocks = append(roBlocks, block.Block)
}
verifiedRoDataColumnsByRoot, err := prysmsync.FetchDataColumnSidecars(params, roBlocks, info.CustodyColumns)
if err != nil {
return "", errors.Wrap(err, "fetch data column sidecars")
}
// Populate the response.
for i := range bwScs {
bwSc := &bwScs[i]
root := bwSc.Block.Root()
if columns, ok := verifiedRoDataColumnsByRoot[root]; ok {
bwSc.Columns = columns
}
}
return blobsPid, nil
}
// fetchBlocksFromPeer fetches blocks from a single randomly selected peer, sorted by slot.
func (f *blocksFetcher) fetchBlocksFromPeer(
ctx context.Context,
start primitives.Slot, count uint64,
peers []peer.ID,
) ([]blocks.BlockWithROBlobs, peer.ID, error) {
) ([]blocks.BlockWithROSidecars, peer.ID, error) {
ctx, span := trace.StartSpan(ctx, "initialsync.fetchBlocksFromPeer")
defer span.End()
@@ -355,8 +447,7 @@ func (f *blocksFetcher) fetchBlocksFromPeer(
// peers are dialed first.
peers = append(bestPeers, peers...)
peers = dedupPeers(peers)
for i := 0; i < len(peers); i++ {
p := peers[i]
for _, p := range peers {
blocks, err := f.requestBlocks(ctx, req, p)
if err != nil {
log.WithField("peer", p).WithError(err).Debug("Could not request blocks by range from peer")
@@ -380,14 +471,14 @@ func (f *blocksFetcher) fetchBlocksFromPeer(
return nil, "", errNoPeersAvailable
}
func sortedBlockWithVerifiedBlobSlice(bs []interfaces.ReadOnlySignedBeaconBlock) ([]blocks.BlockWithROBlobs, error) {
rb := make([]blocks.BlockWithROBlobs, len(bs))
for i, b := range bs {
func sortedBlockWithVerifiedBlobSlice(blks []interfaces.ReadOnlySignedBeaconBlock) ([]blocks.BlockWithROSidecars, error) {
rb := make([]blocks.BlockWithROSidecars, len(blks))
for i, b := range blks {
ro, err := blocks.NewROBlock(b)
if err != nil {
return nil, err
}
rb[i] = blocks.BlockWithROBlobs{Block: ro}
rb[i] = blocks.BlockWithROSidecars{Block: ro}
}
sort.Sort(blocks.BlockWithROBlobsSlice(rb))
return rb, nil
@@ -403,7 +494,8 @@ type commitmentCountList []commitmentCount
// countCommitments makes a list of all blocks that have commitments that need to be satisfied.
// This gives us a representation to finish building the request that is lightweight and readable for testing.
func countCommitments(bwb []blocks.BlockWithROBlobs, retentionStart primitives.Slot) commitmentCountList {
// `bwb` must be sorted by slot.
func countCommitments(bwb []blocks.BlockWithROSidecars, retentionStart primitives.Slot) commitmentCountList {
if len(bwb) == 0 {
return nil
}
@@ -485,7 +577,9 @@ func (r *blobRange) Request() *p2ppb.BlobSidecarsByRangeRequest {
var errBlobVerification = errors.New("peer unable to serve aligned BlobSidecarsByRange and BeaconBlockSidecarsByRange responses")
var errMissingBlobsForBlockCommitments = errors.Wrap(errBlobVerification, "blobs unavailable for processing block with kzg commitments")
func verifyAndPopulateBlobs(bwb []blocks.BlockWithROBlobs, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) ([]blocks.BlockWithROBlobs, error) {
// verifyAndPopulateBlobs mutates the input `bwb` argument by adding verified blobs.
// This function mutates the input `bwb` argument.
func verifyAndPopulateBlobs(bwb []blocks.BlockWithROSidecars, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) error {
blobsByRoot := make(map[[32]byte][]blocks.ROBlob)
for i := range blobs {
if blobs[i].Slot() < req.StartSlot {
@@ -495,46 +589,53 @@ func verifyAndPopulateBlobs(bwb []blocks.BlockWithROBlobs, blobs []blocks.ROBlob
blobsByRoot[br] = append(blobsByRoot[br], blobs[i])
}
for i := range bwb {
bwi, err := populateBlock(bwb[i], blobsByRoot[bwb[i].Block.Root()], req, bss)
err := populateBlock(&bwb[i], blobsByRoot[bwb[i].Block.Root()], req, bss)
if err != nil {
if errors.Is(err, errDidntPopulate) {
continue
}
return bwb, err
return err
}
bwb[i] = bwi
}
return bwb, nil
return nil
}
var errDidntPopulate = errors.New("skipping population of block")
func populateBlock(bw blocks.BlockWithROBlobs, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) (blocks.BlockWithROBlobs, error) {
// populateBlock verifies and populates blobs for a block.
// This function mutates the input `bw` argument.
func populateBlock(bw *blocks.BlockWithROSidecars, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) error {
blk := bw.Block
if blk.Version() < version.Deneb || blk.Block().Slot() < req.StartSlot {
return bw, errDidntPopulate
return errDidntPopulate
}
commits, err := blk.Block().Body().BlobKzgCommitments()
if err != nil {
return bw, errDidntPopulate
return errDidntPopulate
}
if len(commits) == 0 {
return bw, errDidntPopulate
return errDidntPopulate
}
// Drop blobs on the floor if we already have them.
if bss != nil && bss.Summary(blk.Root()).AllAvailable(len(commits)) {
return bw, errDidntPopulate
return errDidntPopulate
}
if len(commits) != len(blobs) {
return bw, missingCommitError(blk.Root(), blk.Block().Slot(), commits)
return missingCommitError(blk.Root(), blk.Block().Slot(), commits)
}
for ci := range commits {
if err := verify.BlobAlignsWithBlock(blobs[ci], blk); err != nil {
return bw, err
return err
}
}
bw.Blobs = blobs
return bw, nil
return nil
}
func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) error {
@@ -547,29 +648,38 @@ func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) e
}
// fetchBlobsFromPeer fetches blob sidecars from a single randomly selected peer.
func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.BlockWithROBlobs, pid peer.ID, peers []peer.ID) (peer.ID, []blocks.BlockWithROBlobs, error) {
// This function mutates the input `bwb` argument.
// `pid` is the initial peer to request blobs from (usually the peer from which the block originated),
// `peers` is a list of peers to use for the request if `pid` fails.
// `bwb` must be sorted by slot.
// It returns the peer ID from which blobs were fetched.
func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.BlockWithROSidecars, pid peer.ID, peers []peer.ID) (peer.ID, error) {
if len(bwb) == 0 {
return "", nil
}
ctx, span := trace.StartSpan(ctx, "initialsync.fetchBlobsFromPeer")
defer span.End()
if slots.ToEpoch(f.clock.CurrentSlot()) < params.BeaconConfig().DenebForkEpoch {
return "", bwb, nil
return "", nil
}
blobWindowStart, err := prysmsync.BlobRPCMinValidSlot(f.clock.CurrentSlot())
if err != nil {
return "", nil, err
return "", err
}
// Construct request message based on observed interval of blocks in need of blobs.
req := countCommitments(bwb, blobWindowStart).blobRange(f.bs).Request()
if req == nil {
return "", bwb, nil
return "", nil
}
peers = f.filterPeers(ctx, peers, peersPercentagePerRequest)
// We dial the initial peer first to ensure that we get the desired set of blobs.
wantedPeers := append([]peer.ID{pid}, peers...)
bestPeers := f.hasSufficientBandwidth(wantedPeers, req.Count)
peers = append([]peer.ID{pid}, peers...)
peers = f.hasSufficientBandwidth(peers, req.Count)
// We append the best peers to the front so that higher capacity
// peers are dialed first. If all of them fail, we fall back to the
// initial peer we wanted to request blobs from.
peers = append(bestPeers, pid)
peers = append(peers, pid)
for i := 0; i < len(peers); i++ {
p := peers[i]
blobs, err := f.requestBlobs(ctx, req, p)
@@ -578,14 +688,24 @@ func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.Blo
continue
}
f.p2p.Peers().Scorers().BlockProviderScorer().Touch(p)
robs, err := verifyAndPopulateBlobs(bwb, blobs, req, f.bs)
if err != nil {
if err := verifyAndPopulateBlobs(bwb, blobs, req, f.bs); err != nil {
log.WithField("peer", p).WithError(err).Debug("Invalid BeaconBlobsByRange response")
continue
}
return p, robs, err
return p, err
}
return "", nil, errNoPeersAvailable
return "", errNoPeersAvailable
}
// sortedSliceFromMap returns a sorted slice of keys from a map.
func sortedSliceFromMap(m map[uint64]bool) []uint64 {
result := make([]uint64, 0, len(m))
for k := range m {
result = append(result, k)
}
slices.Sort(result)
return result
}
// requestBlocks is a wrapper for handling BeaconBlocksByRangeRequest requests/streams.
@@ -642,6 +762,7 @@ func (f *blocksFetcher) requestBlobs(ctx context.Context, req *p2ppb.BlobSidecar
}
f.rateLimiter.Add(pid.String(), int64(req.Count))
l.Unlock()
return prysmsync.SendBlobsByRangeRequest(ctx, f.clock, f.p2p, pid, f.ctxMap, req)
}
@@ -699,13 +820,17 @@ func (f *blocksFetcher) waitForBandwidth(pid peer.ID, count uint64) error {
}
func (f *blocksFetcher) hasSufficientBandwidth(peers []peer.ID, count uint64) []peer.ID {
filteredPeers := []peer.ID{}
for _, p := range peers {
if uint64(f.rateLimiter.Remaining(p.String())) < count {
filteredPeers := make([]peer.ID, 0, len(peers))
for _, peer := range peers {
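// Clamp negative remaining values to zero so an over-quota peer is not treated as
// having a huge remaining capacity after the unsigned conversion.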
remaining := uint64(0)
if remainingInt := f.rateLimiter.Remaining(peer.String()); remainingInt > 0 {
remaining = uint64(remainingInt)
}
if remaining < count {
continue
}
copiedP := p
filteredPeers = append(filteredPeers, copiedP)
filteredPeers = append(filteredPeers, peer)
}
return filteredPeers
}
@@ -745,3 +870,23 @@ func dedupPeers(peers []peer.ID) []peer.ID {
}
return newPeerList
}
// findFirstFuluIndex returns the index of the first block with a version >= Fulu.
// It returns an error if the blocks are not sorted by version with respect to Fulu
// (i.e. a pre-Fulu block appears after a Fulu block).
func findFirstFuluIndex(bwScs []blocks.BlockWithROSidecars) (int, error) {
firstFuluIndex := len(bwScs)
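// Default to len(bwScs): if no Fulu block is present, every block is pre-Fulu.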
for i, bwSc := range bwScs {
blockVersion := bwSc.Block.Version()
if blockVersion >= version.Fulu && firstFuluIndex > i {
firstFuluIndex = i
continue
}
if blockVersion < version.Fulu && firstFuluIndex <= i {
return 0, errors.New("blocks are not sorted by version")
}
}
return firstFuluIndex, nil
}

@@ -12,11 +12,12 @@ import (
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2pm "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2pt "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
beaconsync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -266,7 +267,7 @@ func TestBlocksFetcher_RoundRobin(t *testing.T) {
beaconDB := dbtest.SetupDB(t)
p := p2pt.NewTestP2P(t)
p := p2ptest.NewTestP2P(t)
connectPeers(t, p, tt.peers, p.Peers())
cache.RLock()
genesisRoot := cache.rootCache[0]
@@ -307,9 +308,9 @@ func TestBlocksFetcher_RoundRobin(t *testing.T) {
fetcher.stop()
}()
processFetchedBlocks := func() ([]blocks.BlockWithROBlobs, error) {
processFetchedBlocks := func() ([]blocks.BlockWithROSidecars, error) {
defer cancel()
var unionRespBlocks []blocks.BlockWithROBlobs
var unionRespBlocks []blocks.BlockWithROSidecars
for {
select {
@@ -398,6 +399,7 @@ func TestBlocksFetcher_scheduleRequest(t *testing.T) {
fetcher.scheduleRequest(t.Context(), 1, blockBatchLimit))
})
}
func TestBlocksFetcher_handleRequest(t *testing.T) {
blockBatchLimit := flags.Get().BlockBatchLimit
chainConfig := struct {
@@ -455,7 +457,7 @@ func TestBlocksFetcher_handleRequest(t *testing.T) {
}
}()
var bwb []blocks.BlockWithROBlobs
var bwb []blocks.BlockWithROSidecars
select {
case <-ctx.Done():
t.Error(ctx.Err())
@@ -531,9 +533,9 @@ func TestBlocksFetcher_requestBeaconBlocksByRange(t *testing.T) {
}
func TestBlocksFetcher_RequestBlocksRateLimitingLocks(t *testing.T) {
p1 := p2pt.NewTestP2P(t)
p2 := p2pt.NewTestP2P(t)
p3 := p2pt.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p3 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
p1.Connect(p3)
require.Equal(t, 2, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
@@ -543,7 +545,7 @@ func TestBlocksFetcher_RequestBlocksRateLimitingLocks(t *testing.T) {
Count: 64,
}
topic := p2pm.RPCBlocksByRangeTopicV1
topic := p2p.RPCBlocksByRangeTopicV1
protocol := libp2pcore.ProtocolID(topic + p2.Encoding().ProtocolSuffix())
streamHandlerFn := func(stream network.Stream) {
assert.NoError(t, stream.Close())
@@ -602,15 +604,15 @@ func TestBlocksFetcher_RequestBlocksRateLimitingLocks(t *testing.T) {
}
func TestBlocksFetcher_WaitForBandwidth(t *testing.T) {
p1 := p2pt.NewTestP2P(t)
p2 := p2pt.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
req := &ethpb.BeaconBlocksByRangeRequest{
Count: 64,
}
topic := p2pm.RPCBlocksByRangeTopicV1
topic := p2p.RPCBlocksByRangeTopicV1
protocol := libp2pcore.ProtocolID(topic + p2.Encoding().ProtocolSuffix())
streamHandlerFn := func(stream network.Stream) {
assert.NoError(t, stream.Close())
@@ -638,7 +640,7 @@ func TestBlocksFetcher_WaitForBandwidth(t *testing.T) {
}
func TestBlocksFetcher_requestBlocksFromPeerReturningInvalidBlocks(t *testing.T) {
p1 := p2pt.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
tests := []struct {
name string
req *ethpb.BeaconBlocksByRangeRequest
@@ -883,7 +885,7 @@ func TestBlocksFetcher_requestBlocksFromPeerReturningInvalidBlocks(t *testing.T)
},
}
topic := p2pm.RPCBlocksByRangeTopicV1
topic := p2p.RPCBlocksByRangeTopicV1
protocol := libp2pcore.ProtocolID(topic + p1.Encoding().ProtocolSuffix())
ctx, cancel := context.WithCancel(t.Context())
@@ -893,7 +895,7 @@ func TestBlocksFetcher_requestBlocksFromPeerReturningInvalidBlocks(t *testing.T)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
p2 := p2pt.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
p2.BHost.SetStreamHandler(protocol, tt.handlerGenFn(tt.req))
@@ -993,7 +995,7 @@ func TestBlobRangeForBlocks(t *testing.T) {
func TestBlobRequest(t *testing.T) {
var nilReq *ethpb.BlobSidecarsByRangeRequest
// no blocks
req := countCommitments([]blocks.BlockWithROBlobs{}, 0).blobRange(nil).Request()
req := countCommitments([]blocks.BlockWithROSidecars{}, 0).blobRange(nil).Request()
require.Equal(t, nilReq, req)
blks, _ := util.ExtendBlocksPlusBlobs(t, []blocks.ROBlock{}, 10)
sbbs := make([]interfaces.ReadOnlySignedBeaconBlock, len(blks))
@@ -1026,22 +1028,16 @@ func TestBlobRequest(t *testing.T) {
}
func TestCountCommitments(t *testing.T) {
// no blocks
// blocks before retention start filtered
// blocks without commitments filtered
// pre-deneb filtered
// variety of commitment counts are accurate, from 1 to max
type testcase struct {
name string
bwb func(t *testing.T, c testcase) []blocks.BlockWithROBlobs
numBlocks int
retStart primitives.Slot
resCount int
name string
bwb func(t *testing.T, c testcase) []blocks.BlockWithROSidecars
retStart primitives.Slot
resCount int
}
cases := []testcase{
{
name: "nil blocks is safe",
bwb: func(t *testing.T, c testcase) []blocks.BlockWithROBlobs {
bwb: func(t *testing.T, c testcase) []blocks.BlockWithROSidecars {
return nil
},
retStart: 0,
@@ -1179,7 +1175,7 @@ func TestCommitmentCountList(t *testing.T) {
}
}
func testSequenceBlockWithBlob(t *testing.T, nblocks int) ([]blocks.BlockWithROBlobs, []blocks.ROBlob) {
func testSequenceBlockWithBlob(t *testing.T, nblocks int) ([]blocks.BlockWithROSidecars, []blocks.ROBlob) {
blks, blobs := util.ExtendBlocksPlusBlobs(t, []blocks.ROBlock{}, nblocks)
sbbs := make([]interfaces.ReadOnlySignedBeaconBlock, len(blks))
for i := range blks {
@@ -1190,7 +1186,7 @@ func testSequenceBlockWithBlob(t *testing.T, nblocks int) ([]blocks.BlockWithROB
return bwb, blobs
}
func testReqFromResp(bwb []blocks.BlockWithROBlobs) *ethpb.BlobSidecarsByRangeRequest {
func testReqFromResp(bwb []blocks.BlockWithROSidecars) *ethpb.BlobSidecarsByRangeRequest {
return &ethpb.BlobSidecarsByRangeRequest{
StartSlot: bwb[0].Block.Block().Slot(),
Count: uint64(bwb[len(bwb)-1].Block.Block().Slot()-bwb[0].Block.Block().Slot()) + 1,
@@ -1207,7 +1203,7 @@ func TestVerifyAndPopulateBlobs(t *testing.T) {
}
require.Equal(t, len(blobs), len(expectedCommits))
bwb, err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil)
err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil)
require.NoError(t, err)
for _, bw := range bwb {
commits, err := bw.Block.Block().Body().BlobKzgCommitments()
@@ -1228,7 +1224,7 @@ func TestVerifyAndPopulateBlobs(t *testing.T) {
})
t.Run("missing blobs", func(t *testing.T) {
bwb, blobs := testSequenceBlockWithBlob(t, 10)
_, err := verifyAndPopulateBlobs(bwb, blobs[1:], testReqFromResp(bwb), nil)
err := verifyAndPopulateBlobs(bwb, blobs[1:], testReqFromResp(bwb), nil)
require.ErrorIs(t, err, errMissingBlobsForBlockCommitments)
})
t.Run("no blobs for last block", func(t *testing.T) {
@@ -1240,7 +1236,7 @@ func TestVerifyAndPopulateBlobs(t *testing.T) {
blobs = blobs[0 : len(blobs)-len(cmts)]
lastBlk, _ = util.GenerateTestDenebBlockWithSidecar(t, lastBlk.Block().ParentRoot(), lastBlk.Block().Slot(), 0)
bwb[lastIdx].Block = lastBlk
_, err = verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil)
err = verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil)
require.NoError(t, err)
})
t.Run("blobs not copied if all locally available", func(t *testing.T) {
@@ -1254,7 +1250,7 @@ func TestVerifyAndPopulateBlobs(t *testing.T) {
r7: {0, 1, 2, 3, 4, 5},
}
bss := filesystem.NewMockBlobStorageSummarizer(t, onDisk)
bwb, err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), bss)
err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), bss)
require.NoError(t, err)
require.Equal(t, 6, len(bwb[i1].Blobs))
require.Equal(t, 0, len(bwb[i7].Blobs))
@@ -1302,3 +1298,203 @@ func TestBlockFetcher_HasSufficientBandwidth(t *testing.T) {
}
assert.Equal(t, 2, len(receivedPeers))
}
func TestSortedSliceFromMap(t *testing.T) {
m := map[uint64]bool{1: true, 3: true, 2: true, 4: true}
expected := []uint64{1, 2, 3, 4}
actual := sortedSliceFromMap(m)
require.DeepSSZEqual(t, expected, actual)
}
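sortedSliceFromMap itself is not shown in this diff; a minimal sketch consistent with the test above (the implementation here is an assumption, not the actual helper) might look like:

package main

import (
	"fmt"
	"sort"
)

// sortedSliceFromMap returns the keys of m in ascending order.
func sortedSliceFromMap(m map[uint64]bool) []uint64 {
	out := make([]uint64, 0, len(m))
	for k := range m {
		out = append(out, k)
	}
	sort.Slice(out, func(i, j int) bool { return out[i] < out[j] })
	return out
}

func main() {
	fmt.Println(sortedSliceFromMap(map[uint64]bool{1: true, 3: true, 2: true, 4: true})) // [1 2 3 4]
}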
func TestFetchSidecars(t *testing.T) {
ctx := t.Context()
t.Run("No blocks", func(t *testing.T) {
fetcher := new(blocksFetcher)
pid, err := fetcher.fetchSidecars(ctx, "", nil, []blocks.BlockWithROSidecars{})
assert.NoError(t, err)
assert.Equal(t, peer.ID(""), pid)
})
t.Run("Nominal", func(t *testing.T) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
samplesPerSlot := beaconConfig.SamplesPerSlot
// Define "now" to be one epoch after genesis time + retention period.
genesisTime := time.Date(2025, time.August, 10, 0, 0, 0, 0, time.UTC)
secondsPerSlot := beaconConfig.SecondsPerSlot
slotsPerEpoch := beaconConfig.SlotsPerEpoch
secondsPerEpoch := uint64(slotsPerEpoch.Mul(secondsPerSlot))
retentionEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
nowWrtGenesisSecs := retentionEpochs.Add(1).Mul(secondsPerEpoch)
now := genesisTime.Add(time.Duration(nowWrtGenesisSecs) * time.Second)
genesisValidatorRoot := [fieldparams.RootLength]byte{}
nower := func() time.Time { return now }
clock := startup.NewClock(genesisTime, genesisValidatorRoot, startup.WithNower(nower))
// Define a Deneb block with blobs out of retention period.
denebBlock := util.NewBeaconBlockDeneb()
denebBlock.Block.Slot = 0 // Genesis slot, out of retention period.
signedDenebBlock, err := blocks.NewSignedBeaconBlock(denebBlock)
require.NoError(t, err)
roDenebBlock, err := blocks.NewROBlock(signedDenebBlock)
require.NoError(t, err)
// Define a Fulu block with blobs in the retention period.
fuluBlock := util.NewBeaconBlockFulu()
fuluBlock.Block.Slot = slotsPerEpoch // Within retention period.
fuluBlock.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, fieldparams.KzgCommitmentSize)} // Dummy commitment.
signedFuluBlock, err := blocks.NewSignedBeaconBlock(fuluBlock)
require.NoError(t, err)
roFuluBlock, err := blocks.NewROBlock(signedFuluBlock)
require.NoError(t, err)
bodyRoot, err := fuluBlock.Block.Body.HashTreeRoot()
require.NoError(t, err)
// Create and save data column sidecars for this fulu block in the database.
params := make([]util.DataColumnParam, 0, numberOfColumns)
for i := range numberOfColumns {
param := util.DataColumnParam{Index: i, Slot: slotsPerEpoch, BodyRoot: bodyRoot[:]}
params = append(params, param)
}
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, params)
// Create a data columns storage.
dir := t.TempDir()
dataColumnStorage, err := filesystem.NewDataColumnStorage(ctx, filesystem.WithDataColumnBasePath(dir))
require.NoError(t, err)
// Save the data column sidecars to the storage.
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
// Create a blocks fetcher.
fetcher := &blocksFetcher{
clock: clock,
p2p: p2ptest.NewTestP2P(t),
dcs: dataColumnStorage,
}
// Fetch sidecars.
blocksWithSidecars := []blocks.BlockWithROSidecars{
{Block: roDenebBlock},
{Block: roFuluBlock},
}
pid, err := fetcher.fetchSidecars(ctx, "", nil, blocksWithSidecars)
require.NoError(t, err)
require.Equal(t, peer.ID(""), pid)
// Verify that the blocks with sidecars were modified correctly.
require.Equal(t, 0, len(blocksWithSidecars[0].Blobs))
require.Equal(t, 0, len(blocksWithSidecars[0].Columns))
require.Equal(t, 0, len(blocksWithSidecars[1].Blobs))
// We don't check the content of the columns here. The extensive test is done
// in TestFetchDataColumnsSidecars.
require.Equal(t, samplesPerSlot, uint64(len(blocksWithSidecars[1].Columns)))
})
}
func TestFirstFuluIndex(t *testing.T) {
bellatrix := util.NewBeaconBlockBellatrix()
signedBellatrix, err := blocks.NewSignedBeaconBlock(bellatrix)
require.NoError(t, err)
roBellatrix, err := blocks.NewROBlock(signedBellatrix)
require.NoError(t, err)
capella := util.NewBeaconBlockCapella()
signedCapella, err := blocks.NewSignedBeaconBlock(capella)
require.NoError(t, err)
roCapella, err := blocks.NewROBlock(signedCapella)
require.NoError(t, err)
deneb := util.NewBeaconBlockDeneb()
signedDeneb, err := blocks.NewSignedBeaconBlock(deneb)
require.NoError(t, err)
roDeneb, err := blocks.NewROBlock(signedDeneb)
require.NoError(t, err)
fulu := util.NewBeaconBlockFulu()
signedFulu, err := blocks.NewSignedBeaconBlock(fulu)
require.NoError(t, err)
roFulu, err := blocks.NewROBlock(signedFulu)
require.NoError(t, err)
tests := []struct {
name string
setupBlocks func(t *testing.T) []blocks.BlockWithROSidecars
expectedIndex int
expectError bool
}{
{
name: "all blocks are pre-Fulu",
setupBlocks: func(t *testing.T) []blocks.BlockWithROSidecars {
return []blocks.BlockWithROSidecars{
{Block: roBellatrix},
{Block: roCapella},
{Block: roDeneb},
}
},
expectedIndex: 3, // Should be the length of the slice
expectError: false,
},
{
name: "all blocks are Fulu or later",
setupBlocks: func(t *testing.T) []blocks.BlockWithROSidecars {
return []blocks.BlockWithROSidecars{
{Block: roFulu},
{Block: roFulu},
}
},
expectedIndex: 0,
expectError: false,
},
{
name: "mixed blocks correctly sorted",
setupBlocks: func(t *testing.T) []blocks.BlockWithROSidecars {
return []blocks.BlockWithROSidecars{
{Block: roBellatrix},
{Block: roCapella},
{Block: roDeneb},
{Block: roFulu},
{Block: roFulu},
}
},
expectedIndex: 3, // Index where Fulu blocks start
expectError: false,
},
{
name: "mixed blocks incorrectly sorted",
setupBlocks: func(t *testing.T) []blocks.BlockWithROSidecars {
return []blocks.BlockWithROSidecars{
{Block: roBellatrix},
{Block: roCapella},
{Block: roFulu},
{Block: roDeneb},
{Block: roFulu},
}
},
expectedIndex: 0,
expectError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
blocks := tt.setupBlocks(t)
index, err := findFirstFuluIndex(blocks)
if tt.expectError {
require.NotNil(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tt.expectedIndex, index)
})
}
}


@@ -24,7 +24,7 @@ import (
type forkData struct {
blocksFrom peer.ID
blobsFrom peer.ID
bwb []blocks.BlockWithROBlobs
bwb []blocks.BlockWithROSidecars
}
// nonSkippedSlotAfter checks slots after the given one in an attempt to find a non-empty future slot.
@@ -275,16 +275,18 @@ func (f *blocksFetcher) findForkWithPeer(ctx context.Context, pid peer.ID, slot
"slot": block.Block().Slot(),
"root": fmt.Sprintf("%#x", parentRoot),
}).Debug("Block with unknown parent root has been found")
altBlocks, err := sortedBlockWithVerifiedBlobSlice(blocks[i-1:])
bwb, err := sortedBlockWithVerifiedBlobSlice(blocks[i-1:])
if err != nil {
return nil, errors.Wrap(err, "invalid blocks received in findForkWithPeer")
}
// We need to fetch the sidecars for the given alt-chain if any exist, so that we can try to verify and import
// the blocks.
bpid, bwb, err := f.fetchBlobsFromPeer(ctx, altBlocks, pid, []peer.ID{pid})
bpid, err := f.fetchSidecars(ctx, pid, []peer.ID{pid}, bwb)
if err != nil {
return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findForkWithPeer")
return nil, errors.Wrap(err, "fetch sidecars")
}
// The caller will use the BlockWithROSidecars values in bwb as the starting point for
// round-robin syncing the alternate chain.
return &forkData{blocksFrom: pid, blobsFrom: bpid, bwb: bwb}, nil
@@ -303,10 +305,9 @@ func (f *blocksFetcher) findAncestor(ctx context.Context, pid peer.ID, b interfa
if err != nil {
return nil, errors.Wrap(err, "received invalid blocks in findAncestor")
}
var bpid peer.ID
bpid, bwb, err = f.fetchBlobsFromPeer(ctx, bwb, pid, []peer.ID{pid})
bpid, err := f.fetchSidecars(ctx, pid, []peer.ID{pid}, bwb)
if err != nil {
return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findAncestor")
return nil, errors.Wrap(err, "fetch sidecars")
}
return &forkData{
blocksFrom: pid,
@@ -350,9 +351,12 @@ func (f *blocksFetcher) calculateHeadAndTargetEpochs() (headEpoch, targetEpoch p
cp := f.chain.FinalizedCheckpt()
headEpoch = cp.Epoch
targetEpoch, peers = f.p2p.Peers().BestFinalized(params.BeaconConfig().MaxPeersToSync, headEpoch)
} else {
headEpoch = slots.ToEpoch(f.chain.HeadSlot())
targetEpoch, peers = f.p2p.Peers().BestNonFinalized(flags.Get().MinimumSyncPeers, headEpoch)
return headEpoch, targetEpoch, peers
}
headEpoch = slots.ToEpoch(f.chain.HeadSlot())
targetEpoch, peers = f.p2p.Peers().BestNonFinalized(flags.Get().MinimumSyncPeers, headEpoch)
return headEpoch, targetEpoch, peers
}


@@ -72,6 +72,8 @@ type blocksQueueConfig struct {
db db.ReadOnlyDatabase
mode syncMode
bs filesystem.BlobStorageSummarizer
dcs filesystem.DataColumnStorageReader
cv verification.NewDataColumnsVerifier
}
// blocksQueue is a priority queue that serves as an intermediary between block fetchers (producers)
@@ -96,7 +98,7 @@ type blocksQueue struct {
type blocksQueueFetchedData struct {
blocksFrom peer.ID
blobsFrom peer.ID
bwb []blocks.BlockWithROBlobs
bwb []blocks.BlockWithROSidecars
}
// newBlocksQueue creates an initialized priority queue.
@@ -115,6 +117,8 @@ func newBlocksQueue(ctx context.Context, cfg *blocksQueueConfig) *blocksQueue {
db: cfg.db,
clock: cfg.clock,
bs: cfg.bs,
dcs: cfg.dcs,
cv: cfg.cv,
})
}
highestExpectedSlot := cfg.highestExpectedSlot


@@ -263,7 +263,7 @@ func TestBlocksQueue_Loop(t *testing.T) {
highestExpectedSlot: tt.highestExpectedSlot,
})
assert.NoError(t, queue.start())
processBlock := func(b blocks.BlockWithROBlobs) error {
processBlock := func(b blocks.BlockWithROSidecars) error {
block := b.Block
if !beaconDB.HasBlock(ctx, block.Block().ParentRoot()) {
return fmt.Errorf("%w: %#x", errParentDoesNotExist, block.Block().ParentRoot())
@@ -275,7 +275,7 @@ func TestBlocksQueue_Loop(t *testing.T) {
return mc.ReceiveBlock(ctx, block, root, nil)
}
var blocks []blocks.BlockWithROBlobs
var blocks []blocks.BlockWithROSidecars
for data := range queue.fetchedData {
for _, b := range data.bwb {
if err := processBlock(b); err != nil {
@@ -538,7 +538,7 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
require.NoError(t, err)
response := &fetchRequestResponse{
blocksFrom: "abc",
bwb: []blocks.BlockWithROBlobs{
bwb: []blocks.BlockWithROSidecars{
{Block: blocks.ROBlock{ReadOnlySignedBeaconBlock: wsb}},
{Block: blocks.ROBlock{ReadOnlySignedBeaconBlock: wsbCopy}},
},
@@ -640,7 +640,7 @@ func TestBlocksQueue_onReadyToSendEvent(t *testing.T) {
queue.smm.machines[256].fetched.blocksFrom = pidDataParsed
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
queue.smm.machines[256].fetched.bwb = []blocks.BlockWithROBlobs{
queue.smm.machines[256].fetched.bwb = []blocks.BlockWithROSidecars{
{Block: rwsb},
}
@@ -674,7 +674,7 @@ func TestBlocksQueue_onReadyToSendEvent(t *testing.T) {
queue.smm.machines[320].fetched.blocksFrom = pidDataParsed
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
queue.smm.machines[320].fetched.bwb = []blocks.BlockWithROBlobs{
queue.smm.machines[320].fetched.bwb = []blocks.BlockWithROSidecars{
{Block: rwsb},
}
@@ -705,7 +705,7 @@ func TestBlocksQueue_onReadyToSendEvent(t *testing.T) {
queue.smm.machines[320].fetched.blocksFrom = pidDataParsed
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
queue.smm.machines[320].fetched.bwb = []blocks.BlockWithROBlobs{
queue.smm.machines[320].fetched.bwb = []blocks.BlockWithROSidecars{
{Block: rwsb},
}


@@ -4,6 +4,7 @@ import (
"context"
"encoding/hex"
"fmt"
"sort"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
@@ -13,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/paulbellamy/ratecounter"
@@ -78,6 +80,8 @@ func (s *Service) startBlocksQueue(ctx context.Context, highestSlot primitives.S
highestExpectedSlot: highestSlot,
mode: mode,
bs: s.cfg.BlobStorage,
dcs: s.cfg.DataColumnStorage,
cv: s.newDataColumnsVerifier,
}
queue := newBlocksQueue(ctx, cfg)
if err := queue.start(); err != nil {
@@ -157,31 +161,82 @@ func (s *Service) processFetchedDataRegSync(ctx context.Context, data *blocksQue
log.WithError(err).Debug("Batch did not contain a valid sequence of unprocessed blocks")
return 0, err
}
if len(bwb) == 0 {
return 0, nil
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
batchFields := logrus.Fields{
"firstSlot": data.bwb[0].Block.Block().Slot(),
"firstUnprocessed": bwb[0].Block.Block().Slot(),
// Separate blocks with blobs from blocks with data columns.
firstDataColumnIndex := sort.Search(len(bwb), func(i int) bool {
return bwb[i].Block.Version() >= version.Fulu
})
blocksWithBlobs := bwb[:firstDataColumnIndex]
blocksWithDataColumns := bwb[firstDataColumnIndex:]
blobBatchVerifier := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
lazilyPersistentStoreBlobs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, blobBatchVerifier)
log := log.WithField("firstSlot", data.bwb[0].Block.Block().Slot())
logBlobs, logDataColumns := log, log
if len(blocksWithBlobs) > 0 {
logBlobs = logBlobs.WithField("firstUnprocessed", blocksWithBlobs[0].Block.Block().Slot())
}
for i, b := range bwb {
sidecars := blocks.NewSidecarsFromBlobSidecars(b.Blobs)
if err := avs.Persist(s.clock.CurrentSlot(), sidecars...); err != nil {
log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Batch failure due to BlobSidecar issues")
for i, b := range blocksWithBlobs {
if err := lazilyPersistentStoreBlobs.Persist(s.clock.CurrentSlot(), b.Blobs...); err != nil {
logBlobs.WithError(err).WithFields(syncFields(b.Block)).Warning("Batch failure due to BlobSidecar issues")
return uint64(i), err
}
if err := s.processBlock(ctx, s.genesisTime, b, s.cfg.Chain.ReceiveBlock, avs); err != nil {
if err := s.processBlock(ctx, s.genesisTime, b, s.cfg.Chain.ReceiveBlock, lazilyPersistentStoreBlobs); err != nil {
if errors.Is(err, errParentDoesNotExist) {
log.WithFields(batchFields).WithField("missingParent", fmt.Sprintf("%#x", b.Block.Block().ParentRoot())).
logBlobs.WithField("missingParent", fmt.Sprintf("%#x", b.Block.Block().ParentRoot())).
WithFields(syncFields(b.Block)).Debug("Could not process batch blocks due to missing parent")
} else {
log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Block processing failure")
logBlobs.WithError(err).WithFields(syncFields(b.Block)).Warn("Block processing failure")
}
return uint64(i), err
}
}
if len(blocksWithDataColumns) == 0 {
return uint64(len(bwb)), nil
}
// Save data column sidecars.
count := 0
for _, b := range blocksWithDataColumns {
count += len(b.Columns)
}
sidecarsToSave := make([]blocks.VerifiedRODataColumn, 0, count)
for _, blockWithDataColumns := range blocksWithDataColumns {
sidecarsToSave = append(sidecarsToSave, blockWithDataColumns.Columns...)
}
if err := s.cfg.DataColumnStorage.Save(sidecarsToSave); err != nil {
return 0, errors.Wrap(err, "save data column sidecars")
}
for i, b := range blocksWithDataColumns {
logDataColumns := logDataColumns.WithFields(syncFields(b.Block))
if err := s.processBlock(ctx, s.genesisTime, b, s.cfg.Chain.ReceiveBlock, nil); err != nil {
switch {
case errors.Is(err, errParentDoesNotExist):
logDataColumns.
WithField("missingParent", fmt.Sprintf("%#x", b.Block.Block().ParentRoot())).
Debug("Could not process batch blocks due to missing parent")
return uint64(i), err
default:
logDataColumns.WithError(err).Warning("Block processing failure")
return uint64(i), err
}
}
}
return uint64(len(bwb)), nil
}
@@ -193,12 +248,18 @@ func syncFields(b blocks.ROBlock) logrus.Fields {
}
// highestFinalizedEpoch returns the absolute highest finalized epoch of all connected peers.
// Note this can be lower than our finalized epoch if we have no peers or peers that are all behind us.
// It returns `0` if no peers are connected.
// Note this can be lower than our finalized epoch if our connected peers are all behind us.
func (s *Service) highestFinalizedEpoch() primitives.Epoch {
highest := primitives.Epoch(0)
for _, pid := range s.cfg.P2P.Peers().Connected() {
peerChainState, err := s.cfg.P2P.Peers().ChainState(pid)
if err == nil && peerChainState != nil && peerChainState.FinalizedEpoch > highest {
if err != nil || peerChainState == nil {
continue
}
if peerChainState.FinalizedEpoch > highest {
highest = peerChainState.FinalizedEpoch
}
}
@@ -250,7 +311,7 @@ func (s *Service) logBatchSyncStatus(firstBlk blocks.ROBlock, nBlocks int) {
func (s *Service) processBlock(
ctx context.Context,
genesis time.Time,
bwb blocks.BlockWithROBlobs,
bwb blocks.BlockWithROSidecars,
blockReceiver blockReceiverFn,
avs das.AvailabilityStore,
) error {
@@ -269,7 +330,7 @@ func (s *Service) processBlock(
type processedChecker func(context.Context, blocks.ROBlock) bool
func validUnprocessed(ctx context.Context, bwb []blocks.BlockWithROBlobs, headSlot primitives.Slot, isProc processedChecker) ([]blocks.BlockWithROBlobs, error) {
func validUnprocessed(ctx context.Context, bwb []blocks.BlockWithROSidecars, headSlot primitives.Slot, isProc processedChecker) ([]blocks.BlockWithROSidecars, error) {
// use a pointer to avoid confusing the zero-value with the case where the first element is processed.
var processed *int
for i := range bwb {
@@ -299,43 +360,100 @@ func validUnprocessed(ctx context.Context, bwb []blocks.BlockWithROBlobs, headSl
return bwb[nonProcessedIdx:], nil
}
func (s *Service) processBatchedBlocks(ctx context.Context, bwb []blocks.BlockWithROBlobs, bFunc batchBlockReceiverFn) (uint64, error) {
if len(bwb) == 0 {
func (s *Service) processBatchedBlocks(ctx context.Context, bwb []blocks.BlockWithROSidecars, bFunc batchBlockReceiverFn) (uint64, error) {
bwbCount := uint64(len(bwb))
if bwbCount == 0 {
return 0, errors.New("0 blocks provided into method")
}
headSlot := s.cfg.Chain.HeadSlot()
var err error
bwb, err = validUnprocessed(ctx, bwb, headSlot, s.isProcessedBlock)
bwb, err := validUnprocessed(ctx, bwb, headSlot, s.isProcessedBlock)
if err != nil {
return 0, err
}
if len(bwb) == 0 {
return 0, nil
}
first := bwb[0].Block
if !s.cfg.Chain.HasBlock(ctx, first.Block().ParentRoot()) {
firstBlock := bwb[0].Block
if !s.cfg.Chain.HasBlock(ctx, firstBlock.Block().ParentRoot()) {
return 0, fmt.Errorf("%w: %#x (in processBatchedBlocks, slot=%d)",
errParentDoesNotExist, first.Block().ParentRoot(), first.Block().Slot())
errParentDoesNotExist, firstBlock.Block().ParentRoot(), firstBlock.Block().Slot())
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
s.logBatchSyncStatus(first, len(bwb))
for _, bb := range bwb {
if len(bb.Blobs) == 0 {
firstFuluIndex, err := findFirstFuluIndex(bwb)
if err != nil {
return 0, errors.Wrap(err, "finding first Fulu index")
}
blocksWithBlobs := bwb[:firstFuluIndex]
blocksWithDataColumns := bwb[firstFuluIndex:]
if err := s.processBlocksWithBlobs(ctx, blocksWithBlobs, bFunc, firstBlock); err != nil {
return 0, errors.Wrap(err, "processing blocks with blobs")
}
if err := s.processBlocksWithDataColumns(ctx, blocksWithDataColumns, bFunc, firstBlock); err != nil {
return 0, errors.Wrap(err, "processing blocks with data columns")
}
return bwbCount, nil
}
func (s *Service) processBlocksWithBlobs(ctx context.Context, bwbs []blocks.BlockWithROSidecars, bFunc batchBlockReceiverFn, firstBlock blocks.ROBlock) error {
bwbCount := len(bwbs)
if bwbCount == 0 {
return nil
}
batchVerifier := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
persistentStore := das.NewLazilyPersistentStore(s.cfg.BlobStorage, batchVerifier)
s.logBatchSyncStatus(firstBlock, bwbCount)
for _, bwb := range bwbs {
if len(bwb.Blobs) == 0 {
continue
}
sidecars := blocks.NewSidecarsFromBlobSidecars(bb.Blobs)
if err := avs.Persist(s.clock.CurrentSlot(), sidecars...); err != nil {
return 0, err
if err := persistentStore.Persist(s.clock.CurrentSlot(), bwb.Blobs...); err != nil {
return errors.Wrap(err, "persisting blobs")
}
}
robs := blocks.BlockWithROBlobsSlice(bwb).ROBlocks()
return uint64(len(bwb)), bFunc(ctx, robs, avs)
robs := blocks.BlockWithROBlobsSlice(bwbs).ROBlocks()
if err := bFunc(ctx, robs, persistentStore); err != nil {
return errors.Wrap(err, "processing blocks with blobs")
}
return nil
}
func (s *Service) processBlocksWithDataColumns(ctx context.Context, bwbs []blocks.BlockWithROSidecars, bFunc batchBlockReceiverFn, firstBlock blocks.ROBlock) error {
bwbCount := len(bwbs)
if bwbCount == 0 {
return nil
}
s.logBatchSyncStatus(firstBlock, bwbCount)
// Save data column sidecars.
count := 0
for _, bwb := range bwbs {
count += len(bwb.Columns)
}
sidecarsToSave := make([]blocks.VerifiedRODataColumn, 0, count)
for _, blockWithDataColumns := range bwbs {
sidecarsToSave = append(sidecarsToSave, blockWithDataColumns.Columns...)
}
if err := s.cfg.DataColumnStorage.Save(sidecarsToSave); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
robs := blocks.BlockWithROBlobsSlice(bwbs).ROBlocks()
if err := bFunc(ctx, robs, nil); err != nil {
return errors.Wrap(err, "process post-Fulu blocks")
}
return nil
}
func isPunishableError(err error) bool {


@@ -8,9 +8,11 @@ import (
"github.com/OffchainLabs/prysm/v6/async/abool"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/das"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2pt "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -373,7 +375,7 @@ func TestService_processBlock(t *testing.T) {
require.NoError(t, err)
rowsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
err = s.processBlock(ctx, genesis, blocks.BlockWithROBlobs{Block: rowsb}, func(
err = s.processBlock(ctx, genesis, blocks.BlockWithROSidecars{Block: rowsb}, func(
ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, _ das.AvailabilityStore) error {
assert.NoError(t, s.cfg.Chain.ReceiveBlock(ctx, block, blockRoot, nil))
return nil
@@ -385,7 +387,7 @@ func TestService_processBlock(t *testing.T) {
require.NoError(t, err)
rowsb, err = blocks.NewROBlock(wsb)
require.NoError(t, err)
err = s.processBlock(ctx, genesis, blocks.BlockWithROBlobs{Block: rowsb}, func(
err = s.processBlock(ctx, genesis, blocks.BlockWithROSidecars{Block: rowsb}, func(
ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, _ das.AvailabilityStore) error {
return nil
}, nil)
@@ -396,7 +398,7 @@ func TestService_processBlock(t *testing.T) {
require.NoError(t, err)
rowsb, err = blocks.NewROBlock(wsb)
require.NoError(t, err)
err = s.processBlock(ctx, genesis, blocks.BlockWithROBlobs{Block: rowsb}, func(
err = s.processBlock(ctx, genesis, blocks.BlockWithROSidecars{Block: rowsb}, func(
ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, _ das.AvailabilityStore) error {
assert.NoError(t, s.cfg.Chain.ReceiveBlock(ctx, block, blockRoot, nil))
return nil
@@ -432,7 +434,7 @@ func TestService_processBlockBatch(t *testing.T) {
s.genesisTime = genesis
t.Run("process non-linear batch", func(t *testing.T) {
var batch []blocks.BlockWithROBlobs
var batch []blocks.BlockWithROSidecars
currBlockRoot := genesisBlkRoot
for i := primitives.Slot(1); i < 10; i++ {
parentRoot := currBlockRoot
@@ -446,11 +448,11 @@ func TestService_processBlockBatch(t *testing.T) {
require.NoError(t, err)
rowsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
batch = append(batch, blocks.BlockWithROBlobs{Block: rowsb})
batch = append(batch, blocks.BlockWithROSidecars{Block: rowsb})
currBlockRoot = blk1Root
}
var batch2 []blocks.BlockWithROBlobs
var batch2 []blocks.BlockWithROSidecars
for i := primitives.Slot(10); i < 20; i++ {
parentRoot := currBlockRoot
blk1 := util.NewBeaconBlock()
@@ -463,7 +465,7 @@ func TestService_processBlockBatch(t *testing.T) {
require.NoError(t, err)
rowsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
batch2 = append(batch2, blocks.BlockWithROBlobs{Block: rowsb})
batch2 = append(batch2, blocks.BlockWithROSidecars{Block: rowsb})
currBlockRoot = blk1Root
}
@@ -485,7 +487,7 @@ func TestService_processBlockBatch(t *testing.T) {
assert.ErrorContains(t, "block is already processed", err)
require.Equal(t, uint64(0), count)
var badBatch2 []blocks.BlockWithROBlobs
var badBatch2 []blocks.BlockWithROSidecars
for i, b := range batch2 {
// create a non-linear batch
if i%3 == 0 && i != 0 {
@@ -685,7 +687,7 @@ func TestService_ValidUnprocessed(t *testing.T) {
require.NoError(t, err)
util.SaveBlock(t, t.Context(), beaconDB, genesisBlk)
var batch []blocks.BlockWithROBlobs
var batch []blocks.BlockWithROSidecars
currBlockRoot := genesisBlkRoot
for i := primitives.Slot(1); i < 10; i++ {
parentRoot := currBlockRoot
@@ -699,7 +701,7 @@ func TestService_ValidUnprocessed(t *testing.T) {
require.NoError(t, err)
rowsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
batch = append(batch, blocks.BlockWithROBlobs{Block: rowsb})
batch = append(batch, blocks.BlockWithROSidecars{Block: rowsb})
currBlockRoot = blk1Root
}
@@ -712,3 +714,155 @@ func TestService_ValidUnprocessed(t *testing.T) {
// Ensure that the unprocessed batch is returned correctly.
assert.Equal(t, len(retBlocks), len(batch)-2)
}
func TestService_ProcessFetchedDataRegSync(t *testing.T) {
ctx := t.Context()
// Create a data columns storage.
dir := t.TempDir()
dataColumnStorage, err := filesystem.NewDataColumnStorage(ctx, filesystem.WithDataColumnBasePath(dir))
require.NoError(t, err)
// Create Fulu blocks.
fuluBlock1 := util.NewBeaconBlockFulu()
signedFuluBlock1, err := blocks.NewSignedBeaconBlock(fuluBlock1)
require.NoError(t, err)
roFuluBlock1, err := blocks.NewROBlock(signedFuluBlock1)
require.NoError(t, err)
block1Root := roFuluBlock1.Root()
fuluBlock2 := util.NewBeaconBlockFulu()
fuluBlock2.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, fieldparams.KzgCommitmentSize)} // Dummy commitment.
fuluBlock2.Block.Slot = 1
fuluBlock2.Block.ParentRoot = block1Root[:]
signedFuluBlock2, err := blocks.NewSignedBeaconBlock(fuluBlock2)
require.NoError(t, err)
roFuluBlock2, err := blocks.NewROBlock(signedFuluBlock2)
require.NoError(t, err)
block2Root := roFuluBlock2.Root()
parentRoot2 := roFuluBlock2.Block().ParentRoot()
bodyRoot2, err := roFuluBlock2.Block().Body().HashTreeRoot()
require.NoError(t, err)
// Create a mock chain service.
const validatorCount = uint64(64)
state, _ := util.DeterministicGenesisState(t, validatorCount)
chain := &mock.ChainService{
FinalizedCheckPoint: &eth.Checkpoint{},
DB: dbtest.SetupDB(t),
State: state,
Root: block1Root[:],
}
// Create a new service instance.
service := &Service{
cfg: &Config{
Chain: chain,
DataColumnStorage: dataColumnStorage,
},
counter: ratecounter.NewRateCounter(counterSeconds * time.Second),
}
// Save the parent block in the database.
err = chain.DB.SaveBlock(ctx, roFuluBlock1)
require.NoError(t, err)
// Create data column sidecars.
const count = uint64(3)
params := make([]util.DataColumnParam, 0, count)
for i := range count {
param := util.DataColumnParam{Index: i, BodyRoot: bodyRoot2[:], ParentRoot: parentRoot2[:], Slot: roFuluBlock2.Block().Slot()}
params = append(params, param)
}
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, params)
blocksWithSidecars := []blocks.BlockWithROSidecars{
{Block: roFuluBlock2, Columns: verifiedRoDataColumnSidecars},
}
data := &blocksQueueFetchedData{
bwb: blocksWithSidecars,
}
actual, err := service.processFetchedDataRegSync(ctx, data)
require.NoError(t, err)
require.Equal(t, uint64(1), actual)
// Check block and data column sidecars were saved correctly.
require.Equal(t, true, chain.DB.HasBlock(ctx, block2Root))
summary := dataColumnStorage.Summary(block2Root)
for i := range count {
require.Equal(t, true, summary.HasIndex(i))
}
}
func TestService_processBlocksWithDataColumns(t *testing.T) {
ctx := t.Context()
t.Run("no blocks", func(t *testing.T) {
fuluBlock := util.NewBeaconBlockFulu()
signedFuluBlock, err := blocks.NewSignedBeaconBlock(fuluBlock)
require.NoError(t, err)
roFuluBlock, err := blocks.NewROBlock(signedFuluBlock)
require.NoError(t, err)
service := new(Service)
err = service.processBlocksWithDataColumns(ctx, nil, nil, roFuluBlock)
require.NoError(t, err)
})
t.Run("nominal", func(t *testing.T) {
fuluBlock := util.NewBeaconBlockFulu()
fuluBlock.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, fieldparams.KzgCommitmentSize)} // Dummy commitment.
signedFuluBlock, err := blocks.NewSignedBeaconBlock(fuluBlock)
require.NoError(t, err)
roFuluBlock, err := blocks.NewROBlock(signedFuluBlock)
require.NoError(t, err)
bodyRoot, err := roFuluBlock.Block().Body().HashTreeRoot()
require.NoError(t, err)
// Create data column sidecars.
const count = uint64(3)
params := make([]util.DataColumnParam, 0, count)
for i := range count {
param := util.DataColumnParam{Index: i, BodyRoot: bodyRoot[:]}
params = append(params, param)
}
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, params)
blocksWithSidecars := []blocks.BlockWithROSidecars{
{Block: roFuluBlock, Columns: verifiedRoDataColumnSidecars},
}
// Create a data columns storage.
dir := t.TempDir()
dataColumnStorage, err := filesystem.NewDataColumnStorage(ctx, filesystem.WithDataColumnBasePath(dir))
require.NoError(t, err)
// Create a service.
service := &Service{
cfg: &Config{
P2P: p2pt.NewTestP2P(t),
DataColumnStorage: dataColumnStorage,
},
counter: ratecounter.NewRateCounter(counterSeconds * time.Second),
}
receiverFunc := func(ctx context.Context, blks []blocks.ROBlock, avs das.AvailabilityStore) error {
require.Equal(t, 1, len(blks))
return nil
}
err = service.processBlocksWithDataColumns(ctx, blocksWithSidecars, receiverFunc, roFuluBlock)
require.NoError(t, err)
// Verify that the data columns were saved correctly.
summary := dataColumnStorage.Summary(roFuluBlock.Root())
for i := range count {
require.Equal(t, true, summary.HasIndex(i))
}
})
}


@@ -12,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
blockfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/block"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/das"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
@@ -53,22 +54,24 @@ type Config struct {
ClockWaiter startup.ClockWaiter
InitialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
DataColumnStorage *filesystem.DataColumnStorage
}
// Service service.
type Service struct {
cfg *Config
ctx context.Context
cancel context.CancelFunc
synced *abool.AtomicBool
chainStarted *abool.AtomicBool
counter *ratecounter.RateCounter
genesisChan chan time.Time
clock *startup.Clock
verifierWaiter *verification.InitializerWaiter
newBlobVerifier verification.NewBlobVerifier
ctxMap sync.ContextByteVersions
genesisTime time.Time
cfg *Config
ctx context.Context
cancel context.CancelFunc
synced *abool.AtomicBool
chainStarted *abool.AtomicBool
counter *ratecounter.RateCounter
genesisChan chan time.Time
clock *startup.Clock
verifierWaiter *verification.InitializerWaiter
newBlobVerifier verification.NewBlobVerifier
newDataColumnsVerifier verification.NewDataColumnsVerifier
ctxMap sync.ContextByteVersions
genesisTime time.Time
}
// Option is a functional option for the initial-sync Service.
@@ -149,6 +152,7 @@ func (s *Service) Start() {
return
}
s.newBlobVerifier = newBlobVerifierFromInitializer(v)
s.newDataColumnsVerifier = newDataColumnsVerifierFromInitializer(v)
gt := clock.GenesisTime()
if gt.IsZero() {
@@ -175,19 +179,22 @@ func (s *Service) Start() {
}
s.chainStarted.Set()
log.Info("Starting initial chain sync...")
// Are we already in sync, or close to it?
if slots.ToEpoch(s.cfg.Chain.HeadSlot()) == slots.ToEpoch(currentSlot) {
log.Info("Already synced to the current chain head")
s.markSynced()
return
}
peers, err := s.waitForMinimumPeers()
if err != nil {
log.WithError(err).Error("Error waiting for minimum number of peers")
return
}
if err := s.fetchOriginBlobs(peers); err != nil {
log.WithError(err).Error("Failed to fetch missing blobs for checkpoint origin")
if err := s.fetchOriginSidecars(peers); err != nil {
log.WithError(err).Error("Error fetching origin sidecars")
return
}
if err := s.roundRobinSync(); err != nil {
@@ -200,6 +207,48 @@ func (s *Service) Start() {
s.markSynced()
}
// fetchOriginSidecars fetches the sidecars (blob or data column) needed for the checkpoint sync origin block, if it is within the data availability retention period.
func (s *Service) fetchOriginSidecars(peers []peer.ID) error {
blockRoot, err := s.cfg.DB.OriginCheckpointBlockRoot(s.ctx)
if errors.Is(err, db.ErrNotFoundOriginBlockRoot) {
return nil
}
block, err := s.cfg.DB.Block(s.ctx, blockRoot)
if err != nil {
return errors.Wrap(err, "block")
}
currentSlot, blockSlot := s.clock.CurrentSlot(), block.Block().Slot()
currentEpoch, blockEpoch := slots.ToEpoch(currentSlot), slots.ToEpoch(blockSlot)
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return nil
}
roBlock, err := blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return errors.Wrap(err, "new ro block with root")
}
blockVersion := roBlock.Version()
if blockVersion >= version.Fulu {
if err := s.fetchOriginColumns(peers, roBlock); err != nil {
return errors.Wrap(err, "fetch origin columns")
}
return nil
}
if blockVersion >= version.Deneb {
if err := s.fetchOriginBlobs(peers, roBlock); err != nil {
return errors.Wrap(err, "fetch origin blobs")
}
}
return nil
}
// Stop initial sync.
func (s *Service) Stop() error {
s.cancel()
@@ -304,23 +353,9 @@ func missingBlobRequest(blk blocks.ROBlock, store *filesystem.BlobStorage) (p2pt
return req, nil
}
func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
r, err := s.cfg.DB.OriginCheckpointBlockRoot(s.ctx)
if errors.Is(err, db.ErrNotFoundOriginBlockRoot) {
return nil
}
blk, err := s.cfg.DB.Block(s.ctx, r)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", r)).Error("Block for checkpoint sync origin root not found in db")
return err
}
if !params.WithinDAPeriod(slots.ToEpoch(blk.Block().Slot()), slots.ToEpoch(s.clock.CurrentSlot())) {
return nil
}
rob, err := blocks.NewROBlockWithRoot(blk, r)
if err != nil {
return err
}
func (s *Service) fetchOriginBlobs(pids []peer.ID, rob blocks.ROBlock) error {
r := rob.Root()
req, err := missingBlobRequest(rob, s.cfg.BlobStorage)
if err != nil {
return err
@@ -335,16 +370,17 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
if err != nil {
continue
}
if len(blobSidecars) != len(req) {
continue
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
current := s.clock.CurrentSlot()
sidecars := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
if err := avs.Persist(current, sidecars...); err != nil {
if err := avs.Persist(current, blobSidecars...); err != nil {
return err
}
if err := avs.IsDataAvailable(s.ctx, current, rob); err != nil {
log.WithField("root", fmt.Sprintf("%#x", r)).WithField("peerID", pids[i]).Warn("Blobs from peer for origin block were unusable")
continue
@@ -355,6 +391,67 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
return fmt.Errorf("no connected peer able to provide blobs for checkpoint sync block %#x", r)
}
func (s *Service) fetchOriginColumns(pids []peer.ID, roBlock blocks.ROBlock) error {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
// Return early if the origin block has no blob commitments.
commitments, err := roBlock.Block().Body().BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "fetch blob commitments")
}
if len(commitments) == 0 {
return nil
}
// Compute the columns to request.
custodyGroupCount, err := s.cfg.P2P.CustodyGroupCount()
if err != nil {
return errors.Wrap(err, "custody group count")
}
samplingSize := max(custodyGroupCount, samplesPerSlot)
info, _, err := peerdas.Info(s.cfg.P2P.NodeID(), samplingSize)
if err != nil {
return errors.Wrap(err, "fetch peer info")
}
// Fetch origin data column sidecars.
root := roBlock.Root()
params := sync.DataColumnSidecarsParams{
Ctx: s.ctx,
Tor: s.clock,
P2P: s.cfg.P2P,
CtxMap: s.ctxMap,
Storage: s.cfg.DataColumnStorage,
NewVerifier: s.newDataColumnsVerifier,
}
verifiedRoDataColumnsByRoot, err := sync.FetchDataColumnSidecars(params, []blocks.ROBlock{roBlock}, info.CustodyColumns)
if err != nil {
return errors.Wrap(err, "fetch data column sidecars")
}
// Save origin data columns to disk.
verifiedRoDataColumnsSidecars, ok := verifiedRoDataColumnsByRoot[root]
if !ok {
return fmt.Errorf("cannot extract origin data column sidecars for block root %#x - should never happen", root)
}
if err := s.cfg.DataColumnStorage.Save(verifiedRoDataColumnsSidecars); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", roBlock.Root()),
"blobCount": len(commitments),
"columnCount": len(verifiedRoDataColumnsSidecars),
}).Info("Successfully downloaded data columns for checkpoint sync block")
return nil
}
func shufflePeers(pids []peer.ID) {
rg := rand.NewGenerator()
rg.Shuffle(len(pids), func(i, j int) {
@@ -367,3 +464,9 @@ func newBlobVerifierFromInitializer(ini *verification.Initializer) verification.
return ini.NewBlobVerifier(b, reqs)
}
}
func newDataColumnsVerifierFromInitializer(ini *verification.Initializer) verification.NewDataColumnsVerifier {
return func(roDataColumns []blocks.RODataColumn, reqs []verification.Requirement) verification.DataColumnsVerifier {
return ini.NewDataColumnsVerifier(roDataColumns, reqs)
}
}


@@ -7,14 +7,17 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/async/abool"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2pt "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -138,7 +141,7 @@ func TestService_InitStartStop(t *testing.T) {
},
}
p := p2pt.NewTestP2P(t)
p := p2ptest.NewTestP2P(t)
connectPeers(t, p, []*peerData{}, p.Peers())
for i, tt := range tests {
if i == 0 {
@@ -328,7 +331,7 @@ func TestService_markSynced(t *testing.T) {
}
func TestService_Resync(t *testing.T) {
p := p2pt.NewTestP2P(t)
p := p2ptest.NewTestP2P(t)
connectPeers(t, p, []*peerData{
{blocks: makeSequence(1, 160), finalizedEpoch: 5, headSlot: 160},
}, p.Peers())
@@ -511,5 +514,152 @@ func TestOriginOutsideRetention(t *testing.T) {
require.NoError(t, concreteDB.SaveOriginCheckpointBlockRoot(ctx, blk.Root()))
// This would break due to missing service dependencies, but will return nil fast due to being outside retention.
require.Equal(t, false, params.WithinDAPeriod(slots.ToEpoch(blk.Block().Slot()), slots.ToEpoch(clock.CurrentSlot())))
require.NoError(t, s.fetchOriginBlobs([]peer.ID{}))
require.NoError(t, s.fetchOriginSidecars([]peer.ID{}))
}
func TestFetchOriginSidecars(t *testing.T) {
ctx := t.Context()
beaconConfig := params.BeaconConfig()
genesisTime := time.Date(2025, time.August, 10, 0, 0, 0, 0, time.UTC)
secondsPerSlot := beaconConfig.SecondsPerSlot
slotsPerEpoch := beaconConfig.SlotsPerEpoch
secondsPerEpoch := uint64(slotsPerEpoch.Mul(secondsPerSlot))
retentionEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
genesisValidatorRoot := [fieldparams.RootLength]byte{}
t.Run("out of retention period", func(t *testing.T) {
// Create an origin block.
block := util.NewBeaconBlockFulu()
signedBlock, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
roBlock, err := blocks.NewROBlock(signedBlock)
require.NoError(t, err)
// Save the block.
db := dbtest.SetupDB(t)
err = db.SaveOriginCheckpointBlockRoot(ctx, roBlock.Root())
require.NoError(t, err)
err = db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Define "now" to be one epoch after genesis time + retention period.
nowWrtGenesisSecs := retentionEpochs.Add(1).Mul(secondsPerEpoch)
now := genesisTime.Add(time.Duration(nowWrtGenesisSecs) * time.Second)
nower := func() time.Time { return now }
clock := startup.NewClock(genesisTime, genesisValidatorRoot, startup.WithNower(nower))
service := &Service{
cfg: &Config{
DB: db,
},
clock: clock,
}
err = service.fetchOriginSidecars(nil)
require.NoError(t, err)
})
t.Run("no commitments", func(t *testing.T) {
// Create an origin block.
block := util.NewBeaconBlockFulu()
signedBlock, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
roBlock, err := blocks.NewROBlock(signedBlock)
require.NoError(t, err)
// Save the block.
db := dbtest.SetupDB(t)
err = db.SaveOriginCheckpointBlockRoot(ctx, roBlock.Root())
require.NoError(t, err)
err = db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Define "now" to be after genesis time + retention period.
nowWrtGenesisSecs := retentionEpochs.Mul(secondsPerEpoch)
now := genesisTime.Add(time.Duration(nowWrtGenesisSecs) * time.Second)
nower := func() time.Time { return now }
clock := startup.NewClock(genesisTime, genesisValidatorRoot, startup.WithNower(nower))
service := &Service{
cfg: &Config{
DB: db,
P2P: p2ptest.NewTestP2P(t),
},
clock: clock,
}
err = service.fetchOriginSidecars(nil)
require.NoError(t, err)
})
t.Run("nominal", func(t *testing.T) {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
// Create block and sidecars.
const blobCount = 1
roBlock, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, blobCount)
// Save the block.
db := dbtest.SetupDB(t)
err = db.SaveOriginCheckpointBlockRoot(ctx, roBlock.Root())
require.NoError(t, err)
err = db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Create a data columns storage.
dir := t.TempDir()
dataColumnStorage, err := filesystem.NewDataColumnStorage(ctx, filesystem.WithDataColumnBasePath(dir))
require.NoError(t, err)
// Compute the columns to request.
p2p := p2ptest.NewTestP2P(t)
custodyGroupCount, err := p2p.CustodyGroupCount()
require.NoError(t, err)
samplingSize := max(custodyGroupCount, samplesPerSlot)
info, _, err := peerdas.Info(p2p.NodeID(), samplingSize)
require.NoError(t, err)
// Save all sidecars except what we need.
toSave := make([]blocks.VerifiedRODataColumn, 0, uint64(len(verifiedRoSidecars))-samplingSize)
for _, sidecar := range verifiedRoSidecars {
if !info.CustodyColumns[sidecar.Index] {
toSave = append(toSave, sidecar)
}
}
err = dataColumnStorage.Save(toSave)
require.NoError(t, err)
// Define "now" to be after genesis time + retention period.
nowWrtGenesisSecs := retentionEpochs.Mul(secondsPerEpoch)
now := genesisTime.Add(time.Duration(nowWrtGenesisSecs) * time.Second)
nower := func() time.Time { return now }
clock := startup.NewClock(genesisTime, genesisValidatorRoot, startup.WithNower(nower))
service := &Service{
cfg: &Config{
DB: db,
P2P: p2p,
DataColumnStorage: dataColumnStorage,
},
clock: clock,
}
err = service.fetchOriginSidecars(nil)
require.NoError(t, err)
// Check that needed sidecars are saved.
summary := dataColumnStorage.Summary(roBlock.Root())
for index := range info.CustodyColumns {
require.Equal(t, true, summary.HasIndex(index))
}
})
}


@@ -11,6 +11,7 @@ import (
"github.com/OffchainLabs/prysm/v6/async"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -175,8 +176,9 @@ func (s *Service) getBlocksInQueue(slot primitives.Slot) []interfaces.ReadOnlySi
func (s *Service) removeBlockFromQueue(b interfaces.ReadOnlySignedBeaconBlock, blkRoot [32]byte) error {
s.pendingQueueLock.Lock()
defer s.pendingQueueLock.Unlock()
if err := s.deleteBlockFromPendingQueue(b.Block().Slot(), b, blkRoot); err != nil {
return err
return errors.Wrap(err, "delete block from pending queue")
}
return nil
}
@@ -196,41 +198,82 @@ func (s *Service) hasPeer() bool {
var errNoPeersForPending = errors.New("no suitable peers to process pending block queue, delaying")
// processAndBroadcastBlock validates, processes, and broadcasts a block.
// part of the function is to request missing blobs from peers if the block contains kzg commitments.
func (s *Service) processAndBroadcastBlock(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock, blkRoot [32]byte) error {
// Part of the function is to request missing sidecars from peers if the block contains kzg commitments.
func (s *Service) processAndBroadcastBlock(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock, blkRoot [fieldparams.RootLength]byte) error {
if err := s.processBlock(ctx, b, blkRoot); err != nil {
return errors.Wrap(err, "process block")
}
if err := s.receiveAndBroadCastBlock(ctx, b, blkRoot, b.Block().Slot()); err != nil {
return errors.Wrap(err, "receive and broadcast block")
}
return nil
}
func (s *Service) processBlock(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock, blkRoot [fieldparams.RootLength]byte) error {
blockSlot := b.Block().Slot()
if err := s.validateBeaconBlock(ctx, b, blkRoot); err != nil {
if !errors.Is(ErrOptimisticParent, err) {
log.WithError(err).WithField("slot", b.Block().Slot()).Debug("Could not validate block")
log.WithError(err).WithField("slot", blockSlot).Debug("Could not validate block")
return err
}
}
request, err := s.pendingBlobsRequestForBlock(blkRoot, b)
blockEpoch, denebForkEpoch, fuluForkEpoch := slots.ToEpoch(blockSlot), params.BeaconConfig().DenebForkEpoch, params.BeaconConfig().FuluForkEpoch
roBlock, err := blocks.NewROBlockWithRoot(b, blkRoot)
if err != nil {
return err
}
if len(request) > 0 {
peers := s.getBestPeers()
peerCount := len(peers)
if peerCount == 0 {
return errors.Wrapf(errNoPeersForPending, "block root=%#x", blkRoot)
}
if err := s.sendAndSaveBlobSidecars(ctx, request, peers[rand.NewGenerator().Int()%peerCount], b); err != nil {
return err
}
return errors.Wrap(err, "new ro block with root")
}
if blockEpoch >= fuluForkEpoch {
if err := s.requestAndSaveMissingDataColumnSidecars([]blocks.ROBlock{roBlock}); err != nil {
return errors.Wrap(err, "request and save missing data column sidecars")
}
return nil
}
if blockEpoch >= denebForkEpoch {
request, err := s.pendingBlobsRequestForBlock(blkRoot, b)
if err != nil {
return errors.Wrap(err, "pending blobs request for block")
}
if len(request) > 0 {
peers := s.getBestPeers()
peerCount := len(peers)
if peerCount == 0 {
return errors.Wrapf(errNoPeersForPending, "block root=%#x", blkRoot)
}
if err := s.sendAndSaveBlobSidecars(ctx, request, peers[rand.NewGenerator().Int()%peerCount], b); err != nil {
return errors.Wrap(err, "send and save blob sidecars")
}
}
return nil
}
return nil
}
func (s *Service) receiveAndBroadCastBlock(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock, blkRoot [fieldparams.RootLength]byte, blockSlot primitives.Slot) error {
if err := s.cfg.chain.ReceiveBlock(ctx, b, blkRoot, nil); err != nil {
return err
return errors.Wrap(err, "receive block")
}
s.setSeenBlockIndexSlot(b.Block().Slot(), b.Block().ProposerIndex())
s.setSeenBlockIndexSlot(blockSlot, b.Block().ProposerIndex())
pb, err := b.Proto()
if err != nil {
log.WithError(err).Debug("Could not get protobuf block")
return err
}
if err := s.cfg.p2p.Broadcast(ctx, pb); err != nil {
log.WithError(err).Debug("Could not broadcast block")
return err
@@ -286,58 +329,113 @@ func (s *Service) sendBatchRootRequest(ctx context.Context, roots [][32]byte, ra
ctx, span := prysmTrace.StartSpan(ctx, "sendBatchRootRequest")
defer span.End()
roots = dedupRoots(roots)
s.pendingQueueLock.RLock()
for i := len(roots) - 1; i >= 0; i-- {
r := roots[i]
if s.seenPendingBlocks[r] || s.cfg.chain.BlockBeingSynced(r) {
roots = append(roots[:i], roots[i+1:]...)
} else {
log.WithField("blockRoot", fmt.Sprintf("%#x", r)).Debug("Requesting block by root")
}
}
s.pendingQueueLock.RUnlock()
// Exit early if there are no roots to request.
if len(roots) == 0 {
return nil
}
bestPeers := s.getBestPeers()
if len(bestPeers) == 0 {
// Filter out roots that are already seen in pending blocks or being synced.
roots = s.filterOutPendingAndSynced(roots)
// Nothing to do, exit early.
if len(roots) == 0 {
return nil
}
// Randomly choose a peer to query from our best peers. If that peer cannot return
// all the requested blocks, we randomly select another peer.
pid := bestPeers[randGen.Int()%len(bestPeers)]
for i := 0; i < numOfTries; i++ {
// Fetch best peers to request blocks from.
bestPeers := s.getBestPeers()
// No suitable peer, exit early.
if len(bestPeers) == 0 {
log.WithField("roots", fmt.Sprintf("%#x", roots)).Debug("Send batch root request: No suitable peers")
return nil
}
// Randomly choose a peer to query from our best peers.
// If that peer cannot return all the requested blocks,
// we randomly select another peer.
randomIndex := randGen.Int() % len(bestPeers)
pid := bestPeers[randomIndex]
for range numOfTries {
req := p2ptypes.BeaconBlockByRootsReq(roots)
currentEpoch := slots.ToEpoch(s.cfg.clock.CurrentSlot())
// Get the current epoch.
currentSlot := s.cfg.clock.CurrentSlot()
currentEpoch := slots.ToEpoch(currentSlot)
// Trim the request to the maximum number of blocks we can request if needed.
maxReqBlock := params.MaxRequestBlock(currentEpoch)
if uint64(len(roots)) > maxReqBlock {
rootCount := uint64(len(roots))
if rootCount > maxReqBlock {
req = roots[:maxReqBlock]
}
// Send the request to the peer.
if err := s.sendBeaconBlocksRequest(ctx, &req, pid); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Debug("Could not send recent block request")
}
newRoots := make([][32]byte, 0, len(roots))
s.pendingQueueLock.RLock()
for _, rt := range roots {
if !s.seenPendingBlocks[rt] {
newRoots = append(newRoots, rt)
// Filter out roots that are already seen in pending blocks.
newRoots := make([][32]byte, 0, rootCount)
func() {
s.pendingQueueLock.RLock()
defer s.pendingQueueLock.RUnlock()
for _, rt := range roots {
if !s.seenPendingBlocks[rt] {
newRoots = append(newRoots, rt)
}
}
}
s.pendingQueueLock.RUnlock()
}()
// Exit early if all roots have been seen.
// This is the happy path.
if len(newRoots) == 0 {
break
return nil
}
// Choosing a new peer with the leftover set of
// roots to request.
// There are still some roots that have not been seen.
// Choose a new peer with the leftover set of roots to request.
roots = newRoots
pid = bestPeers[randGen.Int()%len(bestPeers)]
// Choose a new peer to query.
randomIndex = randGen.Int() % len(bestPeers)
pid = bestPeers[randomIndex]
}
// Some roots are still missing after all allowed tries.
// This is the unhappy path.
log.WithFields(logrus.Fields{
"roots": fmt.Sprintf("%#x", roots),
"tries": numOfTries,
}).Debug("Send batch root request: Some roots are still missing after all allowed tries")
return nil
}
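The loop above is an instance of a generic retry pattern: pick a random peer from the best-peer set, request the outstanding roots, drop whatever was delivered, and retry the remainder with a freshly chosen peer, up to a fixed number of tries. A simplified sketch of that pattern with hypothetical names (not Prysm code):

package main

import (
	"fmt"
	"math/rand"
)

// retryWithRandomPeers retries a fetch with a randomly chosen peer on each
// attempt; fetch reports which items are still missing, and only those are
// retried on the next attempt.
func retryWithRandomPeers(peers []string, items []string, tries int,
	fetch func(peer string, items []string) (missing []string)) []string {
	for i := 0; i < tries && len(items) > 0 && len(peers) > 0; i++ {
		peer := peers[rand.Intn(len(peers))]
		items = fetch(peer, items)
	}
	return items // whatever is still missing after all attempts
}

func main() {
	missing := retryWithRandomPeers(
		[]string{"peerA", "peerB"},
		[]string{"root1", "root2", "root3"},
		3,
		func(peer string, items []string) []string {
			fmt.Printf("asking %s for %v\n", peer, items)
			return items[1:] // pretend the first item was delivered
		},
	)
	fmt.Println("still missing:", missing)
}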
// filterOutPendingAndSynced filters out roots that are already seen in pending blocks or being synced.
func (s *Service) filterOutPendingAndSynced(roots [][fieldparams.RootLength]byte) [][fieldparams.RootLength]byte {
// Remove duplicates (if any) from the list of roots.
roots = dedupRoots(roots)
// Filter out, in place, roots that are already seen in pending blocks or being synced.
s.pendingQueueLock.RLock()
defer s.pendingQueueLock.RUnlock()
for i := len(roots) - 1; i >= 0; i-- {
r := roots[i]
if s.seenPendingBlocks[r] || s.cfg.chain.BlockBeingSynced(r) {
roots = append(roots[:i], roots[i+1:]...)
continue
}
log.WithField("blockRoot", fmt.Sprintf("%#x", r)).Debug("Requesting block by root")
}
return roots
}
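filterOutPendingAndSynced uses the usual Go idiom for deleting from a slice while iterating over it: walk the slice backwards so that removing element i never shifts the indices that have not been visited yet. A minimal, generic illustration of the same idiom:

package main

import "fmt"

// filterInPlace removes elements that match drop, iterating backwards so
// that removing element i does not shift the indices still to be visited.
func filterInPlace(xs []int, drop func(int) bool) []int {
	for i := len(xs) - 1; i >= 0; i-- {
		if drop(xs[i]) {
			xs = append(xs[:i], xs[i+1:]...)
		}
	}
	return xs
}

func main() {
	xs := []int{1, 2, 3, 4, 5, 6}
	fmt.Println(filterInPlace(xs, func(x int) bool { return x%2 == 0 })) // [1 3 5]
}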
func (s *Service) sortedPendingSlots() []primitives.Slot {
s.pendingQueueLock.RLock()
defer s.pendingQueueLock.RUnlock()


@@ -4,11 +4,13 @@ import (
"context"
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/verify"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -20,15 +22,19 @@ import (
"github.com/pkg/errors"
)
// sendBeaconBlocksRequest sends a recent beacon blocks request to a peer to get
// those corresponding blocks from that peer.
// sendBeaconBlocksRequest sends the `requests` beacon blocks by root request to
// the peer with the given `id`. It inserts each received block into the
// pending queue. Then, for each received block, it checks whether all corresponding sidecars
// are stored and, if not, sends the corresponding sidecar requests and stores the received sidecars.
// Only blob sidecars are requested from the peer with the given `id`.
// Requests for other types of sidecars are sent to the best peers.
func (s *Service) sendBeaconBlocksRequest(ctx context.Context, requests *types.BeaconBlockByRootsReq, id peer.ID) error {
ctx, cancel := context.WithTimeout(ctx, respTimeout)
defer cancel()
requestedRoots := make(map[[32]byte]struct{})
requestedRoots := make(map[[fieldparams.RootLength]byte]bool)
for _, root := range *requests {
requestedRoots[root] = struct{}{}
requestedRoots[root] = true
}
blks, err := SendBeaconBlocksByRootRequest(ctx, s.cfg.clock, s.cfg.p2p, id, requests, func(blk interfaces.ReadOnlySignedBeaconBlock) error {
@@ -36,39 +42,124 @@ func (s *Service) sendBeaconBlocksRequest(ctx context.Context, requests *types.B
if err != nil {
return err
}
if _, ok := requestedRoots[blkRoot]; !ok {
if ok := requestedRoots[blkRoot]; !ok {
return fmt.Errorf("received unexpected block with root %x", blkRoot)
}
s.pendingQueueLock.Lock()
defer s.pendingQueueLock.Unlock()
if err := s.insertBlockToPendingQueue(blk.Block().Slot(), blk, blkRoot); err != nil {
return err
return errors.Wrapf(err, "insert block to pending queue for block with root %x", blkRoot)
}
return nil
})
// The following part deals with sidecars.
postFuluBlocks := make([]blocks.ROBlock, 0, len(blks))
for _, blk := range blks {
// Skip blocks before deneb because they have no blob.
if blk.Version() < version.Deneb {
blockVersion := blk.Version()
if blockVersion >= version.Fulu {
roBlock, err := blocks.NewROBlock(blk)
if err != nil {
return errors.Wrap(err, "new ro block")
}
postFuluBlocks = append(postFuluBlocks, roBlock)
continue
}
blkRoot, err := blk.Block().HashTreeRoot()
if err != nil {
return err
}
request, err := s.pendingBlobsRequestForBlock(blkRoot, blk)
if err != nil {
return err
}
if len(request) == 0 {
if blockVersion >= version.Deneb {
if err := s.requestAndSaveMissingBlobSidecars(blk, id); err != nil {
return errors.Wrap(err, "request and save missing blob sidecars")
}
continue
}
if err := s.sendAndSaveBlobSidecars(ctx, request, id, blk); err != nil {
return err
}
}
if err := s.requestAndSaveMissingDataColumnSidecars(postFuluBlocks); err != nil {
return errors.Wrap(err, "request and save missing data columns")
}
return err
}
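Taken together, the function above partitions the received blocks by version: Fulu and later blocks are collected and handled as one batch (data column sidecars, fetched from the best peers), Deneb-era blocks are handled one by one against the peer that served them, and older blocks carry no sidecars at all. A small, hypothetical sketch of that partition-then-batch shape (not Prysm code):

package main

import "fmt"

type block struct {
	root    string
	version int // illustrative: 0 = pre-Deneb, 1 = Deneb, 2 = Fulu
}

func handleSidecars(blks []block) {
	var fuluBatch []block
	for _, b := range blks {
		switch {
		case b.version >= 2:
			// Collected and processed together after the loop.
			fuluBatch = append(fuluBatch, b)
		case b.version >= 1:
			// Handled immediately, per block, against the serving peer.
			fmt.Println("fetch blob sidecars for", b.root)
		default:
			// Pre-Deneb blocks carry no sidecars.
		}
	}
	if len(fuluBatch) > 0 {
		fmt.Println("fetch data column sidecars for", len(fuluBatch), "blocks from best peers")
	}
}

func main() {
	handleSidecars([]block{{"a", 0}, {"b", 1}, {"c", 2}, {"d", 2}})
}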
// requestAndSaveMissingDataColumnSidecars checks whether data column sidecars are missing for the given blocks.
// If so, it requests them and saves them to storage.
func (s *Service) requestAndSaveMissingDataColumnSidecars(blks []blocks.ROBlock) error {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
custodyGroupCount, err := s.cfg.p2p.CustodyGroupCount()
if err != nil {
return errors.Wrap(err, "fetch custody group count from peer")
}
samplingSize := max(custodyGroupCount, samplesPerSlot)
info, _, err := peerdas.Info(s.cfg.p2p.NodeID(), samplingSize)
if err != nil {
return errors.Wrap(err, "custody info")
}
// Fetch missing data column sidecars.
params := DataColumnSidecarsParams{
Ctx: s.ctx,
Tor: s.cfg.clock,
P2P: s.cfg.p2p,
CtxMap: s.ctxMap,
Storage: s.cfg.dataColumnStorage,
NewVerifier: s.newColumnsVerifier,
}
sidecarsByRoot, err := FetchDataColumnSidecars(params, blks, info.CustodyColumns)
if err != nil {
return errors.Wrap(err, "fetch data column sidecars")
}
// Save the sidecars to the storage.
count := 0
for _, sidecars := range sidecarsByRoot {
count += len(sidecars)
}
sidecarsToSave := make([]blocks.VerifiedRODataColumn, 0, count)
for _, sidecars := range sidecarsByRoot {
sidecarsToSave = append(sidecarsToSave, sidecars...)
}
if err := s.cfg.dataColumnStorage.Save(sidecarsToSave); err != nil {
return errors.Wrap(err, "save")
}
return nil
}
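The save step above flattens a map of per-root sidecar slices into a single slice, counting the entries first so the destination can be allocated with its final capacity. The same pattern in generic form (a sketch, not Prysm code):

package main

import "fmt"

// flatten concatenates all slices in m into a single slice, pre-sizing the
// destination so no reallocation happens while appending.
func flatten[K comparable, V any](m map[K][]V) []V {
	count := 0
	for _, vs := range m {
		count += len(vs)
	}
	out := make([]V, 0, count)
	for _, vs := range m {
		out = append(out, vs...)
	}
	return out
}

func main() {
	byRoot := map[string][]int{"a": {1, 2}, "b": {3}}
	fmt.Println(len(flatten(byRoot))) // 3
}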
func (s *Service) requestAndSaveMissingBlobSidecars(block interfaces.ReadOnlySignedBeaconBlock, peerID peer.ID) error {
blockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return errors.Wrap(err, "hash tree root")
}
request, err := s.pendingBlobsRequestForBlock(blockRoot, block)
if err != nil {
return errors.Wrap(err, "pending blobs request for block")
}
if len(request) == 0 {
return nil
}
if err := s.sendAndSaveBlobSidecars(s.ctx, request, peerID, block); err != nil {
return errors.Wrap(err, "send and save blob sidecars")
}
return nil
}
// beaconBlocksRootRPCHandler looks up the request blocks from the database from the given block roots.
func (s *Service) beaconBlocksRootRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
ctx, cancel := context.WithTimeout(ctx, ttfbTimeout)


@@ -36,12 +36,12 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Check if the message type is the one expected.
ref, ok := msg.(*types.DataColumnsByRootIdentifiers)
ref, ok := msg.(types.DataColumnsByRootIdentifiers)
if !ok {
return notDataColumnsByRootIdentifiersError
}
requestedColumnIdents := *ref
requestedColumnIdents := ref
remotePeer := stream.Conn().RemotePeer()
ctx, cancel := context.WithTimeout(ctx, ttfbTimeout)


@@ -68,7 +68,7 @@ func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
stream, err := localP2P.BHost.NewStream(t.Context(), remoteP2P.BHost.ID(), protocolID)
require.NoError(t, err)
msg := &types.DataColumnsByRootIdentifiers{{Columns: []uint64{1, 2, 3}}}
msg := types.DataColumnsByRootIdentifiers{{Columns: []uint64{1, 2, 3}}}
require.Equal(t, true, localP2P.Peers().Scorers().BadResponsesScorer().Score(remoteP2P.PeerID()) >= 0)
err = service.dataColumnSidecarByRootRPCHandler(t.Context(), msg, stream)
@@ -169,7 +169,7 @@ func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
stream, err := localP2P.BHost.NewStream(ctx, remoteP2P.BHost.ID(), protocolID)
require.NoError(t, err)
msg := &types.DataColumnsByRootIdentifiers{
msg := types.DataColumnsByRootIdentifiers{
{
BlockRoot: root0[:],
Columns: []uint64{1, 2, 3},


@@ -22,6 +22,7 @@ import (
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
goPeer "github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -404,11 +405,8 @@ func readChunkedBlobSidecar(stream network.Stream, encoding encoder.NetworkEncod
// SendDataColumnSidecarsByRangeRequest sends a request for data column sidecars by range
// and returns the fetched data column sidecars.
func SendDataColumnSidecarsByRangeRequest(
ctx context.Context,
tor blockchain.TemporalOracle,
p2pApi p2p.P2P,
p DataColumnSidecarsParams,
pid peer.ID,
ctxMap ContextByteVersions,
request *ethpb.DataColumnSidecarsByRangeRequest,
) ([]blocks.RODataColumn, error) {
// Return early if nothing to request.
@@ -428,7 +426,7 @@ func SendDataColumnSidecarsByRangeRequest(
}
// Build the topic.
currentSlot := tor.CurrentSlot()
currentSlot := p.Tor.CurrentSlot()
currentEpoch := slots.ToEpoch(currentSlot)
topic, err := p2p.TopicFromMessage(p2p.DataColumnSidecarsByRangeName, currentEpoch)
if err != nil {
@@ -453,7 +451,7 @@ func SendDataColumnSidecarsByRangeRequest(
})
// Send the request.
stream, err := p2pApi.Send(ctx, request, topic, pid)
stream, err := p.P2P.Send(p.Ctx, request, topic, pid)
if err != nil {
return nil, errors.Wrap(err, "p2p send")
}
@@ -463,7 +461,7 @@ func SendDataColumnSidecarsByRangeRequest(
roDataColumns := make([]blocks.RODataColumn, 0, totalCount)
for range totalCount {
// Avoid reading extra chunks if the context is done.
if err := ctx.Err(); err != nil {
if err := p.Ctx.Err(); err != nil {
return nil, err
}
@@ -473,7 +471,7 @@ func SendDataColumnSidecarsByRangeRequest(
}
roDataColumn, err := readChunkedDataColumnSidecar(
stream, p2pApi, ctxMap,
stream, p.P2P, p.CtxMap,
validatorSlotWithinBounds,
isSidecarIndexRequested(request),
)
@@ -492,7 +490,7 @@ func SendDataColumnSidecarsByRangeRequest(
}
// All requested sidecars were delivered by the peer. Expecting EOF.
if _, err := readChunkedDataColumnSidecar(stream, p2pApi, ctxMap); !errors.Is(err, io.EOF) {
if _, err := readChunkedDataColumnSidecar(stream, p.P2P, p.CtxMap); !errors.Is(err, io.EOF) {
return nil, errors.Wrapf(errMaxResponseDataColumnSidecarsExceeded, "requestedCount=%d", totalCount)
}
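The refactor in this hunk replaces a long list of positional arguments (ctx, tor, p2pApi, ctxMap) with a single DataColumnSidecarsParams struct; the test updates further down show how callers now construct it. As a general illustration of the design choice, with hypothetical names rather than the real signatures:

package main

import (
	"context"
	"fmt"
)

// Before: every dependency is a positional argument, so call sites are easy
// to get wrong when several arguments share a type.
func sendBefore(ctx context.Context, clock, p2p, ctxMap any, pid string) {
	fmt.Println("send to", pid)
}

// After: dependencies travel together in one params struct, and new fields
// can be added without touching every call site.
type sidecarsParams struct {
	Ctx    context.Context
	Clock  any
	P2P    any
	CtxMap any
}

func sendAfter(p sidecarsParams, pid string) {
	fmt.Println("send to", pid)
}

func main() {
	sendBefore(context.Background(), nil, nil, nil, "peer-1")
	sendAfter(sidecarsParams{Ctx: context.Background()}, "peer-1")
}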
@@ -539,22 +537,10 @@ func isSidecarIndexRequested(request *ethpb.DataColumnSidecarsByRangeRequest) Da
// SendDataColumnSidecarsByRootRequest sends a request for data column sidecars by root
// and returns the fetched data column sidecars.
func SendDataColumnSidecarsByRootRequest(
ctx context.Context,
tor blockchain.TemporalOracle,
p2pApi p2p.P2P,
pid peer.ID,
ctxMap ContextByteVersions,
request p2ptypes.DataColumnsByRootIdentifiers,
) ([]blocks.RODataColumn, error) {
// Return early if the request is nil.
if request == nil {
return nil, nil
}
func SendDataColumnSidecarsByRootRequest(p DataColumnSidecarsParams, peer goPeer.ID, identifiers p2ptypes.DataColumnsByRootIdentifiers) ([]blocks.RODataColumn, error) {
// Compute how many sidecars are requested.
count := uint64(0)
for _, identifier := range request {
for _, identifier := range identifiers {
count += uint64(len(identifier.Columns))
}
@@ -570,13 +556,15 @@ func SendDataColumnSidecarsByRootRequest(
}
// Get the topic for the request.
topic, err := p2p.TopicFromMessage(p2p.DataColumnSidecarsByRootName, slots.ToEpoch(tor.CurrentSlot()))
currentSlot := p.Tor.CurrentSlot()
currentEpoch := slots.ToEpoch(currentSlot)
topic, err := p2p.TopicFromMessage(p2p.DataColumnSidecarsByRootName, currentEpoch)
if err != nil {
return nil, errors.Wrap(err, "topic from message")
}
// Send the request to the peer.
stream, err := p2pApi.Send(ctx, request, topic, pid)
stream, err := p.P2P.Send(p.Ctx, identifiers, topic, peer)
if err != nil {
return nil, errors.Wrap(err, "p2p api send")
}
@@ -587,7 +575,7 @@ func SendDataColumnSidecarsByRootRequest(
// Read the data column sidecars from the stream.
for range count {
roDataColumn, err := readChunkedDataColumnSidecar(stream, p2pApi, ctxMap, isSidecarIndexRootRequested(request))
roDataColumn, err := readChunkedDataColumnSidecar(stream, p.P2P, p.CtxMap, isSidecarIndexRootRequested(identifiers))
if errors.Is(err, io.EOF) {
return roDataColumns, nil
}
@@ -603,7 +591,7 @@ func SendDataColumnSidecarsByRootRequest(
}
// All requested sidecars were delivered by the peer. Expecting EOF.
if _, err := readChunkedDataColumnSidecar(stream, p2pApi, ctxMap); !errors.Is(err, io.EOF) {
if _, err := readChunkedDataColumnSidecar(stream, p.P2P, p.CtxMap); !errors.Is(err, io.EOF) {
return nil, errors.Wrapf(errMaxResponseDataColumnSidecarsExceeded, "requestedCount=%d", count)
}
@@ -629,11 +617,11 @@ func isSidecarIndexRootRequested(request p2ptypes.DataColumnsByRootIdentifiers)
indices, ok := columnsIndexFromRoot[root]
if !ok {
return errors.Errorf("root #%x returned by peer but not requested", root)
return errors.Errorf("root %#x returned by peer but not requested", root)
}
if !indices[index] {
return errors.Errorf("index %d for root #%x returned by peer but not requested", index, root)
return errors.Errorf("index %d for root %#x returned by peer but not requested", index, root)
}
return nil


@@ -915,7 +915,7 @@ func TestSendDataColumnSidecarsByRangeRequest(t *testing.T) {
for _, tc := range nilTestCases {
t.Run(tc.name, func(t *testing.T) {
actual, err := SendDataColumnSidecarsByRangeRequest(t.Context(), nil, nil, "aRandomPID", nil, tc.request)
actual, err := SendDataColumnSidecarsByRangeRequest(DataColumnSidecarsParams{Ctx: t.Context()}, "", tc.request)
require.NoError(t, err)
require.IsNil(t, actual)
})
@@ -928,7 +928,7 @@ func TestSendDataColumnSidecarsByRangeRequest(t *testing.T) {
params.OverrideBeaconConfig(beaconConfig)
request := &ethpb.DataColumnSidecarsByRangeRequest{Count: 1, Columns: []uint64{1, 2, 3}}
_, err := SendDataColumnSidecarsByRangeRequest(t.Context(), nil, nil, "aRandomPID", nil, request)
_, err := SendDataColumnSidecarsByRangeRequest(DataColumnSidecarsParams{Ctx: t.Context()}, "", request)
require.ErrorContains(t, errMaxRequestDataColumnSidecarsExceeded.Error(), err)
})
@@ -1040,7 +1040,14 @@ func TestSendDataColumnSidecarsByRangeRequest(t *testing.T) {
assert.NoError(t, err)
})
actual, err := SendDataColumnSidecarsByRangeRequest(t.Context(), clock, p1, p2.PeerID(), ctxMap, requestSent)
parameters := DataColumnSidecarsParams{
Ctx: t.Context(),
Tor: clock,
P2P: p1,
CtxMap: ctxMap,
}
actual, err := SendDataColumnSidecarsByRangeRequest(parameters, p2.PeerID(), requestSent)
if tc.expectedError != nil {
require.ErrorContains(t, tc.expectedError.Error(), err)
if util.WaitTimeout(&wg, time.Second) {
@@ -1208,7 +1215,7 @@ func TestSendDataColumnSidecarsByRootRequest(t *testing.T) {
for _, tc := range nilTestCases {
t.Run(tc.name, func(t *testing.T) {
actual, err := SendDataColumnSidecarsByRootRequest(t.Context(), nil, nil, "aRandomPID", nil, tc.request)
actual, err := SendDataColumnSidecarsByRootRequest(DataColumnSidecarsParams{Ctx: t.Context()}, "", tc.request)
require.NoError(t, err)
require.IsNil(t, actual)
})
@@ -1225,7 +1232,7 @@ func TestSendDataColumnSidecarsByRootRequest(t *testing.T) {
{Columns: []uint64{4, 5, 6}},
}
_, err := SendDataColumnSidecarsByRootRequest(t.Context(), nil, nil, "aRandomPID", nil, request)
_, err := SendDataColumnSidecarsByRootRequest(DataColumnSidecarsParams{Ctx: t.Context()}, "", request)
require.ErrorContains(t, errMaxRequestDataColumnSidecarsExceeded.Error(), err)
})
@@ -1346,7 +1353,13 @@ func TestSendDataColumnSidecarsByRootRequest(t *testing.T) {
assert.NoError(t, err)
})
actual, err := SendDataColumnSidecarsByRootRequest(t.Context(), clock, p1, p2.PeerID(), ctxMap, sentRequest)
parameters := DataColumnSidecarsParams{
Ctx: t.Context(),
Tor: clock,
P2P: p1,
CtxMap: ctxMap,
}
actual, err := SendDataColumnSidecarsByRootRequest(parameters, p2.PeerID(), sentRequest)
if tc.expectedError != nil {
require.ErrorContains(t, tc.expectedError.Error(), err)
if util.WaitTimeout(&wg, time.Second) {


@@ -38,7 +38,10 @@ func (s *Service) maintainPeerStatuses() {
go func(id peer.ID) {
defer wg.Done()
log := log.WithField("peer", id)
log := log.WithFields(logrus.Fields{
"peer": id,
"agent": agentString(id, s.cfg.p2p.Host()),
})
// If our peer status has not been updated correctly we disconnect over here
// and set the connection state over here instead.


@@ -494,12 +494,12 @@ func (s *Service) subscribeWithParameters(p subscribeParameters) {
log.WithError(err).Error("Could not subscribe to subnets")
}
currentSlot := s.cfg.clock.CurrentSlot()
slotDuration := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
neededSubnets := computeAllNeededSubnets(currentSlot, p.getSubnetsToJoin, p.getSubnetsRequiringPeers)
minimumPeersPerSubnet := flags.Get().MinimumPeersPerSubnet
// Subscribe to expected subnets and search for peers if needed at every slot.
go func() {
currentSlot := s.cfg.clock.CurrentSlot()
neededSubnets := computeAllNeededSubnets(currentSlot, p.getSubnetsToJoin, p.getSubnetsRequiringPeers)
func() {
ctx, cancel := context.WithTimeout(s.ctx, slotDuration)
defer cancel()
@@ -515,6 +515,9 @@ func (s *Service) subscribeWithParameters(p subscribeParameters) {
for {
select {
case <-slotTicker.C():
currentSlot := s.cfg.clock.CurrentSlot()
neededSubnets := computeAllNeededSubnets(currentSlot, p.getSubnetsToJoin, p.getSubnetsRequiringPeers)
if err := s.subscribeToSubnets(parameters); err != nil {
if errors.Is(err, errInvalidDigest) {
log.WithField("topics", shortTopic).Debug("Digest is invalid, stopping subscription")
@@ -548,6 +551,7 @@ func (s *Service) subscribeWithParameters(p subscribeParameters) {
for {
select {
case <-logTicker.C:
currentSlot := s.cfg.clock.CurrentSlot()
subnetsToFindPeersIndex := computeAllNeededSubnets(currentSlot, p.getSubnetsToJoin, p.getSubnetsRequiringPeers)
isSubnetWithMissingPeers := false
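The change above moves the computation of the needed subnets inside the ticker loops so that it reflects the current slot on every tick, instead of being captured once before the goroutines start. A generic sketch of the fixed shape (hypothetical helper, not Prysm code):

package main

import (
	"fmt"
	"time"
)

// computeNeeded stands in for computeAllNeededSubnets: its result depends on
// the current slot, so it must be recomputed whenever the slot advances.
func computeNeeded(slot uint64) []uint64 { return []uint64{slot % 64} }

func main() {
	slot := uint64(0)
	ticker := time.NewTicker(5 * time.Millisecond) // stands in for the slot ticker
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		<-ticker.C
		slot++ // pretend the chain advanced one slot
		// Recompute inside the loop; computing this once before the loop
		// would pin the subscription logic to the subnets of the first slot.
		needed := computeNeeded(slot)
		fmt.Println("slot", slot, "needs subnets", needed)
	}
}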


@@ -151,7 +151,7 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, ro
sidecar := sidecars[columnIndex]
if err := s.cfg.p2p.BroadcastDataColumn(blockRoot, sidecar.Index, sidecar.DataColumnSidecar); err != nil {
if err := s.cfg.p2p.BroadcastDataColumnSidecar(blockRoot, sidecar.Index, sidecar.DataColumnSidecar); err != nil {
log.WithError(err).Error("Failed to broadcast data column")
}


@@ -18,7 +18,7 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
}
if err := s.receiveDataColumnSidecar(ctx, sidecar); err != nil {
return errors.Wrap(err, "receive data column")
return errors.Wrap(err, "receive data column sidecar")
}
slot := sidecar.Slot()
@@ -26,7 +26,7 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
root := sidecar.BlockRoot()
if err := s.reconstructSaveBroadcastDataColumnSidecars(ctx, slot, proposerIndex, root); err != nil {
return errors.Wrap(err, "reconstruct data columns")
return errors.Wrap(err, "reconstruct/save/broadcast data column sidecars")
}
return nil


@@ -17,9 +17,12 @@ var (
// BlobAlignsWithBlock verifies if the blob aligns with the block.
func BlobAlignsWithBlock(blob blocks.ROBlob, block blocks.ROBlock) error {
if block.Version() < version.Deneb {
blockVersion := block.Version()
if blockVersion < version.Deneb || blockVersion >= version.Fulu {
return nil
}
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(blob.Slot())
if blob.Index >= uint64(maxBlobsPerBlock) {
return errors.Wrapf(ErrIncorrectBlobIndex, "index %d exceeds MAX_BLOBS_PER_BLOCK %d", blob.Index, maxBlobsPerBlock)


@@ -47,6 +47,19 @@ var (
RequireSidecarKzgProofVerified,
}
// ByRootRequestDataColumnSidecarRequirements defines the set of requirements that DataColumnSidecars received
// via the by root request must satisfy in order to upgrade an RODataColumn to a VerifiedRODataColumn.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#datacolumnsidecarsbyroot-v1
ByRootRequestDataColumnSidecarRequirements = []Requirement{
RequireValidFields,
RequireSidecarInclusionProven,
RequireSidecarKzgProofVerified,
}
// SpectestDataColumnSidecarRequirements is used by the forkchoice spectests when verifying data columns used in the on_block tests.
SpectestDataColumnSidecarRequirements = requirementList(GossipDataColumnSidecarRequirements).excluding(
RequireSidecarParentSeen, RequireSidecarParentValid)
errColumnsInvalid = errors.New("data columns failed verification")
errBadTopicLength = errors.New("topic length is invalid")
errBadTopic = errors.New("topic is not of the one expected")
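Purely as an illustration of how a requirement list like ByRootRequestDataColumnSidecarRequirements can be consumed (hypothetical types, not Prysm's verification package), a requirement-driven check can simply run each predicate in order and stop at the first failure:

package main

import (
	"errors"
	"fmt"
)

type sidecar struct{ inclusionProven, kzgVerified bool }

// requirement mirrors the idea of the Requirement lists above: a named check
// that either passes or explains why the sidecar cannot be upgraded.
type requirement struct {
	name  string
	check func(sidecar) bool
}

func verify(sc sidecar, reqs []requirement) error {
	for _, r := range reqs {
		if !r.check(sc) {
			return errors.New("requirement failed: " + r.name)
		}
	}
	return nil
}

func main() {
	byRootReqs := []requirement{
		{"inclusion proven", func(s sidecar) bool { return s.inclusionProven }},
		{"kzg proof verified", func(s sidecar) bool { return s.kzgVerified }},
	}
	fmt.Println(verify(sidecar{inclusionProven: true, kzgVerified: false}, byRootReqs))
}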


@@ -0,0 +1,3 @@
### Fixed
- Fixed a regression in the find-peers functions introduced in PR#15471, where nodes with equal sequence numbers were incorrectly skipped and the peer count was incorrectly reduced when replacing nodes with higher sequence numbers.


@@ -0,0 +1,3 @@
### Added
- Added specification references that map the spec to the implementation.


@@ -0,0 +1,3 @@
### Changed
- Renamed various variables and functions for clarity.


@@ -0,0 +1,2 @@
### Ignored
- Initialize data column storage in RPC handlers.


@@ -0,0 +1,2 @@
### Ignored
- Decrease the log level for peer subscription failures caused by invalid digests.


@@ -0,0 +1,2 @@
### Ignored
- Omit non-standard blob schedule entry struct fields from marshaling.


@@ -0,0 +1,2 @@
### Changed
- Reject incoming connections when the fork schedule of the connecting peer (parsed from their ENR) has a matching next_fork_epoch, but mismatched next_fork_version or nfd (next fork digest).


@@ -0,0 +1,2 @@
### Added
- Data columns syncing for Fusaka.
