Compare commits

...

63 Commits

Author SHA1 Message Date
Taranpreet26311
cffe93ddeb Update linter version 2024-08-22 14:37:46 +04:00
Taranpreet26311
2b58b462e2 Updates to latest 2024-08-12 19:23:51 +04:00
Taranpreet26311
292d42e454 Add "intrange" linter to ignore 2024-08-12 18:45:56 +04:00
Taranpreet26311
057af30542 Try to downgrade to even lower lint version 2024-08-12 18:06:21 +04:00
Taranpreet26311
534459e269 Downgrade lint version 2024-08-12 18:00:39 +04:00
Taranpreet26311
b0fa3017b7 Bump up build version 2024-08-12 17:58:06 +04:00
Taranpreet26311
15574a80a5 Update linter version 2024-08-12 17:16:00 +04:00
Taranpreet26311
b91b9f6fa5 Update lint to go 1.22.6 2024-08-12 15:30:14 +04:00
Manu NALEPA
d48f89c166 DO NOT MERGE: Test range feature. 2024-08-10 14:34:16 +02:00
Manu NALEPA
289c44a1f2 PeerDAS: Add MetadataV3 with custody_subnet_count (#14274)
* `sendPingRequest`: Add some comments.

* `sendPingRequest`: Replace `stream.Conn().RemotePeer()` by `peerID`.

* `pingHandler`: Add comments.

* `sendMetaDataRequest`: Add comments and implement a unique test.

* Gather `SchemaVersion`s in the same `const` definition.

* Define `SchemaVersionV3`.

* `MetaDataV1`: Fix comment.

* Proto: Define `MetaDataV2`.

* `MetaDataV2`: Generate SSZ.

* `newColumnSubnetIDs`: Use smaller lines.

* `metaDataHandler` and `sendMetaDataRequest`: Manage `MetaDataV2`.

* `RefreshPersistentSubnets`: Refactor tests (no functional change).

* `RefreshPersistentSubnets`: Refactor and add comments (no functional change).

* `RefreshPersistentSubnets`: Compare cache with both ENR & metadata.

* `RefreshPersistentSubnets`: Manage peerDAS.

* `registerRPCHandlersPeerDAS`: Register `RPCMetaDataTopicV3`.

* `CustodyCountFromRemotePeer`: Retrieve the count from metadata.

Then fall back to the ENR value, then to the default value (see the sketch after this commit).

* Update beacon-chain/sync/rpc_metadata.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Fix duplicate case.

* Remove version testing.

* `debug.proto`: Stop breaking ordering.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-08-05 09:44:49 +02:00
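The fallback order described in the commit above (peer metadata, then ENR, then a default) can be summarized with a small sketch; the function and parameter names here are illustrative assumptions, not Prysm's actual API:

// custodyCount returns the peer's custody subnet count, preferring the value from
// MetaDataV3, then the `csc` ENR entry, then the spec default. Illustrative only.
func custodyCount(metadataCsc, enrCsc *uint64, defaultCsc uint64) uint64 {
	switch {
	case metadataCsc != nil: // 1. value advertised in the peer's metadata
		return *metadataCsc
	case enrCsc != nil: // 2. value from the peer's ENR `csc` field
		return *enrCsc
	default: // 3. spec default
		return defaultCsc
	}
}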
Manu NALEPA
c9fa9389ab Fix data columns sampling (#14263)
* Fix the obvious...

* Data columns sampling: Modify logging.

* `waitForChainStart`: Make it threadsafe - only wait once.

* Sampling: Wait for chain start before running the sampling.

Reason: `newDataColumnSampler1D` needs `s.ctxMap`.
`s.ctxMap` is only set when chain is started.

Previously, `waitForChainStart` was only called in `s.registerHandlers`, itself called in a goroutine.

==> We had a race condition here: sometimes `newDataColumnSampler1D` was called after `s.ctxMap` was set, sometimes not (see the sketch after this commit).

* Address Nishant's comments.

* Sampling: Improve logging.

* `waitForChainStart`: Remove `chainIsStarted` check.
2024-07-29 14:27:09 +02:00
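A minimal sketch of the thread-safe, wait-once pattern this commit describes, assuming a `chainStarted` channel that is closed once the chain starts (names are assumptions, not the actual Prysm fields):

package das // hypothetical package name for this sketch

import "sync"

type sampler struct {
	chainStarted chan struct{} // closed once the chain has started
	waitOnce     sync.Once
}

// waitForChainStart blocks until chainStarted is closed; every later call returns
// immediately, so code that runs after it (like newDataColumnSampler1D in the
// commit above) only sees state that was set at chain start.
func (s *sampler) waitForChainStart() {
	s.waitOnce.Do(func() {
		<-s.chainStarted
	})
}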
Manu NALEPA
940782f323 PeerDAS: Fix initial sync (#14208)
* `SendDataColumnsByRangeRequest`: Add some new fields in logs.

* `BlobStorageSummary`: Implement `HasDataColumnIndex` and `AllDataColumnsAvailable`.

* Implement `fetchDataColumnsFromPeers`.

* `fetchBlobsFromPeer`: Return only one error.
2024-07-29 13:22:38 +02:00
Manu NALEPA
8c684c1919 Make deepsource happy (#14237)
* DeepSource: Pass heavy objects by pointers.

* `removeBlockFromQueue`: Remove redundant error checking.

* `fetchBlobsFromPeer`: Use same variable for `append`.

* Remove unused arguments.

* Combine types.

* `Persist`: Add documentation.

* Remove unused receiver

* Remove duplicated import.

* Stop using both pointer and value receiver at the same time.

* `verifyAndPopulateColumns`: Remove unused parameter

* Stop using an empty slice literal to declare a variable.
2024-07-29 13:22:38 +02:00
Manu NALEPA
8f431c1a79 PeerDAS: Run reconstruction in parallel. (#14236)
* PeerDAS: Run reconstruction in parallel.

* `isDataAvailableDataColumns` --> `isDataColumnsAvailable`

* `isDataColumnsAvailable`: Return `nil` as soon as half of the columns are received (threshold sketched below).

* Make deepsource happy.
2024-07-29 13:22:38 +02:00
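The availability threshold above relies on the erasure-coding property of PeerDAS: once at least half of the extended columns are held, the rest can be reconstructed locally. A sketch of that check, assuming 128 columns as in the EIP-7594 parameters (the real helper lives in the peerdas package):

package peerdaspreview // illustrative only

const numberOfColumns = 128 // assumption: EIP-7594 NUMBER_OF_COLUMNS

// canSelfReconstruct reports whether enough columns were retrieved to recover
// the missing ones via erasure decoding, so the DA check can stop waiting.
func canSelfReconstruct(retrievedColumns uint64) bool {
	return retrievedColumns >= numberOfColumns/2
}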
Justin Traglia
c3d7069d0d Update ckzg4844 to latest version of das branch (#14223)
* Update ckzg4844 to latest version

* Run go mod tidy

* Remove unnecessary tests & run goimports

* Remove fieldparams from blockchain/kzg

* Add back blank line

* Avoid large copies

* Run gazelle

* Use trusted setup from the specs & fix issue with struct

* Run goimports

* Fix mistake in makeCellsAndProofs

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-07-29 13:22:38 +02:00
Nishant Das
006e8db8bb Add Current Changes (#14231) 2024-07-29 13:22:38 +02:00
Manu NALEPA
5e4f25f25b Implement and use filterPeerForDataColumnsSubnet. (#14230) 2024-07-29 13:22:38 +02:00
Francis Li
52cdd9a538 [PeerDAS] Parallelize data column sampling (#14105)
* PeerDAS: parallelizing sample queries

* PeerDAS: select sample from non custodied columns

* Finish rebase

* Add more test cases
2024-07-29 13:22:38 +02:00
kevaundray
6064e2b5c7 chore!: Use RecoverCellsAndKZGProofs instead of RecoverAllCells -> CellsToBlob -> ComputeCellsAndKZGProofs (#14183)
* use recoverCellsAndKZGProofs

* make recoverAllCells and CellsToBlob private

* chore: all methods now return CellsAndProof struct

* chore: update code
2024-07-29 13:22:38 +02:00
Nishant Das
1fa7903c17 Trigger PeerDAS At Deneb For E2E (#14193)
* Trigger At Deneb

* Fix Rate Limits
2024-07-29 13:22:38 +02:00
Manu NALEPA
467ad45e64 PeerDAS: Add KZG verification when sampling (#14187)
* `validateDataColumn`: Add comments and remove debug computation.

* `sampleDataColumnsFromPeer`: Add KZG verification

* `VerifyKZGInclusionProofColumn`: Add unit test.

* Make deepsource happy.

* Address Nishant's comment.

* Address Nishant's comment.
2024-07-29 13:22:38 +02:00
kevaundray
70a4875c68 chore!: Make Cell be a flat sequence of bytes (#14159)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* make Cells be flattened sequence of bytes

* chore: add test for flattening roundtrip

* chore: remove code that was doing the flattening outside of the kzg package

* fix merge

* fix

* remove now un-needed conversion

* use pointers for Cell parameters

* linter

* rename cell conversion methods (this only applies to old version of c-kzg)
2024-07-29 13:22:38 +02:00
Manu NALEPA
952d2b3b27 Move log from error to debug. (#14194)
Reason: if a peer does not expose its `csc` field in its ENR,
then there is nothing we can do.
2024-07-29 13:22:38 +02:00
Nishant Das
0857060867 Activate PeerDAS with the EIP7594 Fork Epoch (#14184)
* Save All the Current Changes

* Add check for data sampling

* Fix Test

* Gazelle

* Manu's Review

* Fix Test
2024-07-29 13:22:38 +02:00
kevaundray
4b011087d2 chore!: Refactor RecoverBlob to RecoverCellsAndProofs (#14160)
* change recoverBlobs to recoverCellsAndProofs

* modify code to take in the cells and proofs for a particular blob instead of the blob itself

* add CellsAndProofs structure

* modify recoverCellsAndProofs to return `cellsAndProofs` structure

* modify `DataColumnSidecarsForReconstruct` to accept the `cellsAndKZGProofs` structure

* bazel run //:gazelle -- fix

* use kzg abstraction for kzg method

* move CellsAndProofs to kzg.go
2024-07-29 13:21:39 +02:00
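The commit above reshapes recovery around a cells-and-proofs pair per blob. A rough usage sketch against the `kzg` wrapper introduced in this range (assumes at least half of the blob's cells are present; not the exact Prysm call site):

// recoverRow rebuilds the full set of cells and proofs for one blob from the
// partial cells we hold, using the indices of the columns those cells came from.
func recoverRow(indices []uint64, partial []kzg.Cell) ([]kzg.Cell, []kzg.Proof, error) {
	full, err := kzg.RecoverCellsAndKZGProofs(indices, partial)
	if err != nil {
		return nil, nil, err
	}
	return full.Cells, full.Proofs, nil
}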
kevaundray
c14ee7268d chore: Encapsulate all kzg functionality for PeerDAS into the kzg package (#14136)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* use BytesPerBlob constant

* chore: fix some deepsource issues

* one declaration for commans and blobs
2024-07-29 13:21:39 +02:00
Manu NALEPA
5c7c3a6c40 PeerDAS: Implement IncrementalDAS (#14109)
* `ConvertPeerIDToNodeID`: Add tests.

* Remove `extractNodeID` and uses `ConvertPeerIDToNodeID` instead.

* Implement IncrementalDAS.

* `DataColumnSamplingLoop` ==> `DataColumnSamplingRoutine`.

* HypergeomCDF: Add test.

* `GetValidCustodyPeers`: Optimize and add tests.

* Remove blank identifiers.

* Implement `CustodyCountFromRecord`.

* Implement `TestP2P.CustodyCountFromRemotePeer`.

* `NewTestP2P`: Add `swarmt.Option` parameters.

* `incrementalDAS`: Rework and add tests.

* Remove useless warning.
2024-07-29 13:21:39 +02:00
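IncrementalDAS sizes its sample set using a hypergeometric tail bound (the `HypergeomCDF` tested above). A self-contained sketch of such a CDF, i.e. the probability of drawing at most k unavailable columns when sampling n of N columns of which K are unavailable; this mirrors the idea, not necessarily Prysm's exact implementation:

package daspreview

import "math/big"

// hypergeomCDF returns P[X <= k] for X ~ Hypergeometric(N, K, n).
// Assumes k <= n, K <= N and n <= N.
func hypergeomCDF(k, N, K, n uint64) float64 {
	binom := func(a, b uint64) *big.Float {
		return new(big.Float).SetInt(new(big.Int).Binomial(int64(a), int64(b)))
	}
	sum := new(big.Float)
	for i := uint64(0); i <= k; i++ {
		if i > n || i > K || n-i > N-K {
			continue // term is zero
		}
		term := new(big.Float).Quo(new(big.Float).Mul(binom(K, i), binom(N-K, n-i)), binom(N, n))
		sum.Add(sum, term)
	}
	res, _ := sum.Float64()
	return res
}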
Francis Li
b30a9d520f PeerDAS: add data column batch config (#14122) 2024-07-29 13:21:39 +02:00
Francis Li
9cc6a7951d PeerDAS: move custody subnet count into helper function (#14117) 2024-07-29 13:21:39 +02:00
Manu NALEPA
73a1130c98 Fix columns sampling (#14118) 2024-07-29 13:21:39 +02:00
Francis Li
c81aaa70c5 [PeerDAS] implement DataColumnSidecarsByRootReq and fix related bugs (#14103)
* [PeerDAS] add data column related protos and fix data column by root bug

* Add more tests
2024-07-29 13:21:39 +02:00
Francis Li
0c80ddb1b6 [PeerDAS] fixes and tests for gossiping out data columns (#14102)
* [PeerDAS] Minor fixes and tests for gossiping out data columns

* Fix metrics
2024-07-29 13:21:39 +02:00
Francis Li
2863dcb9ea [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-07-29 13:21:39 +02:00
Manu NALEPA
ce4b6b5464 PeerDAS: Stop generating new P2P private key at start. (#14099)
* `privKey`: Improve logs.

* peerDAS: Move functions in file. Add documentation.

* PeerDAS: Remove unused `ComputeExtendedMatrix` and `RecoverMatrix` functions.

* PeerDAS: Stop generating new P2P private key at start.

* Fix sammy' comment.
2024-07-29 13:21:39 +02:00
Manu NALEPA
c93db748ba PeerDAS: Gossip the reconstructed columns (#14079)
* PeerDAS: Broadcast reconstructed data columns that were not seen via gossip.

* Address Nishant's comment.
2024-07-29 13:21:39 +02:00
Manu NALEPA
ccafc1019c PeerDAS: Only saved custodied columns even after reconstruction. (#14083) 2024-07-29 13:21:39 +02:00
Manu NALEPA
6bf6671725 recoverBlobs: Cover the 0 < blobsCount < fieldparams.MaxBlobsPerBlock case. (#14066)
* `recoverBlobs`: Cover the `0 < blobsCount < fieldparams.MaxBlobsPerBlock` case.

* Fix Nishant's comment.
2024-07-29 13:21:39 +02:00
Manu NALEPA
a5b128b983 PeerDAS: Withhold data on purpose. (#14076)
* Introduce hidden flag `data-columns-withhold-count`.

* Address Nishant's comment.
2024-07-29 13:21:39 +02:00
Manu NALEPA
747de1ac95 PeerDAS: Implement / use data column feed from database. (#14062)
* Remove some `_` identifiers.

* Blob storage: Implement a notifier system for data columns.

* `dataColumnSidecarByRootRPCHandler`: Remove ugly `time.Sleep(100 * time.Millisecond)`.

* Address Nishant's comment.
2024-07-29 13:19:43 +02:00
Manu NALEPA
687e164470 PeerDAS: Implement reconstruction. (#14036)
* Wrap errors, add logs.

* `missingColumnRequest`: Fix blobs <-> data columns mix.

* `ColumnIndices`: Return `map[uint64]bool` instead of `[fieldparams.NumberOfColumns]bool`.

* `DataColumnSidecars`: `interfaces.SignedBeaconBlock` ==> `interfaces.ReadOnlySignedBeaconBlock`.

We don't need any of the non read-only methods.

* Fix comments.

* `handleUnblidedBlock` ==> `handleUnblindedBlock`.

* `SaveDataColumn`: Move log from debug to trace.

Previously, if we attempted to save an already existing data column sidecar,
a debug log was printed.

This case can be quite common now that data column reconstruction is enabled.

* `sampling_data_columns.go` --> `data_columns_sampling.go`.

* Reconstruct data columns.
2024-07-29 13:19:43 +02:00
Nishant Das
68b9d9668b Fix Custody Columns (#14021) 2024-07-29 13:19:43 +02:00
Nishant Das
c80c6ef5f4 Disable Evaluators For E2E (#14019)
* Hack E2E

* Fix it For Real

* Gofmt

* Remove
2024-07-29 13:19:43 +02:00
Nishant Das
a279ce6330 Request Data Columns When Fetching Pending Blocks (#14007)
* Support Data Columns For By Root Requests

* Revert Config Changes

* Fix Panic

* Fix Process Block

* Fix Flags

* Lint

* Support Checkpoint Sync

* Manu's Review

* Add Support For Columns in Remaining Methods

* Unmarshal Uncorrectly
2024-07-29 13:19:43 +02:00
Manu NALEPA
e142ece77c Fix CustodyColumns to comply with alpha-2 spectests. (#14008)
* Adding error wrapping

* Fix `CustodyColumnSubnets` tests.
2024-07-29 13:19:43 +02:00
Manu NALEPA
7f6a5f44f3 Fix beacon chain config. (#14017) 2024-07-29 13:19:43 +02:00
Nishant Das
a280c6b671 Set Custody Count Correctly (#14004)
* Set Custody Count Correctly

* Fix Discovery Count
2024-07-29 13:19:43 +02:00
Manu NALEPA
5371299b32 Sample from peers some data columns. (#13980)
* PeerDAS: Implement sampling.

* `TestNewRateLimiter`: Fix with the new number of expected registered topics.
2024-07-29 13:19:43 +02:00
Nishant Das
a23bb9f0bd Implement Data Columns By Range Request And Response Methods (#13972)
* Add Data Structure for New Request Type

* Add Data Column By Range Handler

* Add Data Column Request Methods

* Add new validation for columns by range requests

* Fix Build

* Allow Prysm Node To Fetch Data Columns

* Allow Prysm Node To Fetch Data Columns And Sync

* Bug Fixes For Interop

* GoFmt

* Use different var

* Manu's Review
2024-07-29 13:19:43 +02:00
Nishant Das
bca3806794 Enable E2E For PeerDAS (#13945)
* Enable E2E And Add Fixes

* Register Same Topic For Data Columns

* Initialize Capacity Of Slice

* Fix Initialization of Data Column Receiver

* Remove Mix In From Merkle Proof

* E2E: Subscribe to all subnets.

* Remove Index Check

* Remaining Bug Fixes to Get It Working

* Change Evaluator to Allow Test to Finish

* Fix Build

* Add Data Column Verification

* Fix LoopVar Bug

* Do Not Allocate Memory

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Gofmt

* Fix It Again

* Fix Test Setup

* Fix Build

* Fix Trusted Setup panic

* Fix Trusted Setup panic

* Use New Test

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-07-29 13:19:43 +02:00
Justin Traglia
a14b1f11a4 [PeerDAS] Upgrade c-kzg-4844 package (#13967)
* Upgrade c-kzg-4844 package

* Upgrade bazel deps
2024-07-29 13:19:42 +02:00
Manu NALEPA
6850904568 SendDataColumnSidecarByRoot: Return RODataColumn instead of ROBlob. (#13957)
* `SendDataColumnSidecarByRoot`: Return `RODataColumn` instead of `ROBlob`.

* Make deepsource happier.
2024-07-29 13:19:42 +02:00
Manu NALEPA
a3887807bc Spectests (#13940)
* Update `consensus_spec_version` to `v1.5.0-alpha.1`.

* `CustodyColumns`: Fix and implement spec tests.

* Make deepsource happy.

* `^uint64(0)` => `math.MaxUint64`.

* Fix `TestLoadConfigFile` test.
2024-07-29 13:19:42 +02:00
Nishant Das
0914df3bbd Add DA Check For Data Columns (#13938)
* Add new DA check

* Exit early in the event no commitments exist.

* Gazelle

* Fix Mock Broadcaster

* Fix Test Setup

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Fix Build

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-07-29 13:19:42 +02:00
Manu NALEPA
e501279798 Implement peer DAS proposer RPC (#13922)
* Remove capital letter from error messages.

* `[4]byte` => `[fieldparams.VersionLength]byte`.

* Prometheus: Remove extra `committee`.

They are probably due to a bad copy/paste.

Note: The name of the probe itself is remaining,
to ensure backward compatibility.

* Implement Proposer RPC for data columns.

* Fix TestProposer_ProposeBlock_OK test.

* Remove default peerDAS activation.

* `validateDataColumn`: Workaround to return a `VerifiedRODataColumn`
2024-07-29 13:19:42 +02:00
Nishant Das
6d61e1ea2a Update .bazelrc (#13931) 2024-07-29 13:19:42 +02:00
Manu NALEPA
4d45fc0e87 Implement custody_subnet_count ENR field. (#13915)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
2024-07-29 13:19:42 +02:00
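The `csc` entry follows go-ethereum's enr.Entry convention. A small sketch of setting and reading it (the Csc type mirrors the one added later in this range; the surrounding code is illustrative, not Prysm's):

package enrpreview

import "github.com/ethereum/go-ethereum/p2p/enr"

// Csc is the custody_subnet_count ENR entry.
type Csc uint64

func (Csc) ENRKey() string { return "csc" }

// readCustodyCount reads a peer's advertised custody subnet count from its ENR,
// returning ok=false when the peer does not expose the field.
func readCustodyCount(record *enr.Record) (uint64, bool) {
	var csc Csc
	if err := record.Load(&csc); err != nil {
		return 0, false
	}
	return uint64(csc), true
}

// advertiseCustodyCount sets our own csc entry before the record is signed.
func advertiseCustodyCount(record *enr.Record, count uint64) {
	record.Set(Csc(count))
}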
Manu NALEPA
23d7f87cef Peer das core (#13877)
* Bump `c-kzg-4844` lib to the `das` branch.

* Implement `MerkleProofKZGCommitments`.

* Implement `das-core.md`.

* Use `peerdas.CustodyColumnSubnets` and `peerdas.CustodyColumns`.

* `CustodyColumnSubnets`: Include `i` in the for loop.

* Remove `computeSubscribedColumnSubnet`.

* Remove `peerdas.CustodyColumns` out of the for loop.
2024-07-29 13:19:42 +02:00
Nishant Das
a4db00d7ef Add Request And Response RPC Methods For Data Columns (#13909)
* Add RPC Handler

* Add Column Requests

* Update beacon-chain/db/filesystem/blob.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/p2p/rpc_topic_mappings.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Manu's Review

* Interface Fixes

* mock manager

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-07-29 13:19:42 +02:00
Nishant Das
b6a4e1213c Add Data Column Gossip Handlers (#13894)
* Add Data Column Subscriber

* Add Data Column Validator

* Wire all Handlers In

* Fix Build

* Fix Test

* Fix IP in Test

* Fix IP in Test
2024-07-29 13:19:42 +02:00
Nishant Das
86088d9c85 Add Support For Discovery Of Column Subnets (#13883)
* Add Support For Discovery Of Column Subnets

* Lint for SubnetsPerNode

* Manu's Review

* Change to a better name
2024-07-29 13:19:42 +02:00
Nishant Das
e10e610589 add in networking params (#13866) 2024-07-29 13:19:42 +02:00
Nishant Das
e16a25b6cc add it (#13865) 2024-07-29 13:19:42 +02:00
Manu NALEPA
f129dfdee6 Add in column sidecars protos (#13862) 2024-07-29 13:18:55 +02:00
173 changed files with 15613 additions and 2136 deletions

View File

@@ -22,6 +22,7 @@ coverage --define=coverage_enabled=1
 build --workspace_status_command=./hack/workspace_status.sh
 build --define blst_disabled=false
+build --compilation_mode=opt
 run --define blst_disabled=false
 build:blst_disabled --define blst_disabled=true

View File

@@ -48,13 +48,13 @@ jobs:
 - name: Set up Go 1.22
 uses: actions/setup-go@v3
 with:
-go-version: '1.22.3'
+go-version: '1.22.6'
 id: go
 - name: Golangci-lint
 uses: golangci/golangci-lint-action@v3
 with:
-version: v1.55.2
+version: v1.60.2
 args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
 build:
@@ -64,7 +64,7 @@ jobs:
 - name: Set up Go 1.x
 uses: actions/setup-go@v2
 with:
-go-version: '1.22.3'
+go-version: '1.22.6'
 id: go
 - name: Check out code into the Go module directory

View File

@@ -6,7 +6,7 @@ run:
 - proto
 - tools/analyzers
 timeout: 10m
-go: '1.22.3'
+go: '1.22.6'
 linters:
 enable-all: true

View File

@@ -26,6 +26,7 @@ go_library(
"receive_attestation.go", "receive_attestation.go",
"receive_blob.go", "receive_blob.go",
"receive_block.go", "receive_block.go",
"receive_data_column.go",
"service.go", "service.go",
"tracked_proposer.go", "tracked_proposer.go",
"weak_subjectivity_checks.go", "weak_subjectivity_checks.go",
@@ -48,6 +49,7 @@ go_library(
"//beacon-chain/core/feed:go_default_library", "//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library", "//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library", "//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library", "//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library", "//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library", "//beacon-chain/core/transition:go_default_library",
@@ -159,6 +161,7 @@ go_test(
"//beacon-chain/operations/slashings:go_default_library", "//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library", "//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library", "//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/startup:go_default_library", "//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library", "//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library", "//beacon-chain/state/state-native:go_default_library",

View File

@@ -33,6 +33,7 @@ var (
 )
 var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")
+var errMaxDataColumnsExceeded = errors.New("Expected data columns for node exceeds NUMBER_OF_COLUMNS")
 // An invalid block is the block that fails state transition based on the core protocol rules.
 // The beacon node shall not be accepting nor building blocks that branch off from an invalid block.

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
 go_library(
 name = "go_default_library",
 srcs = [
+"kzg.go",
 "trusted_setup.go",
 "validation.go",
 ],
@@ -12,6 +13,9 @@ go_library(
 deps = [
 "//consensus-types/blocks:go_default_library",
 "@com_github_crate_crypto_go_kzg_4844//:go_default_library",
+"@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
+"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
+"@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
 "@com_github_pkg_errors//:go_default_library",
 ],
 )

View File

@@ -0,0 +1,109 @@
package kzg
import (
"errors"
ckzg4844 "github.com/ethereum/c-kzg-4844/bindings/go"
"github.com/ethereum/go-ethereum/crypto/kzg4844"
)
// BytesPerBlob is the number of bytes in a single blob.
const BytesPerBlob = ckzg4844.BytesPerBlob
// Blob represents a serialized chunk of data.
type Blob [BytesPerBlob]byte
// BytesPerCell is the number of bytes in a single cell.
const BytesPerCell = ckzg4844.BytesPerCell
// Cell represents a chunk of an encoded Blob.
type Cell [BytesPerCell]byte
// Commitment represent a KZG commitment to a Blob.
type Commitment [48]byte
// Proof represents a KZG proof that attests to the validity of a Blob or parts of it.
type Proof [48]byte
// Bytes48 is a 48-byte array.
type Bytes48 = ckzg4844.Bytes48
// Bytes32 is a 32-byte array.
type Bytes32 = ckzg4844.Bytes32
// CellsAndProofs represents the Cells and Proofs corresponding to
// a single blob.
type CellsAndProofs struct {
Cells []Cell
Proofs []Proof
}
func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
comm, err := kzg4844.BlobToCommitment(kzg4844.Blob(*blob))
if err != nil {
return Commitment{}, err
}
return Commitment(comm), nil
}
func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
proof, err := kzg4844.ComputeBlobProof(kzg4844.Blob(*blob), kzg4844.Commitment(commitment))
if err != nil {
return [48]byte{}, err
}
return Proof(proof), nil
}
func ComputeCellsAndKZGProofs(blob *Blob) (CellsAndProofs, error) {
ckzgBlob := (*ckzg4844.Blob)(blob)
ckzgCells, ckzgProofs, err := ckzg4844.ComputeCellsAndKZGProofs(ckzgBlob)
if err != nil {
return CellsAndProofs{}, err
}
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, cells []Cell, proofsBytes []Bytes48) (bool, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgCells := make([]ckzg4844.Cell, len(cells))
for i := range cells {
ckzgCells[i] = ckzg4844.Cell(cells[i])
}
return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
}
func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsAndProofs, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))
for i := range partialCells {
ckzgPartialCells[i] = ckzg4844.Cell(partialCells[i])
}
ckzgCells, ckzgProofs, err := ckzg4844.RecoverCellsAndKZGProofs(cellIndices, ckzgPartialCells)
if err != nil {
return CellsAndProofs{}, err
}
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
// Convert cells/proofs to the CellsAndProofs type defined in this package.
func makeCellsAndProofs(ckzgCells []ckzg4844.Cell, ckzgProofs []ckzg4844.KZGProof) (CellsAndProofs, error) {
if len(ckzgCells) != len(ckzgProofs) {
return CellsAndProofs{}, errors.New("different number of cells/proofs")
}
var cells []Cell
var proofs []Proof
for i := range ckzgCells {
cells = append(cells, Cell(ckzgCells[i]))
proofs = append(proofs, Proof(ckzgProofs[i]))
}
return CellsAndProofs{
Cells: cells,
Proofs: proofs,
}, nil
}
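A hypothetical usage of the kzg wrapper above: extend one blob into cells and proofs, then batch-verify a few of them. Error handling is elided, the trusted setup is assumed to have been loaded via Start(), and the per-cell commitments slice layout is an assumption:

blob := &kzg.Blob{} // in practice, filled with real blob data
commitment, _ := kzg.BlobToKZGCommitment(blob)
cellsAndProofs, _ := kzg.ComputeCellsAndKZGProofs(blob)

indices := []uint64{0, 1, 2} // cells we want to verify
var commitments, proofs []kzg.Bytes48
var cells []kzg.Cell
for _, i := range indices {
	commitments = append(commitments, kzg.Bytes48(commitment))
	cells = append(cells, cellsAndProofs.Cells[i])
	proofs = append(proofs, kzg.Bytes48(cellsAndProofs.Proofs[i]))
}
ok, _ := kzg.VerifyCellKZGProofBatch(commitments, indices, cells, proofs)
_ = ok // true when every sampled cell matches its commitment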

View File

@@ -5,6 +5,8 @@ import (
"encoding/json" "encoding/json"
GoKZG "github.com/crate-crypto/go-kzg-4844" GoKZG "github.com/crate-crypto/go-kzg-4844"
CKZG "github.com/ethereum/c-kzg-4844/bindings/go"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors" "github.com/pkg/errors"
) )
@@ -12,17 +14,53 @@ var (
//go:embed trusted_setup.json //go:embed trusted_setup.json
embeddedTrustedSetup []byte // 1.2Mb embeddedTrustedSetup []byte // 1.2Mb
kzgContext *GoKZG.Context kzgContext *GoKZG.Context
kzgLoaded bool
) )
type TrustedSetup struct {
G1Monomial [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_monomial"`
G1Lagrange [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_lagrange"`
G2Monomial [65]GoKZG.G2CompressedHexStr `json:"g2_monomial"`
}
func Start() error { func Start() error {
parsedSetup := GoKZG.JSONTrustedSetup{} trustedSetup := &TrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, &parsedSetup) err := json.Unmarshal(embeddedTrustedSetup, trustedSetup)
if err != nil { if err != nil {
return errors.Wrap(err, "could not parse trusted setup JSON") return errors.Wrap(err, "could not parse trusted setup JSON")
} }
kzgContext, err = GoKZG.NewContext4096(&parsedSetup) kzgContext, err = GoKZG.NewContext4096(&GoKZG.JSONTrustedSetup{
SetupG2: trustedSetup.G2Monomial[:],
SetupG1Lagrange: trustedSetup.G1Lagrange})
if err != nil { if err != nil {
return errors.Wrap(err, "could not initialize go-kzg context") return errors.Wrap(err, "could not initialize go-kzg context")
} }
// Length of a G1 point, converted from hex to binary.
g1MonomialBytes := make([]byte, len(trustedSetup.G1Monomial)*(len(trustedSetup.G1Monomial[0])-2)/2)
for i, g1 := range &trustedSetup.G1Monomial {
copy(g1MonomialBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G1 point, converted from hex to binary.
g1LagrangeBytes := make([]byte, len(trustedSetup.G1Lagrange)*(len(trustedSetup.G1Lagrange[0])-2)/2)
for i, g1 := range &trustedSetup.G1Lagrange {
copy(g1LagrangeBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G2 point, converted from hex to binary.
g2MonomialBytes := make([]byte, len(trustedSetup.G2Monomial)*(len(trustedSetup.G2Monomial[0])-2)/2)
for i, g2 := range &trustedSetup.G2Monomial {
copy(g2MonomialBytes[i*(len(g2)-2)/2:], hexutil.MustDecode(g2))
}
if !kzgLoaded {
// TODO: Provide a configuration option for this.
var precompute uint = 0
// Free the current trusted setup before running this method. CKZG
// panics if the same setup is run multiple times.
if err = CKZG.LoadTrustedSetup(g1MonomialBytes, g1LagrangeBytes, g2MonomialBytes, precompute); err != nil {
panic(err)
}
}
kzgLoaded = true
return nil return nil
} }

File diff suppressed because it is too large.

View File

@@ -118,9 +118,9 @@ func WithBLSToExecPool(p blstoexec.PoolManager) Option {
 }
 // WithP2PBroadcaster to broadcast messages after appropriate processing.
-func WithP2PBroadcaster(p p2p.Broadcaster) Option {
+func WithP2PBroadcaster(p p2p.Acceser) Option {
 return func(s *Service) error {
-s.cfg.P2p = p
+s.cfg.P2P = p
 return nil
 }
 }

View File

@@ -6,10 +6,14 @@ import (
"time" "time"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state" statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time" coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das" "github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
@@ -29,8 +33,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation" "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/runtime/version" "github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots" "github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
) )
// A custom slot deadline for processing state slots in our cache. // A custom slot deadline for processing state slots in our cache.
@@ -499,7 +501,7 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
 }
 indices, err := bs.Indices(root)
 if err != nil {
-return nil, err
+return nil, errors.Wrap(err, "indices")
 }
 missing := make(map[uint64]struct{}, len(expected))
 for i := range expected {
@@ -513,12 +515,39 @@
 return missing, nil
 }
func missingDataColumns(bs *filesystem.BlobStorage, root [32]byte, expected map[uint64]bool) (map[uint64]bool, error) {
if len(expected) == 0 {
return nil, nil
}
if len(expected) > int(params.BeaconConfig().NumberOfColumns) {
return nil, errMaxDataColumnsExceeded
}
indices, err := bs.ColumnIndices(root)
if err != nil {
return nil, err
}
missing := make(map[uint64]bool, len(expected))
for col := range expected {
if !indices[col] {
missing[col] = true
}
}
return missing, nil
}
 // isDataAvailable blocks until all BlobSidecars committed to in the block are available,
 // or an error or context cancellation occurs. A nil result means that the data availability check is successful.
 // The function will first check the database to see if all sidecars have been persisted. If any
 // sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
 // closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
 func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
+if coreTime.PeerDASIsActive(signed.Block().Slot()) {
+return s.isDataColumnsAvailable(ctx, root, signed)
+}
 if signed.Version() < version.Deneb {
 return nil
 }
@@ -548,7 +577,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
 // get a map of BlobSidecar indices that are not currently available.
 missing, err := missingIndices(s.blobStorage, root, kzgCommitments)
 if err != nil {
-return err
+return errors.Wrap(err, "missing indices")
 }
 // If there are no missing indices, all BlobSidecars are available.
 if len(missing) == 0 {
@@ -567,8 +596,13 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
 if len(missing) == 0 {
 return
 }
-log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
-Error("Still waiting for DA check at slot end.")
+log.WithFields(logrus.Fields{
+"slot": signed.Block().Slot(),
+"root": fmt.Sprintf("%#x", root),
+"blobsExpected": expected,
+"blobsWaiting": len(missing),
+}).Error("Still waiting for blobs DA check at slot end.")
 })
 defer nst.Stop()
 }
@@ -590,12 +624,130 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
 }
 }
-func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
-return logrus.Fields{
-"slot": slot,
-"root": fmt.Sprintf("%#x", root),
-"blobsExpected": expected,
-"blobsWaiting": missing,
+func (s *Service) isDataColumnsAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
+if signed.Version() < version.Deneb {
+return nil
+}
+block := signed.Block()
+if block == nil {
+return errors.New("invalid nil beacon block")
+}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(block.Slot()), slots.ToEpoch(s.CurrentSlot())) {
return nil
}
body := block.Body()
if body == nil {
return errors.New("invalid nil beacon block body")
}
kzgCommitments, err := body.BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "blob KZG commitments")
}
// If the block has no commitments, there is nothing to wait for.
if len(kzgCommitments) == 0 {
return nil
}
colMap, err := peerdas.CustodyColumns(s.cfg.P2P.NodeID(), peerdas.CustodySubnetCount())
if err != nil {
return errors.Wrap(err, "custody columns")
}
// Expected is the number of custody data columns a node is expected to have.
expected := len(colMap)
if expected == 0 {
return nil
}
// Subscribe to newly stored data columns in the database.
rootIndexChan := make(chan filesystem.RootIndexPair)
subscription := s.blobStorage.DataColumnFeed.Subscribe(rootIndexChan)
defer subscription.Unsubscribe()
// Get the count of data columns we already have in the store.
retrievedDataColumns, err := s.blobStorage.ColumnIndices(root)
if err != nil {
return errors.Wrap(err, "column indices")
}
retrievedDataColumnsCount := uint64(len(retrievedDataColumns))
// As soon as we have more than half of the data columns, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if peerdas.CanSelfReconstruct(retrievedDataColumnsCount) {
return nil
}
// Get a map of data column indices that are not currently available.
missing, err := missingDataColumns(s.blobStorage, root, colMap)
if err != nil {
return err
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missing) == 0 {
return nil
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
if len(missing) == 0 {
return
}
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": len(missing),
}).Error("Still waiting for data columns DA check at slot end.")
})
defer nst.Stop()
}
for {
select {
case rootIndex := <-rootIndexChan:
if rootIndex.Root != root {
// This is not the root we are looking for.
continue
}
// This is a data column we are expecting.
if _, ok := missing[rootIndex.Index]; ok {
retrievedDataColumnsCount++
}
// As soon as we have more than half of the data columns, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if peerdas.CanSelfReconstruct(retrievedDataColumnsCount) {
return nil
}
// Remove the index from the missing map.
delete(missing, rootIndex.Index)
// Exit if there are no more missing data columns.
if len(missing) == 0 {
return nil
}
case <-ctx.Done():
missingIndexes := make([]uint64, 0, len(missing))
for val := range missing {
copiedVal := val
missingIndexes = append(missingIndexes, copiedVal)
}
return errors.Wrapf(ctx.Err(), "context deadline waiting for data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndexes)
}
} }
} }
@@ -678,7 +830,7 @@ func (s *Service) waitForSync() error {
 }
 }
-func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot [32]byte, parentRoot [32]byte) error {
+func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [32]byte) error {
 if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
 return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
 }

View File

@@ -51,6 +51,12 @@ type BlobReceiver interface {
 ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
 }
// DataColumnReceiver interface defines the methods of chain service for receiving new
// data columns
type DataColumnReceiver interface {
ReceiveDataColumn(blocks.VerifiedRODataColumn) error
}
 // SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
 type SlashingReceiver interface {
 ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)

View File

@@ -0,0 +1,14 @@
package blockchain
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
)
func (s *Service) ReceiveDataColumn(ds blocks.VerifiedRODataColumn) error {
if err := s.blobStorage.SaveDataColumn(ds); err != nil {
return errors.Wrap(err, "save data column")
}
return nil
}

View File

@@ -81,7 +81,7 @@ type config struct {
 ExitPool voluntaryexits.PoolManager
 SlashingPool slashings.PoolManager
 BLSToExecPool blstoexec.PoolManager
-P2p p2p.Broadcaster
+P2P p2p.Acceser
 MaxRoutines int
 StateNotifier statefeed.Notifier
 ForkChoiceStore f.ForkChoicer
@@ -106,15 +106,17 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
 type blobNotifierMap struct {
 sync.RWMutex
 notifiers map[[32]byte]chan uint64
-seenIndex map[[32]byte][fieldparams.MaxBlobsPerBlock]bool
+seenIndex map[[32]byte][fieldparams.NumberOfColumns]bool
 }
 // notifyIndex notifies a blob by its index for a given root.
 // It uses internal maps to keep track of seen indices and notifier channels.
 func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
-if idx >= fieldparams.MaxBlobsPerBlock {
-return
-}
+// TODO: Separate Data Columns from blobs
+/*
+if idx >= fieldparams.MaxBlobsPerBlock {
+return
+}*/
 bn.Lock()
 seen := bn.seenIndex[root]
@@ -128,7 +130,7 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
 // Retrieve or create the notifier channel for the given root.
 c, ok := bn.notifiers[root]
 if !ok {
-c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
+c = make(chan uint64, fieldparams.NumberOfColumns)
 bn.notifiers[root] = c
 }
@@ -142,7 +144,7 @@ func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
 defer bn.Unlock()
 c, ok := bn.notifiers[root]
 if !ok {
-c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
+c = make(chan uint64, fieldparams.NumberOfColumns)
 bn.notifiers[root] = c
 }
 return c
@@ -168,7 +170,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
 ctx, cancel := context.WithCancel(ctx)
 bn := &blobNotifierMap{
 notifiers: make(map[[32]byte]chan uint64),
-seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
+seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
 }
 srv := &Service{
 ctx: ctx,

View File

@@ -97,7 +97,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
 WithAttestationPool(attestations.NewPool()),
 WithSlashingPool(slashings.NewPool()),
 WithExitPool(voluntaryexits.NewPool()),
-WithP2PBroadcaster(&mockBroadcaster{}),
+WithP2PBroadcaster(&mockAccesser{}),
 WithStateNotifier(&mockBeaconNode{}),
 WithForkChoiceStore(fc),
 WithAttestationService(attService),
@@ -579,7 +579,7 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
 func TestNotifyIndex(t *testing.T) {
 // Initialize a blobNotifierMap
 bn := &blobNotifierMap{
-seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
+seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
 notifiers: make(map[[32]byte]chan uint64),
 }

View File

@@ -19,6 +19,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations" "github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/blstoexec" "github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2pTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup" "github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen" "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -45,6 +46,11 @@ type mockBroadcaster struct {
broadcastCalled bool broadcastCalled bool
} }
type mockAccesser struct {
mockBroadcaster
p2pTesting.MockPeerManager
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error { func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true mb.broadcastCalled = true
return nil return nil
@@ -65,6 +71,11 @@ func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.B
return nil return nil
} }
func (mb *mockBroadcaster) BroadcastDataColumn(_ context.Context, _ uint64, _ *ethpb.DataColumnSidecar) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) { func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
} }

View File

@@ -628,6 +628,11 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
 return nil
 }
+// ReceiveDataColumn implements the same method in chain service
+func (*ChainService) ReceiveDataColumn(_ blocks.VerifiedRODataColumn) error {
+return nil
+}
 // TargetRootForEpoch mocks the same method in the chain service
 func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
 return c.TargetRoot, nil

View File

@@ -8,6 +8,7 @@ go_library(
"attestation_data.go", "attestation_data.go",
"balance_cache_key.go", "balance_cache_key.go",
"checkpoint_state.go", "checkpoint_state.go",
"column_subnet_ids.go",
"committee.go", "committee.go",
"committee_disabled.go", # keep "committee_disabled.go", # keep
"committees.go", "committees.go",

beacon-chain/cache/column_subnet_ids.go (new file, 70 lines)
View File

@@ -0,0 +1,70 @@
package cache
import (
"sync"
"time"
"github.com/patrickmn/go-cache"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
type columnSubnetIDs struct {
colSubCache *cache.Cache
colSubLock sync.RWMutex
}
// ColumnSubnetIDs for column subnet participants
var ColumnSubnetIDs = newColumnSubnetIDs()
const columnKey = "columns"
func newColumnSubnetIDs() *columnSubnetIDs {
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
epochDuration := time.Duration(slotsPerEpoch.Mul(secondsPerSlot))
// Set the default duration of a column subnet subscription as the column expiry period.
minEpochsForDataColumnSidecarsRequest := time.Duration(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
subLength := epochDuration * minEpochsForDataColumnSidecarsRequest
persistentCache := cache.New(subLength*time.Second, epochDuration*time.Second)
return &columnSubnetIDs{colSubCache: persistentCache}
}
// GetColumnSubnets retrieves the data column subnets.
func (s *columnSubnetIDs) GetColumnSubnets() ([]uint64, bool, time.Time) {
s.colSubLock.RLock()
defer s.colSubLock.RUnlock()
id, duration, ok := s.colSubCache.GetWithExpiration(columnKey)
if !ok {
return nil, false, time.Time{}
}
// Retrieve indices from the cache.
idxs, ok := id.([]uint64)
if !ok {
return nil, false, time.Time{}
}
return idxs, ok, duration
}
// AddColumnSubnets adds the relevant data column subnets.
func (s *columnSubnetIDs) AddColumnSubnets(colIdx []uint64) {
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Set(columnKey, colIdx, 0)
}
// EmptyAllCaches empties out all the related caches and flushes any stored
// entries on them. This should only ever be used for testing, in normal
// production, handling of the relevant subnets for each role is done
// separately.
func (s *columnSubnetIDs) EmptyAllCaches() {
// Clear the cache.
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Flush()
}
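A possible call pattern for the cache above (illustrative; the subnet values are arbitrary):

// refreshColumnSubnets sketches how a caller could use the ColumnSubnetIDs cache.
func refreshColumnSubnets() {
	// Record the column subnets we are currently subscribed to.
	cache.ColumnSubnetIDs.AddColumnSubnets([]uint64{1, 5, 9})

	// Later, check whether the subscription is still cached and when it expires.
	if subnets, ok, expiry := cache.ColumnSubnetIDs.GetColumnSubnets(); ok {
		_ = subnets // still-valid column subnets
		_ = expiry  // time at which the cached entry lapses
	}
}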

View File

@@ -96,6 +96,24 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
 return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
 }
func VerifyBlockHeaderSignatureUsingCurrentFork(beaconState state.BeaconState, header *ethpb.SignedBeaconBlockHeader) error {
currentEpoch := slots.ToEpoch(header.Header.Slot)
fork, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return err
}
proposer, err := beaconState.ValidatorAtIndex(header.Header.ProposerIndex)
if err != nil {
return err
}
proposerPubKey := proposer.PublicKey
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
 // VerifyBlockSignatureUsingCurrentFork verifies the proposer signature of a beacon block. This differs
 // from the above method by not using fork data from the state and instead retrieving it
 // via the respective epoch.

View File

@@ -32,6 +32,9 @@ const (
 // AttesterSlashingReceived is sent after an attester slashing is received from gossip or rpc
 AttesterSlashingReceived = 8
+// DataColumnSidecarReceived is sent after a data column sidecar is received from gossip or rpc.
+DataColumnSidecarReceived = 9
 )
 // UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -77,3 +80,7 @@ type ProposerSlashingReceivedData struct {
 type AttesterSlashingReceivedData struct {
 AttesterSlashing ethpb.AttSlashing
 }
type DataColumnSidecarReceivedData struct {
DataColumn *blocks.VerifiedRODataColumn
}

View File

@@ -78,6 +78,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
 func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
 helpers.ClearCache()
+params.SetupTestConfigCleanup(t)
 validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
 syncCommittee := &ethpb.SyncCommittee{
@@ -264,6 +265,7 @@
 }
 func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
+params.SetupTestConfigCleanup(t)
 helpers.ClearCache()
 validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)

View File

@@ -0,0 +1,38 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["helpers.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["helpers_test.go"],
deps = [
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -0,0 +1,359 @@
package peerdas
import (
"encoding/binary"
"math"
"math/big"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/holiman/uint256"
errors "github.com/pkg/errors"
kzg "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
const (
CustodySubnetCountEnrKey = "csc"
)
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
type Csc uint64
func (Csc) ENRKey() string { return CustodySubnetCountEnrKey }
var (
// Custom errors
errCustodySubnetCountTooLarge = errors.New("custody subnet count larger than data column sidecar subnet count")
errIndexTooLarge = errors.New("column index is larger than the specified columns count")
errMismatchLength = errors.New("mismatch in the length of the commitments and proofs")
errRecordNil = errors.New("record is nil")
errCannotLoadCustodySubnetCount = errors.New("cannot load the custody subnet count from peer")
// maxUint256 is the maximum value of a uint256.
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
)
// CustodyColumnSubnets computes the subnets the node should participate in for custody.
func CustodyColumnSubnets(nodeId enode.ID, custodySubnetCount uint64) (map[uint64]bool, error) {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Check if the custody subnet count is larger than the data column sidecar subnet count.
if custodySubnetCount > dataColumnSidecarSubnetCount {
return nil, errCustodySubnetCountTooLarge
}
// First, compute the subnet IDs that the node should participate in.
subnetIds := make(map[uint64]bool, custodySubnetCount)
one := uint256.NewInt(1)
for currentId := new(uint256.Int).SetBytes(nodeId.Bytes()); uint64(len(subnetIds)) < custodySubnetCount; currentId.Add(currentId, one) {
// Convert to big endian bytes.
currentIdBytesBigEndian := currentId.Bytes32()
// Convert to little endian.
currentIdBytesLittleEndian := bytesutil.ReverseByteOrder(currentIdBytesBigEndian[:])
// Hash the result.
hashedCurrentId := hash.Hash(currentIdBytesLittleEndian)
// Get the subnet ID.
subnetId := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % dataColumnSidecarSubnetCount
// Add the subnet to the map.
subnetIds[subnetId] = true
// Overflow prevention.
if currentId.Cmp(maxUint256) == 0 {
currentId = uint256.NewInt(0)
}
}
return subnetIds, nil
}
// CustodyColumns computes the columns the node should custody.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#helper-functions
func CustodyColumns(nodeId enode.ID, custodySubnetCount uint64) (map[uint64]bool, error) {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Compute the custodied subnets.
subnetIds, err := CustodyColumnSubnets(nodeId, custodySubnetCount)
if err != nil {
return nil, errors.Wrap(err, "custody subnets")
}
columnsPerSubnet := fieldparams.NumberOfColumns / dataColumnSidecarSubnetCount
// Knowing the subnet ID and the number of columns per subnet, select all the columns the node should custody.
// Columns belonging to the same subnet are contiguous.
columnIndices := make(map[uint64]bool, custodySubnetCount*columnsPerSubnet)
for i := uint64(0); i < columnsPerSubnet; i++ {
for subnetId := range subnetIds {
columnIndex := dataColumnSidecarSubnetCount*i + subnetId
columnIndices[columnIndex] = true
}
}
return columnIndices, nil
}
// DataColumnSidecars computes the data column sidecars from the signed block and blobs.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#recover_matrix
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, blobs []kzg.Blob) ([]*ethpb.DataColumnSidecar, error) {
blobsCount := len(blobs)
if blobsCount == 0 {
return nil, nil
}
// Get the signed block header.
signedBlockHeader, err := signedBlock.Header()
if err != nil {
return nil, errors.Wrap(err, "signed block header")
}
// Get the block body.
block := signedBlock.Block()
blockBody := block.Body()
// Get the blob KZG commitments.
blobKzgCommitments, err := blockBody.BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Compute the KZG commitments inclusion proof.
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(blockBody)
if err != nil {
return nil, errors.Wrap(err, "merkle proof ZKG commitments")
}
// Compute cells and proofs.
cellsAndProofs := make([]kzg.CellsAndProofs, 0, blobsCount)
for i := range blobs {
blob := &blobs[i]
blobCellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(blob)
if err != nil {
return nil, errors.Wrap(err, "compute cells and KZG proofs")
}
cellsAndProofs = append(cellsAndProofs, blobCellsAndProofs)
}
// Get the column sidecars.
sidecars := make([]*ethpb.DataColumnSidecar, 0, fieldparams.NumberOfColumns)
for columnIndex := uint64(0); columnIndex < fieldparams.NumberOfColumns; columnIndex++ {
column := make([]kzg.Cell, 0, blobsCount)
kzgProofOfColumn := make([]kzg.Proof, 0, blobsCount)
for rowIndex := 0; rowIndex < blobsCount; rowIndex++ {
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
cell := cellsForRow[columnIndex]
column = append(column, cell)
kzgProof := proofsForRow[columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
columnBytes = append(columnBytes, column[i][:])
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
copiedProof := kzgProof
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, copiedProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
ColumnIndex: columnIndex,
DataColumn: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProof: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
return sidecars, nil
}
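// Layout note: cellsAndProofs is indexed as [blobIndex][columnIndex], so the sidecar for
// column j carries the j-th cell and the j-th proof of every blob in the block, together
// with the block's full commitment list and a single, shared inclusion proof.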
// DataColumnSidecarsForReconstruct is a TEMPORARY function until there is an official specification for it.
// It is scheduled for deletion.
func DataColumnSidecarsForReconstruct(
blobKzgCommitments [][]byte,
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
kzgCommitmentsInclusionProof [][]byte,
cellsAndProofs []kzg.CellsAndProofs,
) ([]*ethpb.DataColumnSidecar, error) {
// Each CellsAndProofs corresponds to a Blob
// So we can get the BlobCount by checking the length of CellsAndProofs
blobsCount := len(cellsAndProofs)
if blobsCount == 0 {
return nil, nil
}
// Get the column sidecars.
sidecars := make([]*ethpb.DataColumnSidecar, 0, fieldparams.NumberOfColumns)
for columnIndex := uint64(0); columnIndex < fieldparams.NumberOfColumns; columnIndex++ {
column := make([]kzg.Cell, 0, blobsCount)
kzgProofOfColumn := make([]kzg.Proof, 0, blobsCount)
for rowIndex := 0; rowIndex < blobsCount; rowIndex++ {
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
cell := cellsForRow[columnIndex]
column = append(column, cell)
kzgProof := proofsForRow[columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
columnBytes = append(columnBytes, column[i][:])
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
copiedProof := kzgProof
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, copiedProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
ColumnIndex: columnIndex,
DataColumn: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProof: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
return sidecars, nil
}
// VerifyDataColumnSidecarKZGProofs verifies the provided KZG Proofs for the particular
// data column.
func VerifyDataColumnSidecarKZGProofs(sc *ethpb.DataColumnSidecar) (bool, error) {
if sc.ColumnIndex >= params.BeaconConfig().NumberOfColumns {
return false, errIndexTooLarge
}
if len(sc.DataColumn) != len(sc.KzgCommitments) || len(sc.KzgCommitments) != len(sc.KzgProof) {
return false, errMismatchLength
}
var commitments []kzg.Bytes48
var indices []uint64
var cells []kzg.Cell
var proofs []kzg.Bytes48
for i := range sc.DataColumn {
commitments = append(commitments, kzg.Bytes48(sc.KzgCommitments[i]))
indices = append(indices, sc.ColumnIndex)
cells = append(cells, kzg.Cell(sc.DataColumn[i]))
proofs = append(proofs, kzg.Bytes48(sc.KzgProof[i]))
}
return kzg.VerifyCellKZGProofBatch(commitments, indices, cells, proofs)
}
// CustodySubnetCount returns the number of subnets the node should participate in for custody.
func CustodySubnetCount() uint64 {
count := params.BeaconConfig().CustodyRequirement
if flags.Get().SubscribeToAllSubnets {
count = params.BeaconConfig().DataColumnSidecarSubnetCount
}
return count
}
// HypergeomCDF computes the hypergeometric cumulative distribution function.
// https://en.wikipedia.org/wiki/Hypergeometric_distribution
func HypergeomCDF(k, M, n, N uint64) float64 {
denominatorInt := new(big.Int).Binomial(int64(M), int64(N)) // lint:ignore uintcast
denominator := new(big.Float).SetInt(denominatorInt)
rBig := big.NewFloat(0)
for i := uint64(0); i < k+1; i++ {
a := new(big.Int).Binomial(int64(n), int64(i)) // lint:ignore uintcast
b := new(big.Int).Binomial(int64(M-n), int64(N-i))
numeratorInt := new(big.Int).Mul(a, b)
numerator := new(big.Float).SetInt(numeratorInt)
item := new(big.Float).Quo(numerator, denominator)
rBig.Add(rBig, item)
}
r, _ := rBig.Float64()
return r
}
// ExtendedSampleCount computes, for a given number of samples per slot and allowed failures, the
// number of samples we should actually query from peers.
// TODO: Add link to the specification once it is available.
func ExtendedSampleCount(samplesPerSlot, allowedFailures uint64) uint64 {
// Retrieve the columns count
columnsCount := params.BeaconConfig().NumberOfColumns
// If half of the columns are missing, we are able to reconstruct the data.
// If half of the columns + 1 are missing, we are not able to reconstruct the data.
// This is the smallest worst case.
worstCaseMissing := columnsCount/2 + 1
// Compute the false positive threshold.
falsePositiveThreshold := HypergeomCDF(0, columnsCount, worstCaseMissing, samplesPerSlot)
var sampleCount uint64
// Finally, compute the extended sample count.
for sampleCount = samplesPerSlot; sampleCount < columnsCount+1; sampleCount++ {
if HypergeomCDF(allowedFailures, columnsCount, worstCaseMissing, sampleCount) <= falsePositiveThreshold {
break
}
}
return sampleCount
}
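// Worked example (values taken from the unit test below): with NUMBER_OF_COLUMNS = 128 and
// samplesPerSlot = 16, allowing 0 failures keeps the count at 16, while allowing 3 failures
// pushes it to 27 and allowing 8 failures pushes it to 40, since more samples are needed
// before the CDF drops back below the false positive threshold.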
func CustodyCountFromRecord(record *enr.Record) (uint64, error) {
// By default, we assume the peer custodies the minimum number of subnets.
if record == nil {
return 0, errRecordNil
}
// Load the `custody_subnet_count`
var csc Csc
if err := record.Load(&csc); err != nil {
return 0, errCannotLoadCustodySubnetCount
}
return uint64(csc), nil
}
func CanSelfReconstruct(numCol uint64) bool {
total := params.BeaconConfig().NumberOfColumns
// if total is odd, then we need total / 2 + 1 columns to reconstruct
// if total is even, then we need total / 2 columns to reconstruct
columnsNeeded := total/2 + total%2
return numCol >= columnsNeeded
}
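The custody helpers above compose as follows. A minimal sketch (illustrative only; the import paths are the ones used by the test file in this change) deriving the column set a node is responsible for from its discovery node ID:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
)

func main() {
	// Hypothetical node ID; in a real node this comes from the discv5 identity.
	var nodeID enode.ID
	copy(nodeID[:], []byte("example-node-id-for-illustration"))

	// CustodySubnetCount honors --subscribe-all-subnets, otherwise it returns CUSTODY_REQUIREMENT.
	columns, err := peerdas.CustodyColumns(nodeID, peerdas.CustodySubnetCount())
	if err != nil {
		panic(err)
	}
	fmt.Printf("node custodies %d columns\n", len(columns))
}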

View File

@@ -0,0 +1,144 @@
package peerdas_test
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"fmt"
"testing"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/sirupsen/logrus"
)
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func GetRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func GetRandBlob(seed int64) kzg.Blob {
var blob kzg.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := GetRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
func GenerateCommitmentAndProof(blob *kzg.Blob) (*kzg.Commitment, *kzg.Proof, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, nil, err
}
proof, err := kzg.ComputeBlobKZGProof(blob, commitment)
if err != nil {
return nil, nil, err
}
return &commitment, &proof, err
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
dbBlock := util.NewBeaconBlockDeneb()
require.NoError(t, kzg.Start())
var (
comms [][]byte
blobs []kzg.Blob
)
for i := int64(0); i < 6; i++ {
blob := GetRandBlob(i)
commitment, _, err := GenerateCommitmentAndProof(&blob)
require.NoError(t, err)
comms = append(comms, commitment[:])
blobs = append(blobs, blob)
}
dbBlock.Block.Body.BlobKzgCommitments = comms
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
sCars, err := peerdas.DataColumnSidecars(sBlock, blobs)
require.NoError(t, err)
for i, sidecar := range sCars {
verified, err := peerdas.VerifyDataColumnSidecarKZGProofs(sidecar)
require.NoError(t, err)
require.Equal(t, true, verified, fmt.Sprintf("sidecar %d failed", i))
}
}
func TestHypergeomCDF(t *testing.T) {
// Check the hypergeometric CDF against an externally computed value
// (https://en.wikipedia.org/wiki/Hypergeometric_distribution).
// Population size: 128, successes in population: 65, sample size: 16, at most 5 successes in the sample.
// Expected result: ~0.0797
const (
expected = 0.0796665913283742
margin = 0.000001
)
actual := peerdas.HypergeomCDF(5, 128, 65, 16)
require.Equal(t, true, expected-margin <= actual && actual <= expected+margin)
}
func TestExtendedSampleCount(t *testing.T) {
const samplesPerSlot = 16
testCases := []struct {
name string
allowedMissings uint64
extendedSampleCount uint64
}{
{name: "allowedMissings=0", allowedMissings: 0, extendedSampleCount: 16},
{name: "allowedMissings=1", allowedMissings: 1, extendedSampleCount: 20},
{name: "allowedMissings=2", allowedMissings: 2, extendedSampleCount: 24},
{name: "allowedMissings=3", allowedMissings: 3, extendedSampleCount: 27},
{name: "allowedMissings=4", allowedMissings: 4, extendedSampleCount: 29},
{name: "allowedMissings=5", allowedMissings: 5, extendedSampleCount: 32},
{name: "allowedMissings=6", allowedMissings: 6, extendedSampleCount: 35},
{name: "allowedMissings=7", allowedMissings: 7, extendedSampleCount: 37},
{name: "allowedMissings=8", allowedMissings: 8, extendedSampleCount: 40},
{name: "allowedMissings=9", allowedMissings: 9, extendedSampleCount: 42},
{name: "allowedMissings=10", allowedMissings: 10, extendedSampleCount: 44},
{name: "allowedMissings=11", allowedMissings: 11, extendedSampleCount: 47},
{name: "allowedMissings=12", allowedMissings: 12, extendedSampleCount: 49},
{name: "allowedMissings=13", allowedMissings: 13, extendedSampleCount: 51},
{name: "allowedMissings=14", allowedMissings: 14, extendedSampleCount: 53},
{name: "allowedMissings=15", allowedMissings: 15, extendedSampleCount: 55},
{name: "allowedMissings=16", allowedMissings: 16, extendedSampleCount: 57},
{name: "allowedMissings=17", allowedMissings: 17, extendedSampleCount: 59},
{name: "allowedMissings=18", allowedMissings: 18, extendedSampleCount: 61},
{name: "allowedMissings=19", allowedMissings: 19, extendedSampleCount: 63},
{name: "allowedMissings=20", allowedMissings: 20, extendedSampleCount: 65},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result := peerdas.ExtendedSampleCount(samplesPerSlot, tc.allowedMissings)
require.Equal(t, tc.extendedSampleCount, result)
})
}
}

View File

@@ -53,6 +53,11 @@ func HigherEqualThanAltairVersionAndEpoch(s state.BeaconState, e primitives.Epoc
return s.Version() >= version.Altair && e >= params.BeaconConfig().AltairForkEpoch
}
// PeerDASIsActive checks whether peerDAS is active at the provided slot.
func PeerDASIsActive(slot primitives.Slot) bool {
return params.PeerDASEnabled() && slots.ToEpoch(slot) >= params.BeaconConfig().Eip7594ForkEpoch
}
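// Illustrative call site (not part of this change): consumers are expected to gate
// column handling on this helper, e.g.
//
//	if helpers.PeerDASIsActive(blk.Slot()) {
//		// request and serve DataColumnSidecars
//	} else {
//		// fall back to BlobSidecars
//	}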
// CanUpgradeToAltair returns true if the input `slot` can upgrade to Altair.
// Spec code:
// If state.slot % SLOTS_PER_EPOCH == 0 and compute_epoch_at_slot(state.slot) == ALTAIR_FORK_EPOCH

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library", name = "go_default_library",
srcs = [ srcs = [
"availability.go", "availability.go",
"availability_columns.go",
"cache.go", "cache.go",
"iface.go", "iface.go",
"mock.go", "mock.go",
@@ -20,6 +21,7 @@ go_library(
"//runtime/logging:go_default_library", "//runtime/logging:go_default_library",
"//runtime/version:go_default_library", "//runtime/version:go_default_library",
"//time/slots:go_default_library", "//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library", "@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library", "@com_github_sirupsen_logrus//:go_default_library",
], ],

View File

@@ -0,0 +1,152 @@
package das
import (
"context"
"fmt"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to PersistColumns until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.BlobStorage
cache *cache
verifier ColumnBatchVerifier
nodeID enode.ID
}
type ColumnBatchVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, sc []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
func NewLazilyPersistentStoreColumn(store *filesystem.BlobStorage, verifier ColumnBatchVerifier, id enode.ID) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
cache: newCache(),
verifier: verifier,
nodeID: id,
}
}
// Persist does nothing at the moment.
// TODO: Very ugly; change the interface to allow for both columns and blobs.
func (*LazilyPersistentStoreColumn) Persist(_ primitives.Slot, _ ...blocks.ROBlob) error {
return nil
}
// PersistColumns adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) PersistColumns(current primitives.Slot, sc ...blocks.RODataColumn) error {
if len(sc) == 0 {
return nil
}
if len(sc) > 1 {
first := sc[0].BlockRoot()
for i := 1; i < len(sc); i++ {
if first != sc[i].BlockRoot() {
return errMixedRoots
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(sc[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromColumn(sc[0])
entry := s.cache.ensure(key)
for i := range sc {
if err := entry.stashColumns(&sc[i]); err != nil {
return err
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := fullCommitmentsToCheck(b, current)
if err != nil {
return errors.Wrapf(err, "could check data availability for block %#x", b.Root())
}
// Return early for blocks that are pre-deneb or which do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
key := keyFromBlock(b)
entry := s.cache.ensure(key)
defer s.cache.delete(key)
root := b.Root()
sumz, err := s.store.WaitForSummarizer(ctx)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", b.Root())).
WithError(err).
Debug("Failed to receive BlobStorageSummarizer within IsDataAvailable")
} else {
entry.setDiskSummary(sumz.Summary(root))
}
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
sidecars, err := entry.filterColumns(root, &blockCommitments)
if err != nil {
return errors.Wrap(err, "incomplete BlobSidecar batch")
}
// Do thorough verifications of each BlobSidecar for the block.
// Same as above, we don't save BlobSidecars if there are any problems with the batch.
vscs, err := s.verifier.VerifiedRODataColumns(ctx, b, sidecars)
if err != nil {
var me verification.VerificationMultiError
ok := errors.As(err, &me)
if ok {
fails := me.Failures()
lf := make(log.Fields, len(fails))
for i := range fails {
lf[fmt.Sprintf("fail_%d", i)] = fails[i].Error()
}
log.WithFields(lf).
Debug("invalid ColumnSidecars received")
}
return errors.Wrapf(err, "invalid ColumnSidecars received for block %#x", root)
}
// Ensure that each column sidecar is written to disk.
for i := range vscs {
if err := s.store.SaveDataColumn(vscs[i]); err != nil {
return errors.Wrapf(err, "failed to save ColumnSidecar index %d for block %#x", vscs[i].ColumnIndex, root)
}
}
// All ColumnSidecars are persisted - da check succeeds.
return nil
}
func fullCommitmentsToCheck(b blocks.ROBlock, current primitives.Slot) (safeCommitmentsArray, error) {
var ar safeCommitmentsArray
if b.Version() < version.Deneb {
return ar, nil
}
// We are only required to check within MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return ar, nil
}
kc, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return ar, err
}
// Each column entry references the block's full commitments list.
for i := range ar {
ar[i] = kc
}
return ar, nil
}
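As a usage illustration (a hypothetical wiring sketch, not part of this change; the das import path is assumed from the package and BUILD file above), a batch syncer would stash incoming columns and then run the availability check before importing the block:

package example

import (
	"context"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)

// importWithColumns stashes the columns received for a block and then runs the DA check,
// which verifies the batch and persists it to disk on success.
func importWithColumns(
	ctx context.Context,
	storage *filesystem.BlobStorage,
	verifier das.ColumnBatchVerifier,
	nodeID enode.ID,
	current primitives.Slot,
	blk blocks.ROBlock,
	cols []blocks.RODataColumn,
) error {
	store := das.NewLazilyPersistentStoreColumn(storage, verifier, nodeID)
	if err := store.PersistColumns(current, cols...); err != nil {
		return err
	}
	return store.IsDataAvailable(ctx, current, blk)
}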

View File

@@ -2,6 +2,7 @@ package das
import (
"bytes"
"reflect"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
@@ -38,6 +39,10 @@ func keyFromSidecar(sc blocks.ROBlob) cacheKey {
return cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
}
func keyFromColumn(sc blocks.RODataColumn) cacheKey {
return cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
}
// keyFromBlock is a convenience method for constructing a cacheKey from a ROBlock value.
func keyFromBlock(b blocks.ROBlock) cacheKey {
return cacheKey{slot: b.Block().Slot(), root: b.Root()}
@@ -61,6 +66,7 @@ func (c *cache) delete(key cacheKey) {
// cacheEntry holds a fixed-length cache of BlobSidecars.
type cacheEntry struct {
scs [fieldparams.MaxBlobsPerBlock]*blocks.ROBlob
colScs [fieldparams.NumberOfColumns]*blocks.RODataColumn
diskSummary filesystem.BlobStorageSummary
}
@@ -82,6 +88,17 @@ func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
return nil
}
func (e *cacheEntry) stashColumns(sc *blocks.RODataColumn) error {
if sc.ColumnIndex >= fieldparams.NumberOfColumns {
return errors.Wrapf(errIndexOutOfBounds, "index=%d", sc.ColumnIndex)
}
if e.colScs[sc.ColumnIndex] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.ColumnIndex, sc.KzgCommitments)
}
e.colScs[sc.ColumnIndex] = sc
return nil
}
// filter evicts sidecars that are not committed to by the block and returns custom
// errors if the cache is missing any of the commitments, or if the commitments in
// the cache do not match those found in the block. If err is nil, then all expected
@@ -117,6 +134,35 @@ func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROB
return scs, nil
}
func (e *cacheEntry) filterColumns(root [32]byte, kc *safeCommitmentsArray) ([]blocks.RODataColumn, error) {
if e.diskSummary.AllAvailable(kc.count()) {
return nil, nil
}
scs := make([]blocks.RODataColumn, 0, kc.count())
for i := uint64(0); i < fieldparams.NumberOfColumns; i++ {
// We already have this column, we don't need to write it or validate it.
if e.diskSummary.HasIndex(i) {
continue
}
if kc[i] == nil {
if e.colScs[i] != nil {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, no block commitment", root, i, e.colScs[i].KzgCommitments)
}
continue
}
if e.colScs[i] == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !reflect.DeepEqual(kc[i], e.colScs[i].KzgCommitments) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.colScs[i].KzgCommitments, kc[i])
}
scs = append(scs, *e.colScs[i])
}
return scs, nil
}
// safeCommitmentArray is a fixed size array of commitment byte slices. This is helpful for avoiding
// gratuitous bounds checks.
type safeCommitmentArray [fieldparams.MaxBlobsPerBlock][]byte
@@ -129,3 +175,14 @@ func (s safeCommitmentArray) count() int {
}
return fieldparams.MaxBlobsPerBlock
}
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte
func (s *safeCommitmentsArray) count() int {
for i := range s {
if s[i] == nil {
return i
}
}
return fieldparams.NumberOfColumns
}

View File

@@ -13,6 +13,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -12,6 +12,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil" "github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification" "github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams" fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks" "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -39,8 +40,15 @@ const (
directoryPermissions = 0700
)
type (
// BlobStorageOption is a functional option for configuring a BlobStorage.
BlobStorageOption func(*BlobStorage) error
RootIndexPair struct {
Root [fieldparams.RootLength]byte
Index uint64
}
)
// WithBasePath is a required option that sets the base path of blob storage.
func WithBasePath(base string) BlobStorageOption {
@@ -70,7 +78,10 @@ func WithSaveFsync(fsync bool) BlobStorageOption {
// attempt to hold a file lock to guarantee exclusive control of the blob storage directory, so this should only be
// initialized once per beacon node.
func NewBlobStorage(opts ...BlobStorageOption) (*BlobStorage, error) {
b := &BlobStorage{
DataColumnFeed: new(event.Feed),
}
for _, o := range opts {
if err := o(b); err != nil {
return nil, errors.Wrap(err, "failed to create blob storage")
@@ -99,6 +110,7 @@ type BlobStorage struct {
fsync bool
fs afero.Fs
pruner *blobPruner
DataColumnFeed *event.Feed
}
// WarmCache runs the prune routine with an expiration of slot of 0, so nothing will be pruned, but the pruner's cache
@@ -221,6 +233,110 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
return nil
}
// SaveDataColumn saves a data column to our local filesystem.
func (bs *BlobStorage) SaveDataColumn(column blocks.VerifiedRODataColumn) error {
startTime := time.Now()
fname := namerForDataColumn(column)
sszPath := fname.path()
exists, err := afero.Exists(bs.fs, sszPath)
if err != nil {
return err
}
if exists {
log.Trace("Ignoring a duplicate data column sidecar save attempt")
return nil
}
if bs.pruner != nil {
hRoot, err := column.SignedBlockHeader.Header.HashTreeRoot()
if err != nil {
return err
}
if err := bs.pruner.notify(hRoot, column.SignedBlockHeader.Header.Slot, column.ColumnIndex); err != nil {
return errors.Wrapf(err, "problem maintaining pruning cache/metrics for sidecar with root=%#x", hRoot)
}
}
// Serialize the ethpb.DataColumnSidecar to binary data using SSZ.
sidecarData, err := column.MarshalSSZ()
if err != nil {
return errors.Wrap(err, "failed to serialize sidecar data")
} else if len(sidecarData) == 0 {
return errSidecarEmptySSZData
}
if err := bs.fs.MkdirAll(fname.dir(), directoryPermissions); err != nil {
return err
}
partPath := fname.partPath(fmt.Sprintf("%p", sidecarData))
partialMoved := false
// Ensure the partial file is deleted.
defer func() {
if partialMoved {
return
}
// It's expected to error if the save is successful.
err = bs.fs.Remove(partPath)
if err == nil {
log.WithFields(logrus.Fields{
"partPath": partPath,
}).Debugf("Removed partial file")
}
}()
// Create a partial file and write the serialized data to it.
partialFile, err := bs.fs.Create(partPath)
if err != nil {
return errors.Wrap(err, "failed to create partial file")
}
n, err := partialFile.Write(sidecarData)
if err != nil {
closeErr := partialFile.Close()
if closeErr != nil {
return closeErr
}
return errors.Wrap(err, "failed to write to partial file")
}
if bs.fsync {
if err := partialFile.Sync(); err != nil {
return err
}
}
if err := partialFile.Close(); err != nil {
return err
}
if n != len(sidecarData) {
return fmt.Errorf("failed to write the full bytes of sidecarData, wrote only %d of %d bytes", n, len(sidecarData))
}
if n == 0 {
return errEmptyBlobWritten
}
// Atomically rename the partial file to its final name.
err = bs.fs.Rename(partPath, sszPath)
if err != nil {
return errors.Wrap(err, "failed to rename partial file to final name")
}
partialMoved = true
// Notify the data column notifier that a new data column has been saved.
bs.DataColumnFeed.Send(RootIndexPair{
Root: column.BlockRoot(),
Index: column.ColumnIndex,
})
// TODO: Use new metrics for data columns
blobsWrittenCounter.Inc()
blobSaveLatency.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
// Get retrieves a single BlobSidecar by its root and index.
// Since BlobStorage only writes blobs that have undergone full verification, the return
// value is always a VerifiedROBlob.
@@ -246,6 +362,20 @@ func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, er
return verification.BlobSidecarNoop(ro)
}
// GetColumn retrieves a single DataColumnSidecar by its root and index.
func (bs *BlobStorage) GetColumn(root [32]byte, idx uint64) (*ethpb.DataColumnSidecar, error) {
expected := blobNamer{root: root, index: idx}
encoded, err := afero.ReadFile(bs.fs, expected.path())
if err != nil {
return nil, err
}
s := &ethpb.DataColumnSidecar{}
if err := s.UnmarshalSSZ(encoded); err != nil {
return nil, err
}
return s, nil
}
// Remove removes all blobs for a given root.
func (bs *BlobStorage) Remove(root [32]byte) error {
rootDir := blobNamer{root: root}.dir()
@@ -289,6 +419,61 @@ func (bs *BlobStorage) Indices(root [32]byte) ([fieldparams.MaxBlobsPerBlock]boo
return mask, nil
}
// ColumnIndices retrieves the stored column indices from our filesystem.
func (bs *BlobStorage) ColumnIndices(root [32]byte) (map[uint64]bool, error) {
custody := make(map[uint64]bool, fieldparams.NumberOfColumns)
// Get all the files in the directory.
rootDir := blobNamer{root: root}.dir()
entries, err := afero.ReadDir(bs.fs, rootDir)
if err != nil {
// If the directory does not exist, we do not custody any columns.
if os.IsNotExist(err) {
return nil, nil
}
return nil, errors.Wrap(err, "read directory")
}
// Iterate over all the entries in the directory.
for _, entry := range entries {
// If the entry is a directory, skip it.
if entry.IsDir() {
continue
}
// If the entry does not have the correct extension, skip it.
name := entry.Name()
if !strings.HasSuffix(name, sszExt) {
continue
}
// The file should be in the `<index>.<extension>` format.
// Skip the file if it does not match the format.
parts := strings.Split(name, ".")
if len(parts) != 2 {
continue
}
// Get the column index from the file name.
columnIndexStr := parts[0]
columnIndex, err := strconv.ParseUint(columnIndexStr, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "unexpected directory entry breaks listing, %s", parts[0])
}
// If the column index is out of bounds, return an error.
if columnIndex >= fieldparams.NumberOfColumns {
return nil, errors.Wrapf(errIndexOutOfBounds, "invalid index %d", columnIndex)
}
// Mark the column index as in custody.
custody[columnIndex] = true
}
return custody, nil
}
// Clear deletes all files on the filesystem.
func (bs *BlobStorage) Clear() error {
dirs, err := listDir(bs.fs, ".")
@@ -321,6 +506,10 @@ func namerForSidecar(sc blocks.VerifiedROBlob) blobNamer {
return blobNamer{root: sc.BlockRoot(), index: sc.Index}
}
func namerForDataColumn(col blocks.VerifiedRODataColumn) blobNamer {
return blobNamer{root: col.BlockRoot(), index: col.ColumnIndex}
}
func (p blobNamer) dir() string {
return rootString(p.root)
}
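One consumer of the new DataColumnFeed could look roughly like the sketch below (illustrative only, assuming the async/event Feed exposes the usual Subscribe/Unsubscribe API):

package example

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
)

// watchSavedColumns logs persisted data column indices as they arrive;
// a real consumer would also select on a shutdown signal or context.
func watchSavedColumns(bs *filesystem.BlobStorage) {
	ch := make(chan filesystem.RootIndexPair, 16)
	sub := bs.DataColumnFeed.Subscribe(ch)
	defer sub.Unsubscribe()

	for pair := range ch {
		fmt.Printf("saved data column %d for block %#x\n", pair.Index, pair.Root)
	}
}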

View File

@@ -9,7 +9,7 @@ import (
)
// blobIndexMask is a bitmask representing the set of blob indices that are currently set.
type blobIndexMask [fieldparams.NumberOfColumns]bool
// BlobStorageSummary represents cached information about the BlobSidecars on disk for each root the cache knows about.
type BlobStorageSummary struct {
@@ -26,6 +26,15 @@ func (s BlobStorageSummary) HasIndex(idx uint64) bool {
return s.mask[idx]
}
// HasDataColumnIndex true if the DataColumnSidecar at the given index is available in the filesystem.
func (s BlobStorageSummary) HasDataColumnIndex(idx uint64) bool {
// Protect from panic, but assume callers are sophisticated enough to not need an error telling them they have an invalid idx.
if idx >= fieldparams.NumberOfColumns {
return false
}
return s.mask[idx]
}
// AllAvailable returns true if we have all blobs for all indices from 0 to count-1.
func (s BlobStorageSummary) AllAvailable(count int) bool {
if count > fieldparams.MaxBlobsPerBlock {
@@ -39,6 +48,21 @@ func (s BlobStorageSummary) AllAvailable(count int) bool {
return true
}
// AllDataColumnsAvailable returns true if we have all datacolumns for corresponding indices.
func (s BlobStorageSummary) AllDataColumnsAvailable(indices map[uint64]bool) bool {
if uint64(len(indices)) > fieldparams.NumberOfColumns {
return false
}
for indice := range indices {
if !s.mask[indice] {
return false
}
}
return true
}
// BlobStorageSummarizer can be used to receive a summary of metadata about blobs on disk for a given root.
// The BlobStorageSummary can be used to check which indices (if any) are available for a given block by root.
type BlobStorageSummarizer interface {
@@ -68,9 +92,12 @@ func (s *blobStorageCache) Summary(root [32]byte) BlobStorageSummary {
}
func (s *blobStorageCache) ensure(key [32]byte, slot primitives.Slot, idx uint64) error {
// TODO: Separate blob index checks from data column index checks
/*
if idx >= fieldparams.MaxBlobsPerBlock {
return errIndexOutOfBounds
}
*/
s.mu.Lock()
defer s.mu.Unlock()
v := s.cache[key]

View File

@@ -9,6 +9,7 @@ import (
)
func TestSlotByRoot_Summary(t *testing.T) {
t.Skip("Use new test for data columns")
var noneSet, allSet, firstSet, lastSet, oneSet blobIndexMask
firstSet[0] = true
lastSet[len(lastSet)-1] = true
@@ -148,3 +149,108 @@ func TestAllAvailable(t *testing.T) {
})
}
}
func TestHasDataColumnIndex(t *testing.T) {
storedIndices := map[uint64]bool{
1: true,
3: true,
5: true,
}
cases := []struct {
name string
idx uint64
expected bool
}{
{
name: "index is too high",
idx: fieldparams.NumberOfColumns,
expected: false,
},
{
name: "non existing index",
idx: 2,
expected: false,
},
{
name: "existing index",
idx: 3,
expected: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
var mask blobIndexMask
for idx := range storedIndices {
mask[idx] = true
}
sum := BlobStorageSummary{mask: mask}
require.Equal(t, c.expected, sum.HasDataColumnIndex(c.idx))
})
}
}
func TestAllDataColumnAvailable(t *testing.T) {
tooManyColumns := make(map[uint64]bool, fieldparams.NumberOfColumns+1)
for i := uint64(0); i < fieldparams.NumberOfColumns+1; i++ {
tooManyColumns[i] = true
}
columns346 := map[uint64]bool{
3: true,
4: true,
6: true,
}
columns36 := map[uint64]bool{
3: true,
6: true,
}
cases := []struct {
name string
storedIndices map[uint64]bool
testedIndices map[uint64]bool
expected bool
}{
{
name: "no tested indices",
storedIndices: columns346,
testedIndices: map[uint64]bool{},
expected: true,
},
{
name: "too many tested indices",
storedIndices: columns346,
testedIndices: tooManyColumns,
expected: false,
},
{
name: "not all tested indices are stored",
storedIndices: columns36,
testedIndices: columns346,
expected: false,
},
{
name: "all tested indices are stored",
storedIndices: columns346,
testedIndices: columns36,
expected: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
var mask blobIndexMask
for idx := range c.storedIndices {
mask[idx] = true
}
sum := BlobStorageSummary{mask: mask}
require.Equal(t, c.expected, sum.AllDataColumnsAvailable(c.testedIndices))
})
}
}

View File

@@ -64,6 +64,10 @@ func newBlobPruner(fs afero.Fs, retain primitives.Epoch, opts ...prunerOpt) (*bl
// notify updates the pruner's view of root->blob mappings. This allows the pruner to build a cache
// of root->slot mappings and decide when to evict old blobs based on the age of present blobs.
func (p *blobPruner) notify(root [32]byte, latest primitives.Slot, idx uint64) error {
for i := range 42 {
log.WithField("index", i).Info("test")
}
if err := p.cache.ensure(root, latest, idx); err != nil {
return err
}

View File

@@ -23,10 +23,10 @@ import (
"go.opencensus.io/trace" "go.opencensus.io/trace"
) )
// used to represent errors for inconsistent slot ranges. // Used to represent errors for inconsistent slot ranges.
var errInvalidSlotRange = errors.New("invalid end slot and start slot provided") var errInvalidSlotRange = errors.New("invalid end slot and start slot provided")
// Block retrieval by root. // Block retrieval by root. Return nil if block is not found.
func (s *Store) Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error) { func (s *Store) Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.Block") ctx, span := trace.StartSpan(ctx, "BeaconDB.Block")
defer span.End() defer span.End()

View File

@@ -988,6 +988,7 @@ func (b *BeaconNode) registerRPCService(router *mux.Router) error {
FinalizationFetcher: chainService,
BlockReceiver: chainService,
BlobReceiver: chainService,
DataColumnReceiver: chainService,
AttestationReceiver: chainService,
GenesisTimeFetcher: chainService,
GenesisFetcher: chainService,

View File

@@ -7,6 +7,7 @@ go_library(
"broadcaster.go", "broadcaster.go",
"config.go", "config.go",
"connection_gater.go", "connection_gater.go",
"custody.go",
"dial_relay_node.go", "dial_relay_node.go",
"discovery.go", "discovery.go",
"doc.go", "doc.go",
@@ -46,6 +47,7 @@ go_library(
"//beacon-chain/core/altair:go_default_library", "//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/feed/state:go_default_library", "//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library", "//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/time:go_default_library", "//beacon-chain/core/time:go_default_library",
"//beacon-chain/db:go_default_library", "//beacon-chain/db:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library", "//beacon-chain/p2p/encoder:go_default_library",
@@ -56,6 +58,7 @@ go_library(
"//beacon-chain/startup:go_default_library", "//beacon-chain/startup:go_default_library",
"//cmd/beacon-chain/flags:go_default_library", "//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library", "//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library", "//config/params:go_default_library",
"//consensus-types/primitives:go_default_library", "//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library", "//consensus-types/wrapper:go_default_library",
@@ -74,6 +77,8 @@ go_library(
"//runtime/version:go_default_library", "//runtime/version:go_default_library",
"//time:go_default_library", "//time:go_default_library",
"//time/slots:go_default_library", "//time/slots:go_default_library",
"@com_github_btcsuite_btcd_btcec_v2//:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library", "@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library", "@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library", "@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
@@ -115,6 +120,7 @@ go_test(
"addr_factory_test.go", "addr_factory_test.go",
"broadcaster_test.go", "broadcaster_test.go",
"connection_gater_test.go", "connection_gater_test.go",
"custody_test.go",
"dial_relay_node_test.go", "dial_relay_node_test.go",
"discovery_test.go", "discovery_test.go",
"fork_test.go", "fork_test.go",
@@ -136,9 +142,11 @@ go_test(
flaky = True,
tags = ["requires-network"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
@@ -151,6 +159,7 @@ go_test(
"//cmd/beacon-chain/flags:go_default_library", "//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library", "//config/fieldparams:go_default_library",
"//config/params:go_default_library", "//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library", "//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library", "//consensus-types/wrapper:go_default_library",
"//container/leaky-bucket:go_default_library", "//container/leaky-bucket:go_default_library",
@@ -161,13 +170,12 @@ go_test(
"//network/forks:go_default_library", "//network/forks:go_default_library",
"//proto/eth/v1:go_default_library", "//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library", "//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//proto/testing:go_default_library", "//proto/testing:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library", "//testing/assert:go_default_library",
"//testing/require:go_default_library", "//testing/require:go_default_library",
"//testing/util:go_default_library", "//testing/util:go_default_library",
"//time:go_default_library", "//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library", "@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library", "@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library", "@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",

View File

@@ -9,16 +9,18 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz" ssz "github.com/prysmaticlabs/fastssz"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash" "github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing" "github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots" "github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"
) )
// ErrMessageNotMapped occurs on a Broadcast attempt when a message has not been defined in the // ErrMessageNotMapped occurs on a Broadcast attempt when a message has not been defined in the
@@ -96,7 +98,12 @@ func (s *Service) BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint
return nil return nil
} }
func (s *Service) internalBroadcastAttestation(
ctx context.Context,
subnet uint64,
att ethpb.Att,
forkDigest [fieldparams.VersionLength]byte,
) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastAttestation")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -152,7 +159,7 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
}
}
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [fieldparams.VersionLength]byte) {
_, span := trace.StartSpan(ctx, "p2p.broadcastSyncCommittee")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -228,7 +235,12 @@ func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.
return nil
}
func (s *Service) internalBroadcastBlob(
ctx context.Context,
subnet uint64,
blobSidecar *ethpb.BlobSidecar,
forkDigest [fieldparams.VersionLength]byte,
) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -243,7 +255,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
s.subnetLocker(wrappedSubIdx).RUnlock()
if !hasPeer {
blobSidecarBroadcastAttempts.Inc()
if err := func() error {
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
@@ -252,7 +264,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
return err
}
if ok {
blobSidecarBroadcasts.Inc()
return nil
}
return errors.New("failed to find peers for subnet")
@@ -268,6 +280,99 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
}
}
// BroadcastDataColumn broadcasts a data column sidecar to the p2p network. The message is
// published on the topic for the current fork digest and the provided column subnet.
// TODO: Add tests
func (s *Service) BroadcastDataColumn(ctx context.Context, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error {
// Add tracing to the function.
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastBlob")
defer span.End()
// Ensure the data column sidecar is not nil.
if dataColumnSidecar == nil {
return errors.Errorf("attempted to broadcast nil data column sidecar at subnet %d", columnSubnet)
}
// Retrieve the current fork digest.
forkDigest, err := s.currentForkDigest()
if err != nil {
err := errors.Wrap(err, "current fork digest")
tracing.AnnotateError(span, err)
return err
}
// Non-blocking broadcast, with attempts to discover a column subnet peer if none available.
go s.internalBroadcastDataColumn(ctx, columnSubnet, dataColumnSidecar, forkDigest)
return nil
}
func (s *Service) internalBroadcastDataColumn(
ctx context.Context,
columnSubnet uint64,
dataColumnSidecar *ethpb.DataColumnSidecar,
forkDigest [fieldparams.VersionLength]byte,
) {
// Add tracing to the function.
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastDataColumn")
defer span.End()
// Increase the number of broadcast attempts.
dataColumnSidecarBroadcastAttempts.Inc()
// Clear parent context / deadline.
ctx = trace.NewContext(context.Background(), span)
// Define a one-slot length context timeout.
oneSlot := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
ctx, cancel := context.WithTimeout(ctx, oneSlot)
defer cancel()
// Build the topic corresponding to this column subnet and this fork digest.
topic := dataColumnSubnetToTopic(columnSubnet, forkDigest)
// Compute the wrapped subnet index.
wrappedSubIdx := columnSubnet + dataColumnSubnetVal
// Check if we have peers with this subnet.
hasPeer := func() bool {
s.subnetLocker(wrappedSubIdx).RLock()
defer s.subnetLocker(wrappedSubIdx).RUnlock()
return s.hasPeerWithSubnet(topic)
}()
// If no peers are found, attempt to find peers with this subnet.
if !hasPeer {
if err := func() error {
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
ok, err := s.FindPeersWithSubnet(ctx, topic, columnSubnet, 1 /*threshold*/)
if err != nil {
return errors.Wrap(err, "find peers for subnet")
}
if ok {
return nil
}
return errors.New("failed to find peers for subnet")
}(); err != nil {
log.WithError(err).Error("Failed to find peers")
tracing.AnnotateError(span, err)
}
}
// Broadcast the data column sidecar to the network.
if err := s.broadcastObject(ctx, dataColumnSidecar, topic); err != nil {
log.WithError(err).Error("Failed to broadcast data column sidecar")
tracing.AnnotateError(span, err)
}
// Increase the number of successful broadcasts.
dataColumnSidecarBroadcasts.Inc()
}
// method to broadcast messages to other peers in our gossip mesh.
func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastObject")
@@ -297,14 +402,18 @@ func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic
return nil
}
func attestationToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(AttestationSubnetTopicFormat, forkDigest, subnet)
}
func syncCommitteeToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(SyncCommitteeSubnetTopicFormat, forkDigest, subnet)
}
func blobSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(BlobSubnetTopicFormat, forkDigest, subnet)
}
func dataColumnSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(DataColumnSubnetTopicFormat, forkDigest, subnet)
}
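A hypothetical producer-side loop (illustrative only, not part of this change) would fan sidecars out to their subnets; the column-to-subnet mapping mirrors the custody helper earlier in this change, where the columns of a subnet are spaced by DataColumnSidecarSubnetCount:

package example

import (
	"context"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
	"github.com/prysmaticlabs/prysm/v5/config/params"
	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)

// broadcastColumns sends each sidecar on the subnet its column index maps to
// (subnet = columnIndex % DataColumnSidecarSubnetCount).
func broadcastColumns(ctx context.Context, svc *p2p.Service, sidecars []*ethpb.DataColumnSidecar) error {
	subnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
	for _, sc := range sidecars {
		if err := svc.BroadcastDataColumn(ctx, sc.ColumnIndex%subnetCount, sc); err != nil {
			return err
		}
	}
	return nil
}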

View File

@@ -13,11 +13,16 @@ import (
pubsub "github.com/libp2p/go-libp2p-pubsub" pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host" "github.com/libp2p/go-libp2p/core/host"
"github.com/prysmaticlabs/go-bitfield" "github.com/prysmaticlabs/go-bitfield"
"google.golang.org/protobuf/proto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
p2ptest "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing" p2ptest "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams" fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper" "github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil" "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -25,7 +30,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/assert" "github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require" "github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util" "github.com/prysmaticlabs/prysm/v5/testing/util"
"google.golang.org/protobuf/proto"
) )
func TestService_Broadcast(t *testing.T) { func TestService_Broadcast(t *testing.T) {
@@ -520,3 +524,70 @@ func TestService_BroadcastBlob(t *testing.T) {
require.NoError(t, p.BroadcastBlob(ctx, subnet, blobSidecar)) require.NoError(t, p.BroadcastBlob(ctx, subnet, blobSidecar))
require.Equal(t, false, util.WaitTimeout(&wg, 1*time.Second), "Failed to receive pubsub within 1s") require.Equal(t, false, util.WaitTimeout(&wg, 1*time.Second), "Failed to receive pubsub within 1s")
} }
func TestService_BroadcastDataColumn(t *testing.T) {
require.NoError(t, kzg.Start())
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.NotEqual(t, 0, len(p1.BHost.Network().Peers()), "No peers")
p := &Service{
host: p1.BHost,
pubsub: p1.PubSub(),
joinedTopics: map[string]*pubsub.Topic{},
cfg: &Config{},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
subnetsLock: make(map[uint64]*sync.RWMutex),
subnetsLockLock: sync.Mutex{},
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockElectra())
require.NoError(t, err)
blobs := make([]kzg.Blob, fieldparams.MaxBlobsPerBlock)
sidecars, err := peerdas.DataColumnSidecars(b, blobs)
require.NoError(t, err)
sidecar := sidecars[0]
subnet := uint64(0)
topic := DataColumnSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(sidecar)] = topic
digest, err := p.currentForkDigest()
require.NoError(t, err)
topic = fmt.Sprintf(topic, digest, subnet)
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
wg.Add(1)
go func(tt *testing.T) {
defer wg.Done()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
msg, err := sub.Next(ctx)
require.NoError(t, err)
result := &ethpb.DataColumnSidecar{}
require.NoError(t, p.Encoding().DecodeGossip(msg.Data, result))
require.DeepEqual(t, result, sidecar)
}(t)
// Attempt to broadcast nil object should fail.
ctx := context.Background()
require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastDataColumn(ctx, subnet, nil))
// Broadcast to peers and wait.
require.NoError(t, p.BroadcastDataColumn(ctx, subnet, sidecar))
require.Equal(t, false, util.WaitTimeout(&wg, 1*time.Second), "Failed to receive pubsub within 1s")
}

beacon-chain/p2p/custody.go

@@ -0,0 +1,118 @@
package p2p
import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
// GetValidCustodyPeers returns a list of peers that custody a superset of the local node's custody columns.
func (s *Service) GetValidCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
// Get the total number of columns.
numberOfColumns := params.BeaconConfig().NumberOfColumns
localCustodySubnetCount := peerdas.CustodySubnetCount()
localCustodyColumns, err := peerdas.CustodyColumns(s.NodeID(), localCustodySubnetCount)
if err != nil {
return nil, errors.Wrap(err, "custody columns for local node")
}
localCustodyColumnsCount := uint64(len(localCustodyColumns))
// Find the valid peers.
validPeers := make([]peer.ID, 0, len(peers))
loop:
for _, pid := range peers {
// Get the custody subnets count of the remote peer.
remoteCustodySubnetCount := s.CustodyCountFromRemotePeer(pid)
// Get the remote node ID from the peer ID.
remoteNodeID, err := ConvertPeerIDToNodeID(pid)
if err != nil {
return nil, errors.Wrap(err, "convert peer ID to node ID")
}
// Get the custody columns of the remote peer.
remoteCustodyColumns, err := peerdas.CustodyColumns(remoteNodeID, remoteCustodySubnetCount)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
remoteCustodyColumnsCount := uint64(len(remoteCustodyColumns))
// If the remote peer custodies fewer columns than the local node, skip it.
if remoteCustodyColumnsCount < localCustodyColumnsCount {
continue
}
// If the remote peer custodies all the possible columns, add it to the list.
if remoteCustodyColumnsCount == numberOfColumns {
copiedId := pid
validPeers = append(validPeers, copiedId)
continue
}
// Filter out invalid peers.
for c := range localCustodyColumns {
if !remoteCustodyColumns[c] {
continue loop
}
}
copiedId := pid
// Add valid peer to list
validPeers = append(validPeers, copiedId)
}
return validPeers, nil
}
// CustodyCountFromRemotePeer retrieves the custody count from a remote peer.
func (s *Service) CustodyCountFromRemotePeer(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of subnets.
custodyRequirement := params.BeaconConfig().CustodyRequirement
// First, try to get the custody count from the peer's metadata.
metadata, err := s.peers.Metadata(pid)
if err != nil {
log.WithError(err).WithField("peerID", pid).Debug("Failed to retrieve metadata for peer, defaulting to the ENR value")
}
if metadata != nil {
custodyCount := metadata.CustodySubnetCount()
if custodyCount > 0 {
return custodyCount
}
}
log.WithField("peerID", pid).Debug("Failed to retrieve custody count from metadata for peer, defaulting to the ENR value")
// Retrieve the ENR of the peer.
record, err := s.peers.ENR(pid)
if err != nil {
log.WithError(err).WithFields(logrus.Fields{
"peerID": pid,
"defaultValue": custodyRequirement,
}).Debug("Failed to retrieve ENR for peer, defaulting to the default value")
return custodyRequirement
}
// Retrieve the custody subnets count from the ENR.
custodyCount, err := peerdas.CustodyCountFromRecord(record)
if err != nil {
log.WithError(err).WithFields(logrus.Fields{
"peerID": pid,
"defaultValue": custodyRequirement,
}).Debug("Failed to retrieve custody count from ENR for peer, defaulting to the default value")
return custodyRequirement
}
return custodyCount
}
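A hypothetical usage sketch (editor-added, not part of the diff) of how a caller might combine the two helpers above when choosing peers for a data column request; pickCustodyPeers is an invented name and the log fields are illustrative.

// pickCustodyPeers keeps only peers whose custody columns are a superset of ours and
// logs the custody count that CustodyCountFromRemotePeer resolves from metadata first,
// then the ENR, then the default CustodyRequirement.
func pickCustodyPeers(s *Service, candidates []peer.ID) ([]peer.ID, error) {
	valid, err := s.GetValidCustodyPeers(candidates)
	if err != nil {
		return nil, errors.Wrap(err, "get valid custody peers")
	}
	for _, pid := range valid {
		log.WithField("peerID", pid).WithField("custodyCount", s.CustodyCountFromRemotePeer(pid)).Debug("Selected custody peer")
	}
	return valid, nil
}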


@@ -0,0 +1,196 @@
package p2p
import (
"context"
"crypto/ecdsa"
"net"
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
prysmNetwork "github.com/prysmaticlabs/prysm/v5/network"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/metadata"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func createPeer(t *testing.T, privateKeyOffset int, custodyCount uint64) (*enr.Record, peer.ID, *ecdsa.PrivateKey) {
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(privateKeyOffset + i)
}
unmarshalledPrivateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
privateKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledPrivateKey)
require.NoError(t, err)
peerID, err := peer.IDFromPrivateKey(unmarshalledPrivateKey)
require.NoError(t, err)
record := &enr.Record{}
record.Set(peerdas.Csc(custodyCount))
record.Set(enode.Secp256k1(privateKey.PublicKey))
return record, peerID, privateKey
}
func TestGetValidCustodyPeers(t *testing.T) {
genesisValidatorRoot := make([]byte, 32)
for i := 0; i < 32; i++ {
genesisValidatorRoot[i] = byte(i)
}
service := &Service{
cfg: &Config{},
genesisTime: time.Now(),
genesisValidatorsRoot: genesisValidatorRoot,
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
}
ipAddrString, err := prysmNetwork.ExternalIPv4()
require.NoError(t, err)
ipAddr := net.ParseIP(ipAddrString)
custodyRequirement := params.BeaconConfig().CustodyRequirement
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Peer 1 custodies exactly the same columns as we do.
// (We use the same key pair as ours for simplicity)
peer1Record, peer1ID, localPrivateKey := createPeer(t, 1, custodyRequirement)
// Peer 2 custodies all the columns.
peer2Record, peer2ID, _ := createPeer(t, 2, dataColumnSidecarSubnetCount)
// Peer 3 custodies different columns than us (but the same count).
// (We use the same public key as peer 2 for simplicity)
peer3Record, peer3ID, _ := createPeer(t, 3, custodyRequirement)
// Peer 4 custodies less columns than us.
peer4Record, peer4ID, _ := createPeer(t, 4, custodyRequirement-1)
listener, err := service.createListener(ipAddr, localPrivateKey)
require.NoError(t, err)
service.dv5Listener = listener
service.peers.Add(peer1Record, peer1ID, nil, network.DirOutbound)
service.peers.Add(peer2Record, peer2ID, nil, network.DirOutbound)
service.peers.Add(peer3Record, peer3ID, nil, network.DirOutbound)
service.peers.Add(peer4Record, peer4ID, nil, network.DirOutbound)
actual, err := service.GetValidCustodyPeers([]peer.ID{peer1ID, peer2ID, peer3ID, peer4ID})
require.NoError(t, err)
expected := []peer.ID{peer1ID, peer2ID}
require.DeepSSZEqual(t, expected, actual)
}
func TestCustodyCountFromRemotePeer(t *testing.T) {
const (
expectedENR uint64 = 7
expectedMetadata uint64 = 8
pid = "test-id"
)
csc := peerdas.Csc(expectedENR)
// Define a nil record
var nilRecord *enr.Record = nil
// Define an empty record (record with no `csc` entry)
emptyRecord := &enr.Record{}
// Define a nominal record
nominalRecord := &enr.Record{}
nominalRecord.Set(csc)
// Define a metadata with zero custody.
zeroMetadata := wrapper.WrappedMetadataV2(&pb.MetaDataV2{
CustodySubnetCount: 0,
})
// Define a nominal metadata.
nominalMetadata := wrapper.WrappedMetadataV2(&pb.MetaDataV2{
CustodySubnetCount: expectedMetadata,
})
testCases := []struct {
name string
record *enr.Record
metadata metadata.Metadata
expected uint64
}{
{
name: "No metadata - No ENR",
record: nilRecord,
expected: params.BeaconConfig().CustodyRequirement,
},
{
name: "No metadata - Empty ENR",
record: emptyRecord,
expected: params.BeaconConfig().CustodyRequirement,
},
{
name: "No Metadata - ENR",
record: nominalRecord,
expected: expectedENR,
},
{
name: "Metadata with 0 value - ENR",
record: nominalRecord,
metadata: zeroMetadata,
expected: expectedENR,
},
{
name: "Metadata - ENR",
record: nominalRecord,
metadata: nominalMetadata,
expected: expectedMetadata,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create peers status.
peers := peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})
// Set the metadata.
if tc.metadata != nil {
peers.SetMetadata(pid, tc.metadata)
}
// Add a new peer with the record.
peers.Add(tc.record, pid, nil, network.DirOutbound)
// Create a new service.
service := &Service{
peers: peers,
metaData: tc.metadata,
}
// Retrieve the custody count from the remote peer.
actual := service.CustodyCountFromRemotePeer(pid)
// Verify the result.
require.Equal(t, tc.expected, actual)
})
}
}


@@ -15,7 +15,9 @@ import (
ma "github.com/multiformats/go-multiaddr" ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield" "github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache" "github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags" "github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/features" "github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
@@ -42,67 +44,148 @@ const (
udp6
)
const quickProtocolEnrKey = "quic"
type quicProtocol uint16
// quicProtocol is the "quic" key, which holds the QUIC port of the node.
func (quicProtocol) ENRKey() string { return quickProtocolEnrKey }
// RefreshPersistentSubnets checks that we are tracking our local persistent subnets for a variety of gossip topics.
// This routine checks for our attestation, sync committee and data column subnets and updates them if they have
// been rotated.
func (s *Service) RefreshPersistentSubnets() {
// Return early if discv5 service isn't running.
if s.dv5Listener == nil || !s.isInitialized() {
return
}
// Get the current epoch.
currentSlot := slots.CurrentSlot(uint64(s.genesisTime.Unix()))
currentEpoch := slots.ToEpoch(currentSlot)
// Get our node ID.
nodeID := s.dv5Listener.LocalNode().ID()
// Get our node record.
record := s.dv5Listener.Self().Record()
// Get the version of our metadata.
metadataVersion := s.Metadata().Version()
// Initialize persistent subnets.
if err := initializePersistentSubnets(nodeID, currentEpoch); err != nil {
log.WithError(err).Error("Could not initialize persistent subnets")
return
}
// Initialize persistent column subnets.
if err := initializePersistentColumnSubnets(nodeID); err != nil {
log.WithError(err).Error("Could not initialize persistent column subnets")
return
}
// Get the current attestation subnet bitfield.
bitV := bitfield.NewBitvector64()
attestationCommittees := cache.SubnetIDs.GetAllSubnets()
for _, idx := range attestationCommittees {
bitV.SetBitAt(idx, true)
}
// Get the attestation subnet bitfield we store in our record.
inRecordBitV, err := attBitvector(record)
if err != nil {
log.WithError(err).Error("Could not retrieve att bitfield")
return
}
// Get the attestation subnet bitfield in our metadata.
inMetadataBitV := s.Metadata().AttnetsBitfield()
// Is our attestation bitvector record up to date?
isBitVUpToDate := bytes.Equal(bitV, inRecordBitV) && bytes.Equal(bitV, inMetadataBitV)
// Compare current epoch with Altair fork epoch
altairForkEpoch := params.BeaconConfig().AltairForkEpoch
if currentEpoch < altairForkEpoch {
// Phase 0 behaviour.
if isBitVUpToDate {
// Return early if bitfield hasn't changed.
return
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadata(bitV)
// Ping all peers.
s.pingPeers()
return
}
// Get the current sync subnet bitfield.
bitS := bitfield.Bitvector4{byte(0x00)}
syncCommittees := cache.SyncSubnetIDs.GetAllSubnets(currentEpoch)
for _, idx := range syncCommittees {
bitS.SetBitAt(idx, true)
}
// Get the sync subnet bitfield we store in our record.
inRecordBitS, err := syncBitvector(record)
if err != nil {
log.WithError(err).Error("Could not retrieve sync bitfield")
return
}
// Get the sync subnet bitfield in our metadata.
currentBitSInMetadata := s.Metadata().SyncnetsBitfield()
isBitSUpToDate := bytes.Equal(bitS, inRecordBitS) && bytes.Equal(bitS, currentBitSInMetadata)
// Compare current epoch with EIP-7594 fork epoch.
eip7594ForkEpoch := params.BeaconConfig().Eip7594ForkEpoch
if currentEpoch < eip7594ForkEpoch {
// Altair behaviour.
if metadataVersion == version.Altair && isBitVUpToDate && isBitSUpToDate {
// Nothing to do, return early.
return
}
// Some data have changed, update our record and metadata.
s.updateSubnetRecordWithMetadataV2(bitV, bitS)
// Ping all peers to inform them of new metadata
s.pingPeers()
return
}
// Get the current custody subnet count.
custodySubnetCount := peerdas.CustodySubnetCount()
// Get the custody subnet count we store in our record.
inRecordCustodySubnetCount, err := peerdas.CustodyCountFromRecord(record)
if err != nil {
log.WithError(err).Error("Could not retrieve custody subnet count")
return
}
// Get the custody subnet count in our metadata.
inMetadataCustodySubnetCount := s.Metadata().CustodySubnetCount()
isCustodySubnetCountUpToDate := (custodySubnetCount == inRecordCustodySubnetCount && custodySubnetCount == inMetadataCustodySubnetCount)
if isBitVUpToDate && isBitSUpToDate && isCustodySubnetCountUpToDate {
// Nothing to do, return early.
return
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadataV3(bitV, bitS, custodySubnetCount)
// Ping all peers.
s.pingPeers()
}
@@ -258,6 +341,10 @@ func (s *Service) createLocalNode(
localNode.Set(quicEntry)
}
if params.PeerDASEnabled() {
localNode.Set(peerdas.Csc(peerdas.CustodySubnetCount()))
}
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)
@@ -346,6 +433,8 @@ func (s *Service) filterPeer(node *enode.Node) bool {
// Ignore nodes that are already active.
if s.peers.IsActive(peerData.ID) {
// Constantly update enr for known peers
s.peers.UpdateENR(node.Record(), peerData.ID)
return false
}


@@ -16,12 +16,15 @@ import (
"github.com/ethereum/go-ethereum/p2p/discover" "github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr" "github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/host" "github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/network" "github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer" "github.com/libp2p/go-libp2p/core/peer"
swarmt "github.com/libp2p/go-libp2p/p2p/net/swarm/testing"
"github.com/prysmaticlabs/go-bitfield" "github.com/prysmaticlabs/go-bitfield"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing" mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache" "github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
@@ -30,13 +33,12 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper" "github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket" leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil" "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
prysmNetwork "github.com/prysmaticlabs/prysm/v5/network" prysmNetwork "github.com/prysmaticlabs/prysm/v5/network"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert" "github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require" "github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test" logTest "github.com/sirupsen/logrus/hooks/test"
) )
@@ -131,6 +133,10 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
}
func TestCreateLocalNode(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.Eip7594ForkEpoch = 1
params.OverrideBeaconConfig(cfg)
testCases := []struct {
name string
cfg *Config
@@ -227,6 +233,11 @@ func TestCreateLocalNode(t *testing.T) {
syncSubnets := new([]byte)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(syncCommsSubnetEnrKey, syncSubnets)))
require.DeepSSZEqual(t, []byte{0}, *syncSubnets)
// Check custody_subnet_count config.
custodySubnetCount := new(uint64)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(peerdas.CustodySubnetCountEnrKey, custodySubnetCount)))
require.Equal(t, uint64(1), *custodySubnetCount)
})
}
}
@@ -435,177 +446,314 @@ func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState)
return id
}
func createAndConnectPeer(t *testing.T, p2pService *testp2p.TestP2P, offset int) {
// Create the private key.
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(offset + i)
}
privateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
// Create the peer.
peer := testp2p.NewTestP2P(t, swarmt.OptPeerPrivateKey(privateKey))
// Add the peer and connect it.
p2pService.Peers().Add(&enr.Record{}, peer.PeerID(), nil, network.DirOutbound)
p2pService.Peers().SetConnectionState(peer.PeerID(), peers.PeerConnected)
p2pService.Connect(peer)
}
// Define the ping count.
var actualPingCount int
type check struct {
pingCount int
metadataSequenceNumber uint64
attestationSubnets []uint64
syncSubnets []uint64
custodySubnetCount *uint64
}
func checkPingCountCacheMetadataRecord(
t *testing.T,
service *Service,
expected check,
) {
// Check the ping count.
require.Equal(t, expected.pingCount, actualPingCount)
// Check the attestation subnets in the cache.
actualAttestationSubnets := cache.SubnetIDs.GetAllSubnets()
require.DeepSSZEqual(t, expected.attestationSubnets, actualAttestationSubnets)
// Check the metadata sequence number.
actualMetadataSequenceNumber := service.metaData.SequenceNumber()
require.Equal(t, expected.metadataSequenceNumber, actualMetadataSequenceNumber)
// Compute expected attestation subnets bits.
expectedBitV := bitfield.NewBitvector64()
exists := false
for _, idx := range expected.attestationSubnets {
exists = true
expectedBitV.SetBitAt(idx, true)
}
// Check attnets in ENR.
var actualBitVENR bitfield.Bitvector64
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(attSubnetEnrKey, &actualBitVENR))
require.NoError(t, err)
require.DeepSSZEqual(t, expectedBitV, actualBitVENR)
// Check attnets in metadata.
if !exists {
expectedBitV = nil
}
actualBitVMetadata := service.metaData.AttnetsBitfield()
require.DeepSSZEqual(t, expectedBitV, actualBitVMetadata)
if expected.syncSubnets != nil {
// Compute expected sync subnets bits.
expectedBitS := bitfield.NewBitvector4()
exists = false
for _, idx := range expected.syncSubnets {
exists = true
expectedBitS.SetBitAt(idx, true)
}
// Check syncnets in ENR.
var actualBitSENR bitfield.Bitvector4
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(syncCommsSubnetEnrKey, &actualBitSENR))
require.NoError(t, err)
require.DeepSSZEqual(t, expectedBitS, actualBitSENR)
// Check syncnets in metadata.
if !exists {
expectedBitS = nil
}
actualBitSMetadata := service.metaData.SyncnetsBitfield()
require.DeepSSZEqual(t, expectedBitS, actualBitSMetadata)
}
if expected.custodySubnetCount != nil {
// Check custody subnet count in ENR.
var actualCustodySubnetCount uint64
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(peerdas.CustodySubnetCountEnrKey, &actualCustodySubnetCount))
require.NoError(t, err)
require.Equal(t, *expected.custodySubnetCount, actualCustodySubnetCount)
// Check custody subnet count in metadata.
actualCustodySubnetCountMetadata := service.metaData.CustodySubnetCount()
require.Equal(t, *expected.custodySubnetCount, actualCustodySubnetCountMetadata)
}
}
func TestRefreshPersistentSubnets(t *testing.T) {
params.SetupTestConfigCleanup(t)
// Clean up caches after usage.
defer cache.SubnetIDs.EmptyAllCaches()
defer cache.SyncSubnetIDs.EmptyAllCaches()
const (
altairForkEpoch = 5
eip7594ForkEpoch = 10
)
custodySubnetCount := uint64(1)
// Set up epochs.
defaultCfg := params.BeaconConfig()
cfg := defaultCfg.Copy()
cfg.AltairForkEpoch = altairForkEpoch
cfg.Eip7594ForkEpoch = eip7594ForkEpoch
params.OverrideBeaconConfig(cfg)
// Compute the number of seconds per epoch.
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
secondsPerEpoch := secondsPerSlot * uint64(slotsPerEpoch)
testCases := []struct {
name string
epochSinceGenesis uint64
checks []check
}{
{
name: "Phase0",
epochSinceGenesis: 0,
checks: []check{
{
pingCount: 0,
metadataSequenceNumber: 0,
attestationSubnets: []uint64{},
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
},
},
},
{
name: "Altair",
epochSinceGenesis: altairForkEpoch,
checks: []check{
{
pingCount: 0,
metadataSequenceNumber: 0,
attestationSubnets: []uint64{},
syncSubnets: nil,
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
syncSubnets: nil,
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
},
},
},
{
name: "PeerDAS",
epochSinceGenesis: eip7594ForkEpoch,
checks: []check{
{
pingCount: 0,
metadataSequenceNumber: 0,
attestationSubnets: []uint64{},
syncSubnets: nil,
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
syncSubnets: nil,
custodySubnetCount: &custodySubnetCount,
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
custodySubnetCount: &custodySubnetCount,
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
custodySubnetCount: &custodySubnetCount,
},
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const peerOffset = 1
// Initialize the ping count.
actualPingCount = 0
// Create the private key.
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(i)
}
unmarshalledPrivateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
privateKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledPrivateKey)
require.NoError(t, err)
// Create a p2p service.
p2p := testp2p.NewTestP2P(t)
// Create and connect a peer.
createAndConnectPeer(t, p2p, peerOffset)
// Create a service.
service := &Service{
pingMethod: func(_ context.Context, _ peer.ID) error {
actualPingCount++
return nil
},
cfg: &Config{UDPPort: 2000},
peers: p2p.Peers(),
genesisTime: time.Now().Add(-time.Duration(tc.epochSinceGenesis*secondsPerEpoch) * time.Second),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
}
// Create a listener.
listener, err := service.createListener(nil, privateKey)
require.NoError(t, err)
// Set the listener and the metadata.
service.dv5Listener = listener
service.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[0])
// Refresh the persistent subnets.
service.RefreshPersistentSubnets()
time.Sleep(10 * time.Millisecond)
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[1])
// Add a sync committee subnet.
cache.SyncSubnetIDs.AddSyncCommitteeSubnets([]byte{'a'}, altairForkEpoch, []uint64{1, 2}, 1*time.Hour)
// Refresh the persistent subnets.
service.RefreshPersistentSubnets()
time.Sleep(10 * time.Millisecond)
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[2])
// Refresh the persistent subnets.
service.RefreshPersistentSubnets()
time.Sleep(10 * time.Millisecond)
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[3])
// Clean the test.
service.dv5Listener.Close()
cache.SubnetIDs.EmptyAllCaches()
cache.SyncSubnetIDs.EmptyAllCaches()
})
}
// Reset the config.
params.OverrideBeaconConfig(defaultCfg)
}


@@ -121,7 +121,7 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
return defaultAttesterSlashingTopicParams(), nil
case strings.Contains(topic, GossipBlsToExecutionChangeMessage):
return defaultBlsToExecutionChangeTopicParams(), nil
case strings.Contains(topic, GossipBlobSidecarMessage), strings.Contains(topic, GossipDataColumnSidecarMessage):
// TODO(Deneb): Using the default block scoring. But this should be updated.
return defaultBlockTopicParams(), nil
default:


@@ -22,6 +22,7 @@ var gossipTopicMappings = map[string]func() proto.Message{
SyncCommitteeSubnetTopicFormat: func() proto.Message { return &ethpb.SyncCommitteeMessage{} },
BlsToExecutionChangeSubnetTopicFormat: func() proto.Message { return &ethpb.SignedBLSToExecutionChange{} },
BlobSubnetTopicFormat: func() proto.Message { return &ethpb.BlobSidecar{} },
DataColumnSubnetTopicFormat: func() proto.Message { return &ethpb.DataColumnSidecar{} },
}
// GossipTopicMappings is a function to return the assigned data type


@@ -3,6 +3,7 @@ package p2p
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/connmgr"
@@ -28,6 +29,12 @@ type P2P interface {
ConnectionHandler
PeersProvider
MetadataProvider
CustodyHandler
}
type Acceser interface {
Broadcaster
PeerManager
}
// Broadcaster broadcasts messages to peers over the p2p pubsub protocol.
@@ -36,6 +43,7 @@ type Broadcaster interface {
BroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastDataColumn(ctx context.Context, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.
@@ -81,8 +89,9 @@ type PeerManager interface {
PeerID() peer.ID
Host() host.Host
ENR() *enr.Record
NodeID() enode.ID
DiscoveryAddresses() ([]multiaddr.Multiaddr, error)
RefreshPersistentSubnets()
FindPeersWithSubnet(ctx context.Context, topic string, subIndex uint64, threshold int) (bool, error)
AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) error)
}
@@ -102,3 +111,8 @@ type MetadataProvider interface {
Metadata() metadata.Metadata
MetadataSeq() uint64
}
type CustodyHandler interface {
CustodyCountFromRemotePeer(peer.ID) uint64
GetValidCustodyPeers([]peer.ID) ([]peer.ID, error)
}
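A short consumer sketch (editor-added, not part of the diff): code that only needs custody information can depend on the narrow CustodyHandler interface rather than the full P2P interface. logCustodyCounts is an invented helper for illustration and assumes fmt is imported.

func logCustodyCounts(h CustodyHandler, pids []peer.ID) {
	for _, pid := range pids {
		// CustodyCountFromRemotePeer always returns a value; it falls back to defaults when data is missing.
		fmt.Printf("peer %s custodies %d subnets\n", pid, h.CustodyCountFromRemotePeer(pid))
	}
}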


@@ -60,17 +60,25 @@ var (
"the subnet. The beacon node increments this counter when the broadcast is blocked " + "the subnet. The beacon node increments this counter when the broadcast is blocked " +
"until a subnet peer can be found.", "until a subnet peer can be found.",
}) })
blobSidecarCommitteeBroadcasts = promauto.NewCounter(prometheus.CounterOpts{ blobSidecarBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_blob_sidecar_committee_broadcasts", Name: "p2p_blob_sidecar_committee_broadcasts",
Help: "The number of blob sidecar committee messages that were broadcast with no peer on.", Help: "The number of blob sidecar messages that were broadcast with no peer on.",
}) })
syncCommitteeBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{ syncCommitteeBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_sync_committee_subnet_attempted_broadcasts", Name: "p2p_sync_committee_subnet_attempted_broadcasts",
Help: "The number of sync committee that were attempted to be broadcast.", Help: "The number of sync committee that were attempted to be broadcast.",
}) })
blobSidecarCommitteeBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{ blobSidecarBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_blob_sidecar_committee_attempted_broadcasts", Name: "p2p_blob_sidecar_committee_attempted_broadcasts",
Help: "The number of blob sidecar committee messages that were attempted to be broadcast.", Help: "The number of blob sidecar messages that were attempted to be broadcast.",
})
dataColumnSidecarBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_data_column_sidecar_broadcasts",
Help: "The number of data column sidecar messages that were broadcasted.",
})
dataColumnSidecarBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_data_column_sidecar_attempted_broadcasts",
Help: "The number of data column sidecar messages that were attempted to be broadcast.",
}) })
// Gossip Tracer Metrics // Gossip Tracer Metrics


@@ -159,6 +159,14 @@ func (p *Status) Add(record *enr.Record, pid peer.ID, address ma.Multiaddr, dire
p.addIpToTracker(pid)
}
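// UpdateENR updates the stored ENR record for the given peer, if the peer is already known.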
func (p *Status) UpdateENR(record *enr.Record, pid peer.ID) {
p.store.Lock()
defer p.store.Unlock()
if peerData, ok := p.store.PeerData(pid); ok {
peerData.Enr = record
}
}
// Address returns the multiaddress of the given remote peer.
// This will error if the peer does not exist.
func (p *Status) Address(pid peer.ID) (ma.Multiaddr, error) {


@@ -165,14 +165,14 @@ func (s *Service) pubsubOptions() []pubsub.Option {
func parsePeersEnr(peers []string) ([]peer.AddrInfo, error) {
addrs, err := PeersFromStringAddrs(peers)
if err != nil {
return nil, fmt.Errorf("cannot convert peers raw ENRs into multiaddresses: %w", err)
}
if len(addrs) == 0 {
return nil, fmt.Errorf("converting peers raw ENRs into multiaddresses resulted in an empty list")
}
directAddrInfos, err := peer.AddrInfosFromP2pAddrs(addrs...)
if err != nil {
return nil, fmt.Errorf("cannot convert peers multiaddresses into AddrInfos: %w", err)
}
return directAddrInfos, nil
}


@@ -90,7 +90,7 @@ func TestService_CanSubscribe(t *testing.T) {
formatting := []interface{}{digest}
// Special case for attestation subnets which have a second formatting placeholder.
if topic == AttestationSubnetTopicFormat || topic == SyncCommitteeSubnetTopicFormat || topic == BlobSubnetTopicFormat || topic == DataColumnSubnetTopicFormat {
formatting = append(formatting, 0 /* some subnet ID */)
}


@@ -10,11 +10,16 @@ import (
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
) )
// SchemaVersionV1 specifies the schema version for our rpc protocol ID. const (
const SchemaVersionV1 = "/1" // SchemaVersionV1 specifies the schema version for our rpc protocol ID.
SchemaVersionV1 = "/1"
// SchemaVersionV2 specifies the next schema version for our rpc protocol ID. // SchemaVersionV2 specifies the next schema version for our rpc protocol ID.
const SchemaVersionV2 = "/2" SchemaVersionV2 = "/2"
// SchemaVersionV3 specifies the next schema version for our rpc protocol ID.
SchemaVersionV3 = "/3"
)
// Specifies the protocol prefix for all our Req/Resp topics. // Specifies the protocol prefix for all our Req/Resp topics.
const protocolPrefix = "/eth2/beacon_chain/req" const protocolPrefix = "/eth2/beacon_chain/req"
@@ -43,6 +48,12 @@ const BlobSidecarsByRangeName = "/blob_sidecars_by_range"
// BlobSidecarsByRootName is the name for the BlobSidecarsByRoot v1 message topic.
const BlobSidecarsByRootName = "/blob_sidecars_by_root"
// DataColumnSidecarsByRootName is the name for the DataColumnSidecarsByRoot v1 message topic.
const DataColumnSidecarsByRootName = "/data_column_sidecars_by_root"
// DataColumnSidecarsByRangeName is the name for the DataColumnSidecarsByRange v1 message topic.
const DataColumnSidecarsByRangeName = "/data_column_sidecars_by_range"
const (
// V1 RPC Topics
// RPCStatusTopicV1 defines the v1 topic for the status rpc method.
@@ -65,6 +76,12 @@ const (
// RPCBlobSidecarsByRootTopicV1 is a topic for requesting blob sidecars by their block root. New in deneb.
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/
RPCBlobSidecarsByRootTopicV1 = protocolPrefix + BlobSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRootTopicV1 is a topic for requesting data column sidecars by their block root. New in PeerDAS.
// /eth2/beacon_chain/req/data_column_sidecars_by_root/1
RPCDataColumnSidecarsByRootTopicV1 = protocolPrefix + DataColumnSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRangeTopicV1 is a topic for requesting data column sidecars by their slot. New in PeerDAS.
// /eth2/beacon_chain/req/data_column_sidecars_by_range/1
RPCDataColumnSidecarsByRangeTopicV1 = protocolPrefix + DataColumnSidecarsByRangeName + SchemaVersionV1
// V2 RPC Topics
// RPCBlocksByRangeTopicV2 defines v2 the topic for the blocks by range rpc method.
@@ -73,6 +90,9 @@ const (
RPCBlocksByRootTopicV2 = protocolPrefix + BeaconBlocksByRootsMessageName + SchemaVersionV2
// RPCMetaDataTopicV2 defines the v2 topic for the metadata rpc method.
RPCMetaDataTopicV2 = protocolPrefix + MetadataMessageName + SchemaVersionV2
// V3 RPC Topics
RPCMetaDataTopicV3 = protocolPrefix + MetadataMessageName + SchemaVersionV3
)
// RPC errors for topic parsing.
@@ -97,10 +117,15 @@ var RPCTopicMappings = map[string]interface{}{
// RPC Metadata Message
RPCMetaDataTopicV1: new(interface{}),
RPCMetaDataTopicV2: new(interface{}),
RPCMetaDataTopicV3: new(interface{}),
// BlobSidecarsByRange v1 Message
RPCBlobSidecarsByRangeTopicV1: new(pb.BlobSidecarsByRangeRequest),
// BlobSidecarsByRoot v1 Message
RPCBlobSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
// DataColumnSidecarsByRange v1 Message
RPCDataColumnSidecarsByRangeTopicV1: new(pb.DataColumnSidecarsByRangeRequest),
// DataColumnSidecarsByRoot v1 Message
RPCDataColumnSidecarsByRootTopicV1: new(p2ptypes.DataColumnSidecarsByRootReq),
}
// Maps all registered protocol prefixes.
@@ -119,6 +144,8 @@ var messageMapping = map[string]bool{
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
DataColumnSidecarsByRootName: true,
DataColumnSidecarsByRangeName: true,
}
// Maps all the RPC messages which are to be updated in altair.
@@ -128,9 +155,15 @@ var altairMapping = map[string]bool{
MetadataMessageName: true,
}
// Maps all the RPC messages which are to updated with peerDAS fork epoch.
var peerDASMapping = map[string]bool{
MetadataMessageName: true,
}
var versionMapping = map[string]bool{
SchemaVersionV1: true,
SchemaVersionV2: true,
SchemaVersionV3: true,
}
// OmitContextBytesV1 keeps track of which RPC methods do not write context bytes in their v1 incarnations.
@@ -258,13 +291,25 @@ func (r RPCTopic) Version() string {
// TopicFromMessage constructs the rpc topic from the provided message
// type and epoch.
func TopicFromMessage(msg string, epoch primitives.Epoch) (string, error) {
// Check if the topic is known.
if !messageMapping[msg] {
return "", errors.Errorf("%s: %s", invalidRPCMessageType, msg)
}
// Base version is version 1.
version := SchemaVersionV1
// Check if the message is to be updated in altair.
isAltair := epoch >= params.BeaconConfig().AltairForkEpoch
if isAltair && altairMapping[msg] {
version = SchemaVersionV2
}
// Check if the message is to be updated in peerDAS.
isPeerDAS := epoch >= params.BeaconConfig().Eip7594ForkEpoch
if isPeerDAS && peerDASMapping[msg] {
version = SchemaVersionV3
}
return protocolPrefix + msg + version, nil
}
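To make the fork gating above concrete, a small editor-added example (not part of the diff). It assumes MetadataMessageName is "/metadata", in which case the resulting protocol ID would be /eth2/beacon_chain/req/metadata/1 before Altair, /metadata/2 from Altair, and /metadata/3 from the EIP-7594 (PeerDAS) fork epoch onward.

// Assumed editor sketch: resolve the metadata protocol ID at a given epoch.
func metadataTopicAt(epoch primitives.Epoch) (string, error) {
	// Returns protocolPrefix + MetadataMessageName + SchemaVersionV1/V2/V3 depending on epoch.
	return TopicFromMessage(MetadataMessageName, epoch)
}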


@@ -42,7 +42,7 @@ func (s *Service) Send(ctx context.Context, message interface{}, baseTopic strin
return nil, err
}
// do not encode anything if we are sending a metadata request
if baseTopic != RPCMetaDataTopicV1 && baseTopic != RPCMetaDataTopicV2 && baseTopic != RPCMetaDataTopicV3 {
castedMsg, ok := message.(ssz.Marshaler)
if !ok {
return nil, errors.Errorf("%T does not support the ssz marshaller interface", message)


@@ -226,7 +226,7 @@ func (s *Service) Start() {
}
// Initialize metadata according to the
// current epoch.
s.RefreshPersistentSubnets()
// Periodic functions.
async.RunEvery(s.ctx, params.BeaconConfig().TtfbTimeoutDuration(), func() {
@@ -234,7 +234,7 @@ func (s *Service) Start() {
})
async.RunEvery(s.ctx, 30*time.Minute, s.Peers().Prune)
async.RunEvery(s.ctx, time.Duration(params.BeaconConfig().RespTimeout)*time.Second, s.updateMetrics)
async.RunEvery(s.ctx, refreshRate, s.RefreshPersistentSubnets)
async.RunEvery(s.ctx, 1*time.Minute, func() {
inboundQUICCount := len(s.peers.InboundConnectedWithProtocol(peers.QUIC))
inboundTCPCount := len(s.peers.InboundConnectedWithProtocol(peers.TCP))
@@ -358,6 +358,15 @@ func (s *Service) ENR() *enr.Record {
return s.dv5Listener.Self().Record()
}
// NodeID returns the local node's node ID
// for discovery.
func (s *Service) NodeID() enode.ID {
if s.dv5Listener == nil {
return [32]byte{}
}
return s.dv5Listener.Self().ID()
}
// DiscoveryAddresses represents our enr addresses as multiaddresses.
func (s *Service) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
if s.dv5Listener == nil {


@@ -11,8 +11,11 @@ import (
"github.com/holiman/uint256" "github.com/holiman/uint256"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield" "github.com/prysmaticlabs/go-bitfield"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache" "github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags" "github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives" "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -21,7 +24,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil" "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v5/math" mathutil "github.com/prysmaticlabs/prysm/v5/math"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"go.opencensus.io/trace"
) )
var attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount var attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
@@ -29,12 +31,13 @@ var syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
var attSubnetEnrKey = params.BeaconNetworkConfig().AttSubnetKey
var syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
var custodySubnetCountEnrKey = params.BeaconNetworkConfig().CustodySubnetCountKey
// The value used with the subnet, in order
// to create an appropriate key to retrieve
// the relevant lock. This is used to differentiate
// sync subnets from others. This is deliberately
// chosen as more than 64 (attestation subnet count).
const syncLockerVal = 100
// The value used with the blob sidecar subnet, in order // The value used with the blob sidecar subnet, in order
@@ -44,6 +47,13 @@ const syncLockerVal = 100
// chosen more than sync and attestation subnet combined. // chosen more than sync and attestation subnet combined.
const blobSubnetLockerVal = 110 const blobSubnetLockerVal = 110
// The value used with the data column sidecar subnet, in order
// to create an appropriate key to retrieve
// the relevant lock. This is used to differentiate
// data column subnets from others. This is deliberately
// chosen as more than the sync, attestation and blob subnets (6) combined.
const dataColumnSubnetVal = 150
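To make the offset scheme concrete, here is a minimal sketch of how a data column subnet index maps to a locker key; the helper below is hypothetical and only the constants above are taken from this file:

// dataColumnLockerKey is a hypothetical helper illustrating the key ranges:
// attestation subnets occupy keys [0, 64), sync subnets start at 100,
// blob subnets at 110, and data column subnets at 150, so the ranges never overlap.
func dataColumnLockerKey(subnet uint64) uint64 {
	return subnet + dataColumnSubnetVal // e.g. subnet 3 -> key 153
}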
// FindPeersWithSubnet performs a network search for peers // FindPeersWithSubnet performs a network search for peers
// subscribed to a particular subnet. Then it tries to connect // subscribed to a particular subnet. Then it tries to connect
// with those peers. This method will block until either: // with those peers. This method will block until either:
@@ -72,8 +82,10 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
iterator = filterNodes(ctx, iterator, s.filterPeerForAttSubnet(index)) iterator = filterNodes(ctx, iterator, s.filterPeerForAttSubnet(index))
case strings.Contains(topic, GossipSyncCommitteeMessage): case strings.Contains(topic, GossipSyncCommitteeMessage):
iterator = filterNodes(ctx, iterator, s.filterPeerForSyncSubnet(index)) iterator = filterNodes(ctx, iterator, s.filterPeerForSyncSubnet(index))
case strings.Contains(topic, GossipDataColumnSidecarMessage):
iterator = filterNodes(ctx, iterator, s.filterPeerForDataColumnsSubnet(index))
default: default:
return false, errors.New("no subnet exists for provided topic") return false, errors.Errorf("no subnet exists for provided topic: %s", topic)
} }
wg := new(sync.WaitGroup) wg := new(sync.WaitGroup)
@@ -153,6 +165,22 @@ func (s *Service) filterPeerForSyncSubnet(index uint64) func(node *enode.Node) b
} }
} }
// returns a method that filters peers specifically for a particular data column subnet.
func (s *Service) filterPeerForDataColumnsSubnet(index uint64) func(node *enode.Node) bool {
return func(node *enode.Node) bool {
if !s.filterPeer(node) {
return false
}
subnets, err := dataColumnSubnets(node.ID(), node.Record())
if err != nil {
return false
}
return subnets[index]
}
}
// lower threshold to broadcast object compared to searching // lower threshold to broadcast object compared to searching
// for a subnet. So that even in the event of poor peer // for a subnet. So that even in the event of poor peer
// connectivity, we can still broadcast an attestation. // connectivity, we can still broadcast an attestation.
@@ -192,6 +220,35 @@ func (s *Service) updateSubnetRecordWithMetadataV2(bitVAtt bitfield.Bitvector64,
}) })
} }
// updateSubnetRecordWithMetadataV3 updates:
// - attestation subnets tracked,
// - sync subnets tracked, and
// - custody subnet count
// both in the node's record and in the node's metadata.
func (s *Service) updateSubnetRecordWithMetadataV3(
bitVAtt bitfield.Bitvector64,
bitVSync bitfield.Bitvector4,
custodySubnetCount uint64,
) {
attSubnetsEntry := enr.WithEntry(attSubnetEnrKey, &bitVAtt)
syncSubnetsEntry := enr.WithEntry(syncCommsSubnetEnrKey, &bitVSync)
custodySubnetCountEntry := enr.WithEntry(custodySubnetCountEnrKey, custodySubnetCount)
localNode := s.dv5Listener.LocalNode()
localNode.Set(attSubnetsEntry)
localNode.Set(syncSubnetsEntry)
localNode.Set(custodySubnetCountEntry)
newSeqNumber := s.metaData.SequenceNumber() + 1
s.metaData = wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: newSeqNumber,
Attnets: bitVAtt,
Syncnets: bitVSync,
CustodySubnetCount: custodySubnetCount,
})
}
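A rough sketch of how this method might be invoked when peerDAS is active; the caller and the two bitfield variables are assumptions, while peerdas.CustodySubnetCount is used elsewhere in this change set:

// bitVAtt and bitVSync are assumed to hold the attestation and sync committee
// subnets currently tracked by this node.
custodyCount := peerdas.CustodySubnetCount()
s.updateSubnetRecordWithMetadataV3(bitVAtt, bitVSync, custodyCount)
// The local ENR now carries the custody subnet count entry, and s.metaData is a
// wrapped MetaDataV2 with an incremented sequence number.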
func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error { func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
_, ok, expTime := cache.SubnetIDs.GetPersistentSubnets() _, ok, expTime := cache.SubnetIDs.GetPersistentSubnets()
if ok && expTime.After(time.Now()) { if ok && expTime.After(time.Now()) {
@@ -206,6 +263,25 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
return nil return nil
} }
func initializePersistentColumnSubnets(id enode.ID) error {
_, ok, expTime := cache.ColumnSubnetIDs.GetColumnSubnets()
if ok && expTime.After(time.Now()) {
return nil
}
subsMap, err := peerdas.CustodyColumnSubnets(id, peerdas.CustodySubnetCount())
if err != nil {
return err
}
subs := make([]uint64, 0, len(subsMap))
for sub := range subsMap {
subs = append(subs, sub)
}
cache.ColumnSubnetIDs.AddColumnSubnets(subs)
return nil
}
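A short usage sketch, assuming the cache accessors behave as shown above and that the caller returns an error; the package-level log entry is an assumption:

if err := initializePersistentColumnSubnets(s.dv5Listener.Self().ID()); err != nil {
	return errors.Wrap(err, "initialize persistent column subnets")
}
// The custody column subnets are now cached until the entry expires.
subs, ok, _ := cache.ColumnSubnetIDs.GetColumnSubnets()
if ok {
	log.WithField("subnets", subs).Debug("Tracking custody column subnets")
}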
// Spec pseudocode definition: // Spec pseudocode definition:
// //
// def compute_subscribed_subnets(node_id: NodeID, epoch: Epoch) -> Sequence[SubnetID]: // def compute_subscribed_subnets(node_id: NodeID, epoch: Epoch) -> Sequence[SubnetID]:
@@ -329,6 +405,25 @@ func syncSubnets(record *enr.Record) ([]uint64, error) {
return committeeIdxs, nil return committeeIdxs, nil
} }
func dataColumnSubnets(nodeID enode.ID, record *enr.Record) (map[uint64]bool, error) {
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Retrieve the custody count from the ENR.
custodyCount, err := peerdas.CustodyCountFromRecord(record)
if err != nil {
// If we fail to retrieve the custody count, we default to the custody requirement.
custodyCount = custodyRequirement
}
// Compute the custody column subnets for the remote peer.
custodyColumnsSubnets, err := peerdas.CustodyColumnSubnets(nodeID, custodyCount)
if err != nil {
return nil, errors.Wrap(err, "custody column subnets")
}
return custodyColumnsSubnets, nil
}
// Parses the attestation subnets ENR entry in a node and extracts its value // Parses the attestation subnets ENR entry in a node and extracts its value
// as a bitvector for further manipulation. // as a bitvector for further manipulation.
func attBitvector(record *enr.Record) (bitfield.Bitvector64, error) { func attBitvector(record *enr.Record) (bitfield.Bitvector64, error) {
@@ -355,10 +450,11 @@ func syncBitvector(record *enr.Record) (bitfield.Bitvector4, error) {
// The subnet locker is a map which keeps track of all // The subnet locker is a map which keeps track of all
// mutexes stored per subnet. This locker is re-used // mutexes stored per subnet. This locker is re-used
// between both the attestation and sync subnets. In // between the attestation, sync, blob and data column subnets.
// order to differentiate between attestation and sync // Sync subnets are stored by (subnet+syncLockerVal).
// subnets. Sync subnets are stored by (subnet+syncLockerVal). This // Blob subnets are stored by (subnet+blobSubnetLockerVal).
// is to prevent conflicts while allowing both subnets // Data column subnets are stored by (subnet+dataColumnSubnetVal).
// This is to prevent conflicts while allowing subnets
// to use a single locker. // to use a single locker.
func (s *Service) subnetLocker(i uint64) *sync.RWMutex { func (s *Service) subnetLocker(i uint64) *sync.RWMutex {
s.subnetsLockLock.Lock() s.subnetsLockLock.Lock()

View File

@@ -17,9 +17,11 @@ go_library(
"//beacon-chain:__subpackages__", "//beacon-chain:__subpackages__",
], ],
deps = [ deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library", "//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library", "//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library", "//beacon-chain/p2p/peers/scorers:go_default_library",
"//config/params:go_default_library",
"//proto/prysm/v1alpha1:go_default_library", "//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library", "//proto/prysm/v1alpha1/metadata:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library", "@com_github_ethereum_go_ethereum//crypto:go_default_library",

View File

@@ -3,6 +3,7 @@ package testing
import ( import (
"context" "context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr" "github.com/ethereum/go-ethereum/p2p/enr"
pubsub "github.com/libp2p/go-libp2p-pubsub" pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/control" "github.com/libp2p/go-libp2p/core/control"
@@ -27,148 +28,166 @@ func NewFuzzTestP2P() *FakeP2P {
} }
// Encoding -- fake. // Encoding -- fake.
func (_ *FakeP2P) Encoding() encoder.NetworkEncoding { func (*FakeP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{} return &encoder.SszNetworkEncoder{}
} }
// AddConnectionHandler -- fake. // AddConnectionHandler -- fake.
func (_ *FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) { func (*FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
} }
// AddDisconnectionHandler -- fake. // AddDisconnectionHandler -- fake.
func (_ *FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) { func (*FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
} }
// AddPingMethod -- fake. // AddPingMethod -- fake.
func (_ *FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) { func (*FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
} }
// PeerID -- fake. // PeerID -- fake.
func (_ *FakeP2P) PeerID() peer.ID { func (*FakeP2P) PeerID() peer.ID {
return "fake" return "fake"
} }
// ENR returns the enr of the local peer. // ENR returns the enr of the local peer.
func (_ *FakeP2P) ENR() *enr.Record { func (*FakeP2P) ENR() *enr.Record {
return new(enr.Record) return new(enr.Record)
} }
// NodeID returns the node id of the local peer.
func (*FakeP2P) NodeID() enode.ID {
return [32]byte{}
}
// DiscoveryAddresses -- fake // DiscoveryAddresses -- fake
func (_ *FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) { func (*FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil return nil, nil
} }
// FindPeersWithSubnet mocks the p2p func. // FindPeersWithSubnet mocks the p2p func.
func (_ *FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) { func (*FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil return false, nil
} }
// RefreshENR mocks the p2p func. // RefreshPersistentSubnets mocks the p2p func.
func (_ *FakeP2P) RefreshENR() {} func (*FakeP2P) RefreshPersistentSubnets() {}
// LeaveTopic -- fake. // LeaveTopic -- fake.
func (_ *FakeP2P) LeaveTopic(_ string) error { func (*FakeP2P) LeaveTopic(_ string) error {
return nil return nil
} }
// Metadata -- fake. // Metadata -- fake.
func (_ *FakeP2P) Metadata() metadata.Metadata { func (*FakeP2P) Metadata() metadata.Metadata {
return nil return nil
} }
// Peers -- fake. // Peers -- fake.
func (_ *FakeP2P) Peers() *peers.Status { func (*FakeP2P) Peers() *peers.Status {
return nil return nil
} }
// PublishToTopic -- fake. // PublishToTopic -- fake.
func (_ *FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error { func (*FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
return nil return nil
} }
// Send -- fake. // Send -- fake.
func (_ *FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) { func (*FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
return nil, nil return nil, nil
} }
// PubSub -- fake. // PubSub -- fake.
func (_ *FakeP2P) PubSub() *pubsub.PubSub { func (*FakeP2P) PubSub() *pubsub.PubSub {
return nil return nil
} }
// MetadataSeq -- fake. // MetadataSeq -- fake.
func (_ *FakeP2P) MetadataSeq() uint64 { func (*FakeP2P) MetadataSeq() uint64 {
return 0 return 0
} }
// SetStreamHandler -- fake. // SetStreamHandler -- fake.
func (_ *FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) { func (*FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
} }
// SubscribeToTopic -- fake. // SubscribeToTopic -- fake.
func (_ *FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) { func (*FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
return nil, nil return nil, nil
} }
// JoinTopic -- fake. // JoinTopic -- fake.
func (_ *FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) { func (*FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
return nil, nil return nil, nil
} }
// Host -- fake. // Host -- fake.
func (_ *FakeP2P) Host() host.Host { func (*FakeP2P) Host() host.Host {
return nil return nil
} }
// Disconnect -- fake. // Disconnect -- fake.
func (_ *FakeP2P) Disconnect(_ peer.ID) error { func (*FakeP2P) Disconnect(_ peer.ID) error {
return nil return nil
} }
// Broadcast -- fake. // Broadcast -- fake.
func (_ *FakeP2P) Broadcast(_ context.Context, _ proto.Message) error { func (*FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
return nil return nil
} }
// BroadcastAttestation -- fake. // BroadcastAttestation -- fake.
func (_ *FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error { func (*FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
return nil return nil
} }
// BroadcastSyncCommitteeMessage -- fake. // BroadcastSyncCommitteeMessage -- fake.
func (_ *FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error { func (*FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
return nil return nil
} }
// BroadcastBlob -- fake. // BroadcastBlob -- fake.
func (_ *FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error { func (*FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
return nil
}
// BroadcastDataColumn -- fake.
func (*FakeP2P) BroadcastDataColumn(_ context.Context, _ uint64, _ *ethpb.DataColumnSidecar) error {
return nil return nil
} }
// InterceptPeerDial -- fake. // InterceptPeerDial -- fake.
func (_ *FakeP2P) InterceptPeerDial(peer.ID) (allow bool) { func (*FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true return true
} }
// InterceptAddrDial -- fake. // InterceptAddrDial -- fake.
func (_ *FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) { func (*FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true return true
} }
// InterceptAccept -- fake. // InterceptAccept -- fake.
func (_ *FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) { func (*FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true return true
} }
// InterceptSecured -- fake. // InterceptSecured -- fake.
func (_ *FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) { func (*FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true return true
} }
// InterceptUpgraded -- fake. // InterceptUpgraded -- fake.
func (_ *FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) { func (*FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0 return true, 0
} }
func (*FakeP2P) CustodyCountFromRemotePeer(peer.ID) uint64 {
return 0
}
func (*FakeP2P) GetValidCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}

View File

@@ -48,6 +48,12 @@ func (m *MockBroadcaster) BroadcastBlob(context.Context, uint64, *ethpb.BlobSide
return nil return nil
} }
// BroadcastDataColumn broadcasts a data column for mock.
func (m *MockBroadcaster) BroadcastDataColumn(context.Context, uint64, *ethpb.DataColumnSidecar) error {
m.BroadcastCalled.Store(true)
return nil
}
// NumMessages returns the number of messages broadcasted. // NumMessages returns the number of messages broadcasted.
func (m *MockBroadcaster) NumMessages() int { func (m *MockBroadcaster) NumMessages() int {
m.msgLock.Lock() m.msgLock.Lock()

View File

@@ -18,12 +18,12 @@ type MockHost struct {
} }
// ID -- // ID --
func (_ *MockHost) ID() peer.ID { func (*MockHost) ID() peer.ID {
return "" return ""
} }
// Peerstore -- // Peerstore --
func (_ *MockHost) Peerstore() peerstore.Peerstore { func (*MockHost) Peerstore() peerstore.Peerstore {
return nil return nil
} }
@@ -33,46 +33,46 @@ func (m *MockHost) Addrs() []ma.Multiaddr {
} }
// Network -- // Network --
func (_ *MockHost) Network() network.Network { func (*MockHost) Network() network.Network {
return nil return nil
} }
// Mux -- // Mux --
func (_ *MockHost) Mux() protocol.Switch { func (*MockHost) Mux() protocol.Switch {
return nil return nil
} }
// Connect -- // Connect --
func (_ *MockHost) Connect(_ context.Context, _ peer.AddrInfo) error { func (*MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
return nil return nil
} }
// SetStreamHandler -- // SetStreamHandler --
func (_ *MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {} func (*MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
// SetStreamHandlerMatch -- // SetStreamHandlerMatch --
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) { func (*MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
} }
// RemoveStreamHandler -- // RemoveStreamHandler --
func (_ *MockHost) RemoveStreamHandler(_ protocol.ID) {} func (*MockHost) RemoveStreamHandler(_ protocol.ID) {}
// NewStream -- // NewStream --
func (_ *MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) { func (*MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
return nil, nil return nil, nil
} }
// Close -- // Close --
func (_ *MockHost) Close() error { func (*MockHost) Close() error {
return nil return nil
} }
// ConnManager -- // ConnManager --
func (_ *MockHost) ConnManager() connmgr.ConnManager { func (*MockHost) ConnManager() connmgr.ConnManager {
return nil return nil
} }
// EventBus -- // EventBus --
func (_ *MockHost) EventBus() event.Bus { func (*MockHost) EventBus() event.Bus {
return nil return nil
} }

View File

@@ -4,6 +4,7 @@ import (
"context" "context"
"errors" "errors"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr" "github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/host" "github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer" "github.com/libp2p/go-libp2p/core/peer"
@@ -20,7 +21,7 @@ type MockPeerManager struct {
} }
// Disconnect . // Disconnect .
func (_ *MockPeerManager) Disconnect(peer.ID) error { func (*MockPeerManager) Disconnect(peer.ID) error {
return nil return nil
} }
@@ -39,6 +40,11 @@ func (m MockPeerManager) ENR() *enr.Record {
return m.Enr return m.Enr
} }
// NodeID .
func (m MockPeerManager) NodeID() enode.ID {
return [32]byte{}
}
// DiscoveryAddresses . // DiscoveryAddresses .
func (m MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) { func (m MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
if m.FailDiscoveryAddr { if m.FailDiscoveryAddr {
@@ -47,13 +53,13 @@ func (m MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return m.DiscoveryAddr, nil return m.DiscoveryAddr, nil
} }
// RefreshENR . // RefreshPersistentSubnets .
func (_ MockPeerManager) RefreshENR() {} func (MockPeerManager) RefreshPersistentSubnets() {}
// FindPeersWithSubnet . // FindPeersWithSubnet .
func (_ MockPeerManager) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) { func (MockPeerManager) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return true, nil return true, nil
} }
// AddPingMethod . // AddPingMethod .
func (_ MockPeerManager) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {} func (MockPeerManager) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {}

View File

@@ -10,6 +10,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr" "github.com/ethereum/go-ethereum/p2p/enr"
pubsub "github.com/libp2p/go-libp2p-pubsub" pubsub "github.com/libp2p/go-libp2p-pubsub"
core "github.com/libp2p/go-libp2p/core" core "github.com/libp2p/go-libp2p/core"
@@ -22,9 +23,11 @@ import (
swarmt "github.com/libp2p/go-libp2p/p2p/net/swarm/testing" swarmt "github.com/libp2p/go-libp2p/p2p/net/swarm/testing"
"github.com/multiformats/go-multiaddr" "github.com/multiformats/go-multiaddr"
ssz "github.com/prysmaticlabs/fastssz" ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
"github.com/prysmaticlabs/prysm/v5/config/params"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/metadata" "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/metadata"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
@@ -33,8 +36,11 @@ import (
// We have to declare this again here to prevent a circular dependency // We have to declare this again here to prevent a circular dependency
// with the main p2p package. // with the main p2p package.
const metatadataV1Topic = "/eth2/beacon_chain/req/metadata/1" const (
const metatadataV2Topic = "/eth2/beacon_chain/req/metadata/2" metadataV1Topic = "/eth2/beacon_chain/req/metadata/1"
metadataV2Topic = "/eth2/beacon_chain/req/metadata/2"
metadataV3Topic = "/eth2/beacon_chain/req/metadata/3"
)
// TestP2P represents a p2p implementation that can be used for testing. // TestP2P represents a p2p implementation that can be used for testing.
type TestP2P struct { type TestP2P struct {
@@ -50,9 +56,10 @@ type TestP2P struct {
} }
// NewTestP2P initializes a new p2p test service. // NewTestP2P initializes a new p2p test service.
func NewTestP2P(t *testing.T) *TestP2P { func NewTestP2P(t *testing.T, opts ...swarmt.Option) *TestP2P {
opts = append(opts, swarmt.OptDisableQUIC)
ctx := context.Background() ctx := context.Background()
h := bhost.NewBlankHost(swarmt.GenSwarm(t, swarmt.OptDisableQUIC)) h := bhost.NewBlankHost(swarmt.GenSwarm(t, opts...))
ps, err := pubsub.NewFloodSub(ctx, h, ps, err := pubsub.NewFloodSub(ctx, h,
pubsub.WithMessageSigning(false), pubsub.WithMessageSigning(false),
pubsub.WithStrictSignatureVerification(false), pubsub.WithStrictSignatureVerification(false),
@@ -183,6 +190,12 @@ func (p *TestP2P) BroadcastBlob(context.Context, uint64, *ethpb.BlobSidecar) err
return nil return nil
} }
// BroadcastDataColumn broadcasts a data column for mock.
func (p *TestP2P) BroadcastDataColumn(context.Context, uint64, *ethpb.DataColumnSidecar) error {
p.BroadcastCalled.Store(true)
return nil
}
// SetStreamHandler for RPC. // SetStreamHandler for RPC.
func (p *TestP2P) SetStreamHandler(topic string, handler network.StreamHandler) { func (p *TestP2P) SetStreamHandler(topic string, handler network.StreamHandler) {
p.BHost.SetStreamHandler(protocol.ID(topic), handler) p.BHost.SetStreamHandler(protocol.ID(topic), handler)
@@ -232,7 +245,7 @@ func (p *TestP2P) LeaveTopic(topic string) error {
} }
// Encoding returns ssz encoding. // Encoding returns ssz encoding.
func (_ *TestP2P) Encoding() encoder.NetworkEncoding { func (*TestP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{} return &encoder.SszNetworkEncoder{}
} }
@@ -259,12 +272,17 @@ func (p *TestP2P) Host() host.Host {
} }
// ENR returns the enr of the local peer. // ENR returns the enr of the local peer.
func (_ *TestP2P) ENR() *enr.Record { func (*TestP2P) ENR() *enr.Record {
return new(enr.Record) return new(enr.Record)
} }
// NodeID returns the node id of the local peer.
func (*TestP2P) NodeID() enode.ID {
return [32]byte{}
}
// DiscoveryAddresses -- // DiscoveryAddresses --
func (_ *TestP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) { func (*TestP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil return nil, nil
} }
@@ -310,6 +328,8 @@ func (p *TestP2P) AddDisconnectionHandler(f func(ctx context.Context, id peer.ID
// Send a message to a specific peer. // Send a message to a specific peer.
func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid peer.ID) (network.Stream, error) { func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid peer.ID) (network.Stream, error) {
metadataTopics := map[string]bool{metadataV1Topic: true, metadataV2Topic: true, metadataV3Topic: true}
t := topic t := topic
if t == "" { if t == "" {
return nil, fmt.Errorf("protocol doesn't exist for proto message: %v", msg) return nil, fmt.Errorf("protocol doesn't exist for proto message: %v", msg)
@@ -319,7 +339,7 @@ func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid p
return nil, err return nil, err
} }
if topic != metatadataV1Topic && topic != metatadataV2Topic { if !metadataTopics[topic] {
castedMsg, ok := msg.(ssz.Marshaler) castedMsg, ok := msg.(ssz.Marshaler)
if !ok { if !ok {
p.t.Fatalf("%T doesn't support ssz marshaler", msg) p.t.Fatalf("%T doesn't support ssz marshaler", msg)
@@ -346,7 +366,7 @@ func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid p
} }
// Started always returns true. // Started always returns true.
func (_ *TestP2P) Started() bool { func (*TestP2P) Started() bool {
return true return true
} }
@@ -356,12 +376,12 @@ func (p *TestP2P) Peers() *peers.Status {
} }
// FindPeersWithSubnet mocks the p2p func. // FindPeersWithSubnet mocks the p2p func.
func (_ *TestP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) { func (*TestP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil return false, nil
} }
// RefreshENR mocks the p2p func. // RefreshPersistentSubnets mocks the p2p func.
func (_ *TestP2P) RefreshENR() {} func (*TestP2P) RefreshPersistentSubnets() {}
// ForkDigest mocks the p2p func. // ForkDigest mocks the p2p func.
func (p *TestP2P) ForkDigest() ([4]byte, error) { func (p *TestP2P) ForkDigest() ([4]byte, error) {
@@ -379,31 +399,54 @@ func (p *TestP2P) MetadataSeq() uint64 {
} }
// AddPingMethod mocks the p2p func. // AddPingMethod mocks the p2p func.
func (_ *TestP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) { func (*TestP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
// no-op // no-op
} }
// InterceptPeerDial . // InterceptPeerDial .
func (_ *TestP2P) InterceptPeerDial(peer.ID) (allow bool) { func (*TestP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true return true
} }
// InterceptAddrDial . // InterceptAddrDial .
func (_ *TestP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) { func (*TestP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true return true
} }
// InterceptAccept . // InterceptAccept .
func (_ *TestP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) { func (*TestP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true return true
} }
// InterceptSecured . // InterceptSecured .
func (_ *TestP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) { func (*TestP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true return true
} }
// InterceptUpgraded . // InterceptUpgraded .
func (_ *TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) { func (*TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0 return true, 0
} }
func (s *TestP2P) CustodyCountFromRemotePeer(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of subnets.
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Retrieve the ENR of the peer.
record, err := s.peers.ENR(pid)
if err != nil {
return custodyRequirement
}
// Retrieve the custody subnets count from the ENR.
custodyCount, err := peerdas.CustodyCountFromRecord(record)
if err != nil {
return custodyRequirement
}
return custodyCount
}
func (*TestP2P) GetValidCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}

View File

@@ -30,6 +30,9 @@ const (
GossipBlsToExecutionChangeMessage = "bls_to_execution_change" GossipBlsToExecutionChangeMessage = "bls_to_execution_change"
// GossipBlobSidecarMessage is the name for the blob sidecar message type. // GossipBlobSidecarMessage is the name for the blob sidecar message type.
GossipBlobSidecarMessage = "blob_sidecar" GossipBlobSidecarMessage = "blob_sidecar"
// GossipDataColumnSidecarMessage is the name for the data column sidecar message type.
GossipDataColumnSidecarMessage = "data_column_sidecar"
// Topic Formats // Topic Formats
// //
// AttestationSubnetTopicFormat is the topic format for the attestation subnet. // AttestationSubnetTopicFormat is the topic format for the attestation subnet.
@@ -52,4 +55,6 @@ const (
BlsToExecutionChangeSubnetTopicFormat = GossipProtocolAndDigest + GossipBlsToExecutionChangeMessage BlsToExecutionChangeSubnetTopicFormat = GossipProtocolAndDigest + GossipBlsToExecutionChangeMessage
// BlobSubnetTopicFormat is the topic format for the blob subnet. // BlobSubnetTopicFormat is the topic format for the blob subnet.
BlobSubnetTopicFormat = GossipProtocolAndDigest + GossipBlobSidecarMessage + "_%d" BlobSubnetTopicFormat = GossipProtocolAndDigest + GossipBlobSidecarMessage + "_%d"
// DataColumnSubnetTopicFormat is the topic format for the data column subnet.
DataColumnSubnetTopicFormat = GossipProtocolAndDigest + GossipDataColumnSidecarMessage + "_%d"
) )
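A sketch of how the new format is expected to be used, assuming GossipProtocolAndDigest carries the fork-digest format verb as with the other topic formats and that fmt is imported; the digest value is a placeholder:

digest := [4]byte{0x01, 0x02, 0x03, 0x04}
subnet := uint64(3)
topic := fmt.Sprintf(DataColumnSubnetTopicFormat, digest, subnet)
// topic is something like "/eth2/01020304/data_column_sidecar_3" before the
// encoder suffix is appended.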

View File

@@ -87,10 +87,10 @@ func InitializeDataMaps() {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
}, },
bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() (metadata.Metadata, error) { bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil return wrapper.WrappedMetadataV2(&ethpb.MetaDataV2{}), nil
}, },
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (metadata.Metadata, error) { bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil return wrapper.WrappedMetadataV2(&ethpb.MetaDataV2{}), nil
}, },
} }

View File

@@ -9,10 +9,15 @@ var (
ErrInvalidSequenceNum = errors.New("invalid sequence number provided") ErrInvalidSequenceNum = errors.New("invalid sequence number provided")
ErrGeneric = errors.New("internal service error") ErrGeneric = errors.New("internal service error")
ErrRateLimited = errors.New("rate limited") ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded") ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count") ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob slot < minimum_request_epoch") ErrBlobLTMinRequest = errors.New("blob epoch < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrDataColumnLTMinRequest = errors.New("data column epoch < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrMaxDataColumnReqExceeded = errors.New("requested more than MAX_REQUEST_DATA_COLUMN_SIDECARS")
ErrResourceUnavailable = errors.New("resource requested unavailable") ErrResourceUnavailable = errors.New("resource requested unavailable")
ErrInvalidColumnIndex = errors.New("invalid column index requested")
) )
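As a hedged sketch, a by-root handler would be expected to enforce the new limit roughly as follows; the handler itself and the requestedIds variable are hypothetical, while the error value and the config field are taken from this change set:

// requestedIds is the decoded DataColumnSidecarsByRootReq (hypothetical variable).
if uint64(len(requestedIds)) > params.BeaconConfig().MaxRequestDataColumnSidecars {
	return ErrMaxDataColumnReqExceeded
}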

View File

@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz" ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
) )
@@ -183,31 +184,118 @@ func (b *BlobSidecarsByRootReq) UnmarshalSSZ(buf []byte) error {
return nil return nil
} }
var _ sort.Interface = BlobSidecarsByRootReq{} var _ sort.Interface = (*BlobSidecarsByRootReq)(nil)
// Less reports whether the element with index i must sort before the element with index j. // Less reports whether the element with index i must sort before the element with index j.
// BlobIdentifier will be sorted in lexicographic order by root, with Blob Index as tiebreaker for a given root. // BlobIdentifier will be sorted in lexicographic order by root, with Blob Index as tiebreaker for a given root.
func (s BlobSidecarsByRootReq) Less(i, j int) bool { func (s *BlobSidecarsByRootReq) Less(i, j int) bool {
rootCmp := bytes.Compare(s[i].BlockRoot, s[j].BlockRoot) rootCmp := bytes.Compare((*s)[i].BlockRoot, (*s)[j].BlockRoot)
if rootCmp != 0 { if rootCmp != 0 {
// They aren't equal; return true if i < j, false if i > j. // They aren't equal; return true if i < j, false if i > j.
return rootCmp < 0 return rootCmp < 0
} }
// They are equal; blob index is the tie breaker. // They are equal; blob index is the tie breaker.
return s[i].Index < s[j].Index return (*s)[i].Index < (*s)[j].Index
} }
// Swap swaps the elements with indexes i and j. // Swap swaps the elements with indexes i and j.
func (s BlobSidecarsByRootReq) Swap(i, j int) { func (s *BlobSidecarsByRootReq) Swap(i, j int) {
s[i], s[j] = s[j], s[i] (*s)[i], (*s)[j] = (*s)[j], (*s)[i]
} }
// Len is the number of elements in the collection. // Len is the number of elements in the collection.
func (s BlobSidecarsByRootReq) Len() int { func (s *BlobSidecarsByRootReq) Len() int {
return len(s) return len(*s)
}
// ===================================
// DataColumnSidecarsByRootReq section
// ===================================
var _ ssz.Marshaler = (*DataColumnSidecarsByRootReq)(nil)
var _ ssz.Unmarshaler = (*DataColumnSidecarsByRootReq)(nil)
var _ sort.Interface = (*DataColumnSidecarsByRootReq)(nil)
// DataColumnSidecarsByRootReq is used to specify a list of data column targets (root+index) in a DataColumnSidecarsByRoot RPC request.
type DataColumnSidecarsByRootReq []*eth.DataColumnIdentifier
// DataColumnIdentifier is a fixed-size value, so its serialized size can be computed once at start time (see init below).
var dataColumnIdSize int
// UnmarshalSSZ implements ssz.Unmarshaler. It unmarshals the provided bytes buffer into the DataColumnSidecarsByRootReq value.
func (d *DataColumnSidecarsByRootReq) UnmarshalSSZ(buf []byte) error {
bufLen := len(buf)
maxLen := int(params.BeaconConfig().MaxRequestDataColumnSidecars) * dataColumnIdSize
if bufLen > maxLen {
return errors.Errorf("expected buffer with length of up to %d but received length %d", maxLen, bufLen)
}
if bufLen%dataColumnIdSize != 0 {
return errors.Wrapf(ssz.ErrIncorrectByteSize, "size=%d", bufLen)
}
count := bufLen / dataColumnIdSize
*d = make([]*eth.DataColumnIdentifier, count)
for i := 0; i < count; i++ {
id := &eth.DataColumnIdentifier{}
err := id.UnmarshalSSZ(buf[i*dataColumnIdSize : (i+1)*dataColumnIdSize])
if err != nil {
return err
}
(*d)[i] = id
}
return nil
}
// MarshalSSZ implements ssz.Marshaler. It serializes the DataColumnSidecarsByRootReq value to a byte slice.
func (d *DataColumnSidecarsByRootReq) MarshalSSZ() ([]byte, error) {
buf := make([]byte, d.SizeSSZ())
for i, id := range *d {
bytes, err := id.MarshalSSZ()
if err != nil {
return nil, err
}
copy(buf[i*dataColumnIdSize:(i+1)*dataColumnIdSize], bytes)
}
return buf, nil
}
// MarshalSSZTo implements ssz.Marshaler. It appends the serialized DataColumnSidecarsByRootReq value to the provided byte slice.
func (d *DataColumnSidecarsByRootReq) MarshalSSZTo(dst []byte) ([]byte, error) {
mobj, err := d.MarshalSSZ()
if err != nil {
return nil, err
}
return append(dst, mobj...), nil
}
// SizeSSZ implements ssz.Marshaler. It returns the size of the serialized representation.
func (d *DataColumnSidecarsByRootReq) SizeSSZ() int {
return len(*d) * dataColumnIdSize
}
// Len implements sort.Interface. It returns the number of elements in the collection.
func (d *DataColumnSidecarsByRootReq) Len() int {
return len(*d)
}
// Less implements sort.Interface. It reports whether the element with index i must sort before the element with index j.
func (d *DataColumnSidecarsByRootReq) Less(i, j int) bool {
rootCmp := bytes.Compare((*d)[i].BlockRoot, (*d)[j].BlockRoot)
if rootCmp != 0 {
return rootCmp < 0
}
return (*d)[i].ColumnIndex < (*d)[j].ColumnIndex
}
// Swap implements sort.Interface. It swaps the elements with indexes i and j.
func (d *DataColumnSidecarsByRootReq) Swap(i, j int) {
(*d)[i], (*d)[j] = (*d)[j], (*d)[i]
} }
func init() { func init() {
sizer := &eth.BlobIdentifier{} blobSizer := &eth.BlobIdentifier{}
blobIdSize = sizer.SizeSSZ() blobIdSize = blobSizer.SizeSSZ()
dataColumnSizer := &eth.DataColumnIdentifier{}
dataColumnIdSize = dataColumnSizer.SizeSSZ()
} }
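A minimal usage sketch for the new request type; the roots are placeholders and the sort and bytesutil imports are assumed:

req := DataColumnSidecarsByRootReq{
	{BlockRoot: bytesutil.PadTo([]byte{2}, 32), ColumnIndex: 1},
	{BlockRoot: bytesutil.PadTo([]byte{1}, 32), ColumnIndex: 4},
}
sort.Sort(&req) // lexicographic by root, with column index as the tiebreaker
buf, err := req.MarshalSSZ()
if err != nil {
	// handle the error
}
var decoded DataColumnSidecarsByRootReq
if err := decoded.UnmarshalSSZ(buf); err != nil {
	// handle the error
}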

View File

@@ -5,6 +5,7 @@ import (
"testing" "testing"
ssz "github.com/prysmaticlabs/fastssz" ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives" "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil" "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -194,3 +195,136 @@ func hexDecodeOrDie(t *testing.T, str string) []byte {
require.NoError(t, err) require.NoError(t, err)
return decoded return decoded
} }
// =====================================
// DataColumnSidecarsByRootReq section
// =====================================
func generateDataColumnIdentifiers(n int) []*eth.DataColumnIdentifier {
r := make([]*eth.DataColumnIdentifier, n)
for i := 0; i < n; i++ {
r[i] = &eth.DataColumnIdentifier{
BlockRoot: bytesutil.PadTo([]byte{byte(i)}, 32),
ColumnIndex: uint64(i),
}
}
return r
}
func TestDataColumnSidecarsByRootReq_MarshalUnmarshal(t *testing.T) {
cases := []struct {
name string
ids []*eth.DataColumnIdentifier
marshalErr error
unmarshalErr string
unmarshalMod func([]byte) []byte
}{
{
name: "empty list",
},
{
name: "single item list",
ids: generateDataColumnIdentifiers(1),
},
{
name: "10 item list",
ids: generateDataColumnIdentifiers(10),
},
{
name: "wonky unmarshal size",
ids: generateDataColumnIdentifiers(10),
unmarshalMod: func(in []byte) []byte {
in = append(in, byte(0))
return in
},
unmarshalErr: ssz.ErrIncorrectByteSize.Error(),
},
{
name: "size too big",
ids: generateDataColumnIdentifiers(1),
unmarshalMod: func(in []byte) []byte {
maxLen := params.BeaconConfig().MaxRequestDataColumnSidecars * uint64(dataColumnIdSize)
add := make([]byte, maxLen)
in = append(in, add...)
return in
},
unmarshalErr: "expected buffer with length of up to",
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
req := DataColumnSidecarsByRootReq(c.ids)
bytes, err := req.MarshalSSZ()
if c.marshalErr != nil {
require.ErrorIs(t, err, c.marshalErr)
return
}
require.NoError(t, err)
if c.unmarshalMod != nil {
bytes = c.unmarshalMod(bytes)
}
got := &DataColumnSidecarsByRootReq{}
err = got.UnmarshalSSZ(bytes)
if c.unmarshalErr != "" {
require.ErrorContains(t, c.unmarshalErr, err)
return
}
require.NoError(t, err)
for i, id := range *got {
require.DeepEqual(t, c.ids[i], id)
}
})
}
// Test MarshalSSZTo
req := DataColumnSidecarsByRootReq(generateDataColumnIdentifiers(10))
buf := make([]byte, 0)
buf, err := req.MarshalSSZTo(buf)
require.NoError(t, err)
require.Equal(t, len(buf), int(req.SizeSSZ()))
var unmarshalled DataColumnSidecarsByRootReq
err = unmarshalled.UnmarshalSSZ(buf)
require.NoError(t, err)
require.DeepEqual(t, req, unmarshalled)
}
func TestDataColumnSidecarsByRootReq_Sort(t *testing.T) {
ids := []*eth.DataColumnIdentifier{
{
BlockRoot: bytesutil.PadTo([]byte{3}, 32),
ColumnIndex: 0,
},
{
BlockRoot: bytesutil.PadTo([]byte{2}, 32),
ColumnIndex: 2,
},
{
BlockRoot: bytesutil.PadTo([]byte{2}, 32),
ColumnIndex: 1,
},
{
BlockRoot: bytesutil.PadTo([]byte{1}, 32),
ColumnIndex: 2,
},
{
BlockRoot: bytesutil.PadTo([]byte{0}, 32),
ColumnIndex: 3,
},
}
req := DataColumnSidecarsByRootReq(ids)
require.Equal(t, true, req.Less(4, 3))
require.Equal(t, true, req.Less(3, 2))
require.Equal(t, true, req.Less(2, 1))
require.Equal(t, true, req.Less(1, 0))
require.Equal(t, 5, req.Len())
ids = []*eth.DataColumnIdentifier{
{
BlockRoot: bytesutil.PadTo([]byte{0}, 32),
ColumnIndex: 3,
},
}
req = DataColumnSidecarsByRootReq(ids)
require.Equal(t, 1, req.Len())
}

View File

@@ -12,10 +12,15 @@ import (
"path" "path"
"time" "time"
"github.com/btcsuite/btcd/btcec/v2"
gCrypto "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr" "github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto" "github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield" "github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper" "github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa" ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/io/file" "github.com/prysmaticlabs/prysm/v5/io/file"
@@ -62,6 +67,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
} }
if defaultKeysExist { if defaultKeysExist {
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
return privKeyFromFile(defaultKeyPath) return privKeyFromFile(defaultKeyPath)
} }
@@ -71,8 +77,8 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err return nil, err
} }
// If the StaticPeerID flag is not set, return the private key. // If the StaticPeerID flag is not set and if peerDAS is not enabled, return the private key.
if !cfg.StaticPeerID { if !(cfg.StaticPeerID || params.PeerDASEnabled()) {
return ecdsaprysm.ConvertFromInterfacePrivKey(priv) return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
} }
@@ -89,7 +95,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err return nil, err
} }
log.Info("Wrote network key to file") log.WithField("path", defaultKeyPath).Info("Wrote network key to file")
// Read the key from the defaultKeyPath file just written // Read the key from the defaultKeyPath file just written
// for the strongest guarantee that the next start will be the same as this one. // for the strongest guarantee that the next start will be the same as this one.
return privKeyFromFile(defaultKeyPath) return privKeyFromFile(defaultKeyPath)
@@ -173,3 +179,23 @@ func verifyConnectivity(addr string, port uint, protocol string) {
} }
} }
} }
func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
// Retrieve the peer's public key in its libp2p crypto form.
pubkeyObjCrypto, err := pid.ExtractPublicKey()
if err != nil {
return [32]byte{}, errors.Wrap(err, "extract public key")
}
// Extract the compressed byte representation of the public key.
compressedPubKeyBytes, err := pubkeyObjCrypto.Raw()
if err != nil {
return [32]byte{}, errors.Wrap(err, "public key raw")
}
// Parse the compressed bytes into a secp256k1 public key.
pubKeyObjSecp256k1, err := btcec.ParsePubKey(compressedPubKeyBytes)
if err != nil {
return [32]byte{}, errors.Wrap(err, "parse public key")
}
newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
return enode.PubkeyToIDV4(newPubkey), nil
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/assert" "github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require" "github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -64,3 +65,19 @@ func TestSerializeENR(t *testing.T) {
assert.ErrorContains(t, "could not serialize nil record", err) assert.ErrorContains(t, "could not serialize nil record", err)
}) })
} }
func TestConvertPeerIDToNodeID(t *testing.T) {
const (
peerIDStr = "16Uiu2HAmRrhnqEfybLYimCiAYer2AtZKDGamQrL1VwRCyeh2YiFc"
expectedNodeIDStr = "eed26c5d2425ab95f57246a5dca87317c41cacee4bcafe8bbe57e5965527c290"
)
peerID, err := peer.Decode(peerIDStr)
require.NoError(t, err)
actualNodeID, err := ConvertPeerIDToNodeID(peerID)
require.NoError(t, err)
actualNodeIDStr := actualNodeID.String()
require.Equal(t, expectedNodeIDStr, actualNodeIDStr)
}

View File

@@ -79,6 +79,7 @@ func TestGetSpec(t *testing.T) {
config.DenebForkEpoch = 105 config.DenebForkEpoch = 105
config.ElectraForkVersion = []byte("ElectraForkVersion") config.ElectraForkVersion = []byte("ElectraForkVersion")
config.ElectraForkEpoch = 107 config.ElectraForkEpoch = 107
config.Eip7594ForkEpoch = 109
config.BLSWithdrawalPrefixByte = byte('b') config.BLSWithdrawalPrefixByte = byte('b')
config.ETH1AddressWithdrawalPrefixByte = byte('c') config.ETH1AddressWithdrawalPrefixByte = byte('c')
config.GenesisDelay = 24 config.GenesisDelay = 24
@@ -192,7 +193,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{}) data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok) require.Equal(t, true, ok)
assert.Equal(t, 155, len(data)) assert.Equal(t, 156, len(data))
for k, v := range data { for k, v := range data {
t.Run(k, func(t *testing.T) { t.Run(k, func(t *testing.T) {
switch k { switch k {
@@ -270,6 +271,8 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x"+hex.EncodeToString([]byte("ElectraForkVersion")), v) assert.Equal(t, "0x"+hex.EncodeToString([]byte("ElectraForkVersion")), v)
case "ELECTRA_FORK_EPOCH": case "ELECTRA_FORK_EPOCH":
assert.Equal(t, "107", v) assert.Equal(t, "107", v)
case "EIP7594_FORK_EPOCH":
assert.Equal(t, "109", v)
case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY": case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY":
assert.Equal(t, "1000", v) assert.Equal(t, "1000", v)
case "BLS_WITHDRAWAL_PREFIX": case "BLS_WITHDRAWAL_PREFIX":

View File

@@ -105,6 +105,8 @@ func (ds *Server) getPeer(pid peer.ID) (*ethpb.DebugPeerResponse, error) {
peerInfo.MetadataV0 = metadata.MetadataObjV0() peerInfo.MetadataV0 = metadata.MetadataObjV0()
case metadata.MetadataObjV1() != nil: case metadata.MetadataObjV1() != nil:
peerInfo.MetadataV1 = metadata.MetadataObjV1() peerInfo.MetadataV1 = metadata.MetadataObjV1()
case metadata.MetadataObjV2() != nil:
peerInfo.MetadataV2 = metadata.MetadataObjV2()
} }
} }
addresses := peerStore.Addrs(pid) addresses := peerStore.Addrs(pid)

View File

@@ -1,3 +1,5 @@
# gazelle:ignore
load("@prysm//tools/go:def.bzl", "go_library", "go_test") load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library( go_library(
@@ -36,6 +38,7 @@ go_library(
"//api/client/builder:go_default_library", "//api/client/builder:go_default_library",
"//async/event:go_default_library", "//async/event:go_default_library",
"//beacon-chain/blockchain:go_default_library", "//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/builder:go_default_library", "//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library", "//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library", "//beacon-chain/cache/depositsnapshot:go_default_library",
@@ -45,6 +48,7 @@ go_library(
"//beacon-chain/core/feed/operation:go_default_library", "//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library", "//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library", "//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library", "//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library", "//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library", "//beacon-chain/core/transition:go_default_library",
@@ -176,7 +180,6 @@ common_deps = [
"@org_golang_google_protobuf//types/known/emptypb:go_default_library", "@org_golang_google_protobuf//types/known/emptypb:go_default_library",
] ]
# gazelle:ignore
go_test( go_test(
name = "go_default_test", name = "go_default_test",
timeout = "moderate", timeout = "moderate",

View File

@@ -13,15 +13,20 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
builderapi "github.com/prysmaticlabs/prysm/v5/api/client/builder" builderapi "github.com/prysmaticlabs/prysm/v5/api/client/builder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain" "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/builder" "github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache" "github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
blockfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/block" blockfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/block"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/operation" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/operation"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/kv" "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state" "github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks" "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces" "github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -261,7 +266,13 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
} }
// ProposeBeaconBlock handles the proposal of beacon blocks. // ProposeBeaconBlock handles the proposal of beacon blocks.
// TODO: Add tests
func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) { func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
var (
blobSidecars []*ethpb.BlobSidecar
dataColumnSideCars []*ethpb.DataColumnSidecar
)
ctx, span := trace.StartSpan(ctx, "ProposerServer.ProposeBeaconBlock") ctx, span := trace.StartSpan(ctx, "ProposerServer.ProposeBeaconBlock")
defer span.End() defer span.End()
@@ -273,15 +284,18 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
if err != nil { if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "%s: %v", "decode block failed", err) return nil, status.Errorf(codes.InvalidArgument, "%s: %v", "decode block failed", err)
} }
isPeerDASEnabled := coreTime.PeerDASIsActive(block.Block().Slot())
var sidecars []*ethpb.BlobSidecar
if block.IsBlinded() { if block.IsBlinded() {
block, sidecars, err = vs.handleBlindedBlock(ctx, block) block, blobSidecars, dataColumnSideCars, err = vs.handleBlindedBlock(ctx, block, isPeerDASEnabled)
if err != nil {
return nil, status.Errorf(codes.Internal, "%s: %v", "handle blinded block", err)
}
} else { } else {
sidecars, err = vs.handleUnblindedBlock(block, req) blobSidecars, dataColumnSideCars, err = handleUnblindedBlock(block, req, isPeerDASEnabled)
} if err != nil {
if err != nil { return nil, status.Errorf(codes.Internal, "%s: %v", "handle unblinded block", err)
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err) }
} }
root, err := block.Block().HashTreeRoot() root, err := block.Block().HashTreeRoot()
@@ -302,8 +316,14 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
errChan <- nil errChan <- nil
}() }()
if err := vs.broadcastAndReceiveBlobs(ctx, sidecars, root); err != nil { if isPeerDASEnabled {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive blobs: %v", err) if err := vs.broadcastAndReceiveDataColumns(ctx, dataColumnSideCars, root); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive data columns: %v", err)
}
} else {
if err := vs.broadcastAndReceiveBlobs(ctx, blobSidecars, root); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive blobs: %v", err)
}
} }
wg.Wait() wg.Wait()
@@ -315,47 +335,83 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
} }
// handleBlindedBlock processes blinded beacon blocks. // handleBlindedBlock processes blinded beacon blocks.
func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.SignedBeaconBlock) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, error) { func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.SignedBeaconBlock, isPeerDASEnabled bool) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, []*ethpb.DataColumnSidecar, error) {
if block.Version() < version.Bellatrix { if block.Version() < version.Bellatrix {
return nil, nil, errors.New("pre-Bellatrix blinded block") return nil, nil, nil, errors.New("pre-Bellatrix blinded block")
} }
if vs.BlockBuilder == nil || !vs.BlockBuilder.Configured() { if vs.BlockBuilder == nil || !vs.BlockBuilder.Configured() {
return nil, nil, errors.New("unconfigured block builder") return nil, nil, nil, errors.New("unconfigured block builder")
} }
copiedBlock, err := block.Copy() copiedBlock, err := block.Copy()
if err != nil { if err != nil {
return nil, nil, err return nil, nil, nil, errors.Wrap(err, "block copy")
} }
payload, bundle, err := vs.BlockBuilder.SubmitBlindedBlock(ctx, block) payload, bundle, err := vs.BlockBuilder.SubmitBlindedBlock(ctx, block)
if err != nil { if err != nil {
return nil, nil, errors.Wrap(err, "submit blinded block failed") return nil, nil, nil, errors.Wrap(err, "submit blinded block")
} }
if err := copiedBlock.Unblind(payload); err != nil { if err := copiedBlock.Unblind(payload); err != nil {
return nil, nil, errors.Wrap(err, "unblind failed") return nil, nil, nil, errors.Wrap(err, "unblind")
} }
sidecars, err := unblindBlobsSidecars(copiedBlock, bundle) if isPeerDASEnabled {
dataColumnSideCars, err := unblindDataColumnsSidecars(copiedBlock, bundle)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "unblind data columns sidecars")
}
return copiedBlock, nil, dataColumnSideCars, nil
}
blobSidecars, err := unblindBlobsSidecars(copiedBlock, bundle)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "unblind blobs sidecars")
}
return copiedBlock, blobSidecars, nil, nil
}
// handleUnblindedBlock processes unblinded beacon blocks.
func handleUnblindedBlock(block interfaces.SignedBeaconBlock, req *ethpb.GenericSignedBeaconBlock, isPeerDASEnabled bool) ([]*ethpb.BlobSidecar, []*ethpb.DataColumnSidecar, error) {
dbBlockContents := req.GetDeneb()
if dbBlockContents == nil {
return nil, nil, nil
}
if isPeerDASEnabled {
// Convert blobs from slices to array.
blobs := make([]kzg.Blob, 0, len(dbBlockContents.Blobs))
for _, blob := range dbBlockContents.Blobs {
if len(blob) != kzg.BytesPerBlob {
return nil, nil, errors.Errorf("invalid blob size. expected %d bytes, got %d bytes", kzg.BytesPerBlob, len(blob))
}
blobs = append(blobs, kzg.Blob(blob))
}
dataColumnSideCars, err := peerdas.DataColumnSidecars(block, blobs)
if err != nil {
return nil, nil, errors.Wrap(err, "data column sidecars")
}
return nil, dataColumnSideCars, nil
}
blobSidecars, err := BuildBlobSidecars(block, dbBlockContents.Blobs, dbBlockContents.KzgProofs)
if err != nil {
return nil, nil, errors.Wrap(err, "build blob sidecars")
}
return blobSidecars, nil, nil
}
// broadcastReceiveBlock broadcasts a block and handles its reception.
func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, root [fieldparams.RootLength]byte) error {
protoBlock, err := block.Proto()
if err != nil {
return errors.Wrap(err, "protobuf conversion failed")
@@ -371,7 +427,7 @@ func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.Si
}
// broadcastAndReceiveBlobs handles the broadcasting and reception of blob sidecars.
func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethpb.BlobSidecar, root [fieldparams.RootLength]byte) error {
eg, eCtx := errgroup.WithContext(ctx)
for i, sc := range sidecars {
// Copy the iteration instance to a local variable to give each go-routine its own copy to play with.
@@ -400,6 +456,53 @@ func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethp
return eg.Wait()
}
// broadcastAndReceiveDataColumns handles the broadcasting and reception of data columns sidecars.
func (vs *Server) broadcastAndReceiveDataColumns(ctx context.Context, sidecars []*ethpb.DataColumnSidecar, root [fieldparams.RootLength]byte) error {
eg, _ := errgroup.WithContext(ctx)
dataColumnsWithholdCount := features.Get().DataColumnsWithholdCount
for i, sd := range sidecars {
// Copy the iteration instance to a local variable to give each go-routine its own copy to play with.
// See https://golang.org/doc/faq#closures_and_goroutines for more details.
colIdx, sidecar := i, sd
eg.Go(func() error {
// Compute the subnet index based on the column index.
subnet := uint64(colIdx) % params.BeaconConfig().DataColumnSidecarSubnetCount
if colIdx < dataColumnsWithholdCount {
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"subnet": subnet,
"dataColumnIndex": colIdx,
}).Warning("Withholding data column")
} else {
if err := vs.P2P.BroadcastDataColumn(ctx, subnet, sidecar); err != nil {
return errors.Wrap(err, "broadcast data column")
}
}
roDataColumn, err := blocks.NewRODataColumnWithRoot(sidecar, root)
if err != nil {
return errors.Wrap(err, "new read-only data column with root")
}
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
if err := vs.DataColumnReceiver.ReceiveDataColumn(verifiedRODataColumn); err != nil {
return errors.Wrap(err, "receive data column")
}
vs.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.DataColumnSidecarReceived,
Data: &operation.DataColumnSidecarReceivedData{DataColumn: &verifiedRODataColumn},
})
return nil
})
}
return eg.Wait()
}
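The broadcast path above maps each data column sidecar to a gossip subnet by reducing its column index modulo DataColumnSidecarSubnetCount, optionally withholding the first DataColumnsWithholdCount columns for testing. A minimal standalone sketch of that mapping, assuming an illustrative subnet count of 32 rather than the real params.BeaconConfig() lookup:
// columnSubnet is a sketch of the subnet computation used when broadcasting a
// data column sidecar. The subnet count here is an assumed illustrative value;
// the real code reads DataColumnSidecarSubnetCount from the beacon config.
func columnSubnet(columnIndex uint64) uint64 {
const assumedSubnetCount = 32 // assumption, not the configured value
return columnIndex % assumedSubnetCount
}
For example, with 32 subnets, columns 0, 32, 64 and 96 would all land on subnet 0, while column 127 lands on subnet 31.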
// PrepareBeaconProposer caches and updates the fee recipient for the given proposer.
func (vs *Server) PrepareBeaconProposer(
_ context.Context, request *ethpb.PrepareBeaconProposerRequest,

View File

@@ -72,7 +72,6 @@ func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBe
}
setFeeRecipientIfBurnAddress(&val)
if ok && payloadId != [8]byte{} {
// Payload ID is cache hit. Return the cached payload ID.
var pid primitives.PayloadID

View File

@@ -941,7 +941,7 @@ func TestProposer_ProposeBlock_OK(t *testing.T) {
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
useBuilder: true,
err: "unblind blobs sidecars: commitment value doesn't match block",
},
} }

View File

@@ -67,6 +67,7 @@ type Server struct {
SyncCommitteePool synccommittee.Pool
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
MockEth1Votes bool
Eth1BlockFetcher execution.POWBlockFetcher
PendingDepositsFetcher depositsnapshot.PendingDepositsFetcher

View File

@@ -4,6 +4,8 @@ import (
"bytes" "bytes"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks" consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces" "github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil" "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -68,3 +70,29 @@ func unblindBlobsSidecars(block interfaces.SignedBeaconBlock, bundle *enginev1.B
}
return sidecars, nil
}
// TODO: Add tests
func unblindDataColumnsSidecars(block interfaces.SignedBeaconBlock, bundle *enginev1.BlobsBundle) ([]*ethpb.DataColumnSidecar, error) {
// Check if the block is at least a Deneb block.
if block.Version() < version.Deneb {
return nil, nil
}
// Convert blobs from slices to array.
blobs := make([]kzg.Blob, 0, len(bundle.Blobs))
for _, blob := range bundle.Blobs {
if len(blob) != kzg.BytesPerBlob {
return nil, errors.Errorf("invalid blob size. expected %d bytes, got %d bytes", kzg.BytesPerBlob, len(blob))
}
blobs = append(blobs, kzg.Blob(blob))
}
// Retrieve data columns from blobs.
dataColumnSidecars, err := peerdas.DataColumnSidecars(block, blobs)
if err != nil {
return nil, errors.Wrap(err, "data column sidecars")
}
return dataColumnSidecars, nil
}

View File

@@ -107,6 +107,7 @@ type Config struct {
AttestationReceiver blockchain.AttestationReceiver
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
ExecutionChainService execution.Chain
ChainStartFetcher execution.ChainStartFetcher
ExecutionChainInfoFetcher execution.ChainInfoFetcher
@@ -249,6 +250,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
P2P: s.cfg.Broadcaster,
BlockReceiver: s.cfg.BlockReceiver,
BlobReceiver: s.cfg.BlobReceiver,
DataColumnReceiver: s.cfg.DataColumnReceiver,
MockEth1Votes: s.cfg.MockEth1Votes,
Eth1BlockFetcher: s.cfg.ExecutionChainService,
PendingDepositsFetcher: s.cfg.PendingDepositFetcher,

View File

@@ -7,6 +7,8 @@ go_library(
"block_batcher.go", "block_batcher.go",
"broadcast_bls_changes.go", "broadcast_bls_changes.go",
"context.go", "context.go",
"data_columns_reconstruct.go",
"data_columns_sampling.go",
"deadlines.go", "deadlines.go",
"decode_pubsub.go", "decode_pubsub.go",
"doc.go", "doc.go",
@@ -25,6 +27,8 @@ go_library(
"rpc_blob_sidecars_by_range.go", "rpc_blob_sidecars_by_range.go",
"rpc_blob_sidecars_by_root.go", "rpc_blob_sidecars_by_root.go",
"rpc_chunked_response.go", "rpc_chunked_response.go",
"rpc_data_column_sidecars_by_range.go",
"rpc_data_column_sidecars_by_root.go",
"rpc_goodbye.go", "rpc_goodbye.go",
"rpc_metadata.go", "rpc_metadata.go",
"rpc_ping.go", "rpc_ping.go",
@@ -37,6 +41,7 @@ go_library(
"subscriber_beacon_blocks.go", "subscriber_beacon_blocks.go",
"subscriber_blob_sidecar.go", "subscriber_blob_sidecar.go",
"subscriber_bls_to_execution_change.go", "subscriber_bls_to_execution_change.go",
"subscriber_data_column_sidecar.go",
"subscriber_handlers.go", "subscriber_handlers.go",
"subscriber_sync_committee_message.go", "subscriber_sync_committee_message.go",
"subscriber_sync_contribution_proof.go", "subscriber_sync_contribution_proof.go",
@@ -48,6 +53,7 @@ go_library(
"validate_beacon_blocks.go", "validate_beacon_blocks.go",
"validate_blob.go", "validate_blob.go",
"validate_bls_to_execution_change.go", "validate_bls_to_execution_change.go",
"validate_data_column.go",
"validate_proposer_slashing.go", "validate_proposer_slashing.go",
"validate_sync_committee_message.go", "validate_sync_committee_message.go",
"validate_sync_contribution_proof.go", "validate_sync_contribution_proof.go",
@@ -64,6 +70,7 @@ go_library(
"//async/abool:go_default_library", "//async/abool:go_default_library",
"//async/event:go_default_library", "//async/event:go_default_library",
"//beacon-chain/blockchain:go_default_library", "//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/cache:go_default_library", "//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library", "//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library", "//beacon-chain/core/blocks:go_default_library",
@@ -72,7 +79,9 @@ go_library(
"//beacon-chain/core/feed/operation:go_default_library", "//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library", "//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library", "//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library", "//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library", "//beacon-chain/core/transition:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library", "//beacon-chain/core/transition/interop:go_default_library",
"//beacon-chain/db:go_default_library", "//beacon-chain/db:go_default_library",
@@ -140,6 +149,7 @@ go_library(
"@com_github_trailofbits_go_mutexasserts//:go_default_library", "@com_github_trailofbits_go_mutexasserts//:go_default_library",
"@io_opencensus_go//trace:go_default_library", "@io_opencensus_go//trace:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
], ],
) )
@@ -152,6 +162,7 @@ go_test(
"block_batcher_test.go", "block_batcher_test.go",
"broadcast_bls_changes_test.go", "broadcast_bls_changes_test.go",
"context_test.go", "context_test.go",
"data_columns_sampling_test.go",
"decode_pubsub_test.go", "decode_pubsub_test.go",
"error_test.go", "error_test.go",
"fork_watcher_test.go", "fork_watcher_test.go",
@@ -193,12 +204,14 @@ go_test(
deps = [
"//async/abool:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -236,27 +249,31 @@ go_test(
"//crypto/bls:go_default_library", "//crypto/bls:go_default_library",
"//crypto/rand:go_default_library", "//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library", "//encoding/bytesutil:go_default_library",
"//encoding/ssz/equality:go_default_library",
"//network/forks:go_default_library", "//network/forks:go_default_library",
"//proto/engine/v1:go_default_library", "//proto/engine/v1:go_default_library",
"//proto/eth/v2:go_default_library", "//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library", "//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library", "//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library", "//proto/prysm/v1alpha1/metadata:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library", "//testing/assert:go_default_library",
"//testing/require:go_default_library", "//testing/require:go_default_library",
"//testing/util:go_default_library", "//testing/util:go_default_library",
"//time:go_default_library", "//time:go_default_library",
"//time/slots:go_default_library", "//time/slots:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_d4l3k_messagediff//:go_default_library", "@com_github_d4l3k_messagediff//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library", "@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library", "@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library", "@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_golang_snappy//:go_default_library", "@com_github_golang_snappy//:go_default_library",
"@com_github_libp2p_go_libp2p//core:go_default_library", "@com_github_libp2p_go_libp2p//core:go_default_library",
"@com_github_libp2p_go_libp2p//core/crypto:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library", "@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library", "@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library", "@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/net/swarm/testing:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library", "@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library", "@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library", "@com_github_patrickmn_go_cache//:go_default_library",

View File

@@ -0,0 +1,328 @@
package sync
import (
"context"
"fmt"
"sort"
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
kzg "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
const broadCastMissingDataColumnsTimeIntoSlot = 3 * time.Second
// recoverCellsAndProofs recovers the cells and proofs from the data column sidecars.
func recoverCellsAndProofs(
dataColumnSideCars []*ethpb.DataColumnSidecar,
columnsCount int,
blockRoot [fieldparams.RootLength]byte,
) ([]kzg.CellsAndProofs, error) {
var wg errgroup.Group
if len(dataColumnSideCars) == 0 {
return nil, errors.New("no data column sidecars")
}
// Check if all columns have the same length.
blobCount := len(dataColumnSideCars[0].DataColumn)
for _, sidecar := range dataColumnSideCars {
length := len(sidecar.DataColumn)
if length != blobCount {
return nil, errors.New("columns do not have the same length")
}
}
// Recover cells and compute proofs in parallel.
recoveredCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
for blobIndex := 0; blobIndex < blobCount; blobIndex++ {
bIndex := blobIndex
wg.Go(func() error {
start := time.Now()
cellsIndices := make([]uint64, 0, columnsCount)
cells := make([]kzg.Cell, 0, columnsCount)
for _, sidecar := range dataColumnSideCars {
// Build the cell indices.
cellsIndices = append(cellsIndices, sidecar.ColumnIndex)
// Get the cell.
column := sidecar.DataColumn
cell := column[bIndex]
cells = append(cells, kzg.Cell(cell))
}
// Recover the cells and proofs for the corresponding blob
cellsAndProofs, err := kzg.RecoverCellsAndKZGProofs(cellsIndices, cells)
if err != nil {
return errors.Wrapf(err, "recover cells and KZG proofs for blob %d", bIndex)
}
recoveredCellsAndProofs[bIndex] = cellsAndProofs
log.WithFields(logrus.Fields{
"elapsed": time.Since(start),
"index": bIndex,
"root": fmt.Sprintf("%x", blockRoot),
}).Debug("Recovered cells and proofs")
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, err
}
return recoveredCellsAndProofs, nil
}
func (s *Service) reconstructDataColumns(ctx context.Context, verifiedRODataColumn blocks.VerifiedRODataColumn) error {
// Lock to prevent concurrent reconstruction.
s.dataColumsnReconstructionLock.Lock()
defer s.dataColumsnReconstructionLock.Unlock()
// Get the block root.
blockRoot := verifiedRODataColumn.BlockRoot()
// Get the columns we store.
storedDataColumns, err := s.cfg.blobStorage.ColumnIndices(blockRoot)
if err != nil {
return errors.Wrap(err, "columns indices")
}
storedColumnsCount := len(storedDataColumns)
numberOfColumns := fieldparams.NumberOfColumns
// If less than half of the columns are stored, reconstruction is not possible.
// If all columns are stored, no need to reconstruct.
if storedColumnsCount < numberOfColumns/2 || storedColumnsCount == numberOfColumns {
return nil
}
// Retrieve the custodied columns.
custodiedColumns, err := peerdas.CustodyColumns(s.cfg.p2p.NodeID(), peerdas.CustodySubnetCount())
if err != nil {
return errors.Wrap(err, "custodied columns")
}
// Load the data columns sidecars.
dataColumnSideCars := make([]*ethpb.DataColumnSidecar, 0, storedColumnsCount)
for index := range storedDataColumns {
dataColumnSidecar, err := s.cfg.blobStorage.GetColumn(blockRoot, index)
if err != nil {
return errors.Wrap(err, "get column")
}
dataColumnSideCars = append(dataColumnSideCars, dataColumnSidecar)
}
// Recover cells and proofs
recoveredCellsAndProofs, err := recoverCellsAndProofs(dataColumnSideCars, storedColumnsCount, blockRoot)
if err != nil {
return errors.Wrap(err, "recover cells and proofs")
}
// Reconstruct the data columns sidecars.
dataColumnSidecars, err := peerdas.DataColumnSidecarsForReconstruct(
verifiedRODataColumn.KzgCommitments,
verifiedRODataColumn.SignedBlockHeader,
verifiedRODataColumn.KzgCommitmentsInclusionProof,
recoveredCellsAndProofs,
)
if err != nil {
return errors.Wrap(err, "data column sidecars")
}
// Save the data columns sidecars in the database.
for _, dataColumnSidecar := range dataColumnSidecars {
shouldSave := custodiedColumns[dataColumnSidecar.ColumnIndex]
if !shouldSave {
// We do not custody this column, so we do not need to save it.
continue
}
roDataColumn, err := blocks.NewRODataColumnWithRoot(dataColumnSidecar, blockRoot)
if err != nil {
return errors.Wrap(err, "new read-only data column with root")
}
verifiedRoDataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
if err := s.cfg.blobStorage.SaveDataColumn(verifiedRoDataColumn); err != nil {
return errors.Wrap(err, "save column")
}
}
log.WithField("root", fmt.Sprintf("%x", blockRoot)).Debug("Data columns reconstructed and saved successfully")
// Schedule the broadcast.
if err := s.scheduleReconstructedDataColumnsBroadcast(ctx, blockRoot, verifiedRODataColumn); err != nil {
return errors.Wrap(err, "schedule reconstructed data columns broadcast")
}
return nil
}
func (s *Service) scheduleReconstructedDataColumnsBroadcast(
ctx context.Context,
blockRoot [fieldparams.RootLength]byte,
dataColumn blocks.VerifiedRODataColumn,
) error {
// Retrieve the slot of the block.
slot := dataColumn.Slot()
// Get the time corresponding to the start of the slot.
slotStart, err := slots.ToTime(uint64(s.cfg.chain.GenesisTime().Unix()), slot)
if err != nil {
return errors.Wrap(err, "to time")
}
// Compute when to broadcast the missing data columns.
broadcastTime := slotStart.Add(broadCastMissingDataColumnsTimeIntoSlot)
// Compute the waiting time. This could be negative. In such a case, broadcast immediately.
waitingTime := time.Until(broadcastTime)
time.AfterFunc(waitingTime, func() {
s.dataColumsnReconstructionLock.Lock()
defer s.deleteReceivedDataColumns(blockRoot)
defer s.dataColumsnReconstructionLock.Unlock()
// Get the received by gossip data columns.
receivedDataColumns := s.receivedDataColumns(blockRoot)
if receivedDataColumns == nil {
log.WithField("root", fmt.Sprintf("%x", blockRoot)).Error("No received data columns")
}
// Get the data columns we should store.
custodiedDataColumns, err := peerdas.CustodyColumns(s.cfg.p2p.NodeID(), peerdas.CustodySubnetCount())
if err != nil {
log.WithError(err).Error("Custody columns")
}
// Get the data columns we actually store.
storedDataColumns, err := s.cfg.blobStorage.ColumnIndices(blockRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%x", blockRoot)).WithError(err).Error("Columns indices")
return
}
// Compute the missing data columns (data columns we should custody but have not received via gossip).
missingColumns := make(map[uint64]bool, len(custodiedDataColumns))
for column := range custodiedDataColumns {
if ok := receivedDataColumns[column]; !ok {
missingColumns[column] = true
}
}
// Exit early if there are no missing data columns.
// This is the happy path.
if len(missingColumns) == 0 {
return
}
for column := range missingColumns {
if ok := storedDataColumns[column]; !ok {
// This column was not received nor reconstructed. This should not happen.
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%x", blockRoot),
"slot": slot,
"column": column,
}).Error("Data column not received nor reconstructed.")
continue
}
// Get the non received but reconstructed data column.
dataColumnSidecar, err := s.cfg.blobStorage.GetColumn(blockRoot, column)
if err != nil {
log.WithError(err).Error("Get column")
continue
}
// Compute the subnet for this column.
subnet := column % params.BeaconConfig().DataColumnSidecarSubnetCount
// Broadcast the missing data column.
if err := s.cfg.p2p.BroadcastDataColumn(ctx, subnet, dataColumnSidecar); err != nil {
log.WithError(err).Error("Broadcast data column")
}
}
// Get the missing data columns under sorted form.
missingColumnsList := make([]uint64, 0, len(missingColumns))
for column := range missingColumns {
missingColumnsList = append(missingColumnsList, column)
}
// Sort the missing data columns.
sort.Slice(missingColumnsList, func(i, j int) bool {
return missingColumnsList[i] < missingColumnsList[j]
})
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%x", blockRoot),
"slot": slot,
"timeIntoSlot": broadCastMissingDataColumnsTimeIntoSlot,
"columns": missingColumnsList,
}).Debug("Broadcasting not seen via gossip but reconstructed data columns.")
})
return nil
}
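The scheduling above relies on time.AfterFunc accepting a non-positive duration: if reconstruction finishes after the broadcast offset into the slot, time.Until returns a negative value and the callback fires immediately. A minimal sketch of that behaviour, assuming only the "time" import already used in this file:
// scheduleAt is a sketch of the broadcast scheduling above: it runs fn at
// slotStart plus offset, or immediately if that time has already passed,
// because time.AfterFunc fires at once for non-positive durations.
func scheduleAt(slotStart time.Time, offset time.Duration, fn func()) *time.Timer {
return time.AfterFunc(time.Until(slotStart.Add(offset)), fn)
}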
// setReceivedDataColumn marks the data column for a given root as received.
func (s *Service) setReceivedDataColumn(root [fieldparams.RootLength]byte, columnIndex uint64) {
s.receivedDataColumnsFromRootLock.Lock()
defer s.receivedDataColumnsFromRootLock.Unlock()
// Get all the received data columns for this root.
receivedDataColumns, ok := s.receivedDataColumnsFromRoot[root]
if !ok {
// Create the map for this block root if needed.
receivedDataColumns = make(map[uint64]bool, params.BeaconConfig().NumberOfColumns)
s.receivedDataColumnsFromRoot[root] = receivedDataColumns
}
// Mark the data column as received.
receivedDataColumns[columnIndex] = true
}
// receivedDataColumns returns the received data columns for a given root.
func (s *Service) receivedDataColumns(root [fieldparams.RootLength]byte) map[uint64]bool {
s.receivedDataColumnsFromRootLock.RLock()
defer s.receivedDataColumnsFromRootLock.RUnlock()
// Get all the received data columns for this root.
receivedDataColumns, ok := s.receivedDataColumnsFromRoot[root]
if !ok {
return nil
}
// Copy the received data columns.
copied := make(map[uint64]bool, len(receivedDataColumns))
for column, received := range receivedDataColumns {
copied[column] = received
}
return copied
}
// deleteReceivedDataColumns deletes the received data columns for a given root.
func (s *Service) deleteReceivedDataColumns(root [fieldparams.RootLength]byte) {
s.receivedDataColumnsFromRootLock.Lock()
defer s.receivedDataColumnsFromRootLock.Unlock()
delete(s.receivedDataColumnsFromRoot, root)
}
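Reconstruction above is gated on how many columns are already stored: it needs at least half of the extended columns and is skipped when every column is present. A minimal sketch of that predicate, assuming 128 columns as in fieldparams.NumberOfColumns:
// canAttemptReconstruction is a sketch of the gate used in reconstructDataColumns:
// at least half of the columns must be stored, and reconstruction is pointless
// when all of them already are. numberOfColumns is assumed to be 128 here.
func canAttemptReconstruction(storedColumnsCount, numberOfColumns int) bool {
return storedColumnsCount >= numberOfColumns/2 && storedColumnsCount != numberOfColumns
}
With 128 columns, storing 63 returns false, 64 through 127 return true, and 128 returns false.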

View File

@@ -0,0 +1,565 @@
package sync
import (
"context"
"fmt"
"sort"
"sync"
"time"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/prysmaticlabs/prysm/v5/async"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/crypto/rand"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
const PeerRefreshInterval = 1 * time.Minute
type roundSummary struct {
RequestedColumns []uint64
MissingColumns map[uint64]bool
}
// DataColumnSampler defines the interface for sampling data columns from peers for requested block root and samples count.
type DataColumnSampler interface {
// Run starts the data column sampling service.
Run(ctx context.Context)
}
var _ DataColumnSampler = (*dataColumnSampler1D)(nil)
// dataColumnSampler1D implements the DataColumnSampler interface for PeerDAS 1D.
type dataColumnSampler1D struct {
sync.RWMutex
p2p p2p.P2P
clock *startup.Clock
ctxMap ContextByteVersions
stateNotifier statefeed.Notifier
// nonCustodyColumns is a set of columns that are not custodied by the node.
nonCustodyColumns map[uint64]bool
// columnFromPeer maps a peer to the columns it is responsible for custody.
columnFromPeer map[peer.ID]map[uint64]bool
// peerFromColumn maps a column to the peer responsible for custody.
peerFromColumn map[uint64]map[peer.ID]bool
}
// newDataColumnSampler1D creates a new 1D data column sampler.
func newDataColumnSampler1D(
p2p p2p.P2P,
clock *startup.Clock,
ctxMap ContextByteVersions,
stateNotifier statefeed.Notifier,
) *dataColumnSampler1D {
numColumns := params.BeaconConfig().NumberOfColumns
peerFromColumn := make(map[uint64]map[peer.ID]bool, numColumns)
for i := uint64(0); i < numColumns; i++ {
peerFromColumn[i] = make(map[peer.ID]bool)
}
return &dataColumnSampler1D{
p2p: p2p,
clock: clock,
ctxMap: ctxMap,
stateNotifier: stateNotifier,
columnFromPeer: make(map[peer.ID]map[uint64]bool),
peerFromColumn: peerFromColumn,
}
}
// Run implements DataColumnSampler.
func (d *dataColumnSampler1D) Run(ctx context.Context) {
// verify if we need to run sampling or not, if not, return directly
csc := peerdas.CustodySubnetCount()
columns, err := peerdas.CustodyColumns(d.p2p.NodeID(), csc)
if err != nil {
log.WithError(err).Error("Failed to determine local custody columns")
return
}
custodyColumnsCount := uint64(len(columns))
if peerdas.CanSelfReconstruct(custodyColumnsCount) {
log.WithFields(logrus.Fields{
"custodyColumnsCount": custodyColumnsCount,
"totalColumns": params.BeaconConfig().NumberOfColumns,
}).Debug("The node custodies at least the half the data columns, no need to sample")
return
}
// initialize non custody columns.
d.nonCustodyColumns = make(map[uint64]bool)
for i := uint64(0); i < params.BeaconConfig().NumberOfColumns; i++ {
if exists := columns[i]; !exists {
d.nonCustodyColumns[i] = true
}
}
// initialize peer info first.
d.refreshPeerInfo()
// periodically refresh peer info to keep peer <-> column mapping up to date.
async.RunEvery(ctx, PeerRefreshInterval, d.refreshPeerInfo)
// start the sampling loop.
d.samplingRoutine(ctx)
}
func (d *dataColumnSampler1D) samplingRoutine(ctx context.Context) {
stateCh := make(chan *feed.Event, 1)
stateSub := d.stateNotifier.StateFeed().Subscribe(stateCh)
defer stateSub.Unsubscribe()
for {
select {
case evt := <-stateCh:
d.handleStateNotification(ctx, evt)
case err := <-stateSub.Err():
log.WithError(err).Error("DataColumnSampler1D subscription to state feed failed")
case <-ctx.Done():
log.Debug("Context canceled, exiting data column sampling loop.")
return
}
}
}
// Refresh peer information.
func (d *dataColumnSampler1D) refreshPeerInfo() {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
columnsPerSubnet := fieldparams.NumberOfColumns / dataColumnSidecarSubnetCount
d.Lock()
defer d.Unlock()
activePeers := d.p2p.Peers().Active()
d.prunePeerInfo(activePeers)
for _, pid := range activePeers {
csc := d.p2p.CustodyCountFromRemotePeer(pid)
columns, ok := d.columnFromPeer[pid]
columnsCount := uint64(len(columns))
if ok && columnsCount == csc*columnsPerSubnet {
// No change for this peer.
continue
}
nid, err := p2p.ConvertPeerIDToNodeID(pid)
if err != nil {
log.WithError(err).WithField("peerID", pid).Error("Failed to convert peer ID to node ID")
continue
}
columns, err = peerdas.CustodyColumns(nid, csc)
if err != nil {
log.WithError(err).WithField("peerID", pid).Error("Failed to determine peer custody columns")
continue
}
d.columnFromPeer[pid] = columns
for column := range columns {
d.peerFromColumn[column][pid] = true
}
}
columnWithNoPeers := make([]uint64, 0)
for column, peers := range d.peerFromColumn {
if len(peers) == 0 {
columnWithNoPeers = append(columnWithNoPeers, column)
}
}
if len(columnWithNoPeers) > 0 {
log.WithField("columnWithNoPeers", columnWithNoPeers).Warn("Some columns have no peers responsible for custody")
}
}
// prunePeerInfo prunes inactive peers from peerFromColumn and columnFromPeer.
// This should not be called outside of refreshPeerInfo without being locked.
func (d *dataColumnSampler1D) prunePeerInfo(activePeers []peer.ID) {
active := make(map[peer.ID]bool)
for _, pid := range activePeers {
active[pid] = true
}
for pid := range d.columnFromPeer {
if !active[pid] {
d.prunePeer(pid)
}
}
}
// prunePeer removes a peer from the stored peer info maps; it should be called with the lock held.
func (d *dataColumnSampler1D) prunePeer(pid peer.ID) {
delete(d.columnFromPeer, pid)
for _, peers := range d.peerFromColumn {
delete(peers, pid)
}
}
func (d *dataColumnSampler1D) handleStateNotification(ctx context.Context, event *feed.Event) {
if event.Type != statefeed.BlockProcessed {
return
}
data, ok := event.Data.(*statefeed.BlockProcessedData)
if !ok {
log.Error("Event feed data is not of type *statefeed.BlockProcessedData")
return
}
if !data.Verified {
// We only process blocks that have been verified
log.Error("Data is not verified")
return
}
if data.SignedBlock.Version() < version.Deneb {
log.Debug("Pre Deneb block, skipping data column sampling")
return
}
if !coreTime.PeerDASIsActive(data.Slot) {
// We do not trigger sampling if peerDAS is not active yet.
return
}
// Get the commitments for this block.
commitments, err := data.SignedBlock.Block().Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
return
}
// Skip if there are no commitments.
if len(commitments) == 0 {
log.Debug("No commitments in block, skipping data column sampling")
return
}
// Randomize columns for sample selection.
randomizedColumns := randomizeColumns(d.nonCustodyColumns)
samplesCount := min(params.BeaconConfig().SamplesPerSlot, uint64(len(d.nonCustodyColumns))-params.BeaconConfig().NumberOfColumns/2)
// TODO: Use the first output of `incrementalDAS` as input of the fork choice rule.
_, _, err = d.incrementalDAS(ctx, data.BlockRoot, randomizedColumns, samplesCount)
if err != nil {
log.WithError(err).Error("Failed to run incremental DAS")
}
}
// incrementalDAS samples data columns from active peers using incremental DAS.
// https://ethresear.ch/t/lossydas-lossy-incremental-and-diagonal-sampling-for-data-availability/18963#incrementaldas-dynamically-increase-the-sample-size-10
// According to https://github.com/ethereum/consensus-specs/issues/3825, we're going to select query samples exclusively from the non-custody columns.
func (d *dataColumnSampler1D) incrementalDAS(
ctx context.Context,
root [fieldparams.RootLength]byte,
columns []uint64,
sampleCount uint64,
) (bool, []roundSummary, error) {
allowedFailures := uint64(0)
firstColumnToSample, extendedSampleCount := uint64(0), peerdas.ExtendedSampleCount(sampleCount, allowedFailures)
roundSummaries := make([]roundSummary, 0, 1) // We optimistically allocate only one round summary.
start := time.Now()
for round := 1; ; /*No exit condition */ round++ {
if extendedSampleCount > uint64(len(columns)) {
// We already tried to sample all possible columns, this is the unhappy path.
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"round": round - 1,
}).Warning("Some columns are still missing after trying to sample all possible columns")
return false, roundSummaries, nil
}
// Get the columns to sample for this round.
columnsToSample := columns[firstColumnToSample:extendedSampleCount]
columnsToSampleCount := extendedSampleCount - firstColumnToSample
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"columns": columnsToSample,
"round": round,
}).Debug("Start data columns sampling")
// Sample data columns from peers in parallel.
retrievedSamples := d.sampleDataColumns(ctx, root, columnsToSample)
missingSamples := make(map[uint64]bool)
for _, column := range columnsToSample {
if !retrievedSamples[column] {
missingSamples[column] = true
}
}
roundSummaries = append(roundSummaries, roundSummary{
RequestedColumns: columnsToSample,
MissingColumns: missingSamples,
})
retrievedSampleCount := uint64(len(retrievedSamples))
if retrievedSampleCount == columnsToSampleCount {
// All columns were correctly sampled, this is the happy path.
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"neededRounds": round,
"duration": time.Since(start),
}).Debug("All columns were successfully sampled")
return true, roundSummaries, nil
}
if retrievedSampleCount > columnsToSampleCount {
// This should never happen.
return false, nil, errors.New("retrieved more columns than requested")
}
// missing columns, extend the samples.
allowedFailures += columnsToSampleCount - retrievedSampleCount
oldExtendedSampleCount := extendedSampleCount
firstColumnToSample = extendedSampleCount
extendedSampleCount = peerdas.ExtendedSampleCount(sampleCount, allowedFailures)
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"round": round,
"missingColumnsCount": allowedFailures,
"currentSampleIndex": oldExtendedSampleCount,
"nextSampleIndex": extendedSampleCount,
}).Debug("Some columns are still missing after sampling this round.")
}
}
func (d *dataColumnSampler1D) sampleDataColumns(
ctx context.Context,
root [fieldparams.RootLength]byte,
columns []uint64,
) map[uint64]bool {
// distribute samples to peer
peerToColumns := d.distributeSamplesToPeer(columns)
var (
mu sync.Mutex
wg sync.WaitGroup
)
res := make(map[uint64]bool)
sampleFromPeer := func(pid peer.ID, cols map[uint64]bool) {
defer wg.Done()
retrieved := d.sampleDataColumnsFromPeer(ctx, pid, root, cols)
mu.Lock()
for col := range retrieved {
res[col] = true
}
mu.Unlock()
}
// sample from peers in parallel
for pid, cols := range peerToColumns {
wg.Add(1)
go sampleFromPeer(pid, cols)
}
wg.Wait()
return res
}
// distributeSamplesToPeer distributes samples to peers based on the columns they are responsible for.
// Currently it randomizes peer selection for a column and does not take overall peer load balance into account. It could be improved if needed.
func (d *dataColumnSampler1D) distributeSamplesToPeer(
columns []uint64,
) map[peer.ID]map[uint64]bool {
dist := make(map[peer.ID]map[uint64]bool)
for _, col := range columns {
peers := d.peerFromColumn[col]
if len(peers) == 0 {
log.WithField("column", col).Warn("No peers responsible for custody of column")
continue
}
pid := selectRandomPeer(peers)
if _, ok := dist[pid]; !ok {
dist[pid] = make(map[uint64]bool)
}
dist[pid][col] = true
}
return dist
}
func (d *dataColumnSampler1D) sampleDataColumnsFromPeer(
ctx context.Context,
pid peer.ID,
root [fieldparams.RootLength]byte,
requestedColumns map[uint64]bool,
) map[uint64]bool {
retrievedColumns := make(map[uint64]bool)
req := make(types.DataColumnSidecarsByRootReq, 0)
for col := range requestedColumns {
req = append(req, &eth.DataColumnIdentifier{
BlockRoot: root[:],
ColumnIndex: col,
})
}
// Send the request to the peer.
roDataColumns, err := SendDataColumnSidecarByRoot(ctx, d.clock, d.p2p, pid, d.ctxMap, &req)
if err != nil {
log.WithError(err).Error("Failed to send data column sidecar by root")
return nil
}
for _, roDataColumn := range roDataColumns {
if verifyColumn(roDataColumn, root, pid, requestedColumns) {
retrievedColumns[roDataColumn.ColumnIndex] = true
}
}
if len(retrievedColumns) == len(requestedColumns) {
log.WithFields(logrus.Fields{
"peerID": pid,
"root": fmt.Sprintf("%#x", root),
"requestedColumns": sortedSliceFromMap(requestedColumns),
}).Debug("Sampled columns from peer successfully")
} else {
log.WithFields(logrus.Fields{
"peerID": pid,
"root": fmt.Sprintf("%#x", root),
"requestedColumns": sortedSliceFromMap(requestedColumns),
"retrievedColumns": sortedSliceFromMap(retrievedColumns),
}).Debug("Sampled columns from peer with some errors")
}
return retrievedColumns
}
// randomizeColumns returns the keys of the given column set in a random order.
func randomizeColumns(columns map[uint64]bool) []uint64 {
// Create a slice from columns.
randomized := make([]uint64, 0, len(columns))
for column := range columns {
randomized = append(randomized, column)
}
// Shuffle the slice.
rand.NewGenerator().Shuffle(len(randomized), func(i, j int) {
randomized[i], randomized[j] = randomized[j], randomized[i]
})
return randomized
}
// sortedSliceFromMap returns a sorted list of keys from a map.
func sortedSliceFromMap(m map[uint64]bool) []uint64 {
result := make([]uint64, 0, len(m))
for k := range m {
result = append(result, k)
}
sort.Slice(result, func(i, j int) bool {
return result[i] < result[j]
})
return result
}
// selectRandomPeer returns a random peer from the given list of peers.
func selectRandomPeer(peers map[peer.ID]bool) peer.ID {
pick := rand.NewGenerator().Uint64() % uint64(len(peers))
for k := range peers {
if pick == 0 {
return k
}
pick--
}
// This should never be reached.
return peer.ID("")
}
// verifyColumn verifies the retrieved column against the root, the index,
// the KZG inclusion and the KZG proof.
func verifyColumn(
roDataColumn blocks.RODataColumn,
root [32]byte,
pid peer.ID,
requestedColumns map[uint64]bool,
) bool {
retrievedColumn := roDataColumn.ColumnIndex
// Filter out columns with incorrect root.
actualRoot := roDataColumn.BlockRoot()
if actualRoot != root {
log.WithFields(logrus.Fields{
"peerID": pid,
"requestedRoot": fmt.Sprintf("%#x", root),
"actualRoot": fmt.Sprintf("%#x", actualRoot),
}).Debug("Retrieved root does not match requested root")
return false
}
// Filter out columns that were not requested.
if !requestedColumns[retrievedColumn] {
columnsToSampleList := sortedSliceFromMap(requestedColumns)
log.WithFields(logrus.Fields{
"peerID": pid,
"requestedColumns": columnsToSampleList,
"retrievedColumn": retrievedColumn,
}).Debug("Retrieved column was not requested")
return false
}
// Filter out columns which did not pass the KZG inclusion proof verification.
if err := blocks.VerifyKZGInclusionProofColumn(roDataColumn.DataColumnSidecar); err != nil {
log.WithFields(logrus.Fields{
"peerID": pid,
"root": fmt.Sprintf("%#x", root),
"index": retrievedColumn,
}).Debug("Failed to verify KZG inclusion proof for retrieved column")
return false
}
// Filter out columns which did not pass the KZG proof verification.
verified, err := peerdas.VerifyDataColumnSidecarKZGProofs(roDataColumn.DataColumnSidecar)
if err != nil {
log.WithFields(logrus.Fields{
"peerID": pid,
"root": fmt.Sprintf("%#x", root),
"index": retrievedColumn,
}).Debug("Error when verifying KZG proof for retrieved column")
return false
}
if !verified {
log.WithFields(logrus.Fields{
"peerID": pid,
"root": fmt.Sprintf("%#x", root),
"index": retrievedColumn,
}).Debug("Failed to verify KZG proof for retrieved column")
return false
}
return true
}
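incrementalDAS above widens its sampling window round by round: the next round starts where the previous extended window ended, and the window end is recomputed from the failures accumulated so far. A minimal sketch of that window arithmetic, with extendedSampleCount passed in as a stand-in for peerdas.ExtendedSampleCount (an assumed shape, not the real signature):
// nextWindow is a sketch of how incrementalDAS advances its sampling window.
// prevEnd is the end of the previous extended window; allowedFailures is the
// total number of missing columns observed so far.
func nextWindow(sampleCount, allowedFailures, prevEnd uint64,
extendedSampleCount func(sampleCount, allowedFailures uint64) uint64,
) (start, end uint64) {
start = prevEnd
end = extendedSampleCount(sampleCount, allowedFailures)
return start, end
}
Each round then samples columns[start:end]; once end would exceed the number of candidate columns, sampling stops and the block is reported as not fully sampled.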

View File

@@ -0,0 +1,510 @@
package sync
import (
"bytes"
"context"
"crypto/sha256"
"encoding/binary"
"testing"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
swarmt "github.com/libp2p/go-libp2p/p2p/net/swarm/testing"
"github.com/sirupsen/logrus"
kzg "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
p2ptest "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
p2pTypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestRandomizeColumns(t *testing.T) {
const count uint64 = 128
// Generate columns.
columns := make(map[uint64]bool, count)
for i := uint64(0); i < count; i++ {
columns[i] = true
}
// Randomize columns.
randomizedColumns := randomizeColumns(columns)
// Convert back to a map.
randomizedColumnsMap := make(map[uint64]bool, count)
for _, column := range randomizedColumns {
randomizedColumnsMap[column] = true
}
// Check duplicates and missing columns.
require.Equal(t, len(columns), len(randomizedColumnsMap))
// Check the values.
for column := range randomizedColumnsMap {
require.Equal(t, true, column < count)
}
}
// createAndConnectPeer creates a peer with a private key `offset` fixed.
// The peer is added and connected to `p2pService`.
// If a `RPCDataColumnSidecarsByRootTopicV1` request is made with column index `i`,
// then the peer will respond with the `dataColumnSidecars[i]` if it is not in `columnsNotToRespond`.
// (If `i >= len(dataColumnSidecars)`, then this function will panic.)
func createAndConnectPeer(
t *testing.T,
p2pService *p2ptest.TestP2P,
chainService *mock.ChainService,
dataColumnSidecars []*ethpb.DataColumnSidecar,
custodySubnetCount uint64,
columnsNotToRespond map[uint64]bool,
offset int,
) *p2ptest.TestP2P {
// Create the private key, depending on the offset.
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(offset + i)
}
privateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
// Create the peer.
peer := p2ptest.NewTestP2P(t, swarmt.OptPeerPrivateKey(privateKey))
peer.SetStreamHandler(p2p.RPCDataColumnSidecarsByRootTopicV1+"/ssz_snappy", func(stream network.Stream) {
// Decode the request.
req := new(p2pTypes.DataColumnSidecarsByRootReq)
err := peer.Encoding().DecodeWithMaxLength(stream, req)
require.NoError(t, err)
for _, identifier := range *req {
// Filter out the columns not to respond.
if columnsNotToRespond[identifier.ColumnIndex] {
continue
}
// Create the response.
resp := dataColumnSidecars[identifier.ColumnIndex]
// Send the response.
err := WriteDataColumnSidecarChunk(stream, chainService, p2pService.Encoding(), resp)
require.NoError(t, err)
}
// Close the stream.
closeStream(stream, log)
})
// Create the record and set the custody count.
enr := &enr.Record{}
enr.Set(peerdas.Csc(custodySubnetCount))
// Add the peer and connect it.
p2pService.Peers().Add(enr, peer.PeerID(), nil, network.DirOutbound)
p2pService.Peers().SetConnectionState(peer.PeerID(), peers.PeerConnected)
p2pService.Connect(peer)
return peer
}
type dataSamplerTest struct {
ctx context.Context
p2pSvc *p2ptest.TestP2P
peers []*p2ptest.TestP2P
ctxMap map[[4]byte]int
chainSvc *mock.ChainService
blockRoot [32]byte
blobs []kzg.Blob
kzgCommitments [][]byte
kzgProofs [][]byte
dataColumnSidecars []*ethpb.DataColumnSidecar
}
func setupDefaultDataColumnSamplerTest(t *testing.T) (*dataSamplerTest, *dataColumnSampler1D) {
const (
blobCount uint64 = 3
custodyRequirement uint64 = 1
)
test, sampler := setupDataColumnSamplerTest(t, blobCount)
// Custody columns: [6, 38, 70, 102]
p1 := createAndConnectPeer(t, test.p2pSvc, test.chainSvc, test.dataColumnSidecars, custodyRequirement, map[uint64]bool{}, 1)
// Custody columns: [3, 35, 67, 99]
p2 := createAndConnectPeer(t, test.p2pSvc, test.chainSvc, test.dataColumnSidecars, custodyRequirement, map[uint64]bool{}, 2)
// Custody columns: [12, 44, 76, 108]
p3 := createAndConnectPeer(t, test.p2pSvc, test.chainSvc, test.dataColumnSidecars, custodyRequirement, map[uint64]bool{}, 3)
test.peers = []*p2ptest.TestP2P{p1, p2, p3}
return test, sampler
}
func setupDataColumnSamplerTest(t *testing.T, blobCount uint64) (*dataSamplerTest, *dataColumnSampler1D) {
require.NoError(t, kzg.Start())
// Generate random blobs, commitments and inclusion proofs.
blobs := make([]kzg.Blob, blobCount)
kzgCommitments := make([][]byte, blobCount)
kzgProofs := make([][]byte, blobCount)
for i := uint64(0); i < blobCount; i++ {
blob := getRandBlob(int64(i))
kzgCommitment, kzgProof, err := generateCommitmentAndProof(&blob)
require.NoError(t, err)
blobs[i] = blob
kzgCommitments[i] = kzgCommitment[:]
kzgProofs[i] = kzgProof[:]
}
dbBlock := util.NewBeaconBlockDeneb()
dbBlock.Block.Body.BlobKzgCommitments = kzgCommitments
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
dataColumnSidecars, err := peerdas.DataColumnSidecars(sBlock, blobs)
require.NoError(t, err)
blockRoot, err := dataColumnSidecars[0].GetSignedBlockHeader().Header.HashTreeRoot()
require.NoError(t, err)
p2pSvc := p2ptest.NewTestP2P(t)
chainSvc, clock := defaultMockChain(t)
test := &dataSamplerTest{
ctx: context.Background(),
p2pSvc: p2pSvc,
peers: []*p2ptest.TestP2P{},
ctxMap: map[[4]byte]int{{245, 165, 253, 66}: version.Deneb},
chainSvc: chainSvc,
blockRoot: blockRoot,
blobs: blobs,
kzgCommitments: kzgCommitments,
kzgProofs: kzgProofs,
dataColumnSidecars: dataColumnSidecars,
}
sampler := newDataColumnSampler1D(p2pSvc, clock, test.ctxMap, nil)
return test, sampler
}
func TestDataColumnSampler1D_PeerManagement(t *testing.T) {
testCases := []struct {
numPeers int
custodyRequirement uint64
expectedColumns [][]uint64
prunePeers map[int]bool // Peers to prune.
}{
{
numPeers: 3,
custodyRequirement: 1,
expectedColumns: [][]uint64{
{6, 38, 70, 102},
{3, 35, 67, 99},
{12, 44, 76, 108},
},
prunePeers: map[int]bool{
0: true,
},
},
{
numPeers: 3,
custodyRequirement: 2,
expectedColumns: [][]uint64{
{6, 16, 38, 48, 70, 80, 102, 112},
{3, 13, 35, 45, 67, 77, 99, 109},
{12, 31, 44, 63, 76, 95, 108, 127},
},
prunePeers: map[int]bool{
0: true,
},
},
}
for _, tc := range testCases {
test, sampler := setupDataColumnSamplerTest(t, uint64(tc.numPeers))
for i := 0; i < tc.numPeers; i++ {
p := createAndConnectPeer(t, test.p2pSvc, test.chainSvc, test.dataColumnSidecars, tc.custodyRequirement, nil, i+1)
test.peers = append(test.peers, p)
}
// confirm everything works
sampler.refreshPeerInfo()
require.Equal(t, params.BeaconConfig().NumberOfColumns, uint64(len(sampler.peerFromColumn)))
require.Equal(t, tc.numPeers, len(sampler.columnFromPeer))
for i, peer := range test.peers {
// confirm peer has the expected columns
require.Equal(t, len(tc.expectedColumns[i]), len(sampler.columnFromPeer[peer.PeerID()]))
for _, column := range tc.expectedColumns[i] {
require.Equal(t, true, sampler.columnFromPeer[peer.PeerID()][column])
}
// confirm column to peer mapping are correct
for _, column := range tc.expectedColumns[i] {
require.Equal(t, true, sampler.peerFromColumn[column][peer.PeerID()])
}
}
// prune peers
for peer := range tc.prunePeers {
err := test.p2pSvc.Disconnect(test.peers[peer].PeerID())
test.p2pSvc.Peers().SetConnectionState(test.peers[peer].PeerID(), peers.PeerDisconnected)
require.NoError(t, err)
}
sampler.refreshPeerInfo()
require.Equal(t, tc.numPeers-len(tc.prunePeers), len(sampler.columnFromPeer))
for i, peer := range test.peers {
for _, column := range tc.expectedColumns[i] {
expected := true
if tc.prunePeers[i] {
expected = false
}
require.Equal(t, expected, sampler.peerFromColumn[column][peer.PeerID()])
}
}
}
}
func TestDataColumnSampler1D_SampleDistribution(t *testing.T) {
testCases := []struct {
numPeers int
custodyRequirement uint64
columnsToDistribute [][]uint64
expectedDistribution []map[int][]uint64
}{
{
numPeers: 3,
custodyRequirement: 1,
// peer custody maps
// p0: {6, 38, 70, 102},
// p1: {3, 35, 67, 99},
// p2: {12, 44, 76, 108},
columnsToDistribute: [][]uint64{
{3, 6, 12},
{6, 3, 12, 38, 35, 44},
{6, 38, 70},
{11},
},
expectedDistribution: []map[int][]uint64{
{
0: {6}, // p1
1: {3}, // p2
2: {12}, // p3
},
{
0: {6, 38}, // p1
1: {3, 35}, // p2
2: {12, 44}, // p3
},
{
0: {6, 38, 70}, // p1
},
{},
},
},
{
numPeers: 3,
custodyRequirement: 2,
// peer custody maps
// p0: {6, 16, 38, 48, 70, 80, 102, 112},
// p1: {3, 13, 35, 45, 67, 77, 99, 109},
// p2: {12, 31, 44, 63, 76, 95, 108, 127},
columnsToDistribute: [][]uint64{
{3, 6, 12, 109, 112, 127}, // all covered by peers
{13, 16, 31, 32}, // 32 not covered by peers
},
expectedDistribution: []map[int][]uint64{
{
0: {6, 112}, // p1
1: {3, 109}, // p2
2: {12, 127}, // p3
},
{
0: {16}, // p1
1: {13}, // p2
2: {31}, // p3
},
},
},
}
for _, tc := range testCases {
test, sampler := setupDataColumnSamplerTest(t, uint64(tc.numPeers))
for i := 0; i < tc.numPeers; i++ {
p := createAndConnectPeer(t, test.p2pSvc, test.chainSvc, test.dataColumnSidecars, tc.custodyRequirement, nil, i+1)
test.peers = append(test.peers, p)
}
sampler.refreshPeerInfo()
for idx, columns := range tc.columnsToDistribute {
result := sampler.distributeSamplesToPeer(columns)
require.Equal(t, len(tc.expectedDistribution[idx]), len(result))
for peerIdx, dist := range tc.expectedDistribution[idx] {
for _, column := range dist {
peerID := test.peers[peerIdx].PeerID()
require.Equal(t, true, result[peerID][column])
}
}
}
}
}
func TestDataColumnSampler1D_SampleDataColumns(t *testing.T) {
test, sampler := setupDefaultDataColumnSamplerTest(t)
sampler.refreshPeerInfo()
// Sample all columns.
sampleColumns := []uint64{6, 3, 12, 38, 35, 44, 70, 67, 76, 102, 99, 108}
retrieved := sampler.sampleDataColumns(test.ctx, test.blockRoot, sampleColumns)
require.Equal(t, 12, len(retrieved))
for _, column := range sampleColumns {
require.Equal(t, true, retrieved[column])
}
// Sample a subset of columns.
sampleColumns = []uint64{6, 3, 12, 38, 35, 44}
retrieved = sampler.sampleDataColumns(test.ctx, test.blockRoot, sampleColumns)
require.Equal(t, 6, len(retrieved))
for _, column := range sampleColumns {
require.Equal(t, true, retrieved[column])
}
// Sample a subset of columns with missing columns.
sampleColumns = []uint64{6, 3, 12, 127}
retrieved = sampler.sampleDataColumns(test.ctx, test.blockRoot, sampleColumns)
require.Equal(t, 3, len(retrieved))
require.DeepEqual(t, map[uint64]bool{6: true, 3: true, 12: true}, retrieved)
}
func TestDataColumnSampler1D_IncrementalDAS(t *testing.T) {
testCases := []struct {
name string
samplesCount uint64
possibleColumnsToRequest []uint64
columnsNotToRespond map[uint64]bool
expectedSuccess bool
expectedRoundSummaries []roundSummary
}{
{
name: "All columns are correctly sampled in a single round",
samplesCount: 5,
possibleColumnsToRequest: []uint64{70, 35, 99, 6, 38, 3, 67, 102, 12, 44, 76, 108},
columnsNotToRespond: map[uint64]bool{},
expectedSuccess: true,
expectedRoundSummaries: []roundSummary{
{
RequestedColumns: []uint64{70, 35, 99, 6, 38},
MissingColumns: map[uint64]bool{},
},
},
},
{
name: "Two missing columns in the first round, ok in the second round",
samplesCount: 5,
possibleColumnsToRequest: []uint64{70, 35, 99, 6, 38, 3, 67, 102, 12, 44, 76, 108},
columnsNotToRespond: map[uint64]bool{6: true, 70: true},
expectedSuccess: true,
expectedRoundSummaries: []roundSummary{
{
RequestedColumns: []uint64{70, 35, 99, 6, 38},
MissingColumns: map[uint64]bool{70: true, 6: true},
},
{
RequestedColumns: []uint64{3, 67, 102, 12, 44, 76},
MissingColumns: map[uint64]bool{},
},
},
},
{
name: "Two missing columns in the first round, one missing in the second round. Fail to sample.",
samplesCount: 5,
possibleColumnsToRequest: []uint64{70, 35, 99, 6, 38, 3, 67, 102, 12, 44, 76, 108},
columnsNotToRespond: map[uint64]bool{6: true, 70: true, 3: true},
expectedSuccess: false,
expectedRoundSummaries: []roundSummary{
{
RequestedColumns: []uint64{70, 35, 99, 6, 38},
MissingColumns: map[uint64]bool{70: true, 6: true},
},
{
RequestedColumns: []uint64{3, 67, 102, 12, 44, 76},
MissingColumns: map[uint64]bool{3: true},
},
},
},
}
for _, tc := range testCases {
test, sampler := setupDataColumnSamplerTest(t, 3)
p1 := createAndConnectPeer(t, test.p2pSvc, test.chainSvc, test.dataColumnSidecars, 1, tc.columnsNotToRespond, 1)
p2 := createAndConnectPeer(t, test.p2pSvc, test.chainSvc, test.dataColumnSidecars, 1, tc.columnsNotToRespond, 2)
p3 := createAndConnectPeer(t, test.p2pSvc, test.chainSvc, test.dataColumnSidecars, 1, tc.columnsNotToRespond, 3)
test.peers = []*p2ptest.TestP2P{p1, p2, p3}
sampler.refreshPeerInfo()
success, summaries, err := sampler.incrementalDAS(test.ctx, test.blockRoot, tc.possibleColumnsToRequest, tc.samplesCount)
require.NoError(t, err)
require.Equal(t, tc.expectedSuccess, success)
require.DeepEqual(t, tc.expectedRoundSummaries, summaries)
}
}
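// Note (illustrative, not part of the change set): the expected MissingColumns maps in the
// fixtures above can be derived directly from columnsNotToRespond: a requested column is
// missing exactly when the peers were configured not to respond with it. A minimal sketch,
// with a hypothetical helper name:
func exampleRoundSummaryMissing(requested []uint64, notToRespond map[uint64]bool) map[uint64]bool {
	missing := map[uint64]bool{}
	for _, column := range requested {
		if notToRespond[column] {
			missing[column] = true
		}
	}
	return missing
}

// For instance, requesting {70, 35, 99, 6, 38} while peers withhold {6, 70} yields {6: true, 70: true},
// matching the first roundSummary of the second test case above.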
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func getRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func getRandBlob(seed int64) kzg.Blob {
var blob kzg.Blob
for i := 0; i < len(blob); i += 32 {
fieldElementBytes := getRandFieldElement(seed + int64(i))
copy(blob[i:i+32], fieldElementBytes[:])
}
return blob
}
func generateCommitmentAndProof(blob *kzg.Blob) (*kzg.Commitment, *kzg.Proof, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, nil, err
}
proof, err := kzg.ComputeBlobKZGProof(blob, commitment)
if err != nil {
return nil, nil, err
}
return &commitment, &proof, nil
}
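// Illustrative only (not part of the change set): the helpers above are typically chained to
// build a deterministic blob together with its KZG commitment and proof. This sketch assumes
// the kzg trusted setup has already been loaded by the test setup; the function name is
// hypothetical.
func buildTestBlobWithProof(t *testing.T, seed int64) (kzg.Blob, *kzg.Commitment, *kzg.Proof) {
	blob := getRandBlob(seed)
	commitment, proof, err := generateCommitmentAndProof(&blob)
	require.NoError(t, err)
	return blob, commitment, proof
}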


@@ -45,6 +45,8 @@ func (s *Service) decodePubsubMessage(msg *pubsub.Message) (ssz.Unmarshaler, err
topic = p2p.GossipTypeMapping[reflect.TypeOf(&ethpb.SyncCommitteeMessage{})]
case strings.Contains(topic, p2p.GossipBlobSidecarMessage):
topic = p2p.GossipTypeMapping[reflect.TypeOf(&ethpb.BlobSidecar{})]
case strings.Contains(topic, p2p.GossipDataColumnSidecarMessage):
topic = p2p.GossipTypeMapping[reflect.TypeOf(&ethpb.DataColumnSidecar{})]
}
base := p2p.GossipTopicMappings(topic, 0)


@@ -67,6 +67,11 @@ func (s *Service) registerForUpcomingFork(currEpoch primitives.Epoch) error {
s.registerRPCHandlersDeneb()
}
}
// Specially handle PeerDAS.
if params.PeerDASEnabled() && currEpoch+1 == params.BeaconConfig().Eip7594ForkEpoch {
s.registerRPCHandlersPeerDAS()
}
return nil
}
@@ -121,5 +126,9 @@ func (s *Service) deregisterFromPastFork(currEpoch primitives.Epoch) error {
}
}
}
// Handle PeerDAS as it's a special case.
if params.PeerDASEnabled() && currEpoch > 0 && (currEpoch-1) == params.BeaconConfig().Eip7594ForkEpoch {
s.unregisterBlobHandlers()
}
return nil
}


@@ -388,6 +388,7 @@ func TestService_CheckForPreviousEpochFork(t *testing.T) {
}
}
// oneEpoch returns the duration of one epoch.
func oneEpoch() time.Duration {
return time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
}


@@ -20,6 +20,8 @@ go_library(
"//beacon-chain/blockchain:go_default_library", "//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/feed/block:go_default_library", "//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/state:go_default_library", "//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library", "//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library", "//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library", "//beacon-chain/db:go_default_library",
@@ -32,6 +34,7 @@ go_library(
"//beacon-chain/sync/verify:go_default_library", "//beacon-chain/sync/verify:go_default_library",
"//beacon-chain/verification:go_default_library", "//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library", "//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library", "//config/params:go_default_library",
"//consensus-types/blocks:go_default_library", "//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library", "//consensus-types/interfaces:go_default_library",
@@ -69,7 +72,9 @@ go_test(
tags = ["CI_race_detection"], tags = ["CI_race_detection"],
deps = [ deps = [
"//async/abool:go_default_library", "//async/abool:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library", "//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/das:go_default_library", "//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library", "//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library", "//beacon-chain/db/filesystem:go_default_library",
@@ -92,18 +97,26 @@ go_test(
"//consensus-types/primitives:go_default_library", "//consensus-types/primitives:go_default_library",
"//container/leaky-bucket:go_default_library", "//container/leaky-bucket:go_default_library",
"//container/slice:go_default_library", "//container/slice:go_default_library",
"//crypto/ecdsa:go_default_library",
"//crypto/hash:go_default_library", "//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library", "//encoding/bytesutil:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library", "//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library", "//testing/assert:go_default_library",
"//testing/require:go_default_library", "//testing/require:go_default_library",
"//testing/util:go_default_library", "//testing/util:go_default_library",
"//time:go_default_library", "//time:go_default_library",
"//time/slots:go_default_library", "//time/slots:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library", "@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_libp2p_go_libp2p//core:go_default_library", "@com_github_libp2p_go_libp2p//core:go_default_library",
"@com_github_libp2p_go_libp2p//core/crypto:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library", "@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library", "@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/net/swarm/testing:go_default_library",
"@com_github_paulbellamy_ratecounter//:go_default_library", "@com_github_paulbellamy_ratecounter//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library", "@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library", "@com_github_sirupsen_logrus//hooks/test:go_default_library",


@@ -3,6 +3,7 @@ package initialsync
import ( import (
"context" "context"
"fmt" "fmt"
"math"
"sort" "sort"
"strings" "strings"
"sync" "sync"
@@ -10,6 +11,11 @@ import (
"github.com/libp2p/go-libp2p/core/peer" "github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db" "github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem" "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
@@ -18,19 +24,17 @@ import (
prysmsync "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync" prysmsync "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/verify" "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/verify"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags" "github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks" "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
blocks2 "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces" "github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives" "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket" leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket"
"github.com/prysmaticlabs/prysm/v5/crypto/rand" "github.com/prysmaticlabs/prysm/v5/crypto/rand"
"github.com/prysmaticlabs/prysm/v5/math" mathPrysm "github.com/prysmaticlabs/prysm/v5/math"
p2ppb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" p2ppb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version" "github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots" "github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
) )
const ( const (
@@ -123,7 +127,7 @@ type fetchRequestResponse struct {
pid peer.ID pid peer.ID
start primitives.Slot start primitives.Slot
count uint64 count uint64
bwb []blocks2.BlockWithROBlobs bwb []blocks.BlockWithROBlobs
err error err error
} }
@@ -172,7 +176,7 @@ func maxBatchLimit() int {
if params.DenebEnabled() { if params.DenebEnabled() {
maxLimit = params.BeaconConfig().MaxRequestBlocksDeneb maxLimit = params.BeaconConfig().MaxRequestBlocksDeneb
} }
castedMaxLimit, err := math.Int(maxLimit) castedMaxLimit, err := mathPrysm.Int(maxLimit)
if err != nil { if err != nil {
// Should be impossible to hit this case. // Should be impossible to hit this case.
log.WithError(err).Error("Unable to calculate the max batch limit") log.WithError(err).Error("Unable to calculate the max batch limit")
@@ -289,7 +293,7 @@ func (f *blocksFetcher) handleRequest(ctx context.Context, start primitives.Slot
response := &fetchRequestResponse{ response := &fetchRequestResponse{
start: start, start: start,
count: count, count: count,
bwb: []blocks2.BlockWithROBlobs{}, bwb: []blocks.BlockWithROBlobs{},
err: nil, err: nil,
} }
@@ -315,13 +319,20 @@ func (f *blocksFetcher) handleRequest(ctx context.Context, start primitives.Slot
} }
response.bwb, response.pid, response.err = f.fetchBlocksFromPeer(ctx, start, count, peers) response.bwb, response.pid, response.err = f.fetchBlocksFromPeer(ctx, start, count, peers)
if response.err == nil {
bwb, err := f.fetchBlobsFromPeer(ctx, response.bwb, response.pid, peers) if response.err != nil {
if err != nil { return response
response.err = err
}
response.bwb = bwb
} }
if coreTime.PeerDASIsActive(start) {
response.err = f.fetchDataColumnsFromPeers(ctx, response.bwb, peers)
return response
}
if err := f.fetchBlobsFromPeer(ctx, response.bwb, response.pid, peers); err != nil {
response.err = err
}
return response return response
} }
@@ -330,7 +341,7 @@ func (f *blocksFetcher) fetchBlocksFromPeer(
ctx context.Context, ctx context.Context,
start primitives.Slot, count uint64, start primitives.Slot, count uint64,
peers []peer.ID, peers []peer.ID,
) ([]blocks2.BlockWithROBlobs, peer.ID, error) { ) ([]blocks.BlockWithROBlobs, peer.ID, error) {
ctx, span := trace.StartSpan(ctx, "initialsync.fetchBlocksFromPeer") ctx, span := trace.StartSpan(ctx, "initialsync.fetchBlocksFromPeer")
defer span.End() defer span.End()
@@ -363,16 +374,16 @@ func (f *blocksFetcher) fetchBlocksFromPeer(
return nil, "", errNoPeersAvailable return nil, "", errNoPeersAvailable
} }
func sortedBlockWithVerifiedBlobSlice(blocks []interfaces.ReadOnlySignedBeaconBlock) ([]blocks2.BlockWithROBlobs, error) { func sortedBlockWithVerifiedBlobSlice(blks []interfaces.ReadOnlySignedBeaconBlock) ([]blocks.BlockWithROBlobs, error) {
rb := make([]blocks2.BlockWithROBlobs, len(blocks)) rb := make([]blocks.BlockWithROBlobs, len(blks))
for i, b := range blocks { for i, b := range blks {
ro, err := blocks2.NewROBlock(b) ro, err := blocks.NewROBlock(b)
if err != nil { if err != nil {
return nil, err return nil, err
} }
rb[i] = blocks2.BlockWithROBlobs{Block: ro} rb[i] = blocks.BlockWithROBlobs{Block: ro}
} }
sort.Sort(blocks2.BlockWithROBlobsSlice(rb)) sort.Sort(blocks.BlockWithROBlobsSlice(rb))
return rb, nil return rb, nil
} }
@@ -386,7 +397,7 @@ type commitmentCountList []commitmentCount
// countCommitments makes a list of all blocks that have commitments that need to be satisfied. // countCommitments makes a list of all blocks that have commitments that need to be satisfied.
// This gives us a representation to finish building the request that is lightweight and readable for testing. // This gives us a representation to finish building the request that is lightweight and readable for testing.
func countCommitments(bwb []blocks2.BlockWithROBlobs, retentionStart primitives.Slot) commitmentCountList { func countCommitments(bwb []blocks.BlockWithROBlobs, retentionStart primitives.Slot) commitmentCountList {
if len(bwb) == 0 { if len(bwb) == 0 {
return nil return nil
} }
@@ -465,10 +476,22 @@ func (r *blobRange) Request() *p2ppb.BlobSidecarsByRangeRequest {
} }
} }
func (r *blobRange) RequestDataColumns() *p2ppb.DataColumnSidecarsByRangeRequest {
if r == nil {
return nil
}
return &p2ppb.DataColumnSidecarsByRangeRequest{
StartSlot: r.low,
Count: uint64(r.high.SubSlot(r.low)) + 1,
}
}
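// Worked example (illustrative, not part of the change set): Count covers the inclusive slot
// span [low, high], hence the +1. With plain integers instead of primitives.Slot:
//
//	countForRange(32, 35) == 4 // slots 32, 33, 34 and 35
func countForRange(low, high uint64) uint64 {
	return high - low + 1
}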
var errBlobVerification = errors.New("peer unable to serve aligned BlobSidecarsByRange and BeaconBlockSidecarsByRange responses") var errBlobVerification = errors.New("peer unable to serve aligned BlobSidecarsByRange and BeaconBlockSidecarsByRange responses")
var errMissingBlobsForBlockCommitments = errors.Wrap(errBlobVerification, "blobs unavailable for processing block with kzg commitments") var errMissingBlobsForBlockCommitments = errors.Wrap(errBlobVerification, "blobs unavailable for processing block with kzg commitments")
func verifyAndPopulateBlobs(bwb []blocks2.BlockWithROBlobs, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) ([]blocks2.BlockWithROBlobs, error) { // verifyAndPopulateBlobs mutate the input `bwb` argument by adding verified blobs.
// This function mutates the input `bwb` argument.
func verifyAndPopulateBlobs(bwb []blocks.BlockWithROBlobs, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) error {
blobsByRoot := make(map[[32]byte][]blocks.ROBlob) blobsByRoot := make(map[[32]byte][]blocks.ROBlob)
for i := range blobs { for i := range blobs {
if blobs[i].Slot() < req.StartSlot { if blobs[i].Slot() < req.StartSlot {
@@ -478,46 +501,53 @@ func verifyAndPopulateBlobs(bwb []blocks2.BlockWithROBlobs, blobs []blocks.ROBlo
blobsByRoot[br] = append(blobsByRoot[br], blobs[i]) blobsByRoot[br] = append(blobsByRoot[br], blobs[i])
} }
for i := range bwb { for i := range bwb {
bwi, err := populateBlock(bwb[i], blobsByRoot[bwb[i].Block.Root()], req, bss) err := populateBlock(&bwb[i], blobsByRoot[bwb[i].Block.Root()], req, bss)
if err != nil { if err != nil {
if errors.Is(err, errDidntPopulate) { if errors.Is(err, errDidntPopulate) {
continue continue
} }
return bwb, err return err
} }
bwb[i] = bwi
} }
return bwb, nil return nil
} }
var errDidntPopulate = errors.New("skipping population of block") var errDidntPopulate = errors.New("skipping population of block")
func populateBlock(bw blocks2.BlockWithROBlobs, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) (blocks2.BlockWithROBlobs, error) { // populateBlock verifies and populates blobs for a block.
// This function mutates the input `bw` argument.
func populateBlock(bw *blocks.BlockWithROBlobs, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) error {
blk := bw.Block blk := bw.Block
if blk.Version() < version.Deneb || blk.Block().Slot() < req.StartSlot { if blk.Version() < version.Deneb || blk.Block().Slot() < req.StartSlot {
return bw, errDidntPopulate return errDidntPopulate
} }
commits, err := blk.Block().Body().BlobKzgCommitments() commits, err := blk.Block().Body().BlobKzgCommitments()
if err != nil { if err != nil {
return bw, errDidntPopulate return errDidntPopulate
} }
if len(commits) == 0 { if len(commits) == 0 {
return bw, errDidntPopulate return errDidntPopulate
} }
// Drop blobs on the floor if we already have them. // Drop blobs on the floor if we already have them.
if bss != nil && bss.Summary(blk.Root()).AllAvailable(len(commits)) { if bss != nil && bss.Summary(blk.Root()).AllAvailable(len(commits)) {
return bw, errDidntPopulate return errDidntPopulate
} }
if len(commits) != len(blobs) { if len(commits) != len(blobs) {
return bw, missingCommitError(blk.Root(), blk.Block().Slot(), commits) return missingCommitError(blk.Root(), blk.Block().Slot(), commits)
} }
for ci := range commits { for ci := range commits {
if err := verify.BlobAlignsWithBlock(blobs[ci], blk); err != nil { if err := verify.BlobAlignsWithBlock(blobs[ci], blk); err != nil {
return bw, err return err
} }
} }
bw.Blobs = blobs bw.Blobs = blobs
return bw, nil return nil
} }
func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) error { func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) error {
@@ -530,29 +560,30 @@ func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) e
} }
// fetchBlobsFromPeer fetches blocks from a single randomly selected peer. // fetchBlobsFromPeer fetches blocks from a single randomly selected peer.
func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks2.BlockWithROBlobs, pid peer.ID, peers []peer.ID) ([]blocks2.BlockWithROBlobs, error) { // This function mutates the input `bwb` argument.
func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.BlockWithROBlobs, pid peer.ID, peers []peer.ID) error {
ctx, span := trace.StartSpan(ctx, "initialsync.fetchBlobsFromPeer") ctx, span := trace.StartSpan(ctx, "initialsync.fetchBlobsFromPeer")
defer span.End() defer span.End()
if slots.ToEpoch(f.clock.CurrentSlot()) < params.BeaconConfig().DenebForkEpoch { if slots.ToEpoch(f.clock.CurrentSlot()) < params.BeaconConfig().DenebForkEpoch {
return bwb, nil return nil
} }
blobWindowStart, err := prysmsync.BlobRPCMinValidSlot(f.clock.CurrentSlot()) blobWindowStart, err := prysmsync.BlobRPCMinValidSlot(f.clock.CurrentSlot())
if err != nil { if err != nil {
return nil, err return err
} }
// Construct request message based on observed interval of blocks in need of blobs. // Construct request message based on observed interval of blocks in need of blobs.
req := countCommitments(bwb, blobWindowStart).blobRange(f.bs).Request() req := countCommitments(bwb, blobWindowStart).blobRange(f.bs).Request()
if req == nil { if req == nil {
return bwb, nil return nil
} }
peers = f.filterPeers(ctx, peers, peersPercentagePerRequest) peers = f.filterPeers(ctx, peers, peersPercentagePerRequest)
// We dial the initial peer first to ensure that we get the desired set of blobs. // We dial the initial peer first to ensure that we get the desired set of blobs.
wantedPeers := append([]peer.ID{pid}, peers...) peers = append([]peer.ID{pid}, peers...)
bestPeers := f.hasSufficientBandwidth(wantedPeers, req.Count) peers = f.hasSufficientBandwidth(peers, req.Count)
// We append the best peers to the front so that higher capacity // We append the best peers to the front so that higher capacity
// peers are dialed first. If all of them fail, we fallback to the // peers are dialed first. If all of them fail, we fallback to the
// initial peer we wanted to request blobs from. // initial peer we wanted to request blobs from.
peers = append(bestPeers, pid) peers = append(peers, pid)
for i := 0; i < len(peers); i++ { for i := 0; i < len(peers); i++ {
p := peers[i] p := peers[i]
blobs, err := f.requestBlobs(ctx, req, p) blobs, err := f.requestBlobs(ctx, req, p)
@@ -561,14 +592,561 @@ func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks2.Bl
continue continue
} }
f.p2p.Peers().Scorers().BlockProviderScorer().Touch(p) f.p2p.Peers().Scorers().BlockProviderScorer().Touch(p)
robs, err := verifyAndPopulateBlobs(bwb, blobs, req, f.bs) if err := verifyAndPopulateBlobs(bwb, blobs, req, f.bs); err != nil {
if err != nil {
log.WithField("peer", p).WithError(err).Debug("Invalid BeaconBlobsByRange response") log.WithField("peer", p).WithError(err).Debug("Invalid BeaconBlobsByRange response")
continue continue
} }
return robs, err return err
} }
return nil, errNoPeersAvailable return errNoPeersAvailable
}
// sortedSliceFromMap returns a sorted slice of keys from a map.
func sortedSliceFromMap(m map[uint64]bool) []uint64 {
result := make([]uint64, 0, len(m))
for k := range m {
result = append(result, k)
}
sort.Slice(result, func(i, j int) bool {
return result[i] < result[j]
})
return result
}
// blocksWithMissingDataColumnsBoundaries finds the first and last block in `bwb` that:
// - are in the blob retention period,
// - contain at least one blob, and
// - have at least one missing data column.
func (f *blocksFetcher) blocksWithMissingDataColumnsBoundaries(
bwb []blocks.BlockWithROBlobs,
currentSlot primitives.Slot,
localCustodyColumns map[uint64]bool,
) (bool, int, int, error) {
// Get, regarding the current slot, the minimum slot for which we should serve data columns.
columnWindowStart, err := prysmsync.DataColumnsRPCMinValidSlot(currentSlot)
if err != nil {
return false, 0, 0, errors.Wrap(err, "data columns RPC min valid slot")
}
// Find the first block with a slot higher than or equal to columnWindowStart,
firstWindowIndex := -1
for i := range bwb {
if bwb[i].Block.Block().Slot() >= columnWindowStart {
firstWindowIndex = i
break
}
}
if firstWindowIndex == -1 {
// There is no block with slot greater than or equal to columnWindowStart.
return false, 0, 0, nil
}
// Find the first block which contains blob commitments and for which some data columns are missing.
firstIndex := -1
for i := firstWindowIndex; i < len(bwb); i++ {
// Is there any blob commitment in this block?
commits, err := bwb[i].Block.Block().Body().BlobKzgCommitments()
if err != nil {
return false, 0, 0, errors.Wrap(err, "blob KZG commitments")
}
if len(commits) == 0 {
continue
}
// Is there at least one column we should custody that is not in our store?
root := bwb[i].Block.Root()
allColumnsAreAvailable := f.bs.Summary(root).AllDataColumnsAvailable(localCustodyColumns)
if !allColumnsAreAvailable {
firstIndex = i
break
}
}
if firstIndex == -1 {
// There is no block with at least one missing data column.
return false, 0, 0, nil
}
// Find the last block which contains blob commitments and for which some data columns are missing.
lastIndex := len(bwb) - 1
for i := lastIndex; i >= firstIndex; i-- {
// Is there any blob commitment in this block?
commits, err := bwb[i].Block.Block().Body().BlobKzgCommitments()
if err != nil {
return false, 0, 0, errors.Wrap(err, "blob KZG commitments")
}
if len(commits) == 0 {
continue
}
// Is there at least one column we should custody that is not in our store?
root := bwb[i].Block.Root()
allColumnsAreAvailable := f.bs.Summary(root).AllDataColumnsAvailable(localCustodyColumns)
if !allColumnsAreAvailable {
lastIndex = i
break
}
}
return true, firstIndex, lastIndex, nil
}
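// The scan above follows a generic pattern: walk forward for the first index satisfying a
// predicate, then walk backward for the last one. A minimal sketch of that pattern
// (illustrative only, not part of the change set; the function name is hypothetical):
func exampleBoundaries(n int, pred func(i int) bool) (found bool, first, last int) {
	first = -1
	for i := 0; i < n; i++ {
		if pred(i) {
			first = i
			break
		}
	}
	if first == -1 {
		return false, 0, 0
	}
	last = first
	for i := n - 1; i >= first; i-- {
		if pred(i) {
			last = i
			break
		}
	}
	return true, first, last
}

// For example, with a predicate that is true at indices 2, 3 and 5 of a 7-element slice,
// exampleBoundaries returns (true, 2, 5).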
// custodyAllNeededColumns filters `inputPeers`, keeping only the peers that custody every column in `columns`.
func (f *blocksFetcher) custodyAllNeededColumns(inputPeers []peer.ID, columns map[uint64]bool) ([]peer.ID, error) {
outputPeers := make([]peer.ID, 0, len(inputPeers))
loop:
for _, peer := range inputPeers {
// Get the node ID from the peer ID.
nodeID, err := p2p.ConvertPeerIDToNodeID(peer)
if err != nil {
return nil, errors.Wrap(err, "convert peer ID to node ID")
}
// Get the custody columns count from the peer.
custodyCount := f.p2p.CustodyCountFromRemotePeer(peer)
// Get the custody columns from the peer.
remoteCustodyColumns, err := peerdas.CustodyColumns(nodeID, custodyCount)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
for column := range columns {
if !remoteCustodyColumns[column] {
continue loop
}
}
outputPeers = append(outputPeers, peer)
}
return outputPeers, nil
}
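// The filter above keeps a peer only when its custody set is a superset of the needed columns.
// Isolated as a standalone check (illustrative, not part of the change set; in the real code the
// peer's custody set comes from peerdas.CustodyColumns):
func exampleCustodiesAll(peerColumns, needed map[uint64]bool) bool {
	for column := range needed {
		if !peerColumns[column] {
			return false
		}
	}
	return true
}

// A peer custodying only {6, 38} does not satisfy a need for {6, 38, 70}, while a supernode
// custodying all 128 columns satisfies any need.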
// filterPeersForDataColumns filters peers able to serve us `dataColumns`.
func (f *blocksFetcher) filterPeersForDataColumns(
ctx context.Context,
blocksCount uint64,
dataColumns map[uint64]bool,
peers []peer.ID,
) ([]peer.ID, error) {
// Filter peers based on the percentage of peers to be used in a request.
peers = f.filterPeers(ctx, peers, peersPercentagePerRequest)
// Filter peers on bandwidth.
peers = f.hasSufficientBandwidth(peers, blocksCount)
// Select peers which custody ALL wanted columns.
// Basically, it is very unlikely that a non-supernode peer will have custody of all columns.
// TODO: Modify to retrieve data columns from all possible peers.
// TODO: If a peer does respond some of the request columns, do not re-request responded columns.
peers, err := f.custodyAllNeededColumns(peers, dataColumns)
if err != nil {
return nil, errors.Wrap(err, "custody all needed columns")
}
// Randomize the order of the peers.
randGen := rand.NewGenerator()
randGen.Shuffle(len(peers), func(i, j int) {
peers[i], peers[j] = peers[j], peers[i]
})
return peers, nil
}
// custodyColumns returns the columns we should custody.
func (f *blocksFetcher) custodyColumns() (map[uint64]bool, error) {
// Retrieve our node ID.
localNodeID := f.p2p.NodeID()
// Retrieve the number of column subnets we should custody.
localCustodySubnetCount := peerdas.CustodySubnetCount()
// Retrieve the columns we should custody.
localCustodyColumns, err := peerdas.CustodyColumns(localNodeID, localCustodySubnetCount)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
return localCustodyColumns, nil
}
// missingColumnsFromRoot returns the missing columns indexed by root.
func (f *blocksFetcher) missingColumnsFromRoot(
custodyColumns map[uint64]bool,
bwb []blocks.BlockWithROBlobs,
) (map[[fieldparams.RootLength]byte]map[uint64]bool, error) {
result := make(map[[fieldparams.RootLength]byte]map[uint64]bool)
for i := 0; i < len(bwb); i++ {
block := bwb[i].Block
// Retrieve the blob KZG commitments.
commitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Skip if there are no commitments.
if len(commitments) == 0 {
continue
}
// Retrieve the root.
root := block.Root()
for column := range custodyColumns {
// If there is at least one commitment for this block and if a column we should custody
// is not in our store, then we should retrieve it.
if !f.bs.Summary(root).HasDataColumnIndex(column) {
if _, ok := result[root]; !ok {
result[root] = make(map[uint64]bool)
}
result[root][column] = true
}
}
}
return result, nil
}
// indicesFromRoot returns the indices indexed by root.
func indicesFromRoot(bwb []blocks.BlockWithROBlobs) map[[fieldparams.RootLength]byte][]int {
result := make(map[[fieldparams.RootLength]byte][]int, len(bwb))
for i := 0; i < len(bwb); i++ {
root := bwb[i].Block.Root()
result[root] = append(result[root], i)
}
return result
}
// blockFromRoot returns the block indexed by root.
func blockFromRoot(bwb []blocks.BlockWithROBlobs) map[[fieldparams.RootLength]byte]blocks.ROBlock {
result := make(map[[fieldparams.RootLength]byte]blocks.ROBlock, len(bwb))
for i := 0; i < len(bwb); i++ {
root := bwb[i].Block.Root()
result[root] = bwb[i].Block
}
return result
}
// minInt returns the minimum integer in a slice.
func minInt(slice []int) int {
min := math.MaxInt
for _, item := range slice {
if item < min {
min = item
}
}
return min
}
// maxInt returns the maximum integer in a slice.
func maxInt(slice []int) int {
max := math.MinInt
for _, item := range slice {
if item > max {
max = item
}
}
return max
}
// requestDataColumnsFromPeers sends `request` to each peer in `peers` until a peer returns at least one data column.
func (f *blocksFetcher) requestDataColumnsFromPeers(
ctx context.Context,
request *p2ppb.DataColumnSidecarsByRangeRequest,
peers []peer.ID,
) ([]blocks.RODataColumn, peer.ID, error) {
for _, peer := range peers {
if ctx.Err() != nil {
return nil, "", ctx.Err()
}
err := func() error {
l := f.peerLock(peer)
l.Lock()
defer l.Unlock()
log.WithFields(logrus.Fields{
"peer": peer,
"start": request.StartSlot,
"count": request.Count,
"capacity": f.rateLimiter.Remaining(peer.String()),
"score": f.p2p.Peers().Scorers().BlockProviderScorer().FormatScorePretty(peer),
}).Debug("Requesting data columns")
// We're intentionally abusing the block rate limit here, treating data column requests as if they were block requests.
// Since column requests take more bandwidth than blocks, we should improve how we account for the different kinds
// of requests, more in proportion to the cost of serving them.
if f.rateLimiter.Remaining(peer.String()) < int64(request.Count) {
if err := f.waitForBandwidth(peer, request.Count); err != nil {
return errors.Wrap(err, "wait for bandwidth")
}
}
f.rateLimiter.Add(peer.String(), int64(request.Count))
return nil
}()
if err != nil {
return nil, "", err
}
roDataColumns, err := prysmsync.SendDataColumnsByRangeRequest(ctx, f.clock, f.p2p, peer, f.ctxMap, request)
if err != nil {
log.WithField("peer", peer).WithError(err).Warning("Could not request data columns by range from peer")
continue
}
// If the peer did not return any data columns, go to the next peer.
if len(roDataColumns) == 0 {
log.WithField("peer", peer).Warning("Peer did not return any data columns")
continue
}
// We have received at least one data column from the peer.
return roDataColumns, peer, nil
}
// No peer returned any data columns.
return nil, "", nil
}
// firstLastIndices returns the first and last indices where we have missing columns.
func firstLastIndices(
missingColumnsFromRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
indicesFromRoot map[[fieldparams.RootLength]byte][]int,
) (int, int) {
firstIndex, lastIndex := math.MaxInt, -1
for root := range missingColumnsFromRoot {
indices := indicesFromRoot[root]
index := minInt(indices)
if index < firstIndex {
firstIndex = index
}
index = maxInt(indices)
if index > lastIndex {
lastIndex = index
}
}
return firstIndex, lastIndex
}
// processRetrievedDataColumns processes the retrieved data columns.
// This function:
// - Mutates `bwb` by adding the retrieved data columns.
// - Mutates `missingColumnsFromRoot` by removing the columns that have been retrieved.
func processRetrievedDataColumns(
roDataColumns []blocks.RODataColumn,
blockFromRoot map[[fieldparams.RootLength]byte]blocks.ROBlock,
indicesFromRoot map[[fieldparams.RootLength]byte][]int,
missingColumnsFromRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
bwb []blocks.BlockWithROBlobs,
) {
retrievedColumnsFromRoot := make(map[[fieldparams.RootLength]byte]map[uint64]bool)
// Verify and populate columns
for i := range roDataColumns {
dataColumn := roDataColumns[i]
root := dataColumn.BlockRoot()
columnIndex := dataColumn.ColumnIndex
missingColumns, ok := missingColumnsFromRoot[root]
if !ok {
continue
}
if !missingColumns[columnIndex] {
continue
}
// Verify the data column.
if err := verify.ColumnAlignsWithBlock(dataColumn, blockFromRoot[root]); err != nil {
// TODO: Should we downscore the peer for that?
continue
}
// Populate the block with the data column.
for _, index := range indicesFromRoot[root] {
if bwb[index].Columns == nil {
bwb[index].Columns = make([]blocks.RODataColumn, 0)
}
bwb[index].Columns = append(bwb[index].Columns, dataColumn)
}
// Populate the retrieved columns.
if _, ok := retrievedColumnsFromRoot[root]; !ok {
retrievedColumnsFromRoot[root] = make(map[uint64]bool)
}
retrievedColumnsFromRoot[root][columnIndex] = true
// Remove the column from the missing columns.
delete(missingColumnsFromRoot[root], columnIndex)
if len(missingColumnsFromRoot[root]) == 0 {
delete(missingColumnsFromRoot, root)
}
}
}
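// The bookkeeping at the end of the loop above can be summarized as: remove the retrieved
// column from the per-root missing set, and drop the root entirely once nothing is missing
// for it. A minimal sketch (illustrative, not part of the change set; the function name is
// hypothetical):
func exampleMarkRetrieved(missing map[[fieldparams.RootLength]byte]map[uint64]bool, root [fieldparams.RootLength]byte, column uint64) {
	delete(missing[root], column)
	if len(missing[root]) == 0 {
		delete(missing, root)
	}
}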
// retrieveMissingDataColumnsFromPeers retrieves the missing data columns from the peers.
// This function:
// - Mutates `bwb` by adding the retrieved data columns.
// - Mutates `missingColumnsFromRoot` by removing the columns that have been retrieved.
// This function returns when all the missing data columns have been retrieved.
func (f *blocksFetcher) retrieveMissingDataColumnsFromPeers(ctx context.Context,
bwb []blocks.BlockWithROBlobs,
missingColumnsFromRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
indicesFromRoot map[[fieldparams.RootLength]byte][]int,
peers []peer.ID,
) error {
for len(missingColumnsFromRoot) > 0 {
if ctx.Err() != nil {
return ctx.Err()
}
// Get the first and last indices where we have missing columns.
firstIndex, lastIndex := firstLastIndices(missingColumnsFromRoot, indicesFromRoot)
// Get the first and the last slot.
firstSlot := bwb[firstIndex].Block.Block().Slot()
lastSlot := bwb[lastIndex].Block.Block().Slot()
// Get the number of blocks to retrieve.
blocksCount := uint64(lastSlot - firstSlot + 1)
// Get the missing data columns.
missingDataColumns := make(map[uint64]bool)
for _, columns := range missingColumnsFromRoot {
for column := range columns {
missingDataColumns[column] = true
}
}
// Filter peers.
peers, err := f.filterPeersForDataColumns(ctx, blocksCount, missingDataColumns, peers)
if err != nil {
return errors.Wrap(err, "filter peers for data columns")
}
if len(peers) == 0 {
log.Warning("No peers available to retrieve missing data columns, retrying in 5 seconds")
time.Sleep(5 * time.Second)
continue
}
// Get the first slot for which we should retrieve data columns.
startSlot := bwb[firstIndex].Block.Block().Slot()
// Build the request.
request := &p2ppb.DataColumnSidecarsByRangeRequest{
StartSlot: startSlot,
Count: blocksCount,
Columns: sortedSliceFromMap(missingDataColumns),
}
// Get all the blocks and data columns we should retrieve.
blockFromRoot := blockFromRoot(bwb[firstIndex : lastIndex+1])
// Iterate request over all peers, and exit as soon as at least one data column is retrieved.
roDataColumns, peer, err := f.requestDataColumnsFromPeers(ctx, request, peers)
if err != nil {
return errors.Wrap(err, "request data columns from peers")
}
// Process the retrieved data columns.
processRetrievedDataColumns(roDataColumns, blockFromRoot, indicesFromRoot, missingColumnsFromRoot, bwb)
if len(missingColumnsFromRoot) > 0 {
for root, columns := range missingColumnsFromRoot {
log.WithFields(logrus.Fields{
"peer": peer,
"root": fmt.Sprintf("%#x", root),
"slot": blockFromRoot[root].Block().Slot(),
"columns": columns,
}).Debug("Peer did not correctly return data columns")
}
}
}
return nil
}
// fetchDataColumnsFromPeers looks at the blocks in `bwb` and retrieves all
// data columns for which the block has blob commitments and for which our store
// is missing data columns we should custody.
// This function mutates `bwb` by adding the retrieved data columns.
// Prerequisite: bwb is sorted by slot.
func (f *blocksFetcher) fetchDataColumnsFromPeers(
ctx context.Context,
bwb []blocks.BlockWithROBlobs,
peers []peer.ID,
) error {
ctx, span := trace.StartSpan(ctx, "initialsync.fetchColumnsFromPeer")
defer span.End()
// Get the current slot.
currentSlot := f.clock.CurrentSlot()
// There are no data columns before Deneb, so return early.
if slots.ToEpoch(currentSlot) < params.BeaconConfig().DenebForkEpoch {
return nil
}
// Get the columns we custody.
localCustodyColumns, err := f.custodyColumns()
if err != nil {
return errors.Wrap(err, "custody columns")
}
// Find the first and last block in `bwb` that:
// - are in the blob retention period,
// - contain at least one blob, and
// - have at least one missing data column.
someColumnsAreMissing, firstIndex, lastIndex, err := f.blocksWithMissingDataColumnsBoundaries(bwb, currentSlot, localCustodyColumns)
if err != nil {
return errors.Wrap(err, "blocks with missing data columns boundaries")
}
// If there is no block with missing data columns, early return.
if !someColumnsAreMissing {
return nil
}
// Get all missing columns indexed by root.
missingColumnsFromRoot, err := f.missingColumnsFromRoot(localCustodyColumns, bwb[firstIndex:lastIndex+1])
if err != nil {
return errors.Wrap(err, "missing columns from root")
}
// Get all indices indexed by root.
indicesFromRoot := indicesFromRoot(bwb)
// Retrieve the missing data columns from the peers.
if err := f.retrieveMissingDataColumnsFromPeers(ctx, bwb, missingColumnsFromRoot, indicesFromRoot, peers); err != nil {
return errors.Wrap(err, "retrieve missing data columns from peers")
}
log.Debug("Successfully retrieved all data columns")
return nil
} }
// requestBlocks is a wrapper for handling BeaconBlocksByRangeRequest requests/streams. // requestBlocks is a wrapper for handling BeaconBlocksByRangeRequest requests/streams.
@@ -625,6 +1203,7 @@ func (f *blocksFetcher) requestBlobs(ctx context.Context, req *p2ppb.BlobSidecar
} }
f.rateLimiter.Add(pid.String(), int64(req.Count)) f.rateLimiter.Add(pid.String(), int64(req.Count))
l.Unlock() l.Unlock()
return prysmsync.SendBlobsByRangeRequest(ctx, f.clock, f.p2p, pid, f.ctxMap, req) return prysmsync.SendBlobsByRangeRequest(ctx, f.clock, f.p2p, pid, f.ctxMap, req)
} }
@@ -665,7 +1244,7 @@ func (f *blocksFetcher) waitForBandwidth(pid peer.ID, count uint64) error {
// Exit early if we have sufficient capacity // Exit early if we have sufficient capacity
return nil return nil
} }
intCount, err := math.Int(count) intCount, err := mathPrysm.Int(count)
if err != nil { if err != nil {
return err return err
} }
@@ -682,7 +1261,8 @@ func (f *blocksFetcher) waitForBandwidth(pid peer.ID, count uint64) error {
} }
func (f *blocksFetcher) hasSufficientBandwidth(peers []peer.ID, count uint64) []peer.ID { func (f *blocksFetcher) hasSufficientBandwidth(peers []peer.ID, count uint64) []peer.ID {
filteredPeers := []peer.ID{} var filteredPeers []peer.ID
for _, p := range peers { for _, p := range peers {
if uint64(f.rateLimiter.Remaining(p.String())) < count { if uint64(f.rateLimiter.Remaining(p.String())) < count {
continue continue


@@ -1,7 +1,10 @@
package initialsync package initialsync
import ( import (
"bytes"
"context" "context"
"crypto/sha256"
"encoding/binary"
"fmt" "fmt"
"math" "math"
"sort" "sort"
@@ -9,14 +12,23 @@ import (
"testing" "testing"
"time" "time"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
libp2pcore "github.com/libp2p/go-libp2p/core" libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network" "github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer" "github.com/libp2p/go-libp2p/core/peer"
swarmt "github.com/libp2p/go-libp2p/p2p/net/swarm/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing" mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem" "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
dbtest "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing" dbtest "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
p2pm "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2pt "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
p2ptest "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup" "github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
beaconsync "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync" beaconsync "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags" "github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
@@ -27,8 +39,11 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives" "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket" leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket"
"github.com/prysmaticlabs/prysm/v5/container/slice" "github.com/prysmaticlabs/prysm/v5/container/slice"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil" "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1" ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert" "github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require" "github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util" "github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -267,7 +282,7 @@ func TestBlocksFetcher_RoundRobin(t *testing.T) {
beaconDB := dbtest.SetupDB(t) beaconDB := dbtest.SetupDB(t)
p := p2pt.NewTestP2P(t) p := p2ptest.NewTestP2P(t)
connectPeers(t, p, tt.peers, p.Peers()) connectPeers(t, p, tt.peers, p.Peers())
cache.RLock() cache.RLock()
genesisRoot := cache.rootCache[0] genesisRoot := cache.rootCache[0]
@@ -532,9 +547,9 @@ func TestBlocksFetcher_requestBeaconBlocksByRange(t *testing.T) {
} }
func TestBlocksFetcher_RequestBlocksRateLimitingLocks(t *testing.T) { func TestBlocksFetcher_RequestBlocksRateLimitingLocks(t *testing.T) {
p1 := p2pt.NewTestP2P(t) p1 := p2ptest.NewTestP2P(t)
p2 := p2pt.NewTestP2P(t) p2 := p2ptest.NewTestP2P(t)
p3 := p2pt.NewTestP2P(t) p3 := p2ptest.NewTestP2P(t)
p1.Connect(p2) p1.Connect(p2)
p1.Connect(p3) p1.Connect(p3)
require.Equal(t, 2, len(p1.BHost.Network().Peers()), "Expected peers to be connected") require.Equal(t, 2, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
@@ -544,7 +559,7 @@ func TestBlocksFetcher_RequestBlocksRateLimitingLocks(t *testing.T) {
Count: 64, Count: 64,
} }
topic := p2pm.RPCBlocksByRangeTopicV1 topic := p2p.RPCBlocksByRangeTopicV1
protocol := libp2pcore.ProtocolID(topic + p2.Encoding().ProtocolSuffix()) protocol := libp2pcore.ProtocolID(topic + p2.Encoding().ProtocolSuffix())
streamHandlerFn := func(stream network.Stream) { streamHandlerFn := func(stream network.Stream) {
assert.NoError(t, stream.Close()) assert.NoError(t, stream.Close())
@@ -603,15 +618,15 @@ func TestBlocksFetcher_RequestBlocksRateLimitingLocks(t *testing.T) {
} }
func TestBlocksFetcher_WaitForBandwidth(t *testing.T) { func TestBlocksFetcher_WaitForBandwidth(t *testing.T) {
p1 := p2pt.NewTestP2P(t) p1 := p2ptest.NewTestP2P(t)
p2 := p2pt.NewTestP2P(t) p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2) p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected") require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
req := &ethpb.BeaconBlocksByRangeRequest{ req := &ethpb.BeaconBlocksByRangeRequest{
Count: 64, Count: 64,
} }
topic := p2pm.RPCBlocksByRangeTopicV1 topic := p2p.RPCBlocksByRangeTopicV1
protocol := libp2pcore.ProtocolID(topic + p2.Encoding().ProtocolSuffix()) protocol := libp2pcore.ProtocolID(topic + p2.Encoding().ProtocolSuffix())
streamHandlerFn := func(stream network.Stream) { streamHandlerFn := func(stream network.Stream) {
assert.NoError(t, stream.Close()) assert.NoError(t, stream.Close())
@@ -639,7 +654,7 @@ func TestBlocksFetcher_WaitForBandwidth(t *testing.T) {
} }
func TestBlocksFetcher_requestBlocksFromPeerReturningInvalidBlocks(t *testing.T) { func TestBlocksFetcher_requestBlocksFromPeerReturningInvalidBlocks(t *testing.T) {
p1 := p2pt.NewTestP2P(t) p1 := p2ptest.NewTestP2P(t)
tests := []struct { tests := []struct {
name string name string
req *ethpb.BeaconBlocksByRangeRequest req *ethpb.BeaconBlocksByRangeRequest
@@ -884,7 +899,7 @@ func TestBlocksFetcher_requestBlocksFromPeerReturningInvalidBlocks(t *testing.T)
}, },
} }
topic := p2pm.RPCBlocksByRangeTopicV1 topic := p2p.RPCBlocksByRangeTopicV1
protocol := libp2pcore.ProtocolID(topic + p1.Encoding().ProtocolSuffix()) protocol := libp2pcore.ProtocolID(topic + p1.Encoding().ProtocolSuffix())
ctx, cancel := context.WithCancel(context.Background()) ctx, cancel := context.WithCancel(context.Background())
@@ -894,7 +909,7 @@ func TestBlocksFetcher_requestBlocksFromPeerReturningInvalidBlocks(t *testing.T)
for _, tt := range tests { for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) { t.Run(tt.name, func(t *testing.T) {
p2 := p2pt.NewTestP2P(t) p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2) p1.Connect(p2)
p2.BHost.SetStreamHandler(protocol, tt.handlerGenFn(tt.req)) p2.BHost.SetStreamHandler(protocol, tt.handlerGenFn(tt.req))
@@ -1027,17 +1042,11 @@ func TestBlobRequest(t *testing.T) {
} }
func TestCountCommitments(t *testing.T) { func TestCountCommitments(t *testing.T) {
// no blocks
// blocks before retention start filtered
// blocks without commitments filtered
// pre-deneb filtered
// variety of commitment counts are accurate, from 1 to max
type testcase struct { type testcase struct {
name string name string
bwb func(t *testing.T, c testcase) []blocks.BlockWithROBlobs bwb func(t *testing.T, c testcase) []blocks.BlockWithROBlobs
numBlocks int retStart primitives.Slot
retStart primitives.Slot resCount int
resCount int
} }
cases := []testcase{ cases := []testcase{
{ {
@@ -1208,7 +1217,7 @@ func TestVerifyAndPopulateBlobs(t *testing.T) {
} }
require.Equal(t, len(blobs), len(expectedCommits)) require.Equal(t, len(blobs), len(expectedCommits))
bwb, err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil) err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil)
require.NoError(t, err) require.NoError(t, err)
for _, bw := range bwb { for _, bw := range bwb {
commits, err := bw.Block.Block().Body().BlobKzgCommitments() commits, err := bw.Block.Block().Body().BlobKzgCommitments()
@@ -1229,7 +1238,7 @@ func TestVerifyAndPopulateBlobs(t *testing.T) {
}) })
t.Run("missing blobs", func(t *testing.T) { t.Run("missing blobs", func(t *testing.T) {
bwb, blobs := testSequenceBlockWithBlob(t, 10) bwb, blobs := testSequenceBlockWithBlob(t, 10)
_, err := verifyAndPopulateBlobs(bwb, blobs[1:], testReqFromResp(bwb), nil) err := verifyAndPopulateBlobs(bwb, blobs[1:], testReqFromResp(bwb), nil)
require.ErrorIs(t, err, errMissingBlobsForBlockCommitments) require.ErrorIs(t, err, errMissingBlobsForBlockCommitments)
}) })
t.Run("no blobs for last block", func(t *testing.T) { t.Run("no blobs for last block", func(t *testing.T) {
@@ -1241,7 +1250,7 @@ func TestVerifyAndPopulateBlobs(t *testing.T) {
blobs = blobs[0 : len(blobs)-len(cmts)] blobs = blobs[0 : len(blobs)-len(cmts)]
lastBlk, _ = util.GenerateTestDenebBlockWithSidecar(t, lastBlk.Block().ParentRoot(), lastBlk.Block().Slot(), 0) lastBlk, _ = util.GenerateTestDenebBlockWithSidecar(t, lastBlk.Block().ParentRoot(), lastBlk.Block().Slot(), 0)
bwb[lastIdx].Block = lastBlk bwb[lastIdx].Block = lastBlk
_, err = verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil) err = verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil)
require.NoError(t, err) require.NoError(t, err)
}) })
t.Run("blobs not copied if all locally available", func(t *testing.T) { t.Run("blobs not copied if all locally available", func(t *testing.T) {
@@ -1255,7 +1264,7 @@ func TestVerifyAndPopulateBlobs(t *testing.T) {
r7: {0, 1, 2, 3, 4, 5}, r7: {0, 1, 2, 3, 4, 5},
} }
bss := filesystem.NewMockBlobStorageSummarizer(t, onDisk) bss := filesystem.NewMockBlobStorageSummarizer(t, onDisk)
bwb, err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), bss) err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), bss)
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, 6, len(bwb[i1].Blobs)) require.Equal(t, 6, len(bwb[i1].Blobs))
require.Equal(t, 0, len(bwb[i7].Blobs)) require.Equal(t, 0, len(bwb[i7].Blobs))
@@ -1303,3 +1312,813 @@ func TestBlockFetcher_HasSufficientBandwidth(t *testing.T) {
} }
assert.Equal(t, 2, len(receivedPeers)) assert.Equal(t, 2, len(receivedPeers))
} }
func TestSortedSliceFromMap(t *testing.T) {
m := map[uint64]bool{1: true, 3: true, 2: true, 4: true}
expected := []uint64{1, 2, 3, 4}
actual := sortedSliceFromMap(m)
require.DeepSSZEqual(t, expected, actual)
}
type blockParams struct {
slot primitives.Slot
hasBlobs bool
}
func rootFromUint64(u uint64) [fieldparams.RootLength]byte {
var root [fieldparams.RootLength]byte
binary.LittleEndian.PutUint64(root[:], u)
return root
}
func createPeer(t *testing.T, privateKeyOffset int, custodyCount uint64) (*enr.Record, peer.ID) {
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(privateKeyOffset + i)
}
unmarshalledPrivateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
privateKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledPrivateKey)
require.NoError(t, err)
peerID, err := peer.IDFromPrivateKey(unmarshalledPrivateKey)
require.NoError(t, err)
record := &enr.Record{}
record.Set(peerdas.Csc(custodyCount))
record.Set(enode.Secp256k1(privateKey.PublicKey))
return record, peerID
}
func TestCustodyAllNeededColumns(t *testing.T) {
const dataColumnsCount = 31
p2p := p2ptest.NewTestP2P(t)
dataColumns := make(map[uint64]bool, dataColumnsCount)
for i := range dataColumnsCount {
dataColumns[uint64(i)] = true
}
custodyCounts := [...]uint64{4, 32, 4, 32}
peersID := make([]peer.ID, 0, len(custodyCounts))
for _, custodyCount := range custodyCounts {
peerRecord, peerID := createPeer(t, len(peersID), custodyCount)
peersID = append(peersID, peerID)
p2p.Peers().Add(peerRecord, peerID, nil, network.DirOutbound)
}
expected := []peer.ID{peersID[1], peersID[3]}
blocksFetcher := newBlocksFetcher(context.Background(), &blocksFetcherConfig{
p2p: p2p,
})
actual, err := blocksFetcher.custodyAllNeededColumns(peersID, dataColumns)
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
}
func TestCustodyColumns(t *testing.T) {
blocksFetcher := newBlocksFetcher(context.Background(), &blocksFetcherConfig{
p2p: p2ptest.NewTestP2P(t),
})
expected := map[uint64]bool{6: true, 38: true, 70: true, 102: true}
actual, err := blocksFetcher.custodyColumns()
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
for column := range expected {
require.Equal(t, true, actual[column])
}
}
func TestMinInt(t *testing.T) {
input := []int{1, 2, 3, 4, 5, 5, 4, 3, 2, 1}
const expected = 1
actual := minInt(input)
require.Equal(t, expected, actual)
}
func TestMaxInt(t *testing.T) {
input := []int{1, 2, 3, 4, 5, 5, 4, 3, 2, 1}
const expected = 5
actual := maxInt(input)
require.Equal(t, expected, actual)
}
// deterministicRandomness returns a deterministic 32-byte array derived from the seed
func deterministicRandomness(t *testing.T, seed int64) [32]byte {
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
require.NoError(t, err)
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// getRandFieldElement returns a serialized random field element in big-endian
func getRandFieldElement(t *testing.T, seed int64) [32]byte {
bytes := deterministicRandomness(t, seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// getRandBlob returns a random blob using the passed seed as entropy
func getRandBlob(t *testing.T, seed int64) kzg.Blob {
var blob kzg.Blob
for i := 0; i < len(blob); i += 32 {
fieldElementBytes := getRandFieldElement(t, seed+int64(i))
copy(blob[i:i+32], fieldElementBytes[:])
}
return blob
}
type (
responseParams struct {
slot primitives.Slot
columnIndex uint64
alterate bool
}
peerParams struct {
// Custody subnet count
csc uint64
// key: The DataColumnSidecarsByRangeRequest received on RPCDataColumnSidecarsByRangeTopicV1, stringified
// value: For each successive occurrence of that request, the list of (slot, column index) responses to send
toRespond map[string][][]responseParams
}
)
// createAndConnectPeer creates a peer and connects it to the p2p service.
// The peer will respond to the `RPCDataColumnSidecarsByRangeTopicV1` topic.
func createAndConnectPeer(
t *testing.T,
p2pService *p2ptest.TestP2P,
chainService *mock.ChainService,
dataColumnsSidecarFromSlot map[primitives.Slot][]*ethpb.DataColumnSidecar,
peerParams peerParams,
offset int,
) *p2ptest.TestP2P {
// Create the private key, depending on the offset.
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(offset + i)
}
privateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
// Create the peer.
peer := p2ptest.NewTestP2P(t, swarmt.OptPeerPrivateKey(privateKey))
// Create a call counter.
countFromRequest := make(map[string]int, len(peerParams.toRespond))
peer.SetStreamHandler(p2p.RPCDataColumnSidecarsByRangeTopicV1+"/ssz_snappy", func(stream network.Stream) {
// Decode the request.
req := new(ethpb.DataColumnSidecarsByRangeRequest)
err := peer.Encoding().DecodeWithMaxLength(stream, req)
require.NoError(t, err)
// Convert the request to a string.
reqString := req.String()
// Get the response to send.
items, ok := peerParams.toRespond[reqString]
require.Equal(t, true, ok)
for _, responseParams := range items[countFromRequest[reqString]] {
// Get data columns sidecars for this slot.
dataColumnsSidecar, ok := dataColumnsSidecarFromSlot[responseParams.slot]
require.Equal(t, true, ok)
// Get the data column sidecar.
dataColumn := dataColumnsSidecar[responseParams.columnIndex]
// Alter the data column if needed.
initialValue0, initialValue1 := dataColumn.DataColumn[0][0], dataColumn.DataColumn[0][1]
if responseParams.alterate {
dataColumn.DataColumn[0][0] = 0
dataColumn.DataColumn[0][1] = 0
}
// Send the response.
err := beaconsync.WriteDataColumnSidecarChunk(stream, chainService, p2pService.Encoding(), dataColumn)
require.NoError(t, err)
if responseParams.alterate {
// Restore the data column.
dataColumn.DataColumn[0][0] = initialValue0
dataColumn.DataColumn[0][1] = initialValue1
}
}
// Close the stream.
err = stream.Close()
require.NoError(t, err)
// Increment the call counter.
countFromRequest[reqString]++
})
// Create the record and set the custody count.
enr := &enr.Record{}
enr.Set(peerdas.Csc(peerParams.csc))
// Add the peer and connect it.
p2pService.Peers().Add(enr, peer.PeerID(), nil, network.DirOutbound)
p2pService.Peers().SetConnectionState(peer.PeerID(), peers.PeerConnected)
p2pService.Connect(peer)
return peer
}
func defaultMockChain(t *testing.T, currentSlot uint64) (*mock.ChainService, *startup.Clock) {
de := params.BeaconConfig().DenebForkEpoch
df, err := forks.Fork(de)
require.NoError(t, err)
denebBuffer := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest + 1000
ce := de + denebBuffer
fe := ce - 2
cs, err := slots.EpochStart(ce)
require.NoError(t, err)
now := time.Now()
genOffset := primitives.Slot(params.BeaconConfig().SecondsPerSlot) * cs
genesisTime := now.Add(-1 * time.Second * time.Duration(int64(genOffset)))
clock := startup.NewClock(genesisTime, [32]byte{}, startup.WithNower(
func() time.Time {
return genesisTime.Add(time.Duration(currentSlot*params.BeaconConfig().SecondsPerSlot) * time.Second)
},
))
chain := &mock.ChainService{
FinalizedCheckPoint: &ethpb.Checkpoint{Epoch: fe},
Fork: df,
}
return chain, clock
}
func TestFirstLastIndices(t *testing.T) {
missingColumnsFromRoot := map[[fieldparams.RootLength]byte]map[uint64]bool{
rootFromUint64(42): {1: true, 3: true, 5: true},
rootFromUint64(43): {2: true, 4: true, 6: true},
rootFromUint64(44): {7: true, 8: true, 9: true},
}
indicesFromRoot := map[[fieldparams.RootLength]byte][]int{
rootFromUint64(42): {5, 6, 7},
rootFromUint64(43): {8, 9},
rootFromUint64(44): {3, 2, 1},
}
const (
expectedFirst = 1
expectedLast = 9
)
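// Walk-through (assuming firstLastIndices scans indicesFromRoot for every root that still has missing
// columns): the candidate indices are {5, 6, 7}, {8, 9} and {3, 2, 1}, so the smallest is 1 and the
// largest is 9, matching expectedFirst and expectedLast asserted below.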
actualFirst, actualLast := firstLastIndices(missingColumnsFromRoot, indicesFromRoot)
require.Equal(t, expectedFirst, actualFirst)
require.Equal(t, expectedLast, actualLast)
}
func TestFetchDataColumnsFromPeers(t *testing.T) {
const blobsCount = 6
testCases := []struct {
// Name of the test case.
name string
// INPUTS
// ------
// Fork epochs.
denebForkEpoch primitives.Epoch
eip7954ForkEpoch primitives.Epoch
// Current slot.
currentSlot uint64
// Blocks with blobs parameters.
blocksParams []blockParams
// - Position in the slice: Stored data columns in the store for the
// nth position in the input bwb.
// - Key : Column index
// - Value : Always true
storedDataColumns []map[int]bool
peersParams []peerParams
// OUTPUTS
// -------
// Data columns that should be added to `bwb`.
addedRODataColumns [][]int
}{
{
name: "Deneb fork epoch not reached",
denebForkEpoch: primitives.Epoch(math.MaxUint64),
blocksParams: []blockParams{
{slot: 1, hasBlobs: true},
{slot: 2, hasBlobs: true},
{slot: 3, hasBlobs: true},
},
addedRODataColumns: [][]int{nil, nil, nil},
},
{
name: "All blocks are before EIP-7954 fork epoch",
denebForkEpoch: 0,
eip7954ForkEpoch: 1,
currentSlot: 40,
blocksParams: []blockParams{
{slot: 25, hasBlobs: false},
{slot: 26, hasBlobs: false},
{slot: 27, hasBlobs: false},
{slot: 28, hasBlobs: false},
},
addedRODataColumns: [][]int{nil, nil, nil, nil},
},
{
name: "All blocks with commitments before are EIP-7954 fork epoch",
denebForkEpoch: 0,
eip7954ForkEpoch: 1,
currentSlot: 40,
blocksParams: []blockParams{
{slot: 25, hasBlobs: false},
{slot: 26, hasBlobs: true},
{slot: 27, hasBlobs: true},
{slot: 32, hasBlobs: false},
{slot: 33, hasBlobs: false},
},
addedRODataColumns: [][]int{nil, nil, nil, nil, nil},
},
{
name: "Some blocks with blobs but without any missing data columns",
denebForkEpoch: 0,
eip7954ForkEpoch: 1,
currentSlot: 40,
blocksParams: []blockParams{
{slot: 25, hasBlobs: false},
{slot: 26, hasBlobs: true},
{slot: 27, hasBlobs: true},
{slot: 32, hasBlobs: false},
{slot: 33, hasBlobs: true},
},
storedDataColumns: []map[int]bool{
nil,
nil,
nil,
nil,
{6: true, 38: true, 70: true, 102: true},
},
addedRODataColumns: [][]int{nil, nil, nil, nil, nil},
},
{
name: "Some blocks with blobs with missing data columns - one round needed",
denebForkEpoch: 0,
eip7954ForkEpoch: 1,
currentSlot: 40,
blocksParams: []blockParams{
{slot: 25, hasBlobs: false},
{slot: 27, hasBlobs: true},
{slot: 32, hasBlobs: false},
{slot: 33, hasBlobs: true},
{slot: 34, hasBlobs: true},
{slot: 35, hasBlobs: false},
{slot: 36, hasBlobs: true},
{slot: 37, hasBlobs: true},
{slot: 38, hasBlobs: true},
{slot: 39, hasBlobs: false},
},
storedDataColumns: []map[int]bool{
nil,
nil,
nil,
{6: true, 38: true, 70: true, 102: true},
{6: true, 70: true},
nil,
{6: true, 38: true, 70: true, 102: true},
{38: true, 102: true},
{6: true, 38: true, 70: true, 102: true},
nil,
},
peersParams: []peerParams{
{
csc: 32,
toRespond: map[string][][]responseParams{
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 34,
Count: 4,
Columns: []uint64{6, 38, 70, 102},
}).String(): {
{
{slot: 34, columnIndex: 6},
{slot: 34, columnIndex: 38},
{slot: 34, columnIndex: 70},
{slot: 34, columnIndex: 102},
{slot: 36, columnIndex: 6},
{slot: 36, columnIndex: 38},
{slot: 36, columnIndex: 70},
{slot: 36, columnIndex: 102},
{slot: 37, columnIndex: 6},
{slot: 37, columnIndex: 38},
{slot: 37, columnIndex: 70},
{slot: 37, columnIndex: 102},
},
},
},
},
{
csc: 32,
toRespond: map[string][][]responseParams{
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 34,
Count: 4,
Columns: []uint64{6, 38, 70, 102},
}).String(): {
{
{slot: 34, columnIndex: 6},
{slot: 34, columnIndex: 38},
{slot: 34, columnIndex: 70},
{slot: 34, columnIndex: 102},
{slot: 36, columnIndex: 6},
{slot: 36, columnIndex: 38},
{slot: 36, columnIndex: 70},
{slot: 36, columnIndex: 102},
{slot: 37, columnIndex: 6},
{slot: 37, columnIndex: 38},
{slot: 37, columnIndex: 70},
{slot: 37, columnIndex: 102},
},
},
},
},
{
csc: 32,
toRespond: map[string][][]responseParams{
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 34,
Count: 4,
Columns: []uint64{6, 38, 70, 102},
}).String(): {
{},
},
},
},
{
csc: 32,
toRespond: map[string][][]responseParams{
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 34,
Count: 4,
Columns: []uint64{6, 38, 70, 102},
}).String(): {
{},
},
},
},
},
addedRODataColumns: [][]int{
nil,
nil,
nil,
nil,
{38, 102},
nil,
nil,
{6, 70},
nil,
nil,
},
},
{
name: "Some blocks with blobs with missing data columns - several rounds needed",
denebForkEpoch: 0,
eip7954ForkEpoch: 1,
currentSlot: 40,
blocksParams: []blockParams{
{slot: 25, hasBlobs: false},
{slot: 27, hasBlobs: true},
{slot: 32, hasBlobs: false},
{slot: 33, hasBlobs: true},
{slot: 34, hasBlobs: true},
{slot: 35, hasBlobs: false},
{slot: 37, hasBlobs: true},
{slot: 38, hasBlobs: true},
{slot: 39, hasBlobs: false},
},
storedDataColumns: []map[int]bool{
nil,
nil,
nil,
{6: true, 38: true, 70: true, 102: true},
{6: true, 70: true},
nil,
{38: true, 102: true},
{6: true, 38: true, 70: true, 102: true},
nil,
},
peersParams: []peerParams{
{
csc: 32,
toRespond: map[string][][]responseParams{
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 34,
Count: 4,
Columns: []uint64{6, 38, 70, 102},
}).String(): {
{
{slot: 34, columnIndex: 38},
},
},
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 34,
Count: 4,
Columns: []uint64{6, 70, 102},
}).String(): {
{
{slot: 34, columnIndex: 102},
},
},
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 37,
Count: 1,
Columns: []uint64{6, 70},
}).String(): {
{
{slot: 37, columnIndex: 6},
{slot: 37, columnIndex: 70},
},
},
},
},
{csc: 0},
{csc: 0},
},
addedRODataColumns: [][]int{
nil,
nil,
nil,
nil,
{38, 102},
nil,
{6, 70},
nil,
nil,
},
},
{
name: "Some blocks with blobs with missing data columns - no peers response at first",
denebForkEpoch: 0,
eip7954ForkEpoch: 1,
currentSlot: 40,
blocksParams: []blockParams{
{slot: 38, hasBlobs: true},
},
storedDataColumns: []map[int]bool{
{38: true, 102: true},
},
peersParams: []peerParams{
{
csc: 32,
toRespond: map[string][][]responseParams{
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 38,
Count: 1,
Columns: []uint64{6, 70},
}).String(): {
nil,
{
{slot: 38, columnIndex: 6},
{slot: 38, columnIndex: 70},
},
},
},
},
{
csc: 32,
toRespond: map[string][][]responseParams{
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 38,
Count: 1,
Columns: []uint64{6, 70},
}).String(): {
nil,
{
{slot: 38, columnIndex: 6},
{slot: 38, columnIndex: 70},
},
},
},
},
},
addedRODataColumns: [][]int{
{6, 70},
},
},
{
name: "Some blocks with blobs with missing data columns - first response is invalid",
denebForkEpoch: 0,
eip7954ForkEpoch: 1,
currentSlot: 40,
blocksParams: []blockParams{
{slot: 38, hasBlobs: true},
},
storedDataColumns: []map[int]bool{
{38: true, 102: true},
},
peersParams: []peerParams{
{
csc: 32,
toRespond: map[string][][]responseParams{
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 38,
Count: 1,
Columns: []uint64{6, 70},
}).String(): {
{
{slot: 38, columnIndex: 6, alterate: true},
{slot: 38, columnIndex: 70},
},
},
(&ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 38,
Count: 1,
Columns: []uint64{6},
}).String(): {
{
{slot: 38, columnIndex: 6},
},
},
},
},
},
addedRODataColumns: [][]int{
{70, 6},
},
},
}
for _, tc := range testCases {
// Consistency checks.
require.Equal(t, len(tc.blocksParams), len(tc.addedRODataColumns))
// Create a context.
ctx := context.Background()
// Initialize the trusted setup.
err := kzg.Start()
require.NoError(t, err)
// Create blocks, RO data columns and data columns sidecar from slot.
roBlocks := make([]blocks.ROBlock, len(tc.blocksParams))
roDatasColumns := make([][]blocks.RODataColumn, len(tc.blocksParams))
dataColumnsSidecarFromSlot := make(map[primitives.Slot][]*ethpb.DataColumnSidecar, len(tc.blocksParams))
for i, blockParams := range tc.blocksParams {
pbSignedBeaconBlock := util.NewBeaconBlockDeneb()
pbSignedBeaconBlock.Block.Slot = blockParams.slot
if blockParams.hasBlobs {
blobs := make([]kzg.Blob, blobsCount)
blobKzgCommitments := make([][]byte, blobsCount)
for j := range blobsCount {
blob := getRandBlob(t, int64(i+j))
blobs[j] = blob
blobKzgCommitment, err := kzg.BlobToKZGCommitment(&blob)
require.NoError(t, err)
blobKzgCommitments[j] = blobKzgCommitment[:]
}
pbSignedBeaconBlock.Block.Body.BlobKzgCommitments = blobKzgCommitments
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbSignedBeaconBlock)
require.NoError(t, err)
pbDataColumnsSidecar, err := peerdas.DataColumnSidecars(signedBeaconBlock, blobs)
require.NoError(t, err)
dataColumnsSidecarFromSlot[blockParams.slot] = pbDataColumnsSidecar
roDataColumns := make([]blocks.RODataColumn, 0, len(pbDataColumnsSidecar))
for _, pbDataColumnSidecar := range pbDataColumnsSidecar {
roDataColumn, err := blocks.NewRODataColumn(pbDataColumnSidecar)
require.NoError(t, err)
roDataColumns = append(roDataColumns, roDataColumn)
}
roDatasColumns[i] = roDataColumns
}
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbSignedBeaconBlock)
require.NoError(t, err)
roBlock, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
roBlocks[i] = roBlock
}
// Set the Deneb fork epoch.
params.BeaconConfig().DenebForkEpoch = tc.denebForkEpoch
// Set the EIP-7594 fork epoch.
params.BeaconConfig().Eip7594ForkEpoch = tc.eip7954ForkEpoch
// Save the blocks in the store.
storage := make(map[[fieldparams.RootLength]byte][]int)
for index, columns := range tc.storedDataColumns {
root := roBlocks[index].Root()
columnsSlice := make([]int, 0, len(columns))
for column := range columns {
columnsSlice = append(columnsSlice, column)
}
storage[root] = columnsSlice
}
blobStorageSummarizer := filesystem.NewMockBlobStorageSummarizer(t, storage)
// Create a chain and a clock.
chain, clock := defaultMockChain(t, tc.currentSlot)
// Create the P2P service.
p2p := p2ptest.NewTestP2P(t)
// Connect the peers.
peers := make([]*p2ptest.TestP2P, 0, len(tc.peersParams))
for i, peerParams := range tc.peersParams {
peer := createAndConnectPeer(t, p2p, chain, dataColumnsSidecarFromSlot, peerParams, i)
peers = append(peers, peer)
}
peersID := make([]peer.ID, 0, len(peers))
for _, peer := range peers {
peerID := peer.PeerID()
peersID = append(peersID, peerID)
}
// Create `bwb`.
bwb := make([]blocks.BlockWithROBlobs, 0, len(tc.blocksParams))
for _, roBlock := range roBlocks {
bwb = append(bwb, blocks.BlockWithROBlobs{Block: roBlock})
}
// Create the block fetcher.
blocksFetcher := newBlocksFetcher(ctx, &blocksFetcherConfig{
clock: clock,
ctxMap: map[[4]byte]int{{245, 165, 253, 66}: version.Deneb},
p2p: p2p,
bs: blobStorageSummarizer,
})
// Fetch the data columns from the peers.
err = blocksFetcher.fetchDataColumnsFromPeers(ctx, bwb, peersID)
require.NoError(t, err)
// Check the added RO data columns.
for i := range bwb {
blockWithROBlobs := bwb[i]
addedRODataColumns := tc.addedRODataColumns[i]
if addedRODataColumns == nil {
require.Equal(t, 0, len(blockWithROBlobs.Columns))
continue
}
expectedRODataColumns := make([]blocks.RODataColumn, 0, len(tc.addedRODataColumns[i]))
for _, column := range addedRODataColumns {
roDataColumn := roDatasColumns[i][column]
expectedRODataColumns = append(expectedRODataColumns, roDataColumn)
}
actualRODataColumns := blockWithROBlobs.Columns
require.DeepSSZEqual(t, expectedRODataColumns, actualRODataColumns)
}
}
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/libp2p/go-libp2p/core/peer" "github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors" "github.com/pkg/errors"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
p2pTypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types" p2pTypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags" "github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
@@ -236,18 +237,18 @@ func (f *blocksFetcher) findForkWithPeer(ctx context.Context, pid peer.ID, slot
Count: reqCount,
Step: 1,
}
-blocks, err := f.requestBlocks(ctx, req, pid)
+reqBlocks, err := f.requestBlocks(ctx, req, pid)
if err != nil {
return nil, fmt.Errorf("cannot fetch blocks: %w", err)
}
-if len(blocks) == 0 {
+if len(reqBlocks) == 0 {
return nil, errNoAlternateBlocks
}
// If the first block is not connected to the current canonical chain, we'll stop processing this batch.
// Instead, we'll work backwards from the first block until we find a common ancestor,
// and then begin processing from there.
-first := blocks[0]
+first := reqBlocks[0]
if !f.chain.HasBlock(ctx, first.Block().ParentRoot()) {
// Backtrack on a root, to find a common ancestor from which we can resume syncing.
fork, err := f.findAncestor(ctx, pid, first)
@@ -260,8 +261,8 @@ func (f *blocksFetcher) findForkWithPeer(ctx context.Context, pid peer.ID, slot
// Traverse blocks, and if we've got one that doesn't have parent in DB, backtrack on it.
// Note that we start from the second element in the array, because we know that the first element is in the db,
// otherwise we would have gone into the findAncestor early return path above.
-for i := 1; i < len(blocks); i++ {
-block := blocks[i]
+for i := 1; i < len(reqBlocks); i++ {
+block := reqBlocks[i]
parentRoot := block.Block().ParentRoot()
// Step through blocks until we find one that is not in the chain. The goal is to find the point where the
// chain observed in the peer diverges from the locally known chain, and then collect up the remainder of the
@@ -274,16 +275,22 @@ func (f *blocksFetcher) findForkWithPeer(ctx context.Context, pid peer.ID, slot
"slot": block.Block().Slot(), "slot": block.Block().Slot(),
"root": fmt.Sprintf("%#x", parentRoot), "root": fmt.Sprintf("%#x", parentRoot),
}).Debug("Block with unknown parent root has been found") }).Debug("Block with unknown parent root has been found")
altBlocks, err := sortedBlockWithVerifiedBlobSlice(blocks[i-1:]) bwb, err := sortedBlockWithVerifiedBlobSlice(reqBlocks[i-1:])
if err != nil { if err != nil {
return nil, errors.Wrap(err, "invalid blocks received in findForkWithPeer") return nil, errors.Wrap(err, "invalid blocks received in findForkWithPeer")
} }
if coreTime.PeerDASIsActive(block.Block().Slot()) {
if err := f.fetchDataColumnsFromPeers(ctx, bwb, []peer.ID{pid}); err != nil {
return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findForkWithPeer")
}
} else {
if err = f.fetchBlobsFromPeer(ctx, bwb, pid, []peer.ID{pid}); err != nil {
return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findForkWithPeer")
}
}
// We need to fetch the blobs for the given alt-chain if any exist, so that we can try to verify and import // We need to fetch the blobs for the given alt-chain if any exist, so that we can try to verify and import
// the blocks. // the blocks.
bwb, err := f.fetchBlobsFromPeer(ctx, altBlocks, pid, []peer.ID{pid})
if err != nil {
return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findForkWithPeer")
}
// The caller will use the BlocksWith VerifiedBlobs in bwb as the starting point for // The caller will use the BlocksWith VerifiedBlobs in bwb as the starting point for
// round-robin syncing the alternate chain. // round-robin syncing the alternate chain.
return &forkData{peer: pid, bwb: bwb}, nil return &forkData{peer: pid, bwb: bwb}, nil
@@ -302,9 +309,14 @@ func (f *blocksFetcher) findAncestor(ctx context.Context, pid peer.ID, b interfa
if err != nil {
return nil, errors.Wrap(err, "received invalid blocks in findAncestor")
}
-bwb, err = f.fetchBlobsFromPeer(ctx, bwb, pid, []peer.ID{pid})
-if err != nil {
-return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findAncestor")
+if coreTime.PeerDASIsActive(b.Block().Slot()) {
+if err := f.fetchDataColumnsFromPeers(ctx, bwb, []peer.ID{pid}); err != nil {
+return nil, errors.Wrap(err, "unable to retrieve columns for blocks found in findAncestor")
+}
+} else {
+if err = f.fetchBlobsFromPeer(ctx, bwb, pid, []peer.ID{pid}); err != nil {
+return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findAncestor")
+}
}
return &forkData{
peer: pid,

View File

@@ -9,6 +9,7 @@ import (
"github.com/libp2p/go-libp2p/core/peer" "github.com/libp2p/go-libp2p/core/peer"
"github.com/paulbellamy/ratecounter" "github.com/paulbellamy/ratecounter"
"github.com/pkg/errors" "github.com/pkg/errors"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition" "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das" "github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem" "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
@@ -172,27 +173,52 @@ func (s *Service) processFetchedDataRegSync(
if len(bwb) == 0 {
return
}
-bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
-avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
-batchFields := logrus.Fields{
-"firstSlot": data.bwb[0].Block.Block().Slot(),
-"firstUnprocessed": bwb[0].Block.Block().Slot(),
-}
-for _, b := range bwb {
-if err := avs.Persist(s.clock.CurrentSlot(), b.Blobs...); err != nil {
-log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Batch failure due to BlobSidecar issues")
-return
-}
-if err := s.processBlock(ctx, genesis, b, s.cfg.Chain.ReceiveBlock, avs); err != nil {
-switch {
-case errors.Is(err, errParentDoesNotExist):
-log.WithFields(batchFields).WithField("missingParent", fmt.Sprintf("%#x", b.Block.Block().ParentRoot())).
-WithFields(syncFields(b.Block)).Debug("Could not process batch blocks due to missing parent")
-return
-default:
-log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Block processing failure")
-return
-}
-}
+if coreTime.PeerDASIsActive(startSlot) {
+avs := das.NewLazilyPersistentStoreColumn(s.cfg.BlobStorage, emptyVerifier{}, s.cfg.P2P.NodeID())
+batchFields := logrus.Fields{
+"firstSlot": data.bwb[0].Block.Block().Slot(),
+"firstUnprocessed": bwb[0].Block.Block().Slot(),
+}
+for _, b := range bwb {
+if err := avs.PersistColumns(s.clock.CurrentSlot(), b.Columns...); err != nil {
+log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Batch failure due to DataColumnSidecar issues")
+return
+}
+if err := s.processBlock(ctx, genesis, b, s.cfg.Chain.ReceiveBlock, avs); err != nil {
+switch {
+case errors.Is(err, errParentDoesNotExist):
+log.WithFields(batchFields).WithField("missingParent", fmt.Sprintf("%#x", b.Block.Block().ParentRoot())).
+WithFields(syncFields(b.Block)).Debug("Could not process batch blocks due to missing parent")
+return
+default:
+log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Block processing failure")
+return
+}
+}
+}
+} else {
+bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
+avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
+batchFields := logrus.Fields{
+"firstSlot": data.bwb[0].Block.Block().Slot(),
+"firstUnprocessed": bwb[0].Block.Block().Slot(),
+}
+for _, b := range bwb {
+if err := avs.Persist(s.clock.CurrentSlot(), b.Blobs...); err != nil {
+log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Batch failure due to BlobSidecar issues")
+return
+}
+if err := s.processBlock(ctx, genesis, b, s.cfg.Chain.ReceiveBlock, avs); err != nil {
+switch {
+case errors.Is(err, errParentDoesNotExist):
+log.WithFields(batchFields).WithField("missingParent", fmt.Sprintf("%#x", b.Block.Block().ParentRoot())).
+WithFields(syncFields(b.Block)).Debug("Could not process batch blocks due to missing parent")
+return
+default:
+log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Block processing failure")
+return
+}
+}
+}
}
}
}
@@ -330,20 +356,34 @@ func (s *Service) processBatchedBlocks(ctx context.Context, genesis time.Time,
return fmt.Errorf("%w: %#x (in processBatchedBlocks, slot=%d)", return fmt.Errorf("%w: %#x (in processBatchedBlocks, slot=%d)",
errParentDoesNotExist, first.Block().ParentRoot(), first.Block().Slot()) errParentDoesNotExist, first.Block().ParentRoot(), first.Block().Slot())
} }
var aStore das.AvailabilityStore
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements) if coreTime.PeerDASIsActive(first.Block().Slot()) {
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv) avs := das.NewLazilyPersistentStoreColumn(s.cfg.BlobStorage, emptyVerifier{}, s.cfg.P2P.NodeID())
s.logBatchSyncStatus(genesis, first, len(bwb)) s.logBatchSyncStatus(genesis, first, len(bwb))
for _, bb := range bwb { for _, bb := range bwb {
if len(bb.Blobs) == 0 { if len(bb.Columns) == 0 {
continue continue
}
if err := avs.PersistColumns(s.clock.CurrentSlot(), bb.Columns...); err != nil {
return err
}
} }
if err := avs.Persist(s.clock.CurrentSlot(), bb.Blobs...); err != nil { aStore = avs
return err } else {
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
s.logBatchSyncStatus(genesis, first, len(bwb))
for _, bb := range bwb {
if len(bb.Blobs) == 0 {
continue
}
if err := avs.Persist(s.clock.CurrentSlot(), bb.Blobs...); err != nil {
return err
}
} }
aStore = avs
} }
return bFunc(ctx, blocks.BlockWithROBlobsSlice(bwb).ROBlocks(), aStore)
return bFunc(ctx, blocks.BlockWithROBlobsSlice(bwb).ROBlocks(), avs)
} }
// updatePeerScorerStats adjusts monitored metrics for a peer. // updatePeerScorerStats adjusts monitored metrics for a peer.
@@ -380,3 +420,15 @@ func (s *Service) isProcessedBlock(ctx context.Context, blk blocks.ROBlock) bool
}
return false
}
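// emptyVerifier is a pass-through verifier: its VerifiedRODataColumns wraps every supplied column
// as verified without performing any KZG or inclusion-proof checks.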
type emptyVerifier struct {
}
func (_ emptyVerifier) VerifiedRODataColumns(_ context.Context, _ blocks.ROBlock, cols []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error) {
var verCols []blocks.VerifiedRODataColumn
for _, col := range cols {
vCol := blocks.NewVerifiedRODataColumn(col)
verCols = append(verCols, vCol)
}
return verCols, nil
}

View File

@@ -11,10 +11,14 @@ import (
"github.com/libp2p/go-libp2p/core/peer" "github.com/libp2p/go-libp2p/core/peer"
"github.com/paulbellamy/ratecounter" "github.com/paulbellamy/ratecounter"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/prysmaticlabs/prysm/v5/async/abool" "github.com/prysmaticlabs/prysm/v5/async/abool"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain" "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
blockfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/block" blockfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/block"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state" statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das" "github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db" "github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem" "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
@@ -32,7 +36,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/runtime/version" "github.com/prysmaticlabs/prysm/v5/runtime/version"
prysmTime "github.com/prysmaticlabs/prysm/v5/time" prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots" "github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
) )
var _ runtime.Service = (*Service)(nil) var _ runtime.Service = (*Service)(nil)
@@ -184,9 +187,16 @@ func (s *Service) Start() {
log.WithError(err).Error("Error waiting for minimum number of peers") log.WithError(err).Error("Error waiting for minimum number of peers")
return return
} }
if err := s.fetchOriginBlobs(peers); err != nil { if coreTime.PeerDASIsActive(s.cfg.Chain.HeadSlot()) {
log.WithError(err).Error("Failed to fetch missing blobs for checkpoint origin") if err := s.fetchOriginColumns(peers); err != nil {
return log.WithError(err).Error("Failed to fetch missing columns for checkpoint origin")
return
}
} else {
if err := s.fetchOriginBlobs(peers); err != nil {
log.WithError(err).Error("Failed to fetch missing blobs for checkpoint origin")
return
}
} }
if err := s.roundRobinSync(gt); err != nil { if err := s.roundRobinSync(gt); err != nil {
if errors.Is(s.ctx.Err(), context.Canceled) { if errors.Is(s.ctx.Err(), context.Canceled) {
@@ -306,6 +316,56 @@ func missingBlobRequest(blk blocks.ROBlock, store *filesystem.BlobStorage) (p2pt
return req, nil
}
func (s *Service) missingColumnRequest(roBlock blocks.ROBlock, store *filesystem.BlobStorage) (p2ptypes.DataColumnSidecarsByRootReq, error) {
// No columns for pre-Deneb blocks.
if roBlock.Version() < version.Deneb {
return nil, nil
}
// Get the block root.
blockRoot := roBlock.Root()
// Get the commitments from the block.
commitments, err := roBlock.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "failed to get blob KZG commitments")
}
// Return early if there are no commitments.
if len(commitments) == 0 {
return nil, nil
}
// Check which columns are already on disk.
storedColumns, err := store.ColumnIndices(blockRoot)
if err != nil {
return nil, errors.Wrapf(err, "error checking existing blobs for checkpoint sync block root %#x", blockRoot)
}
// Get our node ID.
nodeID := s.cfg.P2P.NodeID()
// Get the custodied columns.
custodiedColumns, err := peerdas.CustodyColumns(nodeID, peerdas.CustodySubnetCount())
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
// Build blob sidecars by root requests based on missing columns.
req := make(p2ptypes.DataColumnSidecarsByRootReq, 0, len(commitments))
for columnIndex := range custodiedColumns {
isColumnAvailable := storedColumns[columnIndex]
if !isColumnAvailable {
req = append(req, &eth.DataColumnIdentifier{
BlockRoot: blockRoot[:],
ColumnIndex: columnIndex,
})
}
}
return req, nil
}
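// Example (hypothetical values): if peerdas.CustodyColumns reports custody of columns {6, 38, 70, 102}
// and store.ColumnIndices(blockRoot) shows only {6, 70} on disk, the loop above emits two
// DataColumnIdentifier entries for this blockRoot, one for column 38 and one for column 102.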
func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
r, err := s.cfg.DB.OriginCheckpointBlockRoot(s.ctx)
if errors.Is(err, db.ErrNotFoundOriginBlockRoot) {
@@ -356,6 +416,59 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
return fmt.Errorf("no connected peer able to provide blobs for checkpoint sync block %#x", r) return fmt.Errorf("no connected peer able to provide blobs for checkpoint sync block %#x", r)
} }
func (s *Service) fetchOriginColumns(pids []peer.ID) error {
r, err := s.cfg.DB.OriginCheckpointBlockRoot(s.ctx)
if errors.Is(err, db.ErrNotFoundOriginBlockRoot) {
return nil
}
blk, err := s.cfg.DB.Block(s.ctx, r)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", r)).Error("Block for checkpoint sync origin root not found in db")
return err
}
if !params.WithinDAPeriod(slots.ToEpoch(blk.Block().Slot()), slots.ToEpoch(s.clock.CurrentSlot())) {
return nil
}
rob, err := blocks.NewROBlockWithRoot(blk, r)
if err != nil {
return err
}
req, err := s.missingColumnRequest(rob, s.cfg.BlobStorage)
if err != nil {
return err
}
if len(req) == 0 {
log.WithField("root", fmt.Sprintf("%#x", r)).Debug("All columns for checkpoint block are present")
return nil
}
shufflePeers(pids)
pids, err = s.cfg.P2P.GetValidCustodyPeers(pids)
if err != nil {
return err
}
for i := range pids {
sidecars, err := sync.SendDataColumnSidecarByRoot(s.ctx, s.clock, s.cfg.P2P, pids[i], s.ctxMap, &req)
if err != nil {
continue
}
if len(sidecars) != len(req) {
continue
}
avs := das.NewLazilyPersistentStoreColumn(s.cfg.BlobStorage, emptyVerifier{}, s.cfg.P2P.NodeID())
current := s.clock.CurrentSlot()
if err := avs.PersistColumns(current, sidecars...); err != nil {
return err
}
if err := avs.IsDataAvailable(s.ctx, current, rob); err != nil {
log.WithField("root", fmt.Sprintf("%#x", r)).WithField("peerID", pids[i]).Warn("Columns from peer for origin block were unusable")
continue
}
log.WithField("nColumns", len(sidecars)).WithField("root", fmt.Sprintf("%#x", r)).Info("Successfully downloaded blobs for checkpoint sync block")
return nil
}
return fmt.Errorf("no connected peer able to provide columns for checkpoint sync block %#x", r)
}
func shufflePeers(pids []peer.ID) {
rg := rand.NewGenerator()
rg.Shuffle(len(pids), func(i, j int) {

View File

@@ -12,6 +12,7 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async" "github.com/prysmaticlabs/prysm/v5/async"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain" "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types" p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks" "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -174,10 +175,8 @@ func (s *Service) getBlocksInQueue(slot primitives.Slot) []interfaces.ReadOnlySi
func (s *Service) removeBlockFromQueue(b interfaces.ReadOnlySignedBeaconBlock, blkRoot [32]byte) error {
s.pendingQueueLock.Lock()
defer s.pendingQueueLock.Unlock()
-if err := s.deleteBlockFromPendingQueue(b.Block().Slot(), b, blkRoot); err != nil {
-return err
-}
-return nil
+return s.deleteBlockFromPendingQueue(b.Block().Slot(), b, blkRoot)
}
// isBlockInQueue checks if a block's parent root is in the pending queue.
@@ -204,19 +203,40 @@ func (s *Service) processAndBroadcastBlock(ctx context.Context, b interfaces.Rea
}
}
-request, err := s.pendingBlobsRequestForBlock(blkRoot, b)
-if err != nil {
-return err
-}
-if len(request) > 0 {
-peers := s.getBestPeers()
-peerCount := len(peers)
-if peerCount == 0 {
-return errors.Wrapf(errNoPeersForPending, "block root=%#x", blkRoot)
-}
-if err := s.sendAndSaveBlobSidecars(ctx, request, peers[rand.NewGenerator().Int()%peerCount], b); err != nil {
-return err
-}
-}
+if coreTime.PeerDASIsActive(b.Block().Slot()) {
+request, err := s.pendingDataColumnRequestForBlock(blkRoot, b)
+if err != nil {
+return err
+}
+if len(request) > 0 {
+peers := s.getBestPeers()
+peers, err = s.cfg.p2p.GetValidCustodyPeers(peers)
+if err != nil {
+return err
+}
+peerCount := len(peers)
+if peerCount == 0 {
+return errors.Wrapf(errNoPeersForPending, "block root=%#x", blkRoot)
+}
+if err := s.sendAndSaveDataColumnSidecars(ctx, request, peers[rand.NewGenerator().Int()%peerCount], b); err != nil {
+return err
+}
+}
+} else {
+request, err := s.pendingBlobsRequestForBlock(blkRoot, b)
+if err != nil {
+return err
+}
+if len(request) > 0 {
+peers := s.getBestPeers()
+peerCount := len(peers)
+if peerCount == 0 {
+return errors.Wrapf(errNoPeersForPending, "block root=%#x", blkRoot)
+}
+if err := s.sendAndSaveBlobSidecars(ctx, request, peers[rand.NewGenerator().Int()%peerCount], b); err != nil {
+return err
+}
+}
+}
if err := s.cfg.chain.ReceiveBlock(ctx, b, blkRoot, nil); err != nil {

View File

@@ -7,12 +7,14 @@ import (
"github.com/libp2p/go-libp2p/core/network" "github.com/libp2p/go-libp2p/core/network"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/trailofbits/go-mutexasserts"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types" p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags" "github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket" leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket"
"github.com/sirupsen/logrus"
"github.com/trailofbits/go-mutexasserts"
) )
const defaultBurstLimit = 5 const defaultBurstLimit = 5
@@ -46,13 +48,18 @@ func newRateLimiter(p2pProvider p2p.P2P) *limiter {
allowedBlobsPerSecond := float64(flags.Get().BlobBatchLimit)
allowedBlobsBurst := int64(flags.Get().BlobBatchLimitBurstFactor * flags.Get().BlobBatchLimit)
+// Initialize data column limits.
+allowedDataColumnsPerSecond := float64(flags.Get().DataColumnBatchLimit * int(params.BeaconConfig().CustodyRequirement))
+allowedDataColumnsBurst := int64(flags.Get().DataColumnBatchLimitBurstFactor * flags.Get().DataColumnBatchLimit * int(params.BeaconConfig().CustodyRequirement))
// Set topic map for all rpc topics.
topicMap := make(map[string]*leakybucket.Collector, len(p2p.RPCTopicMappings))
// Goodbye Message
topicMap[addEncoding(p2p.RPCGoodByeTopicV1)] = leakybucket.NewCollector(1, 1, leakyBucketPeriod, false /* deleteEmptyBuckets */)
-// MetadataV0 Message
+// Metadata Message
topicMap[addEncoding(p2p.RPCMetaDataTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCMetaDataTopicV2)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
+topicMap[addEncoding(p2p.RPCMetaDataTopicV3)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
// Ping Message
topicMap[addEncoding(p2p.RPCPingTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
// Status Message
@@ -66,6 +73,9 @@ func newRateLimiter(p2pProvider p2p.P2P) *limiter {
// for BlobSidecarsByRoot and BlobSidecarsByRange
blobCollector := leakybucket.NewCollector(allowedBlobsPerSecond, allowedBlobsBurst, blockBucketPeriod, false)
+// for DataColumnSidecarsByRoot and DataColumnSidecarsByRange
+columnCollector := leakybucket.NewCollector(allowedDataColumnsPerSecond, allowedDataColumnsBurst, blockBucketPeriod, false)
// BlocksByRoots requests
topicMap[addEncoding(p2p.RPCBlocksByRootTopicV1)] = blockCollector
topicMap[addEncoding(p2p.RPCBlocksByRootTopicV2)] = blockCollectorV2
@@ -79,6 +89,11 @@ func newRateLimiter(p2pProvider p2p.P2P) *limiter {
// BlobSidecarsByRangeV1
topicMap[addEncoding(p2p.RPCBlobSidecarsByRangeTopicV1)] = blobCollector
+// DataColumnSidecarsByRootV1
+topicMap[addEncoding(p2p.RPCDataColumnSidecarsByRootTopicV1)] = columnCollector
+// DataColumnSidecarsByRangeV1
+topicMap[addEncoding(p2p.RPCDataColumnSidecarsByRangeTopicV1)] = columnCollector
// General topic for all rpc requests.
topicMap[rpcLimiterTopic] = leakybucket.NewCollector(5, defaultBurstLimit*2, leakyBucketPeriod, false /* deleteEmptyBuckets */)
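// Worked example (hypothetical flag values): with DataColumnBatchLimit = 32, DataColumnBatchLimitBurstFactor = 2
// and CustodyRequirement = 4, allowedDataColumnsPerSecond = 128 and allowedDataColumnsBurst = 256, so the shared
// columnCollector lets a peer burst up to 256 data column sidecars before being throttled to 128 per period.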

View File

@@ -18,7 +18,7 @@ import (
func TestNewRateLimiter(t *testing.T) {
rlimiter := newRateLimiter(mockp2p.NewTestP2P(t))
-assert.Equal(t, len(rlimiter.limiterMap), 12, "correct number of topics not registered")
+assert.Equal(t, 15, len(rlimiter.limiterMap), "correct number of topics not registered")
}
func TestNewRateLimiter_FreeCorrectly(t *testing.T) {

View File

@@ -12,6 +12,7 @@ import (
"github.com/libp2p/go-libp2p/core/protocol" "github.com/libp2p/go-libp2p/core/protocol"
"github.com/pkg/errors" "github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz" ssz "github.com/prysmaticlabs/fastssz"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types" p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/config/params" "github.com/prysmaticlabs/prysm/v5/config/params"
@@ -51,7 +52,9 @@ func (s *Service) registerRPCHandlers() {
s.pingHandler,
)
s.registerRPCHandlersAltair()
-if currEpoch >= params.BeaconConfig().DenebForkEpoch {
+if coreTime.PeerDASIsActive(slots.UnsafeEpochStart(currEpoch)) {
+s.registerRPCHandlersPeerDAS()
+} else if currEpoch >= params.BeaconConfig().DenebForkEpoch {
s.registerRPCHandlersDeneb()
}
return
@@ -109,6 +112,21 @@ func (s *Service) registerRPCHandlersDeneb() {
)
}
func (s *Service) registerRPCHandlersPeerDAS() {
s.registerRPC(
p2p.RPCDataColumnSidecarsByRootTopicV1,
s.dataColumnSidecarByRootRPCHandler,
)
s.registerRPC(
p2p.RPCDataColumnSidecarsByRangeTopicV1,
s.dataColumnSidecarsByRangeRPCHandler,
)
s.registerRPC(
p2p.RPCMetaDataTopicV3,
s.metaDataHandler,
)
}
// Remove all v1 Stream handlers that are no longer supported
// from altair onwards.
func (s *Service) unregisterPhase0Handlers() {
@@ -121,6 +139,14 @@ func (s *Service) unregisterPhase0Handlers() {
s.cfg.p2p.Host().RemoveStreamHandler(protocol.ID(fullMetadataTopic))
}
func (s *Service) unregisterBlobHandlers() {
fullBlobRangeTopic := p2p.RPCBlobSidecarsByRangeTopicV1 + s.cfg.p2p.Encoding().ProtocolSuffix()
fullBlobRootTopic := p2p.RPCBlobSidecarsByRootTopicV1 + s.cfg.p2p.Encoding().ProtocolSuffix()
s.cfg.p2p.Host().RemoveStreamHandler(protocol.ID(fullBlobRangeTopic))
s.cfg.p2p.Host().RemoveStreamHandler(protocol.ID(fullBlobRootTopic))
}
// registerRPC for a given topic with an expected protobuf message type.
func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
topic := baseTopic + s.cfg.p2p.Encoding().ProtocolSuffix()
@@ -193,7 +219,7 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
// since metadata requests do not have any data in the payload, we
// do not decode anything.
-if baseTopic == p2p.RPCMetaDataTopicV1 || baseTopic == p2p.RPCMetaDataTopicV2 {
+if baseTopic == p2p.RPCMetaDataTopicV1 || baseTopic == p2p.RPCMetaDataTopicV2 || baseTopic == p2p.RPCMetaDataTopicV3 {
if err := handle(ctx, base, stream); err != nil {
messageFailedProcessingCounter.WithLabelValues(topic).Inc()
if !errors.Is(err, p2ptypes.ErrWrongForkDigestVersion) {

View File

@@ -7,6 +7,9 @@ import (
libp2pcore "github.com/libp2p/go-libp2p/core" libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/peer" "github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution" "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types" "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/verify" "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/verify"
@@ -55,15 +58,28 @@ func (s *Service) sendRecentBeaconBlocksRequest(ctx context.Context, requests *t
if err != nil {
return err
}
-request, err := s.pendingBlobsRequestForBlock(blkRoot, blk)
-if err != nil {
-return err
-}
-if len(request) == 0 {
-continue
-}
-if err := s.sendAndSaveBlobSidecars(ctx, request, id, blk); err != nil {
-return err
-}
+if coreTime.PeerDASIsActive(blk.Block().Slot()) {
+request, err := s.pendingDataColumnRequestForBlock(blkRoot, blk)
+if err != nil {
+return errors.Wrap(err, "pending data column request for block")
+}
+if len(request) == 0 {
+continue
+}
+if err := s.sendAndSaveDataColumnSidecars(ctx, request, id, blk); err != nil {
+return errors.Wrap(err, "send and save data column sidecars")
+}
+} else {
+request, err := s.pendingBlobsRequestForBlock(blkRoot, blk)
+if err != nil {
+return errors.Wrap(err, "pending blobs request for block")
+}
+if len(request) == 0 {
+continue
+}
+if err := s.sendAndSaveBlobSidecars(ctx, request, id, blk); err != nil {
+return errors.Wrap(err, "send and save blob sidecars")
+}
+}
}
return err
@@ -170,6 +186,36 @@ func (s *Service) sendAndSaveBlobSidecars(ctx context.Context, request types.Blo
return nil
}
func (s *Service) sendAndSaveDataColumnSidecars(ctx context.Context, request types.DataColumnSidecarsByRootReq, peerID peer.ID, block interfaces.ReadOnlySignedBeaconBlock) error {
if len(request) == 0 {
return nil
}
sidecars, err := SendDataColumnSidecarByRoot(ctx, s.cfg.clock, s.cfg.p2p, peerID, s.ctxMap, &request)
if err != nil {
return err
}
RoBlock, err := blocks.NewROBlock(block)
if err != nil {
return err
}
for _, sidecar := range sidecars {
if err := verify.ColumnAlignsWithBlock(sidecar, RoBlock); err != nil {
return err
}
log.WithFields(columnFields(sidecar)).Debug("Received data column sidecar RPC")
}
for i := range sidecars {
verifiedCol := blocks.NewVerifiedRODataColumn(sidecars[i])
if err := s.cfg.blobStorage.SaveDataColumn(verifiedCol); err != nil {
return err
}
}
return nil
}
func (s *Service) pendingBlobsRequestForBlock(root [32]byte, b interfaces.ReadOnlySignedBeaconBlock) (types.BlobSidecarsByRootReq, error) {
if b.Version() < version.Deneb {
return nil, nil // Block before deneb has no blob.
@@ -181,7 +227,27 @@ func (s *Service) pendingBlobsRequestForBlock(root [32]byte, b interfaces.ReadOn
if len(cc) == 0 {
return nil, nil
}
-return s.constructPendingBlobsRequest(root, len(cc))
+blobIdentifiers, err := s.constructPendingBlobsRequest(root, len(cc))
+if err != nil {
+return nil, errors.Wrap(err, "construct pending blobs request")
+}
+return blobIdentifiers, nil
+}
+func (s *Service) pendingDataColumnRequestForBlock(root [32]byte, b interfaces.ReadOnlySignedBeaconBlock) (types.DataColumnSidecarsByRootReq, error) {
+if b.Version() < version.Deneb {
+return nil, nil // Block before deneb has no blob.
+}
+cc, err := b.Block().Body().BlobKzgCommitments()
+if err != nil {
+return nil, err
+}
+if len(cc) == 0 {
+return nil, nil
+}
+return s.constructPendingColumnRequest(root)
}
// constructPendingBlobsRequest creates a request for BlobSidecars by root, considering blobs already in DB.
@@ -191,12 +257,40 @@ func (s *Service) constructPendingBlobsRequest(root [32]byte, commitments int) (
}
stored, err := s.cfg.blobStorage.Indices(root)
if err != nil {
-return nil, err
+return nil, errors.Wrap(err, "indices")
}
return requestsForMissingIndices(stored, commitments, root), nil
}
func (s *Service) constructPendingColumnRequest(root [32]byte) (types.DataColumnSidecarsByRootReq, error) {
// Retrieve the storedColumns columns for the current root.
storedColumns, err := s.cfg.blobStorage.ColumnIndices(root)
if err != nil {
return nil, errors.Wrap(err, "column indices")
}
// Retrieve the columns we should custody.
custodiedColumns, err := peerdas.CustodyColumns(s.cfg.p2p.NodeID(), peerdas.CustodySubnetCount())
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
// Build the request for the missing columns.
req := make(types.DataColumnSidecarsByRootReq, 0, len(custodiedColumns))
for column := range custodiedColumns {
isColumnStored := storedColumns[column]
if !isColumnStored {
req = append(req, &eth.DataColumnIdentifier{
BlockRoot: root[:],
ColumnIndex: column,
})
}
}
return req, nil
}
// requestsForMissingIndices constructs a slice of BlobIdentifiers that are missing from
// local storage, based on a mapping that represents which indices are locally stored,
// and the highest expected index.

View File

@@ -40,7 +40,7 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
return err
}
// Sort the identifiers so that requests for the same blob root will be adjacent, minimizing db lookups.
-sort.Sort(blobIdents)
+sort.Sort(&blobIdents)
batchSize := flags.Get().BlobBatchLimit
var ticker *time.Ticker

View File

@@ -45,7 +45,7 @@ func (c *blobsTestCase) filterExpectedByRoot(t *testing.T, scs []blocks.ROBlob,
message: p2pTypes.ErrBlobLTMinRequest.Error(),
}}
}
-sort.Sort(req)
+sort.Sort(&req)
var expect []*expectedBlobChunk
blockOffset := 0
if len(scs) == 0 {

View File

@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks" "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces" "github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/network/forks" "github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version" "github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots" "github.com/prysmaticlabs/prysm/v5/time/slots"
) )
@@ -155,3 +156,22 @@ func WriteBlobSidecarChunk(stream libp2pcore.Stream, tor blockchain.TemporalOrac
_, err = encoding.EncodeWithMaxLength(stream, sidecar)
return err
}
// WriteDataColumnSidecarChunk writes data column chunk object to stream.
// response_chunk ::= <result> | <context-bytes> | <encoding-dependent-header> | <encoded-payload>
func WriteDataColumnSidecarChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, sidecar *ethpb.DataColumnSidecar) error {
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
}
valRoot := tor.GenesisValidatorsRoot()
ctxBytes, err := forks.ForkDigestFromEpoch(slots.ToEpoch(sidecar.SignedBlockHeader.Header.Slot), valRoot[:])
if err != nil {
return err
}
if err := writeContextToStream(ctxBytes[:], stream); err != nil {
return err
}
_, err = encoding.EncodeWithMaxLength(stream, sidecar)
return err
}
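// Resulting chunk layout (assuming the ssz_snappy encoding registered for this topic): a single 0x00
// success byte, then the 4-byte fork digest derived from the sidecar's slot as the context bytes,
// then the encoder's length-prefixed, snappy-compressed SSZ serialization of the DataColumnSidecar.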

View File

@@ -0,0 +1,196 @@
package sync
import (
"context"
"time"
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
func (s *Service) streamDataColumnBatch(ctx context.Context, batch blockBatch, wQuota uint64, wantedIndexes map[uint64]bool, stream libp2pcore.Stream) (uint64, error) {
// Defensive check to guard against underflow.
if wQuota == 0 {
return 0, nil
}
_, span := trace.StartSpan(ctx, "sync.streamDataColumnBatch")
defer span.End()
for _, b := range batch.canonical() {
root := b.Root()
idxs, err := s.cfg.blobStorage.ColumnIndices(b.Root())
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
return wQuota, errors.Wrapf(err, "could not retrieve sidecars for block root %#x", root)
}
for i, l := uint64(0), uint64(len(idxs)); i < l; i++ {
// index not available or unwanted, skip
if !idxs[i] || !wantedIndexes[i] {
continue
}
// We won't check for file not found since the .ColumnIndices method should normally prevent that from happening.
sc, err := s.cfg.blobStorage.GetColumn(b.Root(), i)
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
return wQuota, errors.Wrapf(err, "could not retrieve data column sidecar: index %d, block root %#x", i, root)
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if chunkErr := WriteDataColumnSidecarChunk(stream, s.cfg.chain, s.cfg.p2p.Encoding(), sc); chunkErr != nil {
log.WithError(chunkErr).Debug("Could not send a chunked response")
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, chunkErr)
return wQuota, chunkErr
}
s.rateLimiter.add(stream, 1)
wQuota -= 1
// Stop streaming results once the quota of writes for the request is consumed.
if wQuota == 0 {
return 0, nil
}
}
}
return wQuota, nil
}
// dataColumnSidecarsByRangeRPCHandler looks up the request data columns from the database from a given start slot index
func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
var err error
ctx, span := trace.StartSpan(ctx, "sync.DataColumnSidecarsByRangeHandler")
defer span.End()
ctx, cancel := context.WithTimeout(ctx, respTimeout)
defer cancel()
SetRPCStreamDeadlines(stream)
log := log.WithField("handler", p2p.DataColumnSidecarsByRangeName[1:]) // slice the leading slash off the name var
r, ok := msg.(*pb.DataColumnSidecarsByRangeRequest)
if !ok {
return errors.New("message is not type *pb.DataColumnSidecarsByRangeRequest")
}
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
return err
}
rp, err := validateDataColumnsByRange(r, s.cfg.chain.CurrentSlot())
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
tracing.AnnotateError(span, err)
return err
}
// Ticker to stagger out large requests.
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
batcher, err := newBlockRangeBatcher(rp, s.cfg.beaconDB, s.rateLimiter, s.cfg.chain.IsCanonical, ticker)
if err != nil {
log.WithError(err).Info("error in DataColumnSidecarsByRange batch")
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
return err
}
// Derive the wanted columns for the request.
wantedColumns := map[uint64]bool{}
for _, c := range r.Columns {
wantedColumns[c] = true
}
var batch blockBatch
wQuota := params.BeaconConfig().MaxRequestDataColumnSidecars
for batch, ok = batcher.next(ctx, stream); ok; batch, ok = batcher.next(ctx, stream) {
batchStart := time.Now()
wQuota, err = s.streamDataColumnBatch(ctx, batch, wQuota, wantedColumns, stream)
rpcBlobsByRangeResponseLatency.Observe(float64(time.Since(batchStart).Milliseconds()))
if err != nil {
return err
}
// once we have written MAX_REQUEST_DATA_COLUMN_SIDECARS, we're done serving the request
if wQuota == 0 {
break
}
}
if err := batch.error(); err != nil {
log.WithError(err).Debug("error in DataColumnSidecarsByRange batch")
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
return err
}
closeStream(stream, log)
return nil
}
// Set the count limit to the number of data columns in a batch.
func columnBatchLimit() uint64 {
return uint64(flags.Get().BlockBatchLimit) / fieldparams.MaxBlobsPerBlock
}
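// Example (assuming the default BlockBatchLimit of 64 and MaxBlobsPerBlock of 6): columnBatchLimit()
// returns 10, so validateDataColumnsByRange caps the requested slot range at 10 slots per request,
// or at MaxRequestBlock for the current epoch if that value is smaller.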
// TODO: Generalize between data columns and blobs, while the validation parameters used are different they
// are the same value in the config. Can this be safely abstracted ?
func validateDataColumnsByRange(r *pb.DataColumnSidecarsByRangeRequest, current primitives.Slot) (rangeParams, error) {
if r.Count == 0 {
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "invalid request Count parameter")
}
rp := rangeParams{
start: r.StartSlot,
size: r.Count,
}
// Peers may overshoot the current slot when in initial sync, so we don't want to penalize them by treating the
// request as an error. So instead we return a set of params that acts as a noop.
if rp.start > current {
return rangeParams{start: current, end: current, size: 0}, nil
}
var err error
rp.end, err = rp.start.SafeAdd(rp.size - 1)
if err != nil {
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "overflow start + count -1")
}
maxRequest := params.MaxRequestBlock(slots.ToEpoch(current))
// Allow some wiggle room, up to double the MaxRequestBlocks past the current slot,
// to give nodes syncing close to the head of the chain some margin for error.
maxStart, err := current.SafeAdd(maxRequest * 2)
if err != nil {
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "current + maxRequest * 2 > max uint")
}
// Clients MUST keep a record of signed data column sidecars seen on the epoch range
// [max(current_epoch - MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS, DENEB_FORK_EPOCH), current_epoch]
// where current_epoch is defined by the current wall-clock time,
// and clients MUST support serving requests of data columns on this range.
minStartSlot, err := DataColumnsRPCMinValidSlot(current)
if err != nil {
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "DataColumnsRPCMinValidSlot error")
}
if rp.start > maxStart {
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "start > maxStart")
}
if rp.start < minStartSlot {
rp.start = minStartSlot
}
if rp.end > current {
rp.end = current
}
if rp.end < rp.start {
rp.end = rp.start
}
limit := columnBatchLimit()
if limit > maxRequest {
limit = maxRequest
}
if rp.size > limit {
rp.size = limit
}
return rp, nil
}
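For illustration, a minimal test-style sketch of the no-op behaviour when a peer requests a range starting beyond the current slot. It assumes placement in the same `sync` package (the names `rangeParams`, `validateDataColumnsByRange` and the proto fields are taken from the code above) and is not part of this changeset:

func TestValidateDataColumnsByRange_FutureStartIsNoop(t *testing.T) {
	current := primitives.Slot(100)
	req := &pb.DataColumnSidecarsByRangeRequest{StartSlot: 200, Count: 10}
	rp, err := validateDataColumnsByRange(req, current)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if rp.size != 0 || rp.start != current || rp.end != current {
		t.Fatalf("expected no-op range params, got %+v", rp)
	}
}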


@@ -0,0 +1,223 @@
package sync
import (
"context"
"fmt"
"math"
"sort"
"time"
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.dataColumnSidecarByRootRPCHandler")
defer span.End()
ctx, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
SetRPCStreamDeadlines(stream)
log := log.WithField("handler", p2p.DataColumnSidecarsByRootName[1:]) // slice the leading slash off the name var
// We use the same request type as for blobs, since the underlying data structure is identical.
// TODO: Make the type naming more generic to be extensible to data columns
ref, ok := msg.(*types.DataColumnSidecarsByRootReq)
if !ok {
return errors.New("message is not type DataColumnSidecarsByRootReq")
}
requestedColumnIdents := *ref
if err := validateDataColumnsByRootRequest(requestedColumnIdents); err != nil {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
return errors.Wrap(err, "validate data columns by root request")
}
// Sort the identifiers so that requests for the same block root will be adjacent, minimizing db lookups.
sort.Sort(&requestedColumnIdents)
requestedColumnsList := make([]uint64, 0, len(requestedColumnIdents))
for _, ident := range requestedColumnIdents {
requestedColumnsList = append(requestedColumnsList, ident.ColumnIndex)
}
batchSize := flags.Get().DataColumnBatchLimit
var ticker *time.Ticker
if len(requestedColumnIdents) > batchSize {
ticker = time.NewTicker(time.Second)
defer ticker.Stop()
}
// Compute the oldest slot we'll allow a peer to request, based on the current slot.
cs := s.cfg.clock.CurrentSlot()
minReqSlot, err := DataColumnsRPCMinValidSlot(cs)
if err != nil {
return errors.Wrapf(err, "unexpected error computing min valid blob request slot, current_slot=%d", cs)
}
// Compute all custodied columns.
custodiedColumns, err := peerdas.CustodyColumns(s.cfg.p2p.NodeID(), peerdas.CustodySubnetCount())
if err != nil {
log.WithError(err).Error("unexpected error computing the custodied columns")
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
return errors.Wrap(err, "custody columns")
}
custodiedColumnsList := make([]uint64, 0, len(custodiedColumns))
for column := range custodiedColumns {
custodiedColumnsList = append(custodiedColumnsList, column)
}
// Sort the custodied columns by index.
sort.Slice(custodiedColumnsList, func(i, j int) bool {
return custodiedColumnsList[i] < custodiedColumnsList[j]
})
log.WithFields(logrus.Fields{
"custodied": custodiedColumnsList,
"requested": requestedColumnsList,
"custodiedCount": len(custodiedColumnsList),
"requestedCount": len(requestedColumnsList),
}).Debug("Data column sidecar by root request received")
// Subscribe to the data column feed.
rootIndexChan := make(chan filesystem.RootIndexPair)
subscription := s.cfg.blobStorage.DataColumnFeed.Subscribe(rootIndexChan)
defer subscription.Unsubscribe()
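// The storage layer is expected to publish a filesystem.RootIndexPair on DataColumnFeed whenever a
// column sidecar is persisted; the wait loop further below relies on this to unblock requests for
// columns that are not yet available in the db.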
for i := range requestedColumnIdents {
if err := ctx.Err(); err != nil {
closeStream(stream, log)
return errors.Wrap(err, "context error")
}
// Throttle request processing to no more than batchSize/sec.
if ticker != nil && i != 0 && i%batchSize == 0 {
select {
case <-ticker.C:
log.Debug("Throttling data column sidecar request")
case <-ctx.Done():
log.Debug("Context closed, exiting routine")
return nil
}
}
s.rateLimiter.add(stream, 1)
requestedRoot, requestedIndex := bytesutil.ToBytes32(requestedColumnIdents[i].BlockRoot), requestedColumnIdents[i].ColumnIndex
// Decrease the peer's score if it requests a column that is not custodied.
isCustodied := custodiedColumns[requestedIndex]
if !isCustodied {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.writeErrorResponseToStream(responseCodeInvalidRequest, types.ErrInvalidColumnIndex.Error(), stream)
return types.ErrInvalidColumnIndex
}
// TODO: Differentiate between blobs and columns for our storage engine
// If the data column is nil, it means it is not yet available in the db.
// We wait for it to be available.
// Retrieve the data column from the database.
dataColumnSidecar, err := s.cfg.blobStorage.GetColumn(requestedRoot, requestedIndex)
if err != nil && !db.IsNotFound(err) {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
return errors.Wrap(err, "get column")
}
if err != nil && db.IsNotFound(err) {
fields := logrus.Fields{
"root": fmt.Sprintf("%#x", requestedRoot),
"index": requestedIndex,
}
log.WithFields(fields).Debug("Peer requested data column sidecar by root not found in db, waiting for it to be available")
loop:
for {
select {
case receivedRootIndex := <-rootIndexChan:
if receivedRootIndex.Root == requestedRoot && receivedRootIndex.Index == requestedIndex {
// This is the data column we are looking for.
log.WithFields(fields).Debug("Data column sidecar by root is now available in the db")
break loop
}
case <-ctx.Done():
closeStream(stream, log)
return errors.Errorf("context closed while waiting for data column with root %#x and index %d", requestedRoot, requestedIndex)
}
}
// Retrieve the data column from the db.
dataColumnSidecar, err = s.cfg.blobStorage.GetColumn(requestedRoot, requestedIndex)
if err != nil {
// This time, no error (not even a not-found error) is expected.
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
return errors.Wrap(err, "get column")
}
}
// If any root in the request content references a block earlier than minimum_request_epoch,
// peers MAY respond with error code 3: ResourceUnavailable or not include the data column in the response.
// Note: we deviate from the spec by allowing requests for data columns that predate minimum_request_epoch,
// up to the beginning of the retention period.
if dataColumnSidecar.SignedBlockHeader.Header.Slot < minReqSlot {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrDataColumnLTMinRequest.Error(), stream)
log.WithError(types.ErrDataColumnLTMinRequest).
Debugf("requested data column for block %#x before minimum_request_epoch", requestedColumnIdents[i].BlockRoot)
return types.ErrDataColumnLTMinRequest
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if chunkErr := WriteDataColumnSidecarChunk(stream, s.cfg.chain, s.cfg.p2p.Encoding(), dataColumnSidecar); chunkErr != nil {
log.WithError(chunkErr).Debug("Could not send a chunked response")
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, chunkErr)
return chunkErr
}
}
closeStream(stream, log)
return nil
}
func validateDataColumnsByRootRequest(colIdents types.DataColumnSidecarsByRootReq) error {
if uint64(len(colIdents)) > params.BeaconConfig().MaxRequestDataColumnSidecars {
return types.ErrMaxDataColumnReqExceeded
}
return nil
}
func DataColumnsRPCMinValidSlot(current primitives.Slot) (primitives.Slot, error) {
// Avoid overflow if we're running on a config where deneb is set to far future epoch.
if params.BeaconConfig().DenebForkEpoch == math.MaxUint64 || !coreTime.PeerDASIsActive(current) {
return primitives.Slot(math.MaxUint64), nil
}
minReqEpochs := params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest
currEpoch := slots.ToEpoch(current)
minStart := params.BeaconConfig().Eip7594ForkEpoch
if currEpoch > minReqEpochs && currEpoch-minReqEpochs > minStart {
minStart = currEpoch - minReqEpochs
}
return slots.EpochStart(minStart)
}
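As a worked example for `DataColumnsRPCMinValidSlot` (hypothetical values, not taken from this changeset): with PeerDAS active, `MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS = 4096`, an EIP-7594 fork epoch of 100 and a current epoch of 10000, the function returns the first slot of epoch 10000 - 4096 = 5904; with a current epoch of only 3000, the result instead clamps to the first slot of the fork epoch, 100.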


@@ -21,97 +21,168 @@ import (
 func (s *Service) metaDataHandler(_ context.Context, _ interface{}, stream libp2pcore.Stream) error {
 	SetRPCStreamDeadlines(stream)
+	// Validate the incoming request regarding rate limiting.
 	if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
-		return err
+		return errors.Wrap(err, "validate request")
 	}
 	s.rateLimiter.add(stream, 1)
-	if s.cfg.p2p.Metadata() == nil || s.cfg.p2p.Metadata().IsNil() {
+	// Retrieve our metadata.
+	metadata := s.cfg.p2p.Metadata()
+	// Handle the case our metadata is nil.
+	if metadata == nil || metadata.IsNil() {
 		nilErr := errors.New("nil metadata stored for host")
 		resp, err := s.generateErrorResponse(responseCodeServerError, types.ErrGeneric.Error())
 		if err != nil {
 			log.WithError(err).Debug("Could not generate a response error")
-		} else if _, err := stream.Write(resp); err != nil {
+			return nilErr
+		}
+		if _, err := stream.Write(resp); err != nil {
 			log.WithError(err).Debug("Could not write to stream")
 		}
 		return nilErr
 	}
+	// Get the stream version from the protocol.
 	_, _, streamVersion, err := p2p.TopicDeconstructor(string(stream.Protocol()))
 	if err != nil {
+		wrappedErr := errors.Wrap(err, "topic deconstructor")
 		resp, genErr := s.generateErrorResponse(responseCodeServerError, types.ErrGeneric.Error())
 		if genErr != nil {
 			log.WithError(genErr).Debug("Could not generate a response error")
-		} else if _, wErr := stream.Write(resp); wErr != nil {
+			return wrappedErr
+		}
+		if _, wErr := stream.Write(resp); wErr != nil {
 			log.WithError(wErr).Debug("Could not write to stream")
 		}
-		return err
+		return wrappedErr
 	}
-	currMd := s.cfg.p2p.Metadata()
+	// Handle the case where the stream version is not recognized.
+	metadataVersion := metadata.Version()
 	switch streamVersion {
 	case p2p.SchemaVersionV1:
-		// We have a v1 metadata object saved locally, so we
-		// convert it back to a v0 metadata object.
-		if currMd.Version() != version.Phase0 {
-			currMd = wrapper.WrappedMetadataV0(
+		switch metadataVersion {
+		case version.Altair, version.Deneb:
+			metadata = wrapper.WrappedMetadataV0(
 				&pb.MetaDataV0{
-					Attnets:   currMd.AttnetsBitfield(),
-					SeqNumber: currMd.SequenceNumber(),
+					Attnets:   metadata.AttnetsBitfield(),
+					SeqNumber: metadata.SequenceNumber(),
 				})
 		}
 	case p2p.SchemaVersionV2:
-		// We have a v0 metadata object saved locally, so we
-		// convert it to a v1 metadata object.
-		if currMd.Version() != version.Altair {
-			currMd = wrapper.WrappedMetadataV1(
+		switch metadataVersion {
+		case version.Phase0:
+			metadata = wrapper.WrappedMetadataV1(
 				&pb.MetaDataV1{
-					Attnets:   currMd.AttnetsBitfield(),
-					SeqNumber: currMd.SequenceNumber(),
+					Attnets:   metadata.AttnetsBitfield(),
+					SeqNumber: metadata.SequenceNumber(),
 					Syncnets:  bitfield.Bitvector4{byte(0x00)},
 				})
+		case version.Deneb:
+			metadata = wrapper.WrappedMetadataV1(
+				&pb.MetaDataV1{
+					Attnets:   metadata.AttnetsBitfield(),
+					SeqNumber: metadata.SequenceNumber(),
+					Syncnets:  metadata.SyncnetsBitfield(),
+				})
+		}
+	case p2p.SchemaVersionV3:
+		switch metadataVersion {
+		case version.Phase0:
+			metadata = wrapper.WrappedMetadataV2(
+				&pb.MetaDataV2{
+					Attnets:            metadata.AttnetsBitfield(),
+					SeqNumber:          metadata.SequenceNumber(),
+					Syncnets:           bitfield.Bitvector4{byte(0x00)},
+					CustodySubnetCount: 0,
+				})
+		case version.Altair:
+			metadata = wrapper.WrappedMetadataV2(
+				&pb.MetaDataV2{
+					Attnets:            metadata.AttnetsBitfield(),
+					SeqNumber:          metadata.SequenceNumber(),
+					Syncnets:           metadata.SyncnetsBitfield(),
+					CustodySubnetCount: 0,
+				})
 		}
 	}
+	// Write the METADATA response into the stream.
 	if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
-		return err
+		return errors.Wrap(err, "write metadata response")
 	}
-	_, err = s.cfg.p2p.Encoding().EncodeWithMaxLength(stream, currMd)
+	// Encode the metadata and write it to the stream.
+	_, err = s.cfg.p2p.Encoding().EncodeWithMaxLength(stream, metadata)
 	if err != nil {
-		return err
+		return errors.Wrap(err, "encode metadata")
 	}
 	closeStream(stream, log)
 	return nil
 }
-func (s *Service) sendMetaDataRequest(ctx context.Context, id peer.ID) (metadata.Metadata, error) {
+// sendMetaDataRequest sends a METADATA request to the peer and return the response.
+func (s *Service) sendMetaDataRequest(ctx context.Context, peerID peer.ID) (metadata.Metadata, error) {
 	ctx, cancel := context.WithTimeout(ctx, respTimeout)
 	defer cancel()
-	topic, err := p2p.TopicFromMessage(p2p.MetadataMessageName, slots.ToEpoch(s.cfg.clock.CurrentSlot()))
+	// Compute the current epoch.
+	currentSlot := s.cfg.clock.CurrentSlot()
+	currentEpoch := slots.ToEpoch(currentSlot)
+	// Compute the topic for the metadata request regarding the current epoch.
+	topic, err := p2p.TopicFromMessage(p2p.MetadataMessageName, currentEpoch)
 	if err != nil {
-		return nil, err
+		return nil, errors.Wrap(err, "topic from message")
 	}
-	stream, err := s.cfg.p2p.Send(ctx, new(interface{}), topic, id)
+	// Send the METADATA request to the peer.
+	message := new(interface{})
+	stream, err := s.cfg.p2p.Send(ctx, message, topic, peerID)
 	if err != nil {
-		return nil, err
+		return nil, errors.Wrap(err, "send metadata request")
 	}
 	defer closeStream(stream, log)
+	// Read the METADATA response from the peer.
 	code, errMsg, err := ReadStatusCode(stream, s.cfg.p2p.Encoding())
 	if err != nil {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
-		return nil, err
+		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
+		return nil, errors.Wrap(err, "read status code")
 	}
 	if code != 0 {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
+		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
 		return nil, errors.New(errMsg)
 	}
+	// Get the genesis validators root.
 	valRoot := s.cfg.clock.GenesisValidatorsRoot()
-	rpcCtx, err := forks.ForkDigestFromEpoch(slots.ToEpoch(s.cfg.clock.CurrentSlot()), valRoot[:])
+	// Get the fork digest from the current epoch and the genesis validators root.
+	rpcCtx, err := forks.ForkDigestFromEpoch(currentEpoch, valRoot[:])
 	if err != nil {
-		return nil, err
+		return nil, errors.Wrap(err, "fork digest from epoch")
 	}
+	// Instantiate zero value of the metadata.
 	msg, err := extractDataTypeFromTypeMap(types.MetaDataMap, rpcCtx[:], s.cfg.clock)
 	if err != nil {
-		return nil, err
+		return nil, errors.Wrap(err, "extract data type from type map")
 	}
 	// Defensive check to ensure valid objects are being sent.
 	topicVersion := ""
 	switch msg.Version() {
@@ -119,13 +190,20 @@ func (s *Service) sendMetaDataRequest(ctx context.Context, id peer.ID) (metadata
 		topicVersion = p2p.SchemaVersionV1
 	case version.Altair:
 		topicVersion = p2p.SchemaVersionV2
+	case version.Deneb:
+		topicVersion = p2p.SchemaVersionV3
 	}
+	// Validate the version of the topic.
 	if err := validateVersion(topicVersion, stream); err != nil {
 		return nil, err
 	}
+	// Decode the metadata from the peer.
 	if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
 		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
 		return nil, err
 	}
 	return msg, nil
 }
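For context, a minimal caller-side sketch of how the refactored `sendMetaDataRequest` might be used to refresh a peer's metadata; the `SetMetadata` call on the peer store is an assumption for illustration and is not part of this diff:

	md, err := s.sendMetaDataRequest(ctx, peerID)
	if err != nil {
		// The peer is already down-scored inside sendMetaDataRequest where appropriate.
		log.WithError(err).Debug("Could not fetch metadata from peer")
		return
	}
	// Persist the freshly fetched metadata (including custody_subnet_count for MetaDataV2).
	s.cfg.p2p.Peers().SetMetadata(peerID, md)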

Some files were not shown because too many files have changed in this diff.