Compare commits

...

101 Commits

Author SHA1 Message Date
Manu NALEPA
d5454096be Revert "Separate type for unaggregated network attestations (#14659)"
This reverts commit 153d1872ae.
2025-01-15 21:17:24 +01:00
Manu NALEPA
f6aa197c4e Merge branch 'develop' into peerDAS 2025-01-15 20:59:22 +01:00
Manu NALEPA
6fb349ea76 unmarshalState: Use hasFuluKey. 2025-01-15 20:48:25 +01:00
Manu NALEPA
e5a425f5c7 Merge branch 'develop' into peerDAS 2025-01-15 17:18:34 +01:00
Manu NALEPA
f157d37e4c peerDAS: Decouple network subnets from das-core. (#14784)
https://github.com/ethereum/consensus-specs/pull/3832/
2025-01-14 10:45:05 +01:00
Manu NALEPA
5f08559bef Merge branch 'develop' into peerDAS 2025-01-08 10:18:18 +01:00
Manu NALEPA
a082d2aecd Merge branch 'fulu-boilerplate' into peerDAS 2025-01-06 13:45:33 +01:00
Manu NALEPA
bcfaff8504 Upgraded state to <fork> log: Move from debug to info.
Rationale:
This log is the only one notifying the user that a new fork happened.
A new fork is always a little bit stressful for a node operator.
Having at least one log indicating that the client switched forks is useful.
2025-01-05 16:22:43 +01:00
Manu NALEPA
d8e09c346f Implement the Fulu fork boilerplate. 2025-01-05 16:22:38 +01:00
Manu NALEPA
876519731b Prepare for future fork boilerplate. 2025-01-05 16:14:02 +01:00
Manu NALEPA
de05b83aca Merge branch 'develop' into peerDAS 2024-12-30 15:11:02 +01:00
Manu NALEPA
56c73e7193 Merge branch 'develop' into peerDAS 2024-12-27 22:11:36 +01:00
Manu NALEPA
859ac008a8 Activate peerDAS at electra. (#14734) 2024-12-27 09:48:57 +01:00
Manu NALEPA
f882bd27c8 Merge branch 'develop' into peerDAS 2024-12-18 16:15:32 +01:00
Manu NALEPA
361e5759c1 Merge branch 'develop' into peerDAS 2024-12-17 22:19:20 +01:00
Manu NALEPA
34ef0da896 Merge branch 'develop' into peerDAS 2024-12-10 23:11:45 +01:00
Manu NALEPA
726e8b962f Revert "Revert "Add error count prom metric (#14670)""
This reverts commit 5f17317c1c.
2024-12-10 21:49:40 +01:00
Manu NALEPA
453ea01deb disconnectFromPeer: Remove unused function. 2024-11-28 17:37:30 +01:00
Manu NALEPA
6537f8011e Merge branch 'peerDAS' into peerDAS-do-not-merge 2024-11-28 17:27:44 +01:00
Manu NALEPA
5f17317c1c Revert "Add error count prom metric (#14670)"
This reverts commit b28b1ed6ce.
2024-11-28 16:37:19 +01:00
Manu NALEPA
3432ffa4a3 PeerDAS: Batch columns verifications (#14559)
* `ColumnAlignsWithBlock`: Split lines.

* Data columns verifications: Batch

* Remove completely `DataColumnBatchVerifier`.

Only `DataColumnsVerifier` (with `s`) on columns remains.
It is the responsibility of the function which receives the data column
(either by gossip, by range request or by root request) to verify the
data column with respect to the corresponding checks.

* Fix Nishant's comment.
2024-11-27 10:37:03 +01:00
Manu NALEPA
9dac67635b streamDataColumnBatch: Sort columns by index. (#14542)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#datacolumnsidecarsbyrange-v1

The following data column sidecars, where they exist, MUST be sent in (slot, column_index) order.
2024-11-27 10:37:03 +01:00
Manu NALEPA
9be69fbd07 PeerDAS: Fix major bug in dataColumnSidecarsByRangeRPCHandler and allow syncing from full nodes. (#14532)
* `validateDataColumnsByRange`: `current` ==> `currentSlot`.

* `validateRequest`: Extract `remotePeer` variable.

* `dataColumnSidecarsByRangeRPCHandler`: Small non functional refactor.

* `streamDataColumnBatch`: Fix major bug.

Before this commit, the node was unable to respond with a data column index higher than the count of stored data columns.
For example, if there are 8 data columns stored for a given block, the node was
able to respond for data column indices 1, 3, and 5, but not for 10, 16 or 127.

The issue was visible only for full nodes, since super nodes always store all 128 data columns (see the sketch after this entry).

* Initial sync: Fetch data columns from all peers.
(Not only from supernodes.)

* Nishant's comment: Fix `lastSlot` and `endSlot` duplication.

* Address Nishant's comment.
2024-11-27 10:37:03 +01:00
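The indexing mistake described in this entry can be pictured with a minimal, hypothetical Go sketch (illustrative names only, not the actual Prysm handler): treating a requested column index as a position in the slice of stored columns rejects any index greater than or equal to the number of stored columns, even when that column is stored.

```go
package main

import "fmt"

// respondByPosition sketches the buggy pattern: the requested column index is
// treated as a position in the slice of stored columns, so any index >=
// len(stored) is rejected even when that column is actually stored.
func respondByPosition(stored []uint64, requested uint64) bool {
	return requested < uint64(len(stored))
}

// respondByValue sketches the fix: check whether the requested index is among
// the stored column indices, whatever its numeric value.
func respondByValue(stored []uint64, requested uint64) bool {
	for _, idx := range stored {
		if idx == requested {
			return true
		}
	}
	return false
}

func main() {
	stored := []uint64{1, 3, 5, 10, 16, 40, 90, 127} // 8 columns custodied by a full node
	fmt.Println(respondByPosition(stored, 127))      // false: the bug
	fmt.Println(respondByValue(stored, 127))         // true: the fix
}
```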
Manu NALEPA
e21261e893 Data columns initial sync: Rework. (#14522) 2024-11-27 10:37:03 +01:00
Nishant Das
da53a8fc48 Fix Commitments Check (#14493)
* Fix Commitments Check

* `highestFinalizedEpoch`: Refactor (no functional change).

* `retrieveMissingDataColumnsFromPeers`: Fix logs.

* `VerifyDataColumnSidecarKZGProofs`: Optimise with capacity.

* Save data columns when initial syncing.

* `dataColumnSidecarsByRangeRPCHandler`: Add logs when a request enters.

* Improve logging.

* Improve logging.

* `peersWithDataColumns`: Do not filter on peer head slot anymore.

* Fix Nishant's comment.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:37:03 +01:00
Manu NALEPA
a14634e656 PeerDAS: Improve initial sync logs (#14496)
* `retrieveMissingDataColumnsFromPeers`: Search only for needed peers.

* Improve logging.
2024-11-27 10:37:03 +01:00
Manu NALEPA
43761a8066 PeerDAS: Fix initial sync with super nodes (#14495)
* Improve logging.

* `retrieveMissingDataColumnsFromPeers`: Limit to `512` items per request.

* `retrieveMissingDataColumnsFromPeers`: Allow `nil` peers.

Before this commit:
If, when this function is called, we are not yet connected to enough peers, then `peers` may not be satisfactory,
and, if new peers connect later, we will never see them.

After this commit:
If `peers` is `nil`, then we regularly check for all connected peers.
If `peers` is not `nil`, then we use them (see the sketch after this entry).
2024-11-27 10:37:03 +01:00
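A minimal, hypothetical sketch (not the actual Prysm code) of the nil-peers convention described in this entry:

```go
package main

import "fmt"

type peerID string

// peersForRound: a nil peer list means "re-read the currently connected peers
// on every round", so peers that connect after the retrieval loop has started
// are still taken into account.
func peersForRound(fixed []peerID, connected func() []peerID) []peerID {
	if fixed == nil {
		return connected()
	}
	return fixed
}

func main() {
	connected := func() []peerID { return []peerID{"peerA", "peerB"} }
	fmt.Println(peersForRound(nil, connected))               // [peerA peerB]
	fmt.Println(peersForRound([]peerID{"peerC"}, connected)) // [peerC]
}
```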
Manu NALEPA
01dbc337c0 PeerDAS: Fix initial sync (#14494)
* `BestFinalized`: Refactor (no functional change).

* `BestNonFinalized`: Refactor (no functional change).

* `beaconBlocksByRangeRPCHandler`: Remove useless log.

The same is already printed at the start of the function.

* `calculateHeadAndTargetEpochs`: Avoid `else`.

* `ConvertPeerIDToNodeID`: Improve error.

* Stop printing noisy "peer should be banned" logs.

* Initial sync: Request data columns from peers which:
- custody a superset of columns we need, and
- have a head slot >= our target slot.

* `requestDataColumnsFromPeers`: Shuffle peers before requesting.

Before this commit, we always requested peers in the same order,
until one of them responded.
Without shuffling, we always requested data columns from the same
peer (see the sketch after this entry).

* `requestDataColumnsFromPeers`: If error from a peer, just log the error and skip the peer.

* Improve logging.

* Fix tests.
2024-11-27 10:37:03 +01:00
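A small hedged sketch of the shuffling step described in this entry, with hypothetical names:

```go
package main

import (
	"fmt"
	"math/rand"
)

// shuffledPeers copies the peer list and shuffles the copy before each request,
// so the same first peer is not always the one asked for data columns.
func shuffledPeers(peers []string) []string {
	out := make([]string, len(peers))
	copy(out, peers)
	rand.Shuffle(len(out), func(i, j int) { out[i], out[j] = out[j], out[i] })
	return out
}

func main() {
	fmt.Println(shuffledPeers([]string{"peerA", "peerB", "peerC"}))
}
```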
Nishant Das
92f9b55fcb Put Subscriber in Goroutine (#14486) 2024-11-27 10:36:18 +01:00
Manu NALEPA
f65f12f58b Stop disconnecting peers for bad response / excessive colocation. (#14483) 2024-11-27 10:36:17 +01:00
Manu NALEPA
f2b61a3dcf PeerDAS: Misc improvements (#14482)
* `retrieveMissingDataColumnsFromPeers`: Improve logging.

* `dataColumnSidecarByRootRPCHandler`: Stop decreasing peer's score if asking for a column we do not custody.

* `dataColumnSidecarByRootRPCHandler`: If a data column is unavailable, stop waiting for it.

This behaviour was useful for peer sampling.
Now, just return the data column if we store it.
If we don't, skip.

* Dirty code comment.

* `retrieveMissingDataColumnsFromPeers`: Improve logs.

* `SendDataColumnsByRangeRequest`: Improve logs.

* `dataColumnSidecarsByRangeRPCHandler`: Improve logs.
2024-11-27 10:34:38 +01:00
Manu NALEPA
77a6d29a2e PeerDAS: Re-enable full node joining the main fork (#14475)
* `columnErrBuilder`: Use `Wrap` instead of `Join`.

Reason: `Join` inserts a line break, which makes the log quite unreadable.

* `validateDataColumn`: Improve log.

* `areDataColumnsAvailable`: Improve log.

* `SendDataColumnSidecarByRoot` ==> `SendDataColumnSidecarsByRootRequest`.

* `handleDA`: Refactor error message.

* `sendRecentBeaconBlocksRequest` ==> `sendBeaconBlocksRequest`.

Reason: There is no notion at all of "recent" in the function.

If the caller decides to call this function only with "recent" blocks, that's fine.
However, the function itself will know nothing about the "recentness" of these blocks.

* `sendBatchRootRequest`: Improve comments.

* `sendBeaconBlocksRequest`: Avoid `else` usage and use map of bool instead of `struct{}`.

* `wrapAndReportValidation`: Remove `agent` from log.

Reason: This prevents the log from fitting on one line, and it is not really useful for debugging.

* `validateAggregateAndProof`: Add comments.

* `GetValidCustodyPeers`: Fix typo.

* `GetValidCustodyPeers` ==> `DataColumnsAdmissibleCustodyPeers`.

* `CustodyHandler` ==> `DataColumnsHandler`.

* `CustodyCountFromRemotePeer` ==> `DataColumnsCustodyCountFromRemotePeer`.

* Implement `DataColumnsAdmissibleSubnetSamplingPeers`.

* Use `SubnetSamplingSize` instead of `CustodySubnetCount` where needed.

* Revert "`wrapAndReportValidation`: Remove `agent` from log."

This reverts commit 55db351102.
2024-11-27 10:34:38 +01:00
Manu NALEPA
31d16da3a0 PeerDAS: Multiple improvements (#14467)
* `scheduleReconstructedDataColumnsBroadcast`: Really minor refactor.

* `receivedDataColumnsFromRootLock` -> `dataColumnsFromRootLock`

* `reconstructDataColumns`: Stop looking into the DB to know if we have some columns.

Before this commit:
Each time we receive a column, we look into the filesystem for all columns we store.
==> For 128 columns, that amounts to 1 + 2 + 3 + ... + 128 = 128*(128+1)/2 = 8256 file lookups.

Also, as soon as a column is saved to the filesystem, we assume it will be visible if we
look at the filesystem again right after (strict consistency).
This is not always true.

==> Sometimes, we can reconstruct and reseed columns more than once, because the filesystem is not strictly consistent.

After this commit:
We use a (strictly consistent) cache to determine whether we received a column or not (see the sketch after this entry).
==> No more consistency issues, and less stress on the filesystem.

* `dataColumnSidecarByRootRPCHandler`: Improve logging.

Before this commit, logged values assumed that all requested columns correspond to
the same block root, which is not always the case.

After this commit, we know which columns are requested for which root.

* Add a log when broadcasting a data column.

This is useful to debug "lost data columns" in devnet.

* Address Nishant's comment
2024-11-27 10:34:38 +01:00
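A minimal sketch, with hypothetical names, of the strictly consistent in-memory cache described in this entry: tracking received column indices per block root avoids both the 1 + 2 + ... + 128 = 8256 filesystem lookups and the double-reconstruction problem.

```go
package main

import (
	"fmt"
	"sync"
)

// seenColumns is a hypothetical in-memory "received columns" cache: strictly
// consistent (guarded by a mutex), so a column counts as received as soon as
// add returns, with no dependence on filesystem visibility.
type seenColumns struct {
	mu   sync.Mutex
	cols map[[32]byte]map[uint64]bool
}

func newSeenColumns() *seenColumns {
	return &seenColumns{cols: make(map[[32]byte]map[uint64]bool)}
}

// add records a column index for a block root and returns how many distinct
// columns have been seen for that root so far.
func (s *seenColumns) add(root [32]byte, index uint64) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.cols[root] == nil {
		s.cols[root] = make(map[uint64]bool)
	}
	s.cols[root][index] = true
	return len(s.cols[root])
}

func main() {
	var root [32]byte
	c := newSeenColumns()
	c.add(root, 3)
	fmt.Println(c.add(root, 7)) // 2
}
```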
Justin Traglia
19221b77bd Update c-kzg-4844 to v2.0.1 (#14421) 2024-11-27 10:34:38 +01:00
Manu NALEPA
83df293647 Peerdas: Several updates (#14459)
* `validateDataColumn`: Refactor logging.

* `dataColumnSidecarByRootRPCHandler`: Improve logging.

* `isDataAvailable`: Improve logging.

* Add hidden debug flag: `--data-columns-reject-slot-multiple`.

* Add more logs about peer disconnection.

* `validPeersExist` --> `enoughPeersAreConnected`

* `beaconBlocksByRangeRPCHandler`: Add remote Peer ID in logs.

* Stop calling `writeErrorResponseToStream` twice in case of rate limiting.
2024-11-27 10:34:37 +01:00
Manu NALEPA
c20c09ce36 Peerdas: Full subnet sampling and sendBatchRootRequest fix. (#14452)
* `sendBatchRootRequest`: Refactor and add comments.

* `sendBatchRootRequest`: Only send requests to peers that custody a superset of our columns.

Before this commit, we sent "data columns by root" requests for data columns that peers do not custody.

* Data columns: Use subnet sampling only.

(Instead of peer sampling.)

* `areDataColumnsAvailable`: Improve logs.

* `GetBeaconBlock`: Improve logs.

Rationale: A `begin` log should always be followed by a `success` log or a `failure` log.
2024-11-27 10:30:29 +01:00
Manu NALEPA
2191faaa3f Fix CPU usage in small devnets (#14446)
* `CustodyCountFromRemotePeer`: Set happy path in the outer scope.

* `FindPeersWithSubnet`: Improve logging.

* `listenForNewNodes`: Avoid infinite loop in a small subnet.

* Address Nishant's comment.

* Fix Nishant's comment.
2024-11-27 10:30:29 +01:00
Nishant Das
2de1e6f3e4 Revert "Change Custody Count to Uint8 (#14386)" (#14415)
This reverts commit bd7ec3fa97.
2024-11-27 10:30:29 +01:00
Manu NALEPA
db44df3964 Fix Initial Sync with 128 data columns subnets (#14403)
* `pingPeers`: Add log with new ENR when modified.

* `p2p Start`: Use idiomatic go error syntax.

* P2P `start`: Fix error message.

* Do not use any bootnodes if the `--chain-config-file` flag is used and no `--bootstrap-node` flag is used.

Before this commit, if the `--chain-config-file` flag was used and no `--bootstrap-node` flag was used, then bootnodes were (incorrectly) defaulted to the `mainnet` ones.

* `validPeersExist`: Centralize logs.

* `AddConnectionHandler`: Improve logging.

"Peer connected" does not really reflect the fact that a new peer is actually connected. --> "New peer connection" is more clear.

Also, instead of writing `0`, `1`or `2` for direction, now it's writted "Unknown", "Inbound", "Outbound".

* Logging: Add 2 decimals for the timestamp in text and JSON logs.

* Improve "no valid peers" logging.

* Improve "Some columns have no peers responsible for custody" logging.

* `pubsubSubscriptionRequestLimit`: Increase to be consistent with data columns.

* `sendPingRequest`: Improve logging.

* `FindPeersWithSubnet`: Regularly recheck whether our current set of peers already contains enough peers for this topic.

Before this commit, new peers HAD to be found, even if the current peers were already acceptable.
For very small networks, this used to lead to an infinite search (see the sketch after this entry).

* `subscribeDynamicWithSyncSubnets`: Use exactly the same subscription function initially and every slot.

* Make deepsource happier.

* Nishant's comment: Change peer disconnected log.

* Nishant's comment: Change `Too many incoming subscription` log from error to debug.

* `FindPeersWithSubnet`: Address Nishant's comment.

* `batchSize`: Address Nishant's comment.

* `pingPeers` ==> `pingPeersAndLogEnr`.

* Update beacon-chain/sync/subscriber.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:30:29 +01:00
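A hedged sketch of the recheck described in this entry (hypothetical names, not the actual `FindPeersWithSubnet`): count the peers already connected on the topic before each discovery round, so a very small network does not trigger an endless search for peers that do not exist.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitForSubnetPeers returns as soon as the currently connected peers are
// enough for the topic; otherwise it keeps running discovery rounds until the
// context is cancelled.
func waitForSubnetPeers(ctx context.Context, have func() int, discoverMore func(), want int) error {
	for {
		if have() >= want {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(100 * time.Millisecond):
		}
		discoverMore()
	}
}

func main() {
	count := 0
	err := waitForSubnetPeers(context.Background(),
		func() int { return count },
		func() { count++ }, // pretend one new peer is found per round
		3)
	fmt.Println(count, err) // 3 <nil>
}
```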
Nishant Das
f92eb44c89 Add Data Column Computation Metrics (#14400)
* Add Data Column Metrics

* Shift it All To Peerdas Package
2024-11-27 10:24:03 +01:00
Nishant Das
a26980b64d Set Precompute at 8 (#14399) 2024-11-27 10:24:03 +01:00
Manu NALEPA
f58cf7e626 PeerDAS: Improve logging and reduce the number of needed goroutines for reconstruction (#14397)
* `broadcastAndReceiveDataColumns`: Use real `sidecar.ColumnIndex` instead of position in the slice.

And improve logging as well.

* `isDataColumnsAvailable`: Improve logging.

* `validateDataColumn`: Print `Accepted data column sidecar gossip` really at the end.

* Subscriber: Improve logging.

* `sendAndSaveDataColumnSidecars`: Use the commonly used logging function.

* `dataColumnSidecarByRootRPCHandler`: Logging - Print `all` instead of all the columns for a super node.

* Verification: Improve logging.

* `DataColumnsWithholdCount`: Set as `uint64` instead of `int`.

* `DataColumnFields`: Improve logging.

* Logging: Remove the now useless private `columnFields` function.

* Avoid useless goroutines blocking for reconstruction.

* Update beacon-chain/sync/subscriber.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Address Nishant's comment.

* Improve logging.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:24:03 +01:00
Nishant Das
68da7dabe2 Fix Bugs in PeerDAS Testing (#14396)
* Fix Various Bugs in PeerDAS

* Remove Log

* Remove useless copy var.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:24:03 +01:00
Nishant Das
d1e43a2c02 Change Custody Count to Uint8 (#14386)
* Add Changes for Uint8 Csc

* Fix Build

* Fix Build for Sync

* Fix Discovery Test
2024-11-27 10:24:03 +01:00
Nishant Das
3652bec2f8 Use Data Column Validation Across Prysm (#14377)
* Use Data Column Validation Everywhere

* Fix Build

* Fix Lint

* Fix Clock Synchronizer

* Fix Panic
2024-11-27 10:24:03 +01:00
Nishant Das
81b7a1725f Update Config To Latest Value (#14352)
* Update values

* Update Spec To v1.5.0-alpha.5

* Fix Discovery Tests

* Hardcode Subnet Count For Tests

* Fix All Initial Sync Tests

* Gazelle

* Less Chaotic Service Initialization

* Gazelle
2024-11-27 10:24:03 +01:00
Nishant Das
0c917079c4 Fix CI in PeerDAS (#14347)
* Update go.yml

* Disable mnd

* Update .golangci.yml

* Update go.yml

* Update go.yml

* Update .golangci.yml

* Update go.yml

* Fix Lint Issues

* Remove comment

* Update .golangci.yml
2024-11-27 10:24:03 +01:00
Manu NALEPA
a732fe7021 Implement /eth/v1/beacon/blob_sidecars/{block_id} for peerDAS. (#14312)
* `parseIndices`: `O(n**2)` ==> `O(n)`.

* PeerDAS: Implement `/eth/v1/beacon/blob_sidecars/{block_id}`.

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Rename some functions.

* `Blobs`: Fix empty slice.

* `recoverCellsAndProofs` --> Move function in `beacon-chain/core/peerdas`.

* peerDAS helpers: Add missing tests.

* Implement `CustodyColumnCount`.

* `RecoverCellsAndProofs`: Remove useless argument `columnsCount`.

* Tests: Add cleanups.

* `blobsFromStoredDataColumns`: Reconstruct if needed.

* Make deepsource happy.

* Beacon API: Use provided indices.

* Make deepsource happier.

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-27 10:24:03 +01:00
Nishant Das
d75a7aae6a Add Data Column Verification (#14287)
* Persist All Changes

* Fix All Tests

* Fix Build

* Fix Build

* Fix Build

* Fix Test Again

* Add missing verification

* Add Test Cases for Data Column Validation

* Fix comments for methods

* Fix comments for methods

* Fix Test

* Manu's Review
2024-11-27 10:24:03 +01:00
Manu NALEPA
e788a46e82 PeerDAS: Add MetadataV3 with custody_subnet_count (#14274)
* `sendPingRequest`: Add some comments.

* `sendPingRequest`: Replace `stream.Conn().RemotePeer()` by `peerID`.

* `pingHandler`: Add comments.

* `sendMetaDataRequest`: Add comments and implement an unique test.

* Gather `SchemaVersion`s in the same `const` definition.

* Define `SchemaVersionV3`.

* `MetaDataV1`: Fix comment.

* Proto: Define `MetaDataV2`.

* `MetaDataV2`: Generate SSZ.

* `newColumnSubnetIDs`: Use smaller lines.

* `metaDataHandler` and `sendMetaDataRequest`: Manage `MetaDataV2`.

* `RefreshPersistentSubnets`: Refactor tests (no functional change).

* `RefreshPersistentSubnets`: Refactor and add comments (no functional change).

* `RefreshPersistentSubnets`: Compare cache with both ENR & metadata.

* `RefreshPersistentSubnets`: Manage peerDAS.

* `registerRPCHandlersPeerDAS`: Register `RPCMetaDataTopicV3`.

* `CustodyCountFromRemotePeer`: Retrieve the count from metadata.

Then fall back to the ENR, then to the default value.

* Update beacon-chain/sync/rpc_metadata.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Fix duplicate case.

* Remove version testing.

* `debug.proto`: Stop breaking ordering.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:24:03 +01:00
Manu NALEPA
199543125a Fix data columns sampling (#14263)
* Fix the obvious...

* Data columns sampling: Modify logging.

* `waitForChainStart`: Make it thread-safe - only wait once.

* Sampling: Wait for chain start before running the sampling.

Reason: `newDataColumnSampler1D` needs `s.ctxMap`.
`s.ctxMap` is only set once the chain has started.

Previously, `waitForChainStart` was only called in `s.registerHandlers`, itself called in a goroutine.

==> We had a race condition here: sometimes `newDataColumnSampler1D` was called after `s.ctxMap` was set, sometimes not (see the sketch after this entry).

* Address Nishant's comments.

* Sampling: Improve logging.

* `waitForChainStart`: Remove `chainIsStarted` check.
2024-11-27 10:19:07 +01:00
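A hypothetical sketch of the "wait for chain start only once" fix described in this entry (the commit does not show its implementation; this assumes a `sync.Once`-style guard): every caller blocks until chain start has been observed, but the underlying wait runs a single time, so calling it from several goroutines cannot race.

```go
package main

import (
	"fmt"
	"sync"
)

// chainWaiter guards the chain-start wait so handler registration and
// sampling can both call waitForChainStart without racing.
type chainWaiter struct {
	once    sync.Once
	started chan struct{} // assumed to be closed when chain start is observed
}

func (w *chainWaiter) waitForChainStart() {
	w.once.Do(func() {
		<-w.started // the real code waits on a clock/state notifier instead
	})
}

func main() {
	w := &chainWaiter{started: make(chan struct{})}
	close(w.started) // simulate chain start
	w.waitForChainStart()
	w.waitForChainStart() // returns immediately, the wait already happened
	fmt.Println("chain started")
}
```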
Manu NALEPA
ca63efa770 PeerDAS: Fix initial sync (#14208)
* `SendDataColumnsByRangeRequest`: Add some new fields in logs.

* `BlobStorageSummary`: Implement `HasDataColumnIndex` and `AllDataColumnsAvailable`.

* Implement `fetchDataColumnsFromPeers`.

* `fetchBlobsFromPeer`: Return only one error.
2024-11-27 10:19:07 +01:00
Manu NALEPA
345e6edd9c Make deepsource happy (#14237)
* DeepSource: Pass heavy objects by pointers.

* `removeBlockFromQueue`: Remove redundant error checking.

* `fetchBlobsFromPeer`: Use same variable for `append`.

* Remove unused arguments.

* Combine types.

* `Persist`: Add documentation.

* Remove unused receiver

* Remove duplicated import.

* Stop using both pointer and value receiver at the same time.

* `verifyAndPopulateColumns`: Remove unused parameter

* Stop using an empty slice literal to declare a variable.
2024-11-27 10:19:07 +01:00
Manu NALEPA
6403064126 PeerDAS: Run reconstruction in parallel. (#14236)
* PeerDAS: Run reconstruction in parallel.

* `isDataAvailableDataColumns` --> `isDataColumnsAvailable`

* `isDataColumnsAvailable`: Return `nil` as soon as half of the columns are received.

* Make deepsource happy.
2024-11-27 10:19:07 +01:00
Justin Traglia
0517d76631 Update ckzg4844 to latest version of das branch (#14223)
* Update ckzg4844 to latest version

* Run go mod tidy

* Remove unnecessary tests & run goimports

* Remove fieldparams from blockchain/kzg

* Add back blank line

* Avoid large copies

* Run gazelle

* Use trusted setup from the specs & fix issue with struct

* Run goimports

* Fix mistake in makeCellsAndProofs

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:19:07 +01:00
Nishant Das
000d480f77 Add Current Changes (#14231) 2024-11-27 10:19:07 +01:00
Manu NALEPA
b40a8ed37e Implement and use filterPeerForDataColumnsSubnet. (#14230) 2024-11-27 10:19:07 +01:00
Francis Li
d21c2bd63e [PeerDAS] Parallelize data column sampling (#14105)
* PeerDAS: parallelizing sample queries

* PeerDAS: select sample from non custodied columns

* Finish rebase

* Add more test cases
2024-11-27 10:19:07 +01:00
kevaundray
7a256e93f7 chore!: Use RecoverCellsAndKZGProofs instead of RecoverAllCells -> CellsToBlob -> ComputeCellsAndKZGProofs (#14183)
* use recoverCellsAndKZGProofs

* make recoverAllCells and CellsToBlob private

* chore: all methods now return CellsAndProof struct

* chore: update code
2024-11-27 10:19:07 +01:00
Nishant Das
07fe76c2da Trigger PeerDAS At Deneb For E2E (#14193)
* Trigger At Deneb

* Fix Rate Limits
2024-11-27 10:19:07 +01:00
Manu NALEPA
54affa897f PeerDAS: Add KZG verification when sampling (#14187)
* `validateDataColumn`: Add comments and remove debug computation.

* `sampleDataColumnsFromPeer`: Add KZG verification

* `VerifyKZGInclusionProofColumn`: Add unit test.

* Make deepsource happy.

* Address Nishant's comment.

* Address Nishant's comment.
2024-11-27 10:16:50 +01:00
kevaundray
ac4c5fae3c chore!: Make Cell be a flat sequence of bytes (#14159)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to BUILD.bazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* make Cells be flattened sequence of bytes

* chore: add test for flattening roundtrip

* chore: remove code that was doing the flattening outside of the kzg package

* fix merge

* fix

* remove now un-needed conversion

* use pointers for Cell parameters

* linter

* rename cell conversion methods (this only applies to old version of c-kzg)
2024-11-27 10:16:50 +01:00
Manu NALEPA
2845d87077 Move log from error to debug. (#14194)
Reason: If a peer does not expose its `csc` field in its ENR,
then there is nothing we can do.
2024-11-27 10:16:50 +01:00
Nishant Das
dc2c90b8ed Activate PeerDAS with the EIP7594 Fork Epoch (#14184)
* Save All the Current Changes

* Add check for data sampling

* Fix Test

* Gazelle

* Manu's Review

* Fix Test
2024-11-27 10:16:50 +01:00
kevaundray
b469157e1f chore!: Refactor RecoverBlob to RecoverCellsAndProofs (#14160)
* change recoverBlobs to recoverCellsAndProofs

* modify code to take in the cells and proofs for a particular blob instead of the blob itself

* add CellsAndProofs structure

* modify recoverCellsAndProofs to return `cellsAndProofs` structure

* modify `DataColumnSidecarsForReconstruct` to accept the `cellsAndKZGProofs` structure

* bazel run //:gazelle -- fix

* use kzg abstraction for kzg method

* move CellsAndProofs to kzg.go
2024-11-27 10:16:50 +01:00
kevaundray
2697794e58 chore: Encapsulate all kzg functionality for PeerDAS into the kzg package (#14136)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to BUILD.bazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* use BytesPerBlob constant

* chore: fix some deepsource issues

* one declaration for commitments and blobs
2024-11-27 10:16:50 +01:00
Manu NALEPA
48cf24edb4 PeerDAS: Implement IncrementalDAS (#14109)
* `ConvertPeerIDToNodeID`: Add tests.

* Remove `extractNodeID` and uses `ConvertPeerIDToNodeID` instead.

* Implement IncrementalDAS.

* `DataColumnSamplingLoop` ==> `DataColumnSamplingRoutine`.

* HypergeomCDF: Add test.

* `GetValidCustodyPeers`: Optimize and add tests.

* Remove blank identifiers.

* Implement `CustodyCountFromRecord`.

* Implement `TestP2P.CustodyCountFromRemotePeer`.

* `NewTestP2P`: Add `swarmt.Option` parameters.

* `incrementalDAS`: Rework and add tests.

* Remove useless warning.
2024-11-27 10:16:50 +01:00
Francis Li
78f90db90b PeerDAS: add data column batch config (#14122) 2024-11-27 10:15:27 +01:00
Francis Li
d0a3b9bc1d [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-11-27 10:15:27 +01:00
Manu NALEPA
bfdb6dab86 Fix columns sampling (#14118) 2024-11-27 10:15:27 +01:00
Francis Li
7dd2fd52af [PeerDAS] implement DataColumnSidecarsByRootReq and fix related bugs (#14103)
* [PeerDAS] add data column related protos and fix data column by root bug

* Add more tests
2024-11-27 10:15:27 +01:00
Francis Li
b6bad9331b [PeerDAS] fixes and tests for gossiping out data columns (#14102)
* [PeerDAS] Minor fixes and tests for gossiping out data columns

* Fix metrics
2024-11-27 10:15:27 +01:00
Francis Li
6e2122085d [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-11-27 10:15:27 +01:00
Manu NALEPA
7a847292aa PeerDAS: Stop generating new P2P private key at start. (#14099)
* `privKey`: Improve logs.

* peerDAS: Move functions in file. Add documentation.

* PeerDAS: Remove unused `ComputeExtendedMatrix` and `RecoverMatrix` functions.

* PeerDAS: Stop generating new P2P private key at start.

* Fix Sammy's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
81f4db0afa PeerDAS: Gossip the reconstructed columns (#14079)
* PeerDAS: Broadcast reconstructed data columns that were not seen via gossip.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
a7dc2e6c8b PeerDAS: Only save custodied columns, even after reconstruction. (#14083) 2024-11-27 10:15:27 +01:00
Manu NALEPA
0a010b5088 recoverBlobs: Cover the 0 < blobsCount < fieldparams.MaxBlobsPerBlock case. (#14066)
* `recoverBlobs`: Cover the `0 < blobsCount < fieldparams.MaxBlobsPerBlock` case.

* Fix Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
1e335e2cf2 PeerDAS: Withhold data on purpose. (#14076)
* Introduce hidden flag `data-columns-withhold-count`.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
42f4c0f14e PeerDAS: Implement / use data column feed from database. (#14062)
* Remove some `_` identifiers.

* Blob storage: Implement a notifier system for data columns.

* `dataColumnSidecarByRootRPCHandler`: Remove ugly `time.Sleep(100 * time.Millisecond)`.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
d3c12abe25 PeerDAS: Implement reconstruction. (#14036)
* Wrap errors, add logs.

* `missingColumnRequest`: Fix blobs <-> data columns mix.

* `ColumnIndices`: Return `map[uint64]bool` instead of `[fieldparams.NumberOfColumns]bool`.

* `DataColumnSidecars`: `interfaces.SignedBeaconBlock` ==> `interfaces.ReadOnlySignedBeaconBlock`.

We don't need any of the non read-only methods.

* Fix comments.

* `handleUnblidedBlock` ==> `handleUnblindedBlock`.

* `SaveDataColumn`: Move log from debug to trace.

If we attempted to save an already existing data column sidecar,
a debug log was printed.

This case could be quite common now with the data column reconstruction enabled.

* `sampling_data_columns.go` --> `data_columns_sampling.go`.

* Reconstruct data columns.
2024-11-27 10:15:27 +01:00
Nishant Das
b0ba05b4f4 Fix Custody Columns (#14021) 2024-11-27 10:15:27 +01:00
Nishant Das
e206506489 Disable Evaluators For E2E (#14019)
* Hack E2E

* Fix it For Real

* Gofmt

* Remove
2024-11-27 10:15:27 +01:00
Nishant Das
013cb28663 Request Data Columns When Fetching Pending Blocks (#14007)
* Support Data Columns For By Root Requests

* Revert Config Changes

* Fix Panic

* Fix Process Block

* Fix Flags

* Lint

* Support Checkpoint Sync

* Manu's Review

* Add Support For Columns in Remaining Methods

* Unmarshal Uncorrectly
2024-11-27 10:15:27 +01:00
Manu NALEPA
496914cb39 Fix CustodyColumns to comply with alpha-2 spectests. (#14008)
* Adding error wrapping

* Fix `CustodyColumnSubnets` tests.
2024-11-27 10:15:27 +01:00
Nishant Das
c032e78888 Set Custody Count Correctly (#14004)
* Set Custody Count Correctly

* Fix Discovery Count
2024-11-27 10:15:26 +01:00
Manu NALEPA
5e4deff6fd Sample from peers some data columns. (#13980)
* PeerDAS: Implement sampling.

* `TestNewRateLimiter`: Fix with the new number of expected registered topics.
2024-11-27 10:15:26 +01:00
Nishant Das
6daa91c465 Implement Data Columns By Range Request And Response Methods (#13972)
* Add Data Structure for New Request Type

* Add Data Column By Range Handler

* Add Data Column Request Methods

* Add new validation for columns by range requests

* Fix Build

* Allow Prysm Node To Fetch Data Columns

* Allow Prysm Node To Fetch Data Columns And Sync

* Bug Fixes For Interop

* GoFmt

* Use different var

* Manu's Review
2024-11-27 10:15:26 +01:00
Nishant Das
32ce6423eb Enable E2E For PeerDAS (#13945)
* Enable E2E And Add Fixes

* Register Same Topic For Data Columns

* Initialize Capacity Of Slice

* Fix Initialization of Data Column Receiver

* Remove Mix In From Merkle Proof

* E2E: Subscribe to all subnets.

* Remove Index Check

* Remaining Bug Fixes to Get It Working

* Change Evaluator to Allow Test to Finish

* Fix Build

* Add Data Column Verification

* Fix LoopVar Bug

* Do Not Allocate Memory

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Gofmt

* Fix It Again

* Fix Test Setup

* Fix Build

* Fix Trusted Setup panic

* Fix Trusted Setup panic

* Use New Test

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Justin Traglia
b0ea450df5 [PeerDAS] Upgrade c-kzg-4844 package (#13967)
* Upgrade c-kzg-4844 package

* Upgrade bazel deps
2024-11-27 10:15:26 +01:00
Manu NALEPA
8bd10df423 SendDataColumnSidecarByRoot: Return RODataColumn instead of ROBlob. (#13957)
* `SendDataColumnSidecarByRoot`: Return `RODataColumn` instead of `ROBlob`.

* Make deepsource happier.
2024-11-27 10:15:26 +01:00
Manu NALEPA
dcbb543be2 Spectests (#13940)
* Update `consensus_spec_version` to `v1.5.0-alpha.1`.

* `CustodyColumns`: Fix and implement spec tests.

* Make deepsource happy.

* `^uint64(0)` => `math.MaxUint64`.

* Fix `TestLoadConfigFile` test.
2024-11-27 10:15:26 +01:00
Nishant Das
be0580e1a9 Add DA Check For Data Columns (#13938)
* Add new DA check

* Exit early in the event no commitments exist.

* Gazelle

* Fix Mock Broadcaster

* Fix Test Setup

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Fix Build

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Manu NALEPA
1355178115 Implement peer DAS proposer RPC (#13922)
* Remove capital letter from error messages.

* `[4]byte` => `[fieldparams.VersionLength]byte`.

* Prometheus: Remove extra `committee`.

They are probably due to a bad copy/paste.

Note: The name of the probe itself remains,
to ensure backward compatibility.

* Implement Proposer RPC for data columns.

* Fix TestProposer_ProposeBlock_OK test.

* Remove default peerDAS activation.

* `validateDataColumn`: Workaround to return a `VerifiedRODataColumn`
2024-11-27 10:15:26 +01:00
Nishant Das
b78c3485b9 Update .bazelrc (#13931) 2024-11-27 10:15:26 +01:00
Manu NALEPA
f503efc6ed Implement custody_subnet_count ENR field. (#13915)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
2024-11-27 10:15:26 +01:00
Manu NALEPA
1bfbd3980e Peer das core (#13877)
* Bump `c-kzg-4844` lib to the `das` branch.

* Implement `MerkleProofKZGCommitments`.

* Implement `das-core.md`.

* Use `peerdas.CustodyColumnSubnets` and `peerdas.CustodyColumns`.

* `CustodyColumnSubnets`: Include `i` in the for loop.

* Remove `computeSubscribedColumnSubnet`.

* Remove `peerdas.CustodyColumns` out of the for loop.
2024-11-27 10:15:26 +01:00
Nishant Das
3e722ea1bc Add Request And Response RPC Methods For Data Columns (#13909)
* Add RPC Handler

* Add Column Requests

* Update beacon-chain/db/filesystem/blob.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/p2p/rpc_topic_mappings.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Manu's Review

* Interface Fixes

* mock manager

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Nishant Das
d844026433 Add Data Column Gossip Handlers (#13894)
* Add Data Column Subscriber

* Add Data Column Vaidator

* Wire all Handlers In

* Fix Build

* Fix Test

* Fix IP in Test

* Fix IP in Test
2024-11-27 10:15:26 +01:00
Nishant Das
9ffc19d5ef Add Support For Discovery Of Column Subnets (#13883)
* Add Support For Discovery Of Column Subnets

* Lint for SubnetsPerNode

* Manu's Review

* Change to a better name
2024-11-27 10:15:26 +01:00
Nishant Das
3e23f6e879 add it (#13865) 2024-11-27 10:11:55 +01:00
Manu NALEPA
c688c84393 Add in column sidecars protos (#13862) 2024-11-27 10:11:55 +01:00
230 changed files with 18978 additions and 3227 deletions

View File

@@ -22,6 +22,7 @@ coverage --define=coverage_enabled=1
build --workspace_status_command=./hack/workspace_status.sh
build --define blst_disabled=false
build --compilation_mode=opt
run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true

View File

@@ -432,32 +432,6 @@ func (a *AttestationElectra) ToConsensus() (*eth.AttestationElectra, error) {
}, nil
}
func (a *SingleAttestation) ToConsensus() (*eth.SingleAttestation, error) {
ci, err := strconv.ParseUint(a.CommitteeIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "CommitteeIndex")
}
ai, err := strconv.ParseUint(a.AttesterIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "AttesterIndex")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
}
sig, err := bytesutil.DecodeHexWithLength(a.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SingleAttestation{
CommitteeId: primitives.CommitteeIndex(ci),
AttesterIndex: primitives.ValidatorIndex(ai),
Data: data,
Signature: sig,
}, nil
}
func AttElectraFromConsensus(a *eth.AttestationElectra) *AttestationElectra {
return &AttestationElectra{
AggregationBits: hexutil.Encode(a.AggregationBits),

View File

@@ -36,13 +36,6 @@ type AttestationElectra struct {
CommitteeBits string `json:"committee_bits"`
}
type SingleAttestation struct {
CommitteeIndex string `json:"committee_index"`
AttesterIndex string `json:"attester_index"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
}
type AttestationData struct {
Slot string `json:"slot"`
CommitteeIndex string `json:"index"`

View File

@@ -25,6 +25,7 @@ go_library(
"receive_attestation.go",
"receive_blob.go",
"receive_block.go",
"receive_data_column.go",
"service.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
@@ -48,6 +49,7 @@ go_library(
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -157,6 +159,7 @@ go_test(
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",

View File

@@ -33,6 +33,7 @@ var (
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")
var errMaxDataColumnsExceeded = errors.New("Expected data columns for node exceeds NUMBER_OF_COLUMNS")
// An invalid block is the block that fails state transition based on the core protocol rules.
// The beacon node shall not be accepting nor building blocks that branch off from an invalid block.

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"kzg.go",
"trusted_setup.go",
"validation.go",
],
@@ -12,6 +13,9 @@ go_library(
deps = [
"//consensus-types/blocks:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -0,0 +1,111 @@
package kzg
import (
"errors"
ckzg4844 "github.com/ethereum/c-kzg-4844/v2/bindings/go"
"github.com/ethereum/go-ethereum/crypto/kzg4844"
)
// BytesPerBlob is the number of bytes in a single blob.
const BytesPerBlob = ckzg4844.BytesPerBlob
// Blob represents a serialized chunk of data.
type Blob [BytesPerBlob]byte
// BytesPerCell is the number of bytes in a single cell.
const BytesPerCell = ckzg4844.BytesPerCell
// Cell represents a chunk of an encoded Blob.
type Cell [BytesPerCell]byte
// Commitment represent a KZG commitment to a Blob.
type Commitment [48]byte
// Proof represents a KZG proof that attests to the validity of a Blob or parts of it.
type Proof [48]byte
// Bytes48 is a 48-byte array.
type Bytes48 = ckzg4844.Bytes48
// Bytes32 is a 32-byte array.
type Bytes32 = ckzg4844.Bytes32
// CellsAndProofs represents the Cells and Proofs corresponding to
// a single blob.
type CellsAndProofs struct {
Cells []Cell
Proofs []Proof
}
func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
kzgBlob := kzg4844.Blob(*blob)
comm, err := kzg4844.BlobToCommitment(&kzgBlob)
if err != nil {
return Commitment{}, err
}
return Commitment(comm), nil
}
func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
kzgBlob := kzg4844.Blob(*blob)
proof, err := kzg4844.ComputeBlobProof(&kzgBlob, kzg4844.Commitment(commitment))
if err != nil {
return [48]byte{}, err
}
return Proof(proof), nil
}
func ComputeCellsAndKZGProofs(blob *Blob) (CellsAndProofs, error) {
ckzgBlob := (*ckzg4844.Blob)(blob)
ckzgCells, ckzgProofs, err := ckzg4844.ComputeCellsAndKZGProofs(ckzgBlob)
if err != nil {
return CellsAndProofs{}, err
}
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, cells []Cell, proofsBytes []Bytes48) (bool, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgCells := make([]ckzg4844.Cell, len(cells))
for i := range cells {
ckzgCells[i] = ckzg4844.Cell(cells[i])
}
return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
}
func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsAndProofs, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))
for i := range partialCells {
ckzgPartialCells[i] = ckzg4844.Cell(partialCells[i])
}
ckzgCells, ckzgProofs, err := ckzg4844.RecoverCellsAndKZGProofs(cellIndices, ckzgPartialCells)
if err != nil {
return CellsAndProofs{}, err
}
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
// Convert cells/proofs to the CellsAndProofs type defined in this package.
func makeCellsAndProofs(ckzgCells []ckzg4844.Cell, ckzgProofs []ckzg4844.KZGProof) (CellsAndProofs, error) {
if len(ckzgCells) != len(ckzgProofs) {
return CellsAndProofs{}, errors.New("different number of cells/proofs")
}
var cells []Cell
var proofs []Proof
for i := range ckzgCells {
cells = append(cells, Cell(ckzgCells[i]))
proofs = append(proofs, Proof(ckzgProofs[i]))
}
return CellsAndProofs{
Cells: cells,
Proofs: proofs,
}, nil
}
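A hedged usage sketch of the kzg package shown above (the import path is assumed from the repository layout, and error handling is minimal): compute cells and proofs for a blob, then recover the full set from half of the cells via `RecoverCellsAndKZGProofs`.

```go
package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg" // assumed import path
)

func main() {
	// Load the trusted setup before calling any other kzg function.
	if err := kzg.Start(); err != nil {
		panic(err)
	}

	var blob kzg.Blob // a zero blob is enough to exercise the API
	full, err := kzg.ComputeCellsAndKZGProofs(&blob)
	if err != nil {
		panic(err)
	}

	// Keep only the first half of the cells and recover the rest.
	half := len(full.Cells) / 2
	indices := make([]uint64, half)
	partial := make([]kzg.Cell, half)
	for i := 0; i < half; i++ {
		indices[i] = uint64(i)
		partial[i] = full.Cells[i]
	}

	recovered, err := kzg.RecoverCellsAndKZGProofs(indices, partial)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(recovered.Cells), len(recovered.Proofs)) // expected: 128 128
}
```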

View File

@@ -5,6 +5,8 @@ import (
"encoding/json"
GoKZG "github.com/crate-crypto/go-kzg-4844"
CKZG "github.com/ethereum/c-kzg-4844/v2/bindings/go"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
)
@@ -12,17 +14,53 @@ var (
//go:embed trusted_setup.json
embeddedTrustedSetup []byte // 1.2Mb
kzgContext *GoKZG.Context
kzgLoaded bool
)
type TrustedSetup struct {
G1Monomial [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_monomial"`
G1Lagrange [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_lagrange"`
G2Monomial [65]GoKZG.G2CompressedHexStr `json:"g2_monomial"`
}
func Start() error {
parsedSetup := GoKZG.JSONTrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, &parsedSetup)
trustedSetup := &TrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, trustedSetup)
if err != nil {
return errors.Wrap(err, "could not parse trusted setup JSON")
}
kzgContext, err = GoKZG.NewContext4096(&parsedSetup)
kzgContext, err = GoKZG.NewContext4096(&GoKZG.JSONTrustedSetup{
SetupG2: trustedSetup.G2Monomial[:],
SetupG1Lagrange: trustedSetup.G1Lagrange})
if err != nil {
return errors.Wrap(err, "could not initialize go-kzg context")
}
// Length of a G1 point, converted from hex to binary.
g1MonomialBytes := make([]byte, len(trustedSetup.G1Monomial)*(len(trustedSetup.G1Monomial[0])-2)/2)
for i, g1 := range &trustedSetup.G1Monomial {
copy(g1MonomialBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G1 point, converted from hex to binary.
g1LagrangeBytes := make([]byte, len(trustedSetup.G1Lagrange)*(len(trustedSetup.G1Lagrange[0])-2)/2)
for i, g1 := range &trustedSetup.G1Lagrange {
copy(g1LagrangeBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G2 point, converted from hex to binary.
g2MonomialBytes := make([]byte, len(trustedSetup.G2Monomial)*(len(trustedSetup.G2Monomial[0])-2)/2)
for i, g2 := range &trustedSetup.G2Monomial {
copy(g2MonomialBytes[i*(len(g2)-2)/2:], hexutil.MustDecode(g2))
}
if !kzgLoaded {
// TODO: Provide a configuration option for this.
var precompute uint = 8
// Free the current trusted setup before running this method. CKZG
// panics if the same setup is run multiple times.
if err = CKZG.LoadTrustedSetup(g1MonomialBytes, g1LagrangeBytes, g2MonomialBytes, precompute); err != nil {
panic(err)
}
}
kzgLoaded = true
return nil
}

File diff suppressed because it is too large

View File

@@ -126,9 +126,9 @@ func WithBLSToExecPool(p blstoexec.PoolManager) Option {
}
// WithP2PBroadcaster to broadcast messages after appropriate processing.
func WithP2PBroadcaster(p p2p.Broadcaster) Option {
func WithP2PBroadcaster(p p2p.Acceser) Option {
return func(s *Service) error {
s.cfg.P2p = p
s.cfg.P2P = p
return nil
}
}

View File

@@ -3,11 +3,13 @@ package blockchain
import (
"context"
"fmt"
"slices"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
@@ -234,7 +236,9 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return err
}
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
nodeID := s.cfg.P2P.NodeID()
if err := avs.IsDataAvailable(ctx, nodeID, s.CurrentSlot(), b); err != nil {
return errors.Wrapf(err, "could not validate blob data availability at slot %d", b.Block().Slot())
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
@@ -514,7 +518,7 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
}
indices, err := bs.Indices(root, slot)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "indices")
}
missing := make(map[uint64]struct{}, len(expected))
for i := range expected {
@@ -528,12 +532,40 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
return missing, nil
}
func missingDataColumns(bs *filesystem.BlobStorage, root [32]byte, expected map[uint64]bool) (map[uint64]bool, error) {
if len(expected) == 0 {
return nil, nil
}
if len(expected) > int(params.BeaconConfig().NumberOfColumns) {
return nil, errMaxDataColumnsExceeded
}
indices, err := bs.ColumnIndices(root)
if err != nil {
return nil, err
}
missing := make(map[uint64]bool, len(expected))
for col := range expected {
if !indices[col] {
missing[col] = true
}
}
return missing, nil
}
// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if coreTime.PeerDASIsActive(signed.Block().Slot()) {
return s.areDataColumnsAvailable(ctx, root, signed)
}
if signed.Version() < version.Deneb {
return nil
}
@@ -563,7 +595,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
// get a map of BlobSidecar indices that are not currently available.
missing, err := missingIndices(s.blobStorage, root, kzgCommitments, block.Slot())
if err != nil {
return err
return errors.Wrap(err, "missing indices")
}
// If there are no missing indices, all BlobSidecars are available.
if len(missing) == 0 {
@@ -582,8 +614,13 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
if len(missing) == 0 {
return
}
log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
Error("Still waiting for DA check at slot end.")
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": len(missing),
}).Error("Still waiting for blobs DA check at slot end.")
})
defer nst.Stop()
}
@@ -605,12 +642,176 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
}
}
func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
return logrus.Fields{
"slot": slot,
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": missing,
// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
output := make([]uint64, 0, len(input))
for idx := range input {
output = append(output, idx)
}
slices.Sort[[]uint64](output)
return output
}
func (s *Service) areDataColumnsAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if signed.Version() < version.Fulu {
return nil
}
block := signed.Block()
if block == nil {
return errors.New("invalid nil beacon block")
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return nil
}
body := block.Body()
if body == nil {
return errors.New("invalid nil beacon block body")
}
kzgCommitments, err := body.BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "blob KZG commitments")
}
// If the block has no commitments, there is nothing to wait for.
if len(kzgCommitments) == 0 {
return nil
}
// All columns to sample need to be available for the block to be considered available.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
nodeID := s.cfg.P2P.NodeID()
custodyGroupSamplingSize := peerdas.CustodyGroupSamplingSize()
custodyGroups, err := peerdas.CustodyGroups(nodeID, custodyGroupSamplingSize)
if err != nil {
return errors.Wrap(err, "custody groups")
}
// Exit early if the node is not expected to custody any data columns.
if len(custodyGroups) == 0 {
return nil
}
// Get the custody columns from the groups.
columnsMap, err := peerdas.CustodyColumns(custodyGroups)
if err != nil {
return errors.Wrap(err, "custody columns")
}
// Subscribe to new data columns stored in the database.
rootIndexChan := make(chan filesystem.RootIndexPair)
subscription := s.blobStorage.DataColumnFeed.Subscribe(rootIndexChan)
defer subscription.Unsubscribe()
// Get the count of data columns we already have in the store.
retrievedDataColumns, err := s.blobStorage.ColumnIndices(root)
if err != nil {
return errors.Wrap(err, "column indices")
}
retrievedDataColumnsCount := uint64(len(retrievedDataColumns))
// As soon as we have more than half of the data columns, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if peerdas.CanSelfReconstruct(retrievedDataColumnsCount) {
return nil
}
// Get a map of data column indices that are not currently available.
missingMap, err := missingDataColumns(s.blobStorage, root, columnsMap)
if err != nil {
return err
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missingMap) == 0 {
return nil
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
missingMapCount := uint64(len(missingMap))
if missingMapCount == 0 {
return
}
var (
expected interface{} = "all"
missing interface{} = "all"
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
colMapCount := uint64(len(columnsMap))
if colMapCount < numberOfColumns {
expected = uint64MapToSortedSlice(columnsMap)
}
if missingMapCount < numberOfColumns {
missing = uint64MapToSortedSlice(missingMap)
}
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": missing,
}).Error("Some data columns are still unavailable at slot end")
})
defer nst.Stop()
}
for {
select {
case rootIndex := <-rootIndexChan:
if rootIndex.Root != root {
// This is not the root we are looking for.
continue
}
// This is a data column we are expecting.
if _, ok := missingMap[rootIndex.Index]; ok {
retrievedDataColumnsCount++
}
// As soon as we have more than half of the data columns, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if peerdas.CanSelfReconstruct(retrievedDataColumnsCount) {
return nil
}
// Remove the index from the missing map.
delete(missingMap, rootIndex.Index)
// Exit if there is no more missing data columns.
if len(missingMap) == 0 {
return nil
}
case <-ctx.Done():
var missingIndices interface{} = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missingMap))
if missingIndicesCount < numberOfColumns {
missingIndices = uint64MapToSortedSlice(missingMap)
}
return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndices)
}
}
}
@@ -697,7 +898,7 @@ func (s *Service) waitForSync() error {
}
}
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot [32]byte, parentRoot [32]byte) error {
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [32]byte) error {
if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
}
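The early exit in `areDataColumnsAvailable` above relies on the erasure-coding property that holding enough of the extended columns allows the node to reconstruct the rest. A minimal sketch of that threshold check, under the assumption that `peerdas.CanSelfReconstruct` compares the retrieved count against half of `NUMBER_OF_COLUMNS`:

```go
package main

import "fmt"

// canSelfReconstruct is a hypothetical sketch of the threshold used above:
// with the 1D extension over NUMBER_OF_COLUMNS columns (128 on mainnet),
// holding at least half of the columns is assumed sufficient to reconstruct
// the rest, so the DA wait can stop early.
func canSelfReconstruct(retrievedColumns, numberOfColumns uint64) bool {
	return 2*retrievedColumns >= numberOfColumns
}

func main() {
	fmt.Println(canSelfReconstruct(63, 128)) // false
	fmt.Println(canSelfReconstruct(64, 128)) // true
}
```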

View File

@@ -51,6 +51,12 @@ type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}
// DataColumnReceiver interface defines the methods of chain service for receiving new
// data columns
type DataColumnReceiver interface {
ReceiveDataColumn(blocks.VerifiedRODataColumn) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)
@@ -69,6 +75,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring already synced block")
return nil
}
receivedTime := time.Now()
s.blockBeingSynced.set(blockRoot)
defer s.blockBeingSynced.unset(blockRoot)
@@ -77,6 +84,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
@@ -92,10 +100,12 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return err
}
// Defragment the state before continuing block processing.
s.defragmentState(postState)
@@ -231,12 +241,14 @@ func (s *Service) handleDA(
if err != nil {
return 0, err
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
nodeID := s.cfg.P2P.NodeID()
if err := avs.IsDataAvailable(ctx, nodeID, s.CurrentSlot(), rob); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
}
} else {
if err := s.isDataAvailable(ctx, blockRoot, block); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability")
return 0, errors.Wrap(err, "is data available")
}
}
daWaitedTime := time.Since(daStartTime)

View File

@@ -0,0 +1,14 @@
package blockchain
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
)
func (s *Service) ReceiveDataColumn(ds blocks.VerifiedRODataColumn) error {
if err := s.blobStorage.SaveDataColumn(ds); err != nil {
return errors.Wrap(err, "save data column")
}
return nil
}

View File

@@ -33,6 +33,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -80,7 +81,7 @@ type config struct {
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2p p2p.Broadcaster
P2P p2p.Acceser
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
@@ -105,22 +106,26 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
type blobNotifierMap struct {
sync.RWMutex
notifiers map[[32]byte]chan uint64
seenIndex map[[32]byte][]bool
// TODO: Separate blobs from data columns
// seenIndex map[[32]byte][]bool
seenIndex map[[32]byte][fieldparams.NumberOfColumns]bool
}
// notifyIndex notifies a blob by its index for a given root.
// It uses internal maps to keep track of seen indices and notifier channels.
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitives.Slot) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if idx >= uint64(maxBlobsPerBlock) {
return
}
// TODO: Separate blobs from data columns
// maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
// if idx >= uint64(maxBlobsPerBlock) {
// return
// }
bn.Lock()
seen := bn.seenIndex[root]
if seen == nil {
seen = make([]bool, maxBlobsPerBlock)
}
// TODO: Separate blobs from data columns
// if seen == nil {
// seen = make([]bool, maxBlobsPerBlock)
// }
if seen[idx] {
bn.Unlock()
return
@@ -131,7 +136,9 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitive
// Retrieve or create the notifier channel for the given root.
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, maxBlobsPerBlock)
// TODO: Separate blobs from data columns
// c = make(chan uint64, maxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
@@ -141,12 +148,15 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitive
}
func (bn *blobNotifierMap) forRoot(root [32]byte, slot primitives.Slot) chan uint64 {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
// TODO: Separate blobs from data columns
// maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
bn.Lock()
defer bn.Unlock()
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, maxBlobsPerBlock)
// TODO: Separate blobs from data columns
// c = make(chan uint64, maxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
return c
@@ -172,7 +182,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
bn := &blobNotifierMap{
notifiers: make(map[[32]byte]chan uint64),
seenIndex: make(map[[32]byte][]bool),
// TODO: Separate blobs from data columns
// seenIndex: make(map[[32]byte][]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
}
srv := &Service{
ctx: ctx,

View File

@@ -97,13 +97,14 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithSlashingPool(slashings.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithP2PBroadcaster(&mockAccesser{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fc),
WithAttestationService(attService),
WithStateGen(stateGen),
WithPayloadIDCache(cache.NewPayloadIDCache()),
WithClockSynchronizer(startup.NewClockSynchronizer()),
WithP2PBroadcaster(&mockAccesser{}),
}
chainService, err := NewService(ctx, opts...)
@@ -587,7 +588,9 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
func TestNotifyIndex(t *testing.T) {
// Initialize a blobNotifierMap
bn := &blobNotifierMap{
seenIndex: make(map[[32]byte][]bool),
// TODO: Separate blobs from data columns
// seenIndex: make(map[[32]byte][]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
notifiers: make(map[[32]byte]chan uint64),
}

View File

@@ -19,8 +19,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2pTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"google.golang.org/protobuf/proto"
@@ -45,6 +47,11 @@ type mockBroadcaster struct {
broadcastCalled bool
}
type mockAccesser struct {
mockBroadcaster
p2pTesting.MockPeerManager
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil
@@ -65,6 +72,11 @@ func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.B
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumn(_ context.Context, _ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}
@@ -122,6 +134,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithSyncChecker(mock.MockChecker{}),
WithExecutionEngineCaller(&mockExecution.EngineClient{}),
WithP2PBroadcaster(&mockAccesser{}),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)

View File

@@ -702,6 +702,11 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
return nil
}
// ReceiveDataColumn implements the same method in chain service
func (*ChainService) ReceiveDataColumn(_ blocks.VerifiedRODataColumn) error {
return nil
}
// TargetRootForEpoch mocks the same method in the chain service
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil

View File

@@ -9,6 +9,7 @@ go_library(
"attestation_data.go",
"balance_cache_key.go",
"checkpoint_state.go",
"column_subnet_ids.go",
"committee.go",
"committee_disabled.go", # keep
"committees.go",

beacon-chain/cache/column_subnet_ids.go (new file, 70 lines)
View File

@@ -0,0 +1,70 @@
package cache
import (
"sync"
"time"
"github.com/patrickmn/go-cache"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
type columnSubnetIDs struct {
colSubCache *cache.Cache
colSubLock sync.RWMutex
}
// ColumnSubnetIDs for column subnet participants
var ColumnSubnetIDs = newColumnSubnetIDs()
const columnKey = "columns"
func newColumnSubnetIDs() *columnSubnetIDs {
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
epochDuration := time.Duration(slotsPerEpoch.Mul(secondsPerSlot))
// Set the default duration of a column subnet subscription as the column expiry period.
minEpochsForDataColumnSidecarsRequest := time.Duration(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
subLength := epochDuration * minEpochsForDataColumnSidecarsRequest
persistentCache := cache.New(subLength*time.Second, epochDuration*time.Second)
return &columnSubnetIDs{colSubCache: persistentCache}
}
// GetColumnSubnets retrieves the data column subnets.
func (s *columnSubnetIDs) GetColumnSubnets() ([]uint64, bool, time.Time) {
s.colSubLock.RLock()
defer s.colSubLock.RUnlock()
id, duration, ok := s.colSubCache.GetWithExpiration(columnKey)
if !ok {
return nil, false, time.Time{}
}
// Retrieve indices from the cache.
idxs, ok := id.([]uint64)
if !ok {
return nil, false, time.Time{}
}
return idxs, ok, duration
}
// AddColumnSubnets adds the relevant data column subnets.
func (s *columnSubnetIDs) AddColumnSubnets(colIdx []uint64) {
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Set(columnKey, colIdx, 0)
}
// EmptyAllCaches empties out all the related caches and flushes any stored entries on them.
// This should only ever be used for testing; in normal production, handling of the relevant
// subnets for each role is done separately.
func (s *columnSubnetIDs) EmptyAllCaches() {
// Clear the cache.
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Flush()
}
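
A minimal usage sketch from a hypothetical caller outside this package (assumes the `fmt` import and the beacon-chain `cache` package; the subnet indices are placeholders):

func columnSubnetsSketch() {
	// Record which column subnets the node joined.
	cache.ColumnSubnetIDs.AddColumnSubnets([]uint64{0, 5, 17})

	// Later, read them back together with the cache expiration time.
	if subnets, ok, expiration := cache.ColumnSubnetIDs.GetColumnSubnets(); ok {
		fmt.Printf("column subnets %v cached until %s\n", subnets, expiration)
	}
}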

View File

@@ -96,6 +96,24 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
func VerifyBlockHeaderSignatureUsingCurrentFork(beaconState state.BeaconState, header *ethpb.SignedBeaconBlockHeader) error {
currentEpoch := slots.ToEpoch(header.Header.Slot)
fork, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return err
}
proposer, err := beaconState.ValidatorAtIndex(header.Header.ProposerIndex)
if err != nil {
return err
}
proposerPubKey := proposer.PublicKey
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
// VerifyBlockSignatureUsingCurrentFork verifies the proposer signature of a beacon block. This differs
// from the above method by not using fork data from the state and instead retrieving it
// via the respective epoch.

View File

@@ -32,6 +32,9 @@ const (
// AttesterSlashingReceived is sent after an attester slashing is received from gossip or rpc
AttesterSlashingReceived = 8
// DataColumnSidecarReceived is sent after a data column sidecar is received from gossip or rpc.
DataColumnSidecarReceived = 9
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -77,3 +80,7 @@ type ProposerSlashingReceivedData struct {
type AttesterSlashingReceivedData struct {
AttesterSlashing ethpb.AttSlashing
}
type DataColumnSidecarReceivedData struct {
DataColumn *blocks.VerifiedRODataColumn
}

View File

@@ -32,9 +32,6 @@ func ValidateNilAttestation(attestation ethpb.Att) error {
if attestation.GetData().Target == nil {
return errors.New("attestation's target can't be nil")
}
if attestation.IsSingle() {
return nil
}
if attestation.GetAggregationBits() == nil {
return errors.New("attestation's bitfield can't be nil")
}

View File

@@ -308,16 +308,6 @@ func TestValidateNilAttestation(t *testing.T) {
},
errString: "",
},
{
name: "single attestation",
attestation: &ethpb.SingleAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{},
Source: &ethpb.Checkpoint{},
},
},
errString: "",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {

View File

@@ -97,13 +97,6 @@ func TestVerifyBitfieldLength_OK(t *testing.T) {
assert.NoError(t, helpers.VerifyBitfieldLength(bf, committeeSize), "Bitfield is not validated when it was supposed to be")
}
func TestVerifyBitfieldLength_Incorrect(t *testing.T) {
helpers.ClearCache()
bf := bitfield.NewBitlist(1)
require.ErrorContains(t, "wanted participants bitfield length 2, got: 1", helpers.VerifyBitfieldLength(bf, 2))
}
func TestCommitteeAssignments_CannotRetrieveFutureEpoch(t *testing.T) {
helpers.ClearCache()

View File

@@ -78,6 +78,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -264,6 +265,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
}
func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
params.SetupTestConfigCleanup(t)
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)

View File

@@ -0,0 +1,51 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"helpers.go",
"log.go",
"metrics.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["helpers_test.go"],
deps = [
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -0,0 +1,677 @@
package peerdas
import (
"context"
"encoding/binary"
"fmt"
"math"
"math/big"
"slices"
"time"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/holiman/uint256"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
const (
CustodyGroupCountEnrKey = "cgc"
)
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/p2p-interface.md#the-discovery-domain-discv5
type Cgc uint64
func (Cgc) ENRKey() string { return CustodyGroupCountEnrKey }
var (
// Custom errors
errCustodyGroupCountTooLarge = errors.New("custody group count too large")
errWrongComputedCustodyGroupCount = errors.New("wrong computed custody group count, should never happen")
errIndexTooLarge = errors.New("column index is larger than the specified columns count")
errMismatchLength = errors.New("mismatch in the length of the commitments and proofs")
errRecordNil = errors.New("record is nil")
errCannotLoadCustodyGroupCount = errors.New("cannot load the custody group count from peer")
// maxUint256 is the maximum value of a uint256.
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
)
// CustodyGroups computes the custody groups the node should participate in for custody.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#get_custody_groups
func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) (map[uint64]bool, error) {
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
// Check if the custody group count is larger than the number of custody groups.
if custodyGroupCount > numberOfCustodyGroup {
return nil, errCustodyGroupCountTooLarge
}
custodyGroups := make(map[uint64]bool, custodyGroupCount)
one := uint256.NewInt(1)
for currentId := new(uint256.Int).SetBytes(nodeId.Bytes()); uint64(len(custodyGroups)) < custodyGroupCount; currentId.Add(currentId, one) {
// Convert to big endian bytes.
currentIdBytesBigEndian := currentId.Bytes32()
// Convert to little endian.
currentIdBytesLittleEndian := bytesutil.ReverseByteOrder(currentIdBytesBigEndian[:])
// Hash the result.
hashedCurrentId := hash.Hash(currentIdBytesLittleEndian)
// Get the custody group ID.
custodyGroupId := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % numberOfCustodyGroup
// Add the custody group to the map.
custodyGroups[custodyGroupId] = true
// Overflow prevention.
if currentId.Cmp(maxUint256) == 0 {
currentId = uint256.NewInt(0)
}
}
// Final check.
if uint64(len(custodyGroups)) != custodyGroupCount {
return nil, errWrongComputedCustodyGroupCount
}
return custodyGroups, nil
}
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
if custodyGroup > numberOfCustodyGroup {
return nil, errCustodyGroupCountTooLarge
}
numberOfColumns := beaconConfig.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroup
columns := make([]uint64, 0, columnsPerGroup)
for i := range columnsPerGroup {
column := numberOfCustodyGroup*i + custodyGroup
columns = append(columns, column)
}
return columns, nil
}
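
A quick worked example of that mapping, assuming the spec value of 128 columns:

// NUMBER_OF_COLUMNS = 128, NUMBER_OF_CUSTODY_GROUPS = 128: columnsPerGroup = 1, group g -> column {g}.
// NUMBER_OF_COLUMNS = 128, NUMBER_OF_CUSTODY_GROUPS = 64:  columnsPerGroup = 2, group g -> columns {g, g+64}.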
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
if columnIndex >= numberOfColumns {
return 0, errIndexTooLarge
}
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
return columnIndex / columnsPerGroup, nil
}
// ComputeSubnetForDataColumnSidecar computes the subnet for a data column sidecar.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
func ComputeSubnetForDataColumnSidecar(columnIndex uint64) uint64 {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
return columnIndex % dataColumnSidecarSubnetCount
}
// CustodyColumns computes the columns the node should custody.
func CustodyColumns(custodyGroups map[uint64]bool) (map[uint64]bool, error) {
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
custodyGroupCount := len(custodyGroups)
// Compute the columns for each custody group.
columns := make(map[uint64]bool, custodyGroupCount)
for group := range custodyGroups {
if group >= numberOfCustodyGroups {
return nil, errCustodyGroupCountTooLarge
}
groupColumns, err := ComputeColumnsForCustodyGroup(group)
if err != nil {
return nil, errors.Wrap(err, "compute columns for custody group")
}
for _, column := range groupColumns {
columns[column] = true
}
}
return columns, nil
}
// DataColumnSubnets computes the subnets for the data columns.
func DataColumnSubnets(dataColumns map[uint64]bool) map[uint64]bool {
subnets := make(map[uint64]bool, len(dataColumns))
for column := range dataColumns {
subnet := ComputeSubnetForDataColumnSidecar(column)
subnets[subnet] = true
}
return subnets
}
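
The helpers above compose into the node's custody pipeline. A hedged sketch (the node ID would come from the local p2p service; `custodySketch` is a hypothetical name):

func custodySketch(nodeID enode.ID) (map[uint64]bool, error) {
	// Number of custody groups this node is responsible for (all of them with --subscribe-all-subnets).
	groupCount := CustodyGroupCount()

	// Deterministically derive the custody groups from the node ID.
	groups, err := CustodyGroups(nodeID, groupCount)
	if err != nil {
		return nil, errors.Wrap(err, "custody groups")
	}

	// Expand the groups into the concrete column indices to custody...
	columns, err := CustodyColumns(groups)
	if err != nil {
		return nil, errors.Wrap(err, "custody columns")
	}

	// ...and into the gossip subnets to join.
	return DataColumnSubnets(columns), nil
}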
// DataColumnSidecars computes the data column sidecars from the signed block and blobs.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/das-core.md#get_data_column_sidecars
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, blobs []kzg.Blob) ([]*ethpb.DataColumnSidecar, error) {
startTime := time.Now()
blobsCount := len(blobs)
if blobsCount == 0 {
return nil, nil
}
// Get the signed block header.
signedBlockHeader, err := signedBlock.Header()
if err != nil {
return nil, errors.Wrap(err, "signed block header")
}
// Get the block body.
block := signedBlock.Block()
blockBody := block.Body()
// Get the blob KZG commitments.
blobKzgCommitments, err := blockBody.BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Compute the KZG commitments inclusion proof.
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(blockBody)
if err != nil {
return nil, errors.Wrap(err, "merkle proof ZKG commitments")
}
// Compute cells and proofs.
cellsAndProofs := make([]kzg.CellsAndProofs, blobsCount)
eg, _ := errgroup.WithContext(context.Background())
for i := range blobs {
blobIndex := i
eg.Go(func() error {
blob := &blobs[blobIndex]
blobCellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(blob)
if err != nil {
return errors.Wrap(err, "compute cells and KZG proofs")
}
cellsAndProofs[blobIndex] = blobCellsAndProofs
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
// Get the column sidecars.
sidecars := make([]*ethpb.DataColumnSidecar, 0, fieldparams.NumberOfColumns)
for columnIndex := uint64(0); columnIndex < fieldparams.NumberOfColumns; columnIndex++ {
column := make([]kzg.Cell, 0, blobsCount)
kzgProofOfColumn := make([]kzg.Proof, 0, blobsCount)
for rowIndex := 0; rowIndex < blobsCount; rowIndex++ {
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
cell := cellsForRow[columnIndex]
column = append(column, cell)
kzgProof := proofsForRow[columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
columnBytes = append(columnBytes, column[i][:])
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
copiedProof := kzgProof
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, copiedProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
ColumnIndex: columnIndex,
DataColumn: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProof: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
dataColumnComputationTime.Observe(float64(time.Since(startTime).Milliseconds()))
return sidecars, nil
}
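
A short sketch of the producer path, assuming an already-built signed block and its blobs (`buildAndVerifySidecars` is a hypothetical helper): build the sidecars, wrap them read-only, then batch-verify the KZG proofs.

func buildAndVerifySidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, blobs []kzg.Blob) ([]blocks.RODataColumn, error) {
	sidecars, err := DataColumnSidecars(signedBlock, blobs)
	if err != nil {
		return nil, errors.Wrap(err, "data column sidecars")
	}

	roColumns := make([]blocks.RODataColumn, 0, len(sidecars))
	for _, sidecar := range sidecars {
		roColumn, err := blocks.NewRODataColumn(sidecar)
		if err != nil {
			return nil, errors.Wrap(err, "new RO data column")
		}
		roColumns = append(roColumns, roColumn)
	}

	// Verify every column of the block in a single batched KZG check.
	verified, err := VerifyDataColumnsSidecarKZGProofs(roColumns)
	if err != nil {
		return nil, errors.Wrap(err, "verify data columns sidecar KZG proofs")
	}
	if !verified {
		return nil, errors.New("invalid KZG proofs")
	}

	return roColumns, nil
}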
// populateAndFilterIndices returns a sorted slice of indices, setting all indices if none are provided
// and filtering out indices greater than or equal to the blob count.
func populateAndFilterIndices(indices map[uint64]bool, blobCount uint64) []uint64 {
// If no indices are provided, provide all blobs.
if len(indices) == 0 {
for i := range blobCount {
indices[i] = true
}
}
// Filter out indices greater than or equal to the blob count.
filteredIndices := make(map[uint64]bool, len(indices))
for i := range indices {
if i < blobCount {
filteredIndices[i] = true
}
}
// Transform set to slice.
indicesSlice := make([]uint64, 0, len(filteredIndices))
for i := range filteredIndices {
indicesSlice = append(indicesSlice, i)
}
// Sort the indices.
slices.Sort[[]uint64](indicesSlice)
return indicesSlice
}
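
A small worked example of the filtering, under assumed inputs:

// indices = {0, 3, 7}, blobCount = 5
// 7 is dropped (7 >= 5), the remaining indices are sorted: [0 3]
indices := populateAndFilterIndices(map[uint64]bool{0: true, 3: true, 7: true}, 5) // [0 3]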
// Blobs extracts blobs from `dataColumnsSidecar`.
// This can be seen as the reciprocal function of DataColumnSidecars.
// `dataColumnsSidecar` needs to contain the data columns corresponding to the non-extended matrix,
// otherwise an error is returned.
// (`dataColumnsSidecar` can contain extra columns, but they will be ignored.)
func Blobs(indices map[uint64]bool, dataColumnsSidecar []*ethpb.DataColumnSidecar) ([]*blocks.VerifiedROBlob, error) {
columnCount := fieldparams.NumberOfColumns
neededColumnCount := columnCount / 2
// Check if all needed columns are present.
sliceIndexFromColumnIndex := make(map[uint64]int, len(dataColumnsSidecar))
for i := range dataColumnsSidecar {
dataColumnSideCar := dataColumnsSidecar[i]
columnIndex := dataColumnSideCar.ColumnIndex
if columnIndex < uint64(neededColumnCount) {
sliceIndexFromColumnIndex[columnIndex] = i
}
}
actualColumnCount := len(sliceIndexFromColumnIndex)
// Get missing columns.
if actualColumnCount < neededColumnCount {
missingColumns := make(map[int]bool, neededColumnCount-actualColumnCount)
for i := range neededColumnCount {
if _, ok := sliceIndexFromColumnIndex[uint64(i)]; !ok {
missingColumns[i] = true
}
}
missingColumnsSlice := make([]int, 0, len(missingColumns))
for i := range missingColumns {
missingColumnsSlice = append(missingColumnsSlice, i)
}
slices.Sort[[]int](missingColumnsSlice)
return nil, errors.Errorf("some columns are missing: %v", missingColumnsSlice)
}
// It is safe to retrieve the first column since we already checked that `dataColumnsSidecar` is not empty.
firstDataColumnSidecar := dataColumnsSidecar[0]
blobCount := uint64(len(firstDataColumnSidecar.DataColumn))
// Check that all columns have the same length.
for i := range dataColumnsSidecar {
if uint64(len(dataColumnsSidecar[i].DataColumn)) != blobCount {
return nil, errors.Errorf("mismatch in the length of the data columns, expected %d, got %d", blobCount, len(dataColumnsSidecar[i].DataColumn))
}
}
// Reconstruct verified RO blobs from columns.
verifiedROBlobs := make([]*blocks.VerifiedROBlob, 0, blobCount)
// Populate and filter indices.
indicesSlice := populateAndFilterIndices(indices, blobCount)
for _, blobIndex := range indicesSlice {
var blob kzg.Blob
// Compute the content of the blob.
for columnIndex := range neededColumnCount {
sliceIndex, ok := sliceIndexFromColumnIndex[uint64(columnIndex)]
if !ok {
return nil, errors.Errorf("missing column %d, this should never happen", columnIndex)
}
dataColumnSideCar := dataColumnsSidecar[sliceIndex]
cell := dataColumnSideCar.DataColumn[blobIndex]
for i := 0; i < len(cell); i++ {
blob[columnIndex*kzg.BytesPerCell+i] = cell[i]
}
}
// Retrieve the blob KZG commitment.
blobKZGCommitment := kzg.Commitment(firstDataColumnSidecar.KzgCommitments[blobIndex])
// Compute the blob KZG proof.
blobKzgProof, err := kzg.ComputeBlobKZGProof(&blob, blobKZGCommitment)
if err != nil {
return nil, errors.Wrap(err, "compute blob KZG proof")
}
blobSidecar := &ethpb.BlobSidecar{
Index: blobIndex,
Blob: blob[:],
KzgCommitment: blobKZGCommitment[:],
KzgProof: blobKzgProof[:],
SignedBlockHeader: firstDataColumnSidecar.SignedBlockHeader,
CommitmentInclusionProof: firstDataColumnSidecar.KzgCommitmentsInclusionProof,
}
roBlob, err := blocks.NewROBlob(blobSidecar)
if err != nil {
return nil, errors.Wrap(err, "new RO blob")
}
verifiedROBlob := blocks.NewVerifiedROBlob(roBlob)
verifiedROBlobs = append(verifiedROBlobs, &verifiedROBlob)
}
return verifiedROBlobs, nil
}
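
Conversely, a node holding at least the first NUMBER_OF_COLUMNS / 2 column sidecars of a block can rebuild its blobs. A minimal sketch (`rebuildSomeBlobs` is a hypothetical helper; `sidecars` is assumed to contain those columns):

func rebuildSomeBlobs(sidecars []*ethpb.DataColumnSidecar) ([]*blocks.VerifiedROBlob, error) {
	// Rebuild only blobs 0 and 2; an empty map would rebuild every blob.
	indices := map[uint64]bool{0: true, 2: true}
	return Blobs(indices, sidecars)
}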
// DataColumnSidecarsForReconstruct is a TEMPORARY function until there is an official specification for it.
// It is scheduled for deletion.
func DataColumnSidecarsForReconstruct(
blobKzgCommitments [][]byte,
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
kzgCommitmentsInclusionProof [][]byte,
cellsAndProofs []kzg.CellsAndProofs,
) ([]*ethpb.DataColumnSidecar, error) {
// Each CellsAndProofs corresponds to a blob,
// so we can get the blob count by checking the length of cellsAndProofs.
blobsCount := len(cellsAndProofs)
if blobsCount == 0 {
return nil, nil
}
// Get the column sidecars.
sidecars := make([]*ethpb.DataColumnSidecar, 0, fieldparams.NumberOfColumns)
for columnIndex := uint64(0); columnIndex < fieldparams.NumberOfColumns; columnIndex++ {
column := make([]kzg.Cell, 0, blobsCount)
kzgProofOfColumn := make([]kzg.Proof, 0, blobsCount)
for rowIndex := 0; rowIndex < blobsCount; rowIndex++ {
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
cell := cellsForRow[columnIndex]
column = append(column, cell)
kzgProof := proofsForRow[columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
columnBytes = append(columnBytes, column[i][:])
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
copiedProof := kzgProof
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, copiedProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
ColumnIndex: columnIndex,
DataColumn: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProof: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
return sidecars, nil
}
// VerifyDataColumnsSidecarKZGProofs verifies the provided KZG Proofs of data columns.
func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) (bool, error) {
// Retrieve the number of columns.
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Compute the total count.
count := 0
for _, sidecar := range sidecars {
count += len(sidecar.DataColumn)
}
commitments := make([]kzg.Bytes48, 0, count)
indices := make([]uint64, 0, count)
cells := make([]kzg.Cell, 0, count)
proofs := make([]kzg.Bytes48, 0, count)
for _, sidecar := range sidecars {
// Check that the column index is not too large.
if sidecar.ColumnIndex >= numberOfColumns {
return false, errIndexTooLarge
}
// Check if the KZG commitments size and data column size match.
if len(sidecar.DataColumn) != len(sidecar.KzgCommitments) {
return false, errMismatchLength
}
// Check if the KZG proofs size and data column size match.
if len(sidecar.DataColumn) != len(sidecar.KzgProof) {
return false, errMismatchLength
}
for i := range sidecar.DataColumn {
commitments = append(commitments, kzg.Bytes48(sidecar.KzgCommitments[i]))
indices = append(indices, sidecar.ColumnIndex)
cells = append(cells, kzg.Cell(sidecar.DataColumn[i]))
proofs = append(proofs, kzg.Bytes48(sidecar.KzgProof[i]))
}
}
// Verify the whole batch at once.
verified, err := kzg.VerifyCellKZGProofBatch(commitments, indices, cells, proofs)
if err != nil {
return false, errors.Wrap(err, "verify cell KZG proof batch")
}
return verified, nil
}
// CustodyGroupCount returns the number of groups the node should participate in for custody.
func CustodyGroupCount() uint64 {
if flags.Get().SubscribeToAllSubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
return params.BeaconConfig().CustodyRequirement
}
// CustodyGroupSamplingSize returns the number of custody groups the node should sample from.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
func CustodyGroupSamplingSize() uint64 {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
custodyGroupCount := CustodyGroupCount()
return max(samplesPerSlot, custodyGroupCount)
}
// HypergeomCDF computes the hypergeometric cumulative distribution function.
// https://en.wikipedia.org/wiki/Hypergeometric_distribution
func HypergeomCDF(k, M, n, N uint64) float64 {
denominatorInt := new(big.Int).Binomial(int64(M), int64(N)) // lint:ignore uintcast
denominator := new(big.Float).SetInt(denominatorInt)
rBig := big.NewFloat(0)
for i := uint64(0); i < k+1; i++ {
a := new(big.Int).Binomial(int64(n), int64(i)) // lint:ignore uintcast
b := new(big.Int).Binomial(int64(M-n), int64(N-i))
numeratorInt := new(big.Int).Mul(a, b)
numerator := new(big.Float).SetInt(numeratorInt)
item := new(big.Float).Quo(numerator, denominator)
rBig.Add(rBig, item)
}
r, _ := rBig.Float64()
return r
}
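
Written out, the loop above evaluates the standard hypergeometric CDF, with M the population size, n the number of successes in the population, N the sample size, and k the number of successes in the sample:

P(X \le k) = \sum_{i=0}^{k} \frac{\binom{n}{i} \binom{M-n}{N-i}}{\binom{M}{N}}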
// ExtendedSampleCount computes, for a given number of samples per slot and allowed failures,
// the number of samples we should actually query from peers.
// TODO: Add link to the specification once it is available.
func ExtendedSampleCount(samplesPerSlot, allowedFailures uint64) uint64 {
// Retrieve the columns count
columnsCount := params.BeaconConfig().NumberOfColumns
// If half of the columns are missing, we are able to reconstruct the data.
// If half of the columns + 1 are missing, we are not able to reconstruct the data.
// This is the smallest worst case.
worstCaseMissing := columnsCount/2 + 1
// Compute the false positive threshold.
falsePositiveThreshold := HypergeomCDF(0, columnsCount, worstCaseMissing, samplesPerSlot)
var sampleCount uint64
// Finally, compute the extended sample count.
for sampleCount = samplesPerSlot; sampleCount < columnsCount+1; sampleCount++ {
if HypergeomCDF(allowedFailures, columnsCount, worstCaseMissing, sampleCount) <= falsePositiveThreshold {
break
}
}
return sampleCount
}
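
For instance, with 128 columns the smallest undetectable worst case is 65 missing columns; starting from 16 samples per slot and tolerating 2 failed queries, the loop settles on 24 samples (consistent with the test table later in this diff):

extended := ExtendedSampleCount(16, 2) // 24 with the default 128-column configuration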
// CustodyGroupCountFromRecord extracts the custody group count from an ENR record.
func CustodyGroupCountFromRecord(record *enr.Record) (uint64, error) {
if record == nil {
return 0, errRecordNil
}
// Load the `cgc`
var cgc Cgc
if err := record.Load(&cgc); err != nil {
return 0, errCannotLoadCustodyGroupCount
}
return uint64(cgc), nil
}
// CanSelfReconstruct returns true if the node custodies enough custody groups to reconstruct all columns by itself.
func CanSelfReconstruct(custodyGroupCount uint64) bool {
total := params.BeaconConfig().NumberOfCustodyGroups
// If total is odd, we need total/2 + 1 groups to reconstruct.
// If total is even, we need total/2 groups to reconstruct.
custodyGroupsNeeded := total/2 + total%2
return custodyGroupCount >= custodyGroupsNeeded
}
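
A quick worked check of that threshold, using the group counts exercised by the tests later in this diff:

// total = 64 (even): custodyGroupsNeeded = 64/2 + 64%2 = 32, so custodying 32 groups is enough.
// total = 65 (odd):  custodyGroupsNeeded = 65/2 + 65%2 = 33, so custodying 32 groups is not enough.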
// RecoverCellsAndProofs recovers the cells and proofs from the data column sidecars.
func RecoverCellsAndProofs(
dataColumnSideCars []*ethpb.DataColumnSidecar,
blockRoot [fieldparams.RootLength]byte,
) ([]kzg.CellsAndProofs, error) {
var wg errgroup.Group
dataColumnSideCarsCount := len(dataColumnSideCars)
if dataColumnSideCarsCount == 0 {
return nil, errors.New("no data column sidecars")
}
// Check if all columns have the same length.
blobCount := len(dataColumnSideCars[0].DataColumn)
for _, sidecar := range dataColumnSideCars {
length := len(sidecar.DataColumn)
if length != blobCount {
return nil, errors.New("columns do not have the same length")
}
}
// Recover cells and compute proofs in parallel.
recoveredCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
for blobIndex := 0; blobIndex < blobCount; blobIndex++ {
bIndex := blobIndex
wg.Go(func() error {
start := time.Now()
cellsIndices := make([]uint64, 0, dataColumnSideCarsCount)
cells := make([]kzg.Cell, 0, dataColumnSideCarsCount)
for _, sidecar := range dataColumnSideCars {
// Build the cell indices.
cellsIndices = append(cellsIndices, sidecar.ColumnIndex)
// Get the cell.
column := sidecar.DataColumn
cell := column[bIndex]
cells = append(cells, kzg.Cell(cell))
}
// Recover the cells and proofs for the corresponding blob
cellsAndProofs, err := kzg.RecoverCellsAndKZGProofs(cellsIndices, cells)
if err != nil {
return errors.Wrapf(err, "recover cells and KZG proofs for blob %d", bIndex)
}
recoveredCellsAndProofs[bIndex] = cellsAndProofs
log.WithFields(logrus.Fields{
"elapsed": time.Since(start),
"index": bIndex,
"root": fmt.Sprintf("%x", blockRoot),
}).Debug("Recovered cells and proofs")
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, err
}
return recoveredCellsAndProofs, nil
}

View File

@@ -0,0 +1,531 @@
package peerdas_test
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"errors"
"fmt"
"testing"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/sirupsen/logrus"
)
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func GetRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func GetRandBlob(seed int64) kzg.Blob {
var blob kzg.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := GetRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
func GenerateCommitmentAndProof(blob *kzg.Blob) (*kzg.Commitment, *kzg.Proof, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, nil, err
}
proof, err := kzg.ComputeBlobKZGProof(blob, commitment)
if err != nil {
return nil, nil, err
}
return &commitment, &proof, err
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
dbBlock := util.NewBeaconBlockDeneb()
require.NoError(t, kzg.Start())
var (
comms [][]byte
blobs []kzg.Blob
)
for i := int64(0); i < 6; i++ {
blob := GetRandBlob(i)
commitment, _, err := GenerateCommitmentAndProof(&blob)
require.NoError(t, err)
comms = append(comms, commitment[:])
blobs = append(blobs, blob)
}
dbBlock.Block.Body.BlobKzgCommitments = comms
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
sCars, err := peerdas.DataColumnSidecars(sBlock, blobs)
require.NoError(t, err)
for i, sidecar := range sCars {
roCol, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
verified, err := peerdas.VerifyDataColumnsSidecarKZGProofs([]blocks.RODataColumn{roCol})
require.NoError(t, err)
require.Equal(t, true, verified, fmt.Sprintf("sidecar %d failed", i))
}
}
func TestDataColumnSidecars(t *testing.T) {
var expected []*ethpb.DataColumnSidecar = nil
actual, err := peerdas.DataColumnSidecars(nil, []kzg.Blob{})
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
}
func TestBlobs(t *testing.T) {
blobsIndice := map[uint64]bool{}
almostAllColumns := make([]*ethpb.DataColumnSidecar, 0, fieldparams.NumberOfColumns/2)
for i := 2; i < fieldparams.NumberOfColumns/2+2; i++ {
almostAllColumns = append(almostAllColumns, &ethpb.DataColumnSidecar{
ColumnIndex: uint64(i),
})
}
testCases := []struct {
name string
input []*ethpb.DataColumnSidecar
expected []*blocks.VerifiedROBlob
err error
}{
{
name: "empty input",
input: []*ethpb.DataColumnSidecar{},
expected: nil,
err: errors.New("some columns are missing: [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63]"),
},
{
name: "missing columns",
input: almostAllColumns,
expected: nil,
err: errors.New("some columns are missing: [0 1]"),
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
actual, err := peerdas.Blobs(blobsIndice, tc.input)
if tc.err != nil {
require.Equal(t, tc.err.Error(), err.Error())
} else {
require.NoError(t, err)
}
require.DeepSSZEqual(t, tc.expected, actual)
})
}
}
func TestDataColumnsSidecarsBlobsRoundtrip(t *testing.T) {
const blobCount = 5
blobsIndex := map[uint64]bool{}
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Generate random blobs and their corresponding commitments and proofs.
blobs := make([]kzg.Blob, 0, blobCount)
blobKzgCommitments := make([]*kzg.Commitment, 0, blobCount)
blobKzgProofs := make([]*kzg.Proof, 0, blobCount)
for blobIndex := range blobCount {
// Create a random blob.
blob := GetRandBlob(int64(blobIndex))
blobs = append(blobs, blob)
// Generate a blobKZGCommitment for the blob.
blobKZGCommitment, proof, err := GenerateCommitmentAndProof(&blob)
require.NoError(t, err)
blobKzgCommitments = append(blobKzgCommitments, blobKZGCommitment)
blobKzgProofs = append(blobKzgProofs, proof)
}
// Set the commitments into the block.
blobKzgCommitmentsBytes := make([][]byte, 0, blobCount)
for _, blobKZGCommitment := range blobKzgCommitments {
blobKzgCommitmentsBytes = append(blobKzgCommitmentsBytes, blobKZGCommitment[:])
}
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = blobKzgCommitmentsBytes
// Generate verified RO blobs.
verifiedROBlobs := make([]*blocks.VerifiedROBlob, 0, blobCount)
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
commitmentInclusionProof, err := blocks.MerkleProofKZGCommitments(signedBeaconBlock.Block().Body())
require.NoError(t, err)
for blobIndex := range blobCount {
blob := blobs[blobIndex]
blobKZGCommitment := blobKzgCommitments[blobIndex]
blobKzgProof := blobKzgProofs[blobIndex]
// Get the signed beacon block header.
signedBeaconBlockHeader, err := signedBeaconBlock.Header()
require.NoError(t, err)
blobSidecar := &ethpb.BlobSidecar{
Index: uint64(blobIndex),
Blob: blob[:],
KzgCommitment: blobKZGCommitment[:],
KzgProof: blobKzgProof[:],
SignedBlockHeader: signedBeaconBlockHeader,
CommitmentInclusionProof: commitmentInclusionProof,
}
roBlob, err := blocks.NewROBlob(blobSidecar)
require.NoError(t, err)
verifiedROBlob := blocks.NewVerifiedROBlob(roBlob)
verifiedROBlobs = append(verifiedROBlobs, &verifiedROBlob)
}
// Compute data columns sidecars from the signed beacon block and from the blobs.
dataColumnsSidecar, err := peerdas.DataColumnSidecars(signedBeaconBlock, blobs)
require.NoError(t, err)
// Compute the blobs from the data columns sidecar.
roundtripBlobs, err := peerdas.Blobs(blobsIndex, dataColumnsSidecar)
require.NoError(t, err)
// Check that the blobs are the same.
require.DeepSSZEqual(t, verifiedROBlobs, roundtripBlobs)
}
func TestCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllSubnets bool
expected uint64
}{
{
name: "subscribeToAllSubnets=false",
subscribeToAllSubnets: false,
expected: params.BeaconConfig().CustodyRequirement,
},
{
name: "subscribeToAllSubnets=true",
subscribeToAllSubnets: true,
expected: params.BeaconConfig().NumberOfCustodyGroups,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Set flags.
resetFlags := flags.Get()
defer func() {
flags.Init(resetFlags)
}()
params.SetupTestConfigCleanup(t)
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = tc.subscribeToAllSubnets
flags.Init(gFlags)
// Get the custody subnet count.
actual := peerdas.CustodyGroupCount()
require.Equal(t, tc.expected, actual)
})
}
}
func TestHypergeomCDF(t *testing.T) {
// Hypergeometric CDF reference: https://en.wikipedia.org/wiki/Hypergeometric_distribution
// Population size: 128, number of successes in population: 65, sample size: 16, number of successes in sample: 5
// Expected result: ~0.0797
const (
expected = 0.0796665913283742
margin = 0.000001
)
actual := peerdas.HypergeomCDF(5, 128, 65, 16)
require.Equal(t, true, expected-margin <= actual && actual <= expected+margin)
}
func TestExtendedSampleCount(t *testing.T) {
const samplesPerSlot = 16
testCases := []struct {
name string
allowedMissings uint64
extendedSampleCount uint64
}{
{name: "allowedMissings=0", allowedMissings: 0, extendedSampleCount: 16},
{name: "allowedMissings=1", allowedMissings: 1, extendedSampleCount: 20},
{name: "allowedMissings=2", allowedMissings: 2, extendedSampleCount: 24},
{name: "allowedMissings=3", allowedMissings: 3, extendedSampleCount: 27},
{name: "allowedMissings=4", allowedMissings: 4, extendedSampleCount: 29},
{name: "allowedMissings=5", allowedMissings: 5, extendedSampleCount: 32},
{name: "allowedMissings=6", allowedMissings: 6, extendedSampleCount: 35},
{name: "allowedMissings=7", allowedMissings: 7, extendedSampleCount: 37},
{name: "allowedMissings=8", allowedMissings: 8, extendedSampleCount: 40},
{name: "allowedMissings=9", allowedMissings: 9, extendedSampleCount: 42},
{name: "allowedMissings=10", allowedMissings: 10, extendedSampleCount: 44},
{name: "allowedMissings=11", allowedMissings: 11, extendedSampleCount: 47},
{name: "allowedMissings=12", allowedMissings: 12, extendedSampleCount: 49},
{name: "allowedMissings=13", allowedMissings: 13, extendedSampleCount: 51},
{name: "allowedMissings=14", allowedMissings: 14, extendedSampleCount: 53},
{name: "allowedMissings=15", allowedMissings: 15, extendedSampleCount: 55},
{name: "allowedMissings=16", allowedMissings: 16, extendedSampleCount: 57},
{name: "allowedMissings=17", allowedMissings: 17, extendedSampleCount: 59},
{name: "allowedMissings=18", allowedMissings: 18, extendedSampleCount: 61},
{name: "allowedMissings=19", allowedMissings: 19, extendedSampleCount: 63},
{name: "allowedMissings=20", allowedMissings: 20, extendedSampleCount: 65},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result := peerdas.ExtendedSampleCount(samplesPerSlot, tc.allowedMissings)
require.Equal(t, tc.extendedSampleCount, result)
})
}
}
func TestCustodyGroupCountFromRecord(t *testing.T) {
const expected uint64 = 7
// Create an Ethereum record.
record := &enr.Record{}
record.Set(peerdas.Cgc(expected))
actual, err := peerdas.CustodyGroupCountFromRecord(record)
require.NoError(t, err)
require.Equal(t, expected, actual)
}
func TestCanSelfReconstruct(t *testing.T) {
testCases := []struct {
name string
totalNumberOfCustodyGroups uint64
custodyNumberOfGroups uint64
expected bool
}{
{
name: "totalNumberOfCustodyGroups=64, custodyNumberOfGroups=31",
totalNumberOfCustodyGroups: 64,
custodyNumberOfGroups: 31,
expected: false,
},
{
name: "totalNumberOfCustodyGroups=64, custodyNumberOfGroups=32",
totalNumberOfCustodyGroups: 64,
custodyNumberOfGroups: 32,
expected: true,
},
{
name: "totalNumberOfCustodyGroups=65, custodyNumberOfGroups=32",
totalNumberOfCustodyGroups: 65,
custodyNumberOfGroups: 32,
expected: false,
},
{
name: "totalNumberOfCustodyGroups=65, custodyNumberOfGroups=33",
totalNumberOfCustodyGroups: 65,
custodyNumberOfGroups: 33,
expected: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Set the total number of columns.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.NumberOfCustodyGroups = tc.totalNumberOfCustodyGroups
params.OverrideBeaconConfig(cfg)
// Check if reconstruction is possible.
actual := peerdas.CanSelfReconstruct(tc.custodyNumberOfGroups)
require.Equal(t, tc.expected, actual)
})
}
}
func TestReconstructionRoundTrip(t *testing.T) {
params.SetupTestConfigCleanup(t)
const blobCount = 5
var blockRoot [fieldparams.RootLength]byte
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
require.NoError(t, kzg.Start())
// Generate random blobs and their corresponding commitments.
var (
blobsKzgCommitments [][]byte
blobs []kzg.Blob
)
for i := range blobCount {
blob := GetRandBlob(int64(i))
commitment, _, err := GenerateCommitmentAndProof(&blob)
require.NoError(t, err)
blobsKzgCommitments = append(blobsKzgCommitments, commitment[:])
blobs = append(blobs, blob)
}
// Generate a signed beacon block.
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = blobsKzgCommitments
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Get the signed beacon block header.
signedBeaconBlockHeader, err := signedBeaconBlock.Header()
require.NoError(t, err)
// Convert data columns sidecars from signed block and blobs.
dataColumnSidecars, err := peerdas.DataColumnSidecars(signedBeaconBlock, blobs)
require.NoError(t, err)
// Create verified RO data columns.
verifiedRoDataColumns := make([]*blocks.VerifiedRODataColumn, 0, blobCount)
for _, dataColumnSidecar := range dataColumnSidecars {
roDataColumn, err := blocks.NewRODataColumn(dataColumnSidecar)
require.NoError(t, err)
verifiedRoDataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRoDataColumns = append(verifiedRoDataColumns, &verifiedRoDataColumn)
}
verifiedRoDataColumn := verifiedRoDataColumns[0]
numberOfColumns := params.BeaconConfig().NumberOfColumns
var noDataColumns []*ethpb.DataColumnSidecar
dataColumnsWithDifferentLengths := []*ethpb.DataColumnSidecar{
{DataColumn: [][]byte{{}, {}}},
{DataColumn: [][]byte{{}}},
}
notEnoughDataColumns := dataColumnSidecars[:numberOfColumns/2-1]
originalDataColumns := dataColumnSidecars[:numberOfColumns/2]
extendedDataColumns := dataColumnSidecars[numberOfColumns/2:]
evenDataColumns := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns/2)
oddDataColumns := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns/2)
allDataColumns := dataColumnSidecars
for i, dataColumn := range dataColumnSidecars {
if i%2 == 0 {
evenDataColumns = append(evenDataColumns, dataColumn)
} else {
oddDataColumns = append(oddDataColumns, dataColumn)
}
}
testCases := []struct {
name string
dataColumnsSidecar []*ethpb.DataColumnSidecar
isError bool
}{
{
name: "No data columns sidecars",
dataColumnsSidecar: noDataColumns,
isError: true,
},
{
name: "Data columns sidecar with different lengths",
dataColumnsSidecar: dataColumnsWithDifferentLengths,
isError: true,
},
{
name: "All columns are present (no actual need to reconstruct)",
dataColumnsSidecar: allDataColumns,
isError: false,
},
{
name: "Only original columns are present",
dataColumnsSidecar: originalDataColumns,
isError: false,
},
{
name: "Only extended columns are present",
dataColumnsSidecar: extendedDataColumns,
isError: false,
},
{
name: "Only even columns are present",
dataColumnsSidecar: evenDataColumns,
isError: false,
},
{
name: "Only odd columns are present",
dataColumnsSidecar: oddDataColumns,
isError: false,
},
{
name: "Not enough columns to reconstruct",
dataColumnsSidecar: notEnoughDataColumns,
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Recover cells and proofs from available data columns sidecars.
cellsAndProofs, err := peerdas.RecoverCellsAndProofs(tc.dataColumnsSidecar, blockRoot)
isError := (err != nil)
require.Equal(t, tc.isError, isError)
if isError {
return
}
// Recover all data columns sidecars from cells and proofs.
reconstructedDataColumnsSideCars, err := peerdas.DataColumnSidecarsForReconstruct(
blobsKzgCommitments,
signedBeaconBlockHeader,
verifiedRoDataColumn.KzgCommitmentsInclusionProof,
cellsAndProofs,
)
require.NoError(t, err)
expected := dataColumnSidecars
actual := reconstructedDataColumnsSideCars
require.DeepSSZEqual(t, expected, actual)
})
}
}

View File

@@ -0,0 +1,5 @@
package peerdas
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "peerdas")

View File

@@ -0,0 +1,14 @@
package peerdas
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{100, 250, 500, 750, 1000, 1500, 2000, 4000, 8000, 12000, 16000},
},
)

View File

@@ -53,6 +53,11 @@ func HigherEqualThanAltairVersionAndEpoch(s state.BeaconState, e primitives.Epoc
return s.Version() >= version.Altair && e >= params.BeaconConfig().AltairForkEpoch
}
// PeerDASIsActive checks whether peerDAS is active at the provided slot.
func PeerDASIsActive(slot primitives.Slot) bool {
return params.FuluEnabled() && slots.ToEpoch(slot) >= params.BeaconConfig().FuluForkEpoch
}
// CanUpgradeToAltair returns true if the input `slot` can upgrade to Altair.
// Spec code:
// If state.slot % SLOTS_PER_EPOCH == 0 and compute_epoch_at_slot(state.slot) == ALTAIR_FORK_EPOCH

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"availability.go",
"availability_columns.go",
"cache.go",
"iface.go",
"mock.go",
@@ -11,14 +12,17 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -27,6 +31,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"availability_columns_test.go",
"availability_test.go",
"cache_test.go",
],
@@ -34,6 +39,7 @@ go_test(
deps = [
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -41,6 +47,7 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
@@ -80,7 +81,7 @@ func (s *LazilyPersistentStore) Persist(current primitives.Slot, sc ...blocks.RO
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// BlobSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, _ enode.ID, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := commitmentsToCheck(b, current)
if err != nil {
return errors.Wrapf(err, "could not check data availability for block %#x", b.Root())

View File

@@ -0,0 +1,190 @@
package das
import (
"context"
"fmt"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to PersistColumns until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.BlobStorage
cache *cache
}
func NewLazilyPersistentStoreColumn(store *filesystem.BlobStorage) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
cache: newCache(),
}
}
// Persist does nothing at the moment.
// TODO: Very ugly, change the interface to allow for both columns and blobs.
func (*LazilyPersistentStoreColumn) Persist(_ primitives.Slot, _ ...blocks.ROBlob) error {
return nil
}
// PersistColumns adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all data columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) PersistColumns(current primitives.Slot, sc ...blocks.RODataColumn) error {
if len(sc) == 0 {
return nil
}
if len(sc) > 1 {
first := sc[0].BlockRoot()
for i := 1; i < len(sc); i++ {
if first != sc[i].BlockRoot() {
return errMixedRoots
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(sc[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromColumn(sc[0])
entry := s.cache.ensure(key)
for i := range sc {
if err := entry.stashColumns(&sc[i]); err != nil {
return err
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(
ctx context.Context,
nodeID enode.ID,
currentSlot primitives.Slot,
block blocks.ROBlock,
) error {
blockCommitments, err := fullCommitmentsToCheck(nodeID, block, currentSlot)
if err != nil {
return errors.Wrapf(err, "full commitments to check with block root `%#x` and current slot `%d`", block.Root(), currentSlot)
}
// Return early for blocks that do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
// Build the cache key for the block.
key := keyFromBlock(block)
// Retrieve the cache entry for the block, or create an empty one if it doesn't exist.
entry := s.cache.ensure(key)
// Delete the cache entry for the block at the end.
defer s.cache.delete(key)
// Get the root of the block.
blockRoot := block.Root()
// Wait for the summarizer to be ready before proceeding.
summarizer, err := s.store.WaitForSummarizer(ctx)
if err != nil {
log.
WithField("root", fmt.Sprintf("%#x", blockRoot)).
WithError(err).
Debug("Failed to receive BlobStorageSummarizer within IsDataAvailable")
} else {
// Get the summary for the block, and set it in the cache entry.
summary := summarizer.Summary(blockRoot)
entry.setDiskSummary(summary)
}
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
roDataColumns, err := entry.filterColumns(blockRoot, blockCommitments)
if err != nil {
return errors.Wrap(err, "incomplete DataColumnSidecar batch")
}
// Create verified RO data columns from RO data columns.
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(roDataColumns))
for _, roDataColumn := range roDataColumns {
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRODataColumns = append(verifiedRODataColumns, verifiedRODataColumn)
}
// Ensure that each column sidecar is written to disk.
for _, verifiedRODataColumn := range verifiedRODataColumns {
if err := s.store.SaveDataColumn(verifiedRODataColumn); err != nil {
return errors.Wrapf(err, "save data columns for index `%d` for block `%#x`", verifiedRODataColumn.ColumnIndex, blockRoot)
}
}
// All ColumnSidecars are persisted - data availability check succeeds.
return nil
}
// fullCommitmentsToCheck returns the commitments to check for a given block.
func fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
// Return early for blocks that are pre-Fulu.
if block.Version() < version.Fulu {
return &safeCommitmentsArray{}, nil
}
// Compute the block epoch.
blockSlot := block.Block().Slot()
blockEpoch := slots.ToEpoch(blockSlot)
// Compute the current epoch.
currentEpoch := slots.ToEpoch(currentSlot)
// Return early if the request is out of the MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS window.
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return &safeCommitmentsArray{}, nil
}
// Retrieve the KZG commitments for the block.
kzgCommitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Return early if there are no commitments in the block.
if len(kzgCommitments) == 0 {
return &safeCommitmentsArray{}, nil
}
// Retrieve the groups count.
custodyGroupCount := peerdas.CustodyGroupCount()
// Retrieve custody groups.
custodyGroups, err := peerdas.CustodyGroups(nodeID, custodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "custody groups")
}
// Retrieve custody columns.
custodyColumns, err := peerdas.CustodyColumns(custodyGroups)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
// Create a safe commitments array for the custody columns.
commitmentsArray := &safeCommitmentsArray{}
for column := range custodyColumns {
commitmentsArray[column] = kzgCommitments
}
return commitmentsArray, nil
}
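The resulting array is sparse: every column index the node custodies points at the block's full KZG commitment list, and all other indices stay nil. A minimal standalone sketch of that shape, using plain Go types and an illustrative column count (the real value comes from fieldparams.NumberOfColumns):

package main

import "fmt"

const numberOfColumns = 128 // illustrative; Prysm uses fieldparams.NumberOfColumns

// buildCommitmentsArray mirrors the tail of fullCommitmentsToCheck: each custodied
// column index holds the block's commitment list, every other index stays nil.
func buildCommitmentsArray(custodyColumns map[uint64]bool, kzgCommitments [][]byte) [numberOfColumns][][]byte {
	var out [numberOfColumns][][]byte
	for column := range custodyColumns {
		out[column] = kzgCommitments
	}
	return out
}

func main() {
	commitments := [][]byte{[]byte("c0"), []byte("c1")}
	custody := map[uint64]bool{3: true, 70: true}
	arr := buildCommitmentsArray(custody, commitments)
	fmt.Println(len(arr[3]), len(arr[70]), arr[0] == nil) // 2 2 true
}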

View File

@@ -0,0 +1,94 @@
package das
import (
"testing"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func TestFullCommitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
require.NoError(t, err)
commits := [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
cases := []struct {
name string
commits [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
err error
}{
{
name: "pre fulu",
block: func(t *testing.T) blocks.ROBlock {
bb := util.NewBeaconBlockElectra()
sb, err := blocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
},
{
name: "commitments within da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockFulu()
d.Block.Body.BlobKzgCommitments = commits
d.Block.Slot = 100
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
commits: commits,
slot: 100,
},
{
name: "commitments outside da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockFulu()
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
slot: windowSlots + 1,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
b := c.block(t)
co, err := fullCommitmentsToCheck(enode.ID{}, b, c.slot)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
require.NoError(t, err)
}
for i := 0; i < len(co); i++ {
require.DeepEqual(t, c.commits, co[i])
}
})
}
}

View File

@@ -5,6 +5,7 @@ import (
"context"
"testing"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
@@ -123,18 +124,18 @@ func TestLazilyPersistent_Missing(t *testing.T) {
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[2]))
err := as.IsDataAvailable(ctx, 1, blk)
err := as.IsDataAvailable(ctx, enode.ID{}, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All but one persisted, return missing idx
require.NoError(t, as.Persist(1, scs[0]))
err = as.IsDataAvailable(ctx, 1, blk)
err = as.IsDataAvailable(ctx, enode.ID{}, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All persisted, return nil
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.IsDataAvailable(ctx, 1, blk))
require.NoError(t, as.IsDataAvailable(ctx, enode.ID{}, 1, blk))
}
func TestLazilyPersistent_Mismatch(t *testing.T) {
@@ -149,7 +150,7 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[0]))
err := as.IsDataAvailable(ctx, 1, blk)
err := as.IsDataAvailable(ctx, enode.ID{}, 1, blk)
require.NotNil(t, err)
require.ErrorIs(t, err, errCommitmentMismatch)
}

View File

@@ -2,9 +2,11 @@ package das
import (
"bytes"
"reflect"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -38,6 +40,10 @@ func keyFromSidecar(sc blocks.ROBlob) cacheKey {
return cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
}
func keyFromColumn(sc blocks.RODataColumn) cacheKey {
return cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
}
// keyFromBlock is a convenience method for constructing a cacheKey from a ROBlock value.
func keyFromBlock(b blocks.ROBlock) cacheKey {
return cacheKey{slot: b.Block().Slot(), root: b.Root()}
@@ -61,6 +67,7 @@ func (c *cache) delete(key cacheKey) {
// cacheEntry holds a fixed-length cache of BlobSidecars.
type cacheEntry struct {
scs []*blocks.ROBlob
colScs [fieldparams.NumberOfColumns]*blocks.RODataColumn
diskSummary filesystem.BlobStorageSummary
}
@@ -86,6 +93,17 @@ func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
return nil
}
func (e *cacheEntry) stashColumns(sc *blocks.RODataColumn) error {
if sc.ColumnIndex >= fieldparams.NumberOfColumns {
return errors.Wrapf(errIndexOutOfBounds, "index=%d", sc.ColumnIndex)
}
if e.colScs[sc.ColumnIndex] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.ColumnIndex, sc.KzgCommitments)
}
e.colScs[sc.ColumnIndex] = sc
return nil
}
// filter evicts sidecars that are not committed to by the block and returns custom
// errors if the cache is missing any of the commitments, or if the commitments in
// the cache do not match those found in the block. If err is nil, then all expected
@@ -121,3 +139,66 @@ func (e *cacheEntry) filter(root [32]byte, kc [][]byte, slot primitives.Slot) ([
return scs, nil
}
func (e *cacheEntry) filterColumns(root [32]byte, commitmentsArray *safeCommitmentsArray) ([]blocks.RODataColumn, error) {
nonEmptyIndices := commitmentsArray.nonEmptyIndices()
if e.diskSummary.AllDataColumnsAvailable(nonEmptyIndices) {
return nil, nil
}
commitmentsCount := commitmentsArray.count()
sidecars := make([]blocks.RODataColumn, 0, commitmentsCount)
for i := range uint64(fieldparams.NumberOfColumns) {
// Skip if we already store this data column.
if e.diskSummary.HasIndex(i) {
continue
}
if commitmentsArray[i] == nil {
continue
}
if e.colScs[i] == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !reflect.DeepEqual(commitmentsArray[i], e.colScs[i].KzgCommitments) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.colScs[i].KzgCommitments, commitmentsArray[i])
}
sidecars = append(sidecars, *e.colScs[i])
}
return sidecars, nil
}
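filterColumns is deliberately all-or-nothing: the first custodied index that is neither already on disk nor present in the cache, or whose commitments disagree with the block's, aborts the whole batch. A hedged standalone sketch of that policy over plain maps (names and types here are illustrative, not the real cache entry):

package main

import (
	"bytes"
	"fmt"
)

// filterBatch returns the cached column indices to persist, or an error as soon as
// one wanted index is missing or carries commitments that differ from the block's.
func filterBatch(wanted map[uint64][][]byte, onDisk map[uint64]bool, cached map[uint64][][]byte) ([]uint64, error) {
	kept := make([]uint64, 0, len(wanted))
	for idx, blockCommitments := range wanted {
		if onDisk[idx] {
			continue // already persisted and verified earlier
		}
		got, ok := cached[idx]
		if !ok {
			return nil, fmt.Errorf("missing sidecar for column %d", idx)
		}
		if len(got) != len(blockCommitments) {
			return nil, fmt.Errorf("commitment mismatch for column %d", idx)
		}
		for i := range blockCommitments {
			if !bytes.Equal(blockCommitments[i], got[i]) {
				return nil, fmt.Errorf("commitment mismatch for column %d", idx)
			}
		}
		kept = append(kept, idx)
	}
	return kept, nil
}

func main() {
	c := [][]byte{[]byte("a")}
	wanted := map[uint64][][]byte{1: c, 2: c}
	_, err := filterBatch(wanted, map[uint64]bool{1: true}, map[uint64][][]byte{})
	fmt.Println(err) // missing sidecar for column 2
}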
// safeCommitmentsArray is a fixed size array of commitments.
// This is helpful for avoiding gratuitous bounds checks.
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte
// count returns the number of commitments in the array.
func (s *safeCommitmentsArray) count() int {
count := 0
for i := range s {
if s[i] != nil {
count++
}
}
return count
}
// nonEmptyIndices returns a map of indices that are non-nil in the array.
func (s *safeCommitmentsArray) nonEmptyIndices() map[uint64]bool {
columns := make(map[uint64]bool)
for i := range s {
if s[i] != nil {
columns[uint64(i)] = true
}
}
return columns
}

View File

@@ -3,6 +3,7 @@ package das
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
@@ -14,6 +15,6 @@ import (
// IsDataAvailable guarantees that all blobs committed to in the block have been
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
IsDataAvailable(ctx context.Context, nodeID enode.ID, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, sc ...blocks.ROBlob) error
}

View File

@@ -3,6 +3,7 @@ package das
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
@@ -16,7 +17,7 @@ type MockAvailabilityStore struct {
var _ AvailabilityStore = &MockAvailabilityStore{}
// IsDataAvailable satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, _ enode.ID, current primitives.Slot, b blocks.ROBlock) error {
if m.VerifyAvailabilityCallback != nil {
return m.VerifyAvailabilityCallback(ctx, current, b)
}

View File

@@ -13,6 +13,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -42,6 +43,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",

View File

@@ -12,7 +12,9 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -39,8 +41,15 @@ const (
directoryPermissions = 0700
)
// BlobStorageOption is a functional option for configuring a BlobStorage.
type BlobStorageOption func(*BlobStorage) error
type (
// BlobStorageOption is a functional option for configuring a BlobStorage.
BlobStorageOption func(*BlobStorage) error
RootIndexPair struct {
Root [fieldparams.RootLength]byte
Index uint64
}
)
// WithBasePath is a required option that sets the base path of blob storage.
func WithBasePath(base string) BlobStorageOption {
@@ -70,7 +79,10 @@ func WithSaveFsync(fsync bool) BlobStorageOption {
// attempt to hold a file lock to guarantee exclusive control of the blob storage directory, so this should only be
// initialized once per beacon node.
func NewBlobStorage(opts ...BlobStorageOption) (*BlobStorage, error) {
b := &BlobStorage{}
b := &BlobStorage{
DataColumnFeed: new(event.Feed),
}
for _, o := range opts {
if err := o(b); err != nil {
return nil, errors.Wrap(err, "failed to create blob storage")
@@ -99,6 +111,7 @@ type BlobStorage struct {
fsync bool
fs afero.Fs
pruner *blobPruner
DataColumnFeed *event.Feed
}
// WarmCache runs the prune routine with an expiration of slot of 0, so nothing will be pruned, but the pruner's cache
@@ -223,6 +236,112 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
return nil
}
// SaveDataColumn saves a data column to our local filesystem.
func (bs *BlobStorage) SaveDataColumn(column blocks.VerifiedRODataColumn) error {
startTime := time.Now()
fname := namerForDataColumn(column)
sszPath := fname.path()
exists, err := afero.Exists(bs.fs, sszPath)
if err != nil {
return err
}
if exists {
log.Trace("Ignoring a duplicate data column sidecar save attempt")
return nil
}
if bs.pruner != nil {
hRoot, err := column.SignedBlockHeader.Header.HashTreeRoot()
if err != nil {
return err
}
if err := bs.pruner.notify(hRoot, column.SignedBlockHeader.Header.Slot, column.ColumnIndex); err != nil {
return errors.Wrapf(err, "problem maintaining pruning cache/metrics for sidecar with root=%#x", hRoot)
}
}
// Serialize the ethpb.DataColumnSidecar to binary data using SSZ.
sidecarData, err := column.MarshalSSZ()
if err != nil {
return errors.Wrap(err, "failed to serialize sidecar data")
} else if len(sidecarData) == 0 {
return errSidecarEmptySSZData
}
if err := bs.fs.MkdirAll(fname.dir(), directoryPermissions); err != nil {
return err
}
partPath := fname.partPath(fmt.Sprintf("%p", sidecarData))
partialMoved := false
// Ensure the partial file is deleted.
defer func() {
if partialMoved {
return
}
// Remove is expected to fail when the save succeeded, because the partial file has already been renamed.
err = bs.fs.Remove(partPath)
if err == nil {
log.WithFields(logrus.Fields{
"partPath": partPath,
}).Debug("Removed partial file")
}
}()
// Create a partial file and write the serialized data to it.
partialFile, err := bs.fs.Create(partPath)
if err != nil {
return errors.Wrap(err, "failed to create partial file")
}
n, err := partialFile.Write(sidecarData)
if err != nil {
closeErr := partialFile.Close()
if closeErr != nil {
return closeErr
}
return errors.Wrap(err, "failed to write to partial file")
}
if bs.fsync {
if err := partialFile.Sync(); err != nil {
return err
}
}
if err := partialFile.Close(); err != nil {
return err
}
if n != len(sidecarData) {
return fmt.Errorf("failed to write the full bytes of sidecarData, wrote only %d of %d bytes", n, len(sidecarData))
}
if n == 0 {
return errEmptyBlobWritten
}
// Atomically rename the partial file to its final name.
err = bs.fs.Rename(partPath, sszPath)
if err != nil {
return errors.Wrap(err, "failed to rename partial file to final name")
}
partialMoved = true
// Notify the data column notifier that a new data column has been saved.
if bs.DataColumnFeed != nil {
bs.DataColumnFeed.Send(RootIndexPair{
Root: column.BlockRoot(),
Index: column.ColumnIndex,
})
}
// TODO: Use new metrics for data columns
blobsWrittenCounter.Inc()
blobSaveLatency.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
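The save path follows the usual partial-file-then-rename pattern so a crash mid-write never leaves a truncated .ssz on disk. A minimal standard-library sketch of the same idea (paths and helper names are illustrative):

package main

import (
	"log"
	"os"
	"path/filepath"
)

// atomicWrite writes data to a temporary ".part" file, optionally fsyncs it, then
// renames it to the final path so readers only ever observe complete files.
func atomicWrite(path string, data []byte, fsync bool) error {
	part := path + ".part"
	f, err := os.Create(part)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close()
		os.Remove(part)
		return err
	}
	if fsync {
		if err := f.Sync(); err != nil {
			f.Close()
			os.Remove(part)
			return err
		}
	}
	if err := f.Close(); err != nil {
		os.Remove(part)
		return err
	}
	return os.Rename(part, path) // atomic on POSIX filesystems
}

func main() {
	dir, err := os.MkdirTemp("", "columns")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	if err := atomicWrite(filepath.Join(dir, "3.ssz"), []byte("sidecar-bytes"), true); err != nil {
		log.Fatal(err)
	}
}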
// Get retrieves a single BlobSidecar by its root and index.
// Since BlobStorage only writes blobs that have undergone full verification, the return
// value is always a VerifiedROBlob.
@@ -248,6 +367,20 @@ func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, er
return verification.BlobSidecarNoop(ro)
}
// GetColumn retrieves a single DataColumnSidecar by its root and index.
func (bs *BlobStorage) GetColumn(root [32]byte, idx uint64) (*ethpb.DataColumnSidecar, error) {
expected := blobNamer{root: root, index: idx}
encoded, err := afero.ReadFile(bs.fs, expected.path())
if err != nil {
return nil, err
}
s := &ethpb.DataColumnSidecar{}
if err := s.UnmarshalSSZ(encoded); err != nil {
return nil, err
}
return s, nil
}
// Remove removes all blobs for a given root.
func (bs *BlobStorage) Remove(root [32]byte) error {
rootDir := blobNamer{root: root}.dir()
@@ -294,6 +427,61 @@ func (bs *BlobStorage) Indices(root [32]byte, s primitives.Slot) ([]bool, error)
return mask, nil
}
// ColumnIndices retrieves the stored column indices from our filesystem.
func (bs *BlobStorage) ColumnIndices(root [32]byte) (map[uint64]bool, error) {
custody := make(map[uint64]bool, fieldparams.NumberOfColumns)
// Get all the files in the directory.
rootDir := blobNamer{root: root}.dir()
entries, err := afero.ReadDir(bs.fs, rootDir)
if err != nil {
// If the directory does not exist, we do not custody any columns.
if os.IsNotExist(err) {
return nil, nil
}
return nil, errors.Wrap(err, "read directory")
}
// Iterate over all the entries in the directory.
for _, entry := range entries {
// If the entry is a directory, skip it.
if entry.IsDir() {
continue
}
// If the entry does not have the correct extension, skip it.
name := entry.Name()
if !strings.HasSuffix(name, sszExt) {
continue
}
// The file should be in the `<index>.<extension>` format.
// Skip the file if it does not match the format.
parts := strings.Split(name, ".")
if len(parts) != 2 {
continue
}
// Get the column index from the file name.
columnIndexStr := parts[0]
columnIndex, err := strconv.ParseUint(columnIndexStr, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "unexpected directory entry breaks listing, %s", parts[0])
}
// If the column index is out of bounds, return an error.
if columnIndex >= fieldparams.NumberOfColumns {
return nil, errors.Wrapf(errIndexOutOfBounds, "invalid index %d", columnIndex)
}
// Mark the column index as in custody.
custody[columnIndex] = true
}
return custody, nil
}
// Clear deletes all files on the filesystem.
func (bs *BlobStorage) Clear() error {
dirs, err := listDir(bs.fs, ".")
@@ -326,6 +514,10 @@ func namerForSidecar(sc blocks.VerifiedROBlob) blobNamer {
return blobNamer{root: sc.BlockRoot(), index: sc.Index}
}
func namerForDataColumn(col blocks.VerifiedRODataColumn) blobNamer {
return blobNamer{root: col.BlockRoot(), index: col.ColumnIndex}
}
func (p blobNamer) dir() string {
return rootString(p.root)
}

View File

@@ -9,8 +9,11 @@ import (
)
// blobIndexMask is a bitmask representing the set of blob indices that are currently set.
// TODO: Separate blobs from data columns
type blobIndexMask []bool
// type blobIndexMask [fieldparams.NumberOfColumns]bool
// BlobStorageSummary represents cached information about the BlobSidecars on disk for each root the cache knows about.
type BlobStorageSummary struct {
slot primitives.Slot
@@ -30,6 +33,21 @@ func (s BlobStorageSummary) HasIndex(idx uint64) bool {
return s.mask[idx]
}
// HasDataColumnIndex returns true if the DataColumnSidecar at the given index is available in the filesystem.
func (s BlobStorageSummary) HasDataColumnIndex(idx uint64) bool {
// Protect from panic, but assume callers are sophisticated enough to not need an error telling them they have an invalid idx.
numberOfColumns := params.BeaconConfig().NumberOfColumns
if idx >= numberOfColumns {
return false
}
if idx >= uint64(len(s.mask)) {
return false
}
return s.mask[idx]
}
// AllAvailable returns true if we have all blobs for all indices from 0 to count-1.
func (s BlobStorageSummary) AllAvailable(count int) bool {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s.slot)
@@ -47,6 +65,21 @@ func (s BlobStorageSummary) AllAvailable(count int) bool {
return true
}
// AllDataColumnsAvailable returns true if we have all data columns for the corresponding indices.
func (s BlobStorageSummary) AllDataColumnsAvailable(indices map[uint64]bool) bool {
if len(indices) > len(s.mask) {
return false
}
for index := range indices {
if !s.mask[index] {
return false
}
}
return true
}
// BlobStorageSummarizer can be used to receive a summary of metadata about blobs on disk for a given root.
// The BlobStorageSummary can be used to check which indices (if any) are available for a given block by root.
type BlobStorageSummarizer interface {
@@ -76,16 +109,19 @@ func (s *blobStorageCache) Summary(root [32]byte) BlobStorageSummary {
}
func (s *blobStorageCache) ensure(key [32]byte, slot primitives.Slot, idx uint64) error {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if idx >= uint64(maxBlobsPerBlock) {
return errIndexOutOfBounds
}
// TODO: Separate blobs from data columns
// maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
// if idx >= uint64(maxBlobsPerBlock) {
// return errIndexOutOfBounds
// }
s.mu.Lock()
defer s.mu.Unlock()
v := s.cache[key]
v.slot = slot
if v.mask == nil {
v.mask = make(blobIndexMask, maxBlobsPerBlock)
// TODO: Separate blobs from data columns
numberOfColumns := params.BeaconConfig().NumberOfColumns
v.mask = make(blobIndexMask, numberOfColumns)
}
if !v.mask[idx] {
s.updateMetrics(1)

View File

@@ -3,6 +3,7 @@ package filesystem
import (
"testing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -152,3 +153,124 @@ func TestAllAvailable(t *testing.T) {
})
}
}
func TestHasDataColumnIndex(t *testing.T) {
storedIndices := map[uint64]bool{
1: true,
3: true,
5: true,
}
cases := []struct {
name string
idx uint64
expected bool
}{
{
name: "index is too high",
idx: fieldparams.NumberOfColumns,
expected: false,
},
{
name: "non existing index",
idx: 2,
expected: false,
},
{
name: "existing index",
idx: 3,
expected: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
// Get the maximum index that is stored.
maxIndex := uint64(0)
for index := range storedIndices {
if index > maxIndex {
maxIndex = index
}
}
mask := make(blobIndexMask, maxIndex+1)
for idx := range storedIndices {
mask[idx] = true
}
sum := BlobStorageSummary{mask: mask}
require.Equal(t, c.expected, sum.HasDataColumnIndex(c.idx))
})
}
}
func TestAllDataColumnAvailable(t *testing.T) {
tooManyColumns := make(map[uint64]bool, fieldparams.NumberOfColumns+1)
for i := uint64(0); i < fieldparams.NumberOfColumns+1; i++ {
tooManyColumns[i] = true
}
columns346 := map[uint64]bool{
3: true,
4: true,
6: true,
}
columns36 := map[uint64]bool{
3: true,
6: true,
}
cases := []struct {
name string
storedIndices map[uint64]bool
testedIndices map[uint64]bool
expected bool
}{
{
name: "no tested indices",
storedIndices: columns346,
testedIndices: map[uint64]bool{},
expected: true,
},
{
name: "too many tested indices",
storedIndices: columns346,
testedIndices: tooManyColumns,
expected: false,
},
{
name: "not all tested indices are stored",
storedIndices: columns36,
testedIndices: columns346,
expected: false,
},
{
name: "all tested indices are stored",
storedIndices: columns346,
testedIndices: columns36,
expected: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
// Get the maximum index that is stored.
maxIndex := uint64(0)
for index := range c.storedIndices {
if index > maxIndex {
maxIndex = index
}
}
mask := make(blobIndexMask, maxIndex+1)
for idx := range c.storedIndices {
mask[idx] = true
}
sum := BlobStorageSummary{mask: mask}
require.Equal(t, c.expected, sum.AllDataColumnsAvailable(c.testedIndices))
})
}
}

View File

@@ -953,6 +953,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
FinalizationFetcher: chainService,
BlockReceiver: chainService,
BlobReceiver: chainService,
DataColumnReceiver: chainService,
AttestationReceiver: chainService,
GenesisTimeFetcher: chainService,
GenesisFetcher: chainService,

View File

@@ -7,6 +7,7 @@ go_library(
"broadcaster.go",
"config.go",
"connection_gater.go",
"custody.go",
"dial_relay_node.go",
"discovery.go",
"doc.go",
@@ -45,6 +46,7 @@ go_library(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
@@ -55,6 +57,7 @@ go_library(
"//beacon-chain/startup:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
@@ -116,6 +119,7 @@ go_test(
"addr_factory_test.go",
"broadcaster_test.go",
"connection_gater_test.go",
"custody_test.go",
"dial_relay_node_test.go",
"discovery_test.go",
"fork_test.go",
@@ -137,9 +141,11 @@ go_test(
flaky = True,
tags = ["requires-network"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
@@ -152,6 +158,7 @@ go_test(
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/leaky-bucket:go_default_library",
@@ -162,6 +169,7 @@ go_test(
"//network/forks:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//proto/testing:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",

View File

@@ -11,6 +11,7 @@ import (
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
@@ -96,7 +97,12 @@ func (s *Service) BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint
return nil
}
func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att, forkDigest [4]byte) {
func (s *Service) internalBroadcastAttestation(
ctx context.Context,
subnet uint64,
att ethpb.Att,
forkDigest [fieldparams.VersionLength]byte,
) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastAttestation")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -152,7 +158,7 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
}
}
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [4]byte) {
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [fieldparams.VersionLength]byte) {
_, span := trace.StartSpan(ctx, "p2p.broadcastSyncCommittee")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -228,7 +234,12 @@ func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.
return nil
}
func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.BlobSidecar, forkDigest [4]byte) {
func (s *Service) internalBroadcastBlob(
ctx context.Context,
subnet uint64,
blobSidecar *ethpb.BlobSidecar,
forkDigest [fieldparams.VersionLength]byte,
) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -243,7 +254,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
s.subnetLocker(wrappedSubIdx).RUnlock()
if !hasPeer {
blobSidecarCommitteeBroadcastAttempts.Inc()
blobSidecarBroadcastAttempts.Inc()
if err := func() error {
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
@@ -252,7 +263,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
return err
}
if ok {
blobSidecarCommitteeBroadcasts.Inc()
blobSidecarBroadcasts.Inc()
return nil
}
return errors.New("failed to find peers for subnet")
@@ -268,6 +279,120 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
}
}
// BroadcastDataColumn broadcasts a data column sidecar to the p2p network. The message is assumed to be
// broadcast on the current fork and on the given column subnet.
// TODO: Add tests.
func (s *Service) BroadcastDataColumn(
ctx context.Context,
root [fieldparams.RootLength]byte,
columnSubnet uint64,
dataColumnSidecar *ethpb.DataColumnSidecar,
) error {
// Add tracing to the function.
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastDataColumn")
defer span.End()
// Ensure the data column sidecar is not nil.
if dataColumnSidecar == nil {
return errors.Errorf("attempted to broadcast nil data column sidecar at subnet %d", columnSubnet)
}
// Retrieve the current fork digest.
forkDigest, err := s.currentForkDigest()
if err != nil {
err := errors.Wrap(err, "current fork digest")
tracing.AnnotateError(span, err)
return err
}
// Non-blocking broadcast, with attempts to discover a column subnet peer if none available.
go s.internalBroadcastDataColumn(ctx, root, columnSubnet, dataColumnSidecar, forkDigest)
return nil
}
func (s *Service) internalBroadcastDataColumn(
ctx context.Context,
root [fieldparams.RootLength]byte,
columnSubnet uint64,
dataColumnSidecar *ethpb.DataColumnSidecar,
forkDigest [fieldparams.VersionLength]byte,
) {
// Add tracing to the function.
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastDataColumn")
defer span.End()
// Increase the number of broadcast attempts.
dataColumnSidecarBroadcastAttempts.Inc()
// Clear parent context / deadline.
ctx = trace.NewContext(context.Background(), span)
// Define a one-slot length context timeout.
oneSlot := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
ctx, cancel := context.WithTimeout(ctx, oneSlot)
defer cancel()
// Build the topic corresponding to this column subnet and this fork digest.
topic := dataColumnSubnetToTopic(columnSubnet, forkDigest)
// Compute the wrapped subnet index.
wrappedSubIdx := columnSubnet + dataColumnSubnetVal
// Check if we have peers with this subnet.
hasPeer := func() bool {
s.subnetLocker(wrappedSubIdx).RLock()
defer s.subnetLocker(wrappedSubIdx).RUnlock()
return s.hasPeerWithSubnet(topic)
}()
// If no peers are found, attempt to find peers with this subnet.
if !hasPeer {
if err := func() error {
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
ok, err := s.FindPeersWithSubnet(ctx, topic, columnSubnet, 1 /*threshold*/)
if err != nil {
return errors.Wrap(err, "find peers for subnet")
}
if ok {
return nil
}
return errors.New("failed to find peers for subnet")
}(); err != nil {
log.WithError(err).Error("Failed to find peers")
tracing.AnnotateError(span, err)
}
}
// Broadcast the data column sidecar to the network.
if err := s.broadcastObject(ctx, dataColumnSidecar, topic); err != nil {
log.WithError(err).Error("Failed to broadcast data column sidecar")
tracing.AnnotateError(span, err)
}
header := dataColumnSidecar.SignedBlockHeader.GetHeader()
slot := header.GetSlot()
slotStartTime, err := slots.ToTime(uint64(s.genesisTime.Unix()), slot)
if err != nil {
log.WithError(err).Error("Failed to convert slot to time")
}
log.WithFields(logrus.Fields{
"slot": slot,
"timeSinceSlotStart": time.Since(slotStartTime),
"root": fmt.Sprintf("%#x", root),
"columnSubnet": columnSubnet,
}).Debug("Broadcasted data column sidecar")
// Increase the number of successful broadcasts.
dataColumnSidecarBroadcasts.Inc()
}
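The column subnet handed to BroadcastDataColumn is derived from the column index; in the consensus specs this mapping is a plain modulo over DATA_COLUMN_SIDECAR_SUBNET_COUNT. A small sketch with the subnet count hard-coded for illustration (Prysm reads the real value from the beacon config):

package main

import "fmt"

// dataColumnSidecarSubnetCount is illustrative; the real value comes from
// params.BeaconConfig().DataColumnSidecarSubnetCount.
const dataColumnSidecarSubnetCount = 128

// computeSubnetForDataColumn mirrors the spec mapping
// subnet_id = column_index % DATA_COLUMN_SIDECAR_SUBNET_COUNT.
func computeSubnetForDataColumn(columnIndex uint64) uint64 {
	return columnIndex % dataColumnSidecarSubnetCount
}

func main() {
	for _, idx := range []uint64{0, 1, 127, 128, 200} {
		fmt.Printf("column %d -> subnet %d\n", idx, computeSubnetForDataColumn(idx))
	}
}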
// method to broadcast messages to other peers in our gossip mesh.
func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastObject")
@@ -297,14 +422,18 @@ func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic
return nil
}
func attestationToTopic(subnet uint64, forkDigest [4]byte) string {
func attestationToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(AttestationSubnetTopicFormat, forkDigest, subnet)
}
func syncCommitteeToTopic(subnet uint64, forkDigest [4]byte) string {
func syncCommitteeToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(SyncCommitteeSubnetTopicFormat, forkDigest, subnet)
}
func blobSubnetToTopic(subnet uint64, forkDigest [4]byte) string {
func blobSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(BlobSubnetTopicFormat, forkDigest, subnet)
}
func dataColumnSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(DataColumnSubnetTopicFormat, forkDigest, subnet)
}

View File

@@ -12,11 +12,16 @@ import (
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
"github.com/prysmaticlabs/go-bitfield"
"google.golang.org/protobuf/proto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
p2ptest "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -24,7 +29,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"google.golang.org/protobuf/proto"
)
func TestService_Broadcast(t *testing.T) {
@@ -516,3 +520,71 @@ func TestService_BroadcastBlob(t *testing.T) {
require.NoError(t, p.BroadcastBlob(ctx, subnet, blobSidecar))
require.Equal(t, false, util.WaitTimeout(&wg, 1*time.Second), "Failed to receive pubsub within 1s")
}
func TestService_BroadcastDataColumn(t *testing.T) {
require.NoError(t, kzg.Start())
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.NotEqual(t, 0, len(p1.BHost.Network().Peers()), "No peers")
p := &Service{
host: p1.BHost,
pubsub: p1.PubSub(),
joinedTopics: map[string]*pubsub.Topic{},
cfg: &Config{},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
subnetsLock: make(map[uint64]*sync.RWMutex),
subnetsLockLock: sync.Mutex{},
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockElectra())
require.NoError(t, err)
blobs := make([]kzg.Blob, 6)
sidecars, err := peerdas.DataColumnSidecars(b, blobs)
require.NoError(t, err)
sidecar := sidecars[0]
subnet := uint64(0)
topic := DataColumnSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(sidecar)] = topic
digest, err := p.currentForkDigest()
require.NoError(t, err)
topic = fmt.Sprintf(topic, digest, subnet)
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
wg.Add(1)
go func(tt *testing.T) {
defer wg.Done()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
msg, err := sub.Next(ctx)
require.NoError(t, err)
result := &ethpb.DataColumnSidecar{}
require.NoError(t, p.Encoding().DecodeGossip(msg.Data, result))
require.DeepEqual(t, result, sidecar)
}(t)
// Attempt to broadcast nil object should fail.
ctx := context.Background()
var root [fieldparams.RootLength]byte
require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastDataColumn(ctx, root, subnet, nil))
// Broadcast to peers and wait.
require.NoError(t, p.BroadcastDataColumn(ctx, root, subnet, sidecar))
require.Equal(t, false, util.WaitTimeout(&wg, 1*time.Second), "Failed to receive pubsub within 1s")
}

beacon-chain/p2p/custody.go (new file)
View File

@@ -0,0 +1,147 @@
package p2p
import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
// AdmissibleCustodyGroupsPeers returns a list of peers that custody a superset of the local node's custody groups.
func (s *Service) AdmissibleCustodyGroupsPeers(peers []peer.ID) ([]peer.ID, error) {
localCustodyGroupCount := peerdas.CustodyGroupCount()
return s.custodyGroupsAdmissiblePeers(peers, localCustodyGroupCount)
}
// AdmissibleCustodySamplingPeers returns a list of peers that custody a superset of the local node's sampling columns.
func (s *Service) AdmissibleCustodySamplingPeers(peers []peer.ID) ([]peer.ID, error) {
localSubnetSamplingSize := peerdas.CustodyGroupSamplingSize()
return s.custodyGroupsAdmissiblePeers(peers, localSubnetSamplingSize)
}
// custodyGroupsAdmissiblePeers filters out `peers` that do not custody a superset of our own custody groups.
func (s *Service) custodyGroupsAdmissiblePeers(peers []peer.ID, custodyGroupCount uint64) ([]peer.ID, error) {
// Get the total number of custody groups.
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
// Retrieve the local node ID.
localNodeId := s.NodeID()
// Retrieve the needed custody groups.
neededCustodyGroups, err := peerdas.CustodyGroups(localNodeId, custodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "custody groups")
}
// Find the valid peers.
validPeers := make([]peer.ID, 0, len(peers))
loop:
for _, pid := range peers {
// Get the custody group count of the remote peer.
remoteCustodyGroupCount := s.CustodyGroupCountFromPeer(pid)
// If the remote peer custodies fewer groups than we do, skip it.
if remoteCustodyGroupCount < custodyGroupCount {
continue
}
// Get the remote node ID from the peer ID.
remoteNodeID, err := ConvertPeerIDToNodeID(pid)
if err != nil {
return nil, errors.Wrap(err, "convert peer ID to node ID")
}
// Get the custody groups of the remote peer.
remoteCustodyGroups, err := peerdas.CustodyGroups(remoteNodeID, remoteCustodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "custody groups")
}
remoteCustodyGroupsCount := uint64(len(remoteCustodyGroups))
// If the remote peer custodies all the possible groups, add it to the list.
if remoteCustodyGroupsCount == numberOfCustodyGroups {
validPeers = append(validPeers, pid)
continue
}
// Skip the peer if it does not custody every group we need.
for custodyGroup := range neededCustodyGroups {
if !remoteCustodyGroups[custodyGroup] {
continue loop
}
}
// Add valid peer to list
validPeers = append(validPeers, pid)
}
return validPeers, nil
}
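The admissibility rule reduces to a set question: a remote peer qualifies only if its custody groups form a superset of the groups we need (peers custodying every group are accepted immediately). A standalone sketch of that check over plain sets (names are illustrative):

package main

import "fmt"

// isAdmissible reports whether the remote peer custodies every group we need.
func isAdmissible(needed, remote map[uint64]bool) bool {
	for group := range needed {
		if !remote[group] {
			return false
		}
	}
	return true
}

func main() {
	needed := map[uint64]bool{3: true, 17: true}
	peerA := map[uint64]bool{3: true, 17: true, 42: true} // superset: admissible
	peerB := map[uint64]bool{3: true, 42: true}           // missing group 17: not admissible
	fmt.Println(isAdmissible(needed, peerA), isAdmissible(needed, peerB)) // true false
}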
// custodyGroupCountFromPeerENR retrieves the custody count from the peer ENR.
// If the ENR is not available, it defaults to the minimum number of custody groups
// an honest node custodies and serves samples from.
func (s *Service) custodyGroupCountFromPeerENR(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of groups.
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Retrieve the ENR of the peer.
record, err := s.peers.ENR(pid)
if err != nil {
log.WithError(err).WithFields(logrus.Fields{
"peerID": pid,
"defaultValue": custodyRequirement,
}).Debug("Failed to retrieve ENR for peer, defaulting to the default value")
return custodyRequirement
}
// Retrieve the custody group count from the ENR.
custodyGroupCount, err := peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
log.WithError(err).WithFields(logrus.Fields{
"peerID": pid,
"defaultValue": custodyRequirement,
}).Debug("Failed to retrieve custody group count from ENR for peer, defaulting to the default value")
return custodyRequirement
}
return custodyGroupCount
}
// CustodyGroupCountFromPeer retrieves the custody group count from a peer.
// It first tries to get the custody group count from the peer's metadata,
// then falls back to the ENR value if the metadata is not available, and finally
// falls back to the minimum number of custody groups an honest node should custody
// and serve samples from if the ENR is not available either.
func (s *Service) CustodyGroupCountFromPeer(pid peer.ID) uint64 {
// Try to get the custody group count from the peer's metadata.
metadata, err := s.peers.Metadata(pid)
if err != nil {
// On error, default to the ENR value.
log.WithError(err).WithField("peerID", pid).Debug("Failed to retrieve metadata for peer, defaulting to the ENR value")
return s.custodyGroupCountFromPeerENR(pid)
}
// If the metadata is nil, default to the ENR value.
if metadata == nil {
log.WithField("peerID", pid).Debug("Metadata is nil, defaulting to the ENR value")
return s.custodyGroupCountFromPeerENR(pid)
}
// Get the custody group count from the metadata.
custodyCount := metadata.CustodyGroupCount()
// If the custody count is zero, default to the ENR value.
if custodyCount == 0 {
log.WithField("peerID", pid).Debug("Custody count from metadata is 0, defaulting to the ENR value")
return s.custodyGroupCountFromPeerENR(pid)
}
return custodyCount
}
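The lookup is a three-step fallback: a non-zero metadata value wins, otherwise the ENR cgc entry, otherwise the protocol minimum. A hedged sketch of that chain with function stand-ins rather than the real peer-store API:

package main

import "fmt"

// custodyRequirement is illustrative; the real minimum is params.BeaconConfig().CustodyRequirement.
const custodyRequirement uint64 = 4

// resolveCustodyGroupCount prefers a non-zero metadata value, then the ENR entry,
// then the protocol minimum.
func resolveCustodyGroupCount(fromMetadata, fromENR func() (uint64, bool)) uint64 {
	if count, ok := fromMetadata(); ok && count > 0 {
		return count
	}
	if count, ok := fromENR(); ok {
		return count
	}
	return custodyRequirement
}

func main() {
	noMetadata := func() (uint64, bool) { return 0, false }
	enrSaysSeven := func() (uint64, bool) { return 7, true }
	metadataSaysEight := func() (uint64, bool) { return 8, true }
	fmt.Println(resolveCustodyGroupCount(noMetadata, enrSaysSeven))        // 7
	fmt.Println(resolveCustodyGroupCount(metadataSaysEight, enrSaysSeven)) // 8
}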

View File

@@ -0,0 +1,201 @@
package p2p
import (
"context"
"crypto/ecdsa"
"net"
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
prysmNetwork "github.com/prysmaticlabs/prysm/v5/network"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/metadata"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func createPeer(t *testing.T, privateKeyOffset int, custodyCount uint64) (*enr.Record, peer.ID, *ecdsa.PrivateKey) {
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(privateKeyOffset + i)
}
unmarshalledPrivateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
privateKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledPrivateKey)
require.NoError(t, err)
peerID, err := peer.IDFromPrivateKey(unmarshalledPrivateKey)
require.NoError(t, err)
record := &enr.Record{}
record.Set(peerdas.Cgc(custodyCount))
record.Set(enode.Secp256k1(privateKey.PublicKey))
return record, peerID, privateKey
}
func TestAdmissibleCustodyGroupsPeers(t *testing.T) {
genesisValidatorRoot := make([]byte, 32)
for i := 0; i < 32; i++ {
genesisValidatorRoot[i] = byte(i)
}
service := &Service{
cfg: &Config{},
genesisTime: time.Now(),
genesisValidatorsRoot: genesisValidatorRoot,
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
}
ipAddrString, err := prysmNetwork.ExternalIPv4()
require.NoError(t, err)
ipAddr := net.ParseIP(ipAddrString)
custodyRequirement := params.BeaconConfig().CustodyRequirement
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Peer 1 custodies exactly the same groups as us.
// (We use the same key pair as ours for simplicity.)
peer1Record, peer1ID, localPrivateKey := createPeer(t, 1, custodyRequirement)
// Peer 2 custodies all the groups.
peer2Record, peer2ID, _ := createPeer(t, 2, dataColumnSidecarSubnetCount)
// Peer 3 custodies different groups from us (but the same count).
// (We use the same public key as peer 2 for simplicity.)
peer3Record, peer3ID, _ := createPeer(t, 3, custodyRequirement)
// Peer 4 custodies less groups than us.
peer4Record, peer4ID, _ := createPeer(t, 4, custodyRequirement-1)
createListener := func() (*discover.UDPv5, error) {
return service.createListener(ipAddr, localPrivateKey)
}
listener, err := newListener(createListener)
require.NoError(t, err)
service.dv5Listener = listener
service.peers.Add(peer1Record, peer1ID, nil, network.DirOutbound)
service.peers.Add(peer2Record, peer2ID, nil, network.DirOutbound)
service.peers.Add(peer3Record, peer3ID, nil, network.DirOutbound)
service.peers.Add(peer4Record, peer4ID, nil, network.DirOutbound)
actual, err := service.AdmissibleCustodyGroupsPeers([]peer.ID{peer1ID, peer2ID, peer3ID, peer4ID})
require.NoError(t, err)
expected := []peer.ID{peer1ID, peer2ID}
require.DeepSSZEqual(t, expected, actual)
}
func TestCustodyGroupCountFromPeer(t *testing.T) {
const (
expectedENR uint64 = 7
expectedMetadata uint64 = 8
pid = "test-id"
)
cgc := peerdas.Cgc(expectedENR)
// Define a nil record
var nilRecord *enr.Record = nil
// Define an empty record (record with non `cgc` entry)
emptyRecord := &enr.Record{}
// Define a nominal record
nominalRecord := &enr.Record{}
nominalRecord.Set(cgc)
// Define a metadata with zero custody.
zeroMetadata := wrapper.WrappedMetadataV2(&pb.MetaDataV2{
CustodyGroupCount: 0,
})
// Define a nominal metadata.
nominalMetadata := wrapper.WrappedMetadataV2(&pb.MetaDataV2{
CustodyGroupCount: expectedMetadata,
})
testCases := []struct {
name string
record *enr.Record
metadata metadata.Metadata
expected uint64
}{
{
name: "No metadata - No ENR",
record: nilRecord,
expected: params.BeaconConfig().CustodyRequirement,
},
{
name: "No metadata - Empty ENR",
record: emptyRecord,
expected: params.BeaconConfig().CustodyRequirement,
},
{
name: "No Metadata - ENR",
record: nominalRecord,
expected: expectedENR,
},
{
name: "Metadata with 0 value - ENR",
record: nominalRecord,
metadata: zeroMetadata,
expected: expectedENR,
},
{
name: "Metadata - ENR",
record: nominalRecord,
metadata: nominalMetadata,
expected: expectedMetadata,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create peers status.
peers := peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})
// Set the metadata.
if tc.metadata != nil {
peers.SetMetadata(pid, tc.metadata)
}
// Add a new peer with the record.
peers.Add(tc.record, pid, nil, network.DirOutbound)
// Create a new service.
service := &Service{
peers: peers,
metaData: tc.metadata,
}
// Retrieve the custody count from the remote peer.
actual := service.CustodyGroupCountFromPeer(pid)
// Verify the result.
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -16,6 +16,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -163,6 +164,12 @@ func (s *Service) RefreshPersistentSubnets() {
return
}
// Initialize persistent column subnets.
if err := initializePersistentColumnSubnets(nodeID); err != nil {
log.WithError(err).Error("Could not initialize persistent column subnets")
return
}
// Get the current attestation subnet bitfield.
bitV := bitfield.NewBitvector64()
attestationCommittees := cache.SubnetIDs.GetAllSubnets()
@@ -219,18 +226,51 @@ func (s *Service) RefreshPersistentSubnets() {
// Get the sync subnet bitfield in our metadata.
currentBitSInMetadata := s.Metadata().SyncnetsBitfield()
// Is our sync bitvector record up to date?
isBitSUpToDate := bytes.Equal(bitS, inRecordBitS) && bytes.Equal(bitS, currentBitSInMetadata)
if metadataVersion == version.Altair && isBitVUpToDate && isBitSUpToDate {
// Compare current epoch with the Fulu fork epoch.
fuluForkEpoch := params.BeaconConfig().FuluForkEpoch
if currentEpoch < fuluForkEpoch {
// Altair behaviour.
if metadataVersion == version.Altair && isBitVUpToDate && isBitSUpToDate {
// Nothing to do, return early.
return
}
// Some data have changed, update our record and metadata.
s.updateSubnetRecordWithMetadataV2(bitV, bitS)
// Ping all peers to inform them of new metadata
s.pingPeersAndLogEnr()
return
}
// Get the current custody group count.
custodyGroupCount := peerdas.CustodyGroupCount()
// Get the custody group count we store in our record.
inRecordCustodyGroupCount, err := peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
log.WithError(err).Error("Could not retrieve custody subnet count")
return
}
// Get the custody group count in our metadata.
inMetadataCustodyGroupCount := s.Metadata().CustodyGroupCount()
isCustodyGroupCountUpToDate := (custodyGroupCount == inRecordCustodyGroupCount && custodyGroupCount == inMetadataCustodyGroupCount)
if isBitVUpToDate && isBitSUpToDate && isCustodyGroupCountUpToDate {
// Nothing to do, return early.
return
}
// Some data have changed, update our record and metadata.
s.updateSubnetRecordWithMetadataV2(bitV, bitS)
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadataV3(bitV, bitS, custodyGroupCount)
// Ping all peers to inform them of new metadata
// Ping all peers.
s.pingPeersAndLogEnr()
}
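The refresh now branches on the fork: before Fulu only the attnets/syncnets bitfields decide whether MetadataV2 is re-published, while from Fulu on the custody group count joins the staleness check and MetadataV3 (which carries the cgc field) is published instead. A compact sketch of that decision, with booleans standing in for the real comparisons:

package main

import "fmt"

// metadataAction mirrors the branch in RefreshPersistentSubnets: it reports which
// metadata version, if any, should be re-published and advertised to peers.
func metadataAction(fuluActive, attnetsFresh, syncnetsFresh, custodyFresh bool) string {
	if !fuluActive {
		if attnetsFresh && syncnetsFresh {
			return "nothing to do"
		}
		return "update record + MetadataV2, ping peers"
	}
	if attnetsFresh && syncnetsFresh && custodyFresh {
		return "nothing to do"
	}
	return "update record + MetadataV3 (with cgc), ping peers"
}

func main() {
	fmt.Println(metadataAction(false, true, true, false)) // nothing to do
	fmt.Println(metadataAction(true, true, true, false))  // update record + MetadataV3 (with cgc), ping peers
}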
@@ -457,6 +497,11 @@ func (s *Service) createLocalNode(
localNode.Set(quicEntry)
}
if params.FuluEnabled() {
custodyGroupCount := peerdas.CustodyGroupCount()
localNode.Set(peerdas.Cgc(custodyGroupCount))
}
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)

View File

@@ -25,6 +25,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
@@ -140,6 +141,15 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
func TestCreateLocalNode(t *testing.T) {
params.SetupTestConfigCleanup(t)
// Set the fulu fork epoch to something other than the far future epoch.
initFuluForkEpoch := params.BeaconConfig().FuluForkEpoch
params.BeaconConfig().FuluForkEpoch = 42
defer func() {
params.BeaconConfig().FuluForkEpoch = initFuluForkEpoch
}()
testCases := []struct {
name string
cfg *Config
@@ -171,6 +181,7 @@ func TestCreateLocalNode(t *testing.T) {
expectedError: false,
},
}
for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
// Define ports.
@@ -236,6 +247,11 @@ func TestCreateLocalNode(t *testing.T) {
syncSubnets := new([]byte)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(syncCommsSubnetEnrKey, syncSubnets)))
require.DeepSSZEqual(t, []byte{0}, *syncSubnets)
// Check cgc config.
custodyGroupCount := new(uint64)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(peerdas.CustodyGroupCountEnrKey, custodyGroupCount)))
require.Equal(t, params.BeaconConfig().CustodyRequirement, *custodyGroupCount)
})
}
}
@@ -535,7 +551,7 @@ type check struct {
metadataSequenceNumber uint64
attestationSubnets []uint64
syncSubnets []uint64
custodySubnetCount *uint64
custodyGroupCount *uint64
}
func checkPingCountCacheMetadataRecord(
@@ -601,6 +617,18 @@ func checkPingCountCacheMetadataRecord(
actualBitSMetadata := service.metaData.SyncnetsBitfield()
require.DeepSSZEqual(t, expectedBitS, actualBitSMetadata)
}
if expected.custodyGroupCount != nil {
// Check custody subnet count in ENR.
var actualCustodyGroupCount uint64
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(peerdas.CustodyGroupCountEnrKey, &actualCustodyGroupCount))
require.NoError(t, err)
require.Equal(t, *expected.custodyGroupCount, actualCustodyGroupCount)
// Check custody subnet count in metadata.
actualGroupCountMetadata := service.metaData.CustodyGroupCount()
require.Equal(t, *expected.custodyGroupCount, actualGroupCountMetadata)
}
}
func TestRefreshPersistentSubnets(t *testing.T) {
@@ -610,12 +638,18 @@ func TestRefreshPersistentSubnets(t *testing.T) {
defer cache.SubnetIDs.EmptyAllCaches()
defer cache.SyncSubnetIDs.EmptyAllCaches()
const altairForkEpoch = 5
const (
altairForkEpoch = 5
fuluForkEpoch = 10
)
custodyGroupCount := params.BeaconConfig().CustodyRequirement
// Set up epochs.
defaultCfg := params.BeaconConfig()
cfg := defaultCfg.Copy()
cfg.AltairForkEpoch = altairForkEpoch
cfg.FuluForkEpoch = fuluForkEpoch
params.OverrideBeaconConfig(cfg)
// Compute the number of seconds per epoch.
@@ -684,6 +718,39 @@ func TestRefreshPersistentSubnets(t *testing.T) {
},
},
},
{
name: "Fulu",
epochSinceGenesis: fuluForkEpoch,
checks: []check{
{
pingCount: 0,
metadataSequenceNumber: 0,
attestationSubnets: []uint64{},
syncSubnets: nil,
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
syncSubnets: nil,
custodyGroupCount: &custodyGroupCount,
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
custodyGroupCount: &custodyGroupCount,
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
custodyGroupCount: &custodyGroupCount,
},
},
},
}
for _, tc := range testCases {

View File

@@ -121,7 +121,7 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
return defaultAttesterSlashingTopicParams(), nil
case strings.Contains(topic, GossipBlsToExecutionChangeMessage):
return defaultBlsToExecutionChangeTopicParams(), nil
case strings.Contains(topic, GossipBlobSidecarMessage):
case strings.Contains(topic, GossipBlobSidecarMessage), strings.Contains(topic, GossipDataColumnSidecarMessage):
// TODO(Deneb): Using the default block scoring. But this should be updated.
return defaultBlockTopicParams(), nil
default:

View File

@@ -22,6 +22,7 @@ var gossipTopicMappings = map[string]func() proto.Message{
SyncCommitteeSubnetTopicFormat: func() proto.Message { return &ethpb.SyncCommitteeMessage{} },
BlsToExecutionChangeSubnetTopicFormat: func() proto.Message { return &ethpb.SignedBLSToExecutionChange{} },
BlobSubnetTopicFormat: func() proto.Message { return &ethpb.BlobSidecar{} },
DataColumnSubnetTopicFormat: func() proto.Message { return &ethpb.DataColumnSidecar{} },
}
// GossipTopicMappings is a function to return the assigned data type
@@ -50,7 +51,7 @@ func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
return gossipMessage(topic)
case AttestationSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.SingleAttestation{}
return &ethpb.AttestationElectra{}
}
return gossipMessage(topic)
case AttesterSlashingSubnetTopicFormat:
@@ -109,7 +110,7 @@ func init() {
// Specially handle Electra objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockElectra{})] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.SingleAttestation{})] = AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttestationElectra{})] = AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttesterSlashingElectra{})] = AttesterSlashingSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedAggregateAttestationAndProofElectra{})] = AggregateAndProofSubnetTopicFormat

View File

@@ -121,7 +121,7 @@ func TestGossipTopicMappings_CorrectType(t *testing.T) {
_, ok = pMessage.(*ethpb.SignedBeaconBlockElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.SingleAttestation)
_, ok = pMessage.(*ethpb.AttestationElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashingElectra)

View File

@@ -22,7 +22,9 @@ const (
)
func peerMultiaddrString(conn network.Conn) string {
return fmt.Sprintf("%s/p2p/%s", conn.RemoteMultiaddr().String(), conn.RemotePeer().String())
remoteMultiaddr := conn.RemoteMultiaddr().String()
remotePeerID := conn.RemotePeer().String()
return fmt.Sprintf("%s/p2p/%s", remoteMultiaddr, remotePeerID)
}
func (s *Service) connectToPeer(conn network.Conn) {

View File

@@ -3,6 +3,7 @@ package p2p
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/connmgr"
@@ -12,6 +13,7 @@ import (
"github.com/multiformats/go-multiaddr"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/metadata"
"google.golang.org/protobuf/proto"
@@ -28,6 +30,12 @@ type P2P interface {
ConnectionHandler
PeersProvider
MetadataProvider
DataColumnsHandler
}
type Acceser interface {
Broadcaster
PeerManager
}
// Broadcaster broadcasts messages to peers over the p2p pubsub protocol.
@@ -36,6 +44,7 @@ type Broadcaster interface {
BroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastDataColumn(ctx context.Context, root [fieldparams.RootLength]byte, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.
@@ -81,6 +90,7 @@ type PeerManager interface {
PeerID() peer.ID
Host() host.Host
ENR() *enr.Record
NodeID() enode.ID
DiscoveryAddresses() ([]multiaddr.Multiaddr, error)
RefreshPersistentSubnets()
FindPeersWithSubnet(ctx context.Context, topic string, subIndex uint64, threshold int) (bool, error)
@@ -102,3 +112,9 @@ type MetadataProvider interface {
Metadata() metadata.Metadata
MetadataSeq() uint64
}
type DataColumnsHandler interface {
CustodyGroupCountFromPeer(peer.ID) uint64
AdmissibleCustodyGroupsPeers([]peer.ID) ([]peer.ID, error)
AdmissibleCustodySamplingPeers([]peer.ID) ([]peer.ID, error)
}

View File

@@ -60,17 +60,25 @@ var (
"the subnet. The beacon node increments this counter when the broadcast is blocked " +
"until a subnet peer can be found.",
})
blobSidecarCommitteeBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
blobSidecarBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_blob_sidecar_committee_broadcasts",
Help: "The number of blob sidecar committee messages that were broadcast with no peer on.",
Help: "The number of blob sidecar messages that were broadcast with no peer on.",
})
syncCommitteeBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_sync_committee_subnet_attempted_broadcasts",
Help: "The number of sync committee that were attempted to be broadcast.",
})
blobSidecarCommitteeBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
blobSidecarBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_blob_sidecar_committee_attempted_broadcasts",
Help: "The number of blob sidecar committee messages that were attempted to be broadcast.",
Help: "The number of blob sidecar messages that were attempted to be broadcast.",
})
dataColumnSidecarBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_data_column_sidecar_broadcasts",
Help: "The number of data column sidecar messages that were broadcasted.",
})
dataColumnSidecarBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_data_column_sidecar_attempted_broadcasts",
Help: "The number of data column sidecar messages that were attempted to be broadcast.",
})
// Gossip Tracer Metrics

View File

@@ -4,7 +4,6 @@ import (
"time"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
)
@@ -126,13 +125,14 @@ func (s *BadResponsesScorer) IsBadPeer(pid peer.ID) error {
// isBadPeerNoLock is lock-free version of IsBadPeer.
func (s *BadResponsesScorer) isBadPeerNoLock(pid peer.ID) error {
if peerData, ok := s.store.PeerData(pid); ok {
if peerData.BadResponses >= s.config.Threshold {
return errors.Errorf("peer exceeded bad responses threshold: got %d, threshold %d", peerData.BadResponses, s.config.Threshold)
}
// if peerData, ok := s.store.PeerData(pid); ok {
// TODO: Remove this once out of devnet
// if peerData.BadResponses >= s.config.Threshold {
// return errors.Errorf("peer exceeded bad responses threshold: got %d, threshold %d", peerData.BadResponses, s.config.Threshold)
// }
return nil
}
// return nil
// }
return nil
}

View File

@@ -2,7 +2,6 @@ package scorers_test
import (
"context"
"sort"
"testing"
"github.com/libp2p/go-libp2p/core/network"
@@ -14,40 +13,41 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestScorers_BadResponses_Score(t *testing.T) {
const pid = "peer1"
// TODO: Uncomment when out of devnet
// func TestScorers_BadResponses_Score(t *testing.T) {
// const pid = "peer1"
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// ctx, cancel := context.WithCancel(context.Background())
// defer cancel()
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 4,
},
},
})
scorer := peerStatuses.Scorers().BadResponsesScorer()
// peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: 4,
// },
// },
// })
// scorer := peerStatuses.Scorers().BadResponsesScorer()
assert.Equal(t, 0., scorer.Score(pid), "Unexpected score for unregistered peer")
// assert.Equal(t, 0., scorer.Score(pid), "Unexpected score for unregistered peer")
scorer.Increment(pid)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, -2.5, scorer.Score(pid))
// scorer.Increment(pid)
// assert.NoError(t, scorer.IsBadPeer(pid))
// assert.Equal(t, -2.5, scorer.Score(pid))
scorer.Increment(pid)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, float64(-5), scorer.Score(pid))
// scorer.Increment(pid)
// assert.NoError(t, scorer.IsBadPeer(pid))
// assert.Equal(t, float64(-5), scorer.Score(pid))
scorer.Increment(pid)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, float64(-7.5), scorer.Score(pid))
// scorer.Increment(pid)
// assert.NoError(t, scorer.IsBadPeer(pid))
// assert.Equal(t, float64(-7.5), scorer.Score(pid))
scorer.Increment(pid)
assert.NotNil(t, scorer.IsBadPeer(pid))
assert.Equal(t, -100.0, scorer.Score(pid))
}
// scorer.Increment(pid)
// assert.NotNil(t, scorer.IsBadPeer(pid))
// assert.Equal(t, -100.0, scorer.Score(pid))
// }
func TestScorers_BadResponses_ParamsThreshold(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
@@ -142,58 +142,60 @@ func TestScorers_BadResponses_Decay(t *testing.T) {
assert.Equal(t, 1, badResponses, "unexpected bad responses for pid3")
}
func TestScorers_BadResponses_IsBadPeer(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// TODO: Uncomment when out of devnet
// func TestScorers_BadResponses_IsBadPeer(t *testing.T) {
// ctx, cancel := context.WithCancel(context.Background())
// defer cancel()
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
})
scorer := peerStatuses.Scorers().BadResponsesScorer()
pid := peer.ID("peer1")
assert.NoError(t, scorer.IsBadPeer(pid))
// peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{},
// })
// scorer := peerStatuses.Scorers().BadResponsesScorer()
// pid := peer.ID("peer1")
// assert.NoError(t, scorer.IsBadPeer(pid))
peerStatuses.Add(nil, pid, nil, network.DirUnknown)
assert.NoError(t, scorer.IsBadPeer(pid))
// peerStatuses.Add(nil, pid, nil, network.DirUnknown)
// assert.NoError(t, scorer.IsBadPeer(pid))
for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
scorer.Increment(pid)
if i == scorers.DefaultBadResponsesThreshold-1 {
assert.NotNil(t, scorer.IsBadPeer(pid), "Unexpected peer status")
} else {
assert.NoError(t, scorer.IsBadPeer(pid), "Unexpected peer status")
}
}
}
// for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
// scorer.Increment(pid)
// if i == scorers.DefaultBadResponsesThreshold-1 {
// assert.NotNil(t, scorer.IsBadPeer(pid), "Unexpected peer status")
// } else {
// assert.NoError(t, scorer.IsBadPeer(pid), "Unexpected peer status")
// }
// }
// }
func TestScorers_BadResponses_BadPeers(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// TODO: Uncomment when out of devnet
// func TestScorers_BadResponses_BadPeers(t *testing.T) {
// ctx, cancel := context.WithCancel(context.Background())
// defer cancel()
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
})
scorer := peerStatuses.Scorers().BadResponsesScorer()
pids := []peer.ID{peer.ID("peer1"), peer.ID("peer2"), peer.ID("peer3"), peer.ID("peer4"), peer.ID("peer5")}
for i := 0; i < len(pids); i++ {
peerStatuses.Add(nil, pids[i], nil, network.DirUnknown)
}
for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
scorer.Increment(pids[1])
scorer.Increment(pids[2])
scorer.Increment(pids[4])
}
assert.NoError(t, scorer.IsBadPeer(pids[0]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[1]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[2]), "Invalid peer status")
assert.NoError(t, scorer.IsBadPeer(pids[3]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[4]), "Invalid peer status")
want := []peer.ID{pids[1], pids[2], pids[4]}
badPeers := scorer.BadPeers()
sort.Slice(badPeers, func(i, j int) bool {
return badPeers[i] < badPeers[j]
})
assert.DeepEqual(t, want, badPeers, "Unexpected list of bad peers")
}
// peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{},
// })
// scorer := peerStatuses.Scorers().BadResponsesScorer()
// pids := []peer.ID{peer.ID("peer1"), peer.ID("peer2"), peer.ID("peer3"), peer.ID("peer4"), peer.ID("peer5")}
// for i := 0; i < len(pids); i++ {
// peerStatuses.Add(nil, pids[i], nil, network.DirUnknown)
// }
// for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
// scorer.Increment(pids[1])
// scorer.Increment(pids[2])
// scorer.Increment(pids[4])
// }
// assert.NoError(t, scorer.IsBadPeer(pids[0]), "Invalid peer status")
// assert.NotNil(t, scorer.IsBadPeer(pids[1]), "Invalid peer status")
// assert.NotNil(t, scorer.IsBadPeer(pids[2]), "Invalid peer status")
// assert.NoError(t, scorer.IsBadPeer(pids[3]), "Invalid peer status")
// assert.NotNil(t, scorer.IsBadPeer(pids[4]), "Invalid peer status")
// want := []peer.ID{pids[1], pids[2], pids[4]}
// badPeers := scorer.BadPeers()
// sort.Slice(badPeers, func(i, j int) bool {
// return badPeers[i] < badPeers[j]
// })
// assert.DeepEqual(t, want, badPeers, "Unexpected list of bad peers")
// }

View File

@@ -44,7 +44,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, 10.0, scorer.Score("peer1"), "Unexpected score")
assert.Equal(t, nil, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
assert.NoError(t, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
_, _, topicMap, err := scorer.GossipData("peer1")
assert.NoError(t, err)
assert.Equal(t, uint64(100), topicMap["a"].TimeInMesh, "incorrect time in mesh")

View File

@@ -212,99 +212,102 @@ func TestScorers_Service_Score(t *testing.T) {
})
}
func TestScorers_Service_loop(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
// TODO: Uncomment when out of devnet
// func TestScorers_Service_loop(t *testing.T) {
// ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
// defer cancel()
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 5,
DecayInterval: 50 * time.Millisecond,
},
BlockProviderScorerConfig: &scorers.BlockProviderScorerConfig{
DecayInterval: 25 * time.Millisecond,
Decay: 64,
},
},
})
s1 := peerStatuses.Scorers().BadResponsesScorer()
s2 := peerStatuses.Scorers().BlockProviderScorer()
// peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: 5,
// DecayInterval: 50 * time.Millisecond,
// },
// BlockProviderScorerConfig: &scorers.BlockProviderScorerConfig{
// DecayInterval: 25 * time.Millisecond,
// Decay: 64,
// },
// },
// })
// s1 := peerStatuses.Scorers().BadResponsesScorer()
// s2 := peerStatuses.Scorers().BlockProviderScorer()
pid1 := peer.ID("peer1")
peerStatuses.Add(nil, pid1, nil, network.DirUnknown)
for i := 0; i < s1.Params().Threshold+5; i++ {
s1.Increment(pid1)
}
assert.NotNil(t, s1.IsBadPeer(pid1), "Peer should be marked as bad")
// pid1 := peer.ID("peer1")
// peerStatuses.Add(nil, pid1, nil, network.DirUnknown)
// for i := 0; i < s1.Params().Threshold+5; i++ {
// s1.Increment(pid1)
// }
// assert.NotNil(t, s1.IsBadPeer(pid1), "Peer should be marked as bad")
s2.IncrementProcessedBlocks("peer1", 221)
assert.Equal(t, uint64(221), s2.ProcessedBlocks("peer1"))
// s2.IncrementProcessedBlocks("peer1", 221)
// assert.Equal(t, uint64(221), s2.ProcessedBlocks("peer1"))
done := make(chan struct{}, 1)
go func() {
defer func() {
done <- struct{}{}
}()
ticker := time.NewTicker(50 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ticker.C:
if s1.IsBadPeer(pid1) == nil && s2.ProcessedBlocks("peer1") == 0 {
return
}
case <-ctx.Done():
t.Error("Timed out")
return
}
}
}()
// done := make(chan struct{}, 1)
// go func() {
// defer func() {
// done <- struct{}{}
// }()
// ticker := time.NewTicker(50 * time.Millisecond)
// defer ticker.Stop()
// for {
// select {
// case <-ticker.C:
// if s1.IsBadPeer(pid1) == nil && s2.ProcessedBlocks("peer1") == 0 {
// return
// }
// case <-ctx.Done():
// t.Error("Timed out")
// return
// }
// }
// }()
<-done
assert.NoError(t, s1.IsBadPeer(pid1), "Peer should not be marked as bad")
assert.Equal(t, uint64(0), s2.ProcessedBlocks("peer1"), "No blocks are expected")
}
// <-done
// assert.NoError(t, s1.IsBadPeer(pid1), "Peer should not be marked as bad")
// assert.Equal(t, uint64(0), s2.ProcessedBlocks("peer1"), "No blocks are expected")
// }
func TestScorers_Service_IsBadPeer(t *testing.T) {
peerStatuses := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 2,
DecayInterval: 50 * time.Second,
},
},
})
// TODO: Uncomment when out of devnet
// func TestScorers_Service_IsBadPeer(t *testing.T) {
// peerStatuses := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: 2,
// DecayInterval: 50 * time.Second,
// },
// },
// })
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
}
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
// peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
// peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
// assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
// }
func TestScorers_Service_BadPeers(t *testing.T) {
peerStatuses := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 2,
DecayInterval: 50 * time.Second,
},
},
})
// TODO: Uncomment when out of devnet
// func TestScorers_Service_BadPeers(t *testing.T) {
// peerStatuses := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: 2,
// DecayInterval: 50 * time.Second,
// },
// },
// })
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, 0, len(peerStatuses.Scorers().BadPeers()))
for _, pid := range []peer.ID{"peer1", "peer3"} {
peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
}
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, 2, len(peerStatuses.Scorers().BadPeers()))
}
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer3"))
// assert.Equal(t, 0, len(peerStatuses.Scorers().BadPeers()))
// for _, pid := range []peer.ID{"peer1", "peer3"} {
// peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
// peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
// }
// assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
// assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer3"))
// assert.Equal(t, 2, len(peerStatuses.Scorers().BadPeers()))
// }

View File

@@ -705,31 +705,47 @@ func (p *Status) deprecatedPrune() {
p.tallyIPTracker()
}
// BestFinalized returns the highest finalized epoch equal to or higher than ours that is agreed
// upon by the majority of peers. This method may not return the absolute highest finalized, but
// the finalized epoch in which most peers can serve blocks (plurality voting).
// Ideally, all peers would be reporting the same finalized epoch but some may be behind due to their
// own latency, or because of their finalized epoch at the time we queried them.
// Returns epoch number and list of peers that are at or beyond that epoch.
// BestFinalized returns the highest finalized epoch equal to or higher than `ourFinalizedEpoch`
// that is agreed upon by the majority of peers, and the peers agreeing on this finalized epoch.
// This method may not return the absolute highest finalized epoch, but the finalized epoch in which
// most peers can serve blocks (plurality voting). Ideally, all peers would be reporting the same
// finalized epoch but some may be behind due to their own latency, or because of their finalized
// epoch at the time we queried them. Returns epoch number and list of peers that are at or beyond
// that epoch.
func (p *Status) BestFinalized(maxPeers int, ourFinalizedEpoch primitives.Epoch) (primitives.Epoch, []peer.ID) {
// Retrieve all connected peers.
connected := p.Connected()
// key: finalized epoch, value: number of peers that support this finalized epoch.
finalizedEpochVotes := make(map[primitives.Epoch]uint64)
// key: peer ID, value: finalized epoch of the peer.
pidEpoch := make(map[peer.ID]primitives.Epoch, len(connected))
// key: peer ID, value: head slot of the peer.
pidHead := make(map[peer.ID]primitives.Slot, len(connected))
potentialPIDs := make([]peer.ID, 0, len(connected))
for _, pid := range connected {
peerChainState, err := p.ChainState(pid)
if err == nil && peerChainState != nil && peerChainState.FinalizedEpoch >= ourFinalizedEpoch {
finalizedEpochVotes[peerChainState.FinalizedEpoch]++
pidEpoch[pid] = peerChainState.FinalizedEpoch
potentialPIDs = append(potentialPIDs, pid)
pidHead[pid] = peerChainState.HeadSlot
// Skip if the peer's finalized epoch is not defined, or if the peer's finalized epoch is
// lower than ours.
if err != nil || peerChainState == nil || peerChainState.FinalizedEpoch < ourFinalizedEpoch {
continue
}
finalizedEpochVotes[peerChainState.FinalizedEpoch]++
pidEpoch[pid] = peerChainState.FinalizedEpoch
pidHead[pid] = peerChainState.HeadSlot
potentialPIDs = append(potentialPIDs, pid)
}
// Select the target epoch, which is the epoch most peers agree upon.
var targetEpoch primitives.Epoch
var mostVotes uint64
// If there is a tie, select the highest epoch.
targetEpoch, mostVotes := primitives.Epoch(0), uint64(0)
for epoch, count := range finalizedEpochVotes {
if count > mostVotes || (count == mostVotes && epoch > targetEpoch) {
mostVotes = count
@@ -737,11 +753,12 @@ func (p *Status) BestFinalized(maxPeers int, ourFinalizedEpoch primitives.Epoch)
}
}
// Sort PIDs by finalized epoch, in decreasing order.
// Sort PIDs by finalized (epoch, head), in decreasing order.
sort.Slice(potentialPIDs, func(i, j int) bool {
if pidEpoch[potentialPIDs[i]] == pidEpoch[potentialPIDs[j]] {
return pidHead[potentialPIDs[i]] > pidHead[potentialPIDs[j]]
}
return pidEpoch[potentialPIDs[i]] > pidEpoch[potentialPIDs[j]]
})
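For readers skimming the diff, here is a small standalone sketch (made-up epochs and head slots, plain uint64 instead of Prysm's primitives types) of the tie-break and ordering rules described in the updated comment: on a vote tie the higher finalized epoch wins, and peers at the same epoch are ranked by head slot, highest first.

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Hypothetical vote counts: epochs 8 and 9 are each reported by 3 peers.
	votes := map[uint64]int{8: 3, 9: 3}
	targetEpoch, mostVotes := uint64(0), 0
	for epoch, count := range votes {
		// Same rule as the updated BestFinalized: on a tie, keep the higher epoch.
		if count > mostVotes || (count == mostVotes && epoch > targetEpoch) {
			mostVotes, targetEpoch = count, epoch
		}
	}
	fmt.Println(targetEpoch) // 9

	// Peers are then sorted by (finalized epoch, head slot), both decreasing.
	type peerInfo struct{ epoch, head uint64 }
	peersList := []peerInfo{{9, 120}, {9, 128}, {8, 130}}
	sort.Slice(peersList, func(i, j int) bool {
		if peersList[i].epoch == peersList[j].epoch {
			return peersList[i].head > peersList[j].head
		}
		return peersList[i].epoch > peersList[j].epoch
	})
	fmt.Println(peersList) // [{9 128} {9 120} {8 130}]
}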
@@ -764,26 +781,42 @@ func (p *Status) BestFinalized(maxPeers int, ourFinalizedEpoch primitives.Epoch)
// BestNonFinalized returns the highest known epoch, higher than ours,
// and is shared by at least minPeers.
func (p *Status) BestNonFinalized(minPeers int, ourHeadEpoch primitives.Epoch) (primitives.Epoch, []peer.ID) {
// Retrieve all connected peers.
connected := p.Connected()
// Calculate our head slot.
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
ourHeadSlot := slotsPerEpoch.Mul(uint64(ourHeadEpoch))
// key: head epoch, value: number of peers that support this epoch.
epochVotes := make(map[primitives.Epoch]uint64)
// key: peer ID, value: head epoch of the peer.
pidEpoch := make(map[peer.ID]primitives.Epoch, len(connected))
// key: peer ID, value: head slot of the peer.
pidHead := make(map[peer.ID]primitives.Slot, len(connected))
potentialPIDs := make([]peer.ID, 0, len(connected))
ourHeadSlot := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(ourHeadEpoch))
for _, pid := range connected {
peerChainState, err := p.ChainState(pid)
if err == nil && peerChainState != nil && peerChainState.HeadSlot > ourHeadSlot {
epoch := slots.ToEpoch(peerChainState.HeadSlot)
epochVotes[epoch]++
pidEpoch[pid] = epoch
pidHead[pid] = peerChainState.HeadSlot
potentialPIDs = append(potentialPIDs, pid)
// Skip if the peer's head epoch is not defined, or if the peer's head slot is
// lower than or equal to ours.
if err != nil || peerChainState == nil || peerChainState.HeadSlot <= ourHeadSlot {
continue
}
epoch := slots.ToEpoch(peerChainState.HeadSlot)
epochVotes[epoch]++
pidEpoch[pid] = epoch
pidHead[pid] = peerChainState.HeadSlot
potentialPIDs = append(potentialPIDs, pid)
}
// Select the target epoch, which has enough peers' votes (>= minPeers).
var targetEpoch primitives.Epoch
targetEpoch := primitives.Epoch(0)
for epoch, votes := range epochVotes {
if votes >= uint64(minPeers) && targetEpoch < epoch {
targetEpoch = epoch
@@ -1012,16 +1045,23 @@ func (p *Status) isfromBadIP(pid peer.ID) error {
return nil
}
ip, err := manet.ToIP(peerData.Address)
if err != nil {
return errors.Wrap(err, "to ip")
}
// ip, err := manet.ToIP(peerData.Address)
// if err != nil {
// return errors.Wrap(err, "to ip")
// }
if val, ok := p.ipTracker[ip.String()]; ok {
if val > CollocationLimit {
return errors.Errorf("collocation limit exceeded: got %d - limit %d", val, CollocationLimit)
}
}
// if val, ok := p.ipTracker[ip.String()]; ok {
// if val > CollocationLimit {
// TODO: Remove this once out of devnet.
// return errors.Errorf("colocation limit exceeded: got %d - limit %d", val, CollocationLimit)
// log.WithFields(logrus.Fields{
// "pid": pid,
// "ip": ip.String(),
// "colocationCount": val,
// "colocationLimit": CollocationLimit,
// }).Debug("Colocation limit exceeded. Peer should be banned.")
// }
// }
return nil
}

View File

@@ -3,7 +3,6 @@ package peers_test
import (
"context"
"crypto/rand"
"strconv"
"testing"
"time"
@@ -329,55 +328,56 @@ func TestPeerWithNilChainState(t *testing.T) {
require.Equal(t, resChainState, nothing)
}
func TestPeerBadResponses(t *testing.T) {
maxBadResponses := 2
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: maxBadResponses,
},
},
})
// TODO: Uncomment when out of devnet
// func TestPeerBadResponses(t *testing.T) {
// maxBadResponses := 2
// p := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: maxBadResponses,
// },
// },
// })
id, err := peer.Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
require.NoError(t, err)
{
_, err := id.MarshalBinary()
require.NoError(t, err)
}
// id, err := peer.Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
// require.NoError(t, err)
// {
// _, err := id.MarshalBinary()
// require.NoError(t, err)
// }
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
// assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
address, err := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
require.NoError(t, err, "Failed to create address")
direction := network.DirInbound
p.Add(new(enr.Record), id, address, direction)
// address, err := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
// require.NoError(t, err, "Failed to create address")
// direction := network.DirInbound
// p.Add(new(enr.Record), id, address, direction)
scorer := p.Scorers().BadResponsesScorer()
resBadResponses, err := scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 0, resBadResponses, "Unexpected bad responses")
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
// scorer := p.Scorers().BadResponsesScorer()
// resBadResponses, err := scorer.Count(id)
// require.NoError(t, err)
// assert.Equal(t, 0, resBadResponses, "Unexpected bad responses")
// assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 1, resBadResponses, "Unexpected bad responses")
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
// scorer.Increment(id)
// resBadResponses, err = scorer.Count(id)
// require.NoError(t, err)
// assert.Equal(t, 1, resBadResponses, "Unexpected bad responses")
// assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 2, resBadResponses, "Unexpected bad responses")
assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
// scorer.Increment(id)
// resBadResponses, err = scorer.Count(id)
// require.NoError(t, err)
// assert.Equal(t, 2, resBadResponses, "Unexpected bad responses")
// assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 3, resBadResponses, "Unexpected bad responses")
assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
}
// scorer.Increment(id)
// resBadResponses, err = scorer.Count(id)
// require.NoError(t, err)
// assert.Equal(t, 3, resBadResponses, "Unexpected bad responses")
// assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
// }
func TestAddMetaData(t *testing.T) {
maxBadResponses := 2
@@ -496,100 +496,102 @@ func TestPeerValidTime(t *testing.T) {
assert.Equal(t, numPeersConnected, len(p.Connected()), "Unexpected number of connected peers")
}
func TestPrune(t *testing.T) {
maxBadResponses := 2
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: maxBadResponses,
},
},
})
// TODO: Uncomment when out of devnet
// func TestPrune(t *testing.T) {
// maxBadResponses := 2
// p := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: maxBadResponses,
// },
// },
// })
for i := 0; i < p.MaxPeerLimit()+100; i++ {
if i%7 == 0 {
// Peer added as disconnected.
_ = addPeer(t, p, peers.Disconnected)
}
// Peer added to peer handler.
_ = addPeer(t, p, peers.Connected)
}
// for i := 0; i < p.MaxPeerLimit()+100; i++ {
// if i%7 == 0 {
// // Peer added as disconnected.
// _ = addPeer(t, p, peers.PeerDisconnected)
// }
// // Peer added to peer handler.
// _ = addPeer(t, p, peers.PeerConnected)
// }
disPeers := p.Disconnected()
firstPID := disPeers[0]
secondPID := disPeers[1]
thirdPID := disPeers[2]
// disPeers := p.Disconnected()
// firstPID := disPeers[0]
// secondPID := disPeers[1]
// thirdPID := disPeers[2]
scorer := p.Scorers().BadResponsesScorer()
// scorer := p.Scorers().BadResponsesScorer()
// Make first peer a bad peer
scorer.Increment(firstPID)
scorer.Increment(firstPID)
// // Make first peer a bad peer
// scorer.Increment(firstPID)
// scorer.Increment(firstPID)
// Add bad response for p2.
scorer.Increment(secondPID)
// // Add bad response for p2.
// scorer.Increment(secondPID)
// Prune peers
p.Prune()
// // Prune peers
// p.Prune()
// Bad peer is expected to still be kept in handler.
badRes, err := scorer.Count(firstPID)
assert.NoError(t, err, "error is supposed to be nil")
assert.Equal(t, 2, badRes, "Did not get expected amount")
// // Bad peer is expected to still be kept in handler.
// badRes, err := scorer.Count(firstPID)
// assert.NoError(t, err, "error is supposed to be nil")
// assert.Equal(t, 2, badRes, "Did not get expected amount")
// Not so good peer is pruned away so that we can reduce the
// total size of the handler.
_, err = scorer.Count(secondPID)
assert.ErrorContains(t, "peer unknown", err)
// // Not so good peer is pruned away so that we can reduce the
// // total size of the handler.
// _, err = scorer.Count(secondPID)
// assert.ErrorContains(t, "peer unknown", err)
// Last peer has been removed.
_, err = scorer.Count(thirdPID)
assert.ErrorContains(t, "peer unknown", err)
}
// // Last peer has been removed.
// _, err = scorer.Count(thirdPID)
// assert.ErrorContains(t, "peer unknown", err)
// }
func TestPeerIPTracker(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{
EnablePeerScorer: false,
})
defer resetCfg()
maxBadResponses := 2
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: maxBadResponses,
},
},
})
// TODO: Uncomment when out of devnet
// func TestPeerIPTracker(t *testing.T) {
// resetCfg := features.InitWithReset(&features.Flags{
// EnablePeerScorer: false,
// })
// defer resetCfg()
// maxBadResponses := 2
// p := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: maxBadResponses,
// },
// },
// })
badIP := "211.227.218.116"
var badPeers []peer.ID
for i := 0; i < peers.CollocationLimit+10; i++ {
port := strconv.Itoa(3000 + i)
addr, err := ma.NewMultiaddr("/ip4/" + badIP + "/tcp/" + port)
if err != nil {
t.Fatal(err)
}
badPeers = append(badPeers, createPeer(t, p, addr, network.DirUnknown, peerdata.ConnectionState(ethpb.ConnectionState_DISCONNECTED)))
}
for _, pr := range badPeers {
assert.NotNil(t, p.IsBad(pr), "peer with bad ip is not bad")
}
// badIP := "211.227.218.116"
// var badPeers []peer.ID
// for i := 0; i < peers.CollocationLimit+10; i++ {
// port := strconv.Itoa(3000 + i)
// addr, err := ma.NewMultiaddr("/ip4/" + badIP + "/tcp/" + port)
// if err != nil {
// t.Fatal(err)
// }
// badPeers = append(badPeers, createPeer(t, p, addr, network.DirUnknown, peerdata.PeerConnectionState(ethpb.ConnectionState_DISCONNECTED)))
// }
// for _, pr := range badPeers {
// assert.NotNil(t, p.IsBad(pr), "peer with bad ip is not bad")
// }
// Add in bad peers, so that our records are trimmed out
// from the peer store.
for i := 0; i < p.MaxPeerLimit()+100; i++ {
// Peer added to peer handler.
pid := addPeer(t, p, peers.Disconnected)
p.Scorers().BadResponsesScorer().Increment(pid)
}
p.Prune()
// // Add in bad peers, so that our records are trimmed out
// // from the peer store.
// for i := 0; i < p.MaxPeerLimit()+100; i++ {
// // Peer added to peer handler.
// pid := addPeer(t, p, peers.PeerDisconnected)
// p.Scorers().BadResponsesScorer().Increment(pid)
// }
// p.Prune()
for _, pr := range badPeers {
assert.NoError(t, p.IsBad(pr), "peer with good ip is regarded as bad")
}
}
// for _, pr := range badPeers {
// assert.NoError(t, p.IsBad(pr), "peer with good ip is regarded as bad")
// }
// }
func TestTrimmedOrderedPeers(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{

View File

@@ -26,10 +26,11 @@ var _ pubsub.SubscriptionFilter = (*Service)(nil)
// -> SyncContributionAndProof * 2 = 2
// -> 4 SyncCommitteeSubnets * 2 = 8
// -> BlsToExecutionChange * 2 = 2
// -> 6 BlobSidecar * 2 = 12
// -> 128 DataColumnSidecar * 2 = 256
// -------------------------------------
// TOTAL = 162
const pubsubSubscriptionRequestLimit = 200
// TOTAL = 406
// (Note: BlobSidecar is not included in this list since it is superseded by DataColumnSidecar)
const pubsubSubscriptionRequestLimit = 500
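A quick sanity check of the new total, assuming the items omitted from this excerpt are unchanged and that the previous total of 162 still counted the 6 BlobSidecar topics; the package and constant names below are illustrative only:

package p2pnotes // hypothetical package, for illustration only

const (
	previousTotal    = 162 // old TOTAL, which included 6 BlobSidecar topics * 2 digests = 12
	blobTopics       = 12  // superseded by DataColumnSidecar, so dropped from the count
	dataColumnTopics = 256 // 128 DataColumnSidecar subnets * 2 digests
	newTotal         = previousTotal - blobTopics + dataColumnTopics // = 406, rounded up to a limit of 500
)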
// CanSubscribe returns true if the topic is of interest and we could subscribe to it.
func (s *Service) CanSubscribe(topic string) bool {

View File

@@ -90,7 +90,7 @@ func TestService_CanSubscribe(t *testing.T) {
formatting := []interface{}{digest}
// Special case for attestation subnets which have a second formatting placeholder.
if topic == AttestationSubnetTopicFormat || topic == SyncCommitteeSubnetTopicFormat || topic == BlobSubnetTopicFormat {
if topic == AttestationSubnetTopicFormat || topic == SyncCommitteeSubnetTopicFormat || topic == BlobSubnetTopicFormat || topic == DataColumnSubnetTopicFormat {
formatting = append(formatting, 0 /* some subnet ID */)
}

View File

@@ -10,11 +10,16 @@ import (
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// SchemaVersionV1 specifies the schema version for our rpc protocol ID.
const SchemaVersionV1 = "/1"
const (
// SchemaVersionV1 specifies the schema version for our rpc protocol ID.
SchemaVersionV1 = "/1"
// SchemaVersionV2 specifies the next schema version for our rpc protocol ID.
const SchemaVersionV2 = "/2"
// SchemaVersionV2 specifies the next schema version for our rpc protocol ID.
SchemaVersionV2 = "/2"
// SchemaVersionV3 specifies the next schema version for our rpc protocol ID.
SchemaVersionV3 = "/3"
)
// Specifies the protocol prefix for all our Req/Resp topics.
const protocolPrefix = "/eth2/beacon_chain/req"
@@ -43,6 +48,12 @@ const BlobSidecarsByRangeName = "/blob_sidecars_by_range"
// BlobSidecarsByRootName is the name for the BlobSidecarsByRoot v1 message topic.
const BlobSidecarsByRootName = "/blob_sidecars_by_root"
// DataColumnSidecarsByRootName is the name for the DataColumnSidecarsByRoot v1 message topic.
const DataColumnSidecarsByRootName = "/data_column_sidecars_by_root"
// DataColumnSidecarsByRangeName is the name for the DataColumnSidecarsByRange v1 message topic.
const DataColumnSidecarsByRangeName = "/data_column_sidecars_by_range"
const (
// V1 RPC Topics
// RPCStatusTopicV1 defines the v1 topic for the status rpc method.
@@ -60,11 +71,17 @@ const (
// RPCBlobSidecarsByRangeTopicV1 is a topic for requesting blob sidecars
// in the slot range [start_slot, start_slot + count), leading up to the current head block as selected by fork choice.
// Protocol ID: /eth2/beacon_chain/req/blob_sidecars_by_range/1/ - New in deneb.
// /eth2/beacon_chain/req/blob_sidecars_by_range/1/ - New in deneb.
RPCBlobSidecarsByRangeTopicV1 = protocolPrefix + BlobSidecarsByRangeName + SchemaVersionV1
// RPCBlobSidecarsByRootTopicV1 is a topic for requesting blob sidecars by their block root. New in deneb.
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/
// RPCBlobSidecarsByRootTopicV1 is a topic for requesting blob sidecars by their block root.
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/ - New in deneb.
RPCBlobSidecarsByRootTopicV1 = protocolPrefix + BlobSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRootTopicV1 is a topic for requesting data column sidecars by their block root.
// /eth2/beacon_chain/req/data_column_sidecars_by_root/1 - New in Fulu.
RPCDataColumnSidecarsByRootTopicV1 = protocolPrefix + DataColumnSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRangeTopicV1 is a topic for requesting data column sidecars by their slot.
// /eth2/beacon_chain/req/data_column_sidecars_by_range/1 - New in Fulu.
RPCDataColumnSidecarsByRangeTopicV1 = protocolPrefix + DataColumnSidecarsByRangeName + SchemaVersionV1
// V2 RPC Topics
// RPCBlocksByRangeTopicV2 defines v2 the topic for the blocks by range rpc method.
@@ -73,6 +90,10 @@ const (
RPCBlocksByRootTopicV2 = protocolPrefix + BeaconBlocksByRootsMessageName + SchemaVersionV2
// RPCMetaDataTopicV2 defines the v2 topic for the metadata rpc method.
RPCMetaDataTopicV2 = protocolPrefix + MetadataMessageName + SchemaVersionV2
// V3 RPC Topics
// RPCMetaDataTopicV3 defines the v3 topic for the metadata rpc method.
RPCMetaDataTopicV3 = protocolPrefix + MetadataMessageName + SchemaVersionV3
)
// RPC errors for topic parsing.
@@ -97,10 +118,15 @@ var RPCTopicMappings = map[string]interface{}{
// RPC Metadata Message
RPCMetaDataTopicV1: new(interface{}),
RPCMetaDataTopicV2: new(interface{}),
RPCMetaDataTopicV3: new(interface{}),
// BlobSidecarsByRange v1 Message
RPCBlobSidecarsByRangeTopicV1: new(pb.BlobSidecarsByRangeRequest),
// BlobSidecarsByRoot v1 Message
RPCBlobSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
// DataColumnSidecarsByRange v1 Message
RPCDataColumnSidecarsByRangeTopicV1: new(pb.DataColumnSidecarsByRangeRequest),
// DataColumnSidecarsByRoot v1 Message
RPCDataColumnSidecarsByRootTopicV1: new(p2ptypes.DataColumnSidecarsByRootReq),
}
// Maps all registered protocol prefixes.
@@ -119,6 +145,8 @@ var messageMapping = map[string]bool{
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
DataColumnSidecarsByRootName: true,
DataColumnSidecarsByRangeName: true,
}
// Maps all the RPC messages which are to be updated in altair.
@@ -128,9 +156,15 @@ var altairMapping = map[string]bool{
MetadataMessageName: true,
}
// Maps all the RPC messages which are to be updated in fulu.
var fuluMapping = map[string]bool{
MetadataMessageName: true,
}
var versionMapping = map[string]bool{
SchemaVersionV1: true,
SchemaVersionV2: true,
SchemaVersionV3: true,
}
// OmitContextBytesV1 keeps track of which RPC methods do not write context bytes in their v1 incarnations.
@@ -258,13 +292,25 @@ func (r RPCTopic) Version() string {
// TopicFromMessage constructs the rpc topic from the provided message
// type and epoch.
func TopicFromMessage(msg string, epoch primitives.Epoch) (string, error) {
// Check if the topic is known.
if !messageMapping[msg] {
return "", errors.Errorf("%s: %s", invalidRPCMessageType, msg)
}
// Base version is version 1.
version := SchemaVersionV1
// Check if the message is to be updated in altair.
isAltair := epoch >= params.BeaconConfig().AltairForkEpoch
if isAltair && altairMapping[msg] {
version = SchemaVersionV2
}
// Check if the message is to be updated in fulu.
isFulu := epoch >= params.BeaconConfig().FuluForkEpoch
if isFulu && fuluMapping[msg] {
version = SchemaVersionV3
}
return protocolPrefix + msg + version, nil
}
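A minimal usage sketch of the new Fulu branch, assuming the exported names shown in this diff (TopicFromMessage, MetadataMessageName, RPCMetaDataTopicV3) and that the metadata message name resolves under the /eth2/beacon_chain/req prefix:

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
	"github.com/prysmaticlabs/prysm/v5/config/params"
)

func main() {
	// At or after the Fulu fork epoch, the metadata request is mapped to the v3 topic.
	topic, err := p2p.TopicFromMessage(p2p.MetadataMessageName, params.BeaconConfig().FuluForkEpoch)
	if err != nil {
		panic(err)
	}
	fmt.Println(topic)                            // expected: /eth2/beacon_chain/req/metadata/3
	fmt.Println(topic == p2p.RPCMetaDataTopicV3) // expected: true
}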

View File

@@ -42,7 +42,7 @@ func (s *Service) Send(ctx context.Context, message interface{}, baseTopic strin
return nil, err
}
// do not encode anything if we are sending a metadata request
if baseTopic != RPCMetaDataTopicV1 && baseTopic != RPCMetaDataTopicV2 {
if baseTopic != RPCMetaDataTopicV1 && baseTopic != RPCMetaDataTopicV2 && baseTopic != RPCMetaDataTopicV3 {
castedMsg, ok := message.(ssz.Marshaler)
if !ok {
return nil, errors.Errorf("%T does not support the ssz marshaller interface", message)

View File

@@ -363,6 +363,15 @@ func (s *Service) ENR() *enr.Record {
return s.dv5Listener.Self().Record()
}
// NodeID returns the local node's node ID
// for discovery.
func (s *Service) NodeID() enode.ID {
if s.dv5Listener == nil {
return [32]byte{}
}
return s.dv5Listener.Self().ID()
}
// DiscoveryAddresses represents our enr addresses as multiaddresses.
func (s *Service) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
if s.dv5Listener == nil {

View File

@@ -16,8 +16,6 @@ import (
"github.com/multiformats/go-multiaddr"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -358,48 +356,49 @@ func initializeStateWithForkDigest(_ context.Context, t *testing.T, gs startup.C
return fd
}
func TestService_connectWithPeer(t *testing.T) {
params.SetupTestConfigCleanup(t)
tests := []struct {
name string
peers *peers.Status
info peer.AddrInfo
wantErr string
}{
{
name: "bad peer",
peers: func() *peers.Status {
ps := peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})
for i := 0; i < 10; i++ {
ps.Scorers().BadResponsesScorer().Increment("bad")
}
return ps
}(),
info: peer.AddrInfo{ID: "bad"},
wantErr: "refused to connect to bad peer",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
h, _, _ := createHost(t, 34567)
defer func() {
if err := h.Close(); err != nil {
t.Fatal(err)
}
}()
ctx := context.Background()
s := &Service{
host: h,
peers: tt.peers,
}
err := s.connectWithPeer(ctx, tt.info)
if len(tt.wantErr) > 0 {
require.ErrorContains(t, tt.wantErr, err)
} else {
require.NoError(t, err)
}
})
}
}
// TODO: Uncomment when out of devnet
// func TestService_connectWithPeer(t *testing.T) {
// params.SetupTestConfigCleanup(t)
// tests := []struct {
// name string
// peers *peers.Status
// info peer.AddrInfo
// wantErr string
// }{
// {
// name: "bad peer",
// peers: func() *peers.Status {
// ps := peers.NewStatus(context.Background(), &peers.StatusConfig{
// ScorerParams: &scorers.Config{},
// })
// for i := 0; i < 10; i++ {
// ps.Scorers().BadResponsesScorer().Increment("bad")
// }
// return ps
// }(),
// info: peer.AddrInfo{ID: "bad"},
// wantErr: "refused to connect to bad peer",
// },
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// h, _, _ := createHost(t, 34567)
// defer func() {
// if err := h.Close(); err != nil {
// t.Fatal(err)
// }
// }()
// ctx := context.Background()
// s := &Service{
// host: h,
// peers: tt.peers,
// }
// err := s.connectWithPeer(ctx, tt.info)
// if len(tt.wantErr) > 0 {
// require.ErrorContains(t, tt.wantErr, err)
// } else {
// require.NoError(t, err)
// }
// })
// }
// }

View File

@@ -14,6 +14,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -29,8 +30,9 @@ var (
attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
attSubnetEnrKey = params.BeaconNetworkConfig().AttSubnetKey
syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
attSubnetEnrKey = params.BeaconNetworkConfig().AttSubnetKey
syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
custodyGroupCountEnrKey = params.BeaconNetworkConfig().CustodyGroupCountKey
)
// The value used with the subnet, in order
@@ -47,13 +49,22 @@ const syncLockerVal = 100
// chosen more than sync and attestation subnet combined.
const blobSubnetLockerVal = 110
// nodeFilter return a function that filters nodes based on the subnet topic and subnet index.
// The value used with the data column sidecar subnet, in order
// to create an appropriate key to retrieve
// the relevant lock. This is used to differentiate
// data column subnets from others. This is deliberately
// chosen to be greater than the sync, attestation and blob subnet (6) values combined.
const dataColumnSubnetVal = 150
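As a rough illustration (hypothetical helper, not part of the source), the offsets keep each subnet family's lock keys in disjoint ranges: attestation subnets use the raw index (0-63), sync subnets sit at index+100, blob subnets at index+110, and data column subnets at index+150, so 128 data column subnets map to keys 150-277 without colliding with the others.

package p2pnotes // hypothetical package, for illustration only

const (
	syncLockerVal       = 100
	blobSubnetLockerVal = 110
	dataColumnSubnetVal = 150
)

// lockerKey mirrors the keying scheme described above.
func lockerKey(kind string, subnet uint64) uint64 {
	switch kind {
	case "sync":
		return subnet + syncLockerVal // 100..103 for 4 sync subnets
	case "blob":
		return subnet + blobSubnetLockerVal // 110..115 for 6 blob subnets
	case "data_column":
		return subnet + dataColumnSubnetVal // 150..277 for 128 data column subnets
	default:
		return subnet // attestation subnets: 0..63
	}
}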
// nodeFilter returns a function that filters nodes based on the subnet topic and subnet index.
func (s *Service) nodeFilter(topic string, index uint64) (func(node *enode.Node) bool, error) {
switch {
case strings.Contains(topic, GossipAttestationMessage):
return s.filterPeerForAttSubnet(index), nil
case strings.Contains(topic, GossipSyncCommitteeMessage):
return s.filterPeerForSyncSubnet(index), nil
case strings.Contains(topic, GossipDataColumnSidecarMessage):
return s.filterPeerForDataColumnsSubnet(index), nil
default:
return nil, errors.Errorf("no subnet exists for provided topic: %s", topic)
}
@@ -266,6 +277,22 @@ func (s *Service) filterPeerForSyncSubnet(index uint64) func(node *enode.Node) b
}
}
// returns a method that filters peers specifically for a particular data column subnet.
func (s *Service) filterPeerForDataColumnsSubnet(index uint64) func(node *enode.Node) bool {
return func(node *enode.Node) bool {
if !s.filterPeer(node) {
return false
}
subnets, err := dataColumnSubnets(node.ID(), node.Record())
if err != nil {
return false
}
return subnets[index]
}
}
// lower threshold to broadcast object compared to searching
// for a subnet. So that even in the event of poor peer
// connectivity, we can still broadcast an attestation.
@@ -311,6 +338,35 @@ func (s *Service) updateSubnetRecordWithMetadataV2(bitVAtt bitfield.Bitvector64,
})
}
// updateSubnetRecordWithMetadataV3 updates:
// - the attestation subnets tracked,
// - the sync subnets tracked, and
// - the custody group count
// in both the node's record and in the node's metadata.
func (s *Service) updateSubnetRecordWithMetadataV3(
bitVAtt bitfield.Bitvector64,
bitVSync bitfield.Bitvector4,
custodyGroupCount uint64,
) {
attSubnetsEntry := enr.WithEntry(attSubnetEnrKey, &bitVAtt)
syncSubnetsEntry := enr.WithEntry(syncCommsSubnetEnrKey, &bitVSync)
custodyGroupCountEntry := enr.WithEntry(custodyGroupCountEnrKey, custodyGroupCount)
localNode := s.dv5Listener.LocalNode()
localNode.Set(attSubnetsEntry)
localNode.Set(syncSubnetsEntry)
localNode.Set(custodyGroupCountEntry)
newSeqNumber := s.metaData.SequenceNumber() + 1
s.metaData = wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: newSeqNumber,
Attnets: bitVAtt,
Syncnets: bitVSync,
CustodyGroupCount: custodyGroupCount,
})
}
func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
_, ok, expTime := cache.SubnetIDs.GetPersistentSubnets()
if ok && expTime.After(time.Now()) {
@@ -325,6 +381,42 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
return nil
}
// initializePersistentColumnSubnets initializes the persistent column subnets.
func initializePersistentColumnSubnets(id enode.ID) error {
// Check if the column subnets are already cached.
_, ok, expTime := cache.ColumnSubnetIDs.GetColumnSubnets()
if ok && expTime.After(time.Now()) {
return nil
}
// Compute the number of custody groups we should sample.
custodyGroupSamplingSize := peerdas.CustodyGroupSamplingSize()
// Compute the custody groups we should sample.
custodyGroups, err := peerdas.CustodyGroups(id, custodyGroupSamplingSize)
if err != nil {
return errors.Wrap(err, "custody groups")
}
// Compute the column subnets for the custody groups.
custodyColumns, err := peerdas.CustodyColumns(custodyGroups)
if err != nil {
return errors.Wrap(err, "custody columns")
}
// Compute subnets from the custody columns.
subnets := make([]uint64, 0, len(custodyColumns))
for column := range custodyColumns {
subnet := peerdas.ComputeSubnetForDataColumnSidecar(column)
subnets = append(subnets, subnet)
}
// Add the subnets to the cache.
cache.ColumnSubnetIDs.AddColumnSubnets(subnets)
return nil
}
// Spec pseudocode definition:
//
// def compute_subscribed_subnets(node_id: NodeID, epoch: Epoch) -> Sequence[SubnetID]:
@@ -448,6 +540,31 @@ func syncSubnets(record *enr.Record) ([]uint64, error) {
return committeeIdxs, nil
}
// Retrieve the data column subnets from a node's ENR and node ID.
// TODO: Add tests
func dataColumnSubnets(nodeID enode.ID, record *enr.Record) (map[uint64]bool, error) {
// Retrieve the custody count from the ENR.
custodyGroupCount, err := peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
return nil, errors.Wrap(err, "custody group count from record")
}
// Retrieve the custody groups from the remote peer.
custodyGroups, err := peerdas.CustodyGroups(nodeID, custodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "custody groups")
}
// Retrieve the custody columns from the groups.
custodyColumns, err := peerdas.CustodyColumns(custodyGroups)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
// Get the custody column subnets from the columns.
return peerdas.DataColumnSubnets(custodyColumns), nil
}
// Parses the attestation subnets ENR entry in a node and extracts its value
// as a bitvector for further manipulation.
func attBitvector(record *enr.Record) (bitfield.Bitvector64, error) {
@@ -477,6 +594,7 @@ func syncBitvector(record *enr.Record) (bitfield.Bitvector4, error) {
// between both the attestation, sync and blob subnets.
// Sync subnets are stored by (subnet+syncLockerVal).
// Blob subnets are stored by (subnet+blobSubnetLockerVal).
// Data column subnets are stored by (subnet+dataColumnSubnetVal).
// This is to prevent conflicts while allowing subnets
// to use a single locker.
func (s *Service) subnetLocker(i uint64) *sync.RWMutex {

View File

@@ -17,9 +17,12 @@ go_library(
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//testing/require:go_default_library",

View File

@@ -3,6 +3,7 @@ package testing
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/control"
@@ -12,6 +13,7 @@ import (
"github.com/multiformats/go-multiaddr"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/metadata"
"google.golang.org/protobuf/proto"
@@ -55,6 +57,11 @@ func (*FakeP2P) ENR() *enr.Record {
return new(enr.Record)
}
// NodeID returns the node id of the local peer.
func (*FakeP2P) NodeID() enode.ID {
return [32]byte{}
}
// DiscoveryAddresses -- fake
func (*FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
@@ -148,6 +155,11 @@ func (*FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar)
return nil
}
// BroadcastDataColumn -- fake.
func (*FakeP2P) BroadcastDataColumn(_ context.Context, _ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
return nil
}
// InterceptPeerDial -- fake.
func (*FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true
@@ -172,3 +184,15 @@ func (*FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiad
func (*FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}
func (*FakeP2P) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0
}
func (*FakeP2P) AdmissibleCustodyGroupsPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}
func (*FakeP2P) AdmissibleCustodySamplingPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}

View File

@@ -5,6 +5,7 @@ import (
"sync"
"sync/atomic"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"google.golang.org/protobuf/proto"
)
@@ -48,6 +49,12 @@ func (m *MockBroadcaster) BroadcastBlob(context.Context, uint64, *ethpb.BlobSide
return nil
}
// BroadcastDataColumn broadcasts a data column for mock.
func (m *MockBroadcaster) BroadcastDataColumn(context.Context, [fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
m.BroadcastCalled.Store(true)
return nil
}
// NumMessages returns the number of messages broadcasted.
func (m *MockBroadcaster) NumMessages() int {
m.msgLock.Lock()

View File

@@ -4,6 +4,7 @@ import (
"context"
"errors"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
@@ -39,6 +40,11 @@ func (m *MockPeerManager) ENR() *enr.Record {
return m.Enr
}
// NodeID .
func (m MockPeerManager) NodeID() enode.ID {
return [32]byte{}
}
// DiscoveryAddresses .
func (m *MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
if m.FailDiscoveryAddr {

View File

@@ -24,9 +24,12 @@ import (
"github.com/libp2p/go-libp2p/p2p/transport/tcp"
"github.com/multiformats/go-multiaddr"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/metadata"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -207,6 +210,12 @@ func (p *TestP2P) BroadcastBlob(context.Context, uint64, *ethpb.BlobSidecar) err
return nil
}
// BroadcastDataColumn broadcasts a data column for mock.
func (p *TestP2P) BroadcastDataColumn(context.Context, [fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
p.BroadcastCalled.Store(true)
return nil
}
// SetStreamHandler for RPC.
func (p *TestP2P) SetStreamHandler(topic string, handler network.StreamHandler) {
p.BHost.SetStreamHandler(protocol.ID(topic), handler)
@@ -438,3 +447,30 @@ func (*TestP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiad
func (*TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}
func (s *TestP2P) CustodyGroupCountFromPeer(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of groups.
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Retrieve the ENR of the peer.
record, err := s.peers.ENR(pid)
if err != nil {
return custodyRequirement
}
// Retrieve the custody group count from the ENR.
custodyGroupCount, err := peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
return custodyRequirement
}
return custodyGroupCount
}
func (*TestP2P) AdmissibleCustodyGroupsPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}
func (*TestP2P) AdmissibleCustodySamplingPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}


@@ -30,6 +30,9 @@ const (
GossipBlsToExecutionChangeMessage = "bls_to_execution_change"
// GossipBlobSidecarMessage is the name for the blob sidecar message type.
GossipBlobSidecarMessage = "blob_sidecar"
// GossipDataColumnSidecarMessage is the name for the data column sidecar message type.
GossipDataColumnSidecarMessage = "data_column_sidecar"
// Topic Formats
//
// AttestationSubnetTopicFormat is the topic format for the attestation subnet.
@@ -52,4 +55,6 @@ const (
BlsToExecutionChangeSubnetTopicFormat = GossipProtocolAndDigest + GossipBlsToExecutionChangeMessage
// BlobSubnetTopicFormat is the topic format for the blob subnet.
BlobSubnetTopicFormat = GossipProtocolAndDigest + GossipBlobSidecarMessage + "_%d"
// DataColumnSubnetTopicFormat is the topic format for the data column subnet.
DataColumnSubnetTopicFormat = GossipProtocolAndDigest + GossipDataColumnSidecarMessage + "_%d"
)
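
As a rough illustration of how such a topic string might be composed, here is a minimal, self-contained sketch. It assumes GossipProtocolAndDigest expands to an "/eth2/%x/"-style prefix; the stand-in constants below are illustrative only and are not the real package constants.

package main

import "fmt"

// Assumed stand-ins for the package constants; illustration only.
const (
	gossipProtocolAndDigest        = "/eth2/%x/"
	gossipDataColumnSidecarMessage = "data_column_sidecar"
	dataColumnSubnetTopicFormat    = gossipProtocolAndDigest + gossipDataColumnSidecarMessage + "_%d"
)

func main() {
	forkDigest := [4]byte{0xde, 0xad, 0xbe, 0xef} // hypothetical fork digest
	subnet := uint64(7)
	// Prints "/eth2/deadbeef/data_column_sidecar_7".
	fmt.Printf(dataColumnSubnetTopicFormat+"\n", forkDigest, subnet)
}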


@@ -98,7 +98,7 @@ func InitializeDataMaps() {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
return wrapper.WrappedMetadataV2(&ethpb.MetaDataV2{}), nil
},
}
@@ -120,7 +120,7 @@ func InitializeDataMaps() {
return &ethpb.Attestation{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (ethpb.Att, error) {
return &ethpb.SingleAttestation{}, nil
return &ethpb.AttestationElectra{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ethpb.Att, error) {
return &ethpb.AttestationElectra{}, nil


@@ -9,10 +9,15 @@ var (
ErrInvalidSequenceNum = errors.New("invalid sequence number provided")
ErrGeneric = errors.New("internal service error")
ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob slot < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob epoch < minimum_request_epoch")
ErrDataColumnLTMinRequest = errors.New("data column epoch < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrMaxDataColumnReqExceeded = errors.New("requested more than MAX_REQUEST_DATA_COLUMN_SIDECARS")
ErrResourceUnavailable = errors.New("resource requested unavailable")
ErrInvalidColumnIndex = errors.New("invalid column index requested")
)


@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/config/params"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -183,31 +184,118 @@ func (b *BlobSidecarsByRootReq) UnmarshalSSZ(buf []byte) error {
return nil
}
var _ sort.Interface = BlobSidecarsByRootReq{}
var _ sort.Interface = (*BlobSidecarsByRootReq)(nil)
// Less reports whether the element with index i must sort before the element with index j.
// BlobIdentifier will be sorted in lexicographic order by root, with Blob Index as tiebreaker for a given root.
func (s BlobSidecarsByRootReq) Less(i, j int) bool {
rootCmp := bytes.Compare(s[i].BlockRoot, s[j].BlockRoot)
func (s *BlobSidecarsByRootReq) Less(i, j int) bool {
rootCmp := bytes.Compare((*s)[i].BlockRoot, (*s)[j].BlockRoot)
if rootCmp != 0 {
// They aren't equal; return true if i < j, false if i > j.
return rootCmp < 0
}
// They are equal; blob index is the tie breaker.
return s[i].Index < s[j].Index
return (*s)[i].Index < (*s)[j].Index
}
// Swap swaps the elements with indexes i and j.
func (s BlobSidecarsByRootReq) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
func (s *BlobSidecarsByRootReq) Swap(i, j int) {
(*s)[i], (*s)[j] = (*s)[j], (*s)[i]
}
// Len is the number of elements in the collection.
func (s BlobSidecarsByRootReq) Len() int {
return len(s)
func (s *BlobSidecarsByRootReq) Len() int {
return len(*s)
}
// ===================================
// DataColumnSidecarsByRootReq section
// ===================================
var _ ssz.Marshaler = (*DataColumnSidecarsByRootReq)(nil)
var _ ssz.Unmarshaler = (*DataColumnSidecarsByRootReq)(nil)
var _ sort.Interface = (*DataColumnSidecarsByRootReq)(nil)
// DataColumnSidecarsByRootReq is used to specify a list of data column targets (root+index) in a DataColumnSidecarsByRoot RPC request.
type DataColumnSidecarsByRootReq []*eth.DataColumnIdentifier
// DataColumnIdentifier is a fixed size value, so we can compute its fixed size at start time (see init below)
var dataColumnIdSize int
// UnmarshalSSZ implements ssz.Unmarshaler. It unmarshals the provided bytes buffer into the DataColumnSidecarsByRootReq value.
func (d *DataColumnSidecarsByRootReq) UnmarshalSSZ(buf []byte) error {
bufLen := len(buf)
maxLen := int(params.BeaconConfig().MaxRequestDataColumnSidecars) * dataColumnIdSize
if bufLen > maxLen {
return errors.Errorf("expected buffer with length of up to %d but received length %d", maxLen, bufLen)
}
if bufLen%dataColumnIdSize != 0 {
return errors.Wrapf(ssz.ErrIncorrectByteSize, "size=%d", bufLen)
}
count := bufLen / dataColumnIdSize
*d = make([]*eth.DataColumnIdentifier, count)
for i := 0; i < count; i++ {
id := &eth.DataColumnIdentifier{}
err := id.UnmarshalSSZ(buf[i*dataColumnIdSize : (i+1)*dataColumnIdSize])
if err != nil {
return err
}
(*d)[i] = id
}
return nil
}
// MarshalSSZ implements ssz.Marshaler. It serializes the DataColumnSidecarsByRootReq value to a byte slice.
func (d *DataColumnSidecarsByRootReq) MarshalSSZ() ([]byte, error) {
buf := make([]byte, d.SizeSSZ())
for i, id := range *d {
bytes, err := id.MarshalSSZ()
if err != nil {
return nil, err
}
copy(buf[i*dataColumnIdSize:(i+1)*dataColumnIdSize], bytes)
}
return buf, nil
}
// MarshalSSZTo implements ssz.Marshaler. It appends the serialized DataColumnSidecarsByRootReq value to the provided byte slice.
func (d *DataColumnSidecarsByRootReq) MarshalSSZTo(dst []byte) ([]byte, error) {
mobj, err := d.MarshalSSZ()
if err != nil {
return nil, err
}
return append(dst, mobj...), nil
}
// SizeSSZ implements ssz.Marshaler. It returns the size of the serialized representation.
func (d *DataColumnSidecarsByRootReq) SizeSSZ() int {
return len(*d) * dataColumnIdSize
}
// Len implements sort.Interface. It returns the number of elements in the collection.
func (d *DataColumnSidecarsByRootReq) Len() int {
return len(*d)
}
// Less implements sort.Interface. It reports whether the element with index i must sort before the element with index j.
func (d *DataColumnSidecarsByRootReq) Less(i, j int) bool {
rootCmp := bytes.Compare((*d)[i].BlockRoot, (*d)[j].BlockRoot)
if rootCmp != 0 {
return rootCmp < 0
}
return (*d)[i].ColumnIndex < (*d)[j].ColumnIndex
}
// Swap implements sort.Interface. It swaps the elements with indexes i and j.
func (d *DataColumnSidecarsByRootReq) Swap(i, j int) {
(*d)[i], (*d)[j] = (*d)[j], (*d)[i]
}
func init() {
sizer := &eth.BlobIdentifier{}
blobIdSize = sizer.SizeSSZ()
blobSizer := &eth.BlobIdentifier{}
blobIdSize = blobSizer.SizeSSZ()
dataColumnSizer := &eth.DataColumnIdentifier{}
dataColumnIdSize = dataColumnSizer.SizeSSZ()
}
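
For a sense of how the new request type behaves, here is a minimal, self-contained sketch that mirrors the sort order (root first, column index as tiebreaker) and the fixed-size wire layout. It assumes each identifier serializes to a 32-byte root followed by an 8-byte little-endian column index; the real type uses the generated DataColumnIdentifier SSZ code, so this is an illustration, not the implementation.

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"sort"
)

// dataColumnID mirrors the shape of eth.DataColumnIdentifier for illustration only.
type dataColumnID struct {
	BlockRoot   [32]byte
	ColumnIndex uint64
}

// byRootThenIndex sorts identifiers by root, then by column index.
type byRootThenIndex []dataColumnID

func (s byRootThenIndex) Len() int      { return len(s) }
func (s byRootThenIndex) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s byRootThenIndex) Less(i, j int) bool {
	if c := bytes.Compare(s[i].BlockRoot[:], s[j].BlockRoot[:]); c != 0 {
		return c < 0
	}
	return s[i].ColumnIndex < s[j].ColumnIndex
}

func main() {
	req := byRootThenIndex{
		{BlockRoot: [32]byte{2}, ColumnIndex: 1},
		{BlockRoot: [32]byte{1}, ColumnIndex: 5},
		{BlockRoot: [32]byte{2}, ColumnIndex: 0},
	}
	sort.Sort(req)
	// Fixed-size encoding: 32 root bytes + 8 index bytes per identifier (40 bytes each, assumed).
	buf := make([]byte, 0, len(req)*40)
	for _, id := range req {
		buf = append(buf, id.BlockRoot[:]...)
		buf = binary.LittleEndian.AppendUint64(buf, id.ColumnIndex)
	}
	fmt.Println(len(buf)) // 120
}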


@@ -5,6 +5,7 @@ import (
"testing"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -194,3 +195,136 @@ func hexDecodeOrDie(t *testing.T, str string) []byte {
require.NoError(t, err)
return decoded
}
// =====================================
// DataColumnSidecarsByRootReq section
// =====================================
func generateDataColumnIdentifiers(n int) []*eth.DataColumnIdentifier {
r := make([]*eth.DataColumnIdentifier, n)
for i := 0; i < n; i++ {
r[i] = &eth.DataColumnIdentifier{
BlockRoot: bytesutil.PadTo([]byte{byte(i)}, 32),
ColumnIndex: uint64(i),
}
}
return r
}
func TestDataColumnSidecarsByRootReq_MarshalUnmarshal(t *testing.T) {
cases := []struct {
name string
ids []*eth.DataColumnIdentifier
marshalErr error
unmarshalErr string
unmarshalMod func([]byte) []byte
}{
{
name: "empty list",
},
{
name: "single item list",
ids: generateDataColumnIdentifiers(1),
},
{
name: "10 item list",
ids: generateDataColumnIdentifiers(10),
},
{
name: "wonky unmarshal size",
ids: generateDataColumnIdentifiers(10),
unmarshalMod: func(in []byte) []byte {
in = append(in, byte(0))
return in
},
unmarshalErr: ssz.ErrIncorrectByteSize.Error(),
},
{
name: "size too big",
ids: generateDataColumnIdentifiers(1),
unmarshalMod: func(in []byte) []byte {
maxLen := params.BeaconConfig().MaxRequestDataColumnSidecars * uint64(dataColumnIdSize)
add := make([]byte, maxLen)
in = append(in, add...)
return in
},
unmarshalErr: "expected buffer with length of up to",
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
req := DataColumnSidecarsByRootReq(c.ids)
bytes, err := req.MarshalSSZ()
if c.marshalErr != nil {
require.ErrorIs(t, err, c.marshalErr)
return
}
require.NoError(t, err)
if c.unmarshalMod != nil {
bytes = c.unmarshalMod(bytes)
}
got := &DataColumnSidecarsByRootReq{}
err = got.UnmarshalSSZ(bytes)
if c.unmarshalErr != "" {
require.ErrorContains(t, c.unmarshalErr, err)
return
}
require.NoError(t, err)
for i, id := range *got {
require.DeepEqual(t, c.ids[i], id)
}
})
}
// Test MarshalSSZTo
req := DataColumnSidecarsByRootReq(generateDataColumnIdentifiers(10))
buf := make([]byte, 0)
buf, err := req.MarshalSSZTo(buf)
require.NoError(t, err)
require.Equal(t, len(buf), int(req.SizeSSZ()))
var unmarshalled DataColumnSidecarsByRootReq
err = unmarshalled.UnmarshalSSZ(buf)
require.NoError(t, err)
require.DeepEqual(t, req, unmarshalled)
}
func TestDataColumnSidecarsByRootReq_Sort(t *testing.T) {
ids := []*eth.DataColumnIdentifier{
{
BlockRoot: bytesutil.PadTo([]byte{3}, 32),
ColumnIndex: 0,
},
{
BlockRoot: bytesutil.PadTo([]byte{2}, 32),
ColumnIndex: 2,
},
{
BlockRoot: bytesutil.PadTo([]byte{2}, 32),
ColumnIndex: 1,
},
{
BlockRoot: bytesutil.PadTo([]byte{1}, 32),
ColumnIndex: 2,
},
{
BlockRoot: bytesutil.PadTo([]byte{0}, 32),
ColumnIndex: 3,
},
}
req := DataColumnSidecarsByRootReq(ids)
require.Equal(t, true, req.Less(4, 3))
require.Equal(t, true, req.Less(3, 2))
require.Equal(t, true, req.Less(2, 1))
require.Equal(t, true, req.Less(1, 0))
require.Equal(t, 5, req.Len())
ids = []*eth.DataColumnIdentifier{
{
BlockRoot: bytesutil.PadTo([]byte{0}, 32),
ColumnIndex: 3,
},
}
req = DataColumnSidecarsByRootReq(ids)
require.Equal(t, 1, req.Len())
}


@@ -78,7 +78,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
}
// If the StaticPeerID flag is not set and if peerDAS is not enabled, return the private key.
if !(cfg.StaticPeerID || params.PeerDASEnabled()) {
if !(cfg.StaticPeerID || params.FuluEnabled()) {
return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
}
@@ -200,6 +200,11 @@ func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
return [32]byte{}, errors.Wrap(err, "parse public key")
}
newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
newPubkey := &ecdsa.PublicKey{
Curve: gCrypto.S256(),
X: pubKeyObjSecp256k1.X(),
Y: pubKeyObjSecp256k1.Y(),
}
return enode.PubkeyToIDV4(newPubkey), nil
}


@@ -532,7 +532,6 @@ func (s *Service) beaconEndpoints(
FinalizationFetcher: s.cfg.FinalizationFetcher,
ForkchoiceFetcher: s.cfg.ForkchoiceFetcher,
CoreService: coreService,
AttestationStateFetcher: s.cfg.AttestationReceiver,
}
const namespace = "beacon"


@@ -285,11 +285,8 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
}
}
func (s *Server) handleAttestationsElectra(
ctx context.Context,
data json.RawMessage,
) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
var sourceAttestations []*structs.SingleAttestation
func (s *Server) handleAttestationsElectra(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
var sourceAttestations []*structs.AttestationElectra
if err = json.Unmarshal(data, &sourceAttestations); err != nil {
return nil, nil, errors.Wrap(err, "failed to unmarshal attestation")
@@ -299,7 +296,7 @@ func (s *Server) handleAttestationsElectra(
return nil, nil, errors.New("no data submitted")
}
var validAttestations []*eth.SingleAttestation
var validAttestations []*eth.AttestationElectra
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
@@ -319,23 +316,18 @@ func (s *Server) handleAttestationsElectra(
validAttestations = append(validAttestations, att)
}
for i, singleAtt := range validAttestations {
targetState, err := s.AttestationStateFetcher.AttestationTargetState(ctx, singleAtt.Data.Target)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get target state for attestation")
for i, att := range validAttestations {
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note that we can't send aggregated attestations because we don't have the selection proof.
if !att.IsAggregated() {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
}
committee, err := corehelpers.BeaconCommitteeFromState(ctx, targetState, singleAtt.Data.Slot, singleAtt.CommitteeId)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get committee for attestation")
}
att := singleAtt.ToAttestationElectra(committee)
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
@@ -343,8 +335,12 @@ func (s *Server) handleAttestationsElectra(
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.GetCommitteeIndex(), att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, singleAtt); err != nil {
committeeIndex, err := att.GetCommitteeIndex()
if err != nil {
return nil, nil, errors.Wrap(err, "failed to retrieve attestation committee index")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), committeeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
@@ -354,9 +350,13 @@ func (s *Server) handleAttestationsElectra(
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("could not save attestation")
}
} else if att.IsAggregated() {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save attestation")
log.WithError(err).Error("could not save unaggregated attestation")
}
}
}


@@ -498,17 +498,13 @@ func TestSubmitAttestations(t *testing.T) {
c.SlotsPerEpoch = 1
params.OverrideBeaconConfig(c)
_, keys, err := util.DeterministicDepositsAndKeys(2)
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validators := []*ethpbv1alpha1.Validator{
{
PublicKey: keys[0].PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
{
PublicKey: keys[1].PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
bs, err := util.NewBeaconState(func(state *ethpbv1alpha1.BeaconState) error {
state.Validators = validators
@@ -525,10 +521,9 @@ func TestSubmitAttestations(t *testing.T) {
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
HeadFetcher: chainService,
ChainInfoFetcher: chainService,
OperationNotifier: &blockchainmock.MockOperationNotifier{},
AttestationStateFetcher: chainService,
HeadFetcher: chainService,
ChainInfoFetcher: chainService,
OperationNotifier: &blockchainmock.MockOperationNotifier{},
}
t.Run("V1", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
@@ -737,7 +732,7 @@ func TestSubmitAttestations(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, primitives.ValidatorIndex(1), broadcaster.BroadcastAttestations[0].GetAttestingIndex())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
@@ -2349,8 +2344,8 @@ var (
]`
singleAttElectra = `[
{
"committee_index": "0",
"attester_index": "1",
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
@@ -2369,8 +2364,8 @@ var (
]`
multipleAttsElectra = `[
{
"committee_index": "0",
"attester_index": "0",
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
@@ -2387,8 +2382,8 @@ var (
}
},
{
"committee_index": "0",
"attester_index": "1",
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
@@ -2408,8 +2403,8 @@ var (
// signature is invalid
invalidAttElectra = `[
{
"committee_index": "0",
"attester_index": "0",
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"data": {
"slot": "0",


@@ -50,5 +50,4 @@ type Server struct {
BLSChangesPool blstoexec.PoolManager
ForkchoiceFetcher blockchain.ForkchoiceFetcher
CoreService *core.Service
AttestationStateFetcher blockchain.AttestationStateFetcher
}


@@ -89,11 +89,11 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
}
// parseIndices filters out invalid and duplicate blob indices
func parseIndices(url *url.URL, s primitives.Slot) ([]uint64, error) {
func parseIndices(url *url.URL, s primitives.Slot) (map[uint64]bool, error) {
rawIndices := url.Query()["indices"]
indices := make([]uint64, 0, params.BeaconConfig().MaxBlobsPerBlock(s))
indices := make(map[uint64]bool, params.BeaconConfig().MaxBlobsPerBlock(s))
invalidIndices := make([]string, 0)
loop:
for _, raw := range rawIndices {
ix, err := strconv.ParseUint(raw, 10, 64)
if err != nil {
@@ -104,12 +104,8 @@ loop:
invalidIndices = append(invalidIndices, raw)
continue
}
for i := range indices {
if ix == indices[i] {
continue loop
}
}
indices = append(indices, ix)
indices[ix] = true
}
if len(invalidIndices) > 0 {


@@ -375,13 +375,13 @@ func Test_parseIndices(t *testing.T) {
tests := []struct {
name string
query string
want []uint64
want map[uint64]bool
wantErr string
}{
{
name: "happy path with duplicate indices within bound and other query parameters ignored",
query: "indices=1&indices=2&indices=1&indices=3&bar=bar",
want: []uint64{1, 2, 3},
want: map[uint64]bool{1: true, 2: true, 3: true},
},
{
name: "out of bounds indices throws error",


@@ -191,7 +191,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
assert.Equal(t, 161, len(data))
assert.Equal(t, 165, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -538,6 +538,16 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "9", v)
case "MAX_REQUEST_BLOB_SIDECARS_ELECTRA":
assert.Equal(t, "1152", v)
case "MAX_REQUEST_BLOB_SIDECARS_FULU":
assert.Equal(t, "1536", v)
case "NUMBER_OF_CUSTODY_GROUPS":
assert.Equal(t, "128", v)
case "CUSTODY_REQUIREMENT":
assert.Equal(t, "4", v)
case "SAMPLES_PER_SLOT":
assert.Equal(t, "8", v)
case "MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS":
assert.Equal(t, "4096", v)
default:
t.Errorf("Incorrect key: %s", k)
}


@@ -198,20 +198,35 @@ func matchingAtts(atts []ethpbalpha.Att, slot primitives.Slot, attDataRoot []byt
continue
}
root, err := att.GetData().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get attestation data root")
}
if !bytes.Equal(root[:], attDataRoot) {
continue
}
// We ignore the committee index from the request before Electra.
// This is because before Electra the committee index is part of the attestation data,
// meaning that comparing the data root is sufficient.
// Post-Electra the committee index in the data root is always 0, so we need to
// compare the committee index separately.
if (!postElectra && att.Version() < version.Electra) || (postElectra && att.Version() >= version.Electra && att.GetCommitteeIndex() == index) {
if postElectra {
if att.Version() >= version.Electra {
ci, err := att.GetCommitteeIndex()
if err != nil {
return nil, err
}
if ci != index {
continue
}
} else {
continue
}
} else {
// If postElectra is false and att.Version >= version.Electra, ignore the attestation.
if att.Version() >= version.Electra {
continue
}
}
root, err := att.GetData().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get attestation data root")
}
if bytes.Equal(root[:], attDataRoot) {
result = append(result, att)
}
}


@@ -10,11 +10,13 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -22,6 +24,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -37,7 +40,9 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
@@ -46,14 +51,20 @@ go_test(
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/state/stategen/mock:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)


@@ -3,21 +3,25 @@ package lookup
import (
"context"
"fmt"
"math"
"strconv"
"strings"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
@@ -42,7 +46,7 @@ func (e BlockIdParseError) Error() string {
// Blocker is responsible for retrieving blocks.
type Blocker interface {
Block(ctx context.Context, id []byte) (interfaces.ReadOnlySignedBeaconBlock, error)
Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError)
Blobs(ctx context.Context, id string, indices map[uint64]bool) ([]*blocks.VerifiedROBlob, *core.RpcError)
}
// BeaconDbBlocker is an implementation of Blocker. It retrieves blocks from the beacon chain database.
@@ -132,6 +136,317 @@ func (p *BeaconDbBlocker) Block(ctx context.Context, id []byte) (interfaces.Read
return blk, nil
}
// blobsFromStoredBlobs retrieves blobs corresponding to `indices` and `root` from the store.
// This function expects blobs to be stored directly (i.e. no data columns).
func (p *BeaconDbBlocker) blobsFromStoredBlobs(
indices map[uint64]bool,
root []byte,
slot primitives.Slot,
commitments [][]byte,
) ([]*blocks.VerifiedROBlob, *core.RpcError) {
// If no indices are provided in the request, we fetch all available blobs for the block.
if len(indices) == 0 {
indicesMap, err := p.BlobStorage.Indices(bytesutil.ToBytes32(root), slot)
if err != nil {
log.WithField("blockRoot", hexutil.Encode(root)).Error("Could not retrieve blob indices for root")
return nil, &core.RpcError{Err: fmt.Errorf("could not retrieve blob indices for root %#x", root), Reason: core.Internal}
}
// Get the maximum index.
maxIndex := -1
for index, exists := range indicesMap {
if exists && index > maxIndex {
maxIndex = index
}
}
if maxIndex >= len(commitments) {
return nil, &core.RpcError{Err: fmt.Errorf("%d blob indices for only %d blob kzg commitments", len(indicesMap), len(commitments)), Reason: core.BadRequest}
}
for index, exists := range indicesMap {
if exists {
indices[uint64(index)] = true
}
}
}
// Retrieve from the store the blobs corresponding to the indices for this block root.
blobs := make([]*blocks.VerifiedROBlob, 0, len(indices))
for index := range indices {
vblob, err := p.BlobStorage.Get(bytesutil.ToBytes32(root), index)
if err != nil {
log.WithFields(log.Fields{
"blockRoot": hexutil.Encode(root),
"blobIndex": index,
}).Error("Could not retrieve blob for block root")
return nil, &core.RpcError{Err: fmt.Errorf("could not retrieve blob for block root %#x at index %d", root, index), Reason: core.Internal}
}
blobs = append(blobs, &vblob)
}
return blobs, nil
}
// blobsFromNonExtendedStoredDataColumns loads the non-extended data columns corresponding to `root` from the store and returns the verified RO blobs.
// This function assumes that all the non-extended data columns are available in the store.
func (p *BeaconDbBlocker) blobsFromNonExtendedStoredDataColumns(
root [fieldparams.RootLength]byte,
indices map[uint64]bool,
) ([]*blocks.VerifiedROBlob, *core.RpcError) {
nonExtendedColumnsCount := uint64(fieldparams.NumberOfColumns / 2)
// Load the data columns corresponding to the non-extended blobs.
storedDataColumnsSidecar := make([]*ethpb.DataColumnSidecar, 0, nonExtendedColumnsCount)
for index := range nonExtendedColumnsCount {
dataColumnSidecar, err := p.BlobStorage.GetColumn(root, index)
if err != nil {
log.WithFields(log.Fields{
"blockRoot": hexutil.Encode(root[:]),
"column": index,
}).Error(errors.Wrapf(err, "could not retrieve column %d for block root %#x.", index, root))
return nil, &core.RpcError{
Err: fmt.Errorf("could not retrieve column %d for block root %#x", index, root),
Reason: core.Internal,
}
}
storedDataColumnsSidecar = append(storedDataColumnsSidecar, dataColumnSidecar)
}
// Get verified RO blobs from the data columns.
verifiedROBlobs, err := peerdas.Blobs(indices, storedDataColumnsSidecar)
if err != nil {
log.WithField("blockRoot", hexutil.Encode(root[:])).Error(errors.Wrap(err, "could not compute blobs from data columns"))
return nil, &core.RpcError{Err: errors.Wrap(err, "could not compute blobs from data columns"), Reason: core.Internal}
}
return verifiedROBlobs, nil
}
// blobsFromReconstructedDataColumns retrieves data columns from the store, reconstructs the whole matrix, and returns the verified RO blobs.
func (p *BeaconDbBlocker) blobsFromReconstructedDataColumns(
root [fieldparams.RootLength]byte,
indices map[uint64]bool,
storedDataColumnsIndices map[uint64]bool,
) ([]*blocks.VerifiedROBlob, *core.RpcError) {
// Load all the data columns we have in the store.
// Theoretically, we could only retrieve the minimum number of columns needed to reconstruct the missing ones,
// but here we make the assumption that the cost of loading all the columns from the store is negligible
// compared to the cost of reconstructing them.
storedDataColumnsSidecar := make([]*ethpb.DataColumnSidecar, 0, len(storedDataColumnsIndices))
for index := range storedDataColumnsIndices {
dataColumnSidecar, err := p.BlobStorage.GetColumn(root, index)
if err != nil {
log.WithFields(log.Fields{
"blockRoot": hexutil.Encode(root[:]),
"column": index,
}).Error(errors.Wrapf(err, "could not retrieve column %d for block root %#x.", index, root))
return nil, &core.RpcError{
Err: fmt.Errorf("could not retrieve column %d for block root %#x", index, root),
Reason: core.Internal,
}
}
storedDataColumnsSidecar = append(storedDataColumnsSidecar, dataColumnSidecar)
}
// Recover cells and proofs.
recoveredCellsAndProofs, err := peerdas.RecoverCellsAndProofs(storedDataColumnsSidecar, root)
if err != nil {
log.WithField("blockRoot", hexutil.Encode(root[:])).Error(errors.Wrap(err, "could not recover cells and proofs"))
return nil, &core.RpcError{Err: errors.Wrap(err, "could not recover cells and proofs"), Reason: core.Internal}
}
// It is safe to retrieve the first element, since we already know there is at least one column in the store.
firstDataColumnSidecar := storedDataColumnsSidecar[0]
// Reconstruct the data columns sidecars.
reconstructedDataColumnSidecars, err := peerdas.DataColumnSidecarsForReconstruct(
firstDataColumnSidecar.KzgCommitments,
firstDataColumnSidecar.SignedBlockHeader,
firstDataColumnSidecar.KzgCommitmentsInclusionProof,
recoveredCellsAndProofs,
)
if err != nil {
log.WithField("blockRoot", hexutil.Encode(root[:])).Error(errors.Wrap(err, "could not reconstruct data columns sidecars"))
return nil, &core.RpcError{Err: errors.Wrap(err, "could not reconstruct data columns sidecars"), Reason: core.Internal}
}
// Get verified RO blobs from the data columns.
verifiedROBlobs, err := peerdas.Blobs(indices, reconstructedDataColumnSidecars)
if err != nil {
log.WithField("blockRoot", hexutil.Encode(root[:])).Error(errors.Wrap(err, "could not compute blobs from data columns"))
return nil, &core.RpcError{Err: errors.Wrap(err, "could not compute blobs from data columns"), Reason: core.Internal}
}
return verifiedROBlobs, nil
}
// blobsFromStoredDataColumns retrieves data columns from the store, reconstructs the whole matrix if needed, converts the matrix to blobs,
// and then returns the blobs corresponding to `indices` and `root`.
// This function expects data columns to be stored (i.e. no blobs).
// If not enough data columns are available to extract blobs from them (either directly or after reconstruction), an error is returned.
func (p *BeaconDbBlocker) blobsFromStoredDataColumns(indices map[uint64]bool, rootBytes []byte) ([]*blocks.VerifiedROBlob, *core.RpcError) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
root := bytesutil.ToBytes32(rootBytes)
// Get the number of groups we should custody.
custodyGroupCount := peerdas.CustodyGroupCount()
// Determine if we are theoretically able to reconstruct the data columns.
canTheoreticallyReconstruct := peerdas.CanSelfReconstruct(custodyGroupCount)
// Retrieve the data column indices we actually store.
storedDataColumnsIndices, err := p.BlobStorage.ColumnIndices(root)
if err != nil {
log.WithField("blockRoot", hexutil.Encode(rootBytes)).Error(errors.Wrap(err, "Could not retrieve columns indices stored for block root"))
return nil, &core.RpcError{Err: errors.Wrap(err, "could not retrieve columns indices stored for block root"), Reason: core.Internal}
}
storedDataColumnCount := uint64(len(storedDataColumnsIndices))
storedGroupCount := storedDataColumnCount / columnsPerGroup
// Determine if we are actually able to reconstruct the data columns.
canActuallyReconstruct := peerdas.CanSelfReconstruct(storedGroupCount)
if !canTheoreticallyReconstruct && !canActuallyReconstruct {
// There is no way to reconstruct the data columns.
return nil, &core.RpcError{
Err: errors.Errorf("the node does not custody enough data columns to reconstruct blobs. Please start the beacon node with the `--%s` flag to ensure this call to success.", flags.SubscribeToAllSubnets.Name),
Reason: core.NotFound,
}
}
nonExtendedColumnsCount := uint64(fieldparams.NumberOfColumns / 2)
if canTheoreticallyReconstruct && !canActuallyReconstruct {
// This case may happen if the node started recently with a large enough custody count, but has not yet backfilled all the columns.
return nil, &core.RpcError{
Err: errors.Errorf("not all data columns are available for this block. Wanted: %d, got: %d. Please retry later.", nonExtendedColumnsCount, storedDataColumnCount),
Reason: core.NotFound,
}
}
// - The case !canTheoreticallyReconstruct && canActuallyReconstruct may happen if the node used to custody enough columns,
// but does not custody enough columns anymore. We are still able to reconstruct the data columns.
// - The case canTheoreticallyReconstruct && canActuallyReconstruct is the happy path.
// Check if we store all the non-extended columns. If so, we can respond without reconstructing.
missingColumns := make(map[uint64]bool)
for columnIndex := range nonExtendedColumnsCount {
if _, ok := storedDataColumnsIndices[columnIndex]; !ok {
missingColumns[columnIndex] = true
}
}
if len(missingColumns) == 0 {
// No need to reconstruct, this is the happy path.
return p.blobsFromNonExtendedStoredDataColumns(root, indices)
}
// Some non-extended data columns are missing, we need to reconstruct them.
return p.blobsFromReconstructedDataColumns(root, indices, storedDataColumnsIndices)
}
// extractRootDefault extracts the block root from the given identifier for the default case.
func (p *BeaconDbBlocker) extractRootDefault(ctx context.Context, id string) ([]byte, *core.RpcError) {
if bytesutil.IsHex([]byte(id)) {
root, err := hexutil.Decode(id)
if len(root) != fieldparams.RootLength {
return nil, &core.RpcError{Err: fmt.Errorf("invalid block root of length %d", len(root)), Reason: core.BadRequest}
}
if err != nil {
return nil, &core.RpcError{Err: NewBlockIdParseError(err), Reason: core.BadRequest}
}
return root, nil
} else {
slot, err := strconv.ParseUint(id, 10, 64)
if err != nil {
return nil, &core.RpcError{Err: NewBlockIdParseError(err), Reason: core.BadRequest}
}
denebStart, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Deneb start slot"), Reason: core.Internal}
}
if primitives.Slot(slot) < denebStart {
return nil, &core.RpcError{Err: errors.New("blobs are not supported before Deneb fork"), Reason: core.BadRequest}
}
ok, roots, err := p.BeaconDB.BlockRootsBySlot(ctx, primitives.Slot(slot))
if !ok {
return nil, &core.RpcError{Err: fmt.Errorf("block not found: no block roots at slot %d", slot), Reason: core.NotFound}
}
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "failed to get block roots by slot"), Reason: core.Internal}
}
root := roots[0][:]
if len(roots) == 1 {
return root, nil
}
for _, blockRoot := range roots {
canonical, err := p.ChainInfoFetcher.IsCanonical(ctx, blockRoot)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not determine if block root is canonical"), Reason: core.Internal}
}
if canonical {
return blockRoot[:], nil
}
}
return nil, &core.RpcError{Err: errors.Wrap(err, "could not find any canonical block for this slot"), Reason: core.NotFound}
}
}
// extractRoot extracts the block root from the given identifier.
func (p *BeaconDbBlocker) extractRoot(ctx context.Context, id string) ([]byte, *core.RpcError) {
switch id {
case "genesis":
return nil, &core.RpcError{Err: errors.New("blobs are not supported for Phase 0 fork"), Reason: core.BadRequest}
case "head":
var err error
root, err := p.ChainInfoFetcher.HeadRoot(ctx)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "could not retrieve head root"), Reason: core.Internal}
}
return root, nil
case "finalized":
fcp := p.ChainInfoFetcher.FinalizedCheckpt()
if fcp == nil {
return nil, &core.RpcError{Err: errors.New("received nil finalized checkpoint"), Reason: core.Internal}
}
return fcp.Root, nil
case "justified":
jcp := p.ChainInfoFetcher.CurrentJustifiedCheckpt()
if jcp == nil {
return nil, &core.RpcError{Err: errors.New("received nil justified checkpoint"), Reason: core.Internal}
}
return jcp.Root, nil
default:
return p.extractRootDefault(ctx, id)
}
}
// Blobs returns the blobs for a given block id identifier and blob indices. The identifier can be one of:
// - "head" (canonical head in node's view)
// - "genesis"
@@ -147,74 +462,12 @@ func (p *BeaconDbBlocker) Block(ctx context.Context, id []byte) (interfaces.Read
// - block exists, has commitments, inside retention period (greater of protocol- or user-specified) serve them w/ 200 unless we hit an error reading them.
// we are technically not supposed to import a block to forkchoice unless we have the blobs, so the nuance here is if we can't find the file and we are inside the protocol-defined retention period, then it's actually a 500.
// - block exists, has commitments, outside retention period (greater of protocol- or user-specified) - ie just like block exists, no commitment
func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError) {
var root []byte
switch id {
case "genesis":
return nil, &core.RpcError{Err: errors.New("blobs are not supported for Phase 0 fork"), Reason: core.BadRequest}
case "head":
var err error
root, err = p.ChainInfoFetcher.HeadRoot(ctx)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "could not retrieve head root"), Reason: core.Internal}
}
case "finalized":
fcp := p.ChainInfoFetcher.FinalizedCheckpt()
if fcp == nil {
return nil, &core.RpcError{Err: errors.New("received nil finalized checkpoint"), Reason: core.Internal}
}
root = fcp.Root
case "justified":
jcp := p.ChainInfoFetcher.CurrentJustifiedCheckpt()
if jcp == nil {
return nil, &core.RpcError{Err: errors.New("received nil justified checkpoint"), Reason: core.Internal}
}
root = jcp.Root
default:
if bytesutil.IsHex([]byte(id)) {
var err error
root, err = hexutil.Decode(id)
if len(root) != fieldparams.RootLength {
return nil, &core.RpcError{Err: fmt.Errorf("invalid block root of length %d", len(root)), Reason: core.BadRequest}
}
if err != nil {
return nil, &core.RpcError{Err: NewBlockIdParseError(err), Reason: core.BadRequest}
}
} else {
slot, err := strconv.ParseUint(id, 10, 64)
if err != nil {
return nil, &core.RpcError{Err: NewBlockIdParseError(err), Reason: core.BadRequest}
}
denebStart, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Deneb start slot"), Reason: core.Internal}
}
if primitives.Slot(slot) < denebStart {
return nil, &core.RpcError{Err: errors.New("blobs are not supported before Deneb fork"), Reason: core.BadRequest}
}
ok, roots, err := p.BeaconDB.BlockRootsBySlot(ctx, primitives.Slot(slot))
if !ok {
return nil, &core.RpcError{Err: fmt.Errorf("block not found: no block roots at slot %d", slot), Reason: core.NotFound}
}
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "failed to get block roots by slot"), Reason: core.Internal}
}
root = roots[0][:]
if len(roots) == 1 {
break
}
for _, blockRoot := range roots {
canonical, err := p.ChainInfoFetcher.IsCanonical(ctx, blockRoot)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not determine if block root is canonical"), Reason: core.Internal}
}
if canonical {
root = blockRoot[:]
break
}
}
}
func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices map[uint64]bool) ([]*blocks.VerifiedROBlob, *core.RpcError) {
root, rpcErr := p.extractRoot(ctx, id)
if rpcErr != nil {
return nil, rpcErr
}
if !p.BeaconDB.HasBlock(ctx, bytesutil.ToBytes32(root)) {
return nil, &core.RpcError{Err: errors.New("block not found"), Reason: core.NotFound}
}
@@ -234,35 +487,32 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64
if len(commitments) == 0 {
return make([]*blocks.VerifiedROBlob, 0), nil
}
if len(indices) == 0 {
m, err := p.BlobStorage.Indices(bytesutil.ToBytes32(root), b.Block().Slot())
// Get the slot of the block.
blockSlot := b.Block().Slot()
// Get the first peerDAS epoch.
fuluForkEpoch := params.BeaconConfig().FuluForkEpoch
// Compute the first peerDAS slot.
peerDASStartSlot := primitives.Slot(math.MaxUint64)
if fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
peerDASStartSlot, err = slots.EpochStart(fuluForkEpoch)
if err != nil {
log.WithFields(log.Fields{
"blockRoot": hexutil.Encode(root),
}).Error(errors.Wrapf(err, "could not retrieve blob indices for root %#x", root))
return nil, &core.RpcError{Err: fmt.Errorf("could not retrieve blob indices for root %#x", root), Reason: core.Internal}
}
for k, v := range m {
if v {
if k >= len(commitments) {
return nil, &core.RpcError{Err: fmt.Errorf("blob index %d is more than blob kzg commitments :%dd", k, len(commitments)), Reason: core.BadRequest}
}
indices = append(indices, uint64(k))
}
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate peerDAS start slot"), Reason: core.Internal}
}
}
// returns empty slice if there are no indices
blobs := make([]*blocks.VerifiedROBlob, len(indices))
for i, index := range indices {
vblob, err := p.BlobStorage.Get(bytesutil.ToBytes32(root), index)
if err != nil {
log.WithFields(log.Fields{
"blockRoot": hexutil.Encode(root),
"blobIndex": index,
}).Error(errors.Wrapf(err, "could not retrieve blob for block root %#x at index %d", root, index))
return nil, &core.RpcError{Err: fmt.Errorf("could not retrieve blob for block root %#x at index %d", root, index), Reason: core.Internal}
}
blobs[i] = &vblob
// Is peerDAS enabled for this block?
isPeerDASEnabledForBlock := blockSlot >= peerDASStartSlot
if indices == nil {
indices = make(map[uint64]bool)
}
return blobs, nil
if !isPeerDASEnabledForBlock {
return p.blobsFromStoredBlobs(indices, root, blockSlot, commitments)
}
return p.blobsFromStoredDataColumns(indices, root)
}
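
Since Blobs now takes a set of indices instead of a slice, callers that used to build a []uint64 need a small adaptation. A minimal sketch under that assumption, where toIndexSet is a hypothetical helper and not part of this change:

package main

import "fmt"

// toIndexSet converts a slice of blob indices into the map form now expected by
// Blocker.Blobs, deduplicating along the way. Hypothetical helper for illustration.
func toIndexSet(indices []uint64) map[uint64]bool {
	set := make(map[uint64]bool, len(indices))
	for _, ix := range indices {
		set[ix] = true
	}
	return set
}

func main() {
	// Duplicates collapse: {1, 2, 1, 3} becomes a three-element set.
	fmt.Println(len(toIndexSet([]uint64{1, 2, 1, 3}))) // 3
}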


@@ -1,27 +1,38 @@
package lookup
import (
"bytes"
"context"
"crypto/sha256"
"encoding/binary"
"fmt"
"net/http"
"reflect"
"testing"
"time"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/testutil"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/sirupsen/logrus"
)
func TestGetBlock(t *testing.T) {
@@ -50,7 +61,7 @@ func TestGetBlock(t *testing.T) {
b4.Block.ParentRoot = bytesutil.PadTo([]byte{8}, 32)
util.SaveBlock(t, ctx, beaconDB, b4)
wsb, err := blocks.NewSignedBeaconBlock(headBlock.Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block)
wsb, err := blocks.NewSignedBeaconBlock(headBlock.Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block)
require.NoError(t, err)
fetcher := &BeaconDbBlocker{
@@ -59,7 +70,7 @@ func TestGetBlock(t *testing.T) {
DB: beaconDB,
Block: wsb,
Root: headBlock.BlockRoot,
FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blkContainers[64].BlockRoot},
FinalizedCheckPoint: &ethpb.Checkpoint{Root: blkContainers[64].BlockRoot},
CanonicalRoots: canonicalRoots,
},
}
@@ -70,13 +81,13 @@ func TestGetBlock(t *testing.T) {
tests := []struct {
name string
blockID []byte
want *ethpbalpha.SignedBeaconBlock
want *ethpb.SignedBeaconBlock
wantErr bool
}{
{
name: "slot",
blockID: []byte("30"),
want: blkContainers[30].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[30].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "bad formatting",
@@ -86,7 +97,7 @@ func TestGetBlock(t *testing.T) {
{
name: "canonical",
blockID: []byte("30"),
want: blkContainers[30].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[30].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "non canonical",
@@ -96,12 +107,12 @@ func TestGetBlock(t *testing.T) {
{
name: "head",
blockID: []byte("head"),
want: headBlock.Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: headBlock.Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "finalized",
blockID: []byte("finalized"),
want: blkContainers[64].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[64].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "genesis",
@@ -116,7 +127,7 @@ func TestGetBlock(t *testing.T) {
{
name: "root",
blockID: blkContainers[20].BlockRoot,
want: blkContainers[20].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[20].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "non-existent root",
@@ -126,7 +137,7 @@ func TestGetBlock(t *testing.T) {
{
name: "hex",
blockID: []byte(hexutil.Encode(blkContainers[20].BlockRoot)),
want: blkContainers[20].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[20].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "no block",
@@ -148,7 +159,7 @@ func TestGetBlock(t *testing.T) {
require.NoError(t, err)
pb, err := result.Proto()
require.NoError(t, err)
pbBlock, ok := pb.(*ethpbalpha.SignedBeaconBlock)
pbBlock, ok := pb.(*ethpb.SignedBeaconBlock)
require.Equal(t, true, ok)
if !reflect.DeepEqual(pbBlock, tt.want) {
t.Error("Expected blocks to equal")
@@ -157,6 +168,245 @@ func TestGetBlock(t *testing.T) {
}
}
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func getRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func getRandBlob(seed int64) kzg.Blob {
var blob kzg.Blob
for i := 0; i < len(blob); i += 32 {
fieldElementBytes := getRandFieldElement(seed + int64(i))
copy(blob[i:i+32], fieldElementBytes[:])
}
return blob
}
func generateCommitmentAndProof(blob *kzg.Blob) (*kzg.Commitment, *kzg.Proof, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, nil, err
}
proof, err := kzg.ComputeBlobKZGProof(blob, commitment)
if err != nil {
return nil, nil, err
}
return &commitment, &proof, err
}
func generateRandomSignedBeaconBlockAndVerifiedRoBlobs(t *testing.T, blobCount int) (interfaces.SignedBeaconBlock, []*blocks.VerifiedROBlob) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Generate random blobs and their corresponding commitments and proofs.
blobs := make([]kzg.Blob, 0, blobCount)
blobKzgCommitments := make([]*kzg.Commitment, 0, blobCount)
blobKzgProofs := make([]*kzg.Proof, 0, blobCount)
for blobIndex := range blobCount {
// Create a random blob.
blob := getRandBlob(int64(blobIndex))
blobs = append(blobs, blob)
// Generate a blobKZGCommitment for the blob.
blobKZGCommitment, proof, err := generateCommitmentAndProof(&blob)
require.NoError(t, err)
blobKzgCommitments = append(blobKzgCommitments, blobKZGCommitment)
blobKzgProofs = append(blobKzgProofs, proof)
}
// Set the commitments into the block.
blobKzgCommitmentsBytes := make([][]byte, 0, blobCount)
for _, blobKZGCommitment := range blobKzgCommitments {
blobKzgCommitmentsBytes = append(blobKzgCommitmentsBytes, blobKZGCommitment[:])
}
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = blobKzgCommitmentsBytes
// Generate verified RO blobs.
verifiedROBlobs := make([]*blocks.VerifiedROBlob, 0, blobCount)
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
commitmentInclusionProof, err := blocks.MerkleProofKZGCommitments(signedBeaconBlock.Block().Body())
require.NoError(t, err)
for blobIndex := range blobCount {
blob := blobs[blobIndex]
blobKZGCommitment := blobKzgCommitments[blobIndex]
blobKzgProof := blobKzgProofs[blobIndex]
// Get the signed beacon block header.
signedBeaconBlockHeader, err := signedBeaconBlock.Header()
require.NoError(t, err)
blobSidecar := &ethpb.BlobSidecar{
Index: uint64(blobIndex),
Blob: blob[:],
KzgCommitment: blobKZGCommitment[:],
KzgProof: blobKzgProof[:],
SignedBlockHeader: signedBeaconBlockHeader,
CommitmentInclusionProof: commitmentInclusionProof,
}
roBlob, err := blocks.NewROBlob(blobSidecar)
require.NoError(t, err)
verifiedROBlob := blocks.NewVerifiedROBlob(roBlob)
verifiedROBlobs = append(verifiedROBlobs, &verifiedROBlob)
}
return signedBeaconBlock, verifiedROBlobs
}
func TestBlobsFromStoredDataColumns(t *testing.T) {
const blobCount = 5
blobsIndex := make(map[uint64]bool, blobCount)
for i := range blobCount {
blobsIndex[uint64(i)] = true
}
var (
nilError *core.RpcError
noDataColumnsIndice []int
)
allDataColumnsIndice := make([]int, 0, fieldparams.NumberOfColumns)
for i := range fieldparams.NumberOfColumns {
allDataColumnsIndice = append(allDataColumnsIndice, i)
}
originalColumnsIndice := allDataColumnsIndice[:fieldparams.NumberOfColumns/2]
extendedColumnsIndice := allDataColumnsIndice[fieldparams.NumberOfColumns/2:]
testCases := []struct {
errorReason core.ErrorReason
isError bool
subscribeToAllSubnets bool
storedColumnsIndice []int
name string
}{
{
name: "Cannot theoretically nor actually reconstruct",
subscribeToAllSubnets: false,
storedColumnsIndice: noDataColumnsIndice,
isError: true,
errorReason: core.NotFound,
},
{
name: "Can theoretically but not actually reconstruct",
subscribeToAllSubnets: true,
storedColumnsIndice: noDataColumnsIndice,
isError: true,
errorReason: core.NotFound,
},
{
name: "No need to reconstruct",
subscribeToAllSubnets: true,
storedColumnsIndice: originalColumnsIndice,
isError: false,
},
{
name: "Reconstruction needed",
subscribeToAllSubnets: false,
storedColumnsIndice: extendedColumnsIndice,
isError: false,
},
}
// Load the trusted setup.
err := kzg.Start()
require.NoError(t, err)
// Create a dummy signed beacon block and dummy verified RO blobs.
signedBeaconBlock, verifiedRoBlobs := generateRandomSignedBeaconBlockAndVerifiedRoBlobs(t, blobCount)
// Extract the root from the signed beacon block.
blockRoot, err := signedBeaconBlock.Block().HashTreeRoot()
require.NoError(t, err)
// Extract blobs from verified RO blobs.
blobs := make([]kzg.Blob, 0, blobCount)
for _, verifiedRoBlob := range verifiedRoBlobs {
blob := verifiedRoBlob.BlobSidecar.Blob
blobs = append(blobs, kzg.Blob(blob))
}
// Convert blobs to data columns.
dataColumnSidecars, err := peerdas.DataColumnSidecars(signedBeaconBlock, blobs)
require.NoError(t, err)
// Create verified RO data columns.
verifiedRoDataColumns := make([]*blocks.VerifiedRODataColumn, 0, fieldparams.NumberOfColumns)
for _, dataColumnSidecar := range dataColumnSidecars {
roDataColumn, err := blocks.NewRODataColumn(dataColumnSidecar)
require.NoError(t, err)
verifiedRoDataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRoDataColumns = append(verifiedRoDataColumns, &verifiedRoDataColumn)
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Set the subscription to all subnets flags.
resetFlags := flags.Get()
params.SetupTestConfigCleanup(t)
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = tc.subscribeToAllSubnets
flags.Init(gFlags)
// Define a blob storage.
blobStorage := filesystem.NewEphemeralBlobStorage(t)
// Save the data columns in the store.
for _, columnIndex := range tc.storedColumnsIndice {
verifiedRoDataColumn := verifiedRoDataColumns[columnIndex]
err := blobStorage.SaveDataColumn(*verifiedRoDataColumn)
require.NoError(t, err)
}
// Define the blocker.
blocker := &BeaconDbBlocker{
BlobStorage: blobStorage,
}
// Get the blobs from the data columns.
actual, err := blocker.blobsFromStoredDataColumns(blobsIndex, blockRoot[:])
if tc.isError {
require.Equal(t, tc.errorReason, err.Reason)
} else {
require.Equal(t, nilError, err)
expected := verifiedRoBlobs
require.DeepSSZEqual(t, expected, actual)
}
// Reset flags.
flags.Init(resetFlags)
})
}
}
func TestGetBlob(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
@@ -218,7 +468,7 @@ func TestGetBlob(t *testing.T) {
})
t.Run("finalized", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
@@ -232,7 +482,7 @@ func TestGetBlob(t *testing.T) {
})
t.Run("justified", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
@@ -270,14 +520,14 @@ func TestGetBlob(t *testing.T) {
})
t.Run("one blob only", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", []uint64{2})
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", map[uint64]bool{2: true})
assert.Equal(t, rpcErr == nil, true)
require.Equal(t, 1, len(verifiedBlobs))
sidecar := verifiedBlobs[0].BlobSidecar
@@ -289,7 +539,7 @@ func TestGetBlob(t *testing.T) {
})
t.Run("no blobs returns an empty array", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},


@@ -105,6 +105,8 @@ func (ds *Server) getPeer(pid peer.ID) (*ethpb.DebugPeerResponse, error) {
peerInfo.MetadataV0 = metadata.MetadataObjV0()
case metadata.MetadataObjV1() != nil:
peerInfo.MetadataV1 = metadata.MetadataObjV1()
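// MetadataV2 is the peerDAS-era metadata variant; per the peerDAS networking spec the metadata
// additionally advertises the node's custody (column subnet) count on top of the V1 fields
// (an assumption taken from the spec rather than from this diff).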
case metadata.MetadataObjV2() != nil:
peerInfo.MetadataV2 = metadata.MetadataObjV2()
}
}
addresses := peerStore.Addrs(pid)


@@ -1,3 +1,5 @@
# gazelle:ignore
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
@@ -36,6 +38,7 @@ go_library(
"//api/client/builder:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
@@ -45,6 +48,7 @@ go_library(
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -177,7 +181,6 @@ common_deps = [
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
]
# gazelle:ignore
go_test(
name = "go_default_test",
timeout = "moderate",


@@ -13,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@@ -46,7 +45,7 @@ func (vs *Server) ProposeAttestation(ctx context.Context, att *ethpb.Attestation
ctx, span := trace.StartSpan(ctx, "AttesterServer.ProposeAttestation")
defer span.End()
resp, err := vs.proposeAtt(ctx, att, nil, att.GetData().CommitteeIndex)
resp, err := vs.proposeAtt(ctx, att, att.GetData().CommitteeIndex)
if err != nil {
return nil, err
}
@@ -70,27 +69,20 @@ func (vs *Server) ProposeAttestation(ctx context.Context, att *ethpb.Attestation
// ProposeAttestationElectra is a function called by an attester to vote
// on a block via an attestation object as defined in the Ethereum specification.
func (vs *Server) ProposeAttestationElectra(ctx context.Context, singleAtt *ethpb.SingleAttestation) (*ethpb.AttestResponse, error) {
func (vs *Server) ProposeAttestationElectra(ctx context.Context, att *ethpb.AttestationElectra) (*ethpb.AttestResponse, error) {
ctx, span := trace.StartSpan(ctx, "AttesterServer.ProposeAttestationElectra")
defer span.End()
targetState, err := vs.AttestationStateFetcher.AttestationTargetState(ctx, singleAtt.Data.Target)
if err != nil {
return nil, status.Error(codes.Internal, "Could not get target state")
}
committeeIndex := singleAtt.GetCommitteeIndex()
committee, err := helpers.BeaconCommitteeFromState(ctx, targetState, singleAtt.Data.Slot, committeeIndex)
if err != nil {
return nil, status.Error(codes.Internal, "Could not get committee")
}
resp, err := vs.proposeAtt(ctx, singleAtt, committee, committeeIndex)
committeeIndex, err := att.GetCommitteeIndex()
if err != nil {
return nil, err
}
resp, err := vs.proposeAtt(ctx, att, committeeIndex)
if err != nil {
return nil, err
}
singleAttCopy := singleAtt.Copy()
att := singleAttCopy.ToAttestationElectra(committee)
if features.Get().EnableExperimentalAttestationPool {
if err = vs.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
@@ -98,7 +90,8 @@ func (vs *Server) ProposeAttestationElectra(ctx context.Context, singleAtt *ethp
} else {
go func() {
ctx = trace.NewContext(context.Background(), trace.FromContext(ctx))
if err := vs.AttPool.SaveUnaggregatedAttestation(att); err != nil {
attCopy := att.Copy()
if err := vs.AttPool.SaveUnaggregatedAttestation(attCopy); err != nil {
log.WithError(err).Error("Could not save unaggregated attestation")
return
}
@@ -156,29 +149,14 @@ func (vs *Server) SubscribeCommitteeSubnets(ctx context.Context, req *ethpb.Comm
return &emptypb.Empty{}, nil
}
func (vs *Server) proposeAtt(
ctx context.Context,
att ethpb.Att,
committee []primitives.ValidatorIndex, // required post-Electra
committeeIndex primitives.CommitteeIndex,
) (*ethpb.AttestResponse, error) {
func (vs *Server) proposeAtt(ctx context.Context, att ethpb.Att, committee primitives.CommitteeIndex) (*ethpb.AttestResponse, error) {
if _, err := bls.SignatureFromBytes(att.GetSignature()); err != nil {
return nil, status.Error(codes.InvalidArgument, "Incorrect attestation signature")
}
root, err := att.GetData().HashTreeRoot()
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get attestation root: %v", err)
}
var singleAtt *ethpb.SingleAttestation
if att.Version() >= version.Electra {
var ok bool
singleAtt, ok = att.(*ethpb.SingleAttestation)
if !ok {
return nil, status.Errorf(codes.Internal, "Attestation has wrong type (expected %T, got %T)", &ethpb.SingleAttestation{}, att)
}
att = singleAtt.ToAttestationElectra(committee)
return nil, status.Errorf(codes.Internal, "Could not tree hash attestation: %v", err)
}
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
@@ -196,16 +174,10 @@ func (vs *Server) proposeAtt(
if err != nil {
return nil, err
}
subnet := helpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), committeeIndex, att.GetData().Slot)
subnet := helpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), committee, att.GetData().Slot)
// Broadcast the new attestation to the network.
var attToBroadcast ethpb.Att
if singleAtt != nil {
attToBroadcast = singleAtt
} else {
attToBroadcast = att
}
if err := vs.P2P.BroadcastAttestation(ctx, subnet, attToBroadcast); err != nil {
if err := vs.P2P.BroadcastAttestation(ctx, subnet, att); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast attestation: %v", err)
}


@@ -31,13 +31,11 @@ import (
)
func TestProposeAttestation(t *testing.T) {
chainService := &mock.ChainService{}
attesterServer := &Server{
HeadFetcher: chainService,
P2P: &mockp2p.MockBroadcaster{},
AttPool: attestations.NewPool(),
OperationNotifier: (&mock.ChainService{}).OperationNotifier(),
AttestationStateFetcher: chainService,
HeadFetcher: &mock.ChainService{},
P2P: &mockp2p.MockBroadcaster{},
AttPool: attestations.NewPool(),
OperationNotifier: (&mock.ChainService{}).OperationNotifier(),
}
head := util.NewBeaconBlock()
head.Block.Slot = 999
@@ -81,19 +79,81 @@ func TestProposeAttestation(t *testing.T) {
require.NoError(t, err)
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))
require.NoError(t, state.SetValidators(validators))
chainService.State = state
req := &ethpb.SingleAttestation{
cb := primitives.NewAttestationCommitteeBits()
cb.SetBitAt(0, true)
req := &ethpb.AttestationElectra{
Signature: sig.Marshal(),
Data: &ethpb.AttestationData{
BeaconBlockRoot: root[:],
Source: &ethpb.Checkpoint{Root: make([]byte, 32)},
Target: &ethpb.Checkpoint{Root: make([]byte, 32)},
},
CommitteeBits: cb,
}
_, err = attesterServer.ProposeAttestationElectra(context.Background(), req)
assert.NoError(t, err)
})
t.Run("Electra - non-zero committee index", func(t *testing.T) {
state, err := util.NewBeaconStateElectra()
require.NoError(t, err)
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))
require.NoError(t, state.SetValidators(validators))
cb := primitives.NewAttestationCommitteeBits()
cb.SetBitAt(0, true)
req := &ethpb.AttestationElectra{
Signature: sig.Marshal(),
Data: &ethpb.AttestationData{
BeaconBlockRoot: root[:],
Source: &ethpb.Checkpoint{Root: make([]byte, 32)},
Target: &ethpb.Checkpoint{Root: make([]byte, 32)},
CommitteeIndex: 1,
},
CommitteeBits: cb,
}
_, err = attesterServer.ProposeAttestationElectra(context.Background(), req)
assert.ErrorContains(t, "attestation data's committee index must be 0 but was 1", err)
})
t.Run("Electra - no committee bit set", func(t *testing.T) {
state, err := util.NewBeaconStateElectra()
require.NoError(t, err)
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))
require.NoError(t, state.SetValidators(validators))
req := &ethpb.AttestationElectra{
Signature: sig.Marshal(),
Data: &ethpb.AttestationData{
BeaconBlockRoot: root[:],
Source: &ethpb.Checkpoint{Root: make([]byte, 32)},
Target: &ethpb.Checkpoint{Root: make([]byte, 32)},
},
CommitteeBits: primitives.NewAttestationCommitteeBits(),
}
_, err = attesterServer.ProposeAttestationElectra(context.Background(), req)
assert.ErrorContains(t, "exactly 1 committee index must be set but 0 were set", err)
})
t.Run("Electra - multiple committee bits set", func(t *testing.T) {
state, err := util.NewBeaconStateElectra()
require.NoError(t, err)
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))
require.NoError(t, state.SetValidators(validators))
cb := primitives.NewAttestationCommitteeBits()
cb.SetBitAt(0, true)
cb.SetBitAt(1, true)
req := &ethpb.AttestationElectra{
Signature: sig.Marshal(),
Data: &ethpb.AttestationData{
BeaconBlockRoot: root[:],
Source: &ethpb.Checkpoint{Root: make([]byte, 32)},
Target: &ethpb.Checkpoint{Root: make([]byte, 32)},
},
CommitteeBits: cb,
}
_, err = attesterServer.ProposeAttestationElectra(context.Background(), req)
assert.ErrorContains(t, "exactly 1 committee index must be set but 2 were set", err)
})
}
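// The three Electra failure cases above follow from the EIP-7549 shape of AttestationElectra:
// the attestation data's committee index must be zero and exactly one bit of CommitteeBits may
// be set (that bit is the effective committee index). The helper below is an illustrative
// sketch of the validation behind att.GetCommitteeIndex(); the function name and exact error
// wording are assumptions, not the repository's implementation.
func committeeIndexElectra(att *ethpb.AttestationElectra) (primitives.CommitteeIndex, error) {
    // Post-Electra, the committee index is carried in the committee bits, so the legacy
    // field in the attestation data must stay zero.
    if att.Data.CommitteeIndex != 0 {
        return 0, fmt.Errorf("attestation data's committee index must be 0 but was %d", att.Data.CommitteeIndex)
    }
    // Exactly one committee bit identifies the committee the attestation belongs to.
    setBits := att.CommitteeBits.BitIndices()
    if len(setBits) != 1 {
        return 0, fmt.Errorf("exactly 1 committee index must be set but %d were set", len(setBits))
    }
    return primitives.CommitteeIndex(setBits[0]), nil
}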
func TestProposeAttestation_IncorrectSignature(t *testing.T) {


@@ -13,15 +13,20 @@ import (
"github.com/pkg/errors"
builderapi "github.com/prysmaticlabs/prysm/v5/api/client/builder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
blockfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/block"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/operation"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -56,28 +61,31 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
if err != nil {
log.WithError(err).Error("Could not convert slot to time")
}
log.WithFields(logrus.Fields{
"slot": req.Slot,
"sinceSlotStartTime": time.Since(t),
}).Info("Begin building block")
log := log.WithField("slot", req.Slot)
log.WithField("sinceSlotStartTime", time.Since(t)).Info("Begin building block")
// A syncing validator should not produce a block.
if vs.SyncChecker.Syncing() {
log.Error("Fail to build block: node is syncing")
return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
// An optimistic validator MUST NOT produce a block (i.e., sign across the DOMAIN_BEACON_PROPOSER domain).
if slots.ToEpoch(req.Slot) >= params.BeaconConfig().BellatrixForkEpoch {
if err := vs.optimisticStatus(ctx); err != nil {
log.WithError(err).Error("Fail to build block: node is optimistic")
return nil, status.Errorf(codes.Unavailable, "Validator is not ready to propose: %v", err)
}
}
head, parentRoot, err := vs.getParentState(ctx, req.Slot)
if err != nil {
log.WithError(err).Error("Fail to build block: could not get parent state")
return nil, err
}
sBlk, err := getEmptyBlock(req.Slot)
if err != nil {
log.WithError(err).Error("Fail to build block: could not get empty block")
return nil, status.Errorf(codes.Internal, "Could not prepare block: %v", err)
}
// Set slot, graffiti, randao reveal, and parent root.
@@ -89,6 +97,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
// Set proposer index.
idx, err := helpers.BeaconProposerIndex(ctx, head)
if err != nil {
log.WithError(err).Error("Fail to build block: could not calculate proposer index")
return nil, fmt.Errorf("could not calculate proposer index %w", err)
}
sBlk.SetProposerIndex(idx)
@@ -99,7 +108,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
}
resp, err := vs.BuildBlockParallel(ctx, sBlk, head, req.SkipMevBoost, builderBoostFactor)
log := log.WithFields(logrus.Fields{
log = log.WithFields(logrus.Fields{
"slot": req.Slot,
"sinceSlotStartTime": time.Since(t),
"validator": sBlk.Block().ProposerIndex(),
@@ -270,7 +279,13 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
}
// ProposeBeaconBlock handles the proposal of beacon blocks.
// TODO: Add tests
func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
var (
blobSidecars []*ethpb.BlobSidecar
dataColumnSideCars []*ethpb.DataColumnSidecar
)
ctx, span := trace.StartSpan(ctx, "ProposerServer.ProposeBeaconBlock")
defer span.End()
@@ -282,12 +297,12 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "%s: %v", "decode block failed", err)
}
isPeerDASEnabled := coreTime.PeerDASIsActive(block.Block().Slot())
var sidecars []*ethpb.BlobSidecar
if block.IsBlinded() {
block, sidecars, err = vs.handleBlindedBlock(ctx, block)
block, blobSidecars, dataColumnSideCars, err = vs.handleBlindedBlock(ctx, block, isPeerDASEnabled)
} else if block.Version() >= version.Deneb {
sidecars, err = vs.blobSidecarsFromUnblindedBlock(block, req)
blobSidecars, dataColumnSideCars, err = vs.handleUnblindedBlock(block, req, isPeerDASEnabled)
}
if err != nil {
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
@@ -298,9 +313,10 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
return nil, status.Errorf(codes.Internal, "Could not hash tree root: %v", err)
}
slot := block.Block().Slot()
var wg sync.WaitGroup
errChan := make(chan error, 1)
wg.Add(1)
go func() {
defer wg.Done()
@@ -311,8 +327,14 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
errChan <- nil
}()
if err := vs.broadcastAndReceiveBlobs(ctx, sidecars, root); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive blobs: %v", err)
if isPeerDASEnabled {
if err := vs.broadcastAndReceiveDataColumns(ctx, dataColumnSideCars, root, slot); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive data columns: %v", err)
}
} else {
if err := vs.broadcastAndReceiveBlobs(ctx, blobSidecars, root); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive blobs: %v", err)
}
}
wg.Wait()
@@ -324,46 +346,85 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
}
// handleBlindedBlock processes blinded beacon blocks.
func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.SignedBeaconBlock) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, error) {
func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.SignedBeaconBlock, isPeerDASEnabled bool) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, []*ethpb.DataColumnSidecar, error) {
if block.Version() < version.Bellatrix {
return nil, nil, errors.New("pre-Bellatrix blinded block")
return nil, nil, nil, errors.New("pre-Bellatrix blinded block")
}
if vs.BlockBuilder == nil || !vs.BlockBuilder.Configured() {
return nil, nil, errors.New("unconfigured block builder")
return nil, nil, nil, errors.New("unconfigured block builder")
}
copiedBlock, err := block.Copy()
if err != nil {
return nil, nil, err
return nil, nil, nil, errors.Wrap(err, "block copy")
}
payload, bundle, err := vs.BlockBuilder.SubmitBlindedBlock(ctx, block)
if err != nil {
return nil, nil, errors.Wrap(err, "submit blinded block failed")
return nil, nil, nil, errors.Wrap(err, "submit blinded block")
}
if err := copiedBlock.Unblind(payload); err != nil {
return nil, nil, errors.Wrap(err, "unblind failed")
return nil, nil, nil, errors.Wrap(err, "unblind")
}
sidecars, err := unblindBlobsSidecars(copiedBlock, bundle)
if isPeerDASEnabled {
dataColumnSideCars, err := unblindDataColumnsSidecars(copiedBlock, bundle)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "unblind data columns sidecars")
}
return copiedBlock, nil, dataColumnSideCars, nil
}
blobSidecars, err := unblindBlobsSidecars(copiedBlock, bundle)
if err != nil {
return nil, nil, errors.Wrap(err, "unblind blobs sidecars: commitment value doesn't match block")
return nil, nil, nil, errors.Wrap(err, "unblind blobs sidecars: commitment value doesn't match block")
}
return copiedBlock, sidecars, nil
return copiedBlock, blobSidecars, nil, nil
}
func (vs *Server) blobSidecarsFromUnblindedBlock(block interfaces.SignedBeaconBlock, req *ethpb.GenericSignedBeaconBlock) ([]*ethpb.BlobSidecar, error) {
func (vs *Server) handleUnblindedBlock(
block interfaces.SignedBeaconBlock,
req *ethpb.GenericSignedBeaconBlock,
isPeerDASEnabled bool,
) ([]*ethpb.BlobSidecar, []*ethpb.DataColumnSidecar, error) {
rawBlobs, proofs, err := blobsAndProofs(req)
if err != nil {
return nil, err
return nil, nil, err
}
return BuildBlobSidecars(block, rawBlobs, proofs)
if isPeerDASEnabled {
// Convert blobs from slices to array.
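// Each blob is expected to be exactly kzg.BytesPerBlob long: 4096 field elements of
// 32 bytes each, i.e. 131,072 bytes, per the Deneb blob format.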
blobs := make([]kzg.Blob, 0, len(rawBlobs))
for _, blob := range rawBlobs {
if len(blob) != kzg.BytesPerBlob {
return nil, nil, errors.Errorf("invalid blob size. expected %d bytes, got %d bytes", kzg.BytesPerBlob, len(blob))
}
blobs = append(blobs, kzg.Blob(blob))
}
dataColumnSideCars, err := peerdas.DataColumnSidecars(block, blobs)
if err != nil {
return nil, nil, errors.Wrap(err, "data column sidecars")
}
return nil, dataColumnSideCars, nil
}
blobSidecars, err := BuildBlobSidecars(block, rawBlobs, proofs)
if err != nil {
return nil, nil, errors.Wrap(err, "build blob sidecars")
}
return blobSidecars, nil, nil
}
// broadcastReceiveBlock broadcasts a block and handles its reception.
func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, root [32]byte) error {
func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, root [fieldparams.RootLength]byte) error {
protoBlock, err := block.Proto()
if err != nil {
return errors.Wrap(err, "protobuf conversion failed")
@@ -379,7 +440,7 @@ func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.Si
}
// broadcastAndReceiveBlobs handles the broadcasting and reception of blob sidecars.
func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethpb.BlobSidecar, root [32]byte) error {
func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethpb.BlobSidecar, root [fieldparams.RootLength]byte) error {
eg, eCtx := errgroup.WithContext(ctx)
for i, sc := range sidecars {
// Copy the iteration instance to a local variable to give each go-routine its own copy to play with.
@@ -408,6 +469,57 @@ func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethp
return eg.Wait()
}
// broadcastAndReceiveDataColumns handles the broadcasting and reception of data column sidecars.
func (vs *Server) broadcastAndReceiveDataColumns(
ctx context.Context,
sidecars []*ethpb.DataColumnSidecar,
root [fieldparams.RootLength]byte,
slot primitives.Slot,
) error {
eg, _ := errgroup.WithContext(ctx)
dataColumnsWithholdCount := features.Get().DataColumnsWithholdCount
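// Columns with an index below DataColumnsWithholdCount are deliberately not broadcast
// (only received locally), presumably to exercise withholding scenarios on test networks
// (an assumption based on the flag name and the warning below).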
for _, sd := range sidecars {
// Copy the iteration instance to a local variable to give each go-routine its own copy to play with.
// See https://golang.org/doc/faq#closures_and_goroutines for more details.
sidecar := sd
eg.Go(func() error {
// Compute the subnet index based on the column index.
subnet := sidecar.ColumnIndex % params.BeaconConfig().DataColumnSidecarSubnetCount
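// Illustrative mapping (values assumed for the example): with 128 columns and a subnet count
// of 32, column 70 lands on subnet 70 % 32 = 6, and every subnet carries 128 / 32 = 4 columns.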
if sidecar.ColumnIndex < dataColumnsWithholdCount {
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"slot": slot,
"dataColumnIndex": sidecar.ColumnIndex,
}).Warning("Withholding data column")
} else {
if err := vs.P2P.BroadcastDataColumn(ctx, root, subnet, sidecar); err != nil {
return errors.Wrap(err, "broadcast data column")
}
}
roDataColumn, err := blocks.NewRODataColumnWithRoot(sidecar, root)
if err != nil {
return errors.Wrap(err, "new read-only data column with root")
}
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
if err := vs.DataColumnReceiver.ReceiveDataColumn(verifiedRODataColumn); err != nil {
return errors.Wrap(err, "receive data column")
}
vs.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.DataColumnSidecarReceived,
Data: &operation.DataColumnSidecarReceivedData{DataColumn: &verifiedRODataColumn},
})
return nil
})
}
return eg.Wait()
}
// PrepareBeaconProposer caches and updates the fee recipient for the given proposer.
func (vs *Server) PrepareBeaconProposer(
_ context.Context, request *ethpb.PrepareBeaconProposerRequest,


@@ -42,44 +42,44 @@ import (
// and committees in which particular validators need to perform their responsibilities,
// and more.
type Server struct {
Ctx context.Context
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
HeadFetcher blockchain.HeadFetcher
ForkFetcher blockchain.ForkFetcher
ForkchoiceFetcher blockchain.ForkchoiceFetcher
GenesisFetcher blockchain.GenesisFetcher
FinalizationFetcher blockchain.FinalizationFetcher
TimeFetcher blockchain.TimeFetcher
BlockFetcher execution.POWBlockFetcher
DepositFetcher cache.DepositFetcher
ChainStartFetcher execution.ChainStartFetcher
Eth1InfoFetcher execution.ChainInfoFetcher
OptimisticModeFetcher blockchain.OptimisticModeFetcher
SyncChecker sync.Checker
StateNotifier statefeed.Notifier
BlockNotifier blockfeed.Notifier
P2P p2p.Broadcaster
AttestationCache *cache.AttestationCache
AttPool attestations.Pool
SlashingsPool slashings.PoolManager
ExitPool voluntaryexits.PoolManager
SyncCommitteePool synccommittee.Pool
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
MockEth1Votes bool
Eth1BlockFetcher execution.POWBlockFetcher
PendingDepositsFetcher depositsnapshot.PendingDepositsFetcher
OperationNotifier opfeed.Notifier
StateGen stategen.StateManager
ReplayerBuilder stategen.ReplayerBuilder
BeaconDB db.HeadAccessDatabase
ExecutionEngineCaller execution.EngineCaller
BlockBuilder builder.BlockBuilder
BLSChangesPool blstoexec.PoolManager
ClockWaiter startup.ClockWaiter
CoreService *core.Service
AttestationStateFetcher blockchain.AttestationStateFetcher
Ctx context.Context
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
HeadFetcher blockchain.HeadFetcher
ForkFetcher blockchain.ForkFetcher
ForkchoiceFetcher blockchain.ForkchoiceFetcher
GenesisFetcher blockchain.GenesisFetcher
FinalizationFetcher blockchain.FinalizationFetcher
TimeFetcher blockchain.TimeFetcher
BlockFetcher execution.POWBlockFetcher
DepositFetcher cache.DepositFetcher
ChainStartFetcher execution.ChainStartFetcher
Eth1InfoFetcher execution.ChainInfoFetcher
OptimisticModeFetcher blockchain.OptimisticModeFetcher
SyncChecker sync.Checker
StateNotifier statefeed.Notifier
BlockNotifier blockfeed.Notifier
P2P p2p.Broadcaster
AttestationCache *cache.AttestationCache
AttPool attestations.Pool
SlashingsPool slashings.PoolManager
ExitPool voluntaryexits.PoolManager
SyncCommitteePool synccommittee.Pool
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
MockEth1Votes bool
Eth1BlockFetcher execution.POWBlockFetcher
PendingDepositsFetcher depositsnapshot.PendingDepositsFetcher
OperationNotifier opfeed.Notifier
StateGen stategen.StateManager
ReplayerBuilder stategen.ReplayerBuilder
BeaconDB db.HeadAccessDatabase
ExecutionEngineCaller execution.EngineCaller
BlockBuilder builder.BlockBuilder
BLSChangesPool blstoexec.PoolManager
ClockWaiter startup.ClockWaiter
CoreService *core.Service
}
// WaitForActivation checks if a validator public key exists in the active validator registry of the current


@@ -4,6 +4,8 @@ import (
"bytes"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -68,3 +70,29 @@ func unblindBlobsSidecars(block interfaces.SignedBeaconBlock, bundle *enginev1.B
}
return sidecars, nil
}
// TODO: Add tests
func unblindDataColumnsSidecars(block interfaces.SignedBeaconBlock, bundle *enginev1.BlobsBundle) ([]*ethpb.DataColumnSidecar, error) {
// Check if the block is at least a Deneb block.
if block.Version() < version.Deneb {
return nil, nil
}
// Convert blobs from slices to array.
blobs := make([]kzg.Blob, 0, len(bundle.Blobs))
for _, blob := range bundle.Blobs {
if len(blob) != kzg.BytesPerBlob {
return nil, errors.Errorf("invalid blob size. expected %d bytes, got %d bytes", kzg.BytesPerBlob, len(blob))
}
blobs = append(blobs, kzg.Blob(blob))
}
// Retrieve data columns from blobs.
dataColumnSidecars, err := peerdas.DataColumnSidecars(block, blobs)
if err != nil {
return nil, errors.Wrap(err, "data column sidecars")
}
return dataColumnSidecars, nil
}


@@ -88,6 +88,7 @@ type Config struct {
AttestationReceiver blockchain.AttestationReceiver
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
ExecutionChainService execution.Chain
ChainStartFetcher execution.ChainStartFetcher
ExecutionChainInfoFetcher execution.ChainInfoFetcher
@@ -212,44 +213,44 @@ func NewService(ctx context.Context, cfg *Config) *Service {
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
}
validatorServer := &validatorv1alpha1.Server{
Ctx: s.ctx,
AttestationCache: s.cfg.AttestationCache,
AttPool: s.cfg.AttestationsPool,
ExitPool: s.cfg.ExitPool,
HeadFetcher: s.cfg.HeadFetcher,
ForkFetcher: s.cfg.ForkFetcher,
ForkchoiceFetcher: s.cfg.ForkchoiceFetcher,
GenesisFetcher: s.cfg.GenesisFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
TimeFetcher: s.cfg.GenesisTimeFetcher,
BlockFetcher: s.cfg.ExecutionChainService,
DepositFetcher: s.cfg.DepositFetcher,
ChainStartFetcher: s.cfg.ChainStartFetcher,
Eth1InfoFetcher: s.cfg.ExecutionChainService,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
SyncChecker: s.cfg.SyncService,
StateNotifier: s.cfg.StateNotifier,
BlockNotifier: s.cfg.BlockNotifier,
OperationNotifier: s.cfg.OperationNotifier,
P2P: s.cfg.Broadcaster,
BlockReceiver: s.cfg.BlockReceiver,
BlobReceiver: s.cfg.BlobReceiver,
MockEth1Votes: s.cfg.MockEth1Votes,
Eth1BlockFetcher: s.cfg.ExecutionChainService,
PendingDepositsFetcher: s.cfg.PendingDepositFetcher,
SlashingsPool: s.cfg.SlashingsPool,
StateGen: s.cfg.StateGen,
SyncCommitteePool: s.cfg.SyncCommitteeObjectPool,
ReplayerBuilder: ch,
ExecutionEngineCaller: s.cfg.ExecutionEngineCaller,
BeaconDB: s.cfg.BeaconDB,
BlockBuilder: s.cfg.BlockBuilder,
BLSChangesPool: s.cfg.BLSChangesPool,
ClockWaiter: s.cfg.ClockWaiter,
CoreService: coreService,
TrackedValidatorsCache: s.cfg.TrackedValidatorsCache,
PayloadIDCache: s.cfg.PayloadIDCache,
AttestationStateFetcher: s.cfg.AttestationReceiver,
Ctx: s.ctx,
AttestationCache: s.cfg.AttestationCache,
AttPool: s.cfg.AttestationsPool,
ExitPool: s.cfg.ExitPool,
HeadFetcher: s.cfg.HeadFetcher,
ForkFetcher: s.cfg.ForkFetcher,
ForkchoiceFetcher: s.cfg.ForkchoiceFetcher,
GenesisFetcher: s.cfg.GenesisFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
TimeFetcher: s.cfg.GenesisTimeFetcher,
BlockFetcher: s.cfg.ExecutionChainService,
DepositFetcher: s.cfg.DepositFetcher,
ChainStartFetcher: s.cfg.ChainStartFetcher,
Eth1InfoFetcher: s.cfg.ExecutionChainService,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
SyncChecker: s.cfg.SyncService,
StateNotifier: s.cfg.StateNotifier,
BlockNotifier: s.cfg.BlockNotifier,
OperationNotifier: s.cfg.OperationNotifier,
P2P: s.cfg.Broadcaster,
BlockReceiver: s.cfg.BlockReceiver,
BlobReceiver: s.cfg.BlobReceiver,
DataColumnReceiver: s.cfg.DataColumnReceiver,
MockEth1Votes: s.cfg.MockEth1Votes,
Eth1BlockFetcher: s.cfg.ExecutionChainService,
PendingDepositsFetcher: s.cfg.PendingDepositFetcher,
SlashingsPool: s.cfg.SlashingsPool,
StateGen: s.cfg.StateGen,
SyncCommitteePool: s.cfg.SyncCommitteeObjectPool,
ReplayerBuilder: ch,
ExecutionEngineCaller: s.cfg.ExecutionEngineCaller,
BeaconDB: s.cfg.BeaconDB,
BlockBuilder: s.cfg.BlockBuilder,
BLSChangesPool: s.cfg.BLSChangesPool,
ClockWaiter: s.cfg.ClockWaiter,
CoreService: coreService,
TrackedValidatorsCache: s.cfg.TrackedValidatorsCache,
PayloadIDCache: s.cfg.PayloadIDCache,
}
s.validatorServer = validatorServer
nodeServer := &nodev1alpha1.Server{

Some files were not shown because too many files have changed in this diff.