Compare commits

...

177 Commits

Author SHA1 Message Date
Manu NALEPA
49d5065116 Merge branch 'eip_7917' into fusaka-devnet-1 2025-06-06 09:56:24 +02:00
Manu NALEPA
a94561f8dc Merge branch 'develop' into peerDAS 2025-06-06 09:56:04 +02:00
Manu NALEPA
af875b78c9 Peer das misc (#15384)
* `ExchangeCapabilities`: Transform `O(n**2)` into `O(2n)` and fix logging (see the sketch after this commit).

* Find peers with subnets and logs: Refactor

* Validator custody: Do not wait to be subscribed before advertising the correct `cgc`. (temp hack)
2025-06-06 09:43:13 +02:00
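The first bullet in the commit above describes turning a quadratic capability check into a linear one. A minimal, hypothetical Go sketch of that kind of change, assuming the quadratic cost came from nested containment checks between two capability lists (the function and field names below are illustrative, not Prysm's):

```go
// Hypothetical sketch: replacing a nested-loop membership check (O(n*m)) with
// two linear passes over a set (O(n+m)), as the commit bullet describes.
package main

import "fmt"

// missingCapabilities returns the wanted capabilities that were not reported,
// using a map lookup instead of an inner loop.
func missingCapabilities(wanted, reported []string) []string {
	reportedSet := make(map[string]bool, len(reported))
	for _, c := range reported {
		reportedSet[c] = true // first pass: O(len(reported))
	}
	var missing []string
	for _, c := range wanted {
		if !reportedSet[c] { // second pass: O(len(wanted)), O(1) lookup each
			missing = append(missing, c)
		}
	}
	return missing
}

func main() {
	fmt.Println(missingCapabilities(
		[]string{"engine_getBlobsV2", "engine_newPayloadV4"},
		[]string{"engine_newPayloadV4"},
	))
	// Output: [engine_getBlobsV2]
}
```

The map costs one linear pass to build and turns every membership test into a constant-time lookup.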
terence tsao
009263650c Fix minimal tests 2025-06-05 22:12:51 -07:00
Preston Van Loon
543ebe857e V6.0.4 changelog (#15385)
* Ran unclog to build release notes for v6.0.4

* Add changelog preamble

* Changelog fragment
2025-06-06 01:37:32 +00:00
terence tsao
ce591f6bd8 Remove deneb and electra entries from blob schedule
This was cherry picked from PR #15364
and edited to remove the minimal cases
2025-06-05 21:18:35 -03:00
potuz
19ba5848fd update spectests to 1.6.0-alpha.1 2025-06-05 21:18:30 -03:00
Potuz
7083f439d2 Marshal the state JSON 2025-06-05 21:18:21 -03:00
Potuz
efebbe125d short circuit the proposer cache to use the state 2025-06-05 21:18:21 -03:00
Potuz
d802d87858 Add epoch transition code 2025-06-05 21:18:21 -03:00
potuz
fdf9b53294 Add the fields as shared 2025-06-05 21:18:21 -03:00
potuz
9cfbc2f27e Fix ToProto() and ToProtoUnsafe() 2025-06-05 21:18:21 -03:00
potuz
1603c651b6 fix the hasher for the fulu state 2025-06-05 21:18:21 -03:00
potuz
1db9b59933 Add the new Fulu state with the new field 2025-06-05 21:18:21 -03:00
james-prysm
e569df5ebc moving web flag to feature (#15382) 2025-06-05 19:18:54 +00:00
james-prysm
8c324cc491 validator client: adding in get duties v2 (#15380)
* adding in get duties v2

* gaz

* missed definition

* removing comment

* updating description
2025-06-05 15:49:57 +00:00
Bastin
265d84569c SSZ support for Attestation APIs Part 1 (#15377)
* ssz support for validator/GetAttestationData V1

* ssz support for validator/GetAggregateAttestation V2

* changelog

* remove duplicate header
2025-06-05 13:39:52 +00:00
Alleysira
79b064a6cc fix missing meta in resp of getPeers (#15371)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-06-04 17:55:01 +00:00
Preston Van Loon
182c18a7b2 Add regression test for PR 15369 (#15379)
* Update go.uber.org/mock to v0.5.2.

* Regression test for #15369

* Changelog fragment

* Use SetActiveTestCleanup

* Remove logrus log level change
2025-06-04 17:30:31 +00:00
james-prysm
8b9c161560 Get duties v2 ( gRPC) (#15273)
* adding in proto for getdutiesv2

* adding in GetDutiesV2

* updating changelog and mock

* fixing tests

* breaking up function into smaller functions for gocognit

* reverting some changes so we can break it into a separate PR for validator client only

* reverted too much adding mock change back in

* removing reserved based on preston's feedback

* adding duties tests back in

* Update beacon-chain/rpc/prysm/v1alpha1/validator/duties.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* updating based on preston's feedback

* updating build for new files

* maybe there's some flake with how the state is used, trying with it moved

* reverting config override

* reverting all changes to get duties v1 based on preston's feedback

* Update proto/prysm/v1alpha1/validator.proto

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing partial feedback

* adding small comment

* reorganizing function based on feedback

* removing unused field

* adding some more unit tests

* adding attribute for pubkeys to span

* preston's feedback

* fixing current to next

* probably safer to register current period now

* Update beacon-chain/core/helpers/beacon_committee.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* adding in preston's comment

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-06-04 15:05:57 +00:00
Manu NALEPA
4a4532f3ba Beacon node: Instantiate custody info. (#15378) 2025-06-04 13:08:40 +00:00
Manu NALEPA
61207bd3ac Merge branch 'develop' into peerDAS 2025-06-02 14:15:22 +02:00
Manu NALEPA
0b6fcd7d17 Merge branch 'develop' into peerDAS 2025-05-28 21:05:22 +02:00
Manu NALEPA
fe2766e716 Merge branch 'develop' into peerDAS 2025-05-26 09:57:57 +02:00
Manu NALEPA
9135d765e1 Merge branch 'develop' into peerDAS 2025-05-23 15:41:27 +02:00
Manu NALEPA
eca87f29d1 Merge branch 'develop' into peerDAS 2025-05-22 14:37:11 +02:00
Manu NALEPA
00821c8f55 Merge branch 'develop' into peerDAS 2025-05-21 13:50:23 +02:00
Manu NALEPA
4b9e92bcd7 Peerdas by root req (#15275)
* `DataColumnStorageSummary`: Implement `HasAtLeastOneIndex`.

* `DataColumnStorage.Get`: Exit early if the root is found but there are no corresponding columns.

* `custodyColumnsFromPeers`: Simplify.

* Remove duplicate `uint64MapToSortedSlice` function.

* `DataColumnStorageSummary`: Add `Stored`.

* Refactor reconstruction related code.
2025-05-16 16:19:01 +02:00
terence
b01d9005b8 Update data column receive log (#15289) 2025-05-16 07:01:40 -07:00
Manu NALEPA
8d812d5f0e Merge branch 'develop' into peerDAS 2025-05-07 17:41:25 +02:00
terence
24a3cb2a8b Add column identifiers by root request (#15212)
* Add column identifiers by root request

* `DataColumnsByRootIdentifiers`: Fix Un/Marshal.

* alternate MashalSSZ impl

* remove sort.Interface impl

* optimize unmarshal and add defensive checks

* fix offsets in error messages

* Fix build, remove sort

* Fix `SendDataColumnSidecarsByRootRequest` and tests.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Kasey <kasey@users.noreply.github.com>
2025-05-06 14:07:16 +02:00
Manu NALEPA
66d1d3e248 Use finalized state for validator custody instead of head state. (#15243)
* `finalizedState` ==> `FinalizedState`.
We'll need it in another package later.

* `setTargetValidatorsCustodyRequirement`: Use finalized state instead of head state.

* Fix James's comment.
2025-05-05 21:13:49 +02:00
Manu NALEPA
99933678ea Peerdas fix get blobs v2 (#15234)
* `reconstructAndBroadcastDataColumnSidecars`: Improve logging.

* `ReconstructDataColumnSidecars`: Add comments and return early if needed.

* `reconstructAndBroadcastDataColumnSidecars`: Return early if no blobs are retrieved from the EL.

* `filterPeerWhichCustodyAtLeastOneDataColumn`: Remove unneeded log field.

* Fix Terence's comment.
2025-05-02 17:34:32 +02:00
Manu NALEPA
34f8e1e92b Data columns by range: Use all possible peers, then filter them. (#15242) 2025-05-02 12:15:02 +02:00
terence
a6a41a8755 Add column sidecar inclusion proof cache (#15217) 2025-04-29 13:46:32 +02:00
terence
f110b94fac Add flag to subscribe to all blob column subnets (#15197)
* Separate data column subnet subscriptions from attestation and sync committee subnets

* Fix test

* Rename to subscribe-data-subnets

* Update to subscribe-all-data-subnets

* `--subscribe-all-data-subnets`: Add `.` at the end of help, since it seems to be the consensus.

* `ConfigureGlobalFlags`: Fix log.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-04-29 11:59:17 +02:00
Manu NALEPA
33023aa282 Merge branch 'develop' into peerDAS 2025-04-29 11:13:27 +02:00
Manu NALEPA
eeb3cdc99e Merge branch 'develop' into peerDAS 2025-04-18 08:37:33 +02:00
Preston Van Loon
1e7147f060 Remove --compilation_mode=opt, use supranational blst headers. 2025-04-17 20:53:54 +02:00
Manu NALEPA
8936beaff3 Merge branch 'develop' into peerDAS 2025-04-17 16:49:22 +02:00
Manu NALEPA
c00283f247 UpgradeToFulu: Add spec tests. (#15189) 2025-04-17 15:17:27 +02:00
Manu NALEPA
a4269cf308 Add tests (#15188) 2025-04-17 13:12:46 +02:00
Manu NALEPA
91f3c8a4d0 c-kzg-4844 lib: Update to v2.1.1. (#15185) 2025-04-17 01:25:36 +02:00
terence
30c7ee9c7b Validate parent block exists before signature (#15184)
* Validate parent block exists before signature

* `ValidProposerSignature`: Add comment

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-04-17 00:40:48 +02:00
Manu NALEPA
456d8b9eb9 Merge branch 'develop' into peerDAS-do-not-merge 2025-04-16 22:58:38 +02:00
Manu NALEPA
4fe3e6d31a Merge branch 'develop' into peerDAS-do-not-merge 2025-04-16 20:30:19 +02:00
Manu NALEPA
01ee1c80b4 merge from develop 2025-04-16 17:27:48 +02:00
Manu NALEPA
c14fe47a81 Data columns by range requests: Simplify and move from initial sync package to sync package. (#15179)
* `data_column.go`: Factorize declarations (no functional changes).

* Verification for data columns: Do not recompute again if already done.

* `SaveDataColumns`: Delete because unused.

* `MissingDataColumns`: Use `DataColumnStorageSummarizer` instead of `DataColumnStorage`

* `TestFetchDataColumnsFromPeers`: Move trusted setup load out of the loop for optimization.

* `TestFetchDataColumnsFromPeers`: Use fulu block instead of deneb block.

* `fetchDataColumnsFromPeers`: Use functions already implemented in the `sync` package instead of duplicating them here.

* `fetchDataColumnsFromPeers` ==> `fetchMissingDataColumnsFromPeers`.

* Data columns initial sync: simplify

* Requests data columns by range: Move from initial sync to sync package.

Since it will eventually be used by the backfill package, and
the backfill package does not depend on the initial sync package.
2025-04-16 11:18:05 +02:00
terence
b9deabbf0a Execution API: Support blobs_bundle_v2 for PeerDAS (#15167)
* Execution api: add and use blobs_bundle_v2

* Execution bundle fulu can unmarshal

* Manus feedback and fix execution request decode
2025-04-16 10:53:55 +02:00
Manu NALEPA
5d66a98e78 Uniformize data columns sidecars validation pipeline (#15154)
* Rework the data column sidecars verification pipeline.

* Nishant's comment.

* `blocks.BlockWithROBlobs` ==> `blocks.BlockWithROSidecars`

* `batchBlobSync` ==> `batchSidecarSync`.

* `handleBlobs` ==> `handleSidecars`.

* Kasey comment about verification
2025-04-15 20:32:50 +02:00
Manu NALEPA
2d46d6ffae Various small optimizations (#15153)
* Reconstruct data columns from gossip source: Call `setSeenDataColumnIndex`.

* `reconstructAndBroadcastDataColumnSidecars`: Minor optimisation.

Avoid ranging over all columns.

* Reconstructed data columns sidecars from EL: Avoid broadcasting already received data columns.
2025-04-09 11:38:28 +02:00
Manu NALEPA
57107e50a7 Cells proofs (#15152)
* Implement distributed block building.
Credits: Francis

* Add fixes.
2025-04-09 09:28:59 +02:00
Manu NALEPA
47271254f6 New Data Column Sidecar Storage Design, Data Columns as a First-Class Citizen & Unit Testing (#15061)
* DB Filesystem: Move all data column related code to `data_columns.go`

Only code move.

* Implement data columns storage

* Kasey comment: Fix typo

* Kasey comment: Fix clutter

* Kasey comment: `IsDataAvailable`: Remove `nodeID`.

* Kasey comment: indice ==> index

* Kasey comment: Move `CreateTestVerifiedRoDataColumnSidecars` in `beacon-chain/verification/fake`.

* `Store` ==> `Save`.

* Kasey comment: AAAA!

* Kasey comment: Fix typo.

* Kasey comment: Add comment.

* Kasey comment: Stop exporting errors for nothing.

* Kasey comment: Read all metadata at once.

* Kasey comment: Compute file size instead of reading it from stats.

* Kasey comment: Lock mutexes before checking if the file exists.

* Kasey comment: `limit` ==> `nonZeroOffset`.

* Kasey comment: `DataColumnStorage.Get`: Set verified into the `verification package`.

* Kasey comment: `prune` - Flatten the `==` case.

* Kasey comment: Implement and use `storageIndices`.

* `DataColumnsAlignWithBlock`: Move into its own file.

* `DataColumnSidecar`: Rename variables to stick with
https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/das-core.md#datacolumnsidecar

* Kasey comment: Add `file.Sync`.

* `DataColumnStorage.Get`: Remove useless cast.

* (Internal) Kasey comment: Automatically set the count of saved data columns.
2025-04-08 23:20:38 +02:00
Francis Li
f304028874 Add new vars defined in consensus-spec (#15101) 2025-03-31 20:01:47 +02:00
Manu NALEPA
8abc5e159a DataColumnSidecarsForReconstruct: Add guards (#15051) 2025-03-14 10:29:15 +01:00
Manu NALEPA
b1ac53c4dd Set defaultEngineTimeout = 2 * time.Second (#15043) 2025-03-13 13:56:42 +01:00
Francis Li
27ab68c856 feat: implement reconstruct and broadcast data columns (#15023)
* Implement reconstructAndBroadcastDataColumns

* Fix merge error

* Fix tests

* Minor changes.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-13 11:19:34 +01:00
Niran Babalola
ddf5a3953b Fetch data columns from multiple peers instead of just supernodes (#14977)
* Extract the block fetcher's peer selection logic for data columns so it can be used in both by range and by root requests

* Refactor data column sidecar request to send requests to multiple peers instead of supernodes

* Remove comment

* Remove unused method

* Add tests for AdmissiblePeersForDataColumns

* Extract data column fetching into standalone functions

* Remove AdmissibleCustodyGroupsPeers and replace the final call with requests to multiple peers

* Apply suggestions from code review

Co-authored-by: Manu NALEPA <nalepae@gmail.com>

* Wrap errors

* Use cached peerdas.Info and properly convert custody groups to custody columns

* Rename filterPeersForRangeReq

* Preserve debugging descriptions when filtering out peers

* Remove unused functions.

* Initialize nested maps

* Fix comment

* First pass at retry logic for data column requests

* Select fresh peers for each retry

* Return an error if there are requested columns remaining

* Adjust errors

* Improve slightly the godoc.

* Improve wrapped error messages.

* `AdmissiblePeersForDataColumns`: Use value or `range`.

* Remove `convertCustodyGroupsToDataColumnsByPeer` since used only once.

* Minor fixes.

* Retry until we run out of peers

* Delete from the map of peers instead of filtering

* Remove unneeded break

* WIP: TestRequestDataColumnSidecars

* `RequestDataColumnSidecars`: Move the happy path in the for loop.

* Convert the peer ID to a node ID instead of using peer.EnodeID

* Extract AdmissiblePeersForDataColumns from a method into a function and use it (instead of a mock) in TestRequestDataColumnSidecars

* Track data column requests in tests to compare vs expectations

* Run gazelle

* Clean up test config changes so other tests don't break

* Clean up comments

* Minor changes.

* Add tests for peers that don't respond with all requested columns

* Respect MaxRequestDataColumnSidecars

---------

Co-authored-by: Manu NALEPA <nalepae@gmail.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-12 11:46:20 +01:00
Manu NALEPA
92d2fc101d Implement validator custody (#14948)
* Node info: Rename cache and mutex.

* Add `VALIDATOR_CUSTODY_REQUIREMENT` and `BALANCE_PER_ADDITIONAL_CUSTODY_GROUP`.

* Implement `ValidatorsCustodyRequirement` (see the sketch after this commit).

* Sync service: Add tracked validators cache.

* `dataColumnSidecarByRootRPCHandler`: Remove custody columns in logs.

* `dataColumnSidecarByRangeRPCHandler`: Remove custody columns in logs.

* `blobsFromStoredDataColumns`: Simplify.

No longer distinguish between "can theoretically reconstruct" and "can actually reconstruct".

* Implement validator custody.

* Fix Nishant's comment.

* Fix Nishant's commit.
2025-03-11 11:11:23 +01:00
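The `VALIDATOR_CUSTODY_REQUIREMENT` / `BALANCE_PER_ADDITIONAL_CUSTODY_GROUP` bullets above suggest a balance-driven custody requirement. A hedged sketch of how such a requirement could be derived is below; the constant values and the clamp formula are assumptions based on the consensus specs, not code taken from this branch:

```go
// Hedged sketch of a balance-driven validator custody requirement built from
// the two constants named in the commit above. Values and formula are
// assumptions, not this branch's code.
package main

import "fmt"

const (
	validatorCustodyRequirement      = 8              // assumed spec value
	balancePerAdditionalCustodyGroup = 32_000_000_000 // 32 ETH in Gwei, assumed
	numberOfCustodyGroups            = 128
)

// validatorsCustodyRequirement clamps (total balance / balance-per-group)
// between the minimum requirement and the total number of custody groups.
func validatorsCustodyRequirement(totalBalanceGwei uint64) uint64 {
	count := totalBalanceGwei / balancePerAdditionalCustodyGroup
	if count < validatorCustodyRequirement {
		return validatorCustodyRequirement
	}
	if count > numberOfCustodyGroups {
		return numberOfCustodyGroups
	}
	return count
}

func main() {
	fmt.Println(validatorsCustodyRequirement(64_000_000_000))    // 2 validators: floor of 8
	fmt.Println(validatorsCustodyRequirement(6_400_000_000_000)) // 200 validators: capped at 128
}
```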
Francis Li
8996000d2b feature: Implement data column support for different storage layouts (#15014)
* Implement data column support for different storage layouts

* Fix errors

* Fix linting

* `slotFromFile`: First try to decode as a data column.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-07 20:25:31 +01:00
Francis Li
a2fcba2349 feat: implement reconstruct data column sidecars (#15005) 2025-03-05 17:23:58 +01:00
Francis Li
abe8638991 feat: update ckzg lib to support ComputeCells (#15004)
* Update ckzg version to include ComputeCells

* Minor fix

* Run `bazel run //:gazelle -- update-repos -from_file=go.mod -to_macro=deps.bzl%prysm_deps -prune=true`

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-04 17:48:18 +01:00
Francis Li
0b5064b474 feat: cell proof computation related proto and generated go files (#15003)
* Add new message type to proto and generate .go files

* `proto/engine/v1`: Remove `execution_engine_eip7594.go` since this file does not exist.

Rerun `hack/update-go-pbs.sh` and `hack/update-go-ssz.sh`.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-04 17:48:01 +01:00
Manu NALEPA
da9d4cf5b9 Merge branch 'develop' into peerDAS 2025-02-21 16:03:20 +01:00
Manu NALEPA
a62cca15dd Merge branch 'develop' into peerDAS 2025-02-20 15:48:07 +01:00
Manu NALEPA
ac04246a2a Avoid computing peerDAS info again and again. (#14893)
* `areDataColumnsAvailable`: `signed` ==> `signedBlock`.

* peerdas: Split `helpers.go` in multiple files respecting the specification.

* peerDAS: Implement `Info`.

* peerDAS: Use cached `Info` when possible.
2025-02-14 18:06:04 +01:00
Manu NALEPA
0923145bd7 Merge branch 'develop' into peerDAS 2025-02-14 16:51:05 +01:00
Manu NALEPA
a216cb4105 Merge branch 'develop' into peerDAS 2025-02-13 18:22:21 +01:00
Manu NALEPA
01705d1f3d Peer das sync empty requests (#14854)
* `TestBuildBwbSlices`: Add test case failing with the current implementation.

* Fix `buildBwbSlices` to comply with the new test case.

* `block_fetchers.go`: Improve logging and godoc.

* `DataColumnsRPCMinValidSlot`: Update to Fulu.
2025-02-03 15:23:04 +01:00
Manu NALEPA
14f93b4e9d Sync: Integrate batch directly in buildBwbSlices. (#14843)
Previously, the bwb slices were first built by `buildBwbSlices`, and only then were too-big requests batched in `buildDataColumnSidecarsByRangeRequests`.

In some edge cases, this led to requesting data columns from peers for blocks with no blobs.

Splitting into batches directly in `buildBwbSlices` fixes the issue.
2025-01-30 12:11:06 +01:00
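As a rough illustration of the batching change described in the commit above, the sketch below splits blocks into request batches while skipping commitment-free blocks; the `block` type and `maxPerBatch` limit are stand-ins, not Prysm's actual types:

```go
// Rough illustration (hypothetical types) of batching directly while walking
// the blocks: blocks without blob commitments are skipped, so no by-range
// request is ever built for them.
package main

import "fmt"

type block struct {
	Slot        uint64
	Commitments int // number of blob KZG commitments carried by the block
}

// batches groups blocks that actually carry blobs into request batches of at
// most maxPerBatch entries.
func batches(blocks []block, maxPerBatch int) [][]block {
	var out [][]block
	var cur []block
	for _, b := range blocks {
		if b.Commitments == 0 {
			continue // nothing to request for this block
		}
		cur = append(cur, b)
		if len(cur) == maxPerBatch {
			out = append(out, cur)
			cur = nil
		}
	}
	if len(cur) > 0 {
		out = append(out, cur)
	}
	return out
}

func main() {
	bs := []block{{Slot: 1, Commitments: 3}, {Slot: 2}, {Slot: 3, Commitments: 1}, {Slot: 4, Commitments: 2}, {Slot: 5}}
	fmt.Println(batches(bs, 2)) // [[{1 3} {3 1}] [{4 2}]]
}
```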
Manu NALEPA
ad11036c36 reconstructAndBroadcastBlobs: Temporarily deactivate starting at Fulu. 2025-01-27 15:15:34 +01:00
Manu NALEPA
632a06076b Merge branch 'develop' into peerDAS 2025-01-22 21:30:32 +01:00
Manu NALEPA
242c2b0268 Merge branch 'develop' into peerDAS 2025-01-22 20:08:10 +01:00
Ekaterina Riazantseva
19662da905 Add PeerDAS kzg and inclusion proof verification metrics (#14814) 2025-01-21 16:20:10 +01:00
Ekaterina Riazantseva
7faee5af35 Add PeerDAS gossip verification metrics (#14796) 2025-01-21 16:16:12 +01:00
Ekaterina Riazantseva
805ee1bf31 Add 'beacon' prefix to 'data_column_sidecar_computation' metric (#14790) 2025-01-21 16:14:26 +01:00
Manu NALEPA
bea46fdfa1 Merge branch 'develop' into peerDAS 2025-01-20 13:37:29 +01:00
Manu NALEPA
f6b1fb1c88 Merge branch 'develop' into peerDAS 2025-01-16 10:23:21 +01:00
Manu NALEPA
6fb349ea76 unmarshalState: Use hasFuluKey. 2025-01-15 20:48:25 +01:00
Manu NALEPA
e5a425f5c7 Merge branch 'develop' into peerDAS 2025-01-15 17:18:34 +01:00
Manu NALEPA
f157d37e4c peerDAS: Decouple network subnets from das-core. (#14784)
https://github.com/ethereum/consensus-specs/pull/3832/
2025-01-14 10:45:05 +01:00
Manu NALEPA
5f08559bef Merge branch 'develop' into peerDAS 2025-01-08 10:18:18 +01:00
Manu NALEPA
a082d2aecd Merge branch 'fulu-boilerplate' into peerDAS 2025-01-06 13:45:33 +01:00
Manu NALEPA
bcfaff8504 Upgraded state to <fork> log: Move from debug to info.
Rationale:
This log is the only one notifying the user that a new fork happened.
A new fork is always a little bit stressful for a node operator.
Having at least one log indicating that the client switched forks is useful.
2025-01-05 16:22:43 +01:00
Manu NALEPA
d8e09c346f Implement the Fulu fork boilerplate. 2025-01-05 16:22:38 +01:00
Manu NALEPA
876519731b Prepare for future fork boilerplate. 2025-01-05 16:14:02 +01:00
Manu NALEPA
de05b83aca Merge branch 'develop' into peerDAS 2024-12-30 15:11:02 +01:00
Manu NALEPA
56c73e7193 Merge branch 'develop' into peerDAS 2024-12-27 22:11:36 +01:00
Manu NALEPA
859ac008a8 Activate peerDAS at electra. (#14734) 2024-12-27 09:48:57 +01:00
Manu NALEPA
f882bd27c8 Merge branch 'develop' into peerDAS 2024-12-18 16:15:32 +01:00
Manu NALEPA
361e5759c1 Merge branch 'develop' into peerDAS 2024-12-17 22:19:20 +01:00
Manu NALEPA
34ef0da896 Merge branch 'develop' into peerDAS 2024-12-10 23:11:45 +01:00
Manu NALEPA
726e8b962f Revert "Revert "Add error count prom metric (#14670)""
This reverts commit 5f17317c1c.
2024-12-10 21:49:40 +01:00
Manu NALEPA
453ea01deb disconnectFromPeer: Remove unused function. 2024-11-28 17:37:30 +01:00
Manu NALEPA
6537f8011e Merge branch 'peerDAS' into peerDAS-do-not-merge 2024-11-28 17:27:44 +01:00
Manu NALEPA
5f17317c1c Revert "Add error count prom metric (#14670)"
This reverts commit b28b1ed6ce.
2024-11-28 16:37:19 +01:00
Manu NALEPA
3432ffa4a3 PeerDAS: Batch columns verifications (#14559)
* `ColumnAlignsWithBlock`: Split lines.

* Data columns verifications: Batch

* Remove completely `DataColumnBatchVerifier`.

Only `DataColumnsVerifier` (with `s`) on columns remains.
It is the responsibility of the function which receives the data column
(either by gossip, by-range request, or by-root request) to verify the
data column against the corresponding checks.

* Fix Nishant's comment.
2024-11-27 10:37:03 +01:00
Manu NALEPA
9dac67635b streamDataColumnBatch: Sort columns by index. (#14542)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#datacolumnsidecarsbyrange-v1

The following data column sidecars, where they exist, MUST be sent in (slot, column_index) order.
2024-11-27 10:37:03 +01:00
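A minimal sketch of the `(slot, column_index)` ordering quoted above, using a stand-in sidecar struct rather than Prysm's actual type:

```go
// Sketch of the (slot, column_index) ordering required by the quoted spec
// text; the sidecar struct here is illustrative only.
package main

import (
	"fmt"
	"sort"
)

type sidecar struct {
	Slot        uint64
	ColumnIndex uint64
}

// sortSidecars orders sidecars by slot first, then by column index.
func sortSidecars(scs []sidecar) {
	sort.Slice(scs, func(i, j int) bool {
		if scs[i].Slot != scs[j].Slot {
			return scs[i].Slot < scs[j].Slot
		}
		return scs[i].ColumnIndex < scs[j].ColumnIndex
	})
}

func main() {
	scs := []sidecar{{Slot: 2, ColumnIndex: 7}, {Slot: 1, ColumnIndex: 9}, {Slot: 1, ColumnIndex: 3}}
	sortSidecars(scs)
	fmt.Println(scs) // [{1 3} {1 9} {2 7}]
}
```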
Manu NALEPA
9be69fbd07 PeerDAS: Fix major bug in dataColumnSidecarsByRangeRPCHandler and allow syncing from full nodes. (#14532)
* `validateDataColumnsByRange`: `current` ==> `currentSlot`.

* `validateRequest`: Extract `remotePeer` variable.

* `dataColumnSidecarsByRangeRPCHandler`: Small non functional refactor.

* `streamDataColumnBatch`: Fix major bug.

Before this commit, the node was unable to respond with a data column index higher than the count of stored data columns.
For example, if there are 8 data columns stored for a given block, the node was
able to respond for data column indices 1, 3, and 5, but not for 10, 16, or 127.

The issue was visible only for full nodes, since super nodes always store 128 data columns (see the sketch after this commit).

* Initial sync: Fetch data columns from all peers.
(Not only from supernodes.)

* Nishant's comment: Fix `lastSlot` and `endSlot` duplication.

* Address Nishant's comment.
2024-11-27 10:37:03 +01:00
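The `streamDataColumnBatch` bug described above is essentially an index-versus-count confusion. A hypothetical before/after sketch, with simplified types that are not Prysm's:

```go
// Hypothetical before/after sketch: a requested column index (0..127) must be
// looked up by value, not compared against how many columns happen to be stored.
package main

import "fmt"

// respondBuggy drops any requested index >= the number of stored columns.
func respondBuggy(stored, requested []uint64) []uint64 {
	var out []uint64
	for _, idx := range requested {
		if idx < uint64(len(stored)) { // wrong: index compared against a count
			out = append(out, idx)
		}
	}
	return out
}

// respondFixed answers with every requested index that is actually stored.
func respondFixed(stored, requested []uint64) []uint64 {
	storedSet := make(map[uint64]bool, len(stored))
	for _, idx := range stored {
		storedSet[idx] = true
	}
	var out []uint64
	for _, idx := range requested {
		if storedSet[idx] { // right: membership test on the index itself
			out = append(out, idx)
		}
	}
	return out
}

func main() {
	stored := []uint64{1, 3, 5, 10, 16, 64, 90, 127} // 8 columns held by a full node
	req := []uint64{5, 10, 127}
	fmt.Println(respondBuggy(stored, req)) // [5]: indices >= 8 are lost
	fmt.Println(respondFixed(stored, req)) // [5 10 127]
}
```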
Manu NALEPA
e21261e893 Data columns initial sync: Rework. (#14522) 2024-11-27 10:37:03 +01:00
Nishant Das
da53a8fc48 Fix Commitments Check (#14493)
* Fix Commitments Check

* `highestFinalizedEpoch`: Refactor (no functional change).

* `retrieveMissingDataColumnsFromPeers`: Fix logs.

* `VerifyDataColumnSidecarKZGProofs`: Optimise with capacity.

* Save data columns when initial syncing.

* `dataColumnSidecarsByRangeRPCHandler`: Add logs when a request enters.

* Improve logging.

* Improve logging.

* `peersWithDataColumns`: Do not filter on peer head slot anymore.

* Fix Nishant's comment.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:37:03 +01:00
Manu NALEPA
a14634e656 PeerDAS: Improve initial sync logs (#14496)
* `retrieveMissingDataColumnsFromPeers`: Search only for needed peers.

* Improve logging.
2024-11-27 10:37:03 +01:00
Manu NALEPA
43761a8066 PeerDAS: Fix initial sync with super nodes (#14495)
* Improve logging.

* `retrieveMissingDataColumnsFromPeers`: Limit to `512` items per request.

* `retrieveMissingDataColumnsFromPeers`: Allow `nil` peers.

Before this commit:
If, when this function is called, we are not yet connected to enough peers, then `peers` may not be satisfactory,
and, if new peers connect later, we will never see them.

After this commit:
If `peers` is `nil`, then we regularly check for all connected peers.
If `peers` is not `nil`, then we use them.
2024-11-27 10:37:03 +01:00
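A small sketch of the `nil`-peers convention described in the last bullet above; the signature and types are illustrative only:

```go
// Small sketch (illustrative types) of the nil-peers convention: nil means
// "re-list the connected peers on every attempt", non-nil means "use exactly
// the peers the caller provided".
package main

import "fmt"

type peerID string

func peersToQuery(fixed []peerID, connected func() []peerID) []peerID {
	if fixed == nil {
		return connected() // refreshed on each call, so newly connected peers are seen
	}
	return fixed
}

func main() {
	connected := func() []peerID { return []peerID{"peerA", "peerB"} }
	fmt.Println(peersToQuery(nil, connected))               // [peerA peerB]
	fmt.Println(peersToQuery([]peerID{"peerC"}, connected)) // [peerC]
}
```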
Manu NALEPA
01dbc337c0 PeerDAS: Fix initial sync (#14494)
* `BestFinalized`: Refactor (no functional change).

* `BestNonFinalized`: Refactor (no functional change).

* `beaconBlocksByRangeRPCHandler`: Remove useless log.

The same is already printed at the start of the function.

* `calculateHeadAndTargetEpochs`: Avoid `else`.

* `ConvertPeerIDToNodeID`: Improve error.

* Stop printing noisy "peer should be banned" logs.

* Initial sync: Request data columns from peers which:
- custody a superset of columns we need, and
- have a head slot >= our target slot.

* `requestDataColumnsFromPeers`: Shuffle peers before requesting.

Before this commit, we always requested peers in the same order until one responded.
Without shuffling, we always requested data columns from the same peer.

* `requestDataColumnsFromPeers`: If error from a peer, just log the error and skip the peer.

* Improve logging.

* Fix tests.
2024-11-27 10:37:03 +01:00
Nishant Das
92f9b55fcb Put Subscriber in Goroutine (#14486) 2024-11-27 10:36:18 +01:00
Manu NALEPA
f65f12f58b Stop disconnecting peers for bad response / excessive colocation. (#14483) 2024-11-27 10:36:17 +01:00
Manu NALEPA
f2b61a3dcf PeerDAS: Misc improvements (#14482)
* `retrieveMissingDataColumnsFromPeers`: Improve logging.

* `dataColumnSidecarByRootRPCHandler`: Stop decreasing a peer's score when it asks for a column we do not custody.

* `dataColumnSidecarByRootRPCHandler`: If a data column is unavailable, stop waiting for it.

This behaviour was useful for peer sampling.
Now, just return the data column if we store it.
If we don't, skip.

* Dirty code comment.

* `retrieveMissingDataColumnsFromPeers`: Improve logs.

* `SendDataColumnsByRangeRequest`: Improve logs.

* `dataColumnSidecarsByRangeRPCHandler`: Improve logs.
2024-11-27 10:34:38 +01:00
Manu NALEPA
77a6d29a2e PeerDAS: Re-enable full node joining the main fork (#14475)
* `columnErrBuilder`: Use `Wrap` instead of `Join`.

Reason: `Join` inserts a newline, which makes the log quite unreadable.

* `validateDataColumn`: Improve log.

* `areDataColumnsAvailable`: Improve log.

* `SendDataColumnSidecarByRoot` ==> `SendDataColumnSidecarsByRootRequest`.

* `handleDA`: Refactor error message.

* `sendRecentBeaconBlocksRequest` ==> `sendBeaconBlocksRequest`.

Reason: There is no notion at all of "recent" in the function.

If the caller decides to call this function only with "recent" blocks, that's fine.
However, the function itself will know nothing about the "recentness" of these blocks.

* `sendBatchRootRequest`: Improve comments.

* `sendBeaconBlocksRequest`: Avoid `else` usage and use map of bool instead of `struct{}`.

* `wrapAndReportValidation`: Remove `agent` from log.

Reason: It prevents the log from fitting on one line, and it is not really useful for debugging.

* `validateAggregateAndProof`: Add comments.

* `GetValidCustodyPeers`: Fix typo.

* `GetValidCustodyPeers` ==> `DataColumnsAdmissibleCustodyPeers`.

* `CustodyHandler` ==> `DataColumnsHandler`.

* `CustodyCountFromRemotePeer` ==> `DataColumnsCustodyCountFromRemotePeer`.

* Implement `DataColumnsAdmissibleSubnetSamplingPeers`.

* Use `SubnetSamplingSize` instead of `CustodySubnetCount` where needed.

* Revert "`wrapAndReportValidation`: Remove `agent` from log."

This reverts commit 55db351102.
2024-11-27 10:34:38 +01:00
Manu NALEPA
31d16da3a0 PeerDAS: Multiple improvements (#14467)
* `scheduleReconstructedDataColumnsBroadcast`: Really minor refactor.

* `receivedDataColumnsFromRootLock` -> `dataColumnsFromRootLock`

* `reconstructDataColumns`: Stop looking into the DB to know if we have some columns.

Before this commit:
Each time we receive a column, we look into the filesystem for all columns we store.
==> For 128 columns, that is 1 + 2 + 3 + ... + 128 = 128(128+1)/2 = 8256 file lookups.

Also, as soon as a column is saved into the filesystem, we assume that an immediate follow-up read will see it (strict consistency).
That turns out not to always be true.

==> Sometimes, we reconstruct and reseed columns more than once because of this lack of strict filesystem consistency.

After this commit:
We use a (strictly consistent) cache to determine whether we have received a column or not.
==> No more consistency issue, and less stress for the filesystem (see the sketch after this commit).

* `dataColumnSidecarByRootRPCHandler`: Improve logging.

Before this commit, logged values assumed that all requested columns correspond to
the same block root, which is not always the case.

After this commit, we know which columns are requested for which root.

* Add a log when broadcasting a data column.

This is useful to debug "lost data columns" in devnet.

* Address Nishant's comment
2024-11-27 10:34:38 +01:00
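The `reconstructDataColumns` bullet above replaces repeated filesystem scans, which add up to 1 + 2 + ... + 128 = 8256 lookups over a block's 128 column arrivals, with a strictly consistent in-memory record. A minimal sketch of such a cache, with illustrative names rather than Prysm's:

```go
// Minimal sketch (illustrative names) of a strictly consistent in-memory
// record of received columns, replacing repeated filesystem scans.
package main

import (
	"fmt"
	"sync"
)

type seenColumns struct {
	mu   sync.Mutex
	seen map[[32]byte]map[uint64]bool // block root -> column index -> received
}

func newSeenColumns() *seenColumns {
	return &seenColumns{seen: make(map[[32]byte]map[uint64]bool)}
}

// markSeen records a received column and returns how many distinct columns
// have been seen for this root, without touching the filesystem.
func (s *seenColumns) markSeen(root [32]byte, index uint64) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.seen[root] == nil {
		s.seen[root] = make(map[uint64]bool)
	}
	s.seen[root][index] = true
	return len(s.seen[root])
}

func main() {
	c := newSeenColumns()
	var root [32]byte
	fmt.Println(c.markSeen(root, 3))  // 1
	fmt.Println(c.markSeen(root, 64)) // 2
	fmt.Println(c.markSeen(root, 64)) // still 2: duplicates are not recounted
}
```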
Justin Traglia
19221b77bd Update c-kzg-4844 to v2.0.1 (#14421) 2024-11-27 10:34:38 +01:00
Manu NALEPA
83df293647 Peerdas: Several updates (#14459)
* `validateDataColumn`: Refactor logging.

* `dataColumnSidecarByRootRPCHandler`: Improve logging.

* `isDataAvailable`: Improve logging.

* Add hidden debug flag: `--data-columns-reject-slot-multiple`.

* Add more logs about peer disconnection.

* `validPeersExist` --> `enoughPeersAreConnected`

* `beaconBlocksByRangeRPCHandler`: Add remote Peer ID in logs.

* Stop calling `writeErrorResponseToStream` twice in case of rate limiting.
2024-11-27 10:34:37 +01:00
Manu NALEPA
c20c09ce36 Peerdas: Full subnet sampling and sendBatchRootRequest fix. (#14452)
* `sendBatchRootRequest`: Refactor and add comments.

* `sendBatchRootRequest`: Only send requests to peers that custody a superset of our columns.

Before this commit, we sent "data columns by root requests" for data columns peers do not custody.

* Data columns: Use subnet sampling only.

(Instead of peer sampling.)


* `areDataColumnsAvailable`: Improve logs.

* `GetBeaconBlock`: Improve logs.

Rationale: A `begin` log should always be followed by a `success` log or a `failure` log.
2024-11-27 10:30:29 +01:00
Manu NALEPA
2191faaa3f Fix CPU usage in small devnets (#14446)
* `CustodyCountFromRemotePeer`: Set happy path in the outer scope.

* `FindPeersWithSubnet`: Improve logging.

* `listenForNewNodes`: Avoid infinite loop in a small subnet.

* Address Nishant's comment.

* Fix Nishant's comment.
2024-11-27 10:30:29 +01:00
Nishant Das
2de1e6f3e4 Revert "Change Custody Count to Uint8 (#14386)" (#14415)
This reverts commit bd7ec3fa97.
2024-11-27 10:30:29 +01:00
Manu NALEPA
db44df3964 Fix Initial Sync with 128 data columns subnets (#14403)
* `pingPeers`: Add log with new ENR when modified.

* `p2p Start`: Use idiomatic go error syntax.

* P2P `start`: Fix error message.

* Do not use bootnodes at all if the `--chain-config-file` flag is used and no `--bootstrap-node` flag is used.

Before this commit, if the `--chain-config-file` flag was used and no `--bootstrap-node` flag was used, then bootnodes were (incorrectly) defaulted to the `mainnet` ones.

* `validPeersExist`: Centralize logs.

* `AddConnectionHandler`: Improve logging.

"Peer connected" does not really reflect the fact that a new peer is actually connected. --> "New peer connection" is more clear.

Also, instead of writing `0`, `1`or `2` for direction, now it's writted "Unknown", "Inbound", "Outbound".

* Logging: Add 2 decimals to timestamps in text and JSON logs.

* Improve "no valid peers" logging.

* Improve "Some columns have no peers responsible for custody" logging.

* `pubsubSubscriptionRequestLimit`: Increase to be consistent with data columns.

* `sendPingRequest`: Improve logging.

* `FindPeersWithSubnet`: Regularly recheck in our current set of peers if we have enough peers for this topic.

Before this commit, new peers HAD to be found, even if the current peers were acceptable.
For very small networks, this used to lead to an infinite search.

* `subscribeDynamicWithSyncSubnets`: Use exactly the same subscription function initially and every slot.

* Make deepsource happier.

* Nishant's comment: Change peer disconnected log.

* Nishant's comment: Change `Too many incoming subscription` log from error to debug.

* `FindPeersWithSubnet`: Address Nishant's comment.

* `batchSize`: Address Nishant's comment.

* `pingPeers` ==> `pingPeersAndLogEnr`.

* Update beacon-chain/sync/subscriber.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:30:29 +01:00
Nishant Das
f92eb44c89 Add Data Column Computation Metrics (#14400)
* Add Data Column Metrics

* Shift it All To Peerdas Package
2024-11-27 10:24:03 +01:00
Nishant Das
a26980b64d Set Precompute at 8 (#14399) 2024-11-27 10:24:03 +01:00
Manu NALEPA
f58cf7e626 PeerDAS: Improve logging and reduce the number of needed goroutines for reconstruction (#14397)
* `broadcastAndReceiveDataColumns`: Use real `sidecar.ColumnIndex` instead of position in the slice.

And improve logging as well.

* `isDataColumnsAvailable`: Improve logging.

* `validateDataColumn`: Print `Accepted data column sidecar gossip` really at the end.

* Subscriber: Improve logging.

* `sendAndSaveDataColumnSidecars`: Use a commonly used function for logging.

* `dataColumnSidecarByRootRPCHandler`: Logging - Print `all` instead of all the columns for a super node.

* Verification: Improve logging.

* `DataColumnsWithholdCount`: Set as `uint64` instead of `int`.

* `DataColumnFields`: Improve logging.

* Logging: Remove the now-useless private `columnFields` function.

* Avoid useless goroutines blocking for reconstruction.

* Update beacon-chain/sync/subscriber.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Address Nishant's comment.

* Improve logging.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:24:03 +01:00
Nishant Das
68da7dabe2 Fix Bugs in PeerDAS Testing (#14396)
* Fix Various Bugs in PeerDAS

* Remove Log

* Remove useless copy var.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:24:03 +01:00
Nishant Das
d1e43a2c02 Change Custody Count to Uint8 (#14386)
* Add Changes for Uint8 Csc

* Fix Build

* Fix Build for Sync

* Fix Discovery Test
2024-11-27 10:24:03 +01:00
Nishant Das
3652bec2f8 Use Data Column Validation Across Prysm (#14377)
* Use Data Column Validation Everywhere

* Fix Build

* Fix Lint

* Fix Clock Synchronizer

* Fix Panic
2024-11-27 10:24:03 +01:00
Nishant Das
81b7a1725f Update Config To Latest Value (#14352)
* Update values

* Update Spec To v1.5.0-alpha.5

* Fix Discovery Tests

* Hardcode Subnet Count For Tests

* Fix All Initial Sync Tests

* Gazelle

* Less Chaotic Service Initialization

* Gazelle
2024-11-27 10:24:03 +01:00
Nishant Das
0c917079c4 Fix CI in PeerDAS (#14347)
* Update go.yml

* Disable mnd

* Update .golangci.yml

* Update go.yml

* Update go.yml

* Update .golangci.yml

* Update go.yml

* Fix Lint Issues

* Remove comment

* Update .golangci.yml
2024-11-27 10:24:03 +01:00
Manu NALEPA
a732fe7021 Implement /eth/v1/beacon/blob_sidecars/{block_id} for peerDAS. (#14312)
* `parseIndices`: `O(n**2)` ==> `O(n)`.

* PeerDAS: Implement `/eth/v1/beacon/blob_sidecars/{block_id}`.

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Rename some functions.

* `Blobs`: Fix empty slice.

* `recoverCellsAndProofs` --> Move function in `beacon-chain/core/peerdas`.

* peerDAS helpers: Add missing tests.

* Implement `CustodyColumnCount`.

* `RecoverCellsAndProofs`: Remove useless argument `columnsCount`.

* Tests: Add cleanups.

* `blobsFromStoredDataColumns`: Reconstruct if needed.

* Make deepsource happy.

* Beacon API: Use provided indices.

* Make deepsource happier.

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-27 10:24:03 +01:00
Nishant Das
d75a7aae6a Add Data Column Verification (#14287)
* Persist All Changes

* Fix All Tests

* Fix Build

* Fix Build

* Fix Build

* Fix Test Again

* Add missing verification

* Add Test Cases for Data Column Validation

* Fix comments for methods

* Fix comments for methods

* Fix Test

* Manu's Review
2024-11-27 10:24:03 +01:00
Manu NALEPA
e788a46e82 PeerDAS: Add MetadataV3 with custody_subnet_count (#14274)
* `sendPingRequest`: Add some comments.

* `sendPingRequest`: Replace `stream.Conn().RemotePeer()` by `peerID`.

* `pingHandler`: Add comments.

* `sendMetaDataRequest`: Add comments and implement a unique test.

* Gather `SchemaVersion`s in the same `const` definition.

* Define `SchemaVersionV3`.

* `MetaDataV1`: Fix comment.

* Proto: Define `MetaDataV2`.

* `MetaDataV2`: Generate SSZ.

* `newColumnSubnetIDs`: Use smaller lines.

* `metaDataHandler` and `sendMetaDataRequest`: Manage `MetaDataV2`.

* `RefreshPersistentSubnets`: Refactor tests (no functional change).

* `RefreshPersistentSubnets`: Refactor and add comments (no functional change).

* `RefreshPersistentSubnets`: Compare cache with both ENR & metadata.

* `RefreshPersistentSubnets`: Manage peerDAS.

* `registerRPCHandlersPeerDAS`: Register `RPCMetaDataTopicV3`.

* `CustodyCountFromRemotePeer`: Retrieve the count from metadata.

Then default to ENR, then default to the default value.

* Update beacon-chain/sync/rpc_metadata.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Fix duplicate case.

* Remove version testing.

* `debug.proto`: Stop breaking ordering.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:24:03 +01:00
Manu NALEPA
199543125a Fix data columns sampling (#14263)
* Fix the obvious...

* Data columns sampling: Modify logging.

* `waitForChainStart`: Make it thread-safe - only wait once.

* Sampling: Wait for chain start before running the sampling.

Reason: `newDataColumnSampler1D` needs `s.ctxMap`.
`s.ctxMap` is only set when chain is started.

Previously, `waitForChainStart` was only called in `s.registerHandlers`, itself called in a goroutine.

==> We had a race condition here: sometimes `newDataColumnSampler1D` was called after `s.ctxMap` was set, sometimes not.

* Address Nishant's comments.

* Sampling: Improve logging.

* `waitForChainStart`: Remove `chainIsStarted` check.
2024-11-27 10:19:07 +01:00
Manu NALEPA
ca63efa770 PeerDAS: Fix initial sync (#14208)
* `SendDataColumnsByRangeRequest`: Add some new fields in logs.

* `BlobStorageSummary`: Implement `HasDataColumnIndex` and `AllDataColumnsAvailable`.

* Implement `fetchDataColumnsFromPeers`.

* `fetchBlobsFromPeer`: Return only one error.
2024-11-27 10:19:07 +01:00
Manu NALEPA
345e6edd9c Make deepsource happy (#14237)
* DeepSource: Pass heavy objects by pointers.

* `removeBlockFromQueue`: Remove redundant error checking.

* `fetchBlobsFromPeer`: Use same variable for `append`.

* Remove unused arguments.

* Combine types.

* `Persist`: Add documentation.

* Remove unused receiver

* Remove duplicated import.

* Stop using both pointer and value receiver at the same time.

* `verifyAndPopulateColumns`: Remove unused parameter

* Stop using an empty slice literal to declare a variable.
2024-11-27 10:19:07 +01:00
Manu NALEPA
6403064126 PeerDAS: Run reconstruction in parallel. (#14236)
* PeerDAS: Run reconstruction in parallel.

* `isDataAvailableDataColumns` --> `isDataColumnsAvailable`

* `isDataColumnsAvailable`: Return `nil` as soon as half of the columns are received (see the sketch after this commit).

* Make deepsource happy.
2024-11-27 10:19:07 +01:00
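The early-exit rule in the `isDataColumnsAvailable` bullet above relies on the fact that, with 128 extended columns, any half suffices to reconstruct the rest. A hedged sketch of that check; the constant is an assumed PeerDAS parameter, not taken from this diff:

```go
// Hedged sketch of the early-exit availability rule: with 128 extended
// columns, holding any 64 is enough to reconstruct the remainder, so the
// check can succeed as soon as half are present.
package main

import "fmt"

const numberOfColumns = 128 // assumed PeerDAS parameter

func dataColumnsAvailable(stored map[uint64]bool) bool {
	return len(stored) >= numberOfColumns/2
}

func main() {
	stored := make(map[uint64]bool)
	for i := uint64(0); i < 64; i++ {
		stored[i] = true
	}
	fmt.Println(dataColumnsAvailable(stored)) // true: reconstruction can fill in the rest
}
```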
Justin Traglia
0517d76631 Update ckzg4844 to latest version of das branch (#14223)
* Update ckzg4844 to latest version

* Run go mod tidy

* Remove unnecessary tests & run goimports

* Remove fieldparams from blockchain/kzg

* Add back blank line

* Avoid large copies

* Run gazelle

* Use trusted setup from the specs & fix issue with struct

* Run goimports

* Fix mistake in makeCellsAndProofs

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:19:07 +01:00
Nishant Das
000d480f77 Add Current Changes (#14231) 2024-11-27 10:19:07 +01:00
Manu NALEPA
b40a8ed37e Implement and use filterPeerForDataColumnsSubnet. (#14230) 2024-11-27 10:19:07 +01:00
Francis Li
d21c2bd63e [PeerDAS] Parallelize data column sampling (#14105)
* PeerDAS: parallelizing sample queries

* PeerDAS: select sample from non custodied columns

* Finish rebase

* Add more test cases
2024-11-27 10:19:07 +01:00
kevaundray
7a256e93f7 chore!: Use RecoverCellsAndKZGProofs instead of RecoverAllCells -> CellsToBlob -> ComputeCellsAndKZGProofs (#14183)
* use recoverCellsAndKZGProofs

* make recoverAllCells and CellsToBlob private

* chore: all methods now return CellsAndProof struct

* chore: update code
2024-11-27 10:19:07 +01:00
Nishant Das
07fe76c2da Trigger PeerDAS At Deneb For E2E (#14193)
* Trigger At Deneb

* Fix Rate Limits
2024-11-27 10:19:07 +01:00
Manu NALEPA
54affa897f PeerDAS: Add KZG verification when sampling (#14187)
* `validateDataColumn`: Add comments and remove debug computation.

* `sampleDataColumnsFromPeer`: Add KZG verification

* `VerifyKZGInclusionProofColumn`: Add unit test.

* Make deepsource happy.

* Address Nishant's comment.

* Address Nishant's comment.
2024-11-27 10:16:50 +01:00
kevaundray
ac4c5fae3c chore!: Make Cell be a flat sequence of bytes (#14159)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* make Cells be flattened sequence of bytes

* chore: add test for flattening roundtrip

* chore: remove code that was doing the flattening outside of the kzg package

* fix merge

* fix

* remove now un-needed conversion

* use pointers for Cell parameters

* linter

* rename cell conversion methods (this only applies to old version of c-kzg)
2024-11-27 10:16:50 +01:00
Manu NALEPA
2845d87077 Move log from error to debug. (#14194)
Reason: If a peer does not expose its `csc` field in its ENR,
then there is nothing we can do.
2024-11-27 10:16:50 +01:00
Nishant Das
dc2c90b8ed Activate PeerDAS with the EIP7594 Fork Epoch (#14184)
* Save All the Current Changes

* Add check for data sampling

* Fix Test

* Gazelle

* Manu's Review

* Fix Test
2024-11-27 10:16:50 +01:00
kevaundray
b469157e1f chore!: Refactor RecoverBlob to RecoverCellsAndProofs (#14160)
* change recoverBlobs to recoverCellsAndProofs

* modify code to take in the cells and proofs for a particular blob instead of the blob itself

* add CellsAndProofs structure

* modify recoverCellsAndProofs to return `cellsAndProofs` structure

* modify `DataColumnSidecarsForReconstruct` to accept the `cellsAndKZGProofs` structure

* bazel run //:gazelle -- fix

* use kzg abstraction for kzg method

* move CellsAndProofs to kzg.go
2024-11-27 10:16:50 +01:00
kevaundray
2697794e58 chore: Encapsulate all kzg functionality for PeerDAS into the kzg package (#14136)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* use BytesPerBlob constant

* chore: fix some deepsource issues

* one declaration for commitments and blobs
2024-11-27 10:16:50 +01:00
Manu NALEPA
48cf24edb4 PeerDAS: Implement IncrementalDAS (#14109)
* `ConvertPeerIDToNodeID`: Add tests.

* Remove `extractNodeID` and uses `ConvertPeerIDToNodeID` instead.

* Implement IncrementalDAS.

* `DataColumnSamplingLoop` ==> `DataColumnSamplingRoutine`.

* HypergeomCDF: Add test.

* `GetValidCustodyPeers`: Optimize and add tests.

* Remove blank identifiers.

* Implement `CustodyCountFromRecord`.

* Implement `TestP2P.CustodyCountFromRemotePeer`.

* `NewTestP2P`: Add `swarmt.Option` parameters.

* `incrementalDAS`: Rework and add tests.

* Remove useless warning.
2024-11-27 10:16:50 +01:00
Francis Li
78f90db90b PeerDAS: add data column batch config (#14122) 2024-11-27 10:15:27 +01:00
Francis Li
d0a3b9bc1d [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-11-27 10:15:27 +01:00
Manu NALEPA
bfdb6dab86 Fix columns sampling (#14118) 2024-11-27 10:15:27 +01:00
Francis Li
7dd2fd52af [PeerDAS] implement DataColumnSidecarsByRootReq and fix related bugs (#14103)
* [PeerDAS] add data column related protos and fix data column by root bug

* Add more tests
2024-11-27 10:15:27 +01:00
Francis Li
b6bad9331b [PeerDAS] fixes and tests for gossiping out data columns (#14102)
* [PeerDAS] Minor fixes and tests for gossiping out data columns

* Fix metrics
2024-11-27 10:15:27 +01:00
Francis Li
6e2122085d [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-11-27 10:15:27 +01:00
Manu NALEPA
7a847292aa PeerDAS: Stop generating new P2P private key at start. (#14099)
* `privKey`: Improve logs.

* peerDAS: Move functions in file. Add documentation.

* PeerDAS: Remove unused `ComputeExtendedMatrix` and `RecoverMatrix` functions.

* PeerDAS: Stop generating new P2P private key at start.

* Fix Sammy's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
81f4db0afa PeerDAS: Gossip the reconstructed columns (#14079)
* PeerDAS: Broadcast not seen via gossip but reconstructed data columns.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
a7dc2e6c8b PeerDAS: Only save custodied columns, even after reconstruction. (#14083) 2024-11-27 10:15:27 +01:00
Manu NALEPA
0a010b5088 recoverBlobs: Cover the 0 < blobsCount < fieldparams.MaxBlobsPerBlock case. (#14066)
* `recoverBlobs`: Cover the `0 < blobsCount < fieldparams.MaxBlobsPerBlock` case.

* Fix Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
1e335e2cf2 PeerDAS: Withhold data on purpose. (#14076)
* Introduce hidden flag `data-columns-withhold-count`.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
42f4c0f14e PeerDAS: Implement / use data column feed from database. (#14062)
* Remove some `_` identifiers.

* Blob storage: Implement a notifier system for data columns.

* `dataColumnSidecarByRootRPCHandler`: Remove ugly `time.Sleep(100 * time.Millisecond)`.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
d3c12abe25 PeerDAS: Implement reconstruction. (#14036)
* Wrap errors, add logs.

* `missingColumnRequest`: Fix blobs <-> data columns mix.

* `ColumnIndices`: Return `map[uint64]bool` instead of `[fieldparams.NumberOfColumns]bool`.

* `DataColumnSidecars`: `interfaces.SignedBeaconBlock` ==> `interfaces.ReadOnlySignedBeaconBlock`.

We don't need any of the non read-only methods.

* Fix comments.

* `handleUnblidedBlock` ==> `handleUnblindedBlock`.

* `SaveDataColumn`: Move log from debug to trace.

If we attempted to save an already existing data column sidecar,
a debug log was printed.

This case could be quite common now with the data column reconstruction enabled.

* `sampling_data_columns.go` --> `data_columns_sampling.go`.

* Reconstruct data columns.
2024-11-27 10:15:27 +01:00
Nishant Das
b0ba05b4f4 Fix Custody Columns (#14021) 2024-11-27 10:15:27 +01:00
Nishant Das
e206506489 Disable Evaluators For E2E (#14019)
* Hack E2E

* Fix it For Real

* Gofmt

* Remove
2024-11-27 10:15:27 +01:00
Nishant Das
013cb28663 Request Data Columns When Fetching Pending Blocks (#14007)
* Support Data Columns For By Root Requests

* Revert Config Changes

* Fix Panic

* Fix Process Block

* Fix Flags

* Lint

* Support Checkpoint Sync

* Manu's Review

* Add Support For Columns in Remaining Methods

* Unmarshal Uncorrectly
2024-11-27 10:15:27 +01:00
Manu NALEPA
496914cb39 Fix CustodyColumns to comply with alpha-2 spectests. (#14008)
* Adding error wrapping

* Fix `CustodyColumnSubnets` tests.
2024-11-27 10:15:27 +01:00
Nishant Das
c032e78888 Set Custody Count Correctly (#14004)
* Set Custody Count Correctly

* Fix Discovery Count
2024-11-27 10:15:26 +01:00
Manu NALEPA
5e4deff6fd Sample from peers some data columns. (#13980)
* PeerDAS: Implement sampling.

* `TestNewRateLimiter`: Fix with the new number of expected registered topics.
2024-11-27 10:15:26 +01:00
Nishant Das
6daa91c465 Implement Data Columns By Range Request And Response Methods (#13972)
* Add Data Structure for New Request Type

* Add Data Column By Range Handler

* Add Data Column Request Methods

* Add new validation for columns by range requests

* Fix Build

* Allow Prysm Node To Fetch Data Columns

* Allow Prysm Node To Fetch Data Columns And Sync

* Bug Fixes For Interop

* GoFmt

* Use different var

* Manu's Review
2024-11-27 10:15:26 +01:00
Nishant Das
32ce6423eb Enable E2E For PeerDAS (#13945)
* Enable E2E And Add Fixes

* Register Same Topic For Data Columns

* Initialize Capacity Of Slice

* Fix Initialization of Data Column Receiver

* Remove Mix In From Merkle Proof

* E2E: Subscribe to all subnets.

* Remove Index Check

* Remaining Bug Fixes to Get It Working

* Change Evaluator to Allow Test to Finish

* Fix Build

* Add Data Column Verification

* Fix LoopVar Bug

* Do Not Allocate Memory

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Gofmt

* Fix It Again

* Fix Test Setup

* Fix Build

* Fix Trusted Setup panic

* Fix Trusted Setup panic

* Use New Test

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Justin Traglia
b0ea450df5 [PeerDAS] Upgrade c-kzg-4844 package (#13967)
* Upgrade c-kzg-4844 package

* Upgrade bazel deps
2024-11-27 10:15:26 +01:00
Manu NALEPA
8bd10df423 SendDataColumnSidecarByRoot: Return RODataColumn instead of ROBlob. (#13957)
* `SendDataColumnSidecarByRoot`: Return `RODataColumn` instead of `ROBlob`.

* Make deepsource happier.
2024-11-27 10:15:26 +01:00
Manu NALEPA
dcbb543be2 Spectests (#13940)
* Update `consensus_spec_version` to `v1.5.0-alpha.1`.

* `CustodyColumns`: Fix and implement spec tests.

* Make deepsource happy.

* `^uint64(0)` => `math.MaxUint64`.

* Fix `TestLoadConfigFile` test.
2024-11-27 10:15:26 +01:00
Nishant Das
be0580e1a9 Add DA Check For Data Columns (#13938)
* Add new DA check

* Exit early in the event no commitments exist.

* Gazelle

* Fix Mock Broadcaster

* Fix Test Setup

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Fix Build

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Manu NALEPA
1355178115 Implement peer DAS proposer RPC (#13922)
* Remove capital letter from error messages.

* `[4]byte` => `[fieldparams.VersionLength]byte`.

* Prometheus: Remove extra `committee`.

They are probably due to a bad copy/paste.

Note: The name of the probe itself remains,
to ensure backward compatibility.

* Implement Proposer RPC for data columns.

* Fix TestProposer_ProposeBlock_OK test.

* Remove default peerDAS activation.

* `validateDataColumn`: Workaround to return a `VerifiedRODataColumn`
2024-11-27 10:15:26 +01:00
Nishant Das
b78c3485b9 Update .bazelrc (#13931) 2024-11-27 10:15:26 +01:00
Manu NALEPA
f503efc6ed Implement custody_subnet_count ENR field. (#13915)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
2024-11-27 10:15:26 +01:00
Manu NALEPA
1bfbd3980e Peer das core (#13877)
* Bump `c-kzg-4844` lib to the `das` branch.

* Implement `MerkleProofKZGCommitments`.

* Implement `das-core.md`.

* Use `peerdas.CustodyColumnSubnets` and `peerdas.CustodyColumns`.

* `CustodyColumnSubnets`: Include `i` in the for loop.

* Remove `computeSubscribedColumnSubnet`.

* Remove `peerdas.CustodyColumns` out of the for loop.
2024-11-27 10:15:26 +01:00
Nishant Das
3e722ea1bc Add Request And Response RPC Methods For Data Columns (#13909)
* Add RPC Handler

* Add Column Requests

* Update beacon-chain/db/filesystem/blob.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/p2p/rpc_topic_mappings.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Manu's Review

* Interface Fixes

* mock manager

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Nishant Das
d844026433 Add Data Column Gossip Handlers (#13894)
* Add Data Column Subscriber

* Add Data Column Validator

* Wire all Handlers In

* Fix Build

* Fix Test

* Fix IP in Test

* Fix IP in Test
2024-11-27 10:15:26 +01:00
Nishant Das
9ffc19d5ef Add Support For Discovery Of Column Subnets (#13883)
* Add Support For Discovery Of Column Subnets

* Lint for SubnetsPerNode

* Manu's Review

* Change to a better name
2024-11-27 10:15:26 +01:00
Nishant Das
3e23f6e879 add it (#13865) 2024-11-27 10:11:55 +01:00
Manu NALEPA
c688c84393 Add in column sidecars protos (#13862) 2024-11-27 10:11:55 +01:00
274 changed files with 15514 additions and 3134 deletions

View File

@@ -4,6 +4,49 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05
This release has more work on PeerDAS, and light client support. Additionally, we have a few bug fixes:
- Blob cache size now correctly set at startup.
- A fix for slashing protection history exports where the validator database was in a nested folder.
- Corrected behavior of the API call for state committees with an invalid request.
- `/bin/sh` is now symlinked to `/bin/bash` for Prysm docker images.
In the [Hoodi](https://github.com/eth-clients/hoodi) testnet, the default gas limit is raised to 60M gas.
### Added
- Add light client mainnet spec test. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15295)
- Add support for light client req/resp domain. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15281)
- Added /bin/sh symlink to docker images. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15294)
- Added Prysm build data to otel tracing spans. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15302)
- Add light client minimal spec test support for `update_ranking` tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15297)
- Add fulu operation and epoch processing spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15284)
- Updated e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15304)
- Data column sidecars verification methods. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15232)
- Implement data column sidecars filesystem. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15257)
- Add blob schedule support from https://github.com/ethereum/consensus-specs/pull/4277. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15272)
- Random forkchoice spec tests for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15287)
- Add ability to download nightly test vectors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15312)
- PeerDAS: Validation pipeline for data column sidecars received via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15310)
- PeerDAS: Implement P2P. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15347)
- PeerDAS: Implement the blockchain package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15350)
### Changed
- Update spec tests to v1.6.0-alpha.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15306)
- PeerDAS: Refactor the reconstruction pipeline. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- PeerDAS: `DataColumnStorage.Get` - Exit early if no columns are available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- Default hoodi testnet builder gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15361)
### Fixed
- Fix cyclical dependencies issue when using testing/util package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15248)
- Set seen blob cache size correctly based on current slot time at start up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15348)
- Fix `slashing-protection-history export` failing when `validator.db` is in a nested folder like `data/direct/`. (#14954). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15351)
- Made `/eth/v1/beacon/states/{state_id}/committees` endpoint return `400` when slot does not belong to the specified epoch, aligning with the Beacon API spec (#15355). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15356)
- Removed eager validator context cancellation that was causing validator builder registrations to fail occasionally. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15369)
## [v6.0.3](https://github.com/prysmaticlabs/prysm/compare/v6.0.2...v6.0.3) - 2025-05-21
This release has important bugfixes for users of the [Beacon API](https://ethereum.github.io/beacon-APIs/). These fixes include:

View File

@@ -1,7 +1,7 @@
workspace(name = "prysm")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_pkg",
@@ -16,8 +16,6 @@ load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "toolchains_protoc",
sha256 = "abb1540f8a9e045422730670ebb2f25b41fa56ca5a7cf795175a110a0a68f4ad",
@@ -255,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.0"
consensus_spec_version = "v1.6.0-alpha.1"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-W7oKvoM0nAkyitykRxAw6kmCvjYC01IqiNJy0AmCnMM=",
"minimal": "sha256-ig7/zxomjv6buBWMom4IxAJh3lFJ9+JnY44E7c8ZNP8=",
"mainnet": "sha256-mjx+MkXtPhCNv4c4knLYLIkvIdpF7WTjx/ElvGPQzSo=",
"general": "sha256-o4t9p3R+fQHF4KOykGmwlG3zDw5wUdVWprkzId8aIsk=",
"minimal": "sha256-sU7ToI8t3MR8x0vVjC8ERmAHZDWpEmnAC9FWIpHi5x4=",
"mainnet": "sha256-YKS4wngg0LgI9Upp4MYJ77aG+8+e/G4YeqEIlp06LZw=",
},
version = consensus_spec_version,
)
@@ -280,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-u0RkIZIeGttb3sInR31mO64aBSwxALqO5SYIPlqEvPo=",
integrity = "sha256-Nv4TEuEJPQIM4E6T9J0FOITsmappmXZjGtlhe1HEXnU=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -923,7 +923,14 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
if err != nil {
return nil, err
}
srcLookahead, err := st.ProposerLookahead()
if err != nil {
return nil, err
}
lookahead := make([]string, len(srcLookahead))
for i, v := range srcLookahead {
lookahead[i] = fmt.Sprintf("%d", uint64(v))
}
return &BeaconStateFulu{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
@@ -962,5 +969,6 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
PendingDeposits: PendingDepositsFromConsensus(pbd),
PendingPartialWithdrawals: PendingPartialWithdrawalsFromConsensus(ppw),
PendingConsolidations: PendingConsolidationsFromConsensus(pc),
ProposerLookahead: lookahead,
}, nil
}

View File

@@ -33,8 +33,14 @@ type GetPeerResponse struct {
Data *Peer `json:"data"`
}
// Added Meta to align with beacon-api: https://ethereum.github.io/beacon-APIs/#/Node/getPeers
type Meta struct {
Count int `json:"count"`
}
type GetPeersResponse struct {
Data []*Peer `json:"data"`
Meta Meta `json:"meta"`
}
type Peer struct {

View File

@@ -219,4 +219,5 @@ type BeaconStateFulu struct {
PendingDeposits []*PendingDeposit `json:"pending_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
ProposerLookahead []string `json:"proposer_lookahead"`
}

View File

@@ -31,6 +31,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -109,22 +110,26 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
type blobNotifierMap struct {
sync.RWMutex
notifiers map[[32]byte]chan uint64
seenIndex map[[32]byte][]bool
// TODO: Separate blobs from data columns
// seenIndex map[[32]byte][]bool
seenIndex map[[32]byte][fieldparams.NumberOfColumns]bool
}
// notifyIndex notifies a blob by its index for a given root.
// It uses internal maps to keep track of seen indices and notifier channels.
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitives.Slot) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if idx >= uint64(maxBlobsPerBlock) {
return
}
// TODO: Separate blobs from data columns
// maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
// if idx >= uint64(maxBlobsPerBlock) {
// return
// }
bn.Lock()
seen := bn.seenIndex[root]
if seen == nil {
seen = make([]bool, maxBlobsPerBlock)
}
// TODO: Separate blobs from data columns
// if seen == nil {
// seen = make([]bool, maxBlobsPerBlock)
// }
if seen[idx] {
bn.Unlock()
return
@@ -135,7 +140,9 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitive
// Retrieve or create the notifier channel for the given root.
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, maxBlobsPerBlock)
// TODO: Separate blobs from data columns
// c = make(chan uint64, maxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
@@ -145,12 +152,15 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitive
}
func (bn *blobNotifierMap) forRoot(root [32]byte, slot primitives.Slot) chan uint64 {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
// TODO: Separate blobs from data columns
// maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
bn.Lock()
defer bn.Unlock()
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, maxBlobsPerBlock)
// TODO: Separate blobs from data columns
// c = make(chan uint64, maxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
return c
@@ -176,7 +186,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
bn := &blobNotifierMap{
notifiers: make(map[[32]byte]chan uint64),
seenIndex: make(map[[32]byte][]bool),
// TODO: Separate blobs from data columns
// seenIndex: make(map[[32]byte][]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
}
srv := &Service{
ctx: ctx,

View File

@@ -587,7 +587,9 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
func TestNotifyIndex(t *testing.T) {
// Initialize a blobNotifierMap
bn := &blobNotifierMap{
seenIndex: make(map[[32]byte][]bool),
// TODO: Separate blobs from data columns
// seenIndex: make(map[[32]byte][]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
notifiers: make(map[[32]byte]chan uint64),
}

View File

@@ -229,13 +229,16 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
if body.Version() < version.Deneb {
return nil
}
kzgs, err := body.BlobKzgCommitments()
if err != nil {
return err
}
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(kzgs) > maxBlobsPerBlock {
return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
commitmentCount, maxBlobsPerBlock := len(kzgs), params.BeaconConfig().MaxBlobsPerBlock(slot)
if commitmentCount > maxBlobsPerBlock {
return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d", commitmentCount, maxBlobsPerBlock)
}
return nil
}

View File

@@ -926,8 +926,10 @@ func TestVerifyBlobCommitmentCount(t *testing.T) {
require.NoError(t, err)
require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1)}}
maxCommitmentsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())
b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, maxCommitmentsPerBlock+1)}}
rb, err = consensusblocks.NewBeaconBlock(b)
require.NoError(t, err)
require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: actual count %d - max allowed %d", maxCommitmentsPerBlock+1, maxCommitmentsPerBlock), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
}

View File

@@ -96,6 +96,24 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
func VerifyBlockHeaderSignatureUsingCurrentFork(beaconState state.BeaconState, header *ethpb.SignedBeaconBlockHeader) error {
currentEpoch := slots.ToEpoch(header.Header.Slot)
fork, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return err
}
proposer, err := beaconState.ValidatorAtIndex(header.Header.ProposerIndex)
if err != nil {
return err
}
proposerPubKey := proposer.PublicKey
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
// VerifyBlockSignatureUsingCurrentFork verifies the proposer signature of a beacon block. This differs
// from the above method by not using fork data from the state and instead retrieving it
// via the respective epoch.

View File

@@ -37,8 +37,11 @@ const (
// SingleAttReceived is sent after a single attestation object is received from gossip or rpc
SingleAttReceived = 9
// DataColumnSidecarReceived is sent after a data column sidecar is received from gossip or rpc.
DataColumnSidecarReceived = 10
// BlockGossipReceived is sent after a block has been received from gossip or API that passes validation rules.
BlockGossipReceived = 10
BlockGossipReceived = 11
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -90,6 +93,11 @@ type SingleAttReceivedData struct {
Attestation ethpb.Att
}
// DataColumnSidecarReceivedData is the data sent with DataColumnSidecarReceived events.
type DataColumnSidecarReceivedData struct {
DataColumn *blocks.VerifiedRODataColumn
}
// BlockGossipReceivedData is the data sent with BlockGossipReceived events.
type BlockGossipReceivedData struct {
// SignedBlock is the block that was received.

View File

@@ -2,23 +2,33 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["upgrade.go"],
srcs = [
"transition.go",
"upgrade.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/fulu",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["upgrade_test.go"],
srcs = [
"transition_test.go",
"upgrade_test.go",
],
deps = [
":go_default_library",
"//beacon-chain/core/time:go_default_library",

View File

@@ -0,0 +1,47 @@
package fulu
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
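// ProcessEpoch performs the Fulu epoch transition: it runs the Electra epoch processing and then
// rotates the proposer lookahead (EIP-7917).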
func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
if err := electra.ProcessEpoch(ctx, state); err != nil {
return errors.Wrap(err, "could not process epoch in fulu transition")
}
return processProposerLookahead(ctx, state)
}
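// processProposerLookahead shifts the stored lookahead left by one epoch and fills the freed tail
// with the proposer indices of epoch current+MIN_SEED_LOOKAHEAD+1.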
func processProposerLookahead(ctx context.Context, state state.BeaconState) error {
_, span := trace.StartSpan(ctx, "fulu.processProposerLookahead")
defer span.End()
if state == nil || state.IsNil() {
return errors.New("nil state")
}
lookAhead, err := state.ProposerLookahead()
if err != nil {
return errors.Wrap(err, "could not get proposer lookahead")
}
lastEpochStart := len(lookAhead) - int(params.BeaconConfig().SlotsPerEpoch)
copy(lookAhead[:lastEpochStart], lookAhead[params.BeaconConfig().SlotsPerEpoch:])
lastEpoch := slots.ToEpoch(state.Slot()) + params.BeaconConfig().MinSeedLookahead + 1
indices, err := helpers.ActiveValidatorIndices(ctx, state, lastEpoch)
if err != nil {
return err
}
lastEpochProposers, err := helpers.PrecomputeProposerIndices(state, indices, lastEpoch)
if err != nil {
return errors.Wrap(err, "could not precompute proposer indices")
}
copy(lookAhead[lastEpochStart:], lastEpochProposers)
return state.SetProposerLookahead(lookAhead)
}
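For intuition, a minimal, self-contained sketch of the rotation above (illustrative only, assuming 4 slots per epoch and MIN_SEED_LOOKAHEAD = 1, so the buffer spans two epochs):

package main

import "fmt"

func main() {
	slotsPerEpoch := 4
	// [current epoch | next epoch] proposer indices.
	lookahead := []uint64{10, 11, 12, 13, 20, 21, 22, 23}
	// Freshly computed proposers for the epoch that just became visible.
	newLastEpoch := []uint64{30, 31, 32, 33}
	// Drop the oldest epoch by shifting everything left by one epoch...
	copy(lookahead[:len(lookahead)-slotsPerEpoch], lookahead[slotsPerEpoch:])
	// ...then write the newly computed epoch into the freed tail.
	copy(lookahead[len(lookahead)-slotsPerEpoch:], newLastEpoch)
	fmt.Println(lookahead) // [20 21 22 23 30 31 32 33]
}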

View File

@@ -0,0 +1,28 @@
package fulu_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/fulu"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestProcessEpoch_CanProcessFulu(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
st, err := fulu.UpgradeToFulu(context.Background(), st)
require.NoError(t, err)
preLookahead, err := st.ProposerLookahead()
require.NoError(t, err)
err = fulu.ProcessEpoch(context.Background(), st)
require.NoError(t, err)
postLookahead, err := st.ProposerLookahead()
require.NoError(t, err)
require.NotEqual(t, preLookahead[0], postLookahead[0])
for i, v := range preLookahead[params.BeaconConfig().SlotsPerEpoch:] {
require.Equal(t, v, postLookahead[i])
}
}

View File

@@ -1,18 +1,22 @@
package fulu
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
// UpgradeToFulu upgrades a generic pre-Fulu beacon state and returns the Fulu version of the state.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
@@ -101,8 +105,12 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateElectra{
s := &ethpb.BeaconStateFulu{
GenesisTime: beaconState.GenesisTime(),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
@@ -163,6 +171,7 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookahead,
}
// Need to cast the beaconState to use in helper functions

View File

@@ -1,6 +1,7 @@
package fulu_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/fulu"
@@ -25,7 +26,7 @@ func TestUpgradeToFulu(t *testing.T) {
require.NoError(t, st.SetBalances(bals))
preForkState := st.Copy()
mSt, err := fulu.UpgradeToFulu(st)
mSt, err := fulu.UpgradeToFulu(context.Background(), st)
require.NoError(t, err)
require.Equal(t, preForkState.GenesisTime(), mSt.GenesisTime())

View File

@@ -277,10 +277,10 @@ type CommitteeAssignment struct {
CommitteeIndex primitives.CommitteeIndex
}
// verifyAssignmentEpoch verifies if the given epoch is valid for assignment based on the provided state.
// VerifyAssignmentEpoch verifies if the given epoch is valid for assignment based on the provided state.
// It checks if the epoch is not greater than the next epoch, and if the start slot of the epoch is greater
// than or equal to the minimum valid start slot calculated based on the state's current slot and historical roots.
func verifyAssignmentEpoch(epoch primitives.Epoch, state state.BeaconState) error {
func VerifyAssignmentEpoch(epoch primitives.Epoch, state state.BeaconState) error {
nextEpoch := time.NextEpoch(state)
if epoch > nextEpoch {
return fmt.Errorf("epoch %d can't be greater than next epoch %d", epoch, nextEpoch)
@@ -308,7 +308,7 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
defer span.End()
// Verify if the epoch is valid for assignment based on the provided state.
if err := verifyAssignmentEpoch(epoch, state); err != nil {
if err := VerifyAssignmentEpoch(epoch, state); err != nil {
return nil, err
}
startSlot, err := slots.EpochStart(epoch)
@@ -351,6 +351,61 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
return proposerAssignments, nil
}
// LiteAssignment is a lite version of CommitteeAssignment, and has committee length
// and validator committee index instead of the full committee list
type LiteAssignment struct {
AttesterSlot primitives.Slot // slot in which to attest
CommitteeIndex primitives.CommitteeIndex // position of the committee in the slot
CommitteeLength uint64 // number of members in the committee
ValidatorCommitteeIndex uint64 // validator's offset inside the committee
}
// PrecomputeCommittees returns an array indexed by (slot-startSlot)
// whose elements are the beacon committees of that slot.
func PrecomputeCommittees(
ctx context.Context,
st state.BeaconState,
startSlot primitives.Slot,
) ([][][]primitives.ValidatorIndex, error) {
cfg := params.BeaconConfig()
out := make([][][]primitives.ValidatorIndex, cfg.SlotsPerEpoch)
for relativeSlot := primitives.Slot(0); relativeSlot < cfg.SlotsPerEpoch; relativeSlot++ {
slot := startSlot + relativeSlot
comms, err := BeaconCommittees(ctx, st, slot)
if err != nil {
return nil, errors.Wrapf(err, "BeaconCommittees failed at slot %d", slot)
}
out[relativeSlot] = comms
}
return out, nil
}
// AssignmentForValidator scans the cached committees once
// and returns the duty for a single validator.
func AssignmentForValidator(
bySlot [][][]primitives.ValidatorIndex,
startSlot primitives.Slot,
vIdx primitives.ValidatorIndex,
) *LiteAssignment {
for relativeSlot, committees := range bySlot {
for cIdx, committee := range committees {
for pos, member := range committee {
if member == vIdx {
return &LiteAssignment{
AttesterSlot: startSlot + primitives.Slot(relativeSlot),
CommitteeIndex: primitives.CommitteeIndex(cIdx),
CommitteeLength: uint64(len(committee)),
ValidatorCommitteeIndex: uint64(pos),
}
}
}
}
}
return nil // validator is not scheduled this epoch
}
// CommitteeAssignments calculates committee assignments for each validator during the specified epoch.
// It retrieves active validator indices, determines the number of committees per slot, and computes
// assignments for each validator based on their presence in the provided validators slice.
@@ -359,7 +414,7 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
defer span.End()
// Verify if the epoch is valid for assignment based on the provided state.
if err := verifyAssignmentEpoch(epoch, state); err != nil {
if err := VerifyAssignmentEpoch(epoch, state); err != nil {
return nil, err
}
startSlot, err := slots.EpochStart(epoch)
@@ -500,21 +555,31 @@ func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaco
if err != nil {
return err
}
// Skip cache update if the key already exists
_, ok := proposerIndicesCache.ProposerIndices(epoch, [32]byte(root))
if ok {
return nil
}
indices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return err
}
proposerIndices, err := PrecomputeProposerIndices(state, indices, epoch)
if err != nil {
return err
}
if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
return errors.New("invalid proposer length returned from state")
var proposerIndices []primitives.ValidatorIndex
// Use the proposer lookahead stored in the state if post-Fulu (EIP-7917).
if state.Version() >= version.Fulu {
lookAhead, err := state.ProposerLookahead()
if err != nil {
return errors.Wrap(err, "could not get proposer lookahead")
}
proposerIndices = lookAhead[:params.BeaconConfig().SlotsPerEpoch]
} else {
// Skip cache update if the key already exists
_, ok := proposerIndicesCache.ProposerIndices(epoch, [32]byte(root))
if ok {
return nil
}
indices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return err
}
proposerIndices, err = PrecomputeProposerIndices(state, indices, epoch)
if err != nil {
return err
}
if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
return errors.New("invalid proposer length returned from state")
}
}
// This is here to deal with tests only
var indicesArray [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex
@@ -601,6 +666,25 @@ func ComputeCommittee(
return shuffledList[start:end], nil
}
// InitializeProposerLookahead computes the proposer indices for the given epoch and the following MIN_SEED_LOOKAHEAD epochs.
func InitializeProposerLookahead(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]uint64, error) {
lookAhead := make([]uint64, 0, uint64(params.BeaconConfig().MinSeedLookahead+1)*uint64(params.BeaconConfig().SlotsPerEpoch))
indices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return nil, errors.Wrap(err, "could not get active indices")
}
for i := uint64(0); i < uint64(params.BeaconConfig().MinSeedLookahead+1); i++ {
proposerIndices, err := PrecomputeProposerIndices(state, indices, epoch+primitives.Epoch(i))
if err != nil {
return nil, errors.Wrap(err, "could not compute proposer indices")
}
for _, proposerIndex := range proposerIndices {
lookAhead = append(lookAhead, uint64(proposerIndex))
}
}
return lookAhead, nil
}
// PrecomputeProposerIndices computes proposer indices of the current epoch and returns a list of proposer indices,
// the index of the list represents the slot number.
func PrecomputeProposerIndices(state state.ReadOnlyBeaconState, activeIndices []primitives.ValidatorIndex, e primitives.Epoch) ([]primitives.ValidatorIndex, error) {

View File

@@ -871,3 +871,48 @@ func TestBeaconCommitteesFromCache(t *testing.T) {
assert.DeepEqual(t, committees[idx], committee)
}
}
func TestPrecomputeCommittees_HappyPath(t *testing.T) {
cfg := params.BeaconConfig()
start := primitives.Slot(100)
ctx := context.Background()
st, _ := util.DeterministicGenesisState(t, 256)
got, err := helpers.PrecomputeCommittees(ctx, st, start)
require.NoError(t, err)
require.Equal(t, len(got), int(cfg.SlotsPerEpoch), "outer slice length mismatch")
for i := range got {
expSlot := start + primitives.Slot(i)
comms, err := helpers.BeaconCommittees(ctx, st, expSlot)
require.NoError(t, err)
require.DeepEqual(t, comms, got[i])
}
}
func TestAssignmentForValidator(t *testing.T) {
start := primitives.Slot(200)
bySlot := [][][]primitives.ValidatorIndex{
{{1, 2, 3}},
{{7, 8, 9}},
}
vIdx := primitives.ValidatorIndex(8)
got := helpers.AssignmentForValidator(bySlot, start, vIdx)
require.NotNil(t, got)
require.Equal(t, start+1, got.AttesterSlot)
require.Equal(t, primitives.CommitteeIndex(0), got.CommitteeIndex)
require.Equal(t, uint64(3), got.CommitteeLength)
require.Equal(t, uint64(1), got.ValidatorCommitteeIndex)
t.Run("Not Found", func(t *testing.T) {
start = primitives.Slot(300)
bySlot = [][][]primitives.ValidatorIndex{
{{4, 5, 6}},
}
got = helpers.AssignmentForValidator(bySlot, start, primitives.ValidatorIndex(99))
require.IsNil(t, got)
})
}
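A minimal sketch (illustrative only) of how the two helpers are intended to compose: precompute the epoch's committees once, then resolve each validator's attester duty with a single scan. The package name and the helper attesterDutyFor are assumptions, not part of this change:

package duties

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/time/slots"
)

func attesterDutyFor(ctx context.Context, st state.BeaconState, epoch primitives.Epoch, vIdx primitives.ValidatorIndex) (*helpers.LiteAssignment, error) {
	startSlot, err := slots.EpochStart(epoch)
	if err != nil {
		return nil, err
	}
	// One pass over the epoch's slots to build all committees.
	bySlot, err := helpers.PrecomputeCommittees(ctx, st, startSlot)
	if err != nil {
		return nil, err
	}
	// nil means the validator has no attester duty in this epoch.
	return helpers.AssignmentForValidator(bySlot, startSlot, vIdx), nil
}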

View File

@@ -78,6 +78,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -264,6 +265,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
}
func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
params.SetupTestConfigCleanup(t)
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)

View File

@@ -299,9 +299,29 @@ func ProposerIndexAtSlotFromCheckpoint(c *forkchoicetypes.Checkpoint, slot primi
return proposerIndices[slot%params.BeaconConfig().SlotsPerEpoch], nil
}
func beaconProposerIndexAtSlotFulu(state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(state.Slot())
if e < stateEpoch || e > stateEpoch+1 {
return 0, errors.Errorf("slot %d is not in the current epoch %d or the next epoch", slot, stateEpoch)
}
lookAhead, err := state.ProposerLookahead()
if err != nil {
return 0, errors.Wrap(err, "could not get proposer lookahead")
}
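// Illustration (assuming 32 slots per epoch): a state in epoch E serves a slot in epoch E from
// lookAhead[slot%32] and a slot in epoch E+1 from lookAhead[slot%32+32]; any other epoch errors above.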
if e == stateEpoch {
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch], nil
}
// The caller is requesting the proposer for the next epoch
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch+params.BeaconConfig().SlotsPerEpoch], nil
}
// BeaconProposerIndexAtSlot returns proposer index at the given slot from the
// point of view of the given state as head state
func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
if state.Version() >= version.Fulu {
return beaconProposerIndexAtSlotFulu(state, slot)
}
e := slots.ToEpoch(slot)
// The cache uses the state root of the previous epoch - minimum_seed_lookahead last slot as key. (e.g. Starting epoch 1, slot 32, the key would be block root at slot 31)
// For simplicity, the node will skip caching of genesis epoch.

View File

@@ -7,6 +7,7 @@ go_library(
"info.go",
"metrics.go",
"p2p_interface.go",
"peer_sampling.go",
"reconstruction.go",
"validator.go",
"verification.go",
@@ -44,6 +45,7 @@ go_test(
"das_core_test.go",
"info_test.go",
"p2p_interface_test.go",
"peer_sampling_test.go",
"reconstruction_test.go",
"utils_test.go",
"validator_test.go",

View File

@@ -0,0 +1,56 @@
package peerdas
import (
"math/big"
"github.com/OffchainLabs/prysm/v6/config/params"
)
// ExtendedSampleCount computes, for a given number of samples per slot and allowed failures, the
// number of samples we should actually query from peers.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/peer-sampling.md#get_extended_sample_count
func ExtendedSampleCount(samplesPerSlot, allowedFailures uint64) uint64 {
// Retrieve the columns count
columnsCount := params.BeaconConfig().NumberOfColumns
// If half of the columns are missing, we are able to reconstruct the data.
// If half of the columns + 1 are missing, we are not able to reconstruct the data.
// This is the smallest worst case.
worstCaseMissing := columnsCount/2 + 1
// Compute the false positive threshold.
falsePositiveThreshold := HypergeomCDF(0, columnsCount, worstCaseMissing, samplesPerSlot)
var sampleCount uint64
// Finally, compute the extended sample count.
for sampleCount = samplesPerSlot; sampleCount < columnsCount+1; sampleCount++ {
if HypergeomCDF(allowedFailures, columnsCount, worstCaseMissing, sampleCount) <= falsePositiveThreshold {
break
}
}
return sampleCount
}
// HypergeomCDF computes the hypergeometric cumulative distribution function.
// https://en.wikipedia.org/wiki/Hypergeometric_distribution
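// P(X <= k) = sum_{i=0}^{k} C(n, i) * C(M-n, N-i) / C(M, N), where M is the population size,
// n the number of successes in the population, and N the sample size.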
func HypergeomCDF(k, M, n, N uint64) float64 {
denominatorInt := new(big.Int).Binomial(int64(M), int64(N)) // lint:ignore uintcast
denominator := new(big.Float).SetInt(denominatorInt)
rBig := big.NewFloat(0)
for i := uint64(0); i < k+1; i++ {
a := new(big.Int).Binomial(int64(n), int64(i)) // lint:ignore uintcast
b := new(big.Int).Binomial(int64(M-n), int64(N-i))
numeratorInt := new(big.Int).Mul(a, b)
numerator := new(big.Float).SetInt(numeratorInt)
item := new(big.Float).Quo(numerator, denominator)
rBig.Add(rBig, item)
}
r, _ := rBig.Float64()
return r
}
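A minimal usage sketch (illustrative only, assuming the default mainnet configuration with 128 columns); the printed counts match the spec test vectors in the test file below:

package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
)

func main() {
	const samplesPerSlot = 16
	// With NumberOfColumns = 128 this prints 16, 20, 24, 27 and 29 for 0..4 allowed failures.
	for allowedFailures := uint64(0); allowedFailures <= 4; allowedFailures++ {
		fmt.Println(allowedFailures, peerdas.ExtendedSampleCount(samplesPerSlot, allowedFailures))
	}
}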

View File

@@ -0,0 +1,60 @@
package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestExtendedSampleCount(t *testing.T) {
const samplesPerSlot = 16
testCases := []struct {
name string
allowedMissings uint64
extendedSampleCount uint64
}{
{name: "allowedMissings=0", allowedMissings: 0, extendedSampleCount: 16},
{name: "allowedMissings=1", allowedMissings: 1, extendedSampleCount: 20},
{name: "allowedMissings=2", allowedMissings: 2, extendedSampleCount: 24},
{name: "allowedMissings=3", allowedMissings: 3, extendedSampleCount: 27},
{name: "allowedMissings=4", allowedMissings: 4, extendedSampleCount: 29},
{name: "allowedMissings=5", allowedMissings: 5, extendedSampleCount: 32},
{name: "allowedMissings=6", allowedMissings: 6, extendedSampleCount: 35},
{name: "allowedMissings=7", allowedMissings: 7, extendedSampleCount: 37},
{name: "allowedMissings=8", allowedMissings: 8, extendedSampleCount: 40},
{name: "allowedMissings=9", allowedMissings: 9, extendedSampleCount: 42},
{name: "allowedMissings=10", allowedMissings: 10, extendedSampleCount: 44},
{name: "allowedMissings=11", allowedMissings: 11, extendedSampleCount: 47},
{name: "allowedMissings=12", allowedMissings: 12, extendedSampleCount: 49},
{name: "allowedMissings=13", allowedMissings: 13, extendedSampleCount: 51},
{name: "allowedMissings=14", allowedMissings: 14, extendedSampleCount: 53},
{name: "allowedMissings=15", allowedMissings: 15, extendedSampleCount: 55},
{name: "allowedMissings=16", allowedMissings: 16, extendedSampleCount: 57},
{name: "allowedMissings=17", allowedMissings: 17, extendedSampleCount: 59},
{name: "allowedMissings=18", allowedMissings: 18, extendedSampleCount: 61},
{name: "allowedMissings=19", allowedMissings: 19, extendedSampleCount: 63},
{name: "allowedMissings=20", allowedMissings: 20, extendedSampleCount: 65},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result := peerdas.ExtendedSampleCount(samplesPerSlot, tc.allowedMissings)
require.Equal(t, tc.extendedSampleCount, result)
})
}
}
func TestHypergeomCDF(t *testing.T) {
// Test case from https://en.wikipedia.org/wiki/Hypergeometric_distribution
// Population size: 128, number of successes in population: 65, sample size: 16, number of successes in sample: 5
// Expected result: ~0.0797
const (
expected = 0.0796665913283742
margin = 0.000001
)
actual := peerdas.HypergeomCDF(5, 128, 65, 16)
require.Equal(t, true, expected-margin <= actual && actual <= expected+margin)
}

View File

@@ -44,7 +44,7 @@ func TestValidatorsCustodyRequirement(t *testing.T) {
validatorsIndex[primitives.ValidatorIndex(i)] = true
}
beaconState, err := state_native.InitializeFromProtoFulu(&ethpb.BeaconStateElectra{Validators: validators})
beaconState, err := state_native.InitializeFromProtoFulu(&ethpb.BeaconStateFulu{Validators: validators})
require.NoError(t, err)
actual, err := peerdas.ValidatorsCustodyRequirement(beaconState, validatorsIndex)

View File

@@ -53,6 +53,11 @@ func HigherEqualThanAltairVersionAndEpoch(s state.BeaconState, e primitives.Epoc
return s.Version() >= version.Altair && e >= params.BeaconConfig().AltairForkEpoch
}
// PeerDASIsActive checks whether peerDAS is active at the provided slot.
func PeerDASIsActive(slot primitives.Slot) bool {
return params.FuluEnabled() && slots.ToEpoch(slot) >= params.BeaconConfig().FuluForkEpoch
}
// CanUpgradeToAltair returns true if the input `slot` can upgrade to Altair.
// Spec code:
// If state.slot % SLOTS_PER_EPOCH == 0 and compute_epoch_at_slot(state.slot) == ALTAIR_FORK_EPOCH

View File

@@ -303,7 +303,11 @@ func ProcessSlotsCore(ctx context.Context, span trace.Span, state state.BeaconSt
func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
var err error
if time.CanProcessEpoch(state) {
if state.Version() >= version.Electra {
if state.Version() >= version.Fulu {
if err = fulu.ProcessEpoch(ctx, state); err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("could not process %s epoch", version.String(state.Version())))
}
} else if state.Version() >= version.Electra {
if err = electra.ProcessEpoch(ctx, state); err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("could not process %s epoch", version.String(state.Version())))
}
@@ -377,7 +381,7 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
}
if time.CanUpgradeToFulu(slot) {
state, err = fulu.UpgradeToFulu(state)
state, err = fulu.UpgradeToFulu(ctx, state)
if err != nil {
tracing.AnnotateError(span, err)
return nil, err

View File

@@ -4,21 +4,26 @@ go_library(
name = "go_default_library",
srcs = [
"availability.go",
"availability_columns.go",
"cache.go",
"data_column_cache.go",
"iface.go",
"mock.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -27,13 +32,18 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"availability_columns_test.go",
"availability_test.go",
"cache_test.go",
"data_column_cache_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -41,6 +51,7 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -53,10 +53,16 @@ func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchV
// Persist adds blobs to the working blob cache. Blobs stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all blobs referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStore) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
if len(sc) == 0 {
func (s *LazilyPersistentStore) Persist(current primitives.Slot, scg ...blocks.ROSidecar) error {
if len(scg) == 0 {
return nil
}
sc, err := blocks.BlobSidecarsFromSidecars(scg)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
if len(sc) > 1 {
first := sc[0].BlockRoot()
for i := 1; i < len(sc); i++ {

View File

@@ -0,0 +1,200 @@
package das
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to Persist until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.DataColumnStorage
nodeID enode.ID
cache *dataColumnCache
custodyInfo *peerdas.CustodyInfo
newDataColumnsVerifier verification.NewDataColumnsVerifier
}
var _ AvailabilityStore = &LazilyPersistentStoreColumn{}
// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
func NewLazilyPersistentStoreColumn(store *filesystem.DataColumnStorage, nodeID enode.ID, newDataColumnsVerifier verification.NewDataColumnsVerifier, custodyInfo *peerdas.CustodyInfo) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
nodeID: nodeID,
cache: newDataColumnCache(),
custodyInfo: custodyInfo,
newDataColumnsVerifier: newDataColumnsVerifier,
}
}
// Persist adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
if len(sidecars) == 0 {
return nil
}
dataColumnSidecars, err := blocks.DataColumnSidecarsFromSidecars(sidecars)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
// It is safe to retrieve the first sidecar.
firstSidecar := dataColumnSidecars[0]
if len(sidecars) > 1 {
firstRoot := firstSidecar.BlockRoot()
for _, sidecar := range dataColumnSidecars[1:] {
if sidecar.BlockRoot() != firstRoot {
return errMixedRoots
}
}
}
firstSidecarEpoch, currentEpoch := slots.ToEpoch(firstSidecar.Slot()), slots.ToEpoch(current)
if !params.WithinDAPeriod(firstSidecarEpoch, currentEpoch) {
return nil
}
key := dataColumnCacheKey{slot: firstSidecar.Slot(), root: firstSidecar.BlockRoot()}
entry := s.cache.ensure(key)
for i := range sidecars {
if err := entry.stash(&dataColumnSidecars[i]); err != nil {
return errors.Wrap(err, "stash DataColumnSidecar")
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, currentSlot primitives.Slot, block blocks.ROBlock) error {
blockCommitments, err := s.fullCommitmentsToCheck(s.nodeID, block, currentSlot)
if err != nil {
return errors.Wrapf(err, "full commitments to check with block root `%#x` and current slot `%d`", block.Root(), currentSlot)
}
// Return early for blocks that do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
// Get the root of the block.
blockRoot := block.Root()
// Build the cache key for the block.
key := dataColumnCacheKey{slot: block.Block().Slot(), root: blockRoot}
// Retrieve the cache entry for the block, or create an empty one if it doesn't exist.
entry := s.cache.ensure(key)
// Delete the cache entry for the block at the end.
defer s.cache.delete(key)
// Set the disk summary for the block in the cache entry.
entry.setDiskSummary(s.store.Summary(blockRoot))
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
roDataColumns, err := entry.filter(blockRoot, blockCommitments)
if err != nil {
return errors.Wrap(err, "filter")
}
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
verifier := s.newDataColumnsVerifier(roDataColumns, verification.ByRangeRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return errors.Wrap(err, "valid")
}
if err := verifier.SidecarInclusionProven(); err != nil {
return errors.Wrap(err, "sidecar inclusion proven")
}
if err := verifier.SidecarKzgProofVerified(); err != nil {
return errors.Wrap(err, "sidecar KZG proof verified")
}
verifiedRoDataColumns, err := verifier.VerifiedRODataColumns()
if err != nil {
return errors.Wrap(err, "verified RO data columns - should never happen")
}
if err := s.store.Save(verifiedRoDataColumns); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
// fullCommitmentsToCheck returns the commitments to check for a given block.
func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
// Return early for blocks that are pre-Fulu.
if block.Version() < version.Fulu {
return &safeCommitmentsArray{}, nil
}
// Compute the block epoch.
blockSlot := block.Block().Slot()
blockEpoch := slots.ToEpoch(blockSlot)
// Compute the current epoch.
currentEpoch := slots.ToEpoch(currentSlot)
// Return early if the request is out of the MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS window.
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return &safeCommitmentsArray{}, nil
}
// Retrieve the KZG commitments for the block.
kzgCommitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Return early if there are no commitments in the block.
if len(kzgCommitments) == 0 {
return &safeCommitmentsArray{}, nil
}
// Retrieve the custody group count.
custodyGroupCount := s.custodyInfo.ActualGroupCount()
// Retrieve peer info.
peerInfo, _, err := peerdas.Info(nodeID, custodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "peer info")
}
// Create a safe commitments array for the custody columns.
commitmentsArray := &safeCommitmentsArray{}
for column := range peerInfo.CustodyColumns {
commitmentsArray[column] = kzgCommitments
}
return commitmentsArray, nil
}

View File

@@ -0,0 +1,303 @@
package das
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
)
func roSidecarsFromDataColumnParamsByBlockRoot(t *testing.T, dataColumnParamsByBlockRoot verification.DataColumnsParamsByRoot) ([]blocks.ROSidecar, []blocks.RODataColumn) {
roDataColumns, _ := verification.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
roSidecars := make([]blocks.ROSidecar, 0, len(roDataColumns))
for _, roDataColumn := range roDataColumns {
roSidecars = append(roSidecars, blocks.NewSidecarFromDataColumnSidecar(roDataColumn))
}
return roSidecars, roDataColumns
}
func newSignedRoBlock(t *testing.T, signedBeaconBlock interface{}) blocks.ROBlock {
sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
}
var commitments = [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
func TestPersist(t *testing.T) {
t.Run("no sidecars", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.Persist(0)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("mixed roots", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]verification.DataColumnParams{
{1}: {{ColumnIndex: 1}},
{2}: {{ColumnIndex: 2}},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
require.ErrorIs(t, err, errMixedRoots)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("outside DA period", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]verification.DataColumnParams{
{1}: {{ColumnIndex: 1}},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.Persist(1_000_000, roSidecars...)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("nominal", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]verification.DataColumnParams{
{}: {{ColumnIndex: 1}, {ColumnIndex: 5}},
}
roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
require.NoError(t, err)
require.Equal(t, 1, len(lazilyPersistentStoreColumns.cache.entries))
key := dataColumnCacheKey{slot: 0, root: [32]byte{}}
entry := lazilyPersistentStoreColumns.cache.entries[key]
// A call to Persist does NOT save the sidecars to disk.
require.Equal(t, uint64(0), entry.diskSummary.Count())
require.DeepSSZEqual(t, roDataColumns[0], *entry.scs[1])
require.DeepSSZEqual(t, roDataColumns[1], *entry.scs[5])
for i, roDataColumn := range entry.scs {
if map[int]bool{1: true, 5: true}[i] {
continue
}
require.IsNil(t, roDataColumn)
}
})
}
func TestIsDataAvailable(t *testing.T) {
newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
}
ctx := context.Background()
t.Run("without commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
require.NoError(t, err)
})
t.Run("with commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
root := signedRoBlock.Root()
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})
indices := [...]uint64{1, 17, 87, 102}
dataColumnsParams := make([]verification.DataColumnParams, 0, len(indices))
for _, index := range indices {
dataColumnParams := verification.DataColumnParams{
ColumnIndex: index,
KzgCommitments: commitments,
}
dataColumnsParams = append(dataColumnsParams, dataColumnParams)
}
dataColumnsParamsByBlockRoot := verification.DataColumnsParamsByRoot{root: dataColumnsParams}
_, verifiedRoDataColumns := verification.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParamsByBlockRoot)
key := dataColumnCacheKey{root: root}
entry := lazilyPersistentStoreColumns.cache.ensure(key)
defer lazilyPersistentStoreColumns.cache.delete(key)
for _, verifiedRoDataColumn := range verifiedRoDataColumns {
err := entry.stash(&verifiedRoDataColumn.RODataColumn)
require.NoError(t, err)
}
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
require.NoError(t, err)
actual, err := dataColumnStorage.Get(root, indices[:])
require.NoError(t, err)
summary := dataColumnStorage.Summary(root)
require.Equal(t, uint64(len(indices)), summary.Count())
require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
})
}
func TestFullCommitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
require.NoError(t, err)
testCases := []struct {
name string
commitments [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
}{
{
name: "Pre-Fulu block",
block: func(t *testing.T) blocks.ROBlock {
return newSignedRoBlock(t, util.NewBeaconBlockElectra())
},
},
{
name: "Commitments outside data availability window",
block: func(t *testing.T) blocks.ROBlock {
beaconBlockElectra := util.NewBeaconBlockElectra()
// Block is from slot 0, "current slot" is window size +1 (so outside the window)
beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments
return newSignedRoBlock(t, beaconBlockElectra)
},
slot: windowSlots + 1,
},
{
name: "Commitments within data availability window",
block: func(t *testing.T) blocks.ROBlock {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedBeaconBlockFulu.Block.Slot = 100
return newSignedRoBlock(t, signedBeaconBlockFulu)
},
commitments: commitments,
slot: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
b := tc.block(t)
s := NewLazilyPersistentStoreColumn(nil, enode.ID{}, nil, &peerdas.CustodyInfo{})
commitmentsArray, err := s.fullCommitmentsToCheck(enode.ID{}, b, tc.slot)
require.NoError(t, err)
for _, commitments := range commitmentsArray {
require.DeepEqual(t, tc.commitments, commitments)
}
})
}
}
type mockDataColumnsVerifier struct {
t *testing.T
dataColumnSidecars []blocks.RODataColumn
validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}
var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}
func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)
verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
for _, dataColumnSidecar := range m.dataColumnSidecars {
verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
}
return verifiedDataColumnSidecars, nil
}
func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (m *mockDataColumnsVerifier) ValidFields() error {
m.validCalled = true
return nil
}
func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
return nil
}
func (m *mockDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }
func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
m.SidecarInclusionProvenCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
m.SidecarKzgProofVerifiedCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }

View File

@@ -116,9 +116,11 @@ func TestLazilyPersistent_Missing(t *testing.T) {
ctx := context.Background()
store := filesystem.NewEphemeralBlobStorage(t)
blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
mbv := &mockBlobBatchVerifier{t: t, scs: scs}
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
@@ -141,12 +143,14 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
ctx := context.Background()
store := filesystem.NewEphemeralBlobStorage(t)
blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
mbv := &mockBlobBatchVerifier{t: t, err: errors.New("kzg check should not run")}
scs[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[0]))
err := as.IsDataAvailable(ctx, 1, blk)
@@ -155,7 +159,10 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
}
func TestLazyPersistOnceCommitted(t *testing.T) {
_, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
// stashes as expected
require.NoError(t, as.Persist(1, scs...))
@@ -163,10 +170,13 @@ func TestLazyPersistOnceCommitted(t *testing.T) {
require.ErrorIs(t, as.Persist(1, scs...), ErrDuplicateSidecar)
// ignores index out of bound
scs[0].Index = 6
require.ErrorIs(t, as.Persist(1, scs[0]), errIndexOutOfBounds)
blobSidecars[0].Index = 6
require.ErrorIs(t, as.Persist(1, blocks.NewSidecarFromBlobSidecar(blobSidecars[0])), errIndexOutOfBounds)
_, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
more := blocks.NewSidecarsFromBlobSidecars(moreBlobSidecars)
_, more := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
// ignores sidecars before the retention period
slotOOB, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)

View File

@@ -10,12 +10,7 @@ import (
"github.com/pkg/errors"
)
var (
ErrDuplicateSidecar = errors.New("duplicate sidecar stashed in AvailabilityStore")
errIndexOutOfBounds = errors.New("sidecar.index > MAX_BLOBS_PER_BLOCK")
errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
errMissingSidecar = errors.New("no sidecar in cache for block commitment")
)
var errIndexOutOfBounds = errors.New("sidecar.index > MAX_BLOBS_PER_BLOCK")
// cacheKey includes the slot so that we can easily iterate through the cache and compare
// slots for eviction purposes. Whether the input is the block or the sidecar, we always have

View File

@@ -0,0 +1,135 @@
package das
import (
"reflect"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/pkg/errors"
)
var (
ErrDuplicateSidecar = errors.New("duplicate sidecar stashed in AvailabilityStore")
errColumnIndexTooHigh = errors.New("column index too high")
errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
errMissingSidecar = errors.New("no sidecar in cache for block commitment")
)
// dataColumnCacheKey includes the slot so that we can easily iterate through the cache and compare
// slots for eviction purposes. Whether the input is the block or the sidecar, we always have
// the root+slot when interacting with the cache, so it isn't an inconvenience to use both.
type dataColumnCacheKey struct {
slot primitives.Slot
root [32]byte
}
type dataColumnCache struct {
entries map[dataColumnCacheKey]*dataColumnCacheEntry
}
func newDataColumnCache() *dataColumnCache {
return &dataColumnCache{entries: make(map[dataColumnCacheKey]*dataColumnCacheEntry)}
}
// ensure returns the entry for the given key, creating it if it isn't already present.
func (c *dataColumnCache) ensure(key dataColumnCacheKey) *dataColumnCacheEntry {
entry, ok := c.entries[key]
if !ok {
entry = &dataColumnCacheEntry{}
c.entries[key] = entry
}
return entry
}
// delete removes the cache entry from the cache.
func (c *dataColumnCache) delete(key dataColumnCacheKey) {
delete(c.entries, key)
}
// dataColumnCacheEntry holds a fixed-length cache of DataColumnSidecars.
type dataColumnCacheEntry struct {
scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
diskSummary filesystem.DataColumnStorageSummary
}
func (e *dataColumnCacheEntry) setDiskSummary(sum filesystem.DataColumnStorageSummary) {
e.diskSummary = sum
}
// stash adds an item to the in-memory cache of DataColumnSidecars.
// Only the first DataColumnSidecar of a given Index will be kept in the cache.
// stash will return an error if the given data column is already in the cache, or if the Index is out of bounds.
func (e *dataColumnCacheEntry) stash(sc *blocks.RODataColumn) error {
if sc.Index >= fieldparams.NumberOfColumns {
return errors.Wrapf(errColumnIndexTooHigh, "index=%d", sc.Index)
}
if e.scs[sc.Index] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitments)
}
e.scs[sc.Index] = sc
return nil
}
func (e *dataColumnCacheEntry) filter(root [32]byte, commitmentsArray *safeCommitmentsArray) ([]blocks.RODataColumn, error) {
nonEmptyIndices := commitmentsArray.nonEmptyIndices()
if e.diskSummary.AllAvailable(nonEmptyIndices) {
return nil, nil
}
commitmentsCount := commitmentsArray.count()
sidecars := make([]blocks.RODataColumn, 0, commitmentsCount)
for i := range nonEmptyIndices {
if e.diskSummary.HasIndex(i) {
continue
}
if e.scs[i] == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !reflect.DeepEqual(commitmentsArray[i], e.scs[i].KzgCommitments) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitments, commitmentsArray[i])
}
sidecars = append(sidecars, *e.scs[i])
}
return sidecars, nil
}
// safeCommitmentsArray is a fixed size array of commitments.
// This is helpful for avoiding gratuitous bounds checks.
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte
// count returns the number of commitments in the array.
func (s *safeCommitmentsArray) count() int {
count := 0
for i := range s {
if s[i] != nil {
count++
}
}
return count
}
// nonEmptyIndices returns a map of indices that are non-nil in the array.
func (s *safeCommitmentsArray) nonEmptyIndices() map[uint64]bool {
columns := make(map[uint64]bool)
for i := range s {
if s[i] != nil {
columns[uint64(i)] = true
}
}
return columns
}
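
The cache above is easiest to read as three steps: stash sidecars as they arrive (rejecting out-of-range indices and duplicates), track what is already on disk, and filter out whatever still needs to be persisted. Below is a minimal, self-contained sketch of that flow; the names sidecar, entry and numberOfColumns are simplified stand-ins for blocks.RODataColumn, dataColumnCacheEntry and fieldparams.NumberOfColumns, not Prysm code.

package main

import (
	"errors"
	"fmt"
)

const numberOfColumns = 128 // stand-in for fieldparams.NumberOfColumns

type sidecar struct {
	Index      uint64
	Commitment []byte
}

type entry struct {
	scs    [numberOfColumns]*sidecar // cached sidecars, indexed by column
	onDisk [numberOfColumns]bool     // stand-in for the disk storage summary
}

// stash keeps only the first sidecar seen for a given index.
func (e *entry) stash(sc *sidecar) error {
	if sc.Index >= numberOfColumns {
		return fmt.Errorf("index %d out of bounds", sc.Index)
	}
	if e.scs[sc.Index] != nil {
		return errors.New("duplicate sidecar")
	}
	e.scs[sc.Index] = sc
	return nil
}

// filter returns the cached sidecars for the wanted indices that are not yet on disk.
func (e *entry) filter(want map[uint64]bool) ([]*sidecar, error) {
	out := make([]*sidecar, 0, len(want))
	for idx := range want {
		if e.onDisk[idx] {
			continue // already durably stored
		}
		if e.scs[idx] == nil {
			return nil, fmt.Errorf("missing sidecar for index %d", idx)
		}
		out = append(out, e.scs[idx])
	}
	return out, nil
}

func main() {
	e := &entry{}
	_ = e.stash(&sidecar{Index: 1, Commitment: []byte{1}})
	pending, err := e.filter(map[uint64]bool{1: true})
	fmt.Println(len(pending), err) // 1 <nil>
}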

View File

@@ -0,0 +1,124 @@
package das
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestEnsureDeleteSetDiskSummary(t *testing.T) {
c := newDataColumnCache()
key := dataColumnCacheKey{}
entry := c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true})
entry.setDiskSummary(diskSummary)
entry = c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{diskSummary: diskSummary}, *entry)
c.delete(key)
entry = c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
}
func TestStash(t *testing.T) {
t.Run("Index too high", func(t *testing.T) {
dataColumnParamsByBlockRoot := verification.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 10_000}}}
roDataColumns, _ := verification.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
var entry dataColumnCacheEntry
err := entry.stash(&roDataColumns[0])
require.NotNil(t, err)
})
t.Run("Nominal and already existing", func(t *testing.T) {
dataColumnParamsByBlockRoot := verification.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 1}}}
roDataColumns, _ := verification.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
var entry dataColumnCacheEntry
err := entry.stash(&roDataColumns[0])
require.NoError(t, err)
require.DeepEqual(t, roDataColumns[0], entry.scs[1])
err = entry.stash(&roDataColumns[0])
require.NotNil(t, err)
})
}
func TestFilterDataColumns(t *testing.T) {
t.Run("All available", func(t *testing.T) {
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true, false, true})
dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}
actual, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
require.NoError(t, err)
require.IsNil(t, actual)
})
t.Run("Some scs missing", func(t *testing.T) {
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{})
dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}
_, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
require.NotNil(t, err)
})
t.Run("Commitments not equal", func(t *testing.T) {
root := [fieldparams.RootLength]byte{}
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}
dataColumnParamsByBlockRoot := verification.DataColumnsParamsByRoot{root: {{ColumnIndex: 1}}}
roDataColumns, _ := verification.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[1] = &roDataColumns[0]
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs}
_, err := dataColumnCacheEntry.filter(root, &commitmentsArray)
require.NotNil(t, err)
})
t.Run("Nominal", func(t *testing.T) {
root := [fieldparams.RootLength]byte{}
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})
dataColumnParamsByBlockRoot := verification.DataColumnsParamsByRoot{root: {{ColumnIndex: 3, KzgCommitments: [][]byte{[]byte{3}}}}}
expected, _ := verification.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[3] = &expected[0]
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs, diskSummary: diskSummary}
actual, err := dataColumnCacheEntry.filter(root, &commitmentsArray)
require.NoError(t, err)
require.DeepEqual(t, expected, actual)
})
}
func TestCount(t *testing.T) {
s := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
require.Equal(t, 2, s.count())
}
func TestNonEmptyIndices(t *testing.T) {
s := safeCommitmentsArray{nil, [][]byte{[]byte{10}}, nil, [][]byte{[]byte{20}}}
actual := s.nonEmptyIndices()
require.DeepEqual(t, map[uint64]bool{1: true, 3: true}, actual)
}

View File

@@ -15,5 +15,5 @@ import (
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, sc ...blocks.ROBlob) error
Persist(current primitives.Slot, sc ...blocks.ROSidecar) error
}

View File

@@ -5,6 +5,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
errors "github.com/pkg/errors"
)
// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
@@ -24,9 +25,13 @@ func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current pri
}
// Persist satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROSidecar) error {
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sc)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
if m.PersistBlobsCallback != nil {
return m.PersistBlobsCallback(current, sc...)
return m.PersistBlobsCallback(current, blobSidecars...)
}
return nil
}

View File

@@ -518,7 +518,7 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
switch {
case hasFuluKey(enc):
protoState := &ethpb.BeaconStateElectra{}
protoState := &ethpb.BeaconStateFulu{}
if err := protoState.UnmarshalSSZ(enc[len(fuluKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for Fulu")
}
@@ -690,7 +690,7 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
}
return snappy.Encode(nil, append(ElectraKey, rawObj...)), nil
case version.Fulu:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateElectra)
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateFulu)
if !ok {
return nil, errors.New("non valid inner state")
}
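
The two hunks above follow the same pattern: tag the raw SSZ bytes with a fork-specific key before snappy-compressing, and use the key when decoding to pick the right proto type. A hedged, standalone sketch of that pattern follows; the key value is made up, and Prysm's real keys and hasFuluKey helper live alongside the code shown above.

package main

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy"
)

// forkKey is an illustrative stand-in for keys such as fuluKey in the hunks above.
var forkKey = []byte("fulu")

// marshal prefixes the raw object with the fork key and snappy-compresses the result.
func marshal(raw []byte) []byte {
	return snappy.Encode(nil, append(forkKey, raw...))
}

// unmarshal decompresses, checks the fork key, and returns the raw bytes that follow it.
func unmarshal(enc []byte) ([]byte, error) {
	dec, err := snappy.Decode(nil, enc)
	if err != nil {
		return nil, err
	}
	if !bytes.HasPrefix(dec, forkKey) {
		return nil, fmt.Errorf("missing fork key prefix")
	}
	return dec[len(forkKey):], nil
}

func main() {
	enc := marshal([]byte("ssz-bytes"))
	raw, err := unmarshal(enc)
	fmt.Println(string(raw), err) // ssz-bytes <nil>
}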

View File

@@ -31,6 +31,7 @@ go_library(
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/execution/types:go_default_library",
@@ -97,6 +98,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",

View File

@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -44,11 +45,16 @@ var (
GetPayloadMethodV3,
GetPayloadBodiesByHashV1,
GetPayloadBodiesByRangeV1,
GetBlobsV1,
}
electraEngineEndpoints = []string{
NewPayloadMethodV4,
GetPayloadMethodV4,
}
fuluEngineEndpoints = []string{
GetPayloadMethodV5,
GetBlobsV2,
}
)
const (
@@ -73,6 +79,8 @@ const (
GetPayloadMethodV3 = "engine_getPayloadV3"
// GetPayloadMethodV4 is the get payload method added for electra
GetPayloadMethodV4 = "engine_getPayloadV4"
// GetPayloadMethodV5 is the get payload method added for fulu
GetPayloadMethodV5 = "engine_getPayloadV5"
// BlockByHashMethod request string for JSON-RPC.
BlockByHashMethod = "eth_getBlockByHash"
// BlockByNumberMethod request string for JSON-RPC.
@@ -85,11 +93,21 @@ const (
ExchangeCapabilities = "engine_exchangeCapabilities"
// GetBlobsV1 request string for JSON-RPC.
GetBlobsV1 = "engine_getBlobsV1"
// GetBlobsV2 request string for JSON-RPC.
GetBlobsV2 = "engine_getBlobsV2"
// Defines the seconds before timing out engine endpoints with non-block execution semantics.
defaultEngineTimeout = time.Second
// TODO: Remove this temporarily needed hack. Geth accepts blob txs with blob proofs and
// does the heavy lifting of building cell proofs, while normally this is done by the tx sender.
// This hack lets the CL act as if the tx sender had actually computed the cell proofs.
// The only downside is that `engine_getPayloadV<x>` takes a lot of time.
// defaultEngineTimeout = time.Second
defaultEngineTimeout = 2 * time.Second
)
var errInvalidPayloadBodyResponse = errors.New("engine api payload body response is invalid")
var (
errInvalidPayloadBodyResponse = errors.New("engine api payload body response is invalid")
errMissingBlobsAndProofsFromEL = errors.New("engine api payload body response is missing blobs and proofs")
)
// ForkchoiceUpdatedResponse is the response kind received by the
// engine_forkchoiceUpdatedV1 endpoint.
@@ -108,6 +126,7 @@ type Reconstructor interface {
ctx context.Context, blindedBlocks []interfaces.ReadOnlySignedBeaconBlock,
) ([]interfaces.SignedBeaconBlock, error)
ReconstructBlobSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, hi func(uint64) bool) ([]blocks.VerifiedROBlob, error)
ReconstructDataColumnSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) ([]blocks.VerifiedRODataColumn, error)
}
// EngineCaller defines a client that can interact with an Ethereum
@@ -257,6 +276,9 @@ func (s *Service) ForkchoiceUpdated(
func getPayloadMethodAndMessage(slot primitives.Slot) (string, proto.Message) {
pe := slots.ToEpoch(slot)
if pe >= params.BeaconConfig().FuluForkEpoch {
return GetPayloadMethodV5, &pb.ExecutionBundleFulu{}
}
if pe >= params.BeaconConfig().ElectraForkEpoch {
return GetPayloadMethodV4, &pb.ExecutionBundleElectra{}
}
@@ -289,7 +311,7 @@ func (s *Service) GetPayload(ctx context.Context, payloadId [8]byte, slot primit
}
res, err := blocks.NewGetPayloadResponse(result)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "new get payload response")
}
return res, nil
}
@@ -298,33 +320,36 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExchangeCapabilities")
defer span.End()
// Only check for electra related engine methods if it has been activated.
if params.ElectraEnabled() {
supportedEngineEndpoints = append(supportedEngineEndpoints, electraEngineEndpoints...)
}
var result []string
err := s.rpcClient.CallContext(ctx, &result, ExchangeCapabilities, supportedEngineEndpoints)
if err != nil {
if params.FuluEnabled() {
supportedEngineEndpoints = append(supportedEngineEndpoints, fuluEngineEndpoints...)
}
elSupportedEndpointsSlice := make([]string, len(supportedEngineEndpoints))
if err := s.rpcClient.CallContext(ctx, &elSupportedEndpointsSlice, ExchangeCapabilities, supportedEngineEndpoints); err != nil {
return nil, handleRPCError(err)
}
var unsupported []string
for _, s1 := range supportedEngineEndpoints {
supported := false
for _, s2 := range result {
if s1 == s2 {
supported = true
break
}
}
if !supported {
unsupported = append(unsupported, s1)
elSupportedEndpoints := make(map[string]bool, len(elSupportedEndpointsSlice))
for _, method := range elSupportedEndpointsSlice {
elSupportedEndpoints[method] = true
}
unsupported := make([]string, 0, len(supportedEngineEndpoints))
for _, method := range supportedEngineEndpoints {
if !elSupportedEndpoints[method] {
unsupported = append(unsupported, method)
}
}
if len(unsupported) != 0 {
log.Warnf("Please update client, detected the following unsupported engine methods: %s", unsupported)
log.WithField("methods", unsupported).Warning("Connected execution client does not support some requested engine methods")
}
return result, handleRPCError(err)
return elSupportedEndpointsSlice, nil
}
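
The rewritten ExchangeCapabilities above replaces the nested supported/unsupported comparison with a single map lookup. A minimal sketch of that membership-check pattern, using made-up method names rather than the real engine endpoints:

package main

import "fmt"

// unsupported returns every requested method the remote side did not report back.
func unsupported(requested, supported []string) []string {
	set := make(map[string]bool, len(supported))
	for _, m := range supported {
		set[m] = true
	}
	missing := make([]string, 0, len(requested))
	for _, m := range requested {
		if !set[m] {
			missing = append(missing, m)
		}
	}
	return missing
}

func main() {
	requested := []string{"engine_methodA", "engine_methodB", "engine_methodC"}
	supported := []string{"engine_methodA", "engine_methodC"}
	fmt.Println(unsupported(requested, supported)) // [engine_methodB]
}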
// GetTerminalBlockHash returns the valid terminal block hash based on total difficulty.
@@ -495,9 +520,10 @@ func (s *Service) HeaderByNumber(ctx context.Context, number *big.Int) (*types.H
func (s *Service) GetBlobs(ctx context.Context, versionedHashes []common.Hash) ([]*pb.BlobAndProof, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobs")
defer span.End()
// If the execution engine does not support `GetBlobsV1`, return early to prevent encountering an error later.
if !s.capabilityCache.has(GetBlobsV1) {
return nil, nil
return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV1))
}
result := make([]*pb.BlobAndProof, len(versionedHashes))
@@ -505,6 +531,19 @@ func (s *Service) GetBlobs(ctx context.Context, versionedHashes []common.Hash) (
return result, handleRPCError(err)
}
func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash) ([]*pb.BlobAndProofV2, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsV2")
defer span.End()
if !s.capabilityCache.has(GetBlobsV2) {
return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV2))
}
result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)
return result, handleRPCError(err)
}
// ReconstructFullBlock takes in a blinded beacon block and reconstructs
// a beacon block with a full execution payload via the engine API.
func (s *Service) ReconstructFullBlock(
@@ -615,6 +654,73 @@ func (s *Service) ReconstructBlobSidecars(ctx context.Context, block interfaces.
return verifiedBlobs, nil
}
// ReconstructDataColumnSidecars reconstructs the verified data column sidecars for a given beacon block.
// It retrieves the KZG commitments from the block body, fetches the associated blobs and cell proofs from the EL,
// and constructs the corresponding verified read-only data column sidecars.
func (s *Service) ReconstructDataColumnSidecars(ctx context.Context, signedROBlock interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte) ([]blocks.VerifiedRODataColumn, error) {
block := signedROBlock.Block()
log := log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", blockRoot),
"slot": block.Slot(),
})
kzgCommitments, err := block.Body().BlobKzgCommitments()
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "blob KZG commitments")
}
// Collect KZG hashes for all blobs
var kzgHashes []common.Hash
for _, commitment := range kzgCommitments {
kzgHashes = append(kzgHashes, primitives.ConvertKzgCommitmentToVersionedHash(commitment))
}
// Fetch all blobsAndCellsProofs from EL
blobAndProofV2s, err := s.GetBlobsV2(ctx, kzgHashes)
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "get blobs V2")
}
// Return early if nothing is returned from the EL.
if len(blobAndProofV2s) == 0 {
log.Debug("No blobs returned from EL")
return nil, nil
}
// Extract the blobs and proofs from the blobAndProofV2s.
blobs := make([][]byte, 0, len(blobAndProofV2s))
cellProofs := make([][]byte, 0, len(blobAndProofV2s))
for _, blobsAndProofs := range blobAndProofV2s {
if blobsAndProofs == nil {
return nil, wrapWithBlockRoot(errMissingBlobsAndProofsFromEL, blockRoot, "")
}
blobs = append(blobs, blobsAndProofs.Blob)
cellProofs = append(cellProofs, blobsAndProofs.KzgProofs...)
}
dataColumnSidecars, err := peerdas.ConstructDataColumnSidecars(signedROBlock, blobs, cellProofs)
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "construct data column sidecars")
}
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(dataColumnSidecars))
for _, dataColumnSidecar := range dataColumnSidecars {
roDataColumn, err := blocks.NewRODataColumnWithRoot(dataColumnSidecar, blockRoot)
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "new read-only data column with root")
}
// We trust the execution layer we are connected to, so we can upgrade the read-only data column sidecar into a verified one.
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRODataColumns = append(verifiedRODataColumns, verifiedRODataColumn)
}
log.Debug("Data columns successfully reconstructed from EL")
return verifiedRODataColumns, nil
}
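
ReconstructDataColumnSidecars above converts each blob KZG commitment into a versioned hash before querying the EL. For reference, here is a hedged, standalone sketch of that conversion as defined by EIP-4844 (sha256 of the commitment with the first byte replaced by the KZG version byte 0x01); Prysm's own helper for this is primitives.ConvertKzgCommitmentToVersionedHash.

package main

import (
	"crypto/sha256"
	"fmt"
)

const versionedHashVersionKzg = byte(0x01)

// kzgToVersionedHash hashes a 48-byte KZG commitment and stamps the version byte.
func kzgToVersionedHash(commitment []byte) [32]byte {
	h := sha256.Sum256(commitment)
	h[0] = versionedHashVersionKzg
	return h
}

func main() {
	commitment := make([]byte, 48) // placeholder commitment, all zeros
	fmt.Printf("%#x\n", kzgToVersionedHash(commitment))
}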
func fullPayloadFromPayloadBody(
header interfaces.ExecutionData, body *pb.ExecutionPayloadBody, bVersion int,
) (interfaces.ExecutionData, error) {
@@ -902,3 +1008,8 @@ func toBlockNumArg(number *big.Int) string {
}
return hexutil.EncodeBig(number)
}
// wrapWithBlockRoot returns a new error with the given block root.
func wrapWithBlockRoot(err error, blockRoot [32]byte, message string) error {
return errors.Wrap(err, fmt.Sprintf("%s for block %#x", message, blockRoot))
}

View File

@@ -13,6 +13,7 @@ import (
"strings"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
mocks "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
@@ -167,6 +168,7 @@ func TestClient_HTTP(t *testing.T) {
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.FuluForkEpoch = 4
params.OverrideBeaconConfig(cfg)
t.Run(GetPayloadMethod, func(t *testing.T) {
@@ -317,11 +319,11 @@ func TestClient_HTTP(t *testing.T) {
require.DeepEqual(t, uint64(2), g)
commitments := [][]byte{bytesutil.PadTo([]byte("commitment1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("commitment2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, commitments, resp.BlobsBundle.KzgCommitments)
require.DeepEqual(t, commitments, resp.BlobsBundler.GetKzgCommitments())
proofs := [][]byte{bytesutil.PadTo([]byte("proof1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("proof2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, proofs, resp.BlobsBundle.Proofs)
require.DeepEqual(t, proofs, resp.BlobsBundler.GetProofs())
blobs := [][]byte{bytesutil.PadTo([]byte("a"), fieldparams.BlobLength), bytesutil.PadTo([]byte("b"), fieldparams.BlobLength)}
require.DeepEqual(t, blobs, resp.BlobsBundle.Blobs)
require.DeepEqual(t, blobs, resp.BlobsBundler.GetBlobs())
})
t.Run(GetPayloadMethodV4, func(t *testing.T) {
payloadId := [8]byte{1}
@@ -372,11 +374,11 @@ func TestClient_HTTP(t *testing.T) {
require.DeepEqual(t, uint64(2), g)
commitments := [][]byte{bytesutil.PadTo([]byte("commitment1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("commitment2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, commitments, resp.BlobsBundle.KzgCommitments)
require.DeepEqual(t, commitments, resp.BlobsBundler.GetKzgCommitments())
proofs := [][]byte{bytesutil.PadTo([]byte("proof1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("proof2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, proofs, resp.BlobsBundle.Proofs)
require.DeepEqual(t, proofs, resp.BlobsBundler.GetProofs())
blobs := [][]byte{bytesutil.PadTo([]byte("a"), fieldparams.BlobLength), bytesutil.PadTo([]byte("b"), fieldparams.BlobLength)}
require.DeepEqual(t, blobs, resp.BlobsBundle.Blobs)
require.DeepEqual(t, blobs, resp.BlobsBundler.GetBlobs())
requests := &pb.ExecutionRequests{
Deposits: []*pb.DepositRequest{
{
@@ -405,7 +407,52 @@ func TestClient_HTTP(t *testing.T) {
require.DeepEqual(t, requests, resp.ExecutionRequests)
})
t.Run(GetPayloadMethodV5, func(t *testing.T) {
payloadId := [8]byte{1}
want, ok := fix["ExecutionBundleFulu"].(*pb.GetPayloadV5ResponseJson)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := io.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
reqArg, err := json.Marshal(pb.PayloadIDBytes(payloadId))
require.NoError(t, err)
// We expect the JSON string RPC request contains the right arguments.
require.Equal(t, true, strings.Contains(
jsonRequestString, string(reqArg),
))
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": want,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
client := &Service{}
client.rpcClient = rpcClient
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.GetPayload(ctx, payloadId, 4*params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
_, ok = resp.BlobsBundler.(*pb.BlobsBundleV2)
if !ok {
t.Logf("resp.BlobsBundler has unexpected type: %T", resp.BlobsBundler)
}
require.Equal(t, ok, true)
})
t.Run(ForkchoiceUpdatedMethod+" VALID status", func(t *testing.T) {
forkChoiceState := &pb.ForkchoiceState{
HeadBlockHash: []byte("head"),
@@ -1539,6 +1586,7 @@ func fixtures() map[string]interface{} {
"ExecutionPayloadCapellaWithValue": s.ExecutionPayloadWithValueCapella,
"ExecutionPayloadDenebWithValue": s.ExecutionPayloadWithValueDeneb,
"ExecutionBundleElectra": s.ExecutionBundleElectra,
"ExecutionBundleFulu": s.ExecutionBundleFulu,
"ValidPayloadStatus": s.ValidPayloadStatus,
"InvalidBlockHashStatus": s.InvalidBlockHashStatus,
"AcceptedStatus": s.AcceptedStatus,
@@ -1774,6 +1822,36 @@ func fixturesStruct() *payloadFixtures {
append([]byte{pb.WithdrawalRequestType}, withdrawalRequestBytes...),
append([]byte{pb.ConsolidationRequestType}, consolidationRequestBytes...)},
}
executionBundleFixtureFulu := &pb.GetPayloadV5ResponseJson{
ShouldOverrideBuilder: true,
ExecutionPayload: &pb.ExecutionPayloadDenebJSON{
ParentHash: &common.Hash{'a'},
FeeRecipient: &common.Address{'b'},
StateRoot: &common.Hash{'c'},
ReceiptsRoot: &common.Hash{'d'},
LogsBloom: &hexutil.Bytes{'e'},
PrevRandao: &common.Hash{'f'},
BaseFeePerGas: "0x123",
BlockHash: &common.Hash{'g'},
Transactions: []hexutil.Bytes{{'h'}},
Withdrawals: []*pb.Withdrawal{},
BlockNumber: &hexUint,
GasLimit: &hexUint,
GasUsed: &hexUint,
Timestamp: &hexUint,
BlobGasUsed: &bgu,
ExcessBlobGas: &ebg,
},
BlockValue: "0x11fffffffff",
BlobsBundle: &pb.BlobBundleV2JSON{
Commitments: []hexutil.Bytes{[]byte("commitment1"), []byte("commitment2")},
Proofs: []hexutil.Bytes{[]byte("proof1"), []byte("proof2")},
Blobs: []hexutil.Bytes{{'a'}, {'b'}},
},
ExecutionRequests: []hexutil.Bytes{append([]byte{pb.DepositRequestType}, depositRequestBytes...),
append([]byte{pb.WithdrawalRequestType}, withdrawalRequestBytes...),
append([]byte{pb.ConsolidationRequestType}, consolidationRequestBytes...)},
}
parent := bytesutil.PadTo([]byte("parentHash"), fieldparams.RootLength)
sha3Uncles := bytesutil.PadTo([]byte("sha3Uncles"), fieldparams.RootLength)
miner := bytesutil.PadTo([]byte("miner"), fieldparams.FeeRecipientLength)
@@ -1868,6 +1946,7 @@ func fixturesStruct() *payloadFixtures {
ExecutionPayloadWithValueCapella: executionPayloadWithValueFixtureCapella,
ExecutionPayloadWithValueDeneb: executionPayloadWithValueFixtureDeneb,
ExecutionBundleElectra: executionBundleFixtureElectra,
ExecutionBundleFulu: executionBundleFixtureFulu,
ValidPayloadStatus: validStatus,
InvalidBlockHashStatus: inValidBlockHashStatus,
AcceptedStatus: acceptedStatus,
@@ -1892,6 +1971,7 @@ type payloadFixtures struct {
ExecutionPayloadWithValueCapella *pb.GetPayloadV2ResponseJson
ExecutionPayloadWithValueDeneb *pb.GetPayloadV3ResponseJson
ExecutionBundleElectra *pb.GetPayloadV4ResponseJson
ExecutionBundleFulu *pb.GetPayloadV5ResponseJson
ValidPayloadStatus *pb.PayloadStatus
InvalidBlockHashStatus *pb.PayloadStatus
AcceptedStatus *pb.PayloadStatus
@@ -2361,7 +2441,7 @@ func Test_ExchangeCapabilities(t *testing.T) {
for _, item := range results {
require.NotNil(t, item)
}
assert.LogsContain(t, logHook, "Please update client, detected the following unsupported engine methods:")
assert.LogsContain(t, logHook, "Connected execution client does not support some requested engine methods")
})
t.Run("list of items", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -2424,7 +2504,7 @@ func TestReconstructBlobSidecars(t *testing.T) {
t.Run("get-blobs end point is not supported", func(t *testing.T) {
hi := mockSummary(t, []bool{true, true, true, true, true, false})
verifiedBlobs, err := client.ReconstructBlobSidecars(ctx, sb, r, hi)
require.NoError(t, err)
require.ErrorContains(t, "engine_getBlobsV1 is not supported", err)
require.Equal(t, 0, len(verifiedBlobs))
})
@@ -2476,6 +2556,76 @@ func TestReconstructBlobSidecars(t *testing.T) {
})
}
func TestReconstructDataColumnSidecars(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
// Setup right fork epoch
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.FuluForkEpoch = 4
params.OverrideBeaconConfig(cfg)
client := &Service{capabilityCache: &capabilityCache{}}
b := util.NewBeaconBlockFulu()
b.Block.Slot = 4 * params.BeaconConfig().SlotsPerEpoch
kzgCommitments := createRandomKzgCommitments(t, 6)
b.Block.Body.BlobKzgCommitments = kzgCommitments
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
ctx := context.Background()
t.Run("GetBlobsV2 is not supported", func(t *testing.T) {
_, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.ErrorContains(t, "get blobs V2 for block", err)
})
t.Run("nothing received", func(t *testing.T) {
srv := createBlobServerV2(t, 0, []bool{})
defer srv.Close()
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.NoError(t, err)
require.Equal(t, 0, len(dataColumns))
})
t.Run("receiving all blobs", func(t *testing.T) {
blobMasks := []bool{true, true, true, true, true, true}
srv := createBlobServerV2(t, 6, blobMasks)
defer srv.Close()
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.NoError(t, err)
require.Equal(t, 128, len(dataColumns))
})
t.Run("missing some blobs", func(t *testing.T) {
blobMasks := []bool{false, true, true, true, true, true}
srv := createBlobServerV2(t, 6, blobMasks)
defer srv.Close()
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.ErrorContains(t, errMissingBlobsAndProofsFromEL.Error(), err)
require.Equal(t, 0, len(dataColumns))
})
}
func createRandomKzgCommitments(t *testing.T, num int) [][]byte {
kzgCommitments := make([][]byte, num)
for i := range kzgCommitments {
@@ -2511,6 +2661,42 @@ func createBlobServer(t *testing.T, numBlobs int, callbackFuncs ...func()) *http
}))
}
func createBlobServerV2(t *testing.T, numBlobs int, blobMasks []bool) *httptest.Server {
return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
require.Equal(t, len(blobMasks), numBlobs)
blobAndCellProofs := make([]*pb.BlobAndProofV2Json, numBlobs)
for i := range blobAndCellProofs {
if !blobMasks[i] {
continue
}
blobAndCellProofs[i] = &pb.BlobAndProofV2Json{
Blob: []byte("0xblob"),
KzgProofs: []hexutil.Bytes{},
}
for j := 0; j < int(params.BeaconConfig().NumberOfColumns); j++ {
cellProof := make([]byte, 48)
blobAndCellProofs[i].KzgProofs = append(blobAndCellProofs[i].KzgProofs, cellProof)
}
}
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": blobAndCellProofs,
}
err := json.NewEncoder(w).Encode(respJSON)
require.NoError(t, err)
}))
}
func setupRpcClient(t *testing.T, url string, client *Service) (*rpc.Client, *Service) {
rpcClient, err := rpc.DialHTTP(url)
require.NoError(t, err)
@@ -2522,6 +2708,12 @@ func setupRpcClient(t *testing.T, url string, client *Service) (*rpc.Client, *Se
return rpcClient, client
}
func setupRpcClientV2(t *testing.T, url string, client *Service) (*rpc.Client, *Service) {
rpcClient, client := setupRpcClient(t, url, client)
client.capabilityCache = &capabilityCache{capabilities: map[string]interface{}{GetBlobsV2: nil}}
return rpcClient, client
}
func testNewBlobVerifier() verification.NewBlobVerifier {
return func(b blocks.ROBlob, reqs []verification.Requirement) verification.BlobVerifier {
return &verification.MockBlobVerifier{

View File

@@ -38,6 +38,8 @@ type EngineClient struct {
ErrGetPayload error
BlobSidecars []blocks.VerifiedROBlob
ErrorBlobSidecars error
DataColumnSidecars []blocks.VerifiedRODataColumn
ErrorDataColumnSidecars error
}
// NewPayload --
@@ -113,6 +115,10 @@ func (e *EngineClient) ReconstructBlobSidecars(context.Context, interfaces.ReadO
return e.BlobSidecars, e.ErrorBlobSidecars
}
func (e *EngineClient) ReconstructDataColumnSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) ([]blocks.VerifiedRODataColumn, error) {
return e.DataColumnSidecars, e.ErrorDataColumnSidecars
}
// GetTerminalBlockHash --
func (e *EngineClient) GetTerminalBlockHash(ctx context.Context, transitionTime uint64) ([]byte, bool, error) {
ttd := new(big.Int)

View File

@@ -23,6 +23,7 @@ go_library(
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/kv:go_default_library",

View File

@@ -25,6 +25,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
@@ -86,44 +87,47 @@ type serviceFlagOpts struct {
// full PoS node. It handles the lifecycle of the entire system and registers
// services to a service registry.
type BeaconNode struct {
cliCtx *cli.Context
ctx context.Context
cancel context.CancelFunc
services *runtime.ServiceRegistry
lock sync.RWMutex
stop chan struct{} // Channel to wait for termination notifications.
db db.Database
slasherDB db.SlasherDatabase
attestationCache *cache.AttestationCache
attestationPool attestations.Pool
exitPool voluntaryexits.PoolManager
slashingsPool slashings.PoolManager
syncCommitteePool synccommittee.Pool
blsToExecPool blstoexec.PoolManager
depositCache cache.DepositCache
trackedValidatorsCache *cache.TrackedValidatorsCache
payloadIDCache *cache.PayloadIDCache
stateFeed *event.Feed
blockFeed *event.Feed
opFeed *event.Feed
stateGen *stategen.State
collector *bcnodeCollector
slasherBlockHeadersFeed *event.Feed
slasherAttestationsFeed *event.Feed
finalizedStateAtStartUp state.BeaconState
serviceFlagOpts *serviceFlagOpts
GenesisInitializer genesis.Initializer
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
clockWaiter startup.ClockWaiter
BackfillOpts []backfill.ServiceOption
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
BlobStorageOptions []filesystem.BlobStorageOption
verifyInitWaiter *verification.InitializerWaiter
syncChecker *initialsync.SyncChecker
slasherEnabled bool
lcStore *lightclient.Store
cliCtx *cli.Context
ctx context.Context
cancel context.CancelFunc
services *runtime.ServiceRegistry
lock sync.RWMutex
stop chan struct{} // Channel to wait for termination notifications.
db db.Database
slasherDB db.SlasherDatabase
attestationCache *cache.AttestationCache
attestationPool attestations.Pool
exitPool voluntaryexits.PoolManager
slashingsPool slashings.PoolManager
syncCommitteePool synccommittee.Pool
blsToExecPool blstoexec.PoolManager
depositCache cache.DepositCache
trackedValidatorsCache *cache.TrackedValidatorsCache
payloadIDCache *cache.PayloadIDCache
stateFeed *event.Feed
blockFeed *event.Feed
opFeed *event.Feed
stateGen *stategen.State
collector *bcnodeCollector
slasherBlockHeadersFeed *event.Feed
slasherAttestationsFeed *event.Feed
finalizedStateAtStartUp state.BeaconState
serviceFlagOpts *serviceFlagOpts
GenesisInitializer genesis.Initializer
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
clockWaiter startup.ClockWaiter
BackfillOpts []backfill.ServiceOption
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
BlobStorageOptions []filesystem.BlobStorageOption
DataColumnStorage *filesystem.DataColumnStorage
DataColumnStorageOptions []filesystem.DataColumnStorageOption
custodyInfo *peerdas.CustodyInfo
verifyInitWaiter *verification.InitializerWaiter
syncChecker *initialsync.SyncChecker
slasherEnabled bool
lcStore *lightclient.Store
}
// New creates a new node instance, sets up configuration options, and registers
@@ -161,6 +165,7 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
serviceFlagOpts: &serviceFlagOpts{},
initialSyncComplete: make(chan struct{}),
syncChecker: &initialsync.SyncChecker{},
custodyInfo: &peerdas.CustodyInfo{},
slasherEnabled: cliCtx.Bool(flags.SlasherFlag.Name),
lcStore: &lightclient.Store{},
}
@@ -190,6 +195,15 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.BlobStorage = blobs
}
if beacon.DataColumnStorage == nil {
dataColumnStorage, err := filesystem.NewDataColumnStorage(cliCtx.Context, beacon.DataColumnStorageOptions...)
if err != nil {
return nil, errors.Wrap(err, "new data column storage")
}
beacon.DataColumnStorage = dataColumnStorage
}
bfs, err := startBaseServices(cliCtx, beacon, depositAddress)
if err != nil {
return nil, errors.Wrap(err, "could not start modules")
@@ -280,7 +294,9 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
if err := beacon.startDB(cliCtx, depositAddress); err != nil {
return nil, errors.Wrap(err, "could not start DB")
}
beacon.BlobStorage.WarmCache()
beacon.DataColumnStorage.WarmCache()
log.Debugln("Starting Slashing DB")
if err := beacon.startSlasherDB(cliCtx); err != nil {
@@ -491,6 +507,10 @@ func (b *BeaconNode) clearDB(clearDB, forceClearDB bool, d *kv.Store, dbPath str
return nil, errors.Wrap(err, "could not clear blob storage")
}
if err := b.DataColumnStorage.Clear(); err != nil {
return nil, errors.Wrap(err, "could not clear data column storage")
}
d, err = kv.NewKVStore(b.ctx, dbPath)
if err != nil {
return nil, errors.Wrap(err, "could not create new database")
@@ -697,6 +717,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
StateNotifier: b,
DB: b.db,
ClockWaiter: b.clockWaiter,
CustodyInfo: b.custodyInfo,
})
if err != nil {
return err
@@ -775,9 +796,11 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
blockchain.WithClockSynchronizer(gs),
blockchain.WithSyncComplete(syncComplete),
blockchain.WithBlobStorage(b.BlobStorage),
blockchain.WithDataColumnStorage(b.DataColumnStorage),
blockchain.WithTrackedValidatorsCache(b.trackedValidatorsCache),
blockchain.WithPayloadIDCache(b.payloadIDCache),
blockchain.WithSyncChecker(b.syncChecker),
blockchain.WithCustodyInfo(b.custodyInfo),
blockchain.WithSlasherEnabled(b.slasherEnabled),
blockchain.WithLightClientStore(b.lcStore),
)
@@ -862,8 +885,11 @@ func (b *BeaconNode) registerSyncService(initialSyncComplete chan struct{}, bFil
regularsync.WithInitialSyncComplete(initialSyncComplete),
regularsync.WithStateNotifier(b),
regularsync.WithBlobStorage(b.BlobStorage),
regularsync.WithDataColumnStorage(b.DataColumnStorage),
regularsync.WithVerifierWaiter(b.verifyInitWaiter),
regularsync.WithAvailableBlocker(bFillStore),
regularsync.WithTrackedValidatorsCache(b.trackedValidatorsCache),
regularsync.WithCustodyInfo(b.custodyInfo),
regularsync.WithSlasherEnabled(b.slasherEnabled),
regularsync.WithLightClientStore(b.lcStore),
)
@@ -889,6 +915,8 @@ func (b *BeaconNode) registerInitialSyncService(complete chan struct{}) error {
ClockWaiter: b.clockWaiter,
InitialSyncComplete: complete,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
CustodyInfo: b.custodyInfo,
}, opts...)
return b.services.RegisterService(is)
}
@@ -983,6 +1011,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
FinalizationFetcher: chainService,
BlockReceiver: chainService,
BlobReceiver: chainService,
DataColumnReceiver: chainService,
AttestationReceiver: chainService,
GenesisTimeFetcher: chainService,
GenesisFetcher: chainService,
@@ -1010,6 +1039,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
Router: router,
ClockWaiter: b.clockWaiter,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
TrackedValidatorsCache: b.trackedValidatorsCache,
PayloadIDCache: b.payloadIDCache,
LCStore: b.lcStore,
@@ -1151,6 +1181,7 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
func (b *BeaconNode) RegisterBackfillService(cliCtx *cli.Context, bfs *backfill.Store) error {
pa := peers.NewAssigner(b.fetchP2P().Peers(), b.forkChoicer)
// TODO: Add backfill for data column storage
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.clockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
if err != nil {
return errors.Wrap(err, "error initializing backfill service")

View File

@@ -54,7 +54,7 @@ func TestNodeClose_OK(t *testing.T) {
cmd.ValidatorMonitorIndicesFlag.Value.SetInt(1)
ctx, cancel := newCliContextWithCancel(&app, set)
node, err := New(ctx, cancel, WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)))
node, err := New(ctx, cancel, WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)), WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)))
require.NoError(t, err)
node.Close()
@@ -75,7 +75,7 @@ func TestNodeStart_Ok(t *testing.T) {
node, err := New(ctx, cancel, WithBlockchainFlagOptions([]blockchain.Option{}),
WithBuilderFlagOptions([]builder.Option{}),
WithExecutionChainOptions([]execution.Option{}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)))
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)), WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)))
require.NoError(t, err)
node.services = &runtime.ServiceRegistry{}
go func() {
@@ -99,7 +99,7 @@ func TestNodeStart_SyncChecker(t *testing.T) {
node, err := New(ctx, cancel, WithBlockchainFlagOptions([]blockchain.Option{}),
WithBuilderFlagOptions([]builder.Option{}),
WithExecutionChainOptions([]execution.Option{}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)))
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)), WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)))
require.NoError(t, err)
go func() {
node.Start()
@@ -130,7 +130,7 @@ func TestClearDB(t *testing.T) {
context, cancel := newCliContextWithCancel(&app, set)
options := []Option{
WithExecutionChainOptions([]execution.Option{execution.WithHttpEndpoint(endpoint)}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)), WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
}
_, err = New(context, cancel, options...)
require.NoError(t, err)

View File

@@ -50,3 +50,20 @@ func WithBlobStorageOptions(opt ...filesystem.BlobStorageOption) Option {
return nil
}
}
// WithDataColumnStorage sets the DataColumnStorage backend for the BeaconNode.
func WithDataColumnStorage(bs *filesystem.DataColumnStorage) Option {
return func(bn *BeaconNode) error {
bn.DataColumnStorage = bs
return nil
}
}
// WithDataColumnStorageOptions appends one or more filesystem.DataColumnStorageOption values to the beacon node,
// to be used when initializing data column storage.
func WithDataColumnStorageOptions(opt ...filesystem.DataColumnStorageOption) Option {
return func(bn *BeaconNode) error {
bn.DataColumnStorageOptions = append(bn.DataColumnStorageOptions, opt...)
return nil
}
}
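
Both helpers above follow the functional-options pattern Prysm uses for node construction: an Option is a function that mutates the node under construction and may fail. A minimal, self-contained sketch of the pattern with illustrative names (node, withStoragePath and withCacheSize are not Prysm APIs):

package main

import "fmt"

type node struct {
	storagePath string
	cacheSize   int
}

// option mutates the node under construction and may return an error.
type option func(*node) error

func withStoragePath(p string) option {
	return func(n *node) error {
		n.storagePath = p
		return nil
	}
}

func withCacheSize(size int) option {
	return func(n *node) error {
		if size <= 0 {
			return fmt.Errorf("cache size must be positive, got %d", size)
		}
		n.cacheSize = size
		return nil
	}
}

func newNode(opts ...option) (*node, error) {
	n := &node{cacheSize: 64} // defaults first, options override
	for _, opt := range opts {
		if err := opt(n); err != nil {
			return nil, err
		}
	}
	return n, nil
}

func main() {
	n, err := newNode(withStoragePath("/tmp/columns"), withCacheSize(128))
	fmt.Println(n.storagePath, n.cacheSize, err) // /tmp/columns 128 <nil>
}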

View File

@@ -222,7 +222,6 @@ func (s *Service) RefreshPersistentSubnets() {
// Get the sync subnet bitfield in our metadata.
currentBitSInMetadata := s.Metadata().SyncnetsBitfield()
// Is our sync bitvector record up to date?
isBitSUpToDate := bytes.Equal(bitS, inRecordBitS) && bytes.Equal(bitS, currentBitSInMetadata)
// Compare current epoch with the Fulu fork epoch.

View File

@@ -5,7 +5,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/peerdata"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
)
var _ Scorer = (*BadResponsesScorer)(nil)
@@ -129,13 +128,14 @@ func (s *BadResponsesScorer) IsBadPeer(pid peer.ID) error {
// isBadPeerNoLock is lock-free version of IsBadPeer.
func (s *BadResponsesScorer) isBadPeerNoLock(pid peer.ID) error {
if peerData, ok := s.store.PeerData(pid); ok {
if peerData.BadResponses >= s.config.Threshold {
return errors.Errorf("peer exceeded bad responses threshold: got %d, threshold %d", peerData.BadResponses, s.config.Threshold)
}
// if peerData, ok := s.store.PeerData(pid); ok {
// TODO: Remove this once out of devnet
// if peerData.BadResponses >= s.config.Threshold {
// return errors.Errorf("peer exceeded bad responses threshold: got %d, threshold %d", peerData.BadResponses, s.config.Threshold)
// }
return nil
}
// return nil
// }
return nil
}

View File

@@ -1,7 +1,6 @@
package scorers_test
import (
"sort"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
@@ -13,39 +12,41 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
)
func TestScorers_BadResponses_Score(t *testing.T) {
const pid = "peer1"
// TODO: Uncomment when out of devnet
// func TestScorers_BadResponses_Score(t *testing.T) {
// const pid = "peer1"
ctx := t.Context()
// ctx, cancel := context.WithCancel(context.Background())
// defer cancel()
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 4,
},
},
})
scorer := peerStatuses.Scorers().BadResponsesScorer()
// peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: 4,
// },
// },
// })
// scorer := peerStatuses.Scorers().BadResponsesScorer()
assert.Equal(t, 0., scorer.Score(pid), "Unexpected score for unregistered peer")
// assert.Equal(t, 0., scorer.Score(pid), "Unexpected score for unregistered peer")
scorer.Increment(pid)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, -2.5, scorer.Score(pid))
// scorer.Increment(pid)
// assert.NoError(t, scorer.IsBadPeer(pid))
// assert.Equal(t, -2.5, scorer.Score(pid))
scorer.Increment(pid)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, float64(-5), scorer.Score(pid))
// scorer.Increment(pid)
// assert.NoError(t, scorer.IsBadPeer(pid))
// assert.Equal(t, float64(-5), scorer.Score(pid))
scorer.Increment(pid)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, float64(-7.5), scorer.Score(pid))
// scorer.Increment(pid)
// assert.NoError(t, scorer.IsBadPeer(pid))
// assert.Equal(t, float64(-7.5), scorer.Score(pid))
scorer.Increment(pid)
assert.NotNil(t, scorer.IsBadPeer(pid))
assert.Equal(t, -100.0, scorer.Score(pid))
}
// scorer.Increment(pid)
// assert.NotNil(t, scorer.IsBadPeer(pid))
// assert.Equal(t, -100.0, scorer.Score(pid))
// }
func TestScorers_BadResponses_ParamsThreshold(t *testing.T) {
ctx := t.Context()
@@ -137,56 +138,60 @@ func TestScorers_BadResponses_Decay(t *testing.T) {
assert.Equal(t, 1, badResponses, "unexpected bad responses for pid3")
}
func TestScorers_BadResponses_IsBadPeer(t *testing.T) {
ctx := t.Context()
// TODO: Uncomment when out of devnet
// func TestScorers_BadResponses_IsBadPeer(t *testing.T) {
// ctx, cancel := context.WithCancel(context.Background())
// defer cancel()
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
})
scorer := peerStatuses.Scorers().BadResponsesScorer()
pid := peer.ID("peer1")
assert.NoError(t, scorer.IsBadPeer(pid))
// peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{},
// })
// scorer := peerStatuses.Scorers().BadResponsesScorer()
// pid := peer.ID("peer1")
// assert.NoError(t, scorer.IsBadPeer(pid))
peerStatuses.Add(nil, pid, nil, network.DirUnknown)
assert.NoError(t, scorer.IsBadPeer(pid))
// peerStatuses.Add(nil, pid, nil, network.DirUnknown)
// assert.NoError(t, scorer.IsBadPeer(pid))
for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
scorer.Increment(pid)
if i == scorers.DefaultBadResponsesThreshold-1 {
assert.NotNil(t, scorer.IsBadPeer(pid), "Unexpected peer status")
} else {
assert.NoError(t, scorer.IsBadPeer(pid), "Unexpected peer status")
}
}
}
// for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
// scorer.Increment(pid)
// if i == scorers.DefaultBadResponsesThreshold-1 {
// assert.NotNil(t, scorer.IsBadPeer(pid), "Unexpected peer status")
// } else {
// assert.NoError(t, scorer.IsBadPeer(pid), "Unexpected peer status")
// }
// }
// }
func TestScorers_BadResponses_BadPeers(t *testing.T) {
ctx := t.Context()
// TODO: Uncomment when out of devnet
// func TestScorers_BadResponses_BadPeers(t *testing.T) {
// ctx, cancel := context.WithCancel(context.Background())
// defer cancel()
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
})
scorer := peerStatuses.Scorers().BadResponsesScorer()
pids := []peer.ID{peer.ID("peer1"), peer.ID("peer2"), peer.ID("peer3"), peer.ID("peer4"), peer.ID("peer5")}
for i := 0; i < len(pids); i++ {
peerStatuses.Add(nil, pids[i], nil, network.DirUnknown)
}
for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
scorer.Increment(pids[1])
scorer.Increment(pids[2])
scorer.Increment(pids[4])
}
assert.NoError(t, scorer.IsBadPeer(pids[0]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[1]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[2]), "Invalid peer status")
assert.NoError(t, scorer.IsBadPeer(pids[3]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[4]), "Invalid peer status")
want := []peer.ID{pids[1], pids[2], pids[4]}
badPeers := scorer.BadPeers()
sort.Slice(badPeers, func(i, j int) bool {
return badPeers[i] < badPeers[j]
})
assert.DeepEqual(t, want, badPeers, "Unexpected list of bad peers")
}
// peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{},
// })
// scorer := peerStatuses.Scorers().BadResponsesScorer()
// pids := []peer.ID{peer.ID("peer1"), peer.ID("peer2"), peer.ID("peer3"), peer.ID("peer4"), peer.ID("peer5")}
// for i := 0; i < len(pids); i++ {
// peerStatuses.Add(nil, pids[i], nil, network.DirUnknown)
// }
// for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
// scorer.Increment(pids[1])
// scorer.Increment(pids[2])
// scorer.Increment(pids[4])
// }
// assert.NoError(t, scorer.IsBadPeer(pids[0]), "Invalid peer status")
// assert.NotNil(t, scorer.IsBadPeer(pids[1]), "Invalid peer status")
// assert.NotNil(t, scorer.IsBadPeer(pids[2]), "Invalid peer status")
// assert.NoError(t, scorer.IsBadPeer(pids[3]), "Invalid peer status")
// assert.NotNil(t, scorer.IsBadPeer(pids[4]), "Invalid peer status")
// want := []peer.ID{pids[1], pids[2], pids[4]}
// badPeers := scorer.BadPeers()
// sort.Slice(badPeers, func(i, j int) bool {
// return badPeers[i] < badPeers[j]
// })
// assert.DeepEqual(t, want, badPeers, "Unexpected list of bad peers")
// }

View File

@@ -42,7 +42,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, 10.0, scorer.Score("peer1"), "Unexpected score")
assert.Equal(t, nil, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
assert.NoError(t, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
_, _, topicMap, err := scorer.GossipData("peer1")
assert.NoError(t, err)
assert.Equal(t, uint64(100), topicMap["a"].TimeInMesh, "incorrect time in mesh")

View File

@@ -211,99 +211,102 @@ func TestScorers_Service_Score(t *testing.T) {
})
}
func TestScorers_Service_loop(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
// TODO: Uncomment when out of devnet
// func TestScorers_Service_loop(t *testing.T) {
// ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
// defer cancel()
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 5,
DecayInterval: 50 * time.Millisecond,
},
BlockProviderScorerConfig: &scorers.BlockProviderScorerConfig{
DecayInterval: 25 * time.Millisecond,
Decay: 64,
},
},
})
s1 := peerStatuses.Scorers().BadResponsesScorer()
s2 := peerStatuses.Scorers().BlockProviderScorer()
// peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: 5,
// DecayInterval: 50 * time.Millisecond,
// },
// BlockProviderScorerConfig: &scorers.BlockProviderScorerConfig{
// DecayInterval: 25 * time.Millisecond,
// Decay: 64,
// },
// },
// })
// s1 := peerStatuses.Scorers().BadResponsesScorer()
// s2 := peerStatuses.Scorers().BlockProviderScorer()
pid1 := peer.ID("peer1")
peerStatuses.Add(nil, pid1, nil, network.DirUnknown)
for i := 0; i < s1.Params().Threshold+5; i++ {
s1.Increment(pid1)
}
assert.NotNil(t, s1.IsBadPeer(pid1), "Peer should be marked as bad")
// pid1 := peer.ID("peer1")
// peerStatuses.Add(nil, pid1, nil, network.DirUnknown)
// for i := 0; i < s1.Params().Threshold+5; i++ {
// s1.Increment(pid1)
// }
// assert.NotNil(t, s1.IsBadPeer(pid1), "Peer should be marked as bad")
s2.IncrementProcessedBlocks("peer1", 221)
assert.Equal(t, uint64(221), s2.ProcessedBlocks("peer1"))
// s2.IncrementProcessedBlocks("peer1", 221)
// assert.Equal(t, uint64(221), s2.ProcessedBlocks("peer1"))
done := make(chan struct{}, 1)
go func() {
defer func() {
done <- struct{}{}
}()
ticker := time.NewTicker(50 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ticker.C:
if s1.IsBadPeer(pid1) == nil && s2.ProcessedBlocks("peer1") == 0 {
return
}
case <-ctx.Done():
t.Error("Timed out")
return
}
}
}()
// done := make(chan struct{}, 1)
// go func() {
// defer func() {
// done <- struct{}{}
// }()
// ticker := time.NewTicker(50 * time.Millisecond)
// defer ticker.Stop()
// for {
// select {
// case <-ticker.C:
// if s1.IsBadPeer(pid1) == nil && s2.ProcessedBlocks("peer1") == 0 {
// return
// }
// case <-ctx.Done():
// t.Error("Timed out")
// return
// }
// }
// }()
<-done
assert.NoError(t, s1.IsBadPeer(pid1), "Peer should not be marked as bad")
assert.Equal(t, uint64(0), s2.ProcessedBlocks("peer1"), "No blocks are expected")
}
// <-done
// assert.NoError(t, s1.IsBadPeer(pid1), "Peer should not be marked as bad")
// assert.Equal(t, uint64(0), s2.ProcessedBlocks("peer1"), "No blocks are expected")
// }
func TestScorers_Service_IsBadPeer(t *testing.T) {
peerStatuses := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 2,
DecayInterval: 50 * time.Second,
},
},
})
// TODO: Uncomment when out of devnet
// func TestScorers_Service_IsBadPeer(t *testing.T) {
// peerStatuses := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: 2,
// DecayInterval: 50 * time.Second,
// },
// },
// })
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
}
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
// peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
// peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
// assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
// }
func TestScorers_Service_BadPeers(t *testing.T) {
peerStatuses := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 2,
DecayInterval: 50 * time.Second,
},
},
})
// TODO: Uncomment when out of devnet
// func TestScorers_Service_BadPeers(t *testing.T) {
// peerStatuses := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: 2,
// DecayInterval: 50 * time.Second,
// },
// },
// })
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, 0, len(peerStatuses.Scorers().BadPeers()))
for _, pid := range []peer.ID{"peer1", "peer3"} {
peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
}
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, 2, len(peerStatuses.Scorers().BadPeers()))
}
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer3"))
// assert.Equal(t, 0, len(peerStatuses.Scorers().BadPeers()))
// for _, pid := range []peer.ID{"peer1", "peer3"} {
// peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
// peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
// }
// assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
// assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
// assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer3"))
// assert.Equal(t, 2, len(peerStatuses.Scorers().BadPeers()))
// }

View File

@@ -705,31 +705,47 @@ func (p *Status) deprecatedPrune() {
p.tallyIPTracker()
}
// BestFinalized returns the highest finalized epoch equal to or higher than ours that is agreed
// upon by the majority of peers. This method may not return the absolute highest finalized, but
// the finalized epoch in which most peers can serve blocks (plurality voting).
// Ideally, all peers would be reporting the same finalized epoch but some may be behind due to their
// own latency, or because of their finalized epoch at the time we queried them.
// Returns epoch number and list of peers that are at or beyond that epoch.
// BestFinalized returns the highest finalized epoch equal to or higher than `ourFinalizedEpoch`
// that is agreed upon by the majority of peers, and the peers agreeing on this finalized epoch.
// This method may not return the absolute highest finalized epoch, but the finalized epoch in which
// most peers can serve blocks (plurality voting). Ideally, all peers would be reporting the same
// finalized epoch but some may be behind due to their own latency, or because of their finalized
// epoch at the time we queried them. Returns epoch number and list of peers that are at or beyond
// that epoch.
func (p *Status) BestFinalized(maxPeers int, ourFinalizedEpoch primitives.Epoch) (primitives.Epoch, []peer.ID) {
// Retrieve all connected peers.
connected := p.Connected()
// key: finalized epoch, value: number of peers that support this finalized epoch.
finalizedEpochVotes := make(map[primitives.Epoch]uint64)
// key: peer ID, value: finalized epoch of the peer.
pidEpoch := make(map[peer.ID]primitives.Epoch, len(connected))
// key: peer ID, value: head slot of the peer.
pidHead := make(map[peer.ID]primitives.Slot, len(connected))
potentialPIDs := make([]peer.ID, 0, len(connected))
for _, pid := range connected {
peerChainState, err := p.ChainState(pid)
if err == nil && peerChainState != nil && peerChainState.FinalizedEpoch >= ourFinalizedEpoch {
finalizedEpochVotes[peerChainState.FinalizedEpoch]++
pidEpoch[pid] = peerChainState.FinalizedEpoch
potentialPIDs = append(potentialPIDs, pid)
pidHead[pid] = peerChainState.HeadSlot
// Skip if the peer's finalized epoch is not defined, or if the peer's finalized epoch is
// lower than ours.
if err != nil || peerChainState == nil || peerChainState.FinalizedEpoch < ourFinalizedEpoch {
continue
}
finalizedEpochVotes[peerChainState.FinalizedEpoch]++
pidEpoch[pid] = peerChainState.FinalizedEpoch
pidHead[pid] = peerChainState.HeadSlot
potentialPIDs = append(potentialPIDs, pid)
}
// Select the target epoch, which is the epoch most peers agree upon.
var targetEpoch primitives.Epoch
var mostVotes uint64
// If there is a tie, select the highest epoch.
targetEpoch, mostVotes := primitives.Epoch(0), uint64(0)
for epoch, count := range finalizedEpochVotes {
if count > mostVotes || (count == mostVotes && epoch > targetEpoch) {
mostVotes = count
@@ -737,11 +753,12 @@ func (p *Status) BestFinalized(maxPeers int, ourFinalizedEpoch primitives.Epoch)
}
}
// Sort PIDs by finalized epoch, in decreasing order.
// Sort PIDs by finalized (epoch, head), in decreasing order.
sort.Slice(potentialPIDs, func(i, j int) bool {
if pidEpoch[potentialPIDs[i]] == pidEpoch[potentialPIDs[j]] {
return pidHead[potentialPIDs[i]] > pidHead[potentialPIDs[j]]
}
return pidEpoch[potentialPIDs[i]] > pidEpoch[potentialPIDs[j]]
})
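As a side note for readers following this change, the plurality rule described in the comment above can be reduced to a few lines. The sketch below is self-contained and uses plain uint64 epochs instead of primitives.Epoch; it is an illustration of the rule, not the Prysm implementation.
package main
import "fmt"
// pickTargetEpoch returns the epoch with the most votes; ties resolve to the higher epoch.
func pickTargetEpoch(votes map[uint64]uint64) uint64 {
	targetEpoch, mostVotes := uint64(0), uint64(0)
	for epoch, count := range votes {
		if count > mostVotes || (count == mostVotes && epoch > targetEpoch) {
			mostVotes = count
			targetEpoch = epoch
		}
	}
	return targetEpoch
}
func main() {
	// Three peers at epoch 10, three at epoch 12, one straggler at epoch 9: the tie resolves to 12.
	fmt.Println(pickTargetEpoch(map[uint64]uint64{9: 1, 10: 3, 12: 3})) // prints 12
}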
@@ -764,26 +781,42 @@ func (p *Status) BestFinalized(maxPeers int, ourFinalizedEpoch primitives.Epoch)
// BestNonFinalized returns the highest known epoch, higher than ours,
// and is shared by at least minPeers.
func (p *Status) BestNonFinalized(minPeers int, ourHeadEpoch primitives.Epoch) (primitives.Epoch, []peer.ID) {
// Retrieve all connected peers.
connected := p.Connected()
// Calculate our head slot.
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
ourHeadSlot := slotsPerEpoch.Mul(uint64(ourHeadEpoch))
// key: head epoch, value: number of peers that support this epoch.
epochVotes := make(map[primitives.Epoch]uint64)
// key: peer ID, value: head epoch of the peer.
pidEpoch := make(map[peer.ID]primitives.Epoch, len(connected))
// key: peer ID, value: head slot of the peer.
pidHead := make(map[peer.ID]primitives.Slot, len(connected))
potentialPIDs := make([]peer.ID, 0, len(connected))
ourHeadSlot := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(ourHeadEpoch))
for _, pid := range connected {
peerChainState, err := p.ChainState(pid)
if err == nil && peerChainState != nil && peerChainState.HeadSlot > ourHeadSlot {
epoch := slots.ToEpoch(peerChainState.HeadSlot)
epochVotes[epoch]++
pidEpoch[pid] = epoch
pidHead[pid] = peerChainState.HeadSlot
potentialPIDs = append(potentialPIDs, pid)
// Skip if the peer's head epoch is not defined, or if the peer's head slot is
// lower than or equal to ours.
if err != nil || peerChainState == nil || peerChainState.HeadSlot <= ourHeadSlot {
continue
}
epoch := slots.ToEpoch(peerChainState.HeadSlot)
epochVotes[epoch]++
pidEpoch[pid] = epoch
pidHead[pid] = peerChainState.HeadSlot
potentialPIDs = append(potentialPIDs, pid)
}
// Select the target epoch, which has enough peers' votes (>= minPeers).
var targetEpoch primitives.Epoch
targetEpoch := primitives.Epoch(0)
for epoch, votes := range epochVotes {
if votes >= uint64(minPeers) && targetEpoch < epoch {
targetEpoch = epoch
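The non-finalized variant uses a threshold rather than a plurality: any epoch qualifies as long as at least minPeers peers report a head in it, and the highest qualifying epoch wins. A minimal sketch of that rule, with simplified types, assuming the vote map has already been built as in the loop above:
// pickBestNonFinalized returns the highest epoch reported by at least minPeers peers,
// or 0 if no epoch reaches the threshold. Sketch only, not the Prysm implementation.
func pickBestNonFinalized(votes map[uint64]uint64, minPeers uint64) uint64 {
	targetEpoch := uint64(0)
	for epoch, count := range votes {
		if count >= minPeers && epoch > targetEpoch {
			targetEpoch = epoch
		}
	}
	return targetEpoch
}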
@@ -1012,16 +1045,23 @@ func (p *Status) isfromBadIP(pid peer.ID) error {
return nil
}
ip, err := manet.ToIP(peerData.Address)
if err != nil {
return errors.Wrap(err, "to ip")
}
// ip, err := manet.ToIP(peerData.Address)
// if err != nil {
// return errors.Wrap(err, "to ip")
// }
if val, ok := p.ipTracker[ip.String()]; ok {
if val > CollocationLimit {
return errors.Errorf("collocation limit exceeded: got %d - limit %d", val, CollocationLimit)
}
}
// if val, ok := p.ipTracker[ip.String()]; ok {
// if val > CollocationLimit {
// TODO: Remove this once out of devnet.
// return errors.Errorf("colocation limit exceeded: got %d - limit %d", val, CollocationLimit)
// log.WithFields(logrus.Fields{
// "pid": pid,
// "ip": ip.String(),
// "colocationCount": val,
// "colocationLimit": CollocationLimit,
// }).Debug("Colocation limit exceeded. Peer should be banned.")
// }
// }
return nil
}

View File

@@ -3,7 +3,6 @@ package peers_test
import (
"context"
"crypto/rand"
"strconv"
"testing"
"time"
@@ -329,55 +328,56 @@ func TestPeerWithNilChainState(t *testing.T) {
require.Equal(t, resChainState, nothing)
}
func TestPeerBadResponses(t *testing.T) {
maxBadResponses := 2
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: maxBadResponses,
},
},
})
// TODO: Uncomment when out of devnet
// func TestPeerBadResponses(t *testing.T) {
// maxBadResponses := 2
// p := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: maxBadResponses,
// },
// },
// })
id, err := peer.Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
require.NoError(t, err)
{
_, err := id.MarshalBinary()
require.NoError(t, err)
}
// id, err := peer.Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
// require.NoError(t, err)
// {
// _, err := id.MarshalBinary()
// require.NoError(t, err)
// }
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
// assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
address, err := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
require.NoError(t, err, "Failed to create address")
direction := network.DirInbound
p.Add(new(enr.Record), id, address, direction)
// address, err := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
// require.NoError(t, err, "Failed to create address")
// direction := network.DirInbound
// p.Add(new(enr.Record), id, address, direction)
scorer := p.Scorers().BadResponsesScorer()
resBadResponses, err := scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 0, resBadResponses, "Unexpected bad responses")
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
// scorer := p.Scorers().BadResponsesScorer()
// resBadResponses, err := scorer.Count(id)
// require.NoError(t, err)
// assert.Equal(t, 0, resBadResponses, "Unexpected bad responses")
// assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 1, resBadResponses, "Unexpected bad responses")
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
// scorer.Increment(id)
// resBadResponses, err = scorer.Count(id)
// require.NoError(t, err)
// assert.Equal(t, 1, resBadResponses, "Unexpected bad responses")
// assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 2, resBadResponses, "Unexpected bad responses")
assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
// scorer.Increment(id)
// resBadResponses, err = scorer.Count(id)
// require.NoError(t, err)
// assert.Equal(t, 2, resBadResponses, "Unexpected bad responses")
// assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 3, resBadResponses, "Unexpected bad responses")
assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
}
// scorer.Increment(id)
// resBadResponses, err = scorer.Count(id)
// require.NoError(t, err)
// assert.Equal(t, 3, resBadResponses, "Unexpected bad responses")
// assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
// }
func TestAddMetaData(t *testing.T) {
maxBadResponses := 2
@@ -496,100 +496,102 @@ func TestPeerValidTime(t *testing.T) {
assert.Equal(t, numPeersConnected, len(p.Connected()), "Unexpected number of connected peers")
}
func TestPrune(t *testing.T) {
maxBadResponses := 2
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: maxBadResponses,
},
},
})
// TODO: Uncomment when out of devnet
// func TestPrune(t *testing.T) {
// maxBadResponses := 2
// p := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: maxBadResponses,
// },
// },
// })
for i := 0; i < p.MaxPeerLimit()+100; i++ {
if i%7 == 0 {
// Peer added as disconnected.
_ = addPeer(t, p, peers.Disconnected)
}
// Peer added to peer handler.
_ = addPeer(t, p, peers.Connected)
}
// for i := 0; i < p.MaxPeerLimit()+100; i++ {
// if i%7 == 0 {
// // Peer added as disconnected.
// _ = addPeer(t, p, peers.PeerDisconnected)
// }
// // Peer added to peer handler.
// _ = addPeer(t, p, peers.PeerConnected)
// }
disPeers := p.Disconnected()
firstPID := disPeers[0]
secondPID := disPeers[1]
thirdPID := disPeers[2]
// disPeers := p.Disconnected()
// firstPID := disPeers[0]
// secondPID := disPeers[1]
// thirdPID := disPeers[2]
scorer := p.Scorers().BadResponsesScorer()
// scorer := p.Scorers().BadResponsesScorer()
// Make first peer a bad peer
scorer.Increment(firstPID)
scorer.Increment(firstPID)
// // Make first peer a bad peer
// scorer.Increment(firstPID)
// scorer.Increment(firstPID)
// Add bad response for p2.
scorer.Increment(secondPID)
// // Add bad response for p2.
// scorer.Increment(secondPID)
// Prune peers
p.Prune()
// // Prune peers
// p.Prune()
// Bad peer is expected to still be kept in handler.
badRes, err := scorer.Count(firstPID)
assert.NoError(t, err, "error is supposed to be nil")
assert.Equal(t, 2, badRes, "Did not get expected amount")
// // Bad peer is expected to still be kept in handler.
// badRes, err := scorer.Count(firstPID)
// assert.NoError(t, err, "error is supposed to be nil")
// assert.Equal(t, 2, badRes, "Did not get expected amount")
// Not so good peer is pruned away so that we can reduce the
// total size of the handler.
_, err = scorer.Count(secondPID)
assert.ErrorContains(t, "peer unknown", err)
// // Not so good peer is pruned away so that we can reduce the
// // total size of the handler.
// _, err = scorer.Count(secondPID)
// assert.ErrorContains(t, "peer unknown", err)
// Last peer has been removed.
_, err = scorer.Count(thirdPID)
assert.ErrorContains(t, "peer unknown", err)
}
// // Last peer has been removed.
// _, err = scorer.Count(thirdPID)
// assert.ErrorContains(t, "peer unknown", err)
// }
func TestPeerIPTracker(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{
EnablePeerScorer: false,
})
defer resetCfg()
maxBadResponses := 2
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: maxBadResponses,
},
},
})
// TODO: Uncomment when out of devnet
// func TestPeerIPTracker(t *testing.T) {
// resetCfg := features.InitWithReset(&features.Flags{
// EnablePeerScorer: false,
// })
// defer resetCfg()
// maxBadResponses := 2
// p := peers.NewStatus(context.Background(), &peers.StatusConfig{
// PeerLimit: 30,
// ScorerParams: &scorers.Config{
// BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
// Threshold: maxBadResponses,
// },
// },
// })
badIP := "211.227.218.116"
var badPeers []peer.ID
for i := 0; i < peers.CollocationLimit+10; i++ {
port := strconv.Itoa(3000 + i)
addr, err := ma.NewMultiaddr("/ip4/" + badIP + "/tcp/" + port)
if err != nil {
t.Fatal(err)
}
badPeers = append(badPeers, createPeer(t, p, addr, network.DirUnknown, peerdata.ConnectionState(ethpb.ConnectionState_DISCONNECTED)))
}
for _, pr := range badPeers {
assert.NotNil(t, p.IsBad(pr), "peer with bad ip is not bad")
}
// badIP := "211.227.218.116"
// var badPeers []peer.ID
// for i := 0; i < peers.CollocationLimit+10; i++ {
// port := strconv.Itoa(3000 + i)
// addr, err := ma.NewMultiaddr("/ip4/" + badIP + "/tcp/" + port)
// if err != nil {
// t.Fatal(err)
// }
// badPeers = append(badPeers, createPeer(t, p, addr, network.DirUnknown, peerdata.PeerConnectionState(ethpb.ConnectionState_DISCONNECTED)))
// }
// for _, pr := range badPeers {
// assert.NotNil(t, p.IsBad(pr), "peer with bad ip is not bad")
// }
// Add in bad peers, so that our records are trimmed out
// from the peer store.
for i := 0; i < p.MaxPeerLimit()+100; i++ {
// Peer added to peer handler.
pid := addPeer(t, p, peers.Disconnected)
p.Scorers().BadResponsesScorer().Increment(pid)
}
p.Prune()
// // Add in bad peers, so that our records are trimmed out
// // from the peer store.
// for i := 0; i < p.MaxPeerLimit()+100; i++ {
// // Peer added to peer handler.
// pid := addPeer(t, p, peers.PeerDisconnected)
// p.Scorers().BadResponsesScorer().Increment(pid)
// }
// p.Prune()
for _, pr := range badPeers {
assert.NoError(t, p.IsBad(pr), "peer with good ip is regarded as bad")
}
}
// for _, pr := range badPeers {
// assert.NoError(t, p.IsBad(pr), "peer with good ip is regarded as bad")
// }
// }
func TestTrimmedOrderedPeers(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{

View File

@@ -61,6 +61,12 @@ const LightClientFinalityUpdateName = "/light_client_finality_update"
// LightClientOptimisticUpdateName is the name for the LightClientOptimisticUpdate topic.
const LightClientOptimisticUpdateName = "/light_client_optimistic_update"
// DataColumnSidecarsByRootName is the name for the DataColumnSidecarsByRoot v1 message topic.
const DataColumnSidecarsByRootName = "/data_column_sidecars_by_root"
// DataColumnSidecarsByRangeName is the name for the DataColumnSidecarsByRange v1 message topic.
const DataColumnSidecarsByRangeName = "/data_column_sidecars_by_range"
const (
// V1 RPC Topics
// RPCStatusTopicV1 defines the v1 topic for the status rpc method.
@@ -92,6 +98,12 @@ const (
RPCLightClientFinalityUpdateTopicV1 = protocolPrefix + LightClientFinalityUpdateName + SchemaVersionV1
// RPCLightClientOptimisticUpdateTopicV1 is a topic for requesting a light client Optimistic update.
RPCLightClientOptimisticUpdateTopicV1 = protocolPrefix + LightClientOptimisticUpdateName + SchemaVersionV1
// RPCDataColumnSidecarsByRootTopicV1 is a topic for requesting data column sidecars by their block root.
// /eth2/beacon_chain/req/data_column_sidecars_by_root/1 - New in Fulu.
RPCDataColumnSidecarsByRootTopicV1 = protocolPrefix + DataColumnSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRangeTopicV1 is a topic for requesting data column sidecars by their slot.
// /eth2/beacon_chain/req/data_column_sidecars_by_range/1 - New in Fulu.
RPCDataColumnSidecarsByRangeTopicV1 = protocolPrefix + DataColumnSidecarsByRangeName + SchemaVersionV1
// V2 RPC Topics
// RPCBlocksByRangeTopicV2 defines v2 the topic for the blocks by range rpc method.
@@ -139,6 +151,11 @@ var RPCTopicMappings = map[string]interface{}{
RPCLightClientUpdatesByRangeTopicV1: new(pb.LightClientUpdatesByRangeRequest),
RPCLightClientFinalityUpdateTopicV1: new(interface{}),
RPCLightClientOptimisticUpdateTopicV1: new(interface{}),
// DataColumnSidecarsByRange v1 Message
RPCDataColumnSidecarsByRangeTopicV1: new(pb.DataColumnSidecarsByRangeRequest),
// DataColumnSidecarsByRoot v1 Message
RPCDataColumnSidecarsByRootTopicV1: new(p2ptypes.DataColumnsByRootIdentifiers),
}
// Maps all registered protocol prefixes.
@@ -161,6 +178,8 @@ var messageMapping = map[string]bool{
LightClientUpdatesByRangeName: true,
LightClientFinalityUpdateName: true,
LightClientOptimisticUpdateName: true,
DataColumnSidecarsByRootName: true,
DataColumnSidecarsByRangeName: true,
}
// Maps all the RPC messages which are to updated in altair.

View File

@@ -10,8 +10,6 @@ import (
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -358,48 +356,49 @@ func initializeStateWithForkDigest(_ context.Context, t *testing.T, gs startup.C
return fd
}
func TestService_connectWithPeer(t *testing.T) {
params.SetupTestConfigCleanup(t)
tests := []struct {
name string
peers *peers.Status
info peer.AddrInfo
wantErr string
}{
{
name: "bad peer",
peers: func() *peers.Status {
ps := peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})
for i := 0; i < 10; i++ {
ps.Scorers().BadResponsesScorer().Increment("bad")
}
return ps
}(),
info: peer.AddrInfo{ID: "bad"},
wantErr: "refused to connect to bad peer",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
h, _, _ := createHost(t, 34567)
defer func() {
if err := h.Close(); err != nil {
t.Fatal(err)
}
}()
ctx := context.Background()
s := &Service{
host: h,
peers: tt.peers,
}
err := s.connectWithPeer(ctx, tt.info)
if len(tt.wantErr) > 0 {
require.ErrorContains(t, tt.wantErr, err)
} else {
require.NoError(t, err)
}
})
}
}
// TODO: Uncomment when out of devnet
// func TestService_connectWithPeer(t *testing.T) {
// params.SetupTestConfigCleanup(t)
// tests := []struct {
// name string
// peers *peers.Status
// info peer.AddrInfo
// wantErr string
// }{
// {
// name: "bad peer",
// peers: func() *peers.Status {
// ps := peers.NewStatus(context.Background(), &peers.StatusConfig{
// ScorerParams: &scorers.Config{},
// })
// for i := 0; i < 10; i++ {
// ps.Scorers().BadResponsesScorer().Increment("bad")
// }
// return ps
// }(),
// info: peer.AddrInfo{ID: "bad"},
// wantErr: "refused to connect to bad peer",
// },
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// h, _, _ := createHost(t, 34567)
// defer func() {
// if err := h.Close(); err != nil {
// t.Fatal(err)
// }
// }()
// ctx := context.Background()
// s := &Service{
// host: h,
// peers: tt.peers,
// }
// err := s.connectWithPeer(ctx, tt.info)
// if len(tt.wantErr) > 0 {
// require.ErrorContains(t, tt.wantErr, err)
// } else {
// require.NoError(t, err)
// }
// })
// }
// }

View File

@@ -23,7 +23,6 @@ import (
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/sirupsen/logrus"
)
var (
@@ -74,8 +73,8 @@ func (s *Service) nodeFilter(topic string, index uint64) (func(node *enode.Node)
// searchForPeers performs a network search for peers subscribed to a particular subnet.
// It exits as soon as one of these conditions is met:
// - It looped through `batchSize` nodes.
// - It found `peersToFindCount` peers corresponding to the `filter` criteria.
// - It looped during `batchPeriod` duration, or
// - It found `peersToFindCount` peers corresponding to the `filter` criteria, or
// - Iterator is exhausted.
func searchForPeers(
iterator enode.Iterator,
@@ -147,8 +146,6 @@ func (s *Service) FindPeersWithSubnet(
index uint64,
threshold int,
) (bool, error) {
const minLogInterval = 1 * time.Minute
ctx, span := trace.StartSpan(ctx, "p2p.FindPeersWithSubnet")
defer span.End()
@@ -168,41 +165,29 @@ func (s *Service) FindPeersWithSubnet(
return false, errors.Wrap(err, "node filter")
}
peersSummary := func(topic string, threshold int) (int, int) {
peersSummary := func(topic string, threshold int) int {
// Retrieve how many peers we have for this topic.
peerCountForTopic := len(s.pubsub.ListPeers(topic))
// Compute how many peers we are missing to reach the threshold.
missingPeerCountForTopic := max(0, threshold-peerCountForTopic)
return peerCountForTopic, missingPeerCountForTopic
return missingPeerCountForTopic
}
// Compute how many peers we are missing to reach the threshold.
peerCountForTopic, missingPeerCountForTopic := peersSummary(topic, threshold)
missingPeerCountForTopic := peersSummary(topic, threshold)
// Exit early if we have enough peers.
if missingPeerCountForTopic == 0 {
return true, nil
}
log := log.WithFields(logrus.Fields{
"topic": topic,
"targetPeerCount": threshold,
})
log.WithField("currentPeerCount", peerCountForTopic).Debug("Searching for new peers for a subnet - start")
lastLogTime := time.Now()
wg := new(sync.WaitGroup)
for {
// If the context is done, we can exit the loop. This is the unhappy path.
if err := ctx.Err(); err != nil {
return false, errors.Errorf(
"unable to find requisite number of peers for topic %s - only %d out of %d peers available after searching",
topic, peerCountForTopic, threshold,
)
if ctx.Err() != nil {
return false, nil
}
// Search for new peers in the network.
@@ -225,20 +210,14 @@ func (s *Service) FindPeersWithSubnet(
wg.Wait()
}
peerCountForTopic, missingPeerCountForTopic := peersSummary(topic, threshold)
missingPeerCountForTopic := peersSummary(topic, threshold)
// If we have enough peers, we can exit the loop. This is the happy path.
if missingPeerCountForTopic == 0 {
break
}
if time.Since(lastLogTime) > minLogInterval {
lastLogTime = time.Now()
log.WithField("currentPeerCount", peerCountForTopic).Debug("Searching for new peers for a subnet - continue")
}
}
log.WithField("currentPeerCount", threshold).Debug("Searching for new peers for a subnet - success")
return true, nil
}
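The net effect of this refactor is that FindPeersWithSubnet no longer fails with an error when the context expires; it simply reports that the threshold was not met. A condensed sketch of the new control flow, with the discovery round and the missing-peer computation stubbed out as hypothetical helpers:
// findPeersLoop mirrors the new control flow: exit early if the threshold is already met,
// return (false, nil) when the context expires, and (true, nil) once enough peers are found.
func findPeersLoop(ctx context.Context, missingPeers func() int, searchOnce func()) (bool, error) {
	if missingPeers() == 0 {
		return true, nil // Already enough peers, nothing to do.
	}
	for {
		// Unhappy path: the caller's deadline expired; give up quietly instead of returning an error.
		if ctx.Err() != nil {
			return false, nil
		}
		searchOnce() // One round of discovery (stubbed).
		// Happy path: the threshold is reached.
		if missingPeers() == 0 {
			return true, nil
		}
	}
}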

View File

@@ -9,10 +9,13 @@ var (
ErrInvalidSequenceNum = errors.New("invalid sequence number provided")
ErrGeneric = errors.New("internal service error")
ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob epoch < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob epoch < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrMaxDataColumnReqExceeded = errors.New("requested more than MAX_REQUEST_DATA_COLUMN_SIDECARS")
ErrResourceUnavailable = errors.New("resource requested unavailable")
)

View File

@@ -206,8 +206,8 @@ func (s BlobSidecarsByRootReq) Swap(i, j int) {
}
// Len is the number of elements in the collection.
func (s BlobSidecarsByRootReq) Len() int {
return len(s)
func (s *BlobSidecarsByRootReq) Len() int {
return len(*s)
}
// ====================================

View File

@@ -221,7 +221,6 @@ func generateDataColumnIdentifiers(n int) []*eth.DataColumnsByRootIdentifier {
func TestDataColumnSidecarsByRootReq_Marshal(t *testing.T) {
/*
SSZ encoding of DataColumnsByRootIdentifiers is tested in spectests.
However, encoding a list of DataColumnsByRootIdentifier is not.
We are testing it here.

View File

@@ -201,6 +201,11 @@ func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
return [32]byte{}, errors.Wrap(err, "parse public key")
}
newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
newPubkey := &ecdsa.PublicKey{
Curve: gCrypto.S256(),
X: pubKeyObjSecp256k1.X(),
Y: pubKeyObjSecp256k1.Y(),
}
return enode.PubkeyToIDV4(newPubkey), nil
}

View File

@@ -245,7 +245,7 @@ func (s *Service) validatorEndpoints(
template: "/eth/v2/validator/aggregate_attestation",
name: namespace + ".GetAggregateAttestationV2",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
},
handler: server.GetAggregateAttestationV2,
methods: []string{http.MethodGet},
@@ -314,7 +314,7 @@ func (s *Service) validatorEndpoints(
template: "/eth/v1/validator/attestation_data",
name: namespace + ".GetAttestationData",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
},
handler: server.GetAttestationData,
methods: []string{http.MethodGet},

View File

@@ -97,18 +97,19 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
}
// parseIndices filters out invalid and duplicate blob indices
func parseIndices(url *url.URL, s primitives.Slot) ([]uint64, error) {
func parseIndices(url *url.URL, s primitives.Slot) ([]int, error) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s)
rawIndices := url.Query()["indices"]
indices := make([]uint64, 0, params.BeaconConfig().MaxBlobsPerBlock(s))
indices := make([]int, 0, maxBlobsPerBlock)
invalidIndices := make([]string, 0)
loop:
for _, raw := range rawIndices {
ix, err := strconv.ParseUint(raw, 10, 64)
ix, err := strconv.Atoi(raw)
if err != nil {
invalidIndices = append(invalidIndices, raw)
continue
}
if ix >= uint64(params.BeaconConfig().MaxBlobsPerBlock(s)) {
if !(0 <= ix && ix < maxBlobsPerBlock) {
invalidIndices = append(invalidIndices, raw)
continue
}

View File

@@ -520,13 +520,13 @@ func Test_parseIndices(t *testing.T) {
tests := []struct {
name string
query string
want []uint64
want []int
wantErr string
}{
{
name: "happy path with duplicate indices within bound and other query parameters ignored",
query: "indices=1&indices=2&indices=1&indices=3&bar=bar",
want: []uint64{1, 2, 3},
want: []int{1, 2, 3},
},
{
name: "out of bounds indices throws error",

View File

@@ -35,6 +35,7 @@ go_test(
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",

View File

@@ -15,6 +15,7 @@ import (
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -219,9 +220,10 @@ func TestGetBeaconStateV2(t *testing.T) {
resp := &structs.GetBeaconStateV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Fulu), resp.Version)
st := &structs.BeaconStateElectra{}
st := &structs.BeaconStateFulu{}
require.NoError(t, json.Unmarshal(resp.Data, st))
assert.Equal(t, "123", st.Slot)
assert.Equal(t, int(params.BeaconConfig().MinSeedLookahead+1)*int(params.BeaconConfig().SlotsPerEpoch), len(st.ProposerLookahead))
})
t.Run("execution optimistic", func(t *testing.T) {
parentRoot := [32]byte{'a'}

View File

@@ -112,7 +112,12 @@ func (s *Server) GetPeers(w http.ResponseWriter, r *http.Request) {
}
allPeers = append(allPeers, p)
}
resp := &structs.GetPeersResponse{Data: allPeers}
resp := &structs.GetPeersResponse{
Data: allPeers,
Meta: structs.Meta{
Count: len(allPeers),
},
}
httputil.WriteJson(w, resp)
return
}
@@ -177,7 +182,12 @@ func (s *Server) GetPeers(w http.ResponseWriter, r *http.Request) {
filteredPeers = append(filteredPeers, p)
}
resp := &structs.GetPeersResponse{Data: filteredPeers}
resp := &structs.GetPeersResponse{
Data: filteredPeers,
Meta: structs.Meta{
Count: len(filteredPeers),
},
}
httputil.WriteJson(w, resp)
}

View File

@@ -145,6 +145,7 @@ func TestGetPeers(t *testing.T) {
resp := &structs.GetPeersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
assert.Equal(t, 1, resp.Meta.Count)
returnedPeer := resp.Data[0]
assert.Equal(t, expectedId.String(), returnedPeer.PeerId)
expectedEnr, err := peerStatus.ENR(expectedId)
@@ -229,6 +230,7 @@ func TestGetPeers(t *testing.T) {
resp := &structs.GetPeersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, len(tt.wantIds), len(resp.Data), "Wrong number of peers returned")
assert.Equal(t, len(tt.wantIds), resp.Meta.Count, "meta.count does not match number of returned peers")
for _, id := range tt.wantIds {
expectedId := id.String()
found := false

View File

@@ -98,6 +98,39 @@ func (s *Server) GetAggregateAttestationV2(w http.ResponseWriter, r *http.Reques
if agg == nil {
return
}
if httputil.RespondWithSsz(r) {
var data []byte
var err error
if v >= version.Electra {
typedAgg, ok := agg.(*ethpbalpha.AttestationElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.AttestationElectra{}), http.StatusInternalServerError)
return
}
data, err = typedAgg.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
} else {
typedAgg, ok := agg.(*ethpbalpha.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.Attestation{}), http.StatusInternalServerError)
return
}
data, err = typedAgg.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
}
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteSsz(w, data)
return
}
resp := &structs.AggregateAttestationResponse{
Version: version.String(v),
}
@@ -610,6 +643,16 @@ func (s *Server) GetAttestationData(w http.ResponseWriter, r *http.Request) {
return
}
if httputil.RespondWithSsz(r) {
data, err := attestationData.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal attestation data: "+err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, data)
return
}
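From a client's perspective the negotiation is driven purely by the Accept header: application/octet-stream yields raw SSZ, anything else falls through to the JSON path below. A client-side sketch, assuming a hypothetical baseURL pointing at a beacon node, the usual net/http, io, and fmt imports, and the ethpbalpha protobuf package used in these tests; handling of non-200 responses is abbreviated:
// fetchAttestationDataSSZ requests attestation data as SSZ and decodes the raw body.
func fetchAttestationDataSSZ(baseURL string, slot, committeeIndex uint64) (*ethpbalpha.AttestationData, error) {
	url := fmt.Sprintf("%s/eth/v1/validator/attestation_data?slot=%d&committee_index=%d", baseURL, slot, committeeIndex)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "application/octet-stream")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	att := &ethpbalpha.AttestationData{}
	if err := att.UnmarshalSSZ(body); err != nil {
		return nil, err
	}
	return att, nil
}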
response := &structs.GetAttestationDataResponse{
Data: &structs.AttestationData{
Slot: strconv.FormatUint(uint64(attestationData.Slot), 10),

View File

@@ -307,6 +307,23 @@ func TestGetAggregateAttestation(t *testing.T) {
compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal())
})
t.Run("1 matching aggregated attestation - SSZ", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp ethpbalpha.Attestation
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
compareResult(t, *structs.AttFromConsensus(&resp), "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal())
})
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
@@ -327,6 +344,23 @@ func TestGetAggregateAttestation(t *testing.T) {
compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal())
})
t.Run("multiple matching aggregated attestations - return the one with most bits - SSZ", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp ethpbalpha.Attestation
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
compareResult(t, *structs.AttFromConsensus(&resp), "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal())
})
})
t.Run("post-electra", func(t *testing.T) {
aggSlot1_Root1_1 := createAttestationElectra(1, bitfield.Bitlist{0b11100}, root1)
@@ -421,6 +455,23 @@ func TestGetAggregateAttestation(t *testing.T) {
compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot2.CommitteeBits))
})
t.Run("1 matching aggregated attestation - SSZ", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp ethpbalpha.AttestationElectra
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
compareResult(t, *structs.AttElectraFromConsensus(&resp), "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot2.CommitteeBits))
})
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
@@ -441,6 +492,23 @@ func TestGetAggregateAttestation(t *testing.T) {
compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot1_Root1_1.CommitteeBits))
})
t.Run("multiple matching aggregated attestations - return the one with most bits - SSZ", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp ethpbalpha.AttestationElectra
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
compareResult(t, *structs.AttElectraFromConsensus(&resp), "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot1_Root1_1.CommitteeBits))
})
t.Run("1 matching unaggregated attestation", func(t *testing.T) {
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
@@ -460,6 +528,23 @@ func TestGetAggregateAttestation(t *testing.T) {
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal(), hexutil.Encode(unaggSlot4.CommitteeBits))
})
t.Run("1 matching unaggregated attestation - SSZ", func(t *testing.T) {
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=4" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp ethpbalpha.AttestationElectra
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
compareResult(t, *structs.AttElectraFromConsensus(&resp), "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal(), hexutil.Encode(unaggSlot4.CommitteeBits))
})
t.Run("multiple matching unaggregated attestations - their aggregate is returned", func(t *testing.T) {
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
@@ -484,12 +569,33 @@ func TestGetAggregateAttestation(t *testing.T) {
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
compareResult(t, attestation, "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal(), hexutil.Encode(unaggSlot3_Root1_1.CommitteeBits))
})
t.Run("multiple matching unaggregated attestations - their aggregate is returned - SSZ", func(t *testing.T) {
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=3" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp ethpbalpha.AttestationElectra
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
sig1, err := bls.SignatureFromBytes(unaggSlot3_Root1_1.Signature)
require.NoError(t, err)
sig2, err := bls.SignatureFromBytes(unaggSlot3_Root1_2.Signature)
require.NoError(t, err)
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
compareResult(t, *structs.AttElectraFromConsensus(&resp), "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal(), hexutil.Encode(unaggSlot3_Root1_1.CommitteeBits))
})
t.Run("pre-electra attestation is ignored", func(t *testing.T) {
})
})
})
}
func createAttestationData(slot primitives.Slot, committeeIndex primitives.CommitteeIndex, root []byte) *ethpbalpha.AttestationData {
@@ -1293,6 +1399,81 @@ func TestGetAttestationData(t *testing.T) {
assert.DeepEqual(t, expectedResponse, resp)
})
t.Run("ok SSZ", func(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Slot = 3*params.BeaconConfig().SlotsPerEpoch + 1
targetBlock := util.NewBeaconBlock()
targetBlock.Block.Slot = 1 * params.BeaconConfig().SlotsPerEpoch
justifiedBlock := util.NewBeaconBlock()
justifiedBlock.Block.Slot = 2 * params.BeaconConfig().SlotsPerEpoch
blockRoot, err := block.Block.HashTreeRoot()
require.NoError(t, err, "Could not hash beacon block")
justifiedRoot, err := justifiedBlock.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root for justified block")
slot := 3*params.BeaconConfig().SlotsPerEpoch + 1
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconState.SetSlot(slot))
justifiedCheckpoint := &ethpbalpha.Checkpoint{
Epoch: 2,
Root: justifiedRoot[:],
}
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(justifiedCheckpoint))
offset := int64(slot.Mul(params.BeaconConfig().SecondsPerSlot))
chain := &mockChain.ChainService{
Optimistic: false,
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
Root: blockRoot[:],
CurrentJustifiedCheckPoint: justifiedCheckpoint,
TargetRoot: blockRoot,
State: beaconState,
}
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationDataCache(),
OptimisticModeFetcher: chain,
},
}
expectedAttData := &ethpbalpha.AttestationData{
Slot: slot,
BeaconBlockRoot: blockRoot[:],
CommitteeIndex: 0,
Source: &ethpbalpha.Checkpoint{
Epoch: 2,
Root: justifiedRoot[:],
},
Target: &ethpbalpha.Checkpoint{
Epoch: 3,
Root: blockRoot[:],
},
}
expectedAttDataSSZ, err := expectedAttData.MarshalSSZ()
require.NoError(t, err, "Could not marshal expected attestation data to SSZ")
url := fmt.Sprintf("http://example.com?slot=%d&committee_index=%d", slot, 0)
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttestationData(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.DeepSSZEqual(t, expectedAttDataSSZ, writer.Body.Bytes())
var att ethpbalpha.AttestationData
require.NoError(t, att.UnmarshalSSZ(writer.Body.Bytes()))
})
t.Run("syncing", func(t *testing.T) {
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
@@ -1536,6 +1717,82 @@ func TestGetAttestationData(t *testing.T) {
assert.DeepEqual(t, expectedResponse, resp)
})
t.Run("succeeds in first epoch SSZ", func(t *testing.T) {
slot := primitives.Slot(5)
block := util.NewBeaconBlock()
block.Block.Slot = slot
targetBlock := util.NewBeaconBlock()
targetBlock.Block.Slot = 0
justifiedBlock := util.NewBeaconBlock()
justifiedBlock.Block.Slot = 0
blockRoot, err := block.Block.HashTreeRoot()
require.NoError(t, err, "Could not hash beacon block")
justifiedRoot, err := justifiedBlock.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root for justified block")
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconState.SetSlot(slot))
justifiedCheckpt := &ethpbalpha.Checkpoint{
Epoch: 0,
Root: justifiedRoot[:],
}
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(justifiedCheckpt))
require.NoError(t, err)
offset := int64(slot.Mul(params.BeaconConfig().SecondsPerSlot))
chain := &mockChain.ChainService{
Root: blockRoot[:],
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
CurrentJustifiedCheckPoint: justifiedCheckpt,
TargetRoot: blockRoot,
State: beaconState,
}
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationDataCache(),
OptimisticModeFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
},
}
expectedAttData := &ethpbalpha.AttestationData{
Slot: slot,
BeaconBlockRoot: blockRoot[:],
CommitteeIndex: 0,
Source: &ethpbalpha.Checkpoint{
Epoch: 0,
Root: justifiedRoot[:],
},
Target: &ethpbalpha.Checkpoint{
Epoch: 0,
Root: blockRoot[:],
},
}
expectedAttDataSSZ, err := expectedAttData.MarshalSSZ()
require.NoError(t, err, "Could not marshal expected attestation data to SSZ")
url := fmt.Sprintf("http://example.com?slot=%d&committee_index=%d", slot, 0)
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttestationData(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.DeepSSZEqual(t, expectedAttDataSSZ, writer.Body.Bytes())
var att ethpbalpha.AttestationData
require.NoError(t, att.UnmarshalSSZ(writer.Body.Bytes()))
})
t.Run("handles far away justified epoch", func(t *testing.T) {
// Scenario:
//
@@ -1629,6 +1886,101 @@ func TestGetAttestationData(t *testing.T) {
require.NotNil(t, resp)
assert.DeepEqual(t, expectedResponse, resp)
})
t.Run("handles far away justified epoch SSZ", func(t *testing.T) {
// Scenario:
//
// State slot = 10000
// Last justified slot = epoch start of 1500
// HistoricalRootsLimit = 8192
//
// More background: https://github.com/prysmaticlabs/prysm/issues/2153
// This test breaks if it doesn't use mainnet config
// Ensure HistoricalRootsLimit matches scenario
params.SetupTestConfigCleanup(t)
cfg := params.MainnetConfig()
cfg.HistoricalRootsLimit = 8192
params.OverrideBeaconConfig(cfg)
block := util.NewBeaconBlock()
block.Block.Slot = 10000
epochBoundaryBlock := util.NewBeaconBlock()
var err error
epochBoundaryBlock.Block.Slot, err = slots.EpochStart(slots.ToEpoch(10000))
require.NoError(t, err)
justifiedBlock := util.NewBeaconBlock()
justifiedBlock.Block.Slot, err = slots.EpochStart(slots.ToEpoch(1500))
require.NoError(t, err)
justifiedBlock.Block.Slot -= 2 // Imagine two skip blocks
blockRoot, err := block.Block.HashTreeRoot()
require.NoError(t, err, "Could not hash beacon block")
justifiedBlockRoot, err := justifiedBlock.Block.HashTreeRoot()
require.NoError(t, err, "Could not hash justified block")
slot := primitives.Slot(10000)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconState.SetSlot(slot))
justifiedCheckpt := &ethpbalpha.Checkpoint{
Epoch: slots.ToEpoch(1500),
Root: justifiedBlockRoot[:],
}
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(justifiedCheckpt))
offset := int64(slot.Mul(params.BeaconConfig().SecondsPerSlot))
chain := &mockChain.ChainService{
Root: blockRoot[:],
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
CurrentJustifiedCheckPoint: justifiedCheckpt,
TargetRoot: blockRoot,
State: beaconState,
}
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationDataCache(),
OptimisticModeFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
},
}
expectedAttData := &ethpbalpha.AttestationData{
Slot: slot,
BeaconBlockRoot: blockRoot[:],
CommitteeIndex: 0,
Source: &ethpbalpha.Checkpoint{
Epoch: slots.ToEpoch(1500),
Root: justifiedBlockRoot[:],
},
Target: &ethpbalpha.Checkpoint{
Epoch: 312,
Root: blockRoot[:],
},
}
expectedAttDataSSZ, err := expectedAttData.MarshalSSZ()
require.NoError(t, err, "Could not marshal expected attestation data to SSZ")
url := fmt.Sprintf("http://example.com?slot=%d&committee_index=%d", slot, 0)
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttestationData(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.DeepSSZEqual(t, expectedAttDataSSZ, writer.Body.Bytes())
var att ethpbalpha.AttestationData
require.NoError(t, att.UnmarshalSSZ(writer.Body.Bytes()))
})
}
func TestProduceSyncCommitteeContribution(t *testing.T) {

View File

@@ -10,11 +10,13 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -36,6 +38,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/testing:go_default_library",
@@ -47,12 +50,16 @@ go_test(
"//beacon-chain/verification:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -3,12 +3,15 @@ package lookup
import (
"context"
"fmt"
"math"
"strconv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -40,7 +43,7 @@ func (e BlockIdParseError) Error() string {
// Blocker is responsible for retrieving blocks.
type Blocker interface {
Block(ctx context.Context, id []byte) (interfaces.ReadOnlySignedBeaconBlock, error)
Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError)
Blobs(ctx context.Context, id string, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError)
}
// BeaconDbBlocker is an implementation of Blocker. It retrieves blocks from the beacon chain database.
@@ -49,6 +52,7 @@ type BeaconDbBlocker struct {
ChainInfoFetcher blockchain.ChainInfoFetcher
GenesisTimeFetcher blockchain.TimeFetcher
BlobStorage *filesystem.BlobStorage
DataColumnStorage *filesystem.DataColumnStorage
}
// Block returns the beacon block for a given identifier. The identifier can be one of:
@@ -129,6 +133,137 @@ func (p *BeaconDbBlocker) Block(ctx context.Context, id []byte) (interfaces.Read
return blk, nil
}
// blobsFromStoredBlobs retrieves blobs corresponding to `indices` and `root` from the store.
// This function expects blobs to be stored directly (i.e. no data columns).
func (p *BeaconDbBlocker) blobsFromStoredBlobs(indices []int, root []byte, commitments [][]byte) ([]*blocks.VerifiedROBlob, *core.RpcError) {
sum := p.BlobStorage.Summary(bytesutil.ToBytes32(root))
if len(indices) == 0 {
for index := range commitments {
if sum.HasIndex(uint64(index)) {
indices = append(indices, index)
}
}
} else {
for _, index := range indices {
if uint64(index) >= sum.MaxBlobsForEpoch() {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d is bigger than the maximum possible blob count %d", index, sum.MaxBlobsForEpoch()),
Reason: core.BadRequest,
}
}
if !sum.HasIndex(uint64(index)) {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d not found", index),
Reason: core.NotFound,
}
}
}
}
blobs := make([]*blocks.VerifiedROBlob, 0, len(indices))
for _, index := range indices {
vblob, err := p.BlobStorage.Get(bytesutil.ToBytes32(root), uint64(index))
if err != nil {
return nil, &core.RpcError{
Err: fmt.Errorf("could not retrieve blob for block root %#x at index %d", root, index),
Reason: core.Internal,
}
}
blobs = append(blobs, &vblob)
}
return blobs, nil
}
// blobsFromStoredDataColumns retrieves data columns from the store, reconstructs the whole matrix if needed, converts the matrix to blobs,
// and then returns the blobs corresponding to `indices` and `root`.
// This function expects data columns to be stored (i.e. no blobs).
// If not enough data columns are available to extract blobs from (either directly or after reconstruction), an error is returned.
func (p *BeaconDbBlocker) blobsFromStoredDataColumns(block blocks.ROBlock, indices []int, rootBytes []byte) ([]*blocks.VerifiedROBlob, *core.RpcError) {
root := bytesutil.ToBytes32(rootBytes)
// Use all indices if none are provided.
if len(indices) == 0 {
commitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, &core.RpcError{
Err: errors.Wrap(err, "could not retrieve blob commitments"),
Reason: core.Internal,
}
}
for index := range commitments {
indices = append(indices, index)
}
}
// Count how many columns we have in the store.
summary := p.DataColumnStorage.Summary(root)
stored := summary.Stored()
count := uint64(len(stored))
if count < peerdas.MinimumColumnsCountToReconstruct() {
// There is no way to reconstruct the data columns.
return nil, &core.RpcError{
Err: errors.Errorf("the node does not custody enough data columns to reconstruct blobs. Please start the beacon node with the `--%s` flag to ensure this call to success, or retry later if it is already the case.", flags.SubscribeAllDataSubnets.Name),
Reason: core.NotFound,
}
}
// Retrieve from the database needed data columns.
verifiedRoDataColumnSidecars, err := p.neededDataColumnSidecars(root, stored)
if err != nil {
return nil, &core.RpcError{
Err: errors.Wrap(err, "needed data column sidecars"),
Reason: core.Internal,
}
}
verifiedRoBlobSidecars, err := peerdas.ReconstructBlobs(block, verifiedRoDataColumnSidecars, indices)
if err != nil {
return nil, &core.RpcError{
Err: errors.Wrap(err, "blobs from data columns"),
Reason: core.Internal,
}
}
return verifiedRoBlobSidecars, nil
}
func (p *BeaconDbBlocker) neededDataColumnSidecars(root [fieldparams.RootLength]byte, stored map[uint64]bool) ([]blocks.VerifiedRODataColumn, error) {
// Check if we have all the non-extended data columns.
cellsPerBlob := fieldparams.CellsPerBlob
blobIndices := make([]uint64, 0, cellsPerBlob)
hasAllBlobColumns := true
for i := range uint64(cellsPerBlob) {
if !stored[i] {
hasAllBlobColumns = false
break
}
blobIndices = append(blobIndices, i)
}
if hasAllBlobColumns {
// Retrieve only the non-extended data columns.
verifiedRoSidecars, err := p.DataColumnStorage.Get(root, blobIndices)
if err != nil {
return nil, errors.Wrap(err, "data columns storage get")
}
return verifiedRoSidecars, nil
}
// Retrieve all the data columns.
verifiedRoSidecars, err := p.DataColumnStorage.Get(root, nil)
if err != nil {
return nil, errors.Wrap(err, "data columns storage get")
}
return verifiedRoSidecars, nil
}
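
The split between blobsFromStoredDataColumns and neededDataColumnSidecars comes down to counting columns. With mainnet-style parameters (assumed here: 128 columns in total, the first 64 being the non-extended "blob" columns, and any 64 columns sufficing for reconstruction), either every non-extended column is stored and the blobs can be read back directly, or at least half of all columns must be stored so the matrix can be reconstructed first. A minimal standalone sketch of that decision, not part of the diff above:

package main

import "fmt"

const (
	totalColumns     = 128 // assumed NUMBER_OF_COLUMNS
	nonExtendedCols  = 64  // assumed number of non-extended ("blob") columns
	minToReconstruct = totalColumns / 2
)

// decide mirrors the two branches above: read blobs directly when every non-extended
// column is stored, otherwise require enough columns for reconstruction.
func decide(stored map[uint64]bool) (string, error) {
	hasAllBlobColumns := true
	for i := uint64(0); i < nonExtendedCols; i++ {
		if !stored[i] {
			hasAllBlobColumns = false
			break
		}
	}
	if hasAllBlobColumns {
		return "read blobs directly from the non-extended columns", nil
	}
	if len(stored) < minToReconstruct {
		return "", fmt.Errorf("only %d columns stored, need at least %d to reconstruct", len(stored), minToReconstruct)
	}
	return "reconstruct the matrix from all stored columns, then extract the blobs", nil
}

func main() {
	stored := make(map[uint64]bool)
	for i := uint64(64); i < 128; i++ { // only the extended half is stored
		stored[i] = true
	}
	msg, err := decide(stored)
	fmt.Println(msg, err) // reconstruction path is chosen; err is <nil>
}
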
// Blobs returns the blobs for a given block identifier and blob indices. The identifier can be one of:
// - "head" (canonical head in node's view)
// - "genesis"
@@ -144,7 +279,7 @@ func (p *BeaconDbBlocker) Block(ctx context.Context, id []byte) (interfaces.Read
// - block exists, has commitments, inside retention period (greater of protocol- or user-specified): serve them w/ 200 unless we hit an error reading them.
// we are technically not supposed to import a block to forkchoice unless we have the blobs, so the nuance here is if we can't find the file and we are inside the protocol-defined retention period, then it's actually a 500.
// - block exists, has commitments, outside retention period (greater of protocol- or user-specified) - i.e. just like block exists, no commitments
func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError) {
func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
var rootSlice []byte
switch id {
case "genesis":
@@ -212,64 +347,55 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64
root := bytesutil.ToBytes32(rootSlice)
b, err := p.BeaconDB.Block(ctx, root)
roSignedBlock, err := p.BeaconDB.Block(ctx, root)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve block %#x from db", rootSlice), Reason: core.Internal}
}
if b == nil {
if roSignedBlock == nil {
return nil, &core.RpcError{Err: fmt.Errorf("block %#x not found in db", rootSlice), Reason: core.NotFound}
}
// if block is not in the retention window, return 200 w/ empty list
if !p.BlobStorage.WithinRetentionPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(p.GenesisTimeFetcher.CurrentSlot())) {
// If block is not in the retention window, return 200 w/ empty list
if !p.BlobStorage.WithinRetentionPeriod(slots.ToEpoch(roSignedBlock.Block().Slot()), slots.ToEpoch(p.GenesisTimeFetcher.CurrentSlot())) {
return make([]*blocks.VerifiedROBlob, 0), nil
}
commitments, err := b.Block().Body().BlobKzgCommitments()
roBlock := roSignedBlock.Block()
commitments, err := roBlock.Body().BlobKzgCommitments()
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve kzg commitments from block %#x", rootSlice), Reason: core.Internal}
}
// if there are no commitments return 200 w/ empty list
if len(commitments) == 0 {
return make([]*blocks.VerifiedROBlob, 0), nil
}
sum := p.BlobStorage.Summary(root)
// Get the slot of the block.
blockSlot := roBlock.Slot()
if len(indices) == 0 {
for i := range commitments {
if sum.HasIndex(uint64(i)) {
indices = append(indices, uint64(i))
}
}
} else {
for _, ix := range indices {
if ix >= sum.MaxBlobsForEpoch() {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d is bigger than the maximum possible blob count %d", ix, sum.MaxBlobsForEpoch()),
Reason: core.BadRequest,
}
}
if !sum.HasIndex(ix) {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d not found", ix),
Reason: core.NotFound,
}
}
}
}
// Get the first peerDAS epoch.
fuluForkEpoch := params.BeaconConfig().FuluForkEpoch
blobs := make([]*blocks.VerifiedROBlob, len(indices))
for i, index := range indices {
vblob, err := p.BlobStorage.Get(root, index)
// Compute the first peerDAS slot.
fuluForkSlot := primitives.Slot(math.MaxUint64)
if fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
fuluForkSlot, err = slots.EpochStart(fuluForkEpoch)
if err != nil {
return nil, &core.RpcError{
Err: fmt.Errorf("could not retrieve blob for block root %#x at index %d", rootSlice, index),
Reason: core.Internal,
}
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate peerDAS start slot"), Reason: core.Internal}
}
blobs[i] = &vblob
}
return blobs, nil
if blockSlot >= fuluForkSlot {
roBlock, err := blocks.NewROBlockWithRoot(roSignedBlock, root)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to create roBlock with root %#x", root), Reason: core.Internal}
}
return p.blobsFromStoredDataColumns(roBlock, indices, rootSlice)
}
return p.blobsFromStoredBlobs(indices, rootSlice, commitments)
}
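
A short usage sketch of the fork-aware path above (assumed to sit in this same lookup package with "context" and "fmt" imported; the helper name is illustrative, not part of the diff). Pre-Fulu blocks are served straight from blob storage, while blocks at or after the Fulu fork slot have their blobs rebuilt from stored data column sidecars; callers receive the same []*blocks.VerifiedROBlob either way.

func fetchBlobsExample(ctx context.Context, blocker *BeaconDbBlocker) {
	// Nil indices means "every blob the block has a KZG commitment for".
	verifiedBlobs, rpcErr := blocker.Blobs(ctx, "head", nil)
	if rpcErr != nil {
		fmt.Printf("could not fetch blobs: %v (reason: %v)\n", rpcErr.Err, rpcErr.Reason)
		return
	}
	// Whether the blobs came from blob storage or from reconstructed data columns is
	// invisible here; only the storage backend differs before and after the Fulu fork.
	for _, blob := range verifiedBlobs {
		fmt.Printf("blob %d: %d bytes\n", blob.Index, len(blob.Blob))
	}
}
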

View File

@@ -1,7 +1,10 @@
package lookup
import (
"bytes"
"context"
"crypto/sha256"
"encoding/binary"
"fmt"
"math"
"net/http"
@@ -9,6 +12,7 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -17,12 +21,16 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpbalpha "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/sirupsen/logrus"
)
func TestGetBlock(t *testing.T) {
@@ -51,7 +59,7 @@ func TestGetBlock(t *testing.T) {
b4.Block.ParentRoot = bytesutil.PadTo([]byte{8}, 32)
util.SaveBlock(t, ctx, beaconDB, b4)
wsb, err := blocks.NewSignedBeaconBlock(headBlock.Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block)
wsb, err := blocks.NewSignedBeaconBlock(headBlock.Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block)
require.NoError(t, err)
fetcher := &BeaconDbBlocker{
@@ -60,7 +68,7 @@ func TestGetBlock(t *testing.T) {
DB: beaconDB,
Block: wsb,
Root: headBlock.BlockRoot,
FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blkContainers[64].BlockRoot},
FinalizedCheckPoint: &ethpb.Checkpoint{Root: blkContainers[64].BlockRoot},
CanonicalRoots: canonicalRoots,
},
}
@@ -71,13 +79,13 @@ func TestGetBlock(t *testing.T) {
tests := []struct {
name string
blockID []byte
want *ethpbalpha.SignedBeaconBlock
want *ethpb.SignedBeaconBlock
wantErr bool
}{
{
name: "slot",
blockID: []byte("30"),
want: blkContainers[30].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[30].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "bad formatting",
@@ -87,7 +95,7 @@ func TestGetBlock(t *testing.T) {
{
name: "canonical",
blockID: []byte("30"),
want: blkContainers[30].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[30].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "non canonical",
@@ -97,12 +105,12 @@ func TestGetBlock(t *testing.T) {
{
name: "head",
blockID: []byte("head"),
want: headBlock.Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: headBlock.Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "finalized",
blockID: []byte("finalized"),
want: blkContainers[64].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[64].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "genesis",
@@ -117,7 +125,7 @@ func TestGetBlock(t *testing.T) {
{
name: "root",
blockID: blkContainers[20].BlockRoot,
want: blkContainers[20].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[20].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "non-existent root",
@@ -127,7 +135,7 @@ func TestGetBlock(t *testing.T) {
{
name: "hex",
blockID: []byte(hexutil.Encode(blkContainers[20].BlockRoot)),
want: blkContainers[20].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[20].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "no block",
@@ -149,7 +157,7 @@ func TestGetBlock(t *testing.T) {
require.NoError(t, err)
pb, err := result.Proto()
require.NoError(t, err)
pbBlock, ok := pb.(*ethpbalpha.SignedBeaconBlock)
pbBlock, ok := pb.(*ethpb.SignedBeaconBlock)
require.Equal(t, true, ok)
if !reflect.DeepEqual(pbBlock, tt.want) {
t.Error("Expected blocks to equal")
@@ -158,6 +166,118 @@ func TestGetBlock(t *testing.T) {
}
}
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func getRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func getRandBlob(seed int64) kzg.Blob {
var blob kzg.Blob
for i := 0; i < len(blob); i += 32 {
fieldElementBytes := getRandFieldElement(seed + int64(i))
copy(blob[i:i+32], fieldElementBytes[:])
}
return blob
}
func generateCommitmentAndProof(blob *kzg.Blob) (*kzg.Commitment, *kzg.Proof, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, nil, err
}
proof, err := kzg.ComputeBlobKZGProof(blob, commitment)
if err != nil {
return nil, nil, err
}
return &commitment, &proof, err
}
func generateRandomBlocSignedBeaconBlockkAndVerifiedRoBlobs(t *testing.T, blobCount int) (interfaces.SignedBeaconBlock, []*blocks.VerifiedROBlob) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Generate random blobs and their corresponding commitments and proofs.
blobs := make([]kzg.Blob, 0, blobCount)
blobKzgCommitments := make([]*kzg.Commitment, 0, blobCount)
blobKzgProofs := make([]*kzg.Proof, 0, blobCount)
for blobIndex := range blobCount {
// Create a random blob.
blob := getRandBlob(int64(blobIndex))
blobs = append(blobs, blob)
// Generate a blobKZGCommitment for the blob.
blobKZGCommitment, proof, err := generateCommitmentAndProof(&blob)
require.NoError(t, err)
blobKzgCommitments = append(blobKzgCommitments, blobKZGCommitment)
blobKzgProofs = append(blobKzgProofs, proof)
}
// Set the commitments into the block.
blobZkgCommitmentsBytes := make([][]byte, 0, blobCount)
for _, blobKZGCommitment := range blobKzgCommitments {
blobZkgCommitmentsBytes = append(blobZkgCommitmentsBytes, blobKZGCommitment[:])
}
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = blobZkgCommitmentsBytes
// Generate verified RO blobs.
verifiedROBlobs := make([]*blocks.VerifiedROBlob, 0, blobCount)
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
commitmentInclusionProof, err := blocks.MerkleProofKZGCommitments(signedBeaconBlock.Block().Body())
require.NoError(t, err)
for blobIndex := range blobCount {
blob := blobs[blobIndex]
blobKZGCommitment := blobKzgCommitments[blobIndex]
blobKzgProof := blobKzgProofs[blobIndex]
// Get the signed beacon block header.
signedBeaconBlockHeader, err := signedBeaconBlock.Header()
require.NoError(t, err)
blobSidecar := &ethpb.BlobSidecar{
Index: uint64(blobIndex),
Blob: blob[:],
KzgCommitment: blobKZGCommitment[:],
KzgProof: blobKzgProof[:],
SignedBlockHeader: signedBeaconBlockHeader,
CommitmentInclusionProof: commitmentInclusionProof,
}
roBlob, err := blocks.NewROBlob(blobSidecar)
require.NoError(t, err)
verifiedROBlob := blocks.NewVerifiedROBlob(roBlob)
verifiedROBlobs = append(verifiedROBlobs, &verifiedROBlob)
}
return signedBeaconBlock, verifiedROBlobs
}
func TestGetBlob(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
@@ -218,7 +338,7 @@ func TestGetBlob(t *testing.T) {
})
t.Run("finalized", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
@@ -232,7 +352,7 @@ func TestGetBlob(t *testing.T) {
})
t.Run("justified", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
@@ -270,14 +390,14 @@ func TestGetBlob(t *testing.T) {
})
t.Run("one blob only", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", []uint64{2})
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", []int{2})
assert.Equal(t, rpcErr == nil, true)
require.Equal(t, 1, len(verifiedBlobs))
sidecar := verifiedBlobs[0].BlobSidecar
@@ -289,7 +409,7 @@ func TestGetBlob(t *testing.T) {
})
t.Run("no blobs returns an empty array", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
@@ -302,28 +422,28 @@ func TestGetBlob(t *testing.T) {
})
t.Run("no blob at index", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
noBlobIndex := uint64(len(blobs)) + 1
_, rpcErr := blocker.Blobs(ctx, "123", []uint64{0, noBlobIndex})
noBlobIndex := len(blobs) + 1
_, rpcErr := blocker.Blobs(ctx, "123", []int{0, noBlobIndex})
require.NotNil(t, rpcErr)
assert.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
})
t.Run("index too big", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
_, rpcErr := blocker.Blobs(ctx, "123", []uint64{0, math.MaxUint})
_, rpcErr := blocker.Blobs(ctx, "123", []int{0, math.MaxInt})
require.NotNil(t, rpcErr)
assert.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
})

View File

@@ -109,6 +109,8 @@ func (ds *Server) getPeer(pid peer.ID) (*ethpb.DebugPeerResponse, error) {
peerInfo.MetadataV0 = metadata.MetadataObjV0()
case metadata.MetadataObjV1() != nil:
peerInfo.MetadataV1 = metadata.MetadataObjV1()
case metadata.MetadataObjV2() != nil:
peerInfo.MetadataV2 = metadata.MetadataObjV2()
}
}
addresses := peerStore.Addrs(pid)

View File

@@ -1,3 +1,5 @@
# gazelle:ignore
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
@@ -8,6 +10,7 @@ go_library(
"blocks.go",
"construct_generic_block.go",
"duties.go",
"duties_v2.go",
"exit.go",
"log.go",
"proposer.go",
@@ -36,6 +39,7 @@ go_library(
"//api/client/builder:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
@@ -46,6 +50,7 @@ go_library(
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -179,7 +184,6 @@ common_deps = [
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
]
# gazelle:ignore
go_test(
name = "go_default_test",
timeout = "moderate",
@@ -189,6 +193,7 @@ go_test(
"blocks_test.go",
"construct_generic_block_test.go",
"duties_test.go",
"duties_v2_test.go",
"exit_test.go",
"proposer_altair_test.go",
"proposer_attestations_electra_test.go",

View File

@@ -8,13 +8,18 @@ import (
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/pkg/errors"
"google.golang.org/protobuf/proto"
)
// constructGenericBeaconBlock constructs a `GenericBeaconBlock` based on the block version and other parameters.
func (vs *Server) constructGenericBeaconBlock(sBlk interfaces.SignedBeaconBlock, blobsBundle *enginev1.BlobsBundle, winningBid primitives.Wei) (*ethpb.GenericBeaconBlock, error) {
func (vs *Server) constructGenericBeaconBlock(
sBlk interfaces.SignedBeaconBlock,
blobsBundler enginev1.BlobsBundler,
winningBid primitives.Wei,
) (*ethpb.GenericBeaconBlock, error) {
if sBlk == nil || sBlk.Block() == nil {
return nil, fmt.Errorf("block cannot be nil")
return nil, errors.New("block cannot be nil")
}
blockProto, err := sBlk.Block().Proto()
@@ -34,12 +39,21 @@ func (vs *Server) constructGenericBeaconBlock(sBlk interfaces.SignedBeaconBlock,
return vs.constructBellatrixBlock(blockProto, isBlinded, bidStr), nil
case version.Capella:
return vs.constructCapellaBlock(blockProto, isBlinded, bidStr), nil
case version.Deneb:
return vs.constructDenebBlock(blockProto, isBlinded, bidStr, blobsBundle), nil
case version.Electra:
return vs.constructElectraBlock(blockProto, isBlinded, bidStr, blobsBundle), nil
case version.Deneb, version.Electra:
bundle, ok := blobsBundler.(*enginev1.BlobsBundle)
if blobsBundler != nil && !ok {
return nil, fmt.Errorf("expected *BlobsBundler, got %T", blobsBundler)
}
if sBlk.Version() == version.Deneb {
return vs.constructDenebBlock(blockProto, isBlinded, bidStr, bundle), nil
}
return vs.constructElectraBlock(blockProto, isBlinded, bidStr, bundle), nil
case version.Fulu:
return vs.constructFuluBlock(blockProto, isBlinded, bidStr, blobsBundle), nil
bundle, ok := blobsBundler.(*enginev1.BlobsBundleV2)
if blobsBundler != nil && !ok {
return nil, fmt.Errorf("expected *BlobsBundleV2, got %T", blobsBundler)
}
return vs.constructFuluBlock(blockProto, isBlinded, bidStr, bundle), nil
default:
return nil, fmt.Errorf("unknown block version: %d", sBlk.Version())
}
@@ -92,7 +106,7 @@ func (vs *Server) constructElectraBlock(blockProto proto.Message, isBlinded bool
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Electra{Electra: electraContents}, IsBlinded: false, PayloadValue: payloadValue}
}
func (vs *Server) constructFuluBlock(blockProto proto.Message, isBlinded bool, payloadValue string, bundle *enginev1.BlobsBundle) *ethpb.GenericBeaconBlock {
func (vs *Server) constructFuluBlock(blockProto proto.Message, isBlinded bool, payloadValue string, bundle *enginev1.BlobsBundleV2) *ethpb.GenericBeaconBlock {
if isBlinded {
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedFulu{BlindedFulu: blockProto.(*ethpb.BlindedBeaconBlockFulu)}, IsBlinded: true, PayloadValue: payloadValue}
}
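
The refactor above swaps the concrete *enginev1.BlobsBundle parameter for an enginev1.BlobsBundler interface so a single signature can carry either bundle version: Deneb and Electra still expect the V1 bundle, while Fulu expects BlobsBundleV2. A standalone sketch of the same nil-tolerant type-assertion pattern, using stand-in types rather than the real protobuf messages:

package main

import "fmt"

type bundler interface{ isBundle() }

type bundleV1 struct{} // stand-in for the pre-Fulu blobs bundle
type bundleV2 struct{} // stand-in for the Fulu blobs bundle

func (bundleV1) isBundle() {}
func (bundleV2) isBundle() {}

// pickBundle mirrors the switch in constructGenericBeaconBlock: each fork insists on its
// own concrete bundle type, while a nil bundler (e.g. for a blinded block) is tolerated.
func pickBundle(isFulu bool, b bundler) error {
	if b == nil {
		return nil
	}
	if isFulu {
		if _, ok := b.(bundleV2); !ok {
			return fmt.Errorf("expected bundleV2, got %T", b)
		}
		return nil
	}
	if _, ok := b.(bundleV1); !ok {
		return fmt.Errorf("expected bundleV1, got %T", b)
	}
	return nil
}

func main() {
	fmt.Println(pickBundle(false, bundleV1{})) // <nil>: pre-Fulu fork with a V1 bundle
	fmt.Println(pickBundle(true, bundleV1{}))  // error: Fulu requires the V2 bundle
}
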

View File

@@ -29,12 +29,19 @@ func TestConstructGenericBeaconBlock(t *testing.T) {
require.NoError(t, err)
r1, err := eb.Block.HashTreeRoot()
require.NoError(t, err)
result, err := vs.constructGenericBeaconBlock(b, nil, primitives.ZeroWei())
bundle := &enginev1.BlobsBundleV2{
KzgCommitments: [][]byte{{1, 2, 3}},
Proofs: [][]byte{{4, 5, 6}},
Blobs: [][]byte{{7, 8, 9}},
}
result, err := vs.constructGenericBeaconBlock(b, bundle, primitives.ZeroWei())
require.NoError(t, err)
r2, err := result.GetFulu().Block.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, r1, r2)
require.Equal(t, result.IsBlinded, false)
require.DeepEqual(t, bundle.Blobs, result.GetFulu().GetBlobs())
require.DeepEqual(t, bundle.Proofs, result.GetFulu().GetKzgProofs())
})
// Test for Electra version

View File

@@ -0,0 +1,268 @@
package validator
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// GetDutiesV2 returns the duties assigned to a list of validators specified
// in the request object.
//
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
func (vs *Server) GetDutiesV2(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.DutiesV2Response, error) {
if vs.SyncChecker.Syncing() {
return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
return vs.dutiesv2(ctx, req)
}
// Compute the validator duties from the head state's corresponding epoch
// for the validator public keys / indices requested.
func (vs *Server) dutiesv2(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.DutiesV2Response, error) {
currentEpoch := slots.ToEpoch(vs.TimeFetcher.CurrentSlot())
if req.Epoch > currentEpoch+1 {
return nil, status.Errorf(codes.Unavailable, "Request epoch %d can not be greater than next epoch %d", req.Epoch, currentEpoch+1)
}
// Load head state
s, err := vs.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
}
// Advance to start of requested epoch if necessary
s, err = vs.stateForEpoch(ctx, s, req.Epoch)
if err != nil {
return nil, err
}
// Build duties for each validator
ctx, span := trace.StartSpan(ctx, "dutiesv2.BuildResponse")
span.SetAttributes(trace.Int64Attribute("num_pubkeys", int64(len(req.PublicKeys))))
defer span.End()
// Load committee and proposer metadata
meta, err := loadDutiesMetadata(ctx, s, req.Epoch)
if err != nil {
return nil, err
}
validatorAssignments := make([]*ethpb.DutiesV2Response_Duty, 0, len(req.PublicKeys))
nextValidatorAssignments := make([]*ethpb.DutiesV2Response_Duty, 0, len(req.PublicKeys))
// start loop for assignments for current and next epochs
for _, pubKey := range req.PublicKeys {
if ctx.Err() != nil {
return nil, status.Errorf(codes.Aborted, "Could not continue fetching assignments: %v", ctx.Err())
}
idx, ok := s.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
// Unknown validator: still append placeholder duty with UNKNOWN_STATUS
validatorAssignments = append(validatorAssignments, &ethpb.DutiesV2Response_Duty{
PublicKey: pubKey,
Status: ethpb.ValidatorStatus_UNKNOWN_STATUS,
})
nextValidatorAssignments = append(nextValidatorAssignments, &ethpb.DutiesV2Response_Duty{
PublicKey: pubKey,
Status: ethpb.ValidatorStatus_UNKNOWN_STATUS,
})
continue
}
meta.current.liteAssignment = helpers.AssignmentForValidator(meta.current.committeesBySlot, meta.current.startSlot, idx)
meta.next.liteAssignment = helpers.AssignmentForValidator(meta.next.committeesBySlot, meta.next.startSlot, idx)
assignment, nextAssignment, err := vs.buildValidatorDuty(pubKey, idx, s, req.Epoch, meta)
if err != nil {
return nil, err
}
validatorAssignments = append(validatorAssignments, assignment)
nextValidatorAssignments = append(nextValidatorAssignments, nextAssignment)
}
// Dependent roots for fork choice
currDependentRoot, err := vs.ForkchoiceFetcher.DependentRoot(currentEpoch)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get dependent root: %v", err)
}
prevDependentRoot := currDependentRoot
if currDependentRoot != [32]byte{} && currentEpoch > 0 {
prevDependentRoot, err = vs.ForkchoiceFetcher.DependentRoot(currentEpoch - 1)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get previous dependent root: %v", err)
}
}
return &ethpb.DutiesV2Response{
PreviousDutyDependentRoot: prevDependentRoot[:],
CurrentDutyDependentRoot: currDependentRoot[:],
CurrentEpochDuties: validatorAssignments,
NextEpochDuties: nextValidatorAssignments,
}, nil
}
// stateForEpoch returns a state advanced (with empty slot transitions) to the
// start slot of the requested epoch.
func (vs *Server) stateForEpoch(ctx context.Context, s state.BeaconState, reqEpoch primitives.Epoch) (state.BeaconState, error) {
epochStartSlot, err := slots.EpochStart(reqEpoch)
if err != nil {
return nil, err
}
if s.Slot() >= epochStartSlot {
return s, nil
}
headRoot, err := vs.HeadFetcher.HeadRoot(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not retrieve head root: %v", err)
}
s, err = transition.ProcessSlotsUsingNextSlotCache(ctx, s, headRoot, epochStartSlot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots up to %d: %v", epochStartSlot, err)
}
return s, nil
}
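
stateForEpoch only ever moves forward: slots.EpochStart(epoch) is epoch * SLOTS_PER_EPOCH, and a head state behind that slot is advanced with empty slot transitions. A tiny standalone illustration of the arithmetic (SLOTS_PER_EPOCH = 32 is an assumption matching mainnet and the TestGetDutiesV2_StateAdvancement scenario further down, which requests epoch 1 from a head at slot 31):

package main

import "fmt"

func main() {
	const slotsPerEpoch = 32 // assumption for this example
	reqEpoch := uint64(1)
	headSlot := uint64(31)

	epochStartSlot := reqEpoch * slotsPerEpoch // what slots.EpochStart(1) returns
	if headSlot >= epochStartSlot {
		fmt.Println("head already at or past the epoch start; no advancement needed")
		return
	}
	fmt.Printf("process %d empty slot(s): %d -> %d\n", epochStartSlot-headSlot, headSlot, epochStartSlot)
}
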
// dutiesMetadata bundles together related data needed for duty
// construction.
type dutiesMetadata struct {
current *metadata
next *metadata
}
type metadata struct {
committeesAtSlot uint64
proposalSlots map[primitives.ValidatorIndex][]primitives.Slot
startSlot primitives.Slot
committeesBySlot [][][]primitives.ValidatorIndex
liteAssignment *helpers.LiteAssignment
}
func loadDutiesMetadata(ctx context.Context, s state.BeaconState, reqEpoch primitives.Epoch) (*dutiesMetadata, error) {
meta := &dutiesMetadata{}
var err error
meta.current, err = loadMetadata(ctx, s, reqEpoch)
if err != nil {
return nil, err
}
// note: we only set the proposer slots for the current assignment and not the next epoch assignment
meta.current.proposalSlots, err = helpers.ProposerAssignments(ctx, s, reqEpoch)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not compute proposer slots: %v", err)
}
meta.next, err = loadMetadata(ctx, s, reqEpoch+1)
if err != nil {
return nil, err
}
return meta, nil
}
func loadMetadata(ctx context.Context, s state.BeaconState, reqEpoch primitives.Epoch) (*metadata, error) {
meta := &metadata{}
if err := helpers.VerifyAssignmentEpoch(reqEpoch, s); err != nil {
return nil, err
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, reqEpoch)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get active validator count: %v", err)
}
meta.committeesAtSlot = helpers.SlotCommitteeCount(activeValidatorCount)
meta.startSlot, err = slots.EpochStart(reqEpoch)
if err != nil {
return nil, err
}
meta.committeesBySlot, err = helpers.PrecomputeCommittees(ctx, s, meta.startSlot)
if err != nil {
return nil, err
}
return meta, nil
}
// buildValidatorDuty builds both current-epoch and next-epoch V2 duty objects
// for a single validator index.
func (vs *Server) buildValidatorDuty(
pubKey []byte,
idx primitives.ValidatorIndex,
s state.BeaconState,
reqEpoch primitives.Epoch,
meta *dutiesMetadata,
) (*ethpb.DutiesV2Response_Duty, *ethpb.DutiesV2Response_Duty, error) {
assignment := &ethpb.DutiesV2Response_Duty{PublicKey: pubKey}
nextAssignment := &ethpb.DutiesV2Response_Duty{PublicKey: pubKey}
statusEnum := assignmentStatus(s, idx)
assignment.ValidatorIndex = idx
assignment.Status = statusEnum
assignment.CommitteesAtSlot = meta.current.committeesAtSlot
assignment.ProposerSlots = meta.current.proposalSlots[idx]
populateCommitteeFields(assignment, meta.current.liteAssignment)
nextAssignment.ValidatorIndex = idx
nextAssignment.Status = statusEnum
nextAssignment.CommitteesAtSlot = meta.next.committeesAtSlot
populateCommitteeFields(nextAssignment, meta.next.liteAssignment)
// Sync committee flags
if coreTime.HigherEqualThanAltairVersionAndEpoch(s, reqEpoch) {
inSync, err := helpers.IsCurrentPeriodSyncCommittee(s, idx)
if err != nil {
return nil, nil, status.Errorf(codes.Internal, "Could not determine current epoch sync committee: %v", err)
}
assignment.IsSyncCommittee = inSync
nextAssignment.IsSyncCommittee = inSync
if inSync {
if err := core.RegisterSyncSubnetCurrentPeriodProto(s, reqEpoch, pubKey, statusEnum); err != nil {
return nil, nil, status.Errorf(codes.Internal, "Could not register sync subnet current period: %v", err)
}
}
// Next epoch sync committee duty is assigned with the next period sync committee only during
// the sync period epoch boundary (i.e. EPOCHS_PER_SYNC_COMMITTEE_PERIOD - 1). Otherwise the
// next epoch sync committee duty is the same as the current epoch's.
nextEpoch := reqEpoch + 1
currentEpoch := coreTime.CurrentEpoch(s)
n := slots.SyncCommitteePeriod(nextEpoch)
c := slots.SyncCommitteePeriod(currentEpoch)
if n > c {
nextInSync, err := helpers.IsNextPeriodSyncCommittee(s, idx)
if err != nil {
return nil, nil, status.Errorf(codes.Internal, "Could not determine next epoch sync committee: %v", err)
}
nextAssignment.IsSyncCommittee = nextInSync
if nextInSync {
go func() {
if err := core.RegisterSyncSubnetNextPeriodProto(s, reqEpoch, pubKey, statusEnum); err != nil {
log.WithError(err).Warn("Could not register sync subnet next period")
}
}()
}
}
}
return assignment, nextAssignment, nil
}
func populateCommitteeFields(duty *ethpb.DutiesV2Response_Duty, la *helpers.LiteAssignment) {
duty.CommitteeLength = la.CommitteeLength
duty.CommitteeIndex = la.CommitteeIndex
duty.ValidatorCommitteeIndex = la.ValidatorCommitteeIndex
duty.AttesterSlot = la.AttesterSlot
}
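
buildValidatorDuty only swaps in the next-period sync committee when the requested epoch sits on the period boundary; otherwise the next-epoch flag simply copies the current one. A worked standalone example of that period arithmetic (EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256 is an assumption matching mainnet; one of the tests below overrides it to 1 to hit the boundary immediately):

package main

import "fmt"

const epochsPerSyncCommitteePeriod = 256 // assumed mainnet value

func syncCommitteePeriod(epoch uint64) uint64 {
	return epoch / epochsPerSyncCommitteePeriod
}

func main() {
	// Mid-period: epochs 100 and 101 share period 0, so the next-epoch duty mirrors the current one.
	fmt.Println(syncCommitteePeriod(101) > syncCommitteePeriod(100)) // false
	// Boundary (EPOCHS_PER_SYNC_COMMITTEE_PERIOD - 1): epoch 256 rolls into period 1,
	// so the next-period sync committee is consulted for the next-epoch duty.
	fmt.Println(syncCommitteePeriod(256) > syncCommitteePeriod(255)) // true
}
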

View File

@@ -0,0 +1,562 @@
package validator
import (
"context"
"testing"
"time"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/execution"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
mockExecution "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
)
func TestGetDutiesV2_OK(t *testing.T) {
genesis := util.NewBeaconBlock()
depChainStart := params.BeaconConfig().MinGenesisActiveValidatorCount
deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(deposits))
require.NoError(t, err)
bs, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
require.NoError(t, err, "Could not setup genesis bs")
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
pubKeys := make([][]byte, len(deposits))
indices := make([]uint64, len(deposits))
for i := 0; i < len(deposits); i++ {
pubKeys[i] = deposits[i].Data.PublicKey
indices[i] = uint64(i)
}
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
ForkchoiceFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
req := &ethpb.DutiesRequest{
PublicKeys: [][]byte{deposits[0].Data.PublicKey},
}
res, err := vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Assigned slot %d can't be higher than %d",
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
}
// Test the last validator in registry.
lastValidatorIndex := depChainStart - 1
req = &ethpb.DutiesRequest{
PublicKeys: [][]byte{deposits[lastValidatorIndex].Data.PublicKey},
}
res, err = vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Assigned slot %d can't be higher than %d",
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
}
// We request for duties for all validators.
req = &ethpb.DutiesRequest{
PublicKeys: pubKeys,
Epoch: 0,
}
res, err = vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
for i := 0; i < len(res.CurrentEpochDuties); i++ {
assert.Equal(t, primitives.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
}
}
func TestGetAltairDutiesV2_SyncCommitteeOK(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = primitives.Epoch(0)
params.OverrideBeaconConfig(cfg)
genesis := util.NewBeaconBlock()
deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(deposits))
require.NoError(t, err)
bs, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
require.NoError(t, err, "Could not setup genesis bs")
h := &ethpb.BeaconBlockHeader{
StateRoot: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
ParentRoot: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
BodyRoot: bytesutil.PadTo([]byte{'c'}, fieldparams.RootLength),
}
require.NoError(t, bs.SetLatestBlockHeader(h))
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
syncCommittee, err := altair.NextSyncCommittee(context.Background(), bs)
require.NoError(t, err)
require.NoError(t, bs.SetCurrentSyncCommittee(syncCommittee))
pubKeys := make([][]byte, len(deposits))
indices := make([]uint64, len(deposits))
for i := 0; i < len(deposits); i++ {
pubKeys[i] = deposits[i].Data.PublicKey
indices[i] = uint64(i)
}
require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)-1))
require.NoError(t, helpers.UpdateSyncCommitteeCache(bs))
pubkeysAs48ByteType := make([][fieldparams.BLSPubkeyLength]byte, len(pubKeys))
for i, pk := range pubKeys {
pubkeysAs48ByteType[i] = bytesutil.ToBytes48(pk)
}
slot := uint64(params.BeaconConfig().SlotsPerEpoch) * uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) * params.BeaconConfig().SecondsPerSlot
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
ForkchoiceFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
req := &ethpb.DutiesRequest{
PublicKeys: [][]byte{deposits[0].Data.PublicKey},
}
res, err := vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Assigned slot %d can't be higher than %d",
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
}
// Test the last validator in registry.
lastValidatorIndex := params.BeaconConfig().SyncCommitteeSize - 1
req = &ethpb.DutiesRequest{
PublicKeys: [][]byte{deposits[lastValidatorIndex].Data.PublicKey},
}
res, err = vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Assigned slot %d can't be higher than %d",
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
}
// We request for duties for all validators.
req = &ethpb.DutiesRequest{
PublicKeys: pubKeys,
Epoch: 0,
}
res, err = vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
for i := 0; i < len(res.CurrentEpochDuties); i++ {
require.Equal(t, primitives.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
}
for i := 0; i < len(res.CurrentEpochDuties); i++ {
require.Equal(t, true, res.CurrentEpochDuties[i].IsSyncCommittee)
// Current epoch and next epoch duties should be equal before the sync period epoch boundary.
require.Equal(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
}
// Current epoch and next epoch duties should not be equal at the sync period epoch boundary.
req = &ethpb.DutiesRequest{
PublicKeys: pubKeys,
Epoch: params.BeaconConfig().EpochsPerSyncCommitteePeriod - 1,
}
res, err = vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
for i := 0; i < len(res.CurrentEpochDuties); i++ {
require.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
}
}
func TestGetBellatrixDutiesV2_SyncCommitteeOK(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = primitives.Epoch(0)
cfg.BellatrixForkEpoch = primitives.Epoch(1)
params.OverrideBeaconConfig(cfg)
genesis := util.NewBeaconBlock()
deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(deposits))
require.NoError(t, err)
bs, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
h := &ethpb.BeaconBlockHeader{
StateRoot: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
ParentRoot: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
BodyRoot: bytesutil.PadTo([]byte{'c'}, fieldparams.RootLength),
}
require.NoError(t, bs.SetLatestBlockHeader(h))
require.NoError(t, err, "Could not setup genesis bs")
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
syncCommittee, err := altair.NextSyncCommittee(context.Background(), bs)
require.NoError(t, err)
require.NoError(t, bs.SetCurrentSyncCommittee(syncCommittee))
pubKeys := make([][]byte, len(deposits))
indices := make([]uint64, len(deposits))
for i := 0; i < len(deposits); i++ {
pubKeys[i] = deposits[i].Data.PublicKey
indices[i] = uint64(i)
}
require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)-1))
require.NoError(t, helpers.UpdateSyncCommitteeCache(bs))
bs, err = execution.UpgradeToBellatrix(bs)
require.NoError(t, err)
pubkeysAs48ByteType := make([][fieldparams.BLSPubkeyLength]byte, len(pubKeys))
for i, pk := range pubKeys {
pubkeysAs48ByteType[i] = bytesutil.ToBytes48(pk)
}
slot := uint64(params.BeaconConfig().SlotsPerEpoch) * uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) * params.BeaconConfig().SecondsPerSlot
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
ForkchoiceFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
req := &ethpb.DutiesRequest{
PublicKeys: [][]byte{deposits[0].Data.PublicKey},
}
res, err := vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Assigned slot %d can't be higher than %d",
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
}
// Test the last validator in registry.
lastValidatorIndex := params.BeaconConfig().SyncCommitteeSize - 1
req = &ethpb.DutiesRequest{
PublicKeys: [][]byte{deposits[lastValidatorIndex].Data.PublicKey},
}
res, err = vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Assigned slot %d can't be higher than %d",
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
}
// We request for duties for all validators.
req = &ethpb.DutiesRequest{
PublicKeys: pubKeys,
Epoch: 0,
}
res, err = vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
for i := 0; i < len(res.CurrentEpochDuties); i++ {
assert.Equal(t, primitives.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
}
for i := 0; i < len(res.CurrentEpochDuties); i++ {
assert.Equal(t, true, res.CurrentEpochDuties[i].IsSyncCommittee)
// Current epoch and next epoch duties should be equal before the sync period epoch boundary.
assert.Equal(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
}
// Current epoch and next epoch duties should not be equal at the sync period epoch boundary.
req = &ethpb.DutiesRequest{
PublicKeys: pubKeys,
Epoch: params.BeaconConfig().EpochsPerSyncCommitteePeriod - 1,
}
res, err = vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
for i := 0; i < len(res.CurrentEpochDuties); i++ {
require.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
}
}
func TestGetAltairDutiesV2_UnknownPubkey(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = primitives.Epoch(0)
params.OverrideBeaconConfig(cfg)
genesis := util.NewBeaconBlock()
deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(deposits))
require.NoError(t, err)
bs, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
require.NoError(t, err)
h := &ethpb.BeaconBlockHeader{
StateRoot: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
ParentRoot: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
BodyRoot: bytesutil.PadTo([]byte{'c'}, fieldparams.RootLength),
}
require.NoError(t, bs.SetLatestBlockHeader(h))
require.NoError(t, err, "Could not setup genesis bs")
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)-1))
require.NoError(t, helpers.UpdateSyncCommitteeCache(bs))
slot := uint64(params.BeaconConfig().SlotsPerEpoch) * uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) * params.BeaconConfig().SecondsPerSlot
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
vs := &Server{
HeadFetcher: chain,
ForkchoiceFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
DepositFetcher: depositCache,
PayloadIDCache: cache.NewPayloadIDCache(),
}
unknownPubkey := bytesutil.PadTo([]byte{'u'}, 48)
req := &ethpb.DutiesRequest{
PublicKeys: [][]byte{unknownPubkey},
}
res, err := vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err)
assert.Equal(t, false, res.CurrentEpochDuties[0].IsSyncCommittee)
assert.Equal(t, false, res.NextEpochDuties[0].IsSyncCommittee)
}
func TestGetDutiesV2_StateAdvancement(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.ElectraForkEpoch = primitives.Epoch(0)
params.OverrideBeaconConfig(cfg)
epochStart, err := slots.EpochStart(1)
require.NoError(t, err)
st, _ := util.DeterministicGenesisStateElectra(t, 1)
require.NoError(t, st.SetSlot(epochStart-1))
// Request epoch 1 which requires slot 32 processing
req := &ethpb.DutiesRequest{
PublicKeys: [][]byte{pubKey(0)},
Epoch: 1,
}
b, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlockElectra(&ethpb.SignedBeaconBlockElectra{}))
require.NoError(t, err)
b.SetSlot(epochStart)
currentSlot := epochStart - 1
// Mock chain service with head state one slot before the epoch start
chain := &mockChain.ChainService{
Root: make([]byte, 32),
State: st,
Block: b,
Slot: &currentSlot,
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
ForkchoiceFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
// Verify state processing occurs
res, err := vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err)
require.NotNil(t, res)
}
func TestGetDutiesV2_SlotOutOfUpperBound(t *testing.T) {
chain := &mockChain.ChainService{
Genesis: time.Now(),
}
vs := &Server{
ForkchoiceFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := &ethpb.DutiesRequest{
Epoch: primitives.Epoch(chain.CurrentSlot()/params.BeaconConfig().SlotsPerEpoch + 2),
}
_, err := vs.GetDutiesV2(context.Background(), req)
require.ErrorContains(t, "can not be greater than next epoch", err)
}
func TestGetDutiesV2_CurrentEpoch_ShouldNotFail(t *testing.T) {
genesis := util.NewBeaconBlock()
depChainStart := params.BeaconConfig().MinGenesisActiveValidatorCount
deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(deposits))
require.NoError(t, err)
bState, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
require.NoError(t, err, "Could not setup genesis state")
// Set state to non-epoch start slot.
require.NoError(t, bState.SetSlot(5))
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
pubKeys := make([][fieldparams.BLSPubkeyLength]byte, len(deposits))
indices := make([]uint64, len(deposits))
for i := 0; i < len(deposits); i++ {
pubKeys[i] = bytesutil.ToBytes48(deposits[i].Data.PublicKey)
indices[i] = uint64(i)
}
chain := &mockChain.ChainService{
State: bState, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
ForkchoiceFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
req := &ethpb.DutiesRequest{
PublicKeys: [][]byte{deposits[0].Data.PublicKey},
}
res, err := vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err)
assert.Equal(t, 1, len(res.CurrentEpochDuties), "Expected 1 assignment")
}
func TestGetDutiesV2_MultipleKeys_OK(t *testing.T) {
genesis := util.NewBeaconBlock()
depChainStart := uint64(64)
deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(deposits))
require.NoError(t, err)
bs, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
require.NoError(t, err, "Could not setup genesis bs")
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
pubKeys := make([][fieldparams.BLSPubkeyLength]byte, len(deposits))
indices := make([]uint64, len(deposits))
for i := 0; i < len(deposits); i++ {
pubKeys[i] = bytesutil.ToBytes48(deposits[i].Data.PublicKey)
indices[i] = uint64(i)
}
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
ForkchoiceFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
pubkey0 := deposits[0].Data.PublicKey
pubkey1 := deposits[1].Data.PublicKey
// Test the first validator in registry.
req := &ethpb.DutiesRequest{
PublicKeys: [][]byte{pubkey0, pubkey1},
}
res, err := vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
assert.Equal(t, 2, len(res.CurrentEpochDuties))
assert.Equal(t, primitives.Slot(4), res.CurrentEpochDuties[0].AttesterSlot)
assert.Equal(t, primitives.Slot(4), res.CurrentEpochDuties[1].AttesterSlot)
}
func TestGetDutiesV2_NextSyncCommitteePeriod(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = primitives.Epoch(0)
cfg.EpochsPerSyncCommitteePeriod = 1
params.OverrideBeaconConfig(cfg)
// Configure sync committee period boundary
epochsPerPeriod := params.BeaconConfig().EpochsPerSyncCommitteePeriod
boundaryEpoch := epochsPerPeriod - 1
// Create state at last epoch of current period
deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(deposits))
require.NoError(t, err)
st, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
require.NoError(t, err)
syncCommittee, err := altair.NextSyncCommittee(context.Background(), st)
require.NoError(t, err)
require.NoError(t, st.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(boundaryEpoch)))
validatorPubkey := deposits[0].Data.PublicKey
// Request duties for boundary epoch + 1
req := &ethpb.DutiesRequest{
PublicKeys: [][]byte{validatorPubkey},
Epoch: boundaryEpoch + 1,
}
genesisRoot := [32]byte{}
chain := &mockChain.ChainService{
State: st,
Root: genesisRoot[:],
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
ForkchoiceFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
res, err := vs.GetDutiesV2(context.Background(), req)
require.NoError(t, err)
// Verify next epoch duties have updated sync committee status
require.NotEqual(t,
res.CurrentEpochDuties[0].IsSyncCommittee,
res.NextEpochDuties[0].IsSyncCommittee,
)
}
func TestGetDutiesV2_SyncNotReady(t *testing.T) {
vs := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
_, err := vs.GetDutiesV2(context.Background(), &ethpb.DutiesRequest{})
assert.ErrorContains(t, "Syncing to latest head", err)
}

View File

@@ -15,9 +15,13 @@ import (
blockfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/block"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -58,28 +62,31 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
if err != nil {
log.WithError(err).Error("Could not convert slot to time")
}
log.WithFields(logrus.Fields{
"slot": req.Slot,
"sinceSlotStartTime": time.Since(t),
}).Info("Begin building block")
log := log.WithField("slot", req.Slot)
log.WithField("sinceSlotStartTime", time.Since(t)).Info("Begin building block")
// A syncing validator should not produce a block.
if vs.SyncChecker.Syncing() {
log.Error("Fail to build block: node is syncing")
return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
// An optimistic validator MUST NOT produce a block (i.e., sign across the DOMAIN_BEACON_PROPOSER domain).
if slots.ToEpoch(req.Slot) >= params.BeaconConfig().BellatrixForkEpoch {
if err := vs.optimisticStatus(ctx); err != nil {
log.WithError(err).Error("Fail to build block: node is optimistic")
return nil, status.Errorf(codes.Unavailable, "Validator is not ready to propose: %v", err)
}
}
head, parentRoot, err := vs.getParentState(ctx, req.Slot)
if err != nil {
log.WithError(err).Error("Fail to build block: could not get parent state")
return nil, err
}
sBlk, err := getEmptyBlock(req.Slot)
if err != nil {
log.WithError(err).Error("Fail to build block: could not get empty block")
return nil, status.Errorf(codes.Internal, "Could not prepare block: %v", err)
}
// Set slot, graffiti, randao reveal, and parent root.
@@ -91,6 +98,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
// Set proposer index.
idx, err := helpers.BeaconProposerIndex(ctx, head)
if err != nil {
log.WithError(err).Error("Fail to build block: could not calculate proposer index")
return nil, fmt.Errorf("could not calculate proposer index %w", err)
}
sBlk.SetProposerIndex(idx)
@@ -101,7 +109,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
}
resp, err := vs.BuildBlockParallel(ctx, sBlk, head, req.SkipMevBoost, builderBoostFactor)
log := log.WithFields(logrus.Fields{
log = log.WithFields(logrus.Fields{
"slot": req.Slot,
"sinceSlotStartTime": time.Since(t),
"validator": sBlk.Block().ProposerIndex(),
@@ -232,7 +240,7 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
}()
winningBid := primitives.ZeroWei()
var bundle *enginev1.BlobsBundle
var bundle enginev1.BlobsBundler
if sBlk.Version() >= version.Bellatrix {
local, err := vs.getLocalPayload(ctx, sBlk.Block(), head)
if err != nil {
@@ -274,7 +282,13 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ProposeBeaconBlock handles the proposal of beacon blocks.
// TODO: Add tests
func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
var (
blobSidecars []*ethpb.BlobSidecar
dataColumnSideCars []*ethpb.DataColumnSidecar
)
ctx, span := trace.StartSpan(ctx, "ProposerServer.ProposeBeaconBlock")
defer span.End()
@@ -286,12 +300,12 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "%s: %v", "decode block failed", err)
}
isPeerDASEnabled := coreTime.PeerDASIsActive(block.Block().Slot())
var sidecars []*ethpb.BlobSidecar
if block.IsBlinded() {
block, sidecars, err = vs.handleBlindedBlock(ctx, block)
block, blobSidecars, dataColumnSideCars, err = vs.handleBlindedBlock(ctx, block, isPeerDASEnabled)
} else if block.Version() >= version.Deneb {
sidecars, err = vs.blobSidecarsFromUnblindedBlock(block, req)
blobSidecars, dataColumnSideCars, err = vs.handleUnblindedBlock(block, req, isPeerDASEnabled)
}
if err != nil {
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
@@ -302,9 +316,10 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
return nil, status.Errorf(codes.Internal, "Could not hash tree root: %v", err)
}
slot := block.Block().Slot()
var wg sync.WaitGroup
errChan := make(chan error, 1)
wg.Add(1)
go func() {
defer wg.Done()
@@ -315,8 +330,14 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
errChan <- nil
}()
if err := vs.broadcastAndReceiveBlobs(ctx, sidecars, root); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive blobs: %v", err)
if isPeerDASEnabled {
if err := vs.broadcastAndReceiveDataColumns(ctx, dataColumnSideCars, root, slot); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive data columns: %v", err)
}
} else {
if err := vs.broadcastAndReceiveBlobs(ctx, blobSidecars, root); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive blobs: %v", err)
}
}
wg.Wait()
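Note on the new gate above: coreTime.PeerDASIsActive is not part of this diff. Judging by the BellatrixForkEpoch comparison earlier in GetBeaconBlock, it is presumably a simple fork-epoch check; a minimal sketch, assuming a FuluForkEpoch config value exists:
// Hedged sketch only; the real coreTime.PeerDASIsActive helper is not shown in this diff.
func peerDASIsActive(slot primitives.Slot) bool {
	return slots.ToEpoch(slot) >= params.BeaconConfig().FuluForkEpoch
}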
@@ -328,46 +349,75 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
}
// handleBlindedBlock processes blinded beacon blocks.
func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.SignedBeaconBlock) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, error) {
func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.SignedBeaconBlock, isPeerDASEnabled bool) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, []*ethpb.DataColumnSidecar, error) {
if block.Version() < version.Bellatrix {
return nil, nil, errors.New("pre-Bellatrix blinded block")
return nil, nil, nil, errors.New("pre-Bellatrix blinded block")
}
if vs.BlockBuilder == nil || !vs.BlockBuilder.Configured() {
return nil, nil, errors.New("unconfigured block builder")
return nil, nil, nil, errors.New("unconfigured block builder")
}
copiedBlock, err := block.Copy()
if err != nil {
return nil, nil, err
return nil, nil, nil, errors.Wrap(err, "block copy")
}
payload, bundle, err := vs.BlockBuilder.SubmitBlindedBlock(ctx, block)
if err != nil {
return nil, nil, errors.Wrap(err, "submit blinded block failed")
return nil, nil, nil, errors.Wrap(err, "submit blinded block")
}
if err := copiedBlock.Unblind(payload); err != nil {
return nil, nil, errors.Wrap(err, "unblind failed")
return nil, nil, nil, errors.Wrap(err, "unblind")
}
sidecars, err := unblindBlobsSidecars(copiedBlock, bundle)
if isPeerDASEnabled {
dataColumnSideCars, err := peerdas.ConstructDataColumnSidecars(block, bundle.GetBlobs(), bundle.GetProofs())
if err != nil {
return nil, nil, nil, errors.Wrap(err, "construct data column sidecars")
}
return copiedBlock, nil, dataColumnSideCars, nil
}
blobSidecars, err := unblindBlobsSidecars(copiedBlock, bundle)
if err != nil {
return nil, nil, errors.Wrap(err, "unblind blobs sidecars: commitment value doesn't match block")
return nil, nil, nil, errors.Wrap(err, "unblind blobs sidecars: commitment value doesn't match block")
}
return copiedBlock, sidecars, nil
return copiedBlock, blobSidecars, nil, nil
}
func (vs *Server) blobSidecarsFromUnblindedBlock(block interfaces.SignedBeaconBlock, req *ethpb.GenericSignedBeaconBlock) ([]*ethpb.BlobSidecar, error) {
func (vs *Server) handleUnblindedBlock(
block interfaces.SignedBeaconBlock,
req *ethpb.GenericSignedBeaconBlock,
isPeerDASEnabled bool,
) ([]*ethpb.BlobSidecar, []*ethpb.DataColumnSidecar, error) {
rawBlobs, proofs, err := blobsAndProofs(req)
if err != nil {
return nil, err
return nil, nil, err
}
return BuildBlobSidecars(block, rawBlobs, proofs)
if isPeerDASEnabled {
dataColumnSideCars, err := peerdas.ConstructDataColumnSidecars(block, rawBlobs, proofs)
if err != nil {
return nil, nil, errors.Wrap(err, "construct data column sidecars")
}
return nil, dataColumnSideCars, nil
}
blobSidecars, err := BuildBlobSidecars(block, rawBlobs, proofs)
if err != nil {
return nil, nil, errors.Wrap(err, "build blob sidecars")
}
return blobSidecars, nil, nil
}
// broadcastReceiveBlock broadcasts a block and handles its reception.
func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, root [32]byte) error {
func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, root [fieldparams.RootLength]byte) error {
protoBlock, err := block.Proto()
if err != nil {
return errors.Wrap(err, "protobuf conversion failed")
@@ -383,7 +433,7 @@ func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.Si
}
// broadcastAndReceiveBlobs handles the broadcasting and reception of blob sidecars.
func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethpb.BlobSidecar, root [32]byte) error {
func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethpb.BlobSidecar, root [fieldparams.RootLength]byte) error {
eg, eCtx := errgroup.WithContext(ctx)
for i, sc := range sidecars {
// Copy the iteration instance to a local variable to give each go-routine its own copy to play with.
@@ -412,6 +462,69 @@ func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethp
return eg.Wait()
}
// broadcastAndReceiveDataColumns handles the broadcasting and reception of data column sidecars.
func (vs *Server) broadcastAndReceiveDataColumns(
ctx context.Context,
sidecars []*ethpb.DataColumnSidecar,
root [fieldparams.RootLength]byte,
slot primitives.Slot,
) error {
dataColumnsWithholdCount := features.Get().DataColumnsWithholdCount
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(sidecars))
eg, _ := errgroup.WithContext(ctx)
for _, sd := range sidecars {
roDataColumn, err := blocks.NewRODataColumnWithRoot(sd, root)
if err != nil {
return errors.Wrap(err, "new read-only data column with root")
}
// We built this block ourselves, so we can upgrade the read-only data column sidecar into a verified one.
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRODataColumns = append(verifiedRODataColumns, verifiedRODataColumn)
// Copy the iteration instance to a local variable to give each go-routine its own copy to play with.
// See https://golang.org/doc/faq#closures_and_goroutines for more details.
sidecar := sd
eg.Go(func() error {
if sidecar.Index < dataColumnsWithholdCount {
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"slot": slot,
"index": sidecar.Index,
}).Warning("Withholding data column")
return nil
}
// Compute the subnet index based on the column index.
subnet := peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
if err := vs.P2P.BroadcastDataColumn(root, subnet, sidecar); err != nil {
return errors.Wrap(err, "broadcast data column")
}
return nil
})
}
if err := eg.Wait(); err != nil {
return errors.Wrap(err, "wait for data columns to be broadcasted")
}
if err := vs.DataColumnReceiver.ReceiveDataColumns(verifiedRODataColumns); err != nil {
return errors.Wrap(err, "receive data column")
}
for _, verifiedRODataColumn := range verifiedRODataColumns {
vs.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.DataColumnSidecarReceived,
Data: &operation.DataColumnSidecarReceivedData{DataColumn: &verifiedRODataColumn}, // #nosec G601
})
}
return nil
}
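The broadcast loop above relies on peerdas.ComputeSubnetForDataColumnSidecar, which is not shown in this diff. Per the PeerDAS (EIP-7594) spec the mapping is a plain modulo over the data column subnet count; a sketch, with the config field name assumed:
// Hedged sketch of the column-to-subnet mapping; the real helper lives in the peerdas package.
// Per the spec: subnet_id = column_index % DATA_COLUMN_SIDECAR_SUBNET_COUNT.
func computeSubnetForDataColumnSidecar(columnIndex uint64) uint64 {
	return columnIndex % params.BeaconConfig().DataColumnSidecarSubnetCount
}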
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// PrepareBeaconProposer caches and updates the fee recipient for the given proposer.


@@ -54,7 +54,7 @@ const blockBuilderTimeout = 1 * time.Second
const gasLimitAdjustmentFactor = 1024
// Sets the execution data for the block. Execution data can come from the local EL client or a remote builder, depending on validator registration and circuit breaker conditions.
func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, local *blocks.GetPayloadResponse, bid builder.Bid, builderBoostFactor primitives.Gwei) (primitives.Wei, *enginev1.BlobsBundle, error) {
func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, local *blocks.GetPayloadResponse, bid builder.Bid, builderBoostFactor primitives.Gwei) (primitives.Wei, enginev1.BlobsBundler, error) {
_, span := trace.StartSpan(ctx, "ProposerServer.setExecutionData")
defer span.End()
@@ -69,13 +69,13 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
// Use local payload if builder payload is nil.
if bid == nil {
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
}
builderPayload, err := bid.Header()
if err != nil {
log.WithError(err).Warn("Proposer: failed to retrieve header from BuilderBid")
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
}
switch {
@@ -84,7 +84,7 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
if err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Warn("Proposer: failed to match withdrawals root")
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
}
// Compare payload values between local and builder. Default to the local value if it is higher.
@@ -97,7 +97,7 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
"minBuilderBid": minBid,
"builderGweiValue": builderValueGwei,
}).Warn("Proposer: using local execution payload because min bid not attained")
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
}
// Use local block if min difference is not attained
@@ -108,7 +108,7 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
"minBidDiff": minDiff,
"builderGweiValue": builderValueGwei,
}).Warn("Proposer: using local execution payload because min difference with local value was not attained")
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
}
// Use builder payload if the following is true:
@@ -133,7 +133,7 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
bidDeneb, ok := bid.(builder.BidDeneb)
if !ok {
log.Warnf("bid type %T does not implement builder.BidDeneb", bid)
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
} else {
builderKzgCommitments = bidDeneb.BlobKzgCommitments()
}
@@ -144,14 +144,14 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
bidElectra, ok := bid.(builder.BidElectra)
if !ok {
log.Warnf("bid type %T does not implement builder.BidElectra", bid)
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
} else {
executionRequests = bidElectra.ExecutionRequests()
}
}
if err := setBuilderExecution(blk, builderPayload, builderKzgCommitments, executionRequests); err != nil {
log.WithError(err).Warn("Proposer: failed to set builder payload")
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
} else {
return bid.Value(), nil, nil
}
@@ -171,11 +171,11 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
trace.Int64Attribute("builderGweiValue", int64(builderValueGwei)), // lint:ignore uintcast -- This is OK for tracing.
trace.Int64Attribute("builderBoostFactor", int64(builderBoostFactor)), // lint:ignore uintcast -- This is OK for tracing.
)
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
default: // Bellatrix case.
if err := setBuilderExecution(blk, builderPayload, nil, nil); err != nil {
log.WithError(err).Warn("Proposer: failed to set builder payload")
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
return local.Bid, local.BlobsBundler, setLocalExecution(blk, local)
} else {
return bid.Value(), nil, nil
}
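The recurring change in this file, local.BlobsBundle becoming local.BlobsBundler, replaces the concrete *enginev1.BlobsBundle with an interface so that Deneb-style and Fulu-style bundles can flow through the same proposer path. The interface definition is not included in this diff; inferred from the accessors called here and in handleBlindedBlock, it plausibly looks like:
// Plausible shape of enginev1.BlobsBundler, inferred only from the call sites in this diff
// (GetKzgCommitments, GetProofs, GetBlobs); the actual definition is not shown here.
type BlobsBundler interface {
	GetKzgCommitments() [][]byte
	GetProofs() [][]byte
	GetBlobs() [][]byte
}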
@@ -375,8 +375,8 @@ func matchingWithdrawalsRoot(local, builder interfaces.ExecutionData) (bool, err
// It delegates to setExecution for the actual work.
func setLocalExecution(blk interfaces.SignedBeaconBlock, local *blocks.GetPayloadResponse) error {
var kzgCommitments [][]byte
if local.BlobsBundle != nil {
kzgCommitments = local.BlobsBundle.KzgCommitments
if local.BlobsBundler != nil {
kzgCommitments = local.BlobsBundler.GetKzgCommitments()
}
if local.ExecutionRequests != nil {
if err := blk.SetExecutionRequests(local.ExecutionRequests); err != nil {


@@ -520,7 +520,7 @@ func TestServer_setExecutionData(t *testing.T) {
PayloadIDBytes: id,
GetPayloadResponse: &blocks.GetPayloadResponse{
ExecutionData: ed,
BlobsBundle: blobsBundle,
BlobsBundler: blobsBundle,
Bid: primitives.ZeroWei(),
},
}
@@ -528,7 +528,7 @@ func TestServer_setExecutionData(t *testing.T) {
res, err := vs.getLocalPayload(ctx, blk.Block(), capellaTransitionState)
require.NoError(t, err)
require.Equal(t, uint64(4), res.ExecutionData.BlockNumber())
require.DeepEqual(t, res.BlobsBundle, blobsBundle)
require.DeepEqual(t, res.BlobsBundler, blobsBundle)
})
t.Run("Can get builder payload and blobs in Deneb", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()


@@ -529,7 +529,7 @@ func TestServer_GetBeaconBlock_Deneb(t *testing.T) {
PayloadIDBytes: &enginev1.PayloadIDBytes{1},
GetPayloadResponse: &blocks.GetPayloadResponse{
ExecutionData: ed,
BlobsBundle: bundle,
BlobsBundler: bundle,
},
}


@@ -67,6 +67,7 @@ type Server struct {
SyncCommitteePool synccommittee.Pool
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
MockEth1Votes bool
Eth1BlockFetcher execution.POWBlockFetcher
PendingDepositsFetcher depositsnapshot.PendingDepositsFetcher


@@ -89,6 +89,7 @@ type Config struct {
AttestationReceiver blockchain.AttestationReceiver
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
ExecutionChainService execution.Chain
ChainStartFetcher execution.ChainStartFetcher
ExecutionChainInfoFetcher execution.ChainInfoFetcher
@@ -120,6 +121,7 @@ type Config struct {
Router *http.ServeMux
ClockWaiter startup.ClockWaiter
BlobStorage *filesystem.BlobStorage
DataColumnStorage *filesystem.DataColumnStorage
TrackedValidatorsCache *cache.TrackedValidatorsCache
PayloadIDCache *cache.PayloadIDCache
LCStore *lightClient.Store
@@ -196,6 +198,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
BlobStorage: s.cfg.BlobStorage,
DataColumnStorage: s.cfg.DataColumnStorage,
}
rewardFetcher := &rewards.BlockRewardService{Replayer: ch, DB: s.cfg.BeaconDB}
coreService := &core.Service{
@@ -236,6 +239,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
P2P: s.cfg.Broadcaster,
BlockReceiver: s.cfg.BlockReceiver,
BlobReceiver: s.cfg.BlobReceiver,
DataColumnReceiver: s.cfg.DataColumnReceiver,
MockEth1Votes: s.cfg.MockEth1Votes,
Eth1BlockFetcher: s.cfg.ExecutionChainService,
PendingDepositsFetcher: s.cfg.PendingDepositFetcher,


@@ -36,6 +36,6 @@ func (m *MockBlocker) Block(_ context.Context, b []byte) (interfaces.ReadOnlySig
}
// Blobs --
func (m *MockBlocker) Blobs(_ context.Context, _ string, _ []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError) {
panic("implement me") // lint:nopanic -- Test code.
func (*MockBlocker) Blobs(_ context.Context, _ string, _ []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
return nil, &core.RpcError{}
}


@@ -61,6 +61,7 @@ type ReadOnlyBeaconState interface {
ReadOnlySyncCommittee
ReadOnlyDeposits
ReadOnlyConsolidations
ReadOnlyProposerLookahead
ToProtoUnsafe() interface{}
ToProto() interface{}
GenesisTime() uint64
@@ -95,6 +96,7 @@ type WriteOnlyBeaconState interface {
WriteOnlyConsolidations
WriteOnlyWithdrawals
WriteOnlyDeposits
WriteOnlyProposerLookahead
SetGenesisTime(val uint64) error
SetGenesisValidatorsRoot(val []byte) error
SetSlot(val primitives.Slot) error
@@ -239,6 +241,10 @@ type ReadOnlyConsolidations interface {
NumPendingConsolidations() (uint64, error)
}
type ReadOnlyProposerLookahead interface {
ProposerLookahead() ([]primitives.ValidatorIndex, error)
}
// WriteOnlyBlockRoots defines a struct which only has write access to block roots methods.
type WriteOnlyBlockRoots interface {
SetBlockRoots(val [][]byte) error
@@ -340,3 +346,7 @@ type WriteOnlyDeposits interface {
SetPendingDeposits(val []*ethpb.PendingDeposit) error
SetDepositBalanceToConsume(primitives.Gwei) error
}
type WriteOnlyProposerLookahead interface {
SetProposerLookahead([]primitives.ValidatorIndex) error
}


@@ -17,6 +17,7 @@ go_library(
"getters_misc.go",
"getters_participation.go",
"getters_payload_header.go",
"getters_proposer_lookahead.go",
"getters_randao.go",
"getters_state.go",
"getters_sync_committee.go",
@@ -37,6 +38,7 @@ go_library(
"setters_misc.go",
"setters_participation.go",
"setters_payload_header.go",
"setters_proposer_lookahead.go",
"setters_randao.go",
"setters_state.go",
"setters_sync_committee.go",
@@ -97,6 +99,7 @@ go_test(
"getters_deposits_test.go",
"getters_exit_test.go",
"getters_participation_test.go",
"getters_setters_lookahead_test.go",
"getters_test.go",
"getters_validator_test.go",
"getters_withdrawal_test.go",


@@ -3,7 +3,7 @@
Note: Whenever only the name of a file is provided, it's assumed to be in the `/beacon-chain/state/state-native` package.
- Add a `BeaconState[Version]FieldCount` configuration item to `/config/params/config.go` and set it in `/config/params/mainnet_config.go`.
- Add the field to the `BeaconState` struct in `beacon_state_mainnet.go` and `beacon_state_minimal.go`. Update the marshaling code too.
- Add the field to the `BeaconState` struct in `beacon_state.go`. Update the marshaling structs in the same file too.
- Add the field's metadata to `/beacon-chain/state/state-native/types/types.go`.
- Add a getter and a setter for the field, either to existing `getter_XXX.go`/`setter_XXX.go` files or create new ones if the field doesn't fit anywhere.
Add the new getter and setter to `/beacon-chain/state/interfaces.go`.
@@ -19,6 +19,6 @@ between states.
- Add the following functions: `InitializeFromProto[Version]()`, `InitializeFromProtoUnsafe[Version]()`.
- Update the following functions: `Copy()`, `initializeMerkleLayers()`, `RecordStateMetrics()` (applies only to multi-value slice fields), `rootSelector()`,
`finalizerCleanup()` (applies only to multi-value slice fields).
- If the field is a slice, add it to the field map in `types.go`.
- If the field is a slice, add it to the field map in `types.go`. This only applies to large slices that need to be rehashed only in part, which mostly means arrays of objects rather than arrays of basic SSZ types, since the latter are not hashed by taking the root of each element.
- If the field is a slice, update the `fieldConverters()` function in `/beacon-chain/state/fieldtrie/field_trie_helpers.go`. The exact implementation will vary
depending on a few factors (is the field similar to an existing one, is it a multi-value slice etc.)
depending on a few factors (is the field similar to an existing one, is it a multi-value slice, etc.). This applies only to the slices described in the previous point.
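For orientation, the getter/setter step in this checklist follows a fixed pattern; a compressed, hypothetical skeleton is shown below, and the ProposerLookahead getter and setter added later in this diff are a concrete instance of it.
// Skeleton of the getter pattern the checklist describes, using a hypothetical newField;
// the matching setter additionally takes the write lock, bumps the shared field reference,
// and marks the field dirty (see SetProposerLookahead later in this diff).
func (b *BeaconState) NewField() ([]uint64, error) {
	if b.version < version.Fulu {
		return nil, errNotSupported("NewField", b.version)
	}
	b.lock.RLock()
	defer b.lock.RUnlock()
	return slices.Clone(b.newField), nil
}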


@@ -70,6 +70,7 @@ type BeaconState struct {
pendingDeposits []*ethpb.PendingDeposit // pending_deposits: List[PendingDeposit, PENDING_DEPOSITS_LIMIT]
pendingPartialWithdrawals []*ethpb.PendingPartialWithdrawal // pending_partial_withdrawals: List[PartialWithdrawal, PENDING_PARTIAL_WITHDRAWALS_LIMIT]
pendingConsolidations []*ethpb.PendingConsolidation // pending_consolidations: List[PendingConsolidation, PENDING_CONSOLIDATIONS_LIMIT]
proposerLookahead []primitives.ValidatorIndex // proposer_lookahead: Vector[uint64, (MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH]
id uint64
lock sync.RWMutex
@@ -125,6 +126,7 @@ type beaconStateMarshalable struct {
PendingDeposits []*ethpb.PendingDeposit `json:"pending_deposits" yaml:"pending_deposits"`
PendingPartialWithdrawals []*ethpb.PendingPartialWithdrawal `json:"pending_partial_withdrawals" yaml:"pending_partial_withdrawals"`
PendingConsolidations []*ethpb.PendingConsolidation `json:"pending_consolidations" yaml:"pending_consolidations"`
ProposerLookahead []primitives.ValidatorIndex `json:"proposer_look_ahead" yaml:"proposer_look_ahead"`
}
func (b *BeaconState) MarshalJSON() ([]byte, error) {
@@ -194,6 +196,7 @@ func (b *BeaconState) MarshalJSON() ([]byte, error) {
PendingDeposits: b.pendingDeposits,
PendingPartialWithdrawals: b.pendingPartialWithdrawals,
PendingConsolidations: b.pendingConsolidations,
ProposerLookahead: b.proposerLookahead,
}
return json.Marshal(marshalable)
}


@@ -0,0 +1,19 @@
package state_native
import (
"slices"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
)
// ProposerLookahead is a non-mutating call to the beacon state which returns the slice of
// validator indices scheduled to propose in the slots covered by the lookahead window.
func (b *BeaconState) ProposerLookahead() ([]primitives.ValidatorIndex, error) {
if b.version < version.Fulu {
return nil, errNotSupported("ProposerLookahead", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
return slices.Clone(b.proposerLookahead), nil
}
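Per EIP-7917 the lookahead covers the current epoch plus MIN_SEED_LOOKAHEAD following epochs, so a caller can read the proposer for any slot in that window without recomputing seeds. A usage sketch, assuming entry 0 corresponds to the first slot of the state's current epoch:
// Illustrative helper, not part of this diff: read the proposer for a slot inside the window.
func proposerForSlot(st state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
	lookahead, err := st.ProposerLookahead()
	if err != nil {
		return 0, err
	}
	epochStart := primitives.Slot(uint64(slots.ToEpoch(st.Slot())) * uint64(params.BeaconConfig().SlotsPerEpoch))
	offset := uint64(slot - epochStart)
	if offset >= uint64(len(lookahead)) {
		return 0, errors.New("slot outside the lookahead window")
	}
	return lookahead[offset], nil
}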


@@ -0,0 +1,44 @@
package state_native_test
import (
"testing"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestProposerLookahead(t *testing.T) {
t.Run("Fulu expected values", func(t *testing.T) {
lookahead := make([]uint64, int(params.BeaconConfig().MinSeedLookahead+1)*int(params.BeaconConfig().SlotsPerEpoch))
want := make([]primitives.ValidatorIndex, int(params.BeaconConfig().MinSeedLookahead+1)*int(params.BeaconConfig().SlotsPerEpoch))
st, err := state_native.InitializeFromProtoFulu(&ethpb.BeaconStateFulu{
ProposerLookahead: lookahead,
})
require.NoError(t, err)
got, err := st.ProposerLookahead()
require.NoError(t, err)
require.Equal(t, len(want), len(got))
for i, w := range want {
require.Equal(t, w, got[i], "index %d", i)
}
})
t.Run("Fulu error on invalid size", func(t *testing.T) {
lookahead := make([]primitives.ValidatorIndex, int(params.BeaconConfig().MinSeedLookahead+1)*int(params.BeaconConfig().SlotsPerEpoch)+1)
st, err := state_native.InitializeFromProtoFulu(&ethpb.BeaconStateFulu{})
require.NoError(t, err)
require.ErrorContains(t, "invalid size for proposer lookahead", st.SetProposerLookahead(lookahead))
})
t.Run("earlier than electra returns error", func(t *testing.T) {
st, err := state_native.InitializeFromProtoDeneb(&ethpb.BeaconStateDeneb{})
require.NoError(t, err)
_, err = st.ProposerLookahead()
require.ErrorContains(t, "is not supported", err)
lookahead := make([]primitives.ValidatorIndex, int(params.BeaconConfig().MinSeedLookahead+1)*int(params.BeaconConfig().SlotsPerEpoch))
require.ErrorContains(t, "is not supported", st.SetProposerLookahead(lookahead))
})
}


@@ -182,7 +182,7 @@ func (b *BeaconState) ToProtoUnsafe() interface{} {
NextWithdrawalValidatorIndex: b.nextWithdrawalValidatorIndex,
HistoricalSummaries: b.historicalSummaries,
}
case version.Electra, version.Fulu:
case version.Electra:
return &ethpb.BeaconStateElectra{
GenesisTime: b.genesisTime,
GenesisValidatorsRoot: gvrCopy[:],
@@ -222,6 +222,51 @@ func (b *BeaconState) ToProtoUnsafe() interface{} {
PendingPartialWithdrawals: b.pendingPartialWithdrawals,
PendingConsolidations: b.pendingConsolidations,
}
case version.Fulu:
lookahead := make([]uint64, len(b.proposerLookahead))
for i, v := range b.proposerLookahead {
lookahead[i] = uint64(v)
}
return &ethpb.BeaconStateFulu{
GenesisTime: b.genesisTime,
GenesisValidatorsRoot: gvrCopy[:],
Slot: b.slot,
Fork: b.fork,
LatestBlockHeader: b.latestBlockHeader,
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: b.historicalRoots.Slice(),
Eth1Data: b.eth1Data,
Eth1DataVotes: b.eth1DataVotes,
Eth1DepositIndex: b.eth1DepositIndex,
Validators: vals,
Balances: bals,
RandaoMixes: rm,
Slashings: b.slashings,
PreviousEpochParticipation: b.previousEpochParticipation,
CurrentEpochParticipation: b.currentEpochParticipation,
JustificationBits: b.justificationBits,
PreviousJustifiedCheckpoint: b.previousJustifiedCheckpoint,
CurrentJustifiedCheckpoint: b.currentJustifiedCheckpoint,
FinalizedCheckpoint: b.finalizedCheckpoint,
InactivityScores: inactivityScores,
CurrentSyncCommittee: b.currentSyncCommittee,
NextSyncCommittee: b.nextSyncCommittee,
LatestExecutionPayloadHeader: b.latestExecutionPayloadHeaderDeneb,
NextWithdrawalIndex: b.nextWithdrawalIndex,
NextWithdrawalValidatorIndex: b.nextWithdrawalValidatorIndex,
HistoricalSummaries: b.historicalSummaries,
DepositRequestsStartIndex: b.depositRequestsStartIndex,
DepositBalanceToConsume: b.depositBalanceToConsume,
ExitBalanceToConsume: b.exitBalanceToConsume,
EarliestExitEpoch: b.earliestExitEpoch,
ConsolidationBalanceToConsume: b.consolidationBalanceToConsume,
EarliestConsolidationEpoch: b.earliestConsolidationEpoch,
PendingDeposits: b.pendingDeposits,
PendingPartialWithdrawals: b.pendingPartialWithdrawals,
PendingConsolidations: b.pendingConsolidations,
ProposerLookahead: lookahead,
}
default:
return nil
}
@@ -388,7 +433,7 @@ func (b *BeaconState) ToProto() interface{} {
NextWithdrawalValidatorIndex: b.nextWithdrawalValidatorIndex,
HistoricalSummaries: b.historicalSummariesVal(),
}
case version.Electra, version.Fulu:
case version.Electra:
return &ethpb.BeaconStateElectra{
GenesisTime: b.genesisTime,
GenesisValidatorsRoot: gvrCopy[:],
@@ -428,6 +473,51 @@ func (b *BeaconState) ToProto() interface{} {
PendingPartialWithdrawals: b.pendingPartialWithdrawalsVal(),
PendingConsolidations: b.pendingConsolidationsVal(),
}
case version.Fulu:
lookahead := make([]uint64, len(b.proposerLookahead))
for i, v := range b.proposerLookahead {
lookahead[i] = uint64(v)
}
return &ethpb.BeaconStateFulu{
GenesisTime: b.genesisTime,
GenesisValidatorsRoot: gvrCopy[:],
Slot: b.slot,
Fork: b.forkVal(),
LatestBlockHeader: b.latestBlockHeaderVal(),
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: b.historicalRoots.Slice(),
Eth1Data: b.eth1DataVal(),
Eth1DataVotes: b.eth1DataVotesVal(),
Eth1DepositIndex: b.eth1DepositIndex,
Validators: b.validatorsVal(),
Balances: b.balancesVal(),
RandaoMixes: rm,
Slashings: b.slashingsVal(),
PreviousEpochParticipation: b.previousEpochParticipationVal(),
CurrentEpochParticipation: b.currentEpochParticipationVal(),
JustificationBits: b.justificationBitsVal(),
PreviousJustifiedCheckpoint: b.previousJustifiedCheckpointVal(),
CurrentJustifiedCheckpoint: b.currentJustifiedCheckpointVal(),
FinalizedCheckpoint: b.finalizedCheckpointVal(),
InactivityScores: b.inactivityScoresVal(),
CurrentSyncCommittee: b.currentSyncCommitteeVal(),
NextSyncCommittee: b.nextSyncCommitteeVal(),
LatestExecutionPayloadHeader: b.latestExecutionPayloadHeaderDeneb.Copy(),
NextWithdrawalIndex: b.nextWithdrawalIndex,
NextWithdrawalValidatorIndex: b.nextWithdrawalValidatorIndex,
HistoricalSummaries: b.historicalSummariesVal(),
DepositRequestsStartIndex: b.depositRequestsStartIndex,
DepositBalanceToConsume: b.depositBalanceToConsume,
ExitBalanceToConsume: b.exitBalanceToConsume,
EarliestExitEpoch: b.earliestExitEpoch,
ConsolidationBalanceToConsume: b.consolidationBalanceToConsume,
EarliestConsolidationEpoch: b.earliestConsolidationEpoch,
PendingDeposits: b.pendingDepositsVal(),
PendingPartialWithdrawals: b.pendingPartialWithdrawalsVal(),
PendingConsolidations: b.pendingConsolidationsVal(),
ProposerLookahead: lookahead,
}
default:
return nil
}
@@ -554,4 +644,12 @@ func ProtobufBeaconStateElectra(s interface{}) (*ethpb.BeaconStateElectra, error
return pbState, nil
}
var ProtobufBeaconStateFulu = ProtobufBeaconStateElectra
// ProtobufBeaconStateFulu transforms an input into beacon state Fulu in the form of protobuf.
// Error is returned if the input is not type protobuf beacon state.
func ProtobufBeaconStateFulu(s interface{}) (*ethpb.BeaconStateFulu, error) {
pbState, ok := s.(*ethpb.BeaconStateFulu)
if !ok {
return nil, errors.New("input is not type pb.BeaconStateFulu")
}
return pbState, nil
}


@@ -320,5 +320,13 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
fieldRoots[types.PendingConsolidations.RealPosition()] = pcRoot[:]
}
if state.version >= version.Fulu {
// Proposer lookahead root.
proposerLookaheadRoot, err := stateutil.ProposerLookaheadRoot(state.proposerLookahead)
if err != nil {
return nil, errors.Wrap(err, "could not compute proposer lookahead merkleization")
}
fieldRoots[types.ProposerLookahead.RealPosition()] = proposerLookaheadRoot[:]
}
return fieldRoots, nil
}


@@ -0,0 +1,30 @@
package state_native
import (
"errors"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stateutil"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
)
// SetProposerLookahead is a mutating call to the beacon state which sets the proposer lookahead
func (b *BeaconState) SetProposerLookahead(lookahead []primitives.ValidatorIndex) error {
if b.version < version.Fulu {
return errNotSupported("SetProposerLookahead", b.version)
}
if len(lookahead) != int((params.BeaconConfig().MinSeedLookahead+1))*int(params.BeaconConfig().SlotsPerEpoch) {
return errors.New("invalid size for proposer lookahead")
}
b.lock.Lock()
defer b.lock.Unlock()
b.sharedFieldReferences[types.ProposerLookahead].MinusRef()
b.sharedFieldReferences[types.ProposerLookahead] = stateutil.NewRef(1)
b.proposerLookahead = lookahead
b.markFieldAsDirty(types.ProposerLookahead)
return nil
}
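This setter is presumably what the Fulu epoch-transition code (not shown in this section) uses to roll the window forward: per EIP-7917, at each epoch boundary the oldest epoch of entries is shifted out and the freed tail is refilled with proposer indices for the epoch that just entered the window. A rough sketch, with computeProposerIndicesForEpoch standing in for the real seed-based computation:
// Rough sketch only, not the actual Prysm transition code; computeProposerIndicesForEpoch is hypothetical.
func updateProposerLookahead(st state.BeaconState) error {
	lookahead, err := st.ProposerLookahead()
	if err != nil {
		return err
	}
	slotsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch)
	// Shift out the epoch that just finished.
	next := append([]primitives.ValidatorIndex{}, lookahead[slotsPerEpoch:]...)
	// Refill the tail with proposers for the newly visible epoch.
	tail, err := computeProposerIndicesForEpoch(st, slots.ToEpoch(st.Slot())+params.BeaconConfig().MinSeedLookahead+1)
	if err != nil {
		return err
	}
	next = append(next, tail...)
	return st.SetProposerLookahead(next)
}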


@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
mvslice "github.com/OffchainLabs/prysm/v6/container/multi-value-slice"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -108,7 +109,10 @@ var (
types.PendingConsolidations,
)
fuluFields = electraFields
fuluFields = append(
electraFields,
types.ProposerLookahead,
)
)
const (
@@ -118,14 +122,14 @@ const (
capellaSharedFieldRefCount = 13
denebSharedFieldRefCount = 13
electraSharedFieldRefCount = 16
fuluSharedFieldRefCount = 16
fuluSharedFieldRefCount = 17
experimentalStatePhase0SharedFieldRefCount = 5
experimentalStateAltairSharedFieldRefCount = 5
experimentalStateBellatrixSharedFieldRefCount = 6
experimentalStateCapellaSharedFieldRefCount = 7
experimentalStateDenebSharedFieldRefCount = 7
experimentalStateElectraSharedFieldRefCount = 10
experimentalStateFuluSharedFieldRefCount = 10
experimentalStateFuluSharedFieldRefCount = 11
)
// InitializeFromProtoPhase0 the beacon state from a protobuf representation.
@@ -159,8 +163,8 @@ func InitializeFromProtoElectra(st *ethpb.BeaconStateElectra) (state.BeaconState
}
// InitializeFromProtoFulu the beacon state from a protobuf representation.
func InitializeFromProtoFulu(st *ethpb.BeaconStateElectra) (state.BeaconState, error) {
return InitializeFromProtoUnsafeFulu(proto.Clone(st).(*ethpb.BeaconStateElectra))
func InitializeFromProtoFulu(st *ethpb.BeaconStateFulu) (state.BeaconState, error) {
return InitializeFromProtoUnsafeFulu(proto.Clone(st).(*ethpb.BeaconStateFulu))
}
// InitializeFromProtoUnsafePhase0 directly uses the beacon state protobuf fields
@@ -842,7 +846,7 @@ func InitializeFromProtoUnsafeElectra(st *ethpb.BeaconStateElectra) (state.Beaco
// InitializeFromProtoUnsafeFulu directly uses the beacon state protobuf fields
// and sets them as fields of the BeaconState type.
func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateElectra) (state.BeaconState, error) {
func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateFulu) (state.BeaconState, error) {
if st == nil {
return nil, errors.New("received nil state")
}
@@ -852,6 +856,10 @@ func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateElectra) (state.BeaconSt
hRoots[i] = bytesutil.ToBytes32(r)
}
proposerLookahead := make([]primitives.ValidatorIndex, len(st.ProposerLookahead))
for i, v := range st.ProposerLookahead {
proposerLookahead[i] = primitives.ValidatorIndex(v)
}
fieldCount := params.BeaconConfig().BeaconStateFuluFieldCount
b := &BeaconState{
version: version.Fulu,
@@ -886,6 +894,7 @@ func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateElectra) (state.BeaconSt
pendingDeposits: st.PendingDeposits,
pendingPartialWithdrawals: st.PendingPartialWithdrawals,
pendingConsolidations: st.PendingConsolidations,
proposerLookahead: proposerLookahead,
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
@@ -950,6 +959,7 @@ func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateElectra) (state.BeaconSt
b.sharedFieldReferences[types.PendingDeposits] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingConsolidations] = stateutil.NewRef(1)
b.sharedFieldReferences[types.ProposerLookahead] = stateutil.NewRef(1) // New in Fulu.
if !features.Get().EnableExperimentalState {
b.sharedFieldReferences[types.BlockRoots] = stateutil.NewRef(1)
b.sharedFieldReferences[types.StateRoots] = stateutil.NewRef(1)
@@ -1015,6 +1025,7 @@ func (b *BeaconState) Copy() state.BeaconState {
currentEpochAttestations: b.currentEpochAttestations,
eth1DataVotes: b.eth1DataVotes,
slashings: b.slashings,
proposerLookahead: b.proposerLookahead,
// Large arrays, increases over time.
balances: b.balances,
@@ -1441,6 +1452,8 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
return stateutil.PendingPartialWithdrawalsRoot(b.pendingPartialWithdrawals)
case types.PendingConsolidations:
return stateutil.PendingConsolidationsRoot(b.pendingConsolidations)
case types.ProposerLookahead:
return stateutil.ProposerLookaheadRoot(b.proposerLookahead)
}
return [32]byte{}, errors.New("invalid field index provided")
}


@@ -112,6 +112,8 @@ func (f FieldIndex) String() string {
return "pendingPartialWithdrawals"
case PendingConsolidations:
return "pendingConsolidations"
case ProposerLookahead:
return "proposerLookahead"
default:
return fmt.Sprintf("unknown field index number: %d", f)
}
@@ -195,6 +197,8 @@ func (f FieldIndex) RealPosition() int {
return 35
case PendingConsolidations:
return 36
case ProposerLookahead:
return 37
default:
return -1
}
@@ -259,6 +263,7 @@ const (
PendingDeposits // Electra: EIP-7251
PendingPartialWithdrawals // Electra: EIP-7251
PendingConsolidations // Electra: EIP-7251
ProposerLookahead // Fulu: EIP-7917
)
// Enumerator keeps track of the number of states created since the node's start.


@@ -259,7 +259,7 @@ func (s *State) latestAncestor(ctx context.Context, blockRoot [32]byte) (state.B
defer span.End()
if s.isFinalizedRoot(blockRoot) {
finalizedState := s.finalizedState()
finalizedState := s.FinalizedState()
if finalizedState != nil {
return finalizedState, nil
}
@@ -297,7 +297,7 @@ func (s *State) latestAncestor(ctx context.Context, blockRoot [32]byte) (state.B
// Does the state exist in finalized info cache.
if s.isFinalizedRoot(parentRoot) {
return s.finalizedState(), nil
return s.FinalizedState(), nil
}
// Does the state exist in epoch boundary cache.


@@ -196,7 +196,7 @@ func (s *State) isFinalizedRoot(r [32]byte) bool {
}
// Returns the cached and copied finalized state.
func (s *State) finalizedState() state.BeaconState {
func (s *State) FinalizedState() state.BeaconState {
s.finalizedInfo.lock.RLock()
defer s.finalizedInfo.lock.RUnlock()
return s.finalizedInfo.state.Copy()


@@ -33,5 +33,5 @@ func TestResume(t *testing.T) {
require.DeepSSZEqual(t, beaconState.ToProtoUnsafe(), resumeState.ToProtoUnsafe())
assert.Equal(t, params.BeaconConfig().SlotsPerEpoch, service.finalizedInfo.slot, "Did not get wanted slot")
assert.Equal(t, service.finalizedInfo.root, root, "Did not get wanted root")
assert.NotNil(t, service.finalizedState(), "Wanted a non nil finalized state")
assert.NotNil(t, service.FinalizedState(), "Wanted a non nil finalized state")
}


@@ -15,6 +15,7 @@ go_library(
"pending_consolidations_root.go",
"pending_deposits_root.go",
"pending_partial_withdrawals_root.go",
"proposer_lookahead_root.go",
"reference.go",
"sync_committee.root.go",
"trie_helpers.go",
@@ -49,6 +50,7 @@ go_test(
"benchmark_test.go",
"field_root_test.go",
"field_root_validator_test.go",
"proposer_lookahead_root_test.go",
"reference_bench_test.go",
"state_root_test.go",
"trie_helpers_test.go",


@@ -0,0 +1,18 @@
package stateutil
import (
"encoding/binary"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/ssz"
)
// ProposerLookaheadRoot computes the hash tree root of the proposer lookahead
func ProposerLookaheadRoot(lookahead []primitives.ValidatorIndex) ([32]byte, error) {
chunks := make([][32]byte, (len(lookahead)*8+31)/32)
for i, idx := range lookahead {
j := i / 4
binary.LittleEndian.PutUint64(chunks[j][(i%4)*8:], uint64(idx))
}
return ssz.MerkleizeVector(chunks, uint64(len(chunks))), nil
}
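Since proposer_lookahead is a fixed-length vector of uint64 values, its hash tree root is the packed-chunk merkleization computed above, with no length mix-in. With mainnet values (MIN_SEED_LOOKAHEAD = 1, SLOTS_PER_EPOCH = 32) the 64 indices pack four per 32-byte chunk into 16 chunks. A usage sketch:
// Usage sketch: 64 mainnet lookahead entries -> (64*8+31)/32 = 16 chunks, each holding
// four little-endian uint64s, merkleized into the single root used by the state hasher.
lookahead := make([]primitives.ValidatorIndex, 64)
root, err := stateutil.ProposerLookaheadRoot(lookahead)
if err != nil {
	panic(err) // the current implementation never returns a non-nil error
}
_ = root // consumed by rootSelector / ComputeFieldRootsWithHasher for the Fulu state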

Some files were not shown because too many files have changed in this diff.