Compare commits

216 Commits

Author SHA1 Message Date
Manu NALEPA
814cec3a34 Merge branch 'develop' into peerDAS 2025-08-29 10:32:26 +02:00
Manu NALEPA
ad427ab08f Merge branch 'develop' into peerDAS 2025-08-26 16:56:51 +02:00
Manu NALEPA
abe36ef5a5 isSidecarIndexRequested: Log requested indices. 2025-08-26 16:31:20 +02:00
Manu NALEPA
295c91ef7a fetchOriginColumns: Retry. 2025-08-26 16:31:10 +02:00
Manu NALEPA
6a8f3cacc2 fetchOriginColumns: Fix typo 2025-08-26 16:30:34 +02:00
Manu NALEPA
7324c16c36 fetchOriginColumns: Remove unused argument. 2025-08-26 15:40:00 +02:00
Manu NALEPA
601afbe73d convertToAddrInfo: Add peer details on error. 2025-08-26 11:44:30 +02:00
Manu NALEPA
6d7cc7dcbc Merge branch 'develop' into peerDAS-do-not-merge 2025-08-25 17:16:19 +02:00
Manu NALEPA
3e1a15a9d9 ProposeBeaconBlock: Factorize. 2025-08-25 11:00:16 +02:00
Manu NALEPA
e97005c727 Remove unused code. 2025-08-25 10:00:35 +02:00
Manu NALEPA
cfe29850b2 Merge branch 'develop' into peerDAS 2025-08-25 01:31:02 +02:00
Manu NALEPA
d3763d56cf Clean 2025-08-02 10:02:28 +02:00
Manu NALEPA
461fa50c34 Merge branch 'develop' into peerDAS 2025-08-02 09:25:43 +02:00
Manu NALEPA
6d4e1d5f7a Merge branch 'develop' into peerDAS 2025-07-31 15:53:56 +02:00
Manu NALEPA
415622ec49 Merge branch 'develop' into peerDAS 2025-07-31 14:42:39 +02:00
Manu NALEPA
df65458834 refactor 2025-07-31 14:42:17 +02:00
Manu NALEPA
2005d5c6f2 step 2: Reconstruct if needed. 2025-07-31 14:42:17 +02:00
Manu NALEPA
7d72fbebe7 step 1: Retrieve from DB. 2025-07-31 14:42:17 +02:00
Manu NALEPA
685761666d Merge branch 'develop' into peerDAS 2025-07-28 20:31:48 +02:00
Manu NALEPA
a75974b5f5 Fix TestCreateLocalNode. 2025-07-25 16:17:28 +02:00
Manu NALEPA
0725dff5e8 Merge branch 'develop' into peerDAS 2025-07-25 13:26:58 +02:00
Manu NALEPA
0d95d3d022 Validator custody: Update earliest available slot. (#15527) 2025-07-25 13:20:54 +02:00
Manu NALEPA
ede560bee1 Merge branch 'develop' into peerDAS 2025-07-23 11:07:19 +02:00
Manu NALEPA
34a1bf835a Merge branch 'develop' into peerDAS 2025-07-22 17:42:04 +02:00
Manu NALEPA
b0bceac9c0 Implement validator custody with "go up only" according to the latest specification. (#15518)
* Simplify validator custody due to the latest spec.
(Go up only)

* Fix sync.
2025-07-22 17:41:15 +02:00
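
The "go up only" rule above means the advertised custody group count may only increase over the node's lifetime, even when the validator-derived requirement later drops. A minimal sketch of that invariant, with hypothetical names (`custodyTracker`, `UpdateCustodyGroupCount`) rather than Prysm's actual types:

```go
package sketch

import "sync"

// custodyTracker is a hypothetical holder of the advertised custody group count.
type custodyTracker struct {
	mu    sync.Mutex
	count uint64
}

// UpdateCustodyGroupCount applies the "go up only" rule: the advertised value
// never decreases, even if the validator-derived requirement drops.
func (t *custodyTracker) UpdateCustodyGroupCount(required uint64) uint64 {
	t.mu.Lock()
	defer t.mu.Unlock()
	if required > t.count {
		t.count = required
	}
	return t.count
}
```
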
Manu NALEPA
e95d1c54cf reconstructSaveBroadcastDataColumnSidecars: Ensure a unique reconstruction. 2025-07-18 23:48:11 +02:00
Manu NALEPA
4af3763013 Merge branch 'develop' into peerDAS 2025-07-18 22:39:57 +02:00
Manu NALEPA
a520db7276 Merge branch 'develop' into peerDAS 2025-07-17 10:04:04 +02:00
terence
f8abf0565f Add bundle v2 support for submit blind block (#15198) 2025-07-16 08:19:07 -07:00
Manu NALEPA
11a6af9bf9 /eth/v1/node/identity: Add syncnets and custody_group_count. 2025-07-16 16:26:39 +02:00
Manu NALEPA
6f8a654874 Revert "Fixes server ignores request to gzip data (#14982)"
This reverts commit 4e5bfa9760.
2025-07-16 16:18:11 +02:00
Manu NALEPA
f0c01fdb4b Merge branch 'develop' into peerDAS-do-not-merge 2025-07-16 12:29:52 +02:00
Manu NALEPA
a015ae6a29 Merge branch 'develop' into peerDAS 2025-07-16 09:23:37 +02:00
Manu NALEPA
457aa117f3 Merge branch 'develop' into peerDAS 2025-07-11 09:38:37 +02:00
Manu NALEPA
d302b494df Execution reconstruction: Rename variables and logs. 2025-07-10 14:30:26 +02:00
Manu NALEPA
b3db1b6b74 Flags: Remove unused flag EnablePeerDAS 2025-07-10 13:56:53 +02:00
Manu NALEPA
66e4d5e816 Merge branch 'develop' into peerDAS 2025-07-04 01:34:12 +02:00
Manu NALEPA
41f109aa5b blocker_test.go: Remove unused functions. 2025-07-03 16:00:51 +02:00
Manu NALEPA
cfd4ceb4dd Merge branch 'develop' into peerDAS 2025-07-03 13:20:26 +02:00
Manu NALEPA
df211c3384 Merge branch 'develop' into peerDAS 2025-07-01 13:07:40 +02:00
Manu NALEPA
89e78d7da3 Remove peerSampling.
https://github.com/ethereum/consensus-specs/pull/4393#event-18356965177
2025-06-27 21:37:42 +02:00
Manu NALEPA
e76ea84596 Merge branch 'develop' into peerDAS 2025-06-26 15:03:22 +02:00
Manu NALEPA
f10d6e8e16 Merge branch 'develop' into peerDAS 2025-06-26 15:02:46 +02:00
Manu NALEPA
91eb43b595 Merge branch 'develop' into peerDAS 2025-06-24 23:53:09 +02:00
Manu NALEPA
90710ec57d Advertise correct cgc number starting at Altair. 2025-06-24 17:21:29 +02:00
Manu NALEPA
3dc65f991e Merge branch 'peerdas-send-data-columns-requests' into peerDAS 2025-06-24 10:51:32 +02:00
Manu NALEPA
4d9789401b Implement SendDataColumnSidecarsByRangeRequest and SendDataColumnSidecarsByRootRequest. 2025-06-24 01:06:42 +02:00
Manu NALEPA
f72d59b004 disconnectFromPeerOnError: Add peer agent in logs. 2025-06-23 13:02:13 +02:00
Manu NALEPA
e25497be3e Merge branch 'develop' into peerDAS 2025-06-20 20:04:27 +02:00
Manu NALEPA
8897a26f84 Merge branch 'develop' into peerDAS 2025-06-19 14:57:16 +02:00
Manu NALEPA
b2a26f2b62 earliest_available_slot implementation (networking only). 2025-06-19 13:52:47 +02:00
Manu NALEPA
09659010f8 Merge branch 'develop' into peerDAS 2025-06-19 12:01:45 +02:00
Manu NALEPA
589042df20 CreateTestVerifiedRoDataColumnSidecars: Use consistent block root. 2025-06-12 01:03:56 +02:00
terence tsao
312b93e9b1 Fix reconstruction matrix 2025-06-11 15:04:42 -07:00
Ekaterina Riazantseva
f86f76e447 Add PeerDAS reconstruction metrics (#14807)
* Add reconstruction metrics

* Fix time

* Fix format

* Fix format

* Update cells count function

* fix cells count

* Update reconstruction counter

* Fix peerDAS reconstruction counter metric

* Replace dataColumnSidecars with dataColumnSideCars
2025-06-11 19:03:31 +02:00
terence
c311e652eb Set subscribe all data subnets once (#15388) 2025-06-08 17:23:47 +02:00
Manu NALEPA
6a5d78a331 Merge branch 'develop' into peerDAS 2025-06-06 16:01:29 +02:00
Manu NALEPA
a2fd30497e Merge branch 'develop' into peerDAS 2025-06-06 12:46:48 +02:00
Manu NALEPA
a94561f8dc Merge branch 'develop' into peerDAS 2025-06-06 09:56:04 +02:00
Manu NALEPA
af875b78c9 Peer das misc (#15384)
* `ExchangeCapabilities`: Transform `O(n**2)` into `O(2n)` and fix logging.

* Find peers with subnets and logs: Refactor

* Validator custody: Do not wait to be subscribed before advertising the correct `cgc`. (temp hack)
2025-06-06 09:43:13 +02:00
Manu NALEPA
61207bd3ac Merge branch 'develop' into peerDAS 2025-06-02 14:15:22 +02:00
Manu NALEPA
0b6fcd7d17 Merge branch 'develop' into peerDAS 2025-05-28 21:05:22 +02:00
Manu NALEPA
fe2766e716 Merge branch 'develop' into peerDAS 2025-05-26 09:57:57 +02:00
Manu NALEPA
9135d765e1 Merge branch 'develop' into peerDAS 2025-05-23 15:41:27 +02:00
Manu NALEPA
eca87f29d1 Merge branch 'develop' into peerDAS 2025-05-22 14:37:11 +02:00
Manu NALEPA
00821c8f55 Merge branch 'develop' into peerDAS 2025-05-21 13:50:23 +02:00
Manu NALEPA
4b9e92bcd7 Peerdas by root req (#15275)
* `DataColumnStorageSummary`: Implement `HasAtLeastOneIndex`.

* `DataColumnStorage.Get`: Exit early if the root is found but no corresponding columns.

* `custodyColumnsFromPeers`: Simplify.

* Remove duplicate `uint64MapToSortedSlice` function.

* `DataColumnStorageSummary`: Add `Stored`.

* Refactor reconstruction related code.
2025-05-16 16:19:01 +02:00
terence
b01d9005b8 Update data column receive log (#15289) 2025-05-16 07:01:40 -07:00
Manu NALEPA
8d812d5f0e Merge branch 'develop' into peerDAS 2025-05-07 17:41:25 +02:00
terence
24a3cb2a8b Add column identifiers by root request (#15212)
* Add column identifiers by root request

* `DataColumnsByRootIdentifiers`: Fix Un/Marshal.

* alternate MashalSSZ impl

* remove sort.Interface impl

* optimize unmarshal and add defensive checks

* fix offsets in error messages

* Fix build, remove sort

* Fix `SendDataColumnSidecarsByRootRequest` and tests.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Kasey <kasey@users.noreply.github.com>
2025-05-06 14:07:16 +02:00
Manu NALEPA
66d1d3e248 Use finalized state for validator custody instead of head state. (#15243)
* `finalizedState` ==> `FinalizedState`.
We'll need it in another package later.

* `setTargetValidatorsCustodyRequirement`: Use finalized state instead of head state.

* Fix James's comment.
2025-05-05 21:13:49 +02:00
Manu NALEPA
99933678ea Peerdas fix get blobs v2 (#15234)
* `reconstructAndBroadcastDataColumnSidecars`: Improve logging.

* `ReconstructDataColumnSidecars`: Add comments and return early if needed.

* `reconstructAndBroadcastDataColumnSidecars`: Return early if no blobs are retrieved from the EL.

* `filterPeerWhichCustodyAtLeastOneDataColumn`: Remove unneeded log field.

* Fix Terence's comment.
2025-05-02 17:34:32 +02:00
Manu NALEPA
34f8e1e92b Data columns by range: Use all possible peers, then filter them. (#15242) 2025-05-02 12:15:02 +02:00
terence
a6a41a8755 Add column sidecar inclusion proof cache (#15217) 2025-04-29 13:46:32 +02:00
terence
f110b94fac Add flag to subscribe to all blob column subnets (#15197)
* Separate data column subscriptions from attestation and sync committee subnets

* Fix test

* Rename to subscribe-data-subnets

* Update to subscribe-all-data-subnets

* `--subscribe-all-data-subnets`: Add `.` at the end of help, since it seems to be the consensus.

* `ConfigureGlobalFlags`: Fix log.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-04-29 11:59:17 +02:00
Manu NALEPA
33023aa282 Merge branch 'develop' into peerDAS 2025-04-29 11:13:27 +02:00
Manu NALEPA
eeb3cdc99e Merge branch 'develop' into peerDAS 2025-04-18 08:37:33 +02:00
Preston Van Loon
1e7147f060 Remove --compilation_mode=opt, use supranational blst headers. 2025-04-17 20:53:54 +02:00
Manu NALEPA
8936beaff3 Merge branch 'develop' into peerDAS 2025-04-17 16:49:22 +02:00
Manu NALEPA
c00283f247 UpgradeToFulu: Add spec tests. (#15189) 2025-04-17 15:17:27 +02:00
Manu NALEPA
a4269cf308 Add tests (#15188) 2025-04-17 13:12:46 +02:00
Manu NALEPA
91f3c8a4d0 c-kzg-4844 lib: Update to v2.1.1. (#15185) 2025-04-17 01:25:36 +02:00
terence
30c7ee9c7b Validate parent block exists before signature (#15184)
* Validate parent block exists before signature

* `ValidProposerSignature`: Add comment

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-04-17 00:40:48 +02:00
Manu NALEPA
456d8b9eb9 Merge branch 'develop' into peerDAS-do-not-merge 2025-04-16 22:58:38 +02:00
Manu NALEPA
4fe3e6d31a Merge branch 'develop' into peerDAS-do-not-merge 2025-04-16 20:30:19 +02:00
Manu NALEPA
01ee1c80b4 merge from develop 2025-04-16 17:27:48 +02:00
Manu NALEPA
c14fe47a81 Data columns by range requests: Simplify and move from initial sync package to sync package. (#15179)
* `data_column.go`: Factorize declarations (no functional changes).

* Verification for data columns: Do not recompute again if already done.

* `SaveDataColumns`: Delete because unused.

* `MissingDataColumns`: Use `DataColumnStorageSummarizer` instead of `DataColumnStorage`

* `TestFetchDataColumnsFromPeers`: Move trusted setup load out of the loop for optimization.

* `TestFetchDataColumnsFromPeers`: Use fulu block instead of deneb block.

* `fetchDataColumnsFromPeers`: Use functions already implemented in the `sync` package instead of duplicating them here.

* `fetchDataColumnsFromPeers` ==> `fetchMissingDataColumnsFromPeers`.

* Data columns initial sync: simplify

* Requests data columns by range: Move from initial sync to sync package.

Since it will eventually be used by the backfill package, and
the backfill package does not depend on the initial sync package.
2025-04-16 11:18:05 +02:00
terence
b9deabbf0a Execution API: Support blobs_bundle_v2 for PeerDAS (#15167)
* Execution api: add and use blobs_bundle_v2

* Execution bundle fulu can unmarshal

* Manus feedback and fix execution request decode
2025-04-16 10:53:55 +02:00
Manu NALEPA
5d66a98e78 Uniformize data columns sidecars validation pipeline (#15154)
* Rework the data column sidecars verification pipeline.

* Nishant's comment.

* `blocks.BlockWithROBlobs` ==> `blocks.BlockWithROSidecars`

* `batchBlobSync` ==> `batchSidecarSync`.

* `handleBlobs` ==> `handleSidecars`.

* Kasey comment about verification
2025-04-15 20:32:50 +02:00
Manu NALEPA
2d46d6ffae Various small optimizations (#15153)
* Reconstruct data columns from gossip source: Call `setSeenDataColumnIndex`.

* `reconstructAndBroadcastDataColumnSidecars`: Minor optimisation.

Avoid ranging over all columns.

* Reconstructed data columns sidecars from EL: Avoid broadcasting already received data columns.
2025-04-09 11:38:28 +02:00
Manu NALEPA
57107e50a7 Cells proofs (#15152)
* Implement distributed block building.
Credits: Francis

* Add fixes.
2025-04-09 09:28:59 +02:00
Manu NALEPA
47271254f6 New Data Column Sidecar Storage Design, Data Columns as a First-Class Citizen & Unit Testing (#15061)
* DB Filesystem: Move all data column related code to `data_columns.go`

Only code move.

* Implement data columns storage

* Kasey comment: Fix typo

* Kasey comment: Fix clutter

* Kasey comment: `IsDataAvailable`: Remove `nodeID`.

* Kasey comment: indice ==> index

* Kasey comment: Move `CreateTestVerifiedRoDataColumnSidecars` in `beacon-chain/verification/fake`.

* `Store` ==> `Save`.

* Kasey comment: AAAA!

* Kasey comment: Fix typo.

* Kasey comment: Add comment.

* Kasey comment: Stop exporting errors for nothing.

* Kasey comment: Read all metadata at once.

* Kasey comment: Compute file size instead of reading it from stats.

* Kasey comment: Lock mutexes before checking if the file exists.

* Kasey comment: `limit` ==> `nonZeroOffset`.

* Kasey comment: `DataColumnStorage.Get`: Set verified into the `verification package`.

* Kasey comment: `prune` - Flatten the `==` case.

* Kasey comment: Implement and use `storageIndices`.

* `DataColumnsAlignWithBlock`: Move into its own file.

* `DataColumnSidecar`: Rename variables to stick with
https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/das-core.md#datacolumnsidecar

* Kasey comment: Add `file.Sync`.

* `DataColumnStorage.Get`: Remove useless cast.

* (Internal) Kasey comment: Set automatically the count of saved data columns.
2025-04-08 23:20:38 +02:00
Francis Li
f304028874 Add new vars defined in consensus-spec (#15101) 2025-03-31 20:01:47 +02:00
Manu NALEPA
8abc5e159a DataColumnSidecarsForReconstruct: Add guards (#15051) 2025-03-14 10:29:15 +01:00
Manu NALEPA
b1ac53c4dd Set defaultEngineTimeout = 2 * time.Second (#15043) 2025-03-13 13:56:42 +01:00
Francis Li
27ab68c856 feat: implement reconstruct and broadcast data columns (#15023)
* Implement reconstructAndBroadcastDataColumns

* Fix merge error

* Fix tests

* Minor changes.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-13 11:19:34 +01:00
Niran Babalola
ddf5a3953b Fetch data columns from multiple peers instead of just supernodes (#14977)
* Extract the block fetcher's peer selection logic for data columns so it can be used in both by range and by root requests

* Refactor data column sidecar request to send requests to multiple peers instead of supernodes

* Remove comment

* Remove unused method

* Add tests for AdmissiblePeersForDataColumns

* Extract data column fetching into standalone functions

* Remove AdmissibleCustodyGroupsPeers and replace the final call with requests to multiple peers

* Apply suggestions from code review

Co-authored-by: Manu NALEPA <nalepae@gmail.com>

* Wrap errors

* Use cached peerdas.Info and properly convert custody groups to custody columns

* Rename filterPeersForRangeReq

* Preserve debugging descriptions when filtering out peers

* Remove unused functions.

* Initialize nested maps

* Fix comment

* First pass at retry logic for data column requests

* Select fresh peers for each retry

* Return an error if there are requested columns remaining

* Adjust errors

* Improve slightly the godoc.

* Improve wrapped error messages.

* `AdmissiblePeersForDataColumns`: Use value or `range`.

* Remove `convertCustodyGroupsToDataColumnsByPeer` since used only once.

* Minor fixes.

* Retry until we run out of peers

* Delete from the map of peers instead of filtering

* Remove unneeded break

* WIP: TestRequestDataColumnSidecars

* `RequestDataColumnSidecars`: Move the happy path in the for loop.

* Convert the peer ID to a node ID instead of using peer.EnodeID

* Extract AdmissiblePeersForDataColumns from a method into a function and use it (instead of a mock) in TestRequestDataColumnSidecars

* Track data column requests in tests to compare vs expectations

* Run gazelle

* Clean up test config changes so other tests don't break

* Clean up comments

* Minor changes.

* Add tests for peers that don't respond with all requested columns

* Respect MaxRequestDataColumnSidecars

---------

Co-authored-by: Manu NALEPA <nalepae@gmail.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-12 11:46:20 +01:00
Manu NALEPA
92d2fc101d Implement validator custody (#14948)
* Node info: Rename cache and mutex.

* Add `VALIDATOR_CUSTODY_REQUIREMENT` and `BALANCE_PER_ADDITIONAL_CUSTODY_GROUP`.

* Implement `ValidatorsCustodyRequirement`.

* Sync service: Add tracked validators cache.

* `dataColumnSidecarByRootRPCHandler`: Remove custody columns in logs.

* `dataColumnSidecarByRangeRPCHandler`: Remove custody columns in logs.

* `blobsFromStoredDataColumns`: Simplify.

No longer distinguish between "can theoretically reconstruct" and "can actually reconstruct".

* Implement validator custody.

* Fix Nishant's comment.

* Fix Nishant's commit.
2025-03-11 11:11:23 +01:00
Francis Li
8996000d2b feature: Implement data column support for different storage layouts (#15014)
* Implement data column support for different storage layouts

* Fix errors

* Fix linting

* `slotFromFile`: First try to decode as a data column.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-07 20:25:31 +01:00
Francis Li
a2fcba2349 feat: implement reconstruct data column sidecars (#15005) 2025-03-05 17:23:58 +01:00
Francis Li
abe8638991 feat: update ckzg lib to support ComputeCells (#15004)
* Update ckzg version to include ComputeCells

* Minor fix

* Run `bazel run //:gazelle -- update-repos -from_file=go.mod -to_macro=deps.bzl%prysm_deps -prune=true`

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-04 17:48:18 +01:00
Francis Li
0b5064b474 feat: cell proof computation related proto and generated go files (#15003)
* Add new message type to proto and generate .go files

* `proto/engine/v1`: Remove `execution_engine_eip7594.go` since this file does not exists.

Rerun ` hack/update-go-pbs.sh` and `hack/update-go-ssz.sh `.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-03-04 17:48:01 +01:00
Manu NALEPA
da9d4cf5b9 Merge branch 'develop' into peerDAS 2025-02-21 16:03:20 +01:00
Manu NALEPA
a62cca15dd Merge branch 'develop' into peerDAS 2025-02-20 15:48:07 +01:00
Manu NALEPA
ac04246a2a Avoid computing peerDAS info again and again. (#14893)
* `areDataColumnsAvailable`: `signed` ==> `signedBlock`.

* peerdas: Split `helpers.go` in multiple files respecting the specification.

* peerDAS: Implement `Info`.

* peerDAS: Use cached `Info` when possible.
2025-02-14 18:06:04 +01:00
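
The caching idea in the commit above is to compute the peerDAS-derived values (custody columns, subnets, and so on) once per input and reuse them. A minimal sketch of such a cache, keyed here by custody group count only; the type, field names and `compute` callback are hypothetical, not Prysm's actual `peerdas.Info` API:

```go
package sketch

import "sync"

// dasInfo is a hypothetical aggregate of values derived from a custody group count.
type dasInfo struct {
	CustodyColumns map[uint64]bool
	CustodySubnets map[uint64]bool
}

var (
	infoMu    sync.Mutex
	infoCache = map[uint64]*dasInfo{}
)

// cachedInfo returns the derivation for a custody group count, computing it at most once.
func cachedInfo(custodyGroupCount uint64, compute func(uint64) *dasInfo) *dasInfo {
	infoMu.Lock()
	defer infoMu.Unlock()
	if v, ok := infoCache[custodyGroupCount]; ok {
		return v
	}
	v := compute(custodyGroupCount)
	infoCache[custodyGroupCount] = v
	return v
}
```
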
Manu NALEPA
0923145bd7 Merge branch 'develop' into peerDAS 2025-02-14 16:51:05 +01:00
Manu NALEPA
a216cb4105 Merge branch 'develop' into peerDAS 2025-02-13 18:22:21 +01:00
Manu NALEPA
01705d1f3d Peer das sync empty requests (#14854)
* `TestBuildBwbSlices`: Add test case failing with the current implementation.

* Fix `buildBwbSlices` to comply with the new test case.

* `block_fetchers.go`: Improve logging and godoc.

* `DataColumnsRPCMinValidSlot`: Update to Fulu.
2025-02-03 15:23:04 +01:00
Manu NALEPA
14f93b4e9d Sync: Integrate batch directly in buildBwbSlices. (#14843)
Previously, `buildBwbSlices` built the slices first, and only afterwards were too-big requests batched in `buildDataColumnSidecarsByRangeRequests`.

In some edge cases, this led to requesting data columns from peers for blocks with no blobs.

Splitting by batch directly in `buildBwbSlices` fixes the issue.
2025-01-30 12:11:06 +01:00
Manu NALEPA
ad11036c36 reconstructAndBroadcastBlobs: Temporarily deactivate starting at Fulu. 2025-01-27 15:15:34 +01:00
Manu NALEPA
632a06076b Merge branch 'develop' into peerDAS 2025-01-22 21:30:32 +01:00
Manu NALEPA
242c2b0268 Merge branch 'develop' into peerDAS 2025-01-22 20:08:10 +01:00
Ekaterina Riazantseva
19662da905 Add PeerDAS kzg and inclusion proof verification metrics (#14814) 2025-01-21 16:20:10 +01:00
Ekaterina Riazantseva
7faee5af35 Add PeerDAS gossip verification metrics (#14796) 2025-01-21 16:16:12 +01:00
Ekaterina Riazantseva
805ee1bf31 Add 'beacon' prefix to 'data_column_sidecar_computation' metric (#14790) 2025-01-21 16:14:26 +01:00
Manu NALEPA
bea46fdfa1 Merge branch 'develop' into peerDAS 2025-01-20 13:37:29 +01:00
Manu NALEPA
f6b1fb1c88 Merge branch 'develop' into peerDAS 2025-01-16 10:23:21 +01:00
Manu NALEPA
6fb349ea76 unmarshalState: Use hasFuluKey. 2025-01-15 20:48:25 +01:00
Manu NALEPA
e5a425f5c7 Merge branch 'develop' into peerDAS 2025-01-15 17:18:34 +01:00
Manu NALEPA
f157d37e4c peerDAS: Decouple network subnets from das-core. (#14784)
https://github.com/ethereum/consensus-specs/pull/3832/
2025-01-14 10:45:05 +01:00
Manu NALEPA
5f08559bef Merge branch 'develop' into peerDAS 2025-01-08 10:18:18 +01:00
Manu NALEPA
a082d2aecd Merge branch 'fulu-boilerplate' into peerDAS 2025-01-06 13:45:33 +01:00
Manu NALEPA
bcfaff8504 Upgraded state to <fork> log: Move from debug to info.
Rationale:
This log is the only one notifying the user that a new fork happened.
A new fork is always a little bit stressful for a node operator.
Having at least one log indicating that the client switched forks is useful.
2025-01-05 16:22:43 +01:00
Manu NALEPA
d8e09c346f Implement the Fulu fork boilerplate. 2025-01-05 16:22:38 +01:00
Manu NALEPA
876519731b Prepare for future fork boilerplate. 2025-01-05 16:14:02 +01:00
Manu NALEPA
de05b83aca Merge branch 'develop' into peerDAS 2024-12-30 15:11:02 +01:00
Manu NALEPA
56c73e7193 Merge branch 'develop' into peerDAS 2024-12-27 22:11:36 +01:00
Manu NALEPA
859ac008a8 Activate peerDAS at electra. (#14734) 2024-12-27 09:48:57 +01:00
Manu NALEPA
f882bd27c8 Merge branch 'develop' into peerDAS 2024-12-18 16:15:32 +01:00
Manu NALEPA
361e5759c1 Merge branch 'develop' into peerDAS 2024-12-17 22:19:20 +01:00
Manu NALEPA
34ef0da896 Merge branch 'develop' into peerDAS 2024-12-10 23:11:45 +01:00
Manu NALEPA
726e8b962f Revert "Revert "Add error count prom metric (#14670)""
This reverts commit 5f17317c1c.
2024-12-10 21:49:40 +01:00
Manu NALEPA
453ea01deb disconnectFromPeer: Remove unused function. 2024-11-28 17:37:30 +01:00
Manu NALEPA
6537f8011e Merge branch 'peerDAS' into peerDAS-do-not-merge 2024-11-28 17:27:44 +01:00
Manu NALEPA
5f17317c1c Revert "Add error count prom metric (#14670)"
This reverts commit b28b1ed6ce.
2024-11-28 16:37:19 +01:00
Manu NALEPA
3432ffa4a3 PeerDAS: Batch columns verifications (#14559)
* `ColumnAlignsWithBlock`: Split lines.

* Data columns verifications: Batch

* Remove completely `DataColumnBatchVerifier`.

Only `DataColumnsVerifier` (with `s`) on columns remains.
It is the responsability of the function which receive the data column
(either by gossip, by range request or by root request) to verify the
data column wrt. corresponding checks.

* Fix Nishant's comment.
2024-11-27 10:37:03 +01:00
Manu NALEPA
9dac67635b streamDataColumnBatch: Sort columns by index. (#14542)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#datacolumnsidecarsbyrange-v1

The following data column sidecars, where they exist, MUST be sent in (slot, column_index) order.
2024-11-27 10:37:03 +01:00
Manu NALEPA
9be69fbd07 PeerDAS: Fix major bug in dataColumnSidecarsByRangeRPCHandler and allow syncing from full nodes. (#14532)
* `validateDataColumnsByRange`: `current` ==> `currentSlot`.

* `validateRequest`: Extract `remotePeer` variable.

* `dataColumnSidecarsByRangeRPCHandler`: Small non functional refactor.

* `streamDataColumnBatch`: Fix major bug.

Before this commit, the node was unable to respond with a data column index higher than the count of stored data columns.
For example, if there are 8 data columns stored for a given block, the node was
able to respond for data column indices 1, 3, and 5, but not for 10, 16 or 127.

The issue was visible only for full nodes, since super nodes always store 128 data columns.

* Initial sync: Fetch data columns from all peers.
(Not only from supernodes.)

* Nishant's comment: Fix `lastSlot` and `endSlot` duplication.

* Address Nishant's comment.
2024-11-27 10:37:03 +01:00
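
The bug described in `streamDataColumnBatch` above is essentially an index-versus-count confusion: stored data column indices are arbitrary values in [0, 128), not positions in [0, storedCount). A small self-contained sketch of the broken and the fixed check, with hypothetical names and data, not Prysm's actual handler:

```go
package main

import "fmt"

const numberOfColumns = 128 // total column count assumed from the PeerDAS spec

// storedColumns maps column index -> present, e.g. a full node custodying a few columns.
var storedColumns = map[uint64]bool{3: true, 10: true, 16: true, 127: true}

// canServeBuggy reproduces the broken check: it treats the requested index as a
// position into the stored set, so any index >= len(storedColumns) is rejected.
func canServeBuggy(index uint64) bool {
	return index < uint64(len(storedColumns))
}

// canServeFixed checks membership against the actual stored indices.
func canServeFixed(index uint64) bool {
	return index < numberOfColumns && storedColumns[index]
}

func main() {
	fmt.Println(canServeBuggy(127), canServeFixed(127)) // false true
}
```
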
Manu NALEPA
e21261e893 Data columns initial sync: Rework. (#14522) 2024-11-27 10:37:03 +01:00
Nishant Das
da53a8fc48 Fix Commitments Check (#14493)
* Fix Commitments Check

* `highestFinalizedEpoch`: Refactor (no functional change).

* `retrieveMissingDataColumnsFromPeers`: Fix logs.

* `VerifyDataColumnSidecarKZGProofs`: Optimise with capacity.

* Save data columns when initial syncing.

* `dataColumnSidecarsByRangeRPCHandler`: Add logs when a request enters.

* Improve logging.

* Improve logging.

* `peersWithDataColumns`: No longer filter on peer head slot.

* Fix Nishant's comment.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:37:03 +01:00
Manu NALEPA
a14634e656 PeerDAS: Improve initial sync logs (#14496)
* `retrieveMissingDataColumnsFromPeers`: Search only for needed peers.

* Improve logging.
2024-11-27 10:37:03 +01:00
Manu NALEPA
43761a8066 PeerDAS: Fix initial sync with super nodes (#14495)
* Improve logging.

* `retrieveMissingDataColumnsFromPeers`: Limit to `512` items per request.

* `retrieveMissingDataColumnsFromPeers`: Allow `nil` peers.

Before this commit:
If, when this function is called, we are not yet connected to enough peers, then `peers` may not be satisfactory,
and, if new peers connect later, we will never see them.

After this commit:
If `peers` is `nil`, then we regularly check for all connected peers.
If `peers` is not `nil`, then we use them.
2024-11-27 10:37:03 +01:00
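
The `nil`-peers behaviour above can be sketched as: when no fixed peer set is supplied, re-evaluate the currently connected peers on every retry instead of freezing the set at call time. The callbacks and the retry bound below are hypothetical, not the actual Prysm signatures:

```go
package sketch

import (
	"fmt"
	"time"
)

// requestMissingColumns retries until one attempt succeeds. If fixedPeers is nil,
// the candidate set is refreshed from the currently connected peers on every
// attempt, so peers that connect after the call is made are still considered.
func requestMissingColumns(fixedPeers []string, connectedPeers func() []string, requestFrom func([]string) error) error {
	for attempt := 0; attempt < 5; attempt++ {
		candidates := fixedPeers
		if candidates == nil {
			candidates = connectedPeers() // regularly re-check all connected peers
		}
		if err := requestFrom(candidates); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("no peer served the missing columns")
}
```
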
Manu NALEPA
01dbc337c0 PeerDAS: Fix initial sync (#14494)
* `BestFinalized`: Refactor (no functional change).

* `BestNonFinalized`: Refactor (no functional change).

* `beaconBlocksByRangeRPCHandler`: Remove useless log.

The same is already printed at the start of the function.

* `calculateHeadAndTargetEpochs`: Avoid `else`.

* `ConvertPeerIDToNodeID`: Improve error.

* Stop printing noisy "peer should be banned" logs.

* Initial sync: Request data columns from peers which:
- custody a superset of columns we need, and
- have a head slot >= our target slot.

* `requestDataColumnsFromPeers`: Shuffle peers before requesting.

Before this commit, we always requested peers in the same order,
until one of them responded.
Without shuffling, we always requested data columns from the same
peer.

* `requestDataColumnsFromPeers`: If error from a peer, just log the error and skip the peer.

* Improve logging.

* Fix tests.
2024-11-27 10:37:03 +01:00
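
Shuffling the peer list before each request, as in the commit above, is a one-liner with the standard library; a hedged sketch (the helper name is an assumption):

```go
package sketch

import "math/rand"

// shuffledCopy returns a shuffled copy of peers so that retries do not always
// start with the same peer.
func shuffledCopy(peers []string) []string {
	out := make([]string, len(peers))
	copy(out, peers)
	rand.Shuffle(len(out), func(i, j int) { out[i], out[j] = out[j], out[i] })
	return out
}
```
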
Nishant Das
92f9b55fcb Put Subscriber in Goroutine (#14486) 2024-11-27 10:36:18 +01:00
Manu NALEPA
f65f12f58b Stop disconnecting peers for bad response / excessive colocation. (#14483) 2024-11-27 10:36:17 +01:00
Manu NALEPA
f2b61a3dcf PeerDAS: Misc improvements (#14482)
* `retrieveMissingDataColumnsFromPeers`: Improve logging.

* `dataColumnSidecarByRootRPCHandler`: Stop decreasing a peer's score when it asks for a column we do not custody.

* `dataColumnSidecarByRootRPCHandler`: If a data column is unavailable, stop waiting for it.

This behaviour was useful for peer sampling.
Now, just return the data column if we store it.
If we don't, skip.

* Dirty code comment.

* `retrieveMissingDataColumnsFromPeers`: Improve logs.

* `SendDataColumnsByRangeRequest`: Improve logs.

* `dataColumnSidecarsByRangeRPCHandler`: Improve logs.
2024-11-27 10:34:38 +01:00
Manu NALEPA
77a6d29a2e PeerDAS: Re-enable full node joining the main fork (#14475)
* `columnErrBuilder`: Use `Wrap` instead of `Join`.

Reason: `Join` inserts a line break, which makes the log quite unreadable.

* `validateDataColumn`: Improve log.

* `areDataColumnsAvailable`: Improve log.

* `SendDataColumnSidecarByRoot` ==> `SendDataColumnSidecarsByRootRequest`.

* `handleDA`: Refactor error message.

* `sendRecentBeaconBlocksRequest` ==> `sendBeaconBlocksRequest`.

Reason: There is no notion at all of "recent" in the function.

If the caller decides to call this function only with "recent" blocks, that's fine.
However, the function itself will know nothing about the "recentness" of these blocks.

* `sendBatchRootRequest`: Improve comments.

* `sendBeaconBlocksRequest`: Avoid `else` usage and use map of bool instead of `struct{}`.

* `wrapAndReportValidation`: Remove `agent` from log.

Reason: This prevents the log from fitting on one line, and it is not really useful for debugging.

* `validateAggregateAndProof`: Add comments.

* `GetValidCustodyPeers`: Fix typo.

* `GetValidCustodyPeers` ==> `DataColumnsAdmissibleCustodyPeers`.

* `CustodyHandler` ==> `DataColumnsHandler`.

* `CustodyCountFromRemotePeer` ==> `DataColumnsCustodyCountFromRemotePeer`.

* Implement `DataColumnsAdmissibleSubnetSamplingPeers`.

* Use `SubnetSamplingSize` instead of `CustodySubnetCount` where needed.

* Revert "`wrapAndReportValidation`: Remove `agent` from log."

This reverts commit 55db351102.
2024-11-27 10:34:38 +01:00
Manu NALEPA
31d16da3a0 PeerDAS: Multiple improvements (#14467)
* `scheduleReconstructedDataColumnsBroadcast`: Really minor refactor.

* `receivedDataColumnsFromRootLock` -> `dataColumnsFromRootLock`

* `reconstructDataColumns`: Stop looking into the DB to know if we have some columns.

Before this commit:
Each time we receive a column, we look into the filesystem for all columns we store.
==> For 128 columns, that is 1 + 2 + 3 + ... + 128 = 128(128+1)/2 = 8256 file lookups.

Also, as soon as a column is saved into the filesystem, we assume that an immediate
follow-up read of the filesystem will see it (strict consistency).
This turns out not to always be true.

==> Sometimes, we can reconstruct and reseed columns more than once, because of this lack of filesystem strict consistency.

After this commit:
We use a (strictly consistent) cache to determine if we received a column or not.
==> No more consistency issues, and less stress on the filesystem.

* `dataColumnSidecarByRootRPCHandler`: Improve logging.

Before this commit, logged values assumed that all requested columns correspond to
the same block root, which is not always the case.

After this commit, we know which columns are requested for which root.

* Add a log when broadcasting a data column.

This is useful to debug "lost data columns" in devnet.

* Address Nishant's comment
2024-11-27 10:34:38 +01:00
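
The cache described in the commit above replaces repeated filesystem scans with a strictly consistent in-memory record of which column indices have been received per block root. A minimal sketch, with hypothetical names rather than Prysm's actual cache:

```go
package sketch

import "sync"

// seenColumns is a hypothetical, strictly consistent cache of received column
// indices per block root, used instead of re-listing files on disk.
type seenColumns struct {
	mu   sync.Mutex
	seen map[[32]byte]map[uint64]bool
}

func newSeenColumns() *seenColumns {
	return &seenColumns{seen: make(map[[32]byte]map[uint64]bool)}
}

// markSeen records a column and reports how many columns are now seen for the root.
func (c *seenColumns) markSeen(root [32]byte, index uint64) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.seen[root] == nil {
		c.seen[root] = make(map[uint64]bool)
	}
	c.seen[root][index] = true
	return len(c.seen[root])
}
```
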
Justin Traglia
19221b77bd Update c-kzg-4844 to v2.0.1 (#14421) 2024-11-27 10:34:38 +01:00
Manu NALEPA
83df293647 Peerdas: Several updates (#14459)
* `validateDataColumn`: Refactor logging.

* `dataColumnSidecarByRootRPCHandler`: Improve logging.

* `isDataAvailable`: Improve logging.

* Add hidden debug flag: `--data-columns-reject-slot-multiple`.

* Add more logs about peer disconnection.

* `validPeersExist` --> `enoughPeersAreConnected`

* `beaconBlocksByRangeRPCHandler`: Add remote Peer ID in logs.

* Stop calling twice `writeErrorResponseToStream` in case of rate limit.
2024-11-27 10:34:37 +01:00
Manu NALEPA
c20c09ce36 Peerdas: Full subnet sampling and sendBatchRootRequest fix. (#14452)
* `sendBatchRootRequest`: Refactor and add comments.

* `sendBatchRootRequest`: Only send requests to peers that custody a superset of our columns.

Before this commit, we sent "data columns by root requests" for data columns peers do not custody.

* Data columns: Use subnet sampling only.

(Instead of peer sampling.)

* `areDataColumnsAvailable`: Improve logs.

* `GetBeaconBlock`: Improve logs.

Rationale: A `begin` log should always be followed by a `success` log or a `failure` log.
2024-11-27 10:30:29 +01:00
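
Restricting by-root requests to peers that custody a superset of the needed columns, as the first bullets above describe, is a set-inclusion check; a short sketch with hypothetical types:

```go
package sketch

// custodiesSuperset reports whether a peer's custody columns include every
// column we need, so a by-root request to that peer can be fully served.
func custodiesSuperset(peerCustody, needed map[uint64]bool) bool {
	for column := range needed {
		if !peerCustody[column] {
			return false
		}
	}
	return true
}
```
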
Manu NALEPA
2191faaa3f Fix CPU usage in small devnets (#14446)
* `CustodyCountFromRemotePeer`: Set happy path in the outer scope.

* `FindPeersWithSubnet`: Improve logging.

* `listenForNewNodes`: Avoid infinite loop in a small subnet.

* Address Nishant's comment.

* FIx Nishant's comment.
2024-11-27 10:30:29 +01:00
Nishant Das
2de1e6f3e4 Revert "Change Custody Count to Uint8 (#14386)" (#14415)
This reverts commit bd7ec3fa97.
2024-11-27 10:30:29 +01:00
Manu NALEPA
db44df3964 Fix Initial Sync with 128 data columns subnets (#14403)
* `pingPeers`: Add log with new ENR when modified.

* `p2p Start`: Use idiomatic go error syntax.

* P2P `start`: Fix error message.

* Do not use any bootnodes at all if the `--chain-config-file` flag is used and no `--bootstrap-node` flag is used.

Before this commit, if the `--chain-config-file` flag was used and no `--bootstrap-node` flag was used, then bootnodes were (incorrectly) defaulted to the `mainnet` ones.

* `validPeersExist`: Centralize logs.

* `AddConnectionHandler`: Improve logging.

"Peer connected" does not really reflect the fact that a new peer is actually connected. --> "New peer connection" is more clear.

Also, instead of writing `0`, `1`or `2` for direction, now it's writted "Unknown", "Inbound", "Outbound".

* Logging: Add 2 decimals to the timestamp in text and JSON logs.

* Improve "no valid peers" logging.

* Improve "Some columns have no peers responsible for custody" logging.

* `pubsubSubscriptionRequestLimit`: Increase to be consistent with data columns.

* `sendPingRequest`: Improve logging.

* `FindPeersWithSubnet`: Regularly recheck in our current set of peers if we have enough peers for this topic.

Before this commit, new peers HAD to be found, even if the current peers were already acceptable.
For a very small network, this used to lead to an infinite search.

* `subscribeDynamicWithSyncSubnets`: Use exactly the same subscription function initially and every slot.

* Make deepsource happier.

* Nishant's comment: Change peer disconnected log.

* Nishant's comment: Change `Too many incoming subscription` log from error to debug.

* `FindPeersWithSubnet`: Address Nishant's comment.

* `batchSize`: Address Nishant's comment.

* `pingPeers` ==> `pingPeersAndLogEnr`.

* Update beacon-chain/sync/subscriber.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:30:29 +01:00
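
The `FindPeersWithSubnet` change noted above (regularly recheck the current peer set) can be sketched as a loop that stops as soon as the threshold is met by already-connected peers, so a very small network never spins forever looking for peers that do not exist. The callbacks below are hypothetical, not Prysm's actual discovery API:

```go
package sketch

import (
	"context"
	"time"
)

// ensurePeersForTopic waits until at least threshold peers are connected on a
// topic, rechecking the current peer set on every tick instead of insisting on
// discovering brand-new peers.
func ensurePeersForTopic(ctx context.Context, threshold int, currentPeerCount func() int, discoverMore func()) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		if currentPeerCount() >= threshold {
			return nil // the peers we already have are acceptable
		}
		discoverMore()
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```
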
Nishant Das
f92eb44c89 Add Data Column Computation Metrics (#14400)
* Add Data Column Metrics

* Shift it All To Peerdas Package
2024-11-27 10:24:03 +01:00
Nishant Das
a26980b64d Set Precompute at 8 (#14399) 2024-11-27 10:24:03 +01:00
Manu NALEPA
f58cf7e626 PeerDAS: Improve logging and reduce the number of needed goroutines for reconstruction (#14397)
* `broadcastAndReceiveDataColumns`: Use real `sidecar.ColumnIndex` instead of position in the slice.

And improve logging as well.

* `isDataColumnsAvailable`: Improve logging.

* `validateDataColumn`: Print `Accepted data column sidecar gossip` really at the end.

* Subscriber: Improve logging.

* `sendAndSaveDataColumnSidecars`: Use common used function for logging.

* `dataColumnSidecarByRootRPCHandler`: Logging - Print `all` instead of listing all the columns for a super node.

* Verification: Improve logging.

* `DataColumnsWithholdCount`: Set as `uint64` instead of `int`.

* `DataColumnFields`: Improve logging.

* Logging: Remove now useless private `columnFields`function.

* Avoid useless goroutines blocking for reconstruction.

* Update beacon-chain/sync/subscriber.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Address Nishant's comment.

* Improve logging.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:24:03 +01:00
Nishant Das
68da7dabe2 Fix Bugs in PeerDAS Testing (#14396)
* Fix Various Bugs in PeerDAS

* Remove Log

* Remove useless copy var.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:24:03 +01:00
Nishant Das
d1e43a2c02 Change Custody Count to Uint8 (#14386)
* Add Changes for Uint8 Csc

* Fix Build

* Fix Build for Sync

* Fix Discovery Test
2024-11-27 10:24:03 +01:00
Nishant Das
3652bec2f8 Use Data Column Validation Across Prysm (#14377)
* Use Data Column Validation Everywhere

* Fix Build

* Fix Lint

* Fix Clock Synchronizer

* Fix Panic
2024-11-27 10:24:03 +01:00
Nishant Das
81b7a1725f Update Config To Latest Value (#14352)
* Update values

* Update Spec To v1.5.0-alpha.5

* Fix Discovery Tests

* Hardcode Subnet Count For Tests

* Fix All Initial Sync Tests

* Gazelle

* Less Chaotic Service Initialization

* Gazelle
2024-11-27 10:24:03 +01:00
Nishant Das
0c917079c4 Fix CI in PeerDAS (#14347)
* Update go.yml

* Disable mnd

* Update .golangci.yml

* Update go.yml

* Update go.yml

* Update .golangci.yml

* Update go.yml

* Fix Lint Issues

* Remove comment

* Update .golangci.yml
2024-11-27 10:24:03 +01:00
Manu NALEPA
a732fe7021 Implement /eth/v1/beacon/blob_sidecars/{block_id} for peerDAS. (#14312)
* `parseIndices`: `O(n**2)` ==> `O(n)`.

* PeerDAS: Implement `/eth/v1/beacon/blob_sidecars/{block_id}`.

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Rename some functions.

* `Blobs`: Fix empty slice.

* `recoverCellsAndProofs` --> Move function in `beacon-chain/core/peerdas`.

* peerDAS helpers: Add missing tests.

* Implement `CustodyColumnCount`.

* `RecoverCellsAndProofs`: Remove useless argument `columnsCount`.

* Tests: Add cleanups.

* `blobsFromStoredDataColumns`: Reconstruct if needed.

* Make deepsource happy.

* Beacon API: Use provided indices.

* Make deepsource happier.

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-27 10:24:03 +01:00
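
The `parseIndices` O(n**2) to O(n) change mentioned in the first bullet above is the usual replace-a-rescan-with-a-set rewrite; a hedged sketch (the parameter names and the bound check are assumptions, not the actual Beacon API handler):

```go
package sketch

import (
	"fmt"
	"strconv"
)

// parseIndices parses and deduplicates index query parameters in O(n) by using
// a set for the "already seen" check instead of rescanning the output slice.
func parseIndices(raw []string, max uint64) ([]uint64, error) {
	seen := make(map[uint64]bool, len(raw))
	out := make([]uint64, 0, len(raw))
	for _, s := range raw {
		v, err := strconv.ParseUint(s, 10, 64)
		if err != nil || v >= max {
			return nil, fmt.Errorf("invalid index %q", s)
		}
		if seen[v] {
			continue
		}
		seen[v] = true
		out = append(out, v)
	}
	return out, nil
}
```
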
Nishant Das
d75a7aae6a Add Data Column Verification (#14287)
* Persist All Changes

* Fix All Tests

* Fix Build

* Fix Build

* Fix Build

* Fix Test Again

* Add missing verification

* Add Test Cases for Data Column Validation

* Fix comments for methods

* Fix comments for methods

* Fix Test

* Manu's Review
2024-11-27 10:24:03 +01:00
Manu NALEPA
e788a46e82 PeerDAS: Add MetadataV3 with custody_subnet_count (#14274)
* `sendPingRequest`: Add some comments.

* `sendPingRequest`: Replace `stream.Conn().RemotePeer()` by `peerID`.

* `pingHandler`: Add comments.

* `sendMetaDataRequest`: Add comments and implement an unique test.

* Gather `SchemaVersion`s in the same `const` definition.

* Define `SchemaVersionV3`.

* `MetaDataV1`: Fix comment.

* Proto: Define `MetaDataV2`.

* `MetaDataV2`: Generate SSZ.

* `newColumnSubnetIDs`: Use smaller lines.

* `metaDataHandler` and `sendMetaDataRequest`: Manage `MetaDataV2`.

* `RefreshPersistentSubnets`: Refactor tests (no functional change).

* `RefreshPersistentSubnets`: Refactor and add comments (no functional change).

* `RefreshPersistentSubnets`: Compare cache with both ENR & metadata.

* `RefreshPersistentSubnets`: Manage peerDAS.

* `registerRPCHandlersPeerDAS`: Register `RPCMetaDataTopicV3`.

* `CustodyCountFromRemotePeer`: Retrieve the count from metadata.

Then default to ENR, then default to the default value.

* Update beacon-chain/sync/rpc_metadata.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Fix duplicate case.

* Remove version testing.

* `debug.proto`: Stop breaking ordering.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:24:03 +01:00
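
The fallback order described above, metadata first, then the ENR field, then the default, is a simple chain; in the sketch below the accessors are hypothetical and the default value of 4 (`CUSTODY_REQUIREMENT`) is an assumption:

```go
package sketch

const defaultCustodyRequirement uint64 = 4 // assumed spec default (CUSTODY_REQUIREMENT)

// custodyCountFromRemotePeer prefers the peer's metadata, falls back to its ENR
// field, and finally to the default value when neither is available.
func custodyCountFromRemotePeer(fromMetadata, fromENR func() (uint64, bool)) uint64 {
	if v, ok := fromMetadata(); ok {
		return v
	}
	if v, ok := fromENR(); ok {
		return v
	}
	return defaultCustodyRequirement
}
```
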
Manu NALEPA
199543125a Fix data columns sampling (#14263)
* Fix the obvious...

* Data columns sampling: Modify logging.

* `waitForChainStart`: Set it threadsafe - Do only wait once.

* Sampling: Wait for chain start before running the sampling.

Reason: `newDataColumnSampler1D` needs `s.ctxMap`.
`s.ctxMap` is only set when chain is started.

Previously, `waitForChainStart` was only called in `s.registerHandlers`, itself called in a goroutine.

==> We had a race condition here: sometimes `newDataColumnSampler1D` was called after `s.ctxMap` was set, sometimes not.

* Address Nishant's comments.

* Sampling: Improve logging.

* `waitForChainStart`: Remove `chainIsStarted` check.
2024-11-27 10:19:07 +01:00
Manu NALEPA
ca63efa770 PeerDAS: Fix initial sync (#14208)
* `SendDataColumnsByRangeRequest`: Add some new fields in logs.

* `BlobStorageSummary`: Implement `HasDataColumnIndex` and `AllDataColumnsAvailable`.

* Implement `fetchDataColumnsFromPeers`.

* `fetchBlobsFromPeer`: Return only one error.
2024-11-27 10:19:07 +01:00
Manu NALEPA
345e6edd9c Make deepsource happy (#14237)
* DeepSource: Pass heavy objects by pointers.

* `removeBlockFromQueue`: Remove redundant error checking.

* `fetchBlobsFromPeer`: Use same variable for `append`.

* Remove unused arguments.

* Combine types.

* `Persist`: Add documentation.

* Remove unused receiver

* Remove duplicated import.

* Stop using both pointer and value receiver at the same time.

* `verifyAndPopulateColumns`: Remove unused parameter

* Stop using an empty slice literal to declare a variable.
2024-11-27 10:19:07 +01:00
Manu NALEPA
6403064126 PeerDAS: Run reconstruction in parallel. (#14236)
* PeerDAS: Run reconstruction in parallel.

* `isDataAvailableDataColumns` --> `isDataColumnsAvailable`

* `isDataColumnsAvailable`: Return `nil` as soon as half of the columns are received.

* Make deepsource happy.
2024-11-27 10:19:07 +01:00
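
Returning `nil` as soon as half of the columns are received (last bullet above) relies on the erasure-coding property that any 64 of the 128 extended columns are enough to reconstruct the rest; a minimal availability check under that assumption, with hypothetical names:

```go
package sketch

const numberOfColumns = 128 // total extended column count assumed from the PeerDAS spec

// enoughForReconstruction reports whether the received columns are sufficient:
// with a 2x extended blob, any half of the columns allows full reconstruction.
func enoughForReconstruction(received map[uint64]bool) bool {
	return len(received) >= numberOfColumns/2
}
```
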
Justin Traglia
0517d76631 Update ckzg4844 to latest version of das branch (#14223)
* Update ckzg4844 to latest version

* Run go mod tidy

* Remove unnecessary tests & run goimports

* Remove fieldparams from blockchain/kzg

* Add back blank line

* Avoid large copies

* Run gazelle

* Use trusted setup from the specs & fix issue with struct

* Run goimports

* Fix mistake in makeCellsAndProofs

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:19:07 +01:00
Nishant Das
000d480f77 Add Current Changes (#14231) 2024-11-27 10:19:07 +01:00
Manu NALEPA
b40a8ed37e Implement and use filterPeerForDataColumnsSubnet. (#14230) 2024-11-27 10:19:07 +01:00
Francis Li
d21c2bd63e [PeerDAS] Parallelize data column sampling (#14105)
* PeerDAS: parallelizing sample queries

* PeerDAS: select sample from non custodied columns

* Finish rebase

* Add more test cases
2024-11-27 10:19:07 +01:00
kevaundray
7a256e93f7 chore!: Use RecoverCellsAndKZGProofs instead of RecoverAllCells -> CellsToBlob -> ComputeCellsAndKZGProofs (#14183)
* use recoverCellsAndKZGProofs

* make recoverAllCells and CellsToBlob private

* chore: all methods now return CellsAndProof struct

* chore: update code
2024-11-27 10:19:07 +01:00
Nishant Das
07fe76c2da Trigger PeerDAS At Deneb For E2E (#14193)
* Trigger At Deneb

* Fix Rate Limits
2024-11-27 10:19:07 +01:00
Manu NALEPA
54affa897f PeerDAS: Add KZG verification when sampling (#14187)
* `validateDataColumn`: Add comments and remove debug computation.

* `sampleDataColumnsFromPeer`: Add KZG verification

* `VerifyKZGInclusionProofColumn`: Add unit test.

* Make deepsource happy.

* Address Nishant's comment.

* Address Nishant's comment.
2024-11-27 10:16:50 +01:00
kevaundray
ac4c5fae3c chore!: Make Cell be a flat sequence of bytes (#14159)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* make Cells be flattened sequence of bytes

* chore: add test for flattening roundtrip

* chore: remove code that was doing the flattening outside of the kzg package

* fix merge

* fix

* remove now un-needed conversion

* use pointers for Cell parameters

* linter

* rename cell conversion methods (this only applies to old version of c-kzg)
2024-11-27 10:16:50 +01:00
Manu NALEPA
2845d87077 Move log from error to debug. (#14194)
Reason: If a peer does not expose its `csc` field in its ENR,
then there is nothing we can do.
2024-11-27 10:16:50 +01:00
Nishant Das
dc2c90b8ed Activate PeerDAS with the EIP7594 Fork Epoch (#14184)
* Save All the Current Changes

* Add check for data sampling

* Fix Test

* Gazelle

* Manu's Review

* Fix Test
2024-11-27 10:16:50 +01:00
kevaundray
b469157e1f chore!: Refactor RecoverBlob to RecoverCellsAndProofs (#14160)
* change recoverBlobs to recoverCellsAndProofs

* modify code to take in the cells and proofs for a particular blob instead of the blob itself

* add CellsAndProofs structure

* modify recoverCellsAndProofs to return `cellsAndProofs` structure

* modify `DataColumnSidecarsForReconstruct` to accept the `cellsAndKZGProofs` structure

* bazel run //:gazelle -- fix

* use kzg abstraction for kzg method

* move CellsAndProofs to kzg.go
2024-11-27 10:16:50 +01:00
kevaundray
2697794e58 chore: Encapsulate all kzg functionality for PeerDAS into the kzg package (#14136)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* use BytesPerBlob constant

* chore: fix some deepsource issues

* one declaration for commans and blobs
2024-11-27 10:16:50 +01:00
Manu NALEPA
48cf24edb4 PeerDAS: Implement IncrementalDAS (#14109)
* `ConvertPeerIDToNodeID`: Add tests.

* Remove `extractNodeID` and uses `ConvertPeerIDToNodeID` instead.

* Implement IncrementalDAS.

* `DataColumnSamplingLoop` ==> `DataColumnSamplingRoutine`.

* HypergeomCDF: Add test.

* `GetValidCustodyPeers`: Optimize and add tests.

* Remove blank identifiers.

* Implement `CustodyCountFromRecord`.

* Implement `TestP2P.CustodyCountFromRemotePeer`.

* `NewTestP2P`: Add `swarmt.Option` parameters.

* `incrementalDAS`: Rework and add tests.

* Remove useless warning.
2024-11-27 10:16:50 +01:00
Francis Li
78f90db90b PeerDAS: add data column batch config (#14122) 2024-11-27 10:15:27 +01:00
Francis Li
d0a3b9bc1d [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-11-27 10:15:27 +01:00
Manu NALEPA
bfdb6dab86 Fix columns sampling (#14118) 2024-11-27 10:15:27 +01:00
Francis Li
7dd2fd52af [PeerDAS] implement DataColumnSidecarsByRootReq and fix related bugs (#14103)
* [PeerDAS] add data column related protos and fix data column by root bug

* Add more tests
2024-11-27 10:15:27 +01:00
Francis Li
b6bad9331b [PeerDAS] fixes and tests for gossiping out data columns (#14102)
* [PeerDAS] Minor fixes and tests for gossiping out data columns

* Fix metrics
2024-11-27 10:15:27 +01:00
Francis Li
6e2122085d [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-11-27 10:15:27 +01:00
Manu NALEPA
7a847292aa PeerDAS: Stop generating new P2P private key at start. (#14099)
* `privKey`: Improve logs.

* peerDAS: Move functions in file. Add documentation.

* PeerDAS: Remove unused `ComputeExtendedMatrix` and `RecoverMatrix` functions.

* PeerDAS: Stop generating new P2P private key at start.

* Fix sammy' comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
81f4db0afa PeerDAS: Gossip the reconstructed columns (#14079)
* PeerDAS: Broadcast not seen via gossip but reconstructed data columns.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
a7dc2e6c8b PeerDAS: Only saved custodied columns even after reconstruction. (#14083) 2024-11-27 10:15:27 +01:00
Manu NALEPA
0a010b5088 recoverBlobs: Cover the 0 < blobsCount < fieldparams.MaxBlobsPerBlock case. (#14066)
* `recoverBlobs`: Cover the `0 < blobsCount < fieldparams.MaxBlobsPerBlock` case.

* Fix Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
1e335e2cf2 PeerDAS: Withhold data on purpose. (#14076)
* Introduce hidden flag `data-columns-withhold-count`.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
42f4c0f14e PeerDAS: Implement / use data column feed from database. (#14062)
* Remove some `_` identifiers.

* Blob storage: Implement a notifier system for data columns.

* `dataColumnSidecarByRootRPCHandler`: Remove ugly `time.Sleep(100 * time.Millisecond)`.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
d3c12abe25 PeerDAS: Implement reconstruction. (#14036)
* Wrap errors, add logs.

* `missingColumnRequest`: Fix blobs <-> data columns mix.

* `ColumnIndices`: Return `map[uint64]bool` instead of `[fieldparams.NumberOfColumns]bool`.

* `DataColumnSidecars`: `interfaces.SignedBeaconBlock` ==> `interfaces.ReadOnlySignedBeaconBlock`.

We don't need any of the non read-only methods.

* Fix comments.

* `handleUnblidedBlock` ==> `handleUnblindedBlock`.

* `SaveDataColumn`: Move log from debug to trace.

If we attempted to save an already existing data column sidecar,
a debug log was printed.

This case could be quite common now with the data column reconstruction enabled.

* `sampling_data_columns.go` --> `data_columns_sampling.go`.

* Reconstruct data columns.
2024-11-27 10:15:27 +01:00
Nishant Das
b0ba05b4f4 Fix Custody Columns (#14021) 2024-11-27 10:15:27 +01:00
Nishant Das
e206506489 Disable Evaluators For E2E (#14019)
* Hack E2E

* Fix it For Real

* Gofmt

* Remove
2024-11-27 10:15:27 +01:00
Nishant Das
013cb28663 Request Data Columns When Fetching Pending Blocks (#14007)
* Support Data Columns For By Root Requests

* Revert Config Changes

* Fix Panic

* Fix Process Block

* Fix Flags

* Lint

* Support Checkpoint Sync

* Manu's Review

* Add Support For Columns in Remaining Methods

* Unmarshal Uncorrectly
2024-11-27 10:15:27 +01:00
Manu NALEPA
496914cb39 Fix CustodyColumns to comply with alpha-2 spectests. (#14008)
* Adding error wrapping

* Fix `CustodyColumnSubnets` tests.
2024-11-27 10:15:27 +01:00
Nishant Das
c032e78888 Set Custody Count Correctly (#14004)
* Set Custody Count Correctly

* Fix Discovery Count
2024-11-27 10:15:26 +01:00
Manu NALEPA
5e4deff6fd Sample from peers some data columns. (#13980)
* PeerDAS: Implement sampling.

* `TestNewRateLimiter`: Fix with the new number of expected registered topics.
2024-11-27 10:15:26 +01:00
Nishant Das
6daa91c465 Implement Data Columns By Range Request And Response Methods (#13972)
* Add Data Structure for New Request Type

* Add Data Column By Range Handler

* Add Data Column Request Methods

* Add new validation for columns by range requests

* Fix Build

* Allow Prysm Node To Fetch Data Columns

* Allow Prysm Node To Fetch Data Columns And Sync

* Bug Fixes For Interop

* GoFmt

* Use different var

* Manu's Review
2024-11-27 10:15:26 +01:00
Nishant Das
32ce6423eb Enable E2E For PeerDAS (#13945)
* Enable E2E And Add Fixes

* Register Same Topic For Data Columns

* Initialize Capacity Of Slice

* Fix Initialization of Data Column Receiver

* Remove Mix In From Merkle Proof

* E2E: Subscribe to all subnets.

* Remove Index Check

* Remaining Bug Fixes to Get It Working

* Change Evaluator to Allow Test to Finish

* Fix Build

* Add Data Column Verification

* Fix LoopVar Bug

* Do Not Allocate Memory

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Gofmt

* Fix It Again

* Fix Test Setup

* Fix Build

* Fix Trusted Setup panic

* Fix Trusted Setup panic

* Use New Test

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Justin Traglia
b0ea450df5 [PeerDAS] Upgrade c-kzg-4844 package (#13967)
* Upgrade c-kzg-4844 package

* Upgrade bazel deps
2024-11-27 10:15:26 +01:00
Manu NALEPA
8bd10df423 SendDataColumnSidecarByRoot: Return RODataColumn instead of ROBlob. (#13957)
* `SendDataColumnSidecarByRoot`: Return `RODataColumn` instead of `ROBlob`.

* Make deepsource happier.
2024-11-27 10:15:26 +01:00
Manu NALEPA
dcbb543be2 Spectests (#13940)
* Update `consensus_spec_version` to `v1.5.0-alpha.1`.

* `CustodyColumns`: Fix and implement spec tests.

* Make deepsource happy.

* `^uint64(0)` => `math.MaxUint64`.

* Fix `TestLoadConfigFile` test.
2024-11-27 10:15:26 +01:00
Nishant Das
be0580e1a9 Add DA Check For Data Columns (#13938)
* Add new DA check

* Exit early in the event no commitments exist.

* Gazelle

* Fix Mock Broadcaster

* Fix Test Setup

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Fix Build

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Manu NALEPA
1355178115 Implement peer DAS proposer RPC (#13922)
* Remove capital letter from error messages.

* `[4]byte` => `[fieldparams.VersionLength]byte`.

* Prometheus: Remove extra `committee`.

They are probably due to a bad copy/paste.

Note: The name of the probe itself is remaining,
to ensure backward compatibility.

* Implement Proposer RPC for data columns.

* Fix TestProposer_ProposeBlock_OK test.

* Remove default peerDAS activation.

* `validateDataColumn`: Workaround to return a `VerifiedRODataColumn`
2024-11-27 10:15:26 +01:00
Nishant Das
b78c3485b9 Update .bazelrc (#13931) 2024-11-27 10:15:26 +01:00
Manu NALEPA
f503efc6ed Implement custody_subnet_count ENR field. (#13915)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
2024-11-27 10:15:26 +01:00
Manu NALEPA
1bfbd3980e Peer das core (#13877)
* Bump `c-kzg-4844` lib to the `das` branch.

* Implement `MerkleProofKZGCommitments`.

* Implement `das-core.md`.

* Use `peerdas.CustodyColumnSubnets` and `peerdas.CustodyColumns`.

* `CustodyColumnSubnets`: Include `i` in the for loop.

* Remove `computeSubscribedColumnSubnet`.

* Remove `peerdas.CustodyColumns` out of the for loop.
2024-11-27 10:15:26 +01:00
Nishant Das
3e722ea1bc Add Request And Response RPC Methods For Data Columns (#13909)
* Add RPC Handler

* Add Column Requests

* Update beacon-chain/db/filesystem/blob.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/p2p/rpc_topic_mappings.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Manu's Review

* Interface Fixes

* mock manager

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Nishant Das
d844026433 Add Data Column Gossip Handlers (#13894)
* Add Data Column Subscriber

* Add Data Column Validator

* Wire all Handlers In

* Fix Build

* Fix Test

* Fix IP in Test

* Fix IP in Test
2024-11-27 10:15:26 +01:00
Nishant Das
9ffc19d5ef Add Support For Discovery Of Column Subnets (#13883)
* Add Support For Discovery Of Column Subnets

* Lint for SubnetsPerNode

* Manu's Review

* Change to a better name
2024-11-27 10:15:26 +01:00
Nishant Das
3e23f6e879 add it (#13865) 2024-11-27 10:11:55 +01:00
Manu NALEPA
c688c84393 Add in column sidecars protos (#13862) 2024-11-27 10:11:55 +01:00
418 changed files with 21284 additions and 25208 deletions

@@ -2993,7 +2993,7 @@ There are two known issues with this release:
### Added
- Web3Signer support. See the [documentation](https://prysm.offchainlabs.com/docs/manage-wallet/web3signer/) for more
- Web3Signer support. See the [documentation](https://docs.prylabs.network/docs/next/wallet/web3signer) for more
details.
- Bellatrix support. See [kiln testnet instructions](https://hackmd.io/OqIoTiQvS9KOIataIFksBQ?view)
- Weak subjectivity sync / checkpoint sync. This is an experimental feature and may have unintended side effects for

@@ -2,7 +2,7 @@
Prysm is go project with many complicated dependencies, including some c++ based libraries. There
are two parts to Prysm's dependency management. Go modules and bazel managed dependencies. Be sure
to read [Why Bazel?](https://prysm.offchainlabs.com/docs/install-prysm/install-with-bazel/#why-bazel) to fully
to read [Why Bazel?](https://github.com/OffchainLabs/documentation/issues/138) to fully
understand the reasoning behind an additional layer of build tooling via Bazel rather than a pure
"go build" project.

@@ -158,15 +158,15 @@ oci_register_toolchains(
http_archive(
name = "io_bazel_rules_go",
integrity = "sha256-JD8o94crTb2DFiJJR8nMAGdBAW95zIENB4cbI+JnrI4=",
patch_args = ["-p1"],
patches = [
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
sha256 = "a729c8ed2447c90fe140077689079ca0acfb7580ec41637f312d650ce9d93d96",
strip_prefix = "rules_go-cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9",
urls = [
"https://mirror.bazel.build/github.com/bazel-contrib/rules_go/releases/download/v0.57.0/rules_go-v0.57.0.zip",
"https://github.com/bazel-contrib/rules_go/releases/download/v0.57.0/rules_go-v0.57.0.zip",
"https://github.com/bazel-contrib/rules_go/archive/cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9.tar.gz",
],
)
@@ -208,7 +208,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.25.1",
go_version = "1.24.6",
nogo = "@//:nogo",
)
@@ -253,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.6"
consensus_spec_version = "v1.6.0-alpha.5"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-7wkWuahuCO37uVYnxq8Badvi+jY907pBj68ixL8XDOI=",
"minimal": "sha256-Qy/f27N0LffS/ej7VhIubwDejD6LMK0VdenKkqtZVt4=",
"mainnet": "sha256-3H7mu5yE+FGz2Wr/nc8Nd9aEu93YoEpsYtn0zBSoeDE=",
"general": "sha256-BXuEb1XbeSft0qzVFnoB8KC0YR1qM3ybT5lKUDbUWn8=",
"minimal": "sha256-EjwSHgBbWSoy5hm9V+A/bVMabyojaKsBNPrRtuPVq4k=",
"mainnet": "sha256-OGWMzarzaV1B9mVpy48/DCUbhjfX+b64pAxWwPLWhAs=",
},
version = consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-uvz3XfMTGfy3/BtQQoEp5XQOgrWgcH/5Zo/gR0iiP+k=",
integrity = "sha256-FQWR5EZuVcQGR0ol9vpd7eunnfGexJ/7J3xycrFEJbU=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -300,6 +300,22 @@ filegroup(
url = "https://github.com/ethereum/bls12-381-tests/releases/download/%s/bls_tests_yaml.tar.gz" % bls_test_version,
)
http_archive(
name = "eth2_networks",
build_file_content = """
filegroup(
name = "configs",
srcs = glob([
"shared/**/config.yaml",
]),
visibility = ["//visibility:public"],
)
""",
sha256 = "77e7e3ed65e33b7bb19d30131f4c2bb39e4dfeb188ab9ae84651c3cc7600131d",
strip_prefix = "eth2-networks-934c948e69205dcf2deb87e4ae6cc140c335f94d",
url = "https://github.com/eth-clients/eth2-networks/archive/934c948e69205dcf2deb87e4ae6cc140c335f94d.tar.gz",
)
http_archive(
name = "holesky_testnet",
build_file_content = """
@@ -311,9 +327,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-htyxg8Ln2o8eCiifFN7/hcHGZg8Ir9CPzCEx+FUnnCs=",
strip_prefix = "holesky-8aec65f11f0c986d6b76b2eb902420635eb9b815",
url = "https://github.com/eth-clients/holesky/archive/8aec65f11f0c986d6b76b2eb902420635eb9b815.tar.gz",
integrity = "sha256-YVFFrCmjoGZ3fXMWpsCpSsYbANy1grnqYwOLKIg2SsA=",
strip_prefix = "holesky-32a72e21c6e53c262f27d50dd540cb654517d03a",
url = "https://github.com/eth-clients/holesky/archive/32a72e21c6e53c262f27d50dd540cb654517d03a.tar.gz", # 2025-03-17
)
http_archive(
@@ -343,9 +359,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-+UZgfvBcea0K0sbvAJZOz5ZNmxdWZYbohP38heUuc6w=",
strip_prefix = "sepolia-f9158732adb1a2a6440613ad2232eb50e7384c4f",
url = "https://github.com/eth-clients/sepolia/archive/f9158732adb1a2a6440613ad2232eb50e7384c4f.tar.gz",
integrity = "sha256-b5F7Wg9LLMqGRIpP2uqb/YsSFVn2ynzlV7g/Nb1EFLk=",
strip_prefix = "sepolia-562d9938f08675e9ba490a1dfba21fb05843f39f",
url = "https://github.com/eth-clients/sepolia/archive/562d9938f08675e9ba490a1dfba21fb05843f39f.tar.gz", # 2025-03-17
)
http_archive(
@@ -359,17 +375,17 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-G+4c9c/vci1OyPrQJnQCI+ZCv/E0cWN4hrHDY3i7ns0=",
strip_prefix = "hoodi-b6ee51b2045a5e7fe3efac52534f75b080b049c6",
url = "https://github.com/eth-clients/hoodi/archive/b6ee51b2045a5e7fe3efac52534f75b080b049c6.tar.gz",
integrity = "sha256-dPiEWUd8QvbYGwGtIm0QtCekitVLOLsW5rpQIGzz8PU=",
strip_prefix = "hoodi-828c2c940e1141092bd4bb979cef547ea926d272",
url = "https://github.com/eth-clients/hoodi/archive/828c2c940e1141092bd4bb979cef547ea926d272.tar.gz",
)
http_archive(
name = "com_google_protobuf",
sha256 = "7c3ebd7aaedd86fa5dc479a0fda803f602caaf78d8aff7ce83b89e1b8ae7442a",
strip_prefix = "protobuf-28.3",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",
strip_prefix = "protobuf-25.1",
urls = [
"https://github.com/protocolbuffers/protobuf/archive/v28.3.tar.gz",
"https://github.com/protocolbuffers/protobuf/archive/v25.1.tar.gz",
],
)

View File

@@ -30,11 +30,10 @@ import (
)
const (
getExecHeaderPath = "/eth/v1/builder/header/{{.Slot}}/{{.ParentHash}}/{{.Pubkey}}"
getStatus = "/eth/v1/builder/status"
postBlindedBeaconBlockPath = "/eth/v1/builder/blinded_blocks"
postBlindedBeaconBlockV2Path = "/eth/v2/builder/blinded_blocks"
postRegisterValidatorPath = "/eth/v1/builder/validators"
getExecHeaderPath = "/eth/v1/builder/header/{{.Slot}}/{{.ParentHash}}/{{.Pubkey}}"
getStatus = "/eth/v1/builder/status"
postBlindedBeaconBlockPath = "/eth/v1/builder/blinded_blocks"
postRegisterValidatorPath = "/eth/v1/builder/validators"
)
var (
@@ -513,7 +512,7 @@ func (c *Client) SubmitBlindedBlockPostFulu(ctx context.Context, sb interfaces.R
}
// Post the blinded block - the response should only contain a status code (no payload)
_, _, err = c.do(ctx, http.MethodPost, postBlindedBeaconBlockV2Path, bytes.NewBuffer(body), http.StatusAccepted, postOpts)
_, _, err = c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), http.StatusAccepted, postOpts)
if err != nil {
return errors.Wrap(err, "error posting the blinded block to the builder api post-Fulu")
}

View File

@@ -1561,7 +1561,7 @@ func TestSubmitBlindedBlockPostFulu(t *testing.T) {
t.Run("success", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockV2Path, r.URL.Path)
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
@@ -1586,7 +1586,7 @@ func TestSubmitBlindedBlockPostFulu(t *testing.T) {
t.Run("success_ssz", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockV2Path, r.URL.Path)
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get(api.VersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
@@ -1612,7 +1612,7 @@ func TestSubmitBlindedBlockPostFulu(t *testing.T) {
t.Run("error_response", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockV2Path, r.URL.Path)
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
message := ErrorMessage{
Code: 400,

View File

@@ -1,9 +1,10 @@
package httprest
import (
"net/http"
"time"
"net/http"
"github.com/OffchainLabs/prysm/v6/api/server/middleware"
)

View File

@@ -290,9 +290,3 @@ type GetProposerLookaheadResponse struct {
Finalized bool `json:"finalized"`
Data []string `json:"data"` // validator indexes
}
type GetBlobsResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []string `json:"data"` //blobs
}

View File

@@ -56,19 +56,3 @@ type ForkChoiceNodeExtraData struct {
TimeStamp string `json:"timestamp"`
Target string `json:"target"`
}
type GetDebugDataColumnSidecarsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*DataColumnSidecar `json:"data"`
}
type DataColumnSidecar struct {
Index string `json:"index"`
Column []string `json:"column"`
KzgCommitments []string `json:"kzg_commitments"`
KzgProofs []string `json:"kzg_proofs"`
SignedBeaconBlockHeader *SignedBeaconBlockHeader `json:"signed_block_header"`
KzgCommitmentsInclusionProof []string `json:"kzg_commitments_inclusion_proof"`
}

View File

@@ -27,7 +27,7 @@ go_library(
"receive_block.go",
"receive_data_column.go",
"service.go",
"setup_forkchoice.go",
"setup_forchoice.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
],
@@ -50,6 +50,7 @@ go_library(
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
@@ -62,7 +63,6 @@ go_library(
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/light-client:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
@@ -148,6 +148,7 @@ go_test(
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -160,7 +161,6 @@ go_test(
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/light-client:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/attestations/kv:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",

View File

@@ -14,10 +14,7 @@ const BytesPerBlob = ckzg4844.BytesPerBlob
type Blob [BytesPerBlob]byte
// BytesPerCell is the number of bytes in a single cell.
const (
BytesPerCell = ckzg4844.BytesPerCell
BytesPerProof = ckzg4844.BytesPerProof
)
const BytesPerCell = ckzg4844.BytesPerCell
// Cell represents a chunk of an encoded Blob.
type Cell [BytesPerCell]byte
@@ -26,7 +23,7 @@ type Cell [BytesPerCell]byte
type Commitment [48]byte
// Proof represents a KZG proof that attests to the validity of a Blob or parts of it.
type Proof [BytesPerProof]byte
type Proof [48]byte
// Bytes48 is a 48-byte array.
type Bytes48 = ckzg4844.Bytes48

View File

@@ -366,7 +366,7 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
// Create invalid commitment (wrong size)
invalidCommitment := make([]byte, 32) // Should be 48 bytes
cellProofs := make([][]byte, numberOfColumns)
@@ -456,10 +456,10 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
blobs[i] = blob[:]
commitments[i] = commitment[:]
// Add cell proofs - make some invalid in the second blob
for j := uint64(0); j < numberOfColumns; j++ {
if i == 1 && j == 64 {

View File

@@ -6,11 +6,11 @@ import (
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/slashings"
@@ -35,7 +35,7 @@ func WithMaxGoroutines(x int) Option {
// WithLCStore for light client store access.
func WithLCStore() Option {
return func(s *Service) error {
s.lcStore = lightclient.NewLightClientStore(s.cfg.P2P, s.cfg.StateNotifier.StateFeed(), s.cfg.BeaconDB)
s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
return nil
}
}

View File

@@ -3,6 +3,7 @@ package blockchain
import (
"context"
"fmt"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -246,7 +247,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
JustifiedCheckpoint: jCheckpoints[i],
FinalizedCheckpoint: fCheckpoints[i]}
pendingNodes[i] = args
pendingNodes[len(blks)-i-1] = args
if err := s.saveInitSyncBlock(ctx, root, b); err != nil {
tracing.AnnotateError(span, err)
return err
@@ -283,10 +284,14 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
if err := s.cfg.StateGen.SaveState(ctx, lastBR, preState); err != nil {
return err
}
// Insert all nodes to forkchoice
// Insert all nodes but the last one to forkchoice
if err := s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes); err != nil {
return errors.Wrap(err, "could not insert batch to forkchoice")
}
// Insert the last block to forkchoice
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, preState, lastB); err != nil {
return errors.Wrap(err, "could not insert last block in batch to forkchoice")
}
// Set their optimistic status
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, lastBR); err != nil {
@@ -659,14 +664,14 @@ func missingDataColumnIndices(store *filesystem.DataColumnStorage, root [fieldpa
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(
ctx context.Context,
roBlock consensusblocks.ROBlock,
root [fieldparams.RootLength]byte,
signedBlock interfaces.ReadOnlySignedBeaconBlock,
) error {
block := roBlock.Block()
block := signedBlock.Block()
if block == nil {
return errors.New("invalid nil beacon block")
}
root := roBlock.Root()
blockVersion := block.Version()
if blockVersion >= version.Fulu {
return s.areDataColumnsAvailable(ctx, root, block)
@@ -686,6 +691,8 @@ func (s *Service) areDataColumnsAvailable(
root [fieldparams.RootLength]byte,
block interfaces.ReadOnlyBeaconBlock,
) error {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
// We are only required to check within MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
@@ -719,7 +726,6 @@ func (s *Service) areDataColumnsAvailable(
// Compute the sampling size.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#custody-sampling
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
samplingSize := max(samplesPerSlot, custodyGroupCount)
// Get the peer info for the node.
@@ -745,14 +751,14 @@ func (s *Service) areDataColumnsAvailable(
}
// Get a map of data column indices that are not currently available.
missing, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
missingMap, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
if err != nil {
return errors.Wrap(err, "missing data columns")
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missing) == 0 {
if len(missingMap) == 0 {
return nil
}
@@ -769,17 +775,33 @@ func (s *Service) areDataColumnsAvailable(
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
timer := time.AfterFunc(time.Until(nextSlot), func() {
missingCount := uint64(len(missing))
missingMapCount := uint64(len(missingMap))
if missingCount == 0 {
if missingMapCount == 0 {
return
}
var (
expected interface{} = "all"
missing interface{} = "all"
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
colMapCount := uint64(len(peerInfo.CustodyColumns))
if colMapCount < numberOfColumns {
expected = uint64MapToSortedSlice(peerInfo.CustodyColumns)
}
if missingMapCount < numberOfColumns {
missing = uint64MapToSortedSlice(missingMap)
}
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": helpers.SortedPrettySliceFromMap(peerInfo.CustodyColumns),
"columnsWaiting": helpers.SortedPrettySliceFromMap(missing),
"columnsExpected": expected,
"columnsWaiting": missing,
}).Warning("Data columns still missing at slot end")
})
defer timer.Stop()
@@ -795,7 +817,7 @@ func (s *Service) areDataColumnsAvailable(
for _, index := range idents.Indices {
// This is a data column we are expecting.
if _, ok := missing[index]; ok {
if _, ok := missingMap[index]; ok {
storedDataColumnsCount++
}
@@ -806,10 +828,10 @@ func (s *Service) areDataColumnsAvailable(
}
// Remove the index from the missing map.
delete(missing, index)
delete(missingMap, index)
// Return if there is no more missing data columns.
if len(missing) == 0 {
if len(missingMap) == 0 {
return nil
}
}
@@ -817,10 +839,10 @@ func (s *Service) areDataColumnsAvailable(
case <-ctx.Done():
var missingIndices interface{} = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missing))
missingIndicesCount := uint64(len(missingMap))
if missingIndicesCount < numberOfColumns {
missingIndices = helpers.SortedPrettySliceFromMap(missing)
missingIndices = uint64MapToSortedSlice(missingMap)
}
return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing: %v", block.Slot(), root, missingIndices)
@@ -904,6 +926,16 @@ func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootL
}
}
// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
output := make([]uint64, 0, len(input))
for idx := range input {
output = append(output, idx)
}
slices.Sort[[]uint64](output)
return output
}
// lateBlockTasks is called 4 seconds into the slot and performs tasks
// related to late blocks. It emits a MissedSlot state feed event.
// It calls FCU and sets the right attributes if we are proposing next slot
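The data-column availability check in this hunk arms a `time.AfterFunc` timer so that a single warning fires if columns are still missing when the next slot starts, and stops the timer (`defer timer.Stop()`) once the wait ends. A standalone sketch of that deadline-warning idiom, with hypothetical names rather than Prysm's:

```go
package main

import (
	"fmt"
	"time"
)

// waitWithDeadlineWarning waits for done, but if the deadline passes first it
// emits a single warning. The timer is always stopped on return so the
// callback cannot fire after the caller has moved on.
func waitWithDeadlineWarning(done <-chan struct{}, deadline time.Time, warn func()) {
	if deadline.After(time.Now()) {
		timer := time.AfterFunc(time.Until(deadline), warn)
		defer timer.Stop()
	}
	<-done
}

func main() {
	done := make(chan struct{})
	go func() {
		time.Sleep(300 * time.Millisecond)
		close(done)
	}()
	waitWithDeadlineWarning(done, time.Now().Add(100*time.Millisecond), func() {
		fmt.Println("still waiting at deadline")
	})
	fmt.Println("done")
}
```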

View File

@@ -3,12 +3,14 @@ package blockchain
import (
"context"
"fmt"
"slices"
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
@@ -131,26 +133,35 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
})
}
// processLightClientUpdates saves the light client data in lcStore, when feature flag is enabled.
func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
if err := s.processLightClientUpdate(cfg); err != nil {
log.WithError(err).Error("Failed to process light client update")
}
if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client optimistic update")
}
if err := s.processLightClientFinalityUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client finality update")
}
}
// processLightClientUpdate saves the light client update for this block
// if it's better than the already saved one, when feature flag is enabled.
func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
attestedRoot := cfg.roblock.Block().ParentRoot()
attestedBlock, err := s.getBlock(cfg.ctx, attestedRoot)
if err != nil {
log.WithError(err).Error("processLightClientUpdates: Could not get attested block")
return
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
}
if attestedBlock == nil || attestedBlock.IsNil() {
log.Error("processLightClientUpdates: Could not get attested block")
return
return errors.New("attested block is nil")
}
attestedState, err := s.cfg.StateGen.StateByRoot(cfg.ctx, attestedRoot)
if err != nil {
log.WithError(err).Error("processLightClientUpdates: Could not get attested state")
return
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
}
if attestedState == nil || attestedState.IsNil() {
log.Error("processLightClientUpdates: Could not get attested state")
return
return errors.New("attested state is nil")
}
finalizedRoot := attestedState.FinalizedCheckpoint().Root
@@ -158,17 +169,98 @@ func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
if err != nil {
if errors.Is(err, errBlockNotFoundInCacheOrDB) {
log.Debugf("Skipping saving light client update because finalized block is nil for root %#x", finalizedRoot)
return
return nil
}
log.WithError(err).Error("processLightClientUpdates: Could not get finalized block")
return
return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
}
err = s.lcStore.SaveLCData(cfg.ctx, cfg.postState, cfg.roblock, attestedState, attestedBlock, finalizedBlock, s.headRoot())
update, err := lightclient.NewLightClientUpdateFromBeaconState(cfg.ctx, cfg.postState, cfg.roblock, attestedState, attestedBlock, finalizedBlock)
if err != nil {
log.WithError(err).Error("processLightClientUpdates: Could not save light client data")
return errors.Wrapf(err, "could not create light client update")
}
log.Debug("Processed light client updates")
period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))
return s.lcStore.SaveLightClientUpdate(cfg.ctx, period, update)
}
func (s *Service) processLightClientFinalityUpdate(
ctx context.Context,
signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState,
) error {
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
}
finalizedCheckpoint := attestedState.FinalizedCheckpoint()
if finalizedCheckpoint == nil {
return nil
}
finalizedRoot := bytesutil.ToBytes32(finalizedCheckpoint.Root)
finalizedBlock, err := s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
if errors.Is(err, errBlockNotFoundInCacheOrDB) {
log.Debugf("Skipping processing light client finality update: Finalized block is nil for root %#x", finalizedRoot)
return nil
}
return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
}
newUpdate, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(ctx, postState, signed, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return errors.Wrap(err, "could not create light client finality update")
}
if !lightclient.IsBetterFinalityUpdate(newUpdate, s.lcStore.LastFinalityUpdate()) {
log.Debug("Skip saving light client finality update: current update is better")
return nil
}
s.lcStore.SetLastFinalityUpdate(newUpdate, true)
return nil
}
func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) error {
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
}
newUpdate, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(ctx, postState, signed, attestedState, attestedBlock)
if err != nil {
if strings.Contains(err.Error(), lightclient.ErrNotEnoughSyncCommitteeBits) {
log.WithError(err).Debug("Skipping processing light client optimistic update")
return nil
}
return errors.Wrap(err, "could not create light client optimistic update")
}
if !lightclient.IsBetterOptimisticUpdate(newUpdate, s.lcStore.LastOptimisticUpdate()) {
log.Debug("Skip saving light client optimistic update: current update is better")
return nil
}
s.lcStore.SetLastOptimisticUpdate(newUpdate, true)
return nil
}
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
@@ -377,8 +469,15 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed inte
if err != nil {
return err
}
root := signed.Block().ParentRoot()
// The first block can have a bogus root since the block is not inserted in forkchoice
roblock, err := consensus_blocks.NewROBlockWithRoot(signed, [32]byte{})
if err != nil {
return err
}
pendingNodes = append(pendingNodes, &forkchoicetypes.BlockAndCheckpoints{Block: roblock,
JustifiedCheckpoint: jCheckpoint, FinalizedCheckpoint: fCheckpoint})
// As long as parent node is not in fork choice store, and parent node is in DB.
root := roblock.Block().ParentRoot()
for !s.cfg.ForkChoiceStore.HasNode(root) && s.cfg.BeaconDB.HasBlock(ctx, root) {
b, err := s.getBlock(ctx, root)
if err != nil {
@@ -397,13 +496,12 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed inte
FinalizedCheckpoint: fCheckpoint}
pendingNodes = append(pendingNodes, args)
}
if len(pendingNodes) == 0 {
if len(pendingNodes) == 1 {
return nil
}
if root != s.ensureRootNotZeros(finalized.Root) && !s.cfg.ForkChoiceStore.HasNode(root) {
return ErrNotDescendantOfFinalized
}
slices.Reverse(pendingNodes)
return s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes)
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
@@ -23,7 +24,6 @@ import (
mockExecution "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations/kv"
mockp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -2795,11 +2795,6 @@ func TestProcessLightClientUpdate(t *testing.T) {
s, tr := minimalTestService(t, WithLCStore())
ctx := tr.ctx
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveState(ctx, headState, [32]byte{1, 2}))
require.NoError(t, s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, [32]byte{1, 2}))
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)
@@ -2822,8 +2817,6 @@ func TestProcessLightClientUpdate(t *testing.T) {
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
@@ -2838,9 +2831,10 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
t.Run("no old update", func(t *testing.T) {
s.processLightClientUpdates(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
// Check that the light client update is saved
u, err := s.lcStore.LightClientUpdate(ctx, period, l.Block)
u, err := s.lcStore.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
@@ -2854,12 +2848,12 @@ func TestProcessLightClientUpdate(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
err = s.lcStore.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.processLightClientUpdates(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.lcStore.LightClientUpdate(ctx, period, l.Block)
u, err := s.lcStore.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
@@ -2883,12 +2877,12 @@ func TestProcessLightClientUpdate(t *testing.T) {
SyncCommitteeSignature: make([]byte, 96),
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
err = s.lcStore.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.processLightClientUpdates(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.lcStore.LightClientUpdate(ctx, period, l.Block)
u, err := s.lcStore.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
@@ -2958,18 +2952,14 @@ func TestIsDataAvailable(t *testing.T) {
params := testIsAvailableParams{options: []Option{WithGenesisTime(time.Unix(0, 0))}}
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
t.Run("Fulu - no commitment in blocks", func(t *testing.T) {
ctx, _, service, root, signed := testIsAvailableSetup(t, testIsAvailableParams{})
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -2987,9 +2977,7 @@ func TestIsDataAvailable(t *testing.T) {
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -3001,9 +2989,7 @@ func TestIsDataAvailable(t *testing.T) {
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -3051,9 +3037,7 @@ func TestIsDataAvailable(t *testing.T) {
ctx, cancel := context.WithTimeout(ctx, time.Second*2)
defer cancel()
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -3115,9 +3099,7 @@ func TestIsDataAvailable(t *testing.T) {
ctx, cancel := context.WithTimeout(ctx, time.Second*2)
defer cancel()
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -3136,9 +3118,7 @@ func TestIsDataAvailable(t *testing.T) {
cancel()
}()
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
err := service.isDataAvailable(ctx, root, signed)
require.NotNil(t, err)
})
}
@@ -3210,11 +3190,6 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
s.cfg.P2P = &mockp2p.FakeP2P{}
ctx := tr.ctx
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveState(ctx, headState, [32]byte{1, 2}))
require.NoError(t, s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, [32]byte{1, 2}))
testCases := []struct {
name string
oldOptions []util.LightClientOption
@@ -3230,7 +3205,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
{
name: "Same age",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithSupermajority(0)}, // supermajority does not matter here and is only added to result in two different updates
newOptions: []util.LightClientOption{util.WithSupermajority()}, // supermajority does not matter here and is only added to result in two different updates
expectReplace: false,
},
{
@@ -3274,14 +3249,14 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = lightClient.NewLightClientStore(s.cfg.P2P, s.cfg.StateNotifier.StateFeed(), s.cfg.BeaconDB)
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
var oldActualUpdate interfaces.LightClientOptimisticUpdate
var err error
if tc.oldOptions != nil {
// config for old update
lOld, cfgOld := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.oldOptions...)
s.processLightClientUpdates(cfgOld)
require.NoError(t, s.processLightClientOptimisticUpdate(cfgOld.ctx, cfgOld.roblock, cfgOld.postState))
oldActualUpdate, err = lightClient.NewLightClientOptimisticUpdateFromBeaconState(lOld.Ctx, lOld.State, lOld.Block, lOld.AttestedState, lOld.AttestedBlock)
require.NoError(t, err)
@@ -3295,7 +3270,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
// config for new update
lNew, cfgNew := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.newOptions...)
s.processLightClientUpdates(cfgNew)
require.NoError(t, s.processLightClientOptimisticUpdate(cfgNew.ctx, cfgNew.roblock, cfgNew.postState))
newActualUpdate, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lNew.Ctx, lNew.State, lNew.Block, lNew.AttestedState, lNew.AttestedBlock)
require.NoError(t, err)
@@ -3336,7 +3311,6 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
s, tr := minimalTestService(t)
s.cfg.P2P = &mockp2p.FakeP2P{}
ctx := tr.ctx
s.head = &head{}
testCases := []struct {
name string
@@ -3415,18 +3389,15 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = lightClient.NewLightClientStore(s.cfg.P2P, s.cfg.StateNotifier.StateFeed(), s.cfg.BeaconDB)
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
var actualOldUpdate, actualNewUpdate interfaces.LightClientFinalityUpdate
var err error
if tc.oldOptions != nil {
// config for old update
lOld, cfgOld := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.oldOptions...)
blkRoot, err := lOld.Block.Block().HashTreeRoot()
require.NoError(t, err)
s.head.block = lOld.Block
s.head.root = blkRoot
s.processLightClientUpdates(cfgOld)
require.NoError(t, s.processLightClientFinalityUpdate(cfgOld.ctx, cfgOld.roblock, cfgOld.postState))
// check that the old update is saved
actualOldUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(ctx, cfgOld.postState, cfgOld.roblock, lOld.AttestedState, lOld.AttestedBlock, lOld.FinalizedBlock)
@@ -3437,11 +3408,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
// config for new update
lNew, cfgNew := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.newOptions...)
blkRoot, err := lNew.Block.Block().HashTreeRoot()
require.NoError(t, err)
s.head.block = lNew.Block
s.head.root = blkRoot
s.processLightClientUpdates(cfgNew)
require.NoError(t, s.processLightClientFinalityUpdate(cfgNew.ctx, cfgNew.roblock, cfgNew.postState))
// check that the actual old update and the actual new update are different
actualNewUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(ctx, cfgNew.postState, cfgNew.roblock, lNew.AttestedState, lNew.AttestedBlock, lNew.FinalizedBlock)

View File

@@ -16,6 +16,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/slasher/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -92,7 +93,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
blockCopy, err := block.Copy()
if err != nil {
return errors.Wrap(err, "block copy")
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
@@ -103,17 +104,17 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
currentCheckpoints := s.saveCurrentCheckpoints(preState)
roblock, err := blocks.NewROBlockWithRoot(blockCopy, blockRoot)
if err != nil {
return errors.Wrap(err, "new ro block with root")
return err
}
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, roblock)
if err != nil {
return errors.Wrap(err, "validator execution and consensus")
return err
}
daWaitedTime, err := s.handleDA(ctx, avs, roblock)
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return errors.Wrap(err, "handle da")
return err
}
// Defragment the state before continuing block processing.
@@ -134,10 +135,10 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err := s.postBlockProcess(args); err != nil {
err := errors.Wrap(err, "could not process block")
tracing.AnnotateError(span, err)
return errors.Wrap(err, "post block process")
return err
}
if err := s.updateCheckpoints(ctx, currentCheckpoints, preState, postState, blockRoot); err != nil {
return errors.Wrap(err, "update checkpoints")
return err
}
// If slasher is configured, forward the attestations in the block via an event feed for processing.
if s.slasherEnabled {
@@ -151,12 +152,12 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
// Have we been finalizing? Should we start saving hot states to db?
if err := s.checkSaveHotStateDB(ctx); err != nil {
return errors.Wrap(err, "check save hot state db")
return err
}
// We apply the same heuristic to some of our more important caches.
if err := s.handleCaches(); err != nil {
return errors.Wrap(err, "handle caches")
return err
}
s.reportPostBlockProcessing(blockCopy, blockRoot, receivedTime, daWaitedTime)
return nil
@@ -239,19 +240,37 @@ func (s *Service) validateExecutionAndConsensus(
return postState, isValidPayload, nil
}
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityStore, block blocks.ROBlock) (time.Duration, error) {
var err error
start := time.Now()
if avs != nil {
err = avs.IsDataAvailable(ctx, s.CurrentSlot(), block)
} else {
err = s.isDataAvailable(ctx, block)
func (s *Service) handleDA(
ctx context.Context,
block interfaces.SignedBeaconBlock,
blockRoot [fieldparams.RootLength]byte,
avs das.AvailabilityStore,
) (elapsed time.Duration, err error) {
defer func(start time.Time) {
elapsed = time.Since(start)
if err == nil {
dataAvailWaitedTime.Observe(float64(elapsed.Milliseconds()))
}
}(time.Now())
if avs == nil {
if err = s.isDataAvailable(ctx, blockRoot, block); err != nil {
return
}
return
}
elapsed := time.Since(start)
var rob blocks.ROBlock
rob, err = blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
dataAvailWaitedTime.Observe(float64(elapsed.Milliseconds()))
return
}
return elapsed, err
err = avs.IsDataAvailable(ctx, s.CurrentSlot(), rob)
return
}
func (s *Service) reportPostBlockProcessing(
@@ -301,28 +320,13 @@ func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedSta
if features.Get().EnableLightClient {
// Save a light client bootstrap for the finalized checkpoint
go func() {
st, err := s.cfg.StateGen.StateByRoot(ctx, finalized.Root)
if err != nil {
log.WithError(err).Error("Could not retrieve state for finalized root to save light client bootstrap")
return
}
err = s.lcStore.SaveLightClientBootstrap(s.ctx, finalized.Root, st)
err := s.lcStore.SaveLightClientBootstrap(s.ctx, finalized.Root)
if err != nil {
log.WithError(err).Error("Could not save light client bootstrap by block root")
} else {
log.Debugf("Saved light client bootstrap for finalized root %#x", finalized.Root)
}
}()
// Clean up the light client store caches
go func() {
err := s.lcStore.MigrateToCold(s.ctx, finalized.Root)
if err != nil {
log.WithError(err).Error("Could not migrate light client store to cold storage")
} else {
log.Debugf("Migrated light client store to cold storage for finalized root %#x", finalized.Root)
}
}()
}
}
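One of the two `handleDA` variants shown above combines named return values with a deferred closure so the elapsed time is always captured and the histogram is observed only when no error occurred. A minimal sketch of that idiom follows; the printed line stands in for the metric observation, and the names are illustrative, not Prysm's.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// doWork shows the named-return + defer pattern: elapsed is always populated
// by the deferred closure, and the success-only side effect runs only when
// err is nil.
func doWork(fail bool) (elapsed time.Duration, err error) {
	defer func(start time.Time) {
		elapsed = time.Since(start)
		if err == nil {
			fmt.Printf("observed %v\n", elapsed) // stand-in for a metric
		}
	}(time.Now())

	time.Sleep(10 * time.Millisecond)
	if fail {
		return 0, errors.New("work failed")
	}
	return 0, nil
}

func main() {
	if d, err := doWork(false); err == nil {
		fmt.Println("elapsed:", d)
	}
}
```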

View File

@@ -9,9 +9,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/das"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/voluntaryexits"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -192,9 +192,7 @@ func TestHandleDA(t *testing.T) {
require.NoError(t, err)
s, _ := minimalTestService(t)
block, err := blocks.NewROBlockWithRoot(signedBeaconBlock, [32]byte{})
require.NoError(t, err)
elapsed, err := s.handleDA(t.Context(), nil, block)
elapsed, err := s.handleDA(t.Context(), signedBeaconBlock, [fieldparams.RootLength]byte{}, nil)
require.NoError(t, err)
require.Equal(t, true, elapsed > 0, "Elapsed time should be greater than 0")
}
@@ -596,7 +594,11 @@ func TestProcessLightClientBootstrap(t *testing.T) {
require.NoError(t, s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: cp.Epoch, Root: [32]byte(cp.Root)}))
s.executePostFinalizationTasks(s.ctx, l.AttestedState)
sss, err := s.cfg.BeaconDB.State(ctx, finalizedBlockRoot)
require.NoError(t, err)
require.NotNil(t, sss)
s.executePostFinalizationTasks(s.ctx, l.FinalizedState)
// wait for the goroutine to finish processing
time.Sleep(1 * time.Second)

View File

@@ -14,13 +14,13 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
f "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/slashings"

View File

@@ -4,7 +4,6 @@ import (
"bytes"
"context"
"fmt"
"slices"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -113,7 +112,6 @@ func (s *Service) buildForkchoiceChain(ctx context.Context, head interfaces.Read
return nil, errors.New("head block is not a descendant of the finalized checkpoint")
}
}
slices.Reverse(chain)
return chain, nil
}

View File

@@ -124,5 +124,5 @@ func Test_setupForkchoiceTree_Head(t *testing.T) {
require.NotEqual(t, fRoot, root)
require.Equal(t, root, service.startupHeadRoot())
require.NoError(t, service.setupForkchoiceTree(st))
require.Equal(t, 3, service.cfg.ForkChoiceStore.NodeCount())
require.Equal(t, 2, service.cfg.ForkChoiceStore.NodeCount())
}

View File

@@ -11,21 +11,21 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
mockExecution "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -89,7 +89,7 @@ func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumnSidecar(_ uint64, _ blocks.VerifiedRODataColumn) error {
func (mb *mockBroadcaster) BroadcastDataColumnSidecar(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
mb.broadcastCalled = true
return nil
}

View File

@@ -723,8 +723,7 @@ func (c *ChainService) ReceiveDataColumn(dc blocks.VerifiedRODataColumn) error {
}
// ReceiveDataColumns implements the same method in chain service
func (c *ChainService) ReceiveDataColumns(dcs []blocks.VerifiedRODataColumn) error {
c.DataColumns = append(c.DataColumns, dcs...)
func (*ChainService) ReceiveDataColumns(_ []blocks.VerifiedRODataColumn) error {
return nil
}

View File

@@ -67,30 +67,6 @@ func (s *SyncCommitteeCache) Clear() {
s.cache = cache.NewFIFO(keyFn)
}
// CurrentPeriodPositions returns current period positions of validator indices with respect with
// sync committee. If any input validator index has no assignment, an empty list will be returned
// for that validator. If the input root does not exist in cache, `ErrNonExistingSyncCommitteeKey` is returned.
// Manual checking of state for index position in state is recommended when `ErrNonExistingSyncCommitteeKey` is returned.
func (s *SyncCommitteeCache) CurrentPeriodPositions(root [32]byte, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
s.lock.RLock()
defer s.lock.RUnlock()
pos, err := s.positionsInCommittee(root, indices)
if err != nil {
return nil, err
}
result := make([][]primitives.CommitteeIndex, len(pos))
for i, p := range pos {
if p == nil {
result[i] = []primitives.CommitteeIndex{}
} else {
result[i] = p.currentPeriod
}
}
return result, nil
}
// CurrentPeriodIndexPosition returns current period index position of a validator index with respect with
// sync committee. If the input validator index has no assignment, an empty list will be returned.
// If the input root does not exist in cache, `ErrNonExistingSyncCommitteeKey` is returned.
@@ -128,7 +104,11 @@ func (s *SyncCommitteeCache) NextPeriodIndexPosition(root [32]byte, valIdx primi
return pos.nextPeriod, nil
}
func (s *SyncCommitteeCache) positionsInCommittee(root [32]byte, indices []primitives.ValidatorIndex) ([]*positionInCommittee, error) {
// Helper function for `CurrentPeriodIndexPosition` and `NextPeriodIndexPosition` to return a mapping
// of validator index to its index(s) position in the sync committee.
func (s *SyncCommitteeCache) idxPositionInCommittee(
root [32]byte, valIdx primitives.ValidatorIndex,
) (*positionInCommittee, error) {
obj, exists, err := s.cache.GetByKey(key(root))
if err != nil {
return nil, err
@@ -141,33 +121,13 @@ func (s *SyncCommitteeCache) positionsInCommittee(root [32]byte, indices []primi
if !ok {
return nil, errNotSyncCommitteeIndexPosition
}
result := make([]*positionInCommittee, len(indices))
for i, idx := range indices {
idxInCommittee, ok := item.vIndexToPositionMap[idx]
if ok {
SyncCommitteeCacheHit.Inc()
result[i] = idxInCommittee
} else {
SyncCommitteeCacheMiss.Inc()
result[i] = nil
}
}
return result, nil
}
// Helper function for `CurrentPeriodIndexPosition` and `NextPeriodIndexPosition` to return a mapping
// of validator index to its index(s) position in the sync committee.
func (s *SyncCommitteeCache) idxPositionInCommittee(
root [32]byte, valIdx primitives.ValidatorIndex,
) (*positionInCommittee, error) {
positions, err := s.positionsInCommittee(root, []primitives.ValidatorIndex{valIdx})
if err != nil {
return nil, err
}
if len(positions) == 0 {
idxInCommittee, ok := item.vIndexToPositionMap[valIdx]
if !ok {
SyncCommitteeCacheMiss.Inc()
return nil, nil
}
return positions[0], nil
SyncCommitteeCacheHit.Inc()
return idxInCommittee, nil
}
// UpdatePositionsInCommittee updates caching of validators position in sync committee in respect to

View File

@@ -16,11 +16,6 @@ func NewSyncCommittee() *FakeSyncCommitteeCache {
return &FakeSyncCommitteeCache{}
}
// CurrentPeriodPositions -- fake
func (s *FakeSyncCommitteeCache) CurrentPeriodPositions(root [32]byte, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
return nil, nil
}
// CurrentEpochIndexPosition -- fake.
func (s *FakeSyncCommitteeCache) CurrentPeriodIndexPosition(root [32]byte, valIdx primitives.ValidatorIndex) ([]primitives.CommitteeIndex, error) {
return nil, nil

View File

@@ -5,7 +5,6 @@ import (
"sort"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
@@ -40,11 +39,11 @@ func ProcessAttesterSlashings(
ctx context.Context,
beaconState state.BeaconState,
slashings []ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
slashFunc slashValidatorFunc,
) (state.BeaconState, error) {
var err error
for _, slashing := range slashings {
beaconState, err = ProcessAttesterSlashing(ctx, beaconState, slashing, exitInfo)
beaconState, err = ProcessAttesterSlashing(ctx, beaconState, slashing, slashFunc)
if err != nil {
return nil, err
}
@@ -57,7 +56,7 @@ func ProcessAttesterSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
slashFunc slashValidatorFunc,
) (state.BeaconState, error) {
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify attester slashing")
@@ -76,9 +75,10 @@ func ProcessAttesterSlashing(
return nil, err
}
if helpers.IsSlashableValidator(val.ActivationEpoch(), val.WithdrawableEpoch(), val.Slashed(), currentEpoch) {
beaconState, err = validators.SlashValidator(ctx, beaconState, primitives.ValidatorIndex(validatorIndex), exitInfo)
beaconState, err = slashFunc(ctx, beaconState, primitives.ValidatorIndex(validatorIndex))
if err != nil {
return nil, errors.Wrapf(err, "could not slash validator index %d", validatorIndex)
return nil, errors.Wrapf(err, "could not slash validator index %d",
validatorIndex)
}
slashedAny = true
}
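This hunk shows the two signatures side by side: one passes an injected `slashValidatorFunc`, the other a `*validators.ExitInfo` value. The function-valued variant is a common Go way to make the slashing step swappable, as the `v.SlashValidator` call sites in the tests below suggest. A minimal sketch of that dependency-as-function-value pattern, with hypothetical types standing in for the beacon state:

```go
package main

import (
	"context"
	"fmt"
)

// state is a stand-in for the beacon state; everything here is illustrative.
type state struct{ slashed []uint64 }

// slashValidatorFunc decouples slashing from the caller, so tests can inject
// a fake implementation instead of the real one.
type slashValidatorFunc func(ctx context.Context, st *state, idx uint64) (*state, error)

func processSlashings(ctx context.Context, st *state, indices []uint64, slash slashValidatorFunc) (*state, error) {
	var err error
	for _, idx := range indices {
		if st, err = slash(ctx, st, idx); err != nil {
			return nil, fmt.Errorf("could not slash validator index %d: %w", idx, err)
		}
	}
	return st, nil
}

func main() {
	// A trivial "real" implementation; a test could pass a fake instead.
	realSlash := func(_ context.Context, st *state, idx uint64) (*state, error) {
		st.slashed = append(st.slashed, idx)
		return st, nil
	}
	st, err := processSlashings(context.Background(), &state{}, []uint64{0, 1}, realSlash)
	fmt.Println(st.slashed, err)
}
```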

View File

@@ -4,7 +4,6 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -45,10 +44,11 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
Target: &ethpb.Checkpoint{Epoch: 1}},
})}}
var registry []*ethpb.Validator
currentSlot := primitives.Slot(0)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{{}},
Validators: registry,
Slot: currentSlot,
})
require.NoError(t, err)
@@ -62,15 +62,16 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
assert.ErrorContains(t, "attestations are not slashable", err)
}
func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T) {
var registry []*ethpb.Validator
currentSlot := primitives.Slot(0)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{{}},
Validators: registry,
Slot: currentSlot,
})
require.NoError(t, err)
@@ -100,7 +101,7 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
assert.ErrorContains(t, "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE", err)
}
@@ -242,7 +243,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(tc.st))
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
@@ -264,83 +265,3 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
})
}
}
func TestProcessAttesterSlashing_ExitEpochGetsUpdated(t *testing.T) {
st, keys := util.DeterministicGenesisStateElectra(t, 8)
bal, err := helpers.TotalActiveBalance(st)
require.NoError(t, err)
perEpochChurn := helpers.ActivationExitChurnLimit(primitives.Gwei(bal))
vals := st.Validators()
// We set the total effective balance of slashed validators
// higher than the churn limit for a single epoch.
vals[0].EffectiveBalance = uint64(perEpochChurn / 3)
vals[1].EffectiveBalance = uint64(perEpochChurn / 3)
vals[2].EffectiveBalance = uint64(perEpochChurn / 3)
vals[3].EffectiveBalance = uint64(perEpochChurn / 3)
require.NoError(t, st.SetValidators(vals))
sl1att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
sl1att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{0, 1},
})
slashing1 := &ethpb.AttesterSlashingElectra{
Attestation_1: sl1att1,
Attestation_2: sl1att2,
}
sl2att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{2, 3},
})
sl2att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{2, 3},
})
slashing2 := &ethpb.AttesterSlashingElectra{
Attestation_1: sl2att1,
Attestation_2: sl2att2,
}
domain, err := signing.Domain(st.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, st.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(sl1att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := keys[0].Sign(signingRoot[:])
sig1 := keys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl1att1.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl1att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[0].Sign(signingRoot[:])
sig1 = keys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl1att2.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl2att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[2].Sign(signingRoot[:])
sig1 = keys[3].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl2att1.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl2att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[2].Sign(signingRoot[:])
sig1 = keys[3].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl2att2.Signature = aggregateSig.Marshal()
exitInfo := v.ExitInformation(st)
assert.Equal(t, primitives.Epoch(0), exitInfo.HighestExitEpoch)
_, err = blocks.ProcessAttesterSlashings(t.Context(), st, []ethpb.AttSlashing{slashing1, slashing2}, exitInfo)
require.NoError(t, err)
assert.Equal(t, primitives.Epoch(6), exitInfo.HighestExitEpoch)
}

View File

@@ -191,7 +191,7 @@ func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
fuzzer.Fuzz(p)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessProposerSlashings(ctx, s, []*ethpb.ProposerSlashing{p}, v.ExitInformation(s))
r, err := ProcessProposerSlashings(ctx, s, []*ethpb.ProposerSlashing{p}, v.SlashValidator)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, p)
}
@@ -224,7 +224,7 @@ func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
fuzzer.Fuzz(a)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessAttesterSlashings(ctx, s, []ethpb.AttSlashing{a}, v.ExitInformation(s))
r, err := ProcessAttesterSlashings(ctx, s, []ethpb.AttSlashing{a}, v.SlashValidator)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, a)
}
@@ -334,7 +334,7 @@ func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessVoluntaryExits(ctx, s, []*ethpb.SignedVoluntaryExit{e}, v.ExitInformation(s))
r, err := ProcessVoluntaryExits(ctx, s, []*ethpb.SignedVoluntaryExit{e})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and exit: %v", r, err, state, e)
}
@@ -351,7 +351,7 @@ func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e}, v.ExitInformation(s))
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, e)
}

View File

@@ -94,7 +94,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
if !newRegistry[expectedSlashedVal].Slashed {

View File

@@ -9,6 +9,7 @@ import (
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -49,12 +50,13 @@ func ProcessVoluntaryExits(
ctx context.Context,
beaconState state.BeaconState,
exits []*ethpb.SignedVoluntaryExit,
exitInfo *v.ExitInfo,
) (state.BeaconState, error) {
// Avoid calculating the epoch churn if no exits exist.
if len(exits) == 0 {
return beaconState, nil
}
maxExitEpoch, churn := v.MaxExitEpochAndChurn(beaconState)
var exitEpoch primitives.Epoch
for idx, exit := range exits {
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil voluntary exit in block body")
@@ -66,8 +68,15 @@ func ProcessVoluntaryExits(
if err := VerifyExitAndSignature(val, beaconState, exit); err != nil {
return nil, errors.Wrapf(err, "could not verify exit %d", idx)
}
beaconState, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, exitInfo)
if err != nil && !errors.Is(err, v.ErrValidatorAlreadyExited) {
beaconState, exitEpoch, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, maxExitEpoch, churn)
if err == nil {
if exitEpoch > maxExitEpoch {
maxExitEpoch = exitEpoch
churn = 1
} else if exitEpoch == maxExitEpoch {
churn++
}
} else if !errors.Is(err, v.ErrValidatorAlreadyExited) {
return nil, err
}
}
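
The hunk above drops the shared *v.ExitInfo accumulator and instead threads an explicit (maxExitEpoch, churn) pair, seeded from v.MaxExitEpochAndChurn and updated after every successfully initiated exit. A minimal sketch of that bookkeeping, using a hypothetical Epoch type and exitTracker struct rather than Prysm's own types:

package main

import "fmt"

// Epoch is a hypothetical stand-in for the consensus epoch type.
type Epoch uint64

// exitTracker reproduces the loop's bookkeeping: each initiated exit either
// starts a new, later max exit epoch with churn = 1, or bumps the churn
// counter for the current max exit epoch.
type exitTracker struct {
    maxExitEpoch Epoch
    churn        uint64
}

func (t *exitTracker) record(exitEpoch Epoch) {
    switch {
    case exitEpoch > t.maxExitEpoch:
        t.maxExitEpoch = exitEpoch
        t.churn = 1
    case exitEpoch == t.maxExitEpoch:
        t.churn++
    }
}

func main() {
    t := &exitTracker{maxExitEpoch: 5, churn: 3}
    for _, e := range []Epoch{5, 6, 6} {
        t.record(e)
    }
    fmt.Println(t.maxExitEpoch, t.churn) // prints: 6 2
}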

View File

@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -47,7 +46,7 @@ func TestProcessVoluntaryExits_NotActiveLongEnoughToExit(t *testing.T) {
}
want := "validator has not been active long enough to exit"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
assert.ErrorContains(t, want, err)
}
@@ -77,7 +76,7 @@ func TestProcessVoluntaryExits_ExitAlreadySubmitted(t *testing.T) {
}
want := "validator with index 0 has already submitted an exit, which will take place at epoch: 10"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
assert.ErrorContains(t, want, err)
}
@@ -125,7 +124,7 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
},
}
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
require.NoError(t, err, "Could not process exits")
newRegistry := newState.Validators()
if newRegistry[0].ExitEpoch != helpers.ActivationExitEpoch(primitives.Epoch(state.Slot()/params.BeaconConfig().SlotsPerEpoch)) {

View File

@@ -184,54 +184,45 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
})
case *ethpb.BeaconStateElectra:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockElectra{
Block: electraGenesisBlock(root),
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateFulu:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockFulu{
Block: electraGenesisBlock(root),
Block: &ethpb.BeaconBlockElectra{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyElectra{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
BlobKzgCommitments: make([][]byte, 0),
ExecutionRequests: &enginev1.ExecutionRequests{
Withdrawals: make([]*enginev1.WithdrawalRequest, 0),
Deposits: make([]*enginev1.DepositRequest, 0),
Consolidations: make([]*enginev1.ConsolidationRequest, 0),
},
},
},
Signature: params.BeaconConfig().EmptySignature[:],
})
default:
return nil, ErrUnrecognizedState
}
}
func electraGenesisBlock(root [fieldparams.RootLength]byte) *ethpb.BeaconBlockElectra {
return &ethpb.BeaconBlockElectra{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyElectra{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
BlobKzgCommitments: make([][]byte, 0),
ExecutionRequests: &enginev1.ExecutionRequests{
Withdrawals: make([]*enginev1.WithdrawalRequest, 0),
Deposits: make([]*enginev1.DepositRequest, 0),
Consolidations: make([]*enginev1.ConsolidationRequest, 0),
},
},
}
}

View File

@@ -7,9 +7,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
@@ -19,6 +19,11 @@ import (
// ErrCouldNotVerifyBlockHeader is returned when a block header's signature cannot be verified.
var ErrCouldNotVerifyBlockHeader = errors.New("could not verify beacon block header")
type slashValidatorFunc func(
ctx context.Context,
st state.BeaconState,
vid primitives.ValidatorIndex) (state.BeaconState, error)
// ProcessProposerSlashings is one of the operations performed
// on each processed beacon block to slash proposers based on
// slashing conditions if any slashable events occurred.
@@ -49,11 +54,11 @@ func ProcessProposerSlashings(
ctx context.Context,
beaconState state.BeaconState,
slashings []*ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
slashFunc slashValidatorFunc,
) (state.BeaconState, error) {
var err error
for _, slashing := range slashings {
beaconState, err = ProcessProposerSlashing(ctx, beaconState, slashing, exitInfo)
beaconState, err = ProcessProposerSlashing(ctx, beaconState, slashing, slashFunc)
if err != nil {
return nil, err
}
@@ -66,7 +71,7 @@ func ProcessProposerSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing *ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
slashFunc slashValidatorFunc,
) (state.BeaconState, error) {
var err error
if slashing == nil {
@@ -75,7 +80,7 @@ func ProcessProposerSlashing(
if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify proposer slashing")
}
beaconState, err = validators.SlashValidator(ctx, beaconState, slashing.Header_1.Header.ProposerIndex, exitInfo)
beaconState, err = slashFunc(ctx, beaconState, slashing.Header_1.Header.ProposerIndex)
if err != nil {
return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)
}
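
The change above swaps the *validators.ExitInfo argument for an injected slashValidatorFunc: the processor only verifies the slashing and delegates the state mutation to whatever callback it is given (v.SlashValidator in production, a fake in tests). A small sketch of that dependency-injection pattern, with hypothetical beaconState and validatorIndex stand-ins rather than Prysm's types:

package main

import (
    "context"
    "fmt"
)

// Hypothetical stand-ins for the beacon state and validator index types.
type beaconState struct{ slashed map[uint64]bool }
type validatorIndex = uint64

// slashValidatorFunc mirrors the callback signature introduced above.
type slashValidatorFunc func(ctx context.Context, st *beaconState, vid validatorIndex) (*beaconState, error)

// processProposerSlashing delegates the slash to the injected callback.
// Verification is elided; the real code calls VerifyProposerSlashing first.
func processProposerSlashing(ctx context.Context, st *beaconState, proposer validatorIndex, slash slashValidatorFunc) (*beaconState, error) {
    return slash(ctx, st, proposer)
}

func main() {
    st := &beaconState{slashed: map[uint64]bool{}}
    fakeSlash := func(_ context.Context, st *beaconState, vid validatorIndex) (*beaconState, error) {
        st.slashed[vid] = true
        return st, nil
    }
    st, _ = processProposerSlashing(context.Background(), st, 7, fakeSlash)
    fmt.Println(st.slashed[7]) // prints: true
}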

View File

@@ -50,7 +50,7 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
},
}
want := "mismatched header slots"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
assert.ErrorContains(t, want, err)
}
@@ -83,7 +83,7 @@ func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
},
}
want := "expected slashing headers to differ"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
assert.ErrorContains(t, want, err)
}
@@ -133,7 +133,7 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
"validator with key %#x is not slashable",
bytesutil.ToBytes48(beaconState.Validators()[0].PublicKey),
)
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
assert.ErrorContains(t, want, err)
}
@@ -172,7 +172,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -220,7 +220,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusAltair(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -268,7 +268,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -316,7 +316,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusCapella(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()

View File

@@ -84,8 +84,8 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) error {
// Handle validator ejections.
for _, idx := range eligibleForEjection {
var err error
// exit info is not used in electra
st, err = validators.InitiateValidatorExit(ctx, st, idx, &validators.ExitInfo{})
// exitQueueEpoch and churn arguments are not used in electra.
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, 0 /*exitQueueEpoch*/, 0 /*churn*/)
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return fmt.Errorf("failed to initiate validator exit at index %d: %w", idx, err)
}

View File

@@ -4,7 +4,6 @@ import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -47,21 +46,18 @@ var (
// # [New in Electra:EIP7251]
// for_ops(body.execution_payload.consolidation_requests, process_consolidation_request)
func ProcessOperations(ctx context.Context, st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
func ProcessOperations(
ctx context.Context,
st state.BeaconState,
block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
// 6110 validations are in VerifyOperationLengths
bb := block.Body()
// Electra extends the altair operations.
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
st, err = ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), exitInfo)
st, err := ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), v.SlashValidator)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
}
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), exitInfo)
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), v.SlashValidator)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
@@ -72,7 +68,7 @@ func ProcessOperations(ctx context.Context, st state.BeaconState, block interfac
if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits(), exitInfo)
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits())
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}

View File

@@ -147,8 +147,9 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
if isFullExitRequest {
// Only exit validator if it has no pending withdrawals in the queue
if pendingBalanceToWithdraw == 0 {
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
var err error
st, err = validators.InitiateValidatorExit(ctx, st, vIdx, validators.ExitInformation(st))
st, _, err = validators.InitiateValidatorExit(ctx, st, vIdx, maxExitEpoch, churn)
if err != nil {
return nil, err
}

View File

@@ -99,7 +99,8 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) (state.Be
for _, idx := range eligibleForEjection {
// It is fine to do a quadratic loop here since this should
// rarely happen
st, err = validators.InitiateValidatorExit(ctx, st, idx, validators.ExitInformation(st))
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, maxExitEpoch, churn)
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
}

View File

@@ -16,10 +16,10 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
if err := electra.ProcessEpoch(ctx, state); err != nil {
return errors.Wrap(err, "could not process epoch in fulu transition")
}
return ProcessProposerLookahead(ctx, state)
return processProposerLookahead(ctx, state)
}
func ProcessProposerLookahead(ctx context.Context, state state.BeaconState) error {
func processProposerLookahead(ctx context.Context, state state.BeaconState) error {
_, span := trace.StartSpan(ctx, "fulu.processProposerLookahead")
defer span.End()

View File

@@ -10,7 +10,6 @@ go_library(
"legacy.go",
"metrics.go",
"randao.go",
"ranges.go",
"rewards_penalties.go",
"shuffle.go",
"sync_committee.go",
@@ -57,7 +56,6 @@ go_test(
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
"randao_test.go",
"ranges_test.go",
"rewards_penalties_test.go",
"shuffle_test.go",
"sync_committee_test.go",

View File

@@ -317,15 +317,23 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
}
proposerAssignments := make(map[primitives.ValidatorIndex][]primitives.Slot)
originalStateSlot := state.Slot()
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
// Skip proposer assignment for genesis slot.
if slot == 0 {
continue
}
// Set the state's current slot.
if err := state.SetSlot(slot); err != nil {
return nil, err
}
// Determine the proposer index for the current slot.
i, err := BeaconProposerIndexAtSlot(ctx, state, slot)
i, err := BeaconProposerIndex(ctx, state)
if err != nil {
return nil, errors.Wrapf(err, "could not check proposer at slot %d", slot)
return nil, errors.Wrapf(err, "could not check proposer at slot %d", state.Slot())
}
// Append the slot to the proposer's assignments.
@@ -334,6 +342,12 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
}
proposerAssignments[i] = append(proposerAssignments[i], slot)
}
// Reset state back to its original slot.
if err := state.SetSlot(originalStateSlot); err != nil {
return nil, err
}
return proposerAssignments, nil
}
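
ProposerAssignments now mutates the state's slot inside the loop and restores it before returning. A standalone sketch of that set-slot / compute / reset pattern under an assumed fakeState type (the real code calls BeaconProposerIndex against the live beacon state):

package main

import "fmt"

// fakeState is a minimal stand-in for a mutable state with a slot.
type fakeState struct{ slot uint64 }

func (s *fakeState) Slot() uint64              { return s.slot }
func (s *fakeState) SetSlot(slot uint64) error { s.slot = slot; return nil }

// proposerAt is a hypothetical per-slot computation that depends on the
// state's current slot.
func proposerAt(s *fakeState) uint64 { return s.Slot() % 4 }

func assignments(s *fakeState, startSlot, slotsPerEpoch uint64) (map[uint64][]uint64, error) {
    out := make(map[uint64][]uint64)
    original := s.Slot()
    for slot := startSlot; slot < startSlot+slotsPerEpoch; slot++ {
        if slot == 0 { // the genesis slot has no proposer
            continue
        }
        if err := s.SetSlot(slot); err != nil {
            return nil, err
        }
        idx := proposerAt(s)
        out[idx] = append(out[idx], slot)
    }
    // Restore the state to its original slot before returning.
    if err := s.SetSlot(original); err != nil {
        return nil, err
    }
    return out, nil
}

func main() {
    s := &fakeState{slot: 32}
    a, _ := assignments(s, 32, 8)
    fmt.Println(a, s.Slot()) // assignments per proposer, and the state slot restored to 32
}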

View File

@@ -1,62 +0,0 @@
package helpers
import (
"fmt"
"slices"
)
// SortedSliceFromMap takes a map with uint64 keys and returns a sorted slice of the keys.
func SortedSliceFromMap(toSort map[uint64]bool) []uint64 {
slice := make([]uint64, 0, len(toSort))
for key := range toSort {
slice = append(slice, key)
}
slices.Sort(slice)
return slice
}
// PrettySlice returns a pretty string representation of a sorted slice of uint64.
// `sortedSlice` must be sorted in ascending order.
// Example: [1,2,3,5,6,7,8,10] -> "1-3,5-8,10"
func PrettySlice(sortedSlice []uint64) string {
if len(sortedSlice) == 0 {
return ""
}
var result string
start := sortedSlice[0]
end := sortedSlice[0]
for i := 1; i < len(sortedSlice); i++ {
if sortedSlice[i] == end+1 {
end = sortedSlice[i]
continue
}
if start == end {
result += fmt.Sprintf("%d,", start)
start = sortedSlice[i]
end = sortedSlice[i]
continue
}
result += fmt.Sprintf("%d-%d,", start, end)
start = sortedSlice[i]
end = sortedSlice[i]
}
if start == end {
result += fmt.Sprintf("%d", start)
return result
}
result += fmt.Sprintf("%d-%d", start, end)
return result
}
// SortedPrettySliceFromMap combines SortedSliceFromMap and PrettySlice to return a pretty string representation of the keys in a map.
func SortedPrettySliceFromMap(toSort map[uint64]bool) string {
sorted := SortedSliceFromMap(toSort)
return PrettySlice(sorted)
}

View File

@@ -1,64 +0,0 @@
package helpers_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestSortedSliceFromMap(t *testing.T) {
input := map[uint64]bool{5: true, 3: true, 8: true, 1: true}
expected := []uint64{1, 3, 5, 8}
actual := helpers.SortedSliceFromMap(input)
require.Equal(t, len(expected), len(actual))
for i := range expected {
require.Equal(t, expected[i], actual[i])
}
}
func TestPrettySlice(t *testing.T) {
tests := []struct {
name string
input []uint64
expected string
}{
{
name: "empty slice",
input: []uint64{},
expected: "",
},
{
name: "only distinct elements",
input: []uint64{1, 3, 5, 7, 9},
expected: "1,3,5,7,9",
},
{
name: "single range",
input: []uint64{1, 2, 3, 4, 5},
expected: "1-5",
},
{
name: "multiple ranges and distinct elements",
input: []uint64{1, 2, 3, 5, 6, 7, 8, 10, 12, 13, 14},
expected: "1-3,5-8,10,12-14",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
actual := helpers.PrettySlice(tt.input)
require.Equal(t, tt.expected, actual)
})
}
}
func TestSortedPrettySliceFromMap(t *testing.T) {
input := map[uint64]bool{5: true, 7: true, 8: true, 10: true}
expected := "5,7-8,10"
actual := helpers.SortedPrettySliceFromMap(input)
require.Equal(t, expected, actual)
}

View File

@@ -87,11 +87,6 @@ func TotalActiveBalance(s state.ReadOnlyBeaconState) (uint64, error) {
return total, nil
}
// UpdateTotalActiveBalanceCache updates the cache with the given total active balance.
func UpdateTotalActiveBalanceCache(s state.BeaconState, total uint64) error {
return balanceCache.AddTotalEffectiveBalance(s, total)
}
// IncreaseBalance increases validator with the given 'index' balance by 'delta' in Gwei.
//
// Spec pseudocode definition:

View File

@@ -297,30 +297,3 @@ func TestIncreaseBadBalance_NotOK(t *testing.T) {
require.ErrorContains(t, "addition overflows", helpers.IncreaseBalance(state, test.i, test.nb))
}
}
func TestUpdateTotalActiveBalanceCache(t *testing.T) {
helpers.ClearCache()
// Create a test state with some validators
validators := []*ethpb.Validator{
{EffectiveBalance: 32 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
{EffectiveBalance: 32 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
{EffectiveBalance: 31 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: validators,
Slot: 0,
})
require.NoError(t, err)
// Test updating cache with a specific total
testTotal := uint64(95 * 1e9) // 32 + 32 + 31 = 95
err = helpers.UpdateTotalActiveBalanceCache(state, testTotal)
require.NoError(t, err)
// Verify the cache was updated by retrieving the total active balance
// which should now return the cached value
cachedTotal, err := helpers.TotalActiveBalance(state)
require.NoError(t, err)
assert.Equal(t, testTotal, cachedTotal, "Cache should return the updated total")
}

View File

@@ -21,39 +21,6 @@ var (
syncCommitteeCache = cache.NewSyncCommittee()
)
// CurrentPeriodPositions returns committee indices of the current period sync committee for input validators.
func CurrentPeriodPositions(st state.BeaconState, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
root, err := SyncPeriodBoundaryRoot(st)
if err != nil {
return nil, err
}
pos, err := syncCommitteeCache.CurrentPeriodPositions(root, indices)
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
committee, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
// Fill in the cache on miss.
go func() {
if err := syncCommitteeCache.UpdatePositionsInCommittee(root, st); err != nil {
log.WithError(err).Error("Could not fill sync committee cache on miss")
}
}()
pos = make([][]primitives.CommitteeIndex, len(indices))
for i, idx := range indices {
pubkey := st.PubkeyAtIndex(idx)
pos[i] = findSubCommitteeIndices(pubkey[:], committee.Pubkeys)
}
return pos, nil
}
if err != nil {
return nil, err
}
return pos, nil
}
// IsCurrentPeriodSyncCommittee returns true if the input validator index belongs in the current period sync committee
// along with the sync committee root.
// 1. Checks if the public key exists in the sync committee cache

View File

@@ -17,38 +17,6 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestCurrentPeriodPositions(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
for i := 0; i < len(validators); i++ {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: k,
}
syncCommittee.Pubkeys[i] = bytesutil.PadTo(k, 48)
}
state, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
Validators: validators,
})
require.NoError(t, err)
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
require.NoError(t, err, helpers.SyncCommitteeCache().UpdatePositionsInCommittee([32]byte{}, state))
positions, err := helpers.CurrentPeriodPositions(state, []primitives.ValidatorIndex{0, 1})
require.NoError(t, err)
require.Equal(t, 2, len(positions))
require.Equal(t, 1, len(positions[0]))
assert.Equal(t, primitives.CommitteeIndex(0), positions[0][0])
require.Equal(t, 1, len(positions[1]))
assert.Equal(t, primitives.CommitteeIndex(1), positions[1][0])
}
func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
helpers.ClearCache()

View File

@@ -309,29 +309,23 @@ func beaconProposerIndexAtSlotFulu(state state.ReadOnlyBeaconState, slot primiti
if err != nil {
return 0, errors.Wrap(err, "could not get proposer lookahead")
}
spe := params.BeaconConfig().SlotsPerEpoch
if e == stateEpoch {
return lookAhead[slot%spe], nil
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch], nil
}
// The caller is requesting the proposer for the next epoch
return lookAhead[spe+slot%spe], nil
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch+params.BeaconConfig().SlotsPerEpoch], nil
}
// BeaconProposerIndexAtSlot returns proposer index at the given slot from the
// point of view of the given state as head state
func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(state.Slot())
// Even if the state is post Fulu, we may request a past proposer index.
if state.Version() >= version.Fulu && e >= params.BeaconConfig().FuluForkEpoch {
// We can use the cached lookahead only for the current and the next epoch.
if e == stateEpoch || e == stateEpoch+1 {
return beaconProposerIndexAtSlotFulu(state, slot)
}
if state.Version() >= version.Fulu {
return beaconProposerIndexAtSlotFulu(state, slot)
}
e := slots.ToEpoch(slot)
// The cache uses the state root of the previous epoch - minimum_seed_lookahead last slot as key. (e.g. Starting epoch 1, slot 32, the key would be block root at slot 31)
// For simplicity, the node will skip caching of genesis epoch. If the passed state has not yet reached this slot then we do not check the cache.
if e <= stateEpoch && e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
// For simplicity, the node will skip caching of genesis epoch.
if e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
s, err := slots.EpochEnd(e - 1)
if err != nil {
return 0, err
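
The Fulu path above answers proposer lookups from a two-epoch lookahead vector: the first SLOTS_PER_EPOCH entries cover the state's current epoch, the next SLOTS_PER_EPOCH cover the following one, and anything outside that window is rejected. A hedged sketch of that indexing, with SLOTS_PER_EPOCH hard-coded to 32:

package main

import (
    "errors"
    "fmt"
)

const slotsPerEpoch = 32 // mainnet SLOTS_PER_EPOCH, hard-coded for the sketch

// proposerFromLookahead indexes a two-epoch lookahead vector the way
// beaconProposerIndexAtSlotFulu does above.
func proposerFromLookahead(lookahead []uint64, stateSlot, slot uint64) (uint64, error) {
    stateEpoch := stateSlot / slotsPerEpoch
    epoch := slot / slotsPerEpoch
    switch epoch {
    case stateEpoch:
        return lookahead[slot%slotsPerEpoch], nil
    case stateEpoch + 1:
        return lookahead[slotsPerEpoch+slot%slotsPerEpoch], nil
    default:
        return 0, errors.New("slot is not in the current epoch or the next epoch")
    }
}

func main() {
    lookahead := make([]uint64, 2*slotsPerEpoch)
    lookahead[0] = 15             // proposer of the first slot of the current epoch
    lookahead[slotsPerEpoch] = 42 // proposer of the first slot of the next epoch
    fmt.Println(proposerFromLookahead(lookahead, 96, 96))  // prints: 15 <nil>
    fmt.Println(proposerFromLookahead(lookahead, 96, 128)) // prints: 42 <nil>
}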

View File

@@ -1161,10 +1161,6 @@ func TestValidatorMaxEffectiveBalance(t *testing.T) {
}
func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 1
params.OverrideBeaconConfig(cfg)
lookahead := make([]uint64, 64)
lookahead[0] = 15
lookahead[1] = 16
@@ -1184,4 +1180,8 @@ func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
idx, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 130)
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(42), idx)
_, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 95)
require.ErrorContains(t, "slot 95 is not in the current epoch 3 or the next epoch", err)
_, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 160)
require.ErrorContains(t, "slot 160 is not in the current epoch 3 or the next epoch", err)
}

View File

@@ -3,12 +3,11 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"cache.go",
"helpers.go",
"lightclient.go",
"store.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client",
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client",
visibility = ["//visibility:public"],
deps = [
"//async/event:go_default_library",
@@ -39,30 +38,25 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"cache_test.go",
"lightclient_test.go",
"store_test.go",
],
embed = [":go_default_library"],
deps = [
":go_default_library",
"//async/event:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -592,10 +592,6 @@ func HasFinality(update interfaces.LightClientUpdate) (bool, error) {
}
func IsBetterUpdate(newUpdate, oldUpdate interfaces.LightClientUpdate) (bool, error) {
if oldUpdate == nil || oldUpdate.IsNil() {
return true, nil
}
maxActiveParticipants := newUpdate.SyncAggregate().SyncCommitteeBits.Len()
newNumActiveParticipants := newUpdate.SyncAggregate().SyncCommitteeBits.Count()
oldNumActiveParticipants := oldUpdate.SyncAggregate().SyncCommitteeBits.Count()
@@ -782,7 +778,7 @@ func IsFinalityUpdateValidForBroadcast(newUpdate, oldUpdate interfaces.LightClie
// This does not concern broadcasting, but rather the decision of whether to save the new update.
// For broadcasting checks, use IsFinalityUpdateValidForBroadcast.
func IsBetterFinalityUpdate(newUpdate, oldUpdate interfaces.LightClientFinalityUpdate) bool {
if oldUpdate == nil || oldUpdate.IsNil() {
if oldUpdate == nil {
return true
}
@@ -808,7 +804,7 @@ func IsBetterFinalityUpdate(newUpdate, oldUpdate interfaces.LightClientFinalityU
}
func IsBetterOptimisticUpdate(newUpdate, oldUpdate interfaces.LightClientOptimisticUpdate) bool {
if oldUpdate == nil || oldUpdate.IsNil() {
if oldUpdate == nil {
return true
}
// The attested_header.beacon.slot is greater than that of all previously forwarded optimistic updates

View File

@@ -9,7 +9,7 @@ import (
light_client "github.com/OffchainLabs/prysm/v6/consensus-types/light-client"
"github.com/OffchainLabs/prysm/v6/runtime/version"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
consensustypes "github.com/OffchainLabs/prysm/v6/consensus-types"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"

View File

@@ -0,0 +1,202 @@
package light_client
import (
"context"
"sync"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")
type Store struct {
mu sync.RWMutex
beaconDB iface.HeadAccessDatabase
lastFinalityUpdate interfaces.LightClientFinalityUpdate // tracks the best finality update seen so far
lastOptimisticUpdate interfaces.LightClientOptimisticUpdate // tracks the best optimistic update seen so far
p2p p2p.Accessor
stateFeed event.SubscriberSender
}
func NewLightClientStore(db iface.HeadAccessDatabase, p p2p.Accessor, e event.SubscriberSender) *Store {
return &Store{
beaconDB: db,
p2p: p,
stateFeed: e,
}
}
func (s *Store) LightClientBootstrap(ctx context.Context, blockRoot [32]byte) (interfaces.LightClientBootstrap, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client bootstrap from the database
bootstrap, err := s.beaconDB.LightClientBootstrap(ctx, blockRoot[:])
if err != nil {
return nil, err
}
if bootstrap == nil { // not found
return nil, ErrLightClientBootstrapNotFound
}
return bootstrap, nil
}
func (s *Store) SaveLightClientBootstrap(ctx context.Context, blockRoot [32]byte) error {
s.mu.Lock()
defer s.mu.Unlock()
blk, err := s.beaconDB.Block(ctx, blockRoot)
if err != nil {
return errors.Wrapf(err, "failed to fetch block for root %x", blockRoot)
}
if blk == nil {
return errors.Errorf("failed to fetch block for root %x", blockRoot)
}
state, err := s.beaconDB.State(ctx, blockRoot)
if err != nil {
return errors.Wrapf(err, "failed to fetch state for block root %x", blockRoot)
}
if state == nil {
return errors.Errorf("failed to fetch state for block root %x", blockRoot)
}
bootstrap, err := NewLightClientBootstrapFromBeaconState(ctx, state.Slot(), state, blk)
if err != nil {
return errors.Wrapf(err, "failed to create light client bootstrap for block root %x", blockRoot)
}
// Save the light client bootstrap to the database
if err := s.beaconDB.SaveLightClientBootstrap(ctx, blockRoot[:], bootstrap); err != nil {
return err
}
return nil
}
func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) ([]interfaces.LightClientUpdate, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client updates map from the database
updatesMap, err := s.beaconDB.LightClientUpdates(ctx, startPeriod, endPeriod)
if err != nil {
return nil, err
}
var updates []interfaces.LightClientUpdate
for i := startPeriod; i <= endPeriod; i++ {
update, ok := updatesMap[i]
if !ok {
// Only return the first contiguous range of updates
break
}
updates = append(updates, update)
}
return updates, nil
}
func (s *Store) LightClientUpdate(ctx context.Context, period uint64) (interfaces.LightClientUpdate, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client update for the given period from the database
update, err := s.beaconDB.LightClientUpdate(ctx, period)
if err != nil {
return nil, err
}
return update, nil
}
func (s *Store) SaveLightClientUpdate(ctx context.Context, period uint64, update interfaces.LightClientUpdate) error {
s.mu.Lock()
defer s.mu.Unlock()
oldUpdate, err := s.beaconDB.LightClientUpdate(ctx, period)
if err != nil {
return errors.Wrapf(err, "could not get current light client update")
}
if oldUpdate == nil {
if err := s.beaconDB.SaveLightClientUpdate(ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
isNewUpdateBetter, err := IsBetterUpdate(update, oldUpdate)
if err != nil {
return errors.Wrapf(err, "could not compare light client updates")
}
if isNewUpdateBetter {
if err := s.beaconDB.SaveLightClientUpdate(ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
return nil
}
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
if broadcast && IsFinalityUpdateValidForBroadcast(update, s.lastFinalityUpdate) {
if err := s.p2p.BroadcastLightClientFinalityUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client finality update")
}
}
s.lastFinalityUpdate = update
log.Debug("Saved new light client finality update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: update,
})
}
func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
s.mu.RLock()
defer s.mu.RUnlock()
return s.lastFinalityUpdate
}
func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
if broadcast {
if err := s.p2p.BroadcastLightClientOptimisticUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client optimistic update")
}
}
s.lastOptimisticUpdate = update
log.Debug("Saved new light client optimistic update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: update,
})
}
func (s *Store) LastOptimisticUpdate() interfaces.LightClientOptimisticUpdate {
s.mu.RLock()
defer s.mu.RUnlock()
return s.lastOptimisticUpdate
}
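
Store.LightClientUpdates above returns only the first contiguous run of periods found in the database map, stopping at the first gap. A tiny sketch of that loop, using plain strings in place of interfaces.LightClientUpdate:

package main

import "fmt"

// contiguousUpdates collects updates from startPeriod until the first missing
// period, mirroring the loop in Store.LightClientUpdates.
func contiguousUpdates(m map[uint64]string, startPeriod, endPeriod uint64) []string {
    var out []string
    for p := startPeriod; p <= endPeriod; p++ {
        u, ok := m[p]
        if !ok {
            break // only the first contiguous range is returned
        }
        out = append(out, u)
    }
    return out
}

func main() {
    m := map[uint64]string{1: "u1", 2: "u2", 4: "u4"}
    fmt.Println(contiguousUpdates(m, 1, 4)) // prints: [u1 u2] — period 3 is missing
}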

View File

@@ -0,0 +1,165 @@
package light_client_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/async/event"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestLightClientStore(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 1
cfg.BellatrixForkEpoch = 2
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
params.OverrideBeaconConfig(cfg)
// Initialize the light client store
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), &p2pTesting.FakeP2P{}, new(event.Feed))
// Create test light client updates for Capella and Deneb
lCapella := util.NewTestLightClient(t, version.Capella)
opUpdateCapella, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lCapella.Ctx, lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, opUpdateCapella, "OptimisticUpdateCapella is nil")
finUpdateCapella, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(lCapella.Ctx, lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock, lCapella.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, finUpdateCapella, "FinalityUpdateCapella is nil")
lDeneb := util.NewTestLightClient(t, version.Deneb)
opUpdateDeneb, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, opUpdateDeneb, "OptimisticUpdateDeneb is nil")
finUpdateDeneb, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock, lDeneb.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, finUpdateDeneb, "FinalityUpdateDeneb is nil")
// Initially the store should have nil values for both updates
require.IsNil(t, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should be nil")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get finality with Capella update. Optimistic update should be nil
lcStore.SetLastFinalityUpdate(finUpdateCapella, false)
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get optimistic with Capella update. Finality update should be Capella
lcStore.SetLastOptimisticUpdate(opUpdateCapella, false)
require.Equal(t, opUpdateCapella, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
// Set and get finality and optimistic with Deneb update
lcStore.SetLastFinalityUpdate(finUpdateDeneb, false)
lcStore.SetLastOptimisticUpdate(opUpdateDeneb, false)
require.Equal(t, finUpdateDeneb, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.Equal(t, opUpdateDeneb, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
}
func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), p2p, new(event.Feed))
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update0, lcStore.LastFinalityUpdate()), "update0 should be better than nil")
// update0 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update0, lcStore.LastFinalityUpdate()), "update0 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
l1 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(1))
update1, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l1.Ctx, l1.State, l1.Block, l1.AttestedState, l1.AttestedBlock, l1.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update1, update0), "update1 should be better than update0")
// update1 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update1, lcStore.LastFinalityUpdate()), "update1 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 2 with same finality slot, increased attested slot, and supermajority - should save and broadcast
l2 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update2, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update2, update1), "update2 should be better than update1")
// update2 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update2, lcStore.LastFinalityUpdate()), "update2 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
l3 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(3), util.WithSupermajority())
update3, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l3.Ctx, l3.State, l3.Block, l3.AttestedState, l3.AttestedBlock, l3.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update3, update2), "update3 should be better than update2")
// update3 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update3, lcStore.LastFinalityUpdate()), "update3 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
l4 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(1), util.WithSupermajority())
update4, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l4.Ctx, l4.State, l4.Block, l4.AttestedState, l4.AttestedBlock, l4.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update4, update3), "update4 should be better than update3")
// update4 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update4, lcStore.LastFinalityUpdate()), "update4 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
l5 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update5, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l5.Ctx, l5.State, l5.Block, l5.AttestedState, l5.AttestedBlock, l5.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update5, update4), "update5 should be better than update4")
// update5 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update5, lcStore.LastFinalityUpdate()), "update5 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
l6 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(3))
update6, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l6.Ctx, l6.State, l6.Block, l6.AttestedState, l6.AttestedBlock, l6.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update6, update5), "update6 should be better than update5")
// update6 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update6, lcStore.LastFinalityUpdate()), "update6 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}

View File

@@ -19,11 +19,11 @@ go_library(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/trie:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",

View File

@@ -4,10 +4,15 @@ import (
"encoding/binary"
"math"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/holiman/uint256"
"github.com/pkg/errors"
@@ -15,9 +20,12 @@ import (
var (
// Custom errors
ErrCustodyGroupTooLarge = errors.New("custody group too large")
ErrCustodyGroupCountTooLarge = errors.New("custody group count too large")
errWrongComputedCustodyGroupCount = errors.New("wrong computed custody group count, should never happen")
ErrCustodyGroupTooLarge = errors.New("custody group too large")
ErrCustodyGroupCountTooLarge = errors.New("custody group count too large")
ErrSizeMismatch = errors.New("mismatch in the number of blob KZG commitments and cellsAndProofs")
ErrNotEnoughDataColumnSidecars = errors.New("not enough columns")
ErrDataColumnSidecarsNotSortedByIndex = errors.New("data column sidecars are not sorted by index")
errWrongComputedCustodyGroupCount = errors.New("wrong computed custody group count, should never happen")
// maxUint256 is the maximum value of an uint256.
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
@@ -109,6 +117,44 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
return columns, nil
}
// DataColumnSidecars computes the data column sidecars from the signed block, cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alterates `cellsAndProofs` afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, cellsAndProofs []kzg.CellsAndProofs) ([]*ethpb.DataColumnSidecar, error) {
if signedBlock == nil || signedBlock.IsNil() || len(cellsAndProofs) == 0 {
return nil, nil
}
block := signedBlock.Block()
blockBody := block.Body()
blobKzgCommitments, err := blockBody.BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
if len(blobKzgCommitments) != len(cellsAndProofs) {
return nil, ErrSizeMismatch
}
signedBlockHeader, err := signedBlock.Header()
if err != nil {
return nil, errors.Wrap(err, "signed block header")
}
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(blockBody)
if err != nil {
return nil, errors.Wrap(err, "merkle proof KZG commitments")
}
dataColumnSidecars, err := dataColumnsSidecars(signedBlockHeader, blobKzgCommitments, kzgCommitmentsInclusionProof, cellsAndProofs)
if err != nil {
return nil, errors.Wrap(err, "data column sidecars")
}
return dataColumnSidecars, nil
}
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
@@ -148,3 +194,72 @@ func CustodyColumns(custodyGroups []uint64) (map[uint64]bool, error) {
return columns, nil
}
// dataColumnsSidecars computes the data column sidecars from the signed block header, the blob KZG commitments,
// the KZG commitment inclusion proofs and the cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alters input parameters afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars
func dataColumnsSidecars(
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
blobKzgCommitments [][]byte,
kzgCommitmentsInclusionProof [][]byte,
cellsAndProofs []kzg.CellsAndProofs,
) ([]*ethpb.DataColumnSidecar, error) {
start := time.Now()
if len(blobKzgCommitments) != len(cellsAndProofs) {
return nil, ErrSizeMismatch
}
numberOfColumns := params.BeaconConfig().NumberOfColumns
blobsCount := len(cellsAndProofs)
sidecars := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns)
for columnIndex := range numberOfColumns {
column := make([]kzg.Cell, 0, blobsCount)
kzgProofOfColumn := make([]kzg.Proof, 0, blobsCount)
for rowIndex := range blobsCount {
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
// Validate that we have enough cells and proofs for this column index
if columnIndex >= uint64(len(cellsForRow)) {
return nil, errors.Errorf("column index %d exceeds cells length %d for blob %d", columnIndex, len(cellsForRow), rowIndex)
}
if columnIndex >= uint64(len(proofsForRow)) {
return nil, errors.Errorf("column index %d exceeds proofs length %d for blob %d", columnIndex, len(proofsForRow), rowIndex)
}
cell := cellsForRow[columnIndex]
column = append(column, cell)
kzgProof := proofsForRow[columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
columnBytes = append(columnBytes, column[i][:])
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, kzgProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
Index: columnIndex,
Column: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProofs: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
return sidecars, nil
}
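To make the row-to-column transposition explicit, the sketch below states the invariant the loop above establishes (`checkTransposition` is a hypothetical helper and assumes a `bytes` import; it is not part of this change).
// checkTransposition sketches the invariant established by dataColumnsSidecars:
// the cell and proof of blob `row` for column `col` end up in sidecar `col` at position `row`.
func checkTransposition(sidecars []*ethpb.DataColumnSidecar, cellsAndProofs []kzg.CellsAndProofs) bool {
	for col := range sidecars {
		for row := range cellsAndProofs {
			if !bytes.Equal(sidecars[col].Column[row], cellsAndProofs[row].Cells[col][:]) {
				return false
			}
			if !bytes.Equal(sidecars[col].KzgProofs[row], cellsAndProofs[row].Proofs[col][:]) {
				return false
			}
		}
	}
	return true
}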

View File

@@ -3,9 +3,13 @@ package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/ethereum/go-ethereum/p2p/enode"
)
@@ -27,6 +31,93 @@ func TestComputeColumnsForCustodyGroup(t *testing.T) {
require.ErrorIs(t, err, peerdas.ErrCustodyGroupTooLarge)
}
func TestDataColumnSidecars(t *testing.T) {
t.Run("nil signed block", func(t *testing.T) {
var expected []*ethpb.DataColumnSidecar = nil
actual, err := peerdas.DataColumnSidecars(nil, []kzg.CellsAndProofs{})
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
})
t.Run("empty cells and proofs", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
actual, err := peerdas.DataColumnSidecars(signedBeaconBlock, []kzg.CellsAndProofs{})
require.NoError(t, err)
require.IsNil(t, actual)
})
t.Run("sizes mismatch", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs.
cellsAndProofs := make([]kzg.CellsAndProofs, 1)
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorIs(t, err, peerdas.ErrSizeMismatch)
})
t.Run("cells array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with insufficient cells for the number of columns.
// This simulates a scenario where cellsAndProofs has fewer cells than expected columns.
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, 10), // Only 10 cells
Proofs: make([]kzg.Proof, 10), // Only 10 proofs
},
}
// This should fail because the function will try to access columns up to NumberOfColumns
// but we only have 10 cells/proofs.
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorContains(t, "column index", err)
require.ErrorContains(t, "exceeds cells length", err)
})
t.Run("proofs array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, 5), // Only 5 proofs, less than columns
},
}
// This should fail when trying to access proof beyond index 4.
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorContains(t, "column index", err)
require.ErrorContains(t, "exceeds proofs length", err)
})
}
func TestComputeCustodyGroupForColumn(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()

View File

@@ -63,14 +63,17 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
require.ErrorIs(t, err, peerdas.ErrInvalidKZGProof)
})
t.Run("nominal", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
require.NoError(t, err)
})
}
@@ -253,8 +256,9 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_SameCommitments_NoBatch(b *testin
for i := range int64(b.N) {
// Generate new random sidecars to ensure the KZG backend does not cache anything.
sidecars := generateRandomSidecars(b, i, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(b, sidecars)
for _, sidecar := range sidecars {
for _, sidecar := range roDataColumnSidecars {
sidecars := []blocks.RODataColumn{sidecar}
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
@@ -278,7 +282,7 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.
b.ResetTimer()
for j := range int64(b.N) {
allSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
allSidecars := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns)
for k := int64(0); k < numberOfColumns; k += columnsCount {
// Use different seeds to generate different blobs/commitments
seed := int64(b.N*i) + numberOfColumns*j + blobCount*k
@@ -288,8 +292,10 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.
allSidecars = append(allSidecars, sidecars[k:k+columnsCount]...)
}
roDataColumnSidecars := generateRODataColumnSidecars(b, allSidecars)
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(allSidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
b.StopTimer()
require.NoError(b, err)
}
@@ -317,7 +323,8 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch4(b *testing
for j := range int64(batchCount) {
// Use different seeds to generate different blobs/commitments
sidecars := generateRandomSidecars(b, int64(batchCount)*i+j*blobCount, blobCount)
allSidecars = append(allSidecars, sidecars)
roDataColumnSidecars := generateRODataColumnSidecars(b, sidecars[:columnsCount])
allSidecars = append(allSidecars, roDataColumnSidecars)
}
for _, sidecars := range allSidecars {
@@ -351,7 +358,7 @@ func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgPr
return roSidecar
}
func generateRandomSidecars(t testing.TB, seed, blobCount int64) []blocks.RODataColumn {
func generateRandomSidecars(t testing.TB, seed, blobCount int64) []*ethpb.DataColumnSidecar {
dbBlock := util.NewBeaconBlockDeneb()
commitments := make([][]byte, 0, blobCount)
@@ -372,10 +379,20 @@ func generateRandomSidecars(t testing.TB, seed, blobCount int64) []blocks.ROData
require.NoError(t, err)
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
rob, err := blocks.NewROBlock(sBlock)
require.NoError(t, err)
sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
sidecars, err := peerdas.DataColumnSidecars(sBlock, cellsAndProofs)
require.NoError(t, err)
return sidecars
}
func generateRODataColumnSidecars(t testing.TB, sidecars []*ethpb.DataColumnSidecar) []blocks.RODataColumn {
roDataColumnSidecars := make([]blocks.RODataColumn, 0, len(sidecars))
for _, sidecar := range sidecars {
roCol, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
roDataColumnSidecars = append(roDataColumnSidecars, roCol)
}
return roDataColumnSidecars
}

View File

@@ -5,7 +5,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
pb "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
@@ -16,7 +16,6 @@ var (
ErrBlobIndexTooHigh = errors.New("blob index is too high")
ErrBlockRootMismatch = errors.New("block root mismatch")
ErrBlobsCellsProofsMismatch = errors.New("blobs and cells proofs mismatch")
ErrNilBlobAndProof = errors.New("nil blob and proof")
)
// MinimumColumnCountToReconstruct returns the minimum number of columns needed to proceed with a reconstruction.
@@ -29,19 +28,19 @@ func MinimumColumnCountToReconstruct() uint64 {
// ReconstructDataColumnSidecars reconstructs all the data column sidecars from the given input data column sidecars.
// All input sidecars must be committed to the same block.
// `inVerifiedRoSidecars` should contain enough (unique) sidecars to reconstruct the missing columns.
func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataColumn) ([]blocks.VerifiedRODataColumn, error) {
func ReconstructDataColumnSidecars(inVerifiedRoSidecars []blocks.VerifiedRODataColumn) ([]blocks.VerifiedRODataColumn, error) {
// Check if there is at least one input sidecar.
if len(verifiedRoSidecars) == 0 {
if len(inVerifiedRoSidecars) == 0 {
return nil, ErrNotEnoughDataColumnSidecars
}
// Safely retrieve the first sidecar as a reference.
referenceSidecar := verifiedRoSidecars[0]
referenceSidecar := inVerifiedRoSidecars[0]
// Check that all columns have the same length and are committed to the same block.
blobCount := len(referenceSidecar.Column)
blockRoot := referenceSidecar.BlockRoot()
for _, sidecar := range verifiedRoSidecars[1:] {
for _, sidecar := range inVerifiedRoSidecars[1:] {
if len(sidecar.Column) != blobCount {
return nil, ErrColumnLengthsDiffer
}
@@ -52,8 +51,8 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
}
// Deduplicate sidecars.
sidecarByIndex := make(map[uint64]blocks.VerifiedRODataColumn, len(verifiedRoSidecars))
for _, inVerifiedRoSidecar := range verifiedRoSidecars {
sidecarByIndex := make(map[uint64]blocks.VerifiedRODataColumn, len(inVerifiedRoSidecars))
for _, inVerifiedRoSidecar := range inVerifiedRoSidecars {
sidecarByIndex[inVerifiedRoSidecar.Index] = inVerifiedRoSidecar
}
@@ -63,6 +62,12 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
return nil, ErrNotEnoughDataColumnSidecars
}
// Sidecars are verified and are committed to the same block.
// All signed block headers, KZG commitments, and inclusion proofs are the same.
signedBlockHeader := referenceSidecar.SignedBlockHeader
kzgCommitments := referenceSidecar.KzgCommitments
kzgCommitmentsInclusionProof := referenceSidecar.KzgCommitmentsInclusionProof
// Recover cells and compute proofs in parallel.
var wg errgroup.Group
cellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
@@ -95,20 +100,78 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
return nil, errors.Wrap(err, "wait for RecoverCellsAndKZGProofs")
}
outSidecars, err := DataColumnSidecars(cellsAndProofs, PopulateFromSidecar(referenceSidecar))
outSidecars, err := dataColumnsSidecars(signedBlockHeader, kzgCommitments, kzgCommitmentsInclusionProof, cellsAndProofs)
if err != nil {
return nil, errors.Wrap(err, "data column sidecars from items")
}
// The input sidecars are verified and we reconstructed the missing sidecars ourselves.
// As a consequence, the reconstructed sidecars are also considered verified.
reconstructedVerifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(outSidecars))
outVerifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(outSidecars))
for _, sidecar := range outSidecars {
verifiedRoSidecar := blocks.NewVerifiedRODataColumn(sidecar)
reconstructedVerifiedRoSidecars = append(reconstructedVerifiedRoSidecars, verifiedRoSidecar)
roSidecar, err := blocks.NewRODataColumnWithRoot(sidecar, blockRoot)
if err != nil {
return nil, errors.Wrap(err, "new RO data column with root")
}
verifiedRoSidecar := blocks.NewVerifiedRODataColumn(roSidecar)
outVerifiedRoSidecars = append(outVerifiedRoSidecars, verifiedRoSidecar)
}
return reconstructedVerifiedRoSidecars, nil
return outVerifiedRoSidecars, nil
}
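A caller-side sketch of how the reconstruction entry point is meant to be used (`reconstructIfPossible` is a hypothetical name, not part of this change): reconstruction is only attempted once at least the minimum number of verified columns is held.
// reconstructIfPossible reconstructs the full column set once enough verified sidecars are available.
func reconstructIfPossible(stored []blocks.VerifiedRODataColumn) ([]blocks.VerifiedRODataColumn, error) {
	if uint64(len(stored)) < MinimumColumnCountToReconstruct() {
		// Not enough columns yet; keep collecting sidecars.
		return nil, nil
	}
	// All NumberOfColumns sidecars are returned, including the ones already held.
	return ReconstructDataColumnSidecars(stored)
}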
// ConstructDataColumnSidecars constructs data column sidecars from a block, (un-extended) blobs and
// cell proofs corresponding to the extended blobs. The main purpose of this function is to
// construct data column sidecars from data obtained from the execution client via:
// - `engine_getBlobsV2` - https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getblobsv2, or
// - `engine_getPayloadV5` - https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getpayloadv5
// Note: To match the `BlobsBundleV2` format returned by the execution client in `engine_getPayloadV5`,
// this function expects the cell proofs to be "flattened" into a single list of blobCount * numberOfColumns proofs.
func ConstructDataColumnSidecars(block interfaces.ReadOnlySignedBeaconBlock, blobs [][]byte, cellProofs [][]byte) ([]*ethpb.DataColumnSidecar, error) {
// Check if the cells count is equal to the cell proofs count.
numberOfColumns := params.BeaconConfig().NumberOfColumns
blobCount := uint64(len(blobs))
cellProofsCount := uint64(len(cellProofs))
cellsCount := blobCount * numberOfColumns
if cellsCount != cellProofsCount {
return nil, ErrBlobsCellsProofsMismatch
}
cellsAndProofs := make([]kzg.CellsAndProofs, 0, blobCount)
for i, blob := range blobs {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, errors.Wrap(err, "compute cells")
}
var proofs []kzg.Proof
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return nil, errors.New("wrong KZG proof size - should never happen")
}
proofs = append(proofs, kzgProof)
}
cellsProofs := kzg.CellsAndProofs{Cells: cells, Proofs: proofs}
cellsAndProofs = append(cellsAndProofs, cellsProofs)
}
dataColumnSidecars, err := DataColumnSidecars(block, cellsAndProofs)
if err != nil {
return nil, errors.Wrap(err, "data column sidcars")
}
return dataColumnSidecars, nil
}
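A sketch of feeding `engine_getBlobsV2`-shaped data into the function (`sidecarsFromBlobsBundle` is a hypothetical name, not part of this change); the only shape requirement is that the proofs are flattened to blobCount * NumberOfColumns entries.
// sidecarsFromBlobsBundle shows the expected input shape: one flat proof list covering
// every column of every blob, exactly as returned in a BlobsBundleV2.
func sidecarsFromBlobsBundle(block interfaces.ReadOnlySignedBeaconBlock, blobs, cellProofs [][]byte) ([]*ethpb.DataColumnSidecar, error) {
	numberOfColumns := params.BeaconConfig().NumberOfColumns
	if uint64(len(cellProofs)) != uint64(len(blobs))*numberOfColumns {
		return nil, ErrBlobsCellsProofsMismatch
	}
	return ConstructDataColumnSidecars(block, blobs, cellProofs)
}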
// ReconstructBlobs constructs verified read only blobs sidecars from verified read only blob sidecars.
@@ -193,89 +256,6 @@ func ReconstructBlobs(block blocks.ROBlock, verifiedDataColumnSidecars []blocks.
return blobSidecars, nil
}
// ComputeCellsAndProofsFromFlat computes the cells and proofs from blobs and cell flat proofs.
func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([]kzg.CellsAndProofs, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
blobCount := uint64(len(blobs))
cellProofsCount := uint64(len(cellProofs))
cellsCount := blobCount * numberOfColumns
if cellsCount != cellProofsCount {
return nil, ErrBlobsCellsProofsMismatch
}
cellsAndProofs := make([]kzg.CellsAndProofs, 0, blobCount)
for i, blob := range blobs {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, errors.Wrap(err, "compute cells")
}
var proofs []kzg.Proof
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return nil, errors.New("wrong KZG proof size - should never happen")
}
proofs = append(proofs, kzgProof)
}
cellsProofs := kzg.CellsAndProofs{Cells: cells, Proofs: proofs}
cellsAndProofs = append(cellsAndProofs, cellsProofs)
}
return cellsAndProofs, nil
}
// ComputeCellsAndProofs computes the cells and proofs from blobs and cell proofs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([]kzg.CellsAndProofs, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := make([]kzg.CellsAndProofs, 0, len(blobsAndProofs))
for _, blobAndProof := range blobsAndProofs {
if blobAndProof == nil {
return nil, ErrNilBlobAndProof
}
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, numberOfColumns*kzg.BytesPerProof)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, errors.New("wrong KZG proof size - should never happen")
}
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return nil, errors.New("wrong copied KZG proof size - should never happen")
}
kzgProofs = append(kzgProofs, kzgProof)
}
cellsProofs := kzg.CellsAndProofs{Cells: cells, Proofs: kzgProofs}
cellsAndProofs = append(cellsAndProofs, cellsProofs)
}
return cellsAndProofs, nil
}
// blobSidecarsFromDataColumnSidecars converts verified data column sidecars to verified blob sidecars.
func blobSidecarsFromDataColumnSidecars(roBlock blocks.ROBlock, dataColumnSidecars []blocks.VerifiedRODataColumn, indices []int) ([]*blocks.VerifiedROBlob, error) {
referenceSidecar := dataColumnSidecars[0]

View File

@@ -9,7 +9,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
pb "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/pkg/errors"
@@ -124,6 +124,50 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
})
}
func TestConstructDataColumnSidecars(t *testing.T) {
const (
blobCount = 3
cellsPerBlob = fieldparams.CellsPerBlob
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
roBlock, _, baseVerifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, blobCount)
// Extract blobs and proofs from the sidecars.
blobs := make([][]byte, 0, blobCount)
cellProofs := make([][]byte, 0, cellsPerBlob)
for blobIndex := range blobCount {
blob := make([]byte, 0, cellsPerBlob)
for columnIndex := range cellsPerBlob {
cell := baseVerifiedRoSidecars[columnIndex].Column[blobIndex]
blob = append(blob, cell...)
}
blobs = append(blobs, blob)
for columnIndex := range numberOfColumns {
cellProof := baseVerifiedRoSidecars[columnIndex].KzgProofs[blobIndex]
cellProofs = append(cellProofs, cellProof)
}
}
actual, err := peerdas.ConstructDataColumnSidecars(roBlock, blobs, cellProofs)
require.NoError(t, err)
// Extract the underlying sidecars from the base verified RO sidecars.
expected := make([]*ethpb.DataColumnSidecar, 0, len(baseVerifiedRoSidecars))
for _, verifiedRoSidecar := range baseVerifiedRoSidecars {
expected = append(expected, verifiedRoSidecar.DataColumnSidecar)
}
require.DeepSSZEqual(t, expected, actual)
}
func TestReconstructBlobs(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
@@ -206,7 +250,7 @@ func TestReconstructBlobs(t *testing.T) {
// Compute cells and proofs from blob sidecars.
var wg errgroup.Group
blobs := make([][]byte, blobCount)
inputCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
cellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
for i := range blobCount {
blob := roBlobSidecars[i].Blob
blobs[i] = blob
@@ -223,7 +267,7 @@ func TestReconstructBlobs(t *testing.T) {
// It is safe for multiple goroutines to concurrently write to the same slice,
// as long as they are writing to different indices, which is the case here.
inputCellsAndProofs[i] = cp
cellsAndProofs[i] = cp
return nil
})
@@ -234,24 +278,25 @@ func TestReconstructBlobs(t *testing.T) {
// Flatten proofs.
cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
for _, cp := range inputCellsAndProofs {
for _, cp := range cellsAndProofs {
for _, proof := range cp.Proofs {
cellProofs = append(cellProofs, proof[:])
}
}
// Compute cells and proofs from the blobs and cell proofs.
cellsAndProofs, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
require.NoError(t, err)
// Construct data column sidecars from the signed block and cells and proofs.
roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(roBlock))
// Construct data column sidecars.
// It is OK to use the public function `ConstructDataColumnSidecars`, as long as
// `TestConstructDataColumnSidecars` tests pass.
dataColumnSidecars, err := peerdas.ConstructDataColumnSidecars(roBlock, blobs, cellProofs)
require.NoError(t, err)
// Convert to verified data column sidecars.
verifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(roDataColumnSidecars))
for _, roDataColumnSidecar := range roDataColumnSidecars {
verifiedRoSidecar := blocks.NewVerifiedRODataColumn(roDataColumnSidecar)
verifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(dataColumnSidecars))
for _, dataColumnSidecar := range dataColumnSidecars {
roSidecar, err := blocks.NewRODataColumn(dataColumnSidecar)
require.NoError(t, err)
verifiedRoSidecar := blocks.NewVerifiedRODataColumn(roSidecar)
verifiedRoSidecars = append(verifiedRoSidecars, verifiedRoSidecar)
}
@@ -294,162 +339,3 @@ func TestReconstructBlobs(t *testing.T) {
})
}
func TestComputeCellsAndProofsFromFlat(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
t.Run("mismatched blob and proof counts", func(t *testing.T) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Create one blob but proofs for two blobs
blobs := [][]byte{{}}
// Create proofs for 2 blobs worth of columns
cellProofs := make([][]byte, 2*numberOfColumns)
_, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
require.ErrorIs(t, err, peerdas.ErrBlobsCellsProofsMismatch)
})
t.Run("nominal", func(t *testing.T) {
const blobCount = 2
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Generate test blobs
_, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)
// Extract blobs and compute expected cells and proofs
blobs := make([][]byte, blobCount)
expectedCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
var wg errgroup.Group
for i := range blobCount {
blob := roBlobSidecars[i].Blob
blobs[i] = blob
wg.Go(func() error {
var kzgBlob kzg.Blob
count := copy(kzgBlob[:], blob)
require.Equal(t, len(kzgBlob), count)
cp, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
if err != nil {
return errors.Wrapf(err, "compute cells and kzg proofs for blob %d", i)
}
expectedCellsAndProofs[i] = cp
return nil
})
}
err := wg.Wait()
require.NoError(t, err)
// Flatten proofs
cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
for _, cp := range expectedCellsAndProofs {
for _, proof := range cp.Proofs {
cellProofs = append(cellProofs, proof[:])
}
}
// Test ComputeCellsAndProofs
actualCellsAndProofs, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
require.NoError(t, err)
require.Equal(t, blobCount, len(actualCellsAndProofs))
// Verify the results match expected
for i := range blobCount {
require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
// Compare cells
for j, expectedCell := range expectedCellsAndProofs[i].Cells {
require.Equal(t, expectedCell, actualCellsAndProofs[i].Cells[j])
}
// Compare proofs
for j, expectedProof := range expectedCellsAndProofs[i].Proofs {
require.Equal(t, expectedProof, actualCellsAndProofs[i].Proofs[j])
}
}
})
}
func TestComputeCellsAndProofsFromStructured(t *testing.T) {
t.Run("nil blob and proof", func(t *testing.T) {
_, err := peerdas.ComputeCellsAndProofsFromStructured([]*pb.BlobAndProofV2{nil})
require.ErrorIs(t, err, peerdas.ErrNilBlobAndProof)
})
t.Run("nominal", func(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
const blobCount = 2
// Generate test blobs
_, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)
// Extract blobs and compute expected cells and proofs
blobsAndProofs := make([]*pb.BlobAndProofV2, blobCount)
expectedCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
var wg errgroup.Group
for i := range blobCount {
blob := roBlobSidecars[i].Blob
wg.Go(func() error {
var kzgBlob kzg.Blob
count := copy(kzgBlob[:], blob)
require.Equal(t, len(kzgBlob), count)
cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
if err != nil {
return errors.Wrapf(err, "compute cells and kzg proofs for blob %d", i)
}
expectedCellsAndProofs[i] = cellsAndProofs
kzgProofs := make([][]byte, 0, len(cellsAndProofs.Proofs))
for _, proof := range cellsAndProofs.Proofs {
kzgProofs = append(kzgProofs, proof[:])
}
blobAndProof := &pb.BlobAndProofV2{
Blob: blob,
KzgProofs: kzgProofs,
}
blobsAndProofs[i] = blobAndProof
return nil
})
}
err = wg.Wait()
require.NoError(t, err)
// Test ComputeCellsAndProofs
actualCellsAndProofs, err := peerdas.ComputeCellsAndProofsFromStructured(blobsAndProofs)
require.NoError(t, err)
require.Equal(t, blobCount, len(actualCellsAndProofs))
// Verify the results match expected
for i := range blobCount {
require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
// Compare cells
for j, expectedCell := range expectedCellsAndProofs[i].Cells {
require.Equal(t, expectedCell, actualCellsAndProofs[i].Cells[j])
}
// Compare proofs
for j, expectedProof := range expectedCellsAndProofs[i].Proofs {
require.Equal(t, expectedProof, actualCellsAndProofs[i].Proofs[j])
}
}
})
}

View File

@@ -1,76 +1,12 @@
package peerdas
import (
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
beaconState "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
var (
ErrNilSignedBlockOrEmptyCellsAndProofs = errors.New("nil signed block or empty cells and proofs")
ErrSizeMismatch = errors.New("mismatch in the number of blob KZG commitments and cellsAndProofs")
ErrNotEnoughDataColumnSidecars = errors.New("not enough columns")
ErrDataColumnSidecarsNotSortedByIndex = errors.New("data column sidecars are not sorted by index")
)
var (
_ ConstructionPopulator = (*BlockReconstructionSource)(nil)
_ ConstructionPopulator = (*SidecarReconstructionSource)(nil)
)
const (
BlockType = "BeaconBlock"
SidecarType = "DataColumnSidecar"
)
type (
// ConstructionPopulator is an interface that can be satisfied by a type that can use data from a struct
// like a DataColumnSidecar or a BeaconBlock to set the fields in a data column sidecar that cannot
// be obtained from the engine api.
ConstructionPopulator interface {
Slot() primitives.Slot
Root() [fieldparams.RootLength]byte
ProposerIndex() primitives.ValidatorIndex
Commitments() ([][]byte, error)
Type() string
extract() (*blockInfo, error)
}
// BlockReconstructionSource is a ConstructionPopulator that uses a beacon block as the source of data
BlockReconstructionSource struct {
blocks.ROBlock
}
// SidecarReconstructionSource is a ConstructionPopulator that uses a data column sidecar as the source of data
SidecarReconstructionSource struct {
blocks.VerifiedRODataColumn
}
blockInfo struct {
signedBlockHeader *ethpb.SignedBeaconBlockHeader
kzgCommitments [][]byte
kzgInclusionProof [][]byte
}
)
// PopulateFromBlock creates a BlockReconstructionSource from a beacon block
func PopulateFromBlock(block blocks.ROBlock) *BlockReconstructionSource {
return &BlockReconstructionSource{ROBlock: block}
}
// PopulateFromSidecar creates a SidecarReconstructionSource from a data column sidecar
func PopulateFromSidecar(sidecar blocks.VerifiedRODataColumn) *SidecarReconstructionSource {
return &SidecarReconstructionSource{VerifiedRODataColumn: sidecar}
}
// ValidatorsCustodyRequirement returns the number of custody groups regarding the validator indices attached to the beacon node.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#validator-custody
func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validatorsIndex map[primitives.ValidatorIndex]bool) (uint64, error) {
@@ -92,158 +28,3 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil
}
// DataColumnSidecars, given ConstructionPopulator and the cells/proofs associated with each blob in the
// block, assembles sidecars which can be distributed to peers.
// This is an adapted version of
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars,
// which is designed to be used both when constructing sidecars from a block and from a sidecar, replacing
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block and
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_column_sidecar
func DataColumnSidecars(rows []kzg.CellsAndProofs, src ConstructionPopulator) ([]blocks.RODataColumn, error) {
if len(rows) == 0 {
return nil, nil
}
start := time.Now()
cells, proofs, err := rotateRowsToCols(rows, params.BeaconConfig().NumberOfColumns)
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
info, err := src.extract()
if err != nil {
return nil, errors.Wrap(err, "extract block info")
}
maxIdx := params.BeaconConfig().NumberOfColumns
roSidecars := make([]blocks.RODataColumn, 0, maxIdx)
for idx := range maxIdx {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],
KzgCommitments: info.kzgCommitments,
KzgProofs: proofs[idx],
SignedBlockHeader: info.signedBlockHeader,
KzgCommitmentsInclusionProof: info.kzgInclusionProof,
}
if len(sidecar.KzgCommitments) != len(sidecar.Column) || len(sidecar.KzgCommitments) != len(sidecar.KzgProofs) {
return nil, ErrSizeMismatch
}
roSidecar, err := blocks.NewRODataColumnWithRoot(sidecar, src.Root())
if err != nil {
return nil, errors.Wrap(err, "new ro data column")
}
roSidecars = append(roSidecars, roSidecar)
}
dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
return roSidecars, nil
}
// Slot returns the slot of the source
func (s *BlockReconstructionSource) Slot() primitives.Slot {
return s.Block().Slot()
}
// ProposerIndex returns the proposer index of the source
func (s *BlockReconstructionSource) ProposerIndex() primitives.ValidatorIndex {
return s.Block().ProposerIndex()
}
// Commitments returns the blob KZG commitments of the source
func (s *BlockReconstructionSource) Commitments() ([][]byte, error) {
c, err := s.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
return c, nil
}
// Type returns the type of the source
func (s *BlockReconstructionSource) Type() string {
return BlockType
}
// extract extracts the block information from the source
func (b *BlockReconstructionSource) extract() (*blockInfo, error) {
block := b.Block()
header, err := b.Header()
if err != nil {
return nil, errors.Wrap(err, "header")
}
commitments, err := block.Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "commitments")
}
inclusionProof, err := blocks.MerkleProofKZGCommitments(block.Body())
if err != nil {
return nil, errors.Wrap(err, "merkle proof kzg commitments")
}
info := &blockInfo{
signedBlockHeader: header,
kzgCommitments: commitments,
kzgInclusionProof: inclusionProof,
}
return info, nil
}
// rotateRowsToCols takes a 2D slice of cells and proofs, where the x is rows (blobs) and y is columns,
// and returns a 2D slice where x is columns and y is rows.
func rotateRowsToCols(rows []kzg.CellsAndProofs, numCols uint64) ([][][]byte, [][][]byte, error) {
if len(rows) == 0 {
return nil, nil, nil
}
cellCols := make([][][]byte, numCols)
proofCols := make([][][]byte, numCols)
for i, cp := range rows {
if uint64(len(cp.Cells)) != numCols {
return nil, nil, errors.Wrap(ErrNotEnoughDataColumnSidecars, "not enough cells")
}
if len(cp.Cells) != len(cp.Proofs) {
return nil, nil, errors.Wrap(ErrNotEnoughDataColumnSidecars, "not enough proofs")
}
for j := uint64(0); j < numCols; j++ {
if i == 0 {
cellCols[j] = make([][]byte, len(rows))
proofCols[j] = make([][]byte, len(rows))
}
cellCols[j][i] = cp.Cells[j][:]
proofCols[j][i] = cp.Proofs[j][:]
}
}
return cellCols, proofCols, nil
}
// Root returns the block root of the source
func (s *SidecarReconstructionSource) Root() [fieldparams.RootLength]byte {
return s.BlockRoot()
}
// Commitments returns the blob KZG commitments of the source
func (s *SidecarReconstructionSource) Commitments() ([][]byte, error) {
return s.KzgCommitments, nil
}
// Type returns the type of the source
func (s *SidecarReconstructionSource) Type() string {
return SidecarType
}
// extract extracts the block information from the source
func (s *SidecarReconstructionSource) extract() (*blockInfo, error) {
info := &blockInfo{
signedBlockHeader: s.SignedBlockHeader,
kzgCommitments: s.KzgCommitments,
kzgInclusionProof: s.KzgCommitmentsInclusionProof,
}
return info, nil
}

View File

@@ -3,15 +3,11 @@ package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestValidatorsCustodyRequirement(t *testing.T) {
@@ -57,218 +53,3 @@ func TestValidatorsCustodyRequirement(t *testing.T) {
})
}
}
func TestDataColumnSidecars(t *testing.T) {
t.Run("sizes mismatch", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs.
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, params.BeaconConfig().NumberOfColumns),
Proofs: make([]kzg.Proof, params.BeaconConfig().NumberOfColumns),
},
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
_, err = peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.ErrorIs(t, err, peerdas.ErrSizeMismatch)
})
t.Run("cells array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with insufficient cells for the number of columns.
// This simulates a scenario where cellsAndProofs has fewer cells than expected columns.
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, 10), // Only 10 cells
Proofs: make([]kzg.Proof, 10), // Only 10 proofs
},
}
// This should fail because the function will try to access columns up to NumberOfColumns
// but we only have 10 cells/proofs.
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
_, err = peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
})
t.Run("proofs array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, 5), // Only 5 proofs, less than columns
},
}
// This should fail when trying to access proof beyond index 4.
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
_, err = peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
require.ErrorContains(t, "not enough proofs", err)
})
t.Run("nominal", func(t *testing.T) {
// Create a Fulu block with blob commitments.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
commitment1 := make([]byte, 48)
commitment2 := make([]byte, 48)
// Set different values to distinguish commitments
commitment1[0] = 0x01
commitment2[0] = 0x02
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{commitment1, commitment2}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, numberOfColumns),
},
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, numberOfColumns),
},
}
// Set distinct values in cells and proofs for testing
for i := range numberOfColumns {
cellsAndProofs[0].Cells[i][0] = byte(i)
cellsAndProofs[0].Proofs[i][0] = byte(i)
cellsAndProofs[1].Cells[i][0] = byte(i + 128)
cellsAndProofs[1].Proofs[i][0] = byte(i + 128)
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.NoError(t, err)
require.NotNil(t, sidecars)
require.Equal(t, int(numberOfColumns), len(sidecars))
// Verify each sidecar has the expected structure
for i, sidecar := range sidecars {
require.Equal(t, uint64(i), sidecar.Index)
require.Equal(t, 2, len(sidecar.Column))
require.Equal(t, 2, len(sidecar.KzgCommitments))
require.Equal(t, 2, len(sidecar.KzgProofs))
// Verify commitments match what we set
require.DeepEqual(t, commitment1, sidecar.KzgCommitments[0])
require.DeepEqual(t, commitment2, sidecar.KzgCommitments[1])
// Verify column data comes from the correct cells
require.Equal(t, byte(i), sidecar.Column[0][0])
require.Equal(t, byte(i+128), sidecar.Column[1][0])
// Verify proofs come from the correct proofs
require.Equal(t, byte(i), sidecar.KzgProofs[0][0])
require.Equal(t, byte(i+128), sidecar.KzgProofs[1][0])
}
})
}
func TestReconstructionSource(t *testing.T) {
// Create a Fulu block with blob commitments.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
commitment1 := make([]byte, 48)
commitment2 := make([]byte, 48)
// Set different values to distinguish commitments
commitment1[0] = 0x01
commitment2[0] = 0x02
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{commitment1, commitment2}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, numberOfColumns),
},
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, numberOfColumns),
},
}
// Set distinct values in cells and proofs for testing
for i := range numberOfColumns {
cellsAndProofs[0].Cells[i][0] = byte(i)
cellsAndProofs[0].Proofs[i][0] = byte(i)
cellsAndProofs[1].Cells[i][0] = byte(i + 128)
cellsAndProofs[1].Proofs[i][0] = byte(i + 128)
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.NoError(t, err)
require.NotNil(t, sidecars)
require.Equal(t, int(numberOfColumns), len(sidecars))
t.Run("from block", func(t *testing.T) {
src := peerdas.PopulateFromBlock(rob)
require.Equal(t, rob.Block().Slot(), src.Slot())
require.Equal(t, rob.Root(), src.Root())
require.Equal(t, rob.Block().ProposerIndex(), src.ProposerIndex())
commitments, err := src.Commitments()
require.NoError(t, err)
require.Equal(t, 2, len(commitments))
require.DeepEqual(t, commitment1, commitments[0])
require.DeepEqual(t, commitment2, commitments[1])
require.Equal(t, peerdas.BlockType, src.Type())
})
t.Run("from sidecar", func(t *testing.T) {
referenceSidecar := blocks.NewVerifiedRODataColumn(sidecars[0])
src := peerdas.PopulateFromSidecar(referenceSidecar)
require.Equal(t, referenceSidecar.Slot(), src.Slot())
require.Equal(t, referenceSidecar.BlockRoot(), src.Root())
require.Equal(t, referenceSidecar.ProposerIndex(), src.ProposerIndex())
commitments, err := src.Commitments()
require.NoError(t, err)
require.Equal(t, 2, len(commitments))
require.DeepEqual(t, commitment1, commitments[0])
require.DeepEqual(t, commitment2, commitments[1])
require.Equal(t, peerdas.SidecarType, src.Type())
})
}

View File

@@ -8,7 +8,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
b "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition/interop"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -375,18 +374,15 @@ func ProcessBlockForStateRoot(
}
// This calls altair block operations.
func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
func altairOperations(
ctx context.Context,
st state.BeaconState,
beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
st, err := b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), v.SlashValidator)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), v.SlashValidator)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
@@ -397,7 +393,7 @@ func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock int
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
@@ -405,18 +401,15 @@ func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock int
}
// This calls phase 0 block operations.
func phase0Operations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
func phase0Operations(
ctx context.Context,
st state.BeaconState,
beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
st, err := b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), v.SlashValidator)
if err != nil {
return nil, errors.Wrap(err, "could not process block proposer slashings")
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), v.SlashValidator)
if err != nil {
return nil, errors.Wrap(err, "could not process block attester slashings")
}
@@ -427,9 +420,5 @@ func phase0Operations(ctx context.Context, st state.BeaconState, beaconBlock int
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process deposits")
}
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
return st, nil
return b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())
}

View File

@@ -13,55 +13,34 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/math"
mathutil "github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
// ExitInfo provides information about validator exits in the state.
type ExitInfo struct {
HighestExitEpoch primitives.Epoch
Churn uint64
TotalActiveBalance uint64
}
// ErrValidatorAlreadyExited is an error raised when trying to process an exit of
// an already exited validator
var ErrValidatorAlreadyExited = errors.New("validator already exited")
// ExitInformation returns information about validator exits.
func ExitInformation(s state.BeaconState) *ExitInfo {
exitInfo := &ExitInfo{}
// MaxExitEpochAndChurn returns the maximum non-FAR_FUTURE_EPOCH exit
// epoch and the number of validators exiting at that epoch.
func MaxExitEpochAndChurn(s state.BeaconState) (maxExitEpoch primitives.Epoch, churn uint64) {
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
currentEpoch := slots.ToEpoch(s.Slot())
totalActiveBalance := uint64(0)
err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
e := val.ExitEpoch()
if e != farFutureEpoch {
if e > exitInfo.HighestExitEpoch {
exitInfo.HighestExitEpoch = e
exitInfo.Churn = 1
} else if e == exitInfo.HighestExitEpoch {
exitInfo.Churn++
if e > maxExitEpoch {
maxExitEpoch = e
churn = 1
} else if e == maxExitEpoch {
churn++
}
}
// Calculate total active balance in the same loop
if helpers.IsActiveValidatorUsingTrie(val, currentEpoch) {
totalActiveBalance += val.EffectiveBalance()
}
return nil
})
_ = err
// Apply minimum balance as per spec
exitInfo.TotalActiveBalance = mathutil.Max(params.BeaconConfig().EffectiveBalanceIncrement, totalActiveBalance)
return exitInfo
return
}
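For context, the pair returned here is meant to be passed straight into `InitiateValidatorExit`, exactly as `SlashValidator` does below; a hypothetical caller-side sketch (`exitValidator` is not part of this change):
// exitValidator sketches the intended pre-Electra call flow.
func exitValidator(ctx context.Context, st state.BeaconState, idx primitives.ValidatorIndex) (state.BeaconState, error) {
	maxExitEpoch, churn := MaxExitEpochAndChurn(st)
	st, _, err := InitiateValidatorExit(ctx, st, idx, maxExitEpoch, churn)
	if err != nil && !errors.Is(err, ErrValidatorAlreadyExited) {
		return nil, errors.Wrapf(err, "could not initiate validator %d exit", idx)
	}
	return st, nil
}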
// InitiateValidatorExit takes in validator index and updates
@@ -85,117 +64,59 @@ func ExitInformation(s state.BeaconState) *ExitInfo {
// # Set validator exit epoch and withdrawable epoch
// validator.exit_epoch = exit_queue_epoch
// validator.withdrawable_epoch = Epoch(validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
func InitiateValidatorExit(
ctx context.Context,
s state.BeaconState,
idx primitives.ValidatorIndex,
exitInfo *ExitInfo,
) (state.BeaconState, error) {
func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex, exitQueueEpoch primitives.Epoch, churn uint64) (state.BeaconState, primitives.Epoch, error) {
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, err
return nil, 0, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, ErrValidatorAlreadyExited
return s, validator.ExitEpoch, ErrValidatorAlreadyExited
}
// Compute exit queue epoch.
if s.Version() < version.Electra {
if err = initiateValidatorExitPreElectra(ctx, s, exitInfo); err != nil {
return nil, err
// Relevant spec code from phase0:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])
// exit_queue_churn = len([v for v in state.validators if v.exit_epoch == exit_queue_epoch])
// if exit_queue_churn >= get_validator_churn_limit(state):
// exit_queue_epoch += Epoch(1)
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitableEpoch > exitQueueEpoch {
exitQueueEpoch = exitableEpoch
churn = 0
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return nil, 0, errors.Wrap(err, "could not get active validator count")
}
currentChurn := helpers.ValidatorExitChurnLimit(activeValidatorCount)
if churn >= currentChurn {
exitQueueEpoch, err = exitQueueEpoch.SafeAdd(1)
if err != nil {
return nil, 0, err
}
}
} else {
// [Modified in Electra:EIP7251]
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, validator.effective_balance)
var err error
exitInfo.HighestExitEpoch, err = s.ExitEpochAndUpdateChurn(primitives.Gwei(validator.EffectiveBalance))
exitQueueEpoch, err = s.ExitEpochAndUpdateChurn(primitives.Gwei(validator.EffectiveBalance))
if err != nil {
return nil, err
return nil, 0, err
}
}
validator.ExitEpoch = exitInfo.HighestExitEpoch
validator.WithdrawableEpoch, err = exitInfo.HighestExitEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
validator.ExitEpoch = exitQueueEpoch
validator.WithdrawableEpoch, err = exitQueueEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, err
return nil, 0, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, err
return nil, 0, err
}
return s, nil
}
// InitiateValidatorExitForTotalBal has the same functionality as InitiateValidatorExit,
// the only difference being how total active balance is obtained. In InitiateValidatorExit
// it is calculated inside the function and in InitiateValidatorExitForTotalBal it's a
// function argument.
func InitiateValidatorExitForTotalBal(
ctx context.Context,
s state.BeaconState,
idx primitives.ValidatorIndex,
exitInfo *ExitInfo,
totalActiveBalance primitives.Gwei,
) (state.BeaconState, error) {
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, ErrValidatorAlreadyExited
}
// Compute exit queue epoch.
if s.Version() < version.Electra {
if err = initiateValidatorExitPreElectra(ctx, s, exitInfo); err != nil {
return nil, err
}
} else {
// [Modified in Electra:EIP7251]
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, validator.effective_balance)
var err error
exitInfo.HighestExitEpoch, err = s.ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance, primitives.Gwei(validator.EffectiveBalance))
if err != nil {
return nil, err
}
}
validator.ExitEpoch = exitInfo.HighestExitEpoch
validator.WithdrawableEpoch, err = exitInfo.HighestExitEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, err
}
return s, nil
}
func initiateValidatorExitPreElectra(ctx context.Context, s state.BeaconState, exitInfo *ExitInfo) error {
// Relevant spec code from phase0:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])
// exit_queue_churn = len([v for v in state.validators if v.exit_epoch == exit_queue_epoch])
// if exit_queue_churn >= get_validator_churn_limit(state):
// exit_queue_epoch += Epoch(1)
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitableEpoch > exitInfo.HighestExitEpoch {
exitInfo.HighestExitEpoch = exitableEpoch
exitInfo.Churn = 0
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return errors.Wrap(err, "could not get active validator count")
}
currentChurn := helpers.ValidatorExitChurnLimit(activeValidatorCount)
if exitInfo.Churn >= currentChurn {
exitInfo.HighestExitEpoch, err = exitInfo.HighestExitEpoch.SafeAdd(1)
if err != nil {
return err
}
exitInfo.Churn = 1
} else {
exitInfo.Churn = exitInfo.Churn + 1
}
return nil
return s, exitQueueEpoch, nil
}
// SlashValidator slashes the malicious validator's balance and awards
@@ -231,12 +152,9 @@ func initiateValidatorExitPreElectra(ctx context.Context, s state.BeaconState, e
func SlashValidator(
ctx context.Context,
s state.BeaconState,
slashedIdx primitives.ValidatorIndex,
exitInfo *ExitInfo,
) (state.BeaconState, error) {
var err error
s, err = InitiateValidatorExitForTotalBal(ctx, s, slashedIdx, exitInfo, primitives.Gwei(exitInfo.TotalActiveBalance))
slashedIdx primitives.ValidatorIndex) (state.BeaconState, error) {
maxExitEpoch, churn := MaxExitEpochAndChurn(s)
s, _, err := InitiateValidatorExit(ctx, s, slashedIdx, maxExitEpoch, churn)
if err != nil && !errors.Is(err, ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate validator %d exit", slashedIdx)
}

View File

@@ -49,11 +49,9 @@ func TestInitiateValidatorExit_AlreadyExited(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
exitInfo := &validators.ExitInfo{HighestExitEpoch: 199, Churn: 1}
newState, err := validators.InitiateValidatorExit(t.Context(), state, 0, exitInfo)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, 0, 199, 1)
require.ErrorIs(t, err, validators.ErrValidatorAlreadyExited)
assert.Equal(t, primitives.Epoch(199), exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(1), exitInfo.Churn)
require.Equal(t, exitEpoch, epoch)
v, err := newState.ValidatorAtIndex(0)
require.NoError(t, err)
assert.Equal(t, exitEpoch, v.ExitEpoch, "Already exited")
@@ -70,11 +68,9 @@ func TestInitiateValidatorExit_ProperExit(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
exitInfo := &validators.ExitInfo{HighestExitEpoch: exitedEpoch + 2, Churn: 1}
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitInfo)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitedEpoch+2, 1)
require.NoError(t, err)
assert.Equal(t, exitedEpoch+2, exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(2), exitInfo.Churn)
require.Equal(t, exitedEpoch+2, epoch)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, exitedEpoch+2, v.ExitEpoch, "Exit epoch was not the highest")
@@ -92,11 +88,9 @@ func TestInitiateValidatorExit_ChurnOverflow(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
exitInfo := &validators.ExitInfo{HighestExitEpoch: exitedEpoch + 2, Churn: 4}
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitInfo)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitedEpoch+2, 4)
require.NoError(t, err)
assert.Equal(t, exitedEpoch+3, exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(1), exitInfo.Churn)
require.Equal(t, exitedEpoch+3, epoch)
// Because of exit queue overflow,
// validator who init exited has to wait one more epoch.
@@ -116,8 +110,7 @@ func TestInitiateValidatorExit_WithdrawalOverflows(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
exitInfo := &validators.ExitInfo{HighestExitEpoch: params.BeaconConfig().FarFutureEpoch - 1, Churn: 1}
_, err = validators.InitiateValidatorExit(t.Context(), state, 1, exitInfo)
_, _, err = validators.InitiateValidatorExit(t.Context(), state, 1, params.BeaconConfig().FarFutureEpoch-1, 1)
require.ErrorContains(t, "addition overflows", err)
}
@@ -153,11 +146,12 @@ func TestInitiateValidatorExit_ProperExit_Electra(t *testing.T) {
require.NoError(t, err)
require.Equal(t, primitives.Gwei(0), ebtc)
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, &validators.ExitInfo{}) // exit info is not used in electra
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, 0, 0) // exitQueueEpoch and churn are not used in electra
require.NoError(t, err)
// Expect that the exit epoch is the next available epoch with max seed lookahead.
want := helpers.ActivationExitEpoch(exitedEpoch + 1)
require.Equal(t, want, epoch)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, want, v.ExitEpoch, "Exit epoch was not the highest")
@@ -196,7 +190,7 @@ func TestSlashValidator_OK(t *testing.T) {
require.NoError(t, err, "Could not get proposer")
proposerBal, err := state.BalanceAtIndex(proposer)
require.NoError(t, err)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx, validators.ExitInformation(state))
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx)
require.NoError(t, err, "Could not slash validator")
require.Equal(t, true, slashedState.Version() == version.Phase0)
@@ -250,7 +244,7 @@ func TestSlashValidator_Electra(t *testing.T) {
require.NoError(t, err, "Could not get proposer")
proposerBal, err := state.BalanceAtIndex(proposer)
require.NoError(t, err)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx, validators.ExitInformation(state))
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx)
require.NoError(t, err, "Could not slash validator")
require.Equal(t, true, slashedState.Version() == version.Electra)
@@ -511,8 +505,8 @@ func TestValidatorMaxExitEpochAndChurn(t *testing.T) {
for _, tt := range tests {
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
exitInfo := validators.ExitInformation(s)
require.Equal(t, tt.wantedEpoch, exitInfo.HighestExitEpoch)
require.Equal(t, tt.wantedChurn, exitInfo.Churn)
epoch, churn := validators.MaxExitEpochAndChurn(s)
require.Equal(t, tt.wantedEpoch, epoch)
require.Equal(t, tt.wantedChurn, churn)
}
}

View File

@@ -122,18 +122,18 @@ type BlobStorage struct {
func (bs *BlobStorage) WarmCache() {
start := time.Now()
if bs.layoutName == LayoutNameFlat {
log.Info("Blob filesystem cache warm-up started. This may take a few minutes")
log.Info("Blob filesystem cache warm-up started. This may take a few minutes.")
} else {
log.Info("Blob filesystem cache warm-up started")
log.Info("Blob filesystem cache warm-up started.")
}
if err := warmCache(bs.layout, bs.cache); err != nil {
log.WithError(err).Error("Error encountered while warming up blob filesystem cache")
log.WithError(err).Error("Error encountered while warming up blob filesystem cache.")
}
if err := bs.migrateLayouts(); err != nil {
log.WithError(err).Error("Error encountered while migrating blob storage")
log.WithError(err).Error("Error encountered while migrating blob storage.")
}
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete")
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete.")
}
// If any blob storage directories are found for layouts besides the configured layout, migrate them.

View File

@@ -259,6 +259,7 @@ func (dcs *DataColumnStorage) Summary(root [fieldparams.RootLength]byte) DataCol
}
// Save saves data column sidecars into the database and asynchronously performs pruning.
// The returned channel is closed when the pruning is complete.
func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataColumn) error {
startTime := time.Now()

View File

@@ -116,43 +116,19 @@ func (l *periodicEpochLayout) pruneBefore(before primitives.Epoch) (*pruneSummar
}
// Roll up summaries and clean up per-epoch directories.
rollup := &pruneSummary{}
// Track which period directories might be empty after epoch removal
periodsToCheck := make(map[string]struct{})
for epoch, sum := range sums {
rollup.blobsPruned += sum.blobsPruned
rollup.failedRemovals = append(rollup.failedRemovals, sum.failedRemovals...)
rmdir := l.epochDir(epoch)
periodDir := l.periodDir(epoch)
if len(sum.failedRemovals) == 0 {
if err := l.fs.Remove(rmdir); err != nil {
log.WithField("dir", rmdir).WithError(err).Error("Failed to remove epoch directory while pruning")
} else {
periodsToCheck[periodDir] = struct{}{}
}
} else {
log.WithField("dir", rmdir).WithField("numFailed", len(sum.failedRemovals)).WithError(err).Error("Unable to remove epoch directory due to pruning failures")
}
}
// Clean up empty period directories
for periodDir := range periodsToCheck {
entries, err := afero.ReadDir(l.fs, periodDir)
if err != nil {
log.WithField("dir", periodDir).WithError(err).Debug("Failed to read period directory contents")
continue
}
// Only attempt to remove if directory is empty
if len(entries) == 0 {
if err := l.fs.Remove(periodDir); err != nil {
log.WithField("dir", periodDir).WithError(err).Error("Failed to remove empty period directory")
}
}
}
return rollup, nil
}

View File

@@ -4,7 +4,6 @@ import (
"encoding/binary"
"os"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -196,48 +195,3 @@ func TestLayoutPruneBefore(t *testing.T) {
})
}
}
func TestLayoutByEpochPruneBefore(t *testing.T) {
roots := testRoots(10)
cases := []struct {
name string
pruned []testIdent
remain []testIdent
err error
sum pruneSummary
}{
{
name: "single epoch period cleanup",
pruned: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[0], epoch: 367076, index: 0}},
},
remain: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[1], epoch: 371176, index: 0}}, // Different period
},
sum: pruneSummary{blobsPruned: 1},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageAndFs(t, WithLayout(LayoutNameByEpoch))
pruned := testSetupBlobIdentPaths(t, fs, bs, c.pruned)
remain := testSetupBlobIdentPaths(t, fs, bs, c.remain)
time.Sleep(1 * time.Second)
for _, id := range pruned {
_, err := fs.Stat(bs.layout.sszPath(id))
require.Equal(t, true, os.IsNotExist(err))
dirs := bs.layout.blockParentDirs(id)
for i := len(dirs) - 1; i > 0; i-- {
_, err = fs.Stat(dirs[i])
require.Equal(t, true, os.IsNotExist(err))
}
}
for _, id := range remain {
_, err := fs.Stat(bs.layout.sszPath(id))
require.NoError(t, err)
}
})
}
}

View File

@@ -25,7 +25,6 @@ go_library(
"//testing/spectest:__subpackages__",
],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/altair:go_default_library",
@@ -104,7 +103,6 @@ go_test(
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",

View File

@@ -7,7 +7,6 @@ import (
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
@@ -102,7 +101,10 @@ const (
defaultEngineTimeout = time.Second
)
var errInvalidPayloadBodyResponse = errors.New("engine api payload body response is invalid")
var (
errInvalidPayloadBodyResponse = errors.New("engine api payload body response is invalid")
errMissingBlobsAndProofsFromEL = errors.New("engine api payload body response is missing blobs and proofs")
)
// ForkchoiceUpdatedResponse is the response kind received by the
// engine_forkchoiceUpdatedV1 endpoint.
@@ -121,7 +123,7 @@ type Reconstructor interface {
ctx context.Context, blindedBlocks []interfaces.ReadOnlySignedBeaconBlock,
) ([]interfaces.SignedBeaconBlock, error)
ReconstructBlobSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte, hi func(uint64) bool) ([]blocks.VerifiedROBlob, error)
ConstructDataColumnSidecars(ctx context.Context, populator peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, error)
ReconstructDataColumnSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte) ([]blocks.VerifiedRODataColumn, error)
}
// EngineCaller defines a client that can interact with an Ethereum
@@ -649,40 +651,22 @@ func (s *Service) ReconstructBlobSidecars(ctx context.Context, block interfaces.
return verifiedBlobs, nil
}
func (s *Service) ConstructDataColumnSidecars(ctx context.Context, populator peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, error) {
root := populator.Root()
// ReconstructDataColumnSidecars reconstructs the verified data column sidecars for a given beacon block.
// It retrieves the KZG commitments from the block body, fetches the associated blobs and cell proofs from the EL,
// and constructs the corresponding verified read-only data column sidecars.
func (s *Service) ReconstructDataColumnSidecars(ctx context.Context, signedROBlock interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte) ([]blocks.VerifiedRODataColumn, error) {
block := signedROBlock.Block()
// Fetch cells and proofs from the execution client using the KZG commitments from the sidecar.
commitments, err := populator.Commitments()
log := log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", blockRoot),
"slot": block.Slot(),
})
kzgCommitments, err := block.Body().BlobKzgCommitments()
if err != nil {
return nil, wrapWithBlockRoot(err, root, "commitments")
return nil, wrapWithBlockRoot(err, blockRoot, "blob KZG commitments")
}
cellsAndProofs, err := s.fetchCellsAndProofsFromExecution(ctx, commitments)
if err != nil {
return nil, wrapWithBlockRoot(err, root, "fetch cells and proofs from execution client")
}
// Return early if nothing is returned from the EL.
if len(cellsAndProofs) == 0 {
return nil, nil
}
// Construct data column sidecars from the signed block and cells and proofs.
roSidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, populator)
if err != nil {
return nil, wrapWithBlockRoot(err, populator.Root(), "data column sidecars from cells and proofs")
}
// Upgrade the sidecars to verified sidecars.
// We trust the execution layer we are connected to, so we can upgrade the sidecar into a verified one.
verifiedROSidecars := upgradeSidecarsToVerifiedSidecars(roSidecars)
return verifiedROSidecars, nil
}
// fetchCellsAndProofsFromExecution fetches cells and proofs from the execution client (using engine_getBlobsV2 execution API method)
func (s *Service) fetchCellsAndProofsFromExecution(ctx context.Context, kzgCommitments [][]byte) ([]kzg.CellsAndProofs, error) {
// Collect KZG hashes for all blobs.
versionedHashes := make([]common.Hash, 0, len(kzgCommitments))
for _, commitment := range kzgCommitments {
@@ -693,32 +677,47 @@ func (s *Service) fetchCellsAndProofsFromExecution(ctx context.Context, kzgCommi
// Fetch all blobsAndCellsProofs from the execution client.
blobAndProofV2s, err := s.GetBlobsV2(ctx, versionedHashes)
if err != nil {
return nil, errors.Wrapf(err, "get blobs V2")
return nil, wrapWithBlockRoot(err, blockRoot, "get blobs V2")
}
// Return early if nothing is returned from the EL.
if len(blobAndProofV2s) == 0 {
log.Debug("No blobs returned from execution client")
return nil, nil
}
// Compute cells and proofs from the blobs and cell proofs.
cellsAndProofs, err := peerdas.ComputeCellsAndProofsFromStructured(blobAndProofV2s)
if err != nil {
return nil, errors.Wrap(err, "compute cells and proofs")
// Extract the blobs and proofs from the blobAndProofV2s.
blobs, cellProofs := make([][]byte, 0, len(blobAndProofV2s)), make([][]byte, 0, len(blobAndProofV2s))
for _, blobsAndProofs := range blobAndProofV2s {
if blobsAndProofs == nil {
return nil, wrapWithBlockRoot(errMissingBlobsAndProofsFromEL, blockRoot, "")
}
blobs, cellProofs = append(blobs, blobsAndProofs.Blob), append(cellProofs, blobsAndProofs.KzgProofs...)
}
return cellsAndProofs, nil
}
// Construct the data column sidecars from the blobs and cell proofs provided by the execution client.
dataColumnSidecars, err := peerdas.ConstructDataColumnSidecars(signedROBlock, blobs, cellProofs)
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "construct data column sidecars")
}
// upgradeSidecarsToVerifiedSidecars upgrades a list of data column sidecars into verified data column sidecars.
func upgradeSidecarsToVerifiedSidecars(roSidecars []blocks.RODataColumn) []blocks.VerifiedRODataColumn {
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(roSidecars))
for _, roSidecar := range roSidecars {
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roSidecar)
// Finally, construct verified RO data column sidecars.
// We trust the execution layer we are connected to, so we can upgrade the read only data column sidecar into a verified one.
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(dataColumnSidecars))
for _, dataColumnSidecar := range dataColumnSidecars {
roDataColumn, err := blocks.NewRODataColumnWithRoot(dataColumnSidecar, blockRoot)
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "new read-only data column with root")
}
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRODataColumns = append(verifiedRODataColumns, verifiedRODataColumn)
}
return verifiedRODataColumns
log.Debug("Data columns successfully reconstructed from the execution client")
return verifiedRODataColumns, nil
}
func fullPayloadFromPayloadBody(
@@ -1010,6 +1009,6 @@ func toBlockNumArg(number *big.Int) string {
}
// wrapWithBlockRoot returns a new error with the given block root.
func wrapWithBlockRoot(err error, blockRoot [fieldparams.RootLength]byte, message string) error {
func wrapWithBlockRoot(err error, blockRoot [32]byte, message string) error {
return errors.Wrap(err, fmt.Sprintf("%s for block %#x", message, blockRoot))
}
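A minimal caller sketch (not part of this diff) for the new ReconstructDataColumnSidecars entry point, using only the Reconstructor interface and behaviour shown above; the follow-up storage/broadcast step is left as an assumption:
var rec Reconstructor // e.g. the execution Service
sidecars, err := rec.ReconstructDataColumnSidecars(ctx, signedROBlock, blockRoot)
if err != nil {
	return errors.Wrap(err, "reconstruct data column sidecars")
}
if len(sidecars) == 0 {
	// Per the early return above, an empty slice means the execution client had none of the
	// requested blobs; there is nothing to store or broadcast.
	return nil
}
// Saving the verified sidecars to the data column storage or broadcasting them is up to the caller.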

View File

@@ -14,7 +14,6 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
mocks "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
@@ -2557,7 +2556,7 @@ func TestReconstructBlobSidecars(t *testing.T) {
})
}
func TestConstructDataColumnSidecars(t *testing.T) {
func TestReconstructDataColumnSidecars(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
@@ -2581,14 +2580,11 @@ func TestConstructDataColumnSidecars(t *testing.T) {
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
roBlock, err := blocks.NewROBlockWithRoot(sb, r)
require.NoError(t, err)
ctx := context.Background()
t.Run("GetBlobsV2 is not supported", func(t *testing.T) {
_, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
require.ErrorContains(t, "engine_getBlobsV2 is not supported", err)
_, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.ErrorContains(t, "get blobs V2 for block", err)
})
t.Run("nothing received", func(t *testing.T) {
@@ -2598,7 +2594,7 @@ func TestConstructDataColumnSidecars(t *testing.T) {
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
dataColumns, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.NoError(t, err)
require.Equal(t, 0, len(dataColumns))
})
@@ -2611,22 +2607,23 @@ func TestConstructDataColumnSidecars(t *testing.T) {
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
dataColumns, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.NoError(t, err)
require.Equal(t, 128, len(dataColumns))
})
// t.Run("missing some blobs", func(t *testing.T) {
// blobMasks := []bool{false, true, true, true, true, true}
// srv := createBlobServerV2(t, 6, blobMasks)
// defer srv.Close()
t.Run("missing some blobs", func(t *testing.T) {
blobMasks := []bool{false, true, true, true, true, true}
srv := createBlobServerV2(t, 6, blobMasks)
defer srv.Close()
// rpcClient, client := setupRpcClientV2(t, srv.URL, client)
// defer rpcClient.Close()
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
// _, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
// require.ErrorContains(t, "fetch cells and proofs from execution client", err)
// })
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.ErrorContains(t, errMissingBlobsAndProofsFromEL.Error(), err)
require.Equal(t, 0, len(dataColumns))
})
}
func createRandomKzgCommitments(t *testing.T, num int) [][]byte {

View File

@@ -14,7 +14,6 @@ go_library(
],
deps = [
"//async/event:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/execution/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",

View File

@@ -4,7 +4,6 @@ import (
"context"
"math/big"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -117,8 +116,7 @@ func (e *EngineClient) ReconstructBlobSidecars(context.Context, interfaces.ReadO
return e.BlobSidecars, e.ErrorBlobSidecars
}
// ConstructDataColumnSidecars is a mock implementation of the ConstructDataColumnSidecars method.
func (e *EngineClient) ConstructDataColumnSidecars(context.Context, peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, error) {
func (e *EngineClient) ReconstructDataColumnSidecars(context.Context, interfaces.ReadOnlySignedBeaconBlock, [fieldparams.RootLength]byte) ([]blocks.VerifiedRODataColumn, error) {
return e.DataColumnSidecars, e.ErrorDataColumnSidecars
}

View File

@@ -463,20 +463,22 @@ func (f *ForkChoice) CommonAncestor(ctx context.Context, r1 [32]byte, r2 [32]byt
}
// InsertChain inserts all nodes corresponding to blocks in the slice
// `blocks`. This slice must be ordered in increasing slot order and
// each consecutive entry must be a child of the previous one.
// The parent of the first block in this list must already be present in forkchoice.
// `blocks`. This slice must be ordered from child to parent, that is,
// blocks[i+1] is the parent of blocks[i]. Every block except the first one
// (the one with the highest slot number) is inserted. The parent of the last
// block in the slice is assumed to already be present in the forkchoice store.
func (f *ForkChoice) InsertChain(ctx context.Context, chain []*forkchoicetypes.BlockAndCheckpoints) error {
if len(chain) == 0 {
return nil
}
for _, bcp := range chain {
for i := len(chain) - 1; i > 0; i-- {
if _, err := f.store.insert(ctx,
bcp.Block,
bcp.JustifiedCheckpoint.Epoch, bcp.FinalizedCheckpoint.Epoch); err != nil {
chain[i].Block,
chain[i].JustifiedCheckpoint.Epoch, chain[i].FinalizedCheckpoint.Epoch); err != nil {
return err
}
if err := f.updateCheckpoints(ctx, bcp.JustifiedCheckpoint, bcp.FinalizedCheckpoint); err != nil {
if err := f.updateCheckpoints(ctx, chain[i].JustifiedCheckpoint, chain[i].FinalizedCheckpoint); err != nil {
return err
}
}
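For illustration, a minimal sketch (not part of this diff) of preparing the slice in the order the new contract expects, mirroring the reversal done in the test below; fetched is a hypothetical slice ordered oldest to newest:
chain := make([]*forkchoicetypes.BlockAndCheckpoints, len(fetched))
for i, bcp := range fetched {
	chain[len(fetched)-1-i] = bcp // chain[0] ends up being the block with the highest slot
}
// InsertChain skips chain[0] itself and assumes the parent of the last entry is already
// present in the forkchoice store.
if err := f.InsertChain(ctx, chain); err != nil {
	return err
}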

View File

@@ -630,15 +630,14 @@ func TestStore_InsertChain(t *testing.T) {
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: params.BeaconConfig().ZeroHash[:]},
})
}
// InsertChain now expects blocks in increasing slot order
require.NoError(t, f.InsertChain(t.Context(), blks))
args := make([]*forkchoicetypes.BlockAndCheckpoints, 10)
for i := 0; i < len(blks); i++ {
args[i] = blks[10-i-1]
}
require.NoError(t, f.InsertChain(t.Context(), args))
// Test partial insertion: first insert the foundation blocks, then a subset
f = setup(1, 1)
// Insert first 2 blocks to establish a chain from genesis
require.NoError(t, f.InsertChain(t.Context(), blks[:2]))
// Then insert the remaining blocks
require.NoError(t, f.InsertChain(t.Context(), blks[2:]))
require.NoError(t, f.InsertChain(t.Context(), args[2:]))
}
func TestForkChoice_UpdateCheckpoints(t *testing.T) {

View File

@@ -1,26 +0,0 @@
package light_client
import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
)
// cache tracks LC data over the non finalized chain for different branches.
type cache struct {
items map[[32]byte]*cacheItem
}
// cacheItem represents the LC data for a block. It tracks the best update and finality update seen in that branch.
type cacheItem struct {
parent *cacheItem // parent item in the cache, can be nil
period uint64 // sync committee period
slot primitives.Slot // slot of the signature block
bestUpdate interfaces.LightClientUpdate
bestFinalityUpdate interfaces.LightClientFinalityUpdate
}
func newLightClientCache() *cache {
return &cache{
items: make(map[[32]byte]*cacheItem),
}
}

View File

@@ -1,24 +0,0 @@
package light_client
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestLCCache(t *testing.T) {
lcCache := newLightClientCache()
require.NotNil(t, lcCache)
item := &cacheItem{
period: 5,
bestUpdate: nil,
bestFinalityUpdate: nil,
}
blkRoot := [32]byte{4, 5, 6}
lcCache.items[blkRoot] = item
require.Equal(t, item, lcCache.items[blkRoot], "Expected to find the item in the cache")
}

View File

@@ -1,383 +0,0 @@
package light_client
import (
"context"
"sync"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")
type Store struct {
mu sync.RWMutex
beaconDB iface.HeadAccessDatabase
lastFinalityUpdate interfaces.LightClientFinalityUpdate // tracks the best finality update seen so far
lastOptimisticUpdate interfaces.LightClientOptimisticUpdate // tracks the best optimistic update seen so far
p2p p2p.Accessor
stateFeed event.SubscriberSender
cache *cache // non finality cache
}
func NewLightClientStore(p p2p.Accessor, e event.SubscriberSender, db iface.HeadAccessDatabase) *Store {
return &Store{
beaconDB: db,
p2p: p,
stateFeed: e,
cache: newLightClientCache(),
}
}
func (s *Store) SaveLCData(ctx context.Context,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
finalizedBlock interfaces.ReadOnlySignedBeaconBlock,
headBlockRoot [32]byte) error {
s.mu.Lock()
defer s.mu.Unlock()
// compute required data
update, err := NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return errors.Wrapf(err, "failed to create light client update")
}
finalityUpdate, err := NewLightClientFinalityUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return errors.Wrapf(err, "failed to create light client finality update")
}
optimisticUpdate, err := NewLightClientOptimisticUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock)
if err != nil {
return errors.Wrapf(err, "failed to create light client optimistic update")
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(update.AttestedHeader().Beacon().Slot))
blockRoot, err := attestedBlock.Block().HashTreeRoot()
if err != nil {
return errors.Wrapf(err, "failed to compute attested block root")
}
parentRoot := [32]byte(update.AttestedHeader().Beacon().ParentRoot)
signatureBlockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return errors.Wrapf(err, "failed to compute signature block root")
}
newBlockIsHead := signatureBlockRoot == headBlockRoot
// create the new cache item
newCacheItem := &cacheItem{
period: period,
slot: attestedBlock.Block().Slot(),
}
// check if parent exists in cache
parentItem, ok := s.cache.items[parentRoot]
if ok {
newCacheItem.parent = parentItem
} else {
// if not, create an item for the parent; we don't need to save it, since it only holds the accumulated best update and is used for comparison
bestUpdateSoFar, err := s.beaconDB.LightClientUpdate(ctx, period)
if err != nil {
return errors.Wrapf(err, "could not get best light client update for period %d", period)
}
parentItem = &cacheItem{
period: period,
bestUpdate: bestUpdateSoFar,
bestFinalityUpdate: s.lastFinalityUpdate,
}
}
// if at a period boundary, no need to compare data, just save new ones
if parentItem.period != period {
newCacheItem.bestUpdate = update
newCacheItem.bestFinalityUpdate = finalityUpdate
s.cache.items[blockRoot] = newCacheItem
s.setLastOptimisticUpdate(optimisticUpdate, true)
// if the new block is not head, we don't want to change our lastFinalityUpdate
if newBlockIsHead {
s.setLastFinalityUpdate(finalityUpdate, true)
}
return nil
}
// if in the same period, compare updates
isUpdateBetter, err := IsBetterUpdate(update, parentItem.bestUpdate)
if err != nil {
return errors.Wrapf(err, "could not compare light client updates")
}
if isUpdateBetter {
newCacheItem.bestUpdate = update
} else {
newCacheItem.bestUpdate = parentItem.bestUpdate
}
isBetterFinalityUpdate := IsBetterFinalityUpdate(finalityUpdate, parentItem.bestFinalityUpdate)
if isBetterFinalityUpdate {
newCacheItem.bestFinalityUpdate = finalityUpdate
} else {
newCacheItem.bestFinalityUpdate = parentItem.bestFinalityUpdate
}
// save new item in cache
s.cache.items[blockRoot] = newCacheItem
// save lastOptimisticUpdate if better
if isBetterOptimisticUpdate := IsBetterOptimisticUpdate(optimisticUpdate, s.lastOptimisticUpdate); isBetterOptimisticUpdate {
s.setLastOptimisticUpdate(optimisticUpdate, true)
}
// if the new block is considered the head, set the last finality update
if newBlockIsHead {
s.setLastFinalityUpdate(newCacheItem.bestFinalityUpdate, isBetterFinalityUpdate)
}
return nil
}
func (s *Store) LightClientBootstrap(ctx context.Context, blockRoot [32]byte) (interfaces.LightClientBootstrap, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client bootstrap from the database
bootstrap, err := s.beaconDB.LightClientBootstrap(ctx, blockRoot[:])
if err != nil {
return nil, err
}
if bootstrap == nil { // not found
return nil, ErrLightClientBootstrapNotFound
}
return bootstrap, nil
}
func (s *Store) SaveLightClientBootstrap(ctx context.Context, blockRoot [32]byte, state state.BeaconState) error {
s.mu.Lock()
defer s.mu.Unlock()
blk, err := s.beaconDB.Block(ctx, blockRoot)
if err != nil {
return errors.Wrapf(err, "failed to fetch block for root %x", blockRoot)
}
if blk == nil {
return errors.Errorf("nil block for root %x", blockRoot)
}
bootstrap, err := NewLightClientBootstrapFromBeaconState(ctx, state.Slot(), state, blk)
if err != nil {
return errors.Wrapf(err, "failed to create light client bootstrap for block root %x", blockRoot)
}
// Save the light client bootstrap to the database
if err := s.beaconDB.SaveLightClientBootstrap(ctx, blockRoot[:], bootstrap); err != nil {
return err
}
return nil
}
func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64, headBlock interfaces.ReadOnlySignedBeaconBlock) ([]interfaces.LightClientUpdate, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client updatesMap from the database
updatesMap, err := s.beaconDB.LightClientUpdates(ctx, startPeriod, endPeriod)
if err != nil {
return nil, errors.Wrapf(err, "failed to get updates from the database")
}
cacheUpdatesByPeriod, err := s.getCacheUpdatesByPeriod(headBlock)
if err != nil {
return nil, errors.Wrapf(err, "failed to get updates from cache")
}
for period, update := range cacheUpdatesByPeriod {
updatesMap[period] = update
}
var updates []interfaces.LightClientUpdate
for i := startPeriod; i <= endPeriod; i++ {
update, ok := updatesMap[i]
if !ok {
// Only return the first contiguous range of updates
break
}
updates = append(updates, update)
}
return updates, nil
}
func (s *Store) LightClientUpdate(ctx context.Context, period uint64, headBlock interfaces.ReadOnlySignedBeaconBlock) (interfaces.LightClientUpdate, error) {
// we don't need to lock here because the LightClientUpdates method locks the store
updates, err := s.LightClientUpdates(ctx, period, period, headBlock)
if err != nil {
return nil, errors.Wrapf(err, "failed to get light client update for period %d", period)
}
if len(updates) == 0 {
return nil, nil
}
return updates[0], nil
}
func (s *Store) getCacheUpdatesByPeriod(headBlock interfaces.ReadOnlySignedBeaconBlock) (map[uint64]interfaces.LightClientUpdate, error) {
updatesByPeriod := make(map[uint64]interfaces.LightClientUpdate)
cacheHeadRoot := headBlock.Block().ParentRoot()
cacheHeadItem, ok := s.cache.items[cacheHeadRoot]
if !ok {
log.Debugf("Head root %x not found in light client cache. Returning empty updates map for non finality cache.", cacheHeadRoot)
return updatesByPeriod, nil
}
for cacheHeadItem != nil {
if _, exists := updatesByPeriod[cacheHeadItem.period]; !exists {
updatesByPeriod[cacheHeadItem.period] = cacheHeadItem.bestUpdate
}
cacheHeadItem = cacheHeadItem.parent
}
return updatesByPeriod, nil
}
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
s.setLastFinalityUpdate(update, broadcast)
}
func (s *Store) setLastFinalityUpdate(update interfaces.LightClientFinalityUpdate, broadcast bool) {
if broadcast && IsFinalityUpdateValidForBroadcast(update, s.lastFinalityUpdate) {
if err := s.p2p.BroadcastLightClientFinalityUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client finality update")
}
}
s.lastFinalityUpdate = update
log.Debug("Saved new light client finality update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: update,
})
}
func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
s.mu.RLock()
defer s.mu.RUnlock()
return s.lastFinalityUpdate
}
func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
s.setLastOptimisticUpdate(update, broadcast)
}
func (s *Store) setLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate, broadcast bool) {
if broadcast {
if err := s.p2p.BroadcastLightClientOptimisticUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client optimistic update")
}
}
s.lastOptimisticUpdate = update
log.Debug("Saved new light client optimistic update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: update,
})
}
func (s *Store) LastOptimisticUpdate() interfaces.LightClientOptimisticUpdate {
s.mu.RLock()
defer s.mu.RUnlock()
return s.lastOptimisticUpdate
}
func (s *Store) MigrateToCold(ctx context.Context, finalizedRoot [32]byte) error {
s.mu.Lock()
defer s.mu.Unlock()
// If the cache is empty (some problem in processing data), we can skip migration.
// This is a safety check and should not happen in normal operation.
if len(s.cache.items) == 0 {
log.Debug("Non-finality cache is empty. Skipping migration.")
return nil
}
blk, err := s.beaconDB.Block(ctx, finalizedRoot)
if err != nil {
return errors.Wrapf(err, "failed to fetch block for finalized root %x", finalizedRoot)
}
if blk == nil {
return errors.Errorf("nil block for finalized root %x", finalizedRoot)
}
finalizedSlot := blk.Block().Slot()
finalizedCacheHeadRoot := blk.Block().ParentRoot()
var finalizedCacheHead *cacheItem
var ok bool
finalizedCacheHead, ok = s.cache.items[finalizedCacheHeadRoot]
if !ok {
log.Debugf("Finalized block's parent root %x not found in light client cache. Cleaning the broken part of the cache.", finalizedCacheHeadRoot)
// delete non-finality cache items older than finalized slot
s.cleanCache(finalizedSlot)
return nil
}
updateByPeriod := make(map[uint64]interfaces.LightClientUpdate)
// Traverse the cache from the head item to the tail, collecting updates
for item := finalizedCacheHead; item != nil; item = item.parent {
if _, seen := updateByPeriod[item.period]; seen {
// We already have an update for this period, skip this item
continue
}
updateByPeriod[item.period] = item.bestUpdate
}
// save updates to db
for period, update := range updateByPeriod {
err = s.beaconDB.SaveLightClientUpdate(ctx, period, update)
if err != nil {
log.WithError(err).Errorf("failed to save light client update for period %d. Skipping this period.", period)
}
}
// delete non-finality cache items older than finalized slot
s.cleanCache(finalizedSlot)
return nil
}
func (s *Store) cleanCache(finalizedSlot primitives.Slot) {
// delete non-finality cache items older than finalized slot
for k, v := range s.cache.items {
if v.slot < finalizedSlot {
delete(s.cache.items, k)
}
if v.parent != nil && v.parent.slot < finalizedSlot {
v.parent = nil // remove parent reference
}
}
}
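For reference, the per-period rollup performed by the removed getCacheUpdatesByPeriod and MigrateToCold boils down to the following walk over parent pointers (a condensed sketch, not code from the repository); head stands for a *cacheItem looked up by block root:
// Keep the newest item's bestUpdate for each sync-committee period while walking towards the tail.
best := make(map[uint64]interfaces.LightClientUpdate)
for item := head; item != nil; item = item.parent {
	if _, seen := best[item.period]; !seen {
		best[item.period] = item.bestUpdate
	}
}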

View File

@@ -1,959 +0,0 @@
package light_client
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
)
func TestLightClientStore(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 1
cfg.BellatrixForkEpoch = 2
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
params.OverrideBeaconConfig(cfg)
// Initialize the light client store
lcStore := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), testDB.SetupDB(t))
// Create test light client updates for Capella and Deneb
lCapella := util.NewTestLightClient(t, version.Capella)
opUpdateCapella, err := NewLightClientOptimisticUpdateFromBeaconState(lCapella.Ctx, lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, opUpdateCapella, "OptimisticUpdateCapella is nil")
finUpdateCapella, err := NewLightClientFinalityUpdateFromBeaconState(lCapella.Ctx, lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock, lCapella.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, finUpdateCapella, "FinalityUpdateCapella is nil")
lDeneb := util.NewTestLightClient(t, version.Deneb)
opUpdateDeneb, err := NewLightClientOptimisticUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, opUpdateDeneb, "OptimisticUpdateDeneb is nil")
finUpdateDeneb, err := NewLightClientFinalityUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock, lDeneb.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, finUpdateDeneb, "FinalityUpdateDeneb is nil")
// Initially the store should have nil values for both updates
require.IsNil(t, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should be nil")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get finality with Capella update. Optimistic update should be nil
lcStore.SetLastFinalityUpdate(finUpdateCapella, false)
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get optimistic with Capella update. Finality update should be Capella
lcStore.SetLastOptimisticUpdate(opUpdateCapella, false)
require.Equal(t, opUpdateCapella, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
// Set and get finality and optimistic with Deneb update
lcStore.SetLastFinalityUpdate(finUpdateDeneb, false)
lcStore.SetLastOptimisticUpdate(opUpdateDeneb, false)
require.Equal(t, finUpdateDeneb, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.Equal(t, opUpdateDeneb, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
}
func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := NewLightClientStore(p2p, new(event.Feed), testDB.SetupDB(t))
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, IsBetterFinalityUpdate(update0, lcStore.LastFinalityUpdate()), "update0 should be better than nil")
// update0 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, IsFinalityUpdateValidForBroadcast(update0, lcStore.LastFinalityUpdate()), "update0 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
l1 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(1))
update1, err := NewLightClientFinalityUpdateFromBeaconState(l1.Ctx, l1.State, l1.Block, l1.AttestedState, l1.AttestedBlock, l1.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, IsBetterFinalityUpdate(update1, update0), "update1 should be better than update0")
// update1 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, IsFinalityUpdateValidForBroadcast(update1, lcStore.LastFinalityUpdate()), "update1 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 2 with same finality slot, increased attested slot, and supermajority - should save and broadcast
l2 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(2), util.WithSupermajority(0))
update2, err := NewLightClientFinalityUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, IsBetterFinalityUpdate(update2, update1), "update2 should be better than update1")
// update2 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, IsFinalityUpdateValidForBroadcast(update2, lcStore.LastFinalityUpdate()), "update2 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
l3 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(3), util.WithSupermajority(0))
update3, err := NewLightClientFinalityUpdateFromBeaconState(l3.Ctx, l3.State, l3.Block, l3.AttestedState, l3.AttestedBlock, l3.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, IsBetterFinalityUpdate(update3, update2), "update3 should be better than update2")
// update3 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, IsFinalityUpdateValidForBroadcast(update3, lcStore.LastFinalityUpdate()), "update3 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
l4 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(1), util.WithSupermajority(0))
update4, err := NewLightClientFinalityUpdateFromBeaconState(l4.Ctx, l4.State, l4.Block, l4.AttestedState, l4.AttestedBlock, l4.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, IsBetterFinalityUpdate(update4, update3), "update4 should be better than update3")
// update4 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, IsFinalityUpdateValidForBroadcast(update4, lcStore.LastFinalityUpdate()), "update4 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
l5 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(2), util.WithSupermajority(0))
update5, err := NewLightClientFinalityUpdateFromBeaconState(l5.Ctx, l5.State, l5.Block, l5.AttestedState, l5.AttestedBlock, l5.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, IsBetterFinalityUpdate(update5, update4), "update5 should be better than update4")
// update5 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, IsFinalityUpdateValidForBroadcast(update5, lcStore.LastFinalityUpdate()), "update5 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
l6 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(3))
update6, err := NewLightClientFinalityUpdateFromBeaconState(l6.Ctx, l6.State, l6.Block, l6.AttestedState, l6.AttestedBlock, l6.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, IsBetterFinalityUpdate(update6, update5), "update6 should be better than update5")
// update6 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, IsFinalityUpdateValidForBroadcast(update6, lcStore.LastFinalityUpdate()), "update6 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}
func TestLightClientStore_SaveLCData(t *testing.T) {
t.Run("no parent in cache or db - new is head", func(t *testing.T) {
db := testDB.SetupDB(t)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), db)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
blkRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, s.SaveLCData(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock, blkRoot), "Failed to save light client data")
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
finalityUpdate, err := NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
optimisticUpdate, err := NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
attstedBlkRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.DeepEqual(t, finalityUpdate, s.lastFinalityUpdate, "Expected to find the last finality update in the store")
require.DeepEqual(t, optimisticUpdate, s.lastOptimisticUpdate, "Expected to find the last optimistic update in the store")
require.DeepEqual(t, update, s.cache.items[attstedBlkRoot].bestUpdate, "Expected to find the update in the non-finality cache")
require.DeepEqual(t, finalityUpdate, s.cache.items[attstedBlkRoot].bestFinalityUpdate, "Expected to find the finality update in the non-finality cache")
})
t.Run("no parent in cache or db - new not head", func(t *testing.T) {
db := testDB.SetupDB(t)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), db)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
blkRoot, err := l.FinalizedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, s.SaveLCData(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock, blkRoot), "Failed to save light client data")
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
finalityUpdate, err := NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
optimisticUpdate, err := NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
attstedBlkRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.IsNil(t, s.lastFinalityUpdate, "Expected to not find the last finality update in the store since the block is not head")
require.DeepEqual(t, optimisticUpdate, s.lastOptimisticUpdate, "Expected to find the last optimistic update in the store")
require.DeepEqual(t, update, s.cache.items[attstedBlkRoot].bestUpdate, "Expected to find the update in the non-finality cache")
require.DeepEqual(t, finalityUpdate, s.cache.items[attstedBlkRoot].bestFinalityUpdate, "Expected to find the finality update in the non-finality cache")
})
t.Run("parent in db", func(t *testing.T) {
db := testDB.SetupDB(t)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), db)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
// save an update for this period in db
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedBlock.Block().Slot()))
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NoError(t, db.SaveLightClientUpdate(l.Ctx, period, update), "Failed to save light client update in db")
l2 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(1), util.WithSupermajority(0)) // updates from this setup should be all better
blkRoot, err := l2.Block.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, s.SaveLCData(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock, blkRoot), "Failed to save light client data")
update, err = NewLightClientUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err)
finalityUpdate, err := NewLightClientFinalityUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err)
optimisticUpdate, err := NewLightClientOptimisticUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock)
require.NoError(t, err)
attstedBlkRoot, err := l2.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.DeepEqual(t, finalityUpdate, s.lastFinalityUpdate, "Expected to find the last finality update in the store")
require.DeepEqual(t, optimisticUpdate, s.lastOptimisticUpdate, "Expected to find the last optimistic update in the store")
require.DeepEqual(t, update, s.cache.items[attstedBlkRoot].bestUpdate, "Expected to find the update in the non-finality cache")
require.DeepEqual(t, finalityUpdate, s.cache.items[attstedBlkRoot].bestFinalityUpdate, "Expected to find the finality update in the non-finality cache")
})
t.Run("parent in cache", func(t *testing.T) {
db := testDB.SetupDB(t)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), db)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
l2 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(1), util.WithSupermajority(0)) // updates from this setup should be all better
// save the cache item for this period in cache
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedBlock.Block().Slot()))
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
finalityUpdate, err := NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
item := &cacheItem{
period: period,
bestUpdate: update,
bestFinalityUpdate: finalityUpdate,
}
attestedBlockRoot := l2.AttestedBlock.Block().ParentRoot() // we want this item to be the parent of the new block
s.cache.items[attestedBlockRoot] = item
blkRoot, err := l2.Block.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, s.SaveLCData(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock, blkRoot), "Failed to save light client data")
update, err = NewLightClientUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err)
finalityUpdate, err = NewLightClientFinalityUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err)
optimisticUpdate, err := NewLightClientOptimisticUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock)
require.NoError(t, err)
attstedBlkRoot, err := l2.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.DeepEqual(t, finalityUpdate, s.lastFinalityUpdate, "Expected to find the last finality update in the store")
require.DeepEqual(t, optimisticUpdate, s.lastOptimisticUpdate, "Expected to find the last optimistic update in the store")
require.DeepEqual(t, update, s.cache.items[attstedBlkRoot].bestUpdate, "Expected to find the update in the non-finality cache")
require.DeepEqual(t, finalityUpdate, s.cache.items[attstedBlkRoot].bestFinalityUpdate, "Expected to find the finality update in the non-finality cache")
})
t.Run("parent in the previous period", func(t *testing.T) {
db := testDB.SetupDB(t)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), db)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
l2 := util.NewTestLightClient(t, version.Bellatrix, util.WithIncreasedAttestedSlot(1), util.WithSupermajority(0)) // updates from this setup should be all better
// save the cache item for this period1 in cache
period1 := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedBlock.Block().Slot()))
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
finalityUpdate, err := NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
item := &cacheItem{
period: period1,
bestUpdate: update,
bestFinalityUpdate: finalityUpdate,
}
attestedBlockRoot := l2.AttestedBlock.Block().ParentRoot() // we want this item to be the parent of the new block
s.cache.items[attestedBlockRoot] = item
blkRoot, err := l2.Block.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, s.SaveLCData(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock, blkRoot), "Failed to save light client data")
update, err = NewLightClientUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err)
finalityUpdate, err = NewLightClientFinalityUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err)
optimisticUpdate, err := NewLightClientOptimisticUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock)
require.NoError(t, err)
attestedBlkRoot, err := l2.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.DeepEqual(t, finalityUpdate, s.lastFinalityUpdate, "Expected to find the last finality update in the store")
require.DeepEqual(t, optimisticUpdate, s.lastOptimisticUpdate, "Expected to find the last optimistic update in the store")
require.DeepEqual(t, update, s.cache.items[attestedBlkRoot].bestUpdate, "Expected to find the update in the non-finality cache")
require.DeepEqual(t, finalityUpdate, s.cache.items[attestedBlkRoot].bestFinalityUpdate, "Expected to find the finality update in the non-finality cache")
})
}
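// Illustrative sketch (not part of the tests above): the SaveLCData cases key cache
// items by block root and attach a new entry to an existing one by looking up the
// parent root of the new attested block. The types below are hypothetical stand-ins
// for the cacheItem used in these tests, shown only to make the linkage explicit.
type sketchCacheItem struct {
    period uint64
    parent *sketchCacheItem
}

func linkSketchItem(items map[[32]byte]*sketchCacheItem, attestedRoot, attestedParentRoot [32]byte, period uint64) *sketchCacheItem {
    item := &sketchCacheItem{period: period}
    // If the parent of the new attested block is already cached, chain the items so
    // later processing can walk canonical ancestry through the cache.
    if parent, ok := items[attestedParentRoot]; ok {
        item.parent = parent
    }
    items[attestedRoot] = item
    return item
}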
func TestLightClientStore_MigrateToCold(t *testing.T) {
// This tests the scenario where chain advances but the cache is empty.
// It should see that there is nothing in the cache to migrate and just update the tail to the new finalized root.
t.Run("empty cache", func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
finalizedBlockRoot, _ := saveInitialFinalizedCheckpointData(t, ctx, beaconDB)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), beaconDB)
require.NotNil(t, s)
for i := 0; i < 3; i++ {
newBlock := util.NewBeaconBlock()
newBlock.Block.Slot = primitives.Slot(32 + uint64(i))
newBlock.Block.ParentRoot = finalizedBlockRoot[:]
signedNewBlock, err := blocks.NewSignedBeaconBlock(newBlock)
require.NoError(t, err)
blockRoot, err := signedNewBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, signedNewBlock))
finalizedBlockRoot = blockRoot
}
err := s.MigrateToCold(ctx, finalizedBlockRoot)
require.NoError(t, err)
require.Equal(t, 0, len(s.cache.items))
})
// This tests the scenario where chain advances but the CANONICAL cache is empty.
// It should see that there is nothing in the canonical cache to migrate and just update the tail to the new finalized root AND delete anything non-canonical.
t.Run("non canonical cache", func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
finalizedBlockRoot, _ := saveInitialFinalizedCheckpointData(t, ctx, beaconDB)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), beaconDB)
require.NotNil(t, s)
for i := 0; i < 3; i++ {
newBlock := util.NewBeaconBlock()
newBlock.Block.Slot = primitives.Slot(32 + uint64(i))
newBlock.Block.ParentRoot = finalizedBlockRoot[:]
signedNewBlock, err := blocks.NewSignedBeaconBlock(newBlock)
require.NoError(t, err)
blockRoot, err := signedNewBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, signedNewBlock))
finalizedBlockRoot = blockRoot
}
// Add a non-canonical item to the cache
cacheItem := &cacheItem{
period: 0,
slot: 33,
}
nonCanonicalBlockRoot := [32]byte{1, 2, 3, 4}
s.cache.items[nonCanonicalBlockRoot] = cacheItem
require.Equal(t, 1, len(s.cache.items))
err := s.MigrateToCold(ctx, finalizedBlockRoot)
require.NoError(t, err)
require.Equal(t, 0, len(s.cache.items), "Expected the non-canonical item in the cache to be deleted")
})
// db has update - cache has both canonical and non-canonical items.
// should update the update in db and delete cache.
t.Run("mixed cache - finality immediately after cache", func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
finalizedBlockRoot, _ := saveInitialFinalizedCheckpointData(t, ctx, beaconDB)
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, finalizedBlockRoot))
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), beaconDB)
require.NotNil(t, s)
// Save an update for this period in db
l := util.NewTestLightClient(t, version.Altair)
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedBlock.Block().Slot()))
require.NoError(t, beaconDB.SaveLightClientUpdate(ctx, period, update))
lastBlockRoot := finalizedBlockRoot
lastAttestedRoot := finalizedBlockRoot
lastUpdate := update
for i := 1; i < 4; i++ {
l = util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(uint64(i)), util.WithSupermajority(uint64(i)), util.WithAttestedParentRoot(lastAttestedRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, l.Block))
require.NoError(t, s.SaveLCData(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock, [32]byte{1}))
lastBlockRoot, err = l.Block.Block().HashTreeRoot()
require.NoError(t, err)
lastAttestedRoot, err = l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
update, err = NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
lastUpdate = update
}
require.Equal(t, 3, len(s.cache.items))
// Add a non-canonical item to the cache
cacheItem := &cacheItem{
period: 0,
slot: 33,
}
nonCanonicalBlockRoot := [32]byte{1, 2, 3, 4}
s.cache.items[nonCanonicalBlockRoot] = cacheItem
require.Equal(t, 4, len(s.cache.items))
err = s.MigrateToCold(ctx, lastBlockRoot)
require.NoError(t, err)
require.Equal(t, 0, len(s.cache.items), "Expected the non-canonical item in the cache to be deleted")
u, err := beaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, lastUpdate, u)
})
// db has update - cache has both canonical and non-canonical items. finalized height is in the middle.
// should update the update in db and delete items in cache before finalized slot.
t.Run("mixed cache - finality middle of cache", func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
finalizedBlockRoot, _ := saveInitialFinalizedCheckpointData(t, ctx, beaconDB)
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, finalizedBlockRoot))
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), beaconDB)
require.NotNil(t, s)
// Save an update for this period in db
l := util.NewTestLightClient(t, version.Altair)
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedBlock.Block().Slot()))
require.NoError(t, beaconDB.SaveLightClientUpdate(ctx, period, update))
lastBlockRoot := finalizedBlockRoot
lastUpdate := update
lastAttestedRoot := [32]byte{}
for i := 1; i < 4; i++ {
l = util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(uint64(i)), util.WithSupermajority(uint64(i)), util.WithAttestedParentRoot(lastAttestedRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, l.Block))
require.NoError(t, s.SaveLCData(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock, [32]byte{1}))
root, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
lastBlockRoot = root
update, err = NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
lastUpdate = update
lastAttestedRoot, err = l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
}
require.Equal(t, 3, len(s.cache.items))
// Add a non-canonical item to the cache
cacheItem := &cacheItem{
period: 0,
slot: 33,
}
nonCanonicalBlockRoot := [32]byte{1, 2, 3, 4}
s.cache.items[nonCanonicalBlockRoot] = cacheItem
require.Equal(t, 4, len(s.cache.items))
for i := 4; i < 7; i++ {
l = util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(uint64(i)), util.WithSupermajority(0), util.WithAttestedParentRoot(lastAttestedRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, l.Block))
require.NoError(t, s.SaveLCData(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock, [32]byte{1}))
lastAttestedRoot, err = l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
}
require.Equal(t, 7, len(s.cache.items))
err = s.MigrateToCold(ctx, lastBlockRoot)
require.NoError(t, err)
require.Equal(t, 3, len(s.cache.items), "Expected the non-canonical item and all items up to the finalized slot to be deleted")
u, err := beaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, lastUpdate, u)
})
// we have multiple periods in the cache before finalization happens. we expect all of them to be saved in db.
t.Run("finality after multiple periods in cache", func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
cfg := params.BeaconConfig().Copy()
cfg.EpochsPerSyncCommitteePeriod = 1
params.OverrideBeaconConfig(cfg)
finalizedBlockRoot, _ := saveInitialFinalizedCheckpointData(t, ctx, beaconDB)
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, finalizedBlockRoot))
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), beaconDB)
require.NotNil(t, s)
// Save an update for period1 in db
l := util.NewTestLightClient(t, version.Altair)
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
period1 := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedBlock.Block().Slot()))
require.NoError(t, beaconDB.SaveLightClientUpdate(ctx, period1, update))
lastBlockRoot := finalizedBlockRoot
lastUpdatePeriod1 := update
lastAttestedRoot := [32]byte{}
for i := 1; i < 4; i++ {
l = util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(uint64(i)), util.WithSupermajority(uint64(i)), util.WithAttestedParentRoot(lastAttestedRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, l.Block))
require.NoError(t, s.SaveLCData(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock, [32]byte{1}))
root, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
lastBlockRoot = root
lastUpdatePeriod1, err = NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
lastAttestedRoot, err = l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
}
period2 := period1
var lastUpdatePeriod2 interfaces.LightClientUpdate
for i := 1; i < 4; i++ {
l = util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(uint64(i)+33), util.WithSupermajority(uint64(i)), util.WithAttestedParentRoot(lastAttestedRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, l.Block))
require.NoError(t, s.SaveLCData(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock, [32]byte{1}))
root, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
lastBlockRoot = root
lastUpdatePeriod2, err = NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
lastAttestedRoot, err = l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
period2 = slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedBlock.Block().Slot()))
}
require.Equal(t, 6, len(s.cache.items))
// Add a non-canonical item to the cache
cacheItem := &cacheItem{
period: 0,
slot: 33,
}
nonCanonicalBlockRoot := [32]byte{1, 2, 3, 4}
s.cache.items[nonCanonicalBlockRoot] = cacheItem
require.Equal(t, 7, len(s.cache.items))
for i := 4; i < 7; i++ {
l = util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(uint64(i)+33), util.WithSupermajority(0), util.WithAttestedParentRoot(lastAttestedRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, l.Block))
require.NoError(t, s.SaveLCData(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock, [32]byte{1}))
lastAttestedRoot, err = l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
}
require.Equal(t, 10, len(s.cache.items))
err = s.MigrateToCold(ctx, lastBlockRoot)
require.NoError(t, err)
require.Equal(t, 3, len(s.cache.items), "Expected the non-canonical item and all items up to the finalized slot to be deleted")
u, err := beaconDB.LightClientUpdate(ctx, period2)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, lastUpdatePeriod2, u)
u, err = beaconDB.LightClientUpdate(ctx, period1)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, lastUpdatePeriod1, u)
})
}
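// Illustrative sketch (not the repository implementation): the MigrateToCold tests
// above expect roughly the following behavior, written with hypothetical stand-in
// types. Cached items form a parent-linked chain; migration persists the best update
// per period found on the canonical branch that ends at the finalized block, then
// removes every cache entry whose slot is at or below the finalized slot, which in
// these tests also sweeps out the non-canonical leftovers.
type coldSketchItem struct {
    period uint64
    slot   uint64
    parent *coldSketchItem
}

func migrateToColdSketch(items map[[32]byte]*coldSketchItem, finalized *coldSketchItem, finalizedSlot uint64, persist func(period uint64, it *coldSketchItem)) {
    best := map[uint64]*coldSketchItem{}
    // Walking from the finalized item toward genesis, the first item seen for a
    // period is the newest canonical one for that period.
    for it := finalized; it != nil; it = it.parent {
        if _, ok := best[it.period]; !ok {
            best[it.period] = it
        }
    }
    for period, it := range best {
        persist(period, it)
    }
    // Prune everything that is now covered by finality.
    for root, it := range items {
        if it.slot <= finalizedSlot {
            delete(items, root)
        }
    }
}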
func saveInitialFinalizedCheckpointData(t *testing.T, ctx context.Context, beaconDB db.Database) ([32]byte, interfaces.SignedBeaconBlock) {
genesis := util.NewBeaconBlock()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
genesisState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, genesisState, genesisRoot))
finalizedState, err := util.NewBeaconState()
require.NoError(t, err)
finalizedBlock := util.NewBeaconBlock()
finalizedBlock.Block.Slot = 32
finalizedBlock.Block.ParentRoot = genesisRoot[:]
signedFinalizedBlock, err := blocks.NewSignedBeaconBlock(finalizedBlock)
require.NoError(t, err)
finalizedBlockHeader, err := signedFinalizedBlock.Header()
require.NoError(t, err)
require.NoError(t, finalizedState.SetLatestBlockHeader(finalizedBlockHeader.Header))
finalizedStateRoot, err := finalizedState.HashTreeRoot(ctx)
require.NoError(t, err)
finalizedBlock.Block.StateRoot = finalizedStateRoot[:]
signedFinalizedBlock, err = blocks.NewSignedBeaconBlock(finalizedBlock)
require.NoError(t, err)
finalizedBlockRoot, err := signedFinalizedBlock.Block().HashTreeRoot()
require.NoError(t, err)
cp := ethpb.Checkpoint{
Epoch: 1,
Root: finalizedBlockRoot[:],
}
require.NoError(t, beaconDB.SaveBlock(ctx, signedFinalizedBlock))
require.NoError(t, beaconDB.SaveState(ctx, finalizedState, finalizedBlockRoot))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &cp))
return finalizedBlockRoot, signedFinalizedBlock
}
func TestLightClientStore_LightClientUpdatesByRange(t *testing.T) {
t.Run("no updates", func(t *testing.T) {
d := testDB.SetupDB(t)
ctx := context.Background()
_, finalizedBlock := saveInitialFinalizedCheckpointData(t, ctx, d)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), d)
require.NotNil(t, s)
updates, err := s.LightClientUpdates(ctx, 2, 5, finalizedBlock)
require.NoError(t, err)
require.Equal(t, 0, len(updates))
})
t.Run("single update from db", func(t *testing.T) {
d := testDB.SetupDB(t)
ctx := context.Background()
_, finalizedBlock := saveInitialFinalizedCheckpointData(t, ctx, d)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), d)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NoError(t, d.SaveLightClientUpdate(ctx, 3, update))
updates, err := s.LightClientUpdates(ctx, 3, 3, finalizedBlock)
require.NoError(t, err)
require.Equal(t, 1, len(updates))
require.DeepEqual(t, update, updates[0], "Expected to find the update in the store")
})
t.Run("multiple updates from db", func(t *testing.T) {
d := testDB.SetupDB(t)
ctx := context.Background()
_, finalizedBlock := saveInitialFinalizedCheckpointData(t, ctx, d)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), d)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NoError(t, d.SaveLightClientUpdate(ctx, 3, update))
require.NoError(t, d.SaveLightClientUpdate(ctx, 4, update))
require.NoError(t, d.SaveLightClientUpdate(ctx, 5, update))
updates, err := s.LightClientUpdates(ctx, 3, 5, finalizedBlock)
require.NoError(t, err)
require.Equal(t, 3, len(updates))
require.DeepEqual(t, update, updates[0], "Expected to find the update in the store")
require.DeepEqual(t, update, updates[1], "Expected to find the update in the store")
require.DeepEqual(t, update, updates[2], "Expected to find the update in the store")
})
t.Run("single update from cache", func(t *testing.T) {
d := testDB.SetupDB(t)
ctx := context.Background()
_, _ = saveInitialFinalizedCheckpointData(t, ctx, d)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), d)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
cacheItem := &cacheItem{
period: 3,
bestUpdate: update,
}
s.cache.items[[32]byte{3}] = cacheItem
_, headBlock := saveStateAndBlockWithParentRoot(t, ctx, d, [32]byte{3})
updates, err := s.LightClientUpdates(ctx, 3, 3, headBlock)
require.NoError(t, err)
require.Equal(t, 1, len(updates))
require.DeepEqual(t, update, updates[0], "Expected to find the update in the store")
})
t.Run("multiple updates from cache", func(t *testing.T) {
d := testDB.SetupDB(t)
ctx := context.Background()
_, _ = saveInitialFinalizedCheckpointData(t, ctx, d)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), d)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
cacheItemP3 := &cacheItem{
period: 3,
bestUpdate: update,
}
s.cache.items[[32]byte{3}] = cacheItemP3
cacheItemP4 := &cacheItem{
period: 4,
bestUpdate: update,
parent: cacheItemP3,
}
s.cache.items[[32]byte{4}] = cacheItemP4
cacheItemP5 := &cacheItem{
period: 5,
bestUpdate: update,
parent: cacheItemP4,
}
s.cache.items[[32]byte{5}] = cacheItemP5
_, headBlock := saveStateAndBlockWithParentRoot(t, ctx, d, [32]byte{5})
updates, err := s.LightClientUpdates(ctx, 3, 5, headBlock)
require.NoError(t, err)
require.Equal(t, 3, len(updates))
require.DeepEqual(t, update, updates[0], "Expected to find the update in the store")
require.DeepEqual(t, update, updates[1], "Expected to find the update in the store")
require.DeepEqual(t, update, updates[2], "Expected to find the update in the store")
})
t.Run("multiple updates from both db and cache - no overlap", func(t *testing.T) {
d := testDB.SetupDB(t)
ctx := context.Background()
_, _ = saveInitialFinalizedCheckpointData(t, ctx, d)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), d)
require.NotNil(t, s)
l := util.NewTestLightClient(t, version.Altair)
update, err := NewLightClientUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NoError(t, d.SaveLightClientUpdate(ctx, 1, update))
require.NoError(t, d.SaveLightClientUpdate(ctx, 2, update))
cacheItemP3 := &cacheItem{
period: 3,
bestUpdate: update,
}
s.cache.items[[32]byte{3}] = cacheItemP3
cacheItemP4 := &cacheItem{
period: 4,
bestUpdate: update,
parent: cacheItemP3,
}
s.cache.items[[32]byte{4}] = cacheItemP4
cacheItemP5 := &cacheItem{
period: 5,
bestUpdate: update,
parent: cacheItemP4,
}
s.cache.items[[32]byte{5}] = cacheItemP5
_, headBlock := saveStateAndBlockWithParentRoot(t, ctx, d, [32]byte{5})
updates, err := s.LightClientUpdates(ctx, 1, 5, headBlock)
require.NoError(t, err)
require.Equal(t, 5, len(updates))
for i := 0; i < 5; i++ {
require.DeepEqual(t, update, updates[i], "Expected to find the update in the store")
}
})
t.Run("multiple updates from both db and cache - overlap", func(t *testing.T) {
d := testDB.SetupDB(t)
ctx := context.Background()
_, _ = saveInitialFinalizedCheckpointData(t, ctx, d)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), d)
require.NotNil(t, s)
l1 := util.NewTestLightClient(t, version.Altair)
update1, err := NewLightClientUpdateFromBeaconState(l1.Ctx, l1.State, l1.Block, l1.AttestedState, l1.AttestedBlock, l1.FinalizedBlock)
require.NoError(t, err)
l2 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(1))
update2, err := NewLightClientUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err)
require.DeepNotEqual(t, update1, update2)
require.NoError(t, d.SaveLightClientUpdate(ctx, 1, update1))
require.NoError(t, d.SaveLightClientUpdate(ctx, 2, update1))
require.NoError(t, d.SaveLightClientUpdate(ctx, 3, update1))
require.NoError(t, d.SaveLightClientUpdate(ctx, 4, update1))
cacheItemP3 := &cacheItem{
period: 3,
bestUpdate: update2,
}
s.cache.items[[32]byte{3}] = cacheItemP3
cacheItemP4 := &cacheItem{
period: 4,
bestUpdate: update2,
parent: cacheItemP3,
}
s.cache.items[[32]byte{4}] = cacheItemP4
cacheItemP5 := &cacheItem{
period: 5,
bestUpdate: update2,
parent: cacheItemP4,
}
s.cache.items[[32]byte{5}] = cacheItemP5
_, headBlock := saveStateAndBlockWithParentRoot(t, ctx, d, [32]byte{5})
updates, err := s.LightClientUpdates(ctx, 1, 5, headBlock)
require.NoError(t, err)
require.Equal(t, 5, len(updates))
// first two updates should be update1
for i := 0; i < 2; i++ {
require.DeepEqual(t, update1, updates[i], "Expected to find the update in the store")
}
// next three updates should be update2 - as cache overrides db
for i := 2; i < 5; i++ {
require.DeepEqual(t, update2, updates[i], "Expected to find the update in the store")
}
})
t.Run("first continuous range", func(t *testing.T) {
d := testDB.SetupDB(t)
ctx := context.Background()
_, _ = saveInitialFinalizedCheckpointData(t, ctx, d)
s := NewLightClientStore(&p2pTesting.FakeP2P{}, new(event.Feed), d)
require.NotNil(t, s)
l1 := util.NewTestLightClient(t, version.Altair)
update, err := NewLightClientUpdateFromBeaconState(l1.Ctx, l1.State, l1.Block, l1.AttestedState, l1.AttestedBlock, l1.FinalizedBlock)
require.NoError(t, err)
require.NoError(t, d.SaveLightClientUpdate(ctx, 1, update))
require.NoError(t, d.SaveLightClientUpdate(ctx, 2, update))
cacheItemP4 := &cacheItem{
period: 4,
bestUpdate: update,
}
s.cache.items[[32]byte{4}] = cacheItemP4
cacheItemP5 := &cacheItem{
period: 5,
bestUpdate: update,
parent: cacheItemP4,
}
s.cache.items[[32]byte{5}] = cacheItemP5
_, headBlock := saveStateAndBlockWithParentRoot(t, ctx, d, [32]byte{5})
updates, err := s.LightClientUpdates(ctx, 1, 5, headBlock)
require.NoError(t, err)
require.Equal(t, 2, len(updates))
require.DeepEqual(t, update, updates[0], "Expected to find the update in the store")
require.DeepEqual(t, update, updates[1], "Expected to find the update in the store")
})
}
func saveStateAndBlockWithParentRoot(t *testing.T, ctx context.Context, d db.Database, parentRoot [32]byte) ([32]byte, interfaces.SignedBeaconBlock) {
blk := util.NewBeaconBlock()
blk.Block.ParentRoot = parentRoot[:]
blkRoot, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, d, blk)
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, d.SaveState(ctx, st, blkRoot))
signedFinalizedBlock, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return blkRoot, signedFinalizedBlock
}
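// Illustrative sketch (not the repository implementation): the LightClientUpdates
// tests above suggest the following lookup order, expressed with a hypothetical
// map-based view of both sources. For each requested period the cache (reached by
// walking back from the head block) takes precedence over the database, and the
// result is truncated at the first period for which neither source has an update,
// so only the first continuous range is returned.
func updatesByRangeSketch(start, end uint64, fromCache, fromDB map[uint64]string) []string {
    var out []string
    for p := start; p <= end; p++ {
        if u, ok := fromCache[p]; ok {
            out = append(out, u) // cache overrides db on overlap
            continue
        }
        if u, ok := fromDB[p]; ok {
            out = append(out, u)
            continue
        }
        break // gap: stop at the first missing period
    }
    return out
}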

View File

@@ -23,6 +23,7 @@ go_library(
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/kv:go_default_library",
@@ -31,7 +32,6 @@ go_library(
"//beacon-chain/execution:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/light-client:go_default_library",
"//beacon-chain/monitor:go_default_library",
"//beacon-chain/node/registration:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",

View File

@@ -25,6 +25,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
@@ -33,7 +34,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/monitor"
"github.com/OffchainLabs/prysm/v6/beacon-chain/node/registration"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
@@ -253,6 +253,10 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
// their initialization.
beacon.finalizedStateAtStartUp = nil
if features.Get().EnableLightClient {
beacon.lcStore = lightclient.NewLightClientStore(beacon.db, beacon.fetchP2P(), beacon.StateFeed())
}
return beacon, nil
}
@@ -345,11 +349,6 @@ func registerServices(cliCtx *cli.Context, beacon *BeaconNode, synchronizer *sta
return errors.Wrap(err, "could not register P2P service")
}
if features.Get().EnableLightClient {
log.Debugln("Registering Light Client Store")
beacon.registerLightClientStore()
}
log.Debugln("Registering Backfill Service")
if err := beacon.RegisterBackfillService(cliCtx, bfs); err != nil {
return errors.Wrap(err, "could not register Back Fill service")
@@ -622,55 +621,35 @@ func (b *BeaconNode) startStateGen(ctx context.Context, bfs coverage.AvailableBl
return nil
}
func parseIPNetStrings(ipWhitelist []string) ([]*net.IPNet, error) {
ipNets := make([]*net.IPNet, 0, len(ipWhitelist))
for _, cidr := range ipWhitelist {
_, ipNet, err := net.ParseCIDR(cidr)
if err != nil {
log.WithError(err).WithField("cidr", cidr).Error("Invalid CIDR in IP colocation whitelist")
return nil, err
}
ipNets = append(ipNets, ipNet)
log.WithField("cidr", cidr).Info("Added IP to colocation whitelist")
}
return ipNets, nil
}
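// Quick standalone illustration (standard library only, not part of this file):
// the CIDR strings accepted by the colocation whitelist flag above are parsed with
// net.ParseCIDR, and membership is later checked with Contains.
func exampleWhitelistCheck() (bool, error) {
    _, ipNet, err := net.ParseCIDR("10.0.0.0/8")
    if err != nil {
        return false, err
    }
    // 10.1.2.3 falls inside 10.0.0.0/8, so Contains reports true.
    return ipNet.Contains(net.ParseIP("10.1.2.3")), nil
}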
func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
bootstrapNodeAddrs, dataDir, err := registration.P2PPreregistration(cliCtx)
if err != nil {
return errors.Wrapf(err, "could not register p2p service")
}
colocationWhitelist, err := parseIPNetStrings(slice.SplitCommaSeparated(cliCtx.StringSlice(cmd.P2PColocationWhitelist.Name)))
if err != nil {
return fmt.Errorf("failed to register p2p service: %w", err)
}
svc, err := p2p.NewService(b.ctx, &p2p.Config{
NoDiscovery: cliCtx.Bool(cmd.NoDiscovery.Name),
StaticPeers: slice.SplitCommaSeparated(cliCtx.StringSlice(cmd.StaticPeers.Name)),
Discv5BootStrapAddrs: p2p.ParseBootStrapAddrs(bootstrapNodeAddrs),
RelayNodeAddr: cliCtx.String(cmd.RelayNode.Name),
DataDir: dataDir,
DiscoveryDir: filepath.Join(dataDir, "discovery"),
LocalIP: cliCtx.String(cmd.P2PIP.Name),
HostAddress: cliCtx.String(cmd.P2PHost.Name),
HostDNS: cliCtx.String(cmd.P2PHostDNS.Name),
PrivateKey: cliCtx.String(cmd.P2PPrivKey.Name),
StaticPeerID: cliCtx.Bool(cmd.P2PStaticID.Name),
QUICPort: cliCtx.Uint(cmd.P2PQUICPort.Name),
TCPPort: cliCtx.Uint(cmd.P2PTCPPort.Name),
UDPPort: cliCtx.Uint(cmd.P2PUDPPort.Name),
MaxPeers: cliCtx.Uint(cmd.P2PMaxPeers.Name),
QueueSize: cliCtx.Uint(cmd.PubsubQueueSize.Name),
AllowListCIDR: cliCtx.String(cmd.P2PAllowList.Name),
DenyListCIDR: slice.SplitCommaSeparated(cliCtx.StringSlice(cmd.P2PDenyList.Name)),
IPColocationWhitelist: colocationWhitelist,
EnableUPnP: cliCtx.Bool(cmd.EnableUPnPFlag.Name),
StateNotifier: b,
DB: b.db,
ClockWaiter: b.clockWaiter,
NoDiscovery: cliCtx.Bool(cmd.NoDiscovery.Name),
StaticPeers: slice.SplitCommaSeparated(cliCtx.StringSlice(cmd.StaticPeers.Name)),
Discv5BootStrapAddrs: p2p.ParseBootStrapAddrs(bootstrapNodeAddrs),
RelayNodeAddr: cliCtx.String(cmd.RelayNode.Name),
DataDir: dataDir,
DiscoveryDir: filepath.Join(dataDir, "discovery"),
LocalIP: cliCtx.String(cmd.P2PIP.Name),
HostAddress: cliCtx.String(cmd.P2PHost.Name),
HostDNS: cliCtx.String(cmd.P2PHostDNS.Name),
PrivateKey: cliCtx.String(cmd.P2PPrivKey.Name),
StaticPeerID: cliCtx.Bool(cmd.P2PStaticID.Name),
QUICPort: cliCtx.Uint(cmd.P2PQUICPort.Name),
TCPPort: cliCtx.Uint(cmd.P2PTCPPort.Name),
UDPPort: cliCtx.Uint(cmd.P2PUDPPort.Name),
MaxPeers: cliCtx.Uint(cmd.P2PMaxPeers.Name),
QueueSize: cliCtx.Uint(cmd.PubsubQueueSize.Name),
AllowListCIDR: cliCtx.String(cmd.P2PAllowList.Name),
DenyListCIDR: slice.SplitCommaSeparated(cliCtx.StringSlice(cmd.P2PDenyList.Name)),
EnableUPnP: cliCtx.Bool(cmd.EnableUPnPFlag.Name),
StateNotifier: b,
DB: b.db,
ClockWaiter: b.clockWaiter,
})
if err != nil {
return err
@@ -1140,11 +1119,6 @@ func (b *BeaconNode) RegisterBackfillService(cliCtx *cli.Context, bfs *backfill.
return b.services.RegisterService(bf)
}
func (b *BeaconNode) registerLightClientStore() {
lcs := lightclient.NewLightClientStore(b.fetchP2P(), b.StateFeed(), b.db)
b.lcStore = lcs
}
func hasNetworkFlag(cliCtx *cli.Context) bool {
for _, flag := range features.NetworkFlags {
for _, name := range flag.Names() {

View File

@@ -74,9 +74,7 @@ func TestNodeStart_Ok(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
set.Bool("enable-light-client", true, "enable light client")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
require.NoError(t, set.Set("enable-light-client", "true"))
ctx, cancel := newCliContextWithCancel(&app, set)
@@ -90,7 +88,6 @@ func TestNodeStart_Ok(t *testing.T) {
node, err := New(ctx, cancel, options...)
require.NoError(t, err)
require.NotNil(t, node.lcStore)
node.services = &runtime.ServiceRegistry{}
go func() {
node.Start()
@@ -265,46 +262,3 @@ func TestCORS(t *testing.T) {
})
}
}
func TestParseIPNetStrings(t *testing.T) {
tests := []struct {
name string
whitelist []string
wantCount int
wantError string
}{
{
name: "empty whitelist",
whitelist: []string{},
wantCount: 0,
},
{
name: "single IP whitelist",
whitelist: []string{"192.168.1.1/32"},
wantCount: 1,
},
{
name: "multiple IPs whitelist",
whitelist: []string{"192.168.1.0/24", "10.0.0.0/8", "34.42.19.170/32"},
wantCount: 3,
},
{
name: "invalid CIDR returns error",
whitelist: []string{"192.168.1.0/24", "invalid-cidr", "10.0.0.0/8"},
wantCount: 0,
wantError: "invalid CIDR address",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result, err := parseIPNetStrings(tt.whitelist)
assert.Equal(t, tt.wantCount, len(result))
if len(tt.wantError) == 0 {
assert.Equal(t, nil, err)
} else {
assert.ErrorContains(t, tt.wantError, err)
}
})
}
}

View File

@@ -60,7 +60,6 @@ go_library(
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
@@ -96,7 +95,6 @@ go_library(
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/peerstore:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/net/connmgr:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/quic:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
@@ -186,7 +184,6 @@ go_test(
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p//core/connmgr:go_default_library",
"@com_github_libp2p_go_libp2p//core/crypto:go_default_library",
"@com_github_libp2p_go_libp2p//core/host:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",

View File

@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
@@ -309,13 +308,19 @@ func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update
// BroadcastDataColumnSidecar broadcasts a data column sidecar to the p2p network. The message is assumed to be
// broadcast on the current fork and to the given column subnet.
func (s *Service) BroadcastDataColumnSidecar(
root [fieldparams.RootLength]byte,
dataColumnSubnet uint64,
dataColumnSidecar blocks.VerifiedRODataColumn,
dataColumnSidecar *ethpb.DataColumnSidecar,
) error {
// Add tracing to the function.
ctx, span := trace.StartSpan(s.ctx, "p2p.BroadcastDataColumnSidecar")
defer span.End()
// Ensure the data column sidecar is not nil.
if dataColumnSidecar == nil {
return errors.Errorf("attempted to broadcast nil data column sidecar at subnet %d", dataColumnSubnet)
}
// Retrieve the current fork digest.
forkDigest, err := s.currentForkDigest()
if err != nil {
@@ -325,15 +330,16 @@ func (s *Service) BroadcastDataColumnSidecar(
}
// Non-blocking broadcast, with attempts to discover a column subnet peer if none available.
go s.internalBroadcastDataColumnSidecar(ctx, dataColumnSubnet, dataColumnSidecar, forkDigest)
go s.internalBroadcastDataColumnSidecar(ctx, root, dataColumnSubnet, dataColumnSidecar, forkDigest)
return nil
}
func (s *Service) internalBroadcastDataColumnSidecar(
ctx context.Context,
root [fieldparams.RootLength]byte,
columnSubnet uint64,
dataColumnSidecar blocks.VerifiedRODataColumn,
dataColumnSidecar *ethpb.DataColumnSidecar,
forkDigest [fieldparams.VersionLength]byte,
) {
// Add tracing to the function.
@@ -379,9 +385,8 @@ func (s *Service) internalBroadcastDataColumnSidecar(
log.WithFields(logrus.Fields{
"slot": slot,
"timeSinceSlotStart": time.Since(slotStartTime),
"root": fmt.Sprintf("%#x", dataColumnSidecar.BlockRoot()),
"root": fmt.Sprintf("%#x", root),
"columnSubnet": columnSubnet,
"blobCount": len(dataColumnSidecar.Column),
}).Debug("Broadcasted data column sidecar")
// Increase the number of successful broadcasts.

View File

@@ -711,8 +711,13 @@ func TestService_BroadcastDataColumn(t *testing.T) {
subnet := peerdas.ComputeSubnetForDataColumnSidecar(columnIndex)
topic := fmt.Sprintf(topicFormat, digest, subnet) + service.Encoding().ProtocolSuffix()
_, verifiedRoSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: columnIndex}})
verifiedRoSidecar := verifiedRoSidecars[0]
roSidecars, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: columnIndex}})
sidecar := roSidecars[0].DataColumnSidecar
// Attempting to broadcast a nil object should fail.
var emptyRoot [fieldparams.RootLength]byte
err = service.BroadcastDataColumnSidecar(emptyRoot, subnet, nil)
require.ErrorContains(t, "attempted to broadcast nil", err)
// Subscribe to the topic.
sub, err := p2.SubscribeToTopic(topic)
@@ -722,7 +727,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
time.Sleep(50 * time.Millisecond)
// Broadcast to peers and wait.
err = service.BroadcastDataColumnSidecar(subnet, verifiedRoSidecar)
err = service.BroadcastDataColumnSidecar(emptyRoot, subnet, sidecar)
require.NoError(t, err)
// Receive the message.
@@ -734,5 +739,5 @@ func TestService_BroadcastDataColumn(t *testing.T) {
var result ethpb.DataColumnSidecar
require.NoError(t, service.Encoding().DecodeGossip(msg.Data, &result))
require.DeepEqual(t, &result, verifiedRoSidecar)
require.DeepEqual(t, &result, sidecar)
}
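// Illustrative caller sketch (hypothetical helper, not in the repository): using
// only the updated signature exercised above, a caller can fan out one broadcast
// per subnet for a single block root. The broadcast parameter stands in for the
// service method.
func broadcastSidecarsSketch(
    root [fieldparams.RootLength]byte,
    bySubnet map[uint64]*ethpb.DataColumnSidecar,
    broadcast func(root [fieldparams.RootLength]byte, subnet uint64, sc *ethpb.DataColumnSidecar) error,
) error {
    for subnet, sidecar := range bySubnet {
        if err := broadcast(root, subnet, sidecar); err != nil {
            return err
        }
    }
    return nil
}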

View File

@@ -1,7 +1,6 @@
package p2p
import (
"net"
"time"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
@@ -11,56 +10,34 @@ import (
// This is the default queue size used if we have specified an invalid one.
const defaultPubsubQueueSize = 600
const (
// defaultConnManagerPruneAbove sets the number of peers where ConnectionManager
// will begin to internally prune peers. This value is set based on the internal
// value of the libp2p DefaultConnectionManager "high water mark". The "low water mark"
// is the number of peers where ConnManager will stop pruning. This value is computed
// by subtracting connManagerPruneAmount from the high water mark.
defaultConnManagerPruneAbove = 192
connManagerPruneAmount = 32
)
// Config for the p2p service. These parameters are set from application level flags
// to initialize the p2p service.
type Config struct {
NoDiscovery bool
EnableUPnP bool
StaticPeerID bool
DisableLivenessCheck bool
StaticPeers []string
Discv5BootStrapAddrs []string
RelayNodeAddr string
LocalIP string
HostAddress string
HostDNS string
PrivateKey string
DataDir string
DiscoveryDir string
QUICPort uint
TCPPort uint
UDPPort uint
PingInterval time.Duration
MaxPeers uint
QueueSize uint
AllowListCIDR string
DenyListCIDR []string
IPColocationWhitelist []*net.IPNet
StateNotifier statefeed.Notifier
DB db.ReadOnlyDatabaseWithSeqNum
ClockWaiter startup.ClockWaiter
}
// connManagerLowHigh picks the low and high water marks for the connection manager based
// on the MaxPeers setting. The high water mark will be at least the default high water mark
// (192), or MaxPeers + 32, whichever is higher. The low water mark is set to be 32 less than
// the high water mark. This is done to ensure the ConnManager never prunes peers that the
// node has connected to based on the MaxPeers setting.
func (cfg *Config) connManagerLowHigh() (int, int) {
maxPeersPlusMargin := int(cfg.MaxPeers) + connManagerPruneAmount
high := max(maxPeersPlusMargin, defaultConnManagerPruneAbove)
low := high - connManagerPruneAmount
return low, high
NoDiscovery bool
EnableUPnP bool
StaticPeerID bool
DisableLivenessCheck bool
StaticPeers []string
Discv5BootStrapAddrs []string
RelayNodeAddr string
LocalIP string
HostAddress string
HostDNS string
PrivateKey string
DataDir string
DiscoveryDir string
QUICPort uint
TCPPort uint
UDPPort uint
PingInterval time.Duration
MaxPeers uint
QueueSize uint
AllowListCIDR string
DenyListCIDR []string
StateNotifier statefeed.Notifier
DB db.ReadOnlyDatabaseWithSeqNum
ClockWaiter startup.ClockWaiter
}
// validateConfig validates whether the values provided are accurate and will set
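// Worked illustration (not part of the file): with the constants above
// (defaultConnManagerPruneAbove = 192, connManagerPruneAmount = 32) the
// connManagerLowHigh helper yields, for example:
//
//    MaxPeers = 100 -> high = max(100+32, 192) = 192, low = 192-32 = 160
//    MaxPeers = 250 -> high = max(250+32, 192) = 282, low = 282-32 = 250
//
// so the low water mark never drops below MaxPeers and the connection manager
// never prunes peers the node is allowed to hold.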

View File

@@ -10,8 +10,6 @@ import (
"github.com/sirupsen/logrus"
)
var errNoCustodyInfo = errors.New("no custody info available")
var _ CustodyManager = (*Service)(nil)
// EarliestAvailableSlot returns the earliest available slot.
@@ -32,7 +30,7 @@ func (s *Service) CustodyGroupCount() (uint64, error) {
defer s.custodyInfoLock.Unlock()
if s.custodyInfo == nil {
return 0, errNoCustodyInfo
return 0, errors.New("no custody info available")
}
return s.custodyInfo.groupCount, nil

View File

@@ -79,7 +79,7 @@ func (quicProtocol) ENRKey() string { return quickProtocolEnrKey }
func newListener(listenerCreator func() (*discover.UDPv5, error)) (*listenerWrapper, error) {
rawListener, err := listenerCreator()
if err != nil {
return nil, errors.Wrap(err, "create new listener")
return nil, errors.Wrap(err, "could not create new listener")
}
return &listenerWrapper{
listener: rawListener,
@@ -536,7 +536,7 @@ func (s *Service) createListener(
int(s.cfg.QUICPort),
)
if err != nil {
return nil, errors.Wrap(err, "create local node")
return nil, errors.Wrap(err, "could not create local node")
}
bootNodes := make([]*enode.Node, 0, len(s.cfg.Discv5BootStrapAddrs))
@@ -604,27 +604,13 @@ func (s *Service) createLocalNode(
localNode = initializeSyncCommSubnets(localNode)
if params.FuluEnabled() {
// TODO: Replace this quick fix with a proper synchronization scheme (chan?)
const delay = 1 * time.Second
var custodyGroupCount uint64
err := errNoCustodyInfo
for errors.Is(err, errNoCustodyInfo) {
custodyGroupCount, err = s.CustodyGroupCount()
if errors.Is(err, errNoCustodyInfo) {
log.WithField("delay", delay).Debug("No custody info available yet, retrying later")
time.Sleep(delay)
continue
}
if err != nil {
return nil, errors.Wrap(err, "retrieve custody group count")
}
custodyGroupCountEntry := peerdas.Cgc(custodyGroupCount)
localNode.Set(custodyGroupCountEntry)
custodyGroupCount, err := s.CustodyGroupCount()
if err != nil {
return nil, errors.Wrap(err, "could not retrieve custody group count")
}
custodyGroupCountEntry := peerdas.Cgc(custodyGroupCount)
localNode.Set(custodyGroupCountEntry)
}
if s.cfg != nil && s.cfg.HostAddress != "" {
@@ -666,7 +652,7 @@ func (s *Service) startDiscoveryV5(
}
wrappedListener, err := newListener(createListener)
if err != nil {
return nil, errors.Wrap(err, "create listener")
return nil, errors.Wrap(err, "could not create listener")
}
record := wrappedListener.Self()
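// Hypothetical sketch only (nothing below exists in the repository): the TODO in
// createLocalNode above hints at replacing the sleep-and-retry loop with a proper
// synchronization primitive such as a channel. With an invented custodyReady
// channel that is closed once custody info is first set, the wait could look like:
func waitForCustodyGroupCountSketch(ctx context.Context, custodyReady <-chan struct{}, count func() (uint64, error)) (uint64, error) {
    select {
    case <-custodyReady:
        // Custody info is available; a single read is now enough.
        return count()
    case <-ctx.Done():
        return 0, ctx.Err()
    }
}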

View File

@@ -3,7 +3,6 @@ package p2p
import (
"context"
"math"
"net"
"reflect"
"strings"
"time"
@@ -76,7 +75,7 @@ var (
tenEpochs = 10 * oneEpochDuration()
)
func peerScoringParams(colocationWhitelist []*net.IPNet) (*pubsub.PeerScoreParams, *pubsub.PeerScoreThresholds) {
func peerScoringParams() (*pubsub.PeerScoreParams, *pubsub.PeerScoreThresholds) {
thresholds := &pubsub.PeerScoreThresholds{
GossipThreshold: -4000,
PublishThreshold: -8000,
@@ -84,7 +83,6 @@ func peerScoringParams(colocationWhitelist []*net.IPNet) (*pubsub.PeerScoreParam
AcceptPXThreshold: 100,
OpportunisticGraftThreshold: 5,
}
scoreParams := &pubsub.PeerScoreParams{
Topics: make(map[string]*pubsub.TopicScoreParams),
TopicScoreCap: 32.72,
@@ -94,7 +92,7 @@ func peerScoringParams(colocationWhitelist []*net.IPNet) (*pubsub.PeerScoreParam
AppSpecificWeight: 1,
IPColocationFactorWeight: -35.11,
IPColocationFactorThreshold: 10,
IPColocationFactorWhitelist: colocationWhitelist,
IPColocationFactorWhitelist: nil,
BehaviourPenaltyWeight: -15.92,
BehaviourPenaltyThreshold: 6,
BehaviourPenaltyDecay: scoreDecay(tenEpochs),

View File

@@ -6,7 +6,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -52,7 +51,7 @@ type (
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error
BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error
BroadcastDataColumnSidecar(columnSubnet uint64, dataColumnSidecar blocks.VerifiedRODataColumn) error
BroadcastDataColumnSidecar(root [fieldparams.RootLength]byte, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.

View File

@@ -18,7 +18,6 @@ var (
"lodestar",
"js-libp2p",
"rust-libp2p",
"erigon/caplin",
}
p2pPeerCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "p2p_peer_count",

View File

@@ -13,7 +13,6 @@ import (
mplex "github.com/libp2p/go-libp2p-mplex"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/p2p/net/connmgr"
"github.com/libp2p/go-libp2p/p2p/security/noise"
libp2pquic "github.com/libp2p/go-libp2p/p2p/transport/quic"
libp2ptcp "github.com/libp2p/go-libp2p/p2p/transport/tcp"
@@ -59,22 +58,6 @@ func MultiAddressBuilder(ip net.IP, tcpPort, quicPort uint) ([]ma.Multiaddr, err
return multiaddrs, nil
}
// setConnManagerOption sets the connection manager option for libp2p based on the
// MaxPeers setting in the p2p config. If MaxPeers is set to a value higher than the
// default high water mark, we create a new connection manager with a high water mark
// that is higher than MaxPeers. Otherwise, we do not set a connection manager option
// and allow the libp2p fallback defaults to be applied. Rationale below:
// see: https://github.com/OffchainLabs/prysm/issues/15607
func setConnManagerOption(cfg *Config, opts []libp2p.Option) ([]libp2p.Option, error) {
low, high := cfg.connManagerLowHigh()
cm, err := connmgr.NewConnManager(low, high)
if err != nil {
return nil, errors.Wrap(err, "new ConnManager")
}
opts = append(opts, libp2p.ConnectionManager(cm))
return opts, nil
}
// buildOptions for the libp2p host.
func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Option, error) {
cfg := s.cfg
@@ -101,6 +84,7 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
if err != nil {
return nil, errors.Wrapf(err, "cannot get ID from public key: %s", ifaceKey.GetPublic().Type().String())
}
log.Infof("Running node with peer id of %s ", id.String())
options := []libp2p.Option{
@@ -114,10 +98,6 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
libp2p.Security(noise.ID, noise.New),
libp2p.Ping(false), // Disable Ping Service.
}
options, err = setConnManagerOption(s.cfg, options)
if err != nil {
return nil, errors.Wrap(err, "set connection manager option")
}
if features.Get().EnableQUIC {
options = append(options, libp2p.Transport(libp2pquic.NewTransport))

View File

@@ -18,7 +18,6 @@ import (
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/connmgr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
@@ -135,59 +134,6 @@ func TestDefaultMultiplexers(t *testing.T) {
assert.Equal(t, protocol.ID("/mplex/6.7.0"), cfg.Muxers[1].ID)
}
func TestSetConnManagerOption(t *testing.T) {
cases := []struct {
name string
maxPeers uint
highWater int
}{
{
name: "MaxPeers lower than default high water mark",
maxPeers: defaultConnManagerPruneAbove - 1,
highWater: defaultConnManagerPruneAbove,
},
{
name: "MaxPeers equal to default high water mark",
maxPeers: defaultConnManagerPruneAbove,
highWater: defaultConnManagerPruneAbove,
},
{
name: "MaxPeers higher than default high water mark",
maxPeers: defaultConnManagerPruneAbove + 1,
highWater: defaultConnManagerPruneAbove + 1 + connManagerPruneAmount,
},
}
for _, tt := range cases {
t.Run(tt.name, func(t *testing.T) {
cfg := &Config{MaxPeers: tt.maxPeers}
opts, err := setConnManagerOption(cfg, []libp2p.Option{})
assert.NoError(t, err)
_, high := cfg.connManagerLowHigh()
require.Equal(t, true, high > int(cfg.MaxPeers))
var libCfg libp2p.Config
require.NoError(t, libCfg.Apply(append(opts, libp2p.FallbackDefaults)...))
checkLimit(t, libCfg.ConnManager, high)
})
}
}
type connLimitGetter int
func (m connLimitGetter) GetConnLimit() int {
return int(m)
}
// CheckLimit will return an error if the result of calling lg.GetConnLimit is greater than
// the high water mark. So by checking the result of calling it with a value equal to and lower
// than the expected value, we can determine the value it holds internally.
func checkLimit(t *testing.T, cm connmgr.ConnManager, expected int) {
require.NoError(t, cm.CheckLimit(connLimitGetter(expected)), "Connection manager limit check failed")
if err := cm.CheckLimit(connLimitGetter(expected - 1)); err == nil {
t.Errorf("connection manager limit is below the expected value of %d", expected)
}
}
func TestMultiAddressBuilderWithID(t *testing.T) {
testCases := []struct {
name string

View File

@@ -25,7 +25,6 @@ package peers
import (
"context"
"math"
"net"
"sort"
"strings"
"time"
@@ -88,12 +87,11 @@ const (
// Status is the structure holding the peer status information.
type Status struct {
ctx context.Context
scorers *scorers.Service
store *peerdata.Store
ipTracker map[string]uint64
rand *rand.Rand
ipColocationWhitelist []*net.IPNet
ctx context.Context
scorers *scorers.Service
store *peerdata.Store
ipTracker map[string]uint64
rand *rand.Rand
}
// StatusConfig represents peer status service params.
@@ -102,8 +100,6 @@ type StatusConfig struct {
PeerLimit int
// ScorerParams holds peer scorer configuration params.
ScorerParams *scorers.Config
// IPColocationWhitelist contains CIDR ranges that are exempt from IP colocation limits.
IPColocationWhitelist []*net.IPNet
}
// NewStatus creates a new status entity.
@@ -111,13 +107,11 @@ func NewStatus(ctx context.Context, config *StatusConfig) *Status {
store := peerdata.NewStore(ctx, &peerdata.StoreConfig{
MaxPeers: maxLimitBuffer + config.PeerLimit,
})
return &Status{
ctx: ctx,
store: store,
scorers: scorers.NewService(ctx, store, config.ScorerParams),
ipTracker: map[string]uint64{},
ipColocationWhitelist: config.IPColocationWhitelist,
ctx: ctx,
store: store,
scorers: scorers.NewService(ctx, store, config.ScorerParams),
ipTracker: map[string]uint64{},
// Random generator used to calculate dial backoff period.
// It is ok to use deterministic generator, no need for true entropy.
rand: rand.NewDeterministicGenerator(),
@@ -1052,13 +1046,6 @@ func (p *Status) isfromBadIP(pid peer.ID) error {
if val, ok := p.ipTracker[ip.String()]; ok {
if val > CollocationLimit {
// Check if IP is in the whitelist
for _, ipNet := range p.ipColocationWhitelist {
if ipNet.Contains(ip) {
// IP is whitelisted, skip colocation limit check
return nil
}
}
return errors.Errorf(
"colocation limit exceeded: got %d - limit %d for peer %v with IP %v",
val, CollocationLimit, pid, ip.String(),

View File

@@ -145,7 +145,7 @@ func (s *Service) pubsubOptions() []pubsub.Option {
pubsub.WithPeerOutboundQueueSize(int(s.cfg.QueueSize)),
pubsub.WithMaxMessageSize(int(MaxMessageSize())), // lint:ignore uintcast -- Max Message Size is a config value and is naturally bounded by networking limitations.
pubsub.WithValidateQueueSize(int(s.cfg.QueueSize)),
pubsub.WithPeerScore(peerScoringParams(s.cfg.IPColocationWhitelist)),
pubsub.WithPeerScore(peerScoringParams()),
pubsub.WithPeerScoreInspect(s.peerInspector, time.Minute),
pubsub.WithGossipSubParams(pubsubGossipParam()),
pubsub.WithRawTracer(gossipTracer{host: s.host}),

View File

@@ -178,8 +178,7 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
s.pubsub = gs
s.peers = peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: int(s.cfg.MaxPeers),
IPColocationWhitelist: s.cfg.IPColocationWhitelist,
PeerLimit: int(s.cfg.MaxPeers),
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: maxBadResponses,

View File

@@ -25,7 +25,6 @@ go_library(
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -6,7 +6,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -169,7 +168,7 @@ func (*FakeP2P) BroadcastLightClientFinalityUpdate(_ context.Context, _ interfac
}
// BroadcastDataColumnSidecar -- fake.
func (*FakeP2P) BroadcastDataColumnSidecar(_ uint64, _ blocks.VerifiedRODataColumn) error {
func (*FakeP2P) BroadcastDataColumnSidecar(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
return nil
}

View File

@@ -5,7 +5,7 @@ import (
"sync"
"sync/atomic"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"google.golang.org/protobuf/proto"
@@ -63,7 +63,7 @@ func (m *MockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
}
// BroadcastDataColumnSidecar broadcasts a data column for mock.
func (m *MockBroadcaster) BroadcastDataColumnSidecar(uint64, blocks.VerifiedRODataColumn) error {
func (m *MockBroadcaster) BroadcastDataColumnSidecar([fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
m.BroadcastCalled.Store(true)
return nil
}

Some files were not shown because too many files have changed in this diff