Compare commits


79 Commits

Author SHA1 Message Date
Preston Van Loon
7d0a3566da Fixes required to sync fusaka-devnet-4 2025-08-11 17:48:58 -05:00
Preston Van Loon
65affd47ca Test devnet-4 changes with the ENR changes from PR 15501 2025-08-11 16:35:01 -05:00
Preston Van Loon
42fb56e498 Update log field to spell out 'nfd' as 'NextForkDigest' 2025-08-11 12:21:14 -05:00
Potuz
ab80865286 add testcase 2025-08-11 14:00:33 -03:00
Preston Van Loon
57d44bd33d Add tests for fulu NFD key 2025-08-11 11:48:30 -05:00
Kasey Kirkham
ff2994c24c fusaka fork digest enr changes 2025-08-11 11:18:11 -05:00
kasey
3da40ecd9c Refactor fork schedules (#15490)
* overhaul fork schedule management for bpos

* Unify log

* Radek's comments

* Use arg config to determine previous epoch, with regression test

* Remove unnecessary NewClock. @potuz feedback

* Continuation of previous commit: Remove unnecessary NewClock. @potuz feedback

* Remove VerifyBlockHeaderSignatureUsingCurrentFork

* cosmetic changes

* Remove unnecessary copy. entryWithForkDigest is passed by value, not by pointer, so it should be fine

* Reuse ErrInvalidTopic from p2p package

* Unskip TestServer_GetBeaconConfig

* Resolve TODO about forkwatcher in local mode

* remove Copy()

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: rkapka <radoslaw.kapka@gmail.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-08-11 16:08:53 +00:00
terence
f7f992c256 gofmt: fix formatting issues in test files (#15577) 2025-08-11 15:56:53 +00:00
Manu NALEPA
978ffa4780 Add tests for sendDataColumnSidecarsRequest. 2025-08-11 00:35:40 +02:00
Manu NALEPA
407bf6785f Fix Potuz's comment. 2025-08-10 23:07:56 +02:00
Manu NALEPA
c45230b455 Fix Potuz's comment. 2025-08-10 22:47:00 +02:00
Manu NALEPA
0af6591001 Fix Potuz's comment. 2025-08-10 22:41:00 +02:00
Manu NALEPA
1af249da31 Fix Potuz's comment. 2025-08-10 22:02:22 +02:00
Manu NALEPA
901f6b6e6c Fix Potuz's comment. 2025-08-10 21:59:17 +02:00
Manu NALEPA
a3cdda56d9 Fix Potuz's comment. 2025-08-10 21:53:54 +02:00
Manu NALEPA
cf3200fa06 Fix Potuz's comment. 2025-08-10 21:47:55 +02:00
Manu NALEPA
04cafa1959 Partially fix Potuz's comment. 2025-08-10 21:39:27 +02:00
Manu NALEPA
498b945a61 Fix Satyajit's comment. 2025-08-10 21:35:39 +02:00
terence
09565e0c3a Refactor attestation cache key generation outside critical section (#15572)
* Refactor attestation cache key generation outside critical section

* Improve attestation cache key error handling and logging
2025-08-10 17:41:30 +00:00
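
The pattern behind this change, as a minimal self-contained sketch (sha256 and the byte-slice cache below are stand-ins for Prysm's hash-tree-root keys and attestation types):

package attcache

import (
    "crypto/sha256"
    "sync"
)

type attCache struct {
    mu    sync.Mutex
    items map[[32]byte][]byte
}

// insert computes the cache key before taking the lock, so the hashing
// work no longer serializes every other reader and writer of the cache.
func (c *attCache) insert(att []byte) {
    key := sha256.Sum256(att) // done outside the critical section
    c.mu.Lock()
    defer c.mu.Unlock()
    c.items[key] = att // only the map write happens under the lock
}
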
Jun Song
05a3736310 refactor: removing redundant code in htrutils.go (#15453)
* refactor: use auto-generated HashTreeRoot functions in htrutil.go

* refactor: use type alias for Transaction & use SliceRoot for TransactionsRoot

* changelog

* fix: TransactionsRoot receives raw 2d bytes as an argument

* fix: handle nil argument

* test: add nil test for fork and checkpoint

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-10 10:44:01 +00:00
kasey
84c8653a52 initialize genesis data asap at node start (#15470)
* initialize genesis data asap at node start

* add genesis validation tests with embedded state verification

* Add test for hardcoded mainnet genesis validator root and time from init() function

* Add test for UnmarshalState in encoding/ssz/detect/configfork.go

* Add tests for genesis.Initialize

* Move genesis/embedded to genesis/internal/embedded

* Gazelle / BUILD fix

* James feedback

* Fix lint

* Revert lock

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-08-10 02:09:40 +00:00
Tomás Andróil
921ff23c6b refactor: use central flag name for validator HTTP server port (#15236)
* Update validator.go

* Create tomasandroil_replace_grpc_gateway_flag_name.md
2025-08-09 10:35:04 +00:00
Manu NALEPA
90317ba5b5 Fix Potuz's comment. 2025-08-09 01:24:00 +02:00
Radosław Kapka
87235fb384 Don't submit duplicate aggregated SignedContributionAndProof messages (#15571) 2025-08-08 22:57:36 +00:00
Manu NALEPA
b35358f440 Fix Potuz's comment. 2025-08-09 00:47:55 +02:00
Manu NALEPA
b926066495 Fix Potuz's comment. 2025-08-08 20:59:28 +02:00
Manu NALEPA
263ddf9a7b PeerDAS: Implement sync 2025-08-08 20:05:12 +02:00
james-prysm
2ec5914b4a fixing builder version check (#15568)
* adding fix

* fixing test
2025-08-08 01:42:55 +00:00
Potuz
fe000e5629 Fix validateConsensus (#15548)
* Fix validateConsensus

Reported by NuConstruct

The stater package looks up a state by state root using the head state from the
blockchain package. However, this state is very unlikely to contain the
post-state root, since that is only added after slot processing. I assume
that essentially any REST endpoint that uses this mechanism to get head
is broken if it needs to fetch a state by state root.

This PR is a placeholder to verify this is the issue; here I just check
whether the NSC already has the post-state, since that cache will already
hold the processed state.

* Add changelog

* add fallback

* Fix tests
2025-08-07 13:08:40 +00:00
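
A loose sketch of the lookup order the description implies (every name here is a hypothetical stand-in for the stater package and the next-slot cache, NSC):

package stater

type beaconState struct{} // placeholder for Prysm's state type

// stateByRoot consults the NSC before falling back to the regular lookup.
// The head state rarely contains its own post-state root, since that root
// is only known after slot processing.
func stateByRoot(
    root [32]byte,
    nscPostState func([32]byte) *beaconState, // hypothetical NSC lookup
    fallback func([32]byte) (*beaconState, error), // pre-existing stater path
) (*beaconState, error) {
    if st := nscPostState(root); st != nil {
        return st, nil // post-state already cached after slot processing
    }
    return fallback(root)
}
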
Preston Van Loon
447a3d8add Update go to v1.24.6 (#15566) 2025-08-06 20:59:26 +00:00
Jun Song
b00aaef202 Persist metadata sequence number using Beacon DB (#15554)
* Add entry for sequence number in chain-metadata bucket & Basic getter/setter

* Mark p2p-metadata flag as deprecated

* Fix metaDataFromConfig: use DB instead to get seqnum

* Save sequence number after updating the metadata

* Fix beacon-chain/p2p unit tests: add DB in config

* Add changelog

* Add ReadOnlyDatabaseWithSeqNum

* Code suggestion from Manu

* Remove seqnum getter at interface

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-06 20:18:33 +00:00
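
For flavor, this is roughly what a bucket-backed getter/setter looks like with bbolt, which Prysm's beacon DB builds on (the key name and error handling are illustrative, not the PR's exact code; the sketch assumes the chain-metadata bucket already exists):

package kv

import (
    "encoding/binary"

    bolt "go.etcd.io/bbolt"
)

var (
    chainMetadataBucket = []byte("chain-metadata") // bucket named in the PR
    seqNumberKey        = []byte("seq-number")     // illustrative key name
)

// saveSeqNumber persists the p2p metadata sequence number.
func saveSeqNumber(db *bolt.DB, seq uint64) error {
    return db.Update(func(tx *bolt.Tx) error {
        buf := make([]byte, 8)
        binary.LittleEndian.PutUint64(buf, seq)
        return tx.Bucket(chainMetadataBucket).Put(seqNumberKey, buf)
    })
}

// seqNumber reads the value back; a missing key yields zero.
func seqNumber(db *bolt.DB) (uint64, error) {
    var seq uint64
    err := db.View(func(tx *bolt.Tx) error {
        if v := tx.Bucket(chainMetadataBucket).Get(seqNumberKey); v != nil {
            seq = binary.LittleEndian.Uint64(v)
        }
        return nil
    })
    return seq, err
}
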
Potuz
0f6070a866 Fix race on ReceiveBlock (#15565)
* Fix race on ReceiveBlock

If two `ReceiveBlock` routines are triggered with the same block (which can
happen if one is triggered over gossip and the other in init-sync), the
second routine may believe it is syncing the block for the first time. This
is because the `blocksBeingSynced` cache is not checked before being set,
and the first routine may not yet have put the block in forkchoice.

In the normal case this would not cause any trouble, as the second
forkchoice insertion is a noop by design. However, if the second routine
times out or hits any error in processing (for example, the engine will
return an error if we try to send FCU for an older head), then the second
routine will attempt to remove the inserted block from forkchoice. This
bricks the node: forkchoice refuses to remove a valid node, the root is
unconditionally removed from the DB, and the node ends up with a root that
is not in the DB but remains in forkchoice.

This PR just prevents the race.

As a follow-up, perhaps we can gate the DB rollback function to nodes that
are effectively not in forkchoice; alternatively, we could force removal
from forkchoice when rolling back from the DB (although this version is
complicated by possible accounting issues in forkchoice).

* Fix lint
2025-08-06 17:18:13 +00:00
Justin Traglia
2a09c9f681 Fix BlobsBundleV2 proofs limit (#15530)
* Assign max_cell_proofs_length the correct value

* Add changelog fragment

* Run update-go-pbs.sh

* Run update-go-ssz.sh

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-05 21:33:45 +00:00
Jun Song
9c6ccd67c1 Add Fulu case for saveStatesEfficientInternal (#15553)
* Add Fulu case for saveStatesEfficientInternal

* Add changelog

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-05 19:25:17 +00:00
Radosław Kapka
36e5d4926b Redesign the pending attestation queue (#15024)
* Redesign pending attestation queue

* fix bad signature test

* equality tests

* changelog <3

* rename functions

* change logs

* fix fuzzing

* fixes after rebasing

* build fix

* review

* James' review

* fix imports
2025-08-05 18:35:19 +00:00
terence
17b7d3ff12 Add fork choice check to pending attestations processing (#15547) 2025-08-05 16:08:47 +00:00
terence
fb2bceece8 beacon api: optimize val assignment lookup (#15558) 2025-08-05 15:23:37 +00:00
terence
d012ab653c Beacon api: fix proposer duty computation for fulu (#15534) 2025-08-05 15:17:02 +00:00
Preston Van Loon
46e7555c42 Update cc-debian11 to latest (#15562) 2025-08-04 16:17:13 +00:00
Preston Van Loon
181df3995e Update go to v1.24.5 (#15561) 2025-08-04 15:17:59 +00:00
Manu NALEPA
149e220b61 Validator custody: Update to the latest specification. (#15532)
* Validator custody: Update to the latest specification.

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: terence <terence@prysmaticlabs.com>

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-02 06:21:08 +00:00
Bastin
ae4b982a6c Fix finality update bugs & Move broadcast logic to LC Store (#15540)
* fix IsBetterFinalityUpdate and add tests

fix finality update bugs

* Update lightclient.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/core/light-client/lightclient.go

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-08-01 12:35:21 +00:00
Radosław Kapka
f330021785 Do not compare liveness response with LH (#15556)
* Do not compare liveness response with LH

* changelog <3
2025-07-31 14:37:36 +00:00
james-prysm
bd6b4ecd5b wrapping goodbye messages in goroutine to speed up node shutdown (#15542)
* wrapping goodbye messages in goroutine to speed up node shutdown

* fixing requirement
2025-07-31 12:52:31 +00:00
Potuz
d7d8764a91 Trigger payload attribute event on early blocks (#15541)
Currently the payload attribute event is triggered in
`forkchoiceUpdateWithExecution`. However, when we import an early block we
do not call this function; we make two calls to FCU instead. The first one
is on a locked path at the end of `postBlockProcess`, and it is made
without any payload attributes to avoid updating the shuffling caches.

The second call is made in `handleSecondFCUCall`, which calls
`notifyForkchoiceUpdate` directly, bypassing `forkchoiceUpdateWithExecution`;
yet this is the call that actually computes the payload attributes. So the
event handler is never called with the new attributes.

This PR moves the event trigger to the same place where we actually call
FCU with the computed payload attributes.

A note on forkchoice locking: since the calls are always made in a
goroutine, the routine will simply wait for forkchoice to be unlocked
before proceeding.

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-30 19:34:34 +00:00
Muzry
9b7f91d947 bugfix: submitPoolSyncCommitteeSignatures response inconsistent (#15516)
* fix: submitPoolSyncCommitteeSignatures response inconsistent

* update: bazel build file

* update: add changelog fragment file

* update api/server/structs/BUILD.bazel format

* update the unit test

* update: the error format

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-29 16:08:28 +00:00
terence
57e27199bd Fix builder bid version compatibility to support Electra bids with Fulu blocks (#15536) 2025-07-29 14:16:05 +00:00
Potuz
11ca766ed6 Add timing metric for PublishBlockV2 endpoint (#15539)
This commit adds a Prometheus histogram metric to measure the processing
duration of the PublishBlockV2 beacon API endpoint in milliseconds.

The metric covers the complete request processing time including:
- Request validation and parsing
- Block decoding (SSZ/JSON)
- Broadcast validation checks
- Block proposal through ProposeBeaconBlock
- All synchronous operations and awaited goroutines

Background operations that run in goroutines (block broadcasting, blob
sidecar processing) are included in the timing since the main function
waits for their completion before returning.

Files changed:
- beacon-chain/rpc/eth/beacon/metrics.go: New metric definition
- beacon-chain/rpc/eth/beacon/handlers.go: Timing instrumentation
- beacon-chain/rpc/eth/beacon/BUILD.bazel: Added metrics.go and Prometheus deps
- changelog/potuz_add_publishv2_metric.md: Changelog entry

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-28 18:20:57 +00:00
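
The instrumentation pattern, sketched minimally (the metric name and handler body below are illustrative; the PR's actual definitions live in beacon-chain/rpc/eth/beacon/metrics.go and handlers.go):

package beacon

import (
    "net/http"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var publishBlockV2Duration = promauto.NewHistogram(prometheus.HistogramOpts{
    Name: "publish_block_v2_duration_milliseconds", // illustrative name
    Help: "Processing time of the PublishBlockV2 endpoint in milliseconds.",
})

// PublishBlockV2 wraps the whole handler in the timer, so early error
// returns and awaited goroutines are measured too.
func PublishBlockV2(w http.ResponseWriter, r *http.Request) {
    start := time.Now()
    defer func() {
        publishBlockV2Duration.Observe(float64(time.Since(start).Milliseconds()))
    }()
    // ... validation, SSZ/JSON decoding, broadcast checks, ProposeBeaconBlock ...
}
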
terence
cd6cc76d58 Add BLOB_SCHEDULE to eth/v1/config/spec endpoint (#15485)
* Beacon api: fix get config blob schedule

* Numbers should be string instead of float

* more generalized implementation for nested objects

* removing unused function

* fixing linting

* removing redundant switch fields

* adding additional log for debugging

* Fix build.

* adding skip function based on kasey's recommendation

* fixing test

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-07-25 19:06:10 +00:00
terence
fc4a1469f0 Include requested state root in StateNotFoundError message for debugging (#15533) 2025-07-25 17:50:26 +00:00
Justin Traglia
f3dc4c283e Improve das-core functions (#15524)
* Improve das-core functions

* Add changelog fragment

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-25 15:35:05 +00:00
raulk
6ddf271688 fix(beacon-api): return syncnets and cgc in Metadata. (#15506)
* fix(beacon-api): return syncnets and cgc in Metadata.

* changelog

* fixing implementation and adding unit tests

* gaz

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-07-25 15:26:56 +00:00
Justin Traglia
af7afba26e Move reconstruction lock to prevent unnecessary work (#15528)
* Move reconstruction lock to prevent unnecessary work

* Add changelog fragment
2025-07-24 21:06:53 +00:00
james-prysm
b740a4ff83 implements the proposer lookahead api (#15525)
* implements the proposer lookahead api

* radek's feedback
2025-07-24 15:00:43 +00:00
Radosław Kapka
385c2224e8 Return zero value for Eth-Consensus-Block-Value on error (#15526) 2025-07-24 13:58:23 +00:00
Justin Traglia
04b39d1a4d Fix some nits associated with data column sidecar verification (#15521)
* Fix some nits associated with data column sidecar verification

* Add changelog fragment
2025-07-23 20:02:32 +00:00
james-prysm
4c40caf7fd persistent enode db to persist seq number (#15519)
* adding persistent db path for persistent seq data information

* fixing typo

* Update changelog/James-prysm_persistent-seq-number.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update beacon-chain/p2p/discovery_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* adding log updated based on manu's suggestion

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-23 13:55:05 +00:00
Justin Traglia
bc209cadab Use MinEpochsForDataColumnSidecarsRequest in WithinDAPeriod for Fulu (#15522)
* Use MinEpochsForDataColumnSidecarsRequest in WithinDAPeriod for Fulu

* Make fixes

* Add blank line
2025-07-23 13:35:27 +00:00
Justin Traglia
856742ff68 Update links to consensus-specs to point to master branch (#15523)
* Update links to consensus-specs to point to master branch

* Add changelog fragment
2025-07-23 08:32:09 +00:00
Manu NALEPA
abe16a9cb4 Fix downscore by peers when a node gracefully stops. (#15505)
* Log when downscoring a peer.

* `validateSequenceNumber`: Downscore peer in function, clarify and add logs

* `AddConnectionHandler`: Move the majority of the code to the outer scope (no functional change).

* `disconnectBadPeer`: Improve log.

* `sendRPCStatusRequest`: Improve log.

* `findPeersWithSubnets`: Add preventive peer filtering.
(As done in `s.findPeers`.)

* `Stop`: Use one `defer` for the whole function.
Reminder: `defer`s are executed backwards.

* `Stop`: Send a goodbye message to all connected peers when stopping the service.

Before this commit, stopping the service did not send any goodbye message to connected peers. The issue with this approach is that each peer still thinks we are alive and keeps trying to communicate with us. Unfortunately, because we are offline, we cannot respond. Because of that, the peer starts to downscore us and eventually bans us. As a consequence, when we restart, the peer refuses our connection request.

By sending a goodbye message when stopping the service, we ensure the peer stops expecting anything from us. When we restart, everything is all right.

* `ConnectedF` and `DisconnectedF`: Work around a very probable libp2p bug by preventing outbound connections to very recently disconnected peers.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* `AddDisconnectionHandler`: Handle multiple close calls to `DisconnectedF` for the same peer.
2025-07-22 20:15:18 +00:00
james-prysm
77958022e7 removing ssz-only flag ( reverting feature) and fix accept header middleware (#15433)
* removing ssz-only flag

* gaz

* reverting other uses of sszonly

* gaz

* adding kasey and radek's suggestions

* update changelog

* adding test

* radek advice with new headers and tests

* adding logs and fixing comments

* adding logs and fixing comments

* gaz

* Update validator/client/beacon-api/rest_handler_client.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/apiutil/header.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/apiutil/header.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's comments

* adding another failing case based on radek's suggestion

* another unit test

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-07-22 16:06:51 +00:00
kasey
c21fae239f don't use gzip handler for sse (#15517)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-07-22 13:54:25 +00:00
Bastin
deb3ba7f21 Remove unused parameter from LC functions (#15514)
* remove unused parameter

* format code
2025-07-21 13:50:21 +00:00
Manu NALEPA
f288a3c0e1 Workaround TestHostIsResolved by using "more reliable" DNS resolver. (#15515) 2025-07-21 13:37:05 +00:00
Bastin
a4ca6355d0 Abstract away LC update validation rules into the LC store (#15508)
* abstract update validation into IsBetter funcs

* Update beacon-chain/core/light-client/store.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update beacon-chain/core/light-client/store.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* clean up

* clean up

* clean up again

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-21 09:37:02 +00:00
Manu NALEPA
cd0821d026 Subnets subscription: Prevent dynamic subscribing from blocking when not enough peers per subnet are found. (#15471)
* Subnets subscription: Prevent dynamic subscribing from blocking when not enough peers per subnet are found.

* `subscribeWithParameters`: Use struct to avoid too many function parameters (no functional changes).

* Optimise subnets search.

Currently, when we are looking for peers in, say, data column sidecar subnets 3, 6 and 7, we first look for peers in subnet 3.
If, during the crawl, we meet peers on subnet 6, we discard them (because we are exclusively looking for peers on subnet 3).
Once done, we start again for peers on subnet 6.

This commit optimizes that by looking for peers that satisfy our constraints in a single pass.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James's comment.

* Simplify following James' comment.

* Fix James' comment.

* Update beacon-chain/sync/rpc_goodbye.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update config/params/config.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update beacon-chain/sync/subscriber.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix Preston's comment.

* Fix Preston's comment.

* `TestService_BroadcastDataColumn`: Re-add sleep 50 ms.

* Fix Preston's comment.

* Update beacon-chain/p2p/subnets.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-07-18 17:19:15 +00:00
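
The optimization reduces to a single filtering pass over discovered peers, roughly like this (simplified types; Prysm reads the subnets from each peer's ENR):

package p2p

type candidate struct {
    id      string
    subnets map[uint64]bool
}

// filterOnePass keeps any discovered peer that serves at least one subnet
// we still need, instead of running one crawl per subnet and discarding
// useful peers met along the way.
func filterOnePass(discovered []candidate, wanted map[uint64]bool) []candidate {
    var keep []candidate
    for _, c := range discovered {
        for s := range wanted {
            if c.subnets[s] {
                keep = append(keep, c)
                break
            }
        }
    }
    return keep
}
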
Bastin
8b53887891 Save LC Bootstraps only on finalized checkpoints (#15497)
* Unify LC API (updates)

* Remove unused fields in LC beacon API server

* bootstraps only on checkpoints

* Update beacon-chain/blockchain/receive_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* move tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-07-18 15:53:31 +00:00
Radosław Kapka
8623a144d9 Write Content-Encoding header in the response properly when gzip encoding is requested (#15499)
* Write gzip header properly

* changelog

* regression test
2025-07-17 19:14:48 +00:00
Bastin
f3314d2d24 Abstract validation logic for saving LC updates into the store function (#15504)
* Unify LC API (updates)

* Remove unused fields in LC beacon API server

* refactor lc logic into core

* fix tests
2025-07-17 14:58:18 +00:00
Bastin
bcd65e7a4d Remove extra lc server fields (#15493)
* Unify LC API (updates)

* Remove unused fields in LC beacon API server
2025-07-17 11:50:16 +00:00
Bastin
dce89a1627 Unify LC API 2/2 (updates) (#15488)
* Unify LC API (updates)

* Update beacon-chain/core/light-client/store.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-07-17 10:59:51 +00:00
Manu NALEPA
09485c2062 Add bundle v2 support for submit blind block (#15503)
* Add bundle v2 support for submit blind block

* add tests

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2025-07-17 07:51:28 +00:00
Manu NALEPA
9e014da0b9 Log: Add milliseconds to log timestamps (#15496) 2025-07-16 10:07:12 +00:00
Manu NALEPA
d8fedacc26 beaconBlockSubscriber: Implement data column sidecars reconstruction with data retrieved from the execution client when receiving a block via gossip. (#15483) 2025-07-16 06:00:23 +00:00
Potuz
3def16caaa Fix InitializeProposerLookahead (#15450)
* Fix InitializeProposerLookahead

Get the right Active validator indices for each epoch after the fork
transition.

Co-Authored-By: Claude <noreply@anthropic.com>

* Add test

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-16 03:16:11 +00:00
Rose Jethani
4e5bfa9760 Fixes server ignores request to gzip data (#14982)
* files changed

* Update api/server/middleware/middleware.go

* fixed AcceptEncodingHeaderHandler

* Update changelog/rose2221-develop.md

* Added tests

* Update api/server/middleware/middleware_test.go

* updated bazel

* formatting

* fixed e2e_test

* zip only json

* zip only json

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-07-15 16:39:20 +00:00
terence
70ac53f991 fix(execution): skip genesis block retrieval when EIP-6110 is active (#15494) 2025-07-15 16:00:53 +00:00
terence
6f5ff03b42 optimize data column seen cache memory usage with slot-aware pruning (#15477)
* optimize data column seen cache memory usage with slot-aware pruning

* Manu's feedback

* Fix lint
2025-07-15 16:00:36 +00:00
Preston Van Loon
499d27b6ae Use time.Time instead of uint64 for genesis time (#15419)
* Convert genesis times from seconds to time.Time

* Fixing failed forkchoice tests in a new commit so it doesn't get worse

Fixing failed spectest tests in a new commit so it doesn't get worse

Fixing forkchoice tests, then spectests

* Fixing forkchoice tests, then spectests. Now asking for help...

* Fix TestForkChoice_GetProposerHead

* Fix broken build

* Resolve TODO(preston) items

* Changelog fragment

* Resolve TODO(preston) items again

* Resolve lint issues

* Use consistent field names for sinceSlotStart (no spaces)

* Manu's feedback

* Renamed StartTime -> UnsafeStartTime, marked as deprecated because it doesn't handle overflow scenarios.
Renamed SlotTime -> StartTime
Renamed SlotAt -> At
Handled the error in cases where StartTime was used.

@james-prysm feedback

* Revert beacon-chain/blockchain/receive_block_test.go from 1b7844de

* Fixing issues after rebase

* Accepted suggestions from @potuz

* Remove CanonicalHeadSlot from merge conflicts

---------

Co-authored-by: potuz <potuz@prysmaticlabs.com>
2025-07-14 21:04:50 +00:00
557 changed files with 15784 additions and 9865 deletions


@@ -190,7 +190,7 @@ load("@rules_oci//oci:pull.bzl", "oci_pull")
# A multi-arch base image
oci_pull(
name = "linux_debian11_multiarch_base", # Debian bullseye
digest = "sha256:b82f113425c5b5c714151aaacd8039bc141821cdcd3c65202d42bdf9c43ae60b", # 2023-12-12
digest = "sha256:55a5e011b2c4246b4c51e01fcc2b452d151e03df052e357465f0392fcd59fddf",
image = "gcr.io/prysmaticlabs/distroless/cc-debian11",
platforms = [
"linux/amd64",
@@ -208,7 +208,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.24.0",
go_version = "1.24.6",
nogo = "@//:nogo",
)


@@ -2,18 +2,28 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["common.go"],
srcs = [
"common.go",
"header.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/apiutil",
visibility = ["//visibility:public"],
deps = ["//consensus-types/primitives:go_default_library"],
deps = [
"//consensus-types/primitives:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["common_test.go"],
srcs = [
"common_test.go",
"header_test.go",
],
embed = [":go_default_library"],
deps = [
"//consensus-types/primitives:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

api/apiutil/header.go (new file, 122 lines)

@@ -0,0 +1,122 @@
package apiutil
import (
"mime"
"sort"
"strconv"
"strings"
log "github.com/sirupsen/logrus"
)
type mediaRange struct {
mt string // canonicalised mediatype, e.g. "application/json"
q float64 // quality factor (0–1)
raw string // original string useful for logging/debugging
spec int // 2=exact, 1=type/*, 0=*/*
}
func parseMediaRange(field string) (mediaRange, bool) {
field = strings.TrimSpace(field)
mt, params, err := mime.ParseMediaType(field)
if err != nil {
log.WithError(err).Debug("Failed to parse header field")
return mediaRange{}, false
}
r := mediaRange{mt: mt, q: 1, spec: 2, raw: field}
if qs, ok := params["q"]; ok {
v, err := strconv.ParseFloat(qs, 64)
if err != nil || v < 0 || v > 1 {
log.WithField("q", qs).Debug("Invalid quality factor (01)")
return mediaRange{}, false // skip invalid entry
}
r.q = v
}
switch {
case mt == "*/*":
r.spec = 0
case strings.HasSuffix(mt, "/*"):
r.spec = 1
}
return r, true
}
func hasExplicitQ(r mediaRange) bool {
return strings.Contains(strings.ToLower(r.raw), ";q=")
}
// ParseAccept returns media ranges sorted by q (desc) then specificity.
func ParseAccept(header string) []mediaRange {
if header == "" {
return []mediaRange{{mt: "*/*", q: 1, spec: 0, raw: "*/*"}}
}
var out []mediaRange
for _, field := range strings.Split(header, ",") {
if r, ok := parseMediaRange(field); ok {
out = append(out, r)
}
}
sort.SliceStable(out, func(i, j int) bool {
ei, ej := hasExplicitQ(out[i]), hasExplicitQ(out[j])
if ei != ej {
return ei // explicit beats implicit
}
if out[i].q != out[j].q {
return out[i].q > out[j].q
}
return out[i].spec > out[j].spec
})
return out
}
// Matches reports whether content type is acceptable per the header.
func Matches(header, ct string) bool {
for _, r := range ParseAccept(header) {
switch {
case r.q == 0:
continue
case r.mt == "*/*":
return true
case strings.HasSuffix(r.mt, "/*"):
if strings.HasPrefix(ct, r.mt[:len(r.mt)-1]) {
return true
}
case r.mt == ct:
return true
}
}
return false
}
// Negotiate selects the best server type according to the header.
// Returns the chosen type and true, or "", false when nothing matches.
func Negotiate(header string, serverTypes []string) (string, bool) {
for _, r := range ParseAccept(header) {
if r.q == 0 {
continue
}
for _, s := range serverTypes {
if Matches(r.mt, s) {
return s, true
}
}
}
return "", false
}
// PrimaryAcceptMatches checks only whether the first (highest-priority) accept range matches.
func PrimaryAcceptMatches(header, produced string) bool {
for _, r := range ParseAccept(header) {
if r.q == 0 {
continue // explicitly unacceptable; skip
}
return Matches(r.mt, produced)
}
return false
}
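
Not part of the diff, but a quick usage sketch of the helpers above:

// Illustrative only; exercises ParseAccept's ordering and the matchers.
func exampleNegotiation() {
    header := "application/json;q=0.9, application/octet-stream;q=0.5"
    serverTypes := []string{"application/octet-stream", "application/json"}

    ct, ok := Negotiate(header, serverTypes)
    _, _ = ct, ok // ct == "application/json", ok == true: highest q wins

    _ = Matches(header, "text/plain")                    // false: no range covers it
    _ = PrimaryAcceptMatches(header, "application/json") // true: json sorts first
}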

api/apiutil/header_test.go (new file, 174 lines)

@@ -0,0 +1,174 @@
package apiutil
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestParseAccept(t *testing.T) {
type want struct {
mt string
q float64
spec int
}
cases := []struct {
name string
header string
want []want
}{
{
name: "empty header becomes */*;q=1",
header: "",
want: []want{{mt: "*/*", q: 1, spec: 0}},
},
{
name: "quality ordering then specificity",
header: "application/json;q=0.2, */*;q=0.1, application/xml;q=0.5, text/*;q=0.5",
want: []want{
{mt: "application/xml", q: 0.5, spec: 2},
{mt: "text/*", q: 0.5, spec: 1},
{mt: "application/json", q: 0.2, spec: 2},
{mt: "*/*", q: 0.1, spec: 0},
},
},
{
name: "invalid pieces are skipped",
header: "text/plain; q=boom, application/json",
want: []want{{mt: "application/json", q: 1, spec: 2}},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := ParseAccept(tc.header)
gotProjected := make([]want, len(got))
for i, g := range got {
gotProjected[i] = want{mt: g.mt, q: g.q, spec: g.spec}
}
require.DeepEqual(t, gotProjected, tc.want)
})
}
}
func TestMatches(t *testing.T) {
cases := []struct {
name string
accept string
ct string
matches bool
}{
{"exact match", "application/json", "application/json", true},
{"type wildcard", "application/*;q=0.8", "application/xml", true},
{"global wildcard", "*/*;q=0.1", "image/png", true},
{"explicitly unacceptable (q=0)", "text/*;q=0", "text/plain", false},
{"no match", "image/png", "application/json", false},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := Matches(tc.accept, tc.ct)
require.Equal(t, tc.matches, got)
})
}
}
func TestNegotiate(t *testing.T) {
cases := []struct {
name string
accept string
serverTypes []string
wantType string
ok bool
}{
{
name: "highest quality wins",
accept: "application/json;q=0.8,application/xml;q=0.9",
serverTypes: []string{"application/json", "application/xml"},
wantType: "application/xml",
ok: true,
},
{
name: "wildcard matches first server type",
accept: "*/*;q=0.5",
serverTypes: []string{"application/octet-stream", "application/json"},
wantType: "application/octet-stream",
ok: true,
},
{
name: "no acceptable type",
accept: "image/png",
serverTypes: []string{"application/json"},
wantType: "",
ok: false,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got, ok := Negotiate(tc.accept, tc.serverTypes)
require.Equal(t, tc.ok, ok)
require.Equal(t, tc.wantType, got)
})
}
}
func TestPrimaryAcceptMatches(t *testing.T) {
tests := []struct {
name string
accept string
produced string
expect bool
}{
{
name: "prefers json",
accept: "application/json;q=0.9,application/xml",
produced: "application/json",
expect: true,
},
{
name: "wildcard application beats other wildcard",
accept: "application/*;q=0.2,*/*;q=0.1",
produced: "application/xml",
expect: true,
},
{
name: "json wins",
accept: "application/xml;q=0.8,application/json;q=0.9",
produced: "application/json",
expect: true,
},
{
name: "json loses",
accept: "application/xml;q=0.8,application/json;q=0.9,application/octet-stream;q=0.99",
produced: "application/json",
expect: false,
},
{
name: "json wins with non q option",
accept: "application/xml;q=0.8,image/png,application/json;q=0.9",
produced: "application/json",
expect: true,
},
{
name: "json not primary",
accept: "image/png,application/json",
produced: "application/json",
expect: false,
},
{
name: "absent header",
accept: "",
produced: "text/plain",
expect: true,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := PrimaryAcceptMatches(tc.accept, tc.produced)
require.Equal(t, got, tc.expect)
})
}
}


@@ -16,7 +16,6 @@ go_library(
"//api/server/structs:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",


@@ -9,7 +9,6 @@ import (
"net/url"
"path"
"regexp"
"sort"
"strconv"
"github.com/OffchainLabs/prysm/v6/api/client"
@@ -17,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
@@ -137,24 +135,6 @@ func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fo
return fr.ToConsensus()
}
// GetForkSchedule retrieve all forks, past present and future, of which this node is aware.
func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, error) {
body, err := c.Get(ctx, getForkSchedulePath)
if err != nil {
return nil, errors.Wrap(err, "error requesting fork schedule")
}
fsr := &forkScheduleResponse{}
err = json.Unmarshal(body, fsr)
if err != nil {
return nil, err
}
ofs, err := fsr.OrderedForkSchedule()
if err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("problem unmarshaling %s response", getForkSchedulePath))
}
return ofs, nil
}
// GetConfigSpec retrieve the current configs of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*structs.GetSpecResponse, error) {
body, err := c.Get(ctx, getConfigSpecPath)
@@ -334,31 +314,3 @@ func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*structs.BLSToEx
}
return poolResponse, nil
}
type forkScheduleResponse struct {
Data []structs.Fork
}
func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, error) {
ofs := make(forks.OrderedSchedule, 0)
for _, d := range fsr.Data {
epoch, err := strconv.ParseUint(d.Epoch, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "error parsing epoch %s", d.Epoch)
}
vSlice, err := hexutil.Decode(d.CurrentVersion)
if err != nil {
return nil, err
}
if len(vSlice) != 4 {
return nil, fmt.Errorf("got %d byte version, expected 4 bytes. version hex=%s", len(vSlice), d.CurrentVersion)
}
version := bytesutil.ToBytes4(vSlice)
ofs = append(ofs, forks.ForkScheduleEntry{
Version: version,
Epoch: primitives.Epoch(epoch),
})
}
sort.Sort(ofs)
return ofs, nil
}


@@ -50,6 +50,7 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",


@@ -101,7 +101,7 @@ type BuilderClient interface {
NodeURL() string
GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubkey [48]byte) (SignedBid, error)
RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error
SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error)
SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error)
Status(ctx context.Context) error
}
@@ -446,6 +446,9 @@ func sszValidatorRegisterRequest(svr []*ethpb.SignedValidatorRegistrationV1) ([]
var errResponseVersionMismatch = errors.New("builder API response uses a different version than requested in " + api.VersionHeader + " header")
func getVersionsBlockToPayload(blockVersion int) (int, error) {
if blockVersion >= version.Fulu {
return version.Fulu, nil
}
if blockVersion >= version.Deneb {
return version.Deneb, nil
}
@@ -460,7 +463,7 @@ func getVersionsBlockToPayload(blockVersion int) (int, error) {
// SubmitBlindedBlock calls the builder API endpoint that binds the validator to the builder and submits the block.
// The response is the full execution payload used to create the blinded block.
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
body, postOpts, err := c.buildBlindedBlockRequest(sb)
if err != nil {
return nil, nil, err
@@ -558,7 +561,7 @@ func (c *Client) buildBlindedBlockRequest(sb interfaces.ReadOnlySignedBeaconBloc
func (c *Client) parseBlindedBlockResponse(
respBytes []byte,
forkVersion int,
) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
) (interfaces.ExecutionData, v1.BlobsBundler, error) {
if c.sszEnabled {
return c.parseBlindedBlockResponseSSZ(respBytes, forkVersion)
}
@@ -568,8 +571,18 @@ func (c *Client) parseBlindedBlockResponse(
func (c *Client) parseBlindedBlockResponseSSZ(
respBytes []byte,
forkVersion int,
) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
if forkVersion >= version.Deneb {
) (interfaces.ExecutionData, v1.BlobsBundler, error) {
if forkVersion >= version.Fulu {
payloadAndBlobs := &v1.ExecutionPayloadDenebAndBlobsBundleV2{}
if err := payloadAndBlobs.UnmarshalSSZ(respBytes); err != nil {
return nil, nil, errors.Wrap(err, "unable to unmarshal ExecutionPayloadDenebAndBlobsBundleV2 SSZ")
}
ed, err := blocks.NewWrappedExecutionData(payloadAndBlobs.Payload)
if err != nil {
return nil, nil, errors.Wrapf(err, "unable to wrap execution data for %s", version.String(forkVersion))
}
return ed, payloadAndBlobs.BlobsBundle, nil
} else if forkVersion >= version.Deneb {
payloadAndBlobs := &v1.ExecutionPayloadDenebAndBlobsBundle{}
if err := payloadAndBlobs.UnmarshalSSZ(respBytes); err != nil {
return nil, nil, errors.Wrap(err, "unable to unmarshal ExecutionPayloadDenebAndBlobsBundle SSZ")


@@ -2,6 +2,7 @@ package builder
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
@@ -13,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
@@ -1573,3 +1575,166 @@ func TestRequestLogger(t *testing.T) {
err = c.Status(ctx)
require.NoError(t, err)
}
func TestGetVersionsBlockToPayload(t *testing.T) {
tests := []struct {
name string
blockVersion int
expectedVersion int
expectedError bool
}{
{
name: "Fulu version",
blockVersion: 6, // version.Fulu
expectedVersion: 6,
expectedError: false,
},
{
name: "Deneb version",
blockVersion: 4, // version.Deneb
expectedVersion: 4,
expectedError: false,
},
{
name: "Capella version",
blockVersion: 3, // version.Capella
expectedVersion: 3,
expectedError: false,
},
{
name: "Bellatrix version",
blockVersion: 2, // version.Bellatrix
expectedVersion: 2,
expectedError: false,
},
{
name: "Unsupported version",
blockVersion: 0,
expectedError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
version, err := getVersionsBlockToPayload(tt.blockVersion)
if tt.expectedError {
assert.NotNil(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedVersion, version)
}
})
}
}
func TestParseBlindedBlockResponseSSZ_WithBlobsBundleV2(t *testing.T) {
c := &Client{sszEnabled: true}
// Create test payload
payload := &v1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BlockNumber: 123456,
GasLimit: 30000000,
GasUsed: 21000,
Timestamp: 1234567890,
ExtraData: []byte("test-extra-data"),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: [][]byte{},
Withdrawals: []*v1.Withdrawal{},
BlobGasUsed: 1024,
ExcessBlobGas: 2048,
}
// Create test BlobsBundleV2
bundleV2 := &v1.BlobsBundleV2{
KzgCommitments: [][]byte{make([]byte, 48), make([]byte, 48)},
Proofs: [][]byte{make([]byte, 48), make([]byte, 48)},
Blobs: [][]byte{make([]byte, 131072), make([]byte, 131072)},
}
// Test Fulu version (should use ExecutionPayloadDenebAndBlobsBundleV2)
t.Run("Fulu version with BlobsBundleV2", func(t *testing.T) {
payloadAndBlobsV2 := &v1.ExecutionPayloadDenebAndBlobsBundleV2{
Payload: payload,
BlobsBundle: bundleV2,
}
respBytes, err := payloadAndBlobsV2.MarshalSSZ()
require.NoError(t, err)
ed, bundle, err := c.parseBlindedBlockResponseSSZ(respBytes, 6) // version.Fulu
require.NoError(t, err)
require.NotNil(t, ed)
require.NotNil(t, bundle)
// Verify the bundle is BlobsBundleV2
bundleV2Result, ok := bundle.(*v1.BlobsBundleV2)
assert.Equal(t, true, ok, "Expected BlobsBundleV2 type")
require.Equal(t, len(bundleV2.KzgCommitments), len(bundleV2Result.KzgCommitments))
require.Equal(t, len(bundleV2.Proofs), len(bundleV2Result.Proofs))
require.Equal(t, len(bundleV2.Blobs), len(bundleV2Result.Blobs))
})
// Test Deneb version (should use regular BlobsBundle)
t.Run("Deneb version with regular BlobsBundle", func(t *testing.T) {
regularBundle := &v1.BlobsBundle{
KzgCommitments: bundleV2.KzgCommitments,
Proofs: bundleV2.Proofs,
Blobs: bundleV2.Blobs,
}
payloadAndBlobs := &v1.ExecutionPayloadDenebAndBlobsBundle{
Payload: payload,
BlobsBundle: regularBundle,
}
respBytes, err := payloadAndBlobs.MarshalSSZ()
require.NoError(t, err)
ed, bundle, err := c.parseBlindedBlockResponseSSZ(respBytes, 4) // version.Deneb
require.NoError(t, err)
require.NotNil(t, ed)
require.NotNil(t, bundle)
// Verify the bundle is regular BlobsBundle
regularBundleResult, ok := bundle.(*v1.BlobsBundle)
assert.Equal(t, true, ok, "Expected BlobsBundle type")
require.Equal(t, len(regularBundle.KzgCommitments), len(regularBundleResult.KzgCommitments))
})
// Test invalid SSZ data
t.Run("Invalid SSZ data", func(t *testing.T) {
invalidBytes := []byte("invalid-ssz-data")
ed, bundle, err := c.parseBlindedBlockResponseSSZ(invalidBytes, 6)
assert.NotNil(t, err)
assert.Equal(t, true, ed == nil)
assert.Equal(t, true, bundle == nil)
})
}
func TestSubmitBlindedBlock_BlobsBundlerInterface(t *testing.T) {
// Note: The full integration test is complex due to version detection logic
// The key functionality is tested in the parseBlindedBlockResponseSSZ tests above
// and in the mock service tests which verify the interface changes work correctly
t.Run("Interface signature verification", func(t *testing.T) {
// This test verifies that the SubmitBlindedBlock method signature
// has been updated to return BlobsBundler interface
client := &Client{}
// Verify the method exists with the correct signature
// by using reflection or by checking it compiles with the interface
var _ func(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) = client.SubmitBlindedBlock
// This test passes if the signature is correct
assert.Equal(t, true, true)
})
}


@@ -41,7 +41,7 @@ func (m MockClient) RegisterValidator(_ context.Context, svr []*ethpb.SignedVali
}
// SubmitBlindedBlock --
func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
return nil, nil, nil
}


@@ -8,7 +8,12 @@ go_library(
],
importpath = "github.com/OffchainLabs/prysm/v6/api/server/middleware",
visibility = ["//visibility:public"],
deps = ["@com_github_rs_cors//:go_default_library"],
deps = [
"//api:go_default_library",
"//api/apiutil:go_default_library",
"@com_github_rs_cors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
@@ -22,5 +27,6 @@ go_test(
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)


@@ -1,11 +1,15 @@
package middleware
import (
"compress/gzip"
"fmt"
"net/http"
"strings"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/apiutil"
"github.com/rs/cors"
log "github.com/sirupsen/logrus"
)
type Middleware func(http.Handler) http.Handler
@@ -71,47 +75,64 @@ func ContentTypeHandler(acceptedMediaTypes []string) Middleware {
 func AcceptHeaderHandler(serverAcceptedTypes []string) Middleware {
 	return func(next http.Handler) http.Handler {
 		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-			acceptHeader := r.Header.Get("Accept")
-			// header is optional and should skip if not provided
-			if acceptHeader == "" {
-				next.ServeHTTP(w, r)
-				return
-			}
-			accepted := false
-			acceptTypes := strings.Split(acceptHeader, ",")
-			// follows rules defined in https://datatracker.ietf.org/doc/html/rfc2616#section-14.1
-			for _, acceptType := range acceptTypes {
-				acceptType = strings.TrimSpace(acceptType)
-				if acceptType == "*/*" {
-					accepted = true
-					break
-				}
-				for _, serverAcceptedType := range serverAcceptedTypes {
-					if strings.HasPrefix(acceptType, serverAcceptedType) {
-						accepted = true
-						break
-					}
-					if acceptType != "/*" && strings.HasSuffix(acceptType, "/*") && strings.HasPrefix(serverAcceptedType, acceptType[:len(acceptType)-2]) {
-						accepted = true
-						break
-					}
-				}
-				if accepted {
-					break
-				}
-			}
-			if !accepted {
-				http.Error(w, fmt.Sprintf("Not Acceptable: %s", acceptHeader), http.StatusNotAcceptable)
-				return
-			}
+			if _, ok := apiutil.Negotiate(r.Header.Get("Accept"), serverAcceptedTypes); !ok {
+				http.Error(w, "Not Acceptable", http.StatusNotAcceptable)
+				return
+			}
 			next.ServeHTTP(w, r)
 		})
 	}
 }
+
+// AcceptEncodingHeaderHandler compresses the response before sending it back to the client, if gzip is supported.
+func AcceptEncodingHeaderHandler() Middleware {
+	return func(next http.Handler) http.Handler {
+		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
+				next.ServeHTTP(w, r)
+				return
+			}
+			gz := gzip.NewWriter(w)
+			gzipRW := &gzipResponseWriter{gz: gz, ResponseWriter: w}
+			defer func() {
+				if !gzipRW.zip {
+					return
+				}
+				if err := gz.Close(); err != nil {
+					log.WithError(err).Error("Failed to close gzip writer")
+				}
+			}()
+			next.ServeHTTP(gzipRW, r)
+		})
+	}
+}
+
+type gzipResponseWriter struct {
+	gz *gzip.Writer
+	http.ResponseWriter
+	zip bool
+}
+
+func (g *gzipResponseWriter) WriteHeader(statusCode int) {
+	if strings.Contains(g.Header().Get("Content-Type"), api.JsonMediaType) {
+		// Removing the current Content-Length because zipping will change it.
+		g.Header().Del("Content-Length")
+		g.Header().Set("Content-Encoding", "gzip")
+		g.zip = true
+	}
+	g.ResponseWriter.WriteHeader(statusCode)
+}
+
+func (g *gzipResponseWriter) Write(b []byte) (int, error) {
+	if g.zip {
+		return g.gz.Write(b)
+	}
+	return g.ResponseWriter.Write(b)
+}
func MiddlewareChain(h http.Handler, mw []Middleware) http.Handler {
if len(mw) < 1 {
return h
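
For orientation, an illustrative wiring of these middlewares (the route and handler are hypothetical, not from the diff):

// Hypothetical server setup showing the intended ordering.
mux := http.NewServeMux()
mux.HandleFunc("/example", exampleHandler) // hypothetical handler
srv := MiddlewareChain(mux, []Middleware{
    AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
    AcceptEncodingHeaderHandler(), // gzips only JSON responses, per gzipResponseWriter
})
_ = http.ListenAndServe(":8080", srv)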


@@ -1,14 +1,35 @@
package middleware
import (
"bytes"
"compress/gzip"
"io"
"net/http"
"net/http/httptest"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/testing/require"
log "github.com/sirupsen/logrus"
)
// frozenHeaderRecorder allows asserting that response headers were not modified
// after the call to WriteHeader.
//
// Its purpose is to have a regression test for https://github.com/OffchainLabs/prysm/pull/15499.
type frozenHeaderRecorder struct {
*httptest.ResponseRecorder
frozenHeader http.Header
}
func (r *frozenHeaderRecorder) WriteHeader(code int) {
if r.frozenHeader != nil {
return
}
r.ResponseRecorder.WriteHeader(code)
r.frozenHeader = r.ResponseRecorder.Header().Clone()
}
func TestNormalizeQueryValuesHandler(t *testing.T) {
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
@@ -124,6 +145,90 @@ func TestContentTypeHandler(t *testing.T) {
}
}
func TestAcceptEncodingHeaderHandler(t *testing.T) {
dummyContent := "Test gzip middleware content"
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", r.Header.Get("Accept"))
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte(dummyContent))
require.NoError(t, err)
})
handler := AcceptEncodingHeaderHandler()(nextHandler)
tests := []struct {
name string
accept string
acceptEncoding string
expectCompressed bool
}{
{
name: "Accept gzip",
accept: api.JsonMediaType,
acceptEncoding: "gzip",
expectCompressed: true,
},
{
name: "Accept multiple encodings",
accept: api.JsonMediaType,
acceptEncoding: "deflate, gzip",
expectCompressed: true,
},
{
name: "Accept unsupported encoding",
accept: api.JsonMediaType,
acceptEncoding: "deflate",
expectCompressed: false,
},
{
name: "No accept encoding header",
accept: api.JsonMediaType,
acceptEncoding: "",
expectCompressed: false,
},
{
name: "SSZ",
accept: api.OctetStreamMediaType,
acceptEncoding: "gzip",
expectCompressed: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest("GET", "/", nil)
req.Header.Set("Accept", tt.accept)
if tt.acceptEncoding != "" {
req.Header.Set("Accept-Encoding", tt.acceptEncoding)
}
rr := &frozenHeaderRecorder{ResponseRecorder: httptest.NewRecorder()}
handler.ServeHTTP(rr, req)
if tt.expectCompressed {
require.Equal(t, "gzip", rr.frozenHeader.Get("Content-Encoding"), "Expected Content-Encoding header to be 'gzip'")
compressedBody := rr.Body.Bytes()
require.NotEqual(t, dummyContent, string(compressedBody), "Response body should be compressed and differ from the original")
gzReader, err := gzip.NewReader(bytes.NewReader(compressedBody))
require.NoError(t, err, "Failed to create gzipReader")
defer func() {
if err := gzReader.Close(); err != nil {
log.WithError(err).Error("Failed to close gzip reader")
}
}()
decompressedBody, err := io.ReadAll(gzReader)
require.NoError(t, err, "Failed to decompress response body")
require.Equal(t, dummyContent, string(decompressedBody), "Decompressed content should match the original")
} else {
require.Equal(t, dummyContent, rr.Body.String(), "Response body should be uncompressed and match the original")
}
})
}
}
func TestAcceptHeaderHandler(t *testing.T) {
acceptedTypes := []string{"application/json", "application/octet-stream"}


@@ -36,6 +36,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//container/slice:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",


@@ -10,6 +10,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/consensus-types/validator"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
@@ -699,6 +700,11 @@ func (m *SyncCommitteeMessage) ToConsensus() (*eth.SyncCommitteeMessage, error)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
// Validate that the signature is in valid BLS format.
_, err = bls.SignatureFromBytes(sig)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SyncCommitteeMessage{
Slot: primitives.Slot(slot),


@@ -74,7 +74,7 @@ func BeaconStateFromConsensus(st beaconState.BeaconState) (*BeaconState, error)
}
return &BeaconState{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -177,7 +177,7 @@ func BeaconStateAltairFromConsensus(st beaconState.BeaconState) (*BeaconStateAlt
}
return &BeaconStateAltair{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -295,7 +295,7 @@ func BeaconStateBellatrixFromConsensus(st beaconState.BeaconState) (*BeaconState
}
return &BeaconStateBellatrix{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -430,7 +430,7 @@ func BeaconStateCapellaFromConsensus(st beaconState.BeaconState) (*BeaconStateCa
}
return &BeaconStateCapella{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -568,7 +568,7 @@ func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDene
}
return &BeaconStateDeneb{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -742,7 +742,7 @@ func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateEl
}
return &BeaconStateElectra{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -932,7 +932,7 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
lookahead[i] = fmt.Sprintf("%d", uint64(v))
}
return &BeaconStateFulu{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),


@@ -283,3 +283,10 @@ type GetPendingPartialWithdrawalsResponse struct {
Finalized bool `json:"finalized"`
Data []*PendingPartialWithdrawal `json:"data"`
}
type GetProposerLookaheadResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []string `json:"data"` // validator indexes
}


@@ -27,6 +27,8 @@ type Identity struct {
type Metadata struct {
SeqNumber string `json:"seq_number"`
Attnets string `json:"attnets"`
Syncnets string `json:"syncnets,omitempty"`
Cgc string `json:"custody_group_count,omitempty"`
}
type GetPeerResponse struct {


@@ -73,6 +73,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -181,6 +182,7 @@ go_test(
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -194,6 +196,7 @@ go_test(
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -41,7 +41,7 @@ type ForkchoiceFetcher interface {
Ancestor(context.Context, []byte, primitives.Slot) ([]byte, error)
CachedHeadRoot() [32]byte
GetProposerHead() [32]byte
SetForkChoiceGenesisTime(uint64)
SetForkChoiceGenesisTime(time.Time)
UpdateHead(context.Context, primitives.Slot)
HighestReceivedBlockSlot() primitives.Slot
ReceivedBlocksLastEpoch() (uint64, error)
@@ -514,7 +514,7 @@ func (s *Service) Ancestor(ctx context.Context, root []byte, slot primitives.Slo
// SetGenesisTime sets the genesis time of beacon chain.
func (s *Service) SetGenesisTime(t time.Time) {
s.genesisTime = t
s.genesisTime = t.Truncate(time.Second) // Genesis time has a precision of 1 second.
}
func (s *Service) recoverStateSummary(ctx context.Context, blockRoot [32]byte) (*ethpb.StateSummary, error) {
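Truncating to whole seconds matters because slot arithmetic assumes a genesis timestamp with one-second precision; any sub-second component would skew every derived slot boundary. A small sketch of the effect:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Genesis time has a precision of one second; truncating drops any
    // sub-second component so slot arithmetic stays exact.
    raw := time.Unix(1606824023, 250_000_000) // 250ms of sub-second noise
    genesis := raw.Truncate(time.Second)
    fmt.Println(genesis.Nanosecond()) // 0
    fmt.Println(raw.Sub(genesis))     // 250ms
}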

View File

@@ -2,6 +2,7 @@ package blockchain
import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
consensus_blocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -27,7 +28,7 @@ func (s *Service) GetProposerHead() [32]byte {
}
// SetForkChoiceGenesisTime sets the genesis time in Forkchoice
func (s *Service) SetForkChoiceGenesisTime(timestamp uint64) {
func (s *Service) SetForkChoiceGenesisTime(timestamp time.Time) {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
s.cfg.ForkChoiceStore.SetGenesisTime(timestamp)

View File

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -432,6 +433,7 @@ func TestService_IsOptimistic(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := testServiceWithDB(t)
c.SetGenesisTime(time.Now())
c.head = &head{root: [32]byte{'b'}}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
@@ -623,6 +625,7 @@ func Test_hashForGenesisRoot(t *testing.T) {
ctx := t.Context()
c := setupBeaconChain(t, beaconDB)
st, _ := util.DeterministicGenesisStateElectra(t, 10)
genesis.StoreDuringTest(t, genesis.GenesisData{State: st})
require.NoError(t, c.cfg.BeaconDB.SaveGenesisData(ctx, st))
root, err := beaconDB.GenesisBlockRoot(ctx)
require.NoError(t, err)

View File

@@ -7,10 +7,15 @@ type currentlySyncingBlock struct {
roots map[[32]byte]struct{}
}
func (b *currentlySyncingBlock) set(root [32]byte) {
func (b *currentlySyncingBlock) set(root [32]byte) error {
b.Lock()
defer b.Unlock()
_, ok := b.roots[root]
if ok {
return errBlockBeingSynced
}
b.roots[root] = struct{}{}
return nil
}
func (b *currentlySyncingBlock) unset(root [32]byte) {
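set now reports a duplicate instead of silently re-registering the root, so callers can bail out of syncing the same block twice. A self-contained sketch of the guard (errBlockBeingSynced is shown in the next file):

package main

import (
    "errors"
    "fmt"
    "sync"
)

var errBlockBeingSynced = errors.New("block is being synced")

type currentlySyncingBlock struct {
    sync.Mutex
    roots map[[32]byte]struct{}
}

// set registers a block root, failing if the block is already being synced.
func (b *currentlySyncingBlock) set(root [32]byte) error {
    b.Lock()
    defer b.Unlock()
    if _, ok := b.roots[root]; ok {
        return errBlockBeingSynced
    }
    b.roots[root] = struct{}{}
    return nil
}

func main() {
    b := &currentlySyncingBlock{roots: make(map[[32]byte]struct{})}
    root := [32]byte{'a'}
    fmt.Println(b.set(root)) // <nil>
    fmt.Println(b.set(root)) // block is being synced
}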

View File

@@ -44,6 +44,8 @@ var (
errMaxBlobsExceeded = verification.AsVerificationFailure(errors.New("expected commitments in block exceeds MAX_BLOBS_PER_BLOCK"))
// errMaxDataColumnsExceeded is returned when the number of data columns exceeds the maximum allowed.
errMaxDataColumnsExceeded = verification.AsVerificationFailure(errors.New("expected data columns for node exceeds NUMBER_OF_COLUMNS"))
// errBlockBeingSynced is returned when the block is already being synced.
errBlockBeingSynced = errors.New("block is being synced")
)
// An invalid block is the block that fails state transition based on the core protocol rules.

View File

@@ -174,6 +174,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
"payloadID": fmt.Sprintf("%#x", bytesutil.Trunc(payloadID[:])),
}).Info("Forkchoice updated with payload attributes for proposal")
s.cfg.PayloadIDCache.Set(nextSlot, arg.headRoot, pId)
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), arg.headBlock, arg.headRoot, nextSlot)
} else if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", headPayload.BlockHash()),
@@ -354,7 +355,7 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
}
// Get timestamp.
t, err := slots.ToTime(uint64(s.genesisTime.Unix()), slot)
t, err := slots.StartTime(s.genesisTime, slot)
if err != nil {
log.WithError(err).Error("Could not get timestamp to get payload attribute")
return emptyAttri
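slots.StartTime replaces the Unix-seconds based slots.ToTime when computing the payload timestamp. Assuming the usual definition (slot start = genesis + slot * SECONDS_PER_SLOT), a library-free sketch:

package main

import (
    "fmt"
    "time"
)

// slotStartTime is a hypothetical stand-in for slots.StartTime: the start of
// a slot is genesis plus slot * SECONDS_PER_SLOT (12s on mainnet).
func slotStartTime(genesis time.Time, slot, secondsPerSlot uint64) time.Time {
    return genesis.Add(time.Duration(slot) * time.Duration(secondsPerSlot) * time.Second)
}

func main() {
    genesis := time.Unix(1606824023, 0)
    fmt.Println(slotStartTime(genesis, 10, 12)) // genesis + 120s
}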

View File

@@ -19,6 +19,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -309,6 +310,7 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
block: wba,
}
genesis.StoreStateDuringTest(t, st)
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &fcuConfig{
@@ -403,6 +405,7 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
require.NoError(t, err)
bState, _ := util.DeterministicGenesisState(t, 10)
genesis.StoreStateDuringTest(t, bState)
require.NoError(t, beaconDB.SaveState(ctx, bState, bra))
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)

View File

@@ -102,8 +102,6 @@ func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuCo
log.WithError(err).Error("Could not save head")
}
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), args.headBlock, args.headRoot, s.CurrentSlot()+1)
// Only need to prune attestations from pool if the head has changed.
s.pruneAttsFromPool(s.ctx, args.headState, args.headBlock)
return nil
@@ -132,17 +130,17 @@ func (s *Service) shouldOverrideFCU(newHeadRoot [32]byte, proposingSlot primitiv
if s.cfg.ForkChoiceStore.ShouldOverrideFCU() {
return true
}
secs, err := slots.SecondsSinceSlotStart(currentSlot,
uint64(s.genesisTime.Unix()), uint64(time.Now().Unix()))
sss, err := slots.SinceSlotStart(currentSlot, s.genesisTime, time.Now())
if err != nil {
log.WithError(err).Error("Could not compute seconds since slot start")
}
if secs >= doublylinkedtree.ProcessAttestationsThreshold {
if sss >= doublylinkedtree.ProcessAttestationsThreshold {
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", newHeadRoot),
"weight": headWeight,
}).Infof("Attempted late block reorg aborted due to attestations at %d seconds",
doublylinkedtree.ProcessAttestationsThreshold)
"root": fmt.Sprintf("%#x", newHeadRoot),
"weight": headWeight,
"sinceSlotStart": sss,
"threshold": doublylinkedtree.ProcessAttestationsThreshold,
}).Info("Attempted late block reorg aborted due to attestations after threshold")
lateBlockFailedAttemptFirstThreshold.Inc()
}
}
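The override check now reasons about a time.Duration-style "since slot start" value instead of raw Unix-second counts. A sketch of the gating, with hypothetical names and a 10-second threshold as an example:

package main

import (
    "fmt"
    "time"
)

// sinceSlotStart is a hypothetical stand-in for slots.SinceSlotStart: how far
// into the given slot the wall clock currently is.
func sinceSlotStart(genesis time.Time, slot uint64, now time.Time, secondsPerSlot uint64) time.Duration {
    start := genesis.Add(time.Duration(slot) * time.Duration(secondsPerSlot) * time.Second)
    return now.Sub(start)
}

func main() {
    const processAttestationsThreshold = 10 * time.Second // example threshold
    genesis := time.Now().Add(-36 * time.Second)          // slot 2 began 12s ago on a 12s slot time
    sss := sinceSlotStart(genesis, 2, time.Now(), 12)
    // A late-block reorg attempt is aborted once the slot is past the
    // attestation-processing threshold.
    fmt.Println(sss >= processAttestationsThreshold) // true: ~12s into slot 2
}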

View File

@@ -160,6 +160,7 @@ func TestShouldOverrideFCU(t *testing.T) {
ctx, fcs := tr.ctx, tr.fcs
service.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot) * time.Second))
fcs.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot) * time.Second))
headRoot := [32]byte{'b'}
parentRoot := [32]byte{'a'}
ojc := &ethpb.Checkpoint{}
@@ -180,11 +181,12 @@ func TestShouldOverrideFCU(t *testing.T) {
require.NoError(t, err)
require.Equal(t, headRoot, head)
fcs.SetGenesisTime(uint64(time.Now().Unix()) - 29)
wantLog := "aborted due to attestations after threshold"
fcs.SetGenesisTime(time.Now().Add(-29 * time.Second))
require.Equal(t, true, service.shouldOverrideFCU(parentRoot, 3))
require.LogsDoNotContain(t, hook, "10 seconds")
fcs.SetGenesisTime(uint64(time.Now().Unix()) - 24)
require.LogsDoNotContain(t, hook, wantLog)
fcs.SetGenesisTime(time.Now().Add(-24 * time.Second))
service.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot+10) * time.Second))
require.Equal(t, false, service.shouldOverrideFCU(parentRoot, 3))
require.LogsContain(t, hook, "10 seconds")
require.LogsContain(t, hook, wantLog)
}

View File

@@ -154,6 +154,7 @@ func Test_notifyNewHeadEvent(t *testing.T) {
t.Run("genesis_state_root", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
srv.originBlockRoot = [32]byte{1}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
@@ -180,8 +181,8 @@ func Test_notifyNewHeadEvent(t *testing.T) {
t.Run("non_genesis_values", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
genesisRoot := [32]byte{1}
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
srv.originBlockRoot = genesisRoot
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
@@ -397,7 +398,7 @@ func TestSaveOrphanedOps(t *testing.T) {
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
service.SetGenesisTime(time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
// Chain setup
// 0 -- 1 -- 2 -- 3

View File

@@ -11,7 +11,7 @@ import (
)
var (
// https://github.com/ethereum/consensus-specs/blob/dev/presets/mainnet/trusted_setups/trusted_setup_4096.json
// https://github.com/ethereum/consensus-specs/blob/master/presets/mainnet/trusted_setups/trusted_setup_4096.json
//go:embed trusted_setup_4096.json
embeddedTrustedSetup []byte // 1.2Mb
kzgContext *GoKZG.Context
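The //go:embed directive bakes the trusted setup JSON into the binary at compile time. A standalone sketch of the mechanism, parsing into a generic map rather than the GoKZG context used above (the JSON file must sit next to the source file at build time):

package main

import (
    _ "embed" // enables the //go:embed directive below
    "encoding/json"
    "fmt"
)

//go:embed trusted_setup_4096.json
var embeddedTrustedSetup []byte // loaded at compile time

func main() {
    var setup map[string]json.RawMessage
    if err := json.Unmarshal(embeddedTrustedSetup, &setup); err != nil {
        panic(err)
    }
    fmt.Printf("trusted setup has %d top-level fields\n", len(setup))
}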

View File

@@ -6,20 +6,20 @@ import (
)
// Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
func Verify(sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {
func Verify(blobSidecars ...blocks.ROBlob) error {
if len(blobSidecars) == 0 {
return nil
}
if len(sidecars) == 1 {
if len(blobSidecars) == 1 {
return kzgContext.VerifyBlobKZGProof(
bytesToBlob(sidecars[0].Blob),
bytesToCommitment(sidecars[0].KzgCommitment),
bytesToKZGProof(sidecars[0].KzgProof))
bytesToBlob(blobSidecars[0].Blob),
bytesToCommitment(blobSidecars[0].KzgCommitment),
bytesToKZGProof(blobSidecars[0].KzgProof))
}
blobs := make([]GoKZG.Blob, len(sidecars))
cmts := make([]GoKZG.KZGCommitment, len(sidecars))
proofs := make([]GoKZG.KZGProof, len(sidecars))
for i, sidecar := range sidecars {
blobs := make([]GoKZG.Blob, len(blobSidecars))
cmts := make([]GoKZG.KZGCommitment, len(blobSidecars))
proofs := make([]GoKZG.KZGProof, len(blobSidecars))
for i, sidecar := range blobSidecars {
blobs[i] = *bytesToBlob(sidecar.Blob)
cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)

View File

@@ -22,8 +22,8 @@ func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZG
}
func TestVerify(t *testing.T) {
sidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(sidecars...))
blobSidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(blobSidecars...))
}
func TestBytesToAny(t *testing.T) {

View File

@@ -86,8 +86,8 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
return nil
}
func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesisTime uint64, daWaitedTime time.Duration) error {
startTime, err := slots.ToTime(genesisTime, block.Slot())
func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesis time.Time, daWaitedTime time.Duration) error {
startTime, err := slots.StartTime(genesis, block.Slot())
if err != nil {
return err
}

View File

@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
@@ -36,7 +35,7 @@ func WithMaxGoroutines(x int) Option {
// WithLCStore for light client store access.
func WithLCStore() Option {
return func(s *Service) error {
s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB)
s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
return nil
}
}
@@ -235,14 +234,6 @@ func WithSyncChecker(checker Checker) Option {
}
}
// WithCustodyInfo sets the custody info for the blockchain service.
func WithCustodyInfo(custodyInfo *peerdas.CustodyInfo) Option {
return func(s *Service) error {
s.cfg.CustodyInfo = custodyInfo
return nil
}
}
// WithSlasherEnabled sets whether the slasher is enabled or not.
func WithSlasherEnabled(enabled bool) Option {
return func(s *Service) error {
@@ -254,7 +245,7 @@ func WithSlasherEnabled(enabled bool) Option {
// WithGenesisTime sets the genesis time for the blockchain service.
func WithGenesisTime(genesisTime time.Time) Option {
return func(s *Service) error {
s.genesisTime = genesisTime
s.genesisTime = genesisTime.Truncate(time.Second) // Genesis time has a precision of 1 second.
return nil
}
}

View File

@@ -60,10 +60,8 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
return err
}
genesisTime := uint64(s.genesisTime.Unix())
// Verify attestation target is from current epoch or previous epoch.
if err := verifyAttTargetEpoch(ctx, genesisTime, uint64(time.Now().Add(disparity).Unix()), tgt); err != nil {
if err := verifyAttTargetEpoch(ctx, s.genesisTime, time.Now().Add(disparity), tgt); err != nil {
return err
}
@@ -76,7 +74,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
// validate_aggregate_proof.go and validate_beacon_attestation.go
// Verify attestations can only affect the fork choice of subsequent slots.
if err := slots.VerifyTime(genesisTime, a.GetData().Slot+1, disparity); err != nil {
if err := slots.VerifyTime(s.genesisTime, a.GetData().Slot+1, disparity); err != nil {
return err
}

View File

@@ -4,12 +4,12 @@ import (
"context"
"fmt"
"strconv"
"time"
"github.com/OffchainLabs/prysm/v6/async"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -139,8 +139,8 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (stat
}
// verifyAttTargetEpoch validates attestation is from the current or previous epoch.
func verifyAttTargetEpoch(_ context.Context, genesisTime, nowTime uint64, c *ethpb.Checkpoint) error {
currentSlot := primitives.Slot((nowTime - genesisTime) / params.BeaconConfig().SecondsPerSlot)
func verifyAttTargetEpoch(_ context.Context, genesis, now time.Time, c *ethpb.Checkpoint) error {
currentSlot := slots.At(genesis, now)
currentEpoch := slots.ToEpoch(currentSlot)
var prevEpoch primitives.Epoch
// Prevents previous epoch underflow
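verifyAttTargetEpoch now derives the current slot from two time.Time values rather than Unix-second integers. A library-free sketch of the epoch-window rule, including the underflow guard mentioned in the comment above (mainnet-style constants, hypothetical names):

package main

import (
    "fmt"
    "time"
)

const (
    secondsPerSlot = 12
    slotsPerEpoch  = 32
)

// attTargetEpochValid reports whether a target epoch is the current or the
// previous epoch, given genesis and the current wall-clock time.
func attTargetEpochValid(genesis, now time.Time, target uint64) bool {
    currentSlot := uint64(now.Sub(genesis).Seconds()) / secondsPerSlot
    currentEpoch := currentSlot / slotsPerEpoch
    prevEpoch := uint64(0)
    if currentEpoch > 0 { // prevents previous-epoch underflow
        prevEpoch = currentEpoch - 1
    }
    return target == currentEpoch || target == prevEpoch
}

func main() {
    genesis := time.Unix(0, 0)
    now := time.Unix(2*slotsPerEpoch*secondsPerSlot, 0) // start of epoch 2
    fmt.Println(attTargetEpochValid(genesis, now, 1))   // true (previous epoch)
    fmt.Println(attTargetEpochValid(genesis, now, 0))   // false
}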

View File

@@ -355,22 +355,22 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
ctx := t.Context()
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)}))
nowTime := time.Unix(int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot), 0)
require.NoError(t, verifyAttTargetEpoch(ctx, time.Unix(0, 0), nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)}))
}
func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
ctx := t.Context()
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Epoch: 1}))
nowTime := time.Unix(int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot), 0)
require.NoError(t, verifyAttTargetEpoch(ctx, time.Unix(0, 0), nowTime, &ethpb.Checkpoint{Epoch: 1}))
}
func TestAttEpoch_NotMatch(t *testing.T) {
ctx := t.Context()
nowTime := 2 * uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
err := verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)})
nowTime := time.Unix(2*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot), 0)
err := verifyAttTargetEpoch(ctx, time.Unix(0, 0), nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)})
assert.ErrorContains(t, "target epoch 0 does not match current epoch 2 or prev epoch 1", err)
}

View File

@@ -240,9 +240,10 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
}
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability at slot %d", b.Block().Slot())
if err := s.areSidecarsAvailable(ctx, avs, b); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability for block %#x at slot %d", b.Root(), b.Block().Slot())
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
JustifiedCheckpoint: jCheckpoints[i],
FinalizedCheckpoint: fCheckpoints[i]}
@@ -308,6 +309,26 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityStore, roBlock consensusblocks.ROBlock) error {
blockVersion := roBlock.Version()
block := roBlock.Block()
slot := block.Slot()
if version.Deneb <= blockVersion && blockVersion < version.Fulu {
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), roBlock); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability at slot %d", slot)
}
}
if version.Fulu <= blockVersion {
if err := s.areDataColumnsAvailable(ctx, roBlock.Root(), block); err != nil {
return errors.Wrapf(err, "are data columns available for block %#x with slot %d", roBlock.Root(), slot)
}
}
return nil
}
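areSidecarsAvailable dispatches the availability check by fork: blob sidecars for Deneb through Electra, data column sidecars from Fulu onward, nothing before Deneb. A schematic sketch of that dispatch (the version constants below merely stand in for the runtime/version package):

package main

import "fmt"

// Stand-ins for the runtime/version constants, in fork order.
const (
    deneb = iota + 4
    electra
    fulu
)

func availabilityKind(blockVersion int) string {
    switch {
    case deneb <= blockVersion && blockVersion < fulu:
        return "blob sidecars" // Deneb and Electra blocks carry blobs
    case fulu <= blockVersion:
        return "data column sidecars" // Fulu+ blocks use PeerDAS columns
    default:
        return "none" // pre-Deneb blocks have no DA sidecars
    }
}

func main() {
    for _, v := range []int{deneb - 1, deneb, electra, fulu} {
        fmt.Println(v, availabilityKind(v))
    }
}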
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
@@ -666,10 +687,11 @@ func (s *Service) areDataColumnsAvailable(
root [fieldparams.RootLength]byte,
block interfaces.ReadOnlyBeaconBlock,
) error {
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
// We are only required to check within MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return nil
}
@@ -690,16 +712,20 @@ func (s *Service) areDataColumnsAvailable(
}
// All columns to sample need to be available for the block to be considered available.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
nodeID := s.cfg.P2P.NodeID()
// Prevent the custody group count from changing during the rest of the function.
s.cfg.CustodyInfo.Mut.RLock()
defer s.cfg.CustodyInfo.Mut.RUnlock()
// Get the custody group sampling size for the node.
custodyGroupSamplingSize := s.cfg.CustodyInfo.CustodyGroupSamplingSize(peerdas.Actual)
peerInfo, _, err := peerdas.Info(nodeID, custodyGroupSamplingSize)
custodyGroupCount, err := s.cfg.P2P.CustodyGroupCount()
if err != nil {
return errors.Wrap(err, "custody group count")
}
// Compute the sampling size.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#custody-sampling
samplingSize := max(samplesPerSlot, custodyGroupCount)
// Get the peer info for the node.
peerInfo, _, err := peerdas.Info(nodeID, samplingSize)
if err != nil {
return errors.Wrap(err, "peer info")
}
@@ -737,7 +763,10 @@ func (s *Service) areDataColumnsAvailable(
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
nextSlot, err := slots.StartTime(s.genesisTime, block.Slot()+1)
if err != nil {
return fmt.Errorf("unable to determine slot start time: %w", err)
}
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
@@ -855,7 +884,10 @@ func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootL
nc := s.blobNotifiers.forRoot(root, block.Slot())
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
nextSlot, err := slots.StartTime(s.genesisTime, block.Slot()+1)
if err != nil {
return fmt.Errorf("unable to determine slot start time: %w", err)
}
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
@@ -906,7 +938,7 @@ func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
// it also updates the next slot cache and the proposer index cache to deal with skipped slots.
func (s *Service) lateBlockTasks(ctx context.Context) {
currentSlot := s.CurrentSlot()
if s.CurrentSlot() == s.HeadSlot() {
if currentSlot == s.HeadSlot() {
return
}
s.cfg.ForkChoiceStore.RLock()
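The sampling size driving the column availability check is the larger of SAMPLES_PER_SLOT and the node's custody group count, per the custody-sampling rule linked above. In sketch form (example values; uses Go 1.21's built-in max):

package main

import "fmt"

// samplingSize mirrors the custody-sampling rule: sample at least
// SAMPLES_PER_SLOT custody groups, or every group the node custodies
// if that is more.
func samplingSize(samplesPerSlot, custodyGroupCount uint64) uint64 {
    return max(samplesPerSlot, custodyGroupCount)
}

func main() {
    fmt.Println(samplingSize(8, 4))  // 8: the sampling floor applies
    fmt.Println(samplingSize(8, 64)) // 64: a larger custody count dominates
}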

View File

@@ -1,7 +1,6 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"strings"
@@ -33,7 +32,7 @@ import (
// CurrentSlot returns the current slot based on time.
func (s *Service) CurrentSlot() primitives.Slot {
return slots.CurrentSlot(uint64(s.genesisTime.Unix()))
return slots.CurrentSlot(s.genesisTime)
}
// getFCUArgs returns the arguments to call forkchoice update
@@ -45,7 +44,7 @@ func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) er
return nil
}
slot := cfg.roblock.Block().Slot()
if slots.WithinVotingWindow(uint64(s.genesisTime.Unix()), slot) {
if slots.WithinVotingWindow(s.genesisTime, slot) {
return nil
}
return s.computePayloadAttributes(cfg, fcuArgs)
@@ -134,9 +133,6 @@ func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
if err := s.processLightClientUpdate(cfg); err != nil {
log.WithError(err).Error("Failed to process light client update")
}
if err := s.processLightClientBootstrap(cfg); err != nil {
log.WithError(err).Error("Failed to process light client bootstrap")
}
if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client optimistic update")
}
@@ -174,62 +170,14 @@ func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
}
update, err := lightclient.NewLightClientUpdateFromBeaconState(
cfg.ctx,
s.CurrentSlot(),
cfg.postState,
cfg.roblock,
attestedState,
attestedBlock,
finalizedBlock,
)
update, err := lightclient.NewLightClientUpdateFromBeaconState(cfg.ctx, cfg.postState, cfg.roblock, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return errors.Wrapf(err, "could not create light client update")
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))
oldUpdate, err := s.cfg.BeaconDB.LightClientUpdate(cfg.ctx, period)
if err != nil {
return errors.Wrapf(err, "could not get current light client update")
}
if oldUpdate == nil {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
isNewUpdateBetter, err := lightclient.IsBetterUpdate(update, oldUpdate)
if err != nil {
return errors.Wrapf(err, "could not compare light client updates")
}
if isNewUpdateBetter {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
return nil
}
// processLightClientBootstrap saves a light client bootstrap for this block
// when feature flag is enabled.
func (s *Service) processLightClientBootstrap(cfg *postBlockProcessConfig) error {
blockRoot := cfg.roblock.Root()
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(cfg.ctx, s.CurrentSlot(), cfg.postState, cfg.roblock)
if err != nil {
return errors.Wrapf(err, "could not create light client bootstrap")
}
if err := s.lcStore.SaveLightClientBootstrap(cfg.ctx, blockRoot, bootstrap); err != nil {
return errors.Wrapf(err, "could not save light client bootstrap")
}
return nil
return s.lcStore.SaveLightClientUpdate(cfg.ctx, period, update)
}
func (s *Service) processLightClientFinalityUpdate(
@@ -249,8 +197,7 @@ func (s *Service) processLightClientFinalityUpdate(
finalizedCheckpoint := attestedState.FinalizedCheckpoint()
// Check if the finalized checkpoint has changed
if finalizedCheckpoint == nil || bytes.Equal(finalizedCheckpoint.GetRoot(), postState.FinalizedCheckpoint().Root) {
if finalizedCheckpoint == nil {
return nil
}
@@ -264,51 +211,18 @@ func (s *Service) processLightClientFinalityUpdate(
return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
}
newUpdate, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(
ctx,
postState.Slot(),
postState,
signed,
attestedState,
attestedBlock,
finalizedBlock,
)
newUpdate, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(ctx, postState, signed, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return errors.Wrap(err, "could not create light client finality update")
}
lastUpdate := s.lcStore.LastFinalityUpdate()
if lastUpdate != nil {
// The finalized_header.beacon.slot is greater than that of all previously forwarded finality_updates,
// or it matches the highest previously forwarded slot and also has a sync_aggregate indicating supermajority (> 2/3)
// sync committee participation while the previously forwarded finality_update for that slot did not indicate supermajority
newUpdateSlot := newUpdate.FinalizedHeader().Beacon().Slot
newHasSupermajority := lightclient.UpdateHasSupermajority(newUpdate.SyncAggregate())
lastUpdateSlot := lastUpdate.FinalizedHeader().Beacon().Slot
lastHasSupermajority := lightclient.UpdateHasSupermajority(lastUpdate.SyncAggregate())
if newUpdateSlot < lastUpdateSlot {
log.Debug("Skip saving light client finality newUpdate: Older than local newUpdate")
return nil
}
if newUpdateSlot == lastUpdateSlot && (lastHasSupermajority || !newHasSupermajority) {
log.Debug("Skip saving light client finality update: No supermajority advantage")
return nil
}
if !lightclient.IsBetterFinalityUpdate(newUpdate, s.lcStore.LastFinalityUpdate()) {
log.Debug("Skip saving light client finality update: current update is better")
return nil
}
log.Debug("Saving new light client finality update")
s.lcStore.SetLastFinalityUpdate(newUpdate)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: newUpdate,
})
if err = s.cfg.P2P.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client finality update")
}
s.lcStore.SetLastFinalityUpdate(newUpdate, true)
return nil
}
@@ -325,14 +239,7 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
}
newUpdate, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(
ctx,
postState.Slot(),
postState,
signed,
attestedState,
attestedBlock,
)
newUpdate, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(ctx, postState, signed, attestedState, attestedBlock)
if err != nil {
if strings.Contains(err.Error(), lightclient.ErrNotEnoughSyncCommitteeBits) {
@@ -342,26 +249,12 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
return errors.Wrap(err, "could not create light client optimistic update")
}
lastUpdate := s.lcStore.LastOptimisticUpdate()
if lastUpdate != nil {
// The attested_header.beacon.slot is greater than that of all previously forwarded optimistic updates
if newUpdate.AttestedHeader().Beacon().Slot <= lastUpdate.AttestedHeader().Beacon().Slot {
log.Debug("Skip saving light client optimistic update: Older than local update")
return nil
}
if !lightclient.IsBetterOptimisticUpdate(newUpdate, s.lcStore.LastOptimisticUpdate()) {
log.Debug("Skip saving light client optimistic update: current update is better")
return nil
}
log.Debug("Saving new light client optimistic update")
s.lcStore.SetLastOptimisticUpdate(newUpdate)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: newUpdate,
})
if err = s.cfg.P2P.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client optimistic update")
}
s.lcStore.SetLastOptimisticUpdate(newUpdate, true)
return nil
}
@@ -432,7 +325,7 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
}
// Verify block slot time is not from the future.
if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), b.Slot(), params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
if err := slots.VerifyTime(s.genesisTime, b.Slot(), params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
return nil, err
}
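The inline slot/supermajority comparison moves behind lightclient.IsBetterFinalityUpdate. The rule it replaces, per the removed block above, forwards a finality update only when it is strictly newer, or equally new with a supermajority advantage. A standalone sketch:

package main

import "fmt"

type finalityUpdate struct {
    finalizedSlot uint64
    supermajority bool // > 2/3 sync committee participation
}

// shouldForward mirrors the gating removed above: forward when strictly newer,
// or when matching the newest slot with a supermajority the last one lacked.
func shouldForward(nu, last finalityUpdate) bool {
    if nu.finalizedSlot < last.finalizedSlot {
        return false // older than what was already forwarded
    }
    if nu.finalizedSlot == last.finalizedSlot && (last.supermajority || !nu.supermajority) {
        return false // no supermajority advantage at the same slot
    }
    return true
}

func main() {
    last := finalityUpdate{finalizedSlot: 100, supermajority: false}
    fmt.Println(shouldForward(finalityUpdate{101, false}, last)) // true
    fmt.Println(shouldForward(finalityUpdate{100, true}, last))  // true
    fmt.Println(shouldForward(finalityUpdate{100, false}, last)) // false
}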

View File

@@ -35,6 +35,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
@@ -1357,7 +1358,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1378,7 +1379,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
}
for i := 6; i < 12; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1399,7 +1400,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
}
for i := 12; i < 18; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1548,7 +1549,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1568,7 +1569,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
}
for i := 6; i < 12; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1589,7 +1590,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
}
for i := 12; i < 18; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1740,7 +1741,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1761,7 +1762,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
}
for i := 6; i < 12; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1812,7 +1813,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
// import blocks 13 through 18 to justify 12
invalidRoots := make([][32]byte, 19-13)
for i := 13; i < 19; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1907,7 +1908,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
// Import blocks 21--30 (Epoch 3 was not enough to justify 2)
for i := 21; i < 30; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
@@ -1980,20 +1981,22 @@ func TestNoViableHead_Reboot(t *testing.T) {
genesisState, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := genesisState.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
genesis := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
gb := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(gb)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
genesisRoot, err := gb.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
require.NoError(t, service.saveGenesisData(ctx, genesisState))
genesis.StoreStateDuringTest(t, genesisState)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, genesisState, genesisRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, genesisRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, genesisRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
driftGenesisTime(service, int64(i), 0)
t.Log(i)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -2013,7 +2016,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
}
for i := 6; i < 12; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -2062,7 +2065,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
// import blocks 13 through 18 to justify 12
for i := 13; i < 19; i++ {
driftGenesisTime(service, int64(i), 0)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -2305,6 +2308,7 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
func TestFillMissingBlockPayloadId_DiffSlotExitEarly(t *testing.T) {
logHook := logTest.NewGlobal()
service, tr := minimalTestService(t)
service.SetGenesisTime(time.Now())
service.lateBlockTasks(tr.ctx)
require.LogsDoNotContain(t, logHook, "could not perform late block tasks")
}
@@ -2317,17 +2321,20 @@ func TestFillMissingBlockPayloadId_PrepareAllPayloads(t *testing.T) {
defer resetCfg()
service, tr := minimalTestService(t)
service.SetGenesisTime(time.Now())
service.SetForkChoiceGenesisTime(time.Now())
service.lateBlockTasks(tr.ctx)
require.LogsDoNotContain(t, logHook, "could not perform late block tasks")
}
// Helper function to simulate the block being on time or delayed for proposer
// boost. It alters the genesisTime tracked by the store.
func driftGenesisTime(s *Service, slot, delay int64) {
offset := slot*int64(params.BeaconConfig().SecondsPerSlot) + delay
newTime := time.Unix(time.Now().Unix()-offset, 0)
s.SetGenesisTime(newTime)
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(newTime.Unix()))
func driftGenesisTime(s *Service, slot primitives.Slot, delay time.Duration) {
now := time.Now()
slotDuration := time.Duration(slot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
genesis := now.Add(-slotDuration - delay)
s.SetGenesisTime(genesis)
s.cfg.ForkChoiceStore.SetGenesisTime(genesis)
}
func TestMissingBlobIndices(t *testing.T) {
@@ -2708,492 +2715,6 @@ func TestProcessLightClientUpdate(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
s, tr := minimalTestService(t)
ctx := tr.ctx
t.Run("Altair", func(t *testing.T) {
t.Run("No old update", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Altair)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
require.NoError(t, s.processLightClientUpdate(cfg))
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Altair)
})
t.Run("New update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Altair)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Altair)
})
t.Run("Old update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Altair)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), version.Altair)
})
})
t.Run("Capella", func(t *testing.T) {
t.Run("No old update", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
require.NoError(t, s.processLightClientUpdate(cfg))
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Capella)
})
t.Run("New update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Capella)
})
t.Run("Old update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), version.Capella)
})
})
t.Run("Deneb", func(t *testing.T) {
t.Run("No old update", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
require.NoError(t, s.processLightClientUpdate(cfg))
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Deneb)
})
t.Run("New update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Deneb)
})
t.Run("Old update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), version.Deneb)
})
})
reset()
}
func TestProcessLightClientBootstrap(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
defer reset()
s, tr := minimalTestService(t, WithLCStore())
@@ -3205,6 +2726,13 @@ func TestProcessLightClientBootstrap(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().VersionToForkEpochMap()[testVersion])*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
@@ -3215,6 +2743,9 @@ func TestProcessLightClientBootstrap(t *testing.T) {
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
@@ -3222,17 +2753,66 @@ func TestProcessLightClientBootstrap(t *testing.T) {
isValidPayload: true,
}
require.NoError(t, s.processLightClientBootstrap(cfg))
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// Check that the light client bootstrap is saved
b, err := s.lcStore.LightClientBootstrap(ctx, currentBlockRoot)
require.NoError(t, err)
require.NotNil(t, b)
t.Run("no old update", func(t *testing.T) {
require.NoError(t, s.processLightClientUpdate(cfg))
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), testVersion)
// Check that the light client update is saved
u, err := s.lcStore.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), testVersion)
})
t.Run("new update is better", func(t *testing.T) {
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.lcStore.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.lcStore.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), testVersion)
})
t.Run("old update is better", func(t *testing.T) {
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
// set a better sync aggregate
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
})
err = s.lcStore.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.lcStore.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), testVersion)
})
})
}
}
@@ -3316,7 +2896,6 @@ func TestIsDataAvailable(t *testing.T) {
}
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}
@@ -3329,7 +2908,6 @@ func TestIsDataAvailable(t *testing.T) {
t.Run("Fulu - no missing data columns", func(t *testing.T) {
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
columnsToSave: []uint64{1, 17, 19, 42, 75, 87, 102, 117, 119}, // 119 is not needed
blobKzgCommitmentsCount: 3,
}
@@ -3344,7 +2922,7 @@ func TestIsDataAvailable(t *testing.T) {
startWaiting := make(chan bool)
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
options: []Option{WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: []uint64{1, 17, 19, 75, 102, 117, 119}, // 119 is not needed, 42 and 87 are missing
blobKzgCommitmentsCount: 3,
@@ -3381,6 +2959,9 @@ func TestIsDataAvailable(t *testing.T) {
require.NoError(t, err)
}()
ctx, cancel := context.WithTimeout(ctx, time.Second*2)
defer cancel()
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -3393,10 +2974,6 @@ func TestIsDataAvailable(t *testing.T) {
startWaiting := make(chan bool)
var custodyInfo peerdas.CustodyInfo
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
custodyInfo.ToAdvertiseGroupCount.Set(cgc)
minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct-missingColumns)
@@ -3405,12 +2982,14 @@ func TestIsDataAvailable(t *testing.T) {
}
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&custodyInfo), WithStartWaitingDataColumnSidecars(startWaiting)},
options: []Option{WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
_, _, err := service.cfg.P2P.UpdateCustodyInfo(0, cgc)
require.NoError(t, err)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
@@ -3442,6 +3021,9 @@ func TestIsDataAvailable(t *testing.T) {
require.NoError(t, err)
}()
ctx, cancel := context.WithTimeout(ctx, time.Second*2)
defer cancel()
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -3450,7 +3032,7 @@ func TestIsDataAvailable(t *testing.T) {
startWaiting := make(chan bool)
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
options: []Option{WithStartWaitingDataColumnSidecars(startWaiting)},
blobKzgCommitmentsCount: 3,
}
@@ -3592,7 +3174,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = &lightClient.Store{}
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
var oldActualUpdate interfaces.LightClientOptimisticUpdate
var err error
@@ -3601,14 +3183,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
lOld, cfgOld := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.oldOptions...)
require.NoError(t, s.processLightClientOptimisticUpdate(cfgOld.ctx, cfgOld.roblock, cfgOld.postState))
oldActualUpdate, err = lightClient.NewLightClientOptimisticUpdateFromBeaconState(
lOld.Ctx,
lOld.State.Slot(),
lOld.State,
lOld.Block,
lOld.AttestedState,
lOld.AttestedBlock,
)
oldActualUpdate, err = lightClient.NewLightClientOptimisticUpdateFromBeaconState(lOld.Ctx, lOld.State, lOld.Block, lOld.AttestedState, lOld.AttestedBlock)
require.NoError(t, err)
// check that the old update is saved
@@ -3622,14 +3197,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
lNew, cfgNew := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.newOptions...)
require.NoError(t, s.processLightClientOptimisticUpdate(cfgNew.ctx, cfgNew.roblock, cfgNew.postState))
newActualUpdate, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(
lNew.Ctx,
lNew.State.Slot(),
lNew.State,
lNew.Block,
lNew.AttestedState,
lNew.AttestedBlock,
)
newActualUpdate, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lNew.Ctx, lNew.State, lNew.Block, lNew.AttestedState, lNew.AttestedBlock)
require.NoError(t, err)
require.DeepNotEqual(t, newActualUpdate, oldActualUpdate, "new update should not be equal to old update")
@@ -3682,39 +3250,39 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
expectReplace: true,
},
{
name: "Old update is better - age - no supermajority",
name: "Old update is better - finalized slot is higher",
oldOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1)},
newOptions: []util.LightClientOption{},
expectReplace: false,
},
{
name: "Old update is better - age - both supermajority",
oldOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1), util.WithSupermajority()},
newOptions: []util.LightClientOption{util.WithSupermajority()},
expectReplace: false,
},
{
name: "Old update is better - supermajority",
oldOptions: []util.LightClientOption{util.WithSupermajority()},
name: "Old update is better - attested slot is higher",
oldOptions: []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
newOptions: []util.LightClientOption{},
expectReplace: false,
},
{
name: "New update is better - age - both supermajority",
oldOptions: []util.LightClientOption{util.WithSupermajority()},
newOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1), util.WithSupermajority()},
name: "Old update is better - signature slot is higher",
oldOptions: []util.LightClientOption{util.WithIncreasedSignatureSlot(1)},
newOptions: []util.LightClientOption{},
expectReplace: false,
},
{
name: "New update is better - finalized slot is higher",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1)},
expectReplace: true,
},
{
name: "New update is better - age - no supermajority",
name: "New update is better - attested slot is higher",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1)},
newOptions: []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
expectReplace: true,
},
{
name: "New update is better - supermajority",
name: "New update is better - signature slot is higher",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithSupermajority()},
newOptions: []util.LightClientOption{util.WithIncreasedSignatureSlot(1)},
expectReplace: true,
},
}
@@ -3746,7 +3314,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = &lightClient.Store{}
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
var actualOldUpdate, actualNewUpdate interfaces.LightClientFinalityUpdate
var err error
@@ -3757,15 +3325,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
require.NoError(t, s.processLightClientFinalityUpdate(cfgOld.ctx, cfgOld.roblock, cfgOld.postState))
// check that the old update is saved
actualOldUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(
ctx,
cfgOld.postState.Slot(),
cfgOld.postState,
cfgOld.roblock,
lOld.AttestedState,
lOld.AttestedBlock,
lOld.FinalizedBlock,
)
actualOldUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(ctx, cfgOld.postState, cfgOld.roblock, lOld.AttestedState, lOld.AttestedBlock, lOld.FinalizedBlock)
require.NoError(t, err)
oldUpdate := s.lcStore.LastFinalityUpdate()
require.DeepEqual(t, actualOldUpdate, oldUpdate)
@@ -3776,15 +3336,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
require.NoError(t, s.processLightClientFinalityUpdate(cfgNew.ctx, cfgNew.roblock, cfgNew.postState))
// check that the actual old update and the actual new update are different
actualNewUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(
ctx,
cfgNew.postState.Slot(),
cfgNew.postState,
cfgNew.roblock,
lNew.AttestedState,
lNew.AttestedBlock,
lNew.FinalizedBlock,
)
actualNewUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(ctx, cfgNew.postState, cfgNew.roblock, lNew.AttestedState, lNew.AttestedBlock, lNew.FinalizedBlock)
require.NoError(t, err)
require.DeepNotEqual(t, actualOldUpdate, actualNewUpdate)


@@ -43,7 +43,7 @@ func (s *Service) AttestationTargetState(ctx context.Context, target *ethpb.Chec
if err != nil {
return nil, err
}
if err := slots.ValidateClock(ss, uint64(s.genesisTime.Unix())); err != nil {
if err := slots.ValidateClock(ss, s.genesisTime); err != nil {
return nil, err
}
// We acquire the lock here instead of in getAttPreState because that function gets called from UpdateHead, which holds a write lock
@@ -177,9 +177,9 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
for _, a := range atts {
// Based on the spec, don't process the attestation until the subsequent slot.
// This delays consideration in the fork choice until their slot is in the past.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/fork-choice.md#validate_on_attestation
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/fork-choice.md#validate_on_attestation
nextSlot := a.GetData().Slot + 1
if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), nextSlot, disparity); err != nil {
if err := slots.VerifyTime(s.genesisTime, nextSlot, disparity); err != nil {
continue
}


@@ -70,7 +70,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
service.genesisTime = prysmTime.Now().Add(-1 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(prysmTime.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, genesisState.SetGenesisTime(time.Now().Add(-1*time.Duration(params.BeaconConfig().SecondsPerSlot)*time.Second)))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
atts, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)


@@ -84,7 +84,11 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
receivedTime := time.Now()
s.blockBeingSynced.set(blockRoot)
err := s.blockBeingSynced.set(blockRoot)
if errors.Is(err, errBlockBeingSynced) {
log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring block currently being synced")
return nil
}
defer s.blockBeingSynced.unset(blockRoot)
blockCopy, err := block.Copy()
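With set now returning an error, the tracker above doubles as a dedupe guard: a second ReceiveBlock call for a root that is already in flight gets errBlockBeingSynced and bails out early instead of processing the block twice. A minimal sketch of such a guard; the type shape and field names here are assumptions for illustration, not the actual Prysm definition:

import (
	"errors"
	"sync"
)

var errBlockBeingSynced = errors.New("block is being synced")

// currentlySyncingBlock tracks roots that are mid-processing (assumed shape).
type currentlySyncingBlock struct {
	sync.Mutex
	roots map[[32]byte]struct{}
}

// set registers a root, failing if that root is already being synced.
func (b *currentlySyncingBlock) set(root [32]byte) error {
	b.Lock()
	defer b.Unlock()
	if _, ok := b.roots[root]; ok {
		return errBlockBeingSynced
	}
	b.roots[root] = struct{}{}
	return nil
}

// unset releases the root once processing finishes (called via defer).
func (b *currentlySyncingBlock) unset(root [32]byte) {
	b.Lock()
	defer b.Unlock()
	delete(b.roots, root)
}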
@@ -283,7 +287,7 @@ func (s *Service) reportPostBlockProcessing(
// Log block sync status.
cp = s.cfg.ForkChoiceStore.JustifiedCheckpoint()
justified := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
if err := logBlockSyncStatus(block.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix()), daWaitedTime); err != nil {
if err := logBlockSyncStatus(block.Block(), blockRoot, justified, finalized, receivedTime, s.genesisTime, daWaitedTime); err != nil {
log.WithError(err).Error("Unable to log block sync status")
}
// Log payload data
@@ -300,15 +304,30 @@ func (s *Service) reportPostBlockProcessing(
func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedState state.BeaconState) {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
// Send finalization event
go func() {
s.sendNewFinalizedEvent(ctx, finalizedState)
}()
// Insert finalized deposits into finalized deposit trie
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDepositsAndPrune(depCtx, finalized.Root)
cancel()
}()
if features.Get().EnableLightClient {
// Save a light client bootstrap for the finalized checkpoint
go func() {
err := s.lcStore.SaveLightClientBootstrap(s.ctx, finalized.Root)
if err != nil {
log.WithError(err).Error("Could not save light client bootstrap by block root")
} else {
log.Debugf("Saved light client bootstrap for finalized root %#x", finalized.Root)
}
}()
}
}
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear, transitioning


@@ -8,7 +8,10 @@ import (
blockchainTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/das"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/voluntaryexits"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -18,6 +21,7 @@ import (
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpbv1 "github.com/OffchainLabs/prysm/v6/proto/eth/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -307,7 +311,10 @@ func TestService_HasBlock(t *testing.T) {
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, true, s.HasBlock(t.Context(), r))
s.blockBeingSynced.set(r)
err = s.blockBeingSynced.set(r)
require.NoError(t, err)
err = s.blockBeingSynced.set(r)
require.ErrorIs(t, err, errBlockBeingSynced)
require.Equal(t, false, s.HasBlock(t.Context(), r))
}
@@ -338,7 +345,7 @@ func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
func TestCheckSaveHotStateDB_Overflow(t *testing.T) {
hook := logTest.NewGlobal()
s, _ := minimalTestService(t)
s.genesisTime = time.Now()
s.SetGenesisTime(time.Now())
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
assert.LogsDoNotContain(t, hook, "Entering mode to save hot states in DB")
@@ -348,8 +355,9 @@ func TestHandleCaches_EnablingLargeSize(t *testing.T) {
hook := logTest.NewGlobal()
s, _ := minimalTestService(t)
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
s.SetGenesisTime(time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
helpers.ClearCache()
require.NoError(t, s.handleCaches())
assert.LogsContain(t, hook, "Expanding committee cache size")
}
@@ -562,3 +570,48 @@ func Test_executePostFinalizationTasks(t *testing.T) {
})
}
func TestProcessLightClientBootstrap(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
defer reset()
s, tr := minimalTestService(t, WithLCStore())
ctx := tr.ctx
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)
require.NoError(t, s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock))
finalizedBlockRoot, err := l.FinalizedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveState(ctx, l.FinalizedState, finalizedBlockRoot))
cp := l.AttestedState.FinalizedCheckpoint()
require.DeepSSZEqual(t, finalizedBlockRoot, [32]byte(cp.Root))
require.NoError(t, s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: cp.Epoch, Root: [32]byte(cp.Root)}))
sss, err := s.cfg.BeaconDB.State(ctx, finalizedBlockRoot)
require.NoError(t, err)
require.NotNil(t, sss)
s.executePostFinalizationTasks(s.ctx, l.FinalizedState)
// wait for the goroutine to finish processing
time.Sleep(1 * time.Second)
// Check that the light client bootstrap is saved
b, err := s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
require.NoError(t, err)
require.NotNil(t, b)
btst, err := lightClient.NewLightClientBootstrapFromBeaconState(ctx, l.FinalizedState.Slot(), l.FinalizedState, l.FinalizedBlock)
require.NoError(t, err)
require.DeepEqual(t, btst, b)
require.Equal(t, b.Version(), testVersion)
})
}
}


@@ -15,7 +15,6 @@ func (s *Service) ReceiveDataColumns(dataColumnSidecars []blocks.VerifiedRODataC
}
// ReceiveDataColumn receives a single data column.
// (It is only a wrapper around ReceiveDataColumns.)
func (s *Service) ReceiveDataColumn(dataColumnSidecar blocks.VerifiedRODataColumn) error {
if err := s.dataColumnStorage.Save([]blocks.VerifiedRODataColumn{dataColumnSidecar}); err != nil {
return errors.Wrap(err, "save data column sidecars")


@@ -12,11 +12,9 @@ import (
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
@@ -31,6 +29,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -97,7 +96,6 @@ type config struct {
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
CustodyInfo *peerdas.CustodyInfo
}
// Checker is an interface used to determine if a node is in initial sync
@@ -208,17 +206,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
// Start a blockchain service's main event loop.
func (s *Service) Start() {
saved := s.cfg.FinalizedStateAtStartUp
defer s.removeStartupState()
if saved != nil && !saved.IsNil() {
if err := s.StartFromSavedState(saved); err != nil {
log.Fatal(err)
}
} else {
if err := s.startFromExecutionChain(); err != nil {
log.Fatal(err)
}
if err := s.StartFromSavedState(s.cfg.FinalizedStateAtStartUp); err != nil {
log.Fatal(err)
}
s.spawnProcessAttestationsRoutine()
go s.runLateBlockTasks()
@@ -267,8 +257,11 @@ func (s *Service) Status() error {
// StartFromSavedState initializes the blockchain using a previously saved finalized checkpoint.
func (s *Service) StartFromSavedState(saved state.BeaconState) error {
if state.IsNil(saved) {
return errors.New("Last finalized state at startup is nil")
}
log.Info("Blockchain data already exists in DB, initializing...")
s.genesisTime = time.Unix(int64(saved.GenesisTime()), 0) // lint:ignore uintcast -- Genesis time will not exceed int64 in your lifetime.
s.genesisTime = saved.GenesisTime()
s.cfg.AttService.SetGenesisTime(saved.GenesisTime())
originRoot, err := s.originRootFromSavedState(s.ctx)
@@ -296,6 +289,20 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
if err := s.clockSetter.SetClock(startup.NewClock(s.genesisTime, vr)); err != nil {
return errors.Wrap(err, "failed to initialize blockchain service")
}
if !params.FuluEnabled() {
return nil
}
earliestAvailableSlot, custodySubnetCount, err := s.updateCustodyInfoInDB(saved.Slot())
if err != nil {
return errors.Wrap(err, "could not get and save custody group count")
}
if _, _, err := s.cfg.P2P.UpdateCustodyInfo(earliestAvailableSlot, custodySubnetCount); err != nil {
return errors.Wrap(err, "update custody info")
}
return nil
}
@@ -358,62 +365,6 @@ func (s *Service) initializeHead(ctx context.Context, st state.BeaconState) erro
return nil
}
func (s *Service) startFromExecutionChain() error {
log.Info("Waiting to reach the validator deposit threshold to start the beacon chain...")
if s.cfg.ChainStartFetcher == nil {
return errors.New("not configured execution chain")
}
go func() {
stateChannel := make(chan *feed.Event, 1)
stateSub := s.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
for {
select {
case e := <-stateChannel:
if e.Type == statefeed.ChainStarted {
data, ok := e.Data.(*statefeed.ChainStartedData)
if !ok {
log.Error("Event data is not type *statefeed.ChainStartedData")
return
}
log.WithField("startTime", data.StartTime).Debug("Received chain start event")
s.onExecutionChainStart(s.ctx, data.StartTime)
return
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
return
case err := <-stateSub.Err():
log.WithError(err).Error("Subscription to state forRoot failed")
return
}
}
}()
return nil
}
// onExecutionChainStart initializes a series of deposits from the ChainStart deposits in the eth1
// deposit contract, initializes the beacon chain's state, and kicks off the beacon chain.
func (s *Service) onExecutionChainStart(ctx context.Context, genesisTime time.Time) {
preGenesisState := s.cfg.ChainStartFetcher.PreGenesisState()
initializedState, err := s.initializeBeaconChain(ctx, genesisTime, preGenesisState, s.cfg.ChainStartFetcher.ChainStartEth1Data())
if err != nil {
log.WithError(err).Fatal("Could not initialize beacon chain")
}
// We start a counter to genesis, if needed.
gRoot, err := initializedState.HashTreeRoot(s.ctx)
if err != nil {
log.WithError(err).Fatal("Could not hash tree root genesis state")
}
go slots.CountdownToGenesis(ctx, genesisTime, uint64(initializedState.NumValidators()), gRoot)
vr := bytesutil.ToBytes32(initializedState.GenesisValidatorsRoot())
if err := s.clockSetter.SetClock(startup.NewClock(genesisTime, vr)); err != nil {
log.WithError(err).Fatal("Failed to initialize blockchain service from execution start event")
}
}
// initializes the state and genesis block of the beacon chain to persistent storage
// based on a genesis timestamp value obtained from the ChainStart event emitted
// by the ETH1.0 Deposit Contract and the POWChain service of the node.
@@ -424,7 +375,7 @@ func (s *Service) initializeBeaconChain(
eth1data *ethpb.Eth1Data) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "beacon-chain.Service.initializeBeaconChain")
defer span.End()
s.genesisTime = genesisTime
s.genesisTime = genesisTime.Truncate(time.Second) // Genesis time has a precision of 1 second.
unixTime := uint64(genesisTime.Unix())
genesisState, err := transition.OptimizedGenesisBeaconState(unixTime, preGenesisState, eth1data)
@@ -485,7 +436,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, genesisBlkRoot); err != nil {
return errors.Wrap(err, "Could not set optimistic status of genesis block to false")
}
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(s.genesisTime.Unix()))
s.cfg.ForkChoiceStore.SetGenesisTime(s.genesisTime)
if err := s.setHead(&head{
genesisBlkRoot,
@@ -516,6 +467,57 @@ func (s *Service) removeStartupState() {
s.cfg.FinalizedStateAtStartUp = nil
}
// updateCustodyInfoInDB updates the custody information in the database.
// It returns the earliest available slot and the (potentially updated) custody group count.
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets
beaconConfig := params.BeaconConfig()
custodyRequirement := beaconConfig.CustodyRequirement
// Check if the node was previously subscribed to all data subnets, and if so,
// store the new status accordingly.
wasSubscribedToAllDataSubnets, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSubscribedToAllDataSubnets)
if err != nil {
log.WithError(err).Error("Could not update subscription status to all data subnets")
}
// Warn the user if the node was previously subscribed to all data subnets but no longer is.
if wasSubscribedToAllDataSubnets && !isSubscribedToAllDataSubnets {
log.Warnf(
"Because the flag `--%s` was previously used, the node will still subscribe to all data subnets.",
flags.SubscribeAllDataSubnets.Name,
)
}
// Compute the custody group count.
custodyGroupCount := custodyRequirement
if isSubscribedToAllDataSubnets {
custodyGroupCount = beaconConfig.NumberOfColumns
}
// Safely compute the fulu fork slot.
fuluForkSlot, err := fuluForkSlot()
if err != nil {
return 0, 0, errors.Wrap(err, "fulu fork slot")
}
// If slot is before the fulu fork slot, then use the earliest stored slot as the reference slot.
if slot < fuluForkSlot {
slot, err = s.cfg.BeaconDB.EarliestSlot(s.ctx)
if err != nil {
return 0, 0, errors.Wrap(err, "earliest slot")
}
}
earliestAvailableSlot, custodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, custodyGroupCount)
if err != nil {
return 0, 0, errors.Wrap(err, "update custody info")
}
return earliestAvailableSlot, custodyGroupCount, nil
}
func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db db.HeadAccessDatabase) {
currentTime := prysmTime.Now()
if currentTime.After(genesisTime) {
@@ -532,3 +534,19 @@ func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db d
}
go slots.CountdownToGenesis(ctx, genesisTime, uint64(gState.NumValidators()), gRoot)
}
func fuluForkSlot() (primitives.Slot, error) {
beaconConfig := params.BeaconConfig()
fuluForkEpoch := beaconConfig.FuluForkEpoch
if fuluForkEpoch == beaconConfig.FarFutureEpoch {
return beaconConfig.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
if err != nil {
return 0, errors.Wrap(err, "epoch start")
}
return forkFuluSlot, nil
}
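A note on the FarFutureEpoch guard above: FAR_FUTURE_EPOCH is the 2^64-1 sentinel, so converting it to a slot would overflow 64-bit slot arithmetic, and the early return sidesteps the multiplication entirely. A minimal, self-contained sketch of the same guard, with the config values inlined as assumptions:

package main

import (
	"errors"
	"fmt"
	"math"
)

// Inlined stand-ins for params.BeaconConfig() values (assumptions).
const (
	farFutureEpoch uint64 = math.MaxUint64 // sentinel: fork not scheduled
	farFutureSlot  uint64 = math.MaxUint64
	slotsPerEpoch  uint64 = 32
)

// epochStart converts an epoch to its first slot, erroring on overflow
// rather than wrapping around.
func epochStart(epoch uint64) (uint64, error) {
	if epoch > math.MaxUint64/slotsPerEpoch {
		return 0, errors.New("epoch start slot overflows uint64")
	}
	return epoch * slotsPerEpoch, nil
}

// forkSlot mirrors the fuluForkSlot shape: sentinel in, sentinel out.
func forkSlot(forkEpoch uint64) (uint64, error) {
	if forkEpoch == farFutureEpoch {
		return farFutureSlot, nil
	}
	return epochStart(forkEpoch)
}

func main() {
	s, err := forkSlot(farFutureEpoch)
	fmt.Println(s, err) // 18446744073709551615 <nil>
}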


@@ -31,6 +31,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/trie"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -51,6 +52,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
srv.Stop()
})
bState, _ := util.DeterministicGenesisState(t, 10)
genesis.StoreStateDuringTest(t, bState)
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.ToProtoUnsafe())
require.NoError(t, err)
mockTrie, err := trie.NewTrie(0)
@@ -71,20 +73,22 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
DepositContainers: []*ethpb.DepositContainer{},
})
require.NoError(t, err)
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
web3Service, err = execution.NewService(
ctx,
execution.WithDatabase(beaconDB),
execution.WithHttpEndpoint(endpoint),
execution.WithDepositContractAddress(common.Address{}),
execution.WithDepositCache(depositCache),
)
require.NoError(t, err, "Unable to set up web3 service")
attService, err := attestations.NewService(ctx, &attestations.Config{Pool: attestations.NewPool()})
require.NoError(t, err)
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
// Save a state in stategen for purposes of testing a service stop / shutdown.
@@ -127,7 +131,7 @@ func TestChainStartStop_Initialized(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, genesisBlk)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetGenesisTime(uint64(gt.Unix())))
require.NoError(t, s.SetGenesisTime(gt))
require.NoError(t, s.SetSlot(1))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, blkRoot))
@@ -164,7 +168,7 @@ func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
wsb := util.SaveBlock(t, ctx, beaconDB, genesisBlk)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetGenesisTime(uint64(gt.Unix())))
require.NoError(t, s.SetGenesisTime(gt))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
@@ -238,7 +242,7 @@ func TestChainService_CorrectGenesisRoots(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, genesisBlk)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetGenesisTime(uint64(gt.Unix())))
require.NoError(t, s.SetGenesisTime(gt))
require.NoError(t, s.SetSlot(0))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, blkRoot))
@@ -396,24 +400,6 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
require.Equal(t, true, s.cfg.BeaconDB.HasBlock(s.ctx, r))
}
func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
mgs := &MockClockSetter{}
service.clockSetter = mgs
gt := time.Now()
service.onExecutionChainStart(t.Context(), gt)
gs, err := beaconDB.GenesisState(ctx)
require.NoError(t, err)
require.NotEqual(t, nil, gs)
require.Equal(t, 32, len(gs.GenesisValidatorsRoot()))
var zero [32]byte
require.DeepNotEqual(t, gs.GenesisValidatorsRoot(), zero[:])
require.Equal(t, gt, mgs.G.GenesisTime())
require.Equal(t, bytesutil.ToBytes32(gs.GenesisValidatorsRoot()), mgs.G.GenesisValidatorsRoot())
}
func BenchmarkHasBlockDB(b *testing.B) {
ctx := b.Context()
s := testServiceWithDB(b)


@@ -170,6 +170,6 @@ func (s *Service) setupForkchoiceCheckpoints() error {
Root: fRoot}); err != nil {
return errors.Wrap(err, "could not update forkchoice's finalized checkpoint")
}
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(s.genesisTime.Unix()))
s.cfg.ForkChoiceStore.SetGenesisTime(s.genesisTime)
return nil
}


@@ -4,6 +4,7 @@ import (
"context"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async/event"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
@@ -24,9 +25,12 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/libp2p/go-libp2p/core/peer"
"google.golang.org/protobuf/proto"
)
@@ -51,6 +55,7 @@ type mockBroadcaster struct {
type mockAccessor struct {
mockBroadcaster
mockCustodyManager
p2pTesting.MockPeerManager
}
@@ -84,7 +89,7 @@ func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumn(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar, _ ...chan<- bool) error {
func (mb *mockBroadcaster) BroadcastDataColumn(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
mb.broadcastCalled = true
return nil
}
@@ -94,6 +99,43 @@ func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.Sig
var _ p2p.Broadcaster = (*mockBroadcaster)(nil)
// mockCustodyManager is a mock implementation of p2p.CustodyManager
type mockCustodyManager struct {
mut sync.RWMutex
earliestAvailableSlot primitives.Slot
custodyGroupCount uint64
}
func (dch *mockCustodyManager) EarliestAvailableSlot() (primitives.Slot, error) {
dch.mut.RLock()
defer dch.mut.RUnlock()
return dch.earliestAvailableSlot, nil
}
func (dch *mockCustodyManager) CustodyGroupCount() (uint64, error) {
dch.mut.RLock()
defer dch.mut.RUnlock()
return dch.custodyGroupCount, nil
}
func (dch *mockCustodyManager) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
dch.mut.Lock()
defer dch.mut.Unlock()
dch.earliestAvailableSlot = earliestAvailableSlot
dch.custodyGroupCount = custodyGroupCount
return earliestAvailableSlot, custodyGroupCount, nil
}
func (dch *mockCustodyManager) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0
}
var _ p2p.CustodyManager = (*mockCustodyManager)(nil)
type testServiceRequirements struct {
ctx context.Context
db db.Database
@@ -109,8 +151,10 @@ type testServiceRequirements struct {
func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceRequirements) {
ctx := t.Context()
genesis := time.Now().Add(-1 * 4 * time.Duration(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().SecondsPerSlot)) * time.Second) // Genesis was 4 epochs ago.
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
fcs.SetGenesisTime(genesis)
sg := stategen.New(beaconDB, fcs)
notif := &mockBeaconNode{}
fcs.SetBalancesByRooter(sg.ActiveNonSlashedBalancesByRoot)
@@ -149,6 +193,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithExecutionEngineCaller(&mockExecution.EngineClient{}),
WithP2PBroadcaster(&mockAccessor{}),
WithLightClientStore(&lightclient.Store{}),
WithGenesisTime(genesis),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)


@@ -641,7 +641,7 @@ func (s *ChainService) GetProposerHead() [32]byte {
}
// SetForkChoiceGenesisTime mocks the same method in the chain service
func (s *ChainService) SetForkChoiceGenesisTime(timestamp uint64) {
func (s *ChainService) SetForkChoiceGenesisTime(timestamp time.Time) {
if s.ForkChoiceStore != nil {
s.ForkChoiceStore.SetGenesisTime(timestamp)
}


@@ -24,7 +24,7 @@ var ErrNoBuilder = errors.New("builder endpoint not configured")
// BlockBuilder defines the interface for interacting with the block builder
type BlockBuilder interface {
SubmitBlindedBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error)
SubmitBlindedBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error)
GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubKey [48]byte) (builder.SignedBid, error)
RegisterValidator(ctx context.Context, reg []*ethpb.SignedValidatorRegistrationV1) error
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
@@ -68,7 +68,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
log.WithError(err).Error("Failed to check builder status")
} else {
log.WithField("endpoint", s.c.NodeURL()).Info("Builder has been configured")
log.Warn("Outsourcing block construction to external builders adds non-trivial delay to block propagation time. " +
log.Warn("Outsourcing block construction to external builders adds non-trivial delay to block propagation time. " +
"Builder-constructed blocks or fallback blocks may get orphaned. Use at your own risk!")
}
}
@@ -87,7 +87,7 @@ func (s *Service) Stop() error {
}
// SubmitBlindedBlock submits a blinded block to the builder relay network.
func (s *Service) SubmitBlindedBlock(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (s *Service) SubmitBlindedBlock(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
ctx, span := trace.StartSpan(ctx, "builder.SubmitBlindedBlock")
defer span.End()
start := time.Now()


@@ -29,6 +29,7 @@ type MockBuilderService struct {
PayloadCapella *v1.ExecutionPayloadCapella
PayloadDeneb *v1.ExecutionPayloadDeneb
BlobBundle *v1.BlobsBundle
BlobBundleV2 *v1.BlobsBundleV2
ErrSubmitBlindedBlock error
Bid *ethpb.SignedBuilderBid
BidCapella *ethpb.SignedBuilderBidCapella
@@ -46,7 +47,7 @@ func (s *MockBuilderService) Configured() bool {
}
// SubmitBlindedBlock for mocking.
func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
switch b.Version() {
case version.Bellatrix:
w, err := blocks.WrappedExecutionPayload(s.Payload)
@@ -66,6 +67,16 @@ func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, b interfaces.
return nil, nil, errors.Wrap(err, "could not wrap deneb payload")
}
return w, s.BlobBundle, s.ErrSubmitBlindedBlock
case version.Fulu:
w, err := blocks.WrappedExecutionPayloadDeneb(s.PayloadDeneb)
if err != nil {
return nil, nil, errors.Wrap(err, "could not wrap deneb payload for fulu")
}
// For Fulu, return BlobsBundleV2 if available, otherwise regular BlobsBundle
if s.BlobBundleV2 != nil {
return w, s.BlobBundleV2, s.ErrSubmitBlindedBlock
}
return w, s.BlobBundle, s.ErrSubmitBlindedBlock
default:
return nil, nil, errors.New("unknown block version for mocking")
}
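The signature change threaded through the builder service and this mock (returning the v1.BlobsBundler interface rather than the concrete *v1.BlobsBundle) is the usual return-an-interface move: callers keep a single code path while the pre-Fulu and Fulu paths hand back different bundle versions. A sketch of the pattern with assumed, illustrative shapes; the real BlobsBundler method set is not shown in this diff:

// Assumed shapes for illustration only.
type BlobsBundler interface {
	BlobCount() int
}

type BlobsBundle struct{ Blobs [][]byte }
type BlobsBundleV2 struct{ Blobs [][]byte }

func (b *BlobsBundle) BlobCount() int   { return len(b.Blobs) }
func (b *BlobsBundleV2) BlobCount() int { return len(b.Blobs) }

// submitBlinded returns whichever bundle the fork requires; callers only
// ever see the interface, so adding V2 did not change their code path.
func submitBlinded(fulu bool) BlobsBundler {
	if fulu {
		return &BlobsBundleV2{}
	}
	return &BlobsBundle{}
}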


@@ -217,15 +217,15 @@ func IsSyncCommitteeAggregator(sig []byte) (bool, error) {
// ValidateSyncMessageTime validates sync message to ensure that the provided slot is valid.
// Spec: [IGNORE] The message's slot is for the current slot (with a MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance), i.e. sync_committee_message.slot == current_slot
func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockDisparity time.Duration) error {
if err := slots.ValidateClock(slot, uint64(genesisTime.Unix())); err != nil {
if err := slots.ValidateClock(slot, genesisTime); err != nil {
return err
}
messageTime, err := slots.ToTime(uint64(genesisTime.Unix()), slot)
messageTime, err := slots.StartTime(genesisTime, slot)
if err != nil {
return err
}
currentSlot := slots.Since(genesisTime)
slotStartTime, err := slots.ToTime(uint64(genesisTime.Unix()), currentSlot)
currentSlot := slots.CurrentSlot(genesisTime)
slotStartTime, err := slots.StartTime(genesisTime, currentSlot)
if err != nil {
return err
}


@@ -68,7 +68,7 @@ func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.Beacon
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: state.GenesisTime(),
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{


@@ -41,7 +41,6 @@ go_library(
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",


@@ -162,7 +162,7 @@ func ValidatePayload(st state.BeaconState, payload interfaces.ExecutionData) err
if !bytes.Equal(payload.PrevRandao(), random) {
return ErrInvalidPayloadPrevRandao
}
t, err := slots.ToTime(st.GenesisTime(), st.Slot())
t, err := slots.StartTime(st.GenesisTime(), st.Slot())
if err != nil {
return err
}


@@ -501,7 +501,7 @@ func Test_ValidatePayload(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
ts, err := slots.ToTime(st.GenesisTime(), st.Slot())
ts, err := slots.StartTime(st.GenesisTime(), st.Slot())
require.NoError(t, err)
tests := []struct {
name string
@@ -551,7 +551,7 @@ func Test_ProcessPayload(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
ts, err := slots.ToTime(st.GenesisTime(), st.Slot())
ts, err := slots.StartTime(st.GenesisTime(), st.Slot())
require.NoError(t, err)
tests := []struct {
name string
@@ -627,7 +627,7 @@ func Test_ProcessPayload_Blinded(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
ts, err := slots.ToTime(st.GenesisTime(), st.Slot())
ts, err := slots.StartTime(st.GenesisTime(), st.Slot())
require.NoError(t, err)
tests := []struct {
name string
@@ -697,7 +697,7 @@ func Test_ValidatePayloadHeader(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
ts, err := slots.ToTime(st.GenesisTime(), st.Slot())
ts, err := slots.StartTime(st.GenesisTime(), st.Slot())
require.NoError(t, err)
tests := []struct {
name string


@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -101,7 +100,7 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
// via the respective epoch.
func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState, blk interfaces.ReadOnlySignedBeaconBlock, blkRoot [32]byte) error {
currentEpoch := slots.ToEpoch(blk.Block().Slot())
fork, err := forks.Fork(currentEpoch)
fork, err := params.Fork(currentEpoch)
if err != nil {
return err
}


@@ -43,7 +43,7 @@ func UpgradeToCapella(state state.BeaconState) (state.BeaconState, error) {
}
s := &ethpb.BeaconStateCapella{
GenesisTime: state.GenesisTime(),
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{


@@ -59,7 +59,7 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
}
s := &ethpb.BeaconStateDeneb{
GenesisTime: state.GenesisTime(),
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{


@@ -208,7 +208,7 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
}
s := &ethpb.BeaconStateElectra{
GenesisTime: beaconState.GenesisTime(),
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{


@@ -102,13 +102,13 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
return nil, err
} else if n == params.BeaconConfig().PendingPartialWithdrawalsLimit && !isFullExitRequest {
// if the PendingPartialWithdrawalsLimit is met, the user would have paid for a partial withdrawal that's not included
log.Debugln("Skipping execution layer withdrawal request, PendingPartialWithdrawalsLimit reached")
log.Debug("Skipping execution layer withdrawal request, PendingPartialWithdrawalsLimit reached")
continue
}
vIdx, exists := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(wr.ValidatorPubkey))
if !exists {
log.Debugf("Skipping execution layer withdrawal request, validator index for %s not found\n", hexutil.Encode(wr.ValidatorPubkey))
log.WithField("validator", hexutil.Encode(wr.ValidatorPubkey)).Debug("Skipping execution layer withdrawal request, validator index not found")
continue
}
validator, err := st.ValidatorAtIndexReadOnly(vIdx)
@@ -120,23 +120,23 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
wc := validator.GetWithdrawalCredentials()
isCorrectSourceAddress := bytes.Equal(wc[12:], wr.SourceAddress)
if !hasCorrectCredential || !isCorrectSourceAddress {
log.Debugln("Skipping execution layer withdrawal request, wrong withdrawal credentials")
log.Debug("Skipping execution layer withdrawal request, wrong withdrawal credentials")
continue
}
// Verify the validator is active.
if !helpers.IsActiveValidatorUsingTrie(validator, currentEpoch) {
log.Debugln("Skipping execution layer withdrawal request, validator not active")
log.Debug("Skipping execution layer withdrawal request, validator not active")
continue
}
// Verify the validator has not yet submitted an exit.
if validator.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
log.Debugln("Skipping execution layer withdrawal request, validator has submitted an exit already")
log.Debug("Skipping execution layer withdrawal request, validator has submitted an exit already")
continue
}
// Verify the validator has been active long enough.
if currentEpoch < validator.ActivationEpoch().AddEpoch(params.BeaconConfig().ShardCommitteePeriod) {
log.Debugln("Skipping execution layer withdrawal request, validator has not been active long enough")
log.Debug("Skipping execution layer withdrawal request, validator has not been active long enough")
continue
}


@@ -36,7 +36,7 @@ func UpgradeToBellatrix(state state.BeaconState) (state.BeaconState, error) {
}
s := &ethpb.BeaconStateBellatrix{
GenesisTime: state.GenesisTime(),
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{


@@ -15,7 +15,7 @@ import (
)
// UpgradeToFulu upgrades a generic beacon state to a Fulu-version state.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/fork.md#upgrading-the-state
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
@@ -111,7 +111,7 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
}
s := &ethpb.BeaconStateFulu{
GenesisTime: beaconState.GenesisTime(),
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{


@@ -36,7 +36,6 @@ go_library(
"//monitoring/tracing/trace:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",


@@ -10,7 +10,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
"github.com/OffchainLabs/prysm/v6/time/slots"
)
@@ -125,18 +124,18 @@ func ComputeSubnetFromCommitteeAndSlot(activeValCount uint64, comIdx primitives.
// valid_attestation_slot = 101
//
// So the attestation must be within the range of 95 to 102 in the example above.
func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clockDisparity time.Duration) error {
attTime, err := slots.ToTime(uint64(genesisTime.Unix()), attSlot)
func ValidateAttestationTime(attSlot primitives.Slot, genesis time.Time, clockDisparity time.Duration) error {
attTime, err := slots.StartTime(genesis, attSlot)
if err != nil {
return err
}
currentSlot := slots.Since(genesisTime)
currentSlot := slots.CurrentSlot(genesis)
// When receiving an attestation, it can be from a future slot,
// so the upper bound is set to now + clockDisparity (SECONDS_PER_SLOT * 2).
// But when sending an attestation, it should not be in a future slot,
// so the upper bound is set to now + clockDisparity (MAXIMUM_GOSSIP_CLOCK_DISPARITY).
upperBounds := prysmTime.Now().Add(clockDisparity)
upperBounds := time.Now().Add(clockDisparity)
// An attestation cannot be older than the current slot - attestation propagation slot range
// with a minor tolerance for peer clock disparity.
@@ -144,7 +143,7 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
if currentSlot > params.BeaconConfig().AttestationPropagationSlotRange {
lowerBoundsSlot = currentSlot - params.BeaconConfig().AttestationPropagationSlotRange
}
lowerTime, err := slots.ToTime(uint64(genesisTime.Unix()), lowerBoundsSlot)
lowerTime, err := slots.StartTime(genesis, lowerBoundsSlot)
if err != nil {
return err
}
@@ -187,7 +186,7 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
// VerifyCheckpointEpoch is within current epoch and previous epoch
// with respect to current time. Returns true if it's within, false if it's not.
func VerifyCheckpointEpoch(c *ethpb.Checkpoint, genesis time.Time) bool {
currentSlot := slots.CurrentSlot(uint64(genesis.Unix()))
currentSlot := slots.CurrentSlot(genesis)
currentEpoch := slots.ToEpoch(currentSlot)
var prevEpoch primitives.Epoch


@@ -217,10 +217,10 @@ func Test_ValidateAttestationTime(t *testing.T) {
{
name: "attestation.slot is well beyond current slot",
args: args{
attSlot: 1 << 32,
genesisTime: prysmTime.Now().Add(-15 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
attSlot: 1024,
genesisTime: time.Now().Add(-15 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
},
wantedErr: "attestation slot 4294967296 not within attestation propagation range of 0 to 15 (current slot)",
wantedErr: "attestation slot 1024 not within attestation propagation range of 0 to 15 (current slot)",
},
}
for _, tt := range tests {


@@ -669,11 +669,11 @@ func ComputeCommittee(
// InitializeProposerLookahead computes the list of the proposer indices for the next MIN_SEED_LOOKAHEAD + 1 epochs.
func InitializeProposerLookahead(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]uint64, error) {
lookAhead := make([]uint64, 0, uint64(params.BeaconConfig().MinSeedLookahead+1)*uint64(params.BeaconConfig().SlotsPerEpoch))
indices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return nil, errors.Wrap(err, "could not get active indices")
}
for i := range params.BeaconConfig().MinSeedLookahead + 1 {
indices, err := ActiveValidatorIndices(ctx, state, epoch+i)
if err != nil {
return nil, errors.Wrap(err, "could not get active indices")
}
proposerIndices, err := PrecomputeProposerIndices(state, indices, epoch+i)
if err != nil {
return nil, errors.Wrap(err, "could not compute proposer indices")


@@ -916,3 +916,46 @@ func TestAssignmentForValidator(t *testing.T) {
require.DeepEqual(t, &helpers.LiteAssignment{}, got)
})
}
// Regression for #15450
func TestInitializeProposerLookahead_RegressionTest(t *testing.T) {
ctx := t.Context()
state, _ := util.DeterministicGenesisState(t, 128)
// Set some validators to activate in epoch 3 instead of 0
validators := state.Validators()
for i := 64; i < 128; i++ {
validators[i].ActivationEpoch = 3
}
require.NoError(t, state.SetValidators(validators))
require.NoError(t, state.SetSlot(64)) // epoch 2
epoch := slots.ToEpoch(state.Slot())
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, state, epoch)
require.NoError(t, err)
slotsPerEpoch := int(params.BeaconConfig().SlotsPerEpoch)
for epochOffset := primitives.Epoch(0); epochOffset < 2; epochOffset++ {
targetEpoch := epoch + epochOffset
activeIndices, err := helpers.ActiveValidatorIndices(ctx, state, targetEpoch)
require.NoError(t, err)
expectedProposers, err := helpers.PrecomputeProposerIndices(state, activeIndices, targetEpoch)
require.NoError(t, err)
startIdx := int(epochOffset) * slotsPerEpoch
endIdx := startIdx + slotsPerEpoch
actualProposers := proposerLookahead[startIdx:endIdx]
expectedUint64 := make([]uint64, len(expectedProposers))
for i, proposer := range expectedProposers {
expectedUint64[i] = uint64(proposer)
}
// This assertion would fail with the original bug:
for i, expected := range expectedUint64 {
require.Equal(t, expected, actualProposers[i],
"Proposer index mismatch at slot %d in epoch %d", i, targetEpoch)
}
}
}
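The fix this test guards (visible in the InitializeProposerLookahead hunk above) moved the ActiveValidatorIndices call inside the lookahead loop: the active set can change between epochs, e.g. when validators activate mid-lookahead, so reusing the starting epoch's indices yields wrong proposers for later epochs. A tiny sketch of the spec's activity predicate that makes the difference concrete, with simplified types:

// validator keeps only the fields relevant to the activity check.
type validator struct {
	activationEpoch uint64
	exitEpoch       uint64
}

// isActive mirrors the spec's is_active_validator predicate.
func isActive(v validator, epoch uint64) bool {
	return v.activationEpoch <= epoch && epoch < v.exitEpoch
}

// A validator activating at epoch 3 is absent from epoch 2's active set
// but present in epoch 3's, so each lookahead epoch needs its own indices:
//   isActive(validator{activationEpoch: 3, exitEpoch: 1 << 63}, 2) == false
//   isActive(validator{activationEpoch: 3, exitEpoch: 1 << 63}, 3) == true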


@@ -78,6 +78,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -264,6 +265,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
}
func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
params.SetupTestConfigCleanup(t)
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)


@@ -206,9 +206,9 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// value defined:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#configuration
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
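As a worked check of the constant, plugging in the mainnet phase0 values (MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256 epochs, CHURN_LIMIT_QUOTIENT = 65536) reproduces the 33024 figure asserted in the test below and its roughly five-month span:

package main

import "fmt"

// Mainnet phase0 configuration values.
const (
	minValidatorWithdrawabilityDelay = 256 // epochs
	churnLimitQuotient               = 65536
	slotsPerEpoch                    = 32
	secondsPerSlot                   = 12
)

func main() {
	epochs := minValidatorWithdrawabilityDelay + churnLimitQuotient/2
	days := float64(epochs*slotsPerEpoch*secondsPerSlot) / 86400.0
	fmt.Printf("%d epochs is about %.1f days\n", epochs, days) // 33024 epochs is about 146.8 days
}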


@@ -292,7 +292,7 @@ func TestMinEpochsForBlockRequests(t *testing.T) {
params.SetActiveTestCleanup(t, params.MainnetConfig())
var expected primitives.Epoch = 33024
// expected value of 33024 via spec commentary:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
//
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (


@@ -10,8 +10,12 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client",
visibility = ["//visibility:public"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -26,6 +30,7 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -38,6 +43,9 @@ go_test(
],
deps = [
":go_default_library",
"//async/event:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",

View File

@@ -26,14 +26,12 @@ const ErrNotEnoughSyncCommitteeBits = "sync committee bits count is less than re
func NewLightClientFinalityUpdateFromBeaconState(
ctx context.Context,
currentSlot primitives.Slot,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
finalizedBlock interfaces.ReadOnlySignedBeaconBlock,
) (interfaces.LightClientFinalityUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, currentSlot, state, block, attestedState, attestedBlock, finalizedBlock)
finalizedBlock interfaces.ReadOnlySignedBeaconBlock) (interfaces.LightClientFinalityUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return nil, err
}
@@ -43,13 +41,11 @@ func NewLightClientFinalityUpdateFromBeaconState(
func NewLightClientOptimisticUpdateFromBeaconState(
ctx context.Context,
currentSlot primitives.Slot,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
) (interfaces.LightClientOptimisticUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, currentSlot, state, block, attestedState, attestedBlock, nil)
attestedBlock interfaces.ReadOnlySignedBeaconBlock) (interfaces.LightClientOptimisticUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, nil)
if err != nil {
return nil, err
}
@@ -66,7 +62,6 @@ func NewLightClientOptimisticUpdateFromBeaconState(
// if locally available (may be unavailable, e.g., when using checkpoint sync, or if it was pruned locally)
func NewLightClientUpdateFromBeaconState(
ctx context.Context,
currentSlot primitives.Slot,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
@@ -521,8 +516,7 @@ func ComputeWithdrawalsRoot(payload interfaces.ExecutionData) ([]byte, error) {
func BlockToLightClientHeader(
ctx context.Context,
attestedBlockVersion int, // this is the version that the light client header should be in, based on the attested block.
block interfaces.ReadOnlySignedBeaconBlock, // this block is either the attested block, or the finalized block.
// in case of the latter, we might need to upgrade it to the attested block's version.
block interfaces.ReadOnlySignedBeaconBlock, // this block is either the attested block, or the finalized block. in case of the latter, we might need to upgrade it to the attested block's version.
) (interfaces.LightClientHeader, error) {
if block.Version() > attestedBlockVersion {
return nil, errors.Errorf("block version %s is greater than attested block version %s", version.String(block.Version()), version.String(attestedBlockVersion))
@@ -755,3 +749,64 @@ func UpdateHasSupermajority(syncAggregate *pb.SyncAggregate) bool {
numActiveParticipants := syncAggregate.SyncCommitteeBits.Count()
return numActiveParticipants*3 >= maxActiveParticipants*2
}
// IsFinalityUpdateValidForBroadcast checks if a finality update needs to be broadcasted.
// It is also used to check if an incoming gossiped finality update is valid for forwarding and saving.
func IsFinalityUpdateValidForBroadcast(newUpdate, oldUpdate interfaces.LightClientFinalityUpdate) bool {
if oldUpdate == nil {
return true
}
// The finalized_header.beacon.slot is greater than that of all previously forwarded finality_updates,
// or it matches the highest previously forwarded slot and also has a sync_aggregate indicating supermajority (> 2/3)
// sync committee participation while the previously forwarded finality_update for that slot did not indicate supermajority
newUpdateSlot := newUpdate.FinalizedHeader().Beacon().Slot
newHasSupermajority := UpdateHasSupermajority(newUpdate.SyncAggregate())
lastUpdateSlot := oldUpdate.FinalizedHeader().Beacon().Slot
lastHasSupermajority := UpdateHasSupermajority(oldUpdate.SyncAggregate())
if newUpdateSlot < lastUpdateSlot {
return false
}
if newUpdateSlot == lastUpdateSlot && (lastHasSupermajority || !newHasSupermajority) {
return false
}
return true
}
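The slot-and-supermajority gate above reduces to two integer comparisons; a small sketch of the cross-multiplied 2/3 check, assuming the mainnet SYNC_COMMITTEE_SIZE of 512:

// hasSupermajority mirrors UpdateHasSupermajority: participants/size > 2/3,
// rewritten without division as participants*3 >= size*2.
func hasSupermajority(participants, size uint64) bool {
	return participants*3 >= size*2
}

// With size = 512: hasSupermajority(342, 512) == true  (1026 >= 1024)
//                  hasSupermajority(341, 512) == false (1023 <  1024)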
// IsBetterFinalityUpdate checks if the new finality update is better than the old one for saving.
// This does not concern broadcasting, but rather the decision of whether to save the new update.
// For broadcasting checks, use IsFinalityUpdateValidForBroadcast.
func IsBetterFinalityUpdate(newUpdate, oldUpdate interfaces.LightClientFinalityUpdate) bool {
if oldUpdate == nil {
return true
}
// Full nodes SHOULD provide the LightClientFinalityUpdate with the highest attested_header.beacon.slot (if multiple, highest signature_slot)
newFinalizedSlot := newUpdate.FinalizedHeader().Beacon().Slot
newAttestedSlot := newUpdate.AttestedHeader().Beacon().Slot
oldFinalizedSlot := oldUpdate.FinalizedHeader().Beacon().Slot
oldAttestedSlot := oldUpdate.AttestedHeader().Beacon().Slot
if newFinalizedSlot < oldFinalizedSlot {
return false
}
if newFinalizedSlot == oldFinalizedSlot {
if newAttestedSlot < oldAttestedSlot {
return false
}
if newAttestedSlot == oldAttestedSlot && newUpdate.SignatureSlot() <= oldUpdate.SignatureSlot() {
return false
}
}
return true
}
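A hypothetical walk-through of the tie-breaking order implemented above:

// new finalized slot 100 vs old 99     -> new wins on finalized slot
// finalized equal, attested 200 vs 199 -> new wins on attested slot
// finalized and attested equal,
// signature slot 300 vs 300            -> new loses: the signature slot
//                                         must be strictly greater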
func IsBetterOptimisticUpdate(newUpdate, oldUpdate interfaces.LightClientOptimisticUpdate) bool {
if oldUpdate == nil {
return true
}
// The attested_header.beacon.slot is greater than that of all previously forwarded optimistic updates
return newUpdate.AttestedHeader().Beacon().Slot > oldUpdate.AttestedHeader().Beacon().Slot
}

View File

@@ -34,7 +34,7 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
t.Run("Altair", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Altair)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
@@ -46,7 +46,7 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
t.Run("Capella", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
@@ -59,7 +59,7 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
t.Run("Deneb", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
@@ -72,7 +72,7 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
t.Run("Electra", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Electra)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
@@ -97,7 +97,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
l := util.NewTestLightClient(t, version.Altair)
t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
@@ -132,7 +132,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
@@ -206,7 +206,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
t.Run("FinalizedBlock In Previous Fork", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella, util.WithFinalizedCheckpointInPrevFork())
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
@@ -240,7 +240,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
@@ -315,7 +315,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
t.Run("FinalizedBlock In Previous Fork", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb, util.WithFinalizedCheckpointInPrevFork())
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
@@ -392,7 +392,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Electra)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
@@ -467,7 +467,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
t.Run("FinalizedBlock In Previous Fork", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Electra, util.WithFinalizedCheckpointInPrevFork())
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")

View File

@@ -4,9 +4,14 @@ import (
"context"
"sync"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")
@@ -15,13 +20,17 @@ type Store struct {
mu sync.RWMutex
beaconDB iface.HeadAccessDatabase
lastFinalityUpdate interfaces.LightClientFinalityUpdate
lastOptimisticUpdate interfaces.LightClientOptimisticUpdate
lastFinalityUpdate interfaces.LightClientFinalityUpdate // tracks the best finality update seen so far
lastOptimisticUpdate interfaces.LightClientOptimisticUpdate // tracks the best optimistic update seen so far
p2p p2p.Accessor
stateFeed event.SubscriberSender
}
func NewLightClientStore(db iface.HeadAccessDatabase) *Store {
func NewLightClientStore(db iface.HeadAccessDatabase, p p2p.Accessor, e event.SubscriberSender) *Store {
return &Store{
beaconDB: db,
beaconDB: db,
p2p: p,
stateFeed: e,
}
}
@@ -41,10 +50,31 @@ func (s *Store) LightClientBootstrap(ctx context.Context, blockRoot [32]byte) (i
return bootstrap, nil
}
func (s *Store) SaveLightClientBootstrap(ctx context.Context, blockRoot [32]byte, bootstrap interfaces.LightClientBootstrap) error {
func (s *Store) SaveLightClientBootstrap(ctx context.Context, blockRoot [32]byte) error {
s.mu.Lock()
defer s.mu.Unlock()
blk, err := s.beaconDB.Block(ctx, blockRoot)
if err != nil {
return errors.Wrapf(err, "failed to fetch block for root %x", blockRoot)
}
if blk == nil {
return errors.Errorf("failed to fetch block for root %x", blockRoot)
}
state, err := s.beaconDB.State(ctx, blockRoot)
if err != nil {
return errors.Wrapf(err, "failed to fetch state for block root %x", blockRoot)
}
if state == nil {
return errors.Errorf("failed to fetch state for block root %x", blockRoot)
}
bootstrap, err := NewLightClientBootstrapFromBeaconState(ctx, state.Slot(), state, blk)
if err != nil {
return errors.Wrapf(err, "failed to create light client bootstrap for block root %x", blockRoot)
}
// Save the light client bootstrap to the database
if err := s.beaconDB.SaveLightClientBootstrap(ctx, blockRoot[:], bootstrap); err != nil {
return err
@@ -52,10 +82,92 @@ func (s *Store) SaveLightClientBootstrap(ctx context.Context, blockRoot [32]byte
return nil
}
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate) {
func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) ([]interfaces.LightClientUpdate, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client updates map from the database
updatesMap, err := s.beaconDB.LightClientUpdates(ctx, startPeriod, endPeriod)
if err != nil {
return nil, err
}
var updates []interfaces.LightClientUpdate
for i := startPeriod; i <= endPeriod; i++ {
update, ok := updatesMap[i]
if !ok {
// Only return the first contiguous range of updates
break
}
updates = append(updates, update)
}
return updates, nil
}
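To make the contiguity rule concrete, a hypothetical example: if the database holds updates for periods 5, 6 and 8 but not 7, then

// updatesMap = {5: u5, 6: u6, 8: u8}
// LightClientUpdates(ctx, 5, 9) -> [u5, u6]
// The loop breaks at the first gap (period 7), so u8 is not returned
// even though it exists in the database.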
func (s *Store) LightClientUpdate(ctx context.Context, period uint64) (interfaces.LightClientUpdate, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client update for the given period from the database
update, err := s.beaconDB.LightClientUpdate(ctx, period)
if err != nil {
return nil, err
}
return update, nil
}
func (s *Store) SaveLightClientUpdate(ctx context.Context, period uint64, update interfaces.LightClientUpdate) error {
s.mu.Lock()
defer s.mu.Unlock()
oldUpdate, err := s.beaconDB.LightClientUpdate(ctx, period)
if err != nil {
return errors.Wrapf(err, "could not get current light client update")
}
if oldUpdate == nil {
if err := s.beaconDB.SaveLightClientUpdate(ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
isNewUpdateBetter, err := IsBetterUpdate(update, oldUpdate)
if err != nil {
return errors.Wrapf(err, "could not compare light client updates")
}
if isNewUpdateBetter {
if err := s.beaconDB.SaveLightClientUpdate(ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
return nil
}
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
if broadcast && IsFinalityUpdateValidForBroadcast(update, s.lastFinalityUpdate) {
if err := s.p2p.BroadcastLightClientFinalityUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client finality update")
}
}
s.lastFinalityUpdate = update
log.Debug("Saved new light client finality update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: update,
})
}
func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
@@ -64,10 +176,23 @@ func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
return s.lastFinalityUpdate
}
func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate) {
func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
if broadcast {
if err := s.p2p.BroadcastLightClientOptimisticUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client optimistic update")
}
}
s.lastOptimisticUpdate = update
log.Debug("Saved new light client optimistic update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: update,
})
}
func (s *Store) LastOptimisticUpdate() interfaces.LightClientOptimisticUpdate {

View File

@@ -3,7 +3,10 @@ package light_client_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/async/event"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -21,22 +24,22 @@ func TestLightClientStore(t *testing.T) {
params.OverrideBeaconConfig(cfg)
// Initialize the light client store
lcStore := &lightClient.Store{}
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), &p2pTesting.FakeP2P{}, new(event.Feed))
// Create test light client updates for Capella and Deneb
lCapella := util.NewTestLightClient(t, version.Capella)
opUpdateCapella, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lCapella.Ctx, lCapella.State.Slot(), lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock)
opUpdateCapella, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lCapella.Ctx, lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, opUpdateCapella, "OptimisticUpdateCapella is nil")
finUpdateCapella, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(lCapella.Ctx, lCapella.State.Slot(), lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock, lCapella.FinalizedBlock)
finUpdateCapella, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(lCapella.Ctx, lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock, lCapella.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, finUpdateCapella, "FinalityUpdateCapella is nil")
lDeneb := util.NewTestLightClient(t, version.Deneb)
opUpdateDeneb, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State.Slot(), lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock)
opUpdateDeneb, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, opUpdateDeneb, "OptimisticUpdateDeneb is nil")
finUpdateDeneb, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State.Slot(), lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock, lDeneb.FinalizedBlock)
finUpdateDeneb, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock, lDeneb.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, finUpdateDeneb, "FinalityUpdateDeneb is nil")
@@ -45,24 +48,118 @@ func TestLightClientStore(t *testing.T) {
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get finality with Capella update. Optimistic update should be nil
lcStore.SetLastFinalityUpdate(finUpdateCapella)
lcStore.SetLastFinalityUpdate(finUpdateCapella, false)
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get optimistic with Capella update. Finality update should be Capella
lcStore.SetLastOptimisticUpdate(opUpdateCapella)
lcStore.SetLastOptimisticUpdate(opUpdateCapella, false)
require.Equal(t, opUpdateCapella, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
// Set and get finality and optimistic with Deneb update
lcStore.SetLastFinalityUpdate(finUpdateDeneb)
lcStore.SetLastOptimisticUpdate(opUpdateDeneb)
lcStore.SetLastFinalityUpdate(finUpdateDeneb, false)
lcStore.SetLastOptimisticUpdate(opUpdateDeneb, false)
require.Equal(t, finUpdateDeneb, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.Equal(t, opUpdateDeneb, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
// Set and get finality and optimistic with nil update
lcStore.SetLastFinalityUpdate(nil)
lcStore.SetLastOptimisticUpdate(nil)
require.IsNil(t, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should be nil")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
}
func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), p2p, new(event.Feed))
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update0, lcStore.LastFinalityUpdate()), "update0 should be better than nil")
// update0 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update0, lcStore.LastFinalityUpdate()), "update0 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
l1 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(1))
update1, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l1.Ctx, l1.State, l1.Block, l1.AttestedState, l1.AttestedBlock, l1.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update1, update0), "update1 should be better than update0")
// update1 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update1, lcStore.LastFinalityUpdate()), "update1 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 2 with same finality slot, increased attested slot, and supermajority - should save and broadcast
l2 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update2, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update2, update1), "update2 should be better than update1")
// update2 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update2, lcStore.LastFinalityUpdate()), "update2 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
l3 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(3), util.WithSupermajority())
update3, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l3.Ctx, l3.State, l3.Block, l3.AttestedState, l3.AttestedBlock, l3.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update3, update2), "update3 should be better than update2")
// update3 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update3, lcStore.LastFinalityUpdate()), "update3 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
l4 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(1), util.WithSupermajority())
update4, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l4.Ctx, l4.State, l4.Block, l4.AttestedState, l4.AttestedBlock, l4.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update4, update3), "update4 should be better than update3")
// update4 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update4, lcStore.LastFinalityUpdate()), "update4 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
l5 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update5, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l5.Ctx, l5.State, l5.Block, l5.AttestedState, l5.AttestedBlock, l5.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update5, update4), "update5 should be better than update4")
// update5 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update5, lcStore.LastFinalityUpdate()), "update5 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
l6 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(3))
update6, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l6.Ctx, l6.State, l6.Block, l6.AttestedState, l6.AttestedBlock, l6.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update6, update5), "update6 should be better than update5")
// update6 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update6, lcStore.LastFinalityUpdate()), "update6 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}

View File

@@ -16,7 +16,6 @@ go_library(
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/state:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -53,7 +52,6 @@ go_test(
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -31,27 +31,20 @@ var (
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
)
type CustodyType int
const (
Target CustodyType = iota
Actual
)
// CustodyGroups computes the custody groups the node should participate in for custody.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#get_custody_groups
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#get_custody_groups
func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error) {
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
// Check if the custody group count is larger than the number of custody groups.
if custodyGroupCount > numberOfCustodyGroup {
if custodyGroupCount > numberOfCustodyGroups {
return nil, ErrCustodyGroupCountTooLarge
}
// Shortcut if all custody groups are needed.
if custodyGroupCount == numberOfCustodyGroup {
custodyGroups := make([]uint64, 0, numberOfCustodyGroup)
for i := range numberOfCustodyGroup {
if custodyGroupCount == numberOfCustodyGroups {
custodyGroups := make([]uint64, 0, numberOfCustodyGroups)
for i := range numberOfCustodyGroups {
custodyGroups = append(custodyGroups, i)
}
@@ -73,7 +66,7 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
hashedCurrentId := hash.Hash(currentIdBytesLittleEndian)
// Get the custody group ID.
custodyGroup := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % numberOfCustodyGroup
custodyGroup := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % numberOfCustodyGroups
// Add the custody group to the map.
if !custodyGroupsMap[custodyGroup] {
@@ -88,9 +81,6 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
// Increment the current ID.
currentId.Add(currentId, one)
}
// Sort the custody groups.
slices.Sort[[]uint64](custodyGroups)
}
// Final check.
@@ -98,26 +88,29 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
return nil, errWrongComputedCustodyGroupCount
}
// Sort the custody groups.
slices.Sort[[]uint64](custodyGroups)
return custodyGroups, nil
}
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#compute_columns_for_custody_group
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
if custodyGroup >= numberOfCustodyGroup {
if custodyGroup >= numberOfCustodyGroups {
return nil, ErrCustodyGroupTooLarge
}
numberOfColumns := beaconConfig.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroup
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
columns := make([]uint64, 0, columnsPerGroup)
for i := range columnsPerGroup {
column := numberOfCustodyGroup*i + custodyGroup
column := numberOfCustodyGroups*i + custodyGroup
columns = append(columns, column)
}
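A worked example with deliberately small, hypothetical parameters (the real values live in params.BeaconConfig()): with NUMBER_OF_COLUMNS = 8 and NUMBER_OF_CUSTODY_GROUPS = 4,

// columnsPerGroup = 8 / 4 = 2
// custodyGroup 1 -> columns {4*0 + 1, 4*1 + 1} = {1, 5}
// i.e. each group owns every numberOfCustodyGroups-th column, offset by its ID.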
@@ -127,7 +120,7 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// DataColumnSidecars computes the data column sidecars from the signed block, cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alters `cellsAndProofs` afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.3/specs/fulu/das-core.md#get_data_column_sidecars
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, cellsAndProofs []kzg.CellsAndProofs) ([]*ethpb.DataColumnSidecar, error) {
if signedBlock == nil || signedBlock.IsNil() || len(cellsAndProofs) == 0 {
return nil, nil
@@ -151,7 +144,7 @@ func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, cellsA
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(blockBody)
if err != nil {
return nil, errors.Wrap(err, "merkle proof ZKG commitments")
return nil, errors.Wrap(err, "merkle proof KZG commitments")
}
dataColumnSidecars, err := dataColumnsSidecars(signedBlockHeader, blobKzgCommitments, kzgCommitmentsInclusionProof, cellsAndProofs)
@@ -176,19 +169,6 @@ func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
return columnIndex % numberOfCustodyGroups, nil
}
// CustodyGroupSamplingSize returns the number of custody groups the node should sample from.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#custody-sampling
func (custodyInfo *CustodyInfo) CustodyGroupSamplingSize(ct CustodyType) uint64 {
custodyGroupCount := custodyInfo.TargetGroupCount.Get()
if ct == Actual {
custodyGroupCount = custodyInfo.ActualGroupCount()
}
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
return max(samplesPerSlot, custodyGroupCount)
}
// CustodyColumns computes the custody columns from the custody groups.
func CustodyColumns(custodyGroups []uint64) (map[uint64]bool, error) {
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
@@ -219,6 +199,7 @@ func CustodyColumns(custodyGroups []uint64) (map[uint64]bool, error) {
// the KZG commitment inclusion proofs and cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alters input parameters afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars
func dataColumnsSidecars(
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
blobKzgCommitments [][]byte,

View File

@@ -17,8 +17,8 @@ func TestCustodyGroups(t *testing.T) {
// --------------------------------------------
// The happy path is unit tested in spec tests.
// --------------------------------------------
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.CustodyGroups(enode.ID{}, numberOfCustodyGroup+1)
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.CustodyGroups(enode.ID{}, numberOfCustodyGroups+1)
require.ErrorIs(t, err, peerdas.ErrCustodyGroupCountTooLarge)
}
@@ -26,8 +26,8 @@ func TestComputeColumnsForCustodyGroup(t *testing.T) {
// --------------------------------------------
// The happy path is unit tested in spec tests.
// --------------------------------------------
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.ComputeColumnsForCustodyGroup(numberOfCustodyGroup)
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.ComputeColumnsForCustodyGroup(numberOfCustodyGroups)
require.ErrorIs(t, err, peerdas.ErrCustodyGroupTooLarge)
}
@@ -104,62 +104,6 @@ func TestComputeCustodyGroupForColumn(t *testing.T) {
})
}
func TestCustodyGroupSamplingSize(t *testing.T) {
testCases := []struct {
name string
custodyType peerdas.CustodyType
validatorsCustodyRequirement uint64
toAdvertiseCustodyGroupCount uint64
expected uint64
}{
{
name: "target, lower than samples per slot",
custodyType: peerdas.Target,
validatorsCustodyRequirement: 2,
expected: 8,
},
{
name: "target, higher than samples per slot",
custodyType: peerdas.Target,
validatorsCustodyRequirement: 100,
expected: 100,
},
{
name: "actual, lower than samples per slot",
custodyType: peerdas.Actual,
validatorsCustodyRequirement: 3,
toAdvertiseCustodyGroupCount: 4,
expected: 8,
},
{
name: "actual, higher than samples per slot",
custodyType: peerdas.Actual,
validatorsCustodyRequirement: 100,
toAdvertiseCustodyGroupCount: 101,
expected: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create a custody info.
custodyInfo := peerdas.CustodyInfo{}
// Set the validators custody requirement for target custody group count.
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(tc.validatorsCustodyRequirement)
// Set the to advertise custody group count.
custodyInfo.ToAdvertiseGroupCount.Set(tc.toAdvertiseCustodyGroupCount)
// Compute the custody group sampling size.
actual := custodyInfo.CustodyGroupSamplingSize(tc.custodyType)
// Check the result.
require.Equal(t, tc.expected, actual)
})
}
}
func TestCustodyColumns(t *testing.T) {
t.Run("group too large", func(t *testing.T) {
_, err := peerdas.CustodyColumns([]uint64{1_000_000})

View File

@@ -4,45 +4,17 @@ import (
"encoding/binary"
"sync"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/ethereum/go-ethereum/p2p/enode"
lru "github.com/hashicorp/golang-lru"
"github.com/pkg/errors"
)
// info contains all useful peerDAS related information regarding a peer.
type (
info struct {
CustodyGroups map[uint64]bool
CustodyColumns map[uint64]bool
DataColumnsSubnets map[uint64]bool
}
targetCustodyGroupCount struct {
mut sync.RWMutex
validatorsCustodyRequirement uint64
}
toAdverstiseCustodyGroupCount struct {
mut sync.RWMutex
value uint64
}
CustodyInfo struct {
// Mut is a mutex to be used by caller to ensure neither
// TargetCustodyGroupCount nor ToAdvertiseCustodyGroupCount are being modified.
// (This is not necessary to use this mutex for any data protection.)
Mut sync.RWMutex
// TargetGroupCount represents the target number of custody groups we should custody
// regarding the validators we are tracking.
TargetGroupCount targetCustodyGroupCount
// ToAdvertiseGroupCount represents the number of custody groups to advertise to the network.
ToAdvertiseGroupCount toAdverstiseCustodyGroupCount
}
)
type info struct {
CustodyGroups map[uint64]bool
CustodyColumns map[uint64]bool
DataColumnsSubnets map[uint64]bool
}
const (
nodeInfoCacheSize = 200
@@ -109,61 +81,6 @@ func Info(nodeID enode.ID, custodyGroupCount uint64) (*info, bool, error) {
return result, false, nil
}
// ActualGroupCount returns the actual custody group count.
func (custodyInfo *CustodyInfo) ActualGroupCount() uint64 {
return min(custodyInfo.TargetGroupCount.Get(), custodyInfo.ToAdvertiseGroupCount.Get())
}
// CustodyGroupCount returns the number of groups we should participate in for custody.
func (tcgc *targetCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeAllDataSubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
tcgc.mut.RLock()
defer tcgc.mut.RUnlock()
// If no validators are tracked, return the default custody requirement.
if tcgc.validatorsCustodyRequirement == 0 {
return params.BeaconConfig().CustodyRequirement
}
// Return the validators custody requirement.
return tcgc.validatorsCustodyRequirement
}
// SetValidatorsCustodyRequirement sets the validators custody requirement.
func (tcgc *targetCustodyGroupCount) SetValidatorsCustodyRequirement(value uint64) {
tcgc.mut.Lock()
defer tcgc.mut.Unlock()
tcgc.validatorsCustodyRequirement = value
}
// Get returns the to advertise custody group count.
func (tacgc *toAdverstiseCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeAllDataSubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
custodyRequirement := params.BeaconConfig().CustodyRequirement
tacgc.mut.RLock()
defer tacgc.mut.RUnlock()
return max(tacgc.value, custodyRequirement)
}
// Set sets the to advertise custody group count.
func (tacgc *toAdverstiseCustodyGroupCount) Set(value uint64) {
tacgc.mut.Lock()
defer tacgc.mut.Unlock()
tacgc.value = value
}
// createInfoCacheIfNeeded creates a new cache if it doesn't exist.
func createInfoCacheIfNeeded() error {
nodeInfoCacheMut.Lock()

View File

@@ -4,7 +4,6 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/p2p/enode"
)
@@ -26,108 +25,3 @@ func TestInfo(t *testing.T) {
require.DeepEqual(t, expectedDataColumnsSubnets, actual.DataColumnsSubnets)
}
}
func TestTargetCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllColumns bool
validatorsCustodyRequirement uint64
expected uint64
}{
{
name: "subscribed to all data subnets",
subscribeToAllColumns: true,
validatorsCustodyRequirement: 100,
expected: 128,
},
{
name: "no validators attached",
subscribeToAllColumns: false,
validatorsCustodyRequirement: 0,
expected: 4,
},
{
name: "some validators attached",
subscribeToAllColumns: false,
validatorsCustodyRequirement: 100,
expected: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Subscribe to all subnets if needed.
if tc.subscribeToAllColumns {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
}
var custodyInfo peerdas.CustodyInfo
// Set the validators custody requirement.
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(tc.validatorsCustodyRequirement)
// Get the target custody group count.
actual := custodyInfo.TargetGroupCount.Get()
// Compare the expected and actual values.
require.Equal(t, tc.expected, actual)
})
}
}
func TestToAdvertiseCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllColumns bool
toAdvertiseCustodyGroupCount uint64
expected uint64
}{
{
name: "subscribed to all subnets",
subscribeToAllColumns: true,
toAdvertiseCustodyGroupCount: 100,
expected: 128,
},
{
name: "higher than custody requirement",
subscribeToAllColumns: false,
toAdvertiseCustodyGroupCount: 100,
expected: 100,
},
{
name: "lower than custody requirement",
subscribeToAllColumns: false,
toAdvertiseCustodyGroupCount: 1,
expected: 4,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Subscribe to all subnets if needed.
if tc.subscribeToAllColumns {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
}
// Create a custody info.
var custodyInfo peerdas.CustodyInfo
// Set the to advertise custody group count.
custodyInfo.ToAdvertiseGroupCount.Set(tc.toAdvertiseCustodyGroupCount)
// Get the to advertise custody group count.
actual := custodyInfo.ToAdvertiseGroupCount.Get()
// Compare the expected and actual values.
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -10,10 +10,7 @@ import (
"github.com/pkg/errors"
)
const (
CustodyGroupCountEnrKey = "cgc"
kzgPosition = 11 // The index of the KZG commitment list in the Body
)
const kzgPosition = 11 // The index of the KZG commitment list in the Body
var (
ErrIndexTooLarge = errors.New("column index is larger than the specified columns count")
@@ -27,13 +24,13 @@ var (
ErrCannotLoadCustodyGroupCount = errors.New("cannot load the custody group count from peer")
)
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#custody-group-count
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#custody-group-count
type Cgc uint64
func (Cgc) ENRKey() string { return CustodyGroupCountEnrKey }
func (Cgc) ENRKey() string { return params.BeaconNetworkConfig().CustodyGroupCountKey }
// VerifyDataColumnSidecar verifies if the data column sidecar is valid.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
numberOfColumns := params.BeaconConfig().NumberOfColumns
@@ -60,7 +57,7 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// while we are verifying all the KZG proofs from multiple sidecars in a batch.
// This is done to improve performance since the internal KZG library is way more
// efficient when verifying in batch.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs
func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
// Compute the total count.
count := 0
@@ -96,7 +93,7 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
}
// VerifyDataColumnSidecarInclusionProof verifies if the given KZG commitments included in the given beacon block.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
if sidecar.SignedBlockHeader == nil || sidecar.SignedBlockHeader.Header == nil {
return ErrNilBlockHeader
@@ -128,7 +125,7 @@ func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
}
// ComputeSubnetForDataColumnSidecar computes the subnet for a data column sidecar.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
func ComputeSubnetForDataColumnSidecar(columnIndex uint64) uint64 {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
return columnIndex % dataColumnSidecarSubnetCount

View File

@@ -123,7 +123,7 @@ func ReconstructDataColumnSidecars(inVerifiedRoSidecars []blocks.VerifiedRODataC
// ConstructDataColumnSidecars constructs data column sidecars from a block, (un-extended) blobs and
// cell proofs corresponding to the extended blobs. The main purpose of this function is to
// construct data columns sidecars from data obtained from the execution client via:
// construct data column sidecars from data obtained from the execution client via:
// - `engine_getBlobsV2` - https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getblobsv2, or
// - `engine_getPayloadV5` - https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getpayloadv5
// Note: In this function, to stick with the `BlobsBundleV2` format returned by the execution client in `engine_getPayloadV5`,
@@ -222,8 +222,8 @@ func ReconstructBlobs(block blocks.ROBlock, verifiedDataColumnSidecars []blocks.
// Check if the data column sidecars are aligned with the block.
dataColumnSidecars := make([]blocks.RODataColumn, 0, len(verifiedDataColumnSidecars))
for _, verifiedDataColumnSidecar := range verifiedDataColumnSidecars {
dataColumnSicecar := verifiedDataColumnSidecar.RODataColumn
dataColumnSidecars = append(dataColumnSidecars, dataColumnSicecar)
dataColumnSidecar := verifiedDataColumnSidecar.RODataColumn
dataColumnSidecars = append(dataColumnSidecars, dataColumnSidecar)
}
if err := DataColumnsAlignWithBlock(block, dataColumnSidecars); err != nil {
@@ -241,7 +241,7 @@ func ReconstructBlobs(block blocks.ROBlock, verifiedDataColumnSidecars []blocks.
return blobSidecars, nil
}
// We need to reconstruct the blobs.
// We need to reconstruct the data column sidecars.
reconstructedDataColumnSidecars, err := ReconstructDataColumnSidecars(verifiedDataColumnSidecars)
if err != nil {
return nil, errors.Wrap(err, "reconstruct data column sidecars")

View File

@@ -196,6 +196,26 @@ func TestReconstructBlobs(t *testing.T) {
require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
})
t.Run("consecutive duplicates", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)
// [0, 1, 1, 3, 4, ...]
verifiedRoSidecars[2] = verifiedRoSidecars[1]
_, err := peerdas.ReconstructBlobs(emptyBlock, verifiedRoSidecars, []int{0})
require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
})
t.Run("non-consecutive duplicates", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)
// [0, 1, 2, 1, 4, ...]
verifiedRoSidecars[3] = verifiedRoSidecars[1]
_, err := peerdas.ReconstructBlobs(emptyBlock, verifiedRoSidecars, []int{0})
require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
})
t.Run("not enough columns", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)

View File

@@ -8,7 +8,7 @@ import (
)
// ValidatorsCustodyRequirement returns the number of custody groups required for the validator indices attached to the beacon node.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/validator.md#validator-custody
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#validator-custody
func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validatorsIndex map[primitives.ValidatorIndex]bool) (uint64, error) {
totalNodeBalance := uint64(0)
for index := range validatorsIndex {
@@ -21,10 +21,10 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
}
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroup), nil
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil
}
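A worked example, assuming the Fulu config values VALIDATOR_CUSTODY_REQUIREMENT = 8, BALANCE_PER_ADDITIONAL_CUSTODY_GROUP = 32 ETH and NUMBER_OF_CUSTODY_GROUPS = 128 (assumed here, not shown in this diff):

// 20 attached validators at 32 ETH each -> totalNodeBalance = 640 ETH
// count = 640 / 32 = 20 -> min(max(20, 8), 128) = 20 custody groups
// With only 4 validators: count = 4 -> max(4, 8) = 8, the floor applies.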

View File

@@ -4,7 +4,6 @@ go_library(
name = "go_default_library",
srcs = [
"domain.go",
"signature.go",
"signing_root.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing",
@@ -25,7 +24,6 @@ go_test(
name = "go_default_test",
srcs = [
"domain_test.go",
"signature_test.go",
"signing_root_test.go",
],
embed = [":go_default_library"],

View File

@@ -1,34 +0,0 @@
package signing
import (
"github.com/OffchainLabs/prysm/v6/config/params"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
var ErrNilRegistration = errors.New("nil signed registration")
// VerifyRegistrationSignature verifies the signature of a validator's registration.
func VerifyRegistrationSignature(
sr *ethpb.SignedValidatorRegistrationV1,
) error {
if sr == nil || sr.Message == nil {
return ErrNilRegistration
}
d := params.BeaconConfig().DomainApplicationBuilder
// Per spec, we want the fork version and genesis validators root to be nil,
// which default to the genesis value and zero respectively.
sd, err := ComputeDomain(
d,
nil, /* fork version */
nil /* genesis val root */)
if err != nil {
return err
}
if err := VerifySigningRoot(sr.Message, sr.Message.Pubkey, sr.Signature, sd); err != nil {
return ErrSigFailedToVerify
}
return nil
}

View File

@@ -1,42 +0,0 @@
package signing_test
import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestVerifyRegistrationSignature(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
reg := &ethpb.ValidatorRegistrationV1{
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
GasLimit: 123456,
Timestamp: uint64(time.Now().Unix()),
Pubkey: sk.PublicKey().Marshal(),
}
d := params.BeaconConfig().DomainApplicationBuilder
domain, err := signing.ComputeDomain(d, nil, nil)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(reg, domain)
require.NoError(t, err)
sk.Sign(sr[:]).Marshal()
sReg := &ethpb.SignedValidatorRegistrationV1{
Message: reg,
Signature: sk.Sign(sr[:]).Marshal(),
}
require.NoError(t, signing.VerifyRegistrationSignature(sReg))
sReg.Signature = []byte("bad")
require.ErrorIs(t, signing.VerifyRegistrationSignature(sReg), signing.ErrSigFailedToVerify)
sReg.Message = nil
require.ErrorIs(t, signing.VerifyRegistrationSignature(sReg), signing.ErrNilRegistration)
}

View File

@@ -4,7 +4,6 @@ go_library(
name = "go_default_library",
srcs = [
"availability_blobs.go",
"availability_columns.go",
"blob_cache.go",
"data_column_cache.go",
"iface.go",
@@ -13,7 +12,6 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -23,7 +21,6 @@ go_library(
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -33,16 +30,13 @@ go_test(
name = "go_default_test",
srcs = [
"availability_blobs_test.go",
"availability_columns_test.go",
"blob_cache_test.go",
"data_column_cache_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -51,7 +45,6 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -53,30 +53,25 @@ func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchV
// Persist adds blobs to the working blob cache. Blobs stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all blobs referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {
return nil
}
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sidecars)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
if len(blobSidecars) > 1 {
firstRoot := blobSidecars[0].BlockRoot()
for _, sidecar := range blobSidecars[1:] {
if len(sidecars) > 1 {
firstRoot := sidecars[0].BlockRoot()
for _, sidecar := range sidecars[1:] {
if sidecar.BlockRoot() != firstRoot {
return errMixedRoots
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(blobSidecars[0].Slot()), slots.ToEpoch(current)) {
if !params.WithinDAPeriod(slots.ToEpoch(sidecars[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromSidecar(blobSidecars[0])
key := keyFromSidecar(sidecars[0])
entry := s.cache.ensure(key)
for _, blobSidecar := range blobSidecars {
for _, blobSidecar := range sidecars {
if err := entry.stash(&blobSidecar); err != nil {
return err
}
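The signature change above (ROSidecar to ROBlob) keeps the two-phase import flow intact; a minimal sketch of the intended call pattern, assuming store, ctx, currentSlot, roBlobs, and roBlock are in scope:

// Phase 1: stash incoming blob sidecars in the working cache
// (no verification or disk writes yet).
if err := store.Persist(currentSlot, roBlobs...); err != nil {
	return err
}
// Phase 2: verify everything the block references and persist it durably;
// the block should only be imported once this returns nil.
if err := store.IsDataAvailable(ctx, currentSlot, roBlock); err != nil {
	return err
}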

View File

@@ -118,23 +118,21 @@ func TestLazilyPersistent_Missing(t *testing.T) {
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[2]))
require.NoError(t, as.Persist(1, blobSidecars[2]))
err := as.IsDataAvailable(ctx, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All but one persisted, return missing idx
require.NoError(t, as.Persist(1, scs[0]))
require.NoError(t, as.Persist(1, blobSidecars[0]))
err = as.IsDataAvailable(ctx, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All persisted, return nil
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.Persist(1, blobSidecars...))
require.NoError(t, as.IsDataAvailable(ctx, 1, blk))
}
@@ -149,10 +147,8 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[0]))
require.NoError(t, as.Persist(1, blobSidecars[0]))
err := as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
require.ErrorIs(t, err, errCommitmentMismatch)
@@ -161,29 +157,25 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
func TestLazyPersistOnceCommitted(t *testing.T) {
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
// stashes as expected
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.Persist(1, blobSidecars...))
// ignores duplicates
require.ErrorIs(t, as.Persist(1, scs...), ErrDuplicateSidecar)
require.ErrorIs(t, as.Persist(1, blobSidecars...), ErrDuplicateSidecar)
// ignores index out of bound
blobSidecars[0].Index = 6
require.ErrorIs(t, as.Persist(1, blocks.NewSidecarFromBlobSidecar(blobSidecars[0])), errIndexOutOfBounds)
require.ErrorIs(t, as.Persist(1, blobSidecars[0]), errIndexOutOfBounds)
_, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
more := blocks.NewSidecarsFromBlobSidecars(moreBlobSidecars)
// ignores sidecars before the retention period
slotOOB, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
require.NoError(t, as.Persist(32+slotOOB, more[0]))
require.NoError(t, as.Persist(32+slotOOB, moreBlobSidecars[0]))
// doesn't ignore new sidecars with a different block root
require.NoError(t, as.Persist(1, more...))
require.NoError(t, as.Persist(1, moreBlobSidecars...))
}
type mockBlobBatchVerifier struct {

View File

@@ -1,208 +0,0 @@
package das
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation holds any data columns passed to Persist until IsDataAvailable is called for their
// block, at which point they undergo full verification and are saved to disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.DataColumnStorage
nodeID enode.ID
cache *dataColumnCache
custodyInfo *peerdas.CustodyInfo
newDataColumnsVerifier verification.NewDataColumnsVerifier
}
var _ AvailabilityStore = &LazilyPersistentStoreColumn{}
// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(store *filesystem.DataColumnStorage, nodeID enode.ID, newDataColumnsVerifier verification.NewDataColumnsVerifier, custodyInfo *peerdas.CustodyInfo) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
nodeID: nodeID,
cache: newDataColumnCache(),
custodyInfo: custodyInfo,
newDataColumnsVerifier: newDataColumnsVerifier,
}
}
// Persist adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
if len(sidecars) == 0 {
return nil
}
dataColumnSidecars, err := blocks.DataColumnSidecarsFromSidecars(sidecars)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
// It is safe to retrieve the first sidecar, since the empty case returned early above.
firstSidecar := dataColumnSidecars[0]
if len(sidecars) > 1 {
firstRoot := firstSidecar.BlockRoot()
for _, sidecar := range dataColumnSidecars[1:] {
if sidecar.BlockRoot() != firstRoot {
return errMixedRoots
}
}
}
firstSidecarEpoch, currentEpoch := slots.ToEpoch(firstSidecar.Slot()), slots.ToEpoch(current)
if !params.WithinDAPeriod(firstSidecarEpoch, currentEpoch) {
return nil
}
key := cacheKey{slot: firstSidecar.Slot(), root: firstSidecar.BlockRoot()}
entry := s.cache.ensure(key)
for _, sidecar := range dataColumnSidecars {
if err := entry.stash(&sidecar); err != nil {
return errors.Wrap(err, "stash DataColumnSidecar")
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// Data column sidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, currentSlot primitives.Slot, block blocks.ROBlock) error {
blockCommitments, err := s.fullCommitmentsToCheck(s.nodeID, block, currentSlot)
if err != nil {
return errors.Wrapf(err, "full commitments to check with block root `%#x` and current slot `%d`", block.Root(), currentSlot)
}
// Return early for blocks that do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
// Get the root of the block.
blockRoot := block.Root()
// Build the cache key for the block.
key := cacheKey{slot: block.Block().Slot(), root: blockRoot}
// Retrieve the cache entry for the block, or create an empty one if it doesn't exist.
entry := s.cache.ensure(key)
// Delete the cache entry for the block at the end.
defer s.cache.delete(key)
// Set the disk summary for the block in the cache entry.
entry.setDiskSummary(s.store.Summary(blockRoot))
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
roDataColumns, err := entry.filter(blockRoot, blockCommitments)
if err != nil {
return errors.Wrap(err, "entry filter")
}
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
verifier := s.newDataColumnsVerifier(roDataColumns, verification.ByRangeRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return errors.Wrap(err, "valid")
}
if err := verifier.SidecarInclusionProven(); err != nil {
return errors.Wrap(err, "sidecar inclusion proven")
}
if err := verifier.SidecarKzgProofVerified(); err != nil {
return errors.Wrap(err, "sidecar KZG proof verified")
}
verifiedRoDataColumns, err := verifier.VerifiedRODataColumns()
if err != nil {
return errors.Wrap(err, "verified RO data columns - should never happen")
}
if err := s.store.Save(verifiedRoDataColumns); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
// fullCommitmentsToCheck returns the commitments to check for a given block.
func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
// Return early for blocks that are pre-Fulu.
if block.Version() < version.Fulu {
return &safeCommitmentsArray{}, nil
}
// Compute the block epoch.
blockSlot := block.Block().Slot()
blockEpoch := slots.ToEpoch(blockSlot)
// Compute the current epoch.
currentEpoch := slots.ToEpoch(currentSlot)
// Return early if the request is out of the MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS window.
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return &safeCommitmentsArray{}, nil
}
// Retrieve the KZG commitments for the block.
kzgCommitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Return early if there are no commitments in the block.
if len(kzgCommitments) == 0 {
return &safeCommitmentsArray{}, nil
}
// Retrieve the groups count.
custodyGroupCount := s.custodyInfo.ActualGroupCount()
// Retrieve peer info.
peerInfo, _, err := peerdas.Info(nodeID, custodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "peer info")
}
// Create a safe commitments array for the custody columns.
commitmentsArray := &safeCommitmentsArray{}
commitmentsArraySize := uint64(len(commitmentsArray))
for column := range peerInfo.CustodyColumns {
if column >= commitmentsArraySize {
return nil, errors.Errorf("custody column index %d too high (max allowed %d) - should never happen", column, commitmentsArraySize-1)
}
commitmentsArray[column] = kzgCommitments
}
return commitmentsArray, nil
}
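Before its removal here, IsDataAvailable always ran the same fixed pipeline before any disk write; a condensed sketch using the verifier methods shown above, with newVerifier and store standing in for s.newDataColumnsVerifier and s.store:

verifier := newVerifier(roDataColumns, verification.ByRangeRequestDataColumnSidecarRequirements)
for _, check := range []func() error{
	verifier.ValidFields,             // structural field checks
	verifier.SidecarInclusionProven,  // commitment inclusion proofs
	verifier.SidecarKzgProofVerified, // KZG proofs over each column
} {
	if err := check(); err != nil {
		return err
	}
}
verifiedColumns, err := verifier.VerifiedRODataColumns()
if err != nil {
	return err
}
return store.Save(verifiedColumns)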

View File

@@ -1,319 +0,0 @@
package das
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
)
var commitments = [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
func TestPersist(t *testing.T) {
t.Run("no sidecars", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.Persist(0)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("mixed roots", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
{Slot: 2, Index: 2},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
require.ErrorIs(t, err, errMixedRoots)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("outside DA period", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.Persist(1_000_000, roSidecars...)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("nominal", func(t *testing.T) {
const slot = 42
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: slot, Index: 1},
{Slot: slot, Index: 5},
}
roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.Persist(slot, roSidecars...)
require.NoError(t, err)
require.Equal(t, 1, len(lazilyPersistentStoreColumns.cache.entries))
key := cacheKey{slot: slot, root: roDataColumns[0].BlockRoot()}
entry, ok := lazilyPersistentStoreColumns.cache.entries[key]
require.Equal(t, true, ok)
// A call to Persist does NOT save the sidecars to disk.
require.Equal(t, uint64(0), entry.diskSummary.Count())
require.DeepSSZEqual(t, roDataColumns[0], *entry.scs[1])
require.DeepSSZEqual(t, roDataColumns[1], *entry.scs[5])
for i, roDataColumn := range entry.scs {
if map[int]bool{1: true, 5: true}[i] {
continue
}
require.IsNil(t, roDataColumn)
}
})
}
func TestIsDataAvailable(t *testing.T) {
newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
}
ctx := t.Context()
t.Run("without commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
require.NoError(t, err)
})
t.Run("with commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
block := signedRoBlock.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
root := signedRoBlock.Root()
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})
indices := [...]uint64{1, 17, 87, 102}
dataColumnsParams := make([]util.DataColumnParam, 0, len(indices))
for _, index := range indices {
dataColumnParams := util.DataColumnParam{
Index: index,
KzgCommitments: commitments,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dataColumnsParams = append(dataColumnsParams, dataColumnParams)
}
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParams)
key := cacheKey{root: root}
entry := lazilyPersistentStoreColumns.cache.ensure(key)
defer lazilyPersistentStoreColumns.cache.delete(key)
for _, verifiedRoDataColumn := range verifiedRoDataColumns {
err := entry.stash(&verifiedRoDataColumn.RODataColumn)
require.NoError(t, err)
}
err = lazilyPersistentStoreColumns.IsDataAvailable(ctx, slot, signedRoBlock)
require.NoError(t, err)
actual, err := dataColumnStorage.Get(root, indices[:])
require.NoError(t, err)
summary := dataColumnStorage.Summary(root)
require.Equal(t, uint64(len(indices)), summary.Count())
require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
})
}
func TestFullCommitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
require.NoError(t, err)
testCases := []struct {
name string
commitments [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
}{
{
name: "Pre-Fulu block",
block: func(t *testing.T) blocks.ROBlock {
return newSignedRoBlock(t, util.NewBeaconBlockElectra())
},
},
{
name: "Commitments outside data availability window",
block: func(t *testing.T) blocks.ROBlock {
beaconBlockElectra := util.NewBeaconBlockElectra()
// Block is from slot 0, "current slot" is window size +1 (so outside the window)
beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments
return newSignedRoBlock(t, beaconBlockElectra)
},
slot: windowSlots + 1,
},
{
name: "Commitments within data availability window",
block: func(t *testing.T) blocks.ROBlock {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedBeaconBlockFulu.Block.Slot = 100
return newSignedRoBlock(t, signedBeaconBlockFulu)
},
commitments: commitments,
slot: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
b := tc.block(t)
s := NewLazilyPersistentStoreColumn(nil, enode.ID{}, nil, &peerdas.CustodyInfo{})
commitmentsArray, err := s.fullCommitmentsToCheck(enode.ID{}, b, tc.slot)
require.NoError(t, err)
for _, commitments := range commitmentsArray {
require.DeepEqual(t, tc.commitments, commitments)
}
})
}
}
func roSidecarsFromDataColumnParamsByBlockRoot(t *testing.T, parameters []util.DataColumnParam) ([]blocks.ROSidecar, []blocks.RODataColumn) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, parameters)
roSidecars := make([]blocks.ROSidecar, 0, len(roDataColumns))
for _, roDataColumn := range roDataColumns {
roSidecars = append(roSidecars, blocks.NewSidecarFromDataColumnSidecar(roDataColumn))
}
return roSidecars, roDataColumns
}
func newSignedRoBlock(t *testing.T, signedBeaconBlock interface{}) blocks.ROBlock {
sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
}
type mockDataColumnsVerifier struct {
t *testing.T
dataColumnSidecars []blocks.RODataColumn
validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}
var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}
func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)
verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
for _, dataColumnSidecar := range m.dataColumnSidecars {
verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
}
return verifiedDataColumnSidecars, nil
}
func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (m *mockDataColumnsVerifier) ValidFields() error {
m.validCalled = true
return nil
}
func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
return nil
}
func (m *mockDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }
func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
m.SidecarInclusionProvenCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
m.SidecarKzgProofVerifiedCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }

View File

@@ -15,5 +15,5 @@ import (
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, sc ...blocks.ROSidecar) error
Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}

View File

@@ -5,13 +5,12 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
errors "github.com/pkg/errors"
)
// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
type MockAvailabilityStore struct {
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
PersistBlobsCallback func(current primitives.Slot, sc ...blocks.ROBlob) error
PersistBlobsCallback func(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}
var _ AvailabilityStore = &MockAvailabilityStore{}
@@ -25,13 +24,9 @@ func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current pri
}
// Persist satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROSidecar) error {
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sc)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
func (m *MockAvailabilityStore) Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error {
if m.PersistBlobsCallback != nil {
return m.PersistBlobsCallback(current, blobSidecars...)
return m.PersistBlobsCallback(current, blobSidecar...)
}
return nil
}

View File

@@ -13,6 +13,7 @@ go_library(
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/beacon-chain:__subpackages__",
"//genesis:__subpackages__",
"//testing/slasher/simulator:__pkg__",
"//tools:__subpackages__",
],

View File

@@ -10,6 +10,11 @@ type ReadOnlyDatabase = iface.ReadOnlyDatabase
// about head info. For head info, use github.com/prysmaticlabs/prysm/blockchain.HeadFetcher.
type NoHeadAccessDatabase = iface.NoHeadAccessDatabase
// ReadOnlyDatabaseWithSeqNum exposes Prysm's Ethereum data backend for read access only, with no access to
// head info, but with read/write access to the p2p metadata sequence number.
// This is used for the p2p service.
type ReadOnlyDatabaseWithSeqNum = iface.ReadOnlyDatabaseWithSeqNum
// HeadAccessDatabase exposes Prysm's Ethereum backend for read/write access with information about
// chain head information. This interface should be used sparingly as the HeadFetcher is the source
// of truth around chain head information while this interface serves as persistent storage for the
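The method set of ReadOnlyDatabaseWithSeqNum lives in the iface package and is not shown in this diff; the sketch below is illustrative only, with hypothetical accessor names:

// Hypothetical shape only - the real definition is in beacon-chain/db/iface.
type readOnlyDatabaseWithSeqNum interface {
	ReadOnlyDatabase
	// read/write access to the p2p metadata sequence number
	SequenceNumber() (uint64, error)
	SaveSequenceNumber(seqNum uint64) error
}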

View File

@@ -100,6 +100,14 @@ type (
}
)
// DataColumnStorageReader is an interface to read data column sidecars from the filesystem.
type DataColumnStorageReader interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
Get(root [fieldparams.RootLength]byte, indices []uint64) ([]blocks.VerifiedRODataColumn, error)
}
var _ DataColumnStorageReader = &DataColumnStorage{}
// WithDataColumnBasePath is a required option that sets the base path of data column storage.
func WithDataColumnBasePath(base string) DataColumnStorageOption {
return func(b *DataColumnStorage) error {
@@ -251,7 +259,7 @@ func (dcs *DataColumnStorage) Summary(root [fieldparams.RootLength]byte) DataCol
}
// Save saves data column sidecars into the database and asynchronously performs pruning.
// The returned chanel is closed when the pruning is complete.
// The returned channel is closed when the pruning is complete.
func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataColumn) error {
startTime := time.Now()
@@ -266,8 +274,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
return errWrongNumberOfColumns
}
highestEpoch := primitives.Epoch(0)
dataColumnSidecarsbyRoot := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn)
dataColumnSidecarsByRoot := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn)
// Group data column sidecars by root.
for _, dataColumnSidecar := range dataColumnSidecars {
@@ -278,23 +285,20 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
// Group data column sidecars by root.
root := dataColumnSidecar.BlockRoot()
dataColumnSidecarsbyRoot[root] = append(dataColumnSidecarsbyRoot[root], dataColumnSidecar)
dataColumnSidecarsByRoot[root] = append(dataColumnSidecarsByRoot[root], dataColumnSidecar)
}
for root, dataColumnSidecars := range dataColumnSidecarsbyRoot {
for root, dataColumnSidecars := range dataColumnSidecarsByRoot {
// Safety check all data column sidecars for this root are from the same slot.
firstSlot := dataColumnSidecars[0].SignedBlockHeader.Header.Slot
slot := dataColumnSidecars[0].Slot()
for _, dataColumnSidecar := range dataColumnSidecars[1:] {
if dataColumnSidecar.SignedBlockHeader.Header.Slot != firstSlot {
if dataColumnSidecar.Slot() != slot {
return errDataColumnSidecarsFromDifferentSlots
}
}
// Set the highest epoch.
epoch := slots.ToEpoch(dataColumnSidecars[0].Slot())
highestEpoch = max(highestEpoch, epoch)
// Save data columns in the filesystem.
epoch := slots.ToEpoch(slot)
if err := dcs.saveFilesystem(root, epoch, dataColumnSidecars); err != nil {
return errors.Wrap(err, "save filesystem")
}
@@ -306,7 +310,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
}
// Compute the data columns ident.
dataColumnsIdent := DataColumnsIdent{Root: root, Epoch: slots.ToEpoch(dataColumnSidecars[0].Slot()), Indices: indices}
dataColumnsIdent := DataColumnsIdent{Root: root, Epoch: epoch, Indices: indices}
// Set data columns in the cache.
if err := dcs.cache.set(dataColumnsIdent); err != nil {

View File

@@ -84,12 +84,6 @@ func (s DataColumnStorageSummary) Stored() map[uint64]bool {
return stored
}
// DataColumnStorageSummarizer can be used to receive a summary of metadata about data columns on disk for a given root.
// The DataColumnStorageSummary can be used to check which indices (if any) are available for a given block by root.
type DataColumnStorageSummarizer interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
}
type dataColumnStorageSummaryCache struct {
mu sync.RWMutex
dataColumnCount float64
@@ -98,8 +92,6 @@ type dataColumnStorageSummaryCache struct {
cache map[[fieldparams.RootLength]byte]DataColumnStorageSummary
}
var _ DataColumnStorageSummarizer = &dataColumnStorageSummaryCache{}
func newDataColumnStorageSummaryCache() *dataColumnStorageSummaryCache {
return &dataColumnStorageSummaryCache{
cache: make(map[[fieldparams.RootLength]byte]DataColumnStorageSummary),

Some files were not shown because too many files have changed in this diff.