Compare commits

...

89 Commits

Author SHA1 Message Date
Potuz
23e1525650 fix linter 2025-02-04 07:31:25 -03:00
Potuz
6d4d65db05 Use OrderedSchedule to compute forks for epochs 2025-02-03 17:07:17 -03:00
terence tsao
da470d76aa Rebase and fix tests 2025-02-01 20:13:11 -08:00
terence tsao
3a324d9a8c Fix logrus imports and build 2025-01-31 23:18:10 -08:00
Potuz
d41ecd0c0a pass epbs as version 2025-01-31 23:18:10 -08:00
Potuz
1bdd79e16b payload attribute and withdrawal fixes 2025-01-31 23:18:10 -08:00
Potuz
bf17020e01 get right parentblockhash on local blocks 2025-01-31 23:18:10 -08:00
Potuz
fd5111d30d Update the NSC on epbs (#14626) 2025-01-31 23:18:10 -08:00
Potuz
6523fb328d Epbs payload (#14619)
* Send FCU on epbs

* handle late block tasks

* deal with reorgs by attestations

* save head on regular sync

* don't sleep 1 sec
2025-01-31 23:18:10 -08:00
Potuz
f968baeb71 update header when computing the payload state root (#14625) 2025-01-31 23:17:54 -08:00
Potuz
6b47bd81f8 Get the right state when processing blocks 2025-01-31 23:17:54 -08:00
Potuz
ed801af186 fix pruning 2025-01-31 23:16:56 -08:00
Potuz
13aba9d129 Process blocks after ePBS (#14611)
These are some of the things that are left to be done

    - Process the payload
    - Change stategen to get the poststate of the block and the payload
      separately
    - Change the next slot cache to be safe for full/empty
2025-01-31 23:11:51 -08:00
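Editor's note on the commit above: making the next slot cache "safe for full/empty" suggests that a cached post-state must be looked up by both the block root and whether the payload was revealed. A minimal, hypothetical Go sketch of such a key (not Prysm's actual stategen or cache API):

    package cache

    import "sync"

    // nscKey is an illustrative cache key: under ePBS the same beacon block root
    // can map to two different post-states, one for the empty branch and one for
    // the full (payload revealed) branch.
    type nscKey struct {
        root [32]byte
        full bool
    }

    // nextSlotCache is a toy cache guarded by a mutex; the real implementation
    // and its state type are assumptions here.
    type nextSlotCache struct {
        sync.RWMutex
        states map[nscKey][]byte // serialized post-state, purely illustrative
    }

    func newNextSlotCache() *nextSlotCache {
        return &nextSlotCache{states: make(map[nscKey][]byte)}
    }

    func (c *nextSlotCache) get(root [32]byte, full bool) ([]byte, bool) {
        c.RLock()
        defer c.RUnlock()
        st, ok := c.states[nscKey{root: root, full: full}]
        return st, ok
    }

    func (c *nextSlotCache) put(root [32]byte, full bool, st []byte) {
        c.Lock()
        defer c.Unlock()
        c.states[nscKey{root: root, full: full}] = st
    }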
Potuz
338e14d3af Share resources between empty and full nodes (#14517)
* Share resources between empty and full nodes

- Share a block structure within the forkchoice node. The surrounding
  envelope contains information about the payload presence and the
  children links, the inner structure contains the usual FFG and parent links.
- Reworked setOptimistictoInvalid
- Changed the PTC vote logic to have validity handled outside of
  forkchoice and have forkchoice only keep the total count of votes.

* Fix tests

* gazelle

* Update head twice pre-epbs

* only update best descendants without computing head

* skip forkchoice tests

* fix some blockchain tests

* Nil optimistic sync fix

* only count weight of empty nodes
2025-01-31 23:11:51 -08:00
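Editor's note on the commit above: a rough illustration of the node layout it describes, with an outer envelope tracking payload presence, PTC vote count, and child links, wrapping an inner structure holding the usual FFG and parent fields. Type and field names are assumptions for illustration, not Prysm's actual forkchoice types.

    package forkchoice

    // ffgInfo is an illustrative inner structure holding the usual FFG and
    // parent links, shared between the empty and full versions of a block.
    type ffgInfo struct {
        parent         *node
        justifiedEpoch uint64
        finalizedEpoch uint64
        weight         uint64
    }

    // node is an illustrative outer envelope: it records whether the payload was
    // revealed and keeps the children links, while the shared block data lives
    // in ffgInfo; forkchoice only keeps the total count of PTC votes.
    type node struct {
        block          *ffgInfo
        payloadPresent bool
        children       []*node
        ptcVoteCount   uint64
    }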
terence
9f16a17b8c Add support to generate genesis state for epbs (#14594) 2025-01-31 23:11:51 -08:00
Potuz
7d8ab438c6 Remove invalid tests 2025-01-31 23:10:31 -08:00
Potuz
6600cec6b3 Fix pending balance deposits 2025-01-31 23:10:31 -08:00
Potuz
4a4d8284cc Use wait for ptc duty in submit payload attestation message (#14491) 2025-01-31 23:10:31 -08:00
terence
28cd5072b5 Initialize execution requests for tests (#14489) 2025-01-31 23:10:31 -08:00
Potuz
2d6fb117d5 fix PayloadEnvelopeStateTransition test 2025-01-31 23:10:31 -08:00
Potuz
1525dde40e Fix compute field roots with hasher 2025-01-31 23:10:31 -08:00
Potuz
df9921efe5 export random execution request 2025-01-31 23:10:31 -08:00
Potuz
3098757724 fix build 2025-01-31 23:10:31 -08:00
Potuz
6b8409601c Prysm rpc: submit signed execution payload header (#14441) 2025-01-31 23:10:31 -08:00
terence
50352072bc Update attestor and aggregator respect epbs intervals (#14454) 2025-01-31 23:09:44 -08:00
Potuz
ccdafe3838 Provide a helper to compute the state root after a payload envelope (#14450) 2025-01-31 23:09:44 -08:00
Potuz
09d97c548b Fix pubkeyToIndex usage 2025-01-31 23:09:44 -08:00
terence
17e04f72f6 Prysm rpc: Get payload attestation data (#14380) 2025-01-31 23:09:44 -08:00
Potuz
c82e2368f2 Handle execution payload insertion in forkchoice (#14422) 2025-01-31 23:09:44 -08:00
Potuz
c9d257baf1 Add GetPTCVote helpers (#14420) 2025-01-31 23:09:44 -08:00
terence
3cc776b7f2 Add wait until PTC duty helper function (#14419)
Add wait until PTC duty
2025-01-31 23:09:44 -08:00
Potuz
7bf0379a78 Prysm rpc: submit execution payload envelope (#14395) 2025-01-31 23:09:44 -08:00
Potuz
fb21cf77cc Remove Changelog workflow 2025-01-31 23:08:55 -08:00
Potuz
8a5967d0aa Prysm rpc: Submit payload attestation data (#14381) 2025-01-31 23:08:43 -08:00
Potuz
b63d7cfffd Receive ptc message (#14394)
* Handle incoming ptc attestation messages in the chain package

* fix double import

* remove unused error
2025-01-31 23:07:49 -08:00
Potuz
66853d936a Add ePBS fork schedule to config (#14383) 2025-01-31 23:07:49 -08:00
Potuz
d5479ce098 [ePBS] implement UpdateVotesOnPayloadAttestation (#14308) 2025-01-31 23:00:58 -08:00
Potuz
aefa4eeb06 Signed execution payload header for sync (#14363)
* Signed execution payload header for sync

* Use RO state

* SignedExecutionPayloadHeader by hash and root

* Fix execution headers cache
2025-01-31 23:00:58 -08:00
Potuz
4d209bdccc Refactor currentlySyncingPayload cache (#14350) 2025-01-31 23:00:58 -08:00
JihoonSong
2ad8ef30b1 Add unit tests of ExecutionPayloadEnvelope verification (#14373)
* Correct requirement list of EnvelopeVerifier

* Add unit tests of ExecutionPayloadEnvelope verification
2025-01-31 23:00:58 -08:00
terence
f466c2e1fa Enable validator client to sign execution payload envelope (#14346)
* Enable validator client to sign execution payload envelope

* Update comment

Co-authored-by: JihoonSong <jihoonsong@users.noreply.github.com>

---------

Co-authored-by: JihoonSong <jihoonsong@users.noreply.github.com>
2025-01-31 23:00:58 -08:00
Potuz
83ee0b298a Sync changes to process execution payload envelopes 2025-01-31 23:00:29 -08:00
Potuz
ea9969ec7a Process withdrawal (#14297)
* process_withdrawal_fn and isParentfull test

* suggestions applied

* minor change

* removed

* lint

* lint fix

* removed Latestheader

* test added with nil error

* tests passing

* IsParentNode Test added

* lint

* fix test

* updated godoc

* fix in godoc

* comment removed

* fixed braces

* removed var

* removed var

* Update beacon-chain/core/blocks/withdrawals.go

* Update beacon-chain/core/blocks/withdrawals_test.go

* gazelle

* test added and removed previous changes in Testprocesswithdrawal

* added check for nil state

* decrease cyclomatic complexity

---------

Co-authored-by: Potuz <potuz@potuz.net>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-01-31 23:00:28 -08:00
Potuz
69f1a56d06 Enable validator client to sign execution header (#14333)
* Enable validator client to sign execution header

* Update proto/prysm/v1alpha1/validator-client/keymanager.proto

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-01-31 23:00:28 -08:00
terence
78d74a5313 Initialize payload att message verifier in sync (#14323) 2025-01-31 22:59:43 -08:00
terence
bfd0a22260 Add getter for payload attestation cache (#14328)
* Add getter for payload attestation cache

* Check against status

* Feedback #1
2025-01-31 22:59:43 -08:00
Potuz
790ce1cef9 Payload Attestation Sync package changes (#13989)
* Payload Attestation Sync package changes

* With verifier

* change idx back to uint64

* subscribe to topic

* add back error

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2025-01-31 22:59:43 -08:00
Potuz
2b6ddd5887 Process Execution Payload Envelope in Chain Service (#14295)
Adds the processing of execution payload envelope
Corrects the protos for attestations and slashings in Electra versions
Adds generators of full blocks for Electra
2025-01-31 22:59:43 -08:00
Md Amaan
63370855ce Indexed payload attestation test (#14299)
* test-added

* nil check fix

* randomized inputs

* hardcoded inputs

* suggestions applied

* minor-typo fixed

* deleted
2025-01-31 22:58:54 -08:00
JihoonSong
6d3e10a335 Add execution_payload and payload_attestation_message topics (#14304)
* Add `execution_payload` and `payload_attestation_message` topics

* Set `SourcePubkey` to 48 bytes long

* Add randomly populated `PayloadAttestationMessage` object

* Add tests for `execution_payload` and `payload_attestation_message` topics
2025-01-31 22:58:54 -08:00
terence
082edda8fa Broadcast signed execution payload header to peer (#14300) 2025-01-31 22:58:54 -08:00
terence
bfe4109068 Read only payload attestation message with Verifier (#14222)
* Read only payload attestation message with verifier

* Payload attestation tests (#14242)

* Payload attestation in verification package

* Feedback #1

---------

Co-authored-by: Md Amaan <114795592+Redidacove@users.noreply.github.com>
2025-01-31 22:58:17 -08:00
Potuz
99da0bec2c Allow nodes with and without payload in forkchoice (#14288)
* Allow nodes with and without payload in forkchoice

    This PR takes care of adding nodes to forkchoice that may or may not
    have a corresponding payload. The rationale is as follows

    - The node structure is kept almost the same as today.
    - A zero payload hash is considered as if the node was empty (except for
      the tree root)
    - When inserting a node we check what the right parent node would be
      depending on whether the parent had a payload or not.
    - For pre-epbs forks all nodes are full, no logic changes except a new
      step to gather the parent hash that is needed for block insertion.

    This PR had to change some core consensus types and interfaces.
    - It removed the ROBlockEPBS interface and added the corresponding ePBS
      fields to the ReadOnlyBeaconBlockBody
    - It moved the setters and getters to epbs dedicated files.

    It also added a checker for `IsParentFull` on forkchoice that simply
    checks for the parent hash of the parent node.

* review
2025-01-31 22:58:17 -08:00
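Editor's note on the commit above: the insertion rule it describes, where a zero payload hash marks an empty node (except for the tree root) and the parent lookup depends on whether the parent carried a payload, boils down to a check like the sketch below. Types and names are made up for illustration; this is not the actual Prysm code.

    package forkchoice

    type fcNode struct {
        root        [32]byte
        payloadHash [32]byte // zero hash means the node is empty (pre-payload)
        parent      *fcNode
    }

    // isParentFull reports whether the parent of n carried a payload, i.e. whether
    // its payload hash is non-zero. The tree root is treated as full, matching the
    // exception noted in the commit description; pre-ePBS blocks are always full.
    func isParentFull(n *fcNode) bool {
        if n.parent == nil {
            return true
        }
        return n.parent.payloadHash != [32]byte{}
    }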
Potuz
1aa7bb6c72 Use BeaconCommittees helper to get the ptc (#14286) 2025-01-31 22:58:17 -08:00
JihoonSong
875cce2639 Add payload attestation helper functions (#14258)
* Add `IndexedPayloadAttestation` container

* Add `GetPayloadAttestingIndices` and its unit test

* Add `GetIndexedPayloadAttestation` and its unit test

* Add `is_valid_indexed_payload_attestation` and its unit test

* Create a smaller set of validators for faster unit test

* Pass context to `GetPayloadTimelinessCommittee`

* Iterate `ValidatorsReadOnly` instead of copying all validators
2025-01-31 22:58:17 -08:00
Potuz
037d91f750 Use slot for latest message in forkchoice (#14279) 2025-01-31 22:58:03 -08:00
Potuz
5079887351 Ensure epbs state getters & setters check versions (#14276)
* Ensure EPBS state getters and setters check versions

* Rename to LatestExecutionPayloadHeaderEPBS

* Add minimal beacon state
2025-01-31 22:58:03 -08:00
JihoonSong
e8b8d64bfa Add remove_flag and its unit test (#14260)
* Add `remove_flag` and its unit test

* Add a test case trying to remove a flag that is not set
2025-01-31 22:58:03 -08:00
JihoonSong
e50c60f783 Modify get_ptc function to follow the Python spec (#14256)
* Modify `get_ptc` function to follow the Python spec

* Assign PTC members from the beginning of beacon committee array
2025-01-31 22:58:03 -08:00
Potuz
00fb633c39 Remove inclusion list from epbs (#14188) 2025-01-31 22:58:03 -08:00
Potuz
3afb4d05db Enable validator client to submit payload attestation message (#14064) 2025-01-31 18:43:01 -10:00
Potuz
1974636dc6 Add PTC assignment support for Duty endpoint (#14032) 2025-01-31 18:41:50 -10:00
Potuz
c7c6226b4f use Keymanager() in validator client 2025-01-31 18:37:19 -10:00
Potuz
a54e1e8d94 Change gwei math to primitives package for ePBS state 2025-01-31 18:37:19 -10:00
terence
233eae8681 Fix GetPayloadTimelinessCommittee to return correct PTC size (#14012) 2025-01-31 18:37:19 -10:00
Potuz
bed94f37f5 Add ePBS to db (#13971)
* Add ePBS to db
2025-01-31 18:37:19 -10:00
Potuz
62202f8459 Implement get_ptc
This implements a helper to get the ptc committee from a state. It uses
the cached beacon committees if possible

It also implements a helper to compute the largest power of two of a
uint64 and a helper to test for nil payload attestation messages
2025-01-31 18:25:59 -10:00
Potuz
06bd2042be Add ePBS to state (#13926) 2025-01-31 18:25:47 -10:00
Potuz
7e27bd5615 Add testing utility methods to return randomly populated ePBS objects 2025-01-31 18:20:45 -10:00
Potuz
6930f722d5 Add ePBS stuff to consensus-types: block 2025-01-31 18:20:45 -10:00
Potuz
2e7ab9104e Helper for Payload Attestation Signing (#13901) 2025-01-31 16:24:29 -10:00
Potuz
2ef1e712de ePBS configuration constants 2025-01-31 16:21:29 -10:00
Potuz
a6cef5ef3f Add ePBS beacon state proto 2025-01-31 16:21:26 -10:00
Potuz
2512dbe580 Add protos for ePBS except state 2025-01-31 16:21:25 -10:00
Radosław Kapka
4a63a194b1 Address @jtraglia's comments regarding Electra code (#14833)
* change field IDs in `AggregateAttestationAndProofElectra`

* fix typo in `validator.proto`

* correct slashing indices length and slashings length

* check length in indexed attestation's `ToConsensus` method

* use `fieldparams.BLSSignatureLength`

* Add length checks for execution request

* fix typo in `beacon_state.proto`

* fix typo in `ssz_proto_library.bzl`

* fix error messages about incorrect types in block factory

* add Electra case to `BeaconBlockContainerToSignedBeaconBlock`

* move PeerDAS config items to PeerDAS section

* remove redundant params

* rename `PendingDepositLimit` to `PendingDepositsLimit`

* improve requests unmarshaling errors

* rename `get_validator_max_effective_balance` to `get_max_effective_balance`

* fix typo in `consolidations.go`

* rename `index` to `validator_index` in `PendingPartialWithdrawal`

* rename `randomByte` to `randomBytes` in `validators.go`

* fix for version in a comment in `validator.go`

* changelog <3

* Revert "rename `index` to `validator_index` in `PendingPartialWithdrawal`"

This reverts commit 87e4da0ea2.
2025-01-31 15:41:52 +00:00
james-prysm
d887536eb7 skip eth1data voting after electra (#14835)
* wip skip eth1data voting after electra

* updating technique

* adding fix for electra eth1 voting

* fixing linting on test

* seeing if reversing genesis state fixes problem

* increasing safety of legacy check

* review feedback

* forgot to fix tests

* nishant's feedback

* nishant's feedback

* rename function a little

* Update beacon-chain/core/helpers/legacy.go

Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>

* fixing naming

---------

Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
2025-01-31 15:19:28 +00:00
Radosław Kapka
1069da1cd2 Convert Phase0 slashing to Electra slashings at the fork (#14844)
* EIP-7549: slasher

* update chunks and detection

* update tests

* encode+decode

* timer

* test fixes

* testing the timer

* Decouple pool from service

* update mock

* cleanup

* make review easier

* comments and changelog
2025-01-31 03:17:52 +00:00
Potuz
4a487ba3bc Don't mark blocks as invalid on context deadlines (#14838)
* Don't mark blocks as invalid on context deadlines

When processing state transition, if the error is because of a context
deadline, do not mark it as invalid.

* review

* fix changelog
2025-01-31 03:16:16 +00:00
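Editor's note on the commit above: the fix amounts to distinguishing a context-deadline failure from a genuinely invalid block before marking anything invalid. A hedged sketch of that pattern (function names here are illustrative, not Prysm's actual ones):

    package blockchain

    import (
        "context"
        "errors"
        "fmt"
    )

    // validateStateTransition is a stand-in for the real state transition call.
    func validateStateTransition(ctx context.Context) error {
        return ctx.Err() // placeholder
    }

    // processBlock illustrates the idea: if the transition failed because the
    // context deadline was exceeded (or the context was canceled), surface the
    // error without marking the block invalid; only other errors are treated as
    // evidence that the block itself is bad.
    func processBlock(ctx context.Context) error {
        if err := validateStateTransition(ctx); err != nil {
            if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
                return fmt.Errorf("state transition aborted, not marking block invalid: %w", err)
            }
            // The real code path would mark the block invalid here.
            return fmt.Errorf("invalid block: %w", err)
        }
        return nil
    }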
james-prysm
bf81cd4449 Electra blob sidecar API update (#14852)
* adding in versioned header and unit tests

* changelog

* handling case

* changelog
2025-01-31 02:39:27 +00:00
terence
00337fe005 Add nil consolidation check for core processing (#14851) 2025-01-30 22:24:23 +00:00
terence
bb3fba4d8e Add a test for nil withdrawal request (#14850) 2025-01-30 21:24:40 +00:00
terence
89967fe209 Move deposit request nil check for all (#14849) 2025-01-30 21:24:31 +00:00
terence
56712b5e49 Update electra spec tests to beta.1 (#14841) 2025-01-29 18:19:07 +00:00
Preston Van Loon
0be9391e62 electra: Improve test coverage for beacon-chain/core/electra/churn.go (#14837)
* Fixed mutants in beacon-chain/core/electra/churn.go

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╍┅
┃ 🧬 Mutant survived: beacon-chain/core/electra/churn.go → Arithmetic
┠┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄
┃ --- beacon-chain/core/electra/churn.go (original)
┃ +++ beacon-chain/core/electra/churn.go (mutated with 'Arithmetic')
┃ @@ -64,7 +64,7 @@
┃       if consolidationBalance > consolidationBalanceToConsume {
┃               balanceToProcess := consolidationBalance - consolidationBalanceToConsume
┃               // additional_epochs = (balance_to_process - 1) // per_epoch_consolidation_churn + 1
┃ -             additionalEpochs, err := math.Div64(uint64(balanceToProcess-1), uint64(perEpochConsolidationChurn))
┃ +             additionalEpochs, err := math.Div64(uint64(balanceToProcess+1), uint64(perEpochConsolidationChurn))
┃               if err != nil {
┃                       return 0, err
┃               }
┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╍┅

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╍┅
┃ 🧬 Mutant survived: beacon-chain/core/electra/churn.go → Comparison
┠┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄
┃ --- beacon-chain/core/electra/churn.go (original)
┃ +++ beacon-chain/core/electra/churn.go (mutated with 'Comparison')
┃ @@ -61,7 +61,7 @@
┃       }
┃
┃       // Consolidation doesn't fit in the current earliest epoch.
┃ -     if consolidationBalance > consolidationBalanceToConsume {
┃ +     if consolidationBalance >= consolidationBalanceToConsume {
┃               balanceToProcess := consolidationBalance - consolidationBalanceToConsume
┃               // additional_epochs = (balance_to_process - 1) // per_epoch_consolidation_churn + 1
┃               additionalEpochs, err := math.Div64(uint64(balanceToProcess-1), uint64(perEpochConsolidationChurn))
┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╍┅

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╍┅
┃ 🧬 Mutant survived: beacon-chain/core/electra/churn.go → Integer Decrement
┠┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄
┃ --- beacon-chain/core/electra/churn.go (original)
┃ +++ beacon-chain/core/electra/churn.go (mutated with 'Integer Decrement')
┃ @@ -64,7 +64,7 @@
┃       if consolidationBalance > consolidationBalanceToConsume {
┃               balanceToProcess := consolidationBalance - consolidationBalanceToConsume
┃               // additional_epochs = (balance_to_process - 1) // per_epoch_consolidation_churn + 1
┃ -             additionalEpochs, err := math.Div64(uint64(balanceToProcess-1), uint64(perEpochConsolidationChurn))
┃ +             additionalEpochs, err := math.Div64(uint64(balanceToProcess-0), uint64(perEpochConsolidationChurn))
┃               if err != nil {
┃                       return 0, err
┃               }
┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╍┅

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╍┅                                                                                                                                                      
┃ 🧬 Mutant survived: beacon-chain/core/electra/churn.go → Integer Increment                                                                                                                  
┠┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄                                                                                                                                                      
┃ --- beacon-chain/core/electra/churn.go (original)                                                                                                                                           
┃ +++ beacon-chain/core/electra/churn.go (mutated with 'Integer Increment')                                                                                                                   
┃ @@ -64,7 +64,7 @@                                                                                                                                                                           
┃       if consolidationBalance > consolidationBalanceToConsume {                                                                                                                             
┃               balanceToProcess := consolidationBalance - consolidationBalanceToConsume                                                                                                      
┃               // additional_epochs = (balance_to_process - 1) // per_epoch_consolidation_churn + 1                                                                                          
┃ -             additionalEpochs, err := math.Div64(uint64(balanceToProcess-1), uint64(perEpochConsolidationChurn))                                                                           
┃ +             additionalEpochs, err := math.Div64(uint64(balanceToProcess-2), uint64(perEpochConsolidationChurn))                                                                           
┃               if err != nil {                                                                                                                                                               
┃                       return 0, err                                                                                                                                                         
┃               }                                                                                                                                                                             
┃                                                                                                                                                                                             
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╍┅

* Changelog fragment
2025-01-29 02:15:43 +00:00
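Editor's note on the commit above: the surviving mutants all sit on boundary conditions of additional_epochs = (balance_to_process - 1) // per_epoch_consolidation_churn + 1. Cases at exactly one epoch of churn and one gwei over it distinguish the original from the arithmetic mutants shown; the surviving comparison mutant (> vs >=) additionally needs a case where the consolidation balance exactly equals the remaining balance to consume in the real churn function. A sketch against a local copy of the formula; an actual test would exercise Prysm's exported churn function and its state setup instead, and the churn constant below is illustrative only.

    package electra_test

    import "testing"

    // additionalEpochs mirrors the ceiling-division formula quoted in churn.go:
    // (balanceToProcess - 1) / perEpochChurn + 1.
    func additionalEpochs(balanceToProcess, perEpochChurn uint64) uint64 {
        return (balanceToProcess-1)/perEpochChurn + 1
    }

    func TestAdditionalEpochsBoundaries(t *testing.T) {
        const churn = uint64(128_000_000_000) // illustrative per-epoch consolidation churn, in gwei

        cases := []struct {
            name    string
            balance uint64
            want    uint64
        }{
            {"exactly one epoch of churn", churn, 1},  // kills the -1 -> -0 and -1 -> +1 mutants
            {"one gwei over one epoch", churn + 1, 2}, // kills the -1 -> -2 mutant
            {"just under two epochs", 2*churn - 1, 2},
        }
        for _, tc := range cases {
            if got := additionalEpochs(tc.balance, churn); got != tc.want {
                t.Errorf("%s: got %d, want %d", tc.name, got, tc.want)
            }
        }
    }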
Dhruv Bodani
4a9c60f75f Implement beacon db pruner (#14687)
* implement weak subjectivity pruner

* fix goimports

* add delete before slot method to database

* add method to interface

* update changelog

* add flags

* wire pruner

* align pruner with backfill service

* rename db method

* fix imports

* delete block slot indices

* check for backfill and initial sync

* add tests

* fix imports

* Update beacon-chain/db/kv/blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update cmd/beacon-chain/flags/base.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update beacon-chain/db/pruner/pruner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* cleanup

* fix buildkite

* initialise atomic bool

* delete data from remaining buckets

* fix build

* fix build

* address review comments

* add test for blockParentRootIndicesBucket

* fix changelog

* fix build

* address kasey's comments

* fix build

* add trace span to blockRootsBySlotRange

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-01-28 16:55:50 +00:00
Radosław Kapka
9cf6b93356 Handle AttesterSlashingElectra everywhere in the codebase (#14823)
* Handle `AttesterSlashingElectra` everywhere in the codebase

* simplify `TestProcessAttesterSlashings_AppliesCorrectStatus`
2025-01-28 15:06:37 +00:00
Preston Van Loon
b4220e35c4 CI: Add clang-formatter lint workflow for proto files (#14831)
* Enforce clang-format for proto files in proto/

* Update pb files after #14818

* Changelog fragment
2025-01-27 20:01:21 +00:00
terence
536cded4cc Fix batch deposit processing by retrieving validators from state (#14827)
* Fix batch process new pending deposits by getting validators from state

* Update tt_fix_pending_deposits.md
2025-01-27 18:11:06 +00:00
Bastin
86fc64c917 Lightclient Bootstrap API SSZ support tests (#14824)
* add bootstrap ssz tests

* add changelog entry

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-01-27 12:20:48 +00:00
410 changed files with 30621 additions and 9841 deletions

21
.github/workflows/clang-format.yml vendored Normal file
View File

@@ -0,0 +1,21 @@
name: Protobuf Format
on:
  push:
    branches: [ '*' ]
  pull_request:
    branches: [ '*' ]
  merge_group:
    types: [checks_requested]
jobs:
  clang-format-checking:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Is this step failing for you?
      # Run: clang-format -i proto/**/*.proto
      # See: https://clang.llvm.org/docs/ClangFormat.html
      - uses: RafikFarhad/clang-format-github-action@v3
        with:
          sources: "proto/**/*.proto"

View File

@@ -60,7 +60,7 @@ Notable features:
- Updated the default `scrape-interval` in `Client-stats` to 2 minutes to accommodate Beaconcha.in API rate limits.
- Switch to compounding when consolidating with source==target.
- Revert block db save when saving state fails.
- Return false from HasBlock if the block is being synced.
- Cleanup forkchoice on failed insertions.
- Use read only validator for core processing to avoid unnecessary copying.
- Use ROBlock across block processing pipeline.
@@ -73,7 +73,7 @@ Notable features:
- Simplified `EjectedValidatorIndices`.
- `engine_newPayloadV4`,`engine_getPayloadV4` are changes due to new execution request serialization decisions, [PR](https://github.com/prysmaticlabs/prysm/pull/14580)
- Fixed various small things in state-native code.
- Use ROBlock earlier in block syncing pipeline.
- Changed the signature of `ProcessPayload`.
- Only Build the Protobuf state once during serialization.
- Capella blocks are execution.
@@ -139,9 +139,9 @@ Notable features:
### Security
## [v5.1.2](https://github.com/prysmaticlabs/prysm/compare/v5.1.1...v5.1.2) - 2024-10-16
This is a hotfix release with one change.
Prysm v5.1.1 contains an updated implementation of the beacon api streaming events endpoint. This
new implementation contains a bug that can cause a panic in certain conditions. The issue is
@@ -153,20 +153,20 @@ prysm REST mode validator (a feature which requires the validator to be configur
api instead of prysm's stock grpc endpoints) or accessory software that connects to the events api,
like https://github.com/ethpandaops/ethereum-metrics-exporter
### Fixed
- Recover from panics when writing the event stream [#14545](https://github.com/prysmaticlabs/prysm/pull/14545)
## [v5.1.1](https://github.com/prysmaticlabs/prysm/compare/v5.1.0...v5.1.1) - 2024-10-15
This release has a number of features and improvements. Most notably, the feature flag
`--enable-experimental-state` has been flipped to "opt out" via `--disable-experimental-state`.
The experimental state management design has shown significant improvements in memory usage at
runtime. Updates to libp2p's gossipsub have some bandwidith stability improvements with support for
IDONTWANT control messages.
The gRPC gateway has been deprecated from Prysm in this release. If you need JSON data, consider the
standardized beacon-APIs.
Updating to this release is recommended at your convenience.
@@ -208,7 +208,7 @@ Updating to this release is recommended at your convenience.
- `grpc-gateway-corsdomain` is renamed to http-cors-domain. The old name can still be used as an alias.
- `api-timeout` is changed from int flag to duration flag, default value updated.
- Light client support: abstracted out the light client headers with different versions.
- `ApplyToEveryValidator` has been changed to prevent misuse bugs, it takes a closure that takes a `ReadOnlyValidator` and returns a raw pointer to a `Validator`.
- Removed gorilla mux library and replaced it with net/http updates in go 1.22.
- Clean up `ProposeBlock` for validator client to reduce cognitive scoring and enable further changes.
- Updated k8s-io/client-go to v0.30.4 and k8s-io/apimachinery to v0.30.4
@@ -219,7 +219,7 @@ Updating to this release is recommended at your convenience.
- Updated Sepolia bootnodes.
- Make committee aware packing the default by deprecating `--enable-committee-aware-packing`.
- Moved `ConvertKzgCommitmentToVersionedHash` to the `primitives` package.
- Updated correlation penalty for EIP-7251.
### Deprecated
- `--disable-grpc-gateway` flag is deprecated due to grpc gateway removal.
@@ -693,34 +693,34 @@ AVX support (eg Celeron) after the Deneb fork. This is not an issue for mainnet.
- Linter: Wastedassign linter enabled to improve code quality.
- API Enhancements:
- Added payload return in Wei for /eth/v3/validator/blocks.
- Added Holesky Deneb Epoch for better epoch management.
- Testing Enhancements:
- Clear cache in tests of core helpers to ensure test reliability.
- Added Debug State Transition Method for improved debugging.
- Backfilling test: Enabled backfill in E2E tests for more comprehensive coverage.
- API Updates: Re-enabled jwt on keymanager API for enhanced security.
- Logging Improvements: Enhanced block by root log for better traceability.
- Validator Client Improvements:
- Added Spans to Core Validator Methods for enhanced monitoring.
- Improved readability in validator client code for better maintenance (various commits).
### Changed
- Optimizations and Refinements:
- Lowered resource usage in certain processes for efficiency.
- Moved blob rpc validation closer to peer read for optimized processing.
- Cleaned up validate beacon block code for clarity and efficiency.
- Updated Sepolia Deneb fork epoch for alignment with network changes.
- Changed blob latency metrics to milliseconds for more precise measurement.
- Altered getLegacyDatabaseLocation message for better clarity.
- Improved wait for activation method for enhanced performance.
- Capitalized Aggregated Unaggregated Attestations Log for consistency.
- Modified HistoricalRoots usage for accuracy.
- Adjusted checking of attribute emptiness for efficiency.
- Database Management:
- Moved --db-backup-output-dir as a deprecated flag for database management simplification.
- Added the Ability to Defragment the Beacon State for improved database performance.
- Dependency Update: Bumped quic-go version from 0.39.3 to 0.39.4 for up-to-date dependencies.
### Removed
@@ -731,12 +731,12 @@ AVX support (eg Celeron) after the Deneb fork. This is not an issue for mainnet.
### Fixed
- Bug Fixes:
- Fixed off by one error for improved accuracy.
- Resolved small typo in error messages for clarity.
- Addressed minor issue in blsToExecChange validator for better validation.
- Corrected blobsidecar json tag for commitment inclusion proof.
- Fixed ssz post-requests content type check.
- Resolved issue with port logging in bootnode.
- Test Fixes: Re-enabled Slasher E2E Test for more comprehensive testing.
### Security
@@ -1163,9 +1163,9 @@ No security issues in this release.
now features runtime detection, automatically enabling optimized code paths if your CPU supports it.
- **Multiarch Containers Preview Available**: multiarch (:wave: arm64 support :wave:) containers will be offered for
preview at the following locations:
- Beacon Chain: [gcr.io/prylabs-dev/prysm/beacon-chain:v4.1.0](gcr.io/prylabs-dev/prysm/beacon-chain:v4.1.0)
- Validator: [gcr.io/prylabs-dev/prysm/validator:v4.1.0](gcr.io/prylabs-dev/prysm/validator:v4.1.0)
- Please note that in the next cycle, we will exclusively use these containers at the canonical URLs.
### Added
@@ -2987,4 +2987,4 @@ There are no security updates in this release.
# Older than v2.0.0
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases

View File

@@ -255,7 +255,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-beta.0"
consensus_spec_version = "v1.5.0-beta.1"
bls_test_version = "v0.1.1"
@@ -271,7 +271,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-HdMlTN3wv+hUMCkIRPk+EHcLixY1cSZlvkx3obEp4AM=",
integrity = "sha256-R6r60geCfEjMaB1Ag3svaMFXFIgaJvkTJhfKsf76rFE=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -287,7 +287,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-eX/ihmHQ+OvfoGJxSMgy22yAU3SZ3xjsX0FU0EaZrSs=",
integrity = "sha256-2Pem2gMHxW/6bBhZ2BaqkQruQSd/dTS3WMaMQO8rZ/o=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -303,7 +303,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-k3Onf42vOzIqyddecR6G82sDy3mmDA+R8RN66QjB0GI=",
integrity = "sha256-5yP05JTV1MhcUZ2kSh+T+kXjG+uW3A5877veC5c1mD4=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -318,7 +318,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-N/d4AwdOSlb70Dr+2l20dfXxNSzJDj/qKA9Rkn8Gb5w=",
integrity = "sha256-O6Rg6h19T0RsJs0sBDZ9O1k4LnCJ/gu2ilHijFBVfME=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -28,6 +28,7 @@ go_library(
"//api/server:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",

View File

@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v5/container/slice"
@@ -243,7 +244,7 @@ func (c *ContributionAndProof) ToConsensus() (*eth.ContributionAndProof, error)
if err != nil {
return nil, server.NewDecodeError(err, "AggregatorIndex")
}
selectionProof, err := bytesutil.DecodeHexWithLength(c.SelectionProof, 96)
selectionProof, err := bytesutil.DecodeHexWithLength(c.SelectionProof, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "SelectionProof")
}
@@ -330,7 +331,7 @@ func (a *AggregateAttestationAndProof) ToConsensus() (*eth.AggregateAttestationA
if err != nil {
return nil, server.NewDecodeError(err, "Aggregate")
}
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, 96)
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "SelectionProof")
}
@@ -366,7 +367,7 @@ func (a *AggregateAttestationAndProofElectra) ToConsensus() (*eth.AggregateAttes
if err != nil {
return nil, server.NewDecodeError(err, "Aggregate")
}
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, 96)
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "SelectionProof")
}
@@ -734,6 +735,10 @@ func (s *AttesterSlashingElectra) ToConsensus() (*eth.AttesterSlashingElectra, e
}
func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
if err := slice.VerifyMaxLength(a.AttestingIndices, params.BeaconConfig().MaxValidatorsPerCommittee); err != nil {
return nil, err
}
indices := make([]uint64, len(a.AttestingIndices))
var err error
for i, ix := range a.AttestingIndices {
@@ -759,6 +764,13 @@ func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
}
func (a *IndexedAttestationElectra) ToConsensus() (*eth.IndexedAttestationElectra, error) {
if err := slice.VerifyMaxLength(
a.AttestingIndices,
params.BeaconConfig().MaxValidatorsPerCommittee*params.BeaconConfig().MaxCommitteesPerSlot,
); err != nil {
return nil, err
}
indices := make([]uint64, len(a.AttestingIndices))
var err error
for i, ix := range a.AttestingIndices {
@@ -1189,7 +1201,7 @@ func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth
if src == nil {
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 2)
err := slice.VerifyMaxLength(src, fieldparams.MaxAttesterSlashingsElectra)
if err != nil {
return nil, err
}
@@ -1210,7 +1222,7 @@ func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Signature", i))
}
err = slice.VerifyMaxLength(s.Attestation1.AttestingIndices, 2048)
err = slice.VerifyMaxLength(s.Attestation1.AttestingIndices, params.BeaconConfig().MaxValidatorsPerCommittee*params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.AttestingIndices", i))
}
@@ -1230,7 +1242,7 @@ func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.Signature", i))
}
err = slice.VerifyMaxLength(s.Attestation2.AttestingIndices, 2048)
err = slice.VerifyMaxLength(s.Attestation2.AttestingIndices, params.BeaconConfig().MaxValidatorsPerCommittee*params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.AttestingIndices", i))
}

View File

@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/slice"
@@ -2519,6 +2520,7 @@ func (b *BeaconBlockContentsElectra) ToConsensus() (*eth.BeaconBlockContentsElec
}, nil
}
// nolint:gocognit
func (b *BeaconBlockElectra) ToConsensus() (*eth.BeaconBlockElectra, error) {
if b == nil {
return nil, errNilValue
@@ -2706,6 +2708,9 @@ func (b *BeaconBlockElectra) ToConsensus() (*eth.BeaconBlockElectra, error) {
return nil, server.NewDecodeError(errors.New("nil execution requests"), "Body.ExequtionRequests")
}
if err = slice.VerifyMaxLength(b.Body.ExecutionRequests.Deposits, params.BeaconConfig().MaxDepositRequestsPerPayload); err != nil {
return nil, err
}
depositRequests := make([]*enginev1.DepositRequest, len(b.Body.ExecutionRequests.Deposits))
for i, d := range b.Body.ExecutionRequests.Deposits {
depositRequests[i], err = d.ToConsensus()
@@ -2714,6 +2719,9 @@ func (b *BeaconBlockElectra) ToConsensus() (*eth.BeaconBlockElectra, error) {
}
}
if err = slice.VerifyMaxLength(b.Body.ExecutionRequests.Withdrawals, params.BeaconConfig().MaxWithdrawalRequestsPerPayload); err != nil {
return nil, err
}
withdrawalRequests := make([]*enginev1.WithdrawalRequest, len(b.Body.ExecutionRequests.Withdrawals))
for i, w := range b.Body.ExecutionRequests.Withdrawals {
withdrawalRequests[i], err = w.ToConsensus()
@@ -2722,6 +2730,9 @@ func (b *BeaconBlockElectra) ToConsensus() (*eth.BeaconBlockElectra, error) {
}
}
if err = slice.VerifyMaxLength(b.Body.ExecutionRequests.Consolidations, params.BeaconConfig().MaxConsolidationsRequestsPerPayload); err != nil {
return nil, err
}
consolidationRequests := make([]*enginev1.ConsolidationRequest, len(b.Body.ExecutionRequests.Consolidations))
for i, c := range b.Body.ExecutionRequests.Consolidations {
consolidationRequests[i], err = c.ToConsensus()
@@ -3003,9 +3014,14 @@ func (b *BlindedBeaconBlockElectra) ToConsensus() (*eth.BlindedBeaconBlockElectr
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.ExcessBlobGas")
}
if b.Body.ExecutionRequests == nil {
return nil, server.NewDecodeError(errors.New("nil execution requests"), "Body.ExecutionRequests")
}
if err = slice.VerifyMaxLength(b.Body.ExecutionRequests.Deposits, params.BeaconConfig().MaxDepositRequestsPerPayload); err != nil {
return nil, err
}
depositRequests := make([]*enginev1.DepositRequest, len(b.Body.ExecutionRequests.Deposits))
for i, d := range b.Body.ExecutionRequests.Deposits {
depositRequests[i], err = d.ToConsensus()
@@ -3014,6 +3030,9 @@ func (b *BlindedBeaconBlockElectra) ToConsensus() (*eth.BlindedBeaconBlockElectr
}
}
if err = slice.VerifyMaxLength(b.Body.ExecutionRequests.Withdrawals, params.BeaconConfig().MaxWithdrawalRequestsPerPayload); err != nil {
return nil, err
}
withdrawalRequests := make([]*enginev1.WithdrawalRequest, len(b.Body.ExecutionRequests.Withdrawals))
for i, w := range b.Body.ExecutionRequests.Withdrawals {
withdrawalRequests[i], err = w.ToConsensus()
@@ -3022,6 +3041,9 @@ func (b *BlindedBeaconBlockElectra) ToConsensus() (*eth.BlindedBeaconBlockElectr
}
}
if err = slice.VerifyMaxLength(b.Body.ExecutionRequests.Consolidations, params.BeaconConfig().MaxConsolidationsRequestsPerPayload); err != nil {
return nil, err
}
consolidationRequests := make([]*enginev1.ConsolidationRequest, len(b.Body.ExecutionRequests.Consolidations))
for i, c := range b.Body.ExecutionRequests.Consolidations {
consolidationRequests[i], err = c.ToConsensus()

View File

@@ -6,9 +6,11 @@ go_library(
"chain_info.go",
"chain_info_forkchoice.go",
"currently_syncing_block.go",
"currently_syncing_execution_payload_envelope.go",
"defragment.go",
"error.go",
"execution_engine.go",
"execution_engine_epbs.go",
"forkchoice_update_execution.go",
"head.go",
"head_sync_committee_info.go",
@@ -25,6 +27,8 @@ go_library(
"receive_attestation.go",
"receive_blob.go",
"receive_block.go",
"receive_execution_payload_envelope.go",
"receive_payload_attestation_message.go",
"service.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
@@ -43,6 +47,7 @@ go_library(
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epbs:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
@@ -96,6 +101,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)
@@ -108,6 +114,7 @@ go_test(
"chain_info_norace_test.go",
"chain_info_test.go",
"checktags_test.go",
"epbs_test.go",
"error_test.go",
"execution_engine_test.go",
"forkchoice_update_execution_test.go",
@@ -123,6 +130,7 @@ go_test(
"process_block_test.go",
"receive_attestation_test.go",
"receive_block_test.go",
"receive_execution_payload_envelope_test.go",
"service_norace_test.go",
"service_test.go",
"setup_test.go",
@@ -177,6 +185,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",

View File

@@ -43,7 +43,7 @@ type ForkchoiceFetcher interface {
GetProposerHead() [32]byte
SetForkChoiceGenesisTime(uint64)
UpdateHead(context.Context, primitives.Slot)
HighestReceivedBlockSlot() primitives.Slot
HighestReceivedBlockSlotRoot() (primitives.Slot, [32]byte)
ReceivedBlocksLastEpoch() (uint64, error)
InsertNode(context.Context, state.BeaconState, consensus_blocks.ROBlock) error
ForkChoiceDump(context.Context) (*forkchoice.Dump, error)
@@ -51,6 +51,7 @@ type ForkchoiceFetcher interface {
ProposerBoost() [32]byte
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, error)
GetPTCVote(root [32]byte) primitives.PTCStatus
}
// TimeFetcher retrieves the Ethereum consensus data that's related to time.
@@ -119,6 +120,12 @@ type OptimisticModeFetcher interface {
IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error)
}
// ExecutionPayloadFetcher defines a common interface that returns forkchoice
// information about payload block hashes
type ExecutionPayloadFetcher interface {
HashInForkchoice([32]byte) bool
}
// FinalizedCheckpt returns the latest finalized checkpoint from chain store.
func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
s.cfg.ForkChoiceStore.RLock()
@@ -400,6 +407,14 @@ func (s *Service) InForkchoice(root [32]byte) bool {
return s.cfg.ForkChoiceStore.HasNode(root)
}
// HashInForkchoice returns true if the given payload block hash is found in
// forkchoice
func (s *Service) HashInForkchoice(hash [32]byte) bool {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.HasHash(hash)
}
// IsOptimisticForRoot takes the root as argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {

View File

@@ -6,6 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
@@ -30,11 +31,11 @@ func (s *Service) SetForkChoiceGenesisTime(timestamp uint64) {
s.cfg.ForkChoiceStore.SetGenesisTime(timestamp)
}
// HighestReceivedBlockSlot returns the corresponding value from forkchoice
func (s *Service) HighestReceivedBlockSlot() primitives.Slot {
// HighestReceivedBlockSlotRoot returns the corresponding value from forkchoice
func (s *Service) HighestReceivedBlockSlotRoot() (primitives.Slot, [32]byte) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.HighestReceivedBlockSlot()
return s.cfg.ForkChoiceStore.HighestReceivedBlockSlotRoot()
}
// ReceivedBlocksLastEpoch returns the corresponding value from forkchoice
@@ -100,3 +101,30 @@ func (s *Service) ParentRoot(root [32]byte) ([32]byte, error) {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ParentRoot(root)
}
// GetPTCVote wraps a call to the corresponding method in forkchoice and checks
// the currently syncing status
// Warning: this method will return the current PTC status regardless of
// timeliness. A client MUST call this method when about to submit a PTC
// attestation, that is exactly at the threshold to submit the attestation.
func (s *Service) GetPTCVote(root [32]byte) primitives.PTCStatus {
s.cfg.ForkChoiceStore.RLock()
f := s.cfg.ForkChoiceStore.GetPTCVote()
s.cfg.ForkChoiceStore.RUnlock()
if f != primitives.PAYLOAD_ABSENT {
return f
}
f, isSyncing := s.payloadBeingSynced.isSyncing(root)
if isSyncing {
return f
}
return primitives.PAYLOAD_ABSENT
}
// insertPayloadEnvelope wraps a locked call to the corresponding method in
// forkchoice
func (s *Service) insertPayloadEnvelope(envelope interfaces.ROExecutionPayloadEnvelope) error {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
return s.cfg.ForkChoiceStore.InsertPayloadEnvelope(envelope)
}

View File

@@ -36,6 +36,7 @@ func prepareForkchoiceState(
blockRoot [32]byte,
parentRoot [32]byte,
payloadHash [32]byte,
parentHash [32]byte,
justified *ethpb.Checkpoint,
finalized *ethpb.Checkpoint,
) (state.BeaconState, blocks.ROBlock, error) {
@@ -68,7 +69,8 @@ func prepareForkchoiceState(
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &enginev1.ExecutionPayload{
BlockHash: payloadHash[:],
BlockHash: payloadHash[:],
ParentHash: parentHash[:],
},
},
},
@@ -141,7 +143,7 @@ func TestUnrealizedJustifiedBlockHash(t *testing.T) {
service := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: []byte{'j'}}
ofc := &ethpb.Checkpoint{Root: []byte{'f'}}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
service.cfg.ForkChoiceStore.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) { return []uint64{}, nil })
@@ -335,22 +337,22 @@ func TestService_ChainHeads(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, [32]byte{'D'}, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, [32]byte{'E'}, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -432,10 +434,10 @@ func TestService_IsOptimistic(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -468,10 +470,10 @@ func TestService_IsOptimisticForRoot(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))

View File

@@ -0,0 +1,36 @@
package blockchain

import (
    "sync"

    "github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)

type currentlySyncingPayload struct {
    sync.Mutex
    roots map[[32]byte]primitives.PTCStatus
}

func (b *currentlySyncingPayload) set(envelope interfaces.ROExecutionPayloadEnvelope) {
    b.Lock()
    defer b.Unlock()

    if envelope.PayloadWithheld() {
        b.roots[envelope.BeaconBlockRoot()] = primitives.PAYLOAD_WITHHELD
    } else {
        b.roots[envelope.BeaconBlockRoot()] = primitives.PAYLOAD_PRESENT
    }
}

func (b *currentlySyncingPayload) unset(root [32]byte) {
    b.Lock()
    defer b.Unlock()

    delete(b.roots, root)
}

func (b *currentlySyncingPayload) isSyncing(root [32]byte) (status primitives.PTCStatus, isSyncing bool) {
    b.Lock()
    defer b.Unlock()

    status, isSyncing = b.roots[root]
    return
}

View File

@@ -0,0 +1,18 @@
package blockchain

import (
    "testing"

    doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
    "github.com/prysmaticlabs/prysm/v5/testing/require"
)

func TestServiceGetPTCVote(t *testing.T) {
    c := &currentlySyncingPayload{roots: make(map[[32]byte]primitives.PTCStatus)}
    s := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, payloadBeingSynced: c}
    r := [32]byte{'r'}
    require.Equal(t, primitives.PAYLOAD_ABSENT, s.GetPTCVote(r))
    c.roots[r] = primitives.PAYLOAD_WITHHELD
    require.Equal(t, primitives.PAYLOAD_WITHHELD, s.GetPTCVote(r))
}

View File

@@ -30,6 +30,9 @@ var (
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
// ErrNilHead is returned when no head is present in the blockchain service.
ErrNilHead = errors.New("nil head")
// errInvalidValidatorIndex is returned when a validator index is
// invalid or unexpected
errInvalidValidatorIndex = errors.New("invalid validator index")
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")

View File

@@ -2,6 +2,7 @@ package blockchain
import (
"context"
"crypto/sha256"
"fmt"
"github.com/ethereum/go-ethereum/common"
@@ -30,6 +31,8 @@ import (
"github.com/sirupsen/logrus"
)
const blobCommitmentVersionKZG uint8 = 0x01
var defaultLatestValidHash = bytesutil.PadTo([]byte{0xff}, 32)
// notifyForkchoiceUpdate signals execution engine the fork choice updates. Execution engine should:
@@ -95,6 +98,14 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
log.WithError(err).Error("Could not set head root to invalid")
return nil, nil
}
if len(invalidRoots) == 0 {
log.WithFields(logrus.Fields{
"slot": headBlk.Slot(),
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(headRoot[:])),
}).Warn("invalid payload")
return nil, nil
}
if err := s.removeInvalidBlockAndState(ctx, invalidRoots); err != nil {
log.WithError(err).Error("Could not remove invalid block and state")
return nil, nil
@@ -109,6 +120,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
}).Warn("Pruned invalid blocks, could not update head root")
return nil, invalidBlock{error: ErrInvalidPayload, root: arg.headRoot, invalidAncestorRoots: invalidRoots}
}
b, err := s.getBlock(ctx, r)
if err != nil {
log.WithError(err).Error("Could not get head block")
@@ -456,7 +468,13 @@ func kzgCommitmentsToVersionedHashes(body interfaces.ReadOnlyBeaconBlockBody) ([
versionedHashes := make([]common.Hash, len(commitments))
for i, commitment := range commitments {
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(commitment)
versionedHashes[i] = ConvertKzgCommitmentToVersionedHash(commitment)
}
return versionedHashes, nil
}
func ConvertKzgCommitmentToVersionedHash(commitment []byte) common.Hash {
versionedHash := sha256.Sum256(commitment)
versionedHash[0] = blobCommitmentVersionKZG
return versionedHash
}
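For reference, the versioned hash above is just the SHA-256 of the commitment with its first byte overwritten by the KZG version tag; a minimal sketch using the exported helper (the zero-valued commitment is illustrative only):

commitment := make([]byte, 48) // KZG commitments are 48 bytes
vh := ConvertKzgCommitmentToVersionedHash(commitment)
// vh[0] == blobCommitmentVersionKZG (0x01); the remaining 31 bytes come from sha256(commitment).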

View File

@@ -0,0 +1,62 @@
package blockchain
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/config/features"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/sirupsen/logrus"
)
// notifyForkchoiceUpdateEPBS signals the fork choice update to the execution engine. The execution engine should:
// 1. Re-organize the execution payload chain and corresponding state to make head_block_hash the head.
// 2. Apply finality to the execution state: irreversibly persist the chain of all execution payloads and corresponding state, up to and including finalized_block_hash.
func (s *Service) notifyForkchoiceUpdateEPBS(ctx context.Context, blockhash [32]byte, attributes payloadattribute.Attributer) (*enginev1.PayloadIDBytes, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyForkchoiceUpdateEPBS")
defer span.End()
finalizedHash := s.cfg.ForkChoiceStore.FinalizedPayloadBlockHash()
justifiedHash := s.cfg.ForkChoiceStore.UnrealizedJustifiedPayloadBlockHash()
fcs := &enginev1.ForkchoiceState{
HeadBlockHash: blockhash[:],
SafeBlockHash: justifiedHash[:],
FinalizedBlockHash: finalizedHash[:],
}
if attributes == nil {
attributes = payloadattribute.EmptyWithVersion(version.EPBS)
}
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, attributes)
if err != nil {
switch {
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
forkchoiceUpdatedOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"headPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(blockhash[:])),
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash[:])),
}).Info("Called fork choice updated with optimistic block")
return payloadID, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
log.WithError(err).Info("forkchoice updated to invalid block")
return nil, invalidBlock{error: ErrInvalidPayload, root: [32]byte(lastValidHash)}
default:
log.WithError(err).Error(ErrUndefinedExecutionEngineError)
return nil, nil
}
}
forkchoiceUpdatedValidNodeCount.Inc()
// If the forkchoice update call has an attribute, update the payload ID cache.
hasAttr := attributes != nil && !attributes.IsEmpty()
if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", blockhash[:]),
}).Error("Received nil payload ID on VALID engine response")
}
return payloadID, nil
}
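A sketch of the assumed caller pattern, mirroring UpdateHead and lateBlockTasks later in this diff: under ePBS the head sent to the engine is the latest payload block hash recorded in the beacon state rather than a hash derived from the head beacon block (headState and attributes are assumed to be in scope).

bh, err := headState.LatestBlockHash()
if err != nil {
	return err
}
if _, err := s.notifyForkchoiceUpdateEPBS(ctx, [32]byte(bh), attributes); err != nil {
	return err
}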

View File

@@ -46,13 +46,13 @@ func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -104,13 +104,13 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -287,16 +287,16 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{}))
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -316,10 +316,8 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
headRoot: brd,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
require.Equal(t, true, IsInvalidBlock(err))
require.Equal(t, brd, InvalidBlockRoot(err))
require.Equal(t, brd, InvalidAncestorRoots(err)[0])
require.Equal(t, 1, len(InvalidAncestorRoots(err)))
// The incoming block is not invalid because the empty node is still valid on ePBS.
require.Equal(t, false, IsInvalidBlock(err))
}
//
@@ -398,28 +396,28 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{}))
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
bState, _ := util.DeterministicGenesisState(t, 10)
require.NoError(t, beaconDB.SaveState(ctx, bState, bra))
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, [32]byte{'E'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, [32]byte{'F'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -512,10 +510,10 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, r, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, r, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -692,7 +690,7 @@ func Test_NotifyNewPayload(t *testing.T) {
}
service.cfg.ExecutionEngineCaller = e
root := [32]byte{'a'}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, root, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, root, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
postVersion, postHeader, err := getStateVersionAndPayload(tt.postState)
@@ -759,17 +757,17 @@ func Test_reportInvalidBlock(t *testing.T) {
service, tr := minimalTestService(t)
ctx, _, fcs := tr.ctx, tr.db, tr.fcs
jcp := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 0, [32]byte{'A'}, [32]byte{}, [32]byte{'a'}, jcp, jcp)
st, root, err := prepareForkchoiceState(ctx, 0, [32]byte{'A'}, [32]byte{}, [32]byte{'a'}, [32]byte{}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 1, [32]byte{'B'}, [32]byte{'A'}, [32]byte{'b'}, jcp, jcp)
st, root, err = prepareForkchoiceState(ctx, 1, [32]byte{'B'}, [32]byte{'A'}, [32]byte{'b'}, [32]byte{'a'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'C'}, [32]byte{'B'}, [32]byte{'c'}, jcp, jcp)
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'C'}, [32]byte{'B'}, [32]byte{'c'}, [32]byte{'b'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 3, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'d'}, jcp, jcp)
st, root, err = prepareForkchoiceState(ctx, 3, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'d'}, [32]byte{'c'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
@@ -931,7 +929,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
fjc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, fjc))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(fjc))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
fcs.SetOriginRoot(genesisRoot)
@@ -965,7 +963,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
require.NoError(t, beaconDB.SaveStateSummary(ctx, opStateSummary))
tenjc := &ethpb.Checkpoint{Epoch: 10, Root: genesisRoot[:]}
tenfc := &ethpb.Checkpoint{Epoch: 10, Root: genesisRoot[:]}
state, blkRoot, err = prepareForkchoiceState(ctx, 320, opRoot, genesisRoot, params.BeaconConfig().ZeroHash, tenjc, tenfc)
state, blkRoot, err = prepareForkchoiceState(ctx, 320, opRoot, genesisRoot, params.BeaconConfig().ZeroHash, [32]byte{}, tenjc, tenfc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
assert.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, opRoot))
@@ -994,7 +992,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
require.NoError(t, beaconDB.SaveStateSummary(ctx, validSummary))
twentyjc := &ethpb.Checkpoint{Epoch: 20, Root: validRoot[:]}
twentyfc := &ethpb.Checkpoint{Epoch: 20, Root: validRoot[:]}
state, blkRoot, err = prepareForkchoiceState(ctx, 640, validRoot, genesisRoot, params.BeaconConfig().ZeroHash, twentyjc, twentyfc)
state, blkRoot, err = prepareForkchoiceState(ctx, 640, validRoot, genesisRoot, params.BeaconConfig().ZeroHash, [32]byte{}, twentyjc, twentyfc)
require.NoError(t, err)
fcs.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) { return []uint64{}, nil })
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -1056,8 +1054,8 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
require.NoError(t, service.removeInvalidBlockAndState(ctx, [][32]byte{r1, r2}))
require.Equal(t, false, service.hasBlock(ctx, r1))
require.Equal(t, false, service.hasBlock(ctx, r2))
require.Equal(t, false, service.chainHasBlock(ctx, r1))
require.Equal(t, false, service.chainHasBlock(ctx, r2))
require.Equal(t, false, service.cfg.BeaconDB.HasStateSummary(ctx, r1))
require.Equal(t, false, service.cfg.BeaconDB.HasStateSummary(ctx, r2))
has, err := service.cfg.StateGen.HasState(ctx, r1)

View File

@@ -122,13 +122,13 @@ func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testin
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -164,10 +164,10 @@ func TestShouldOverrideFCU(t *testing.T) {
headRoot := [32]byte{'b'}
parentRoot := [32]byte{'a'}
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{}, ojc, ojc)
st, root, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, headRoot, parentRoot, [32]byte{}, ojc, ojc)
st, root, err = prepareForkchoiceState(ctx, 2, headRoot, parentRoot, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))

View File

@@ -48,7 +48,7 @@ func TestSaveHead_Different(t *testing.T) {
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
@@ -63,11 +63,11 @@ func TestSaveHead_Different(t *testing.T) {
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -101,7 +101,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
@@ -110,7 +110,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
}
reorgChainParent := [32]byte{'B'}
state, blkRoot, err = prepareForkchoiceState(ctx, 0, reorgChainParent, oldRoot, oldBlock.Block().ParentRoot(), ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 0, reorgChainParent, oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
@@ -122,7 +122,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -238,11 +238,11 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -304,7 +304,7 @@ func TestSaveOrphanedAtts(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -381,7 +381,7 @@ func TestSaveOrphanedOps(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -451,7 +451,7 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlockCapella{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -509,7 +509,7 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -568,7 +568,7 @@ func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -583,7 +583,7 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
ojp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, ojp, ojp)
st, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, [32]byte{}, ojp, ojp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, blkRoot))
@@ -603,7 +603,7 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
headRoot := service.headRoot()
require.Equal(t, [32]byte{}, headRoot)
st, blkRoot, err = prepareForkchoiceState(ctx, 0, bellatrixBlkRoot, [32]byte{}, [32]byte{}, fcp, fcp)
st, blkRoot, err = prepareForkchoiceState(ctx, 0, bellatrixBlkRoot, [32]byte{}, [32]byte{}, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, blkRoot))
fcs.SetBalancesByRooter(func(context.Context, [32]byte) ([]uint64, error) { return []uint64{1, 2}, nil })

View File

@@ -45,28 +45,44 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
}
log = log.WithField("syncBitsCount", agg.SyncCommitteeBits.Count())
}
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if b.Version() >= version.EPBS {
sh, err := b.Body().SignedExecutionPayloadHeader()
if err != nil {
return err
}
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
header, err := sh.Header()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
return err
}
log = log.WithFields(logrus.Fields{"payloadHash": fmt.Sprintf("%#x", header.BlockHash()),
"builderIndex": header.BuilderIndex(),
"value": header.Value(),
"blobKzgCommitmentsRoot": fmt.Sprintf("%#x", header.BlobKzgCommitmentsRoot()),
})
} else {
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if err != nil {
return err
}
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
}
}
log.Info("Finished applying state transition")
@@ -112,6 +128,9 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
// logs payload related data every slot.
func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
if block.Version() >= version.EPBS {
return nil
}
isExecutionBlk, err := blocks.IsExecutionBlock(block.Body())
if err != nil {
return errors.Wrap(err, "could not determine if block is execution block")

View File

@@ -182,6 +182,10 @@ var (
Name: "chain_service_processing_milliseconds",
Help: "Total time to call a chain service in ReceiveBlock()",
})
executionEngineProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "execution_engine_processing_milliseconds",
Help: "Total time to process an execution payload envelope in ReceiveExecutionPayloadEnvelope()",
})
dataAvailWaitedTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "da_waited_time_milliseconds",
Help: "Total time spent waiting for a data availability check in ReceiveBlock()",

View File

@@ -1,6 +1,8 @@
package blockchain
import (
"sync"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
@@ -69,6 +71,22 @@ func WithDepositCache(c cache.DepositCache) Option {
}
}
// WithPayloadAttestationCache for payload attestation cache.
func WithPayloadAttestationCache(c *cache.PayloadAttestationCache) Option {
return func(s *Service) error {
s.cfg.PayloadAttestationCache = c
return nil
}
}
// WithPayloadEnvelopeCache for payload envelope cache.
func WithPayloadEnvelopeCache(c *sync.Map) Option {
return func(s *Service) error {
s.cfg.PayloadEnvelopeCache = c
return nil
}
}
// WithPayloadIDCache for payload ID cache.
func WithPayloadIDCache(c *cache.PayloadIDCache) Option {
return func(s *Service) error {
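The two new functional options can be supplied when the blockchain service is constructed; a hypothetical wiring sketch (the NewService call shape and the zero-value caches are assumptions, not shown in this diff):

svc, err := blockchain.NewService(ctx,
	blockchain.WithPayloadAttestationCache(&cache.PayloadAttestationCache{}),
	blockchain.WithPayloadEnvelopeCache(&sync.Map{}),
)
if err != nil {
	return err
}
_ = svc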

View File

@@ -97,7 +97,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
// We assume trusted attestation in this function has verified signature.
// Update forkchoice store with the new attestation for updating weight.
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.GetAttestingIndices(), bytesutil.ToBytes32(a.GetData().BeaconBlockRoot), a.GetData().Target.Epoch)
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.GetAttestingIndices(), bytesutil.ToBytes32(a.GetData().BeaconBlockRoot), a.GetData().Slot)
return nil
}

View File

@@ -32,7 +32,7 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, blkWithoutState)
cp := &ethpb.Checkpoint{}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, cp, cp)
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, cp, cp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -41,7 +41,7 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
r, err := blkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
cp = &ethpb.Checkpoint{Root: r[:]}
st, roblock, err = prepareForkchoiceState(ctx, blkWithStateBadAtt.Block.Slot, r, [32]byte{}, params.BeaconConfig().ZeroHash, cp, cp)
st, roblock, err = prepareForkchoiceState(ctx, blkWithStateBadAtt.Block.Slot, r, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, cp, cp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
util.SaveBlock(t, ctx, beaconDB, blkWithStateBadAtt)
@@ -139,7 +139,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ojc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
ofc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
state, roblock, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, roblock, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, roblock))
require.NoError(t, service.OnAttestation(ctx, att[0], 0))
@@ -170,7 +170,7 @@ func TestService_GetRecentPreState(t *testing.T) {
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
st, root, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
@@ -202,7 +202,7 @@ func TestService_GetAttPreState_Concurrency(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: ckRoot}))
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, cp1, cp1)
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, [32]byte{}, cp1, cp1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
@@ -259,7 +259,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)}))
st, root, err := prepareForkchoiceState(ctx, 1, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, cp1, cp1)
st, root, err := prepareForkchoiceState(ctx, 1, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, [32]byte{}, cp1, cp1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
s1, err := service.getAttPreState(ctx, cp1)
@@ -273,7 +273,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
_, err = service.getAttPreState(ctx, cp2)
require.ErrorContains(t, "epoch 2 root 0x4200000000000000000000000000000000000000000000000000000000000000: not a checkpoint in forkchoice", err)
st, root, err = prepareForkchoiceState(ctx, 33, [32]byte(cp2.Root), [32]byte(cp1.Root), [32]byte{'R'}, cp2, cp2)
st, root, err = prepareForkchoiceState(ctx, 33, [32]byte(cp2.Root), [32]byte(cp1.Root), [32]byte{'R'}, [32]byte{}, cp2, cp2)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
@@ -298,7 +298,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'C'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}))
st, root, err = prepareForkchoiceState(ctx, 31, [32]byte(cp3.Root), [32]byte(cp2.Root), [32]byte{'P'}, cp2, cp2)
st, root, err = prepareForkchoiceState(ctx, 31, [32]byte(cp3.Root), [32]byte(cp2.Root), [32]byte{'P'}, [32]byte{}, cp2, cp2)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
@@ -318,7 +318,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
require.NoError(t, err)
checkpoint := &ethpb.Checkpoint{Epoch: epoch, Root: r1[:]}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(checkpoint.Root)))
st, roblock, err := prepareForkchoiceState(ctx, blk.Block.Slot, r1, [32]byte{}, params.BeaconConfig().ZeroHash, checkpoint, checkpoint)
st, roblock, err := prepareForkchoiceState(ctx, blk.Block.Slot, r1, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, checkpoint, checkpoint)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
returned, err := service.getAttPreState(ctx, checkpoint)
@@ -336,7 +336,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
require.NoError(t, err)
newCheckpoint := &ethpb.Checkpoint{Epoch: epoch, Root: r2[:]}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(newCheckpoint.Root)))
st, roblock, err = prepareForkchoiceState(ctx, blk.Block.Slot, r2, r1, params.BeaconConfig().ZeroHash, newCheckpoint, newCheckpoint)
st, roblock, err = prepareForkchoiceState(ctx, blk.Block.Slot, r2, r1, params.BeaconConfig().ZeroHash, [32]byte{}, newCheckpoint, newCheckpoint)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
returned, err = service.getAttPreState(ctx, newCheckpoint)

View File

@@ -64,7 +64,9 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
fcuArgs := &fcuConfig{}
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
if cfg.roblock.Version() < version.EPBS {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
}
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
@@ -102,6 +104,18 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
s.logNonCanonicalBlockReceived(cfg.roblock.Root(), cfg.headRoot)
return nil
}
if cfg.roblock.Version() >= version.EPBS {
if err := s.saveHead(ctx, cfg.headRoot, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("could not save head")
}
if err := s.pruneAttsFromPool(cfg.roblock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
// update the NSC and handle epoch boundaries here since we do
// not send FCU at all
return s.updateCachesPostBlockProcessing(cfg)
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
@@ -378,7 +392,7 @@ func (s *Service) handleBlockAttestations(ctx context.Context, blk interfaces.Re
}
r := bytesutil.ToBytes32(a.GetData().BeaconBlockRoot)
if s.cfg.ForkChoiceStore.HasNode(r) {
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.GetData().Target.Epoch)
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.GetData().Slot)
} else if features.Get().EnableExperimentalAttestationPool {
if err = s.cfg.AttestationCache.Add(a); err != nil {
return err
@@ -488,9 +502,15 @@ func (s *Service) runLateBlockTasks() {
attThreshold := params.BeaconConfig().SecondsPerSlot / 3
ticker := slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
epbs := params.BeaconConfig().EPBSForkEpoch
for {
select {
case <-ticker.C():
case slot := <-ticker.C():
if slots.ToEpoch(slot) == epbs && slot%32 == 0 {
ticker.Done()
attThreshold := params.BeaconConfig().SecondsPerSlot / 4
ticker = slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
}
s.lateBlockTasks(s.ctx)
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
@@ -666,24 +686,36 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
return
}
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
if headState.Version() >= version.EPBS {
bh, err := headState.LatestBlockHash()
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve latest block hash")
return
}
_, err = s.notifyForkchoiceUpdateEPBS(ctx, [32]byte(bh), attribute)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
} else {
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
s.headLock.RUnlock()
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve head block")
return
}
s.headLock.RUnlock()
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve head block")
return
}
s.headLock.RUnlock()
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
}
}
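The ticker swap in runLateBlockTasks above tightens the late-block cutoff once the ePBS fork epoch begins, presumably because ePBS splits the slot into more intervals; a sketch of the arithmetic, assuming mainnet's 12-second slots:

// Illustrative arithmetic only; params and time are imported as elsewhere in this file.
secondsPerSlot := params.BeaconConfig().SecondsPerSlot // 12 on mainnet
preEPBS := time.Duration(secondsPerSlot/3) * time.Second  // ticker fires 4s into the slot before the fork
postEPBS := time.Duration(secondsPerSlot/4) * time.Second // 3s into the slot once the ePBS fork epoch starts
_, _ = preEPBS, postEPBS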

View File

@@ -8,6 +8,7 @@ import (
"time"
lightclient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
@@ -368,7 +369,26 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
return nil, err
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, b.ParentRoot())
parentRoot := b.ParentRoot()
if b.Version() >= version.EPBS {
s.ForkChoicer().RLock()
parentHash := s.ForkChoicer().HashForBlockRoot(parentRoot)
s.ForkChoicer().RUnlock()
signedBid, err := b.Body().SignedExecutionPayloadHeader()
if err != nil {
return nil, errors.Wrap(err, "could not get signed execution payload header")
}
bid, err := signedBid.Header()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload header")
}
if parentHash == bid.BlockHash() {
// The block builds on the full (payload-present) parent, so use the state keyed by the payload hash.
parentRoot = parentHash
}
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, parentRoot)
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot())
}

View File

@@ -142,7 +142,7 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
// the parent of the last block inserted is the tree node.
fcp := &ethpb.Checkpoint{Epoch: 0, Root: service.originBlockRoot[:]}
r0 := bytesutil.ToBytes32(roots[0])
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, fcp, fcp)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
fcp2 := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r0}
@@ -184,7 +184,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
// the parent of the last block inserted is the tree node.
fcp := &ethpb.Checkpoint{Epoch: 0, Root: service.originBlockRoot[:]}
r0 := bytesutil.ToBytes32(roots[0])
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, fcp, fcp)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
fcp2 := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r0}
@@ -464,7 +464,7 @@ func TestAncestor_CanUseForkchoice(t *testing.T) {
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
st, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
}
@@ -504,7 +504,7 @@ func TestAncestor_CanUseDB(t *testing.T) {
util.SaveBlock(t, context.Background(), beaconDB, beaconBlock)
}
st, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
@@ -1076,6 +1076,48 @@ func TestService_insertSlashingsToForkChoiceStore(t *testing.T) {
service.InsertSlashingsToForkChoiceStore(ctx, wb.Block().Body().AttesterSlashings())
}
func TestService_insertSlashingsToForkChoiceStoreElectra(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
beaconState, privKeys := util.DeterministicGenesisStateElectra(t, 100)
att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashingElectra{
{
Attestation_1: att1,
Attestation_2: att2,
},
}
b := util.NewBeaconBlockElectra()
b.Block.Body.AttesterSlashings = slashings
wb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
service.InsertSlashingsToForkChoiceStore(ctx, wb.Block().Body().AttesterSlashings())
}
func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
@@ -1111,7 +1153,7 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
logHook := logTest.NewGlobal()
for i := 0; i < 10; i++ {
fc := &ethpb.Checkpoint{}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, wsb1.Block().ParentRoot(), [32]byte{}, [32]byte{}, fc, fc)
st, blkRoot, err := prepareForkchoiceState(ctx, 0, wsb1.Block().ParentRoot(), [32]byte{}, [32]byte{}, [32]byte{}, fc, fc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
var wg sync.WaitGroup

View File

@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
@@ -148,15 +149,35 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
return
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
var attributes payloadattribute.Attributer
if s.inRegularSync() {
attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if headState.Version() >= version.EPBS {
bh, err := headState.LatestBlockHash()
if err != nil {
log.WithError(err).Error("could not get latest block hash")
return
}
_, err = s.notifyForkchoiceUpdateEPBS(ctx, [32]byte(bh), attributes)
if err != nil {
log.WithError(err).Error("could not notify forkchoice update")
}
if err := s.saveHead(ctx, newHeadRoot, headBlock, headState); err != nil {
log.WithError(err).Error("could not save head")
return
}
if err := s.pruneAttsFromPool(headBlock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
return
}
fcuArgs := &fcuConfig{
headState: headState,
headRoot: newHeadRoot,
headBlock: headBlock,
proposingSlot: proposingSlot,
}
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
@@ -184,7 +205,7 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
}
hasState := s.cfg.BeaconDB.HasStateSummary(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
hasBlock := s.hasBlock(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
hasBlock := s.chainHasBlock(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
if !(hasState && hasBlock) {
continue
}

View File

@@ -42,11 +42,11 @@ func TestVerifyLMDFFGConsistent(t *testing.T) {
f := service.cfg.ForkChoiceStore
fc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, r32, err := prepareForkchoiceState(ctx, 32, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, fc, fc)
state, r32, err := prepareForkchoiceState(ctx, 32, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, [32]byte{}, fc, fc)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, r32))
state, r33, err := prepareForkchoiceState(ctx, 33, [32]byte{'b'}, r32.Root(), params.BeaconConfig().ZeroHash, fc, fc)
state, r33, err := prepareForkchoiceState(ctx, 33, [32]byte{'b'}, r32.Root(), params.BeaconConfig().ZeroHash, [32]byte{}, fc, fc)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, r33))
@@ -82,7 +82,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
attsToSave := make([]ethpb.Att, len(atts))
@@ -142,7 +142,7 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, ojc, ojc)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.Equal(t, 3, fcs.NodeCount())
@@ -191,7 +191,7 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, ojc, ojc)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.Equal(t, 3, fcs.NodeCount())

View File

@@ -39,18 +39,30 @@ var epochsSinceFinalityExpandCache = primitives.Epoch(4)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error
ReceiveExecutionPayloadEnvelope(ctx context.Context, env interfaces.ROSignedExecutionPayloadEnvelope, avs das.AvailabilityStore) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
}
// PayloadAttestationReceiver defines methods of the chain service for receiving
// and processing new payload attestations and payload attestation messages
type PayloadAttestationReceiver interface {
ReceivePayloadAttestationMessage(ctx context.Context, a *ethpb.PayloadAttestationMessage) error
}
// BlobReceiver interface defines the methods of chain service for receiving new
// blobs
type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}
// ExecutionPayloadReceiver interface defines the methods of chain service for receiving `ROExecutionPayloadEnvelope`.
type ExecutionPayloadReceiver interface {
ReceiveExecutionPayloadEnvelope(ctx context.Context, envelope interfaces.ROSignedExecutionPayloadEnvelope, _ das.AvailabilityStore) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)
@@ -87,14 +99,30 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, roblock)
if err != nil {
return err
}
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return err
var postState state.BeaconState
var isValidPayload bool
var daWaitedTime time.Duration
if blockCopy.Version() >= version.EPBS {
postState, err = s.validateStateTransition(ctx, preState, roblock)
if err != nil {
return errors.Wrap(err, "could not validate state transition")
}
optimistic, err := s.IsOptimisticForRoot(ctx, roblock.Block().ParentRoot())
if err != nil {
return errors.Wrap(err, "could not check if parent is optimistic")
}
// if the parent is not optimistic then we can set the block as
// not optimistic.
isValidPayload = !optimistic
} else {
postState, isValidPayload, err = s.validateExecutionAndConsensus(ctx, preState, roblock)
if err != nil {
return err
}
daWaitedTime, err = s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return err
}
}
// Defragment the state before continuing block processing.
s.defragmentState(postState)
@@ -468,6 +496,9 @@ func (s *Service) validateStateTransition(ctx context.Context, preState state.Be
stateTransitionStartTime := time.Now()
postState, err := transition.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
if ctx.Err() != nil {
return nil, err
}
return nil, invalidBlock{error: err}
}
stateTransitionProcessingTime.Observe(float64(time.Since(stateTransitionStartTime).Milliseconds()))
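Though not part of the diff, a hypothetical compile-time assertion documents that *Service satisfies the new ExecutionPayloadReceiver interface declared earlier in this file (ReceiveExecutionPayloadEnvelope is implemented in the new payload receiver file below):

var _ ExecutionPayloadReceiver = (*Service)(nil)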

View File

@@ -0,0 +1,234 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"golang.org/x/sync/errgroup"
)
// ReceiveExecutionPayloadEnvelope defines the operations (minus pubsub)
// that are performed on a received execution payload envelope. The operations consist of:
// 1. Validate the payload and apply the state transition.
// 2. Apply fork choice to the processed payload.
// 3. Save the latest head info.
func (s *Service) ReceiveExecutionPayloadEnvelope(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope, _ das.AvailabilityStore) error {
receivedTime := time.Now()
envelope, err := signed.Envelope()
if err != nil {
return err
}
root := envelope.BeaconBlockRoot()
s.payloadBeingSynced.set(envelope)
defer s.payloadBeingSynced.unset(root)
preState, err := s.getPayloadEnvelopePrestate(ctx, envelope)
if err != nil {
return errors.Wrap(err, "could not get prestate")
}
eg, _ := errgroup.WithContext(ctx)
eg.Go(func() error {
if err := epbs.ValidatePayloadStateTransition(ctx, preState, envelope); err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
return nil
})
var isValidPayload bool
eg.Go(func() error {
var err error
isValidPayload, err = s.validateExecutionOnEnvelope(ctx, envelope)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
return nil
})
if err := eg.Wait(); err != nil {
return err
}
daStartTime := time.Now()
// TODO: Add DA check
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
if err := s.savePostPayload(ctx, signed, preState); err != nil {
return err
}
if err := s.insertPayloadEnvelope(envelope); err != nil {
return errors.Wrap(err, "could not insert payload to forkchoice")
}
if isValidPayload {
s.ForkChoicer().Lock()
if err := s.ForkChoicer().SetOptimisticToValid(ctx, root); err != nil {
s.ForkChoicer().Unlock()
return errors.Wrap(err, "could not set optimistic payload to valid")
}
s.ForkChoicer().Unlock()
}
headRoot, err := s.HeadRoot(ctx)
if err != nil {
log.WithError(err).Error("could not get headroot to compute attributes")
return nil
}
if bytes.Equal(headRoot, root[:]) {
attr := s.getPayloadAttribute(ctx, preState, envelope.Slot()+1, headRoot)
execution, err := envelope.Execution()
if err != nil {
log.WithError(err).Error("could not get execution data")
return nil
}
blockHash := [32]byte(execution.BlockHash())
payloadID, err := s.notifyForkchoiceUpdateEPBS(ctx, blockHash, attr)
if err != nil {
if IsInvalidBlock(err) {
// TODO handle the lvh here
return err
}
return nil
}
if attr != nil && !attr.IsEmpty() && payloadID != nil {
var pid [8]byte
copy(pid[:], payloadID[:])
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(headRoot)),
"headSlot": envelope.Slot(),
"payloadID": fmt.Sprintf("%#x", bytesutil.Trunc(payloadID[:])),
}).Info("Forkchoice updated with payload attributes for proposal")
s.cfg.PayloadIDCache.Set(envelope.Slot()+1, root, pid)
}
headBlk, err := s.HeadBlock(ctx)
if err != nil {
log.WithError(err).Error("could not get head block")
}
if err := s.saveHead(ctx, root, headBlk, preState); err != nil {
log.WithError(err).Error("could not save new head")
}
// update the NSC with the hash for the full block
if err := transition.UpdateNextSlotCache(ctx, blockHash[:], preState); err != nil {
log.WithError(err).Error("could not update next slot cache with payload")
}
}
timeWithoutDaWait := time.Since(receivedTime) - daWaitedTime
executionEngineProcessingTime.Observe(float64(timeWithoutDaWait.Milliseconds()))
return nil
}
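The concurrent validation above is a standard errgroup pattern: the consensus-side state transition and the execution engine's newPayload notification run in parallel, and processing aborts if either returns an error. Below is a minimal, self-contained sketch of that pattern only; validateConsensus and notifyEngine are hypothetical stand-ins, not the Prysm functions used above.

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// Hypothetical stand-ins for the consensus state transition and the
// execution engine newPayload notification.
func validateConsensus(ctx context.Context) error    { return nil }
func notifyEngine(ctx context.Context) (bool, error) { return true, nil }

func processEnvelope(ctx context.Context) (bool, error) {
	eg, ctx := errgroup.WithContext(ctx)

	// Consensus-side validation runs in its own goroutine.
	eg.Go(func() error {
		return validateConsensus(ctx)
	})

	// Execution-side validation runs concurrently; its result is captured
	// for use once both goroutines have finished.
	var isValidPayload bool
	eg.Go(func() error {
		var err error
		isValidPayload, err = notifyEngine(ctx)
		return err
	})

	// Wait returns the first non-nil error from either goroutine.
	if err := eg.Wait(); err != nil {
		return false, err
	}
	return isValidPayload, nil
}

func main() {
	fmt.Println(processEnvelope(context.Background()))
}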
// notifyNewEnvelope signals the execution engine of a new payload envelope.
// It returns true if the EL has returned VALID for the payload.
func (s *Service) notifyNewEnvelope(ctx context.Context, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()
payload, err := envelope.Execution()
if err != nil {
return false, errors.Wrap(err, "could not get execution payload")
}
versionedHashes := envelope.VersionedHashes()
root := envelope.BeaconBlockRoot()
parentRoot, err := s.ParentRoot(root)
if err != nil {
return false, errors.Wrap(err, "could not get parent block root")
}
pr := common.Hash(parentRoot)
requests := envelope.ExecutionRequests()
lastValidHash, err := s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &pr, requests)
switch {
case err == nil:
newPayloadValidNodeCount.Inc()
return true, nil
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
newPayloadOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic block")
return false, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
lvh := bytesutil.ToBytes32(lastValidHash)
return false, invalidBlock{
error: ErrInvalidPayload,
lastValidHash: lvh,
}
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
}
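The switch above collapses the engine's newPayload response into a (valid, error) pair: a nil error means VALID, an accepted/syncing status keeps the payload optimistic (false with no error), and an invalid status is surfaced as an error carrying the last valid hash for the caller to act on. A hedged sketch of just that mapping, using placeholder sentinel errors rather than the real execution package errors:

package main

import (
	"errors"
	"fmt"
)

// Placeholder sentinels standing in for the execution package's
// ErrAcceptedSyncingPayloadStatus and ErrInvalidPayloadStatus.
var (
	errSyncing = errors.New("payload accepted or syncing")
	errInvalid = errors.New("payload invalid")
)

// classifyNewPayload mirrors the valid/optimistic/invalid decision:
// nil error means valid, syncing means optimistic (not valid, no error),
// and invalid is returned to the caller as an error.
func classifyNewPayload(err error) (bool, error) {
	switch {
	case err == nil:
		return true, nil
	case errors.Is(err, errSyncing):
		return false, nil
	case errors.Is(err, errInvalid):
		return false, fmt.Errorf("invalid payload: %w", err)
	default:
		return false, fmt.Errorf("undefined engine error: %w", err)
	}
}

func main() {
	fmt.Println(classifyNewPayload(nil))
	fmt.Println(classifyNewPayload(errSyncing))
	fmt.Println(classifyNewPayload(errInvalid))
}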
// validateExecutionOnEnvelope notifies the engine of the incoming execution payload and returns true if the payload is valid
func (s *Service) validateExecutionOnEnvelope(ctx context.Context, e interfaces.ROExecutionPayloadEnvelope) (bool, error) {
isValidPayload, err := s.notifyNewEnvelope(ctx, e)
if err == nil {
return isValidPayload, nil
}
blockRoot := e.BeaconBlockRoot()
parentRoot, rootErr := s.ParentRoot(blockRoot)
if rootErr != nil {
return false, errors.Wrap(rootErr, "could not get parent block root")
}
s.cfg.ForkChoiceStore.Lock()
err = s.handleInvalidExecutionError(ctx, err, blockRoot, parentRoot)
s.cfg.ForkChoiceStore.Unlock()
return false, err
}
func (s *Service) getPayloadEnvelopePrestate(ctx context.Context, e interfaces.ROExecutionPayloadEnvelope) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.getPayloadEnvelopePreState")
defer span.End()
// Verify incoming payload has a valid pre state.
root := e.BeaconBlockRoot()
// Verify the referred block is known to forkchoice
if !s.InForkchoice(root) {
return nil, errors.New("Cannot import execution payload envelope for unknown block")
}
if err := s.verifyBlkPreState(ctx, root); err != nil {
return nil, errors.Wrap(err, "could not verify payload prestate")
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, root)
if err != nil {
return nil, errors.Wrap(err, "could not get pre state")
}
if preState == nil || preState.IsNil() {
return nil, errors.New("nil pre state")
}
return preState, nil
}
func (s *Service) savePostPayload(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope, st state.BeaconState) error {
if err := s.cfg.BeaconDB.SaveBlindPayloadEnvelope(ctx, signed); err != nil {
return err
}
envelope, err := signed.Envelope()
if err != nil {
return err
}
execution, err := envelope.Execution()
if err != nil {
return err
}
r := envelope.BeaconBlockRoot()
if err := s.cfg.StateGen.SaveState(ctx, [32]byte(execution.BlockHash()), st); err != nil {
log.Warnf("Rolling back insertion of block with root %#x", r)
if err := s.cfg.BeaconDB.DeleteBlock(ctx, r); err != nil {
log.WithError(err).Errorf("Could not delete block with block root %#x", r)
}
return errors.Wrap(err, "could not save state")
}
return nil
}

View File

@@ -0,0 +1,103 @@
package blockchain
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
)
func Test_getPayloadEnvelopePrestate(t *testing.T) {
service, tr := minimalTestService(t)
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
p := random.ExecutionPayloadEnvelope(t)
p.BeaconBlockRoot = service.originBlockRoot[:]
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
_, err = service.getPayloadEnvelopePrestate(ctx, e)
require.NoError(t, err)
}
func Test_notifyNewEnvelope(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
p := random.ExecutionPayloadEnvelope(t)
p.BeaconBlockRoot = service.originBlockRoot[:]
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
engine := &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = engine
isValidPayload, err := service.notifyNewEnvelope(ctx, e)
require.NoError(t, err)
require.Equal(t, true, isValidPayload)
}
func Test_validateExecutionOnEnvelope(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
p := random.ExecutionPayloadEnvelope(t)
p.BeaconBlockRoot = service.originBlockRoot[:]
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
engine := &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = engine
isValidPayload, err := service.validateExecutionOnEnvelope(ctx, e)
require.NoError(t, err)
require.Equal(t, true, isValidPayload)
}
func Test_ReceiveExecutionPayloadEnvelope(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
post := gs.Copy()
p := &enginev1.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
BlockHash: make([]byte, 32),
},
BeaconBlockRoot: service.originBlockRoot[:],
BlobKzgCommitments: make([][]byte, 0),
StateRoot: make([]byte, 32),
ExecutionRequests: &enginev1.ExecutionRequests{},
}
sp := &enginev1.SignedExecutionPayloadEnvelope{
Message: p,
}
e, err := blocks.WrappedROSignedExecutionPayloadEnvelope(sp)
require.NoError(t, err)
das := &das.MockAvailabilityStore{}
blockHeader := post.LatestBlockHeader()
prevStateRoot, err := post.HashTreeRoot(ctx)
require.NoError(t, err)
blockHeader.StateRoot = prevStateRoot[:]
require.NoError(t, post.SetLatestBlockHeader(blockHeader))
stRoot, err := post.HashTreeRoot(ctx)
require.NoError(t, err)
p.StateRoot = stRoot[:]
engine := &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = engine
require.NoError(t, service.ReceiveExecutionPayloadEnvelope(ctx, e, das))
}

View File

@@ -0,0 +1,33 @@
package blockchain
import (
"context"
"slices"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
func (s *Service) ReceivePayloadAttestationMessage(ctx context.Context, a *eth.PayloadAttestationMessage) error {
if err := helpers.ValidateNilPayloadAttestationMessage(a); err != nil {
return err
}
root := [32]byte(a.Data.BeaconBlockRoot)
st, err := s.HeadStateReadOnly(ctx)
if err != nil {
return err
}
ptc, err := helpers.GetPayloadTimelinessCommittee(ctx, st, a.Data.Slot)
if err != nil {
return err
}
idx := slices.Index(ptc, a.ValidatorIndex)
if idx == -1 {
return errInvalidValidatorIndex
}
if s.cfg.PayloadAttestationCache.Seen(root, uint64(primitives.ValidatorIndex(idx))) {
return nil
}
return s.cfg.PayloadAttestationCache.Add(a, uint64(idx))
}

View File

@@ -47,24 +47,26 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
lastPublishedLightClientEpoch primitives.Epoch
blobStorage *filesystem.BlobStorage
payloadBeingSynced *currentlySyncingPayload
}
// config options for the service.
@@ -73,6 +75,8 @@ type config struct {
ChainStartFetcher execution.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
DepositCache cache.DepositCache
PayloadAttestationCache *cache.PayloadAttestationCache
PayloadEnvelopeCache *sync.Map
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
AttestationCache *cache.AttestationCache
@@ -183,6 +187,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
blobNotifiers: bn,
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
payloadBeingSynced: &currentlySyncingPayload{roots: make(map[[32]byte]primitives.PTCStatus)},
}
for _, opt := range opts {
if err := opt(srv); err != nil {
@@ -558,7 +563,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
// 2.) Check DB.
// Checking 1.) is ten times faster than checking 2.)
// this function requires a lock in forkchoice
func (s *Service) hasBlock(ctx context.Context, root [32]byte) bool {
func (s *Service) chainHasBlock(ctx context.Context, root [32]byte) bool {
if s.cfg.ForkChoiceStore.HasNode(root) {
return true
}

View File

@@ -386,8 +386,8 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
require.NoError(t, err)
require.NoError(t, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, roblock))
assert.Equal(t, false, s.hasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.hasBlock(ctx, r), "Should have block")
assert.Equal(t, false, s.chainHasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.chainHasBlock(ctx, r), "Should have block")
}
func TestServiceStop_SaveCachedBlocks(t *testing.T) {

View File

@@ -3,7 +3,10 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
testonly = True,
srcs = ["mock.go"],
srcs = [
"mock.go",
"mock_epbs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing",
visibility = [
"//beacon-chain:__subpackages__",

View File

@@ -37,44 +37,50 @@ var ErrNilState = errors.New("nil state")
// ChainService defines the mock interface for testing
type ChainService struct {
NotFinalized bool
Optimistic bool
ValidAttestation bool
ValidatorsRoot [32]byte
PublicKey [fieldparams.BLSPubkeyLength]byte
FinalizedCheckPoint *ethpb.Checkpoint
CurrentJustifiedCheckPoint *ethpb.Checkpoint
PreviousJustifiedCheckPoint *ethpb.Checkpoint
Slot *primitives.Slot // Pointer because 0 is a useful value, so checking against it can be incorrect.
Balance *precompute.Balance
CanonicalRoots map[[32]byte]bool
Fork *ethpb.Fork
ETH1Data *ethpb.Eth1Data
InitSyncBlockRoots map[[32]byte]bool
DB db.Database
State state.BeaconState
Block interfaces.ReadOnlySignedBeaconBlock
VerifyBlkDescendantErr error
stateNotifier statefeed.Notifier
BlocksReceived []interfaces.ReadOnlySignedBeaconBlock
SyncCommitteeIndices []primitives.CommitteeIndex
blockNotifier blockfeed.Notifier
opNotifier opfeed.Notifier
Root []byte
SyncCommitteeDomain []byte
SyncSelectionProofDomain []byte
SyncContributionProofDomain []byte
SyncCommitteePubkeys [][]byte
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
OptimisticCheckRootReceived [32]byte
FinalizedRoots map[[32]byte]bool
OptimisticRoots map[[32]byte]bool
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []blocks.VerifiedROBlob
TargetRoot [32]byte
NotFinalized bool
Optimistic bool
ValidAttestation bool
ValidatorsRoot [32]byte
PublicKey [fieldparams.BLSPubkeyLength]byte
FinalizedCheckPoint *ethpb.Checkpoint
CurrentJustifiedCheckPoint *ethpb.Checkpoint
PreviousJustifiedCheckPoint *ethpb.Checkpoint
Slot *primitives.Slot // Pointer because 0 is a useful value, so checking against it can be incorrect.
Balance *precompute.Balance
CanonicalRoots map[[32]byte]bool
Fork *ethpb.Fork
ETH1Data *ethpb.Eth1Data
InitSyncBlockRoots map[[32]byte]bool
DB db.Database
State state.BeaconState
Block interfaces.ReadOnlySignedBeaconBlock
ExecutionPayloadEnvelope interfaces.ROExecutionPayloadEnvelope
VerifyBlkDescendantErr error
stateNotifier statefeed.Notifier
BlocksReceived []interfaces.ReadOnlySignedBeaconBlock
SyncCommitteeIndices []primitives.CommitteeIndex
blockNotifier blockfeed.Notifier
opNotifier opfeed.Notifier
Root []byte
SyncCommitteeDomain []byte
SyncSelectionProofDomain []byte
SyncContributionProofDomain []byte
SyncCommitteePubkeys [][]byte
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
ReceiveEnvelopeMockErr error
OptimisticCheckRootReceived [32]byte
FinalizedRoots map[[32]byte]bool
OptimisticRoots map[[32]byte]bool
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []blocks.VerifiedROBlob
TargetRoot [32]byte
HighestReceivedSlot primitives.Slot
HighestReceivedRoot [32]byte
PayloadStatus primitives.PTCStatus
ReceivePayloadAttestationMessageErr error
}
func (s *ChainService) Ancestor(ctx context.Context, root []byte, slot primitives.Slot) ([]byte, error) {
@@ -641,12 +647,12 @@ func (s *ChainService) ReceivedBlocksLastEpoch() (uint64, error) {
return 0, nil
}
// HighestReceivedBlockSlot mocks the same method in the chain service
func (s *ChainService) HighestReceivedBlockSlot() primitives.Slot {
// HighestReceivedBlockSlotRoot mocks the same method in the chain service
func (s *ChainService) HighestReceivedBlockSlotRoot() (primitives.Slot, [32]byte) {
if s.ForkChoiceStore != nil {
return s.ForkChoiceStore.HighestReceivedBlockSlot()
return s.ForkChoiceStore.HighestReceivedBlockSlotRoot()
}
return 0
return s.HighestReceivedSlot, s.HighestReceivedRoot
}
// InsertNode mocks the same method in the chain service
@@ -706,3 +712,17 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil
}
// HashInForkchoice mocks the same method in the chain service
func (c *ChainService) HashInForkchoice([32]byte) bool {
return false
}
// ReceivePayloadAttestationMessage mocks the same method in the chain service
func (c *ChainService) ReceivePayloadAttestationMessage(_ context.Context, _ *ethpb.PayloadAttestationMessage) error {
return c.ReceivePayloadAttestationMessageErr
}
func (c *ChainService) GetPTCVote(root [32]byte) primitives.PTCStatus {
return c.PayloadStatus
}

View File

@@ -0,0 +1,30 @@
package testing
import (
"context"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
)
// ReceiveExecutionPayloadEnvelope mocks the method in chain service.
func (s *ChainService) ReceiveExecutionPayloadEnvelope(ctx context.Context, env interfaces.ROSignedExecutionPayloadEnvelope, _ das.AvailabilityStore) error {
if s.ReceiveBlockMockErr != nil {
return s.ReceiveBlockMockErr
}
if s.State == nil {
return ErrNilState
}
e, err := env.Envelope()
if err != nil {
return err
}
if s.State.Slot() == e.Slot() {
if err := s.State.SetLatestFullSlot(s.State.Slot()); err != nil {
return err
}
}
s.ExecutionPayloadEnvelope = e
return nil
}

View File

@@ -16,11 +16,13 @@ go_library(
"doc.go",
"error.go",
"interfaces.go",
"payload_attestation.go",
"payload_id.go",
"proposer_indices.go",
"proposer_indices_disabled.go", # keep
"proposer_indices_type.go",
"registration.go",
"signed_execution_header.go",
"skip_slot_cache.go",
"subnet_ids.go",
"sync_committee.go",
@@ -50,6 +52,7 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
@@ -75,10 +78,12 @@ go_test(
"checkpoint_state_test.go",
"committee_fuzz_test.go",
"committee_test.go",
"payload_attestation_test.go",
"payload_id_test.go",
"private_access_test.go",
"proposer_indices_test.go",
"registration_test.go",
"signed_execution_header_test.go",
"skip_slot_cache_test.go",
"subnet_ids_test.go",
"sync_committee_head_state_test.go",
@@ -93,8 +98,10 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/blst:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//testing/assert:go_default_library",

View File

@@ -0,0 +1,132 @@
package cache
import (
"errors"
"sync"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
var errNilPayloadAttestationMessage = errors.New("nil Payload Attestation Message")
// PayloadAttestationCache keeps the PTC votes seen for a single beacon block root,
// already aggregated by payload status. A vote for a different root resets the cache.
type PayloadAttestationCache struct {
root [32]byte
attestations [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation
sync.Mutex
}
// Seen returns true if a vote for the given Beacon Block Root has already been processed
// for this Payload Timeliness Committee index. This will return true even if
// the Payload status differs.
func (p *PayloadAttestationCache) Seen(root [32]byte, idx uint64) bool {
p.Lock()
defer p.Unlock()
if p.root != root {
return false
}
for _, agg := range p.attestations {
if agg == nil {
continue
}
if agg.AggregationBits.BitAt(idx) {
return true
}
}
return false
}
// messageToPayloadAttestation creates a PayloadAttestation with a single
// aggregated bit from the passed PayloadAttestationMessage
func messageToPayloadAttestation(att *eth.PayloadAttestationMessage, idx uint64) *eth.PayloadAttestation {
bits := primitives.NewPayloadAttestationAggregationBits()
bits.SetBitAt(idx, true)
data := &eth.PayloadAttestationData{
BeaconBlockRoot: bytesutil.SafeCopyBytes(att.Data.BeaconBlockRoot),
Slot: att.Data.Slot,
PayloadStatus: att.Data.PayloadStatus,
}
return &eth.PayloadAttestation{
AggregationBits: bits,
Data: data,
Signature: bytesutil.SafeCopyBytes(att.Signature),
}
}
// aggregateSigFromMessage returns the aggregated signature from a Payload
// Attestation by adding the passed signature in the PayloadAttestationMessage,
// no signature validation is performed.
func aggregateSigFromMessage(aggregated *eth.PayloadAttestation, message *eth.PayloadAttestationMessage) ([]byte, error) {
aggSig, err := bls.SignatureFromBytesNoValidation(aggregated.Signature)
if err != nil {
return nil, err
}
sig, err := bls.SignatureFromBytesNoValidation(message.Signature)
if err != nil {
return nil, err
}
return bls.AggregateSignatures([]bls.Signature{aggSig, sig}).Marshal(), nil
}
// Add adds a PayloadAttestationMessage to the internal cache of aggregated
// PayloadAttestations.
// If the index has already been seen for this attestation status, the function does nothing.
// If the root is not the cached root, the function clears the previous cache first.
// This function assumes that the message has already been validated; in
// particular, that the signature is valid and that the block root corresponds to
// the given slot in the attestation data.
func (p *PayloadAttestationCache) Add(att *eth.PayloadAttestationMessage, idx uint64) error {
if att == nil || att.Data == nil || att.Data.BeaconBlockRoot == nil {
return errNilPayloadAttestationMessage
}
p.Lock()
defer p.Unlock()
root := [32]byte(att.Data.BeaconBlockRoot)
if p.root != root {
p.root = root
p.attestations = [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{}
}
agg := p.attestations[att.Data.PayloadStatus]
if agg == nil {
p.attestations[att.Data.PayloadStatus] = messageToPayloadAttestation(att, idx)
return nil
}
if agg.AggregationBits.BitAt(idx) {
return nil
}
sig, err := aggregateSigFromMessage(agg, att)
if err != nil {
return err
}
agg.Signature = sig
agg.AggregationBits.SetBitAt(idx, true)
return nil
}
// Get returns the aggregated PayloadAttestation for the given root and status
// if the root doesn't exist or status is invalid, the function returns nil.
func (p *PayloadAttestationCache) Get(root [32]byte, status primitives.PTCStatus) *eth.PayloadAttestation {
p.Lock()
defer p.Unlock()
if p.root != root {
return nil
}
if status >= primitives.PAYLOAD_INVALID_STATUS {
return nil
}
return eth.CopyPayloadAttestation(p.attestations[status])
}
// Clear clears the internal map
func (p *PayloadAttestationCache) Clear() {
p.Lock()
defer p.Unlock()
p.root = [32]byte{}
p.attestations = [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{}
}
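A short usage sketch of the cache above, assuming a validated PayloadAttestationMessage and the validator's PTC index are already in hand (this mirrors how ReceivePayloadAttestationMessage drives the cache, but it is an illustration only):

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v5/crypto/bls"
	eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)

func main() {
	c := &cache.PayloadAttestationCache{}

	root := [32]byte{'r'}
	msg := &eth.PayloadAttestationMessage{
		Signature: bls.NewAggregateSignature().Marshal(),
		Data: &eth.PayloadAttestationData{
			BeaconBlockRoot: root[:],
			Slot:            1,
			PayloadStatus:   primitives.PAYLOAD_PRESENT,
		},
	}

	// Deduplicate per PTC index before aggregating the vote.
	idx := uint64(5)
	if !c.Seen(root, idx) {
		if err := c.Add(msg, idx); err != nil {
			panic(err)
		}
	}

	// Read back the aggregated attestation for this root and payload status.
	agg := c.Get(root, primitives.PAYLOAD_PRESENT)
	fmt.Println(agg.AggregationBits.BitIndices()) // [5]
}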

View File

@@ -0,0 +1,143 @@
package cache
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestPayloadAttestationCache(t *testing.T) {
p := &PayloadAttestationCache{}
// Test Seen before any attestation is added
root := [32]byte{'r'}
idx := uint64(5)
require.Equal(t, false, p.Seen(root, idx))
// Test Add
msg := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: root[:],
Slot: 1,
PayloadStatus: primitives.PAYLOAD_PRESENT,
},
}
// Add new root
require.NoError(t, p.Add(msg, idx))
require.Equal(t, true, p.Seen(root, idx))
require.Equal(t, root, p.root)
att := p.attestations[primitives.PAYLOAD_PRESENT]
indices := att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx)}, indices)
singleSig := bytesutil.SafeCopyBytes(msg.Signature)
require.DeepEqual(t, singleSig, att.Signature)
// Test Seen
require.Equal(t, true, p.Seen(root, idx))
require.Equal(t, false, p.Seen(root, idx+1))
// Add another attestation on the same data
msg2 := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: att.Data,
}
idx2 := uint64(7)
require.NoError(t, p.Add(msg2, idx2))
att = p.attestations[primitives.PAYLOAD_PRESENT]
indices = att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx), int(idx2)}, indices)
require.DeepNotEqual(t, att.Signature, msg.Signature)
// Try again the same index
require.NoError(t, p.Add(msg2, idx2))
att2 := p.attestations[primitives.PAYLOAD_PRESENT]
indices = att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx), int(idx2)}, indices)
require.DeepEqual(t, att, att2)
// Test Seen
require.Equal(t, true, p.Seen(root, idx2))
require.Equal(t, false, p.Seen(root, idx2+1))
// Add another attestation with a different payload status
msg3 := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: root[:],
Slot: 1,
PayloadStatus: primitives.PAYLOAD_WITHHELD,
},
}
idx3 := uint64(17)
require.NoError(t, p.Add(msg3, idx3))
att3 := p.attestations[primitives.PAYLOAD_WITHHELD]
indices3 := att3.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx3)}, indices3)
require.DeepEqual(t, singleSig, att3.Signature)
// Add a different root
root2 := [32]byte{'s'}
msg.Data.BeaconBlockRoot = root2[:]
require.NoError(t, p.Add(msg, idx))
require.Equal(t, root2, p.root)
require.Equal(t, true, p.Seen(root2, idx))
require.Equal(t, false, p.Seen(root, idx))
att = p.attestations[primitives.PAYLOAD_PRESENT]
indices = att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx)}, indices)
}
func TestPayloadAttestationCache_Get(t *testing.T) {
root := [32]byte{1, 2, 3}
wrongRoot := [32]byte{4, 5, 6}
status := primitives.PAYLOAD_PRESENT
invalidStatus := primitives.PAYLOAD_INVALID_STATUS
cache := &PayloadAttestationCache{
root: root,
attestations: [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{
{
Signature: []byte{1},
},
{
Signature: []byte{2},
},
{
Signature: []byte{3},
},
},
}
t.Run("valid root and status", func(t *testing.T) {
result := cache.Get(root, status)
require.NotNil(t, result, "Expected a non-nil result")
require.DeepEqual(t, cache.attestations[status], result)
})
t.Run("invalid root", func(t *testing.T) {
result := cache.Get(wrongRoot, status)
require.IsNil(t, result)
})
t.Run("status out of bound", func(t *testing.T) {
result := cache.Get(root, invalidStatus)
require.IsNil(t, result)
})
t.Run("no attestation", func(t *testing.T) {
emptyCache := &PayloadAttestationCache{
root: root,
attestations: [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{},
}
result := emptyCache.Get(root, status)
require.IsNil(t, result)
})
}

View File

@@ -0,0 +1,76 @@
package cache
import (
"bytes"
"sync"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)
// ExecutionPayloadHeaders is used by the sync service to store signed execution payload headers after they pass validation,
// and filter out subsequent headers with lower value.
// The signed header from this cache could be used by the proposer when proposing the next slot.
type ExecutionPayloadHeaders struct {
headers map[primitives.Slot][]*enginev1.SignedExecutionPayloadHeader
sync.RWMutex
}
func NewExecutionPayloadHeaders() *ExecutionPayloadHeaders {
return &ExecutionPayloadHeaders{
headers: make(map[primitives.Slot][]*enginev1.SignedExecutionPayloadHeader),
}
}
// SaveSignedExecutionPayloadHeader saves the signed execution payload header to the cache.
// The cache stores headers for up to two slots. If the input slot is higher than the lowest slot
// currently in the cache, the lowest slot is removed to make space for the new header.
// Only the highest-value header for a given parent block hash and parent block root is kept.
// This function assumes the caller has already checked that the header's slot is the current or next slot; it performs no slot validation itself.
func (c *ExecutionPayloadHeaders) SaveSignedExecutionPayloadHeader(header *enginev1.SignedExecutionPayloadHeader) {
c.Lock()
defer c.Unlock()
for s := range c.headers {
if s+1 < header.Message.Slot {
delete(c.headers, s)
}
}
// Add or update the header in the map
if _, ok := c.headers[header.Message.Slot]; !ok {
c.headers[header.Message.Slot] = []*enginev1.SignedExecutionPayloadHeader{header}
} else {
found := false
for i, h := range c.headers[header.Message.Slot] {
if bytes.Equal(h.Message.ParentBlockHash, header.Message.ParentBlockHash) && bytes.Equal(h.Message.ParentBlockRoot, header.Message.ParentBlockRoot) {
if header.Message.Value > h.Message.Value {
c.headers[header.Message.Slot][i] = header
}
found = true
break
}
}
if !found {
c.headers[header.Message.Slot] = append(c.headers[header.Message.Slot], header)
}
}
}
// SignedExecutionPayloadHeader returns the signed payload header for the given slot, parent block hash, and parent block root.
// Returns nil if no matching header is found.
// This should be used when the caller needs the header to match both the parent block hash and the parent block root, such as a proposer choosing a header to propose.
func (c *ExecutionPayloadHeaders) SignedExecutionPayloadHeader(slot primitives.Slot, parentBlockHash []byte, parentBlockRoot []byte) *enginev1.SignedExecutionPayloadHeader {
c.RLock()
defer c.RUnlock()
if headers, ok := c.headers[slot]; ok {
for _, header := range headers {
if bytes.Equal(header.Message.ParentBlockHash, parentBlockHash) && bytes.Equal(header.Message.ParentBlockRoot, parentBlockRoot) {
return header
}
}
}
return nil
}
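A brief usage sketch of the header cache above: two competing headers for the same slot, parent block hash, and parent block root, where only the higher-value header survives and is returned on lookup. This is an illustration only, not code from the diff.

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
	enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)

func main() {
	c := cache.NewExecutionPayloadHeaders()

	low := &enginev1.SignedExecutionPayloadHeader{
		Message: &enginev1.ExecutionPayloadHeaderEPBS{
			Slot:            2,
			ParentBlockHash: []byte("parent"),
			ParentBlockRoot: []byte("root"),
			Value:           100,
		},
	}
	high := &enginev1.SignedExecutionPayloadHeader{
		Message: &enginev1.ExecutionPayloadHeaderEPBS{
			Slot:            2,
			ParentBlockHash: []byte("parent"),
			ParentBlockRoot: []byte("root"),
			Value:           200,
		},
	}

	// The second save replaces the first: same slot, parent block hash,
	// and parent block root, but a higher value.
	c.SaveSignedExecutionPayloadHeader(low)
	c.SaveSignedExecutionPayloadHeader(high)

	got := c.SignedExecutionPayloadHeader(2, []byte("parent"), []byte("root"))
	fmt.Println(got.Message.Value) // 200
}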

View File

@@ -0,0 +1,243 @@
package cache
import (
"testing"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func Test_SaveSignedExecutionPayloadHeader(t *testing.T) {
t.Run("First header should be added to cache", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
require.Equal(t, 1, len(c.headers))
require.Equal(t, header, c.headers[1][0])
})
t.Run("Second header with higher slot should be added, and both slots should be in cache", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
require.Equal(t, 2, len(c.headers))
require.Equal(t, header1, c.headers[1][0])
require.Equal(t, header2, c.headers[2][0])
})
t.Run("Third header with higher slot should replace the oldest slot", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 100,
},
}
header3 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 3,
ParentBlockHash: []byte("parent3"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
c.SaveSignedExecutionPayloadHeader(header3)
require.Equal(t, 2, len(c.headers))
require.Equal(t, header2, c.headers[2][0])
require.Equal(t, header3, c.headers[3][0])
})
t.Run("Header with same slot but higher value should replace the existing one", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
require.Equal(t, 1, len(c.headers[2]))
require.Equal(t, header2, c.headers[2][0])
})
t.Run("Header with different parent block hash should be appended to the same slot", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
require.Equal(t, 2, len(c.headers[2]))
require.Equal(t, header1, c.headers[2][0])
require.Equal(t, header2, c.headers[2][1])
})
}
func TestSignedExecutionPayloadHeader(t *testing.T) {
t.Run("Return header when slot and parentBlockHash match", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
ParentBlockRoot: []byte("root1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
result := c.SignedExecutionPayloadHeader(1, []byte("parent1"), []byte("root1"))
require.NotNil(t, result)
require.Equal(t, header, result)
})
t.Run("Return nil when no matching slot and parentBlockHash", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
ParentBlockRoot: []byte("root1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
result := c.SignedExecutionPayloadHeader(2, []byte("parent2"), []byte("root1"))
require.IsNil(t, result)
})
t.Run("Return nil when no matching slot and parentBlockRoot", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
ParentBlockRoot: []byte("root1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
result := c.SignedExecutionPayloadHeader(2, []byte("parent1"), []byte("root2"))
require.IsNil(t, result)
})
t.Run("Return header when there are two slots in the cache and a match is found", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
// Check for the first header
result1 := c.SignedExecutionPayloadHeader(1, []byte("parent1"), []byte{})
require.NotNil(t, result1)
require.Equal(t, header1, result1)
// Check for the second header
result2 := c.SignedExecutionPayloadHeader(2, []byte("parent2"), []byte{})
require.NotNil(t, result2)
require.Equal(t, header2, result2)
})
t.Run("Return nil when slot is evicted from cache", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
header3 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 3,
ParentBlockHash: []byte("parent3"),
Value: 300,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
c.SaveSignedExecutionPayloadHeader(header3)
// The first slot should be evicted, so result should be nil
result := c.SignedExecutionPayloadHeader(1, []byte("parent1"), []byte{})
require.IsNil(t, result)
// The second slot should still be present
result = c.SignedExecutionPayloadHeader(2, []byte("parent2"), []byte{})
require.NotNil(t, result)
require.Equal(t, header2, result)
// The third slot should be present
result = c.SignedExecutionPayloadHeader(3, []byte("parent3"), []byte{})
require.NotNil(t, result)
require.Equal(t, header3, result)
})
}

View File

@@ -107,6 +107,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time/slots:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -7,12 +7,14 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
v "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -105,293 +107,162 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
}
func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
statePhase0, keysPhase0 := util.DeterministicGenesisState(t, 100)
stateAltair, keysAltair := util.DeterministicGenesisStateAltair(t, 100)
stateBellatrix, keysBellatrix := util.DeterministicGenesisStateBellatrix(t, 100)
stateCapella, keysCapella := util.DeterministicGenesisStateCapella(t, 100)
stateDeneb, keysDeneb := util.DeterministicGenesisStateDeneb(t, 100)
stateElectra, keysElectra := util.DeterministicGenesisStateElectra(t, 100)
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att1Phase0 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2Phase0 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashing{
{
Attestation_1: att1,
Attestation_2: att2,
},
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, beaconState.SetSlot(currentSlot))
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, uint64(31750000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}
func TestProcessAttesterSlashings_AppliesCorrectStatusAltair(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, 100)
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att1Electra := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2Electra := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashing{
slashingPhase0 := &ethpb.AttesterSlashing{
Attestation_1: att1Phase0,
Attestation_2: att2Phase0,
}
slashingElectra := &ethpb.AttesterSlashingElectra{
Attestation_1: att1Electra,
Attestation_2: att2Electra,
}
type testCase struct {
name string
st state.BeaconState
keys []bls.SecretKey
att1 ethpb.IndexedAtt
att2 ethpb.IndexedAtt
slashing ethpb.AttSlashing
slashedBalance uint64
}
testCases := []testCase{
{
Attestation_1: att1,
Attestation_2: att2,
name: "phase0",
st: statePhase0,
keys: keysPhase0,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31750000000,
},
{
name: "altair",
st: stateAltair,
keys: keysAltair,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31500000000,
},
{
name: "bellatrix",
st: stateBellatrix,
keys: keysBellatrix,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31000000000,
},
{
name: "capella",
st: stateCapella,
keys: keysCapella,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31000000000,
},
{
name: "deneb",
st: stateDeneb,
keys: keysDeneb,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31000000000,
},
{
name: "electra",
st: stateElectra,
keys: keysElectra,
att1: att1Electra,
att2: att2Electra,
slashing: slashingElectra,
slashedBalance: 31992187500,
},
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, beaconState.SetSlot(currentSlot))
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
for _, vv := range tc.st.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
},
}
domain, err := signing.Domain(tc.st.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, tc.st.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(tc.att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := tc.keys[0].Sign(signingRoot[:])
sig1 := tc.keys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
if tc.att1.Version() >= version.Electra {
tc.att1.(*ethpb.IndexedAttestationElectra).Signature = aggregateSig.Marshal()
} else {
tc.att1.(*ethpb.IndexedAttestation).Signature = aggregateSig.Marshal()
}
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
signingRoot, err = signing.ComputeSigningRoot(tc.att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = tc.keys[0].Sign(signingRoot[:])
sig1 = tc.keys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
if tc.att2.Version() >= version.Electra {
tc.att2.(*ethpb.IndexedAttestationElectra).Signature = aggregateSig.Marshal()
} else {
tc.att2.(*ethpb.IndexedAttestation).Signature = aggregateSig.Marshal()
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
newState, err := blocks.ProcessAttesterSlashings(context.Background(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != tc.st.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
tc.st.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, uint64(31500000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}
func TestProcessAttesterSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateBellatrix(t, 100)
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashing{
{
Attestation_1: att1,
Attestation_2: att2,
},
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, beaconState.SetSlot(currentSlot))
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, uint64(31000000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}
func TestProcessAttesterSlashings_AppliesCorrectStatusCapella(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateCapella(t, 100)
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashing{
{
Attestation_1: att1,
Attestation_2: att2,
},
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, beaconState.SetSlot(currentSlot))
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, uint64(31000000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
require.Equal(t, tc.slashedBalance, newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
})
}
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -222,6 +223,42 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
},
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateEPBS:
kzgs := make([][]byte, 0)
kzgRoot, err := ssz.KzgCommitmentsRoot(kzgs)
if err != nil {
return nil, err
}
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockEpbs{
Block: &ethpb.BeaconBlockEpbs{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyEpbs{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
SignedExecutionPayloadHeader: &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
BlobKzgCommitmentsRoot: kzgRoot[:],
},
Signature: make([]byte, 96),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
PayloadAttestations: make([]*ethpb.PayloadAttestation, 0),
},
},
Signature: params.BeaconConfig().EmptySignature[:],
})
default:
return nil, ErrUnrecognizedState
}

View File

@@ -59,6 +59,9 @@ func IsMergeTransitionComplete(st state.BeaconState) (bool, error) {
//
// return block.body.execution_payload != ExecutionPayload()
func IsExecutionBlock(body interfaces.ReadOnlyBeaconBlockBody) (bool, error) {
if body.Version() >= version.Capella {
return true, nil
}
if body == nil {
return false, errors.New("nil block body")
}
@@ -244,12 +247,47 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
// GetBlockPayloadHash returns the hash of the execution payload of the block
func GetBlockPayloadHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var payloadHash [32]byte
if IsPreBellatrixVersion(blk.Version()) {
return payloadHash, nil
if blk.Version() >= version.EPBS {
header, err := blk.Body().SignedExecutionPayloadHeader()
if err != nil {
return payloadHash, err
}
payload, err := header.Header()
if err != nil {
return payloadHash, err
}
return payload.BlockHash(), nil
}
payload, err := blk.Body().Execution()
if err != nil {
return payloadHash, err
if blk.Version() >= version.Bellatrix {
payload, err := blk.Body().Execution()
if err != nil {
return payloadHash, err
}
return bytesutil.ToBytes32(payload.BlockHash()), nil
}
return bytesutil.ToBytes32(payload.BlockHash()), nil
return payloadHash, nil
}
// GetBlockParentHash returns the hash of the parent execution payload
func GetBlockParentHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var parentHash [32]byte
if blk.Version() >= version.EPBS {
header, err := blk.Body().SignedExecutionPayloadHeader()
if err != nil {
return parentHash, err
}
payload, err := header.Header()
if err != nil {
return parentHash, err
}
return payload.ParentBlockHash(), nil
}
if blk.Version() >= version.Bellatrix {
payload, err := blk.Body().Execution()
if err != nil {
return parentHash, err
}
return bytesutil.ToBytes32(payload.ParentHash()), nil
}
return parentHash, nil
}
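
A minimal sketch of how the two helpers above are meant to pair up across forks; the function below is hypothetical, assuming the same package and imports as the file above, and is not part of this change set.
// linkPayloadToParent is a hypothetical helper: for two consecutive blocks
// that both carry payloads, the child's parent execution hash should equal
// the parent's payload hash, whether either block is pre- or post-ePBS.
func linkPayloadToParent(parent, child interfaces.ReadOnlyBeaconBlock) (bool, error) {
	parentPayloadHash, err := GetBlockPayloadHash(parent)
	if err != nil {
		return false, err
	}
	childParentHash, err := GetBlockParentHash(child)
	if err != nil {
		return false, err
	}
	return parentPayloadHash == childParentHash, nil
}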

View File

@@ -14,8 +14,8 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
@@ -118,14 +118,97 @@ func ValidateBLSToExecutionChange(st state.ReadOnlyBeaconState, signed *ethpb.Si
return val, nil
}
func checkWithdrawalsAgainstPayload(
executionData interfaces.ExecutionData,
numExpected int,
expectedRoot [32]byte,
) error {
var wdRoot [32]byte
if executionData.IsBlinded() {
r, err := executionData.WithdrawalsRoot()
if err != nil {
return errors.Wrap(err, "could not get withdrawals root")
}
copy(wdRoot[:], r)
} else {
wds, err := executionData.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals")
}
if len(wds) != numExpected {
return fmt.Errorf("execution payload header has %d withdrawals when %d were expected", len(wds), numExpected)
}
wdRoot, err = ssz.WithdrawalSliceRoot(wds, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return errors.Wrap(err, "could not get withdrawals root")
}
}
if expectedRoot != wdRoot {
return fmt.Errorf("expected withdrawals root %#x, got %#x", expectedRoot, wdRoot)
}
return nil
}
func processWithdrawalStateTransition(
st state.BeaconState,
expectedWithdrawals []*enginev1.Withdrawal,
partialWithdrawalsCount uint64,
) (err error) {
for _, withdrawal := range expectedWithdrawals {
err := helpers.DecreaseBalance(st, withdrawal.ValidatorIndex, withdrawal.Amount)
if err != nil {
return errors.Wrap(err, "could not decrease balance")
}
}
if st.Version() >= version.Electra {
if err := st.DequeuePendingPartialWithdrawals(partialWithdrawalsCount); err != nil {
return fmt.Errorf("unable to dequeue partial withdrawals from state: %w", err)
}
}
if len(expectedWithdrawals) > 0 {
if err := st.SetNextWithdrawalIndex(expectedWithdrawals[len(expectedWithdrawals)-1].Index + 1); err != nil {
return errors.Wrap(err, "could not set next withdrawal index")
}
}
var nextValidatorIndex primitives.ValidatorIndex
if uint64(len(expectedWithdrawals)) < params.BeaconConfig().MaxWithdrawalsPerPayload {
nextValidatorIndex, err = st.NextWithdrawalValidatorIndex()
if err != nil {
return errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1
if nextValidatorIndex == primitives.ValidatorIndex(st.NumValidators()) {
nextValidatorIndex = 0
}
}
if err := st.SetNextWithdrawalValidatorIndex(nextValidatorIndex); err != nil {
return errors.Wrap(err, "could not set next withdrawal validator index")
}
return nil
}
// ProcessWithdrawals processes the validator withdrawals from the provided execution payload
// into the beacon state.
//
// Spec pseudocode definition:
//
// def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
// def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
// if state.fork.current_version >= EIP7732_FORK_VERSION :
// if not is_parent_block_full(state): # [New in EIP-7732]
// return
//
// expected_withdrawals, processed_partial_withdrawals_count = get_expected_withdrawals(state) # [Modified in Electra:EIP7251]
// expected_withdrawals, partial_withdrawals_count = get_expected_withdrawals(state) # [Modified in Electra:EIP7251]
//
// if state.fork.current_version >= EIP7732_FORK_VERSION :
// state.latest_withdrawals_root = hash_tree_root(expected_withdrawals) # [New in EIP-7732]
// else :
// assert len(payload.withdrawals) == len(expected_withdrawals)
//
// assert len(payload.withdrawals) == len(expected_withdrawals)
//
@@ -152,76 +235,39 @@ func ValidateBLSToExecutionChange(st state.ReadOnlyBeaconState, signed *ethpb.Si
// next_validator_index = ValidatorIndex(next_index % len(state.validators))
// state.next_withdrawal_validator_index = next_validator_index
func ProcessWithdrawals(st state.BeaconState, executionData interfaces.ExecutionData) (state.BeaconState, error) {
expectedWithdrawals, processedPartialWithdrawalsCount, err := st.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get expected withdrawals")
if st.Version() >= version.EPBS {
IsParentBlockFull, err := st.IsParentBlockFull()
if err != nil {
return nil, errors.Wrap(err, "could not check if parent block is full")
}
if !IsParentBlockFull {
return st, nil
}
}
var wdRoot [32]byte
if executionData.IsBlinded() {
r, err := executionData.WithdrawalsRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
wdRoot = bytesutil.ToBytes32(r)
} else {
wds, err := executionData.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals")
}
if len(wds) != len(expectedWithdrawals) {
return nil, fmt.Errorf("execution payload header has %d withdrawals when %d were expected", len(wds), len(expectedWithdrawals))
}
wdRoot, err = ssz.WithdrawalSliceRoot(wds, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
expectedWithdrawals, partialWithdrawalsCount, err := st.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get expected withdrawals")
}
expectedRoot, err := ssz.WithdrawalSliceRoot(expectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get expected withdrawals root")
}
if expectedRoot != wdRoot {
return nil, fmt.Errorf("expected withdrawals root %#x, got %#x", expectedRoot, wdRoot)
}
for _, withdrawal := range expectedWithdrawals {
err := helpers.DecreaseBalance(st, withdrawal.ValidatorIndex, withdrawal.Amount)
if st.Version() >= version.EPBS {
err = st.SetLastWithdrawalsRoot(expectedRoot[:])
if err != nil {
return nil, errors.Wrap(err, "could not decrease balance")
return nil, errors.Wrap(err, "could not set withdrawals root")
}
}
if st.Version() >= version.Electra {
if err := st.DequeuePendingPartialWithdrawals(processedPartialWithdrawalsCount); err != nil {
return nil, fmt.Errorf("unable to dequeue partial withdrawals from state: %w", err)
}
}
if len(expectedWithdrawals) > 0 {
if err := st.SetNextWithdrawalIndex(expectedWithdrawals[len(expectedWithdrawals)-1].Index + 1); err != nil {
return nil, errors.Wrap(err, "could not set next withdrawal index")
}
}
var nextValidatorIndex primitives.ValidatorIndex
if uint64(len(expectedWithdrawals)) < params.BeaconConfig().MaxWithdrawalsPerPayload {
nextValidatorIndex, err = st.NextWithdrawalValidatorIndex()
if err != nil {
return nil, errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1
if nextValidatorIndex == primitives.ValidatorIndex(st.NumValidators()) {
nextValidatorIndex = 0
if err := checkWithdrawalsAgainstPayload(executionData, len(expectedWithdrawals), expectedRoot); err != nil {
return nil, err
}
}
if err := st.SetNextWithdrawalValidatorIndex(nextValidatorIndex); err != nil {
return nil, errors.Wrap(err, "could not set next withdrawal validator index")
if err := processWithdrawalStateTransition(st, expectedWithdrawals, partialWithdrawalsCount); err != nil {
return nil, err
}
return st, nil
}
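
To summarize the post-EIP-7732 flow assembled from the changes above: withdrawals are skipped entirely when the parent block was empty, the expected withdrawals root is recorded on the state instead of being checked against the payload, and the pre-ePBS payload check plus the shared state transition run otherwise. A compact sketch, assuming the same package and the helpers introduced above; this restates the behavior for readability and adds nothing new.
func processWithdrawalsSketch(st state.BeaconState, executionData interfaces.ExecutionData) (state.BeaconState, error) {
	// [New in EIP-7732] Nothing to withdraw against an empty parent payload.
	if st.Version() >= version.EPBS {
		full, err := st.IsParentBlockFull()
		if err != nil {
			return nil, err
		}
		if !full {
			return st, nil
		}
	}
	expected, partialCount, err := st.ExpectedWithdrawals()
	if err != nil {
		return nil, err
	}
	expectedRoot, err := ssz.WithdrawalSliceRoot(expected, fieldparams.MaxWithdrawalsPerPayload)
	if err != nil {
		return nil, err
	}
	if st.Version() >= version.EPBS {
		// [New in EIP-7732] Commit to the expected withdrawals; the payload is validated later.
		if err := st.SetLastWithdrawalsRoot(expectedRoot[:]); err != nil {
			return nil, err
		}
	} else if err := checkWithdrawalsAgainstPayload(executionData, len(expected), expectedRoot); err != nil {
		return nil, err
	}
	if err := processWithdrawalStateTransition(st, expected, partialCount); err != nil {
		return nil, err
	}
	return st, nil
}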

View File

@@ -22,6 +22,7 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -1167,13 +1168,407 @@ func TestProcessWithdrawals(t *testing.T) {
checkPostState(t, test.Control, post)
}
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = saved
})
}
})
}
}
func TestProcessWithdrawalsEPBS(t *testing.T) {
const (
currentEpoch = primitives.Epoch(10)
epochInFuture = primitives.Epoch(12)
epochInPast = primitives.Epoch(8)
numValidators = 128
notWithdrawableIndex = 127
notPartiallyWithdrawable = 126
maxSweep = uint64(80)
)
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
type args struct {
Name string
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
FullWithdrawalIndices []primitives.ValidatorIndex
PendingPartialWithdrawalIndices []primitives.ValidatorIndex
Withdrawals []*enginev1.Withdrawal
PendingPartialWithdrawals []*ethpb.PendingPartialWithdrawal // Electra
LatestBlockHash []byte // EIP-7732
}
type control struct {
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
Balances map[uint64]uint64
}
type Test struct {
Args args
Control control
}
executionAddress := func(i primitives.ValidatorIndex) []byte {
wc := make([]byte, 20)
wc[19] = byte(i)
return wc
}
withdrawalAmount := func(i primitives.ValidatorIndex) uint64 {
return maxEffectiveBalance + uint64(i)*100000
}
fullWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
Address: executionAddress(i),
Amount: withdrawalAmount(i),
}
}
PendingPartialWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
Address: executionAddress(i),
Amount: withdrawalAmount(i) - maxEffectiveBalance,
}
}
tests := []Test{
{
Args: args{
Name: "success no withdrawals",
NextWithdrawalValidatorIndex: 10,
NextWithdrawalIndex: 3,
},
Control: control{
NextWithdrawalValidatorIndex: 90,
NextWithdrawalIndex: 3,
},
},
{
Args: args{
Name: "success one full withdrawal",
NextWithdrawalIndex: 3,
NextWithdrawalValidatorIndex: 5,
FullWithdrawalIndices: []primitives.ValidatorIndex{70},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(70, 3),
},
},
Control: control{
NextWithdrawalValidatorIndex: 85,
NextWithdrawalIndex: 4,
Balances: map[uint64]uint64{70: 0},
},
},
{
Args: args{
Name: "success one partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 120,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 21),
},
},
Control: control{
NextWithdrawalValidatorIndex: 72,
NextWithdrawalIndex: 22,
Balances: map[uint64]uint64{7: maxEffectiveBalance},
},
},
{
Args: args{
Name: "success many full withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), fullWithdrawal(28, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{7: 0, 19: 0, 28: 0},
},
},
{
Args: args{
Name: "less than max sweep at end",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{80, 81, 82, 83},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(80, 22), fullWithdrawal(81, 23), fullWithdrawal(82, 24),
fullWithdrawal(83, 25),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 26,
Balances: map[uint64]uint64{80: 0, 81: 0, 82: 0, 83: 0},
},
},
{
Args: args{
Name: "less than max sweep and beginning",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{4, 5, 6},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(4, 22), fullWithdrawal(5, 23), fullWithdrawal(6, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{4: 0, 5: 0, 6: 0},
},
},
{
Args: args{
Name: "success many partial withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 22), PendingPartialWithdrawal(19, 23), PendingPartialWithdrawal(28, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{
7: maxEffectiveBalance,
19: maxEffectiveBalance,
28: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success many withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(89, 22), PendingPartialWithdrawal(1, 23), PendingPartialWithdrawal(2, 24),
fullWithdrawal(7, 25), PendingPartialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
},
Control: control{
NextWithdrawalValidatorIndex: 40,
NextWithdrawalIndex: 29,
Balances: map[uint64]uint64{
7: 0, 19: 0, 28: 0,
2: maxEffectiveBalance, 1: maxEffectiveBalance, 89: maxEffectiveBalance,
15: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success many withdrawals with pending partial withdrawals in state",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(89, 22), PendingPartialWithdrawal(1, 23), PendingPartialWithdrawal(2, 24),
fullWithdrawal(7, 25), PendingPartialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
PendingPartialWithdrawals: []*ethpb.PendingPartialWithdrawal{
{
Index: 11,
Amount: withdrawalAmount(11) - maxEffectiveBalance,
},
},
},
Control: control{
NextWithdrawalValidatorIndex: 40,
NextWithdrawalIndex: 29,
Balances: map[uint64]uint64{
7: 0, 19: 0, 28: 0,
2: maxEffectiveBalance, 1: maxEffectiveBalance, 89: maxEffectiveBalance,
15: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success more than max fully withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
FullWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(1, 22), fullWithdrawal(2, 23), fullWithdrawal(3, 24),
fullWithdrawal(4, 25), fullWithdrawal(5, 26), fullWithdrawal(6, 27),
fullWithdrawal(7, 28), fullWithdrawal(8, 29), fullWithdrawal(9, 30),
fullWithdrawal(21, 31), fullWithdrawal(22, 32), fullWithdrawal(23, 33),
fullWithdrawal(24, 34), fullWithdrawal(25, 35), fullWithdrawal(26, 36),
fullWithdrawal(27, 37),
},
},
Control: control{
NextWithdrawalValidatorIndex: 28,
NextWithdrawalIndex: 38,
Balances: map[uint64]uint64{
1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0,
21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0,
},
},
},
{
Args: args{
Name: "success more than max partially withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(1, 22), PendingPartialWithdrawal(2, 23), PendingPartialWithdrawal(3, 24),
PendingPartialWithdrawal(4, 25), PendingPartialWithdrawal(5, 26), PendingPartialWithdrawal(6, 27),
PendingPartialWithdrawal(7, 28), PendingPartialWithdrawal(8, 29), PendingPartialWithdrawal(9, 30),
PendingPartialWithdrawal(21, 31), PendingPartialWithdrawal(22, 32), PendingPartialWithdrawal(23, 33),
PendingPartialWithdrawal(24, 34), PendingPartialWithdrawal(25, 35), PendingPartialWithdrawal(26, 36),
PendingPartialWithdrawal(27, 37),
},
},
Control: control{
NextWithdrawalValidatorIndex: 28,
NextWithdrawalIndex: 38,
Balances: map[uint64]uint64{
1: maxEffectiveBalance,
2: maxEffectiveBalance,
3: maxEffectiveBalance,
4: maxEffectiveBalance,
5: maxEffectiveBalance,
6: maxEffectiveBalance,
7: maxEffectiveBalance,
8: maxEffectiveBalance,
9: maxEffectiveBalance,
21: maxEffectiveBalance,
22: maxEffectiveBalance,
23: maxEffectiveBalance,
24: maxEffectiveBalance,
25: maxEffectiveBalance,
26: maxEffectiveBalance,
27: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "Parent Node is not full",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), fullWithdrawal(28, 24),
},
LatestBlockHash: []byte{1, 2, 3},
},
},
}
checkPostState := func(t *testing.T, expected control, st state.BeaconState) {
l, err := st.NextWithdrawalValidatorIndex()
require.NoError(t, err)
require.Equal(t, expected.NextWithdrawalValidatorIndex, l)
n, err := st.NextWithdrawalIndex()
require.NoError(t, err)
require.Equal(t, expected.NextWithdrawalIndex, n)
balances := st.Balances()
for idx, bal := range expected.Balances {
require.Equal(t, bal, balances[idx])
}
}
prepareValidators := func(st state.BeaconState, arguments args) error {
validators := make([]*ethpb.Validator, numValidators)
if err := st.SetBalances(make([]uint64, numValidators)); err != nil {
return err
}
for i := range validators {
v := &ethpb.Validator{}
v.EffectiveBalance = maxEffectiveBalance
v.WithdrawableEpoch = epochInFuture
v.WithdrawalCredentials = make([]byte, 32)
v.WithdrawalCredentials[31] = byte(i)
if err := st.UpdateBalancesAtIndex(primitives.ValidatorIndex(i), v.EffectiveBalance-uint64(rand.Intn(1000))); err != nil {
return err
}
validators[i] = v
}
for _, idx := range arguments.FullWithdrawalIndices {
if idx != notWithdrawableIndex {
validators[idx].WithdrawableEpoch = epochInPast
}
if err := st.UpdateBalancesAtIndex(idx, withdrawalAmount(idx)); err != nil {
return err
}
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
}
for _, idx := range arguments.PendingPartialWithdrawalIndices {
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
if err := st.UpdateBalancesAtIndex(idx, withdrawalAmount(idx)); err != nil {
return err
}
}
return st.SetValidators(validators)
}
for _, test := range tests {
t.Run(test.Args.Name, func(t *testing.T) {
saved := params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = maxSweep
if test.Args.Withdrawals == nil {
test.Args.Withdrawals = make([]*enginev1.Withdrawal, 0)
}
if test.Args.FullWithdrawalIndices == nil {
test.Args.FullWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
if test.Args.PendingPartialWithdrawalIndices == nil {
test.Args.PendingPartialWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
slot, err := slots.EpochStart(currentEpoch)
require.NoError(t, err)
var st state.BeaconState
var p interfaces.ExecutionData
spb := &ethpb.BeaconStateEPBS{
Slot: slot,
NextWithdrawalValidatorIndex: test.Args.NextWithdrawalValidatorIndex,
NextWithdrawalIndex: test.Args.NextWithdrawalIndex,
PendingPartialWithdrawals: test.Args.PendingPartialWithdrawals,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderEPBS{
BlockHash: []byte{},
},
LatestBlockHash: test.Args.LatestBlockHash,
}
st, err = state_native.InitializeFromProtoUnsafeEpbs(spb)
require.NoError(t, err)
env := random.ExecutionPayloadEnvelope(t)
env.Payload.Withdrawals = test.Args.Withdrawals
wp, err := consensusblocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
p, err = wp.Execution()
require.NoError(t, err)
err = prepareValidators(st, test.Args)
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Args.Name == "Parent Node is not full" {
require.DeepEqual(t, post, st)
require.IsNil(t, err)
} else {
require.NoError(t, err)
checkPostState(t, test.Control, post)
}
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = saved
})
}
}
func TestProcessBLSToExecutionChanges(t *testing.T) {
spb := &ethpb.BeaconStateCapella{
Fork: &ethpb.Fork{

View File

@@ -10,7 +10,6 @@ go_library(
"effective_balance_updates.go",
"registry_updates.go",
"transition.go",
"transition_no_verify_sig.go",
"upgrade.go",
"validator.go",
"withdrawals.go",

View File

@@ -6,6 +6,7 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -129,6 +130,57 @@ func TestComputeConsolidationEpochAndUpdateChurn(t *testing.T) {
expectedEpoch: 16, // Flows into another epoch.
expectedConsolidationBalanceToConsume: 200000000000, // 200 ETH
},
{
name: "balance to consume is zero, consolidation balance at limit",
state: func(t *testing.T) state.BeaconState {
activeBal := 32000000000000000 // 32M ETH
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 16,
ConsolidationBalanceToConsume: 0,
Validators: createValidatorsWithTotalActiveBalance(primitives.Gwei(activeBal)),
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: helpers.ConsolidationChurnLimit(32000000000000000),
expectedEpoch: 17, // Flows into another epoch.
expectedConsolidationBalanceToConsume: 0,
},
{
name: "consolidation balance equals consolidation balance to consume",
state: func(t *testing.T) state.BeaconState {
activeBal := 32000000000000000 // 32M ETH
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 16,
ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(32000000000000000),
Validators: createValidatorsWithTotalActiveBalance(primitives.Gwei(activeBal)),
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: helpers.ConsolidationChurnLimit(32000000000000000),
expectedEpoch: 16,
expectedConsolidationBalanceToConsume: 0,
},
{
name: "consolidation balance exceeds limit by one",
state: func(t *testing.T) state.BeaconState {
activeBal := 32000000000000000 // 32M ETH
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 16,
ConsolidationBalanceToConsume: 0,
Validators: createValidatorsWithTotalActiveBalance(primitives.Gwei(activeBal)),
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: helpers.ConsolidationChurnLimit(32000000000000000) + 1,
expectedEpoch: 18, // Flows into another epoch.
expectedConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(32000000000000000) - 1,
},
}
for _, tt := range tests {

View File

@@ -185,6 +185,9 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
pcLimit := params.BeaconConfig().PendingConsolidationsLimit
for _, cr := range reqs {
if cr == nil {
return errors.New("nil consolidation request")
}
if ctx.Err() != nil {
return fmt.Errorf("cannot process consolidation requests: %w", ctx.Err())
}
@@ -262,7 +265,7 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
if !helpers.IsActiveValidator(srcV, curEpoch) || !helpers.IsActiveValidatorUsingTrie(tgtV, curEpoch) {
continue
}
// Neither validator are exiting.
// Neither validator is exiting.
if srcV.ExitEpoch != ffe || tgtV.ExitEpoch() != ffe {
continue
}

View File

@@ -209,7 +209,22 @@ func TestProcessConsolidationRequests(t *testing.T) {
state state.BeaconState
reqs []*enginev1.ConsolidationRequest
validate func(*testing.T, state.BeaconState)
wantErr bool
}{
{
name: "nil request",
state: func() state.BeaconState {
st := &eth.BeaconStateElectra{}
s, err := state_native.InitializeFromProtoElectra(st)
require.NoError(t, err)
return s
}(),
reqs: []*enginev1.ConsolidationRequest{nil},
validate: func(t *testing.T, st state.BeaconState) {
require.DeepEqual(t, st, st)
},
wantErr: true,
},
{
name: "one valid request",
state: func() state.BeaconState {
@@ -405,7 +420,13 @@ func TestProcessConsolidationRequests(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := electra.ProcessConsolidationRequests(context.TODO(), tt.state, tt.reqs)
require.NoError(t, err)
if (err != nil) != tt.wantErr {
t.Errorf("ProcessWithdrawalRequests() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !tt.wantErr {
require.NoError(t, err)
}
if tt.validate != nil {
tt.validate(t, tt.state)
}

View File

@@ -385,14 +385,8 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
return errors.Wrap(err, "batch signature verification failed")
}
pubKeyMap := make(map[[48]byte]struct{}, len(pendingDeposits))
// Process each deposit individually
for _, pendingDeposit := range pendingDeposits {
_, found := pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)]
if !found {
pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)] = struct{}{}
}
validSignature := allSignaturesVerified
// If batch verification failed, check the individual deposit signature
@@ -410,7 +404,8 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
// Add validator to the registry if the signature is valid
if validSignature {
if found {
_, has := state.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if has {
index, _ := state.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if err := helpers.IncreaseBalance(state, index, pendingDeposit.Amount); err != nil {
return errors.Wrap(err, "could not increase balance")
@@ -592,10 +587,10 @@ func processDepositRequest(beaconState state.BeaconState, request *enginev1.Depo
if err != nil {
return nil, errors.Wrap(err, "could not get deposit requests start index")
}
if request == nil {
return nil, errors.New("nil deposit request")
}
if requestsStartIndex == params.BeaconConfig().UnsetDepositRequestsStartIndex {
if request == nil {
return nil, errors.New("nil deposit request")
}
if err := beaconState.SetDepositRequestsStartIndex(request.Index); err != nil {
return nil, errors.Wrap(err, "could not set deposit requests start index")
}

View File

@@ -333,6 +333,7 @@ func TestProcessDepositRequests(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
sk, err := bls.RandKey()
require.NoError(t, err)
require.NoError(t, st.SetDepositRequestsStartIndex(1))
t.Run("empty requests continues", func(t *testing.T) {
newSt, err := electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{})

View File

@@ -38,6 +38,17 @@ func TestProcessWithdrawRequests(t *testing.T) {
wantFn func(t *testing.T, got state.BeaconState)
wantErr bool
}{
{
name: "nil request",
args: args{
st: func() state.BeaconState { return st }(),
wrs: []*enginev1.WithdrawalRequest{nil},
},
wantErr: true,
wantFn: func(t *testing.T, got state.BeaconState) {
require.DeepEqual(t, got, nil)
},
},
{
name: "happy path exit and withdrawal only",
args: args{

View File

@@ -0,0 +1,52 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"attestation.go",
"execution_payload_envelope.go",
"execution_payload_header.go",
"operations.go",
"payload_attestation.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"attestation_test.go",
"execution_payload_envelope_test.go",
],
deps = [
":go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
],
)

View File

@@ -0,0 +1,13 @@
package epbs
import (
"fmt"
)
// RemoveValidatorFlag clears the validator flag at the given bit position from an existing flag byte.
func RemoveValidatorFlag(flag, flagPosition uint8) (uint8, error) {
if flagPosition > 7 {
return flag, fmt.Errorf("flag position %d exceeds length", flagPosition)
}
return flag & ^(1 << flagPosition), nil
}
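
A minimal usage sketch of the helper above; the import path follows the BUILD file earlier in this change, while the program itself is hypothetical.
package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
)

func main() {
	flag := uint8(0b011) // bits 0 (source) and 1 (target) set
	flag, err := epbs.RemoveValidatorFlag(flag, 1) // clear the target bit
	if err != nil {
		panic(err)
	}
	fmt.Printf("%03b\n", flag) // prints 001: only the source bit remains
}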

View File

@@ -0,0 +1,93 @@
package epbs_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestValidatorFlag_Remove(t *testing.T) {
tests := []struct {
name string
add []uint8
remove []uint8
expectedTrue []uint8
expectedFalse []uint8
}{
{
name: "none",
add: []uint8{},
remove: []uint8{},
expectedTrue: []uint8{},
expectedFalse: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
{
name: "source",
add: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
remove: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
expectedTrue: []uint8{},
expectedFalse: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
{
name: "source, target",
add: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex},
remove: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
expectedTrue: []uint8{params.BeaconConfig().TimelyTargetFlagIndex},
expectedFalse: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
{
name: "source, target, head",
add: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
remove: []uint8{params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
expectedTrue: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
expectedFalse: []uint8{params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
}
var err error
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
flag := uint8(0)
// Add flags.
for _, flagPosition := range test.add {
flag, err = altair.AddValidatorFlag(flag, flagPosition)
require.NoError(t, err)
has, err := altair.HasValidatorFlag(flag, flagPosition)
require.NoError(t, err)
require.Equal(t, true, has)
}
// Remove flags.
for _, flagPosition := range test.remove {
flag, err = epbs.RemoveValidatorFlag(flag, flagPosition)
require.NoError(t, err)
}
// Check if flags are set correctly.
for _, flagPosition := range test.expectedTrue {
has, err := altair.HasValidatorFlag(flag, flagPosition)
require.NoError(t, err)
require.Equal(t, true, has)
}
for _, flagPosition := range test.expectedFalse {
has, err := altair.HasValidatorFlag(flag, flagPosition)
require.NoError(t, err)
require.Equal(t, false, has)
}
})
}
}
func TestValidatorFlag_Remove_ExceedsLength(t *testing.T) {
_, err := epbs.RemoveValidatorFlag(0, 8)
require.ErrorContains(t, "flag position 8 exceeds length", err)
}
func TestValidatorFlag_Remove_NotSet(t *testing.T) {
_, err := epbs.RemoveValidatorFlag(0, 1)
require.NoError(t, err)
}

View File

@@ -0,0 +1,126 @@
package epbs
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)
// ValidatePayloadStateTransition implements the process_execution_payload
// function from the ePBS (EIP-7732) specification.
func ValidatePayloadStateTransition(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
if err := UpdateHeaderAndVerify(ctx, preState, envelope); err != nil {
return err
}
committedHeader, err := preState.LatestExecutionPayloadHeaderEPBS()
if err != nil {
return err
}
if err := ValidateAgainstCommittedBid(committedHeader, envelope); err != nil {
return err
}
if err := ProcessPayloadStateTransition(ctx, preState, envelope); err != nil {
return err
}
return CheckPostStateRoot(ctx, preState, envelope)
}
func ProcessPayloadStateTransition(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
er := envelope.ExecutionRequests()
preState, err := electra.ProcessDepositRequests(ctx, preState, er.Deposits)
if err != nil {
return errors.Wrap(err, "could not process deposit receipts")
}
preState, err = electra.ProcessWithdrawalRequests(ctx, preState, er.Withdrawals)
if err != nil {
return errors.Wrap(err, "could not process ercution layer withdrawal requests")
}
if err := electra.ProcessConsolidationRequests(ctx, preState, er.Consolidations); err != nil {
return errors.Wrap(err, "could not process consolidation requests")
}
payload, err := envelope.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution payload")
}
if err := preState.SetLatestBlockHash(payload.BlockHash()); err != nil {
return err
}
return preState.SetLatestFullSlot(preState.Slot())
}
func UpdateHeaderAndVerify(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
blockHeader := preState.LatestBlockHeader()
if blockHeader == nil {
return errors.New("invalid nil latest block header")
}
if len(blockHeader.StateRoot) == 0 || [32]byte(blockHeader.StateRoot) == [32]byte{} {
prevStateRoot, err := preState.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not compute previous state root")
}
blockHeader.StateRoot = prevStateRoot[:]
if err := preState.SetLatestBlockHeader(blockHeader); err != nil {
return errors.Wrap(err, "could not set latest block header")
}
}
blockHeaderRoot, err := blockHeader.HashTreeRoot()
if err != nil {
return err
}
beaconBlockRoot := envelope.BeaconBlockRoot()
if blockHeaderRoot != beaconBlockRoot {
return fmt.Errorf("beacon block root does not match previous header, got: %#x wanted: %#x", beaconBlockRoot, blockHeaderRoot)
}
return nil
}
func ValidateAgainstCommittedBid(
committedHeader *enginev1.ExecutionPayloadHeaderEPBS,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
builderIndex := envelope.BuilderIndex()
if committedHeader.BuilderIndex != builderIndex {
return errors.New("builder index does not match committed header")
}
kzgRoot, err := envelope.BlobKzgCommitmentsRoot()
if err != nil {
return err
}
if [32]byte(committedHeader.BlobKzgCommitmentsRoot) != kzgRoot {
return errors.New("blob KZG commitments root does not match committed header")
}
return nil
}
func CheckPostStateRoot(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
stateRoot, err := preState.HashTreeRoot(ctx)
if err != nil {
return err
}
envelopeStateRoot := envelope.StateRoot()
if stateRoot != envelopeStateRoot {
return errors.New("state root mismatch")
}
return nil
}

View File

@@ -0,0 +1,110 @@
package epbs_test
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
)
func TestProcessPayloadStateTransition(t *testing.T) {
bh := [32]byte{'h'}
p := random.ExecutionPayloadEnvelope(t)
p.Payload.BlockHash = bh[:]
p.ExecutionRequests = &enginev1.ExecutionRequests{}
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
validators := make([]*ethpb.Validator, 0)
stpb := &ethpb.BeaconStateEPBS{Slot: 3, Validators: validators}
st, err := state_native.InitializeFromProtoUnsafeEpbs(stpb)
require.NoError(t, err)
ctx := context.Background()
lbh, err := st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, [32]byte{}, [32]byte(lbh))
require.NoError(t, epbs.ProcessPayloadStateTransition(ctx, st, e))
lbh, err = st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, bh, [32]byte(lbh))
lfs, err := st.LatestFullSlot()
require.NoError(t, err)
require.Equal(t, lfs, st.Slot())
}
func Test_validateAgainstHeader(t *testing.T) {
bh := [32]byte{'h'}
payload := &enginev1.ExecutionPayloadDeneb{BlockHash: bh[:]}
p := random.ExecutionPayloadEnvelope(t)
p.Payload = payload
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
stpb := &ethpb.BeaconStateEPBS{Slot: 3}
st, err := state_native.InitializeFromProtoUnsafeEpbs(stpb)
require.NoError(t, err)
ctx := context.Background()
require.ErrorContains(t, "invalid nil latest block header", epbs.UpdateHeaderAndVerify(ctx, st, e))
prest, _ := util.DeterministicGenesisStateEpbs(t, 64)
br := [32]byte{'r'}
p.BeaconBlockRoot = br[:]
require.ErrorContains(t, "beacon block root does not match previous header", epbs.UpdateHeaderAndVerify(ctx, prest, e))
header := prest.LatestBlockHeader()
require.NoError(t, err)
headerRoot, err := header.HashTreeRoot()
require.NoError(t, err)
p.BeaconBlockRoot = headerRoot[:]
require.NoError(t, epbs.UpdateHeaderAndVerify(ctx, prest, e))
}
func Test_validateAgainstCommittedBid(t *testing.T) {
payload := &enginev1.ExecutionPayloadDeneb{}
p := random.ExecutionPayloadEnvelope(t)
p.Payload = payload
p.BuilderIndex = 1
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
h := &enginev1.ExecutionPayloadHeaderEPBS{}
require.ErrorContains(t, "builder index does not match committed header", epbs.ValidateAgainstCommittedBid(h, e))
h.BuilderIndex = 1
p.BlobKzgCommitments = make([][]byte, 6)
for i := range p.BlobKzgCommitments {
p.BlobKzgCommitments[i] = make([]byte, 48)
}
h.BlobKzgCommitmentsRoot = make([]byte, 32)
require.ErrorContains(t, "blob KZG commitments root does not match committed header", epbs.ValidateAgainstCommittedBid(h, e))
root, err := e.BlobKzgCommitmentsRoot()
require.NoError(t, err)
h.BlobKzgCommitmentsRoot = root[:]
require.NoError(t, epbs.ValidateAgainstCommittedBid(h, e))
}
func TestCheckPostStateRoot(t *testing.T) {
payload := &enginev1.ExecutionPayloadDeneb{}
p := random.ExecutionPayloadEnvelope(t)
p.Payload = payload
p.BuilderIndex = 1
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
ctx := context.Background()
st, _ := util.DeterministicGenesisStateEpbs(t, 64)
p.StateRoot = make([]byte, 32)
require.ErrorContains(t, "state root mismatch", epbs.CheckPostStateRoot(ctx, st, e))
root, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
p.StateRoot = root[:]
require.NoError(t, epbs.CheckPostStateRoot(ctx, st, e))
}

View File

@@ -0,0 +1,50 @@
package epbs
import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/network/forks"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// ValidatePayloadHeaderSignature validates the signature of the execution payload header.
func ValidatePayloadHeaderSignature(st state.ReadOnlyBeaconState, sh interfaces.ROSignedExecutionPayloadHeader) error {
h, err := sh.Header()
if err != nil {
return err
}
pubkey := st.PubkeyAtIndex(h.BuilderIndex())
pub, err := bls.PublicKeyFromBytes(pubkey[:])
if err != nil {
return err
}
s := sh.Signature()
sig, err := bls.SignatureFromBytes(s[:])
if err != nil {
return err
}
currentEpoch := slots.ToEpoch(h.Slot())
f, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(f, currentEpoch, params.BeaconConfig().DomainBeaconBuilder, st.GenesisValidatorsRoot())
if err != nil {
return err
}
root, err := sh.SigningRoot(domain)
if err != nil {
return err
}
if !sig.Verify(pub, root[:]) {
return signing.ErrSigFailedToVerify
}
return nil
}
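
For orientation, a builder-side counterpart would sign the same domain-separated root that the verifier above reconstructs. The sketch below is hypothetical, not part of this change set; it assumes enginev1 is imported as elsewhere in this change and that the header proto exposes a Slot field and implements HashTreeRoot.
func signPayloadHeaderSketch(st state.ReadOnlyBeaconState, msg *enginev1.ExecutionPayloadHeaderEPBS, sk bls.SecretKey) ([]byte, error) {
	epoch := slots.ToEpoch(msg.Slot)
	f, err := forks.Fork(epoch)
	if err != nil {
		return nil, err
	}
	// Same domain as the verification path above.
	domain, err := signing.Domain(f, epoch, params.BeaconConfig().DomainBeaconBuilder, st.GenesisValidatorsRoot())
	if err != nil {
		return nil, err
	}
	root, err := signing.ComputeSigningRoot(msg, domain)
	if err != nil {
		return nil, err
	}
	return sk.Sign(root[:]).Marshal(), nil
}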

View File

@@ -1,4 +1,4 @@
package electra
package epbs
import (
"context"
@@ -6,9 +6,11 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
v "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
var (
@@ -62,11 +64,11 @@ func ProcessOperations(
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
st, err = ProcessAttestationsNoVerifySignature(ctx, st, block)
st, err = electra.ProcessAttestationsNoVerifySignature(ctx, st, block)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attestation")
}
if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
if _, err := electra.ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits())
@@ -77,20 +79,27 @@ func ProcessOperations(
if err != nil {
return nil, errors.Wrap(err, "could not process bls-to-execution changes")
}
// new in ePBS
if block.Version() >= version.EPBS {
if err := ProcessPayloadAttestations(st, bb); err != nil {
return nil, err
}
return st, nil
}
// new in electra
requests, err := bb.ExecutionRequests()
if err != nil {
return nil, errors.Wrap(err, "could not get execution requests")
}
st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
st, err = electra.ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit requests")
}
st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
st, err = electra.ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawal requests")
}
if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
if err := electra.ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, fmt.Errorf("could not process consolidation requests: %w", err)
}
return st, nil

View File

@@ -0,0 +1,125 @@
package epbs
import (
"bytes"
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
func ProcessPayloadAttestations(state state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) error {
atts, err := body.PayloadAttestations()
if err != nil {
return err
}
if len(atts) == 0 {
return nil
}
ctx, cancel := context.WithTimeout(context.Background(), 12*time.Second)
defer cancel()
lbh := state.LatestBlockHeader()
proposerIndex := lbh.ProposerIndex
var participation []byte
if state.Slot()%params.BeaconConfig().SlotsPerEpoch == 0 {
participation, err = state.PreviousEpochParticipation()
} else {
participation, err = state.CurrentEpochParticipation()
}
if err != nil {
return err
}
totalBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
return err
}
baseReward, err := altair.BaseRewardWithTotalBalance(state, proposerIndex, totalBalance)
if err != nil {
return err
}
lfs, err := state.LatestFullSlot()
if err != nil {
return err
}
cfg := params.BeaconConfig()
sourceFlagIndex := cfg.TimelySourceFlagIndex
targetFlagIndex := cfg.TimelyTargetFlagIndex
headFlagIndex := cfg.TimelyHeadFlagIndex
penaltyNumerator := uint64(0)
rewardNumerator := uint64(0)
rewardDenominator := (cfg.WeightDenominator - cfg.ProposerWeight) * cfg.WeightDenominator / cfg.ProposerWeight
for _, att := range atts {
data := att.Data
if !bytes.Equal(data.BeaconBlockRoot, lbh.ParentRoot) {
return errors.New("invalid beacon block root in payload attestation data")
}
if data.Slot+1 != state.Slot() {
return errors.New("invalid data slot")
}
indexed, err := helpers.GetIndexedPayloadAttestation(ctx, state, data.Slot, att)
if err != nil {
return err
}
valid, err := helpers.IsValidIndexedPayloadAttestation(state, indexed)
if err != nil {
return err
}
if !valid {
return errors.New("invalid payload attestation")
}
payloadWasPresent := data.Slot == lfs
votedPresent := data.PayloadStatus == primitives.PAYLOAD_PRESENT
if votedPresent != payloadWasPresent {
for _, idx := range indexed.GetAttestingIndices() {
flags := participation[idx]
has, err := altair.HasValidatorFlag(flags, targetFlagIndex)
if err != nil {
return err
}
if has {
penaltyNumerator += baseReward * cfg.TimelyTargetWeight
}
has, err = altair.HasValidatorFlag(flags, sourceFlagIndex)
if err != nil {
return err
}
if has {
penaltyNumerator += baseReward * cfg.TimelySourceWeight
}
has, err = altair.HasValidatorFlag(flags, headFlagIndex)
if err != nil {
return err
}
if has {
penaltyNumerator += baseReward * cfg.TimelyHeadWeight
}
participation[idx] = 0
}
} else {
for _, idx := range indexed.GetAttestingIndices() {
participation[idx] = (1 << headFlagIndex) | (1 << sourceFlagIndex) | (1 << targetFlagIndex)
rewardNumerator += baseReward * (cfg.TimelyHeadWeight + cfg.TimelySourceWeight + cfg.TimelyTargetWeight)
}
}
}
if penaltyNumerator > 0 {
if err := helpers.DecreaseBalance(state, proposerIndex, penaltyNumerator/rewardDenominator); err != nil {
return err
}
}
if rewardNumerator > 0 {
if err := helpers.IncreaseBalance(state, proposerIndex, rewardNumerator/rewardDenominator); err != nil {
return err
}
}
return nil
}
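
For intuition on the proposer reward scaling above: with the mainnet Altair weights (PROPOSER_WEIGHT = 8, WEIGHT_DENOMINATOR = 64, assumed here rather than restated in this change), the denominator evaluates to 448, so each unit of matching attester weight pays the proposer baseReward/448. A standalone check:
package main

import "fmt"

func main() {
	const (
		proposerWeight    = uint64(8)  // assumed mainnet PROPOSER_WEIGHT
		weightDenominator = uint64(64) // assumed mainnet WEIGHT_DENOMINATOR
	)
	// rewardDenominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR / PROPOSER_WEIGHT
	rewardDenominator := (weightDenominator - proposerWeight) * weightDenominator / proposerWeight
	fmt.Println(rewardDenominator) // 448
}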

View File

@@ -7,7 +7,9 @@ go_library(
"beacon_committee.go",
"block.go",
"genesis.go",
"legacy.go",
"metrics.go",
"payload_attestation.go",
"randao.go",
"rewards_penalties.go",
"shuffle.go",
@@ -20,11 +22,13 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/epbs:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//container/trie:go_default_library",
@@ -52,6 +56,9 @@ go_test(
"attestation_test.go",
"beacon_committee_test.go",
"block_test.go",
"exports_test.go",
"legacy_test.go",
"payload_attestation_test.go",
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
"randao_test.go",
@@ -68,23 +75,30 @@ go_test(
tags = ["CI_race_detection"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/epbs:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
],
)

View File

@@ -213,6 +213,7 @@ type CommitteeAssignment struct {
Committee []primitives.ValidatorIndex
AttesterSlot primitives.Slot
CommitteeIndex primitives.CommitteeIndex
PtcSlot primitives.Slot
}
// verifyAssignmentEpoch verifies if the given epoch is valid for assignment based on the provided state.
@@ -294,7 +295,7 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
if err := verifyAssignmentEpoch(epoch, state); err != nil {
return nil, err
}
startSlot, err := slots.EpochStart(epoch)
slot, err := slots.EpochStart(epoch)
if err != nil {
return nil, err
}
@@ -303,14 +304,17 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
vals[v] = struct{}{}
}
assignments := make(map[primitives.ValidatorIndex]*CommitteeAssignment)
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not compute beacon committees")
}
ptcPerSlot, ptcMembersPerCommittee := PtcAllocation(len(committees))
// Compute committee assignments for each slot in the epoch.
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not compute beacon committees")
}
endSlot := slot + params.BeaconConfig().SlotsPerEpoch
for {
for j, committee := range committees {
for _, vIndex := range committee {
for i, vIndex := range committee {
if _, ok := vals[vIndex]; !ok { // Skip if the validator is not in the provided validators slice.
continue
}
@@ -320,8 +324,19 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
assignments[vIndex].Committee = committee
assignments[vIndex].AttesterSlot = slot
assignments[vIndex].CommitteeIndex = primitives.CommitteeIndex(j)
if uint64(j) < ptcPerSlot && uint64(i) < ptcMembersPerCommittee {
assignments[vIndex].PtcSlot = slot
}
}
}
slot++
if slot == endSlot {
break
}
committees, err = BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not compute beacon committees")
}
}
return assignments, nil
}

View File

@@ -3,6 +3,7 @@ package helpers_test
import (
"context"
"fmt"
"slices"
"strconv"
"testing"
@@ -10,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/slice"
@@ -729,15 +731,26 @@ func TestCommitteeIndices(t *testing.T) {
assert.DeepEqual(t, []primitives.CommitteeIndex{0, 1, 3}, indices)
}
func TestAttestationCommittees(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().TargetCommitteeSize))
func TestCommitteeAssignments_PTC(t *testing.T) {
helpers.ClearCache()
// Create 10 committees. Total 40960 validators.
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
validators := make([]*ethpb.Validator, validatorCount)
validatorIndices := make([]primitives.ValidatorIndex, validatorCount)
for i := 0; i < len(validators); i++ {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
PublicKey: k,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
validatorIndices[i] = primitives.ValidatorIndex(i)
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
@@ -761,6 +774,31 @@ func TestAttestationCommittees(t *testing.T) {
assert.Equal(t, params.BeaconConfig().TargetCommitteeSize, uint64(len(committees[0])))
assert.Equal(t, params.BeaconConfig().TargetCommitteeSize, uint64(len(committees[1])))
})
as, err := helpers.CommitteeAssignments(context.Background(), state, 1, validatorIndices)
require.NoError(t, err)
// Capture every slot and the validator indices that belong to its PTC in a map for later verification.
slotValidatorMap := make(map[primitives.Slot][]primitives.ValidatorIndex)
for i, a := range as {
slotValidatorMap[a.PtcSlot] = append(slotValidatorMap[a.PtcSlot], i)
}
// Verify that every slot has the correct number of PTC members.
for s, v := range slotValidatorMap {
if s == 0 {
continue
}
// Make sure each PTC captured in the map has the correct size.
require.Equal(t, len(v), field_params.PTCSize)
// Get the actual PTC from the beacon state using the helper function
ptc, err := helpers.GetPayloadTimelinessCommittee(context.Background(), state, s)
require.NoError(t, err)
for _, index := range ptc {
i := slices.Index(v, index)
require.NotEqual(t, -1, i) // Every PTC member must appear in the assignment map.
}
}
}
func TestBeaconCommittees(t *testing.T) {

View File

@@ -0,0 +1,12 @@
package helpers
var (
ErrNilMessage = errNilMessage
ErrNilData = errNilData
ErrNilBeaconBlockRoot = errNilBeaconBlockRoot
ErrNilPayloadAttestation = errNilPayloadAttestation
ErrNilSignature = errNilSignature
ErrNilAggregationBits = errNilAggregationBits
ErrPreEPBSState = errPreEPBSState
ErrCommitteeOverflow = errCommitteeOverflow
)

View File

@@ -0,0 +1,20 @@
package helpers
import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// DepositRequestsStarted determines if the deposit requests have started.
func DepositRequestsStarted(beaconState state.BeaconState) bool {
if beaconState.Version() < version.Electra {
return false
}
requestsStartIndex, err := beaconState.DepositRequestsStartIndex()
if err != nil {
return false
}
return beaconState.Eth1DepositIndex() == requestsStartIndex
}
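
As a usage note, a caller deciding whether legacy Eth1 deposits still need to be packed into blocks could gate on this predicate; the function below is a hypothetical illustration in the same package, not code from this change set.
// shouldIncludeEth1Deposits is a hypothetical caller: once the EIP-6110
// deposit requests have taken over (Eth1DepositIndex has caught up to the
// deposit requests start index), legacy Eth1 deposits no longer need to be
// included in blocks.
func shouldIncludeEth1Deposits(st state.BeaconState) bool {
	return !DepositRequestsStarted(st)
}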

View File

@@ -0,0 +1,33 @@
package helpers_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/stretchr/testify/require"
)
// TestDepositRequestHaveStarted contains several test cases for helpers.DepositRequestsStarted.
func TestDepositRequestHaveStarted(t *testing.T) {
t.Run("Version below Electra returns false", func(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
result := helpers.DepositRequestsStarted(st)
require.False(t, result)
})
t.Run("Version is Electra or higher, no error, but Eth1DepositIndex != requestsStartIndex returns false", func(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
require.NoError(t, st.SetEth1DepositIndex(1))
result := helpers.DepositRequestsStarted(st)
require.False(t, result)
})
t.Run("Version is Electra or higher, no error, and Eth1DepositIndex == requestsStartIndex returns true", func(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
require.NoError(t, st.SetEth1DepositIndex(33))
require.NoError(t, st.SetDepositRequestsStartIndex(33))
result := helpers.DepositRequestsStarted(st)
require.True(t, result)
})
}

View File

@@ -0,0 +1,296 @@
package helpers
import (
"context"
"slices"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/epbs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/math"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
var (
errNilMessage = errors.New("nil PayloadAttestationMessage")
errNilData = errors.New("nil PayloadAttestationData")
errNilBeaconBlockRoot = errors.New("nil BeaconBlockRoot")
errNilPayloadAttestation = errors.New("nil PayloadAttestation")
errNilSignature = errors.New("nil Signature")
errNilAggregationBits = errors.New("nil AggregationBits")
errPreEPBSState = errors.New("beacon state pre ePBS fork")
errCommitteeOverflow = errors.New("beacon committee of insufficient size")
)
// ValidateNilPayloadAttestationData checks if any composite field of the
// payload attestation data is nil
func ValidateNilPayloadAttestationData(data *eth.PayloadAttestationData) error {
if data == nil {
return errNilData
}
if data.BeaconBlockRoot == nil {
return errNilBeaconBlockRoot
}
return nil
}
// ValidateNilPayloadAttestationMessage checks if any composite field of the
// payload attestation message is nil
func ValidateNilPayloadAttestationMessage(att *eth.PayloadAttestationMessage) error {
if att == nil {
return errNilMessage
}
if att.Signature == nil {
return errNilSignature
}
return ValidateNilPayloadAttestationData(att.Data)
}
// ValidateNilPayloadAttestation checks if any composite field of the
// payload attestation is nil
func ValidateNilPayloadAttestation(att *eth.PayloadAttestation) error {
if att == nil {
return errNilPayloadAttestation
}
if att.AggregationBits == nil {
return errNilAggregationBits
}
if att.Signature == nil {
return errNilSignature
}
return ValidateNilPayloadAttestationData(att.Data)
}
// InPayloadTimelinessCommittee returns whether the given index belongs to the
// PTC computed from the passed state.
func InPayloadTimelinessCommittee(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot, idx primitives.ValidatorIndex) (bool, error) {
ptc, err := GetPayloadTimelinessCommittee(ctx, state, slot)
if err != nil {
return false, err
}
for _, i := range ptc {
if i == idx {
return true, nil
}
}
return false, nil
}
// GetPayloadTimelinessCommittee returns the PTC for the given slot, computed from the passed state as in the
// spec function `get_ptc`.
func GetPayloadTimelinessCommittee(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (indices []primitives.ValidatorIndex, err error) {
if state.Version() < version.EPBS {
return nil, errPreEPBSState
}
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not get beacon committees")
}
committeesPerSlot, membersPerCommittee := PtcAllocation(len(committees))
for i, committee := range committees {
if uint64(i) >= committeesPerSlot {
return
}
if uint64(len(committee)) < membersPerCommittee {
return nil, errCommitteeOverflow
}
indices = append(indices, committee[:membersPerCommittee]...)
}
return
}
// PtcAllocation returns:
// 1. The number of beacon committees in a slot that the PTC borrows members from.
// 2. The number of validators that the PTC borrows from each of those committees.
func PtcAllocation(slotCommittees int) (committeesPerSlot, membersPerCommittee uint64) {
committeesPerSlot = math.LargestPowerOfTwo(math.Min(uint64(slotCommittees), fieldparams.PTCSize))
membersPerCommittee = fieldparams.PTCSize / committeesPerSlot
return
}
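// Editor's note (illustrative, not part of this change): assuming the mainnet
// PTCSize of 512, the allocation for a slot with ten beacon committees works
// out to
//
//	committeesPerSlot, membersPerCommittee := PtcAllocation(10)
//	// committeesPerSlot == 8     largest power of two <= min(10, 512)
//	// membersPerCommittee == 64  512 / 8
//
// so get_ptc concatenates the first 64 members of each of the first 8
// committees, which is what the PTC tests further down assert.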
// GetPayloadAttestingIndices returns the set of attester indices corresponding to the given PayloadAttestation.
//
// Spec pseudocode definition:
//
// def get_payload_attesting_indices(state: BeaconState, slot: Slot,
// payload_attestation: PayloadAttestation) -> Set[ValidatorIndex]:
// """
// Return the set of attesting indices corresponding to ``payload_attestation``.
// """
// ptc = get_ptc(state, slot)
// return set(index for i, index in enumerate(ptc) if payload_attestation.aggregation_bits[i])
func GetPayloadAttestingIndices(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot, att *eth.PayloadAttestation) (indices []primitives.ValidatorIndex, err error) {
if state.Version() < version.EPBS {
return nil, errPreEPBSState
}
ptc, err := GetPayloadTimelinessCommittee(ctx, state, slot)
if err != nil {
return nil, err
}
for i, validatorIndex := range ptc {
if att.AggregationBits.BitAt(uint64(i)) {
indices = append(indices, validatorIndex)
}
}
return
}
// GetIndexedPayloadAttestation replaces a PayloadAttestation's AggregationBits with sorted AttestingIndices and returns an IndexedPayloadAttestation.
//
// Spec pseudocode definition:
//
// def get_indexed_payload_attestation(state: BeaconState, slot: Slot,
// payload_attestation: PayloadAttestation) -> IndexedPayloadAttestation:
// """
// Return the indexed payload attestation corresponding to ``payload_attestation``.
// """
// attesting_indices = get_payload_attesting_indices(state, slot, payload_attestation)
//
// return IndexedPayloadAttestation(
// attesting_indices=sorted(attesting_indices),
// data=payload_attestation.data,
// signature=payload_attestation.signature,
// )
func GetIndexedPayloadAttestation(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot, att *eth.PayloadAttestation) (*epbs.IndexedPayloadAttestation, error) {
if state.Version() < version.EPBS {
return nil, errPreEPBSState
}
attestingIndices, err := GetPayloadAttestingIndices(ctx, state, slot, att)
if err != nil {
return nil, err
}
slices.Sort(attestingIndices)
return &epbs.IndexedPayloadAttestation{
AttestingIndices: attestingIndices,
Data: att.Data,
Signature: att.Signature,
}, nil
}
// IsValidIndexedPayloadAttestation validates the given IndexedPayloadAttestation.
//
// Spec pseudocode definition:
//
// def is_valid_indexed_payload_attestation(
// state: BeaconState,
// indexed_payload_attestation: IndexedPayloadAttestation) -> bool:
// """
// Check if ``indexed_payload_attestation`` is not empty, has sorted and unique indices and has
// a valid aggregate signature.
// """
// # Verify the data is valid
// if indexed_payload_attestation.data.payload_status >= PAYLOAD_INVALID_STATUS:
// return False
//
// # Verify indices are sorted and unique
// indices = indexed_payload_attestation.attesting_indices
// if len(indices) == 0 or not indices == sorted(set(indices)):
// return False
//
// # Verify aggregate signature
// pubkeys = [state.validators[i].pubkey for i in indices]
// domain = get_domain(state, DOMAIN_PTC_ATTESTER, None)
// signing_root = compute_signing_root(indexed_payload_attestation.data, domain)
// return bls.FastAggregateVerify(pubkeys, signing_root, indexed_payload_attestation.signature)
func IsValidIndexedPayloadAttestation(state state.ReadOnlyBeaconState, att *epbs.IndexedPayloadAttestation) (bool, error) {
if state.Version() < version.EPBS {
return false, errPreEPBSState
}
// Verify the data is valid.
if att.Data.PayloadStatus >= primitives.PAYLOAD_INVALID_STATUS {
return false, nil
}
// Verify indices are sorted and unique.
// (Checked without sorting in place, so the attestation itself is not mutated.)
indices := att.AttestingIndices
if len(indices) == 0 || !slices.IsSorted(indices) {
return false, nil
}
for i := 1; i < len(indices); i++ {
if indices[i] == indices[i-1] {
return false, nil
}
}
// Verify aggregate signature.
publicKeys := make([]bls.PublicKey, len(indices))
for i, index := range indices {
validator, err := state.ValidatorAtIndexReadOnly(index)
if err != nil {
return false, err
}
publicKeyBytes := validator.PublicKey()
publicKey, err := bls.PublicKeyFromBytes(publicKeyBytes[:])
if err != nil {
return false, err
}
publicKeys[i] = publicKey
}
domain, err := signing.Domain(
state.Fork(),
slots.ToEpoch(state.Slot()),
params.BeaconConfig().DomainPTCAttester,
state.GenesisValidatorsRoot(),
)
if err != nil {
return false, err
}
signingRoot, err := signing.ComputeSigningRoot(att.Data, domain)
if err != nil {
return false, err
}
signature, err := bls.SignatureFromBytes(att.Signature)
if err != nil {
return false, err
}
return signature.FastAggregateVerify(publicKeys, signingRoot), nil
}
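// Editor's note (illustrative): with the sorted-and-unique check above,
// attesting indices such as [3, 7, 9] proceed to signature verification,
// while [] (empty), [7, 3, 9] (unsorted) and [3, 7, 7] (duplicate) are all
// rejected with (false, nil) before any BLS work is done.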
// ValidatePayloadAttestationMessageSignature verifies the signature of a
// payload attestation message.
func ValidatePayloadAttestationMessageSignature(ctx context.Context, st state.ReadOnlyBeaconState, msg *eth.PayloadAttestationMessage) error {
if err := ValidateNilPayloadAttestationMessage(msg); err != nil {
return err
}
val, err := st.ValidatorAtIndex(msg.ValidatorIndex)
if err != nil {
return err
}
pub, err := bls.PublicKeyFromBytes(val.PublicKey)
if err != nil {
return err
}
sig, err := bls.SignatureFromBytes(msg.Signature)
if err != nil {
return err
}
currentEpoch := slots.ToEpoch(st.Slot())
domain, err := signing.Domain(st.Fork(), currentEpoch, params.BeaconConfig().DomainPTCAttester, st.GenesisValidatorsRoot())
if err != nil {
return err
}
root, err := signing.ComputeSigningRoot(msg.Data, domain)
if err != nil {
return err
}
if !sig.Verify(pub, root[:]) {
return signing.ErrSigFailedToVerify
}
return nil
}
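A minimal sketch of how these helpers compose when validating an incoming payload attestation message against the current state (editor's illustration only: validatePTCMessage is a hypothetical wrapper, not part of this change, and it assumes the message targets the state's current slot):
func validatePTCMessage(ctx context.Context, st state.ReadOnlyBeaconState, msg *eth.PayloadAttestationMessage) error {
// Reject messages with nil composite fields before touching the state.
if err := ValidateNilPayloadAttestationMessage(msg); err != nil {
return err
}
// The sender must be a member of the PTC for the slot being attested.
ok, err := InPayloadTimelinessCommittee(ctx, st, st.Slot(), msg.ValidatorIndex)
if err != nil {
return err
}
if !ok {
return errors.New("validator is not in the PTC for this slot")
}
// Finally verify the BLS signature over the PTC attester domain.
return ValidatePayloadAttestationMessageSignature(ctx, st, msg)
}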

View File

@@ -0,0 +1,363 @@
package helpers_test
import (
"context"
"slices"
"strconv"
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/epbs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/rand"
"github.com/prysmaticlabs/prysm/v5/math"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func TestValidateNilPayloadAttestation(t *testing.T) {
require.ErrorIs(t, helpers.ErrNilData, helpers.ValidateNilPayloadAttestationData(nil))
data := &eth.PayloadAttestationData{}
require.ErrorIs(t, helpers.ErrNilBeaconBlockRoot, helpers.ValidateNilPayloadAttestationData(data))
data.BeaconBlockRoot = make([]byte, 32)
require.NoError(t, helpers.ValidateNilPayloadAttestationData(data))
require.ErrorIs(t, helpers.ErrNilMessage, helpers.ValidateNilPayloadAttestationMessage(nil))
message := &eth.PayloadAttestationMessage{}
require.ErrorIs(t, helpers.ErrNilSignature, helpers.ValidateNilPayloadAttestationMessage(message))
message.Signature = make([]byte, 96)
require.ErrorIs(t, helpers.ErrNilData, helpers.ValidateNilPayloadAttestationMessage(message))
message.Data = data
require.NoError(t, helpers.ValidateNilPayloadAttestationMessage(message))
require.ErrorIs(t, helpers.ErrNilPayloadAttestation, helpers.ValidateNilPayloadAttestation(nil))
att := &eth.PayloadAttestation{}
require.ErrorIs(t, helpers.ErrNilAggregationBits, helpers.ValidateNilPayloadAttestation(att))
att.AggregationBits = bitfield.NewBitvector512()
require.ErrorIs(t, helpers.ErrNilSignature, helpers.ValidateNilPayloadAttestation(att))
att.Signature = message.Signature
require.ErrorIs(t, helpers.ErrNilData, helpers.ValidateNilPayloadAttestation(att))
att.Data = data
require.NoError(t, helpers.ValidateNilPayloadAttestation(att))
}
func TestGetPayloadTimelinessCommittee(t *testing.T) {
helpers.ClearCache()
// Create 10 committees
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: k,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
state, err := state_native.InitializeFromProtoEpbs(random.BeaconState(t))
require.NoError(t, err)
require.NoError(t, state.SetValidators(validators))
require.NoError(t, state.SetSlot(200))
ctx := context.Background()
indices, err := helpers.BeaconCommitteeFromState(ctx, state, state.Slot(), 1)
require.NoError(t, err)
require.Equal(t, 128, len(indices))
epoch := slots.ToEpoch(state.Slot())
activeCount, err := helpers.ActiveValidatorCount(ctx, state, epoch)
require.NoError(t, err)
require.Equal(t, uint64(40960), activeCount)
computedCommitteeCount := helpers.SlotCommitteeCount(activeCount)
require.Equal(t, committeeCount, computedCommitteeCount)
committeesPerSlot := math.LargestPowerOfTwo(math.Min(committeeCount, fieldparams.PTCSize))
require.Equal(t, uint64(8), committeesPerSlot)
ptc, err := helpers.GetPayloadTimelinessCommittee(ctx, state, state.Slot())
require.NoError(t, err)
require.Equal(t, fieldparams.PTCSize, len(ptc))
committee1, err := helpers.BeaconCommitteeFromState(ctx, state, state.Slot(), 0)
require.NoError(t, err)
require.DeepEqual(t, committee1[:64], ptc[:64])
}
func Test_PtcAllocation(t *testing.T) {
tests := []struct {
committeeCount int
memberPerCommittee uint64
committeesPerSlot uint64
}{
{1, 512, 1},
{4, 128, 4},
{128, 4, 128},
{512, 1, 512},
{1024, 1, 512},
}
for _, test := range tests {
committeesPerSlot, memberPerCommittee := helpers.PtcAllocation(test.committeeCount)
if memberPerCommittee != test.memberPerCommittee {
t.Errorf("memberPerCommittee(%d) = %d; expected %d", test.committeeCount, memberPerCommittee, test.memberPerCommittee)
}
if committeesPerSlot != test.committeesPerSlot {
t.Errorf("committeesPerSlot(%d) = %d; expected %d", test.committeeCount, committeesPerSlot, test.committeesPerSlot)
}
}
}
func TestGetPayloadAttestingIndices(t *testing.T) {
helpers.ClearCache()
// Create 10 committees. Total 40960 validators.
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
pubkey := make([]byte, 48)
copy(pubkey, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: pubkey,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
// Create a beacon state.
state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
// Get PTC.
ptc, err := helpers.GetPayloadTimelinessCommittee(context.Background(), state, state.Slot())
require.NoError(t, err)
require.Equal(t, fieldparams.PTCSize, len(ptc))
// Generate random indices. PTC members at the corresponding indices are considered to have attested.
randGen := rand.NewDeterministicGenerator()
attesterCount := randGen.Intn(fieldparams.PTCSize) + 1
indices := randGen.Perm(fieldparams.PTCSize)[:attesterCount]
slices.Sort(indices)
require.Equal(t, attesterCount, len(indices))
// Create a PayloadAttestation with AggregationBits set true at the indices.
aggregationBits := bitfield.NewBitvector512()
for _, index := range indices {
aggregationBits.SetBitAt(uint64(index), true)
}
payloadAttestation := &eth.PayloadAttestation{
AggregationBits: aggregationBits,
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
}
// Get attesting indices.
attesters, err := helpers.GetPayloadAttestingIndices(context.Background(), state, state.Slot(), payloadAttestation)
require.NoError(t, err)
require.Equal(t, len(indices), len(attesters))
// Check that each attester equals the PTC member at the corresponding index.
for i, index := range indices {
require.Equal(t, attesters[i], ptc[index])
}
}
func TestGetIndexedPayloadAttestation(t *testing.T) {
helpers.ClearCache()
// Create 10 committees. Total 40960 validators.
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
publicKey := make([]byte, 48)
copy(publicKey, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: publicKey,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
// Create a beacon state.
state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
// Get PTC.
ptc, err := helpers.GetPayloadTimelinessCommittee(context.Background(), state, state.Slot())
require.NoError(t, err)
require.Equal(t, fieldparams.PTCSize, len(ptc))
// Generate random indices. PTC members at the corresponding indices are considered to have attested.
randGen := rand.NewDeterministicGenerator()
attesterCount := randGen.Intn(fieldparams.PTCSize) + 1
indices := randGen.Perm(fieldparams.PTCSize)[:attesterCount]
slices.Sort(indices)
require.Equal(t, attesterCount, len(indices))
// Create a PayloadAttestation with AggregationBits set true at the indices.
aggregationBits := bitfield.NewBitvector512()
for _, index := range indices {
aggregationBits.SetBitAt(uint64(index), true)
}
payloadAttestation := &eth.PayloadAttestation{
AggregationBits: aggregationBits,
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
}
// Get attesting indices.
ctx := context.Background()
attesters, err := helpers.GetPayloadAttestingIndices(ctx, state, state.Slot(), payloadAttestation)
require.NoError(t, err)
require.Equal(t, len(indices), len(attesters))
// Get an IndexedPayloadAttestation.
indexedPayloadAttestation, err := helpers.GetIndexedPayloadAttestation(ctx, state, state.Slot(), payloadAttestation)
require.NoError(t, err)
require.Equal(t, len(indices), len(indexedPayloadAttestation.AttestingIndices))
require.DeepEqual(t, payloadAttestation.Data, indexedPayloadAttestation.Data)
require.DeepEqual(t, payloadAttestation.Signature, indexedPayloadAttestation.Signature)
// Check if the attesting indices are the same.
slices.Sort(attesters) // GetIndexedPayloadAttestation sorts attesting indices.
require.DeepEqual(t, attesters, indexedPayloadAttestation.AttestingIndices)
}
func TestIsValidIndexedPayloadAttestation(t *testing.T) {
helpers.ClearCache()
// Create validators.
validatorCount := uint64(350)
validators := make([]*ethpb.Validator, validatorCount)
_, secretKeys, err := util.DeterministicDepositsAndKeys(validatorCount)
require.NoError(t, err)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
PublicKey: secretKeys[i].PublicKey().Marshal(),
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
// Create a beacon state.
state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
Validators: validators,
Fork: &ethpb.Fork{
Epoch: 0,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
// Define test cases.
tests := []struct {
attestation *epbs.IndexedPayloadAttestation
}{
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{1},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{13, 19},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{123, 234, 345},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{38, 46, 54, 62, 70, 78, 86, 194},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{5},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
}
// Run test cases.
for _, test := range tests {
signatures := make([]bls.Signature, len(test.attestation.AttestingIndices))
for i, index := range test.attestation.AttestingIndices {
signedBytes, err := signing.ComputeDomainAndSign(
state,
slots.ToEpoch(test.attestation.Data.Slot),
test.attestation.Data,
params.BeaconConfig().DomainPTCAttester,
secretKeys[index],
)
require.NoError(t, err)
signature, err := bls.SignatureFromBytes(signedBytes)
require.NoError(t, err)
signatures[i] = signature
}
aggregatedSignature := bls.AggregateSignatures(signatures)
test.attestation.Signature = aggregatedSignature.Marshal()
isValid, err := helpers.IsValidIndexedPayloadAttestation(state, test.attestation)
require.NoError(t, err)
require.Equal(t, true, isValid)
}
}

View File

@@ -22,7 +22,7 @@ import (
func BalanceChurnLimit(activeBalance primitives.Gwei) primitives.Gwei {
churn := max(
params.BeaconConfig().MinPerEpochChurnLimitElectra,
(uint64(activeBalance) / params.BeaconConfig().ChurnLimitQuotient),
uint64(activeBalance)/params.BeaconConfig().ChurnLimitQuotient,
)
return primitives.Gwei(churn - churn%params.BeaconConfig().EffectiveBalanceIncrement)
}
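As a quick sanity check on this expression, a worked example assuming mainnet Electra values (MinPerEpochChurnLimitElectra = 128 ETH, ChurnLimitQuotient = 65536, EffectiveBalanceIncrement = 1 ETH, all in Gwei):
// activeBalance = 10,000,000 ETH = 10^16 Gwei
// 10^16 / 65536                  = 152,587,890,625 Gwei (~152.59 ETH)
// max(128,000,000,000, above)    = 152,587,890,625 Gwei
// rounded down to a 1 ETH multiple -> 152,000,000,000 Gwei, i.e. 152 ETH of churn per epoch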

View File

@@ -393,9 +393,9 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
effectiveBal := v.EffectiveBalance()
if bState.Version() >= version.Electra {
binary.LittleEndian.PutUint64(seedBuffer[len(seed):], i/16)
randomByte := hashFunc(seedBuffer)
randomBytes := hashFunc(seedBuffer)
offset := (i % 16) * 2
randomValue := uint64(randomByte[offset]) | uint64(randomByte[offset+1])<<8
randomValue := uint64(randomBytes[offset]) | uint64(randomBytes[offset+1])<<8
if effectiveBal*fieldparams.MaxRandomValueElectra >= beaconConfig.MaxEffectiveBalanceElectra*randomValue {
return candidateIndex, nil
@@ -579,7 +579,7 @@ func IsPartiallyWithdrawableValidator(val state.ReadOnlyValidator, balance uint6
// """
// Check if ``validator`` is partially withdrawable.
// """
// max_effective_balance = get_validator_max_effective_balance(validator)
// max_effective_balance = get_max_effective_balance(validator)
// has_max_effective_balance = validator.effective_balance == max_effective_balance # [Modified in Electra:EIP7251]
// has_excess_balance = balance > max_effective_balance # [Modified in Electra:EIP7251]
// return (
@@ -619,7 +619,7 @@ func isPartiallyWithdrawableValidatorCapella(val state.ReadOnlyValidator, balanc
//
// Spec definition:
//
// def get_validator_max_effective_balance(validator: Validator) -> Gwei:
// def get_max_effective_balance(validator: Validator) -> Gwei:
// """
// Get max effective balance for ``validator``.
// """

View File

@@ -9,6 +9,7 @@ go_library(
"state-bellatrix.go",
"trailing_slot_state_cache.go",
"transition.go",
"transition_epbs.go",
"transition_no_verify_sig.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition",
@@ -20,6 +21,7 @@ go_library(
"//beacon-chain/core/capella:go_default_library",
"//beacon-chain/core/deneb:go_default_library",
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/epbs:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/execution:go_default_library",
@@ -46,6 +48,7 @@ go_library(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",

View File

@@ -403,11 +403,15 @@ func VerifyOperationLengths(_ context.Context, state state.BeaconState, b interf
)
}
if uint64(len(body.AttesterSlashings())) > params.BeaconConfig().MaxAttesterSlashings {
maxSlashings := params.BeaconConfig().MaxAttesterSlashings
if body.Version() >= version.Electra {
maxSlashings = params.BeaconConfig().MaxAttesterSlashingsElectra
}
if uint64(len(body.AttesterSlashings())) > maxSlashings {
return nil, fmt.Errorf(
"number of attester slashings (%d) in block body exceeds allowed threshold of %d",
len(body.AttesterSlashings()),
params.BeaconConfig().MaxAttesterSlashings,
maxSlashings,
)
}

View File

@@ -0,0 +1,109 @@
package transition
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func processExecution(state state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) (err error) {
if body.Version() >= version.EPBS {
state, err = blocks.ProcessWithdrawals(state, nil)
if err != nil {
return errors.Wrap(err, "could not process withdrawals")
}
return processExecutionPayloadHeader(state, body)
}
enabled, err := blocks.IsExecutionEnabled(state, body)
if err != nil {
return errors.Wrap(err, "could not check if execution is enabled")
}
if !enabled {
return nil
}
executionData, err := body.Execution()
if err != nil {
return err
}
if state.Version() >= version.Capella {
state, err = blocks.ProcessWithdrawals(state, executionData)
if err != nil {
return errors.Wrap(err, "could not process withdrawals")
}
}
if err := blocks.ProcessPayload(state, body); err != nil {
return errors.Wrap(err, "could not process execution data")
}
return nil
}
// processExecutionPayloadHeader verifies the header signature itself, since the
// signed header does not necessarily come from the block proposer.
func processExecutionPayloadHeader(state state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) (err error) {
sh, err := body.SignedExecutionPayloadHeader()
if err != nil {
return err
}
header, err := sh.Header()
if err != nil {
return err
}
if err := epbs.ValidatePayloadHeaderSignature(state, sh); err != nil {
return err
}
builderIndex := header.BuilderIndex()
builder, err := state.ValidatorAtIndex(builderIndex)
if err != nil {
return err
}
epoch := slots.ToEpoch(state.Slot())
if builder.ActivationEpoch > epoch || epoch >= builder.ExitEpoch {
return errors.New("builder is not active")
}
if builder.Slashed {
return errors.New("builder is slashed")
}
amount := header.Value()
builderBalance, err := state.BalanceAtIndex(builderIndex)
if err != nil {
return err
}
if amount > primitives.Gwei(builderBalance) {
return errors.New("builder has insufficient balance")
}
// state.Slot == block.Slot because of process_slot
if header.Slot() != state.Slot() {
return errors.New("incorrect header slot")
}
// The state's latest block header was just set by process_block_header, so its
// parent root is this block's parent root.
blockHeader := state.LatestBlockHeader()
if header.ParentBlockRoot() != [32]byte(blockHeader.ParentRoot) {
return errors.New("incorrect parent block root")
}
lbh, err := state.LatestBlockHash()
if err != nil {
return err
}
if header.ParentBlockHash() != [32]byte(lbh) {
return errors.New("incorrect latest block hash")
}
if err := state.UpdateBalancesAtIndex(builderIndex, builderBalance-uint64(amount)); err != nil {
return err
}
if err := helpers.IncreaseBalance(state, blockHeader.ProposerIndex, uint64(amount)); err != nil {
return err
}
headerEPBS, ok := header.Proto().(*enginev1.ExecutionPayloadHeaderEPBS)
if !ok {
return errors.New("not an ePBS execution payload header")
}
return state.SetLatestExecutionPayloadHeaderEPBS(headerEPBS)
}

View File

@@ -8,7 +8,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
b "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition/interop"
v "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
@@ -272,7 +272,7 @@ func ProcessOperationsNoVerifyAttsSigs(
return nil, err
}
} else {
state, err = electra.ProcessOperations(ctx, state, beaconBlock)
state, err = epbs.ProcessOperations(ctx, state, beaconBlock)
if err != nil {
return nil, err
}
@@ -318,26 +318,9 @@ func ProcessBlockForStateRoot(
return nil, errors.Wrap(err, "could not process block header")
}
enabled, err := b.IsExecutionEnabled(state, blk.Body())
if err != nil {
return nil, errors.Wrap(err, "could not check if execution is enabled")
if err := processExecution(state, blk.Body()); err != nil {
return nil, errors.Wrap(err, "could not process execution")
}
if enabled {
executionData, err := blk.Body().Execution()
if err != nil {
return nil, err
}
if state.Version() >= version.Capella {
state, err = b.ProcessWithdrawals(state, executionData)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawals")
}
}
if err = b.ProcessPayload(state, blk.Body()); err != nil {
return nil, errors.Wrap(err, "could not process execution data")
}
}
randaoReveal := signed.Block().Body().RandaoReveal()
state, err = b.ProcessRandaoNoVerify(state, randaoReveal[:])
if err != nil {

View File

@@ -437,6 +437,25 @@ func TestProcessBlock_OverMaxAttesterSlashings(t *testing.T) {
assert.ErrorContains(t, want, err)
}
func TestProcessBlock_OverMaxAttesterSlashingsElectra(t *testing.T) {
maxSlashings := params.BeaconConfig().MaxAttesterSlashingsElectra
b := &ethpb.SignedBeaconBlockElectra{
Block: &ethpb.BeaconBlockElectra{
Body: &ethpb.BeaconBlockBodyElectra{
AttesterSlashings: make([]*ethpb.AttesterSlashingElectra, maxSlashings+1),
},
},
}
want := fmt.Sprintf("number of attester slashings (%d) in block body exceeds allowed threshold of %d",
len(b.Block.Body.AttesterSlashings), params.BeaconConfig().MaxAttesterSlashingsElectra)
s, err := state_native.InitializeFromProtoUnsafeElectra(&ethpb.BeaconStateElectra{})
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = transition.VerifyOperationLengths(context.Background(), s, wsb.Block())
assert.ErrorContains(t, want, err)
}
func TestProcessBlock_OverMaxAttestations(t *testing.T) {
b := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{

View File

@@ -75,7 +75,7 @@ func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primiti
// Compute exit queue epoch.
if s.Version() < version.Electra {
// Relevant spec code from deneb:
// Relevant spec code from phase0:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])

View File

@@ -18,6 +18,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//monitoring/backup:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
],

View File

@@ -16,6 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/backup"
"github.com/prysmaticlabs/prysm/v5/proto/dbval"
engine "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -23,11 +24,13 @@ import (
type ReadOnlyDatabase interface {
// Block related methods.
Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error)
SignedExecutionPayloadHeader(ctx context.Context, blockRoot [32]byte) (interfaces.ROSignedExecutionPayloadHeader, error)
Blocks(ctx context.Context, f *filters.QueryFilter) ([]interfaces.ReadOnlySignedBeaconBlock, [][32]byte, error)
BlockRoots(ctx context.Context, f *filters.QueryFilter) ([][32]byte, error)
BlocksBySlot(ctx context.Context, slot primitives.Slot) ([]interfaces.ReadOnlySignedBeaconBlock, error)
BlockRootsBySlot(ctx context.Context, slot primitives.Slot) (bool, [][32]byte, error)
HasBlock(ctx context.Context, blockRoot [32]byte) bool
HasBlindPayloadEnvelope(ctx context.Context, hash [32]byte) bool
GenesisBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error)
GenesisBlockRoot(ctx context.Context) ([32]byte, error)
IsFinalizedBlock(ctx context.Context, blockRoot [32]byte) bool
@@ -53,6 +56,7 @@ type ReadOnlyDatabase interface {
DepositContractAddress(ctx context.Context) ([]byte, error)
// ExecutionChainData operations.
ExecutionChainData(ctx context.Context) (*ethpb.ETH1ChainData, error)
SignedBlindPayloadEnvelope(ctx context.Context, blockHash []byte) (*engine.SignedBlindPayloadEnvelope, error)
// Fee recipients operations.
FeeRecipientByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (common.Address, error)
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
@@ -91,6 +95,7 @@ type NoHeadAccessDatabase interface {
SaveDepositContractAddress(ctx context.Context, addr common.Address) error
// SaveExecutionChainData operations.
SaveExecutionChainData(ctx context.Context, data *ethpb.ETH1ChainData) error
SaveBlindPayloadEnvelope(ctx context.Context, envelope interfaces.ROSignedExecutionPayloadEnvelope) error
// Run any required database migrations.
RunMigrations(ctx context.Context) error
// Fee recipients operations.
@@ -101,6 +106,7 @@ type NoHeadAccessDatabase interface {
SaveLightClientBootstrap(ctx context.Context, blockRoot []byte, bootstrap interfaces.LightClientBootstrap) error
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
DeleteHistoricalDataBeforeSlot(ctx context.Context, slot primitives.Slot) error
}
// HeadAccessDatabase defines a struct with access to reading chain head data.

View File

@@ -6,10 +6,12 @@ go_library(
"archived_point.go",
"backfill.go",
"backup.go",
"blind_payload_envelope.go",
"blocks.go",
"checkpoint.go",
"deposit_contract.go",
"encoding.go",
"epbs.go",
"error.go",
"execution_chain.go",
"finalized_block_roots.go",
@@ -55,6 +57,7 @@ go_library(
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time:go_default_library",
@@ -71,6 +74,7 @@ go_library(
"@com_github_schollz_progressbar_v3//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_etcd_go_bbolt//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -81,10 +85,12 @@ go_test(
"archived_point_test.go",
"backfill_test.go",
"backup_test.go",
"blind_payload_envelope_test.go",
"blocks_test.go",
"checkpoint_test.go",
"deposit_contract_test.go",
"encoding_test.go",
"epbs_test.go",
"execution_chain_test.go",
"finalized_block_roots_test.go",
"genesis_test.go",
@@ -124,6 +130,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_golang_snappy//:go_default_library",

View File

@@ -0,0 +1,76 @@
package kv
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
engine "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
// SaveBlindPayloadEnvelope saves the blinded form of a signed execution payload envelope to the database.
func (s *Store) SaveBlindPayloadEnvelope(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlindPayloadEnvelope")
defer span.End()
pb := signed.Proto()
if pb == nil {
return errors.New("nil payload envelope")
}
env, ok := pb.(*engine.SignedExecutionPayloadEnvelope)
if !ok {
return errors.New("invalid payload envelope")
}
r := env.Message.Payload.BlockHash
blind := env.Blind()
if blind == nil {
return errors.New("nil blind payload envelope")
}
enc, err := encode(ctx, blind)
if err != nil {
return err
}
err = s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(executionPayloadEnvelopeBucket)
return bucket.Put(r, enc)
})
return err
}
// SignedBlindPayloadEnvelope retrieves a signed blind payload envelope from the database by its payload block hash.
func (s *Store) SignedBlindPayloadEnvelope(ctx context.Context, blockHash []byte) (*engine.SignedBlindPayloadEnvelope, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SignedBlindPayloadEnvelope")
defer span.End()
env := &engine.SignedBlindPayloadEnvelope{}
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(executionPayloadEnvelopeBucket)
enc := bkt.Get(blockHash)
if enc == nil {
return ErrNotFound
}
return decode(ctx, enc, env)
})
return env, err
}
// HasBlindPayloadEnvelope checks whether a blind payload envelope for the given block hash exists in the database.
func (s *Store) HasBlindPayloadEnvelope(ctx context.Context, hash [32]byte) bool {
_, span := trace.StartSpan(ctx, "BeaconDB.HasBlindPayloadEnvelope")
defer span.End()
if v, ok := s.blockCache.Get(string(hash[:])); v != nil && ok {
return true
}
exists := false
if err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(executionPayloadEnvelopeBucket)
exists = bkt.Get(hash[:]) != nil
return nil
}); err != nil { // This view never returns an error, but we handle it anyway for sanity.
panic(err)
}
return exists
}

View File

@@ -0,0 +1,26 @@
package kv
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
)
func TestStore_SignedBlindPayloadEnvelope(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
_, err := db.SignedBlindPayloadEnvelope(ctx, []byte("test"))
require.ErrorIs(t, err, ErrNotFound)
env := random.SignedExecutionPayloadEnvelope(t)
e, err := blocks.WrappedROSignedExecutionPayloadEnvelope(env)
require.NoError(t, err)
err = db.SaveBlindPayloadEnvelope(ctx, e)
require.NoError(t, err)
got, err := db.SignedBlindPayloadEnvelope(ctx, env.Message.Payload.BlockHash)
require.NoError(t, err)
require.DeepEqual(t, got, env.Blind())
}

View File

@@ -227,10 +227,7 @@ func (s *Store) DeleteBlock(ctx context.Context, root [32]byte) error {
return ErrDeleteJustifiedAndFinalized
}
if err := tx.Bucket(blocksBucket).Delete(root[:]); err != nil {
return err
}
if err := tx.Bucket(blockParentRootIndicesBucket).Delete(root[:]); err != nil {
if err := s.deleteBlock(tx, root[:]); err != nil {
return err
}
s.blockCache.Del(string(root[:]))
@@ -238,6 +235,89 @@ func (s *Store) DeleteBlock(ctx context.Context, root [32]byte) error {
})
}
// DeleteHistoricalDataBeforeSlot deletes all blocks and states before the given slot.
// This function deletes data from the following buckets:
// - blocksBucket
// - blockParentRootIndicesBucket
// - finalizedBlockRootsIndexBucket
// - stateBucket
// - stateSummaryBucket
// - blockRootValidatorHashesBucket
// - blockSlotIndicesBucket
// - stateSlotIndicesBucket
func (s *Store) DeleteHistoricalDataBeforeSlot(ctx context.Context, cutoffSlot primitives.Slot) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.DeleteHistoricalDataBeforeSlot")
defer span.End()
// Collect slot/root pairs in a read-only transaction; the deletions happen below in a single update transaction.
var (
roots [][]byte
slts []primitives.Slot
)
err := s.db.View(func(tx *bolt.Tx) error {
var err error
roots, slts, err = blockRootsBySlotRange(ctx, tx.Bucket(blockSlotIndicesBucket), primitives.Slot(0), cutoffSlot, nil, nil, nil)
if err != nil {
return errors.Wrap(err, "could not retrieve block roots")
}
return nil
})
if err != nil {
return errors.Wrap(err, "could not retrieve block roots and slots")
}
// Perform all deletions in a single transaction for atomicity
return s.db.Update(func(tx *bolt.Tx) error {
for _, root := range roots {
// Delete block
if err = s.deleteBlock(tx, root); err != nil {
return err
}
// Delete finalized block roots index
if err = tx.Bucket(finalizedBlockRootsIndexBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete finalized block root index")
}
// Delete state
if err = tx.Bucket(stateBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete state")
}
// Delete state summary
if err = tx.Bucket(stateSummaryBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete state summary")
}
// Delete validator entries
if err = s.deleteValidatorHashes(tx, root); err != nil {
return errors.Wrap(err, "could not delete validators")
}
}
for _, slot := range slts {
// Delete slot indices
if err = tx.Bucket(blockSlotIndicesBucket).Delete(bytesutil.SlotToBytesBigEndian(slot)); err != nil {
return errors.Wrap(err, "could not delete block slot index")
}
if err = tx.Bucket(stateSlotIndicesBucket).Delete(bytesutil.SlotToBytesBigEndian(slot)); err != nil {
return errors.Wrap(err, "could not delete state slot index")
}
}
// Clear the caches only after everything has been deleted from the buckets,
// so a failed or rolled-back transaction does not leave the caches out of sync with the database.
for _, root := range roots {
// Delete block from cache
s.blockCache.Del(string(root))
// Delete state summary from cache
s.stateSummaryCache.delete([32]byte(root))
}
return nil
})
}
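A minimal sketch of how a pruning routine might drive this method (editor's illustration: pruneBefore and retentionEpochs are hypothetical, not part of this change):
// pruneBefore keeps only the most recent retentionEpochs of history.
func pruneBefore(ctx context.Context, s *Store, headSlot primitives.Slot, retentionEpochs primitives.Epoch) error {
retentionSlots := primitives.Slot(retentionEpochs) * params.BeaconConfig().SlotsPerEpoch
if headSlot <= retentionSlots {
return nil // nothing old enough to prune yet
}
return s.DeleteHistoricalDataBeforeSlot(ctx, headSlot-retentionSlots)
}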
// SaveBlock to the db.
func (s *Store) SaveBlock(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlock")
@@ -609,7 +689,7 @@ func blockRootsByFilter(ctx context.Context, tx *bolt.Tx, f *filters.QueryFilter
// We retrieve block roots that match a filter criteria of slot ranges, if specified.
filtersMap := f.Filters()
rootsBySlotRange, err := blockRootsBySlotRange(
rootsBySlotRange, _, err := blockRootsBySlotRange(
ctx,
tx.Bucket(blockSlotIndicesBucket),
filtersMap[filters.StartSlot],
@@ -627,6 +707,7 @@ func blockRootsByFilter(ctx context.Context, tx *bolt.Tx, f *filters.QueryFilter
// that list of roots to lookup the block. These block will
// meet the filter criteria.
indices := lookupValuesForIndices(ctx, indicesByBucket, tx)
keys := rootsBySlotRange
if len(indices) > 0 {
// If we have found indices that meet the filter criteria, and there are also
@@ -653,13 +734,13 @@ func blockRootsBySlotRange(
ctx context.Context,
bkt *bolt.Bucket,
startSlotEncoded, endSlotEncoded, startEpochEncoded, endEpochEncoded, slotStepEncoded interface{},
) ([][]byte, error) {
) ([][]byte, []primitives.Slot, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.blockRootsBySlotRange")
defer span.End()
// Return nothing when all slot parameters are missing
if startSlotEncoded == nil && endSlotEncoded == nil && startEpochEncoded == nil && endEpochEncoded == nil {
return [][]byte{}, nil
return [][]byte{}, nil, nil
}
var startSlot, endSlot primitives.Slot
@@ -680,11 +761,11 @@ func blockRootsBySlotRange(
if startEpochOk && endEpochOk {
startSlot, err = slots.EpochStart(startEpoch)
if err != nil {
return nil, err
return nil, nil, err
}
endSlot, err = slots.EpochStart(endEpoch)
if err != nil {
return nil, err
return nil, nil, err
}
endSlot = endSlot + params.BeaconConfig().SlotsPerEpoch - 1
}
@@ -695,14 +776,15 @@ func blockRootsBySlotRange(
return key != nil && bytes.Compare(key, max) <= 0
}
if endSlot < startSlot {
return nil, errInvalidSlotRange
return nil, nil, errInvalidSlotRange
}
rootsRange := endSlot.SubSlot(startSlot).Div(step)
roots := make([][]byte, 0, rootsRange)
var slts []primitives.Slot
c := bkt.Cursor()
for k, v := c.Seek(min); conditional(k, max); k, v = c.Next() {
slot := bytesutil.BytesToSlotBigEndian(k)
if step > 1 {
slot := bytesutil.BytesToSlotBigEndian(k)
if slot.SubSlot(startSlot).Mod(step) != 0 {
continue
}
@@ -713,8 +795,9 @@ func blockRootsBySlotRange(
splitRoots = append(splitRoots, v[i:i+32])
}
roots = append(roots, splitRoots...)
slts = append(slts, slot)
}
return roots, nil
return roots, slts, nil
}
// blockRootsBySlot retrieves the block roots by slot
@@ -813,9 +896,9 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
if err := rawBlock.UnmarshalSSZ(enc[len(denebBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Deneb block")
}
case hasElectraKey(enc):
case HasElectraKey(enc):
rawBlock = &ethpb.SignedBeaconBlockElectra{}
if err := rawBlock.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
if err := rawBlock.UnmarshalSSZ(enc[len(ElectraKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Electra block")
}
case hasElectraBlindKey(enc):
@@ -833,6 +916,11 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
if err := rawBlock.UnmarshalSSZ(enc[len(fuluBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Fulu block")
}
case hasEpbsKey(enc):
rawBlock = &ethpb.SignedBeaconBlockEpbs{}
if err := rawBlock.UnmarshalSSZ(enc[len(epbsKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal EPBS block")
}
default:
// Marshal block bytes to phase 0 beacon block.
rawBlock = &ethpb.SignedBeaconBlock{}
@@ -863,6 +951,10 @@ func encodeBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
v := blk.Version()
if v >= version.EPBS {
return epbsKey, nil
}
if v >= version.Fulu {
if blk.IsBlinded() {
return fuluBlindKey, nil
@@ -874,7 +966,7 @@ func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
if blk.IsBlinded() {
return electraBlindKey, nil
}
return electraKey, nil
return ElectraKey, nil
}
if v >= version.Deneb {
@@ -908,3 +1000,32 @@ func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
return nil, fmt.Errorf("unsupported block version: %v", blk.Version())
}
func (s *Store) deleteBlock(tx *bolt.Tx, root []byte) error {
if err := tx.Bucket(blocksBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete block")
}
if err := tx.Bucket(blockParentRootIndicesBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete block parent indices")
}
return nil
}
func (s *Store) deleteValidatorHashes(tx *bolt.Tx, root []byte) error {
ok, err := s.isStateValidatorMigrationOver()
if err != nil {
return err
}
if !ok {
return nil
}
// Delete the validator hash index
if err = tx.Bucket(blockRootValidatorHashesBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete validator index")
}
return nil
}

View File

@@ -2,9 +2,13 @@ package kv
import (
"context"
"fmt"
bolt "go.etcd.io/bbolt"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filters"
@@ -18,6 +22,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
"google.golang.org/protobuf/proto"
)
@@ -146,6 +151,17 @@ var blockTests = []struct {
}
return blocks.NewSignedBeaconBlock(b)
}},
{
name: "epbs",
newBlock: func(slot primitives.Slot, root []byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
b := random.SignedBeaconBlock(&testing.T{})
b.Block.Slot = slot
if root != nil {
b.Block.ParentRoot = root
}
return blocks.NewSignedBeaconBlock(b)
},
},
}
func TestStore_SaveBlock_NoDuplicates(t *testing.T) {
@@ -202,7 +218,7 @@ func TestStore_BlocksCRUD(t *testing.T) {
retrievedBlock, err = db.Block(ctx, blockRoot)
require.NoError(t, err)
wanted := retrievedBlock
if retrievedBlock.Version() >= version.Bellatrix {
if retrievedBlock.Version() >= version.Bellatrix && retrievedBlock.Version() < version.EPBS {
wanted, err = retrievedBlock.ToBlinded()
require.NoError(t, err)
}
@@ -353,6 +369,189 @@ func TestStore_DeleteFinalizedBlock(t *testing.T) {
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, cp))
require.ErrorIs(t, db.DeleteBlock(ctx, root), ErrDeleteJustifiedAndFinalized)
}
func TestStore_HistoricalDataBeforeSlot(t *testing.T) {
slotsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch)
db := setupDB(t)
ctx := context.Background()
// Save genesis block root
require.NoError(t, db.SaveGenesisBlockRoot(ctx, genesisBlockRoot))
// Create and save blocks for 4 epochs
blks := makeBlocks(t, 0, slotsPerEpoch*4, genesisBlockRoot)
require.NoError(t, db.SaveBlocks(ctx, blks))
// Mark state validator migration as complete
err := db.db.Update(func(tx *bolt.Tx) error {
return tx.Bucket(migrationsBucket).Put(migrationStateValidatorsKey, migrationCompleted)
})
require.NoError(t, err)
migrated, err := db.isStateValidatorMigrationOver()
require.NoError(t, err)
require.Equal(t, true, migrated)
// Create state summaries and states for each block
ss := make([]*ethpb.StateSummary, len(blks))
states := make([]state.BeaconState, len(blks))
for i, blk := range blks {
slot := blk.Block().Slot()
r, err := blk.Block().HashTreeRoot()
require.NoError(t, err)
// Create and save state summary
ss[i] = &ethpb.StateSummary{
Slot: slot,
Root: r[:],
}
// Create and save state with validator entries
vals := make([]*ethpb.Validator, 2)
for j := range vals {
vals[j] = &ethpb.Validator{
PublicKey: bytesutil.PadTo([]byte{byte(i*j + 1)}, 48),
WithdrawalCredentials: bytesutil.PadTo([]byte{byte(i*j + 2)}, 32),
}
}
st, err := util.NewBeaconState(func(state *ethpb.BeaconState) error {
state.Validators = vals
state.Slot = slot
return nil
})
require.NoError(t, err)
require.NoError(t, db.SaveState(ctx, st, r))
states[i] = st
// Verify validator entries are saved to db
valsActual, err := db.validatorEntries(ctx, r)
require.NoError(t, err)
for j, val := range valsActual {
require.DeepEqual(t, vals[j], val)
}
}
require.NoError(t, db.SaveStateSummaries(ctx, ss))
// Verify slot indices exist before deletion
err = db.db.View(func(tx *bolt.Tx) error {
blockSlotBkt := tx.Bucket(blockSlotIndicesBucket)
stateSlotBkt := tx.Bucket(stateSlotIndicesBucket)
for i := uint64(0); i < slotsPerEpoch; i++ {
slot := bytesutil.SlotToBytesBigEndian(primitives.Slot(i + 1))
assert.NotNil(t, blockSlotBkt.Get(slot), "Expected block slot index to exist")
assert.NotNil(t, stateSlotBkt.Get(slot), "Expected state slot index to exist", i)
}
return nil
})
require.NoError(t, err)
// Delete data before slot at epoch 1
require.NoError(t, db.DeleteHistoricalDataBeforeSlot(ctx, primitives.Slot(slotsPerEpoch)))
// Verify blocks from epoch 0 are deleted
for i := uint64(0); i < slotsPerEpoch; i++ {
root, err := blks[i].Block().HashTreeRoot()
require.NoError(t, err)
// Check block is deleted
retrievedBlocks, err := db.BlocksBySlot(ctx, primitives.Slot(i))
require.NoError(t, err)
assert.Equal(t, 0, len(retrievedBlocks))
// Verify block does not exist
assert.Equal(t, false, db.HasBlock(ctx, root))
// Verify block parent root does not exist
err = db.db.View(func(tx *bolt.Tx) error {
require.Equal(t, 0, len(tx.Bucket(blockParentRootIndicesBucket).Get(root[:])))
return nil
})
require.NoError(t, err)
// Verify state is deleted
hasState := db.HasState(ctx, root)
assert.Equal(t, false, hasState)
// Verify state summary is deleted
hasSummary := db.HasStateSummary(ctx, root)
assert.Equal(t, false, hasSummary)
// Verify validator hashes for block roots are deleted
err = db.db.View(func(tx *bolt.Tx) error {
assert.Equal(t, 0, len(tx.Bucket(blockRootValidatorHashesBucket).Get(root[:])))
return nil
})
require.NoError(t, err)
}
// Verify slot indices are deleted
err = db.db.View(func(tx *bolt.Tx) error {
blockSlotBkt := tx.Bucket(blockSlotIndicesBucket)
stateSlotBkt := tx.Bucket(stateSlotIndicesBucket)
for i := uint64(0); i < slotsPerEpoch; i++ {
slot := bytesutil.SlotToBytesBigEndian(primitives.Slot(i + 1))
assert.Equal(t, 0, len(blockSlotBkt.Get(slot)), fmt.Sprintf("Expected block slot index to be deleted, slot: %d", slot))
assert.Equal(t, 0, len(stateSlotBkt.Get(slot)), fmt.Sprintf("Expected state slot index to be deleted, slot: %d", slot))
}
return nil
})
require.NoError(t, err)
// Verify blocks from epochs 1-3 still exist
for i := slotsPerEpoch; i < slotsPerEpoch*4; i++ {
root, err := blks[i].Block().HashTreeRoot()
require.NoError(t, err)
// Verify block exists
assert.Equal(t, true, db.HasBlock(ctx, root))
// Verify the remaining block parent root index exists, except for the last block: the index is keyed by each saved block's parent, and the last block has no child.
if i < slotsPerEpoch*4-1 {
err = db.db.View(func(tx *bolt.Tx) error {
require.NotNil(t, tx.Bucket(blockParentRootIndicesBucket).Get(root[:]), fmt.Sprintf("Expected block parent index to exist, slot: %d", i))
return nil
})
require.NoError(t, err)
}
// Verify state exists
hasState := db.HasState(ctx, root)
assert.Equal(t, true, hasState)
// Verify state summary exists
hasSummary := db.HasStateSummary(ctx, root)
assert.Equal(t, true, hasSummary)
// Verify slot indices still exist
err = db.db.View(func(tx *bolt.Tx) error {
blockSlotBkt := tx.Bucket(blockSlotIndicesBucket)
stateSlotBkt := tx.Bucket(stateSlotIndicesBucket)
slot := bytesutil.SlotToBytesBigEndian(primitives.Slot(i + 1))
assert.NotNil(t, blockSlotBkt.Get(slot), "Expected block slot index to exist")
assert.NotNil(t, stateSlotBkt.Get(slot), "Expected state slot index to exist")
return nil
})
require.NoError(t, err)
// Verify validator entries still exist
valsActual, err := db.validatorEntries(ctx, root)
require.NoError(t, err)
assert.NotNil(t, valsActual)
// Verify remaining validator hashes for block roots exists
err = db.db.View(func(tx *bolt.Tx) error {
assert.NotNil(t, tx.Bucket(blockRootValidatorHashesBucket).Get(root[:]))
return nil
})
require.NoError(t, err)
}
}
func TestStore_GenesisBlock(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
@@ -390,7 +589,7 @@ func TestStore_BlocksCRUD_NoCache(t *testing.T) {
require.NoError(t, err)
wanted := blk
if blk.Version() >= version.Bellatrix {
if blk.Version() >= version.Bellatrix && blk.Version() < version.EPBS {
wanted, err = blk.ToBlinded()
require.NoError(t, err)
}
@@ -609,7 +808,7 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
b, err := db.Block(ctx, root)
require.NoError(t, err)
wanted := block1
if block1.Version() >= version.Bellatrix {
if block1.Version() >= version.Bellatrix && block1.Version() < version.EPBS {
wanted, err = wanted.ToBlinded()
require.NoError(t, err)
}
@@ -627,7 +826,7 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
b, err = db.Block(ctx, root)
require.NoError(t, err)
wanted2 := block2
if block2.Version() >= version.Bellatrix {
if block2.Version() >= version.Bellatrix && block2.Version() < version.EPBS {
wanted2, err = block2.ToBlinded()
require.NoError(t, err)
}
@@ -645,7 +844,7 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
b, err = db.Block(ctx, root)
require.NoError(t, err)
wanted = block3
if block3.Version() >= version.Bellatrix {
if block3.Version() >= version.Bellatrix && block3.Version() < version.EPBS {
wanted, err = wanted.ToBlinded()
require.NoError(t, err)
}
@@ -681,7 +880,7 @@ func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
b, err := db.Block(ctx, root)
require.NoError(t, err)
wanted := block1
if block1.Version() >= version.Bellatrix {
if block1.Version() >= version.Bellatrix && block1.Version() < version.EPBS {
wanted, err = block1.ToBlinded()
require.NoError(t, err)
}
@@ -698,7 +897,7 @@ func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
b, err = db.Block(ctx, root)
require.NoError(t, err)
wanted = genesisBlock
if genesisBlock.Version() >= version.Bellatrix {
if genesisBlock.Version() >= version.Bellatrix && genesisBlock.Version() < version.EPBS {
wanted, err = genesisBlock.ToBlinded()
require.NoError(t, err)
}
@@ -715,7 +914,7 @@ func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
b, err = db.Block(ctx, root)
require.NoError(t, err)
wanted = genesisBlock
if genesisBlock.Version() >= version.Bellatrix {
if genesisBlock.Version() >= version.Bellatrix && genesisBlock.Version() < version.EPBS {
wanted, err = genesisBlock.ToBlinded()
require.NoError(t, err)
}
@@ -811,7 +1010,7 @@ func TestStore_BlocksBySlot_BlockRootsBySlot(t *testing.T) {
require.NoError(t, err)
wanted := b1
if b1.Version() >= version.Bellatrix {
if b1.Version() >= version.Bellatrix && b1.Version() < version.EPBS {
wanted, err = b1.ToBlinded()
require.NoError(t, err)
}
@@ -827,7 +1026,7 @@ func TestStore_BlocksBySlot_BlockRootsBySlot(t *testing.T) {
t.Fatalf("Expected 2 blocks, received %d blocks", len(retrievedBlocks))
}
wanted = b2
if b2.Version() >= version.Bellatrix {
if b2.Version() >= version.Bellatrix && b2.Version() < version.EPBS {
wanted, err = b2.ToBlinded()
require.NoError(t, err)
}
@@ -837,7 +1036,7 @@ func TestStore_BlocksBySlot_BlockRootsBySlot(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(wantedPb, retrieved0Pb), "Wanted: %v, received: %v", retrievedBlocks[0], wanted)
wanted = b3
if b3.Version() >= version.Bellatrix {
if b3.Version() >= version.Bellatrix && b3.Version() < version.EPBS {
wanted, err = b3.ToBlinded()
require.NoError(t, err)
}

View File

@@ -8,6 +8,7 @@ import (
"github.com/golang/snappy"
fastssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
engine "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"google.golang.org/protobuf/proto"
)
@@ -78,6 +79,8 @@ func isSSZStorageFormat(obj interface{}) bool {
return true
case *ethpb.VoluntaryExit:
return true
case *engine.SignedBlindPayloadEnvelope:
return true
case *ethpb.ValidatorRegistrationV1:
return true
default:

View File

@@ -0,0 +1,18 @@
package kv
import (
"context"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
)
func (s *Store) SignedExecutionPayloadHeader(ctx context.Context, blockRoot [32]byte) (interfaces.ROSignedExecutionPayloadHeader, error) {
b, err := s.Block(ctx, blockRoot)
if err != nil {
return nil, err
}
if b.IsNil() {
return nil, ErrNotFound
}
return b.Block().Body().SignedExecutionPayloadHeader()
}

View File

@@ -0,0 +1,28 @@
package kv
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
)
func Test_SignedExecutionPayloadHeader(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
b := random.SignedBeaconBlock(t)
blk, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
blockRoot, err := blk.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, blk))
retrievedHeader, err := db.SignedExecutionPayloadHeader(ctx, blockRoot)
require.NoError(t, err)
wantedHeader, err := blk.Block().Body().SignedExecutionPayloadHeader()
require.NoError(t, err)
require.DeepEqual(t, wantedHeader, retrievedHeader)
}

View File

@@ -52,11 +52,12 @@ func hasDenebBlindKey(enc []byte) bool {
return bytes.Equal(enc[:len(denebBlindKey)], denebBlindKey)
}
func hasElectraKey(enc []byte) bool {
if len(electraKey) >= len(enc) {
// HasElectraKey returns true if the encoding carries the Electra key prefix.
func HasElectraKey(enc []byte) bool {
if len(ElectraKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(electraKey)], electraKey)
return bytes.Equal(enc[:len(ElectraKey)], ElectraKey)
}
func hasElectraBlindKey(enc []byte) bool {
@@ -79,3 +80,10 @@ func hasFuluBlindKey(enc []byte) bool {
}
return bytes.Equal(enc[:len(fuluBlindKey)], fuluBlindKey)
}
func hasEpbsKey(enc []byte) bool {
if len(epbsKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(epbsKey)], epbsKey)
}
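For orientation, the fork-key scheme these helpers check can be exercised in isolation. The sketch below is illustrative only: it redefines a local epbsKey and prefix check rather than using the package's unexported ones, and it assumes the stored value is snappy(key ‖ SSZ bytes), matching the write path in processEPBS and the read path in unmarshalState further down.

package main

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy"
)

// Illustrative copy of the unexported key and prefix check above.
var epbsKey = []byte("epbs")

func hasEpbsKey(enc []byte) bool {
	if len(epbsKey) >= len(enc) {
		return false
	}
	return bytes.Equal(enc[:len(epbsKey)], epbsKey)
}

func main() {
	rawObj := []byte{0x01, 0x02, 0x03} // stand-in for an SSZ-marshaled state

	// Write path: prefix with the fork key, then snappy-compress.
	stored := snappy.Encode(nil, append(epbsKey, rawObj...))

	// Read path: decompress, detect the fork by its key, then strip the prefix.
	enc, err := snappy.Decode(nil, stored)
	if err != nil {
		panic(err)
	}
	fmt.Println(hasEpbsKey(enc))                         // true
	fmt.Println(bytes.Equal(enc[len(epbsKey):], rawObj)) // true
}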

View File

@@ -121,6 +121,9 @@ var Buckets = [][]byte{
feeRecipientBucket,
registrationBucket,
// ePBS
executionPayloadEnvelopeBucket,
}
// KVStoreOption is a functional option that modifies a kv.Store.

View File

@@ -167,13 +167,13 @@ func decodeLightClientBootstrap(enc []byte) (interfaces.LightClientBootstrap, []
}
m = bootstrap
syncCommitteeHash = enc[len(denebKey) : len(denebKey)+32]
case hasElectraKey(enc):
case HasElectraKey(enc):
bootstrap := &ethpb.LightClientBootstrapElectra{}
if err := bootstrap.UnmarshalSSZ(enc[len(electraKey)+32:]); err != nil {
if err := bootstrap.UnmarshalSSZ(enc[len(ElectraKey)+32:]); err != nil {
return nil, nil, errors.Wrap(err, "could not unmarshal Electra light client bootstrap")
}
m = bootstrap
syncCommitteeHash = enc[len(electraKey) : len(electraKey)+32]
syncCommitteeHash = enc[len(ElectraKey) : len(ElectraKey)+32]
default:
return nil, nil, errors.New("decoding of saved light client bootstrap is unsupported")
}
@@ -277,9 +277,9 @@ func decodeLightClientUpdate(enc []byte) (interfaces.LightClientUpdate, error) {
return nil, errors.Wrap(err, "could not unmarshal Deneb light client update")
}
m = update
case hasElectraKey(enc):
case HasElectraKey(enc):
update := &ethpb.LightClientUpdateElectra{}
if err := update.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
if err := update.UnmarshalSSZ(enc[len(ElectraKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Electra light client update")
}
m = update
@@ -292,7 +292,7 @@ func decodeLightClientUpdate(enc []byte) (interfaces.LightClientUpdate, error) {
func keyForLightClientUpdate(v int) ([]byte, error) {
switch v {
case version.Electra:
return electraKey, nil
return ElectraKey, nil
case version.Deneb:
return denebKey, nil
case version.Capella:

View File

@@ -7,15 +7,16 @@ package kv
// it easy to scan for keys that have a certain shard number as a prefix and return those
// corresponding attestations.
var (
blocksBucket = []byte("blocks")
stateBucket = []byte("state")
stateSummaryBucket = []byte("state-summary")
chainMetadataBucket = []byte("chain-metadata")
checkpointBucket = []byte("check-point")
powchainBucket = []byte("powchain")
stateValidatorsBucket = []byte("state-validators")
feeRecipientBucket = []byte("fee-recipient")
registrationBucket = []byte("registration")
blocksBucket = []byte("blocks")
stateBucket = []byte("state")
stateSummaryBucket = []byte("state-summary")
chainMetadataBucket = []byte("chain-metadata")
checkpointBucket = []byte("check-point")
powchainBucket = []byte("powchain")
stateValidatorsBucket = []byte("state-validators")
feeRecipientBucket = []byte("fee-recipient")
registrationBucket = []byte("registration")
executionPayloadEnvelopeBucket = []byte("execution-payload-envelope")
// Light Client Updates Bucket
lightClientUpdatesBucket = []byte("light-client-updates")
@@ -53,10 +54,11 @@ var (
saveBlindedBeaconBlocksKey = []byte("save-blinded-beacon-blocks")
denebKey = []byte("deneb")
denebBlindKey = []byte("blind-deneb")
electraKey = []byte("electra")
ElectraKey = []byte("electra")
electraBlindKey = []byte("blind-electra")
fuluKey = []byte("fulu")
fuluBlindKey = []byte("blind-fulu")
epbsKey = []byte("epbs")
// block root included in the beacon state used by weak subjectivity initial sync
originCheckpointBlockRootKey = []byte("origin-checkpoint-block-root")

View File

@@ -253,6 +253,10 @@ func (s *Store) saveStatesEfficientInternal(ctx context.Context, tx *bolt.Tx, bl
if err := s.processElectra(ctx, rawType, rt[:], bucket, valIdxBkt, validatorKeys[i]); err != nil {
return err
}
case *ethpb.BeaconStateEPBS:
if err := s.processEPBS(ctx, rawType, rt[:], bucket, valIdxBkt, validatorKeys[i]); err != nil {
return err
}
default:
return errors.New("invalid state type")
}
@@ -357,7 +361,25 @@ func (s *Store) processElectra(ctx context.Context, pbState *ethpb.BeaconStateEl
if err != nil {
return err
}
encodedState := snappy.Encode(nil, append(electraKey, rawObj...))
encodedState := snappy.Encode(nil, append(ElectraKey, rawObj...))
if err := bucket.Put(rootHash, encodedState); err != nil {
return err
}
pbState.Validators = valEntries
if err := valIdxBkt.Put(rootHash, validatorKey); err != nil {
return err
}
return nil
}
func (s *Store) processEPBS(ctx context.Context, pbState *ethpb.BeaconStateEPBS, rootHash []byte, bucket, valIdxBkt *bolt.Bucket, validatorKey []byte) error {
valEntries := pbState.Validators
pbState.Validators = make([]*ethpb.Validator, 0)
rawObj, err := pbState.MarshalSSZ()
if err != nil {
return err
}
encodedState := snappy.Encode(nil, append(epbsKey, rawObj...))
if err := bucket.Put(rootHash, encodedState); err != nil {
return err
}
@@ -517,22 +539,21 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
}
switch {
case hasEpbsKey(enc):
protoState := &ethpb.BeaconStateEPBS{}
if err := protoState.UnmarshalSSZ(enc[len(epbsKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for EPBS")
}
return statenative.InitializeFromProtoEpbs(protoState)
case hasFuluKey(enc):
protoState := &ethpb.BeaconStateFulu{}
if err := protoState.UnmarshalSSZ(enc[len(fuluKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for Electra")
}
ok, err := s.isStateValidatorMigrationOver()
if err != nil {
return nil, err
}
if ok {
protoState.Validators = validatorEntries
}
return statenative.InitializeFromProtoUnsafeFulu(protoState)
case hasElectraKey(enc):
case HasElectraKey(enc):
protoState := &ethpb.BeaconStateElectra{}
if err := protoState.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
if err := protoState.UnmarshalSSZ(enc[len(ElectraKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for Electra")
}
ok, err := s.isStateValidatorMigrationOver()
@@ -688,9 +709,9 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
if err != nil {
return nil, err
}
return snappy.Encode(nil, append(electraKey, rawObj...)), nil
return snappy.Encode(nil, append(ElectraKey, rawObj...)), nil
case version.Fulu:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateFulu)
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateElectra)
if !ok {
return nil, errors.New("non valid inner state")
}
@@ -701,7 +722,20 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
if err != nil {
return nil, err
}
return snappy.Encode(nil, append(fuluKey, rawObj...)), nil
return snappy.Encode(nil, append(ElectraKey, rawObj...)), nil
case version.EPBS:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateEPBS)
if !ok {
return nil, errors.New("non valid inner state")
}
if rState == nil {
return nil, errors.New("nil state")
}
rawObj, err := rState.MarshalSSZ()
if err != nil {
return nil, err
}
return snappy.Encode(nil, append(epbsKey, rawObj...)), nil
default:
return nil, errors.New("invalid inner state")
}
@@ -725,7 +759,7 @@ func (s *Store) validatorEntries(ctx context.Context, blockRoot [32]byte) ([]*et
idxBkt := tx.Bucket(blockRootValidatorHashesBucket)
valKey := idxBkt.Get(blockRoot[:])
if len(valKey) == 0 {
return errors.Errorf("invalid compressed validator keys length")
return errors.Errorf("validator keys not found for given block root: %x", blockRoot)
}
// decompress the keys and check if they are of proper length.

View File

@@ -5,12 +5,12 @@ import (
"crypto/rand"
"encoding/binary"
mathRand "math/rand"
"strconv"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
statenative "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -22,6 +22,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
bolt "go.etcd.io/bbolt"
)
@@ -158,6 +159,16 @@ func TestState_CanSaveRetrieve(t *testing.T) {
},
rootSeed: 'E',
},
{
name: "epbs",
s: func() state.BeaconState {
stPb := random.BeaconState(t)
st, err := statenative.InitializeFromProtoUnsafeEpbs(stPb)
require.NoError(t, err)
return st
},
rootSeed: 'F',
},
}
db := setupDB(t)
@@ -166,6 +177,9 @@ func TestState_CanSaveRetrieve(t *testing.T) {
reset := features.InitWithReset(&features.Flags{EnableHistoricalSpaceRepresentation: enableFlag})
for _, tc := range cases {
if tc.name == "epbs" && enableFlag {
t.Skip("Skipping epbs test as EnableHistoricalSpaceRepresentation is true")
}
t.Run(tc.name+" - EnableHistoricalSpaceRepresentation is "+strconv.FormatBool(enableFlag), func(t *testing.T) {
rootNonce := byte('0')
if enableFlag {
@@ -1112,6 +1126,26 @@ func TestDenebState_CanDelete(t *testing.T) {
require.Equal(t, state.ReadOnlyBeaconState(nil), savedS, "Unsaved state should've been nil")
}
func TestEpbsState_CanDelete(t *testing.T) {
db := setupDB(t)
r := [32]byte{'A'}
require.Equal(t, false, db.HasState(context.Background(), r))
s := random.BeaconState(t)
st, err := statenative.InitializeFromProtoUnsafeEpbs(s)
require.NoError(t, err)
require.NoError(t, db.SaveState(context.Background(), st, r))
require.Equal(t, true, db.HasState(context.Background(), r))
require.NoError(t, db.DeleteState(context.Background(), r))
savedS, err := db.State(context.Background(), r)
require.NoError(t, err)
require.Equal(t, state.ReadOnlyBeaconState(nil), savedS, "Unsaved state should've been nil")
}
func TestStateDeneb_CanSaveRetrieveValidatorEntries(t *testing.T) {
db := setupDB(t)

View File

@@ -0,0 +1,38 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["pruner.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/pruner",
visibility = [
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["pruner_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/db/testing:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots/testing:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)

View File

@@ -0,0 +1,174 @@
package pruner
import (
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/iface"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "db-pruner")
type ServiceOption func(*Service)
// WithRetentionPeriod allows the user to specify a different data retention period than the spec default.
// The retention period is specified in epochs; values below MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 are clamped to that minimum.
func WithRetentionPeriod(retentionEpochs primitives.Epoch) ServiceOption {
return func(s *Service) {
defaultRetentionEpochs := helpers.MinEpochsForBlockRequests() + 1
if retentionEpochs < defaultRetentionEpochs {
log.WithField("userEpochs", retentionEpochs).
WithField("minRequired", defaultRetentionEpochs).
Warn("Retention period too low, using minimum required value")
retentionEpochs = defaultRetentionEpochs
}
s.ps = pruneStartSlotFunc(retentionEpochs)
}
}
func WithSlotTicker(slotTicker slots.Ticker) ServiceOption {
return func(s *Service) {
s.slotTicker = slotTicker
}
}
// Service defines a service that prunes beacon chain DB based on MIN_EPOCHS_FOR_BLOCK_REQUESTS.
type Service struct {
ctx context.Context
db db.Database
ps func(current primitives.Slot) primitives.Slot
prunedUpto primitives.Slot
done chan struct{}
slotTicker slots.Ticker
backfillWaiter func() error
initSyncWaiter func() error
}
func New(ctx context.Context, db iface.Database, genesisTime uint64, initSyncWaiter, backfillWaiter func() error, opts ...ServiceOption) (*Service, error) {
p := &Service{
ctx: ctx,
db: db,
ps: pruneStartSlotFunc(helpers.MinEpochsForBlockRequests() + 1), // Default retention window is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 epochs behind the current slot.
done: make(chan struct{}),
slotTicker: slots.NewSlotTicker(slots.StartTime(genesisTime, 0), params.BeaconConfig().SecondsPerSlot),
initSyncWaiter: initSyncWaiter,
backfillWaiter: backfillWaiter,
}
for _, o := range opts {
o(p)
}
return p, nil
}
func (p *Service) Start() {
log.Info("Starting Beacon DB pruner service")
p.run()
}
func (p *Service) Stop() error {
log.Info("Stopping Beacon DB pruner service")
close(p.done)
return nil
}
func (p *Service) Status() error {
return nil
}
func (p *Service) run() {
if p.initSyncWaiter != nil {
log.Info("Waiting for initial sync service to complete before starting pruner")
if err := p.initSyncWaiter(); err != nil {
log.WithError(err).Error("Failed to start database pruner, error waiting for initial sync completion")
return
}
}
if p.backfillWaiter != nil {
log.Info("Waiting for backfill service to complete before starting pruner")
if err := p.backfillWaiter(); err != nil {
log.WithError(err).Error("Failed to start database pruner, error waiting for backfill completion")
return
}
}
defer p.slotTicker.Done()
for {
select {
case <-p.ctx.Done():
log.Debug("Stopping Beacon DB pruner service", "prunedUpto", p.prunedUpto)
return
case <-p.done:
log.Debug("Stopping Beacon DB pruner service", "prunedUpto", p.prunedUpto)
return
case slot := <-p.slotTicker.C():
// Prune at the middle of every epoch since we do a lot of things around epoch boundaries.
if slots.SinceEpochStarts(slot) != (params.BeaconConfig().SlotsPerEpoch / 2) {
continue
}
if err := p.prune(slot); err != nil {
log.WithError(err).Error("Failed to prune database")
}
}
}
}
// prune deletes historical chain data older than the computed retention boundary.
func (p *Service) prune(slot primitives.Slot) error {
// Prune everything up to this slot (inclusive).
pruneUpto := p.ps(slot)
// Can't prune beyond genesis.
if pruneUpto == 0 {
return nil
}
// Skip if already pruned up to this slot.
if pruneUpto <= p.prunedUpto {
return nil
}
log.WithFields(logrus.Fields{
"pruneUpto": pruneUpto,
}).Debug("Pruning chain data")
tt := time.Now()
if err := p.db.DeleteHistoricalDataBeforeSlot(p.ctx, pruneUpto); err != nil {
return errors.Wrapf(err, "could not delete upto slot %d", pruneUpto)
}
log.WithFields(logrus.Fields{
"prunedUpto": pruneUpto,
"duration": time.Since(tt),
"currentSlot": slot,
}).Debug("Successfully pruned chain data")
// Update pruning checkpoint.
p.prunedUpto = pruneUpto
return nil
}
// pruneStartSlotFunc returns a function that, given the current slot, computes the slot up to which historical data should be pruned.
func pruneStartSlotFunc(retentionEpochs primitives.Epoch) func(primitives.Slot) primitives.Slot {
return func(current primitives.Slot) primitives.Slot {
if retentionEpochs > slots.MaxSafeEpoch() {
retentionEpochs = slots.MaxSafeEpoch()
}
offset := slots.UnsafeEpochStart(retentionEpochs)
if offset >= current {
return 0
}
return current - offset
}
}
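To make the retention arithmetic concrete, here is a small standalone sketch of the boundary computation performed by pruneStartSlotFunc. The numbers are illustrative (32-slot epochs and a 2-epoch window, as in the test that follows), not the spec's MIN_EPOCHS_FOR_BLOCK_REQUESTS.

package main

import "fmt"

// pruneBoundary mirrors pruneStartSlotFunc: subtract the retention window
// (in slots) from the current slot and clamp at genesis.
func pruneBoundary(retentionEpochs, slotsPerEpoch, currentSlot uint64) uint64 {
	offset := retentionEpochs * slotsPerEpoch
	if offset >= currentSlot {
		return 0 // nothing to prune yet
	}
	return currentSlot - offset
}

func main() {
	// A tick at slot 80 with a 2-epoch (64-slot) window prunes up to slot 16,
	// which is exactly what TestPruner_PruneSuccess below expects.
	fmt.Println(pruneBoundary(2, 32, 80)) // 16

	// Early in the chain the window exceeds the current slot, so nothing is pruned.
	fmt.Println(pruneBoundary(2, 32, 40)) // 0
}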

View File

@@ -0,0 +1,135 @@
package pruner
import (
"context"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/util"
slottest "github.com/prysmaticlabs/prysm/v5/time/slots/testing"
"github.com/sirupsen/logrus"
dbtest "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestPruner_PruningConditions(t *testing.T) {
tests := []struct {
name string
synced bool
backfillCompleted bool
expectedLog string
}{
{
name: "Not synced",
synced: false,
backfillCompleted: true,
expectedLog: "Waiting for initial sync service to complete before starting pruner",
},
{
name: "Backfill incomplete",
synced: true,
backfillCompleted: false,
expectedLog: "Waiting for backfill service to complete before starting pruner",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx, cancel := context.WithCancel(context.Background())
beaconDB := dbtest.SetupDB(t)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
waitChan := make(chan struct{})
waiter := func() error {
close(waitChan)
return nil
}
var initSyncWaiter, backfillWaiter func() error
if !tt.synced {
initSyncWaiter = waiter
}
if !tt.backfillCompleted {
backfillWaiter = waiter
}
p, err := New(ctx, beaconDB, uint64(time.Now().Unix()), initSyncWaiter, backfillWaiter, WithSlotTicker(slotTicker))
require.NoError(t, err)
go p.Start()
<-waitChan
cancel()
if tt.expectedLog != "" {
require.LogsContain(t, hook, tt.expectedLog)
}
require.NoError(t, p.Stop())
})
}
}
func TestPruner_PruneSuccess(t *testing.T) {
ctx := context.Background()
beaconDB := dbtest.SetupDB(t)
// Create and save some blocks at different slots
var blks []*eth.SignedBeaconBlock
for slot := primitives.Slot(1); slot <= 32; slot++ {
blk := util.NewBeaconBlock()
blk.Block.Slot = slot
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
blks = append(blks, blk)
}
// Create pruner with retention of 2 epochs (64 slots)
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
p, err := New(
ctx,
beaconDB,
uint64(time.Now().Unix()),
nil,
nil,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
p.ps = func(current primitives.Slot) primitives.Slot {
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
}
// Start pruner and trigger at middle of 3rd epoch (slot 80)
go p.Start()
currentSlot := primitives.Slot(80) // Middle of 3rd epoch
slotTicker.Channel <- currentSlot
// Send the same slot again to ensure the pruning operation completes
slotTicker.Channel <- currentSlot
for slot := primitives.Slot(1); slot <= 32; slot++ {
root, err := blks[slot-1].Block.HashTreeRoot()
require.NoError(t, err)
present := beaconDB.HasBlock(ctx, root)
if slot <= 16 { // These should be pruned
require.Equal(t, false, present, "Expected block at slot %d to be pruned", slot)
} else { // These should remain
require.Equal(t, true, present, "Expected block at slot %d to exist", slot)
}
}
require.NoError(t, p.Stop())
}

View File

@@ -15,6 +15,7 @@ go_library(
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/slasher/types:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -22,6 +23,7 @@ go_library(
"//io/file:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -50,6 +52,7 @@ go_test(
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/require:go_default_library",
"//time/slots:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",

View File

@@ -8,6 +8,7 @@ import (
slashertypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher/types"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
@@ -177,8 +178,8 @@ func TestStore_PruneAttestations_OK(t *testing.T) {
if i > 0 {
source = target - 1
}
att1 := createAttestationWrapper(source, target, []uint64{attester1}, []byte{0})
att2 := createAttestationWrapper(source, target, []uint64{attester2}, []byte{1})
att1 := createAttestationWrapper(version.Phase0, source, target, []uint64{attester1}, []byte{0})
att2 := createAttestationWrapper(version.Phase0, source, target, []uint64{attester2}, []byte{1})
attestations = append(attestations, att1, att2)
}
}

Some files were not shown because too many files have changed in this diff Show More