Compare commits


183 Commits

Author | SHA1 | Message | Date
Manu NALEPA
2c56c650e6 jimmy 2025-02-25 17:43:13 +01:00
Manu NALEPA
6c0bba7197 Implement validator custody. 2025-02-22 17:15:34 +01:00
Manu NALEPA
35a2a32106 blobsFromStoredDataColumns: Simplify.
No longer differentiate between "can theoretically reconstruct" and "can actually reconstruct".
2025-02-21 16:03:49 +01:00
Manu NALEPA
2a72703d3e dataColumnSidecarByRangeRPCHandler: Remove custody columns in logs. 2025-02-21 16:03:49 +01:00
Manu NALEPA
3b5a6b5e2f dataColumnSidecarByRootRPCHandler: Remove custody columns in logs. 2025-02-21 16:03:49 +01:00
Manu NALEPA
36958b552d Sync service: Add tracked validators cache. 2025-02-21 16:03:49 +01:00
Manu NALEPA
ae1a6be8a3 Implement ValidatorsCustodyRequirement. 2025-02-21 16:03:49 +01:00
Manu NALEPA
4f146f9a30 Add VALIDATOR_CUSTODY_REQUIREMENT and BALANCE_PER_ADDITIONAL_CUSTODY_GROUP. 2025-02-21 16:03:49 +01:00
Manu NALEPA
f07036ab3c Node info: Rename cache and mutex. 2025-02-21 16:03:48 +01:00
Manu NALEPA
da9d4cf5b9 Merge branch 'develop' into peerDAS 2025-02-21 16:03:20 +01:00
terence
56208aa84d Add more verbosity to fork digest mismatch (#14968) 2025-02-21 03:36:31 +00:00
james-prysm
b866a2c744 setting rest endpoints as deprecated for electra (#14967) 2025-02-20 18:20:51 +00:00
Nishant Das
a77234e637 Test Execution Deposit Requests in E2E (#14964)
* Test Deposit Requests

* Remove extra epochs

* Clean up Panic

* Fix Slashing Config

* Fix Slashing Test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-02-20 14:54:45 +00:00
Manu NALEPA
a62cca15dd Merge branch 'develop' into peerDAS 2025-02-20 15:48:07 +01:00
james-prysm
e0e7354708 improving the error messages for execution request deserialization (#14962)
* improving the error messages for execution request deserialization

* changelog
2025-02-20 14:31:02 +00:00
james-prysm
0f86a16915 builder: api calls should have appropriate headers (#14961)
* adding correct headers when posting for validator registration on builder api

* changelog
2025-02-20 14:27:14 +00:00
Radosław Kapka
972c22b02f SingleAttestation support in the monitor service (#14965)
* `SingleAttestation` support in the monitor service

* changelog <3
2025-02-20 11:26:51 +00:00
Manu NALEPA
93c27340e4 Tracked validator TTL (#14957)
* `TrackedValidatorsCache`: Implement a 1-hour TTL by using `go-cache`.

* `TrackedValidatorsCache`: Add the `ItemCount` method.

* `TrackedValidatorsCache`: Add the `Indices` method.

* Add changelog.

* `TrackedValidatorsCache`: Add prometheus metrics.

* Update beacon-chain/cache/tracked_validators.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-02-19 18:04:13 +00:00
Manu NALEPA
c3edb32558 ServiceRegistry.StartAll: Remove redundant log. (#14958) 2025-02-19 17:12:32 +00:00
Sammy Rosso
3baaa732df Add get pending partial withdrawals (#14949)
* add pending partial withdrawals endpoint

* changelog

* missing new line

* fix changelog

* removing unneeded header

* using generic instead of redundant functions

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-02-19 12:34:48 +00:00
Nishant Das
8ceb7e76ea Log execution requests (#14956) 2025-02-19 10:19:04 +00:00
terence
4d5dddd302 Add request hash to header for builder: executable data to block (#14955)
* Add request hash to header for builder: executable data to block

* go fmt
2025-02-19 05:18:18 +00:00
Sammy Rosso
55efccb07f Add get pending deposits endpoint (#14941)
* Add GetPendingDeposits endpoint

* add comment

* add changelog

* gaz

* Radek's review

* move JSON object params

* gaz

* Radek's nits xD

* James' review
2025-02-18 16:16:20 +00:00
Radosław Kapka
961d8e1481 Don't use MaxCover for Electra on-chain aggregates (#14925)
* Don't use MaxCover for Electra on-chain aggregates

* changelog <3
2025-02-18 14:44:18 +00:00
Nishant Das
d396a9931e Add in Multiclient E2E For Electra (#14946)
* Add in Multiclient E2E

* Fix Execution Engine

* Update testing/endtoend/endtoend_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/endtoend_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-02-18 10:32:58 +00:00
james-prysm
e3f8f121f4 web3signer electra for e2e (#14936)
* changes needed to support web3signer running on electra for e2e

* updating web3signer version and fixing missed configs and test alignment
2025-02-18 01:36:19 +00:00
fuyangpengqi
80f29e9eda refactor: use a more straightforward return value (#14942)
Signed-off-by: fuyangpengqi <995764973@qq.com>
2025-02-17 19:18:15 +00:00
Nishant Das
8995d8133a Fix Deposit Activation Evaluator (#14938)
* Fix evaluator

* fix deposit activation
2025-02-17 13:43:23 +00:00
Preston Van Loon
31044206b8 tracing: Replace deprecated jaeger exporter with otelhttp exporter (#14928)
* Update go.opentelemetry.io/otel to v1.34.0

* Update otel exporter to replace deprecated jaeger exporter

* Changelog

* Use WithEndpointURL

* Clarify potential breaking change
2025-02-15 17:26:57 +00:00
Manu NALEPA
ac04246a2a Avoid computing peerDAS info again and again. (#14893)
* `areDataColumnsAvailable`: `signed` ==> `signedBlock`.

* peerdas: Split `helpers.go` into multiple files, respecting the specification.

* peerDAS: Implement `Info`.

* peerDAS: Use cached `Info` when possible.
2025-02-14 18:06:04 +01:00
Manu NALEPA
0923145bd7 Merge branch 'develop' into peerDAS 2025-02-14 16:51:05 +01:00
Manu NALEPA
3a1702e56f Fixed the bazel run //:gazelle command in DEPENDENCIES.md. (#14934) 2025-02-14 15:00:10 +00:00
Nishant Das
501ec74a48 Fix Deposit Evaluator in E2E (#14933)
* fix evaluator in electra

* remove function

* Fix evaluator

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-02-14 14:59:51 +00:00
Potuz
c248fe0bb3 Add logs for RPC handlers registered/removed at forks (#14932) 2025-02-14 13:01:01 +00:00
Manu NALEPA
215fbcb2e4 Remove Fulu block and state. (#14905)
* Remove Fulu block and state.

* Add missing tests.

* Alias `ProtobufBeaconStateFulu` to `ProtobufBeaconStateElectra`
2025-02-14 10:48:24 +00:00
kasey
e39f44b529 fix path parsing bug on windows (#14931)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-02-14 06:10:58 +00:00
Manu NALEPA
a216cb4105 Merge branch 'develop' into peerDAS 2025-02-13 18:22:21 +01:00
Nishant Das
9eff6ae476 Update Blst to v3.14.0 (#14921)
* updateBlst

* changelog
2025-02-13 14:31:19 +00:00
Nishant Das
3eec5a5cb6 Fix Engine Capabilities Check (#14924)
* Fix Capabilities Check

* Changelog
2025-02-13 13:37:05 +00:00
Preston Van Loon
66878deb2c Update changelog for v5.3.0 release (#14918)
* Prysm v5.3.0 changelog update

* Add v5.3.0 preamble

* Remove experimental feature from suggestions

* Changelog fragment
2025-02-12 22:36:34 +00:00
james-prysm
0b6e1711e4 Electra e2e minimal (updates geth to 1.15.0) (#14842)
* wip electra e2e

* add Deneb state to `validatorsParticipating`

* Run bazel

* add Electra state to `validatorsParticipating`

* fixing some e2e issues

* more evaluator fixes and changelog

* adding in special condition to pass electra epoch participation

* fixing typo

* missed updating forks for e2e tests

* reverting change current release fork

* missed updating e2e config for test

* updating to latest devnet 5 to fix unit tests

* go mod tidy

* fixing branch, temporary will need to update geth version later

* update to goethereum v1.15.0

* changing changelog to reflect update in geth dependency

* fixing test failures

* adding fix for range request limit during transition period between forks

* enabling validator rest for Electra

* rolling back error message

* adding fixed change logs

* fixing dependencies based on nishant's comments, deps.bzl should be updated not workspace

* partially reverting incorrect change

* removing fixes from change log, handled in separate prs

* removing comment

* updating update fraction field to the corrected spec value from prague

* Update testing/endtoend/evaluators/fork.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

---------

Co-authored-by: rkapka <radoslaw.kapka@gmail.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-02-12 15:58:06 +00:00
james-prysm
15025837bb fix: gocognit on publish block and fixing publish blinded block header check (#14913)
* refactored code and added in checks for blinded endpoints

* changelog

* cleaning up some comments and error messages

* fixing linting

* adding clarifying comment
2025-02-11 21:34:37 +00:00
Radosław Kapka
0229a2055e Rename files in beacon-chain/operations/slashings (#14904)
* pool

* service

* changelog <3
2025-02-11 16:13:23 +00:00
terence
eb9af15c7a Add blobs by range electra test (#14912) 2025-02-11 15:34:44 +00:00
james-prysm
0584746815 Dynamic max blobs config (#14911)
* fixing max config helpers to use dynamic values instead of static ones

* changelog
2025-02-11 15:04:22 +00:00
Nishant Das
8c4ea850ba Fix Blobs By Range RPC Handler (#14910)
* Add tests for TestSendBlobsByRangeRequest. Currently not working with sequential blob validation.

* Copy Root First

* Allow Test For Maximum Amount of Blobs

* Fails with the Same error

* Fix Last Test Assertion

* Add in Fix

* Changelog

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-02-11 14:11:12 +00:00
Nishant Das
4b43f13e65 Fix Blob Reconstruction (#14909)
* Fix Mutating Blob Mask

* Changelog

* Typo
2025-02-11 13:44:00 +00:00
james-prysm
26d35474e9 fix: /eth/v2/beacon/blocks post api to handle electra and fulu blocks correctly (#14897)
* adding fix and changelog

* adding no lint gocognit for now

* fixing linting

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* updating based on kasey's suggestions

* preston's comments

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-02-10 23:50:09 +00:00
terence
9fbe3564df Update spec tests to v1.5.0-beta.2 (#14901) 2025-02-10 15:12:57 +00:00
terence
bed5547890 Add pectra testnet dates (#14884) 2025-02-10 15:09:42 +00:00
Nishant Das
47922fe7d8 Remove Unused assignment (#14906)
* Remove unused boolean assignment

* Changelog

* Remove debug line
2025-02-10 15:01:23 +00:00
Radosław Kapka
dcd25d1d97 Add missing config values from the spec (#14903)
* Add missing config values from the spec

* remove placeholders

* add some more values
2025-02-10 14:17:13 +00:00
terence
81a2a17c5f Fix electra state to safe share references on pending fields when append (#14895)
* Fix electra state to safe share references on pending fields when append

* Feedback
2025-02-08 03:04:02 +00:00
Rupam Dey
6b3f1de19d change lc flag name from enable-lightclient to enable-light-client (#14887)
* change flag name from `enable-lightclient` to `enable-light-client`

* changelog
2025-02-07 17:35:12 +00:00
Bastin
7c17af2a41 bundle handlers test (#14834)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-02-07 16:00:22 +00:00
Nishant Das
ecf5a368d7 Update it (#14890) 2025-02-07 08:31:36 +00:00
Jun Song
557c5be433 Prune pending deposits from the deposit cache post-Electra (#14829)
* Add metrics for pruned proofs & pending deposits

* Add PruneAllProofs & PruneAllPendingDeposits

* Add simple unit tests

* Add DepositPruner interface

* Add pruning logic at post finalization task

* Move pruner logic into new file(deposit_pruner.go)

Rationale:
As deposit_fetcher.go contains all the pruning logic, it is better to separate its concerns into fetcher/inserter/pruner.

* Gofmt

* Add reference link for deprecating eth1 polling

* Add changelog

* Apply reviews from nisdas and james

* add pre and post deposit request tests

* nishant's comment

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-02-07 04:31:01 +00:00
Radosław Kapka
49405c3afd Notify about attestations from the pending att queue (#14862)
* Notify about attestations from the pending att queue

* changelog <3

* fix tests

* adding to existing tests to track appropriate event feed sends

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-02-06 22:07:42 +00:00
Nishant Das
3439122629 Set New Blob Limits For Electra (#14883)
* Set New Blob Limits For Electra

* Add Changelog

* Bump up blob limit
2025-02-06 16:53:39 +00:00
Potuz
f6e5da6723 Do not error on overflow when converting slashings (#14882) 2025-02-05 21:01:27 +00:00
kasey
842f241cb9 Reduce size of api/client import graph (#14871)
* relocate DownloadFinalizedData from api to sync

* unexpected go mod changes

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-02-05 20:40:13 +00:00
kasey
41daac1b04 Organize blobs on disk by epoch (#14023)
* organize blob directories by period and epoch

* changelog

* remove Indices and replace with Summary

* old PR feedback

* log to advise about the speed of blob migration

* rename level->layer (hoping term is more clear)

* assert path in tests for increased legibility

* lint

* lint

* remove test covering a newly impossible error

* improve feedback from flag validation failure

* Try to clean dangling dirs epoch->flat migration

* lint

* Preston feedback

* try all layouts and short-circuit if base not found

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-02-05 20:09:38 +00:00
Potuz
2a7fc84044 Fix startup log for config file values (#14865) 2025-02-05 16:01:25 +00:00
Rupam Dey
44ff0b1a14 add missing Electra tests for light client (#14783)
* add Electra tests for finality update

* override beacon config

* add Electra tests to

* fix setupTestElectra

* changelog

* cleanup test config

* Update beacon-chain/core/light-client/lightclient_test.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* changelog

* move config to top

---------

Co-authored-by: Bastin <bastin.m@proton.me>
Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-02-05 15:16:53 +00:00
Dhruv Bodani
91cdd318a8 Add process slot span to slotCtx (#14874)
* attach process slot span to slotCtx

* add changelog

* fix build

* fix build

* Update changelog/dB2510_processslotspan.md

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-02-05 15:00:27 +00:00
james-prysm
3dc00816fb nil checks on ToConsensus() functions (#14867)
* adding more safety checks and associated tests

* changelog

* Update api/server/structs/conversions.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's feedback

* fixing tests

* gaz

* Update api/server/structs/conversions.go

* Update api/server/structs/conversions.go

* Update api/server/structs/conversions.go

* Update api/server/structs/conversions.go

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-02-05 14:59:57 +00:00
james-prysm
e331d5b371 improving proposer settings loader readability (#14868)
* updating loader code and adding change log

* updating variable names to reduce confusion

* exporting loader type

* Update config/proposer/loader/loader.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update config/proposer/loader/loader.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update config/proposer/loader/loader.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* gofmt

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2025-02-04 23:18:50 +00:00
Taranpreet26311
8d5090ce54 Update go-ethereum to v1.14.13 (#14872)
* Update geth dependency in go

* Updated geth

* Add changelog update

* Remove change log line

* Modify changelog line
2025-02-04 16:11:18 +00:00
Radosław Kapka
25244d906d Modify comment in recomputeFieldTrie (#14873) 2025-02-04 12:20:40 +00:00
Preston Van Loon
aa445713ac Remove validator.SignValidatorRegistrationRequest span (#14864) 2025-02-03 17:07:49 +00:00
Radosław Kapka
177769a1ce Update Beacon API events to Electra (#14855)
* Update Beacon API events to Electra

* changelog <3

* fix issues

* send notifications from pending att queue

* Revert "send notifications from pending att queue"

This reverts commit 545408f6cf.
2025-02-03 16:16:38 +00:00
Radosław Kapka
967e9255a2 Fix monitor service for Electra (#14853)
* Fix monitor service for Electra

* changelog <3
2025-02-03 15:12:14 +00:00
Manu NALEPA
01705d1f3d Peer das sync empty requests (#14854)
* `TestBuildBwbSlices`: Add test case failing with the current implementation.

* Fix `buildBwbSlices` to comply with the new test case.

* `block_fetchers.go`: Improve logging and godoc.

* `DataColumnsRPCMinValidSlot`: Update to Fulu.
2025-02-03 15:23:04 +01:00
terence
910609a75f Handle errors as no-op for execution requests (#14826)
* Update electra core processing error handling

* Add test for IsExecutionRequestError

* Add TestProcessOperationsWithNilRequests

* gazelle

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-01-31 22:17:27 +00:00
kasey
f9c202190a warnings for flags due for deprecation (#14856)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-01-31 21:30:27 +00:00
Manu NALEPA
14f93b4e9d Sync: Integrate batch directly in buildBwbSlices. (#14843)
Previously, the bwb slices were built first, and only then were too-big requests batched in `buildDataColumnSidecarsByRangeRequests`.

In some edge cases, this led to requesting data columns from peers for blocks with no blobs.

Splitting into batches directly in `buildBwbSlices` fixes the issue.
2025-01-30 12:11:06 +01:00
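A minimal sketch of the batching idea from this commit, with hypothetical names (`block`, `buildBatches` are illustrations, not Prysm's actual types): blocks carrying no blob commitments are skipped while the batches are built, so no request is ever issued for them.

```go
package main

import "fmt"

type block struct {
	slot        uint64
	commitments int // number of blob KZG commitments carried by the block
}

// buildBatches groups blocks that actually carry commitments into batches of
// at most batchSize, instead of batching oversized requests after the fact.
func buildBatches(blocks []block, batchSize int) [][]block {
	var batches [][]block
	var current []block
	for _, b := range blocks {
		if b.commitments == 0 {
			continue // nothing to request for this block
		}
		current = append(current, b)
		if len(current) == batchSize {
			batches = append(batches, current)
			current = nil
		}
	}
	if len(current) > 0 {
		batches = append(batches, current)
	}
	return batches
}

func main() {
	blocks := []block{{1, 2}, {2, 0}, {3, 1}, {4, 3}, {5, 0}, {6, 1}}
	for i, batch := range buildBatches(blocks, 2) {
		fmt.Printf("batch %d: %v\n", i, batch)
	}
}
```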
Manu NALEPA
ad11036c36 reconstructAndBroadcastBlobs: Temporarily deactivate starting at Fulu. 2025-01-27 15:15:34 +01:00
Manu NALEPA
632a06076b Merge branch 'develop' into peerDAS 2025-01-22 21:30:32 +01:00
Manu NALEPA
242c2b0268 Merge branch 'develop' into peerDAS 2025-01-22 20:08:10 +01:00
Ekaterina Riazantseva
19662da905 Add PeerDAS kzg and inclusion proof verification metrics (#14814) 2025-01-21 16:20:10 +01:00
Ekaterina Riazantseva
7faee5af35 Add PeerDAS gossip verification metrics (#14796) 2025-01-21 16:16:12 +01:00
Ekaterina Riazantseva
805ee1bf31 Add 'beacon' prefix to 'data_column_sidecar_computation' metric (#14790) 2025-01-21 16:14:26 +01:00
Manu NALEPA
bea46fdfa1 Merge branch 'develop' into peerDAS 2025-01-20 13:37:29 +01:00
Manu NALEPA
f6b1fb1c88 Merge branch 'develop' into peerDAS 2025-01-16 10:23:21 +01:00
Manu NALEPA
6fb349ea76 unmarshalState: Use hasFuluKey. 2025-01-15 20:48:25 +01:00
Manu NALEPA
e5a425f5c7 Merge branch 'develop' into peerDAS 2025-01-15 17:18:34 +01:00
Manu NALEPA
f157d37e4c peerDAS: Decouple network subnets from das-core. (#14784)
https://github.com/ethereum/consensus-specs/pull/3832/
2025-01-14 10:45:05 +01:00
Manu NALEPA
5f08559bef Merge branch 'develop' into peerDAS 2025-01-08 10:18:18 +01:00
Manu NALEPA
a082d2aecd Merge branch 'fulu-boilerplate' into peerDAS 2025-01-06 13:45:33 +01:00
Manu NALEPA
bcfaff8504 Upgraded state to <fork> log: Move from debug to info.
Rationale:
This log is the only one notifying the user that a new fork has happened.
A new fork is always a little bit stressful for a node operator.
Having at least one log indicating that the client switched forks is useful.
2025-01-05 16:22:43 +01:00
Manu NALEPA
d8e09c346f Implement the Fulu fork boilerplate. 2025-01-05 16:22:38 +01:00
Manu NALEPA
876519731b Prepare for future fork boilerplate. 2025-01-05 16:14:02 +01:00
Manu NALEPA
de05b83aca Merge branch 'develop' into peerDAS 2024-12-30 15:11:02 +01:00
Manu NALEPA
56c73e7193 Merge branch 'develop' into peerDAS 2024-12-27 22:11:36 +01:00
Manu NALEPA
859ac008a8 Activate peerDAS at electra. (#14734) 2024-12-27 09:48:57 +01:00
Manu NALEPA
f882bd27c8 Merge branch 'develop' into peerDAS 2024-12-18 16:15:32 +01:00
Manu NALEPA
361e5759c1 Merge branch 'develop' into peerDAS 2024-12-17 22:19:20 +01:00
Manu NALEPA
34ef0da896 Merge branch 'develop' into peerDAS 2024-12-10 23:11:45 +01:00
Manu NALEPA
726e8b962f Revert "Revert "Add error count prom metric (#14670)""
This reverts commit 5f17317c1c.
2024-12-10 21:49:40 +01:00
Manu NALEPA
453ea01deb disconnectFromPeer: Remove unused function. 2024-11-28 17:37:30 +01:00
Manu NALEPA
6537f8011e Merge branch 'peerDAS' into peerDAS-do-not-merge 2024-11-28 17:27:44 +01:00
Manu NALEPA
5f17317c1c Revert "Add error count prom metric (#14670)"
This reverts commit b28b1ed6ce.
2024-11-28 16:37:19 +01:00
Manu NALEPA
3432ffa4a3 PeerDAS: Batch columns verifications (#14559)
* `ColumnAlignsWithBlock`: Split lines.

* Data columns verifications: Batch

* Remove completely `DataColumnBatchVerifier`.

Only `DataColumnsVerifier` (with `s`) on columns remains.
It is the responsibility of the function that receives the data column
(whether by gossip, by-range request, or by-root request) to verify the
data column with respect to the corresponding checks.

* Fix Nishant's comment.
2024-11-27 10:37:03 +01:00
Manu NALEPA
9dac67635b streamDataColumnBatch: Sort columns by index. (#14542)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#datacolumnsidecarsbyrange-v1

The following data column sidecars, where they exist, MUST be sent in (slot, column_index) order.
2024-11-27 10:37:03 +01:00
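A hedged illustration of the (slot, column_index) response ordering the spec quote above requires; the `sidecar` struct here is a stand-in, not Prysm's type.

```go
package main

import (
	"fmt"
	"sort"
)

type sidecar struct {
	slot  uint64
	index uint64 // column index
}

func main() {
	sidecars := []sidecar{{slot: 2, index: 5}, {slot: 1, index: 7}, {slot: 1, index: 3}}
	// Sort by slot first, then by column index within a slot.
	sort.Slice(sidecars, func(i, j int) bool {
		if sidecars[i].slot != sidecars[j].slot {
			return sidecars[i].slot < sidecars[j].slot
		}
		return sidecars[i].index < sidecars[j].index
	})
	fmt.Println(sidecars) // [{1 3} {1 7} {2 5}]
}
```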
Manu NALEPA
9be69fbd07 PeerDAS: Fix major bug in dataColumnSidecarsByRangeRPCHandler and allow syncing from full nodes. (#14532)
* `validateDataColumnsByRange`: `current` ==> `currentSlot`.

* `validateRequest`: Extract `remotePeer` variable.

* `dataColumnSidecarsByRangeRPCHandler`: Small non functional refactor.

* `streamDataColumnBatch`: Fix major bug.

Before this commit, the node was unable to respond with a data column index higher than the count of stored data columns.
For example, if there are 8 data columns stored for a given block, the node was
able to respond for data column indices 1, 3, and 5, but not for 10, 16, or 127.

The issue was visible only for full nodes, since super nodes always store 128 data columns.

* Initial sync: Fetch data columns from all peers.
(Not only from supernodes.)

* Nishant's comment: Fix `lastSlot` and `endSlot` duplication.

* Address Nishant's comment.
2024-11-27 10:37:03 +01:00
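A self-contained sketch of the bug class described above, with hypothetical names: a full node storing 8 of 128 columns must serve any stored *index* (e.g. 127), not only indices smaller than the stored count.

```go
package main

import "fmt"

func main() {
	stored := map[uint64]string{1: "col1", 3: "col3", 127: "col127"}
	requested := []uint64{3, 10, 127}

	// Buggy shape: iterating positions 0..len(stored) can never reach index 127.
	for i := uint64(0); i < uint64(len(stored)); i++ {
		if col, ok := stored[i]; ok {
			fmt.Println("buggy handler serves:", col)
		}
	}

	// Correct shape: look each requested index up directly.
	for _, idx := range requested {
		if col, ok := stored[idx]; ok {
			fmt.Println("fixed handler serves:", col)
		}
	}
}
```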
Manu NALEPA
e21261e893 Data columns initial sync: Rework. (#14522) 2024-11-27 10:37:03 +01:00
Nishant Das
da53a8fc48 Fix Commitments Check (#14493)
* Fix Commitments Check

* `highestFinalizedEpoch`: Refactor (no functional change).

* `retrieveMissingDataColumnsFromPeers`: Fix logs.

* `VerifyDataColumnSidecarKZGProofs`: Optimise with capacity.

* Save data columns when initial syncing.

* `dataColumnSidecarsByRangeRPCHandler`: Add logs when a request enters.

* Improve logging.

* Improve logging.

* `peersWithDataColumns`: No longer filter on peer head slot.

* Fix Nishant's comment.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:37:03 +01:00
Manu NALEPA
a14634e656 PeerDAS: Improve initial sync logs (#14496)
* `retrieveMissingDataColumnsFromPeers`: Search only for needed peers.

* Improve logging.
2024-11-27 10:37:03 +01:00
Manu NALEPA
43761a8066 PeerDAS: Fix initial sync with super nodes (#14495)
* Improve logging.

* `retrieveMissingDataColumnsFromPeers`: Limit to `512` items per request.

* `retrieveMissingDataColumnsFromPeers`: Allow `nil` peers.

Before this commit:
If, when this function is called, we are not yet connected to enough peers, then `peers` may not be satisfactory,
and, if new peers connect later, we will never see them.

After this commit:
If `peers` is `nil`, then we regularly check for all connected peers.
If `peers` is not `nil`, then we use them.
2024-11-27 10:37:03 +01:00
Manu NALEPA
01dbc337c0 PeerDAS: Fix initial sync (#14494)
* `BestFinalized`: Refactor (no functional change).

* `BestNonFinalized`: Refactor (no functional change).

* `beaconBlocksByRangeRPCHandler`: Remove useless log.

The same log is already printed at the start of the function.

* `calculateHeadAndTargetEpochs`: Avoid `else`.

* `ConvertPeerIDToNodeID`: Improve error.

* Stop printing noisy "peer should be banned" logs.

* Initial sync: Request data columns from peers which:
- custody a superset of columns we need, and
- have a head slot >= our target slot.

* `requestDataColumnsFromPeers`: Shuffle peers before requesting.

Before this commit, we always requested peers in the same order,
until one of them responded.
Without shuffling, we always requested data columns from the same
peer.

* `requestDataColumnsFromPeers`: If error from a peer, just log the error and skip the peer.

* Improve logging.

* Fix tests.
2024-11-27 10:37:03 +01:00
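A minimal sketch of the shuffling fix from `requestDataColumnsFromPeers`: randomizing peer order spreads data-column requests across peers instead of always hitting the first one. Peer IDs here are placeholders, and the request itself is simulated.

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	peers := []string{"peerA", "peerB", "peerC", "peerD"}
	// Shuffle so the request order varies between attempts.
	rand.Shuffle(len(peers), func(i, j int) { peers[i], peers[j] = peers[j], peers[i] })
	for _, p := range peers {
		fmt.Println("requesting data columns from", p)
	}
}
```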
Nishant Das
92f9b55fcb Put Subscriber in Goroutine (#14486) 2024-11-27 10:36:18 +01:00
Manu NALEPA
f65f12f58b Stop disconnecting peers for bad response / excessive colocation. (#14483) 2024-11-27 10:36:17 +01:00
Manu NALEPA
f2b61a3dcf PeerDAS: Misc improvements (#14482)
* `retrieveMissingDataColumnsFromPeers`: Improve logging.

* `dataColumnSidecarByRootRPCHandler`: Stop decreasing a peer's score when it asks for a column we do not custody.

* `dataColumnSidecarByRootRPCHandler`: If a data column is unavailable, stop waiting for it.

This behaviour was useful for peer sampling.
Now, just return the data column if we store it.
If we don't, skip.

* Dirty code comment.

* `retrieveMissingDataColumnsFromPeers`: Improve logs.

* `SendDataColumnsByRangeRequest`: Improve logs.

* `dataColumnSidecarsByRangeRPCHandler`: Improve logs.
2024-11-27 10:34:38 +01:00
Manu NALEPA
77a6d29a2e PeerDAS: Re-enable full node joining the main fork (#14475)
* `columnErrBuilder`: Uses `Wrap` instead of `Join`.

Reason: `Join` inserts a newline, which makes the log quite unreadable.

* `validateDataColumn`: Improve log.

* `areDataColumnsAvailable`: Improve log.

* `SendDataColumnSidecarByRoot` ==> `SendDataColumnSidecarsByRootRequest`.

* `handleDA`: Refactor error message.

* `sendRecentBeaconBlocksRequest` ==> `sendBeaconBlocksRequest`.

Reason: There is no notion at all of "recent" in the function.

If the caller decides to call this function only with "recent" blocks, that's fine.
However, the function itself will know nothing about the "recentness" of these blocks.

* `sendBatchRootRequest`: Improve comments.

* `sendBeaconBlocksRequest`: Avoid `else` usage and use map of bool instead of `struct{}`.

* `wrapAndReportValidation`: Remove `agent` from log.

Reason: This prevents the log from fitting on one line, and it is not really useful for debugging.

* `validateAggregateAndProof`: Add comments.

* `GetValidCustodyPeers`: Fix typo.

* `GetValidCustodyPeers` ==> `DataColumnsAdmissibleCustodyPeers`.

* `CustodyHandler` ==> `DataColumnsHandler`.

* `CustodyCountFromRemotePeer` ==> `DataColumnsCustodyCountFromRemotePeer`.

* Implement `DataColumnsAdmissibleSubnetSamplingPeers`.

* Use `SubnetSamplingSize` instead of `CustodySubnetCount` where needed.

* Revert "`wrapAndReportValidation`: Remove `agent` from log."

This reverts commit 55db351102.
2024-11-27 10:34:38 +01:00
Manu NALEPA
31d16da3a0 PeerDAS: Multiple improvements (#14467)
* `scheduleReconstructedDataColumnsBroadcast`: Really minor refactor.

* `receivedDataColumnsFromRootLock` -> `dataColumnsFromRootLock`

* `reconstructDataColumns`: Stop looking into the DB to know if we have some columns.

Before this commit:
Each time we receive a column, we look into the filesystem for all columns we store.
==> For 128 columns, that amounts to 1 + 2 + 3 + ... + 128 = 128(128+1)/2 = 8256 file lookups.

Also, as soon as a column is saved into the filesystem, we assume that if we look
at the filesystem again right after, the column will be available (strict consistency).
That turns out not to always be true.

==> Sometimes, we reconstruct and reseed columns more than once, because of this lack of strict filesystem consistency.

After this commit:
We use a (strictly consistent) cache to determine whether we received a column or not.
==> No more consistency issues, and less stress on the filesystem.

* `dataColumnSidecarByRootRPCHandler`: Improve logging.

Before this commit, logged values assumed that all requested columns correspond to
the same block root, which is not always the case.

After this commit, we know which columns are requested for which root.

* Add a log when broadcasting a data column.

This is useful to debug "lost data columns" in devnet.

* Address Nishant's comment
2024-11-27 10:34:38 +01:00
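A sketch of the strictly consistent cache idea, with hypothetical types (not Prysm's code): track received columns in memory instead of re-scanning the filesystem, avoiding both the quadratic file lookups and the reconstruct-twice race described above.

```go
package main

import (
	"fmt"
	"sync"
)

type root [32]byte

type columnCache struct {
	mu   sync.Mutex
	seen map[root]map[uint64]bool // block root -> set of received column indices
}

func newColumnCache() *columnCache {
	return &columnCache{seen: make(map[root]map[uint64]bool)}
}

// markSeen records a column and returns how many columns of that root we hold.
func (c *columnCache) markSeen(r root, index uint64) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.seen[r] == nil {
		c.seen[r] = make(map[uint64]bool)
	}
	c.seen[r][index] = true
	return len(c.seen[r])
}

func main() {
	cache := newColumnCache()
	var r root
	for _, idx := range []uint64{0, 5, 64, 5} {
		fmt.Println("columns held:", cache.markSeen(r, idx)) // duplicate 5 is a no-op
	}
}
```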
Justin Traglia
19221b77bd Update c-kzg-4844 to v2.0.1 (#14421) 2024-11-27 10:34:38 +01:00
Manu NALEPA
83df293647 Peerdas: Several updates (#14459)
* `validateDataColumn`: Refactor logging.

* `dataColumnSidecarByRootRPCHandler`: Improve logging.

* `isDataAvailable`: Improve logging.

* Add hidden debug flag: `--data-columns-reject-slot-multiple`.

* Add more logs about peer disconnection.

* `validPeersExist` --> `enoughPeersAreConnected`

* `beaconBlocksByRangeRPCHandler`: Add remote Peer ID in logs.

* Stop calling `writeErrorResponseToStream` twice in case of rate limiting.
2024-11-27 10:34:37 +01:00
Manu NALEPA
c20c09ce36 Peerdas: Full subnet sampling and sendBatchRootRequest fix. (#14452)
* `sendBatchRootRequest`: Refactor and add comments.

* `sendBatchRootRequest`: Only send requests to peers that custody a superset of our columns.

Before this commit, we sent "data columns by root" requests for data columns that peers do not custody.

* Data columns: Use subnet sampling only.

(Instead of peer sampling.)

* `areDataColumnsAvailable`: Improve logs.

* `GetBeaconBlock`: Improve logs.

Rationale: A `begin` log should always be followed by a `success` log or a `failure` log.
2024-11-27 10:30:29 +01:00
Manu NALEPA
2191faaa3f Fix CPU usage in small devnets (#14446)
* `CustodyCountFromRemotePeer`: Set happy path in the outer scope.

* `FindPeersWithSubnet`: Improve logging.

* `listenForNewNodes`: Avoid infinite loop in a small subnet.

* Address Nishant's comment.

* FIx Nishant's comment.
2024-11-27 10:30:29 +01:00
Nishant Das
2de1e6f3e4 Revert "Change Custody Count to Uint8 (#14386)" (#14415)
This reverts commit bd7ec3fa97.
2024-11-27 10:30:29 +01:00
Manu NALEPA
db44df3964 Fix Initial Sync with 128 data columns subnets (#14403)
* `pingPeers`: Add log with new ENR when modified.

* `p2p Start`: Use idiomatic go error syntax.

* P2P `start`: Fix error message.

* Do not use bootnodes at all if the `--chain-config-file` flag is used and no `--bootstrap-node` flag is used.

Before this commit, if the `--chain-config-file` flag was used and no `--bootstrap-node` flag was used, bootnodes were (incorrectly) defaulted to the `mainnet` ones.

* `validPeersExist`: Centralize logs.

* `AddConnectionHandler`: Improve logging.

"Peer connected" does not really reflect the fact that a new peer is actually connected. --> "New peer connection" is more clear.

Also, instead of writing `0`, `1`or `2` for direction, now it's writted "Unknown", "Inbound", "Outbound".

* Logging: Add 2 decimals to timestamps in text and JSON logs.

* Improve "no valid peers" logging.

* Improve "Some columns have no peers responsible for custody" logging.

* `pubsubSubscriptionRequestLimit`: Increase to be consistent with data columns.

* `sendPingRequest`: Improve logging.

* `FindPeersWithSubnet`: Regularly recheck in our current set of peers if we have enough peers for this topic.

Before this commit, new peers HAD to be found, even if the current peers were already acceptable.
For very small networks, this used to lead to an infinite search.

* `subscribeDynamicWithSyncSubnets`: Use exactly the same subscription function initially and every slot.

* Make deepsource happier.

* Nishant's comment: Change the peer disconnected log.

* Nishant's comment: Change the `Too many incoming subscription` log from error to debug.

* `FindPeersWithSubnet`: Address Nishant's comment.

* `batchSize`: Address Nishant's comment.

* `pingPeers` ==> `pingPeersAndLogEnr`.

* Update beacon-chain/sync/subscriber.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:30:29 +01:00
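A sketch of the `FindPeersWithSubnet` fix described above, with simulated peer counts: before hunting for *new* peers, regularly recheck whether the peers we already have satisfy the topic, so tiny devnets do not trigger an endless search. All names are illustrative.

```go
package main

import "fmt"

// findPeersForSubnet rechecks the current peer set each round and only keeps
// searching while the topic is still under-provisioned.
func findPeersForSubnet(currentPeers func() int, wanted, maxRounds int) bool {
	for round := 0; round < maxRounds; round++ {
		if currentPeers() >= wanted { // recheck existing peers first
			return true
		}
		fmt.Println("round", round, ": searching for additional peers")
		// ... a discovery lookup would go here ...
	}
	return false
}

func main() {
	peers := 0
	count := func() int { peers++; return peers } // simulate peers joining over time
	fmt.Println("enough peers found:", findPeersForSubnet(count, 3, 10))
}
```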
Nishant Das
f92eb44c89 Add Data Column Computation Metrics (#14400)
* Add Data Column Metrics

* Shift it All To Peerdas Package
2024-11-27 10:24:03 +01:00
Nishant Das
a26980b64d Set Precompute at 8 (#14399) 2024-11-27 10:24:03 +01:00
Manu NALEPA
f58cf7e626 PeerDAS: Improve logging and reduce the number of needed goroutines for reconstruction (#14397)
* `broadcastAndReceiveDataColumns`: Use real `sidecar.ColumnIndex` instead of position in the slice.

And improve logging as well.

* `isDataColumnsAvailable`: Improve logging.

* `validateDataColumn`: Print `Accepted data column sidecar gossip` only at the very end.

* Subscriber: Improve logging.

* `sendAndSaveDataColumnSidecars`: Use a commonly used function for logging.

* `dataColumnSidecarByRootRPCHandler`: Logging - Print `all` instead of all the columns for a super node.

* Verification: Improve logging.

* `DataColumnsWithholdCount`: Set as `uint64` instead of `int`.

* `DataColumnFields`: Improve logging.

* Logging: Remove the now-useless private `columnFields` function.

* Avoid useless goroutines blocking for reconstruction.

* Update beacon-chain/sync/subscriber.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Address Nishant's comment.

* Improve logging.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:24:03 +01:00
Nishant Das
68da7dabe2 Fix Bugs in PeerDAS Testing (#14396)
* Fix Various Bugs in PeerDAS

* Remove Log

* Remove useless copy var.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:24:03 +01:00
Nishant Das
d1e43a2c02 Change Custody Count to Uint8 (#14386)
* Add Changes for Uint8 Csc

* Fix Build

* Fix Build for Sync

* Fix Discovery Test
2024-11-27 10:24:03 +01:00
Nishant Das
3652bec2f8 Use Data Column Validation Across Prysm (#14377)
* Use Data Column Validation Everywhere

* Fix Build

* Fix Lint

* Fix Clock Synchronizer

* Fix Panic
2024-11-27 10:24:03 +01:00
Nishant Das
81b7a1725f Update Config To Latest Value (#14352)
* Update values

* Update Spec To v1.5.0-alpha.5

* Fix Discovery Tests

* Hardcode Subnet Count For Tests

* Fix All Initial Sync Tests

* Gazelle

* Less Chaotic Service Initialization

* Gazelle
2024-11-27 10:24:03 +01:00
Nishant Das
0c917079c4 Fix CI in PeerDAS (#14347)
* Update go.yml

* Disable mnd

* Update .golangci.yml

* Update go.yml

* Update go.yml

* Update .golangci.yml

* Update go.yml

* Fix Lint Issues

* Remove comment

* Update .golangci.yml
2024-11-27 10:24:03 +01:00
Manu NALEPA
a732fe7021 Implement /eth/v1/beacon/blob_sidecars/{block_id} for peerDAS. (#14312)
* `parseIndices`: `O(n**2)` ==> `O(n)`.

* PeerDAS: Implement `/eth/v1/beacon/blob_sidecars/{block_id}`.

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Rename some functions.

* `Blobs`: Fix empty slice.

* `recoverCellsAndProofs` --> Move function in `beacon-chain/core/peerdas`.

* peerDAS helpers: Add missing tests.

* Implement `CustodyColumnCount`.

* `RecoverCellsAndProofs`: Remove useless argument `columnsCount`.

* Tests: Add cleanups.

* `blobsFromStoredDataColumns`: Reconstruct if needed.

* Make deepsource happy.

* Beacon API: Use provided indices.

* Make deepsource happier.

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-27 10:24:03 +01:00
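A sketch of the `parseIndices`-style complexity fix mentioned in the first bullet (hypothetical code, not Prysm's): using a set for duplicate detection turns the quadratic "rescan what we already collected" pattern into a single O(n) pass.

```go
package main

import (
	"fmt"
	"strconv"
)

func parseIndices(raw []string, max uint64) ([]uint64, error) {
	seen := make(map[uint64]bool, len(raw))
	indices := make([]uint64, 0, len(raw))
	for _, s := range raw {
		v, err := strconv.ParseUint(s, 10, 64)
		if err != nil || v >= max {
			return nil, fmt.Errorf("invalid index %q", s)
		}
		if seen[v] { // O(1) duplicate check instead of rescanning the slice
			continue
		}
		seen[v] = true
		indices = append(indices, v)
	}
	return indices, nil
}

func main() {
	fmt.Println(parseIndices([]string{"0", "2", "2", "5"}, 6))
}
```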
Nishant Das
d75a7aae6a Add Data Column Verification (#14287)
* Persist All Changes

* Fix All Tests

* Fix Build

* Fix Build

* Fix Build

* Fix Test Again

* Add missing verification

* Add Test Cases for Data Column Validation

* Fix comments for methods

* Fix comments for methods

* Fix Test

* Manu's Review
2024-11-27 10:24:03 +01:00
Manu NALEPA
e788a46e82 PeerDAS: Add MetadataV3 with custody_subnet_count (#14274)
* `sendPingRequest`: Add some comments.

* `sendPingRequest`: Replace `stream.Conn().RemotePeer()` by `peerID`.

* `pingHandler`: Add comments.

* `sendMetaDataRequest`: Add comments and implement an unique test.

* Gather `SchemaVersion`s in the same `const` definition.

* Define `SchemaVersionV3`.

* `MetaDataV1`: Fix comment.

* Proto: Define `MetaDataV2`.

* `MetaDataV2`: Generate SSZ.

* `newColumnSubnetIDs`: Use smaller lines.

* `metaDataHandler` and `sendMetaDataRequest`: Manage `MetaDataV2`.

* `RefreshPersistentSubnets`: Refactor tests (no functional change).

* `RefreshPersistentSubnets`: Refactor and add comments (no functional change).

* `RefreshPersistentSubnets`: Compare cache with both ENR & metadata.

* `RefreshPersistentSubnets`: Manage peerDAS.

* `registerRPCHandlersPeerDAS`: Register `RPCMetaDataTopicV3`.

* `CustodyCountFromRemotePeer`: Retrieve the count from metadata.

Then fall back to the ENR, and finally to the default value.

* Update beacon-chain/sync/rpc_metadata.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Fix duplicate case.

* Remove version testing.

* `debug.proto`: Stop breaking ordering.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-11-27 10:24:03 +01:00
Manu NALEPA
199543125a Fix data columns sampling (#14263)
* Fix the obvious...

* Data columns sampling: Modify logging.

* `waitForChainStart`: Make it thread-safe - only wait once.

* Sampling: Wait for chain start before running the sampling.

Reason: `newDataColumnSampler1D` needs `s.ctxMap`.
`s.ctxMap` is only set when chain is started.

Previously `waitForChainStart` was only called in `s.registerHandlers`, itself called in a goroutine.

==> We had a race condition here: sometimes `newDataColumnSampler1D` was called after `s.ctxMap` was set, sometimes not.

* Address Nishant's comments.

* Sampling: Improve logging.

* `waitForChainStart`: Remove `chainIsStarted` check.
2024-11-27 10:19:07 +01:00
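A sketch of making a "wait for chain start" call thread-safe with `sync.Once` (the `service` struct here is hypothetical): every caller blocks until chain start is observed, and the wait itself runs exactly once, which removes the race described above.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type service struct {
	once    sync.Once
	started chan struct{}
}

// waitForChainStart blocks all callers until the first waiter has observed
// chain start; sync.Once guarantees the wait body runs exactly once.
func (s *service) waitForChainStart() {
	s.once.Do(func() {
		<-s.started
		fmt.Println("chain started; ctxMap can now be set")
	})
}

func main() {
	s := &service{started: make(chan struct{})}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); s.waitForChainStart() }()
	}
	time.Sleep(10 * time.Millisecond)
	close(s.started) // signal chain start once
	wg.Wait()
}
```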
Manu NALEPA
ca63efa770 PeerDAS: Fix initial sync (#14208)
* `SendDataColumnsByRangeRequest`: Add some new fields in logs.

* `BlobStorageSummary`: Implement `HasDataColumnIndex` and `AllDataColumnsAvailable`.

* Implement `fetchDataColumnsFromPeers`.

* `fetchBlobsFromPeer`: Return only one error.
2024-11-27 10:19:07 +01:00
Manu NALEPA
345e6edd9c Make deepsource happy (#14237)
* DeepSource: Pass heavy objects by pointers.

* `removeBlockFromQueue`: Remove redundant error checking.

* `fetchBlobsFromPeer`: Use same variable for `append`.

* Remove unused arguments.

* Combine types.

* `Persist`: Add documentation.

* Remove unused receiver

* Remove duplicated import.

* Stop using both pointer and value receiver at the same time.

* `verifyAndPopulateColumns`: Remove unused parameter

* Stop using an empty slice literal to declare a variable.
2024-11-27 10:19:07 +01:00
Manu NALEPA
6403064126 PeerDAS: Run reconstruction in parallel. (#14236)
* PeerDAS: Run reconstruction in parallel.

* `isDataAvailableDataColumns` --> `isDataColumnsAvailable`

* `isDataColumnsAvailable`: Return `nil` as soon as half of the columns are received.

* Make deepsource happy.
2024-11-27 10:19:07 +01:00
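A sketch of the early-exit availability check from the third bullet: with 128 extended columns and a 1D erasure code, any half of the columns suffices to reconstruct the rest, so waiting can stop at the halfway mark. Names are illustrative.

```go
package main

import "fmt"

const numberOfColumns = 128

// isDataColumnsAvailable returns true as soon as half of the columns are in,
// since the remaining columns can then be reconstructed.
func isDataColumnsAvailable(received map[uint64]bool) bool {
	return len(received) >= numberOfColumns/2
}

func main() {
	received := make(map[uint64]bool)
	for i := uint64(0); i < 64; i++ {
		received[i] = true
	}
	fmt.Println("available:", isDataColumnsAvailable(received)) // true
}
```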
Justin Traglia
0517d76631 Update ckzg4844 to latest version of das branch (#14223)
* Update ckzg4844 to latest version

* Run go mod tidy

* Remove unnecessary tests & run goimports

* Remove fieldparams from blockchain/kzg

* Add back blank line

* Avoid large copies

* Run gazelle

* Use trusted setup from the specs & fix issue with struct

* Run goimports

* Fix mistake in makeCellsAndProofs

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:19:07 +01:00
Nishant Das
000d480f77 Add Current Changes (#14231) 2024-11-27 10:19:07 +01:00
Manu NALEPA
b40a8ed37e Implement and use filterPeerForDataColumnsSubnet. (#14230) 2024-11-27 10:19:07 +01:00
Francis Li
d21c2bd63e [PeerDAS] Parallelize data column sampling (#14105)
* PeerDAS: parallelizing sample queries

* PeerDAS: select sample from non custodied columns

* Finish rebase

* Add more test cases
2024-11-27 10:19:07 +01:00
kevaundray
7a256e93f7 chore!: Use RecoverCellsAndKZGProofs instead of RecoverAllCells -> CellsToBlob -> ComputeCellsAndKZGProofs (#14183)
* use recoverCellsAndKZGProofs

* make recoverAllCells and CellsToBlob private

* chore: all methods now return CellsAndProof struct

* chore: update code
2024-11-27 10:19:07 +01:00
Nishant Das
07fe76c2da Trigger PeerDAS At Deneb For E2E (#14193)
* Trigger At Deneb

* Fix Rate Limits
2024-11-27 10:19:07 +01:00
Manu NALEPA
54affa897f PeerDAS: Add KZG verification when sampling (#14187)
* `validateDataColumn`: Add comments and remove debug computation.

* `sampleDataColumnsFromPeer`: Add KZG verification

* `VerifyKZGInclusionProofColumn`: Add unit test.

* Make deepsource happy.

* Address Nishant's comment.

* Address Nishant's comment.
2024-11-27 10:16:50 +01:00
kevaundray
ac4c5fae3c chore!: Make Cell be a flat sequence of bytes (#14159)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* make Cells be flattened sequence of bytes

* chore: add test for flattening roundtrip

* chore: remove code that was doing the flattening outside of the kzg package

* fix merge

* fix

* remove now un-needed conversion

* use pointers for Cell parameters

* linter

* rename cell conversion methods (this only applies to old version of c-kzg)
2024-11-27 10:16:50 +01:00
Manu NALEPA
2845d87077 Move log from error to debug. (#14194)
Reason: If a peer does not expose its `csc` field in its ENR,
then there is nothing we can do.
2024-11-27 10:16:50 +01:00
Nishant Das
dc2c90b8ed Activate PeerDAS with the EIP7594 Fork Epoch (#14184)
* Save All the Current Changes

* Add check for data sampling

* Fix Test

* Gazelle

* Manu's Review

* Fix Test
2024-11-27 10:16:50 +01:00
kevaundray
b469157e1f chore!: Refactor RecoverBlob to RecoverCellsAndProofs (#14160)
* change recoverBlobs to recoverCellsAndProofs

* modify code to take in the cells and proofs for a particular blob instead of the blob itself

* add CellsAndProofs structure

* modify recoverCellsAndProofs to return `cellsAndProofs` structure

* modify `DataColumnSidecarsForReconstruct` to accept the `cellsAndKZGProofs` structure

* bazel run //:gazelle -- fix

* use kzg abstraction for kzg method

* move CellsAndProofs to kzg.go
2024-11-27 10:16:50 +01:00
kevaundray
2697794e58 chore: Encapsulate all kzg functionality for PeerDAS into the kzg package (#14136)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* use BytesPerBlob constant

* chore: fix some deepsource issues

* one declaration for commitments and blobs
2024-11-27 10:16:50 +01:00
Manu NALEPA
48cf24edb4 PeerDAS: Implement IncrementalDAS (#14109)
* `ConvertPeerIDToNodeID`: Add tests.

* Remove `extractNodeID` and uses `ConvertPeerIDToNodeID` instead.

* Implement IncrementalDAS.

* `DataColumnSamplingLoop` ==> `DataColumnSamplingRoutine`.

* HypergeomCDF: Add test.

* `GetValidCustodyPeers`: Optimize and add tests.

* Remove blank identifiers.

* Implement `CustodyCountFromRecord`.

* Implement `TestP2P.CustodyCountFromRemotePeer`.

* `NewTestP2P`: Add `swarmt.Option` parameters.

* `incrementalDAS`: Rework and add tests.

* Remove useless warning.
2024-11-27 10:16:50 +01:00
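IncrementalDAS relies on a hypergeometric tail bound; the `HypergeomCDF` helper referenced above plausibly computes P(X <= k) when sampling n of N columns of which K are "bad". This is my own self-contained take under that assumption (parameter naming is mine), using exact arithmetic via math/big for robustness.

```go
package main

import (
	"fmt"
	"math/big"
)

// hypergeomCDF returns P(X <= k) for a hypergeometric distribution:
// population N, K marked items, n draws without replacement.
func hypergeomCDF(k, N, K, n int64) float64 {
	total := new(big.Float).SetInt(new(big.Int).Binomial(N, n))
	sum := new(big.Float)
	for i := int64(0); i <= k; i++ {
		if i > K || n-i > N-K || n-i < 0 {
			continue // impossible draw, probability zero
		}
		term := new(big.Int).Mul(
			new(big.Int).Binomial(K, i),
			new(big.Int).Binomial(N-K, n-i),
		)
		sum.Add(sum, new(big.Float).Quo(new(big.Float).SetInt(term), total))
	}
	out, _ := sum.Float64()
	return out
}

func main() {
	// e.g. sample 16 of 128 columns, 64 of which are unavailable:
	// probability of drawing none of the unavailable ones.
	fmt.Printf("P(X <= 0) = %.6f\n", hypergeomCDF(0, 128, 64, 16))
}
```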
Francis Li
78f90db90b PeerDAS: add data column batch config (#14122) 2024-11-27 10:15:27 +01:00
Francis Li
d0a3b9bc1d [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-11-27 10:15:27 +01:00
Manu NALEPA
bfdb6dab86 Fix columns sampling (#14118) 2024-11-27 10:15:27 +01:00
Francis Li
7dd2fd52af [PeerDAS] implement DataColumnSidecarsByRootReq and fix related bugs (#14103)
* [PeerDAS] add data column related protos and fix data column by root bug

* Add more tests
2024-11-27 10:15:27 +01:00
Francis Li
b6bad9331b [PeerDAS] fixes and tests for gossiping out data columns (#14102)
* [PeerDAS] Minor fixes and tests for gossiping out data columns

* Fix metrics
2024-11-27 10:15:27 +01:00
Francis Li
6e2122085d [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-11-27 10:15:27 +01:00
Manu NALEPA
7a847292aa PeerDAS: Stop generating new P2P private key at start. (#14099)
* `privKey`: Improve logs.

* peerDAS: Move functions in file. Add documentation.

* PeerDAS: Remove unused `ComputeExtendedMatrix` and `RecoverMatrix` functions.

* PeerDAS: Stop generating new P2P private key at start.

* Fix sammy' comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
81f4db0afa PeerDAS: Gossip the reconstructed columns (#14079)
* PeerDAS: Broadcast reconstructed data columns that were not seen via gossip.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
a7dc2e6c8b PeerDAS: Only save custodied columns, even after reconstruction. (#14083) 2024-11-27 10:15:27 +01:00
Manu NALEPA
0a010b5088 recoverBlobs: Cover the 0 < blobsCount < fieldparams.MaxBlobsPerBlock case. (#14066)
* `recoverBlobs`: Cover the `0 < blobsCount < fieldparams.MaxBlobsPerBlock` case.

* Fix Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
1e335e2cf2 PeerDAS: Withhold data on purpose. (#14076)
* Introduce hidden flag `data-columns-withhold-count`.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
42f4c0f14e PeerDAS: Implement / use data column feed from database. (#14062)
* Remove some `_` identifiers.

* Blob storage: Implement a notifier system for data columns.

* `dataColumnSidecarByRootRPCHandler`: Remove ugly `time.Sleep(100 * time.Millisecond)`.

* Address Nishant's comment.
2024-11-27 10:15:27 +01:00
Manu NALEPA
d3c12abe25 PeerDAS: Implement reconstruction. (#14036)
* Wrap errors, add logs.

* `missingColumnRequest`: Fix blobs <-> data columns mix.

* `ColumnIndices`: Return `map[uint64]bool` instead of `[fieldparams.NumberOfColumns]bool`.

* `DataColumnSidecars`: `interfaces.SignedBeaconBlock` ==> `interfaces.ReadOnlySignedBeaconBlock`.

We don't need any of the non read-only methods.

* Fix comments.

* `handleUnblidedBlock` ==> `handleUnblindedBlock`.

* `SaveDataColumn`: Move log from debug to trace.

If we attempt to save an already existing data column sidecar,
a debug log was printed.

This case could be quite common now with the data column reconstruction enabled.

* `sampling_data_columns.go` --> `data_columns_sampling.go`.

* Reconstruct data columns.
2024-11-27 10:15:27 +01:00
Nishant Das
b0ba05b4f4 Fix Custody Columns (#14021) 2024-11-27 10:15:27 +01:00
Nishant Das
e206506489 Disable Evaluators For E2E (#14019)
* Hack E2E

* Fix it For Real

* Gofmt

* Remove
2024-11-27 10:15:27 +01:00
Nishant Das
013cb28663 Request Data Columns When Fetching Pending Blocks (#14007)
* Support Data Columns For By Root Requests

* Revert Config Changes

* Fix Panic

* Fix Process Block

* Fix Flags

* Lint

* Support Checkpoint Sync

* Manu's Review

* Add Support For Columns in Remaining Methods

* Unmarshal Uncorrectly
2024-11-27 10:15:27 +01:00
Manu NALEPA
496914cb39 Fix CustodyColumns to comply with alpha-2 spectests. (#14008)
* Adding error wrapping

* Fix `CustodyColumnSubnets` tests.
2024-11-27 10:15:27 +01:00
Nishant Das
c032e78888 Set Custody Count Correctly (#14004)
* Set Custody Count Correctly

* Fix Discovery Count
2024-11-27 10:15:26 +01:00
Manu NALEPA
5e4deff6fd Sample some data columns from peers. (#13980)
* PeerDAS: Implement sampling.

* `TestNewRateLimiter`: Fix with the new number of expected registered topics.
2024-11-27 10:15:26 +01:00
Nishant Das
6daa91c465 Implement Data Columns By Range Request And Response Methods (#13972)
* Add Data Structure for New Request Type

* Add Data Column By Range Handler

* Add Data Column Request Methods

* Add new validation for columns by range requests

* Fix Build

* Allow Prysm Node To Fetch Data Columns

* Allow Prysm Node To Fetch Data Columns And Sync

* Bug Fixes For Interop

* GoFmt

* Use different var

* Manu's Review
2024-11-27 10:15:26 +01:00
Nishant Das
32ce6423eb Enable E2E For PeerDAS (#13945)
* Enable E2E And Add Fixes

* Register Same Topic For Data Columns

* Initialize Capacity Of Slice

* Fix Initialization of Data Column Receiver

* Remove Mix In From Merkle Proof

* E2E: Subscribe to all subnets.

* Remove Index Check

* Remaining Bug Fixes to Get It Working

* Change Evaluator to Allow Test to Finish

* Fix Build

* Add Data Column Verification

* Fix LoopVar Bug

* Do Not Allocate Memory

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Gofmt

* Fix It Again

* Fix Test Setup

* Fix Build

* Fix Trusted Setup panic

* Fix Trusted Setup panic

* Use New Test

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Justin Traglia
b0ea450df5 [PeerDAS] Upgrade c-kzg-4844 package (#13967)
* Upgrade c-kzg-4844 package

* Upgrade bazel deps
2024-11-27 10:15:26 +01:00
Manu NALEPA
8bd10df423 SendDataColumnSidecarByRoot: Return RODataColumn instead of ROBlob. (#13957)
* `SendDataColumnSidecarByRoot`: Return `RODataColumn` instead of `ROBlob`.

* Make deepsource happier.
2024-11-27 10:15:26 +01:00
Manu NALEPA
dcbb543be2 Spectests (#13940)
* Update `consensus_spec_version` to `v1.5.0-alpha.1`.

* `CustodyColumns`: Fix and implement spec tests.

* Make deepsource happy.

* `^uint64(0)` => `math.MaxUint64`.

* Fix `TestLoadConfigFile` test.
2024-11-27 10:15:26 +01:00
Nishant Das
be0580e1a9 Add DA Check For Data Columns (#13938)
* Add new DA check

* Exit early in the event no commitments exist.

* Gazelle

* Fix Mock Broadcaster

* Fix Test Setup

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Fix Build

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Manu NALEPA
1355178115 Implement peer DAS proposer RPC (#13922)
* Remove capital letter from error messages.

* `[4]byte` => `[fieldparams.VersionLength]byte`.

* Prometheus: Remove extra `committee`.

They are probably due to a bad copy/paste.

Note: The name of the probe itself remains unchanged,
to ensure backward compatibility.

* Implement Proposer RPC for data columns.

* Fix TestProposer_ProposeBlock_OK test.

* Remove default peerDAS activation.

* `validateDataColumn`: Workaround to return a `VerifiedRODataColumn`
2024-11-27 10:15:26 +01:00
Nishant Das
b78c3485b9 Update .bazelrc (#13931) 2024-11-27 10:15:26 +01:00
Manu NALEPA
f503efc6ed Implement custody_subnet_count ENR field. (#13915)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
2024-11-27 10:15:26 +01:00
Manu NALEPA
1bfbd3980e Peer das core (#13877)
* Bump `c-kzg-4844` lib to the `das` branch.

* Implement `MerkleProofKZGCommitments`.

* Implement `das-core.md`.

* Use `peerdas.CustodyColumnSubnets` and `peerdas.CustodyColumns`.

* `CustodyColumnSubnets`: Include `i` in the for loop.

* Remove `computeSubscribedColumnSubnet`.

* Move `peerdas.CustodyColumns` out of the for loop.
2024-11-27 10:15:26 +01:00
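A simplified sketch of the subnet-to-column mapping used by das-core: per the PeerDAS p2p spec, column i belongs to subnet i % DATA_COLUMN_SIDECAR_SUBNET_COUNT. The subnet count constant below is illustrative (the spec value has changed across versions), and the custody-subnet selection itself (derived from the node ID) is elided.

```go
package main

import "fmt"

const (
	numberOfColumns = 128
	subnetCount     = 32 // illustrative DATA_COLUMN_SIDECAR_SUBNET_COUNT
)

// columnsForSubnets expands a set of custodied subnets into column indices.
func columnsForSubnets(subnets []uint64) []uint64 {
	var columns []uint64
	for i := uint64(0); i < numberOfColumns; i++ {
		for _, s := range subnets {
			if i%subnetCount == s {
				columns = append(columns, i)
			}
		}
	}
	return columns
}

func main() {
	fmt.Println(columnsForSubnets([]uint64{0, 7}))
}
```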
Nishant Das
3e722ea1bc Add Request And Response RPC Methods For Data Columns (#13909)
* Add RPC Handler

* Add Column Requests

* Update beacon-chain/db/filesystem/blob.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/p2p/rpc_topic_mappings.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Manu's Review

* Interface Fixes

* mock manager

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-11-27 10:15:26 +01:00
Nishant Das
d844026433 Add Data Column Gossip Handlers (#13894)
* Add Data Column Subscriber

* Add Data Column Vaidator

* Wire all Handlers In

* Fix Build

* Fix Test

* Fix IP in Test

* Fix IP in Test
2024-11-27 10:15:26 +01:00
Nishant Das
9ffc19d5ef Add Support For Discovery Of Column Subnets (#13883)
* Add Support For Discovery Of Column Subnets

* Lint for SubnetsPerNode

* Manu's Review

* Change to a better name
2024-11-27 10:15:26 +01:00
Nishant Das
3e23f6e879 add it (#13865) 2024-11-27 10:11:55 +01:00
Manu NALEPA
c688c84393 Add in column sidecars protos (#13862) 2024-11-27 10:11:55 +01:00
682 changed files with 33108 additions and 36903 deletions


@@ -22,6 +22,7 @@ coverage --define=coverage_enabled=1
build --workspace_status_command=./hack/workspace_status.sh
build --define blst_disabled=false
build --compilation_mode=opt
run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true


@@ -4,6 +4,141 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v5.3.0](https://github.com/prysmaticlabs/prysm/compare/v5.2.0...v5.3.0) - 2025-02-12
This release includes support for Pectra activation in the [Holesky](https://github.com/eth-clients/holesky) and [Sepolia](https://github.com/eth-clients/sepolia) testnets! The release contains many fixes for Electra that have been found in rigorous testing through devnets in the last few months.
For mainnet, we have a few nice features for you to try:
- [PR #14023](https://github.com/prysmaticlabs/prysm/pull/14023) introduces a new file layout structure for storing blobs. Rather than storing all blob root directories in one parent directory, blob root directories are organized in subdirectories by epoch. This should vastly decrease the blob cache warmup time when Prysm is starting. Try this feature with `--blob-storage-layout=by-epoch`.
Updating to this release is **required** for Holesky and Sepolia operators and it is **recommended** for mainnet users as there are a few bug fixes that apply to Deneb logic.
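A hedged sketch of the by-epoch layout idea from PR #14023 (the path scheme below is illustrative, not Prysm's exact on-disk format): grouping blob directories under epoch subdirectories keeps each directory small, which is what speeds up the cache warm-up scan at startup.

```go
package main

import (
	"fmt"
	"path/filepath"
)

const slotsPerEpoch = 32

// blobDirByEpoch nests each block-root directory under its epoch instead of
// placing every root directly in one flat parent directory.
func blobDirByEpoch(baseDir string, slot uint64, blockRoot string) string {
	epoch := slot / slotsPerEpoch
	return filepath.Join(baseDir, fmt.Sprintf("%d", epoch), blockRoot)
}

func main() {
	fmt.Println(blobDirByEpoch("/data/blobs", 10_000_000, "0xabc..."))
}
```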
### Added
- Added an error field to log `Finished building block`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14696)
- Implemented a new `EmptyExecutionPayloadHeader` function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14713)
- Added proper gas limit check for header from the builder. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14707)
- `Finished building block`: Display error only if not nil. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14722)
- Added light client feature flag check to RPC handlers. [PR](https://github.com/prysmaticlabs/prysm/pull/14736). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14782)
- Added support to update target and max blob count to different values per hard fork config. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14678)
- Log before blob filesystem cache warm-up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14735)
- New design for the attestation pool. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14324)
- Add field param placeholder for Electra blob target and max to pass spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14733)
- Light client: Add better error handling. [PR](https://github.com/prysmaticlabs/prysm/pull/14749). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14782)
- Add EIP-7691: Blob throughput increase. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14750)
- Trace IDONTWANT Messages in Pubsub. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14778)
- Add Fulu fork boilerplate. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14771)
- DB optimization for saving light client bootstraps (save unique sync committees only). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14782)
- Separate type for unaggregated network attestations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14659)
- Remote signer electra fork support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14477)
- Add Electra test case to rewards API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14816)
- Update `proto_test.go` to Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14817)
- Update slasher service to Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14812)
- Builder API endpoint to support Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14344)
- Added protoc toolchains at version v25.3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- Add test cases for the eth_lightclient_bootstrap API SSZ support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14824)
- Handle `AttesterSlashingElectra` everywhere in the codebase. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14823)
- Add Beacon DB pruning service to prune historical data older than `MIN_EPOCHS_FOR_BLOCK_REQUESTS` (roughly equivalent to the weak subjectivity period); see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14687)
- Nil consolidation request check for core processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14851)
- Updated blob sidecar api endpoint for Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14852)
- Slashing pool service to convert slashings from Phase0 to Electra at the fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14844)
- Added a check to stop Eth1 voting after Electra once Eth1 deposits stop. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14835)
- WARN log message on node startup advising of the upcoming deprecation of the --enable-historical-state-representation feature flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14856)
- Beacon API event support for `SingleAttestation` and `SignedAggregateAttestationAndProofElectra`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14855)
- Added Electra tests for `TestLightClient_NewLightClientOptimisticUpdateFromBeaconState` and `TestLightClient_NewLightClientFinalityUpdateFromBeaconState`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14783)
- New option to select an alternate blob storage layout. Rather than a flat directory with a subdir for each block root, a multi-level scheme is used to organize blobs by epoch/slot/root, enabling leaner syscalls, indexing and pruning. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14023)
- Send attestations from the pending attestation queue through the notification feed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14862)
- Prune all pending deposits and proofs post-Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14829)
- Add Pectra testnet dates (Sepolia and Holesky). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14884)
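For the DB pruning entry above, the retention boundary can be pictured as follows. This is a hedged sketch, not the pruning service's code; the mainnet `MIN_EPOCHS_FOR_BLOCK_REQUESTS` value of 33024 epochs comes from the consensus spec.

```go
package main

import "fmt"

// Mainnet MIN_EPOCHS_FOR_BLOCK_REQUESTS from the consensus spec.
const minEpochsForBlockRequests uint64 = 33024

// pruneBefore returns the earliest epoch that must be retained; data from
// older epochs is eligible for pruning. (Illustrative only.)
func pruneBefore(currentEpoch uint64) uint64 {
	if currentEpoch <= minEpochsForBlockRequests {
		return 0 // chain is younger than the retention window
	}
	return currentEpoch - minEpochsForBlockRequests
}

func main() {
	fmt.Println(pruneBefore(350000)) // 316976
}
```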
### Changed
- Refactor subnets subscriptions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14711)
- Refactor RPC handlers subscriptions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14732)
- Go deps upgrade, from `ioutil` to `io`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14737)
- Moved the builder's "successfully registered validator(s)" log to debug level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14735)
- Update some test files to use `crypto/rand` instead of `math/rand`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14747)
- Re-organize the content of the `*.proto` files (No functional change). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14755)
- SSZ files generation: Remove the `// Hash: ...` header. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14760)
- Updated Electra spec definition for `process_epoch`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14768)
- Update our `go-libp2p-pubsub` dependency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14770)
- Re-organize the content of files to ease the creation of a new fork boilerplate. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14761)
- Updated spec definition electra `process_registry_updates`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14767)
- Fixed Metadata errors for peers connected via QUIC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14776)
- Updated spec definitions for `process_slashings` in godocs. Simplified `ProcessSlashings` API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14766)
- Update spec tests to v1.5.0-beta.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14788)
- Process light client finality updates only for new finalized epochs instead of doing it for every block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14718)
- Update blobs by rpc topics from V2 to V1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14785)
- Updated geth to the 1.14 series. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14351)
- E2E tests now start from Bellatrix. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14351)
- Pinned the `unclog` version after making some UX improvements. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14802)
- Remove helpers to check for execution/compounding withdrawal credentials and expose them as methods. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14808)
- Refactor `2006-01-02 15:04:05` to `time.DateTime` (see the sketch after this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14792)
- Updated Prysm to Go v1.23.5. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- Updated Bazel version to v7.4.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- Updated rules_go to v0.46.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- Updated golang.org/x/tools to be compatible with v1.23.5. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- CI now requires proto files to be properly formatted with clang-format. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14831)
- Improved test coverage of beacon-chain/core/electra/churn.go. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14837)
- Update electra spec test to beta1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14841)
- Move deposit request nil check to apply all. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14849)
- Do not mark blocks as invalid on context deadlines during state transition. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14838)
- Update electra core processing to not mark block bad if execution request error. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14826)
- Dependency: Updated go-ethereum to v1.14.13. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14872)
- Improved readability of the proposer settings loader. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14868)
- Removed the existing `validator.processSlot` span and added a `validator.processSlot` span to `slotCtx`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14874)
- DownloadFinalizedData has moved from the api/client package to beacon-chain/sync/checkpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14871)
- Increased `blob-batch-limit` to 192 for Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14883)
- Increased `blob-batch-limit-burst-factor` to 3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14883)
- Changed the derived batch limit when serving blobs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14883)
- Updated go-libp2p-pubsub to v0.13.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14890)
- Rename light client flag from `enable-lightclient` to `enable-light-client`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14887)
- Update electra spec test to beta2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14901)
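As a quick illustration of the `time.DateTime` refactor listed above: since Go 1.20 the standard library exports the reference layout `"2006-01-02 15:04:05"` as a named constant, so the two calls below print the same thing.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	now := time.Now()
	fmt.Println(now.Format("2006-01-02 15:04:05")) // before the refactor
	fmt.Println(now.Format(time.DateTime))         // after: equivalent named constant
}
```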
### Removed
- Cleaned up the ProcessSlashings method to remove an unnecessary argument. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14762)
- Remove `/proto/eth/v2` directory. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14765)
- Remove `/memsize/` pprof endpoint as it will no longer be supported in go 1.23. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14351)
- Clean `TestCanUpgrade*` tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14791)
- Remove `Copy()` from the `ReadOnlyBeaconBlock` interface. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14811)
- Removed a tracing span on signature requests. These requests usually took less than 5 nanoseconds and are generally not worth tracing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14864)
### Fixed
- Added check to prevent a nil pointer dereference or out-of-bounds array access when validating the BLSToExecutionChange on an impossibly nil validator. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14705)
- EIP-7691: Ensure new blob subnets are subscribed one epoch in advance. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14759)
- Fix kzg commitment inclusion proof depth minimal value. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14787)
- Replaced the example IP with `96.7.129.13`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14795)
- Fixed a p2p test to reliably return a static IP through DNS resolution. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14800)
- `ToBlinded`: Use Fulu struct for Fulu (instead of Electra). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14797)
- Fixed a panic caused by a type cast in `pbgenericblock()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14801)
- Prysmctl generate genesis state: fix truncation of ExtraData to 32 bytes to satisfy SSZ marshaling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14803)
- Added conditional evaluators to fix scenario e2e tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14798)
- Use `SingleAttestation` for Fulu in p2p attestation map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14809)
- `UpgradeToFulu`: Respect the specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14821)
- `nodeFilter`: Implement `filterPeerForBlobSubnet` to avoid error logs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14822)
- Fixed deposit packing for post-Electra: early return if EIP-6110 is applied. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14697)
- Fix batch process new pending deposits by getting validators from state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14827)
- Fix handling unfound block at slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14852)
- Fixed incorrect attester slashing length check. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14833)
- Fix monitor service for Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14853)
- Added more nil checks on `ToConsensus` functions for added safety. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14867)
- Fixed the Electra state to safely share references on pending fields when appending. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14895)
- Add missing config values from the spec. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14903)
- Removed unused `rebuildTrie` assignments for fields that do not use them. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14906)
- Fixed the block API endpoint to handle blocks with the same structure but on different forks (i.e. Fulu and Electra). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14897)
- Changed how blob indexes are tracked during their reconstruction from the EL. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14909)
- Use the correct maximum value when serving blobs for Electra blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14910)
### Security
- Upgraded Go to 1.22.10 to address CVE-2024-34156. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14729)
- Update golang.org/x/crypto to v0.31.0 to address CVE-2024-45337. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14777)
- Update golang.org/x/net to v0.33.0 to address CVE-2024-45338. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14780)
## [v5.2.0](https://github.com/prysmaticlabs/prysm/compare/v5.1.2...v5.2.0)
Updating to this release is highly recommended, especially for users running v5.1.1 or v5.1.2.
@@ -2987,4 +3122,4 @@ There are no security updates in this release.
# Older than v2.0.0
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases

View File

@@ -55,7 +55,7 @@ bazel build //beacon-chain --config=release
## Adding / updating dependencies
1. Add your dependency as you would with go modules. I.e. `go get ...`
1. Run `bazel run //:gazelle -- update-repos -from_file=go.mod` to update the bazel managed dependencies.
1. Run `bazel run //:gazelle -- update-repos -from_file=go.mod -to_macro=deps.bzl%prysm_deps -prune=true` to update the bazel managed dependencies.
Example:

View File

@@ -255,7 +255,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-beta.1"
consensus_spec_version = "v1.5.0-alpha.10"
bls_test_version = "v0.1.1"
@@ -271,7 +271,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-R6r60geCfEjMaB1Ag3svaMFXFIgaJvkTJhfKsf76rFE=",
integrity = "sha256-NtWIhbO/mVMb1edq5jqABL0o8R1tNFiuG8PCMAsUHcs=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -287,7 +287,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-2Pem2gMHxW/6bBhZ2BaqkQruQSd/dTS3WMaMQO8rZ/o=",
integrity = "sha256-DFlFlnzls1bBrDm+/xD8NK2ivvkhxR+rSNVLLqScVKc=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -303,7 +303,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-5yP05JTV1MhcUZ2kSh+T+kXjG+uW3A5877veC5c1mD4=",
integrity = "sha256-G9ENPF8udZL/BqRHbi60GhFPnZDPZAH6UjcjRiOlvbk=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -318,7 +318,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-O6Rg6h19T0RsJs0sBDZ9O1k4LnCJ/gu2ilHijFBVfME=",
integrity = "sha256-ClOLKkmAcEi8/uKi6LDeqthask5+E3sgxVoA0bqmQ0c=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -3,7 +3,6 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"checkpoint.go",
"client.go",
"doc.go",
"health.go",
@@ -16,28 +15,19 @@ go_library(
"//api/client/beacon/iface:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/detect:go_default_library",
"//io/file:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_x_mod//semver:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"checkpoint_test.go",
"client_test.go",
"health_test.go",
],
@@ -45,19 +35,7 @@ go_test(
deps = [
"//api/client:go_default_library",
"//api/client/beacon/testing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/blocks/testing:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz/detect:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],
)

View File

@@ -1,276 +0,0 @@
package beacon
import (
"context"
"fmt"
"path"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
base "github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v5/io/file"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"golang.org/x/mod/semver"
)
var errCheckpointBlockMismatch = errors.New("mismatch between checkpoint sync state and block")
// OriginData represents the BeaconState and ReadOnlySignedBeaconBlock necessary to start an empty Beacon Node
// using Checkpoint Sync.
type OriginData struct {
sb []byte
bb []byte
st state.BeaconState
b interfaces.ReadOnlySignedBeaconBlock
vu *detect.VersionedUnmarshaler
br [32]byte
sr [32]byte
}
// SaveBlock saves the downloaded block to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (o *OriginData) SaveBlock(dir string) (string, error) {
blockPath := path.Join(dir, fname("block", o.vu, o.b.Block().Slot(), o.br))
return blockPath, file.WriteFile(blockPath, o.BlockBytes())
}
// SaveState saves the downloaded state to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (o *OriginData) SaveState(dir string) (string, error) {
statePath := path.Join(dir, fname("state", o.vu, o.st.Slot(), o.sr))
return statePath, file.WriteFile(statePath, o.StateBytes())
}
// StateBytes returns the ssz-encoded bytes of the downloaded BeaconState value.
func (o *OriginData) StateBytes() []byte {
return o.sb
}
// BlockBytes returns the ssz-encoded bytes of the downloaded ReadOnlySignedBeaconBlock value.
func (o *OriginData) BlockBytes() []byte {
return o.bb
}
func fname(prefix string, vu *detect.VersionedUnmarshaler, slot primitives.Slot, root [32]byte) string {
return fmt.Sprintf("%s_%s_%s_%d-%#x.ssz", prefix, vu.Config.ConfigName, version.String(vu.Fork), slot, root)
}
// DownloadFinalizedData downloads the most recently finalized state, and the block most recently applied to that state.
// This pair can be used to initialize a new beacon node via checkpoint sync.
func DownloadFinalizedData(ctx context.Context, client *Client) (*OriginData, error) {
sb, err := client.GetState(ctx, IdFinalized)
if err != nil {
return nil, err
}
vu, err := detect.FromState(sb)
if err != nil {
return nil, errors.Wrap(err, "error detecting chain config for finalized state")
}
log.WithFields(logrus.Fields{
"name": vu.Config.ConfigName,
"fork": version.String(vu.Fork),
}).Info("Detected supported config in remote finalized state")
s, err := vu.UnmarshalBeaconState(sb)
if err != nil {
return nil, errors.Wrap(err, "error unmarshaling finalized state to correct version")
}
slot := s.LatestBlockHeader().Slot
bb, err := client.GetBlock(ctx, IdFromSlot(slot))
if err != nil {
return nil, errors.Wrapf(err, "error requesting block by slot = %d", slot)
}
b, err := vu.UnmarshalBeaconBlock(bb)
if err != nil {
return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
}
br, err := b.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block")
}
bodyRoot, err := b.Block().Body().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block body")
}
sbr := bytesutil.ToBytes32(s.LatestBlockHeader().BodyRoot)
if sbr != bodyRoot {
return nil, errors.Wrapf(errCheckpointBlockMismatch, "state body root = %#x, block body root = %#x", sbr, bodyRoot)
}
sr, err := s.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrapf(err, "failed to compute htr for finalized state at slot=%d", s.Slot())
}
log.
WithField("blockSlot", b.Block().Slot()).
WithField("stateSlot", s.Slot()).
WithField("stateRoot", hexutil.Encode(sr[:])).
WithField("blockRoot", hexutil.Encode(br[:])).
Info("Downloaded checkpoint sync state and block.")
return &OriginData{
st: s,
b: b,
sb: sb,
bb: bb,
vu: vu,
br: br,
sr: sr,
}, nil
}
// WeakSubjectivityData represents the state root, block root and epoch of the BeaconState + ReadOnlySignedBeaconBlock
// that falls at the beginning of the current weak subjectivity period. These values can be used to construct
// a weak subjectivity checkpoint beacon node flag to be used for validation.
type WeakSubjectivityData struct {
BlockRoot [32]byte
StateRoot [32]byte
Epoch primitives.Epoch
}
// CheckpointString returns the standard string representation of a Checkpoint.
// The format is a hex-encoded block root, followed by the epoch of the block, separated by a colon. For example:
// "0x1c35540cac127315fabb6bf29181f2ae0de1a3fc909d2e76ba771e61312cc49a:74888"
func (wsd *WeakSubjectivityData) CheckpointString() string {
return fmt.Sprintf("%#x:%d", wsd.BlockRoot, wsd.Epoch)
}
// ComputeWeakSubjectivityCheckpoint attempts to use the prysm weak_subjectivity api
// to obtain the current weak_subjectivity checkpoint.
// For non-prysm nodes, the same computation will be performed with extra steps,
// using the head state downloaded from the beacon node api.
func ComputeWeakSubjectivityCheckpoint(ctx context.Context, client *Client) (*WeakSubjectivityData, error) {
ws, err := client.GetWeakSubjectivity(ctx)
if err != nil {
// a 404/405 is expected if querying an endpoint that doesn't support the weak subjectivity checkpoint api
if !errors.Is(err, base.ErrNotOK) {
return nil, errors.Wrap(err, "unexpected API response for prysm-only weak subjectivity checkpoint API")
}
// fall back to vanilla Beacon Node API method
return computeBackwardsCompatible(ctx, client)
}
log.Printf("server weak subjectivity checkpoint response - epoch=%d, block_root=%#x, state_root=%#x", ws.Epoch, ws.BlockRoot, ws.StateRoot)
return ws, nil
}
const (
prysmMinimumVersion = "v2.0.7"
prysmImplementationName = "Prysm"
)
// errUnsupportedPrysmCheckpointVersion indicates remote beacon node can't be used for checkpoint retrieval.
var errUnsupportedPrysmCheckpointVersion = errors.New("node does not meet minimum version requirements for checkpoint retrieval")
// for older endpoints or clients that do not support the weak_subjectivity api method
// we gather the necessary data for a checkpoint sync by:
// - inspecting the remote server's head state and computing the weak subjectivity epoch locally
// - requesting the state at the first slot of the epoch
// - using hash_tree_root(state.latest_block_header) to compute the block the state integrates
// - requesting that block by its root
func computeBackwardsCompatible(ctx context.Context, client *Client) (*WeakSubjectivityData, error) {
log.Print("falling back to generic checkpoint derivation, weak_subjectivity API not supported by server")
nv, err := client.GetNodeVersion(ctx)
if err != nil {
return nil, errors.Wrap(err, "unable to proceed with fallback method without confirming node version")
}
if nv.implementation == prysmImplementationName && semver.Compare(nv.semver, prysmMinimumVersion) < 0 {
return nil, errors.Wrapf(errUnsupportedPrysmCheckpointVersion, "%s < minimum (%s)", nv.semver, prysmMinimumVersion)
}
epoch, err := getWeakSubjectivityEpochFromHead(ctx, client)
if err != nil {
return nil, errors.Wrap(err, "error computing weak subjectivity epoch via head state inspection")
}
// use first slot of the epoch for the state slot
slot, err := slots.EpochStart(epoch)
if err != nil {
return nil, errors.Wrapf(err, "error computing first slot of epoch=%d", epoch)
}
log.Printf("requesting checkpoint state at slot %d", slot)
// get the state at the first slot of the epoch
sb, err := client.GetState(ctx, IdFromSlot(slot))
if err != nil {
return nil, errors.Wrapf(err, "failed to request state by slot from api, slot=%d", slot)
}
// ConfigFork is used to unmarshal the BeaconState so we can read the block root in latest_block_header
vu, err := detect.FromState(sb)
if err != nil {
return nil, errors.Wrap(err, "error detecting chain config for beacon state")
}
log.Printf("detected supported config in checkpoint state, name=%s, fork=%s", vu.Config.ConfigName, version.String(vu.Fork))
s, err := vu.UnmarshalBeaconState(sb)
if err != nil {
return nil, errors.Wrap(err, "error using detected config fork to unmarshal state bytes")
}
// compute state and block roots
sr, err := s.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of state")
}
h := s.LatestBlockHeader()
h.StateRoot = sr[:]
br, err := h.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error while computing block root using state data")
}
bb, err := client.GetBlock(ctx, IdFromRoot(br))
if err != nil {
return nil, errors.Wrapf(err, "error requesting block by root = %d", br)
}
b, err := vu.UnmarshalBeaconBlock(bb)
if err != nil {
return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
}
br, err = b.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root for block obtained via root")
}
return &WeakSubjectivityData{
Epoch: epoch,
BlockRoot: br,
StateRoot: sr,
}, nil
}
// this method downloads the head state, which can be used to find the correct chain config
// and use prysm's helper methods to compute the latest weak subjectivity epoch.
func getWeakSubjectivityEpochFromHead(ctx context.Context, client *Client) (primitives.Epoch, error) {
headBytes, err := client.GetState(ctx, IdHead)
if err != nil {
return 0, err
}
vu, err := detect.FromState(headBytes)
if err != nil {
return 0, errors.Wrap(err, "error detecting chain config for beacon state")
}
log.Printf("detected supported config in remote head state, name=%s, fork=%s", vu.Config.ConfigName, version.String(vu.Fork))
headState, err := vu.UnmarshalBeaconState(headBytes)
if err != nil {
return 0, errors.Wrap(err, "error unmarshaling state to correct version")
}
epoch, err := helpers.LatestWeakSubjectivityEpoch(ctx, headState, vu.Config)
if err != nil {
return 0, errors.Wrap(err, "error computing the weak subjectivity epoch from head state")
}
log.Printf("(computed client-side) weak subjectivity epoch = %d", epoch)
return epoch, nil
}

View File

@@ -29,12 +29,13 @@ const (
getSignedBlockPath = "/eth/v2/beacon/blocks"
getBlockRootPath = "/eth/v1/beacon/blocks/{{.Id}}/root"
getForkForStatePath = "/eth/v1/beacon/states/{{.Id}}/fork"
getWeakSubjectivityPath = "/prysm/v1/beacon/weak_subjectivity"
getForkSchedulePath = "/eth/v1/config/fork_schedule"
getConfigSpecPath = "/eth/v1/config/spec"
getStatePath = "/eth/v2/debug/beacon/states"
getNodeVersionPath = "/eth/v1/node/version"
changeBLStoExecutionPath = "/eth/v1/beacon/pool/bls_to_execution_changes"
GetNodeVersionPath = "/eth/v1/node/version"
GetWeakSubjectivityPath = "/prysm/v1/beacon/weak_subjectivity"
)
// StateOrBlockId represents the block_id / state_id parameters that several of the Eth Beacon API methods accept.
@@ -80,7 +81,8 @@ func idTemplate(ts string) func(StateOrBlockId) string {
return f
}
func renderGetBlockPath(id StateOrBlockId) string {
// RenderGetBlockPath formats a block id into a path for the GetBlock API endpoint.
func RenderGetBlockPath(id StateOrBlockId) string {
return path.Join(getSignedBlockPath, string(id))
}
@@ -104,7 +106,7 @@ func NewClient(host string, opts ...client.ClientOpt) (*Client, error) {
// for the named identifiers.
// The return value contains the ssz-encoded bytes.
func (c *Client) GetBlock(ctx context.Context, blockId StateOrBlockId) ([]byte, error) {
blockPath := renderGetBlockPath(blockId)
blockPath := RenderGetBlockPath(blockId)
b, err := c.Get(ctx, blockPath, client.WithSSZEncoding())
if err != nil {
return nil, errors.Wrapf(err, "error requesting state by id = %s", blockId)
@@ -195,6 +197,10 @@ type NodeVersion struct {
systemInfo string
}
func (nv *NodeVersion) SetImplementation(impl string) {
nv.implementation = impl
}
var versionRE = regexp.MustCompile(`^(\w+)/(v\d+\.\d+\.\d+[-a-zA-Z0-9]*)\s*/?(.*)$`)
func parseNodeVersion(v string) (*NodeVersion, error) {
@@ -212,7 +218,7 @@ func parseNodeVersion(v string) (*NodeVersion, error) {
// GetNodeVersion requests that the beacon node identify information about its implementation in a format
// similar to a HTTP User-Agent field. ex: Lighthouse/v0.1.5 (Linux x86_64)
func (c *Client) GetNodeVersion(ctx context.Context) (*NodeVersion, error) {
b, err := c.Get(ctx, getNodeVersionPath)
b, err := c.Get(ctx, GetNodeVersionPath)
if err != nil {
return nil, errors.Wrap(err, "error requesting node version")
}
@@ -228,7 +234,8 @@ func (c *Client) GetNodeVersion(ctx context.Context) (*NodeVersion, error) {
return parseNodeVersion(d.Data.Version)
}
func renderGetStatePath(id StateOrBlockId) string {
// RenderGetStatePath formats a state id into a path for the GetState API endpoint.
func RenderGetStatePath(id StateOrBlockId) string {
return path.Join(getStatePath, string(id))
}
@@ -246,13 +253,29 @@ func (c *Client) GetState(ctx context.Context, stateId StateOrBlockId) ([]byte,
return b, nil
}
// WeakSubjectivityData represents the state root, block root and epoch of the BeaconState + ReadOnlySignedBeaconBlock
// that falls at the beginning of the current weak subjectivity period. These values can be used to construct
// a weak subjectivity checkpoint beacon node flag to be used for validation.
type WeakSubjectivityData struct {
BlockRoot [32]byte
StateRoot [32]byte
Epoch primitives.Epoch
}
// CheckpointString returns the standard string representation of a Checkpoint.
// The format is a hex-encoded block root, followed by the epoch of the block, separated by a colon. For example:
// "0x1c35540cac127315fabb6bf29181f2ae0de1a3fc909d2e76ba771e61312cc49a:74888"
func (wsd *WeakSubjectivityData) CheckpointString() string {
return fmt.Sprintf("%#x:%d", wsd.BlockRoot, wsd.Epoch)
}
// GetWeakSubjectivity calls a proposed API endpoint that is unique to prysm
// This api method does the following:
// - computes weak subjectivity epoch
// - finds the highest non-skipped block preceding the epoch
// - returns the htr of the found block and returns this + the value of state_root from the block
func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData, error) {
body, err := c.Get(ctx, getWeakSubjectivityPath)
body, err := c.Get(ctx, GetWeakSubjectivityPath)
if err != nil {
return nil, err
}

View File

@@ -97,31 +97,31 @@ func TestValidHostname(t *testing.T) {
{
name: "hostname with port",
hostArg: "mydomain.org:3500",
path: getNodeVersionPath,
path: GetNodeVersionPath,
joined: "http://mydomain.org:3500/eth/v1/node/version",
},
{
name: "https scheme, hostname with port",
hostArg: "https://mydomain.org:3500",
path: getNodeVersionPath,
path: GetNodeVersionPath,
joined: "https://mydomain.org:3500/eth/v1/node/version",
},
{
name: "http scheme, hostname without port",
hostArg: "http://mydomain.org",
path: getNodeVersionPath,
path: GetNodeVersionPath,
joined: "http://mydomain.org/eth/v1/node/version",
},
{
name: "http scheme, trailing slash, hostname without port",
hostArg: "http://mydomain.org/",
path: getNodeVersionPath,
path: GetNodeVersionPath,
joined: "http://mydomain.org/eth/v1/node/version",
},
{
name: "http scheme, hostname with basic auth creds and no port",
hostArg: "http://username:pass@mydomain.org/",
path: getNodeVersionPath,
path: GetNodeVersionPath,
joined: "http://username:pass@mydomain.org/eth/v1/node/version",
},
}

View File

@@ -46,6 +46,7 @@ go_test(
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -154,6 +154,10 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
if err != nil {
return
}
if method == http.MethodPost {
req.Header.Set("Content-Type", api.JsonMediaType)
}
req.Header.Set("Accept", api.JsonMediaType)
req.Header.Add("User-Agent", version.BuildData())
for _, o := range opts {
o(req)

View File

@@ -12,6 +12,7 @@ import (
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -89,6 +90,8 @@ func TestClient_RegisterValidator(t *testing.T) {
expectedPath := "/eth/v1/builder/validators"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
body, err := io.ReadAll(r.Body)
defer func() {
require.NoError(t, r.Body.Close())
@@ -364,8 +367,8 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayload)),
@@ -392,8 +395,8 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "capella", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayloadCapella)),
@@ -423,8 +426,8 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "deneb", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
var req structs.SignedBlindedBeaconBlockDeneb
err := json.NewDecoder(r.Body).Decode(&req)
require.NoError(t, err)

View File

@@ -4,13 +4,11 @@ go_library(
name = "go_default_library",
srcs = [
"block.go",
"block_epbs.go",
"conversions.go",
"conversions_blob.go",
"conversions_block.go",
"conversions_lightclient.go",
"conversions_state.go",
"converstions_block_epbs.go",
"endpoints_beacon.go",
"endpoints_blob.go",
"endpoints_builder.go",

View File

@@ -579,14 +579,14 @@ type SignedBeaconBlockContentsFulu struct {
}
type BeaconBlockContentsFulu struct {
Block *BeaconBlockFulu `json:"block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
Block *BeaconBlockElectra `json:"block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type SignedBeaconBlockFulu struct {
Message *BeaconBlockFulu `json:"message"`
Signature string `json:"signature"`
Message *BeaconBlockElectra `json:"message"`
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlockFulu{}
@@ -599,36 +599,12 @@ func (s *SignedBeaconBlockFulu) SigString() string {
return s.Signature
}
type BeaconBlockFulu struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyFulu `json:"body"`
}
type BeaconBlockBodyFulu struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayloadDeneb `json:"execution_payload"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}
type BlindedBeaconBlockFulu struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyFulu `json:"body"`
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyElectra `json:"body"`
}
type SignedBlindedBeaconBlockFulu struct {
@@ -645,19 +621,3 @@ func (s *SignedBlindedBeaconBlockFulu) MessageRawJson() ([]byte, error) {
func (s *SignedBlindedBeaconBlockFulu) SigString() string {
return s.Signature
}
type BlindedBeaconBlockBodyFulu struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderDeneb `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}

View File

@@ -1,72 +0,0 @@
package structs
import "encoding/json"
// ----------------------------------------------------------------------------
// Epbs
// ----------------------------------------------------------------------------
type SignedBeaconBlockEpbs struct {
Message *BeaconBlockEpbs `json:"message"`
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlockElectra{}
func (s *SignedBeaconBlockEpbs) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBeaconBlockEpbs) SigString() string {
return s.Signature
}
type BeaconBlockEpbs struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyEpbs `json:"body"`
}
type BeaconBlockBodyEpbs struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
SignedExecutionPayloadHeader *SignedExecutionPayloadHeader `json:"signed_execution_payload_header"`
PayloadAttestations []*PayloadAttestation `json:"payload_attestations"`
}
type SignedExecutionPayloadHeader struct {
Message *ExecutionPayloadHeaderEPBS `json:"message"`
Signature string `json:"signature"`
}
type ExecutionPayloadHeaderEPBS struct {
ParentBlockHash string `json:"parent_block_hash"`
ParentBlockRoot string `json:"parent_block_root"`
BlockHash string `json:"block_hash"`
GasLimit string `json:"gas_limit"`
BuilderIndex string `json:"builder_index"`
Slot string `json:"slot"`
Value string `json:"value"`
BlobKzgCommitmentsRoot string `json:"blob_kzg_commitments_root"`
}
type PayloadAttestation struct {
AggregationBits string `json:"aggregation_bits"`
Data *PayloadAttestationData `json:"data"`
Signature string `json:"signature"`
}
type PayloadAttestationData struct {
BeaconBlockRoot string `json:"beacon_block_root"`
Slot string `json:"slot"`
PayloadStatus string `json:"payload_status"`
}

View File

@@ -52,6 +52,9 @@ func HistoricalSummaryFromConsensus(s *eth.HistoricalSummary) *HistoricalSummary
}
func (s *SignedBLSToExecutionChange) ToConsensus() (*eth.SignedBLSToExecutionChange, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
change, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -103,14 +106,17 @@ func SignedBLSChangeFromConsensus(ch *eth.SignedBLSToExecutionChange) *SignedBLS
func SignedBLSChangesToConsensus(src []*SignedBLSToExecutionChange) ([]*eth.SignedBLSToExecutionChange, error) {
if src == nil {
return nil, errNilValue
return nil, server.NewDecodeError(errNilValue, "SignedBLSToExecutionChanges")
}
err := slice.VerifyMaxLength(src, 16)
if err != nil {
return nil, err
return nil, server.NewDecodeError(err, "SignedBLSToExecutionChanges")
}
changes := make([]*eth.SignedBLSToExecutionChange, len(src))
for i, ch := range src {
if ch == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
changes[i], err = ch.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))
@@ -156,6 +162,9 @@ func ForkFromConsensus(f *eth.Fork) *Fork {
}
func (s *SignedValidatorRegistration) ToConsensus() (*eth.SignedValidatorRegistrationV1, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -212,6 +221,9 @@ func SignedValidatorRegistrationFromConsensus(vr *eth.SignedValidatorRegistratio
}
func (s *SignedContributionAndProof) ToConsensus() (*eth.SignedContributionAndProof, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -236,6 +248,9 @@ func SignedContributionAndProofFromConsensus(c *eth.SignedContributionAndProof)
}
func (c *ContributionAndProof) ToConsensus() (*eth.ContributionAndProof, error) {
if c.Contribution == nil {
return nil, server.NewDecodeError(errNilValue, "Contribution")
}
contribution, err := c.Contribution.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Contribution")
@@ -307,6 +322,9 @@ func SyncCommitteeContributionFromConsensus(c *eth.SyncCommitteeContribution) *S
}
func (s *SignedAggregateAttestationAndProof) ToConsensus() (*eth.SignedAggregateAttestationAndProof, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -327,6 +345,9 @@ func (a *AggregateAttestationAndProof) ToConsensus() (*eth.AggregateAttestationA
if err != nil {
return nil, server.NewDecodeError(err, "AggregatorIndex")
}
if a.Aggregate == nil {
return nil, server.NewDecodeError(errNilValue, "Aggregate")
}
agg, err := a.Aggregate.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Aggregate")
@@ -343,6 +364,9 @@ func (a *AggregateAttestationAndProof) ToConsensus() (*eth.AggregateAttestationA
}
func (s *SignedAggregateAttestationAndProofElectra) ToConsensus() (*eth.SignedAggregateAttestationAndProofElectra, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -363,6 +387,9 @@ func (a *AggregateAttestationAndProofElectra) ToConsensus() (*eth.AggregateAttes
if err != nil {
return nil, server.NewDecodeError(err, "AggregatorIndex")
}
if a.Aggregate == nil {
return nil, server.NewDecodeError(errNilValue, "Aggregate")
}
agg, err := a.Aggregate.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Aggregate")
@@ -383,6 +410,9 @@ func (a *Attestation) ToConsensus() (*eth.Attestation, error) {
if err != nil {
return nil, server.NewDecodeError(err, "AggregationBits")
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -412,6 +442,9 @@ func (a *AttestationElectra) ToConsensus() (*eth.AttestationElectra, error) {
if err != nil {
return nil, server.NewDecodeError(err, "AggregationBits")
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -433,6 +466,15 @@ func (a *AttestationElectra) ToConsensus() (*eth.AttestationElectra, error) {
}, nil
}
func SingleAttFromConsensus(a *eth.SingleAttestation) *SingleAttestation {
return &SingleAttestation{
CommitteeIndex: fmt.Sprintf("%d", a.CommitteeId),
AttesterIndex: fmt.Sprintf("%d", a.AttesterIndex),
Data: AttDataFromConsensus(a.Data),
Signature: hexutil.Encode(a.Signature),
}
}
func (a *SingleAttestation) ToConsensus() (*eth.SingleAttestation, error) {
ci, err := strconv.ParseUint(a.CommitteeIndex, 10, 64)
if err != nil {
@@ -442,6 +484,9 @@ func (a *SingleAttestation) ToConsensus() (*eth.SingleAttestation, error) {
if err != nil {
return nil, server.NewDecodeError(err, "AttesterIndex")
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -481,10 +526,16 @@ func (a *AttestationData) ToConsensus() (*eth.AttestationData, error) {
if err != nil {
return nil, server.NewDecodeError(err, "BeaconBlockRoot")
}
if a.Source == nil {
return nil, server.NewDecodeError(errNilValue, "Source")
}
source, err := a.Source.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Source")
}
if a.Target == nil {
return nil, server.NewDecodeError(errNilValue, "Target")
}
target, err := a.Target.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Target")
@@ -584,15 +635,17 @@ func (b *BeaconCommitteeSubscription) ToConsensus() (*validator.BeaconCommitteeS
}
func (e *SignedVoluntaryExit) ToConsensus() (*eth.SignedVoluntaryExit, error) {
sig, err := bytesutil.DecodeHexWithLength(e.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
if e.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
exit, err := e.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
}
sig, err := bytesutil.DecodeHexWithLength(e.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SignedVoluntaryExit{
Exit: exit,
Signature: sig,
@@ -695,10 +748,16 @@ func Eth1DataFromConsensus(e1d *eth.Eth1Data) *Eth1Data {
}
func (s *ProposerSlashing) ToConsensus() (*eth.ProposerSlashing, error) {
if s.SignedHeader1 == nil {
return nil, server.NewDecodeError(errNilValue, "SignedHeader1")
}
h1, err := s.SignedHeader1.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "SignedHeader1")
}
if s.SignedHeader2 == nil {
return nil, server.NewDecodeError(errNilValue, "SignedHeader2")
}
h2, err := s.SignedHeader2.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "SignedHeader2")
@@ -711,10 +770,16 @@ func (s *ProposerSlashing) ToConsensus() (*eth.ProposerSlashing, error) {
}
func (s *AttesterSlashing) ToConsensus() (*eth.AttesterSlashing, error) {
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, "Attestation1")
}
att1, err := s.Attestation1.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation1")
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, "Attestation2")
}
att2, err := s.Attestation2.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation2")
@@ -723,10 +788,16 @@ func (s *AttesterSlashing) ToConsensus() (*eth.AttesterSlashing, error) {
}
func (s *AttesterSlashingElectra) ToConsensus() (*eth.AttesterSlashingElectra, error) {
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, "Attestation1")
}
att1, err := s.Attestation1.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation1")
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, "Attestation2")
}
att2, err := s.Attestation2.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation2")
@@ -747,6 +818,9 @@ func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
return nil, server.NewDecodeError(err, fmt.Sprintf("AttestingIndices[%d]", i))
}
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -779,6 +853,9 @@ func (a *IndexedAttestationElectra) ToConsensus() (*eth.IndexedAttestationElectr
return nil, server.NewDecodeError(err, fmt.Sprintf("AttestingIndices[%d]", i))
}
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -934,11 +1011,11 @@ func (d *DepositRequest) ToConsensus() (*enginev1.DepositRequest, error) {
func ProposerSlashingsToConsensus(src []*ProposerSlashing) ([]*eth.ProposerSlashing, error) {
if src == nil {
return nil, errNilValue
return nil, server.NewDecodeError(errNilValue, "ProposerSlashings")
}
err := slice.VerifyMaxLength(src, 16)
if err != nil {
return nil, err
return nil, server.NewDecodeError(err, "ProposerSlashings")
}
proposerSlashings := make([]*eth.ProposerSlashing, len(src))
for i, s := range src {
@@ -1067,11 +1144,11 @@ func ProposerSlashingFromConsensus(src *eth.ProposerSlashing) *ProposerSlashing
func AttesterSlashingsToConsensus(src []*AttesterSlashing) ([]*eth.AttesterSlashing, error) {
if src == nil {
return nil, errNilValue
return nil, server.NewDecodeError(errNilValue, "AttesterSlashings")
}
err := slice.VerifyMaxLength(src, 2)
if err != nil {
return nil, err
return nil, server.NewDecodeError(err, "AttesterSlashings")
}
attesterSlashings := make([]*eth.AttesterSlashing, len(src))
@@ -1082,10 +1159,19 @@ func AttesterSlashingsToConsensus(src []*AttesterSlashing) ([]*eth.AttesterSlash
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1", i))
}
if s.Attestation1.Data == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1.Data", i))
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2", i))
}
if s.Attestation2.Data == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2.Data", i))
}
a1Sig, err := bytesutil.DecodeHexWithLength(s.Attestation1.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Signature", i))
@@ -1102,6 +1188,7 @@ func AttesterSlashingsToConsensus(src []*AttesterSlashing) ([]*eth.AttesterSlash
}
a1AttestingIndices[j] = attestingIndex
}
a1Data, err := s.Attestation1.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Data", i))
@@ -1199,11 +1286,11 @@ func AttesterSlashingFromConsensus(src *eth.AttesterSlashing) *AttesterSlashing
func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth.AttesterSlashingElectra, error) {
if src == nil {
return nil, errNilValue
return nil, server.NewDecodeError(errNilValue, "AttesterSlashingsElectra")
}
err := slice.VerifyMaxLength(src, fieldparams.MaxAttesterSlashingsElectra)
if err != nil {
return nil, err
return nil, server.NewDecodeError(err, "AttesterSlashingsElectra")
}
attesterSlashings := make([]*eth.AttesterSlashingElectra, len(src))
@@ -1211,13 +1298,23 @@ func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth
if s == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1", i))
}
if s.Attestation1.Data == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1.Data", i))
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2", i))
}
if s.Attestation2.Data == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2.Data", i))
}
a1Sig, err := bytesutil.DecodeHexWithLength(s.Attestation1.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Signature", i))
@@ -1331,15 +1428,18 @@ func AttesterSlashingElectraFromConsensus(src *eth.AttesterSlashingElectra) *Att
func AttsToConsensus(src []*Attestation) ([]*eth.Attestation, error) {
if src == nil {
return nil, errNilValue
return nil, server.NewDecodeError(errNilValue, "Attestations")
}
err := slice.VerifyMaxLength(src, 128)
if err != nil {
return nil, err
return nil, server.NewDecodeError(err, "Attestations")
}
atts := make([]*eth.Attestation, len(src))
for i, a := range src {
if a == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
atts[i], err = a.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))
@@ -1358,15 +1458,18 @@ func AttsFromConsensus(src []*eth.Attestation) []*Attestation {
func AttsElectraToConsensus(src []*AttestationElectra) ([]*eth.AttestationElectra, error) {
if src == nil {
return nil, errNilValue
return nil, server.NewDecodeError(errNilValue, "AttestationsElectra")
}
err := slice.VerifyMaxLength(src, 8)
if err != nil {
return nil, err
return nil, server.NewDecodeError(err, "AttestationsElectra")
}
atts := make([]*eth.AttestationElectra, len(src))
for i, a := range src {
if a == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
atts[i], err = a.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))
@@ -1385,11 +1488,11 @@ func AttsElectraFromConsensus(src []*eth.AttestationElectra) []*AttestationElect
func DepositsToConsensus(src []*Deposit) ([]*eth.Deposit, error) {
if src == nil {
return nil, errNilValue
return nil, server.NewDecodeError(errNilValue, "Deposits")
}
err := slice.VerifyMaxLength(src, 16)
if err != nil {
return nil, err
return nil, server.NewDecodeError(err, "Deposits")
}
deposits := make([]*eth.Deposit, len(src))
@@ -1461,15 +1564,18 @@ func DepositsFromConsensus(src []*eth.Deposit) []*Deposit {
func SignedExitsToConsensus(src []*SignedVoluntaryExit) ([]*eth.SignedVoluntaryExit, error) {
if src == nil {
return nil, errNilValue
return nil, server.NewDecodeError(errNilValue, "SignedVoluntaryExits")
}
err := slice.VerifyMaxLength(src, 16)
if err != nil {
return nil, err
return nil, server.NewDecodeError(err, "SignedVoluntaryExits")
}
exits := make([]*eth.SignedVoluntaryExit, len(src))
for i, e := range src {
if e == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
exits[i], err = e.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))

View File

@@ -269,8 +269,6 @@ func SignedBeaconBlockMessageJsoner(block interfaces.ReadOnlySignedBeaconBlock)
return SignedBlindedBeaconBlockFuluFromConsensus(pbStruct)
case *eth.SignedBeaconBlockFulu:
return SignedBeaconBlockFuluFromConsensus(pbStruct)
case *eth.SignedBeaconBlockEpbs:
return SignedBeaconBlockEpbsFromConsensus(pbStruct)
default:
return nil, ErrUnsupportedConversion
}
@@ -3367,285 +3365,6 @@ func (b *BeaconBlockContentsFulu) ToConsensus() (*eth.BeaconBlockContentsFulu, e
}, nil
}
func (b *BeaconBlockFulu) ToConsensus() (*eth.BeaconBlockFulu, error) {
if b == nil {
return nil, errNilValue
}
if b.Body == nil {
return nil, server.NewDecodeError(errNilValue, "Body")
}
if b.Body.Eth1Data == nil {
return nil, server.NewDecodeError(errNilValue, "Body.Eth1Data")
}
if b.Body.SyncAggregate == nil {
return nil, server.NewDecodeError(errNilValue, "Body.SyncAggregate")
}
if b.Body.ExecutionPayload == nil {
return nil, server.NewDecodeError(errNilValue, "Body.ExecutionPayload")
}
slot, err := strconv.ParseUint(b.Slot, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Slot")
}
proposerIndex, err := strconv.ParseUint(b.ProposerIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ProposerIndex")
}
parentRoot, err := bytesutil.DecodeHexWithLength(b.ParentRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ParentRoot")
}
stateRoot, err := bytesutil.DecodeHexWithLength(b.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "StateRoot")
}
randaoReveal, err := bytesutil.DecodeHexWithLength(b.Body.RandaoReveal, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.RandaoReveal")
}
depositRoot, err := bytesutil.DecodeHexWithLength(b.Body.Eth1Data.DepositRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Eth1Data.DepositRoot")
}
depositCount, err := strconv.ParseUint(b.Body.Eth1Data.DepositCount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Eth1Data.DepositCount")
}
blockHash, err := bytesutil.DecodeHexWithLength(b.Body.Eth1Data.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Eth1Data.BlockHash")
}
graffiti, err := bytesutil.DecodeHexWithLength(b.Body.Graffiti, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Graffiti")
}
proposerSlashings, err := ProposerSlashingsToConsensus(b.Body.ProposerSlashings)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ProposerSlashings")
}
attesterSlashings, err := AttesterSlashingsElectraToConsensus(b.Body.AttesterSlashings)
if err != nil {
return nil, server.NewDecodeError(err, "Body.AttesterSlashings")
}
atts, err := AttsElectraToConsensus(b.Body.Attestations)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Attestations")
}
deposits, err := DepositsToConsensus(b.Body.Deposits)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Deposits")
}
exits, err := SignedExitsToConsensus(b.Body.VoluntaryExits)
if err != nil {
return nil, server.NewDecodeError(err, "Body.VoluntaryExits")
}
syncCommitteeBits, err := bytesutil.DecodeHexWithLength(b.Body.SyncAggregate.SyncCommitteeBits, fieldparams.SyncAggregateSyncCommitteeBytesLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.SyncAggregate.SyncCommitteeBits")
}
syncCommitteeSig, err := bytesutil.DecodeHexWithLength(b.Body.SyncAggregate.SyncCommitteeSignature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.SyncAggregate.SyncCommitteeSignature")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(b.Body.ExecutionPayload.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(b.Body.ExecutionPayload.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(b.Body.ExecutionPayload.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(b.Body.ExecutionPayload.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(b.Body.ExecutionPayload.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(b.Body.ExecutionPayload.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(b.Body.ExecutionPayload.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(b.Body.ExecutionPayload.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(b.Body.ExecutionPayload.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(b.Body.ExecutionPayload.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(b.Body.ExecutionPayload.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(b.Body.ExecutionPayload.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(b.Body.ExecutionPayload.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.BlockHash")
}
err = slice.VerifyMaxLength(b.Body.ExecutionPayload.Transactions, fieldparams.MaxTxsPerPayloadLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.Transactions")
}
txs := make([][]byte, len(b.Body.ExecutionPayload.Transactions))
for i, tx := range b.Body.ExecutionPayload.Transactions {
txs[i], err = bytesutil.DecodeHexWithMaxLength(tx, fieldparams.MaxBytesPerTxLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.ExecutionPayload.Transactions[%d]", i))
}
}
err = slice.VerifyMaxLength(b.Body.ExecutionPayload.Withdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.Withdrawals")
}
withdrawals := make([]*enginev1.Withdrawal, len(b.Body.ExecutionPayload.Withdrawals))
for i, w := range b.Body.ExecutionPayload.Withdrawals {
withdrawalIndex, err := strconv.ParseUint(w.WithdrawalIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.ExecutionPayload.Withdrawals[%d].WithdrawalIndex", i))
}
validatorIndex, err := strconv.ParseUint(w.ValidatorIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.ExecutionPayload.Withdrawals[%d].ValidatorIndex", i))
}
address, err := bytesutil.DecodeHexWithLength(w.ExecutionAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.ExecutionPayload.Withdrawals[%d].ExecutionAddress", i))
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.ExecutionPayload.Withdrawals[%d].Amount", i))
}
withdrawals[i] = &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: primitives.ValidatorIndex(validatorIndex),
Address: address,
Amount: amount,
}
}
payloadBlobGasUsed, err := strconv.ParseUint(b.Body.ExecutionPayload.BlobGasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.BlobGasUsed")
}
payloadExcessBlobGas, err := strconv.ParseUint(b.Body.ExecutionPayload.ExcessBlobGas, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ExecutionPayload.ExcessBlobGas")
}
if b.Body.ExecutionRequests == nil {
return nil, server.NewDecodeError(errors.New("nil execution requests"), "Body.ExecutionRequests")
}
depositRequests := make([]*enginev1.DepositRequest, len(b.Body.ExecutionRequests.Deposits))
for i, d := range b.Body.ExecutionRequests.Deposits {
depositRequests[i], err = d.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.ExecutionRequests.Deposits[%d]", i))
}
}
withdrawalRequests := make([]*enginev1.WithdrawalRequest, len(b.Body.ExecutionRequests.Withdrawals))
for i, w := range b.Body.ExecutionRequests.Withdrawals {
withdrawalRequests[i], err = w.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.ExecutionRequests.Withdrawals[%d]", i))
}
}
consolidationRequests := make([]*enginev1.ConsolidationRequest, len(b.Body.ExecutionRequests.Consolidations))
for i, c := range b.Body.ExecutionRequests.Consolidations {
consolidationRequests[i], err = c.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.ExecutionRequests.Consolidations[%d]", i))
}
}
blsChanges, err := SignedBLSChangesToConsensus(b.Body.BLSToExecutionChanges)
if err != nil {
return nil, server.NewDecodeError(err, "Body.BLSToExecutionChanges")
}
err = slice.VerifyMaxLength(b.Body.BlobKzgCommitments, fieldparams.MaxBlobCommitmentsPerBlock)
if err != nil {
return nil, server.NewDecodeError(err, "Body.BlobKzgCommitments")
}
blobKzgCommitments := make([][]byte, len(b.Body.BlobKzgCommitments))
for i, b := range b.Body.BlobKzgCommitments {
kzg, err := bytesutil.DecodeHexWithLength(b, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.BlobKzgCommitments[%d]", i))
}
blobKzgCommitments[i] = kzg
}
return &eth.BeaconBlockFulu{
Slot: primitives.Slot(slot),
ProposerIndex: primitives.ValidatorIndex(proposerIndex),
ParentRoot: parentRoot,
StateRoot: stateRoot,
Body: &eth.BeaconBlockBodyFulu{
RandaoReveal: randaoReveal,
Eth1Data: &eth.Eth1Data{
DepositRoot: depositRoot,
DepositCount: depositCount,
BlockHash: blockHash,
},
Graffiti: graffiti,
ProposerSlashings: proposerSlashings,
AttesterSlashings: attesterSlashings,
Attestations: atts,
Deposits: deposits,
VoluntaryExits: exits,
SyncAggregate: &eth.SyncAggregate{
SyncCommitteeBits: syncCommitteeBits,
SyncCommitteeSignature: syncCommitteeSig,
},
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
Transactions: txs,
Withdrawals: withdrawals,
BlobGasUsed: payloadBlobGasUsed,
ExcessBlobGas: payloadExcessBlobGas,
},
BlsToExecutionChanges: blsChanges,
BlobKzgCommitments: blobKzgCommitments,
ExecutionRequests: &enginev1.ExecutionRequests{
Deposits: depositRequests,
Withdrawals: withdrawalRequests,
Consolidations: consolidationRequests,
},
},
}, nil
}
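The BeaconBlockFulu.ToConsensus body above, removed in this hunk (the surrounding changes route Fulu through the shared Electra/Deneb converters), is a mechanical field-by-field translation of the API representation, where integers travel as decimal strings and byte fields as 0x-prefixed hex, into consensus types. A self-contained sketch of that per-field pattern using a single withdrawal; jsonWithdrawal is a hypothetical stand-in for the real API struct:

package main

import (
    "fmt"
    "strconv"

    "github.com/ethereum/go-ethereum/common/hexutil"
)

// jsonWithdrawal is a hypothetical stand-in for the API-side struct:
// integers travel as decimal strings, byte fields as 0x-prefixed hex.
type jsonWithdrawal struct {
    ValidatorIndex   string
    ExecutionAddress string
    Amount           string
}

func decodeWithdrawal(w jsonWithdrawal) error {
    validatorIndex, err := strconv.ParseUint(w.ValidatorIndex, 10, 64)
    if err != nil {
        return fmt.Errorf("ValidatorIndex: %w", err)
    }
    address, err := hexutil.Decode(w.ExecutionAddress) // 20-byte execution address
    if err != nil {
        return fmt.Errorf("ExecutionAddress: %w", err)
    }
    amount, err := strconv.ParseUint(w.Amount, 10, 64) // Gwei
    if err != nil {
        return fmt.Errorf("Amount: %w", err)
    }
    fmt.Println(validatorIndex, len(address), amount)
    return nil
}

func main() {
    _ = decodeWithdrawal(jsonWithdrawal{
        ValidatorIndex:   "42",
        ExecutionAddress: "0x9c2f3b1a4d5e6f708192a3b4c5d6e7f8091a2b3c",
        Amount:           "32000000000",
    })
}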
func (b *SignedBeaconBlockFulu) ToConsensus() (*eth.SignedBeaconBlockFulu, error) {
if b == nil {
return nil, errNilValue
@@ -3900,7 +3619,7 @@ func (b *BlindedBeaconBlockFulu) ToConsensus() (*eth.BlindedBeaconBlockFulu, err
ProposerIndex: primitives.ValidatorIndex(proposerIndex),
ParentRoot: parentRoot,
StateRoot: stateRoot,
Body: &eth.BlindedBeaconBlockBodyFulu{
Body: &eth.BlindedBeaconBlockBodyElectra{
RandaoReveal: randaoReveal,
Eth1Data: &eth.Eth1Data{
DepositRoot: depositRoot,
@@ -4017,7 +3736,7 @@ func BlindedBeaconBlockFuluFromConsensus(b *eth.BlindedBeaconBlockFulu) (*Blinde
ProposerIndex: fmt.Sprintf("%d", b.ProposerIndex),
ParentRoot: hexutil.Encode(b.ParentRoot),
StateRoot: hexutil.Encode(b.StateRoot),
Body: &BlindedBeaconBlockBodyFulu{
Body: &BlindedBeaconBlockBodyElectra{
RandaoReveal: hexutil.Encode(b.Body.RandaoReveal),
Eth1Data: Eth1DataFromConsensus(b.Body.Eth1Data),
Graffiti: hexutil.Encode(b.Body.Graffiti),
@@ -4049,42 +3768,6 @@ func SignedBlindedBeaconBlockFuluFromConsensus(b *eth.SignedBlindedBeaconBlockFu
}, nil
}
func BeaconBlockFuluFromConsensus(b *eth.BeaconBlockFulu) (*BeaconBlockFulu, error) {
payload, err := ExecutionPayloadFuluFromConsensus(b.Body.ExecutionPayload)
if err != nil {
return nil, err
}
blobKzgCommitments := make([]string, len(b.Body.BlobKzgCommitments))
for i := range b.Body.BlobKzgCommitments {
blobKzgCommitments[i] = hexutil.Encode(b.Body.BlobKzgCommitments[i])
}
return &BeaconBlockFulu{
Slot: fmt.Sprintf("%d", b.Slot),
ProposerIndex: fmt.Sprintf("%d", b.ProposerIndex),
ParentRoot: hexutil.Encode(b.ParentRoot),
StateRoot: hexutil.Encode(b.StateRoot),
Body: &BeaconBlockBodyFulu{
RandaoReveal: hexutil.Encode(b.Body.RandaoReveal),
Eth1Data: Eth1DataFromConsensus(b.Body.Eth1Data),
Graffiti: hexutil.Encode(b.Body.Graffiti),
ProposerSlashings: ProposerSlashingsFromConsensus(b.Body.ProposerSlashings),
AttesterSlashings: AttesterSlashingsElectraFromConsensus(b.Body.AttesterSlashings),
Attestations: AttsElectraFromConsensus(b.Body.Attestations),
Deposits: DepositsFromConsensus(b.Body.Deposits),
VoluntaryExits: SignedExitsFromConsensus(b.Body.VoluntaryExits),
SyncAggregate: &SyncAggregate{
SyncCommitteeBits: hexutil.Encode(b.Body.SyncAggregate.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(b.Body.SyncAggregate.SyncCommitteeSignature),
},
ExecutionPayload: payload,
BLSToExecutionChanges: SignedBLSChangesFromConsensus(b.Body.BlsToExecutionChanges),
BlobKzgCommitments: blobKzgCommitments,
ExecutionRequests: ExecutionRequestsFromConsensus(b.Body.ExecutionRequests),
},
}, nil
}
func SignedBeaconBlockFuluFromConsensus(b *eth.SignedBeaconBlockFulu) (*SignedBeaconBlockFulu, error) {
block, err := BeaconBlockFuluFromConsensus(b.Block)
if err != nil {
@@ -4099,4 +3782,5 @@ func SignedBeaconBlockFuluFromConsensus(b *eth.SignedBeaconBlockFulu) (*SignedBe
var (
ExecutionPayloadFuluFromConsensus = ExecutionPayloadDenebFromConsensus
ExecutionPayloadHeaderFuluFromConsensus = ExecutionPayloadHeaderDenebFromConsensus
BeaconBlockFuluFromConsensus = BeaconBlockElectraFromConsensus
)
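The aliases above work because Go functions are first-class values: binding an existing converter to a new name gives Fulu the Deneb/Electra behavior with no wrapper and no extra call overhead. A toy illustration of the mechanism, with invented names:

package main

import "fmt"

// payloadDeneb stands in for a Deneb-era converter.
func payloadDeneb(gasUsed uint64) string {
    return fmt.Sprintf("deneb-format payload, gas=%d", gasUsed)
}

// Fulu reuses the Deneb converter wholesale; only the name changes.
var payloadFulu = payloadDeneb

func main() {
    fmt.Println(payloadFulu(21000))
}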


@@ -24,3 +24,96 @@ func TestDepositSnapshotFromConsensus(t *testing.T) {
require.Equal(t, "0x1234", res.ExecutionBlockHash)
require.Equal(t, "67890", res.ExecutionBlockHeight)
}
func TestSignedBLSToExecutionChange_ToConsensus(t *testing.T) {
s := &SignedBLSToExecutionChange{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSignedValidatorRegistration_ToConsensus(t *testing.T) {
s := &SignedValidatorRegistration{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSignedContributionAndProof_ToConsensus(t *testing.T) {
s := &SignedContributionAndProof{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestContributionAndProof_ToConsensus(t *testing.T) {
c := &ContributionAndProof{
Contribution: nil,
AggregatorIndex: "invalid",
SelectionProof: "",
}
_, err := c.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSignedAggregateAttestationAndProof_ToConsensus(t *testing.T) {
s := &SignedAggregateAttestationAndProof{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestAggregateAttestationAndProof_ToConsensus(t *testing.T) {
a := &AggregateAttestationAndProof{
AggregatorIndex: "1",
Aggregate: nil,
SelectionProof: "",
}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestAttestation_ToConsensus(t *testing.T) {
a := &Attestation{
AggregationBits: "0x10",
Data: nil,
Signature: "",
}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSingleAttestation_ToConsensus(t *testing.T) {
s := &SingleAttestation{
CommitteeIndex: "1",
AttesterIndex: "1",
Data: nil,
Signature: "",
}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSignedVoluntaryExit_ToConsensus(t *testing.T) {
s := &SignedVoluntaryExit{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestProposerSlashing_ToConsensus(t *testing.T) {
p := &ProposerSlashing{SignedHeader1: nil, SignedHeader2: nil}
_, err := p.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestAttesterSlashing_ToConsensus(t *testing.T) {
a := &AttesterSlashing{Attestation1: nil, Attestation2: nil}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestIndexedAttestation_ToConsensus(t *testing.T) {
a := &IndexedAttestation{
AttestingIndices: []string{"1"},
Data: nil,
Signature: "invalid",
}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
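Each of these tests exercises the same failure path: a nil nested message must surface errNilValue through ToConsensus. Since the concrete return types differ per struct, a closure-based table is one idiomatic way to keep the assertions in one place; the sketch below is a hypothetical refactoring, not code from this change:

package structs_test

import "testing"

// nilValueCase pairs a readable name with a closure expected to fail
// on a nil nested field; the closure form sidesteps the differing
// concrete return types of each ToConsensus method.
type nilValueCase struct {
    name string
    call func() error
}

func TestNilValueDecodes(t *testing.T) {
    cases := []nilValueCase{
        // Hypothetical entries; each closure would wrap one of the
        // constructions from the tests above, e.g.:
        // {"SignedVoluntaryExit", func() error {
        //     _, err := (&SignedVoluntaryExit{Message: nil}).ToConsensus()
        //     return err
        // }},
    }
    for _, c := range cases {
        if err := c.call(); err == nil {
            t.Errorf("%s: expected a nil-value error, got nil", c.name)
        }
    }
}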


@@ -1,364 +0,0 @@
package structs
import (
"fmt"
"strconv"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/api/server"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// ----------------------------------------------------------------------------
// Epbs
// ----------------------------------------------------------------------------
// nolint:gocognit
func (b *BeaconBlockEpbs) ToConsensus() (*eth.BeaconBlockEpbs, error) {
if b == nil {
return nil, errNilValue
}
if b.Body == nil {
return nil, server.NewDecodeError(errNilValue, "Body")
}
if b.Body.Eth1Data == nil {
return nil, server.NewDecodeError(errNilValue, "Body.Eth1Data")
}
if b.Body.SyncAggregate == nil {
return nil, server.NewDecodeError(errNilValue, "Body.SyncAggregate")
}
slot, err := strconv.ParseUint(b.Slot, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Slot")
}
proposerIndex, err := strconv.ParseUint(b.ProposerIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ProposerIndex")
}
parentRoot, err := bytesutil.DecodeHexWithLength(b.ParentRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ParentRoot")
}
stateRoot, err := bytesutil.DecodeHexWithLength(b.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "StateRoot")
}
randaoReveal, err := bytesutil.DecodeHexWithLength(b.Body.RandaoReveal, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.RandaoReveal")
}
depositRoot, err := bytesutil.DecodeHexWithLength(b.Body.Eth1Data.DepositRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Eth1Data.DepositRoot")
}
depositCount, err := strconv.ParseUint(b.Body.Eth1Data.DepositCount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Eth1Data.DepositCount")
}
blockHash, err := bytesutil.DecodeHexWithLength(b.Body.Eth1Data.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Eth1Data.BlockHash")
}
graffiti, err := bytesutil.DecodeHexWithLength(b.Body.Graffiti, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Graffiti")
}
proposerSlashings, err := ProposerSlashingsToConsensus(b.Body.ProposerSlashings)
if err != nil {
return nil, server.NewDecodeError(err, "Body.ProposerSlashings")
}
attesterSlashings, err := AttesterSlashingsElectraToConsensus(b.Body.AttesterSlashings)
if err != nil {
return nil, server.NewDecodeError(err, "Body.AttesterSlashings")
}
atts, err := AttsElectraToConsensus(b.Body.Attestations)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Attestations")
}
deposits, err := DepositsToConsensus(b.Body.Deposits)
if err != nil {
return nil, server.NewDecodeError(err, "Body.Deposits")
}
exits, err := SignedExitsToConsensus(b.Body.VoluntaryExits)
if err != nil {
return nil, server.NewDecodeError(err, "Body.VoluntaryExits")
}
syncCommitteeBits, err := bytesutil.DecodeHexWithLength(b.Body.SyncAggregate.SyncCommitteeBits, fieldparams.SyncAggregateSyncCommitteeBytesLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.SyncAggregate.SyncCommitteeBits")
}
syncCommitteeSig, err := bytesutil.DecodeHexWithLength(b.Body.SyncAggregate.SyncCommitteeSignature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Body.SyncAggregate.SyncCommitteeSignature")
}
signedPayloadHeader, err := b.Body.SignedExecutionPayloadHeader.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Body.SignedExecutionPayloadHeader")
}
blsChanges, err := SignedBLSChangesToConsensus(b.Body.BLSToExecutionChanges)
if err != nil {
return nil, server.NewDecodeError(err, "Body.BLSToExecutionChanges")
}
payloadAttestations := make([]*eth.PayloadAttestation, len(b.Body.PayloadAttestations))
for i, p := range b.Body.PayloadAttestations {
payloadAttestations[i], err = p.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("Body.PayloadAttestations[%d]", i))
}
}
return &eth.BeaconBlockEpbs{
Slot: primitives.Slot(slot),
ProposerIndex: primitives.ValidatorIndex(proposerIndex),
ParentRoot: parentRoot,
StateRoot: stateRoot,
Body: &eth.BeaconBlockBodyEpbs{
RandaoReveal: randaoReveal,
Eth1Data: &eth.Eth1Data{
DepositRoot: depositRoot,
DepositCount: depositCount,
BlockHash: blockHash,
},
Graffiti: graffiti,
ProposerSlashings: proposerSlashings,
AttesterSlashings: attesterSlashings,
Attestations: atts,
Deposits: deposits,
VoluntaryExits: exits,
SyncAggregate: &eth.SyncAggregate{
SyncCommitteeBits: syncCommitteeBits,
SyncCommitteeSignature: syncCommitteeSig,
},
BlsToExecutionChanges: blsChanges,
SignedExecutionPayloadHeader: signedPayloadHeader,
PayloadAttestations: payloadAttestations,
},
}, nil
}
func (p *PayloadAttestation) ToConsensus() (*eth.PayloadAttestation, error) {
if p == nil {
return nil, errNilValue
}
aggregationBits, err := bytesutil.DecodeHexWithLength(p.AggregationBits, fieldparams.PTCSize/8)
if err != nil {
return nil, server.NewDecodeError(err, "AggregationBits")
}
data, err := p.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
}
sig, err := bytesutil.DecodeHexWithLength(p.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.PayloadAttestation{
AggregationBits: aggregationBits,
Data: data,
Signature: sig,
}, nil
}
func (p *PayloadAttestationData) ToConsensus() (*eth.PayloadAttestationData, error) {
if p == nil {
return nil, errNilValue
}
beaconBlockRoot, err := bytesutil.DecodeHexWithLength(p.BeaconBlockRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "BeaconBlockRoot")
}
slot, err := strconv.ParseUint(p.Slot, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Slot")
}
payloadStatus, err := strconv.ParseUint(p.PayloadStatus, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "PayloadStatus")
}
return &eth.PayloadAttestationData{
BeaconBlockRoot: beaconBlockRoot,
Slot: primitives.Slot(slot),
PayloadStatus: primitives.PTCStatus(payloadStatus),
}, nil
}
func (p *SignedExecutionPayloadHeader) ToConsensus() (*enginev1.SignedExecutionPayloadHeader, error) {
if p == nil {
return nil, errNilValue
}
sig, err := bytesutil.DecodeHexWithLength(p.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
header, err := p.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Header")
}
return &enginev1.SignedExecutionPayloadHeader{
Message: header,
Signature: sig,
}, nil
}
func (p *ExecutionPayloadHeaderEPBS) ToConsensus() (*enginev1.ExecutionPayloadHeaderEPBS, error) {
parentBlockHash, err := bytesutil.DecodeHexWithLength(p.ParentBlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ParentBlockHash")
}
parentBlockRoot, err := bytesutil.DecodeHexWithLength(p.ParentBlockRoot, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ParentBlockRoot")
}
blockHash, err := bytesutil.DecodeHexWithLength(p.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "BlockHash")
}
gasLimit, err := strconv.ParseUint(p.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "GasLimit")
}
builderIndex, err := strconv.ParseUint(p.BuilderIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "BuilderIndex")
}
slot, err := strconv.ParseUint(p.Slot, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Slot")
}
value, err := strconv.ParseUint(p.Value, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Value")
}
blobKzgCommitmentsRoot, err := bytesutil.DecodeHexWithLength(p.BlobKzgCommitmentsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "BlobKzgCommitmentsRoot")
}
return &enginev1.ExecutionPayloadHeaderEPBS{
ParentBlockHash: parentBlockHash,
ParentBlockRoot: parentBlockRoot,
BlockHash: blockHash,
GasLimit: gasLimit,
BuilderIndex: primitives.ValidatorIndex(builderIndex),
Slot: primitives.Slot(slot),
Value: value,
BlobKzgCommitmentsRoot: blobKzgCommitmentsRoot,
}, nil
}
func (b *SignedBeaconBlockEpbs) ToConsensus() (*eth.SignedBeaconBlockEpbs, error) {
if b == nil {
return nil, errNilValue
}
sig, err := bytesutil.DecodeHexWithLength(b.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
block, err := b.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
}
return &eth.SignedBeaconBlockEpbs{
Block: block,
Signature: sig,
}, nil
}
func BeaconBlockEpbsFromConsensus(b *eth.BeaconBlockEpbs) (*BeaconBlockEpbs, error) {
signedPayloadHeader, err := SignedExecutionPayloadHeaderFromConsensus(b.Body.SignedExecutionPayloadHeader)
if err != nil {
return nil, err
}
payloadAttestations, err := PayloadAttestationsFromConsensus(b.Body.PayloadAttestations)
if err != nil {
return nil, err
}
return &BeaconBlockEpbs{
Slot: fmt.Sprintf("%d", b.Slot),
ProposerIndex: fmt.Sprintf("%d", b.ProposerIndex),
ParentRoot: hexutil.Encode(b.ParentRoot),
StateRoot: hexutil.Encode(b.StateRoot),
Body: &BeaconBlockBodyEpbs{
RandaoReveal: hexutil.Encode(b.Body.RandaoReveal),
Eth1Data: Eth1DataFromConsensus(b.Body.Eth1Data),
Graffiti: hexutil.Encode(b.Body.Graffiti),
ProposerSlashings: ProposerSlashingsFromConsensus(b.Body.ProposerSlashings),
AttesterSlashings: AttesterSlashingsElectraFromConsensus(b.Body.AttesterSlashings),
Attestations: AttsElectraFromConsensus(b.Body.Attestations),
Deposits: DepositsFromConsensus(b.Body.Deposits),
VoluntaryExits: SignedExitsFromConsensus(b.Body.VoluntaryExits),
SyncAggregate: &SyncAggregate{
SyncCommitteeBits: hexutil.Encode(b.Body.SyncAggregate.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(b.Body.SyncAggregate.SyncCommitteeSignature),
},
BLSToExecutionChanges: SignedBLSChangesFromConsensus(b.Body.BlsToExecutionChanges),
SignedExecutionPayloadHeader: signedPayloadHeader,
PayloadAttestations: payloadAttestations,
},
}, nil
}
func SignedBeaconBlockEpbsFromConsensus(b *eth.SignedBeaconBlockEpbs) (*SignedBeaconBlockEpbs, error) {
block, err := BeaconBlockEpbsFromConsensus(b.Block)
if err != nil {
return nil, err
}
return &SignedBeaconBlockEpbs{
Message: block,
Signature: hexutil.Encode(b.Signature),
}, nil
}
func SignedExecutionPayloadHeaderFromConsensus(b *enginev1.SignedExecutionPayloadHeader) (*SignedExecutionPayloadHeader, error) {
header, err := ExecutionPayloadHeaderEPBSFromConsensus(b.Message)
if err != nil {
return nil, err
}
return &SignedExecutionPayloadHeader{
Message: header,
Signature: hexutil.Encode(b.Signature),
}, nil
}
func ExecutionPayloadHeaderEPBSFromConsensus(b *enginev1.ExecutionPayloadHeaderEPBS) (*ExecutionPayloadHeaderEPBS, error) {
return &ExecutionPayloadHeaderEPBS{
ParentBlockHash: hexutil.Encode(b.ParentBlockHash),
ParentBlockRoot: hexutil.Encode(b.ParentBlockRoot),
BlockHash: hexutil.Encode(b.BlockHash),
GasLimit: fmt.Sprintf("%d", b.GasLimit),
BuilderIndex: fmt.Sprintf("%d", b.BuilderIndex),
Slot: fmt.Sprintf("%d", b.Slot),
Value: fmt.Sprintf("%d", b.Value),
BlobKzgCommitmentsRoot: hexutil.Encode(b.BlobKzgCommitmentsRoot),
}, nil
}
func PayloadAttestationsFromConsensus(b []*eth.PayloadAttestation) ([]*PayloadAttestation, error) {
payloadAttestations := make([]*PayloadAttestation, len(b))
for i, p := range b {
data, err := PayloadAttestationDataFromConsensus(p.Data)
if err != nil {
return nil, err
}
payloadAttestations[i] = &PayloadAttestation{
AggregationBits: hexutil.Encode(p.AggregationBits),
Data: data,
Signature: hexutil.Encode(p.Signature),
}
}
return payloadAttestations, nil
}
func PayloadAttestationDataFromConsensus(b *eth.PayloadAttestationData) (*PayloadAttestationData, error) {
return &PayloadAttestationData{
BeaconBlockRoot: hexutil.Encode(b.BeaconBlockRoot),
Slot: fmt.Sprintf("%d", b.Slot),
PayloadStatus: fmt.Sprintf("%d", b.PayloadStatus),
}, nil
}


@@ -250,3 +250,17 @@ type ChainHead struct {
PreviousJustifiedBlockRoot string `json:"previous_justified_block_root"`
OptimisticStatus bool `json:"optimistic_status"`
}
type GetPendingDepositsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*PendingDeposit `json:"data"`
}
type GetPendingPartialWithdrawalsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*PendingPartialWithdrawal `json:"data"`
}
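Both new response types follow the versioned envelope used across these beacon API responses: a fork version string, execution_optimistic and finalized flags, and a data array. A minimal sketch of how such an envelope serializes, with invented field values:

package main

import (
    "encoding/json"
    "fmt"
)

// envelope is a hypothetical minimal mirror of the two response types
// above: a fork version string, sync-status flags, and a data payload.
type envelope struct {
    Version             string            `json:"version"`
    ExecutionOptimistic bool              `json:"execution_optimistic"`
    Finalized           bool              `json:"finalized"`
    Data                []json.RawMessage `json:"data"`
}

func main() {
    out, err := json.MarshalIndent(envelope{
        Version:   "electra",
        Finalized: true,
        Data:      []json.RawMessage{[]byte(`{"amount":"1000000000"}`)},
    }, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}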


@@ -6,11 +6,9 @@ go_library(
"chain_info.go",
"chain_info_forkchoice.go",
"currently_syncing_block.go",
"currently_syncing_execution_payload_envelope.go",
"defragment.go",
"error.go",
"execution_engine.go",
"execution_engine_epbs.go",
"forkchoice_update_execution.go",
"head.go",
"head_sync_committee_info.go",
@@ -27,8 +25,7 @@ go_library(
"receive_attestation.go",
"receive_blob.go",
"receive_block.go",
"receive_execution_payload_envelope.go",
"receive_payload_attestation_message.go",
"receive_data_column.go",
"service.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
@@ -47,12 +44,13 @@ go_library(
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epbs:go_default_library",
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -101,7 +99,6 @@ go_library(
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)
@@ -114,7 +111,6 @@ go_test(
"chain_info_norace_test.go",
"chain_info_test.go",
"checktags_test.go",
"epbs_test.go",
"error_test.go",
"execution_engine_test.go",
"forkchoice_update_execution_test.go",
@@ -130,7 +126,6 @@ go_test(
"process_block_test.go",
"receive_attestation_test.go",
"receive_block_test.go",
"receive_execution_payload_envelope_test.go",
"service_norace_test.go",
"service_test.go",
"setup_test.go",
@@ -165,6 +160,7 @@ go_test(
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
@@ -185,7 +181,6 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",


@@ -43,7 +43,7 @@ type ForkchoiceFetcher interface {
GetProposerHead() [32]byte
SetForkChoiceGenesisTime(uint64)
UpdateHead(context.Context, primitives.Slot)
HighestReceivedBlockSlotRoot() (primitives.Slot, [32]byte)
HighestReceivedBlockSlot() primitives.Slot
ReceivedBlocksLastEpoch() (uint64, error)
InsertNode(context.Context, state.BeaconState, consensus_blocks.ROBlock) error
ForkChoiceDump(context.Context) (*forkchoice.Dump, error)
@@ -51,8 +51,6 @@ type ForkchoiceFetcher interface {
ProposerBoost() [32]byte
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, error)
GetPTCVote(root [32]byte) primitives.PTCStatus
HashForBlockRoot(root [32]byte) [32]byte
}
// TimeFetcher retrieves the Ethereum consensus data that's related to time.
@@ -121,12 +119,6 @@ type OptimisticModeFetcher interface {
IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error)
}
// ExecutionPayloadFetcher defines a common interface that returns forkchoice
// information about payload block hashes
type ExecutionPayloadFetcher interface {
HashInForkchoice([32]byte) bool
}
// FinalizedCheckpt returns the latest finalized checkpoint from chain store.
func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
s.cfg.ForkChoiceStore.RLock()
@@ -408,14 +400,6 @@ func (s *Service) InForkchoice(root [32]byte) bool {
return s.cfg.ForkChoiceStore.HasNode(root)
}
// HashInForkchoice returns true if the given payload block hash is found in
// forkchoice
func (s *Service) HashInForkchoice(hash [32]byte) bool {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.HasHash(hash)
}
// IsOptimisticForRoot takes the root as argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {


@@ -6,7 +6,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
@@ -31,11 +30,11 @@ func (s *Service) SetForkChoiceGenesisTime(timestamp uint64) {
s.cfg.ForkChoiceStore.SetGenesisTime(timestamp)
}
// HighestReceivedBlockSlotRoot returns the corresponding value from forkchoice
func (s *Service) HighestReceivedBlockSlotRoot() (primitives.Slot, [32]byte) {
// HighestReceivedBlockSlot returns the corresponding value from forkchoice
func (s *Service) HighestReceivedBlockSlot() primitives.Slot {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.HighestReceivedBlockSlotRoot()
return s.cfg.ForkChoiceStore.HighestReceivedBlockSlot()
}
// ReceivedBlocksLastEpoch returns the corresponding value from forkchoice
@@ -101,37 +100,3 @@ func (s *Service) ParentRoot(root [32]byte) ([32]byte, error) {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ParentRoot(root)
}
// HashForBlockRoot wraps a call to the corresponding method in forkchoice
func (s *Service) HashForBlockRoot(root [32]byte) [32]byte {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.HashForBlockRoot(root)
}
// GetPTCVote wraps a call to the corresponding method in forkchoice and checks
// the currently syncing status
// Warning: this method will return the current PTC status regardless of
// timeliness. A client MUST call this method when about to submit a PTC
// attestation, that is exactly at the threshold to submit the attestation.
func (s *Service) GetPTCVote(root [32]byte) primitives.PTCStatus {
s.cfg.ForkChoiceStore.RLock()
f := s.cfg.ForkChoiceStore.GetPTCVote()
s.cfg.ForkChoiceStore.RUnlock()
if f != primitives.PAYLOAD_ABSENT {
return f
}
f, isSyncing := s.payloadBeingSynced.isSyncing(root)
if isSyncing {
return f
}
return primitives.PAYLOAD_ABSENT
}
// insertPayloadEnvelope wraps a locked call to the corresponding method in
// forkchoice
func (s *Service) insertPayloadEnvelope(envelope interfaces.ROExecutionPayloadEnvelope) error {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
return s.cfg.ForkChoiceStore.InsertPayloadEnvelope(envelope)
}


@@ -36,7 +36,6 @@ func prepareForkchoiceState(
blockRoot [32]byte,
parentRoot [32]byte,
payloadHash [32]byte,
parentHash [32]byte,
justified *ethpb.Checkpoint,
finalized *ethpb.Checkpoint,
) (state.BeaconState, blocks.ROBlock, error) {
@@ -69,8 +68,7 @@ func prepareForkchoiceState(
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &enginev1.ExecutionPayload{
BlockHash: payloadHash[:],
ParentHash: parentHash[:],
BlockHash: payloadHash[:],
},
},
},
@@ -143,7 +141,7 @@ func TestUnrealizedJustifiedBlockHash(t *testing.T) {
service := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: []byte{'j'}}
ofc := &ethpb.Checkpoint{Root: []byte{'f'}}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
service.cfg.ForkChoiceStore.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) { return []uint64{}, nil })
@@ -337,22 +335,22 @@ func TestService_ChainHeads(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, [32]byte{'B'}, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, [32]byte{'D'}, [32]byte{'C'}, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, [32]byte{'E'}, [32]byte{'D'}, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -434,10 +432,10 @@ func TestService_IsOptimistic(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -470,10 +468,10 @@ func TestService_IsOptimisticForRoot(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))


@@ -1,32 +0,0 @@
package blockchain
import (
"sync"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
type currentlySyncingPayload struct {
sync.Mutex
roots map[[32]byte]primitives.PTCStatus
}
func (b *currentlySyncingPayload) set(envelope interfaces.ROExecutionPayloadEnvelope) {
b.Lock()
defer b.Unlock()
b.roots[envelope.BeaconBlockRoot()] = primitives.PAYLOAD_PRESENT
}
func (b *currentlySyncingPayload) unset(root [32]byte) {
b.Lock()
defer b.Unlock()
delete(b.roots, root)
}
func (b *currentlySyncingPayload) isSyncing(root [32]byte) (status primitives.PTCStatus, isSyncing bool) {
b.Lock()
defer b.Unlock()
status, isSyncing = b.roots[root]
return
}


@@ -1,18 +0,0 @@
package blockchain
import (
"testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestServiceGetPTCVote(t *testing.T) {
c := &currentlySyncingPayload{roots: make(map[[32]byte]primitives.PTCStatus)}
s := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, payloadBeingSynced: c}
r := [32]byte{'r'}
require.Equal(t, primitives.PAYLOAD_ABSENT, s.GetPTCVote(r))
c.roots[r] = primitives.PAYLOAD_WITHHELD
require.Equal(t, primitives.PAYLOAD_WITHHELD, s.GetPTCVote(r))
}


@@ -30,12 +30,10 @@ var (
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
// ErrNilHead is returned when no head is present in the blockchain service.
ErrNilHead = errors.New("nil head")
// errInvalidValidatorIndex is returned when a validator index is
// invalid or unexpected
errInvalidValidatorIndex = errors.New("invalid validator index")
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")
var errMaxDataColumnsExceeded = errors.New("Expected data columns for node exceeds NUMBER_OF_COLUMNS")
// An invalid block is the block that fails state transition based on the core protocol rules.
// The beacon node shall not be accepting nor building blocks that branch off from an invalid block.


@@ -2,7 +2,6 @@ package blockchain
import (
"context"
"crypto/sha256"
"fmt"
"github.com/ethereum/go-ethereum/common"
@@ -31,8 +30,6 @@ import (
"github.com/sirupsen/logrus"
)
const blobCommitmentVersionKZG uint8 = 0x01
var defaultLatestValidHash = bytesutil.PadTo([]byte{0xff}, 32)
// notifyForkchoiceUpdate signals execution engine the fork choice updates. Execution engine should:
@@ -98,14 +95,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
log.WithError(err).Error("Could not set head root to invalid")
return nil, nil
}
if len(invalidRoots) == 0 {
log.WithFields(logrus.Fields{
"slot": headBlk.Slot(),
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(headRoot[:])),
}).Warn("invalid payload")
return nil, nil
}
if err := s.removeInvalidBlockAndState(ctx, invalidRoots); err != nil {
log.WithError(err).Error("Could not remove invalid block and state")
return nil, nil
@@ -120,7 +109,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
}).Warn("Pruned invalid blocks, could not update head root")
return nil, invalidBlock{error: ErrInvalidPayload, root: arg.headRoot, invalidAncestorRoots: invalidRoots}
}
b, err := s.getBlock(ctx, r)
if err != nil {
log.WithError(err).Error("Could not get head block")
@@ -468,13 +456,7 @@ func kzgCommitmentsToVersionedHashes(body interfaces.ReadOnlyBeaconBlockBody) ([
versionedHashes := make([]common.Hash, len(commitments))
for i, commitment := range commitments {
versionedHashes[i] = ConvertKzgCommitmentToVersionedHash(commitment)
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(commitment)
}
return versionedHashes, nil
}
func ConvertKzgCommitmentToVersionedHash(commitment []byte) common.Hash {
versionedHash := sha256.Sum256(commitment)
versionedHash[0] = blobCommitmentVersionKZG
return versionedHash
}
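The helper is relocated rather than dropped: kzgCommitmentsToVersionedHashes now calls primitives.ConvertKzgCommitmentToVersionedHash, and the computation itself is unchanged: SHA-256 of the 48-byte commitment with the first byte overwritten by the KZG version tag 0x01, per the EIP-4844 versioned-hash construction. A standalone sketch:

package main

import (
    "crypto/sha256"
    "fmt"
)

const blobCommitmentVersionKZG byte = 0x01

// versionedHash hashes a KZG commitment and stamps the version byte,
// matching the EIP-4844 versioned-hash construction used above.
func versionedHash(commitment []byte) [32]byte {
    h := sha256.Sum256(commitment)
    h[0] = blobCommitmentVersionKZG
    return h
}

func main() {
    commitment := make([]byte, 48) // zeroed placeholder commitment
    fmt.Printf("%#x\n", versionedHash(commitment))
}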


@@ -1,62 +0,0 @@
package blockchain
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/config/features"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/sirupsen/logrus"
)
// notifyForkchoiceUpdate signals execution engine the fork choice updates. Execution engine should:
// 1. Re-organizes the execution payload chain and corresponding state to make head_block_hash the head.
// 2. Applies finality to the execution state: it irreversibly persists the chain of all execution payloads and corresponding state, up to and including finalized_block_hash.
func (s *Service) notifyForkchoiceUpdateEPBS(ctx context.Context, blockhash [32]byte, attributes payloadattribute.Attributer) (*enginev1.PayloadIDBytes, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyForkchoiceUpdateEPBS")
defer span.End()
finalizedHash := s.cfg.ForkChoiceStore.FinalizedPayloadBlockHash()
justifiedHash := s.cfg.ForkChoiceStore.UnrealizedJustifiedPayloadBlockHash()
fcs := &enginev1.ForkchoiceState{
HeadBlockHash: blockhash[:],
SafeBlockHash: justifiedHash[:],
FinalizedBlockHash: finalizedHash[:],
}
if attributes == nil {
attributes = payloadattribute.EmptyWithVersion(version.EPBS)
}
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, attributes)
if err != nil {
switch {
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
forkchoiceUpdatedOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"headPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(blockhash[:])),
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash[:])),
}).Info("Called fork choice updated with optimistic block")
return payloadID, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
log.WithError(err).Info("forkchoice updated to invalid block")
return nil, invalidBlock{error: ErrInvalidPayload, root: [32]byte(lastValidHash)}
default:
log.WithError(err).Error(ErrUndefinedExecutionEngineError)
return nil, nil
}
}
forkchoiceUpdatedValidNodeCount.Inc()
// If the forkchoice update call has an attribute, update the payload ID cache.
hasAttr := attributes != nil && !attributes.IsEmpty()
if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", blockhash[:]),
}).Error("Received nil payload ID on VALID engine response")
}
return payloadID, nil
}


@@ -46,13 +46,13 @@ func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -104,13 +104,13 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -287,16 +287,16 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{}))
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, [32]byte{'B'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, [32]byte{'C'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -316,8 +316,10 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
headRoot: brd,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
// The incoming block is not invalid because the empty node is still valid on ePBS.
require.Equal(t, false, IsInvalidBlock(err))
require.Equal(t, true, IsInvalidBlock(err))
require.Equal(t, brd, InvalidBlockRoot(err))
require.Equal(t, brd, InvalidAncestorRoots(err)[0])
require.Equal(t, 1, len(InvalidAncestorRoots(err)))
}
//
@@ -396,28 +398,28 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{}))
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
bState, _ := util.DeterministicGenesisState(t, 10)
require.NoError(t, beaconDB.SaveState(ctx, bState, bra))
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, [32]byte{'B'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, [32]byte{'C'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, [32]byte{'D'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, [32]byte{'E'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, [32]byte{'F'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -510,10 +512,10 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, r, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, r, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -690,7 +692,7 @@ func Test_NotifyNewPayload(t *testing.T) {
}
service.cfg.ExecutionEngineCaller = e
root := [32]byte{'a'}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, root, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, root, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
postVersion, postHeader, err := getStateVersionAndPayload(tt.postState)
@@ -757,17 +759,17 @@ func Test_reportInvalidBlock(t *testing.T) {
service, tr := minimalTestService(t)
ctx, _, fcs := tr.ctx, tr.db, tr.fcs
jcp := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 0, [32]byte{'A'}, [32]byte{}, [32]byte{'a'}, [32]byte{}, jcp, jcp)
st, root, err := prepareForkchoiceState(ctx, 0, [32]byte{'A'}, [32]byte{}, [32]byte{'a'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 1, [32]byte{'B'}, [32]byte{'A'}, [32]byte{'b'}, [32]byte{'a'}, jcp, jcp)
st, root, err = prepareForkchoiceState(ctx, 1, [32]byte{'B'}, [32]byte{'A'}, [32]byte{'b'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'C'}, [32]byte{'B'}, [32]byte{'c'}, [32]byte{'b'}, jcp, jcp)
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'C'}, [32]byte{'B'}, [32]byte{'c'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 3, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'d'}, [32]byte{'c'}, jcp, jcp)
st, root, err = prepareForkchoiceState(ctx, 3, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'d'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
@@ -929,7 +931,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
fjc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, fjc))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(fjc))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
fcs.SetOriginRoot(genesisRoot)
@@ -963,7 +965,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
require.NoError(t, beaconDB.SaveStateSummary(ctx, opStateSummary))
tenjc := &ethpb.Checkpoint{Epoch: 10, Root: genesisRoot[:]}
tenfc := &ethpb.Checkpoint{Epoch: 10, Root: genesisRoot[:]}
state, blkRoot, err = prepareForkchoiceState(ctx, 320, opRoot, genesisRoot, params.BeaconConfig().ZeroHash, [32]byte{}, tenjc, tenfc)
state, blkRoot, err = prepareForkchoiceState(ctx, 320, opRoot, genesisRoot, params.BeaconConfig().ZeroHash, tenjc, tenfc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
assert.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, opRoot))
@@ -992,7 +994,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
require.NoError(t, beaconDB.SaveStateSummary(ctx, validSummary))
twentyjc := &ethpb.Checkpoint{Epoch: 20, Root: validRoot[:]}
twentyfc := &ethpb.Checkpoint{Epoch: 20, Root: validRoot[:]}
state, blkRoot, err = prepareForkchoiceState(ctx, 640, validRoot, genesisRoot, params.BeaconConfig().ZeroHash, [32]byte{}, twentyjc, twentyfc)
state, blkRoot, err = prepareForkchoiceState(ctx, 640, validRoot, genesisRoot, params.BeaconConfig().ZeroHash, twentyjc, twentyfc)
require.NoError(t, err)
fcs.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) { return []uint64{}, nil })
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -1054,8 +1056,8 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
require.NoError(t, service.removeInvalidBlockAndState(ctx, [][32]byte{r1, r2}))
require.Equal(t, false, service.chainHasBlock(ctx, r1))
require.Equal(t, false, service.chainHasBlock(ctx, r2))
require.Equal(t, false, service.hasBlock(ctx, r1))
require.Equal(t, false, service.hasBlock(ctx, r2))
require.Equal(t, false, service.cfg.BeaconDB.HasStateSummary(ctx, r1))
require.Equal(t, false, service.cfg.BeaconDB.HasStateSummary(ctx, r2))
has, err := service.cfg.StateGen.HasState(ctx, r1)


@@ -122,13 +122,13 @@ func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testin
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -164,10 +164,10 @@ func TestShouldOverrideFCU(t *testing.T) {
headRoot := [32]byte{'b'}
parentRoot := [32]byte{'a'}
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{}, [32]byte{}, ojc, ojc)
st, root, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, headRoot, parentRoot, [32]byte{}, [32]byte{}, ojc, ojc)
st, root, err = prepareForkchoiceState(ctx, 2, headRoot, parentRoot, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))


@@ -48,7 +48,7 @@ func TestSaveHead_Different(t *testing.T) {
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
@@ -63,11 +63,11 @@ func TestSaveHead_Different(t *testing.T) {
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -101,7 +101,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
@@ -110,7 +110,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
}
reorgChainParent := [32]byte{'B'}
state, blkRoot, err = prepareForkchoiceState(ctx, 0, reorgChainParent, oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 0, reorgChainParent, oldRoot, oldBlock.Block().ParentRoot(), ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
@@ -122,7 +122,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -238,11 +238,11 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -304,7 +304,7 @@ func TestSaveOrphanedAtts(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -381,7 +381,7 @@ func TestSaveOrphanedOps(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -451,7 +451,7 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlockCapella{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -509,7 +509,7 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -568,7 +568,7 @@ func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -583,7 +583,7 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
ojp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, [32]byte{}, ojp, ojp)
st, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, ojp, ojp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, blkRoot))
@@ -603,7 +603,7 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
headRoot := service.headRoot()
require.Equal(t, [32]byte{}, headRoot)
st, blkRoot, err = prepareForkchoiceState(ctx, 0, bellatrixBlkRoot, [32]byte{}, [32]byte{}, [32]byte{}, fcp, fcp)
st, blkRoot, err = prepareForkchoiceState(ctx, 0, bellatrixBlkRoot, [32]byte{}, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, blkRoot))
fcs.SetBalancesByRooter(func(context.Context, [32]byte) ([]uint64, error) { return []uint64{1, 2}, nil })


@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"kzg.go",
"trusted_setup.go",
"validation.go",
],
@@ -12,6 +13,9 @@ go_library(
deps = [
"//consensus-types/blocks:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)


@@ -0,0 +1,111 @@
package kzg
import (
"errors"
ckzg4844 "github.com/ethereum/c-kzg-4844/v2/bindings/go"
"github.com/ethereum/go-ethereum/crypto/kzg4844"
)
// BytesPerBlob is the number of bytes in a single blob.
const BytesPerBlob = ckzg4844.BytesPerBlob
// Blob represents a serialized chunk of data.
type Blob [BytesPerBlob]byte
// BytesPerCell is the number of bytes in a single cell.
const BytesPerCell = ckzg4844.BytesPerCell
// Cell represents a chunk of an encoded Blob.
type Cell [BytesPerCell]byte
// Commitment represents a KZG commitment to a Blob.
type Commitment [48]byte
// Proof represents a KZG proof that attests to the validity of a Blob or parts of it.
type Proof [48]byte
// Bytes48 is a 48-byte array.
type Bytes48 = ckzg4844.Bytes48
// Bytes32 is a 32-byte array.
type Bytes32 = ckzg4844.Bytes32
// CellsAndProofs represents the Cells and Proofs corresponding to
// a single blob.
type CellsAndProofs struct {
Cells []Cell
Proofs []Proof
}
func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
kzgBlob := kzg4844.Blob(*blob)
comm, err := kzg4844.BlobToCommitment(&kzgBlob)
if err != nil {
return Commitment{}, err
}
return Commitment(comm), nil
}
func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
kzgBlob := kzg4844.Blob(*blob)
proof, err := kzg4844.ComputeBlobProof(&kzgBlob, kzg4844.Commitment(commitment))
if err != nil {
return [48]byte{}, err
}
return Proof(proof), nil
}
func ComputeCellsAndKZGProofs(blob *Blob) (CellsAndProofs, error) {
ckzgBlob := (*ckzg4844.Blob)(blob)
ckzgCells, ckzgProofs, err := ckzg4844.ComputeCellsAndKZGProofs(ckzgBlob)
if err != nil {
return CellsAndProofs{}, err
}
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, cells []Cell, proofsBytes []Bytes48) (bool, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgCells := make([]ckzg4844.Cell, len(cells))
for i := range cells {
ckzgCells[i] = ckzg4844.Cell(cells[i])
}
return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
}
func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsAndProofs, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))
for i := range partialCells {
ckzgPartialCells[i] = ckzg4844.Cell(partialCells[i])
}
ckzgCells, ckzgProofs, err := ckzg4844.RecoverCellsAndKZGProofs(cellIndices, ckzgPartialCells)
if err != nil {
return CellsAndProofs{}, err
}
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
// Convert cells/proofs to the CellsAndProofs type defined in this package.
func makeCellsAndProofs(ckzgCells []ckzg4844.Cell, ckzgProofs []ckzg4844.KZGProof) (CellsAndProofs, error) {
if len(ckzgCells) != len(ckzgProofs) {
return CellsAndProofs{}, errors.New("different number of cells/proofs")
}
var cells []Cell
var proofs []Proof
for i := range ckzgCells {
cells = append(cells, Cell(ckzgCells[i]))
proofs = append(proofs, Proof(ckzgProofs[i]))
}
return CellsAndProofs{
Cells: cells,
Proofs: proofs,
}, nil
}
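For orientation, a minimal usage sketch of the cells-and-proofs API introduced above: compute the extended cells for a blob, keep only half of them, and recover the rest. It assumes the trusted setup has already been loaded (see Start below) and that the blob is canonically encoded; exampleRecoverHalf is illustrative and not part of the diff.
func exampleRecoverHalf(blob *Blob) error {
	// Compute all cells and proofs for the extended blob.
	cp, err := ComputeCellsAndKZGProofs(blob)
	if err != nil {
		return err
	}
	// Drop the second half; recovery needs at least 50% of the cells.
	half := len(cp.Cells) / 2
	indices := make([]uint64, 0, half)
	partial := make([]Cell, 0, half)
	for i := 0; i < half; i++ {
		indices = append(indices, uint64(i))
		partial = append(partial, cp.Cells[i])
	}
	// Recover the full set of cells and proofs from the surviving half.
	recovered, err := RecoverCellsAndKZGProofs(indices, partial)
	if err != nil {
		return err
	}
	if len(recovered.Cells) != len(cp.Cells) {
		return errors.New("unexpected recovered cell count")
	}
	return nil
}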


@@ -5,6 +5,8 @@ import (
"encoding/json"
GoKZG "github.com/crate-crypto/go-kzg-4844"
CKZG "github.com/ethereum/c-kzg-4844/v2/bindings/go"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
)
@@ -12,17 +14,53 @@ var (
//go:embed trusted_setup.json
embeddedTrustedSetup []byte // 1.2Mb
kzgContext *GoKZG.Context
kzgLoaded bool
)
type TrustedSetup struct {
G1Monomial [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_monomial"`
G1Lagrange [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_lagrange"`
G2Monomial [65]GoKZG.G2CompressedHexStr `json:"g2_monomial"`
}
func Start() error {
parsedSetup := GoKZG.JSONTrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, &parsedSetup)
trustedSetup := &TrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, trustedSetup)
if err != nil {
return errors.Wrap(err, "could not parse trusted setup JSON")
}
kzgContext, err = GoKZG.NewContext4096(&parsedSetup)
kzgContext, err = GoKZG.NewContext4096(&GoKZG.JSONTrustedSetup{
SetupG2: trustedSetup.G2Monomial[:],
SetupG1Lagrange: trustedSetup.G1Lagrange})
if err != nil {
return errors.Wrap(err, "could not initialize go-kzg context")
}
// Total size of the G1 monomial points once converted from hex to binary.
g1MonomialBytes := make([]byte, len(trustedSetup.G1Monomial)*(len(trustedSetup.G1Monomial[0])-2)/2)
for i, g1 := range &trustedSetup.G1Monomial {
copy(g1MonomialBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Total size of the G1 Lagrange points once converted from hex to binary.
g1LagrangeBytes := make([]byte, len(trustedSetup.G1Lagrange)*(len(trustedSetup.G1Lagrange[0])-2)/2)
for i, g1 := range &trustedSetup.G1Lagrange {
copy(g1LagrangeBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Total size of the G2 monomial points once converted from hex to binary.
g2MonomialBytes := make([]byte, len(trustedSetup.G2Monomial)*(len(trustedSetup.G2Monomial[0])-2)/2)
for i, g2 := range &trustedSetup.G2Monomial {
copy(g2MonomialBytes[i*(len(g2)-2)/2:], hexutil.MustDecode(g2))
}
if !kzgLoaded {
// TODO: Provide a configuration option for this.
var precompute uint = 8
// Free the current trusted setup before running this method. CKZG
// panics if the same setup is loaded multiple times.
if err = CKZG.LoadTrustedSetup(g1MonomialBytes, g1LagrangeBytes, g2MonomialBytes, precompute); err != nil {
panic(err)
}
}
kzgLoaded = true
return nil
}
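A note on the repeated size arithmetic above: each setup point is a 0x-prefixed hex string, so its binary length is (len - 2) / 2 bytes. A hedged sketch (pointByteLen is illustrative, not part of the diff):
func pointByteLen(hexStr string) int {
	// Strip the "0x" prefix, then two hex characters per byte.
	return (len(hexStr) - 2) / 2
}
With compressed BLS12-381 encodings this yields 48 bytes per G1 point ("0x" plus 96 hex characters) and 96 bytes per G2 point ("0x" plus 192 hex characters), which is what the buffers above are sized for.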

File diff suppressed because it is too large.


@@ -45,43 +45,43 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
}
log = log.WithField("syncBitsCount", agg.SyncCommitteeBits.Count())
}
if b.Version() >= version.EPBS {
sh, err := b.Body().SignedExecutionPayloadHeader()
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if err != nil {
return err
}
header, err := sh.Header()
if err != nil {
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
log = log.WithFields(logrus.Fields{"payloadHash": fmt.Sprintf("%#x", header.BlockHash()),
"builderIndex": header.BuilderIndex(),
"value": header.Value(),
"blobKzgCommitmentsRoot": fmt.Sprintf("%#x", header.BlobKzgCommitmentsRoot()),
})
} else {
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if err != nil {
return err
}
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
if b.Version() >= version.Electra {
eReqs, err := b.Body().ExecutionRequests()
if err != nil {
log.WithError(err).Error("Failed to get execution requests")
} else {
if len(eReqs.Deposits) > 0 {
log = log.WithField("depositRequestCount", len(eReqs.Deposits))
}
if len(eReqs.Consolidations) > 0 {
log = log.WithField("consolidationRequestCount", len(eReqs.Consolidations))
}
if len(eReqs.Withdrawals) > 0 {
log = log.WithField("withdrawalRequestCount", len(eReqs.Withdrawals))
}
}
}
@@ -113,18 +113,6 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
"dataAvailabilityWaitedTime": daWaitedTime,
"deposits": len(block.Body().Deposits()),
}
if block.Version() >= version.EPBS {
ph, err := block.Body().SignedExecutionPayloadHeader()
if err != nil {
return err
}
header, err := ph.Header()
if err != nil {
return err
}
hash := header.ParentBlockHash()
lf["parentHash"] = fmt.Sprintf("0x%s...", hex.EncodeToString(hash[:])[:8])
}
log.WithFields(lf).Debug("Synced new block")
} else {
log.WithFields(logrus.Fields{
@@ -140,9 +128,6 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
// logs payload related data every slot.
func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
if block.Version() >= version.EPBS {
return nil
}
isExecutionBlk, err := blocks.IsExecutionBlock(block.Body())
if err != nil {
return errors.Wrap(err, "could not determine if block is execution block")


@@ -182,10 +182,6 @@ var (
Name: "chain_service_processing_milliseconds",
Help: "Total time to call a chain service in ReceiveBlock()",
})
executionEngineProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "execution_engine_processing_milliseconds",
Help: "Total time to process an execution payload envelope in ReceiveExecutionPayloadEnvelope()",
})
dataAvailWaitedTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "da_waited_time_milliseconds",
Help: "Total time spent waiting for a data availability check in ReceiveBlock()",


@@ -1,8 +1,6 @@
package blockchain
import (
"sync"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
@@ -71,22 +69,6 @@ func WithDepositCache(c cache.DepositCache) Option {
}
}
// WithPayloadAttestationCache for payload attestation cache.
func WithPayloadAttestationCache(c *cache.PayloadAttestationCache) Option {
return func(s *Service) error {
s.cfg.PayloadAttestationCache = c
return nil
}
}
// WithPayloadEnvelopeCache for payload envelope cache.
func WithPayloadEnvelopeCache(c *sync.Map) Option {
return func(s *Service) error {
s.cfg.PayloadEnvelopeCache = c
return nil
}
}
// WithPayloadIDCache for payload ID cache.
func WithPayloadIDCache(c *cache.PayloadIDCache) Option {
return func(s *Service) error {
@@ -144,9 +126,9 @@ func WithBLSToExecPool(p blstoexec.PoolManager) Option {
}
// WithP2PBroadcaster to broadcast messages after appropriate processing.
func WithP2PBroadcaster(p p2p.Broadcaster) Option {
func WithP2PBroadcaster(p p2p.Acceser) Option {
return func(s *Service) error {
s.cfg.P2p = p
s.cfg.P2P = p
return nil
}
}


@@ -97,7 +97,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
// We assume trusted attestation in this function has verified signature.
// Update forkchoice store with the new attestation for updating weight.
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.GetAttestingIndices(), bytesutil.ToBytes32(a.GetData().BeaconBlockRoot), a.GetData().Slot)
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.GetAttestingIndices(), bytesutil.ToBytes32(a.GetData().BeaconBlockRoot), a.GetData().Target.Epoch)
return nil
}


@@ -32,7 +32,7 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, blkWithoutState)
cp := &ethpb.Checkpoint{}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, cp, cp)
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, cp, cp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -41,7 +41,7 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
r, err := blkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
cp = &ethpb.Checkpoint{Root: r[:]}
st, roblock, err = prepareForkchoiceState(ctx, blkWithStateBadAtt.Block.Slot, r, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, cp, cp)
st, roblock, err = prepareForkchoiceState(ctx, blkWithStateBadAtt.Block.Slot, r, [32]byte{}, params.BeaconConfig().ZeroHash, cp, cp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
util.SaveBlock(t, ctx, beaconDB, blkWithStateBadAtt)
@@ -139,7 +139,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ojc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
ofc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
state, roblock, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, roblock, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, roblock))
require.NoError(t, service.OnAttestation(ctx, att[0], 0))
@@ -170,7 +170,7 @@ func TestService_GetRecentPreState(t *testing.T) {
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, [32]byte{}, cp0, cp0)
st, root, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
@@ -202,7 +202,7 @@ func TestService_GetAttPreState_Concurrency(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: ckRoot}))
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, [32]byte{}, cp1, cp1)
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, cp1, cp1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
@@ -259,7 +259,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)}))
st, root, err := prepareForkchoiceState(ctx, 1, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, [32]byte{}, cp1, cp1)
st, root, err := prepareForkchoiceState(ctx, 1, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, cp1, cp1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
s1, err := service.getAttPreState(ctx, cp1)
@@ -273,7 +273,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
_, err = service.getAttPreState(ctx, cp2)
require.ErrorContains(t, "epoch 2 root 0x4200000000000000000000000000000000000000000000000000000000000000: not a checkpoint in forkchoice", err)
st, root, err = prepareForkchoiceState(ctx, 33, [32]byte(cp2.Root), [32]byte(cp1.Root), [32]byte{'R'}, [32]byte{}, cp2, cp2)
st, root, err = prepareForkchoiceState(ctx, 33, [32]byte(cp2.Root), [32]byte(cp1.Root), [32]byte{'R'}, cp2, cp2)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
@@ -298,7 +298,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'C'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}))
st, root, err = prepareForkchoiceState(ctx, 31, [32]byte(cp3.Root), [32]byte(cp2.Root), [32]byte{'P'}, [32]byte{}, cp2, cp2)
st, root, err = prepareForkchoiceState(ctx, 31, [32]byte(cp3.Root), [32]byte(cp2.Root), [32]byte{'P'}, cp2, cp2)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
@@ -318,7 +318,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
require.NoError(t, err)
checkpoint := &ethpb.Checkpoint{Epoch: epoch, Root: r1[:]}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(checkpoint.Root)))
st, roblock, err := prepareForkchoiceState(ctx, blk.Block.Slot, r1, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, checkpoint, checkpoint)
st, roblock, err := prepareForkchoiceState(ctx, blk.Block.Slot, r1, [32]byte{}, params.BeaconConfig().ZeroHash, checkpoint, checkpoint)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
returned, err := service.getAttPreState(ctx, checkpoint)
@@ -336,7 +336,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
require.NoError(t, err)
newCheckpoint := &ethpb.Checkpoint{Epoch: epoch, Root: r2[:]}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(newCheckpoint.Root)))
st, roblock, err = prepareForkchoiceState(ctx, blk.Block.Slot, r2, r1, params.BeaconConfig().ZeroHash, [32]byte{}, newCheckpoint, newCheckpoint)
st, roblock, err = prepareForkchoiceState(ctx, blk.Block.Slot, r2, r1, params.BeaconConfig().ZeroHash, newCheckpoint, newCheckpoint)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
returned, err = service.getAttPreState(ctx, newCheckpoint)


@@ -3,11 +3,13 @@ package blockchain
import (
"context"
"fmt"
"slices"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
@@ -64,9 +66,7 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
fcuArgs := &fcuConfig{}
if s.inRegularSync() {
if cfg.roblock.Version() < version.EPBS {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
@@ -104,18 +104,6 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
s.logNonCanonicalBlockReceived(cfg.roblock.Root(), cfg.headRoot)
return nil
}
if cfg.roblock.Version() >= version.EPBS {
if err := s.saveHead(ctx, cfg.headRoot, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("could not save head")
}
if err := s.pruneAttsFromPool(cfg.roblock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
// update the NSC and handle epoch boundaries here since we do
// not send FCU at all
return s.updateCachesPostBlockProcessing(cfg)
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
@@ -248,7 +236,9 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return err
}
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
nodeID := s.cfg.P2P.NodeID()
if err := avs.IsDataAvailable(ctx, nodeID, s.CurrentSlot(), b); err != nil {
return errors.Wrapf(err, "could not validate blob data availability at slot %d", b.Block().Slot())
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
@@ -392,7 +382,7 @@ func (s *Service) handleBlockAttestations(ctx context.Context, blk interfaces.Re
}
r := bytesutil.ToBytes32(a.GetData().BeaconBlockRoot)
if s.cfg.ForkChoiceStore.HasNode(r) {
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.GetData().Slot)
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.GetData().Target.Epoch)
} else if features.Get().EnableExperimentalAttestationPool {
if err = s.cfg.AttestationCache.Add(a); err != nil {
return err
@@ -502,15 +492,9 @@ func (s *Service) runLateBlockTasks() {
attThreshold := params.BeaconConfig().SecondsPerSlot / 3
ticker := slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
epbs := params.BeaconConfig().EPBSForkEpoch
for {
select {
case slot := <-ticker.C():
if slots.ToEpoch(slot) == epbs && slot%32 == 0 {
ticker.Done()
attThreshold := params.BeaconConfig().SecondsPerSlot / 4
ticker = slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
}
case <-ticker.C():
s.lateBlockTasks(s.ctx)
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
@@ -532,28 +516,49 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
if len(expected) > maxBlobsPerBlock {
return nil, errMaxBlobsExceeded
}
indices, err := bs.Indices(root, slot)
if err != nil {
return nil, err
}
indices := bs.Summary(root)
missing := make(map[uint64]struct{}, len(expected))
for i := range expected {
ui := uint64(i)
if len(expected[i]) > 0 {
if !indices[i] {
missing[ui] = struct{}{}
}
if len(expected[i]) > 0 && !indices.HasIndex(uint64(i)) {
missing[uint64(i)] = struct{}{}
}
}
return missing, nil
}
func missingDataColumns(bs *filesystem.BlobStorage, root [32]byte, expected map[uint64]bool) (map[uint64]bool, error) {
if len(expected) == 0 {
return nil, nil
}
if len(expected) > int(params.BeaconConfig().NumberOfColumns) {
return nil, errMaxDataColumnsExceeded
}
// Get a summary of the data columns stored in the database.
summary := bs.Summary(root)
// Check all expected data columns against the summary.
missing := make(map[uint64]bool)
for column := range expected {
if !summary.HasDataColumnIndex(column) {
missing[column] = true
}
}
return missing, nil
}
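The two helpers above compute the same set difference: the expected indices minus those the storage summary already has. A toy test in the style of the surrounding suites (the inline maps stand in for the custody set and the storage summary; TestMissingDataColumns_Toy is illustrative, not part of the diff):
func TestMissingDataColumns_Toy(t *testing.T) {
	// Expected custody columns for this node.
	expected := map[uint64]bool{1: true, 5: true, 9: true}
	// Columns already persisted, standing in for the storage summary.
	stored := map[uint64]bool{1: true, 9: true}
	missing := make(map[uint64]bool)
	for col := range expected {
		if !stored[col] {
			missing[col] = true
		}
	}
	require.Equal(t, 1, len(missing))
	require.Equal(t, true, missing[5])
}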
// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if coreTime.PeerDASIsActive(signed.Block().Slot()) {
return s.areDataColumnsAvailable(ctx, root, signed)
}
if signed.Version() < version.Deneb {
return nil
}
@@ -583,7 +588,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
// get a map of BlobSidecar indices that are not currently available.
missing, err := missingIndices(s.blobStorage, root, kzgCommitments, block.Slot())
if err != nil {
return err
return errors.Wrap(err, "missing indices")
}
// If there are no missing indices, all BlobSidecars are available.
if len(missing) == 0 {
@@ -602,8 +607,13 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
if len(missing) == 0 {
return
}
log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
Error("Still waiting for DA check at slot end.")
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": len(missing),
}).Error("Still waiting for blobs DA check at slot end.")
})
defer nst.Stop()
}
@@ -625,12 +635,178 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
}
}
func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
return logrus.Fields{
"slot": slot,
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": missing,
// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
output := make([]uint64, 0, len(input))
for idx := range input {
output = append(output, idx)
}
slices.Sort[[]uint64](output)
return output
}
func (s *Service) areDataColumnsAvailable(ctx context.Context, root [32]byte, signedBlock interfaces.ReadOnlySignedBeaconBlock) error {
if signedBlock.Version() < version.Fulu {
return nil
}
block := signedBlock.Block()
if block == nil {
return errors.New("invalid nil beacon block")
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return nil
}
body := block.Body()
if body == nil {
return errors.New("invalid nil beacon block body")
}
kzgCommitments, err := body.BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "blob KZG commitments")
}
// If the block has no commitments, there is nothing to wait for.
if len(kzgCommitments) == 0 {
return nil
}
// All columns to sample need to be available for the block to be considered available.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
nodeID := s.cfg.P2P.NodeID()
// Prevent the custody group count from changing during the rest of the function.
peerdas.CustodyGroupCountMut.RLock()
defer peerdas.CustodyGroupCountMut.RUnlock()
// Get the custody group sampling size for the node.
custodyGroupSamplingSize := peerdas.CustodyGroupSamplingSize(peerdas.Actual)
peerInfo, _, err := peerdas.Info(nodeID, custodyGroupSamplingSize)
if err != nil {
return errors.Wrap(err, "peer info")
}
// Exit early if the node is not expected to custody any data columns.
if len(peerInfo.CustodyColumns) == 0 {
return nil
}
// Subscribe to newly stored data columns in the database.
rootIndexChan := make(chan filesystem.RootIndexPair)
subscription := s.blobStorage.DataColumnFeed.Subscribe(rootIndexChan)
defer subscription.Unsubscribe()
// Get the count of data columns we already have in the store.
summary := s.blobStorage.Summary(root)
numberOfColumns := params.BeaconConfig().NumberOfColumns
retrievedDataColumnsCount := uint64(0)
for column := range numberOfColumns {
if summary.HasDataColumnIndex(column) {
retrievedDataColumnsCount++
}
}
// As soon as we have more than half of the data columns, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if peerdas.CanSelfReconstruct(retrievedDataColumnsCount) {
return nil
}
// Get a map of data column indices that are not currently available.
missingMap, err := missingDataColumns(s.blobStorage, root, peerInfo.CustodyColumns)
if err != nil {
return err
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missingMap) == 0 {
return nil
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(signedBlock.Block().Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
missingMapCount := uint64(len(missingMap))
if missingMapCount == 0 {
return
}
var (
expected interface{} = "all"
missing interface{} = "all"
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
colMapCount := uint64(len(peerInfo.CustodyColumns))
if colMapCount < numberOfColumns {
expected = uint64MapToSortedSlice(peerInfo.CustodyColumns)
}
if missingMapCount < numberOfColumns {
missing = uint64MapToSortedSlice(missingMap)
}
log.WithFields(logrus.Fields{
"slot": signedBlock.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": missing,
}).Error("Some data columns are still unavailable at slot end")
})
defer nst.Stop()
}
for {
select {
case rootIndex := <-rootIndexChan:
if rootIndex.Root != root {
// This is not the root we are looking for.
continue
}
// This is a data column we are expecting.
if _, ok := missingMap[rootIndex.Index]; ok {
retrievedDataColumnsCount++
}
// As soon as we have more than half of the data columns, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if peerdas.CanSelfReconstruct(retrievedDataColumnsCount) {
return nil
}
// Remove the index from the missing map.
delete(missingMap, rootIndex.Index)
// Exit if there are no more missing data columns.
if len(missingMap) == 0 {
return nil
}
case <-ctx.Done():
var missingIndices interface{} = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missingMap))
if missingIndicesCount < numberOfColumns {
missingIndices = uint64MapToSortedSlice(missingMap)
}
return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndices)
}
}
}
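peerdas.CanSelfReconstruct itself is not shown in this diff. Given the comments above, it plausibly checks that at least half of the extended columns have been retrieved, since the column extension is a rate-1/2 erasure code; a hedged sketch (canSelfReconstruct is illustrative, reusing the params accessor seen in the code above):
func canSelfReconstruct(retrievedCount uint64) bool {
	numberOfColumns := params.BeaconConfig().NumberOfColumns
	// Half of the extended columns (rounded up) suffices to recover the rest.
	return retrievedCount >= (numberOfColumns+1)/2
}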
@@ -686,36 +862,24 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
return
}
if headState.Version() >= version.EPBS {
bh, err := headState.LatestBlockHash()
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve latest block hash")
return
}
_, err = s.notifyForkchoiceUpdateEPBS(ctx, [32]byte(bh), attribute)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
} else {
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
s.headLock.RUnlock()
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve head block")
return
}
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
s.headLock.RUnlock()
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve head block")
return
}
s.headLock.RUnlock()
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
}
@@ -729,7 +893,7 @@ func (s *Service) waitForSync() error {
}
}
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot [32]byte, parentRoot [32]byte) error {
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [32]byte) error {
if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
}


@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
lightclient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
"github.com/ethereum/go-ethereum/common"
@@ -368,31 +369,7 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
return nil, err
}
parentRoot := b.ParentRoot()
s.ForkChoicer().RLock()
slot, err := s.ForkChoicer().Slot(parentRoot)
s.ForkChoicer().RUnlock()
if err != nil {
return nil, errors.Wrap(err, "could not get slot for parent root")
}
if slots.ToEpoch(slot) >= params.BeaconConfig().EPBSForkEpoch {
s.ForkChoicer().RLock()
parentHash := s.ForkChoicer().HashForBlockRoot(parentRoot)
s.ForkChoicer().RUnlock()
signedBid, err := b.Body().SignedExecutionPayloadHeader()
if err != nil {
return nil, errors.Wrap(err, "could not get signed execution payload header")
}
bid, err := signedBid.Header()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload header")
}
if parentHash == bid.ParentBlockHash() {
// It's based on full, use the state by hash
parentRoot = parentHash
}
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, parentRoot)
preState, err := s.cfg.StateGen.StateByRoot(ctx, b.ParentRoot())
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot())
}
@@ -576,7 +553,8 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed inte
// inserts finalized deposits into our finalized deposit trie, needs to be
// called in the background
func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) {
// Post-Electra: prunes all proofs and pending deposits in the cache
func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "blockChain.insertFinalizedDeposits")
defer span.End()
startTime := time.Now()
@@ -587,6 +565,16 @@ func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) {
log.WithError(err).Error("could not fetch finalized state")
return
}
// Check if we should prune all pending deposits.
// In post-Electra (after the legacy deposit mechanism is deprecated),
// we can prune all pending deposits in the deposit cache.
// See: https://eips.ethereum.org/EIPS/eip-6110#eth1data-poll-deprecation
if helpers.DepositRequestsStarted(finalizedState) {
s.pruneAllPendingDepositsAndProofs(ctx)
return
}
// We update the cache up to the last deposit index in the finalized block's state.
// We can be confident that these deposits will be included in some block
// because the Eth1 follow distance makes such long-range reorgs extremely unlikely.
@@ -615,6 +603,12 @@ func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) {
log.WithField("duration", time.Since(startTime).String()).Debugf("Finalized deposit insertion completed at index %d", finalizedEth1DepIdx)
}
// pruneAllPendingDepositsAndProofs prunes all proofs and pending deposits in the cache.
func (s *Service) pruneAllPendingDepositsAndProofs(ctx context.Context) {
s.cfg.DepositCache.PruneAllPendingDeposits(ctx)
s.cfg.DepositCache.PruneAllProofs(ctx)
}
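helpers.DepositRequestsStarted is referenced above but not shown in this diff. Per the EIP-6110 link above, the legacy Eth1 deposit poll can be retired once the state's eth1 deposit index reaches the index at which execution-layer deposit requests began; a hedged sketch of that gate (depositRequestsStarted and its parameters are illustrative):
func depositRequestsStarted(eth1DepositIndex, requestsStartIndex uint64) bool {
	// Pre-Electra states have no requests start index and the real helper
	// returns false for them; this sketch models only the post-Electra check.
	return eth1DepositIndex >= requestsStartIndex
}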
// This ensures that the input root defaults to using genesis root instead of zero hashes. This is needed for handling
// fork choice justification routine.
func (s *Service) ensureRootNotZeros(root [32]byte) [32]byte {


@@ -142,7 +142,7 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
// the parent of the last block inserted is the tree node.
fcp := &ethpb.Checkpoint{Epoch: 0, Root: service.originBlockRoot[:]}
r0 := bytesutil.ToBytes32(roots[0])
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, [32]byte{}, fcp, fcp)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
fcp2 := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r0}
@@ -184,7 +184,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
// the parent of the last block inserted is the tree node.
fcp := &ethpb.Checkpoint{Epoch: 0, Root: service.originBlockRoot[:]}
r0 := bytesutil.ToBytes32(roots[0])
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, [32]byte{}, fcp, fcp)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
fcp2 := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r0}
@@ -464,7 +464,7 @@ func TestAncestor_CanUseForkchoice(t *testing.T) {
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
st, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
}
@@ -504,7 +504,7 @@ func TestAncestor_CanUseDB(t *testing.T) {
util.SaveBlock(t, context.Background(), beaconDB, beaconBlock)
}
st, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
@@ -723,7 +723,7 @@ func TestInsertFinalizedDeposits(t *testing.T) {
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root)))
}
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
service.insertFinalizedDepositsAndPrune(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits, err := depositCache.FinalizedDeposits(ctx)
require.NoError(t, err)
assert.Equal(t, 7, int(fDeposits.MerkleTrieIndex()), "Finalized deposits not inserted correctly")
@@ -759,7 +759,7 @@ func TestInsertFinalizedDeposits_PrunePendingDeposits(t *testing.T) {
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root))
}
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
service.insertFinalizedDepositsAndPrune(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits, err := depositCache.FinalizedDeposits(ctx)
require.NoError(t, err)
assert.Equal(t, 7, int(fDeposits.MerkleTrieIndex()), "Finalized deposits not inserted correctly")
@@ -799,7 +799,7 @@ func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
}
// Insert 3 deposits before hand.
require.NoError(t, depositCache.InsertFinalizedDeposits(ctx, 2, [32]byte{}, 0))
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
service.insertFinalizedDepositsAndPrune(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits, err := depositCache.FinalizedDeposits(ctx)
require.NoError(t, err)
assert.Equal(t, 5, int(fDeposits.MerkleTrieIndex()), "Finalized deposits not inserted correctly")
@@ -810,7 +810,7 @@ func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
}
// Insert New Finalized State with higher deposit count.
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k', '2'})
service.insertFinalizedDepositsAndPrune(ctx, [32]byte{'m', 'o', 'c', 'k', '2'})
fDeposits, err = depositCache.FinalizedDeposits(ctx)
require.NoError(t, err)
assert.Equal(t, 12, int(fDeposits.MerkleTrieIndex()), "Finalized deposits not inserted correctly")
@@ -1153,7 +1153,7 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
logHook := logTest.NewGlobal()
for i := 0; i < 10; i++ {
fc := &ethpb.Checkpoint{}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, wsb1.Block().ParentRoot(), [32]byte{}, [32]byte{}, [32]byte{}, fc, fc)
st, blkRoot, err := prepareForkchoiceState(ctx, 0, wsb1.Block().ParentRoot(), [32]byte{}, [32]byte{}, fc, fc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
var wg sync.WaitGroup
@@ -2297,7 +2297,7 @@ func TestMissingIndices(t *testing.T) {
for _, c := range cases {
bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
t.Run(c.name, func(t *testing.T) {
require.NoError(t, bm.CreateFakeIndices(c.root, c.present...))
require.NoError(t, bm.CreateFakeIndices(c.root, 0, c.present...))
missing, err := missingIndices(bs, c.root, c.expected, 0)
if c.err != nil {
require.ErrorIs(t, err, c.err)


@@ -11,7 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
@@ -149,35 +148,15 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
return
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
var attributes payloadattribute.Attributer
if s.inRegularSync() {
attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if headState.Version() >= version.EPBS {
bh, err := headState.LatestBlockHash()
if err != nil {
log.WithError(err).Error("could not get latest block hash")
return
}
_, err = s.notifyForkchoiceUpdateEPBS(ctx, [32]byte(bh), attributes)
if err != nil {
log.WithError(err).Error("could not notify forkchoice update")
}
if err := s.saveHead(ctx, newHeadRoot, headBlock, headState); err != nil {
log.WithError(err).Error("could not save head")
return
}
if err := s.pruneAttsFromPool(headBlock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
return
}
fcuArgs := &fcuConfig{
headState: headState,
headRoot: newHeadRoot,
headBlock: headBlock,
proposingSlot: proposingSlot,
}
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
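With the EPBS-specific branch removed, UpdateHead reduces to one path: build the fcuConfig, attach payload attributes only while in regular sync, and return early when the head is a candidate for a proposer reorg. A hedged control-flow sketch with stand-in types (shouldOverride models shouldOverrideFCU; none of this is the Prysm API):

package main

import "fmt"

type payloadAttributes struct{ feeRecipient string }

type fcuConfig struct {
	headRoot   [32]byte
	attributes *payloadAttributes // nil unless this node may propose soon
}

type service struct {
	regularSync    bool
	shouldOverride bool // a late head the proposer intends to reorg
}

func (s *service) updateHead(headRoot [32]byte) {
	args := &fcuConfig{headRoot: headRoot}
	if s.regularSync {
		args.attributes = &payloadAttributes{feeRecipient: "0x00"}
	}
	if args.attributes != nil && s.shouldOverride {
		return // skip the FCU; the next proposal builds on the parent instead
	}
	fmt.Printf("forkchoiceUpdated: head=%x attrs=%t\n", args.headRoot[:2], args.attributes != nil)
}

func main() {
	(&service{regularSync: true}).updateHead([32]byte{0xaa})
	(&service{regularSync: true, shouldOverride: true}).updateHead([32]byte{0xbb})
}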
@@ -205,7 +184,7 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
}
hasState := s.cfg.BeaconDB.HasStateSummary(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
hasBlock := s.chainHasBlock(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
hasBlock := s.hasBlock(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
if !(hasState && hasBlock) {
continue
}

View File

@@ -42,11 +42,11 @@ func TestVerifyLMDFFGConsistent(t *testing.T) {
f := service.cfg.ForkChoiceStore
fc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, r32, err := prepareForkchoiceState(ctx, 32, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, [32]byte{}, fc, fc)
state, r32, err := prepareForkchoiceState(ctx, 32, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, fc, fc)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, r32))
state, r33, err := prepareForkchoiceState(ctx, 33, [32]byte{'b'}, r32.Root(), params.BeaconConfig().ZeroHash, [32]byte{}, fc, fc)
state, r33, err := prepareForkchoiceState(ctx, 33, [32]byte{'b'}, r32.Root(), params.BeaconConfig().ZeroHash, fc, fc)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, r33))
@@ -82,7 +82,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
attsToSave := make([]ethpb.Att, len(atts))
@@ -142,7 +142,7 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, [32]byte{}, ojc, ojc)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.Equal(t, 3, fcs.NodeCount())
@@ -191,7 +191,7 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, [32]byte{}, ojc, ojc)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.Equal(t, 3, fcs.NodeCount())

View File

@@ -7,6 +7,7 @@ import (
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
@@ -39,28 +40,22 @@ var epochsSinceFinalityExpandCache = primitives.Epoch(4)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error
ReceiveExecutionPayloadEnvelope(ctx context.Context, env interfaces.ROSignedExecutionPayloadEnvelope, avs das.AvailabilityStore) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
}
// PayloadAttestationReceiver defines methods of the chain service for receiving
// and processing new payload attestations and payload attestation messages
type PayloadAttestationReceiver interface {
ReceivePayloadAttestationMessage(ctx context.Context, a *ethpb.PayloadAttestationMessage) error
}
// BlobReceiver interface defines the methods of chain service for receiving new
// blobs
type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}
// ExecutionPayloadReceiver interface defines the methods of chain service for receiving `ROExecutionPayloadEnvelope`.
type ExecutionPayloadReceiver interface {
ReceiveExecutionPayloadEnvelope(ctx context.Context, envelope interfaces.ROSignedExecutionPayloadEnvelope, _ das.AvailabilityStore) error
// DataColumnReceiver interface defines the methods of chain service for receiving new
// data columns
type DataColumnReceiver interface {
ReceiveDataColumn(blocks.VerifiedRODataColumn) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
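The receiver interfaces above keep shrinking to single methods, which pairs naturally with compile-time satisfaction checks so a refactor like this one cannot silently orphan an implementation. A hypothetical, self-contained illustration of the idiom:

package main

import "fmt"

type VerifiedRODataColumn struct{ Index uint64 }

// DataColumnReceiver matches the shape of the interface above.
type DataColumnReceiver interface {
	ReceiveDataColumn(VerifiedRODataColumn) error
}

type Service struct{ stored []uint64 }

func (s *Service) ReceiveDataColumn(dc VerifiedRODataColumn) error {
	s.stored = append(s.stored, dc.Index)
	return nil
}

// Compile-time proof that *Service implements DataColumnReceiver; the build
// fails here if the method set ever drifts.
var _ DataColumnReceiver = (*Service)(nil)

func main() {
	s := &Service{}
	_ = s.ReceiveDataColumn(VerifiedRODataColumn{Index: 3})
	fmt.Println(s.stored) // [3]
}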
@@ -81,6 +76,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring already synced block")
return nil
}
receivedTime := time.Now()
s.blockBeingSynced.set(blockRoot)
defer s.blockBeingSynced.unset(blockRoot)
@@ -89,6 +85,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
@@ -99,31 +96,17 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
var postState state.BeaconState
var isValidPayload bool
var daWaitedTime time.Duration
if blockCopy.Version() >= version.EPBS {
postState, err = s.validateStateTransition(ctx, preState, roblock)
if err != nil {
return errors.Wrap(err, "could not validate state transition")
}
optimistic, err := s.IsOptimisticForRoot(ctx, roblock.Block().ParentRoot())
if err != nil {
return errors.Wrap(err, "could not check if parent is optimistic")
}
// if the parent is not optimistic then we can set the block as
// not optimistic.
isValidPayload = !optimistic
} else {
postState, isValidPayload, err = s.validateExecutionAndConsensus(ctx, preState, roblock)
if err != nil {
return err
}
daWaitedTime, err = s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return err
}
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, roblock)
if err != nil {
return err
}
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return err
}
// Defragment the state before continuing block processing.
s.defragmentState(postState)
@@ -259,12 +242,14 @@ func (s *Service) handleDA(
if err != nil {
return 0, err
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
nodeID := s.cfg.P2P.NodeID()
if err := avs.IsDataAvailable(ctx, nodeID, s.CurrentSlot(), rob); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
}
} else {
if err := s.isDataAvailable(ctx, blockRoot, block); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability")
return 0, errors.Wrap(err, "is data available")
}
}
daWaitedTime := time.Since(daStartTime)
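The availability check now takes the local node ID, presumably because under PeerDAS each node custodies a node-specific subset of columns rather than every blob. A toy sketch of why the ID matters; the derivation below is illustrative only, not the real custody-group computation:

package main

import "fmt"

type NodeID [32]byte

// custodyColumns derives a deterministic column subset for a node. The real
// scheme hashes the node ID into custody groups; this toy version just
// offsets from the first byte.
func custodyColumns(id NodeID, columnCount, custodyCount uint64) []uint64 {
	cols := make([]uint64, 0, custodyCount)
	seed := uint64(id[0])
	for i := uint64(0); i < custodyCount; i++ {
		cols = append(cols, (seed+i)%columnCount)
	}
	return cols
}

// isDataAvailable passes only when every column this node must custody is present.
func isDataAvailable(id NodeID, have map[uint64]bool) bool {
	for _, c := range custodyColumns(id, 128, 4) {
		if !have[c] {
			return false
		}
	}
	return true
}

func main() {
	have := map[uint64]bool{0: true, 1: true, 2: true, 3: true}
	fmt.Println(isDataAvailable(NodeID{0}, have)) // true: columns 0-3 custodied and present
	fmt.Println(isDataAvailable(NodeID{9}, have)) // false: this node needs columns 9-12
}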
@@ -306,9 +291,10 @@ func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedSta
go func() {
s.sendNewFinalizedEvent(ctx, finalizedState)
}()
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDeposits(depCtx, finalized.Root)
s.insertFinalizedDepositsAndPrune(depCtx, finalized.Root)
cancel()
}()
}
@@ -496,7 +482,7 @@ func (s *Service) validateStateTransition(ctx context.Context, preState state.Be
stateTransitionStartTime := time.Now()
postState, err := transition.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
if ctx.Err() != nil {
if ctx.Err() != nil || electra.IsExecutionRequestError(err) {
return nil, err
}
return nil, invalidBlock{error: err}
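The widened pass-through above changes which failures condemn a block: a context error or an execution-request error is returned as-is, while anything else is wrapped as an invalid block. A minimal sketch of that classification (isExecutionRequestError stands in for electra.IsExecutionRequestError):

package main

import (
	"context"
	"errors"
	"fmt"
)

var errExecutionRequest = errors.New("malformed execution request")

// invalidBlock mirrors the wrapper that forkchoice treats as a permanently bad block.
type invalidBlock struct{ error }

func isExecutionRequestError(err error) bool { return errors.Is(err, errExecutionRequest) }

func classify(ctx context.Context, err error) error {
	if err == nil {
		return nil
	}
	// Cancellation and request-deserialization failures pass through
	// unwrapped, so the block is not branded invalid for a local problem.
	if ctx.Err() != nil || isExecutionRequestError(err) {
		return err
	}
	return invalidBlock{err}
}

func main() {
	var ib invalidBlock
	err := classify(context.Background(), errExecutionRequest)
	fmt.Println(errors.As(err, &ib)) // false: passed through unwrapped
}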

View File

@@ -455,41 +455,81 @@ func Test_executePostFinalizationTasks(t *testing.T) {
Root: headRoot[:],
}))
require.NoError(t, headState.SetGenesisValidatorsRoot(params.BeaconConfig().ZeroHash[:]))
t.Run("pre deposit request", func(t *testing.T) {
require.NoError(t, headState.SetEth1DepositIndex(1))
s, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
s, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
require.Equal(t, true, ok, "event has wrong data type")
assert.Equal(t, primitives.Epoch(123), fc.Epoch)
assert.DeepEqual(t, headRoot[:], fc.Block)
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
require.Equal(t, true, ok, "event has wrong data type")
assert.Equal(t, primitives.Epoch(123), fc.Epoch)
assert.DeepEqual(t, headRoot[:], fc.Block)
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
// check the cache
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
// check the cache
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
// check deposit
require.LogsContain(t, logHook, "Finalized deposit insertion completed at index")
})
t.Run("deposit requests started", func(t *testing.T) {
require.NoError(t, headState.SetEth1DepositIndex(1))
require.NoError(t, headState.SetDepositRequestsStartIndex(1))
s, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
require.Equal(t, true, ok, "event has wrong data type")
assert.Equal(t, primitives.Epoch(123), fc.Epoch)
assert.DeepEqual(t, headRoot[:], fc.Block)
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
// check the cache
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
})
// check deposit
require.LogsContain(t, logHook, "Finalized deposit insertion completed at index")
}

View File

@@ -0,0 +1,14 @@
package blockchain
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
)
func (s *Service) ReceiveDataColumn(ds blocks.VerifiedRODataColumn) error {
if err := s.blobStorage.SaveDataColumn(ds); err != nil {
return errors.Wrap(err, "save data column")
}
return nil
}
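A hypothetical caller for the new method: a sync-side handler verifies a sidecar first and only then hands the verified column to the chain service, treating a storage failure as fatal for that sidecar. Stand-in types throughout; only the errors.Wrap shape mirrors the file above:

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

type VerifiedRODataColumn struct{ Index uint64 }

type receiver interface {
	ReceiveDataColumn(VerifiedRODataColumn) error
}

type blobStorage struct{ full bool }

func (b *blobStorage) SaveDataColumn(VerifiedRODataColumn) error {
	if b.full {
		return errors.New("disk full")
	}
	return nil
}

type service struct{ storage *blobStorage }

func (s *service) ReceiveDataColumn(ds VerifiedRODataColumn) error {
	if err := s.storage.SaveDataColumn(ds); err != nil {
		return errors.Wrap(err, "save data column")
	}
	return nil
}

func main() {
	var r receiver = &service{storage: &blobStorage{full: true}}
	if err := r.ReceiveDataColumn(VerifiedRODataColumn{Index: 7}); err != nil {
		fmt.Println(err) // save data column: disk full
	}
}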

View File

@@ -1,242 +0,0 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"golang.org/x/sync/errgroup"
)
// ReceiveExecutionPayloadEnvelope is a function that defines the operations (minus pubsub)
// that are performed on a received execution payload envelope. The operations consist of:
// 1. Validate the payload, apply state transition.
// 2. Apply fork choice to the processed payload
// 3. Save latest head info
func (s *Service) ReceiveExecutionPayloadEnvelope(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope, _ das.AvailabilityStore) error {
receivedTime := time.Now()
envelope, err := signed.Envelope()
if err != nil {
return err
}
log.Info("Receiving execution payload envelope")
root := envelope.BeaconBlockRoot()
s.payloadBeingSynced.set(envelope)
defer s.payloadBeingSynced.unset(root)
preState, err := s.getPayloadEnvelopePrestate(ctx, envelope)
if err != nil {
return errors.Wrap(err, "could not get prestate")
}
eg, _ := errgroup.WithContext(ctx)
eg.Go(func() error {
if err := epbs.ValidatePayloadStateTransition(ctx, preState, envelope); err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
return nil
})
var isValidPayload bool
eg.Go(func() error {
var err error
isValidPayload, err = s.validateExecutionOnEnvelope(ctx, envelope)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
return nil
})
if err := eg.Wait(); err != nil {
return err
}
daStartTime := time.Now()
// TODO: Add DA check
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
if err := s.savePostPayload(ctx, signed, preState); err != nil {
return err
}
if err := s.insertPayloadEnvelope(envelope); err != nil {
return errors.Wrap(err, "could not insert payload to forkchoice")
}
if isValidPayload {
s.ForkChoicer().Lock()
if err := s.ForkChoicer().SetOptimisticToValid(ctx, root); err != nil {
s.ForkChoicer().Unlock()
return errors.Wrap(err, "could not set optimistic payload to valid")
}
s.ForkChoicer().Unlock()
}
headRoot, err := s.HeadRoot(ctx)
if err != nil {
log.WithError(err).Error("could not get headroot to compute attributes")
return nil
}
if bytes.Equal(headRoot, root[:]) {
attr := s.getPayloadAttribute(ctx, preState, envelope.Slot()+1, headRoot)
execution, err := envelope.Execution()
if err != nil {
log.WithError(err).Error("could not get execution data")
return nil
}
blockHash := [32]byte(execution.BlockHash())
payloadID, err := s.notifyForkchoiceUpdateEPBS(ctx, blockHash, attr)
if err != nil {
if IsInvalidBlock(err) {
// TODO handle the lvh here
return err
}
return nil
}
if attr != nil && !attr.IsEmpty() && payloadID != nil {
var pid [8]byte
copy(pid[:], payloadID[:])
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(headRoot)),
"headSlot": envelope.Slot(),
"payloadID": fmt.Sprintf("%#x", bytesutil.Trunc(payloadID[:])),
}).Info("Forkchoice updated with payload attributes for proposal")
s.cfg.PayloadIDCache.Set(envelope.Slot()+1, root, pid)
}
// simply update the headstate in head
s.headLock.Lock()
s.head.state = preState.Copy()
s.headLock.Unlock()
// update the NSC with the hash for the full block, we use the block root as the key
if err := transition.UpdateNextSlotCache(ctx, root[:], preState); err != nil {
log.WithError(err).Error("could not update next slot cache with payload")
}
}
timeWithoutDaWait := time.Since(receivedTime) - daWaitedTime
executionEngineProcessingTime.Observe(float64(timeWithoutDaWait.Milliseconds()))
ex, err := envelope.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution data")
}
log.WithFields(logrus.Fields{
"slot": envelope.Slot(),
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(root[:])),
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(ex.BlockHash())),
"ParentHash": fmt.Sprintf("%#x", bytesutil.Trunc(ex.ParentHash())),
}).Info("Processed execution payload envelope")
return nil
}
// notifyNewPayload signals execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewEnvelope(ctx context.Context, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()
payload, err := envelope.Execution()
if err != nil {
return false, errors.Wrap(err, "could not get execution payload")
}
versionedHashes := envelope.VersionedHashes()
root := envelope.BeaconBlockRoot()
parentRoot, err := s.ParentRoot(root)
if err != nil {
return false, errors.Wrap(err, "could not get parent block root")
}
pr := common.Hash(parentRoot)
requests := envelope.ExecutionRequests()
lastValidHash, err := s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &pr, requests)
switch {
case err == nil:
newPayloadValidNodeCount.Inc()
return true, nil
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
newPayloadOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic block")
return false, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
lvh := bytesutil.ToBytes32(lastValidHash)
return false, invalidBlock{
error: ErrInvalidPayload,
lastValidHash: lvh,
}
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
}
// validateExecutionOnEnvelope notifies the engine of the incoming execution payload and returns true if the payload is valid
func (s *Service) validateExecutionOnEnvelope(ctx context.Context, e interfaces.ROExecutionPayloadEnvelope) (bool, error) {
isValidPayload, err := s.notifyNewEnvelope(ctx, e)
if err == nil {
return isValidPayload, nil
}
blockRoot := e.BeaconBlockRoot()
parentRoot, rootErr := s.ParentRoot(blockRoot)
if rootErr != nil {
return false, errors.Wrap(rootErr, "could not get parent block root")
}
s.cfg.ForkChoiceStore.Lock()
err = s.handleInvalidExecutionError(ctx, err, blockRoot, parentRoot)
s.cfg.ForkChoiceStore.Unlock()
return false, err
}
func (s *Service) getPayloadEnvelopePrestate(ctx context.Context, e interfaces.ROExecutionPayloadEnvelope) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.getPayloadEnvelopePreState")
defer span.End()
// Verify incoming payload has a valid pre state.
root := e.BeaconBlockRoot()
// Verify the referred block is known to forkchoice
if !s.InForkchoice(root) {
return nil, errors.New("Cannot import execution payload envelope for unknown block")
}
if err := s.verifyBlkPreState(ctx, root); err != nil {
return nil, errors.Wrap(err, "could not verify payload prestate")
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, root)
if err != nil {
return nil, errors.Wrap(err, "could not get pre state")
}
if preState == nil || preState.IsNil() {
return nil, errors.Wrap(err, "nil pre state")
}
return preState, nil
}
func (s *Service) savePostPayload(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope, st state.BeaconState) error {
if err := s.cfg.BeaconDB.SaveBlindPayloadEnvelope(ctx, signed); err != nil {
return err
}
envelope, err := signed.Envelope()
if err != nil {
return err
}
execution, err := envelope.Execution()
if err != nil {
return err
}
r := envelope.BeaconBlockRoot()
if err := s.cfg.StateGen.SaveState(ctx, [32]byte(execution.BlockHash()), st); err != nil {
log.Warnf("Rolling back insertion of block with root %#x", r)
if err := s.cfg.BeaconDB.DeleteBlock(ctx, r); err != nil {
log.WithError(err).Errorf("Could not delete block with block root %#x", r)
}
return errors.Wrap(err, "could not save state")
}
return nil
}

View File

@@ -1,103 +0,0 @@
package blockchain
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
)
func Test_getPayloadEnvelopePrestate(t *testing.T) {
service, tr := minimalTestService(t)
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
p := random.ExecutionPayloadEnvelope(t)
p.BeaconBlockRoot = service.originBlockRoot[:]
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
_, err = service.getPayloadEnvelopePrestate(ctx, e)
require.NoError(t, err)
}
func Test_notifyNewEnvelope(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
p := random.ExecutionPayloadEnvelope(t)
p.BeaconBlockRoot = service.originBlockRoot[:]
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
engine := &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = engine
isValidPayload, err := service.notifyNewEnvelope(ctx, e)
require.NoError(t, err)
require.Equal(t, true, isValidPayload)
}
func Test_validateExecutionOnEnvelope(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
p := random.ExecutionPayloadEnvelope(t)
p.BeaconBlockRoot = service.originBlockRoot[:]
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
engine := &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = engine
isValidPayload, err := service.validateExecutionOnEnvelope(ctx, e)
require.NoError(t, err)
require.Equal(t, true, isValidPayload)
}
func Test_ReceiveExecutionPayloadEnvelope(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
post := gs.Copy()
p := &enginev1.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
BlockHash: make([]byte, 32),
},
BeaconBlockRoot: service.originBlockRoot[:],
BlobKzgCommitments: make([][]byte, 0),
StateRoot: make([]byte, 32),
ExecutionRequests: &enginev1.ExecutionRequests{},
}
sp := &enginev1.SignedExecutionPayloadEnvelope{
Message: p,
}
e, err := blocks.WrappedROSignedExecutionPayloadEnvelope(sp)
require.NoError(t, err)
das := &das.MockAvailabilityStore{}
blockHeader := post.LatestBlockHeader()
prevStateRoot, err := post.HashTreeRoot(ctx)
require.NoError(t, err)
blockHeader.StateRoot = prevStateRoot[:]
require.NoError(t, post.SetLatestBlockHeader(blockHeader))
stRoot, err := post.HashTreeRoot(ctx)
require.NoError(t, err)
p.StateRoot = stRoot[:]
engine := &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = engine
require.NoError(t, service.ReceiveExecutionPayloadEnvelope(ctx, e, das))
}

View File

@@ -1,33 +0,0 @@
package blockchain
import (
"context"
"slices"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
func (s *Service) ReceivePayloadAttestationMessage(ctx context.Context, a *eth.PayloadAttestationMessage) error {
if err := helpers.ValidateNilPayloadAttestationMessage(a); err != nil {
return err
}
root := [32]byte(a.Data.BeaconBlockRoot)
st, err := s.HeadStateReadOnly(ctx)
if err != nil {
return err
}
ptc, err := helpers.GetPayloadTimelinessCommittee(ctx, st, a.Data.Slot)
if err != nil {
return err
}
idx := slices.Index(ptc, a.ValidatorIndex)
if idx == -1 {
return errInvalidValidatorIndex
}
if s.cfg.PayloadAttestationCache.Seen(root, uint64(primitives.ValidatorIndex(idx))) {
return nil
}
return s.cfg.PayloadAttestationCache.Add(a, uint64(idx))
}

View File

@@ -33,6 +33,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -47,26 +48,24 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
lastPublishedLightClientEpoch primitives.Epoch
blobStorage *filesystem.BlobStorage
payloadBeingSynced *currentlySyncingPayload
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
}
// config options for the service.
@@ -75,8 +74,6 @@ type config struct {
ChainStartFetcher execution.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
DepositCache cache.DepositCache
PayloadAttestationCache *cache.PayloadAttestationCache
PayloadEnvelopeCache *sync.Map
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
AttestationCache *cache.AttestationCache
@@ -84,7 +81,7 @@ type config struct {
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2p p2p.Broadcaster
P2P p2p.Acceser
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
@@ -109,22 +106,26 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
type blobNotifierMap struct {
sync.RWMutex
notifiers map[[32]byte]chan uint64
seenIndex map[[32]byte][]bool
// TODO: Separate blobs from data columns
// seenIndex map[[32]byte][]bool
seenIndex map[[32]byte][fieldparams.NumberOfColumns]bool
}
// notifyIndex notifies a blob by its index for a given root.
// It uses internal maps to keep track of seen indices and notifier channels.
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitives.Slot) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if idx >= uint64(maxBlobsPerBlock) {
return
}
// TODO: Separate blobs from data columns
// maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
// if idx >= uint64(maxBlobsPerBlock) {
// return
// }
bn.Lock()
seen := bn.seenIndex[root]
if seen == nil {
seen = make([]bool, maxBlobsPerBlock)
}
// TODO: Separate blobs from data columns
// if seen == nil {
// seen = make([]bool, maxBlobsPerBlock)
// }
if seen[idx] {
bn.Unlock()
return
@@ -135,7 +136,9 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitive
// Retrieve or create the notifier channel for the given root.
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, maxBlobsPerBlock)
// TODO: Separate blobs from data columns
// c = make(chan uint64, maxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
@@ -145,12 +148,15 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitive
}
func (bn *blobNotifierMap) forRoot(root [32]byte, slot primitives.Slot) chan uint64 {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
// TODO: Separate blobs from data columns
// maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
bn.Lock()
defer bn.Unlock()
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, maxBlobsPerBlock)
// TODO: Separate blobs from data columns
// c = make(chan uint64, maxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
return c
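Two details of the fixed-size seenIndex are worth spelling out: a [N]bool array has a usable zero value, which is presumably why the old nil-slice check is commented out rather than ported, and Go arrays are values, so reading one out of a map copies it and any mutation must be written back. A small demo with the column count shrunk to 4:

package main

import "fmt"

const numberOfColumns = 4 // stand-in for fieldparams.NumberOfColumns

func main() {
	seenIndex := map[[32]byte][numberOfColumns]bool{}
	root := [32]byte{0xaa}

	seen := seenIndex[root] // a copy; the zero array works even for a missing key
	if !seen[2] {
		seen[2] = true
		seenIndex[root] = seen // write-back is required, unlike with a []bool slice
	}
	fmt.Println(seenIndex[root]) // [false false true false]
}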
@@ -176,7 +182,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
bn := &blobNotifierMap{
notifiers: make(map[[32]byte]chan uint64),
seenIndex: make(map[[32]byte][]bool),
// TODO: Separate blobs from data columns
// seenIndex: make(map[[32]byte][]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
}
srv := &Service{
ctx: ctx,
@@ -187,7 +195,6 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
blobNotifiers: bn,
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
payloadBeingSynced: &currentlySyncingPayload{roots: make(map[[32]byte]primitives.PTCStatus)},
}
for _, opt := range opts {
if err := opt(srv); err != nil {
@@ -563,7 +570,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
// 2.) Check DB.
// Checking 1.) is ten times faster than checking 2.)
// this function requires a lock in forkchoice
func (s *Service) chainHasBlock(ctx context.Context, root [32]byte) bool {
func (s *Service) hasBlock(ctx context.Context, root [32]byte) bool {
if s.cfg.ForkChoiceStore.HasNode(root) {
return true
}
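The renamed hasBlock keeps the same two-tier lookup: the in-memory forkchoice check runs first because, per the comment above, it is roughly ten times cheaper than the database lookup. A stand-in sketch of the ordering:

package main

import "fmt"

type store struct {
	forkchoice map[[32]byte]bool // hot, in-memory nodes
	db         map[[32]byte]bool // cold, on-disk blocks
}

func (s *store) hasBlock(root [32]byte) bool {
	if s.forkchoice[root] { // fast path
		return true
	}
	return s.db[root] // slow path, only on a forkchoice miss
}

func main() {
	s := &store{
		forkchoice: map[[32]byte]bool{{0x01}: true},
		db:         map[[32]byte]bool{{0x02}: true},
	}
	fmt.Println(s.hasBlock([32]byte{0x01}), s.hasBlock([32]byte{0x02}), s.hasBlock([32]byte{0x03}))
}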

View File

@@ -97,13 +97,14 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithSlashingPool(slashings.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithP2PBroadcaster(&mockAccesser{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fc),
WithAttestationService(attService),
WithStateGen(stateGen),
WithPayloadIDCache(cache.NewPayloadIDCache()),
WithClockSynchronizer(startup.NewClockSynchronizer()),
WithP2PBroadcaster(&mockAccesser{}),
}
chainService, err := NewService(ctx, opts...)
@@ -386,8 +387,8 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
require.NoError(t, err)
require.NoError(t, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, roblock))
assert.Equal(t, false, s.chainHasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.chainHasBlock(ctx, r), "Should have block")
assert.Equal(t, false, s.hasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.hasBlock(ctx, r), "Should have block")
}
func TestServiceStop_SaveCachedBlocks(t *testing.T) {
@@ -587,7 +588,9 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
func TestNotifyIndex(t *testing.T) {
// Initialize a blobNotifierMap
bn := &blobNotifierMap{
seenIndex: make(map[[32]byte][]bool),
// TODO: Separate blobs from data columns
// seenIndex: make(map[[32]byte][]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
notifiers: make(map[[32]byte]chan uint64),
}

View File

@@ -19,8 +19,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2pTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"google.golang.org/protobuf/proto"
@@ -45,6 +47,11 @@ type mockBroadcaster struct {
broadcastCalled bool
}
type mockAccesser struct {
mockBroadcaster
p2pTesting.MockPeerManager
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil
@@ -65,6 +72,11 @@ func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.B
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumn(_ context.Context, _ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}
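mockAccesser above earns its wider interface purely through embedding: the promoted methods of mockBroadcaster and MockPeerManager together satisfy the broadcaster-plus-peer-manager contract the config now demands. A self-contained sketch of the idiom with hypothetical interfaces:

package main

import "fmt"

type Broadcaster interface{ Broadcast(msg string) }

type PeerManager interface{ PeerCount() int }

// Accesser models the composed p2p interface the service config requires.
type Accesser interface {
	Broadcaster
	PeerManager
}

type mockBroadcaster struct{ called bool }

func (m *mockBroadcaster) Broadcast(string) { m.called = true }

type mockPeerManager struct{}

func (mockPeerManager) PeerCount() int { return 0 }

// Embedding both promotes their methods, so *mockAccesser satisfies Accesser
// without a single line of forwarding code.
type mockAccesser struct {
	mockBroadcaster
	mockPeerManager
}

func main() {
	var a Accesser = &mockAccesser{}
	a.Broadcast("hello")
	fmt.Println(a.PeerCount()) // 0
}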
@@ -122,6 +134,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithSyncChecker(mock.MockChecker{}),
WithExecutionEngineCaller(&mockExecution.EngineClient{}),
WithP2PBroadcaster(&mockAccesser{}),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)

View File

@@ -3,10 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
testonly = True,
srcs = [
"mock.go",
"mock_epbs.go",
],
srcs = ["mock.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing",
visibility = [
"//beacon-chain:__subpackages__",

View File

@@ -37,50 +37,44 @@ var ErrNilState = errors.New("nil state")
// ChainService defines the mock interface for testing
type ChainService struct {
NotFinalized bool
Optimistic bool
ValidAttestation bool
ValidatorsRoot [32]byte
PublicKey [fieldparams.BLSPubkeyLength]byte
FinalizedCheckPoint *ethpb.Checkpoint
CurrentJustifiedCheckPoint *ethpb.Checkpoint
PreviousJustifiedCheckPoint *ethpb.Checkpoint
Slot *primitives.Slot // Pointer because 0 is a useful value, so checking against it can be incorrect.
Balance *precompute.Balance
CanonicalRoots map[[32]byte]bool
Fork *ethpb.Fork
ETH1Data *ethpb.Eth1Data
InitSyncBlockRoots map[[32]byte]bool
DB db.Database
State state.BeaconState
Block interfaces.ReadOnlySignedBeaconBlock
ExecutionPayloadEnvelope interfaces.ROExecutionPayloadEnvelope
VerifyBlkDescendantErr error
stateNotifier statefeed.Notifier
BlocksReceived []interfaces.ReadOnlySignedBeaconBlock
SyncCommitteeIndices []primitives.CommitteeIndex
blockNotifier blockfeed.Notifier
opNotifier opfeed.Notifier
Root []byte
SyncCommitteeDomain []byte
SyncSelectionProofDomain []byte
SyncContributionProofDomain []byte
SyncCommitteePubkeys [][]byte
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
ReceiveEnvelopeMockErr error
OptimisticCheckRootReceived [32]byte
FinalizedRoots map[[32]byte]bool
OptimisticRoots map[[32]byte]bool
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []blocks.VerifiedROBlob
TargetRoot [32]byte
HighestReceivedSlot primitives.Slot
HighestReceivedRoot [32]byte
PayloadStatus primitives.PTCStatus
ReceivePayloadAttestationMessageErr error
NotFinalized bool
Optimistic bool
ValidAttestation bool
ValidatorsRoot [32]byte
PublicKey [fieldparams.BLSPubkeyLength]byte
FinalizedCheckPoint *ethpb.Checkpoint
CurrentJustifiedCheckPoint *ethpb.Checkpoint
PreviousJustifiedCheckPoint *ethpb.Checkpoint
Slot *primitives.Slot // Pointer because 0 is a useful value, so checking against it can be incorrect.
Balance *precompute.Balance
CanonicalRoots map[[32]byte]bool
Fork *ethpb.Fork
ETH1Data *ethpb.Eth1Data
InitSyncBlockRoots map[[32]byte]bool
DB db.Database
State state.BeaconState
Block interfaces.ReadOnlySignedBeaconBlock
VerifyBlkDescendantErr error
stateNotifier statefeed.Notifier
BlocksReceived []interfaces.ReadOnlySignedBeaconBlock
SyncCommitteeIndices []primitives.CommitteeIndex
blockNotifier blockfeed.Notifier
opNotifier opfeed.Notifier
Root []byte
SyncCommitteeDomain []byte
SyncSelectionProofDomain []byte
SyncContributionProofDomain []byte
SyncCommitteePubkeys [][]byte
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
OptimisticCheckRootReceived [32]byte
FinalizedRoots map[[32]byte]bool
OptimisticRoots map[[32]byte]bool
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []blocks.VerifiedROBlob
TargetRoot [32]byte
}
func (s *ChainService) Ancestor(ctx context.Context, root []byte, slot primitives.Slot) ([]byte, error) {
@@ -647,12 +641,12 @@ func (s *ChainService) ReceivedBlocksLastEpoch() (uint64, error) {
return 0, nil
}
// HighestReceivedBlockSlotRoot mocks the same method in the chain service
func (s *ChainService) HighestReceivedBlockSlotRoot() (primitives.Slot, [32]byte) {
// HighestReceivedBlockSlot mocks the same method in the chain service
func (s *ChainService) HighestReceivedBlockSlot() primitives.Slot {
if s.ForkChoiceStore != nil {
return s.ForkChoiceStore.HighestReceivedBlockSlotRoot()
return s.ForkChoiceStore.HighestReceivedBlockSlot()
}
return s.HighestReceivedSlot, s.HighestReceivedRoot
return 0
}
// InsertNode mocks the same method in the chain service
@@ -708,25 +702,12 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
return nil
}
// ReceiveDataColumn implements the same method in chain service
func (*ChainService) ReceiveDataColumn(_ blocks.VerifiedRODataColumn) error {
return nil
}
// TargetRootForEpoch mocks the same method in the chain service
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil
}
// HashInForkchoice mocks the same method in the chain service
func (c *ChainService) HashInForkchoice([32]byte) bool {
return false
}
// ReceivePayloadAttestationMessage mocks the same method in the chain service
func (c *ChainService) ReceivePayloadAttestationMessage(_ context.Context, _ *ethpb.PayloadAttestationMessage) error {
return c.ReceivePayloadAttestationMessageErr
}
func (c *ChainService) GetPTCVote(root [32]byte) primitives.PTCStatus {
return c.PayloadStatus
}
func (c *ChainService) HashForBlockRoot(root [32]byte) [32]byte {
return root
}

View File

@@ -1,30 +0,0 @@
package testing
import (
"context"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
)
// ReceiveExecutionPayloadEnvelope mocks the method in chain service.
func (s *ChainService) ReceiveExecutionPayloadEnvelope(ctx context.Context, env interfaces.ROSignedExecutionPayloadEnvelope, _ das.AvailabilityStore) error {
if s.ReceiveBlockMockErr != nil {
return s.ReceiveBlockMockErr
}
if s.State == nil {
return ErrNilState
}
e, err := env.Envelope()
if err != nil {
return err
}
if s.State.Slot() == e.Slot() {
if err := s.State.SetLatestFullSlot(s.State.Slot()); err != nil {
return err
}
}
s.ExecutionPayloadEnvelope = e
return nil
}

View File

@@ -16,13 +16,11 @@ go_library(
"doc.go",
"error.go",
"interfaces.go",
"payload_attestation.go",
"payload_id.go",
"proposer_indices.go",
"proposer_indices_disabled.go", # keep
"proposer_indices_type.go",
"registration.go",
"signed_execution_header.go",
"skip_slot_cache.go",
"subnet_ids.go",
"sync_committee.go",
@@ -52,7 +50,6 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
@@ -78,17 +75,16 @@ go_test(
"checkpoint_state_test.go",
"committee_fuzz_test.go",
"committee_test.go",
"payload_attestation_test.go",
"payload_id_test.go",
"private_access_test.go",
"proposer_indices_test.go",
"registration_test.go",
"signed_execution_header_test.go",
"skip_slot_cache_test.go",
"subnet_ids_test.go",
"sync_committee_head_state_test.go",
"sync_committee_test.go",
"sync_subnet_ids_test.go",
"tracked_validators_test.go",
],
embed = [":go_default_library"],
deps = [
@@ -98,10 +94,8 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/blst:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//testing/assert:go_default_library",

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"deposit_fetcher.go",
"deposit_inserter.go",
"deposit_pruner.go",
"deposit_tree.go",
"deposit_tree_snapshot.go",
"merkle_tree.go",
@@ -35,6 +36,7 @@ go_test(
srcs = [
"deposit_cache_test.go",
"deposit_fetcher_test.go",
"deposit_pruner_test.go",
"deposit_tree_snapshot_test.go",
"merkle_tree_test.go",
"spec_test.go",

View File

@@ -903,189 +903,6 @@ func TestMin(t *testing.T) {
}
func TestPruneProofs_Ok(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 1))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.NotNil(t, dc.deposits[2].Deposit.Proof)
assert.NotNil(t, dc.deposits[3].Deposit.Proof)
}
func TestPruneProofs_SomeAlreadyPruned(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}}, index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 2))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
}
func TestPruneProofs_PruneAllWhenDepositIndexTooBig(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 99))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 4))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestDepositMap_WorksCorrectly(t *testing.T) {
dc, err := New()
require.NoError(t, err)

View File

@@ -178,52 +178,6 @@ func (c *Cache) NonFinalizedDeposits(ctx context.Context, lastFinalizedIndex int
return deposits
}
// PruneProofs removes proofs from all deposits whose index is equal or less than untilDepositIndex.
func (c *Cache) PruneProofs(ctx context.Context, untilDepositIndex int64) error {
_, span := trace.StartSpan(ctx, "Cache.PruneProofs")
defer span.End()
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
if untilDepositIndex >= int64(len(c.deposits)) {
untilDepositIndex = int64(len(c.deposits) - 1)
}
for i := untilDepositIndex; i >= 0; i-- {
// Finding a nil proof means that all proofs up to this deposit have been already pruned.
if c.deposits[i].Deposit.Proof == nil {
break
}
c.deposits[i].Deposit.Proof = nil
}
return nil
}
// PrunePendingDeposits removes any deposit which is older than the given deposit merkle tree index.
func (c *Cache) PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64) {
_, span := trace.StartSpan(ctx, "Cache.PrunePendingDeposits")
defer span.End()
if merkleTreeIndex == 0 {
log.Debug("Ignoring 0 deposit removal")
return
}
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
cleanDeposits := make([]*ethpb.DepositContainer, 0, len(c.pendingDeposits))
for _, dp := range c.pendingDeposits {
if dp.Index >= merkleTreeIndex {
cleanDeposits = append(cleanDeposits, dp)
}
}
c.pendingDeposits = cleanDeposits
pendingDepositsCount.Set(float64(len(c.pendingDeposits)))
}
// InsertPendingDeposit into the database. If deposit or block number are nil
// then this method does nothing.
func (c *Cache) InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte) {

View File

@@ -44,67 +44,3 @@ func TestPendingDeposits_OK(t *testing.T) {
all := dc.PendingDeposits(context.Background(), nil)
assert.Equal(t, len(dc.pendingDeposits), len(all), "PendingDeposits(ctx, nil) did not return all deposits")
}
func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 0)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}
func TestPrunePendingDeposits_OK(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 6)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 10)
expected = []*ethpb.DepositContainer{
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}

View File

@@ -0,0 +1,88 @@
package depositsnapshot
import (
"context"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// PruneProofs removes proofs from all deposits whose index is equal to or less than untilDepositIndex.
func (c *Cache) PruneProofs(ctx context.Context, untilDepositIndex int64) error {
_, span := trace.StartSpan(ctx, "Cache.PruneProofs")
defer span.End()
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
if untilDepositIndex >= int64(len(c.deposits)) {
untilDepositIndex = int64(len(c.deposits) - 1)
}
for i := untilDepositIndex; i >= 0; i-- {
// Finding a nil proof means that all proofs up to this deposit have already been pruned.
if c.deposits[i].Deposit.Proof == nil {
break
}
c.deposits[i].Deposit.Proof = nil
}
return nil
}
// PruneAllProofs removes proofs from all deposits.
// Once EIP-6110 applies and the legacy deposit mechanism is deprecated,
// proofs in the deposit snapshot are no longer needed.
// See: https://eips.ethereum.org/EIPS/eip-6110#eth1data-poll-deprecation
func (c *Cache) PruneAllProofs(ctx context.Context) {
_, span := trace.StartSpan(ctx, "Cache.PruneAllProofs")
defer span.End()
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
for i := len(c.deposits) - 1; i >= 0; i-- {
if c.deposits[i].Deposit.Proof == nil {
break
}
c.deposits[i].Deposit.Proof = nil
}
}
// PrunePendingDeposits removes any deposit which is older than the given deposit merkle tree index.
func (c *Cache) PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64) {
_, span := trace.StartSpan(ctx, "Cache.PrunePendingDeposits")
defer span.End()
if merkleTreeIndex == 0 {
log.Debug("Ignoring 0 deposit removal")
return
}
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
cleanDeposits := make([]*ethpb.DepositContainer, 0, len(c.pendingDeposits))
for _, dp := range c.pendingDeposits {
if dp.Index >= merkleTreeIndex {
cleanDeposits = append(cleanDeposits, dp)
}
}
c.pendingDeposits = cleanDeposits
pendingDepositsCount.Set(float64(len(c.pendingDeposits)))
}
// PruneAllPendingDeposits removes all pending deposits from the cache.
// Once EIP-6110 applies and the legacy deposit mechanism is deprecated,
// pending deposits in the deposit snapshot are no longer needed.
// See: https://eips.ethereum.org/EIPS/eip-6110#eth1data-poll-deprecation
func (c *Cache) PruneAllPendingDeposits(ctx context.Context) {
_, span := trace.StartSpan(ctx, "Cache.PruneAllPendingDeposits")
defer span.End()
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
c.pendingDeposits = make([]*ethpb.DepositContainer, 0)
pendingDepositsCount.Set(float64(0))
}
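A toy model of the two wholesale pruning paths added above: once EIP-6110 deposit requests are live, both the merkle proofs and the pending queue can be dropped in full. The backward scan reuses the same invariant as PruneProofs, namely that proofs are only ever pruned as a prefix, so the first nil encountered ends the walk. Stand-in types, not the Prysm cache:

package main

import "fmt"

type deposit struct{ proof [][]byte }

type cache struct {
	deposits []deposit
	pending  []int64
}

func (c *cache) pruneAllProofs() {
	// Scan from the end; a nil proof means everything before it was already
	// pruned, so the loop can stop early.
	for i := len(c.deposits) - 1; i >= 0; i-- {
		if c.deposits[i].proof == nil {
			break
		}
		c.deposits[i].proof = nil
	}
}

func (c *cache) pruneAllPendingDeposits() {
	c.pending = make([]int64, 0)
}

func main() {
	c := &cache{
		deposits: []deposit{{nil}, {[][]byte{{1}}}, {[][]byte{{2}}}},
		pending:  []int64{5, 6},
	}
	c.pruneAllProofs()
	c.pruneAllPendingDeposits()
	fmt.Println(c.deposits[1].proof == nil, len(c.pending)) // true 0
}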

View File

@@ -0,0 +1,323 @@
package depositsnapshot
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 0)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}
func TestPrunePendingDeposits_OK(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 6)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 10)
expected = []*ethpb.DepositContainer{
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}
func TestPruneAllPendingDeposits(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PruneAllPendingDeposits(context.Background())
expected := []*ethpb.DepositContainer{}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}
func TestPruneProofs_Ok(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 1))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.NotNil(t, dc.deposits[2].Deposit.Proof)
assert.NotNil(t, dc.deposits[3].Deposit.Proof)
}
func TestPruneProofs_SomeAlreadyPruned(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}}, index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 2))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
}
func TestPruneProofs_PruneAllWhenDepositIndexTooBig(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 99))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 4))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestPruneAllProofs(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
dc.PruneAllProofs(context.Background())
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}

View File

@@ -12,6 +12,7 @@ import (
type DepositCache interface {
DepositFetcher
DepositInserter
DepositPruner
}
// DepositFetcher defines a struct which can retrieve deposit information from a store.
@@ -23,8 +24,6 @@ type DepositFetcher interface {
InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte)
PendingDeposits(ctx context.Context, untilBlk *big.Int) []*ethpb.Deposit
PendingContainers(ctx context.Context, untilBlk *big.Int) []*ethpb.DepositContainer
PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64)
PruneProofs(ctx context.Context, untilDepositIndex int64) error
FinalizedFetcher
}
@@ -42,6 +41,14 @@ type FinalizedFetcher interface {
NonFinalizedDeposits(ctx context.Context, lastFinalizedIndex int64, untilBlk *big.Int) []*ethpb.Deposit
}
// DepositPruner is an interface for pruning deposits and proofs.
type DepositPruner interface {
PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64)
PruneAllPendingDeposits(ctx context.Context)
PruneProofs(ctx context.Context, untilDepositIndex int64) error
PruneAllProofs(ctx context.Context)
}
// FinalizedDeposits defines a method to access a merkle tree containing deposits and their indexes.
type FinalizedDeposits interface {
Deposits() MerkleTree
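
For illustration, a minimal sketch of a consumer that depends only on the new pruning surface rather than on the full DepositCache; the depositJanitor type is hypothetical and not part of this change:

// Hypothetical consumer that needs only the pruning behavior.
type depositJanitor struct {
	pruner DepositPruner
}

// Once EIP-6110 is in effect, legacy proofs and pending deposits can be dropped in one pass.
func (j *depositJanitor) pruneLegacy(ctx context.Context) {
	j.pruner.PruneAllProofs(ctx)
	j.pruner.PruneAllPendingDeposits(ctx)
}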

View File

@@ -1,132 +0,0 @@
package cache
import (
"errors"
"sync"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
var errNilPayloadAttestationMessage = errors.New("nil Payload Attestation Message")
// PayloadAttestationCache keeps the already aggregated PTC votes seen for a
// single beacon block root, one aggregate per payload status.
type PayloadAttestationCache struct {
root [32]byte
attestations [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation
sync.Mutex
}
// Seen returns true if a vote for the given Beacon Block Root has already been processed
// for this Payload Timeliness Committee index. This will return true even if
// the Payload status differs.
func (p *PayloadAttestationCache) Seen(root [32]byte, idx uint64) bool {
p.Lock()
defer p.Unlock()
if p.root != root {
return false
}
for _, agg := range p.attestations {
if agg == nil {
continue
}
if agg.AggregationBits.BitAt(idx) {
return true
}
}
return false
}
// messageToPayloadAttestation creates a PayloadAttestation with a single
// aggregated bit from the passed PayloadAttestationMessage.
func messageToPayloadAttestation(att *eth.PayloadAttestationMessage, idx uint64) *eth.PayloadAttestation {
bits := primitives.NewPayloadAttestationAggregationBits()
bits.SetBitAt(idx, true)
data := &eth.PayloadAttestationData{
BeaconBlockRoot: bytesutil.SafeCopyBytes(att.Data.BeaconBlockRoot),
Slot: att.Data.Slot,
PayloadStatus: att.Data.PayloadStatus,
}
return &eth.PayloadAttestation{
AggregationBits: bits,
Data: data,
Signature: bytesutil.SafeCopyBytes(att.Signature),
}
}
// aggregateSigFromMessage returns the aggregated signature from a Payload
// Attestation by adding the signature from the passed PayloadAttestationMessage;
// no signature validation is performed.
func aggregateSigFromMessage(aggregated *eth.PayloadAttestation, message *eth.PayloadAttestationMessage) ([]byte, error) {
aggSig, err := bls.SignatureFromBytesNoValidation(aggregated.Signature)
if err != nil {
return nil, err
}
sig, err := bls.SignatureFromBytesNoValidation(message.Signature)
if err != nil {
return nil, err
}
return bls.AggregateSignatures([]bls.Signature{aggSig, sig}).Marshal(), nil
}
// Add adds a PayloadAttestationMessage to the internal cache of aggregated
// PayloadAttestations.
// If the index has already been seen for this attestation status, the function does nothing.
// If the root is not the cached root, the function clears the previous cache.
// This function assumes that the message has already been validated. In
// particular that the signature is valid and that the block root corresponds to
// the given slot in the attestation data.
func (p *PayloadAttestationCache) Add(att *eth.PayloadAttestationMessage, idx uint64) error {
if att == nil || att.Data == nil || att.Data.BeaconBlockRoot == nil {
return errNilPayloadAttestationMessage
}
p.Lock()
defer p.Unlock()
root := [32]byte(att.Data.BeaconBlockRoot)
if p.root != root {
p.root = root
p.attestations = [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{}
}
agg := p.attestations[att.Data.PayloadStatus]
if agg == nil {
p.attestations[att.Data.PayloadStatus] = messageToPayloadAttestation(att, idx)
return nil
}
if agg.AggregationBits.BitAt(idx) {
return nil
}
sig, err := aggregateSigFromMessage(agg, att)
if err != nil {
return err
}
agg.Signature = sig
agg.AggregationBits.SetBitAt(idx, true)
return nil
}
// Get returns the aggregated PayloadAttestation for the given root and status
// if the root doesn't exist or status is invalid, the function returns nil.
func (p *PayloadAttestationCache) Get(root [32]byte, status primitives.PTCStatus) *eth.PayloadAttestation {
p.Lock()
defer p.Unlock()
if p.root != root {
return nil
}
if status >= primitives.PAYLOAD_INVALID_STATUS {
return nil
}
return eth.CopyPayloadAttestation(p.attestations[status])
}
// Clear resets the cached root and the aggregated attestations.
func (p *PayloadAttestationCache) Clear() {
p.Lock()
defer p.Unlock()
p.root = [32]byte{}
p.attestations = [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{}
}
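
This cache is removed in this diff; for reference, the intended flow was Seen → Add → Get, sketched minimally here (msg is assumed to be a validated PayloadAttestationMessage and idx its PTC index):

p := &PayloadAttestationCache{}
if !p.Seen(root, idx) { // skip votes already aggregated for this PTC index
	if err := p.Add(msg, idx); err != nil {
		return err
	}
}
agg := p.Get(root, primitives.PAYLOAD_PRESENT) // aggregated attestation, or nil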

View File

@@ -1,143 +0,0 @@
package cache
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestPayloadAttestationCache(t *testing.T) {
p := &PayloadAttestationCache{}
// Test Seen before any vote has been added.
root := [32]byte{'r'}
idx := uint64(5)
require.Equal(t, false, p.Seen(root, idx))
// Test Add
msg := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: root[:],
Slot: 1,
PayloadStatus: primitives.PAYLOAD_PRESENT,
},
}
// Add new root
require.NoError(t, p.Add(msg, idx))
require.Equal(t, true, p.Seen(root, idx))
require.Equal(t, root, p.root)
att := p.attestations[primitives.PAYLOAD_PRESENT]
indices := att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx)}, indices)
singleSig := bytesutil.SafeCopyBytes(msg.Signature)
require.DeepEqual(t, singleSig, att.Signature)
// Test Seen
require.Equal(t, true, p.Seen(root, idx))
require.Equal(t, false, p.Seen(root, idx+1))
// Add another attestation on the same data
msg2 := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: att.Data,
}
idx2 := uint64(7)
require.NoError(t, p.Add(msg2, idx2))
att = p.attestations[primitives.PAYLOAD_PRESENT]
indices = att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx), int(idx2)}, indices)
require.DeepNotEqual(t, att.Signature, msg.Signature)
// Try the same index again
require.NoError(t, p.Add(msg2, idx2))
att2 := p.attestations[primitives.PAYLOAD_PRESENT]
indices = att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx), int(idx2)}, indices)
require.DeepEqual(t, att, att2)
// Test Seen
require.Equal(t, true, p.Seen(root, idx2))
require.Equal(t, false, p.Seen(root, idx2+1))
// Add another attestation with a different payload status
msg3 := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: root[:],
Slot: 1,
PayloadStatus: primitives.PAYLOAD_WITHHELD,
},
}
idx3 := uint64(17)
require.NoError(t, p.Add(msg3, idx3))
att3 := p.attestations[primitives.PAYLOAD_WITHHELD]
indices3 := att3.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx3)}, indices3)
require.DeepEqual(t, singleSig, att3.Signature)
// Add a different root
root2 := [32]byte{'s'}
msg.Data.BeaconBlockRoot = root2[:]
require.NoError(t, p.Add(msg, idx))
require.Equal(t, root2, p.root)
require.Equal(t, true, p.Seen(root2, idx))
require.Equal(t, false, p.Seen(root, idx))
att = p.attestations[primitives.PAYLOAD_PRESENT]
indices = att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx)}, indices)
}
func TestPayloadAttestationCache_Get(t *testing.T) {
root := [32]byte{1, 2, 3}
wrongRoot := [32]byte{4, 5, 6}
status := primitives.PAYLOAD_PRESENT
invalidStatus := primitives.PAYLOAD_INVALID_STATUS
cache := &PayloadAttestationCache{
root: root,
attestations: [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{
{
Signature: []byte{1},
},
{
Signature: []byte{2},
},
{
Signature: []byte{3},
},
},
}
t.Run("valid root and status", func(t *testing.T) {
result := cache.Get(root, status)
require.NotNil(t, result, "Expected a non-nil result")
require.DeepEqual(t, cache.attestations[status], result)
})
t.Run("invalid root", func(t *testing.T) {
result := cache.Get(wrongRoot, status)
require.IsNil(t, result)
})
t.Run("status out of bound", func(t *testing.T) {
result := cache.Get(root, invalidStatus)
require.IsNil(t, result)
})
t.Run("no attestation", func(t *testing.T) {
emptyCache := &PayloadAttestationCache{
root: root,
attestations: [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{},
}
result := emptyCache.Get(root, status)
require.IsNil(t, result)
})
}

View File

@@ -1,76 +0,0 @@
package cache
import (
"bytes"
"sync"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)
// ExecutionPayloadHeaders is used by the sync service to store signed execution payload headers after they pass validation,
// and to filter out subsequent headers with a lower value.
// A signed header from this cache can be used by the proposer when proposing the next slot.
type ExecutionPayloadHeaders struct {
headers map[primitives.Slot][]*enginev1.SignedExecutionPayloadHeader
sync.RWMutex
}
func NewExecutionPayloadHeaders() *ExecutionPayloadHeaders {
return &ExecutionPayloadHeaders{
headers: make(map[primitives.Slot][]*enginev1.SignedExecutionPayloadHeader),
}
}
// SaveSignedExecutionPayloadHeader saves the signed execution payload header to the cache.
// The cache stores headers for up to two slots. If the input slot is higher than the lowest slot
// currently in the cache, the lowest slot is removed to make space for the new header.
// Only the highest-value header for a given parent block hash is stored.
// This function assumes the caller has already checked that the header's slot is the current or next slot; it does not perform slot validation itself.
func (c *ExecutionPayloadHeaders) SaveSignedExecutionPayloadHeader(header *enginev1.SignedExecutionPayloadHeader) {
c.Lock()
defer c.Unlock()
for s := range c.headers {
if s+1 < header.Message.Slot {
delete(c.headers, s)
}
}
// Add or update the header in the map
if _, ok := c.headers[header.Message.Slot]; !ok {
c.headers[header.Message.Slot] = []*enginev1.SignedExecutionPayloadHeader{header}
} else {
found := false
for i, h := range c.headers[header.Message.Slot] {
if bytes.Equal(h.Message.ParentBlockHash, header.Message.ParentBlockHash) && bytes.Equal(h.Message.ParentBlockRoot, header.Message.ParentBlockRoot) {
if header.Message.Value > h.Message.Value {
c.headers[header.Message.Slot][i] = header
}
found = true
break
}
}
if !found {
c.headers[header.Message.Slot] = append(c.headers[header.Message.Slot], header)
}
}
}
// SignedExecutionPayloadHeader returns the signed payload header for the given slot, parent block hash, and parent block root.
// Returns nil if the header is not found.
// This should be used when the caller wants the header to match the parent block hash and parent block root, such as a proposer choosing a header to propose.
func (c *ExecutionPayloadHeaders) SignedExecutionPayloadHeader(slot primitives.Slot, parentBlockHash []byte, parentBlockRoot []byte) *enginev1.SignedExecutionPayloadHeader {
c.RLock()
defer c.RUnlock()
if headers, ok := c.headers[slot]; ok {
for _, header := range headers {
if bytes.Equal(header.Message.ParentBlockHash, parentBlockHash) && bytes.Equal(header.Message.ParentBlockRoot, parentBlockRoot) {
return header
}
}
}
return nil
}
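
Also removed in this diff; for reference, a minimal sketch of the intended flow:

c := NewExecutionPayloadHeaders()
c.SaveSignedExecutionPayloadHeader(header) // keeps at most two slots; highest value per parent hash/root
h := c.SignedExecutionPayloadHeader(slot, parentHash, parentRoot) // nil if nothing matches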

View File

@@ -1,243 +0,0 @@
package cache
import (
"testing"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func Test_SaveSignedExecutionPayloadHeader(t *testing.T) {
t.Run("First header should be added to cache", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
require.Equal(t, 1, len(c.headers))
require.Equal(t, header, c.headers[1][0])
})
t.Run("Second header with higher slot should be added, and both slots should be in cache", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
require.Equal(t, 2, len(c.headers))
require.Equal(t, header1, c.headers[1][0])
require.Equal(t, header2, c.headers[2][0])
})
t.Run("Third header with higher slot should replace the oldest slot", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 100,
},
}
header3 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 3,
ParentBlockHash: []byte("parent3"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
c.SaveSignedExecutionPayloadHeader(header3)
require.Equal(t, 2, len(c.headers))
require.Equal(t, header2, c.headers[2][0])
require.Equal(t, header3, c.headers[3][0])
})
t.Run("Header with same slot but higher value should replace the existing one", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
require.Equal(t, 1, len(c.headers[2]))
require.Equal(t, header2, c.headers[2][0])
})
t.Run("Header with different parent block hash should be appended to the same slot", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
require.Equal(t, 2, len(c.headers[2]))
require.Equal(t, header1, c.headers[2][0])
require.Equal(t, header2, c.headers[2][1])
})
}
func TestSignedExecutionPayloadHeader(t *testing.T) {
t.Run("Return header when slot and parentBlockHash match", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
ParentBlockRoot: []byte("root1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
result := c.SignedExecutionPayloadHeader(1, []byte("parent1"), []byte("root1"))
require.NotNil(t, result)
require.Equal(t, header, result)
})
t.Run("Return nil when no matching slot and parentBlockHash", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
ParentBlockRoot: []byte("root1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
result := c.SignedExecutionPayloadHeader(2, []byte("parent2"), []byte("root1"))
require.IsNil(t, result)
})
t.Run("Return nil when no matching slot and parentBlockRoot", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
ParentBlockRoot: []byte("root1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
result := c.SignedExecutionPayloadHeader(2, []byte("parent1"), []byte("root2"))
require.IsNil(t, result)
})
t.Run("Return header when there are two slots in the cache and a match is found", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
// Check for the first header
result1 := c.SignedExecutionPayloadHeader(1, []byte("parent1"), []byte{})
require.NotNil(t, result1)
require.Equal(t, header1, result1)
// Check for the second header
result2 := c.SignedExecutionPayloadHeader(2, []byte("parent2"), []byte{})
require.NotNil(t, result2)
require.Equal(t, header2, result2)
})
t.Run("Return nil when slot is evicted from cache", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
header3 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 3,
ParentBlockHash: []byte("parent3"),
Value: 300,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
c.SaveSignedExecutionPayloadHeader(header3)
// The first slot should be evicted, so result should be nil
result := c.SignedExecutionPayloadHeader(1, []byte("parent1"), []byte{})
require.IsNil(t, result)
// The second slot should still be present
result = c.SignedExecutionPayloadHeader(2, []byte("parent2"), []byte{})
require.NotNil(t, result)
require.Equal(t, header2, result)
// The third slot should be present
result = c.SignedExecutionPayloadHeader(3, []byte("parent3"), []byte{})
require.NotNil(t, result)
require.Equal(t, header3, result)
})
}

View File

@@ -1,49 +1,139 @@
package cache
import (
"sync"
"strconv"
"time"
"github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/sirupsen/logrus"
)
type TrackedValidator struct {
Active bool
FeeRecipient primitives.ExecutionAddress
Index primitives.ValidatorIndex
}
const (
defaultExpiration = 1 * time.Hour
cleanupInterval = 15 * time.Minute
)
type TrackedValidatorsCache struct {
sync.Mutex
trackedValidators map[primitives.ValidatorIndex]TrackedValidator
}
type (
TrackedValidator struct {
Active bool
FeeRecipient primitives.ExecutionAddress
Index primitives.ValidatorIndex
}
TrackedValidatorsCache struct {
trackedValidators cache.Cache
}
)
var (
// Metrics.
trackedValidatorsCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "tracked_validators_cache_miss",
Help: "The number of tracked validators requests that are not present in the cache.",
})
trackedValidatorsCacheTotal = promauto.NewCounter(prometheus.CounterOpts{
Name: "tracked_validators_cache_total",
Help: "The total number of tracked validators requests in the cache.",
})
trackedValidatorsCacheCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "tracked_validators_cache_count",
Help: "The number of tracked validators in the cache.",
})
)
// NewTrackedValidatorsCache creates a new cache for tracking validators.
func NewTrackedValidatorsCache() *TrackedValidatorsCache {
return &TrackedValidatorsCache{
trackedValidators: make(map[primitives.ValidatorIndex]TrackedValidator),
trackedValidators: *cache.New(defaultExpiration, cleanupInterval),
}
}
// Validator retrieves a tracked validator from the cache (if present).
func (t *TrackedValidatorsCache) Validator(index primitives.ValidatorIndex) (TrackedValidator, bool) {
t.Lock()
defer t.Unlock()
val, ok := t.trackedValidators[index]
return val, ok
trackedValidatorsCacheTotal.Inc()
key := toCacheKey(index)
item, ok := t.trackedValidators.Get(key)
if !ok {
trackedValidatorsCacheMiss.Inc()
return TrackedValidator{}, false
}
val, ok := item.(TrackedValidator)
if !ok {
logrus.Errorf("Failed to cast tracked validator from cache, got unexpected item type %T", item)
return TrackedValidator{}, false
}
return val, true
}
// Set adds a tracked validator to the cache.
func (t *TrackedValidatorsCache) Set(val TrackedValidator) {
t.Lock()
defer t.Unlock()
t.trackedValidators[val.Index] = val
key := toCacheKey(val.Index)
t.trackedValidators.Set(key, val, cache.DefaultExpiration)
}
// Delete removes a tracked validator from the cache.
func (t *TrackedValidatorsCache) Prune() {
t.Lock()
defer t.Unlock()
t.trackedValidators = make(map[primitives.ValidatorIndex]TrackedValidator)
t.trackedValidators.Flush()
trackedValidatorsCacheCount.Set(0)
}
// Validating returns true if there is at least one tracked validator in the cache.
func (t *TrackedValidatorsCache) Validating() bool {
t.Lock()
defer t.Unlock()
return len(t.trackedValidators) > 0
count := t.trackedValidators.ItemCount()
trackedValidatorsCacheCount.Set(float64(count))
return count > 0
}
// ItemCount returns the number of tracked validators in the cache.
func (t *TrackedValidatorsCache) ItemCount() int {
count := t.trackedValidators.ItemCount()
trackedValidatorsCacheCount.Set(float64(count))
return count
}
// Indices returns a map of validator indices that are being tracked.
func (t *TrackedValidatorsCache) Indices() map[primitives.ValidatorIndex]bool {
items := t.trackedValidators.Items()
count := len(items)
trackedValidatorsCacheCount.Set(float64(count))
indices := make(map[primitives.ValidatorIndex]bool, count)
for cacheKey := range items {
index, err := fromCacheKey(cacheKey)
if err != nil {
logrus.WithError(err).Error("Failed to get validator index from cache key")
continue
}
indices[index] = true
}
return indices
}
// toCacheKey creates a cache key from the validator index.
func toCacheKey(validatorIndex primitives.ValidatorIndex) string {
return strconv.FormatUint(uint64(validatorIndex), 10)
}
// fromCacheKey gets the validator index from the cache key.
func fromCacheKey(key string) (primitives.ValidatorIndex, error) {
validatorIndex, err := strconv.ParseUint(key, 10, 64)
if err != nil {
return 0, errors.Wrapf(err, "parse Uint: %s", key)
}
return primitives.ValidatorIndex(validatorIndex), nil
}
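
Since the plain map is replaced by go-cache, entries now age out after defaultExpiration (one hour) unless re-Set. A minimal, self-contained sketch of the TTL semantics this relies on, with a deliberately tiny TTL for demonstration only:

package main

import (
	"fmt"
	"time"

	"github.com/patrickmn/go-cache"
)

func main() {
	// Tiny TTL for demonstration; the validators cache uses 1 hour / 15 minutes.
	c := cache.New(50*time.Millisecond, 10*time.Millisecond)
	c.Set("42", struct{}{}, cache.DefaultExpiration)

	_, ok := c.Get("42")
	fmt.Println("present immediately:", ok) // true

	time.Sleep(100 * time.Millisecond)
	_, ok = c.Get("42")
	fmt.Println("present after TTL:", ok) // false: the entry has expired
}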

View File

@@ -0,0 +1,79 @@
package cache
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func mapEqual(a, b map[primitives.ValidatorIndex]bool) bool {
if len(a) != len(b) {
return false
}
for k, v := range a {
if b[k] != v {
return false
}
}
return true
}
func TestTrackedValidatorsCache(t *testing.T) {
vc := NewTrackedValidatorsCache()
// No validators in cache.
require.Equal(t, 0, vc.ItemCount())
require.Equal(t, false, vc.Validating())
require.Equal(t, 0, len(vc.Indices()))
_, ok := vc.Validator(41)
require.Equal(t, false, ok)
// Add some validators (one twice).
v42Expected := TrackedValidator{Active: true, FeeRecipient: [20]byte{1}, Index: 42}
v43Expected := TrackedValidator{Active: false, FeeRecipient: [20]byte{2}, Index: 43}
vc.Set(v42Expected)
vc.Set(v43Expected)
vc.Set(v42Expected)
// Check if they are in the cache.
v42Actual, ok := vc.Validator(42)
require.Equal(t, true, ok)
require.Equal(t, v42Expected, v42Actual)
v43Actual, ok := vc.Validator(43)
require.Equal(t, true, ok)
require.Equal(t, v43Expected, v43Actual)
expected := map[primitives.ValidatorIndex]bool{42: true, 43: true}
actual := vc.Indices()
require.Equal(t, true, mapEqual(expected, actual))
// Check the item count and if the cache is validating.
require.Equal(t, 2, vc.ItemCount())
require.Equal(t, true, vc.Validating())
// Check if a non-existing validator is in the cache.
_, ok = vc.Validator(41)
require.Equal(t, false, ok)
// Prune the cache and test it.
vc.Prune()
_, ok = vc.Validator(41)
require.Equal(t, false, ok)
_, ok = vc.Validator(42)
require.Equal(t, false, ok)
_, ok = vc.Validator(43)
require.Equal(t, false, ok)
require.Equal(t, 0, vc.ItemCount())
require.Equal(t, false, vc.Validating())
require.Equal(t, 0, len(vc.Indices()))
}

View File

@@ -107,7 +107,6 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time/slots:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -12,7 +12,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -223,42 +222,6 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
},
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateEPBS:
kzgs := make([][]byte, 0)
kzgRoot, err := ssz.KzgCommitmentsRoot(kzgs)
if err != nil {
return nil, err
}
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockEpbs{
Block: &ethpb.BeaconBlockEpbs{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyEpbs{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
SignedExecutionPayloadHeader: &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
BlobKzgCommitmentsRoot: kzgRoot[:],
},
Signature: make([]byte, 96),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
PayloadAttestations: make([]*ethpb.PayloadAttestation, 0),
},
},
Signature: params.BeaconConfig().EmptySignature[:],
})
default:
return nil, ErrUnrecognizedState
}

View File

@@ -59,9 +59,6 @@ func IsMergeTransitionComplete(st state.BeaconState) (bool, error) {
//
// return block.body.execution_payload != ExecutionPayload()
func IsExecutionBlock(body interfaces.ReadOnlyBeaconBlockBody) (bool, error) {
if body.Version() >= version.Capella {
return true, nil
}
if body == nil {
return false, errors.New("nil block body")
}
@@ -97,9 +94,6 @@ func IsExecutionEnabled(st state.BeaconState, body interfaces.ReadOnlyBeaconBloc
if IsPreBellatrixVersion(st.Version()) {
return false, nil
}
if body.Version() >= version.Capella {
return true, nil
}
header, err := st.LatestExecutionPayloadHeader()
if err != nil {
return false, err
@@ -250,47 +244,12 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
// GetBlockPayloadHash returns the hash of the execution payload of the block
func GetBlockPayloadHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var payloadHash [32]byte
if blk.Version() >= version.EPBS {
header, err := blk.Body().SignedExecutionPayloadHeader()
if err != nil {
return payloadHash, err
}
payload, err := header.Header()
if err != nil {
return payloadHash, err
}
return payload.BlockHash(), nil
if IsPreBellatrixVersion(blk.Version()) {
return payloadHash, nil
}
if blk.Version() >= version.Bellatrix {
payload, err := blk.Body().Execution()
if err != nil {
return payloadHash, err
}
return bytesutil.ToBytes32(payload.BlockHash()), nil
payload, err := blk.Body().Execution()
if err != nil {
return payloadHash, err
}
return payloadHash, nil
}
// GetBlockParentHash returns the hash of the parent execution payload
func GetBlockParentHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var parentHash [32]byte
if blk.Version() >= version.EPBS {
header, err := blk.Body().SignedExecutionPayloadHeader()
if err != nil {
return parentHash, err
}
payload, err := header.Header()
if err != nil {
return parentHash, err
}
return payload.ParentBlockHash(), nil
}
if blk.Version() >= version.Bellatrix {
payload, err := blk.Body().Execution()
if err != nil {
return parentHash, err
}
return bytesutil.ToBytes32(payload.ParentHash()), nil
}
return parentHash, nil
return bytesutil.ToBytes32(payload.BlockHash()), nil
}
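
With the EPBS branch removed, the gating here appears to reduce to: zero hash pre-Bellatrix, otherwise the execution payload's block hash. A minimal sketch of that shape (payloadHashShape is illustrative only, not part of this change):

func payloadHashShape(preBellatrix bool, blockHash []byte) [32]byte {
	if preBellatrix {
		return [32]byte{} // no execution payload exists before Bellatrix
	}
	return bytesutil.ToBytes32(blockHash)
}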

View File

@@ -96,6 +96,24 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
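// VerifyBlockHeaderSignatureUsingCurrentFork verifies the proposer signature of a signed beacon block header,
// deriving the fork, and therefore the signing domain, from the header's slot rather than from the state's fork field.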
func VerifyBlockHeaderSignatureUsingCurrentFork(beaconState state.BeaconState, header *ethpb.SignedBeaconBlockHeader) error {
currentEpoch := slots.ToEpoch(header.Header.Slot)
fork, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return err
}
proposer, err := beaconState.ValidatorAtIndex(header.Header.ProposerIndex)
if err != nil {
return err
}
proposerPubKey := proposer.PublicKey
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
// VerifyBlockSignatureUsingCurrentFork verifies the proposer signature of a beacon block. This differs
// from the above method by not using fork data from the state and instead retrieving it
// via the respective epoch.

View File

@@ -14,8 +14,8 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
@@ -118,97 +118,14 @@ func ValidateBLSToExecutionChange(st state.ReadOnlyBeaconState, signed *ethpb.Si
return val, nil
}
func checkWithdrawalsAgainstPayload(
executionData interfaces.ExecutionData,
numExpected int,
expectedRoot [32]byte,
) error {
var wdRoot [32]byte
if executionData.IsBlinded() {
r, err := executionData.WithdrawalsRoot()
if err != nil {
return errors.Wrap(err, "could not get withdrawals root")
}
copy(wdRoot[:], r)
} else {
wds, err := executionData.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals")
}
if len(wds) != numExpected {
return fmt.Errorf("execution payload header has %d withdrawals when %d were expected", len(wds), numExpected)
}
wdRoot, err = ssz.WithdrawalSliceRoot(wds, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return errors.Wrap(err, "could not get withdrawals root")
}
}
if expectedRoot != wdRoot {
return fmt.Errorf("expected withdrawals root %#x, got %#x", expectedRoot, wdRoot)
}
return nil
}
func processWithdrawalStateTransition(
st state.BeaconState,
expectedWithdrawals []*enginev1.Withdrawal,
partialWithdrawalsCount uint64,
) (err error) {
for _, withdrawal := range expectedWithdrawals {
err := helpers.DecreaseBalance(st, withdrawal.ValidatorIndex, withdrawal.Amount)
if err != nil {
return errors.Wrap(err, "could not decrease balance")
}
}
if st.Version() >= version.Electra {
if err := st.DequeuePendingPartialWithdrawals(partialWithdrawalsCount); err != nil {
return fmt.Errorf("unable to dequeue partial withdrawals from state: %w", err)
}
}
if len(expectedWithdrawals) > 0 {
if err := st.SetNextWithdrawalIndex(expectedWithdrawals[len(expectedWithdrawals)-1].Index + 1); err != nil {
return errors.Wrap(err, "could not set next withdrawal index")
}
}
var nextValidatorIndex primitives.ValidatorIndex
if uint64(len(expectedWithdrawals)) < params.BeaconConfig().MaxWithdrawalsPerPayload {
nextValidatorIndex, err = st.NextWithdrawalValidatorIndex()
if err != nil {
return errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1
if nextValidatorIndex == primitives.ValidatorIndex(st.NumValidators()) {
nextValidatorIndex = 0
}
}
if err := st.SetNextWithdrawalValidatorIndex(nextValidatorIndex); err != nil {
return errors.Wrap(err, "could not set next withdrawal validator index")
}
return nil
}
// ProcessWithdrawals processes the validator withdrawals from the provided execution payload
// into the beacon state.
//
// Spec pseudocode definition:
//
// def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
// if state.fork.current_version >= EIP7732_FORK_VERSION :
// if not is_parent_block_full(state): # [New in EIP-7732]
// return
// def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
//
// expected_withdrawals, partial_withdrawals_count = get_expected_withdrawals(state) # [Modified in Electra:EIP7251]
//
// if state.fork.current_version >= EIP7732_FORK_VERSION :
// state.latest_withdrawals_root = hash_tree_root(expected_withdrawals) # [New in EIP-7732]
// else :
// assert len(payload.withdrawals) == len(expected_withdrawals)
// expected_withdrawals, processed_partial_withdrawals_count = get_expected_withdrawals(state) # [Modified in Electra:EIP7251]
//
// assert len(payload.withdrawals) == len(expected_withdrawals)
//
@@ -235,39 +152,76 @@ func processWithdrawalStateTransition(
// next_validator_index = ValidatorIndex(next_index % len(state.validators))
// state.next_withdrawal_validator_index = next_validator_index
func ProcessWithdrawals(st state.BeaconState, executionData interfaces.ExecutionData) (state.BeaconState, error) {
if st.Version() >= version.EPBS {
IsParentBlockFull, err := st.IsParentBlockFull()
if err != nil {
return nil, errors.Wrap(err, "could not check if parent block is full")
}
if !IsParentBlockFull {
return st, nil
}
}
expectedWithdrawals, partialWithdrawalsCount, err := st.ExpectedWithdrawals()
expectedWithdrawals, processedPartialWithdrawalsCount, err := st.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get expected withdrawals")
}
var wdRoot [32]byte
if executionData.IsBlinded() {
r, err := executionData.WithdrawalsRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
wdRoot = bytesutil.ToBytes32(r)
} else {
wds, err := executionData.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals")
}
if len(wds) != len(expectedWithdrawals) {
return nil, fmt.Errorf("execution payload header has %d withdrawals when %d were expected", len(wds), len(expectedWithdrawals))
}
wdRoot, err = ssz.WithdrawalSliceRoot(wds, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
}
expectedRoot, err := ssz.WithdrawalSliceRoot(expectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get expected withdrawals root")
}
if expectedRoot != wdRoot {
return nil, fmt.Errorf("expected withdrawals root %#x, got %#x", expectedRoot, wdRoot)
}
if st.Version() >= version.EPBS {
err = st.SetLastWithdrawalsRoot(expectedRoot[:])
for _, withdrawal := range expectedWithdrawals {
err := helpers.DecreaseBalance(st, withdrawal.ValidatorIndex, withdrawal.Amount)
if err != nil {
return nil, errors.Wrap(err, "could not set withdrawals root")
}
} else {
if err := checkWithdrawalsAgainstPayload(executionData, len(expectedWithdrawals), expectedRoot); err != nil {
return nil, err
return nil, errors.Wrap(err, "could not decrease balance")
}
}
if err := processWithdrawalStateTransition(st, expectedWithdrawals, partialWithdrawalsCount); err != nil {
return nil, err
if st.Version() >= version.Electra {
if err := st.DequeuePendingPartialWithdrawals(processedPartialWithdrawalsCount); err != nil {
return nil, fmt.Errorf("unable to dequeue partial withdrawals from state: %w", err)
}
}
if len(expectedWithdrawals) > 0 {
if err := st.SetNextWithdrawalIndex(expectedWithdrawals[len(expectedWithdrawals)-1].Index + 1); err != nil {
return nil, errors.Wrap(err, "could not set next withdrawal index")
}
}
var nextValidatorIndex primitives.ValidatorIndex
if uint64(len(expectedWithdrawals)) < params.BeaconConfig().MaxWithdrawalsPerPayload {
nextValidatorIndex, err = st.NextWithdrawalValidatorIndex()
if err != nil {
return nil, errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1
if nextValidatorIndex == primitives.ValidatorIndex(st.NumValidators()) {
nextValidatorIndex = 0
}
}
if err := st.SetNextWithdrawalValidatorIndex(nextValidatorIndex); err != nil {
return nil, errors.Wrap(err, "could not set next withdrawal validator index")
}
return st, nil
}
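
The cursor bookkeeping at the end mirrors the spec's next_withdrawal_validator_index update. A minimal sketch of just that arithmetic, with plain uint64 values standing in for the state accessors (nextSweepIndex is illustrative only):

// If the payload was not full, advance the sweep cursor by the sweep bound;
// otherwise resume just past the last withdrawn validator, wrapping around
// the registry.
func nextSweepIndex(numWithdrawals, maxPerPayload, maxSweep, numValidators, cursor, lastWithdrawnIndex uint64) uint64 {
	if numWithdrawals < maxPerPayload {
		return (cursor + maxSweep) % numValidators
	}
	next := lastWithdrawnIndex + 1
	if next == numValidators {
		next = 0
	}
	return next
}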

View File

@@ -22,7 +22,6 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -1168,407 +1167,13 @@ func TestProcessWithdrawals(t *testing.T) {
checkPostState(t, test.Control, post)
}
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = saved
})
}
})
}
}
func TestProcessWithdrawalsEPBS(t *testing.T) {
const (
currentEpoch = primitives.Epoch(10)
epochInFuture = primitives.Epoch(12)
epochInPast = primitives.Epoch(8)
numValidators = 128
notWithdrawableIndex = 127
notPartiallyWithdrawable = 126
maxSweep = uint64(80)
)
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
type args struct {
Name string
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
FullWithdrawalIndices []primitives.ValidatorIndex
PendingPartialWithdrawalIndices []primitives.ValidatorIndex
Withdrawals []*enginev1.Withdrawal
PendingPartialWithdrawals []*ethpb.PendingPartialWithdrawal // Electra
LatestBlockHash []byte // EIP-7732
}
type control struct {
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
Balances map[uint64]uint64
}
type Test struct {
Args args
Control control
}
executionAddress := func(i primitives.ValidatorIndex) []byte {
wc := make([]byte, 20)
wc[19] = byte(i)
return wc
}
withdrawalAmount := func(i primitives.ValidatorIndex) uint64 {
return maxEffectiveBalance + uint64(i)*100000
}
fullWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
Address: executionAddress(i),
Amount: withdrawalAmount(i),
}
}
PendingPartialWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
Address: executionAddress(i),
Amount: withdrawalAmount(i) - maxEffectiveBalance,
}
}
tests := []Test{
{
Args: args{
Name: "success no withdrawals",
NextWithdrawalValidatorIndex: 10,
NextWithdrawalIndex: 3,
},
Control: control{
NextWithdrawalValidatorIndex: 90,
NextWithdrawalIndex: 3,
},
},
{
Args: args{
Name: "success one full withdrawal",
NextWithdrawalIndex: 3,
NextWithdrawalValidatorIndex: 5,
FullWithdrawalIndices: []primitives.ValidatorIndex{70},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(70, 3),
},
},
Control: control{
NextWithdrawalValidatorIndex: 85,
NextWithdrawalIndex: 4,
Balances: map[uint64]uint64{70: 0},
},
},
{
Args: args{
Name: "success one partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 120,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 21),
},
},
Control: control{
NextWithdrawalValidatorIndex: 72,
NextWithdrawalIndex: 22,
Balances: map[uint64]uint64{7: maxEffectiveBalance},
},
},
{
Args: args{
Name: "success many full withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), fullWithdrawal(28, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{7: 0, 19: 0, 28: 0},
},
},
{
Args: args{
Name: "less than max sweep at end",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{80, 81, 82, 83},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(80, 22), fullWithdrawal(81, 23), fullWithdrawal(82, 24),
fullWithdrawal(83, 25),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 26,
Balances: map[uint64]uint64{80: 0, 81: 0, 82: 0, 83: 0},
},
},
{
Args: args{
Name: "less than max sweep and beginning",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{4, 5, 6},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(4, 22), fullWithdrawal(5, 23), fullWithdrawal(6, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{4: 0, 5: 0, 6: 0},
},
},
{
Args: args{
Name: "success many partial withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 22), PendingPartialWithdrawal(19, 23), PendingPartialWithdrawal(28, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{
7: maxEffectiveBalance,
19: maxEffectiveBalance,
28: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success many withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(89, 22), PendingPartialWithdrawal(1, 23), PendingPartialWithdrawal(2, 24),
fullWithdrawal(7, 25), PendingPartialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
},
Control: control{
NextWithdrawalValidatorIndex: 40,
NextWithdrawalIndex: 29,
Balances: map[uint64]uint64{
7: 0, 19: 0, 28: 0,
2: maxEffectiveBalance, 1: maxEffectiveBalance, 89: maxEffectiveBalance,
15: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success many withdrawals with pending partial withdrawals in state",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(89, 22), PendingPartialWithdrawal(1, 23), PendingPartialWithdrawal(2, 24),
fullWithdrawal(7, 25), PendingPartialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
PendingPartialWithdrawals: []*ethpb.PendingPartialWithdrawal{
{
Index: 11,
Amount: withdrawalAmount(11) - maxEffectiveBalance,
},
},
},
Control: control{
NextWithdrawalValidatorIndex: 40,
NextWithdrawalIndex: 29,
Balances: map[uint64]uint64{
7: 0, 19: 0, 28: 0,
2: maxEffectiveBalance, 1: maxEffectiveBalance, 89: maxEffectiveBalance,
15: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success more than max fully withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
FullWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(1, 22), fullWithdrawal(2, 23), fullWithdrawal(3, 24),
fullWithdrawal(4, 25), fullWithdrawal(5, 26), fullWithdrawal(6, 27),
fullWithdrawal(7, 28), fullWithdrawal(8, 29), fullWithdrawal(9, 30),
fullWithdrawal(21, 31), fullWithdrawal(22, 32), fullWithdrawal(23, 33),
fullWithdrawal(24, 34), fullWithdrawal(25, 35), fullWithdrawal(26, 36),
fullWithdrawal(27, 37),
},
},
Control: control{
NextWithdrawalValidatorIndex: 28,
NextWithdrawalIndex: 38,
Balances: map[uint64]uint64{
1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0,
21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0,
},
},
},
{
Args: args{
Name: "success more than max partially withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(1, 22), PendingPartialWithdrawal(2, 23), PendingPartialWithdrawal(3, 24),
PendingPartialWithdrawal(4, 25), PendingPartialWithdrawal(5, 26), PendingPartialWithdrawal(6, 27),
PendingPartialWithdrawal(7, 28), PendingPartialWithdrawal(8, 29), PendingPartialWithdrawal(9, 30),
PendingPartialWithdrawal(21, 31), PendingPartialWithdrawal(22, 32), PendingPartialWithdrawal(23, 33),
PendingPartialWithdrawal(24, 34), PendingPartialWithdrawal(25, 35), PendingPartialWithdrawal(26, 36),
PendingPartialWithdrawal(27, 37),
},
},
Control: control{
NextWithdrawalValidatorIndex: 28,
NextWithdrawalIndex: 38,
Balances: map[uint64]uint64{
1: maxEffectiveBalance,
2: maxEffectiveBalance,
3: maxEffectiveBalance,
4: maxEffectiveBalance,
5: maxEffectiveBalance,
6: maxEffectiveBalance,
7: maxEffectiveBalance,
8: maxEffectiveBalance,
9: maxEffectiveBalance,
21: maxEffectiveBalance,
22: maxEffectiveBalance,
23: maxEffectiveBalance,
24: maxEffectiveBalance,
25: maxEffectiveBalance,
26: maxEffectiveBalance,
27: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "Parent Node is not full",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), fullWithdrawal(28, 24),
},
LatestBlockHash: []byte{1, 2, 3},
},
},
}
checkPostState := func(t *testing.T, expected control, st state.BeaconState) {
l, err := st.NextWithdrawalValidatorIndex()
require.NoError(t, err)
require.Equal(t, expected.NextWithdrawalValidatorIndex, l)
n, err := st.NextWithdrawalIndex()
require.NoError(t, err)
require.Equal(t, expected.NextWithdrawalIndex, n)
balances := st.Balances()
for idx, bal := range expected.Balances {
require.Equal(t, bal, balances[idx])
}
}
prepareValidators := func(st state.BeaconState, arguments args) error {
validators := make([]*ethpb.Validator, numValidators)
if err := st.SetBalances(make([]uint64, numValidators)); err != nil {
return err
}
for i := range validators {
v := &ethpb.Validator{}
v.EffectiveBalance = maxEffectiveBalance
v.WithdrawableEpoch = epochInFuture
v.WithdrawalCredentials = make([]byte, 32)
v.WithdrawalCredentials[31] = byte(i)
if err := st.UpdateBalancesAtIndex(primitives.ValidatorIndex(i), v.EffectiveBalance-uint64(rand.Intn(1000))); err != nil {
return err
}
validators[i] = v
}
for _, idx := range arguments.FullWithdrawalIndices {
if idx != notWithdrawableIndex {
validators[idx].WithdrawableEpoch = epochInPast
}
if err := st.UpdateBalancesAtIndex(idx, withdrawalAmount(idx)); err != nil {
return err
}
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
}
for _, idx := range arguments.PendingPartialWithdrawalIndices {
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
if err := st.UpdateBalancesAtIndex(idx, withdrawalAmount(idx)); err != nil {
return err
}
}
return st.SetValidators(validators)
}
for _, test := range tests {
t.Run(test.Args.Name, func(t *testing.T) {
saved := params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = maxSweep
if test.Args.Withdrawals == nil {
test.Args.Withdrawals = make([]*enginev1.Withdrawal, 0)
}
if test.Args.FullWithdrawalIndices == nil {
test.Args.FullWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
if test.Args.PendingPartialWithdrawalIndices == nil {
test.Args.PendingPartialWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
slot, err := slots.EpochStart(currentEpoch)
require.NoError(t, err)
var st state.BeaconState
var p interfaces.ExecutionData
spb := &ethpb.BeaconStateEPBS{
Slot: slot,
NextWithdrawalValidatorIndex: test.Args.NextWithdrawalValidatorIndex,
NextWithdrawalIndex: test.Args.NextWithdrawalIndex,
PendingPartialWithdrawals: test.Args.PendingPartialWithdrawals,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderEPBS{
BlockHash: []byte{},
},
LatestBlockHash: test.Args.LatestBlockHash,
}
st, err = state_native.InitializeFromProtoUnsafeEpbs(spb)
require.NoError(t, err)
env := random.ExecutionPayloadEnvelope(t)
env.Payload.Withdrawals = test.Args.Withdrawals
wp, err := consensusblocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
p, err = wp.Execution()
require.NoError(t, err)
err = prepareValidators(st, test.Args)
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Args.Name == "Parent Node is not full" {
require.DeepEqual(t, post, st)
require.IsNil(t, err)
} else {
require.NoError(t, err)
checkPostState(t, test.Control, post)
}
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = saved
})
}
}
func TestProcessBLSToExecutionChanges(t *testing.T) {
spb := &ethpb.BeaconStateCapella{
Fork: &ethpb.Fork{

View File

@@ -8,8 +8,10 @@ go_library(
"consolidations.go",
"deposits.go",
"effective_balance_updates.go",
"error.go",
"registry_updates.go",
"transition.go",
"transition_no_verify_sig.go",
"upgrade.go",
"validator.go",
"withdrawals.go",
@@ -54,13 +56,16 @@ go_test(
"deposit_fuzz_test.go",
"deposits_test.go",
"effective_balance_updates_test.go",
"error_test.go",
"export_test.go",
"registry_updates_test.go",
"transition_no_verify_sig_test.go",
"transition_test.go",
"upgrade_test.go",
"validator_test.go",
"withdrawals_test.go",
],
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
@@ -85,6 +90,7 @@ go_test(
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],

View File

@@ -177,9 +177,9 @@ func TestComputeConsolidationEpochAndUpdateChurn(t *testing.T) {
require.NoError(t, err)
return s
}(t),
consolidationBalance: helpers.ConsolidationChurnLimit(32000000000000000)+1,
consolidationBalance: helpers.ConsolidationChurnLimit(32000000000000000) + 1,
expectedEpoch: 18, // Flows into another epoch.
expectedConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(32000000000000000)-1,
expectedConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(32000000000000000) - 1,
},
}

View File

@@ -0,0 +1,16 @@
package electra
import "github.com/pkg/errors"
type execReqErr struct {
error
}
// IsExecutionRequestError returns true if the error is, or wraps, an `execReqErr`.
func IsExecutionRequestError(e error) bool {
if e == nil {
return false
}
var d execReqErr
return errors.As(e, &d)
}
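
A hypothetical caller sketch (the wrapper name and signature below are assumptions, not part of this change), showing how the error class lets callers separate execution-request failures from other block-processing errors:

// Assumed imports: context, github.com/pkg/errors, and the electra, state,
// and interfaces packages referenced elsewhere in this diff.
func transitionOperations(ctx context.Context, st state.BeaconState, blk interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
	post, err := electra.ProcessOperations(ctx, st, blk)
	if err != nil {
		if electra.IsExecutionRequestError(err) {
			// The failure originated in deposit, withdrawal, or consolidation
			// request processing rather than in the other block operations.
			return nil, errors.Wrap(err, "invalid execution requests")
		}
		return nil, err
	}
	return post, nil
}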

View File

@@ -0,0 +1,45 @@
package electra
import (
"testing"
"github.com/pkg/errors"
)
func TestIsExecutionRequestError(t *testing.T) {
tests := []struct {
name string
err error
want bool
}{
{
name: "nil error",
err: nil,
want: false,
},
{
name: "random error",
err: errors.New("some error"),
want: false,
},
{
name: "execution request error",
err: execReqErr{errors.New("execution request failed")},
want: true,
},
{
name: "wrapped execution request error",
err: errors.Wrap(execReqErr{errors.New("execution request failed")}, "wrapped"),
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := IsExecutionRequestError(tt.err)
if got != tt.want {
t.Errorf("IsExecutionRequestError(%v) = %v, want %v", tt.err, got, tt.want)
}
})
}
}

View File

@@ -1,16 +1,13 @@
package epbs
package electra
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
v "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
var (
@@ -64,11 +61,11 @@ func ProcessOperations(
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
st, err = electra.ProcessAttestationsNoVerifySignature(ctx, st, block)
st, err = ProcessAttestationsNoVerifySignature(ctx, st, block)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attestation")
}
if _, err := electra.ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits())
@@ -79,28 +76,36 @@ func ProcessOperations(
if err != nil {
return nil, errors.Wrap(err, "could not process bls-to-execution changes")
}
// new in ePBS
if block.Version() >= version.EPBS {
if err := ProcessPayloadAttestations(st, bb); err != nil {
return nil, err
}
return st, nil
}
// new in electra
requests, err := bb.ExecutionRequests()
if err != nil {
return nil, errors.Wrap(err, "could not get execution requests")
}
st, err = electra.ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit requests")
for _, d := range requests.Deposits {
if d == nil {
return nil, errors.New("nil deposit request")
}
}
st, err = electra.ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawal requests")
return nil, execReqErr{errors.Wrap(err, "could not process deposit requests")}
}
if err := electra.ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, fmt.Errorf("could not process consolidation requests: %w", err)
for _, w := range requests.Withdrawals {
if w == nil {
return nil, errors.New("nil withdrawal request")
}
}
st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return nil, execReqErr{errors.Wrap(err, "could not process withdrawal requests")}
}
for _, c := range requests.Consolidations {
if c == nil {
return nil, errors.New("nil consolidation request")
}
}
if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, execReqErr{errors.Wrap(err, "could not process consolidation requests")}
}
return st, nil
}

View File

@@ -0,0 +1,61 @@
package electra_test
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessOperationsWithNilRequests(t *testing.T) {
tests := []struct {
name string
modifyBlk func(blockElectra *ethpb.SignedBeaconBlockElectra)
errMsg string
}{
{
name: "Nil deposit request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Deposits = []*enginev1.DepositRequest{nil}
},
errMsg: "nil deposit request",
},
{
name: "Nil withdrawal request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Withdrawals = []*enginev1.WithdrawalRequest{nil}
},
errMsg: "nil withdrawal request",
},
{
name: "Nil consolidation request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Consolidations = []*enginev1.ConsolidationRequest{nil}
},
errMsg: "nil consolidation request",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
st, ks := util.DeterministicGenesisStateElectra(t, 128)
blk, err := util.GenerateFullBlockElectra(st, ks, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
tc.modifyBlk(blk)
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, st.SetSlot(1))
_, err = electra.ProcessOperations(context.Background(), st, b.Block())
require.ErrorContains(t, tc.errMsg, err)
})
}
}

View File

@@ -1,61 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"attestation.go",
"execution_payload_envelope.go",
"execution_payload_header.go",
"operations.go",
"payload_attestation.go",
"upgrade.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"attestation_test.go",
"execution_payload_envelope_test.go",
"upgrade_test.go",
],
deps = [
":go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time/slots:go_default_library",
],
)

View File

@@ -1,13 +0,0 @@
package epbs
import (
"fmt"
)
// RemoveValidatorFlag removes validator flag from existing one.
func RemoveValidatorFlag(flag, flagPosition uint8) (uint8, error) {
if flagPosition > 7 {
return flag, fmt.Errorf("flag position %d exceeds length", flagPosition)
}
return flag & ^(1 << flagPosition), nil
}
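
A minimal worked example of the mask arithmetic (illustrative values, not taken from the tests below):

flag := uint8(0b0000_0111) // flag positions 0, 1, and 2 set
cleared, err := RemoveValidatorFlag(flag, 1)
// err == nil; cleared == 0b0000_0101, since 0b111 & ^(1 << 1) clears only bit 1.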

View File

@@ -1,93 +0,0 @@
package epbs_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestValidatorFlag_Remove(t *testing.T) {
tests := []struct {
name string
add []uint8
remove []uint8
expectedTrue []uint8
expectedFalse []uint8
}{
{
name: "none",
add: []uint8{},
remove: []uint8{},
expectedTrue: []uint8{},
expectedFalse: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
{
name: "source",
add: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
remove: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
expectedTrue: []uint8{},
expectedFalse: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
{
name: "source, target",
add: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex},
remove: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
expectedTrue: []uint8{params.BeaconConfig().TimelyTargetFlagIndex},
expectedFalse: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
{
name: "source, target, head",
add: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
remove: []uint8{params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
expectedTrue: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
expectedFalse: []uint8{params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
}
var err error
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
flag := uint8(0)
// Add flags.
for _, flagPosition := range test.add {
flag, err = altair.AddValidatorFlag(flag, flagPosition)
require.NoError(t, err)
has, err := altair.HasValidatorFlag(flag, flagPosition)
require.NoError(t, err)
require.Equal(t, true, has)
}
// Remove flags.
for _, flagPosition := range test.remove {
flag, err = epbs.RemoveValidatorFlag(flag, flagPosition)
require.NoError(t, err)
}
// Check if flags are set correctly.
for _, flagPosition := range test.expectedTrue {
has, err := altair.HasValidatorFlag(flag, flagPosition)
require.NoError(t, err)
require.Equal(t, true, has)
}
for _, flagPosition := range test.expectedFalse {
has, err := altair.HasValidatorFlag(flag, flagPosition)
require.NoError(t, err)
require.Equal(t, false, has)
}
})
}
}
func TestValidatorFlag_Remove_ExceedsLength(t *testing.T) {
_, err := epbs.RemoveValidatorFlag(0, 8)
require.ErrorContains(t, "flag position 8 exceeds length", err)
}
func TestValidatorFlag_Remove_NotSet(t *testing.T) {
_, err := epbs.RemoveValidatorFlag(0, 1)
require.NoError(t, err)
}

View File

@@ -1,126 +0,0 @@
package epbs
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)
// ValidatePayloadStateTransition performs the process_execution_payload
// function.
func ValidatePayloadStateTransition(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
if err := UpdateHeaderAndVerify(ctx, preState, envelope); err != nil {
return err
}
committedHeader, err := preState.LatestExecutionPayloadHeaderEPBS()
if err != nil {
return err
}
if err := ValidateAgainstCommittedBid(committedHeader, envelope); err != nil {
return err
}
if err := ProcessPayloadStateTransition(ctx, preState, envelope); err != nil {
return err
}
return CheckPostStateRoot(ctx, preState, envelope)
}
// ProcessPayloadStateTransition applies the envelope's execution requests to
// the pre-state and records the payload's block hash and the latest full slot.
func ProcessPayloadStateTransition(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
er := envelope.ExecutionRequests()
preState, err := electra.ProcessDepositRequests(ctx, preState, er.Deposits)
if err != nil {
return errors.Wrap(err, "could not process deposit receipts")
}
preState, err = electra.ProcessWithdrawalRequests(ctx, preState, er.Withdrawals)
if err != nil {
return errors.Wrap(err, "could not process ercution layer withdrawal requests")
}
if err := electra.ProcessConsolidationRequests(ctx, preState, er.Consolidations); err != nil {
return errors.Wrap(err, "could not process consolidation requests")
}
payload, err := envelope.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution payload")
}
if err := preState.SetLatestBlockHash(payload.BlockHash()); err != nil {
return err
}
return preState.SetLatestFullSlot(preState.Slot())
}
func UpdateHeaderAndVerify(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
blockHeader := preState.LatestBlockHeader()
if blockHeader == nil {
return errors.New("invalid nil latest block header")
}
if len(blockHeader.StateRoot) == 0 || [32]byte(blockHeader.StateRoot) == [32]byte{} {
prevStateRoot, err := preState.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not compute previous state root")
}
blockHeader.StateRoot = prevStateRoot[:]
if err := preState.SetLatestBlockHeader(blockHeader); err != nil {
return errors.Wrap(err, "could not set latest block header")
}
}
blockHeaderRoot, err := blockHeader.HashTreeRoot()
if err != nil {
return err
}
beaconBlockRoot := envelope.BeaconBlockRoot()
if blockHeaderRoot != beaconBlockRoot {
return fmt.Errorf("beacon block root does not match previous header, got: %#x wanted: %#x", beaconBlockRoot, blockHeaderRoot)
}
return nil
}
func ValidateAgainstCommittedBid(
committedHeader *enginev1.ExecutionPayloadHeaderEPBS,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
builderIndex := envelope.BuilderIndex()
if committedHeader.BuilderIndex != builderIndex {
return errors.New("builder index does not match committed header")
}
kzgRoot, err := envelope.BlobKzgCommitmentsRoot()
if err != nil {
return err
}
if [32]byte(committedHeader.BlobKzgCommitmentsRoot) != kzgRoot {
return errors.New("blob KZG commitments root does not match committed header")
}
return nil
}
func CheckPostStateRoot(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
stateRoot, err := preState.HashTreeRoot(ctx)
if err != nil {
return err
}
envelopeStateRoot := envelope.StateRoot()
if stateRoot != envelopeStateRoot {
return errors.New("state root mismatch")
}
return nil
}

View File

@@ -1,110 +0,0 @@
package epbs_test
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
)
func TestProcessPayloadStateTransition(t *testing.T) {
bh := [32]byte{'h'}
p := random.ExecutionPayloadEnvelope(t)
p.Payload.BlockHash = bh[:]
p.ExecutionRequests = &enginev1.ExecutionRequests{}
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
validators := make([]*ethpb.Validator, 0)
stpb := &ethpb.BeaconStateEPBS{Slot: 3, Validators: validators}
st, err := state_native.InitializeFromProtoUnsafeEpbs(stpb)
require.NoError(t, err)
ctx := context.Background()
lbh, err := st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, [32]byte{}, [32]byte(lbh))
require.NoError(t, epbs.ProcessPayloadStateTransition(ctx, st, e))
lbh, err = st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, bh, [32]byte(lbh))
lfs, err := st.LatestFullSlot()
require.NoError(t, err)
require.Equal(t, lfs, st.Slot())
}
func Test_validateAgainstHeader(t *testing.T) {
bh := [32]byte{'h'}
payload := &enginev1.ExecutionPayloadDeneb{BlockHash: bh[:]}
p := random.ExecutionPayloadEnvelope(t)
p.Payload = payload
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
stpb := &ethpb.BeaconStateEPBS{Slot: 3}
st, err := state_native.InitializeFromProtoUnsafeEpbs(stpb)
require.NoError(t, err)
ctx := context.Background()
require.ErrorContains(t, "invalid nil latest block header", epbs.UpdateHeaderAndVerify(ctx, st, e))
prest, _ := util.DeterministicGenesisStateEpbs(t, 64)
br := [32]byte{'r'}
p.BeaconBlockRoot = br[:]
require.ErrorContains(t, "beacon block root does not match previous header", epbs.UpdateHeaderAndVerify(ctx, prest, e))
header := prest.LatestBlockHeader()
require.NoError(t, err)
headerRoot, err := header.HashTreeRoot()
require.NoError(t, err)
p.BeaconBlockRoot = headerRoot[:]
require.NoError(t, epbs.UpdateHeaderAndVerify(ctx, prest, e))
}
func Test_validateAgainstCommittedBid(t *testing.T) {
payload := &enginev1.ExecutionPayloadDeneb{}
p := random.ExecutionPayloadEnvelope(t)
p.Payload = payload
p.BuilderIndex = 1
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
h := &enginev1.ExecutionPayloadHeaderEPBS{}
require.ErrorContains(t, "builder index does not match committed header", epbs.ValidateAgainstCommittedBid(h, e))
h.BuilderIndex = 1
p.BlobKzgCommitments = make([][]byte, 6)
for i := range p.BlobKzgCommitments {
p.BlobKzgCommitments[i] = make([]byte, 48)
}
h.BlobKzgCommitmentsRoot = make([]byte, 32)
require.ErrorContains(t, "blob KZG commitments root does not match committed header", epbs.ValidateAgainstCommittedBid(h, e))
root, err := e.BlobKzgCommitmentsRoot()
require.NoError(t, err)
h.BlobKzgCommitmentsRoot = root[:]
require.NoError(t, epbs.ValidateAgainstCommittedBid(h, e))
}
func TestCheckPostStateRoot(t *testing.T) {
payload := &enginev1.ExecutionPayloadDeneb{}
p := random.ExecutionPayloadEnvelope(t)
p.Payload = payload
p.BuilderIndex = 1
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
ctx := context.Background()
st, _ := util.DeterministicGenesisStateEpbs(t, 64)
p.StateRoot = make([]byte, 32)
require.ErrorContains(t, "state root mismatch", epbs.CheckPostStateRoot(ctx, st, e))
root, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
p.StateRoot = root[:]
require.NoError(t, epbs.CheckPostStateRoot(ctx, st, e))
}

View File

@@ -1,50 +0,0 @@
package epbs
import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/network/forks"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// ValidatePayloadHeaderSignature validates the signature of the execution payload header.
func ValidatePayloadHeaderSignature(st state.ReadOnlyBeaconState, sh interfaces.ROSignedExecutionPayloadHeader) error {
h, err := sh.Header()
if err != nil {
return err
}
pubkey := st.PubkeyAtIndex(h.BuilderIndex())
pub, err := bls.PublicKeyFromBytes(pubkey[:])
if err != nil {
return err
}
s := sh.Signature()
sig, err := bls.SignatureFromBytes(s[:])
if err != nil {
return err
}
currentEpoch := slots.ToEpoch(h.Slot())
f, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(f, currentEpoch, params.BeaconConfig().DomainBeaconBuilder, st.GenesisValidatorsRoot())
if err != nil {
return err
}
root, err := sh.SigningRoot(domain)
if err != nil {
return err
}
if !sig.Verify(pub, root[:]) {
return signing.ErrSigFailedToVerify
}
return nil
}

View File

@@ -1,125 +0,0 @@
package epbs
import (
"bytes"
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
// ProcessPayloadAttestations validates the payload attestations in the block
// body and applies the resulting proposer rewards or penalties and attester
// participation updates.
func ProcessPayloadAttestations(state state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) error {
atts, err := body.PayloadAttestations()
if err != nil {
return err
}
if len(atts) == 0 {
return nil
}
ctx, cancel := context.WithTimeout(context.Background(), 12*time.Second)
defer cancel()
lbh := state.LatestBlockHeader()
proposerIndex := lbh.ProposerIndex
var participation []byte
if state.Slot()%32 == 0 {
participation, err = state.PreviousEpochParticipation()
} else {
participation, err = state.CurrentEpochParticipation()
}
if err != nil {
return err
}
totalBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
return err
}
baseReward, err := altair.BaseRewardWithTotalBalance(state, proposerIndex, totalBalance)
if err != nil {
return err
}
lfs, err := state.LatestFullSlot()
if err != nil {
return err
}
cfg := params.BeaconConfig()
sourceFlagIndex := cfg.TimelySourceFlagIndex
targetFlagIndex := cfg.TimelyTargetFlagIndex
headFlagIndex := cfg.TimelyHeadFlagIndex
penaltyNumerator := uint64(0)
rewardNumerator := uint64(0)
rewardDenominator := (cfg.WeightDenominator - cfg.ProposerWeight) * cfg.WeightDenominator / cfg.ProposerWeight
for _, att := range atts {
data := att.Data
if !bytes.Equal(data.BeaconBlockRoot, lbh.ParentRoot) {
return errors.New("invalid beacon block root in payload attestation data")
}
if data.Slot+1 != state.Slot() {
return errors.New("invalid data slot")
}
indexed, err := helpers.GetIndexedPayloadAttestation(ctx, state, data.Slot, att)
if err != nil {
return err
}
valid, err := helpers.IsValidIndexedPayloadAttestation(state, indexed)
if err != nil {
return err
}
if !valid {
return errors.New("invalid payload attestation")
}
payloadWasPresent := data.Slot == lfs
votedPresent := data.PayloadStatus == primitives.PAYLOAD_PRESENT
if votedPresent != payloadWasPresent {
for _, idx := range indexed.GetAttestingIndices() {
flags := participation[idx]
has, err := altair.HasValidatorFlag(flags, targetFlagIndex)
if err != nil {
return err
}
if has {
penaltyNumerator += baseReward * cfg.TimelyTargetWeight
}
has, err = altair.HasValidatorFlag(flags, sourceFlagIndex)
if err != nil {
return err
}
if has {
penaltyNumerator += baseReward * cfg.TimelySourceWeight
}
has, err = altair.HasValidatorFlag(flags, headFlagIndex)
if err != nil {
return err
}
if has {
penaltyNumerator += baseReward * cfg.TimelyHeadWeight
}
participation[idx] = 0
}
} else {
for _, idx := range indexed.GetAttestingIndices() {
participation[idx] = (1 << headFlagIndex) | (1 << sourceFlagIndex) | (1 << targetFlagIndex)
rewardNumerator += baseReward * (cfg.TimelyHeadWeight + cfg.TimelySourceWeight + cfg.TimelyTargetWeight)
}
}
}
if penaltyNumerator > 0 {
if err := helpers.DecreaseBalance(state, proposerIndex, penaltyNumerator/rewardDenominator); err != nil {
return err
}
}
if rewardNumerator > 0 {
if err := helpers.IncreaseBalance(state, proposerIndex, rewardNumerator/rewardDenominator); err != nil {
return err
}
}
return nil
}
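
A hedged arithmetic check of the proposer reward scaling above, assuming the mainnet Altair weight constants (which this diff does not restate):

// WeightDenominator = 64, ProposerWeight = 8,
// TimelySourceWeight = 14, TimelyTargetWeight = 26, TimelyHeadWeight = 14:
//   rewardDenominator = (64 - 8) * 64 / 8 = 448
// so each fully timely attesting index credits the proposer
// baseReward * (14 + 26 + 14) / 448 = baseReward * 54 / 448.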

View File

@@ -1,150 +0,0 @@
package epbs
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// UpgradeToEIP7732 upgrades the given beacon state to an EIP-7732 state.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7732/fork.md
func UpgradeToEIP7732(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
historicalRoots, err := beaconState.HistoricalRoots()
if err != nil {
return nil, err
}
depositBalanceToConsume, err := beaconState.DepositBalanceToConsume()
if err != nil {
return nil, err
}
exitBalanceToConsume, err := beaconState.ExitBalanceToConsume()
if err != nil {
return nil, err
}
earliestExitEpoch, err := beaconState.EarliestExitEpoch()
if err != nil {
return nil, err
}
consolidationBalanceToConsume, err := beaconState.ConsolidationBalanceToConsume()
if err != nil {
return nil, err
}
earliestConsolidationEpoch, err := beaconState.EarliestConsolidationEpoch()
if err != nil {
return nil, err
}
pendingDeposits, err := beaconState.PendingDeposits()
if err != nil {
return nil, err
}
pendingPartialWithdrawals, err := beaconState.PendingPartialWithdrawals()
if err != nil {
return nil, err
}
pendingConsolidations, err := beaconState.PendingConsolidations()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateEPBS{
GenesisTime: beaconState.GenesisTime(),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().EPBSForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: historicalRoots,
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: depositBalanceToConsume,
ExitBalanceToConsume: exitBalanceToConsume,
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: consolidationBalanceToConsume,
EarliestConsolidationEpoch: earliestConsolidationEpoch,
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
// Newly added for EIP7732
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderEPBS{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
BlobKzgCommitmentsRoot: make([]byte, 32),
},
LatestBlockHash: payloadHeader.BlockHash(),
LatestFullSlot: beaconState.Slot(),
LastWithdrawalsRoot: make([]byte, 32),
}
post, err := state_native.InitializeFromProtoUnsafeEpbs(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post EIP-7732 beaconState")
}
return post, nil
}

View File

@@ -1,135 +0,0 @@
package epbs_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func TestUpgradeToEip7732(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetHistoricalRoots([][]byte{{1}}))
preForkState := st.Copy()
mSt, err := epbs.UpgradeToEIP7732(st)
require.NoError(t, err)
require.Equal(t, preForkState.GenesisTime(), mSt.GenesisTime())
require.DeepSSZEqual(t, preForkState.GenesisValidatorsRoot(), mSt.GenesisValidatorsRoot())
require.Equal(t, preForkState.Slot(), mSt.Slot())
require.DeepSSZEqual(t, preForkState.LatestBlockHeader(), mSt.LatestBlockHeader())
require.DeepSSZEqual(t, preForkState.BlockRoots(), mSt.BlockRoots())
require.DeepSSZEqual(t, preForkState.StateRoots(), mSt.StateRoots())
require.DeepSSZEqual(t, preForkState.Validators()[2:], mSt.Validators()[2:])
require.DeepSSZEqual(t, preForkState.Balances()[2:], mSt.Balances()[2:])
require.DeepSSZEqual(t, preForkState.Eth1Data(), mSt.Eth1Data())
require.DeepSSZEqual(t, preForkState.Eth1DataVotes(), mSt.Eth1DataVotes())
require.DeepSSZEqual(t, preForkState.Eth1DepositIndex(), mSt.Eth1DepositIndex())
require.DeepSSZEqual(t, preForkState.RandaoMixes(), mSt.RandaoMixes())
require.DeepSSZEqual(t, preForkState.Slashings(), mSt.Slashings())
require.DeepSSZEqual(t, preForkState.JustificationBits(), mSt.JustificationBits())
require.DeepSSZEqual(t, preForkState.PreviousJustifiedCheckpoint(), mSt.PreviousJustifiedCheckpoint())
require.DeepSSZEqual(t, preForkState.CurrentJustifiedCheckpoint(), mSt.CurrentJustifiedCheckpoint())
require.DeepSSZEqual(t, preForkState.FinalizedCheckpoint(), mSt.FinalizedCheckpoint())
require.Equal(t, len(preForkState.Validators()), len(mSt.Validators()))
numValidators := mSt.NumValidators()
p, err := mSt.PreviousEpochParticipation()
require.NoError(t, err)
require.DeepSSZEqual(t, make([]byte, numValidators), p)
p, err = mSt.CurrentEpochParticipation()
require.NoError(t, err)
require.DeepSSZEqual(t, make([]byte, numValidators), p)
s, err := mSt.InactivityScores()
require.NoError(t, err)
require.DeepSSZEqual(t, make([]uint64, numValidators), s)
hr1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
hr2, err := mSt.HistoricalRoots()
require.NoError(t, err)
require.DeepEqual(t, hr1, hr2)
f := mSt.Fork()
require.DeepSSZEqual(t, &ethpb.Fork{
PreviousVersion: st.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().EPBSForkVersion,
Epoch: time.CurrentEpoch(st),
}, f)
csc, err := mSt.CurrentSyncCommittee()
require.NoError(t, err)
psc, err := preForkState.CurrentSyncCommittee()
require.NoError(t, err)
require.DeepSSZEqual(t, psc, csc)
nsc, err := mSt.NextSyncCommittee()
require.NoError(t, err)
psc, err = preForkState.NextSyncCommittee()
require.NoError(t, err)
require.DeepSSZEqual(t, psc, nsc)
nwi, err := mSt.NextWithdrawalIndex()
require.NoError(t, err)
require.Equal(t, uint64(0), nwi)
lwvi, err := mSt.NextWithdrawalValidatorIndex()
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(0), lwvi)
summaries, err := mSt.HistoricalSummaries()
require.NoError(t, err)
require.Equal(t, 0, len(summaries))
startIndex, err := mSt.DepositRequestsStartIndex()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().UnsetDepositRequestsStartIndex, startIndex)
balance, err := mSt.DepositBalanceToConsume()
require.NoError(t, err)
require.Equal(t, primitives.Gwei(0), balance)
tab, err := helpers.TotalActiveBalance(mSt)
require.NoError(t, err)
ebtc, err := mSt.ExitBalanceToConsume()
require.NoError(t, err)
require.Equal(t, helpers.ActivationExitChurnLimit(primitives.Gwei(tab)), ebtc)
cbtc, err := mSt.ConsolidationBalanceToConsume()
require.NoError(t, err)
require.Equal(t, helpers.ConsolidationChurnLimit(primitives.Gwei(tab)), cbtc)
earliestConsolidationEpoch, err := mSt.EarliestConsolidationEpoch()
require.NoError(t, err)
require.Equal(t, helpers.ActivationExitEpoch(slots.ToEpoch(preForkState.Slot())), earliestConsolidationEpoch)
// EIP-7732 checks.
h, err := mSt.LatestExecutionPayloadHeaderEPBS()
require.NoError(t, err)
require.DeepEqual(t, &enginev1.ExecutionPayloadHeaderEPBS{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
BlobKzgCommitmentsRoot: make([]byte, 32),
}, h)
lwr, err := mSt.LastWithdrawalsRoot()
require.NoError(t, err)
require.DeepEqual(t, lwr, make([]byte, 32))
lbh, err := mSt.LatestBlockHash()
require.NoError(t, err)
lh, err := preForkState.LatestExecutionPayloadHeader()
require.NoError(t, err)
require.DeepEqual(t, lbh, lh.BlockHash())
slot, err := mSt.LatestFullSlot()
require.NoError(t, err)
require.Equal(t, slot, preForkState.Slot())
}

View File

@@ -32,6 +32,12 @@ const (
// AttesterSlashingReceived is sent after an attester slashing is received from gossip or rpc
AttesterSlashingReceived = 8
// SingleAttReceived is sent after a single attestation object is received from gossip or rpc
SingleAttReceived = 9
// DataColumnSidecarReceived is sent after a data column sidecar is received from gossip or rpc.
DataColumnSidecarReceived = 10
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -43,7 +49,7 @@ type UnAggregatedAttReceivedData struct {
// AggregatedAttReceivedData is the data sent with AggregatedAttReceived events.
type AggregatedAttReceivedData struct {
// Attestation is the aggregated attestation object.
Attestation *ethpb.AggregateAttestationAndProof
Attestation ethpb.AggregateAttAndProof
}
// ExitReceivedData is the data sent with ExitReceived events.
@@ -77,3 +83,11 @@ type ProposerSlashingReceivedData struct {
type AttesterSlashingReceivedData struct {
AttesterSlashing ethpb.AttSlashing
}
// SingleAttReceivedData is the data sent with SingleAttReceived events.
type SingleAttReceivedData struct {
Attestation ethpb.Att
}
// DataColumnSidecarReceivedData is the data sent with DataColumnSidecarReceived events.
type DataColumnSidecarReceivedData struct {
DataColumn *blocks.VerifiedRODataColumn
}
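
A hedged sketch of how a producer might publish the new event, following the operation-feed pattern used elsewhere in Prysm (the notifier wiring and variable names here are assumptions, not shown in this diff):

notifier.OperationFeed().Send(&feed.Event{
	Type: operation.DataColumnSidecarReceived,
	Data: &operation.DataColumnSidecarReceivedData{DataColumn: verifiedColumn},
})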

View File

@@ -102,7 +102,7 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
return nil, err
}
s := &ethpb.BeaconStateFulu{
s := &ethpb.BeaconStateElectra{
GenesisTime: beaconState.GenesisTime(),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),

View File

@@ -9,7 +9,6 @@ go_library(
"genesis.go",
"legacy.go",
"metrics.go",
"payload_attestation.go",
"randao.go",
"rewards_penalties.go",
"shuffle.go",
@@ -22,13 +21,11 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/epbs:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//container/trie:go_default_library",
@@ -56,9 +53,7 @@ go_test(
"attestation_test.go",
"beacon_committee_test.go",
"block_test.go",
"exports_test.go",
"legacy_test.go",
"payload_attestation_test.go",
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
"randao_test.go",
@@ -75,27 +70,21 @@ go_test(
tags = ["CI_race_detection"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/epbs:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -213,7 +213,6 @@ type CommitteeAssignment struct {
Committee []primitives.ValidatorIndex
AttesterSlot primitives.Slot
CommitteeIndex primitives.CommitteeIndex
PtcSlot primitives.Slot
}
// verifyAssignmentEpoch verifies if the given epoch is valid for assignment based on the provided state.
@@ -295,7 +294,7 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
if err := verifyAssignmentEpoch(epoch, state); err != nil {
return nil, err
}
slot, err := slots.EpochStart(epoch)
startSlot, err := slots.EpochStart(epoch)
if err != nil {
return nil, err
}
@@ -304,17 +303,14 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
vals[v] = struct{}{}
}
assignments := make(map[primitives.ValidatorIndex]*CommitteeAssignment)
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not compute beacon committees")
}
ptcPerSlot, ptcMembersPerCommittee := PtcAllocation(len(committees))
// Compute committee assignments for each slot in the epoch.
endSlot := slot + params.BeaconConfig().SlotsPerEpoch
for {
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not compute beacon committees")
}
for j, committee := range committees {
for i, vIndex := range committee {
for _, vIndex := range committee {
if _, ok := vals[vIndex]; !ok { // Skip if the validator is not in the provided validators slice.
continue
}
@@ -324,19 +320,8 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
assignments[vIndex].Committee = committee
assignments[vIndex].AttesterSlot = slot
assignments[vIndex].CommitteeIndex = primitives.CommitteeIndex(j)
if uint64(j) < ptcPerSlot && uint64(i) < ptcMembersPerCommittee {
assignments[vIndex].PtcSlot = slot
}
}
}
slot++
if slot == endSlot {
break
}
committees, err = BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not compute beacon committees")
}
}
return assignments, nil
}

View File

@@ -3,7 +3,6 @@ package helpers_test
import (
"context"
"fmt"
"slices"
"strconv"
"testing"
@@ -11,7 +10,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/slice"
@@ -731,26 +729,15 @@ func TestCommitteeIndices(t *testing.T) {
assert.DeepEqual(t, []primitives.CommitteeIndex{0, 1, 3}, indices)
}
func TestCommitteeAssignments_PTC(t *testing.T) {
helpers.ClearCache()
// Create 10 committees. Total 40960 validators.
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
validators := make([]*ethpb.Validator, validatorCount)
validatorIndices := make([]primitives.ValidatorIndex, validatorCount)
func TestAttestationCommittees(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().TargetCommitteeSize))
for i := 0; i < len(validators); i++ {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: k,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
validatorIndices[i] = primitives.ValidatorIndex(i)
}
state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
@@ -774,31 +761,6 @@ func TestCommitteeAssignments_PTC(t *testing.T) {
assert.Equal(t, params.BeaconConfig().TargetCommitteeSize, uint64(len(committees[0])))
assert.Equal(t, params.BeaconConfig().TargetCommitteeSize, uint64(len(committees[1])))
})
as, err := helpers.CommitteeAssignments(context.Background(), state, 1, validatorIndices)
require.NoError(t, err)
// Capture all the slots and all the validator index that belonged in a PTC using a map for verification later.
slotValidatorMap := make(map[primitives.Slot][]primitives.ValidatorIndex)
for i, a := range as {
slotValidatorMap[a.PtcSlot] = append(slotValidatorMap[a.PtcSlot], i)
}
// Verify that all the slots have the correct number of PTC.
for s, v := range slotValidatorMap {
if s == 0 {
continue
}
// Make sure all the PTC are the correct size from the map.
require.Equal(t, len(v), field_params.PTCSize)
// Get the actual PTC from the beacon state using the helper function
ptc, err := helpers.GetPayloadTimelinessCommittee(context.Background(), state, s)
require.NoError(t, err)
for _, index := range ptc {
i := slices.Index(v, index)
require.NotEqual(t, -1, i) // PTC not found from the assignment map
}
}
}
func TestBeaconCommittees(t *testing.T) {

View File

@@ -1,12 +0,0 @@
package helpers
var (
ErrNilMessage = errNilMessage
ErrNilData = errNilData
ErrNilBeaconBlockRoot = errNilBeaconBlockRoot
ErrNilPayloadAttestation = errNilPayloadAttestation
ErrNilSignature = errNilSignature
ErrNilAggregationBits = errNilAggregationBits
ErrPreEPBSState = errPreEPBSState
ErrCommitteeOverflow = errCommitteeOverflow
)

View File

@@ -1,296 +0,0 @@
package helpers
import (
"context"
"slices"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/epbs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/math"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
var (
errNilMessage = errors.New("nil PayloadAttestationMessage")
errNilData = errors.New("nil PayloadAttestationData")
errNilBeaconBlockRoot = errors.New("nil BeaconBlockRoot")
errNilPayloadAttestation = errors.New("nil PayloadAttestation")
errNilSignature = errors.New("nil Signature")
errNilAggregationBits = errors.New("nil AggregationBits")
errPreEPBSState = errors.New("beacon state pre ePBS fork")
errCommitteeOverflow = errors.New("beacon committee of insufficient size")
)
// ValidateNilPayloadAttestationData checks if any composite field of the
// payload attestation data is nil
func ValidateNilPayloadAttestationData(data *eth.PayloadAttestationData) error {
if data == nil {
return errNilData
}
if data.BeaconBlockRoot == nil {
return errNilBeaconBlockRoot
}
return nil
}
// ValidateNilPayloadAttestationMessage checks if any composite field of the
// payload attestation message is nil
func ValidateNilPayloadAttestationMessage(att *eth.PayloadAttestationMessage) error {
if att == nil {
return errNilMessage
}
if att.Signature == nil {
return errNilSignature
}
return ValidateNilPayloadAttestationData(att.Data)
}
// ValidateNilPayloadAttestation checks if any composite field of the
// payload attestation is nil
func ValidateNilPayloadAttestation(att *eth.PayloadAttestation) error {
if att == nil {
return errNilPayloadAttestation
}
if att.AggregationBits == nil {
return errNilAggregationBits
}
if att.Signature == nil {
return errNilSignature
}
return ValidateNilPayloadAttestationData(att.Data)
}
// InPayloadTimelinessCommittee returns whether the given index belongs to the
// PTC computed from the passed state.
func InPayloadTimelinessCommittee(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot, idx primitives.ValidatorIndex) (bool, error) {
ptc, err := GetPayloadTimelinessCommittee(ctx, state, slot)
if err != nil {
return false, err
}
for _, i := range ptc {
if i == idx {
return true, nil
}
}
return false, nil
}
// GetPayloadTimelinessCommittee returns the PTC for the given slot, computed from the passed state as in the
// spec function `get_ptc`.
func GetPayloadTimelinessCommittee(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (indices []primitives.ValidatorIndex, err error) {
if state.Version() < version.EPBS {
return nil, errPreEPBSState
}
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not get beacon committees")
}
committeesPerSlot, membersPerCommittee := PtcAllocation(len(committees))
for i, committee := range committees {
if uint64(i) >= committeesPerSlot {
return
}
if uint64(len(committee)) < membersPerCommittee {
return nil, errCommitteeOverflow
}
indices = append(indices, committee[:membersPerCommittee]...)
}
return
}
// PtcAllocation returns:
// 1. The number of beacon committees that PTC will borrow from in a slot.
// 2. The number of validators that PTC will borrow from in a beacon committee.
func PtcAllocation(slotCommittees int) (committeesPerSlot, membersPerCommittee uint64) {
committeesPerSlot = math.LargestPowerOfTwo(math.Min(uint64(slotCommittees), fieldparams.PTCSize))
membersPerCommittee = fieldparams.PTCSize / committeesPerSlot
return
}
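// A hedged worked example (numbers consistent with the PTC tests elsewhere in
// this change): with 10 committees in a slot and PTCSize = 512,
//   committeesPerSlot   = LargestPowerOfTwo(min(10, 512)) = 8
//   membersPerCommittee = 512 / 8 = 64
// so the PTC draws the first 64 members from each of the first 8 committees.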
// GetPayloadAttestingIndices returns the set of attester indices corresponding to the given PayloadAttestation.
//
// Spec pseudocode definition:
//
// def get_payload_attesting_indices(state: BeaconState, slot: Slot,
// payload_attestation: PayloadAttestation) -> Set[ValidatorIndex]:
// """
// Return the set of attesting indices corresponding to ``payload_attestation``.
// """
// ptc = get_ptc(state, slot)
// return set(index for i, index in enumerate(ptc) if payload_attestation.aggregation_bits[i])
func GetPayloadAttestingIndices(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot, att *eth.PayloadAttestation) (indices []primitives.ValidatorIndex, err error) {
if state.Version() < version.EPBS {
return nil, errPreEPBSState
}
ptc, err := GetPayloadTimelinessCommittee(ctx, state, slot)
if err != nil {
return nil, err
}
for i, validatorIndex := range ptc {
if att.AggregationBits.BitAt(uint64(i)) {
indices = append(indices, validatorIndex)
}
}
return
}
// GetIndexedPayloadAttestation replaces a PayloadAttestation's AggregationBits with sorted AttestingIndices and returns an IndexedPayloadAttestation.
//
// Spec pseudocode definition:
//
// def get_indexed_payload_attestation(state: BeaconState, slot: Slot,
// payload_attestation: PayloadAttestation) -> IndexedPayloadAttestation:
// """
// Return the indexed payload attestation corresponding to ``payload_attestation``.
// """
// attesting_indices = get_payload_attesting_indices(state, slot, payload_attestation)
//
// return IndexedPayloadAttestation(
// attesting_indices=sorted(attesting_indices),
// data=payload_attestation.data,
// signature=payload_attestation.signature,
// )
func GetIndexedPayloadAttestation(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot, att *eth.PayloadAttestation) (*epbs.IndexedPayloadAttestation, error) {
if state.Version() < version.EPBS {
return nil, errPreEPBSState
}
attestingIndices, err := GetPayloadAttestingIndices(ctx, state, slot, att)
if err != nil {
return nil, err
}
slices.Sort(attestingIndices)
return &epbs.IndexedPayloadAttestation{
AttestingIndices: attestingIndices,
Data: att.Data,
Signature: att.Signature,
}, nil
}
// IsValidIndexedPayloadAttestation validates the given IndexedPayloadAttestation.
//
// Spec pseudocode definition:
//
// def is_valid_indexed_payload_attestation(
// state: BeaconState,
// indexed_payload_attestation: IndexedPayloadAttestation) -> bool:
// """
// Check if ``indexed_payload_attestation`` is not empty, has sorted and unique indices and has
// a valid aggregate signature.
// """
// # Verify the data is valid
// if indexed_payload_attestation.data.payload_status >= PAYLOAD_INVALID_STATUS:
// return False
//
// # Verify indices are sorted and unique
// indices = indexed_payload_attestation.attesting_indices
// if len(indices) == 0 or not indices == sorted(set(indices)):
// return False
//
// # Verify aggregate signature
// pubkeys = [state.validators[i].pubkey for i in indices]
// domain = get_domain(state, DOMAIN_PTC_ATTESTER, None)
// signing_root = compute_signing_root(indexed_payload_attestation.data, domain)
// return bls.FastAggregateVerify(pubkeys, signing_root, indexed_payload_attestation.signature)
func IsValidIndexedPayloadAttestation(state state.ReadOnlyBeaconState, att *epbs.IndexedPayloadAttestation) (bool, error) {
if state.Version() < version.EPBS {
return false, errPreEPBSState
}
// Verify the data is valid.
if att.Data.PayloadStatus >= primitives.PAYLOAD_INVALID_STATUS {
return false, nil
}
// Verify indices are non-empty, sorted, and unique (strictly increasing),
// matching the spec's `sorted(set(indices))` check.
indices := att.AttestingIndices
if len(indices) == 0 {
return false, nil
}
for i := 1; i < len(indices); i++ {
if indices[i-1] >= indices[i] {
return false, nil
}
}
// Verify aggregate signature.
publicKeys := make([]bls.PublicKey, len(indices))
for i, index := range indices {
validator, err := state.ValidatorAtIndexReadOnly(index)
if err != nil {
return false, err
}
publicKeyBytes := validator.PublicKey()
publicKey, err := bls.PublicKeyFromBytes(publicKeyBytes[:])
if err != nil {
return false, err
}
publicKeys[i] = publicKey
}
domain, err := signing.Domain(
state.Fork(),
slots.ToEpoch(state.Slot()),
params.BeaconConfig().DomainPTCAttester,
state.GenesisValidatorsRoot(),
)
if err != nil {
return false, err
}
signingRoot, err := signing.ComputeSigningRoot(att.Data, domain)
if err != nil {
return false, err
}
signature, err := bls.SignatureFromBytes(att.Signature)
if err != nil {
return false, err
}
return signature.FastAggregateVerify(publicKeys, signingRoot), nil
}
// ValidatePayloadAttestationMessageSignature verifies the signature of a
// payload attestation message.
func ValidatePayloadAttestationMessageSignature(ctx context.Context, st state.ReadOnlyBeaconState, msg *eth.PayloadAttestationMessage) error {
if err := ValidateNilPayloadAttestationMessage(msg); err != nil {
return err
}
val, err := st.ValidatorAtIndex(msg.ValidatorIndex)
if err != nil {
return err
}
pub, err := bls.PublicKeyFromBytes(val.PublicKey)
if err != nil {
return err
}
sig, err := bls.SignatureFromBytes(msg.Signature)
if err != nil {
return err
}
currentEpoch := slots.ToEpoch(st.Slot())
domain, err := signing.Domain(st.Fork(), currentEpoch, params.BeaconConfig().DomainPTCAttester, st.GenesisValidatorsRoot())
if err != nil {
return err
}
root, err := signing.ComputeSigningRoot(msg.Data, domain)
if err != nil {
return err
}
if !sig.Verify(pub, root[:]) {
return signing.ErrSigFailedToVerify
}
return nil
}

View File

@@ -1,363 +0,0 @@
package helpers_test
import (
"context"
"slices"
"strconv"
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/epbs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/rand"
"github.com/prysmaticlabs/prysm/v5/math"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func TestValidateNilPayloadAttestation(t *testing.T) {
require.ErrorIs(t, helpers.ErrNilData, helpers.ValidateNilPayloadAttestationData(nil))
data := &eth.PayloadAttestationData{}
require.ErrorIs(t, helpers.ErrNilBeaconBlockRoot, helpers.ValidateNilPayloadAttestationData(data))
data.BeaconBlockRoot = make([]byte, 32)
require.NoError(t, helpers.ValidateNilPayloadAttestationData(data))
require.ErrorIs(t, helpers.ErrNilMessage, helpers.ValidateNilPayloadAttestationMessage(nil))
message := &eth.PayloadAttestationMessage{}
require.ErrorIs(t, helpers.ErrNilSignature, helpers.ValidateNilPayloadAttestationMessage(message))
message.Signature = make([]byte, 96)
require.ErrorIs(t, helpers.ErrNilData, helpers.ValidateNilPayloadAttestationMessage(message))
message.Data = data
require.NoError(t, helpers.ValidateNilPayloadAttestationMessage(message))
require.ErrorIs(t, helpers.ErrNilPayloadAttestation, helpers.ValidateNilPayloadAttestation(nil))
att := &eth.PayloadAttestation{}
require.ErrorIs(t, helpers.ErrNilAggregationBits, helpers.ValidateNilPayloadAttestation(att))
att.AggregationBits = bitfield.NewBitvector512()
require.ErrorIs(t, helpers.ErrNilSignature, helpers.ValidateNilPayloadAttestation(att))
att.Signature = message.Signature
require.ErrorIs(t, helpers.ErrNilData, helpers.ValidateNilPayloadAttestation(att))
att.Data = data
require.NoError(t, helpers.ValidateNilPayloadAttestation(att))
}

func TestGetPayloadTimelinessCommittee(t *testing.T) {
    helpers.ClearCache()

    // Create 10 committees
    committeeCount := uint64(10)
    validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
    validators := make([]*ethpb.Validator, validatorCount)
    for i := 0; i < len(validators); i++ {
        k := make([]byte, 48)
        copy(k, strconv.Itoa(i))
        validators[i] = &ethpb.Validator{
            PublicKey:             k,
            WithdrawalCredentials: make([]byte, 32),
            ExitEpoch:             params.BeaconConfig().FarFutureEpoch,
        }
    }
    state, err := state_native.InitializeFromProtoEpbs(random.BeaconState(t))
    require.NoError(t, err)
    require.NoError(t, state.SetValidators(validators))
    require.NoError(t, state.SetSlot(200))

    ctx := context.Background()
    indices, err := helpers.BeaconCommitteeFromState(ctx, state, state.Slot(), 1)
    require.NoError(t, err)
    require.Equal(t, 128, len(indices))

    epoch := slots.ToEpoch(state.Slot())
    activeCount, err := helpers.ActiveValidatorCount(ctx, state, epoch)
    require.NoError(t, err)
    require.Equal(t, uint64(40960), activeCount)

    computedCommitteeCount := helpers.SlotCommitteeCount(activeCount)
    require.Equal(t, committeeCount, computedCommitteeCount)
    committeesPerSlot := math.LargestPowerOfTwo(math.Min(committeeCount, fieldparams.PTCSize))
    require.Equal(t, uint64(8), committeesPerSlot)

    ptc, err := helpers.GetPayloadTimelinessCommittee(ctx, state, state.Slot())
    require.NoError(t, err)
    require.Equal(t, fieldparams.PTCSize, len(ptc))

    committee1, err := helpers.BeaconCommitteeFromState(ctx, state, state.Slot(), 0)
    require.NoError(t, err)
    require.DeepEqual(t, committee1[:64], ptc[:64])
}
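
// Worked numbers for the test above (explanatory note, not part of the original file):
//   validatorCount = committeeCount * TargetCommitteeSize * SlotsPerEpoch
//                  = 10 * 128 * 32 = 40960 active validators
//   SlotCommitteeCount(40960)      = 10 committees per slot
//   committeesPerSlot              = LargestPowerOfTwo(min(10, PTCSize=512)) = 8
//   members drawn per committee    = PTCSize / committeesPerSlot = 512 / 8 = 64,
// which is why the first 64 members of committee 0 line up with ptc[:64].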

func Test_PtcAllocation(t *testing.T) {
    tests := []struct {
        committeeCount     int
        memberPerCommittee uint64
        committeesPerSlot  uint64
    }{
        {1, 512, 1},
        {4, 128, 4},
        {128, 4, 128},
        {512, 1, 512},
        {1024, 1, 512},
    }
    for _, test := range tests {
        committeesPerSlot, memberPerCommittee := helpers.PtcAllocation(test.committeeCount)
        if memberPerCommittee != test.memberPerCommittee {
            t.Errorf("memberPerCommittee(%d) = %d; expected %d", test.committeeCount, memberPerCommittee, test.memberPerCommittee)
        }
        if committeesPerSlot != test.committeesPerSlot {
            t.Errorf("committeesPerSlot(%d) = %d; expected %d", test.committeeCount, committeesPerSlot, test.committeesPerSlot)
        }
    }
}
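
// A minimal sketch of the allocation rule the table above pins down, assuming
// PTC seats are spread over at most PTCSize committees rounded down to a power
// of two (illustrative names; this is not the helpers implementation):
func ptcAllocationSketch(committeeCount uint64) (committeesPerSlot, memberPerCommittee uint64) {
    // Cap the number of participating committees at the PTC size.
    n := committeeCount
    if n > uint64(fieldparams.PTCSize) {
        n = uint64(fieldparams.PTCSize)
    }
    // Round down to a power of two so the seats divide evenly.
    committeesPerSlot = 1
    for committeesPerSlot*2 <= n {
        committeesPerSlot *= 2
    }
    // Split the fixed number of PTC seats across the chosen committees.
    memberPerCommittee = uint64(fieldparams.PTCSize) / committeesPerSlot
    return committeesPerSlot, memberPerCommittee
}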

func TestGetPayloadAttestingIndices(t *testing.T) {
    helpers.ClearCache()

    // Create 10 committees. Total 40960 validators.
    committeeCount := uint64(10)
    validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
    validators := make([]*ethpb.Validator, validatorCount)
    for i := 0; i < len(validators); i++ {
        pubkey := make([]byte, 48)
        copy(pubkey, strconv.Itoa(i))
        validators[i] = &ethpb.Validator{
            PublicKey:             pubkey,
            WithdrawalCredentials: make([]byte, 32),
            ExitEpoch:             params.BeaconConfig().FarFutureEpoch,
        }
    }

    // Create a beacon state.
    state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
        Validators:  validators,
        RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
    })
    require.NoError(t, err)

    // Get PTC.
    ptc, err := helpers.GetPayloadTimelinessCommittee(context.Background(), state, state.Slot())
    require.NoError(t, err)
    require.Equal(t, fieldparams.PTCSize, len(ptc))

    // Generate random indices. PTC members at the corresponding indices are considered attested.
    randGen := rand.NewDeterministicGenerator()
    attesterCount := randGen.Intn(fieldparams.PTCSize) + 1
    indices := randGen.Perm(fieldparams.PTCSize)[:attesterCount]
    slices.Sort(indices)
    require.Equal(t, attesterCount, len(indices))

    // Create a PayloadAttestation with AggregationBits set true at the indices.
    aggregationBits := bitfield.NewBitvector512()
    for _, index := range indices {
        aggregationBits.SetBitAt(uint64(index), true)
    }
    payloadAttestation := &eth.PayloadAttestation{
        AggregationBits: aggregationBits,
        Data: &eth.PayloadAttestationData{
            BeaconBlockRoot: make([]byte, 32),
        },
        Signature: make([]byte, 96),
    }

    // Get attesting indices.
    attesters, err := helpers.GetPayloadAttestingIndices(context.Background(), state, state.Slot(), payloadAttestation)
    require.NoError(t, err)
    require.Equal(t, len(indices), len(attesters))

    // Check that each attester equals the PTC member at the corresponding index.
    for i, index := range indices {
        require.Equal(t, attesters[i], ptc[index])
    }
}
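
// A minimal sketch of the rule this test exercises, mirroring the consensus-spec
// definition: walk the PTC in order and keep each member whose aggregation bit
// is set (illustrative; not the helpers implementation):
func payloadAttestingIndicesSketch(ptc []primitives.ValidatorIndex, bits bitfield.Bitvector512) []primitives.ValidatorIndex {
    attested := make([]primitives.ValidatorIndex, 0, len(ptc))
    for i, member := range ptc {
        if bits.BitAt(uint64(i)) {
            attested = append(attested, member)
        }
    }
    return attested
}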

func TestGetIndexedPayloadAttestation(t *testing.T) {
    helpers.ClearCache()

    // Create 10 committees. Total 40960 validators.
    committeeCount := uint64(10)
    validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
    validators := make([]*ethpb.Validator, validatorCount)
    for i := 0; i < len(validators); i++ {
        publicKey := make([]byte, 48)
        copy(publicKey, strconv.Itoa(i))
        validators[i] = &ethpb.Validator{
            PublicKey:             publicKey,
            WithdrawalCredentials: make([]byte, 32),
            ExitEpoch:             params.BeaconConfig().FarFutureEpoch,
        }
    }

    // Create a beacon state.
    state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
        Validators:  validators,
        RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
    })
    require.NoError(t, err)

    // Get PTC.
    ptc, err := helpers.GetPayloadTimelinessCommittee(context.Background(), state, state.Slot())
    require.NoError(t, err)
    require.Equal(t, fieldparams.PTCSize, len(ptc))

    // Generate random indices. PTC members at the corresponding indices are considered attested.
    randGen := rand.NewDeterministicGenerator()
    attesterCount := randGen.Intn(fieldparams.PTCSize) + 1
    indices := randGen.Perm(fieldparams.PTCSize)[:attesterCount]
    slices.Sort(indices)
    require.Equal(t, attesterCount, len(indices))

    // Create a PayloadAttestation with AggregationBits set true at the indices.
    aggregationBits := bitfield.NewBitvector512()
    for _, index := range indices {
        aggregationBits.SetBitAt(uint64(index), true)
    }
    payloadAttestation := &eth.PayloadAttestation{
        AggregationBits: aggregationBits,
        Data: &eth.PayloadAttestationData{
            BeaconBlockRoot: make([]byte, 32),
        },
        Signature: make([]byte, 96),
    }

    // Get attesting indices.
    ctx := context.Background()
    attesters, err := helpers.GetPayloadAttestingIndices(ctx, state, state.Slot(), payloadAttestation)
    require.NoError(t, err)
    require.Equal(t, len(indices), len(attesters))

    // Get an IndexedPayloadAttestation.
    indexedPayloadAttestation, err := helpers.GetIndexedPayloadAttestation(ctx, state, state.Slot(), payloadAttestation)
    require.NoError(t, err)
    require.Equal(t, len(indices), len(indexedPayloadAttestation.AttestingIndices))
    require.DeepEqual(t, payloadAttestation.Data, indexedPayloadAttestation.Data)
    require.DeepEqual(t, payloadAttestation.Signature, indexedPayloadAttestation.Signature)

    // Check if the attesting indices are the same.
    slices.Sort(attesters) // GetIndexedPayloadAttestation sorts attesting indices.
    require.DeepEqual(t, attesters, indexedPayloadAttestation.AttestingIndices)
}
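
// A minimal sketch of the conversion this test exercises: an indexed payload
// attestation pairs the ascending list of attesting validator indices with the
// original data and signature (illustrative; not the helpers implementation):
func indexedPayloadAttestationSketch(attesters []primitives.ValidatorIndex, att *eth.PayloadAttestation) *epbs.IndexedPayloadAttestation {
    sorted := make([]primitives.ValidatorIndex, len(attesters))
    copy(sorted, attesters)
    slices.Sort(sorted) // GetIndexedPayloadAttestation returns indices in ascending order.
    return &epbs.IndexedPayloadAttestation{
        AttestingIndices: sorted,
        Data:             att.Data,
        Signature:        att.Signature,
    }
}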

func TestIsValidIndexedPayloadAttestation(t *testing.T) {
    helpers.ClearCache()

    // Create validators.
    validatorCount := uint64(350)
    validators := make([]*ethpb.Validator, validatorCount)
    _, secretKeys, err := util.DeterministicDepositsAndKeys(validatorCount)
    require.NoError(t, err)
    for i := 0; i < len(validators); i++ {
        validators[i] = &ethpb.Validator{
            PublicKey:             secretKeys[i].PublicKey().Marshal(),
            WithdrawalCredentials: make([]byte, 32),
            ExitEpoch:             params.BeaconConfig().FarFutureEpoch,
        }
    }

    // Create a beacon state.
    state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
        Validators: validators,
        Fork: &ethpb.Fork{
            Epoch:           0,
            CurrentVersion:  params.BeaconConfig().GenesisForkVersion,
            PreviousVersion: params.BeaconConfig().GenesisForkVersion,
        },
        RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
    })
    require.NoError(t, err)

    // Define test cases.
    tests := []struct {
        attestation *epbs.IndexedPayloadAttestation
    }{
        {
            attestation: &epbs.IndexedPayloadAttestation{
                AttestingIndices: []primitives.ValidatorIndex{1},
                Data: &eth.PayloadAttestationData{
                    BeaconBlockRoot: make([]byte, fieldparams.RootLength),
                },
                Signature: make([]byte, fieldparams.BLSSignatureLength),
            },
        },
        {
            attestation: &epbs.IndexedPayloadAttestation{
                AttestingIndices: []primitives.ValidatorIndex{13, 19},
                Data: &eth.PayloadAttestationData{
                    BeaconBlockRoot: make([]byte, fieldparams.RootLength),
                },
                Signature: make([]byte, fieldparams.BLSSignatureLength),
            },
        },
        {
            attestation: &epbs.IndexedPayloadAttestation{
                AttestingIndices: []primitives.ValidatorIndex{123, 234, 345},
                Data: &eth.PayloadAttestationData{
                    BeaconBlockRoot: make([]byte, fieldparams.RootLength),
                },
                Signature: make([]byte, fieldparams.BLSSignatureLength),
            },
        },
        {
            attestation: &epbs.IndexedPayloadAttestation{
                AttestingIndices: []primitives.ValidatorIndex{38, 46, 54, 62, 70, 78, 86, 194},
                Data: &eth.PayloadAttestationData{
                    BeaconBlockRoot: make([]byte, fieldparams.RootLength),
                },
                Signature: make([]byte, fieldparams.BLSSignatureLength),
            },
        },
        {
            attestation: &epbs.IndexedPayloadAttestation{
                AttestingIndices: []primitives.ValidatorIndex{5},
                Data: &eth.PayloadAttestationData{
                    BeaconBlockRoot: make([]byte, fieldparams.RootLength),
                },
                Signature: make([]byte, fieldparams.BLSSignatureLength),
            },
        },
    }

    // Run test cases.
    for _, test := range tests {
        signatures := make([]bls.Signature, len(test.attestation.AttestingIndices))
        for i, index := range test.attestation.AttestingIndices {
            signedBytes, err := signing.ComputeDomainAndSign(
                state,
                slots.ToEpoch(test.attestation.Data.Slot),
                test.attestation.Data,
                params.BeaconConfig().DomainPTCAttester,
                secretKeys[index],
            )
            require.NoError(t, err)
            signature, err := bls.SignatureFromBytes(signedBytes)
            require.NoError(t, err)
            signatures[i] = signature
        }
        aggregatedSignature := bls.AggregateSignatures(signatures)
        test.attestation.Signature = aggregatedSignature.Marshal()

        isValid, err := helpers.IsValidIndexedPayloadAttestation(state, test.attestation)
        require.NoError(t, err)
        require.Equal(t, true, isValid)
    }
}
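
// A minimal sketch of the verification path this test drives, matching the
// FastAggregateVerify tail shown earlier in this diff: gather the public keys
// of the attesting indices, compute the PTC-attester signing root, and verify
// the aggregate signature (illustrative; not the helpers implementation):
func isValidIndexedPayloadAttestationSketch(st state.ReadOnlyBeaconState, att *epbs.IndexedPayloadAttestation) (bool, error) {
    publicKeys := make([]bls.PublicKey, 0, len(att.AttestingIndices))
    for _, idx := range att.AttestingIndices {
        val, err := st.ValidatorAtIndex(idx)
        if err != nil {
            return false, err
        }
        pub, err := bls.PublicKeyFromBytes(val.PublicKey)
        if err != nil {
            return false, err
        }
        publicKeys = append(publicKeys, pub)
    }
    domain, err := signing.Domain(st.Fork(), slots.ToEpoch(att.Data.Slot), params.BeaconConfig().DomainPTCAttester, st.GenesisValidatorsRoot())
    if err != nil {
        return false, err
    }
    signingRoot, err := signing.ComputeSigningRoot(att.Data, domain)
    if err != nil {
        return false, err
    }
    sig, err := bls.SignatureFromBytes(att.Signature)
    if err != nil {
        return false, err
    }
    return sig.FastAggregateVerify(publicKeys, signingRoot), nil
}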

Some files were not shown because too many files have changed in this diff.