Compare commits

...

200 Commits

Author SHA1 Message Date
Raul Jordan
f739bc0181 Merge branch 'develop' into revert-12704-proposer-verify-attestation 2023-08-29 10:15:05 -07:00
Preston Van Loon
63126fd51f Holesky support (#12821)
* Add holesky config


Copy config/params from deneb-integration

* Add --holesky flag

* Handle no genesis block error and suggest to user a workaround
2023-08-29 14:27:50 +00:00
Delweng
0ea0afb494 beacon-chain/execution: use pointer instead of value to reduce copy (#12818)
* beacon-chain/execution: use pointer instead of value to reduce value copy

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-chain/execution: fix unit test

Signed-off-by: jsvisa <delweng@gmail.com>

---------

Signed-off-by: jsvisa <delweng@gmail.com>
2023-08-29 13:24:54 +00:00
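
The pattern behind the commit above, as a minimal Go sketch: passing a pointer hands the callee an 8-byte address instead of duplicating the whole struct on every call. The header type here is illustrative, not the actual Prysm execution type.

package main

import "fmt"

// header stands in for a large execution-layer struct; every
// value-parameter use copies all of it.
type header struct {
	hash   [32]byte
	number uint64
	extra  [256]byte
}

// byValue copies the entire ~300-byte header on each call.
func byValue(h header) uint64 { return h.number }

// byPointer copies only an 8-byte pointer, whatever the struct size.
func byPointer(h *header) uint64 { return h.number }

func main() {
	h := header{number: 42}
	fmt.Println(byValue(h), byPointer(&h))
}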
terencechain
2f1764f6c6 Revert "fix(proposer): verify attestations without mutating state (#12704)"
This reverts commit 09d761e1ab.
2023-08-28 09:28:20 -07:00
m3diumrare
286f6a9931 Minor typo in CLI flag (#12808)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-25 21:14:08 +00:00
Dhruv Bodani
4ef29a24e4 Add REST implementation of node version endpoint (#12809) 2023-08-25 13:14:08 +00:00
Dhruv Bodani
30eaddf48f Add support for validator count endpoint (#12752) 2023-08-24 23:11:41 +02:00
Nishant Das
bcf728b9ff Fix Sepolia Version (#12792) 2023-08-24 10:07:41 +00:00
terencechain
6f6c06a95b chore: remove unused proto import (#12787) 2023-08-24 01:09:06 +00:00
Preston Van Loon
5433502055 Add deneb config/params from deneb-integration (#12783)
Co-authored-by: Terence Tsao <terence@prysmaticlabs.com>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-08-23 20:07:25 +00:00
Radosław Kapka
f0d54254ed HTTP implementation of voluntary exit pool endpoints (#12777)
* impl

* protos

* tests

* review

* test fix
2023-08-23 07:51:03 +00:00
james-prysm
da244c9e9a adding register validator route (#12776) 2023-08-22 15:44:00 +00:00
Sammy Rosso
e49f1321b7 HTTP Beacon API: /eth/v1/beacon/blocks/{block_id}/root (#12716)
* Initial setup

* Fix all tests and handler func

* Cleanup

* Fix the tests

* Remove middleware endpoint

* Add endpoint

* Switch query param to path

* Fix e2e test

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Radek & James Reviews

* Check blockId length

* Add length check

* Fix

* Revert "Fix e2e test"

This reverts commit 6289c1de5c.

* use v1alpha1 server to get root

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-22 06:18:02 +00:00
Raul Jordan
5eb8b88073 Update Fuzz List Timeout Github Workflow (#12768) 2023-08-22 04:48:13 +00:00
james-prysm
b0fc368185 local-block-value-boost flag from float64 to uint64 [BREAKING CHANGE] (#12765) 2023-08-21 22:34:34 +00:00
Raul Jordan
777ca06b78 Add Github Go Fuzzing Workflow With Cron Schedule (#12756)
* add in github workflow for fuzzing that runs with cron

* every day

* go version

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-21 16:09:51 +00:00
james-prysm
4d520460e0 REST implementation of /eth/v1/validator/register_validator (#12758)
* migrating v2 to http native url

* removing old code

* gaz

* updating beacon api validator client code

* fixing tests
2023-08-21 15:42:43 +00:00
Potuz
27d8b1c358 add more descriptive log on FFG-LMD consistency (#12763)
* add more descriptive log on FFG-LMD consistency

* Update beacon-chain/blockchain/receive_attestation.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* Update beacon-chain/blockchain/receive_attestation.go

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-08-21 14:06:59 +00:00
terencechain
17500dcfca Fix: download genesis log (#12759) 2023-08-20 14:46:40 -07:00
Radosław Kapka
0452fd02e8 HTTP Beacon API: /pool/attestations (#12735)
* attestations

* post

* tests

* Revert "Auxiliary commit to revert individual files from afede4d949a7519902be2f1e0c485306c4ccdea7"

This reverts commit 9de74879e0c41e43183da2fa7e63094cac030abe.

* remove test

* remove redundant return

* Update beacon-chain/rpc/eth/beacon/handlers_pool.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* review

* include index in broadcast log

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-18 14:29:40 +00:00
terencechain
09d761e1ab fix(proposer): verify attestations without mutating state (#12704)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-17 16:37:19 +00:00
Potuz
9a741c52d1 Remove quadratic loops when exiting (#12737)
* Remove quadratic loops when exiting

* unit test

* fix tests

* lint

* duplicated imports

* handle new error

* revert spurious unit test

* radek's review

* fix name

* use errors.Is

* fix error check

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-08-17 15:38:46 +00:00
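
The generic shape of a fix like the one above, sketched under assumptions (the helper and data are illustrative, not the actual exit-processing code): precompute an index in one pass so the per-validator inner scan disappears.

package main

import "fmt"

// exitEpochCounts replaces a nested scan (for each exiting validator,
// loop over every validator again) with a single pass, turning the
// O(n^2) loop into O(n) plus map lookups.
func exitEpochCounts(exitEpochs []uint64) map[uint64]int {
	counts := make(map[uint64]int, len(exitEpochs))
	for _, e := range exitEpochs {
		counts[e]++
	}
	return counts
}

func main() {
	fmt.Println(exitEpochCounts([]uint64{5, 5, 7})) // map[5:2 7:1]
}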
Sammy Rosso
baed8da9c0 HTTP Beacon API: /eth/v1/validator/sync_committee_contribution (#12698)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-08-17 12:01:24 +02:00
Matthew Obwaya
33410a0ec1 Remove uses of deprecated go_embed rules_go (#12719)
* Remove prysm_web_ui http_archive from WORKSPACE

* Remove go_embed_data_dependencies() from WORKSPACE

* Remove go_embed_data target from validator/web/BUILD.bazel

* Remove # gazelle:ignore site_data.go annotation from validator/web/BUILD.bazel

* Run bazel run //:gazelle

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-08-16 15:47:09 +00:00
Nishant Das
cda1797d4e Remove Alpine Images From Prysm (#12749) 2023-08-16 14:01:09 +00:00
Potuz
e7f6048b8c Use last optimistic status on batches (#12741)
* Use last optimistic status on batches

* more descriptive errors

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-16 01:40:36 +00:00
Potuz
418959565f set optimistic status in head at init sync (#12748) 2023-08-16 00:47:56 +00:00
Raul Jordan
8229f3eb84 Prysm Web UI Release v2.0.4 (#12746)
Co-authored-by: james-prysm <james-prysm@users.noreply.github.com>
2023-08-15 21:11:44 +00:00
Nishant Das
4098d098aa Shift Error Logs To Debug (#12739) 2023-08-15 14:44:54 +00:00
anukul
46c72798c7 HTTP Beacon API: /eth/v1/validator/attestation_data (#12634)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-08-14 17:56:36 +03:00
Preston Van Loon
a5474200de Update geth to 1.12.2 (#12731) 2023-08-14 17:48:11 +08:00
Preston Van Loon
a85b4445fc Update bazel to 6.3.2 (#12725)
* Update bazel version to v6.3.2

* Disable legacy external runfile generation for CI

* Revert "Disable legacy external runfile generation for CI"

This reverts commit 1911f293d6.

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-11 19:08:33 +00:00
Preston Van Loon
751dd847b8 Update rules go & gazelle (#12721)
* Update rules_go to v0.40.0

* Update gazelle to v0.26.0

* Update gazelle to v0.27.0

* Update gazelle to v0.28.0

* Update gazelle to v0.29.0

* Update gazelle to v0.30.0

* Update gazelle to v0.31.0
2023-08-11 16:39:35 +00:00
Preston Van Loon
aeb7a45864 Update geth to 1.12.1 (#12718)
* Update geth to 1.12.1

* Fix //cmd/validator/flags:go_default_test

* fix //beacon-chain/node:go_default_test

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-08-11 10:45:42 +00:00
Radosław Kapka
e952fd802b HTTP Beacon API: /eth/v1/validator/beacon_committee_subscriptions (#12700)
* HTTP Beacon API: `/eth/v1/validator/contribution_and_proofs`

* add comment to invalid test case

* fix validation and test

* review

* in progress

* implementation

* remove test file

* remove duplicate

* tests

* HTTP Beacon API: `/eth/v1/validator/sync_committee_subscriptions`

* pointers, pointers everywhere

* fix

* fix after merge

* implementation

* tests

* test fixes

* review

* clear cache in validator tests

* add missing stuff

* compilation fix

* remove `required` from bool

* remove time fetcher from tests

* no need to convert

* add conversion

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-10 18:00:19 +00:00
Nishant Das
b511eef848 Update BLST to v0.3.11 (#12717)
* add changes

* Remove server.c exclusion in headers

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-08-10 22:51:19 +08:00
dependabot[bot]
7aa043892b Bump github.com/libp2p/go-libp2p from 0.27.5 to 0.27.8 (#12709)
* Bump github.com/libp2p/go-libp2p from 0.27.5 to 0.27.8

Bumps [github.com/libp2p/go-libp2p](https://github.com/libp2p/go-libp2p) from 0.27.5 to 0.27.8.
- [Release notes](https://github.com/libp2p/go-libp2p/releases)
- [Changelog](https://github.com/libp2p/go-libp2p/blob/master/CHANGELOG.md)
- [Commits](https://github.com/libp2p/go-libp2p/compare/v0.27.5...v0.27.8)

---
updated-dependencies:
- dependency-name: github.com/libp2p/go-libp2p
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* gazelle and remove libp2p patch. The patch is merged upstream as of this version

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-08-09 17:24:07 +00:00
Radosław Kapka
36be057a11 HTTP Beacon API: /eth/v1/node/syncing (#12706)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-09 17:23:59 +02:00
Radosław Kapka
049e608c75 HTTP Beacon API: /eth/v1/validator/sync_committee_subscriptions (#12689)
* HTTP Beacon API: `/eth/v1/validator/contribution_and_proofs`

* add comment to invalid test case

* fix validation and test

* review

* in progress

* implementation

* remove test file

* remove duplicate

* tests

* HTTP Beacon API: `/eth/v1/validator/sync_committee_subscriptions`

* pointers, pointers everywhere

* fix

* fix after merge

* review

* James' review

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-08 22:54:04 +00:00
Preston Van Loon
4541598850 Update go to 1.20.7 (#12707) 2023-08-08 21:39:22 +00:00
Sammy Rosso
8c08854dd0 Fix prysmctl writing empty JSON/YAML files (#12599)
* Setup

* add in marshalable

* Cleanup

* Raul' review

* Remove yaml Marshaler

* minimal fixes

* same imports

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-08 15:20:41 +00:00
m3diumrare
d2ff995eb2 Fix reported effective balance for unknown/pending validators (#12693)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-08-07 09:19:02 -05:00
anukul
56a0315dde fix: set CoreService in beaconv1alpha1.Server (#12702) 2023-08-06 18:16:54 -07:00
anukul
634133fedc use struct in beacon-chain/rpc/core to store dependencies (#12701) 2023-08-05 22:54:12 +02:00
terencechain
c1c1b7ecfa Feat: aggregate parallel default (#12699) 2023-08-04 16:03:10 +00:00
Nishant Das
9a4670ec64 add changes (#12697)
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-08-04 22:05:47 +08:00
Radosław Kapka
a664a07303 Multi Value Slice (#12616)
* multi value slice

* extract helper function

* comments

* setup godoc fix

* value benchmarks

* use guid

* fix bug when deleting items

* remove callback and rename MultiValue

* godoc

* tiny change

* Nishant's review

* typos

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-08-04 12:42:54 +00:00
Potuz
dd14d5cef0 refactor slot tickers with intervals (#12440)
* refactor slot tickers with intervals

* GoDoc

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-08-03 20:55:16 -03:00
Radosław Kapka
3a09405bb7 HTTP Beacon API: /eth/v1/validator/aggregate_and_proofs (#12686)
* HTTP Beacon API: `/eth/v1/validator/contribution_and_proofs`

* add comment to invalid test case

* fix validation and test

* review

* in progress

* implementation

* remove test file

* remove duplicate

* tests

* review

* test fixes

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-03 22:24:23 +00:00
terencechain
cb59081887 Remove: span for convert to indexed attestation (#12687)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-03 16:17:46 +00:00
Nishant Das
a820d4dcc8 Minor Optimization on InnerShuffleList (#12690)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-03 06:56:07 +00:00
terencechain
dbc17cf2ca Fix: use correct context for UpdateCommitteeCache (#12691) 2023-08-03 06:00:58 +00:00
terencechain
d38762772a test: add execution payload operation tests (#12685)
* test: add execution payload operation tests

* thanks!

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Update testing/spectest/shared/bellatrix/operations/execution_payload.go

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-02 08:16:46 +00:00
Potuz
e5d1eb885d Update caches blocking (#12679)
* update NSC together with epoch boundary caches

* block when updating caches

* reviews

* removal of very useful helper because the reviewers requested it :)

* use IsEpochEnd
2023-08-02 05:59:09 +00:00
terencechain
a9d7701081 test: add missing random and fork transition spec tests (#12681)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-01 14:08:40 +00:00
Bharath Vedartham
33abe6eb90 add metric to check if validator is in next sync commitee (#12650)
* add metric to check if validator is in next sync commitee

* Update validator/client/validator.go

* Update validator/client/metrics.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2023-08-01 08:25:10 +00:00
terencechain
c342c9a14e fix: update nil check for new validator (#12677) 2023-08-01 04:13:57 +00:00
Radosław Kapka
a9b003e1fe HTTP Beacon API: /eth/v1/validator/contribution_and_proofs (#12660)
* HTTP Beacon API: `/eth/v1/validator/contribution_and_proofs`

* add comment to invalid test case

* fix validation and test

* review

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-07-31 17:32:39 +00:00
Radosław Kapka
955175b7eb Update server-side events dependency (#12676)
* Update server-side events dependency

* go sum

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-31 15:51:19 +00:00
Potuz
c17682940e only handle epoch boundary on canonical blocks (#12666)
* only handle epoch boundary on canonical blocks

* do not fetch headstate
2023-07-31 12:07:09 -03:00
Nishant Das
db450f53a4 Fix Update Of Committee Cache (#12668)
* fix it

* sammy's comment

* fix tests

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-28 13:34:03 +00:00
terencechain
493905ee9e feat: add state regren duration metric (#12672) 2023-07-27 19:33:25 +00:00
james-prysm
e449724034 Bugfix: proposer-settings edge case for activating validators (#12671) 2023-07-27 16:45:16 +00:00
terencechain
a44c209be0 feat: add metric for block gossip time (#12670) 2023-07-27 15:17:24 +00:00
terencechain
183e72b194 fix: update epoch + 1 (#12667) 2023-07-27 18:53:36 +08:00
Potuz
337c254161 update shuffling caches at epoch boundary (#12661)
* update shuffling caches at epoch boundary

* move span

* do not advance to a past slot
2023-07-26 18:46:18 +00:00
james-prysm
ec60cab2bf Bugfix: Metrics adding fix pending validators balance (#12665) 2023-07-26 16:09:24 +00:00
Potuz
ded00495e7 Parallelize cl el validation (#12590)
* Parallelize EL and CL block validation

* fix by Nishant
2023-07-26 12:23:51 -03:00
Nishant Das
113172d8aa Exit Initial Sync Early (#12659)
* add check

* fix test
2023-07-26 00:38:25 +00:00
Radosław Kapka
2b40c44879 Use the correct root in consensus validation (#12657)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-07-25 12:04:08 +02:00
Radosław Kapka
fc193b09bf HTTP Beacon API: /eth/v1/validator/aggregate_attestation (#12643)
* initial implementation

* it works

* tests

* extracted helper functions

* fix code

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-24 22:16:45 +00:00
Radosław Kapka
a0d53f5155 Register GetValidatorPerformance as POST (#12658) 2023-07-24 21:17:08 +00:00
terencechain
ff3d2bc69f fix: add mainnet withdrawals and bls spec tests (#12655) 2023-07-24 14:13:02 +00:00
Nishant Das
dd403f830c clean up logging (#12653) 2023-07-24 20:29:50 +08:00
Raul Jordan
e9c8e84618 Integrate Read-Only-Lock-on-Get LRU Cache for Public Keys (#12646)
* use new lru cache

* update build

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-22 04:37:53 +00:00
james-prysm
9c250dd4c2 Builder gas limit fix default to 0 in some cases (#12647)
* adding fix for nethermind's findings on gaslimit =0 on some default setups

* adding in default gaslimit check

* fixing linting on complexity

* fixing cognitive complexity linting marker

* fixing unit test and bug with referencing

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-07-22 03:44:50 +00:00
Potuz
f97db3b738 Different parallel hashing (#12639)
* Paralellize hashing of large lists

* add unit test

* add file

* do not parallelize on low processor count

* revert minimal proc count

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-07-21 20:36:20 -04:00
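
A hedged sketch of the technique this commit names: hash a large list in chunks across goroutines, but fall back to a serial loop when few processors are available, since goroutine overhead would then dominate. Function names and the threshold are illustrative, not Prysm's.

package main

import (
	"crypto/sha256"
	"fmt"
	"runtime"
	"sync"
)

// hashListParallel hashes every item, splitting the work across
// GOMAXPROCS workers unless the processor count is below an
// illustrative threshold, mirroring "do not parallelize on low
// processor count" above.
func hashListParallel(items [][]byte) [][32]byte {
	out := make([][32]byte, len(items))
	const minProcs = 4 // assumption, not Prysm's actual cutoff
	workers := runtime.GOMAXPROCS(0)
	if workers < minProcs {
		for i, it := range items {
			out[i] = sha256.Sum256(it)
		}
		return out
	}
	var wg sync.WaitGroup
	per := (len(items) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		start, end := w*per, (w+1)*per
		if start >= len(items) {
			break
		}
		if end > len(items) {
			end = len(items)
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done()
			for i := start; i < end; i++ {
				out[i] = sha256.Sum256(items[i]) // disjoint indices, no lock needed
			}
		}(start, end)
	}
	wg.Wait()
	return out
}

func main() {
	items := [][]byte{[]byte("a"), []byte("b"), []byte("c")}
	fmt.Printf("%x\n", hashListParallel(items)[0][:4])
}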
Radosław Kapka
43378ae8d5 Use BlockProcessed event in Beacon API (#12625)
* Use `BlockProcessed` event in Beacon API

* log error

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-20 17:26:34 +00:00
Radosław Kapka
2217b45e16 Return historical roots in Capella state (#12642)
* Return historical roots in Capella state

* test fix

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-20 17:00:05 +00:00
james-prysm
405cd6ed86 Publish blockv2 ssz (#12636)
* wip produce block v2 ssz

* adding functions for ssz processing

* fixing linting

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing review feedback

* fixing linting

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-07-20 16:26:40 +00:00
terencechain
ba9bbdd6b9 Style: minor cleanups to blockchain pkg (#12640)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-20 10:17:35 +00:00
Nishant Das
945c76132c Add Allocation Data To Benchmark (#12641)
* add more data

* terence's review
2023-07-20 08:49:17 +00:00
Radosław Kapka
056d3ff0cc Fix GetValidatorPerformance endpoint (#12638) 2023-07-19 15:29:07 +00:00
Nishant Das
d4fd3c34de Use GetPayloadBodies in our Engine Client (#12630)
* add it to our engine client

* lint

* add better debugging info

* temp debugging

* fix

* fix it

* pretty

* dumb bug

* Revert "pretty"

This reverts commit 6a6df3cc5f.

* Revert "fix it"

This reverts commit 73dc617bb0.

* Revert "fix"

This reverts commit 3aecdaac6d.

* Revert "temp debugging"

This reverts commit ffcd2c61a0.

* Revert "add better debugging info"

This reverts commit 96184e8567.

* raul's comment

* regression test

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-07-18 15:46:23 +00:00
Sammy Rosso
4ac4d00377 Implement GetValidatorPerformance beacon chain client (#12581)
* Add http endpoint for GetValidatorPerformance

* Add tests

* fix up client usage

* Revert changes

* Implement GetValidatorPerformance

* refactor to reuse code

* Move endpoint + move ComputeValidatorPerformance

* Radek's comment change

* Add Bazel file

* Change endpoint path

* Add server for http endpoints

* Fix server

* Create core package

* Gaz

* Update + add broken test

* Gaz

* fix

* Fix test

* Create const for endpoint

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-18 08:57:54 +00:00
Radosław Kapka
ec2fda7ad9 getSyncCommitteeRewards API endpoint (#12633) 2023-07-18 10:31:15 +02:00
Preston Van Loon
292f4de099 Update hermetic_cc_toolchain (#12631) 2023-07-17 14:50:45 +00:00
terencechain
145a485b75 fix(pcli): use state trie for HTR duration (#12629) 2023-07-15 20:39:58 -07:00
terencechain
7e474b7a30 fix: benchmark deserialize without clone and init trie (#12626)
* fix: benchmark deserialize without clone

* remove tree initialization

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-15 12:35:58 +00:00
Nishant Das
af0ee9bd16 use read-only validators (#12628) 2023-07-15 00:34:45 -07:00
Radosław Kapka
456ba7c498 Fix comments when receiving block (#12624) 2023-07-14 10:30:02 +00:00
Potuz
cd8847c53b Add deserialization time in pcli benchmark (#12620) 2023-07-13 12:19:10 +00:00
Kaushal Kumar Singh
1894a124ea Fix: Size of SyncCommitteeBits should be 64 bytes (512 bits) instead of 512 bytes (#12586)
* Fix: Size of SyncCommitteeBits should be 64 bytes (512 bits) instead of 512 bytes

* Updated unit test
2023-07-13 10:43:17 +00:00
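
The arithmetic behind this fix: SyncCommitteeBits is a bitvector with one bit per sync-committee member, so 512 bits occupy 512/8 = 64 bytes. A small Go sketch (an illustrative type, not the actual generated one):

package main

import "fmt"

const syncCommitteeSize = 512 // bits: one per committee member

// syncCommitteeBits is 512/8 = 64 bytes long, not 512 bytes as the
// pre-fix size declared.
type syncCommitteeBits [syncCommitteeSize / 8]byte

func (b *syncCommitteeBits) set(i uint)      { b[i/8] |= 1 << (i % 8) }
func (b *syncCommitteeBits) has(i uint) bool { return b[i/8]&(1<<(i%8)) != 0 }

func main() {
	var bits syncCommitteeBits
	bits.set(511)
	fmt.Println(len(bits), bits.has(511)) // 64 true
}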
Preston Van Loon
490bd22b97 Update go version to 1.20.6 (#12617) 2023-07-12 19:55:33 +00:00
terencechain
f23e720a16 fix: add local boost flag to main/usage (#12615)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-12 12:15:25 +00:00
Raul Jordan
402799a584 Threadsafe LRU With Non-Blocking Reads for Concurrent Readers (#12476)
* add nonblocking simple lru

* method

* add in missing tests, fix panic
2023-07-12 17:57:52 +08:00
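
The idea named in this commit and in the "Read-Only-Lock-on-Get LRU" commit further up, as a minimal sketch under assumptions: guard the cache with an RWMutex and have Get take only a read lock by not promoting entries on read, so concurrent readers never serialize. Names and the insertion-order eviction are illustrative; this is not the Prysm implementation.

package main

import (
	"fmt"
	"sync"
)

// roLRU's Get takes only a read lock: reads do not update recency,
// so they run in parallel. Eviction falls back to insertion order,
// a deliberate trade-off for non-blocking reads.
type roLRU struct {
	mu    sync.RWMutex
	max   int
	items map[string][]byte
	order []string // insertion order, oldest first
}

func newROLRU(max int) *roLRU {
	return &roLRU{max: max, items: make(map[string][]byte)}
}

func (c *roLRU) Get(k string) ([]byte, bool) {
	c.mu.RLock() // readers do not block each other
	defer c.mu.RUnlock()
	v, ok := c.items[k]
	return v, ok
}

func (c *roLRU) Add(k string, v []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.items[k]; !ok {
		c.order = append(c.order, k)
	}
	c.items[k] = v
	if len(c.order) > c.max { // evict the oldest insertion
		oldest := c.order[0]
		c.order = c.order[1:]
		delete(c.items, oldest)
	}
}

func main() {
	c := newROLRU(2)
	c.Add("pubkey1", []byte{0x01})
	v, ok := c.Get("pubkey1")
	fmt.Println(v, ok)
}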
james-prysm
0266609bf6 bugfix : Eth-Consensus-Version header on response header (#12600)
* adding in custom header

* adding in parsing for middleware

* fixing casing

* add handling on error as well

* changing how error is handled for header

* changing how error is handled

* fixing casing

* Update beacon-chain/rpc/eth/beacon/blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/blinded_blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/blinded_blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fixing unit tests and review comment

* making some constants consistent in 1 file

* fixing missed blinded blocks

* fixing constants in custom handler tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-07-11 13:27:23 -05:00
Potuz
58df1f1ba5 lock before saving the poststate to db (#12612) 2023-07-11 16:10:32 +00:00
Simon
cec32cb996 Append Dynamic Addinng Trusted Peer Apis (#12531)
* Append Dynamic Addinng Trusted Peer Apis

* Append unit tests for Dynamic Addinng Trusted Peer Apis

* Update beacon-chain/p2p/peers/peerdata/store.go

* Update beacon-chain/p2p/peers/peerdata/store_test.go

* Update beacon-chain/p2p/peers/peerdata/store_test.go

* Update beacon-chain/p2p/peers/peerdata/store_test.go

* Update beacon-chain/p2p/peers/status.go

* Update beacon-chain/p2p/peers/status_test.go

* Update beacon-chain/p2p/peers/status_test.go

* Update beacon-chain/rpc/eth/node/handlers.go

* Update beacon-chain/rpc/eth/node/handlers.go

* Update beacon-chain/rpc/eth/node/handlers.go

* Update beacon-chain/rpc/eth/node/handlers.go

* Move trusted peer apis from rpc/eth/v1/node to rpc/prysm/node; move peersToWatch into ensurePeerConnections function;

* Update beacon-chain/rpc/prysm/node/server.go

* Update beacon-chain/rpc/prysm/node/server.go

* fix go lint problem

* p2p/watch_peers.go: trusted peer makes AddrInfo structure by itself instead of using MakePeer().

p2p/service.go: add connectWithAllTrustedPeers function, before connectWithPeer, add trusted peer info into peer status.

p2p/peers/status.go: trusted peers are not included, when pruning outdated and disconnected peers.

* use readlock for GetTrustedPeers and IsTrustedPeers

---------

Co-authored-by: simon <sunminghui2@huawei.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-07-11 09:26:08 +00:00
Nishant Das
d56a530c86 Copy Bytes Alternatively (#12608)
* copy bytes alternatively

* test
2023-07-10 19:47:29 +08:00
Nishant Das
0a68d2d302 Fix Context Cancellation (#12604) 2023-07-08 08:50:04 +00:00
minh-bq
25ebd335cb Fix bls signature batch unit test (#12602)
We randomly observe this failure when running unit test

go test -test.v -run=^TestSignatureBatch_AggregateBatch/common_and_uncommon_messages_in_batch_with_multiple_messages
=== RUN   TestSignatureBatch_AggregateBatch
=== RUN   TestSignatureBatch_AggregateBatch/common_and_uncommon_messages_in_batch_with_multiple_messages
    signature_batch_test.go:643: AggregateBatch() Descriptions got = [test signature bls aggregated signature test signature bls aggregated signature test signature bls aggregated signature], want [bls aggregated signature test signature bls aggregated signature test signature bls aggregated signature test signature]
--- FAIL: TestSignatureBatch_AggregateBatch (0.02s)
    --- FAIL: TestSignatureBatch_AggregateBatch/common_and_uncommon_messages_in_batch_with_multiple_messages (0.02s)

The problem is that the signature sort forgets to swap the description when a
swap occurs. This commit adds the description swap when swap occurs.

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-07-07 14:26:02 -05:00
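
The bug described above is the classic parallel-slices mistake: a sort.Interface Swap that reorders some of the companion slices but not all of them. A sketch of the shape of the fix, with illustrative fields rather than the actual batch type:

package main

import (
	"fmt"
	"sort"
)

// batchSorter keeps signatures, messages, and descriptions in
// lockstep. The pre-fix Swap moved the first two but forgot descs,
// leaving descriptions misaligned after sorting.
type batchSorter struct {
	signatures [][]byte
	messages   [][32]byte
	descs      []string
}

func (s batchSorter) Len() int { return len(s.signatures) }
func (s batchSorter) Less(i, j int) bool {
	return string(s.messages[i][:]) < string(s.messages[j][:])
}
func (s batchSorter) Swap(i, j int) {
	s.signatures[i], s.signatures[j] = s.signatures[j], s.signatures[i]
	s.messages[i], s.messages[j] = s.messages[j], s.messages[i]
	s.descs[i], s.descs[j] = s.descs[j], s.descs[i] // the forgotten swap
}

func main() {
	s := batchSorter{
		signatures: [][]byte{{2}, {1}},
		messages:   [][32]byte{{2}, {1}},
		descs:      []string{"b", "a"},
	}
	sort.Sort(s)
	fmt.Println(s.descs) // [a b]: descriptions follow their signatures
}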
Sammy Rosso
6a0db800b3 GetValidatorPerformance http endpoint (#12557)
* Add http endpoint for GetValidatorPerformance

* Add tests

* fix up client usage

* Revert changes

* refactor to reuse code

* Move endpoint + move ComputeValidatorPerformance

* Radek's comment change

* Add Bazel file

* Change endpoint path

* Add server for http endpoints

* Fix server

* Create core package

* Gaz

* Add correct error code

* Fix error code in test

* Adding errors

* Fix errors

* Fix default GRPC error

* Change http errors to core ones

* Use error status without helper

* Fix

* Capitalize GRPC error messages

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-07-07 14:49:44 +00:00
Nishant Das
085f90a4f1 Prune Pending Deposits on Finalization (#12598)
* prune them

* Update beacon-chain/blockchain/process_block_helpers.go

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-07-07 11:20:14 +08:00
dependabot[bot]
ecb26e9885 Bump google.golang.org/grpc from 1.40.0 to 1.53.0 (#12595)
* Bump google.golang.org/grpc from 1.40.0 to 1.53.0

Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.40.0 to 1.53.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.40.0...v1.53.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Run gazelle and fix new gRPC API

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-07-06 20:11:43 +00:00
james-prysm
7eb0091936 Checkpoint sync ux (#12584)
* small ux improvement for checkpoint sync

* adding small log for ux update

* gaz
2023-07-06 16:46:55 +00:00
terencechain
f8408b9ec1 feat: add metric for ReceiveBlock (#12597) 2023-07-06 02:57:55 -07:00
Preston Van Loon
d6d5139d68 Clarify sync committee message validation (#12594)
* Clarify sync committee validation and error message

* fix test
2023-07-05 19:20:21 +00:00
terencechain
2e0e29ecbe fix: prune invalid blocks during initial sync (#12591) 2023-07-05 08:33:33 -07:00
Potuz
e9b5e52ee2 Move consensus and execution validation outside of onBlock (#12589)
* Move consensus and execution validation outside of onBlock

* reviews

* fix unit test

* revert version change

* fix tests

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-07-05 21:12:21 +08:00
Nishant Das
2a4441762e Handle Epoch Boundary Misses (#12579)
* add changes

* fix tests

* fix edge case

* fix logging
2023-07-05 09:23:51 +00:00
Nishant Das
401fccc723 Log Finalized Deposit Insertion (#12593)
* add log

* update key

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-07-05 08:07:38 +00:00
Radosław Kapka
c80f88fc07 Rename payloadHash to lastValidHash in setOptimisticToInvalid (#12592) 2023-07-04 17:03:45 +00:00
Kevin Wood
faa0a2c4cf Correct log level for 'Could not send a chunked response' (#12562)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-07-02 22:58:12 +00:00
Nishant Das
c45cb7e188 Optimize Validator Roots Computation (#12585)
* add changes

* change it
2023-07-01 02:23:25 +00:00
0xalex88
0b10263dd5 Increase validator client startup proposer settings deadline (#12533)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-06-30 21:37:39 +00:00
Anukul Sangwan
3bc808352f run ineffassign for all code (#12578)
* run `ineffassign` for all code

* fix reported ineffassign errors

* remove redundant changes

* fix remaining ineffassign errors

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-06-29 15:38:26 +00:00
james-prysm
d0c740f477 Registration Cache used by default and other UX changes for Proposer settings (#12456)
* WIP

* WIP

* adding in migration function

* updating mock validator and gaz

* adding descriptive logs

* fixing mocking

* fixing tests

* fixing mock

* adding changes to handle enable builder settings

* fixing tests and edge case

* reduce cognative complexity of function

* further reducing cognative complexity on function

* WIP

* fixing unit test on migration

* adding more tests

* gaz and fix unit test

* fixing deepsource issues

* fixing more deesource issues missed previously

* removing unused reciever name

* WIP fix to migration logic

* fixing loging info

* reverting migration logic, converting logic to address issues discussed on slack, adding unit tests

* adding test for builder setting only not saved to db

* addressing comment

* fixing flag

* removing accidently missed deprecated flags

* rolling back mock on pr

* fixing fmt linting

* updating comments based on feedback

* Update config/features/flags.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* fixing based on feedback on PR

* Update config/validator/service/proposer_settings.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update validator/client/runner.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update validator/db/kv/proposer_settings.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* adding additional logs to clear up some steps based on feedback

* fixing log

* deepsource

* adding comments based on review feedback

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-06-29 02:49:21 +00:00
Preston Van Loon
cbe67f1970 Update protobuf and protobuf deps (#12569)
* Update protobuf and protobuf deps

* gazelle

* enforce c++14

* bump to c++17 since practically all modern compilers support it

* update protobuf again to resolve mac issues, bump c++20

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-28 14:50:43 +00:00
Potuz
5bb482e5d6 Remove forkchoice call from notify new payload (#12560)
* Remove forkchoice call from notify new payload

* add unit test
2023-06-28 13:38:24 +00:00
terencechain
83494c5b23 fix: use diff context to update proposer cache background (#12571) 2023-06-27 20:31:54 +00:00
terencechain
a10ffa9c0e Cache next epoch proposers at epoch boundary (#12484)
* Cache next epoch proposers at epoch boundary

* Fix new lines

* Use UpdateProposerIndicesInCache

* dont set state slot

* Update beacon_committee.go

* dont set state slot

* genesis epoch check

* Rm check

* fix: rm logging ctx

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* feat: move update to background

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-06-27 14:41:24 +00:00
Radosław Kapka
e545b57f26 Deflake cloners_test.go (#12566) 2023-06-26 15:43:00 +00:00
Preston Van Loon
c026b9e897 Set blst_modern=true to be the bazel default build (#12564)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-26 15:06:33 +00:00
Preston Van Loon
a19044051f Add hermetic_cc_toolchain for a hermetic cc toolchain (#12135)
* Add bazel-zig-cc for a hermetic cc toolchain

* gazelle

* Remove llvm

* remove wl

* Add new URLs for renamed repo

* gazelle

* Update to v2.0.0-rc1

* bump to rc2

* Some PR feedback

* use v2.0.0 from rc2

* Disable hermetic builds for mac and windows.

* bump bazel version, add darwin hack

* fix

* Add the no-op emtpy cc toolchain code

* typo and additional copy

* update protobuf and fix vaticle warning

* Revert "update protobuf and fix vaticle warning"

This reverts commit 7bb4b6b564.

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-26 14:31:40 +00:00
Potuz
1ebef16196 use the incoming payload status instead of calling forkchoice (#12559)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-22 18:09:02 +00:00
terencechain
8af634a6a0 feat: aggregate atts using fixed pool of go routines (#12553)
* feat: aggregate atts using fixed pool of go routines

* fix: deepsrc complains

* style: comment

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* feat: aggregate atts using fixed pool of go routines

* fix: deepsrc complains

* style: comment

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-06-22 17:48:42 +00:00
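
The pattern the commit above names, sketched under assumptions: a fixed pool of worker goroutines draining a job channel, bounding concurrency instead of spawning one goroutine per batch. The summation stands in for signature aggregation; none of this is the actual Prysm code.

package main

import (
	"fmt"
	"sync"
)

// aggregateWithPool fans jobs out to a fixed number of workers and
// collects one result per job.
func aggregateWithPool(jobs [][]uint64, workers int) []uint64 {
	in := make(chan []uint64)
	out := make(chan uint64)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for batch := range in {
				var sum uint64
				for _, v := range batch {
					sum += v // stand-in for aggregating signatures
				}
				out <- sum
			}
		}()
	}
	go func() {
		for _, j := range jobs {
			in <- j
		}
		close(in)
		wg.Wait()
		close(out)
	}()
	var results []uint64
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	fmt.Println(aggregateWithPool([][]uint64{{1, 2}, {3}}, 2))
}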
Potuz
884ba4959a Remove unneded helper (#12558)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-22 16:54:52 +00:00
terencechain
75e94120b4 fix(aggregator): remove single bit aggregation (#12555) 2023-06-22 09:34:25 -07:00
Sammy Rosso
20f4d21b83 Keymanager API: Add validator voluntary exit endpoint (#12299)
* Initial setup

* Fix + Cleanup

* Add query

* Fix

* Add epoch

* James' review part 1

* James' review part 2

* James' review part 3

* Radek' review

* Gazelle

* Fix cycle

* Start unit test

* fixing part of the test

* Mostly fix test

* Fix tests

* Cleanup

* Handle error

* Remove times

* Fix all tests

* Fix accidental deletion

* Unmarshal epoch

* Add custom_type

* Small fix

* Fix epoch

* Lint fix

* Add test + fix empty query panic

* Add comment

* Fix regex

* Add correct error message

* Change current epoch to use slot

* Return error if incorrect epoch passed

* Remove redundant type conversion

* Fix tests

* gaz

* Remove nodeClient + pass slot

* Remove slot from parameters

* Fix tests

* Fix test attempt 2

* Fix test attempt 2

* Remove nodeClient from ProposeExit

* Fix

* Fix tests

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-06-21 14:06:16 -05:00
Bryce T
c018981951 Add expected withdrawals API (#12519)
* add structs for expected-withdrawals-api

* add server handler

* add tests

* add bazel file

* register api in service

* remove get prefix for endpoint

* fix review comments

* Update beacon-chain/rpc/eth/builder/handlers.go

* use goimports sorting type

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-06-21 14:36:47 +00:00
Radosław Kapka
b92226bedb getAttestationRewards API endpoint (#12480)
* handler

* very much work in progress

* remove Polish

* thinking

* working but differs from LH

* remove old stuff

* review from Potuz

* validator performance beacon server

* Revert "validator performance beacon server"

This reverts commit 42464cc6d3.

* reuse precompute calculations

* todos

* production quality

* add json tags to AttestationRewards

* Potuz's review

* extract vars

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-06-21 13:16:53 +00:00
Potuz
57f97feb84 Track optimistic status on head (#12552) 2023-06-20 08:59:48 -07:00
Sanghee Choi
2bf0560dc7 fix typo (beacon-chain/node/node.go) (#12551) 2023-06-20 08:32:34 +00:00
Radosław Kapka
a40f903f76 Fix TestFieldTrie_NativeState_fieldConvertersNative (#12550) 2023-06-19 13:49:12 +00:00
Sanghee Choi
ba55ae8cea fix typo (CONTRIBUTING.md) (#12548) 2023-06-18 19:24:19 -07:00
Potuz
27aac105d7 disable nil payloadid log on relayers flags (#12465)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-06-16 17:01:57 +00:00
terencechain
115d565f49 fix: late block task wait for initial sync (#12526)
* fix: late block task wait for initial sync

* fix: remove wait for clock
2023-06-16 13:47:19 +00:00
Potuz
019e0b56e2 Do not validate merge transition block after Capella (#12459) 2023-06-16 13:11:07 +00:00
Nishant Das
0efb038984 Fix Fuzz Target For ExecutionPayload (#12541) 2023-06-16 12:41:28 +00:00
Nishant Das
63d81144e9 Fix Uint256 Json Parsing (#12540)
* add stronger checks

* radek's review
2023-06-16 09:43:20 +00:00
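
The kind of "stronger check" a Uint256 JSON fix needs, sketched with assumptions (an illustrative parser, not Prysm's): require the 0x prefix, reject non-hex input, and reject values that do not fit in 256 bits.

package main

import (
	"fmt"
	"math/big"
	"strings"
)

// parseUint256 validates a JSON-style hex quantity string.
func parseUint256(s string) (*big.Int, error) {
	if !strings.HasPrefix(s, "0x") {
		return nil, fmt.Errorf("missing 0x prefix: %q", s)
	}
	n, ok := new(big.Int).SetString(s[2:], 16)
	if !ok || n.Sign() < 0 {
		return nil, fmt.Errorf("invalid hex quantity: %q", s)
	}
	if n.BitLen() > 256 {
		return nil, fmt.Errorf("value overflows 256 bits: %q", s)
	}
	return n, nil
}

func main() {
	n, err := parseUint256("0xde0b6b3a7640000")
	fmt.Println(n, err) // 1000000000000000000 <nil>
}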
james-prysm
6edbfa3128 multiple validator status - optimization (#12487)
* adding optmization

* addressing comments

* adding a test and fixing change in assignments.go

* making some changes based on review of the code

* removing irrelevant test

* changing formatting
2023-06-15 17:20:00 -05:00
Nishant Das
194b3b1c5e Ensure File Does Not Exist (#12536)
* error out

* gaz

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-06-15 21:41:46 +00:00
james-prysm
996ec67229 changing default on bad validators (#12535)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-06-15 16:40:59 +00:00
Nishant Das
c7b2c011d8 fix parsing (#12534) 2023-06-15 11:12:39 -04:00
james-prysm
d15122fae2 Reopenning fix for keystore field name change to align with EIP2335 (#12530)
* adding changes

* fixing deepsource
2023-06-14 15:48:30 -05:00
Potuz
3e17dbb532 log the right blocknumber (#12529) 2023-06-14 19:55:33 +00:00
Nishant Das
a75e78ddb4 Ignore Late Message Logs (#12525) 2023-06-14 10:37:39 +00:00
Nishant Das
1862422db9 Remove Defer In ProposeGenericBlock (#12524) 2023-06-14 05:25:52 +00:00
james-prysm
152d21059e adding additional comments and safe copies to protos (#12518) 2023-06-13 10:31:29 -05:00
terencechain
2b410893a0 optimization: epoch boundary uses next slot cache (#12515)
* optimization: epoch boundary uses next slot cache

* test: fix

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-12 17:13:49 +00:00
Potuz
826267310e benchmark cold hashing of capella beaconstates (#12516)
* benchmark cold hashing of capella beaconstates

* use since
2023-06-12 16:54:43 +00:00
Nishant Das
d5057cfb42 Add the Ability for Prysm To Handle Trusted Peers (#12492)
* add all changes

* add to peers to watch

* add tests

* Update beacon-chain/p2p/peers/peerdata/store_test.go

* radek's review

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-06-12 14:47:52 +00:00
james-prysm
8d01cf2ec1 change update duties to handle all validators exited check (#12505)
* wip have update duties handle all validators updated

* removing function and adding tests

* removing unnessesary test

* fixing unit test

* gaz

* removing number on wait group

* trying lower threshold to reduce timeout

* testing removal of test to resolve timeout on buildkite

* gaz

* removing test that is breaking buildkite on timeouts, will need to return to revaluate difference between buildkite and local mock

* addressing feedback

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-12 14:27:52 +00:00
Potuz
e4e315da94 log validation time for blocks (#12514) 2023-06-12 22:06:57 +08:00
terencechain
0a4e42545e Use next slot cache for sync committee (#12287)
* Use next slot cache for sync committee

* RWMutex

* change mutex for last cached state

* feat: change mutex

* test: add db

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-12 04:30:06 +00:00
kasey
6fa2d768b5 Checkpoint sync: get block using state.latest_block_header.slot (#12447)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-11 03:00:38 +00:00
Nishant Das
0f228896b0 Add Patch For Libp2p (#12507)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-11 01:59:18 +00:00
terencechain
6896b41963 optimization(proposer rpc): move htr to after broadcast (#12504) 2023-06-09 06:32:29 -07:00
Nishant Das
3bf6abe27c Ignore Phase0 Blocks For Monitor (#12503) 2023-06-09 05:00:36 +00:00
Nishant Das
c1391f0de3 Always Favour Yamux for Multiplexing (#12502) 2023-06-08 04:02:46 +00:00
james-prysm
6672d1499a prysmctl: output proposer settings (#12181)
* wip proposer settings

* WIP validator client APIs

* adding proposer settings output

* adding unit tests

* fixing linting

* fixing deepsource issues

* fixing e2e

* fixing deep source issue

* updating naming to not stutter

* updating bazel

* fixing linting error

* reverting comment

* adding builder settings

* gaz

* Update validator/client/validator.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* adding comments

* adding some tests

* gaz

* Update cmd/prysmctl/validator/proposer_settings.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update cmd/prysmctl/validator/proposer_settings.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update cmd/prysmctl/validator/proposer_settings.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update cmd/prysmctl/validator/proposer_settings.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/client/options.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/client/options.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/client/errors.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/client/options.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/client/options.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/client/validator/client.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update cmd/prysmctl/validator/cmd.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/client/validator/client.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/client/validator/client.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update cmd/prysmctl/validator/proposer_settings.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update api/client/errors.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fixing feedback

* fixing unit test

* addressign comments

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-06-06 17:03:30 +00:00
Nishant Das
33cf52831c Update Libp2p to v0.27.5 (#12486)
* add deps

* update to v0.27.5 and handle panic
2023-06-06 08:41:15 +08:00
terencechain
d543e9be00 Update spec tests to v1.4.0-alpha.1 (#12489) 2023-06-03 11:17:13 +00:00
Nishant Das
0669050ffa Add Appropriate Size for the Attestation Queue (#12485)
* add tag

* fix off by 1
2023-06-02 11:33:28 +00:00
zghh
ceff0c2024 Fix the bug that return 500 in /eth/v1/node/peers interface (#12483)
* Fix the bug that return 500 in /eth/v1/node/peers interface

* Update node.go

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-06-02 03:27:17 +00:00
Radosław Kapka
c32b581e8e Add broadcast_validation to block publishing (#12432)
* day 1

* day 2

* day 2+

* day 3

* day 4

* making bazel happy

* PublishBlindedBlockV2

* remove file

* use lock in insertSeenProposerIndex

* remove EquivocationChecker interface

* update deps.bzl

* remove middleware json tags

* go mod tidy

* remove redundant return statements

* validate in handler

* improvements

* extract common code

* remove import

* sync test fix

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-06-01 11:22:49 +00:00
terencechain
e516a2004f Update next slot cache correctly under late task (#12462) 2023-05-31 08:50:37 -07:00
terencechain
cb65d8af96 Proposer RPC: make setExecutionData better (#12466) 2023-05-31 06:06:32 -07:00
Nishant Das
70152bf476 Copy All Field Tries For Late Blocks (#12461)
* add new thing

* only have it for late blocks

* comments

* change to lock

* add test

* Update beacon-chain/state/state-native/state_test.go

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-05-30 09:57:20 +00:00
Radosław Kapka
8aa688729d Cleanup of ProposerPayloadIDsCache (#12474)
* Cleanup of `ProposerPayloadIDsCache`

* one more comment

* Update beacon-chain/cache/payload_id.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* Update beacon-chain/cache/payload_id.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-05-29 16:10:28 +00:00
Preston Van Loon
1ffc92999f p2p: Check peer threshold is met before giving up on ctx deadline (#12446)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-05-28 13:24:59 +00:00
terencechain
2dcef85f97 Add spec test for v1.4.0-alpha.0 (#12460)
* Fix spec test

* Fix sha

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-05-25 14:05:43 +00:00
Nishant Das
52da7b3de6 Release Lock Before Panicking (#12464) 2023-05-25 06:42:21 -07:00
terencechain
be16b64535 Remove SubmitBlindBlock context timeout (#12453) 2023-05-24 14:19:23 +00:00
terencechain
f4d3939b62 Add logs for build block times (#12452)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-05-24 13:37:26 +00:00
Nishant Das
245d8a29e0 Optimize Zerohash Comparisons In Forkchoice (#12458) 2023-05-24 09:58:02 +00:00
james-prysm
666188dfea Improve validator import logs (#12429)
* adding small ux improvement

* gaz

* rolling back dir test changes

* Update validator/accounts/accounts_import.go

* adding review suggestion

* missed else part of statement

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-05-23 15:41:41 -05:00
Preston Van Loon
cfa64ae013 Restore disable-peer-scorer flag (#12386)
* Revert "Make Peer Scorer Permanent Default (#12138)"

This reverts commit 4d28d69fd9.

* make peer scoring flag warning scary
2023-05-23 13:53:02 +00:00
Potuz
cd0f814f2e fixed erroneous panic (#12450) 2023-05-23 11:12:31 +00:00
Radosław Kapka
abc81e6dde Merge all block unblinding code into a single unblinder struct (#12240)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-05-23 11:38:52 +02:00
terencechain
6b26183e73 Add missing config yamls for domains (#12442)
* Add missing config yamls for domains

* Fix GetSpec test

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-05-22 18:02:41 +00:00
Preston Van Loon
7fe935e94d Fix metric name from PR #12430 (#12445)
* Fix metric name from PR #12430

* @potuz can't spell 'unknown'

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-05-22 17:43:21 +00:00
Potuz
e0e7c71eb5 Fix sandwich attack on honest reorgs (#12418)
* Fix sandwich attack on honest reorgs

* fix test

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-05-22 17:23:20 +00:00
Potuz
c80019bd0b remove trivial helper (#12443) 2023-05-22 15:59:16 +00:00
james-prysm
8dfb92c605 reverting expiration logic on validator while using --enable-registration-cache (#12436)
* reverting expiration logic

* gaz
2023-05-22 14:54:09 +00:00
Potuz
9d192a3608 Remove unused function (#12439)
* Remove unused function

* gazelle
2023-05-22 11:09:08 -03:00
Nishant Das
51bde7a845 disable it (#12438) 2023-05-22 19:18:13 +08:00
kasey
385a317902 Revert initsync revert (#12431)
* Revert "Revert "BeaconBlocksByRange and BlobSidecarsByRange consistency (#123… (#12426)"

This reverts commit ddc1e48e05.

* fix metrics bug, add batch.next tests

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-05-19 16:59:13 +00:00
Potuz
b84dd40ba9 Use forkchoice to validate sync messages faster (#12430)
* Use forkchoice to validate sync messages faster

* add metric
2023-05-19 14:47:39 +00:00
kasey
aeaa72fdc2 fix deadlock between monitor service and init-sync (#12427)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-05-18 18:35:06 +00:00
kasey
ddc1e48e05 Revert "BeaconBlocksByRange and BlobSidecarsByRange consistency (#123… (#12426)
This reverts commit 73e4bdccbb.

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-05-18 18:01:26 +00:00
Nishant Das
f91159337b Fix Migration Of State (#12423) 2023-05-18 13:18:56 +00:00
Potuz
537236e1c9 Add aggregation metrics (#12417) 2023-05-17 12:18:59 -07:00
kasey
73e4bdccbb BeaconBlocksByRange and BlobSidecarsByRange consistency (#12383)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-05-17 12:16:10 +00:00
Potuz
f54bd64bdd Default aggregation ticker times (#12412)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-05-16 19:38:44 +00:00
james-prysm
d907cae595 persistent validator settings in validator client (#12354)
* WIP

* improving proposer settings store

* WIP persistent validator settings

* WIP persistent validator settings

* changing visibility level

* fixing some deepsource issues

* fixing more deepsource issues

* fixing json marshalling

* fix linting

* fixing tests

* fixing more tests

* fixing more tests

* fixing more tests

* fixing linting

* WIP fixing unit tests

* fixing remaining db tests

* converting json to protobuf

* fixing e2e

* k8s yaml library is used directly

* fixing linting

* fixing broken unit test

* reverting changes on e2e

* fixing linting

* fixing deepsource issue

* resolving some internal comments

* resolving some comments and adding more tests

* adding more unit tests

* gaz

* fixing flaking test

* fixing unit test contamination

* fixing deepsource issue

* resolving review item

* gaz
2023-05-16 14:08:49 -05:00
Potuz
be23773924 Don't use max cover on unaggregated atts nor check subgroup of validated signatures (#12350)
* Don't use max cover on unnaggregated atts

* Do not validate signature on the attestation package

* separate nil error checks

* fix unit tests
2023-05-16 17:06:26 +00:00
terencechain
29f6de1e96 Flip build block parallel feature flag (#12408) 2023-05-16 08:43:15 -07:00
Potuz
955a21fea4 revert revert of f6764fe62b (#12399) 2023-05-16 11:50:02 +00:00
516 changed files with 27583 additions and 13512 deletions

.bazelrc

@@ -3,6 +3,7 @@ import %workspace%/build/bazelrc/convenience.bazelrc
 import %workspace%/build/bazelrc/correctness.bazelrc
 import %workspace%/build/bazelrc/cross.bazelrc
 import %workspace%/build/bazelrc/debug.bazelrc
+import %workspace%/build/bazelrc/hermetic-cc.bazelrc
 import %workspace%/build/bazelrc/performance.bazelrc
 
 # E2E run with debug gotag
@@ -14,7 +15,7 @@ coverage --define=coverage_enabled=1
 
 # Stamp binaries with git information
 build --workspace_status_command=./hack/workspace_status.sh
-build --define blst_disabled=false
+build --define blst_disabled=false --define blst_modern=true
 run --define blst_disabled=false
 build:blst_disabled --define blst_disabled=true
@@ -27,30 +28,7 @@ build:minimal --@io_bazel_rules_go//go/config:tags=minimal
 
 build:release --compilation_mode=opt
 build:release --stamp
 
-# LLVM compiler for building C/C++ dependencies.
-build:llvm --define compiler=llvm
-build:llvm --copt -fno-sanitize=vptr,function
-build:llvm --linkopt -fno-sanitize=vptr,function
-# --incompatible_enable_cc_toolchain_resolution not needed after this issue is closed https://github.com/bazelbuild/bazel/issues/7260
-build:llvm --incompatible_enable_cc_toolchain_resolution
-
-build:asan --copt -fsanitize=address,undefined
-build:asan --copt -fno-omit-frame-pointer
-build:asan --linkopt -fsanitize=address,undefined
-build:asan --copt -fno-sanitize=vptr,function
-build:asan --linkopt -fno-sanitize=vptr,function
-build:asan --copt -DADDRESS_SANITIZER=1
-build:asan --copt -D__SANITIZE_ADDRESS__
-build:asan --linkopt -ldl
-
-build:llvm-asan --config=llvm
-build:llvm-asan --config=asan
-build:llvm-asan --linkopt -fuse-ld=ld.lld
 
 build:fuzz --@io_bazel_rules_go//go/config:tags=fuzz
 
 # Build binary with cgo symbolizer for debugging / profiling.
-build:cgo_symbolizer --config=llvm
 build:cgo_symbolizer --copt=-g
 build:cgo_symbolizer --define=USE_CGO_SYMBOLIZER=true
 build:cgo_symbolizer -c dbg
@@ -59,9 +37,13 @@ build:cgo_symbolizer --define=gotags=cgosymbolizer_enabled
 
 # toolchain build debug configs
 #------------------------------
 build:debug --sandbox_debug
-build:debug --toolchain_resolution_debug
+build:debug --toolchain_resolution_debug=".*"
 build:debug --verbose_failures
 build:debug -s
 
 # Set bazel gotag
 build --define gotags=bazel
+
+# Abseil requires c++14 or greater.
+build --cxxopt=-std=c++20
+build --host_cxxopt=-std=c++20

.bazelversion

@@ -1 +1 @@
-6.1.0
+6.3.2

.github/workflows/fuzz.yml (new file, 42 lines added)

@@ -0,0 +1,42 @@
name: "fuzz"
on:
  workflow_dispatch:
  schedule:
    - cron: "0 12 * * *"

permissions:
  contents: write
  pull-requests: write

jobs:
  list:
    runs-on: ubuntu-latest
    timeout-minutes: 180
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: '1.20'
      - id: list
        uses: shogo82148/actions-go-fuzz/list@v0
    outputs:
      fuzz-tests: ${{steps.list.outputs.fuzz-tests}}
  fuzz:
    runs-on: ubuntu-latest
    timeout-minutes: 360
    needs: list
    strategy:
      fail-fast: false
      matrix:
        include: ${{fromJson(needs.list.outputs.fuzz-tests)}}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: '1.20'
      - uses: shogo82148/actions-go-fuzz/run@v0
        with:
          packages: ${{ matrix.package }}
          fuzz-regexp: ${{ matrix.func }}
          fuzz-time: "20m"
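
For context, the list step above discovers native Go fuzz targets and the run step executes each with a 20-minute budget. Any exported function of the following shape qualifies; this is a generic example, not one of Prysm's actual fuzz tests.

package fuzzexample

import (
	"strconv"
	"testing"
)

// FuzzItoaAtoi is the target shape actions-go-fuzz looks for: a
// Fuzz*-named function taking *testing.F. The engine mutates n,
// searching for an input that breaks the round-trip invariant.
func FuzzItoaAtoi(f *testing.F) {
	f.Add(int64(42)) // seed corpus entry
	f.Fuzz(func(t *testing.T, n int64) {
		s := strconv.FormatInt(n, 10)
		got, err := strconv.ParseInt(s, 10, 64)
		if err != nil || got != n {
			t.Errorf("round-trip failed: %d -> %q -> %d (%v)", n, s, got, err)
		}
	})
}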

BUILD.bazel

@@ -3,7 +3,6 @@ load("@com_github_atlassian_bazel_tools//gometalinter:def.bzl", "gometalinter")
 load("@com_github_atlassian_bazel_tools//goimports:def.bzl", "goimports")
 load("@io_kubernetes_build//defs:run_in_workspace.bzl", "workspace_binary")
 load("@io_bazel_rules_go//go:def.bzl", "nogo")
-load("@vaticle_bazel_distribution//common:rules.bzl", "assemble_targz", "assemble_versioned")
 load("@bazel_skylib//rules:common_settings.bzl", "string_setting")
 
 prefix = "github.com/prysmaticlabs/prysm"
@@ -134,8 +133,8 @@ nogo(
         # nogo checks that fail with coverage enabled.
         ":coverage_enabled": [],
         "//conditions:default": [
-            "@org_golang_x_tools//go/analysis/passes/lostcancel:go_default_library",
             "@org_golang_x_tools//go/analysis/passes/composite:go_default_library",
+            "@org_golang_x_tools//go/analysis/passes/lostcancel:go_default_library",
         ],
     }),
 )

View File

@@ -1,6 +1,6 @@
# Contribution Guidelines
Note: The latest and most up to date documenation can be found on our [docs portal](https://docs.prylabs.network/docs/contribute/contribution-guidelines).
Note: The latest and most up-to-date documentation can be found on our [docs portal](https://docs.prylabs.network/docs/contribute/contribution-guidelines).
Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer?
@@ -10,9 +10,9 @@ You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues)
**1. Set up Prysm following the instructions in README.md.**
**2. Fork the prysm repo.**
**2. Fork the Prysm repo.**
Sign in to your Github account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
Sign in to your GitHub account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
**3. Create a local clone of Prysm.**
@@ -23,7 +23,7 @@ $ git clone https://github.com/prysmaticlabs/prysm.git
$ cd $GOPATH/src/github.com/prysmaticlabs/prysm
```
**4. Link your local clone to the fork on your Github repo.**
**4. Link your local clone to the fork on your GitHub repo.**
```
$ git remote add myprysmrepo https://github.com/<your_github_user_name>/prysm.git
@@ -68,7 +68,7 @@ $ go test <file_you_are_working_on>
$ git add --all
```
This command stages all of the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.
This command stages all the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.
**11. Commit the file or files.**
@@ -96,8 +96,7 @@ If there are conflicts between your edits and those made by others since you sta
$ git status
```
Open those files one at a time and you
will see lines inserted by Git that identify the conflicts:
Open those files one at a time, and you will see lines inserted by Git that identify the conflicts:
```
<<<<<< HEAD
@@ -119,7 +118,7 @@ $ git push myrepo feature-in-progress-branch
**15. Check to be sure your fork of the Prysm repo contains your feature branch with the latest edits.**
Navigate to your fork of the repo on Github. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
Navigate to your fork of the repo on GitHub. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
**16. Create a pull request.**
@@ -151,7 +150,7 @@ pick hash fix a bug
pick hash add a feature
```
Replace the word pick with the word “squash” for every line but the first so you end with ….
Replace the word pick with the word “squash” for every line but the first, so you end with ….
```
pick hash do some work
@@ -178,7 +177,7 @@ We consider two types of contributions to our repo and categorize them as follow
Anyone can become a part-time contributor and help out on implementing Ethereum consensus. The responsibilities of a part-time contributor include:
- Engaging in Gitter conversations, asking the questions on how to begin contributing to the project
- Opening up github issues to express interest in code to implement
- Opening up GitHub issues to express interest in code to implement
- Opening up PRs referencing any open issue in the repo. PRs should include:
- Detailed context of what would be required for merge
- Tests that are consistent with how other tests are written in our implementation
@@ -188,12 +187,12 @@ Anyone can become a part-time contributor and help out on implementing Ethereum
### Core Contributors
Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all of the responsibilities of part-time contributors plus the majority of the following:
Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all the responsibilities of part-time contributors plus the majority of the following:
- Stay up to date on the latest beacon chain specification
- Monitor github issues and PRs to make sure owner, labels, descriptions are correct
- Monitor GitHub issues and PRs to make sure owner, labels, descriptions are correct
- Formulate independent ideas, suggest new work to do, point out improvements to existing approaches
- Participate in code review, ensure code quality is excellent, and have ensure high code coverage
- Participate in code review, ensure code quality is excellent, and ensure high code coverage
- Help with social media presence, write bi-weekly development update
- Represent Prysmatic Labs at events to help spread the word on scalability research and solutions

WORKSPACE (109 changed lines)
View File

@@ -17,26 +17,34 @@ load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
http_archive(
name = "com_grail_bazel_toolchain",
sha256 = "b210fc8e58782ef171f428bfc850ed7179bdd805543ebd1aa144b9c93489134f",
strip_prefix = "bazel-toolchain-83e69ba9e4b4fdad0d1d057fcb87addf77c281c9",
urls = ["https://github.com/grailbio/bazel-toolchain/archive/83e69ba9e4b4fdad0d1d057fcb87addf77c281c9.tar.gz"],
name = "hermetic_cc_toolchain",
sha256 = "973ab22945b921ef45b8e1d6ce01ca7ce1b8a462167449a36e297438c4ec2755",
strip_prefix = "hermetic_cc_toolchain-5098046bccc15d2962f3cc8e7e53d6a2a26072dc",
urls = [
"https://github.com/uber/hermetic_cc_toolchain/archive/5098046bccc15d2962f3cc8e7e53d6a2a26072dc.tar.gz", # 2023-06-28
],
)
load("@com_grail_bazel_toolchain//toolchain:deps.bzl", "bazel_toolchain_dependencies")
load("@hermetic_cc_toolchain//toolchain:defs.bzl", zig_toolchains = "toolchains")
bazel_toolchain_dependencies()
zig_toolchains()
load("@com_grail_bazel_toolchain//toolchain:rules.bzl", "llvm_toolchain")
llvm_toolchain(
name = "llvm_toolchain",
llvm_version = "13.0.1",
# Register zig sdk toolchains with support for Ubuntu 20.04 (Focal Fossa) which has an EOL date of April, 2025.
# For ubuntu glibc support, see https://launchpad.net/ubuntu/+source/glibc
register_toolchains(
"@zig_sdk//toolchain:linux_amd64_gnu.2.31",
"@zig_sdk//toolchain:linux_arm64_gnu.2.31",
# Hermetic cc toolchain is not yet supported on darwin. Sysroot needs to be provided.
# See https://github.com/uber/hermetic_cc_toolchain#osx-sysroot
# "@zig_sdk//toolchain:darwin_amd64",
# "@zig_sdk//toolchain:darwin_arm64",
# Windows builds are not supported yet.
# "@zig_sdk//toolchain:windows_amd64",
)
load("@llvm_toolchain//:toolchains.bzl", "llvm_register_toolchains")
load("@prysm//tools/cross-toolchain:darwin_cc_hack.bzl", "configure_nonhermetic_darwin")
llvm_register_toolchains()
configure_nonhermetic_darwin()
load("@prysm//tools/cross-toolchain:prysm_toolchains.bzl", "configure_prysm_toolchains")
@@ -59,10 +67,10 @@ bazel_skylib_workspace()
http_archive(
name = "bazel_gazelle",
sha256 = "5982e5463f171da99e3bdaeff8c0f48283a7a5f396ec5282910b9e8a49c0dd7e",
sha256 = "29d5dafc2a5582995488c6735115d1d366fcd6a0fc2e2a153f02988706349825",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/bazel-gazelle/releases/download/v0.25.0/bazel-gazelle-v0.25.0.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.25.0/bazel-gazelle-v0.25.0.tar.gz",
"https://mirror.bazel.build/github.com/bazelbuild/bazel-gazelle/releases/download/v0.31.0/bazel-gazelle-v0.31.0.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.31.0/bazel-gazelle-v0.31.0.tar.gz",
],
)
@@ -86,10 +94,10 @@ http_archive(
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
sha256 = "6b65cb7917b4d1709f9410ffe00ecf3e160edf674b78c54a894471320862184f",
sha256 = "bfc5ce70b9d1634ae54f4e7b495657a18a04e0d596785f672d35d5f505ab491a",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.39.0/rules_go-v0.39.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.39.0/rules_go-v0.39.0.zip",
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.40.0/rules_go-v0.40.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.40.0/rules_go-v0.40.0.zip",
],
)
@@ -164,7 +172,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.20.3",
go_version = "1.20.7",
nogo = "@//:nogo",
)
@@ -205,7 +213,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.3.0"
consensus_spec_version = "v1.4.0-alpha.1"
bls_test_version = "v0.1.1"
@@ -221,7 +229,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "1c806e04ac5e3779032c06a6009350b3836b6809bb23812993d6ececd7047cf5",
sha256 = "1118a663be4a00ba00f0635eb20287157f2b2f993aed64335bfbcd04af424c2b",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -237,7 +245,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "2b42796dc5ccd9f1246032d0c17663e20f70334ff7e00325f0fc3af28cb24186",
sha256 = "acde6e10940d14f22277eda5b55b65a24623ac88e4c7a2e34134a6069f5eea82",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -253,7 +261,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "231e3371e81ce9acde65d2910ec4580587e74dbbcfcbd9c675e473e022deec8a",
sha256 = "49c022f3a3478cea849ba8f877a9f7e4c1ded549edddc09993550bbc5bb192e1",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -268,7 +276,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "219b74d95664ea7e8dfbf31162dfa206b9c0cf45919ea86db5fa0f8902977e3c",
sha256 = "c3e246ff01f6b7b9e9e41939954a6ff89dfca7297415f88781809165fa83267c",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -305,39 +313,34 @@ filegroup(
)
http_archive(
name = "com_github_bazelbuild_buildtools",
sha256 = "7a182df18df1debabd9e36ae07c8edfa1378b8424a04561b674d933b965372b3",
strip_prefix = "buildtools-f2aed9ee205d62d45c55cfabbfd26342f8526862",
url = "https://github.com/bazelbuild/buildtools/archive/f2aed9ee205d62d45c55cfabbfd26342f8526862.zip",
name = "holesky_testnet",
build_file_content = """
filegroup(
name = "configs",
srcs = [
"custom_config_data/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
sha256 = "4116c8acb54eb3ca28cc4dc9bc688e08e25da91d70ed1f2622f02d3c33eba922",
strip_prefix = "holesky-76057d57ab1f585519ecb606a9e5f7780e925a37",
url = "https://github.com/eth-clients/holesky/archive/76057d57ab1f585519ecb606a9e5f7780e925a37.tar.gz", # Aug 27, 2023
)
git_repository(
http_archive(
name = "com_google_protobuf",
commit = "436bd7880e458532901c58f4d9d1ea23fa7edd52",
remote = "https://github.com/protocolbuffers/protobuf",
shallow_since = "1617835118 -0700",
sha256 = "4e176116949be52b0408dfd24f8925d1eb674a781ae242a75296b17a1c721395",
strip_prefix = "protobuf-23.3",
urls = [
"https://github.com/protocolbuffers/protobuf/archive/v23.3.tar.gz",
],
)
# Group the sources of the library so that CMake rule have access to it
all_content = """filegroup(name = "all", srcs = glob(["**"]), visibility = ["//visibility:public"])"""
# External dependencies
http_archive(
name = "prysm_web_ui",
build_file_content = """
filegroup(
name = "site",
srcs = glob(["**/*"]),
visibility = ["//visibility:public"],
)
""",
sha256 = "5006614c33e358699b4e072c649cd4c3866f7d41a691449d5156f6c6e07a4c60",
urls = [
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v2.0.3/prysm-web-ui.tar.gz",
],
)
load("//:deps.bzl", "prysm_deps")
# gazelle:repository_macro deps.bzl%prysm_deps
@@ -369,10 +372,6 @@ load(
_cc_image_repos()
load("@io_bazel_rules_go//extras:embed_data_deps.bzl", "go_embed_data_dependencies")
go_embed_data_dependencies()
load("@com_github_atlassian_bazel_tools//gometalinter:deps.bzl", "gometalinter_dependencies")
gometalinter_dependencies()
@@ -381,10 +380,6 @@ load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies")
gazelle_dependencies()
load("@com_github_bazelbuild_buildtools//buildifier:deps.bzl", "buildifier_dependencies")
buildifier_dependencies()
load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()

api/BUILD.bazel (new file, 8 lines)
View File

@@ -0,0 +1,8 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["headers.go"],
importpath = "github.com/prysmaticlabs/prysm/v4/api",
visibility = ["//visibility:public"],
)

api/client/BUILD.bazel (new file, 20 lines)
View File

@@ -0,0 +1,20 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"client.go",
"errors.go",
"options.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/api/client",
visibility = ["//visibility:public"],
deps = ["@com_github_pkg_errors//:go_default_library"],
)
go_test(
name = "go_default_test",
srcs = ["client_test.go"],
embed = [":go_default_library"],
deps = ["//testing/require:go_default_library"],
)

View File

@@ -6,11 +6,11 @@ go_library(
"checkpoint.go",
"client.go",
"doc.go",
"errors.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/api/client/beacon",
visibility = ["//visibility:public"],
deps = [
"//api/client:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/rpc/apimiddleware:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -39,6 +39,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//api/client:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -5,11 +5,14 @@ import (
"fmt"
"path"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
base "github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v4/io/file"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
@@ -18,6 +21,8 @@ import (
"golang.org/x/mod/semver"
)
var errCheckpointBlockMismatch = errors.New("mismatch between checkpoint sync state and block")
// OriginData represents the BeaconState and ReadOnlySignedBeaconBlock necessary to start an empty Beacon Node
// using Checkpoint Sync.
type OriginData struct {
@@ -74,37 +79,40 @@ func DownloadFinalizedData(ctx context.Context, client *Client) (*OriginData, er
if err != nil {
return nil, errors.Wrap(err, "error unmarshaling finalized state to correct version")
}
if s.Slot() != s.LatestBlockHeader().Slot {
return nil, fmt.Errorf("finalized state slot does not match latest block header slot %d != %d", s.Slot(), s.LatestBlockHeader().Slot)
}
sr, err := s.HashTreeRoot(ctx)
slot := s.LatestBlockHeader().Slot
bb, err := client.GetBlock(ctx, IdFromSlot(slot))
if err != nil {
return nil, errors.Wrapf(err, "failed to compute htr for finalized state at slot=%d", s.Slot())
}
header := s.LatestBlockHeader()
header.StateRoot = sr[:]
br, err := header.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error while computing block root using state data")
}
bb, err := client.GetBlock(ctx, IdFromRoot(br))
if err != nil {
return nil, errors.Wrapf(err, "error requesting block by root = %#x", br)
return nil, errors.Wrapf(err, "error requesting block by slot = %d", slot)
}
b, err := vu.UnmarshalBeaconBlock(bb)
if err != nil {
return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
}
realBlockRoot, err := b.Block().HashTreeRoot()
br, err := b.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block")
}
bodyRoot, err := b.Block().Body().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block body")
}
log.Printf("BeaconState slot=%d, Block slot=%d", s.Slot(), b.Block().Slot())
log.Printf("BeaconState htr=%#x, Block state_root=%#x", sr, b.Block().StateRoot())
log.Printf("BeaconState latest_block_header htr=%#x, block htr=%#x", br, realBlockRoot)
sbr := bytesutil.ToBytes32(s.LatestBlockHeader().BodyRoot)
if sbr != bodyRoot {
return nil, errors.Wrapf(errCheckpointBlockMismatch, "state body root = %#x, block body root = %#x", sbr, bodyRoot)
}
sr, err := s.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrapf(err, "failed to compute htr for finalized state at slot=%d", s.Slot())
}
log.
WithField("block_slot", b.Block().Slot()).
WithField("state_slot", s.Slot()).
WithField("state_root", hexutil.Encode(sr[:])).
WithField("block_root", hexutil.Encode(br[:])).
Info("Downloaded checkpoint sync state and block.")
return &OriginData{
st: s,
b: b,
@@ -140,7 +148,7 @@ func ComputeWeakSubjectivityCheckpoint(ctx context.Context, client *Client) (*We
ws, err := client.GetWeakSubjectivity(ctx)
if err != nil {
// a 404/405 is expected if querying an endpoint that doesn't support the weak subjectivity checkpoint api
if !errors.Is(err, ErrNotOK) {
if !errors.Is(err, base.ErrNotOK) {
return nil, errors.Wrap(err, "unexpected API response for prysm-only weak subjectivity checkpoint API")
}
// fall back to vanilla Beacon Node API method
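The DownloadFinalizedData change above drops the old root-based lookup (compute the header root from the state, then fetch the block by root) in favor of a slot-based fetch plus an explicit body-root cross-check. A compact sketch of that check, using only calls visible in the diff; the helper name is hypothetical:

```go
package beacon

import (
	"github.com/pkg/errors"
	"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
	"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
	"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
)

// verifyCheckpointPair is a hypothetical helper distilling the check added
// in DownloadFinalizedData: the state's latest_block_header.body_root must
// match the body root of the block fetched for that header's slot.
func verifyCheckpointPair(st state.BeaconState, b interfaces.ReadOnlySignedBeaconBlock) error {
	bodyRoot, err := b.Block().Body().HashTreeRoot()
	if err != nil {
		return errors.Wrap(err, "error computing hash_tree_root of block body")
	}
	stateBodyRoot := bytesutil.ToBytes32(st.LatestBlockHeader().BodyRoot)
	if stateBodyRoot != bodyRoot {
		return errors.Errorf("state body root = %#x, block body root = %#x", stateBodyRoot, bodyRoot)
	}
	return nil
}
```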

View File

@@ -7,9 +7,9 @@ import (
"fmt"
"io"
"net/http"
"net/url"
"testing"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
blocktest "github.com/prysmaticlabs/prysm/v4/consensus-types/blocks/testing"
@@ -66,11 +66,7 @@ func TestMarshalToEnvelope(t *testing.T) {
}
func TestFallbackVersionCheck(t *testing.T) {
c := &Client{
hc: &http.Client{},
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
c.hc.Transport = &testRT{rt: func(req *http.Request) (*http.Response, error) {
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case getNodeVersionPath:
@@ -88,12 +84,13 @@ func TestFallbackVersionCheck(t *testing.T) {
case getWeakSubjectivityPath:
res.StatusCode = http.StatusNotFound
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
ctx := context.Background()
_, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
_, err = ComputeWeakSubjectivityCheckpoint(ctx, c)
require.ErrorIs(t, err, errUnsupportedPrysmCheckpointVersion)
}
@@ -131,6 +128,7 @@ func TestDownloadWeakSubjectivityCheckpoint(t *testing.T) {
wst, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, epoch)
require.NoError(t, err)
require.NoError(t, wst.SetFork(fork))
// set up checkpoint block
@@ -170,44 +168,41 @@ func TestDownloadWeakSubjectivityCheckpoint(t *testing.T) {
Epoch: epoch,
}
hc := &http.Client{
Transport: &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case getWeakSubjectivityPath:
res.StatusCode = http.StatusOK
cp := struct {
Epoch string `json:"epoch"`
Root string `json:"root"`
}{
Epoch: fmt.Sprintf("%d", slots.ToEpoch(b.Block().Slot())),
Root: fmt.Sprintf("%#x", bRoot),
}
wsr := struct {
Checkpoint interface{} `json:"ws_checkpoint"`
StateRoot string `json:"state_root"`
}{
Checkpoint: cp,
StateRoot: fmt.Sprintf("%#x", wRoot),
}
rb, err := marshalToEnvelope(wsr)
require.NoError(t, err)
res.Body = io.NopCloser(bytes.NewBuffer(rb))
case renderGetStatePath(IdFromSlot(wSlot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(wsSerialized))
case renderGetBlockPath(IdFromRoot(bRoot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serBlock))
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case getWeakSubjectivityPath:
res.StatusCode = http.StatusOK
cp := struct {
Epoch string `json:"epoch"`
Root string `json:"root"`
}{
Epoch: fmt.Sprintf("%d", slots.ToEpoch(b.Block().Slot())),
Root: fmt.Sprintf("%#x", bRoot),
}
wsr := struct {
Checkpoint interface{} `json:"ws_checkpoint"`
StateRoot string `json:"state_root"`
}{
Checkpoint: cp,
StateRoot: fmt.Sprintf("%#x", wRoot),
}
rb, err := marshalToEnvelope(wsr)
require.NoError(t, err)
res.Body = io.NopCloser(bytes.NewBuffer(rb))
case renderGetStatePath(IdFromSlot(wSlot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(wsSerialized))
case renderGetBlockPath(IdFromRoot(bRoot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serBlock))
}
return res, nil
}},
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
wsd, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
require.NoError(t, err)
@@ -232,6 +227,7 @@ func TestDownloadBackwardsCompatibleCombined(t *testing.T) {
wst, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, cfg.GenesisEpoch)
require.NoError(t, err)
require.NoError(t, wst.SetFork(fork))
// set up checkpoint block
@@ -266,42 +262,39 @@ func TestDownloadBackwardsCompatibleCombined(t *testing.T) {
wsSerialized, err := wst.MarshalSSZ()
require.NoError(t, err)
hc := &http.Client{
Transport: &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case getNodeVersionPath:
res.StatusCode = http.StatusOK
b := bytes.NewBuffer(nil)
d := struct {
Version string `json:"version"`
}{
Version: "Lighthouse/v0.1.5 (Linux x86_64)",
}
encoded, err := marshalToEnvelope(d)
require.NoError(t, err)
b.Write(encoded)
res.Body = io.NopCloser(b)
case getWeakSubjectivityPath:
res.StatusCode = http.StatusNotFound
case renderGetStatePath(IdHead):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serialized))
case renderGetStatePath(IdFromSlot(wSlot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(wsSerialized))
case renderGetBlockPath(IdFromRoot(bRoot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serBlock))
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case getNodeVersionPath:
res.StatusCode = http.StatusOK
b := bytes.NewBuffer(nil)
d := struct {
Version string `json:"version"`
}{
Version: "Lighthouse/v0.1.5 (Linux x86_64)",
}
encoded, err := marshalToEnvelope(d)
require.NoError(t, err)
b.Write(encoded)
res.Body = io.NopCloser(b)
case getWeakSubjectivityPath:
res.StatusCode = http.StatusNotFound
case renderGetStatePath(IdHead):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serialized))
case renderGetStatePath(IdFromSlot(wSlot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(wsSerialized))
case renderGetBlockPath(IdFromRoot(bRoot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serBlock))
}
return res, nil
}},
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
wsPub, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
require.NoError(t, err)
@@ -315,21 +308,16 @@ func TestGetWeakSubjectivityEpochFromHead(t *testing.T) {
st, expectedEpoch := defaultTestHeadState(t, params.MainnetConfig())
serialized, err := st.MarshalSSZ()
require.NoError(t, err)
hc := &http.Client{
Transport: &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case renderGetStatePath(IdHead):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serialized))
}
return res, nil
}},
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
if req.URL.Path == renderGetStatePath(IdHead) {
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serialized))
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
actualEpoch, err := getWeakSubjectivityEpochFromHead(context.Background(), c)
require.NoError(t, err)
require.Equal(t, expectedEpoch, actualEpoch)
@@ -413,6 +401,7 @@ func TestDownloadFinalizedData(t *testing.T) {
st, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, epoch)
require.NoError(t, err)
require.NoError(t, st.SetFork(fork))
require.NoError(t, st.SetSlot(slot))
@@ -448,29 +437,24 @@ func TestDownloadFinalizedData(t *testing.T) {
ms, err := st.MarshalSSZ()
require.NoError(t, err)
hc := &http.Client{
Transport: &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case renderGetStatePath(IdFinalized):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(ms))
case renderGetBlockPath(IdFromRoot(br)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(mb))
default:
res.StatusCode = http.StatusInternalServerError
res.Body = io.NopCloser(bytes.NewBufferString(""))
}
return res, nil
}},
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case renderGetStatePath(IdFinalized):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(ms))
case renderGetBlockPath(IdFromSlot(b.Block().Slot())):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(mb))
default:
res.StatusCode = http.StatusInternalServerError
res.Body = io.NopCloser(bytes.NewBufferString(""))
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
// sanity check before we go through checkpoint
// make sure we can download the state and unmarshal it with the VersionedUnmarshaler
sb, err := c.GetState(ctx, IdFinalized)
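These tests now build clients through NewClient with the shared client.WithRoundTripper option instead of populating unexported Client fields. A minimal sketch of the stub-transport pattern; newStubTransport is a hypothetical convenience and not part of the package:

```go
package beacon_test

import (
	"bytes"
	"io"
	"net/http"
)

// testRT mirrors the stub round-tripper used in the tests above; it lets a
// test serve canned responses without opening a socket.
type testRT struct {
	rt func(*http.Request) (*http.Response, error)
}

func (t *testRT) RoundTrip(req *http.Request) (*http.Response, error) {
	return t.rt(req)
}

// newStubTransport answers every request with the given status and body.
func newStubTransport(status int, body []byte) *testRT {
	return &testRT{rt: func(req *http.Request) (*http.Response, error) {
		return &http.Response{
			Request:    req,
			StatusCode: status,
			Body:       io.NopCloser(bytes.NewBuffer(body)),
		}, nil
	}}
}
```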

View File

@@ -5,8 +5,6 @@ import (
"context"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"net/url"
"path"
@@ -14,8 +12,8 @@ import (
"sort"
"strconv"
"text/template"
"time"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/network/forks"
v1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
@@ -54,8 +52,6 @@ const (
IdFinalized StateOrBlockId = "finalized"
)
var ErrMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
// IdFromRoot encodes a block root in the format expected by the API in places where a root can be used to identify
// a BeaconState or SignedBeaconBlock.
func IdFromRoot(r [32]byte) StateOrBlockId {
@@ -85,96 +81,22 @@ func idTemplate(ts string) func(StateOrBlockId) string {
return f
}
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
// WithTimeout sets the .Timeout attribute of the wrapped http.Client.
func WithTimeout(timeout time.Duration) ClientOpt {
return func(c *Client) {
c.hc.Timeout = timeout
}
func renderGetBlockPath(id StateOrBlockId) string {
return path.Join(getSignedBlockPath, string(id))
}
// Client provides a collection of helper methods for calling the Eth Beacon Node API endpoints.
type Client struct {
hc *http.Client
baseURL *url.URL
*client.Client
}
// NewClient constructs a new client with the provided options (ex WithTimeout).
// `host` is the base host + port used to construct request urls. This value can be
// a URL string, or NewClient will assume an http endpoint if just `host:port` is used.
func NewClient(host string, opts ...ClientOpt) (*Client, error) {
u, err := urlForHost(host)
// NewClient returns a new Client that includes functions for rest calls to Beacon API.
func NewClient(host string, opts ...client.ClientOpt) (*Client, error) {
c, err := client.NewClient(host, opts...)
if err != nil {
return nil, err
}
c := &Client{
hc: &http.Client{},
baseURL: u,
}
for _, o := range opts {
o(c)
}
return c, nil
}
func urlForHost(h string) (*url.URL, error) {
// try to parse as url (being permissive)
u, err := url.Parse(h)
if err == nil && u.Host != "" {
return u, nil
}
// try to parse as host:port
host, port, err := net.SplitHostPort(h)
if err != nil {
return nil, ErrMalformedHostname
}
return &url.URL{Host: fmt.Sprintf("%s:%s", host, port), Scheme: "http"}, nil
}
// NodeURL returns a human-readable string representation of the beacon node base url.
func (c *Client) NodeURL() string {
return c.baseURL.String()
}
type reqOption func(*http.Request)
func withSSZEncoding() reqOption {
return func(req *http.Request) {
req.Header.Set("Accept", "application/octet-stream")
}
}
// get is a generic, opinionated GET function to reduce boilerplate amongst the getters in this package.
func (c *Client) get(ctx context.Context, path string, opts ...reqOption) ([]byte, error) {
u := c.baseURL.ResolveReference(&url.URL{Path: path})
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
if err != nil {
return nil, err
}
for _, o := range opts {
o(req)
}
r, err := c.hc.Do(req)
if err != nil {
return nil, err
}
defer func() {
err = r.Body.Close()
}()
if r.StatusCode != http.StatusOK {
return nil, non200Err(r)
}
b, err := io.ReadAll(r.Body)
if err != nil {
return nil, errors.Wrap(err, "error reading http response body from GetBlock")
}
return b, nil
}
func renderGetBlockPath(id StateOrBlockId) string {
return path.Join(getSignedBlockPath, string(id))
return &Client{c}, nil
}
// GetBlock retrieves the SignedBeaconBlock for the given block id.
@@ -184,7 +106,7 @@ func renderGetBlockPath(id StateOrBlockId) string {
// The return value contains the ssz-encoded bytes.
func (c *Client) GetBlock(ctx context.Context, blockId StateOrBlockId) ([]byte, error) {
blockPath := renderGetBlockPath(blockId)
b, err := c.get(ctx, blockPath, withSSZEncoding())
b, err := c.Get(ctx, blockPath, client.WithSSZEncoding())
if err != nil {
return nil, errors.Wrapf(err, "error requesting state by id = %s", blockId)
}
@@ -199,7 +121,7 @@ var getBlockRootTpl = idTemplate(getBlockRootPath)
// for the named identifiers.
func (c *Client) GetBlockRoot(ctx context.Context, blockId StateOrBlockId) ([32]byte, error) {
rootPath := getBlockRootTpl(blockId)
b, err := c.get(ctx, rootPath)
b, err := c.Get(ctx, rootPath)
if err != nil {
return [32]byte{}, errors.Wrapf(err, "error requesting block root by id = %s", blockId)
}
@@ -222,7 +144,7 @@ var getForkTpl = idTemplate(getForkForStatePath)
// <slot>, <hex encoded blockRoot with 0x prefix>. Variables of type StateOrBlockId are exported by this package
// for the named identifiers.
func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fork, error) {
body, err := c.get(ctx, getForkTpl(stateId))
body, err := c.Get(ctx, getForkTpl(stateId))
if err != nil {
return nil, errors.Wrapf(err, "error requesting fork by state id = %s", stateId)
}
@@ -238,7 +160,7 @@ func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fo
// GetForkSchedule retrieve all forks, past present and future, of which this node is aware.
func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, error) {
body, err := c.get(ctx, getForkSchedulePath)
body, err := c.Get(ctx, getForkSchedulePath)
if err != nil {
return nil, errors.Wrap(err, "error requesting fork schedule")
}
@@ -256,7 +178,7 @@ func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, er
// GetConfigSpec retrieve the current configs of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*v1.SpecResponse, error) {
body, err := c.get(ctx, getConfigSpecPath)
body, err := c.Get(ctx, getConfigSpecPath)
if err != nil {
return nil, errors.Wrap(err, "error requesting configSpecPath")
}
@@ -279,7 +201,7 @@ var versionRE = regexp.MustCompile(`^(\w+)/(v\d+\.\d+\.\d+[-a-zA-Z0-9]*)\s*/?(.*
func parseNodeVersion(v string) (*NodeVersion, error) {
groups := versionRE.FindStringSubmatch(v)
if len(groups) != 4 {
return nil, errors.Wrapf(ErrInvalidNodeVersion, "could not be parsed: %s", v)
return nil, errors.Wrapf(client.ErrInvalidNodeVersion, "could not be parsed: %s", v)
}
return &NodeVersion{
implementation: groups[1],
@@ -291,7 +213,7 @@ func parseNodeVersion(v string) (*NodeVersion, error) {
// GetNodeVersion requests that the beacon node identify information about its implementation in a format
// similar to a HTTP User-Agent field. ex: Lighthouse/v0.1.5 (Linux x86_64)
func (c *Client) GetNodeVersion(ctx context.Context) (*NodeVersion, error) {
b, err := c.get(ctx, getNodeVersionPath)
b, err := c.Get(ctx, getNodeVersionPath)
if err != nil {
return nil, errors.Wrap(err, "error requesting node version")
}
@@ -318,7 +240,7 @@ func renderGetStatePath(id StateOrBlockId) string {
// The return value contains the ssz-encoded bytes.
func (c *Client) GetState(ctx context.Context, stateId StateOrBlockId) ([]byte, error) {
statePath := path.Join(getStatePath, string(stateId))
b, err := c.get(ctx, statePath, withSSZEncoding())
b, err := c.Get(ctx, statePath, client.WithSSZEncoding())
if err != nil {
return nil, errors.Wrapf(err, "error requesting state by id = %s", stateId)
}
@@ -331,7 +253,7 @@ func (c *Client) GetState(ctx context.Context, stateId StateOrBlockId) ([]byte,
// - finds the highest non-skipped block preceding the epoch
// - returns the htr of the found block and returns this + the value of state_root from the block
func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData, error) {
body, err := c.get(ctx, getWeakSubjectivityPath)
body, err := c.Get(ctx, getWeakSubjectivityPath)
if err != nil {
return nil, err
}
@@ -362,7 +284,7 @@ func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData
// SubmitChangeBLStoExecution calls a beacon API endpoint to set the withdrawal addresses based on the given signed messages.
// If the API responds with something other than OK there will be failure messages associated to the corresponding request message.
func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*apimiddleware.SignedBLSToExecutionChangeJson) error {
u := c.baseURL.ResolveReference(&url.URL{Path: changeBLStoExecutionPath})
u := c.BaseURL().ResolveReference(&url.URL{Path: changeBLStoExecutionPath})
body, err := json.Marshal(request)
if err != nil {
return errors.Wrap(err, "failed to marshal JSON")
@@ -372,7 +294,7 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*apim
return errors.Wrap(err, "invalid format, failed to create new POST request object")
}
req.Header.Set("Content-Type", "application/json")
resp, err := c.hc.Do(req)
resp, err := c.Do(req)
if err != nil {
return err
}
@@ -401,7 +323,7 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*apim
// GetBLStoExecutionChanges gets all the set withdrawal messages in the node's operation pool.
// Returns a struct representation of json response.
func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*apimiddleware.BLSToExecutionChangesPoolResponseJson, error) {
body, err := c.get(ctx, changeBLStoExecutionPath)
body, err := c.Get(ctx, changeBLStoExecutionPath)
if err != nil {
return nil, err
}
@@ -413,23 +335,6 @@ func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*apimiddleware.B
return poolResponse, nil
}
func non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
var body string
if err != nil {
body = "(Unable to read response body.)"
} else {
body = "response body:\n" + string(bodyBytes)
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case 404:
return errors.Wrap(ErrNotFound, msg)
default:
return errors.Wrap(ErrNotOK, msg)
}
}
type forkResponse struct {
PreviousVersion string `json:"previous_version"`
CurrentVersion string `json:"current_version"`
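With the generic HTTP plumbing (get, baseURL, non200Err) moved into the shared api/client package, the beacon client keeps its typed getters and embeds client.Client. A minimal usage sketch under that assumption; the endpoint URL is a placeholder:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/prysmaticlabs/prysm/v4/api/client/beacon"
)

// Construct the beacon client the same way as before; the generic request
// logic now lives in the shared client.Client embedded in beacon.Client.
func main() {
	c, err := beacon.NewClient("http://localhost:3500")
	if err != nil {
		log.Fatal(err)
	}
	root, err := c.GetBlockRoot(context.Background(), beacon.IdHead)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("head block root: %#x\n", root)
}
```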

View File

@@ -4,6 +4,7 @@ import (
"net/url"
"testing"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
@@ -17,17 +18,17 @@ func TestParseNodeVersion(t *testing.T) {
{
name: "empty string",
v: "",
err: ErrInvalidNodeVersion,
err: client.ErrInvalidNodeVersion,
},
{
name: "Prysm as the version string",
v: "Prysm",
err: ErrInvalidNodeVersion,
err: client.ErrInvalidNodeVersion,
},
{
name: "semver only",
v: "v2.0.6",
err: ErrInvalidNodeVersion,
err: client.ErrInvalidNodeVersion,
},
{
name: "complete version",
@@ -91,7 +92,7 @@ func TestValidHostname(t *testing.T) {
{
name: "hostname without port",
hostArg: "mydomain.org",
err: ErrMalformedHostname,
err: client.ErrMalformedHostname,
},
{
name: "hostname with port",
@@ -132,7 +133,7 @@ func TestValidHostname(t *testing.T) {
return
}
require.NoError(t, err)
require.Equal(t, c.joined, cl.baseURL.ResolveReference(&url.URL{Path: c.path}).String())
require.Equal(t, c.joined, cl.BaseURL().ResolveReference(&url.URL{Path: c.path}).String())
})
}
}

View File

@@ -1,13 +0,0 @@
package beacon
import "github.com/pkg/errors"
// ErrNotOK is used to indicate when an HTTP request to the Beacon Node API failed with any non-2xx response code.
// More specific errors may be returned, but an error in reaction to a non-2xx response will always wrap ErrNotOK.
var ErrNotOK = errors.New("did not receive 2xx response from API")
// ErrNotFound specifically means that a '404 - NOT FOUND' response was received from the API.
var ErrNotFound = errors.Wrap(ErrNotOK, "recv 404 NotFound response from API")
// ErrInvalidNodeVersion indicates that the /eth/v1/node/version api response format was not recognized.
var ErrInvalidNodeVersion = errors.New("invalid node version response")

View File

@@ -3,7 +3,6 @@ package builder
import (
"math/big"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
consensus_types "github.com/prysmaticlabs/prysm/v4/consensus-types"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -162,9 +161,6 @@ func WrappedBuilderBidCapella(p *ethpb.BuilderBidCapella) (Bid, error) {
// Header returns the execution data interface.
func (b builderBidCapella) Header() (interfaces.ExecutionData, error) {
if b.p == nil {
return nil, errors.New("builder bid is nil")
}
// We have to convert big endian to little endian because the value is coming from the execution layer.
v := big.NewInt(0).SetBytes(bytesutil.ReverseByteOrder(b.p.Value))
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header, math.WeiToGwei(v))

View File

@@ -11,7 +11,6 @@ import (
"net/url"
"strings"
"text/template"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -36,7 +35,6 @@ const (
var errMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
var errMalformedRequest = errors.New("required request data are missing")
var errNotBlinded = errors.New("submitted block is not blinded")
var submitBlindedBlockTimeout = 3 * time.Second
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
@@ -292,8 +290,6 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
return nil, errors.Wrap(err, "error encoding the SignedBlindedBeaconBlockBellatrix value body in SubmitBlindedBlock")
}
ctx, cancel := context.WithTimeout(ctx, submitBlindedBlockTimeout)
defer cancel()
versionOpt := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(version.Bellatrix))
}
@@ -325,8 +321,6 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
return nil, errors.Wrap(err, "error encoding the SignedBlindedBeaconBlockCapella value body in SubmitBlindedBlockCapella")
}
ctx, cancel := context.WithTimeout(ctx, submitBlindedBlockTimeout)
defer cancel()
versionOpt := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(version.Capella))
}
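Removing submitBlindedBlockTimeout shifts deadline control to the caller: SubmitBlindedBlock now runs under whatever deadline the incoming context carries. A sketch of the caller-side replacement, assuming the surrounding v4 signature; the one-second budget is an arbitrary example:

```go
package builderutil

import (
	"context"
	"time"

	"github.com/prysmaticlabs/prysm/v4/api/client/builder"
	"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
)

// submitWithDeadline shows the caller-side replacement for the deleted
// hard-coded timeout: the deadline now travels in on the context.
// The one-second budget is an arbitrary example value.
func submitWithDeadline(ctx context.Context, c *builder.Client, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, error) {
	ctx, cancel := context.WithTimeout(ctx, time.Second)
	defer cancel()
	return c.SubmitBlindedBlock(ctx, sb)
}
```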

View File

@@ -14,14 +14,17 @@ import (
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
// SignedValidatorRegistration a struct for signed validator registrations.
type SignedValidatorRegistration struct {
*eth.SignedValidatorRegistrationV1
}
// ValidatorRegistration a struct for validator registrations.
type ValidatorRegistration struct {
*eth.ValidatorRegistrationV1
}
// MarshalJSON returns a json representation copy of signed validator registration.
func (r *SignedValidatorRegistration) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *ValidatorRegistration `json:"message"`
@@ -32,6 +35,7 @@ func (r *SignedValidatorRegistration) MarshalJSON() ([]byte, error) {
})
}
// UnmarshalJSON returns a byte representation of signed validator registration from json.
func (r *SignedValidatorRegistration) UnmarshalJSON(b []byte) error {
if r.SignedValidatorRegistrationV1 == nil {
r.SignedValidatorRegistrationV1 = &eth.SignedValidatorRegistrationV1{}
@@ -48,6 +52,7 @@ func (r *SignedValidatorRegistration) UnmarshalJSON(b []byte) error {
return nil
}
// MarshalJSON returns a json representation copy of validator registration.
func (r *ValidatorRegistration) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
@@ -62,6 +67,7 @@ func (r *ValidatorRegistration) MarshalJSON() ([]byte, error) {
})
}
// UnmarshalJSON returns a byte representation of validator registration from json.
func (r *ValidatorRegistration) UnmarshalJSON(b []byte) error {
if r.ValidatorRegistrationV1 == nil {
r.ValidatorRegistrationV1 = &eth.ValidatorRegistrationV1{}
@@ -92,6 +98,7 @@ func (r *ValidatorRegistration) UnmarshalJSON(b []byte) error {
var errInvalidUint256 = errors.New("invalid Uint256")
var errDecodeUint256 = errors.New("unable to decode into Uint256")
// Uint256 a wrapper representation of big.Int
type Uint256 struct {
*big.Int
}
@@ -118,7 +125,7 @@ func sszBytesToUint256(b []byte) (Uint256, error) {
return Uint256{Int: bi}, nil
}
// SSZBytes creates an ssz-style (little-endian byte slice) representation of the Uint256
// SSZBytes creates an ssz-style (little-endian byte slice) representation of the Uint256.
func (s Uint256) SSZBytes() []byte {
if !isValidUint256(s.Int) {
return []byte{}
@@ -126,18 +133,19 @@ func (s Uint256) SSZBytes() []byte {
return bytesutil.PadTo(bytesutil.ReverseByteOrder(s.Int.Bytes()), 32)
}
// UnmarshalJSON takes in a byte array and unmarshals the value in Uint256
func (s *Uint256) UnmarshalJSON(t []byte) error {
start := 0
end := len(t)
if t[0] == '"' {
start += 1
if len(t) < 2 {
return errors.Errorf("provided Uint256 json string is too short: %s", string(t))
}
if t[end-1] == '"' {
end -= 1
if t[0] != '"' || t[end-1] != '"' {
return errors.Errorf("provided Uint256 json string is malformed: %s", string(t))
}
return s.UnmarshalText(t[start:end])
return s.UnmarshalText(t[1 : end-1])
}
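The rewritten UnmarshalJSON rejects tokens that are too short or not fully quoted instead of silently trimming whichever quotes happen to be present. A self-contained sketch of the same validation; parseUint256JSON is hypothetical, and big.Int parsing stands in for UnmarshalText:

```go
package main

import (
	"fmt"
	"math/big"
)

// parseUint256JSON sketches the stricter parsing above: the JSON token must
// be a fully quoted decimal string; bare or half-quoted tokens are rejected.
func parseUint256JSON(t []byte) (*big.Int, error) {
	if len(t) < 2 {
		return nil, fmt.Errorf("provided Uint256 json string is too short: %s", t)
	}
	if t[0] != '"' || t[len(t)-1] != '"' {
		return nil, fmt.Errorf("provided Uint256 json string is malformed: %s", t)
	}
	i, ok := new(big.Int).SetString(string(t[1:len(t)-1]), 10)
	if !ok {
		return nil, fmt.Errorf("not a decimal integer: %s", t)
	}
	return i, nil
}

func main() {
	for _, in := range []string{`"123"`, `123`, `"12`} {
		v, err := parseUint256JSON([]byte(in))
		fmt.Println(in, "->", v, err)
	}
}
```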
// UnmarshalText takes in a byte array and unmarshals the text in Uint256
func (s *Uint256) UnmarshalText(t []byte) error {
if s.Int == nil {
s.Int = big.NewInt(0)
@@ -153,6 +161,7 @@ func (s *Uint256) UnmarshalText(t []byte) error {
return nil
}
// MarshalJSON returns a json byte representation of Uint256.
func (s Uint256) MarshalJSON() ([]byte, error) {
t, err := s.MarshalText()
if err != nil {
@@ -163,6 +172,7 @@ func (s Uint256) MarshalJSON() ([]byte, error) {
return t, nil
}
// MarshalText returns a text byte representation of Uint256.
func (s Uint256) MarshalText() ([]byte, error) {
if !isValidUint256(s.Int) {
return nil, errors.Wrapf(errInvalidUint256, "value=%s", s.Int)
@@ -170,22 +180,27 @@ func (s Uint256) MarshalText() ([]byte, error) {
return []byte(s.String()), nil
}
// Uint64String is a custom type that allows marshalling from text to uint64 and vice versa.
type Uint64String uint64
// UnmarshalText takes a byte array and unmarshals the text in Uint64String.
func (s *Uint64String) UnmarshalText(t []byte) error {
u, err := strconv.ParseUint(string(t), 10, 64)
*s = Uint64String(u)
return err
}
// MarshalText returns a byte representation of the text from Uint64String.
func (s Uint64String) MarshalText() ([]byte, error) {
return []byte(fmt.Sprintf("%d", s)), nil
}
// VersionResponse is a JSON representation of a field in the builder API header response.
type VersionResponse struct {
Version string `json:"version"`
}
// ExecHeaderResponse is a JSON representation of the builder API header response for Bellatrix.
type ExecHeaderResponse struct {
Version string `json:"version"`
Data struct {
@@ -194,6 +209,7 @@ type ExecHeaderResponse struct {
} `json:"data"`
}
// ToProto returns a SignedBuilderBid from ExecHeaderResponse for Bellatrix.
func (ehr *ExecHeaderResponse) ToProto() (*eth.SignedBuilderBid, error) {
bb, err := ehr.Data.Message.ToProto()
if err != nil {
@@ -205,6 +221,7 @@ func (ehr *ExecHeaderResponse) ToProto() (*eth.SignedBuilderBid, error) {
}, nil
}
// ToProto returns a BuilderBid Proto for Bellatrix.
func (bb *BuilderBid) ToProto() (*eth.BuilderBid, error) {
header, err := bb.Header.ToProto()
if err != nil {
@@ -217,31 +234,34 @@ func (bb *BuilderBid) ToProto() (*eth.BuilderBid, error) {
}, nil
}
// ToProto returns a ExecutionPayloadHeader for Bellatrix.
func (h *ExecutionPayloadHeader) ToProto() (*v1.ExecutionPayloadHeader, error) {
return &v1.ExecutionPayloadHeader{
ParentHash: h.ParentHash,
FeeRecipient: h.FeeRecipient,
StateRoot: h.StateRoot,
ReceiptsRoot: h.ReceiptsRoot,
LogsBloom: h.LogsBloom,
PrevRandao: h.PrevRandao,
ParentHash: bytesutil.SafeCopyBytes(h.ParentHash),
FeeRecipient: bytesutil.SafeCopyBytes(h.FeeRecipient),
StateRoot: bytesutil.SafeCopyBytes(h.StateRoot),
ReceiptsRoot: bytesutil.SafeCopyBytes(h.ReceiptsRoot),
LogsBloom: bytesutil.SafeCopyBytes(h.LogsBloom),
PrevRandao: bytesutil.SafeCopyBytes(h.PrevRandao),
BlockNumber: uint64(h.BlockNumber),
GasLimit: uint64(h.GasLimit),
GasUsed: uint64(h.GasUsed),
Timestamp: uint64(h.Timestamp),
ExtraData: h.ExtraData,
BaseFeePerGas: h.BaseFeePerGas.SSZBytes(),
BlockHash: h.BlockHash,
TransactionsRoot: h.TransactionsRoot,
ExtraData: bytesutil.SafeCopyBytes(h.ExtraData),
BaseFeePerGas: bytesutil.SafeCopyBytes(h.BaseFeePerGas.SSZBytes()),
BlockHash: bytesutil.SafeCopyBytes(h.BlockHash),
TransactionsRoot: bytesutil.SafeCopyBytes(h.TransactionsRoot),
}, nil
}
// BuilderBid is part of ExecHeaderResponse for Bellatrix.
type BuilderBid struct {
Header *ExecutionPayloadHeader `json:"header"`
Value Uint256 `json:"value"`
Pubkey hexutil.Bytes `json:"pubkey"`
}
// ExecutionPayloadHeader is a field in BuilderBid.
type ExecutionPayloadHeader struct {
ParentHash hexutil.Bytes `json:"parent_hash"`
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
@@ -260,6 +280,7 @@ type ExecutionPayloadHeader struct {
*v1.ExecutionPayloadHeader
}
// MarshalJSON returns the JSON bytes representation of ExecutionPayloadHeader.
func (h *ExecutionPayloadHeader) MarshalJSON() ([]byte, error) {
type MarshalCaller ExecutionPayloadHeader
baseFeePerGas, err := sszBytesToUint256(h.ExecutionPayloadHeader.BaseFeePerGas)
@@ -284,6 +305,7 @@ func (h *ExecutionPayloadHeader) MarshalJSON() ([]byte, error) {
})
}
// UnmarshalJSON takes in a JSON byte array and sets ExecutionPayloadHeader.
func (h *ExecutionPayloadHeader) UnmarshalJSON(b []byte) error {
type UnmarshalCaller ExecutionPayloadHeader
uc := &UnmarshalCaller{}
@@ -297,11 +319,13 @@ func (h *ExecutionPayloadHeader) UnmarshalJSON(b []byte) error {
return err
}
// ExecPayloadResponse is the builder API /eth/v1/builder/blinded_blocks for Bellatrix.
type ExecPayloadResponse struct {
Version string `json:"version"`
Data ExecutionPayload `json:"data"`
}
// ExecutionPayload is a field of ExecPayloadResponse
type ExecutionPayload struct {
ParentHash hexutil.Bytes `json:"parent_hash"`
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
@@ -319,29 +343,31 @@ type ExecutionPayload struct {
Transactions []hexutil.Bytes `json:"transactions"`
}
// ToProto returns a ExecutionPayload Proto from ExecPayloadResponse
func (r *ExecPayloadResponse) ToProto() (*v1.ExecutionPayload, error) {
return r.Data.ToProto()
}
// ToProto returns a ExecutionPayload Proto
func (p *ExecutionPayload) ToProto() (*v1.ExecutionPayload, error) {
txs := make([][]byte, len(p.Transactions))
for i := range p.Transactions {
txs[i] = p.Transactions[i]
txs[i] = bytesutil.SafeCopyBytes(p.Transactions[i])
}
return &v1.ExecutionPayload{
ParentHash: p.ParentHash,
FeeRecipient: p.FeeRecipient,
StateRoot: p.StateRoot,
ReceiptsRoot: p.ReceiptsRoot,
LogsBloom: p.LogsBloom,
PrevRandao: p.PrevRandao,
ParentHash: bytesutil.SafeCopyBytes(p.ParentHash),
FeeRecipient: bytesutil.SafeCopyBytes(p.FeeRecipient),
StateRoot: bytesutil.SafeCopyBytes(p.StateRoot),
ReceiptsRoot: bytesutil.SafeCopyBytes(p.ReceiptsRoot),
LogsBloom: bytesutil.SafeCopyBytes(p.LogsBloom),
PrevRandao: bytesutil.SafeCopyBytes(p.PrevRandao),
BlockNumber: uint64(p.BlockNumber),
GasLimit: uint64(p.GasLimit),
GasUsed: uint64(p.GasUsed),
Timestamp: uint64(p.Timestamp),
ExtraData: p.ExtraData,
BaseFeePerGas: p.BaseFeePerGas.SSZBytes(),
BlockHash: p.BlockHash,
ExtraData: bytesutil.SafeCopyBytes(p.ExtraData),
BaseFeePerGas: bytesutil.SafeCopyBytes(p.BaseFeePerGas.SSZBytes()),
BlockHash: bytesutil.SafeCopyBytes(p.BlockHash),
Transactions: txs,
}, nil
}
@@ -355,22 +381,22 @@ func FromProto(payload *v1.ExecutionPayload) (ExecutionPayload, error) {
}
txs := make([]hexutil.Bytes, len(payload.Transactions))
for i := range payload.Transactions {
txs[i] = payload.Transactions[i]
txs[i] = bytesutil.SafeCopyBytes(payload.Transactions[i])
}
return ExecutionPayload{
ParentHash: payload.ParentHash,
FeeRecipient: payload.FeeRecipient,
StateRoot: payload.StateRoot,
ReceiptsRoot: payload.ReceiptsRoot,
LogsBloom: payload.LogsBloom,
PrevRandao: payload.PrevRandao,
ParentHash: bytesutil.SafeCopyBytes(payload.ParentHash),
FeeRecipient: bytesutil.SafeCopyBytes(payload.FeeRecipient),
StateRoot: bytesutil.SafeCopyBytes(payload.StateRoot),
ReceiptsRoot: bytesutil.SafeCopyBytes(payload.ReceiptsRoot),
LogsBloom: bytesutil.SafeCopyBytes(payload.LogsBloom),
PrevRandao: bytesutil.SafeCopyBytes(payload.PrevRandao),
BlockNumber: Uint64String(payload.BlockNumber),
GasLimit: Uint64String(payload.GasLimit),
GasUsed: Uint64String(payload.GasUsed),
Timestamp: Uint64String(payload.Timestamp),
ExtraData: payload.ExtraData,
ExtraData: bytesutil.SafeCopyBytes(payload.ExtraData),
BaseFeePerGas: bFee,
BlockHash: payload.BlockHash,
BlockHash: bytesutil.SafeCopyBytes(payload.BlockHash),
Transactions: txs,
}, nil
}
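The conversions above now pass every byte slice through bytesutil.SafeCopyBytes so the returned protos own their buffers instead of aliasing JSON-decoded ones. A runnable sketch of the aliasing hazard being avoided; safeCopyBytes mirrors the helper's copy semantics:

```go
package main

import "fmt"

// safeCopyBytes mirrors the behavior of bytesutil.SafeCopyBytes used above:
// it returns an independent copy so the destination cannot alias the source.
func safeCopyBytes(b []byte) []byte {
	if b == nil {
		return nil
	}
	out := make([]byte, len(b))
	copy(out, b)
	return out
}

func main() {
	src := []byte{0xaa, 0xbb}
	aliased := src               // shares backing storage with src
	copied := safeCopyBytes(src) // independent storage
	src[0] = 0x00
	fmt.Println(aliased[0], copied[0]) // prints "0 170": the copy is unaffected
}
```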
@@ -384,36 +410,37 @@ func FromProtoCapella(payload *v1.ExecutionPayloadCapella) (ExecutionPayloadCape
}
txs := make([]hexutil.Bytes, len(payload.Transactions))
for i := range payload.Transactions {
txs[i] = payload.Transactions[i]
txs[i] = bytesutil.SafeCopyBytes(payload.Transactions[i])
}
withdrawals := make([]Withdrawal, len(payload.Withdrawals))
for i, w := range payload.Withdrawals {
withdrawals[i] = Withdrawal{
Index: Uint256{Int: big.NewInt(0).SetUint64(w.Index)},
ValidatorIndex: Uint256{Int: big.NewInt(0).SetUint64(uint64(w.ValidatorIndex))},
Address: w.Address,
Address: bytesutil.SafeCopyBytes(w.Address),
Amount: Uint256{Int: big.NewInt(0).SetUint64(w.Amount)},
}
}
return ExecutionPayloadCapella{
ParentHash: payload.ParentHash,
FeeRecipient: payload.FeeRecipient,
StateRoot: payload.StateRoot,
ReceiptsRoot: payload.ReceiptsRoot,
LogsBloom: payload.LogsBloom,
PrevRandao: payload.PrevRandao,
ParentHash: bytesutil.SafeCopyBytes(payload.ParentHash),
FeeRecipient: bytesutil.SafeCopyBytes(payload.FeeRecipient),
StateRoot: bytesutil.SafeCopyBytes(payload.StateRoot),
ReceiptsRoot: bytesutil.SafeCopyBytes(payload.ReceiptsRoot),
LogsBloom: bytesutil.SafeCopyBytes(payload.LogsBloom),
PrevRandao: bytesutil.SafeCopyBytes(payload.PrevRandao),
BlockNumber: Uint64String(payload.BlockNumber),
GasLimit: Uint64String(payload.GasLimit),
GasUsed: Uint64String(payload.GasUsed),
Timestamp: Uint64String(payload.Timestamp),
ExtraData: payload.ExtraData,
ExtraData: bytesutil.SafeCopyBytes(payload.ExtraData),
BaseFeePerGas: bFee,
BlockHash: payload.BlockHash,
BlockHash: bytesutil.SafeCopyBytes(payload.BlockHash),
Transactions: txs,
Withdrawals: withdrawals,
}, nil
}
// ExecHeaderResponseCapella is the response of builder API /eth/v1/builder/header/{slot}/{parent_hash}/{pubkey} for Capella.
type ExecHeaderResponseCapella struct {
Data struct {
Signature hexutil.Bytes `json:"signature"`
@@ -421,6 +448,7 @@ type ExecHeaderResponseCapella struct {
} `json:"data"`
}
// ToProto returns a SignedBuilderBidCapella Proto from ExecHeaderResponseCapella.
func (ehr *ExecHeaderResponseCapella) ToProto() (*eth.SignedBuilderBidCapella, error) {
bb, err := ehr.Data.Message.ToProto()
if err != nil {
@@ -428,10 +456,11 @@ func (ehr *ExecHeaderResponseCapella) ToProto() (*eth.SignedBuilderBidCapella, e
}
return &eth.SignedBuilderBidCapella{
Message: bb,
Signature: ehr.Data.Signature,
Signature: bytesutil.SafeCopyBytes(ehr.Data.Signature),
}, nil
}
// ToProto returns a BuilderBidCapella Proto.
func (bb *BuilderBidCapella) ToProto() (*eth.BuilderBidCapella, error) {
header, err := bb.Header.ToProto()
if err != nil {
@@ -439,37 +468,40 @@ func (bb *BuilderBidCapella) ToProto() (*eth.BuilderBidCapella, error) {
}
return &eth.BuilderBidCapella{
Header: header,
Value: bb.Value.SSZBytes(),
Pubkey: bb.Pubkey,
Value: bytesutil.SafeCopyBytes(bb.Value.SSZBytes()),
Pubkey: bytesutil.SafeCopyBytes(bb.Pubkey),
}, nil
}
// ToProto returns a ExecutionPayloadHeaderCapella Proto
func (h *ExecutionPayloadHeaderCapella) ToProto() (*v1.ExecutionPayloadHeaderCapella, error) {
return &v1.ExecutionPayloadHeaderCapella{
ParentHash: h.ParentHash,
FeeRecipient: h.FeeRecipient,
StateRoot: h.StateRoot,
ReceiptsRoot: h.ReceiptsRoot,
LogsBloom: h.LogsBloom,
PrevRandao: h.PrevRandao,
ParentHash: bytesutil.SafeCopyBytes(h.ParentHash),
FeeRecipient: bytesutil.SafeCopyBytes(h.FeeRecipient),
StateRoot: bytesutil.SafeCopyBytes(h.StateRoot),
ReceiptsRoot: bytesutil.SafeCopyBytes(h.ReceiptsRoot),
LogsBloom: bytesutil.SafeCopyBytes(h.LogsBloom),
PrevRandao: bytesutil.SafeCopyBytes(h.PrevRandao),
BlockNumber: uint64(h.BlockNumber),
GasLimit: uint64(h.GasLimit),
GasUsed: uint64(h.GasUsed),
Timestamp: uint64(h.Timestamp),
ExtraData: h.ExtraData,
BaseFeePerGas: h.BaseFeePerGas.SSZBytes(),
BlockHash: h.BlockHash,
TransactionsRoot: h.TransactionsRoot,
WithdrawalsRoot: h.WithdrawalsRoot,
ExtraData: bytesutil.SafeCopyBytes(h.ExtraData),
BaseFeePerGas: bytesutil.SafeCopyBytes(h.BaseFeePerGas.SSZBytes()),
BlockHash: bytesutil.SafeCopyBytes(h.BlockHash),
TransactionsRoot: bytesutil.SafeCopyBytes(h.TransactionsRoot),
WithdrawalsRoot: bytesutil.SafeCopyBytes(h.WithdrawalsRoot),
}, nil
}
// BuilderBidCapella is field of ExecHeaderResponseCapella.
type BuilderBidCapella struct {
Header *ExecutionPayloadHeaderCapella `json:"header"`
Value Uint256 `json:"value"`
Pubkey hexutil.Bytes `json:"pubkey"`
}
// ExecutionPayloadHeaderCapella is a field in BuilderBidCapella.
type ExecutionPayloadHeaderCapella struct {
ParentHash hexutil.Bytes `json:"parent_hash"`
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
@@ -489,6 +521,7 @@ type ExecutionPayloadHeaderCapella struct {
*v1.ExecutionPayloadHeaderCapella
}
// MarshalJSON returns a JSON byte representation of ExecutionPayloadHeaderCapella.
func (h *ExecutionPayloadHeaderCapella) MarshalJSON() ([]byte, error) {
type MarshalCaller ExecutionPayloadHeaderCapella
baseFeePerGas, err := sszBytesToUint256(h.ExecutionPayloadHeaderCapella.BaseFeePerGas)
@@ -514,6 +547,7 @@ func (h *ExecutionPayloadHeaderCapella) MarshalJSON() ([]byte, error) {
})
}
// UnmarshalJSON takes a JSON byte array and sets ExecutionPayloadHeaderCapella.
func (h *ExecutionPayloadHeaderCapella) UnmarshalJSON(b []byte) error {
type UnmarshalCaller ExecutionPayloadHeaderCapella
uc := &UnmarshalCaller{}
@@ -527,11 +561,13 @@ func (h *ExecutionPayloadHeaderCapella) UnmarshalJSON(b []byte) error {
return err
}
// ExecPayloadResponseCapella is the response of the builder API /eth/v1/builder/blinded_blocks for Capella.
type ExecPayloadResponseCapella struct {
Version string `json:"version"`
Data ExecutionPayloadCapella `json:"data"`
}
// ExecutionPayloadCapella is a field of ExecPayloadResponseCapella.
type ExecutionPayloadCapella struct {
ParentHash hexutil.Bytes `json:"parent_hash"`
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
@@ -550,43 +586,46 @@ type ExecutionPayloadCapella struct {
Withdrawals []Withdrawal `json:"withdrawals"`
}
// ToProto returns an ExecutionPayloadCapella Proto.
func (r *ExecPayloadResponseCapella) ToProto() (*v1.ExecutionPayloadCapella, error) {
return r.Data.ToProto()
}
// ToProto returns an ExecutionPayloadCapella Proto.
func (p *ExecutionPayloadCapella) ToProto() (*v1.ExecutionPayloadCapella, error) {
txs := make([][]byte, len(p.Transactions))
for i := range p.Transactions {
txs[i] = p.Transactions[i]
txs[i] = bytesutil.SafeCopyBytes(p.Transactions[i])
}
withdrawals := make([]*v1.Withdrawal, len(p.Withdrawals))
for i, w := range p.Withdrawals {
withdrawals[i] = &v1.Withdrawal{
Index: w.Index.Uint64(),
ValidatorIndex: types.ValidatorIndex(w.ValidatorIndex.Uint64()),
Address: w.Address,
Address: bytesutil.SafeCopyBytes(w.Address),
Amount: w.Amount.Uint64(),
}
}
return &v1.ExecutionPayloadCapella{
ParentHash: p.ParentHash,
FeeRecipient: p.FeeRecipient,
StateRoot: p.StateRoot,
ReceiptsRoot: p.ReceiptsRoot,
LogsBloom: p.LogsBloom,
PrevRandao: p.PrevRandao,
ParentHash: bytesutil.SafeCopyBytes(p.ParentHash),
FeeRecipient: bytesutil.SafeCopyBytes(p.FeeRecipient),
StateRoot: bytesutil.SafeCopyBytes(p.StateRoot),
ReceiptsRoot: bytesutil.SafeCopyBytes(p.ReceiptsRoot),
LogsBloom: bytesutil.SafeCopyBytes(p.LogsBloom),
PrevRandao: bytesutil.SafeCopyBytes(p.PrevRandao),
BlockNumber: uint64(p.BlockNumber),
GasLimit: uint64(p.GasLimit),
GasUsed: uint64(p.GasUsed),
Timestamp: uint64(p.Timestamp),
ExtraData: p.ExtraData,
BaseFeePerGas: p.BaseFeePerGas.SSZBytes(),
BlockHash: p.BlockHash,
ExtraData: bytesutil.SafeCopyBytes(p.ExtraData),
BaseFeePerGas: bytesutil.SafeCopyBytes(p.BaseFeePerGas.SSZBytes()),
BlockHash: bytesutil.SafeCopyBytes(p.BlockHash),
Transactions: txs,
Withdrawals: withdrawals,
}, nil
}
// Withdrawal is a field of ExecutionPayloadCapella.
type Withdrawal struct {
Index Uint256 `json:"index"`
ValidatorIndex Uint256 `json:"validator_index"`
@@ -594,18 +633,22 @@ type Withdrawal struct {
Amount Uint256 `json:"amount"`
}
// SignedBlindedBeaconBlockBellatrix is the request object for builder API /eth/v1/builder/blinded_blocks.
type SignedBlindedBeaconBlockBellatrix struct {
*eth.SignedBlindedBeaconBlockBellatrix
}
// BlindedBeaconBlockBellatrix is a field in SignedBlindedBeaconBlockBellatrix.
type BlindedBeaconBlockBellatrix struct {
*eth.BlindedBeaconBlockBellatrix
}
// BlindedBeaconBlockBodyBellatrix is a field in BlindedBeaconBlockBellatrix.
type BlindedBeaconBlockBodyBellatrix struct {
*eth.BlindedBeaconBlockBodyBellatrix
}
// MarshalJSON returns a JSON byte array representation of SignedBlindedBeaconBlockBellatrix.
func (r *SignedBlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *BlindedBeaconBlockBellatrix `json:"message"`
@@ -616,6 +659,7 @@ func (r *SignedBlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
})
}
// MarshalJSON returns a JSON byte array representation of BlindedBeaconBlockBellatrix.
func (b *BlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot"`
@@ -632,10 +676,12 @@ func (b *BlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
})
}
// ProposerSlashing is a field in BlindedBeaconBlockBodyCapella.
type ProposerSlashing struct {
*eth.ProposerSlashing
}
// MarshalJSON returns a JSON byte array representation of ProposerSlashing.
func (s *ProposerSlashing) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
SignedHeader1 *SignedBeaconBlockHeader `json:"signed_header_1"`
@@ -646,10 +692,12 @@ func (s *ProposerSlashing) MarshalJSON() ([]byte, error) {
})
}
// SignedBeaconBlockHeader is a field of ProposerSlashing.
type SignedBeaconBlockHeader struct {
*eth.SignedBeaconBlockHeader
}
// MarshalJSON returns a JSON byte array representation of SignedBeaconBlockHeader.
func (h *SignedBeaconBlockHeader) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Header *BeaconBlockHeader `json:"message"`
@@ -660,10 +708,12 @@ func (h *SignedBeaconBlockHeader) MarshalJSON() ([]byte, error) {
})
}
// BeaconBlockHeader is a field of SignedBeaconBlockHeader.
type BeaconBlockHeader struct {
*eth.BeaconBlockHeader
}
// MarshalJSON returns a JSON byte array representation of BeaconBlockHeader.
func (h *BeaconBlockHeader) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot"`
@@ -680,10 +730,12 @@ func (h *BeaconBlockHeader) MarshalJSON() ([]byte, error) {
})
}
// IndexedAttestation is a field of AttesterSlashing.
type IndexedAttestation struct {
*eth.IndexedAttestation
}
// MarshalJSON returns a JSON byte array representation of IndexedAttestation.
func (a *IndexedAttestation) MarshalJSON() ([]byte, error) {
indices := make([]string, len(a.IndexedAttestation.AttestingIndices))
for i := range a.IndexedAttestation.AttestingIndices {
@@ -700,10 +752,12 @@ func (a *IndexedAttestation) MarshalJSON() ([]byte, error) {
})
}
// AttesterSlashing is a field of a Beacon Block Body.
type AttesterSlashing struct {
*eth.AttesterSlashing
}
// MarshalJSON returns a JSON byte array representation of AttesterSlashing.
func (s *AttesterSlashing) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Attestation1 *IndexedAttestation `json:"attestation_1"`
@@ -714,10 +768,12 @@ func (s *AttesterSlashing) MarshalJSON() ([]byte, error) {
})
}
// Checkpoint is a field of AttestationData.
type Checkpoint struct {
*eth.Checkpoint
}
// MarshalJSON returns a JSON byte array representation of Checkpoint.
func (c *Checkpoint) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Epoch string `json:"epoch"`
@@ -728,10 +784,12 @@ func (c *Checkpoint) MarshalJSON() ([]byte, error) {
})
}
// AttestationData is a field of IndexedAttestation.
type AttestationData struct {
*eth.AttestationData
}
// MarshalJSON returns a JSON byte array representation of AttestationData.
func (a *AttestationData) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot"`
@@ -748,10 +806,12 @@ func (a *AttestationData) MarshalJSON() ([]byte, error) {
})
}
// Attestation is a field of Beacon Block Body.
type Attestation struct {
*eth.Attestation
}
// MarshalJSON returns a JSON byte array representation of Attestation.
func (a *Attestation) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
AggregationBits hexutil.Bytes `json:"aggregation_bits"`
@@ -764,10 +824,12 @@ func (a *Attestation) MarshalJSON() ([]byte, error) {
})
}
// DepositData is a field of Deposit.
type DepositData struct {
*eth.Deposit_Data
}
// MarshalJSON returns a JSON byte array representation of DepositData.
func (d *DepositData) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
PublicKey hexutil.Bytes `json:"pubkey"`
@@ -782,10 +844,12 @@ func (d *DepositData) MarshalJSON() ([]byte, error) {
})
}
// Deposit is a field of Beacon Block Body.
type Deposit struct {
*eth.Deposit
}
// MarshalJSON returns a JSON byte array representation of Deposit.
func (d *Deposit) MarshalJSON() ([]byte, error) {
proof := make([]hexutil.Bytes, len(d.Proof))
for i := range d.Proof {
@@ -800,10 +864,12 @@ func (d *Deposit) MarshalJSON() ([]byte, error) {
})
}
// SignedVoluntaryExit is a field of Beacon Block Body.
type SignedVoluntaryExit struct {
*eth.SignedVoluntaryExit
}
// MarshalJSON returns a JSON byte array representation of SignedVoluntaryExit.
func (sve *SignedVoluntaryExit) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *VoluntaryExit `json:"message"`
@@ -814,10 +880,12 @@ func (sve *SignedVoluntaryExit) MarshalJSON() ([]byte, error) {
})
}
// VoluntaryExit is a field in SignedVoluntaryExit.
type VoluntaryExit struct {
*eth.VoluntaryExit
}
// MarshalJSON returns a JSON byte array representation of VoluntaryExit.
func (ve *VoluntaryExit) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Epoch string `json:"epoch"`
@@ -828,10 +896,12 @@ func (ve *VoluntaryExit) MarshalJSON() ([]byte, error) {
})
}
// SyncAggregate is a field of Beacon Block Body.
type SyncAggregate struct {
*eth.SyncAggregate
}
// MarshalJSON returns a JSON byte array representation of SyncAggregate.
func (s *SyncAggregate) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
SyncCommitteeBits hexutil.Bytes `json:"sync_committee_bits"`
@@ -842,10 +912,12 @@ func (s *SyncAggregate) MarshalJSON() ([]byte, error) {
})
}
// Eth1Data is a field of Beacon Block Body.
type Eth1Data struct {
*eth.Eth1Data
}
// MarshalJSON returns a JSON byte array representation of Eth1Data.
func (e *Eth1Data) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
DepositRoot hexutil.Bytes `json:"deposit_root"`
@@ -858,6 +930,7 @@ func (e *Eth1Data) MarshalJSON() ([]byte, error) {
})
}
// MarshalJSON returns a JSON byte array representation of BlindedBeaconBlockBodyBellatrix.
func (b *BlindedBeaconBlockBodyBellatrix) MarshalJSON() ([]byte, error) {
sve := make([]*SignedVoluntaryExit, len(b.BlindedBeaconBlockBodyBellatrix.VoluntaryExits))
for i := range b.BlindedBeaconBlockBodyBellatrix.VoluntaryExits {
@@ -904,10 +977,12 @@ func (b *BlindedBeaconBlockBodyBellatrix) MarshalJSON() ([]byte, error) {
})
}
// SignedBLSToExecutionChange is a field in Beacon Block Body for Capella and above.
type SignedBLSToExecutionChange struct {
*eth.SignedBLSToExecutionChange
}
// MarshalJSON returns a JSON byte array representation of SignedBLSToExecutionChange.
func (ch *SignedBLSToExecutionChange) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *BLSToExecutionChange `json:"message"`
@@ -918,10 +993,12 @@ func (ch *SignedBLSToExecutionChange) MarshalJSON() ([]byte, error) {
})
}
// BLSToExecutionChange is a field in SignedBLSToExecutionChange.
type BLSToExecutionChange struct {
*eth.BLSToExecutionChange
}
// MarshalJSON returns a JSON byte array representation of BLSToExecutionChange.
func (ch *BLSToExecutionChange) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
ValidatorIndex string `json:"validator_index"`
@@ -934,18 +1011,22 @@ func (ch *BLSToExecutionChange) MarshalJSON() ([]byte, error) {
})
}
// SignedBlindedBeaconBlockCapella is part of the request object sent to builder API /eth/v1/builder/blinded_blocks for Capella.
type SignedBlindedBeaconBlockCapella struct {
*eth.SignedBlindedBeaconBlockCapella
}
// BlindedBeaconBlockCapella is a field in SignedBlindedBeaconBlockCapella.
type BlindedBeaconBlockCapella struct {
*eth.BlindedBeaconBlockCapella
}
// BlindedBeaconBlockBodyCapella is a field in BlindedBeaconBlockCapella.
type BlindedBeaconBlockBodyCapella struct {
*eth.BlindedBeaconBlockBodyCapella
}
// MarshalJSON returns a JSON byte array representation of SignedBlindedBeaconBlockCapella.
func (b *SignedBlindedBeaconBlockCapella) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *BlindedBeaconBlockCapella `json:"message"`
@@ -956,6 +1037,7 @@ func (b *SignedBlindedBeaconBlockCapella) MarshalJSON() ([]byte, error) {
})
}
// MarshalJSON returns a JSON byte array representation of BlindedBeaconBlockCapella.
func (b *BlindedBeaconBlockCapella) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot"`
@@ -972,6 +1054,7 @@ func (b *BlindedBeaconBlockCapella) MarshalJSON() ([]byte, error) {
})
}
// MarshalJSON returns a JSON byte array representation of BlindedBeaconBlockBodyCapella.
func (b *BlindedBeaconBlockBodyCapella) MarshalJSON() ([]byte, error) {
sve := make([]*SignedVoluntaryExit, len(b.VoluntaryExits))
for i := range b.VoluntaryExits {
@@ -1024,6 +1107,7 @@ func (b *BlindedBeaconBlockBodyCapella) MarshalJSON() ([]byte, error) {
})
}
// ErrorMessage is a JSON representation of the builder API's returned error message.
type ErrorMessage struct {
Code int `json:"code"`
Message string `json:"message"`
}
@@ -1156,6 +1156,14 @@ func TestUint256Unmarshal(t *testing.T) {
require.Equal(t, expected, string(m))
}
func TestUint256Unmarshal_BadData(t *testing.T) {
var bigNum Uint256
assert.ErrorContains(t, "provided Uint256 json string is too short", bigNum.UnmarshalJSON([]byte{'"'}))
assert.ErrorContains(t, "provided Uint256 json string is malformed", bigNum.UnmarshalJSON([]byte{'"', '1', '2'}))
}
func TestUint256UnmarshalNegative(t *testing.T) {
m := "-1"
var value Uint256

api/client/client.go

@@ -0,0 +1,97 @@
package client
import (
"context"
"io"
"net"
"net/http"
"net/url"
"github.com/pkg/errors"
)
// Client is a wrapper object around the HTTP client.
type Client struct {
hc *http.Client
baseURL *url.URL
token string
}
// NewClient constructs a new client with the provided options (ex WithTimeout).
// `host` is the base host + port used to construct request URLs. This value can be
// a URL string, or NewClient will assume an HTTP endpoint if just `host:port` is used.
func NewClient(host string, opts ...ClientOpt) (*Client, error) {
u, err := urlForHost(host)
if err != nil {
return nil, err
}
c := &Client{
hc: &http.Client{},
baseURL: u,
}
for _, o := range opts {
o(c)
}
return c, nil
}
// Token returns the bearer token used for JWT authentication.
func (c *Client) Token() string {
return c.token
}
// BaseURL returns the base URL of the client.
func (c *Client) BaseURL() *url.URL {
return c.baseURL
}
// Do executes the request using the wrapped HTTP client.
func (c *Client) Do(req *http.Request) (*http.Response, error) {
return c.hc.Do(req)
}
func urlForHost(h string) (*url.URL, error) {
// try to parse as url (being permissive)
u, err := url.Parse(h)
if err == nil && u.Host != "" {
return u, nil
}
// try to parse as host:port
host, port, err := net.SplitHostPort(h)
if err != nil {
return nil, ErrMalformedHostname
}
return &url.URL{Host: net.JoinHostPort(host, port), Scheme: "http"}, nil
}
// NodeURL returns a human-readable string representation of the beacon node base URL.
func (c *Client) NodeURL() string {
return c.baseURL.String()
}
// Get is a generic, opinionated GET function to reduce boilerplate amongst the getters in this package.
func (c *Client) Get(ctx context.Context, path string, opts ...ReqOption) ([]byte, error) {
u := c.baseURL.ResolveReference(&url.URL{Path: path})
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
if err != nil {
return nil, err
}
for _, o := range opts {
o(req)
}
r, err := c.hc.Do(req)
if err != nil {
return nil, err
}
defer func() {
err = r.Body.Close()
}()
if r.StatusCode != http.StatusOK {
return nil, Non200Err(r)
}
b, err := io.ReadAll(r.Body)
if err != nil {
return nil, errors.Wrap(err, "error reading http response body")
}
return b, nil
}
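Taken together, a hypothetical caller of this new wrapper might look like the following sketch; the host, path, and timeout are placeholder values, not part of the change above.
package main
import (
	"context"
	"fmt"
	"time"
	"github.com/prysmaticlabs/prysm/v4/api/client"
)
func main() {
	// NewClient accepts a full URL or a host:port pair; a bare hostname
	// without a port is rejected with ErrMalformedHostname.
	c, err := client.NewClient("localhost:3500", client.WithTimeout(5*time.Second))
	if err != nil {
		panic(err)
	}
	// Get resolves the path against the base URL and returns the raw body,
	// or an error wrapping ErrNotOK for non-200 responses.
	body, err := c.Get(context.Background(), "/eth/v1/node/version")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}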

api/client/client_test.go

@@ -0,0 +1,48 @@
package client
import (
"net/url"
"testing"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func TestValidHostname(t *testing.T) {
cases := []struct {
name string
hostArg string
path string
joined string
err error
}{
{
name: "hostname without port",
hostArg: "mydomain.org",
err: ErrMalformedHostname,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
cl, err := NewClient(c.hostArg)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, c.joined, cl.BaseURL().ResolveReference(&url.URL{Path: c.path}).String())
})
}
}
func TestWithAuthenticationToken(t *testing.T) {
cl, err := NewClient("https://www.offchainlabs.com:3500", WithAuthenticationToken("my token"))
require.NoError(t, err)
require.Equal(t, cl.Token(), "my token")
}
func TestBaseURL(t *testing.T) {
cl, err := NewClient("https://www.offchainlabs.com:3500")
require.NoError(t, err)
require.Equal(t, "www.offchainlabs.com", cl.BaseURL().Hostname())
require.Equal(t, "3500", cl.BaseURL().Port())
}

api/client/errors.go

@@ -0,0 +1,40 @@
package client
import (
"fmt"
"io"
"net/http"
"github.com/pkg/errors"
)
// ErrMalformedHostname is used to indicate that a hostname's format is incorrect.
var ErrMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
// ErrNotOK is used to indicate when an HTTP request to the API failed with any non-2xx response code.
// More specific errors may be returned, but an error in reaction to a non-2xx response will always wrap ErrNotOK.
var ErrNotOK = errors.New("did not receive 2xx response from API")
// ErrNotFound specifically means that a '404 - NOT FOUND' response was received from the API.
var ErrNotFound = errors.Wrap(ErrNotOK, "recv 404 NotFound response from API")
// ErrInvalidNodeVersion indicates that the /eth/v1/node/version API response format was not recognized.
var ErrInvalidNodeVersion = errors.New("invalid node version response")
// Non200Err parses an HTTP response that is not 200 into a formatted error.
func Non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
var body string
if err != nil {
body = "(Unable to read response body.)"
} else {
body = "response body:\n" + string(bodyBytes)
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case 404:
return errors.Wrap(ErrNotFound, msg)
default:
return errors.Wrap(ErrNotOK, msg)
}
}
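Since ErrNotFound wraps ErrNotOK, callers can branch on either sentinel with the standard errors package. A sketch; the wrapped error below is illustrative:
package main
import (
	"errors"
	"fmt"
	"github.com/prysmaticlabs/prysm/v4/api/client"
)
func classify(err error) string {
	switch {
	// Check the more specific sentinel first, since ErrNotFound
	// also matches ErrNotOK through wrapping.
	case errors.Is(err, client.ErrNotFound):
		return "endpoint missing (404)"
	case errors.Is(err, client.ErrNotOK):
		return "other non-2xx response"
	default:
		return "transport or decoding error"
	}
}
func main() {
	err := fmt.Errorf("query failed: %w", client.ErrNotFound)
	fmt.Println(classify(err)) // endpoint missing (404)
}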

api/client/options.go

@@ -0,0 +1,48 @@
package client
import (
"fmt"
"net/http"
"time"
)
// ReqOption is a request functional option.
type ReqOption func(*http.Request)
// WithSSZEncoding is a request functional option that adds SSZ encoding header.
func WithSSZEncoding() ReqOption {
return func(req *http.Request) {
req.Header.Set("Accept", "application/octet-stream")
}
}
// WithAuthorizationToken is a request functional option that adds header for authorization token.
func WithAuthorizationToken(token string) ReqOption {
return func(req *http.Request) {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
}
}
// ClientOpt is a functional option for the Client type (http.Client wrapper).
type ClientOpt func(*Client)
// WithTimeout sets the .Timeout attribute of the wrapped http.Client.
func WithTimeout(timeout time.Duration) ClientOpt {
return func(c *Client) {
c.hc.Timeout = timeout
}
}
// WithRoundTripper replaces the underlying HTTP client's transport with a custom one.
func WithRoundTripper(t http.RoundTripper) ClientOpt {
return func(c *Client) {
c.hc.Transport = t
}
}
// WithAuthenticationToken sets an OAuth token to be used.
func WithAuthenticationToken(token string) ClientOpt {
return func(c *Client) {
c.token = token
}
}
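The two option kinds compose: ClientOpt configures the client once at construction, while ReqOption decorates individual requests. A sketch with a hypothetical logging transport; loggingTransport is not part of this change:
package main
import (
	"log"
	"net/http"
	"github.com/prysmaticlabs/prysm/v4/api/client"
)
// loggingTransport is a hypothetical RoundTripper that logs each request
// before delegating to the wrapped transport.
type loggingTransport struct{ next http.RoundTripper }
func (l loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	return l.next.RoundTrip(req)
}
func main() {
	c, err := client.NewClient("localhost:3500",
		client.WithRoundTripper(loggingTransport{next: http.DefaultTransport}),
		client.WithAuthenticationToken("my-token"),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = c // requests made via c.Get or c.Do now pass through loggingTransport
}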

api/client/validator/BUILD.bazel

@@ -0,0 +1,13 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["client.go"],
importpath = "github.com/prysmaticlabs/prysm/v4/api/client/validator",
visibility = ["//visibility:public"],
deps = [
"//api/client:go_default_library",
"//validator/rpc/apimiddleware:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

api/client/validator/client.go

@@ -0,0 +1,121 @@
package validator
import (
"context"
"encoding/json"
"fmt"
"strings"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/validator/rpc/apimiddleware"
)
const (
localKeysPath = "/eth/v1/keystores"
remoteKeysPath = "/eth/v1/remotekeys"
feeRecipientPath = "/eth/v1/validator/{pubkey}/feerecipient"
)
// Client provides a collection of helper methods for calling the Keymanager API endpoints.
type Client struct {
*client.Client
}
// NewClient returns a new Client that includes functions for REST calls to keymanager APIs.
func NewClient(host string, opts ...client.ClientOpt) (*Client, error) {
c, err := client.NewClient(host, opts...)
if err != nil {
return nil, err
}
return &Client{c}, nil
}
// GetValidatorPubKeys returns the current set of validator public keys, both local and web3signer-hosted, in hex format.
func (c *Client) GetValidatorPubKeys(ctx context.Context) ([]string, error) {
jsonlocal, err := c.GetLocalValidatorKeys(ctx)
if err != nil {
return nil, err
}
jsonremote, err := c.GetRemoteValidatorKeys(ctx)
if err != nil {
return nil, err
}
if len(jsonlocal.Keystores) == 0 && len(jsonremote.Keystores) == 0 {
return nil, errors.New("there are no local keys or remote keys on the validator")
}
hexKeys := make(map[string]bool)
for index := range jsonlocal.Keystores {
hexKeys[jsonlocal.Keystores[index].ValidatingPubkey] = true
}
for index := range jsonremote.Keystores {
hexKeys[jsonremote.Keystores[index].Pubkey] = true
}
keys := make([]string, 0)
for k := range hexKeys {
keys = append(keys, k)
}
return keys, nil
}
// GetLocalValidatorKeys calls the keymanager API for the local validator keys.
func (c *Client) GetLocalValidatorKeys(ctx context.Context) (*apimiddleware.ListKeystoresResponseJson, error) {
localBytes, err := c.Get(ctx, localKeysPath, client.WithAuthorizationToken(c.Token()))
if err != nil {
return nil, err
}
jsonlocal := &apimiddleware.ListKeystoresResponseJson{}
if err := json.Unmarshal(localBytes, jsonlocal); err != nil {
return nil, errors.Wrap(err, "failed to parse local keystore list")
}
return jsonlocal, nil
}
// GetRemoteValidatorKeys calls the keymanager API for the web3signer validator keys.
func (c *Client) GetRemoteValidatorKeys(ctx context.Context) (*apimiddleware.ListRemoteKeysResponseJson, error) {
remoteBytes, err := c.Get(ctx, remoteKeysPath, client.WithAuthorizationToken(c.Token()))
if err != nil {
if !strings.Contains(err.Error(), "Prysm Wallet is not of type Web3Signer") {
return nil, err
}
}
jsonremote := &apimiddleware.ListRemoteKeysResponseJson{}
if len(remoteBytes) != 0 {
if err := json.Unmarshal(remoteBytes, jsonremote); err != nil {
return nil, errors.Wrap(err, "failed to parse remote keystore list")
}
}
return jsonremote, nil
}
// GetFeeRecipientAddresses takes a list of validators in hex format and returns an equal length list of fee recipients in hex format.
func (c *Client) GetFeeRecipientAddresses(ctx context.Context, validators []string) ([]string, error) {
feeRecipients := make([]string, len(validators))
for index, validator := range validators {
feejson, err := c.GetFeeRecipientAddress(ctx, validator)
if err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("keymanager API failed to retrieve fee recipient for validator %s", validators[index]))
}
if feejson.Data == nil {
continue
}
feeRecipients[index] = feejson.Data.Ethaddress
}
return feeRecipients, nil
}
// GetFeeRecipientAddress takes a public key and calls the keymanager API to return its fee recipient.
func (c *Client) GetFeeRecipientAddress(ctx context.Context, pubkey string) (*apimiddleware.GetFeeRecipientByPubkeyResponseJson, error) {
path := strings.Replace(feeRecipientPath, "{pubkey}", pubkey, 1)
b, err := c.Get(ctx, path, client.WithAuthorizationToken(c.Token()))
if err != nil {
return nil, err
}
feejson := &apimiddleware.GetFeeRecipientByPubkeyResponseJson{}
if err := json.Unmarshal(b, feejson); err != nil {
return nil, errors.Wrap(err, "failed to parse fee recipient")
}
return feejson, nil
}
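End to end, the helpers above chain naturally. A sketch assuming a keymanager API listening on localhost:7500 and a placeholder auth token:
package main
import (
	"context"
	"fmt"
	"log"
	"github.com/prysmaticlabs/prysm/v4/api/client"
	"github.com/prysmaticlabs/prysm/v4/api/client/validator"
)
func main() {
	vc, err := validator.NewClient("localhost:7500",
		client.WithAuthenticationToken("keymanager-token"))
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	// Local and web3signer keys, deduplicated into one list.
	keys, err := vc.GetValidatorPubKeys(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// One fee recipient per key, in the same order.
	fees, err := vc.GetFeeRecipientAddresses(ctx, keys)
	if err != nil {
		log.Fatal(err)
	}
	for i, k := range keys {
		fmt.Printf("%s -> %s\n", k, fees[i])
	}
}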

api/gateway/apimiddleware/BUILD.bazel

@@ -13,6 +13,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/grpc:go_default_library",
"//encoding/bytesutil:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
@@ -32,6 +33,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/grpc:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",


@@ -10,6 +10,7 @@ import (
"strings"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
)
@@ -116,7 +117,11 @@ func HandleGrpcResponseError(errJson ErrorJson, resp *http.Response, respBody []
// Something went wrong, but the request completed, meaning we can write headers and the error message.
for h, vs := range resp.Header {
for _, v := range vs {
w.Header().Set(h, v)
if strings.HasSuffix(h, api.VersionHeader) {
w.Header().Set(api.VersionHeader, v)
} else {
w.Header().Set(h, v)
}
}
}
// Handle gRPC timeout.
@@ -187,9 +192,11 @@ func WriteMiddlewareResponseHeadersAndBody(grpcResp *http.Response, responseJson
var statusCodeHeader string
for h, vs := range grpcResp.Header {
// We don't want to expose any gRPC metadata in the HTTP response, so we skip forwarding metadata headers.
if strings.HasPrefix(h, "Grpc-Metadata") {
if h == "Grpc-Metadata-"+grpc.HttpCodeMetadataKey {
if strings.HasPrefix(h, grpc.MetadataPrefix) {
if h == grpc.WithPrefix(grpc.HttpCodeMetadataKey) {
statusCodeHeader = vs[0]
} else if strings.HasSuffix(h, api.VersionHeader) {
w.Header().Set(api.VersionHeader, vs[0])
}
} else {
for _, v := range vs {
@@ -223,7 +230,7 @@ func WriteError(w http.ResponseWriter, errJson ErrorJson, responseHeader http.He
// Include custom error in the error JSON.
hasCustomError := false
if responseHeader != nil {
customError, ok := responseHeader["Grpc-Metadata-"+grpc.CustomErrorMetadataKey]
customError, ok := responseHeader[grpc.WithPrefix(grpc.CustomErrorMetadataKey)]
if ok {
hasCustomError = true
// Assume header has only one value and read the 0 index.


@@ -8,6 +8,7 @@ import (
"strings"
"testing"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
@@ -280,7 +281,8 @@ func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
response := &http.Response{
Header: http.Header{
"Foo": []string{"foo"},
"Grpc-Metadata-" + grpc.HttpCodeMetadataKey: []string{"204"},
grpc.WithPrefix(grpc.HttpCodeMetadataKey): []string{"204"},
grpc.WithPrefix(api.VersionHeader): []string{"capella"},
},
}
container := defaultResponseContainer()
@@ -299,6 +301,9 @@ func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "224", v[0])
v, ok = writer.Header()["Eth-Consensus-Version"]
require.Equal(t, true, ok, "header not found")
assert.Equal(t, "capella", v[0])
assert.Equal(t, 204, writer.Code)
assert.DeepEqual(t, responseJson, writer.Body.Bytes())
})
@@ -320,11 +325,12 @@ func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
t.Run("GET_invalid_status_code", func(t *testing.T) {
response := &http.Response{
Header: http.Header{},
Header: http.Header{"Grpc-Metadata-Eth-Consensus-Version": []string{"capella"}},
}
// Set invalid status code.
response.Header["Grpc-Metadata-"+grpc.HttpCodeMetadataKey] = []string{"invalid"}
response.Header[grpc.WithPrefix(grpc.HttpCodeMetadataKey)] = []string{"invalid"}
response.Header[grpc.WithPrefix(api.VersionHeader)] = []string{"capella"}
container := defaultResponseContainer()
responseJson, err := json.Marshal(container)
@@ -390,7 +396,7 @@ func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
func TestWriteError(t *testing.T) {
t.Run("ok", func(t *testing.T) {
responseHeader := http.Header{
"Grpc-Metadata-" + grpc.CustomErrorMetadataKey: []string{"{\"CustomField\":\"bar\"}"},
grpc.WithPrefix(grpc.CustomErrorMetadataKey): []string{"{\"CustomField\":\"bar\"}"},
}
errJson := &testErrorJson{
Message: "foo",
@@ -420,7 +426,7 @@ func TestWriteError(t *testing.T) {
logHook := test.NewGlobal()
responseHeader := http.Header{
"Grpc-Metadata-" + grpc.CustomErrorMetadataKey: []string{"invalid"},
grpc.WithPrefix(grpc.CustomErrorMetadataKey): []string{"invalid"},
}
WriteError(httptest.NewRecorder(), &testErrorJson{}, responseHeader)


@@ -6,3 +6,11 @@ const CustomErrorMetadataKey = "Custom-Error"
// HttpCodeMetadataKey is the key to use when setting custom HTTP status codes in gRPC metadata.
const HttpCodeMetadataKey = "X-Http-Code"
// MetadataPrefix is the prefix attached to gRPC metadata headers.
const MetadataPrefix = "Grpc-Metadata"
// WithPrefix prepends the gRPC metadata prefix to the given value.
func WithPrefix(value string) string {
return MetadataPrefix + "-" + value
}
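The helper turns a bare metadata key into the prefixed header form matched throughout the middleware changes above; a quick illustration:
package main
import (
	"fmt"
	"github.com/prysmaticlabs/prysm/v4/api/grpc"
)
func main() {
	// "X-Http-Code" -> "Grpc-Metadata-X-Http-Code"
	fmt.Println(grpc.WithPrefix(grpc.HttpCodeMetadataKey))
}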

api/headers.go

@@ -0,0 +1,7 @@
package api
const (
VersionHeader = "Eth-Consensus-Version"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
)
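A sketch of how handlers and clients can share these constants instead of scattering string literals; the handler and route below are illustrative, not part of the change:
package main
import (
	"net/http"
	"github.com/prysmaticlabs/prysm/v4/api"
)
func handler(w http.ResponseWriter, _ *http.Request) {
	// Advertise the fork version of the returned object and its media type.
	w.Header().Set(api.VersionHeader, "capella")
	w.Header().Set("Content-Type", api.JsonMediaType)
	_, _ = w.Write([]byte(`{"version":"capella"}`))
}
func main() {
	http.HandleFunc("/eth/v1/node/version", handler)
	_ = http.ListenAndServe(":8080", nil)
}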


@@ -144,6 +144,7 @@ func (f *Feed) Send(value interface{}) (nsent int) {
if !f.typecheck(rvalue.Type()) {
f.sendLock <- struct{}{}
f.mu.Unlock()
panic(feedTypeError{op: "Send", got: rvalue.Type(), want: f.etype})
}
f.mu.Unlock()


@@ -32,6 +32,8 @@ func TestFeedPanics(t *testing.T) {
f.Send(2)
want := feedTypeError{op: "Send", got: reflect.TypeOf(uint64(0)), want: reflect.TypeOf(0)}
assert.NoError(t, checkPanic(want, func() { f.Send(uint64(2)) }))
// Validate it doesn't deadlock.
assert.NoError(t, checkPanic(want, func() { f.Send(uint64(2)) }))
}
{
var f Feed


@@ -88,6 +88,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)


@@ -340,7 +340,13 @@ func (s *Service) IsOptimistic(_ context.Context) (bool, error) {
}
s.headLock.RLock()
headRoot := s.head.root
headSlot := s.head.slot
headOptimistic := s.head.optimistic
s.headLock.RUnlock()
// We trust the head package for recent head slots; otherwise we fall back to forkchoice.
if headSlot+2 >= s.CurrentSlot() {
return headOptimistic, nil
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
@@ -381,7 +387,7 @@ func (s *Service) InForkchoice(root [32]byte) bool {
return s.cfg.ForkChoiceStore.HasNode(root)
}
// IsViableForkCheckpoint returns whether the given checkpoint is a checkpoint in any
// IsViableForCheckpoint returns whether the given checkpoint is a checkpoint in any
// chain known to forkchoice
func (s *Service) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool, error) {
s.cfg.ForkChoiceStore.RLock()
@@ -493,6 +499,13 @@ func (s *Service) Ancestor(ctx context.Context, root []byte, slot primitives.Slo
return ar[:], nil
}
// SetOptimisticToInvalid wraps the corresponding method in forkchoice
func (s *Service) SetOptimisticToInvalid(ctx context.Context, root, parent, lvh [32]byte) ([][32]byte, error) {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
return s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, root, parent, lvh)
}
// SetGenesisTime sets the genesis time of beacon chain.
func (s *Service) SetGenesisTime(t time.Time) {
s.genesisTime = t


@@ -422,6 +422,12 @@ func TestService_IsOptimistic(t *testing.T) {
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, primitives.Slot(0), c.CurrentSlot())
require.Equal(t, false, opt)
c.SetGenesisTime(time.Now().Add(-time.Second * time.Duration(4*params.BeaconConfig().SecondsPerSlot)))
opt, err = c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
}


@@ -41,13 +41,15 @@ var (
type invalidBlock struct {
invalidAncestorRoots [][32]byte
error
root [32]byte
root [32]byte
lastValidHash [32]byte
}
type invalidBlockError interface {
Error() string
InvalidAncestorRoots() [][32]byte
BlockRoot() [32]byte
LastValidHash() [32]byte
}
// BlockRoot returns the invalid block root.
@@ -55,6 +57,11 @@ func (e invalidBlock) BlockRoot() [32]byte {
return e.root
}
// LastValidHash returns the last valid hash root.
func (e invalidBlock) LastValidHash() [32]byte {
return e.lastValidHash
}
// InvalidAncestorRoots returns an optional list of invalid ancestor roots of the invalid block, leading up to the last valid root.
func (e invalidBlock) InvalidAncestorRoots() [][32]byte {
return e.invalidAncestorRoots
@@ -72,6 +79,19 @@ func IsInvalidBlock(e error) bool {
return true
}
// InvalidBlockLVH returns the invalid block's last valid hash. If the error
// doesn't have a last valid hash, [32]byte{} is returned.
func InvalidBlockLVH(e error) [32]byte {
if e == nil {
return [32]byte{}
}
d, ok := e.(invalidBlockError)
if !ok {
return [32]byte{}
}
return d.LastValidHash()
}
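A sketch of how a caller can react to such an error using these helpers together; the error value is illustrative, and the import path assumes the helpers are exported from the beacon-chain/blockchain package as exercised in the tests below:
package main
import (
	"fmt"
	"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
)
// handleBlockError sketches caller-side handling of a block processing error.
func handleBlockError(err error) {
	if !blockchain.IsInvalidBlock(err) {
		return
	}
	lvh := blockchain.InvalidBlockLVH(err)          // last valid hash to sync back to
	roots := blockchain.InvalidAncestorRoots(err)   // pruned invalid block roots
	fmt.Printf("invalid payload: %d pruned blocks, last valid hash %#x\n", len(roots), lvh)
}
func main() { handleBlockError(nil) }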
// InvalidBlockRoot returns the invalid block root. If the error
// doesn't have an invalid block root, [32]byte{} is returned.
func InvalidBlockRoot(e error) [32]byte {


@@ -154,7 +154,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
var pId [8]byte
copy(pId[:], payloadID[:])
s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pId, arg.headRoot)
} else if hasAttr && payloadID == nil {
} else if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", headPayload.BlockHash()),
"slot": headBlk.Slot(),
@@ -182,21 +182,24 @@ func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, er
// notifyNewPayload signals execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
postStateHeader interfaces.ExecutionData, blk interfaces.ReadOnlySignedBeaconBlock) (bool, error) {
func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
preStateHeader interfaces.ExecutionData, blk interfaces.ReadOnlySignedBeaconBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()
// Execution payload is only supported in Bellatrix and beyond. Pre-merge
// blocks are never optimistic.
if blocks.IsPreBellatrixVersion(postStateVersion) {
if blk == nil {
return false, errors.New("signed beacon block can't be nil")
}
if preStateVersion < version.Bellatrix {
return true, nil
}
if err := consensusblocks.BeaconBlockIsNil(blk); err != nil {
return false, err
}
body := blk.Block().Body()
enabled, err := blocks.IsExecutionEnabledUsingHeader(postStateHeader, body)
enabled, err := blocks.IsExecutionEnabledUsingHeader(preStateHeader, body)
if err != nil {
return false, errors.Wrap(invalidBlock{error: err}, "could not determine if execution is enabled")
}
@@ -220,35 +223,37 @@ func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
}).Info("Called new payload with optimistic block")
return false, nil
case execution.ErrInvalidPayloadStatus:
newPayloadInvalidNodeCount.Inc()
root, err := blk.Block().HashTreeRoot()
if err != nil {
return false, err
}
invalidRoots, err := s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, root, blk.Block().ParentRoot(), bytesutil.ToBytes32(lastValidHash))
if err != nil {
return false, err
}
if err := s.removeInvalidBlockAndState(ctx, invalidRoots); err != nil {
return false, err
}
log.WithFields(logrus.Fields{
"slot": blk.Block().Slot(),
"blockRoot": fmt.Sprintf("%#x", root),
"invalidChildrenCount": len(invalidRoots),
}).Warn("Pruned invalid blocks")
lvh := bytesutil.ToBytes32(lastValidHash)
return false, invalidBlock{
invalidAncestorRoots: invalidRoots,
error: ErrInvalidPayload,
error: ErrInvalidPayload,
lastValidHash: lvh,
}
case execution.ErrInvalidBlockHashPayloadStatus:
newPayloadInvalidNodeCount.Inc()
return false, ErrInvalidBlockHashPayloadStatus
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
}
// pruneInvalidBlock prunes a block reported as invalid by the execution layer, along with
// its invalid descendants, and returns the resulting invalidBlock error.
func (s *Service) pruneInvalidBlock(ctx context.Context, root, parentRoot, lvh [32]byte) error {
newPayloadInvalidNodeCount.Inc()
invalidRoots, err := s.SetOptimisticToInvalid(ctx, root, parentRoot, lvh)
if err != nil {
return err
}
if err := s.removeInvalidBlockAndState(ctx, invalidRoots); err != nil {
return err
}
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", root),
"invalidChildrenCount": len(invalidRoots),
}).Warn("Pruned invalid blocks")
return invalidBlock{
invalidAncestorRoots: invalidRoots,
error: ErrInvalidPayload,
lastValidHash: lvh,
}
}
// getPayloadAttribute returns the payload attribute for the given state and slot.
// The attribute is required to initiate a payload build process in the context of an `engine_forkchoiceUpdated` call.
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot primitives.Slot, headRoot []byte) (bool, payloadattribute.Attributer, primitives.ValidatorIndex) {


@@ -525,11 +525,13 @@ func Test_NotifyNewPayload(t *testing.T) {
{
name: "phase 0 post state",
postState: phase0State,
blk: altairBlk, // same as phase 0 for this test
isValidPayload: true,
},
{
name: "altair post state",
postState: altairState,
blk: altairBlk,
isValidPayload: true,
},
{
@@ -743,6 +745,37 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
require.Equal(t, true, validated)
}
func Test_reportInvalidBlock(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MainnetConfig())
service, tr := minimalTestService(t)
ctx, _, fcs := tr.ctx, tr.db, tr.fcs
jcp := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 0, [32]byte{'A'}, [32]byte{}, [32]byte{'a'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 1, [32]byte{'B'}, [32]byte{'A'}, [32]byte{'b'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'C'}, [32]byte{'B'}, [32]byte{'c'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 3, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'d'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
require.NoError(t, fcs.SetOptimisticToValid(ctx, [32]byte{'A'}))
err = service.pruneInvalidBlock(ctx, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'a'})
require.Equal(t, IsInvalidBlock(err), true)
require.Equal(t, InvalidBlockLVH(err), [32]byte{'a'})
invalidRoots := InvalidAncestorRoots(err)
require.Equal(t, 3, len(invalidRoots))
require.Equal(t, [32]byte{'D'}, invalidRoots[0])
require.Equal(t, [32]byte{'C'}, invalidRoots[1])
require.Equal(t, [32]byte{'B'}, invalidRoots[2])
}
func Test_GetPayloadAttribute(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
ctx := tr.ctx


@@ -47,9 +47,11 @@ func (s *Service) UpdateAndSaveHeadWithBalances(ctx context.Context) error {
// This defines the current chain service's view of head.
type head struct {
root [32]byte // current head root.
block interfaces.ReadOnlySignedBeaconBlock // current head block.
state state.BeaconState // current head state.
root [32]byte // current head root.
block interfaces.ReadOnlySignedBeaconBlock // current head block.
state state.BeaconState // current head state.
slot primitives.Slot // the head block slot number
optimistic bool // optimistic status when saved head
}
// This saves head info to the local service cache, it also saves the
@@ -94,6 +96,10 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
return errors.Wrap(err, "could not get old head root")
}
oldHeadRoot := bytesutil.ToBytes32(r)
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(newHeadRoot)
if err != nil {
log.WithError(err).Error("could not check if node is optimistically synced")
}
if headBlock.Block().ParentRoot() != oldHeadRoot {
// A chain re-org occurred, so we fire an event notifying the rest of the services.
commonRoot, forkSlot, err := s.cfg.ForkChoiceStore.CommonAncestor(ctx, oldHeadRoot, newHeadRoot)
@@ -125,10 +131,6 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
reorgDistance.Observe(float64(dis))
reorgDepth.Observe(float64(dep))
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(newHeadRoot)
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.Reorg,
Data: &ethpbv1.EventChainReorg{
@@ -150,7 +152,14 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
}
// Cache the new head info.
if err := s.setHead(newHeadRoot, headBlock, headState); err != nil {
newHead := &head{
root: newHeadRoot,
block: headBlock,
state: headState,
optimistic: isOptimistic,
slot: headBlock.Block().Slot(),
}
if err := s.setHead(newHead); err != nil {
return errors.Wrap(err, "could not set head")
}
@@ -173,7 +182,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
// This gets called to update the canonical root mapping. It does not save the head
// block root in DB. With the inception of the initial-sync-cache-state flag, it uses
// the finalized checkpoint as an anchor to resume sync, so the head no longer needs
// to be saved on a per-slot basis.
func (s *Service) saveHeadNoDB(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock, r [32]byte, hs state.BeaconState) error {
func (s *Service) saveHeadNoDB(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock, r [32]byte, hs state.BeaconState, optimistic bool) error {
if err := blocks.BeaconBlockIsNil(b); err != nil {
return err
}
@@ -189,26 +198,28 @@ func (s *Service) saveHeadNoDB(ctx context.Context, b interfaces.ReadOnlySignedB
if err != nil {
return err
}
if err := s.setHeadInitialSync(r, bCp, hs); err != nil {
if err := s.setHeadInitialSync(r, bCp, hs, optimistic); err != nil {
return errors.Wrap(err, "could not set head")
}
return nil
}
// This sets head view object which is used to track the head slot, root, block and state.
func (s *Service) setHead(root [32]byte, block interfaces.ReadOnlySignedBeaconBlock, state state.BeaconState) error {
// This sets head view object which is used to track the head slot, root, block, state and optimistic status
func (s *Service) setHead(newHead *head) error {
s.headLock.Lock()
defer s.headLock.Unlock()
// This does a full copy of the block and state.
bCp, err := block.Copy()
bCp, err := newHead.block.Copy()
if err != nil {
return err
}
s.head = &head{
root: root,
block: bCp,
state: state.Copy(),
root: newHead.root,
block: bCp,
state: newHead.state.Copy(),
optimistic: newHead.optimistic,
slot: newHead.slot,
}
return nil
}
@@ -216,7 +227,7 @@ func (s *Service) setHead(root [32]byte, block interfaces.ReadOnlySignedBeaconBl
// This sets head view object which is used to track the head slot, root, block and state. The method
// assumes that state being passed into the method will not be modified by any other alternate
// caller which holds the state's reference.
func (s *Service) setHeadInitialSync(root [32]byte, block interfaces.ReadOnlySignedBeaconBlock, state state.BeaconState) error {
func (s *Service) setHeadInitialSync(root [32]byte, block interfaces.ReadOnlySignedBeaconBlock, state state.BeaconState, optimistic bool) error {
s.headLock.Lock()
defer s.headLock.Unlock()
@@ -226,9 +237,10 @@ func (s *Service) setHeadInitialSync(root [32]byte, block interfaces.ReadOnlySig
return err
}
s.head = &head{
root: root,
block: bCp,
state: state,
root: root,
block: bCp,
state: state,
optimistic: optimistic,
}
return nil
}


@@ -157,7 +157,11 @@ func (s *Service) getSyncCommitteeHeadState(ctx context.Context, slot primitives
if headState == nil || headState.IsNil() {
return nil, errors.New("nil state")
}
headState, err = transition.ProcessSlotsIfPossible(ctx, headState, slot)
headRoot, err := s.HeadRoot(ctx)
if err != nil {
return nil, err
}
headState, err = transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, slot)
if err != nil {
return nil, err
}


@@ -6,6 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
dbTest "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/testing/require"
@@ -15,7 +16,7 @@ import (
func TestService_HeadSyncCommitteeIndices(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c.head = &head{state: s}
// Current period
@@ -38,7 +39,7 @@ func TestService_HeadSyncCommitteeIndices(t *testing.T) {
func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c.head = &head{state: s}
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can `ProcessSyncCommitteeUpdates`.
@@ -66,7 +67,7 @@ func TestService_headNextSyncCommitteeIndices(t *testing.T) {
func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c.head = &head{state: s}
// Process slot up to 2 * `EpochsPerSyncCommitteePeriod` so it can run `ProcessSyncCommitteeUpdates` twice.
@@ -81,7 +82,7 @@ func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
func TestService_HeadSyncCommitteeDomain(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c.head = &head{state: s}
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommittee, s.GenesisValidatorsRoot())


@@ -53,7 +53,7 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedGetter):
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
@@ -120,7 +120,7 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
fields := logrus.Fields{
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
"parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash())),
"blockNumber": payload.BlockNumber,
"blockNumber": payload.BlockNumber(),
"gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
}
if block.Version() >= version.Capella {

View File

@@ -172,11 +172,15 @@ var (
})
onBlockProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "on_block_processing_milliseconds",
Help: "Total time in milliseconds to complete a call to onBlock()",
Help: "Total time in milliseconds to complete a call to postBlockProcess()",
})
stateTransitionProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "state_transition_processing_milliseconds",
Help: "Total time to call a state transition in onBlock()",
Help: "Total time to call a state transition in validateStateTransition()",
})
chainServiceProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "chain_service_processing_milliseconds",
Help: "Total time to call a chain service in ReceiveBlock()",
})
processAttsElapsedTime = promauto.NewHistogram(
prometheus.HistogramOpts{
@@ -246,40 +250,45 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
slashingBalance := uint64(0)
slashingEffectiveBalance := uint64(0)
for i, validator := range postState.Validators() {
for i := 0; i < postState.NumValidators(); i++ {
validator, err := postState.ValidatorAtIndexReadOnly(primitives.ValidatorIndex(i))
if err != nil {
log.WithError(err).Error("Could not load validator")
continue
}
bal, err := postState.BalanceAtIndex(primitives.ValidatorIndex(i))
if err != nil {
log.WithError(err).Error("Could not load validator balance")
continue
}
if validator.Slashed {
if currentEpoch < validator.ExitEpoch {
if validator.Slashed() {
if currentEpoch < validator.ExitEpoch() {
slashingInstances++
slashingBalance += bal
slashingEffectiveBalance += validator.EffectiveBalance
slashingEffectiveBalance += validator.EffectiveBalance()
} else {
slashedInstances++
}
continue
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
if currentEpoch < validator.ExitEpoch {
if validator.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
if currentEpoch < validator.ExitEpoch() {
exitingInstances++
exitingBalance += bal
exitingEffectiveBalance += validator.EffectiveBalance
exitingEffectiveBalance += validator.EffectiveBalance()
} else {
exitedInstances++
}
continue
}
if currentEpoch < validator.ActivationEpoch {
if currentEpoch < validator.ActivationEpoch() {
pendingInstances++
pendingBalance += bal
continue
}
activeInstances++
activeBalance += bal
activeEffectiveBalance += validator.EffectiveBalance
activeEffectiveBalance += validator.EffectiveBalance()
}
activeInstances += exitingInstances + slashingInstances
activeBalance += exitingBalance + slashingBalance


@@ -172,3 +172,10 @@ func WithClockSynchronizer(gs *startup.ClockSynchronizer) Option {
return nil
}
}
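// WithSyncComplete sets the service's sync-complete notification channel.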
func WithSyncComplete(c chan struct{}) Option {
return func(s *Service) error {
s.syncComplete = c
return nil
}
}


@@ -22,7 +22,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/monitoring/tracing"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
@@ -40,59 +39,11 @@ const depositDeadline = 20 * time.Second
// This defines size of the upper bound for initial sync block cache.
var initialSyncBlockCacheSize = uint64(2 * params.BeaconConfig().SlotsPerEpoch)
// onBlock is called when a gossip block is received. It runs regular state transition on the block.
// The block's signing root should be computed before calling this method to avoid redundant
// computation in this method and methods it calls into.
//
// Spec pseudocode definition:
//
// def on_block(store: Store, signed_block: ReadOnlySignedBeaconBlock) -> None:
// block = signed_block.message
// # Parent block must be known
// assert block.parent_root in store.block_states
// # Make a copy of the state to avoid mutability issues
// pre_state = copy(store.block_states[block.parent_root])
// # Blocks cannot be in the future. If they are, their consideration must be delayed until the are in the past.
// assert get_current_slot(store) >= block.slot
//
// # Check that block is later than the finalized epoch slot (optimization to reduce calls to get_ancestor)
// finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
// assert block.slot > finalized_slot
// # Check block is a descendant of the finalized block at the checkpoint finalized slot
// assert get_ancestor(store, block.parent_root, finalized_slot) == store.finalized_checkpoint.root
//
// # Check the block is valid and compute the post-state
// state = pre_state.copy()
// state_transition(state, signed_block, True)
// # Add new block to the store
// store.blocks[hash_tree_root(block)] = block
// # Add new state for this block to the store
// store.block_states[hash_tree_root(block)] = state
//
// # Update justified checkpoint
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
// store.best_justified_checkpoint = state.current_justified_checkpoint
// if should_update_justified_checkpoint(store, state.current_justified_checkpoint):
// store.justified_checkpoint = state.current_justified_checkpoint
//
// # Update finalized checkpoint
// if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
// store.finalized_checkpoint = state.finalized_checkpoint
//
// # Potentially update justified if different from store
// if store.justified_checkpoint != state.current_justified_checkpoint:
// # Update justified if new justified is later than store justified
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// store.justified_checkpoint = state.current_justified_checkpoint
// return
//
// # Update justified if store justified is not in chain with finalized checkpoint
// finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
// ancestor_at_finalized_slot = get_ancestor(store, store.justified_checkpoint.root, finalized_slot)
// if ancestor_at_finalized_slot != store.finalized_checkpoint.root:
// store.justified_checkpoint = state.current_justified_checkpoint
func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) error {
// postBlockProcess is called when a gossip block is received. This function performs
// several duties, most importantly informing the engine if the head was updated,
// saving the new head information to the blockchain package, and
// handling attestations, slashings and similar objects included in the block.
func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, postState state.BeaconState, isValidPayload bool) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlock")
defer span.End()
if err := consensusblocks.BeaconBlockIsNil(signed); err != nil {
@@ -101,52 +52,7 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
startTime := time.Now()
b := signed.Block()
preState, err := s.getBlockPreState(ctx, b)
if err != nil {
return err
}
// Verify that the parent block is in forkchoice
if !s.cfg.ForkChoiceStore.HasNode(b.ParentRoot()) {
return ErrNotDescendantOfFinalized
}
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.cfg.ForkChoiceStore.JustifiedCheckpoint().Epoch
currStoreFinalizedEpoch := s.cfg.ForkChoiceStore.FinalizedCheckpoint().Epoch
preStateFinalizedEpoch := preState.FinalizedCheckpoint().Epoch
preStateJustifiedEpoch := preState.CurrentJustifiedCheckpoint().Epoch
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
}
stateTransitionStartTime := time.Now()
postState, err := transition.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
return invalidBlock{error: err}
}
stateTransitionProcessingTime.Observe(float64(time.Since(stateTransitionStartTime).Milliseconds()))
postStateVersion, postStateHeader, err := getStateVersionAndPayload(postState)
if err != nil {
return err
}
isValidPayload, err := s.notifyNewPayload(ctx, postStateVersion, postStateHeader, signed)
if err != nil {
return errors.Wrap(err, "could not validate new payload")
}
if isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, preStateVersion, preStateHeader, signed); err != nil {
return err
}
}
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState); err != nil {
return err
}
if err := s.insertBlockToForkchoiceStore(ctx, signed.Block(), blockRoot, postState); err != nil {
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, postState, blockRoot); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", signed.Block().Slot())
}
if err := s.handleBlockAttestations(ctx, signed.Block(), postState); err != nil {
@@ -160,34 +66,6 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
}
}
// If slasher is configured, forward the attestations in the block via
// an event feed for processing.
if features.Get().EnableSlasher {
// Feed the indexed attestation to slasher if enabled. This action
// is done in the background to avoid adding more load to this critical code path.
go func() {
// Using a different context to prevent timeouts as this operation can be expensive
// and we want to avoid affecting the critical code path.
ctx := context.TODO()
for _, att := range signed.Block().Body().Attestations() {
committee, err := helpers.BeaconCommitteeFromState(ctx, preState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
log.WithError(err).Error("Could not get attestation committee")
tracing.AnnotateError(span, err)
return
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
log.WithError(err).Error("Could not convert to indexed attestation")
tracing.AnnotateError(span, err)
return
}
s.cfg.SlasherAttestationsFeed.Send(indexedAtt)
}
}()
}
justified := s.cfg.ForkChoiceStore.JustifiedCheckpoint()
start := time.Now()
headRoot, err := s.cfg.ForkChoiceStore.Head(ctx)
if err != nil {
@@ -208,18 +86,6 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
"headRoot": fmt.Sprintf("%#x", headRoot),
"headWeight": headWeight,
}).Debug("Head block is not the received block")
} else {
// Updating the next slot state cache can happen in the background. It shouldn't block the rest of the process.
go func() {
// Use a custom deadline here, since this method runs asynchronously.
// We ignore the parent method's context and instead create a new one
// with a custom deadline, therefore using the background context instead.
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := transition.UpdateNextSlotCache(slotCtx, blockRoot[:], postState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
}
}()
}
newBlockHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
@@ -229,6 +95,12 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
return err
}
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(blockRoot)
if err != nil {
log.WithError(err).Debug("Could not check if block is optimistic")
optimistic = true
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
@@ -237,56 +109,34 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
BlockRoot: blockRoot,
SignedBlock: signed,
Verified: true,
Optimistic: optimistic,
},
})
// Save justified check point to db.
postStateJustifiedEpoch := postState.CurrentJustifiedCheckpoint().Epoch
if justified.Epoch > currStoreJustifiedEpoch || (justified.Epoch == postStateJustifiedEpoch && justified.Epoch > preStateJustifiedEpoch) {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{
Epoch: justified.Epoch, Root: justified.Root[:],
}); err != nil {
return err
}
}
// Save finalized check point to db and more.
postStateFinalizedEpoch := postState.FinalizedCheckpoint().Epoch
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
if finalized.Epoch > currStoreFinalizedEpoch || (finalized.Epoch == postStateFinalizedEpoch && finalized.Epoch > preStateFinalizedEpoch) {
if err := s.updateFinalized(ctx, &ethpb.Checkpoint{Epoch: finalized.Epoch, Root: finalized.Root[:]}); err != nil {
return err
}
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(finalized.Root)
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
go func() {
// Send an event regarding the new finalized checkpoint over a common event feed.
stateRoot := signed.Block().StateRoot()
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.FinalizedCheckpoint,
Data: &ethpbv1.EventFinalizedCheckpoint{
Epoch: postState.FinalizedCheckpoint().Epoch,
Block: postState.FinalizedCheckpoint().Root,
State: stateRoot[:],
ExecutionOptimistic: isOptimistic,
},
})
// Use a custom deadline here, since this method runs asynchronously.
// We ignore the parent method's context and instead create a new one
// with a custom deadline, therefore using the background context instead.
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
defer cancel()
if err := s.insertFinalizedDeposits(depCtx, finalized.Root); err != nil {
log.WithError(err).Error("Could not insert finalized deposits.")
}
}()
}
defer reportAttestationInclusion(b)
if err := s.handleEpochBoundary(ctx, postState); err != nil {
return err
if headRoot == blockRoot {
// Updating the next slot state cache can happen in the background,
// except at the epoch boundary, in which case we lock to handle
// the shuffling and proposer cache updates.
// We handle these caches only on canonical
// blocks; otherwise this will be handled by lateBlockTasks
slot := postState.Slot()
if slots.IsEpochEnd(slot) {
if err := transition.UpdateNextSlotCache(ctx, blockRoot[:], postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, slot, postState, blockRoot[:]); err != nil {
return errors.Wrap(err, "could not handle epoch boundary")
}
} else {
go func() {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := transition.UpdateNextSlotCache(slotCtx, blockRoot[:], postState); err != nil {
log.WithError(err).Error("could not update next slot state cache")
}
}()
}
}
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
@@ -409,7 +259,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.ReadOnlySi
postVersionAndHeaders[i].version,
postVersionAndHeaders[i].header, b)
if err != nil {
return err
return s.handleInvalidExecutionError(ctx, err, blockRoots[i], b.Block().ParentRoot())
}
if isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, preVersionAndHeaders[i].version,
@@ -479,70 +329,51 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.ReadOnlySi
if _, err := s.notifyForkchoiceUpdate(ctx, arg); err != nil {
return err
}
return s.saveHeadNoDB(ctx, lastB, lastBR, preState)
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
// Epoch boundary bookkeeping such as logging epoch summaries.
func (s *Service) handleEpochBoundary(ctx context.Context, postState state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.handleEpochBoundary")
defer span.End()
var err error
if postState.Slot()+1 == s.nextEpochBoundarySlot {
copied := postState.Copy()
copied, err := transition.ProcessSlots(ctx, copied, copied.Slot()+1)
if err != nil {
return err
}
// Update caches for the next epoch at epoch boundary slot - 1.
if err := helpers.UpdateCommitteeCache(ctx, copied, coreTime.CurrentEpoch(copied)); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, copied); err != nil {
return err
}
} else if postState.Slot() >= s.nextEpochBoundarySlot {
s.nextEpochBoundarySlot, err = slots.EpochStart(coreTime.NextEpoch(postState))
if err != nil {
return err
}
// Update caches at epoch boundary slot.
// The following updates have a shortcut to return nil cheaply if they were already fulfilled at boundary slot - 1.
if err := helpers.UpdateCommitteeCache(ctx, postState, coreTime.CurrentEpoch(postState)); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, postState); err != nil {
return err
}
headSt, err := s.HeadState(ctx)
if err != nil {
return err
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
return err
}
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
return errors.Wrap(err, "could not update committee cache")
}
if err := helpers.UpdateProposerIndicesInCache(ctx, st, e); err != nil {
return errors.Wrap(err, "could not update proposer index cache")
}
go func() {
// Use a custom deadline here, since this method runs asynchronously.
// We ignore the parent method's context and instead create a new one
// with a custom deadline, therefore using the background context instead.
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := helpers.UpdateCommitteeCache(slotCtx, st, e+1); err != nil {
log.WithError(err).Warn("Could not update committee cache")
}
if err := helpers.UpdateProposerIndicesInCache(slotCtx, st, e+1); err != nil {
log.WithError(err).Warn("Failed to cache next epoch proposers")
}
}()
return nil
}
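The next-epoch updates above run in a goroutine detached from the caller's context; a minimal sketch of that pattern, with warm as a hypothetical stand-in for the two cache updates.
// Sketch (illustrative): warm next-epoch caches in a goroutine that survives
// the request but is still bounded by its own deadline.
func warmCachesInBackground(deadline time.Duration, warm func(context.Context) error) {
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), deadline)
		defer cancel()
		if err := warm(ctx); err != nil {
			// A failed warm-up is recoverable: the caches fill on demand.
			fmt.Println("could not warm next epoch caches:", err)
		}
	}()
}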
// This feeds the block into the fork choice store, allowing the fork choice
// store to gain information on the most current chain.
func (s *Service) insertBlockToForkchoiceStore(ctx context.Context, blk interfaces.ReadOnlyBeaconBlock, root [32]byte, st state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.insertBlockToForkchoiceStore")
// Epoch boundary tasks: it copies the headState and updates the epoch boundary
// caches.
func (s *Service) handleEpochBoundary(ctx context.Context, slot primitives.Slot, headState state.BeaconState, blockRoot []byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.handleEpochBoundary")
defer span.End()
if !s.cfg.ForkChoiceStore.HasNode(blk.ParentRoot()) {
fCheckpoint := st.FinalizedCheckpoint()
jCheckpoint := st.CurrentJustifiedCheckpoint()
if err := s.fillInForkChoiceMissingBlocks(ctx, blk, fCheckpoint, jCheckpoint); err != nil {
return err
}
// return early if the requested slot is behind the head state
if slot < headState.Slot() {
return nil
}
return s.cfg.ForkChoiceStore.InsertNode(ctx, st, root)
if !slots.IsEpochEnd(slot) {
return nil
}
copied := headState.Copy()
copied, err := transition.ProcessSlotsUsingNextSlotCache(ctx, copied, blockRoot, slot+1)
if err != nil {
return err
}
return s.updateEpochBoundaryCaches(ctx, copied)
}
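A sketch of the epoch-end test used above, under the assumption that slots.IsEpochEnd is true exactly when the next slot starts a new epoch.
// Sketch (illustrative): a slot ends an epoch when the following slot is a
// multiple of SlotsPerEpoch. With mainnet's SlotsPerEpoch = 32, slots
// 31, 63, 95, ... are epoch ends.
func isEpochEnd(slot, slotsPerEpoch uint64) bool {
	return (slot+1)%slotsPerEpoch == 0
}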
// This feeds the attestations included in the block into the fork choice store. It allows the fork choice store
@@ -651,33 +482,31 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
// This routine checks if there is a cached proposer payload ID available for the next slot proposer.
// If there is not, it will call forkchoice updated with the correct payload attribute and then cache the payload ID.
func (s *Service) spawnLateBlockTasksLoop() {
go func() {
_, err := s.clockWaiter.WaitForClock(s.ctx)
if err != nil {
log.WithError(err).Error("spawnLateBlockTasksLoop encountered an error waiting for initialization")
func (s *Service) runLateBlockTasks() {
if err := s.waitForSync(); err != nil {
log.WithError(err).Error("failed to wait for initial sync")
return
}
attThreshold := params.BeaconConfig().SecondsPerSlot / 3
ticker := slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
for {
select {
case <-ticker.C():
s.lateBlockTasks(s.ctx)
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
return
}
attThreshold := params.BeaconConfig().SecondsPerSlot / 3
ticker := slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
for {
select {
case <-ticker.C():
s.lateBlockTasks(s.ctx)
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
return
}
}
}()
}
}
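The ticker offset in a worked form; on mainnet SecondsPerSlot is 12, so the loop fires at the 4 second mark, matching the comment below.
// Sketch (illustrative): lateBlockTasks runs SecondsPerSlot/3 into each
// slot: 12 / 3 = 4 seconds on mainnet, i.e. right at the attestation
// deadline of one third of a slot.
func lateBlockTickerOffset(secondsPerSlot uint64) time.Duration {
	attThreshold := secondsPerSlot / 3
	return time.Duration(attThreshold) * time.Second
}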
// lateBlockTasks is called 4 seconds into the slot and performs tasks
// related to late blocks. It emits a MissedSlot state feed event.
// It calls FCU and sets the right attributes if we are proposing in the next slot;
// it also updates the next slot cache to deal with skipped slots.
// it also updates the next slot cache and the proposer index cache to deal with skipped slots.
func (s *Service) lateBlockTasks(ctx context.Context) {
currentSlot := s.CurrentSlot()
if s.CurrentSlot() == s.HeadSlot() {
return
}
@@ -685,12 +514,30 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
s.headLock.RUnlock()
lastRoot, lastState := transition.LastCachedState()
if lastState == nil {
lastRoot, lastState = headRoot[:], headState
}
// Copy all the field tries in our cached state in the event of late
// blocks.
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
}
// Head root should be empty when retrieving proposer index for the next slot.
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, [32]byte{} /* head root */)
// There exists a proposer for the next slot, but we haven't called FCU with the payload attribute yet.
if (!has && !features.Get().PrepareAllPayloads) || id != [8]byte{} {
return
}
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
@@ -698,8 +545,6 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve head block")
return
}
headRoot := s.headRoot()
headState := s.headState(ctx)
s.headLock.RUnlock()
_, err = s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: headState,
@@ -709,11 +554,21 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
lastRoot, lastState := transition.LastCachedState()
if lastState == nil {
lastRoot, lastState = headRoot[:], headState
}
if err = transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
}
// waitForSync blocks until the node is synced to the head.
func (s *Service) waitForSync() error {
select {
case <-s.syncComplete:
return nil
case <-s.ctx.Done():
return errors.New("context closed, exiting goroutine")
}
}
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot [32]byte, parentRoot [32]byte) error {
if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
}
return err
}

@@ -14,6 +14,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v4/math"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
@@ -209,35 +210,44 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, blk interfa
return s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes)
}
// inserts finalized deposits into our finalized deposit trie.
func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) error {
// inserts finalized deposits into our finalized deposit trie; it needs to be
// called in the background
func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "blockChain.insertFinalizedDeposits")
defer span.End()
startTime := time.Now()
// Update deposit cache.
finalizedState, err := s.cfg.StateGen.StateByRoot(ctx, fRoot)
if err != nil {
return errors.Wrap(err, "could not fetch finalized state")
log.WithError(err).Error("could not fetch finalized state")
return
}
// We update the cache up to the last deposit index in the finalized block's state.
// We can be confident that these deposits will be included in some block
// because the Eth1 follow distance makes such long-range reorgs extremely unlikely.
eth1DepositIndex, err := mathutil.Int(finalizedState.Eth1DepositIndex())
if err != nil {
return errors.Wrap(err, "could not cast eth1 deposit index")
log.WithError(err).Error("could not cast eth1 deposit index")
return
}
// The deposit index in the state is always the index of the next deposit
// to be included (rather than the last one to be processed). This was most likely
// done because the state cannot represent signed integers.
eth1DepositIndex -= 1
if err = s.cfg.DepositCache.InsertFinalizedDeposits(ctx, int64(eth1DepositIndex)); err != nil {
return err
finalizedEth1DepIdx := eth1DepositIndex - 1
if err = s.cfg.DepositCache.InsertFinalizedDeposits(ctx, int64(finalizedEth1DepIdx)); err != nil {
log.WithError(err).Error("could not insert finalized deposits")
return
}
// Deposit proofs are only used during state transition and can be safely removed to save space.
if err = s.cfg.DepositCache.PruneProofs(ctx, int64(eth1DepositIndex)); err != nil {
return errors.Wrap(err, "could not prune deposit proofs")
if err = s.cfg.DepositCache.PruneProofs(ctx, int64(finalizedEth1DepIdx)); err != nil {
log.WithError(err).Error("could not prune deposit proofs")
}
return nil
// Prune deposits which have already been finalized; the method below prunes all pending deposits (non-inclusive) up
// to the provided eth1 deposit index.
s.cfg.DepositCache.PrunePendingDeposits(ctx, int64(eth1DepositIndex)) // lint:ignore uintcast -- Deposit index should not exceed int64 in your lifetime.
log.WithField("duration", time.Since(startTime).String()).Debug("Finalized deposit insertion completed")
}
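The index arithmetic above in a worked form; it matches the tests further down, where a state deposit index of 8 yields a finalized trie index of 7.
// Sketch (illustrative): the state records the index of the next deposit to
// include, so the last finalized deposit is one behind it (assumes the
// index is at least 1).
func finalizedDepositIndex(nextDepositIndex uint64) uint64 {
	return nextDepositIndex - 1 // e.g. 8 -> deposits 0..7 final, trie index 7
}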
// This ensures that the input root defaults to using genesis root instead of zero hashes. This is needed for handling

@@ -41,103 +41,6 @@ import (
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestStore_OnBlock(t *testing.T) {
service, tr := minimalTestService(t)
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
var genesisStateRoot [32]byte
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
util.SaveBlock(t, ctx, beaconDB, genesis)
validGenesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
ojc := &ethpb.Checkpoint{}
stfcs, root, err := prepareForkchoiceState(ctx, 0, validGenesisRoot, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stfcs, root))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
random := util.NewBeaconBlock()
random.Block.Slot = 1
random.Block.ParentRoot = validGenesisRoot[:]
util.SaveBlock(t, ctx, beaconDB, random)
randomParentRoot, err := random.Block.HashTreeRoot()
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: st.Slot(), Root: randomParentRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), randomParentRoot))
randomParentRoot2 := roots[1]
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: st.Slot(), Root: randomParentRoot2}))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), bytesutil.ToBytes32(randomParentRoot2)))
stfcs, root, err = prepareForkchoiceState(ctx, 2, bytesutil.ToBytes32(randomParentRoot2),
validGenesisRoot, [32]byte{'r'}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stfcs, root))
tests := []struct {
name string
blk *ethpb.SignedBeaconBlock
s state.BeaconState
time uint64
wantErrString string
}{
{
name: "parent block root does not have a state",
blk: util.NewBeaconBlock(),
s: st.Copy(),
wantErrString: "could not reconstruct parent state",
},
{
name: "block is from the future",
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.ParentRoot = randomParentRoot2
b.Block.Slot = params.BeaconConfig().FarFutureSlot
return b
}(),
s: st.Copy(),
wantErrString: "is in the far distant future",
},
{
name: "could not get finalized block",
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.ParentRoot = randomParentRoot[:]
b.Block.Slot = 2
return b
}(),
s: st.Copy(),
wantErrString: "not descendant of finalized checkpoint",
},
{
name: "same slot as finalized block",
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.Slot = 0
b.Block.ParentRoot = randomParentRoot2
return b
}(),
s: st.Copy(),
wantErrString: "block is equal or earlier than finalized block, slot 0 < slot 0",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fRoot := bytesutil.ToBytes32(roots[0])
require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: fRoot}))
root, err := tt.blk.Block.HashTreeRoot()
assert.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(tt.blk)
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
assert.ErrorContains(t, tt.wantErrString, err)
})
}
}
func TestStore_OnBlockBatch(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
@@ -636,8 +539,7 @@ func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 1024)
service.head = &head{state: s}
require.NoError(t, s.SetSlot(2*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, service.handleEpochBoundary(ctx, s))
require.Equal(t, 3*params.BeaconConfig().SlotsPerEpoch, service.nextEpochBoundarySlot)
require.NoError(t, service.handleEpochBoundary(ctx, s.Slot(), s, []byte{}))
}
func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
@@ -657,7 +559,20 @@ func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, fcs.NewSlot(ctx, i))
require.NoError(t, service.onBlock(ctx, wsb, r))
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := service.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := service.FinalizedCheckpt().Epoch
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, true))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -692,7 +607,20 @@ func TestOnBlock_CanFinalize(t *testing.T) {
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, r))
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := service.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := service.FinalizedCheckpt().Epoch
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, true))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -714,8 +642,7 @@ func TestOnBlock_CanFinalize(t *testing.T) {
func TestOnBlock_NilBlock(t *testing.T) {
service, tr := minimalTestService(t)
err := service.onBlock(tr.ctx, nil, [32]byte{})
err := service.postBlockProcess(tr.ctx, nil, [32]byte{}, nil, true)
require.Equal(t, true, IsInvalidBlock(err))
}
@@ -729,11 +656,11 @@ func TestOnBlock_InvalidSignature(t *testing.T) {
blk, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
blk.Signature = []byte{'a'} // Mutate the signature.
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = service.onBlock(ctx, wsb, r)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
_, err = service.validateStateTransition(ctx, preState, wsb)
require.Equal(t, true, IsInvalidBlock(err))
}
@@ -757,7 +684,13 @@ func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, r))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, false))
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -783,7 +716,7 @@ func TestInsertFinalizedDeposits(t *testing.T) {
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root)))
}
assert.NoError(t, service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'}))
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits := depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 7, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps := depositCache.AllDeposits(ctx, big.NewInt(107))
@@ -792,6 +725,45 @@ func TestInsertFinalizedDeposits(t *testing.T) {
}
}
func TestInsertFinalizedDeposits_PrunePendingDeposits(t *testing.T) {
service, tr := minimalTestService(t)
ctx, depositCache := tr.ctx, tr.dc
gs, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
gs = gs.Copy()
assert.NoError(t, gs.SetEth1Data(&ethpb.Eth1Data{DepositCount: 10}))
assert.NoError(t, gs.SetEth1DepositIndex(8))
assert.NoError(t, service.cfg.StateGen.SaveState(ctx, [32]byte{'m', 'o', 'c', 'k'}, gs))
var zeroSig [96]byte
for i := uint64(0); i < uint64(4*params.BeaconConfig().SlotsPerEpoch); i++ {
root := []byte(strconv.Itoa(int(i)))
assert.NoError(t, depositCache.InsertDeposit(ctx, &ethpb.Deposit{Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.FromBytes48([fieldparams.BLSPubkeyLength]byte{}),
WithdrawalCredentials: params.BeaconConfig().ZeroHash[:],
Amount: 0,
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root)))
depositCache.InsertPendingDeposit(ctx, &ethpb.Deposit{Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.FromBytes48([fieldparams.BLSPubkeyLength]byte{}),
WithdrawalCredentials: params.BeaconConfig().ZeroHash[:],
Amount: 0,
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root))
}
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits := depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 7, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps := depositCache.AllDeposits(ctx, big.NewInt(107))
for _, d := range deps {
assert.DeepEqual(t, [][]byte(nil), d.Proof, "Proofs are not empty")
}
pendingDeps := depositCache.PendingContainers(ctx, nil)
for _, d := range pendingDeps {
assert.DeepEqual(t, true, d.Index >= 8, "Pending deposits were not pruned")
}
}
func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
service, tr := minimalTestService(t)
ctx, depositCache := tr.ctx, tr.dc
@@ -819,7 +791,7 @@ func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
// Insert 3 deposits before hand.
require.NoError(t, depositCache.InsertFinalizedDeposits(ctx, 2))
assert.NoError(t, service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'}))
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits := depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 5, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
@@ -829,7 +801,7 @@ func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
}
// Insert New Finalized State with higher deposit count.
assert.NoError(t, service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k', '2'}))
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k', '2'})
fDeposits = depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 12, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps = depositCache.AllDeposits(ctx, big.NewInt(112))
@@ -1131,19 +1103,35 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
var wg sync.WaitGroup
wg.Add(4)
go func() {
require.NoError(t, service.onBlock(ctx, wsb1, r1))
preState, err := service.getBlockPreState(ctx, wsb1.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb1)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb1, r1, postState, true))
wg.Done()
}()
go func() {
require.NoError(t, service.onBlock(ctx, wsb2, r2))
preState, err := service.getBlockPreState(ctx, wsb2.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb2)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb2, r2, postState, true))
wg.Done()
}()
go func() {
require.NoError(t, service.onBlock(ctx, wsb3, r3))
preState, err := service.getBlockPreState(ctx, wsb3.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb3)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb3, r3, postState, true))
wg.Done()
}()
go func() {
require.NoError(t, service.onBlock(ctx, wsb4, r4))
preState, err := service.getBlockPreState(ctx, wsb4.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb4)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb4, r4, postState, true))
wg.Done()
}()
wg.Wait()
@@ -1211,7 +1199,13 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
}
for i := 6; i < 12; i++ {
@@ -1224,7 +1218,12 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
@@ -1238,7 +1237,12 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
@@ -1255,7 +1259,12 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
firstInvalidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, firstInvalidRoot)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, firstInvalidRoot, postState, false)
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1278,7 +1287,12 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head is the last invalid block imported. The
// store's headroot is the previous head (since the invalid block did
@@ -1301,7 +1315,13 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
require.NoError(t, err)
// Check that the newly imported block is head, that it justified the right
// checkpoint, and that the node is no longer optimistic
@@ -1358,7 +1378,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
}
for i := 6; i < 12; i++ {
@@ -1371,7 +1396,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
@@ -1385,7 +1415,13 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
@@ -1402,7 +1438,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
firstInvalidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, firstInvalidRoot)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, firstInvalidRoot, postState, false)
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1425,7 +1466,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, wsb, root)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head and store's headroot are the previous head (since the invalid block did
// not finish importing and it was never imported to forkchoice). Check
@@ -1448,7 +1494,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
require.NoError(t, err)
// Check that the newly imported block is head, that it justified the right
// checkpoint, and that the node is no longer optimistic
@@ -1506,7 +1557,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
}
for i := 6; i < 12; i++ {
@@ -1519,7 +1576,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
@@ -1533,7 +1596,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
lastValidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, lastValidRoot)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, lastValidRoot, postState, false)
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1555,7 +1623,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
invalidRoots[i-13], err = b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, invalidRoots[i-13])
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, invalidRoots[i-13], wsb, postState))
err = service.postBlockProcess(ctx, wsb, invalidRoots[i-13], postState, false)
require.NoError(t, err)
}
// Check that we have justified the second epoch
@@ -1576,7 +1649,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, wsb, root)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head and store's headroot are the previous head (since the invalid block did
@@ -1610,7 +1688,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, true))
// Check that the head is still INVALID and the node is still optimistic
require.Equal(t, invalidHeadRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
optimistic, err = service.IsOptimistic(ctx)
@@ -1628,7 +1711,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
require.NoError(t, err)
st, err = service.cfg.StateGen.StateByRoot(ctx, root)
require.NoError(t, err)
@@ -1648,7 +1736,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
require.NoError(t, err)
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
sjc = service.CurrentJustifiedCheckpt()
@@ -1699,7 +1793,12 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
}
for i := 6; i < 12; i++ {
@@ -1712,7 +1811,12 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
@@ -1726,7 +1830,12 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
lastValidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, lastValidRoot)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, lastValidRoot, postState, false)
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1747,7 +1856,18 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := service.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := service.FinalizedCheckpt().Epoch
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
}
// Check that we have justified the second epoch
jc := service.cfg.ForkChoiceStore.JustifiedCheckpoint()
@@ -1766,7 +1886,11 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, wsb, root)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that the headroot/state are not in DB and restart the node
@@ -1848,7 +1972,12 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
st, err = service.HeadState(ctx)
require.NoError(t, err)

@@ -62,7 +62,7 @@ func (s *Service) VerifyLmdFfgConsistency(ctx context.Context, a *ethpb.Attestat
return err
}
if !bytes.Equal(a.Data.Target.Root, r) {
return errors.New("FFG and LMD votes are not consistent")
return fmt.Errorf("FFG and LMD votes are not consistent, block root: %#x, target root: %#x, canonical target root: %#x", a.Data.BeaconBlockRoot, a.Data.Target.Root, r)
}
return nil
}
@@ -86,23 +86,30 @@ func (s *Service) spawnProcessAttestationsRoutine() {
}
log.Warn("Genesis time received, now available to process attestations")
}
// Wait for node to be synced before running the routine.
if err := s.waitForSync(); err != nil {
log.WithError(err).Error("Could not wait to sync")
return
}
st := slots.NewSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
pat := slots.NewSlotTickerWithOffset(s.genesisTime, -reorgLateBlockCountAttestations, params.BeaconConfig().SecondsPerSlot)
reorgInterval := time.Second*time.Duration(params.BeaconConfig().SecondsPerSlot) - reorgLateBlockCountAttestations
ticker := slots.NewSlotTickerWithIntervals(s.genesisTime, []time.Duration{0, reorgInterval})
for {
select {
case <-s.ctx.Done():
return
case <-pat.C():
s.UpdateHead(s.ctx, s.CurrentSlot()+1)
case <-st.C():
s.cfg.ForkChoiceStore.Lock()
if err := s.cfg.ForkChoiceStore.NewSlot(s.ctx, s.CurrentSlot()); err != nil {
log.WithError(err).Error("could not process new slot")
}
s.cfg.ForkChoiceStore.Unlock()
case slotInterval := <-ticker.C():
if slotInterval.Interval > 0 {
s.UpdateHead(s.ctx, slotInterval.Slot+1)
} else {
s.cfg.ForkChoiceStore.Lock()
if err := s.cfg.ForkChoiceStore.NewSlot(s.ctx, slotInterval.Slot); err != nil {
log.WithError(err).Error("could not process new slot")
}
s.cfg.ForkChoiceStore.Unlock()
s.UpdateHead(s.ctx, s.CurrentSlot())
s.UpdateHead(s.ctx, slotInterval.Slot)
}
}
}
}()
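A sketch of how the two per-slot ticks above are laid out; the cutoff value used in the example is an assumption, not taken from the diff.
// Sketch (illustrative): interval 0 fires at the start of the slot
// (fork choice NewSlot plus UpdateHead for the current slot); the second
// interval fires lateBlockCutoff before the slot ends and updates head for
// the next slot. With 12s slots and an assumed 2s cutoff, ticks land at
// 0s and 10s of every slot.
func reorgTickIntervals(secondsPerSlot uint64, lateBlockCutoff time.Duration) []time.Duration {
	reorgInterval := time.Second*time.Duration(secondsPerSlot) - lateBlockCutoff
	return []time.Duration{0, reorgInterval}
}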

@@ -128,7 +128,13 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, tRoot))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, tRoot, postState, false))
copied, err = service.cfg.StateGen.StateByRoot(ctx, tRoot)
require.NoError(t, err)
require.Equal(t, 2, fcs.NodeCount())
@@ -178,7 +184,13 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, tRoot))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, tRoot, postState, false))
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.Equal(t, tRoot, service.head.root)

@@ -7,15 +7,23 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/monitoring/tracing"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
"golang.org/x/sync/errgroup"
)
// This defines how many epochs since finality the run time will begin to save hot state on to the DB.
@@ -47,14 +55,83 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
}
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := s.FinalizedCheckpt().Epoch
currentEpoch := coreTime.CurrentEpoch(preState)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
}
eg, _ := errgroup.WithContext(ctx)
var postState state.BeaconState
eg.Go(func() error {
postState, err = s.validateStateTransition(ctx, preState, blockCopy)
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
return nil
})
var isValidPayload bool
eg.Go(func() error {
isValidPayload, err = s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, blockCopy, blockRoot)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
return nil
})
if err := eg.Wait(); err != nil {
return err
}
// The rest of block processing takes a lock on forkchoice.
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
// Apply state transition on the new block.
if err := s.onBlock(ctx, blockCopy, blockRoot); err != nil {
if err := s.savePostStateInfo(ctx, blockRoot, blockCopy, postState); err != nil {
return errors.Wrap(err, "could not save post state info")
}
if err := s.postBlockProcess(ctx, blockCopy, blockRoot, postState, isValidPayload); err != nil {
err := errors.Wrap(err, "could not process block")
tracing.AnnotateError(span, err)
return err
}
if coreTime.CurrentEpoch(postState) > currentEpoch {
headSt, err := s.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
log.WithError(err).Error("could not report epoch metrics")
}
}
if err := s.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch); err != nil {
return errors.Wrap(err, "could not update justified checkpoint")
}
newFinalized, err := s.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
if err != nil {
return errors.Wrap(err, "could not update finalized checkpoint")
}
// Send finalized events and finalized deposits in the background
if newFinalized {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
go s.sendNewFinalizedEvent(blockCopy, postState)
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDeposits(depCtx, finalized.Root)
cancel()
}()
}
// If slasher is configured, forward the attestations in the block via an event feed for processing.
if features.Get().EnableSlasher {
go s.sendBlockAttestationsToSlasher(blockCopy, preState)
}
// Handle post block operations such as pruning exits and bls messages if incoming block is the head
if err := s.prunePostBlockOperationPools(ctx, blockCopy, blockRoot); err != nil {
@@ -86,6 +163,8 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
log.WithError(err).Error("Unable to log state transition data")
}
chainServiceProcessingTime.Observe(float64(time.Since(receivedTime).Milliseconds()))
return nil
}
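The parallel validation introduced in ReceiveBlock, as a self-contained errgroup sketch; runConsensus and runExecution are hypothetical stand-ins for the two calls above.
// Sketch (illustrative): the consensus transition and the engine payload
// check are independent, so they run concurrently and eg.Wait returns the
// first error. Each goroutine writes only its own result variable.
type consensusResult struct{} // post-state fields elided

func runConsensus(ctx context.Context) (*consensusResult, error) { return &consensusResult{}, nil } // stub
func runExecution(ctx context.Context) (bool, error)             { return true, nil }               // stub

func validateInParallel(ctx context.Context) (post *consensusResult, validPayload bool, err error) {
	eg, ctx := errgroup.WithContext(ctx)
	eg.Go(func() error {
		var gErr error
		post, gErr = runConsensus(ctx)
		return gErr
	})
	eg.Go(func() error {
		var gErr error
		validPayload, gErr = runExecution(ctx)
		return gErr
	})
	err = eg.Wait()
	return post, validPayload, err
}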
@@ -106,6 +185,13 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Rea
return err
}
lastBR := blkRoots[len(blkRoots)-1]
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(lastBR)
if err != nil {
lastSlot := blocks[len(blocks)-1].Block().Slot()
log.WithError(err).Errorf("Could not check if block is optimistic, Root: %#x, Slot: %d", lastBR, lastSlot)
optimistic = true
}
for i, b := range blocks {
blockCopy, err := b.Copy()
if err != nil {
@@ -119,6 +205,7 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Rea
BlockRoot: blkRoots[i],
SignedBlock: blockCopy,
Verified: true,
Optimistic: optimistic,
},
})
@@ -226,3 +313,109 @@ func (s *Service) checkSaveHotStateDB(ctx context.Context) error {
return s.cfg.StateGen.DisableSaveHotStateToDB(ctx)
}
// This performs the state transition function and returns the post-state, or an
// error if the block fails to satisfy the consensus rules
func (s *Service) validateStateTransition(ctx context.Context, preState state.BeaconState, signed interfaces.ReadOnlySignedBeaconBlock) (state.BeaconState, error) {
b := signed.Block()
// Verify that the parent block is in forkchoice
parentRoot := b.ParentRoot()
if !s.InForkchoice(parentRoot) {
return nil, ErrNotDescendantOfFinalized
}
stateTransitionStartTime := time.Now()
postState, err := transition.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
return nil, invalidBlock{error: err}
}
stateTransitionProcessingTime.Observe(float64(time.Since(stateTransitionStartTime).Milliseconds()))
return postState, nil
}
// updateJustificationOnBlock updates the justified checkpoint in the DB if the
// incoming block has updated it on forkchoice.
func (s *Service) updateJustificationOnBlock(ctx context.Context, preState, postState state.BeaconState, preJustifiedEpoch primitives.Epoch) error {
justified := s.cfg.ForkChoiceStore.JustifiedCheckpoint()
preStateJustifiedEpoch := preState.CurrentJustifiedCheckpoint().Epoch
postStateJustifiedEpoch := postState.CurrentJustifiedCheckpoint().Epoch
if justified.Epoch > preJustifiedEpoch || (justified.Epoch == postStateJustifiedEpoch && justified.Epoch > preStateJustifiedEpoch) {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{
Epoch: justified.Epoch, Root: justified.Root[:],
}); err != nil {
return err
}
}
return nil
}
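The save condition in a worked form, with hypothetical epoch values: a forkchoice justified epoch of 5 is persisted over a pre-import store epoch of 4; equal epochs persist only when the post-state agrees and the pre-state was behind.
// Sketch (illustrative): uint64 stands in for primitives.Epoch.
func shouldPersistJustified(justified, preStore, preState, postState uint64) bool {
	return justified > preStore || (justified == postState && justified > preState)
}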
// updateFinalizationOnBlock performs some duties when the incoming block
// changes the finalized checkpoint. It returns true when this has happened.
func (s *Service) updateFinalizationOnBlock(ctx context.Context, preState, postState state.BeaconState, preFinalizedEpoch primitives.Epoch) (bool, error) {
preStateFinalizedEpoch := preState.FinalizedCheckpoint().Epoch
postStateFinalizedEpoch := postState.FinalizedCheckpoint().Epoch
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
if finalized.Epoch > preFinalizedEpoch || (finalized.Epoch == postStateFinalizedEpoch && finalized.Epoch > preStateFinalizedEpoch) {
if err := s.updateFinalized(ctx, &ethpb.Checkpoint{Epoch: finalized.Epoch, Root: finalized.Root[:]}); err != nil {
return true, err
}
return true, nil
}
return false, nil
}
// sendNewFinalizedEvent sends a new finalization checkpoint event over the
// event feed. It needs to be called in the background.
func (s *Service) sendNewFinalizedEvent(signed interfaces.ReadOnlySignedBeaconBlock, postState state.BeaconState) {
optimistic := false
s.headLock.RLock()
if s.head != nil {
optimistic = s.head.optimistic
}
s.headLock.RUnlock()
// Send an event regarding the new finalized checkpoint over a common event feed.
stateRoot := signed.Block().StateRoot()
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.FinalizedCheckpoint,
Data: &ethpbv1.EventFinalizedCheckpoint{
Epoch: postState.FinalizedCheckpoint().Epoch,
Block: postState.FinalizedCheckpoint().Root,
State: stateRoot[:],
ExecutionOptimistic: optimistic,
},
})
}
// sendBlockAttestationsToSlasher sends the incoming block's attestations to the slasher.
func (s *Service) sendBlockAttestationsToSlasher(signed interfaces.ReadOnlySignedBeaconBlock, preState state.BeaconState) {
// Feed the indexed attestations to the slasher if enabled. This action
// is done in the background to avoid adding more load to this critical code path.
ctx := context.TODO()
for _, att := range signed.Block().Body().Attestations() {
committee, err := helpers.BeaconCommitteeFromState(ctx, preState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
log.WithError(err).Error("Could not get attestation committee")
return
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
log.WithError(err).Error("Could not convert to indexed attestation")
return
}
s.cfg.SlasherAttestationsFeed.Send(indexedAtt)
}
}
// validateExecutionOnBlock notifies the engine of the incoming block's execution payload and returns true if the payload is valid.
func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header interfaces.ExecutionData, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) (bool, error) {
isValidPayload, err := s.notifyNewPayload(ctx, ver, header, signed)
if err != nil {
return false, s.handleInvalidExecutionError(ctx, err, blockRoot, signed.Block().ParentRoot())
}
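// The merge-transition check only applies to pre-Capella payloads.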
if signed.Version() < version.Capella && isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, ver, header, signed); err != nil {
return isValidPayload, err
}
}
return isValidPayload, nil
}

View File

@@ -17,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
@@ -34,7 +35,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
@@ -45,21 +45,21 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
nextEpochBoundarySlot primitives.Slot
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
}
// config options for the service.
@@ -130,7 +130,7 @@ func (s *Service) Start() {
}
}
s.spawnProcessAttestationsRoutine()
s.spawnLateBlockTasksLoop()
go s.runLateBlockTasks()
}
// Stop the blockchain service's main event loop and associated goroutines.
@@ -307,7 +307,13 @@ func (s *Service) initializeHeadFromDB(ctx context.Context) error {
if err != nil {
return errors.Wrap(err, "could not get finalized block")
}
if err := s.setHead(finalizedRoot, finalizedBlock, finalizedState); err != nil {
if err := s.setHead(&head{
finalizedRoot,
finalizedBlock,
finalizedState,
finalizedBlock.Block().Slot(),
false,
}); err != nil {
return errors.Wrap(err, "could not set head")
}
@@ -401,7 +407,7 @@ func (s *Service) initializeBeaconChain(
if err := helpers.UpdateCommitteeCache(ctx, genesisState, 0); err != nil {
return nil, err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, genesisState); err != nil {
if err := helpers.UpdateProposerIndicesInCache(ctx, genesisState, coreTime.CurrentEpoch(genesisState)); err != nil {
return nil, err
}
@@ -439,7 +445,13 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
}
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(s.genesisTime.Unix()))
if err := s.setHead(genesisBlkRoot, genesisBlk, genesisState); err != nil {
if err := s.setHead(&head{
genesisBlkRoot,
genesisBlk,
genesisState,
genesisBlk.Block().Slot(),
false,
}); err != nil {
log.WithError(err).Fatal("Could not set head")
}
return nil

View File

@@ -357,7 +357,7 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
require.NoError(t, s.cfg.StateGen.SaveState(ctx, r, newState))
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, s.saveHeadNoDB(ctx, wsb, r, newState))
require.NoError(t, s.saveHeadNoDB(ctx, wsb, r, newState, false))
newB, err := s.cfg.BeaconDB.HeadBlock(ctx)
require.NoError(t, err)
@@ -377,9 +377,7 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
require.NoError(t, err)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, s.insertBlockToForkchoiceStore(ctx, wsb.Block(), r, beaconState))
require.NoError(t, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, r))
assert.Equal(t, false, s.hasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.hasBlock(ctx, r), "Should have block")
@@ -453,9 +451,7 @@ func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
bs := &ethpb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}}
beaconState, err := state_native.InitializeFromProtoPhase0(bs)
require.NoError(b, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(b, err)
require.NoError(b, s.insertBlockToForkchoiceStore(ctx, wsb.Block(), r, beaconState))
require.NoError(b, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, r))
b.ResetTimer()
for i := 0; i < b.N; i++ {

View File

@@ -533,7 +533,7 @@ func (s *ChainService) GetProposerHead() [32]byte {
return [32]byte{}
}
// SetForkchoiceGenesisTime mocks the same method in the chain service
// SetForkChoiceGenesisTime mocks the same method in the chain service
func (s *ChainService) SetForkChoiceGenesisTime(timestamp uint64) {
if s.ForkChoiceStore != nil {
s.ForkChoiceStore.SetGenesisTime(timestamp)

View File

@@ -3,6 +3,7 @@ package builder
import (
"context"
"testing"
"time"
buildertesting "github.com/prysmaticlabs/prysm/v4/api/client/builder/testing"
blockchainTesting "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
@@ -38,6 +39,21 @@ func Test_RegisterValidator(t *testing.T) {
assert.Equal(t, true, builder.RegisteredVals[pubkey])
}
func Test_RegisterValidator_WithCache(t *testing.T) {
ctx := context.Background()
headFetcher := &blockchainTesting.ChainService{}
builder := buildertesting.NewClient()
s, err := NewService(ctx, WithRegistrationCache(), WithHeadFetcher(headFetcher), WithBuilderClient(&builder))
require.NoError(t, err)
pubkey := bytesutil.ToBytes48([]byte("pubkey"))
var feeRecipient [20]byte
reg := &eth.ValidatorRegistrationV1{Pubkey: pubkey[:], Timestamp: uint64(time.Now().UTC().Unix()), FeeRecipient: feeRecipient[:]}
require.NoError(t, s.RegisterValidator(ctx, []*eth.SignedValidatorRegistrationV1{{Message: reg}}))
registration, err := s.registrationCache.RegistrationByIndex(0)
require.NoError(t, err)
require.DeepEqual(t, reg, registration)
}
func Test_BuilderMethodsWithouClient(t *testing.T) {
s, err := NewService(context.Background())
require.NoError(t, err)

View File

@@ -34,6 +34,7 @@ go_library(
deps = [
"//beacon-chain/state:go_default_library",
"//cache/lru:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
@@ -87,7 +88,6 @@ go_test(
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -4,35 +4,41 @@ import (
"bytes"
"sync"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
)
const keyLength = 40
const vIdLength = 8
const pIdLength = 8
const vpIdsLength = vIdLength + pIdLength
// ProposerPayloadIDsCache is a cache of proposer payload IDs.
// The key is the slot. The value is the concatenation of the proposer and payload IDs. 8 bytes each.
// The key is the concatenation of the slot and the block root.
// The value is the concatenation of the proposer and payload IDs, 8 bytes each.
type ProposerPayloadIDsCache struct {
slotToProposerAndPayloadIDs map[[40]byte][vpIdsLength]byte
slotToProposerAndPayloadIDs map[[keyLength]byte][vpIdsLength]byte
sync.RWMutex
}
// NewProposerPayloadIDsCache creates a new proposer payload IDs cache.
func NewProposerPayloadIDsCache() *ProposerPayloadIDsCache {
return &ProposerPayloadIDsCache{
slotToProposerAndPayloadIDs: make(map[[40]byte][vpIdsLength]byte),
slotToProposerAndPayloadIDs: make(map[[keyLength]byte][vpIdsLength]byte),
}
}
// GetProposerPayloadIDs returns the proposer and payload IDs for the given slot.
func (f *ProposerPayloadIDsCache) GetProposerPayloadIDs(slot primitives.Slot, r [32]byte) (primitives.ValidatorIndex, [8]byte, bool) {
// GetProposerPayloadIDs returns the proposer and payload IDs for the given slot and head root to build the block.
func (f *ProposerPayloadIDsCache) GetProposerPayloadIDs(
slot primitives.Slot,
r [fieldparams.RootLength]byte,
) (primitives.ValidatorIndex, [pIdLength]byte, bool) {
f.RLock()
defer f.RUnlock()
ids, ok := f.slotToProposerAndPayloadIDs[idKey(slot, r)]
if !ok {
return 0, [8]byte{}, false
return 0, [pIdLength]byte{}, false
}
vId := ids[:vIdLength]
@@ -43,8 +49,13 @@ func (f *ProposerPayloadIDsCache) GetProposerPayloadIDs(slot primitives.Slot, r
return primitives.ValidatorIndex(bytesutil.BytesToUint64BigEndian(vId)), pId, true
}
// SetProposerAndPayloadIDs sets the proposer and payload IDs for the given slot.
func (f *ProposerPayloadIDsCache) SetProposerAndPayloadIDs(slot primitives.Slot, vId primitives.ValidatorIndex, pId [8]byte, r [32]byte) {
// SetProposerAndPayloadIDs sets the proposer and payload IDs for the given slot and head root to build the block.
func (f *ProposerPayloadIDsCache) SetProposerAndPayloadIDs(
slot primitives.Slot,
vId primitives.ValidatorIndex,
pId [pIdLength]byte,
r [fieldparams.RootLength]byte,
) {
f.Lock()
defer f.Unlock()
var vIdBytes [vIdLength]byte
@@ -63,7 +74,7 @@ func (f *ProposerPayloadIDsCache) SetProposerAndPayloadIDs(slot primitives.Slot,
}
}
// PrunePayloadIDs removes the payload id entries that's current than input slot.
// PrunePayloadIDs removes the payload ID entries older than the input slot.
func (f *ProposerPayloadIDsCache) PrunePayloadIDs(slot primitives.Slot) {
f.Lock()
defer f.Unlock()
@@ -76,8 +87,8 @@ func (f *ProposerPayloadIDsCache) PrunePayloadIDs(slot primitives.Slot) {
}
}
func idKey(slot primitives.Slot, r [32]byte) [40]byte {
var k [40]byte
func idKey(slot primitives.Slot, r [fieldparams.RootLength]byte) [keyLength]byte {
var k [keyLength]byte
copy(k[:], append(bytesutil.Uint64ToBytesBigEndian(uint64(slot)), r[:]...))
return k
}
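A minimal usage sketch, not part of the diff and assuming it lives inside this cache package: with the new (slot, head root) key, two forks at the same slot no longer overwrite each other's payload IDs.

// exampleProposerPayloadIDsCache is a hypothetical illustration of the new
// (slot, head root) keying: two branches at the same slot keep separate entries.
func exampleProposerPayloadIDsCache() {
    c := NewProposerPayloadIDsCache()
    slot := primitives.Slot(42)
    rootA := [fieldparams.RootLength]byte{'a'}
    rootB := [fieldparams.RootLength]byte{'b'}
    c.SetProposerAndPayloadIDs(slot, 1, [pIdLength]byte{1}, rootA)
    c.SetProposerAndPayloadIDs(slot, 2, [pIdLength]byte{2}, rootB)
    if _, pID, ok := c.GetProposerPayloadIDs(slot, rootA); ok {
        _ = pID // payload ID built on top of rootA
    }
    if _, pID, ok := c.GetProposerPayloadIDs(slot, rootB); ok {
        _ = pID // payload ID built on top of rootB
    }
}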

View File

@@ -3,15 +3,11 @@ package cache
import (
"context"
"sync"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/math"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -38,33 +34,10 @@ func (regCache *RegistrationCache) RegistrationByIndex(id primitives.ValidatorIn
regCache.lock.RUnlock()
return nil, errors.Wrapf(ErrNotFoundRegistration, "validator id %d", id)
}
isExpired, err := RegistrationTimeStampExpired(v.Timestamp)
if err != nil {
return nil, errors.Wrapf(err, "failed to check registration expiration")
}
if isExpired {
regCache.lock.RUnlock()
regCache.lock.Lock()
defer regCache.lock.Unlock()
delete(regCache.indexToRegistration, id)
log.Warnf("registration for validator index %d expired at unix time %d", id, v.Timestamp)
return nil, errors.Wrapf(ErrNotFoundRegistration, "validator id %d", id)
}
regCache.lock.RUnlock()
return v, nil
}
func RegistrationTimeStampExpired(ts uint64) (bool, error) {
// safely convert uint64 to int64
i, err := math.Int(ts)
if err != nil {
return false, err
}
expiryDuration := params.BeaconConfig().RegistrationDuration
// registered time + expiration duration < current time = expired
return time.Unix(int64(i), 0).Add(expiryDuration).Before(time.Now()), nil
}
// UpdateIndexToRegisteredMap adds or updates values in the cache based on the argument.
func (regCache *RegistrationCache) UpdateIndexToRegisteredMap(ctx context.Context, m map[primitives.ValidatorIndex]*ethpb.ValidatorRegistrationV1) {
_, span := trace.StartSpan(ctx, "RegistrationCache.UpdateIndexToRegisteredMap")

View File

@@ -6,15 +6,12 @@ import (
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestRegistrationCache(t *testing.T) {
hook := logTest.NewGlobal()
pubkey, err := hexutil.Decode("0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a")
require.NoError(t, err)
validatorIndex := primitives.ValidatorIndex(1)
@@ -31,29 +28,14 @@ func TestRegistrationCache(t *testing.T) {
reg, err := cache.RegistrationByIndex(validatorIndex)
require.NoError(t, err)
require.Equal(t, string(reg.Pubkey), string(pubkey))
t.Run("Registration expired", func(t *testing.T) {
validatorIndex2 := primitives.ValidatorIndex(2)
overExpirationPadTime := time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot*uint64(params.BeaconConfig().SlotsPerEpoch)*4) // 4 epochs
m[validatorIndex2] = &ethpb.ValidatorRegistrationV1{
FeeRecipient: []byte{},
GasLimit: 100,
Timestamp: uint64(time.Now().Add(-1 * overExpirationPadTime).Unix()),
Pubkey: pubkey,
}
cache.UpdateIndexToRegisteredMap(context.Background(), m)
_, err := cache.RegistrationByIndex(validatorIndex2)
require.ErrorContains(t, "no validator registered", err)
require.LogsContain(t, hook, "expired")
})
t.Run("Registration close to expiration still passes", func(t *testing.T) {
t.Run("successfully updates", func(t *testing.T) {
pubkey, err := hexutil.Decode("0x88247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a")
require.NoError(t, err)
validatorIndex2 := primitives.ValidatorIndex(2)
overExpirationPadTime := time.Second * time.Duration((params.BeaconConfig().SecondsPerSlot*uint64(params.BeaconConfig().SlotsPerEpoch)*3)-5) // 3 epochs - 5 seconds
m[validatorIndex2] = &ethpb.ValidatorRegistrationV1{
FeeRecipient: []byte{},
GasLimit: 100,
Timestamp: uint64(time.Now().Add(-1 * overExpirationPadTime).Unix()),
Timestamp: uint64(time.Now().Unix()),
Pubkey: pubkey,
}
cache.UpdateIndexToRegisteredMap(context.Background(), m)
@@ -62,21 +44,3 @@ func TestRegistrationCache(t *testing.T) {
require.Equal(t, string(reg.Pubkey), string(pubkey))
})
}
func Test_RegistrationTimeStampExpired(t *testing.T) {
// expiration set at 3 epochs
t.Run("expired registration", func(t *testing.T) {
overExpirationPadTime := time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot*uint64(params.BeaconConfig().SlotsPerEpoch)*4) // 4 epochs
ts := uint64(time.Now().Add(-1 * overExpirationPadTime).Unix())
isExpired, err := RegistrationTimeStampExpired(ts)
require.NoError(t, err)
require.Equal(t, true, isExpired)
})
t.Run("is not expired registration", func(t *testing.T) {
overExpirationPadTime := time.Second * time.Duration((params.BeaconConfig().SecondsPerSlot*uint64(params.BeaconConfig().SlotsPerEpoch)*3)-5) // 3 epochs -5 seconds
ts := uint64(time.Now().Add(-1 * overExpirationPadTime).Unix())
isExpired, err := RegistrationTimeStampExpired(ts)
require.NoError(t, err)
require.Equal(t, false, isExpired)
})
}

View File

@@ -13,6 +13,15 @@ import (
"go.opencensus.io/trace"
)
// AttDelta contains rewards and penalties for a single attestation.
type AttDelta struct {
HeadReward uint64
SourceReward uint64
SourcePenalty uint64
TargetReward uint64
TargetPenalty uint64
}
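A small sketch of an assumed helper, not part of the diff, showing how the scalar totals the old API returned decompose out of an AttDelta; the updated tests below recombine the fields the same way.

// attDeltaTotals recombines an AttDelta into the aggregate reward and penalty
// values that AttestationsDelta previously returned as plain slices.
func attDeltaTotals(d *AttDelta) (reward, penalty uint64) {
    reward = d.HeadReward + d.SourceReward + d.TargetReward
    penalty = d.SourcePenalty + d.TargetPenalty
    return reward, penalty
}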
// InitializePrecomputeValidators precomputes individual validator for its attested balances and the total sum of validators attested balances of the epoch.
func InitializePrecomputeValidators(ctx context.Context, beaconState state.BeaconState) ([]*precompute.Validator, *precompute.Balance, error) {
ctx, span := trace.StartSpan(ctx, "altair.InitializePrecomputeValidators")
@@ -226,7 +235,7 @@ func ProcessRewardsAndPenaltiesPrecompute(
return beaconState, errors.New("validator registries not the same length as state's validator registries")
}
attsRewards, attsPenalties, err := AttestationsDelta(beaconState, bal, vals)
attDeltas, err := AttestationsDelta(beaconState, bal, vals)
if err != nil {
return nil, errors.Wrap(err, "could not get attestation delta")
}
@@ -237,11 +246,12 @@ func ProcessRewardsAndPenaltiesPrecompute(
// Compute the post balance of the validator after accounting for the
// attester and proposer rewards and penalties.
balances[i], err = helpers.IncreaseBalanceWithVal(balances[i], attsRewards[i])
delta := attDeltas[i]
balances[i], err = helpers.IncreaseBalanceWithVal(balances[i], delta.HeadReward+delta.SourceReward+delta.TargetReward)
if err != nil {
return nil, err
}
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], attsPenalties[i])
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], delta.SourcePenalty+delta.TargetPenalty)
vals[i].AfterEpochTransitionBalance = balances[i]
}
@@ -255,10 +265,8 @@ func ProcessRewardsAndPenaltiesPrecompute(
// AttestationsDelta computes and returns the reward and penalty deltas for individual validators based on
// their voting records.
func AttestationsDelta(beaconState state.BeaconState, bal *precompute.Balance, vals []*precompute.Validator) (rewards, penalties []uint64, err error) {
numOfVals := beaconState.NumValidators()
rewards = make([]uint64, numOfVals)
penalties = make([]uint64, numOfVals)
func AttestationsDelta(beaconState state.BeaconState, bal *precompute.Balance, vals []*precompute.Validator) ([]*AttDelta, error) {
attDeltas := make([]*AttDelta, len(vals))
cfg := params.BeaconConfig()
prevEpoch := time.PrevEpoch(beaconState)
@@ -272,29 +280,29 @@ func AttestationsDelta(beaconState state.BeaconState, bal *precompute.Balance, v
bias := cfg.InactivityScoreBias
inactivityPenaltyQuotient, err := beaconState.InactivityPenaltyQuotient()
if err != nil {
return nil, nil, err
return nil, err
}
inactivityDenominator := bias * inactivityPenaltyQuotient
for i, v := range vals {
rewards[i], penalties[i], err = attestationDelta(bal, v, baseRewardMultiplier, inactivityDenominator, leak)
attDeltas[i], err = attestationDelta(bal, v, baseRewardMultiplier, inactivityDenominator, leak)
if err != nil {
return nil, nil, err
return nil, err
}
}
return rewards, penalties, nil
return attDeltas, nil
}
func attestationDelta(
bal *precompute.Balance,
val *precompute.Validator,
baseRewardMultiplier, inactivityDenominator uint64,
inactivityLeak bool) (reward, penalty uint64, err error) {
inactivityLeak bool) (*AttDelta, error) {
eligible := val.IsActivePrevEpoch || (val.IsSlashed && !val.IsWithdrawableCurrentEpoch)
// Per spec `ActiveCurrentEpoch` can't be 0 to process attestation delta.
if !eligible || bal.ActiveCurrentEpoch == 0 {
return 0, 0, nil
return &AttDelta{}, nil
}
cfg := params.BeaconConfig()
@@ -307,32 +315,32 @@ func attestationDelta(
srcWeight := cfg.TimelySourceWeight
tgtWeight := cfg.TimelyTargetWeight
headWeight := cfg.TimelyHeadWeight
reward, penalty = uint64(0), uint64(0)
attDelta := &AttDelta{}
// Process source reward / penalty
if val.IsPrevEpochSourceAttester && !val.IsSlashed {
if !inactivityLeak {
n := baseReward * srcWeight * (bal.PrevEpochAttested / increment)
reward += n / (activeIncrement * weightDenominator)
attDelta.SourceReward += n / (activeIncrement * weightDenominator)
}
} else {
penalty += baseReward * srcWeight / weightDenominator
attDelta.SourcePenalty += baseReward * srcWeight / weightDenominator
}
// Process target reward / penalty
if val.IsPrevEpochTargetAttester && !val.IsSlashed {
if !inactivityLeak {
n := baseReward * tgtWeight * (bal.PrevEpochTargetAttested / increment)
reward += n / (activeIncrement * weightDenominator)
attDelta.TargetReward += n / (activeIncrement * weightDenominator)
}
} else {
penalty += baseReward * tgtWeight / weightDenominator
attDelta.TargetPenalty += baseReward * tgtWeight / weightDenominator
}
// Process head reward / penalty
if val.IsPrevEpochHeadAttester && !val.IsSlashed {
if !inactivityLeak {
n := baseReward * headWeight * (bal.PrevEpochHeadAttested / increment)
reward += n / (activeIncrement * weightDenominator)
attDelta.HeadReward += n / (activeIncrement * weightDenominator)
}
}
@@ -341,10 +349,10 @@ func attestationDelta(
if !val.IsPrevEpochTargetAttester || val.IsSlashed {
n, err := math.Mul64(effectiveBalance, val.InactivityScore)
if err != nil {
return 0, 0, err
return &AttDelta{}, err
}
penalty += n / inactivityDenominator
attDelta.TargetPenalty += n / inactivityDenominator
}
return reward, penalty, nil
return attDelta, nil
}

View File

@@ -213,9 +213,16 @@ func TestAttestationsDelta(t *testing.T) {
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
rewards, penalties, err := AttestationsDelta(s, balance, validators)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
rewards := make([]uint64, len(deltas))
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
}
// Reward amount should increase as validator index increases due to setup.
for i := 1; i < len(rewards); i++ {
require.Equal(t, true, rewards[i] > rewards[i-1])
@@ -244,9 +251,16 @@ func TestAttestationsDeltaBellatrix(t *testing.T) {
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
rewards, penalties, err := AttestationsDelta(s, balance, validators)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
rewards := make([]uint64, len(deltas))
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
}
// Reward amount should increase as validator index increases due to setup.
for i := 1; i < len(rewards); i++ {
require.Equal(t, true, rewards[i] > rewards[i-1])
@@ -285,8 +299,15 @@ func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
}
wanted := make([]uint64, s.NumValidators())
rewards, penalties, err := AttestationsDelta(s, balance, validators)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
rewards := make([]uint64, len(deltas))
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
}
for i := range rewards {
wanted[i] += rewards[i]
}

View File

@@ -2,6 +2,7 @@ package altair
import (
"context"
goErrors "errors"
"fmt"
"time"
@@ -22,6 +23,10 @@ import (
const maxRandomByte = uint64(1<<8 - 1)
var (
ErrTooLate = errors.New("sync message is too late")
)
// ValidateNilSyncContribution validates that the following fields are not nil:
// - the contribution and proof itself
// - the message within contribution and proof
@@ -190,6 +195,7 @@ func IsSyncCommitteeAggregator(sig []byte) (bool, error) {
}
// ValidateSyncMessageTime validates a sync message to ensure that the provided slot is valid.
// Spec: [IGNORE] The message's slot is for the current slot (with a MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance), i.e. sync_committee_message.slot == current_slot
func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockDisparity time.Duration) error {
if err := slots.ValidateClock(slot, uint64(genesisTime.Unix())); err != nil {
return err
@@ -217,15 +223,19 @@ func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockD
upperBound := time.Now().Add(clockDisparity)
// Verify sync message slot is within the time range.
if messageTime.Before(lowerBound) || messageTime.After(upperBound) {
return fmt.Errorf(
"sync message time %v (slot %d) not within allowable range of %v (slot %d) to %v (slot %d)",
syncErr := fmt.Errorf(
"sync message time %v (message slot %d) not within allowable range of %v to %v (current slot %d)",
messageTime,
slot,
lowerBound,
uint64(lowerBound.Unix()-genesisTime.Unix())/params.BeaconConfig().SecondsPerSlot,
upperBound,
uint64(upperBound.Unix()-genesisTime.Unix())/params.BeaconConfig().SecondsPerSlot,
currentSlot,
)
// Wrap error message if sync message is too late.
if messageTime.Before(lowerBound) {
syncErr = goErrors.Join(ErrTooLate, syncErr)
}
return syncErr
}
return nil
}
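Because the late case is wrapped via errors.Join, callers can branch on ErrTooLate with errors.Is; a sketch of an assumed call site:

// handleSyncMessage is a hypothetical caller: it silently drops late sync
// messages and surfaces every other validation error.
func handleSyncMessage(slot primitives.Slot, genesisTime time.Time, disparity time.Duration) error {
    if err := ValidateSyncMessageTime(slot, genesisTime, disparity); err != nil {
        if goErrors.Is(err, ErrTooLate) {
            return nil // the message arrived after the allowed window
        }
        return err
    }
    return nil
}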

View File

@@ -311,7 +311,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
syncMessageSlot: 16,
genesisTime: prysmTime.Now().Add(-(15 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)),
},
wantedErr: "(slot 16) not within allowable range of",
wantedErr: "(message slot 16) not within allowable range of",
},
{
name: "sync_message.slot == current_slot+CLOCK_DISPARITY",
@@ -327,7 +327,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
syncMessageSlot: 100,
genesisTime: prysmTime.Now().Add(-(100 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second) + params.BeaconNetworkConfig().MaximumGossipClockDisparity + 1000*time.Millisecond),
},
wantedErr: "(slot 100) not within allowable range of",
wantedErr: "(message slot 100) not within allowable range of",
},
{
name: "sync_message.slot == current_slot-CLOCK_DISPARITY",
@@ -343,7 +343,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
syncMessageSlot: 101,
genesisTime: prysmTime.Now().Add(-(100*time.Duration(params.BeaconConfig().SecondsPerSlot)*time.Second + params.BeaconNetworkConfig().MaximumGossipClockDisparity)),
},
wantedErr: "(slot 101) not within allowable range of",
wantedErr: "(message slot 101) not within allowable range of",
},
{
name: "sync_message.slot is well beyond current slot",

View File

@@ -50,6 +50,8 @@ func ProcessVoluntaryExits(
beaconState state.BeaconState,
exits []*ethpb.SignedVoluntaryExit,
) (state.BeaconState, error) {
maxExitEpoch, churn := v.ValidatorsMaxExitEpochAndChurn(beaconState)
var exitEpoch primitives.Epoch
for idx, exit := range exits {
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil voluntary exit in block body")
@@ -61,8 +63,15 @@ func ProcessVoluntaryExits(
if err := VerifyExitAndSignature(val, beaconState.Slot(), beaconState.Fork(), exit, beaconState.GenesisValidatorsRoot()); err != nil {
return nil, errors.Wrapf(err, "could not verify exit %d", idx)
}
beaconState, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex)
if err != nil {
beaconState, exitEpoch, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, maxExitEpoch, churn)
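// Track the evolving exit queue so later exits in this block are assigned the correct epoch.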
if err == nil {
if exitEpoch > maxExitEpoch {
maxExitEpoch = exitEpoch
churn = 1
} else if exitEpoch == maxExitEpoch {
churn++
}
} else if !errors.Is(err, v.ValidatorAlreadyExitedErr) {
return nil, err
}
}

View File

@@ -61,7 +61,7 @@ func IsExecutionBlock(body interfaces.ReadOnlyBeaconBlockBody) (bool, error) {
}
payload, err := body.Execution()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedGetter):
case errors.Is(err, consensus_types.ErrUnsupportedField):
return false, nil
case err != nil:
return false, err

View File

@@ -110,8 +110,11 @@ func ProcessRegistryUpdates(ctx context.Context, state state.BeaconState) (state
isActive := helpers.IsActiveValidator(validator, currentEpoch)
belowEjectionBalance := validator.EffectiveBalance <= ejectionBal
if isActive && belowEjectionBalance {
state, err = validators.InitiateValidatorExit(ctx, state, primitives.ValidatorIndex(idx))
if err != nil {
// It is fine to do a quadratic loop here since this should
// rarely happen.
maxExitEpoch, churn := validators.ValidatorsMaxExitEpochAndChurn(state)
state, _, err = validators.InitiateValidatorExit(ctx, state, primitives.ValidatorIndex(idx), maxExitEpoch, churn)
if err != nil && !errors.Is(err, validators.ValidatorAlreadyExitedErr) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
}
}

View File

@@ -39,6 +39,8 @@ type BlockProcessedData struct {
SignedBlock interfaces.ReadOnlySignedBeaconBlock
// Verified is true if the block's BLS contents have been verified.
Verified bool
// Optimistic is true if the block has not yet been fully validated by the execution layer.
Optimistic bool
}
// ChainStartedData is the data sent with ChainStarted events.

View File

@@ -38,6 +38,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
@@ -68,7 +69,6 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -8,13 +8,16 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
var (
ErrTooLate = errors.New("attestation is too late")
)
// ValidateNilAttestation checks if any composite field of input attestation is nil.
// Access to these nil fields will result in a runtime panic,
// so it is recommended to run these checks as a first line of defense.
@@ -66,25 +69,6 @@ func IsAggregator(committeeCount uint64, slotSig []byte) (bool, error) {
return binary.LittleEndian.Uint64(b[:8])%modulo == 0, nil
}
// AggregateSignature returns the aggregated signature of the input attestations.
//
// Spec pseudocode definition:
//
// def get_aggregate_signature(attestations: Sequence[Attestation]) -> BLSSignature:
// signatures = [attestation.signature for attestation in attestations]
// return bls.Aggregate(signatures)
func AggregateSignature(attestations []*ethpb.Attestation) (bls.Signature, error) {
sigs := make([]bls.Signature, len(attestations))
var err error
for i := 0; i < len(sigs); i++ {
sigs[i], err = bls.SignatureFromBytes(attestations[i].Signature)
if err != nil {
return nil, err
}
}
return bls.AggregateSignatures(sigs), nil
}
// IsAggregated returns true if the attestation is an aggregated attestation,
// false otherwise.
func IsAggregated(attestation *ethpb.Attestation) bool {
@@ -184,7 +168,7 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
)
if attTime.Before(lowerBounds) {
attReceivedTooLateCount.Inc()
return attError
return errors.Join(ErrTooLate, attError)
}
if attTime.After(upperBounds) {
attReceivedTooEarlyCount.Inc()

View File

@@ -10,8 +10,6 @@ import (
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
@@ -45,44 +43,6 @@ func TestAttestation_IsAggregator(t *testing.T) {
})
}
func TestAttestation_AggregateSignature(t *testing.T) {
t.Run("verified", func(t *testing.T) {
pubkeys := make([]bls.PublicKey, 0, 100)
atts := make([]*ethpb.Attestation, 0, 100)
msg := bytesutil.ToBytes32([]byte("hello"))
for i := 0; i < 100; i++ {
priv, err := bls.RandKey()
require.NoError(t, err)
pub := priv.PublicKey()
sig := priv.Sign(msg[:])
pubkeys = append(pubkeys, pub)
att := &ethpb.Attestation{Signature: sig.Marshal()}
atts = append(atts, att)
}
aggSig, err := helpers.AggregateSignature(atts)
require.NoError(t, err)
assert.Equal(t, true, aggSig.FastAggregateVerify(pubkeys, msg), "Signature did not verify")
})
t.Run("not verified", func(t *testing.T) {
pubkeys := make([]bls.PublicKey, 0, 100)
atts := make([]*ethpb.Attestation, 0, 100)
msg := []byte("hello")
for i := 0; i < 100; i++ {
priv, err := bls.RandKey()
require.NoError(t, err)
pub := priv.PublicKey()
sig := priv.Sign(msg)
pubkeys = append(pubkeys, pub)
att := &ethpb.Attestation{Signature: sig.Marshal()}
atts = append(atts, att)
}
aggSig, err := helpers.AggregateSignature(atts[0 : len(atts)-2])
require.NoError(t, err)
assert.Equal(t, false, aggSig.FastAggregateVerify(pubkeys, bytesutil.ToBytes32(msg)), "Signature not suppose to verify")
})
}
func TestAttestation_ComputeSubnetForAttestation(t *testing.T) {
// Create 10 committees
committeeCount := uint64(10)

View File

@@ -295,61 +295,58 @@ func ShuffledIndices(s state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]pri
}
// UpdateCommitteeCache gets called at the beginning of every epoch to cache the committee shuffled indices
// list with committee index and epoch number. It caches the shuffled indices for current epoch and next epoch.
func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) error {
for _, e := range []primitives.Epoch{epoch, epoch + 1} {
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return err
}
if committeeCache.HasEntry(string(seed[:])) {
return nil
}
shuffledIndices, err := ShuffledIndices(state, e)
if err != nil {
return err
}
count := SlotCommitteeCount(uint64(len(shuffledIndices)))
// Store the sorted indices as well as shuffled indices. In current spec,
// sorted indices is required to retrieve proposer index. This is also
// used for failing verify signature fallback.
sortedIndices := make([]primitives.ValidatorIndex, len(shuffledIndices))
copy(sortedIndices, shuffledIndices)
sort.Slice(sortedIndices, func(i, j int) bool {
return sortedIndices[i] < sortedIndices[j]
})
if err := committeeCache.AddCommitteeShuffledList(ctx, &cache.Committees{
ShuffledIndices: shuffledIndices,
CommitteeCount: uint64(params.BeaconConfig().SlotsPerEpoch.Mul(count)),
Seed: seed,
SortedIndices: sortedIndices,
}); err != nil {
return err
}
// list with committee index and epoch number. It caches the shuffled indices for the input epoch.
func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState, e primitives.Epoch) error {
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return err
}
if committeeCache.HasEntry(string(seed[:])) {
return nil
}
shuffledIndices, err := ShuffledIndices(state, e)
if err != nil {
return err
}
count := SlotCommitteeCount(uint64(len(shuffledIndices)))
// Store the sorted indices as well as the shuffled indices. In the current spec,
// sorted indices are required to retrieve the proposer index. They are also
// used as a fallback when signature verification fails.
sortedIndices := make([]primitives.ValidatorIndex, len(shuffledIndices))
copy(sortedIndices, shuffledIndices)
sort.Slice(sortedIndices, func(i, j int) bool {
return sortedIndices[i] < sortedIndices[j]
})
if err := committeeCache.AddCommitteeShuffledList(ctx, &cache.Committees{
ShuffledIndices: shuffledIndices,
CommitteeCount: uint64(params.BeaconConfig().SlotsPerEpoch.Mul(count)),
Seed: seed,
SortedIndices: sortedIndices,
}); err != nil {
return err
}
return nil
}
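With the next epoch no longer cached implicitly, a caller that wants both epochs warm makes two explicit calls; a hypothetical helper mirroring the new test below:

// warmCommitteeCaches caches both the given epoch and the next one,
// which UpdateCommitteeCache previously did in a single call.
func warmCommitteeCaches(ctx context.Context, st state.ReadOnlyBeaconState, e primitives.Epoch) error {
    if err := UpdateCommitteeCache(ctx, st, e); err != nil {
        return err
    }
    return UpdateCommitteeCache(ctx, st, e+1)
}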
// UpdateProposerIndicesInCache updates proposer indices entry of the committee cache.
func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaconState) error {
// Input state is used to retrieve active validator indices.
// Input epoch is the epoch to retrieve proposer indices for.
func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) error {
// The cache uses the state root at the last slot of (epoch - 1) as its key (e.g. for epoch 2, the key is the root at slot 63).
// That is why the genesis epoch is skipped.
if time.CurrentEpoch(state) <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
if epoch <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
return nil
}
// Use state root from (current_epoch - 1)
wantedEpoch := time.PrevEpoch(state)
s, err := slots.EpochEnd(wantedEpoch)
s, err := slots.EpochEnd(epoch - 1)
if err != nil {
return err
}
r, err := StateRootAtSlot(state, s)
r, err := state.StateRootAtIndex(uint64(s % params.BeaconConfig().SlotsPerHistoricalRoot))
if err != nil {
return err
}
@@ -366,11 +363,11 @@ func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaco
return nil
}
indices, err := ActiveValidatorIndices(ctx, state, time.CurrentEpoch(state))
indices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return err
}
proposerIndices, err := precomputeProposerIndices(state, indices)
proposerIndices, err := precomputeProposerIndices(state, indices, epoch)
if err != nil {
return err
}
@@ -432,11 +429,10 @@ func computeCommittee(
// This computes proposer indices of the current epoch and returns a list of proposer indices,
// the index of the list represents the slot number.
func precomputeProposerIndices(state state.ReadOnlyBeaconState, activeIndices []primitives.ValidatorIndex) ([]primitives.ValidatorIndex, error) {
func precomputeProposerIndices(state state.ReadOnlyBeaconState, activeIndices []primitives.ValidatorIndex, e primitives.Epoch) ([]primitives.ValidatorIndex, error) {
hashFunc := hash.CustomSHA256Hasher()
proposerIndices := make([]primitives.ValidatorIndex, params.BeaconConfig().SlotsPerEpoch)
e := time.CurrentEpoch(state)
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
return nil, errors.Wrap(err, "could not generate seed")

View File

@@ -413,7 +413,7 @@ func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
require.NoError(t, err)
require.NoError(t, UpdateCommitteeCache(context.Background(), state, time.CurrentEpoch(state)))
epoch := primitives.Epoch(1)
epoch := primitives.Epoch(0)
idx := primitives.CommitteeIndex(1)
seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
@@ -423,6 +423,40 @@ func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
assert.Equal(t, params.BeaconConfig().TargetCommitteeSize, uint64(len(indices)), "Did not save correct indices lengths")
}
func TestUpdateCommitteeCache_CanUpdateAcrossEpochs(t *testing.T) {
ClearCache()
defer ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
indices := make([]primitives.ValidatorIndex, validatorCount)
for i := primitives.ValidatorIndex(0); uint64(i) < validatorCount; i++ {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: 1,
}
indices[i] = i
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
e := time.CurrentEpoch(state)
require.NoError(t, UpdateCommitteeCache(context.Background(), state, e))
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
require.Equal(t, true, committeeCache.HasEntry(string(seed[:])))
nextSeed, err := Seed(state, e+1, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
require.Equal(t, false, committeeCache.HasEntry(string(nextSeed[:])))
require.NoError(t, UpdateCommitteeCache(context.Background(), state, e+1))
require.Equal(t, true, committeeCache.HasEntry(string(nextSeed[:])))
}
func BenchmarkComputeCommittee300000_WithPreCache(b *testing.B) {
validators := make([]*ethpb.Validator, 300000)
for i := 0; i < len(validators); i++ {
@@ -639,7 +673,7 @@ func TestPrecomputeProposerIndices_Ok(t *testing.T) {
indices, err := ActiveValidatorIndices(context.Background(), state, 0)
require.NoError(t, err)
proposerIndices, err := precomputeProposerIndices(state, indices)
proposerIndices, err := precomputeProposerIndices(state, indices, time.CurrentEpoch(state))
require.NoError(t, err)
var wantedProposerIndices []primitives.ValidatorIndex

View File

@@ -184,7 +184,7 @@ func innerShuffleList(input []primitives.ValidatorIndex, seed [32]byte, shuffle
for {
buf[seedSize] = r
ph := hashfunc(buf[:pivotViewSize])
pivot := bytesutil.FromBytes8(ph[:8]) % listSize
pivot := binary.LittleEndian.Uint64(ph[:8]) % listSize
mirror := (pivot + 1) >> 1
binary.LittleEndian.PutUint32(buf[pivotViewSize:], uint32(pivot>>8))
source := hashfunc(buf)

View File

@@ -17,6 +17,7 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var CommitteeCacheInProgressHit = promauto.NewCounter(prometheus.CounterOpts{
@@ -261,7 +262,7 @@ func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (
}
return proposerIndices[state.Slot()%params.BeaconConfig().SlotsPerEpoch], nil
}
if err := UpdateProposerIndicesInCache(ctx, state); err != nil {
if err := UpdateProposerIndicesInCache(ctx, state, time.CurrentEpoch(state)); err != nil {
return 0, errors.Wrap(err, "could not update committee cache")
}
}
@@ -396,3 +397,22 @@ func isEligibleForActivation(activationEligibilityEpoch, activationEpoch, finali
return activationEligibilityEpoch <= finalizedEpoch &&
activationEpoch == params.BeaconConfig().FarFutureEpoch
}
// LastActivatedValidatorIndex returns the index of the last activated validator in the given state.
func LastActivatedValidatorIndex(ctx context.Context, st state.ReadOnlyBeaconState) (primitives.ValidatorIndex, error) {
_, span := trace.StartSpan(ctx, "helpers.LastActivatedValidatorIndex")
defer span.End()
var lastActivatedValidatorIndex primitives.ValidatorIndex
// Linear search from the highest index because validator statuses are not sorted.
for j := st.NumValidators() - 1; j >= 0; j-- {
val, err := st.ValidatorAtIndexReadOnly(primitives.ValidatorIndex(j))
if err != nil {
return 0, err
}
if IsActiveValidatorUsingTrie(val, time.CurrentEpoch(st)) {
lastActivatedValidatorIndex = primitives.ValidatorIndex(j)
break
}
}
return lastActivatedValidatorIndex, nil
}

View File

@@ -727,3 +727,26 @@ func computeProposerIndexWithValidators(validators []*ethpb.Validator, activeInd
}
}
}
func TestLastActivatedValidatorIndex_OK(t *testing.T) {
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
validators := make([]*ethpb.Validator, 4)
balances := make([]uint64, len(validators))
for i := uint64(0); i < 4; i++ {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, params.BeaconConfig().BLSPubkeyLength),
WithdrawalCredentials: make([]byte, 32),
EffectiveBalance: 32 * 1e9,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
balances[i] = validators[i].EffectiveBalance
}
require.NoError(t, beaconState.SetValidators(validators))
require.NoError(t, beaconState.SetBalances(balances))
index, err := LastActivatedValidatorIndex(context.Background(), beaconState)
require.NoError(t, err)
require.Equal(t, index, primitives.ValidatorIndex(3))
}

View File

@@ -15,7 +15,6 @@ go_library(
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -13,11 +13,34 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
mathutil "github.com/prysmaticlabs/prysm/v4/math"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
// ValidatorAlreadyExitedErr is returned when attempting to process an exit
// for a validator that has already exited.
var ValidatorAlreadyExitedErr = errors.New("validator already exited")
// ValidatorsMaxExitEpochAndChurn returns the maximum non-FAR_FUTURE_EPOCH exit
// epoch and the number of validators scheduled to exit at that epoch.
func ValidatorsMaxExitEpochAndChurn(s state.BeaconState) (maxExitEpoch primitives.Epoch, churn uint64) {
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
e := val.ExitEpoch()
if e != farFutureEpoch {
if e > maxExitEpoch {
maxExitEpoch = e
churn = 1
} else if e == maxExitEpoch {
churn++
}
}
return nil
})
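// The callback above never returns an error, so it is safe to discard.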
_ = err
return
}
// InitiateValidatorExit takes in a validator index and updates the
// validator with the correct voluntary exit parameters.
//
@@ -42,73 +65,43 @@ import (
// # Set validator exit epoch and withdrawable epoch
// validator.exit_epoch = exit_queue_epoch
// validator.withdrawable_epoch = Epoch(validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) (state.BeaconState, error) {
func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex, exitQueueEpoch primitives.Epoch, churn uint64) (state.BeaconState, primitives.Epoch, error) {
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitableEpoch > exitQueueEpoch {
exitQueueEpoch = exitableEpoch
churn = 0
}
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, err
return nil, 0, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, nil
}
var exitEpochs []primitives.Epoch
err = s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
if val.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
exitEpochs = append(exitEpochs, val.ExitEpoch())
}
return nil
})
if err != nil {
return nil, err
}
exitEpochs = append(exitEpochs, helpers.ActivationExitEpoch(time.CurrentEpoch(s)))
// Obtain the exit queue epoch as the maximum number in the exit epochs array.
exitQueueEpoch := primitives.Epoch(0)
for _, i := range exitEpochs {
if exitQueueEpoch < i {
exitQueueEpoch = i
}
}
// We use the exit queue churn to determine if we have passed a churn limit.
exitQueueChurn := uint64(0)
err = s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
if val.ExitEpoch() == exitQueueEpoch {
var mErr error
exitQueueChurn, mErr = mathutil.Add64(exitQueueChurn, 1)
if mErr != nil {
return mErr
}
}
return nil
})
if err != nil {
return nil, err
return s, validator.ExitEpoch, ValidatorAlreadyExitedErr
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return nil, errors.Wrap(err, "could not get active validator count")
return nil, 0, errors.Wrap(err, "could not get active validator count")
}
churn, err := helpers.ValidatorChurnLimit(activeValidatorCount)
currentChurn, err := helpers.ValidatorChurnLimit(activeValidatorCount)
if err != nil {
return nil, errors.Wrap(err, "could not get churn limit")
return nil, 0, errors.Wrap(err, "could not get churn limit")
}
if exitQueueChurn >= churn {
if churn >= currentChurn {
exitQueueEpoch, err = exitQueueEpoch.SafeAdd(1)
if err != nil {
return nil, err
return nil, 0, err
}
}
validator.ExitEpoch = exitQueueEpoch
validator.WithdrawableEpoch, err = exitQueueEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, err
return nil, 0, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, err
return nil, 0, err
}
return s, nil
return s, exitQueueEpoch, nil
}
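A sketch of the caller pattern the new signature expects, mirroring ProcessVoluntaryExits: compute the exit-queue tail once, then keep it current across several exits. processExits and exitingIndices are assumed names.

// processExits exits several validators while maintaining the running
// (maxExitEpoch, churn) pair so each exit lands in the right queue epoch.
func processExits(ctx context.Context, st state.BeaconState, exitingIndices []primitives.ValidatorIndex) (state.BeaconState, error) {
    maxExitEpoch, churn := ValidatorsMaxExitEpochAndChurn(st)
    var exitEpoch primitives.Epoch
    var err error
    for _, idx := range exitingIndices {
        st, exitEpoch, err = InitiateValidatorExit(ctx, st, idx, maxExitEpoch, churn)
        if err != nil && !errors.Is(err, ValidatorAlreadyExitedErr) {
            return nil, err
        }
        if err == nil {
            if exitEpoch > maxExitEpoch {
                maxExitEpoch, churn = exitEpoch, 1
            } else if exitEpoch == maxExitEpoch {
                churn++
            }
        }
    }
    return st, nil
}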
// SlashValidator slashes the malicious validator's balance and awards
@@ -144,8 +137,9 @@ func SlashValidator(
slashedIdx primitives.ValidatorIndex,
penaltyQuotient uint64,
proposerRewardQuotient uint64) (state.BeaconState, error) {
s, err := InitiateValidatorExit(ctx, s, slashedIdx)
if err != nil {
maxExitEpoch, churn := ValidatorsMaxExitEpochAndChurn(s)
s, _, err := InitiateValidatorExit(ctx, s, slashedIdx, maxExitEpoch, churn)
if err != nil && !errors.Is(err, ValidatorAlreadyExitedErr) {
return nil, errors.Wrapf(err, "could not initiate validator %d exit", slashedIdx)
}
currentEpoch := slots.ToEpoch(s.Slot())

View File

@@ -48,8 +48,9 @@ func TestInitiateValidatorExit_AlreadyExited(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := InitiateValidatorExit(context.Background(), state, 0)
require.NoError(t, err)
newState, epoch, err := InitiateValidatorExit(context.Background(), state, 0, 199, 1)
require.ErrorIs(t, err, ValidatorAlreadyExitedErr)
require.Equal(t, exitEpoch, epoch)
v, err := newState.ValidatorAtIndex(0)
require.NoError(t, err)
assert.Equal(t, exitEpoch, v.ExitEpoch, "Already exited")
@@ -66,8 +67,9 @@ func TestInitiateValidatorExit_ProperExit(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := InitiateValidatorExit(context.Background(), state, idx)
newState, epoch, err := InitiateValidatorExit(context.Background(), state, idx, exitedEpoch+2, 1)
require.NoError(t, err)
require.Equal(t, exitedEpoch+2, epoch)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, exitedEpoch+2, v.ExitEpoch, "Exit epoch was not the highest")
@@ -85,8 +87,9 @@ func TestInitiateValidatorExit_ChurnOverflow(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := InitiateValidatorExit(context.Background(), state, idx)
newState, epoch, err := InitiateValidatorExit(context.Background(), state, idx, exitedEpoch+2, 4)
require.NoError(t, err)
require.Equal(t, exitedEpoch+3, epoch)
// Because of exit queue overflow,
// validator who init exited has to wait one more epoch.
@@ -106,7 +109,7 @@ func TestInitiateValidatorExit_WithdrawalOverflows(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
_, err = InitiateValidatorExit(context.Background(), state, 1)
_, _, err = InitiateValidatorExit(context.Background(), state, 1, params.BeaconConfig().FarFutureEpoch-1, 1)
require.ErrorContains(t, "addition overflows", err)
}
@@ -337,3 +340,78 @@ func TestExitedValidatorIndices(t *testing.T) {
assert.DeepEqual(t, tt.wanted, exitedIndices)
}
}
func TestValidatorMaxExitEpochAndChurn(t *testing.T) {
tests := []struct {
state *ethpb.BeaconState
wantedEpoch primitives.Epoch
wantedChurn uint64
}{
{
state: &ethpb.BeaconState{
Validators: []*ethpb.Validator{
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 0,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 0,
WithdrawableEpoch: 10,
},
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 0,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
},
},
wantedEpoch: 0,
wantedChurn: 3,
},
{
state: &ethpb.BeaconState{
Validators: []*ethpb.Validator{
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
},
},
wantedEpoch: 0,
wantedChurn: 0,
},
{
state: &ethpb.BeaconState{
Validators: []*ethpb.Validator{
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 1,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 0,
WithdrawableEpoch: 10,
},
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 1,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
},
},
wantedEpoch: 1,
wantedChurn: 2,
},
}
for _, tt := range tests {
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
epoch, churn := ValidatorsMaxExitEpochAndChurn(s)
require.Equal(t, tt.wantedEpoch, epoch)
require.Equal(t, tt.wantedChurn, churn)
}
}

View File

@@ -130,9 +130,15 @@ func performValidatorStateMigration(ctx context.Context, bar *progressbar.Progre
return err
}
item := enc
if hasAltairKey(item) {
switch {
case hasAltairKey(enc):
item = item[len(altairKey):]
case hasBellatrixKey(enc):
item = item[len(bellatrixKey):]
case hasCapellaKey(enc):
item = item[len(capellaKey):]
}
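// With any fork-specific prefix stripped, detect.FromState can identify the state's version.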
detector, err := detect.FromState(item)
if err != nil {
return err
@@ -165,9 +171,14 @@ func performValidatorStateMigration(ctx context.Context, bar *progressbar.Progre
return err
}
var stateBytes []byte
if hasAltairKey(enc) {
switch {
case hasAltairKey(enc):
stateBytes = snappy.Encode(nil, append(altairKey, rawObj...))
} else {
case hasBellatrixKey(enc):
stateBytes = snappy.Encode(nil, append(bellatrixKey, rawObj...))
case hasCapellaKey(enc):
stateBytes = snappy.Encode(nil, append(capellaKey, rawObj...))
default:
stateBytes = snappy.Encode(nil, rawObj)
}
if stateErr := stateBkt.Put(keys[index], stateBytes); stateErr != nil {

View File

@@ -313,3 +313,217 @@ func Test_migrateAltairStateValidators(t *testing.T) {
})
}
}
func Test_migrateBellatrixStateValidators(t *testing.T) {
tests := []struct {
name string
setup func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator)
eval func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator)
}{
{
name: "migrates validators and adds them to new buckets",
setup: func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator) {
// create some new buckets that should be present for this migration
err := dbStore.db.Update(func(tx *bbolt.Tx) error {
_, err := tx.CreateBucketIfNotExists(stateValidatorsBucket)
assert.NoError(t, err)
_, err = tx.CreateBucketIfNotExists(blockRootValidatorHashesBucket)
assert.NoError(t, err)
return nil
})
assert.NoError(t, err)
},
eval: func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator) {
// check whether the new buckets are present
err := dbStore.db.View(func(tx *bbolt.Tx) error {
valBkt := tx.Bucket(stateValidatorsBucket)
assert.NotNil(t, valBkt)
idxBkt := tx.Bucket(blockRootValidatorHashesBucket)
assert.NotNil(t, idxBkt)
return nil
})
assert.NoError(t, err)
// check if the migration worked
blockRoot := [32]byte{'A'}
rcvdState, err := dbStore.State(context.Background(), blockRoot)
assert.NoError(t, err)
require.DeepSSZEqual(t, rcvdState.ToProtoUnsafe(), state.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// find hashes of the validators that are set as part of the state
var hashes []byte
var individualHashes [][]byte
for _, val := range vals {
hash, hashErr := val.HashTreeRoot()
assert.NoError(t, hashErr)
hashes = append(hashes, hash[:]...)
individualHashes = append(individualHashes, hash[:])
}
// check if all the validators that were in the state, are stored properly in the validator bucket
pbState, err := state_native.ProtobufBeaconStateBellatrix(rcvdState.ToProtoUnsafe())
assert.NoError(t, err)
validatorsFoundCount := 0
for _, val := range pbState.Validators {
hash, hashErr := val.HashTreeRoot()
assert.NoError(t, hashErr)
found := false
for _, h := range individualHashes {
if bytes.Equal(hash[:], h) {
found = true
}
}
require.Equal(t, true, found)
validatorsFoundCount++
}
require.Equal(t, len(vals), validatorsFoundCount)
// check if the state validator indexes are stored properly
err = dbStore.db.View(func(tx *bbolt.Tx) error {
rcvdValhashBytes := tx.Bucket(blockRootValidatorHashesBucket).Get(blockRoot[:])
rcvdValHashes, sErr := snappy.Decode(nil, rcvdValhashBytes)
assert.NoError(t, sErr)
require.DeepEqual(t, hashes, rcvdValHashes)
return nil
})
assert.NoError(t, err)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
dbStore := setupDB(t)
// add a state with the given validators
vals := validators(10)
blockRoot := [32]byte{'A'}
st, _ := util.DeterministicGenesisStateBellatrix(t, 20)
err := st.SetFork(&v1alpha1.Fork{
PreviousVersion: params.BeaconConfig().AltairForkVersion,
CurrentVersion: params.BeaconConfig().BellatrixForkVersion,
Epoch: 0,
})
require.NoError(t, err)
assert.NoError(t, st.SetSlot(100))
assert.NoError(t, st.SetValidators(vals))
assert.NoError(t, dbStore.SaveState(context.Background(), st, blockRoot))
// enable historical state representation flag to test this
resetCfg := features.InitWithReset(&features.Flags{
EnableHistoricalSpaceRepresentation: true,
})
defer resetCfg()
tt.setup(t, dbStore, st, vals)
assert.NoError(t, migrateStateValidators(context.Background(), dbStore.db), "migrateArchivedIndex(tx) error")
tt.eval(t, dbStore, st, vals)
})
}
}
func Test_migrateCapellaStateValidators(t *testing.T) {
tests := []struct {
name string
setup func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator)
eval func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator)
}{
{
name: "migrates validators and adds them to new buckets",
setup: func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator) {
// create some new buckets that should be present for this migration
err := dbStore.db.Update(func(tx *bbolt.Tx) error {
_, err := tx.CreateBucketIfNotExists(stateValidatorsBucket)
assert.NoError(t, err)
_, err = tx.CreateBucketIfNotExists(blockRootValidatorHashesBucket)
assert.NoError(t, err)
return nil
})
assert.NoError(t, err)
},
eval: func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator) {
// check whether the new buckets are present
err := dbStore.db.View(func(tx *bbolt.Tx) error {
valBkt := tx.Bucket(stateValidatorsBucket)
assert.NotNil(t, valBkt)
idxBkt := tx.Bucket(blockRootValidatorHashesBucket)
assert.NotNil(t, idxBkt)
return nil
})
assert.NoError(t, err)
// check if the migration worked
blockRoot := [32]byte{'A'}
rcvdState, err := dbStore.State(context.Background(), blockRoot)
assert.NoError(t, err)
require.DeepSSZEqual(t, rcvdState.ToProtoUnsafe(), state.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// find hashes of the validators that are set as part of the state
var hashes []byte
var individualHashes [][]byte
for _, val := range vals {
hash, hashErr := val.HashTreeRoot()
assert.NoError(t, hashErr)
hashes = append(hashes, hash[:]...)
individualHashes = append(individualHashes, hash[:])
}
// check if all the validators that were in the state, are stored properly in the validator bucket
pbState, err := state_native.ProtobufBeaconStateCapella(rcvdState.ToProtoUnsafe())
assert.NoError(t, err)
validatorsFoundCount := 0
for _, val := range pbState.Validators {
hash, hashErr := val.HashTreeRoot()
assert.NoError(t, hashErr)
found := false
for _, h := range individualHashes {
if bytes.Equal(hash[:], h) {
found = true
}
}
require.Equal(t, true, found)
validatorsFoundCount++
}
require.Equal(t, len(vals), validatorsFoundCount)
// check if the state validator indexes are stored properly
err = dbStore.db.View(func(tx *bbolt.Tx) error {
rcvdValhashBytes := tx.Bucket(blockRootValidatorHashesBucket).Get(blockRoot[:])
rcvdValHashes, sErr := snappy.Decode(nil, rcvdValhashBytes)
assert.NoError(t, sErr)
require.DeepEqual(t, hashes, rcvdValHashes)
return nil
})
assert.NoError(t, err)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
dbStore := setupDB(t)
// add a state with the given validators
vals := validators(10)
blockRoot := [32]byte{'A'}
st, _ := util.DeterministicGenesisStateCapella(t, 20)
err := st.SetFork(&v1alpha1.Fork{
PreviousVersion: params.BeaconConfig().BellatrixForkVersion,
CurrentVersion: params.BeaconConfig().CapellaForkVersion,
Epoch: 0,
})
require.NoError(t, err)
assert.NoError(t, st.SetSlot(100))
assert.NoError(t, st.SetValidators(vals))
assert.NoError(t, dbStore.SaveState(context.Background(), st, blockRoot))
// enable historical state representation flag to test this
resetCfg := features.InitWithReset(&features.Flags{
EnableHistoricalSpaceRepresentation: true,
})
defer resetCfg()
tt.setup(t, dbStore, st, vals)
assert.NoError(t, migrateStateValidators(context.Background(), dbStore.db), "migrateArchivedIndex(tx) error")
tt.eval(t, dbStore, st, vals)
})
}
}

View File

@@ -297,9 +297,6 @@ func (s *Service) ExchangeTransitionConfiguration(
}
func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
if !features.Get().EnableOptionalEngineMethods {
return nil, errors.New("optional engine methods not enabled")
}
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExchangeCapabilities")
defer span.End()
@@ -491,9 +488,6 @@ func (s *Service) HeaderByNumber(ctx context.Context, number *big.Int) (*types.H
// GetPayloadBodiesByHash returns the relevant payload bodies for the provided block hash.
func (s *Service) GetPayloadBodiesByHash(ctx context.Context, executionBlockHashes []common.Hash) ([]*pb.ExecutionPayloadBodyV1, error) {
if !features.Get().EnableOptionalEngineMethods {
return nil, errors.New("optional engine methods not enabled")
}
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetPayloadBodiesByHashV1")
defer span.End()
@@ -513,9 +507,6 @@ func (s *Service) GetPayloadBodiesByHash(ctx context.Context, executionBlockHash
// GetPayloadBodiesByRange returns the relevant payload bodies for the provided range.
func (s *Service) GetPayloadBodiesByRange(ctx context.Context, start, count uint64) ([]*pb.ExecutionPayloadBodyV1, error) {
if !features.Get().EnableOptionalEngineMethods {
return nil, errors.New("optional engine methods not enabled")
}
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetPayloadBodiesByRangeV1")
defer span.End()
@@ -560,19 +551,7 @@ func (s *Service) ReconstructFullBlock(
}
executionBlockHash := common.BytesToHash(header.BlockHash())
executionBlock, err := s.ExecutionBlockByHash(ctx, executionBlockHash, true /* with txs */)
if err != nil {
return nil, fmt.Errorf("could not fetch execution block with txs by hash %#x: %v", executionBlockHash, err)
}
if executionBlock == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionBlockHash)
}
if bytes.Equal(executionBlock.Hash.Bytes(), []byte{}) {
return nil, EmptyBlockHash
}
executionBlock.Version = blindedBlock.Version()
payload, err := fullPayloadFromExecutionBlock(header, executionBlock)
payload, err := s.retrievePayloadFromExecutionHash(ctx, executionBlockHash, header, blindedBlock.Version())
if err != nil {
return nil, err
}
@@ -619,32 +598,9 @@ func (s *Service) ReconstructFullBellatrixBlockBatch(
executionHashes = append(executionHashes, executionBlockHash)
}
}
execBlocks, err := s.ExecutionBlocksByHashes(ctx, executionHashes, true /* with txs*/)
fullBlocks, err := s.retrievePayloadsFromExecutionHashes(ctx, executionHashes, validExecPayloads, blindedBlocks)
if err != nil {
return nil, fmt.Errorf("could not fetch execution blocks with txs by hash %#x: %v", executionHashes, err)
}
// For each valid payload, we reconstruct the full block from it with the
// blinded block.
fullBlocks := make([]interfaces.SignedBeaconBlock, len(blindedBlocks))
for sliceIdx, realIdx := range validExecPayloads {
b := execBlocks[sliceIdx]
if b == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionHashes[sliceIdx])
}
header, err := blindedBlocks[realIdx].Block().Body().Execution()
if err != nil {
return nil, err
}
payload, err := fullPayloadFromExecutionBlock(header, b)
if err != nil {
return nil, err
}
fullBlock, err := blocks.BuildSignedBeaconBlockFromExecutionPayload(blindedBlocks[realIdx], payload.Proto())
if err != nil {
return nil, err
}
fullBlocks[realIdx] = fullBlock
return nil, err
}
// For blocks that are pre-merge we simply reconstruct them via an empty
// execution payload.
@@ -660,6 +616,95 @@ func (s *Service) ReconstructFullBellatrixBlockBatch(
return fullBlocks, nil
}
func (s *Service) retrievePayloadFromExecutionHash(ctx context.Context, executionBlockHash common.Hash, header interfaces.ExecutionData, version int) (interfaces.ExecutionData, error) {
if features.Get().EnableOptionalEngineMethods {
pBodies, err := s.GetPayloadBodiesByHash(ctx, []common.Hash{executionBlockHash})
if err != nil {
return nil, fmt.Errorf("could not get payload body by hash %#x: %v", executionBlockHash, err)
}
if len(pBodies) != 1 {
return nil, errors.Errorf("could not retrieve the correct number of payload bodies: wanted 1 but got %d", len(pBodies))
}
bdy := pBodies[0]
return fullPayloadFromPayloadBody(header, bdy, version)
}
executionBlock, err := s.ExecutionBlockByHash(ctx, executionBlockHash, true /* with txs */)
if err != nil {
return nil, fmt.Errorf("could not fetch execution block with txs by hash %#x: %v", executionBlockHash, err)
}
if executionBlock == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionBlockHash)
}
if bytes.Equal(executionBlock.Hash.Bytes(), []byte{}) {
return nil, EmptyBlockHash
}
executionBlock.Version = version
return fullPayloadFromExecutionBlock(header, executionBlock)
}
func (s *Service) retrievePayloadsFromExecutionHashes(
ctx context.Context,
executionHashes []common.Hash,
validExecPayloads []int,
blindedBlocks []interfaces.ReadOnlySignedBeaconBlock) ([]interfaces.SignedBeaconBlock, error) {
fullBlocks := make([]interfaces.SignedBeaconBlock, len(blindedBlocks))
var execBlocks []*pb.ExecutionBlock
var payloadBodies []*pb.ExecutionPayloadBodyV1
var err error
if features.Get().EnableOptionalEngineMethods {
payloadBodies, err = s.GetPayloadBodiesByHash(ctx, executionHashes)
if err != nil {
return nil, fmt.Errorf("could not fetch payload bodies by hash %#x: %v", executionHashes, err)
}
} else {
execBlocks, err = s.ExecutionBlocksByHashes(ctx, executionHashes, true /* with txs*/)
if err != nil {
return nil, fmt.Errorf("could not fetch execution blocks with txs by hash %#x: %v", executionHashes, err)
}
}
// For each valid payload, we reconstruct the full block from it with the
// blinded block.
for sliceIdx, realIdx := range validExecPayloads {
var payload interfaces.ExecutionData
if features.Get().EnableOptionalEngineMethods {
b := payloadBodies[sliceIdx]
if b == nil {
return nil, fmt.Errorf("received nil payload body for request by hash %#x", executionHashes[sliceIdx])
}
header, err := blindedBlocks[realIdx].Block().Body().Execution()
if err != nil {
return nil, err
}
payload, err = fullPayloadFromPayloadBody(header, b, blindedBlocks[realIdx].Version())
if err != nil {
return nil, err
}
} else {
b := execBlocks[sliceIdx]
if b == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionHashes[sliceIdx])
}
header, err := blindedBlocks[realIdx].Block().Body().Execution()
if err != nil {
return nil, err
}
payload, err = fullPayloadFromExecutionBlock(header, b)
if err != nil {
return nil, err
}
}
fullBlock, err := blocks.BuildSignedBeaconBlockFromExecutionPayload(blindedBlocks[realIdx], payload.Proto())
if err != nil {
return nil, err
}
fullBlocks[realIdx] = fullBlock
}
return fullBlocks, nil
}
func fullPayloadFromExecutionBlock(
header interfaces.ExecutionData, block *pb.ExecutionBlock,
) (interfaces.ExecutionData, error) {
@@ -721,6 +766,50 @@ func fullPayloadFromExecutionBlock(
}, 0) // We can't get the block value and don't care about the block value for this instance
}
func fullPayloadFromPayloadBody(
header interfaces.ExecutionData, body *pb.ExecutionPayloadBodyV1, bVersion int,
) (interfaces.ExecutionData, error) {
if header.IsNil() || body == nil {
return nil, errors.New("execution block and header cannot be nil")
}
if bVersion == version.Bellatrix {
return blocks.WrappedExecutionPayload(&pb.ExecutionPayload{
ParentHash: header.ParentHash(),
FeeRecipient: header.FeeRecipient(),
StateRoot: header.StateRoot(),
ReceiptsRoot: header.ReceiptsRoot(),
LogsBloom: header.LogsBloom(),
PrevRandao: header.PrevRandao(),
BlockNumber: header.BlockNumber(),
GasLimit: header.GasLimit(),
GasUsed: header.GasUsed(),
Timestamp: header.Timestamp(),
ExtraData: header.ExtraData(),
BaseFeePerGas: header.BaseFeePerGas(),
BlockHash: header.BlockHash(),
Transactions: body.Transactions,
})
}
return blocks.WrappedExecutionPayloadCapella(&pb.ExecutionPayloadCapella{
ParentHash: header.ParentHash(),
FeeRecipient: header.FeeRecipient(),
StateRoot: header.StateRoot(),
ReceiptsRoot: header.ReceiptsRoot(),
LogsBloom: header.LogsBloom(),
PrevRandao: header.PrevRandao(),
BlockNumber: header.BlockNumber(),
GasLimit: header.GasLimit(),
GasUsed: header.GasUsed(),
Timestamp: header.Timestamp(),
ExtraData: header.ExtraData(),
BaseFeePerGas: header.BaseFeePerGas(),
BlockHash: header.BlockHash(),
Transactions: body.Transactions,
Withdrawals: body.Withdrawals,
}, 0) // We can't get the block value and don't care about the block value for this instance
}
// Handles errors received from the RPC server according to the specification.
func handleRPCError(err error) error {
if err == nil {

View File

@@ -115,28 +115,32 @@ func FuzzExchangeTransitionConfiguration(f *testing.F) {
func FuzzExecutionPayload(f *testing.F) {
logsBloom := [256]byte{'j', 'u', 'n', 'k'}
execData := &engine.ExecutableData{
ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
ReceiptsRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
LogsBloom: logsBloom[:],
Random: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Number: math.MaxUint64,
GasLimit: math.MaxUint64,
GasUsed: math.MaxUint64,
Timestamp: 100,
ExtraData: nil,
BaseFeePerGas: big.NewInt(math.MaxInt),
BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},
execData := &engine.ExecutionPayloadEnvelope{
ExecutionPayload: &engine.ExecutableData{
ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
ReceiptsRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
LogsBloom: logsBloom[:],
Random: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Number: math.MaxUint64,
GasLimit: math.MaxUint64,
GasUsed: math.MaxUint64,
Timestamp: 100,
ExtraData: nil,
BaseFeePerGas: big.NewInt(math.MaxInt),
BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},
Withdrawals: []*types.Withdrawal{},
},
BlockValue: nil,
}
output, err := json.Marshal(execData)
assert.NoError(f, err)
f.Add(output)
f.Fuzz(func(t *testing.T, jsonBlob []byte) {
gethResp := &engine.ExecutableData{}
prysmResp := &pb.ExecutionPayload{}
gethResp := &engine.ExecutionPayloadEnvelope{}
prysmResp := &pb.ExecutionPayloadCapellaWithValue{}
gethErr := json.Unmarshal(jsonBlob, gethResp)
prysmErr := json.Unmarshal(jsonBlob, prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, fmt.Sprintf("geth and prysm unmarshaller return inconsistent errors. %v and %v", gethErr, prysmErr))
@@ -147,10 +151,10 @@ func FuzzExecutionPayload(f *testing.F) {
gethBlob, gethErr := json.Marshal(gethResp)
prysmBlob, prysmErr := json.Marshal(prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, "geth and prysm unmarshaller return inconsistent errors")
newGethResp := &engine.ExecutableData{}
newGethResp := &engine.ExecutionPayloadEnvelope{}
newGethErr := json.Unmarshal(prysmBlob, newGethResp)
assert.NoError(t, newGethErr)
newGethResp2 := &engine.ExecutableData{}
newGethResp2 := &engine.ExecutionPayloadEnvelope{}
newGethErr = json.Unmarshal(gethBlob, newGethResp2)
assert.NoError(t, newGethErr)

View File

@@ -69,12 +69,12 @@ func (s *Service) ProcessETH1Block(ctx context.Context, blkNum *big.Int) error {
if err != nil {
return err
}
for _, filterLog := range logs {
for i, filterLog := range logs {
// ignore logs that are not of the required block number
if filterLog.BlockNumber != blkNum.Uint64() {
continue
}
if err := s.ProcessLog(ctx, filterLog); err != nil {
if err := s.ProcessLog(ctx, &logs[i]); err != nil {
return errors.Wrap(err, "could not process log")
}
}
@@ -88,7 +88,7 @@ func (s *Service) ProcessETH1Block(ctx context.Context, blkNum *big.Int) error {
// ProcessLog is the main method which handles the processing of all
// logs from the deposit contract on the eth1 chain.
func (s *Service) ProcessLog(ctx context.Context, depositLog gethtypes.Log) error {
func (s *Service) ProcessLog(ctx context.Context, depositLog *gethtypes.Log) error {
s.processingLock.RLock()
defer s.processingLock.RUnlock()
// Process logs according to their event signature.
@@ -108,7 +108,7 @@ func (s *Service) ProcessLog(ctx context.Context, depositLog gethtypes.Log) erro
// ProcessDepositLog processes the log which had been received from
// the eth1 chain by trying to ascertain which participant deposited
// in the contract.
func (s *Service) ProcessDepositLog(ctx context.Context, depositLog gethtypes.Log) error {
func (s *Service) ProcessDepositLog(ctx context.Context, depositLog *gethtypes.Log) error {
pubkey, withdrawalCredentials, amount, signature, merkleTreeIndex, err := contracts.UnpackDepositLogData(depositLog.Data)
if err != nil {
return errors.Wrap(err, "Could not unpack log")
@@ -406,7 +406,7 @@ func (s *Service) processBlockInBatch(ctx context.Context, currentBlockNum uint6
lastReqBlock := s.latestEth1Data.LastRequestedBlock
s.latestEth1DataLock.RUnlock()
for _, filterLog := range logs {
for i, filterLog := range logs {
if filterLog.BlockNumber > currentBlockNum {
if err := s.checkHeaderRange(ctx, currentBlockNum, filterLog.BlockNumber-1, headersMap, requestHeaders); err != nil {
return 0, 0, err
@@ -417,7 +417,7 @@ func (s *Service) processBlockInBatch(ctx context.Context, currentBlockNum uint6
s.latestEth1DataLock.Unlock()
currentBlockNum = filterLog.BlockNumber
}
if err := s.ProcessLog(ctx, filterLog); err != nil {
if err := s.ProcessLog(ctx, &logs[i]); err != nil {
// In the event the execution client gives us a garbled/bad log
// we reset the last requested block to the previous valid block range. This
// prevents the beacon from advancing processing of logs to another range

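The &logs[i] shape in the hunks above is deliberate: before Go 1.22, the range variable is a single reused copy, so taking its address would alias one value across iterations. A minimal sketch of the distinction, with a toy stand-in for s.ProcessLog:

package main

import gethtypes "github.com/ethereum/go-ethereum/core/types"

// process stands in for s.ProcessLog; the pointer avoids copying the Log.
func process(l *gethtypes.Log) { _ = l }

func main() {
	logs := make([]gethtypes.Log, 3)
	for i := range logs {
		process(&logs[i]) // stable pointer into the slice backing array, no copy
	}
	// By contrast, `for _, l := range logs { process(&l) }` copies each element
	// into l and (before Go 1.22) passes the address of that one reused copy.
}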
View File

@@ -79,7 +79,7 @@ func TestProcessDepositLog_OK(t *testing.T) {
t.Fatal("no logs")
}
err = web3Service.ProcessLog(context.Background(), logs[0])
err = web3Service.ProcessLog(context.Background(), &logs[0])
require.NoError(t, err)
require.LogsDoNotContain(t, hook, "Could not unpack log")
@@ -149,9 +149,9 @@ func TestProcessDepositLog_InsertsPendingDeposit(t *testing.T) {
web3Service.chainStartData.Chainstarted = true
err = web3Service.ProcessDepositLog(context.Background(), logs[0])
err = web3Service.ProcessDepositLog(context.Background(), &logs[0])
require.NoError(t, err)
err = web3Service.ProcessDepositLog(context.Background(), logs[1])
err = web3Service.ProcessDepositLog(context.Background(), &logs[1])
require.NoError(t, err)
pendingDeposits := web3Service.cfg.depositCache.PendingDeposits(context.Background(), nil /*blockNum*/)
@@ -272,8 +272,8 @@ func TestProcessETH2GenesisLog_8DuplicatePubkeys(t *testing.T) {
logs, err := testAcc.Backend.FilterLogs(web3Service.ctx, query)
require.NoError(t, err, "Unable to retrieve logs")
for _, log := range logs {
err = web3Service.ProcessLog(context.Background(), log)
for i := range logs {
err = web3Service.ProcessLog(context.Background(), &logs[i])
require.NoError(t, err)
}
assert.Equal(t, false, web3Service.chainStartData.Chainstarted, "Genesis has been triggered despite being 8 duplicate keys")
@@ -351,8 +351,8 @@ func TestProcessETH2GenesisLog(t *testing.T) {
stateSub := web3Service.cfg.stateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
for _, log := range logs {
err = web3Service.ProcessLog(context.Background(), log)
for i := range logs {
err = web3Service.ProcessLog(context.Background(), &logs[i])
require.NoError(t, err)
}

View File

@@ -265,6 +265,7 @@ func (f *ForkChoice) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool
// validators' latest votes.
func (f *ForkChoice) updateBalances() error {
newBalances := f.justifiedBalances
zHash := params.BeaconConfig().ZeroHash
for index, vote := range f.votes {
// Skip if validator has been slashed
@@ -273,7 +274,7 @@ func (f *ForkChoice) updateBalances() error {
}
// Skip if validator has never voted for current root and next root (i.e. if the
// votes are zero hash aka genesis block), there's nothing to compute.
if vote.currentRoot == params.BeaconConfig().ZeroHash && vote.nextRoot == params.BeaconConfig().ZeroHash {
if vote.currentRoot == zHash && vote.nextRoot == zHash {
continue
}
@@ -293,7 +294,7 @@ func (f *ForkChoice) updateBalances() error {
// Ignore the vote if the root is not in fork choice
// store, that means we have not seen the block before.
nextNode, ok := f.store.nodeByRoot[vote.nextRoot]
if ok && vote.nextRoot != params.BeaconConfig().ZeroHash {
if ok && vote.nextRoot != zHash {
// Protection against nil node
if nextNode == nil {
return errors.Wrap(ErrNilNode, "could not update balances")
@@ -302,7 +303,7 @@ func (f *ForkChoice) updateBalances() error {
}
currentNode, ok := f.store.nodeByRoot[vote.currentRoot]
if ok && vote.currentRoot != params.BeaconConfig().ZeroHash {
if ok && vote.currentRoot != zHash {
// Protection against nil node
if currentNode == nil {
return errors.Wrap(ErrNilNode, "could not update balances")

View File

@@ -31,9 +31,7 @@ import (
// store.justified_checkpoint = store.best_justified_checkpoint
func (f *ForkChoice) NewSlot(ctx context.Context, slot primitives.Slot) error {
// Reset proposer boost root
if err := f.resetBoostedProposerRoot(ctx); err != nil {
return errors.Wrap(err, "could not reset boosted proposer root in fork choice")
}
f.store.proposerBoostRoot = [32]byte{}
// Return if it's not a new epoch.
if !slots.IsEpochStart(slot) {

View File

@@ -7,7 +7,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
)
func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, payloadHash [32]byte) ([][32]byte, error) {
func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, lastValidHash [32]byte) ([][32]byte, error) {
invalidRoots := make([][32]byte, 0)
node, ok := s.nodeByRoot[root]
if !ok {
@@ -16,7 +16,7 @@ func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, pa
return invalidRoots, errors.Wrap(ErrNilNode, "could not set node to invalid")
}
// return early if the parent is LVH
if node.payloadHash == payloadHash {
if node.payloadHash == lastValidHash {
return invalidRoots, nil
}
} else {
@@ -28,7 +28,7 @@ func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, pa
}
}
firstInvalid := node
for ; firstInvalid.parent != nil && firstInvalid.parent.payloadHash != payloadHash; firstInvalid = firstInvalid.parent {
for ; firstInvalid.parent != nil && firstInvalid.parent.payloadHash != lastValidHash; firstInvalid = firstInvalid.parent {
if ctx.Err() != nil {
return invalidRoots, ctx.Err()
}

View File

@@ -1,19 +1,12 @@
package doublylinkedtree
import (
"context"
"fmt"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
)
// resetBoostedProposerRoot sets the value of the proposer boosted root to zeros.
func (f *ForkChoice) resetBoostedProposerRoot(_ context.Context) error {
f.store.proposerBoostRoot = [32]byte{}
return nil
}
// applyProposerBoostScore applies the current proposer boost scores to the
// relevant nodes.
func (f *ForkChoice) applyProposerBoostScore() error {

View File

@@ -82,6 +82,11 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
if head.weight*100 > f.store.committeeWeight*params.BeaconConfig().ReorgWeightThreshold {
return
}
// Only orphan a block if the parent LMD vote is strong
if parent.weight*100 < f.store.committeeWeight*params.BeaconConfig().ReorgParentWeightThreshold {
return
}
return true
}
@@ -137,6 +142,11 @@ func (f *ForkChoice) GetProposerHead() [32]byte {
return head.root
}
// Only orphan a block if the parent LMD vote is strong
if parent.weight*100 < f.store.committeeWeight*params.BeaconConfig().ReorgParentWeightThreshold {
return head.root
}
// Only reorg if we are proposing early
secs, err := slots.SecondsSinceSlotStart(head.slot+1, f.store.genesisTime, uint64(time.Now().Unix()))
if err != nil {

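For intuition on the new guard, here is a worked check under assumed numbers. The 160 mirrors the consensus-spec REORG_PARENT_WEIGHT_THRESHOLD and is not read from this diff:

package main

import "fmt"

// parentIsStrong is a toy restatement of the comparison above: the parent
// must hold at least reorgParentWeightThreshold percent of one committee's
// weight before the proposer may orphan the head.
func parentIsStrong(parentWeight, committeeWeight uint64) bool {
	const reorgParentWeightThreshold = 160 // percent, assumed spec value
	return parentWeight*100 >= committeeWeight*reorgParentWeightThreshold
}

func main() {
	// 1500*100 = 150000 < 1000*160 = 160000: the parent vote is weak,
	// so ShouldOverrideFCU and GetProposerHead give up on the reorg.
	fmt.Println(parentIsStrong(1500, 1000)) // false
}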
View File

@@ -22,7 +22,11 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
st, root, err := prepareForkchoiceState(ctx, 1, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
f.ProcessAttestation(ctx, []uint64{0, 1, 2}, root, 0)
attesters := make([]uint64, f.numActiveValidators-64)
for i := range attesters {
attesters[i] = uint64(i + 64)
}
f.ProcessAttestation(ctx, attesters, root, 0)
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+1)
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 0, 0)
@@ -80,6 +84,12 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
require.Equal(t, false, f.ShouldOverrideFCU())
f.store.headNode.parent = saved
})
t.Run("parent is weak", func(t *testing.T) {
saved := f.store.headNode.parent.weight
f.store.headNode.parent.weight = 0
require.Equal(t, false, f.ShouldOverrideFCU())
f.store.headNode.parent.weight = saved
})
t.Run("Head is strong", func(t *testing.T) {
f.store.headNode.weight = f.store.committeeWeight
require.Equal(t, false, f.ShouldOverrideFCU())
@@ -101,7 +111,11 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
st, root, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
f.ProcessAttestation(ctx, []uint64{0, 1, 2}, root, 0)
attesters := make([]uint64, f.numActiveValidators-64)
for i := range attesters {
attesters[i] = uint64(i + 64)
}
f.ProcessAttestation(ctx, attesters, root, 0)
driftGenesisTime(f, 3, 1)
childRoot := [32]byte{'b'}
@@ -161,6 +175,12 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
require.Equal(t, childRoot, f.GetProposerHead())
f.store.headNode.parent = saved
})
t.Run("parent is weak", func(t *testing.T) {
saved := f.store.headNode.parent.weight
f.store.headNode.parent.weight = 0
require.Equal(t, false, f.ShouldOverrideFCU())
f.store.headNode.parent.weight = saved
})
t.Run("Head is strong", func(t *testing.T) {
f.store.headNode.weight = f.store.committeeWeight
require.Equal(t, childRoot, f.GetProposerHead())

View File

@@ -6,6 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/sirupsen/logrus"
)
@@ -29,6 +30,9 @@ func (s *Service) processSyncAggregate(state state.BeaconState, blk interfaces.R
if blk == nil || blk.Body() == nil {
return
}
if blk.Version() == version.Phase0 {
return
}
bits, err := blk.Body().SyncAggregate()
if err != nil {
log.WithError(err).Error("Could not get SyncAggregate")

View File

@@ -114,19 +114,11 @@ func (s *Service) Start() {
"ValidatorIndices": tracked,
}).Info("Starting service")
stateChannel := make(chan *feed.Event, 1)
stateSub := s.config.StateNotifier.StateFeed().Subscribe(stateChannel)
go s.run(stateChannel, stateSub)
go s.run()
}
// run waits until the beacon is synced and starts the monitoring system.
func (s *Service) run(stateChannel chan *feed.Event, stateSub event.Subscription) {
if stateChannel == nil {
log.Error("State state is nil")
return
}
func (s *Service) run() {
if err := s.waitForSync(s.config.InitialSyncComplete); err != nil {
log.WithError(err)
return
@@ -154,6 +146,8 @@ func (s *Service) run(stateChannel chan *feed.Event, stateSub event.Subscription
s.isLogging = true
s.Unlock()
stateChannel := make(chan *feed.Event, 1)
stateSub := s.config.StateNotifier.StateFeed().Subscribe(stateChannel)
s.monitorRoutine(stateChannel, stateSub)
}

View File

@@ -271,11 +271,9 @@ func TestWaitForSyncCanceled(t *testing.T) {
func TestRun(t *testing.T) {
hook := logTest.NewGlobal()
s := setupService(t)
stateChannel := make(chan *feed.Event, 1)
stateSub := s.config.StateNotifier.StateFeed().Subscribe(stateChannel)
go func() {
s.run(stateChannel, stateSub)
s.run()
}()
close(s.config.InitialSyncComplete)

View File

@@ -158,8 +158,7 @@ func TestConfigureNetwork_ConfigFile(t *testing.T) {
return cmd.LoadFlagsFromConfig(cliCtx, comFlags)
},
Action: func(cliCtx *cli.Context) error {
//TODO: https://github.com/urfave/cli/issues/1197 right now does not set flag
require.Equal(t, false, cliCtx.IsSet(cmd.BootstrapNode.Name))
require.Equal(t, true, cliCtx.IsSet(cmd.BootstrapNode.Name))
require.Equal(t, strings.Join([]string{"node1", "node2"}, ","),
strings.Join(cliCtx.StringSlice(cmd.BootstrapNode.Name), ","))

View File

@@ -212,6 +212,11 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
log.Debugln("Starting State Gen")
if err := beacon.startStateGen(ctx, bfs, beacon.forkChoicer); err != nil {
if errors.Is(err, stategen.ErrNoGenesisBlock) {
log.Errorf("No genesis block/state is found. Prysm only provides a mainnet genesis "+
"state bundled in the application. You must provide the --%s or --%s flag to load "+
"a genesis block/state for this network.", "genesis-state", "genesis-beacon-api-url")
}
return nil, err
}
@@ -230,13 +235,13 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
return nil, err
}
log.Debugln("Registering Determinstic Genesis Service")
if err := beacon.registerDeterminsticGenesisService(); err != nil {
log.Debugln("Registering Deterministic Genesis Service")
if err := beacon.registerDeterministicGenesisService(); err != nil {
return nil, err
}
log.Debugln("Registering Blockchain Service")
if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer); err != nil {
if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer, beacon.initialSyncComplete); err != nil {
return nil, err
}
@@ -581,7 +586,8 @@ func (b *BeaconNode) fetchBuilderService() *builder.Service {
func (b *BeaconNode) registerAttestationPool() error {
s, err := attestations.NewService(b.ctx, &attestations.Config{
Pool: b.attestationPool,
Pool: b.attestationPool,
InitialSyncComplete: b.initialSyncComplete,
})
if err != nil {
return errors.Wrap(err, "could not register atts pool service")
@@ -589,7 +595,7 @@ func (b *BeaconNode) registerAttestationPool() error {
return b.services.RegisterService(s)
}
func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *startup.ClockSynchronizer) error {
func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *startup.ClockSynchronizer, syncComplete chan struct{}) error {
var web3Service *execution.Service
if err := b.services.FetchService(&web3Service); err != nil {
return err
@@ -620,6 +626,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
blockchain.WithFinalizedStateAtStartUp(b.finalizedStateAtStartUp),
blockchain.WithProposerIdsCache(b.proposerIdsCache),
blockchain.WithClockSynchronizer(gs),
blockchain.WithSyncComplete(syncComplete),
)
blockchainService, err := blockchain.NewService(b.ctx, opts...)
@@ -922,7 +929,7 @@ func (b *BeaconNode) registerGRPCGateway(router *mux.Router) error {
return b.services.RegisterService(g)
}
func (b *BeaconNode) registerDeterminsticGenesisService() error {
func (b *BeaconNode) registerDeterministicGenesisService() error {
genesisTime := b.cliCtx.Uint64(flags.InteropGenesisTimeFlag.Name)
genesisValidators := b.cliCtx.Uint64(flags.InteropNumValidatorsFlag.Name)
@@ -985,7 +992,8 @@ func (b *BeaconNode) registerBuilderService(cliCtx *cli.Context) error {
opts := append(b.serviceFlagOpts.builderOpts,
builder.WithHeadFetcher(chainService),
builder.WithDatabase(b.db))
if cliCtx.Bool(flags.EnableRegistrationCache.Name) {
// make cache the default.
if !cliCtx.Bool(features.DisableRegistrationCache.Name) {
opts = append(opts, builder.WithRegistrationCache())
}
svc, err := builder.NewService(b.ctx, opts...)

View File

@@ -18,6 +18,7 @@ go_library(
deps = [
"//beacon-chain/operations/attestations/kv:go_default_library",
"//cache/lru:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
@@ -46,6 +47,7 @@ go_test(
deps = [
"//async:go_default_library",
"//beacon-chain/operations/attestations/kv:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//crypto/bls:go_default_library",

View File

@@ -14,6 +14,7 @@ go_library(
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
@@ -39,8 +40,8 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",

View File

@@ -2,9 +2,12 @@ package kv
import (
"context"
"runtime"
"sync"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
attaggregation "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation/aggregation/attestations"
@@ -23,21 +26,11 @@ func (c *AttCaches) AggregateUnaggregatedAttestations(ctx context.Context) error
if err != nil {
return err
}
return c.aggregateUnaggregatedAttestations(ctx, unaggregatedAtts)
return c.aggregateUnaggregatedAtts(ctx, unaggregatedAtts)
}
// AggregateUnaggregatedAttestationsBySlotIndex aggregates the unaggregated attestations and saves
// newly aggregated attestations in the pool. Unaggregated attestations are filtered by slot and
// committee index.
func (c *AttCaches) AggregateUnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) error {
ctx, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregateUnaggregatedAttestationsBySlotIndex")
defer span.End()
unaggregatedAtts := c.UnaggregatedAttestationsBySlotIndex(ctx, slot, committeeIndex)
return c.aggregateUnaggregatedAttestations(ctx, unaggregatedAtts)
}
func (c *AttCaches) aggregateUnaggregatedAttestations(ctx context.Context, unaggregatedAtts []*ethpb.Attestation) error {
ctx, span := trace.StartSpan(ctx, "operations.attestations.kv.aggregateUnaggregatedAttestations")
func (c *AttCaches) aggregateUnaggregatedAtts(ctx context.Context, unaggregatedAtts []*ethpb.Attestation) error {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.aggregateUnaggregatedAtts")
defer span.End()
attsByDataRoot := make(map[[32]byte][]*ethpb.Attestation, len(unaggregatedAtts))
@@ -52,26 +45,30 @@ func (c *AttCaches) aggregateUnaggregatedAttestations(ctx context.Context, unagg
// Aggregate unaggregated attestations from the pool and save them in the pool.
// Track the unaggregated attestations that aren't able to aggregate.
leftOverUnaggregatedAtt := make(map[[32]byte]bool)
for _, atts := range attsByDataRoot {
aggregatedAtts := make([]*ethpb.Attestation, 0, len(atts))
processedAtts, err := attaggregation.Aggregate(atts)
if err != nil {
return err
}
for _, att := range processedAtts {
if helpers.IsAggregated(att) {
aggregatedAtts = append(aggregatedAtts, att)
if features.Get().AggregateParallel {
leftOverUnaggregatedAtt = c.aggregateParallel(attsByDataRoot, leftOverUnaggregatedAtt)
} else {
for _, atts := range attsByDataRoot {
aggregated, err := attaggregation.AggregateDisjointOneBitAtts(atts)
if err != nil {
return errors.Wrap(err, "could not aggregate unaggregated attestations")
}
if aggregated == nil {
return errors.New("could not aggregate unaggregated attestations")
}
if helpers.IsAggregated(aggregated) {
if err := c.SaveAggregatedAttestations([]*ethpb.Attestation{aggregated}); err != nil {
return err
}
} else {
h, err := hashFn(att)
h, err := hashFn(aggregated)
if err != nil {
return err
}
leftOverUnaggregatedAtt[h] = true
}
}
if err := c.SaveAggregatedAttestations(aggregatedAtts); err != nil {
return err
}
}
// Remove the unaggregated attestations from the pool that were successfully aggregated.
@@ -87,10 +84,61 @@ func (c *AttCaches) aggregateUnaggregatedAttestations(ctx context.Context, unagg
return err
}
}
return nil
}
// aggregateParallel aggregates attestations in parallel for `atts` and saves them in the pool,
// returns the unaggregated attestations that weren't able to aggregate.
// Given `n` CPU cores, it creates a channel of size `n` and spawns `n` goroutines to aggregate attestations
func (c *AttCaches) aggregateParallel(atts map[[32]byte][]*ethpb.Attestation, leftOver map[[32]byte]bool) map[[32]byte]bool {
var leftoverLock sync.Mutex
wg := sync.WaitGroup{}
n := runtime.GOMAXPROCS(0) // defaults to the value of runtime.NumCPU
ch := make(chan []*ethpb.Attestation, n)
wg.Add(n)
for i := 0; i < n; i++ {
go func() {
defer wg.Done()
for as := range ch {
aggregated, err := attaggregation.AggregateDisjointOneBitAtts(as)
if err != nil {
log.WithError(err).Error("could not aggregate unaggregated attestations")
continue
}
if aggregated == nil {
log.Error("nil aggregated attestation")
continue
}
if helpers.IsAggregated(aggregated) {
if err := c.SaveAggregatedAttestations([]*ethpb.Attestation{aggregated}); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
continue
}
} else {
h, err := hashFn(aggregated)
if err != nil {
log.WithError(err).Error("could not hash attestation")
continue
}
leftoverLock.Lock()
leftOver[h] = true
leftoverLock.Unlock()
}
}
}()
}
for _, as := range atts {
ch <- as
}
close(ch)
wg.Wait()
return leftOver
}
// SaveAggregatedAttestation saves an aggregated attestation in cache.
func (c *AttCaches) SaveAggregatedAttestation(att *ethpb.Attestation) error {
if err := helpers.ValidateNilAttestation(att); err != nil {
@@ -168,7 +216,7 @@ func (c *AttCaches) AggregatedAttestations() []*ethpb.Attestation {
// AggregatedAttestationsBySlotIndex returns the aggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.Attestation {
ctx, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndex")
_, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndex")
defer span.End()
atts := make([]*ethpb.Attestation, 0)

View File

@@ -9,7 +9,7 @@ import (
"github.com/pkg/errors"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
@@ -18,6 +18,11 @@ import (
)
func TestKV_Aggregated_AggregateUnaggregatedAttestations(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
AggregateParallel: true,
})
defer resetFn()
cache := NewAttCaches()
priv, err := bls.RandKey()
require.NoError(t, err)
@@ -39,61 +44,6 @@ func TestKV_Aggregated_AggregateUnaggregatedAttestations(t *testing.T) {
require.Equal(t, 1, len(cache.AggregatedAttestationsBySlotIndex(context.Background(), 2, 0)), "Did not aggregate correctly")
}
func TestKV_Aggregated_AggregateUnaggregatedAttestationsBySlotIndex(t *testing.T) {
cache := NewAttCaches()
genData := func(slot primitives.Slot, committeeIndex primitives.CommitteeIndex) *ethpb.AttestationData {
return util.HydrateAttestationData(&ethpb.AttestationData{
Slot: slot,
CommitteeIndex: committeeIndex,
})
}
genSign := func() []byte {
priv, err := bls.RandKey()
require.NoError(t, err)
return priv.Sign([]byte{'a'}).Marshal()
}
atts := []*ethpb.Attestation{
// The first slot.
{AggregationBits: bitfield.Bitlist{0b1001}, Data: genData(1, 2), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1010}, Data: genData(1, 2), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1100}, Data: genData(1, 2), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1001}, Data: genData(1, 3), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1100}, Data: genData(1, 3), Signature: genSign()},
// The second slot.
{AggregationBits: bitfield.Bitlist{0b1001}, Data: genData(2, 3), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1010}, Data: genData(2, 3), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1100}, Data: genData(2, 4), Signature: genSign()},
}
ctx := context.Background()
// Make sure that no error is produced if aggregation is requested on empty unaggregated list.
require.NoError(t, cache.AggregateUnaggregatedAttestationsBySlotIndex(ctx, 1, 2))
require.NoError(t, cache.AggregateUnaggregatedAttestationsBySlotIndex(ctx, 2, 3))
require.Equal(t, 0, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 2)))
require.Equal(t, 0, len(cache.AggregatedAttestationsBySlotIndex(ctx, 1, 2)), "Did not aggregate correctly")
require.Equal(t, 0, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 3)))
require.Equal(t, 0, len(cache.AggregatedAttestationsBySlotIndex(ctx, 1, 3)), "Did not aggregate correctly")
// Persist unaggregated attestations, and aggregate on per slot/committee index base.
require.NoError(t, cache.SaveUnaggregatedAttestations(atts))
require.NoError(t, cache.AggregateUnaggregatedAttestationsBySlotIndex(ctx, 1, 2))
require.NoError(t, cache.AggregateUnaggregatedAttestationsBySlotIndex(ctx, 2, 3))
// Committee attestations at a slot should be aggregated.
require.Equal(t, 0, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 2)))
require.Equal(t, 1, len(cache.AggregatedAttestationsBySlotIndex(ctx, 1, 2)), "Did not aggregate correctly")
// Committee attestations haven't been aggregated.
require.Equal(t, 2, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 3)))
require.Equal(t, 0, len(cache.AggregatedAttestationsBySlotIndex(ctx, 1, 3)), "Did not aggregate correctly")
// Committee at a second slot is aggregated.
require.Equal(t, 0, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 2, 3)))
require.Equal(t, 1, len(cache.AggregatedAttestationsBySlotIndex(ctx, 2, 3)), "Did not aggregate correctly")
// The second committee at second slot is not aggregated.
require.Equal(t, 1, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 2, 4)))
require.Equal(t, 0, len(cache.AggregatedAttestationsBySlotIndex(ctx, 2, 4)), "Did not aggregate correctly")
}
func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
tests := []struct {
name string

View File

@@ -30,6 +30,20 @@ var (
Name: "expired_block_atts_total",
Help: "The number of expired and deleted block attestations in the pool.",
})
batchForkChoiceAttsT1 = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "aggregate_attestations_t1",
Help: "Captures times of attestation aggregation in milliseconds during the first interval per slot",
Buckets: []float64{100, 200, 500, 1000, 1500, 2000, 2500, 3500},
},
)
batchForkChoiceAttsT2 = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "aggregate_attestations_t2",
Help: "Captures times of attestation aggregation in milliseconds during the second interval per slot",
Buckets: []float64{10, 40, 100, 200, 600},
},
)
)
func (s *Service) updateMetrics() {

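As a quick orientation on the histogram declarations above, a self-contained sketch of how such a metric is registered and fed; the name is suffixed _example to stress that this is illustrative, not the real wiring:

package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Mirrors aggregate_attestations_t1: an observation of 750ms lands in the
// le=1000 bucket.
var aggT1Example = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "aggregate_attestations_t1_example",
	Help:    "Example histogram with the same millisecond buckets.",
	Buckets: []float64{100, 200, 500, 1000, 1500, 2000, 2500, 3500},
})

func main() {
	start := time.Now().Add(-750 * time.Millisecond)
	aggT1Example.Observe(float64(time.Since(start).Milliseconds()))
}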
View File

@@ -15,7 +15,6 @@ import (
type Pool interface {
// For Aggregated attestations
AggregateUnaggregatedAttestations(ctx context.Context) error
AggregateUnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) error
SaveAggregatedAttestation(att *ethpb.Attestation) error
SaveAggregatedAttestations(atts []*ethpb.Attestation) error
AggregatedAttestations() []*ethpb.Attestation

View File

@@ -7,6 +7,8 @@ import (
"time"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
attaggregation "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation/aggregation/attestations"
@@ -14,20 +16,37 @@ import (
"go.opencensus.io/trace"
)
// Prepare attestations for fork choice three times per slot.
var prepareForkChoiceAttsPeriod = slots.DivideSlotBy(3 /* times-per-slot */)
// This prepares fork choice attestations by running batchForkChoiceAtts
// every prepareForkChoiceAttsPeriod.
func (s *Service) prepareForkChoiceAtts() {
ticker := time.NewTicker(prepareForkChoiceAttsPeriod)
defer ticker.Stop()
intervals := features.Get().AggregateIntervals
slotDuration := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
// Adjust intervals for networks with a lower slot duration (Hive, e2e, etc)
for {
if intervals[len(intervals)-1] >= slotDuration {
for i, offset := range intervals {
intervals[i] = offset / 2
}
} else {
break
}
}
ticker := slots.NewSlotTickerWithIntervals(time.Unix(int64(s.genesisTime), 0), intervals[:])
for {
select {
case <-ticker.C:
case slotInterval := <-ticker.C():
t := time.Now()
if err := s.batchForkChoiceAtts(s.ctx); err != nil {
log.WithError(err).Error("Could not prepare attestations for fork choice")
}
switch slotInterval.Interval {
case 0:
duration := time.Since(t)
log.WithField("Duration", duration).Debug("aggregated unaggregated attestations")
batchForkChoiceAttsT1.Observe(float64(duration.Milliseconds()))
case 1:
batchForkChoiceAttsT2.Observe(float64(time.Since(t).Milliseconds()))
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
return

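The halving loop above keeps the tick offsets inside one slot on short-slot networks. A runnable illustration with assumed values (a 6s slot and {2s, 4s, 8s} offsets; the real values come from features.Get().AggregateIntervals):

package main

import (
	"fmt"
	"time"
)

func main() {
	slotDuration := 6 * time.Second // assumed short-slot network (e2e, Hive)
	intervals := []time.Duration{2 * time.Second, 4 * time.Second, 8 * time.Second}
	for intervals[len(intervals)-1] >= slotDuration {
		for i, offset := range intervals {
			intervals[i] = offset / 2
		}
	}
	fmt.Println(intervals) // [1s 2s 4s]: every offset now lands inside the slot
}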
Some files were not shown because too many files have changed in this diff.