Compare commits

...

84 Commits

Author SHA1 Message Date
Kasey Kirkham
41b7331ffc add protos, not consensus types or gw code 2023-08-21 21:02:57 -05:00
james-prysm
b0fc368185 local-block-value-boost flag from float64 to uint64 [BREAKING CHANGE] (#12765) 2023-08-21 22:34:34 +00:00
Raul Jordan
777ca06b78 Add Github Go Fuzzing Workflow With Cron Schedule (#12756)
* add in github workflow for fuzzing that runs with cron

* every day

* go version

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-21 16:09:51 +00:00
james-prysm
4d520460e0 REST implementation of /eth/v1/validator/register_validator (#12758)
* migrating v2 to http native url

* removing old code

* gaz

* updating beacon api validator client code

* fixing tests
2023-08-21 15:42:43 +00:00
Potuz
27d8b1c358 add more descriptive log on FFG-LMD consistency (#12763)
* add more descriptive log on FFG-LMD consistency

* Update beacon-chain/blockchain/receive_attestation.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* Update beacon-chain/blockchain/receive_attestation.go

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-08-21 14:06:59 +00:00
terencechain
17500dcfca Fix: download genesis log (#12759) 2023-08-20 14:46:40 -07:00
Radosław Kapka
0452fd02e8 HTTP Beacon API: /pool/attestations (#12735)
* attestations

* post

* tests

* Revert "Auxiliary commit to revert individual files from afede4d949a7519902be2f1e0c485306c4ccdea7"

This reverts commit 9de74879e0c41e43183da2fa7e63094cac030abe.

* remove test

* remove redundant return

* Update beacon-chain/rpc/eth/beacon/handlers_pool.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* review

* include index in broadcast log

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-18 14:29:40 +00:00
terencechain
09d761e1ab fix(proposer): verify attestations without mutating state (#12704)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-17 16:37:19 +00:00
Potuz
9a741c52d1 Remove quadratic loops when exiting (#12737)
* Remove quadratic loops when exiting

* unit test

* fix tests

* lint

* duplicated imports

* handle new error

* revert spurious unit test

* radek's review

* fix name

* use errors.Is

* fix error check

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-08-17 15:38:46 +00:00
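The "Remove quadratic loops when exiting" change above (#12737) boils down to computing the registry's highest exit epoch and its churn once, then updating those counters incrementally as each exit is processed, instead of rescanning every validator per exit. A minimal sketch of that pattern, using simplified stand-in types rather than Prysm's state and validator interfaces:

package main

import "fmt"

const farFutureEpoch = ^uint64(0)

type validator struct {
	exitEpoch uint64
}

// maxExitEpochAndChurn makes a single pass over the registry and returns the
// highest scheduled exit epoch plus how many validators exit at that epoch.
func maxExitEpochAndChurn(vals []validator) (maxEpoch, churn uint64) {
	for _, v := range vals {
		if v.exitEpoch == farFutureEpoch {
			continue
		}
		if v.exitEpoch > maxEpoch {
			maxEpoch, churn = v.exitEpoch, 1
		} else if v.exitEpoch == maxEpoch {
			churn++
		}
	}
	return maxEpoch, churn
}

// nextExitEpoch assigns an exit epoch for one more exiting validator from the
// running counters, so processing N exits no longer rescans the registry N
// times. churnLimit stands in for the spec's per-epoch churn limit.
func nextExitEpoch(maxEpoch, churn, churnLimit uint64) (exitEpoch, newMax, newChurn uint64) {
	if churn >= churnLimit {
		return maxEpoch + 1, maxEpoch + 1, 1
	}
	return maxEpoch, maxEpoch, churn + 1
}

func main() {
	vals := []validator{{farFutureEpoch}, {10}, {10}, {farFutureEpoch}}
	maxEpoch, churn := maxExitEpochAndChurn(vals) // one O(n) scan up front
	for i := range vals {
		if vals[i].exitEpoch != farFutureEpoch {
			continue // already exiting
		}
		var e uint64
		e, maxEpoch, churn = nextExitEpoch(maxEpoch, churn, 2)
		vals[i].exitEpoch = e
	}
	fmt.Println(maxEpoch, churn)
}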
Sammy Rosso
baed8da9c0 HTTP Beacon API: /eth/v1/validator/sync_committee_contribution (#12698)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-08-17 12:01:24 +02:00
Matthew Obwaya
33410a0ec1 Remove uses of deprecated go_embed rules_go (#12719)
* Remove prysm_web_ui http_archive from WORKSPACE

* Remove go_embed_data_dependencies() from WORKSPACE

* Remove go_embed_data target from validator/web/BUILD.bazel

* Remove # gazelle:ignore site_data.go annotation from validator/web/BUILD.bazel

* Run bazel run //:gazelle

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-08-16 15:47:09 +00:00
Nishant Das
cda1797d4e Remove Alpine Images From Prysm (#12749) 2023-08-16 14:01:09 +00:00
Potuz
e7f6048b8c Use last optimistic status on batches (#12741)
* Use last optimistic status on batches

* more descriptive errors

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-16 01:40:36 +00:00
Potuz
418959565f set optimistic status in head at init sync (#12748) 2023-08-16 00:47:56 +00:00
Raul Jordan
8229f3eb84 Prysm Web UI Release v2.0.4 (#12746)
Co-authored-by: james-prysm <james-prysm@users.noreply.github.com>
2023-08-15 21:11:44 +00:00
Nishant Das
4098d098aa Shift Error Logs To Debug (#12739) 2023-08-15 14:44:54 +00:00
anukul
46c72798c7 HTTP Beacon API: /eth/v1/validator/attestation_data (#12634)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-08-14 17:56:36 +03:00
Preston Van Loon
a5474200de Update geth to 1.12.2 (#12731) 2023-08-14 17:48:11 +08:00
Preston Van Loon
a85b4445fc Update bazel to 6.3.2 (#12725)
* Update bazel version to v6.3.2

* Disable legacy external runfile generation for CI

* Revert "Disable legacy external runfile generation for CI"

This reverts commit 1911f293d6.

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-11 19:08:33 +00:00
Preston Van Loon
751dd847b8 Update rules go & gazelle (#12721)
* Update rules_go to v0.40.0

* Update gazelle to v0.26.0

* Update gazelle to v0.27.0

* Update gazelle to v0.28.0

* Update gazelle to v0.29.0

* Update gazelle to v0.30.0

* Update gazelle to v0.31.0
2023-08-11 16:39:35 +00:00
Preston Van Loon
aeb7a45864 Update geth to 1.12.1 (#12718)
* Update geth to 1.12.1

* Fix //cmd/validator/flags:go_default_test

* fix //beacon-chain/node:go_default_test

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-08-11 10:45:42 +00:00
Radosław Kapka
e952fd802b HTTP Beacon API: /eth/v1/validator/beacon_committee_subscriptions (#12700)
* HTTP Beacon API: `/eth/v1/validator/contribution_and_proofs`

* add comment to invalid test case

* fix validation and test

* review

* in progress

* implementation

* remove test file

* remove duplicate

* tests

* HTTP Beacon API: `/eth/v1/validator/sync_committee_subscriptions`

* pointers, pointers everywhere

* fix

* fix after merge

* implementation

* tests

* test fixes

* review

* clear cache in validator tests

* add missing stuff

* compilation fix

* remove `required` from bool

* remove time fetcher from tests

* no need to convert

* add conversion

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-10 18:00:19 +00:00
Nishant Das
b511eef848 Update BLST to v0.3.11 (#12717)
* add changes

* Remove server.c exclusion in headers

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-08-10 22:51:19 +08:00
dependabot[bot]
7aa043892b Bump github.com/libp2p/go-libp2p from 0.27.5 to 0.27.8 (#12709)
* Bump github.com/libp2p/go-libp2p from 0.27.5 to 0.27.8

Bumps [github.com/libp2p/go-libp2p](https://github.com/libp2p/go-libp2p) from 0.27.5 to 0.27.8.
- [Release notes](https://github.com/libp2p/go-libp2p/releases)
- [Changelog](https://github.com/libp2p/go-libp2p/blob/master/CHANGELOG.md)
- [Commits](https://github.com/libp2p/go-libp2p/compare/v0.27.5...v0.27.8)

---
updated-dependencies:
- dependency-name: github.com/libp2p/go-libp2p
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* gazelle and remove libp2p patch. The patch is merged upstream as of this version

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-08-09 17:24:07 +00:00
Radosław Kapka
36be057a11 HTTP Beacon API: /eth/v1/node/syncing (#12706)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-09 17:23:59 +02:00
Radosław Kapka
049e608c75 HTTP Beacon API: /eth/v1/validator/sync_committee_subscriptions (#12689)
* HTTP Beacon API: `/eth/v1/validator/contribution_and_proofs`

* add comment to invalid test case

* fix validation and test

* review

* in progress

* implementation

* remove test file

* remove duplicate

* tests

* HTTP Beacon API: `/eth/v1/validator/sync_committee_subscriptions`

* pointers, pointers everywhere

* fix

* fix after merge

* review

* James' review

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-08 22:54:04 +00:00
Preston Van Loon
4541598850 Update go to 1.20.7 (#12707) 2023-08-08 21:39:22 +00:00
Sammy Rosso
8c08854dd0 Fix prysmctl writing empty JSON/YAML files (#12599)
* Setup

* add in marshalable

* Cleanup

* Raul's review

* Remove yaml Marshaler

* minimal fixes

* same imports

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-08-08 15:20:41 +00:00
m3diumrare
d2ff995eb2 Fix reported effective balance for unknown/pending validators (#12693)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-08-07 09:19:02 -05:00
anukul
56a0315dde fix: set CoreService in beaconv1alpha1.Server (#12702) 2023-08-06 18:16:54 -07:00
anukul
634133fedc use struct in beacon-chain/rpc/core to store dependencies (#12701) 2023-08-05 22:54:12 +02:00
terencechain
c1c1b7ecfa Feat: aggregate parallel default (#12699) 2023-08-04 16:03:10 +00:00
Nishant Das
9a4670ec64 add changes (#12697)
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-08-04 22:05:47 +08:00
Radosław Kapka
a664a07303 Multi Value Slice (#12616)
* multi value slice

* extract helper function

* comments

* setup godoc fix

* value benchmarks

* use guid

* fix bug when deleting items

* remove callback and rename MultiValue

* godoc

* tiny change

* Nishant's review

* typos

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-08-04 12:42:54 +00:00
Potuz
dd14d5cef0 refactor slot tickers with intervals (#12440)
* refactor slot tickers with intervals

* GoDoc

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-08-03 20:55:16 -03:00
Radosław Kapka
3a09405bb7 HTTP Beacon API: /eth/v1/validator/aggregate_and_proofs (#12686)
* HTTP Beacon API: `/eth/v1/validator/contribution_and_proofs`

* add comment to invalid test case

* fix validation and test

* review

* in progress

* implementation

* remove test file

* remove duplicate

* tests

* review

* test fixes

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-03 22:24:23 +00:00
terencechain
cb59081887 Remove: span for convert to indexed attestation (#12687)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-03 16:17:46 +00:00
Nishant Das
a820d4dcc8 Minor Optimization on InnerShuffleList (#12690)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-03 06:56:07 +00:00
terencechain
dbc17cf2ca Fix: use correct context for UpdateCommitteeCache (#12691) 2023-08-03 06:00:58 +00:00
terencechain
d38762772a test: add execution payload operation tests (#12685)
* test: add execution payload operation tests

* thanks!

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Update testing/spectest/shared/bellatrix/operations/execution_payload.go

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-02 08:16:46 +00:00
Potuz
e5d1eb885d Update caches blocking (#12679)
* update NSC together with epoch boundary caches

* block when updating caches

* reviews

* removal of very useful helper because the reviewers requested it :)

* use IsEpochEnd
2023-08-02 05:59:09 +00:00
terencechain
a9d7701081 test: add missing random and fork transition spec tests (#12681)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-08-01 14:08:40 +00:00
Bharath Vedartham
33abe6eb90 add metric to check if validator is in next sync committee (#12650)
* add metric to check if validator is in next sync committee

* Update validator/client/validator.go

* Update validator/client/metrics.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2023-08-01 08:25:10 +00:00
terencechain
c342c9a14e fix: update nil check for new validator (#12677) 2023-08-01 04:13:57 +00:00
Radosław Kapka
a9b003e1fe HTTP Beacon API: /eth/v1/validator/contribution_and_proofs (#12660)
* HTTP Beacon API: `/eth/v1/validator/contribution_and_proofs`

* add comment to invalid test case

* fix validation and test

* review

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-07-31 17:32:39 +00:00
Radosław Kapka
955175b7eb Update server-side events dependency (#12676)
* Update server-side events dependency

* go sum

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-31 15:51:19 +00:00
Potuz
c17682940e only handle epoch boundary on canonical blocks (#12666)
* only handle epoch boundary on canonical blocks

* do not fetch headstate
2023-07-31 12:07:09 -03:00
Nishant Das
db450f53a4 Fix Update Of Committee Cache (#12668)
* fix it

* sammy's comment

* fix tests

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-28 13:34:03 +00:00
terencechain
493905ee9e feat: add state regren duration metric (#12672) 2023-07-27 19:33:25 +00:00
james-prysm
e449724034 Bugfix: proposer-settings edge case for activating validators (#12671) 2023-07-27 16:45:16 +00:00
terencechain
a44c209be0 feat: add metric for block gossip time (#12670) 2023-07-27 15:17:24 +00:00
terencechain
183e72b194 fix: update epoch + 1 (#12667) 2023-07-27 18:53:36 +08:00
Potuz
337c254161 update shuffling caches at epoch boundary (#12661)
* update shuffling caches at epoch boundary

* move span

* do not advance to a past slot
2023-07-26 18:46:18 +00:00
james-prysm
ec60cab2bf Bugfix: Metrics adding fix pending validators balance (#12665) 2023-07-26 16:09:24 +00:00
Potuz
ded00495e7 Parallelize cl el validation (#12590)
* Parallelize EL and CL block validation

* fix by Nishant
2023-07-26 12:23:51 -03:00
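The "Parallelize cl el validation" change above (#12590) runs the consensus-layer state transition and the execution-layer payload check concurrently and rejects the block if either fails; the full diff appears further down in receive_block.go. A minimal sketch of the same pattern with golang.org/x/sync/errgroup, where the two validate functions are placeholders rather than Prysm's APIs:

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// Placeholder validators standing in for the CL state transition and the
// EL newPayload notification.
func validateConsensus(ctx context.Context) (string, error) { return "post-state", nil }
func validateExecution(ctx context.Context) (bool, error)   { return true, nil }

func main() {
	eg, ctx := errgroup.WithContext(context.Background())

	var postState string
	eg.Go(func() error {
		var err error
		postState, err = validateConsensus(ctx)
		return err
	})

	var validPayload bool
	eg.Go(func() error {
		var err error
		validPayload, err = validateExecution(ctx)
		return err
	})

	// Both validations run in parallel; the first error cancels the other
	// via the shared context and fails the block.
	if err := eg.Wait(); err != nil {
		fmt.Println("block rejected:", err)
		return
	}
	fmt.Println(postState, validPayload)
}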
Nishant Das
113172d8aa Exit Initial Sync Early (#12659)
* add check

* fix test
2023-07-26 00:38:25 +00:00
Radosław Kapka
2b40c44879 Use the correct root in consensus validation (#12657)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-07-25 12:04:08 +02:00
Radosław Kapka
fc193b09bf HTTP Beacon API: /eth/v1/validator/aggregate_attestation (#12643)
* initial implementation

* it works

* tests

* extracted helper functions

* fix code

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-24 22:16:45 +00:00
Radosław Kapka
a0d53f5155 Register GetValidatorPerformance as POST (#12658) 2023-07-24 21:17:08 +00:00
terencechain
ff3d2bc69f fix: add mainnet withdrawals and bls spec tests (#12655) 2023-07-24 14:13:02 +00:00
Nishant Das
dd403f830c clean up logging (#12653) 2023-07-24 20:29:50 +08:00
Raul Jordan
e9c8e84618 Integrate Read-Only-Lock-on-Get LRU Cache for Public Keys (#12646)
* use new lru cache

* update build

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-22 04:37:53 +00:00
james-prysm
9c250dd4c2 Builder gas limit fix default to 0 in some cases (#12647)
* adding fix for nethermind's findings on gaslimit =0 on some default setups

* adding in default gaslimit check

* fixing linting on complexity

* fixing cognitive complexity linting marker

* fixing unit test and bug with referencing

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-07-22 03:44:50 +00:00
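The builder gas limit fix above (#12647) amounts to treating a gas limit of 0 from proposer settings as unset and substituting a default before registering with the builder. A sketch of that check, with an assumed default value for illustration rather than Prysm's configured one:

package main

import "fmt"

// defaultBuilderGasLimit is an assumed default used only for this sketch;
// the real default lives in Prysm's configuration.
const defaultBuilderGasLimit uint64 = 30_000_000

// effectiveGasLimit treats a zero gas limit as "unset" and falls back to the
// default, so builder registrations never go out with a gas limit of 0.
func effectiveGasLimit(configured uint64) uint64 {
	if configured == 0 {
		return defaultBuilderGasLimit
	}
	return configured
}

func main() {
	fmt.Println(effectiveGasLimit(0))          // falls back to the default
	fmt.Println(effectiveGasLimit(36_000_000)) // explicit value is kept
}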
Potuz
f97db3b738 Different parallel hashing (#12639)
* Parallelize hashing of large lists

* add unit test

* add file

* do not parallelize on low processor count

* revert minimal proc count

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-07-21 20:36:20 -04:00
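The "Different parallel hashing" change above (#12639) hashes large lists across goroutines and skips the parallel path when few CPUs are available, since goroutine overhead then outweighs the benefit. A rough sketch of that idea, not the actual hasher used in Prysm:

package main

import (
	"crypto/sha256"
	"fmt"
	"runtime"
	"sync"
)

// hashList hashes every element; small lists or low processor counts use the
// serial path, otherwise the work is split into one chunk per CPU.
func hashList(items [][]byte) [][32]byte {
	out := make([][32]byte, len(items))
	workers := runtime.GOMAXPROCS(0)
	if workers < 2 || len(items) < 1024 {
		for i, it := range items {
			out[i] = sha256.Sum256(it)
		}
		return out
	}
	var wg sync.WaitGroup
	chunk := (len(items) + workers - 1) / workers
	for start := 0; start < len(items); start += chunk {
		end := start + chunk
		if end > len(items) {
			end = len(items)
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done()
			// Each goroutine writes a disjoint index range of out.
			for i := start; i < end; i++ {
				out[i] = sha256.Sum256(items[i])
			}
		}(start, end)
	}
	wg.Wait()
	return out
}

func main() {
	items := make([][]byte, 4096)
	for i := range items {
		items[i] = []byte{byte(i), byte(i >> 8)}
	}
	fmt.Printf("%x\n", hashList(items)[0])
}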
Radosław Kapka
43378ae8d5 Use BlockProcessed event in Beacon API (#12625)
* Use `BlockProcessed` event in Beacon API

* log error

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-20 17:26:34 +00:00
Radosław Kapka
2217b45e16 Return historical roots in Capella state (#12642)
* Return historical roots in Capella state

* test fix

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-20 17:00:05 +00:00
james-prysm
405cd6ed86 Publish blockv2 ssz (#12636)
* wip produce block v2 ssz

* adding functions for ssz processing

* fixing linting

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing review feedback

* fixing linting

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-07-20 16:26:40 +00:00
terencechain
ba9bbdd6b9 Style: minor cleanups to blockchain pkg (#12640)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-20 10:17:35 +00:00
Nishant Das
945c76132c Add Allocation Data To Benchmark (#12641)
* add more data

* terence's review
2023-07-20 08:49:17 +00:00
Radosław Kapka
056d3ff0cc Fix GetValidatorPerformance endpoint (#12638) 2023-07-19 15:29:07 +00:00
Nishant Das
d4fd3c34de Use GetPayloadBodies in our Engine Client (#12630)
* add it to our engine client

* lint

* add better debugging info

* temp debugging

* fix

* fix it

* pretty

* dumb bug

* Revert "pretty"

This reverts commit 6a6df3cc5f.

* Revert "fix it"

This reverts commit 73dc617bb0.

* Revert "fix"

This reverts commit 3aecdaac6d.

* Revert "temp debugging"

This reverts commit ffcd2c61a0.

* Revert "add better debugging info"

This reverts commit 96184e8567.

* raul's comment

* regression test

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-07-18 15:46:23 +00:00
Sammy Rosso
4ac4d00377 Implement GetValidatorPerformance beacon chain client (#12581)
* Add http endpoint for GetValidatorPerformance

* Add tests

* fix up client usage

* Revert changes

* Implement GetValidatorPerformance

* refactor to reuse code

* Move endpoint + move ComputeValidatorPerformance

* Radek's comment change

* Add Bazel file

* Change endpoint path

* Add server for http endpoints

* Fix server

* Create core package

* Gaz

* Update + add broken test

* Gaz

* fix

* Fix test

* Create const for endpoint

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-18 08:57:54 +00:00
Radosław Kapka
ec2fda7ad9 getSyncCommitteeRewards API endpoint (#12633) 2023-07-18 10:31:15 +02:00
Preston Van Loon
292f4de099 Update hermetic_cc_toolchain (#12631) 2023-07-17 14:50:45 +00:00
terencechain
145a485b75 fix(pcli): use state trie for HTR duration (#12629) 2023-07-15 20:39:58 -07:00
terencechain
7e474b7a30 fix: benchmark deserialize without clone and init trie (#12626)
* fix: benchmark deserialize without clone

* remove tree initialization

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-15 12:35:58 +00:00
Nishant Das
af0ee9bd16 use read-only validators (#12628) 2023-07-15 00:34:45 -07:00
Radosław Kapka
456ba7c498 Fix comments when receiving block (#12624) 2023-07-14 10:30:02 +00:00
Potuz
cd8847c53b Add deserialization time in pcli benchmark (#12620) 2023-07-13 12:19:10 +00:00
Kaushal Kumar Singh
1894a124ea Fix: Size of SyncCommitteeBits should be 64 bytes (512 bits) instead of 512 bytes (#12586)
* Fix: Size of SyncCommitteeBits should be 64 bytes (512 bits) instead of 512 bytes

* Updated unit test
2023-07-13 10:43:17 +00:00
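The SyncCommitteeBits fix above (#12586) is plain arithmetic: the bitvector holds one bit per sync committee member, and with SYNC_COMMITTEE_SIZE = 512 that is 512 bits, i.e. 64 bytes when SSZ-encoded. A trivial illustration:

package main

import "fmt"

const syncCommitteeSize = 512 // one bit per sync committee member

func main() {
	bits := make([]byte, syncCommitteeSize/8) // Bitvector[512] -> 64 bytes
	fmt.Println(len(bits))                    // 64
}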
Preston Van Loon
490bd22b97 Update go version to 1.20.6 (#12617) 2023-07-12 19:55:33 +00:00
terencechain
f23e720a16 fix: add local boost flag to main/usage (#12615)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-07-12 12:15:25 +00:00
Raul Jordan
402799a584 Threadsafe LRU With Non-Blocking Reads for Concurrent Readers (#12476)
* add nonblocking simple lru

* method

* add in missing tests, fix panic
2023-07-12 17:57:52 +08:00
james-prysm
0266609bf6 bugfix : Eth-Consensus-Version header on response header (#12600)
* adding in custom header

* adding in parsing for middleware

* fixing casing

* add handling on error as well

* changing how error is handled for header

* changing how error is handled

* fixing casing

* Update beacon-chain/rpc/eth/beacon/blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/blinded_blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/blinded_blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fixing unit tests and review comment

* making some constants consistent in 1 file

* fixing missed blinded blocks

* fixing constants in custom handler tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-07-11 13:27:23 -05:00
296 changed files with 31559 additions and 13068 deletions

View File

@@ -1 +1 @@
6.2.1
6.3.2

.github/workflows/fuzz.yml (vendored, new file, 42 lines)
View File

@@ -0,0 +1,42 @@
name: "fuzz"
on:
  workflow_dispatch:
  schedule:
    - cron: "0 12 * * *"
permissions:
  contents: write
  pull-requests: write
jobs:
  list:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: '1.20'
      - id: list
        uses: shogo82148/actions-go-fuzz/list@v0
    outputs:
      fuzz-tests: ${{steps.list.outputs.fuzz-tests}}
  fuzz:
    runs-on: ubuntu-latest
    timeout-minutes: 360
    needs: list
    strategy:
      fail-fast: false
      matrix:
        include: ${{fromJson(needs.list.outputs.fuzz-tests)}}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: '1.20'
      - uses: shogo82148/actions-go-fuzz/run@v0
        with:
          packages: ${{ matrix.package }}
          fuzz-regexp: ${{ matrix.func }}
          fuzz-time: "20m"

View File

@@ -133,8 +133,8 @@ nogo(
# nogo checks that fail with coverage enabled.
":coverage_enabled": [],
"//conditions:default": [
"@org_golang_x_tools//go/analysis/passes/lostcancel:go_default_library",
"@org_golang_x_tools//go/analysis/passes/composite:go_default_library",
"@org_golang_x_tools//go/analysis/passes/lostcancel:go_default_library",
],
}),
)

View File

@@ -16,14 +16,12 @@ load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
HERMETIC_CC_TOOLCHAIN_VERSION = "v2.0.0"
http_archive(
name = "hermetic_cc_toolchain",
sha256 = "57f03a6c29793e8add7bd64186fc8066d23b5ffd06fe9cc6b0b8c499914d3a65",
sha256 = "973ab22945b921ef45b8e1d6ce01ca7ce1b8a462167449a36e297438c4ec2755",
strip_prefix = "hermetic_cc_toolchain-5098046bccc15d2962f3cc8e7e53d6a2a26072dc",
urls = [
"https://mirror.bazel.build/github.com/uber/hermetic_cc_toolchain/releases/download/{0}/hermetic_cc_toolchain-{0}.tar.gz".format(HERMETIC_CC_TOOLCHAIN_VERSION),
"https://github.com/uber/hermetic_cc_toolchain/releases/download/{0}/hermetic_cc_toolchain-{0}.tar.gz".format(HERMETIC_CC_TOOLCHAIN_VERSION),
"https://github.com/uber/hermetic_cc_toolchain/archive/5098046bccc15d2962f3cc8e7e53d6a2a26072dc.tar.gz", # 2023-06-28
],
)
@@ -69,10 +67,10 @@ bazel_skylib_workspace()
http_archive(
name = "bazel_gazelle",
sha256 = "5982e5463f171da99e3bdaeff8c0f48283a7a5f396ec5282910b9e8a49c0dd7e",
sha256 = "29d5dafc2a5582995488c6735115d1d366fcd6a0fc2e2a153f02988706349825",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/bazel-gazelle/releases/download/v0.25.0/bazel-gazelle-v0.25.0.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.25.0/bazel-gazelle-v0.25.0.tar.gz",
"https://mirror.bazel.build/github.com/bazelbuild/bazel-gazelle/releases/download/v0.31.0/bazel-gazelle-v0.31.0.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.31.0/bazel-gazelle-v0.31.0.tar.gz",
],
)
@@ -96,10 +94,10 @@ http_archive(
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
sha256 = "6b65cb7917b4d1709f9410ffe00ecf3e160edf674b78c54a894471320862184f",
sha256 = "bfc5ce70b9d1634ae54f4e7b495657a18a04e0d596785f672d35d5f505ab491a",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.39.0/rules_go-v0.39.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.39.0/rules_go-v0.39.0.zip",
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.40.0/rules_go-v0.40.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.40.0/rules_go-v0.40.0.zip",
],
)
@@ -174,7 +172,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.20.3",
go_version = "1.20.7",
nogo = "@//:nogo",
)
@@ -314,13 +312,6 @@ filegroup(
url = "https://github.com/eth-clients/eth2-networks/archive/7b4897888cebef23801540236f73123e21774954.tar.gz",
)
http_archive(
name = "com_github_bazelbuild_buildtools",
sha256 = "7a182df18df1debabd9e36ae07c8edfa1378b8424a04561b674d933b965372b3",
strip_prefix = "buildtools-f2aed9ee205d62d45c55cfabbfd26342f8526862",
url = "https://github.com/bazelbuild/buildtools/archive/f2aed9ee205d62d45c55cfabbfd26342f8526862.zip",
)
http_archive(
name = "com_google_protobuf",
sha256 = "4e176116949be52b0408dfd24f8925d1eb674a781ae242a75296b17a1c721395",
@@ -334,22 +325,6 @@ http_archive(
all_content = """filegroup(name = "all", srcs = glob(["**"]), visibility = ["//visibility:public"])"""
# External dependencies
http_archive(
name = "prysm_web_ui",
build_file_content = """
filegroup(
name = "site",
srcs = glob(["**/*"]),
visibility = ["//visibility:public"],
)
""",
sha256 = "5006614c33e358699b4e072c649cd4c3866f7d41a691449d5156f6c6e07a4c60",
urls = [
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v2.0.3/prysm-web-ui.tar.gz",
],
)
load("//:deps.bzl", "prysm_deps")
# gazelle:repository_macro deps.bzl%prysm_deps
@@ -381,10 +356,6 @@ load(
_cc_image_repos()
load("@io_bazel_rules_go//extras:embed_data_deps.bzl", "go_embed_data_dependencies")
go_embed_data_dependencies()
load("@com_github_atlassian_bazel_tools//gometalinter:deps.bzl", "gometalinter_dependencies")
gometalinter_dependencies()
@@ -393,10 +364,6 @@ load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies")
gazelle_dependencies()
load("@com_github_bazelbuild_buildtools//buildifier:deps.bzl", "buildifier_dependencies")
buildifier_dependencies()
load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()

api/BUILD.bazel (new file, 8 lines)
View File

@@ -0,0 +1,8 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["headers.go"],
importpath = "github.com/prysmaticlabs/prysm/v4/api",
visibility = ["//visibility:public"],
)

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"path"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
base "github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
@@ -109,8 +110,8 @@ func DownloadFinalizedData(ctx context.Context, client *Client) (*OriginData, er
log.
WithField("block_slot", b.Block().Slot()).
WithField("state_slot", s.Slot()).
WithField("state_root", sr).
WithField("block_root", br).
WithField("state_root", hexutil.Encode(sr[:])).
WithField("block_root", hexutil.Encode(br[:])).
Info("Downloaded checkpoint sync state and block.")
return &OriginData{
st: s,

View File

@@ -13,6 +13,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/grpc:go_default_library",
"//encoding/bytesutil:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
@@ -32,6 +33,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/grpc:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",

View File

@@ -10,6 +10,7 @@ import (
"strings"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
)
@@ -116,7 +117,11 @@ func HandleGrpcResponseError(errJson ErrorJson, resp *http.Response, respBody []
// Something went wrong, but the request completed, meaning we can write headers and the error message.
for h, vs := range resp.Header {
for _, v := range vs {
w.Header().Set(h, v)
if strings.HasSuffix(h, api.VersionHeader) {
w.Header().Set(api.VersionHeader, v)
} else {
w.Header().Set(h, v)
}
}
}
// Handle gRPC timeout.
@@ -187,9 +192,11 @@ func WriteMiddlewareResponseHeadersAndBody(grpcResp *http.Response, responseJson
var statusCodeHeader string
for h, vs := range grpcResp.Header {
// We don't want to expose any gRPC metadata in the HTTP response, so we skip forwarding metadata headers.
if strings.HasPrefix(h, "Grpc-Metadata") {
if h == "Grpc-Metadata-"+grpc.HttpCodeMetadataKey {
if strings.HasPrefix(h, grpc.MetadataPrefix) {
if h == grpc.WithPrefix(grpc.HttpCodeMetadataKey) {
statusCodeHeader = vs[0]
} else if strings.HasSuffix(h, api.VersionHeader) {
w.Header().Set(api.VersionHeader, vs[0])
}
} else {
for _, v := range vs {
@@ -223,7 +230,7 @@ func WriteError(w http.ResponseWriter, errJson ErrorJson, responseHeader http.He
// Include custom error in the error JSON.
hasCustomError := false
if responseHeader != nil {
customError, ok := responseHeader["Grpc-Metadata-"+grpc.CustomErrorMetadataKey]
customError, ok := responseHeader[grpc.WithPrefix(grpc.CustomErrorMetadataKey)]
if ok {
hasCustomError = true
// Assume header has only one value and read the 0 index.

View File

@@ -8,6 +8,7 @@ import (
"strings"
"testing"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
@@ -280,7 +281,8 @@ func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
response := &http.Response{
Header: http.Header{
"Foo": []string{"foo"},
"Grpc-Metadata-" + grpc.HttpCodeMetadataKey: []string{"204"},
grpc.WithPrefix(grpc.HttpCodeMetadataKey): []string{"204"},
grpc.WithPrefix(api.VersionHeader): []string{"capella"},
},
}
container := defaultResponseContainer()
@@ -299,6 +301,9 @@ func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "224", v[0])
v, ok = writer.Header()["Eth-Consensus-Version"]
require.Equal(t, true, ok, "header not found")
assert.Equal(t, "capella", v[0])
assert.Equal(t, 204, writer.Code)
assert.DeepEqual(t, responseJson, writer.Body.Bytes())
})
@@ -320,11 +325,12 @@ func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
t.Run("GET_invalid_status_code", func(t *testing.T) {
response := &http.Response{
Header: http.Header{},
Header: http.Header{"Grpc-Metadata-Eth-Consensus-Version": []string{"capella"}},
}
// Set invalid status code.
response.Header["Grpc-Metadata-"+grpc.HttpCodeMetadataKey] = []string{"invalid"}
response.Header[grpc.WithPrefix(grpc.HttpCodeMetadataKey)] = []string{"invalid"}
response.Header[grpc.WithPrefix(api.VersionHeader)] = []string{"capella"}
container := defaultResponseContainer()
responseJson, err := json.Marshal(container)
@@ -390,7 +396,7 @@ func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
func TestWriteError(t *testing.T) {
t.Run("ok", func(t *testing.T) {
responseHeader := http.Header{
"Grpc-Metadata-" + grpc.CustomErrorMetadataKey: []string{"{\"CustomField\":\"bar\"}"},
grpc.WithPrefix(grpc.CustomErrorMetadataKey): []string{"{\"CustomField\":\"bar\"}"},
}
errJson := &testErrorJson{
Message: "foo",
@@ -420,7 +426,7 @@ func TestWriteError(t *testing.T) {
logHook := test.NewGlobal()
responseHeader := http.Header{
"Grpc-Metadata-" + grpc.CustomErrorMetadataKey: []string{"invalid"},
grpc.WithPrefix(grpc.CustomErrorMetadataKey): []string{"invalid"},
}
WriteError(httptest.NewRecorder(), &testErrorJson{}, responseHeader)

View File

@@ -6,3 +6,11 @@ const CustomErrorMetadataKey = "Custom-Error"
// HttpCodeMetadataKey is the key to use when setting custom HTTP status codes in gRPC metadata.
const HttpCodeMetadataKey = "X-Http-Code"
// MetadataPrefix is the prefix for grpc headers on metadata
const MetadataPrefix = "Grpc-Metadata"
// WithPrefix creates a new string with grpc metadata prefix
func WithPrefix(value string) string {
return MetadataPrefix + "-" + value
}

api/headers.go (new file, 7 lines)
View File

@@ -0,0 +1,7 @@
package api

const (
	VersionHeader        = "Eth-Consensus-Version"
	JsonMediaType        = "application/json"
	OctetStreamMediaType = "application/octet-stream"
)

View File

@@ -88,6 +88,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)

View File

@@ -387,7 +387,7 @@ func (s *Service) InForkchoice(root [32]byte) bool {
return s.cfg.ForkChoiceStore.HasNode(root)
}
// IsViableForkCheckpoint returns whether the given checkpoint is a checkpoint in any
// IsViableForCheckpoint returns whether the given checkpoint is a checkpoint in any
// chain known to forkchoice
func (s *Service) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool, error) {
s.cfg.ForkChoiceStore.RLock()

View File

@@ -182,7 +182,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
// This gets called to update canonical root mapping. It does not save head block
// root in DB. With the inception of initial-sync-cache-state flag, it uses finalized
// check point as anchors to resume sync therefore head is no longer needed to be saved on per slot basis.
func (s *Service) saveHeadNoDB(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock, r [32]byte, hs state.BeaconState) error {
func (s *Service) saveHeadNoDB(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock, r [32]byte, hs state.BeaconState, optimistic bool) error {
if err := blocks.BeaconBlockIsNil(b); err != nil {
return err
}
@@ -198,7 +198,7 @@ func (s *Service) saveHeadNoDB(ctx context.Context, b interfaces.ReadOnlySignedB
if err != nil {
return err
}
if err := s.setHeadInitialSync(r, bCp, hs); err != nil {
if err := s.setHeadInitialSync(r, bCp, hs, optimistic); err != nil {
return errors.Wrap(err, "could not set head")
}
return nil
@@ -227,7 +227,7 @@ func (s *Service) setHead(newHead *head) error {
// This sets head view object which is used to track the head slot, root, block and state. The method
// assumes that state being passed into the method will not be modified by any other alternate
// caller which holds the state's reference.
func (s *Service) setHeadInitialSync(root [32]byte, block interfaces.ReadOnlySignedBeaconBlock, state state.BeaconState) error {
func (s *Service) setHeadInitialSync(root [32]byte, block interfaces.ReadOnlySignedBeaconBlock, state state.BeaconState, optimistic bool) error {
s.headLock.Lock()
defer s.headLock.Unlock()
@@ -237,9 +237,10 @@ func (s *Service) setHeadInitialSync(root [32]byte, block interfaces.ReadOnlySig
return err
}
s.head = &head{
root: root,
block: bCp,
state: state,
root: root,
block: bCp,
state: state,
optimistic: optimistic,
}
return nil
}

View File

@@ -250,40 +250,45 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
slashingBalance := uint64(0)
slashingEffectiveBalance := uint64(0)
for i, validator := range postState.Validators() {
for i := 0; i < postState.NumValidators(); i++ {
validator, err := postState.ValidatorAtIndexReadOnly(primitives.ValidatorIndex(i))
if err != nil {
log.WithError(err).Error("Could not load validator")
continue
}
bal, err := postState.BalanceAtIndex(primitives.ValidatorIndex(i))
if err != nil {
log.WithError(err).Error("Could not load validator balance")
continue
}
if validator.Slashed {
if currentEpoch < validator.ExitEpoch {
if validator.Slashed() {
if currentEpoch < validator.ExitEpoch() {
slashingInstances++
slashingBalance += bal
slashingEffectiveBalance += validator.EffectiveBalance
slashingEffectiveBalance += validator.EffectiveBalance()
} else {
slashedInstances++
}
continue
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
if currentEpoch < validator.ExitEpoch {
if validator.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
if currentEpoch < validator.ExitEpoch() {
exitingInstances++
exitingBalance += bal
exitingEffectiveBalance += validator.EffectiveBalance
exitingEffectiveBalance += validator.EffectiveBalance()
} else {
exitedInstances++
}
continue
}
if currentEpoch < validator.ActivationEpoch {
if currentEpoch < validator.ActivationEpoch() {
pendingInstances++
pendingBalance += bal
continue
}
activeInstances++
activeBalance += bal
activeEffectiveBalance += validator.EffectiveBalance
activeEffectiveBalance += validator.EffectiveBalance()
}
activeInstances += exitingInstances + slashingInstances
activeBalance += exitingBalance + slashingBalance

View File

@@ -41,7 +41,7 @@ var initialSyncBlockCacheSize = uint64(2 * params.BeaconConfig().SlotsPerEpoch)
// postBlockProcess is called when a gossip block is received. This function performs
// several duties most importantly informing the engine if head was updated,
// saving the new head information to the blockchain package and database and
// saving the new head information to the blockchain package and
// handling attestations, slashings and similar included in the block.
func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, postState state.BeaconState, isValidPayload bool) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlock")
@@ -86,18 +86,6 @@ func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOn
"headRoot": fmt.Sprintf("%#x", headRoot),
"headWeight": headWeight,
}).Debug("Head block is not the received block")
} else {
// Updating next slot state cache can happen in the background. It shouldn't block rest of the process.
go func() {
// Use a custom deadline here, since this method runs asynchronously.
// We ignore the parent method's context and instead create a new one
// with a custom deadline, therefore using the background context instead.
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := transition.UpdateNextSlotCache(slotCtx, blockRoot[:], postState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
}
}()
}
newBlockHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
@@ -107,6 +95,12 @@ func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOn
return err
}
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(blockRoot)
if err != nil {
log.WithError(err).Debug("Could not check if block is optimistic")
optimistic = true
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
@@ -115,12 +109,34 @@ func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOn
BlockRoot: blockRoot,
SignedBlock: signed,
Verified: true,
Optimistic: optimistic,
},
})
defer reportAttestationInclusion(b)
if err := s.handleEpochBoundary(ctx, postState, blockRoot[:]); err != nil {
return err
if headRoot == blockRoot {
// Updating next slot state cache can happen in the background
// except in the epoch boundary in which case we lock to handle
// the shuffling and proposer caches updates.
// We handle these caches only on canonical
// blocks, otherwise this will be handled by lateBlockTasks
slot := postState.Slot()
if slots.IsEpochEnd(slot) {
if err := transition.UpdateNextSlotCache(ctx, blockRoot[:], postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, slot, postState, blockRoot[:]); err != nil {
return errors.Wrap(err, "could not handle epoch boundary")
}
} else {
go func() {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := transition.UpdateNextSlotCache(slotCtx, blockRoot[:], postState); err != nil {
log.WithError(err).Error("could not update next slot state cache")
}
}()
}
}
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
@@ -313,63 +329,51 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.ReadOnlySi
if _, err := s.notifyForkchoiceUpdate(ctx, arg); err != nil {
return err
}
return s.saveHeadNoDB(ctx, lastB, lastBR, preState)
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
// Epoch boundary bookkeeping such as logging epoch summaries.
func (s *Service) handleEpochBoundary(ctx context.Context, postState state.BeaconState, blockRoot []byte) error {
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
return errors.Wrap(err, "could not update committee cache")
}
if err := helpers.UpdateProposerIndicesInCache(ctx, st, e); err != nil {
return errors.Wrap(err, "could not update proposer index cache")
}
go func() {
// Use a custom deadline here, since this method runs asynchronously.
// We ignore the parent method's context and instead create a new one
// with a custom deadline, therefore using the background context instead.
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := helpers.UpdateCommitteeCache(slotCtx, st, e+1); err != nil {
log.WithError(err).Warn("Could not update committee cache")
}
if err := helpers.UpdateProposerIndicesInCache(slotCtx, st, e+1); err != nil {
log.WithError(err).Warn("Failed to cache next epoch proposers")
}
}()
return nil
}
// Epoch boundary tasks: it copies the headState and updates the epoch boundary
// caches.
func (s *Service) handleEpochBoundary(ctx context.Context, slot primitives.Slot, headState state.BeaconState, blockRoot []byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.handleEpochBoundary")
defer span.End()
var err error
if postState.Slot()+1 == s.nextEpochBoundarySlot {
copied := postState.Copy()
copied, err := transition.ProcessSlotsUsingNextSlotCache(ctx, copied, blockRoot, copied.Slot()+1)
if err != nil {
return err
}
// Update caches for the next epoch at epoch boundary slot - 1.
if err := helpers.UpdateCommitteeCache(ctx, copied, coreTime.CurrentEpoch(copied)); err != nil {
return err
}
e := coreTime.CurrentEpoch(copied)
if err := helpers.UpdateProposerIndicesInCache(ctx, copied, e); err != nil {
return err
}
go func() {
// Use a custom deadline here, since this method runs asynchronously.
// We ignore the parent method's context and instead create a new one
// with a custom deadline, therefore using the background context instead.
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := helpers.UpdateProposerIndicesInCache(slotCtx, copied, e+1); err != nil {
log.WithError(err).Warn("Failed to cache next epoch proposers")
}
}()
} else if postState.Slot() >= s.nextEpochBoundarySlot {
s.nextEpochBoundarySlot, err = slots.EpochStart(coreTime.NextEpoch(postState))
if err != nil {
return err
}
// Update caches at epoch boundary slot.
// The following updates have shortcut to return nil cheaply if fulfilled during boundary slot - 1.
if err := helpers.UpdateCommitteeCache(ctx, postState, coreTime.CurrentEpoch(postState)); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, postState, coreTime.CurrentEpoch(postState)); err != nil {
return err
}
headSt, err := s.HeadState(ctx)
if err != nil {
return err
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
return err
}
// return early if we are advancing to a past epoch
if slot < headState.Slot() {
return nil
}
return nil
if !slots.IsEpochEnd(slot) {
return nil
}
copied := headState.Copy()
copied, err := transition.ProcessSlotsUsingNextSlotCache(ctx, copied, blockRoot, slot+1)
if err != nil {
return err
}
return s.updateEpochBoundaryCaches(ctx, copied)
}
// This feeds in the attestations included in the block to fork choice store. It's allows fork choice store
@@ -500,8 +504,9 @@ func (s *Service) runLateBlockTasks() {
// lateBlockTasks is called 4 seconds into the slot and performs tasks
// related to late blocks. It emits a MissedSlot state feed event.
// It calls FCU and sets the right attributes if we are proposing next slot
// it also updates the next slot cache to deal with skipped slots.
// it also updates the next slot cache and the proposer index cache to deal with skipped slots.
func (s *Service) lateBlockTasks(ctx context.Context) {
currentSlot := s.CurrentSlot()
if s.CurrentSlot() == s.HeadSlot() {
return
}
@@ -509,8 +514,10 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
s.headLock.RUnlock()
lastRoot, lastState := transition.LastCachedState()
if lastState == nil {
lastRoot, lastState = headRoot[:], headState
@@ -521,7 +528,9 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
}
// Head root should be empty when retrieving proposer index for the next slot.
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, [32]byte{} /* head root */)
// There exists proposer for next slot, but we haven't called fcu w/ payload attribute yet.

View File

@@ -539,8 +539,7 @@ func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 1024)
service.head = &head{state: s}
require.NoError(t, s.SetSlot(2*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, service.handleEpochBoundary(ctx, s, []byte{}))
require.Equal(t, 3*params.BeaconConfig().SlotsPerEpoch, service.nextEpochBoundarySlot)
require.NoError(t, service.handleEpochBoundary(ctx, s.Slot(), s, []byte{}))
}
func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {

View File

@@ -62,7 +62,7 @@ func (s *Service) VerifyLmdFfgConsistency(ctx context.Context, a *ethpb.Attestat
return err
}
if !bytes.Equal(a.Data.Target.Root, r) {
return errors.New("FFG and LMD votes are not consistent")
return fmt.Errorf("FFG and LMD votes are not consistent, block root: %#x, target root: %#x, canonical target root: %#x", a.Data.BeaconBlockRoot, a.Data.Target.Root, r)
}
return nil
}
@@ -86,23 +86,30 @@ func (s *Service) spawnProcessAttestationsRoutine() {
}
log.Warn("Genesis time received, now available to process attestations")
}
// Wait for node to be synced before running the routine.
if err := s.waitForSync(); err != nil {
log.WithError(err).Error("Could not wait to sync")
return
}
st := slots.NewSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
pat := slots.NewSlotTickerWithOffset(s.genesisTime, -reorgLateBlockCountAttestations, params.BeaconConfig().SecondsPerSlot)
reorgInterval := time.Second*time.Duration(params.BeaconConfig().SecondsPerSlot) - reorgLateBlockCountAttestations
ticker := slots.NewSlotTickerWithIntervals(s.genesisTime, []time.Duration{0, reorgInterval})
for {
select {
case <-s.ctx.Done():
return
case <-pat.C():
s.UpdateHead(s.ctx, s.CurrentSlot()+1)
case <-st.C():
s.cfg.ForkChoiceStore.Lock()
if err := s.cfg.ForkChoiceStore.NewSlot(s.ctx, s.CurrentSlot()); err != nil {
log.WithError(err).Error("could not process new slot")
}
s.cfg.ForkChoiceStore.Unlock()
case slotInterval := <-ticker.C():
if slotInterval.Interval > 0 {
s.UpdateHead(s.ctx, slotInterval.Slot+1)
} else {
s.cfg.ForkChoiceStore.Lock()
if err := s.cfg.ForkChoiceStore.NewSlot(s.ctx, slotInterval.Slot); err != nil {
log.WithError(err).Error("could not process new slot")
}
s.cfg.ForkChoiceStore.Unlock()
s.UpdateHead(s.ctx, s.CurrentSlot())
s.UpdateHead(s.ctx, slotInterval.Slot)
}
}
}
}()

View File

@@ -8,8 +8,8 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
@@ -23,6 +23,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
"golang.org/x/sync/errgroup"
)
// This defines how many epochs since finality the run time will begin to save hot state on to the DB.
@@ -61,19 +62,31 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := s.FinalizedCheckpt().Epoch
currentEpoch := coreTime.CurrentEpoch(preState)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
}
postState, err := s.validateStateTransition(ctx, preState, blockCopy)
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
isValidPayload, err := s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, blockCopy, blockRoot)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
eg, _ := errgroup.WithContext(ctx)
var postState state.BeaconState
eg.Go(func() error {
postState, err = s.validateStateTransition(ctx, preState, blockCopy)
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
return nil
})
var isValidPayload bool
eg.Go(func() error {
isValidPayload, err = s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, blockCopy, blockRoot)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
return nil
})
if err := eg.Wait(); err != nil {
return err
}
// The rest of block processing takes a lock on forkchoice.
s.cfg.ForkChoiceStore.Lock()
@@ -82,13 +95,20 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
return errors.Wrap(err, "could not save post state info")
}
// Apply state transition on the new block.
if err := s.postBlockProcess(ctx, blockCopy, blockRoot, postState, isValidPayload); err != nil {
err := errors.Wrap(err, "could not process block")
tracing.AnnotateError(span, err)
return err
}
if coreTime.CurrentEpoch(postState) > currentEpoch {
headSt, err := s.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
log.WithError(err).Error("could not report epoch metrics")
}
}
if err := s.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch); err != nil {
return errors.Wrap(err, "could not update justified checkpoint")
}
@@ -100,7 +120,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
// Send finalized events and finalized deposits in the background
if newFinalized {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
go s.sendNewFinalizedEvent(ctx, blockCopy, postState, finalized)
go s.sendNewFinalizedEvent(blockCopy, postState)
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDeposits(depCtx, finalized.Root)
@@ -165,6 +185,13 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Rea
return err
}
lastBR := blkRoots[len(blkRoots)-1]
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(lastBR)
if err != nil {
lastSlot := blocks[len(blocks)-1].Block().Slot()
log.WithError(err).Errorf("Could not check if block is optimistic, Root: %#x, Slot: %d", lastBR, lastSlot)
optimistic = true
}
for i, b := range blocks {
blockCopy, err := b.Copy()
if err != nil {
@@ -178,6 +205,7 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Rea
BlockRoot: blkRoots[i],
SignedBlock: blockCopy,
Verified: true,
Optimistic: optimistic,
},
})
@@ -337,7 +365,7 @@ func (s *Service) updateFinalizationOnBlock(ctx context.Context, preState, postS
// sendNewFinalizedEvent sends a new finalization checkpoint event over the
// event feed. It needs to be called on the background
func (s *Service) sendNewFinalizedEvent(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, postState state.BeaconState, finalized *forkchoicetypes.Checkpoint) {
func (s *Service) sendNewFinalizedEvent(signed interfaces.ReadOnlySignedBeaconBlock, postState state.BeaconState) {
isValidPayload := false
s.headLock.RLock()
if s.head != nil {

View File

@@ -35,7 +35,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
@@ -46,22 +45,21 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
nextEpochBoundarySlot primitives.Slot
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
}
// config options for the service.

View File

@@ -357,7 +357,7 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
require.NoError(t, s.cfg.StateGen.SaveState(ctx, r, newState))
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, s.saveHeadNoDB(ctx, wsb, r, newState))
require.NoError(t, s.saveHeadNoDB(ctx, wsb, r, newState, false))
newB, err := s.cfg.BeaconDB.HeadBlock(ctx)
require.NoError(t, err)

View File

@@ -533,7 +533,7 @@ func (s *ChainService) GetProposerHead() [32]byte {
return [32]byte{}
}
// SetForkchoiceGenesisTime mocks the same method in the chain service
// SetForkChoiceGenesisTime mocks the same method in the chain service
func (s *ChainService) SetForkChoiceGenesisTime(timestamp uint64) {
if s.ForkChoiceStore != nil {
s.ForkChoiceStore.SetGenesisTime(timestamp)

View File

@@ -50,6 +50,8 @@ func ProcessVoluntaryExits(
beaconState state.BeaconState,
exits []*ethpb.SignedVoluntaryExit,
) (state.BeaconState, error) {
maxExitEpoch, churn := v.ValidatorsMaxExitEpochAndChurn(beaconState)
var exitEpoch primitives.Epoch
for idx, exit := range exits {
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil voluntary exit in block body")
@@ -61,8 +63,15 @@ func ProcessVoluntaryExits(
if err := VerifyExitAndSignature(val, beaconState.Slot(), beaconState.Fork(), exit, beaconState.GenesisValidatorsRoot()); err != nil {
return nil, errors.Wrapf(err, "could not verify exit %d", idx)
}
beaconState, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex)
if err != nil {
beaconState, exitEpoch, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, maxExitEpoch, churn)
if err == nil {
if exitEpoch > maxExitEpoch {
maxExitEpoch = exitEpoch
churn = 1
} else if exitEpoch == maxExitEpoch {
churn++
}
} else if !errors.Is(err, v.ValidatorAlreadyExitedErr) {
return nil, err
}
}
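The hunk above replaces per-exit recomputation of the exit queue with running (maxExitEpoch, churn) bookkeeping: the pair is seeded by one scan of the validator set and then updated incrementally as each exit is assigned an epoch. A minimal standalone sketch of that bookkeeping, with plain uint64 values standing in for Prysm's epoch type and placeholder inputs:

package main

import "fmt"

func main() {
	// Seeded by a single scan of the validator set (ValidatorsMaxExitEpochAndChurn above).
	maxExitEpoch, churn := uint64(10), uint64(2)
	// Exit epochs returned by successive InitiateValidatorExit calls in the loop above.
	newExits := []uint64{10, 11, 11}
	for _, e := range newExits {
		if e > maxExitEpoch {
			maxExitEpoch, churn = e, 1
		} else if e == maxExitEpoch {
			churn++
		}
	}
	fmt.Println(maxExitEpoch, churn) // 11 2
}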

View File

@@ -110,8 +110,11 @@ func ProcessRegistryUpdates(ctx context.Context, state state.BeaconState) (state
isActive := helpers.IsActiveValidator(validator, currentEpoch)
belowEjectionBalance := validator.EffectiveBalance <= ejectionBal
if isActive && belowEjectionBalance {
state, err = validators.InitiateValidatorExit(ctx, state, primitives.ValidatorIndex(idx))
if err != nil {
// It is fine to do a quadratic loop here since this should
// rarely happen
maxExitEpoch, churn := validators.ValidatorsMaxExitEpochAndChurn(state)
state, _, err = validators.InitiateValidatorExit(ctx, state, primitives.ValidatorIndex(idx), maxExitEpoch, churn)
if err != nil && !errors.Is(err, validators.ValidatorAlreadyExitedErr) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
}
}

View File

@@ -39,6 +39,8 @@ type BlockProcessedData struct {
SignedBlock interfaces.ReadOnlySignedBeaconBlock
// Verified is true if the block's BLS contents have been verified.
Verified bool
// Optimistic is true if the block is optimistic.
Optimistic bool
}
// ChainStartedData is the data sent with ChainStarted events.

View File

@@ -295,43 +295,39 @@ func ShuffledIndices(s state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]pri
}
// UpdateCommitteeCache gets called at the beginning of every epoch to cache the committee shuffled indices
// list with committee index and epoch number. It caches the shuffled indices for current epoch and next epoch.
func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) error {
for _, e := range []primitives.Epoch{epoch, epoch + 1} {
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return err
}
if committeeCache.HasEntry(string(seed[:])) {
return nil
}
shuffledIndices, err := ShuffledIndices(state, e)
if err != nil {
return err
}
count := SlotCommitteeCount(uint64(len(shuffledIndices)))
// Store the sorted indices as well as the shuffled indices. In the current spec,
// the sorted indices are required to retrieve the proposer index. They are also
// used as a fallback when signature verification fails.
sortedIndices := make([]primitives.ValidatorIndex, len(shuffledIndices))
copy(sortedIndices, shuffledIndices)
sort.Slice(sortedIndices, func(i, j int) bool {
return sortedIndices[i] < sortedIndices[j]
})
if err := committeeCache.AddCommitteeShuffledList(ctx, &cache.Committees{
ShuffledIndices: shuffledIndices,
CommitteeCount: uint64(params.BeaconConfig().SlotsPerEpoch.Mul(count)),
Seed: seed,
SortedIndices: sortedIndices,
}); err != nil {
return err
}
// list with committee index and epoch number. It caches the shuffled indices for the input epoch.
func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState, e primitives.Epoch) error {
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return err
}
if committeeCache.HasEntry(string(seed[:])) {
return nil
}
shuffledIndices, err := ShuffledIndices(state, e)
if err != nil {
return err
}
count := SlotCommitteeCount(uint64(len(shuffledIndices)))
// Store the sorted indices as well as the shuffled indices. In the current spec,
// the sorted indices are required to retrieve the proposer index. They are also
// used as a fallback when signature verification fails.
sortedIndices := make([]primitives.ValidatorIndex, len(shuffledIndices))
copy(sortedIndices, shuffledIndices)
sort.Slice(sortedIndices, func(i, j int) bool {
return sortedIndices[i] < sortedIndices[j]
})
if err := committeeCache.AddCommitteeShuffledList(ctx, &cache.Committees{
ShuffledIndices: shuffledIndices,
CommitteeCount: uint64(params.BeaconConfig().SlotsPerEpoch.Mul(count)),
Seed: seed,
SortedIndices: sortedIndices,
}); err != nil {
return err
}
return nil
}

View File

@@ -413,7 +413,7 @@ func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
require.NoError(t, err)
require.NoError(t, UpdateCommitteeCache(context.Background(), state, time.CurrentEpoch(state)))
epoch := primitives.Epoch(1)
epoch := primitives.Epoch(0)
idx := primitives.CommitteeIndex(1)
seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
@@ -423,6 +423,40 @@ func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
assert.Equal(t, params.BeaconConfig().TargetCommitteeSize, uint64(len(indices)), "Did not save correct indices lengths")
}
func TestUpdateCommitteeCache_CanUpdateAcrossEpochs(t *testing.T) {
ClearCache()
defer ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
indices := make([]primitives.ValidatorIndex, validatorCount)
for i := primitives.ValidatorIndex(0); uint64(i) < validatorCount; i++ {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: 1,
}
indices[i] = i
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
e := time.CurrentEpoch(state)
require.NoError(t, UpdateCommitteeCache(context.Background(), state, e))
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
require.Equal(t, true, committeeCache.HasEntry(string(seed[:])))
nextSeed, err := Seed(state, e+1, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
require.Equal(t, false, committeeCache.HasEntry(string(nextSeed[:])))
require.NoError(t, UpdateCommitteeCache(context.Background(), state, e+1))
require.Equal(t, true, committeeCache.HasEntry(string(nextSeed[:])))
}
func BenchmarkComputeCommittee300000_WithPreCache(b *testing.B) {
validators := make([]*ethpb.Validator, 300000)
for i := 0; i < len(validators); i++ {

View File

@@ -184,7 +184,7 @@ func innerShuffleList(input []primitives.ValidatorIndex, seed [32]byte, shuffle
for {
buf[seedSize] = r
ph := hashfunc(buf[:pivotViewSize])
pivot := bytesutil.FromBytes8(ph[:8]) % listSize
pivot := binary.LittleEndian.Uint64(ph[:8]) % listSize
mirror := (pivot + 1) >> 1
binary.LittleEndian.PutUint32(buf[pivotViewSize:], uint32(pivot>>8))
source := hashfunc(buf)
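The change above reads the shuffle pivot directly from the first eight hash bytes with the standard library instead of the bytesutil.FromBytes8 helper. A standalone sketch of that pivot computation; sha256 stands in for the configured hash function, and the seed, round, and list size are placeholder inputs:

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

func main() {
	const seedSize, pivotViewSize = 32, 33
	buf := make([]byte, pivotViewSize) // 32-byte seed followed by the round byte
	buf[seedSize] = 3                  // placeholder shuffle round
	listSize := uint64(1000)           // placeholder validator list size
	ph := sha256.Sum256(buf[:pivotViewSize])
	// Interpret the first 8 hash bytes as a little-endian uint64 and reduce mod listSize.
	pivot := binary.LittleEndian.Uint64(ph[:8]) % listSize
	fmt.Println(pivot)
}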

View File

@@ -15,7 +15,6 @@ go_library(
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -13,11 +13,34 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
mathutil "github.com/prysmaticlabs/prysm/v4/math"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
// ValidatorAlreadyExitedErr is an error raised when trying to process an exit of
// an already exited validator
var ValidatorAlreadyExitedErr = errors.New("validator already exited")
// ValidatorsMaxExitEpochAndChurn returns the maximum non-FAR_FUTURE_EPOCH exit
// epoch and the number of validators exiting at that epoch (the churn).
func ValidatorsMaxExitEpochAndChurn(s state.BeaconState) (maxExitEpoch primitives.Epoch, churn uint64) {
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
e := val.ExitEpoch()
if e != farFutureEpoch {
if e > maxExitEpoch {
maxExitEpoch = e
churn = 1
} else if e == maxExitEpoch {
churn++
}
}
return nil
})
_ = err
return
}
// InitiateValidatorExit takes in validator index and updates
// validator with correct voluntary exit parameters.
//
@@ -42,73 +65,43 @@ import (
// # Set validator exit epoch and withdrawable epoch
// validator.exit_epoch = exit_queue_epoch
// validator.withdrawable_epoch = Epoch(validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) (state.BeaconState, error) {
func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex, exitQueueEpoch primitives.Epoch, churn uint64) (state.BeaconState, primitives.Epoch, error) {
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitableEpoch > exitQueueEpoch {
exitQueueEpoch = exitableEpoch
churn = 0
}
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, err
return nil, 0, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, nil
}
var exitEpochs []primitives.Epoch
err = s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
if val.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
exitEpochs = append(exitEpochs, val.ExitEpoch())
}
return nil
})
if err != nil {
return nil, err
}
exitEpochs = append(exitEpochs, helpers.ActivationExitEpoch(time.CurrentEpoch(s)))
// Obtain the exit queue epoch as the maximum number in the exit epochs array.
exitQueueEpoch := primitives.Epoch(0)
for _, i := range exitEpochs {
if exitQueueEpoch < i {
exitQueueEpoch = i
}
}
// We use the exit queue churn to determine if we have passed a churn limit.
exitQueueChurn := uint64(0)
err = s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
if val.ExitEpoch() == exitQueueEpoch {
var mErr error
exitQueueChurn, mErr = mathutil.Add64(exitQueueChurn, 1)
if mErr != nil {
return mErr
}
}
return nil
})
if err != nil {
return nil, err
return s, validator.ExitEpoch, ValidatorAlreadyExitedErr
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return nil, errors.Wrap(err, "could not get active validator count")
return nil, 0, errors.Wrap(err, "could not get active validator count")
}
churn, err := helpers.ValidatorChurnLimit(activeValidatorCount)
currentChurn, err := helpers.ValidatorChurnLimit(activeValidatorCount)
if err != nil {
return nil, errors.Wrap(err, "could not get churn limit")
return nil, 0, errors.Wrap(err, "could not get churn limit")
}
if exitQueueChurn >= churn {
if churn >= currentChurn {
exitQueueEpoch, err = exitQueueEpoch.SafeAdd(1)
if err != nil {
return nil, err
return nil, 0, err
}
}
validator.ExitEpoch = exitQueueEpoch
validator.WithdrawableEpoch, err = exitQueueEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, err
return nil, 0, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, err
return nil, 0, err
}
return s, nil
return s, exitQueueEpoch, nil
}
// SlashValidator slashes the malicious validator's balance and awards
@@ -144,8 +137,9 @@ func SlashValidator(
slashedIdx primitives.ValidatorIndex,
penaltyQuotient uint64,
proposerRewardQuotient uint64) (state.BeaconState, error) {
s, err := InitiateValidatorExit(ctx, s, slashedIdx)
if err != nil {
maxExitEpoch, churn := ValidatorsMaxExitEpochAndChurn(s)
s, _, err := InitiateValidatorExit(ctx, s, slashedIdx, maxExitEpoch, churn)
if err != nil && !errors.Is(err, ValidatorAlreadyExitedErr) {
return nil, errors.Wrapf(err, "could not initiate validator %d exit", slashedIdx)
}
currentEpoch := slots.ToEpoch(s.Slot())

View File

@@ -48,8 +48,9 @@ func TestInitiateValidatorExit_AlreadyExited(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := InitiateValidatorExit(context.Background(), state, 0)
require.NoError(t, err)
newState, epoch, err := InitiateValidatorExit(context.Background(), state, 0, 199, 1)
require.ErrorIs(t, err, ValidatorAlreadyExitedErr)
require.Equal(t, exitEpoch, epoch)
v, err := newState.ValidatorAtIndex(0)
require.NoError(t, err)
assert.Equal(t, exitEpoch, v.ExitEpoch, "Already exited")
@@ -66,8 +67,9 @@ func TestInitiateValidatorExit_ProperExit(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := InitiateValidatorExit(context.Background(), state, idx)
newState, epoch, err := InitiateValidatorExit(context.Background(), state, idx, exitedEpoch+2, 1)
require.NoError(t, err)
require.Equal(t, exitedEpoch+2, epoch)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, exitedEpoch+2, v.ExitEpoch, "Exit epoch was not the highest")
@@ -85,8 +87,9 @@ func TestInitiateValidatorExit_ChurnOverflow(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := InitiateValidatorExit(context.Background(), state, idx)
newState, epoch, err := InitiateValidatorExit(context.Background(), state, idx, exitedEpoch+2, 4)
require.NoError(t, err)
require.Equal(t, exitedEpoch+3, epoch)
// Because of exit queue overflow,
// validator who init exited has to wait one more epoch.
@@ -106,7 +109,7 @@ func TestInitiateValidatorExit_WithdrawalOverflows(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
_, err = InitiateValidatorExit(context.Background(), state, 1)
_, _, err = InitiateValidatorExit(context.Background(), state, 1, params.BeaconConfig().FarFutureEpoch-1, 1)
require.ErrorContains(t, "addition overflows", err)
}
@@ -337,3 +340,78 @@ func TestExitedValidatorIndices(t *testing.T) {
assert.DeepEqual(t, tt.wanted, exitedIndices)
}
}
func TestValidatorMaxExitEpochAndChurn(t *testing.T) {
tests := []struct {
state *ethpb.BeaconState
wantedEpoch primitives.Epoch
wantedChurn uint64
}{
{
state: &ethpb.BeaconState{
Validators: []*ethpb.Validator{
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 0,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 0,
WithdrawableEpoch: 10,
},
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 0,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
},
},
wantedEpoch: 0,
wantedChurn: 3,
},
{
state: &ethpb.BeaconState{
Validators: []*ethpb.Validator{
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
},
},
wantedEpoch: 0,
wantedChurn: 0,
},
{
state: &ethpb.BeaconState{
Validators: []*ethpb.Validator{
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 1,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 0,
WithdrawableEpoch: 10,
},
{
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ExitEpoch: 1,
WithdrawableEpoch: params.BeaconConfig().MinValidatorWithdrawabilityDelay,
},
},
},
wantedEpoch: 1,
wantedChurn: 2,
},
}
for _, tt := range tests {
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
epoch, churn := ValidatorsMaxExitEpochAndChurn(s)
require.Equal(t, tt.wantedEpoch, epoch)
require.Equal(t, tt.wantedChurn, churn)
}
}

View File

@@ -297,9 +297,6 @@ func (s *Service) ExchangeTransitionConfiguration(
}
func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
if !features.Get().EnableOptionalEngineMethods {
return nil, errors.New("optional engine methods not enabled")
}
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExchangeCapabilities")
defer span.End()
@@ -491,9 +488,6 @@ func (s *Service) HeaderByNumber(ctx context.Context, number *big.Int) (*types.H
// GetPayloadBodiesByHash returns the relevant payload bodies for the provided block hash.
func (s *Service) GetPayloadBodiesByHash(ctx context.Context, executionBlockHashes []common.Hash) ([]*pb.ExecutionPayloadBodyV1, error) {
if !features.Get().EnableOptionalEngineMethods {
return nil, errors.New("optional engine methods not enabled")
}
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetPayloadBodiesByHashV1")
defer span.End()
@@ -513,9 +507,6 @@ func (s *Service) GetPayloadBodiesByHash(ctx context.Context, executionBlockHash
// GetPayloadBodiesByRange returns the relevant payload bodies for the provided range.
func (s *Service) GetPayloadBodiesByRange(ctx context.Context, start, count uint64) ([]*pb.ExecutionPayloadBodyV1, error) {
if !features.Get().EnableOptionalEngineMethods {
return nil, errors.New("optional engine methods not enabled")
}
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetPayloadBodiesByRangeV1")
defer span.End()
@@ -560,19 +551,7 @@ func (s *Service) ReconstructFullBlock(
}
executionBlockHash := common.BytesToHash(header.BlockHash())
executionBlock, err := s.ExecutionBlockByHash(ctx, executionBlockHash, true /* with txs */)
if err != nil {
return nil, fmt.Errorf("could not fetch execution block with txs by hash %#x: %v", executionBlockHash, err)
}
if executionBlock == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionBlockHash)
}
if bytes.Equal(executionBlock.Hash.Bytes(), []byte{}) {
return nil, EmptyBlockHash
}
executionBlock.Version = blindedBlock.Version()
payload, err := fullPayloadFromExecutionBlock(header, executionBlock)
payload, err := s.retrievePayloadFromExecutionHash(ctx, executionBlockHash, header, blindedBlock.Version())
if err != nil {
return nil, err
}
@@ -619,32 +598,9 @@ func (s *Service) ReconstructFullBellatrixBlockBatch(
executionHashes = append(executionHashes, executionBlockHash)
}
}
execBlocks, err := s.ExecutionBlocksByHashes(ctx, executionHashes, true /* with txs*/)
fullBlocks, err := s.retrievePayloadsFromExecutionHashes(ctx, executionHashes, validExecPayloads, blindedBlocks)
if err != nil {
return nil, fmt.Errorf("could not fetch execution blocks with txs by hash %#x: %v", executionHashes, err)
}
// For each valid payload, we reconstruct the full block from it with the
// blinded block.
fullBlocks := make([]interfaces.SignedBeaconBlock, len(blindedBlocks))
for sliceIdx, realIdx := range validExecPayloads {
b := execBlocks[sliceIdx]
if b == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionHashes[sliceIdx])
}
header, err := blindedBlocks[realIdx].Block().Body().Execution()
if err != nil {
return nil, err
}
payload, err := fullPayloadFromExecutionBlock(header, b)
if err != nil {
return nil, err
}
fullBlock, err := blocks.BuildSignedBeaconBlockFromExecutionPayload(blindedBlocks[realIdx], payload.Proto())
if err != nil {
return nil, err
}
fullBlocks[realIdx] = fullBlock
return nil, err
}
// For blocks that are pre-merge we simply reconstruct them via an empty
// execution payload.
@@ -660,6 +616,95 @@ func (s *Service) ReconstructFullBellatrixBlockBatch(
return fullBlocks, nil
}
func (s *Service) retrievePayloadFromExecutionHash(ctx context.Context, executionBlockHash common.Hash, header interfaces.ExecutionData, version int) (interfaces.ExecutionData, error) {
if features.Get().EnableOptionalEngineMethods {
pBodies, err := s.GetPayloadBodiesByHash(ctx, []common.Hash{executionBlockHash})
if err != nil {
return nil, fmt.Errorf("could not get payload body by hash %#x: %v", executionBlockHash, err)
}
if len(pBodies) != 1 {
return nil, errors.Errorf("could not retrieve the correct number of payload bodies: wanted 1 but got %d", len(pBodies))
}
bdy := pBodies[0]
return fullPayloadFromPayloadBody(header, bdy, version)
}
executionBlock, err := s.ExecutionBlockByHash(ctx, executionBlockHash, true /* with txs */)
if err != nil {
return nil, fmt.Errorf("could not fetch execution block with txs by hash %#x: %v", executionBlockHash, err)
}
if executionBlock == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionBlockHash)
}
if bytes.Equal(executionBlock.Hash.Bytes(), []byte{}) {
return nil, EmptyBlockHash
}
executionBlock.Version = version
return fullPayloadFromExecutionBlock(header, executionBlock)
}
func (s *Service) retrievePayloadsFromExecutionHashes(
ctx context.Context,
executionHashes []common.Hash,
validExecPayloads []int,
blindedBlocks []interfaces.ReadOnlySignedBeaconBlock) ([]interfaces.SignedBeaconBlock, error) {
fullBlocks := make([]interfaces.SignedBeaconBlock, len(blindedBlocks))
var execBlocks []*pb.ExecutionBlock
var payloadBodies []*pb.ExecutionPayloadBodyV1
var err error
if features.Get().EnableOptionalEngineMethods {
payloadBodies, err = s.GetPayloadBodiesByHash(ctx, executionHashes)
if err != nil {
return nil, fmt.Errorf("could not fetch payload bodies by hash %#x: %v", executionHashes, err)
}
} else {
execBlocks, err = s.ExecutionBlocksByHashes(ctx, executionHashes, true /* with txs*/)
if err != nil {
return nil, fmt.Errorf("could not fetch execution blocks with txs by hash %#x: %v", executionHashes, err)
}
}
// For each valid payload, we reconstruct the full block from it with the
// blinded block.
for sliceIdx, realIdx := range validExecPayloads {
var payload interfaces.ExecutionData
if features.Get().EnableOptionalEngineMethods {
b := payloadBodies[sliceIdx]
if b == nil {
return nil, fmt.Errorf("received nil payload body for request by hash %#x", executionHashes[sliceIdx])
}
header, err := blindedBlocks[realIdx].Block().Body().Execution()
if err != nil {
return nil, err
}
payload, err = fullPayloadFromPayloadBody(header, b, blindedBlocks[realIdx].Version())
if err != nil {
return nil, err
}
} else {
b := execBlocks[sliceIdx]
if b == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionHashes[sliceIdx])
}
header, err := blindedBlocks[realIdx].Block().Body().Execution()
if err != nil {
return nil, err
}
payload, err = fullPayloadFromExecutionBlock(header, b)
if err != nil {
return nil, err
}
}
fullBlock, err := blocks.BuildSignedBeaconBlockFromExecutionPayload(blindedBlocks[realIdx], payload.Proto())
if err != nil {
return nil, err
}
fullBlocks[realIdx] = fullBlock
}
return fullBlocks, nil
}
func fullPayloadFromExecutionBlock(
header interfaces.ExecutionData, block *pb.ExecutionBlock,
) (interfaces.ExecutionData, error) {
@@ -721,6 +766,50 @@ func fullPayloadFromExecutionBlock(
}, 0) // We can't get the block value and don't care about the block value for this instance
}
func fullPayloadFromPayloadBody(
header interfaces.ExecutionData, body *pb.ExecutionPayloadBodyV1, bVersion int,
) (interfaces.ExecutionData, error) {
if header.IsNil() || body == nil {
return nil, errors.New("execution block and header cannot be nil")
}
if bVersion == version.Bellatrix {
return blocks.WrappedExecutionPayload(&pb.ExecutionPayload{
ParentHash: header.ParentHash(),
FeeRecipient: header.FeeRecipient(),
StateRoot: header.StateRoot(),
ReceiptsRoot: header.ReceiptsRoot(),
LogsBloom: header.LogsBloom(),
PrevRandao: header.PrevRandao(),
BlockNumber: header.BlockNumber(),
GasLimit: header.GasLimit(),
GasUsed: header.GasUsed(),
Timestamp: header.Timestamp(),
ExtraData: header.ExtraData(),
BaseFeePerGas: header.BaseFeePerGas(),
BlockHash: header.BlockHash(),
Transactions: body.Transactions,
})
}
return blocks.WrappedExecutionPayloadCapella(&pb.ExecutionPayloadCapella{
ParentHash: header.ParentHash(),
FeeRecipient: header.FeeRecipient(),
StateRoot: header.StateRoot(),
ReceiptsRoot: header.ReceiptsRoot(),
LogsBloom: header.LogsBloom(),
PrevRandao: header.PrevRandao(),
BlockNumber: header.BlockNumber(),
GasLimit: header.GasLimit(),
GasUsed: header.GasUsed(),
Timestamp: header.Timestamp(),
ExtraData: header.ExtraData(),
BaseFeePerGas: header.BaseFeePerGas(),
BlockHash: header.BlockHash(),
Transactions: body.Transactions,
Withdrawals: body.Withdrawals,
}, 0) // We can't get the block value and don't care about the block value for this instance
}
// Handles errors received from the RPC server according to the specification.
func handleRPCError(err error) error {
if err == nil {

View File

@@ -158,8 +158,7 @@ func TestConfigureNetwork_ConfigFile(t *testing.T) {
return cmd.LoadFlagsFromConfig(cliCtx, comFlags)
},
Action: func(cliCtx *cli.Context) error {
//TODO: https://github.com/urfave/cli/issues/1197 right now does not set flag
require.Equal(t, false, cliCtx.IsSet(cmd.BootstrapNode.Name))
require.Equal(t, true, cliCtx.IsSet(cmd.BootstrapNode.Name))
require.Equal(t, strings.Join([]string{"node1", "node2"}, ","),
strings.Join(cliCtx.StringSlice(cmd.BootstrapNode.Name), ","))

View File

@@ -34,14 +34,17 @@ func (s *Service) prepareForkChoiceAtts() {
ticker := slots.NewSlotTickerWithIntervals(time.Unix(int64(s.genesisTime), 0), intervals[:])
for {
select {
case <-ticker.C():
case slotInterval := <-ticker.C():
t := time.Now()
if err := s.batchForkChoiceAtts(s.ctx); err != nil {
log.WithError(err).Error("Could not prepare attestations for fork choice")
}
if slots.TimeIntoSlot(s.genesisTime) < intervals[1] {
batchForkChoiceAttsT1.Observe(float64(time.Since(t).Milliseconds()))
} else if slots.TimeIntoSlot(s.genesisTime) < intervals[2] {
switch slotInterval.Interval {
case 0:
duration := time.Since(t)
log.WithField("Duration", duration).Debug("aggregated unaggregated attestations")
batchForkChoiceAttsT1.Observe(float64(duration.Milliseconds()))
case 1:
batchForkChoiceAttsT2.Observe(float64(time.Since(t).Milliseconds()))
}
case <-s.ctx.Done():
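The ticker now reports which intra-slot interval fired, so the handler switches on the interval index instead of re-deriving it from how far into the slot the clock is. A standalone sketch of a ticker whose ticks carry the interval index; the types, offsets, and channel setup are illustrative, not the slots package API:

package main

import (
	"fmt"
	"time"
)

// tick carries the index of the intra-slot interval that fired.
type tick struct {
	Interval int
}

func main() {
	intervals := []time.Duration{0, 4 * time.Second, 8 * time.Second} // placeholder offsets
	c := make(chan tick)
	go func() {
		for i := range intervals {
			// A real ticker would sleep until genesis + slot*duration + intervals[i];
			// here the ticks are emitted back to back.
			c <- tick{Interval: i}
		}
		close(c)
	}()
	for t := range c {
		switch t.Interval {
		case 0:
			fmt.Println("first interval: record T1 duration")
		case 1:
			fmt.Println("second interval: record T2 duration")
		default:
			fmt.Println("later interval: no metric")
		}
	}
}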

View File

@@ -24,6 +24,7 @@ go_library(
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/eth/beacon:go_default_library",
"//beacon-chain/rpc/eth/builder:go_default_library",
"//beacon-chain/rpc/eth/debug:go_default_library",

View File

@@ -12,17 +12,18 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/apimiddleware",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/gateway/apimiddleware:go_default_library",
"//api/grpc:go_default_library",
"//beacon-chain/rpc/eth/events:go_default_library",
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/eth/v2:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_r3labs_sse//:go_default_library",
"@com_github_r3labs_sse_v2//:go_default_library",
],
)
@@ -35,6 +36,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/gateway/apimiddleware:go_default_library",
"//api/grpc:go_default_library",
"//beacon-chain/rpc/eth/events:go_default_library",
@@ -44,6 +46,6 @@ go_test(
"//testing/require:go_default_library",
"//time/slots:go_default_library",
"@com_github_gogo_protobuf//types:go_default_library",
"@com_github_r3labs_sse//:go_default_library",
"@com_github_r3labs_sse_v2//:go_default_library",
],
)

View File

@@ -12,18 +12,12 @@ import (
"strconv"
"strings"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/events"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/r3labs/sse"
)
const (
versionHeader = "Eth-Consensus-Version"
grpcVersionHeader = "Grpc-metadata-Eth-Consensus-Version"
jsonMediaType = "application/json"
octetStreamMediaType = "application/octet-stream"
"github.com/r3labs/sse/v2"
)
// match a number with optional decimals
@@ -223,7 +217,7 @@ func sszRequested(req *http.Request) (bool, error) {
for _, t := range types {
values := strings.Split(t, ";")
name := values[0]
if name != jsonMediaType && name != octetStreamMediaType {
if name != api.JsonMediaType && name != api.OctetStreamMediaType {
continue
}
// no params specified
@@ -248,7 +242,7 @@ func sszRequested(req *http.Request) (bool, error) {
}
}
return currentType == octetStreamMediaType, nil
return currentType == api.OctetStreamMediaType, nil
}
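The handler above decides between JSON and SSZ by comparing q weights in the Accept header, now using the shared api media-type constants. A simplified standalone sketch of that kind of q-value negotiation; it illustrates the idea only and is not Prysm's sszRequested implementation:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// pickMediaType returns the Accept-header media type with the highest q value,
// defaulting q to 1 when no q parameter is given.
func pickMediaType(accept string) string {
	best, bestQ := "", -1.0
	for _, part := range strings.Split(accept, ",") {
		fields := strings.Split(strings.TrimSpace(part), ";")
		name, q := fields[0], 1.0
		for _, f := range fields[1:] {
			kv := strings.SplitN(strings.TrimSpace(f), "=", 2)
			if len(kv) == 2 && kv[0] == "q" {
				if v, err := strconv.ParseFloat(kv[1], 64); err == nil {
					q = v
				}
			}
		}
		if q > bestQ {
			best, bestQ = name, q
		}
	}
	return best
}

func main() {
	fmt.Println(pickMediaType("application/json;q=0.9,application/octet-stream"))
	// application/octet-stream
}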
func sszPosted(req *http.Request) bool {
@@ -259,7 +253,7 @@ func sszPosted(req *http.Request) bool {
if len(ct) != 1 {
return false
}
return ct[0] == octetStreamMediaType
return ct[0] == api.OctetStreamMediaType
}
func prepareSSZRequestForProxying(m *apimiddleware.ApiProxyMiddleware, endpoint apimiddleware.Endpoint, req *http.Request) apimiddleware.ErrorJson {
@@ -278,10 +272,10 @@ func prepareSSZRequestForProxying(m *apimiddleware.ApiProxyMiddleware, endpoint
}
func prepareCustomHeaders(req *http.Request) {
ver := req.Header.Get(versionHeader)
ver := req.Header.Get(api.VersionHeader)
if ver != "" {
req.Header.Del(versionHeader)
req.Header.Add(grpcVersionHeader, ver)
req.Header.Del(api.VersionHeader)
req.Header.Add(grpc.WithPrefix(api.VersionHeader), ver)
}
}
@@ -297,7 +291,7 @@ func preparePostedSSZData(req *http.Request) apimiddleware.ErrorJson {
}
req.Body = io.NopCloser(bytes.NewBuffer(data))
req.ContentLength = int64(len(data))
req.Header.Set("Content-Type", jsonMediaType)
req.Header.Set("Content-Type", api.JsonMediaType)
return nil
}
@@ -325,9 +319,9 @@ func writeSSZResponseHeaderAndBody(grpcResp *http.Response, w http.ResponseWrite
}
}
w.Header().Set("Content-Length", strconv.Itoa(len(respSsz)))
w.Header().Set("Content-Type", octetStreamMediaType)
w.Header().Set("Content-Type", api.OctetStreamMediaType)
w.Header().Set("Content-Disposition", "attachment; filename="+fileName)
w.Header().Set(versionHeader, respVersion)
w.Header().Set(api.VersionHeader, respVersion)
if statusCodeHeader != "" {
code, err := strconv.Atoi(statusCodeHeader)
if err != nil {

View File

@@ -11,12 +11,13 @@ import (
"testing"
"time"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/events"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/r3labs/sse"
"github.com/r3labs/sse/v2"
)
type testSSZResponseJson struct {
@@ -45,7 +46,7 @@ func (t testSSZResponseJson) SSZFinalized() bool {
func TestSSZRequested(t *testing.T) {
t.Run("ssz_requested", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request.Header["Accept"] = []string{octetStreamMediaType}
request.Header["Accept"] = []string{api.OctetStreamMediaType}
result, err := sszRequested(request)
require.NoError(t, err)
assert.Equal(t, true, result)
@@ -53,7 +54,7 @@ func TestSSZRequested(t *testing.T) {
t.Run("ssz_content_type_first", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s,%s", octetStreamMediaType, jsonMediaType)}
request.Header["Accept"] = []string{fmt.Sprintf("%s,%s", api.OctetStreamMediaType, api.JsonMediaType)}
result, err := sszRequested(request)
require.NoError(t, err)
assert.Equal(t, true, result)
@@ -61,7 +62,7 @@ func TestSSZRequested(t *testing.T) {
t.Run("ssz_content_type_preferred_1", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s;q=0.9,%s", jsonMediaType, octetStreamMediaType)}
request.Header["Accept"] = []string{fmt.Sprintf("%s;q=0.9,%s", api.JsonMediaType, api.OctetStreamMediaType)}
result, err := sszRequested(request)
require.NoError(t, err)
assert.Equal(t, true, result)
@@ -69,7 +70,7 @@ func TestSSZRequested(t *testing.T) {
t.Run("ssz_content_type_preferred_2", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s;q=0.95,%s;q=0.9", octetStreamMediaType, jsonMediaType)}
request.Header["Accept"] = []string{fmt.Sprintf("%s;q=0.95,%s;q=0.9", api.OctetStreamMediaType, api.JsonMediaType)}
result, err := sszRequested(request)
require.NoError(t, err)
assert.Equal(t, true, result)
@@ -77,7 +78,7 @@ func TestSSZRequested(t *testing.T) {
t.Run("other_content_type_preferred", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s,%s;q=0.9", jsonMediaType, octetStreamMediaType)}
request.Header["Accept"] = []string{fmt.Sprintf("%s,%s;q=0.9", api.JsonMediaType, api.OctetStreamMediaType)}
result, err := sszRequested(request)
require.NoError(t, err)
assert.Equal(t, false, result)
@@ -85,7 +86,7 @@ func TestSSZRequested(t *testing.T) {
t.Run("other_params", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s,%s;q=0.9,otherparam=xyz", jsonMediaType, octetStreamMediaType)}
request.Header["Accept"] = []string{fmt.Sprintf("%s,%s;q=0.9,otherparam=xyz", api.JsonMediaType, api.OctetStreamMediaType)}
result, err := sszRequested(request)
require.NoError(t, err)
assert.Equal(t, false, result)
@@ -153,7 +154,7 @@ func TestPreparePostedSszData(t *testing.T) {
preparePostedSSZData(request)
assert.Equal(t, int64(19), request.ContentLength)
assert.Equal(t, jsonMediaType, request.Header.Get("Content-Type"))
assert.Equal(t, api.JsonMediaType, request.Header.Get("Content-Type"))
}
func TestSerializeMiddlewareResponseIntoSSZ(t *testing.T) {
@@ -209,12 +210,12 @@ func TestWriteSSZResponseHeaderAndBody(t *testing.T) {
v, ok = writer.Header()["Content-Type"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, octetStreamMediaType, v[0])
assert.Equal(t, api.OctetStreamMediaType, v[0])
v, ok = writer.Header()["Content-Disposition"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "attachment; filename=test.ssz", v[0])
v, ok = writer.Header()[versionHeader]
v, ok = writer.Header()[api.VersionHeader]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "version", v[0])

View File

@@ -63,51 +63,6 @@ func wrapFeeRecipientsArray(
return true, nil
}
// https://ethereum.github.io/beacon-APIs/#/Validator/registerValidator expects posting a top-level array.
// We make it more proto-friendly by wrapping it in a struct.
func wrapSignedValidatorRegistrationsArray(
endpoint *apimiddleware.Endpoint,
_ http.ResponseWriter,
req *http.Request,
) (apimiddleware.RunDefault, apimiddleware.ErrorJson) {
if _, ok := endpoint.PostRequest.(*SignedValidatorRegistrationsRequestJson); !ok {
return true, nil
}
registrations := make([]*SignedValidatorRegistrationJson, 0)
if err := json.NewDecoder(req.Body).Decode(&registrations); err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not decode body")
}
j := &SignedValidatorRegistrationsRequestJson{Registrations: registrations}
b, err := json.Marshal(j)
if err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not marshal wrapped body")
}
req.Body = io.NopCloser(bytes.NewReader(b))
return true, nil
}
// https://ethereum.github.io/beacon-apis/#/Beacon/submitPoolAttestations expects posting a top-level array.
// We make it more proto-friendly by wrapping it in a struct with a 'data' field.
func wrapAttestationsArray(
endpoint *apimiddleware.Endpoint,
_ http.ResponseWriter,
req *http.Request,
) (apimiddleware.RunDefault, apimiddleware.ErrorJson) {
if _, ok := endpoint.PostRequest.(*SubmitAttestationRequestJson); ok {
atts := make([]*AttestationJson, 0)
if err := json.NewDecoder(req.Body).Decode(&atts); err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not decode body")
}
j := &SubmitAttestationRequestJson{Data: atts}
b, err := json.Marshal(j)
if err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not marshal wrapped body")
}
req.Body = io.NopCloser(bytes.NewReader(b))
}
return true, nil
}
// Some endpoints e.g. https://ethereum.github.io/beacon-apis/#/Validator/getAttesterDuties expect posting a top-level array of validator indices.
// We make it more proto-friendly by wrapping it in a struct with an 'Index' field.
func wrapValidatorIndicesArray(
@@ -130,72 +85,6 @@ func wrapValidatorIndicesArray(
return true, nil
}
// https://ethereum.github.io/beacon-apis/#/Validator/publishAggregateAndProofs expects posting a top-level array.
// We make it more proto-friendly by wrapping it in a struct with a 'data' field.
func wrapSignedAggregateAndProofArray(
endpoint *apimiddleware.Endpoint,
_ http.ResponseWriter,
req *http.Request,
) (apimiddleware.RunDefault, apimiddleware.ErrorJson) {
if _, ok := endpoint.PostRequest.(*SubmitAggregateAndProofsRequestJson); ok {
data := make([]*SignedAggregateAttestationAndProofJson, 0)
if err := json.NewDecoder(req.Body).Decode(&data); err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not decode body")
}
j := &SubmitAggregateAndProofsRequestJson{Data: data}
b, err := json.Marshal(j)
if err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not marshal wrapped body")
}
req.Body = io.NopCloser(bytes.NewReader(b))
}
return true, nil
}
// https://ethereum.github.io/beacon-apis/#/Validator/prepareBeaconCommitteeSubnet expects posting a top-level array.
// We make it more proto-friendly by wrapping it in a struct with a 'data' field.
func wrapBeaconCommitteeSubscriptionsArray(
endpoint *apimiddleware.Endpoint,
_ http.ResponseWriter,
req *http.Request,
) (apimiddleware.RunDefault, apimiddleware.ErrorJson) {
if _, ok := endpoint.PostRequest.(*SubmitBeaconCommitteeSubscriptionsRequestJson); ok {
data := make([]*BeaconCommitteeSubscribeJson, 0)
if err := json.NewDecoder(req.Body).Decode(&data); err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not decode body")
}
j := &SubmitBeaconCommitteeSubscriptionsRequestJson{Data: data}
b, err := json.Marshal(j)
if err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not marshal wrapped body")
}
req.Body = io.NopCloser(bytes.NewReader(b))
}
return true, nil
}
// https://ethereum.github.io/beacon-APIs/#/Validator/prepareSyncCommitteeSubnets expects posting a top-level array.
// We make it more proto-friendly by wrapping it in a struct with a 'data' field.
func wrapSyncCommitteeSubscriptionsArray(
endpoint *apimiddleware.Endpoint,
_ http.ResponseWriter,
req *http.Request,
) (apimiddleware.RunDefault, apimiddleware.ErrorJson) {
if _, ok := endpoint.PostRequest.(*SubmitSyncCommitteeSubscriptionRequestJson); ok {
data := make([]*SyncCommitteeSubscriptionJson, 0)
if err := json.NewDecoder(req.Body).Decode(&data); err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not decode body")
}
j := &SubmitSyncCommitteeSubscriptionRequestJson{Data: data}
b, err := json.Marshal(j)
if err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not marshal wrapped body")
}
req.Body = io.NopCloser(bytes.NewReader(b))
}
return true, nil
}
// https://ethereum.github.io/beacon-APIs/#/Beacon/submitPoolSyncCommitteeSignatures expects posting a top-level array.
// We make it more proto-friendly by wrapping it in a struct with a 'data' field.
func wrapSyncCommitteeSignaturesArray(
@@ -218,28 +107,6 @@ func wrapSyncCommitteeSignaturesArray(
return true, nil
}
// https://ethereum.github.io/beacon-APIs/#/Validator/publishContributionAndProofs expects posting a top-level array.
// We make it more proto-friendly by wrapping it in a struct with a 'data' field.
func wrapSignedContributionAndProofsArray(
endpoint *apimiddleware.Endpoint,
_ http.ResponseWriter,
req *http.Request,
) (apimiddleware.RunDefault, apimiddleware.ErrorJson) {
if _, ok := endpoint.PostRequest.(*SubmitContributionAndProofsRequestJson); ok {
data := make([]*SignedContributionAndProofJson, 0)
if err := json.NewDecoder(req.Body).Decode(&data); err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not decode body")
}
j := &SubmitContributionAndProofsRequestJson{Data: data}
b, err := json.Marshal(j)
if err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not marshal wrapped body")
}
req.Body = io.NopCloser(bytes.NewReader(b))
}
return true, nil
}
type phase0PublishBlockRequestJson struct {
Phase0Block *BeaconBlockJson `json:"phase0_block"`
Signature string `json:"signature" hex:"true"`
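The hooks deleted above all follow one pattern: the Beacon API posts a top-level JSON array, and the middleware re-wraps it in a struct with a single field before proxying. A minimal standalone sketch of that wrapping step; the helper name, wrapper field, and sample payload are illustrative:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http/httptest"
)

// wrapBody decodes a top-level JSON array from the request body and re-encodes
// it under a single wrapper field, mirroring the removed hooks above.
func wrapBody(body io.Reader, field string) (io.ReadCloser, error) {
	var items []json.RawMessage
	if err := json.NewDecoder(body).Decode(&items); err != nil {
		return nil, err
	}
	wrapped, err := json.Marshal(map[string][]json.RawMessage{field: items})
	if err != nil {
		return nil, err
	}
	return io.NopCloser(bytes.NewReader(wrapped)), nil
}

func main() {
	req := httptest.NewRequest("POST", "http://example.invalid", bytes.NewBufferString(`[{"slot":"1"}]`))
	wrapped, err := wrapBody(req.Body, "data")
	if err != nil {
		panic(err)
	}
	out, _ := io.ReadAll(wrapped)
	fmt.Println(string(out)) // {"data":[{"slot":"1"}]}
}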

View File

@@ -19,46 +19,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
func TestWrapAttestationArray(t *testing.T) {
t.Run("ok", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitAttestationRequestJson{},
}
unwrappedAtts := []*AttestationJson{{AggregationBits: "1010"}}
unwrappedAttsJson, err := json.Marshal(unwrappedAtts)
require.NoError(t, err)
var body bytes.Buffer
_, err = body.Write(unwrappedAttsJson)
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapAttestationsArray(endpoint, nil, request)
require.Equal(t, true, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(true), runDefault)
wrappedAtts := &SubmitAttestationRequestJson{}
require.NoError(t, json.NewDecoder(request.Body).Decode(wrappedAtts))
require.Equal(t, 1, len(wrappedAtts.Data), "wrong number of wrapped items")
assert.Equal(t, "1010", wrappedAtts.Data[0].AggregationBits)
})
t.Run("invalid_body", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitAttestationRequestJson{},
}
var body bytes.Buffer
_, err := body.Write([]byte("invalid"))
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapAttestationsArray(endpoint, nil, request)
require.Equal(t, false, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(false), runDefault)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode body"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestWrapValidatorIndicesArray(t *testing.T) {
t.Run("ok", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
@@ -140,151 +100,6 @@ func TestWrapBLSChangesArray(t *testing.T) {
})
}
func TestWrapSignedAggregateAndProofArray(t *testing.T) {
t.Run("ok", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitAggregateAndProofsRequestJson{},
}
unwrappedAggs := []*SignedAggregateAttestationAndProofJson{{Signature: "sig"}}
unwrappedAggsJson, err := json.Marshal(unwrappedAggs)
require.NoError(t, err)
var body bytes.Buffer
_, err = body.Write(unwrappedAggsJson)
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapSignedAggregateAndProofArray(endpoint, nil, request)
require.Equal(t, true, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(true), runDefault)
wrappedAggs := &SubmitAggregateAndProofsRequestJson{}
require.NoError(t, json.NewDecoder(request.Body).Decode(wrappedAggs))
require.Equal(t, 1, len(wrappedAggs.Data), "wrong number of wrapped items")
assert.Equal(t, "sig", wrappedAggs.Data[0].Signature)
})
t.Run("invalid_body", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitAggregateAndProofsRequestJson{},
}
var body bytes.Buffer
_, err := body.Write([]byte("invalid"))
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapSignedAggregateAndProofArray(endpoint, nil, request)
require.Equal(t, false, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(false), runDefault)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode body"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestWrapBeaconCommitteeSubscriptionsArray(t *testing.T) {
t.Run("ok", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitBeaconCommitteeSubscriptionsRequestJson{},
}
unwrappedSubs := []*BeaconCommitteeSubscribeJson{{
ValidatorIndex: "1",
CommitteeIndex: "1",
CommitteesAtSlot: "1",
Slot: "1",
IsAggregator: true,
}}
unwrappedSubsJson, err := json.Marshal(unwrappedSubs)
require.NoError(t, err)
var body bytes.Buffer
_, err = body.Write(unwrappedSubsJson)
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapBeaconCommitteeSubscriptionsArray(endpoint, nil, request)
require.Equal(t, true, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(true), runDefault)
wrappedSubs := &SubmitBeaconCommitteeSubscriptionsRequestJson{}
require.NoError(t, json.NewDecoder(request.Body).Decode(wrappedSubs))
require.Equal(t, 1, len(wrappedSubs.Data), "wrong number of wrapped items")
assert.Equal(t, "1", wrappedSubs.Data[0].ValidatorIndex)
assert.Equal(t, "1", wrappedSubs.Data[0].CommitteeIndex)
assert.Equal(t, "1", wrappedSubs.Data[0].CommitteesAtSlot)
assert.Equal(t, "1", wrappedSubs.Data[0].Slot)
assert.Equal(t, true, wrappedSubs.Data[0].IsAggregator)
})
t.Run("invalid_body", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitBeaconCommitteeSubscriptionsRequestJson{},
}
var body bytes.Buffer
_, err := body.Write([]byte("invalid"))
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapBeaconCommitteeSubscriptionsArray(endpoint, nil, request)
require.Equal(t, false, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(false), runDefault)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode body"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestWrapSyncCommitteeSubscriptionsArray(t *testing.T) {
t.Run("ok", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitSyncCommitteeSubscriptionRequestJson{},
}
unwrappedSubs := []*SyncCommitteeSubscriptionJson{
{
ValidatorIndex: "1",
SyncCommitteeIndices: []string{"1", "2"},
UntilEpoch: "1",
},
{
ValidatorIndex: "2",
SyncCommitteeIndices: []string{"3", "4"},
UntilEpoch: "2",
},
}
unwrappedSubsJson, err := json.Marshal(unwrappedSubs)
require.NoError(t, err)
var body bytes.Buffer
_, err = body.Write(unwrappedSubsJson)
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapSyncCommitteeSubscriptionsArray(endpoint, nil, request)
require.Equal(t, true, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(true), runDefault)
wrappedSubs := &SubmitSyncCommitteeSubscriptionRequestJson{}
require.NoError(t, json.NewDecoder(request.Body).Decode(wrappedSubs))
require.Equal(t, 2, len(wrappedSubs.Data), "wrong number of wrapped items")
assert.Equal(t, "1", wrappedSubs.Data[0].ValidatorIndex)
require.Equal(t, 2, len(wrappedSubs.Data[0].SyncCommitteeIndices), "wrong number of committee indices")
assert.Equal(t, "1", wrappedSubs.Data[0].SyncCommitteeIndices[0])
assert.Equal(t, "2", wrappedSubs.Data[0].SyncCommitteeIndices[1])
assert.Equal(t, "1", wrappedSubs.Data[0].UntilEpoch)
})
t.Run("invalid_body", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitSyncCommitteeSubscriptionRequestJson{},
}
var body bytes.Buffer
_, err := body.Write([]byte("invalid"))
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapSyncCommitteeSubscriptionsArray(endpoint, nil, request)
require.Equal(t, false, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(false), runDefault)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode body"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestWrapSyncCommitteeSignaturesArray(t *testing.T) {
t.Run("ok", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
@@ -333,75 +148,6 @@ func TestWrapSyncCommitteeSignaturesArray(t *testing.T) {
})
}
func TestWrapSignedContributionAndProofsArray(t *testing.T) {
t.Run("ok", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitContributionAndProofsRequestJson{},
}
unwrapped := []*SignedContributionAndProofJson{
{
Message: &ContributionAndProofJson{
AggregatorIndex: "1",
Contribution: &SyncCommitteeContributionJson{
Slot: "1",
BeaconBlockRoot: "root",
SubcommitteeIndex: "1",
AggregationBits: "bits",
Signature: "sig",
},
SelectionProof: "proof",
},
Signature: "sig",
},
{
Message: &ContributionAndProofJson{},
Signature: "sig",
},
}
unwrappedJson, err := json.Marshal(unwrapped)
require.NoError(t, err)
var body bytes.Buffer
_, err = body.Write(unwrappedJson)
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapSignedContributionAndProofsArray(endpoint, nil, request)
require.Equal(t, true, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(true), runDefault)
wrapped := &SubmitContributionAndProofsRequestJson{}
require.NoError(t, json.NewDecoder(request.Body).Decode(wrapped))
require.Equal(t, 2, len(wrapped.Data), "wrong number of wrapped items")
assert.Equal(t, "sig", wrapped.Data[0].Signature)
require.NotNil(t, wrapped.Data[0].Message)
msg := wrapped.Data[0].Message
assert.Equal(t, "1", msg.AggregatorIndex)
assert.Equal(t, "proof", msg.SelectionProof)
require.NotNil(t, msg.Contribution)
assert.Equal(t, "1", msg.Contribution.Slot)
assert.Equal(t, "root", msg.Contribution.BeaconBlockRoot)
assert.Equal(t, "1", msg.Contribution.SubcommitteeIndex)
assert.Equal(t, "bits", msg.Contribution.AggregationBits)
assert.Equal(t, "sig", msg.Contribution.Signature)
})
t.Run("invalid_body", func(t *testing.T) {
endpoint := &apimiddleware.Endpoint{
PostRequest: &SubmitContributionAndProofsRequestJson{},
}
var body bytes.Buffer
_, err := body.Write([]byte("invalid"))
require.NoError(t, err)
request := httptest.NewRequest("POST", "http://foo.example", &body)
runDefault, errJson := wrapSignedContributionAndProofsArray(endpoint, nil, request)
require.Equal(t, false, errJson == nil)
assert.Equal(t, apimiddleware.RunDefault(false), runDefault)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode body"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestSetInitialPublishBlockPostRequest(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()

View File

@@ -35,7 +35,6 @@ func (_ *BeaconEndpointFactory) Paths() []string {
"/eth/v1/beacon/blocks/{block_id}/root",
"/eth/v1/beacon/blocks/{block_id}/attestations",
"/eth/v1/beacon/blinded_blocks/{block_id}",
"/eth/v1/beacon/pool/attestations",
"/eth/v1/beacon/pool/attester_slashings",
"/eth/v1/beacon/pool/proposer_slashings",
"/eth/v1/beacon/pool/voluntary_exits",
@@ -48,7 +47,6 @@ func (_ *BeaconEndpointFactory) Paths() []string {
"/eth/v1/node/peers/{peer_id}",
"/eth/v1/node/peer_count",
"/eth/v1/node/version",
"/eth/v1/node/syncing",
"/eth/v1/node/health",
"/eth/v1/debug/beacon/states/{state_id}",
"/eth/v2/debug/beacon/states/{state_id}",
@@ -65,15 +63,7 @@ func (_ *BeaconEndpointFactory) Paths() []string {
"/eth/v1/validator/blocks/{slot}",
"/eth/v2/validator/blocks/{slot}",
"/eth/v1/validator/blinded_blocks/{slot}",
"/eth/v1/validator/attestation_data",
"/eth/v1/validator/aggregate_attestation",
"/eth/v1/validator/beacon_committee_subscriptions",
"/eth/v1/validator/sync_committee_subscriptions",
"/eth/v1/validator/aggregate_and_proofs",
"/eth/v1/validator/sync_committee_contribution",
"/eth/v1/validator/contribution_and_proofs",
"/eth/v1/validator/prepare_beacon_proposer",
"/eth/v1/validator/register_validator",
"/eth/v1/validator/liveness/{epoch}",
}
}
@@ -146,14 +136,6 @@ func (_ *BeaconEndpointFactory) Create(path string) (*apimiddleware.Endpoint, er
OnPreSerializeMiddlewareResponseIntoJson: serializeBlindedBlock,
}
endpoint.CustomHandlers = []apimiddleware.CustomHandler{handleGetBlindedBeaconBlockSSZ}
case "/eth/v1/beacon/pool/attestations":
endpoint.RequestQueryParams = []apimiddleware.QueryParam{{Name: "slot"}, {Name: "committee_index"}}
endpoint.GetResponse = &AttestationsPoolResponseJson{}
endpoint.PostRequest = &SubmitAttestationRequestJson{}
endpoint.Err = &IndexedVerificationFailureErrorJson{}
endpoint.Hooks = apimiddleware.HookCollection{
OnPreDeserializeRequestBodyIntoContainer: wrapAttestationsArray,
}
case "/eth/v1/beacon/pool/attester_slashings":
endpoint.PostRequest = &AttesterSlashingJson{}
endpoint.GetResponse = &AttesterSlashingsPoolResponseJson{}
@@ -190,8 +172,6 @@ func (_ *BeaconEndpointFactory) Create(path string) (*apimiddleware.Endpoint, er
endpoint.GetResponse = &PeerCountResponseJson{}
case "/eth/v1/node/version":
endpoint.GetResponse = &VersionResponseJson{}
case "/eth/v1/node/syncing":
endpoint.GetResponse = &SyncingResponseJson{}
case "/eth/v1/node/health":
// Use default endpoint
case "/eth/v1/debug/beacon/states/{state_id}":
@@ -260,47 +240,11 @@ func (_ *BeaconEndpointFactory) Create(path string) (*apimiddleware.Endpoint, er
OnPreSerializeMiddlewareResponseIntoJson: serializeProducedBlindedBlock,
}
endpoint.CustomHandlers = []apimiddleware.CustomHandler{handleProduceBlindedBlockSSZ}
case "/eth/v1/validator/attestation_data":
endpoint.GetResponse = &ProduceAttestationDataResponseJson{}
endpoint.RequestQueryParams = []apimiddleware.QueryParam{{Name: "slot"}, {Name: "committee_index"}}
case "/eth/v1/validator/aggregate_attestation":
endpoint.GetResponse = &AggregateAttestationResponseJson{}
endpoint.RequestQueryParams = []apimiddleware.QueryParam{{Name: "attestation_data_root", Hex: true}, {Name: "slot"}}
case "/eth/v1/validator/beacon_committee_subscriptions":
endpoint.PostRequest = &SubmitBeaconCommitteeSubscriptionsRequestJson{}
endpoint.Err = &NodeSyncDetailsErrorJson{}
endpoint.Hooks = apimiddleware.HookCollection{
OnPreDeserializeRequestBodyIntoContainer: wrapBeaconCommitteeSubscriptionsArray,
}
case "/eth/v1/validator/sync_committee_subscriptions":
endpoint.PostRequest = &SubmitSyncCommitteeSubscriptionRequestJson{}
endpoint.Err = &NodeSyncDetailsErrorJson{}
endpoint.Hooks = apimiddleware.HookCollection{
OnPreDeserializeRequestBodyIntoContainer: wrapSyncCommitteeSubscriptionsArray,
}
case "/eth/v1/validator/aggregate_and_proofs":
endpoint.PostRequest = &SubmitAggregateAndProofsRequestJson{}
endpoint.Hooks = apimiddleware.HookCollection{
OnPreDeserializeRequestBodyIntoContainer: wrapSignedAggregateAndProofArray,
}
case "/eth/v1/validator/sync_committee_contribution":
endpoint.GetResponse = &ProduceSyncCommitteeContributionResponseJson{}
endpoint.RequestQueryParams = []apimiddleware.QueryParam{{Name: "slot"}, {Name: "subcommittee_index"}, {Name: "beacon_block_root", Hex: true}}
case "/eth/v1/validator/contribution_and_proofs":
endpoint.PostRequest = &SubmitContributionAndProofsRequestJson{}
endpoint.Hooks = apimiddleware.HookCollection{
OnPreDeserializeRequestBodyIntoContainer: wrapSignedContributionAndProofsArray,
}
case "/eth/v1/validator/prepare_beacon_proposer":
endpoint.PostRequest = &FeeRecipientsRequestJSON{}
endpoint.Hooks = apimiddleware.HookCollection{
OnPreDeserializeRequestBodyIntoContainer: wrapFeeRecipientsArray,
}
case "/eth/v1/validator/register_validator":
endpoint.PostRequest = &SignedValidatorRegistrationsRequestJson{}
endpoint.Hooks = apimiddleware.HookCollection{
OnPreDeserializeRequestBodyIntoContainer: wrapSignedValidatorRegistrationsArray,
}
case "/eth/v1/validator/liveness/{epoch}":
endpoint.PostRequest = &ValidatorIndicesJson{}
endpoint.PostResponse = &LivenessResponseJson{}

View File

@@ -4,7 +4,7 @@ import (
"strings"
"github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
)
@@ -143,14 +143,6 @@ type BlockAttestationsResponseJson struct {
Finalized bool `json:"finalized"`
}
type AttestationsPoolResponseJson struct {
Data []*AttestationJson `json:"data"`
}
type SubmitAttestationRequestJson struct {
Data []*AttestationJson `json:"data"`
}
type AttesterSlashingsPoolResponseJson struct {
Data []*AttesterSlashingJson `json:"data"`
}
@@ -199,7 +191,7 @@ type VersionResponseJson struct {
}
type SyncingResponseJson struct {
Data *helpers.SyncDetailsJson `json:"data"`
Data *shared.SyncDetails `json:"data"`
}
type BeaconStateResponseJson struct {
@@ -268,18 +260,10 @@ type ProduceBlindedBlockResponseJson struct {
Data *BlindedBeaconBlockContainerJson `json:"data"`
}
type ProduceAttestationDataResponseJson struct {
Data *AttestationDataJson `json:"data"`
}
type AggregateAttestationResponseJson struct {
Data *AttestationJson `json:"data"`
}
type SubmitBeaconCommitteeSubscriptionsRequestJson struct {
Data []*BeaconCommitteeSubscribeJson `json:"data"`
}
type BeaconCommitteeSubscribeJson struct {
ValidatorIndex string `json:"validator_index"`
CommitteeIndex string `json:"committee_index"`
@@ -288,28 +272,10 @@ type BeaconCommitteeSubscribeJson struct {
IsAggregator bool `json:"is_aggregator"`
}
type SubmitSyncCommitteeSubscriptionRequestJson struct {
Data []*SyncCommitteeSubscriptionJson `json:"data"`
}
type SyncCommitteeSubscriptionJson struct {
ValidatorIndex string `json:"validator_index"`
SyncCommitteeIndices []string `json:"sync_committee_indices"`
UntilEpoch string `json:"until_epoch"`
}
type SubmitAggregateAndProofsRequestJson struct {
Data []*SignedAggregateAttestationAndProofJson `json:"data"`
}
type ProduceSyncCommitteeContributionResponseJson struct {
Data *SyncCommitteeContributionJson `json:"data"`
}
type SubmitContributionAndProofsRequestJson struct {
Data []*SignedContributionAndProofJson `json:"data"`
}
type ForkChoiceNodeResponseJson struct {
Slot string `json:"slot"`
BlockRoot string `json:"block_root" hex:"true"`
@@ -1004,22 +970,6 @@ type SyncCommitteeContributionJson struct {
Signature string `json:"signature" hex:"true"`
}
type ValidatorRegistrationJson struct {
FeeRecipient string `json:"fee_recipient" hex:"true"`
GasLimit string `json:"gas_limit"`
Timestamp string `json:"timestamp"`
Pubkey string `json:"pubkey" hex:"true"`
}
type SignedValidatorRegistrationJson struct {
Message *ValidatorRegistrationJson `json:"message"`
Signature string `json:"signature" hex:"true"`
}
type SignedValidatorRegistrationsRequestJson struct {
Registrations []*SignedValidatorRegistrationJson `json:"registrations"`
}
type ForkChoiceNodeJson struct {
Slot string `json:"slot"`
BlockRoot string `json:"block_root" hex:"true"`
@@ -1219,7 +1169,7 @@ type SingleIndexedVerificationFailureJson struct {
type NodeSyncDetailsErrorJson struct {
apimiddleware.DefaultErrorJson
SyncDetails helpers.SyncDetailsJson `json:"sync_details"`
SyncDetails shared.SyncDetails `json:"sync_details"`
}
type EventErrorJson struct {


@@ -4,22 +4,40 @@ go_library(
name = "go_default_library",
srcs = [
"errors.go",
"log.go",
"service.go",
"validator.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)


@@ -0,0 +1,5 @@
package core
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "rpc/core")


@@ -0,0 +1,22 @@
package core
import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
opfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/synccommittee"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
)
type Service struct {
HeadFetcher blockchain.HeadFetcher
GenesisTimeFetcher blockchain.TimeFetcher
SyncChecker sync.Checker
Broadcaster p2p.Broadcaster
SyncCommitteePool synccommittee.Pool
OperationNotifier opfeed.Notifier
AttestationCache *cache.AttestationCache
StateGen stategen.StateManager
}


@@ -1,34 +1,71 @@
package core
import (
"bytes"
"context"
"fmt"
"sort"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
opfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/crypto/rand"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
func ComputeValidatorPerformance(
// AggregateBroadcastFailedError represents an error scenario where
// broadcasting an aggregate selection proof failed.
type AggregateBroadcastFailedError struct {
err error
}
// NewAggregateBroadcastFailedError creates a new error instance.
func NewAggregateBroadcastFailedError(err error) AggregateBroadcastFailedError {
return AggregateBroadcastFailedError{
err: err,
}
}
// Error returns the underlying error message.
func (e *AggregateBroadcastFailedError) Error() string {
return fmt.Sprintf("could not broadcast signed aggregated attestation: %s", e.err.Error())
}
// ComputeValidatorPerformance reports the validator's latest balance along with other important metrics on
// rewards and penalties throughout its lifecycle in the beacon chain.
func (s *Service) ComputeValidatorPerformance(
ctx context.Context,
req *ethpb.ValidatorPerformanceRequest,
headFetcher blockchain.HeadFetcher,
currSlot primitives.Slot,
) (*ethpb.ValidatorPerformanceResponse, *RpcError) {
headState, err := headFetcher.HeadState(ctx)
if s.SyncChecker.Syncing() {
return nil, &RpcError{Reason: Unavailable, Err: errors.New("Syncing to latest head, not ready to respond")}
}
headState, err := s.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, &RpcError{Err: errors.Wrap(err, "could not get head state"), Reason: Internal}
}
currSlot := s.GenesisTimeFetcher.CurrentSlot()
if currSlot > headState.Slot() {
headRoot, err := headFetcher.HeadRoot(ctx)
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
return nil, &RpcError{Err: errors.Wrap(err, "could not get head root"), Reason: Internal}
}
@@ -166,3 +203,256 @@ func ComputeValidatorPerformance(
InactivityScores: inactivityScores, // Only populated in Altair
}, nil
}
// SubmitSignedContributionAndProof is called by a sync committee aggregator
// to submit a signed contribution and proof object.
func (s *Service) SubmitSignedContributionAndProof(
ctx context.Context,
req *ethpb.SignedContributionAndProof,
) *RpcError {
errs, ctx := errgroup.WithContext(ctx)
// Broadcast and save the contribution into the pool in parallel; a failure in one should not affect the other.
errs.Go(func() error {
return s.Broadcaster.Broadcast(ctx, req)
})
if err := s.SyncCommitteePool.SaveSyncCommitteeContribution(req.Message.Contribution); err != nil {
return &RpcError{Err: err, Reason: Internal}
}
// Wait for p2p broadcast to complete and return the first error (if any)
err := errs.Wait()
if err != nil {
return &RpcError{Err: err, Reason: Internal}
}
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: opfeed.SyncCommitteeContributionReceived,
Data: &opfeed.SyncCommitteeContributionReceivedData{
Contribution: req,
},
})
return nil
}
// SubmitSignedAggregateSelectionProof verifies given aggregate and proofs and publishes them on appropriate gossipsub topic.
func (s *Service) SubmitSignedAggregateSelectionProof(
ctx context.Context,
req *ethpb.SignedAggregateSubmitRequest,
) *RpcError {
if req.SignedAggregateAndProof == nil || req.SignedAggregateAndProof.Message == nil ||
req.SignedAggregateAndProof.Message.Aggregate == nil || req.SignedAggregateAndProof.Message.Aggregate.Data == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
emptySig := make([]byte, fieldparams.BLSSignatureLength)
if bytes.Equal(req.SignedAggregateAndProof.Signature, emptySig) ||
bytes.Equal(req.SignedAggregateAndProof.Message.SelectionProof, emptySig) {
return &RpcError{Err: errors.New("signed signatures can't be zero hashes"), Reason: BadRequest}
}
// As a preventive measure, a beacon node shouldn't broadcast an attestation whose slot is out of range.
if err := helpers.ValidateAttestationTime(req.SignedAggregateAndProof.Message.Aggregate.Data.Slot,
s.GenesisTimeFetcher.GenesisTime(), params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
return &RpcError{Err: errors.New("attestation slot is no longer valid from current time"), Reason: BadRequest}
}
if err := s.Broadcaster.Broadcast(ctx, req.SignedAggregateAndProof); err != nil {
return &RpcError{Err: &AggregateBroadcastFailedError{err: err}, Reason: Internal}
}
log.WithFields(logrus.Fields{
"slot": req.SignedAggregateAndProof.Message.Aggregate.Data.Slot,
"committeeIndex": req.SignedAggregateAndProof.Message.Aggregate.Data.CommitteeIndex,
"validatorIndex": req.SignedAggregateAndProof.Message.AggregatorIndex,
"aggregatedCount": req.SignedAggregateAndProof.Message.Aggregate.AggregationBits.Count(),
}).Debug("Broadcasting aggregated attestation and proof")
return nil
}
// AggregatedSigAndAggregationBits returns the aggregated signature and aggregation bits
// associated with a particular set of sync committee messages.
func (s *Service) AggregatedSigAndAggregationBits(
ctx context.Context,
req *ethpb.AggregatedSigAndAggregationBitsRequest) ([]byte, []byte, error) {
subCommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
sigs := make([][]byte, 0, subCommitteeSize)
bits := ethpb.NewSyncCommitteeAggregationBits()
for _, msg := range req.Msgs {
if bytes.Equal(req.BlockRoot, msg.BlockRoot) {
headSyncCommitteeIndices, err := s.HeadFetcher.HeadSyncCommitteeIndices(ctx, msg.ValidatorIndex, req.Slot)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get sync subcommittee index")
}
for _, index := range headSyncCommitteeIndices {
i := uint64(index)
subnetIndex := i / subCommitteeSize
indexMod := i % subCommitteeSize
if subnetIndex == req.SubnetId && !bits.BitAt(indexMod) {
bits.SetBitAt(indexMod, true)
sigs = append(sigs, msg.Signature)
}
}
}
}
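// Default to the compressed BLS12-381 point at infinity (0xc0 followed by zero bytes),
// the canonical empty aggregate signature, when no matching messages were found.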
aggregatedSig := make([]byte, 96)
aggregatedSig[0] = 0xC0
if len(sigs) != 0 {
uncompressedSigs, err := bls.MultipleSignaturesFromBytes(sigs)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not decompress signatures")
}
aggregatedSig = bls.AggregateSignatures(uncompressedSigs).Marshal()
}
return aggregatedSig, bits, nil
}
// AssignValidatorToSubnet checks the status and pubkey of a particular validator
// to discern whether persistent subnets need to be registered for it.
func AssignValidatorToSubnet(pubkey []byte, status validator.ValidatorStatus) {
if status != validator.Active {
return
}
assignValidatorToSubnet(pubkey)
}
// AssignValidatorToSubnetProto checks the status and pubkey of a particular validator
// to discern whether persistent subnets need to be registered for it.
//
// It has a Proto suffix because the status is a protobuf type.
func AssignValidatorToSubnetProto(pubkey []byte, status ethpb.ValidatorStatus) {
if status != ethpb.ValidatorStatus_ACTIVE && status != ethpb.ValidatorStatus_EXITING {
return
}
assignValidatorToSubnet(pubkey)
}
func assignValidatorToSubnet(pubkey []byte) {
_, ok, expTime := cache.SubnetIDs.GetPersistentSubnets(pubkey)
if ok && expTime.After(prysmTime.Now()) {
return
}
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
var assignedIdxs []uint64
randGen := rand.NewGenerator()
for i := uint64(0); i < params.BeaconConfig().RandomSubnetsPerValidator; i++ {
assignedIdx := randGen.Intn(int(params.BeaconNetworkConfig().AttestationSubnetCount))
assignedIdxs = append(assignedIdxs, uint64(assignedIdx))
}
assignedDuration := uint64(randGen.Intn(int(params.BeaconConfig().EpochsPerRandomSubnetSubscription)))
assignedDuration += params.BeaconConfig().EpochsPerRandomSubnetSubscription
totalDuration := epochDuration * time.Duration(assignedDuration)
cache.SubnetIDs.AddPersistentCommittee(pubkey, assignedIdxs, totalDuration*time.Second)
}
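For a sense of scale, assuming mainnet presets (32 slots per epoch, 12 seconds per slot, EpochsPerRandomSubnetSubscription = 256), the random subscription above lasts between 256 and 511 epochs, i.e. roughly 27 to 54 hours. A small sketch reproducing that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed mainnet presets: 32 slots per epoch, 12 seconds per slot,
	// EpochsPerRandomSubnetSubscription = 256.
	epochSeconds := time.Duration(32 * 12) // 384, a bare count, as in the code above
	minEpochs, maxEpochs := time.Duration(256), time.Duration(511)
	fmt.Println(epochSeconds * minEpochs * time.Second) // 27h18m24s
	fmt.Println(epochSeconds * maxEpochs * time.Second) // 54h30m24s
}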
// GetAttestationData requests that the beacon node produce attestation data for
// the requested committee index and slot based on the node's current head.
func (s *Service) GetAttestationData(
ctx context.Context, req *ethpb.AttestationDataRequest,
) (*ethpb.AttestationData, *RpcError) {
if err := helpers.ValidateAttestationTime(
req.Slot,
s.GenesisTimeFetcher.GenesisTime(),
params.BeaconNetworkConfig().MaximumGossipClockDisparity,
); err != nil {
return nil, &RpcError{Reason: BadRequest, Err: errors.Errorf("invalid request: %v", err)}
}
res, err := s.AttestationCache.Get(ctx, req)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not retrieve data from attestation cache: %v", err)}
}
if res != nil {
res.CommitteeIndex = req.CommitteeIndex
return res, nil
}
if err := s.AttestationCache.MarkInProgress(req); err != nil {
if errors.Is(err, cache.ErrAlreadyInProgress) {
res, err := s.AttestationCache.Get(ctx, req)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not retrieve data from attestation cache: %v", err)}
}
if res == nil {
return nil, &RpcError{Reason: Internal, Err: errors.New("a request was in progress and resolved to nil")}
}
res.CommitteeIndex = req.CommitteeIndex
return res, nil
}
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not mark attestation as in-progress: %v", err)}
}
defer func() {
if err := s.AttestationCache.MarkNotInProgress(req); err != nil {
log.WithError(err).Error("could not mark attestation as not-in-progress")
}
}()
headState, err := s.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not retrieve head state: %v", err)}
}
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not retrieve head root: %v", err)}
}
// Handle the case where we receive an attestation request after a newer state/block has been processed.
if headState.Slot() > req.Slot {
headRoot, err = helpers.BlockRootAtSlot(headState, req.Slot)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not get historical head root: %v", err)}
}
headState, err = s.StateGen.StateByRoot(ctx, bytesutil.ToBytes32(headRoot))
if err != nil {
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not get historical head state: %v", err)}
}
}
if headState == nil || headState.IsNil() {
return nil, &RpcError{Reason: Internal, Err: errors.New("could not lookup parent state from head")}
}
if coreTime.CurrentEpoch(headState) < slots.ToEpoch(req.Slot) {
headState, err = transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, req.Slot)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not process slots up to %d: %v", req.Slot, err)}
}
}
targetEpoch := coreTime.CurrentEpoch(headState)
epochStartSlot, err := slots.EpochStart(targetEpoch)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not calculate epoch start: %v", err)}
}
var targetRoot []byte
if epochStartSlot == headState.Slot() {
targetRoot = headRoot
} else {
targetRoot, err = helpers.BlockRootAtSlot(headState, epochStartSlot)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: errors.Errorf("could not get target block for slot %d: %v", epochStartSlot, err)}
}
if bytesutil.ToBytes32(targetRoot) == params.BeaconConfig().ZeroHash {
targetRoot = headRoot
}
}
res = &ethpb.AttestationData{
Slot: req.Slot,
CommitteeIndex: req.CommitteeIndex,
BeaconBlockRoot: headRoot,
Source: headState.CurrentJustifiedCheckpoint(),
Target: &ethpb.Checkpoint{
Epoch: targetEpoch,
Root: targetRoot,
},
}
if err := s.AttestationCache.Put(ctx, req, res); err != nil {
log.WithError(err).Error("could not store attestation data in cache")
}
return res, nil
}
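The core functions above return *RpcError values tagged with a Reason (BadRequest, Unavailable, Internal) rather than transport-specific status codes, leaving each caller to translate them. A minimal sketch of one such translation for an HTTP caller; the Reason and RpcError types are redefined locally here and the mapping itself is only illustrative:

package main

import (
	"fmt"
	"net/http"
)

// Local stand-ins for the core package's error types; only their shape matters here.
type Reason int

const (
	Internal Reason = iota
	Unavailable
	BadRequest
)

type RpcError struct {
	Err    error
	Reason Reason
}

// httpStatus is a hypothetical mapping an HTTP handler might apply to a
// *RpcError returned by the core service.
func httpStatus(e *RpcError) int {
	switch e.Reason {
	case BadRequest:
		return http.StatusBadRequest
	case Unavailable:
		return http.StatusServiceUnavailable
	default:
		return http.StatusInternalServerError
	}
}

func main() {
	e := &RpcError{Err: fmt.Errorf("syncing"), Reason: Unavailable}
	fmt.Println(httpStatus(e)) // 503
}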


@@ -7,6 +7,7 @@ go_library(
"blocks.go",
"config.go",
"handlers.go",
"handlers_pool.go",
"log.go",
"pool.go",
"server.go",
@@ -18,6 +19,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/beacon",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//api:go_default_library",
"//api/grpc:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/altair:go_default_library",
@@ -36,6 +38,7 @@ go_library(
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/validator:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -48,11 +51,12 @@ go_library(
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/detect:go_default_library",
"//network:go_default_library",
"//network/forks:go_default_library",
"//network/http:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
@@ -67,6 +71,7 @@ go_library(
"@com_github_wealdtech_go_bytesutil//:go_default_library",
"@io_bazel_rules_go//proto/wkt:empty_go_proto",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
@@ -81,6 +86,7 @@ go_test(
"blinded_blocks_test.go",
"blocks_test.go",
"config_test.go",
"handlers_pool_test.go",
"handlers_test.go",
"init_test.go",
"pool_test.go",
@@ -91,6 +97,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/grpc:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/signing:go_default_library",
@@ -98,6 +105,7 @@ go_test(
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/blstoexec/mock:go_default_library",
@@ -105,6 +113,7 @@ go_test(
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/operations/voluntaryexits/mock:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/rpc/apimiddleware:go_default_library",
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/validator:go_default_library",
@@ -116,12 +125,14 @@ go_test(
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//network/forks:go_default_library",
"//network/http:go_default_library",
"//proto/eth/service:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
@@ -133,6 +144,7 @@ go_test(
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_golang_mock//gomock:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",


@@ -5,6 +5,7 @@ import (
"strings"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api"
rpchelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/validator"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -17,7 +18,9 @@ import (
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v4/proto/migration"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"go.opencensus.io/trace"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
@@ -38,7 +41,9 @@ func (bs *Server) GetBlindedBlock(ctx context.Context, req *ethpbv1.BlockRequest
if err != nil {
return nil, errors.Wrapf(err, "could not get block root")
}
if err := grpc.SetHeader(ctx, metadata.Pairs(api.VersionHeader, version.String(blk.Version()))); err != nil {
return nil, status.Errorf(codes.Internal, "Could not set "+api.VersionHeader+" header: %v", err)
}
result, err := getBlindedBlockPhase0(blk)
if result != nil {
result.Finalized = bs.FinalizationFetcher.IsFinalized(ctx, blkRoot)
@@ -196,11 +201,11 @@ func (bs *Server) SubmitBlindedBlockSSZ(ctx context.Context, req *ethpbv2.SSZCon
md, ok := metadata.FromIncomingContext(ctx)
if !ok {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read"+versionHeader+" header")
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read"+api.VersionHeader+" header")
}
ver := md.Get(versionHeader)
ver := md.Get(api.VersionHeader)
if len(ver) == 0 {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read"+versionHeader+" header")
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read"+api.VersionHeader+" header")
}
schedule := forks.NewOrderedSchedule(params.BeaconConfig())
forkVer, err := schedule.VersionForName(ver[0])


@@ -5,6 +5,8 @@ import (
"testing"
"github.com/golang/mock/gomock"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/prysmaticlabs/prysm/v4/api"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
mockSync "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/initial-sync/testing"
@@ -17,11 +19,13 @@ import (
mock2 "github.com/prysmaticlabs/prysm/v4/testing/mock"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"
)
func TestServer_GetBlindedBlock(t *testing.T) {
ctx := context.Background()
stream := &runtime.ServerTransportStream{}
ctx := grpc.NewContextWithServerTransportStream(context.Background(), stream)
t.Run("Phase 0", func(t *testing.T) {
b := util.NewBeaconBlock()
@@ -321,7 +325,7 @@ func TestServer_SubmitBlindedBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "phase0")
md.Set(api.VersionHeader, "phase0")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
@@ -342,7 +346,7 @@ func TestServer_SubmitBlindedBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "altair")
md.Set(api.VersionHeader, "altair")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
@@ -363,7 +367,7 @@ func TestServer_SubmitBlindedBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "bellatrix")
md.Set(api.VersionHeader, "bellatrix")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
@@ -381,7 +385,7 @@ func TestServer_SubmitBlindedBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "bellatrix")
md.Set(api.VersionHeader, "bellatrix")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NotNil(t, err)
@@ -402,7 +406,7 @@ func TestServer_SubmitBlindedBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "capella")
md.Set(api.VersionHeader, "capella")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
@@ -420,7 +424,7 @@ func TestServer_SubmitBlindedBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "capella")
md.Set(api.VersionHeader, "capella")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NotNil(t, err)


@@ -8,6 +8,7 @@ import (
"github.com/golang/protobuf/ptypes/empty"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filters"
rpchelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
@@ -25,16 +26,16 @@ import (
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v4/proto/migration"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/emptypb"
)
const versionHeader = "eth-consensus-version"
var (
errNilBlock = errors.New("nil block")
)
@@ -253,11 +254,11 @@ func (bs *Server) SubmitBlockSSZ(ctx context.Context, req *ethpbv2.SSZContainer)
md, ok := metadata.FromIncomingContext(ctx)
if !ok {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read "+versionHeader+" header")
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read "+api.VersionHeader+" header")
}
ver := md.Get(versionHeader)
ver := md.Get(api.VersionHeader)
if len(ver) == 0 {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read "+versionHeader+" header")
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read "+api.VersionHeader+" header")
}
schedule := forks.NewOrderedSchedule(params.BeaconConfig())
forkVer, err := schedule.VersionForName(ver[0])
@@ -424,6 +425,9 @@ func (bs *Server) GetBlockV2(ctx context.Context, req *ethpbv2.BlockRequestV2) (
if !errors.Is(err, consensus_types.ErrUnsupportedField) {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
}
if err := grpc.SetHeader(ctx, metadata.Pairs(api.VersionHeader, version.String(blk.Version()))); err != nil {
return nil, status.Errorf(codes.Internal, "Could not set "+api.VersionHeader+" header: %v", err)
}
result, err = getBlockAltair(blk)
if result != nil {
result.Finalized = bs.FinalizationFetcher.IsFinalized(ctx, blkRoot)


@@ -5,7 +5,9 @@ import (
"testing"
"github.com/golang/mock/gomock"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/api"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
dbTest "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
@@ -24,6 +26,7 @@ import (
mock2 "github.com/prysmaticlabs/prysm/v4/testing/mock"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"
)
@@ -444,7 +447,7 @@ func TestServer_SubmitBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "phase0")
md.Set(api.VersionHeader, "phase0")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
@@ -465,7 +468,7 @@ func TestServer_SubmitBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "altair")
md.Set(api.VersionHeader, "altair")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
@@ -486,7 +489,7 @@ func TestServer_SubmitBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "bellatrix")
md.Set(api.VersionHeader, "bellatrix")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
@@ -504,7 +507,7 @@ func TestServer_SubmitBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "bellatrix")
md.Set(api.VersionHeader, "bellatrix")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlockSSZ(sszCtx, blockReq)
assert.NotNil(t, err)
@@ -525,7 +528,7 @@ func TestServer_SubmitBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "capella")
md.Set(api.VersionHeader, "capella")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
@@ -543,7 +546,7 @@ func TestServer_SubmitBlockSSZ(t *testing.T) {
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "capella")
md.Set(api.VersionHeader, "capella")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlockSSZ(sszCtx, blockReq)
assert.NotNil(t, err)
@@ -579,8 +582,8 @@ func TestServer_GetBlock(t *testing.T) {
}
func TestServer_GetBlockV2(t *testing.T) {
ctx := context.Background()
stream := &runtime.ServerTransportStream{}
ctx := grpc.NewContextWithServerTransportStream(context.Background(), stream)
t.Run("Phase 0", func(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Slot = 123


@@ -11,10 +11,13 @@ import (
"github.com/go-playground/validator/v10"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/network"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v4/proto/migration"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
@@ -35,18 +38,153 @@ const (
// a `SignedBeaconBlock`. The broadcast behaviour may be adjusted via the `broadcast_validation`
// query parameter.
func (bs *Server) PublishBlindedBlockV2(w http.ResponseWriter, r *http.Request) {
if ok := bs.checkSync(r.Context(), w); !ok {
if shared.IsSyncing(r.Context(), w, bs.SyncChecker, bs.HeadFetcher, bs.TimeFetcher, bs.OptimisticModeFetcher) {
return
}
isSSZ, err := http2.SszRequested(r)
if isSSZ && err == nil {
publishBlindedBlockV2SSZ(bs, w, r)
} else {
publishBlindedBlockV2(bs, w, r)
}
}
func publishBlindedBlockV2SSZ(bs *Server, w http.ResponseWriter, r *http.Request) {
body, err := io.ReadAll(r.Body)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not read request body: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return
}
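// Try the newest supported fork first and fall back to older ones; the first SSZ type
// that unmarshals successfully determines how the body is interpreted.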
capellaBlock := &ethpbv2.SignedBlindedBeaconBlockCapella{}
if err := capellaBlock.UnmarshalSSZ(body); err == nil {
v1block, err := migration.BlindedCapellaToV1Alpha1SignedBlock(capellaBlock)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedCapella{
BlindedCapella: v1block,
},
}
if err = bs.validateBroadcast(r, genericBlock); err != nil {
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, genericBlock)
return
}
bellatrixBlock := &ethpbv2.SignedBlindedBeaconBlockBellatrix{}
if err := bellatrixBlock.UnmarshalSSZ(body); err == nil {
v1block, err := migration.BlindedBellatrixToV1Alpha1SignedBlock(bellatrixBlock)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedBellatrix{
BlindedBellatrix: v1block,
},
}
if err = bs.validateBroadcast(r, genericBlock); err != nil {
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, genericBlock)
return
}
// Blinded blocks are not supported before the Bellatrix hard fork.
altairBlock := &ethpbv2.SignedBeaconBlockAltair{}
if err := altairBlock.UnmarshalSSZ(body); err == nil {
v1block, err := migration.AltairToV1Alpha1SignedBlock(altairBlock)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Altair{
Altair: v1block,
},
}
if err = bs.validateBroadcast(r, genericBlock); err != nil {
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, genericBlock)
return
}
phase0Block := &ethpbv1.SignedBeaconBlock{}
if err := phase0Block.UnmarshalSSZ(body); err == nil {
v1block, err := migration.V1ToV1Alpha1SignedBlock(phase0Block)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Phase0{
Phase0: v1block,
},
}
if err = bs.validateBroadcast(r, genericBlock); err != nil {
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, genericBlock)
return
}
errJson := &http2.DefaultErrorJson{
Message: "Body does not represent a valid block type",
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
}
func publishBlindedBlockV2(bs *Server, w http.ResponseWriter, r *http.Request) {
validate := validator.New()
body, err := io.ReadAll(r.Body)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not read request body",
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
@@ -55,19 +193,19 @@ func (bs *Server) PublishBlindedBlockV2(w http.ResponseWriter, r *http.Request)
if err = validate.Struct(capellaBlock); err == nil {
consensusBlock, err := capellaBlock.ToGeneric()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
if err = bs.validateBroadcast(r, consensusBlock); err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, consensusBlock)
@@ -80,19 +218,19 @@ func (bs *Server) PublishBlindedBlockV2(w http.ResponseWriter, r *http.Request)
if err = validate.Struct(bellatrixBlock); err == nil {
consensusBlock, err := bellatrixBlock.ToGeneric()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
if err = bs.validateBroadcast(r, consensusBlock); err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, consensusBlock)
@@ -104,19 +242,19 @@ func (bs *Server) PublishBlindedBlockV2(w http.ResponseWriter, r *http.Request)
if err = validate.Struct(altairBlock); err == nil {
consensusBlock, err := altairBlock.ToGeneric()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
if err = bs.validateBroadcast(r, consensusBlock); err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, consensusBlock)
@@ -128,19 +266,19 @@ func (bs *Server) PublishBlindedBlockV2(w http.ResponseWriter, r *http.Request)
if err = validate.Struct(phase0Block); err == nil {
consensusBlock, err := phase0Block.ToGeneric()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
if err = bs.validateBroadcast(r, consensusBlock); err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, consensusBlock)
@@ -148,11 +286,11 @@ func (bs *Server) PublishBlindedBlockV2(w http.ResponseWriter, r *http.Request)
}
}
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Body does not represent a valid block type",
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
}
// PublishBlockV2 instructs the beacon node to broadcast a newly signed beacon block to the beacon network,
@@ -164,39 +302,180 @@ func (bs *Server) PublishBlindedBlockV2(w http.ResponseWriter, r *http.Request)
// successfully broadcast but failed integration. The broadcast behaviour may be adjusted via the
// `broadcast_validation` query parameter.
func (bs *Server) PublishBlockV2(w http.ResponseWriter, r *http.Request) {
if ok := bs.checkSync(r.Context(), w); !ok {
if shared.IsSyncing(r.Context(), w, bs.SyncChecker, bs.HeadFetcher, bs.TimeFetcher, bs.OptimisticModeFetcher) {
return
}
isSSZ, err := http2.SszRequested(r)
if isSSZ && err == nil {
publishBlockV2SSZ(bs, w, r)
} else {
publishBlockV2(bs, w, r)
}
}
func publishBlockV2SSZ(bs *Server, w http.ResponseWriter, r *http.Request) {
validate := validator.New()
body, err := io.ReadAll(r.Body)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not read request body",
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
capellaBlock := &ethpbv2.SignedBeaconBlockCapella{}
if err := capellaBlock.UnmarshalSSZ(body); err == nil {
if err = validate.Struct(capellaBlock); err == nil {
v1block, err := migration.CapellaToV1Alpha1SignedBlock(capellaBlock)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Capella{
Capella: v1block,
},
}
if err = bs.validateBroadcast(r, genericBlock); err != nil {
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, genericBlock)
return
}
}
bellatrixBlock := &ethpbv2.SignedBeaconBlockBellatrix{}
if err := bellatrixBlock.UnmarshalSSZ(body); err == nil {
if err = validate.Struct(bellatrixBlock); err == nil {
v1block, err := migration.BellatrixToV1Alpha1SignedBlock(bellatrixBlock)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Bellatrix{
Bellatrix: v1block,
},
}
if err = bs.validateBroadcast(r, genericBlock); err != nil {
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, genericBlock)
return
}
}
altairBlock := &ethpbv2.SignedBeaconBlockAltair{}
if err := altairBlock.UnmarshalSSZ(body); err == nil {
if err = validate.Struct(altairBlock); err == nil {
v1block, err := migration.AltairToV1Alpha1SignedBlock(altairBlock)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Altair{
Altair: v1block,
},
}
if err = bs.validateBroadcast(r, genericBlock); err != nil {
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, genericBlock)
return
}
}
phase0Block := &ethpbv1.SignedBeaconBlock{}
if err := phase0Block.UnmarshalSSZ(body); err == nil {
if err = validate.Struct(phase0Block); err == nil {
v1block, err := migration.V1ToV1Alpha1SignedBlock(phase0Block)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Phase0{
Phase0: v1block,
},
}
if err = bs.validateBroadcast(r, genericBlock); err != nil {
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, genericBlock)
return
}
}
errJson := &http2.DefaultErrorJson{
Message: "Body does not represent a valid block type",
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
}
func publishBlockV2(bs *Server, w http.ResponseWriter, r *http.Request) {
validate := validator.New()
body, err := io.ReadAll(r.Body)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not read request body",
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return
}
var capellaBlock *SignedBeaconBlockCapella
if err = unmarshalStrict(body, &capellaBlock); err == nil {
if err = validate.Struct(capellaBlock); err == nil {
consensusBlock, err := capellaBlock.ToGeneric()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
if err = bs.validateBroadcast(r, consensusBlock); err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, consensusBlock)
@@ -208,19 +487,19 @@ func (bs *Server) PublishBlockV2(w http.ResponseWriter, r *http.Request) {
if err = validate.Struct(bellatrixBlock); err == nil {
consensusBlock, err := bellatrixBlock.ToGeneric()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
if err = bs.validateBroadcast(r, consensusBlock); err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, consensusBlock)
@@ -232,19 +511,19 @@ func (bs *Server) PublishBlockV2(w http.ResponseWriter, r *http.Request) {
if err = validate.Struct(altairBlock); err == nil {
consensusBlock, err := altairBlock.ToGeneric()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
if err = bs.validateBroadcast(r, consensusBlock); err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, consensusBlock)
@@ -256,19 +535,19 @@ func (bs *Server) PublishBlockV2(w http.ResponseWriter, r *http.Request) {
if err = validate.Struct(phase0Block); err == nil {
consensusBlock, err := phase0Block.ToGeneric()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not decode request body into consensus block: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
if err = bs.validateBroadcast(r, consensusBlock); err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
bs.proposeBlock(r.Context(), w, consensusBlock)
@@ -276,21 +555,21 @@ func (bs *Server) PublishBlockV2(w http.ResponseWriter, r *http.Request) {
}
}
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Body does not represent a valid block type",
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
}
func (bs *Server) proposeBlock(ctx context.Context, w http.ResponseWriter, blk *eth.GenericSignedBeaconBlock) {
_, err := bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, blk)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
}
@@ -329,8 +608,13 @@ func (bs *Server) validateBroadcast(r *http.Request, blk *eth.GenericSignedBeaco
}
func (bs *Server) validateConsensus(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) error {
parentRoot := blk.Block().ParentRoot()
parentState, err := bs.Stater.State(ctx, parentRoot[:])
parentBlockRoot := blk.Block().ParentRoot()
parentBlock, err := bs.Blocker.Block(ctx, parentBlockRoot[:])
if err != nil {
return errors.Wrap(err, "could not get parent block")
}
parentStateRoot := parentBlock.Block().StateRoot()
parentState, err := bs.Stater.State(ctx, parentStateRoot[:])
if err != nil {
return errors.Wrap(err, "could not get parent state")
}
@@ -347,29 +631,3 @@ func (bs *Server) validateEquivocation(blk interfaces.ReadOnlyBeaconBlock) error
}
return nil
}
func (bs *Server) checkSync(ctx context.Context, w http.ResponseWriter) bool {
isSyncing, syncDetails, err := helpers.ValidateSyncHTTP(ctx, bs.SyncChecker, bs.HeadFetcher, bs.TimeFetcher, bs.OptimisticModeFetcher)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not check if node is syncing: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
return false
}
if isSyncing {
msg := "Beacon node is currently syncing and not serving request on that endpoint"
details, err := json.Marshal(syncDetails)
if err == nil {
msg += " Details: " + string(details)
}
errJson := &network.DefaultErrorJson{
Message: msg,
Code: http.StatusServiceUnavailable,
}
network.WriteError(w, errJson)
return false
}
return true
}


@@ -0,0 +1,170 @@
package beacon
import (
"encoding/json"
"fmt"
"net/http"
"strconv"
"strings"
"github.com/go-playground/validator/v10"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
corehelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
ethpbalpha "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// ListAttestations retrieves attestations known by the node but
// not necessarily incorporated into any block. Allows filtering by committee index or slot.
func (s *Server) ListAttestations(w http.ResponseWriter, r *http.Request) {
_, span := trace.StartSpan(r.Context(), "beacon.ListAttestations")
defer span.End()
ok, rawSlot, slot := shared.UintFromQuery(w, r, "slot")
if !ok {
return
}
ok, rawCommitteeIndex, committeeIndex := shared.UintFromQuery(w, r, "committee_index")
if !ok {
return
}
attestations := s.AttestationsPool.AggregatedAttestations()
unaggAtts, err := s.AttestationsPool.UnaggregatedAttestations()
if err != nil {
http2.HandleError(w, "Could not get unaggregated attestations: "+err.Error(), http.StatusInternalServerError)
return
}
attestations = append(attestations, unaggAtts...)
isEmptyReq := rawSlot == "" && rawCommitteeIndex == ""
if isEmptyReq {
allAtts := make([]*shared.Attestation, len(attestations))
for i, att := range attestations {
allAtts[i] = shared.AttestationFromConsensus(att)
}
http2.WriteJson(w, &ListAttestationsResponse{Data: allAtts})
return
}
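// When both filters are provided an attestation must match both; when only one is
// provided, matching that one alone is enough.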
bothDefined := rawSlot != "" && rawCommitteeIndex != ""
filteredAtts := make([]*shared.Attestation, 0, len(attestations))
for _, att := range attestations {
committeeIndexMatch := rawCommitteeIndex != "" && att.Data.CommitteeIndex == primitives.CommitteeIndex(committeeIndex)
slotMatch := rawSlot != "" && att.Data.Slot == primitives.Slot(slot)
shouldAppend := (bothDefined && committeeIndexMatch && slotMatch) || (!bothDefined && (committeeIndexMatch || slotMatch))
if shouldAppend {
filteredAtts = append(filteredAtts, shared.AttestationFromConsensus(att))
}
}
http2.WriteJson(w, &ListAttestationsResponse{Data: filteredAtts})
}
// SubmitAttestations submits an Attestation object to node. If the attestation passes all validation
// constraints, node MUST publish the attestation on an appropriate subnet.
func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttestations")
defer span.End()
if r.Body == http.NoBody {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
var req SubmitAttestationsRequest
if err := json.NewDecoder(r.Body).Decode(&req.Data); err != nil {
http2.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Data) == 0 {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
validate := validator.New()
if err := validate.Struct(req); err != nil {
http2.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
var validAttestations []*ethpbalpha.Attestation
var attFailures []*shared.IndexedVerificationFailure
for i, sourceAtt := range req.Data {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &shared.IndexedVerificationFailure{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &shared.IndexedVerificationFailure{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
continue
}
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note we can't send for aggregated att because we don't have selection proof.
if !corehelpers.IsAggregated(att) {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
}
validAttestations = append(validAttestations, att)
}
failedBroadcasts := make([]string, 0)
for i, att := range validAttestations {
// Determine subnet to broadcast attestation to
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
http2.HandleError(w, "Could not get head validator indices: "+err.Error(), http.StatusInternalServerError)
return
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
}
if corehelpers.IsAggregated(att) {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated att")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated att")
}
}
}
if len(failedBroadcasts) > 0 {
http2.HandleError(
w,
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
http.StatusInternalServerError,
)
return
}
if len(attFailures) > 0 {
failuresErr := &shared.IndexedVerificationFailureError{
Code: http.StatusBadRequest,
Message: "One or more attestations failed validation",
Failures: attFailures,
}
http2.WriteError(w, failuresErr)
}
}
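For completeness, a hedged client-side sketch of exercising this handler. It assumes the node serves the HTTP API locally on port 3500 and that the payload is a bare JSON array of attestations, matching the json.NewDecoder(r.Body).Decode(&req.Data) call above; submitAttestations is a hypothetical helper, not part of Prysm:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// submitAttestations POSTs a batch of attestations to the pool endpoint and
// reports any non-200 response. Hypothetical helper for illustration.
func submitAttestations(nodeURL string, attestationsJSON []byte) error {
	resp, err := http.Post(
		nodeURL+"/eth/v1/beacon/pool/attestations",
		"application/json",
		bytes.NewReader(attestationsJSON),
	)
	if err != nil {
		return err
	}
	defer func() { _ = resp.Body.Close() }()
	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body)
		return fmt.Errorf("submit failed: %s: %s", resp.Status, body)
	}
	return nil
}

func main() {
	// A bare JSON array of attestations; an empty array is rejected with 400
	// by the handler above, so fill it with real attestation objects.
	payload := []byte(`[]`)
	if err := submitAttestations("http://localhost:3500", payload); err != nil {
		fmt.Println(err)
	}
}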


@@ -0,0 +1,375 @@
package beacon
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/go-bitfield"
blockchainmock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/attestations"
p2pMock "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
ethpbv1alpha1 "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
)
func TestListAttestations(t *testing.T) {
att1 := &ethpbv1alpha1.Attestation{
AggregationBits: []byte{1, 10},
Data: &ethpbv1alpha1.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot1"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 10,
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature1"), 96),
}
att2 := &ethpbv1alpha1.Attestation{
AggregationBits: []byte{1, 10},
Data: &ethpbv1alpha1.AttestationData{
Slot: 1,
CommitteeIndex: 4,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 10,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature2"), 96),
}
att3 := &ethpbv1alpha1.Attestation{
AggregationBits: bitfield.NewBitlist(8),
Data: &ethpbv1alpha1.AttestationData{
Slot: 2,
CommitteeIndex: 2,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot3"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("sourceroot3"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 10,
Root: bytesutil.PadTo([]byte("targetroot3"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature3"), 96),
}
att4 := &ethpbv1alpha1.Attestation{
AggregationBits: bitfield.NewBitlist(8),
Data: &ethpbv1alpha1.AttestationData{
Slot: 2,
CommitteeIndex: 4,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot4"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("sourceroot4"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 10,
Root: bytesutil.PadTo([]byte("targetroot4"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature4"), 96),
}
s := &Server{
AttestationsPool: attestations.NewPool(),
}
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]*ethpbv1alpha1.Attestation{att1, att2}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]*ethpbv1alpha1.Attestation{att3, att4}))
t.Run("empty request", func(t *testing.T) {
url := "http://example.com"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.ListAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ListAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.Equal(t, 4, len(resp.Data))
})
t.Run("slot request", func(t *testing.T) {
url := "http://example.com?slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.ListAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ListAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.Equal(t, 2, len(resp.Data))
for _, a := range resp.Data {
assert.Equal(t, "2", a.Data.Slot)
}
})
t.Run("index request", func(t *testing.T) {
url := "http://example.com?committee_index=4"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.ListAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ListAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.Equal(t, 2, len(resp.Data))
for _, a := range resp.Data {
assert.Equal(t, "4", a.Data.CommitteeIndex)
}
})
t.Run("both slot + index request", func(t *testing.T) {
url := "http://example.com?slot=2&committee_index=4"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.ListAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ListAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.Equal(t, 1, len(resp.Data))
for _, a := range resp.Data {
assert.Equal(t, "2", a.Data.Slot)
assert.Equal(t, "4", a.Data.CommitteeIndex)
}
})
}
func TestServer_SubmitAttestations(t *testing.T) {
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig().Copy()
// Required for correct committee size calculation.
c.SlotsPerEpoch = 1
params.OverrideBeaconConfig(c)
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validators := []*ethpbv1alpha1.Validator{
{
PublicKey: keys[0].PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
bs, err := util.NewBeaconState(func(state *ethpbv1alpha1.BeaconState) error {
state.Validators = validators
state.Slot = 1
state.PreviousJustifiedCheckpoint = &ethpbv1alpha1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
}
return nil
})
require.NoError(t, err)
b := bitfield.NewBitlist(1)
b.SetBitAt(0, true)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
HeadFetcher: chainService,
ChainInfoFetcher: chainService,
OperationNotifier: &blockchainmock.MockOperationNotifier{},
}
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled)
assert.Equal(t, 1, len(broadcaster.BroadcastAttestations))
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].AggregationBits))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].Signature))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].Data.Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].Data.CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].Data.BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].Data.Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].Data.Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].Data.Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].Data.Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled)
assert.Equal(t, 2, len(broadcaster.BroadcastAttestations))
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &apimiddleware.IndexedVerificationFailureErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
}
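
The expected "0x03" aggregation bits and the unaggregated-pool count of 1 asserted above follow from SSZ bitlist encoding. A small sketch using the same go-bitfield package; the working assumption is that an attestation is treated as aggregated only when more than one participation bit is set.

```go
// Why "0x03" encodes a single-participant (unaggregated) attestation:
// bitfield.NewBitlist(1) allocates a 1-bit SSZ bitlist, and setting bit 0
// yields the byte 0b00000011 - bit 0 is the participation bit and the
// highest set bit is the bitlist length marker.
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/prysmaticlabs/go-bitfield"
)

func main() {
	bits := bitfield.NewBitlist(1)
	bits.SetBitAt(0, true)

	fmt.Println(hexutil.Encode(bits)) // 0x03
	fmt.Println(bits.Len())           // 1: bitlist length (committee size in the fixture)
	fmt.Println(bits.Count())         // 1: one participation bit set, so not aggregated
}
```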
const (
singleAtt = `[
{
"aggregation_bits": "0x03",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
]`
multipleAtts = `[
{
"aggregation_bits": "0x03",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0x736f75726365726f6f7431000000000000000000000000000000000000000000"
},
"target": {
"epoch": "0",
"root": "0x746172676574726f6f7431000000000000000000000000000000000000000000"
}
}
},
{
"aggregation_bits": "0x03",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0x736f75726365726f6f7431000000000000000000000000000000000000000000"
},
"target": {
"epoch": "0",
"root": "0x746172676574726f6f7432000000000000000000000000000000000000000000"
}
}
}
]`
// signature is invalid
invalidAtt = `[
{
"aggregation_bits": "0x03",
"signature": "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
]`
)
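
For completeness, a minimal client-side sketch of submitting the singleAtt payload above to a running node. The path /eth/v1/beacon/pool/attestations and the gateway port 3500 are assumptions (the standard Beacon API route and Prysm's default HTTP port), not something this diff shows; the handler only checks that the signature decodes to a valid BLS point, so the fixture signature is reused as-is.

```go
// Hedged sketch: POST the same single-attestation JSON used by the tests to a
// local beacon node. The URL below is an assumption (standard Beacon API path,
// default gateway port), not taken from this change.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

const attestationsJSON = `[
  {
    "aggregation_bits": "0x03",
    "signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
    "data": {
      "slot": "0",
      "index": "0",
      "beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
      "source": {"epoch": "0", "root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},
      "target": {"epoch": "0", "root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}
    }
  }
]`

func main() {
	resp, err := http.Post("http://localhost:3500/eth/v1/beacon/pool/attestations", "application/json", bytes.NewReader([]byte(attestationsJSON)))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// 200 means every attestation was accepted and broadcast; 400 carries the
	// indexed verification failures produced by the handler.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}
```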


@@ -2,6 +2,8 @@ package beacon
import (
"bytes"
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
@@ -9,10 +11,20 @@ import (
"github.com/golang/mock/gomock"
testing2 "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
mockSync "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/initial-sync/testing"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
mock2 "github.com/prysmaticlabs/prysm/v4/testing/mock"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"github.com/stretchr/testify/mock"
)
@@ -30,7 +42,7 @@ func TestPublishBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(phase0Block)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(phase0Block)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -47,7 +59,7 @@ func TestPublishBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(altairBlock)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(altairBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -64,7 +76,7 @@ func TestPublishBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(bellatrixBlock)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(bellatrixBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -81,7 +93,7 @@ func TestPublishBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(capellaBlock)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(capellaBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -92,7 +104,7 @@ func TestPublishBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(blindedBellatrixBlock)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(blindedBellatrixBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -108,7 +120,7 @@ func TestPublishBlockV2(t *testing.T) {
OptimisticModeFetcher: chainService,
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte("foo")))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte("foo")))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -117,9 +129,74 @@ func TestPublishBlockV2(t *testing.T) {
})
}
func TestPublishBlindedBlockV2(t *testing.T) {
func TestPublishBlockV2SSZ(t *testing.T) {
ctrl := gomock.NewController(t)
t.Run("Bellatrix", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_Bellatrix)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var bellablock SignedBeaconBlockBellatrix
err := json.Unmarshal([]byte(bellatrixBlock), &bellablock)
require.NoError(t, err)
genericBlock, err := bellablock.ToGeneric()
require.NoError(t, err)
sszvalue, err := genericBlock.GetBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(sszvalue))
request.Header.Set("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Capella", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_Capella)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var cblock SignedBeaconBlockCapella
err := json.Unmarshal([]byte(capellaBlock), &cblock)
require.NoError(t, err)
genericBlock, err := cblock.ToGeneric()
require.NoError(t, err)
sszvalue, err := genericBlock.GetCapella().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(sszvalue))
request.Header.Set("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("invalid block", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(blindedBellatrixBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "Body does not represent a valid block type"))
})
}
func TestPublishBlindedBlockV2(t *testing.T) {
ctrl := gomock.NewController(t)
t.Run("Phase 0", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
@@ -131,7 +208,7 @@ func TestPublishBlindedBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(phase0Block)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(phase0Block)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
@@ -148,7 +225,7 @@ func TestPublishBlindedBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(altairBlock)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(altairBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
@@ -165,7 +242,7 @@ func TestPublishBlindedBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(blindedBellatrixBlock)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(blindedBellatrixBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
@@ -182,7 +259,7 @@ func TestPublishBlindedBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(blindedCapellaBlock)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(blindedCapellaBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
@@ -193,7 +270,7 @@ func TestPublishBlindedBlockV2(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte(bellatrixBlock)))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(bellatrixBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
@@ -209,7 +286,7 @@ func TestPublishBlindedBlockV2(t *testing.T) {
OptimisticModeFetcher: chainService,
}
request := httptest.NewRequest("GET", "http://foo.example", bytes.NewReader([]byte("foo")))
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte("foo")))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
@@ -218,6 +295,129 @@ func TestPublishBlindedBlockV2(t *testing.T) {
})
}
func TestPublishBlindedBlockV2SSZ(t *testing.T) {
ctrl := gomock.NewController(t)
t.Run("Bellatrix", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedBellatrix)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var bellablock SignedBlindedBeaconBlockBellatrix
err := json.Unmarshal([]byte(blindedBellatrixBlock), &bellablock)
require.NoError(t, err)
genericBlock, err := bellablock.ToGeneric()
require.NoError(t, err)
sszvalue, err := genericBlock.GetBlindedBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(sszvalue))
request.Header.Set("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Capella", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedCapella)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var cblock SignedBlindedBeaconBlockCapella
err := json.Unmarshal([]byte(blindedCapellaBlock), &cblock)
require.NoError(t, err)
genericBlock, err := cblock.ToGeneric()
require.NoError(t, err)
sszvalue, err := genericBlock.GetBlindedCapella().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(sszvalue))
request.Header.Set("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("invalid block", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(bellatrixBlock)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "Body does not represent a valid block type"))
})
}
func TestValidateConsensus(t *testing.T) {
ctx := context.Background()
parentState, privs := util.DeterministicGenesisState(t, params.MinimalSpecConfig().MinGenesisActiveValidatorCount)
parentBlock, err := util.GenerateFullBlock(parentState, privs, util.DefaultBlockGenConfig(), parentState.Slot())
require.NoError(t, err)
parentSbb, err := blocks.NewSignedBeaconBlock(parentBlock)
require.NoError(t, err)
st, err := transition.ExecuteStateTransition(ctx, parentState, parentSbb)
require.NoError(t, err)
block, err := util.GenerateFullBlock(st, privs, util.DefaultBlockGenConfig(), st.Slot())
require.NoError(t, err)
sbb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
parentRoot, err := parentSbb.Block().HashTreeRoot()
require.NoError(t, err)
server := &Server{
Blocker: &testutil.MockBlocker{RootBlockMap: map[[32]byte]interfaces.ReadOnlySignedBeaconBlock{parentRoot: parentSbb}},
Stater: &testutil.MockStater{StatesByRoot: map[[32]byte]state.BeaconState{bytesutil.ToBytes32(parentBlock.Block.StateRoot): parentState}},
}
require.NoError(t, server.validateConsensus(ctx, sbb))
}
func TestValidateEquivocation(t *testing.T) {
t.Run("ok", func(t *testing.T) {
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(10))
fc := doublylinkedtree.New()
require.NoError(t, fc.InsertNode(context.Background(), st, bytesutil.ToBytes32([]byte("root"))))
server := &Server{
ForkchoiceFetcher: &testing2.ChainService{ForkChoiceStore: fc},
}
blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
blk.SetSlot(st.Slot() + 1)
require.NoError(t, server.validateEquivocation(blk.Block()))
})
t.Run("block already exists", func(t *testing.T) {
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(10))
fc := doublylinkedtree.New()
require.NoError(t, fc.InsertNode(context.Background(), st, bytesutil.ToBytes32([]byte("root"))))
server := &Server{
ForkchoiceFetcher: &testing2.ChainService{ForkChoiceStore: fc},
}
blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
blk.SetSlot(st.Slot())
assert.ErrorContains(t, "already exists", server.validateEquivocation(blk.Block()))
})
}
const (
phase0Block = `{
"message": {
@@ -668,7 +868,8 @@ const (
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
],
"data": {
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a",
@@ -688,7 +889,7 @@ const (
}
],
"sync_aggregate": {
"sync_committee_bits": "0x01",
"sync_committee_bits": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"sync_committee_signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"execution_payload": {
@@ -846,7 +1047,8 @@ const (
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
],
"data": {
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a",
@@ -866,7 +1068,7 @@ const (
}
],
"sync_aggregate": {
"sync_committee_bits": "0x01",
"sync_committee_bits": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"sync_committee_signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"execution_payload_header": {
@@ -1022,7 +1224,8 @@ const (
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
],
"data": {
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a",
@@ -1042,7 +1245,7 @@ const (
}
],
"sync_aggregate": {
"sync_committee_bits": "0x01",
"sync_committee_bits": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"sync_committee_signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"execution_payload": {
@@ -1218,7 +1421,8 @@ const (
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
],
"data": {
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a",
@@ -1238,7 +1442,7 @@ const (
}
],
"sync_aggregate": {
"sync_committee_bits": "0x01",
"sync_committee_bits": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"sync_committee_signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"execution_payload_header": {

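The sync_committee_bits fixture change above, from "0x01" to a 64-byte zero value, reflects that the field is a fixed-size bitvector: assuming the mainnet SYNC_COMMITTEE_SIZE of 512, it must decode to exactly 64 bytes. A quick check with the same go-bitfield package:

```go
// Quick check of the fixture length above: a 512-member sync committee needs a
// 512-bit vector, i.e. 64 bytes, or "0x" plus 128 hex digits.
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/prysmaticlabs/go-bitfield"
)

func main() {
	bits := bitfield.NewBitvector512()
	fmt.Println(len(bits))                 // 64 bytes
	fmt.Println(len(hexutil.Encode(bits))) // 130 characters: "0x" + 128 hex digits
}
```
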

@@ -8,11 +8,9 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
corehelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v4/proto/migration"
@@ -27,121 +25,6 @@ import (
const broadcastBLSChangesRateLimit = 128
// ListPoolAttestations retrieves attestations known by the node but
// not necessarily incorporated into any block. Allows filtering by committee index or slot.
func (bs *Server) ListPoolAttestations(ctx context.Context, req *ethpbv1.AttestationsPoolRequest) (*ethpbv1.AttestationsPoolResponse, error) {
ctx, span := trace.StartSpan(ctx, "beacon.ListPoolAttestations")
defer span.End()
attestations := bs.AttestationsPool.AggregatedAttestations()
unaggAtts, err := bs.AttestationsPool.UnaggregatedAttestations()
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get unaggregated attestations: %v", err)
}
attestations = append(attestations, unaggAtts...)
isEmptyReq := req.Slot == nil && req.CommitteeIndex == nil
if isEmptyReq {
allAtts := make([]*ethpbv1.Attestation, len(attestations))
for i, att := range attestations {
allAtts[i] = migration.V1Alpha1AttestationToV1(att)
}
return &ethpbv1.AttestationsPoolResponse{Data: allAtts}, nil
}
filteredAtts := make([]*ethpbv1.Attestation, 0, len(attestations))
for _, att := range attestations {
bothDefined := req.Slot != nil && req.CommitteeIndex != nil
committeeIndexMatch := req.CommitteeIndex != nil && att.Data.CommitteeIndex == *req.CommitteeIndex
slotMatch := req.Slot != nil && att.Data.Slot == *req.Slot
if bothDefined && committeeIndexMatch && slotMatch {
filteredAtts = append(filteredAtts, migration.V1Alpha1AttestationToV1(att))
} else if !bothDefined && (committeeIndexMatch || slotMatch) {
filteredAtts = append(filteredAtts, migration.V1Alpha1AttestationToV1(att))
}
}
return &ethpbv1.AttestationsPoolResponse{Data: filteredAtts}, nil
}
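
The filter logic removed above (and re-implemented by the new HTTP handler) is easy to misread: when both slot and committee_index are supplied, an attestation must match both, while a single supplied filter only needs to match on its own. A distilled sketch of that predicate follows; the function name and plain uint64 parameters are illustrative, not the real API, which uses primitives.Slot and primitives.CommitteeIndex.

```go
// Distilled, illustrative version of the pool-filtering predicate above.
package main

import "fmt"

func matchesFilter(attSlot, attIndex uint64, slot, committeeIndex *uint64) bool {
	switch {
	case slot != nil && committeeIndex != nil:
		// Both filters supplied: the attestation must match both.
		return attSlot == *slot && attIndex == *committeeIndex
	case slot != nil:
		return attSlot == *slot
	case committeeIndex != nil:
		return attIndex == *committeeIndex
	default:
		// No filters supplied: every pooled attestation is returned.
		return true
	}
}

func main() {
	slot, index := uint64(2), uint64(4)
	fmt.Println(matchesFilter(2, 4, &slot, &index)) // true: both match
	fmt.Println(matchesFilter(2, 1, &slot, &index)) // false: committee index differs
	fmt.Println(matchesFilter(2, 1, &slot, nil))    // true: only the slot is filtered
}
```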
// SubmitAttestations submits Attestation object to node. If attestation passes all validation
// constraints, node MUST publish attestation on appropriate subnet.
func (bs *Server) SubmitAttestations(ctx context.Context, req *ethpbv1.SubmitAttestationsRequest) (*emptypb.Empty, error) {
ctx, span := trace.StartSpan(ctx, "beacon.SubmitAttestation")
defer span.End()
var validAttestations []*ethpbalpha.Attestation
var attFailures []*helpers.SingleIndexedVerificationFailure
for i, sourceAtt := range req.Data {
att := migration.V1AttToV1Alpha1(sourceAtt)
if _, err := bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &helpers.SingleIndexedVerificationFailure{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
continue
}
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note we can't send for aggregated att because we don't have selection proof.
if !corehelpers.IsAggregated(att) {
bs.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
}
validAttestations = append(validAttestations, att)
}
broadcastFailed := false
for _, att := range validAttestations {
// Determine subnet to broadcast attestation to
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := bs.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
return nil, err
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
if err := bs.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
broadcastFailed = true
}
if corehelpers.IsAggregated(att) {
if err := bs.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated att")
}
} else {
if err := bs.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated att")
}
}
}
if broadcastFailed {
return nil, status.Errorf(
codes.Internal,
"Could not publish one or more attestations. Some attestations could be published successfully.")
}
if len(attFailures) > 0 {
failuresContainer := &helpers.IndexedVerificationFailure{Failures: attFailures}
err := grpc.AppendCustomErrorHeader(ctx, failuresContainer)
if err != nil {
return nil, status.Errorf(
codes.InvalidArgument,
"One or more attestations failed validation. Could not prepare attestation failure information: %v",
err,
)
}
return nil, status.Errorf(codes.InvalidArgument, "One or more attestations failed validation")
}
return &emptypb.Empty{}, nil
}
// ListPoolAttesterSlashings retrieves attester slashings known by the node but
// not necessarily incorporated into any block.
func (bs *Server) ListPoolAttesterSlashings(ctx context.Context, _ *emptypb.Empty) (*ethpbv1.AttesterSlashingsPoolResponse, error) {


@@ -2,14 +2,9 @@ package beacon
import (
"context"
"reflect"
"strings"
"testing"
"time"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/prysmaticlabs/go-bitfield"
grpcutil "github.com/prysmaticlabs/prysm/v4/api/grpc"
blockchainmock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
prysmtime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
@@ -36,172 +31,9 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/emptypb"
)
func TestListPoolAttestations(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
att1 := &ethpbv1alpha1.Attestation{
AggregationBits: []byte{1, 10},
Data: &ethpbv1alpha1.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot1"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 10,
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature1"), 96),
}
att2 := &ethpbv1alpha1.Attestation{
AggregationBits: []byte{4, 40},
Data: &ethpbv1alpha1.AttestationData{
Slot: 4,
CommitteeIndex: 4,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot4"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 4,
Root: bytesutil.PadTo([]byte("sourceroot4"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 40,
Root: bytesutil.PadTo([]byte("targetroot4"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature4"), 96),
}
att3 := &ethpbv1alpha1.Attestation{
AggregationBits: []byte{2, 20},
Data: &ethpbv1alpha1.AttestationData{
Slot: 2,
CommitteeIndex: 2,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 2,
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 20,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature2"), 96),
}
att4 := &ethpbv1alpha1.Attestation{
AggregationBits: bitfield.NewBitlist(8),
Data: &ethpbv1alpha1.AttestationData{
Slot: 4,
CommitteeIndex: 4,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 2,
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 20,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature2"), 96),
}
att5 := &ethpbv1alpha1.Attestation{
AggregationBits: bitfield.NewBitlist(8),
Data: &ethpbv1alpha1.AttestationData{
Slot: 2,
CommitteeIndex: 4,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot1"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 2,
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 20,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature1"), 96),
}
att6 := &ethpbv1alpha1.Attestation{
AggregationBits: bitfield.NewBitlist(8),
Data: &ethpbv1alpha1.AttestationData{
Slot: 2,
CommitteeIndex: 4,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 2,
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 20,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature2"), 96),
}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
AttestationsPool: attestations.NewPool(),
}
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]*ethpbv1alpha1.Attestation{att1, att2, att3}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]*ethpbv1alpha1.Attestation{att4, att5, att6}))
t.Run("empty request", func(t *testing.T) {
req := &ethpbv1.AttestationsPoolRequest{}
resp, err := s.ListPoolAttestations(context.Background(), req)
require.NoError(t, err)
require.Equal(t, 6, len(resp.Data))
})
t.Run("slot request", func(t *testing.T) {
slot := primitives.Slot(2)
req := &ethpbv1.AttestationsPoolRequest{
Slot: &slot,
}
resp, err := s.ListPoolAttestations(context.Background(), req)
require.NoError(t, err)
require.Equal(t, 3, len(resp.Data))
for _, datum := range resp.Data {
assert.DeepEqual(t, datum.Data.Slot, slot)
}
})
t.Run("index request", func(t *testing.T) {
index := primitives.CommitteeIndex(4)
req := &ethpbv1.AttestationsPoolRequest{
CommitteeIndex: &index,
}
resp, err := s.ListPoolAttestations(context.Background(), req)
require.NoError(t, err)
require.Equal(t, 4, len(resp.Data))
for _, datum := range resp.Data {
assert.DeepEqual(t, datum.Data.Index, index)
}
})
t.Run("both slot + index request", func(t *testing.T) {
slot := primitives.Slot(2)
index := primitives.CommitteeIndex(4)
req := &ethpbv1.AttestationsPoolRequest{
Slot: &slot,
CommitteeIndex: &index,
}
resp, err := s.ListPoolAttestations(context.Background(), req)
require.NoError(t, err)
require.Equal(t, 2, len(resp.Data))
for _, datum := range resp.Data {
assert.DeepEqual(t, datum.Data.Index, index)
assert.DeepEqual(t, datum.Data.Slot, slot)
}
})
}
func TestListPoolAttesterSlashings(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
@@ -383,6 +215,9 @@ func TestListPoolVoluntaryExits(t *testing.T) {
func TestSubmitAttesterSlashing_Ok(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validator := &ethpbv1alpha1.Validator{
@@ -460,6 +295,9 @@ func TestSubmitAttesterSlashing_Ok(t *testing.T) {
func TestSubmitAttesterSlashing_AcrossFork(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.AltairForkEpoch = 1
@@ -536,6 +374,10 @@ func TestSubmitAttesterSlashing_AcrossFork(t *testing.T) {
func TestSubmitAttesterSlashing_InvalidSlashing(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
bs, err := util.NewBeaconState()
require.NoError(t, err)
@@ -577,6 +419,9 @@ func TestSubmitAttesterSlashing_InvalidSlashing(t *testing.T) {
func TestSubmitProposerSlashing_Ok(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validator := &ethpbv1alpha1.Validator{
@@ -647,6 +492,9 @@ func TestSubmitProposerSlashing_Ok(t *testing.T) {
func TestSubmitProposerSlashing_AcrossFork(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.AltairForkEpoch = 1
@@ -715,6 +563,10 @@ func TestSubmitProposerSlashing_AcrossFork(t *testing.T) {
func TestSubmitProposerSlashing_InvalidSlashing(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
bs, err := util.NewBeaconState()
require.NoError(t, err)
@@ -749,6 +601,9 @@ func TestSubmitProposerSlashing_InvalidSlashing(t *testing.T) {
func TestSubmitVoluntaryExit_Ok(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validator := &ethpbv1alpha1.Validator{
@@ -796,6 +651,9 @@ func TestSubmitVoluntaryExit_Ok(t *testing.T) {
func TestSubmitVoluntaryExit_AcrossFork(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.AltairForkEpoch = params.BeaconConfig().ShardCommitteePeriod + 1
@@ -837,6 +695,9 @@ func TestSubmitVoluntaryExit_AcrossFork(t *testing.T) {
func TestSubmitVoluntaryExit_InvalidValidatorIndex(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validator := &ethpbv1alpha1.Validator{
@@ -872,6 +733,9 @@ func TestSubmitVoluntaryExit_InvalidValidatorIndex(t *testing.T) {
func TestSubmitVoluntaryExit_InvalidExit(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validator := &ethpbv1alpha1.Validator{
@@ -904,290 +768,6 @@ func TestSubmitVoluntaryExit_InvalidExit(t *testing.T) {
assert.Equal(t, false, broadcaster.BroadcastCalled)
}
func TestServer_SubmitAttestations_Ok(t *testing.T) {
ctx := context.Background()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig().Copy()
// Required for correct committee size calculation.
c.SlotsPerEpoch = 1
params.OverrideBeaconConfig(c)
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validators := []*ethpbv1alpha1.Validator{
{
PublicKey: keys[0].PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
bs, err := util.NewBeaconState(func(state *ethpbv1alpha1.BeaconState) error {
state.Validators = validators
state.Slot = 1
state.PreviousJustifiedCheckpoint = &ethpbv1alpha1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
}
return nil
})
require.NoError(t, err)
b := bitfield.NewBitlist(1)
b.SetBitAt(0, true)
sourceCheckpoint := &ethpbv1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
}
att1 := &ethpbv1.Attestation{
AggregationBits: b,
Data: &ethpbv1.AttestationData{
Slot: 0,
Index: 0,
BeaconBlockRoot: bytesutil.PadTo([]byte("beaconblockroot1"), 32),
Source: sourceCheckpoint,
Target: &ethpbv1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
},
},
Signature: make([]byte, 96),
}
att2 := &ethpbv1.Attestation{
AggregationBits: b,
Data: &ethpbv1.AttestationData{
Slot: 0,
Index: 0,
BeaconBlockRoot: bytesutil.PadTo([]byte("beaconblockroot2"), 32),
Source: sourceCheckpoint,
Target: &ethpbv1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: make([]byte, 96),
}
for _, att := range []*ethpbv1.Attestation{att1, att2} {
sb, err := signing.ComputeDomainAndSign(
bs,
slots.ToEpoch(att.Data.Slot),
att.Data,
params.BeaconConfig().DomainBeaconAttester,
keys[0],
)
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
att.Signature = sig.Marshal()
}
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
HeadFetcher: chainService,
ChainInfoFetcher: chainService,
AttestationsPool: attestations.NewPool(),
Broadcaster: broadcaster,
OperationNotifier: &blockchainmock.MockOperationNotifier{},
}
_, err = s.SubmitAttestations(ctx, &ethpbv1.SubmitAttestationsRequest{
Data: []*ethpbv1.Attestation{att1, att2},
})
require.NoError(t, err)
assert.Equal(t, true, broadcaster.BroadcastCalled)
assert.Equal(t, 2, len(broadcaster.BroadcastAttestations))
expectedAtt1, err := att1.HashTreeRoot()
require.NoError(t, err)
expectedAtt2, err := att2.HashTreeRoot()
require.NoError(t, err)
actualAtt1, err := broadcaster.BroadcastAttestations[0].HashTreeRoot()
require.NoError(t, err)
actualAtt2, err := broadcaster.BroadcastAttestations[1].HashTreeRoot()
require.NoError(t, err)
for _, r := range [][32]byte{actualAtt1, actualAtt2} {
assert.Equal(t, true, reflect.DeepEqual(expectedAtt1, r) || reflect.DeepEqual(expectedAtt2, r))
}
require.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
}
func TestServer_SubmitAttestations_ValidAttestationSubmitted(t *testing.T) {
ctx := grpc.NewContextWithServerTransportStream(context.Background(), &runtime.ServerTransportStream{})
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig().Copy()
// Required for correct committee size calculation.
c.SlotsPerEpoch = 1
params.OverrideBeaconConfig(c)
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validators := []*ethpbv1alpha1.Validator{
{
PublicKey: keys[0].PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
bs, err := util.NewBeaconState(func(state *ethpbv1alpha1.BeaconState) error {
state.Validators = validators
state.Slot = 1
state.PreviousJustifiedCheckpoint = &ethpbv1alpha1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
}
return nil
})
require.NoError(t, err)
sourceCheckpoint := &ethpbv1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
}
b := bitfield.NewBitlist(1)
b.SetBitAt(0, true)
attValid := &ethpbv1.Attestation{
AggregationBits: b,
Data: &ethpbv1.AttestationData{
Slot: 0,
Index: 0,
BeaconBlockRoot: bytesutil.PadTo([]byte("beaconblockroot1"), 32),
Source: sourceCheckpoint,
Target: &ethpbv1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
},
},
Signature: make([]byte, 96),
}
attInvalidSignature := &ethpbv1.Attestation{
AggregationBits: b,
Data: &ethpbv1.AttestationData{
Slot: 0,
Index: 0,
BeaconBlockRoot: bytesutil.PadTo([]byte("beaconblockroot2"), 32),
Source: sourceCheckpoint,
Target: &ethpbv1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: make([]byte, 96),
}
// Don't sign attInvalidSignature.
sb, err := signing.ComputeDomainAndSign(
bs,
slots.ToEpoch(attValid.Data.Slot),
attValid.Data,
params.BeaconConfig().DomainBeaconAttester,
keys[0],
)
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
attValid.Signature = sig.Marshal()
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
HeadFetcher: chainService,
ChainInfoFetcher: chainService,
AttestationsPool: attestations.NewPool(),
Broadcaster: broadcaster,
OperationNotifier: &blockchainmock.MockOperationNotifier{},
}
_, err = s.SubmitAttestations(ctx, &ethpbv1.SubmitAttestationsRequest{
Data: []*ethpbv1.Attestation{attValid, attInvalidSignature},
})
require.ErrorContains(t, "One or more attestations failed validation", err)
expectedAtt, err := attValid.HashTreeRoot()
require.NoError(t, err)
assert.Equal(t, true, broadcaster.BroadcastCalled)
require.Equal(t, 1, len(broadcaster.BroadcastAttestations))
broadcastRoot, err := broadcaster.BroadcastAttestations[0].HashTreeRoot()
require.NoError(t, err)
require.DeepEqual(t, expectedAtt, broadcastRoot)
require.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
}
func TestServer_SubmitAttestations_InvalidAttestationGRPCHeader(t *testing.T) {
ctx := grpc.NewContextWithServerTransportStream(context.Background(), &runtime.ServerTransportStream{})
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig().Copy()
// Required for correct committee size calculation.
c.SlotsPerEpoch = 1
params.OverrideBeaconConfig(c)
_, keys, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
validators := []*ethpbv1alpha1.Validator{
{
PublicKey: keys[0].PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
bs, err := util.NewBeaconState(func(state *ethpbv1alpha1.BeaconState) error {
state.Validators = validators
state.Slot = 1
state.PreviousJustifiedCheckpoint = &ethpbv1alpha1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
}
return nil
})
require.NoError(t, err)
b := bitfield.NewBitlist(1)
b.SetBitAt(0, true)
att := &ethpbv1.Attestation{
AggregationBits: b,
Data: &ethpbv1.AttestationData{
Slot: 0,
Index: 0,
BeaconBlockRoot: bytesutil.PadTo([]byte("beaconblockroot2"), 32),
Source: &ethpbv1.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
},
Target: &ethpbv1.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: nil,
}
chain := &blockchainmock.ChainService{State: bs}
broadcaster := &p2pMock.MockBroadcaster{}
s := &Server{
ChainInfoFetcher: chain,
AttestationsPool: attestations.NewPool(),
Broadcaster: broadcaster,
OperationNotifier: &blockchainmock.MockOperationNotifier{},
HeadFetcher: chain,
}
_, err = s.SubmitAttestations(ctx, &ethpbv1.SubmitAttestationsRequest{
Data: []*ethpbv1.Attestation{att},
})
require.ErrorContains(t, "One or more attestations failed validation", err)
sts, ok := grpc.ServerTransportStreamFromContext(ctx).(*runtime.ServerTransportStream)
require.Equal(t, true, ok, "type assertion failed")
md := sts.Header()
v, ok := md[strings.ToLower(grpcutil.CustomErrorMetadataKey)]
require.Equal(t, true, ok, "could not retrieve custom error metadata value")
assert.DeepEqual(
t,
[]string{"{\"failures\":[{\"index\":0,\"message\":\"Incorrect attestation signature: could not create signature from byte slice: signature must be 96 bytes\"}]}"},
v,
)
}
func TestListBLSToExecutionChanges(t *testing.T) {
change1 := &ethpbv1alpha1.SignedBLSToExecutionChange{
Message: &ethpbv1alpha1.BLSToExecutionChange{
@@ -1219,6 +799,10 @@ func TestListBLSToExecutionChanges(t *testing.T) {
func TestSubmitSignedBLSToExecutionChanges_Ok(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig().Copy()
// Required for correct committee size calculation.
@@ -1312,6 +896,10 @@ func TestSubmitSignedBLSToExecutionChanges_Ok(t *testing.T) {
func TestSubmitSignedBLSToExecutionChanges_Bellatrix(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig().Copy()
// Required for correct committee size calculation.
@@ -1420,6 +1008,10 @@ func TestSubmitSignedBLSToExecutionChanges_Bellatrix(t *testing.T) {
func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
ctx := context.Background()
transition.SkipSlotCache.Disable()
defer transition.SkipSlotCache.Enable()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig().Copy()
// Required for correct committee size calculation.


@@ -6,6 +6,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
bytesutil2 "github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
@@ -13,6 +14,14 @@ import (
"github.com/wealdtech/go-bytesutil"
)
type ListAttestationsResponse struct {
Data []*shared.Attestation `json:"data"`
}
type SubmitAttestationsRequest struct {
Data []*shared.Attestation `json:"data" validate:"required,dive"`
}
type SignedBeaconBlock struct {
Message BeaconBlock `json:"message" validate:"required"`
Signature string `json:"signature" validate:"required"`


@@ -11,6 +11,7 @@ import (
statenative "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v4/proto/migration"
@@ -102,13 +103,13 @@ func (bs *Server) ListValidators(ctx context.Context, req *ethpb.StateValidators
return &ethpb.StateValidatorsResponse{Data: valContainers, ExecutionOptimistic: isOptimistic, Finalized: isFinalized}, nil
}
filterStatus := make(map[ethpb.ValidatorStatus]bool, len(req.Status))
const lastValidStatusValue = ethpb.ValidatorStatus(12)
filterStatus := make(map[validator.ValidatorStatus]bool, len(req.Status))
const lastValidStatusValue = 12
for _, ss := range req.Status {
if ss > lastValidStatusValue {
return nil, status.Errorf(codes.InvalidArgument, "Invalid status "+ss.String())
}
filterStatus[ss] = true
filterStatus[validator.ValidatorStatus(ss)] = true
}
epoch := slots.ToEpoch(st.Slot())
filteredVals := make([]*ethpb.ValidatorContainer, 0, len(valContainers))
@@ -243,8 +244,8 @@ func valContainersByRequestIds(state state.BeaconState, validatorIds [][]byte) (
if len(validatorIds) == 0 {
allValidators := state.Validators()
valContainers = make([]*ethpb.ValidatorContainer, len(allValidators))
for i, validator := range allValidators {
readOnlyVal, err := statenative.NewValidator(validator)
for i, val := range allValidators {
readOnlyVal, err := statenative.NewValidator(val)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not convert validator: %v", err)
}
@@ -255,8 +256,8 @@ func valContainersByRequestIds(state state.BeaconState, validatorIds [][]byte) (
valContainers[i] = &ethpb.ValidatorContainer{
Index: primitives.ValidatorIndex(i),
Balance: allBalances[i],
Status: subStatus,
Validator: migration.V1Alpha1ValidatorToV1(validator),
Status: ethpb.ValidatorStatus(subStatus),
Validator: migration.V1Alpha1ValidatorToV1(val),
}
}
} else {
@@ -278,7 +279,7 @@ func valContainersByRequestIds(state state.BeaconState, validatorIds [][]byte) (
}
valIndex = primitives.ValidatorIndex(index)
}
validator, err := state.ValidatorAtIndex(valIndex)
val, err := state.ValidatorAtIndex(valIndex)
if _, ok := err.(*statenative.ValidatorIndexOutOfRangeError); ok {
// Ignore well-formed yet unknown indexes.
continue
@@ -286,8 +287,8 @@ func valContainersByRequestIds(state state.BeaconState, validatorIds [][]byte) (
if err != nil {
return nil, errors.Wrap(err, "could not get validator")
}
v1Validator := migration.V1Alpha1ValidatorToV1(validator)
readOnlyVal, err := statenative.NewValidator(validator)
v1Validator := migration.V1Alpha1ValidatorToV1(val)
readOnlyVal, err := statenative.NewValidator(val)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not convert validator: %v", err)
}
@@ -298,7 +299,7 @@ func valContainersByRequestIds(state state.BeaconState, validatorIds [][]byte) (
valContainers = append(valContainers, &ethpb.ValidatorContainer{
Index: valIndex,
Balance: allBalances[valIndex],
Status: subStatus,
Status: ethpb.ValidatorStatus(subStatus),
Validator: v1Validator,
})
}

View File

@@ -15,6 +15,7 @@ import (
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v4/proto/migration"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
@@ -463,7 +464,7 @@ func TestListValidators_Status(t *testing.T) {
require.Equal(
t,
true,
status == ethpb.ValidatorStatus_ACTIVE,
status == validator.Active,
)
require.Equal(
t,
@@ -501,7 +502,7 @@ func TestListValidators_Status(t *testing.T) {
require.Equal(
t,
true,
status == ethpb.ValidatorStatus_ACTIVE_ONGOING,
status == validator.ActiveOngoing,
)
require.Equal(
t,
@@ -538,7 +539,7 @@ func TestListValidators_Status(t *testing.T) {
require.Equal(
t,
true,
status == ethpb.ValidatorStatus_EXITED,
status == validator.Exited,
)
require.Equal(
t,
@@ -574,7 +575,7 @@ func TestListValidators_Status(t *testing.T) {
require.Equal(
t,
true,
status == ethpb.ValidatorStatus_PENDING_INITIALIZED || status == ethpb.ValidatorStatus_EXITED_UNSLASHED,
status == validator.PendingInitialized || status == validator.ExitedUnslashed,
)
require.Equal(
t,
@@ -612,7 +613,7 @@ func TestListValidators_Status(t *testing.T) {
require.Equal(
t,
true,
status == ethpb.ValidatorStatus_PENDING || subStatus == ethpb.ValidatorStatus_EXITED_SLASHED,
status == validator.Pending || subStatus == validator.ExitedSlashed,
)
require.Equal(
t,

View File

@@ -16,7 +16,7 @@ go_library(
"//beacon-chain/rpc/lookup:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//network:go_default_library",
"//network/http:go_default_library",
"//proto/engine/v1:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
@@ -36,7 +36,7 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//network:go_default_library",
"//network/http:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",

View File

@@ -12,7 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/network"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
@@ -22,7 +22,7 @@ func (s *Server) ExpectedWithdrawals(w http.ResponseWriter, r *http.Request) {
// Retrieve beacon state
stateId := mux.Vars(r)["state_id"]
if stateId == "" {
network.WriteError(w, &network.DefaultErrorJson{
http2.WriteError(w, &http2.DefaultErrorJson{
Message: "state_id is required in URL params",
Code: http.StatusBadRequest,
})
@@ -30,7 +30,7 @@ func (s *Server) ExpectedWithdrawals(w http.ResponseWriter, r *http.Request) {
}
st, err := s.Stater.State(r.Context(), []byte(stateId))
if err != nil {
network.WriteError(w, handleWrapError(err, "could not retrieve state", http.StatusNotFound))
http2.WriteError(w, handleWrapError(err, "could not retrieve state", http.StatusNotFound))
return
}
queryParam := r.URL.Query().Get("proposal_slot")
@@ -38,7 +38,7 @@ func (s *Server) ExpectedWithdrawals(w http.ResponseWriter, r *http.Request) {
if queryParam != "" {
pSlot, err := strconv.ParseUint(queryParam, 10, 64)
if err != nil {
network.WriteError(w, handleWrapError(err, "invalid proposal slot value", http.StatusBadRequest))
http2.WriteError(w, handleWrapError(err, "invalid proposal slot value", http.StatusBadRequest))
return
}
proposalSlot = primitives.Slot(pSlot)
@@ -48,18 +48,18 @@ func (s *Server) ExpectedWithdrawals(w http.ResponseWriter, r *http.Request) {
// Perform sanity checks on proposal slot before computing state
capellaStart, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
if err != nil {
network.WriteError(w, handleWrapError(err, "could not calculate Capella start slot", http.StatusInternalServerError))
http2.WriteError(w, handleWrapError(err, "could not calculate Capella start slot", http.StatusInternalServerError))
return
}
if proposalSlot < capellaStart {
network.WriteError(w, &network.DefaultErrorJson{
http2.WriteError(w, &http2.DefaultErrorJson{
Message: "expected withdrawals are not supported before Capella fork",
Code: http.StatusBadRequest,
})
return
}
if proposalSlot <= st.Slot() {
network.WriteError(w, &network.DefaultErrorJson{
http2.WriteError(w, &http2.DefaultErrorJson{
Message: fmt.Sprintf("proposal slot must be bigger than state slot. proposal slot: %d, state slot: %d", proposalSlot, st.Slot()),
Code: http.StatusBadRequest,
})
@@ -67,7 +67,7 @@ func (s *Server) ExpectedWithdrawals(w http.ResponseWriter, r *http.Request) {
}
lookAheadLimit := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().MaxSeedLookahead)))
if st.Slot().Add(lookAheadLimit) <= proposalSlot {
network.WriteError(w, &network.DefaultErrorJson{
http2.WriteError(w, &http2.DefaultErrorJson{
Message: fmt.Sprintf("proposal slot cannot be >= %d slots ahead of state slot", lookAheadLimit),
Code: http.StatusBadRequest,
})
@@ -76,12 +76,12 @@ func (s *Server) ExpectedWithdrawals(w http.ResponseWriter, r *http.Request) {
// Get metadata for response
isOptimistic, err := s.OptimisticModeFetcher.IsOptimistic(r.Context())
if err != nil {
network.WriteError(w, handleWrapError(err, "could not get optimistic mode info", http.StatusInternalServerError))
http2.WriteError(w, handleWrapError(err, "could not get optimistic mode info", http.StatusInternalServerError))
return
}
root, err := helpers.BlockRootAtSlot(st, st.Slot()-1)
if err != nil {
network.WriteError(w, handleWrapError(err, "could not get block root", http.StatusInternalServerError))
http2.WriteError(w, handleWrapError(err, "could not get block root", http.StatusInternalServerError))
return
}
var blockRoot = [32]byte(root)
@@ -89,7 +89,7 @@ func (s *Server) ExpectedWithdrawals(w http.ResponseWriter, r *http.Request) {
// Advance state forward to proposal slot
st, err = transition.ProcessSlots(r.Context(), st, proposalSlot)
if err != nil {
network.WriteError(w, &network.DefaultErrorJson{
http2.WriteError(w, &http2.DefaultErrorJson{
Message: "could not process slots",
Code: http.StatusInternalServerError,
})
@@ -97,13 +97,13 @@ func (s *Server) ExpectedWithdrawals(w http.ResponseWriter, r *http.Request) {
}
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
network.WriteError(w, &network.DefaultErrorJson{
http2.WriteError(w, &http2.DefaultErrorJson{
Message: "could not get expected withdrawals",
Code: http.StatusInternalServerError,
})
return
}
network.WriteJson(w, &ExpectedWithdrawalsResponse{
http2.WriteJson(w, &ExpectedWithdrawalsResponse{
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
Data: buildExpectedWithdrawalsData(withdrawals),
@@ -123,8 +123,8 @@ func buildExpectedWithdrawalsData(withdrawals []*enginev1.Withdrawal) []*Expecte
return data
}
func handleWrapError(err error, message string, code int) *network.DefaultErrorJson {
return &network.DefaultErrorJson{
func handleWrapError(err error, message string, code int) *http2.DefaultErrorJson {
return &http2.DefaultErrorJson{
Message: errors.Wrapf(err, message).Error(),
Code: code,
}
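
Note: the hunk above is a mechanical swap from the old network helpers to the network/http package (imported as http2); the handler logic itself is unchanged. A minimal standalone sketch of the new pattern, assuming only the WriteError, WriteJson and DefaultErrorJson shapes visible in this diff — the handler and its response type are hypothetical, not part of this change:

package example

import (
    "net/http"

    http2 "github.com/prysmaticlabs/prysm/v4/network/http"
)

type okResponse struct {
    Status string `json:"status"`
}

// pingHandler is a hypothetical handler showing both paths:
// failures go through http2.WriteError with a DefaultErrorJson payload,
// successes through http2.WriteJson.
func pingHandler(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodGet {
        http2.WriteError(w, &http2.DefaultErrorJson{
            Message: "only GET is supported",
            Code:    http.StatusMethodNotAllowed,
        })
        return
    }
    http2.WriteJson(w, &okResponse{Status: "ok"})
}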

View File

@@ -16,7 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/network"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
@@ -96,7 +96,7 @@ func TestExpectedWithdrawals_BadRequest(t *testing.T) {
s.ExpectedWithdrawals(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, testCase.errorMessage, e.Message)

View File

@@ -11,7 +11,6 @@ go_library(
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
@@ -42,7 +41,6 @@ go_test(
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",

View File

@@ -6,7 +6,6 @@ import (
gwpb "github.com/grpc-ecosystem/grpc-gateway/v2/proto/gateway"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
blockfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/block"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
@@ -80,26 +79,18 @@ func (s *Server) StreamEvents(
}
// Subscribe to event feeds from information received in the beacon node runtime.
blockChan := make(chan *feed.Event, 1)
blockSub := s.BlockNotifier.BlockFeed().Subscribe(blockChan)
opsChan := make(chan *feed.Event, 1)
opsSub := s.OperationNotifier.OperationFeed().Subscribe(opsChan)
stateChan := make(chan *feed.Event, 1)
stateSub := s.StateNotifier.StateFeed().Subscribe(stateChan)
defer blockSub.Unsubscribe()
defer opsSub.Unsubscribe()
defer stateSub.Unsubscribe()
// Handle each event received and context cancelation.
for {
select {
case event := <-blockChan:
if err := handleBlockEvents(stream, requestedTopics, event); err != nil {
return status.Errorf(codes.Internal, "Could not handle block event: %v", err)
}
case event := <-opsChan:
if err := handleBlockOperationEvents(stream, requestedTopics, event); err != nil {
return status.Errorf(codes.Internal, "Could not handle block operations event: %v", err)
@@ -116,37 +107,6 @@ func (s *Server) StreamEvents(
}
}
func handleBlockEvents(
stream ethpbservice.Events_StreamEventsServer, requestedTopics map[string]bool, event *feed.Event,
) error {
switch event.Type {
case blockfeed.ReceivedBlock:
if _, ok := requestedTopics[BlockTopic]; !ok {
return nil
}
blkData, ok := event.Data.(*blockfeed.ReceivedBlockData)
if !ok {
return nil
}
v1Data, err := migration.BlockIfaceToV1BlockHeader(blkData.SignedBlock)
if err != nil {
return err
}
item, err := v1Data.Message.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not hash tree root block")
}
eventBlock := &ethpb.EventBlock{
Slot: v1Data.Message.Slot,
Block: item[:],
ExecutionOptimistic: blkData.IsOptimistic,
}
return streamData(stream, BlockTopic, eventBlock)
default:
return nil
}
}
func handleBlockOperationEvents(
stream ethpbservice.Events_StreamEventsServer, requestedTopics map[string]bool, event *feed.Event,
) error {
@@ -252,6 +212,28 @@ func (s *Server) handleStateEvents(
return nil
}
return streamData(stream, ChainReorgTopic, reorg)
case statefeed.BlockProcessed:
if _, ok := requestedTopics[BlockTopic]; !ok {
return nil
}
blkData, ok := event.Data.(*statefeed.BlockProcessedData)
if !ok {
return nil
}
v1Data, err := migration.BlockIfaceToV1BlockHeader(blkData.SignedBlock)
if err != nil {
return err
}
item, err := v1Data.Message.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not hash tree root block")
}
eventBlock := &ethpb.EventBlock{
Slot: blkData.Slot,
Block: item[:],
ExecutionOptimistic: blkData.Optimistic,
}
return streamData(stream, BlockTopic, eventBlock)
default:
return nil
}
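
Note: with the dedicated block feed removed above, block arrivals now surface through the state feed's BlockProcessed event, which handleStateEvents converts into the same EventBlock payload as before. A minimal consumer sketch, assuming only the statefeed.Notifier, feed.Event and BlockProcessedData shapes used in this hunk (watchBlocks itself is hypothetical):

package example

import (
    "log"

    "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
    statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
)

// watchBlocks observes processed blocks via the state feed instead of the
// removed block feed; the Subscribe/Unsubscribe semantics are as used above.
func watchBlocks(notifier statefeed.Notifier) {
    stateChan := make(chan *feed.Event, 1)
    sub := notifier.StateFeed().Subscribe(stateChan)
    defer sub.Unsubscribe()

    for ev := range stateChan {
        if ev.Type != statefeed.BlockProcessed {
            continue
        }
        data, ok := ev.Data.(*statefeed.BlockProcessedData)
        if !ok {
            continue
        }
        log.Printf("block processed at slot %d (optimistic=%v)", data.Slot, data.Optimistic)
    }
}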

View File

@@ -13,7 +13,6 @@ import (
mockChain "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
b "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
blockfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/block"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
@@ -51,55 +50,6 @@ func TestStreamEvents_Preconditions(t *testing.T) {
})
}
func TestStreamEvents_BlockEvents(t *testing.T) {
t.Run(BlockTopic, func(t *testing.T) {
ctx := context.Background()
srv, ctrl, mockStream := setupServer(ctx, t)
defer ctrl.Finish()
blk := util.HydrateSignedBeaconBlock(&eth.SignedBeaconBlock{
Block: &eth.BeaconBlock{
Slot: 8,
},
})
bodyRoot, err := blk.Block.Body.HashTreeRoot()
require.NoError(t, err)
wantedHeader := util.HydrateBeaconHeader(&eth.BeaconBlockHeader{
Slot: 8,
BodyRoot: bodyRoot[:],
})
wantedBlockRoot, err := wantedHeader.HashTreeRoot()
require.NoError(t, err)
genericResponse, err := anypb.New(&ethpb.EventBlock{
Slot: 8,
Block: wantedBlockRoot[:],
ExecutionOptimistic: true,
})
require.NoError(t, err)
wantedMessage := &gateway.EventSource{
Event: BlockTopic,
Data: genericResponse,
}
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
assertFeedSendAndReceive(ctx, &assertFeedArgs{
t: t,
srv: srv,
topics: []string{BlockTopic},
stream: mockStream,
shouldReceive: wantedMessage,
itemToSend: &feed.Event{
Type: blockfeed.ReceivedBlock,
Data: &blockfeed.ReceivedBlockData{
SignedBlock: wsb,
IsOptimistic: true,
},
},
feed: srv.BlockNotifier.BlockFeed(),
})
})
}
func TestStreamEvents_OperationsEvents(t *testing.T) {
t.Run("attestation_unaggregated", func(t *testing.T) {
ctx := context.Background()
@@ -588,6 +538,53 @@ func TestStreamEvents_StateEvents(t *testing.T) {
feed: srv.StateNotifier.StateFeed(),
})
})
t.Run(BlockTopic, func(t *testing.T) {
ctx := context.Background()
srv, ctrl, mockStream := setupServer(ctx, t)
defer ctrl.Finish()
blk := util.HydrateSignedBeaconBlock(&eth.SignedBeaconBlock{
Block: &eth.BeaconBlock{
Slot: 8,
},
})
bodyRoot, err := blk.Block.Body.HashTreeRoot()
require.NoError(t, err)
wantedHeader := util.HydrateBeaconHeader(&eth.BeaconBlockHeader{
Slot: 8,
BodyRoot: bodyRoot[:],
})
wantedBlockRoot, err := wantedHeader.HashTreeRoot()
require.NoError(t, err)
genericResponse, err := anypb.New(&ethpb.EventBlock{
Slot: 8,
Block: wantedBlockRoot[:],
ExecutionOptimistic: true,
})
require.NoError(t, err)
wantedMessage := &gateway.EventSource{
Event: BlockTopic,
Data: genericResponse,
}
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
assertFeedSendAndReceive(ctx, &assertFeedArgs{
t: t,
srv: srv,
topics: []string{BlockTopic},
stream: mockStream,
shouldReceive: wantedMessage,
itemToSend: &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: 8,
SignedBlock: wsb,
Optimistic: true,
},
},
feed: srv.StateNotifier.StateFeed(),
})
})
}
func TestStreamEvents_CommaSeparatedTopics(t *testing.T) {
@@ -651,7 +648,6 @@ func TestStreamEvents_CommaSeparatedTopics(t *testing.T) {
func setupServer(ctx context.Context, t testing.TB) (*Server, *gomock.Controller, *mock.MockEvents_StreamEventsServer) {
srv := &Server{
BlockNotifier: &mockChain.MockBlockNotifier{},
StateNotifier: &mockChain.MockStateNotifier{},
OperationNotifier: &mockChain.MockOperationNotifier{},
Ctx: ctx,

View File

@@ -7,7 +7,6 @@ import (
"context"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
blockfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/block"
opfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
)
@@ -17,7 +16,6 @@ import (
type Server struct {
Ctx context.Context
StateNotifier statefeed.Notifier
BlockNotifier blockfeed.Notifier
OperationNotifier opfeed.Notifier
HeadFetcher blockchain.HeadFetcher
ChainInfoFetcher blockchain.ChainInfoFetcher

View File

@@ -13,14 +13,15 @@ go_library(
"//api/grpc:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/eth/v1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
@@ -48,6 +49,7 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",

View File

@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/api/grpc"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/lookup"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -38,8 +39,8 @@ func ValidateSyncGRPC(
return status.Errorf(codes.Internal, "Could not check optimistic status: %v", err)
}
syncDetailsContainer := &SyncDetailsContainer{
Data: &SyncDetailsJson{
syncDetailsContainer := &shared.SyncDetailsContainer{
Data: &shared.SyncDetails{
HeadSlot: strconv.FormatUint(uint64(headSlot), 10),
SyncDistance: strconv.FormatUint(uint64(timeFetcher.CurrentSlot()-headSlot), 10),
IsSyncing: true,
@@ -58,35 +59,6 @@ func ValidateSyncGRPC(
return status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
// ValidateSyncHTTP checks whether the node is currently syncing and returns sync information.
// It returns information whether the node is currently syncing along with sync details.
func ValidateSyncHTTP(
ctx context.Context,
syncChecker sync.Checker,
headFetcher blockchain.HeadFetcher,
timeFetcher blockchain.TimeFetcher,
optimisticModeFetcher blockchain.OptimisticModeFetcher,
) (bool, *SyncDetailsContainer, error) {
if !syncChecker.Syncing() {
return false, nil, nil
}
headSlot := headFetcher.HeadSlot()
isOptimistic, err := optimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
return true, nil, errors.Wrap(err, "could not check optimistic status")
}
syncDetails := &SyncDetailsContainer{
Data: &SyncDetailsJson{
HeadSlot: strconv.FormatUint(uint64(headSlot), 10),
SyncDistance: strconv.FormatUint(uint64(timeFetcher.CurrentSlot()-headSlot), 10),
IsSyncing: true,
IsOptimistic: isOptimistic,
},
}
return true, syncDetails, nil
}
// IsOptimistic checks whether the beacon state's block is optimistic.
func IsOptimistic(
ctx context.Context,
@@ -216,17 +188,3 @@ func isStateRootOptimistic(
// No block matching requested state root, return true.
return true, nil
}
// SyncDetailsJson contains information about node sync status.
type SyncDetailsJson struct {
HeadSlot string `json:"head_slot"`
SyncDistance string `json:"sync_distance"`
IsSyncing bool `json:"is_syncing"`
IsOptimistic bool `json:"is_optimistic"`
ElOffline bool `json:"el_offline"`
}
// SyncDetailsContainer is a wrapper for Data.
type SyncDetailsContainer struct {
Data *SyncDetailsJson `json:"data"`
}

View File

@@ -5,66 +5,66 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
)
// ValidatorStatus returns a validator's status at the given epoch.
func ValidatorStatus(validator state.ReadOnlyValidator, epoch primitives.Epoch) (ethpb.ValidatorStatus, error) {
valStatus, err := ValidatorSubStatus(validator, epoch)
func ValidatorStatus(val state.ReadOnlyValidator, epoch primitives.Epoch) (validator.ValidatorStatus, error) {
valStatus, err := ValidatorSubStatus(val, epoch)
if err != nil {
return 0, errors.Wrap(err, "could not get sub status")
return 0, errors.Wrap(err, "could not get validator sub status")
}
switch valStatus {
case ethpb.ValidatorStatus_PENDING_INITIALIZED, ethpb.ValidatorStatus_PENDING_QUEUED:
return ethpb.ValidatorStatus_PENDING, nil
case ethpb.ValidatorStatus_ACTIVE_ONGOING, ethpb.ValidatorStatus_ACTIVE_SLASHED, ethpb.ValidatorStatus_ACTIVE_EXITING:
return ethpb.ValidatorStatus_ACTIVE, nil
case ethpb.ValidatorStatus_EXITED_UNSLASHED, ethpb.ValidatorStatus_EXITED_SLASHED:
return ethpb.ValidatorStatus_EXITED, nil
case ethpb.ValidatorStatus_WITHDRAWAL_POSSIBLE, ethpb.ValidatorStatus_WITHDRAWAL_DONE:
return ethpb.ValidatorStatus_WITHDRAWAL, nil
case validator.PendingInitialized, validator.PendingQueued:
return validator.Pending, nil
case validator.ActiveOngoing, validator.ActiveSlashed, validator.ActiveExiting:
return validator.Active, nil
case validator.ExitedUnslashed, validator.ExitedSlashed:
return validator.Exited, nil
case validator.WithdrawalPossible, validator.WithdrawalDone:
return validator.Withdrawal, nil
}
return 0, errors.New("invalid validator state")
}
// ValidatorSubStatus returns a validator's sub-status at the given epoch.
func ValidatorSubStatus(validator state.ReadOnlyValidator, epoch primitives.Epoch) (ethpb.ValidatorStatus, error) {
func ValidatorSubStatus(val state.ReadOnlyValidator, epoch primitives.Epoch) (validator.ValidatorStatus, error) {
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
// Pending.
if validator.ActivationEpoch() > epoch {
if validator.ActivationEligibilityEpoch() == farFutureEpoch {
return ethpb.ValidatorStatus_PENDING_INITIALIZED, nil
} else if validator.ActivationEligibilityEpoch() < farFutureEpoch {
return ethpb.ValidatorStatus_PENDING_QUEUED, nil
if val.ActivationEpoch() > epoch {
if val.ActivationEligibilityEpoch() == farFutureEpoch {
return validator.PendingInitialized, nil
} else if val.ActivationEligibilityEpoch() < farFutureEpoch {
return validator.PendingQueued, nil
}
}
// Active.
if validator.ActivationEpoch() <= epoch && epoch < validator.ExitEpoch() {
if validator.ExitEpoch() == farFutureEpoch {
return ethpb.ValidatorStatus_ACTIVE_ONGOING, nil
} else if validator.ExitEpoch() < farFutureEpoch {
if validator.Slashed() {
return ethpb.ValidatorStatus_ACTIVE_SLASHED, nil
if val.ActivationEpoch() <= epoch && epoch < val.ExitEpoch() {
if val.ExitEpoch() == farFutureEpoch {
return validator.ActiveOngoing, nil
} else if val.ExitEpoch() < farFutureEpoch {
if val.Slashed() {
return validator.ActiveSlashed, nil
}
return ethpb.ValidatorStatus_ACTIVE_EXITING, nil
return validator.ActiveExiting, nil
}
}
// Exited.
if validator.ExitEpoch() <= epoch && epoch < validator.WithdrawableEpoch() {
if validator.Slashed() {
return ethpb.ValidatorStatus_EXITED_SLASHED, nil
if val.ExitEpoch() <= epoch && epoch < val.WithdrawableEpoch() {
if val.Slashed() {
return validator.ExitedSlashed, nil
}
return ethpb.ValidatorStatus_EXITED_UNSLASHED, nil
return validator.ExitedUnslashed, nil
}
if validator.WithdrawableEpoch() <= epoch {
if validator.EffectiveBalance() != 0 {
return ethpb.ValidatorStatus_WITHDRAWAL_POSSIBLE, nil
if val.WithdrawableEpoch() <= epoch {
if val.EffectiveBalance() != 0 {
return validator.WithdrawalPossible, nil
} else {
return ethpb.ValidatorStatus_WITHDRAWAL_DONE, nil
return validator.WithdrawalDone, nil
}
}
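
Note: the sub-statuses now come from consensus-types/validator but still fold into the same top-level groups (pending, active, exited, withdrawal). A small hedged example of that mapping, written as an external test; it assumes the rpc/eth/helpers import path for this package and reuses the statenative.NewValidator constructor shown earlier in this change set:

package helpers_test

import (
    "testing"

    "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
    statenative "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
    "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
    "github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
    eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
    "github.com/prysmaticlabs/prysm/v4/testing/require"
)

// A slashed validator whose exit epoch has passed but whose withdrawable epoch
// has not: sub-status ExitedSlashed, top-level status Exited.
func TestExitedSlashedMapsToExited(t *testing.T) {
    val, err := statenative.NewValidator(&eth.Validator{
        ExitEpoch:         30,
        WithdrawableEpoch: 40,
        Slashed:           true,
    })
    require.NoError(t, err)
    sub, err := helpers.ValidatorSubStatus(val, primitives.Epoch(35))
    require.NoError(t, err)
    require.Equal(t, validator.ExitedSlashed, sub)
    top, err := helpers.ValidatorStatus(val, primitives.Epoch(35))
    require.NoError(t, err)
    require.Equal(t, validator.Exited, top)
}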

View File

@@ -7,6 +7,7 @@ import (
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v4/proto/migration"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
@@ -23,7 +24,7 @@ func Test_ValidatorStatus(t *testing.T) {
tests := []struct {
name string
args args
want ethpb.ValidatorStatus
want validator.ValidatorStatus
wantErr bool
}{
{
@@ -35,7 +36,7 @@ func Test_ValidatorStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_PENDING,
want: validator.Pending,
},
{
name: "pending queued",
@@ -46,7 +47,7 @@ func Test_ValidatorStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_PENDING,
want: validator.Pending,
},
{
name: "active ongoing",
@@ -57,7 +58,7 @@ func Test_ValidatorStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_ACTIVE,
want: validator.Active,
},
{
name: "active slashed",
@@ -69,7 +70,7 @@ func Test_ValidatorStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_ACTIVE,
want: validator.Active,
},
{
name: "active exiting",
@@ -81,7 +82,7 @@ func Test_ValidatorStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_ACTIVE,
want: validator.Active,
},
{
name: "exited slashed",
@@ -94,7 +95,7 @@ func Test_ValidatorStatus(t *testing.T) {
},
epoch: primitives.Epoch(35),
},
want: ethpb.ValidatorStatus_EXITED,
want: validator.Exited,
},
{
name: "exited unslashed",
@@ -107,7 +108,7 @@ func Test_ValidatorStatus(t *testing.T) {
},
epoch: primitives.Epoch(35),
},
want: ethpb.ValidatorStatus_EXITED,
want: validator.Exited,
},
{
name: "withdrawal possible",
@@ -121,7 +122,7 @@ func Test_ValidatorStatus(t *testing.T) {
},
epoch: primitives.Epoch(45),
},
want: ethpb.ValidatorStatus_WITHDRAWAL,
want: validator.Withdrawal,
},
{
name: "withdrawal done",
@@ -135,7 +136,7 @@ func Test_ValidatorStatus(t *testing.T) {
},
epoch: primitives.Epoch(45),
},
want: ethpb.ValidatorStatus_WITHDRAWAL,
want: validator.Withdrawal,
},
}
for _, tt := range tests {
@@ -161,7 +162,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
tests := []struct {
name string
args args
want ethpb.ValidatorStatus
want validator.ValidatorStatus
wantErr bool
}{
{
@@ -173,7 +174,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_PENDING_INITIALIZED,
want: validator.PendingInitialized,
},
{
name: "pending queued",
@@ -184,7 +185,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_PENDING_QUEUED,
want: validator.PendingQueued,
},
{
name: "active ongoing",
@@ -195,7 +196,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_ACTIVE_ONGOING,
want: validator.ActiveOngoing,
},
{
name: "active slashed",
@@ -207,7 +208,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_ACTIVE_SLASHED,
want: validator.ActiveSlashed,
},
{
name: "active exiting",
@@ -219,7 +220,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
},
epoch: primitives.Epoch(5),
},
want: ethpb.ValidatorStatus_ACTIVE_EXITING,
want: validator.ActiveExiting,
},
{
name: "exited slashed",
@@ -232,7 +233,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
},
epoch: primitives.Epoch(35),
},
want: ethpb.ValidatorStatus_EXITED_SLASHED,
want: validator.ExitedSlashed,
},
{
name: "exited unslashed",
@@ -245,7 +246,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
},
epoch: primitives.Epoch(35),
},
want: ethpb.ValidatorStatus_EXITED_UNSLASHED,
want: validator.ExitedUnslashed,
},
{
name: "withdrawal possible",
@@ -259,7 +260,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
},
epoch: primitives.Epoch(45),
},
want: ethpb.ValidatorStatus_WITHDRAWAL_POSSIBLE,
want: validator.WithdrawalPossible,
},
{
name: "withdrawal done",
@@ -273,7 +274,7 @@ func Test_ValidatorSubStatus(t *testing.T) {
},
epoch: primitives.Epoch(45),
},
want: ethpb.ValidatorStatus_WITHDRAWAL_DONE,
want: validator.WithdrawalDone,
},
}
for _, tt := range tests {

View File

@@ -3,8 +3,10 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"handlers.go",
"node.go",
"server.go",
"structs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/node",
visibility = ["//beacon-chain:__subpackages__"],
@@ -17,6 +19,7 @@ go_library(
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/sync:go_default_library",
"//network/http:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/migration:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -35,6 +38,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"handlers_test.go",
"node_test.go",
"server_test.go",
],

View File

@@ -0,0 +1,34 @@
package node
import (
"net/http"
"strconv"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
"go.opencensus.io/trace"
)
// GetSyncStatus requests the beacon node to describe if it's currently syncing or not, and
// if it is, what block it is up to.
func (s *Server) GetSyncStatus(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "node.GetSyncStatus")
defer span.End()
isOptimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
http2.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
return
}
headSlot := s.HeadFetcher.HeadSlot()
response := &SyncStatusResponse{
Data: &SyncStatusResponseData{
HeadSlot: strconv.FormatUint(uint64(headSlot), 10),
SyncDistance: strconv.FormatUint(uint64(s.GenesisTimeFetcher.CurrentSlot()-headSlot), 10),
IsSyncing: s.SyncChecker.Syncing(),
IsOptimistic: isOptimistic,
ElOffline: !s.ExecutionChainInfoFetcher.ExecutionClientConnected(),
},
}
http2.WriteJson(w, response)
}
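
Note: the response mirrors the old gRPC SyncInfo but with string-encoded slots. With the fixture used in the test below (state at slot 100, current slot 110, syncing and optimistic, execution client connected), the handler would write a body roughly like:

{"data":{"head_slot":"100","sync_distance":"10","is_syncing":true,"is_optimistic":true,"el_offline":false}}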

View File

@@ -0,0 +1,52 @@
package node
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
syncmock "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/initial-sync/testing"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
)
func TestSyncStatus(t *testing.T) {
currentSlot := new(primitives.Slot)
*currentSlot = 110
state, err := util.NewBeaconState()
require.NoError(t, err)
err = state.SetSlot(100)
require.NoError(t, err)
chainService := &mock.ChainService{Slot: currentSlot, State: state, Optimistic: true}
syncChecker := &syncmock.Sync{}
syncChecker.IsSyncing = true
s := &Server{
HeadFetcher: chainService,
GenesisTimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: syncChecker,
ExecutionChainInfoFetcher: &testutil.MockExecutionChainInfoFetcher{},
}
request := httptest.NewRequest(http.MethodGet, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetSyncStatus(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SyncStatusResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
assert.Equal(t, "100", resp.Data.HeadSlot)
assert.Equal(t, "10", resp.Data.SyncDistance)
assert.Equal(t, true, resp.Data.IsSyncing)
assert.Equal(t, true, resp.Data.IsOptimistic)
assert.Equal(t, false, resp.Data.ElOffline)
}

View File

@@ -259,29 +259,6 @@ func (_ *Server) GetVersion(ctx context.Context, _ *emptypb.Empty) (*ethpb.Versi
}, nil
}
// GetSyncStatus requests the beacon node to describe if it's currently syncing or not, and
// if it is, what block it is up to.
func (ns *Server) GetSyncStatus(ctx context.Context, _ *emptypb.Empty) (*ethpb.SyncingResponse, error) {
ctx, span := trace.StartSpan(ctx, "node.GetSyncStatus")
defer span.End()
isOptimistic, err := ns.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not check optimistic status: %v", err)
}
headSlot := ns.HeadFetcher.HeadSlot()
return &ethpb.SyncingResponse{
Data: &ethpb.SyncInfo{
HeadSlot: headSlot,
SyncDistance: ns.GenesisTimeFetcher.CurrentSlot() - headSlot,
IsSyncing: ns.SyncChecker.Syncing(),
IsOptimistic: isOptimistic,
ElOffline: !ns.ExecutionChainInfoFetcher.ExecutionClientConnected(),
},
}, nil
}
// GetHealth returns node health status in http status codes. Useful for load balancers.
// Response Usage:
//

View File

@@ -18,20 +18,16 @@ import (
ma "github.com/multiformats/go-multiaddr"
"github.com/prysmaticlabs/go-bitfield"
grpcutil "github.com/prysmaticlabs/prysm/v4/api/grpc"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers"
mockp2p "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
syncmock "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/initial-sync/testing"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/consensus-types/wrapper"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
pb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/emptypb"
)
@@ -161,33 +157,6 @@ func TestGetIdentity(t *testing.T) {
})
}
func TestSyncStatus(t *testing.T) {
currentSlot := new(primitives.Slot)
*currentSlot = 110
state, err := util.NewBeaconState()
require.NoError(t, err)
err = state.SetSlot(100)
require.NoError(t, err)
chainService := &mock.ChainService{Slot: currentSlot, State: state, Optimistic: true}
syncChecker := &syncmock.Sync{}
syncChecker.IsSyncing = true
s := &Server{
HeadFetcher: chainService,
GenesisTimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: syncChecker,
ExecutionChainInfoFetcher: &testutil.MockExecutionChainInfoFetcher{},
}
resp, err := s.GetSyncStatus(context.Background(), &emptypb.Empty{})
require.NoError(t, err)
assert.Equal(t, primitives.Slot(100), resp.Data.HeadSlot)
assert.Equal(t, primitives.Slot(10), resp.Data.SyncDistance)
assert.Equal(t, true, resp.Data.IsSyncing)
assert.Equal(t, true, resp.Data.IsOptimistic)
assert.Equal(t, false, resp.Data.ElOffline)
}
func TestGetPeer(t *testing.T) {
const rawId = "16Uiu2HAkvyYtoQXZNTsthjgLHjEnv7kvwzEmjvsJjWXpbhtqpSUN"
ctx := context.Background()

View File

@@ -0,0 +1,13 @@
package node
type SyncStatusResponse struct {
Data *SyncStatusResponseData `json:"data"`
}
type SyncStatusResponseData struct {
HeadSlot string `json:"head_slot"`
SyncDistance string `json:"sync_distance"`
IsSyncing bool `json:"is_syncing"`
IsOptimistic bool `json:"is_optimistic"`
ElOffline bool `json:"el_offline"`
}

View File

@@ -23,7 +23,7 @@ go_library(
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//network:go_default_library",
"//network/http:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -38,6 +38,7 @@ go_test(
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -50,7 +51,7 @@ go_test(
"//crypto/bls:go_default_library",
"//crypto/bls/blst:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network:go_default_library",
"//network/http:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",

View File

@@ -19,7 +19,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/network"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/wealdtech/go-bytesutil"
@@ -32,15 +32,15 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
blk, err := s.Blocker.Block(r.Context(), []byte(blockId))
if errJson := handleGetBlockError(blk, err); errJson != nil {
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
if blk.Version() == version.Phase0 {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Block rewards are not supported for Phase 0 blocks",
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
@@ -49,114 +49,114 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
// To do this, we replay the state up to the block's slot, but before processing the block.
st, err := s.ReplayerBuilder.ReplayerForSlot(blk.Block().Slot()-1).ReplayToSlot(r.Context(), blk.Block().Slot())
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get state: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
proposerIndex := blk.Block().ProposerIndex()
initBalance, err := st.BalanceAtIndex(proposerIndex)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get proposer's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
st, err = altair.ProcessAttestationsNoVerifySignature(r.Context(), st, blk)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get attestation rewards" + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
attBalance, err := st.BalanceAtIndex(proposerIndex)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get proposer's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
st, err = coreblocks.ProcessAttesterSlashings(r.Context(), st, blk.Block().Body().AttesterSlashings(), validators.SlashValidator)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get attester slashing rewards: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
attSlashingsBalance, err := st.BalanceAtIndex(proposerIndex)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get proposer's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
st, err = coreblocks.ProcessProposerSlashings(r.Context(), st, blk.Block().Body().ProposerSlashings(), validators.SlashValidator)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get proposer slashing rewards" + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
proposerSlashingsBalance, err := st.BalanceAtIndex(proposerIndex)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get proposer's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
sa, err := blk.Block().Body().SyncAggregate()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get sync aggregate: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
var syncCommitteeReward uint64
_, syncCommitteeReward, err = altair.ProcessSyncAggregate(r.Context(), st, sa)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get sync aggregate rewards: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(r.Context())
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get optimistic mode info: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
blkRoot, err := blk.Block().HashTreeRoot()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get block root: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
@@ -172,7 +172,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
ExecutionOptimistic: optimistic,
Finalized: s.FinalizationFetcher.IsFinalized(r.Context(), blkRoot),
}
network.WriteJson(w, response)
http2.WriteJson(w, response)
}
// AttestationRewards retrieves attestation reward info for validators specified by array of public keys or validator index.
@@ -198,20 +198,20 @@ func (s *Server) AttestationRewards(w http.ResponseWriter, r *http.Request) {
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(r.Context())
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get optimistic mode info: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
blkRoot, err := st.LatestBlockHeader().HashTreeRoot()
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get block root: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
@@ -223,53 +223,170 @@ func (s *Server) AttestationRewards(w http.ResponseWriter, r *http.Request) {
ExecutionOptimistic: optimistic,
Finalized: s.FinalizationFetcher.IsFinalized(r.Context(), blkRoot),
}
network.WriteJson(w, resp)
http2.WriteJson(w, resp)
}
// SyncCommitteeRewards retrieves rewards info for sync committee members specified by array of public keys or validator index.
// If no array is provided, return reward info for every committee member.
func (s *Server) SyncCommitteeRewards(w http.ResponseWriter, r *http.Request) {
segments := strings.Split(r.URL.Path, "/")
blockId := segments[len(segments)-1]
blk, err := s.Blocker.Block(r.Context(), []byte(blockId))
if errJson := handleGetBlockError(blk, err); errJson != nil {
http2.WriteError(w, errJson)
return
}
if blk.Version() == version.Phase0 {
errJson := &http2.DefaultErrorJson{
Message: "Sync committee rewards are not supported for Phase 0",
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return
}
st, err := s.ReplayerBuilder.ReplayerForSlot(blk.Block().Slot()-1).ReplayToSlot(r.Context(), blk.Block().Slot())
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not get state: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return
}
sa, err := blk.Block().Body().SyncAggregate()
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not get sync aggregate: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return
}
vals, valIndices, ok := syncRewardsVals(w, r, st)
if !ok {
return
}
preProcessBals := make([]uint64, len(vals))
for i, valIdx := range valIndices {
preProcessBals[i], err = st.BalanceAtIndex(valIdx)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not get validator's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return
}
}
_, proposerReward, err := altair.ProcessSyncAggregate(r.Context(), st, sa)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not get sync aggregate rewards: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return
}
rewards := make([]int, len(preProcessBals))
proposerIndex := blk.Block().ProposerIndex()
for i, valIdx := range valIndices {
bal, err := st.BalanceAtIndex(valIdx)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not get validator's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return
}
rewards[i] = int(bal - preProcessBals[i]) // lint:ignore uintcast
if valIdx == proposerIndex {
rewards[i] = rewards[i] - int(proposerReward) // lint:ignore uintcast
}
}
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(r.Context())
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not get optimistic mode info: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return
}
blkRoot, err := blk.Block().HashTreeRoot()
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not get block root: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return
}
scRewards := make([]SyncCommitteeReward, len(valIndices))
for i, valIdx := range valIndices {
scRewards[i] = SyncCommitteeReward{
ValidatorIndex: strconv.FormatUint(uint64(valIdx), 10),
Reward: strconv.Itoa(rewards[i]),
}
}
response := &SyncCommitteeRewardsResponse{
Data: scRewards,
ExecutionOptimistic: optimistic,
Finalized: s.FinalizationFetcher.IsFinalized(r.Context(), blkRoot),
}
http2.WriteJson(w, response)
}
func (s *Server) attRewardsState(w http.ResponseWriter, r *http.Request) (state.BeaconState, bool) {
segments := strings.Split(r.URL.Path, "/")
requestedEpoch, err := strconv.ParseUint(segments[len(segments)-1], 10, 64)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not decode epoch: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return nil, false
}
if primitives.Epoch(requestedEpoch) < params.BeaconConfig().AltairForkEpoch {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Attestation rewards are not supported for Phase 0",
Code: http.StatusNotFound,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return nil, false
}
currentEpoch := uint64(slots.ToEpoch(s.TimeFetcher.CurrentSlot()))
if requestedEpoch+1 >= currentEpoch {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Code: http.StatusNotFound,
Message: "Attestation rewards are available after two epoch transitions to ensure all attestations have a chance of inclusion",
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return nil, false
}
nextEpochEnd, err := slots.EpochEnd(primitives.Epoch(requestedEpoch + 1))
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get next epoch's ending slot: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return nil, false
}
st, err := s.Stater.StateBySlot(r.Context(), nextEpochEnd)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get state for epoch's starting slot: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return nil, false
}
return st, true
@@ -282,73 +399,25 @@ func attRewardsBalancesAndVals(
) (*precompute.Balance, []*precompute.Validator, []primitives.ValidatorIndex, bool) {
allVals, bal, err := altair.InitializePrecomputeValidators(r.Context(), st)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not initialize precompute validators: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return nil, nil, nil, false
}
allVals, bal, err = altair.ProcessEpochParticipation(r.Context(), st, bal, allVals)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not process epoch participation: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return nil, nil, nil, false
}
var rawValIds []string
if r.Body != http.NoBody {
if err = json.NewDecoder(r.Body).Decode(&rawValIds); err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not decode validators: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
}
valIndices := make([]primitives.ValidatorIndex, len(rawValIds))
for i, v := range rawValIds {
index, err := strconv.ParseUint(v, 10, 64)
if err != nil {
pubkey, err := bytesutil.FromHexString(v)
if err != nil || len(pubkey) != fieldparams.BLSPubkeyLength {
errJson := &network.DefaultErrorJson{
Message: fmt.Sprintf("%s is not a validator index or pubkey", v),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
var ok bool
valIndices[i], ok = st.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubkey))
if !ok {
errJson := &network.DefaultErrorJson{
Message: fmt.Sprintf("No validator index found for pubkey %#x", pubkey),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
} else {
if index >= uint64(st.NumValidators()) {
errJson := &network.DefaultErrorJson{
Message: fmt.Sprintf("Validator index %d is too large. Maximum allowed index is %d", index, st.NumValidators()-1),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
valIndices[i] = primitives.ValidatorIndex(index)
}
}
if len(valIndices) == 0 {
valIndices = make([]primitives.ValidatorIndex, len(allVals))
for i := 0; i < len(allVals); i++ {
valIndices[i] = primitives.ValidatorIndex(i)
}
valIndices, ok := requestedValIndices(w, r, st, allVals)
if !ok {
return nil, nil, nil, false
}
if len(valIndices) == len(allVals) {
return bal, allVals, valIndices, true
@@ -394,11 +463,11 @@ func idealAttRewards(
}
deltas, err := altair.AttestationsDelta(st, bal, idealVals)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get attestations delta: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return nil, false
}
for i, d := range deltas {
@@ -430,11 +499,11 @@ func totalAttRewards(
}
deltas, err := altair.AttestationsDelta(st, bal, vals)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: "Could not get attestations delta: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return nil, false
}
for i, d := range deltas {
@@ -453,22 +522,136 @@ func totalAttRewards(
return totalRewards, true
}
func handleGetBlockError(blk interfaces.ReadOnlySignedBeaconBlock, err error) *network.DefaultErrorJson {
func syncRewardsVals(
w http.ResponseWriter,
r *http.Request,
st state.BeaconState,
) ([]*precompute.Validator, []primitives.ValidatorIndex, bool) {
allVals, _, err := altair.InitializePrecomputeValidators(r.Context(), st)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not initialize precompute validators: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return nil, nil, false
}
valIndices, ok := requestedValIndices(w, r, st, allVals)
if !ok {
return nil, nil, false
}
sc, err := st.CurrentSyncCommittee()
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not get current sync committee: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return nil, nil, false
}
allScIndices := make([]primitives.ValidatorIndex, len(sc.Pubkeys))
for i, pk := range sc.Pubkeys {
valIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(pk))
if !ok {
errJson := &http2.DefaultErrorJson{
Message: fmt.Sprintf("No validator index found for pubkey %#x", pk),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return nil, nil, false
}
allScIndices[i] = valIdx
}
scIndices := make([]primitives.ValidatorIndex, 0, len(allScIndices))
scVals := make([]*precompute.Validator, 0, len(allScIndices))
for _, valIdx := range valIndices {
for _, scIdx := range allScIndices {
if valIdx == scIdx {
scVals = append(scVals, allVals[valIdx])
scIndices = append(scIndices, valIdx)
break
}
}
}
return scVals, scIndices, true
}
func requestedValIndices(w http.ResponseWriter, r *http.Request, st state.BeaconState, allVals []*precompute.Validator) ([]primitives.ValidatorIndex, bool) {
var rawValIds []string
if r.Body != http.NoBody {
if err := json.NewDecoder(r.Body).Decode(&rawValIds); err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not decode validators: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return nil, false
}
}
valIndices := make([]primitives.ValidatorIndex, len(rawValIds))
for i, v := range rawValIds {
index, err := strconv.ParseUint(v, 10, 64)
if err != nil {
pubkey, err := bytesutil.FromHexString(v)
if err != nil || len(pubkey) != fieldparams.BLSPubkeyLength {
errJson := &http2.DefaultErrorJson{
Message: fmt.Sprintf("%s is not a validator index or pubkey", v),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return nil, false
}
var ok bool
valIndices[i], ok = st.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubkey))
if !ok {
errJson := &http2.DefaultErrorJson{
Message: fmt.Sprintf("No validator index found for pubkey %#x", pubkey),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return nil, false
}
} else {
if index >= uint64(st.NumValidators()) {
errJson := &http2.DefaultErrorJson{
Message: fmt.Sprintf("Validator index %d is too large. Maximum allowed index is %d", index, st.NumValidators()-1),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return nil, false
}
valIndices[i] = primitives.ValidatorIndex(index)
}
}
if len(valIndices) == 0 {
valIndices = make([]primitives.ValidatorIndex, len(allVals))
for i := 0; i < len(allVals); i++ {
valIndices[i] = primitives.ValidatorIndex(i)
}
}
return valIndices, true
}
func handleGetBlockError(blk interfaces.ReadOnlySignedBeaconBlock, err error) *http2.DefaultErrorJson {
if errors.Is(err, lookup.BlockIdParseError{}) {
return &http2.DefaultErrorJson{
Message: "Invalid block ID: " + err.Error(),
Code: http.StatusBadRequest,
}
}
if err != nil {
return &http2.DefaultErrorJson{
Message: "Could not get block from block ID: " + err.Error(),
Code: http.StatusInternalServerError,
}
}
if err := blocks.BeaconBlockIsNil(blk); err != nil {
return &http2.DefaultErrorJson{
Message: "Could not find requested block: " + err.Error(),
Code: http.StatusNotFound,
}
}
return nil
}
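For reference, a minimal client-side sketch of calling the sync committee rewards handler that these helpers serve. The URL assumes the handler is mounted at the spec's Beacon API path and that a local node serves HTTP on Prysm's default port 3500; only the trailing block ID matters to the parsing shown above.

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask for sync committee rewards of validator 20 at the head block.
	body := bytes.NewBufferString(`["20"]`)
	resp, err := http.Post("http://localhost:3500/eth/v1/beacon/rewards/sync_committee/head", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(out))
}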

View File

@@ -14,6 +14,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -26,7 +27,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/crypto/bls/blst"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/network"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
@@ -34,6 +35,8 @@ import (
)
func TestBlockRewards(t *testing.T) {
helpers.ClearCache()
valCount := 64
st, err := util.NewBeaconStateCapella()
@@ -65,7 +68,7 @@ func TestBlockRewards(t *testing.T) {
bRoots[0] = slot0bRoot
require.NoError(t, st.SetBlockRoots(bRoots))
b := util.HydrateSignedBeaconBlockAltair(util.NewBeaconBlockAltair())
b := util.HydrateSignedBeaconBlockCapella(util.NewBeaconBlockCapella())
b.Block.Slot = 2
// we have to set the proposer index to the value that will be randomly chosen (fortunately it's deterministic)
b.Block.ProposerIndex = 12
@@ -194,7 +197,7 @@ func TestBlockRewards(t *testing.T) {
s.BlockRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "Block rewards are not supported for Phase 0 blocks", e.Message)
@@ -206,6 +209,7 @@ func TestAttestationRewards(t *testing.T) {
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 1
params.OverrideBeaconConfig(cfg)
helpers.ClearCache()
valCount := 64
@@ -395,7 +399,7 @@ func TestAttestationRewards(t *testing.T) {
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "foo is not a validator index or pubkey", e.Message)
@@ -416,7 +420,7 @@ func TestAttestationRewards(t *testing.T) {
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "No validator index found for pubkey "+pubkey, e.Message)
@@ -434,7 +438,7 @@ func TestAttestationRewards(t *testing.T) {
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "Validator index 999 is too large. Maximum allowed index is 63", e.Message)
@@ -447,7 +451,7 @@ func TestAttestationRewards(t *testing.T) {
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusNotFound, e.Code)
assert.Equal(t, "Attestation rewards are not supported for Phase 0", e.Message)
@@ -460,7 +464,7 @@ func TestAttestationRewards(t *testing.T) {
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "Could not decode epoch"))
@@ -473,9 +477,304 @@ func TestAttestationRewards(t *testing.T) {
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusNotFound, e.Code)
assert.Equal(t, "Attestation rewards are available after two epoch transitions to ensure all attestations have a chance of inclusion", e.Message)
})
}
func TestSyncCommiteeRewards(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 1
params.OverrideBeaconConfig(cfg)
helpers.ClearCache()
const valCount = 1024
// we have to set the proposer index to the value that will be randomly chosen (fortunately it's deterministic)
const proposerIndex = 84
st, err := util.NewBeaconStateCapella()
require.NoError(t, err)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch-1))
validators := make([]*eth.Validator, 0, valCount)
secretKeys := make([]bls.SecretKey, 0, valCount)
for i := 0; i < valCount; i++ {
blsKey, err := bls.RandKey()
require.NoError(t, err)
secretKeys = append(secretKeys, blsKey)
validators = append(validators, &eth.Validator{
PublicKey: blsKey.PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
})
}
require.NoError(t, st.SetValidators(validators))
require.NoError(t, st.SetInactivityScores(make([]uint64, len(validators))))
syncCommitteePubkeys := make([][]byte, fieldparams.SyncCommitteeLength)
for i := 0; i < fieldparams.SyncCommitteeLength; i++ {
syncCommitteePubkeys[i] = secretKeys[i].PublicKey().Marshal()
}
aggPubkey, err := bls.AggregatePublicKeys(syncCommitteePubkeys)
require.NoError(t, err)
require.NoError(t, st.SetCurrentSyncCommittee(&eth.SyncCommittee{
Pubkeys: syncCommitteePubkeys,
AggregatePubkey: aggPubkey.Marshal(),
}))
b := util.HydrateSignedBeaconBlockCapella(util.NewBeaconBlockCapella())
b.Block.Slot = 32
b.Block.ProposerIndex = proposerIndex
scBits := bitfield.NewBitvector512()
// last 10 sync committee members didn't perform their duty
for i := uint64(0); i < fieldparams.SyncCommitteeLength-10; i++ {
scBits.SetBitAt(i, true)
}
domain, err := signing.Domain(st.Fork(), 0, params.BeaconConfig().DomainSyncCommittee, st.GenesisValidatorsRoot())
require.NoError(t, err)
sszBytes := primitives.SSZBytes("")
r, err := signing.ComputeSigningRoot(&sszBytes, domain)
require.NoError(t, err)
// Bits set in sync committee bits determine which validators will be treated as participating in sync committee.
// These validators have to sign the message.
sigs := make([]bls.Signature, fieldparams.SyncCommitteeLength-10)
for i := range sigs {
sigs[i], err = blst.SignatureFromBytes(secretKeys[i].Sign(r[:]).Marshal())
require.NoError(t, err)
}
aggSig := bls.AggregateSignatures(sigs).Marshal()
b.Block.Body.SyncAggregate = &eth.SyncAggregate{SyncCommitteeBits: scBits, SyncCommitteeSignature: aggSig}
sbb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
phase0block, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
currentSlot := params.BeaconConfig().SlotsPerEpoch
mockChainService := &mock.ChainService{Optimistic: true, Slot: &currentSlot}
s := &Server{
Blocker: &testutil.MockBlocker{SlotBlockMap: map[primitives.Slot]interfaces.ReadOnlySignedBeaconBlock{
0: phase0block,
32: sbb,
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
ReplayerBuilder: mockstategen.NewMockReplayerBuilder(mockstategen.WithMockState(st)),
}
t.Run("ok - filtered vals", func(t *testing.T) {
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance)
}
require.NoError(t, st.SetBalances(balances))
url := "http://only.the.slot.number.at.the.end.is.important/32"
var body bytes.Buffer
pubkey := fmt.Sprintf("%#x", secretKeys[10].PublicKey().Marshal())
valIds, err := json.Marshal([]string{"20", pubkey})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SyncCommitteeRewards(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SyncCommitteeRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
sum := uint64(0)
for _, scReward := range resp.Data {
r, err := strconv.ParseUint(scReward.Reward, 10, 64)
require.NoError(t, err)
sum += r
}
assert.Equal(t, uint64(1396), sum)
assert.Equal(t, true, resp.ExecutionOptimistic)
assert.Equal(t, false, resp.Finalized)
})
t.Run("ok - all vals", func(t *testing.T) {
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance)
}
require.NoError(t, st.SetBalances(balances))
url := "http://only.the.slot.number.at.the.end.is.important/32"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SyncCommitteeRewards(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SyncCommitteeRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 512, len(resp.Data))
sum := 0
for _, scReward := range resp.Data {
r, err := strconv.Atoi(scReward.Reward)
require.NoError(t, err)
sum += r
}
assert.Equal(t, 343416, sum)
})
t.Run("ok - validator outside sync committee is ignored", func(t *testing.T) {
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance)
}
require.NoError(t, st.SetBalances(balances))
url := "http://only.the.slot.number.at.the.end.is.important/32"
var body bytes.Buffer
pubkey := fmt.Sprintf("%#x", secretKeys[10].PublicKey().Marshal())
valIds, err := json.Marshal([]string{"20", "999", pubkey})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SyncCommitteeRewards(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SyncCommitteeRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
sum := 0
for _, scReward := range resp.Data {
r, err := strconv.Atoi(scReward.Reward)
require.NoError(t, err)
sum += r
}
assert.Equal(t, 1396, sum)
})
t.Run("ok - proposer reward is deducted", func(t *testing.T) {
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance)
}
require.NoError(t, st.SetBalances(balances))
url := "http://only.the.slot.number.at.the.end.is.important/32"
var body bytes.Buffer
pubkey := fmt.Sprintf("%#x", secretKeys[10].PublicKey().Marshal())
valIds, err := json.Marshal([]string{"20", "84", pubkey})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SyncCommitteeRewards(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SyncCommitteeRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 3, len(resp.Data))
sum := 0
for _, scReward := range resp.Data {
r, err := strconv.Atoi(scReward.Reward)
require.NoError(t, err)
sum += r
}
assert.Equal(t, 2094, sum)
})
t.Run("invalid validator index/pubkey", func(t *testing.T) {
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance)
}
require.NoError(t, st.SetBalances(balances))
url := "http://only.the.slot.number.at.the.end.is.important/32"
var body bytes.Buffer
valIds, err := json.Marshal([]string{"10", "foo"})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SyncCommitteeRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "foo is not a validator index or pubkey", e.Message)
})
t.Run("unknown validator pubkey", func(t *testing.T) {
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance)
}
require.NoError(t, st.SetBalances(balances))
url := "http://only.the.slot.number.at.the.end.is.important/32"
var body bytes.Buffer
privkey, err := bls.RandKey()
require.NoError(t, err)
pubkey := fmt.Sprintf("%#x", privkey.PublicKey().Marshal())
valIds, err := json.Marshal([]string{"10", pubkey})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SyncCommitteeRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "No validator index found for pubkey "+pubkey, e.Message)
})
t.Run("validator index too large", func(t *testing.T) {
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance)
}
require.NoError(t, st.SetBalances(balances))
url := "http://only.the.slot.number.at.the.end.is.important/32"
var body bytes.Buffer
valIds, err := json.Marshal([]string{"10", "9999"})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SyncCommitteeRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "Validator index 9999 is too large. Maximum allowed index is 1023", e.Message)
})
t.Run("phase 0", func(t *testing.T) {
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance)
}
require.NoError(t, st.SetBalances(balances))
url := "http://only.the.slot.number.at.the.end.is.important/0"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SyncCommitteeRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "Sync committee rewards are not supported for Phase 0", e.Message)
})
}

View File

@@ -11,8 +11,7 @@ type Server struct {
OptimisticModeFetcher blockchain.OptimisticModeFetcher
FinalizationFetcher blockchain.FinalizationFetcher
ReplayerBuilder stategen.ReplayerBuilder
// TODO: Init
TimeFetcher blockchain.TimeFetcher
Stater lookup.Stater
HeadFetcher blockchain.HeadFetcher
TimeFetcher blockchain.TimeFetcher
Stater lookup.Stater
HeadFetcher blockchain.HeadFetcher
}

View File

@@ -40,3 +40,14 @@ type TotalAttestationReward struct {
Source string `json:"source"`
InclusionDelay string `json:"inclusion_delay"`
}
type SyncCommitteeRewardsResponse struct {
Data []SyncCommitteeReward `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type SyncCommitteeReward struct {
ValidatorIndex string `json:"validator_index"`
Reward string `json:"reward"`
}
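A minimal sketch (same package) of the JSON these response structs marshal to; the values are illustrative, and the sync committee rewards tests earlier in this diff unmarshal the endpoint response into the same type.

resp := &SyncCommitteeRewardsResponse{
	Data:                []SyncCommitteeReward{{ValidatorIndex: "20", Reward: "698"}},
	ExecutionOptimistic: true,
	Finalized:           false,
}
b, _ := json.Marshal(resp)
// b: {"data":[{"validator_index":"20","reward":"698"}],"execution_optimistic":true,"finalized":false}
_ = b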

View File

@@ -0,0 +1,33 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"errors.go",
"request.go",
"structs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/sync:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/http:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["errors_test.go"],
embed = [":go_default_library"],
deps = [
"//testing/assert:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -0,0 +1,45 @@
package shared
import (
"fmt"
"strings"
)
// DecodeError represents an error resulting from trying to decode an HTTP request.
// It tracks the full field name for which decoding failed.
type DecodeError struct {
path []string
err error
}
// NewDecodeError wraps an error (either the initial decoding error or another DecodeError).
// The current field that failed decoding must be passed in.
func NewDecodeError(err error, field string) *DecodeError {
de, ok := err.(*DecodeError)
if ok {
return &DecodeError{path: append([]string{field}, de.path...), err: de.err}
}
return &DecodeError{path: []string{field}, err: err}
}
// Error returns the formatted error message which contains the full field name and the actual decoding error.
func (e *DecodeError) Error() string {
return fmt.Sprintf("could not decode %s: %s", strings.Join(e.path, "."), e.err.Error())
}
// IndexedVerificationFailureError wraps a collection of verification failures.
type IndexedVerificationFailureError struct {
Message string `json:"message"`
Code int `json:"code"`
Failures []*IndexedVerificationFailure `json:"failures"`
}
func (e *IndexedVerificationFailureError) StatusCode() int {
return e.Code
}
// IndexedVerificationFailure represents an issue when verifying a single indexed object e.g. an item in an array.
type IndexedVerificationFailure struct {
Index int `json:"index"`
Message string `json:"message"`
}

View File

@@ -0,0 +1,16 @@
package shared
import (
"testing"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
)
func TestDecodeError(t *testing.T) {
e := errors.New("not a number")
de := NewDecodeError(e, "Z")
de = NewDecodeError(de, "Y")
de = NewDecodeError(de, "X")
assert.Equal(t, "could not decode X.Y.Z: not a number", de.Error())
}

View File

@@ -0,0 +1,136 @@
package shared
import (
"context"
"encoding/json"
"net/http"
"strconv"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
)
func UintFromQuery(w http.ResponseWriter, r *http.Request, name string) (bool, string, uint64) {
raw := r.URL.Query().Get(name)
if raw != "" {
v, valid := ValidateUint(w, name, raw)
if !valid {
return false, "", 0
}
return true, raw, v
}
return true, "", 0
}
func ValidateHex(w http.ResponseWriter, name string, s string) bool {
if s == "" {
errJson := &http2.DefaultErrorJson{
Message: name + " is required",
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return false
}
if !bytesutil.IsHex([]byte(s)) {
errJson := &http2.DefaultErrorJson{
Message: name + " is invalid",
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return false
}
return true
}
func ValidateUint(w http.ResponseWriter, name string, s string) (uint64, bool) {
if s == "" {
errJson := &http2.DefaultErrorJson{
Message: name + " is required",
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return 0, false
}
v, err := strconv.ParseUint(s, 10, 64)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: name + " is invalid: " + err.Error(),
Code: http.StatusBadRequest,
}
http2.WriteError(w, errJson)
return 0, false
}
return v, true
}
// IsSyncing checks whether the beacon node is currently syncing and writes out the sync status.
func IsSyncing(
ctx context.Context,
w http.ResponseWriter,
syncChecker sync.Checker,
headFetcher blockchain.HeadFetcher,
timeFetcher blockchain.TimeFetcher,
optimisticModeFetcher blockchain.OptimisticModeFetcher,
) bool {
if !syncChecker.Syncing() {
return false
}
headSlot := headFetcher.HeadSlot()
isOptimistic, err := optimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not check optimistic status: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return true
}
syncDetails := &SyncDetailsContainer{
Data: &SyncDetails{
HeadSlot: strconv.FormatUint(uint64(headSlot), 10),
SyncDistance: strconv.FormatUint(uint64(timeFetcher.CurrentSlot()-headSlot), 10),
IsSyncing: true,
IsOptimistic: isOptimistic,
},
}
msg := "Beacon node is currently syncing and not serving request on that endpoint"
details, err := json.Marshal(syncDetails)
if err == nil {
msg += " Details: " + string(details)
}
errJson := &http2.DefaultErrorJson{
Message: msg,
Code: http.StatusServiceUnavailable}
http2.WriteError(w, errJson)
return true
}
// IsOptimistic checks whether the beacon node is currently optimistic and writes it to the response.
func IsOptimistic(
ctx context.Context,
w http.ResponseWriter,
optimisticModeFetcher blockchain.OptimisticModeFetcher,
) (bool, error) {
isOptimistic, err := optimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
errJson := &http2.DefaultErrorJson{
Message: "Could not check optimistic status: " + err.Error(),
Code: http.StatusInternalServerError,
}
http2.WriteError(w, errJson)
return true, err
}
if !isOptimistic {
return false, nil
}
errJson := &http2.DefaultErrorJson{
Code: http.StatusServiceUnavailable,
Message: "Beacon node is currently optimistic and not serving validators",
}
http2.WriteError(w, errJson)
return true, nil
}
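Both guards above write their own error response and are meant to short-circuit a handler; a minimal sketch of that pattern from a hypothetical handler in an importing package (the same shape GetAttestationData uses later in this diff), assuming the handler's Server exposes the listed fetchers:

func (s *Server) someHandler(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	// 503 with sync details while the node is still syncing.
	if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
		return
	}
	// 503 while the node is optimistic (or 500 if the optimistic check itself fails).
	if isOptimistic, err := shared.IsOptimistic(ctx, w, s.OptimisticModeFetcher); isOptimistic || err != nil {
		return
	}
	// ... serve the request ...
}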

View File

@@ -0,0 +1,398 @@
package shared
import (
"fmt"
"strconv"
"github.com/ethereum/go-ethereum/common/hexutil"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
type Attestation struct {
AggregationBits string `json:"aggregation_bits" validate:"required,hexadecimal"`
Data *AttestationData `json:"data" validate:"required"`
Signature string `json:"signature" validate:"required,hexadecimal"`
}
type AttestationData struct {
Slot string `json:"slot" validate:"required,number,gte=0"`
CommitteeIndex string `json:"index" validate:"required,number,gte=0"`
BeaconBlockRoot string `json:"beacon_block_root" validate:"required,hexadecimal"`
Source *Checkpoint `json:"source" validate:"required"`
Target *Checkpoint `json:"target" validate:"required"`
}
type Checkpoint struct {
Epoch string `json:"epoch" validate:"required,number,gte=0"`
Root string `json:"root" validate:"required,hexadecimal"`
}
type SignedContributionAndProof struct {
Message *ContributionAndProof `json:"message" validate:"required"`
Signature string `json:"signature" validate:"required,hexadecimal"`
}
type ContributionAndProof struct {
AggregatorIndex string `json:"aggregator_index" validate:"required,number,gte=0"`
Contribution *SyncCommitteeContribution `json:"contribution" validate:"required"`
SelectionProof string `json:"selection_proof" validate:"required,hexadecimal"`
}
type SyncCommitteeContribution struct {
Slot string `json:"slot" validate:"required,number,gte=0"`
BeaconBlockRoot string `json:"beacon_block_root" hex:"true" validate:"required,hexadecimal"`
SubcommitteeIndex string `json:"subcommittee_index" validate:"required,number,gte=0"`
AggregationBits string `json:"aggregation_bits" hex:"true" validate:"required,hexadecimal"`
Signature string `json:"signature" hex:"true" validate:"required,hexadecimal"`
}
type SignedAggregateAttestationAndProof struct {
Message *AggregateAttestationAndProof `json:"message" validate:"required"`
Signature string `json:"signature" validate:"required,hexadecimal"`
}
type AggregateAttestationAndProof struct {
AggregatorIndex string `json:"aggregator_index" validate:"required,number,gte=0"`
Aggregate *Attestation `json:"aggregate" validate:"required"`
SelectionProof string `json:"selection_proof" validate:"required,hexadecimal"`
}
type SyncCommitteeSubscription struct {
ValidatorIndex string `json:"validator_index" validate:"required,number,gte=0"`
SyncCommitteeIndices []string `json:"sync_committee_indices" validate:"required,dive,number,gte=0"`
UntilEpoch string `json:"until_epoch" validate:"required,number,gte=0"`
}
type BeaconCommitteeSubscription struct {
ValidatorIndex string `json:"validator_index" validate:"required,number,gte=0"`
CommitteeIndex string `json:"committee_index" validate:"required,number,gte=0"`
CommitteesAtSlot string `json:"committees_at_slot" validate:"required,number,gte=0"`
Slot string `json:"slot" validate:"required,number,gte=0"`
IsAggregator bool `json:"is_aggregator"`
}
type ValidatorRegistration struct {
FeeRecipient string `json:"fee_recipient" validate:"required,hexadecimal"`
GasLimit string `json:"gas_limit" validate:"required,number,gte=0"`
Timestamp string `json:"timestamp" validate:"required,number,gte=0"`
Pubkey string `json:"pubkey" validate:"required,hexadecimal"`
}
type SignedValidatorRegistration struct {
Message *ValidatorRegistration `json:"message" validate:"required,dive"`
Signature string `json:"signature" validate:"required,hexadecimal"`
}
func (s *SignedValidatorRegistration) ToConsensus() (*eth.SignedValidatorRegistrationV1, error) {
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, NewDecodeError(err, "Message")
}
sig, err := hexutil.Decode(s.Signature)
if err != nil {
return nil, NewDecodeError(err, "Signature")
}
if len(sig) != fieldparams.BLSSignatureLength {
return nil, fmt.Errorf("Signature length was %d when expecting length %d", len(sig), fieldparams.BLSSignatureLength)
}
return &eth.SignedValidatorRegistrationV1{
Message: msg,
Signature: sig,
}, nil
}
func (s *ValidatorRegistration) ToConsensus() (*eth.ValidatorRegistrationV1, error) {
feeRecipient, err := hexutil.Decode(s.FeeRecipient)
if err != nil {
return nil, NewDecodeError(err, "FeeRecipient")
}
if len(feeRecipient) != fieldparams.FeeRecipientLength {
return nil, fmt.Errorf("feeRecipient length was %d when expecting length %d", len(feeRecipient), fieldparams.FeeRecipientLength)
}
pubKey, err := hexutil.Decode(s.Pubkey)
if err != nil {
return nil, NewDecodeError(err, "FeeRecipient")
}
if len(pubKey) != fieldparams.BLSPubkeyLength {
return nil, fmt.Errorf("FeeRecipient length was %d when expecting length %d", len(pubKey), fieldparams.BLSPubkeyLength)
}
gasLimit, err := strconv.ParseUint(s.GasLimit, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "GasLimit")
}
timestamp, err := strconv.ParseUint(s.Timestamp, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "Timestamp")
}
return &eth.ValidatorRegistrationV1{
FeeRecipient: feeRecipient,
GasLimit: gasLimit,
Timestamp: timestamp,
Pubkey: pubKey,
}, nil
}
func (s *SignedContributionAndProof) ToConsensus() (*eth.SignedContributionAndProof, error) {
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, NewDecodeError(err, "Message")
}
sig, err := hexutil.Decode(s.Signature)
if err != nil {
return nil, NewDecodeError(err, "Signature")
}
return &eth.SignedContributionAndProof{
Message: msg,
Signature: sig,
}, nil
}
func (c *ContributionAndProof) ToConsensus() (*eth.ContributionAndProof, error) {
contribution, err := c.Contribution.ToConsensus()
if err != nil {
return nil, NewDecodeError(err, "Contribution")
}
aggregatorIndex, err := strconv.ParseUint(c.AggregatorIndex, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "AggregatorIndex")
}
selectionProof, err := hexutil.Decode(c.SelectionProof)
if err != nil {
return nil, NewDecodeError(err, "SelectionProof")
}
return &eth.ContributionAndProof{
AggregatorIndex: primitives.ValidatorIndex(aggregatorIndex),
Contribution: contribution,
SelectionProof: selectionProof,
}, nil
}
func (s *SyncCommitteeContribution) ToConsensus() (*eth.SyncCommitteeContribution, error) {
slot, err := strconv.ParseUint(s.Slot, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "Slot")
}
bbRoot, err := hexutil.Decode(s.BeaconBlockRoot)
if err != nil {
return nil, NewDecodeError(err, "BeaconBlockRoot")
}
subcommitteeIndex, err := strconv.ParseUint(s.SubcommitteeIndex, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "SubcommitteeIndex")
}
aggBits, err := hexutil.Decode(s.AggregationBits)
if err != nil {
return nil, NewDecodeError(err, "AggregationBits")
}
sig, err := hexutil.Decode(s.Signature)
if err != nil {
return nil, NewDecodeError(err, "Signature")
}
return &eth.SyncCommitteeContribution{
Slot: primitives.Slot(slot),
BlockRoot: bbRoot,
SubcommitteeIndex: subcommitteeIndex,
AggregationBits: aggBits,
Signature: sig,
}, nil
}
func (s *SignedAggregateAttestationAndProof) ToConsensus() (*eth.SignedAggregateAttestationAndProof, error) {
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, NewDecodeError(err, "Message")
}
sig, err := hexutil.Decode(s.Signature)
if err != nil {
return nil, NewDecodeError(err, "Signature")
}
return &eth.SignedAggregateAttestationAndProof{
Message: msg,
Signature: sig,
}, nil
}
func (a *AggregateAttestationAndProof) ToConsensus() (*eth.AggregateAttestationAndProof, error) {
aggIndex, err := strconv.ParseUint(a.AggregatorIndex, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "AggregatorIndex")
}
agg, err := a.Aggregate.ToConsensus()
if err != nil {
return nil, NewDecodeError(err, "Aggregate")
}
proof, err := hexutil.Decode(a.SelectionProof)
if err != nil {
return nil, NewDecodeError(err, "SelectionProof")
}
return &eth.AggregateAttestationAndProof{
AggregatorIndex: primitives.ValidatorIndex(aggIndex),
Aggregate: agg,
SelectionProof: proof,
}, nil
}
func (a *Attestation) ToConsensus() (*eth.Attestation, error) {
aggBits, err := hexutil.Decode(a.AggregationBits)
if err != nil {
return nil, NewDecodeError(err, "AggregationBits")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, NewDecodeError(err, "Data")
}
sig, err := hexutil.Decode(a.Signature)
if err != nil {
return nil, NewDecodeError(err, "Signature")
}
return &eth.Attestation{
AggregationBits: aggBits,
Data: data,
Signature: sig,
}, nil
}
func AttestationFromConsensus(a *eth.Attestation) *Attestation {
return &Attestation{
AggregationBits: hexutil.Encode(a.AggregationBits),
Data: AttestationDataFromConsensus(a.Data),
Signature: hexutil.Encode(a.Signature),
}
}
func (a *AttestationData) ToConsensus() (*eth.AttestationData, error) {
slot, err := strconv.ParseUint(a.Slot, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "Slot")
}
committeeIndex, err := strconv.ParseUint(a.CommitteeIndex, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "CommitteeIndex")
}
bbRoot, err := hexutil.Decode(a.BeaconBlockRoot)
if err != nil {
return nil, NewDecodeError(err, "BeaconBlockRoot")
}
source, err := a.Source.ToConsensus()
if err != nil {
return nil, NewDecodeError(err, "Source")
}
target, err := a.Target.ToConsensus()
if err != nil {
return nil, NewDecodeError(err, "Target")
}
return &eth.AttestationData{
Slot: primitives.Slot(slot),
CommitteeIndex: primitives.CommitteeIndex(committeeIndex),
BeaconBlockRoot: bbRoot,
Source: source,
Target: target,
}, nil
}
func AttestationDataFromConsensus(a *eth.AttestationData) *AttestationData {
return &AttestationData{
Slot: strconv.FormatUint(uint64(a.Slot), 10),
CommitteeIndex: strconv.FormatUint(uint64(a.CommitteeIndex), 10),
BeaconBlockRoot: hexutil.Encode(a.BeaconBlockRoot),
Source: CheckpointFromConsensus(a.Source),
Target: CheckpointFromConsensus(a.Target),
}
}
func (c *Checkpoint) ToConsensus() (*eth.Checkpoint, error) {
epoch, err := strconv.ParseUint(c.Epoch, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "Epoch")
}
root, err := hexutil.Decode(c.Root)
if err != nil {
return nil, NewDecodeError(err, "Root")
}
return &eth.Checkpoint{
Epoch: primitives.Epoch(epoch),
Root: root,
}, nil
}
func CheckpointFromConsensus(c *eth.Checkpoint) *Checkpoint {
return &Checkpoint{
Epoch: strconv.FormatUint(uint64(c.Epoch), 10),
Root: hexutil.Encode(c.Root),
}
}
func (s *SyncCommitteeSubscription) ToConsensus() (*validator.SyncCommitteeSubscription, error) {
index, err := strconv.ParseUint(s.ValidatorIndex, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "ValidatorIndex")
}
scIndices := make([]uint64, len(s.SyncCommitteeIndices))
for i, ix := range s.SyncCommitteeIndices {
scIndices[i], err = strconv.ParseUint(ix, 10, 64)
if err != nil {
return nil, NewDecodeError(err, fmt.Sprintf("SyncCommitteeIndices[%d]", i))
}
}
epoch, err := strconv.ParseUint(s.UntilEpoch, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "UntilEpoch")
}
return &validator.SyncCommitteeSubscription{
ValidatorIndex: primitives.ValidatorIndex(index),
SyncCommitteeIndices: scIndices,
UntilEpoch: primitives.Epoch(epoch),
}, nil
}
func (b *BeaconCommitteeSubscription) ToConsensus() (*validator.BeaconCommitteeSubscription, error) {
valIndex, err := strconv.ParseUint(b.ValidatorIndex, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "ValidatorIndex")
}
committeeIndex, err := strconv.ParseUint(b.CommitteeIndex, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "CommitteeIndex")
}
committeesAtSlot, err := strconv.ParseUint(b.CommitteesAtSlot, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "CommitteesAtSlot")
}
slot, err := strconv.ParseUint(b.Slot, 10, 64)
if err != nil {
return nil, NewDecodeError(err, "Slot")
}
return &validator.BeaconCommitteeSubscription{
ValidatorIndex: primitives.ValidatorIndex(valIndex),
CommitteeIndex: primitives.CommitteeIndex(committeeIndex),
CommitteesAtSlot: committeesAtSlot,
Slot: primitives.Slot(slot),
IsAggregator: b.IsAggregator,
}, nil
}
// SyncDetails contains information about node sync status.
type SyncDetails struct {
HeadSlot string `json:"head_slot"`
SyncDistance string `json:"sync_distance"`
IsSyncing bool `json:"is_syncing"`
IsOptimistic bool `json:"is_optimistic"`
ElOffline bool `json:"el_offline"`
}
// SyncDetailsContainer is a wrapper for Data.
type SyncDetailsContainer struct {
Data *SyncDetails `json:"data"`
}
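These ToConsensus/FromConsensus helpers bridge the JSON API structs and the protobuf consensus types; a minimal sketch of the round trip from a hypothetical caller outside this package, assuming body holds an attestation's JSON:

var att shared.Attestation
if err := json.Unmarshal(body, &att); err != nil {
	return err // malformed JSON
}
consensusAtt, err := att.ToConsensus()
if err != nil {
	// err is a *shared.DecodeError whose message names the failing field,
	// e.g. "could not decode Data.Source.Epoch: ..."
	return err
}
jsonAtt := shared.AttestationFromConsensus(consensusAtt)
_ = jsonAtt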

View File

@@ -3,22 +3,27 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"handlers.go",
"server.go",
"structs.go",
"validator.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/validator",
visibility = ["//beacon-chain:__subpackages__"],
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
@@ -26,7 +31,9 @@ go_library(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/http:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/migration:go_default_library",
@@ -34,9 +41,9 @@ go_library(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_go_playground_validator_v10//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_bazel_rules_go//proto/wkt:empty_go_proto",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
@@ -46,26 +53,35 @@ go_library(
go_test(
name = "go_default_test",
srcs = ["validator_test.go"],
srcs = [
"handlers_test.go",
"validator_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/builder/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/validator:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/http:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -75,9 +91,9 @@ go_test(
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_golang_mock//gomock:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -0,0 +1,573 @@
package validator
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"strconv"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/go-playground/validator/v10"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core"
rpchelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
validator2 "github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
ethpbalpha "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// GetAggregateAttestation aggregates all attestations matching the given attestation data root and slot, returning the aggregated result.
func (s *Server) GetAggregateAttestation(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.GetAggregateAttestation")
defer span.End()
attDataRoot := r.URL.Query().Get("attestation_data_root")
valid := shared.ValidateHex(w, "Attestation data root", attDataRoot)
if !valid {
return
}
rawSlot := r.URL.Query().Get("slot")
slot, valid := shared.ValidateUint(w, "Slot", rawSlot)
if !valid {
return
}
if err := s.AttestationsPool.AggregateUnaggregatedAttestations(ctx); err != nil {
http2.HandleError(w, "Could not aggregate unaggregated attestations: "+err.Error(), http.StatusBadRequest)
return
}
allAtts := s.AttestationsPool.AggregatedAttestations()
var bestMatchingAtt *ethpbalpha.Attestation
for _, att := range allAtts {
if att.Data.Slot == primitives.Slot(slot) {
root, err := att.Data.HashTreeRoot()
if err != nil {
http2.HandleError(w, "Could not get attestation data root: "+err.Error(), http.StatusInternalServerError)
return
}
attDataRootBytes, err := hexutil.Decode(attDataRoot)
if err != nil {
http2.HandleError(w, "Could not decode attestation data root into bytes: "+err.Error(), http.StatusBadRequest)
return
}
if bytes.Equal(root[:], attDataRootBytes) {
if bestMatchingAtt == nil || len(att.AggregationBits) > len(bestMatchingAtt.AggregationBits) {
bestMatchingAtt = att
}
}
}
}
if bestMatchingAtt == nil {
http2.HandleError(w, "No matching attestation found", http.StatusNotFound)
return
}
response := &AggregateAttestationResponse{
Data: &shared.Attestation{
AggregationBits: hexutil.Encode(bestMatchingAtt.AggregationBits),
Data: &shared.AttestationData{
Slot: strconv.FormatUint(uint64(bestMatchingAtt.Data.Slot), 10),
CommitteeIndex: strconv.FormatUint(uint64(bestMatchingAtt.Data.CommitteeIndex), 10),
BeaconBlockRoot: hexutil.Encode(bestMatchingAtt.Data.BeaconBlockRoot),
Source: &shared.Checkpoint{
Epoch: strconv.FormatUint(uint64(bestMatchingAtt.Data.Source.Epoch), 10),
Root: hexutil.Encode(bestMatchingAtt.Data.Source.Root),
},
Target: &shared.Checkpoint{
Epoch: strconv.FormatUint(uint64(bestMatchingAtt.Data.Target.Epoch), 10),
Root: hexutil.Encode(bestMatchingAtt.Data.Target.Root),
},
},
Signature: hexutil.Encode(bestMatchingAtt.Signature),
}}
http2.WriteJson(w, response)
}
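// Illustrative request (not part of the handler): GetAggregateAttestation is queried with the
// attestation data root and slot, e.g. at the spec's Beacon API path
//
//	GET /eth/v1/validator/aggregate_attestation?attestation_data_root=0x<32-byte root>&slot=123
//
// and the best matching aggregate is returned wrapped in a {"data": {...}} envelope.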
// SubmitContributionAndProofs publishes multiple signed sync committee contributions and proofs.
func (s *Server) SubmitContributionAndProofs(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.SubmitContributionAndProofs")
defer span.End()
if r.Body == http.NoBody {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
var req SubmitContributionAndProofsRequest
if err := json.NewDecoder(r.Body).Decode(&req.Data); err != nil {
http2.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Data) == 0 {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
validate := validator.New()
if err := validate.Struct(req); err != nil {
http2.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
for _, item := range req.Data {
consensusItem, err := item.ToConsensus()
if err != nil {
http2.HandleError(w, "Could not convert request contribution to consensus contribution: "+err.Error(), http.StatusBadRequest)
return
}
rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem)
if rpcError != nil {
http2.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
}
}
}
// SubmitAggregateAndProofs verifies given aggregate and proofs and publishes them on appropriate gossipsub topic.
func (s *Server) SubmitAggregateAndProofs(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.SubmitAggregateAndProofs")
defer span.End()
if r.Body == http.NoBody {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
var req SubmitAggregateAndProofsRequest
if err := json.NewDecoder(r.Body).Decode(&req.Data); err != nil {
http2.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Data) == 0 {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
validate := validator.New()
if err := validate.Struct(req); err != nil {
http2.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
broadcastFailed := false
for _, item := range req.Data {
consensusItem, err := item.ToConsensus()
if err != nil {
http2.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
return
}
rpcError := s.CoreService.SubmitSignedAggregateSelectionProof(
ctx,
&ethpbalpha.SignedAggregateSubmitRequest{SignedAggregateAndProof: consensusItem},
)
if rpcError != nil {
_, ok := rpcError.Err.(*core.AggregateBroadcastFailedError)
if ok {
broadcastFailed = true
} else {
http2.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
return
}
}
}
if broadcastFailed {
http2.HandleError(w, "Could not broadcast one or more signed aggregated attestations", http.StatusInternalServerError)
}
}
// SubmitSyncCommitteeSubscription subscribes to a number of sync committee subnets.
//
// Subscribing to sync committee subnets is an action performed by the VC to enable
// network participation, and is only required if the VC has an active
// validator in an active sync committee.
func (s *Server) SubmitSyncCommitteeSubscription(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.SubmitSyncCommitteeSubscription")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
if r.Body == http.NoBody {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
var req SubmitSyncCommitteeSubscriptionsRequest
if err := json.NewDecoder(r.Body).Decode(&req.Data); err != nil {
http2.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Data) == 0 {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
validate := validator.New()
if err := validate.Struct(req); err != nil {
http2.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
st, err := s.HeadFetcher.HeadStateReadOnly(ctx)
if err != nil {
http2.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
}
currEpoch := slots.ToEpoch(st.Slot())
validators := make([]state.ReadOnlyValidator, len(req.Data))
subscriptions := make([]*validator2.SyncCommitteeSubscription, len(req.Data))
for i, item := range req.Data {
consensusItem, err := item.ToConsensus()
if err != nil {
http2.HandleError(w, "Could not convert request subscription to consensus subscription: "+err.Error(), http.StatusBadRequest)
return
}
subscriptions[i] = consensusItem
val, err := st.ValidatorAtIndexReadOnly(consensusItem.ValidatorIndex)
if err != nil {
http2.HandleError(
w,
fmt.Sprintf("Could not get validator at index %d: %s", consensusItem.ValidatorIndex, err.Error()),
http.StatusInternalServerError,
)
return
}
valStatus, err := rpchelpers.ValidatorSubStatus(val, currEpoch)
if err != nil {
http2.HandleError(
w,
fmt.Sprintf("Could not get validator status at index %d: %s", consensusItem.ValidatorIndex, err.Error()),
http.StatusInternalServerError,
)
return
}
if valStatus != validator2.ActiveOngoing && valStatus != validator2.ActiveExiting {
http2.HandleError(
w,
fmt.Sprintf("Validator at index %d is not active or exiting", consensusItem.ValidatorIndex),
http.StatusBadRequest,
)
return
}
validators[i] = val
}
startEpoch, err := slots.SyncCommitteePeriodStartEpoch(currEpoch)
if err != nil {
http2.HandleError(w, "Could not get sync committee period start epoch: "+err.Error(), http.StatusInternalServerError)
return
}
for i, sub := range subscriptions {
if sub.UntilEpoch <= currEpoch {
http2.HandleError(
w,
fmt.Sprintf("Epoch for subscription at index %d is in the past. It must be at least %d", i, currEpoch+1),
http.StatusBadRequest,
)
return
}
maxValidUntilEpoch := startEpoch + params.BeaconConfig().EpochsPerSyncCommitteePeriod*2
if sub.UntilEpoch > maxValidUntilEpoch {
http2.HandleError(
w,
fmt.Sprintf("Epoch for subscription at index %d is too far in the future. It can be at most %d", i, maxValidUntilEpoch),
http.StatusBadRequest,
)
return
}
}
for i, sub := range subscriptions {
pubkey48 := validators[i].PublicKey()
// Handle underflow in the event the subscription's end epoch is earlier than the start epoch.
// Given the validation above this should be impossible, so it is a defensive check.
epochsToWatch, err := sub.UntilEpoch.SafeSub(uint64(startEpoch))
if err != nil {
epochsToWatch = 0
}
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
totalDuration := epochDuration * time.Duration(epochsToWatch)
cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, sub.SyncCommitteeIndices, totalDuration)
}
}
// SubmitBeaconCommitteeSubscription searches using discv5 for peers related to the provided subnet information
// and replaces current peers with them if necessary.
func (s *Server) SubmitBeaconCommitteeSubscription(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.SubmitBeaconCommitteeSubscription")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
if r.Body == http.NoBody {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
var req SubmitBeaconCommitteeSubscriptionsRequest
if err := json.NewDecoder(r.Body).Decode(&req.Data); err != nil {
http2.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Data) == 0 {
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
validate := validator.New()
if err := validate.Struct(req); err != nil {
http2.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
st, err := s.HeadFetcher.HeadStateReadOnly(ctx)
if err != nil {
http2.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
}
// Verify validators at the beginning to return early if request is invalid.
validators := make([]state.ReadOnlyValidator, len(req.Data))
subscriptions := make([]*validator2.BeaconCommitteeSubscription, len(req.Data))
for i, item := range req.Data {
consensusItem, err := item.ToConsensus()
if err != nil {
http2.HandleError(w, "Could not convert request subscription to consensus subscription: "+err.Error(), http.StatusBadRequest)
return
}
subscriptions[i] = consensusItem
val, err := st.ValidatorAtIndexReadOnly(consensusItem.ValidatorIndex)
if err != nil {
if outOfRangeErr, ok := err.(*state_native.ValidatorIndexOutOfRangeError); ok {
http2.HandleError(w, "Could not get validator: "+outOfRangeErr.Error(), http.StatusBadRequest)
return
}
http2.HandleError(w, "Could not get validator: "+err.Error(), http.StatusInternalServerError)
return
}
validators[i] = val
}
fetchValsLen := func(slot primitives.Slot) (uint64, error) {
wantedEpoch := slots.ToEpoch(slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
return 0, err
}
return uint64(len(vals)), nil
}
// Request the head validator indices of epoch represented by the first requested slot.
currValsLen, err := fetchValsLen(subscriptions[0].Slot)
if err != nil {
http2.HandleError(w, "Could not retrieve head validator length: "+err.Error(), http.StatusInternalServerError)
return
}
currEpoch := slots.ToEpoch(subscriptions[0].Slot)
for _, sub := range subscriptions {
// If epoch has changed, re-request active validators length
if currEpoch != slots.ToEpoch(sub.Slot) {
currValsLen, err = fetchValsLen(sub.Slot)
if err != nil {
http2.HandleError(w, "Could not retrieve head validator length: "+err.Error(), http.StatusInternalServerError)
return
}
currEpoch = slots.ToEpoch(sub.Slot)
}
subnet := helpers.ComputeSubnetFromCommitteeAndSlot(currValsLen, sub.CommitteeIndex, sub.Slot)
cache.SubnetIDs.AddAttesterSubnetID(sub.Slot, subnet)
if sub.IsAggregator {
cache.SubnetIDs.AddAggregatorSubnetID(sub.Slot, subnet)
}
}
for _, val := range validators {
valStatus, err := rpchelpers.ValidatorStatus(val, currEpoch)
if err != nil {
http2.HandleError(w, "Could not retrieve validator status: "+err.Error(), http.StatusInternalServerError)
return
}
pubkey := val.PublicKey()
core.AssignValidatorToSubnet(pubkey[:], valStatus)
}
}
// GetAttestationData requests that the beacon node produce attestation data for
// the requested committee index and slot based on the node's current head.
func (s *Server) GetAttestationData(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.GetAttestationData")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
if isOptimistic, err := shared.IsOptimistic(ctx, w, s.OptimisticModeFetcher); isOptimistic || err != nil {
return
}
rawSlot := r.URL.Query().Get("slot")
slot, valid := shared.ValidateUint(w, "Slot", rawSlot)
if !valid {
return
}
rawCommitteeIndex := r.URL.Query().Get("committee_index")
committeeIndex, valid := shared.ValidateUint(w, "Committee Index", rawCommitteeIndex)
if !valid {
return
}
attestationData, rpcError := s.CoreService.GetAttestationData(ctx, &ethpbalpha.AttestationDataRequest{
Slot: primitives.Slot(slot),
CommitteeIndex: primitives.CommitteeIndex(committeeIndex),
})
if rpcError != nil {
http2.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
return
}
response := &GetAttestationDataResponse{
Data: &shared.AttestationData{
Slot: strconv.FormatUint(uint64(attestationData.Slot), 10),
CommitteeIndex: strconv.FormatUint(uint64(attestationData.CommitteeIndex), 10),
BeaconBlockRoot: hexutil.Encode(attestationData.BeaconBlockRoot),
Source: &shared.Checkpoint{
Epoch: strconv.FormatUint(uint64(attestationData.Source.Epoch), 10),
Root: hexutil.Encode(attestationData.Source.Root),
},
Target: &shared.Checkpoint{
Epoch: strconv.FormatUint(uint64(attestationData.Target.Epoch), 10),
Root: hexutil.Encode(attestationData.Target.Root),
},
},
}
http2.WriteJson(w, response)
}
// ProduceSyncCommitteeContribution requests that the beacon node produce a sync committee contribution.
func (s *Server) ProduceSyncCommitteeContribution(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.ProduceSyncCommitteeContribution")
defer span.End()
subIndex := r.URL.Query().Get("subcommittee_index")
index, valid := shared.ValidateUint(w, "Subcommittee Index", subIndex)
if !valid {
return
}
rawSlot := r.URL.Query().Get("slot")
slot, valid := shared.ValidateUint(w, "Slot", rawSlot)
if !valid {
return
}
rawBlockRoot := r.URL.Query().Get("beacon_block_root")
blockRoot, err := hexutil.Decode(rawBlockRoot)
if err != nil {
http2.HandleError(w, "Invalid Beacon Block Root: "+err.Error(), http.StatusBadRequest)
return
}
contribution, ok := s.produceSyncCommitteeContribution(ctx, w, primitives.Slot(slot), index, []byte(blockRoot))
if !ok {
return
}
response := &ProduceSyncCommitteeContributionResponse{
Data: contribution,
}
http2.WriteJson(w, response)
}
// produceSyncCommitteeContribution assembles the sync committee contribution for the given slot, subcommittee index, and beacon block root.
func (s *Server) produceSyncCommitteeContribution(
ctx context.Context,
w http.ResponseWriter,
slot primitives.Slot,
index uint64,
blockRoot []byte,
) (*shared.SyncCommitteeContribution, bool) {
msgs, err := s.SyncCommitteePool.SyncCommitteeMessages(slot)
if err != nil {
http2.HandleError(w, "Could not get sync subcommittee messages: "+err.Error(), http.StatusInternalServerError)
return nil, false
}
if len(msgs) == 0 {
http2.HandleError(w, "No subcommittee messages found", http.StatusNotFound)
return nil, false
}
sig, aggregatedBits, err := s.CoreService.AggregatedSigAndAggregationBits(
ctx,
&ethpbalpha.AggregatedSigAndAggregationBitsRequest{
Msgs: msgs,
Slot: slot,
SubnetId: index,
BlockRoot: blockRoot,
},
)
if err != nil {
http2.HandleError(w, "Could not get contribution data: "+err.Error(), http.StatusInternalServerError)
return nil, false
}
return &shared.SyncCommitteeContribution{
Slot: strconv.FormatUint(uint64(slot), 10),
BeaconBlockRoot: hexutil.Encode(blockRoot),
SubcommitteeIndex: strconv.FormatUint(index, 10),
AggregationBits: hexutil.Encode(aggregatedBits),
Signature: hexutil.Encode(sig),
}, true
}
// RegisterValidator requests that the beacon node store valid validator registrations and call the builder APIs to update the custom builder.
func (s *Server) RegisterValidator(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.RegisterValidators")
defer span.End()
if s.BlockBuilder == nil || !s.BlockBuilder.Configured() {
http2.HandleError(w, fmt.Sprintf("Could not register block builder: %v", builder.ErrNoBuilder), http.StatusBadRequest)
return
}
var jsonRegistrations []*shared.SignedValidatorRegistration
err := json.NewDecoder(r.Body).Decode(&jsonRegistrations)
switch {
case err == io.EOF:
http2.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
http2.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
validate := validator.New()
registrations := make([]*ethpbalpha.SignedValidatorRegistrationV1, len(jsonRegistrations))
for i, registration := range jsonRegistrations {
if err := validate.Struct(registration); err != nil {
http2.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
reg, err := registration.ToConsensus()
if err != nil {
http2.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
registrations[i] = reg
}
if len(registrations) == 0 {
http2.HandleError(w, "Validator registration request is empty", http.StatusBadRequest)
return
}
if err := s.BlockBuilder.RegisterValidator(ctx, registrations); err != nil {
http2.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
}
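A minimal sketch of posting registrations to this handler (POST /eth/v1/validator/register_validator). The JSON field layout follows the standard signed validator registration shape (message with fee_recipient, gas_limit, timestamp, pubkey, plus signature); the port is assumed and the pubkey/signature placeholders must be replaced with real 48-byte and 96-byte hex values before the node will accept the request.

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Placeholder registration; a real request needs genuine BLS pubkey and signature hex.
	payload := []byte(`[{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"30000000","timestamp":"1690000000","pubkey":"<48-byte hex pubkey>"},"signature":"<96-byte hex signature>"}]`)
	resp, err := http.Post("http://localhost:3500/eth/v1/validator/register_validator", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}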

File diff suppressed because it is too large.

View File

@@ -4,10 +4,12 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/synccommittee"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/lookup"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
@@ -30,4 +32,6 @@ type Server struct {
ChainInfoFetcher blockchain.ChainInfoFetcher
BeaconDB db.HeadAccessDatabase
BlockBuilder builder.BlockBuilder
OperationNotifier operation.Notifier
CoreService *core.Service
}

View File

@@ -0,0 +1,31 @@
package validator
import "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
type AggregateAttestationResponse struct {
Data *shared.Attestation `json:"data"`
}
type SubmitContributionAndProofsRequest struct {
Data []*shared.SignedContributionAndProof `json:"data" validate:"required,dive"`
}
type SubmitAggregateAndProofsRequest struct {
Data []*shared.SignedAggregateAttestationAndProof `json:"data" validate:"required,dive"`
}
type SubmitSyncCommitteeSubscriptionsRequest struct {
Data []*shared.SyncCommitteeSubscription `json:"data" validate:"required,dive"`
}
type SubmitBeaconCommitteeSubscriptionsRequest struct {
Data []*shared.BeaconCommitteeSubscription `json:"data" validate:"required,dive"`
}
type GetAttestationDataResponse struct {
Data *shared.AttestationData `json:"data"`
}
type ProduceSyncCommitteeContributionResponse struct {
Data *shared.SyncCommitteeContribution `json:"data"`
}
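These request types rely on struct tags for input validation; a standalone sketch of how the validate:"required,dive" tag behaves, assuming the go-playground validator package (the same library behind validator.New() in the handlers above): required rejects a nil or empty slice, and dive descends into each element and validates its own tags.

package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

type checkpoint struct {
	Epoch string `json:"epoch" validate:"required"`
	Root  string `json:"root" validate:"required"`
}

type request struct {
	Data []*checkpoint `json:"data" validate:"required,dive"`
}

func main() {
	validate := validator.New()
	good := request{Data: []*checkpoint{{Epoch: "1", Root: "0x01"}}}
	bad := request{Data: []*checkpoint{{Epoch: "1"}}} // element missing Root
	fmt.Println(validate.Struct(good)) // <nil>
	fmt.Println(validate.Struct(bad))  // reports that Data[0].Root failed 'required'
}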

View File

@@ -6,19 +6,14 @@ import (
"fmt"
"sort"
"strconv"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/golang/protobuf/ptypes/empty"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/kv"
rpchelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
@@ -751,361 +746,6 @@ func (vs *Server) PrepareBeaconProposer(
return &emptypb.Empty{}, nil
}
// SubmitValidatorRegistration submits validator registrations.
func (vs *Server) SubmitValidatorRegistration(ctx context.Context, reg *ethpbv1.SubmitValidatorRegistrationsRequest) (*empty.Empty, error) {
ctx, span := trace.StartSpan(ctx, "validator.SubmitValidatorRegistration")
defer span.End()
if vs.BlockBuilder == nil || !vs.BlockBuilder.Configured() {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not register block builder: %v", builder.ErrNoBuilder)
}
var registrations []*ethpbalpha.SignedValidatorRegistrationV1
for i, registration := range reg.Registrations {
message := reg.Registrations[i].Message
registrations = append(registrations, &ethpbalpha.SignedValidatorRegistrationV1{
Message: &ethpbalpha.ValidatorRegistrationV1{
FeeRecipient: message.FeeRecipient,
GasLimit: message.GasLimit,
Timestamp: message.Timestamp,
Pubkey: message.Pubkey,
},
Signature: registration.Signature,
})
}
if len(registrations) == 0 {
return &empty.Empty{}, status.Errorf(codes.InvalidArgument, "Validator registration request is empty")
}
if err := vs.BlockBuilder.RegisterValidator(ctx, registrations); err != nil {
return nil, status.Errorf(codes.InvalidArgument, "Could not register block builder: %v", err)
}
return &empty.Empty{}, nil
}
// ProduceAttestationData requests that the beacon node produces attestation data for
// the requested committee index and slot based on the nodes current head.
func (vs *Server) ProduceAttestationData(ctx context.Context, req *ethpbv1.ProduceAttestationDataRequest) (*ethpbv1.ProduceAttestationDataResponse, error) {
ctx, span := trace.StartSpan(ctx, "validator.ProduceAttestationData")
defer span.End()
v1alpha1req := &ethpbalpha.AttestationDataRequest{
Slot: req.Slot,
CommitteeIndex: req.CommitteeIndex,
}
v1alpha1resp, err := vs.V1Alpha1Server.GetAttestationData(ctx, v1alpha1req)
if err != nil {
// We simply return err because it's already of a gRPC error type.
return nil, err
}
attData := migration.V1Alpha1AttDataToV1(v1alpha1resp)
return &ethpbv1.ProduceAttestationDataResponse{Data: attData}, nil
}
// GetAggregateAttestation aggregates all attestations matching the given attestation data root and slot, returning the aggregated result.
func (vs *Server) GetAggregateAttestation(ctx context.Context, req *ethpbv1.AggregateAttestationRequest) (*ethpbv1.AggregateAttestationResponse, error) {
_, span := trace.StartSpan(ctx, "validator.GetAggregateAttestation")
defer span.End()
if err := vs.AttestationsPool.AggregateUnaggregatedAttestations(ctx); err != nil {
return nil, status.Errorf(codes.Internal, "Could not aggregate unaggregated attestations: %v", err)
}
allAtts := vs.AttestationsPool.AggregatedAttestations()
var bestMatchingAtt *ethpbalpha.Attestation
for _, att := range allAtts {
if att.Data.Slot == req.Slot {
root, err := att.Data.HashTreeRoot()
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get attestation data root: %v", err)
}
if bytes.Equal(root[:], req.AttestationDataRoot) {
if bestMatchingAtt == nil || len(att.AggregationBits) > len(bestMatchingAtt.AggregationBits) {
bestMatchingAtt = att
}
}
}
}
if bestMatchingAtt == nil {
return nil, status.Error(codes.NotFound, "No matching attestation found")
}
return &ethpbv1.AggregateAttestationResponse{
Data: migration.V1Alpha1AttestationToV1(bestMatchingAtt),
}, nil
}
// SubmitAggregateAndProofs verifies given aggregate and proofs and publishes them on appropriate gossipsub topic.
func (vs *Server) SubmitAggregateAndProofs(ctx context.Context, req *ethpbv1.SubmitAggregateAndProofsRequest) (*empty.Empty, error) {
ctx, span := trace.StartSpan(ctx, "validator.SubmitAggregateAndProofs")
defer span.End()
for _, agg := range req.Data {
if agg == nil || agg.Message == nil || agg.Message.Aggregate == nil || agg.Message.Aggregate.Data == nil {
return nil, status.Error(codes.InvalidArgument, "Signed aggregate request can't be nil")
}
sigLen := fieldparams.BLSSignatureLength
emptySig := make([]byte, sigLen)
if bytes.Equal(agg.Signature, emptySig) || bytes.Equal(agg.Message.SelectionProof, emptySig) || bytes.Equal(agg.Message.Aggregate.Signature, emptySig) {
return nil, status.Error(codes.InvalidArgument, "Signed signatures can't be zero hashes")
}
if len(agg.Signature) != sigLen || len(agg.Message.Aggregate.Signature) != sigLen {
return nil, status.Errorf(codes.InvalidArgument, "Incorrect signature length. Expected %d bytes", sigLen)
}
// As a preventive measure, a beacon node shouldn't broadcast an attestation whose slot is out of range.
if err := helpers.ValidateAttestationTime(agg.Message.Aggregate.Data.Slot,
vs.TimeFetcher.GenesisTime(), params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
return nil, status.Error(codes.InvalidArgument, "Attestation slot is no longer valid from current time")
}
}
broadcastFailed := false
for _, agg := range req.Data {
v1alpha1Agg := migration.V1SignedAggregateAttAndProofToV1Alpha1(agg)
if err := vs.Broadcaster.Broadcast(ctx, v1alpha1Agg); err != nil {
broadcastFailed = true
} else {
log.WithFields(log.Fields{
"slot": agg.Message.Aggregate.Data.Slot,
"committeeIndex": agg.Message.Aggregate.Data.Index,
"validatorIndex": agg.Message.AggregatorIndex,
"aggregatedCount": agg.Message.Aggregate.AggregationBits.Count(),
}).Debug("Broadcasting aggregated attestation and proof")
}
}
if broadcastFailed {
return nil, status.Errorf(
codes.Internal,
"Could not broadcast one or more signed aggregated attestations")
}
return &emptypb.Empty{}, nil
}
// SubmitBeaconCommitteeSubscription searches using discv5 for peers related to the provided subnet information
// and replaces current peers with those ones if necessary.
func (vs *Server) SubmitBeaconCommitteeSubscription(ctx context.Context, req *ethpbv1.SubmitBeaconCommitteeSubscriptionsRequest) (*emptypb.Empty, error) {
ctx, span := trace.StartSpan(ctx, "validator.SubmitBeaconCommitteeSubscription")
defer span.End()
if err := rpchelpers.ValidateSyncGRPC(ctx, vs.SyncChecker, vs.HeadFetcher, vs.TimeFetcher, vs.OptimisticModeFetcher); err != nil {
// We simply return the error because it's already a gRPC error.
return nil, err
}
if len(req.Data) == 0 {
return nil, status.Error(codes.InvalidArgument, "No subscriptions provided")
}
s, err := vs.HeadFetcher.HeadStateReadOnly(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
}
// Verify validators at the beginning to return early if request is invalid.
validators := make([]state.ReadOnlyValidator, len(req.Data))
for i, sub := range req.Data {
val, err := s.ValidatorAtIndexReadOnly(sub.ValidatorIndex)
if outOfRangeErr, ok := err.(*state_native.ValidatorIndexOutOfRangeError); ok {
return nil, status.Errorf(codes.InvalidArgument, "Invalid validator ID: %v", outOfRangeErr)
}
validators[i] = val
}
fetchValsLen := func(slot primitives.Slot) (uint64, error) {
wantedEpoch := slots.ToEpoch(slot)
vals, err := vs.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
return 0, err
}
return uint64(len(vals)), nil
}
// Request the head validator indices of epoch represented by the first requested slot.
currValsLen, err := fetchValsLen(req.Data[0].Slot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not retrieve head validator length: %v", err)
}
currEpoch := slots.ToEpoch(req.Data[0].Slot)
for _, sub := range req.Data {
// If epoch has changed, re-request active validators length
if currEpoch != slots.ToEpoch(sub.Slot) {
currValsLen, err = fetchValsLen(sub.Slot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not retrieve head validator length: %v", err)
}
currEpoch = slots.ToEpoch(sub.Slot)
}
subnet := helpers.ComputeSubnetFromCommitteeAndSlot(currValsLen, sub.CommitteeIndex, sub.Slot)
cache.SubnetIDs.AddAttesterSubnetID(sub.Slot, subnet)
if sub.IsAggregator {
cache.SubnetIDs.AddAggregatorSubnetID(sub.Slot, subnet)
}
}
for _, val := range validators {
valStatus, err := rpchelpers.ValidatorStatus(val, currEpoch)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not retrieve validator status: %v", err)
}
pubkey := val.PublicKey()
v1alpha1Req := &ethpbalpha.AssignValidatorToSubnetRequest{PublicKey: pubkey[:], Status: v1ValidatorStatusToV1Alpha1(valStatus)}
_, err = vs.V1Alpha1Server.AssignValidatorToSubnet(ctx, v1alpha1Req)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not assign validator to subnet")
}
}
return &emptypb.Empty{}, nil
}
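For context on the ComputeSubnetFromCommitteeAndSlot call above, a rough standalone sketch following the consensus spec's compute_subnet_for_attestation. Deriving committeesPerSlot from the active validator count is elided, and the constants (32 slots per epoch, 64 attestation subnets) are mainnet assumptions rather than values read from params.

package main

import "fmt"

func computeSubnet(committeesPerSlot, slot, committeeIndex uint64) uint64 {
	const slotsPerEpoch, attestationSubnetCount = 32, 64
	slotsSinceEpochStart := slot % slotsPerEpoch
	committeesSinceEpochStart := committeesPerSlot * slotsSinceEpochStart
	return (committeesSinceEpochStart + committeeIndex) % attestationSubnetCount
}

func main() {
	// e.g. 4 committees per slot, slot 101 (5th slot of its epoch), committee 2 -> subnet 22.
	fmt.Println(computeSubnet(4, 101, 2))
}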
// SubmitSyncCommitteeSubscription subscribe to a number of sync committee subnets.
//
// Subscribing to sync committee subnets is an action performed by VC to enable
// network participation in Altair networks, and only required if the VC has an active
// validator in an active sync committee.
func (vs *Server) SubmitSyncCommitteeSubscription(ctx context.Context, req *ethpbv2.SubmitSyncCommitteeSubscriptionsRequest) (*empty.Empty, error) {
ctx, span := trace.StartSpan(ctx, "validator.SubmitSyncCommitteeSubscription")
defer span.End()
if err := rpchelpers.ValidateSyncGRPC(ctx, vs.SyncChecker, vs.HeadFetcher, vs.TimeFetcher, vs.OptimisticModeFetcher); err != nil {
// We simply return the error because it's already a gRPC error.
return nil, err
}
if len(req.Data) == 0 {
return nil, status.Error(codes.InvalidArgument, "No subscriptions provided")
}
s, err := vs.HeadFetcher.HeadStateReadOnly(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
}
currEpoch := slots.ToEpoch(s.Slot())
validators := make([]state.ReadOnlyValidator, len(req.Data))
for i, sub := range req.Data {
val, err := s.ValidatorAtIndexReadOnly(sub.ValidatorIndex)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get validator at index %d: %v", sub.ValidatorIndex, err)
}
valStatus, err := rpchelpers.ValidatorSubStatus(val, currEpoch)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get validator status at index %d: %v", sub.ValidatorIndex, err)
}
if valStatus != ethpbv1.ValidatorStatus_ACTIVE_ONGOING && valStatus != ethpbv1.ValidatorStatus_ACTIVE_EXITING {
return nil, status.Errorf(codes.InvalidArgument, "Validator at index %d is not active or exiting: %v", sub.ValidatorIndex, err)
}
validators[i] = val
}
startEpoch, err := slots.SyncCommitteePeriodStartEpoch(currEpoch)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get sync committee period start epoch: %v.", err)
}
for i, sub := range req.Data {
if sub.UntilEpoch <= currEpoch {
return nil, status.Errorf(codes.InvalidArgument, "Epoch for subscription at index %d is in the past. It must be at least %d", i, currEpoch+1)
}
maxValidUntilEpoch := startEpoch + params.BeaconConfig().EpochsPerSyncCommitteePeriod*2
if sub.UntilEpoch > maxValidUntilEpoch {
return nil, status.Errorf(
codes.InvalidArgument,
"Epoch for subscription at index %d is too far in the future. It can be at most %d",
i,
maxValidUntilEpoch,
)
}
}
for i, sub := range req.Data {
pubkey48 := validators[i].PublicKey()
// Handle overflow in the event current epoch is less than end epoch.
// This is an impossible condition, so it is a defensive check.
epochsToWatch, err := sub.UntilEpoch.SafeSub(uint64(startEpoch))
if err != nil {
epochsToWatch = 0
}
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
totalDuration := epochDuration * time.Duration(epochsToWatch)
cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, sub.SyncCommitteeIndices, totalDuration)
}
return &empty.Empty{}, nil
}
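For a feel of the duration math feeding AddSyncCommitteeSubnets above, a back-of-the-envelope sketch using mainnet constants (32 slots per epoch, 12 seconds per slot, 256 epochs per sync committee period); these literals are assumptions written out here, not read from params.

package main

import (
	"fmt"
	"time"
)

func main() {
	const slotsPerEpoch, secondsPerSlot, epochsToWatch = 32, 12, 256
	epochDuration := time.Duration(slotsPerEpoch*secondsPerSlot) * time.Second
	totalDuration := epochDuration * epochsToWatch
	fmt.Println(epochDuration, totalDuration) // 6m24s 27h18m24s
}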
// ProduceSyncCommitteeContribution requests that the beacon node produce a sync committee contribution.
func (vs *Server) ProduceSyncCommitteeContribution(
ctx context.Context,
req *ethpbv2.ProduceSyncCommitteeContributionRequest,
) (*ethpbv2.ProduceSyncCommitteeContributionResponse, error) {
ctx, span := trace.StartSpan(ctx, "validator.ProduceSyncCommitteeContribution")
defer span.End()
msgs, err := vs.SyncCommitteePool.SyncCommitteeMessages(req.Slot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get sync subcommittee messages: %v", err)
}
if msgs == nil {
return nil, status.Errorf(codes.NotFound, "No subcommittee messages found")
}
v1alpha1Req := &ethpbalpha.AggregatedSigAndAggregationBitsRequest{
Msgs: msgs,
Slot: req.Slot,
SubnetId: req.SubcommitteeIndex,
BlockRoot: req.BeaconBlockRoot,
}
v1alpha1Resp, err := vs.V1Alpha1Server.AggregatedSigAndAggregationBits(ctx, v1alpha1Req)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get contribution data: %v", err)
}
contribution := &ethpbv2.SyncCommitteeContribution{
Slot: req.Slot,
BeaconBlockRoot: req.BeaconBlockRoot,
SubcommitteeIndex: req.SubcommitteeIndex,
AggregationBits: v1alpha1Resp.Bits,
Signature: v1alpha1Resp.AggregatedSig,
}
return &ethpbv2.ProduceSyncCommitteeContributionResponse{
Data: contribution,
}, nil
}
// SubmitContributionAndProofs publishes multiple signed sync committee contribution and proofs.
func (vs *Server) SubmitContributionAndProofs(ctx context.Context, req *ethpbv2.SubmitContributionAndProofsRequest) (*empty.Empty, error) {
ctx, span := trace.StartSpan(ctx, "validator.SubmitContributionAndProofs")
defer span.End()
for _, item := range req.Data {
v1alpha1Req := &ethpbalpha.SignedContributionAndProof{
Message: &ethpbalpha.ContributionAndProof{
AggregatorIndex: item.Message.AggregatorIndex,
Contribution: &ethpbalpha.SyncCommitteeContribution{
Slot: item.Message.Contribution.Slot,
BlockRoot: item.Message.Contribution.BeaconBlockRoot,
SubcommitteeIndex: item.Message.Contribution.SubcommitteeIndex,
AggregationBits: item.Message.Contribution.AggregationBits,
Signature: item.Message.Contribution.Signature,
},
SelectionProof: item.Message.SelectionProof,
},
Signature: item.Signature,
}
_, err := vs.V1Alpha1Server.SubmitSignedContributionAndProof(ctx, v1alpha1Req)
// We simply return err because it's already of a gRPC error type.
if err != nil {
return nil, err
}
}
return &empty.Empty{}, nil
}
// GetLiveness requests the beacon node to indicate if a validator has been observed to be live in a given epoch.
// The beacon node might detect liveness by observing messages from the validator on the network,
// in the beacon chain, from its API or from any other source.
@@ -1216,20 +856,6 @@ func proposalDependentRoot(s state.BeaconState, epoch primitives.Epoch) ([]byte,
return root, nil
}
// Logic based on https://hackmd.io/ofFJ5gOmQpu1jjHilHbdQQ
func v1ValidatorStatusToV1Alpha1(valStatus ethpbv1.ValidatorStatus) ethpbalpha.ValidatorStatus {
switch valStatus {
case ethpbv1.ValidatorStatus_ACTIVE:
return ethpbalpha.ValidatorStatus_ACTIVE
case ethpbv1.ValidatorStatus_PENDING:
return ethpbalpha.ValidatorStatus_PENDING
case ethpbv1.ValidatorStatus_WITHDRAWAL:
return ethpbalpha.ValidatorStatus_EXITED
default:
return ethpbalpha.ValidatorStatus_UNKNOWN_STATUS
}
}
func syncCommitteeDutiesLastValidEpoch(currentEpoch primitives.Epoch) primitives.Epoch {
currentSyncPeriodIndex := currentEpoch / params.BeaconConfig().EpochsPerSyncCommitteePeriod
// Return the last epoch of the next sync committee.

File diff suppressed because it is too large.

View File

@@ -17,7 +17,7 @@ go_library(
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/sync:go_default_library",
"//network:go_default_library",
"//network/http:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
@@ -36,7 +36,7 @@ go_test(
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//network:go_default_library",
"//network/http:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",

View File

@@ -12,7 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v4/network"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
@@ -24,11 +24,11 @@ func (s *Server) ListTrustedPeer(w http.ResponseWriter, r *http.Request) {
for _, id := range allIds {
p, err := httpPeerInfo(peerStatus, id)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not get peer info").Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
// peers added into trusted set but never connected should also be listed
@@ -44,37 +44,37 @@ func (s *Server) ListTrustedPeer(w http.ResponseWriter, r *http.Request) {
allPeers = append(allPeers, p)
}
response := &PeersResponse{Peers: allPeers}
network.WriteJson(w, response)
http2.WriteJson(w, response)
}
// AddTrustedPeer adds a new peer into node's trusted peer set by Multiaddr
func (s *Server) AddTrustedPeer(w http.ResponseWriter, r *http.Request) {
body, err := io.ReadAll(r.Body)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not read request body").Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
var addrRequest *AddrRequest
err = json.Unmarshal(body, &addrRequest)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not decode request body into peer address").Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
info, err := peer.AddrInfoFromString(addrRequest.Addr)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not derive peer info from multiaddress").Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}
@@ -98,11 +98,11 @@ func (s *Server) RemoveTrustedPeer(w http.ResponseWriter, r *http.Request) {
id := segments[len(segments)-1]
peerId, err := peer.Decode(id)
if err != nil {
errJson := &network.DefaultErrorJson{
errJson := &http2.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not decode peer id").Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
http2.WriteError(w, errJson)
return
}

View File

@@ -17,7 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers"
mockp2p "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v4/network"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
@@ -188,7 +188,7 @@ func TestAddTrustedPeer_EmptyBody(t *testing.T) {
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AddTrustedPeer(writer, request)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.Equal(t, "Could not decode request body into peer address: unexpected end of JSON input", e.Message)
@@ -213,7 +213,7 @@ func TestAddTrustedPeer_BadAddress(t *testing.T) {
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AddTrustedPeer(writer, request)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Could not derive peer info from multiaddress", e.Message)
@@ -243,7 +243,7 @@ func TestRemoveTrustedPeer_EmptyParameter(t *testing.T) {
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.RemoveTrustedPeer(writer, request)
e := &network.DefaultErrorJson{}
e := &http2.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.Equal(t, "Could not decode peer id: failed to parse peer ID: invalid cid: cid too short", e.Message)

View File

@@ -103,6 +103,7 @@ go_test(
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",

View File

@@ -17,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/slashings"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
@@ -47,4 +48,5 @@ type Server struct {
SyncChecker sync.Checker
ReplayerBuilder stategen.ReplayerBuilder
OptimisticModeFetcher blockchain.OptimisticModeFetcher
CoreService *core.Service
}

View File

@@ -659,11 +659,7 @@ func (bs *Server) GetValidatorQueue(
func (bs *Server) GetValidatorPerformance(
ctx context.Context, req *ethpb.ValidatorPerformanceRequest,
) (*ethpb.ValidatorPerformanceResponse, error) {
if bs.SyncChecker.Syncing() {
return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
currSlot := bs.GenesisTimeFetcher.CurrentSlot()
response, err := core.ComputeValidatorPerformance(ctx, req, bs.HeadFetcher, currSlot)
response, err := bs.CoreService.ComputeValidatorPerformance(ctx, req)
if err != nil {
return nil, status.Errorf(core.ErrorReasonToGRPC(err.Reason), "Could not compute validator performance: %v", err.Err)
}

Some files were not shown because too many files have changed in this diff.