Compare commits

438 Commits

Author SHA1 Message Date
kasey
317c09c40a fix another unrelated bug 2023-01-18 16:08:49 -06:00
kasey
2c2f56cc7e fix what looks to be a bad merge 2023-01-18 16:00:47 -06:00
kasey
58acdb5136 trigger e2e at capella fork 2023-01-18 15:47:44 -06:00
Nishant Das
fa8a6d9d17 Merge branch 'develop' into capella 2023-01-18 17:46:15 +08:00
Nishant Das
fa2b64f702 Remove Unused Block Setter (#11889)
* remove methods

* remove mock

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-18 08:56:44 +00:00
omahs
6f5e35f08a Fix: typos (#11885)
* Fix: typo

Fix: typo

* Fix: typos

Fix: typos

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-01-18 15:21:58 +08:00
nisdas
743037efb5 remove file 2023-01-18 14:11:25 +08:00
nisdas
301970cf5a remove import 2023-01-18 14:09:31 +08:00
nisdas
01ae2fe7be remove method 2023-01-18 14:06:43 +08:00
nisdas
1cfe5988e6 fmt 2023-01-18 13:57:56 +08:00
nisdas
a40cadab32 Merge branch 'develop' of https://github.com/prysmaticlabs/geth-sharding into capella 2023-01-18 13:55:49 +08:00
terencechain
79d6ce45ad Add capella's marshal and unmarshal (#11879)
* Add capella's marshal and unmarshal

* skip test

* Fix TestJsonMarshalUnmarshal/execution_payload_Capella

* Fixing test

* Skip http test

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-17 23:44:59 +00:00
terencechain
73cd7df679 Add capella fork version for Sepolia testnet (#11888)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-17 19:51:01 +00:00
Nishant Das
d084d5a979 Add Disable Staking Contract Check Flag (#11886)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-17 19:00:30 +00:00
Sammy Rosso
db6b1c15c4 Add additional tests to bytesutil (#11877)
* Add missing tests from bytes.go and integers.go

* Fix failing test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-01-17 17:54:43 +00:00
Dhruv Bodani
7c9bff489e Add REST implementation for GetSyncCommitteeContribution (#11875)
* add REST implementation for GetSyncCommitteeContribution

* fix imports

* Update validator/client/beacon-api/sync_committee.go

Co-authored-by: Radosław Kapka <radek@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-01-17 16:43:22 +00:00
Nishant Das
1fca73d761 Delete interop.Dockerfile (#11887)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-17 15:09:32 +00:00
Potuz
fbafbdd62c Stream blocks capella (#11883)
* capella blocks stream

* add unit tests

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-17 12:17:18 +00:00
Potuz
75d98cf9af Do not log an overflow computing Capella's slot start (#11884)
* Do not log an overflow computing Capella's slot start

* nishant's review

Co-authored-by: Nishant Das <nishdas93@gmail.com>

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-01-17 10:59:51 +00:00
Nishant Das
9f5f807303 Merge branch 'develop' into capella 2023-01-17 18:39:47 +08:00
Radosław Kapka
96401e734e Implement bls_to_execution_change API event (#11865)
* Implement `bls_to_execution_change` API event

(cherry picked from commit 1c8a2c580b3664e1238380138dc65c30e238cb11)
(cherry picked from commit aabcaac619)

* missing event code

(cherry picked from commit daa4fd2b72)

* Update beacon-chain/core/feed/operation/events.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-01-17 09:45:20 +00:00
Nishant Das
5f6147ecf7 Merge branch 'develop' into capella 2023-01-17 16:56:30 +08:00
terencechain
5480d607ac Change withdrawal amount unmarshal to uint64 (gwei) (#11866)
* Change withdrawal amount unmarshal to uint64 (gwei)

* Init server

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2023-01-16 21:25:16 +00:00
terencechain
38f0a81526 Clarify circuit breaker logs (#11876)
* Clarify circuit breaker logs

* Revert bad changes

* Fix test
2023-01-16 17:49:23 +00:00
Potuz
b97d2827e9 add capella fork version for sepolia 2023-01-16 12:01:34 -03:00
Potuz
ad680d3417 Move broadcast of BLS changes to the forkwatcher (#11878)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-16 15:00:11 +00:00
Potuz
047cae5e8b Sign BLS_TO_EXECUTION_CHANGES with GENESIS_FORK_VERSION (#11870)
* Sign BLS_TO_EXECUTION_CHANGES with GENESIS_FORK_VERSION

* update spectests

* extra blank line

* fix consensus_specs_sha

* get rid of LightClients tests

* fix unit tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-16 14:29:33 +00:00
terencechain
179252faea Address historical summaries feedback (#11864)
* Address historical summaries feedback

* Fix build

* Fix test

* Move historicalSummaries

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-16 12:44:40 +00:00
Potuz
f93b82ee89 broadcast bls changes at the fork (#11872) 2023-01-16 11:51:03 +00:00
int88
508e1ad005 optimize and fix bug of attestations related code (#11869)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-01-16 02:19:46 +00:00
terence tsao
8f06f72eed Clean diff 2023-01-13 16:03:12 -08:00
terence tsao
046882401e Fix build 2023-01-13 15:55:03 -08:00
terence tsao
ce339bc22b Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-13 15:52:37 -08:00
terencechain
1b2d917389 New prysm get block RPC (#11834) 2023-01-13 15:45:17 -08:00
kasey
eb6b811071 Remove ShardingForkVersion/Epoch from config (#11868)
* remove ShardingForkVersion/Epoch from config

* update magic value counting config params

* skip the sharding params, which we disavow

we wanted to remove these upstream and may well do some day but right
now it's too big of a mess to untangle.

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-13 17:11:33 +00:00
kasey
cf71dbdf97 "Fail" in logs makes FAIL in e2e hard to find (#11867)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-01-13 13:03:30 +01:00
kasey
1ec5d45d8d Capella State Detection (#11862)
* capella state version detection bug fix

* add capella to yaml "template"

* changes for capella state detection

* lint

* don't set capella fork version == altair!!!

* less brittle test for fork schedule rpc

* fix assertions that use wrong field name

* don't test capella/sharding fv against upstream

* hat tip Terence for sanity check

* Update config/params/loader_test.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-01-12 22:15:49 -06:00
kasey
9328e9af1f Capella State Detection (#11862)
* capella state version detection bug fix

* add capella to yaml "template"

* changes for capella state detection

* lint

* don't set capella fork version == altair!!!

* less brittle test for fork schedule rpc

* fix assertions that use wrong field name

* don't test capella/sharding fv against upstream

* hat tip Terence for sanity check

* Update config/params/loader_test.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-01-13 04:04:37 +00:00
terencechain
4762fd71de Add historical summaries to beacon api state (#11863)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-12 17:06:23 +00:00
rkapka
daa4fd2b72 missing event code 2023-01-12 17:45:21 +01:00
rkapka
5bdffb82e3 remove duplicated tests
(cherry picked from commit 952c3f7f5beef6c3981fd09ea161b25adc26852a)
2023-01-12 17:36:25 +01:00
rkapka
aabcaac619 Implement bls_to_execution_change API event
(cherry picked from commit 1c8a2c580b3664e1238380138dc65c30e238cb11)
2023-01-12 17:36:22 +01:00
Nishant Das
769daffb73 Fix State Fetcher (#11820)
* fix fetcher

* kasey's review

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-01-12 10:00:12 +00:00
Radosław Kapka
16e6e0de6c Replace FutureForkStub with specific interfaces (#11861)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-12 00:42:35 +00:00
Nishant Das
396fc3dc7e Fix Waiting For Bandwidth Issue While Syncing (#11853)
* fix bugs

* fix up

* comment

* raul's review

* comment

* finally fix it
2023-01-12 00:07:09 +00:00
terence tsao
d86a452b15 Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-11 14:52:26 -08:00
terencechain
7fa3ebfaa8 Add historical summaries to state and processing (#11842)
* Add historical summaries to state and processing

* Process historical roots test

* Passing spec tests

* Fix shas and tests

* Fix mainnet spectest sha

* Fix tests

* Update beacon-chain/core/epoch/epoch_processing.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/core/epoch/epoch_processing_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update proto/prysm/v1alpha1/beacon_state.proto

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/state/state-native/hasher.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Radek's feedback

* Getters error

* Dont return

* Fix else

* Fix tests

* Fix test

* Rm white space

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-01-11 22:20:15 +00:00
Patrice Vignola
81b9eceb50 Add Capella support for Validator's REST API ProposeBeaconBlock and GetBeaconBlock endpoints (#11848)
* Add Capella support for Validator's REST API ProposeBeaconBlock and GetBeaconBlock endpoints

* Fix

* Fix

* Add context to capella tests

* Update validator/client/beacon-api/beacon_block_proto_helpers.go

* Update validator/client/beacon-api/beacon_block_proto_helpers.go

* Update validator/client/beacon-api/beacon_block_proto_helpers_test.go

* Update validator/client/beacon-api/beacon_block_proto_helpers_test.go

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Radosław Kapka <radek@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-01-11 16:55:09 +00:00
Radosław Kapka
505ff6ea3d Replay state to current slot in StateBySlot (#11858) 2023-01-11 14:53:06 +01:00
Nishant Das
2b5125c7bc Check If Validator Is In Sync Committee (#11860) 2023-01-11 10:19:31 +00:00
Kasey Kirkham
9f3bb623ec add capella to yaml "template" 2023-01-10 16:30:32 -06:00
Kasey Kirkham
b10a95097e capella state version detection bug fix 2023-01-10 15:54:27 -06:00
Preston Van Loon
1e3a55c6a6 Refactor bytesutil, add support for go1.20 slice to array conversions (#11838)
* Refactor bytes.go and bytes_test.go to smaller files, introduce go1.17 and go1.20 style of array copy

* rename bytes_go17.go to reflect that it works on any version 1.19 and below

* fix PadTo when len is exactly the size

* Add go1.20 style conversions

* Forgot another int method

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-10 16:41:01 +00:00
Dhruv Bodani
116f3ac265 Add REST implementation for validator's ProposeAttestation (#11800)
* Add REST implementation for validator's ProposeAttestation

* handle nil attestation

* update propose attestation with context

* fix lint

* add remaining nil testcases

* Update validator/client/beacon-api/propose_attestation_test.go

Co-authored-by: Radosław Kapka <radek@prysmaticlabs.com>

* fix BUILD.bazel

Co-authored-by: Radosław Kapka <radek@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-01-10 16:21:29 +00:00
Jacob Shufro
bbe003720c add custom headers to DomainData requests (#11767)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-01-10 00:20:45 +00:00
Patrice Vignola
e957edcb12 Add REST implementation for Validator's SubmitSignedContributionAndProof (#11812)
* Add REST implementation for Validator's SubscribeCommitteeSubnets

* Add context

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-01-09 22:54:52 +00:00
Dhruv Bodani
684022fa69 Add REST implementation for validator's SubmitSignedAggregateSelectionProof (#11826)
* add REST endpoint for SubmitSignedAggregateSelectionProof

* fix linter action

* update with context

* fix context import

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-01-09 21:18:42 +00:00
Dhruv Bodani
a7a64632b1 fix TestEndToEnd_MinimalConfig_ValidatorRESTApi test (#11856)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-01-09 21:45:37 +01:00
Dhruv Bodani
c89ab764e3 Add REST implementation for validator's GetSyncMessageBlockRoot (#11824)
* add REST implementation for GetSyncMessageBlockRoot

* improve sync_message_block_root_test.go to handle errors

* include context

* fix unit tests
2023-01-09 19:35:33 +00:00
Manu NALEPA
375a76d6c9 Add REST implementation for SubmitSyncMessage (#11827)
* Add REST implementation for `SubmitSyncMessage`

* Use context

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-01-09 17:05:18 +00:00
terencechain
bad7c6a94e Better reconstruct error msg while EL client is syncing (#11851)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-09 15:09:45 +00:00
Potuz
ffbb73a59b Merge remote-tracking branch 'origin/develop' into capella 2023-01-09 12:07:00 -03:00
Potuz
bdc3a6ef11 Capella coverage and json structs (#11854)
* Capella coverage and json structs

* use SetupTestConfig
2023-01-09 14:47:40 +00:00
Potuz
649974f14d removed duplicated case 2023-01-09 09:54:15 -03:00
Potuz
9ec0bc0734 Merge remote-tracking branch 'origin/historical-summaries' into capella 2023-01-09 08:57:24 -03:00
terence tsao
b6a32c050f Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-07 07:03:58 -08:00
terence tsao
055e225093 Passing spec tests 2023-01-06 15:18:38 -08:00
terence tsao
144218cb1b Process historical roots test 2023-01-06 11:56:24 -08:00
Manu NALEPA
4c9a0bf772 Add REST implementation for SubmitValidatorRegistrations (#11816)
* Add REST implementation for `SubmitValidatorRegistrations`

* Remove unused context

* Use context

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-01-06 19:46:09 +00:00
Potuz
13b575a609 Merge remote-tracking branch 'origin/develop' into capella 2023-01-06 10:59:43 -03:00
kasey
1c27b21b5a Prevent pollution between scenario tests (#11850)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-06 07:49:42 +00:00
Potuz
c1f00923c1 Implement bls_to_execution_changes beacon endpoint (#11845)
* Initial implementation

* fix withrawal unit tests

* add custom hooks api middleware test

* unit tests for endpoint

* update screwed proto file

* double import

* Raul review

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-01-06 05:17:33 +00:00
Dhruv Bodani
700f5fee8c Add context to beacon API REST implementation (#11847)
* add context to beacon APIs

* add TODO to merge GET and POST methods

* fix linter action

Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-01-06 03:32:13 +00:00
Potuz
66b6177ed3 Log forkchoice weights on reorgs (#11849) 2023-01-05 20:06:40 -03:00
Raul Jordan
0b0373f2fd Remove Lock from Checkpoint Cache as LRU is Already Thread-Safe (#11846)
* lru cache in prysm is already thread safe

* rem import
2023-01-05 16:00:11 +00:00
Nishant Das
b481c91453 Shift Multiclient E2E to Post-Submit (#11811)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-01-05 02:39:28 +00:00
Manu NALEPA
e1408deb40 Add REST implementation for MultipleValidatorStatus (#11786)
* Add REST implementation for `MultipleValidatorStatus`

* Fix PR comments

* Address PR comments

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-01-04 23:15:23 +00:00
terence tsao
b5a414eae9 Rm bad imports 2023-01-04 08:10:27 -08:00
terence tsao
b94b347ace Sync with develop 2023-01-04 07:37:25 -08:00
terence tsao
f5ee225819 Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-04 07:16:00 -08:00
Potuz
51601716dc Rename proposer_exists (#11844) 2023-01-04 14:09:40 +00:00
terencechain
0e77a89208 Better version checks for future proof (#11825)
* Better version checks for future proof

* Revert changes for blob

* Use if else

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-01-04 03:47:20 +00:00
terence tsao
9cb48be14f Add historical summaries to state and processing 2023-01-03 15:58:39 -08:00
terencechain
6715f10ffc Add canUseBuilder helper method (#11829)
* Add canUseBuilder helper method

* Fix build

* Go fmt

* Check circuit breaker first
2023-01-03 12:46:08 +08:00
terencechain
2906196786 Add test utility helper to generate valid bls-to-exec (#11836)
* Add test util helper to generate bls-to-exec

* Gazelle
2023-01-03 00:34:15 +00:00
Potuz
ff82d9ee8c Add back BLS changes from orphaned blocks (#11822)
* Add back BLS changes from orphaned blocks

* Add full test

* add missing file

* remove duplicate imports

* Update beacon-chain/blockchain/head.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-01-01 18:22:01 +00:00
int88
42450fb24e fix trace name of function ExecuteStateTransitionNoVerifyAnySig (#11828)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-31 12:14:26 +00:00
terencechain
b6060d5a49 Track head slot through head block (#11833)
* Track head slot through head block

* Update beacon-chain/blockchain/head.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-12-29 22:23:56 +00:00
terencechain
cf6617933d Fix BUILD.bazel for v1alpha1 rpcs (#11830)
* Fix build.bazel for v1alpha1 rpcs

* Revert common_deps
2022-12-28 21:38:28 +00:00
Potuz
ffcdc26618 Merge remote-tracking branch 'origin/develop' into capella 2022-12-23 13:03:11 -03:00
terencechain
4e28192541 Proposer RPC: helpers to get exits and slashings (#11808)
* Proposer RPC: helpers to get exits and slashings

* add test

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-12-23 05:50:14 +00:00
Nishant Das
4aa075abac Use Single Log File in E2E (#11810)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-23 03:43:00 +00:00
kasey
c1563e40bd attempt to fix WaitForTextInFile log corruption (#11819)
* attempt to fix WaitForTextInFile log corruption

* restore Seek, just in case!

* lint

* handle file close error in all cases with a log

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-12-23 01:08:32 +00:00
terencechain
dac555a57c Allow proposer duties one epoch in advance part 2 (#11818)
Co-authored-by: rkapka <rkapka@wp.pl>
2022-12-22 23:00:36 +01:00
Potuz
96981a07b9 check signatures of BLS changes before capella 2022-12-22 16:10:00 -03:00
Potuz
4d549f572c Check BLS_TO_EXECUTION_CHANGE as if they were from Capella (#11762)
In the event we receive a BLS_TO_EXECUTION_CHANGE and our head state is
before Capella, verify its signature with the Capella fork version.

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-12-22 14:09:24 -03:00
Potuz
6b2721b239 Check BLS_TO_EXECUTION_CHANGE as if they were from Capella
In the event we receive a BLS_TO_EXECUTION_CHANGE and our head state is
before Capella, verify its signature with the Capella fork version.
2022-12-22 11:34:57 -03:00
Patrice Vignola
83f48350b2 Fix a bunch of deepsource warnings (#11814) 2022-12-22 09:20:10 +00:00
kasey
77a9dca9eb Cleaner genesis state population (#11809)
* e2e using dynamic configs; runtime bellatrix setup

* starts from phase0 but fee recipient eval fails

* lower TTD to cross before bellatrix epoch in e2e

* fix debug printf

* fix it

* dynamic ttd and more robust fee_recipient eval

* let multiplication set ttd=0 if bellatrix epoch=0

* refactoring fee recipient test after cognit errors

* lint

* appease deepsource

* deep sourcin

* gazelle

* missed a usage of this when refactoring

* refactoring premine genesis to work for all forks

previous set of functions was a copypasta mess

* gaz post-merge

* gaz

* resolve merge artifact and pr feedback

* back out some unneeded changes

* appease deepsource

* move premine-state next to similar utils

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: nisdas <nishdas93@gmail.com>
2022-12-22 05:38:51 +00:00
Nishant Das
9f1c6b2d18 Fix Multiclient E2E (#11803)
* fix multiclient e2e

* parallelize keystore generation

* fix config startup

* fix GetAggregateAttestation

* regression test

* kasey's review

* fix test

* gaz

* fix tests

* deep source
2022-12-22 02:48:46 +00:00
Radosław Kapka
0c997ede0f Query duties - epoch changes (#11806)
* Allow fetching next epoch attester duties

* better formatting

* fix tests

* use param
2022-12-21 16:28:16 +00:00
Radosław Kapka
2dc56878cd Implement getPoolBLSToExecutionChanges API endpoint (#11801)
* Implement `getPoolBLSToExecutionChanges` API endpoint

* fix

(cherry picked from commit d602c94b7b)

* unify with Capella

* newline
2022-12-21 13:01:04 +00:00
terence tsao
911048aa6d Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-12-21 13:01:40 +08:00
kasey
d916827fb9 E2E from Phase0 or Bellatrix (#11802)
* e2e using dynamic configs; runtime bellatrix setup

* starts from phase0 but fee recipient eval fails

* lower TTD to cross before bellatrix epoch in e2e

* fix debug printf

* fix it

* dynamic ttd and more robust fee_recipient eval

* let multiplication set ttd=0 if bellatrix epoch=0

* refactoring fee recipient test after cognit errors

* lint

* appease deepsource

* deep sourcin

* gazelle

* missed a usage of this when refactoring

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: nisdas <nishdas93@gmail.com>
2022-12-21 04:43:29 +00:00
James He
255e9693ee fixing typo on comments 2022-12-20 22:42:31 -06:00
rkapka
d602c94b7b fix 2022-12-20 20:28:53 +01:00
rkapka
6a5ecbd68f Implement getPoolBLSToExecutionChanges API endpoint
(cherry picked from commit cd25d922bc)

# Conflicts:
#	beacon-chain/node/node.go
#	beacon-chain/rpc/eth/beacon/pool.go
#	proto/eth/service/beacon_chain_service.pb.go
#	proto/migration/v1alpha1_to_v2.go
2022-12-20 17:31:21 +01:00
Ye Ding
e43152102e Identify invalid signature within batch verification (#11582) (#11741)
* Identify invalid signature within batch verification (#11582)

* Fix issues found by linter

* Make deepsource happy

* Add signature description enums

* Make descriptions a mandatory field

* More readable error message of invalid signatures

* Add 'enable-verbose-sig-verification' option

* Fix format

* Move descriptions to package signing

* Add more validation and test cases

* Fix build failure

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-12-20 18:41:47 +08:00
Nishant Das
90d5f6097c lock state (#11796) 2022-12-20 15:39:35 +08:00
james-prysm
ce125b763c Improve validator set change event readability and error handling (#11797) 2022-12-19 22:12:09 +01:00
Potuz
29dfcab505 Fix BlockValue marshalling 2022-12-19 11:14:35 -03:00
Potuz
16e5c903cc Merge remote-tracking branch 'origin/develop' into capella 2022-12-19 10:28:05 -03:00
Patrice Vignola
3ff5895548 Add REST implementation for Validator's ProposeExit (#11785)
* Add REST implementation for Validator's GetBeaconBlock

* Move endpoint into constant

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-19 12:58:52 +00:00
Patrice Vignola
4eb4cd9756 Add REST implementation for Validator's PrepareBeaconProposer (#11784)
* Add REST implementation for Validator's PrepareBeaconProposer

* Change fee recipients

* Handle error in a better way

* Move endpoint into constant

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-19 12:23:54 +00:00
terencechain
81b049ea02 Remove copy for block setters (#11795) 2022-12-19 10:54:25 +00:00
terencechain
66682cb4e5 Clean up block initialization and remove set block (#11791) 2022-12-19 15:41:28 +08:00
Potuz
cab42a4ae3 Take raw arrays for BLS changes 2022-12-16 18:02:35 -03:00
Radosław Kapka
a2fc57cb03 Improve tests for ProduceBlockV2 (#11789)
* Improve tests for `ProduceBlockV2`

* Revert "Auxiliary commit to revert individual files from 31c606198bdc91bac798b7a446d04d5cce86dad7"

This reverts commit d8b264457168f8f5e03d0473288a56606e38ab39.
2022-12-16 20:54:38 +00:00
Radosław Kapka
b63f649d89 Add finalized metadata field to API block fetching (#11788) 2022-12-16 18:21:08 +01:00
Ye Ding
e627d3c6a4 Fix command to fetch latest stable version (#11782) (#11787) 2022-12-16 16:13:54 +00:00
Potuz
a5bdd42bdd Merge remote-tracking branch 'origin/block-value' into capella 2022-12-16 12:47:29 -03:00
Potuz
a26197f919 take lists for bls changes endpoint 2022-12-16 12:47:18 -03:00
Potuz
856102cba8 Implement bound on expected withdrawals (#11760)
* Implement bound on expected withdrawals

This PR implements https://github.com/ethereum/consensus-specs/pull/3095

* Terence's suggestion

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* Update spectests

* fix minimal preset

* Clear committee cache during header tests to ensure active validators ain't leaking

* Rm debug log

* Rm bad init

* Update consensus spec sha

* Fix bad tests and remove redundant casting

* Fix build

* MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP typo

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-16 06:45:26 +00:00
terencechain
8ff6f7768f Sync committee head state cache for Capella (#11773)
* Handle capella

* Handle capella

* Error

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-16 00:42:43 +00:00
terencechain
295cc5e4f0 Engine API: new/get_payload for Capella (#11775)
* Engine API: new/get_payload for Capella

* Fix build

* Use switch for payload type check

* Fix bad tests

* Rm redundant arg

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-16 00:25:22 +00:00
Radosław Kapka
8a2c7e4269 Add finalized metadata field to API responses (#11783)
* in progress

* grpc working

* middleware

* fix tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-16 00:04:40 +00:00
Radosław Kapka
cb02a6897d Update block Beacon APIs to Capella (#11706)
* proto+ssz

* refactor GetBlindedBlockSSZ

(cherry picked from commit 97483c339f99b0d96bd81846a979383ffd2b0cda)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go
(cherry picked from commit 9e4e82d2c5)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go

* add Capella version

(cherry picked from commit 5d6fd0bbe663e5dd16df5b2e773f68982bbcd24e)
(cherry picked from commit 82f6ddb693)

* support SSZ lol

(cherry picked from commit 52bc2c8d617ac3e1254c493fa053cdce4a1ebd63)
(cherry picked from commit d7d70bc25b)

* update withdrawals proto

* refactor and test GetBlockV2

(cherry picked from commit c1d4eaa79d)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go

* refactor and test GetSSZBlockV2

(cherry picked from commit fbc4e73d31)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go

* test other functions

(cherry picked from commit 31d4a4cd11)

* move stuff to blinded_blocks.go

(cherry picked from commit 0a9e1658dd)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go

* fix migration code

* add Capella to SubmitBlock

* custom hooks

* missing structs

* fix tests

* fix tests

* review

* fix build issues

* replace ioutil with io

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-15 23:42:07 +00:00
Patrice Vignola
e843cafe7d Add REST implementation for Validator's GetBeaconBlock (#11772)
* WIP

* WIP

* WIP

* Remove unused parameter

* Address PR comments

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-15 22:34:05 +00:00
terence tsao
080ce31395 Add block value to get payload v2 2022-12-15 17:17:58 +08:00
Radosław Kapka
1c5b0596c7 More user-friendly message in ErrUndefinedExecutionEngineError (#11777)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-15 07:46:17 +00:00
David Theodore
159228b34f SparseMerkleTrie depth fix & regression tests (#11778) 2022-12-15 06:50:12 +00:00
Potuz
d5d17e00b3 Merge branch 'develop' into capella 2022-12-14 13:04:23 -03:00
rkapka
9c6a1331cf remove redeclared struct 2022-12-14 16:29:14 +01:00
Radosław Kapka
1fbb3f3e51 Reconstruct full Capella block (#11732)
* Merge branch 'reconstruct-capella-block' into capella

(cherry picked from commit b0601580ef)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go
#	proto/engine/v1/json_marshal_unmarshal.go

* remove unneeded test

* rename methods

* add doc to interface

* deepsource

(cherry picked from commit 903cab75ee)

# Conflicts:
#	beacon-chain/execution/testing/mock_engine_client.go

* bzl

* fix failing tests

* single ExecutionBlockByHash function

* fix engine mock

* deepsource

* reorder checks

* single execution block type

* update tests

* update doc

* bytes test

* remove toWithdrawalJSON

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-14 13:42:48 +00:00
Manu NALEPA
9c21809c50 Add REST implementation for ValidatorStatus (#11757)
* Add REST implementation for `ValidatorStatus`

* Fix review comments
2022-12-14 12:58:36 +00:00
terencechain
62ffed7861 Add capella cases for RPC calls (#11759)
* Add capella cases for rpc calls

* Use if else

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-14 05:08:54 +00:00
Victor
bdf063819c Fixed comments of the template (#11652)
The comments of the template were not inside the braces and appeared when creating a new Issue.

Thanks

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-12-14 02:49:19 +00:00
Radosław Kapka
9efc89f993 Correct duties calculation for old slots (#11723)
* initial code

* update proposalDependentRoot

* convert method to function

* decouple tests

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-12-14 00:33:30 +00:00
Preston Van Loon
5ed52b1e44 Validator: Add trace span information for AttestationRecord save requests (#11742)
* Add additional span information when saving attestation records

* Remove keymanager.sign span. It's very noisy and not a helpful data point.

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-14 00:06:32 +00:00
kasey
2e49fdb3d2 Start chain from bellatrix state (#11746)
* WIP trying to start from bellatrix state

* env var to control log path with unique paths

due to flaky test re-run behavior, logs from a failed test run are
overwritten by subsequent retries. This makes it difficult to retrieve
logs after the first failed run. It also takes some squinting through
output to find the location of the log file in the first place. This
flag enables logs to be placed in an arbitrary path. Note that bazel
sandboxing generally will force this path to be in the /tmp tree.

* WIP - grabbing changes from rm-pre-genesis branch

* combine bellatrix state w/ rm-pre-genesis branch

* WIP

* use encoding/detect for genesis state bytes

* WIP more fixes towards start from bellatrix

* remove debug wrapping

* WIP

* multiple bugfixes

* fix fork ordering bug and bellatrix genesis blocks

* send deposits, spam tx to advance, fix miner alloc

* WIP

* WIP mess

* WIP

* Print process ID information for purposes of attaching a debugger

* bugs: genesis body_root and deposit index mismatch

* fix voting period start, skip altair check

* add changes

* make it better

* rm startup FCU, rm logs

* cleanup import grouping&ordering

* restore FCU log, get rid of tmp var

* rm newline

* restore newline

* restore wrapped error

* rm newline

* removing boot node version override

this doesn't seem to matter?

* add issue number to todo comment

* rm commented code

* rm vmdebug geth flag

* unexport values only used with genesis test pkg

and add comments where missing from exported values.

* adding comments to special cases for testnets

* migrate comments from PR to actual code :)

* rm unused test param

* mark e2e spawns exempt from gosec warning

* Fix DeepSource errors in `proposer_bellatrix.go` (#11739)

* Fix DeepSource errors in

* Omit receiver name

* Address PR comments

* Remove unused variable

* Fix more DeepSource errors

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Remove `Test_IsExecutionEnabledCapella` (#11752)

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Add REST implementation for Validator's `ProposeBeaconBlock` (#11731)

* WIP

* WIP

* WIP

* Add tests

* WIP

* Add more tests

* Address DeepSource errors

* Remove unused param

* Add more tests

* Address PR comments

* Address PR comments

* Fix formatting

* Remove unused parameter

* Fix TestLittleEndianBytesToBigInt

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fix validator client (#11755)

* fix validator client

(cherry picked from commit deb138959a)

* Use signed changes in middleware block

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* API `finalized` metadata field - update protos (#11749)

* API `finalized` metadata field - update protos

* change nums

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* log breaks unit tests that don't do full arg setup

easiest to just remove it for now

* restore prior behavior of phase0 block for altair

* update unit tests to account for special case

* loosen condition for fork version to match config

we don't know which fork version genesis will start from, so we
shouldn't force it to be a phase0 genesis.

* skip until we can mod configs at runtime

* NewGenesisBlockForState computes state root itself

* rm noisy log

* this log would be noisy in mainnet

* fix format specifier, []byte -> string

* core.Genesis UnmarshalJson has a value receiver :)

* no longer needs to be exported

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
Co-authored-by: nisdas <nishdas93@gmail.com>
Co-authored-by: Patrice Vignola <vignola.patrice@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-14 07:13:49 +08:00
Preston Van Loon
a01d08b857 Add --stamp to release builds (#11768) 2022-12-13 21:33:51 +00:00
Preston Van Loon
fbe0672fc4 state: Improve replay state logs metadata (#11763)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-13 18:46:51 +00:00
Potuz
a15ec49844 Do not remove attestations twice (#11764) 2022-12-13 14:43:42 -03:00
int88
a268255ab1 fix mismatch of code and comment (#11761) 2022-12-13 15:57:52 +00:00
Potuz
8629ac8417 only broadcast bls changes post-capella 2022-12-13 09:53:05 -03:00
james-prysm
043079dafe Adding exit codes to cli commands (#11735)
* adding exit codes to cmd

* adding in fatal logs to other cli actions

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-13 12:13:33 +00:00
terencechain
3fcdd58872 Add block setters for consensus type (#11751)
* Add block getters

* Rm changes

* Rm changes

* Revert "Rm changes"

This reverts commit 1ae5db79da.

* Fix tests

* Set graffiti right place

* Potuz feedback

* Update consensus-types/blocks/setters.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Radek feedback

* Fix comments

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-13 00:03:36 +00:00
Radosław Kapka
0798092ca8 API finalized metadata field - update protos (#11749)
* API `finalized` metadata field - update protos

* change nums

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-12 16:59:27 +00:00
Radosław Kapka
fb981d29e8 fix validator client (#11755)
* fix validator client

(cherry picked from commit deb138959a)

* Use signed changes in middleware block

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-12-12 15:49:02 +00:00
rkapka
6dcb2bbf0d Use signed changes in middleware block
(cherry picked from commit e3c9e7bb5c)
2022-12-12 16:21:39 +01:00
Potuz
deb138959a fix validator client 2022-12-12 11:43:48 -03:00
Potuz
45e6f3bd00 fix build 2022-12-12 11:29:57 -03:00
Potuz
55a9e0d51a Merge branch 'develop' into capella 2022-12-12 11:15:39 -03:00
Patrice Vignola
fa01ee5eba Add REST implementation for Validator's ProposeBeaconBlock (#11731)
* WIP

* WIP

* WIP

* Add tests

* WIP

* Add more tests

* Address DeepSource errors

* Remove unused param

* Add more tests

* Address PR comments

* Address PR comments

* Fix formatting

* Remove unused parameter

* Fix TestLittleEndianBytesToBigInt

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-12 10:39:51 +00:00
terencechain
09619c388f Remove Test_IsExecutionEnabledCapella (#11752)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-11 22:59:12 +00:00
Patrice Vignola
1531313603 Fix DeepSource errors in proposer_bellatrix.go (#11739)
* Fix DeepSource errors in

* Omit receiver name

* Address PR comments

* Remove unused variable

* Fix more DeepSource errors

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-10 10:27:07 +00:00
terence tsao
3ddae600fb Merge branch 'capella' of github.com:prysmaticlabs/prysm into capella 2022-12-09 19:35:39 -08:00
Potuz
babfc66c5b Check BLS changes when requesting from pool (#11718)
* Check BLS changes when requesting from pool

* Terence's suggestions

* Radek's suggestion

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-12-09 14:41:45 +00:00
rkapka
56503110dd Merge branch 'recontruct-capella-blinded' into capella
# Conflicts:
#	testing/spectest/shared/common/forkchoice/service.go
2022-12-09 12:45:23 +01:00
rkapka
f67d35dffd single execution block type 2022-12-09 12:43:35 +01:00
terencechain
cb9b5e8f6e Use payload attribute type for engine-API (#11719)
* Add payload attribute type

* Gazelle

* Fix test

* Use new payload attribute type

* Fix test

* Fix test

* Update beacon-chain/execution/engine_client.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Feedbacks

* Fix suggestion

* Update argument, fix test

* Return emptyAttri instead of nil

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-08 21:39:45 +00:00
james-prysm
145eb5a8e4 add m1 chip info (#11736) 2022-12-08 15:20:32 -06:00
Preston Van Loon
7510f584cb beacondb: Remove cache look up and lock request from boltdb transaction for state summaries (#11745)
* Remove cache look up from bolt transaction

* remove bogus line, oops

* Remove independent cache lookup entirely and just use HasStateSummary

* Rm newline
2022-12-08 19:32:04 +00:00
rkapka
b05b67b264 reorder checks 2022-12-08 19:48:02 +01:00
rkapka
a5c6518c20 deepsource 2022-12-08 19:48:02 +01:00
Radosław Kapka
da048395ce Merge branch 'develop' into recontruct-capella-blinded 2022-12-08 18:41:12 +01:00
rkapka
f31f7be310 fix engine mock 2022-12-08 18:39:59 +01:00
rkapka
e1a2267f86 Merge remote-tracking branch 'origin/capella' into capella 2022-12-08 18:36:44 +01:00
rkapka
3c9e4ee7f7 Merge branch 'recontruct-capella-blinded' into capella
# Conflicts:
#	beacon-chain/blockchain/pow_block.go
#	beacon-chain/execution/engine_client.go
#	beacon-chain/execution/engine_client_test.go
#	beacon-chain/execution/testing/mock_engine_client.go
#	beacon-chain/rpc/eth/beacon/blocks.go
#	beacon-chain/state/state-native/getters_withdrawal.go
#	consensus-types/blocks/factory.go
#	proto/engine/v1/json_marshal_unmarshal.go
#	proto/engine/v1/json_marshal_unmarshal_test.go
2022-12-08 18:31:25 +01:00
rkapka
9ba32c9acd single ExecutionBlockByHash function 2022-12-08 17:53:02 +01:00
rkapka
d23008452e fix failing tests 2022-12-08 17:29:51 +01:00
terencechain
f397cba1e0 Better proposal RPC (#11721) 2022-12-08 07:40:48 -08:00
Patrice Vignola
dbeb3ee886 Onboard validator's Beacon REST API usage to e2e tests (#11704)
* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* Onboard validator's Beacon REST API usage to e2e tests

* Remove unused variables

* Remove use_beacon_api tags

* Fix DeepSource errors

* Revert unneeded changes

* Revert evaluator changes

* Revert import reordering

* Address PR comments

* Remove all REST API e2e tests except minimal one

* Fix validator pointing to inexisting beacon node port

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-08 14:38:56 +00:00
Preston Van Loon
ca2618110f e2e: Print process IDs for debugging. (#11734)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-08 04:48:16 +00:00
Nishant Das
642c399b9d Add Back Cross Client Option (#11738)
* add changes

* remove env var

* fix
2022-12-07 22:20:35 -06:00
Manu NALEPA
6eee539425 Add REST implementation for Validator's WaitForActivation (Ethereum Protocol Fellowship) (#11671)
* Implement REST `WaitForActivation`

* Activation: Factorize tests

* Fix PR comments

* `missingPubKeys`: Replace map by slice (no need to have a map here)

* Fix typo

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-12-07 19:20:11 +00:00
rkapka
1583e93b48 bzl 2022-12-07 18:23:43 +01:00
james-prysm
bd82eb873c Fix to Post Submit E2E (#11733) 2022-12-07 18:19:44 +01:00
rkapka
849457df81 deepsource
(cherry picked from commit 903cab75ee)

# Conflicts:
#	beacon-chain/execution/testing/mock_engine_client.go
2022-12-07 16:29:34 +01:00
rkapka
903cab75ee deepsource 2022-12-07 16:27:09 +01:00
rkapka
ee108d4aff add doc to interface
(cherry picked from commit a08baf4a14)
2022-12-07 16:22:14 +01:00
rkapka
49bcc58762 rename methods
(cherry picked from commit 8c56dfdd46)
2022-12-07 16:22:09 +01:00
rkapka
a08baf4a14 add doc to interface 2022-12-07 16:20:43 +01:00
rkapka
8c56dfdd46 rename methods 2022-12-07 16:20:31 +01:00
rkapka
dcdd9af9db remove unneeded test 2022-12-07 16:05:44 +01:00
rkapka
a464cf5c60 Merge branch 'reconstruct-capella-block' into capella
(cherry picked from commit b0601580ef)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go
#	proto/engine/v1/json_marshal_unmarshal.go
2022-12-07 15:21:58 +01:00
Nishant Das
62455b7bcb Fix Lint and Minor Bugs in E2E (#11730) 2022-12-07 05:31:53 +00:00
terence tsao
2d483ab09f Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-12-06 16:48:44 -08:00
Justin Traglia
6e26a6f128 Replace LastWithdrawalValidatorIndex to updated name (#11725) 2022-12-06 14:40:10 -08:00
Justin Traglia
b512b92a8a Update withdrawal error message to reflect new field name (#11724) 2022-12-06 14:39:17 -08:00
Preston Van Loon
3d6d0a12dd Update go to 1.19.4 (#11727)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-06 21:20:25 +00:00
rkapka
b0601580ef Merge branch 'reconstruct-capella-block' into capella 2022-12-06 22:16:59 +01:00
rkapka
c1f29ea651 remove logs 2022-12-06 22:11:35 +01:00
rkapka
881d1d435a logs 2022-12-06 21:46:41 +01:00
Patrice Vignola
05148dbc8f Fix DeepSource errors in the Validator's REST API (#11726) 2022-12-06 20:31:56 +00:00
rkapka
d1aae0c941 Merge branch 'capella' into reconstruct-capella-block 2022-12-06 21:26:55 +01:00
james-prysm
19af1d2bb0 E2E: beacon APIs Part 1 (#11306)
* adding compare beacon block test

* fixing bazel

* fixing evaluator import

* fixing imports

* changing package name

* fixing bazel

* adding logic to check for checking epoch

* fixing linting

* adding check for attester duties

* handle both blockv1 and blockv2

* making middleware objects public instead

* adding test for block attestations

* fixing typo

* adding blockroot test

* adding test for attestations

* fixing type value

* fixing test

* adding in node endpoints

* fixing bazel

* updating web3signer

* printing beacon blocks on request

* fixing struct

* temp log

* forgot string cast

* adding comparison function

* fixing bazel and evaulators, WIP

* fixing bazel

* changing how to minify json

* trying multiclient

* fixing port problem

* reverting evaluator and making test only for mainnet scenario testing

* removing test data

* fixing linting unused functions
git push

* changed to reflect

* adding in ssz comparison

* fixing tests

* fixing conflict

* fixing tests

* making v2 the standard

* adding better error logging

* fixing type

* adding lighthouse settings and fixing some deepsource items

* testing adding delay to evaluator

* testing without peers check

* changing target peers to try to fix lighthouse peer connections

* temp removing other tests

* fix lint issue

* adding peers connect back in

* adding in state version

* fixing bazel

* fixing path error

* testing changes to state

* fix unmarshal

* simplifying beacon api e2e execution

* fixing missed assertian checks

* improve logging and debugging issue

* trying to fix unmarshal

* still breaking more test edits

* removing fork to test unmarshal

* fixing pathing

* resolving error

* fixing beacon_api

* merging in debug api to beacon_api test

* fixing lint and temp commenting out endpoint

* adding in custom comparison function

* fixing custom evaluator

* adding test for block header data

* fixing header evaluation

* add node apis

* fixing linting,adding tests

* fixing bazel and temp removing unused functions

* fixing deepsource and linting issues

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing review

* resolving more review comments

* fixing linting

* removing ssz return value as it's large and possibly not needed

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fixing more review comments

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fixing linting and review iteems

* fixing cognit complexity issue

* fixing linting

* fix log printout

* test build kite only with crossclient

* switching out evaluator to depositedvalidatorsareactive

* removed wrong evaluator switching correct one

* removing skip based on review comments

* fixing pathing issue

* test without participation at epoch

* testing without special lighthouse logic in evaluator

* reducing expected participation when multiclient

* fixing imports

* reducing epochs to see if less flaky

* testing with other tests added back in

* reducing epochs ran further

* testing only cross client again

* testing multi run again

* test reverted scenario for tests

* testing with cross client

* removing commented out function

* testing without peers connect

* adding optimization based on suggestions

* removed the wrong peers connect

* accidently commited something I shouldn't have

* fixing lighthouse flag

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/evaluators/beaconapi_evaluators/beacon_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-06 16:01:17 +00:00
Nishant Das
faf16f9e56 Allow Nodes Running Via VPNs To Make Successful Dials (#11599)
* fix it

* fix dialer for now

* fix build

* fix test

* fix to v0.24.0

* fix gaz

* fix build

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-12-06 14:54:45 +00:00
Manu NALEPA
0a5c65e29c Add REST implementation for Validator's ValidatorIndex (#11712)
* Add GetAttestationData

* Add tests

* Add many more tests and refactor

* Fix logic

* Address PR comments

* Address PR comments

* Add jsonRestHandler and decouple http logic from rest of the code

* Add buildURL tests

* Remove handlers_test.go

* Improve tests

* Implement `ValidatorIndex` of `beaconApiValidatorClient` using Beacon API

* Implement getStateValidators

* `validatorIndex`: Use `getStateValidators`

Co-authored-by: Patrice Vignola <vignola.patrice@gmail.com>
2022-12-06 12:27:26 +00:00
Radosław Kapka
7dc966bb3b Update state Beacon APIs to Capella (#11708)
* proto

(cherry picked from commit 24f45e021061782ab4d6c101a95368310aad67b6)

* implementation

(cherry picked from commit bbfa22c2053e8176fc004b13ba9c8d62cc3bd352)

# Conflicts:
#	beacon-chain/rpc/apimiddleware/structs.go

* fix compilation error

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-06 00:36:23 +00:00
Radosław Kapka
51d35f36f0 Disallow computing committee assignments for old slots (#11722) 2022-12-05 23:37:08 +01:00
Patrice Vignola
943a0556e9 Add REST implementation for Validator's DomainData (#11711)
* Add REST implementation for Validator's DomainData

* Add missing dependency

* Fix getForkVersion logic

* Remove unused helpers

* Fix deepsource error

* Fix deepsource error

* Address PR comments

* Remove outdated comment

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-12-05 10:27:41 +00:00
terence tsao
57bdb907cc Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-12-02 11:10:39 -08:00
terencechain
f7cecf9f8a Add payload attribute type (#11710)
* Add payload attribute type

* Gazelle

* Fix test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-12-02 18:23:47 +00:00
rkapka
15d683c78f Merge branch 'capella' into reconstruct-capella-block 2022-12-02 16:43:56 +01:00
rkapka
bf6c8ced7d working 2022-12-02 16:37:24 +01:00
Potuz
78fb685027 Check BLS changes when requesting from the pool 2022-12-02 10:14:39 -03:00
Nishant Das
8be73a52b1 Add State Check For BLS Execution Change Messages (#11716) 2022-12-02 11:13:21 +00:00
Potuz
bebceb3bfa Handle BLS to execution changes included in blocks (#11713)
* Handle BLS to execution changes included in blocks

* log

* review
2022-12-02 14:04:45 +08:00
Potuz
98c0b23350 broadcast BLS changes 2022-12-01 11:26:56 -03:00
Mart1i1n
d541010bf1 Fix Typo (#11670)
* Update ffg_update_test.go

Fix some alignment typos.

* Update justification_finalization.go

Fix typo.

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-11-30 22:25:55 +00:00
terence tsao
039a0fffba Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-11-30 12:09:47 -08:00
terencechain
8cb07e0c2b Spec test: capella and update to v1.3.0-alpha.1 (#11683) 2022-11-30 12:08:04 -08:00
Potuz
90ec640e7a Fix capella operations spectests 2022-11-30 15:35:55 -03:00
Potuz
10acd31d25 check on verify instead of sig 2022-11-30 15:20:23 -03:00
Potuz
df1e8b33d8 BLS Change signature verification 2022-11-30 14:52:31 -03:00
Preston Van Loon
aa2bf0c9c4 Spectests: ensure test directories are not empty (#11709)
* Add an assertion that test folders are not empty

* more assertions

* only run sync tests on bellatrix or later
2022-11-30 17:32:10 +00:00
rkapka
cdb4ee42cc not working 2022-11-30 18:22:52 +01:00
rkapka
d29baec77e proper error handling in BuildSignedBeaconBlockFromExecutionPayload 2022-11-30 15:32:57 +01:00
terence tsao
0adc54b7ff Refactor get payload 2022-11-29 12:02:47 -08:00
Ye Ding
e49d8f2162 Fix a race condition during initialization (#11444) (#11698)
* Fix a race condition during initialization (#11444)

* Fix tests

* Add more test cases

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-11-29 18:24:50 +00:00
Potuz
1cbd7e9888 withdraw by default 2022-11-29 13:52:34 -03:00
rkapka
0a9e1658dd move stuff to blinded_blocks.go 2022-11-29 17:36:52 +01:00
rkapka
31d4a4cd11 test other functions 2022-11-29 17:36:52 +01:00
rkapka
fbc4e73d31 refactor and test GetSSZBlockV2 2022-11-29 17:36:52 +01:00
rkapka
c1d4eaa79d refactor and test GetBlockV2 2022-11-29 17:36:52 +01:00
rkapka
760af6428e update ssz 2022-11-29 17:36:52 +01:00
terence tsao
dfa0ccf626 Fix attribute pb nil checks 2022-11-29 07:27:33 -08:00
rkapka
1a51fdbd58 update withdrawals proto 2022-11-29 14:23:11 +01:00
shana
c6ed4e2089 Do not omit json fields if empty in builder client (#11673)
* Do not omit json fields if empty in builder client

* fix tests

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-11-29 12:12:13 +00:00
int88
0ad902e47d fix help message of metric doublylinkedtree_node_count (#11705) 2022-11-29 11:29:48 +00:00
terencechain
368a99ec8d Fix nil attribute for capella branch (#11701) 2022-11-28 17:34:26 -08:00
Preston Van Loon
4f4775f9f9 bazel: Update rules_go and remove extra repos in WORKSPACE (#11703)
* Update rule_go to latest release, remove fuzzit stuff

* Delete another thing
2022-11-28 23:00:11 +00:00
terence tsao
1c7e734918 Fix some blockchain tests 2022-11-28 13:59:17 -08:00
rkapka
764d1325bf Merge remote-tracking branch 'origin/capella' into capella 2022-11-28 21:07:00 +01:00
rkapka
0cf30e9022 Merge branch '__develop' into capella 2022-11-28 21:06:20 +01:00
Potuz
227b20f368 fix nil block from stream 2022-11-28 16:53:11 -03:00
terencechain
679e6bc54a Cont FCU if get payload attribute fails (#11693)
* Cont FCU if get payload attribute fails

* Fix err position

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2022-11-28 19:42:32 +00:00
rkapka
d7d70bc25b support SSZ lol
(cherry picked from commit 52bc2c8d617ac3e1254c493fa053cdce4a1ebd63)
2022-11-28 20:19:24 +01:00
rkapka
82f6ddb693 add Capella version
(cherry picked from commit 5d6fd0bbe663e5dd16df5b2e773f68982bbcd24e)
2022-11-28 20:19:19 +01:00
rkapka
9e4e82d2c5 refactor GetBlindedBlockSSZ
(cherry picked from commit 97483c339f99b0d96bd81846a979383ffd2b0cda)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go
2022-11-28 20:19:15 +01:00
Radosław Kapka
c7a3cf8563 GetForkChoice API endpoint (#11680)
* proto

* middleware

* change structure

* fix all issues

* test

* validity field
2022-11-28 19:17:53 +00:00
rkapka
9838369fe9 fix proto generation 2022-11-28 20:15:05 +01:00
rkapka
6085ad1bfa fix issues 2022-11-28 20:07:37 +01:00
rkapka
d3851b27df Merge branch '__develop' into capella
# Conflicts:
#	beacon-chain/rpc/apimiddleware/structs.go
#	beacon-chain/rpc/eth/beacon/blocks.go
#	proto/eth/v2/BUILD.bazel
#	proto/eth/v2/beacon_block.pb.go
#	proto/eth/v2/beacon_block.proto
#	proto/eth/v2/generated.ssz.go
#	proto/migration/v1alpha1_to_v2.go
#	proto/prysm/v1alpha1/beacon_chain.pb.go
#	proto/prysm/v1alpha1/beacon_chain.proto
2022-11-28 19:31:33 +01:00
Radosław Kapka
6c3b75f908 Upgrade getBlindedBlock API endpoint to Capella (#11687)
* proto

(cherry picked from commit 7101910e0fab5a5572795115679fd6f8d8c8379b)

* GetBlindedBlock

(cherry picked from commit e5c269ddf7b0c9e04f72ed28982a82de56fcac55)

* middleware

(cherry picked from commit 1719ce5967b0f74786c596cc921f7256e6b224f3)

* refactor

* Update beacon-chain/rpc/apimiddleware/structs.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* update error message

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-11-28 18:17:05 +00:00
Potuz
d6100dfdcb fix spectest 2022-11-28 12:38:14 -03:00
Patrice Vignola
f276c5d006 Make the validator REST API's WaitForChainStart blocking (#11695) 2022-11-28 11:58:04 +01:00
Potuz
c2144dac86 Add BLSToExecutionChangge endpoint 2022-11-27 23:00:16 -03:00
Potuz
a47ff569a8 Add Submit BLSChange endpoint 2022-11-27 20:50:21 -03:00
Potuz
f8be022ef2 Merge branch 'develop' into capella 2022-11-27 20:40:41 -03:00
Potuz
4f39e6b685 Implement REST block API endpoints 2022-11-27 16:20:30 -03:00
Potuz
29953cb734 Refactor Sync Committee Rewards (#11696) 2022-11-27 11:06:32 -08:00
Potuz
c67b000633 add test 2022-11-27 13:07:50 -03:00
Potuz
02f7443586 Refactor Sync Committee Rewards 2022-11-27 09:04:03 -03:00
Nishant Das
a23a5052bc Add Gossip Handler For BLS To Execution Changes (#11690) 2022-11-26 11:07:05 -08:00
terence tsao
6275e7df4e Clean up execution engine 2022-11-25 17:20:28 -08:00
terencechain
1b6b52fda1 Add PayloadAttribute superset and use it for engine-api (#11691) 2022-11-25 17:12:55 -08:00
Potuz
5fa1fd84b9 Hook the BLSTOExecution Pool to the proposer 2022-11-25 09:33:17 -03:00
nisdas
bd0c9f9e8d fix 2022-11-25 09:06:59 -03:00
Potuz
2532bb370c Merge branch 'develop' into capella 2022-11-25 08:16:32 -03:00
Potuz
f9e0d4b13a Batch capella signatures with the rest of the block (#11689) 2022-11-25 09:45:36 +08:00
Potuz
0aaee51973 Process bls changes (#11684)
* Implement ProcessBLSToExecutionChanges

* Batch process signatures

* gaz

* Change runtime behavior

* Terence's review
2022-11-24 19:36:12 +00:00
terencechain
a0c0706224 Add Capella state changes (#11688)
* Add Capella state changes

* Use params.configs
2022-11-24 14:54:55 -03:00
Potuz
a525fad0ea Add upgrade to Capella to statereplay (#11686) 2022-11-24 14:58:05 +00:00
nisdas
12efc6c2c1 make it reject 2022-11-24 22:21:12 +08:00
nisdas
a6cc9ac9c5 add sig validation 2022-11-24 22:19:55 +08:00
nisdas
031f5845a2 add gossip handler for bls change object 2022-11-24 21:22:18 +08:00
nisdas
b88559726c Merge branch 'develop' of https://github.com/prysmaticlabs/geth-sharding into capella 2022-11-24 20:05:41 +08:00
Patrice Vignola
7ab5851c54 Add a gRPC fallback mode to the validator Beacon REST API (#11679)
* Add a gRPC fallback mode to the validator Beacon REST API

* Remove --beacon_api_grpc_fallback flag

* Add missing bazel dependency

* Reorder dependency per gazelle check

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-24 11:09:07 +00:00
Patrice Vignola
e231cfd59d Onboard validator's beacon REST API tests to CI (#11682)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-24 11:50:13 +01:00
kasey
0a41b957dc env var to control log path with unique paths (#11681)
due to flaky test re-run behavior, logs from a failed test run are
overwritten by subsequent retries. This makes it difficult to retrieve
logs after the first failed run. It also takes some squinting through
output to find the location of the log file in the first place. This
flag enables logs to be placed in an arbitrary path. Note that bazel
sandboxing generally will force this path to be in the /tmp tree.

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-11-23 22:56:40 +00:00
Radosław Kapka
f2399e21e1 GetLiveness API endpoint (#11617)
* proto

* initial version

* middleware + tests

* change request structure

* fix all issues

* review feedback

* simplify out-of-range check
2022-11-23 18:23:22 +00:00
kasey
395e49972e prysmctl support generating non-phase0 genesis.ssz (#11677)
* support generating non-phase0 genesis.ssz

* make default (Value) work for EnumValue + lint

* remove messy punctuation

* Ran gazelle for @kasey

* Fix deps viz

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
2022-11-23 14:22:24 +00:00
nisdas
62f6b07cba fix gossip registration 2022-11-23 20:10:44 +08:00
terence tsao
f956f1ed6e Handle capella version for packing atts 2022-11-22 17:21:21 -08:00
Radosław Kapka
ee099d3f03 Operations pool for BLS-to-execution-changes (#11631)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-11-22 18:34:14 +01:00
Potuz
16b0820193 Merge branch 'develop' into capella 2022-11-22 13:43:03 -03:00
Potuz
4b02267e96 add more minimal fixes 2022-11-22 13:34:33 -03:00
Potuz
746584c453 fix missing minimal test 2022-11-22 13:34:33 -03:00
terencechain
e2ffaf983a Add minimal withdrawal size (#11658)
* Add selector with minimal withdrawal size

* @potuz's feedback, Add PayloadAttribruteV2

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-11-22 16:05:37 +00:00
Potuz
b56daaaca2 Fix empty withdrawals slice 2022-11-22 10:38:42 -03:00
Patrice Vignola
0d4b98cd0a Add REST implementation for Validator's WaitForChainStart (#11654)
* Add REST implementation for Validator's WaitForChainStart

* Add missing error mapping

* Add missing bazel dependency

* Add missing tests

* Address PR comments

* Replace EventErrorJson with DefaultErrorJson

* Add tests for WaitForChainStart

* Refactor tests

* Address PR comments

* Add gazelle:build_tag use_beacon_api comment in BUILD.bazel

* Address PR comments

* Address PR comments

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-22 12:12:55 +00:00
Andrew Davis
fe1281dc1a fix(beacon-chain/rpc): correct block root for block events (#11666) 2022-11-22 10:26:56 +00:00
james-prysm
9761bd0753 correctly assign arm64 arch for Apple M1 (#11675)
* fixing typo

* removing if statement, not needed anymore
2022-11-21 23:11:02 +00:00
Manu NALEPA
0bcddb3009 Update mockgen (#11615)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-21 17:26:27 +01:00
terencechain
ee9da3aded Fill missing payload ID should exit early if block has seen (#11669)
* Fill missing payload ID should exit early if block has seen

* Move metric inc to top

* Fix test
2022-11-21 10:34:40 -05:00
kasey
d58d3f2c57 E2E deposit testing overhaul (#11667)
* rewrite/refactor deposit testing code

keep track of sent deposits so that they can be compared in detail with
the validator set retreived from the API.

* fix bugs in evaluator and retry

* lint + deepsource appeasement

* typo s/Sprintf/Printf/

* gosec, more like nosec

* fix gosec number - 204->304

* type switch to get signed block from container

* improve comments

* centralizing constants and adding comments

* lock around Depositor to avoid future races

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-11-19 03:40:32 +00:00
Preston Van Loon
4b033f4cc7 Update go to 1.19.3 (#11630)
* Update go to 1.19.3

* Update other items to 1.19

* Update golangci-lint to latest release

* Run gofmt -s with go1.19

* Huge gofmt changes

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-11-18 19:12:19 +00:00
terencechain
07d0a00f88 Add capella slashing parameters (#11660)
* Add capella slashing parameters

* Add capella to epoch precompute

* Revert "Add capella to epoch precompute"

This reverts commit 61a5f3d2bd.

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-11-17 17:03:53 +00:00
Nishant Das
08d63a0cd0 Remove Geth Bindings From Prysm (#11586)
* check in changes

* gaz

* preston's review

* comment

* fix up

* remove test

* gaz

* preston's review

* fix it
2022-11-17 17:16:19 +08:00
terence tsao
931e5e10c3 Fix mainnet fork transition tests 2022-11-16 08:49:06 -05:00
kasey
f02ad8a68b disable geth nat traversal (#11664)
Co-authored-by: kasey <kasey@users.noreply.github.com>
2022-11-16 00:14:42 +08:00
Potuz
c172f838b1 Mark capella fields as dirty 2022-11-15 09:58:34 -05:00
Potuz
c07ae29cd9 move MaxWithdrawalsPerPayload to fieldparams 2022-11-14 22:59:31 -05:00
Potuz
466fd359a2 Fix expected_withdrawals (#11662) 2022-11-14 21:33:40 -05:00
Potuz
214c9bfd8b fix bls_to_execution_changes 2022-11-14 16:41:17 -05:00
Potuz
716140d64d add bls_to_execution_change tests 2022-11-14 16:26:39 -05:00
Potuz
088cb4ef59 fix expected_withdrawals 2022-11-14 14:50:41 -05:00
terencechain
b5f8e69b6b Add capella to epoch precompute (#11661) 2022-11-14 10:05:58 -05:00
Potuz
d1472fc351 Add withdrawals operations tests 2022-11-13 08:54:59 -03:00
terence tsao
5c8c0c31d8 Merge branch 'capella-withdrawal-minimal' into capella 2022-11-12 22:06:43 -08:00
terence tsao
7f3c00c7a2 Can build 2022-11-12 22:06:22 -08:00
terencechain
c180dab791 Merge branch 'develop' into capella 2022-11-12 18:28:18 -08:00
terencechain
cf466702df Capella: validator log withdrawals (#11657)
* Capella: validator log withdrawals

* Capella: validator log withdrawals
2022-11-13 02:26:49 +00:00
terence tsao
f24acc21c7 Fix bazel 2022-11-12 17:43:19 -08:00
terence tsao
40b637849d Fix miminal capella tests 2022-11-12 17:26:05 -08:00
terence tsao
e7db1685df Add mainnet capella tests 2022-11-12 17:13:26 -08:00
terence tsao
eccbfd1011 Add shared capella spec tests helpers 2022-11-12 17:13:16 -08:00
terence tsao
90211f6769 Fix prev epoch attested precompute 2022-11-12 17:12:29 -08:00
terence tsao
edc32ac18e Fix slashing quotient 2022-11-12 17:12:04 -08:00
terence tsao
fe68e020e3 Add selector with minimal withdrawal size 2022-11-12 16:18:16 -08:00
terence tsao
81e1e3544d Add mainnet ssz vectors 2022-11-12 15:50:07 -08:00
Potuz
09372d5c35 Revert "added mainnet ssz tests"
This reverts commit 078a89e4ca.
2022-11-12 18:40:28 -03:00
Potuz
078a89e4ca added mainnet ssz tests 2022-11-12 18:39:11 -03:00
Potuz
dbc6ae26a6 Add minimal support for capella spec tests
Fixed many issues about hashing
Added fork typing in the state replayer
2022-11-12 18:18:35 -03:00
Potuz
b6f429867a Merge branch 'develop' into capella 2022-11-12 16:35:20 -03:00
Potuz
de73baa4a7 Update 3068 (#11656)
* rename last_withdrawal_validator_index

* Update Capella methods to Specs #3068

* Add missing renames

* rename minimal state

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-11-12 18:38:21 +00:00
Potuz
d4a3b74bc6 capella payload changes (#11647)
* capella payload transition changes

* fix leak

* add back setters_payload from capella

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-11-12 18:19:51 +00:00
Potuz
216cdd9361 Use finalized hash if payloadID cache misses (#11653) 2022-11-12 17:58:14 +00:00
Potuz
68c3e939b8 Fix withdrawalRoot hasher (#11655) 2022-11-12 09:00:53 -08:00
Potuz
09f50660ce Merge branch 'develop' into capella 2022-11-12 11:51:06 -03:00
Potuz
189825b495 fix withdrawal hashing 2022-11-11 23:21:41 -03:00
terencechain
bf7e17db8f Fix get RANDAO endpoint for underflow (#11650)
* Fix get randao end point for underflow

* Fix test
2022-11-11 17:58:25 +00:00
Patrice Vignola
ead9a83d8d Add customizable endpoints for the validator's REST API (#11633)
* WIP

* Refactor to use iface.ValidatorClient instead of ethpb.BeaconNodeValidatorClient

* Add mocks for iface.ValidatorClient

* Fix mocks

* Update update-mockgen.sh

* Fix warnings

* Fix config_setting syntax

* Use custom build settings

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* Fix endpoint address and reduce timeout

* Revert most e2e changes

* Use e2e.TestParams.Ports.PrysmBeaconNodeGatewayPort

* Fix BeaconRESTApiProviderFlag port

* Revert e2e changes
2022-11-11 17:33:48 +00:00
Potuz
764b7ff610 Don't build capella payload twice 2022-11-11 10:47:29 -03:00
Nishant Das
2fef03414d Fix ENR Serialization (#11648)
* fix it

* fix test
2022-11-10 12:03:55 +00:00
Potuz
7b63d5c08c Operations in Capella are as in Bellatrix (#11646)
* Operations in Capella are as in Bellatrix

* add unit test
2022-11-10 02:04:03 +00:00
Potuz
d499db7f0e Merge branch 'develop' into capella 2022-11-09 21:27:18 -03:00
Potuz
af6d5e9149 don't change again unecessarily (#11645)
* don't change again unecessarily

* remove blinded blocks from gossip
2022-11-09 23:59:10 +00:00
terencechain
d0d7021c1d Add Capella p2p changes (#11644) 2022-11-09 15:11:46 -08:00
Potuz
4db1a02763 Implement upgrade to capella (#11642)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-09 16:48:23 +01:00
Mart1i1n
69d7f7f6ca Update ffg_update_test.go (#11639)
Fix some alignment typos.
2022-11-09 12:27:17 +00:00
Potuz
ed2d1c7bf9 Merge branch 'develop' into capella 2022-11-09 09:14:55 -03:00
nisdas
14b73cbd47 register flag 2022-11-09 08:48:36 +08:00
Inphi
4e342b8802 Fix prysmctl generate-genesis yaml output file (#11635)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-08 14:36:58 +00:00
Nishant Das
02566de74c Copy Bytes Safely When Accessing Withdrawals (#11638)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-11-08 13:40:25 +00:00
Potuz
90ecd23d41 implement process_withdrawals (#11634)
* implement process_withdrawals

* change errors to error.go

* gazelle

* James' review

* use bytes.Equal instead

* Radek's review

* Radek's review #2

* fix test

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-08 10:15:26 -03:00
int88
4d68211ad4 fix error message of subscriber (#11622)
Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-07 16:44:38 +00:00
terencechain
19f6d3bef6 Capella: Add DB changes (#11624)
* Add Capella DB changes

* Add tests
2022-11-07 15:26:27 +00:00
Potuz
37108e6ed8 Implement get_expected_withdrawals (#11618)
* Implement get_expected_withdrawals

* Fix config test and export method

* Radek's review

* start from a different index in a test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-11-07 14:11:16 +00:00
Potuz
3124785a08 Merge branch 'develop' into capella 2022-11-07 10:08:09 -03:00
Patrice Vignola
d33af46c90 Add support for building a Beacon API validator client versus a gRPC one (#11612) 2022-11-07 11:29:27 +01:00
Potuz
60e6306107 working withrawals initial commits 2022-11-06 16:53:57 -03:00
terence tsao
42ccb7830a Add Capella DB changes 2022-11-06 15:25:34 -03:00
Potuz
0bb03b9292 fix marshalling and engine calls 2022-11-06 15:24:22 -03:00
nisdas
ed6fbf1480 stupid bug 2022-11-07 00:18:01 +08:00
nisdas
477cec6021 wei it 2022-11-07 00:13:12 +08:00
nisdas
924500d111 add unmarshal 2022-11-06 23:57:37 +08:00
Potuz
0677504ef1 Revert "proposer changes"
This reverts commit ca2a7c4d9c.
2022-11-06 12:16:53 -03:00
Potuz
ca2a7c4d9c proposer changes 2022-11-06 12:14:17 -03:00
Potuz
28606629ad marshalling stub 2022-11-06 12:03:25 -03:00
Potuz
c817279464 fix capella payload 2022-11-06 11:14:39 -03:00
Potuz
009d6ed8ed proposer logic 2022-11-06 10:49:32 -03:00
Potuz
5cec1282a9 FCU two versions 2022-11-06 10:22:45 -03:00
Potuz
340170fd29 propose block V2 2022-11-06 09:42:20 -03:00
Potuz
7ed0cc139a marshalling first attempt 2022-11-06 07:47:43 -03:00
Potuz
2c822213eb rpc changes 2022-11-06 00:24:56 -03:00
Radosław Kapka
53d4659654 GetRandao Beacon API endpoint (#11609)
* `GetRandao` Beacon API endpoint

* test optimistic execution

* review

* change epoch in test
2022-11-05 23:04:58 +00:00
Potuz
c08bb39ffe add fork versions 2022-11-05 13:38:11 -03:00
Potuz
5083d8ab34 propose capella blocks 2022-11-05 12:38:27 -03:00
Potuz
7552a5dd07 capella fork logic 2022-11-05 07:33:20 -03:00
Potuz
c93d68f853 Capella state transition 2022-11-05 06:06:15 -03:00
terencechain
09e117370a Flatten blob to an array (#11614)
* Flatten blob to an array

* Rename data back to blob

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2022-11-04 15:03:25 +00:00
Potuz
3e8aa4023d Fix config test and export method 2022-11-04 11:59:49 -03:00
Potuz
b443875e66 Implement get_expected_withdrawals 2022-11-04 11:45:45 -03:00
Radosław Kapka
8ade8afb73 Remove SSZ tags from beacon state (#11613)
* Remove SSZ tags from beacon state

* tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-11-04 11:08:57 +00:00
Inphi
872021f10d Remove unneeded protoarray tests (#11607)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-04 09:39:52 +00:00
Potuz
bb09295072 Remove withdrawal Queue (#11610) 2022-11-03 16:55:44 +00:00
terencechain
e4b2b1ea7d Unlock pending block queue if insertion errors (#11600)
* Unlock pending queue if insertion errors

* @saolyn's feedback

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-11-02 18:26:27 +00:00
terencechain
ead329e610 Implement [][] to [][48] helper (#11606) 2022-11-02 18:04:40 +00:00
int88
a0c5669511 RLock() for readonly operation (#11603)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-11-02 16:22:42 +00:00
terencechain
0fd5253915 Remove unused protobuf imports (#11602)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-11-02 13:46:23 +00:00
terencechain
86e855d499 Rename blob.blob to blob.data (#11601) 2022-11-02 14:21:03 +01:00
Sammy Rosso
001ae30a59 EIP-4881: Add merkle tree interface (#11597)
* Add merkle tree interface

* Run gazelle

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-11-01 23:31:32 +00:00
terencechain
d1f0e5dd55 Run ./hack/update-go-pbs.sh (#11598) 2022-11-01 23:08:52 +00:00
terencechain
2142b13a41 Add 4844 containers to protobuf (#11596) 2022-10-31 12:11:40 -07:00
Radosław Kapka
ffac232d89 Capella beacon block (#11566)
* in progress

* done, no tests yet

* fix ToBlinded()

* Revert "Auxiliary commit to revert individual files from 2e356b6f5b15d409ac15e825c744528591c13739"

This reverts commit 081ab74e88fb7d0e3f6a81e00fe5e89483b41f90.

* tests

* fix tests

* one more fix

* and one more

* review

* fix proto_test

* another fix

* do not return error when nil object is wrapped

* allow nil payload in body.Proto()

* correctly assert error

* nil checks in body.Execution()

* simplify PR

* Revert "Auxiliary commit to revert individual files from 5736c1f22f2d2f309b9303c13d0fb6b1679c6ecb"

This reverts commit 1ff3a4c864923f5c180aa015aa087a2814498b42.

* fix slice sizes in cloner tests

* better payload tests

* review

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-10-31 17:58:30 +00:00
int88
26b46301d2 some minor fixes (#11593)
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-10-30 02:54:15 +00:00
Mark Ridgwell
fdf913aed9 corrected method name in comment (#11594) 2022-10-28 12:17:56 -04:00
terencechain
9435d10652 Add more buckets for block arrival latency histogram (#11589)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-27 15:43:41 +00:00
Radosław Kapka
8fa481cb93 Flag to exit validators without confirmation prompt (#11588) 2022-10-27 17:24:41 +02:00
terencechain
26087d7b2d Add unknown roots in error msgs (#11585)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-10-27 10:38:07 -04:00
Nishant Das
5a3a01090a Increase Block Burst Factor For E2E (#11583)
* test

* set higher block burst factor

* add back space

* remove space
2022-10-27 03:06:52 +00:00
Preston Van Loon
55690de685 bazel: Check in 5.3.0 cross compile clang toolchain (#11577)
* Check in 5.3.0 generated clang cross-toolchain

* Remove tools/cross-toolchain/configs/clang/bazel_5.0.0

* Add manual tags and run gazelle

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-27 02:48:23 +00:00
kasey
007c776d8a tool to search db for a key prefix (#11417)
* tool to query db by a key prefix

* cleanup

* lint and fmt

* db/kv public visibility

We've discussed before that Bazel visibility constraints don't
accomplish much in go, so I'm phasing them out in places where they get
in the way.

* derp

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-26 21:28:02 +00:00
Sammy Rosso
a15e0797e4 Support non english mnemonics for wallet creation (#11543)
* add option to log rejected gossip message

* add bip39 supported mnemonic languages

* Revert "add option to log rejected gossip message"

This reverts commit 9a3d4486f6.

* Add mnemonic language flag

* Update go.mod

* Simplify language mapping

* Add test for setBip39Lang

* Update go.mod

* Improve language matching

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Run gazelle + fix maligned struct

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-26 21:04:00 +00:00
int88
1572c530b5 some minor fixes (#11572)
* some minor fixes

* minor fix

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-10-26 20:44:24 +00:00
Preston Van Loon
652303522f bazel: remove --stamp for builds (#11578) 2022-10-26 20:13:02 +00:00
Sammy Rosso
39a7988e9e Add error if chain-config-file used concurrently with network (#10863)
* Add error if chain-config-file used with network

If the chain-config-file flag is used concurrently with the network
flag then error and exit.
Related to #10822.

* Add config file name to error message with `fmt`

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Add unit test for hasNetworkFlag

* Add list of network flags to features

* Fix imports

* Run gazelle

* retrigger checks

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-10-25 10:48:25 +00:00
james-prysm
648ab9f2c2 revert moving validator-exit command (#11575) 2022-10-25 03:52:30 +00:00
terencechain
43a0b4bb16 Retrieve proposer existence in cache with correct parameter (#11567) 2022-10-24 13:20:41 -05:00
int88
57d7207554 fix typo and log error (#11565)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-10-23 22:01:44 +00:00
Potuz
b7b5b28c5a Fix locks in Capella setters (#11569)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-22 23:10:45 +00:00
terencechain
968dc5d1e8 Beacon api: get duties prune payload id cache (#11568)
* Beacon api: get duties prune payload id cache

* Fix complains and bad test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-22 22:43:57 +00:00
terencechain
c24016f006 Add validator index with Withdrawal pb (#11563)
* Add validator index with Withdrawal pb

* Update BUILD.bazel

* Fix test

* Better tests
2022-10-22 20:54:50 +00:00
Nishant Das
661cbc45ae Vendor Leaky Bucket Implementation (#11560)
* add changes

* fix tests

* change to minute

* remove dep

* remove

* fix tests

* add test for period

* improve

* linter

* build files

* ci

* make it stricter

* fix tests

* fix

* Update beacon-chain/sync/rate_limiter.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-10-20 16:40:13 -05:00
Nishant Das
4bd4d6392d Reduce Block Burst Factor (#11546)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-10-20 14:20:47 +00:00
Radosław Kapka
5f4c26875b Remove ValidatorCapella protobuf (#11562) 2022-10-19 17:56:50 +00:00
Radosław Kapka
6aab2b2b8d /eth/v1/beacon/blinded_blocks/{block_id} API endpoint (#11538)
* proto + stub

* Add execution_optimistic to SSZ response

* implementation

* proto fix

* api middleware

* tests

* more tests

* switch from Errorf to Error

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-10-19 15:08:30 +00:00
Radosław Kapka
b7a878d011 Resolve remaining native state tasks (#11561)
* remove ToProto and ToProtoUnsafe wrappers

* TestAppendBeyondIndicesLimit

* change type of genesisValidatorsRoot

* fuzz tests

* check type assertion
2022-10-19 10:37:45 -04:00
Raul Jordan
2ae9f1be9e Prysm Web UI Release v2.0.2 (#11559)
Co-authored-by: james-prysm <james-prysm@users.noreply.github.com>
2022-10-19 03:11:22 +00:00
Radosław Kapka
98b9c9e6c9 Handle unaggregated attestation event (#11558) 2022-10-18 10:34:25 -04:00
james-prysm
df694aad71 prysmctl: validator exit (#11515)
* moving voluntary exit command to prysmctl

* fixing linting

* fixing imports

* removing unused import:

* refactoring and adding a new unit test

* rolling back some changes

* fixing parameters

* fixing tests

* adding to main prysm ctl

* refactoring how wallet function works and ux to validator exit

* adding interop support

* fixing bazel build

* fixing if statement

* fixing beaconcha.in printout

* fixing web3signer bug, missing signing slot info

* fixing deep source issues

* fixing bazel package visibility for prysmctl

* gaz

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-10-17 16:04:19 -05:00
terencechain
e8400a0773 Fix complains and bad test (#11555)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-17 16:20:26 +00:00
Radosław Kapka
dcba27ffbc add server name to address (#11556) 2022-10-17 10:59:17 -04:00
Radosław Kapka
cafe0bd1f8 Capella beacon state (#11141)
* fork

* types

* cloners

* getters

* remove CapellaBlind from fork

* hasher

* setters

* spec params, config tests

* generate ssz

* executionPayloadHeaderCapella

* proto state

* BeaconStateCapella SSZ

* saving state

* configfork

* BUILD files

* fix RealPosition

* fix hasher

* SetLatestExecutionPayloadHeaderCapella

* fix error message

* reduce complexity of saveStatesEfficientInternal

* add latestExecutionPayloadHeaderCapella to minimal state

* halway done interface

* remove withdrawal methods

* merge setters

* change signatures for v1 and v2

* fixing errors pt. 1

* paylod_test fixes

* fix everything

* remove unused func

* fix tests

* state_trie_test improvements

* in progress...

* hasher test

* fix configs

* simplify hashing

* Revert "fix configs"

This reverts commit bcae2825fc.

* remove capella from config test

* unify locking

* review

* hashing

* fixes

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-10-12 11:39:19 -05:00
Potuz
ce7f042974 Add Withdrawal helpers (#11552)
* Add Withdrawal helpers

* Review

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-10-12 08:44:41 -05:00
int88
974cf51a8c fix some comment and log error (#11550)
* fix some comment and log error

* Update testing/endtoend/component_handler_test.go

* Apply suggestions from code review

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-10-11 14:07:47 -04:00
Radosław Kapka
eb49404a75 Expose structs from API Middleware (#11547) 2022-10-07 15:45:26 +00:00
Preston Van Loon
6bea17cb54 Update libp2p to support go 1.19 (#11309)
* Update libp2p to support go 1.19

* gaz

* go mod tidy

* Only update the minimum deps

* go mod tidy

* revert .bazelrc

* Update go-libp2p to v0.22.0 and update import paths (#11440)

* Fix import paths

* Fix go-libp2p-peerstore import

* Bazel updates

* fix

* revert some comments changes

* revert some comment stuff

* fix dependency issues

* vendor problematic library

* use your brain

* remove

* tests

Co-authored-by: Marco Munizaga <marco@marcopolo.io>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-10-07 15:24:51 +08:00
Nishant Das
de8e50d8b6 Migrate Historical States In Separate Routine (#11501)
* add changes

* space

* add test

* space

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-10-05 19:11:03 +00:00
Sammy Rosso
8049060119 Add flag to enable logging on rejected gossip message (#11524)
* add option to log rejected gossip message

* fix rejectGossipMessage return

* revert test + fix import

* revert all beaconchain/sync/validate_* files

* log object and message if flag is set

* fix failing build

* remove object field

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-10-05 15:18:08 +00:00
Krasimir Georgiev
c1446a35c5 fix panic when the rpc client is not initialized (#11528)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Your Name <you@example.com>
2022-10-05 14:02:01 +02:00
james-prysm
f5efde5ccc Keymanager API: fix fee recipient API and add persistence (#11540)
* fixing bug with fee recipient api

* fixing unit tests

* clarifying logs
2022-10-04 17:05:46 +00:00
Nishant Das
cdcb289693 Handle New Agent Version For Lodestar (#11536)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-04 14:39:12 +02:00
Nishant Das
5dca874530 Fix Validator Monitor Registration (#11537)
* fix it

* gaz
2022-10-04 12:44:57 +02:00
Nishant Das
885dd2e327 Revert "More efficient way of computing skip slot cache key (#11441)" (#11535)
This reverts commit 0f0d480dbc.
2022-10-04 14:13:11 +08:00
terencechain
0f0d480dbc More efficient way of computing skip slot cache key (#11441)
* More efficient way of computing skip slot cache key

* Gazelle

* Add defensive check

* Fix test setup

* Disable skip slot cache

* Fix rpc tests for dependent root
2022-10-03 13:53:17 -04:00
terencechain
1696208318 Add CLI flag to define execution engine timout (#11489)
* Add CLI flag to define execution engine timout

* Rm unused

* Fix import

* Fix lint

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-09-30 18:05:05 +00:00
terencechain
7f686a7878 Cache proposer slot index for GetProposerDuties (#11521) 2022-09-30 19:19:40 +02:00
Potuz
2817f8e8d6 update head should go even without attestations (#11503)
* update head should go even without attestations

* add unit tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-09-30 13:29:18 +00:00
Radosław Kapka
db76be2f15 Remove state code owners (#11519) 2022-09-30 13:21:06 +00:00
Potuz
ad65c841c4 Add a justified balance getter to forkchoice (#11513)
* init

* unit test

* DeepSource complains

* gaz

* shutting deepsource down

* change var name and use handler type

* Kasey's name suggestion

* fix stategen

* interface signature

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-09-30 06:39:07 -03:00
terencechain
7b255f5c9d Don't mark builder status unhealthy if mev-boost status is non 200 (#11506)
* Log error for mev boost / relayer status, not return error

* Add ticker to poll relayer status

* Add ticker to poll relayer status

* Update beacon-chain/builder/service.go

* Rm test

* Check nil before calling status

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-09-29 20:54:24 +00:00
Sammy Rosso
157b1e373c fix validator loggin timeTillDuty as a negative number (#11512)
* fix timeTillDuty log shows a negative number

* Update validator/client/validator.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-09-29 19:48:43 +00:00
terencechain
e3df25f443 Fix attestations update head error logging (#11514) 2022-09-29 12:16:42 -07:00
kasey
805473cb38 Give forkchoice to stategen (#11439)
* add forkchoice to stategen.New, update everywhere

* conflict_1

* Fix proposer_bellatrix test

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-09-28 20:10:27 +00:00
terencechain
a54bb19c82 Set builder get payload timeout to 3s (#11413) 2022-09-28 07:55:44 -07:00
terencechain
14908f639e Remove INTERVALS_PER_SLOT from place holder for test (#11502)
* Rm INTERVALS_PER_SLOT from place holder

* Comment
2022-09-26 15:19:04 +00:00
Potuz
77fc45304b Remove protoarray forkchoice (#11455)
* Remove protoarray forkchoice

* exported errors

* fix spectests

* fix tests

* conflict 1

* Preston's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-09-26 14:45:21 +00:00
terencechain
722c7fb034 Update consensus layer badge to v1.2.0 (#11492) 2022-09-26 00:27:54 +00:00
Potuz
97ef8dde4d Fix sync tests and update to 1.2.0 (#11498)
* Fix sync tests and update to 1.2.0

* fix repo sha
2022-09-25 23:48:14 +00:00
terencechain
211c5c2c5c Beacon api: produce block should skip mev-boost (#11488)
* Beacon api: produce block should skip mev-boost

* Update comments

* Additional test and comments

* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer_bellatrix.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Fix suggestion

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-09-23 18:45:26 +00:00
Yier
2ea66a8467 Remove unwanted wrapper of GRPC status error (#11486)
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-09-23 14:37:30 +00:00
terencechain
7720d98764 Beacon api: propoerly submit blind block (#11483)
* Beacon api: propoerly submit blind block

* Gazelle

* Fix SubmitBlockSSZ

* Update beacon-chain/rpc/eth/beacon/blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-09-22 20:13:43 +00:00
james-prysm
20e99fd1f9 Improvement to Fee Recipient UX (#11307)
* updating mock

* adding new internal api

* adding generated code

* converting validator index to pubkey

* adding new API endpoint

* fixing mock related new API

* fixing unit tests

* preventing failure on startup, replacing with warnings

* improving log message

* changing warn log to error log

* fixing error formatting

* improve log on beacon node side on startup

* fixing deepsource issue

* improving logs

* fixing unit tests

* fixing more tests

* adding error check

* adding in new case for fee recipient to avoid conflicts on changing beacon node suggested fee recipient

* adding default fee recipient check for gas limit as well

* adding improved unit tests

* accidently checked in tmp folder

* adding more unit tests

* fixing gas limit unit test

* Update validator/rpc/standard_api_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/node/config.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/node/config.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update proto/prysm/v1alpha1/validator.proto

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/runner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing comments

* updating proto generated files

* fixing linting and addressign review comments

* fixing unit test

* improve error handling

* accidently added tmp folder

* improving function error returns

* realizing i am wrapping error incorrectly

* fixing unit test

* addressing review comment

* fixing linting

* fixing unit test

* improving ux around enable builder

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-09-22 11:35:35 -05:00
Nishant Das
61f6b27548 Change Default Timeout To 30 Seconds (#11487) 2022-09-22 10:43:53 +00:00
1193 changed files with 65086 additions and 24335 deletions


@@ -21,7 +21,6 @@ build --sandbox_default_allow_network=false
# Stamp binaries with git information
build --workspace_status_command=./hack/workspace_status.sh
build --stamp
# Prevent PATH changes from rebuilding when switching from IDE to command line.
build --incompatible_strict_action_env
@@ -38,7 +37,7 @@ build:minimal --@io_bazel_rules_go//go/config:tags=minimal
# Release flags
build:release --compilation_mode=opt
build:release --config=llvm
build:release --stamp
# LLVM compiler for building C/C++ dependencies.
build:llvm --define compiler=llvm

.github/CODEOWNERS

@@ -5,12 +5,4 @@
*.bzl @prestonvanloon
# Anyone on prylabs team can approve dependency updates.
deps.bzl @prysmaticlabs/core-team
# Radek and Nishant are responsible for changes that can affect the native state feature.
# See https://www.notion.so/prysmaticlabs/Native-Beacon-State-Redesign-6cc9744b4ec1439bb34fa829b36a35c1
/beacon-chain/state/fieldtrie/ @rkapka @nisdas @rauljordan
/beacon-chain/state/v1/ @rkapka @nisdas @rauljordan
/beacon-chain/state/v2/ @rkapka @nisdas @rauljordan
/beacon-chain/state/v3/ @rkapka @nisdas @rauljordan
/beacon-chain/state/state-native/ @rkapka @nisdas @rauljordan
deps.bzl @prysmaticlabs/core-team


@@ -16,12 +16,12 @@ Existing issues often contain information about workarounds, resolution, or prog
### Description
<!-- ✍️--> A clear and concise description of the problem or missing capability...
<!-- ✍️ A clear and concise description of the problem or missing capability... -->
### Describe the solution you'd like
<!-- ✍️--> If you have a solution in mind, please describe it.
<!-- ✍️ If you have a solution in mind, please describe it. -->
### Describe alternatives you've considered
<!-- ✍️--> Have you considered any alternative solutions or workarounds?
<!-- ✍️ Have you considered any alternative solutions or workarounds? -->


@@ -26,10 +26,10 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.18
- name: Set up Go 1.19
uses: actions/setup-go@v3
with:
go-version: 1.18
go-version: 1.19
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
@@ -43,16 +43,16 @@ jobs:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.18
- name: Set up Go 1.19
uses: actions/setup-go@v3
with:
go-version: 1.18
go-version: 1.19
id: go
- name: Golangci-lint
uses: golangci/golangci-lint-action@v3
with:
version: v1.47.2
version: v1.50.1
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:
@@ -62,7 +62,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.18
go-version: 1.19
id: go
- name: Check out code into the Go module directory

.gitignore

@@ -38,3 +38,6 @@ metaData
# execution API authentication
jwt.hex
# manual testing
tmp


@@ -6,7 +6,7 @@ run:
- proto
- tools/analyzers
timeout: 10m
go: '1.18'
go: '1.19'
linters:
disable-all: true


@@ -1,4 +1,4 @@
# Dependency Managagement in Prysm
# Dependency Management in Prysm
Prysm is go project with many complicated dependencies, including some c++ based libraries. There
are two parts to Prysm's dependency management. Go modules and bazel managed dependencies. Be sure
@@ -28,7 +28,7 @@ including complicated c++ dependencies.
One key advantage of Bazel over vanilla `go build` is that Bazel automatically (re)builds generated
pb.go files at build time when file changes are present in any protobuf definition file or after
any updates to the protobuf compiler or other relevant dependencies. Vanilla go users should run
the following scripts often to ensure their generated files are up to date. Further more, Prysm
the following scripts often to ensure their generated files are up to date. Furthermore, Prysm
generates SSZ marshal related code based on defined data structures. These generated files must
also be updated and checked in as frequently.


@@ -2,7 +2,7 @@
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![Go Report Card](https://goreportcard.com/badge/github.com/prysmaticlabs/prysm)](https://goreportcard.com/report/github.com/prysmaticlabs/prysm)
[![Consensus_Spec_Version 1.2.0-rc.3](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.2.0.rc.3-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.2.0-rc.3)
[![Consensus_Spec_Version 1.2.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.2.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.2.0)
[![Execution_API_Version 1.0.0-beta.1](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.beta.1-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.1/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/CTYGPUJ)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/prysmaticlabs/prysm/badge)](https://www.gitpoap.io/gh/prysmaticlabs/prysm)


@@ -88,10 +88,10 @@ http_archive(
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
sha256 = "16e9fca53ed6bd4ff4ad76facc9b7b651a89db1689a2877d6fd7b82aa824e366",
sha256 = "ae013bf35bd23234d1dea46b079f1e05ba74ac0321423830119d3e787ec73483",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.34.0/rules_go-v0.34.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.34.0/rules_go-v0.34.0.zip",
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.36.0/rules_go-v0.36.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.36.0/rules_go-v0.36.0.zip",
],
)
@@ -110,13 +110,6 @@ git_repository(
# gazelle args: -go_prefix github.com/gogo/protobuf -proto legacy
)
http_archive(
name = "fuzzit_linux",
build_file_content = "exports_files([\"fuzzit\"])",
sha256 = "9ca76ac1c22d9360936006efddf992977ebf8e4788ded8e5f9d511285c9ac774",
urls = ["https://github.com/fuzzitdev/fuzzit/releases/download/v2.4.76/fuzzit_Linux_x86_64.zip"],
)
load(
"@io_bazel_rules_docker//repositories:repositories.bzl",
container_repositories = "repositories",
@@ -164,35 +157,15 @@ container_pull(
repository = "pinglamb/alpine-glibc",
)
container_pull(
name = "fuzzit_base",
digest = "sha256:24a39a4360b07b8f0121eb55674a2e757ab09f0baff5569332fefd227ee4338f",
registry = "gcr.io",
repository = "fuzzit-public/stretch-llvm8",
)
load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")
go_rules_dependencies()
go_register_toolchains(
go_version = "1.18.5",
go_version = "1.19.4",
nogo = "@//:nogo",
)
http_archive(
name = "prysm_testnet_site",
build_file_content = """
proto_library(
name = "faucet_proto",
srcs = ["src/proto/faucet.proto"],
visibility = ["//visibility:public"],
)""",
sha256 = "29742136ff9faf47343073c4569a7cf21b8ed138f726929e09e3c38ab83544f7",
strip_prefix = "prysm-testnet-site-5c711600f0a77fc553b18cf37b880eaffef4afdb",
url = "https://github.com/prestonvanloon/prysm-testnet-site/archive/5c711600f0a77fc553b18cf37b880eaffef4afdb.tar.gz",
)
http_archive(
name = "io_kubernetes_build",
sha256 = "b84fbd1173acee9d02a7d3698ad269fdf4f7aa081e9cecd40e012ad0ad8cfa2a",
@@ -215,7 +188,7 @@ filegroup(
url = "https://github.com/eth-clients/slashing-protection-interchange-tests/archive/b8413ca42dc92308019d0d4db52c87e9e125c4e9.tar.gz",
)
consensus_spec_version = "v1.2.0-rc.3"
consensus_spec_version = "v1.3.0-rc.1"
bls_test_version = "v0.1.1"
@@ -231,7 +204,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "18ca21497f41042cdbe60e2333b100d218b2994fb514964b9deb23daf615a12f",
sha256 = "3d6fadb64674eb64a84fae6c2efa9949231ea91e7cb74ada9214097323e86569",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -247,7 +220,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "47b8f6fabe39b4a69f13054ba74e26ab51581ddbd359c18cf0f03317474e299c",
sha256 = "54ffbcab1e77316a280e6f5a64c6ed62351e8f5678e6fa49340e49b9b5575e8e",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -263,7 +236,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "a061efc05429b169393c32dc2633a948269461b0fe681f11d41e170a880dcc71",
sha256 = "bb06d30ca533dc97d45f2367916ba9ff1b5af52f08a9d8c33bb7b1a61254094e",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -278,7 +251,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "753d51c6a6cc6df101c897e4bea77f73b271f50aeda74440f412514d4bd88a86",
sha256 = "9d22246c00ec3907ef8dc3ddccdfe6f7153ce46df73deee0a0176fe7e4aa1126",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -342,9 +315,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "e0c0b5dc609b3a221e74c720f483c595441f2ad5e38bb8aa3522636039945a6f",
sha256 = "b2226874526805d64c29e5053fa28e511b57c0860585d6d59777ee81ff4859ca",
urls = [
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v2.0.1/prysm-web-ui.tar.gz",
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v2.0.2/prysm-web-ui.tar.gz",
],
)


@@ -1,6 +1,5 @@
/*
Package beacon provides a client for interacting with the standard Eth Beacon Node API.
Interactive swagger documentation for the API is available here: https://ethereum.github.io/beacon-APIs/
*/
package beacon


@@ -10,6 +10,7 @@ import (
"net/http"
"net/url"
"text/template"
"time"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
@@ -31,6 +32,7 @@ const (
var errMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
var errMalformedRequest = errors.New("required request data are missing")
var submitBlindedBlockTimeout = 3 * time.Second
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
@@ -252,7 +254,11 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb *ethpb.SignedBlinded
if err != nil {
return nil, errors.Wrap(err, "error encoding the SignedBlindedBeaconBlockBellatrix value body in SubmitBlindedBlock")
}
ctx, cancel := context.WithTimeout(ctx, submitBlindedBlockTimeout)
defer cancel()
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body))
if err != nil {
return nil, errors.Wrap(err, "error posting the SignedBlindedBeaconBlockBellatrix to the builder api")
}
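The hunk above bounds the builder's blinded-block submission with a 3-second context timeout (submitBlindedBlockTimeout). A minimal sketch of the same pattern, assuming only the Go standard library and a hypothetical endpoint, not the actual Prysm client code:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// doWithTimeout issues a GET bounded by a fixed deadline, mirroring how the
// builder client wraps its POST: derive a child context, always cancel it,
// and let the HTTP client abort once the budget is spent.
func doWithTimeout(ctx context.Context, url string) (*http.Response, error) {
	ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	return http.DefaultClient.Do(req) // fails with context.DeadlineExceeded once the 3s budget is spent
}

func main() {
	// Hypothetical local endpoint, used purely for illustration.
	resp, err := doWithTimeout(context.Background(), "http://localhost:3500/eth/v1/node/health")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}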


@@ -81,7 +81,7 @@ func TestClient_Status(t *testing.T) {
func TestClient_RegisterValidator(t *testing.T) {
ctx := context.Background()
expectedBody := `[{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"23","timestamp":"42","pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"}}]`
expectedBody := `[{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"23","timestamp":"42","pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}]`
expectedPath := "/eth/v1/builder/validators"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
@@ -111,6 +111,7 @@ func TestClient_RegisterValidator(t *testing.T) {
Timestamp: 42,
Pubkey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
}
require.NoError(t, c.RegisterValidator(ctx, []*eth.SignedValidatorRegistrationV1{reg}))
}


@@ -23,8 +23,8 @@ type ValidatorRegistration struct {
func (r *SignedValidatorRegistration) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *ValidatorRegistration `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
Message *ValidatorRegistration `json:"message"`
Signature hexutil.Bytes `json:"signature"`
}{
Message: &ValidatorRegistration{r.Message},
Signature: r.SignedValidatorRegistrationV1.Signature,
@@ -36,8 +36,8 @@ func (r *SignedValidatorRegistration) UnmarshalJSON(b []byte) error {
r.SignedValidatorRegistrationV1 = &eth.SignedValidatorRegistrationV1{}
}
o := struct {
Message *ValidatorRegistration `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
Message *ValidatorRegistration `json:"message"`
Signature hexutil.Bytes `json:"signature"`
}{}
if err := json.Unmarshal(b, &o); err != nil {
return err
@@ -49,10 +49,10 @@ func (r *SignedValidatorRegistration) UnmarshalJSON(b []byte) error {
func (r *ValidatorRegistration) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
FeeRecipient hexutil.Bytes `json:"fee_recipient,omitempty"`
GasLimit string `json:"gas_limit,omitempty"`
Timestamp string `json:"timestamp,omitempty"`
Pubkey hexutil.Bytes `json:"pubkey,omitempty"`
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
GasLimit string `json:"gas_limit"`
Timestamp string `json:"timestamp"`
Pubkey hexutil.Bytes `json:"pubkey"`
}{
FeeRecipient: r.FeeRecipient,
GasLimit: fmt.Sprintf("%d", r.GasLimit),
@@ -66,10 +66,10 @@ func (r *ValidatorRegistration) UnmarshalJSON(b []byte) error {
r.ValidatorRegistrationV1 = &eth.ValidatorRegistrationV1{}
}
o := struct {
FeeRecipient hexutil.Bytes `json:"fee_recipient,omitempty"`
GasLimit string `json:"gas_limit,omitempty"`
Timestamp string `json:"timestamp,omitempty"`
Pubkey hexutil.Bytes `json:"pubkey,omitempty"`
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
GasLimit string `json:"gas_limit"`
Timestamp string `json:"timestamp"`
Pubkey hexutil.Bytes `json:"pubkey"`
}{}
if err := json.Unmarshal(b, &o); err != nil {
return err
@@ -110,8 +110,7 @@ func stringToUint256(s string) (Uint256, error) {
// sszBytesToUint256 creates a Uint256 from a ssz-style (little-endian byte slice) representation.
func sszBytesToUint256(b []byte) (Uint256, error) {
bi := new(big.Int)
bi.SetBytes(bytesutil.ReverseByteOrder(b))
bi := bytesutil.LittleEndianBytesToBigInt(b)
if !isValidUint256(bi) {
return Uint256{}, errors.Wrapf(errDecodeUint256, "value=%s", b)
}
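The hunk above swaps a manual reverse-and-SetBytes for bytesutil.LittleEndianBytesToBigInt when decoding a Uint256 from its SSZ little-endian byte form. A standard-library-only sketch of the same conversion, with a hypothetical function name:

package main

import (
	"fmt"
	"math/big"
)

// littleEndianToBigInt reverses the slice into big-endian order and feeds it to
// big.Int.SetBytes, which expects big-endian input.
func littleEndianToBigInt(b []byte) *big.Int {
	be := make([]byte, len(b))
	for i := range b {
		be[len(b)-1-i] = b[i]
	}
	return new(big.Int).SetBytes(be)
}

func main() {
	// 0x00, 0x01 in little-endian is 256.
	fmt.Println(littleEndianToBigInt([]byte{0x00, 0x01}))
}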
@@ -183,11 +182,11 @@ func (s Uint64String) MarshalText() ([]byte, error) {
}
type ExecHeaderResponse struct {
Version string `json:"version,omitempty"`
Version string `json:"version"`
Data struct {
Signature hexutil.Bytes `json:"signature,omitempty"`
Message *BuilderBid `json:"message,omitempty"`
} `json:"data,omitempty"`
Signature hexutil.Bytes `json:"signature"`
Message *BuilderBid `json:"message"`
} `json:"data"`
}
func (ehr *ExecHeaderResponse) ToProto() (*eth.SignedBuilderBid, error) {
@@ -233,26 +232,26 @@ func (h *ExecutionPayloadHeader) ToProto() (*v1.ExecutionPayloadHeader, error) {
}
type BuilderBid struct {
Header *ExecutionPayloadHeader `json:"header,omitempty"`
Value Uint256 `json:"value,omitempty"`
Pubkey hexutil.Bytes `json:"pubkey,omitempty"`
Header *ExecutionPayloadHeader `json:"header"`
Value Uint256 `json:"value"`
Pubkey hexutil.Bytes `json:"pubkey"`
}
type ExecutionPayloadHeader struct {
ParentHash hexutil.Bytes `json:"parent_hash,omitempty"`
FeeRecipient hexutil.Bytes `json:"fee_recipient,omitempty"`
StateRoot hexutil.Bytes `json:"state_root,omitempty"`
ReceiptsRoot hexutil.Bytes `json:"receipts_root,omitempty"`
LogsBloom hexutil.Bytes `json:"logs_bloom,omitempty"`
PrevRandao hexutil.Bytes `json:"prev_randao,omitempty"`
BlockNumber Uint64String `json:"block_number,omitempty"`
GasLimit Uint64String `json:"gas_limit,omitempty"`
GasUsed Uint64String `json:"gas_used,omitempty"`
Timestamp Uint64String `json:"timestamp,omitempty"`
ExtraData hexutil.Bytes `json:"extra_data,omitempty"`
BaseFeePerGas Uint256 `json:"base_fee_per_gas,omitempty"`
BlockHash hexutil.Bytes `json:"block_hash,omitempty"`
TransactionsRoot hexutil.Bytes `json:"transactions_root,omitempty"`
ParentHash hexutil.Bytes `json:"parent_hash"`
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
StateRoot hexutil.Bytes `json:"state_root"`
ReceiptsRoot hexutil.Bytes `json:"receipts_root"`
LogsBloom hexutil.Bytes `json:"logs_bloom"`
PrevRandao hexutil.Bytes `json:"prev_randao"`
BlockNumber Uint64String `json:"block_number"`
GasLimit Uint64String `json:"gas_limit"`
GasUsed Uint64String `json:"gas_used"`
Timestamp Uint64String `json:"timestamp"`
ExtraData hexutil.Bytes `json:"extra_data"`
BaseFeePerGas Uint256 `json:"base_fee_per_gas"`
BlockHash hexutil.Bytes `json:"block_hash"`
TransactionsRoot hexutil.Bytes `json:"transactions_root"`
*v1.ExecutionPayloadHeader
}
@@ -294,25 +293,25 @@ func (h *ExecutionPayloadHeader) UnmarshalJSON(b []byte) error {
}
type ExecPayloadResponse struct {
Version string `json:"version,omitempty"`
Data ExecutionPayload `json:"data,omitempty"`
Version string `json:"version"`
Data ExecutionPayload `json:"data"`
}
type ExecutionPayload struct {
ParentHash hexutil.Bytes `json:"parent_hash,omitempty"`
FeeRecipient hexutil.Bytes `json:"fee_recipient,omitempty"`
StateRoot hexutil.Bytes `json:"state_root,omitempty"`
ReceiptsRoot hexutil.Bytes `json:"receipts_root,omitempty"`
LogsBloom hexutil.Bytes `json:"logs_bloom,omitempty"`
PrevRandao hexutil.Bytes `json:"prev_randao,omitempty"`
BlockNumber Uint64String `json:"block_number,omitempty"`
GasLimit Uint64String `json:"gas_limit,omitempty"`
GasUsed Uint64String `json:"gas_used,omitempty"`
Timestamp Uint64String `json:"timestamp,omitempty"`
ExtraData hexutil.Bytes `json:"extra_data,omitempty"`
BaseFeePerGas Uint256 `json:"base_fee_per_gas,omitempty"`
BlockHash hexutil.Bytes `json:"block_hash,omitempty"`
Transactions []hexutil.Bytes `json:"transactions,omitempty"`
ParentHash hexutil.Bytes `json:"parent_hash"`
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
StateRoot hexutil.Bytes `json:"state_root"`
ReceiptsRoot hexutil.Bytes `json:"receipts_root"`
LogsBloom hexutil.Bytes `json:"logs_bloom"`
PrevRandao hexutil.Bytes `json:"prev_randao"`
BlockNumber Uint64String `json:"block_number"`
GasLimit Uint64String `json:"gas_limit"`
GasUsed Uint64String `json:"gas_used"`
Timestamp Uint64String `json:"timestamp"`
ExtraData hexutil.Bytes `json:"extra_data"`
BaseFeePerGas Uint256 `json:"base_fee_per_gas"`
BlockHash hexutil.Bytes `json:"block_hash"`
Transactions []hexutil.Bytes `json:"transactions"`
}
func (r *ExecPayloadResponse) ToProto() (*v1.ExecutionPayload, error) {
@@ -356,8 +355,8 @@ type BlindedBeaconBlockBodyBellatrix struct {
func (r *SignedBlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *BlindedBeaconBlockBellatrix `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
Message *BlindedBeaconBlockBellatrix `json:"message"`
Signature hexutil.Bytes `json:"signature"`
}{
Message: &BlindedBeaconBlockBellatrix{r.SignedBlindedBeaconBlockBellatrix.Block},
Signature: r.SignedBlindedBeaconBlockBellatrix.Signature,
@@ -367,10 +366,10 @@ func (r *SignedBlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
func (b *BlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index,omitempty"`
ParentRoot hexutil.Bytes `json:"parent_root,omitempty"`
StateRoot hexutil.Bytes `json:"state_root,omitempty"`
Body *BlindedBeaconBlockBodyBellatrix `json:"body,omitempty"`
ProposerIndex string `json:"proposer_index"`
ParentRoot hexutil.Bytes `json:"parent_root"`
StateRoot hexutil.Bytes `json:"state_root"`
Body *BlindedBeaconBlockBodyBellatrix `json:"body"`
}{
Slot: fmt.Sprintf("%d", b.Slot),
ProposerIndex: fmt.Sprintf("%d", b.ProposerIndex),
@@ -386,8 +385,8 @@ type ProposerSlashing struct {
func (s *ProposerSlashing) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
SignedHeader1 *SignedBeaconBlockHeader `json:"signed_header_1,omitempty"`
SignedHeader2 *SignedBeaconBlockHeader `json:"signed_header_2,omitempty"`
SignedHeader1 *SignedBeaconBlockHeader `json:"signed_header_1"`
SignedHeader2 *SignedBeaconBlockHeader `json:"signed_header_2"`
}{
SignedHeader1: &SignedBeaconBlockHeader{s.ProposerSlashing.Header_1},
SignedHeader2: &SignedBeaconBlockHeader{s.ProposerSlashing.Header_2},
@@ -400,8 +399,8 @@ type SignedBeaconBlockHeader struct {
func (h *SignedBeaconBlockHeader) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Header *BeaconBlockHeader `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
Header *BeaconBlockHeader `json:"message"`
Signature hexutil.Bytes `json:"signature"`
}{
Header: &BeaconBlockHeader{h.SignedBeaconBlockHeader.Header},
Signature: h.SignedBeaconBlockHeader.Signature,
@@ -414,11 +413,11 @@ type BeaconBlockHeader struct {
func (h *BeaconBlockHeader) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot,omitempty"`
ProposerIndex string `json:"proposer_index,omitempty"`
ParentRoot hexutil.Bytes `json:"parent_root,omitempty"`
StateRoot hexutil.Bytes `json:"state_root,omitempty"`
BodyRoot hexutil.Bytes `json:"body_root,omitempty"`
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot hexutil.Bytes `json:"parent_root"`
StateRoot hexutil.Bytes `json:"state_root"`
BodyRoot hexutil.Bytes `json:"body_root"`
}{
Slot: fmt.Sprintf("%d", h.BeaconBlockHeader.Slot),
ProposerIndex: fmt.Sprintf("%d", h.BeaconBlockHeader.ProposerIndex),
@@ -438,9 +437,9 @@ func (a *IndexedAttestation) MarshalJSON() ([]byte, error) {
indices[i] = fmt.Sprintf("%d", a.AttestingIndices[i])
}
return json.Marshal(struct {
AttestingIndices []string `json:"attesting_indices,omitempty"`
Data *AttestationData `json:"data,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
AttestingIndices []string `json:"attesting_indices"`
Data *AttestationData `json:"data"`
Signature hexutil.Bytes `json:"signature"`
}{
AttestingIndices: indices,
Data: &AttestationData{a.IndexedAttestation.Data},
@@ -454,8 +453,8 @@ type AttesterSlashing struct {
func (s *AttesterSlashing) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Attestation1 *IndexedAttestation `json:"attestation_1,omitempty"`
Attestation2 *IndexedAttestation `json:"attestation_2,omitempty"`
Attestation1 *IndexedAttestation `json:"attestation_1"`
Attestation2 *IndexedAttestation `json:"attestation_2"`
}{
Attestation1: &IndexedAttestation{s.Attestation_1},
Attestation2: &IndexedAttestation{s.Attestation_2},
@@ -468,8 +467,8 @@ type Checkpoint struct {
func (c *Checkpoint) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Epoch string `json:"epoch,omitempty"`
Root hexutil.Bytes `json:"root,omitempty"`
Epoch string `json:"epoch"`
Root hexutil.Bytes `json:"root"`
}{
Epoch: fmt.Sprintf("%d", c.Checkpoint.Epoch),
Root: c.Checkpoint.Root,
@@ -482,11 +481,11 @@ type AttestationData struct {
func (a *AttestationData) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot,omitempty"`
Index string `json:"index,omitempty"`
BeaconBlockRoot hexutil.Bytes `json:"beacon_block_root,omitempty"`
Source *Checkpoint `json:"source,omitempty"`
Target *Checkpoint `json:"target,omitempty"`
Slot string `json:"slot"`
Index string `json:"index"`
BeaconBlockRoot hexutil.Bytes `json:"beacon_block_root"`
Source *Checkpoint `json:"source"`
Target *Checkpoint `json:"target"`
}{
Slot: fmt.Sprintf("%d", a.AttestationData.Slot),
Index: fmt.Sprintf("%d", a.AttestationData.CommitteeIndex),
@@ -502,9 +501,9 @@ type Attestation struct {
func (a *Attestation) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
AggregationBits hexutil.Bytes `json:"aggregation_bits,omitempty"`
Data *AttestationData `json:"data,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty" ssz-size:"96"`
AggregationBits hexutil.Bytes `json:"aggregation_bits"`
Data *AttestationData `json:"data"`
Signature hexutil.Bytes `json:"signature" ssz-size:"96"`
}{
AggregationBits: hexutil.Bytes(a.Attestation.AggregationBits),
Data: &AttestationData{a.Attestation.Data},
@@ -518,10 +517,10 @@ type DepositData struct {
func (d *DepositData) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
PublicKey hexutil.Bytes `json:"pubkey,omitempty"`
WithdrawalCredentials hexutil.Bytes `json:"withdrawal_credentials,omitempty"`
Amount string `json:"amount,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
PublicKey hexutil.Bytes `json:"pubkey"`
WithdrawalCredentials hexutil.Bytes `json:"withdrawal_credentials"`
Amount string `json:"amount"`
Signature hexutil.Bytes `json:"signature"`
}{
PublicKey: d.PublicKey,
WithdrawalCredentials: d.WithdrawalCredentials,
@@ -554,8 +553,8 @@ type SignedVoluntaryExit struct {
func (sve *SignedVoluntaryExit) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *VoluntaryExit `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
Message *VoluntaryExit `json:"message"`
Signature hexutil.Bytes `json:"signature"`
}{
Signature: sve.SignedVoluntaryExit.Signature,
Message: &VoluntaryExit{sve.SignedVoluntaryExit.Exit},
@@ -568,8 +567,8 @@ type VoluntaryExit struct {
func (ve *VoluntaryExit) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Epoch string `json:"epoch,omitempty"`
ValidatorIndex string `json:"validator_index,omitempty"`
Epoch string `json:"epoch"`
ValidatorIndex string `json:"validator_index"`
}{
Epoch: fmt.Sprintf("%d", ve.Epoch),
ValidatorIndex: fmt.Sprintf("%d", ve.ValidatorIndex),
@@ -582,8 +581,8 @@ type SyncAggregate struct {
func (s *SyncAggregate) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
SyncCommitteeBits hexutil.Bytes `json:"sync_committee_bits,omitempty"`
SyncCommitteeSignature hexutil.Bytes `json:"sync_committee_signature,omitempty"`
SyncCommitteeBits hexutil.Bytes `json:"sync_committee_bits"`
SyncCommitteeSignature hexutil.Bytes `json:"sync_committee_signature"`
}{
SyncCommitteeBits: hexutil.Bytes(s.SyncAggregate.SyncCommitteeBits),
SyncCommitteeSignature: s.SyncAggregate.SyncCommitteeSignature,
@@ -596,9 +595,9 @@ type Eth1Data struct {
func (e *Eth1Data) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
DepositRoot hexutil.Bytes `json:"deposit_root,omitempty"`
DepositCount string `json:"deposit_count,omitempty"`
BlockHash hexutil.Bytes `json:"block_hash,omitempty"`
DepositRoot hexutil.Bytes `json:"deposit_root"`
DepositCount string `json:"deposit_count"`
BlockHash hexutil.Bytes `json:"block_hash"`
}{
DepositRoot: e.DepositRoot,
DepositCount: fmt.Sprintf("%d", e.DepositCount),
@@ -628,16 +627,16 @@ func (b *BlindedBeaconBlockBodyBellatrix) MarshalJSON() ([]byte, error) {
pros[i] = &ProposerSlashing{ProposerSlashing: b.BlindedBeaconBlockBodyBellatrix.ProposerSlashings[i]}
}
return json.Marshal(struct {
RandaoReveal hexutil.Bytes `json:"randao_reveal,omitempty"`
Eth1Data *Eth1Data `json:"eth1_data,omitempty"`
Graffiti hexutil.Bytes `json:"graffiti,omitempty"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings,omitempty"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings,omitempty"`
Attestations []*Attestation `json:"attestations,omitempty"`
Deposits []*Deposit `json:"deposits,omitempty"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits,omitempty"`
SyncAggregate *SyncAggregate `json:"sync_aggregate,omitempty"`
ExecutionPayloadHeader *ExecutionPayloadHeader `json:"execution_payload_header,omitempty"`
RandaoReveal hexutil.Bytes `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti hexutil.Bytes `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings"`
Attestations []*Attestation `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeader `json:"execution_payload_header"`
}{
RandaoReveal: b.RandaoReveal,
Eth1Data: &Eth1Data{b.BlindedBeaconBlockBodyBellatrix.Eth1Data},

View File

@@ -3,7 +3,9 @@ Copyright 2017 Albert Tedja
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

View File

@@ -3,7 +3,9 @@ Copyright 2017 Albert Tedja
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

View File

@@ -51,9 +51,9 @@ go_library(
"//beacon-chain/execution:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
@@ -64,6 +64,7 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",

View File

@@ -5,10 +5,10 @@ import (
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
@@ -314,7 +314,7 @@ func (s *Service) IsOptimistic(ctx context.Context) (bool, error) {
if err == nil {
return optimistic, nil
}
if err != protoarray.ErrUnknownNodeRoot && err != doublylinkedtree.ErrNilNode {
if !errors.Is(err, doublylinkedtree.ErrNilNode) {
return true, err
}
// If forkchoice does not have the headroot, then the node is considered
@@ -328,6 +328,11 @@ func (s *Service) IsFinalized(ctx context.Context, root [32]byte) bool {
if s.ForkChoicer().FinalizedCheckpoint().Root == root {
return true
}
// If node exists in our store, then it is not
// finalized.
if s.ForkChoicer().HasNode(root) {
return false
}
return s.cfg.BeaconDB.IsFinalizedBlock(ctx, root)
}
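The hunks above do two things: they replace direct error comparisons with errors.Is, and IsFinalized now short-circuits when the node is still present in forkchoice, since a node the store still tracks cannot be finalized. A small self-contained sketch of why errors.Is matters once a sentinel may arrive wrapped (the sentinel name below is an illustrative stand-in, not Prysm's):
package main

import (
	"errors"
	"fmt"
)

// errNilNode stands in for a sentinel such as doublylinkedtree.ErrNilNode.
var errNilNode = errors.New("nil node in forkchoice store")

func lookup() error {
	// Wrapping the sentinel, as intermediate callers often do.
	return fmt.Errorf("is optimistic: %w", errNilNode)
}

func main() {
	err := lookup()
	fmt.Println(err == errNilNode)          // false: equality misses wrapped sentinels
	fmt.Println(errors.Is(err, errNilNode)) // true: errors.Is unwraps before comparing
}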
@@ -338,7 +343,7 @@ func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool,
if err == nil {
return optimistic, nil
}
if err != protoarray.ErrUnknownNodeRoot && err != doublylinkedtree.ErrNilNode {
if !errors.Is(err, doublylinkedtree.ErrNilNode) {
return false, err
}
// if the requested root is the headroot and the root is not found in

View File

@@ -5,6 +5,7 @@ import (
"testing"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
@@ -32,7 +33,7 @@ func TestHeadSlot_DataRace(t *testing.T) {
func TestHeadRoot_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
head: &head{root: [32]byte{'A'}},
}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
@@ -54,7 +55,7 @@ func TestHeadBlock_DataRace(t *testing.T) {
wsb, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}})
require.NoError(t, err)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
head: &head{block: wsb},
}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
@@ -74,7 +75,7 @@ func TestHeadBlock_DataRace(t *testing.T) {
func TestHeadState_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)

View File

@@ -7,7 +7,6 @@ import (
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
@@ -85,7 +84,7 @@ func TestFinalizedCheckpt_GenesisRootOk(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -110,7 +109,7 @@ func TestCurrentJustifiedCheckpt_CanRetrieve(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -126,7 +125,10 @@ func TestHeadSlot_CanRetrieve(t *testing.T) {
c := &Service{}
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
c.head = &head{slot: 100, state: s}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
b.Block().SetSlot(100)
c.head = &head{block: b, state: s}
assert.Equal(t, types.Slot(100), c.HeadSlot())
}
@@ -137,7 +139,7 @@ func TestHeadRoot_CanRetrieve(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -156,7 +158,7 @@ func TestHeadRoot_UseDB(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -199,7 +201,7 @@ func TestHeadState_CanRetrieve(t *testing.T) {
c.head = &head{state: s}
headState, err := c.HeadState(context.Background())
require.NoError(t, err)
assert.DeepEqual(t, headState.InnerStateUnsafe(), s.InnerStateUnsafe(), "Incorrect head state received")
assert.DeepEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Incorrect head state received")
}
func TestGenesisTime_CanRetrieve(t *testing.T) {
@@ -305,31 +307,6 @@ func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
root = c.HeadGenesisValidatorsRoot()
require.DeepEqual(t, root[:], s.GenesisValidatorsRoot())
}
func TestService_ChainHeads_ProtoArray(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
roots, slots := c.ChainHeads()
require.DeepEqual(t, [][32]byte{{'c'}, {'d'}, {'e'}}, roots)
require.DeepEqual(t, []types.Slot{102, 103, 104}, slots)
}
//
// A <- B <- C
@@ -337,7 +314,7 @@ func TestService_ChainHeads_ProtoArray(t *testing.T) {
// \ ---------- E
// ---------- D
func TestService_ChainHeads_DoublyLinkedTree(t *testing.T) {
func TestService_ChainHeads(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -354,7 +331,7 @@ func TestService_ChainHeads_DoublyLinkedTree(t *testing.T) {
st, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -429,29 +406,7 @@ func TestService_HeadValidatorIndexToPublicKeyNil(t *testing.T) {
require.Equal(t, [fieldparams.BLSPubkeyLength]byte{}, p)
}
func TestService_IsOptimistic_ProtoArray(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.BellatrixForkEpoch = 0
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
}
func TestService_IsOptimistic_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimistic(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.BellatrixForkEpoch = 0
@@ -460,7 +415,7 @@ func TestService_IsOptimistic_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
@@ -481,9 +436,9 @@ func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
require.Equal(t, false, opt)
}
func TestService_IsOptimisticForRoot_ProtoArray(t *testing.T) {
func TestService_IsOptimisticForRoot(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -498,86 +453,10 @@ func TestService_IsOptimisticForRoot_ProtoArray(t *testing.T) {
require.Equal(t, true, opt)
}
func TestService_IsOptimisticForRoot_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
opt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'})
require.NoError(t, err)
require.Equal(t, true, opt)
}
func TestService_IsOptimisticForRoot_DB_ProtoArray(t *testing.T) {
func TestService_IsOptimisticForRoot_DB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
_, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.ErrorContains(t, "nil summary returned from the DB", err)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err := c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
cp := &ethpb.Checkpoint{
Epoch: 1,
Root: validatedRoot[:],
}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, validatedRoot))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, cp))
validated, err := c.IsOptimisticForRoot(ctx, validatedRoot)
require.NoError(t, err)
require.Equal(t, false, validated)
// Before the first finalized epoch, finalized root could be zeros.
validatedCheckpoint = &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, br))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
}
func TestService_IsOptimisticForRoot_DB_DoublyLinkedTree(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
@@ -635,7 +514,7 @@ func TestService_IsOptimisticForRoot_DB_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10

View File

@@ -8,19 +8,17 @@ var (
// ErrInvalidBlockHashPayloadStatus is returned when the payload has invalid block hash.
ErrInvalidBlockHashPayloadStatus = invalidBlock{error: errors.New("received an INVALID_BLOCK_HASH payload from execution engine")}
// ErrUndefinedExecutionEngineError is returned when the execution engine returns an error that is not defined
ErrUndefinedExecutionEngineError = errors.New("received an undefined ee error")
ErrUndefinedExecutionEngineError = errors.New("received an undefined execution engine error")
// errNilFinalizedInStore is returned when a nil finalized checkpt is returned from store.
errNilFinalizedInStore = errors.New("nil finalized checkpoint returned from store")
// errNilFinalizedCheckpoint is returned when a nil finalized checkpt is returned from a state.
errNilFinalizedCheckpoint = errors.New("nil finalized checkpoint returned from state")
// errNilJustifiedCheckpoint is returned when a nil justified checkpt is returned from a state.
errNilJustifiedCheckpoint = errors.New("nil finalized checkpoint returned from state")
errNilJustifiedCheckpoint = errors.New("nil justified checkpoint returned from state")
// errInvalidNilSummary is returned when a nil summary is returned from the DB.
errInvalidNilSummary = errors.New("nil summary returned from the DB")
// errWrongBlockCount is returned when the wrong number of blocks or block roots is used
errWrongBlockCount = errors.New("wrong number of blocks or block roots")
// block is not a valid optimistic candidate block
errNotOptimisticCandidate = errors.New("block is not suitable for optimistic sync")
// errBlockNotFoundInCacheOrDB is returned when a block is not found in the cache or DB.
errBlockNotFoundInCacheOrDB = errors.New("block not found in cache or db")
// errNilStateFromStategen is returned when a nil state is returned from the state generator.

View File

@@ -15,9 +15,11 @@ import (
"github.com/prysmaticlabs/prysm/v3/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
payloadattribute "github.com/prysmaticlabs/prysm/v3/consensus-types/payload-attribute"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -67,11 +69,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
}
nextSlot := s.CurrentSlot() + 1 // Cache payload ID for next slot proposer.
hasAttr, attr, proposerId, err := s.getPayloadAttribute(ctx, arg.headState, nextSlot)
if err != nil {
log.WithError(err).Error("Could not get head payload attribute")
return nil, nil
}
hasAttr, attr, proposerId := s.getPayloadAttribute(ctx, arg.headState, nextSlot)
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, attr)
if err != nil {
@@ -150,7 +148,8 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
log.WithError(err).Error("Could not set head root to valid")
return nil, nil
}
if hasAttr && payloadID != nil { // If the forkchoice update call has an attribute, update the proposer payload ID cache.
// If the forkchoice update call has an attribute, update the proposer payload ID cache.
if hasAttr && payloadID != nil {
var pId [8]byte
copy(pId[:], payloadID[:])
s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pId, arg.headRoot)
@@ -180,10 +179,10 @@ func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, er
return bytesutil.ToBytes32(payload.BlockHash()), nil
}
// notifyForkchoiceUpdate signals execution engine on a new payload.
// notifyNewPayload signals execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
postStateHeader *enginev1.ExecutionPayloadHeader, blk interfaces.SignedBeaconBlock) (bool, error) {
postStateHeader interfaces.ExecutionData, blk interfaces.SignedBeaconBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()
@@ -251,22 +250,25 @@ func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
// getPayloadAttribute returns the payload attributes for the given state and slot.
// The attribute is required to initiate a payload build process in the context of an `engine_forkchoiceUpdated` call.
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot types.Slot) (bool, *enginev1.PayloadAttributes, types.ValidatorIndex, error) {
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot types.Slot) (bool, payloadattribute.Attributer, types.ValidatorIndex) {
emptyAttri := payloadattribute.EmptyWithVersion(st.Version())
// Root is `[32]byte{}` since we are retrieving proposer ID of a given slot. During insertion at assignment the root was not known.
proposerID, _, ok := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(slot, [32]byte{} /* root */)
if !ok { // There's no need to build attribute if there is no proposer for slot.
return false, nil, 0, nil
return false, emptyAttri, 0
}
// Get previous randao.
st = st.Copy()
st, err := transition.ProcessSlotsIfPossible(ctx, st, slot)
if err != nil {
return false, nil, 0, err
log.WithError(err).Error("Could not process slots to get payload attribute")
return false, emptyAttri, 0
}
prevRando, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
return false, nil, 0, nil
log.WithError(err).Error("Could not get randao mix to get payload attribute")
return false, emptyAttri, 0
}
// Get fee recipient.
@@ -284,7 +286,8 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
"Please refer to our documentation for instructions")
}
case err != nil:
return false, nil, 0, errors.Wrap(err, "could not get fee recipient in db")
log.WithError(err).Error("Could not get fee recipient to get payload attribute")
return false, emptyAttri, 0
default:
feeRecipient = recipient
}
@@ -292,14 +295,44 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
// Get timestamp.
t, err := slots.ToTime(uint64(s.genesisTime.Unix()), slot)
if err != nil {
return false, nil, 0, err
log.WithError(err).Error("Could not get timestamp to get payload attribute")
return false, emptyAttri, 0
}
attr := &enginev1.PayloadAttributes{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
var attr payloadattribute.Attributer
switch st.Version() {
case version.Capella:
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return false, emptyAttri, 0
}
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV2{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
Withdrawals: withdrawals,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return false, emptyAttri, 0
}
case version.Bellatrix:
attr, err = payloadattribute.New(&enginev1.PayloadAttributes{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return false, emptyAttri, 0
}
default:
log.WithField("version", st.Version()).Error("Could not get payload attribute due to unknown state version")
return false, emptyAttri, 0
}
return true, attr, proposerID, nil
return true, attr, proposerID
}
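The rewritten getPayloadAttribute no longer propagates errors: every failure is logged and an empty attribute is returned, and the attribute shape is chosen by the state's fork version (PayloadAttributesV2 with withdrawals for Capella, PayloadAttributes for Bellatrix). A self-contained sketch of that control flow; all names below (attributer, attrV1, attrV2, expectedWithdrawals) are illustrative stand-ins, not Prysm APIs:
package main

import (
	"errors"
	"log"
)

const (
	versionBellatrix = iota
	versionCapella
)

// attributer stands in for a version-agnostic payload attribute interface.
type attributer interface{ isAttr() }

type attrV1 struct{ Timestamp uint64 }

type attrV2 struct {
	Timestamp   uint64
	Withdrawals []string
}

func (attrV1) isAttr() {}
func (attrV2) isAttr() {}

// expectedWithdrawals is a placeholder for the state's expected-withdrawals lookup.
func expectedWithdrawals() ([]string, error) {
	return nil, errors.New("not implemented")
}

// buildAttribute mirrors the new control flow: choose the attribute shape by
// fork version and, on any failure, log and hand back an empty attribute with
// ok=false instead of returning an error to the caller.
func buildAttribute(version int, ts uint64) (bool, attributer) {
	switch version {
	case versionCapella:
		w, err := expectedWithdrawals()
		if err != nil {
			log.Printf("could not get expected withdrawals: %v", err)
			return false, attrV2{}
		}
		return true, attrV2{Timestamp: ts, Withdrawals: w}
	case versionBellatrix:
		return true, attrV1{Timestamp: ts}
	default:
		log.Printf("unknown state version: %d", version)
		return false, attrV1{}
	}
}

func main() {
	ok, attr := buildAttribute(versionBellatrix, 12)
	log.Println(ok, attr)
}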
// removeInvalidBlockAndState removes the invalid block and its corresponding state from the cache and DB.

View File

@@ -13,9 +13,9 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
mockExecution "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
bstate "github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
@@ -31,6 +31,69 @@ import (
logTest "github.com/sirupsen/logrus/hooks/test"
)
func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
altairBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockAltair())
altairBlkRoot, err := altairBlk.Block().HashTreeRoot()
require.NoError(t, err)
bellatrixBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockBellatrix())
bellatrixBlkRoot, err := bellatrixBlk.Block().HashTreeRoot()
require.NoError(t, err)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 10)
service.head = &head{
state: st,
}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
require.NoError(t, err)
pid := &v1.PayloadIDBytes{1}
service.cfg.ExecutionEngineCaller = &mockExecution.EngineClient{PayloadIDBytes: pid}
st, _ = util.DeterministicGenesisState(t, 1)
require.NoError(t, beaconDB.SaveState(ctx, st, bellatrixBlkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bellatrixBlkRoot))
// Intentionally generate a bad state such that `hash_tree_root` fails during `process_slot`
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
arg := &notifyForkchoiceUpdateArg{
headState: s,
headRoot: [32]byte{},
headBlock: b,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(1, 0, [8]byte{}, [32]byte{})
got, err := service.notifyForkchoiceUpdate(ctx, arg)
require.NoError(t, err)
require.DeepEqual(t, got, pid) // We still get a payload ID even though the state is bad, meaning the call ran to completion.
}
func Test_NotifyForkchoiceUpdate(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -43,16 +106,17 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 10)
service.head = &head{
state: st,
}
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -198,151 +262,6 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
}
}
//
//
// A <- B <- C <- D
// \
// ---------- E <- F
// \
// ------ G
// D is the current head, attestations for F and G come late, both are invalid.
// We switch recursively to F then G and finally to D.
//
// We test:
// 1. forkchoice removes blocks F and G from the forkchoice implementation
// 2. forkchoice removes the weights of these blocks
// 3. the blockchain package calls fcu to obtain heads G -> F -> D.
func Test_NotifyForkchoiceUpdateRecursive_Protoarray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
// Prepare blocks
ba := util.NewBeaconBlockBellatrix()
ba.Block.Body.ExecutionPayload.BlockNumber = 1
wba := util.SaveBlock(t, ctx, beaconDB, ba)
bra, err := wba.Block().HashTreeRoot()
require.NoError(t, err)
bb := util.NewBeaconBlockBellatrix()
bb.Block.Body.ExecutionPayload.BlockNumber = 2
wbb := util.SaveBlock(t, ctx, beaconDB, bb)
brb, err := wbb.Block().HashTreeRoot()
require.NoError(t, err)
bc := util.NewBeaconBlockBellatrix()
bc.Block.Body.ExecutionPayload.BlockNumber = 3
wbc := util.SaveBlock(t, ctx, beaconDB, bc)
brc, err := wbc.Block().HashTreeRoot()
require.NoError(t, err)
bd := util.NewBeaconBlockBellatrix()
pd := [32]byte{'D'}
bd.Block.Body.ExecutionPayload.BlockHash = pd[:]
bd.Block.Body.ExecutionPayload.BlockNumber = 4
wbd := util.SaveBlock(t, ctx, beaconDB, bd)
brd, err := wbd.Block().HashTreeRoot()
require.NoError(t, err)
be := util.NewBeaconBlockBellatrix()
pe := [32]byte{'E'}
be.Block.Body.ExecutionPayload.BlockHash = pe[:]
be.Block.Body.ExecutionPayload.BlockNumber = 5
wbe := util.SaveBlock(t, ctx, beaconDB, be)
bre, err := wbe.Block().HashTreeRoot()
require.NoError(t, err)
bf := util.NewBeaconBlockBellatrix()
pf := [32]byte{'F'}
bf.Block.Body.ExecutionPayload.BlockHash = pf[:]
bf.Block.Body.ExecutionPayload.BlockNumber = 6
bf.Block.ParentRoot = bre[:]
wbf := util.SaveBlock(t, ctx, beaconDB, bf)
brf, err := wbf.Block().HashTreeRoot()
require.NoError(t, err)
bg := util.NewBeaconBlockBellatrix()
bg.Block.Body.ExecutionPayload.BlockNumber = 7
pg := [32]byte{'G'}
bg.Block.Body.ExecutionPayload.BlockHash = pg[:]
bg.Block.ParentRoot = bre[:]
wbg := util.SaveBlock(t, ctx, beaconDB, bg)
brg, err := wbg.Block().HashTreeRoot()
require.NoError(t, err)
// Insert blocks into forkchoice
service := setupBeaconChain(t, beaconDB)
fcs := protoarray.New()
service.cfg.ForkChoiceStore = fcs
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
service.justifiedBalances.balances = []uint64{50, 100, 200}
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
// Insert Attestations to D, F and G so that they have higher weight than D
// Ensure G is head
fcs.ProcessAttestation(ctx, []uint64{0}, brd, 1)
fcs.ProcessAttestation(ctx, []uint64{1}, brf, 1)
fcs.ProcessAttestation(ctx, []uint64{2}, brg, 1)
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: bra}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(jc))
headRoot, err := fcs.Head(ctx, []uint64{50, 100, 200})
require.NoError(t, err)
require.Equal(t, brg, headRoot)
// Prepare Engine Mock to return invalid unless head is D, LVH = E
service.cfg.ExecutionEngineCaller = &mockExecution.EngineClient{ErrForkchoiceUpdated: execution.ErrInvalidPayloadStatus, ForkChoiceUpdatedResp: pe[:], OverrideValidHash: [32]byte{'D'}}
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
state: st,
block: wba,
}
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &notifyForkchoiceUpdateArg{
headState: st,
headBlock: wbg.Block(),
headRoot: brg,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
require.Equal(t, true, IsInvalidBlock(err))
require.Equal(t, brf, InvalidBlockRoot(err))
// Ensure Head is D
headRoot, err = fcs.Head(ctx, service.justifiedBalances.balances)
require.NoError(t, err)
require.Equal(t, brd, headRoot)
// Ensure F and G were removed but their parent E wasn't
require.Equal(t, false, fcs.HasNode(brf))
require.Equal(t, false, fcs.HasNode(brg))
require.Equal(t, true, fcs.HasNode(bre))
}
func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -577,7 +496,7 @@ func Test_NotifyNewPayload(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
phase0State, _ := util.DeterministicGenesisState(t, 1)
@@ -823,7 +742,7 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
@@ -866,15 +785,15 @@ func Test_GetPayloadAttribute(t *testing.T) {
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, doublylinkedtree.New())),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
// Cache miss
service, err := NewService(ctx, opts...)
require.NoError(t, err)
hasPayload, _, vId, err := service.getPayloadAttribute(ctx, nil, 0)
require.NoError(t, err)
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
hasPayload, _, vId := service.getPayloadAttribute(ctx, st, 0)
require.Equal(t, false, hasPayload)
require.Equal(t, types.ValidatorIndex(0), vId)
@@ -882,24 +801,65 @@ func Test_GetPayloadAttribute(t *testing.T) {
suggestedVid := types.ValidatorIndex(1)
slot := types.Slot(1)
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
st, _ := util.DeterministicGenesisState(t, 1)
hook := logTest.NewGlobal()
hasPayload, attr, vId, err := service.getPayloadAttribute(ctx, st, slot)
require.NoError(t, err)
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, slot)
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient).String())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
require.NoError(t, service.cfg.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, []types.ValidatorIndex{suggestedVid}, []common.Address{suggestedAddr}))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hasPayload, attr, vId, err = service.getPayloadAttribute(ctx, st, slot)
require.NoError(t, err)
hasPayload, attr, vId = service.getPayloadAttribute(ctx, st, slot)
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient))
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
}
func Test_GetPayloadAttributeV2(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB, doublylinkedtree.New())),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
// Cache miss
service, err := NewService(ctx, opts...)
require.NoError(t, err)
st, _ := util.DeterministicGenesisStateCapella(t, 1)
hasPayload, _, vId := service.getPayloadAttribute(ctx, st, 0)
require.Equal(t, false, hasPayload)
require.Equal(t, types.ValidatorIndex(0), vId)
// Cache hit, advance state, no fee recipient
suggestedVid := types.ValidatorIndex(1)
slot := types.Slot(1)
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hook := logTest.NewGlobal()
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, slot)
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
a, err := attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
require.NoError(t, service.cfg.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, []types.ValidatorIndex{suggestedVid}, []common.Address{suggestedAddr}))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hasPayload, attr, vId = service.getPayloadAttribute(ctx, st, slot)
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
a, err = attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
}
func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
@@ -908,8 +868,8 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
stateGen := stategen.New(beaconDB)
fcs := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fcs)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stateGen),
@@ -917,7 +877,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisStateRoot := [32]byte{}
var genesisStateRoot [32]byte
genesisBlk := blocks.NewGenesisBlock(genesisStateRoot[:])
util.SaveBlock(t, ctx, beaconDB, genesisBlk)
genesisRoot, err := genesisBlk.Block.HashTreeRoot()
@@ -1022,10 +982,11 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
func TestService_removeInvalidBlockAndState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB, fc)),
WithForkChoiceStore(fc),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -1074,10 +1035,11 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
func TestService_getPayloadHash(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB, fc)),
WithForkChoiceStore(fc),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)

View File

@@ -19,6 +19,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/math"
ethpbv1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -51,7 +52,6 @@ func (s *Service) UpdateAndSaveHeadWithBalances(ctx context.Context) error {
// This defines the current chain service's view of head.
type head struct {
slot types.Slot // current head slot.
root [32]byte // current head root.
block interfaces.SignedBeaconBlock // current head block.
state state.BeaconState // current head state.
@@ -109,11 +109,21 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
}
dis := headSlot + newHeadSlot - 2*forkSlot
dep := math.Max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot))
oldWeight, err := s.ForkChoicer().Weight(oldHeadRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", oldHeadRoot)).Warn("could not determine node weight")
}
newWeight, err := s.ForkChoicer().Weight(newHeadRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("could not determine node weight")
}
log.WithFields(logrus.Fields{
"newSlot": fmt.Sprintf("%d", newHeadSlot),
"newRoot": fmt.Sprintf("%#x", newHeadRoot),
"newWeight": newWeight,
"oldSlot": fmt.Sprintf("%d", headSlot),
"oldRoot": fmt.Sprintf("%#x", oldHeadRoot),
"oldWeight": oldWeight,
"commonAncestorRoot": fmt.Sprintf("%#x", commonRoot),
"distance": dis,
"depth": dep,
@@ -139,7 +149,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
},
})
if err := s.saveOrphanedAtts(ctx, oldHeadRoot, newHeadRoot); err != nil {
if err := s.saveOrphanedOperations(ctx, oldHeadRoot, newHeadRoot); err != nil {
return err
}
reorgCount.Inc()
@@ -202,7 +212,6 @@ func (s *Service) setHead(root [32]byte, block interfaces.SignedBeaconBlock, sta
return err
}
s.head = &head{
slot: block.Block().Slot(),
root: root,
block: bCp,
state: state.Copy(),
@@ -223,7 +232,6 @@ func (s *Service) setHeadInitialSync(root [32]byte, block interfaces.SignedBeaco
return err
}
s.head = &head{
slot: block.Block().Slot(),
root: root,
block: bCp,
state: state,
@@ -234,7 +242,10 @@ func (s *Service) setHeadInitialSync(root [32]byte, block interfaces.SignedBeaco
// This returns the head slot.
// This is a lock free version.
func (s *Service) headSlot() types.Slot {
return s.head.slot
if s.head == nil || s.head.block == nil || s.head.block.Block() == nil {
return 0
}
return s.head.block.Block().Slot()
}
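With the cached slot field removed from head, headSlot now derives the slot from the head block and guards every pointer on the way down. A tiny standalone sketch of the same defensive-accessor pattern (all types here are illustrative, not Prysm's):
package main

import "fmt"

type innerBlock struct{ slot uint64 }

type signedBlock struct{ blk *innerBlock }

func (s *signedBlock) Block() *innerBlock { return s.blk }

type head struct{ block *signedBlock }

type service struct{ head *head }

// headSlot derives the slot from the head block rather than a cached field,
// returning 0 if any link in the chain is nil.
func (s *service) headSlot() uint64 {
	if s.head == nil || s.head.block == nil || s.head.block.Block() == nil {
		return 0
	}
	return s.head.block.Block().slot
}

func main() {
	var svc service
	fmt.Println(svc.headSlot()) // 0: safe on an uninitialized head

	svc.head = &head{block: &signedBlock{blk: &innerBlock{slot: 100}}}
	fmt.Println(svc.headSlot()) // 100
}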
// This returns the head root.
@@ -347,9 +358,9 @@ func (s *Service) notifyNewHeadEvent(
return nil
}
// This saves the attestations between `orphanedRoot` and the common ancestor root that is derived using `newHeadRoot`.
// This saves the Attestations and BLSToExecChanges between `orphanedRoot` and the common ancestor root that is derived using `newHeadRoot`.
// It also filters out the attestations that are one epoch older as a defense so invalid attestations don't flow into the attestation pool.
func (s *Service) saveOrphanedAtts(ctx context.Context, orphanedRoot [32]byte, newHeadRoot [32]byte) error {
func (s *Service) saveOrphanedOperations(ctx context.Context, orphanedRoot [32]byte, newHeadRoot [32]byte) error {
commonAncestorRoot, _, err := s.ForkChoicer().CommonAncestor(ctx, newHeadRoot, orphanedRoot)
switch {
// Exit early if there's no common ancestor and root doesn't exist, there would be nothing to save.
@@ -388,6 +399,15 @@ func (s *Service) saveOrphanedAtts(ctx context.Context, orphanedRoot [32]byte, n
}
saveOrphanedAttCount.Inc()
}
if orphanedBlk.Version() >= version.Capella {
changes, err := orphanedBlk.Block().Body().BLSToExecutionChanges()
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
for _, c := range changes {
s.cfg.BLSToExecPool.InsertBLSToExecChange(c)
}
}
parentRoot := orphanedBlk.Block().ParentRoot()
orphanedRoot = bytesutil.ToBytes32(parentRoot[:])
}
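The new branch above returns BLS-to-execution changes from orphaned Capella blocks to the local operations pool during a reorg, alongside the existing attestation recovery. A reduced sketch of that walk from the orphaned root back toward the common ancestor (the block, pool, and change types are illustrative, not Prysm's):
package main

import "fmt"

type change struct{ ValidatorIndex uint64 }

type block struct {
	parent  *block
	version int // 0 = pre-Capella, 1 = Capella and later
	changes []change
}

type pool struct{ items []change }

func (p *pool) insert(c change) { p.items = append(p.items, c) }

const capella = 1

// saveOrphanedOperations walks orphaned blocks back toward the common ancestor
// and, for post-Capella blocks, returns their BLS-to-execution changes to the
// pool so they can be re-included on the new canonical chain.
func saveOrphanedOperations(orphaned, commonAncestor *block, p *pool) {
	for blk := orphaned; blk != nil && blk != commonAncestor; blk = blk.parent {
		if blk.version >= capella {
			for _, c := range blk.changes {
				p.insert(c)
			}
		}
	}
}

func main() {
	ancestor := &block{}
	orphan := &block{parent: ancestor, version: capella, changes: []change{{ValidatorIndex: 7}}}
	p := &pool{}
	saveOrphanedOperations(orphan, ancestor, p)
	fmt.Println(len(p.items)) // 1
}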

View File

@@ -7,6 +7,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/signing"
dbtest "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
@@ -19,7 +20,7 @@ func TestService_headSyncCommitteeFetcher_Errors(t *testing.T) {
beaconDB := dbtest.SetupDB(t)
c := &Service{
cfg: &config{
StateGen: stategen.New(beaconDB),
StateGen: stategen.New(beaconDB, doublylinkedtree.New()),
},
}
c.head = &head{}
@@ -37,7 +38,7 @@ func TestService_HeadDomainFetcher_Errors(t *testing.T) {
beaconDB := dbtest.SetupDB(t)
c := &Service{
cfg: &config{
StateGen: stategen.New(beaconDB),
StateGen: stategen.New(beaconDB, doublylinkedtree.New()),
},
}
c.head = &head{}

View File

@@ -10,7 +10,7 @@ import (
mock "github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain/testing"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
@@ -31,7 +31,7 @@ func TestSaveHead_Same(t *testing.T) {
service := setupBeaconChain(t, beaconDB)
r := [32]byte{'A'}
service.head = &head{slot: 0, root: r}
service.head = &head{root: r}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 1)
@@ -54,7 +54,6 @@ func TestSaveHead_Different(t *testing.T) {
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
slot: 0,
root: oldRoot,
block: oldBlock,
}
@@ -90,7 +89,7 @@ func TestSaveHead_Different(t *testing.T) {
pb, err := headBlock.Proto()
require.NoError(t, err)
assert.DeepEqual(t, newHeadSignedBlock, pb, "Head did not change")
assert.DeepSSZEqual(t, headState.CloneInnerState(), service.headState(ctx).CloneInnerState(), "Head did not change")
assert.DeepSSZEqual(t, headState.ToProto(), service.headState(ctx).ToProto(), "Head did not change")
}
func TestSaveHead_Different_Reorg(t *testing.T) {
@@ -108,7 +107,6 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
slot: 0,
root: oldRoot,
block: oldBlock,
}
@@ -148,7 +146,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
pb, err := headBlock.Proto()
require.NoError(t, err)
assert.DeepEqual(t, newHeadSignedBlock, pb, "Head did not change")
assert.DeepSSZEqual(t, headState.CloneInnerState(), service.headState(ctx).CloneInnerState(), "Head did not change")
assert.DeepSSZEqual(t, headState.ToProto(), service.headState(ctx).ToProto(), "Head did not change")
require.LogsContain(t, hook, "Chain reorg occurred")
require.LogsContain(t, hook, "distance=1")
require.LogsContain(t, hook, "depth=1")
@@ -234,64 +232,6 @@ func Test_notifyNewHeadEvent(t *testing.T) {
})
}
func TestSaveOrphanedAtts_NoCommonAncestor_Protoarray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
// this test does not make sense in doubly linked tree since it enforces
// that the finalized node is a common ancestor
service.cfg.ForkChoiceStore = protoarray.New()
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
// Chain setup
// 0 -- 1 -- 2 -- 3
// -4
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, blkG)
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
blk1, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
blk1.Block.ParentRoot = rG[:]
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
assert.NoError(t, err)
blk2.Block.ParentRoot = r1[:]
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
blk3, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 3)
assert.NoError(t, err)
blk3.Block.ParentRoot = r2[:]
r3, err := blk3.Block.HashTreeRoot()
require.NoError(t, err)
blk4 := util.NewBeaconBlock()
blk4.Block.Slot = 4
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
}
func TestSaveOrphanedAtts(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -344,7 +284,7 @@ func TestSaveOrphanedAtts(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
require.NoError(t, service.saveOrphanedOperations(ctx, r3, r4))
require.Equal(t, 3, service.cfg.AttPool.AggregatedAttestationCount())
wantAtts := []*ethpb.Attestation{
blk3.Block.Body.Attestations[0],
@@ -362,31 +302,37 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.cfg.BLSToExecPool = blstoexec.NewPool()
service.genesisTime = time.Now().Add(time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch+2)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
// Chain setup
// 0 -- 1 -- 2
// \-4
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
st, keys := util.DeterministicGenesisStateCapella(t, 64)
blkConfig := util.DefaultBlockGenConfig()
blkConfig.NumBLSChanges = 5
blkG, err := util.GenerateFullBlockCapella(st, keys, blkConfig, 1)
assert.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, blkG)
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
blk1, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
blkConfig.NumBLSChanges = 10
blk1, err := util.GenerateFullBlockCapella(st, keys, blkConfig, 2)
assert.NoError(t, err)
blk1.Block.ParentRoot = rG[:]
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
blkConfig.NumBLSChanges = 15
blk2, err := util.GenerateFullBlockCapella(st, keys, blkConfig, 3)
assert.NoError(t, err)
blk2.Block.ParentRoot = r1[:]
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
blk4 := util.NewBeaconBlock()
blk4 := util.NewBeaconBlockCapella()
blkConfig.NumBLSChanges = 0
blk4.Block.Slot = 4
blk4.Block.ParentRoot = rG[:]
r4, err := blk4.Block.HashTreeRoot()
@@ -394,7 +340,7 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk4} {
for _, blk := range []*ethpb.SignedBeaconBlockCapella{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
@@ -403,8 +349,11 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r2, r4))
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
require.NoError(t, service.saveOrphanedOperations(ctx, r2, r4))
require.Equal(t, 1, service.cfg.AttPool.AggregatedAttestationCount())
pending, err := service.cfg.BLSToExecPool.PendingBLSToExecChanges()
require.NoError(t, err)
require.Equal(t, 15, len(pending))
}
func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
@@ -463,7 +412,7 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
require.NoError(t, service.saveOrphanedOperations(ctx, r3, r4))
require.Equal(t, 3, service.cfg.AttPool.AggregatedAttestationCount())
wantAtts := []*ethpb.Attestation{
blk3.Block.Body.Attestations[0],
@@ -527,7 +476,7 @@ func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r2, r4))
require.NoError(t, service.saveOrphanedOperations(ctx, r2, r4))
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
}
@@ -538,7 +487,7 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}

View File

@@ -38,14 +38,14 @@ func logStateTransitionData(b interfaces.BeaconBlock) error {
if len(b.Body().VoluntaryExits()) > 0 {
log = log.WithField("voluntaryExits", len(b.Body().VoluntaryExits()))
}
if b.Version() == version.Altair || b.Version() == version.Bellatrix {
if b.Version() >= version.Altair {
agg, err := b.Body().SyncAggregate()
if err != nil {
return err
}
log = log.WithField("syncBitsCount", agg.SyncCommitteeBits.Count())
}
if b.Version() == version.Bellatrix {
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if err != nil {
return err
@@ -87,6 +87,7 @@ func logBlockSyncStatus(block interfaces.BeaconBlock, blockRoot [32]byte, justif
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime),
"deposits": len(block.Body().Deposits()),
}).Debug("Synced new block")
} else {
log.WithFields(logrus.Fields{
@@ -117,12 +118,24 @@ func logPayload(block interfaces.BeaconBlock) error {
return errors.New("gas limit should not be 0")
}
gasUtilized := float64(payload.GasUsed()) / float64(payload.GasLimit())
log.WithFields(logrus.Fields{
fields := logrus.Fields{
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
"parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash())),
"blockNumber": payload.BlockNumber,
"gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
}).Debug("Synced new payload")
}
if block.Version() >= version.Capella {
withdrawals, err := payload.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals")
}
fields["withdrawals"] = len(withdrawals)
changes, err := block.Body().BLSToExecutionChanges()
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
fields["blsToExecutionChanges"] = len(changes)
}
log.WithFields(fields).Debug("Synced new payload")
return nil
}
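For reference, the gasUtilized field in the log above is just the used/limit ratio formatted to two decimals; a tiny self-contained example with made-up numbers:

package main

import "fmt"

func main() {
	// Hypothetical block: 15M gas used against a 30M gas limit.
	gasUsed, gasLimit := uint64(15_000_000), uint64(30_000_000)
	gasUtilized := float64(gasUsed) / float64(gasLimit)
	fmt.Printf("%.2f\n", gasUtilized) // prints 0.50
}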

View File

@@ -317,9 +317,8 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
var b *precompute.Balance
var v []*precompute.Validator
var err error
switch headState.Version() {
case version.Phase0:
// Validator participation should be viewed on the canonical chain.
if headState.Version() == version.Phase0 {
v, b, err = precompute.New(ctx, headState)
if err != nil {
return err
@@ -328,7 +327,7 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
if err != nil {
return err
}
case version.Altair, version.Bellatrix:
} else if headState.Version() >= version.Altair {
v, b, err = altair.InitializePrecomputeValidators(ctx, headState)
if err != nil {
return err
@@ -337,9 +336,10 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
if err != nil {
return err
}
default:
return errors.Errorf("invalid state type provided: %T", headState.InnerStateUnsafe())
} else {
return errors.Errorf("invalid state type provided: %T", headState.ToProtoUnsafe())
}
prevEpochActiveBalances.Set(float64(b.ActivePrevEpoch))
prevEpochSourceBalances.Set(float64(b.PrevEpochAttested))
prevEpochTargetBalances.Set(float64(b.PrevEpochTargetAttested))

View File

@@ -6,17 +6,17 @@ import (
"testing"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
)
func testServiceOptsWithDB(t *testing.T) []Option {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
fcs := doublylinkedtree.New()
return []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
}

View File

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/slashings"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p"
@@ -99,6 +100,14 @@ func WithSlashingPool(p slashings.PoolManager) Option {
}
}
// WithBLSToExecPool to keep track of BLS to Execution address changes.
func WithBLSToExecPool(p blstoexec.PoolManager) Option {
return func(s *Service) error {
s.cfg.BLSToExecPool = p
return nil
}
}
// WithP2PBroadcaster to broadcast messages after appropriate processing.
func WithP2PBroadcaster(p p2p.Broadcaster) Option {
return func(s *Service) error {

View File

@@ -15,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"github.com/sirupsen/logrus"
)
@@ -22,20 +23,21 @@ import (
// validateMergeBlock validates terminal block hash in the event of manual overrides before checking for total difficulty.
//
// def validate_merge_block(block: BeaconBlock) -> None:
// if TERMINAL_BLOCK_HASH != Hash32():
// # If `TERMINAL_BLOCK_HASH` is used as an override, the activation epoch must be reached.
// assert compute_epoch_at_slot(block.slot) >= TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH
// assert block.body.execution_payload.parent_hash == TERMINAL_BLOCK_HASH
// return
//
// pow_block = get_pow_block(block.body.execution_payload.parent_hash)
// # Check if `pow_block` is available
// assert pow_block is not None
// pow_parent = get_pow_block(pow_block.parent_hash)
// # Check if `pow_parent` is available
// assert pow_parent is not None
// # Check if `pow_block` is a valid terminal PoW block
// assert is_valid_terminal_pow_block(pow_block, pow_parent)
// if TERMINAL_BLOCK_HASH != Hash32():
// # If `TERMINAL_BLOCK_HASH` is used as an override, the activation epoch must be reached.
// assert compute_epoch_at_slot(block.slot) >= TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH
// assert block.body.execution_payload.parent_hash == TERMINAL_BLOCK_HASH
// return
//
// pow_block = get_pow_block(block.body.execution_payload.parent_hash)
// # Check if `pow_block` is available
// assert pow_block is not None
// pow_parent = get_pow_block(pow_block.parent_hash)
// # Check if `pow_parent` is available
// assert pow_parent is not None
// # Check if `pow_block` is a valid terminal PoW block
// assert is_valid_terminal_pow_block(pow_block, pow_parent)
func (s *Service) validateMergeBlock(ctx context.Context, b interfaces.SignedBeaconBlock) error {
if err := blocks.BeaconBlockIsNil(b); err != nil {
return err
@@ -91,6 +93,7 @@ func (s *Service) getBlkParentHashAndTD(ctx context.Context, blkHash []byte) ([]
if blk == nil {
return nil, nil, errors.New("pow block is nil")
}
blk.Version = version.Bellatrix
blkTDBig, err := hexutil.DecodeBig(blk.TotalDifficulty)
if err != nil {
return nil, nil, errors.Wrap(err, "could not decode merge block total difficulty")
@@ -105,10 +108,11 @@ func (s *Service) getBlkParentHashAndTD(ctx context.Context, blkHash []byte) ([]
// validateTerminalBlockHash validates if the merge block is a valid terminal PoW block.
// spec code:
// if TERMINAL_BLOCK_HASH != Hash32():
// # If `TERMINAL_BLOCK_HASH` is used as an override, the activation epoch must be reached.
// assert compute_epoch_at_slot(block.slot) >= TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH
// assert block.body.execution_payload.parent_hash == TERMINAL_BLOCK_HASH
// return
//
// # If `TERMINAL_BLOCK_HASH` is used as an override, the activation epoch must be reached.
// assert compute_epoch_at_slot(block.slot) >= TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH
// assert block.body.execution_payload.parent_hash == TERMINAL_BLOCK_HASH
// return
func validateTerminalBlockHash(blkSlot types.Slot, payload interfaces.ExecutionData) error {
if bytesutil.ToBytes32(params.BeaconConfig().TerminalBlockHash.Bytes()) == [32]byte{} {
return nil
@@ -125,9 +129,10 @@ func validateTerminalBlockHash(blkSlot types.Slot, payload interfaces.ExecutionD
// validateTerminalBlockDifficulties validates terminal pow block by comparing own total difficulty with parent's total difficulty.
//
// def is_valid_terminal_pow_block(block: PowBlock, parent: PowBlock) -> bool:
// is_total_difficulty_reached = block.total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
// is_parent_total_difficulty_valid = parent.total_difficulty < TERMINAL_TOTAL_DIFFICULTY
// return is_total_difficulty_reached and is_parent_total_difficulty_valid
//
// is_total_difficulty_reached = block.total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
// is_parent_total_difficulty_valid = parent.total_difficulty < TERMINAL_TOTAL_DIFFICULTY
// return is_total_difficulty_reached and is_parent_total_difficulty_valid
func validateTerminalBlockDifficulties(currentDifficulty *uint256.Int, parentDifficulty *uint256.Int) (bool, error) {
b, ok := new(big.Int).SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
if !ok {

View File

@@ -10,7 +10,7 @@ import (
"github.com/holiman/uint256"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
mocks "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
@@ -109,10 +109,10 @@ func Test_validateMergeBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
@@ -159,10 +159,10 @@ func Test_validateMergeBlock(t *testing.T) {
func Test_getBlkParentHashAndTD(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)

View File

@@ -19,23 +19,24 @@ import (
// The delay is handled by the caller in `processAttestations`.
//
// Spec pseudocode definition:
// def on_attestation(store: Store, attestation: Attestation) -> None:
// """
// Run ``on_attestation`` upon receiving a new ``attestation`` from either within a block or directly on the wire.
//
// An ``attestation`` that is asserted as invalid may be valid at a later time,
// consider scheduling it for later processing in such case.
// """
// validate_on_attestation(store, attestation)
// store_target_checkpoint_state(store, attestation.data.target)
// def on_attestation(store: Store, attestation: Attestation) -> None:
// """
// Run ``on_attestation`` upon receiving a new ``attestation`` from either within a block or directly on the wire.
//
// # Get state at the `target` to fully validate attestation
// target_state = store.checkpoint_states[attestation.data.target]
// indexed_attestation = get_indexed_attestation(target_state, attestation)
// assert is_valid_indexed_attestation(target_state, indexed_attestation)
// An ``attestation`` that is asserted as invalid may be valid at a later time,
// consider scheduling it for later processing in such case.
// """
// validate_on_attestation(store, attestation)
// store_target_checkpoint_state(store, attestation.data.target)
//
// # Update latest messages for attesting indices
// update_latest_messages(store, indexed_attestation.attesting_indices, attestation)
// # Get state at the `target` to fully validate attestation
// target_state = store.checkpoint_states[attestation.data.target]
// indexed_attestation = get_indexed_attestation(target_state, attestation)
// assert is_valid_indexed_attestation(target_state, indexed_attestation)
//
// # Update latest messages for attesting indices
// update_latest_messages(store, indexed_attestation.attesting_indices, attestation)
func (s *Service) OnAttestation(ctx context.Context, a *ethpb.Attestation) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onAttestation")
defer span.End()

View File

@@ -8,7 +8,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
@@ -22,14 +21,15 @@ import (
"github.com/prysmaticlabs/prysm/v3/time/slots"
)
func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fc),
WithStateGen(stategen.New(beaconDB, fc)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -128,142 +128,6 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
}
}
func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
_, err = blockTree1(t, beaconDB, []byte{'g'})
require.NoError(t, err)
blkWithoutState := util.NewBeaconBlock()
blkWithoutState.Block.Slot = 0
util.SaveBlock(t, ctx, beaconDB, blkWithoutState)
BlkWithOutStateRoot, err := blkWithoutState.Block.HashTreeRoot()
require.NoError(t, err)
blkWithStateBadAtt := util.NewBeaconBlock()
blkWithStateBadAtt.Block.Slot = 1
util.SaveBlock(t, ctx, beaconDB, blkWithStateBadAtt)
BlkWithStateBadAttRoot, err := blkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(100*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithStateBadAttRoot))
blkWithValidState := util.NewBeaconBlock()
blkWithValidState.Block.Slot = 2
util.SaveBlock(t, ctx, beaconDB, blkWithValidState)
blkWithValidStateRoot, err := blkWithValidState.Block.HashTreeRoot()
require.NoError(t, err)
s, err = util.NewBeaconState()
require.NoError(t, err)
err = s.SetFork(&ethpb.Fork{
Epoch: 0,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
})
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, blkWithValidStateRoot))
tests := []struct {
name string
a *ethpb.Attestation
wantedErr string
}{
{
name: "attestation's data slot not aligned with target vote",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
wantedErr: "slot 32 does not match target epoch 0",
},
{
name: "no pre state for attestations's target block",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
wantedErr: "could not get pre state for epoch 0",
},
{
name: "process attestation doesn't match current epoch",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}}),
wantedErr: "target epoch 100 does not match current epoch",
},
{
name: "process nil attestation",
a: nil,
wantedErr: "attestation can't be nil",
},
{
name: "process nil field (a.Data) in attestation",
a: &ethpb.Attestation{},
wantedErr: "attestation's data can't be nil",
},
{
name: "process nil field (a.Target) in attestation",
a: &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
Target: nil,
Source: &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)},
},
AggregationBits: make([]byte, 1),
Signature: make([]byte, 96),
},
wantedErr: "attestation's target can't be nil",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := service.OnAttestation(ctx, tt.a)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {
assert.NoError(t, err)
}
})
}
}
func TestStore_OnAttestation_Ok_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisState, pks := util.DeterministicGenesisState(t, 64)
service.SetGenesisTime(time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ojc := &ethpb.Checkpoint{Epoch: 1, Root: tRoot[:]}
ofc := &ethpb.Checkpoint{Epoch: 1, Root: tRoot[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.OnAttestation(ctx, att[0]))
}
func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -271,7 +135,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
@@ -300,7 +164,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, doublylinkedtree.New())),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -362,7 +226,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, doublylinkedtree.New())),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -392,7 +256,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
cached, err = service.checkpointStateCache.StateByCheckpoint(newCheckpoint)
require.NoError(t, err)
require.DeepSSZEqual(t, returned.InnerStateUnsafe(), cached.InnerStateUnsafe())
require.DeepSSZEqual(t, returned.ToProtoUnsafe(), cached.ToProtoUnsafe())
}
func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
@@ -461,37 +325,6 @@ func TestVerifyBeaconBlock_OK(t *testing.T) {
assert.NoError(t, service.verifyBeaconBlock(ctx, d), "Did not receive the wanted error")
}
func TestVerifyFinalizedConsistency_InconsistentRoot_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 1}))
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
err = service.VerifyFinalizedConsistency(context.Background(), r33[:])
require.ErrorContains(t, "Root and finalized store are not consistent", err)
}
func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -499,7 +332,7 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)

View File

@@ -23,7 +23,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/monitoring/tracing"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
ethpbv1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1/attestation"
@@ -46,52 +45,53 @@ var initialSyncBlockCacheSize = uint64(2 * params.BeaconConfig().SlotsPerEpoch)
// computation in this method and methods it calls into.
//
// Spec pseudocode definition:
// def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
// block = signed_block.message
// # Parent block must be known
// assert block.parent_root in store.block_states
// # Make a copy of the state to avoid mutability issues
// pre_state = copy(store.block_states[block.parent_root])
// # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
// assert get_current_slot(store) >= block.slot
//
// # Check that block is later than the finalized epoch slot (optimization to reduce calls to get_ancestor)
// finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
// assert block.slot > finalized_slot
// # Check block is a descendant of the finalized block at the checkpoint finalized slot
// assert get_ancestor(store, block.parent_root, finalized_slot) == store.finalized_checkpoint.root
// def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
// block = signed_block.message
// # Parent block must be known
// assert block.parent_root in store.block_states
// # Make a copy of the state to avoid mutability issues
// pre_state = copy(store.block_states[block.parent_root])
// # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
// assert get_current_slot(store) >= block.slot
//
// # Check the block is valid and compute the post-state
// state = pre_state.copy()
// state_transition(state, signed_block, True)
// # Add new block to the store
// store.blocks[hash_tree_root(block)] = block
// # Add new state for this block to the store
// store.block_states[hash_tree_root(block)] = state
// # Check that block is later than the finalized epoch slot (optimization to reduce calls to get_ancestor)
// finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
// assert block.slot > finalized_slot
// # Check block is a descendant of the finalized block at the checkpoint finalized slot
// assert get_ancestor(store, block.parent_root, finalized_slot) == store.finalized_checkpoint.root
//
// # Update justified checkpoint
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
// store.best_justified_checkpoint = state.current_justified_checkpoint
// if should_update_justified_checkpoint(store, state.current_justified_checkpoint):
// store.justified_checkpoint = state.current_justified_checkpoint
// # Check the block is valid and compute the post-state
// state = pre_state.copy()
// state_transition(state, signed_block, True)
// # Add new block to the store
// store.blocks[hash_tree_root(block)] = block
// # Add new state for this block to the store
// store.block_states[hash_tree_root(block)] = state
//
// # Update finalized checkpoint
// if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
// store.finalized_checkpoint = state.finalized_checkpoint
// # Update justified checkpoint
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
// store.best_justified_checkpoint = state.current_justified_checkpoint
// if should_update_justified_checkpoint(store, state.current_justified_checkpoint):
// store.justified_checkpoint = state.current_justified_checkpoint
//
// # Potentially update justified if different from store
// if store.justified_checkpoint != state.current_justified_checkpoint:
// # Update justified if new justified is later than store justified
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// store.justified_checkpoint = state.current_justified_checkpoint
// return
// # Update finalized checkpoint
// if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
// store.finalized_checkpoint = state.finalized_checkpoint
//
// # Update justified if store justified is not in chain with finalized checkpoint
// finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
// ancestor_at_finalized_slot = get_ancestor(store, store.justified_checkpoint.root, finalized_slot)
// if ancestor_at_finalized_slot != store.finalized_checkpoint.root:
// store.justified_checkpoint = state.current_justified_checkpoint
// # Potentially update justified if different from store
// if store.justified_checkpoint != state.current_justified_checkpoint:
// # Update justified if new justified is later than store justified
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// store.justified_checkpoint = state.current_justified_checkpoint
// return
//
// # Update justified if store justified is not in chain with finalized checkpoint
// finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
// ancestor_at_finalized_slot = get_ancestor(store, store.justified_checkpoint.root, finalized_slot)
// if ancestor_at_finalized_slot != store.finalized_checkpoint.root:
// store.justified_checkpoint = state.current_justified_checkpoint
func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlock, blockRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlock")
defer span.End()
@@ -146,6 +146,10 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
if err := s.handleBlockAttestations(ctx, signed.Block(), postState); err != nil {
return errors.Wrap(err, "could not handle block's attestations")
}
if err := s.handleBlockBLSToExecChanges(signed.Block()); err != nil {
return errors.Wrap(err, "could not handle block's BLSToExecutionChanges")
}
s.InsertSlashingsToForkChoiceStore(ctx, signed.Block().Body().AttesterSlashings())
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, blockRoot); err != nil {
@@ -278,11 +282,11 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
return nil
}
func getStateVersionAndPayload(st state.BeaconState) (int, *enginev1.ExecutionPayloadHeader, error) {
func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionData, error) {
if st == nil {
return 0, nil, errors.New("nil state")
}
var preStateHeader *enginev1.ExecutionPayloadHeader
var preStateHeader interfaces.ExecutionData
var err error
preStateVersion := st.Version()
switch preStateVersion {
@@ -333,14 +337,10 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
jCheckpoints := make([]*ethpb.Checkpoint, len(blks))
fCheckpoints := make([]*ethpb.Checkpoint, len(blks))
sigSet := &bls.SignatureBatch{
Signatures: [][]byte{},
PublicKeys: []bls.PublicKey{},
Messages: [][32]byte{},
}
sigSet := bls.NewSet()
type versionAndHeader struct {
version int
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
}
preVersionAndHeaders := make([]*versionAndHeader, len(blks))
postVersionAndHeaders := make([]*versionAndHeader, len(blks))
@@ -377,7 +377,13 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
}
sigSet.Join(set)
}
verify, err := sigSet.Verify()
var verify bool
if features.Get().EnableVerboseSigVerification {
verify, err = sigSet.VerifyVerbosely()
} else {
verify, err = sigSet.Verify()
}
if err != nil {
return invalidBlock{error: err}
}
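The change above builds one batch with bls.NewSet and switches to verbose verification behind a feature flag. A minimal sketch of that pattern, assuming the per-block signature batches already exist (building them is elided):

package blockchain

import "github.com/prysmaticlabs/prysm/v3/crypto/bls"

// verifyBatches merges per-block signature batches and verifies them in a
// single pass; the verbose flag stands in for
// features.Get().EnableVerboseSigVerification.
func verifyBatches(perBlockSets []*bls.SignatureBatch, verbose bool) (bool, error) {
	sigSet := bls.NewSet()
	for _, set := range perBlockSets {
		sigSet.Join(set)
	}
	if verbose {
		return sigSet.VerifyVerbosely()
	}
	return sigSet.Verify()
}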
@@ -525,11 +531,7 @@ func (s *Service) insertBlockToForkchoiceStore(ctx context.Context, blk interfac
}
}
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, st, root); err != nil {
return err
}
return nil
return s.cfg.ForkChoiceStore.InsertNode(ctx, st, root)
}
// This feeds in the attestations included in the block to the fork choice store. It allows the fork choice store
@@ -555,6 +557,22 @@ func (s *Service) handleBlockAttestations(ctx context.Context, blk interfaces.Be
return nil
}
func (s *Service) handleBlockBLSToExecChanges(blk interfaces.BeaconBlock) error {
if blk.Version() < version.Capella {
return nil
}
changes, err := blk.Body().BLSToExecutionChanges()
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
for _, change := range changes {
if err := s.cfg.BLSToExecPool.MarkIncluded(change); err != nil {
return errors.Wrap(err, "could not mark BLSToExecutionChange as included")
}
}
return nil
}
// InsertSlashingsToForkChoiceStore inserts attester slashing indices to fork choice store.
// To call this function, it's caller's responsibility to ensure the slashing object is valid.
func (s *Service) InsertSlashingsToForkChoiceStore(ctx context.Context, slashings []*ethpb.AttesterSlashing) {
@@ -607,7 +625,7 @@ func (s *Service) pruneCanonicalAttsFromPool(ctx context.Context, r [32]byte, b
}
// validateMergeTransitionBlock validates the merge transition block.
func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion int, stateHeader *enginev1.ExecutionPayloadHeader, blk interfaces.SignedBeaconBlock) error {
func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion int, stateHeader interfaces.ExecutionData, blk interfaces.SignedBeaconBlock) error {
// Skip validation if block is older than Bellatrix.
if blocks.IsPreBellatrixVersion(blk.Block().Version()) {
return nil
@@ -634,11 +652,7 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
// Skip validation if the block is not a merge transition block.
// To reach here, the payload must be non-empty. If the state header is empty then it's at transition.
wh, err := consensusblocks.WrappedExecutionPayloadHeader(stateHeader)
if err != nil {
return err
}
empty, err := consensusblocks.IsEmptyExecutionData(wh)
empty, err := consensusblocks.IsEmptyExecutionData(stateHeader)
if err != nil {
return err
}
@@ -669,25 +683,8 @@ func (s *Service) fillMissingPayloadIDRoutine(ctx context.Context, stateFeed *ev
for {
select {
case ti := <-ticker.C:
if !atHalfSlot(ti) {
continue
}
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, s.headRoot())
// There exists proposer for next slot, but we haven't called fcu w/ payload attribute yet.
if has && id == [8]byte{} {
headBlock, err := s.headBlock()
if err != nil {
log.WithError(err).Error("Could not get head block")
} else {
if _, err := s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: s.headState(ctx),
headRoot: s.headRoot(),
headBlock: headBlock.Block(),
}); err != nil {
log.WithError(err).Error("Could not prepare payload on empty ID")
}
}
missedPayloadIDFilledCount.Inc()
if err := s.fillMissingBlockPayloadId(ctx, ti); err != nil {
log.WithError(err).Error("Could not fill missing payload ID")
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
@@ -702,3 +699,31 @@ func atHalfSlot(t time.Time) bool {
s := params.BeaconConfig().SecondsPerSlot
return uint64(t.Second())%s == s/2
}
func (s *Service) fillMissingBlockPayloadId(ctx context.Context, ti time.Time) error {
if !atHalfSlot(ti) {
return nil
}
if s.CurrentSlot() == s.cfg.ForkChoiceStore.HighestReceivedBlockSlot() {
return nil
}
// Head root should be empty when retrieving proposer index for the next slot.
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, [32]byte{} /* head root */)
// There exists proposer for next slot, but we haven't called fcu w/ payload attribute yet.
if has && id == [8]byte{} {
missedPayloadIDFilledCount.Inc()
headBlock, err := s.headBlock()
if err != nil {
return err
} else {
if _, err := s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: s.headState(ctx),
headRoot: s.headRoot(),
headBlock: headBlock.Block(),
}); err != nil {
return err
}
}
}
return nil
}
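atHalfSlot gates the routine above so a missing payload ID is only filled halfway into a slot; with mainnet's 12-second slots the check passes exactly at second 6. A tiny self-contained check (12 is an assumption standing in for params.BeaconConfig().SecondsPerSlot):

package main

import "fmt"

func main() {
	secondsPerSlot := uint64(12) // mainnet assumption
	for _, sec := range []uint64{0, 5, 6, 7, 18} {
		// Mirrors atHalfSlot: true only at the midpoint of a slot.
		fmt.Println(sec, sec%secondsPerSlot == secondsPerSlot/2)
	}
}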

View File

@@ -6,9 +6,7 @@ import (
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/params"
@@ -70,6 +68,7 @@ func (s *Service) verifyBlkPreState(ctx context.Context, b interfaces.BeaconBloc
// during initial syncing. There's no risk given a state summary object is just
// a subset of the block object.
if !s.cfg.BeaconDB.HasStateSummary(ctx, parentRoot) && !s.cfg.BeaconDB.HasBlock(ctx, parentRoot) {
log.Errorf("requesting blockroot %#x", bytesutil.Trunc(parentRoot[:]))
return errors.New("could not reconstruct parent state")
}
@@ -163,7 +162,7 @@ func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) err
fRoot := bytesutil.ToBytes32(cp.Root)
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(fRoot)
if err != nil && err != protoarray.ErrUnknownNodeRoot && err != doublylinkedtree.ErrNilNode {
if err != nil && !errors.Is(err, doublylinkedtree.ErrNilNode) {
return err
}
if !optimistic {
@@ -172,24 +171,30 @@ func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) err
return err
}
}
if err := s.cfg.StateGen.MigrateToCold(ctx, fRoot); err != nil {
return errors.Wrap(err, "could not migrate to cold")
}
go func() {
// We do not pass in the parent context from the method as this method call
// is meant to be asynchronous and run in the background rather than being
// tied to the execution of a block.
if err := s.cfg.StateGen.MigrateToCold(s.ctx, fRoot); err != nil {
log.WithError(err).Error("could not migrate to cold")
}
}()
return nil
}
// ancestor returns the block root of an ancestry block from the input block root.
//
// Spec pseudocode definition:
// def get_ancestor(store: Store, root: Root, slot: Slot) -> Root:
// block = store.blocks[root]
// if block.slot > slot:
// return get_ancestor(store, block.parent_root, slot)
// elif block.slot == slot:
// return root
// else:
// # root is older than queried slot, thus a skip slot. Return most recent root prior to slot
// return root
//
// def get_ancestor(store: Store, root: Root, slot: Slot) -> Root:
// block = store.blocks[root]
// if block.slot > slot:
// return get_ancestor(store, block.parent_root, slot)
// elif block.slot == slot:
// return root
// else:
// # root is older than queried slot, thus a skip slot. Return most recent root prior to slot
// return root
func (s *Service) ancestor(ctx context.Context, root []byte, slot types.Slot) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.ancestor")
defer span.End()
@@ -312,23 +317,6 @@ func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) e
return nil
}
// The deletes input attestations from the attestation pool, so proposers don't include them in a block for the future.
func (s *Service) deletePoolAtts(atts []*ethpb.Attestation) error {
for _, att := range atts {
if helpers.IsAggregated(att) {
if err := s.cfg.AttPool.DeleteAggregatedAttestation(att); err != nil {
return err
}
} else {
if err := s.cfg.AttPool.DeleteUnaggregatedAttestation(att); err != nil {
return err
}
}
}
return nil
}
// This ensures that the input root defaults to using genesis root instead of zero hashes. This is needed for handling
// fork choice justification routine.
func (s *Service) ensureRootNotZeros(root [32]byte) [32]byte {

File diff suppressed because it is too large

View File

@@ -135,12 +135,6 @@ func (s *Service) spawnProcessAttestationsRoutine(stateFeed *event.Feed) {
// UpdateHead updates the canonical head of the chain based on information from fork-choice attestations and votes.
// It requires no external inputs.
func (s *Service) UpdateHead(ctx context.Context) error {
// Continue when there's no fork choice attestation, there's nothing to process and update head.
// This covers the condition when the node is still initial syncing to the head of the chain.
if s.cfg.AttPool.ForkchoiceAttestationCount() == 0 {
return nil
}
// Only one process can process attestations and update head at a time.
s.processAttestationsLock.Lock()
defer s.processAttestationsLock.Unlock()
@@ -157,7 +151,7 @@ func (s *Service) UpdateHead(ctx context.Context) error {
start = time.Now()
newHeadRoot, err := s.cfg.ForkChoiceStore.Head(ctx, balances)
if err != nil {
log.WithError(err).Warn("Resolving fork due to new attestation")
log.WithError(err).Error("Could not compute head from new attestations")
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))

View File

@@ -159,7 +159,6 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
require.NoError(t, service.saveInitSyncBlock(ctx, r1, wsb))
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
slot: 1,
root: r1,
block: wsb,
state: st,
@@ -177,7 +176,6 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
require.NoError(t, err)
st, _ = util.DeterministicGenesisState(t, 1)
service.head = &head{
slot: 1,
root: r1,
block: wsb,
state: st,
@@ -226,11 +224,11 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
// Generate attestatios for this block in Slot 1
// Generate attestations for this block in Slot 1
atts, err := util.GenerateAttestations(copied, pks, 1, 1, false)
require.NoError(t, err)
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
// Verify the target is in forchoice
// Verify the target is in forkchoice
require.Equal(t, true, fcs.HasNode(bytesutil.ToBytes32(atts[0].Data.BeaconBlockRoot)))
// Insert a new block to forkchoice
@@ -253,3 +251,52 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations())) // Validate att pool is empty
require.Equal(t, tRoot, service.head.root) // Validate head is the new one
}
func TestService_UpdateHead_NoAtts(t *testing.T) {
ctx := context.Background()
opts := testServiceOptsWithDB(t)
fcs := doublylinkedtree.New()
opts = append(opts,
WithAttestationPool(attestations.NewPool()),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fcs),
)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.genesisTime = prysmTime.Now().Add(-2 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, service.saveGenesisData(ctx, genesisState))
copied := genesisState.Copy()
// Generate a new block
blk, err := util.GenerateFullBlock(copied, pks, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
tRoot, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, tRoot))
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.Equal(t, tRoot, service.head.root)
// Insert a new block to forkchoice
ojc := &ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}
b, err := util.GenerateFullBlock(genesisState, pks, util.DefaultBlockGenConfig(), 2)
require.NoError(t, err)
b.Block.ParentRoot = service.originBlockRoot[:]
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.Equal(t, 3, fcs.NodeCount())
require.Equal(t, 0, service.cfg.AttPool.ForkchoiceAttestationCount())
require.NoError(t, service.UpdateHead(ctx))
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations())) // Validate att pool is empty
require.Equal(t, r, service.head.root) // Validate head is the new one
}

View File

@@ -32,9 +32,9 @@ type SlashingReceiver interface {
// ReceiveBlock is a function that defines the operations (minus pubsub)
// that are performed on a received block. The operations consist of:
// 1. Validate block, apply state transition and update checkpoints
// 2. Apply fork choice to the processed block
// 3. Save latest head info
// 1. Validate block, apply state transition and update checkpoints
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, blockRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlock")
defer span.End()
@@ -145,11 +145,6 @@ func (s *Service) ReceiveAttesterSlashing(ctx context.Context, slashing *ethpb.A
}
func (s *Service) handlePostBlockOperations(b interfaces.BeaconBlock) error {
// Delete the processed block attestations from attestation pool.
if err := s.deletePoolAtts(b.Body().Attestations()); err != nil {
return err
}
// Mark block exits as seen so we don't include same ones in future blocks.
for _, e := range b.Body().VoluntaryExits() {
s.cfg.ExitPool.MarkIncluded(e)

View File

@@ -8,7 +8,7 @@ import (
blockchainTesting "github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain/testing"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
@@ -33,7 +33,7 @@ func TestService_ReceiveBlock(t *testing.T) {
assert.NoError(t, err)
return blk
}
params.SetupTestConfigCleanupWithLock(t)
//params.SetupTestConfigCleanupWithLock(t)
bc := params.BeaconConfig().Copy()
bc.ShardCommitteePeriod = 0 // Required for voluntary exits test in reasonable time.
params.OverrideBeaconConfig(bc)
@@ -127,13 +127,14 @@ func TestService_ReceiveBlock(t *testing.T) {
genesisBlockRoot := bytesutil.ToBytes32(nil)
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New()),
WithForkChoiceStore(fc),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fc)),
WithFinalizedStateAtStartUp(genesis),
}
s, err := NewService(ctx, opts...)
@@ -166,13 +167,14 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
beaconDB := testDB.SetupDB(t)
genesisBlockRoot := bytesutil.ToBytes32(nil)
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New()),
WithForkChoiceStore(fc),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fc)),
}
s, err := NewService(ctx, opts...)
@@ -242,12 +244,13 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fc := doublylinkedtree.New()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New()),
WithForkChoiceStore(fc),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fc)),
}
s, err := NewService(ctx, opts...)
require.NoError(t, err)

View File

@@ -23,6 +23,7 @@ import (
f "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/slashings"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p"
@@ -70,6 +71,7 @@ type config struct {
AttPool attestations.Pool
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2p p2p.Broadcaster
MaxRoutines int
StateNotifier statefeed.Notifier

View File

@@ -21,7 +21,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
mockExecution "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
@@ -84,7 +83,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
srv.Stop()
})
bState, _ := util.DeterministicGenesisState(t, 10)
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.InnerStateUnsafe())
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.ToProtoUnsafe())
require.NoError(t, err)
mockTrie, err := trie.NewTrie(0)
require.NoError(t, err)
@@ -118,7 +117,8 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
depositCache, err := depositcache.New()
require.NoError(t, err)
stateGen := stategen.New(beaconDB)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
// Save a state in stategen for purposes of testing a service stop / shutdown.
require.NoError(t, stateGen.SaveState(ctx, bytesutil.ToBytes32(bState.FinalizedCheckpoint().Root), bState))
@@ -129,7 +129,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(doublylinkedtree.New()),
WithForkChoiceStore(fc),
WithAttestationService(attService),
WithStateGen(stateGen),
}
@@ -306,9 +306,10 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
stateGen := stategen.New(beaconDB)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
c, err := NewService(ctx,
WithForkChoiceStore(doublylinkedtree.New()),
WithForkChoiceStore(fc),
WithDatabase(beaconDB),
WithStateGen(stateGen),
WithAttestationService(attSrv),
@@ -324,7 +325,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
assert.DeepEqual(t, headBlock, pb, "Head block incorrect")
s, err := c.HeadState(ctx)
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.InnerStateUnsafe(), s.InnerStateUnsafe(), "Head state incorrect")
assert.DeepSSZEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Head state incorrect")
assert.Equal(t, c.HeadSlot(), headBlock.Block.Slot, "Head slot incorrect")
r, err := c.HeadRoot(context.Background())
require.NoError(t, err)
@@ -365,9 +366,10 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
}
require.NoError(t, beaconDB.SaveStateSummary(ctx, ss))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: headRoot[:], Epoch: slots.ToEpoch(finalizedSlot)}))
stateGen := stategen.New(beaconDB)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
c, err := NewService(ctx,
WithForkChoiceStore(doublylinkedtree.New()),
WithForkChoiceStore(fc),
WithDatabase(beaconDB),
WithStateGen(stateGen),
WithAttestationService(attSrv),
@@ -378,7 +380,7 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
require.NoError(t, c.StartFromSavedState(headState))
s, err := c.HeadState(ctx)
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.InnerStateUnsafe(), s.InnerStateUnsafe(), "Head state incorrect")
assert.DeepSSZEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Head state incorrect")
assert.Equal(t, genesisRoot, c.originBlockRoot, "Genesis block root incorrect")
pb, err := c.head.block.Proto()
require.NoError(t, err)
@@ -388,8 +390,9 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
func TestChainService_SaveHeadNoDB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
fc := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB), ForkChoiceStore: doublylinkedtree.New()},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, fc), ForkChoiceStore: fc},
}
blk := util.NewBeaconBlock()
blk.Block.Slot = 1
@@ -409,25 +412,6 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
}
}
func TestHasBlock_ForkChoiceAndDB_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: protoarray.New(), BeaconDB: beaconDB},
}
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, s.insertBlockToForkchoiceStore(ctx, wsb.Block(), r, beaconState))
assert.Equal(t, false, s.hasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.hasBlock(ctx, r), "Should have block")
}
func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -451,7 +435,7 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
ctx: ctx,
cancel: cancel,
initSyncBlocks: make(map[[32]byte]interfaces.SignedBeaconBlock),
@@ -499,27 +483,6 @@ func BenchmarkHasBlockDB(b *testing.B) {
}
}
func BenchmarkHasBlockForkChoiceStore_ProtoArray(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: protoarray.New(), BeaconDB: beaconDB},
}
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
bs := &ethpb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}}
beaconState, err := state_native.InitializeFromProtoPhase0(bs)
require.NoError(b, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(b, err)
require.NoError(b, s.insertBlockToForkchoiceStore(ctx, wsb.Block(), r, beaconState))
b.ResetTimer()
for i := 0; i < b.N; i++ {
require.Equal(b, true, s.cfg.ForkChoiceStore.HasNode(r), "Block is not in fork choice store")
}
}
func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
@@ -572,9 +535,10 @@ func TestChainService_EverythingOptimistic(t *testing.T) {
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
stateGen := stategen.New(beaconDB)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
c, err := NewService(ctx,
WithForkChoiceStore(doublylinkedtree.New()),
WithForkChoiceStore(fc),
WithDatabase(beaconDB),
WithStateGen(stateGen),
WithAttestationService(attSrv),

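The recurring change in this test file is that the stategen constructor now takes the fork-choice store, and the same store instance is then handed to the chain service. A condensed, illustrative fragment of that wiring (mirroring the lines above, not a new API):

fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
c, err := NewService(ctx,
	WithForkChoiceStore(fc),
	WithDatabase(beaconDB),
	WithStateGen(stateGen),
	WithAttestationService(attSrv),
)
require.NoError(t, err)

Sharing one store presumably keeps stategen and the service operating on the same fork-choice view instead of each constructing its own.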
View File

@@ -6,7 +6,7 @@ import (
"github.com/pkg/errors"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
@@ -72,7 +72,7 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
wv, err := NewWeakSubjectivityVerifier(tt.checkpt, beaconDB)
require.Equal(t, !tt.disabled, wv.enabled)
require.NoError(t, err)
fcs := protoarray.New()
fcs := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, WeakSubjectivityCheckpt: tt.checkpt, ForkChoiceStore: fcs},
wsVerifier: wv,

View File

@@ -20,13 +20,6 @@ var (
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
getStatusLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_status_latency_milliseconds",
Help: "Captures RPC latency for get status in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
registerValidatorLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "register_validator_latency_milliseconds",

View File

@@ -72,7 +72,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
}
// Start initializes the service.
func (*Service) Start() {}
func (s *Service) Start() {
go s.pollRelayerStatus(s.ctx)
}
// Stop halts the service.
func (*Service) Stop() error {
@@ -105,19 +107,12 @@ func (s *Service) GetHeader(ctx context.Context, slot types.Slot, parentHash [32
// Status retrieves the status of the builder relay network.
func (s *Service) Status() error {
ctx, span := trace.StartSpan(context.Background(), "builder.Status")
defer span.End()
start := time.Now()
defer func() {
getStatusLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
// Return early if builder isn't initialized in service.
if s.c == nil {
return nil
}
return s.c.Status(ctx)
return nil
}
// RegisterValidator registers a validator with the builder relay network.
@@ -157,3 +152,20 @@ func (s *Service) RegisterValidator(ctx context.Context, reg []*ethpb.SignedVali
func (s *Service) Configured() bool {
return s.c != nil && !reflect.ValueOf(s.c).IsNil()
}
func (s *Service) pollRelayerStatus(ctx context.Context) {
ticker := time.NewTicker(time.Minute)
defer ticker.Stop()
for {
select {
case <-ticker.C:
if s.c != nil {
if err := s.c.Status(ctx); err != nil {
log.WithError(err).Error("Failed to call relayer status endpoint, perhaps mev-boost or relayers are down")
}
}
case <-ctx.Done():
return
}
}
}

View File

@@ -1,8 +1,6 @@
package cache
import (
"sync"
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
@@ -32,7 +30,6 @@ var (
// CheckpointStateCache is a struct with 1 queue for looking up state by checkpoint.
type CheckpointStateCache struct {
cache *lru.Cache
lock sync.RWMutex
}
// NewCheckpointStateCache creates a new checkpoint state cache for storing/accessing processed state.
@@ -45,8 +42,6 @@ func NewCheckpointStateCache() *CheckpointStateCache {
// StateByCheckpoint fetches state by checkpoint. Returns true with a
// reference to the CheckpointState info, if exists. Otherwise returns false, nil.
func (c *CheckpointStateCache) StateByCheckpoint(cp *ethpb.Checkpoint) (state.BeaconState, error) {
c.lock.RLock()
defer c.lock.RUnlock()
h, err := hash.HashProto(cp)
if err != nil {
return nil, err
@@ -67,8 +62,6 @@ func (c *CheckpointStateCache) StateByCheckpoint(cp *ethpb.Checkpoint) (state.Be
// AddCheckpointState adds a CheckpointState object to the cache. This method also trims the least
// recently added CheckpointState object if the cache size has reached the max cache size limit.
func (c *CheckpointStateCache) AddCheckpointState(cp *ethpb.Checkpoint, s state.ReadOnlyBeaconState) error {
c.lock.Lock()
defer c.lock.Unlock()
h, err := hash.HashProto(cp)
if err != nil {
return err

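The dropped sync.RWMutex appears redundant because hashicorp/golang-lru's Cache already synchronizes access to its internal state. A minimal, self-contained sketch (assuming the golang-lru v1 API) of concurrent use without an outer lock:

package main

import (
	"fmt"
	"sync"

	lru "github.com/hashicorp/golang-lru"
)

func main() {
	c, err := lru.New(8) // LRU cache with its own internal locking
	if err != nil {
		panic(err)
	}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c.Add(i, i*i) // concurrent Add/Get are safe without a wrapper mutex
			if v, ok := c.Get(i); ok {
				fmt.Println(i, v)
			}
		}(i)
	}
	wg.Wait()
}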
View File

@@ -33,9 +33,9 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
s, err = cache.StateByCheckpoint(cp1)
require.NoError(t, err)
pbState1, err := state_native.ProtobufBeaconStatePhase0(s.InnerStateUnsafe())
pbState1, err := state_native.ProtobufBeaconStatePhase0(s.ToProtoUnsafe())
require.NoError(t, err)
pbstate, err := state_native.ProtobufBeaconStatePhase0(st.InnerStateUnsafe())
pbstate, err := state_native.ProtobufBeaconStatePhase0(st.ToProtoUnsafe())
require.NoError(t, err)
if !proto.Equal(pbState1, pbstate) {
t.Error("incorrectly cached state")
@@ -50,11 +50,11 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
s, err = cache.StateByCheckpoint(cp2)
require.NoError(t, err)
assert.DeepEqual(t, st2.CloneInnerState(), s.CloneInnerState(), "incorrectly cached state")
assert.DeepEqual(t, st2.ToProto(), s.ToProto(), "incorrectly cached state")
s, err = cache.StateByCheckpoint(cp1)
require.NoError(t, err)
assert.DeepEqual(t, st.CloneInnerState(), s.CloneInnerState(), "incorrectly cached state")
assert.DeepEqual(t, st.ToProto(), s.ToProto(), "incorrectly cached state")
}
func TestCheckpointStateCache_MaxSize(t *testing.T) {

View File

@@ -0,0 +1,8 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["merkle_tree.go"],
importpath = "github.com/prysmaticlabs/prysm/v3/beacon-chain/cache/depositsnapshot",
visibility = ["//visibility:public"],
)

View File

@@ -0,0 +1,19 @@
package depositsnapshot
const (
DepositContractDepth = 32 // Maximum tree depth as defined by EIP-4881.
)
// MerkleTreeNode is the interface for a Merkle tree.
type MerkleTreeNode interface {
// GetRoot returns the root of the Merkle tree.
GetRoot() [32]byte
// IsFull returns whether there is space left for deposits.
IsFull() bool
// Finalize marks deposits of the Merkle tree as finalized.
Finalize(deposits uint, depth uint) MerkleTreeNode
// GetFinalized returns a list of hashes of all the finalized nodes and the number of deposits.
GetFinalized(result [][32]byte) ([][32]byte, uint)
// PushLeaf adds a new leaf node at the next available Zero node.
PushLeaf(leaf [32]byte, deposits uint, depth uint) MerkleTreeNode
}

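This commit only introduces the interface. Purely for illustration, a hypothetical leaf node satisfying it in the same depositsnapshot package might look like the sketch below; the real EIP-4881 tree also needs zero, inner, and finalized node types.

package depositsnapshot

// leafNode is an illustrative single-deposit node; it is not part of this commit.
type leafNode struct {
	hash [32]byte
}

func (l *leafNode) GetRoot() [32]byte { return l.hash }

// A leaf holds exactly one deposit, so it is always full.
func (l *leafNode) IsFull() bool { return true }

// Finalize on a single leaf has nothing further to mark, so the node is returned as-is.
func (l *leafNode) Finalize(deposits uint, depth uint) MerkleTreeNode { return l }

func (l *leafNode) GetFinalized(result [][32]byte) ([][32]byte, uint) {
	return result, 0
}

// PushLeaf cannot place anything below a full leaf in this sketch.
func (l *leafNode) PushLeaf(leaf [32]byte, deposits uint, depth uint) MerkleTreeNode { return l }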
View File

@@ -57,7 +57,7 @@ func (f *ProposerPayloadIDsCache) SetProposerAndPayloadIDs(slot types.Slot, vId
ids, ok := f.slotToProposerAndPayloadIDs[k]
// Ok to overwrite if the slot is already set but the cached payload ID is not set.
// This combats the re-org case where payload assignment could change at the start of the epoch.
byte8 := [vIdLength]byte{}
var byte8 [vIdLength]byte
if !ok || (ok && bytes.Equal(ids[vIdLength:], byte8[:])) {
f.slotToProposerAndPayloadIDs[k] = bs
}

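Both spellings yield an all-zero array; `var byte8 [vIdLength]byte` is simply the idiomatic zero-value declaration. A tiny standalone illustration (the constant value here is hypothetical):

package main

import "fmt"

const vIdLength = 8 // hypothetical width, standing in for the cache's constant

func main() {
	var a [vIdLength]byte  // zero-value declaration, as in the diff above
	b := [vIdLength]byte{} // empty composite literal; the same zero value
	fmt.Println(a == b)    // true: Go arrays compare element-wise
}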
View File

@@ -9,7 +9,7 @@ import (
func TestValidatorPayloadIDsCache_GetAndSaveValidatorPayloadIDs(t *testing.T) {
cache := NewProposerPayloadIDsCache()
r := [32]byte{}
var r [32]byte
i, p, ok := cache.GetProposerPayloadIDs(0, r)
require.Equal(t, false, ok)
require.Equal(t, types.ValidatorIndex(0), i)

View File

@@ -33,5 +33,5 @@ func TestSkipSlotCache_RoundTrip(t *testing.T) {
res, err := c.Get(ctx, r)
require.NoError(t, err)
assert.DeepEqual(t, res.CloneInnerState(), s.CloneInnerState(), "Expected equal protos to return from cache")
assert.DeepEqual(t, res.ToProto(), s.ToProto(), "Expected equal protos to return from cache")
}

View File

@@ -52,9 +52,8 @@ func (c *SyncCommitteeHeadStateCache) Get(slot types.Slot) (state.BeaconState, e
if !ok {
return nil, ErrIncorrectType
}
switch st.Version() {
case version.Altair, version.Bellatrix:
default:
// Sync committee is not supported in phase 0.
if st.Version() == version.Phase0 {
return nil, ErrIncorrectType
}
return st, nil

View File

@@ -33,6 +33,13 @@ func TestSyncCommitteeHeadState(t *testing.T) {
},
})
require.NoError(t, err)
capellaState, err := state_native.InitializeFromProtoCapella(&ethpb.BeaconStateCapella{
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
type put struct {
slot types.Slot
state state.BeaconState
@@ -106,6 +113,15 @@ func TestSyncCommitteeHeadState(t *testing.T) {
},
want: bellatrixState,
},
{
name: "found with key (capella state)",
key: types.Slot(200),
put: &put{
slot: types.Slot(200),
state: capellaState,
},
want: capellaState,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {

View File

@@ -14,13 +14,7 @@ go_library(
"upgrade.go",
],
importpath = "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/altair",
visibility = [
"//beacon-chain:__subpackages__",
"//testing/endtoend/evaluators:__subpackages__",
"//testing/spectest:__subpackages__",
"//testing/util:__pkg__",
"//validator/client:__pkg__",
],
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
@@ -58,6 +52,7 @@ go_test(
"deposit_test.go",
"epoch_precompute_test.go",
"epoch_spec_test.go",
"exports_test.go",
"reward_test.go",
"sync_committee_test.go",
"transition_test.go",

View File

@@ -82,23 +82,24 @@ func ProcessAttestationNoVerifySignature(
// the proposer in state.
//
// Spec code:
// # Update epoch participation flags
// if data.target.epoch == get_current_epoch(state):
// epoch_participation = state.current_epoch_participation
// else:
// epoch_participation = state.previous_epoch_participation
//
// proposer_reward_numerator = 0
// for index in get_attesting_indices(state, data, attestation.aggregation_bits):
// for flag_index, weight in enumerate(PARTICIPATION_FLAG_WEIGHTS):
// if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
// epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
// proposer_reward_numerator += get_base_reward(state, index) * weight
// # Update epoch participation flags
// if data.target.epoch == get_current_epoch(state):
// epoch_participation = state.current_epoch_participation
// else:
// epoch_participation = state.previous_epoch_participation
//
// # Reward proposer
// proposer_reward_denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
// proposer_reward = Gwei(proposer_reward_numerator // proposer_reward_denominator)
// increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
// proposer_reward_numerator = 0
// for index in get_attesting_indices(state, data, attestation.aggregation_bits):
// for flag_index, weight in enumerate(PARTICIPATION_FLAG_WEIGHTS):
// if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
// epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
// proposer_reward_numerator += get_base_reward(state, index) * weight
//
// # Reward proposer
// proposer_reward_denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
// proposer_reward = Gwei(proposer_reward_numerator // proposer_reward_denominator)
// increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
func SetParticipationAndRewardProposer(
ctx context.Context,
beaconState state.BeaconState,
@@ -157,12 +158,13 @@ func AddValidatorFlag(flag, flagPosition uint8) (uint8, error) {
// EpochParticipation sets and returns the proposer reward numerator and epoch participation.
//
// Spec code:
// proposer_reward_numerator = 0
// for index in get_attesting_indices(state, data, attestation.aggregation_bits):
// for flag_index, weight in enumerate(PARTICIPATION_FLAG_WEIGHTS):
// if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
// epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
// proposer_reward_numerator += get_base_reward(state, index) * weight
//
// proposer_reward_numerator = 0
// for index in get_attesting_indices(state, data, attestation.aggregation_bits):
// for flag_index, weight in enumerate(PARTICIPATION_FLAG_WEIGHTS):
// if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
// epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
// proposer_reward_numerator += get_base_reward(state, index) * weight
func EpochParticipation(beaconState state.BeaconState, indices []uint64, epochParticipation []byte, participatedFlags map[uint8]bool, totalBalance uint64) (uint64, []byte, error) {
cfg := params.BeaconConfig()
sourceFlagIndex := cfg.TimelySourceFlagIndex
@@ -218,9 +220,10 @@ func EpochParticipation(beaconState state.BeaconState, indices []uint64, epochPa
// RewardProposer rewards proposer by increasing proposer's balance with input reward numerator and calculated reward denominator.
//
// Spec code:
// proposer_reward_denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
// proposer_reward = Gwei(proposer_reward_numerator // proposer_reward_denominator)
// increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
//
// proposer_reward_denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
// proposer_reward = Gwei(proposer_reward_numerator // proposer_reward_denominator)
// increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
func RewardProposer(ctx context.Context, beaconState state.BeaconState, proposerRewardNumerator uint64) error {
cfg := params.BeaconConfig()
d := (cfg.WeightDenominator - cfg.ProposerWeight) * cfg.WeightDenominator / cfg.ProposerWeight
@@ -238,31 +241,32 @@ func RewardProposer(ctx context.Context, beaconState state.BeaconState, proposer
//
// Spec code:
// def get_attestation_participation_flag_indices(state: BeaconState,
// data: AttestationData,
// inclusion_delay: uint64) -> Sequence[int]:
// """
// Return the flag indices that are satisfied by an attestation.
// """
// if data.target.epoch == get_current_epoch(state):
// justified_checkpoint = state.current_justified_checkpoint
// else:
// justified_checkpoint = state.previous_justified_checkpoint
//
// # Matching roots
// is_matching_source = data.source == justified_checkpoint
// is_matching_target = is_matching_source and data.target.root == get_block_root(state, data.target.epoch)
// is_matching_head = is_matching_target and data.beacon_block_root == get_block_root_at_slot(state, data.slot)
// assert is_matching_source
// data: AttestationData,
// inclusion_delay: uint64) -> Sequence[int]:
// """
// Return the flag indices that are satisfied by an attestation.
// """
// if data.target.epoch == get_current_epoch(state):
// justified_checkpoint = state.current_justified_checkpoint
// else:
// justified_checkpoint = state.previous_justified_checkpoint
//
// participation_flag_indices = []
// if is_matching_source and inclusion_delay <= integer_squareroot(SLOTS_PER_EPOCH):
// participation_flag_indices.append(TIMELY_SOURCE_FLAG_INDEX)
// if is_matching_target and inclusion_delay <= SLOTS_PER_EPOCH:
// participation_flag_indices.append(TIMELY_TARGET_FLAG_INDEX)
// if is_matching_head and inclusion_delay == MIN_ATTESTATION_INCLUSION_DELAY:
// participation_flag_indices.append(TIMELY_HEAD_FLAG_INDEX)
// # Matching roots
// is_matching_source = data.source == justified_checkpoint
// is_matching_target = is_matching_source and data.target.root == get_block_root(state, data.target.epoch)
// is_matching_head = is_matching_target and data.beacon_block_root == get_block_root_at_slot(state, data.slot)
// assert is_matching_source
//
// return participation_flag_indices
// participation_flag_indices = []
// if is_matching_source and inclusion_delay <= integer_squareroot(SLOTS_PER_EPOCH):
// participation_flag_indices.append(TIMELY_SOURCE_FLAG_INDEX)
// if is_matching_target and inclusion_delay <= SLOTS_PER_EPOCH:
// participation_flag_indices.append(TIMELY_TARGET_FLAG_INDEX)
// if is_matching_head and inclusion_delay == MIN_ATTESTATION_INCLUSION_DELAY:
// participation_flag_indices.append(TIMELY_HEAD_FLAG_INDEX)
//
// return participation_flag_indices
func AttestationParticipationFlagIndices(beaconState state.BeaconState, data *ethpb.AttestationData, delay types.Slot) (map[uint8]bool, error) {
currEpoch := time.CurrentEpoch(beaconState)
var justifiedCheckpt *ethpb.Checkpoint
@@ -304,9 +308,10 @@ func AttestationParticipationFlagIndices(beaconState state.BeaconState, data *et
// MatchingStatus returns the matching statuses for attestation data's source, target, and head.
//
// Spec code:
// is_matching_source = data.source == justified_checkpoint
// is_matching_target = is_matching_source and data.target.root == get_block_root(state, data.target.epoch)
// is_matching_head = is_matching_target and data.beacon_block_root == get_block_root_at_slot(state, data.slot)
//
// is_matching_source = data.source == justified_checkpoint
// is_matching_target = is_matching_source and data.target.root == get_block_root(state, data.target.epoch)
// is_matching_head = is_matching_target and data.beacon_block_root == get_block_root_at_slot(state, data.slot)
func MatchingStatus(beaconState state.BeaconState, data *ethpb.AttestationData, cp *ethpb.Checkpoint) (matchedSrc, matchedTgt, matchedHead bool, err error) {
matchedSrc = attestation.CheckPointIsEqual(data.Source, cp)

View File

@@ -256,7 +256,7 @@ func TestProcessAttestationNoVerify_SourceTargetHead(t *testing.T) {
},
AggregationBits: aggBits,
}
zeroSig := [96]byte{}
var zeroSig [96]byte
att.Signature = zeroSig[:]
ckp := beaconState.CurrentJustifiedCheckpoint()

View File

@@ -9,7 +9,6 @@ import (
p2pType "github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
@@ -20,32 +19,33 @@ import (
//
// Spec code:
// def process_sync_aggregate(state: BeaconState, sync_aggregate: SyncAggregate) -> None:
// # Verify sync committee aggregate signature signing over the previous slot block root
// committee_pubkeys = state.current_sync_committee.pubkeys
// participant_pubkeys = [pubkey for pubkey, bit in zip(committee_pubkeys, sync_aggregate.sync_committee_bits) if bit]
// previous_slot = max(state.slot, Slot(1)) - Slot(1)
// domain = get_domain(state, DOMAIN_SYNC_COMMITTEE, compute_epoch_at_slot(previous_slot))
// signing_root = compute_signing_root(get_block_root_at_slot(state, previous_slot), domain)
// assert eth2_fast_aggregate_verify(participant_pubkeys, signing_root, sync_aggregate.sync_committee_signature)
//
// # Compute participant and proposer rewards
// total_active_increments = get_total_active_balance(state) // EFFECTIVE_BALANCE_INCREMENT
// total_base_rewards = Gwei(get_base_reward_per_increment(state) * total_active_increments)
// max_participant_rewards = Gwei(total_base_rewards * SYNC_REWARD_WEIGHT // WEIGHT_DENOMINATOR // SLOTS_PER_EPOCH)
// participant_reward = Gwei(max_participant_rewards // SYNC_COMMITTEE_SIZE)
// proposer_reward = Gwei(participant_reward * PROPOSER_WEIGHT // (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT))
// # Verify sync committee aggregate signature signing over the previous slot block root
// committee_pubkeys = state.current_sync_committee.pubkeys
// participant_pubkeys = [pubkey for pubkey, bit in zip(committee_pubkeys, sync_aggregate.sync_committee_bits) if bit]
// previous_slot = max(state.slot, Slot(1)) - Slot(1)
// domain = get_domain(state, DOMAIN_SYNC_COMMITTEE, compute_epoch_at_slot(previous_slot))
// signing_root = compute_signing_root(get_block_root_at_slot(state, previous_slot), domain)
// assert eth2_fast_aggregate_verify(participant_pubkeys, signing_root, sync_aggregate.sync_committee_signature)
//
// # Apply participant and proposer rewards
// all_pubkeys = [v.pubkey for v in state.validators]
// committee_indices = [ValidatorIndex(all_pubkeys.index(pubkey)) for pubkey in state.current_sync_committee.pubkeys]
// for participant_index, participation_bit in zip(committee_indices, sync_aggregate.sync_committee_bits):
// if participation_bit:
// increase_balance(state, participant_index, participant_reward)
// increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
// else:
// decrease_balance(state, participant_index, participant_reward)
// # Compute participant and proposer rewards
// total_active_increments = get_total_active_balance(state) // EFFECTIVE_BALANCE_INCREMENT
// total_base_rewards = Gwei(get_base_reward_per_increment(state) * total_active_increments)
// max_participant_rewards = Gwei(total_base_rewards * SYNC_REWARD_WEIGHT // WEIGHT_DENOMINATOR // SLOTS_PER_EPOCH)
// participant_reward = Gwei(max_participant_rewards // SYNC_COMMITTEE_SIZE)
// proposer_reward = Gwei(participant_reward * PROPOSER_WEIGHT // (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT))
//
// # Apply participant and proposer rewards
// all_pubkeys = [v.pubkey for v in state.validators]
// committee_indices = [ValidatorIndex(all_pubkeys.index(pubkey)) for pubkey in state.current_sync_committee.pubkeys]
// for participant_index, participation_bit in zip(committee_indices, sync_aggregate.sync_committee_bits):
// if participation_bit:
// increase_balance(state, participant_index, participant_reward)
// increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
// else:
// decrease_balance(state, participant_index, participant_reward)
func ProcessSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.SyncAggregate) (state.BeaconState, error) {
votedKeys, votedIndices, didntVoteIndices, err := FilterSyncCommitteeVotes(s, sync)
s, votedKeys, err := processSyncAggregate(ctx, s, sync)
if err != nil {
return nil, errors.Wrap(err, "could not filter sync committee votes")
}
@@ -53,50 +53,70 @@ func ProcessSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.
if err := VerifySyncCommitteeSig(s, votedKeys, sync.SyncCommitteeSignature); err != nil {
return nil, errors.Wrap(err, "could not verify sync committee signature")
}
return ApplySyncRewardsPenalties(ctx, s, votedIndices, didntVoteIndices)
return s, nil
}
// FilterSyncCommitteeVotes filters the validator public keys and indices for the ones that voted and didn't vote.
func FilterSyncCommitteeVotes(s state.BeaconState, sync *ethpb.SyncAggregate) (
votedKeys []bls.PublicKey,
votedIndices []types.ValidatorIndex,
didntVoteIndices []types.ValidatorIndex,
err error) {
// processSyncAggregate applies all the logic in the spec function `process_sync_aggregate` except
// verifying the BLS signatures. It returns the modified beacon state and the list of validators'
// public keys that voted, for future signature verification.
func processSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.SyncAggregate) (
state.BeaconState,
[]bls.PublicKey,
error) {
currentSyncCommittee, err := s.CurrentSyncCommittee()
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
if currentSyncCommittee == nil {
return nil, nil, nil, errors.New("nil current sync committee in state")
return nil, nil, errors.New("nil current sync committee in state")
}
committeeKeys := currentSyncCommittee.Pubkeys
if sync.SyncCommitteeBits.Len() > uint64(len(committeeKeys)) {
return nil, nil, nil, errors.New("bits length exceeds committee length")
return nil, nil, errors.New("bits length exceeds committee length")
}
votedKeys = make([]bls.PublicKey, 0, len(committeeKeys))
votedIndices = make([]types.ValidatorIndex, 0, len(committeeKeys))
didntVoteIndices = make([]types.ValidatorIndex, 0) // No allocation. Expect most votes.
votedKeys := make([]bls.PublicKey, 0, len(committeeKeys))
activeBalance, err := helpers.TotalActiveBalance(s)
if err != nil {
return nil, nil, err
}
proposerReward, participantReward, err := SyncRewards(activeBalance)
if err != nil {
return nil, nil, err
}
proposerIndex, err := helpers.BeaconProposerIndex(ctx, s)
if err != nil {
return nil, nil, err
}
earnedProposerReward := uint64(0)
for i := uint64(0); i < sync.SyncCommitteeBits.Len(); i++ {
vIdx, exists := s.ValidatorIndexByPubkey(bytesutil.ToBytes48(committeeKeys[i]))
// Impossible scenario.
if !exists {
return nil, nil, nil, errors.New("validator public key does not exist in state")
return nil, nil, errors.New("validator public key does not exist in state")
}
if sync.SyncCommitteeBits.BitAt(i) {
pubKey, err := bls.PublicKeyFromBytes(committeeKeys[i])
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
votedKeys = append(votedKeys, pubKey)
votedIndices = append(votedIndices, vIdx)
if err := helpers.IncreaseBalance(s, vIdx, participantReward); err != nil {
return nil, nil, err
}
earnedProposerReward += proposerReward
} else {
didntVoteIndices = append(didntVoteIndices, vIdx)
if err := helpers.DecreaseBalance(s, vIdx, participantReward); err != nil {
return nil, nil, err
}
}
}
return
if err := helpers.IncreaseBalance(s, proposerIndex, earnedProposerReward); err != nil {
return nil, nil, err
}
return s, votedKeys, err
}
// VerifySyncCommitteeSig verifies sync committee signature `syncSig` is valid with respect to public keys `syncKeys`.
@@ -125,43 +145,6 @@ func VerifySyncCommitteeSig(s state.BeaconState, syncKeys []bls.PublicKey, syncS
return nil
}
// ApplySyncRewardsPenalties applies rewards and penalties for proposer and sync committee participants.
func ApplySyncRewardsPenalties(ctx context.Context, s state.BeaconState, votedIndices, didntVoteIndices []types.ValidatorIndex) (state.BeaconState, error) {
activeBalance, err := helpers.TotalActiveBalance(s)
if err != nil {
return nil, err
}
proposerReward, participantReward, err := SyncRewards(activeBalance)
if err != nil {
return nil, err
}
// Apply sync committee rewards.
earnedProposerReward := uint64(0)
for _, index := range votedIndices {
if err := helpers.IncreaseBalance(s, index, participantReward); err != nil {
return nil, err
}
earnedProposerReward += proposerReward
}
// Apply proposer rewards.
proposerIndex, err := helpers.BeaconProposerIndex(ctx, s)
if err != nil {
return nil, err
}
if err := helpers.IncreaseBalance(s, proposerIndex, earnedProposerReward); err != nil {
return nil, err
}
// Apply sync committee penalties.
for _, index := range didntVoteIndices {
if err := helpers.DecreaseBalance(s, index, participantReward); err != nil {
return nil, err
}
}
return s, nil
}
// SyncRewards returns the proposer reward and the sync participant reward given the total active balance in state.
func SyncRewards(activeBalance uint64) (proposerReward, participantReward uint64, err error) {
cfg := params.BeaconConfig()

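For reference, the arithmetic the spec comments above describe reduces to roughly the following sketch. It is illustrative only, and assumes the usual field names on prysm's params.BeaconConfig() (EffectiveBalanceIncrement, SyncRewardWeight, WeightDenominator, SlotsPerEpoch, SyncCommitteeSize, ProposerWeight):

// syncRewardsSketch mirrors the spec math: take a per-slot share of the total
// base rewards, split it evenly across the sync committee, then derive the
// proposer's cut from the participant reward.
func syncRewardsSketch(totalActiveBalance, baseRewardPerIncrement uint64) (proposerReward, participantReward uint64) {
	cfg := params.BeaconConfig()
	totalActiveIncrements := totalActiveBalance / cfg.EffectiveBalanceIncrement
	totalBaseRewards := baseRewardPerIncrement * totalActiveIncrements
	maxParticipantRewards := totalBaseRewards * cfg.SyncRewardWeight / cfg.WeightDenominator / uint64(cfg.SlotsPerEpoch)
	participantReward = maxParticipantRewards / cfg.SyncCommitteeSize
	proposerReward = participantReward * cfg.ProposerWeight / (cfg.WeightDenominator - cfg.ProposerWeight)
	return
}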
View File

@@ -168,7 +168,36 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
require.NoError(t, err)
}
func TestProcessSyncCommittee_FilterSyncCommitteeVotes(t *testing.T) {
// This is a regression test for #11696
func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
beaconState, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
require.NoError(t, err)
committeeKeys := committee.Pubkeys
committeeKeys[1] = committeeKeys[0]
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
idx, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(committeeKeys[0]))
require.Equal(t, true, ok)
syncBits := bitfield.NewBitvector512()
for i := range syncBits {
syncBits[i] = 0xFF
}
syncBits.SetBitAt(0, false)
syncAggregate := &ethpb.SyncAggregate{
SyncCommitteeBits: syncBits,
}
require.NoError(t, beaconState.UpdateBalancesAtIndex(idx, 0))
st, votedKeys, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
require.Equal(t, 511, len(votedKeys))
require.DeepEqual(t, committeeKeys[0], votedKeys[0].Marshal())
balances := st.Balances()
require.Equal(t, uint64(988), balances[idx])
}
func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
beaconState, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
@@ -183,25 +212,40 @@ func TestProcessSyncCommittee_FilterSyncCommitteeVotes(t *testing.T) {
SyncCommitteeBits: syncBits,
}
votedKeys, votedIndices, didntVoteIndices, err := altair.FilterSyncCommitteeVotes(beaconState, syncAggregate)
st, votedKeys, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
votedMap := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
for _, key := range votedKeys {
votedMap[bytesutil.ToBytes48(key.Marshal())] = true
}
require.Equal(t, int(syncBits.Len()/2), len(votedKeys))
require.Equal(t, int(syncBits.Len()/2), len(votedIndices))
require.Equal(t, int(syncBits.Len()/2), len(didntVoteIndices))
currentSyncCommittee, err := st.CurrentSyncCommittee()
require.NoError(t, err)
committeeKeys := currentSyncCommittee.Pubkeys
balances := st.Balances()
proposerIndex, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
require.NoError(t, err)
for i := 0; i < len(syncBits); i++ {
if syncBits.BitAt(uint64(i)) {
pk := beaconState.PubkeyAtIndex(votedIndices[i])
pk := bytesutil.ToBytes48(committeeKeys[i])
require.DeepEqual(t, true, votedMap[pk])
idx, ok := st.ValidatorIndexByPubkey(pk)
require.Equal(t, true, ok)
require.Equal(t, uint64(32000000988), balances[idx])
} else {
pk := beaconState.PubkeyAtIndex(didntVoteIndices[i])
pk := bytesutil.ToBytes48(committeeKeys[i])
require.DeepEqual(t, false, votedMap[pk])
idx, ok := st.ValidatorIndexByPubkey(pk)
require.Equal(t, true, ok)
if idx != proposerIndex {
require.Equal(t, uint64(31999999012), balances[idx])
}
}
}
require.Equal(t, uint64(32000035108), balances[proposerIndex])
}
func Test_VerifySyncCommitteeSig(t *testing.T) {
@@ -240,22 +284,6 @@ func Test_VerifySyncCommitteeSig(t *testing.T) {
require.NoError(t, altair.VerifySyncCommitteeSig(beaconState, pks, aggregatedSig))
}
func Test_ApplySyncRewardsPenalties(t *testing.T) {
beaconState, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
beaconState, err := altair.ApplySyncRewardsPenalties(context.Background(), beaconState,
[]types.ValidatorIndex{0, 1}, // voted
[]types.ValidatorIndex{2, 3}) // didn't vote
require.NoError(t, err)
balances := beaconState.Balances()
require.Equal(t, uint64(32000000988), balances[0])
require.Equal(t, balances[0], balances[1])
require.Equal(t, uint64(31999999012), balances[2])
require.Equal(t, balances[2], balances[3])
proposerIndex, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
require.NoError(t, err)
require.Equal(t, uint64(32000000282), balances[proposerIndex])
}
func Test_SyncRewards(t *testing.T) {
tests := []struct {
name string

View File

@@ -68,9 +68,10 @@ func InitializePrecomputeValidators(ctx context.Context, beaconState state.Beaco
// For fully inactive validators and perfectly active validators, the effect is the same as before Altair.
// For a validator that is inactive while the chain fails to finalize, the inactivity score increases by a fixed number each epoch, and the total loss after N epochs is proportional to N**2/2.
// For imperfectly active validators, the inactivity score's behavior is specified by this function:
// If a validator fails to submit an attestation with the correct target, their inactivity score goes up by 4.
// If they successfully submit an attestation with the correct source and target, their inactivity score drops by 1
// If the chain has recently finalized, each validator's score drops by 16.
//
// If a validator fails to submit an attestation with the correct target, their inactivity score goes up by 4.
// If they successfully submit an attestation with the correct source and target, their inactivity score drops by 1
// If the chain has recently finalized, each validator's score drops by 16.
func ProcessInactivityScores(
ctx context.Context,
beaconState state.BeaconState,
@@ -132,12 +133,13 @@ func ProcessInactivityScores(
// it also tracks and updates epoch attesting balances.
// Spec code:
// if epoch == get_current_epoch(state):
// epoch_participation = state.current_epoch_participation
// else:
// epoch_participation = state.previous_epoch_participation
// active_validator_indices = get_active_validator_indices(state, epoch)
// participating_indices = [i for i in active_validator_indices if has_flag(epoch_participation[i], flag_index)]
// return set(filter(lambda index: not state.validators[index].slashed, participating_indices))
//
// epoch_participation = state.current_epoch_participation
// else:
// epoch_participation = state.previous_epoch_participation
// active_validator_indices = get_active_validator_indices(state, epoch)
// participating_indices = [i for i in active_validator_indices if has_flag(epoch_participation[i], flag_index)]
// return set(filter(lambda index: not state.validators[index].slashed, participating_indices))
func ProcessEpochParticipation(
ctx context.Context,
beaconState state.BeaconState,

View File

@@ -14,10 +14,11 @@ import (
//
// Spec code:
// def process_sync_committee_updates(state: BeaconState) -> None:
// next_epoch = get_current_epoch(state) + Epoch(1)
// if next_epoch % EPOCHS_PER_SYNC_COMMITTEE_PERIOD == 0:
// state.current_sync_committee = state.next_sync_committee
// state.next_sync_committee = get_next_sync_committee(state)
//
// next_epoch = get_current_epoch(state) + Epoch(1)
// if next_epoch % EPOCHS_PER_SYNC_COMMITTEE_PERIOD == 0:
// state.current_sync_committee = state.next_sync_committee
// state.next_sync_committee = get_next_sync_committee(state)
func ProcessSyncCommitteeUpdates(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
nextEpoch := time.NextEpoch(beaconState)
if nextEpoch%params.BeaconConfig().EpochsPerSyncCommitteePeriod == 0 {
@@ -46,8 +47,9 @@ func ProcessSyncCommitteeUpdates(ctx context.Context, beaconState state.BeaconSt
//
// Spec code:
// def process_participation_flag_updates(state: BeaconState) -> None:
// state.previous_epoch_participation = state.current_epoch_participation
// state.current_epoch_participation = [ParticipationFlags(0b0000_0000) for _ in range(len(state.validators))]
//
// state.previous_epoch_participation = state.current_epoch_participation
// state.current_epoch_participation = [ParticipationFlags(0b0000_0000) for _ in range(len(state.validators))]
func ProcessParticipationFlagUpdates(beaconState state.BeaconState) (state.BeaconState, error) {
c, err := beaconState.CurrentEpochParticipation()
if err != nil {

View File

@@ -0,0 +1,3 @@
package altair
var ProcessSyncAggregateEported = processSyncAggregate

View File

@@ -13,16 +13,17 @@ import (
// individual validator's base reward.
//
// Spec code:
// def get_base_reward(state: BeaconState, index: ValidatorIndex) -> Gwei:
// """
// Return the base reward for the validator defined by ``index`` with respect to the current ``state``.
//
// Note: An optimally performing validator can earn one base reward per epoch over a long time horizon.
// This takes into account both per-epoch (e.g. attestation) and intermittent duties (e.g. block proposal
// and sync committees).
// """
// increments = state.validators[index].effective_balance // EFFECTIVE_BALANCE_INCREMENT
// return Gwei(increments * get_base_reward_per_increment(state))
// def get_base_reward(state: BeaconState, index: ValidatorIndex) -> Gwei:
// """
// Return the base reward for the validator defined by ``index`` with respect to the current ``state``.
//
// Note: An optimally performing validator can earn one base reward per epoch over a long time horizon.
// This takes into account both per-epoch (e.g. attestation) and intermittent duties (e.g. block proposal
// and sync committees).
// """
// increments = state.validators[index].effective_balance // EFFECTIVE_BALANCE_INCREMENT
// return Gwei(increments * get_base_reward_per_increment(state))
func BaseReward(s state.ReadOnlyBeaconState, index types.ValidatorIndex) (uint64, error) {
totalBalance, err := helpers.TotalActiveBalance(s)
if err != nil {
@@ -50,7 +51,8 @@ func BaseRewardWithTotalBalance(s state.ReadOnlyBeaconState, index types.Validat
//
// Spec code:
// def get_base_reward_per_increment(state: BeaconState) -> Gwei:
// return Gwei(EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR // integer_squareroot(get_total_active_balance(state)))
//
// return Gwei(EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR // integer_squareroot(get_total_active_balance(state)))
func BaseRewardPerIncrement(activeBalance uint64) (uint64, error) {
if activeBalance == 0 {
return 0, errors.New("active balance can't be 0")

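For a rough, illustrative sense of scale (numbers not taken from the code): with about 500,000 active validators at the 32 ETH maximum effective balance, the total active balance is 1.6×10^16 Gwei, integer_squareroot of that is about 126,491,106, so the base reward per increment is 10^9 × 64 // 126,491,106 ≈ 505 Gwei, and a 32-increment validator's base reward is roughly 32 × 505 ≈ 16,160 Gwei per epoch.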
View File

@@ -47,13 +47,14 @@ func ValidateNilSyncContribution(s *ethpb.SignedContributionAndProof) error {
//
// Spec code:
// def get_next_sync_committee(state: BeaconState) -> SyncCommittee:
// """
// Return the next sync committee, with possible pubkey duplicates.
// """
// indices = get_next_sync_committee_indices(state)
// pubkeys = [state.validators[index].pubkey for index in indices]
// aggregate_pubkey = bls.AggregatePKs(pubkeys)
// return SyncCommittee(pubkeys=pubkeys, aggregate_pubkey=aggregate_pubkey)
//
// """
// Return the next sync committee, with possible pubkey duplicates.
// """
// indices = get_next_sync_committee_indices(state)
// pubkeys = [state.validators[index].pubkey for index in indices]
// aggregate_pubkey = bls.AggregatePKs(pubkeys)
// return SyncCommittee(pubkeys=pubkeys, aggregate_pubkey=aggregate_pubkey)
func NextSyncCommittee(ctx context.Context, s state.BeaconState) (*ethpb.SyncCommittee, error) {
indices, err := NextSyncCommitteeIndices(ctx, s)
if err != nil {
@@ -78,26 +79,27 @@ func NextSyncCommittee(ctx context.Context, s state.BeaconState) (*ethpb.SyncCom
//
// Spec code:
// def get_next_sync_committee_indices(state: BeaconState) -> Sequence[ValidatorIndex]:
// """
// Return the sync committee indices, with possible duplicates, for the next sync committee.
// """
// epoch = Epoch(get_current_epoch(state) + 1)
//
// MAX_RANDOM_BYTE = 2**8 - 1
// active_validator_indices = get_active_validator_indices(state, epoch)
// active_validator_count = uint64(len(active_validator_indices))
// seed = get_seed(state, epoch, DOMAIN_SYNC_COMMITTEE)
// i = 0
// sync_committee_indices: List[ValidatorIndex] = []
// while len(sync_committee_indices) < SYNC_COMMITTEE_SIZE:
// shuffled_index = compute_shuffled_index(uint64(i % active_validator_count), active_validator_count, seed)
// candidate_index = active_validator_indices[shuffled_index]
// random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
// effective_balance = state.validators[candidate_index].effective_balance
// if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
// sync_committee_indices.append(candidate_index)
// i += 1
// return sync_committee_indices
// """
// Return the sync committee indices, with possible duplicates, for the next sync committee.
// """
// epoch = Epoch(get_current_epoch(state) + 1)
//
// MAX_RANDOM_BYTE = 2**8 - 1
// active_validator_indices = get_active_validator_indices(state, epoch)
// active_validator_count = uint64(len(active_validator_indices))
// seed = get_seed(state, epoch, DOMAIN_SYNC_COMMITTEE)
// i = 0
// sync_committee_indices: List[ValidatorIndex] = []
// while len(sync_committee_indices) < SYNC_COMMITTEE_SIZE:
// shuffled_index = compute_shuffled_index(uint64(i % active_validator_count), active_validator_count, seed)
// candidate_index = active_validator_indices[shuffled_index]
// random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
// effective_balance = state.validators[candidate_index].effective_balance
// if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
// sync_committee_indices.append(candidate_index)
// i += 1
// return sync_committee_indices
func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconState) ([]types.ValidatorIndex, error) {
epoch := coreTime.NextEpoch(s)
indices, err := helpers.ActiveValidatorIndices(ctx, s, epoch)
@@ -144,18 +146,19 @@ func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconState) ([]types
// SyncSubCommitteePubkeys returns the pubkeys participating in a sync subcommittee.
//
// def get_sync_subcommittee_pubkeys(state: BeaconState, subcommittee_index: uint64) -> Sequence[BLSPubkey]:
// # Committees assigned to `slot` sign for `slot - 1`
// # This creates the exceptional logic below when transitioning between sync committee periods
// next_slot_epoch = compute_epoch_at_slot(Slot(state.slot + 1))
// if compute_sync_committee_period(get_current_epoch(state)) == compute_sync_committee_period(next_slot_epoch):
// sync_committee = state.current_sync_committee
// else:
// sync_committee = state.next_sync_committee
//
// # Return pubkeys for the subcommittee index
// sync_subcommittee_size = SYNC_COMMITTEE_SIZE // SYNC_COMMITTEE_SUBNET_COUNT
// i = subcommittee_index * sync_subcommittee_size
// return sync_committee.pubkeys[i:i + sync_subcommittee_size]
// # Committees assigned to `slot` sign for `slot - 1`
// # This creates the exceptional logic below when transitioning between sync committee periods
// next_slot_epoch = compute_epoch_at_slot(Slot(state.slot + 1))
// if compute_sync_committee_period(get_current_epoch(state)) == compute_sync_committee_period(next_slot_epoch):
// sync_committee = state.current_sync_committee
// else:
// sync_committee = state.next_sync_committee
//
// # Return pubkeys for the subcommittee index
// sync_subcommittee_size = SYNC_COMMITTEE_SIZE // SYNC_COMMITTEE_SUBNET_COUNT
// i = subcommittee_index * sync_subcommittee_size
// return sync_committee.pubkeys[i:i + sync_subcommittee_size]
func SyncSubCommitteePubkeys(syncCommittee *ethpb.SyncCommittee, subComIdx types.CommitteeIndex) ([][]byte, error) {
cfg := params.BeaconConfig()
subCommSize := cfg.SyncCommitteeSize / cfg.SyncCommitteeSubnetCount
@@ -172,8 +175,9 @@ func SyncSubCommitteePubkeys(syncCommittee *ethpb.SyncCommittee, subComIdx types
// aggregator.
//
// def is_sync_committee_aggregator(signature: BLSSignature) -> bool:
// modulo = max(1, SYNC_COMMITTEE_SIZE // SYNC_COMMITTEE_SUBNET_COUNT // TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE)
// return bytes_to_uint64(hash(signature)[0:8]) % modulo == 0
//
// modulo = max(1, SYNC_COMMITTEE_SIZE // SYNC_COMMITTEE_SUBNET_COUNT // TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE)
// return bytes_to_uint64(hash(signature)[0:8]) % modulo == 0
func IsSyncCommitteeAggregator(sig []byte) (bool, error) {
if len(sig) != fieldparams.BLSSignatureLength {
return false, errors.New("incorrect sig length")

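As a worked example, assuming the mainnet values SYNC_COMMITTEE_SIZE = 512, SYNC_COMMITTEE_SUBNET_COUNT = 4 and TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE = 16, the modulo is 512 // 4 // 16 = 8, so each of the 128 members of a subcommittee self-selects as an aggregator with probability 1/8, i.e. about 16 expected aggregators per subcommittee.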
View File

@@ -15,18 +15,19 @@ import (
//
// Spec code:
// def process_epoch(state: BeaconState) -> None:
// process_justification_and_finalization(state) # [Modified in Altair]
// process_inactivity_updates(state) # [New in Altair]
// process_rewards_and_penalties(state) # [Modified in Altair]
// process_registry_updates(state)
// process_slashings(state) # [Modified in Altair]
// process_eth1_data_reset(state)
// process_effective_balance_updates(state)
// process_slashings_reset(state)
// process_randao_mixes_reset(state)
// process_historical_roots_update(state)
// process_participation_flag_updates(state) # [New in Altair]
// process_sync_committee_updates(state) # [New in Altair]
//
// process_justification_and_finalization(state) # [Modified in Altair]
// process_inactivity_updates(state) # [New in Altair]
// process_rewards_and_penalties(state) # [Modified in Altair]
// process_registry_updates(state)
// process_slashings(state) # [Modified in Altair]
// process_eth1_data_reset(state)
// process_effective_balance_updates(state)
// process_slashings_reset(state)
// process_randao_mixes_reset(state)
// process_historical_roots_update(state)
// process_participation_flag_updates(state) # [New in Altair]
// process_sync_committee_updates(state) # [New in Altair]
func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "altair.ProcessEpoch")
defer span.End()
@@ -92,7 +93,7 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconSta
if err != nil {
return nil, err
}
state, err = e.ProcessHistoricalRootsUpdate(state)
state, err = e.ProcessHistoricalDataUpdate(state)
if err != nil {
return nil, err
}

View File

@@ -16,56 +16,61 @@ import (
//
// Spec code:
// def upgrade_to_altair(pre: phase0.BeaconState) -> BeaconState:
// epoch = phase0.get_current_epoch(pre)
// post = BeaconState(
// # Versioning
// genesis_time=pre.genesis_time,
// genesis_validators_root=pre.genesis_validators_root,
// slot=pre.slot,
// fork=Fork(
// previous_version=pre.fork.current_version,
// current_version=ALTAIR_FORK_VERSION,
// epoch=epoch,
// ),
// # History
// latest_block_header=pre.latest_block_header,
// block_roots=pre.block_roots,
// state_roots=pre.state_roots,
// historical_roots=pre.historical_roots,
// # Eth1
// eth1_data=pre.eth1_data,
// eth1_data_votes=pre.eth1_data_votes,
// eth1_deposit_index=pre.eth1_deposit_index,
// # Registry
// validators=pre.validators,
// balances=pre.balances,
// # Randomness
// randao_mixes=pre.randao_mixes,
// # Slashings
// slashings=pre.slashings,
// # Participation
// previous_epoch_participation=[ParticipationFlags(0b0000_0000) for _ in range(len(pre.validators))],
// current_epoch_participation=[ParticipationFlags(0b0000_0000) for _ in range(len(pre.validators))],
// # Finality
// justification_bits=pre.justification_bits,
// previous_justified_checkpoint=pre.previous_justified_checkpoint,
// current_justified_checkpoint=pre.current_justified_checkpoint,
// finalized_checkpoint=pre.finalized_checkpoint,
// # Inactivity
// inactivity_scores=[uint64(0) for _ in range(len(pre.validators))],
// )
// # Fill in previous epoch participation from the pre state's pending attestations
// translate_participation(post, pre.previous_epoch_attestations)
//
// # Fill in sync committees
// # Note: A duplicate committee is assigned for the current and next committee at the fork boundary
// post.current_sync_committee = get_next_sync_committee(post)
// post.next_sync_committee = get_next_sync_committee(post)
// return post
// epoch = phase0.get_current_epoch(pre)
// post = BeaconState(
// # Versioning
// genesis_time=pre.genesis_time,
// genesis_validators_root=pre.genesis_validators_root,
// slot=pre.slot,
// fork=Fork(
// previous_version=pre.fork.current_version,
// current_version=ALTAIR_FORK_VERSION,
// epoch=epoch,
// ),
// # History
// latest_block_header=pre.latest_block_header,
// block_roots=pre.block_roots,
// state_roots=pre.state_roots,
// historical_roots=pre.historical_roots,
// # Eth1
// eth1_data=pre.eth1_data,
// eth1_data_votes=pre.eth1_data_votes,
// eth1_deposit_index=pre.eth1_deposit_index,
// # Registry
// validators=pre.validators,
// balances=pre.balances,
// # Randomness
// randao_mixes=pre.randao_mixes,
// # Slashings
// slashings=pre.slashings,
// # Participation
// previous_epoch_participation=[ParticipationFlags(0b0000_0000) for _ in range(len(pre.validators))],
// current_epoch_participation=[ParticipationFlags(0b0000_0000) for _ in range(len(pre.validators))],
// # Finality
// justification_bits=pre.justification_bits,
// previous_justified_checkpoint=pre.previous_justified_checkpoint,
// current_justified_checkpoint=pre.current_justified_checkpoint,
// finalized_checkpoint=pre.finalized_checkpoint,
// # Inactivity
// inactivity_scores=[uint64(0) for _ in range(len(pre.validators))],
// )
// # Fill in previous epoch participation from the pre state's pending attestations
// translate_participation(post, pre.previous_epoch_attestations)
//
// # Fill in sync committees
// # Note: A duplicate committee is assigned for the current and next committee at the fork boundary
// post.current_sync_committee = get_next_sync_committee(post)
// post.next_sync_committee = get_next_sync_committee(post)
// return post
func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
hrs, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateAltair{
GenesisTime: state.GenesisTime(),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
@@ -78,7 +83,7 @@ func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.Beacon
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: state.HistoricalRoots(),
HistoricalRoots: hrs,
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
@@ -126,17 +131,18 @@ func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.Beacon
//
// Spec code:
// def translate_participation(state: BeaconState, pending_attestations: Sequence[phase0.PendingAttestation]) -> None:
// for attestation in pending_attestations:
// data = attestation.data
// inclusion_delay = attestation.inclusion_delay
// # Translate attestation inclusion info to flag indices
// participation_flag_indices = get_attestation_participation_flag_indices(state, data, inclusion_delay)
//
// # Apply flags to all attesting validators
// epoch_participation = state.previous_epoch_participation
// for index in get_attesting_indices(state, data, attestation.aggregation_bits):
// for flag_index in participation_flag_indices:
// epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
// for attestation in pending_attestations:
// data = attestation.data
// inclusion_delay = attestation.inclusion_delay
// # Translate attestation inclusion info to flag indices
// participation_flag_indices = get_attestation_participation_flag_indices(state, data, inclusion_delay)
//
// # Apply flags to all attesting validators
// epoch_participation = state.previous_epoch_participation
// for index in get_attesting_indices(state, data, attestation.aggregation_bits):
// for flag_index in participation_flag_indices:
// epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
func TranslateParticipation(ctx context.Context, state state.BeaconState, atts []*ethpb.PendingAttestation) (state.BeaconState, error) {
epochParticipation, err := state.PreviousEpochParticipation()
if err != nil {

View File

@@ -82,7 +82,11 @@ func TestUpgradeToAltair(t *testing.T) {
require.DeepSSZEqual(t, preForkState.LatestBlockHeader(), aState.LatestBlockHeader())
require.DeepSSZEqual(t, preForkState.BlockRoots(), aState.BlockRoots())
require.DeepSSZEqual(t, preForkState.StateRoots(), aState.StateRoots())
require.DeepSSZEqual(t, preForkState.HistoricalRoots(), aState.HistoricalRoots())
r1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
r2, err := aState.HistoricalRoots()
require.NoError(t, err)
require.DeepSSZEqual(t, r1, r2)
require.DeepSSZEqual(t, preForkState.Eth1Data(), aState.Eth1Data())
require.DeepSSZEqual(t, preForkState.Eth1DataVotes(), aState.Eth1DataVotes())
require.DeepSSZEqual(t, preForkState.Eth1DepositIndex(), aState.Eth1DepositIndex())

View File

@@ -19,12 +19,7 @@ go_library(
"withdrawals.go",
],
importpath = "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/blocks",
visibility = [
"//beacon-chain:__subpackages__",
"//testing/spectest:__subpackages__",
"//testing/util:__pkg__",
"//validator:__subpackages__",
],
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
@@ -41,11 +36,12 @@ go_library(
"//contracts/deposit:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//crypto/hash/htr:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/slashings:go_default_library",
@@ -70,6 +66,7 @@ go_test(
"deposit_test.go",
"eth1_data_test.go",
"exit_test.go",
"exports_test.go",
"genesis_test.go",
"header_test.go",
"payload_test.go",
@@ -93,13 +90,16 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash/htr:go_default_library",
"//crypto/bls/common:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/migration:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation:go_default_library",

View File

@@ -185,19 +185,20 @@ func VerifyAttestationSignature(ctx context.Context, beaconState state.ReadOnlyB
// VerifyIndexedAttestation determines the validity of an indexed attestation.
//
// Spec pseudocode definition:
// def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:
// """
// Check if ``indexed_attestation`` is not empty, has sorted and unique indices and has a valid aggregate signature.
// """
// # Verify indices are sorted and unique
// indices = indexed_attestation.attesting_indices
// if len(indices) == 0 or not indices == sorted(set(indices)):
// return False
// # Verify aggregate signature
// pubkeys = [state.validators[i].pubkey for i in indices]
// domain = get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch)
// signing_root = compute_signing_root(indexed_attestation.data, domain)
// return bls.FastAggregateVerify(pubkeys, signing_root, indexed_attestation.signature)
//
// def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:
// """
// Check if ``indexed_attestation`` is not empty, has sorted and unique indices and has a valid aggregate signature.
// """
// # Verify indices are sorted and unique
// indices = indexed_attestation.attesting_indices
// if len(indices) == 0 or not indices == sorted(set(indices)):
// return False
// # Verify aggregate signature
// pubkeys = [state.validators[i].pubkey for i in indices]
// domain = get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch)
// signing_root = compute_signing_root(indexed_attestation.data, domain)
// return bls.FastAggregateVerify(pubkeys, signing_root, indexed_attestation.signature)
func VerifyIndexedAttestation(ctx context.Context, beaconState state.ReadOnlyBeaconState, indexedAtt *ethpb.IndexedAttestation) error {
ctx, span := trace.StartSpan(ctx, "core.VerifyIndexedAttestation")
defer span.End()

View File
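
The first half of is_valid_indexed_attestation is a pure index check. A small sketch of that check follows, with the BLS aggregate-signature verification omitted and no Prysm types assumed.

```go
package main

import "fmt"

// validIndices mirrors the first half of is_valid_indexed_attestation:
// the attesting indices must be non-empty, sorted, and unique.
// (The BLS FastAggregateVerify step is omitted here.)
func validIndices(indices []uint64) bool {
	if len(indices) == 0 {
		return false
	}
	for i := 1; i < len(indices); i++ {
		if indices[i] <= indices[i-1] { // not strictly increasing => unsorted or duplicate
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validIndices([]uint64{1, 2, 5})) // true
	fmt.Println(validIndices([]uint64{1, 1, 5})) // false: duplicate
	fmt.Println(validIndices(nil))               // false: empty
}
```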

@@ -64,7 +64,7 @@ func TestVerifyAttestationNoVerifySignature_IncorrectSourceEpoch(t *testing.T) {
AggregationBits: aggBits,
}
zeroSig := [96]byte{}
var zeroSig [96]byte
att.Signature = zeroSig[:]
err := beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)

View File

@@ -113,7 +113,7 @@ func TestProcessAttestationsNoVerify_OK(t *testing.T) {
AggregationBits: aggBits,
}
zeroSig := [fieldparams.BLSSignatureLength]byte{}
var zeroSig [fieldparams.BLSSignatureLength]byte
att.Signature = zeroSig[:]
err := beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
@@ -144,7 +144,7 @@ func TestVerifyAttestationNoVerifySignature_OK(t *testing.T) {
AggregationBits: aggBits,
}
zeroSig := [fieldparams.BLSSignatureLength]byte{}
var zeroSig [fieldparams.BLSSignatureLength]byte
att.Signature = zeroSig[:]
err := beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
@@ -172,7 +172,7 @@ func TestVerifyAttestationNoVerifySignature_BadAttIdx(t *testing.T) {
},
AggregationBits: aggBits,
}
zeroSig := [fieldparams.BLSSignatureLength]byte{}
var zeroSig [fieldparams.BLSSignatureLength]byte
att.Signature = zeroSig[:]
require.NoError(t, beaconState.SetSlot(beaconState.Slot()+params.BeaconConfig().MinAttestationInclusionDelay))
ckp := beaconState.CurrentJustifiedCheckpoint()

View File

@@ -22,20 +22,21 @@ import (
// Casper FFG slashing conditions if any slashable events occurred.
//
// Spec pseudocode definition:
// def process_attester_slashing(state: BeaconState, attester_slashing: AttesterSlashing) -> None:
// attestation_1 = attester_slashing.attestation_1
// attestation_2 = attester_slashing.attestation_2
// assert is_slashable_attestation_data(attestation_1.data, attestation_2.data)
// assert is_valid_indexed_attestation(state, attestation_1)
// assert is_valid_indexed_attestation(state, attestation_2)
//
// slashed_any = False
// indices = set(attestation_1.attesting_indices).intersection(attestation_2.attesting_indices)
// for index in sorted(indices):
// if is_slashable_validator(state.validators[index], get_current_epoch(state)):
// slash_validator(state, index)
// slashed_any = True
// assert slashed_any
// def process_attester_slashing(state: BeaconState, attester_slashing: AttesterSlashing) -> None:
// attestation_1 = attester_slashing.attestation_1
// attestation_2 = attester_slashing.attestation_2
// assert is_slashable_attestation_data(attestation_1.data, attestation_2.data)
// assert is_valid_indexed_attestation(state, attestation_1)
// assert is_valid_indexed_attestation(state, attestation_2)
//
// slashed_any = False
// indices = set(attestation_1.attesting_indices).intersection(attestation_2.attesting_indices)
// for index in sorted(indices):
// if is_slashable_validator(state.validators[index], get_current_epoch(state)):
// slash_validator(state, index)
// slashed_any = True
// assert slashed_any
func ProcessAttesterSlashings(
ctx context.Context,
beaconState state.BeaconState,
@@ -83,7 +84,7 @@ func ProcessAttesterSlashing(
slashingQuotient = cfg.MinSlashingPenaltyQuotient
case beaconState.Version() == version.Altair:
slashingQuotient = cfg.MinSlashingPenaltyQuotientAltair
case beaconState.Version() == version.Bellatrix:
case beaconState.Version() >= version.Bellatrix:
slashingQuotient = cfg.MinSlashingPenaltyQuotientBellatrix
default:
return nil, errors.New("unknown state version")
@@ -132,16 +133,17 @@ func VerifyAttesterSlashing(ctx context.Context, beaconState state.ReadOnlyBeaco
// IsSlashableAttestationData verifies a slashing against the Casper Proof of Stake FFG rules.
//
// Spec pseudocode definition:
// def is_slashable_attestation_data(data_1: AttestationData, data_2: AttestationData) -> bool:
// """
// Check if ``data_1`` and ``data_2`` are slashable according to Casper FFG rules.
// """
// return (
// # Double vote
// (data_1 != data_2 and data_1.target.epoch == data_2.target.epoch) or
// # Surround vote
// (data_1.source.epoch < data_2.source.epoch and data_2.target.epoch < data_1.target.epoch)
// )
//
// def is_slashable_attestation_data(data_1: AttestationData, data_2: AttestationData) -> bool:
// """
// Check if ``data_1`` and ``data_2`` are slashable according to Casper FFG rules.
// """
// return (
// # Double vote
// (data_1 != data_2 and data_1.target.epoch == data_2.target.epoch) or
// # Surround vote
// (data_1.source.epoch < data_2.source.epoch and data_2.target.epoch < data_1.target.epoch)
// )
func IsSlashableAttestationData(data1, data2 *ethpb.AttestationData) bool {
if data1 == nil || data2 == nil || data1.Target == nil || data2.Target == nil || data1.Source == nil || data2.Source == nil {
return false

View File
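
The double-vote and surround-vote conditions above reduce to epoch comparisons. A hedged sketch with a simplified AttestationData stand-in (only the fields the FFG rules touch):

```go
package main

import "fmt"

// attData is a simplified AttestationData: only the epochs matter for the
// Casper FFG slashing conditions shown in the pseudocode above.
type attData struct {
	sourceEpoch, targetEpoch uint64
	root                     string // stand-in for the rest of the data
}

// isSlashable mirrors is_slashable_attestation_data: a double vote is two
// different attestations for the same target epoch; a surround vote is one
// attestation whose source/target span strictly contains the other's.
func isSlashable(d1, d2 attData) bool {
	doubleVote := d1 != d2 && d1.targetEpoch == d2.targetEpoch
	surroundVote := d1.sourceEpoch < d2.sourceEpoch && d2.targetEpoch < d1.targetEpoch
	return doubleVote || surroundVote
}

func main() {
	a := attData{sourceEpoch: 1, targetEpoch: 4, root: "a"}
	b := attData{sourceEpoch: 2, targetEpoch: 3, root: "b"}
	fmt.Println(isSlashable(a, b)) // true: a surrounds b
	c := attData{sourceEpoch: 3, targetEpoch: 4, root: "c"}
	fmt.Println(isSlashable(a, c)) // true: double vote on target epoch 4
}
```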

@@ -302,3 +302,72 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
require.Equal(t, uint64(31000000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}
func TestProcessAttesterSlashings_AppliesCorrectStatusCapella(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateCapella(t, 100)
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = types.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashing{
{
Attestation_1: att1,
Attestation_2: att2,
},
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, beaconState.SetSlot(currentSlot))
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
},
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, b.Block.Body.AttesterSlashings, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, uint64(31000000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}

View File

@@ -55,9 +55,9 @@ func TestFuzzProcessBlockHeader_10000(t *testing.T) {
func TestFuzzverifyDepositDataSigningRoot_10000(_ *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
var ba []byte
pubkey := [fieldparams.BLSPubkeyLength]byte{}
sig := [96]byte{}
domain := [4]byte{}
var pubkey [fieldparams.BLSPubkeyLength]byte
var sig [96]byte
var domain [4]byte
var p []byte
var s []byte
var d []byte

View File

@@ -71,8 +71,9 @@ func ActivateValidatorWithEffectiveBalance(beaconState state.BeaconState, deposi
// into the beacon chain.
//
// Spec pseudocode definition:
// For each deposit in block.body.deposits:
// process_deposit(state, deposit)
//
// For each deposit in block.body.deposits:
// process_deposit(state, deposit)
func ProcessDeposits(
ctx context.Context,
beaconState state.BeaconState,
@@ -120,40 +121,41 @@ func BatchVerifyDepositsSignatures(ctx context.Context, deposits []*ethpb.Deposi
//
// Spec pseudocode definition:
// def process_deposit(state: BeaconState, deposit: Deposit) -> None:
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// pubkey = deposit.data.pubkey
// amount = deposit.data.amount
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// deposit_message = DepositMessage(
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// if not bls.Verify(pubkey, signing_root, deposit.data.signature):
// return
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
//
// # Add validator and balance entries
// state.validators.append(get_validator_from_deposit(state, deposit))
// state.balances.append(amount)
// else:
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// increase_balance(state, index, amount)
// pubkey = deposit.data.pubkey
// amount = deposit.data.amount
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// deposit_message = DepositMessage(
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// if not bls.Verify(pubkey, signing_root, deposit.data.signature):
// return
//
// # Add validator and balance entries
// state.validators.append(get_validator_from_deposit(state, deposit))
// state.balances.append(amount)
// else:
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// increase_balance(state, index, amount)
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, bool, error) {
var newValidator bool
if err := verifyDeposit(beaconState, deposit); err != nil {

View File
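
The branch structure of process_deposit (new validator vs. balance top-up) can be shown in isolation. A sketch under simplified assumptions: Merkle-branch verification, signature checking, and the eth1_deposit_index increment are elided, and the registry is a plain slice pair rather than Prysm's state.

```go
package main

import "fmt"

// depositData keeps only the fields needed to show the branch.
type depositData struct {
	pubkey string
	amount uint64
}

// registry is a toy validator registry: a pubkey list plus a parallel balance list.
type registry struct {
	pubkeys  []string
	balances []uint64
}

// applyDeposit mirrors the branch in process_deposit: an unknown pubkey adds a
// new validator and balance entry (after signature verification, elided here),
// while a known pubkey is a top-up that only increases the balance.
func (r *registry) applyDeposit(d depositData) {
	for i, pk := range r.pubkeys {
		if pk == d.pubkey {
			r.balances[i] += d.amount // top-up
			return
		}
	}
	r.pubkeys = append(r.pubkeys, d.pubkey)
	r.balances = append(r.balances, d.amount)
}

func main() {
	r := &registry{}
	r.applyDeposit(depositData{pubkey: "v0", amount: 32})
	r.applyDeposit(depositData{pubkey: "v0", amount: 1}) // top-up, no new validator
	fmt.Println(r.pubkeys, r.balances)                   // [v0] [33]
}
```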

@@ -6,3 +6,8 @@ var errNilSignedWithdrawalMessage = errors.New("nil SignedBLSToExecutionChange m
var errNilWithdrawalMessage = errors.New("nil BLSToExecutionChange message")
var errInvalidBLSPrefix = errors.New("withdrawal credential prefix is not a BLS prefix")
var errInvalidWithdrawalCredentials = errors.New("withdrawal credentials do not match")
var errInvalidWithdrawalIndex = errors.New("invalid withdrawal index")
var errInvalidValidatorIndex = errors.New("invalid validator index")
var errInvalidWithdrawalAmount = errors.New("invalid withdrawal amount")
var errInvalidExecutionAddress = errors.New("invalid execution address")
var errInvalidWithdrawalNumber = errors.New("invalid number of withdrawals")

View File

@@ -15,10 +15,11 @@ import (
// into the beacon state.
//
// Official spec definition:
// def process_eth1_data(state: BeaconState, body: BeaconBlockBody) -> None:
// state.eth1_data_votes.append(body.eth1_data)
// if state.eth1_data_votes.count(body.eth1_data) * 2 > EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH:
// state.eth1_data = body.eth1_data
//
// def process_eth1_data(state: BeaconState, body: BeaconBlockBody) -> None:
// state.eth1_data_votes.append(body.eth1_data)
// if state.eth1_data_votes.count(body.eth1_data) * 2 > EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH:
// state.eth1_data = body.eth1_data
func ProcessEth1DataInBlock(_ context.Context, beaconState state.BeaconState, eth1Data *ethpb.Eth1Data) (state.BeaconState, error) {
if beaconState == nil || beaconState.IsNil() {
return nil, errors.New("nil state")

View File
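
The eth1 data voting rule above is a simple majority over the voting period. A sketch with hypothetical constants matching the mainnet values (64 epochs x 32 slots), not tied to Prysm's params package:

```go
package main

import "fmt"

const (
	epochsPerEth1VotingPeriod = 64
	slotsPerEpoch             = 32
)

// eth1Vote mirrors process_eth1_data: the new vote is appended, and the voted
// data becomes canonical once it holds a strict majority of the voting period.
func eth1Vote(votes []string, newVote string, current string) ([]string, string) {
	votes = append(votes, newVote)
	count := 0
	for _, v := range votes {
		if v == newVote {
			count++
		}
	}
	if count*2 > epochsPerEth1VotingPeriod*slotsPerEpoch {
		current = newVote
	}
	return votes, current
}

func main() {
	votes, current := []string{}, "old"
	// A single vote is far below the majority threshold (count*2 must exceed 2048).
	votes, current = eth1Vote(votes, "new", current)
	fmt.Println(len(votes), current) // 1 old
}
```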

@@ -27,23 +27,24 @@ var ValidatorCannotExitYetMsg = "validator has not been active long enough to ex
// should exit the state's validator registry.
//
// Spec pseudocode definition:
// def process_voluntary_exit(state: BeaconState, signed_voluntary_exit: SignedVoluntaryExit) -> None:
// voluntary_exit = signed_voluntary_exit.message
// validator = state.validators[voluntary_exit.validator_index]
// # Verify the validator is active
// assert is_active_validator(validator, get_current_epoch(state))
// # Verify exit has not been initiated
// assert validator.exit_epoch == FAR_FUTURE_EPOCH
// # Exits must specify an epoch when they become valid; they are not valid before then
// assert get_current_epoch(state) >= voluntary_exit.epoch
// # Verify the validator has been active long enough
// assert get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, voluntary_exit.epoch)
// signing_root = compute_signing_root(voluntary_exit, domain)
// assert bls.Verify(validator.pubkey, signing_root, signed_voluntary_exit.signature)
// # Initiate exit
// initiate_validator_exit(state, voluntary_exit.validator_index)
//
// def process_voluntary_exit(state: BeaconState, signed_voluntary_exit: SignedVoluntaryExit) -> None:
// voluntary_exit = signed_voluntary_exit.message
// validator = state.validators[voluntary_exit.validator_index]
// # Verify the validator is active
// assert is_active_validator(validator, get_current_epoch(state))
// # Verify exit has not been initiated
// assert validator.exit_epoch == FAR_FUTURE_EPOCH
// # Exits must specify an epoch when they become valid; they are not valid before then
// assert get_current_epoch(state) >= voluntary_exit.epoch
// # Verify the validator has been active long enough
// assert get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, voluntary_exit.epoch)
// signing_root = compute_signing_root(voluntary_exit, domain)
// assert bls.Verify(validator.pubkey, signing_root, signed_voluntary_exit.signature)
// # Initiate exit
// initiate_validator_exit(state, voluntary_exit.validator_index)
func ProcessVoluntaryExits(
ctx context.Context,
beaconState state.BeaconState,
@@ -71,23 +72,24 @@ func ProcessVoluntaryExits(
// VerifyExitAndSignature implements the spec-defined validation for voluntary exits.
//
// Spec pseudocode definition:
// def process_voluntary_exit(state: BeaconState, signed_voluntary_exit: SignedVoluntaryExit) -> None:
// voluntary_exit = signed_voluntary_exit.message
// validator = state.validators[voluntary_exit.validator_index]
// # Verify the validator is active
// assert is_active_validator(validator, get_current_epoch(state))
// # Verify exit has not been initiated
// assert validator.exit_epoch == FAR_FUTURE_EPOCH
// # Exits must specify an epoch when they become valid; they are not valid before then
// assert get_current_epoch(state) >= voluntary_exit.epoch
// # Verify the validator has been active long enough
// assert get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, voluntary_exit.epoch)
// signing_root = compute_signing_root(voluntary_exit, domain)
// assert bls.Verify(validator.pubkey, signing_root, signed_voluntary_exit.signature)
// # Initiate exit
// initiate_validator_exit(state, voluntary_exit.validator_index)
//
// def process_voluntary_exit(state: BeaconState, signed_voluntary_exit: SignedVoluntaryExit) -> None:
// voluntary_exit = signed_voluntary_exit.message
// validator = state.validators[voluntary_exit.validator_index]
// # Verify the validator is active
// assert is_active_validator(validator, get_current_epoch(state))
// # Verify exit has not been initiated
// assert validator.exit_epoch == FAR_FUTURE_EPOCH
// # Exits must specify an epoch when they become valid; they are not valid before then
// assert get_current_epoch(state) >= voluntary_exit.epoch
// # Verify the validator has been active long enough
// assert get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, voluntary_exit.epoch)
// signing_root = compute_signing_root(voluntary_exit, domain)
// assert bls.Verify(validator.pubkey, signing_root, signed_voluntary_exit.signature)
// # Initiate exit
// initiate_validator_exit(state, voluntary_exit.validator_index)
func VerifyExitAndSignature(
validator state.ReadOnlyValidator,
currentSlot types.Slot,
@@ -117,23 +119,24 @@ func VerifyExitAndSignature(
// verifyExitConditions implements the spec-defined validation for voluntary exits (excluding signatures).
//
// Spec pseudocode definition:
// def process_voluntary_exit(state: BeaconState, signed_voluntary_exit: SignedVoluntaryExit) -> None:
// voluntary_exit = signed_voluntary_exit.message
// validator = state.validators[voluntary_exit.validator_index]
// # Verify the validator is active
// assert is_active_validator(validator, get_current_epoch(state))
// # Verify exit has not been initiated
// assert validator.exit_epoch == FAR_FUTURE_EPOCH
// # Exits must specify an epoch when they become valid; they are not valid before then
// assert get_current_epoch(state) >= voluntary_exit.epoch
// # Verify the validator has been active long enough
// assert get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, voluntary_exit.epoch)
// signing_root = compute_signing_root(voluntary_exit, domain)
// assert bls.Verify(validator.pubkey, signing_root, signed_voluntary_exit.signature)
// # Initiate exit
// initiate_validator_exit(state, voluntary_exit.validator_index)
//
// def process_voluntary_exit(state: BeaconState, signed_voluntary_exit: SignedVoluntaryExit) -> None:
// voluntary_exit = signed_voluntary_exit.message
// validator = state.validators[voluntary_exit.validator_index]
// # Verify the validator is active
// assert is_active_validator(validator, get_current_epoch(state))
// # Verify exit has not been initiated
// assert validator.exit_epoch == FAR_FUTURE_EPOCH
// # Exits must specify an epoch when they become valid; they are not valid before then
// assert get_current_epoch(state) >= voluntary_exit.epoch
// # Verify the validator has been active long enough
// assert get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, voluntary_exit.epoch)
// signing_root = compute_signing_root(voluntary_exit, domain)
// assert bls.Verify(validator.pubkey, signing_root, signed_voluntary_exit.signature)
// # Initiate exit
// initiate_validator_exit(state, voluntary_exit.validator_index)
func verifyExitConditions(validator state.ReadOnlyValidator, currentSlot types.Slot, exit *ethpb.VoluntaryExit) error {
currentEpoch := slots.ToEpoch(currentSlot)
// Verify the validator is active.

View File
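
The non-signature asserts in process_voluntary_exit translate to a handful of epoch checks. A sketch assuming simplified validator fields and illustrative constants (FAR_FUTURE_EPOCH as max uint64, SHARD_COMMITTEE_PERIOD of 256):

```go
package main

import (
	"errors"
	"fmt"
)

const (
	farFutureEpoch       = ^uint64(0)
	shardCommitteePeriod = 256
)

// validatorInfo is a simplified read-only validator view.
type validatorInfo struct {
	activationEpoch uint64
	exitEpoch       uint64
}

// verifyExitConditions mirrors the non-signature asserts in
// process_voluntary_exit from the pseudocode above.
func verifyExitConditions(v validatorInfo, currentEpoch, requestedExitEpoch uint64) error {
	if !(v.activationEpoch <= currentEpoch && currentEpoch < v.exitEpoch) {
		return errors.New("validator is not active")
	}
	if v.exitEpoch != farFutureEpoch {
		return errors.New("exit already initiated")
	}
	if currentEpoch < requestedExitEpoch {
		return errors.New("exit epoch is in the future")
	}
	if currentEpoch < v.activationEpoch+shardCommitteePeriod {
		return errors.New("validator has not been active long enough")
	}
	return nil
}

func main() {
	v := validatorInfo{activationEpoch: 0, exitEpoch: farFutureEpoch}
	fmt.Println(verifyExitConditions(v, 300, 100)) // <nil>
	fmt.Println(verifyExitConditions(v, 100, 100)) // not active long enough
}
```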

@@ -0,0 +1,3 @@
package blocks
var ProcessBLSToExecutionChange = processBLSToExecutionChange

View File

@@ -3,9 +3,16 @@
package blocks
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
)
@@ -29,3 +36,116 @@ func NewGenesisBlock(stateRoot []byte) *ethpb.SignedBeaconBlock {
}
return block
}
var ErrUnrecognizedState = errors.New("unknown underlying type for state.BeaconState value")
func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfaces.SignedBeaconBlock, error) {
root, err := st.HashTreeRoot(ctx)
if err != nil {
return nil, err
}
ps := st.ToProto()
switch ps.(type) {
case *ethpb.BeaconState:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBody{
RandaoReveal: make([]byte, fieldparams.BLSSignatureLength),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
},
},
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateAltair:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockAltair{
Block: &ethpb.BeaconBlockAltair{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyAltair{
RandaoReveal: make([]byte, fieldparams.BLSSignatureLength),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
},
},
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateBellatrix:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyBellatrix{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayload{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
},
},
},
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateCapella:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockCapella{
Block: &ethpb.BeaconBlockCapella{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyCapella{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadCapella{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
},
},
},
Signature: params.BeaconConfig().EmptySignature[:],
})
default:
return nil, ErrUnrecognizedState
}
}

View File
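
NewGenesisBlockForState dispatches on the concrete proto state type and fails with ErrUnrecognizedState rather than guessing a fork. A toy sketch of that dispatch shape, with invented stand-in types instead of Prysm's state and block containers:

```go
package main

import (
	"errors"
	"fmt"
)

// beaconState is a stand-in for the interface the real function accepts.
type beaconState interface{ Version() string }

type phase0State struct{}
type capellaState struct{}

func (phase0State) Version() string  { return "phase0" }
func (capellaState) Version() string { return "capella" }

var errUnrecognizedState = errors.New("unknown underlying type for state value")

// genesisBlockFor mirrors the shape of NewGenesisBlockForState: pick the
// fork-specific empty block based on the concrete state type, and fail loudly
// for anything unrecognized instead of guessing.
func genesisBlockFor(st beaconState) (string, error) {
	switch st.(type) {
	case phase0State:
		return "SignedBeaconBlock (phase0, empty body)", nil
	case capellaState:
		return "SignedBeaconBlockCapella (empty body incl. withdrawals)", nil
	default:
		return "", errUnrecognizedState
	}
}

func main() {
	b, err := genesisBlockFor(capellaState{})
	fmt.Println(b, err)
}
```

Returning an explicit error for an unknown state type keeps a future fork from silently producing a genesis block of the wrong shape.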

@@ -18,27 +18,27 @@ import (
//
// Spec pseudocode definition:
//
// def process_block_header(state: BeaconState, block: BeaconBlock) -> None:
// # Verify that the slots match
// assert block.slot == state.slot
// # Verify that the block is newer than latest block header
// assert block.slot > state.latest_block_header.slot
// # Verify that proposer index is the correct index
// assert block.proposer_index == get_beacon_proposer_index(state)
// # Verify that the parent matches
// assert block.parent_root == hash_tree_root(state.latest_block_header)
// # Cache current block as the new latest block
// state.latest_block_header = BeaconBlockHeader(
// slot=block.slot,
// proposer_index=block.proposer_index,
// parent_root=block.parent_root,
// state_root=Bytes32(), # Overwritten in the next process_slot call
// body_root=hash_tree_root(block.body),
// )
// def process_block_header(state: BeaconState, block: BeaconBlock) -> None:
// # Verify that the slots match
// assert block.slot == state.slot
// # Verify that the block is newer than latest block header
// assert block.slot > state.latest_block_header.slot
// # Verify that proposer index is the correct index
// assert block.proposer_index == get_beacon_proposer_index(state)
// # Verify that the parent matches
// assert block.parent_root == hash_tree_root(state.latest_block_header)
// # Cache current block as the new latest block
// state.latest_block_header = BeaconBlockHeader(
// slot=block.slot,
// proposer_index=block.proposer_index,
// parent_root=block.parent_root,
// state_root=Bytes32(), # Overwritten in the next process_slot call
// body_root=hash_tree_root(block.body),
// )
//
// # Verify proposer is not slashed
// proposer = state.validators[block.proposer_index]
// assert not proposer.slashed
// # Verify proposer is not slashed
// proposer = state.validators[block.proposer_index]
// assert not proposer.slashed
func ProcessBlockHeader(
ctx context.Context,
beaconState state.BeaconState,
@@ -73,27 +73,28 @@ func ProcessBlockHeader(
// using an unsigned block.
//
// Spec pseudocode definition:
// def process_block_header(state: BeaconState, block: BeaconBlock) -> None:
// # Verify that the slots match
// assert block.slot == state.slot
// # Verify that the block is newer than latest block header
// assert block.slot > state.latest_block_header.slot
// # Verify that proposer index is the correct index
// assert block.proposer_index == get_beacon_proposer_index(state)
// # Verify that the parent matches
// assert block.parent_root == hash_tree_root(state.latest_block_header)
// # Cache current block as the new latest block
// state.latest_block_header = BeaconBlockHeader(
// slot=block.slot,
// proposer_index=block.proposer_index,
// parent_root=block.parent_root,
// state_root=Bytes32(), # Overwritten in the next process_slot call
// body_root=hash_tree_root(block.body),
// )
//
// # Verify proposer is not slashed
// proposer = state.validators[block.proposer_index]
// assert not proposer.slashed
// def process_block_header(state: BeaconState, block: BeaconBlock) -> None:
// # Verify that the slots match
// assert block.slot == state.slot
// # Verify that the block is newer than latest block header
// assert block.slot > state.latest_block_header.slot
// # Verify that proposer index is the correct index
// assert block.proposer_index == get_beacon_proposer_index(state)
// # Verify that the parent matches
// assert block.parent_root == hash_tree_root(state.latest_block_header)
// # Cache current block as the new latest block
// state.latest_block_header = BeaconBlockHeader(
// slot=block.slot,
// proposer_index=block.proposer_index,
// parent_root=block.parent_root,
// state_root=Bytes32(), # Overwritten in the next process_slot call
// body_root=hash_tree_root(block.body),
// )
//
// # Verify proposer is not slashed
// proposer = state.validators[block.proposer_index]
// assert not proposer.slashed
func ProcessBlockHeaderNoVerify(
ctx context.Context,
beaconState state.BeaconState,

View File
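
The asserts in process_block_header are straightforward equality and ordering checks. A sketch with invented stand-in structs (roots as strings for brevity); caching the new latest block header is omitted:

```go
package main

import (
	"errors"
	"fmt"
)

// blockInfo carries just the block fields the header checks need.
type blockInfo struct {
	slot          uint64
	proposerIndex uint64
	parentRoot    string
}

// headerState carries the matching state-side values.
type headerState struct {
	slot              uint64
	latestHeaderSlot  uint64
	latestHeaderRoot  string // hash_tree_root(state.latest_block_header), stand-in
	expectedProposer  uint64
	proposerIsSlashed bool
}

// checkBlockHeader mirrors the asserts in process_block_header.
func checkBlockHeader(st headerState, blk blockInfo) error {
	if blk.slot != st.slot {
		return errors.New("block slot does not match state slot")
	}
	if blk.slot <= st.latestHeaderSlot {
		return errors.New("block is not newer than latest block header")
	}
	if blk.proposerIndex != st.expectedProposer {
		return errors.New("unexpected proposer index")
	}
	if blk.parentRoot != st.latestHeaderRoot {
		return errors.New("parent root mismatch")
	}
	if st.proposerIsSlashed {
		return errors.New("proposer is slashed")
	}
	return nil
}

func main() {
	st := headerState{slot: 5, latestHeaderSlot: 4, latestHeaderRoot: "r", expectedProposer: 7}
	fmt.Println(checkBlockHeader(st, blockInfo{slot: 5, proposerIndex: 7, parentRoot: "r"})) // <nil>
	fmt.Println(checkBlockHeader(st, blockInfo{slot: 6, proposerIndex: 7, parentRoot: "r"})) // slot mismatch
}
```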

@@ -10,7 +10,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"github.com/prysmaticlabs/prysm/v3/time/slots"
)
@@ -26,7 +25,8 @@ var (
//
// Spec code:
// def is_merge_transition_complete(state: BeaconState) -> bool:
// return state.latest_execution_payload_header != ExecutionPayloadHeader()
//
// return state.latest_execution_payload_header != ExecutionPayloadHeader()
func IsMergeTransitionComplete(st state.BeaconState) (bool, error) {
if st == nil {
return false, errors.New("nil state")
@@ -34,15 +34,14 @@ func IsMergeTransitionComplete(st state.BeaconState) (bool, error) {
if IsPreBellatrixVersion(st.Version()) {
return false, nil
}
if st.Version() > version.Bellatrix {
return true, nil
}
h, err := st.LatestExecutionPayloadHeader()
if err != nil {
return false, err
}
wrappedHeader, err := blocks.WrappedExecutionPayloadHeader(h)
if err != nil {
return false, err
}
isEmpty, err := blocks.IsEmptyExecutionData(wrappedHeader)
isEmpty, err := blocks.IsEmptyExecutionData(h)
if err != nil {
return false, err
}
@@ -53,7 +52,8 @@ func IsMergeTransitionComplete(st state.BeaconState) (bool, error) {
//
// Spec code:
// def is_execution_block(block: BeaconBlock) -> bool:
// return block.body.execution_payload != ExecutionPayload()
//
// return block.body.execution_payload != ExecutionPayload()
func IsExecutionBlock(body interfaces.BeaconBlockBody) (bool, error) {
if body == nil {
return false, errors.New("nil block body")
@@ -78,7 +78,8 @@ func IsExecutionBlock(body interfaces.BeaconBlockBody) (bool, error) {
//
// Spec code:
// def is_execution_enabled(state: BeaconState, body: BeaconBlockBody) -> bool:
// return is_merge_block(state, body) or is_merge_complete(state)
//
// return is_merge_block(state, body) or is_merge_complete(state)
func IsExecutionEnabled(st state.BeaconState, body interfaces.BeaconBlockBody) (bool, error) {
if st == nil || body == nil {
return false, errors.New("nil state or block body")
@@ -95,12 +96,8 @@ func IsExecutionEnabled(st state.BeaconState, body interfaces.BeaconBlockBody) (
// IsExecutionEnabledUsingHeader returns true if execution is enabled using a post-processed payload header and block body.
// This is an optimized version of IsExecutionEnabled where beacon state is not required as an argument.
func IsExecutionEnabledUsingHeader(header *enginev1.ExecutionPayloadHeader, body interfaces.BeaconBlockBody) (bool, error) {
wrappedHeader, err := blocks.WrappedExecutionPayloadHeader(header)
if err != nil {
return false, err
}
isEmpty, err := blocks.IsEmptyExecutionData(wrappedHeader)
func IsExecutionEnabledUsingHeader(header interfaces.ExecutionData, body interfaces.BeaconBlockBody) (bool, error) {
isEmpty, err := blocks.IsEmptyExecutionData(header)
if err != nil {
return false, err
}
@@ -119,9 +116,10 @@ func IsPreBellatrixVersion(v int) bool {
// These validation steps ONLY apply to post merge.
//
// Spec code:
// # Verify consistency of the parent hash with respect to the previous execution payload header
// if is_merge_complete(state):
// assert payload.parent_hash == state.latest_execution_payload_header.block_hash
//
// # Verify consistency of the parent hash with respect to the previous execution payload header
// if is_merge_complete(state):
// assert payload.parent_hash == state.latest_execution_payload_header.block_hash
func ValidatePayloadWhenMergeCompletes(st state.BeaconState, payload interfaces.ExecutionData) error {
complete, err := IsMergeTransitionComplete(st)
if err != nil {
@@ -130,12 +128,12 @@ func ValidatePayloadWhenMergeCompletes(st state.BeaconState, payload interfaces.
if !complete {
return nil
}
header, err := st.LatestExecutionPayloadHeader()
if err != nil {
return err
}
if !bytes.Equal(payload.ParentHash(), header.BlockHash) {
if !bytes.Equal(payload.ParentHash(), header.BlockHash()) {
log.Errorf("parent Hash %#x, header Hash: %#x", bytesutil.Trunc(payload.ParentHash()), bytesutil.Trunc(header.BlockHash()))
return ErrInvalidPayloadBlockHash
}
return nil
@@ -145,10 +143,11 @@ func ValidatePayloadWhenMergeCompletes(st state.BeaconState, payload interfaces.
// These validation steps apply to both pre merge and post merge.
//
// Spec code:
// # Verify random
// assert payload.random == get_randao_mix(state, get_current_epoch(state))
// # Verify timestamp
// assert payload.timestamp == compute_timestamp_at_slot(state, state.slot)
//
// # Verify random
// assert payload.random == get_randao_mix(state, get_current_epoch(state))
// # Verify timestamp
// assert payload.timestamp == compute_timestamp_at_slot(state, state.slot)
func ValidatePayload(st state.BeaconState, payload interfaces.ExecutionData) error {
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
@@ -174,48 +173,51 @@ func ValidatePayload(st state.BeaconState, payload interfaces.ExecutionData) err
//
// Spec code:
// def process_execution_payload(state: BeaconState, payload: ExecutionPayload, execution_engine: ExecutionEngine) -> None:
// # Verify consistency of the parent hash with respect to the previous execution payload header
// if is_merge_complete(state):
// assert payload.parent_hash == state.latest_execution_payload_header.block_hash
// # Verify random
// assert payload.random == get_randao_mix(state, get_current_epoch(state))
// # Verify timestamp
// assert payload.timestamp == compute_timestamp_at_slot(state, state.slot)
// # Verify the execution payload is valid
// assert execution_engine.execute_payload(payload)
// # Cache execution payload header
// state.latest_execution_payload_header = ExecutionPayloadHeader(
// parent_hash=payload.parent_hash,
// FeeRecipient=payload.FeeRecipient,
// state_root=payload.state_root,
// receipt_root=payload.receipt_root,
// logs_bloom=payload.logs_bloom,
// random=payload.random,
// block_number=payload.block_number,
// gas_limit=payload.gas_limit,
// gas_used=payload.gas_used,
// timestamp=payload.timestamp,
// extra_data=payload.extra_data,
// base_fee_per_gas=payload.base_fee_per_gas,
// block_hash=payload.block_hash,
// transactions_root=hash_tree_root(payload.transactions),
// )
//
// # Verify consistency of the parent hash with respect to the previous execution payload header
// if is_merge_complete(state):
// assert payload.parent_hash == state.latest_execution_payload_header.block_hash
// # Verify random
// assert payload.random == get_randao_mix(state, get_current_epoch(state))
// # Verify timestamp
// assert payload.timestamp == compute_timestamp_at_slot(state, state.slot)
// # Verify the execution payload is valid
// assert execution_engine.execute_payload(payload)
// # Cache execution payload header
// state.latest_execution_payload_header = ExecutionPayloadHeader(
// parent_hash=payload.parent_hash,
// FeeRecipient=payload.FeeRecipient,
// state_root=payload.state_root,
// receipt_root=payload.receipt_root,
// logs_bloom=payload.logs_bloom,
// random=payload.random,
// block_number=payload.block_number,
// gas_limit=payload.gas_limit,
// gas_used=payload.gas_used,
// timestamp=payload.timestamp,
// extra_data=payload.extra_data,
// base_fee_per_gas=payload.base_fee_per_gas,
// block_hash=payload.block_hash,
// transactions_root=hash_tree_root(payload.transactions),
// )
func ProcessPayload(st state.BeaconState, payload interfaces.ExecutionData) (state.BeaconState, error) {
if st.Version() >= version.Capella {
withdrawals, err := payload.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get payload withdrawals")
}
st, err = ProcessWithdrawals(st, withdrawals)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawals")
}
}
if err := ValidatePayloadWhenMergeCompletes(st, payload); err != nil {
return nil, err
}
if err := ValidatePayload(st, payload); err != nil {
return nil, err
}
header, err := blocks.PayloadToHeader(payload)
if err != nil {
return nil, err
}
wrappedHeader, err := blocks.WrappedExecutionPayloadHeader(header)
if err != nil {
return nil, err
}
if err := st.SetLatestExecutionPayloadHeader(wrappedHeader); err != nil {
if err := st.SetLatestExecutionPayloadHeader(payload); err != nil {
return nil, err
}
return st, nil
@@ -236,7 +238,7 @@ func ValidatePayloadHeaderWhenMergeCompletes(st state.BeaconState, header interf
if err != nil {
return err
}
if !bytes.Equal(header.ParentHash(), h.BlockHash) {
if !bytes.Equal(header.ParentHash(), h.BlockHash()) {
return ErrInvalidPayloadBlockHash
}
return nil
@@ -280,7 +282,7 @@ func ProcessPayloadHeader(st state.BeaconState, header interfaces.ExecutionData)
// GetBlockPayloadHash returns the hash of the execution payload of the block
func GetBlockPayloadHash(blk interfaces.BeaconBlock) ([32]byte, error) {
payloadHash := [32]byte{}
var payloadHash [32]byte
if IsPreBellatrixVersion(blk.Version()) {
return payloadHash, nil
}

View File
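
ValidatePayload boils down to two equality checks against the state: the RANDAO mix and the slot timestamp. A sketch with plain byte slices and integers, not Prysm's ExecutionData interface:

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
)

var (
	errInvalidPrevRandao = errors.New("incorrect prev randao")
	errInvalidTimestamp  = errors.New("incorrect timestamp")
)

// validatePayload mirrors the two checks in ValidatePayload: the payload must
// commit to the state's current RANDAO mix and to the slot's expected timestamp.
func validatePayload(prevRandao, expectedRandao []byte, timestamp, expectedTimestamp uint64) error {
	if !bytes.Equal(prevRandao, expectedRandao) {
		return errInvalidPrevRandao
	}
	if timestamp != expectedTimestamp {
		return errInvalidTimestamp
	}
	return nil
}

func main() {
	mix := []byte{0xaa, 0xbb}
	fmt.Println(validatePayload(mix, mix, 1700000000, 1700000000)) // <nil>
	fmt.Println(validatePayload([]byte{0x00}, mix, 1, 1))          // incorrect prev randao
}
```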

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
consensusblocks "github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
@@ -20,127 +21,170 @@ import (
func Test_IsMergeComplete(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayloadHeader
payload interfaces.ExecutionData
want bool
}{
{
name: "empty payload header",
payload: emptyPayloadHeader(),
want: false,
name: "empty payload header",
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
want: false,
},
{
name: "has parent hash",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has fee recipient",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.FeeRecipient = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.FeeRecipient = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has state root",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.StateRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.StateRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has receipt root",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ReceiptsRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ReceiptsRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has logs bloom",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.LogsBloom = bytesutil.PadTo([]byte{'a'}, fieldparams.LogsBloomLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.LogsBloom = bytesutil.PadTo([]byte{'a'}, fieldparams.LogsBloomLength)
return h
}(),
want: true,
},
{
name: "has random",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has base fee",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BaseFeePerGas = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BaseFeePerGas = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has block hash",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has extra data",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ExtraData = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ExtraData = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has block number",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockNumber = 1
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BlockNumber = 1
return h
}(),
want: true,
},
{
name: "has gas limit",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.GasLimit = 1
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.GasLimit = 1
return h
}(),
want: true,
},
{
name: "has gas used",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.GasUsed = 1
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.GasUsed = 1
return h
}(),
want: true,
},
{
name: "has time stamp",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.Timestamp = 1
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.Timestamp = 1
return h
}(),
want: true,
@@ -149,9 +193,7 @@ func Test_IsMergeComplete(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.payload)
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(wrappedHeader))
require.NoError(t, st.SetLatestExecutionPayloadHeader(tt.payload))
got, err := blocks.IsMergeTransitionComplete(st)
require.NoError(t, err)
if got != tt.want {
@@ -161,6 +203,13 @@ func Test_IsMergeComplete(t *testing.T) {
}
}
func Test_IsMergeCompleteCapella(t *testing.T) {
st, _ := util.DeterministicGenesisStateCapella(t, 1)
got, err := blocks.IsMergeTransitionComplete(st)
require.NoError(t, err)
require.Equal(t, got, true)
}
func Test_IsExecutionBlock(t *testing.T) {
tests := []struct {
name string
@@ -195,40 +244,65 @@ func Test_IsExecutionBlock(t *testing.T) {
}
}
func Test_IsExecutionBlockCapella(t *testing.T) {
blk := util.NewBeaconBlockCapella()
blk.Block.Body.ExecutionPayload = emptyPayloadCapella()
wrappedBlock, err := consensusblocks.NewBeaconBlock(blk.Block)
require.NoError(t, err)
got, err := blocks.IsExecutionBlock(wrappedBlock.Body())
require.NoError(t, err)
require.Equal(t, false, got)
}
func Test_IsExecutionEnabled(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
useAltairSt bool
want bool
}{
{
name: "use older than bellatrix state",
payload: emptyPayload(),
header: emptyPayloadHeader(),
name: "use older than bellatrix state",
payload: emptyPayload(),
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
useAltairSt: true,
want: false,
},
{
name: "empty header, empty payload",
payload: emptyPayload(),
header: emptyPayloadHeader(),
want: false,
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
want: false,
},
{
name: "non-empty header, empty payload",
payload: emptyPayload(),
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "empty header, non-empty payload",
header: emptyPayloadHeader(),
name: "empty header, non-empty payload",
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.Timestamp = 1
@@ -238,9 +312,12 @@ func Test_IsExecutionEnabled(t *testing.T) {
},
{
name: "non-empty header, non-empty payload",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
payload: func() *enginev1.ExecutionPayload {
@@ -254,9 +331,7 @@ func Test_IsExecutionEnabled(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(wrappedHeader))
require.NoError(t, st.SetLatestExecutionPayloadHeader(tt.header))
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload = tt.payload
body, err := consensusblocks.NewBeaconBlockBody(blk.Block.Body)
@@ -277,28 +352,39 @@ func Test_IsExecutionEnabledUsingHeader(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
want bool
}{
{
name: "empty header, empty payload",
payload: emptyPayload(),
header: emptyPayloadHeader(),
want: false,
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
want: false,
},
{
name: "non-empty header, empty payload",
payload: emptyPayload(),
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "empty header, non-empty payload",
header: emptyPayloadHeader(),
name: "empty header, non-empty payload",
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.Timestamp = 1
@@ -308,9 +394,12 @@ func Test_IsExecutionEnabledUsingHeader(t *testing.T) {
},
{
name: "non-empty header, non-empty payload",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
payload: func() *enginev1.ExecutionPayload {
@@ -340,14 +429,18 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
err error
}{
{
name: "merge incomplete",
payload: emptyPayload(),
header: emptyPayloadHeader(),
err: nil,
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
err: nil,
},
{
name: "validate passes",
@@ -356,9 +449,12 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
err: nil,
@@ -370,9 +466,12 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockHash = bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BlockHash = bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength)
return h
}(),
err: blocks.ErrInvalidPayloadBlockHash,
@@ -381,9 +480,7 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(wrappedHeader))
require.NoError(t, st.SetLatestExecutionPayloadHeader(tt.header))
wrappedPayload, err := consensusblocks.WrappedExecutionPayload(tt.payload)
require.NoError(t, err)
err = blocks.ValidatePayloadWhenMergeCompletes(st, wrappedPayload)
@@ -493,14 +590,31 @@ func Test_ProcessPayload(t *testing.T) {
require.Equal(t, tt.err, err)
want, err := consensusblocks.PayloadToHeader(wrappedPayload)
require.Equal(t, tt.err, err)
got, err := st.LatestExecutionPayloadHeader()
h, err := st.LatestExecutionPayloadHeader()
require.NoError(t, err)
got, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
require.DeepSSZEqual(t, want, got)
}
})
}
}
func Test_ProcessPayloadCapella(t *testing.T) {
st, _ := util.DeterministicGenesisStateCapella(t, 1)
header, err := emptyPayloadHeaderCapella()
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(header))
payload := emptyPayloadCapella()
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
payload.PrevRandao = random
wrapped, err := consensusblocks.WrappedExecutionPayloadCapella(payload)
require.NoError(t, err)
_, err = blocks.ProcessPayload(st, wrapped)
require.NoError(t, err)
}
func Test_ProcessPayloadHeader(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
@@ -509,29 +623,39 @@ func Test_ProcessPayloadHeader(t *testing.T) {
require.NoError(t, err)
tests := []struct {
name string
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
err error
}{
{
name: "process passes",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = uint64(ts.Unix())
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = random
p.Timestamp = uint64(ts.Unix())
return h
}(), err: nil,
},
{
name: "incorrect prev randao",
header: emptyPayloadHeader(),
err: blocks.ErrInvalidPayloadPrevRandao,
name: "incorrect prev randao",
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
err: blocks.ErrInvalidPayloadPrevRandao,
},
{
name: "incorrect timestamp",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = 1
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = random
p.Timestamp = 1
return h
}(),
err: blocks.ErrInvalidPayloadTimeStamp,
@@ -539,16 +663,18 @@ func Test_ProcessPayloadHeader(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
st, err := blocks.ProcessPayloadHeader(st, wrappedHeader)
st, err := blocks.ProcessPayloadHeader(st, tt.header)
if err != nil {
require.Equal(t, tt.err.Error(), err.Error())
} else {
require.Equal(t, tt.err, err)
got, err := st.LatestExecutionPayloadHeader()
want, ok := tt.header.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
h, err := st.LatestExecutionPayloadHeader()
require.NoError(t, err)
require.DeepSSZEqual(t, tt.header, got)
got, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
require.DeepSSZEqual(t, want, got)
}
})
}
@@ -562,29 +688,39 @@ func Test_ValidatePayloadHeader(t *testing.T) {
require.NoError(t, err)
tests := []struct {
name string
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
err error
}{
{
name: "process passes",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = uint64(ts.Unix())
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = random
p.Timestamp = uint64(ts.Unix())
return h
}(), err: nil,
},
{
name: "incorrect prev randao",
header: emptyPayloadHeader(),
err: blocks.ErrInvalidPayloadPrevRandao,
name: "incorrect prev randao",
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
err: blocks.ErrInvalidPayloadPrevRandao,
},
{
name: "incorrect timestamp",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = 1
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = random
p.Timestamp = 1
return h
}(),
err: blocks.ErrInvalidPayloadTimeStamp,
@@ -592,9 +728,7 @@ func Test_ValidatePayloadHeader(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
err = blocks.ValidatePayloadHeader(st, wrappedHeader)
err = blocks.ValidatePayloadHeader(st, tt.header)
require.Equal(t, tt.err, err)
})
}
@@ -609,13 +743,14 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
tests := []struct {
name string
state state.BeaconState
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
err error
}{
{
name: "no merge",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
state: emptySt,
@@ -623,9 +758,12 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
},
{
name: "process passes",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = []byte{'a'}
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = []byte{'a'}
return h
}(),
state: st,
@@ -633,9 +771,12 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
},
{
name: "invalid block hash",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = []byte{'b'}
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = []byte{'b'}
return h
}(),
state: st,
@@ -644,9 +785,7 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
err = blocks.ValidatePayloadHeaderWhenMergeCompletes(tt.state, wrappedHeader)
err = blocks.ValidatePayloadHeaderWhenMergeCompletes(tt.state, tt.header)
require.Equal(t, tt.err, err)
})
}
@@ -695,9 +834,9 @@ func Test_PayloadToHeader(t *testing.T) {
func BenchmarkBellatrixComplete(b *testing.B) {
st, _ := util.DeterministicGenesisStateBellatrix(b, 1)
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(emptyPayloadHeader())
h, err := emptyPayloadHeader()
require.NoError(b, err)
require.NoError(b, st.SetLatestExecutionPayloadHeader(wrappedHeader))
require.NoError(b, st.SetLatestExecutionPayloadHeader(h))
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -706,8 +845,8 @@ func BenchmarkBellatrixComplete(b *testing.B) {
}
}
func emptyPayloadHeader() *enginev1.ExecutionPayloadHeader {
return &enginev1.ExecutionPayloadHeader{
func emptyPayloadHeader() (interfaces.ExecutionData, error) {
return consensusblocks.WrappedExecutionPayloadHeader(&enginev1.ExecutionPayloadHeader{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
@@ -718,7 +857,23 @@ func emptyPayloadHeader() *enginev1.ExecutionPayloadHeader {
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
}
})
}
func emptyPayloadHeaderCapella() (interfaces.ExecutionData, error) {
return consensusblocks.WrappedExecutionPayloadHeaderCapella(&enginev1.ExecutionPayloadHeaderCapella{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
})
}
func emptyPayload() *enginev1.ExecutionPayload {
@@ -735,3 +890,19 @@ func emptyPayload() *enginev1.ExecutionPayload {
ExtraData: make([]byte, 0),
}
}
func emptyPayloadCapella() *enginev1.ExecutionPayloadCapella {
return &enginev1.ExecutionPayloadCapella{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
ExtraData: make([]byte, 0),
}
}
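
The helpers above mirror the Bellatrix ones: the header helper now returns its value already wrapped as an interfaces.ExecutionData, while the payload helpers still return raw protos that callers wrap themselves. A minimal caller-side sketch of that pattern, reusing only names that appear in this file and assuming the wrapped payload exposes the same accessors the header wrapper does (illustration only, not part of the diff):

func exampleWrapCapellaPayload(t *testing.T) {
    payload := emptyPayloadCapella()
    payload.BlockNumber = 42
    wrapped, err := consensusblocks.WrappedExecutionPayloadCapella(payload)
    require.NoError(t, err)
    // Fields are read back through the ExecutionData interface, not the proto.
    require.Equal(t, uint64(42), wrapped.BlockNumber())
    // The wrapped value can then be passed anywhere an interfaces.ExecutionData
    // is expected, e.g. blocks.ProcessPayload(st, wrapped) as in Test_ProcessPayloadCapella.
}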

View File

@@ -24,26 +24,27 @@ type slashValidatorFunc func(ctx context.Context, st state.BeaconState, vid type
// slashing conditions if any slashable events occurred.
//
// Spec pseudocode definition:
// def process_proposer_slashing(state: BeaconState, proposer_slashing: ProposerSlashing) -> None:
// header_1 = proposer_slashing.signed_header_1.message
// header_2 = proposer_slashing.signed_header_2.message
//
// # Verify header slots match
// assert header_1.slot == header_2.slot
// # Verify header proposer indices match
// assert header_1.proposer_index == header_2.proposer_index
// # Verify the headers are different
// assert header_1 != header_2
// # Verify the proposer is slashable
// proposer = state.validators[header_1.proposer_index]
// assert is_slashable_validator(proposer, get_current_epoch(state))
// # Verify signatures
// for signed_header in (proposer_slashing.signed_header_1, proposer_slashing.signed_header_2):
// domain = get_domain(state, DOMAIN_BEACON_PROPOSER, compute_epoch_at_slot(signed_header.message.slot))
// signing_root = compute_signing_root(signed_header.message, domain)
// assert bls.Verify(proposer.pubkey, signing_root, signed_header.signature)
// def process_proposer_slashing(state: BeaconState, proposer_slashing: ProposerSlashing) -> None:
// header_1 = proposer_slashing.signed_header_1.message
// header_2 = proposer_slashing.signed_header_2.message
//
// slash_validator(state, header_1.proposer_index)
// # Verify header slots match
// assert header_1.slot == header_2.slot
// # Verify header proposer indices match
// assert header_1.proposer_index == header_2.proposer_index
// # Verify the headers are different
// assert header_1 != header_2
// # Verify the proposer is slashable
// proposer = state.validators[header_1.proposer_index]
// assert is_slashable_validator(proposer, get_current_epoch(state))
// # Verify signatures
// for signed_header in (proposer_slashing.signed_header_1, proposer_slashing.signed_header_2):
// domain = get_domain(state, DOMAIN_BEACON_PROPOSER, compute_epoch_at_slot(signed_header.message.slot))
// signing_root = compute_signing_root(signed_header.message, domain)
// assert bls.Verify(proposer.pubkey, signing_root, signed_header.signature)
//
// slash_validator(state, header_1.proposer_index)
func ProcessProposerSlashings(
ctx context.Context,
beaconState state.BeaconState,
@@ -81,7 +82,7 @@ func ProcessProposerSlashing(
slashingQuotient = cfg.MinSlashingPenaltyQuotient
case beaconState.Version() == version.Altair:
slashingQuotient = cfg.MinSlashingPenaltyQuotientAltair
case beaconState.Version() == version.Bellatrix:
case beaconState.Version() >= version.Bellatrix:
slashingQuotient = cfg.MinSlashingPenaltyQuotientBellatrix
default:
return nil, errors.New("unknown state version")
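
The one-character change above (== version.Bellatrix becoming >= version.Bellatrix) is what lets Capella and any later fork fall through to the Bellatrix penalty quotient. A hedged sketch of the resulting selection, assuming the Phase0 case that sits just above this hunk and the config field names shown here (illustration only):

func slashingQuotientFor(v int, cfg *params.BeaconChainConfig) (uint64, error) {
    switch {
    case v == version.Phase0:
        return cfg.MinSlashingPenaltyQuotient, nil
    case v == version.Altair:
        return cfg.MinSlashingPenaltyQuotientAltair, nil
    case v >= version.Bellatrix: // Bellatrix, Capella, and later forks
        return cfg.MinSlashingPenaltyQuotientBellatrix, nil
    default:
        return 0, errors.New("unknown state version")
    }
}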

View File

@@ -282,6 +282,54 @@ func TestProcessProposerSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}
func TestProcessProposerSlashings_AppliesCorrectStatusCapella(t *testing.T) {
// We test the case when data is correct and verify that the validator
// registry has been updated.
beaconState, privKeys := util.DeterministicGenesisStateCapella(t, 100)
proposerIdx := types.ValidatorIndex(1)
header1 := &ethpb.SignedBeaconBlockHeader{
Header: util.HydrateBeaconHeader(&ethpb.BeaconBlockHeader{
ProposerIndex: proposerIdx,
StateRoot: bytesutil.PadTo([]byte("A"), 32),
}),
}
var err error
header1.Signature, err = signing.ComputeDomainAndSign(beaconState, 0, header1.Header, params.BeaconConfig().DomainBeaconProposer, privKeys[proposerIdx])
require.NoError(t, err)
header2 := util.HydrateSignedBeaconHeader(&ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ProposerIndex: proposerIdx,
StateRoot: bytesutil.PadTo([]byte("B"), 32),
},
})
header2.Signature, err = signing.ComputeDomainAndSign(beaconState, 0, header2.Header, params.BeaconConfig().DomainBeaconProposer, privKeys[proposerIdx])
require.NoError(t, err)
slashings := []*ethpb.ProposerSlashing{
{
Header_1: header1,
Header_2: header2,
},
}
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
if newStateVals[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf("Proposer with index 1 did not correctly exit,"+"wanted slot:%d, got:%d",
newStateVals[1].ExitEpoch, beaconState.Validators()[1].ExitEpoch)
}
require.Equal(t, uint64(31000000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}
func TestVerifyProposerSlashing(t *testing.T) {
type args struct {
beaconState state.BeaconState

View File

@@ -17,15 +17,16 @@ import (
// in the beacon state's latest randao mixes slice.
//
// Spec pseudocode definition:
// def process_randao(state: BeaconState, body: BeaconBlockBody) -> None:
// epoch = get_current_epoch(state)
// # Verify RANDAO reveal
// proposer = state.validators[get_beacon_proposer_index(state)]
// signing_root = compute_signing_root(epoch, get_domain(state, DOMAIN_RANDAO))
// assert bls.Verify(proposer.pubkey, signing_root, body.randao_reveal)
// # Mix in RANDAO reveal
// mix = xor(get_randao_mix(state, epoch), hash(body.randao_reveal))
// state.randao_mixes[epoch % EPOCHS_PER_HISTORICAL_VECTOR] = mix
//
// def process_randao(state: BeaconState, body: BeaconBlockBody) -> None:
// epoch = get_current_epoch(state)
// # Verify RANDAO reveal
// proposer = state.validators[get_beacon_proposer_index(state)]
// signing_root = compute_signing_root(epoch, get_domain(state, DOMAIN_RANDAO))
// assert bls.Verify(proposer.pubkey, signing_root, body.randao_reveal)
// # Mix in RANDAO reveal
// mix = xor(get_randao_mix(state, epoch), hash(body.randao_reveal))
// state.randao_mixes[epoch % EPOCHS_PER_HISTORICAL_VECTOR] = mix
func ProcessRandao(
ctx context.Context,
beaconState state.BeaconState,
@@ -56,11 +57,12 @@ func ProcessRandao(
// in the beacon state's latest randao mixes slice.
//
// Spec pseudocode definition:
// # Mix it in
// state.latest_randao_mixes[get_current_epoch(state) % LATEST_RANDAO_MIXES_LENGTH] = (
// xor(get_randao_mix(state, get_current_epoch(state)),
// hash(body.randao_reveal))
// )
//
// # Mix it in
// state.latest_randao_mixes[get_current_epoch(state) % LATEST_RANDAO_MIXES_LENGTH] = (
// xor(get_randao_mix(state, get_current_epoch(state)),
// hash(body.randao_reveal))
// )
func ProcessRandaoNoVerify(
beaconState state.BeaconState,
randaoReveal []byte,

View File
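
For reference, the mix update in the process_randao pseudocode above is an xor of the current mix with the hash of the reveal. A standalone sketch using the standard library sha256 for the hash (the production code goes through the state accessors; this is illustration only, not part of the diff):

import "crypto/sha256"

// nextRandaoMix computes xor(get_randao_mix(state, epoch), hash(body.randao_reveal))
// for a single 32-byte mix.
func nextRandaoMix(currentMix [32]byte, randaoReveal []byte) [32]byte {
    h := sha256.Sum256(randaoReveal)
    var mix [32]byte
    for i := range mix {
        mix[i] = currentMix[i] ^ h[i]
    }
    return mix
}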

@@ -19,7 +19,7 @@ import (
)
// signatureBatch retrieves the signature batch from the raw data, public key, signature and domain provided.
func signatureBatch(signedData, pub, signature, domain []byte) (*bls.SignatureBatch, error) {
func signatureBatch(signedData, pub, signature, domain []byte, desc string) (*bls.SignatureBatch, error) {
publicKey, err := bls.PublicKeyFromBytes(pub)
if err != nil {
return nil, errors.Wrap(err, "could not convert bytes to public key")
@@ -33,15 +33,16 @@ func signatureBatch(signedData, pub, signature, domain []byte) (*bls.SignatureBa
return nil, errors.Wrap(err, "could not hash container")
}
return &bls.SignatureBatch{
Signatures: [][]byte{signature},
PublicKeys: []bls.PublicKey{publicKey},
Messages: [][32]byte{root},
Signatures: [][]byte{signature},
PublicKeys: []bls.PublicKey{publicKey},
Messages: [][32]byte{root},
Descriptions: []string{desc},
}, nil
}
// verifySignature verifies the signature against the raw data, public key and domain provided.
func verifySignature(signedData, pub, signature, domain []byte) error {
set, err := signatureBatch(signedData, pub, signature, domain)
set, err := signatureBatch(signedData, pub, signature, domain, signing.UnknownSignature)
if err != nil {
return err
}
@@ -146,7 +147,7 @@ func RandaoSignatureBatch(
if err != nil {
return nil, err
}
set, err := signatureBatch(buf, proposerPub, reveal, domain)
set, err := signatureBatch(buf, proposerPub, reveal, domain, signing.RandaoSignature)
if err != nil {
return nil, err
}
@@ -186,6 +187,7 @@ func createAttestationSignatureBatch(
sigs := make([][]byte, len(atts))
pks := make([]bls.PublicKey, len(atts))
msgs := make([][32]byte, len(atts))
descs := make([]string, len(atts))
for i, a := range atts {
sigs[i] = a.Signature
c, err := helpers.BeaconCommitteeFromState(ctx, beaconState, a.Data.Slot, a.Data.CommitteeIndex)
@@ -216,11 +218,14 @@ func createAttestationSignatureBatch(
return nil, errors.Wrap(err, "could not get signing root of object")
}
msgs[i] = root
descs[i] = signing.AttestationSignature
}
return &bls.SignatureBatch{
Signatures: sigs,
PublicKeys: pks,
Messages: msgs,
Signatures: sigs,
PublicKeys: pks,
Messages: msgs,
Descriptions: descs,
}, nil
}
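
The common thread in this file is that every bls.SignatureBatch now carries a Descriptions slice parallel to the other three, so a failed batch verification can name the kind of signature that failed instead of reporting an anonymous mismatch. A minimal sketch of a described one-element batch, using only the field names and the signing.RandaoSignature constant shown above (not part of the diff):

func describedRandaoBatch(sig []byte, pub bls.PublicKey, msg [32]byte) *bls.SignatureBatch {
    return &bls.SignatureBatch{
        Signatures:   [][]byte{sig},
        PublicKeys:   []bls.PublicKey{pub},
        Messages:     [][32]byte{msg},
        Descriptions: []string{signing.RandaoSignature},
    }
}

The four slices are index-aligned, which is why createAttestationSignatureBatch above fills descs in the same loop that fills sigs, pks and msgs.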

View File

@@ -3,45 +3,92 @@ package blocks
import (
"bytes"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/crypto/hash/htr"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/crypto/hash"
"github.com/prysmaticlabs/prysm/v3/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v3/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
)
const executionToBLSPadding = 12
// ProcessBLSToExecutionChanges validates the SignedBLSToExecutionChange messages in the
// block body and updates the withdrawal credentials of the referenced validators.
func ProcessBLSToExecutionChanges(
st state.BeaconState,
signed interfaces.SignedBeaconBlock) (state.BeaconState, error) {
if signed.Version() < version.Capella {
return st, nil
}
changes, err := signed.Block().Body().BLSToExecutionChanges()
if err != nil {
return nil, errors.Wrap(err, "could not get BLSToExecutionChanges")
}
// Return early if no changes
if len(changes) == 0 {
return st, nil
}
for _, change := range changes {
st, err = processBLSToExecutionChange(st, change)
if err != nil {
return nil, errors.Wrap(err, "could not process BLSToExecutionChange")
}
}
return st, nil
}
// processBLSToExecutionChange validates a SignedBLSToExecutionChange message and
// changes the validator's withdrawal credentials accordingly.
//
// Spec pseudocode definition:
//
//def process_bls_to_execution_change(state: BeaconState, signed_address_change: SignedBLSToExecutionChange) -> None:
// validator = state.validators[address_change.validator_index]
// def process_bls_to_execution_change(state: BeaconState, signed_address_change: SignedBLSToExecutionChange) -> None:
//
// assert validator.withdrawal_credentials[:1] == BLS_WITHDRAWAL_PREFIX
// assert validator.withdrawal_credentials[1:] == hash(address_change.from_bls_pubkey)[1:]
// validator = state.validators[address_change.validator_index]
//
// domain = get_domain(state, DOMAIN_BLS_TO_EXECUTION_CHANGE)
// signing_root = compute_signing_root(address_change, domain)
// assert bls.Verify(address_change.from_bls_pubkey, signing_root, signed_address_change.signature)
// assert validator.withdrawal_credentials[:1] == BLS_WITHDRAWAL_PREFIX
// assert validator.withdrawal_credentials[1:] == hash(address_change.from_bls_pubkey)[1:]
//
// validator.withdrawal_credentials = (
// ETH1_ADDRESS_WITHDRAWAL_PREFIX
// + b'\x00' * 11
// + address_change.to_execution_address
// )
// domain = get_domain(state, DOMAIN_BLS_TO_EXECUTION_CHANGE)
// signing_root = compute_signing_root(address_change, domain)
// assert bls.Verify(address_change.from_bls_pubkey, signing_root, signed_address_change.signature)
//
func ProcessBLSToExecutionChange(st state.BeaconState, signed *ethpb.SignedBLSToExecutionChange) (state.BeaconState, error) {
// validator.withdrawal_credentials = (
// ETH1_ADDRESS_WITHDRAWAL_PREFIX
// + b'\x00' * 11
// + address_change.to_execution_address
// )
func processBLSToExecutionChange(st state.BeaconState, signed *ethpb.SignedBLSToExecutionChange) (state.BeaconState, error) {
// Checks that the message passes the validation conditions.
val, err := ValidateBLSToExecutionChange(st, signed)
if err != nil {
return nil, err
}
message := signed.Message
newCredentials := make([]byte, executionToBLSPadding)
newCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
val.WithdrawalCredentials = append(newCredentials, message.ToExecutionAddress...)
err = st.UpdateValidatorAtIndex(message.ValidatorIndex, val)
return st, err
}
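
The rewrite above packs the new credentials as one prefix byte, eleven zero bytes, and the 20-byte execution address, 32 bytes in total (executionToBLSPadding is 12). Extracted as a standalone sketch for illustration only, not part of the diff:

func eth1WithdrawalCredentials(toExecutionAddress []byte) []byte {
    // 12 bytes: ETH1_ADDRESS_WITHDRAWAL_PREFIX followed by 11 zero bytes.
    creds := make([]byte, executionToBLSPadding)
    creds[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
    // Appending the 20-byte execution address yields the 32-byte credentials.
    return append(creds, toExecutionAddress...)
}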
// ValidateBLSToExecutionChange validates the execution change message against the state and returns the
// validator referenced by the message.
func ValidateBLSToExecutionChange(st state.ReadOnlyBeaconState, signed *ethpb.SignedBLSToExecutionChange) (*ethpb.Validator, error) {
if signed == nil {
return st, errNilSignedWithdrawalMessage
return nil, errNilSignedWithdrawalMessage
}
message := signed.Message
if message == nil {
return st, errNilWithdrawalMessage
return nil, errNilWithdrawalMessage
}
val, err := st.ValidatorAtIndex(message.ValidatorIndex)
@@ -55,24 +102,113 @@ func ProcessBLSToExecutionChange(st state.BeaconState, signed *ethpb.SignedBLSTo
// hash the public key and verify it matches the withdrawal credentials
fromPubkey := message.FromBlsPubkey
pubkeyChunks := [][32]byte{bytesutil.ToBytes32(fromPubkey[:32]), bytesutil.ToBytes32(fromPubkey[32:])}
digest := make([][32]byte, 1)
htr.VectorizedSha256(pubkeyChunks, digest)
if !bytes.Equal(digest[0][1:], cred[1:]) {
hashFn := ssz.NewHasherFunc(hash.CustomSHA256Hasher())
digest := hashFn.Hash(fromPubkey)
if !bytes.Equal(digest[1:], cred[1:]) {
return nil, errInvalidWithdrawalCredentials
}
epoch := slots.ToEpoch(st.Slot())
domain, err := signing.Domain(st.Fork(), epoch, params.BeaconConfig().DomainBLSToExecutionChange, st.GenesisValidatorsRoot())
if err != nil {
return nil, err
}
if err := signing.VerifySigningRoot(message, fromPubkey, signed.Signature, domain); err != nil {
return nil, signing.ErrSigFailedToVerify
}
newCredentials := make([]byte, executionToBLSPadding)
newCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
val.WithdrawalCredentials = append(newCredentials, message.ToExecutionAddress...)
err = st.UpdateValidatorAtIndex(message.ValidatorIndex, val)
return st, err
return val, nil
}
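
The validation above hinges on withdrawal_credentials[1:] matching hash(from_bls_pubkey)[1:]. A sketch of that check in isolation, substituting the standard library sha256 for the CustomSHA256Hasher used in the real code and folding in the BLS prefix check that happens earlier in the function, outside this hunk (illustration only):

func credentialsMatchBLSKey(withdrawalCredentials, fromBLSPubkey []byte) bool {
    if len(withdrawalCredentials) != 32 {
        return false
    }
    if withdrawalCredentials[0] != params.BeaconConfig().BLSWithdrawalPrefixByte {
        return false
    }
    digest := sha256.Sum256(fromBLSPubkey)
    return bytes.Equal(digest[1:], withdrawalCredentials[1:])
}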
func ProcessWithdrawals(st state.BeaconState, withdrawals []*enginev1.Withdrawal) (state.BeaconState, error) {
expected, err := st.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get expected withdrawals")
}
if len(expected) != len(withdrawals) {
return nil, errInvalidWithdrawalNumber
}
for i, withdrawal := range withdrawals {
if withdrawal.Index != expected[i].Index {
return nil, errInvalidWithdrawalIndex
}
if withdrawal.ValidatorIndex != expected[i].ValidatorIndex {
return nil, errInvalidValidatorIndex
}
if !bytes.Equal(withdrawal.Address, expected[i].Address) {
return nil, errInvalidExecutionAddress
}
if withdrawal.Amount != expected[i].Amount {
return nil, errInvalidWithdrawalAmount
}
err := helpers.DecreaseBalance(st, withdrawal.ValidatorIndex, withdrawal.Amount)
if err != nil {
return nil, errors.Wrap(err, "could not decrease balance")
}
}
if len(withdrawals) > 0 {
if err := st.SetNextWithdrawalIndex(withdrawals[len(withdrawals)-1].Index + 1); err != nil {
return nil, errors.Wrap(err, "could not set next withdrawal index")
}
}
var nextValidatorIndex types.ValidatorIndex
if uint64(len(withdrawals)) < params.BeaconConfig().MaxWithdrawalsPerPayload {
nextValidatorIndex, err = st.NextWithdrawalValidatorIndex()
if err != nil {
return nil, errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += types.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex = nextValidatorIndex % types.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = withdrawals[len(withdrawals)-1].ValidatorIndex + 1
if nextValidatorIndex == types.ValidatorIndex(st.NumValidators()) {
nextValidatorIndex = 0
}
}
if err := st.SetNextWithdrawalValidatorIndex(nextValidatorIndex); err != nil {
return nil, errors.Wrap(err, "could not set next withdrawal validator index")
}
return st, nil
}
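
The final block of ProcessWithdrawals advances the sweep cursor: when the payload is not full, the cursor jumps ahead by MaxValidatorsPerWithdrawalsSweep modulo the validator count; when it is full, it resumes immediately after the last withdrawn validator, wrapping to zero at the end of the registry. The same arithmetic as a standalone sketch (illustration only, not part of the diff):

func nextSweepCursor(prev types.ValidatorIndex, withdrawals []*enginev1.Withdrawal, numVals uint64) types.ValidatorIndex {
    cfg := params.BeaconConfig()
    if uint64(len(withdrawals)) < cfg.MaxWithdrawalsPerPayload {
        // Partial payload: skip the whole range that was swept this block.
        next := prev + types.ValidatorIndex(cfg.MaxValidatorsPerWithdrawalsSweep)
        return next % types.ValidatorIndex(numVals)
    }
    // Full payload: resume right after the last validator that withdrew.
    next := withdrawals[len(withdrawals)-1].ValidatorIndex + 1
    if next == types.ValidatorIndex(numVals) {
        return 0
    }
    return next
}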
func BLSChangesSignatureBatch(
st state.ReadOnlyBeaconState,
changes []*ethpb.SignedBLSToExecutionChange,
) (*bls.SignatureBatch, error) {
// Return early if no changes
if len(changes) == 0 {
return bls.NewSet(), nil
}
batch := &bls.SignatureBatch{
Signatures: make([][]byte, len(changes)),
PublicKeys: make([]bls.PublicKey, len(changes)),
Messages: make([][32]byte, len(changes)),
Descriptions: make([]string, len(changes)),
}
c := params.BeaconConfig()
domain, err := signing.ComputeDomain(c.DomainBLSToExecutionChange, c.GenesisForkVersion, st.GenesisValidatorsRoot())
if err != nil {
return nil, errors.Wrap(err, "could not compute signing domain")
}
for i, change := range changes {
batch.Signatures[i] = change.Signature
publicKey, err := bls.PublicKeyFromBytes(change.Message.FromBlsPubkey)
if err != nil {
return nil, errors.Wrap(err, "could not convert bytes to public key")
}
batch.PublicKeys[i] = publicKey
htr, err := signing.SigningData(change.Message.HashTreeRoot, domain)
if err != nil {
return nil, errors.Wrap(err, "could not compute BLSToExecutionChange signing data")
}
batch.Messages[i] = htr
batch.Descriptions[i] = signing.BlsChangeSignature
}
return batch, nil
}
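
Note that the domain above is computed from GenesisForkVersion rather than the state's current fork, which is why the Bellatrix-state test later in this compare view still verifies successfully. A caller-side usage sketch, written as if from another package and mirroring those tests (illustration only, not part of the diff):

func verifyChangesBatch(st state.ReadOnlyBeaconState, changes []*ethpb.SignedBLSToExecutionChange) error {
    batch, err := blocks.BLSChangesSignatureBatch(st, changes)
    if err != nil {
        return err
    }
    ok, err := batch.Verify()
    if err != nil {
        return err
    }
    if !ok {
        return signing.ErrSigFailedToVerify
    }
    return nil
}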
// VerifyBLSChangeSignature checks the signature in the SignedBLSToExecutionChange message.
// It validates the signature with the Capella fork version if the passed state
// is from a previous fork.
func VerifyBLSChangeSignature(
st state.BeaconState,
change *ethpbv2.SignedBLSToExecutionChange,
) error {
c := params.BeaconConfig()
domain, err := signing.ComputeDomain(c.DomainBLSToExecutionChange, c.GenesisForkVersion, st.GenesisValidatorsRoot())
if err != nil {
return errors.Wrap(err, "could not compute signing domain")
}
publicKey := change.Message.FromBlsPubkey
return signing.VerifySigningRoot(change.Message, publicKey, change.Signature, domain)
}

View File

@@ -1,18 +1,26 @@
package blocks_test
import (
"math/rand"
"testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v3/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/crypto/hash/htr"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/crypto/bls/common"
"github.com/prysmaticlabs/prysm/v3/crypto/hash"
"github.com/prysmaticlabs/prysm/v3/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v3/proto/migration"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/testing/require"
"github.com/prysmaticlabs/prysm/v3/time/slots"
)
func TestProcessBLSToExecutionChange(t *testing.T) {
@@ -27,14 +35,13 @@ func TestProcessBLSToExecutionChange(t *testing.T) {
FromBlsPubkey: pubkey,
}
pubkeyChunks := [][32]byte{bytesutil.ToBytes32(pubkey[:32]), bytesutil.ToBytes32(pubkey[32:])}
digest := make([][32]byte, 1)
htr.VectorizedSha256(pubkeyChunks, digest)
digest[0][0] = params.BeaconConfig().BLSWithdrawalPrefixByte
hashFn := ssz.NewHasherFunc(hash.CustomSHA256Hasher())
digest := hashFn.Hash(pubkey)
digest[0] = params.BeaconConfig().BLSWithdrawalPrefixByte
registry := []*ethpb.Validator{
{
WithdrawalCredentials: digest[0][:],
WithdrawalCredentials: digest[:],
},
}
st, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
@@ -63,6 +70,47 @@ func TestProcessBLSToExecutionChange(t *testing.T) {
require.DeepEqual(t, message.ToExecutionAddress, val.WithdrawalCredentials[12:])
})
t.Run("happy case only validation", func(t *testing.T) {
priv, err := bls.RandKey()
require.NoError(t, err)
pubkey := priv.PublicKey().Marshal()
message := &ethpb.BLSToExecutionChange{
ToExecutionAddress: []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13},
ValidatorIndex: 0,
FromBlsPubkey: pubkey,
}
hashFn := ssz.NewHasherFunc(hash.CustomSHA256Hasher())
digest := hashFn.Hash(pubkey)
digest[0] = params.BeaconConfig().BLSWithdrawalPrefixByte
registry := []*ethpb.Validator{
{
WithdrawalCredentials: digest[:],
},
}
st, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Fork: &ethpb.Fork{
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
Slot: params.BeaconConfig().SlotsPerEpoch * 5,
})
require.NoError(t, err)
signature, err := signing.ComputeDomainAndSign(st, time.CurrentEpoch(st), message, params.BeaconConfig().DomainBLSToExecutionChange, priv)
require.NoError(t, err)
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: signature,
}
val, err := blocks.ValidateBLSToExecutionChange(st, signed)
require.NoError(t, err)
require.DeepEqual(t, digest[:], val.WithdrawalCredentials)
})
t.Run("non-existent validator", func(t *testing.T) {
priv, err := bls.RandKey()
@@ -75,14 +123,13 @@ func TestProcessBLSToExecutionChange(t *testing.T) {
FromBlsPubkey: pubkey,
}
pubkeyChunks := [][32]byte{bytesutil.ToBytes32(pubkey[:32]), bytesutil.ToBytes32(pubkey[32:])}
digest := make([][32]byte, 1)
htr.VectorizedSha256(pubkeyChunks, digest)
digest[0][0] = params.BeaconConfig().BLSWithdrawalPrefixByte
hashFn := ssz.NewHasherFunc(hash.CustomSHA256Hasher())
digest := hashFn.Hash(pubkey)
digest[0] = params.BeaconConfig().BLSWithdrawalPrefixByte
registry := []*ethpb.Validator{
{
WithdrawalCredentials: digest[0][:],
WithdrawalCredentials: digest[:],
},
}
st, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
@@ -155,15 +202,13 @@ func TestProcessBLSToExecutionChange(t *testing.T) {
ValidatorIndex: 0,
FromBlsPubkey: pubkey,
}
pubkeyChunks := [][32]byte{bytesutil.ToBytes32(pubkey[:32]), bytesutil.ToBytes32(pubkey[32:])}
digest := make([][32]byte, 1)
htr.VectorizedSha256(pubkeyChunks, digest)
digest[0][0] = params.BeaconConfig().BLSWithdrawalPrefixByte
hashFn := ssz.NewHasherFunc(hash.CustomSHA256Hasher())
digest := hashFn.Hash(pubkey)
digest[0] = params.BeaconConfig().BLSWithdrawalPrefixByte
registry := []*ethpb.Validator{
{
WithdrawalCredentials: digest[0][:],
WithdrawalCredentials: digest[:],
},
}
registry[0].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
@@ -191,3 +236,711 @@ func TestProcessBLSToExecutionChange(t *testing.T) {
})
}
func TestProcessWithdrawals(t *testing.T) {
const (
currentEpoch = types.Epoch(10)
epochInFuture = types.Epoch(12)
epochInPast = types.Epoch(8)
numValidators = 128
notWithdrawableIndex = 127
notPartiallyWithdrawable = 126
maxSweep = uint64(80)
)
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
type args struct {
Name string
NextWithdrawalValidatorIndex types.ValidatorIndex
NextWithdrawalIndex uint64
FullWithdrawalIndices []types.ValidatorIndex
PartialWithdrawalIndices []types.ValidatorIndex
Withdrawals []*enginev1.Withdrawal
}
type control struct {
NextWithdrawalValidatorIndex types.ValidatorIndex
NextWithdrawalIndex uint64
ExpectedError bool
Balances map[uint64]uint64
}
type Test struct {
Args args
Control control
}
executionAddress := func(i types.ValidatorIndex) []byte {
wc := make([]byte, 20)
wc[19] = byte(i)
return wc
}
withdrawalAmount := func(i types.ValidatorIndex) uint64 {
return maxEffectiveBalance + uint64(i)*100000
}
fullWithdrawal := func(i types.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
Address: executionAddress(i),
Amount: withdrawalAmount(i),
}
}
partialWithdrawal := func(i types.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
Address: executionAddress(i),
Amount: withdrawalAmount(i) - maxEffectiveBalance,
}
}
tests := []Test{
{
Args: args{
Name: "success no withdrawals",
NextWithdrawalValidatorIndex: 10,
NextWithdrawalIndex: 3,
},
Control: control{
NextWithdrawalValidatorIndex: 90,
NextWithdrawalIndex: 3,
},
},
{
Args: args{
Name: "success one full withdrawal",
NextWithdrawalIndex: 3,
NextWithdrawalValidatorIndex: 5,
FullWithdrawalIndices: []types.ValidatorIndex{70},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(70, 3),
},
},
Control: control{
NextWithdrawalValidatorIndex: 85,
NextWithdrawalIndex: 4,
Balances: map[uint64]uint64{70: 0},
},
},
{
Args: args{
Name: "success one partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 120,
PartialWithdrawalIndices: []types.ValidatorIndex{7},
Withdrawals: []*enginev1.Withdrawal{
partialWithdrawal(7, 21),
},
},
Control: control{
NextWithdrawalValidatorIndex: 72,
NextWithdrawalIndex: 22,
Balances: map[uint64]uint64{7: maxEffectiveBalance},
},
},
{
Args: args{
Name: "success many full withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []types.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), fullWithdrawal(28, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{7: 0, 19: 0, 28: 0},
},
},
{
Args: args{
Name: "Less than max sweep at end",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []types.ValidatorIndex{80, 81, 82, 83},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(80, 22), fullWithdrawal(81, 23), fullWithdrawal(82, 24),
fullWithdrawal(83, 25),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 26,
Balances: map[uint64]uint64{80: 0, 81: 0, 82: 0, 83: 0},
},
},
{
Args: args{
Name: "Less than max sweep and beginning",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []types.ValidatorIndex{4, 5, 6},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(4, 22), fullWithdrawal(5, 23), fullWithdrawal(6, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{4: 0, 5: 0, 6: 0},
},
},
{
Args: args{
Name: "success many partial withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PartialWithdrawalIndices: []types.ValidatorIndex{7, 19, 28},
Withdrawals: []*enginev1.Withdrawal{
partialWithdrawal(7, 22), partialWithdrawal(19, 23), partialWithdrawal(28, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{
7: maxEffectiveBalance,
19: maxEffectiveBalance,
28: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success many withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []types.ValidatorIndex{7, 19, 28},
PartialWithdrawalIndices: []types.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
partialWithdrawal(89, 22), partialWithdrawal(1, 23), partialWithdrawal(2, 24),
fullWithdrawal(7, 25), partialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
},
Control: control{
NextWithdrawalValidatorIndex: 40,
NextWithdrawalIndex: 29,
Balances: map[uint64]uint64{
7: 0, 19: 0, 28: 0,
2: maxEffectiveBalance, 1: maxEffectiveBalance, 89: maxEffectiveBalance,
15: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success more than max fully withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
FullWithdrawalIndices: []types.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(1, 22), fullWithdrawal(2, 23), fullWithdrawal(3, 24),
fullWithdrawal(4, 25), fullWithdrawal(5, 26), fullWithdrawal(6, 27),
fullWithdrawal(7, 28), fullWithdrawal(8, 29), fullWithdrawal(9, 30),
fullWithdrawal(21, 31), fullWithdrawal(22, 32), fullWithdrawal(23, 33),
fullWithdrawal(24, 34), fullWithdrawal(25, 35), fullWithdrawal(26, 36),
fullWithdrawal(27, 37),
},
},
Control: control{
NextWithdrawalValidatorIndex: 28,
NextWithdrawalIndex: 38,
Balances: map[uint64]uint64{
1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0,
21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0,
},
},
},
{
Args: args{
Name: "success more than max partially withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
PartialWithdrawalIndices: []types.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
partialWithdrawal(1, 22), partialWithdrawal(2, 23), partialWithdrawal(3, 24),
partialWithdrawal(4, 25), partialWithdrawal(5, 26), partialWithdrawal(6, 27),
partialWithdrawal(7, 28), partialWithdrawal(8, 29), partialWithdrawal(9, 30),
partialWithdrawal(21, 31), partialWithdrawal(22, 32), partialWithdrawal(23, 33),
partialWithdrawal(24, 34), partialWithdrawal(25, 35), partialWithdrawal(26, 36),
partialWithdrawal(27, 37),
},
},
Control: control{
NextWithdrawalValidatorIndex: 28,
NextWithdrawalIndex: 38,
Balances: map[uint64]uint64{
1: maxEffectiveBalance,
2: maxEffectiveBalance,
3: maxEffectiveBalance,
4: maxEffectiveBalance,
5: maxEffectiveBalance,
6: maxEffectiveBalance,
7: maxEffectiveBalance,
8: maxEffectiveBalance,
9: maxEffectiveBalance,
21: maxEffectiveBalance,
22: maxEffectiveBalance,
23: maxEffectiveBalance,
24: maxEffectiveBalance,
25: maxEffectiveBalance,
26: maxEffectiveBalance,
27: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "failure wrong number of partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 37,
PartialWithdrawalIndices: []types.ValidatorIndex{7},
Withdrawals: []*enginev1.Withdrawal{
partialWithdrawal(7, 21), partialWithdrawal(9, 22),
},
},
Control: control{
ExpectedError: true,
},
},
{
Args: args{
Name: "failure invalid withdrawal index",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []types.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), fullWithdrawal(28, 25),
fullWithdrawal(1, 25),
},
},
Control: control{
ExpectedError: true,
},
},
{
Args: args{
Name: "failure invalid validator index",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []types.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), fullWithdrawal(27, 24),
fullWithdrawal(1, 25),
},
},
Control: control{
ExpectedError: true,
},
},
{
Args: args{
Name: "failure invalid withdrawal amount",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []types.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), partialWithdrawal(28, 24),
fullWithdrawal(1, 25),
},
},
Control: control{
ExpectedError: true,
},
},
{
Args: args{
Name: "failure validator not fully withdrawable",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []types.ValidatorIndex{notWithdrawableIndex},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(notWithdrawableIndex, 22),
},
},
Control: control{
ExpectedError: true,
},
},
{
Args: args{
Name: "failure validator not partially withdrawable",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PartialWithdrawalIndices: []types.ValidatorIndex{notPartiallyWithdrawable},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(notPartiallyWithdrawable, 22),
},
},
Control: control{
ExpectedError: true,
},
},
}
checkPostState := func(t *testing.T, expected control, st state.BeaconState) {
l, err := st.NextWithdrawalValidatorIndex()
require.NoError(t, err)
require.Equal(t, expected.NextWithdrawalValidatorIndex, l)
n, err := st.NextWithdrawalIndex()
require.NoError(t, err)
require.Equal(t, expected.NextWithdrawalIndex, n)
balances := st.Balances()
for idx, bal := range expected.Balances {
require.Equal(t, bal, balances[idx])
}
}
prepareValidators := func(st *ethpb.BeaconStateCapella, arguments args) (state.BeaconState, error) {
validators := make([]*ethpb.Validator, numValidators)
st.Balances = make([]uint64, numValidators)
for i := range validators {
v := &ethpb.Validator{}
v.EffectiveBalance = maxEffectiveBalance
v.WithdrawableEpoch = epochInFuture
v.WithdrawalCredentials = make([]byte, 32)
v.WithdrawalCredentials[31] = byte(i)
st.Balances[i] = v.EffectiveBalance - uint64(rand.Intn(1000))
validators[i] = v
}
for _, idx := range arguments.FullWithdrawalIndices {
if idx != notWithdrawableIndex {
validators[idx].WithdrawableEpoch = epochInPast
}
st.Balances[idx] = withdrawalAmount(idx)
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
}
for _, idx := range arguments.PartialWithdrawalIndices {
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
st.Balances[idx] = withdrawalAmount(idx)
}
st.Validators = validators
return state_native.InitializeFromProtoCapella(st)
}
for _, test := range tests {
t.Run(test.Args.Name, func(t *testing.T) {
saved := params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = maxSweep
if test.Args.Withdrawals == nil {
test.Args.Withdrawals = make([]*enginev1.Withdrawal, 0)
}
if test.Args.FullWithdrawalIndices == nil {
test.Args.FullWithdrawalIndices = make([]types.ValidatorIndex, 0)
}
if test.Args.PartialWithdrawalIndices == nil {
test.Args.PartialWithdrawalIndices = make([]types.ValidatorIndex, 0)
}
slot, err := slots.EpochStart(currentEpoch)
require.NoError(t, err)
spb := &ethpb.BeaconStateCapella{
Slot: slot,
NextWithdrawalValidatorIndex: test.Args.NextWithdrawalValidatorIndex,
NextWithdrawalIndex: test.Args.NextWithdrawalIndex,
}
st, err := prepareValidators(spb, test.Args)
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, test.Args.Withdrawals)
if test.Control.ExpectedError {
require.NotNil(t, err)
} else {
require.NoError(t, err)
checkPostState(t, test.Control, post)
}
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = saved
})
}
}
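
A side note on the setup above: the subtest mutates MaxValidatorsPerWithdrawalsSweep in place on the shared config and restores it before returning. The copy-and-override pattern used by TestBLSChangesSignatureBatchFromBellatrix further down expresses the same idea; inside the subtest it would look roughly like this (a sketch, not part of the diff):

cfg := params.BeaconConfig()
saved := cfg.Copy()
cfg.MaxValidatorsPerWithdrawalsSweep = maxSweep
params.OverrideBeaconConfig(cfg)
defer params.OverrideBeaconConfig(saved)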
func TestProcessBLSToExecutionChanges(t *testing.T) {
spb := &ethpb.BeaconStateCapella{
Fork: &ethpb.Fork{
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
}
numValidators := 10
validators := make([]*ethpb.Validator, numValidators)
blsChanges := make([]*ethpb.BLSToExecutionChange, numValidators)
spb.Balances = make([]uint64, numValidators)
privKeys := make([]common.SecretKey, numValidators)
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
executionAddress := []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13}
for i := range validators {
v := &ethpb.Validator{}
v.EffectiveBalance = maxEffectiveBalance
v.WithdrawableEpoch = params.BeaconConfig().FarFutureEpoch
v.WithdrawalCredentials = make([]byte, 32)
priv, err := bls.RandKey()
require.NoError(t, err)
privKeys[i] = priv
pubkey := priv.PublicKey().Marshal()
message := &ethpb.BLSToExecutionChange{
ToExecutionAddress: executionAddress,
ValidatorIndex: types.ValidatorIndex(i),
FromBlsPubkey: pubkey,
}
hashFn := ssz.NewHasherFunc(hash.CustomSHA256Hasher())
digest := hashFn.Hash(pubkey)
digest[0] = params.BeaconConfig().BLSWithdrawalPrefixByte
copy(v.WithdrawalCredentials, digest[:])
validators[i] = v
blsChanges[i] = message
}
spb.Validators = validators
st, err := state_native.InitializeFromProtoCapella(spb)
require.NoError(t, err)
signedChanges := make([]*ethpb.SignedBLSToExecutionChange, numValidators)
for i, message := range blsChanges {
signature, err := signing.ComputeDomainAndSign(st, time.CurrentEpoch(st), message, params.BeaconConfig().DomainBLSToExecutionChange, privKeys[i])
require.NoError(t, err)
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: signature,
}
signedChanges[i] = signed
}
body := &ethpb.BeaconBlockBodyCapella{
BlsToExecutionChanges: signedChanges,
}
bpb := &ethpb.BeaconBlockCapella{
Body: body,
}
sbpb := &ethpb.SignedBeaconBlockCapella{
Block: bpb,
}
signed, err := consensusblocks.NewSignedBeaconBlock(sbpb)
require.NoError(t, err)
st, err = blocks.ProcessBLSToExecutionChanges(st, signed)
require.NoError(t, err)
vals := st.Validators()
for _, val := range vals {
require.DeepEqual(t, executionAddress, val.WithdrawalCredentials[12:])
require.Equal(t, params.BeaconConfig().ETH1AddressWithdrawalPrefixByte, val.WithdrawalCredentials[0])
}
}
func TestBLSChangesSignatureBatch(t *testing.T) {
spb := &ethpb.BeaconStateCapella{
Fork: &ethpb.Fork{
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
}
numValidators := 10
validators := make([]*ethpb.Validator, numValidators)
blsChanges := make([]*ethpb.BLSToExecutionChange, numValidators)
spb.Balances = make([]uint64, numValidators)
privKeys := make([]common.SecretKey, numValidators)
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
executionAddress := []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13}
for i := range validators {
v := &ethpb.Validator{}
v.EffectiveBalance = maxEffectiveBalance
v.WithdrawableEpoch = params.BeaconConfig().FarFutureEpoch
v.WithdrawalCredentials = make([]byte, 32)
priv, err := bls.RandKey()
require.NoError(t, err)
privKeys[i] = priv
pubkey := priv.PublicKey().Marshal()
message := &ethpb.BLSToExecutionChange{
ToExecutionAddress: executionAddress,
ValidatorIndex: types.ValidatorIndex(i),
FromBlsPubkey: pubkey,
}
hashFn := ssz.NewHasherFunc(hash.CustomSHA256Hasher())
digest := hashFn.Hash(pubkey)
digest[0] = params.BeaconConfig().BLSWithdrawalPrefixByte
copy(v.WithdrawalCredentials, digest[:])
validators[i] = v
blsChanges[i] = message
}
spb.Validators = validators
st, err := state_native.InitializeFromProtoCapella(spb)
require.NoError(t, err)
signedChanges := make([]*ethpb.SignedBLSToExecutionChange, numValidators)
for i, message := range blsChanges {
signature, err := signing.ComputeDomainAndSign(st, time.CurrentEpoch(st), message, params.BeaconConfig().DomainBLSToExecutionChange, privKeys[i])
require.NoError(t, err)
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: signature,
}
signedChanges[i] = signed
}
batch, err := blocks.BLSChangesSignatureBatch(st, signedChanges)
require.NoError(t, err)
verify, err := batch.Verify()
require.NoError(t, err)
require.Equal(t, true, verify)
// Verify a single change
change := migration.V1Alpha1SignedBLSToExecChangeToV2(signedChanges[0])
require.NoError(t, blocks.VerifyBLSChangeSignature(st, change))
}
func TestBLSChangesSignatureBatchWrongFork(t *testing.T) {
spb := &ethpb.BeaconStateCapella{
Fork: &ethpb.Fork{
CurrentVersion: params.BeaconConfig().CapellaForkVersion,
PreviousVersion: params.BeaconConfig().BellatrixForkVersion,
Epoch: params.BeaconConfig().CapellaForkEpoch,
},
}
numValidators := 10
validators := make([]*ethpb.Validator, numValidators)
blsChanges := make([]*ethpb.BLSToExecutionChange, numValidators)
spb.Balances = make([]uint64, numValidators)
privKeys := make([]common.SecretKey, numValidators)
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
executionAddress := []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13}
for i := range validators {
v := &ethpb.Validator{}
v.EffectiveBalance = maxEffectiveBalance
v.WithdrawableEpoch = params.BeaconConfig().FarFutureEpoch
v.WithdrawalCredentials = make([]byte, 32)
priv, err := bls.RandKey()
require.NoError(t, err)
privKeys[i] = priv
pubkey := priv.PublicKey().Marshal()
message := &ethpb.BLSToExecutionChange{
ToExecutionAddress: executionAddress,
ValidatorIndex: types.ValidatorIndex(i),
FromBlsPubkey: pubkey,
}
hashFn := ssz.NewHasherFunc(hash.CustomSHA256Hasher())
digest := hashFn.Hash(pubkey)
digest[0] = params.BeaconConfig().BLSWithdrawalPrefixByte
copy(v.WithdrawalCredentials, digest[:])
validators[i] = v
blsChanges[i] = message
}
spb.Validators = validators
st, err := state_native.InitializeFromProtoCapella(spb)
require.NoError(t, err)
signedChanges := make([]*ethpb.SignedBLSToExecutionChange, numValidators)
for i, message := range blsChanges {
signature, err := signing.ComputeDomainAndSign(st, time.CurrentEpoch(st), message, params.BeaconConfig().DomainBLSToExecutionChange, privKeys[i])
require.NoError(t, err)
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: signature,
}
signedChanges[i] = signed
}
batch, err := blocks.BLSChangesSignatureBatch(st, signedChanges)
require.NoError(t, err)
verify, err := batch.Verify()
require.NoError(t, err)
require.Equal(t, false, verify)
// Verify a single change
change := migration.V1Alpha1SignedBLSToExecChangeToV2(signedChanges[0])
require.ErrorIs(t, signing.ErrSigFailedToVerify, blocks.VerifyBLSChangeSignature(st, change))
}
func TestBLSChangesSignatureBatchFromBellatrix(t *testing.T) {
cfg := params.BeaconConfig()
savedConfig := cfg.Copy()
cfg.CapellaForkEpoch = cfg.BellatrixForkEpoch.AddEpoch(2)
params.OverrideBeaconConfig(cfg)
spb := &ethpb.BeaconStateBellatrix{
Fork: &ethpb.Fork{
CurrentVersion: params.BeaconConfig().BellatrixForkVersion,
PreviousVersion: params.BeaconConfig().AltairForkVersion,
Epoch: params.BeaconConfig().BellatrixForkEpoch,
},
}
numValidators := 10
validators := make([]*ethpb.Validator, numValidators)
blsChanges := make([]*ethpb.BLSToExecutionChange, numValidators)
spb.Balances = make([]uint64, numValidators)
slot, err := slots.EpochStart(params.BeaconConfig().BellatrixForkEpoch)
require.NoError(t, err)
spb.Slot = slot
privKeys := make([]common.SecretKey, numValidators)
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
executionAddress := []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13}
for i := range validators {
v := &ethpb.Validator{}
v.EffectiveBalance = maxEffectiveBalance
v.WithdrawableEpoch = params.BeaconConfig().FarFutureEpoch
v.WithdrawalCredentials = make([]byte, 32)
priv, err := bls.RandKey()
require.NoError(t, err)
privKeys[i] = priv
pubkey := priv.PublicKey().Marshal()
message := &ethpb.BLSToExecutionChange{
ToExecutionAddress: executionAddress,
ValidatorIndex: types.ValidatorIndex(i),
FromBlsPubkey: pubkey,
}
hashFn := ssz.NewHasherFunc(hash.CustomSHA256Hasher())
digest := hashFn.Hash(pubkey)
digest[0] = params.BeaconConfig().BLSWithdrawalPrefixByte
copy(v.WithdrawalCredentials, digest[:])
validators[i] = v
blsChanges[i] = message
}
spb.Validators = validators
st, err := state_native.InitializeFromProtoBellatrix(spb)
require.NoError(t, err)
signedChanges := make([]*ethpb.SignedBLSToExecutionChange, numValidators)
spc := &ethpb.BeaconStateCapella{
Fork: &ethpb.Fork{
CurrentVersion: params.BeaconConfig().CapellaForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
Epoch: params.BeaconConfig().CapellaForkEpoch,
},
}
slot, err = slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
require.NoError(t, err)
spc.Slot = slot
stc, err := state_native.InitializeFromProtoCapella(spc)
require.NoError(t, err)
for i, message := range blsChanges {
signature, err := signing.ComputeDomainAndSign(stc, 0, message, params.BeaconConfig().DomainBLSToExecutionChange, privKeys[i])
require.NoError(t, err)
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: signature,
}
signedChanges[i] = signed
}
batch, err := blocks.BLSChangesSignatureBatch(st, signedChanges)
require.NoError(t, err)
verify, err := batch.Verify()
require.NoError(t, err)
require.Equal(t, true, verify)
// Verify a single change
change := migration.V1Alpha1SignedBLSToExecChangeToV2(signedChanges[0])
require.NoError(t, blocks.VerifyBLSChangeSignature(st, change))
params.OverrideBeaconConfig(savedConfig)
}

View File

@@ -0,0 +1,31 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["upgrade.go"],
importpath = "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/capella",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["upgrade_test.go"],
deps = [
":go_default_library",
"//beacon-chain/core/time:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
],
)

View File

@@ -0,0 +1,101 @@
package capella
import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v3/config/params"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
)
// UpgradeToCapella upgrades the given beacon state to a Capella beacon state.
func UpgradeToCapella(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
currentSyncCommittee, err := state.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := state.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := state.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := state.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := state.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := state.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
hrs, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateCapella{
GenesisTime: state.GenesisTime(),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().CapellaForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: hrs,
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: make([]byte, 32),
},
NextWithdrawalIndex: 0,
NextWithdrawalValidatorIndex: 0,
HistoricalSummaries: make([]*ethpb.HistoricalSummary, 0),
}
return state_native.InitializeFromProtoUnsafeCapella(s)
}
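
UpgradeToCapella copies every Bellatrix field forward, rebuilds the latest execution payload header as a Capella header with a zeroed WithdrawalsRoot, and initializes the new withdrawal and historical-summary bookkeeping to empty values. A hedged sketch of how a caller might gate the upgrade at the fork boundary; the exact gating lives in the transition code, which is not part of this hunk, so the epoch check here is an assumption (illustration only):

func maybeUpgradeToCapella(st state.BeaconState) (state.BeaconState, error) {
    // Assumed gating: run the upgrade only at the configured Capella fork epoch.
    if time.CurrentEpoch(st) != params.BeaconConfig().CapellaForkEpoch {
        return st, nil
    }
    return capella.UpgradeToCapella(st)
}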

View File

@@ -0,0 +1,104 @@
package capella_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/capella"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/testing/require"
"github.com/prysmaticlabs/prysm/v3/testing/util"
)
func TestUpgradeToCapella(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, params.BeaconConfig().MaxValidatorsPerCommittee)
preForkState := st.Copy()
mSt, err := capella.UpgradeToCapella(st)
require.NoError(t, err)
require.Equal(t, preForkState.GenesisTime(), mSt.GenesisTime())
require.DeepSSZEqual(t, preForkState.GenesisValidatorsRoot(), mSt.GenesisValidatorsRoot())
require.Equal(t, preForkState.Slot(), mSt.Slot())
require.DeepSSZEqual(t, preForkState.LatestBlockHeader(), mSt.LatestBlockHeader())
require.DeepSSZEqual(t, preForkState.BlockRoots(), mSt.BlockRoots())
require.DeepSSZEqual(t, preForkState.StateRoots(), mSt.StateRoots())
require.DeepSSZEqual(t, preForkState.Eth1Data(), mSt.Eth1Data())
require.DeepSSZEqual(t, preForkState.Eth1DataVotes(), mSt.Eth1DataVotes())
require.DeepSSZEqual(t, preForkState.Eth1DepositIndex(), mSt.Eth1DepositIndex())
require.DeepSSZEqual(t, preForkState.Validators(), mSt.Validators())
require.DeepSSZEqual(t, preForkState.Balances(), mSt.Balances())
require.DeepSSZEqual(t, preForkState.RandaoMixes(), mSt.RandaoMixes())
require.DeepSSZEqual(t, preForkState.Slashings(), mSt.Slashings())
require.DeepSSZEqual(t, preForkState.JustificationBits(), mSt.JustificationBits())
require.DeepSSZEqual(t, preForkState.PreviousJustifiedCheckpoint(), mSt.PreviousJustifiedCheckpoint())
require.DeepSSZEqual(t, preForkState.CurrentJustifiedCheckpoint(), mSt.CurrentJustifiedCheckpoint())
require.DeepSSZEqual(t, preForkState.FinalizedCheckpoint(), mSt.FinalizedCheckpoint())
numValidators := mSt.NumValidators()
p, err := mSt.PreviousEpochParticipation()
require.NoError(t, err)
require.DeepSSZEqual(t, make([]byte, numValidators), p)
p, err = mSt.CurrentEpochParticipation()
require.NoError(t, err)
require.DeepSSZEqual(t, make([]byte, numValidators), p)
s, err := mSt.InactivityScores()
require.NoError(t, err)
require.DeepSSZEqual(t, make([]uint64, numValidators), s)
f := mSt.Fork()
require.DeepSSZEqual(t, &ethpb.Fork{
PreviousVersion: st.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().CapellaForkVersion,
Epoch: time.CurrentEpoch(st),
}, f)
csc, err := mSt.CurrentSyncCommittee()
require.NoError(t, err)
psc, err := preForkState.CurrentSyncCommittee()
require.NoError(t, err)
require.DeepSSZEqual(t, psc, csc)
nsc, err := mSt.NextSyncCommittee()
require.NoError(t, err)
psc, err = preForkState.NextSyncCommittee()
require.NoError(t, err)
require.DeepSSZEqual(t, psc, nsc)
header, err := mSt.LatestExecutionPayloadHeader()
require.NoError(t, err)
protoHeader, ok := header.Proto().(*enginev1.ExecutionPayloadHeaderCapella)
require.Equal(t, true, ok)
prevHeader, err := preForkState.LatestExecutionPayloadHeader()
require.NoError(t, err)
txRoot, err := prevHeader.TransactionsRoot()
require.NoError(t, err)
wanted := &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: prevHeader.ParentHash(),
FeeRecipient: prevHeader.FeeRecipient(),
StateRoot: prevHeader.StateRoot(),
ReceiptsRoot: prevHeader.ReceiptsRoot(),
LogsBloom: prevHeader.LogsBloom(),
PrevRandao: prevHeader.PrevRandao(),
BlockNumber: prevHeader.BlockNumber(),
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: make([]byte, 32),
}
require.DeepEqual(t, wanted, protoHeader)
nwi, err := mSt.NextWithdrawalIndex()
require.NoError(t, err)
require.Equal(t, uint64(0), nwi)
lwvi, err := mSt.NextWithdrawalValidatorIndex()
require.NoError(t, err)
require.Equal(t, types.ValidatorIndex(0), lwvi)
summaries, err := mSt.HistoricalSummaries()
require.NoError(t, err)
require.Equal(t, 0, len(summaries))
}


@@ -13,11 +13,14 @@ go_library(
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
@@ -33,8 +36,10 @@ go_test(
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",


@@ -14,11 +14,14 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/math"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
)
// sortableIndices implements the Sort interface to sort newly activated validator indices
@@ -49,12 +52,13 @@ func (s sortableIndices) Less(i, j int) bool {
// need to get attesting balance from attestations.
//
// Spec pseudocode definition:
// def get_attesting_balance(state: BeaconState, attestations: Sequence[PendingAttestation]) -> Gwei:
//     """
//     Return the combined effective balance of the set of unslashed validators participating in ``attestations``.
//     Note: ``get_total_balance`` returns ``EFFECTIVE_BALANCE_INCREMENT`` Gwei minimum to avoid divisions by zero.
//     """
//     return get_total_balance(state, get_unslashed_attesting_indices(state, attestations))
func AttestingBalance(ctx context.Context, state state.ReadOnlyBeaconState, atts []*ethpb.PendingAttestation) (uint64, error) {
indices, err := UnslashedAttestingIndices(ctx, state, atts)
if err != nil {
@@ -67,25 +71,26 @@ func AttestingBalance(ctx context.Context, state state.ReadOnlyBeaconState, atts
// the amount to rotate is determined by the churn limit.
//
// Spec pseudocode definition:
// def process_registry_updates(state: BeaconState) -> None:
//     # Process activation eligibility and ejections
//     for index, validator in enumerate(state.validators):
//         if is_eligible_for_activation_queue(validator):
//             validator.activation_eligibility_epoch = get_current_epoch(state) + 1
//
//         if is_active_validator(validator, get_current_epoch(state)) and validator.effective_balance <= EJECTION_BALANCE:
//             initiate_validator_exit(state, ValidatorIndex(index))
//
//     # Queue validators eligible for activation and not yet dequeued for activation
//     activation_queue = sorted([
//         index for index, validator in enumerate(state.validators)
//         if is_eligible_for_activation(state, validator)
//         # Order by the sequence of activation_eligibility_epoch setting and then index
//     ], key=lambda index: (state.validators[index].activation_eligibility_epoch, index))
//     # Dequeued validators for activation up to churn limit
//     for index in activation_queue[:get_validator_churn_limit(state)]:
//         validator = state.validators[index]
//         validator.activation_epoch = compute_activation_exit_epoch(get_current_epoch(state))
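// Illustration (not from the spec text above; assumes mainnet constants
// MIN_PER_EPOCH_CHURN_LIMIT = 4 and CHURN_LIMIT_QUOTIENT = 65536): with 500,000
// active validators the churn limit is max(4, 500,000 // 65,536) = 7, so at most
// 7 validators from the sorted activation queue are dequeued per epoch.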
func ProcessRegistryUpdates(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(state)
vals := state.Validators()
@@ -155,16 +160,16 @@ func ProcessRegistryUpdates(ctx context.Context, state state.BeaconState) (state
// ProcessSlashings processes the slashed validators during epoch processing,
//
// def process_slashings(state: BeaconState) -> None:
//     epoch = get_current_epoch(state)
//     total_balance = get_total_active_balance(state)
//     adjusted_total_slashing_balance = min(sum(state.slashings) * PROPORTIONAL_SLASHING_MULTIPLIER, total_balance)
//     for index, validator in enumerate(state.validators):
//         if validator.slashed and epoch + EPOCHS_PER_SLASHINGS_VECTOR // 2 == validator.withdrawable_epoch:
//             increment = EFFECTIVE_BALANCE_INCREMENT  # Factored out from penalty numerator to avoid uint64 overflow
//             penalty_numerator = validator.effective_balance // increment * adjusted_total_slashing_balance
//             penalty = penalty_numerator // total_balance * increment
//             decrease_balance(state, ValidatorIndex(index), penalty)
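// Illustration (not from the spec text above; assumes mainnet constants): with
// total_balance = 1,000,000 ETH and adjusted_total_slashing_balance = 100,000 ETH,
// a validator with effective_balance = 32 ETH is penalized
//   penalty_numerator = 32 increments * 100,000 ETH = 3,200,000 ETH
//   penalty           = 3,200,000 // 1,000,000 * 1 ETH increment = 3 ETH,
// i.e. roughly the effective balance scaled by the slashed fraction, rounded down
// to a whole effective-balance increment.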
func ProcessSlashings(state state.BeaconState, slashingMultiplier uint64) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(state)
totalBalance, err := helpers.TotalActiveBalance(state)
@@ -207,11 +212,12 @@ func ProcessSlashings(state state.BeaconState, slashingMultiplier uint64) (state
// ProcessEth1DataReset processes updates to ETH1 data votes during epoch processing.
//
// Spec pseudocode definition:
// def process_eth1_data_reset(state: BeaconState) -> None:
//     next_epoch = Epoch(get_current_epoch(state) + 1)
//     # Reset eth1 data votes
//     if next_epoch % EPOCHS_PER_ETH1_VOTING_PERIOD == 0:
//         state.eth1_data_votes = []
func ProcessEth1DataReset(state state.BeaconState) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(state)
nextEpoch := currentEpoch + 1
@@ -229,18 +235,19 @@ func ProcessEth1DataReset(state state.BeaconState) (state.BeaconState, error) {
// ProcessEffectiveBalanceUpdates processes effective balance updates during epoch processing.
//
// Spec pseudocode definition:
// def process_effective_balance_updates(state: BeaconState) -> None:
//     # Update effective balances with hysteresis
//     for index, validator in enumerate(state.validators):
//         balance = state.balances[index]
//         HYSTERESIS_INCREMENT = uint64(EFFECTIVE_BALANCE_INCREMENT // HYSTERESIS_QUOTIENT)
//         DOWNWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_DOWNWARD_MULTIPLIER
//         UPWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_UPWARD_MULTIPLIER
//         if (
//             balance + DOWNWARD_THRESHOLD < validator.effective_balance
//             or validator.effective_balance + UPWARD_THRESHOLD < balance
//         ):
//             validator.effective_balance = min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
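// Illustration (not from the spec text above; assumes mainnet constants):
// HYSTERESIS_INCREMENT = 1 ETH // 4 = 0.25 ETH, DOWNWARD_THRESHOLD = 0.25 ETH,
// UPWARD_THRESHOLD = 1.25 ETH. A validator with effective_balance = 32 ETH is only
// lowered to 31 ETH once its actual balance drops below 31.75 ETH, and an
// effective_balance of 31 ETH is only raised again once the balance exceeds
// 32.25 ETH, which keeps effective balances from oscillating every epoch.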
func ProcessEffectiveBalanceUpdates(state state.BeaconState) (state.BeaconState, error) {
effBalanceInc := params.BeaconConfig().EffectiveBalanceIncrement
maxEffBalance := params.BeaconConfig().MaxEffectiveBalance
@@ -285,10 +292,11 @@ func ProcessEffectiveBalanceUpdates(state state.BeaconState) (state.BeaconState,
// ProcessSlashingsReset processes the total slashing balances updates during epoch processing.
//
// Spec pseudocode definition:
// def process_slashings_reset(state: BeaconState) -> None:
//     next_epoch = Epoch(get_current_epoch(state) + 1)
//     # Reset slashings
//     state.slashings[next_epoch % EPOCHS_PER_SLASHINGS_VECTOR] = Gwei(0)
func ProcessSlashingsReset(state state.BeaconState) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(state)
nextEpoch := currentEpoch + 1
@@ -314,11 +322,12 @@ func ProcessSlashingsReset(state state.BeaconState) (state.BeaconState, error) {
// ProcessRandaoMixesReset processes the final updates to RANDAO mix during epoch processing.
//
// Spec pseudocode definition:
// def process_randao_mixes_reset(state: BeaconState) -> None:
//     current_epoch = get_current_epoch(state)
//     next_epoch = Epoch(current_epoch + 1)
//     # Set randao mix
//     state.randao_mixes[next_epoch % EPOCHS_PER_HISTORICAL_VECTOR] = get_randao_mix(state, current_epoch)
func ProcessRandaoMixesReset(state state.BeaconState) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(state)
nextEpoch := currentEpoch + 1
@@ -343,32 +352,39 @@ func ProcessRandaoMixesReset(state state.BeaconState) (state.BeaconState, error)
return state, nil
}
// ProcessHistoricalRootsUpdate processes the updates to historical root accumulator during epoch processing.
//
// Spec pseudocode definition:
// def process_historical_roots_update(state: BeaconState) -> None:
// # Set historical root accumulator
// next_epoch = Epoch(get_current_epoch(state) + 1)
// if next_epoch % (SLOTS_PER_HISTORICAL_ROOT // SLOTS_PER_EPOCH) == 0:
// historical_batch = HistoricalBatch(block_roots=state.block_roots, state_roots=state.state_roots)
// state.historical_roots.append(hash_tree_root(historical_batch))
func ProcessHistoricalRootsUpdate(state state.BeaconState) (state.BeaconState, error) {
// ProcessHistoricalDataUpdate processes the updates to historical data during epoch processing.
// From Capella onward, per spec, state's historical summaries are updated instead of historical roots.
func ProcessHistoricalDataUpdate(state state.BeaconState) (state.BeaconState, error) {
    currentEpoch := time.CurrentEpoch(state)
    nextEpoch := currentEpoch + 1

    // Set historical root accumulator.
    epochsPerHistoricalRoot := params.BeaconConfig().SlotsPerHistoricalRoot.DivSlot(params.BeaconConfig().SlotsPerEpoch)
    if nextEpoch.Mod(uint64(epochsPerHistoricalRoot)) == 0 {
        if state.Version() >= version.Capella {
            br, err := stateutil.ArraysRoot(state.BlockRoots(), fieldparams.BlockRootsLength)
            if err != nil {
                return nil, err
            }
            sr, err := stateutil.ArraysRoot(state.StateRoots(), fieldparams.StateRootsLength)
            if err != nil {
                return nil, err
            }
            if err := state.AppendHistoricalSummaries(&ethpb.HistoricalSummary{BlockSummaryRoot: br[:], StateSummaryRoot: sr[:]}); err != nil {
                return nil, err
            }
        } else {
            historicalBatch := &ethpb.HistoricalBatch{
                BlockRoots: state.BlockRoots(),
                StateRoots: state.StateRoots(),
            }
            batchRoot, err := historicalBatch.HashTreeRoot()
            if err != nil {
                return nil, errors.Wrap(err, "could not hash historical batch")
            }
            if err := state.AppendHistoricalRoots(batchRoot); err != nil {
                return nil, err
            }
        }
    }
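// Sketch (illustrative, not part of this change; assumes this file's imports and a
// generated SSZ HashTreeRoot on ethpb.HistoricalSummary): a HistoricalSummary commits
// to the same data as the legacy HistoricalBatch, because the batch's hash tree root
// is the hash of its two vector roots, which are exactly the fields of the summary.
func summaryMatchesBatchRoot(st state.BeaconState) (bool, error) {
    batch := &ethpb.HistoricalBatch{BlockRoots: st.BlockRoots(), StateRoots: st.StateRoots()}
    batchRoot, err := batch.HashTreeRoot()
    if err != nil {
        return false, err
    }
    br, err := stateutil.ArraysRoot(st.BlockRoots(), fieldparams.BlockRootsLength)
    if err != nil {
        return false, err
    }
    sr, err := stateutil.ArraysRoot(st.StateRoots(), fieldparams.StateRootsLength)
    if err != nil {
        return false, err
    }
    summary := &ethpb.HistoricalSummary{BlockSummaryRoot: br[:], StateSummaryRoot: sr[:]}
    summaryRoot, err := summary.HashTreeRoot()
    if err != nil {
        return false, err
    }
    return batchRoot == summaryRoot, nil
}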
@@ -378,10 +394,11 @@ func ProcessHistoricalRootsUpdate(state state.BeaconState) (state.BeaconState, e
// ProcessParticipationRecordUpdates rotates current/previous epoch attestations during epoch processing.
//
// Spec pseudocode definition:
// def process_participation_record_updates(state: BeaconState) -> None:
//     # Rotate current/previous epoch attestations
//     state.previous_epoch_attestations = state.current_epoch_attestations
//     state.current_epoch_attestations = []
func ProcessParticipationRecordUpdates(state state.BeaconState) (state.BeaconState, error) {
if err := state.RotateAttestations(); err != nil {
return nil, err
@@ -418,7 +435,7 @@ func ProcessFinalUpdates(state state.BeaconState) (state.BeaconState, error) {
}
// Set historical root accumulator.
state, err = ProcessHistoricalRootsUpdate(state)
state, err = ProcessHistoricalDataUpdate(state)
if err != nil {
return nil, err
}
@@ -436,12 +453,13 @@ func ProcessFinalUpdates(state state.BeaconState) (state.BeaconState, error) {
// it sorts the indices and filters out the slashed ones.
//
// Spec pseudocode definition:
// def get_unslashed_attesting_indices(state: BeaconState,
//                                     attestations: Sequence[PendingAttestation]) -> Set[ValidatorIndex]:
//     output = set()  # type: Set[ValidatorIndex]
//     for a in attestations:
//         output = output.union(get_attesting_indices(state, a.data, a.aggregation_bits))
//     return set(filter(lambda index: not state.validators[index].slashed, output))
func UnslashedAttestingIndices(ctx context.Context, state state.ReadOnlyBeaconState, atts []*ethpb.PendingAttestation) ([]types.ValidatorIndex, error) {
var setIndices []types.ValidatorIndex
seen := make(map[uint64]bool)


@@ -10,8 +10,10 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
@@ -274,7 +276,9 @@ func TestProcessFinalUpdates_CanProcess(t *testing.T) {
assert.DeepNotEqual(t, params.BeaconConfig().ZeroHash[:], mix, "latest RANDAO still zero hashes")
// Verify historical root accumulator was appended.
assert.Equal(t, 1, len(newS.HistoricalRoots()), "Unexpected slashed balance")
roots, err := newS.HistoricalRoots()
require.NoError(t, err)
assert.Equal(t, 1, len(roots), "Unexpected slashed balance")
currAtt, err := newS.CurrentEpochAttestations()
require.NoError(t, err)
assert.NotNil(t, currAtt, "Nil value stored in current epoch attestations instead of empty slice")
@@ -455,3 +459,83 @@ func TestProcessSlashings_BadValue(t *testing.T) {
_, err = epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplier)
require.ErrorContains(t, "addition overflows", err)
}
func TestProcessHistoricalDataUpdate(t *testing.T) {
tests := []struct {
name string
st func() state.BeaconState
verifier func(state.BeaconState)
}{
{
name: "no change",
st: func() state.BeaconState {
st, _ := util.DeterministicGenesisState(t, 1)
return st
},
verifier: func(st state.BeaconState) {
roots, err := st.HistoricalRoots()
require.NoError(t, err)
require.Equal(t, 0, len(roots))
},
},
{
name: "before capella can process and get historical root",
st: func() state.BeaconState {
st, _ := util.DeterministicGenesisState(t, 1)
st, err := transition.ProcessSlots(context.Background(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
require.NoError(t, err)
return st
},
verifier: func(st state.BeaconState) {
roots, err := st.HistoricalRoots()
require.NoError(t, err)
require.Equal(t, 1, len(roots))
b := &ethpb.HistoricalBatch{
BlockRoots: st.BlockRoots(),
StateRoots: st.StateRoots(),
}
r, err := b.HashTreeRoot()
require.NoError(t, err)
require.DeepEqual(t, r[:], roots[0])
_, err = st.HistoricalSummaries()
require.ErrorContains(t, "HistoricalSummaries is not supported for phase0", err)
},
},
{
name: "after capella can process and get historical summary",
st: func() state.BeaconState {
st, _ := util.DeterministicGenesisStateCapella(t, 1)
st, err := transition.ProcessSlots(context.Background(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
require.NoError(t, err)
return st
},
verifier: func(st state.BeaconState) {
summaries, err := st.HistoricalSummaries()
require.NoError(t, err)
require.Equal(t, 1, len(summaries))
br, err := stateutil.ArraysRoot(st.BlockRoots(), fieldparams.BlockRootsLength)
require.NoError(t, err)
sr, err := stateutil.ArraysRoot(st.StateRoots(), fieldparams.StateRootsLength)
require.NoError(t, err)
b := &ethpb.HistoricalSummary{
BlockSummaryRoot: br[:],
StateSummaryRoot: sr[:],
}
require.DeepEqual(t, b, summaries[0])
hrs, err := st.HistoricalRoots()
require.NoError(t, err)
require.DeepEqual(t, hrs, [][]byte{})
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := epoch.ProcessHistoricalDataUpdate(tt.st())
require.NoError(t, err)
tt.verifier(got)
})
}
}


@@ -183,7 +183,7 @@ func UpdateBalance(vp []*Validator, bBal *Balance, stateVersion int) *Balance {
if stateVersion == version.Phase0 && v.IsPrevEpochAttester {
bBal.PrevEpochAttested += v.CurrentEpochEffectiveBalance
}
if (stateVersion == version.Altair || stateVersion == version.Bellatrix) && v.IsPrevEpochSourceAttester {
if stateVersion >= version.Altair && v.IsPrevEpochSourceAttester {
bBal.PrevEpochAttested += v.CurrentEpochEffectiveBalance
}
if v.IsPrevEpochTargetAttester {

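The relaxed check above (stateVersion >= version.Altair) leans on the fork versions being ordered integer constants, so Bellatrix and Capella automatically take the Altair participation-flag path. A rough sketch of the assumed ordering (the authoritative list lives in runtime/version and covers more forks):

package version

// Illustrative ordering only; the real constants are defined in runtime/version.
const (
    Phase0 = iota
    Altair
    Bellatrix
    Capella
)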

@@ -70,7 +70,7 @@ func TestUpdateBalance(t *testing.T) {
assert.DeepEqual(t, wantedPBal, pBal, "Incorrect balance calculations")
}
func TestUpdateBalanceBellatrixVersion(t *testing.T) {
func TestUpdateBalanceDifferentVersions(t *testing.T) {
vp := []*precompute.Validator{
{IsCurrentEpochAttester: true, CurrentEpochEffectiveBalance: 100 * params.BeaconConfig().EffectiveBalanceIncrement},
{IsCurrentEpochTargetAttester: true, IsCurrentEpochAttester: true, CurrentEpochEffectiveBalance: 100 * params.BeaconConfig().EffectiveBalanceIncrement},
@@ -92,6 +92,9 @@ func TestUpdateBalanceBellatrixVersion(t *testing.T) {
}
pBal := precompute.UpdateBalance(vp, &precompute.Balance{}, version.Bellatrix)
assert.DeepEqual(t, wantedPBal, pBal, "Incorrect balance calculations")
pBal = precompute.UpdateBalance(vp, &precompute.Balance{}, version.Capella)
assert.DeepEqual(t, wantedPBal, pBal, "Incorrect balance calculations")
}
func TestSameHead(t *testing.T) {


@@ -42,17 +42,18 @@ func UnrealizedCheckpoints(st state.BeaconState) (uint64, *ethpb.Checkpoint, *et
// Note: this is an optimized version by passing in precomputed total and attesting balances.
//
// Spec pseudocode definition:
// def process_justification_and_finalization(state: BeaconState) -> None:
//     # Initial FFG checkpoint values have a `0x00` stub for `root`.
//     # Skip FFG updates in the first two epochs to avoid corner cases that might result in modifying this stub.
//     if get_current_epoch(state) <= GENESIS_EPOCH + 1:
//         return
//     previous_attestations = get_matching_target_attestations(state, get_previous_epoch(state))
//     current_attestations = get_matching_target_attestations(state, get_current_epoch(state))
//     total_active_balance = get_total_active_balance(state)
//     previous_target_balance = get_attesting_balance(state, previous_attestations)
//     current_target_balance = get_attesting_balance(state, current_attestations)
//     weigh_justification_and_finalization(state, total_active_balance, previous_target_balance, current_target_balance)
func ProcessJustificationAndFinalizationPreCompute(state state.BeaconState, pBal *Balance) (state.BeaconState, error) {
canProcessSlot, err := slots.EpochStart(2 /*epoch*/)
if err != nil {
@@ -83,7 +84,7 @@ func processJustificationBits(state state.BeaconState, totalActiveBalance, prevE
return newBits
}
// updateJustificationAndFinalization processes justification and finalization during
// weighJustificationAndFinalization processes justification and finalization during
// epoch processing. This is where a beacon node can justify and finalize a new epoch.
func weighJustificationAndFinalization(state state.BeaconState, newBits bitfield.Bitvector4) (state.BeaconState, error) {
jc, fc, err := computeCheckpoints(state, newBits)
@@ -113,41 +114,42 @@ func weighJustificationAndFinalization(state state.BeaconState, newBits bitfield
// checkpoints at epoch transition
// Spec pseudocode definition:
// def weigh_justification_and_finalization(state: BeaconState,
//                                          total_active_balance: Gwei,
//                                          previous_epoch_target_balance: Gwei,
//                                          current_epoch_target_balance: Gwei) -> None:
//     previous_epoch = get_previous_epoch(state)
//     current_epoch = get_current_epoch(state)
//     old_previous_justified_checkpoint = state.previous_justified_checkpoint
//     old_current_justified_checkpoint = state.current_justified_checkpoint
//
//     # Process justifications
//     state.previous_justified_checkpoint = state.current_justified_checkpoint
//     state.justification_bits[1:] = state.justification_bits[:JUSTIFICATION_BITS_LENGTH - 1]
//     state.justification_bits[0] = 0b0
//     if previous_epoch_target_balance * 3 >= total_active_balance * 2:
//         state.current_justified_checkpoint = Checkpoint(epoch=previous_epoch,
//                                                         root=get_block_root(state, previous_epoch))
//         state.justification_bits[1] = 0b1
//     if current_epoch_target_balance * 3 >= total_active_balance * 2:
//         state.current_justified_checkpoint = Checkpoint(epoch=current_epoch,
//                                                         root=get_block_root(state, current_epoch))
//         state.justification_bits[0] = 0b1
//
//     # Process finalizations
//     bits = state.justification_bits
//     # The 2nd/3rd/4th most recent epochs are justified, the 2nd using the 4th as source
//     if all(bits[1:4]) and old_previous_justified_checkpoint.epoch + 3 == current_epoch:
//         state.finalized_checkpoint = old_previous_justified_checkpoint
//     # The 2nd/3rd most recent epochs are justified, the 2nd using the 3rd as source
//     if all(bits[1:3]) and old_previous_justified_checkpoint.epoch + 2 == current_epoch:
//         state.finalized_checkpoint = old_previous_justified_checkpoint
//     # The 1st/2nd/3rd most recent epochs are justified, the 1st using the 3rd as source
//     if all(bits[0:3]) and old_current_justified_checkpoint.epoch + 2 == current_epoch:
//         state.finalized_checkpoint = old_current_justified_checkpoint
//     # The 1st/2nd most recent epochs are justified, the 1st using the 2nd as source
//     if all(bits[0:2]) and old_current_justified_checkpoint.epoch + 1 == current_epoch:
//         state.finalized_checkpoint = old_current_justified_checkpoint
func computeCheckpoints(state state.BeaconState, newBits bitfield.Bitvector4) (*ethpb.Checkpoint, *ethpb.Checkpoint, error) {
prevEpoch := time.PrevEpoch(state)
currentEpoch := time.CurrentEpoch(state)

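The justification conditions in the pseudocode above boil down to a two-thirds-of-total-active-balance test, written with integer cross-multiplication so no division or rounding is involved. A minimal sketch of that predicate (package and function names are illustrative, not the helpers used in this package):

package precomputesketch

// isSupermajority reports whether targetBalance is at least two thirds of
// totalActiveBalance, using the same integer form as the spec
// (target * 3 >= total * 2).
func isSupermajority(targetBalance, totalActiveBalance uint64) bool {
    return targetBalance*3 >= totalActiveBalance*2
}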

@@ -358,10 +358,11 @@ func TestProposerDeltaPrecompute_SlashedCase(t *testing.T) {
// individual validator's base reward quotient.
//
// Spec pseudocode definition:
// def get_base_reward(state: BeaconState, index: ValidatorIndex) -> Gwei:
//     total_balance = get_total_active_balance(state)
//     effective_balance = state.validators[index].effective_balance
//     return Gwei(effective_balance * BASE_REWARD_FACTOR // integer_squareroot(total_balance) // BASE_REWARDS_PER_EPOCH)
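// Illustration (not from the spec text above; assumes mainnet constants
// BASE_REWARD_FACTOR = 64 and BASE_REWARDS_PER_EPOCH = 4): with a total active
// balance of 1,000,000 ETH (integer_squareroot ≈ 31,622,776 in Gwei), a 32 ETH
// validator's base reward is roughly 32e9 * 64 / 31,622,776 / 4 ≈ 16,000 Gwei per epoch.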
func baseReward(state state.ReadOnlyBeaconState, index types.ValidatorIndex) (uint64, error) {
totalBalance, err := helpers.TotalActiveBalance(state)
if err != nil {


@@ -6,6 +6,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/execution",
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/prysmctl/testnet:__pkg__",
"//testing/spectest:__subpackages__",
"//validator/client:__pkg__",
],

Some files were not shown because too many files have changed in this diff.