Compare commits

...

145 Commits

Author SHA1 Message Date
terencechain
805fa65088 Merge branch 'develop' into disallow-lower-epoch-update 2022-06-11 08:55:28 -07:00
terencechain
f9e3b0a3c2 Active balance: return EFFECTIVE_BALANCE_INCREMENT as min (#10866) 2022-06-11 08:54:33 -07:00
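The spec rule behind this change: the total active balance is floored at EFFECTIVE_BALANCE_INCREMENT so it can never be zero when it is later used as a divisor. A minimal Go sketch of that floor, with hypothetical names and the mainnet constant assumed, not Prysm's actual API:

```go
// Illustrative sketch only, not Prysm's implementation.
package main

import "fmt"

// Mainnet EFFECTIVE_BALANCE_INCREMENT: 1 ETH expressed in Gwei (assumed here).
const effectiveBalanceIncrement uint64 = 1_000_000_000

// totalActiveBalance sums effective balances and floors the result at the
// increment, so callers that divide by it never divide by zero.
func totalActiveBalance(effectiveBalances []uint64) uint64 {
	var sum uint64
	for _, b := range effectiveBalances {
		sum += b
	}
	if sum < effectiveBalanceIncrement {
		return effectiveBalanceIncrement
	}
	return sum
}

func main() {
	fmt.Println(totalActiveBalance(nil)) // prints 1000000000, never 0
}
```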
terence tsao
195c4f34f1 Disallow lower justified epoch to override higher epoch 2022-06-10 13:34:44 -07:00
terencechain
a58809597e Sync: don't process pending blocks w/o genesis time (#10750)
* Sync: don't process pending blocks w/o genesis time

* Update pending_blocks_queue.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-06-10 05:02:47 +00:00
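The fix is essentially a guard clause: without a known genesis time, slot arithmetic on queued blocks is meaningless, so processing is skipped. A hypothetical sketch of such a guard (field and method names are assumptions, not the real pending_blocks_queue.go):

```go
// Hypothetical sketch of the guard, not the actual Prysm code.
package blocksync

import "time"

type service struct {
	genesisTime time.Time // zero value until the chain's genesis time is known
}

// processPendingBlocks bails out early when genesis time is unset, since slot
// calculations against a zero genesis time would be meaningless.
func (s *service) processPendingBlocks() {
	if s.genesisTime.IsZero() {
		return // don't process pending blocks without a genesis time
	}
	// ... normal pending-block processing would go here ...
}
```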
Nishant Das
7f443e8387 Add Optimistic Sync Scenario Testing (#10836)
* add latest changes

* fix it

* add multiclient support

* fix tests

* Apply suggestions from code review

* fix test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 23:24:53 +00:00
Potuz
18fc17c903 Forkchoice checkpoints (#10823)
* double_tree_changes

* protoarray changes

* beacon-chain changes

* spec tests and debug rpc fixes

* more conflicts

* more conflicts

* Terence's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 22:28:30 +00:00
Sammy Rosso
d7b01b9d81 Fix default mainnet log when using chain config (#10855)
* Fix default mainnet log when using chain config

Add a log to specify the use of a chain-config-file rather than
defaulting to that of mainnet.
Related to #10821.

* Add more specific log message

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 21:31:59 +00:00
terencechain
7ebd9035dd Add ttd metric (#10851)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 20:35:38 +00:00
Radosław Kapka
578fea73d7 API's IsOptimistic - update header.StateRoot only when block is not missing (#10852)
* bug fix

* tests

* comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 19:35:35 +00:00
terencechain
7fcadbe3ef Add optimistic status to chainhead (#10842)
* Add optimistic status to chainhead

* Fix tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 17:37:52 +00:00
Radosław Kapka
fce9e6883d Native Blocks Ep. 1 - New types and functions (#10837)
* types and functions

* partially done tests

* refactor

* remaining Proto() tests

* remaining proto.go tests

* simplify UnmarshalSSZ and move BeaconBlockIsNil

* getters_test

* remove errAssertionFailed

* review feedback

* remove cloning protobuf

* fmt

* change IsNil

* fix tests
2022-06-09 13:13:02 +00:00
terencechain
5ee66a4a68 Update engine api buckets (#10847)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-08 20:12:54 +00:00
kasey
f4c7fb6182 checkpoint sync e2e to use 3m timeout w/ elapsed log (#10849)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-06-08 19:14:10 +00:00
Potuz
077dcdc643 enable dev on Ropsten (#10839)
* enable dev on Ropsten

* include only vectorized HTR and doublylinkedtree

* error handling

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-08 13:45:13 +00:00
kasey
1fa864cb1a use slot:block index correctly (#10820)
* adding splitRoots, refactor to use it

* use splitRoots & work in roots only

The most common use case for this method is to get a list of
candidate roots and check whether they are canonical. There isn't a
good reason to look up all the non-canonical blocks, because forkchoice
checks canonicality by root only, so just return roots and defer the
responsibility of resolving them to full blocks.

* update comment

* clean up shadowing

* more clear non-error return

* add test case for single root in index slot

* fmt

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-07 16:47:42 +00:00
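The commit body above describes working in roots only; the helper it mentions, splitRoots, presumably splits the flat slot-index value into individual 32-byte roots. A hedged sketch of that idea, assuming the index stores concatenated 32-byte roots (not the actual Prysm code):

```go
// Hypothetical sketch: assumes the slot index stores all block roots for a
// slot as one concatenated byte slice of 32-byte entries.
package main

import (
	"errors"
	"fmt"
)

const rootLength = 32

func splitRoots(value []byte) ([][32]byte, error) {
	if len(value)%rootLength != 0 {
		return nil, errors.New("value length is not a multiple of 32")
	}
	roots := make([][32]byte, 0, len(value)/rootLength)
	for i := 0; i < len(value); i += rootLength {
		var root [32]byte
		copy(root[:], value[i:i+rootLength])
		roots = append(roots, root)
	}
	return roots, nil
}

func main() {
	// Two zeroed fake roots packed together, as the index value might be.
	packed := append(make([]byte, 32), make([]byte, 32)...)
	roots, err := splitRoots(packed)
	fmt.Println(len(roots), err) // 2 <nil>
}
```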
Mike Neuder
6357860cc2 Refactor validator accounts backup to remove cli context dependency (#10824)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-06-07 15:19:12 +00:00
mick
cc1ea81d4a Implement generate-auth-secret on beacon node CLI (#10733)
* s

* s

* typo

* typo

* s

* s

* fixes based on PR feedback

* PR feedback

* reverting log changes

* adding flag per feedback

* conventions

* main fixes

* Update cmd/beacon-chain/jwt/jwt.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update cmd/beacon-chain/jwt/jwt.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update cmd/flags.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* s

* tests

* test attempt

* test

* Update cmd/beacon-chain/jwt/jwt.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* err fix

* s

* further simplify

* cleanup

* namefix

* tests pass

* gaz

* rem deadcode

* Gaz

* shorthand

* naming

* test pass

* dedup

* success

* Ignore jwt.hex file

* logrus

* feedback

* junk

* Also check that no file was written

* local run config

* small fix

* jwt

* testfix

* s

* disabling test

* reverting main changes

* main revert

* removing temp folder

* comment

* gaz

* clarity

* rem

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-06-07 07:37:12 +00:00
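The feature reduces to generating a random 256-bit secret and writing it hex-encoded to a jwt.hex file for engine-API (execution client) authentication. A standalone sketch of that flow, with the output path and behaviour assumed rather than taken from the actual Prysm CLI:

```go
// Standalone sketch of generating an engine-API JWT secret; not the actual
// `generate-auth-secret` implementation.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"log"
	"os"
)

func main() {
	secret := make([]byte, 32) // 256-bit secret, as required for HS256 JWT signing
	if _, err := rand.Read(secret); err != nil {
		log.Fatal(err)
	}
	// Write the hex-encoded secret so both the beacon node and the execution
	// client can load the same value.
	if err := os.WriteFile("jwt.hex", []byte(hex.EncodeToString(secret)), 0o600); err != nil {
		log.Fatal(err)
	}
	fmt.Println("wrote jwt.hex")
}
```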
Nishant Das
dd65622441 Efficiently Pack Uint64 Lists (#10830)
* make it more efficient

* radek's review

* review again
2022-06-07 06:34:13 +00:00
Nishant Das
6c39301f33 Integrate Engine Proxy into E2E (#10808)
* add it in

* support jwt secret

* fix it

* fix

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-06 23:35:54 +00:00
terencechain
5b12f5a27d Integrate builder client into builder service (#10825)
* Integrate builder client into builder service

* Do nothing if pubkey is not found

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-06 22:38:49 +00:00
terencechain
105d73d59b Update spec labels (#10833)
* Update label version

* Update label version

* Update label version

* Update label version

* Update label version
2022-06-06 18:13:59 -04:00
james-prysm
db687bf56d Validator client builder support (#10749)
* Startinb builder service and interface

* Get header from builder

* Add get builder block

* Single validator registration

* Add mev-builder http cli flag

* Add method to verify registration signature

* Add builder registration

* Add submit validator registration

* suporting yaml

* fix yaml unmarshaling

* rolling back some changes from unmarshal from file

* adding yaml support

* adding register validator support

* added new validator requests into client/validator

* fixing gofmt

* updating flags and including gas limit, unit tests are still broken

* fixing bazel

* more name changes and fixing unit tests

* fixing unit tests and renaming functions

* fixing unit tests and renaming to match changes

* adding new test for yaml

* fixing bazel linter

* reverting change on validator service proto

* adding clarifying logs

* renaming function name to be more descriptive

* renaming variable

* rolling back some files that will be added from the builder-1 branch

* reverting more

* more reverting

* need placeholder

* need placeholder

* fixing unit test

* fixing unit test

* fixing unit test

* fixing unit test

* fixing more unit tests

* fixing more unit tests

* rolling back mockgen

* fixing bazel

* rolling back changes

* removing duplicate function

* fixing client mock

* removing unused type

* fixing missing brace

* fixing bad field name

* fixing bazel

* updating naming

* fixing bazel

* fixing unit test

* fixing bazel linting

* unhandled err

* fixing gofmt

* simplifying name based on feedback

* using corrected function

* moving default fee recipient and gaslimit to beaconconfig

* missing a few constant changes

* fixing bazel

* fixing more missed default renames

* fixing more constants in tests

* fixing bazel

* adding update proposer setting per epoch

* refactoring to reduce complexity

* adding unit test for proposer settings

* Update validator/client/validator.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* trying out renaming based on feedback

* adjusting based on review comments

* making tests more appropriate

* fixing bazel

* updating flag description based on review feedback

* addressing review feedback

* switching to pushing at start of epoch for more time

* adding new unit test and properly throwing error

* switching keys in error to count

* fixing log variable

* resolving conflict

* resolving more conflicts

* adjusting error message

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-06-06 19:32:41 +00:00
james-prysm
f0403afb25 Suggested-Fee-Recipient flag should not override (#10804)
* initial commit

* fixing unit test

* fixing gofmt

* updating usage language

* updating flag description

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-06 16:58:52 +00:00
Potuz
3f309968d4 Only prune in newer finalization (#10831)
* Only prune in newer finalization

* add regression test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-06 15:25:22 +00:00
Potuz
52acaceb3f Enforce that every node descends from finalized node (#10784)
* Enforce that every node descends from finalized node

* fix test

* fix conflict

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-06 14:28:49 +00:00
Nishant Das
5216402f66 Clean Up State Finalizer (#10829) 2022-06-06 09:01:58 +00:00
Potuz
2536195be0 Change forkchoice API (#10774)
* Change forkchoice API

doubly-linked-tree changes

* protoarray changes

* blockchain tests

* rebase and fix conflicts

* More blockchain fixes

* blockchain test fixes

* doubly linked tree changes again

* protoarray changes v2

* blockchain packages v2

* Radek's review

* Fix on batch processing

* Terence's review

* fill in at start

* Revert "fill in at start"

This reverts commit 8c11db063a.

* wrap error message

* fill in before mutating the state

* wrap nil node errors

* handle unknown optimistic status on init sync

* rename insert function

* prune on batches only after forkchoice insertion

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-06-05 17:48:21 +00:00
terencechain
a4b9e006af Fill missing payload ID (#10803) 2022-06-05 05:34:08 +00:00
terencechain
76b941b310 Update ropsten TTD (#10817)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-04 02:54:51 +00:00
Nishant Das
d099c2790b Add Back Timestamp Related Deposit Processing (#10806)
* use it

* add test

* fix

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-06-04 01:58:03 +00:00
Potuz
2fc3d41ffa goppelganger -> doppelganger (#10816)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-03 22:53:58 +00:00
Jim McDonald
2e4174bdd6 Set fee recipient validator index as per spec. (#10814)
The beacon API specification states that the `validator_index` field of
the `fee_recipient` structure is a quoted string rather than a bare
integer.  This fixes up the encoding to match the spec.

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-06-03 21:52:17 +00:00
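To illustrate the encoding the commit fixes: the beacon API wants validator_index serialized as a quoted decimal string, not a bare JSON number. A small Go illustration using encoding/json struct tags (types and field names here are illustrative, not Prysm's generated code):

```go
// Illustration only: encoding validator_index as a quoted string versus a
// bare integer.
package main

import (
	"encoding/json"
	"fmt"
)

type feeRecipientEntry struct {
	// The `string` option makes encoding/json emit "123" instead of 123.
	ValidatorIndex uint64 `json:"validator_index,string"`
	FeeRecipient   string `json:"fee_recipient"`
}

func main() {
	b, _ := json.Marshal(feeRecipientEntry{
		ValidatorIndex: 123,
		FeeRecipient:   "0x0000000000000000000000000000000000000000",
	})
	fmt.Println(string(b))
	// {"validator_index":"123","fee_recipient":"0x0000000000000000000000000000000000000000"}
}
```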
kasey
a170fd4bd6 Revert "Change name and return type of HighestSlotBlocksBelow (#10811)" (#10818)
This reverts commit 0e9dfe04a1.

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-06-03 18:36:06 +00:00
terencechain
3e36eacd1e Add registration DB methods (#10812) 2022-06-03 15:13:04 +00:00
kasey
0e9dfe04a1 Change name and return type of HighestSlotBlocksBelow (#10811)
* HighestSlotBlocksBelow to return a single value

* HighestSlotBlocksBelow->HighestBlockBelowSlot

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-03 07:16:34 +00:00
Radosław Kapka
302a5cf00b API SSZ content negotiation (#10760)
* API SSZ content negotiation

* custom code

* use raw string

* code review feedback

* remove duplicated test
2022-06-02 22:13:58 +00:00
kasey
c72f1af951 rewind cursor upon nil value (bucket) (#10802)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-06-02 04:06:19 +00:00
terencechain
2bc3ef29e0 Wrap unknown RPC errors (#10793) 2022-06-01 16:46:07 +00:00
terencechain
bc91d63fcf Add builder service skeleton and flag (#10789)
* Add builder service skeleton and flag

* Fix build
2022-06-01 15:12:15 +00:00
Potuz
8f18920ac7 Update forkchoice justified on init sync (#10801)
* Update forkchoice justified on init sync

* handle err

* add regression test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-01 14:13:47 +00:00
Nishant Das
2cc62cdf40 Fix Fuzzing (#10798) 2022-06-01 13:15:50 +00:00
terencechain
95c140b512 Validator client: add submit registration (#10785)
* Add client registration methods

* Mocken

* Run mockgen

* Fix bad tests

* Fix rest of the tests

* Tests

* Fix time tests

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-01 01:53:25 +00:00
kasey
7563bc0444 HighestSlotBlocksBelow, but in reverse (#10772)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-31 21:10:33 +00:00
Preston Van Loon
4353d39daa Mark e2e test targets as flaky for now (#10778)
* Mark e2e test targets as flaky for now

* Mark e2e test targets as flaky for now

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-31 20:11:47 +00:00
terencechain
2de2000eb7 Add back finalized info in new block log (#10792)
* Add back finalized info in new block log

* Better refactor

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-31 17:44:36 +00:00
terencechain
94f6389ebd Run mockgen (#10775) 2022-05-31 09:43:01 -07:00
Potuz
b582ca27e6 Forkchoice fill chain (#10783)
* Move chain inclusions to forkchoice handling

* gaz

* return early if no blocks are pending

* Terence's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-31 10:19:48 +00:00
Preston Van Loon
2586a9e667 Add and verify context in AddCommitteeShuffledList (#10786)
* Add and verify context in AddCommitteeShuffledList

* Add and verify context in AddCommitteeShuffledList

* Change unnecessary arg reordering

* fix build, oops

* Regression test

* Regression test

* fix fuzz cache disabled
2022-05-30 21:38:37 -03:00
terencechain
87251d627d Ensure finalized root can't be zeros (#10791) 2022-05-30 18:01:30 +00:00
Potuz
3ff285dda5 Do not fill in blocks that are not in the finalized branch (#10776)
* Do not fill in blocks that are not in the finalized branch

* mark blocks as invalid

* Fix old off-by-ones, do not pass finalized state
2022-05-30 11:55:39 +00:00
terencechain
364ad3fbda Reinsert reorg atts (#10767)
* Add common ancestor root for protoarray

* More efficient algo

* Tests

* Fix linting

* Fix linting

* Fix linting

* Fix linting

* Fix linting

* Fix linting

* Apply suggestions from code review

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Feedbacks

* Revert saveHead changes

* Revert "Revert saveHead changes"

This reverts commit a15fddc2e6.

* Fix rest of the tests

* Update beacon-chain/blockchain/head.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-29 19:32:42 +00:00
Potuz
4cbb69602f Clean up onBlockBatch and prune forkchoice on init sync (#10768)
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-05-28 11:56:58 +00:00
David
64920d719d add mutex for validator.highestValidSlot (#10722)
* add mutex for validator.highestValidSlot

* fixed deadlock issue

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-05-28 00:44:06 +00:00
Potuz
3cf385fe91 Unrealized justification (#10659)
* unrealized justification API

* Add time elapse logging

* add unrealized justification checkpoint

* Use UnrealizedJustificationCheckpoint

* Refactor unrealized checkpoints

* Move logic to state package

* do not use ctx on a sum

* fix ctx

* add tests

* fix conflicts

* unhandled error

* Fix ordering in computing checkpoints

* gaz

* keep finalized checkpoint if nothing justified

* gaz

* copy checkpoint

* fix check for nil

* Add state package tests

* Add tests

* Radek's review

* add more tests

* Update beacon-chain/core/epoch/precompute/justification_finalization.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* deduplicate to stateutil

* missing file

* Add stateutil test

* Minor refactor, don't export certain things

* Fix exports in tests

* remove unused error

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-27 16:38:00 +00:00
David
adabd1fa4f service.head race condition fix (#10741)
* added various read mutex locks for service.head

* added RLocks around all calls to s.headRoot()

* added RLocks around all calls to s.headBlock()

* reduce lock surface-> Stop(),handleEpochBoundary()

* refactor Stop() to +performance, -lock_surface

* Apply suggestions from code review

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* fixed indentation

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-05-27 14:24:43 +00:00
Nishant Das
9be2111b7a Write To Only One File For Geth Logs (#10769) 2022-05-27 13:06:30 +00:00
Raul Jordan
1dec9eb912 Remove Unnecessary Stack Traces for Engine Errors (#10765)
* cleaner rpc responses

* clean up unecessary stack trace

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-26 22:09:33 +00:00
terencechain
a2951ec37d Update Ropsten ttd to 100000000000000000000000 (#10762)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-26 21:12:59 +00:00
terencechain
216fdb48cf Add back ttd override flags (#10763)
* Add back ttd override flags

* Add warning log for overrides

* Print has hex

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
2022-05-26 19:12:02 +00:00
Preston Van Loon
61c5e2a443 Fix error message with incorrect flag names (#10761) 2022-05-26 17:47:00 +00:00
Nishant Das
da9f72360d Make Common Dep Set For E2E (#10758) 2022-05-26 09:33:04 +02:00
Preston Van Loon
1be46aa16e Add a fuzz test for fieldtrie (#10757)
* Add a fuzz test for fieldtrie

* gofmt
2022-05-26 13:17:34 +08:00
kasey
a1a12243be Sync from finalized (#10723)
* checkpoint sync use finalized state+block

instead of finding the block at the beginning of the weak subjectivity
epoch.

* happy path test for sync-from-finalized

* gofmt

* functional opts for the minimal e2e

* add TestCheckpointSync option

* wip: pushing for CI

* include conn index in log for debugging

* lint

* block until regular sync test finishes

* restore TestSync->testDoppelGangerProtection link

* update bazel deps for all the test targets

* updating to match current checksum from github

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-05-25 22:52:43 +00:00
kasey
a1dd2e6b8c updating to match current checksum from github (#10756)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-05-25 18:29:17 +00:00
Nishant Das
ecb605814e Add in Separate Directories per Test (#10753)
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-05-25 15:57:13 +00:00
terencechain
61cbe3709b Ignore subset aggregates (#10674)
* Ignore subset aggregates

* Add test

* Update BUILD.bazel

* Update validate_sync_contribution_proof_test.go

* Update validate_sync_contribution_proof_test.go

* Don't utilize pooled objects. Use direct caches

* Handle mainnet/minimal better

* Revert att changes

* Check overlaps before set

* Feedbacks

* Fixed a copy bug

* Fixed the same copy bug in committee indices cache

* Use SafeCopyBytes

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-25 05:40:06 +00:00
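Ignoring a subset aggregate boils down to a bitfield containment test: if every bit in the incoming aggregation bits is already present in the bits seen so far, the message carries no new information and can be ignored. A generic sketch of such a check over raw bytes (not the go-bitfield API Prysm actually uses):

```go
// Generic subset check over equal-length bitfields; a sketch of the idea,
// not Prysm's seen-cache implementation.
package main

import "fmt"

// isSubset reports whether every bit set in candidate is also set in seen.
func isSubset(candidate, seen []byte) bool {
	if len(candidate) != len(seen) {
		return false
	}
	for i := range candidate {
		if candidate[i]&^seen[i] != 0 { // any bit in candidate but not in seen?
			return false
		}
	}
	return true
}

func main() {
	seen := []byte{0b1011}
	fmt.Println(isSubset([]byte{0b0011}, seen)) // true: ignorable subset
	fmt.Println(isSubset([]byte{0b0100}, seen)) // false: contains new bits
}
```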
Raul Jordan
7039c382bf Update Golang X Tools to Support Generics in Prysm (#10752)
* update tools to enable generics support

* tidy

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-25 05:10:05 +00:00
Nishant Das
4f00984ab1 fix (#10751) 2022-05-24 23:10:51 -04:00
terencechain
edb03328ea Sync: use copied bits for seen cache (#10747)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-24 21:11:33 +00:00
Sammy Rosso
c922eb9bfc Add new DomainType for application usage (#10740)
* Add new DomainType for application usage

* Add DomainApplicationMask to config_test

* Add func to convert uint32 to 4 byte array

We add a convenience function Uint32ToBytes4 that takes a uint32,
converts it to big-endian order, and returns a 4-byte array.

* Add unit test for Uint32ToBytes4

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-05-24 20:16:05 +00:00
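A standalone illustration of what the described Uint32ToBytes4 helper does, converting a uint32 to a big-endian 4-byte array with the standard library (the real helper's location and exact signature in Prysm may differ):

```go
// Standalone illustration; not the Prysm helper itself.
package main

import (
	"encoding/binary"
	"fmt"
)

func uint32ToBytes4(v uint32) [4]byte {
	var b [4]byte
	binary.BigEndian.PutUint32(b[:], v) // big-endian byte order
	return b
}

func main() {
	fmt.Printf("%x\n", uint32ToBytes4(0x00000001)) // 00000001
}
```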
terencechain
051a83a83d Engine metrics: help text typos (#10746)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-24 19:17:17 +00:00
terencechain
f46ff626df Service: return errors on nil checkpoints (#10748) 2022-05-24 16:15:04 +00:00
Preston Van Loon
8130ff29bc Update .buildkite-bazelrc: change CI to use toplevel remote caching. (#10744) 2022-05-24 13:22:44 +00:00
Nishant Das
5d4078305a Use Correct Math Library (#10742)
* use correct math lib

* one more case
2022-05-24 07:22:46 +00:00
Preston Van Loon
6910460173 Refactor migrateStateValidators for better readability (#10727)
* Refactor migrateStateValidators for better readability

* pass test

* rev

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-05-23 23:56:02 +00:00
terencechain
7cc291c09f Update proposer boost score and spec tests (#10665)
* Update mainnet_config.go

* Fixed a few tests

* Update rest of the tests

* Consolidate blockchain errors

* Update spec tests to v1.2.0-rc.1

* Fix withdrawal epoch overflows

* add slashings to forkchoice spectests

* Remove unused parameter

* Disable skip slot cache

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-05-23 23:08:24 +00:00
Raul Jordan
76645bccee Move ExecutionPayload Utils to Consensus Types Subpackage (#10732)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-23 21:57:32 +00:00
terencechain
46c0579816 Fix withdrawal epoch overflows (#10739)
* Fix withdrawal epoch overflows

* Fix typo, used epoch instead of churn
2022-05-23 21:10:19 +00:00
kasey
c66d9e9a11 Builder client (#10703)
* builder api client

* unexport error

* thanks, DeepSource!

* replace hexSlice w/ hexutil.Bytes

* use uint256 for BaseFeePerGas

* more confidence in correct endianness

* comment fix per Terence

* fix proto conversion for uint256

* couple more value checks in the http client tests

* TestMarshalBlindedBeaconBlockBodyBellatrix

* appease deepsource

* middleware to log requests

* big int round trip test

* very superficial test to make deepsource happy

* round trip test between proto payloads

* round trip starting from marshaled struct

* deepsource... for you, the moon

* remove unused receiver

* gofmt

* remove test destroying line added while debugging

* handle nil body in logging middleware

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-23 17:30:51 +00:00
Potuz
237807b248 Do not update forkchoice checkpoints on calls to Head (#10702)
* Only update forkchoice checkpoints at epoch transition

* gazelle

* blockchain package changes

* fix node

* spectest package

* rpc package

* fix spec test

* Fix spectests

* fix new_slot.test

* gaz

* fix spectests

* fix conflicts

* Update beacon-chain/blockchain/process_block.go

* regression do not update on newSlot

* revert bd64cab

* terence's review

* fix conflicts

* fix latest conflicts

* gaz

* go mod tidy

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-22 18:37:01 +00:00
terencechain
bcd75adedd Consolidate blockchain errors (#10736)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-22 17:48:54 +00:00
Potuz
0c981e8853 Change synced new block log (#10724)
* Change synced new block log

* Update beacon-chain/blockchain/log.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* Fix finalized -> justified

* Update beacon-chain/blockchain/receive_block.go

* fix conflicts

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-22 16:57:48 +00:00
Potuz
e7991b9d7b Track unrealized justification/finalization in forkchoice (#10658)
* Track unrealized justification/finalization in forkchoice

* add missing files

* mod tidy

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-05-21 10:27:07 -03:00
Raul Jordan
244e670b71 Move BeaconBlockNil Checker Function to Consensus-Types/Wrapper Package (#10731)
* beacon block is nil wrapper

* gaz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-20 23:29:16 +00:00
Raul Jordan
dc5fb92b28 Remove Unnecessary State Interfaces (#10707)
* rem other state interfaces

* redundant check

* gaz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-20 22:40:03 +00:00
terencechain
a2db03b9e9 Update engine API err codes (#10730)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-05-20 20:08:00 +00:00
terencechain
370cf1a6c8 Chain info: Return err if checkpoint is nil (#10729)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-20 18:41:33 +00:00
Potuz
76f6d74b83 Add new slot tests (#10728)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-20 18:17:53 +00:00
terencechain
addb3cd665 More selective on marking block as bad (#10681)
* Starting, looking for feedbacks

* Update error.go

* Wrap invalid blocks to rest of the processings

* More tests

* More tests

* Fix tests

* Update process_block_test.go

* Update execution_engine_test.go

* Update BUILD.bazel

* Nishant's feedback and Kasey's recommendation

* Add comments on what an invalid block is

* Update beacon-chain/blockchain/error.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Update beacon-chain/blockchain/error.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Rm faulty invalid conditions

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-20 17:05:39 +00:00
Preston Van Loon
c603d120d7 Refactor high complexity StreamBlocksAltair (#10726)
* Refactor high complexity StreamBlocksAltair

* catch nil data

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-20 16:17:55 +00:00
Raul Jordan
39893bbe30 Encapsulate Fork-Specific Parameters Under BeaconState Interface (#10706)
* encapsulate beacon state specific spec parameters

* edit build file

* code review suggestion

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-20 15:30:30 +00:00
Nishant Das
a984605064 Scenario End To End Testing (#10696)
* add new changes

* fix it all

* add nicer scenario

* some more cleanup

* restructure tests

* godoc

* skip one scenario

* space

* fix test

* clean up

* fix conflicts

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-05-20 09:34:10 +00:00
Nishant Das
f28b47bd87 Raise Soft File Descriptor Limit Up To The Hard Limit (#10650)
* add changes

* comment

* kasey's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-20 08:12:52 +00:00
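Conceptually, the change reads the process's RLIMIT_NOFILE and raises the soft limit to the hard limit so the node does not exhaust file descriptors. A hedged sketch using the standard syscall package on Unix-like systems (not the exact Prysm code):

```go
// Sketch of raising the soft file-descriptor limit to the hard limit on
// Unix-like systems; not the exact Prysm implementation.
package main

import (
	"fmt"
	"log"
	"syscall"
)

func main() {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		log.Fatal(err)
	}
	if lim.Cur < lim.Max {
		lim.Cur = lim.Max // raise soft limit up to the hard limit
		if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Printf("file descriptor limit: soft=%d hard=%d\n", lim.Cur, lim.Max)
}
```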
kasey
588dea83b7 Config registry (#10683)
* test coverage and updates to config twiddlers

* LoadChainConfigFile error if SetActive conflicts

* lint

* wip working around test issues

* more fixes, mass test updates

* lint

* linting

* thanks deepsource!

* fix undeclared vars

* fixing more undefined

* fix a bug, make a bug, repeat

* gaz

* use stock mainnet in case fork schedule matters

* remove unused KnownConfigs

* post-merge cleanup

* eliminating OverrideBeaconConfig outside tests

* more cleanup of OverrideBeaconConfig outside tests

* config for interop w/ genesis gen support

* improve var name

* API on package instead of exported value

* cleanup remainders of "registry" naming

* Nishant feedback

* add ropstein to configset

* lint

* lint #2

* ✂️

* revert accidental commented line

* check if active is nil (replace called on empty)

* Nishant feedback

* replace OverrideBeaconConfig call

* update interop instructions w/ new flag

* don't let interop replace config set via cli flags

Co-authored-by: kasey <kasey@users.noreply.github.com>
2022-05-20 07:16:53 +00:00
Nishant Das
1012ec1915 Drop Down Expected Sync Participation During Fork (#10708)
* reduce

* skip
2022-05-19 12:28:56 +00:00
Raul Jordan
b74947aa75 Fix Slasher E2E Flakiness (#10715)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-19 08:51:31 +00:00
kasey
e3f69e4fad run e2e for 12 epochs (#10717)
* run for the original number of epochs

* to avoid race, wait 1/2 epoch after 1st ready node

* Update testing/endtoend/minimal_e2e_test.go

* Update testing/endtoend/minimal_e2e_test.go

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-05-19 06:45:58 +00:00
terencechain
092e9e1d19 Clean up various warnings (#10710)
* Clean up various warnings

* Update beacon-chain/rpc/prysm/v1alpha1/debug/state_test.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Fix redundant casting genState

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-05-19 04:38:04 +00:00
Sammy Rosso
1c51f6d1be Fix Tests in sync/validate_beacon_blocks.go to Pass for the Right Reasons (#10711)
* test: better InvalidSignature validation

Added digest to topic so that the test would not fail at decodePubsubMessage.
Generated a valid signature from an invalid key so that it would be considered invalid.
Added a check for the correct error: ErrSigFailedToVerify.
Related to #9791.

* test: better BlockAlreadyPresentInDB validation

Added digest to topic so that the test would not fail at decodePubsubMessage.
Added a check that the block is ignored.
Related to #9791.

* test: better Syncing validation

Replaced error silencing with NoError assertion.
Added check for the expected validation decision: ValidationIgnore.
Related to #9791.

* test: better IgnoreAndQueueBlocksFromNearFuture validation

Rename test to better describe its function.
Replace error silencing with a check for the expected error.
Related to #9791.

* test: better RejectBlocksFromFuture validation

Added digest to topic so that the test would not fail at decodePubsubMessage.
Replaced error silencing with NoError assertion.
Added check for the expected validation decision: ValidationIgnore.
Related to #9791.

* test: better RejectBlocksFromThePast validation

Added digest to topic so that the test would not fail at decodePubsubMessage.
Replaced error silencing with a check for the expected error.
Added check for the expected validation decision: ValidationIgnore.
Related to #9791.

* test: better SeenProposerSlot validation

Added digest to topic so that the test would not fail at decodePubsubMessage.
Replaced error silencing with NoError assertion.
Added check for the expected validation decision: ValidationIgnore.
Related to #9791.

* test: fix topic and silenced errors

* set chain service slot

* test: better RejectBlocksFromBadParent validation

Set bad parent block.
Fixed expected errors and validation decision.

* revert: correct log msg for unknown parent block

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-19 03:45:45 +00:00
Preston Van Loon
ee6aa4d4ec rpc: Better connection handling (#10714) 2022-05-19 02:53:24 +00:00
terencechain
0d2696ed4e Add bopsten config and cli flag (#10700)
* Add bopsten config and cli flag

* Rename : )

* Update genesis time

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-18 19:29:24 +00:00
mick
03d44d7bfe Weak subjectivity warning copy update, warn -> info (#10699)
* warn -> info

* wrap

* lintfix

* mick-needs-a-better-IDE

* gofmt

* empty commit to trigger cicd

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-05-18 16:20:35 +00:00
james-prysm
b38e4ddc3e Web3signer: Bellatrix Block (#10590)
* initial commit

* fixing bazel

* removing extra space

* adding blindedbeaconblock

* adding unit tests

* fixing bazel build

* switching to switch statement

* fixing function parameters

* fixing ineffectual err error

* deleting unused files

* updating web3signer version to support bellatrix

* adding metrics

* testing longer run

* changing log level to help find errors

* changing logging back to all to better understand the issue

* testing better way to get public keys for web3signer on validator

* fixing bazel

* rolling back changes

* missed rolling back 1 thing

* adding flag back in

* adding comments back in

* testing lower participation

* adding logs to see web3signer keys for comparison

* fix bazel

* adding more logs temporariliy

* switching hex utility function

* web3signer doesn't support deposits, so changing this

* rolling back unintended unit test

* rolling back some changes for debugging

* Update validator/keymanager/remote-web3signer/v1/web3signer_types.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/components/web3remotesigner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/endtoend_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update testing/endtoend/endtoend_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fixing merge conflict

* Update validator/keymanager/remote-web3signer/v1/requests.go

* Update testing/endtoend/endtoend_test.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update validator/keymanager/remote-web3signer/v1/requests.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* prysm doesn't currently support deposits for web3signer

* reverting back to old implementation

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-05-18 15:20:20 +00:00
Mike Neuder
9fab9df61e Refactor validator accounts delete to remove cli context dependency (#10686)
* add functional options accounts delete

* bazel run //:gazelle -- fix

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-05-17 23:13:36 +00:00
Radosław Kapka
eedafac822 SSZ responses for block production (#10697)
* Simplify SSZ handling

* fix tests

* add version to SSZ proto and to GetBeaconStateSSZV2

* TestProduceBlockV2SSZ

* rest of tests

* middleware

* use constants

* some middleware changes

* test fix

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-17 20:13:06 +00:00
terencechain
e9ad7aeff8 Fix a bad forkchoice proposer boost test (#10705)
* Fix a bad forkchoice proposer boost test

* Improve tests

* Add fork regression

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-05-17 16:26:56 -03:00
terencechain
3cfef20938 Add better debugging logs for validate block p2p pipeline (#10698) 2022-05-17 17:27:52 +00:00
Potuz
e90284bc00 Add Error checks for nil inner state (#10701) 2022-05-17 17:38:35 +08:00
terencechain
01e15a033f Improve beacon node doesn't have a parent in db logs (#10689)
* Improve beacon node doesn't have a parent in db logs

* Update round_robin.go

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-16 18:51:27 +00:00
Radosław Kapka
f09b06d6f6 Allow SSZ-serialized blocks in publishBlindedBlock (#10679)
* SubmitBlockSSZ grpc

* SubmitBlockSSZ middleware

* test fixes

* use VersionedUnmarshaller

* use VersionedUnmarshaller

(cherry picked from commit 7388eeb963)

* tests

* fuzz: Add fuzz tests for sparse merkle trie (#10662)

* Add fuzz tests for sparse merkle trie and change HTR signature to return an error

* fix capitalization of error message

* Add engine timeout values (#10645)

* Add timeout values

* Update engine_client.go

* Update engine_client.go

* Update beacon-chain/powchain/engine_client.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Update beacon-chain/powchain/engine_client.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Update beacon-chain/powchain/engine_client.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Update engine_client.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Cleanup of `stategen` package (#10607)

* powchain and stategen

* revert powchain changes

* rename field to blockRootsOfSavedStates

* rename params to blockRoot

* review feedback

* fix loop

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* Process atts and update head before proposing (#10653)

* Process atts and updeate head

* Fix ctx

* New test and old tests

* Update validator_test.go

* Update validator_test.go

* Update service.go

* Rename to UpdateHead

* Update receive_attestation.go

* Update receive_attestation.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* Add link to e2e docs in `README` (#10672)

* Improve `ReceiveBlock`'s comment (#10671)

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* Call fcu on invalid payload (#10565)

* Starting

* remove finalized root

* Just call fcu

* Review feedbacks

* fix one test

* Fix conflicts

* Update execution_engine_test.go

* Add a test for invalid recursive call

* Add comprehensive recursive test

* dissallow override empty hash

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* Cache and use justified and finalized payload block hash (#10657)

* Cache and use justified and finalized payload block hash

* Fix tests

* Use real byte

* Fix conflicts

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* do not export slotFromBlock

* simplify tests

* grpc

* middleware

* extract package-level consts

* Simplify SSZ handling

* fix tests

* test fixes

* test hack

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-05-16 17:59:43 +00:00
james-prysm
16e66ee1b8 fee-recipient: update error log to warn log (#10684)
* initial commit

* updating fee recipient on beacon node

* updating logs

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-05-16 14:40:13 +00:00
terencechain
3d3890205f Remove invalid terminal block error code (#10646)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-15 03:14:44 +00:00
terencechain
a90335b15e Blockchain: Use get_block to retrieve block (#10688)
* Use get block

* Remove unused errNilParentInDB

* Fix TestVerifyBlkDescendant

* Update beacon-chain/blockchain/service.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-05-15 02:23:01 +00:00
Radosław Kapka
8cd43d216f Changes to SSZ handling (#10687)
* Simplify SSZ handling

* fix tests

* add version to SSZ proto and to GetBeaconStateSSZV2

* remove unwanted methods

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-13 20:29:18 +00:00
Radosław Kapka
d25c0ec1a5 Fix bugs in /eth/v1/beacon/blinded_blocks (#10673)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-05-13 18:20:39 +00:00
Radosław Kapka
e1c4427ea5 Allow SSZ-serialized blocks in publishBlock (#10663)
* SubmitBlockSSZ grpc

* SubmitBlockSSZ middleware

* test fixes

* use VersionedUnmarshaller

(cherry picked from commit 7388eeb963)

* tests

* do not export slotFromBlock

* simplify tests

* extract package-level consts

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-13 10:54:45 +00:00
Preston Van Loon
7042791e31 fuzz: Fix off by one error in sparse merkle trie item insertion (#10668)
* fuzz: Fix off by one error in sparse merkle trie item insertion

* remove new line

* Move validation to the proto constructor

* fix build

* Add a unit test because @nisdas is going to ask for it

* fix up

* gaz

* Update container/trie/sparse_merkle.go

* Update container/trie/sparse_merkle_test.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: nisdas <nishdas93@gmail.com>
2022-05-13 07:02:43 +00:00
terencechain
e771585b77 Fix error.Is ordering (#10685)
* Fix error.is ordering for ErrUndefinedExecutionEngineError

* Update BUILD.bazel
2022-05-12 22:43:34 +00:00
Radosław Kapka
98622a052f Extract OptimisticSyncFetcher interface (#10654)
* Extract `OptimisticSyncFetcher` interface

* extract IsOptimistic

* fix tests

* more test fixes

* even more test fixes

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-12 17:23:45 +00:00
Potuz
61033ebea1 handle failure to update head (#10651)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-12 16:36:46 +00:00
Potuz
e808025b17 regression test off-by-one (#10675)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-12 15:49:37 +00:00
Radosław Kapka
7db0435ee0 Unify WARNING comments (#10678)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-12 15:25:44 +00:00
Nishant Das
1f086e4333 add more fuzz targets (#10682) 2022-05-12 10:47:29 -04:00
james-prysm
184e5be9de Fee recipient: checksum log (#10664)
* adding checksum check at validator client and beacon node

* adding validation and logs on validator client startup

* moving the log and validation

* fixing unit tests

* adding test for back checksum on validator client

* fixing bazel

* addressing comments

* fixing log display

* Update beacon-chain/node/config.go

* Apply suggestions from code review

* breaking up lines

* fixing unit test

* ugh another fix to unit test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-11 19:36:57 +00:00
terencechain
e33850bf51 Don't return nil with new head (#10680) 2022-05-11 11:33:10 -07:00
Preston Van Loon
cc643ac4cc native-state: Simplify MarshalSSZ (#10677)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-11 14:25:11 +00:00
Nishant Das
abefe1e9d5 Add Sync Block Topic Fuzz Target (#10676)
* add new target

* remove

* Update beacon-chain/sync/BUILD.bazel
2022-05-11 13:18:12 +00:00
Nishant Das
b4e89fb28b Add in State Fuzzing (#10375)
* add it in

* build tags

* gaz

* remove dependency on fast-ssz for now

* copy and comment

* add in native state

* fix build

* no assert on invalid input, just return

* Add failing case

* fix it up

Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
2022-05-11 12:47:54 +00:00
terencechain
4ad1c4df01 Cache and use justified and finalized payload block hash (#10657)
* Cache and use justified and finalized payload block hash

* Fix tests

* Use real byte

* Fix conflicts

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-10 21:20:28 +00:00
terencechain
6a197b47d9 Call fcu on invalid payload (#10565)
* Starting

* remove finalized root

* Just call fcu

* Review feedbacks

* fix one test

* Fix conflicts

* Update execution_engine_test.go

* Add a test for invalid recursive call

* Add comprehensive recursive test

* dissallow override empty hash

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-10 20:02:00 +00:00
Radosław Kapka
d102421a25 Improve ReceiveBlock's comment (#10671)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-10 19:13:28 +00:00
Radosław Kapka
7b1490429c Add link to e2e docs in README (#10672) 2022-05-10 14:16:40 +00:00
terencechain
bbdf19cfd0 Process atts and update head before proposing (#10653)
* Process atts and updeate head

* Fix ctx

* New test and old tests

* Update validator_test.go

* Update validator_test.go

* Update service.go

* Rename to UpdateHead

* Update receive_attestation.go

* Update receive_attestation.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-09 23:12:50 +00:00
Radosław Kapka
5d94030b4f Cleanup of stategen package (#10607)
* powchain and stategen

* revert powchain changes

* rename field to blockRootsOfSavedStates

* rename params to blockRoot

* review feedback

* fix loop

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-09 22:25:47 +00:00
terencechain
16273a2040 Add engine timeout values (#10645)
* Add timeout values

* Update engine_client.go

* Update engine_client.go

* Update beacon-chain/powchain/engine_client.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Update beacon-chain/powchain/engine_client.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Update beacon-chain/powchain/engine_client.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Update engine_client.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-05-09 20:02:20 +00:00
Preston Van Loon
97663548a1 fuzz: Add fuzz tests for sparse merkle trie (#10662)
* Add fuzz tests for sparse merkle trie and change HTR signature to return an error

* fix capitalization of error message
2022-05-09 16:51:48 +00:00
Radosław Kapka
7d9d8454b1 Improvements to powchain package (#10652)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-09 13:57:34 +00:00
Radosław Kapka
21bdbd548a Deduplicate native state (a.k.a. One State to rule them all) (#10483)
* v0

* getters/setters

* init and copy

* hasher

* all the nice stuff

* make bazel happy

* remove tests for smaller PR

* remove old states

* move files

* import fixes

* custom MarshalSSZ

* fixed deadlock

* copy version when copying state

* correct issues in state_trie

* fix Copy()

* better e2e comment

* add code to minimal state

* spectest test

* Revert "Auxiliary commit to revert individual files from 84154423464e8372f7e0a03367403656ac5cd78e"

This reverts commit 9602599d183081291dfa0ba4f1036430f63a7822.

* native state assert

* always error

* always log

* more native state usage

* cleanup

* remove empty line

* Revert "spectests"

This reverts commit 1c49bed5d1cf6224afaf21e18562bf72fae5d2b6.

# Conflicts:
#	beacon-chain/powchain/service.go
#	beacon-chain/state/v1/state_trie.go
#	beacon-chain/state/v2/state_trie.go
#	beacon-chain/state/v3/state_trie.go
#	testing/spectest/shared/phase0/finality/BUILD.bazel
#	testing/spectest/shared/phase0/finality/runner.go

* dedup field trie

* fix test issues

* cleanup

* use correct field num in FinalizedRootProof

* use existing version constant

* halfway there

* "working" version

* some fixes

* fix field nums in tests

* rename v0types to nativetypes

* Revert "Auxiliary commit to revert individual files from dc549b1cf8e724bd08cee1ecc760ff3771d5592d"

This reverts commit 7254d3070d8693b283fc686a2e01a822ecbac1b3.

* uncomment code

* remove map size

* Revert "Revert "spectests""

This reverts commit 39c271ae6b.

* use reverse map

* Revert "Revert "Revert "spectests"""

This reverts commit 19ba8cf95c.

* finally found the bug

(cherry picked from commit a5414c4be1bdb61a50b391ea5301895e772cc5e9)

* simplify populateFieldIndexes

* fix copy

(cherry picked from commit 7da4fb8cf51557ef931bb781872ea52fc6731af5)

* remove native state from e2e

* remove index map

* unsupported functions

* Use ProtobufBeaconState() from native state

* tests

* typo

* reduce complexity of `SaveStatesEfficient`

* remove unused receiver name

* update doc.go

* fix test assertion

* fix test assertion 2

* Phase0 justification bits

* bring back state tests

* rename fieldIndexRev

* versioning of ToProto

* remove version check from unexported function

* hasher tests

* don't return error from JustificationBits

* extract fieldConvertersNative

* helper error function

* use fieldConvertersNative

* Introduce RealPosition method on FieldIndex

* use RealPosition in hasher

* remove unused fields

* remove TestAppendBeyondIndicesLimit

(cherry picked from commit 3017e700282969c30006b64c95c21ffe6b166f8b)

* simplify RealPosition

* rename field interface

* use helper in proofs.go

* Update beacon-chain/core/altair/upgrade.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-05-09 13:02:34 +00:00
Nishant Das
e2caaf972f Fix Multiclient E2E Runs (#10660) 2022-05-09 12:38:11 +02:00
Nishant Das
c5ddc266ae Perform Request Size Limiting When Caching Eth1 Headers (#10649)
* rate limit reqs

* fix issues

* make it better

* final check

* terence's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-05-09 03:19:23 +00:00
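The idea here is to cap how many eth1 headers are requested per batch while warming the header cache, splitting a large block range into bounded chunks. A sketch of that chunking pattern; the batch size and names are assumptions, not Prysm's powchain code:

```go
// Sketch of splitting a large block range into capped batches; names and the
// batch size are illustrative only.
package main

import "fmt"

const maxHeadersPerRequest = 1000 // assumed cap

// headerBatches yields inclusive [start, end] sub-ranges no larger than the cap.
func headerBatches(start, end uint64) [][2]uint64 {
	var batches [][2]uint64
	for s := start; s <= end; s += maxHeadersPerRequest {
		e := s + maxHeadersPerRequest - 1
		if e > end {
			e = end
		}
		batches = append(batches, [2]uint64{s, e})
	}
	return batches
}

func main() {
	for _, b := range headerBatches(100, 2500) {
		fmt.Println(b[0], "->", b[1]) // each request stays within the cap
	}
}
```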
Nishant Das
74518f0e1b Log Out Missing Validators in Participation Drops (#10624)
* debug failures

* clean up

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-05-09 01:44:47 +00:00
terencechain
122c3f44cc Insert over-the-wire slashing to forkchoice (#10615)
* remove equivocating votes from forkchoice

* shutup deepsource

* Add insertSlashingsToForkChoiceStore

* Fix equality checks, add tests

* Update process_block.go

* Fix equality check for doublylinked

* More checks

* Rm err return

* Consider slashings over the wire

* Update new_slot_test.go

* Fix spec tests

* Add ReceiveAttesterSlashing

* Update mock.go

* Revert spec test changes

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-05-07 15:56:34 +00:00
kasey
b1b13cfca7 simplify config names, use strings (#10656)
* simplify config names, use strings

* lint

Co-authored-by: kasey <kasey@users.noreply.github.com>
2022-05-06 21:42:27 +00:00
Preston Van Loon
495013e832 Update github actions to go 1.18 (#10655) 2022-05-06 17:50:52 +00:00
kasey
ae82c17dc3 run process_slots before caching committee data (#10639)
* demonstrate issue with committee caching in e2e

* lint

* run process_slots before caching committee data

* pass transitioned state to UpdateCommitteeCache

* don't run extra epochs w/ web3signer

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-05-06 14:33:09 +08:00
787 changed files with 31462 additions and 19293 deletions


@@ -10,7 +10,7 @@
# Prysm specific remote-cache properties.
#build:remote-cache --disk_cache=
- build:remote-cache --remote_download_minimal
+ build:remote-cache --remote_download_toplevel
build:remote-cache --remote_cache=grpc://bazel-remote-cache:9092
build:remote-cache --experimental_remote_downloader=grpc://bazel-remote-cache:9092
build:remote-cache --remote_local_fallback


@@ -38,10 +38,10 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
- - name: Set up Go 1.17
+ - name: Set up Go 1.18
uses: actions/setup-go@v3
with:
- go-version: 1.17
+ go-version: 1.18
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
@@ -55,16 +55,16 @@ jobs:
- name: Checkout
uses: actions/checkout@v2
- - name: Set up Go 1.17
+ - name: Set up Go 1.18
uses: actions/setup-go@v3
with:
- go-version: 1.17
+ go-version: 1.18
id: go
- name: Golangci-lint
uses: golangci/golangci-lint-action@v2
with:
- args: --print-issued-lines --sort-results --no-config --timeout=10m --disable-all -E deadcode -E errcheck -E gosimple --skip-files=validator/web/site_data.go --skip-dirs=proto --go=1.17
+ args: --print-issued-lines --sort-results --no-config --timeout=10m --disable-all -E deadcode -E errcheck -E gosimple --skip-files=validator/web/site_data.go --skip-dirs=proto --go=1.18
version: v1.45.2
skip-go-installation: true
@@ -75,7 +75,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
- go-version: 1.17
+ go-version: 1.18
id: go
- name: Check out code into the Go module directory
@@ -89,10 +89,10 @@ jobs:
# Use blst tag to allow go and bazel builds for blst.
run: go build -v ./...
- # fuzz and blst_disabled leverage go tag based stubs at compile time.
- # Building with these tags should be checked and enforced at pre-submit.
- - name: Build for fuzzing
- run: go build -tags=fuzz,blst_disabled ./...
+ # fuzz leverage go tag based stubs at compile time.
+ # Building and testing with these tags should be checked and enforced at pre-submit.
+ - name: Test for fuzzing
+ run: go test -tags=fuzz,develop ./... -test.run=^Fuzz
# Tests run via Bazel for now...
# - name: Test
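The new CI step above runs Go 1.18-style fuzz targets over their seed corpus via go test with -test.run=^Fuzz. For reference, a minimal generic fuzz target in that style (unrelated to Prysm's actual trie fuzz tests; it belongs in a _test.go file):

```go
// Minimal example of a Go 1.18 fuzz target; Prysm's real targets (e.g. for the
// sparse merkle trie) are more involved.
package example

import (
	"bytes"
	"testing"
)

func reverse(b []byte) []byte {
	out := make([]byte, len(b))
	for i, c := range b {
		out[len(b)-1-i] = c
	}
	return out
}

// FuzzReverse checks that reversing twice returns the original input. The seed
// corpus runs under plain `go test`; new inputs are explored with `go test -fuzz=FuzzReverse`.
func FuzzReverse(f *testing.F) {
	f.Add([]byte("seed"))
	f.Fuzz(func(t *testing.T, in []byte) {
		if !bytes.Equal(reverse(reverse(in)), in) {
			t.Fatalf("double reverse mismatch for %q", in)
		}
	})
}
```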

.gitignore

@@ -35,3 +35,6 @@ bin
# p2p metaData
metaData
+ # execution API authentication
+ jwt.hex


@@ -26,15 +26,18 @@ You can use `bazel run //tools/genesis-state-gen` to create a deterministic gene
### Usage
- **--genesis-time** uint: Unix timestamp used as the genesis time in the generated genesis state (defaults to now)
- **--mainnet-config** bool: Select whether genesis state should be generated with mainnet or minimal (default) params
- **--num-validators** int: Number of validators to deterministically include in the generated genesis state
- **--output-ssz** string: Output filename of the SSZ marshaling of the generated genesis state
- **--config-name=interop** string: name of the beacon chain config to use when generating the state. ex mainnet|minimal|interop
**deprecated flag: use --config-name instead**
- **--mainnet-config** bool: Select whether genesis state should be generated with mainnet or minimal (default) params
The example below creates 64 validator keys, instantiates a genesis state with those 64 validators and with genesis unix timestamp 1567542540,
and finally writes a ssz encoded output to ~/Desktop/genesis.ssz. This file can be used to kickstart the beacon chain in the next section.
and finally writes a ssz encoded output to ~/Desktop/genesis.ssz. This file can be used to kickstart the beacon chain in the next section. When using the `--interop-*` flags, the beacon node will assume the `interop` config should be used, unless a different config is specified on the command line.
```
bazel run //tools/genesis-state-gen -- --output-ssz ~/Desktop/genesis.ssz --num-validators 64 --genesis-time 1567542540
bazel run //tools/genesis-state-gen -- --config-name interop --output-ssz ~/Desktop/genesis.ssz --num-validators 64 --genesis-time 1567542540
```
## Launching a Beacon Node + Validator Client
@@ -46,8 +49,10 @@ Open up two terminal windows, run:
```
bazel run //beacon-chain -- \
--bootstrap-node= \
--deposit-contract $(curl -s https://prylabs.net/contract) \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-interop \
--force-clear-db \
--min-sync-peers=0 \
--interop-num-validators 64 \
--interop-eth1data-votes
```
@@ -69,8 +74,10 @@ Assuming you generated a `genesis.ssz` file with 64 validators, open up two term
```
bazel run //beacon-chain -- \
--bootstrap-node= \
--deposit-contract $(curl -s https://prylabs.net/contract) \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-interop \
--force-clear-db \
--min-sync-peers=0 \
--interop-genesis-state /path/to/genesis.ssz \
--interop-eth1data-votes
```


@@ -2,7 +2,8 @@
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![Go Report Card](https://goreportcard.com/badge/github.com/prysmaticlabs/prysm)](https://goreportcard.com/report/github.com/prysmaticlabs/prysm)
[![Consensus_Spec_Version 1.1.10](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.1.10-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.1.10)
[![Consensus_Spec_Version 1.2.0-rc.1](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.2.0.rc.1-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.2.0-rc.1)
[![Execution_API_Version 1.0.0-alpha.9](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.alpha.9-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-alpha.9/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/CTYGPUJ)
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/eth2/) specification, developed by [Prysmatic Labs](https://prysmaticlabs.com). See the [Changelog](https://github.com/prysmaticlabs/prysm/releases) for details of the latest releases and upcoming breaking changes.


@@ -215,7 +215,7 @@ filegroup(
url = "https://github.com/eth-clients/slashing-protection-interchange-tests/archive/b8413ca42dc92308019d0d4db52c87e9e125c4e9.tar.gz",
)
consensus_spec_version = "v1.1.10"
consensus_spec_version = "v1.2.0-rc.1"
bls_test_version = "v0.1.1"
@@ -231,7 +231,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "28043009cc2f6fc9804e73c8c1fc2cb27062f1591e6884f3015ae1dd7a276883",
sha256 = "9c93f87378aaa6d6fe1c67b396eac2aacc9594af2a83f028cb99c95dea5b81df",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -247,7 +247,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "bc1a283ca068f310f04d70c4f6a8eaa0b8f7e9318073a8bdc2ee233111b4e339",
sha256 = "52f2c52415228cee8a4de5a09abff785f439a77dfef8f03e834e4e16857673c1",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -263,7 +263,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "bbabb482c229ff9d4e2c7b77c992edb452f9d0af7c6d8dd4f922f06a7b101e81",
sha256 = "022dcc0d6de7dd27b337a0d1b945077eaf5ee47000700395a693fc25e12f96df",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "408a5524548ad3fcf387f65ac7ec52781d9ee899499720bb12451b48a15818d4",
sha256 = "0a9c110305cbd6ebbe0d942f0f33e6ce22dd484ce4ceed277bf185a091941cde",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)


@@ -26,69 +26,126 @@ type OriginData struct {
bb []byte
st state.BeaconState
b interfaces.SignedBeaconBlock
cf *detect.VersionedUnmarshaler
}
// CheckpointString returns the standard string representation of a Checkpoint for the block root and epoch for the
// SignedBeaconBlock value found by DownloadOriginData.
// The format is a a hex-encoded block root, followed by the epoch of the block, separated by a colon. For example:
// "0x1c35540cac127315fabb6bf29181f2ae0de1a3fc909d2e76ba771e61312cc49a:74888"
func (od *OriginData) CheckpointString() string {
return fmt.Sprintf("%#x:%d", od.wsd.BlockRoot, od.wsd.Epoch)
vu *detect.VersionedUnmarshaler
br [32]byte
sr [32]byte
}
// SaveBlock saves the downloaded block to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (od *OriginData) SaveBlock(dir string) (string, error) {
blockPath := path.Join(dir, fname("block", od.cf, od.b.Block().Slot(), od.wsd.BlockRoot))
return blockPath, file.WriteFile(blockPath, od.BlockBytes())
func (o *OriginData) SaveBlock(dir string) (string, error) {
blockPath := path.Join(dir, fname("block", o.vu, o.b.Block().Slot(), o.br))
return blockPath, file.WriteFile(blockPath, o.BlockBytes())
}
// SaveState saves the downloaded state to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (od *OriginData) SaveState(dir string) (string, error) {
statePath := path.Join(dir, fname("state", od.cf, od.st.Slot(), od.wsd.StateRoot))
return statePath, file.WriteFile(statePath, od.StateBytes())
func (o *OriginData) SaveState(dir string) (string, error) {
statePath := path.Join(dir, fname("state", o.vu, o.st.Slot(), o.sr))
return statePath, file.WriteFile(statePath, o.StateBytes())
}
// StateBytes returns the ssz-encoded bytes of the downloaded BeaconState value.
func (od *OriginData) StateBytes() []byte {
return od.sb
func (o *OriginData) StateBytes() []byte {
return o.sb
}
// BlockBytes returns the ssz-encoded bytes of the downloaded SignedBeaconBlock value.
func (od *OriginData) BlockBytes() []byte {
return od.bb
func (o *OriginData) BlockBytes() []byte {
return o.bb
}
func fname(prefix string, cf *detect.VersionedUnmarshaler, slot types.Slot, root [32]byte) string {
return fmt.Sprintf("%s_%s_%s_%d-%#x.ssz", prefix, cf.Config.ConfigName, version.String(cf.Fork), slot, root)
func fname(prefix string, vu *detect.VersionedUnmarshaler, slot types.Slot, root [32]byte) string {
return fmt.Sprintf("%s_%s_%s_%d-%#x.ssz", prefix, vu.Config.ConfigName, version.String(vu.Fork), slot, root)
}
// this method downloads the head state, which can be used to find the correct chain config
// and use prysm's helper methods to compute the latest weak subjectivity epoch.
func getWeakSubjectivityEpochFromHead(ctx context.Context, client *Client) (types.Epoch, error) {
headBytes, err := client.GetState(ctx, IdHead)
// DownloadFinalizedData downloads the most recently finalized state, and the block most recently applied to that state.
// This pair can be used to initialize a new beacon node via checkpoint sync.
func DownloadFinalizedData(ctx context.Context, client *Client) (*OriginData, error) {
sb, err := client.GetState(ctx, IdFinalized)
if err != nil {
return 0, err
return nil, err
}
cf, err := detect.FromState(headBytes)
vu, err := detect.FromState(sb)
if err != nil {
return 0, errors.Wrap(err, "error detecting chain config for beacon state")
return nil, errors.Wrap(err, "error detecting chain config for finalized state")
}
log.Printf("detected supported config in remote head state, name=%s, fork=%s", cf.Config.ConfigName, version.String(cf.Fork))
headState, err := cf.UnmarshalBeaconState(headBytes)
log.Printf("detected supported config in remote finalized state, name=%s, fork=%s", vu.Config.ConfigName, version.String(vu.Fork))
s, err := vu.UnmarshalBeaconState(sb)
if err != nil {
return 0, errors.Wrap(err, "error unmarshaling state to correct version")
return nil, errors.Wrap(err, "error unmarshaling finalized state to correct version")
}
epoch, err := helpers.LatestWeakSubjectivityEpoch(ctx, headState, cf.Config)
sr, err := s.HashTreeRoot(ctx)
if err != nil {
return 0, errors.Wrap(err, "error computing the weak subjectivity epoch from head state")
return nil, errors.Wrapf(err, "failed to compute htr for finalized state at slot=%d", s.Slot())
}
header := s.LatestBlockHeader()
header.StateRoot = sr[:]
br, err := header.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error while computing block root using state data")
}
log.Printf("(computed client-side) weak subjectivity epoch = %d", epoch)
return epoch, nil
bb, err := client.GetBlock(ctx, IdFromRoot(br))
if err != nil {
return nil, errors.Wrapf(err, "error requesting block by root = %#x", br)
}
b, err := vu.UnmarshalBeaconBlock(bb)
if err != nil {
return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
}
realBlockRoot, err := b.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block")
}
log.Printf("BeaconState slot=%d, Block slot=%d", s.Slot(), b.Block().Slot())
log.Printf("BeaconState htr=%#xd, Block state_root=%#x", sr, b.Block().StateRoot())
log.Printf("BeaconState latest_block_header htr=%#xd, block htr=%#x", br, realBlockRoot)
return &OriginData{
st: s,
b: b,
sb: sb,
bb: bb,
vu: vu,
br: br,
sr: sr,
}, nil
}
// WeakSubjectivityData represents the state root, block root and epoch of the BeaconState + SignedBeaconBlock
// that falls at the beginning of the current weak subjectivity period. These values can be used to construct
// a weak subjectivity checkpoint beacon node flag to be used for validation.
type WeakSubjectivityData struct {
BlockRoot [32]byte
StateRoot [32]byte
Epoch types.Epoch
}
// CheckpointString returns the standard string representation of a Checkpoint.
// The format is a hex-encoded block root, followed by the epoch of the block, separated by a colon. For example:
// "0x1c35540cac127315fabb6bf29181f2ae0de1a3fc909d2e76ba771e61312cc49a:74888"
func (wsd *WeakSubjectivityData) CheckpointString() string {
return fmt.Sprintf("%#x:%d", wsd.BlockRoot, wsd.Epoch)
}
// ComputeWeakSubjectivityCheckpoint attempts to use the prysm weak_subjectivity api
// to obtain the current weak_subjectivity checkpoint.
// For non-prysm nodes, the same computation will be performed with extra steps,
// using the head state downloaded from the beacon node api.
func ComputeWeakSubjectivityCheckpoint(ctx context.Context, client *Client) (*WeakSubjectivityData, error) {
ws, err := client.GetWeakSubjectivity(ctx)
if err != nil {
// a 404/405 is expected if querying an endpoint that doesn't support the weak subjectivity checkpoint api
if !errors.Is(err, ErrNotOK) {
return nil, errors.Wrap(err, "unexpected API response for prysm-only weak subjectivity checkpoint API")
}
// fall back to vanilla Beacon Node API method
return computeBackwardsCompatible(ctx, client)
}
log.Printf("server weak subjectivity checkpoint response - epoch=%d, block_root=%#x, state_root=%#x", ws.Epoch, ws.BlockRoot, ws.StateRoot)
return ws, nil
}
const (
@@ -96,8 +153,8 @@ const (
prysmImplementationName = "Prysm"
)
// ErrUnsupportedPrysmCheckpointVersion indicates remote beacon node can't be used for checkpoint retrieval.
var ErrUnsupportedPrysmCheckpointVersion = errors.New("node does not meet minimum version requirements for checkpoint retrieval")
// errUnsupportedPrysmCheckpointVersion indicates remote beacon node can't be used for checkpoint retrieval.
var errUnsupportedPrysmCheckpointVersion = errors.New("node does not meet minimum version requirements for checkpoint retrieval")
// for older endpoints or clients that do not support the weak_subjectivity api method
// we gather the necessary data for a checkpoint sync by:
@@ -105,14 +162,14 @@ var ErrUnsupportedPrysmCheckpointVersion = errors.New("node does not meet minimu
// - requesting the state at the first slot of the epoch
// - using hash_tree_root(state.latest_block_header) to compute the block the state integrates
// - requesting that block by its root
func downloadBackwardsCompatible(ctx context.Context, client *Client) (*OriginData, error) {
func computeBackwardsCompatible(ctx context.Context, client *Client) (*WeakSubjectivityData, error) {
log.Print("falling back to generic checkpoint derivation, weak_subjectivity API not supported by server")
nv, err := client.GetNodeVersion(ctx)
if err != nil {
return nil, errors.Wrap(err, "unable to proceed with fallback method without confirming node version")
}
if nv.implementation == prysmImplementationName && semver.Compare(nv.semver, prysmMinimumVersion) < 0 {
return nil, errors.Wrapf(ErrUnsupportedPrysmCheckpointVersion, "%s < minimum (%s)", nv.semver, prysmMinimumVersion)
return nil, errors.Wrapf(errUnsupportedPrysmCheckpointVersion, "%s < minimum (%s)", nv.semver, prysmMinimumVersion)
}
epoch, err := getWeakSubjectivityEpochFromHead(ctx, client)
if err != nil {
@@ -127,136 +184,78 @@ func downloadBackwardsCompatible(ctx context.Context, client *Client) (*OriginDa
log.Printf("requesting checkpoint state at slot %d", slot)
// get the state at the first slot of the epoch
stateBytes, err := client.GetState(ctx, IdFromSlot(slot))
sb, err := client.GetState(ctx, IdFromSlot(slot))
if err != nil {
return nil, errors.Wrapf(err, "failed to request state by slot from api, slot=%d", slot)
}
// ConfigFork is used to unmarshal the BeaconState so we can read the block root in latest_block_header
cf, err := detect.FromState(stateBytes)
vu, err := detect.FromState(sb)
if err != nil {
return nil, errors.Wrap(err, "error detecting chain config for beacon state")
}
log.Printf("detected supported config in checkpoint state, name=%s, fork=%s", cf.Config.ConfigName, version.String(cf.Fork))
log.Printf("detected supported config in checkpoint state, name=%s, fork=%s", vu.Config.ConfigName, version.String(vu.Fork))
st, err := cf.UnmarshalBeaconState(stateBytes)
s, err := vu.UnmarshalBeaconState(sb)
if err != nil {
return nil, errors.Wrap(err, "error using detected config fork to unmarshal state bytes")
}
// compute state and block roots
stateRoot, err := st.HashTreeRoot(ctx)
sr, err := s.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of state")
}
header := st.LatestBlockHeader()
header.StateRoot = stateRoot[:]
computedBlockRoot, err := header.HashTreeRoot()
h := s.LatestBlockHeader()
h.StateRoot = sr[:]
br, err := h.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error while computing block root using state data")
}
blockBytes, err := client.GetBlock(ctx, IdFromRoot(computedBlockRoot))
bb, err := client.GetBlock(ctx, IdFromRoot(br))
if err != nil {
return nil, errors.Wrapf(err, "error requesting block by root = %d", computedBlockRoot)
return nil, errors.Wrapf(err, "error requesting block by root = %d", br)
}
block, err := cf.UnmarshalBeaconBlock(blockBytes)
b, err := vu.UnmarshalBeaconBlock(bb)
if err != nil {
return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
}
blockRoot, err := block.Block().HashTreeRoot()
br, err = b.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root for block obtained via root")
}
log.Printf("BeaconState slot=%d, Block slot=%d", st.Slot(), block.Block().Slot())
log.Printf("BeaconState htr=%#xd, Block state_root=%#x", stateRoot, block.Block().StateRoot())
log.Printf("BeaconBlock root computed from state=%#x, Block htr=%#x", computedBlockRoot, blockRoot)
return &OriginData{
wsd: &WeakSubjectivityData{
BlockRoot: blockRoot,
StateRoot: stateRoot,
Epoch: epoch,
},
st: st,
sb: stateBytes,
b: block,
bb: blockBytes,
cf: cf,
return &WeakSubjectivityData{
Epoch: epoch,
BlockRoot: br,
StateRoot: sr,
}, nil
}
// DownloadOriginData attempts to use the proposed weak_subjectivity beacon node api
// to obtain the weak_subjectivity metadata (epoch, block_root, state_root) needed to sync
// a beacon node from the canonical weak subjectivity checkpoint. As this is a proposed API
// that will only be supported by prysm at first, in the event of a 404 we fallback to using a
// different technique where we first download the head state which can be used to compute the
// weak subjectivity epoch on the client side.
func DownloadOriginData(ctx context.Context, client *Client) (*OriginData, error) {
ws, err := client.GetWeakSubjectivity(ctx)
// this method downloads the head state, which can be used to find the correct chain config
// and use prysm's helper methods to compute the latest weak subjectivity epoch.
func getWeakSubjectivityEpochFromHead(ctx context.Context, client *Client) (types.Epoch, error) {
headBytes, err := client.GetState(ctx, IdHead)
if err != nil {
// a 404/405 is expected if querying an endpoint that doesn't support the weak subjectivity checkpoint api
if !errors.Is(err, ErrNotOK) {
return nil, errors.Wrap(err, "unexpected API response for prysm-only weak subjectivity checkpoint API")
}
// fall back to vanilla Beacon Node API method
return downloadBackwardsCompatible(ctx, client)
return 0, err
}
log.Printf("server weak subjectivity checkpoint response - epoch=%d, block_root=%#x, state_root=%#x", ws.Epoch, ws.BlockRoot, ws.StateRoot)
// use first slot of the epoch for the block slot
slot, err := slots.EpochStart(ws.Epoch)
vu, err := detect.FromState(headBytes)
if err != nil {
return nil, errors.Wrapf(err, "error computing first slot of epoch=%d", ws.Epoch)
return 0, errors.Wrap(err, "error detecting chain config for beacon state")
}
log.Printf("requesting checkpoint state at slot %d", slot)
stateBytes, err := client.GetState(ctx, IdFromSlot(slot))
log.Printf("detected supported config in remote head state, name=%s, fork=%s", vu.Config.ConfigName, version.String(vu.Fork))
headState, err := vu.UnmarshalBeaconState(headBytes)
if err != nil {
return nil, errors.Wrapf(err, "failed to request state by slot from api, slot=%d", slot)
}
cf, err := detect.FromState(stateBytes)
if err != nil {
return nil, errors.Wrap(err, "error detecting chain config for beacon state")
}
log.Printf("detected supported config in checkpoint state, name=%s, fork=%s", cf.Config.ConfigName, version.String(cf.Fork))
state, err := cf.UnmarshalBeaconState(stateBytes)
if err != nil {
return nil, errors.Wrap(err, "error using detected config fork to unmarshal state bytes")
}
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrapf(err, "failed to compute htr for state at slot=%d", slot)
return 0, errors.Wrap(err, "error unmarshaling state to correct version")
}
blockRoot, err := state.LatestBlockHeader().HashTreeRoot()
epoch, err := helpers.LatestWeakSubjectivityEpoch(ctx, headState, vu.Config)
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of latest_block_header")
return 0, errors.Wrap(err, "error computing the weak subjectivity epoch from head state")
}
blockBytes, err := client.GetBlock(ctx, IdFromRoot(ws.BlockRoot))
if err != nil {
return nil, errors.Wrapf(err, "error requesting block by slot = %d", slot)
}
block, err := cf.UnmarshalBeaconBlock(blockBytes)
if err != nil {
return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
}
realBlockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block")
}
log.Printf("BeaconState slot=%d, Block slot=%d", state.Slot(), block.Block().Slot())
log.Printf("BeaconState htr=%#xd, Block state_root=%#x", stateRoot, block.Block().StateRoot())
log.Printf("BeaconState latest_block_header htr=%#xd, block htr=%#x", blockRoot, realBlockRoot)
return &OriginData{
wsd: ws,
st: state,
b: block,
sb: stateBytes,
bb: blockBytes,
cf: cf,
}, nil
log.Printf("(computed client-side) weak subjectivity epoch = %d", epoch)
return epoch, nil
}
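
The two exported entry points above split the work: DownloadFinalizedData fetches the most recently finalized state and the block most recently applied to it (the pair used to seed checkpoint sync), while ComputeWeakSubjectivityCheckpoint only derives the epoch and roots, falling back to a client-side computation for non-prysm servers. A minimal sketch of wiring them together follows; the import path and the NewClient constructor are assumptions (mirroring the builder client's NewClient later in this diff), everything else is taken from the code above.

```
// Sketch only; hedged example of the checkpoint helpers in this diff.
package main

import (
	"context"
	"log"

	beacon "github.com/prysmaticlabs/prysm/api/client/beacon" // assumed import path
)

func main() {
	ctx := context.Background()
	c, err := beacon.NewClient("localhost:3500") // assumed constructor, illustrative host
	if err != nil {
		log.Fatal(err)
	}

	// Fetch the finalized state and its matching block, then persist them
	// with the slot/root-based filenames produced by fname().
	od, err := beacon.DownloadFinalizedData(ctx, c)
	if err != nil {
		log.Fatal(err)
	}
	statePath, err := od.SaveState("/tmp/origin")
	if err != nil {
		log.Fatal(err)
	}
	blockPath, err := od.SaveBlock("/tmp/origin")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("wrote %s and %s", statePath, blockPath)

	// Independently, derive the weak subjectivity checkpoint in the
	// "block_root:epoch" format documented on CheckpointString.
	wsd, err := beacon.ComputeWeakSubjectivityCheckpoint(ctx, c)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("checkpoint %s", wsd.CheckpointString())
}
```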


@@ -93,8 +93,8 @@ func TestFallbackVersionCheck(t *testing.T) {
}}
ctx := context.Background()
_, err := DownloadOriginData(ctx, c)
require.ErrorIs(t, err, ErrUnsupportedPrysmCheckpointVersion)
_, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
require.ErrorIs(t, err, errUnsupportedPrysmCheckpointVersion)
}
func TestFname(t *testing.T) {
@@ -120,9 +120,9 @@ func TestFname(t *testing.T) {
require.Equal(t, expected, actual)
}
func TestDownloadOriginData(t *testing.T) {
func TestDownloadWeakSubjectivityCheckpoint(t *testing.T) {
ctx := context.Background()
cfg := params.MainnetConfig()
cfg := params.MainnetConfig().Copy()
epoch := cfg.AltairForkEpoch - 1
// set up checkpoint state, using the epoch that will be computed as the ws checkpoint state based on the head state
@@ -204,19 +204,15 @@ func TestDownloadOriginData(t *testing.T) {
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
od, err := DownloadOriginData(ctx, c)
wsd, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
require.NoError(t, err)
require.Equal(t, expectedWSD.Epoch, od.wsd.Epoch)
require.Equal(t, expectedWSD.StateRoot, od.wsd.StateRoot)
require.Equal(t, expectedWSD.BlockRoot, od.wsd.BlockRoot)
require.DeepEqual(t, wsSerialized, od.sb)
require.DeepEqual(t, serBlock, od.bb)
require.DeepEqual(t, wst.Fork().CurrentVersion, od.cf.Version[:])
require.DeepEqual(t, version.Phase0, od.cf.Fork)
require.Equal(t, expectedWSD.Epoch, wsd.Epoch)
require.Equal(t, expectedWSD.StateRoot, wsd.StateRoot)
require.Equal(t, expectedWSD.BlockRoot, wsd.BlockRoot)
}
// runs downloadBackwardsCompatible directly
// and via DownloadOriginData with a round tripper that triggers the backwards compatible code path
// runs computeBackwardsCompatible directly
// and via ComputeWeakSubjectivityCheckpoint with a round tripper that triggers the backwards compatible code path
func TestDownloadBackwardsCompatibleCombined(t *testing.T) {
ctx := context.Background()
cfg := params.MainnetConfig()
@@ -297,16 +293,12 @@ func TestDownloadBackwardsCompatibleCombined(t *testing.T) {
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
odPub, err := DownloadOriginData(ctx, c)
wsPub, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
require.NoError(t, err)
odPriv, err := downloadBackwardsCompatible(ctx, c)
wsPriv, err := computeBackwardsCompatible(ctx, c)
require.NoError(t, err)
require.DeepEqual(t, odPriv.wsd, odPub.wsd)
require.DeepEqual(t, odPriv.sb, odPub.sb)
require.DeepEqual(t, odPriv.bb, odPub.bb)
require.DeepEqual(t, odPriv.cf.Fork, odPub.cf.Fork)
require.DeepEqual(t, odPriv.cf.Version, odPub.cf.Version)
require.DeepEqual(t, wsPriv, wsPub)
}
func TestGetWeakSubjectivityEpochFromHead(t *testing.T) {
@@ -402,3 +394,94 @@ func populateValidators(cfg *params.BeaconChainConfig, st state.BeaconState, val
return nil
}
func TestDownloadFinalizedData(t *testing.T) {
ctx := context.Background()
cfg := params.MainnetConfig().Copy()
// avoid the altair zone because genesis tests are easier to set up
epoch := cfg.AltairForkEpoch - 1
// set up checkpoint state, using the epoch that will be computed as the ws checkpoint state based on the head state
slot, err := slots.EpochStart(epoch)
require.NoError(t, err)
st, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, epoch)
require.NoError(t, st.SetFork(fork))
// set up checkpoint block
b, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, wrapper.SetBlockParentRoot(b, cfg.ZeroHash))
require.NoError(t, wrapper.SetBlockSlot(b, slot))
require.NoError(t, wrapper.SetProposerIndex(b, 0))
// set up state header pointing at checkpoint block - this is how the block is downloaded by root
header, err := b.Header()
require.NoError(t, err)
require.NoError(t, st.SetLatestBlockHeader(header.Header))
// order of operations can be confusing here:
// - when computing the state root, make sure block header is complete, EXCEPT the state root should be zero-value
// - before computing the block root (to match the request route), the block should include the state root
// *computed from the state with a header that does not have a state root set yet*
sr, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
require.NoError(t, wrapper.SetBlockStateRoot(b, sr))
mb, err := b.MarshalSSZ()
require.NoError(t, err)
br, err := b.Block().HashTreeRoot()
require.NoError(t, err)
ms, err := st.MarshalSSZ()
require.NoError(t, err)
hc := &http.Client{
Transport: &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case renderGetStatePath(IdFinalized):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(ms))
case renderGetBlockPath(IdFromRoot(br)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(mb))
default:
res.StatusCode = http.StatusInternalServerError
res.Body = io.NopCloser(bytes.NewBufferString(""))
}
return res, nil
}},
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
// sanity check before we go through checkpoint
// make sure we can download the state and unmarshal it with the VersionedUnmarshaler
sb, err := c.GetState(ctx, IdFinalized)
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(sb, ms))
vu, err := detect.FromState(sb)
require.NoError(t, err)
us, err := vu.UnmarshalBeaconState(sb)
require.NoError(t, err)
ushtr, err := us.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, sr, ushtr)
expected := &OriginData{
sb: ms,
bb: mb,
br: br,
sr: sr,
}
od, err := DownloadFinalizedData(ctx, c)
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(expected.sb, od.sb))
require.Equal(t, true, bytes.Equal(expected.bb, od.bb))
require.Equal(t, expected.br, od.br)
require.Equal(t, expected.sr, od.sr)
}


@@ -46,10 +46,9 @@ const (
type StateOrBlockId string
const (
IdFinalized StateOrBlockId = "finalized"
IdGenesis StateOrBlockId = "genesis"
IdHead StateOrBlockId = "head"
IdJustified StateOrBlockId = "justified"
IdFinalized StateOrBlockId = "finalized"
)
var ErrMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
@@ -60,7 +59,7 @@ func IdFromRoot(r [32]byte) StateOrBlockId {
return StateOrBlockId(fmt.Sprintf("%#x", r))
}
// IdFromRoot encodes a Slot in the format expected by the API in places where a slot can be used to identify
// IdFromSlot encodes a Slot in the format expected by the API in places where a slot can be used to identify
// a BeaconState or SignedBeaconBlock.
func IdFromSlot(s types.Slot) StateOrBlockId {
return StateOrBlockId(strconv.FormatUint(uint64(s), 10))
@@ -346,16 +345,6 @@ func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData
}, nil
}
// WeakSubjectivityData represents the state root, block root and epoch of the BeaconState + SignedBeaconBlock
// that falls at the beginning of the current weak subjectivity period. These values can be used to construct
// a weak subjectivity checkpoint, or to download a BeaconState+SignedBeaconBlock pair that can be used to bootstrap
// a new Beacon Node using Checkpoint Sync.
type WeakSubjectivityData struct {
BlockRoot [32]byte
StateRoot [32]byte
Epoch types.Epoch
}
func non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
var body string
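
The identifier helpers above (the named ids plus IdFromSlot and IdFromRoot) are what the checkpoint code feeds into the raw GetState/GetBlock getters. A small sketch of the three addressing forms follows; the package alias and import paths are carried over from the earlier checkpoint example and remain assumptions.

```
// Sketch only; shows the identifier forms accepted by GetState/GetBlock.
package checkpointexample

import (
	"context"

	beacon "github.com/prysmaticlabs/prysm/api/client/beacon" // assumed import path
	types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
)

func fetchExamples(ctx context.Context, c *beacon.Client, slot types.Slot, root [32]byte) error {
	if _, err := c.GetState(ctx, beacon.IdFinalized); err != nil { // named identifier
		return err
	}
	if _, err := c.GetState(ctx, beacon.IdFromSlot(slot)); err != nil { // by slot, e.g. first slot of an epoch
		return err
	}
	if _, err := c.GetBlock(ctx, beacon.IdFromRoot(root)); err != nil { // by block root
		return err
	}
	return nil
}
```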


@@ -0,0 +1,41 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"client.go",
"errors.go",
"types.go",
],
importpath = "github.com/prysmaticlabs/prysm/api/client/builder",
visibility = ["//visibility:public"],
deps = [
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"client_test.go",
"types_test.go",
],
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)


@@ -0,0 +1,254 @@
package builder
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"net/url"
"text/template"
"time"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
log "github.com/sirupsen/logrus"
)
const (
getExecHeaderPath = "/eth/v1/builder/header/{{.Slot}}/{{.ParentHash}}/{{.Pubkey}}"
getStatus = "/eth/v1/builder/status"
postBlindedBeaconBlockPath = "/eth/v1/builder/blinded_blocks"
postRegisterValidatorPath = "/eth/v1/builder/validators"
)
var errMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
// WithTimeout sets the .Timeout attribute of the wrapped http.Client.
func WithTimeout(timeout time.Duration) ClientOpt {
return func(c *Client) {
c.hc.Timeout = timeout
}
}
type observer interface {
observe(r *http.Request) error
}
func WithObserver(m observer) ClientOpt {
return func(c *Client) {
c.obvs = append(c.obvs, m)
}
}
type requestLogger struct{}
func (*requestLogger) observe(r *http.Request) (e error) {
b := bytes.NewBuffer(nil)
if r.Body == nil {
log.WithFields(log.Fields{
"body-base64": "(nil value)",
"url": r.URL.String(),
}).Info("builder http request")
return nil
}
t := io.TeeReader(r.Body, b)
defer func() {
if r.Body != nil {
e = r.Body.Close()
}
}()
body, err := io.ReadAll(t)
if err != nil {
return err
}
r.Body = io.NopCloser(b)
log.WithFields(log.Fields{
"body-base64": string(body),
"url": r.URL.String(),
}).Info("builder http request")
return nil
}
var _ observer = &requestLogger{}
// Client provides a collection of helper methods for calling Builder API endpoints.
type Client struct {
hc *http.Client
baseURL *url.URL
obvs []observer
}
// NewClient constructs a new client with the provided options (ex WithTimeout).
// `host` is the base host + port used to construct request urls. This value can be
// a URL string, or NewClient will assume an http endpoint if just `host:port` is used.
func NewClient(host string, opts ...ClientOpt) (*Client, error) {
u, err := urlForHost(host)
if err != nil {
return nil, err
}
c := &Client{
hc: &http.Client{},
baseURL: u,
}
for _, o := range opts {
o(c)
}
return c, nil
}
func urlForHost(h string) (*url.URL, error) {
// try to parse as url (being permissive)
u, err := url.Parse(h)
if err == nil && u.Host != "" {
return u, nil
}
// try to parse as host:port
host, port, err := net.SplitHostPort(h)
if err != nil {
return nil, errMalformedHostname
}
return &url.URL{Host: net.JoinHostPort(host, port), Scheme: "http"}, nil
}
// NodeURL returns a human-readable string representation of the beacon node base url.
func (c *Client) NodeURL() string {
return c.baseURL.String()
}
type reqOption func(*http.Request)
// do is a generic, opinionated GET function to reduce boilerplate amongst the getters in this package.
func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, opts ...reqOption) ([]byte, error) {
u := c.baseURL.ResolveReference(&url.URL{Path: path})
log.Printf("requesting %s", u.String())
req, err := http.NewRequestWithContext(ctx, method, u.String(), body)
if err != nil {
return nil, err
}
for _, o := range opts {
o(req)
}
for _, o := range c.obvs {
if err := o.observe(req); err != nil {
return nil, err
}
}
r, err := c.hc.Do(req)
if err != nil {
return nil, err
}
defer func() {
err = r.Body.Close()
}()
if r.StatusCode != http.StatusOK {
return nil, non200Err(r)
}
b, err := io.ReadAll(r.Body)
if err != nil {
return nil, errors.Wrap(err, "error reading http response body from GetBlock")
}
return b, nil
}
var execHeaderTemplate = template.Must(template.New("").Parse(getExecHeaderPath))
func execHeaderPath(slot types.Slot, parentHash [32]byte, pubkey [48]byte) (string, error) {
v := struct {
Slot types.Slot
ParentHash string
Pubkey string
}{
Slot: slot,
ParentHash: fmt.Sprintf("%#x", parentHash),
Pubkey: fmt.Sprintf("%#x", pubkey),
}
b := bytes.NewBuffer(nil)
err := execHeaderTemplate.Execute(b, v)
if err != nil {
return "", errors.Wrapf(err, "error rendering exec header template with slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
return b.String(), nil
}
// GetHeader is used by a proposing validator to request an ExecutionPayloadHeader from the Builder node.
func (c *Client) GetHeader(ctx context.Context, slot types.Slot, parentHash [32]byte, pubkey [48]byte) (*ethpb.SignedBuilderBid, error) {
path, err := execHeaderPath(slot, parentHash, pubkey)
if err != nil {
return nil, err
}
hb, err := c.do(ctx, http.MethodGet, path, nil)
if err != nil {
return nil, err
}
hr := &ExecHeaderResponse{}
if err := json.Unmarshal(hb, hr); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
return hr.ToProto()
}
// RegisterValidator encodes the SignedValidatorRegistrationV1 message to json (including hex-encoding the byte
// fields with 0x prefixes) and posts to the builder validator registration endpoint.
func (c *Client) RegisterValidator(ctx context.Context, svr *ethpb.SignedValidatorRegistrationV1) error {
v := &SignedValidatorRegistration{SignedValidatorRegistrationV1: svr}
body, err := json.Marshal(v)
if err != nil {
return errors.Wrap(err, "error encoding the SignedValidatorRegistration value body in RegisterValidator")
}
_, err = c.do(ctx, http.MethodPost, postRegisterValidatorPath, bytes.NewBuffer(body))
return err
}
// SubmitBlindedBlock calls the builder API endpoint that binds the validator to the builder and submits the block.
// The response is the full ExecutionPayload used to create the blinded block.
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb *ethpb.SignedBlindedBeaconBlockBellatrix) (*v1.ExecutionPayload, error) {
v := &SignedBlindedBeaconBlockBellatrix{SignedBlindedBeaconBlockBellatrix: sb}
body, err := json.Marshal(v)
if err != nil {
return nil, errors.Wrap(err, "error encoding the SignedBlindedBeaconBlockBellatrix value body in SubmitBlindedBlock")
}
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body))
if err != nil {
return nil, errors.Wrap(err, "error posting the SignedBlindedBeaconBlockBellatrix to the builder api")
}
ep := &ExecPayloadResponse{}
if err := json.Unmarshal(rb, ep); err != nil {
return nil, errors.Wrap(err, "error unmarshaling the builder SubmitBlindedBlock response")
}
return ep.ToProto()
}
// Status asks the remote builder server for a health check. A response of 200 with an empty body is the success/healthy
// response, and an error response may have an error message. This method will return a nil value for error in the
// happy path, and an error with information about the server response body for a non-200 response.
func (c *Client) Status(ctx context.Context) error {
_, err := c.do(ctx, http.MethodGet, getStatus, nil)
return err
}
func non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
var body string
if err != nil {
body = "(Unable to read response body.)"
} else {
body = "response body:\n" + string(bodyBytes)
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case 404:
return errors.Wrap(ErrNotFound, msg)
default:
return errors.Wrap(ErrNotOK, msg)
}
}
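
Taken together, the methods above cover the proposer-side builder flow: health check, header request, validator registration, and blinded block submission. A short sketch of the read-only calls follows; the relay host, the timeout, the sample slot, and the zero-valued parent hash and pubkey are illustrative assumptions, while NewClient, WithTimeout, Status, GetHeader, and the ErrNotOK sentinel are all defined in this diff.

```
// Sketch only; exercises the health check and header request shown above.
package main

import (
	"context"
	"errors"
	"log"
	"time"

	"github.com/prysmaticlabs/prysm/api/client/builder"
	types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
)

func main() {
	ctx := context.Background()
	c, err := builder.NewClient("localhost:18550", builder.WithTimeout(5*time.Second)) // illustrative host
	if err != nil {
		log.Fatal(err)
	}
	// Status returns nil only for a 200 from /eth/v1/builder/status;
	// any non-2xx response wraps ErrNotOK (see errors.go later in this diff).
	if err := c.Status(ctx); err != nil {
		if errors.Is(err, builder.ErrNotOK) {
			log.Fatalf("builder relay unhealthy: %v", err)
		}
		log.Fatal(err)
	}

	var (
		slot       types.Slot = 23 // illustrative values
		parentHash [32]byte
		pubkey     [48]byte
	)
	bid, err := c.GetHeader(ctx, slot, parentHash, pubkey)
	if err != nil {
		log.Fatal(err)
	}
	// The bid value is returned as ssz-style little-endian bytes (see Uint256.SSZBytes in types.go).
	log.Printf("header bid value: %#x", bid.Message.Value)
}
```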


@@ -0,0 +1,344 @@
package builder
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
)
type roundtrip func(*http.Request) (*http.Response, error)
func (fn roundtrip) RoundTrip(r *http.Request) (*http.Response, error) {
return fn(r)
}
func TestClient_Status(t *testing.T) {
ctx := context.Background()
statusPath := "/eth/v1/builder/status"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
defer func() {
if r.Body == nil {
return
}
require.NoError(t, r.Body.Close())
}()
require.Equal(t, statusPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBuffer(nil)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
require.NoError(t, c.Status(ctx))
hc = &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
defer func() {
if r.Body == nil {
return
}
require.NoError(t, r.Body.Close())
}()
require.Equal(t, statusPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusInternalServerError,
Body: io.NopCloser(bytes.NewBuffer(nil)),
Request: r.Clone(ctx),
}, nil
}),
}
c = &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
require.ErrorIs(t, c.Status(ctx), ErrNotOK)
}
func TestClient_RegisterValidator(t *testing.T) {
ctx := context.Background()
expectedBody := `{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"23","timestamp":"42","pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"}}`
expectedPath := "/eth/v1/builder/validators"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
body, err := io.ReadAll(r.Body)
defer func() {
require.NoError(t, r.Body.Close())
}()
require.NoError(t, err)
require.Equal(t, expectedBody, string(body))
require.Equal(t, expectedPath, r.URL.Path)
require.Equal(t, http.MethodPost, r.Method)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBuffer(nil)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
reg := &eth.SignedValidatorRegistrationV1{
Message: &eth.ValidatorRegistrationV1{
FeeRecipient: ezDecode(t, params.BeaconConfig().EthBurnAddressHex),
GasLimit: 23,
Timestamp: 42,
Pubkey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
},
}
require.NoError(t, c.RegisterValidator(ctx, reg))
}
func TestClient_GetHeader(t *testing.T) {
ctx := context.Background()
expectedPath := "/eth/v1/builder/header/23/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusInternalServerError,
Body: io.NopCloser(bytes.NewBuffer(nil)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
var slot types.Slot = 23
parentHash := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
pubkey := ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a")
_, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.ErrorIs(t, err, ErrNotOK)
hc = &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponse)),
Request: r.Clone(ctx),
}, nil
}),
}
c = &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
h, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.NoError(t, err)
expectedSig := ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505")
require.Equal(t, true, bytes.Equal(expectedSig, h.Signature))
expectedTxRoot := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.Equal(t, true, bytes.Equal(expectedTxRoot, h.Message.Header.TransactionsRoot))
require.Equal(t, uint64(1), h.Message.Header.GasUsed)
value := stringToUint256("652312848583266388373324160190187140051835877600158453279131187530910662656")
require.Equal(t, fmt.Sprintf("%#x", value.SSZBytes()), fmt.Sprintf("%#x", h.Message.Value))
}
func TestSubmitBlindedBlock(t *testing.T) {
ctx := context.Background()
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayload)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
sbbb := testSignedBlindedBeaconBlockBellatrix(t)
ep, err := c.SubmitBlindedBlock(ctx, sbbb)
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"), ep.ParentHash))
bfpg := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
require.Equal(t, fmt.Sprintf("%#x", bfpg.SSZBytes()), fmt.Sprintf("%#x", ep.BaseFeePerGas))
require.Equal(t, uint64(1), ep.GasLimit)
}
func testSignedBlindedBeaconBlockBellatrix(t *testing.T) *eth.SignedBlindedBeaconBlockBellatrix {
return &eth.SignedBlindedBeaconBlockBellatrix{
Block: &eth.BlindedBeaconBlockBellatrix{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Body: &eth.BlindedBeaconBlockBodyBellatrix{
RandaoReveal: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Eth1Data: &eth.Eth1Data{
DepositRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
DepositCount: 1,
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Graffiti: ezDecode(t, "0xdeadbeefc0ffee"),
ProposerSlashings: []*eth.ProposerSlashing{
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
AttesterSlashings: []*eth.AttesterSlashing{
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
Attestations: []*eth.Attestation{
{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
Deposits: []*eth.Deposit{
{
Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
Data: &eth.Deposit_Data{
PublicKey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
WithdrawalCredentials: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Amount: 1,
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
VoluntaryExits: []*eth.SignedVoluntaryExit{
{
Exit: &eth.VoluntaryExit{
Epoch: 1,
ValidatorIndex: 1,
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
SyncAggregate: &eth.SyncAggregate{
SyncCommitteeSignature: make([]byte, 48),
SyncCommitteeBits: bitfield.Bitvector512{0x01},
},
ExecutionPayloadHeader: &eth.ExecutionPayloadHeader{
ParentHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
FeeRecipient: ezDecode(t, "0xabcf8e0d4e9587369b2301d0790347320302cc09"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ReceiptsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
LogsBloom: ezDecode(t, "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
PrevRandao: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BaseFeePerGas: []byte(strconv.FormatUint(1, 10)),
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
TransactionsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
}
}
func TestRequestLogger(t *testing.T) {
wo := WithObserver(&requestLogger{})
c, err := NewClient("localhost:3500", wo)
require.NoError(t, err)
ctx := context.Background()
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, getStatus, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayload)),
Request: r.Clone(ctx),
}, nil
}),
}
c.hc = hc
err = c.Status(ctx)
require.NoError(t, err)
}


@@ -0,0 +1,10 @@
package builder
import "github.com/pkg/errors"
// ErrNotOK is used to indicate when an HTTP request to the Beacon Node API failed with any non-2xx response code.
// More specific errors may be returned, but an error in reaction to a non-2xx response will always wrap ErrNotOK.
var ErrNotOK = errors.New("did not receive 2xx response from API")
// ErrNotFound specifically means that a '404 - NOT FOUND' response was received from the API.
var ErrNotFound = errors.Wrap(ErrNotOK, "recv 404 NotFound response from API")

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"14074904626401341155369551180448584754667373453244490859944217516317499064576","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}

api/client/builder/types.go (new file, 585 lines)

@@ -0,0 +1,585 @@
package builder
import (
"encoding/json"
"fmt"
"math/big"
"strconv"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
type SignedValidatorRegistration struct {
*eth.SignedValidatorRegistrationV1
}
type ValidatorRegistration struct {
*eth.ValidatorRegistrationV1
}
func (r *SignedValidatorRegistration) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *ValidatorRegistration `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
}{
Message: &ValidatorRegistration{r.Message},
Signature: r.SignedValidatorRegistrationV1.Signature,
})
}
func (r *ValidatorRegistration) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
FeeRecipient hexutil.Bytes `json:"fee_recipient,omitempty"`
GasLimit string `json:"gas_limit,omitempty"`
Timestamp string `json:"timestamp,omitempty"`
Pubkey hexutil.Bytes `json:"pubkey,omitempty"`
}{
FeeRecipient: r.FeeRecipient,
GasLimit: fmt.Sprintf("%d", r.GasLimit),
Timestamp: fmt.Sprintf("%d", r.Timestamp),
Pubkey: r.Pubkey,
})
}
type Uint256 struct {
*big.Int
}
func stringToUint256(s string) Uint256 {
bi := new(big.Int)
bi.SetString(s, 10)
return Uint256{Int: bi}
}
// sszBytesToUint256 creates a Uint256 from a ssz-style (little-endian byte slice) representation.
func sszBytesToUint256(b []byte) Uint256 {
bi := new(big.Int)
return Uint256{Int: bi.SetBytes(bytesutil.ReverseByteOrder(b))}
}
// SSZBytes creates an ssz-style (little-endian byte slice) representation of the Uint256
func (s Uint256) SSZBytes() []byte {
return bytesutil.ReverseByteOrder(s.Int.Bytes())
}
var errUnmarshalUint256Failed = errors.New("unable to UnmarshalText into a Uint256 value")
func (s *Uint256) UnmarshalJSON(t []byte) error {
start := 0
end := len(t)
if t[0] == '"' {
start += 1
}
if t[end-1] == '"' {
end -= 1
}
return s.UnmarshalText(t[start:end])
}
func (s *Uint256) UnmarshalText(t []byte) error {
if s.Int == nil {
s.Int = big.NewInt(0)
}
z, ok := s.SetString(string(t), 10)
if !ok {
return errors.Wrapf(errUnmarshalUint256Failed, "value=%s", string(t))
}
s.Int = z
return nil
}
func (s Uint256) MarshalJSON() ([]byte, error) {
t, err := s.MarshalText()
if err != nil {
return nil, err
}
t = append([]byte{'"'}, t...)
t = append(t, '"')
return t, nil
}
func (s Uint256) MarshalText() ([]byte, error) {
return []byte(s.String()), nil
}
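// Illustrative sketch, not part of this file: the two helpers above are inverses,
// so a decimal value (borrowed from the tests earlier in this diff) round-trips
// through the ssz little-endian form unchanged.
//
//	v := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
//	le := v.SSZBytes()            // little-endian bytes, as stored in fields like BaseFeePerGas
//	back := sszBytesToUint256(le) // recovers the same value
//	_ = v.String() == back.String() // evaluates to true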
type Uint64String uint64
func (s *Uint64String) UnmarshalText(t []byte) error {
u, err := strconv.ParseUint(string(t), 10, 64)
*s = Uint64String(u)
return err
}
func (s Uint64String) MarshalText() ([]byte, error) {
return []byte(fmt.Sprintf("%d", s)), nil
}
type ExecHeaderResponse struct {
Version string `json:"version,omitempty"`
Data struct {
Signature hexutil.Bytes `json:"signature,omitempty"`
Message *BuilderBid `json:"message,omitempty"`
} `json:"data,omitempty"`
}
func (ehr *ExecHeaderResponse) ToProto() (*eth.SignedBuilderBid, error) {
bb, err := ehr.Data.Message.ToProto()
if err != nil {
return nil, err
}
return &eth.SignedBuilderBid{
Message: bb,
Signature: ehr.Data.Signature,
}, nil
}
func (bb *BuilderBid) ToProto() (*eth.BuilderBid, error) {
header, err := bb.Header.ToProto()
if err != nil {
return nil, err
}
return &eth.BuilderBid{
Header: header,
Value: bb.Value.SSZBytes(),
Pubkey: bb.Pubkey,
}, nil
}
func (h *ExecutionPayloadHeader) ToProto() (*eth.ExecutionPayloadHeader, error) {
return &eth.ExecutionPayloadHeader{
ParentHash: h.ParentHash,
FeeRecipient: h.FeeRecipient,
StateRoot: h.StateRoot,
ReceiptsRoot: h.ReceiptsRoot,
LogsBloom: h.LogsBloom,
PrevRandao: h.PrevRandao,
BlockNumber: uint64(h.BlockNumber),
GasLimit: uint64(h.GasLimit),
GasUsed: uint64(h.GasUsed),
Timestamp: uint64(h.Timestamp),
ExtraData: h.ExtraData,
BaseFeePerGas: h.BaseFeePerGas.SSZBytes(),
BlockHash: h.BlockHash,
TransactionsRoot: h.TransactionsRoot,
}, nil
}
type BuilderBid struct {
Header *ExecutionPayloadHeader `json:"header,omitempty"`
Value Uint256 `json:"value,omitempty"`
Pubkey hexutil.Bytes `json:"pubkey,omitempty"`
}
type ExecutionPayloadHeader struct {
ParentHash hexutil.Bytes `json:"parent_hash,omitempty"`
FeeRecipient hexutil.Bytes `json:"fee_recipient,omitempty"`
StateRoot hexutil.Bytes `json:"state_root,omitempty"`
ReceiptsRoot hexutil.Bytes `json:"receipts_root,omitempty"`
LogsBloom hexutil.Bytes `json:"logs_bloom,omitempty"`
PrevRandao hexutil.Bytes `json:"prev_randao,omitempty"`
BlockNumber Uint64String `json:"block_number,omitempty"`
GasLimit Uint64String `json:"gas_limit,omitempty"`
GasUsed Uint64String `json:"gas_used,omitempty"`
Timestamp Uint64String `json:"timestamp,omitempty"`
ExtraData hexutil.Bytes `json:"extra_data,omitempty"`
BaseFeePerGas Uint256 `json:"base_fee_per_gas,omitempty"`
BlockHash hexutil.Bytes `json:"block_hash,omitempty"`
TransactionsRoot hexutil.Bytes `json:"transactions_root,omitempty"`
*eth.ExecutionPayloadHeader
}
func (h *ExecutionPayloadHeader) MarshalJSON() ([]byte, error) {
type MarshalCaller ExecutionPayloadHeader
return json.Marshal(&MarshalCaller{
ParentHash: h.ExecutionPayloadHeader.ParentHash,
FeeRecipient: h.ExecutionPayloadHeader.FeeRecipient,
StateRoot: h.ExecutionPayloadHeader.StateRoot,
ReceiptsRoot: h.ExecutionPayloadHeader.ReceiptsRoot,
LogsBloom: h.ExecutionPayloadHeader.LogsBloom,
PrevRandao: h.ExecutionPayloadHeader.PrevRandao,
BlockNumber: Uint64String(h.ExecutionPayloadHeader.BlockNumber),
GasLimit: Uint64String(h.ExecutionPayloadHeader.GasLimit),
GasUsed: Uint64String(h.ExecutionPayloadHeader.GasUsed),
Timestamp: Uint64String(h.ExecutionPayloadHeader.Timestamp),
ExtraData: h.ExecutionPayloadHeader.ExtraData,
BaseFeePerGas: sszBytesToUint256(h.ExecutionPayloadHeader.BaseFeePerGas),
BlockHash: h.ExecutionPayloadHeader.BlockHash,
TransactionsRoot: h.ExecutionPayloadHeader.TransactionsRoot,
})
}
func (h *ExecutionPayloadHeader) UnmarshalJSON(b []byte) error {
type UnmarshalCaller ExecutionPayloadHeader
uc := &UnmarshalCaller{}
if err := json.Unmarshal(b, uc); err != nil {
return err
}
ep := ExecutionPayloadHeader(*uc)
*h = ep
var err error
h.ExecutionPayloadHeader, err = h.ToProto()
return err
}
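// Illustrative sketch, not part of the original file: typical decode path for a
// builder header response — unmarshal into the JSON-friendly wrapper, then
// convert to the protobuf types the rest of the beacon node consumes. The
// helper name is an assumption; raw would normally be a relay response body.
func exampleHeaderToProto(raw []byte) (*eth.SignedBuilderBid, error) {
	hr := &ExecHeaderResponse{}
	if err := json.Unmarshal(raw, hr); err != nil {
		return nil, err
	}
	return hr.ToProto()
}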
type ExecPayloadResponse struct {
Version string `json:"version,omitempty"`
Data ExecutionPayload `json:"data,omitempty"`
}
type ExecutionPayload struct {
ParentHash hexutil.Bytes `json:"parent_hash,omitempty"`
FeeRecipient hexutil.Bytes `json:"fee_recipient,omitempty"`
StateRoot hexutil.Bytes `json:"state_root,omitempty"`
ReceiptsRoot hexutil.Bytes `json:"receipts_root,omitempty"`
LogsBloom hexutil.Bytes `json:"logs_bloom,omitempty"`
PrevRandao hexutil.Bytes `json:"prev_randao,omitempty"`
BlockNumber Uint64String `json:"block_number,omitempty"`
GasLimit Uint64String `json:"gas_limit,omitempty"`
GasUsed Uint64String `json:"gas_used,omitempty"`
Timestamp Uint64String `json:"timestamp,omitempty"`
ExtraData hexutil.Bytes `json:"extra_data,omitempty"`
BaseFeePerGas Uint256 `json:"base_fee_per_gas,omitempty"`
BlockHash hexutil.Bytes `json:"block_hash,omitempty"`
Transactions []hexutil.Bytes `json:"transactions,omitempty"`
}
func (r *ExecPayloadResponse) ToProto() (*v1.ExecutionPayload, error) {
return r.Data.ToProto()
}
func (p *ExecutionPayload) ToProto() (*v1.ExecutionPayload, error) {
txs := make([][]byte, len(p.Transactions))
for i := range p.Transactions {
txs[i] = p.Transactions[i]
}
return &v1.ExecutionPayload{
ParentHash: p.ParentHash,
FeeRecipient: p.FeeRecipient,
StateRoot: p.StateRoot,
ReceiptsRoot: p.ReceiptsRoot,
LogsBloom: p.LogsBloom,
PrevRandao: p.PrevRandao,
BlockNumber: uint64(p.BlockNumber),
GasLimit: uint64(p.GasLimit),
GasUsed: uint64(p.GasUsed),
Timestamp: uint64(p.Timestamp),
ExtraData: p.ExtraData,
BaseFeePerGas: p.BaseFeePerGas.SSZBytes(),
BlockHash: p.BlockHash,
Transactions: txs,
}, nil
}
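// Illustrative sketch, not part of the original file: the same pattern for a
// full execution payload response — unmarshal, then convert to the engine-API
// protobuf payload. The helper name is an assumption.
func examplePayloadToProto(raw []byte) (*v1.ExecutionPayload, error) {
	epr := &ExecPayloadResponse{}
	if err := json.Unmarshal(raw, epr); err != nil {
		return nil, err
	}
	return epr.ToProto()
}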
type SignedBlindedBeaconBlockBellatrix struct {
*eth.SignedBlindedBeaconBlockBellatrix
}
type BlindedBeaconBlockBellatrix struct {
*eth.BlindedBeaconBlockBellatrix
}
type BlindedBeaconBlockBodyBellatrix struct {
*eth.BlindedBeaconBlockBodyBellatrix
}
func (r *SignedBlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *BlindedBeaconBlockBellatrix `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
}{
Message: &BlindedBeaconBlockBellatrix{r.SignedBlindedBeaconBlockBellatrix.Block},
Signature: r.SignedBlindedBeaconBlockBellatrix.Signature,
})
}
func (b *BlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index,omitempty"`
ParentRoot hexutil.Bytes `json:"parent_root,omitempty"`
StateRoot hexutil.Bytes `json:"state_root,omitempty"`
Body *BlindedBeaconBlockBodyBellatrix `json:"body,omitempty"`
}{
Slot: fmt.Sprintf("%d", b.Slot),
ProposerIndex: fmt.Sprintf("%d", b.ProposerIndex),
ParentRoot: b.ParentRoot,
StateRoot: b.StateRoot,
Body: &BlindedBeaconBlockBodyBellatrix{b.BlindedBeaconBlockBellatrix.Body},
})
}
type ProposerSlashing struct {
*eth.ProposerSlashing
}
func (s *ProposerSlashing) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
SignedHeader1 *SignedBeaconBlockHeader `json:"signed_header_1,omitempty"`
SignedHeader2 *SignedBeaconBlockHeader `json:"signed_header_2,omitempty"`
}{
SignedHeader1: &SignedBeaconBlockHeader{s.ProposerSlashing.Header_1},
SignedHeader2: &SignedBeaconBlockHeader{s.ProposerSlashing.Header_2},
})
}
type SignedBeaconBlockHeader struct {
*eth.SignedBeaconBlockHeader
}
func (h *SignedBeaconBlockHeader) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Header *BeaconBlockHeader `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
}{
Header: &BeaconBlockHeader{h.SignedBeaconBlockHeader.Header},
Signature: h.SignedBeaconBlockHeader.Signature,
})
}
type BeaconBlockHeader struct {
*eth.BeaconBlockHeader
}
func (h *BeaconBlockHeader) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot,omitempty"`
ProposerIndex string `json:"proposer_index,omitempty"`
ParentRoot hexutil.Bytes `json:"parent_root,omitempty"`
StateRoot hexutil.Bytes `json:"state_root,omitempty"`
BodyRoot hexutil.Bytes `json:"body_root,omitempty"`
}{
Slot: fmt.Sprintf("%d", h.BeaconBlockHeader.Slot),
ProposerIndex: fmt.Sprintf("%d", h.BeaconBlockHeader.ProposerIndex),
ParentRoot: h.BeaconBlockHeader.ParentRoot,
StateRoot: h.BeaconBlockHeader.StateRoot,
BodyRoot: h.BeaconBlockHeader.BodyRoot,
})
}
type IndexedAttestation struct {
*eth.IndexedAttestation
}
func (a *IndexedAttestation) MarshalJSON() ([]byte, error) {
indices := make([]string, len(a.IndexedAttestation.AttestingIndices))
for i := range a.IndexedAttestation.AttestingIndices {
indices[i] = fmt.Sprintf("%d", a.AttestingIndices[i])
}
return json.Marshal(struct {
AttestingIndices []string `json:"attesting_indices,omitempty"`
Data *AttestationData `json:"data,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
}{
AttestingIndices: indices,
Data: &AttestationData{a.IndexedAttestation.Data},
Signature: a.IndexedAttestation.Signature,
})
}
type AttesterSlashing struct {
*eth.AttesterSlashing
}
func (s *AttesterSlashing) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Attestation1 *IndexedAttestation `json:"attestation_1,omitempty"`
Attestation2 *IndexedAttestation `json:"attestation_2,omitempty"`
}{
Attestation1: &IndexedAttestation{s.Attestation_1},
Attestation2: &IndexedAttestation{s.Attestation_2},
})
}
type Checkpoint struct {
*eth.Checkpoint
}
func (c *Checkpoint) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Epoch string `json:"epoch,omitempty"`
Root hexutil.Bytes `json:"root,omitempty"`
}{
Epoch: fmt.Sprintf("%d", c.Checkpoint.Epoch),
Root: c.Checkpoint.Root,
})
}
type AttestationData struct {
*eth.AttestationData
}
func (a *AttestationData) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Slot string `json:"slot,omitempty"`
Index string `json:"index,omitempty"`
BeaconBlockRoot hexutil.Bytes `json:"beacon_block_root,omitempty"`
Source *Checkpoint `json:"source,omitempty"`
Target *Checkpoint `json:"target,omitempty"`
}{
Slot: fmt.Sprintf("%d", a.AttestationData.Slot),
Index: fmt.Sprintf("%d", a.AttestationData.CommitteeIndex),
BeaconBlockRoot: a.AttestationData.BeaconBlockRoot,
Source: &Checkpoint{a.AttestationData.Source},
Target: &Checkpoint{a.AttestationData.Target},
})
}
type Attestation struct {
*eth.Attestation
}
func (a *Attestation) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
AggregationBits hexutil.Bytes `json:"aggregation_bits,omitempty"`
Data *AttestationData `json:"data,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty" ssz-size:"96"`
}{
AggregationBits: hexutil.Bytes(a.Attestation.AggregationBits),
Data: &AttestationData{a.Attestation.Data},
Signature: a.Attestation.Signature,
})
}
type DepositData struct {
*eth.Deposit_Data
}
func (d *DepositData) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
PublicKey hexutil.Bytes `json:"pubkey,omitempty"`
WithdrawalCredentials hexutil.Bytes `json:"withdrawal_credentials,omitempty"`
Amount string `json:"amount,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
}{
PublicKey: d.PublicKey,
WithdrawalCredentials: d.WithdrawalCredentials,
Amount: fmt.Sprintf("%d", d.Amount),
Signature: d.Signature,
})
}
type Deposit struct {
*eth.Deposit
}
func (d *Deposit) MarshalJSON() ([]byte, error) {
proof := make([]hexutil.Bytes, len(d.Proof))
for i := range d.Proof {
proof[i] = d.Proof[i]
}
return json.Marshal(struct {
Proof []hexutil.Bytes `json:"proof"`
Data *DepositData `json:"data"`
}{
Proof: proof,
Data: &DepositData{Deposit_Data: d.Deposit.Data},
})
}
type SignedVoluntaryExit struct {
*eth.SignedVoluntaryExit
}
func (sve *SignedVoluntaryExit) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Message *VoluntaryExit `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
}{
Signature: sve.SignedVoluntaryExit.Signature,
Message: &VoluntaryExit{sve.SignedVoluntaryExit.Exit},
})
}
type VoluntaryExit struct {
*eth.VoluntaryExit
}
func (ve *VoluntaryExit) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
Epoch string `json:"epoch,omitempty"`
ValidatorIndex string `json:"validator_index,omitempty"`
}{
Epoch: fmt.Sprintf("%d", ve.Epoch),
ValidatorIndex: fmt.Sprintf("%d", ve.ValidatorIndex),
})
}
type SyncAggregate struct {
*eth.SyncAggregate
}
func (s *SyncAggregate) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
SyncCommitteeBits hexutil.Bytes `json:"sync_committee_bits,omitempty"`
SyncCommitteeSignature hexutil.Bytes `json:"sync_committee_signature,omitempty"`
}{
SyncCommitteeBits: hexutil.Bytes(s.SyncAggregate.SyncCommitteeBits),
SyncCommitteeSignature: s.SyncAggregate.SyncCommitteeSignature,
})
}
type Eth1Data struct {
*eth.Eth1Data
}
func (e *Eth1Data) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
DepositRoot hexutil.Bytes `json:"deposit_root,omitempty"`
DepositCount string `json:"deposit_count,omitempty"`
BlockHash hexutil.Bytes `json:"block_hash,omitempty"`
}{
DepositRoot: e.DepositRoot,
DepositCount: fmt.Sprintf("%d", e.DepositCount),
BlockHash: e.BlockHash,
})
}
func (b *BlindedBeaconBlockBodyBellatrix) MarshalJSON() ([]byte, error) {
sve := make([]*SignedVoluntaryExit, len(b.BlindedBeaconBlockBodyBellatrix.VoluntaryExits))
for i := range b.BlindedBeaconBlockBodyBellatrix.VoluntaryExits {
sve[i] = &SignedVoluntaryExit{SignedVoluntaryExit: b.BlindedBeaconBlockBodyBellatrix.VoluntaryExits[i]}
}
deps := make([]*Deposit, len(b.BlindedBeaconBlockBodyBellatrix.Deposits))
for i := range b.BlindedBeaconBlockBodyBellatrix.Deposits {
deps[i] = &Deposit{Deposit: b.BlindedBeaconBlockBodyBellatrix.Deposits[i]}
}
atts := make([]*Attestation, len(b.BlindedBeaconBlockBodyBellatrix.Attestations))
for i := range b.BlindedBeaconBlockBodyBellatrix.Attestations {
atts[i] = &Attestation{Attestation: b.BlindedBeaconBlockBodyBellatrix.Attestations[i]}
}
atsl := make([]*AttesterSlashing, len(b.BlindedBeaconBlockBodyBellatrix.AttesterSlashings))
for i := range b.BlindedBeaconBlockBodyBellatrix.AttesterSlashings {
atsl[i] = &AttesterSlashing{AttesterSlashing: b.BlindedBeaconBlockBodyBellatrix.AttesterSlashings[i]}
}
pros := make([]*ProposerSlashing, len(b.BlindedBeaconBlockBodyBellatrix.ProposerSlashings))
for i := range b.BlindedBeaconBlockBodyBellatrix.ProposerSlashings {
pros[i] = &ProposerSlashing{ProposerSlashing: b.BlindedBeaconBlockBodyBellatrix.ProposerSlashings[i]}
}
return json.Marshal(struct {
RandaoReveal hexutil.Bytes `json:"randao_reveal,omitempty"`
Eth1Data *Eth1Data `json:"eth1_data,omitempty"`
Graffiti hexutil.Bytes `json:"graffiti,omitempty"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings,omitempty"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings,omitempty"`
Attestations []*Attestation `json:"attestations,omitempty"`
Deposits []*Deposit `json:"deposits,omitempty"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits,omitempty"`
SyncAggregate *SyncAggregate `json:"sync_aggregate,omitempty"`
ExecutionPayloadHeader *ExecutionPayloadHeader `json:"execution_payload_header,omitempty"`
}{
RandaoReveal: b.RandaoReveal,
Eth1Data: &Eth1Data{b.BlindedBeaconBlockBodyBellatrix.Eth1Data},
Graffiti: b.BlindedBeaconBlockBodyBellatrix.Graffiti,
ProposerSlashings: pros,
AttesterSlashings: atsl,
Attestations: atts,
Deposits: deps,
VoluntaryExits: sve,
SyncAggregate: &SyncAggregate{b.BlindedBeaconBlockBodyBellatrix.SyncAggregate},
ExecutionPayloadHeader: &ExecutionPayloadHeader{ExecutionPayloadHeader: b.BlindedBeaconBlockBodyBellatrix.ExecutionPayloadHeader},
})
}
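// Illustrative sketch, not part of the original file: the nested Marshal
// wrappers above exist so that a single json.Marshal call over a signed blinded
// block produces spec-style JSON (hex strings, quoted decimals) rather than
// protobuf JSON. The helper name is an assumption.
func exampleMarshalSignedBlindedBlock(b *eth.SignedBlindedBeaconBlockBellatrix) ([]byte, error) {
	return json.Marshal(&SignedBlindedBeaconBlockBellatrix{SignedBlindedBeaconBlockBellatrix: b})
}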


@@ -0,0 +1,726 @@
package builder
import (
"bytes"
"encoding/json"
"fmt"
"math/big"
"os"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/go-bitfield"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
)
func ezDecode(t *testing.T, s string) []byte {
v, err := hexutil.Decode(s)
require.NoError(t, err)
return v
}
func TestSignedValidatorRegistration_MarshalJSON(t *testing.T) {
svr := &eth.SignedValidatorRegistrationV1{
Message: &eth.ValidatorRegistrationV1{
FeeRecipient: make([]byte, 20),
GasLimit: 0,
Timestamp: 0,
Pubkey: make([]byte, 48),
},
Signature: make([]byte, 96),
}
je, err := json.Marshal(&SignedValidatorRegistration{SignedValidatorRegistrationV1: svr})
require.NoError(t, err)
// decode with a struct w/ plain strings so we can check the string encoding of the hex fields
un := struct {
Message struct {
FeeRecipient string `json:"fee_recipient"`
Pubkey string `json:"pubkey"`
} `json:"message"`
Signature string `json:"signature"`
}{}
require.NoError(t, json.Unmarshal(je, &un))
require.Equal(t, "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", un.Signature)
require.Equal(t, "0x0000000000000000000000000000000000000000", un.Message.FeeRecipient)
require.Equal(t, "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", un.Message.Pubkey)
}
var testExampleHeaderResponse = `{
"version": "bellatrix",
"data": {
"message": {
"header": {
"parent_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"receipts_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"logs_bloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"block_number": "1",
"gas_limit": "1",
"gas_used": "1",
"timestamp": "1",
"extra_data": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"base_fee_per_gas": "452312848583266388373324160190187140051835877600158453279131187530910662656",
"block_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"transactions_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"value": "652312848583266388373324160190187140051835877600158453279131187530910662656",
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
}`
func TestExecutionHeaderResponseUnmarshal(t *testing.T) {
hr := &ExecHeaderResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponse), hr))
cases := []struct {
expected string
actual string
name string
}{
{
expected: "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
actual: hexutil.Encode(hr.Data.Signature),
name: "Signature",
},
{
expected: "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a",
actual: hexutil.Encode(hr.Data.Message.Pubkey),
name: "ExecHeaderResponse.Pubkey",
},
{
expected: "652312848583266388373324160190187140051835877600158453279131187530910662656",
actual: hr.Data.Message.Value.String(),
name: "ExecHeaderResponse.Value",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(hr.Data.Message.Header.ParentHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: hexutil.Encode(hr.Data.Message.Header.FeeRecipient),
name: "ExecHeaderResponse.ExecutionPayloadHeader.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(hr.Data.Message.Header.StateRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(hr.Data.Message.Header.ReceiptsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: hexutil.Encode(hr.Data.Message.Header.LogsBloom),
name: "ExecHeaderResponse.ExecutionPayloadHeader.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(hr.Data.Message.Header.PrevRandao),
name: "ExecHeaderResponse.ExecutionPayloadHeader.PrevRandao",
},
{
expected: "1",
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BlockNumber),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockNumber",
},
{
expected: "1",
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasLimit),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasLimit",
},
{
expected: "1",
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasUsed),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasUsed",
},
{
expected: "1",
actual: fmt.Sprintf("%d", hr.Data.Message.Header.Timestamp),
name: "ExecHeaderResponse.ExecutionPayloadHeader.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(hr.Data.Message.Header.ExtraData),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BaseFeePerGas),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(hr.Data.Message.Header.BlockHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockHash",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(hr.Data.Message.Header.TransactionsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.TransactionsRoot",
},
}
for _, c := range cases {
require.Equal(t, c.expected, c.actual, fmt.Sprintf("unexpected value for field %s", c.name))
}
}
func TestExecutionHeaderResponseToProto(t *testing.T) {
bfpg := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
v := stringToUint256("652312848583266388373324160190187140051835877600158453279131187530910662656")
hr := &ExecHeaderResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponse), hr))
p, err := hr.ToProto()
require.NoError(t, err)
signature, err := hexutil.Decode("0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505")
require.NoError(t, err)
pubkey, err := hexutil.Decode("0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a")
require.NoError(t, err)
parentHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
feeRecipient, err := hexutil.Decode("0xabcf8e0d4e9587369b2301d0790347320302cc09")
require.NoError(t, err)
stateRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
receiptsRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
logsBloom, err := hexutil.Decode("0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000")
require.NoError(t, err)
prevRandao, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
extraData, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
blockHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
txRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
expected := &eth.SignedBuilderBid{
Message: &eth.BuilderBid{
Header: &eth.ExecutionPayloadHeader{
ParentHash: parentHash,
FeeRecipient: feeRecipient,
StateRoot: stateRoot,
ReceiptsRoot: receiptsRoot,
LogsBloom: logsBloom,
PrevRandao: prevRandao,
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: extraData,
BaseFeePerGas: bfpg.SSZBytes(),
BlockHash: blockHash,
TransactionsRoot: txRoot,
},
Value: v.SSZBytes(),
Pubkey: pubkey,
},
Signature: signature,
}
require.DeepEqual(t, expected, p)
}
var testExampleExecutionPayload = `{
"version": "bellatrix",
"data": {
"parent_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"receipts_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"logs_bloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"block_number": "1",
"gas_limit": "1",
"gas_used": "1",
"timestamp": "1",
"extra_data": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"base_fee_per_gas": "452312848583266388373324160190187140051835877600158453279131187530910662656",
"block_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"transactions": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
]
}
}`
func TestExecutionPayloadResponseUnmarshal(t *testing.T) {
epr := &ExecPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), epr))
cases := []struct {
expected string
actual string
name string
}{
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.ParentHash),
name: "ExecPayloadResponse.ExecutionPayload.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: hexutil.Encode(epr.Data.FeeRecipient),
name: "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.StateRoot),
name: "ExecPayloadResponse.ExecutionPayload.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.ReceiptsRoot),
name: "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: hexutil.Encode(epr.Data.LogsBloom),
name: "ExecPayloadResponse.ExecutionPayload.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.PrevRandao),
name: "ExecPayloadResponse.ExecutionPayload.PrevRandao",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.BlockNumber),
name: "ExecPayloadResponse.ExecutionPayload.BlockNumber",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.GasLimit),
name: "ExecPayloadResponse.ExecutionPayload.GasLimit",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.GasUsed),
name: "ExecPayloadResponse.ExecutionPayload.GasUsed",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.Timestamp),
name: "ExecPayloadResponse.ExecutionPayload.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.ExtraData),
name: "ExecPayloadResponse.ExecutionPayload.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: fmt.Sprintf("%d", epr.Data.BaseFeePerGas),
name: "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.BlockHash),
name: "ExecPayloadResponse.ExecutionPayload.BlockHash",
},
}
for _, c := range cases {
require.Equal(t, c.expected, c.actual, fmt.Sprintf("unexpected value for field %s", c.name))
}
require.Equal(t, 1, len(epr.Data.Transactions))
txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
require.Equal(t, txHash, hexutil.Encode(epr.Data.Transactions[0]))
}
func TestExecutionPayloadResponseToProto(t *testing.T) {
hr := &ExecPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), hr))
p, err := hr.ToProto()
require.NoError(t, err)
parentHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
feeRecipient, err := hexutil.Decode("0xabcf8e0d4e9587369b2301d0790347320302cc09")
require.NoError(t, err)
stateRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
receiptsRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
logsBloom, err := hexutil.Decode("0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000")
require.NoError(t, err)
prevRandao, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
extraData, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
blockHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
tx, err := hexutil.Decode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")
require.NoError(t, err)
txList := [][]byte{tx}
bfpg := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
expected := &v1.ExecutionPayload{
ParentHash: parentHash,
FeeRecipient: feeRecipient,
StateRoot: stateRoot,
ReceiptsRoot: receiptsRoot,
LogsBloom: logsBloom,
PrevRandao: prevRandao,
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: extraData,
BaseFeePerGas: bfpg.SSZBytes(),
BlockHash: blockHash,
Transactions: txList,
}
require.DeepEqual(t, expected, p)
}
func pbEth1Data() *eth.Eth1Data {
return &eth.Eth1Data{
DepositRoot: make([]byte, 32),
DepositCount: 23,
BlockHash: make([]byte, 32),
}
}
func TestEth1DataMarshal(t *testing.T) {
ed := &Eth1Data{
Eth1Data: pbEth1Data(),
}
b, err := json.Marshal(ed)
require.NoError(t, err)
expected := `{"deposit_root":"0x0000000000000000000000000000000000000000000000000000000000000000","deposit_count":"23","block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000"}`
require.Equal(t, expected, string(b))
}
func pbSyncAggregate() *eth.SyncAggregate {
return &eth.SyncAggregate{
SyncCommitteeSignature: make([]byte, 48),
SyncCommitteeBits: bitfield.Bitvector512{0x01},
}
}
func TestSyncAggregate_MarshalJSON(t *testing.T) {
sa := &SyncAggregate{pbSyncAggregate()}
b, err := json.Marshal(sa)
require.NoError(t, err)
expected := `{"sync_committee_bits":"0x01","sync_committee_signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}`
require.Equal(t, expected, string(b))
}
func pbDeposit(t *testing.T) *eth.Deposit {
return &eth.Deposit{
Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
Data: &eth.Deposit_Data{
PublicKey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
WithdrawalCredentials: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Amount: 1,
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
}
}
func TestDeposit_MarshalJSON(t *testing.T) {
d := &Deposit{
Deposit: pbDeposit(t),
}
b, err := json.Marshal(d)
require.NoError(t, err)
expected := `{"proof":["0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"],"data":{"pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a","withdrawal_credentials":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","amount":"1","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}`
require.Equal(t, expected, string(b))
}
func pbSignedVoluntaryExit(t *testing.T) *eth.SignedVoluntaryExit {
return &eth.SignedVoluntaryExit{
Exit: &eth.VoluntaryExit{
Epoch: 1,
ValidatorIndex: 1,
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
}
}
func TestVoluntaryExit(t *testing.T) {
ve := &SignedVoluntaryExit{
SignedVoluntaryExit: pbSignedVoluntaryExit(t),
}
b, err := json.Marshal(ve)
require.NoError(t, err)
expected := `{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}`
require.Equal(t, expected, string(b))
}
func pbAttestation(t *testing.T) *eth.Attestation {
return &eth.Attestation{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
}
}
func TestAttestationMarshal(t *testing.T) {
a := &Attestation{
Attestation: pbAttestation(t),
}
b, err := json.Marshal(a)
require.NoError(t, err)
expected := `{"aggregation_bits":"0x01","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}`
require.Equal(t, expected, string(b))
}
func pbAttesterSlashing(t *testing.T) *eth.AttesterSlashing {
return &eth.AttesterSlashing{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
},
}
}
func TestAttesterSlashing_MarshalJSON(t *testing.T) {
as := &AttesterSlashing{
AttesterSlashing: pbAttesterSlashing(t),
}
b, err := json.Marshal(as)
require.NoError(t, err)
expected := `{"attestation_1":{"attesting_indices":["1"],"data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"attestation_2":{"attesting_indices":["1"],"data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}`
require.Equal(t, expected, string(b))
}
func pbProposerSlashing(t *testing.T) *eth.ProposerSlashing {
return &eth.ProposerSlashing{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
}
}
func TestProposerSlashings(t *testing.T) {
ps := &ProposerSlashing{ProposerSlashing: pbProposerSlashing(t)}
b, err := json.Marshal(ps)
require.NoError(t, err)
expected := `{"signed_header_1":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signed_header_2":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}`
require.Equal(t, expected, string(b))
}
func pbExecutionPayloadHeader(t *testing.T) *eth.ExecutionPayloadHeader {
bfpg := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
return &eth.ExecutionPayloadHeader{
ParentHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
FeeRecipient: ezDecode(t, "0xabcf8e0d4e9587369b2301d0790347320302cc09"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ReceiptsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
LogsBloom: ezDecode(t, "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
PrevRandao: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BaseFeePerGas: bfpg.SSZBytes(),
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
TransactionsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
}
}
func TestExecutionPayloadHeader_MarshalJSON(t *testing.T) {
h := &ExecutionPayloadHeader{
ExecutionPayloadHeader: pbExecutionPayloadHeader(t),
}
b, err := json.Marshal(h)
require.NoError(t, err)
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}`
require.Equal(t, expected, string(b))
}
var testBuilderBid = `{
"version":"bellatrix",
"data":{
"message":{
"header":{
"parent_hash":"0xa0513a503d5bd6e89a144c3268e5b7e9da9dbf63df125a360e3950a7d0d67131",
"fee_recipient":"0xdfb434922631787e43725c6b926e989875125751",
"state_root":"0xca3149fa9e37db08d1cd49c9061db1002ef1cd58db2210f2115c8c989b2bdf45",
"receipts_root":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao":"0xc2fa210081542a87f334b7b14a2da3275e4b281dd77b007bcfcb10e34c42052e",
"block_number":"1",
"gas_limit":"10000000",
"gas_used":"0",
"timestamp":"4660",
"extra_data":"0x",
"base_fee_per_gas":"7",
"block_hash":"0x10746fa06c248e7eacd4ff8ad8b48a826c227387ee31a6aa5eb4d83ddad34f07",
"transactions_root":"0x7ffe241ea60187fdb0187bfa22de35d1f9bed7ab061d9401fd47e34a54fbede1"
},
"value":"452312848583266388373324160190187140051835877600158453279131187530910662656",
"pubkey":"0x8645866c95cbc2e08bc77ccad473540eddf4a1f51a2a8edc8d7a673824218f7f68fe565f1ab38dadd5c855b45bbcec95"
},
"signature":"0x9183ebc1edf9c3ab2bbd7abdc3b59c6b249d6647b5289a97eea36d9d61c47f12e283f64d928b1e7f5b8a5182b714fa921954678ea28ca574f5f232b2f78cf8900915a2993b396e3471e0655291fec143a300d41408f66478c8208e0f9be851dc"
}
}`
func TestBuilderBidUnmarshalUint256(t *testing.T) {
base10 := "452312848583266388373324160190187140051835877600158453279131187530910662656"
var expectedValue big.Int
require.NoError(t, expectedValue.UnmarshalText([]byte(base10)))
r := &ExecHeaderResponse{}
require.NoError(t, json.Unmarshal([]byte(testBuilderBid), r))
//require.Equal(t, expectedValue, r.Data.Message.Value)
marshaled := r.Data.Message.Value.String()
require.Equal(t, base10, marshaled)
require.Equal(t, 0, expectedValue.Cmp(r.Data.Message.Value.Int))
}
func TestMathBigUnmarshal(t *testing.T) {
base10 := "452312848583266388373324160190187140051835877600158453279131187530910662656"
var expectedValue big.Int
require.NoError(t, expectedValue.UnmarshalText([]byte(base10)))
marshaled, err := expectedValue.MarshalText()
require.NoError(t, err)
require.Equal(t, base10, string(marshaled))
var u256 Uint256
require.NoError(t, u256.UnmarshalText([]byte("452312848583266388373324160190187140051835877600158453279131187530910662656")))
}
func TestUint256Unmarshal(t *testing.T) {
base10 := "452312848583266388373324160190187140051835877600158453279131187530910662656"
bi := new(big.Int)
bi, ok := bi.SetString(base10, 10)
require.Equal(t, true, ok)
s := struct {
BigNumber Uint256 `json:"big_number"`
}{
BigNumber: Uint256{Int: bi},
}
m, err := json.Marshal(s)
require.NoError(t, err)
expected := `{"big_number":"452312848583266388373324160190187140051835877600158453279131187530910662656"}`
require.Equal(t, expected, string(m))
}
func TestMarshalBlindedBeaconBlockBodyBellatrix(t *testing.T) {
expected, err := os.ReadFile("testdata/blinded-block.json")
require.NoError(t, err)
b := &BlindedBeaconBlockBellatrix{BlindedBeaconBlockBellatrix: &eth.BlindedBeaconBlockBellatrix{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Body: &eth.BlindedBeaconBlockBodyBellatrix{
RandaoReveal: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Eth1Data: pbEth1Data(),
Graffiti: ezDecode(t, "0xdeadbeefc0ffee"),
ProposerSlashings: []*eth.ProposerSlashing{pbProposerSlashing(t)},
AttesterSlashings: []*eth.AttesterSlashing{pbAttesterSlashing(t)},
Attestations: []*eth.Attestation{pbAttestation(t)},
Deposits: []*eth.Deposit{pbDeposit(t)},
VoluntaryExits: []*eth.SignedVoluntaryExit{pbSignedVoluntaryExit(t)},
SyncAggregate: pbSyncAggregate(),
ExecutionPayloadHeader: pbExecutionPayloadHeader(t),
},
}}
m, err := json.Marshal(b)
require.NoError(t, err)
// string error output is easier to deal with
// -1 end slice index on expected is to get rid of trailing newline
// if you update this fixture and this test breaks, you probably removed the trailing newline
require.Equal(t, string(expected[0:len(expected)-1]), string(m))
}
func TestRoundTripUint256(t *testing.T) {
vs := "452312848583266388373324160190187140051835877600158453279131187530910662656"
u := stringToUint256(vs)
sb := u.SSZBytes()
uu := sszBytesToUint256(sb)
require.Equal(t, true, bytes.Equal(u.SSZBytes(), uu.SSZBytes()))
require.Equal(t, vs, uu.String())
}
func TestRoundTripProtoUint256(t *testing.T) {
h := pbExecutionPayloadHeader(t)
h.BaseFeePerGas = []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}
hm := &ExecutionPayloadHeader{ExecutionPayloadHeader: h}
m, err := json.Marshal(hm)
require.NoError(t, err)
hu := &ExecutionPayloadHeader{}
require.NoError(t, json.Unmarshal(m, hu))
hp, err := hu.ToProto()
require.NoError(t, err)
require.DeepEqual(t, h.BaseFeePerGas, hp.BaseFeePerGas)
}
func TestExecutionPayloadHeaderRoundtrip(t *testing.T) {
expected, err := os.ReadFile("testdata/execution-payload.json")
require.NoError(t, err)
hu := &ExecutionPayloadHeader{}
require.NoError(t, json.Unmarshal(expected, hu))
m, err := json.Marshal(hu)
require.NoError(t, err)
require.DeepEqual(t, string(expected[0:len(expected)-1]), string(m))
}


@@ -64,8 +64,10 @@ go_library(
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/forks/bellatrix:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
@@ -110,6 +112,7 @@ go_test(
"log_test.go",
"metrics_test.go",
"mock_test.go",
"new_slot_test.go",
"pow_block_test.go",
"process_attestation_test.go",
"process_block_test.go",
@@ -134,9 +137,11 @@ go_test(
"//beacon-chain/powchain/testing:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//beacon-chain/state/v1:go_default_library",
"//beacon-chain/state/v3:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/trie:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
@@ -188,6 +193,7 @@ go_test(
"//beacon-chain/powchain/testing:go_default_library",
"//config/params:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/trie:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",


@@ -4,6 +4,8 @@ import (
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
@@ -20,7 +22,7 @@ import (
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which
// directly retrieves chain info related data.
// directly retrieve chain info related data.
type ChainInfoFetcher interface {
HeadFetcher
FinalizationFetcher
@@ -31,6 +33,12 @@ type ChainInfoFetcher interface {
HeadDomainFetcher
}
// HeadUpdater defines a common interface for methods in blockchain service
// which allow updating the head info.
type HeadUpdater interface {
UpdateHead(context.Context) error
}
// TimeFetcher retrieves the Ethereum consensus data that's related to time.
type TimeFetcher interface {
GenesisTime() time.Time
@@ -43,7 +51,7 @@ type GenesisFetcher interface {
}
// HeadFetcher defines a common interface for methods in blockchain service which
// directly retrieves head related data.
// directly retrieve head related data.
type HeadFetcher interface {
HeadSlot() types.Slot
HeadRoot(ctx context.Context) ([]byte, error)
@@ -55,8 +63,6 @@ type HeadFetcher interface {
HeadPublicKeyToValidatorIndex(pubKey [fieldparams.BLSPubkeyLength]byte) (types.ValidatorIndex, bool)
HeadValidatorIndexToPublicKey(ctx context.Context, index types.ValidatorIndex) ([fieldparams.BLSPubkeyLength]byte, error)
ChainHeads() ([][32]byte, []types.Slot)
IsOptimistic(ctx context.Context) (bool, error)
IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error)
HeadSyncCommitteeFetcher
HeadDomainFetcher
}
@@ -73,53 +79,62 @@ type CanonicalFetcher interface {
}
// FinalizationFetcher defines a common interface for methods in blockchain service which
// directly retrieves finalization and justification related data.
// directly retrieve finalization and justification related data.
type FinalizationFetcher interface {
FinalizedCheckpt() *ethpb.Checkpoint
CurrentJustifiedCheckpt() *ethpb.Checkpoint
PreviousJustifiedCheckpt() *ethpb.Checkpoint
FinalizedCheckpt() (*ethpb.Checkpoint, error)
CurrentJustifiedCheckpt() (*ethpb.Checkpoint, error)
PreviousJustifiedCheckpt() (*ethpb.Checkpoint, error)
VerifyFinalizedBlkDescendant(ctx context.Context, blockRoot [32]byte) error
}
// OptimisticModeFetcher retrieves information about optimistic status of the node.
type OptimisticModeFetcher interface {
IsOptimistic(ctx context.Context) (bool, error)
IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error)
}
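// Illustrative sketch, not part of this change: consumers that only need
// optimistic-sync status can depend on the narrower OptimisticModeFetcher
// instead of the full HeadFetcher. The helper name is an assumption.
func exampleIsNodeOptimistic(ctx context.Context, f OptimisticModeFetcher) (bool, error) {
	return f.IsOptimistic(ctx)
}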
// FinalizedCheckpt returns the latest finalized checkpoint from chain store.
func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
cp := s.store.FinalizedCheckpt()
if cp == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
func (s *Service) FinalizedCheckpt() (*ethpb.Checkpoint, error) {
cp, err := s.store.FinalizedCheckpt()
if err != nil {
return nil, err
}
return ethpb.CopyCheckpoint(cp)
return ethpb.CopyCheckpoint(cp), nil
}
// CurrentJustifiedCheckpt returns the current justified checkpoint from chain store.
func (s *Service) CurrentJustifiedCheckpt() *ethpb.Checkpoint {
cp := s.store.JustifiedCheckpt()
if cp == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
func (s *Service) CurrentJustifiedCheckpt() (*ethpb.Checkpoint, error) {
cp, err := s.store.JustifiedCheckpt()
if err != nil {
return nil, err
}
return ethpb.CopyCheckpoint(cp)
return ethpb.CopyCheckpoint(cp), nil
}
// PreviousJustifiedCheckpt returns the previous justified checkpoint from chain store.
func (s *Service) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
cp := s.store.PrevJustifiedCheckpt()
if cp == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
func (s *Service) PreviousJustifiedCheckpt() (*ethpb.Checkpoint, error) {
cp, err := s.store.PrevJustifiedCheckpt()
if err != nil {
return nil, err
}
return ethpb.CopyCheckpoint(cp)
return ethpb.CopyCheckpoint(cp), nil
}
// BestJustifiedCheckpt returns the best justified checkpoint from store.
func (s *Service) BestJustifiedCheckpt() *ethpb.Checkpoint {
cp := s.store.BestJustifiedCheckpt()
// If there is no best justified checkpoint, return the checkpoint with root as zeros to be used for genesis cases.
if cp == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
func (s *Service) BestJustifiedCheckpt() (*ethpb.Checkpoint, error) {
cp, err := s.store.BestJustifiedCheckpt()
if err != nil {
// If there is no best justified checkpoint, return the checkpoint with root as zeros to be used for genesis cases.
if errors.Is(err, store.ErrNilCheckpoint) {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}, nil
}
return nil, err
}
return ethpb.CopyCheckpoint(cp)
return ethpb.CopyCheckpoint(cp), nil
}
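A minimal caller-side sketch, assuming code inside the blockchain package (so Service, store, ethpb, params and the pkg/errors import are in scope). Unlike BestJustifiedCheckpt above, CurrentJustifiedCheckpt does not paper over a missing checkpoint, so callers that previously relied on the zero-root fallback now handle store.ErrNilCheckpoint themselves, as the tests later in this diff show. The helper name justifiedOrZero is hypothetical.
// Hypothetical helper; only CurrentJustifiedCheckpt and store.ErrNilCheckpoint come from this diff.
func (s *Service) justifiedOrZero() (*ethpb.Checkpoint, error) {
	jp, err := s.CurrentJustifiedCheckpt()
	switch {
	case errors.Is(err, store.ErrNilCheckpoint):
		// No justified checkpoint has been recorded yet (e.g. before chain start);
		// mirror the genesis fallback used by BestJustifiedCheckpt above.
		return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}, nil
	case err != nil:
		return nil, err
	}
	return jp, nil
}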
// HeadSlot returns the slot of the head of the chain.
@@ -232,7 +247,7 @@ func (s *Service) GenesisTime() time.Time {
return s.genesisTime
}
// GenesisValidatorsRoot returns the genesis validator
// GenesisValidatorsRoot returns the genesis validators
// root of the chain.
func (s *Service) GenesisValidatorsRoot() [32]byte {
s.headLock.RLock()
@@ -299,7 +314,7 @@ func (s *Service) HeadValidatorIndexToPublicKey(_ context.Context, index types.V
return v.PublicKey(), nil
}
// ForkChoicer returns the forkchoice interface
// ForkChoicer returns the forkchoice interface.
func (s *Service) ForkChoicer() forkchoice.ForkChoicer {
return s.cfg.ForkChoiceStore
}
@@ -315,7 +330,7 @@ func (s *Service) IsOptimistic(ctx context.Context) (bool, error) {
return s.IsOptimisticForRoot(ctx, s.head.root)
}
// IsOptimisticForRoot takes the root and slot as arguments instead of the current head
// IsOptimisticForRoot takes the root as an argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(root)
@@ -345,7 +360,7 @@ func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool,
return false, nil
}
// checkpoint root could be zeros before the first finalized epoch. Use genesis root if the case.
// Checkpoint root could be zeros before the first finalized epoch. Use the genesis root in that case.
lastValidated, err := s.cfg.BeaconDB.StateSummary(ctx, s.ensureRootNotZeros(bytesutil.ToBytes32(validatedCheckpoint.Root)))
if err != nil {
return false, err
@@ -363,7 +378,7 @@ func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool,
return false, err
}
// historical non-canonical blocks here are returned as optimistic for safety.
// Historical non-canonical blocks here are returned as optimistic for safety.
return !isCanonical, nil
}
@@ -372,7 +387,7 @@ func (s *Service) SetGenesisTime(t time.Time) {
s.genesisTime = t
}
// ForkChoiceStore returns the fork choice store in the service
// ForkChoiceStore returns the fork choice store in the service.
func (s *Service) ForkChoiceStore() forkchoice.ForkChoicer {
return s.cfg.ForkChoiceStore
}

View File

@@ -5,10 +5,13 @@ import (
"testing"
"time"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
v3 "github.com/prysmaticlabs/prysm/beacon-chain/state/v3"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
@@ -26,10 +29,46 @@ var _ ChainInfoFetcher = (*Service)(nil)
var _ TimeFetcher = (*Service)(nil)
var _ ForkFetcher = (*Service)(nil)
func TestFinalizedCheckpt_Nil(t *testing.T) {
beaconDB := testDB.SetupDB(t)
c := setupBeaconChain(t, beaconDB)
assert.DeepEqual(t, params.BeaconConfig().ZeroHash[:], c.FinalizedCheckpt().Root, "Incorrect pre chain start value")
// prepareForkchoiceState prepares a beacon state with the given data to mock
// an insertion into forkchoice.
func prepareForkchoiceState(
_ context.Context,
slot types.Slot,
blockRoot [32]byte,
parentRoot [32]byte,
payloadHash [32]byte,
justifiedEpoch types.Epoch,
finalizedEpoch types.Epoch,
) (state.BeaconState, [32]byte, error) {
blockHeader := &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
}
executionHeader := &ethpb.ExecutionPayloadHeader{
BlockHash: payloadHash[:],
}
justifiedCheckpoint := &ethpb.Checkpoint{
Epoch: justifiedEpoch,
}
finalizedCheckpoint := &ethpb.Checkpoint{
Epoch: finalizedEpoch,
}
base := &ethpb.BeaconStateBellatrix{
Slot: slot,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
BlockRoots: make([][]byte, 1),
CurrentJustifiedCheckpoint: justifiedCheckpoint,
FinalizedCheckpoint: finalizedCheckpoint,
LatestExecutionPayloadHeader: executionHeader,
LatestBlockHeader: blockHeader,
}
base.BlockRoots[0] = append(base.BlockRoots[0], blockRoot[:]...)
st, err := v3.InitializeFromProto(base)
return st, blockRoot, err
}
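The tests below repeat one pattern: the helper above replaces the old direct InsertOptimisticBlock calls. A condensed sketch of that pattern, assuming the test file's existing imports and with an illustrative root and slot; the test name is hypothetical.
// Hypothetical test; shows only the prepareForkchoiceState + InsertNode pattern used below.
func TestInsertNodeSketch(t *testing.T) {
	ctx := context.Background()
	fcs := doublylinkedtree.New() // protoarray.New() works the same way; both constructors are now parameterless
	st, root, err := prepareForkchoiceState(ctx, 1, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
	require.NoError(t, err)
	require.NoError(t, fcs.InsertNode(ctx, st, root))
}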
func TestHeadRoot_Nil(t *testing.T) {
@@ -41,9 +80,9 @@ func TestHeadRoot_Nil(t *testing.T) {
}
func TestService_ForkChoiceStore(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
p := c.ForkChoiceStore()
require.Equal(t, 0, int(p.FinalizedEpoch()))
require.Equal(t, types.Epoch(0), p.FinalizedCheckpoint().Epoch)
}
func TestFinalizedCheckpt_CanRetrieve(t *testing.T) {
@@ -51,9 +90,11 @@ func TestFinalizedCheckpt_CanRetrieve(t *testing.T) {
cp := &ethpb.Checkpoint{Epoch: 5, Root: bytesutil.PadTo([]byte("foo"), 32)}
c := setupBeaconChain(t, beaconDB)
c.store.SetFinalizedCheckpt(cp)
c.store.SetFinalizedCheckptAndPayloadHash(cp, [32]byte{'a'})
assert.Equal(t, cp.Epoch, c.FinalizedCheckpt().Epoch, "Unexpected finalized epoch")
fcp, err := c.FinalizedCheckpt()
require.NoError(t, err)
assert.Equal(t, cp.Epoch, fcp.Epoch, "Unexpected finalized epoch")
}
func TestFinalizedCheckpt_GenesisRootOk(t *testing.T) {
@@ -62,19 +103,24 @@ func TestFinalizedCheckpt_GenesisRootOk(t *testing.T) {
genesisRoot := [32]byte{'A'}
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c := setupBeaconChain(t, beaconDB)
c.store.SetFinalizedCheckpt(cp)
c.store.SetFinalizedCheckptAndPayloadHash(cp, [32]byte{'a'})
c.originBlockRoot = genesisRoot
assert.DeepEqual(t, c.originBlockRoot[:], c.FinalizedCheckpt().Root)
cp, err := c.FinalizedCheckpt()
require.NoError(t, err)
assert.DeepEqual(t, c.originBlockRoot[:], cp.Root)
}
func TestCurrentJustifiedCheckpt_CanRetrieve(t *testing.T) {
beaconDB := testDB.SetupDB(t)
c := setupBeaconChain(t, beaconDB)
assert.Equal(t, params.BeaconConfig().ZeroHash, bytesutil.ToBytes32(c.CurrentJustifiedCheckpt().Root), "Unexpected justified epoch")
_, err := c.CurrentJustifiedCheckpt()
require.ErrorIs(t, err, store.ErrNilCheckpoint)
cp := &ethpb.Checkpoint{Epoch: 6, Root: bytesutil.PadTo([]byte("foo"), 32)}
c.store.SetJustifiedCheckpt(cp)
assert.Equal(t, cp.Epoch, c.CurrentJustifiedCheckpt().Epoch, "Unexpected justified epoch")
c.store.SetJustifiedCheckptAndPayloadHash(cp, [32]byte{})
jp, err := c.CurrentJustifiedCheckpt()
require.NoError(t, err)
assert.Equal(t, cp.Epoch, jp.Epoch, "Unexpected justified epoch")
}
func TestJustifiedCheckpt_GenesisRootOk(t *testing.T) {
@@ -83,9 +129,11 @@ func TestJustifiedCheckpt_GenesisRootOk(t *testing.T) {
c := setupBeaconChain(t, beaconDB)
genesisRoot := [32]byte{'B'}
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c.store.SetJustifiedCheckpt(cp)
c.store.SetJustifiedCheckptAndPayloadHash(cp, [32]byte{})
c.originBlockRoot = genesisRoot
assert.DeepEqual(t, c.originBlockRoot[:], c.CurrentJustifiedCheckpt().Root)
cp, err := c.CurrentJustifiedCheckpt()
require.NoError(t, err)
assert.DeepEqual(t, c.originBlockRoot[:], cp.Root)
}
func TestPreviousJustifiedCheckpt_CanRetrieve(t *testing.T) {
@@ -93,9 +141,12 @@ func TestPreviousJustifiedCheckpt_CanRetrieve(t *testing.T) {
cp := &ethpb.Checkpoint{Epoch: 7, Root: bytesutil.PadTo([]byte("foo"), 32)}
c := setupBeaconChain(t, beaconDB)
assert.Equal(t, params.BeaconConfig().ZeroHash, bytesutil.ToBytes32(c.CurrentJustifiedCheckpt().Root), "Unexpected justified epoch")
_, err := c.PreviousJustifiedCheckpt()
require.ErrorIs(t, err, store.ErrNilCheckpoint)
c.store.SetPrevJustifiedCheckpt(cp)
assert.Equal(t, cp.Epoch, c.PreviousJustifiedCheckpt().Epoch, "Unexpected previous justified epoch")
pcp, err := c.PreviousJustifiedCheckpt()
require.NoError(t, err)
assert.Equal(t, cp.Epoch, pcp.Epoch, "Unexpected previous justified epoch")
}
func TestPrevJustifiedCheckpt_GenesisRootOk(t *testing.T) {
@@ -106,7 +157,9 @@ func TestPrevJustifiedCheckpt_GenesisRootOk(t *testing.T) {
c := setupBeaconChain(t, beaconDB)
c.store.SetPrevJustifiedCheckpt(cp)
c.originBlockRoot = genesisRoot
assert.DeepEqual(t, c.originBlockRoot[:], c.PreviousJustifiedCheckpt().Root)
pcp, err := c.PreviousJustifiedCheckpt()
require.NoError(t, err)
assert.DeepEqual(t, c.originBlockRoot[:], pcp.Root)
}
func TestHeadSlot_CanRetrieve(t *testing.T) {
@@ -274,27 +327,55 @@ func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
}
func TestService_ChainHeads_ProtoArray(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0,
params.BeaconConfig().ZeroHash)}}
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 103, [32]byte{'d'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0))
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
roots, slots := c.ChainHeads()
require.DeepEqual(t, [][32]byte{{'c'}, {'d'}, {'e'}}, roots)
require.DeepEqual(t, []types.Slot{102, 103, 104}, slots)
}
//
//  A <- B <- C
//   \    \
//    \    ---------- E
//     ---------- D
func TestService_ChainHeads_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}}
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 103, [32]byte{'d'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0))
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
roots, slots := c.ChainHeads()
require.Equal(t, 3, len(roots))
@@ -371,9 +452,13 @@ func TestService_IsOptimistic_ProtoArray(t *testing.T) {
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}, head: &head{slot: 101, root: [32]byte{'b'}}}
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0))
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
@@ -387,9 +472,13 @@ func TestService_IsOptimistic_DoublyLinkedTree(t *testing.T) {
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0))
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
@@ -406,9 +495,13 @@ func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
func TestService_IsOptimisticForRoot_ProtoArray(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}, head: &head{slot: 101, root: [32]byte{'b'}}}
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0))
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
opt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'})
require.NoError(t, err)
@@ -417,9 +510,13 @@ func TestService_IsOptimisticForRoot_ProtoArray(t *testing.T) {
func TestService_IsOptimisticForRoot_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0))
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
opt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'})
require.NoError(t, err)
@@ -429,7 +526,7 @@ func TestService_IsOptimisticForRoot_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimisticForRoot_DB_ProtoArray(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
@@ -494,7 +591,7 @@ func TestService_IsOptimisticForRoot_DB_ProtoArray(t *testing.T) {
func TestService_IsOptimisticForRoot_DB_DoublyLinkedTree(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
@@ -558,7 +655,7 @@ func TestService_IsOptimisticForRoot_DB_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10

View File

@@ -3,19 +3,62 @@ package blockchain
import "github.com/pkg/errors"
var (
// errNilJustifiedInStore is returned when a nil justified checkpt is returned from store.
errNilJustifiedInStore = errors.New("nil justified checkpoint returned from store")
// errNilBestJustifiedInStore is returned when a nil justified checkpt is returned from store.
errNilBestJustifiedInStore = errors.New("nil best justified checkpoint returned from store")
// ErrInvalidPayload is returned when the payload is invalid
ErrInvalidPayload = errors.New("received an INVALID payload from execution engine")
// ErrUndefinedExecutionEngineError is returned when the execution engine returns an error that is not defined
ErrUndefinedExecutionEngineError = errors.New("received an undefined ee error")
// errNilFinalizedInStore is returned when a nil finalized checkpt is returned from store.
errNilFinalizedInStore = errors.New("nil finalized checkpoint returned from store")
// errNilFinalizedCheckpoint is returned when a nil finalized checkpt is returned from a state.
errNilFinalizedCheckpoint = errors.New("nil finalized checkpoint returned from state")
// errNilJustifiedCheckpoint is returned when a nil justified checkpt is returned from a state.
errNilJustifiedCheckpoint = errors.New("nil justified checkpoint returned from state")
// errInvalidNilSummary is returned when a nil summary is returned from the DB.
errInvalidNilSummary = errors.New("nil summary returned from the DB")
// errNilParentInDB is returned when a nil parent block is returned from the DB.
errNilParentInDB = errors.New("nil parent block in DB")
// errWrongBlockCount is returned when the wrong number of blocks or
// block roots is used
// errWrongBlockCount is returned when the wrong number of blocks or block roots is used
errWrongBlockCount = errors.New("wrong number of blocks or block roots")
// errNotOptimisticCandidate is returned when a block is not a valid optimistic sync candidate.
errNotOptimisticCandidate = errors.New("block is not suitable for optimistic sync")
// errBlockNotFoundInCacheOrDB is returned when a block is not found in the cache or DB.
errBlockNotFoundInCacheOrDB = errors.New("block not found in cache or db")
// errNilStateFromStategen is returned when a nil state is returned from the state generator.
errNilStateFromStategen = errors.New("justified state can't be nil")
// errWSBlockNotFound is returned when a block is not found in the WS cache or DB.
errWSBlockNotFound = errors.New("weak subjectivity root not found in db")
// errWSBlockNotFoundInEpoch is returned when a block is not found in the WS cache or DB within epoch.
errWSBlockNotFoundInEpoch = errors.New("weak subjectivity root not found in db within epoch")
// errNotDescendantOfFinalized is returned when a block is not a descendant of the finalized checkpoint
errNotDescendantOfFinalized = invalidBlock{errors.New("not descendant of finalized checkpoint")}
)
// An invalid block is a block that fails the state transition based on the core protocol rules.
// The beacon node shall neither accept nor build blocks that branch off from an invalid block.
// Some examples of invalid blocks are:
//  - The block violates state transition rules.
//  - The block is deemed invalid by the execution layer client.
//  - The block violates certain fork choice rules (e.g. it is before the finalized slot or does not descend from the finalized checkpoint).
type invalidBlock struct {
error
}
type invalidBlockError interface {
Error() string
InvalidBlock() bool
}
// InvalidBlock returns true for `invalidBlock`.
func (e invalidBlock) InvalidBlock() bool {
return true
}
// IsInvalidBlock returns true if the error, or any error it wraps, is an `invalidBlock`.
func IsInvalidBlock(e error) bool {
if e == nil {
return false
}
d, ok := e.(invalidBlockError)
if !ok {
return IsInvalidBlock(errors.Unwrap(e))
}
return d.InvalidBlock()
}
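A short sketch of the intended use, with a hypothetical caller: IsInvalidBlock lets block-processing call sites separate consensus-invalid blocks (never to be retried or built upon) from transient failures such as database or engine errors. Only invalidBlock, IsInvalidBlock and ErrInvalidPayload come from this file; the handling policy and the package-level log entry usage shown here are illustrative assumptions.
// Hypothetical caller; the policy below is illustrative, not the package's actual handling.
func handleProcessingError(err error) {
	if err == nil {
		return
	}
	if IsInvalidBlock(err) {
		// Consensus-invalid: drop the block and never build on it.
		log.WithError(err).Warn("Rejected invalid block")
		return
	}
	// Transient failure (DB, execution engine, context): safe to retry later.
	log.WithError(err).Error("Could not process block")
}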

View File

@@ -0,0 +1,17 @@
package blockchain
import (
"testing"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestIsInvalidBlock(t *testing.T) {
require.Equal(t, false, IsInvalidBlock(ErrInvalidPayload))
err := invalidBlock{ErrInvalidPayload}
require.Equal(t, true, IsInvalidBlock(err))
newErr := errors.Wrap(err, "wrap me")
require.Equal(t, true, IsInvalidBlock(newErr))
}

View File

@@ -12,10 +12,10 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
@@ -24,18 +24,11 @@ import (
"go.opencensus.io/trace"
)
var (
ErrInvalidPayload = errors.New("recevied an INVALID payload from execution engine")
ErrUndefinedExecutionEngineError = errors.New("received an undefined ee error")
)
// notifyForkchoiceUpdateArg is the argument for the forkchoice update notification `notifyForkchoiceUpdate`.
type notifyForkchoiceUpdateArg struct {
headState state.BeaconState
headRoot [32]byte
headBlock interfaces.BeaconBlock
finalizedRoot [32]byte
justifiedRoot [32]byte
headState state.BeaconState
headRoot [32]byte
headBlock interfaces.BeaconBlock
}
// notifyForkchoiceUpdate signals the execution engine about fork choice updates. The execution engine should:
@@ -61,18 +54,12 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
finalizedHash, err := s.getPayloadHash(ctx, arg.finalizedRoot)
if err != nil {
return nil, errors.Wrap(err, "could not get finalized block hash")
}
justifiedHash, err := s.getPayloadHash(ctx, arg.justifiedRoot)
if err != nil {
return nil, errors.Wrap(err, "could not get justified block hash")
}
finalizedHash := s.store.FinalizedPayloadBlockHash()
justifiedHash := s.store.JustifiedPayloadBlockHash()
fcs := &enginev1.ForkchoiceState{
HeadBlockHash: headPayload.BlockHash,
SafeBlockHash: justifiedHash,
FinalizedBlockHash: finalizedHash,
SafeBlockHash: justifiedHash[:],
FinalizedBlockHash: finalizedHash[:],
}
nextSlot := s.CurrentSlot() + 1 // Cache payload ID for next slot proposer.
@@ -89,10 +76,11 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
log.WithFields(logrus.Fields{
"headSlot": headBlk.Slot(),
"headPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(headPayload.BlockHash)),
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash)),
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash[:])),
}).Info("Called fork choice updated with optimistic block")
return payloadID, s.optimisticCandidateBlock(ctx, headBlk)
case powchain.ErrInvalidPayloadStatus:
newPayloadInvalidNodeCount.Inc()
headRoot := arg.headRoot
invalidRoots, err := s.ForkChoicer().SetOptimisticToInvalid(ctx, headRoot, bytesutil.ToBytes32(headBlk.ParentRoot()), bytesutil.ToBytes32(lastValidHash))
if err != nil {
@@ -101,12 +89,35 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
if err := s.removeInvalidBlockAndState(ctx, invalidRoots); err != nil {
return nil, err
}
r, err := s.updateHead(ctx, s.justifiedBalances.balances)
if err != nil {
return nil, err
}
b, err := s.getBlock(ctx, r)
if err != nil {
return nil, err
}
st, err := s.cfg.StateGen.StateByRoot(ctx, r)
if err != nil {
return nil, err
}
pid, err := s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: st,
headRoot: r,
headBlock: b.Block(),
})
if err != nil {
return nil, err
}
log.WithFields(logrus.Fields{
"slot": headBlk.Slot(),
"blockRoot": fmt.Sprintf("%#x", headRoot),
"invalidCount": len(invalidRoots),
}).Warn("Pruned invalid blocks")
return nil, ErrInvalidPayload
return pid, ErrInvalidPayload
default:
return nil, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
@@ -125,19 +136,19 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
// getPayloadHash returns the payload hash given the block root.
// If the block is before the Bellatrix fork epoch, it returns the zero hash.
func (s *Service) getPayloadHash(ctx context.Context, root [32]byte) ([]byte, error) {
finalizedBlock, err := s.getBlock(ctx, s.ensureRootNotZeros(root))
func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, error) {
blk, err := s.getBlock(ctx, s.ensureRootNotZeros(bytesutil.ToBytes32(root)))
if err != nil {
return nil, err
return [32]byte{}, err
}
if blocks.IsPreBellatrixVersion(finalizedBlock.Block().Version()) {
return params.BeaconConfig().ZeroHash[:], nil
if blocks.IsPreBellatrixVersion(blk.Block().Version()) {
return params.BeaconConfig().ZeroHash, nil
}
payload, err := finalizedBlock.Block().Body().ExecutionPayload()
payload, err := blk.Block().Body().ExecutionPayload()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
return [32]byte{}, errors.Wrap(err, "could not get execution payload")
}
return payload.BlockHash, nil
return bytesutil.ToBytes32(payload.BlockHash), nil
}
// notifyNewPayload signals the execution engine of a new payload.
@@ -152,20 +163,20 @@ func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
if blocks.IsPreBellatrixVersion(postStateVersion) {
return true, nil
}
if err := helpers.BeaconBlockIsNil(blk); err != nil {
if err := wrapper.BeaconBlockIsNil(blk); err != nil {
return false, err
}
body := blk.Block().Body()
enabled, err := blocks.IsExecutionEnabledUsingHeader(postStateHeader, body)
if err != nil {
return false, errors.Wrap(err, "could not determine if execution is enabled")
return false, errors.Wrap(invalidBlock{err}, "could not determine if execution is enabled")
}
if !enabled {
return true, nil
}
payload, err := body.ExecutionPayload()
if err != nil {
return false, errors.Wrap(err, "could not get execution payload")
return false, errors.Wrap(invalidBlock{err}, "could not get execution payload")
}
lastValidHash, err := s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload)
switch err {
@@ -197,7 +208,7 @@ func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
"blockRoot": fmt.Sprintf("%#x", root),
"invalidCount": len(invalidRoots),
}).Warn("Pruned invalid blocks")
return false, ErrInvalidPayload
return false, invalidBlock{ErrInvalidPayload}
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
@@ -219,14 +230,10 @@ func (s *Service) optimisticCandidateBlock(ctx context.Context, blk interfaces.B
if blk.Slot()+params.BeaconConfig().SafeSlotsToImportOptimistically <= s.CurrentSlot() {
return nil
}
parent, err := s.cfg.BeaconDB.Block(ctx, bytesutil.ToBytes32(blk.ParentRoot()))
parent, err := s.getBlock(ctx, bytesutil.ToBytes32(blk.ParentRoot()))
if err != nil {
return err
}
if parent == nil || parent.IsNil() {
return errNilParentInDB
}
parentIsExecutionBlock, err := blocks.IsExecutionBlock(parent.Block().Body())
if err != nil {
return err
@@ -262,11 +269,14 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
recipient, err := s.cfg.BeaconDB.FeeRecipientByValidatorID(ctx, proposerID)
switch {
case errors.Is(err, kv.ErrNotFoundFeeRecipient):
if feeRecipient.String() == fieldparams.EthBurnAddressHex {
if feeRecipient.String() == params.BeaconConfig().EthBurnAddressHex {
logrus.WithFields(logrus.Fields{
"validatorIndex": proposerID,
"burnAddress": fieldparams.EthBurnAddressHex,
}).Error("Fee recipient not set. Using burn address")
"burnAddress": params.BeaconConfig().EthBurnAddressHex,
}).Warn("Fee recipient is currently using the burn address, " +
"you will not be rewarded transaction fees on this setting. " +
"Please set a different eth address as the fee recipient. " +
"Please refer to our documentation for instructions")
}
case err != nil:
return false, nil, 0, errors.Wrap(err, "could not get fee recipient in db")

View File

@@ -9,10 +9,12 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
mockPOW "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
bstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
@@ -42,7 +44,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, altairBlk))
require.NoError(t, beaconDB.SaveBlock(ctx, bellatrixBlk))
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -50,13 +52,20 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
service, err := NewService(ctx, opts...)
st, _ := util.DeterministicGenesisState(t, 1)
st, _ := util.DeterministicGenesisState(t, 10)
service.head = &head{
state: st,
}
require.NoError(t, err)
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 1, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
tests := []struct {
name string
@@ -174,14 +183,17 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
service.cfg.ExecutionEngineCaller = &mockPOW.EngineClient{ErrForkchoiceUpdated: tt.newForkchoiceErr}
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, beaconDB.SaveState(ctx, st, tt.finalizedRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, tt.finalizedRoot))
fc := &ethpb.Checkpoint{Epoch: 0, Root: tt.finalizedRoot[:]}
service.store.SetFinalizedCheckptAndPayloadHash(fc, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(fc, [32]byte{'b'})
arg := &notifyForkchoiceUpdateArg{
headState: st,
headRoot: tt.headRoot,
headBlock: tt.blk,
finalizedRoot: tt.finalizedRoot,
justifiedRoot: tt.justifiedRoot,
headState: st,
headRoot: tt.headRoot,
headBlock: tt.blk,
}
_, err := service.notifyForkchoiceUpdate(ctx, arg)
_, err = service.notifyForkchoiceUpdate(ctx, arg)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
} else {
@@ -191,6 +203,163 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
}
}
//
//  A <- B <- C <- D
//        \
//         ---------- E <- F
//                     \
//                      ------ G
// D is the current head; attestations for F and G arrive late, and both branches turn out to be invalid.
// We call FCU with head G, switch recursively to F, and finally settle back on D.
//
// We test:
// 1. forkchoice removes blocks F and G from the forkchoice implementation
// 2. forkchoice removes the weights of these blocks
// 3. the blockchain package calls fcu to obtain heads G -> F -> D.
func Test_NotifyForkchoiceUpdateRecursive(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
// Prepare blocks
ba := util.NewBeaconBlockBellatrix()
ba.Block.Body.ExecutionPayload.BlockNumber = 1
wba, err := wrapper.WrappedSignedBeaconBlock(ba)
require.NoError(t, err)
bra, err := wba.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wba))
bb := util.NewBeaconBlockBellatrix()
bb.Block.Body.ExecutionPayload.BlockNumber = 2
wbb, err := wrapper.WrappedSignedBeaconBlock(bb)
require.NoError(t, err)
brb, err := wbb.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbb))
bc := util.NewBeaconBlockBellatrix()
bc.Block.Body.ExecutionPayload.BlockNumber = 3
wbc, err := wrapper.WrappedSignedBeaconBlock(bc)
require.NoError(t, err)
brc, err := wbc.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbc))
bd := util.NewBeaconBlockBellatrix()
pd := [32]byte{'D'}
bd.Block.Body.ExecutionPayload.BlockHash = pd[:]
bd.Block.Body.ExecutionPayload.BlockNumber = 4
wbd, err := wrapper.WrappedSignedBeaconBlock(bd)
require.NoError(t, err)
brd, err := wbd.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbd))
be := util.NewBeaconBlockBellatrix()
pe := [32]byte{'E'}
be.Block.Body.ExecutionPayload.BlockHash = pe[:]
be.Block.Body.ExecutionPayload.BlockNumber = 5
wbe, err := wrapper.WrappedSignedBeaconBlock(be)
require.NoError(t, err)
bre, err := wbe.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbe))
bf := util.NewBeaconBlockBellatrix()
pf := [32]byte{'F'}
bf.Block.Body.ExecutionPayload.BlockHash = pf[:]
bf.Block.Body.ExecutionPayload.BlockNumber = 6
bf.Block.ParentRoot = bre[:]
wbf, err := wrapper.WrappedSignedBeaconBlock(bf)
require.NoError(t, err)
brf, err := wbf.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbf))
bg := util.NewBeaconBlockBellatrix()
bg.Block.Body.ExecutionPayload.BlockNumber = 7
pg := [32]byte{'G'}
bg.Block.Body.ExecutionPayload.BlockHash = pg[:]
bg.Block.ParentRoot = bre[:]
wbg, err := wrapper.WrappedSignedBeaconBlock(bg)
require.NoError(t, err)
brg, err := wbg.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbg))
// Insert blocks into forkchoice
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.justifiedBalances.balances = []uint64{50, 100, 200}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
// Insert attestations for D, F and G so that F and G outweigh D
// Ensure G is head
fcs.ProcessAttestation(ctx, []uint64{0}, brd, 1)
fcs.ProcessAttestation(ctx, []uint64{1}, brf, 1)
fcs.ProcessAttestation(ctx, []uint64{2}, brg, 1)
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: bra}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(jc))
headRoot, err := fcs.Head(ctx, []uint64{50, 100, 200})
require.NoError(t, err)
require.Equal(t, brg, headRoot)
// Prepare Engine Mock to return invalid unless head is D, LVH = E
service.cfg.ExecutionEngineCaller = &mockPOW.EngineClient{ErrForkchoiceUpdated: powchain.ErrInvalidPayloadStatus, ForkChoiceUpdatedResp: pe[:], OverrideValidHash: [32]byte{'D'}}
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
fc := &ethpb.Checkpoint{Epoch: 0, Root: bra[:]}
service.store.SetFinalizedCheckptAndPayloadHash(fc, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(fc, [32]byte{'b'})
a := &notifyForkchoiceUpdateArg{
headState: st,
headBlock: wbg.Block(),
headRoot: brg,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
require.ErrorIs(t, ErrInvalidPayload, err)
// Ensure Head is D
headRoot, err = fcs.Head(ctx, service.justifiedBalances.balances)
require.NoError(t, err)
require.Equal(t, brd, headRoot)
// Ensure F and G were removed but their parent E wasn't
require.Equal(t, false, fcs.HasNode(brf))
require.Equal(t, false, fcs.HasNode(brg))
require.Equal(t, true, fcs.HasNode(bre))
}
func Test_NotifyNewPayload(t *testing.T) {
cfg := params.BeaconConfig()
cfg.TerminalTotalDifficulty = "2"
@@ -198,7 +367,7 @@ func Test_NotifyNewPayload(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -238,16 +407,21 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, err)
r, err := bellatrixBlk.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 1, r, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, r, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
tests := []struct {
name string
postState state.BeaconState
postState bstate.BeaconState
invalidBlock bool
isValidPayload bool
blk interfaces.SignedBeaconBlock
newPayloadErr error
errString string
name string
}{
{
name: "phase 0 post state",
@@ -279,6 +453,7 @@ func Test_NotifyNewPayload(t *testing.T) {
newPayloadErr: powchain.ErrInvalidPayloadStatus,
errString: ErrInvalidPayload.Error(),
isValidPayload: false,
invalidBlock: true,
},
{
name: "altair pre state, altair block",
@@ -384,15 +559,21 @@ func Test_NotifyNewPayload(t *testing.T) {
}
service.cfg.ExecutionEngineCaller = e
root := [32]byte{'a'}
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 0, root, root, params.BeaconConfig().ZeroHash, 0, 0))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, root, root, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
postVersion, postHeader, err := getStateVersionAndPayload(tt.postState)
require.NoError(t, err)
isValidPayload, err := service.notifyNewPayload(ctx, postVersion, postHeader, tt.blk)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
if tt.invalidBlock {
require.Equal(t, true, IsInvalidBlock(err))
}
} else {
require.NoError(t, err)
require.Equal(t, tt.isValidPayload, isValidPayload)
require.Equal(t, false, IsInvalidBlock(err))
}
})
}
@@ -404,7 +585,7 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -447,7 +628,7 @@ func Test_IsOptimisticCandidateBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -537,15 +718,19 @@ func Test_IsOptimisticCandidateBlock(t *testing.T) {
jRoot, err := tt.justified.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, tt.justified))
service.store.SetJustifiedCheckpt(
service.store.SetJustifiedCheckptAndPayloadHash(
&ethpb.Checkpoint{
Root: jRoot[:],
Epoch: slots.ToEpoch(tt.justified.Block().Slot()),
})
}, [32]byte{'a'})
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrappedParentBlock))
err = service.optimisticCandidateBlock(ctx, tt.blk)
require.Equal(t, tt.err, err)
if tt.err != nil {
require.Equal(t, tt.err.Error(), err.Error())
} else {
require.NoError(t, err)
}
}
}
@@ -624,8 +809,8 @@ func Test_GetPayloadAttribute(t *testing.T) {
require.NoError(t, err)
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
require.Equal(t, fieldparams.EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient).String())
require.LogsContain(t, hook, "Fee recipient not set. Using burn address")
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
@@ -645,7 +830,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
stateGen := stategen.New(beaconDB)
fcs := protoarray.New(0, 0, [32]byte{})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stateGen),
@@ -661,8 +846,9 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
genesisRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
assert.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash,
params.BeaconConfig().ZeroHash, 0, 0))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
genesisSummary := &ethpb.StateSummary{
Root: genesisStateRoot[:],
Slot: 0,
@@ -693,8 +879,9 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
Slot: 320,
}
require.NoError(t, beaconDB.SaveStateSummary(ctx, opStateSummary))
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 320, opRoot, genesisRoot,
params.BeaconConfig().ZeroHash, 10, 10))
state, blkRoot, err = prepareForkchoiceState(ctx, 320, opRoot, genesisRoot, params.BeaconConfig().ZeroHash, 10, 10)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
assert.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, opRoot))
require.NoError(t, service.updateFinalized(ctx, opCheckpoint))
cp, err := service.cfg.BeaconDB.LastValidatedCheckpoint(ctx)
@@ -721,8 +908,9 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
Slot: 640,
}
require.NoError(t, beaconDB.SaveStateSummary(ctx, validSummary))
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 640, validRoot, params.BeaconConfig().ZeroHash,
params.BeaconConfig().ZeroHash, 20, 20))
state, blkRoot, err = prepareForkchoiceState(ctx, 640, validRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 20, 20)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
require.NoError(t, fcs.SetOptimisticToValid(ctx, validRoot))
assert.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, validRoot))
require.NoError(t, service.updateFinalized(ctx, validCheckpoint))
@@ -742,7 +930,7 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(protoarray.New(0, 0, [32]byte{})),
WithForkChoiceStore(protoarray.New()),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -798,12 +986,12 @@ func TestService_getPayloadHash(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(protoarray.New(0, 0, [32]byte{})),
WithForkChoiceStore(protoarray.New()),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
_, err = service.getPayloadHash(ctx, [32]byte{})
_, err = service.getPayloadHash(ctx, []byte{})
require.ErrorIs(t, errBlockNotFoundInCacheOrDB, err)
b := util.NewBeaconBlock()
@@ -813,20 +1001,20 @@ func TestService_getPayloadHash(t *testing.T) {
require.NoError(t, err)
service.saveInitSyncBlock(r, wsb)
h, err := service.getPayloadHash(ctx, r)
h, err := service.getPayloadHash(ctx, r[:])
require.NoError(t, err)
require.DeepEqual(t, params.BeaconConfig().ZeroHash[:], h)
require.DeepEqual(t, params.BeaconConfig().ZeroHash, h)
bb := util.NewBeaconBlockBellatrix()
h = []byte{'a'}
bb.Block.Body.ExecutionPayload.BlockHash = h
h = [32]byte{'a'}
bb.Block.Body.ExecutionPayload.BlockHash = h[:]
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(bb)
require.NoError(t, err)
service.saveInitSyncBlock(r, wsb)
h, err = service.getPayloadHash(ctx, r)
h, err = service.getPayloadHash(ctx, r[:])
require.NoError(t, err)
require.DeepEqual(t, []byte{'a'}, h)
require.DeepEqual(t, [32]byte{'a'}, h)
}

View File

@@ -9,14 +9,17 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpbv1 "github.com/prysmaticlabs/prysm/proto/eth/v1"
"github.com/prysmaticlabs/prysm/time/slots"
@@ -27,20 +30,21 @@ import (
// UpdateAndSaveHeadWithBalances updates the beacon state head after getting the justified balances from cache.
// This function is only used in spec tests; it saves the head after updating it.
func (s *Service) UpdateAndSaveHeadWithBalances(ctx context.Context) error {
cp := s.store.JustifiedCheckpt()
if cp == nil {
return errors.New("no justified checkpoint")
}
balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(cp.Root))
jp, err := s.store.JustifiedCheckpt()
if err != nil {
msg := fmt.Sprintf("could not read balances for state w/ justified checkpoint %#x", cp.Root)
return err
}
balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(jp.Root))
if err != nil {
msg := fmt.Sprintf("could not read balances for state w/ justified checkpoint %#x", jp.Root)
return errors.Wrap(err, msg)
}
headRoot, err := s.updateHead(ctx, balances)
if err != nil {
return errors.Wrap(err, "could not update head")
}
headBlock, err := s.cfg.BeaconDB.Block(ctx, headRoot)
headBlock, err := s.getBlock(ctx, headRoot)
if err != nil {
return err
}
@@ -66,58 +70,66 @@ func (s *Service) updateHead(ctx context.Context, balances []uint64) ([32]byte,
defer span.End()
// Get head from the fork choice service.
f := s.store.FinalizedCheckpt()
if f == nil {
return [32]byte{}, errNilFinalizedInStore
f, err := s.store.FinalizedCheckpt()
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get finalized checkpoint")
}
j := s.store.JustifiedCheckpt()
if j == nil {
return [32]byte{}, errNilJustifiedInStore
j, err := s.store.JustifiedCheckpt()
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get justified checkpoint")
}
// To get head before the first justified epoch, the fork choice will start with origin root
// instead of zero hashes.
headStartRoot := bytesutil.ToBytes32(j.Root)
if headStartRoot == params.BeaconConfig().ZeroHash {
headStartRoot = s.originBlockRoot
}
headStartRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(j.Root))
// In order to process the head, the fork choice store requires justified info.
// If the fork choice store is missing the justified block info, a node should
// re-initialize the fork choice store using the latest justified info.
// This recovers from a fatal condition and should not happen at runtime.
if !s.cfg.ForkChoiceStore.HasNode(headStartRoot) {
jb, err := s.cfg.BeaconDB.Block(ctx, headStartRoot)
jb, err := s.getBlock(ctx, headStartRoot)
if err != nil {
return [32]byte{}, err
}
st, err := s.cfg.StateGen.StateByRoot(ctx, s.ensureRootNotZeros(headStartRoot))
if err != nil {
return [32]byte{}, err
}
if features.Get().EnableForkChoiceDoublyLinkedTree {
s.cfg.ForkChoiceStore = doublylinkedtree.New(j.Epoch, f.Epoch)
s.cfg.ForkChoiceStore = doublylinkedtree.New()
} else {
s.cfg.ForkChoiceStore = protoarray.New(j.Epoch, f.Epoch, bytesutil.ToBytes32(f.Root))
s.cfg.ForkChoiceStore = protoarray.New()
}
if err := s.insertBlockToForkChoiceStore(ctx, jb.Block(), headStartRoot, f, j); err != nil {
if err := s.insertBlockToForkChoiceStore(ctx, jb.Block(), headStartRoot, st, f, j); err != nil {
return [32]byte{}, err
}
}
return s.cfg.ForkChoiceStore.Head(ctx, j.Epoch, headStartRoot, balances, f.Epoch)
jc := &forkchoicetypes.Checkpoint{Epoch: j.Epoch, Root: headStartRoot}
fc := &forkchoicetypes.Checkpoint{Epoch: f.Epoch, Root: s.ensureRootNotZeros(bytesutil.ToBytes32(f.Root))}
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(jc); err != nil {
return [32]byte{}, err
}
if err := s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(fc); err != nil {
return [32]byte{}, err
}
return s.cfg.ForkChoiceStore.Head(ctx, balances)
}
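To summarize the forkchoice API shift visible above: Head no longer receives the justified epoch, start root and finalized epoch directly; callers first push the checkpoints into the store and then ask for the head with only the balances. A condensed sketch of that contract, where fcs, ctx, jEpoch, fEpoch, jRoot, fRoot and balances are placeholder names, not values from the diff.
// Sketch of the new contract; variable names are placeholders.
if err := fcs.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: jEpoch, Root: jRoot}); err != nil {
	return err
}
if err := fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: fEpoch, Root: fRoot}); err != nil {
	return err
}
headRoot, err := fcs.Head(ctx, balances) // previously Head(ctx, j.Epoch, headStartRoot, balances, f.Epoch)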
// This saves the head info to the local service cache and also saves the
// new head root to the DB.
func (s *Service) saveHead(ctx context.Context, headRoot [32]byte, headBlock interfaces.SignedBeaconBlock, headState state.BeaconState) error {
func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock interfaces.SignedBeaconBlock, headState state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.saveHead")
defer span.End()
// Do nothing if head hasn't changed.
r, err := s.HeadRoot(ctx)
oldHeadroot, err := s.HeadRoot(ctx)
if err != nil {
return err
}
if headRoot == bytesutil.ToBytes32(r) {
if newHeadRoot == bytesutil.ToBytes32(oldHeadroot) {
return nil
}
if err := helpers.BeaconBlockIsNil(headBlock); err != nil {
if err := wrapper.BeaconBlockIsNil(headBlock); err != nil {
return err
}
if headState == nil || headState.IsNil() {
@@ -126,17 +138,19 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte, headBlock int
// If the head state is not available, just return nil.
// There's nothing to cache.
if !s.cfg.BeaconDB.HasStateSummary(ctx, headRoot) {
if !s.cfg.BeaconDB.HasStateSummary(ctx, newHeadRoot) {
return nil
}
// A chain re-org occurred, so we fire an event notifying the rest of the services.
headSlot := s.HeadSlot()
newHeadSlot := headBlock.Block().Slot()
s.headLock.RLock()
oldHeadRoot := s.headRoot()
oldStateRoot := s.headBlock().Block().StateRoot()
s.headLock.RUnlock()
headSlot := s.HeadSlot()
newHeadSlot := headBlock.Block().Slot()
newStateRoot := headBlock.Block().StateRoot()
if bytesutil.ToBytes32(headBlock.Block().ParentRoot()) != bytesutil.ToBytes32(r) {
if bytesutil.ToBytes32(headBlock.Block().ParentRoot()) != bytesutil.ToBytes32(oldHeadroot) {
log.WithFields(logrus.Fields{
"newSlot": fmt.Sprintf("%d", newHeadSlot),
"oldSlot": fmt.Sprintf("%d", headSlot),
@@ -152,7 +166,7 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte, headBlock int
Slot: newHeadSlot,
Depth: absoluteSlotDifference,
OldHeadBlock: oldHeadRoot[:],
NewHeadBlock: headRoot[:],
NewHeadBlock: newHeadRoot[:],
OldHeadState: oldStateRoot,
NewHeadState: newStateRoot,
Epoch: slots.ToEpoch(newHeadSlot),
@@ -160,25 +174,24 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte, headBlock int
},
})
if err := s.saveOrphanedAtts(ctx, bytesutil.ToBytes32(r)); err != nil {
if err := s.saveOrphanedAtts(ctx, bytesutil.ToBytes32(oldHeadroot), newHeadRoot); err != nil {
return err
}
reorgCount.Inc()
}
// Cache the new head info.
s.setHead(headRoot, headBlock, headState)
s.setHead(newHeadRoot, headBlock, headState)
// Save the new head root to DB.
if err := s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, headRoot); err != nil {
if err := s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, newHeadRoot); err != nil {
return errors.Wrap(err, "could not save head root in DB")
}
// Forward an event capturing the new chain head over the common event feed.
// This is done in a goroutine to avoid blocking the critical runtime main routine.
go func() {
if err := s.notifyNewHeadEvent(ctx, newHeadSlot, headState, newStateRoot, headRoot[:]); err != nil {
if err := s.notifyNewHeadEvent(ctx, newHeadSlot, headState, newStateRoot, newHeadRoot[:]); err != nil {
log.WithError(err).Error("Could not notify event feed of new chain head")
}
}()
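The reorg branch of saveHead, condensed with the renamed variables (a sketch that assumes the surrounding method context):
// Sketch: a reorg is detected when the new head does not build on the old head.
if bytesutil.ToBytes32(headBlock.Block().ParentRoot()) != bytesutil.ToBytes32(oldHeadroot) {
    // Fire the reorg event (elided), then re-pool attestations from the
    // orphaned branch up to the common ancestor with the new head.
    if err := s.saveOrphanedAtts(ctx, bytesutil.ToBytes32(oldHeadroot), newHeadRoot); err != nil {
        return err
    }
    reorgCount.Inc()
}
// Cache the new head and persist the new head root.
s.setHead(newHeadRoot, headBlock, headState)
if err := s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, newHeadRoot); err != nil {
    return errors.Wrap(err, "could not save head root in DB")
}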
@@ -190,7 +203,7 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte, headBlock int
// root in DB. With the inception of initial-sync-cache-state flag, it uses finalized
// check point as anchors to resume sync therefore head is no longer needed to be saved on per slot basis.
func (s *Service) saveHeadNoDB(ctx context.Context, b interfaces.SignedBeaconBlock, r [32]byte, hs state.BeaconState) error {
if err := helpers.BeaconBlockIsNil(b); err != nil {
if err := wrapper.BeaconBlockIsNil(b); err != nil {
return err
}
cachedHeadRoot, err := s.HeadRoot(ctx)
@@ -351,35 +364,48 @@ func (s *Service) notifyNewHeadEvent(
return nil
}
// This saves the attestations inside the beacon block with respect to root `orphanedRoot` back into the
// attestation pool. It also filters out attestations that are one epoch older as a
// defense so invalid attestations don't flow into the attestation pool.
func (s *Service) saveOrphanedAtts(ctx context.Context, orphanedRoot [32]byte) error {
orphanedBlk, err := s.cfg.BeaconDB.Block(ctx, orphanedRoot)
if err != nil {
// This saves the attestations between `orphanedRoot` and the common ancestor root that is derived using `newHeadRoot`.
// It also filters out attestations that are one epoch older as a defense so invalid attestations don't flow into the attestation pool.
func (s *Service) saveOrphanedAtts(ctx context.Context, orphanedRoot [32]byte, newHeadRoot [32]byte) error {
commonAncestorRoot, err := s.ForkChoicer().CommonAncestorRoot(ctx, newHeadRoot, orphanedRoot)
switch {
// Exit early if there's no common ancestor as there would be nothing to save.
case errors.Is(err, forkchoice.ErrUnknownCommonAncestor):
return nil
case err != nil:
return err
}
if orphanedBlk == nil || orphanedBlk.IsNil() {
return errors.New("orphaned block can't be nil")
}
for _, a := range orphanedBlk.Block().Body().Attestations() {
// Skip attestations that are one epoch older.
if a.Data.Slot+params.BeaconConfig().SlotsPerEpoch < s.CurrentSlot() {
continue
for orphanedRoot != commonAncestorRoot {
if ctx.Err() != nil {
return ctx.Err()
}
if helpers.IsAggregated(a) {
if err := s.cfg.AttPool.SaveAggregatedAttestation(a); err != nil {
return err
}
} else {
if err := s.cfg.AttPool.SaveUnaggregatedAttestation(a); err != nil {
return err
}
}
saveOrphanedAttCount.Inc()
}
orphanedBlk, err := s.getBlock(ctx, orphanedRoot)
if err != nil {
return err
}
// If the block is an epoch older, break out of the loop since we can't include atts anyway.
// This prevents being stuck in this loop longer than necessary.
if orphanedBlk.Block().Slot()+params.BeaconConfig().SlotsPerEpoch <= s.CurrentSlot() {
break
}
for _, a := range orphanedBlk.Block().Body().Attestations() {
// If the attestation is one epoch older, it wouldn't be useful to save it.
if a.Data.Slot+params.BeaconConfig().SlotsPerEpoch < s.CurrentSlot() {
continue
}
if helpers.IsAggregated(a) {
if err := s.cfg.AttPool.SaveAggregatedAttestation(a); err != nil {
return err
}
} else {
if err := s.cfg.AttPool.SaveUnaggregatedAttestation(a); err != nil {
return err
}
}
saveOrphanedAttCount.Inc()
}
orphanedRoot = bytesutil.ToBytes32(orphanedBlk.Block().ParentRoot())
}
return nil
}
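The new saveOrphanedAtts walks parent pointers from the orphaned head back toward the common ancestor with the new head, stopping once blocks are more than an epoch old. A self-contained toy illustration of that walk; the block type, chain map, and attestation strings below are made up for the example and are not Prysm types:
package main

import "fmt"

// toyBlock stands in for a beacon block: only its parent, slot, and atts matter here.
type toyBlock struct {
    parent string
    slot   uint64
    atts   []string
}

// collectOrphanedAtts mirrors the loop in saveOrphanedAtts: walk from the orphaned
// root toward the common ancestor, pooling attestations that are still recent enough.
func collectOrphanedAtts(chain map[string]toyBlock, orphanedRoot, commonAncestor string, currentSlot, slotsPerEpoch uint64) []string {
    var pooled []string
    for orphanedRoot != commonAncestor {
        blk, ok := chain[orphanedRoot]
        if !ok {
            break // unknown block: nothing more to recover
        }
        // An epoch-old block's attestations can no longer be included; stop early.
        if blk.slot+slotsPerEpoch <= currentSlot {
            break
        }
        pooled = append(pooled, blk.atts...)
        orphanedRoot = blk.parent
    }
    return pooled
}

func main() {
    // 0 -- 1 -- 2 -- 3 is orphaned by a new head at 4; the common ancestor is 0.
    chain := map[string]toyBlock{
        "b3": {parent: "b2", slot: 3, atts: []string{"a3"}},
        "b2": {parent: "b1", slot: 2, atts: []string{"a2"}},
        "b1": {parent: "b0", slot: 1, atts: []string{"a1"}},
        "b0": {parent: "", slot: 0},
    }
    fmt.Println(collectOrphanedAtts(chain, "b3", "b0", 10, 32)) // [a3 a2 a1]
}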


@@ -3,6 +3,7 @@ package blockchain
import (
"bytes"
"context"
"sort"
"testing"
"time"
@@ -10,9 +11,11 @@ import (
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpbv1 "github.com/prysmaticlabs/prysm/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
@@ -49,6 +52,9 @@ func TestSaveHead_Different(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), oldBlock))
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, bytesutil.ToBytes32(oldBlock.Block().ParentRoot()), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
slot: 0,
root: oldRoot,
@@ -64,6 +70,9 @@ func TestSaveHead_Different(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wsb))
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, bytesutil.ToBytes32(wsb.Block().ParentRoot()), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
@@ -93,6 +102,9 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), oldBlock))
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, bytesutil.ToBytes32(oldBlock.Block().ParentRoot()), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
slot: 0,
root: oldRoot,
@@ -110,6 +122,9 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wsb))
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, bytesutil.ToBytes32(wsb.Block().ParentRoot()), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
@@ -153,14 +168,14 @@ func TestUpdateHead_MissingJustifiedRoot(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wsb))
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
state, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), state, r))
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'a'})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{}, [32]byte{'b'})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{})
headRoot, err := service.updateHead(context.Background(), []uint64{})
_, err = service.updateHead(context.Background(), []uint64{})
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, service.saveHead(context.Background(), headRoot, wsb, st))
}
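The tests in this file repeat one setup idiom: save a block, derive a state/root pair with prepareForkchoiceState, and insert the node into fork choice before exercising head logic. A sketch of that idiom, assuming a test block b of type *ethpb.SignedBeaconBlock and the helpers already imported in this file:
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
// prepareForkchoiceState packages the block coordinates into the state/root pair
// that InsertNode now expects in place of InsertOptimisticBlock.
st, blkRoot, err := prepareForkchoiceState(ctx, b.Block.Slot, root, bytesutil.ToBytes32(b.Block.ParentRoot), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))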
func Test_notifyNewHeadEvent(t *testing.T) {
@@ -229,57 +244,374 @@ func Test_notifyNewHeadEvent(t *testing.T) {
})
}
func TestSaveOrphanedAtts(t *testing.T) {
genesis, keys := util.DeterministicGenesisState(t, 64)
b, err := util.GenerateFullBlock(genesis, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
func TestSaveOrphanedAtts_NoCommonAncestor(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now()
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
// Chain setup
// 0 -- 1 -- 2 -- 3
// -4
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.NoError(t, service.saveOrphanedAtts(ctx, r))
require.Equal(t, len(b.Block.Body.Attestations), service.cfg.AttPool.AggregatedAttestationCount())
savedAtts := service.cfg.AttPool.AggregatedAttestations()
atts := b.Block.Body.Attestations
require.DeepSSZEqual(t, atts, savedAtts)
blk1, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
blk1.Block.ParentRoot = rG[:]
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
assert.NoError(t, err)
blk2.Block.ParentRoot = r1[:]
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
blk3, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 3)
assert.NoError(t, err)
blk3.Block.ParentRoot = r2[:]
r3, err := blk3.Block.HashTreeRoot()
require.NoError(t, err)
blk4 := util.NewBeaconBlock()
blk4.Block.Slot = 4
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
}
func TestSaveOrphanedAtts(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
// Chain setup
// 0 -- 1 -- 2 -- 3
// \-4
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
blk1, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
blk1.Block.ParentRoot = rG[:]
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
assert.NoError(t, err)
blk2.Block.ParentRoot = r1[:]
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
blk3, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 3)
assert.NoError(t, err)
blk3.Block.ParentRoot = r2[:]
r3, err := blk3.Block.HashTreeRoot()
require.NoError(t, err)
blk4 := util.NewBeaconBlock()
blk4.Block.Slot = 4
blk4.Block.ParentRoot = rG[:]
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
require.Equal(t, 3, service.cfg.AttPool.AggregatedAttestationCount())
wantAtts := []*ethpb.Attestation{
blk3.Block.Body.Attestations[0],
blk2.Block.Body.Attestations[0],
blk1.Block.Body.Attestations[0],
}
atts := service.cfg.AttPool.AggregatedAttestations()
sort.Slice(atts, func(i, j int) bool {
return atts[i].Data.Slot > atts[j].Data.Slot
})
require.DeepEqual(t, wantAtts, atts)
}
func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
genesis, keys := util.DeterministicGenesisState(t, 64)
b, err := util.GenerateFullBlock(genesis, keys, util.DefaultBlockGenConfig(), 1)
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch+2)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
// Chain setup
// 0 -- 1 -- 2
// \-4
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
r, err := b.Block.HashTreeRoot()
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
blk1, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
blk1.Block.ParentRoot = rG[:]
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
assert.NoError(t, err)
blk2.Block.ParentRoot = r1[:]
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
blk4 := util.NewBeaconBlock()
blk4.Block.Slot = 4
blk4.Block.ParentRoot = rG[:]
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
}
require.NoError(t, service.saveOrphanedAtts(ctx, r2, r4))
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
}
func TestSaveOrphanedAtts_NoCommonAncestor_DoublyLinkedTrie(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{
EnableForkChoiceDoublyLinkedTree: true,
})
defer resetCfg()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch+1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
// Chain setup
// 0 -- 1 -- 2 -- 3
// -4
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.NoError(t, service.saveOrphanedAtts(ctx, r))
blk1, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
blk1.Block.ParentRoot = rG[:]
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
assert.NoError(t, err)
blk2.Block.ParentRoot = r1[:]
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
blk3, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 3)
assert.NoError(t, err)
blk3.Block.ParentRoot = r2[:]
r3, err := blk3.Block.HashTreeRoot()
require.NoError(t, err)
blk4 := util.NewBeaconBlock()
blk4.Block.Slot = 4
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
}
func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{
EnableForkChoiceDoublyLinkedTree: true,
})
defer resetCfg()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
// Chain setup
// 0 -- 1 -- 2 -- 3
// \-4
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
blk1, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
blk1.Block.ParentRoot = rG[:]
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
assert.NoError(t, err)
blk2.Block.ParentRoot = r1[:]
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
blk3, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 3)
assert.NoError(t, err)
blk3.Block.ParentRoot = r2[:]
r3, err := blk3.Block.HashTreeRoot()
require.NoError(t, err)
blk4 := util.NewBeaconBlock()
blk4.Block.Slot = 4
blk4.Block.ParentRoot = rG[:]
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
require.Equal(t, 3, service.cfg.AttPool.AggregatedAttestationCount())
wantAtts := []*ethpb.Attestation{
blk3.Block.Body.Attestations[0],
blk2.Block.Body.Attestations[0],
blk1.Block.Body.Attestations[0],
}
atts := service.cfg.AttPool.AggregatedAttestations()
sort.Slice(atts, func(i, j int) bool {
return atts[i].Data.Slot > atts[j].Data.Slot
})
require.DeepEqual(t, wantAtts, atts)
}
func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{
EnableForkChoiceDoublyLinkedTree: true,
})
defer resetCfg()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch+2)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
// Chain setup
// 0 -- 1 -- 2
// \-4
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
blk1, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
blk1.Block.ParentRoot = rG[:]
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
assert.NoError(t, err)
blk2.Block.ParentRoot = r1[:]
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
blk4 := util.NewBeaconBlock()
blk4.Block.Slot = 4
blk4.Block.ParentRoot = rG[:]
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
}
require.NoError(t, service.saveOrphanedAtts(ctx, r2, r4))
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
savedAtts := service.cfg.AttPool.AggregatedAttestations()
atts := b.Block.Body.Attestations
require.DeepNotSSZEqual(t, atts, savedAtts)
}
func TestUpdateHead_noSavedChanges(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -296,10 +628,10 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
require.NoError(t, beaconDB.SaveBlock(ctx, bellatrixBlk))
fcp := &ethpb.Checkpoint{
Root: bellatrixBlkRoot[:],
Epoch: 1,
Epoch: 0,
}
service.store.SetFinalizedCheckpt(fcp)
service.store.SetJustifiedCheckpt(fcp)
service.store.SetFinalizedCheckptAndPayloadHash(fcp, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(fcp, [32]byte{'b'})
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bellatrixBlkRoot))
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
@@ -309,6 +641,9 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
headRoot := service.headRoot()
require.Equal(t, [32]byte{}, headRoot)
st, blkRoot, err := prepareForkchoiceState(ctx, 0, bellatrixBlkRoot, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, blkRoot))
newRoot, err := service.updateHead(ctx, []uint64{1, 2})
require.NoError(t, err)
require.NotEqual(t, headRoot, newRoot)


@@ -4,12 +4,10 @@ import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
)
var errBlockNotFoundInCacheOrDB = errors.New("block not found in cache or db")
// This saves a beacon block to the initial sync blocks cache.
func (s *Service) saveInitSyncBlock(r [32]byte, b interfaces.SignedBeaconBlock) {
s.initSyncBlocksLock.Lock()
@@ -49,7 +47,7 @@ func (s *Service) getBlock(ctx context.Context, r [32]byte) (interfaces.SignedBe
return nil, errors.Wrap(err, "could not retrieve block from db")
}
}
if err := helpers.BeaconBlockIsNil(b); err != nil {
if err := wrapper.BeaconBlockIsNil(b); err != nil {
return nil, errBlockNotFoundInCacheOrDB
}
return b, nil


@@ -6,7 +6,7 @@ import (
func init() {
// Override network name so that hardcoded genesis files are not loaded.
cfg := params.BeaconConfig()
cfg.ConfigName = "test"
params.OverrideBeaconConfig(cfg)
if err := params.SetActive(params.MainnetTestConfig()); err != nil {
panic(err)
}
}


@@ -54,25 +54,29 @@ func logStateTransitionData(b interfaces.BeaconBlock) error {
return nil
}
func logBlockSyncStatus(block interfaces.BeaconBlock, blockRoot [32]byte, finalized *ethpb.Checkpoint, receivedTime time.Time, genesisTime uint64) error {
func logBlockSyncStatus(block interfaces.BeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesisTime uint64) error {
startTime, err := slots.ToTime(genesisTime, block.Slot())
if err != nil {
return err
}
level := log.Logger.GetLevel()
log = log.WithField("slot", block.Slot())
if level >= logrus.DebugLevel {
log = log.WithField("slotInEpoch", block.Slot()%params.BeaconConfig().SlotsPerEpoch)
log = log.WithField("justifiedEpoch", justified.Epoch)
log = log.WithField("justifiedRoot", fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]))
log = log.WithField("parentRoot", fmt.Sprintf("0x%s...", hex.EncodeToString(block.ParentRoot())[:8]))
log = log.WithField("version", version.String(block.Version()))
log = log.WithField("sinceSlotStartTime", prysmTime.Now().Sub(startTime))
log = log.WithField("chainServiceProcessedTime", prysmTime.Now().Sub(receivedTime))
}
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(block.ParentRoot())[:8]),
"version": version.String(block.Version()),
}).Info("Synced new block")
log.WithFields(logrus.Fields{
"slot": block.Slot,
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime),
}).Debug("Sync new block times")
return nil
}


@@ -158,6 +158,10 @@ var (
Name: "forkchoice_updated_optimistic_node_count",
Help: "Count the number of optimistic nodes after forkchoiceUpdated EE call",
})
missedPayloadIDFilledCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "missed_payload_id_filled_count",
Help: "",
})
)
// reportSlotMetrics reports slot related metrics.


@@ -13,7 +13,7 @@ import (
func testServiceOptsWithDB(t *testing.T) []Option {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
return []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -21,7 +21,7 @@ func testServiceOptsWithDB(t *testing.T) []Option {
}
}
// warning: only use these opts when you are certain there are no db calls
// WARNING: only use these opts when you are certain there are no db calls
// in your code path. this is a lightweight way to satisfy the stategen/beacondb
// initialization requirements w/o the overhead of db init.
func testServiceOptsNoDB() []Option {


@@ -5,7 +5,9 @@ import (
"context"
"github.com/pkg/errors"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/time/slots"
)
@@ -30,7 +32,6 @@ import (
// if ancestor_at_finalized_slot == store.finalized_checkpoint.root:
// store.justified_checkpoint = store.best_justified_checkpoint
func (s *Service) NewSlot(ctx context.Context, slot types.Slot) error {
// Reset proposer boost root in fork choice.
if err := s.cfg.ForkChoiceStore.ResetBoostedProposerRoot(ctx); err != nil {
return errors.Wrap(err, "could not reset boosted proposer root in fork choice")
@@ -42,17 +43,17 @@ func (s *Service) NewSlot(ctx context.Context, slot types.Slot) error {
}
// Update store.justified_checkpoint if a better checkpoint on the store.finalized_checkpoint chain
bj := s.store.BestJustifiedCheckpt()
if bj == nil {
return errNilBestJustifiedInStore
bj, err := s.store.BestJustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get best justified checkpoint")
}
j := s.store.JustifiedCheckpt()
if j == nil {
return errNilJustifiedInStore
j, err := s.store.JustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
f := s.store.FinalizedCheckpt()
if f == nil {
return errNilFinalizedInStore
f, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
if bj.Epoch > j.Epoch {
finalizedSlot, err := slots.EpochStart(f.Epoch)
@@ -64,7 +65,15 @@ func (s *Service) NewSlot(ctx context.Context, slot types.Slot) error {
return err
}
if bytes.Equal(r, f.Root) {
s.store.SetJustifiedCheckpt(bj)
h, err := s.getPayloadHash(ctx, bj.Root)
if err != nil {
return err
}
s.store.SetJustifiedCheckptAndPayloadHash(bj, h)
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{
Epoch: bj.Epoch, Root: bytesutil.ToBytes32(bj.Root)}); err != nil {
return err
}
}
}
return nil
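A condensed sketch of the updated NewSlot promotion, with the store getters now returning errors and fork choice kept in sync (assumes the surrounding Service method context; the ancestor-at-finalized-slot check is elided):
// Sketch: the store checkpoint getters now return errors instead of possibly-nil checkpoints.
bj, err := s.store.BestJustifiedCheckpt()
if err != nil {
    return errors.Wrap(err, "could not get best justified checkpoint")
}
j, err := s.store.JustifiedCheckpt()
if err != nil {
    return errors.Wrap(err, "could not get justified checkpoint")
}
// (The finalized checkpoint is fetched the same way for the ancestor check, elided here.)
if bj.Epoch > j.Epoch {
    // When bj descends from the finalized checkpoint, promote it and keep
    // forkchoice's justified checkpoint in sync along with the payload hash.
    h, err := s.getPayloadHash(ctx, bj.Root)
    if err != nil {
        return err
    }
    s.store.SetJustifiedCheckptAndPayloadHash(bj, h)
    if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{
        Epoch: bj.Epoch, Root: bytesutil.ToBytes32(bj.Root)}); err != nil {
        return err
    }
}
return nil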


@@ -5,19 +5,22 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestService_newSlot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -25,11 +28,29 @@ func TestService_newSlot(t *testing.T) {
}
ctx := context.Background()
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, 0, 0)) // genesis
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 32, [32]byte{'a'}, [32]byte{}, [32]byte{}, 0, 0)) // finalized
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 64, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 0, 0)) // justified
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 96, [32]byte{'c'}, [32]byte{'a'}, [32]byte{}, 0, 0)) // best justified
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 97, [32]byte{'d'}, [32]byte{}, [32]byte{}, 0, 0)) // bad
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
wsb, err := wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
assert.NoError(t, beaconDB.SaveBlock(ctx, wsb))
bj, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // genesis
state, blkRoot, err = prepareForkchoiceState(ctx, 32, [32]byte{'a'}, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // finalized
state, blkRoot, err = prepareForkchoiceState(ctx, 64, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // justified
state, blkRoot, err = prepareForkchoiceState(ctx, 96, bj, [32]byte{'a'}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // best justified
state, blkRoot, err = prepareForkchoiceState(ctx, 97, [32]byte{'d'}, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // bad
type args struct {
slot types.Slot
@@ -48,7 +69,7 @@ func TestService_newSlot(t *testing.T) {
slot: params.BeaconConfig().SlotsPerEpoch + 1,
finalized: &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'a'}, 32)},
justified: &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'b'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 3, Root: bytesutil.PadTo([]byte{'c'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 3, Root: bj[:]},
shouldEqual: false,
},
},
@@ -58,7 +79,7 @@ func TestService_newSlot(t *testing.T) {
slot: params.BeaconConfig().SlotsPerEpoch,
finalized: &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'a'}, 32)},
justified: &ethpb.Checkpoint{Epoch: 3, Root: bytesutil.PadTo([]byte{'b'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'c'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 2, Root: bj[:]},
shouldEqual: false,
},
},
@@ -78,7 +99,7 @@ func TestService_newSlot(t *testing.T) {
slot: params.BeaconConfig().SlotsPerEpoch,
finalized: &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'a'}, 32)},
justified: &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'b'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 3, Root: bytesutil.PadTo([]byte{'c'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 3, Root: bj[:]},
shouldEqual: true,
},
},
@@ -92,9 +113,17 @@ func TestService_newSlot(t *testing.T) {
require.NoError(t, service.NewSlot(ctx, test.args.slot))
if test.args.shouldEqual {
require.DeepSSZEqual(t, service.store.BestJustifiedCheckpt(), service.store.JustifiedCheckpt())
bcp, err := service.store.BestJustifiedCheckpt()
require.NoError(t, err)
cp, err := service.store.JustifiedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, bcp, cp)
} else {
require.DeepNotSSZEqual(t, service.store.BestJustifiedCheckpt(), service.store.JustifiedCheckpt())
bcp, err := service.store.BestJustifiedCheckpt()
require.NoError(t, err)
cp, err := service.store.JustifiedCheckpt()
require.NoError(t, err)
require.DeepNotSSZEqual(t, bcp, cp)
}
}
}


@@ -10,10 +10,10 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
"github.com/prysmaticlabs/prysm/time/slots"
@@ -38,7 +38,7 @@ import (
// # Check if `pow_block` is a valid terminal PoW block
// assert is_valid_terminal_pow_block(pow_block, pow_parent)
func (s *Service) validateMergeBlock(ctx context.Context, b interfaces.SignedBeaconBlock) error {
if err := helpers.BeaconBlockIsNil(b); err != nil {
if err := wrapper.BeaconBlockIsNil(b); err != nil {
return err
}
payload, err := b.Block().Body().ExecutionPayload()
@@ -64,8 +64,9 @@ func (s *Service) validateMergeBlock(ctx context.Context, b interfaces.SignedBea
return err
}
if !valid {
return fmt.Errorf("invalid TTD, configTTD: %s, currentTTD: %s, parentTTD: %s",
err := fmt.Errorf("invalid TTD, configTTD: %s, currentTTD: %s, parentTTD: %s",
params.BeaconConfig().TerminalTotalDifficulty, mergeBlockTD, mergeBlockParentTD)
return invalidBlock{err}
}
log.WithFields(logrus.Fields{

View File

@@ -109,7 +109,7 @@ func Test_validateMergeBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -144,13 +144,15 @@ func Test_validateMergeBlock(t *testing.T) {
cfg.TerminalTotalDifficulty = "1"
params.OverrideBeaconConfig(cfg)
require.ErrorContains(t, "invalid TTD, configTTD: 1, currentTTD: 2, parentTTD: 1", service.validateMergeBlock(ctx, b))
err = service.validateMergeBlock(ctx, b)
require.ErrorContains(t, "invalid TTD, configTTD: 1, currentTTD: 2, parentTTD: 1", err)
require.Equal(t, true, IsInvalidBlock(err))
}
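Consensus failures are now wrapped in invalidBlock so callers can distinguish them with IsInvalidBlock, as the test above checks. A self-contained sketch of that wrapping pattern; the type and helper below re-implement the idea locally and are not Prysm's actual definitions:
package main

import (
    "errors"
    "fmt"
)

// invalidBlock marks an error as a consensus-level block rejection.
type invalidBlock struct{ err error }

func (e invalidBlock) Error() string { return e.err.Error() }
func (e invalidBlock) Unwrap() error { return e.err }

// IsInvalidBlock reports whether err (or anything it wraps) is an invalidBlock.
func IsInvalidBlock(err error) bool {
    var ib invalidBlock
    return errors.As(err, &ib)
}

func main() {
    err := invalidBlock{fmt.Errorf("invalid TTD, configTTD: %s, currentTTD: %s, parentTTD: %s", "1", "2", "1")}
    fmt.Println(IsInvalidBlock(err))                      // true: consensus rejection
    fmt.Println(IsInvalidBlock(errors.New("db timeout"))) // false: infrastructure error
}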
func Test_getBlkParentHashAndTD(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),


@@ -7,11 +7,11 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/async"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time/slots"
@@ -80,7 +80,7 @@ func (s *Service) verifyBeaconBlock(ctx context.Context, data *ethpb.Attestation
if err != nil {
return err
}
if err := helpers.BeaconBlockIsNil(b); err != nil {
if err := wrapper.BeaconBlockIsNil(b); err != nil {
return err
}
if b.Block().Slot() > data.Slot {


@@ -9,6 +9,7 @@ import (
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
@@ -28,7 +29,7 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, [32]byte{})),
WithForkChoiceStore(protoarray.New()),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
@@ -140,7 +141,7 @@ func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(doublylinkedtree.New(0, 0)),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
@@ -250,7 +251,7 @@ func TestStore_OnAttestation_Ok_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -268,7 +269,9 @@ func TestStore_OnAttestation_Ok_ProtoArray(t *testing.T) {
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.OnAttestation(ctx, att[0]))
}
@@ -276,7 +279,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -294,7 +297,9 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.OnAttestation(ctx, att[0]))
}
@@ -324,9 +329,9 @@ func TestStore_SaveCheckpointState(t *testing.T) {
r := [32]byte{'g'}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, r))
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'a'})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'b'})
service.store.SetPrevFinalizedCheckpt(&ethpb.Checkpoint{Root: r[:]})
r = bytesutil.ToBytes32([]byte{'A'})
@@ -358,9 +363,9 @@ func TestStore_SaveCheckpointState(t *testing.T) {
assert.Equal(t, 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot(), "Unexpected state slot")
require.NoError(t, s.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'a'})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'b'})
service.store.SetPrevFinalizedCheckpt(&ethpb.Checkpoint{Root: r[:]})
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'C'})))
@@ -483,7 +488,7 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -500,7 +505,7 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_ProtoArray(t *testing.T) {
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 1})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 1}, [32]byte{})
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
@@ -518,7 +523,7 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -535,7 +540,7 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 1})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 1}, [32]byte{})
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
@@ -564,7 +569,7 @@ func TestVerifyFinalizedConsistency_OK(t *testing.T) {
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: r32[:], Epoch: 1})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r32[:], Epoch: 1}, [32]byte{})
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
@@ -591,7 +596,7 @@ func TestVerifyFinalizedConsistency_IsCanonical(t *testing.T) {
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: r32[:], Epoch: 1})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r32[:], Epoch: 1}, [32]byte{})
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
@@ -599,10 +604,16 @@ func TestVerifyFinalizedConsistency_IsCanonical(t *testing.T) {
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, b32.Block.Slot, r32, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, b33.Block.Slot, r33, r32, params.BeaconConfig().ZeroHash, 0, 0))
state, blkRoot, err := prepareForkchoiceState(ctx, b32.Block.Slot, r32, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, b33.Block.Slot, r33, r32, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
_, err = service.cfg.ForkChoiceStore.Head(ctx, 0, r32, []uint64{}, 0)
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r32}
require.NoError(t, service.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(jc))
_, err = service.cfg.ForkChoiceStore.Head(ctx, []uint64{})
require.NoError(t, err)
err = service.VerifyFinalizedConsistency(context.Background(), r33[:])
require.NoError(t, err)


@@ -6,6 +6,7 @@ import (
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
@@ -16,8 +17,10 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/forks/bellatrix"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/crypto/bls"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
@@ -92,8 +95,8 @@ var initialSyncBlockCacheSize = uint64(2 * params.BeaconConfig().SlotsPerEpoch)
func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlock, blockRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlock")
defer span.End()
if err := helpers.BeaconBlockIsNil(signed); err != nil {
return err
if err := wrapper.BeaconBlockIsNil(signed); err != nil {
return invalidBlock{err}
}
b := signed.Block()
@@ -108,7 +111,7 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
}
postState, err := transition.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
return err
return invalidBlock{err}
}
postStateVersion, postStateHeader, err := getStateVersionAndPayload(postState)
if err != nil {
@@ -116,18 +119,20 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
}
isValidPayload, err := s.notifyNewPayload(ctx, postStateVersion, postStateHeader, signed)
if err != nil {
return errors.Wrap(err, "could not verify new payload")
return fmt.Errorf("could not verify new payload: %v", err)
}
if isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, preStateVersion, preStateHeader, signed); err != nil {
return err
}
}
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState); err != nil {
return err
}
if err := s.insertBlockAndAttestationsToForkChoiceStore(ctx, signed.Block(), blockRoot, postState); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", signed.Block().Slot())
}
s.insertSlashingsToForkChoiceStore(ctx, signed.Block())
s.InsertSlashingsToForkChoiceStore(ctx, signed.Block().Body().AttesterSlashings())
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, blockRoot); err != nil {
return errors.Wrap(err, "could not set optimistic block to valid")
@@ -146,9 +151,6 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
return err
}
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState); err != nil {
return err
}
// If slasher is configured, forward the attestations in the block via
// an event feed for processing.
if features.Get().EnableSlasher {
@@ -177,27 +179,57 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
}
// Update justified check point.
justified := s.store.JustifiedCheckpt()
if justified == nil {
return errNilJustifiedInStore
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
currJustifiedEpoch := justified.Epoch
if postState.CurrentJustifiedCheckpoint().Epoch > currJustifiedEpoch {
psj := postState.CurrentJustifiedCheckpoint()
if psj == nil {
return errNilJustifiedCheckpoint
}
if psj.Epoch > currJustifiedEpoch {
if err := s.updateJustified(ctx, postState); err != nil {
return err
}
}
finalized := s.store.FinalizedCheckpt()
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
if finalized == nil {
return errNilFinalizedInStore
}
newFinalized := postState.FinalizedCheckpointEpoch() > finalized.Epoch
psf := postState.FinalizedCheckpoint()
if psf == nil {
return errNilFinalizedCheckpoint
}
newFinalized := psf.Epoch > finalized.Epoch
if newFinalized {
s.store.SetPrevFinalizedCheckpt(finalized)
s.store.SetFinalizedCheckpt(postState.FinalizedCheckpoint())
h, err := s.getPayloadHash(ctx, psf.Root)
if err != nil {
return err
}
s.store.SetFinalizedCheckptAndPayloadHash(psf, h)
s.store.SetPrevJustifiedCheckpt(justified)
s.store.SetJustifiedCheckpt(postState.CurrentJustifiedCheckpoint())
h, err = s.getPayloadHash(ctx, psj.Root)
if err != nil {
return err
}
s.store.SetJustifiedCheckptAndPayloadHash(postState.CurrentJustifiedCheckpoint(), h)
// Update Forkchoice checkpoints
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{
Epoch: psj.Epoch, Root: bytesutil.ToBytes32(psj.Root)}); err != nil {
return err
}
if err := s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{
Epoch: psf.Epoch, Root: bytesutil.ToBytes32(psf.Root)}); err != nil {
return err
}
}
balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(justified.Root))
@@ -252,7 +284,7 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
}
fRoot := bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root)
if err := s.cfg.ForkChoiceStore.Prune(ctx, fRoot); err != nil {
return errors.Wrap(err, "could not prune proto array fork choice nodes")
return errors.Wrap(err, "could not prune fork choice nodes")
}
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(fRoot)
if err != nil {
@@ -306,33 +338,38 @@ func getStateVersionAndPayload(st state.BeaconState) (int, *ethpb.ExecutionPaylo
}
func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeaconBlock,
blockRoots [][32]byte) ([]*ethpb.Checkpoint, []*ethpb.Checkpoint, error) {
blockRoots [][32]byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
if len(blks) == 0 || len(blockRoots) == 0 {
return nil, nil, errors.New("no blocks provided")
return errors.New("no blocks provided")
}
if len(blks) != len(blockRoots) {
return nil, nil, errWrongBlockCount
return errWrongBlockCount
}
if err := helpers.BeaconBlockIsNil(blks[0]); err != nil {
return nil, nil, err
if err := wrapper.BeaconBlockIsNil(blks[0]); err != nil {
return invalidBlock{err}
}
b := blks[0].Block()
// Retrieve incoming block's pre state.
if err := s.verifyBlkPreState(ctx, b); err != nil {
return nil, nil, err
return err
}
preState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, bytesutil.ToBytes32(b.ParentRoot()))
if err != nil {
return nil, nil, err
return err
}
if preState == nil || preState.IsNil() {
return nil, nil, fmt.Errorf("nil pre state for slot %d", b.Slot())
return fmt.Errorf("nil pre state for slot %d", b.Slot())
}
// Fill in missing blocks
if err := s.fillInForkChoiceMissingBlocks(ctx, blks[0].Block(), preState.CurrentJustifiedCheckpoint(), preState.FinalizedCheckpoint()); err != nil {
return errors.Wrap(err, "could not fill in missing blocks to forkchoice")
}
jCheckpoints := make([]*ethpb.Checkpoint, len(blks))
@@ -353,7 +390,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
for i, b := range blks {
v, h, err := getStateVersionAndPayload(preState)
if err != nil {
return nil, nil, err
return err
}
preVersionAndHeaders[i] = &versionAndHeader{
version: v,
@@ -362,7 +399,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
set, preState, err = transition.ExecuteStateTransitionNoVerifyAnySig(ctx, preState, b)
if err != nil {
return nil, nil, err
return invalidBlock{err}
}
// Save potential boundary states.
if slots.IsEpochStart(preState.Slot()) {
@@ -373,7 +410,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
v, h, err = getStateVersionAndPayload(preState)
if err != nil {
return nil, nil, err
return err
}
postVersionAndHeaders[i] = &versionAndHeader{
version: v,
@@ -383,65 +420,81 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
}
verify, err := sigSet.Verify()
if err != nil {
return nil, nil, err
return invalidBlock{err}
}
if !verify {
return nil, nil, errors.New("batch block signature verification failed")
return errors.New("batch block signature verification failed")
}
// blocks have been verified, add them to forkchoice and call the engine
// blocks have been verified, save them and call the engine
pendingNodes := make([]*forkchoicetypes.BlockAndCheckpoints, len(blks))
var isValidPayload bool
for i, b := range blks {
isValidPayload, err := s.notifyNewPayload(ctx,
isValidPayload, err = s.notifyNewPayload(ctx,
postVersionAndHeaders[i].version,
postVersionAndHeaders[i].header, b)
if err != nil {
return nil, nil, err
return err
}
if isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, preVersionAndHeaders[i].version,
preVersionAndHeaders[i].header, b); err != nil {
return nil, nil, err
}
}
if err := s.insertBlockToForkChoiceStore(ctx, b.Block(), blockRoots[i], fCheckpoints[i], jCheckpoints[i]); err != nil {
return nil, nil, err
}
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, blockRoots[i]); err != nil {
return nil, nil, errors.Wrap(err, "could not set optimistic block to valid")
return err
}
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b.Block(),
JustifiedCheckpoint: jCheckpoints[i],
FinalizedCheckpoint: fCheckpoints[i]}
pendingNodes[len(blks)-i-1] = args
s.saveInitSyncBlock(blockRoots[i], b)
if err = s.handleBlockAfterBatchVerify(ctx, b, blockRoots[i], fCheckpoints[i], jCheckpoints[i]); err != nil {
tracing.AnnotateError(span, err)
return err
}
}
// Insert all nodes but the last one to forkchoice
if err := s.cfg.ForkChoiceStore.InsertOptimisticChain(ctx, pendingNodes); err != nil {
return errors.Wrap(err, "could not insert batch to forkchoice")
}
// Insert the last block to forkchoice
lastBR := blockRoots[len(blks)-1]
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, preState, lastBR); err != nil {
return errors.Wrap(err, "could not insert last block in batch to forkchoice")
}
// Prune forkchoice store only if the new finalized checkpoint is higher
// than the finalized checkpoint in forkchoice store.
if fCheckpoints[len(blks)-1].Epoch > s.cfg.ForkChoiceStore.FinalizedCheckpoint().Epoch {
if err := s.cfg.ForkChoiceStore.Prune(ctx, s.ensureRootNotZeros(bytesutil.ToBytes32(fCheckpoints[len(blks)-1].Root))); err != nil {
return errors.Wrap(err, "could not prune fork choice nodes")
}
}
// Set their optimistic status
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, lastBR); err != nil {
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
for r, st := range boundaries {
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
return nil, nil, err
return err
}
}
// Also save the last post state, which is to be used as the pre state for the next batch.
lastB := blks[len(blks)-1]
lastBR := blockRoots[len(blockRoots)-1]
if err := s.cfg.StateGen.SaveState(ctx, lastBR, preState); err != nil {
return nil, nil, err
return err
}
f := fCheckpoints[len(fCheckpoints)-1]
j := jCheckpoints[len(jCheckpoints)-1]
arg := &notifyForkchoiceUpdateArg{
headState: preState,
headRoot: lastBR,
headBlock: lastB.Block(),
finalizedRoot: bytesutil.ToBytes32(f.Root),
justifiedRoot: bytesutil.ToBytes32(j.Root),
headState: preState,
headRoot: lastBR,
headBlock: lastB.Block(),
}
if _, err := s.notifyForkchoiceUpdate(ctx, arg); err != nil {
return nil, nil, err
return err
}
if err := s.saveHeadNoDB(ctx, lastB, lastBR, preState); err != nil {
return nil, nil, err
}
return fCheckpoints, jCheckpoints, nil
return s.saveHeadNoDB(ctx, lastB, lastBR, preState)
}
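
The batch path above fills pendingNodes with pendingNodes[len(blks)-i-1] = args, so the newest block lands at index 0 and its ancestors follow in descending slot order, which is the ordering InsertOptimisticChain appears to expect (fillInForkChoiceMissingBlocks below builds its slice the same way, child first). A minimal, self-contained sketch of that reversal; pendingNode here is a hypothetical stand-in for forkchoicetypes.BlockAndCheckpoints:

package main

import "fmt"

// pendingNode is a simplified stand-in for forkchoicetypes.BlockAndCheckpoints,
// keeping only the slot for illustration.
type pendingNode struct {
	slot uint64
}

// newestFirst arranges a batch received in ascending slot order so that the most
// recent block ends up at index 0 and the oldest at the end, mirroring
// pendingNodes[len(blks)-i-1] = args in the batch loop.
func newestFirst(slots []uint64) []*pendingNode {
	nodes := make([]*pendingNode, len(slots))
	for i, s := range slots {
		nodes[len(slots)-i-1] = &pendingNode{slot: s}
	}
	return nodes
}

func main() {
	batch := []uint64{33, 34, 35, 36} // ascending, as onBlockBatch sees it
	for _, n := range newestFirst(batch) {
		fmt.Println(n.slot) // prints 36, 35, 34, 33
	}
}
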
// handles a block after the block's batch has been verified, where we can save blocks
@@ -464,9 +517,9 @@ func (s *Service) handleBlockAfterBatchVerify(ctx context.Context, signed interf
s.clearInitSyncBlocks()
}
justified := s.store.JustifiedCheckpt()
if justified == nil {
return errNilJustifiedInStore
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
if jCheckpoint.Epoch > justified.Epoch {
if err := s.updateJustifiedInitSync(ctx, jCheckpoint); err != nil {
@@ -474,7 +527,10 @@ func (s *Service) handleBlockAfterBatchVerify(ctx context.Context, signed interf
}
}
finalized := s.store.FinalizedCheckpt()
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
if finalized == nil {
return errNilFinalizedInStore
}
@@ -484,7 +540,15 @@ func (s *Service) handleBlockAfterBatchVerify(ctx context.Context, signed interf
return err
}
s.store.SetPrevFinalizedCheckpt(finalized)
s.store.SetFinalizedCheckpt(fCheckpoint)
h, err := s.getPayloadHash(ctx, fCheckpoint.Root)
if err != nil {
return err
}
s.store.SetFinalizedCheckptAndPayloadHash(fCheckpoint, h)
if err := s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{
Epoch: fCheckpoint.Epoch, Root: bytesutil.ToBytes32(fCheckpoint.Root)}); err != nil {
return err
}
}
return nil
}
@@ -495,22 +559,26 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
defer span.End()
if postState.Slot()+1 == s.nextEpochBoundarySlot {
// Update caches for the next epoch at epoch boundary slot - 1.
if err := helpers.UpdateCommitteeCache(postState, coreTime.NextEpoch(postState)); err != nil {
return err
}
copied := postState.Copy()
copied, err := transition.ProcessSlots(ctx, copied, copied.Slot()+1)
if err != nil {
return err
}
// Update caches for the next epoch at epoch boundary slot - 1.
if err := helpers.UpdateCommitteeCache(ctx, copied, coreTime.CurrentEpoch(copied)); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, copied); err != nil {
return err
}
} else if postState.Slot() >= s.nextEpochBoundarySlot {
if err := reportEpochMetrics(ctx, postState, s.head.state); err != nil {
s.headLock.RLock()
st := s.head.state
s.headLock.RUnlock()
if err := reportEpochMetrics(ctx, postState, st); err != nil {
return err
}
var err error
s.nextEpochBoundarySlot, err = slots.EpochStart(coreTime.NextEpoch(postState))
if err != nil {
@@ -519,7 +587,7 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
// Update caches at epoch boundary slot.
// The following updates have short cut to return nil cheaply if fulfilled during boundary slot - 1.
if err := helpers.UpdateCommitteeCache(postState, coreTime.CurrentEpoch(postState)); err != nil {
if err := helpers.UpdateCommitteeCache(ctx, postState, coreTime.CurrentEpoch(postState)); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, postState); err != nil {
@@ -532,14 +600,13 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
// This feeds in the block and block's attestations to fork choice store. It allows the fork choice store
// to gain information on the most current chain.
func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Context, blk interfaces.BeaconBlock, root [32]byte,
st state.BeaconState) error {
func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Context, blk interfaces.BeaconBlock, root [32]byte, st state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.insertBlockAndAttestationsToForkChoiceStore")
defer span.End()
fCheckpoint := st.FinalizedCheckpoint()
jCheckpoint := st.CurrentJustifiedCheckpoint()
if err := s.insertBlockToForkChoiceStore(ctx, blk, root, fCheckpoint, jCheckpoint); err != nil {
if err := s.insertBlockToForkChoiceStore(ctx, blk, root, st, fCheckpoint, jCheckpoint); err != nil {
return err
}
// Feed in block's attestations to fork choice store.
@@ -557,27 +624,16 @@ func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Contex
return nil
}
func (s *Service) insertBlockToForkChoiceStore(ctx context.Context, blk interfaces.BeaconBlock,
root [32]byte, fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
func (s *Service) insertBlockToForkChoiceStore(ctx context.Context, blk interfaces.BeaconBlock, root [32]byte, st state.BeaconState, fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
if err := s.fillInForkChoiceMissingBlocks(ctx, blk, fCheckpoint, jCheckpoint); err != nil {
return err
}
// Feed in block to fork choice store.
payloadHash, err := getBlockPayloadHash(blk)
if err != nil {
return err
}
return s.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx,
blk.Slot(), root, bytesutil.ToBytes32(blk.ParentRoot()), payloadHash,
jCheckpoint.Epoch,
fCheckpoint.Epoch)
return s.cfg.ForkChoiceStore.InsertNode(ctx, st, root)
}
// Inserts attester slashing indices to fork choice store.
// To call this function, it's caller's responsibility to ensure the slashing object is valid.
func (s *Service) insertSlashingsToForkChoiceStore(ctx context.Context, blk interfaces.BeaconBlock) {
slashings := blk.Body().AttesterSlashings()
func (s *Service) InsertSlashingsToForkChoiceStore(ctx context.Context, slashings []*ethpb.AttesterSlashing) {
for _, slashing := range slashings {
indices := blocks.SlashableAttesterIndices(slashing)
for _, index := range indices {
@@ -586,18 +642,6 @@ func (s *Service) insertSlashingsToForkChoiceStore(ctx context.Context, blk inte
}
}
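
For reference, blocks.SlashableAttesterIndices(slashing) used above yields the validators present in both attestations of the slashing (the spec's intersection rule). A self-contained sketch of that intersection on plain index slices; the function name and signature here are illustrative, not Prysm's:

package main

import (
	"fmt"
	"sort"
)

// slashableIndices returns the sorted intersection of the attesting indices of
// the two attestations inside an attester slashing, i.e. the validators that
// fork choice should mark as slashed. The real helper operates on
// ethpb.AttesterSlashing rather than raw slices.
func slashableIndices(att1, att2 []uint64) []uint64 {
	seen := make(map[uint64]bool, len(att1))
	for _, idx := range att1 {
		seen[idx] = true
	}
	var out []uint64
	for _, idx := range att2 {
		if seen[idx] {
			out = append(out, idx)
			seen[idx] = false // avoid duplicates
		}
	}
	sort.Slice(out, func(i, j int) bool { return out[i] < out[j] })
	return out
}

func main() {
	fmt.Println(slashableIndices([]uint64{1, 3, 5, 7}, []uint64{3, 4, 5})) // [3 5]
}
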
func getBlockPayloadHash(blk interfaces.BeaconBlock) ([32]byte, error) {
payloadHash := [32]byte{}
if blocks.IsPreBellatrixVersion(blk.Version()) {
return payloadHash, nil
}
payload, err := blk.Body().ExecutionPayload()
if err != nil {
return payloadHash, err
}
return bytesutil.ToBytes32(payload.BlockHash), nil
}
// This saves post state info to DB or cache. This also saves post state info to fork choice store.
// Post state info consists of processed block and state. Do not call this method unless the block and state are verified.
func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b interfaces.SignedBeaconBlock, st state.BeaconState) error {
@@ -652,9 +696,9 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
// Skip validation if block has an empty payload.
payload, err := blk.Block().Body().ExecutionPayload()
if err != nil {
return err
return invalidBlock{err}
}
if blocks.IsEmptyPayload(payload) {
if bellatrix.IsEmptyPayload(payload) {
return nil
}
@@ -674,3 +718,53 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
}
return s.validateMergeBlock(ctx, blk)
}
// This routine checks if there is a cached proposer payload ID available for the next slot proposer.
// If there is not, it will call forkchoice updated with the correct payload attribute then cache the payload ID.
func (s *Service) fillMissingPayloadIDRoutine(ctx context.Context, stateFeed *event.Feed) {
// Wait for state to be initialized.
stateChannel := make(chan *feed.Event, 1)
stateSub := stateFeed.Subscribe(stateChannel)
go func() {
select {
case <-s.ctx.Done():
stateSub.Unsubscribe()
return
case <-stateChannel:
stateSub.Unsubscribe()
break
}
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
for {
select {
case ti := <-ticker.C:
if !atHalfSlot(ti) {
continue
}
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot() + 1)
// There exists proposer for next slot, but we haven't called fcu w/ payload attribute yet.
if has && id == [8]byte{} {
if _, err := s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: s.headState(ctx),
headRoot: s.headRoot(),
headBlock: s.headBlock().Block(),
}); err != nil {
log.WithError(err).Error("Could not prepare payload on empty ID")
}
missedPayloadIDFilledCount.Inc()
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
return
}
}
}()
}
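
The routine above follows a subscribe, wait for the state-initialized event, then poll on a ticker shape. A standalone sketch of that shape with the Prysm-specific pieces stripped out; the names waitThenTick, ready and work are invented for illustration:

package main

import (
	"context"
	"fmt"
	"time"
)

// waitThenTick blocks until an event arrives on ready (or ctx is cancelled),
// then runs work on every tick until ctx is done, mirroring the shape of
// fillMissingPayloadIDRoutine: subscribe, wait for initialization, then poll.
func waitThenTick(ctx context.Context, ready <-chan struct{}, period time.Duration, work func(time.Time)) {
	select {
	case <-ctx.Done():
		return
	case <-ready:
	}
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		select {
		case ti := <-ticker.C:
			work(ti)
		case <-ctx.Done():
			return
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()
	ready := make(chan struct{})
	go func() { ready <- struct{}{} }()
	waitThenTick(ctx, ready, 100*time.Millisecond, func(ti time.Time) {
		fmt.Println("tick at", ti.Format(time.StampMilli))
	})
}
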
// Returns true if time `t` is halfway through the slot in sec.
func atHalfSlot(t time.Time) bool {
s := params.BeaconConfig().SecondsPerSlot
return uint64(t.Second())%s == s/2
}
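
A quick illustration of the half-slot check: with 12-second slots the condition holds when the wall-clock second modulo 12 equals 6, i.e. halfway through a slot, assuming slot boundaries are aligned to multiples of SecondsPerSlot. A standalone sketch that takes the slot length as a parameter instead of reading params.BeaconConfig():

package main

import (
	"fmt"
	"time"
)

// atHalfSlot reports whether t falls on the midpoint of a slot of the given
// length in seconds, matching uint64(t.Second())%s == s/2 in the service code.
func atHalfSlot(t time.Time, secondsPerSlot uint64) bool {
	return uint64(t.Second())%secondsPerSlot == secondsPerSlot/2
}

func main() {
	base := time.Date(2022, 6, 11, 12, 0, 0, 0, time.UTC)
	for _, offset := range []int{0, 5, 6, 12, 18} {
		ti := base.Add(time.Duration(offset) * time.Second)
		fmt.Printf("second %2d -> half slot: %v\n", offset, atHalfSlot(ti, 12))
	}
	// Seconds 6 and 18 report true; 0, 5 and 12 report false.
}
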


@@ -7,6 +7,9 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
@@ -92,18 +95,15 @@ func (s *Service) verifyBlkPreState(ctx context.Context, b interfaces.BeaconBloc
func (s *Service) VerifyFinalizedBlkDescendant(ctx context.Context, root [32]byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.VerifyFinalizedBlkDescendant")
defer span.End()
finalized := s.store.FinalizedCheckpt()
if finalized == nil {
return errNilFinalizedInStore
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
fRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
finalizedBlkSigned, err := s.cfg.BeaconDB.Block(ctx, fRoot)
finalizedBlkSigned, err := s.getBlock(ctx, fRoot)
if err != nil {
return err
}
if finalizedBlkSigned == nil || finalizedBlkSigned.IsNil() || finalizedBlkSigned.Block().IsNil() {
return errors.New("nil finalized block")
}
finalizedBlk := finalizedBlkSigned.Block()
bFinalizedRoot, err := s.ancestor(ctx, root[:], finalizedBlk.Slot())
if err != nil {
@@ -118,7 +118,7 @@ func (s *Service) VerifyFinalizedBlkDescendant(ctx context.Context, root [32]byt
bytesutil.Trunc(root[:]), finalizedBlk.Slot(), bytesutil.Trunc(bFinalizedRoot),
bytesutil.Trunc(fRoot[:]))
tracing.AnnotateError(span, err)
return err
return invalidBlock{err}
}
return nil
}
@@ -126,16 +126,17 @@ func (s *Service) VerifyFinalizedBlkDescendant(ctx context.Context, root [32]byt
// verifyBlkFinalizedSlot validates input block is not less than or equal
// to current finalized slot.
func (s *Service) verifyBlkFinalizedSlot(b interfaces.BeaconBlock) error {
finalized := s.store.FinalizedCheckpt()
if finalized == nil {
return errNilFinalizedInStore
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
finalizedSlot, err := slots.EpochStart(finalized.Epoch)
if err != nil {
return err
}
if finalizedSlot >= b.Slot() {
return fmt.Errorf("block is equal or earlier than finalized block, slot %d < slot %d", b.Slot(), finalizedSlot)
err = fmt.Errorf("block is equal or earlier than finalized block, slot %d < slot %d", b.Slot(), finalizedSlot)
return invalidBlock{err}
}
return nil
}
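
The invalidBlock{err} returns here, and the IsInvalidBlock checks in the tests below, suggest a small error wrapper that marks consensus-invalid blocks so callers can tell them apart from transient failures. A minimal sketch of such a wrapper built on errors.As; the exact fields and helpers in the repository may differ:

package main

import (
	"errors"
	"fmt"
)

// invalidBlock wraps an error to mark the offending block as invalid so that
// callers (e.g. sync) can distinguish consensus failures from transient ones.
type invalidBlock struct {
	error
}

// Unwrap exposes the underlying cause to errors.Is / errors.As.
func (e invalidBlock) Unwrap() error { return e.error }

// IsInvalidBlock reports whether err (or anything it wraps) is an invalidBlock.
func IsInvalidBlock(err error) bool {
	var ib invalidBlock
	return errors.As(err, &ib)
}

func main() {
	err := invalidBlock{errors.New("block is equal or earlier than finalized block")}
	fmt.Println(IsInvalidBlock(err))                      // true
	fmt.Println(IsInvalidBlock(errors.New("db timeout"))) // false
}
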
@@ -168,7 +169,10 @@ func (s *Service) shouldUpdateCurrentJustified(ctx context.Context, newJustified
if slots.SinceEpochStarts(s.CurrentSlot()) < params.BeaconConfig().SafeSlotsToUpdateJustified {
return true, nil
}
justified := s.store.JustifiedCheckpt()
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return false, errors.Wrap(err, "could not get justified checkpoint")
}
jSlot, err := slots.EpochStart(justified.Epoch)
if err != nil {
return false, err
@@ -190,9 +194,9 @@ func (s *Service) updateJustified(ctx context.Context, state state.ReadOnlyBeaco
defer span.End()
cpt := state.CurrentJustifiedCheckpoint()
bestJustified := s.store.BestJustifiedCheckpt()
if bestJustified == nil {
return errNilBestJustifiedInStore
bestJustified, err := s.store.BestJustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get best justified checkpoint")
}
if cpt.Epoch > bestJustified.Epoch {
s.store.SetBestJustifiedCheckpt(cpt)
@@ -203,12 +207,21 @@ func (s *Service) updateJustified(ctx context.Context, state state.ReadOnlyBeaco
}
if canUpdate {
justified := s.store.JustifiedCheckpt()
if justified == nil {
return errNilJustifiedInStore
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
s.store.SetPrevJustifiedCheckpt(justified)
s.store.SetJustifiedCheckpt(cpt)
h, err := s.getPayloadHash(ctx, cpt.Root)
if err != nil {
return err
}
s.store.SetJustifiedCheckptAndPayloadHash(cpt, h)
// Update forkchoice's justified checkpoint
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{
Epoch: cpt.Epoch, Root: bytesutil.ToBytes32(cpt.Root)}); err != nil {
return err
}
}
return nil
@@ -218,18 +231,22 @@ func (s *Service) updateJustified(ctx context.Context, state state.ReadOnlyBeaco
// caches justified checkpoint balances for fork choice and save justified checkpoint in DB.
// This method does not have defense against fork choice bouncing attack, which is why it's only recommended to be used during initial syncing.
func (s *Service) updateJustifiedInitSync(ctx context.Context, cp *ethpb.Checkpoint) error {
justified := s.store.JustifiedCheckpt()
if justified == nil {
return errNilJustifiedInStore
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
s.store.SetPrevJustifiedCheckpt(justified)
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, cp); err != nil {
return err
}
s.store.SetJustifiedCheckpt(cp)
return nil
h, err := s.getPayloadHash(ctx, cp.Root)
if err != nil {
return err
}
s.store.SetJustifiedCheckptAndPayloadHash(cp, h)
return s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{
Epoch: cp.Epoch, Root: bytesutil.ToBytes32(cp.Root)})
}
func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) error {
@@ -249,7 +266,7 @@ func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) err
fRoot := bytesutil.ToBytes32(cp.Root)
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(fRoot)
if err != nil {
if err != nil && err != protoarray.ErrUnknownNodeRoot && err != doublylinkedtree.ErrNilNode {
return err
}
if !optimistic {
@@ -333,53 +350,42 @@ func (s *Service) ancestorByDB(ctx context.Context, r [32]byte, slot types.Slot)
// This is useful for block tree visualizer and additional vote accounting.
func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, blk interfaces.BeaconBlock,
fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
pendingNodes := make([]interfaces.BeaconBlock, 0)
pendingRoots := make([][32]byte, 0)
pendingNodes := make([]*forkchoicetypes.BlockAndCheckpoints, 0)
parentRoot := bytesutil.ToBytes32(blk.ParentRoot())
slot := blk.Slot()
// Fork choice only matters from last finalized slot.
finalized := s.store.FinalizedCheckpt()
if finalized == nil {
return errNilFinalizedInStore
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return err
}
fSlot, err := slots.EpochStart(finalized.Epoch)
if err != nil {
return err
}
higherThanFinalized := slot > fSlot
pendingNodes = append(pendingNodes, &forkchoicetypes.BlockAndCheckpoints{Block: blk,
JustifiedCheckpoint: jCheckpoint, FinalizedCheckpoint: fCheckpoint})
// As long as parent node is not in fork choice store, and parent node is in DB.
for !s.cfg.ForkChoiceStore.HasNode(parentRoot) && s.cfg.BeaconDB.HasBlock(ctx, parentRoot) && higherThanFinalized {
b, err := s.cfg.BeaconDB.Block(ctx, parentRoot)
root := bytesutil.ToBytes32(blk.ParentRoot())
for !s.cfg.ForkChoiceStore.HasNode(root) && s.cfg.BeaconDB.HasBlock(ctx, root) {
b, err := s.getBlock(ctx, root)
if err != nil {
return err
}
pendingNodes = append(pendingNodes, b.Block())
copiedRoot := parentRoot
pendingRoots = append(pendingRoots, copiedRoot)
parentRoot = bytesutil.ToBytes32(b.Block().ParentRoot())
slot = b.Block().Slot()
higherThanFinalized = slot > fSlot
}
// Insert parent nodes to fork choice store in reverse order.
// Lower slots should be at the end of the list.
for i := len(pendingNodes) - 1; i >= 0; i-- {
b := pendingNodes[i]
r := pendingRoots[i]
payloadHash, err := getBlockPayloadHash(blk)
if err != nil {
return err
}
if err := s.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx,
b.Slot(), r, bytesutil.ToBytes32(b.ParentRoot()), payloadHash,
jCheckpoint.Epoch,
fCheckpoint.Epoch); err != nil {
return errors.Wrap(err, "could not process block for proto array fork choice")
if b.Block().Slot() <= fSlot {
break
}
root = bytesutil.ToBytes32(b.Block().ParentRoot())
args := &forkchoicetypes.BlockAndCheckpoints{Block: b.Block(),
JustifiedCheckpoint: jCheckpoint,
FinalizedCheckpoint: fCheckpoint}
pendingNodes = append(pendingNodes, args)
}
return nil
if len(pendingNodes) == 1 {
return nil
}
if root != s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root)) {
return errNotDescendantOfFinalized
}
return s.cfg.ForkChoiceStore.InsertOptimisticChain(ctx, pendingNodes)
}
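
The reworked helper above walks the incoming block's ancestry backwards, collecting ancestors that the DB knows but fork choice does not, stops once it reaches a block at or below the finalized slot (or one already in fork choice), and errors if the walk never lands on the finalized root. A simplified, self-contained sketch of that walk over a toy block index; maps stand in for the beacon DB and the fork-choice store and the types are hypothetical:

package main

import (
	"errors"
	"fmt"
)

type block struct {
	slot   uint64
	parent string
}

var errNotDescendantOfFinalized = errors.New("not descendant of finalized root")

// collectChain gathers blkRoot and its missing ancestors in child-to-parent
// (descending slot) order, the order the batch insert expects. The walk stops
// at an ancestor that is already in forkchoice, missing from the DB, or at or
// below the finalized slot, and must end at the finalized root.
func collectChain(db map[string]block, forkchoice map[string]bool,
	blkRoot, finalizedRoot string, finalizedSlot uint64) ([]string, error) {
	pending := []string{blkRoot}
	root := db[blkRoot].parent
	for !forkchoice[root] {
		b, inDB := db[root]
		if !inDB || b.slot <= finalizedSlot {
			break
		}
		pending = append(pending, root)
		root = b.parent
	}
	if len(pending) == 1 {
		return pending, nil
	}
	if root != finalizedRoot {
		return nil, errNotDescendantOfFinalized
	}
	return pending, nil
}

func main() {
	db := map[string]block{
		"C": {slot: 67, parent: "B"},
		"B": {slot: 66, parent: "A"},
		"A": {slot: 65, parent: "F"},
		"F": {slot: 64, parent: "E"},
	}
	forkchoice := map[string]bool{"F": true} // F is the finalized block
	fmt.Println(collectChain(db, forkchoice, "C", "F", 64)) // [C B A] <nil>
}
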
// inserts finalized deposits into our finalized deposit trie.


@@ -5,6 +5,7 @@ import (
"fmt"
"math/big"
"strconv"
"sync"
"testing"
"time"
@@ -39,13 +40,14 @@ import (
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
prysmTime "github.com/prysmaticlabs/prysm/time"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestStore_OnBlock_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -129,9 +131,9 @@ func TestStore_OnBlock_ProtoArray(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: validGenesisRoot[:]})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: validGenesisRoot[:]}, [32]byte{'a'})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{Root: validGenesisRoot[:]})
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: roots[0]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: roots[0]}, [32]byte{'b'})
service.store.SetPrevFinalizedCheckpt(&ethpb.Checkpoint{Root: validGenesisRoot[:]})
root, err := tt.blk.Block.HashTreeRoot()
@@ -148,7 +150,7 @@ func TestStore_OnBlock_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -232,9 +234,9 @@ func TestStore_OnBlock_DoublyLinkedTree(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: validGenesisRoot[:]})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: validGenesisRoot[:]}, [32]byte{'a'})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{Root: validGenesisRoot[:]})
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: roots[0]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: roots[0]}, [32]byte{'b'})
service.store.SetPrevFinalizedCheckpt(&ethpb.Checkpoint{Root: validGenesisRoot[:]})
root, err := tt.blk.Block.HashTreeRoot()
@@ -251,7 +253,7 @@ func TestStore_OnBlock_ProposerBoostEarly(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
fcs := doublylinkedtree.New()
opts := []Option{
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
@@ -266,8 +268,7 @@ func TestStore_OnBlock_ProposerBoostEarly(t *testing.T) {
SecondsIntoSlot: 0,
}
require.NoError(t, service.cfg.ForkChoiceStore.BoostProposerRoot(ctx, args))
_, err = service.cfg.ForkChoiceStore.Head(ctx, 0,
params.BeaconConfig().ZeroHash, []uint64{}, 0)
_, err = service.cfg.ForkChoiceStore.Head(ctx, []uint64{})
require.ErrorContains(t, "could not apply proposer boost score: invalid proposer boost root", err)
}
@@ -289,9 +290,10 @@ func TestStore_OnBlockBatch_ProtoArray(t *testing.T) {
assert.NoError(t, beaconDB.SaveBlock(ctx, wsb))
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'b'})
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{})
service.cfg.ForkChoiceStore = protoarray.New()
wsb, err = wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
service.saveInitSyncBlock(gRoot, wsb)
@@ -303,7 +305,7 @@ func TestStore_OnBlockBatch_ProtoArray(t *testing.T) {
var blks []interfaces.SignedBeaconBlock
var blkRoots [][32]byte
var firstState state.BeaconState
for i := 1; i < 10; i++ {
for i := 1; i < 97; i++ {
b, err := util.GenerateFullBlock(bState, keys, util.DefaultBlockGenConfig(), types.Slot(i))
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
@@ -329,9 +331,78 @@ func TestStore_OnBlockBatch_ProtoArray(t *testing.T) {
rBlock.Block.ParentRoot = gRoot[:]
require.NoError(t, beaconDB.SaveBlock(context.Background(), blks[0]))
require.NoError(t, service.cfg.StateGen.SaveState(ctx, blkRoots[0], firstState))
_, _, err = service.onBlockBatch(ctx, blks, blkRoots[1:])
err = service.onBlockBatch(ctx, blks, blkRoots[1:])
require.ErrorIs(t, errWrongBlockCount, err)
_, _, err = service.onBlockBatch(ctx, blks[1:], blkRoots[1:])
service.originBlockRoot = blkRoots[1]
err = service.onBlockBatch(ctx, blks[1:], blkRoots[1:])
require.NoError(t, err)
jcp, err := service.store.JustifiedCheckpt()
require.NoError(t, err)
jroot := bytesutil.ToBytes32(jcp.Root)
require.Equal(t, blkRoots[63], jroot)
require.Equal(t, types.Epoch(2), service.cfg.ForkChoiceStore.JustifiedCheckpoint().Epoch)
}
func TestStore_OnBlockBatch_PruneOK(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
wsb, err := wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
assert.NoError(t, beaconDB.SaveBlock(ctx, wsb))
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
service.cfg.ForkChoiceStore = protoarray.New()
wsb, err = wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
service.saveInitSyncBlock(gRoot, wsb)
st, keys := util.DeterministicGenesisState(t, 64)
bState := st.Copy()
var blks []interfaces.SignedBeaconBlock
var blkRoots [][32]byte
var firstState state.BeaconState
for i := 1; i < 128; i++ {
b, err := util.GenerateFullBlock(bState, keys, util.DefaultBlockGenConfig(), types.Slot(i))
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
bState, err = transition.ExecuteStateTransition(ctx, bState, wsb)
if i == 32 {
firstState = bState.Copy()
}
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
service.saveInitSyncBlock(root, wsb)
wsb, err = wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
blks = append(blks, wsb)
blkRoots = append(blkRoots, root)
}
for i := 0; i < 32; i++ {
require.NoError(t, beaconDB.SaveBlock(context.Background(), blks[i]))
}
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: blkRoots[31][:], Epoch: 1}, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: blkRoots[31][:], Epoch: 1}, [32]byte{'b'})
require.NoError(t, service.cfg.StateGen.SaveState(ctx, blkRoots[31], firstState))
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, firstState, blkRoots[31]))
err = service.onBlockBatch(ctx, blks[32:], blkRoots[32:])
require.NoError(t, err)
}
@@ -353,9 +424,10 @@ func TestStore_OnBlockBatch_DoublyLinkedTree(t *testing.T) {
assert.NoError(t, beaconDB.SaveBlock(ctx, wsb))
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'b'})
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.cfg.ForkChoiceStore = doublylinkedtree.New()
wsb, err = wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
service.saveInitSyncBlock(gRoot, wsb)
@@ -367,7 +439,7 @@ func TestStore_OnBlockBatch_DoublyLinkedTree(t *testing.T) {
var blks []interfaces.SignedBeaconBlock
var blkRoots [][32]byte
var firstState state.BeaconState
for i := 1; i < 10; i++ {
for i := 1; i < 97; i++ {
b, err := util.GenerateFullBlock(bState, keys, util.DefaultBlockGenConfig(), types.Slot(i))
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(b)
@@ -393,10 +465,16 @@ func TestStore_OnBlockBatch_DoublyLinkedTree(t *testing.T) {
rBlock.Block.ParentRoot = gRoot[:]
require.NoError(t, beaconDB.SaveBlock(context.Background(), blks[0]))
require.NoError(t, service.cfg.StateGen.SaveState(ctx, blkRoots[0], firstState))
_, _, err = service.onBlockBatch(ctx, blks, blkRoots[1:])
err = service.onBlockBatch(ctx, blks, blkRoots[1:])
require.ErrorIs(t, errWrongBlockCount, err)
_, _, err = service.onBlockBatch(ctx, blks[1:], blkRoots[1:])
service.originBlockRoot = blkRoots[1]
err = service.onBlockBatch(ctx, blks[1:], blkRoots[1:])
require.NoError(t, err)
jcp, err := service.store.JustifiedCheckpt()
require.NoError(t, err)
jroot := bytesutil.ToBytes32(jcp.Root)
require.Equal(t, blkRoots[63], jroot)
require.Equal(t, types.Epoch(2), service.cfg.ForkChoiceStore.JustifiedCheckpoint().Epoch)
}
func TestStore_OnBlockBatch_NotifyNewPayload(t *testing.T) {
@@ -415,8 +493,10 @@ func TestStore_OnBlockBatch_NotifyNewPayload(t *testing.T) {
assert.NoError(t, beaconDB.SaveBlock(ctx, wsb))
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'b'})
service.cfg.ForkChoiceStore = doublylinkedtree.New()
service.saveInitSyncBlock(gRoot, wsb)
st, keys := util.DeterministicGenesisState(t, 64)
bState := st.Copy()
@@ -447,10 +527,9 @@ func TestStore_OnBlockBatch_NotifyNewPayload(t *testing.T) {
rBlock.Block.ParentRoot = gRoot[:]
require.NoError(t, beaconDB.SaveBlock(context.Background(), blks[0]))
require.NoError(t, service.cfg.StateGen.SaveState(ctx, blkRoots[0], firstState))
cp1, cp2, err := service.onBlockBatch(ctx, blks[1:], blkRoots[1:])
service.originBlockRoot = blkRoots[1]
err = service.onBlockBatch(ctx, blks[1:], blkRoots[1:])
require.NoError(t, err)
require.Equal(t, blkCount-1, len(cp1))
require.Equal(t, blkCount-1, len(cp2))
}
func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
@@ -484,7 +563,7 @@ func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
diff := params.BeaconConfig().SlotsPerEpoch.Sub(1).Mul(params.BeaconConfig().SecondsPerSlot)
service.genesisTime = time.Unix(time.Now().Unix()-int64(diff), 0)
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: lastJustifiedRoot[:]})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: lastJustifiedRoot[:]}, [32]byte{'a'})
update, err = service.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
require.NoError(t, err)
assert.Equal(t, true, update, "Should be able to update justified")
@@ -498,7 +577,7 @@ func TestShouldUpdateJustified_ReturnFalse_ProtoArray(t *testing.T) {
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{})
service.cfg.ForkChoiceStore = protoarray.New()
lastJustifiedBlk := util.NewBeaconBlock()
lastJustifiedBlk.Block.ParentRoot = bytesutil.PadTo([]byte{'G'}, 32)
lastJustifiedRoot, err := lastJustifiedBlk.Block.HashTreeRoot()
@@ -516,7 +595,7 @@ func TestShouldUpdateJustified_ReturnFalse_ProtoArray(t *testing.T) {
diff := params.BeaconConfig().SlotsPerEpoch.Sub(1).Mul(params.BeaconConfig().SecondsPerSlot)
service.genesisTime = time.Unix(time.Now().Unix()-int64(diff), 0)
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: lastJustifiedRoot[:]})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: lastJustifiedRoot[:]}, [32]byte{'a'})
update, err := service.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
require.NoError(t, err)
@@ -531,7 +610,7 @@ func TestShouldUpdateJustified_ReturnFalse_DoublyLinkedTree(t *testing.T) {
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.cfg.ForkChoiceStore = doublylinkedtree.New()
lastJustifiedBlk := util.NewBeaconBlock()
lastJustifiedBlk.Block.ParentRoot = bytesutil.PadTo([]byte{'G'}, 32)
lastJustifiedRoot, err := lastJustifiedBlk.Block.HashTreeRoot()
@@ -549,7 +628,7 @@ func TestShouldUpdateJustified_ReturnFalse_DoublyLinkedTree(t *testing.T) {
diff := params.BeaconConfig().SlotsPerEpoch.Sub(1).Mul(params.BeaconConfig().SecondsPerSlot)
service.genesisTime = time.Unix(time.Now().Unix()-int64(diff), 0)
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: lastJustifiedRoot[:]})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: lastJustifiedRoot[:]}, [32]byte{'a'})
update, err := service.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
require.NoError(t, err)
@@ -577,8 +656,8 @@ func TestCachedPreState_CanGetFromStateSummary_ProtoArray(t *testing.T) {
assert.NoError(t, beaconDB.SaveBlock(ctx, wsb))
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{})
service.cfg.ForkChoiceStore = protoarray.New()
wsb, err = wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
service.saveInitSyncBlock(gRoot, wsb)
@@ -614,8 +693,8 @@ func TestCachedPreState_CanGetFromStateSummary_DoublyLinkedTree(t *testing.T) {
assert.NoError(t, beaconDB.SaveBlock(ctx, wsb))
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{})
service.cfg.ForkChoiceStore = doublylinkedtree.New()
wsb, err = wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
service.saveInitSyncBlock(gRoot, wsb)
@@ -648,15 +727,15 @@ func TestCachedPreState_CanGetFromDB(t *testing.T) {
assert.NoError(t, beaconDB.SaveBlock(ctx, wsb))
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{})
service.cfg.ForkChoiceStore = protoarray.New()
wsb, err = wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
service.saveInitSyncBlock(gRoot, wsb)
b := util.NewBeaconBlock()
b.Block.Slot = 1
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{})
wb, err := wrapper.WrappedBeaconBlock(b.Block)
require.NoError(t, err)
err = service.verifyBlkPreState(ctx, wb)
@@ -680,7 +759,7 @@ func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(protoarray.New(0, 0, [32]byte{})),
WithForkChoiceStore(protoarray.New()),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -691,7 +770,7 @@ func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
r, err := signedBlock.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: []byte{'A'}})
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: []byte{'A'}}, [32]byte{'a'})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{Root: []byte{'A'}})
st, err := util.NewBeaconState()
require.NoError(t, err)
@@ -703,13 +782,17 @@ func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
require.NoError(t, s.SetCurrentJustifiedCheckpoint(&ethpb.Checkpoint{Epoch: 1, Root: r[:]}))
require.NoError(t, service.updateJustified(context.Background(), s))
assert.Equal(t, s.CurrentJustifiedCheckpoint().Epoch, service.store.BestJustifiedCheckpt().Epoch, "Incorrect justified epoch in service")
cp, err := service.store.BestJustifiedCheckpt()
require.NoError(t, err)
assert.Equal(t, s.CurrentJustifiedCheckpoint().Epoch, cp.Epoch, "Incorrect justified epoch in service")
// Could not update
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{Root: []byte{'A'}, Epoch: 2})
require.NoError(t, service.updateJustified(context.Background(), s))
assert.Equal(t, types.Epoch(2), service.store.BestJustifiedCheckpt().Epoch, "Incorrect justified epoch in service")
cp, err = service.store.BestJustifiedCheckpt()
require.NoError(t, err)
assert.Equal(t, types.Epoch(2), cp.Epoch, "Incorrect justified epoch in service")
}
func TestFillForkChoiceMissingBlocks_CanSave_ProtoArray(t *testing.T) {
@@ -722,8 +805,7 @@ func TestFillForkChoiceMissingBlocks_CanSave_ProtoArray(t *testing.T) {
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: make([]byte, 32)})
service.cfg.ForkChoiceStore = protoarray.New()
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
@@ -746,12 +828,14 @@ func TestFillForkChoiceMissingBlocks_CanSave_ProtoArray(t *testing.T) {
wsb, err = wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: roots[0]}, [32]byte{})
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb.Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
assert.Equal(t, 5, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// 4 nodes from the block tree 1. B3 - B4 - B6 - B8
assert.Equal(t, 4, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[3])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[4])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[6])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[8])), "Didn't save node")
@@ -767,8 +851,7 @@ func TestFillForkChoiceMissingBlocks_CanSave_DoublyLinkedTree(t *testing.T) {
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: make([]byte, 32)})
service.cfg.ForkChoiceStore = doublylinkedtree.New()
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
@@ -792,12 +875,14 @@ func TestFillForkChoiceMissingBlocks_CanSave_DoublyLinkedTree(t *testing.T) {
wsb, err = wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: roots[0]}, [32]byte{})
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb.Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
assert.Equal(t, 5, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
assert.Equal(t, 4, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[3])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[4])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[6])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[8])), "Didn't save node")
@@ -813,8 +898,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch_ProtoArray(t *testing.T) {
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: make([]byte, 32)})
service.cfg.ForkChoiceStore = protoarray.New()
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
@@ -838,14 +922,15 @@ func TestFillForkChoiceMissingBlocks_RootsMatch_ProtoArray(t *testing.T) {
wsb, err = wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: roots[0]}, [32]byte{})
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb.Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
assert.Equal(t, 5, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// 4 nodes from the block tree 1. B3 - B4 - B6 - B8
assert.Equal(t, 4, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// Ensure all roots and their respective blocks exist.
wantedRoots := [][]byte{roots[0], roots[3], roots[4], roots[6], roots[8]}
wantedRoots := [][]byte{roots[3], roots[4], roots[6], roots[8]}
for i, rt := range wantedRoots {
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(rt)), fmt.Sprintf("Didn't save node: %d", i))
assert.Equal(t, true, service.cfg.BeaconDB.HasBlock(context.Background(), bytesutil.ToBytes32(rt)))
@@ -862,8 +947,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch_DoublyLinkedTree(t *testing.T) {
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: make([]byte, 32)})
service.cfg.ForkChoiceStore = doublylinkedtree.New()
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
@@ -887,14 +971,15 @@ func TestFillForkChoiceMissingBlocks_RootsMatch_DoublyLinkedTree(t *testing.T) {
wsb, err = wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: roots[0]}, [32]byte{})
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb.Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
assert.Equal(t, 5, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// 4 nodes from the block tree 1. B3 - B4 - B6 - B8
assert.Equal(t, 4, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// Ensure all roots and their respective blocks exist.
wantedRoots := [][]byte{roots[0], roots[3], roots[4], roots[6], roots[8]}
wantedRoots := [][]byte{roots[3], roots[4], roots[6], roots[8]}
for i, rt := range wantedRoots {
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(rt)), fmt.Sprintf("Didn't save node: %d", i))
assert.Equal(t, true, service.cfg.BeaconDB.HasBlock(context.Background(), bytesutil.ToBytes32(rt)))
@@ -911,9 +996,7 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized_ProtoArray(t *testing.T) {
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
// Set finalized epoch to 1.
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 1})
service.cfg.ForkChoiceStore = protoarray.New()
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
@@ -927,7 +1010,7 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized_ProtoArray(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
// Define a tree branch, slot 63 <- 64 <- 65
// Define a tree branch, slot 63 <- 64 <- 65 <- 66
b63 := util.NewBeaconBlock()
b63.Block.Slot = 63
wsb, err = wrapper.WrappedSignedBeaconBlock(b63)
@@ -946,20 +1029,28 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized_ProtoArray(t *testing.T) {
b65 := util.NewBeaconBlock()
b65.Block.Slot = 65
b65.Block.ParentRoot = r64[:]
r65, err := b65.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(b65)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
b66 := util.NewBeaconBlock()
b66.Block.Slot = 66
b66.Block.ParentRoot = r65[:]
wsb, err = wrapper.WrappedSignedBeaconBlock(b66)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
beaconState, _ := util.DeterministicGenesisState(t, 32)
// Set finalized epoch to 2.
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 2, Root: r64[:]}, [32]byte{})
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb.Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// There should be 2 nodes, block 65 and block 64.
assert.Equal(t, 2, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// Block with slot 63 should be in fork choice because it's less than finalized epoch 1.
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(r63), "Didn't save node")
// We should have saved 1 node: block 65
assert.Equal(t, 1, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(r65), "Didn't save node")
}
func TestFillForkChoiceMissingBlocks_FilterFinalized_DoublyLinkedTree(t *testing.T) {
@@ -972,9 +1063,7 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized_DoublyLinkedTree(t *testing
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
// Set finalized epoch to 1.
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 1})
service.cfg.ForkChoiceStore = doublylinkedtree.New()
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
@@ -1007,27 +1096,75 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized_DoublyLinkedTree(t *testing
b65 := util.NewBeaconBlock()
b65.Block.Slot = 65
b65.Block.ParentRoot = r64[:]
r65, err := b65.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(b65)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
b66 := util.NewBeaconBlock()
b66.Block.Slot = 66
b66.Block.ParentRoot = r65[:]
wsb, err = wrapper.WrappedSignedBeaconBlock(b66)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
beaconState, _ := util.DeterministicGenesisState(t, 32)
// Set finalized epoch to 1.
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 2, Root: r64[:]}, [32]byte{})
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb.Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// There should be 2 nodes, block 65 and block 64.
assert.Equal(t, 2, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// There should be 1 node: block 65
assert.Equal(t, 1, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(r65), "Didn't save node")
}
// Block with slot 63 should be in fork choice because it's less than finalized epoch 1.
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(r63), "Didn't save node")
func TestFillForkChoiceMissingBlocks_FinalizedSibling_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = doublylinkedtree.New()
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
wsb, err := wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
validGenesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
beaconState, _ := util.DeterministicGenesisState(t, 32)
blk := util.NewBeaconBlock()
blk.Block.Slot = 9
blk.Block.ParentRoot = roots[8]
wsb, err = wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: roots[1]}, [32]byte{})
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb.Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.ErrorIs(t, errNotDescendantOfFinalized, err)
}
// blockTree1 constructs the following tree:
// /- B1
// B0 /- B5 - B7
// \- B3 - B4 - B6 - B8
// (B1, and B3 are all from the same slots)
func blockTree1(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][]byte, error) {
genesisRoot = bytesutil.PadTo(genesisRoot, 32)
b0 := util.NewBeaconBlock()
@@ -1135,7 +1272,7 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -1210,7 +1347,9 @@ func TestAncestor_CanUseForkchoice(t *testing.T) {
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, 0, 0)) // Saves blocks to fork choice store.
state, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
}
r, err := service.ancestor(context.Background(), r200[:], 150)
@@ -1224,7 +1363,7 @@ func TestAncestor_CanUseDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -1257,7 +1396,9 @@ func TestAncestor_CanUseDB(t *testing.T) {
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb)) // Saves blocks to DB.
}
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, 0, 0))
state, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
r, err := service.ancestor(context.Background(), r200[:], 150)
require.NoError(t, err)
@@ -1284,7 +1425,7 @@ func TestVerifyBlkDescendant(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -1312,16 +1453,17 @@ func TestVerifyBlkDescendant(t *testing.T) {
finalizedRoot [32]byte
}
tests := []struct {
name string
args args
wantedErr string
name string
args args
wantedErr string
invalidBlockRoot bool
}{
{
name: "could not get finalized block in block service cache",
args: args{
finalizedRoot: [32]byte{'a'},
},
wantedErr: "nil finalized block",
wantedErr: "block not found in cache or db",
},
{
name: "could not get finalized block root in DB",
@@ -1337,7 +1479,8 @@ func TestVerifyBlkDescendant(t *testing.T) {
finalizedRoot: r1,
parentRoot: r,
},
wantedErr: "is not a descendant of the current finalized block slot",
wantedErr: "is not a descendant of the current finalized block slot",
invalidBlockRoot: true,
},
{
name: "is descendant",
@@ -1350,10 +1493,13 @@ func TestVerifyBlkDescendant(t *testing.T) {
for _, tt := range tests {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: tt.args.finalizedRoot[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: tt.args.finalizedRoot[:]}, [32]byte{})
err = service.VerifyFinalizedBlkDescendant(ctx, tt.args.parentRoot)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
if tt.invalidBlockRoot {
require.Equal(t, true, IsInvalidBlock(err))
}
} else if err != nil {
assert.NoError(t, err)
}
@@ -1378,14 +1524,18 @@ func TestUpdateJustifiedInitSync(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, beaconState, gRoot))
service.originBlockRoot = gRoot
currentCp := &ethpb.Checkpoint{Epoch: 1}
service.store.SetJustifiedCheckpt(currentCp)
service.store.SetJustifiedCheckptAndPayloadHash(currentCp, [32]byte{'a'})
newCp := &ethpb.Checkpoint{Epoch: 2, Root: gRoot[:]}
require.NoError(t, service.updateJustifiedInitSync(ctx, newCp))
assert.DeepSSZEqual(t, currentCp, service.PreviousJustifiedCheckpt(), "Incorrect previous justified checkpoint")
assert.DeepSSZEqual(t, newCp, service.CurrentJustifiedCheckpt(), "Incorrect current justified checkpoint in cache")
cp, err := service.cfg.BeaconDB.JustifiedCheckpoint(ctx)
cp, err := service.PreviousJustifiedCheckpt()
require.NoError(t, err)
assert.DeepSSZEqual(t, currentCp, cp, "Incorrect previous justified checkpoint")
cp, err = service.CurrentJustifiedCheckpt()
require.NoError(t, err)
assert.DeepSSZEqual(t, newCp, cp, "Incorrect current justified checkpoint in cache")
cp, err = service.cfg.BeaconDB.JustifiedCheckpoint(ctx)
require.NoError(t, err)
assert.DeepSSZEqual(t, newCp, cp, "Incorrect current justified checkpoint in db")
}
@@ -1420,7 +1570,7 @@ func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
func TestOnBlock_CanFinalize(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
depositCache, err := depositcache.New()
require.NoError(t, err)
opts := []Option{
@@ -1439,7 +1589,7 @@ func TestOnBlock_CanFinalize(t *testing.T) {
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{})
testState := gs.Copy()
for i := types.Slot(1); i <= 4*params.BeaconConfig().SlotsPerEpoch; i++ {
@@ -1453,28 +1603,49 @@ func TestOnBlock_CanFinalize(t *testing.T) {
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
require.Equal(t, types.Epoch(3), service.CurrentJustifiedCheckpt().Epoch)
require.Equal(t, types.Epoch(2), service.FinalizedCheckpt().Epoch)
cp, err := service.CurrentJustifiedCheckpt()
require.NoError(t, err)
require.Equal(t, types.Epoch(3), cp.Epoch)
cp, err = service.FinalizedCheckpt()
require.NoError(t, err)
require.Equal(t, types.Epoch(2), cp.Epoch)
// The update should persist in DB.
j, err := service.cfg.BeaconDB.JustifiedCheckpoint(ctx)
require.NoError(t, err)
require.Equal(t, j.Epoch, service.CurrentJustifiedCheckpt().Epoch)
cp, err = service.CurrentJustifiedCheckpt()
require.NoError(t, err)
require.Equal(t, j.Epoch, cp.Epoch)
f, err := service.cfg.BeaconDB.FinalizedCheckpoint(ctx)
require.NoError(t, err)
require.Equal(t, f.Epoch, service.FinalizedCheckpt().Epoch)
cp, err = service.FinalizedCheckpt()
require.NoError(t, err)
require.Equal(t, f.Epoch, cp.Epoch)
}
func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.AltairForkEpoch = 1
config.BellatrixForkEpoch = 2
params.OverrideBeaconConfig(config)
func TestOnBlock_NilBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
depositCache, err := depositcache.New()
require.NoError(t, err)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithDepositCache(depositCache),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
err = service.onBlock(ctx, nil, [32]byte{})
require.Equal(t, true, IsInvalidBlock(err))
}
func TestOnBlock_InvalidSignature(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
depositCache, err := depositcache.New()
require.NoError(t, err)
opts := []Option{
@@ -1493,7 +1664,48 @@ func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{})
blk, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
blk.Signature = []byte{'a'} // Mutate the signature.
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
err = service.onBlock(ctx, wsb, r)
require.Equal(t, true, IsInvalidBlock(err))
}
func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.AltairForkEpoch = 1
config.BellatrixForkEpoch = 2
params.OverrideBeaconConfig(config)
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
depositCache, err := depositcache.New()
require.NoError(t, err)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithDepositCache(depositCache),
WithStateNotifier(&mock.MockStateNotifier{}),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
gs, keys := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
gBlk, err := service.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{})
testState := gs.Copy()
for i := types.Slot(1); i < params.BeaconConfig().SlotsPerEpoch; i++ {
@@ -1524,7 +1736,7 @@ func TestInsertFinalizedDeposits(t *testing.T) {
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{})
gs = gs.Copy()
assert.NoError(t, gs.SetEth1Data(&ethpb.Eth1Data{DepositCount: 10}))
assert.NoError(t, gs.SetEth1DepositIndex(8))
@@ -1563,7 +1775,7 @@ func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{})
gs = gs.Copy()
assert.NoError(t, gs.SetEth1Data(&ethpb.Eth1Data{DepositCount: 7}))
assert.NoError(t, gs.SetEth1DepositIndex(6))
@@ -1712,7 +1924,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -1840,7 +2052,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
func TestService_insertSlashingsToForkChoiceStore(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -1885,5 +2097,104 @@ func TestService_insertSlashingsToForkChoiceStore(t *testing.T) {
b.Block.Body.AttesterSlashings = slashings
wb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
service.insertSlashingsToForkChoiceStore(ctx, wb.Block())
service.InsertSlashingsToForkChoiceStore(ctx, wb.Block().Body().AttesterSlashings())
}
func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
depositCache, err := depositcache.New()
require.NoError(t, err)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithDepositCache(depositCache),
WithStateNotifier(&mock.MockStateNotifier{}),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
gs, keys := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
gBlk, err := service.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'a'})
blk1, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
wsb1, err := wrapper.WrappedSignedBeaconBlock(blk1)
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 2)
require.NoError(t, err)
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
wsb2, err := wrapper.WrappedSignedBeaconBlock(blk2)
require.NoError(t, err)
blk3, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 3)
require.NoError(t, err)
r3, err := blk3.Block.HashTreeRoot()
require.NoError(t, err)
wsb3, err := wrapper.WrappedSignedBeaconBlock(blk3)
require.NoError(t, err)
blk4, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 4)
require.NoError(t, err)
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
wsb4, err := wrapper.WrappedSignedBeaconBlock(blk4)
require.NoError(t, err)
logHook := logTest.NewGlobal()
for i := 0; i < 10; i++ {
var wg sync.WaitGroup
wg.Add(4)
go func() {
require.NoError(t, service.onBlock(ctx, wsb1, r1))
wg.Done()
}()
go func() {
require.NoError(t, service.onBlock(ctx, wsb2, r2))
wg.Done()
}()
go func() {
require.NoError(t, service.onBlock(ctx, wsb3, r3))
wg.Done()
}()
go func() {
require.NoError(t, service.onBlock(ctx, wsb4, r4))
wg.Done()
}()
wg.Wait()
require.LogsDoNotContain(t, logHook, "New head does not exist in DB. Do nothing")
require.NoError(t, service.cfg.BeaconDB.DeleteBlock(ctx, r1))
require.NoError(t, service.cfg.BeaconDB.DeleteBlock(ctx, r2))
require.NoError(t, service.cfg.BeaconDB.DeleteBlock(ctx, r3))
require.NoError(t, service.cfg.BeaconDB.DeleteBlock(ctx, r4))
service.cfg.ForkChoiceStore = protoarray.New()
}
}
func Test_verifyBlkFinalizedSlot_invalidBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 1}, [32]byte{'a'})
blk := util.HydrateBeaconBlock(&ethpb.BeaconBlock{Slot: 1})
wb, err := wrapper.WrappedBeaconBlock(blk)
require.NoError(t, err)
err = service.verifyBlkFinalizedSlot(wb)
require.Equal(t, true, IsInvalidBlock(err))
}

View File

@@ -71,9 +71,9 @@ func (s *Service) VerifyFinalizedConsistency(ctx context.Context, root []byte) e
return nil
}
f := s.FinalizedCheckpt()
if f == nil {
return errNilFinalizedInStore
f, err := s.FinalizedCheckpt()
if err != nil {
return err
}
ss, err := slots.EpochStart(f.Epoch)
if err != nil {
@@ -128,56 +128,68 @@ func (s *Service) spawnProcessAttestationsRoutine(stateFeed *event.Feed) {
return
}
// Continue when there's no fork choice attestation, there's nothing to process and update head.
// This covers the condition when the node is still initial syncing to the head of the chain.
if s.cfg.AttPool.ForkchoiceAttestationCount() == 0 {
continue
if err := s.UpdateHead(s.ctx); err != nil {
log.WithError(err).Error("Could not process attestations and update head")
return
}
s.processAttestations(s.ctx)
justified := s.store.JustifiedCheckpt()
if justified == nil {
log.WithError(errNilJustifiedInStore).Error("Could not get justified checkpoint")
continue
}
balances, err := s.justifiedBalances.get(s.ctx, bytesutil.ToBytes32(justified.Root))
if err != nil {
log.WithError(err).Errorf("Unable to get justified balances for root %v", justified.Root)
continue
}
newHeadRoot, err := s.updateHead(s.ctx, balances)
if err != nil {
log.WithError(err).Warn("Resolving fork due to new attestation")
}
if s.headRoot() != newHeadRoot {
log.WithFields(logrus.Fields{
"oldHeadRoot": fmt.Sprintf("%#x", s.headRoot()),
"newHeadRoot": fmt.Sprintf("%#x", newHeadRoot),
}).Debug("Head changed due to attestations")
}
s.notifyEngineIfChangedHead(s.ctx, newHeadRoot)
}
}
}()
}
// UpdateHead updates the canonical head of the chain based on information from fork-choice attestations and votes.
// It requires no external inputs.
func (s *Service) UpdateHead(ctx context.Context) error {
// Continue when there's no fork choice attestation, there's nothing to process and update head.
// This covers the condition when the node is still initial syncing to the head of the chain.
if s.cfg.AttPool.ForkchoiceAttestationCount() == 0 {
return nil
}
// Only one process can process attestations and update head at a time.
s.processAttestationsLock.Lock()
defer s.processAttestationsLock.Unlock()
s.processAttestations(ctx)
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return err
}
balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(justified.Root))
if err != nil {
return err
}
newHeadRoot, err := s.updateHead(ctx, balances)
if err != nil {
log.WithError(err).Warn("Resolving fork due to new attestation")
}
s.headLock.RLock()
if s.headRoot() != newHeadRoot {
log.WithFields(logrus.Fields{
"oldHeadRoot": fmt.Sprintf("%#x", s.headRoot()),
"newHeadRoot": fmt.Sprintf("%#x", newHeadRoot),
}).Debug("Head changed due to attestations")
}
s.headLock.RUnlock()
s.notifyEngineIfChangedHead(ctx, newHeadRoot)
return nil
}
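The exported UpdateHead above replaces the inline per-loop logic, so callers outside the attestation routine can drive head updates directly. A minimal sketch of such a caller, assuming a hypothetical package outside beacon-chain/blockchain and a plain per-slot ticker (the real service wires this through spawnProcessAttestationsRoutine; the helper name driveHeadUpdates is illustrative only):
package headdriver
import (
	"context"
	"log"
	"time"
	"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
	"github.com/prysmaticlabs/prysm/config/params"
)
// driveHeadUpdates polls UpdateHead once per slot. UpdateHead is a no-op when the
// fork-choice attestation pool is empty and serializes concurrent callers through
// processAttestationsLock, so calling it on a timer is safe.
func driveHeadUpdates(ctx context.Context, svc *blockchain.Service) {
	tick := time.NewTicker(time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-tick.C:
			if err := svc.UpdateHead(ctx); err != nil {
				log.Printf("could not update head: %v", err)
			}
		}
	}
}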
// notifyEngineIfChangedHead calls notifyForkchoiceUpdate in the event that the head has changed.
func (s *Service) notifyEngineIfChangedHead(ctx context.Context, newHeadRoot [32]byte) {
if s.headRoot() == newHeadRoot {
s.headLock.RLock()
if newHeadRoot == [32]byte{} || s.headRoot() == newHeadRoot {
s.headLock.RUnlock()
return
}
s.headLock.RUnlock()
if !s.hasBlockInInitSyncOrDB(ctx, newHeadRoot) {
log.Debug("New head does not exist in DB. Do nothing")
return // We don't have the block, don't notify the engine and update head.
}
finalized := s.store.FinalizedCheckpt()
if finalized == nil {
log.WithError(errNilFinalizedInStore).Error("could not get finalized checkpoint")
return
}
newHeadBlock, err := s.getBlock(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("Could not get new head block")
@@ -189,11 +201,9 @@ func (s *Service) notifyEngineIfChangedHead(ctx context.Context, newHeadRoot [32
return
}
arg := &notifyForkchoiceUpdateArg{
headState: headState,
headRoot: newHeadRoot,
headBlock: newHeadBlock.Block(),
finalizedRoot: bytesutil.ToBytes32(finalized.Root),
justifiedRoot: bytesutil.ToBytes32(s.store.JustifiedCheckpt().Root),
headState: headState,
headRoot: newHeadRoot,
headBlock: newHeadBlock.Block(),
}
_, err = s.notifyForkchoiceUpdate(s.ctx, arg)
if err != nil {

View File

@@ -120,7 +120,9 @@ func TestProcessAttestations_Ok(t *testing.T) {
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
service.processAttestations(ctx)
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations()))
@@ -137,14 +139,14 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
service.notifyEngineIfChangedHead(ctx, service.headRoot())
hookErr := "could not notify forkchoice update"
finalizedErr := "could not get finalized checkpoint"
require.LogsDoNotContain(t, hook, finalizedErr)
invalidStateErr := "Could not get state from db"
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
gb, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
service.saveInitSyncBlock([32]byte{'a'}, gb)
service.notifyEngineIfChangedHead(ctx, [32]byte{'a'})
require.LogsContain(t, hook, finalizedErr)
require.LogsContain(t, hook, invalidStateErr)
hook.Reset()
service.head = &head{
@@ -169,9 +171,9 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1})
service.store.SetFinalizedCheckpt(finalized)
service.store.SetFinalizedCheckptAndPayloadHash(finalized, [32]byte{})
service.notifyEngineIfChangedHead(ctx, r1)
require.LogsDoNotContain(t, hook, finalizedErr)
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
// Block in DB
@@ -191,12 +193,56 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1})
service.store.SetFinalizedCheckpt(finalized)
service.store.SetFinalizedCheckptAndPayloadHash(finalized, [32]byte{})
service.notifyEngineIfChangedHead(ctx, r1)
require.LogsDoNotContain(t, hook, finalizedErr)
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
vId, payloadID, has := service.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(2)
require.Equal(t, true, has)
require.Equal(t, types.ValidatorIndex(1), vId)
require.Equal(t, [8]byte{1}, payloadID)
// Test zero headRoot returns immediately.
headRoot := service.headRoot()
service.notifyEngineIfChangedHead(ctx, [32]byte{})
require.Equal(t, service.headRoot(), headRoot)
}
func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
ctx := context.Background()
opts := testServiceOptsWithDB(t)
opts = append(opts, WithAttestationPool(attestations.NewPool()), WithStateNotifier(&mockBeaconNode{}))
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.genesisTime = prysmTime.Now().Add(-1 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(prysmTime.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
atts, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(atts[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: tRoot[:]}))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
b := util.NewBeaconBlock()
wb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wb))
state, blkRoot, err = prepareForkchoiceState(ctx, wb.Block().Slot(), r, bytesutil.ToBytes32(wb.Block().ParentRoot()), [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head.root = r // Old head
require.Equal(t, 1, len(service.cfg.AttPool.ForkchoiceAttestations()))
require.NoError(t, err, service.UpdateHead(ctx))
require.Equal(t, tRoot, service.head.root) // Validate head is the new one
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations())) // Validate att pool is empty
}

View File

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time"
"github.com/prysmaticlabs/prysm/time/slots"
"go.opencensus.io/trace"
@@ -24,9 +25,14 @@ type BlockReceiver interface {
HasBlock(ctx context.Context, root [32]byte) bool
}
// ReceiveBlock is a function that defines the the operations (minus pubsub)
// that are performed on blocks that is received from regular sync service. The operations consists of:
// 1. Validate block, apply state transition and update check points
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashings *ethpb.AttesterSlashing)
}
// ReceiveBlock is a function that defines the operations (minus pubsub)
// that are performed on a received block. The operations consist of:
// 1. Validate block, apply state transition and update checkpoints
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, blockRoot [32]byte) error {
@@ -53,14 +59,18 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.SignedBeaco
}
// Reports on block and fork choice metrics.
finalized := s.store.FinalizedCheckpt()
if finalized == nil {
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return err
}
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errNilFinalizedInStore
}
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
// Log block sync status.
if err := logBlockSyncStatus(blockCopy.Block(), blockRoot, finalized, receivedTime, uint64(s.genesisTime.Unix())); err != nil {
if err := logBlockSyncStatus(blockCopy.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix())); err != nil {
return err
}
// Log state transition data.
@@ -79,8 +89,7 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Sig
defer span.End()
// Apply state transition on the incoming newly received block batches, one by one.
fCheckpoints, jCheckpoints, err := s.onBlockBatch(ctx, blocks, blkRoots)
if err != nil {
if err := s.onBlockBatch(ctx, blocks, blkRoots); err != nil {
err := errors.Wrap(err, "could not process block in batch")
tracing.AnnotateError(span, err)
return err
@@ -88,10 +97,6 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Sig
for i, b := range blocks {
blockCopy := b.Copy()
if err = s.handleBlockAfterBatchVerify(ctx, blockCopy, blkRoots[i], fCheckpoints[i], jCheckpoints[i]); err != nil {
tracing.AnnotateError(span, err)
return err
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
@@ -104,9 +109,9 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Sig
})
// Reports on blockCopy and fork choice metrics.
finalized := s.store.FinalizedCheckpt()
if finalized == nil {
return errNilFinalizedInStore
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
}
@@ -114,7 +119,10 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Sig
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
finalized := s.store.FinalizedCheckpt()
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
if finalized == nil {
return errNilFinalizedInStore
}
@@ -133,6 +141,11 @@ func (s *Service) HasBlock(ctx context.Context, root [32]byte) bool {
return s.hasBlockInInitSyncOrDB(ctx, root)
}
// ReceiveAttesterSlashing receives an attester slashing and inserts it to forkchoice
func (s *Service) ReceiveAttesterSlashing(ctx context.Context, slashing *ethpb.AttesterSlashing) {
s.InsertSlashingsToForkChoiceStore(ctx, []*ethpb.AttesterSlashing{slashing})
}
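As a quick illustration of the new SlashingReceiver contract (a hedged sketch, not part of the change set): a compile-time assertion plus a trivial call site show how sync code can hand a validated slashing to the chain service, which simply forwards it to fork choice as above. The package name and helper are hypothetical; the interface, service type, and import paths are the ones appearing in this diff.
package slashingforward
import (
	"context"
	"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
	ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// Compile-time check that the chain service satisfies the new interface.
var _ blockchain.SlashingReceiver = (*blockchain.Service)(nil)
// forwardSlashing hands a validated attester slashing to the chain service.
// ReceiveAttesterSlashing is fire-and-forget: it only inserts the slashing into
// the fork-choice store, so there is no error to handle here.
func forwardSlashing(ctx context.Context, chain blockchain.SlashingReceiver, s *ethpb.AttesterSlashing) {
	chain.ReceiveAttesterSlashing(ctx, s)
}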
func (s *Service) handlePostBlockOperations(b interfaces.BeaconBlock) error {
// Delete the processed block attestations from attestation pool.
if err := s.deletePoolAtts(b.Body().Attestations()); err != nil {
@@ -162,7 +175,10 @@ func (s *Service) checkSaveHotStateDB(ctx context.Context) error {
currentEpoch := slots.ToEpoch(s.CurrentSlot())
// Prevent `sinceFinality` from underflowing.
var sinceFinality types.Epoch
finalized := s.store.FinalizedCheckpt()
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return err
}
if finalized == nil {
return errNilFinalizedInStore
}

View File

@@ -34,7 +34,7 @@ func TestService_ReceiveBlock(t *testing.T) {
return blk
}
params.SetupTestConfigCleanup(t)
bc := params.BeaconConfig()
bc := params.BeaconConfig().Copy()
bc.ShardCommitteePeriod = 0 // Required for voluntary exits test in reasonable time.
params.OverrideBeaconConfig(bc)
@@ -127,7 +127,7 @@ func TestService_ReceiveBlock(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, genesisBlockRoot)),
WithForkChoiceStore(protoarray.New()),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
@@ -141,7 +141,8 @@ func TestService_ReceiveBlock(t *testing.T) {
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
h := [32]byte{'a'}
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, h)
root, err := tt.args.block.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(tt.args.block)
@@ -167,7 +168,7 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, genesisBlockRoot)),
WithForkChoiceStore(protoarray.New()),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
@@ -181,7 +182,7 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'a'})
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wg := sync.WaitGroup{}
@@ -245,11 +246,9 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
genesisBlockRoot, err := genesis.HashTreeRoot(ctx)
require.NoError(t, err)
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, genesisBlockRoot)),
WithForkChoiceStore(protoarray.New()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
}
@@ -262,7 +261,7 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'a'})
root, err := tt.args.block.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(tt.args.block)
@@ -312,7 +311,7 @@ func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
require.NoError(t, err)
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{})
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{}, [32]byte{})
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
assert.LogsContain(t, hook, "Entering mode to save hot states in DB")
@@ -323,7 +322,7 @@ func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
opts := testServiceOptsWithDB(t)
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{})
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{}, [32]byte{})
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
s.genesisTime = time.Now()
@@ -336,7 +335,7 @@ func TestCheckSaveHotStateDB_Overflow(t *testing.T) {
opts := testServiceOptsWithDB(t)
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 10000000})
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{}, [32]byte{})
s.genesisTime = time.Now()
require.NoError(t, s.checkSaveHotStateDB(context.Background()))

View File

@@ -23,6 +23,7 @@ import (
f "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/slashings"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
@@ -35,11 +36,11 @@ import (
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/time"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -50,21 +51,22 @@ const headSyncMinEpochsAfterCheckpoint = 128
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
nextEpochBoundarySlot types.Slot
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.SignedBeaconBlock
initSyncBlocksLock sync.RWMutex
justifiedBalances *stateBalanceCache
wsVerifier *WeakSubjectivityVerifier
store *store.Store
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
nextEpochBoundarySlot types.Slot
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.SignedBeaconBlock
initSyncBlocksLock sync.RWMutex
justifiedBalances *stateBalanceCache
wsVerifier *WeakSubjectivityVerifier
store *store.Store
processAttestationsLock sync.Mutex
}
// config options for the service.
@@ -136,18 +138,25 @@ func (s *Service) Start() {
}
}
s.spawnProcessAttestationsRoutine(s.cfg.StateNotifier.StateFeed())
s.fillMissingPayloadIDRoutine(s.ctx, s.cfg.StateNotifier.StateFeed())
}
// Stop the blockchain service's main event loop and associated goroutines.
func (s *Service) Stop() error {
defer s.cancel()
// lock before accessing s.head, s.head.state, s.head.state.FinalizedCheckpoint().Root
s.headLock.RLock()
if s.cfg.StateGen != nil && s.head != nil && s.head.state != nil {
if err := s.cfg.StateGen.ForceCheckpoint(s.ctx, s.head.state.FinalizedCheckpoint().Root); err != nil {
r := s.head.state.FinalizedCheckpoint().Root
s.headLock.RUnlock()
// Save the last finalized state so that starting up in the following run will be much faster.
if err := s.cfg.StateGen.ForceCheckpoint(s.ctx, r); err != nil {
return err
}
} else {
s.headLock.RUnlock()
}
// Save initial sync cached blocks to the DB before stop.
return s.cfg.BeaconDB.SaveBlocks(s.ctx, s.getInitSyncBlocks())
}
@@ -185,34 +194,40 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
if justified == nil {
return errNilJustifiedCheckpoint
}
finalized, err := s.cfg.BeaconDB.FinalizedCheckpoint(s.ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
if finalized == nil {
return errNilFinalizedCheckpoint
}
s.store = store.New(justified, finalized)
var forkChoicer f.ForkChoicer
fRoot := bytesutil.ToBytes32(finalized.Root)
fRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
if features.Get().EnableForkChoiceDoublyLinkedTree {
forkChoicer = doublylinkedtree.New(justified.Epoch, finalized.Epoch)
forkChoicer = doublylinkedtree.New()
} else {
forkChoicer = protoarray.New(justified.Epoch, finalized.Epoch, fRoot)
forkChoicer = protoarray.New()
}
s.cfg.ForkChoiceStore = forkChoicer
fb, err := s.cfg.BeaconDB.Block(s.ctx, s.ensureRootNotZeros(fRoot))
if err := forkChoicer.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
}
if err := forkChoicer.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
Root: bytesutil.ToBytes32(finalized.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's finalized checkpoint")
}
st, err := s.cfg.StateGen.StateByRoot(s.ctx, fRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint block")
return errors.Wrap(err, "could not get finalized checkpoint state")
}
if fb == nil || fb.IsNil() {
return errNilFinalizedInStore
}
payloadHash, err := getBlockPayloadHash(fb.Block())
if err != nil {
return errors.Wrap(err, "could not get execution payload hash")
}
fSlot := fb.Block().Slot()
if err := forkChoicer.InsertOptimisticBlock(s.ctx, fSlot, fRoot, params.BeaconConfig().ZeroHash,
payloadHash, justified.Epoch, finalized.Epoch); err != nil {
if err := forkChoicer.InsertNode(s.ctx, st, fRoot); err != nil {
return errors.Wrap(err, "could not insert finalized block to forkchoice")
}
@@ -225,18 +240,6 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
return errors.Wrap(err, "could not set finalized block as validated")
}
}
h := s.headBlock().Block()
if h.Slot() > fSlot {
log.WithFields(logrus.Fields{
"startSlot": fSlot,
"endSlot": h.Slot(),
}).Info("Loading blocks to fork choice store, this may take a while.")
if err := s.fillInForkChoiceMissingBlocks(s.ctx, h, finalized, justified); err != nil {
return errors.Wrap(err, "could not fill in fork choice store missing blocks")
}
}
// not attempting to save initial sync blocks here, because there shouldn't be any until
// after the statefeed.Initialized event is fired (below)
if err := s.wsVerifier.VerifyWeakSubjectivity(s.ctx, finalized.Epoch); err != nil {
@@ -271,7 +274,7 @@ func (s *Service) originRootFromSavedState(ctx context.Context) ([32]byte, error
if err != nil {
return originRoot, errors.Wrap(err, "could not get genesis block from db")
}
if err := helpers.BeaconBlockIsNil(genesisBlock); err != nil {
if err := wrapper.BeaconBlockIsNil(genesisBlock); err != nil {
return originRoot, err
}
genesisBlkRoot, err := genesisBlock.Block().HashTreeRoot()
@@ -337,14 +340,13 @@ func (s *Service) initializeHeadFromDB(ctx context.Context) error {
finalizedState.Slot(), flags.HeadSync.Name)
}
}
finalizedBlock, err := s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized block from db")
if finalizedState == nil || finalizedState.IsNil() {
return errors.New("finalized state can't be nil")
}
if finalizedState == nil || finalizedState.IsNil() || finalizedBlock == nil || finalizedBlock.IsNil() {
return errors.New("finalized state and block can't be nil")
finalizedBlock, err := s.getBlock(ctx, finalizedRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized block")
}
s.setHead(finalizedRoot, finalizedBlock, finalizedState)
@@ -440,7 +442,7 @@ func (s *Service) initializeBeaconChain(
s.cfg.ChainStartFetcher.ClearPreGenesisData()
// Update committee shuffled indices for genesis epoch.
if err := helpers.UpdateCommitteeCache(genesisState, 0 /* genesis epoch */); err != nil {
if err := helpers.UpdateCommitteeCache(ctx, genesisState, 0); err != nil {
return nil, err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, genesisState); err != nil {
@@ -473,17 +475,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
genesisCheckpoint := genesisState.FinalizedCheckpoint()
s.store = store.New(genesisCheckpoint, genesisCheckpoint)
payloadHash, err := getBlockPayloadHash(genesisBlk.Block())
if err != nil {
return err
}
if err := s.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx,
genesisBlk.Block().Slot(),
genesisBlkRoot,
params.BeaconConfig().ZeroHash,
payloadHash,
genesisCheckpoint.Epoch,
genesisCheckpoint.Epoch); err != nil {
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, genesisState, genesisBlkRoot); err != nil {
log.Fatalf("Could not process genesis block for fork choice: %v", err)
}
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, genesisBlkRoot); err != nil {

View File

@@ -31,6 +31,7 @@ import (
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/container/trie"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
@@ -86,9 +87,11 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
bState, _ := util.DeterministicGenesisState(t, 10)
pbState, err := v1.ProtobufBeaconState(bState.InnerStateUnsafe())
require.NoError(t, err)
mockTrie, err := trie.NewTrie(0)
require.NoError(t, err)
err = beaconDB.SavePowchainData(ctx, &ethpb.ETH1ChainData{
BeaconState: pbState,
Trie: &ethpb.SparseMerkleTrie{},
Trie: mockTrie.ToProto(),
CurrentEth1Data: &ethpb.LatestETH1Data{
BlockHash: make([]byte, 32),
},
@@ -127,7 +130,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(protoarray.New(0, 0, params.BeaconConfig().ZeroHash)),
WithForkChoiceStore(protoarray.New()),
WithAttestationService(attService),
WithStateGen(stateGen),
}
@@ -221,7 +224,8 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
require.NoError(t, err)
trie, _, err := util.DepositTrieFromDeposits(deposits)
require.NoError(t, err)
hashTreeRoot := trie.HashTreeRoot()
hashTreeRoot, err := trie.HashTreeRoot()
require.NoError(t, err)
genState, err := transition.EmptyGenesisState()
require.NoError(t, err)
err = genState.SetEth1Data(&ethpb.Eth1Data{
@@ -273,8 +277,12 @@ func TestChainService_CorrectGenesisRoots(t *testing.T) {
// Test the start function.
chainService.Start()
require.DeepEqual(t, blkRoot[:], chainService.store.FinalizedCheckpt().Root, "Finalize Checkpoint root is incorrect")
require.DeepEqual(t, params.BeaconConfig().ZeroHash[:], chainService.store.JustifiedCheckpt().Root, "Justified Checkpoint root is incorrect")
cp, err := chainService.store.FinalizedCheckpt()
require.NoError(t, err)
require.DeepEqual(t, blkRoot[:], cp.Root, "Finalize Checkpoint root is incorrect")
cp, err = chainService.store.JustifiedCheckpt()
require.NoError(t, err)
require.DeepEqual(t, params.BeaconConfig().ZeroHash[:], cp.Root, "Justified Checkpoint root is incorrect")
require.NoError(t, chainService.Stop(), "Unable to stop chain service")
@@ -497,10 +505,10 @@ func TestHasBlock_ForkChoiceAndDB_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{}), BeaconDB: beaconDB},
cfg: &config{ForkChoiceStore: protoarray.New(), BeaconDB: beaconDB},
store: &store.Store{},
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]})
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}, [32]byte{})
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -518,10 +526,10 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0), BeaconDB: beaconDB},
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
store: &store.Store{},
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]})
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}, [32]byte{})
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -591,10 +599,10 @@ func BenchmarkHasBlockForkChoiceStore_ProtoArray(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{}), BeaconDB: beaconDB},
cfg: &config{ForkChoiceStore: protoarray.New(), BeaconDB: beaconDB},
store: &store.Store{},
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]})
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}, [32]byte{})
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
@@ -614,10 +622,10 @@ func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0), BeaconDB: beaconDB},
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
store: &store.Store{},
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]})
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}, [32]byte{})
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)

View File

@@ -11,8 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
)
var errNilStateFromStategen = errors.New("justified state can't be nil")
type stateBalanceCache struct {
sync.Mutex
balances []uint64
@@ -37,7 +35,7 @@ func newStateBalanceCache(sg *stategen.State) (*stateBalanceCache, error) {
// the previously read value. This cache assumes we only want to cache one
// set of balances for a single root (the current justified root).
//
// warning: this is not thread-safe on its own, relies on get() for locking
// WARNING: this is not thread-safe on its own, relies on get() for locking
func (c *stateBalanceCache) update(ctx context.Context, justifiedRoot [32]byte) ([]uint64, error) {
stateBalanceCacheMiss.Inc()
justifiedState, err := c.stateGen.StateByRoot(ctx, justifiedRoot)

View File

@@ -10,7 +10,10 @@ go_library(
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store",
visibility = ["//beacon-chain:__subpackages__"],
deps = ["//proto/prysm/v1alpha1:go_default_library"],
deps = [
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(

View File

@@ -17,9 +17,19 @@ func TestNew(t *testing.T) {
Root: []byte("hello"),
}
s := New(j, f)
require.DeepSSZEqual(t, s.JustifiedCheckpt(), j)
require.DeepSSZEqual(t, s.BestJustifiedCheckpt(), j)
require.DeepSSZEqual(t, s.PrevJustifiedCheckpt(), j)
require.DeepSSZEqual(t, s.FinalizedCheckpt(), f)
require.DeepSSZEqual(t, s.PrevFinalizedCheckpt(), f)
cp, err := s.JustifiedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, j, cp)
cp, err = s.BestJustifiedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, j, cp)
cp, err = s.PrevJustifiedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, j, cp)
cp, err = s.FinalizedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, f, cp)
cp, err = s.PrevFinalizedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, f, cp)
}

View File

@@ -1,40 +1,76 @@
package store
import ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
var (
ErrNilCheckpoint = errors.New("nil checkpoint")
)
// PrevJustifiedCheckpt returns the previous justified checkpoint in the Store.
func (s *Store) PrevJustifiedCheckpt() *ethpb.Checkpoint {
func (s *Store) PrevJustifiedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
return s.prevJustifiedCheckpt
if s.prevJustifiedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.prevJustifiedCheckpt, nil
}
// BestJustifiedCheckpt returns the best justified checkpoint in the Store.
func (s *Store) BestJustifiedCheckpt() *ethpb.Checkpoint {
func (s *Store) BestJustifiedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
return s.bestJustifiedCheckpt
if s.bestJustifiedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.bestJustifiedCheckpt, nil
}
// JustifiedCheckpt returns the justified checkpoint in the Store.
func (s *Store) JustifiedCheckpt() *ethpb.Checkpoint {
func (s *Store) JustifiedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
return s.justifiedCheckpt
if s.justifiedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.justifiedCheckpt, nil
}
// JustifiedPayloadBlockHash returns the justified payload block hash reflecting the justified checkpoint.
func (s *Store) JustifiedPayloadBlockHash() [32]byte {
s.RLock()
defer s.RUnlock()
return s.justifiedPayloadBlockHash
}
// PrevFinalizedCheckpt returns the previous finalized checkpoint in the Store.
func (s *Store) PrevFinalizedCheckpt() *ethpb.Checkpoint {
func (s *Store) PrevFinalizedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
return s.prevFinalizedCheckpt
if s.prevFinalizedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.prevFinalizedCheckpt, nil
}
// FinalizedCheckpt returns the finalized checkpoint in the Store.
func (s *Store) FinalizedCheckpt() *ethpb.Checkpoint {
func (s *Store) FinalizedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
return s.finalizedCheckpt
if s.finalizedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.finalizedCheckpt, nil
}
// FinalizedPayloadBlockHash returns the finalized payload block hash reflecting the finalized checkpoint.
func (s *Store) FinalizedPayloadBlockHash() [32]byte {
s.RLock()
defer s.RUnlock()
return s.finalizedPayloadBlockHash
}
// SetPrevJustifiedCheckpt sets the previous justified checkpoint in the Store.
@@ -51,18 +87,20 @@ func (s *Store) SetBestJustifiedCheckpt(cp *ethpb.Checkpoint) {
s.bestJustifiedCheckpt = cp
}
// SetJustifiedCheckpt sets the justified checkpoint in the Store.
func (s *Store) SetJustifiedCheckpt(cp *ethpb.Checkpoint) {
// SetJustifiedCheckptAndPayloadHash sets the justified checkpoint and blockhash in the Store.
func (s *Store) SetJustifiedCheckptAndPayloadHash(cp *ethpb.Checkpoint, h [32]byte) {
s.Lock()
defer s.Unlock()
s.justifiedCheckpt = cp
s.justifiedPayloadBlockHash = h
}
// SetFinalizedCheckpt sets the finalized checkpoint in the Store.
func (s *Store) SetFinalizedCheckpt(cp *ethpb.Checkpoint) {
// SetFinalizedCheckptAndPayloadHash sets the finalized checkpoint and blockhash in the Store.
func (s *Store) SetFinalizedCheckptAndPayloadHash(cp *ethpb.Checkpoint, h [32]byte) {
s.Lock()
defer s.Unlock()
s.finalizedCheckpt = cp
s.finalizedPayloadBlockHash = h
}
// SetPrevFinalizedCheckpt sets the previous finalized checkpoint in the Store.
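A minimal usage sketch of the reworked Store API above (a hypothetical helper assumed to live under beacon-chain, since the store package's Bazel visibility is restricted to those subpackages): read paths now have to handle ErrNilCheckpoint, and write paths set the checkpoint together with its payload block hash.
package storeusage
import (
	"errors"
	"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
	"github.com/prysmaticlabs/prysm/config/params"
	ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// checkpointOrDefault reads the finalized checkpoint, seeding a zero-root checkpoint
// (with an empty payload hash) when none has been set yet, then returns the checkpoint
// together with its payload block hash.
func checkpointOrDefault(s *store.Store) (*ethpb.Checkpoint, [32]byte, error) {
	cp, err := s.FinalizedCheckpt()
	if errors.Is(err, store.ErrNilCheckpoint) {
		s.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}, [32]byte{})
		cp, err = s.FinalizedCheckpt()
	}
	if err != nil {
		return nil, [32]byte{}, err
	}
	return cp, s.FinalizedPayloadBlockHash(), nil
}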

View File

@@ -10,44 +10,63 @@ import (
func Test_store_PrevJustifiedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
require.Equal(t, cp, s.PrevJustifiedCheckpt())
_, err := s.PrevJustifiedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
s.SetPrevJustifiedCheckpt(cp)
require.Equal(t, cp, s.PrevJustifiedCheckpt())
got, err := s.PrevJustifiedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
}
func Test_store_BestJustifiedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
require.Equal(t, cp, s.BestJustifiedCheckpt())
_, err := s.BestJustifiedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
s.SetBestJustifiedCheckpt(cp)
require.Equal(t, cp, s.BestJustifiedCheckpt())
got, err := s.BestJustifiedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
}
func Test_store_JustifiedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
require.Equal(t, cp, s.JustifiedCheckpt())
_, err := s.JustifiedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
s.SetJustifiedCheckpt(cp)
require.Equal(t, cp, s.JustifiedCheckpt())
h := [32]byte{'b'}
s.SetJustifiedCheckptAndPayloadHash(cp, h)
got, err := s.JustifiedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
require.Equal(t, h, s.JustifiedPayloadBlockHash())
}
func Test_store_FinalizedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
require.Equal(t, cp, s.FinalizedCheckpt())
_, err := s.FinalizedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
s.SetFinalizedCheckpt(cp)
require.Equal(t, cp, s.FinalizedCheckpt())
h := [32]byte{'b'}
s.SetFinalizedCheckptAndPayloadHash(cp, h)
got, err := s.FinalizedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
require.Equal(t, h, s.FinalizedPayloadBlockHash())
}
func Test_store_PrevFinalizedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
require.Equal(t, cp, s.PrevFinalizedCheckpt())
_, err := s.PrevFinalizedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
s.SetPrevFinalizedCheckpt(cp)
require.Equal(t, cp, s.PrevFinalizedCheckpt())
got, err := s.PrevFinalizedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
}

View File

@@ -17,9 +17,11 @@ import (
// best_justified_checkpoint: Checkpoint
// proposerBoostRoot: Root
type Store struct {
justifiedCheckpt *ethpb.Checkpoint
finalizedCheckpt *ethpb.Checkpoint
bestJustifiedCheckpt *ethpb.Checkpoint
justifiedCheckpt *ethpb.Checkpoint
justifiedPayloadBlockHash [32]byte
finalizedCheckpt *ethpb.Checkpoint
finalizedPayloadBlockHash [32]byte
bestJustifiedCheckpt *ethpb.Checkpoint
sync.RWMutex
// These are not part of the consensus spec, but we do use them to return gRPC API requests.
// TODO(10094): Consider removing in v3.

View File

@@ -62,6 +62,7 @@ type ChainService struct {
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
OptimisticCheckRootReceived [32]byte
}
// ForkChoicer mocks the same method in the chain service
@@ -282,18 +283,18 @@ func (s *ChainService) CurrentFork() *ethpb.Fork {
}
// FinalizedCheckpt mocks FinalizedCheckpt method in chain service.
func (s *ChainService) FinalizedCheckpt() *ethpb.Checkpoint {
return s.FinalizedCheckPoint
func (s *ChainService) FinalizedCheckpt() (*ethpb.Checkpoint, error) {
return s.FinalizedCheckPoint, nil
}
// CurrentJustifiedCheckpt mocks CurrentJustifiedCheckpt method in chain service.
func (s *ChainService) CurrentJustifiedCheckpt() *ethpb.Checkpoint {
return s.CurrentJustifiedCheckPoint
func (s *ChainService) CurrentJustifiedCheckpt() (*ethpb.Checkpoint, error) {
return s.CurrentJustifiedCheckPoint, nil
}
// PreviousJustifiedCheckpt mocks PreviousJustifiedCheckpt method in chain service.
func (s *ChainService) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
return s.PreviousJustifiedCheckPoint
func (s *ChainService) PreviousJustifiedCheckpt() (*ethpb.Checkpoint, error) {
return s.PreviousJustifiedCheckPoint, nil
}
// ReceiveAttestation mocks ReceiveAttestation method in chain service.
@@ -376,7 +377,7 @@ func (_ *ChainService) HeadGenesisValidatorsRoot() [32]byte {
return [32]byte{}
}
// VerifyBlkDescendant mocks VerifyBlkDescendant and always returns nil.
// VerifyFinalizedBlkDescendant mocks VerifyBlkDescendant and always returns nil.
func (s *ChainService) VerifyFinalizedBlkDescendant(_ context.Context, _ [32]byte) error {
return s.VerifyBlkDescendantErr
}
@@ -447,6 +448,13 @@ func (s *ChainService) IsOptimistic(_ context.Context) (bool, error) {
}
// IsOptimisticForRoot mocks the same method in the chain service.
func (s *ChainService) IsOptimisticForRoot(_ context.Context, _ [32]byte) (bool, error) {
func (s *ChainService) IsOptimisticForRoot(_ context.Context, root [32]byte) (bool, error) {
s.OptimisticCheckRootReceived = root
return s.Optimistic, nil
}
// UpdateHead mocks the same method in the chain service.
func (s *ChainService) UpdateHead(_ context.Context) error { return nil }
// ReceiveAttesterSlashing mocks the same method in the chain service.
func (s *ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterSlashing) {}
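A short, hypothetical test fragment illustrating the new mock field: because IsOptimisticForRoot now records the root it was asked about in OptimisticCheckRootReceived, handlers under test can assert exactly which root was checked. The package name, import alias, and test name below are illustrative; the mock fields and methods are the ones shown in this diff.
package example_test
import (
	"context"
	"testing"
	mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
	"github.com/prysmaticlabs/prysm/testing/require"
)
func TestIsOptimisticForRoot_RecordsRoot(t *testing.T) {
	chain := &mock.ChainService{Optimistic: true}
	isOpt, err := chain.IsOptimisticForRoot(context.Background(), [32]byte{'a'})
	require.NoError(t, err)
	require.Equal(t, true, isOpt)
	// The mock remembers the root it was queried with.
	require.Equal(t, [32]byte{'a'}, chain.OptimisticCheckRootReceived)
}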

View File

@@ -13,9 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/time/slots"
)
var errWSBlockNotFound = errors.New("weak subjectivity root not found in db")
var errWSBlockNotFoundInEpoch = errors.New("weak subjectivity root not found in db within epoch")
type weakSubjectivityDB interface {
HasBlock(ctx context.Context, blockRoot [32]byte) bool
BlockRoots(ctx context.Context, f *filters.QueryFilter) ([][32]byte, error)
@@ -30,10 +27,11 @@ type WeakSubjectivityVerifier struct {
db weakSubjectivityDB
}
// NewWeakSubjectivityVerifier validates a checkpoint, and if valid, uses it to initialize a weak subjectivity verifier
// NewWeakSubjectivityVerifier validates a checkpoint, and if valid, uses it to initialize a weak subjectivity verifier.
func NewWeakSubjectivityVerifier(wsc *ethpb.Checkpoint, db weakSubjectivityDB) (*WeakSubjectivityVerifier, error) {
if wsc == nil || len(wsc.Root) == 0 || wsc.Epoch == 0 {
log.Warn("No valid weak subjectivity checkpoint specified, running without weak subjectivity verification")
log.Info("No checkpoint for syncing provided, node will begin syncing from genesis. Checkpoint Sync is an optional feature that allows your node to sync from a more recent checkpoint, " +
"which enhances the security of your local beacon node and the broader network. See https://docs.prylabs.network/docs/next/prysm-usage/checkpoint-sync/ to learn how to configure Checkpoint Sync.")
return &WeakSubjectivityVerifier{
enabled: false,
}, nil
@@ -58,7 +56,6 @@ func (v *WeakSubjectivityVerifier) VerifyWeakSubjectivity(ctx context.Context, f
if v.verified || !v.enabled {
return nil
}
// Two conditions are described in the specs:
// IF epoch_number > store.finalized_checkpoint.epoch,
// then ASSERT during block sync that block with root block_root
@@ -92,6 +89,5 @@ func (v *WeakSubjectivityVerifier) VerifyWeakSubjectivity(ctx context.Context, f
return nil
}
}
return errors.Wrap(errWSBlockNotFoundInEpoch, fmt.Sprintf("root=%#x, epoch=%d", v.root, v.epoch))
}
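A brief sketch of the two paths described above, written as a hypothetical helper inside the blockchain package (weakSubjectivityDB is unexported there); beaconDB, the checkpoint, and the finalized epoch are placeholders supplied by the caller:
// verifyWS constructs a verifier and runs the weak subjectivity check. With a nil or
// zero checkpoint the verifier is created disabled (after logging the informational
// message above) and VerifyWeakSubjectivity returns nil immediately; with a populated
// checkpoint it verifies the root once the finalized epoch reaches the checkpoint
// epoch, returning an error if the root cannot be found within that epoch.
func verifyWS(ctx context.Context, beaconDB weakSubjectivityDB, wsc *ethpb.Checkpoint, finalizedEpoch types.Epoch) error {
	wv, err := NewWeakSubjectivityVerifier(wsc, beaconDB)
	if err != nil {
		return err
	}
	return wv.VerifyWeakSubjectivity(ctx, finalizedEpoch)
}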

View File

@@ -79,8 +79,10 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
store: &store.Store{},
wsVerifier: wv,
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: tt.finalizedEpoch})
err = s.wsVerifier.VerifyWeakSubjectivity(context.Background(), s.store.FinalizedCheckpt().Epoch)
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: tt.finalizedEpoch}, [32]byte{})
cp, err := s.store.FinalizedCheckpt()
require.NoError(t, err)
err = s.wsVerifier.VerifyWeakSubjectivity(context.Background(), cp.Epoch)
if tt.wantErr == nil {
require.NoError(t, err)
} else {

View File

@@ -0,0 +1,30 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"error.go",
"metric.go",
"option.go",
"service.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/builder",
visibility = ["//visibility:public"],
deps = [
"//api/client/builder:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/db:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network:go_default_library",
"//network/authorization:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -0,0 +1,7 @@
package builder
import "github.com/pkg/errors"
var (
ErrNotRunning = errors.New("builder is not running")
)

View File

@@ -0,0 +1,37 @@
package builder
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
submitBlindedBlockLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "submit_blinded_block_latency_milliseconds",
Help: "Captures RPC latency for submitting blinded block in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
getHeaderLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_header_latency_milliseconds",
Help: "Captures RPC latency for get header in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
getStatusLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_status_latency_milliseconds",
Help: "Captures RPC latency for get status in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
registerValidatorLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "register_validator_latency_milliseconds",
Help: "Captures RPC latency for register validator in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
)

View File

@@ -0,0 +1,45 @@
package builder
import (
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/network"
"github.com/prysmaticlabs/prysm/network/authorization"
"github.com/urfave/cli/v2"
)
type Option func(s *Service) error
// FlagOptions for builder service flag configurations.
func FlagOptions(c *cli.Context) ([]Option, error) {
endpoint := c.String(flags.MevRelayEndpoint.Name)
opts := []Option{
WithBuilderEndpoints(endpoint),
}
return opts, nil
}
// WithBuilderEndpoints sets the endpoint for the beacon chain builder service.
func WithBuilderEndpoints(endpoint string) Option {
return func(s *Service) error {
s.cfg.builderEndpoint = covertEndPoint(endpoint)
return nil
}
}
// WithDatabase sets the database for the beacon chain builder service.
func WithDatabase(database db.HeadAccessDatabase) Option {
return func(s *Service) error {
s.cfg.beaconDB = database
return nil
}
}
func covertEndPoint(ep string) network.Endpoint {
return network.Endpoint{
Url: ep,
Auth: network.AuthorizationData{ // Auth is not used for builder.
Method: authorization.None,
Value: "",
}}
}

View File

@@ -0,0 +1,121 @@
package builder
import (
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/api/client/builder"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/network"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"go.opencensus.io/trace"
)
// BlockBuilder defines the interface for interacting with the block builder
type BlockBuilder interface {
SubmitBlindedBlock(ctx context.Context, block *ethpb.SignedBlindedBeaconBlockBellatrix) (*v1.ExecutionPayload, error)
GetHeader(ctx context.Context, slot types.Slot, parentHash [32]byte, pubKey [48]byte) (*ethpb.SignedBuilderBid, error)
Status(ctx context.Context) error
RegisterValidator(ctx context.Context, reg *ethpb.SignedValidatorRegistrationV1) error
}
// config defines a config struct for dependencies into the service.
type config struct {
builderEndpoint network.Endpoint
beaconDB db.HeadAccessDatabase
headFetcher blockchain.HeadFetcher
}
// Service defines a service that provides a client for interacting with the beacon chain and MEV relay network.
type Service struct {
cfg *config
c *builder.Client
}
// NewService instantiates a new service.
func NewService(ctx context.Context, opts ...Option) (*Service, error) {
s := &Service{}
for _, opt := range opts {
if err := opt(s); err != nil {
return nil, err
}
}
if s.cfg.builderEndpoint.Url != "" {
c, err := builder.NewClient(s.cfg.builderEndpoint.Url)
if err != nil {
return nil, err
}
s.c = c
}
return s, nil
}
// Start initializes the service.
func (*Service) Start() {}
// Stop halts the service.
func (*Service) Stop() error {
return nil
}
// SubmitBlindedBlock submits a blinded block to the builder relay network.
func (s *Service) SubmitBlindedBlock(ctx context.Context, b *ethpb.SignedBlindedBeaconBlockBellatrix) (*v1.ExecutionPayload, error) {
ctx, span := trace.StartSpan(ctx, "builder.SubmitBlindedBlock")
defer span.End()
start := time.Now()
defer func() {
submitBlindedBlockLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
return s.c.SubmitBlindedBlock(ctx, b)
}
// GetHeader retrieves the header for a given slot and parent hash from the builder relay network.
func (s *Service) GetHeader(ctx context.Context, slot types.Slot, parentHash [32]byte, pubKey [48]byte) (*ethpb.SignedBuilderBid, error) {
ctx, span := trace.StartSpan(ctx, "builder.GetHeader")
defer span.End()
start := time.Now()
defer func() {
getHeaderLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
return s.c.GetHeader(ctx, slot, parentHash, pubKey)
}
// Status retrieves the status of the builder relay network.
func (s *Service) Status(ctx context.Context) error {
ctx, span := trace.StartSpan(ctx, "builder.Status")
defer span.End()
start := time.Now()
defer func() {
getStatusLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
return s.c.Status(ctx)
}
// RegisterValidator registers a validator with the builder relay network.
// It also saves the registration object to the DB.
func (s *Service) RegisterValidator(ctx context.Context, reg *ethpb.SignedValidatorRegistrationV1) error {
ctx, span := trace.StartSpan(ctx, "builder.RegisterValidator")
defer span.End()
start := time.Now()
defer func() {
registerValidatorLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
idx, exists := s.cfg.headFetcher.HeadPublicKeyToValidatorIndex(bytesutil.ToBytes48(reg.Message.Pubkey))
if !exists {
return nil // If the pubkey is not found, it is not a validator. Do nothing.
}
if err := s.c.RegisterValidator(ctx, reg); err != nil {
return errors.Wrap(err, "could not register validator")
}
return s.cfg.beaconDB.SaveRegistrationsByValidatorIDs(ctx, []types.ValidatorIndex{idx}, []*ethpb.ValidatorRegistrationV1{reg.Message})
}
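As a usage sketch for the service above (not part of the changeset): the import path and the WithBuilderEndpoint option name are assumptions standing in for whatever the accompanying options file actually exports, and the relay URL is a placeholder.

package main

import (
	"context"
	"log"

	"github.com/prysmaticlabs/prysm/beacon-chain/builder"
)

func main() {
	ctx := context.Background()
	// WithBuilderEndpoint is hypothetical; the real option constructors are
	// defined alongside the config struct shown above.
	svc, err := builder.NewService(ctx, builder.WithBuilderEndpoint("https://relay.example.org"))
	if err != nil {
		log.Fatal(err)
	}
	svc.Start()
	defer func() {
		if err := svc.Stop(); err != nil {
			log.Printf("stop: %v", err)
		}
	}()
	// Probe the relay before relying on it for headers or blinded blocks.
	if err := svc.Status(ctx); err != nil {
		log.Printf("builder relay unavailable: %v", err)
	}
}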

View File

@@ -103,9 +103,12 @@ func (c *CommitteeCache) Committee(ctx context.Context, slot types.Slot, seed [3
// AddCommitteeShuffledList adds a Committees shuffled list object to the cache.
// This method also trims the least recently used entry if the cache size has reached the max cache size limit.
func (c *CommitteeCache) AddCommitteeShuffledList(committees *Committees) error {
func (c *CommitteeCache) AddCommitteeShuffledList(ctx context.Context, committees *Committees) error {
c.lock.Lock()
defer c.lock.Unlock()
if err := ctx.Err(); err != nil {
return err
}
key, err := committeeKeyFn(committees)
if err != nil {
return err

View File

@@ -27,7 +27,7 @@ func (c *FakeCommitteeCache) Committee(ctx context.Context, slot types.Slot, see
// AddCommitteeShuffledList adds a Committees shuffled list object to the cache.
// This method also trims the least recently used entry if the cache size has reached the max cache size limit.
func (c *FakeCommitteeCache) AddCommitteeShuffledList(committees *Committees) error {
func (c *FakeCommitteeCache) AddCommitteeShuffledList(ctx context.Context, committees *Committees) error {
return nil
}

View File

@@ -31,7 +31,7 @@ func TestCommitteeCache_FuzzCommitteesByEpoch(t *testing.T) {
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(c))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), c))
_, err := cache.Committee(context.Background(), 0, c.Seed, 0)
require.NoError(t, err)
}
@@ -46,7 +46,7 @@ func TestCommitteeCache_FuzzActiveIndices(t *testing.T) {
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(c))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), c))
indices, err := cache.ActiveIndices(context.Background(), c.Seed)
require.NoError(t, err)

View File

@@ -50,7 +50,7 @@ func TestCommitteeCache_CommitteesByEpoch(t *testing.T) {
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
}
require.NoError(t, cache.AddCommitteeShuffledList(item))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
wantedIndex := types.CommitteeIndex(0)
indices, err = cache.Committee(context.Background(), slot, item.Seed, wantedIndex)
@@ -70,7 +70,7 @@ func TestCommitteeCache_ActiveIndices(t *testing.T) {
t.Error("Expected committee not to exist in empty cache")
}
require.NoError(t, cache.AddCommitteeShuffledList(item))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
indices, err = cache.ActiveIndices(context.Background(), item.Seed)
require.NoError(t, err)
@@ -85,7 +85,7 @@ func TestCommitteeCache_ActiveCount(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, 0, count, "Expected active count not to exist in empty cache")
require.NoError(t, cache.AddCommitteeShuffledList(item))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
count, err = cache.ActiveIndicesCount(context.Background(), item.Seed)
require.NoError(t, err)
@@ -101,7 +101,7 @@ func TestCommitteeCache_CanRotate(t *testing.T) {
for i := start; i < end; i++ {
s := []byte(strconv.Itoa(i))
item := &Committees{Seed: bytesutil.ToBytes32(s)}
require.NoError(t, cache.AddCommitteeShuffledList(item))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
}
k := cache.CommitteeCache.Keys()
@@ -110,7 +110,7 @@ func TestCommitteeCache_CanRotate(t *testing.T) {
sort.Slice(k, func(i, j int) bool {
return k[i].(string) < k[j].(string)
})
wanted := end - int(maxCommitteesCacheSize)
wanted := end - maxCommitteesCacheSize
s := bytesutil.ToBytes32([]byte(strconv.Itoa(wanted)))
assert.Equal(t, key(s), k[0], "incorrect key received for slot 190")
@@ -134,3 +134,20 @@ func TestCommitteeCacheOutOfRange(t *testing.T) {
_, err = cache.Committee(context.Background(), 0, seed, math.MaxUint64) // Overflow!
require.NotNil(t, err, "Did not fail as expected")
}
func TestCommitteeCache_DoesNothingWhenCancelledContext(t *testing.T) {
cache := NewCommitteesCache()
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []types.ValidatorIndex{1, 2, 3, 4, 5, 6}}
count, err := cache.ActiveIndicesCount(context.Background(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count, "Expected active count not to exist in empty cache")
cancelled, cancel := context.WithCancel(context.Background())
cancel()
require.ErrorIs(t, cache.AddCommitteeShuffledList(cancelled, item), context.Canceled)
count, err = cache.ActiveIndicesCount(context.Background(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count)
}

View File

@@ -430,7 +430,11 @@ func TestFinalizedDeposits_DepositsCachedCorrectly(t *testing.T) {
}
trie, err := trie.GenerateTrieFromItems(deps, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate deposit trie")
assert.Equal(t, trie.HashTreeRoot(), cachedDeposits.Deposits.HashTreeRoot())
rootA, err := trie.HashTreeRoot()
require.NoError(t, err)
rootB, err := cachedDeposits.Deposits.HashTreeRoot()
require.NoError(t, err)
assert.Equal(t, rootA, rootB)
}
func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
@@ -488,7 +492,11 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
}
trie, err := trie.GenerateTrieFromItems(deps, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate deposit trie")
assert.Equal(t, trie.HashTreeRoot(), cachedDeposits.Deposits.HashTreeRoot())
rootA, err := trie.HashTreeRoot()
require.NoError(t, err)
rootB, err := cachedDeposits.Deposits.HashTreeRoot()
require.NoError(t, err)
assert.Equal(t, rootA, rootB)
}
func TestFinalizedDeposits_HandleZeroDeposits(t *testing.T) {

View File

@@ -123,7 +123,7 @@ func (s *SyncCommitteeCache) idxPositionInCommittee(
// UpdatePositionsInCommittee updates the cached validator positions in the sync committee with respect to
// the current epoch and next epoch. This should be called when `current_sync_committee` and `next_sync_committee`
// change, which happens every `EPOCHS_PER_SYNC_COMMITTEE_PERIOD`.
func (s *SyncCommitteeCache) UpdatePositionsInCommittee(syncCommitteeBoundaryRoot [32]byte, st state.BeaconStateAltair) error {
func (s *SyncCommitteeCache) UpdatePositionsInCommittee(syncCommitteeBoundaryRoot [32]byte, st state.BeaconState) error {
csc, err := st.CurrentSyncCommittee()
if err != nil {
return err

View File

@@ -28,6 +28,6 @@ func (s *FakeSyncCommitteeCache) NextPeriodIndexPosition(root [32]byte, valIdx t
}
// UpdatePositionsInCommittee -- fake.
func (s *FakeSyncCommitteeCache) UpdatePositionsInCommittee(syncCommitteeBoundaryRoot [32]byte, state state.BeaconStateAltair) error {
func (s *FakeSyncCommitteeCache) UpdatePositionsInCommittee(syncCommitteeBoundaryRoot [32]byte, state state.BeaconState) error {
return nil
}

View File

@@ -16,6 +16,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core/altair",
visibility = [
"//beacon-chain:__subpackages__",
"//testing/endtoend/evaluators:__subpackages__",
"//testing/spectest:__subpackages__",
"//testing/util:__pkg__",
"//validator/client:__pkg__",
@@ -34,13 +35,13 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -13,6 +13,7 @@ import (
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
"go.opencensus.io/trace"
@@ -25,7 +26,7 @@ func ProcessAttestationsNoVerifySignature(
beaconState state.BeaconState,
b interfaces.SignedBeaconBlock,
) (state.BeaconState, error) {
if err := helpers.BeaconBlockIsNil(b); err != nil {
if err := wrapper.BeaconBlockIsNil(b); err != nil {
return nil, err
}
body := b.Block().Body()
@@ -46,10 +47,10 @@ func ProcessAttestationsNoVerifySignature(
// method is used to validate attestations whose signatures have already been verified or will be verified later.
func ProcessAttestationNoVerifySignature(
ctx context.Context,
beaconState state.BeaconStateAltair,
beaconState state.BeaconState,
att *ethpb.Attestation,
totalBalance uint64,
) (state.BeaconStateAltair, error) {
) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "altair.ProcessAttestationNoVerifySignature")
defer span.End()
@@ -262,7 +263,7 @@ func RewardProposer(ctx context.Context, beaconState state.BeaconState, proposer
// participation_flag_indices.append(TIMELY_HEAD_FLAG_INDEX)
//
// return participation_flag_indices
func AttestationParticipationFlagIndices(beaconState state.BeaconStateAltair, data *ethpb.AttestationData, delay types.Slot) (map[uint8]bool, error) {
func AttestationParticipationFlagIndices(beaconState state.BeaconState, data *ethpb.AttestationData, delay types.Slot) (map[uint8]bool, error) {
currEpoch := time.CurrentEpoch(beaconState)
var justifiedCheckpt *ethpb.Checkpoint
if data.Target.Epoch == currEpoch {

View File

@@ -44,7 +44,7 @@ import (
// increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
// else:
// decrease_balance(state, participant_index, participant_reward)
func ProcessSyncAggregate(ctx context.Context, s state.BeaconStateAltair, sync *ethpb.SyncAggregate) (state.BeaconStateAltair, error) {
func ProcessSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.SyncAggregate) (state.BeaconState, error) {
votedKeys, votedIndices, didntVoteIndices, err := FilterSyncCommitteeVotes(s, sync)
if err != nil {
return nil, err
@@ -58,7 +58,7 @@ func ProcessSyncAggregate(ctx context.Context, s state.BeaconStateAltair, sync *
}
// FilterSyncCommitteeVotes filters the validator public keys and indices for the ones that voted and didn't vote.
func FilterSyncCommitteeVotes(s state.BeaconStateAltair, sync *ethpb.SyncAggregate) (
func FilterSyncCommitteeVotes(s state.BeaconState, sync *ethpb.SyncAggregate) (
votedKeys []bls.PublicKey,
votedIndices []types.ValidatorIndex,
didntVoteIndices []types.ValidatorIndex,
@@ -100,7 +100,7 @@ func FilterSyncCommitteeVotes(s state.BeaconStateAltair, sync *ethpb.SyncAggrega
}
// VerifySyncCommitteeSig verifies sync committee signature `syncSig` is valid with respect to public keys `syncKeys`.
func VerifySyncCommitteeSig(s state.BeaconStateAltair, syncKeys []bls.PublicKey, syncSig []byte) error {
func VerifySyncCommitteeSig(s state.BeaconState, syncKeys []bls.PublicKey, syncSig []byte) error {
ps := slots.PrevSlot(s.Slot())
d, err := signing.Domain(s.Fork(), slots.ToEpoch(ps), params.BeaconConfig().DomainSyncCommittee, s.GenesisValidatorsRoot())
if err != nil {
@@ -126,7 +126,7 @@ func VerifySyncCommitteeSig(s state.BeaconStateAltair, syncKeys []bls.PublicKey,
}
// ApplySyncRewardsPenalties applies rewards and penalties for proposer and sync committee participants.
func ApplySyncRewardsPenalties(ctx context.Context, s state.BeaconStateAltair, votedIndices, didntVoteIndices []types.ValidatorIndex) (state.BeaconStateAltair, error) {
func ApplySyncRewardsPenalties(ctx context.Context, s state.BeaconState, votedIndices, didntVoteIndices []types.ValidatorIndex) (state.BeaconState, error) {
activeBalance, err := helpers.TotalActiveBalance(s)
if err != nil {
return nil, err

View File

@@ -13,9 +13,9 @@ import (
// ProcessDeposits processes validator deposits for beacon state Altair.
func ProcessDeposits(
ctx context.Context,
beaconState state.BeaconStateAltair,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconStateAltair, error) {
) (state.BeaconState, error) {
batchVerified, err := blocks.BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, err
@@ -34,7 +34,7 @@ func ProcessDeposits(
}
// ProcessDeposit processes validator deposit for beacon state Altair.
func ProcessDeposit(ctx context.Context, beaconState state.BeaconStateAltair, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconStateAltair, error) {
func ProcessDeposit(ctx context.Context, beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, error) {
beaconState, isNewValidator, err := blocks.ProcessDeposit(beaconState, deposit, verifySignature)
if err != nil {
return nil, err

View File

@@ -143,7 +143,8 @@ func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
},
}
balances := []uint64{0, 50}
root := depositTrie.HashTreeRoot()
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
beaconState, err := stateAltair.InitializeFromProto(&ethpb.BeaconStateAltair{
Validators: registry,
Balances: balances,
@@ -202,7 +203,8 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
dep[0].Data.Signature = make([]byte, 96)
trie, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
root := trie.HashTreeRoot()
root, err := trie.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,

View File

@@ -10,12 +10,11 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/math"
"github.com/prysmaticlabs/prysm/runtime/version"
"go.opencensus.io/trace"
)
// InitializePrecomputeValidators precomputes each validator's attested balances and the total sum of all validators' attested balances for the epoch.
func InitializePrecomputeValidators(ctx context.Context, beaconState state.BeaconStateAltair) ([]*precompute.Validator, *precompute.Balance, error) {
func InitializePrecomputeValidators(ctx context.Context, beaconState state.BeaconState) ([]*precompute.Validator, *precompute.Balance, error) {
_, span := trace.StartSpan(ctx, "altair.InitializePrecomputeValidators")
defer span.End()
vals := make([]*precompute.Validator, beaconState.NumValidators())
@@ -48,7 +47,7 @@ func InitializePrecomputeValidators(ctx context.Context, beaconState state.Beaco
return err
}
}
// Set validator's active status for preivous epoch.
// Set validator's active status for previous epoch.
if helpers.IsActiveValidatorUsingTrie(val, prevEpoch) {
v.IsActivePrevEpoch = true
bal.ActivePrevEpoch, err = math.Add64(bal.ActivePrevEpoch, val.EffectiveBalance())
@@ -209,10 +208,10 @@ func ProcessEpochParticipation(
// ProcessRewardsAndPenaltiesPrecompute processes the rewards and penalties of individual validator.
// This is an optimized version that passes in precomputed validator attesting records and total epoch balances.
func ProcessRewardsAndPenaltiesPrecompute(
beaconState state.BeaconStateAltair,
beaconState state.BeaconState,
bal *precompute.Balance,
vals []*precompute.Validator,
) (state.BeaconStateAltair, error) {
) (state.BeaconState, error) {
// Don't process rewards and penalties in genesis epoch.
cfg := params.BeaconConfig()
if time.CurrentEpoch(beaconState) == cfg.GenesisEpoch {
@@ -268,16 +267,12 @@ func AttestationsDelta(beaconState state.BeaconState, bal *precompute.Balance, v
leak := helpers.IsInInactivityLeak(prevEpoch, finalizedEpoch)
// Modified in Altair and Bellatrix.
var inactivityDenominator uint64
bias := cfg.InactivityScoreBias
switch beaconState.Version() {
case version.Altair:
inactivityDenominator = bias * cfg.InactivityPenaltyQuotientAltair
case version.Bellatrix:
inactivityDenominator = bias * cfg.InactivityPenaltyQuotientBellatrix
default:
return nil, nil, errors.Errorf("invalid state type version: %T", beaconState.Version())
inactivityPenaltyQuotient, err := beaconState.InactivityPenaltyQuotient()
if err != nil {
return nil, nil, err
}
inactivityDenominator := bias * inactivityPenaltyQuotient
for i, v := range vals {
rewards[i], penalties[i], err = attestationDelta(bal, v, baseRewardMultiplier, inactivityDenominator, leak)
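For a sense of scale, a small arithmetic sketch of the denominator the accessor now supplies; the constants are the mainnet consensus-spec values (INACTIVITY_SCORE_BIAS = 4, INACTIVITY_PENALTY_QUOTIENT_ALTAIR = 3 * 2^24) quoted for illustration rather than taken from this diff.

package main

import "fmt"

func main() {
	// beaconState.InactivityPenaltyQuotient() returns the fork-appropriate
	// quotient, so callers no longer switch on state.Version(). Under the
	// illustrative Altair values the denominator works out as follows.
	const (
		inactivityScoreBias         = uint64(4)
		inactivityPenaltyQuotAltair = uint64(3 * (1 << 24)) // 50331648
	)
	inactivityDenominator := inactivityScoreBias * inactivityPenaltyQuotAltair
	fmt.Println(inactivityDenominator) // 201326592
}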

View File

@@ -18,7 +18,7 @@ import (
// if next_epoch % EPOCHS_PER_SYNC_COMMITTEE_PERIOD == 0:
// state.current_sync_committee = state.next_sync_committee
// state.next_sync_committee = get_next_sync_committee(state)
func ProcessSyncCommitteeUpdates(ctx context.Context, beaconState state.BeaconStateAltair) (state.BeaconStateAltair, error) {
func ProcessSyncCommitteeUpdates(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
nextEpoch := time.NextEpoch(beaconState)
if nextEpoch%params.BeaconConfig().EpochsPerSyncCommitteePeriod == 0 {
nextSyncCommittee, err := beaconState.NextSyncCommittee()
@@ -48,7 +48,7 @@ func ProcessSyncCommitteeUpdates(ctx context.Context, beaconState state.BeaconSt
// def process_participation_flag_updates(state: BeaconState) -> None:
// state.previous_epoch_participation = state.current_epoch_participation
// state.current_epoch_participation = [ParticipationFlags(0b0000_0000) for _ in range(len(state.validators))]
func ProcessParticipationFlagUpdates(beaconState state.BeaconStateAltair) (state.BeaconStateAltair, error) {
func ProcessParticipationFlagUpdates(beaconState state.BeaconState) (state.BeaconState, error) {
c, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err

View File

@@ -54,7 +54,7 @@ func ValidateNilSyncContribution(s *ethpb.SignedContributionAndProof) error {
// pubkeys = [state.validators[index].pubkey for index in indices]
// aggregate_pubkey = bls.AggregatePKs(pubkeys)
// return SyncCommittee(pubkeys=pubkeys, aggregate_pubkey=aggregate_pubkey)
func NextSyncCommittee(ctx context.Context, s state.BeaconStateAltair) (*ethpb.SyncCommittee, error) {
func NextSyncCommittee(ctx context.Context, s state.BeaconState) (*ethpb.SyncCommittee, error) {
indices, err := NextSyncCommitteeIndices(ctx, s)
if err != nil {
return nil, err
@@ -98,7 +98,7 @@ func NextSyncCommittee(ctx context.Context, s state.BeaconStateAltair) (*ethpb.S
// sync_committee_indices.append(candidate_index)
// i += 1
// return sync_committee_indices
func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconStateAltair) ([]types.ValidatorIndex, error) {
func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconState) ([]types.ValidatorIndex, error) {
epoch := coreTime.NextEpoch(s)
indices, err := helpers.ActiveValidatorIndices(ctx, s, epoch)
if err != nil {

View File

@@ -20,7 +20,7 @@ import (
)
func TestSyncCommitteeIndices_CanGet(t *testing.T) {
getState := func(t *testing.T, count uint64) state.BeaconStateAltair {
getState := func(t *testing.T, count uint64) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
@@ -37,7 +37,7 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
}
type args struct {
state state.BeaconStateAltair
state state.BeaconState
epoch types.Epoch
}
tests := []struct {
@@ -95,7 +95,7 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
func TestSyncCommitteeIndices_DifferentPeriods(t *testing.T) {
helpers.ClearCache()
getState := func(t *testing.T, count uint64) state.BeaconStateAltair {
getState := func(t *testing.T, count uint64) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
@@ -129,7 +129,7 @@ func TestSyncCommitteeIndices_DifferentPeriods(t *testing.T) {
}
func TestSyncCommittee_CanGet(t *testing.T) {
getState := func(t *testing.T, count uint64) state.BeaconStateAltair {
getState := func(t *testing.T, count uint64) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
blsKey, err := bls.RandKey()
@@ -149,7 +149,7 @@ func TestSyncCommittee_CanGet(t *testing.T) {
}
type args struct {
state state.BeaconStateAltair
state state.BeaconState
epoch types.Epoch
}
tests := []struct {
@@ -384,7 +384,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
}
}
func getState(t *testing.T, count uint64) state.BeaconStateAltair {
func getState(t *testing.T, count uint64) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
blsKey, err := bls.RandKey()

View File

@@ -7,8 +7,6 @@ import (
e "github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/runtime/version"
"go.opencensus.io/trace"
)
@@ -29,7 +27,7 @@ import (
// process_historical_roots_update(state)
// process_participation_flag_updates(state) # [New in Altair]
// process_sync_committee_updates(state) # [New in Altair]
func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconStateAltair, error) {
func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "altair.ProcessEpoch")
defer span.End()
@@ -70,22 +68,14 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconSta
}
// Modified in Altair and Bellatrix.
cfg := params.BeaconConfig()
switch state.Version() {
case version.Altair:
state, err = e.ProcessSlashings(state, cfg.ProportionalSlashingMultiplierAltair)
if err != nil {
return nil, err
}
case version.Bellatrix:
state, err = e.ProcessSlashings(state, cfg.ProportionalSlashingMultiplierBellatrix)
if err != nil {
return nil, err
}
default:
return nil, errors.Errorf("invalid state type version: %T", state.Version())
proportionalSlashingMultipler, err := state.ProportionalSlashingMultiplier()
if err != nil {
return nil, err
}
state, err = e.ProcessSlashings(state, proportionalSlashingMultipler)
if err != nil {
return nil, err
}
state, err = e.ProcessEth1DataReset(state)
if err != nil {
return nil, err

View File

@@ -62,7 +62,7 @@ import (
// post.current_sync_committee = get_next_sync_committee(post)
// post.next_sync_committee = get_next_sync_committee(post)
// return post
func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.BeaconStateAltair, error) {
func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
@@ -137,7 +137,7 @@ func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.Beacon
// for index in get_attesting_indices(state, data, attestation.aggregation_bits):
// for flag_index in participation_flag_indices:
// epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
func TranslateParticipation(ctx context.Context, state state.BeaconStateAltair, atts []*ethpb.PendingAttestation) (state.BeaconStateAltair, error) {
func TranslateParticipation(ctx context.Context, state state.BeaconState, atts []*ethpb.PendingAttestation) (state.BeaconState, error) {
epochParticipation, err := state.PreviousEpochParticipation()
if err != nil {
return nil, err

View File

@@ -31,6 +31,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/forks/bellatrix:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
@@ -40,7 +41,6 @@ go_library(
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
@@ -89,6 +89,7 @@ go_test(
"//beacon-chain/state/v1:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/forks/bellatrix:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/trie:go_default_library",

View File

@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
@@ -25,7 +26,7 @@ func ProcessAttestationsNoVerifySignature(
beaconState state.BeaconState,
b interfaces.SignedBeaconBlock,
) (state.BeaconState, error) {
if err := helpers.BeaconBlockIsNil(b); err != nil {
if err := wrapper.BeaconBlockIsNil(b); err != nil {
return nil, err
}
body := b.Block().Body()

View File

@@ -173,7 +173,8 @@ func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
},
}
balances := []uint64{0, 50}
root := depositTrie.HashTreeRoot()
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
beaconState, err := v1.InitializeFromProto(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
@@ -233,7 +234,8 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
dep[0].Data.Signature = make([]byte, 96)
trie, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
root := trie.HashTreeRoot()
root, err := trie.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,
@@ -289,7 +291,9 @@ func TestPreGenesisDeposits_SkipInvalidDeposit(t *testing.T) {
require.NoError(t, err)
dep[i].Proof = proof
}
root := trie.HashTreeRoot()
root, err := trie.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,
@@ -376,7 +380,9 @@ func TestProcessDeposit_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
},
}
balances := []uint64{0, 50}
root := depositTrie.HashTreeRoot()
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
beaconState, err := v1.InitializeFromProto(&ethpb.BeaconState{
Validators: registry,
Balances: balances,

View File

@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
@@ -43,7 +44,7 @@ func ProcessBlockHeader(
beaconState state.BeaconState,
block interfaces.SignedBeaconBlock,
) (state.BeaconState, error) {
if err := helpers.BeaconBlockIsNil(block); err != nil {
if err := wrapper.BeaconBlockIsNil(block); err != nil {
return nil, err
}
bodyRoot, err := block.Block().Body().HashTreeRoot()

View File

@@ -7,11 +7,10 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/consensus-types/forks/bellatrix"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/runtime/version"
@@ -41,7 +40,7 @@ func IsMergeTransitionComplete(st state.BeaconState) (bool, error) {
if err != nil {
return false, err
}
return !isEmptyHeader(h), nil
return !bellatrix.IsEmptyHeader(h), nil
}
// IsMergeTransitionBlockUsingPreStatePayloadHeader returns true if the input block is the terminal merge block.
@@ -51,7 +50,7 @@ func IsMergeTransitionBlockUsingPreStatePayloadHeader(h *ethpb.ExecutionPayloadH
if h == nil || body == nil {
return false, errors.New("nil header or block body")
}
if !isEmptyHeader(h) {
if !bellatrix.IsEmptyHeader(h) {
return false, nil
}
return IsExecutionBlock(body)
@@ -74,7 +73,7 @@ func IsExecutionBlock(body interfaces.BeaconBlockBody) (bool, error) {
return false, err
default:
}
return !IsEmptyPayload(payload), nil
return !bellatrix.IsEmptyPayload(payload), nil
}
// IsExecutionEnabled returns true if the beacon chain can begin executing.
@@ -100,7 +99,7 @@ func IsExecutionEnabled(st state.BeaconState, body interfaces.BeaconBlockBody) (
// IsExecutionEnabledUsingHeader returns true if the execution is enabled using post processed payload header and block body.
// This is an optimized version of IsExecutionEnabled where beacon state is not required as an argument.
func IsExecutionEnabledUsingHeader(header *ethpb.ExecutionPayloadHeader, body interfaces.BeaconBlockBody) (bool, error) {
if !isEmptyHeader(header) {
if !bellatrix.IsEmptyHeader(header) {
return true, nil
}
return IsExecutionBlock(body)
@@ -205,7 +204,7 @@ func ProcessPayload(st state.BeaconState, payload *enginev1.ExecutionPayload) (s
return nil, err
}
header, err := PayloadToHeader(payload)
header, err := bellatrix.PayloadToHeader(payload)
if err != nil {
return nil, err
}
@@ -215,126 +214,6 @@ func ProcessPayload(st state.BeaconState, payload *enginev1.ExecutionPayload) (s
return st, nil
}
// PayloadToHeader converts `payload` into execution payload header format.
func PayloadToHeader(payload *enginev1.ExecutionPayload) (*ethpb.ExecutionPayloadHeader, error) {
txRoot, err := ssz.TransactionsRoot(payload.Transactions)
if err != nil {
return nil, err
}
return &ethpb.ExecutionPayloadHeader{
ParentHash: bytesutil.SafeCopyBytes(payload.ParentHash),
FeeRecipient: bytesutil.SafeCopyBytes(payload.FeeRecipient),
StateRoot: bytesutil.SafeCopyBytes(payload.StateRoot),
ReceiptsRoot: bytesutil.SafeCopyBytes(payload.ReceiptsRoot),
LogsBloom: bytesutil.SafeCopyBytes(payload.LogsBloom),
PrevRandao: bytesutil.SafeCopyBytes(payload.PrevRandao),
BlockNumber: payload.BlockNumber,
GasLimit: payload.GasLimit,
GasUsed: payload.GasUsed,
Timestamp: payload.Timestamp,
ExtraData: bytesutil.SafeCopyBytes(payload.ExtraData),
BaseFeePerGas: bytesutil.SafeCopyBytes(payload.BaseFeePerGas),
BlockHash: bytesutil.SafeCopyBytes(payload.BlockHash),
TransactionsRoot: txRoot[:],
}, nil
}
func IsEmptyPayload(p *enginev1.ExecutionPayload) bool {
if p == nil {
return true
}
if !bytes.Equal(p.ParentHash, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(p.FeeRecipient, make([]byte, fieldparams.FeeRecipientLength)) {
return false
}
if !bytes.Equal(p.StateRoot, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(p.ReceiptsRoot, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(p.LogsBloom, make([]byte, fieldparams.LogsBloomLength)) {
return false
}
if !bytes.Equal(p.PrevRandao, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(p.BaseFeePerGas, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(p.BlockHash, make([]byte, fieldparams.RootLength)) {
return false
}
if len(p.Transactions) != 0 {
return false
}
if len(p.ExtraData) != 0 {
return false
}
if p.BlockNumber != 0 {
return false
}
if p.GasLimit != 0 {
return false
}
if p.GasUsed != 0 {
return false
}
if p.Timestamp != 0 {
return false
}
return true
}
func isEmptyHeader(h *ethpb.ExecutionPayloadHeader) bool {
if !bytes.Equal(h.ParentHash, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(h.FeeRecipient, make([]byte, fieldparams.FeeRecipientLength)) {
return false
}
if !bytes.Equal(h.StateRoot, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(h.ReceiptsRoot, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(h.LogsBloom, make([]byte, fieldparams.LogsBloomLength)) {
return false
}
if !bytes.Equal(h.PrevRandao, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(h.BaseFeePerGas, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(h.BlockHash, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(h.TransactionsRoot, make([]byte, fieldparams.RootLength)) {
return false
}
if len(h.ExtraData) != 0 {
return false
}
if h.BlockNumber != 0 {
return false
}
if h.GasLimit != 0 {
return false
}
if h.GasUsed != 0 {
return false
}
if h.Timestamp != 0 {
return false
}
return true
}
// ValidatePayloadHeaderWhenMergeCompletes validates the payload header when the merge completes.
func ValidatePayloadHeaderWhenMergeCompletes(st state.BeaconState, header *ethpb.ExecutionPayloadHeader) error {
// Skip validation if the state is not merge compatible.
@@ -393,3 +272,16 @@ func ProcessPayloadHeader(st state.BeaconState, header *ethpb.ExecutionPayloadHe
}
return st, nil
}
// GetBlockPayloadHash returns the hash of the execution payload of the block
func GetBlockPayloadHash(blk interfaces.BeaconBlock) ([32]byte, error) {
payloadHash := [32]byte{}
if IsPreBellatrixVersion(blk.Version()) {
return payloadHash, nil
}
payload, err := blk.Body().ExecutionPayload()
if err != nil {
return payloadHash, err
}
return bytesutil.ToBytes32(payload.BlockHash), nil
}
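A brief usage sketch for the new GetBlockPayloadHash helper (not part of the changeset); the wrapper function and its name are illustrative only, while the blocks and interfaces import paths follow the packages referenced in this diff.

package example

import (
	"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
	"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
)

// payloadHashForForkchoice shows the intended call pattern: pre-Bellatrix
// blocks yield the zero hash, post-merge blocks yield the execution payload's
// block hash that forkchoice can hand to the execution engine.
func payloadHashForForkchoice(blk interfaces.BeaconBlock) ([32]byte, error) {
	hash, err := blocks.GetBlockPayloadHash(blk)
	if err != nil {
		return [32]byte{}, err
	}
	// A zero hash signals "no execution payload", so callers typically skip
	// engine notifications in that case.
	return hash, nil
}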

View File

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/consensus-types/forks/bellatrix"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/encoding/ssz"
@@ -669,7 +670,7 @@ func Test_ProcessPayload(t *testing.T) {
require.Equal(t, tt.err.Error(), err.Error())
} else {
require.Equal(t, tt.err, err)
want, err := blocks.PayloadToHeader(tt.payload)
want, err := bellatrix.PayloadToHeader(tt.payload)
require.Equal(t, tt.err, err)
got, err := st.LatestExecutionPayloadHeader()
require.NoError(t, err)
@@ -824,7 +825,7 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
func Test_PayloadToHeader(t *testing.T) {
p := emptyPayload()
h, err := blocks.PayloadToHeader(p)
h, err := bellatrix.PayloadToHeader(p)
require.NoError(t, err)
txRoot, err := ssz.TransactionsRoot(p.Transactions)
require.NoError(t, err)

View File

@@ -4,10 +4,10 @@ import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/crypto/hash"
"github.com/prysmaticlabs/prysm/time/slots"
)
@@ -31,7 +31,7 @@ func ProcessRandao(
beaconState state.BeaconState,
b interfaces.SignedBeaconBlock,
) (state.BeaconState, error) {
if err := helpers.BeaconBlockIsNil(b); err != nil {
if err := wrapper.BeaconBlockIsNil(b); err != nil {
return nil, err
}
body := b.Block().Body()

View File

@@ -28,6 +28,7 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
@@ -43,11 +44,13 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/v1:go_default_library",
"//beacon-chain/state/v2:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -2,6 +2,7 @@ package precompute
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
@@ -9,6 +10,24 @@ import (
"github.com/prysmaticlabs/prysm/time/slots"
)
var errNilState = errors.New("nil state")
// UnrealizedCheckpoints returns the justification and finalization checkpoints of the
// given state as if it was progressed with empty slots until the next epoch.
func UnrealizedCheckpoints(st state.BeaconState) (*ethpb.Checkpoint, *ethpb.Checkpoint, error) {
if st == nil || st.IsNil() {
return nil, nil, errNilState
}
activeBalance, prevTarget, currentTarget, err := st.UnrealizedCheckpointBalances()
if err != nil {
return nil, nil, err
}
justification := processJustificationBits(st, activeBalance, prevTarget, currentTarget)
return ComputeCheckpoints(st, justification)
}
// ProcessJustificationAndFinalizationPreCompute processes justification and finalization during
// epoch processing. This is where a beacon node can justify and finalize a new epoch.
// Note: this is an optimized version by passing in precomputed total and attesting balances.
@@ -34,12 +53,55 @@ func ProcessJustificationAndFinalizationPreCompute(state state.BeaconState, pBal
return state, nil
}
return weighJustificationAndFinalization(state, pBal.ActiveCurrentEpoch, pBal.PrevEpochTargetAttested, pBal.CurrentEpochTargetAttested)
newBits := processJustificationBits(state, pBal.ActiveCurrentEpoch, pBal.PrevEpochTargetAttested, pBal.CurrentEpochTargetAttested)
return weighJustificationAndFinalization(state, newBits)
}
// weighJustificationAndFinalization processes justification and finalization during
// processJustificationBits processes the justification bits during epoch processing.
func processJustificationBits(state state.BeaconState, totalActiveBalance, prevEpochTargetBalance, currEpochTargetBalance uint64) bitfield.Bitvector4 {
newBits := state.JustificationBits()
newBits.Shift(1)
// If 2/3 or more of total balance attested in the previous epoch.
if 3*prevEpochTargetBalance >= 2*totalActiveBalance {
newBits.SetBitAt(1, true)
}
if 3*currEpochTargetBalance >= 2*totalActiveBalance {
newBits.SetBitAt(0, true)
}
return newBits
}
// weighJustificationAndFinalization processes justification and finalization during
// epoch processing. This is where a beacon node can justify and finalize a new epoch.
//
func weighJustificationAndFinalization(state state.BeaconState, newBits bitfield.Bitvector4) (state.BeaconState, error) {
jc, fc, err := ComputeCheckpoints(state, newBits)
if err != nil {
return nil, err
}
if err := state.SetPreviousJustifiedCheckpoint(state.CurrentJustifiedCheckpoint()); err != nil {
return nil, err
}
if err := state.SetCurrentJustifiedCheckpoint(jc); err != nil {
return nil, err
}
if err := state.SetJustificationBits(newBits); err != nil {
return nil, err
}
if err := state.SetFinalizedCheckpoint(fc); err != nil {
return nil, err
}
return state, nil
}
// ComputeCheckpoints computes the new Justification and Finalization
// checkpoints at epoch transition
// Spec pseudocode definition:
// def weigh_justification_and_finalization(state: BeaconState,
// total_active_balance: Gwei,
@@ -77,88 +139,57 @@ func ProcessJustificationAndFinalizationPreCompute(state state.BeaconState, pBal
// # The 1st/2nd most recent epochs are justified, the 1st using the 2nd as source
// if all(bits[0:2]) and old_current_justified_checkpoint.epoch + 1 == current_epoch:
// state.finalized_checkpoint = old_current_justified_checkpoint
func weighJustificationAndFinalization(state state.BeaconState,
totalActiveBalance, prevEpochTargetBalance, currEpochTargetBalance uint64) (state.BeaconState, error) {
func ComputeCheckpoints(state state.BeaconState, newBits bitfield.Bitvector4) (*ethpb.Checkpoint, *ethpb.Checkpoint, error) {
prevEpoch := time.PrevEpoch(state)
currentEpoch := time.CurrentEpoch(state)
oldPrevJustifiedCheckpoint := state.PreviousJustifiedCheckpoint()
oldCurrJustifiedCheckpoint := state.CurrentJustifiedCheckpoint()
// Process justifications
if err := state.SetPreviousJustifiedCheckpoint(state.CurrentJustifiedCheckpoint()); err != nil {
return nil, err
}
newBits := state.JustificationBits()
newBits.Shift(1)
if err := state.SetJustificationBits(newBits); err != nil {
return nil, err
}
// Note: the spec refers to the bit index position starting at 1 instead of starting at zero.
// We will use that paradigm here for consistency with the godoc spec definition.
// If 2/3 or more of total balance attested in the previous epoch.
if 3*prevEpochTargetBalance >= 2*totalActiveBalance {
blockRoot, err := helpers.BlockRoot(state, prevEpoch)
if err != nil {
return nil, errors.Wrapf(err, "could not get block root for previous epoch %d", prevEpoch)
}
if err := state.SetCurrentJustifiedCheckpoint(&ethpb.Checkpoint{Epoch: prevEpoch, Root: blockRoot}); err != nil {
return nil, err
}
newBits := state.JustificationBits()
newBits.SetBitAt(1, true)
if err := state.SetJustificationBits(newBits); err != nil {
return nil, err
}
}
justifiedCheckpoint := state.CurrentJustifiedCheckpoint()
finalizedCheckpoint := state.FinalizedCheckpoint()
// If 2/3 or more of the total balance attested in the current epoch.
if 3*currEpochTargetBalance >= 2*totalActiveBalance {
if newBits.BitAt(0) && currentEpoch >= justifiedCheckpoint.Epoch {
blockRoot, err := helpers.BlockRoot(state, currentEpoch)
if err != nil {
return nil, errors.Wrapf(err, "could not get block root for current epoch %d", prevEpoch)
return nil, nil, errors.Wrapf(err, "could not get block root for current epoch %d", currentEpoch)
}
if err := state.SetCurrentJustifiedCheckpoint(&ethpb.Checkpoint{Epoch: currentEpoch, Root: blockRoot}); err != nil {
return nil, err
}
newBits := state.JustificationBits()
newBits.SetBitAt(0, true)
if err := state.SetJustificationBits(newBits); err != nil {
return nil, err
justifiedCheckpoint.Epoch = currentEpoch
justifiedCheckpoint.Root = blockRoot
} else if newBits.BitAt(1) && prevEpoch >= justifiedCheckpoint.Epoch {
// If 2/3 or more of total balance attested in the previous epoch.
blockRoot, err := helpers.BlockRoot(state, prevEpoch)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get block root for previous epoch %d", prevEpoch)
}
justifiedCheckpoint.Epoch = prevEpoch
justifiedCheckpoint.Root = blockRoot
}
// Process finalization according to Ethereum Beacon Chain specification.
justification := state.JustificationBits().Bytes()[0]
if len(newBits) == 0 {
return nil, nil, errors.New("empty justification bits")
}
justification := newBits.Bytes()[0]
// 2nd/3rd/4th (0b1110) most recent epochs are justified, the 2nd using the 4th as source.
if justification&0x0E == 0x0E && (oldPrevJustifiedCheckpoint.Epoch+3) == currentEpoch {
if err := state.SetFinalizedCheckpoint(oldPrevJustifiedCheckpoint); err != nil {
return nil, err
}
finalizedCheckpoint = oldPrevJustifiedCheckpoint
}
// 2nd/3rd (0b0110) most recent epochs are justified, the 2nd using the 3rd as source.
if justification&0x06 == 0x06 && (oldPrevJustifiedCheckpoint.Epoch+2) == currentEpoch {
if err := state.SetFinalizedCheckpoint(oldPrevJustifiedCheckpoint); err != nil {
return nil, err
}
finalizedCheckpoint = oldPrevJustifiedCheckpoint
}
// 1st/2nd/3rd (0b0111) most recent epochs are justified, the 1st using the 3rd as source.
if justification&0x07 == 0x07 && (oldCurrJustifiedCheckpoint.Epoch+2) == currentEpoch {
if err := state.SetFinalizedCheckpoint(oldCurrJustifiedCheckpoint); err != nil {
return nil, err
}
finalizedCheckpoint = oldCurrJustifiedCheckpoint
}
// The 1st/2nd (0b0011) most recent epochs are justified, the 1st using the 2nd as source
if justification&0x03 == 0x03 && (oldCurrJustifiedCheckpoint.Epoch+1) == currentEpoch {
if err := state.SetFinalizedCheckpoint(oldCurrJustifiedCheckpoint); err != nil {
return nil, err
}
finalizedCheckpoint = oldCurrJustifiedCheckpoint
}
return state, nil
return justifiedCheckpoint, finalizedCheckpoint, nil
}
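To make the shared bit arithmetic concrete, a self-contained sketch (not from the changeset) using the go-bitfield dependency added above; the balance figures are invented for illustration.

package main

import (
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

func main() {
	// Invented epoch balances: enough previous-epoch target votes for a 2/3
	// supermajority, but not enough current-epoch votes.
	totalActive := uint64(32_000_000)
	prevTarget := uint64(24_000_000)
	currTarget := uint64(10_000_000)

	bits := bitfield.Bitvector4{0x00}
	bits.Shift(1) // drop the oldest epoch's bit, mirroring the spec's bitshift
	if 3*prevTarget >= 2*totalActive {
		bits.SetBitAt(1, true) // previous epoch justified in this example
	}
	if 3*currTarget >= 2*totalActive {
		bits.SetBitAt(0, true) // current epoch falls short here
	}
	// With bit 1 set, ComputeCheckpoints would move the justified checkpoint
	// to the previous epoch while leaving finality untouched.
	fmt.Printf("justification bits: %08b\n", bits.Bytes()[0])
}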

View File

@@ -1,11 +1,14 @@
package precompute_test
import (
"context"
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
v2 "github.com/prysmaticlabs/prysm/beacon-chain/state/v2"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
@@ -123,3 +126,142 @@ func TestProcessJustificationAndFinalizationPreCompute_JustifyPrevEpoch(t *testi
assert.DeepEqual(t, params.BeaconConfig().ZeroHash[:], newState.FinalizedCheckpoint().Root)
assert.Equal(t, types.Epoch(0), newState.FinalizedCheckpointEpoch(), "Unexpected finalized epoch")
}
func TestUnrealizedCheckpoints(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
balances := make([]uint64, len(validators))
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
}
balances[i] = params.BeaconConfig().MaxEffectiveBalance
}
pjr := [32]byte{'p'}
cjr := [32]byte{'c'}
je := types.Epoch(3)
fe := types.Epoch(2)
pjcp := &ethpb.Checkpoint{Root: pjr[:], Epoch: fe}
cjcp := &ethpb.Checkpoint{Root: cjr[:], Epoch: je}
fcp := &ethpb.Checkpoint{Root: pjr[:], Epoch: fe}
tests := []struct {
name string
slot types.Slot
prevVals, currVals int
expectedJustified, expectedFinalized types.Epoch // The expected unrealized checkpoint epochs
}{
{
"Not enough votes, keep previous justification",
129,
len(validators) / 3,
len(validators) / 3,
je,
fe,
},
{
"Not enough votes, keep previous justification, N+2",
161,
len(validators) / 3,
len(validators) / 3,
je,
fe,
},
{
"Enough to justify previous epoch but not current",
129,
2*len(validators)/3 + 3,
len(validators) / 3,
je,
fe,
},
{
"Enough to justify previous epoch but not current, N+2",
161,
2*len(validators)/3 + 3,
len(validators) / 3,
je + 1,
fe,
},
{
"Enough to justify current epoch",
129,
len(validators) / 3,
2*len(validators)/3 + 3,
je + 1,
fe,
},
{
"Enough to justify current epoch, but not previous",
161,
len(validators) / 3,
2*len(validators)/3 + 3,
je + 2,
fe,
},
{
"Enough to justify current and previous",
161,
2*len(validators)/3 + 3,
2*len(validators)/3 + 3,
je + 2,
fe,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
base := &ethpb.BeaconStateAltair{
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
Validators: validators,
Slot: test.slot,
CurrentEpochParticipation: make([]byte, params.BeaconConfig().MinGenesisActiveValidatorCount),
PreviousEpochParticipation: make([]byte, params.BeaconConfig().MinGenesisActiveValidatorCount),
Balances: balances,
PreviousJustifiedCheckpoint: pjcp,
CurrentJustifiedCheckpoint: cjcp,
FinalizedCheckpoint: fcp,
InactivityScores: make([]uint64, len(validators)),
JustificationBits: make(bitfield.Bitvector4, 1),
}
for i := 0; i < test.prevVals; i++ {
base.PreviousEpochParticipation[i] = 0xFF
}
for i := 0; i < test.currVals; i++ {
base.CurrentEpochParticipation[i] = 0xFF
}
if test.slot > 130 {
base.JustificationBits.SetBitAt(2, true)
base.JustificationBits.SetBitAt(3, true)
} else {
base.JustificationBits.SetBitAt(1, true)
base.JustificationBits.SetBitAt(2, true)
}
state, err := v2.InitializeFromProto(base)
require.NoError(t, err)
_, _, err = altair.InitializePrecomputeValidators(context.Background(), state)
require.NoError(t, err)
jc, fc, err := precompute.UnrealizedCheckpoints(state)
require.NoError(t, err)
require.DeepEqual(t, test.expectedJustified, jc.Epoch)
require.DeepEqual(t, test.expectedFinalized, fc.Epoch)
})
}
}
func Test_ComputeCheckpoints_CantUpdateToLower(t *testing.T) {
st, err := v2.InitializeFromProto(&ethpb.BeaconStateAltair{
Slot: params.BeaconConfig().SlotsPerEpoch * 2,
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 2,
},
})
require.NoError(t, err)
jb := make(bitfield.Bitvector4, 1)
jb.SetBitAt(1, true)
cp, _, err := precompute.ComputeCheckpoints(st, jb)
require.NoError(t, err)
require.Equal(t, types.Epoch(2), cp.Epoch)
}

View File

@@ -22,7 +22,6 @@ go_library(
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//container/trie:go_default_library",

View File

@@ -285,7 +285,7 @@ func ShuffledIndices(s state.ReadOnlyBeaconState, epoch types.Epoch) ([]types.Va
// UpdateCommitteeCache gets called at the beginning of every epoch to cache the committee shuffled indices
// list with committee index and epoch number. It caches the shuffled indices for current epoch and next epoch.
func UpdateCommitteeCache(state state.ReadOnlyBeaconState, epoch types.Epoch) error {
func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState, epoch types.Epoch) error {
for _, e := range []types.Epoch{epoch, epoch + 1} {
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
@@ -311,7 +311,7 @@ func UpdateCommitteeCache(state state.ReadOnlyBeaconState, epoch types.Epoch) er
return sortedIndices[i] < sortedIndices[j]
})
if err := committeeCache.AddCommitteeShuffledList(&cache.Committees{
if err := committeeCache.AddCommitteeShuffledList(ctx, &cache.Committees{
ShuffledIndices: shuffledIndices,
CommitteeCount: uint64(params.BeaconConfig().SlotsPerEpoch.Mul(count)),
Seed: seed,

View File

@@ -386,7 +386,7 @@ func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
require.NoError(t, UpdateCommitteeCache(state, time.CurrentEpoch(state)))
require.NoError(t, UpdateCommitteeCache(context.Background(), state, time.CurrentEpoch(state)))
epoch := types.Epoch(1)
idx := types.CommitteeIndex(1)

View File

@@ -6,31 +6,10 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/time/slots"
)
var ErrNilSignedBeaconBlock = errors.New("signed beacon block can't be nil")
var ErrNilBeaconBlock = errors.New("beacon block can't be nil")
var ErrNilBeaconBlockBody = errors.New("beacon block body can't be nil")
// BeaconBlockIsNil checks if any composite field of input signed beacon block is nil.
// Access to these nil fields will result in run time panic,
// it is recommended to run these checks as first line of defense.
func BeaconBlockIsNil(b interfaces.SignedBeaconBlock) error {
if b == nil || b.IsNil() {
return ErrNilSignedBeaconBlock
}
if b.Block().IsNil() {
return ErrNilBeaconBlock
}
if b.Block().Body().IsNil() {
return ErrNilBeaconBlockBody
}
return nil
}
// BlockRootAtSlot returns the block root stored in the BeaconState for a recent slot.
// It returns an error if the requested block root is not within the slot range.
//

View File

@@ -41,7 +41,10 @@ func UpdateGenesisEth1Data(state state.BeaconState, deposits []*ethpb.Deposit, e
}
}
depositRoot := t.HashTreeRoot()
depositRoot, err := t.HashTreeRoot()
if err != nil {
return nil, err
}
eth1Data.DepositRoot = depositRoot[:]
err = state.SetEth1Data(eth1Data)
if err != nil {

View File

@@ -76,6 +76,8 @@ func TotalActiveBalance(s state.ReadOnlyBeaconState) (uint64, error) {
return 0, err
}
// Spec defines `EffectiveBalanceIncrement` as min to avoid divisions by zero.
total = mathutil.Max(params.BeaconConfig().EffectiveBalanceIncrement, total)
if err := balanceCache.AddTotalEffectiveBalance(s, total); err != nil {
return 0, err
}

View File

@@ -74,6 +74,27 @@ func TestTotalActiveBalance(t *testing.T) {
}
}
func TestTotalActiveBal_ReturnMin(t *testing.T) {
tests := []struct {
vCount int
}{
{1},
{10},
{10000},
}
for _, test := range tests {
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: 1, ExitEpoch: 1})
}
state, err := v1.InitializeFromProto(&ethpb.BeaconState{Validators: validators})
require.NoError(t, err)
bal, err := TotalActiveBalance(state)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().EffectiveBalanceIncrement, bal)
}
}
func TestTotalActiveBalance_WithCache(t *testing.T) {
tests := []struct {
vCount int

View File

@@ -26,7 +26,7 @@ var (
// 1. Checks if the public key exists in the sync committee cache
// 2. If 1 fails, checks if the public key exists in the input current sync committee object
func IsCurrentPeriodSyncCommittee(
st state.BeaconStateAltair, valIdx types.ValidatorIndex,
st state.BeaconState, valIdx types.ValidatorIndex,
) (bool, error) {
root, err := syncPeriodBoundaryRoot(st)
if err != nil {
@@ -63,7 +63,7 @@ func IsCurrentPeriodSyncCommittee(
// 1. Checks if the public key exists in the sync committee cache
// 2. If 1 fails, checks if the public key exists in the input next sync committee object
func IsNextPeriodSyncCommittee(
st state.BeaconStateAltair, valIdx types.ValidatorIndex,
st state.BeaconState, valIdx types.ValidatorIndex,
) (bool, error) {
root, err := syncPeriodBoundaryRoot(st)
if err != nil {
@@ -90,7 +90,7 @@ func IsNextPeriodSyncCommittee(
// CurrentPeriodSyncSubcommitteeIndices returns the subcommittee indices of the
// current period sync committee for input validator.
func CurrentPeriodSyncSubcommitteeIndices(
st state.BeaconStateAltair, valIdx types.ValidatorIndex,
st state.BeaconState, valIdx types.ValidatorIndex,
) ([]types.CommitteeIndex, error) {
root, err := syncPeriodBoundaryRoot(st)
if err != nil {
@@ -124,7 +124,7 @@ func CurrentPeriodSyncSubcommitteeIndices(
// NextPeriodSyncSubcommitteeIndices returns the subcommittee indices of the next period sync committee for input validator.
func NextPeriodSyncSubcommitteeIndices(
st state.BeaconStateAltair, valIdx types.ValidatorIndex,
st state.BeaconState, valIdx types.ValidatorIndex,
) ([]types.CommitteeIndex, error) {
root, err := syncPeriodBoundaryRoot(st)
if err != nil {
@@ -151,7 +151,7 @@ func NextPeriodSyncSubcommitteeIndices(
// UpdateSyncCommitteeCache updates sync committee cache.
// It uses `state`'s latest block header root as key. To avoid misuse, it disallows
// block header with state root zeroed out.
func UpdateSyncCommitteeCache(st state.BeaconStateAltair) error {
func UpdateSyncCommitteeCache(st state.BeaconState) error {
nextSlot := st.Slot() + 1
if nextSlot%params.BeaconConfig().SlotsPerEpoch != 0 {
return errors.New("not at the end of the epoch to update cache")

View File

@@ -126,7 +126,7 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
return nil, err
}
if err := UpdateCommitteeCache(s, epoch); err != nil {
if err := UpdateCommitteeCache(ctx, s, epoch); err != nil {
return nil, errors.Wrap(err, "could not update committee cache")
}
@@ -175,7 +175,7 @@ func ActiveValidatorCount(ctx context.Context, s state.ReadOnlyBeaconState, epoc
return 0, err
}
if err := UpdateCommitteeCache(s, epoch); err != nil {
if err := UpdateCommitteeCache(ctx, s, epoch); err != nil {
return 0, errors.Wrap(err, "could not update committee cache")
}

View File

@@ -301,7 +301,7 @@ func TestActiveValidatorCount_Genesis(t *testing.T) {
// Preset cache to a bad count.
seed, err := Seed(beaconState, 0, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
require.NoError(t, committeeCache.AddCommitteeShuffledList(&cache.Committees{Seed: seed, ShuffledIndices: []types.ValidatorIndex{1, 2, 3}}))
require.NoError(t, committeeCache.AddCommitteeShuffledList(context.Background(), &cache.Committees{Seed: seed, ShuffledIndices: []types.ValidatorIndex{1, 2, 3}}))
validatorCount, err := ActiveValidatorCount(context.Background(), beaconState, time.CurrentEpoch(beaconState))
require.NoError(t, err)
assert.Equal(t, uint64(c), validatorCount, "Did not get the correct validator count")

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"domain.go",
"signature.go",
"signing_root.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core/signing",
@@ -24,6 +25,7 @@ go_test(
name = "go_default_test",
srcs = [
"domain_test.go",
"signature_test.go",
"signing_root_test.go",
],
embed = [":go_default_library"],
@@ -40,6 +42,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
],
)

View File

@@ -0,0 +1,33 @@
package signing
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
var ErrNilRegistration = errors.New("nil signed registration")
// VerifyRegistrationSignature verifies the signature of a validator's registration.
func VerifyRegistrationSignature(
e types.Epoch,
f *ethpb.Fork,
sr *ethpb.SignedValidatorRegistrationV1,
genesisRoot []byte,
) error {
if sr == nil || sr.Message == nil {
return ErrNilRegistration
}
d := params.BeaconConfig().DomainApplicationBuilder
sd, err := Domain(f, e, d, genesisRoot)
if err != nil {
return err
}
if err := VerifySigningRoot(sr.Message, sr.Message.Pubkey, sr.Signature, sd); err != nil {
return ErrSigFailedToVerify
}
return nil
}
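A hedged usage sketch for the new helper (not part of the changeset): the fork, genesis root, and registration fields below are zero-valued placeholders, so the call is expected to fail verification, which is exactly the guard RegisterValidator depends on.

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/core/signing"
	types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
	ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)

func main() {
	fork := &ethpb.Fork{
		PreviousVersion: make([]byte, 4),
		CurrentVersion:  make([]byte, 4),
		Epoch:           0,
	}
	reg := &ethpb.SignedValidatorRegistrationV1{
		Message: &ethpb.ValidatorRegistrationV1{
			FeeRecipient: make([]byte, 20),
			GasLimit:     30_000_000,
			Timestamp:    0,
			Pubkey:       make([]byte, 48), // zeroed key: verification will fail
		},
		Signature: make([]byte, 96),
	}
	err := signing.VerifyRegistrationSignature(types.Epoch(0), fork, reg, make([]byte, 32))
	fmt.Println(err) // expected: ErrSigFailedToVerify
}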

Some files were not shown because too many files have changed in this diff.