Compare commits

...

112 Commits

Author SHA1 Message Date
nisdas
7182aada02 add in skip 2022-07-05 14:08:37 +08:00
Nishant Das
0ed5007d2e Fix Pubsub Panic In Handling Dead Peers (#10976)
* fix

* fix it

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-04 00:41:33 +00:00
Raul Jordan
65900115fc Move Slasher E2E To Scenario Test (#10973)
* consolidate into slasher scenario test

* pattern

* revert

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-03 23:55:33 +00:00
Potuz
379bed9268 add heuristics for pulltips (#10955)
* add heuristics for pulltips

* gazelle

* add unit test

* fix unit test

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-03 20:27:39 +00:00
Potuz
2dd2e74369 update finalization on onblock (#10980)
* update finalization on onblock

* add unit test

* Minor cleanups

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-07-03 19:39:31 +00:00
Potuz
73237826d3 ensure there are as many deltas as nodes (#10979)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-03 00:53:33 -03:00
terencechain
af4d0c84c8 Check finalized beyond DB (#10978)
* Check finalized beyond DB

* Unhandle error

* Remove debug log

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-07-02 16:31:05 -03:00
Potuz
c68f1883d6 Save Head after pruning invalid nodes (#10977)
* Save Head after prunning

* fix unit test
2022-07-02 16:38:08 +00:00
terencechain
ae1685d937 Log invalid finalized root (#10975)
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-07-02 10:58:54 +00:00
terencechain
2a5f05bc29 Improve "rasied file descriptor limit..." log (#10970)
* Improvement to raise file descriptor log

* Radek feedback

* Change to debug
2022-07-01 18:05:01 -04:00
Potuz
49e5e73ec0 Default SafeSlotsToImportOptimistically to 128 (#10967)
* Default SafeSlotsToImportOptimistically to 128

* fix tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-07-01 17:04:51 +00:00
Nishant Das
2ecb905ae5 Update Prysm Libp2p Dependencies (#10958)
* add all changes in

* fix issues

* fix build

* remove curve check

* fix tool

* add test

* add tidy

* fmt

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-01 15:34:11 +00:00
Radosław Kapka
d4e7da8200 Change log level to debug in fetchBlocksFromPeer (#10969)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-01 14:49:31 +00:00
Nishant Das
4b042a7103 Fix Multiclient E2E (#10965)
* fix it

* gaz

* fix

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-01 14:02:01 +00:00
Radosław Kapka
e59859c78f Wrap client-stats flags (#10966) 2022-07-01 12:49:38 +00:00
Potuz
6bcc7d3a5e Do not fill in missing blocks on regular sync (#10957) 2022-07-01 11:03:49 +00:00
terencechain
7b597bb130 More sepolia boot nodes (#10962)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-30 19:49:35 +00:00
Potuz
2c7b273260 do not overwrite log (#10963) 2022-06-30 19:02:23 +00:00
Radosław Kapka
8bedaaf0a8 Log error in fetchBlocksFromPeer (#10959)
* Log error in `fetchBlocksFromPeer`

* update case
2022-06-30 16:22:10 +00:00
james-prysm
69350a6a80 Fee Recipient E2E misscounting deterministic keys leading to flakes (#10960) 2022-06-30 15:01:33 +00:00
terencechain
93e8c749f8 Can get payload header from builder (#10954) 2022-06-29 21:13:25 -07:00
james-prysm
96fecf8c57 E2E: fee-recipient evaluator (#10528)
* testing out fee-recipient evaluator

* fixing bazel linter

* adjusting comparison

* typo on file rolling back

* adding fee recipient is present to minimal e2e

* fixing gofmt

* fixing gofmt

* fixing flag usage name

* adding in log to help debug

* fixing log build

* trying to figure out why suggested fee recipient isn't working in e2e, adding more logging temporarily

* rolling back logs

* making e2e test more dynamic

* fixing deepsource issue

* fixing bazel

* adding in condition for latest release

* duplicate condtion check.

* fixing gofmt

* rolling back changes

* adding fee recipient evaluator in new file

* fixing validator component logic

* testing rpc client addition

* testing fee recipient evaluator

* fixing bazel:

* testing casting

* test casting

* reverting

* testing casting

* testing casting

* testing log

* adding bazel fix

* switching mixed case and adding temp logging

* fixing gofmt

* rolling back changes

* removing fee recipient evaluator when web3signer is used

* test only minimal config

* reverting changes

* adding fee recipient evaluator to mainnet

* current version uses wrong flag name

* optimizing key usage

* making mining address a variable

* moving from global to local variable

* removing unneeded log

* removing redundant check

* make proposer settings mroe deterministic and also have the evaluator compare the wanting values

* fixing err return

* fixing bazel

* checking file too much moving it out

* fixing gosec

* trying to fix gosec error

* trying to fix address

* fixing linting

* trying to gerenate key and random address

* fixing linting

* fixing check for proposer config

* trying with multi config files

* fixing is dir check

* testing for older previous balance

* adding logging to help debug

* changing how i get the block numbers

* fixing missed error check

* adding gasused check

* adding log for current gas used

* taking suggestion to make fee recipient more deterministic

* fixing linting

* fixing check

* fixing the address check

* fixing format error

* logic to differentiate recipients

* fixing linting
2022-06-30 00:24:39 +00:00
Potuz
5d29ca4984 Experimental disable boundary checks (#10936)
* init

* bellatrix + altair tests passing

* Add Phase0 support

* add feature flag

* phase0 test

* restore testvectors

* mod tidy

* state tests

* gaz

* do not call precompute

* fix test

* Fix context

* move to own's method

* remove spectests pulltips

* time import

* remove phase0

* mod tidy

* fix getters

* Update beacon-chain/forkchoice/doubly-linked-tree/types.go

* reviews

* fix workspace

* Recursive rlocks

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-06-29 23:37:21 +00:00
terencechain
43523c0266 RPC adds builder service (#10953)
* RPC adds builder service

* Update beacon-chain/builder/service.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-06-29 18:54:24 +00:00
Sammy Rosso
8ebbde7836 Testutil refactor attestations (#10952)
* Add AttestationUtil receiver

* Modify usage to account for the receiver

* Add missing explanatory comments

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-06-29 16:42:33 +00:00
Radosław Kapka
44c39a0b40 Don't log terminal difficulty has not been reached yet... until Bellatrix (#10951) 2022-06-29 13:53:59 +00:00
Radosław Kapka
f376f3fb9b Integrate better fastssz validation errors into Prysm (#10945)
* update dep

* regenerate SSZ, update test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-29 16:05:56 +08:00
Sammy Rosso
8510743406 Add additional logging fields for post-merge transition blocks (#10944)
Added additional logging information to Bellatrix blocks.

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-28 13:55:36 +00:00
Radosław Kapka
b82e2e7d40 Use prysmaticlabs/fastssz as a direct dependency (#10941)
* Update dependency

* Regenerate SSZ files

* fix BUILD files

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-28 13:03:24 +00:00
Raul Jordan
acfafd3f0d Add More Visibility Into Sync Committee Message Participation (#10943)
* build

* granular time message

* timing

* log

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-27 23:01:24 +00:00
Delweng
13001cd000 typo: convert tab to space (#10918)
* typo: convert tab to space

Signed-off-by: Delweng <delweng@gmail.com>

* typo: indent shell scripts with 4spaces

Signed-off-by: Delweng <delweng@gmail.com>

* fix the indent typo again

Signed-off-by: Delweng <delweng@gmail.com>

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-06-27 21:53:32 +00:00
terencechain
5291b9d85b RPC: Add submit registration endpoint (#10942) 2022-06-27 19:54:28 +00:00
terencechain
7747471624 Update sepolia ttd (#10940)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-27 18:27:49 +00:00
Radosław Kapka
31f96a05b3 Update fastssz dependency in deps.bzl (#10939) 2022-06-27 18:32:38 +02:00
Murphy Law
dfe8e54a42 Relinquish peerLock when blockFetcher context is done (#10932)
* Relinquish peerLock when blockFetcher context is done

* Unlock before sending range requests

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-06-27 15:25:33 +00:00
Nishant Das
3a841a8467 Fix Eth1Connection API Panic (#10938)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-27 14:19:44 +00:00
Radosław Kapka
7f56ac6355 Massive code cleanup (#10913)
* Massive code cleanup

* fix test issues

* remove GetGenesis mock expectations

* unused receiver

* rename unused params

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-27 13:34:38 +00:00
Preston Van Loon
9216be7d43 State: add fuzz test for unmarshal ssz (#10935)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-06-27 11:19:39 +00:00
Nishant Das
7c489199bf Fix Builder Service Panic (#10937) 2022-06-27 10:50:15 +00:00
Potuz
f8b4d8c57a Deprecate store in blockchain pkg (#10903)
* Deprecate store WIP

* fix spectests build

* mock right interface

* sync tests build

* more tests builds

* blockchain tests

- TestFinalizedCheckpt_GenesisRootOk
- TestCurrentJustifiedCheckpt_CanRetrieve
- TestJustifiedCheckpt_GenesisRootOk
- TestHeadRoot_CanRetrieve
- TestHeadRoot_UseDB
- TestService_ProcessAttestationsAndUpdateHead
- TestService_VerifyWeakSubjectivityRoot
- TestVerifyFinalizedConsistency_InconsistentRoot_ProtoArray
- TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree
- TestVerifyFinalizedConsistency_Ok
- TestStore_OnBlock_ProtoArray
- TestStore_OnBlock_DoublyLinkedTree
- TestStore_OnBlockBatch_ProtoArray
- TestStore_OnBlockBatch_DoublyLinkedTree
- TestStore_OnBlockBatch_NotifyNewPayload
- TestCachedPreState_CanGetFromStateSummary_ProtoArray
- TestCachedPreState_CanGetFromStateSummary_DoublyLinkedTree

* more blockchain tests

- TestStore_OnBlockBatch_PruneOK_Protoarray
- TestFillForkChoiceMissingBlocks_CanSave_ProtoArray
- TestFillForkChoiceMissingBlocks_CanSave_DoublyLinkedTree
- TestFillForkChoiceMissingBlocks_RootsMatch_Protoarray
- TestFillForkChoiceMissingBlocks_RootsMatch_DoublyLinkedTree
- TestFillForkChoiceMissingBlocks_FilterFinalized_ProtoArray
- TestFillForkChoiceMissingBlocks_FilterFinalized_DoublyLinkedTree
- TestVerifyBlkDescendant
- Test_verifyBlkFinalizedSlot_invalidBlock
- TestChainService_SaveHeadNoDB

* update best justified from genesis

* deal with nil head on saveHead

* initialize prev justified checkpoint

* update finalization correctly

* update finalization logic

* update finalization logic

* track the wall clock on forkchoice spectests

* export copies of checkpoints from blockchain package

* do not use forkchoice's head on HeadRoot

* Remove debug remain

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* terence's review

* add forkchoice types deps

* wtf

* debugging

* init-sync: update justified and finalized checkpoints on db

* call updateFinalized instead of only DB

* remove debug symbols

* safe copy headroot

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-06-25 03:57:52 +00:00
terencechain
b7463d0070 Ignore nil node when saving orphaned atts (#10930)
* Ignore nil node when saving orphaned atts

* Just use ErrUnknownCommonAncestor
2022-06-24 18:07:31 +00:00
Radosław Kapka
2b6e86ec1b Some test improvements (#10928)
* extract DeterministicGenesisStateWithGenesisBlock

(cherry picked from commit a5e3a9c9bbbacb23a644f5c68c92839a315f66a1)

# Conflicts:
#	testing/util/state.go

* part 1

* part 2

* part 3

* fix errors

* db interface public visibility

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-24 17:22:39 +00:00
kasey
4e604ee22b Stop ruining our lives with flaky e2e (#10929)
* merge checkpoint sync test with base minimal test

* move web3signer to scenario test

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-06-24 17:01:12 +00:00
Delweng
3576d2ccfe cmd/beacon-chain: add jwtcommands (#10919)
* cmd/beacon-chain: add jwtcommands

Signed-off-by: Delweng <delweng@gmail.com>

* gazelle

Co-authored-by: mick <103775631+symbolpunk@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
2022-06-24 15:54:08 +00:00
Radosław Kapka
4b009ed813 Rename upgrade_to_altair.go to upgrade_to_bellatrix.go (#10926)
(cherry picked from commit c664e59a2e66a2e52814bdb59840ec3a205273e7)

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-24 12:46:07 +00:00
Radosław Kapka
7faad27f5f Update fastssz dependency (#10927) 2022-06-24 12:18:42 +00:00
terencechain
2d3966bf4f RPC: builder block (#10908)
* Can contruct builder block

* Fix build and tests

* Add nil checks and tests

* Gazelle

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-24 02:41:16 +00:00
Nishant Das
58d10e3ace Deprecate Step Parameter from our Block By Range Requests (#10914)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-23 21:48:28 +00:00
terencechain
88becdc114 Register builder service (#10924) 2022-06-23 13:44:27 -07:00
Potuz
20b4c60bcb Apply Proposer Boost from forkchoice (#10893)
* Apply proposer boost at block insertion

* gaz

* fix tests

* revert time change

* conflict

* change genesis time in forkchoice on spectests

* Terence Review

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-06-22 13:56:07 +00:00
terencechain
0b50ab7832 Consolidate ExecutionPayloadHeader Protobuf definitions (#10917) 2022-06-22 13:08:06 +02:00
Radosław Kapka
6b55d33ea2 Return error when requesting future state (#10915)
* Return error when requesting future state

* fix genesis test case

* fix test cases
2022-06-21 23:13:19 +00:00
terencechain
7e64a3963d Change submit validator registration to registration(s) (#10907)
* Change submit validator registration to registration(s)

* Fixed a few tests

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-21 18:46:28 +00:00
Potuz
55b4af9f92 track the wall clock on forkchoice spectests (#10916) 2022-06-21 18:12:08 +00:00
terencechain
546bb5ed53 Clean up misc warnings (#10900)
* Clean up misc warnings

* Gazelle

* Rm state assignments

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-21 15:26:56 +00:00
Radosław Kapka
29a25b3f09 Add spec URLs to API docs (#10912) 2022-06-21 14:29:44 +00:00
Jie Hou
56af079aea Change gocognit complextity threshold to 65 (#10906)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-06-20 03:57:39 +00:00
Potuz
385f101b2b Use forkchoice to verify finalized descendant (#10905)
* Use forkchoice to verify finalized descendant

* fix test
2022-06-20 00:55:17 +00:00
Radosław Kapka
b3c3b207c9 Enable fastssz to use vectorized HTR hashing algo (#10819)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-18 12:13:36 +00:00
Jie Hou
e0cd10ed3c Refactor: Reduce cognitive complexity for 5 out of top 10 functions in code base (#10854)
* Refactor waitForActivation

* Finish refactor of runner.go

* Refactor validator/client/metrics.go

* Refactored beacon-chain/sync/pending_attestations_queue.go

* Refactored beacon-chain/powchain/log_processing.go

* Resolve conflicts with develop branch

* Fix Deepsource: Context should be the first param

* Address review comments

* Put headersMap as pass-in function parameter

* Change function signature of processBlockInBatch

* Address nisdas's comment
2022-06-18 10:14:43 +00:00
Potuz
2d0fdf8b4a prune within forkchoice (#10896) 2022-06-17 12:53:17 -03:00
Potuz
3a7117dcbf Do not downcast time in currentslot (#10897)
* Do not downcast time in currentslot

* no magic constants

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-16 22:26:55 +00:00
Radosław Kapka
8888fa4bb3 Revert enable-native-state flag (#10898)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-06-16 20:59:46 +00:00
Radosław Kapka
f065209a3e Small API Middleware upgrades (#10895)
* remove useless comments

* add comment

* fix writing bug

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-16 19:19:05 +00:00
Potuz
e439f4aff6 Forkchoice duplicate store (#10840)
* forkchoice tests

* blockchain tests

* no fatal errors

* use ZeroHash for genesis

* deal with zerohashes at genesis

* avoid nil best justified checkpoint on new store

* unit tests

* Debug log

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* Terence's review

* update capitalization

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* update capitalization

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* update capitalization

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* update capitalization

Co-authored-by: terencechain <terence@prysmaticlabs.com>

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-06-16 18:21:40 +00:00
terencechain
88f8dbecc8 Compute correct domain for validator registration (#10894)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-06-16 16:32:04 +00:00
james-prysm
2dfe291cf9 Keymanager APIs: fee recipient api (#10850)
* initial commit wip

* setting protos for fee recipient

* updating proto based on specs

* updating apimiddleware

* generated APIs

* updating proto to fit spec

* fixing naming of fields

* fixing endpoint_factory and associated structs

* fixing imports

* adding in custom http types to grpc gateway

* adding import options

* changing package option

* still testing protos

* adding to bazel

* testing dependency changes

* more tests

* more tests

* more tests

* more tests

* more tests

* more tests

* testing changing repo dep

* testing deps

* testing deps

* testing deps

* testing deps

* testing deps

* testing deps

* reverting

* testing import

* testing bazel

* bazel test

* testing

* testing

* testing import

* updating generated proto code

* wip set fee recipient by pubkey

* adding list fee recipient logic

* fixing thrown error

* fixing bug with API

* fee recipient delete function

* updating generated proto logic

* fixing deposit api and adding postman tests

* fixing proto imports

* fixing unit tests and checksums

* fixing test

* adding unit tests

* fixing bazel

* Update validator/rpc/standard_api.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update validator/rpc/standard_api.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* resolving review comments

* fixing return

* Update config/validator/service/proposer-settings.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* updating proto

* updating message name

* fixing imports

* updating based on review comments

* adding middleware unit tests

* capitalizing errors

* using error instead of errorf

* fixing missed unit test variable rename

* fixing format variable

* fixing unit test

* Update validator/rpc/standard_api.go

* Update validator/rpc/standard_api.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-16 15:10:23 +00:00
Mike Neuder
a80c15f3a9 Refactor validator accounts import to remove cli context dependency (#10890)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-06-16 14:14:03 +00:00
Nishant Das
4de92bafc4 Improve Field Trie Recomputation (#10884)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-06-16 13:14:29 +00:00
terencechain
69438583e5 Pad Uint256's SSZBytes to length 32 (#10889)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-16 04:29:32 +00:00
Raul Jordan
e81f3fed01 Remove Extraneous BoltDB Logs (#10888) 2022-06-16 01:11:07 +00:00
Raul Jordan
1b2a5fb4a5 Update CODEOWNERS (#10887) 2022-06-15 21:51:44 +00:00
Jie Hou
6c878b1665 Refactor: Continue reducing cognitive complexity (#10862)
* Refactor beacon-chain/db/kv/state.go

* Refactor api/gateway/apimiddleware/process_field.go

* Refactor beacon-chain/sync/initial-sync/blocks_queue.go

* Refactor validator/db/kv/migration_optimal_attester_protection.go

* goimports

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-06-15 18:34:59 +00:00
james-prysm
838963c9f7 validator registration request bug: reusing public keys (#10883)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-15 17:26:05 +00:00
kasey
7b38f8b8fc submit lists of validator registrations (#10882)
* submit lists of validator registrations

* RegisterValidator to take a list

* Gazelle

* Fix go imports

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-15 16:29:56 +00:00
Nishant Das
23e8e695cc Fix Sepolia Testnet Initialization (#10886) 2022-06-15 15:05:44 +00:00
Sammy Rosso
ce9eaae22e Add payload data logging (#10845)
* Add logging of block payload data

Added a new func logBlockPayloadData that includes logging of the
block number and the gas utilized.
Related to #10795.

* Replace Info with Debug + renamed func

Renamed the function to be clearer and replaced Info logging with Debug.
Related to #10795.

* Compute correct value for gas utilized

Related to #10795.

* Round result of gas utilized to 2 decimal places

* Add new error message

* Check if block is an Execution block

Co-authored-by: terencechain <terence@prysmaticlabs.com>

* Fix missing imports

* Undo changes

* Update beacon-chain/blockchain/receive_block.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Added error logging to log statements

Changed the error handling from log statements. Instead of returning the
error we log the error.

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-15 10:03:48 +00:00
Nishant Das
7010e8dec8 Graduate Prune Canonical Attestations Feature (#10623)
* graduate canonical prune feat

* fix test

* fix tests

Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-15 09:05:19 +00:00
Nishant Das
9e4ba75e71 Batch Scenario Runs Into Single Test (#10878)
* batch scenarios

* fix

* fix

* Update testing/endtoend/endtoend_test.go
2022-06-15 08:02:31 +00:00
kasey
044a4ad5a3 Ignore genesis state url and checkpoint sync flags after first run of prysm (#10881)
* ignore remote genesis url flag if present in db

* ignore checkpoint sync flags if initialized

* lint

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-14 23:23:25 +00:00
Radosław Kapka
690084cab6 Enable native state for Sepolia (#10880)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-14 21:51:58 +00:00
james-prysm
88db7117d2 Adding additional checksum check for fee recipient. (#10879)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-14 18:43:37 +00:00
mick
1faa292615 Add is_optimistic to SyncDetails, hydrate via ValidateSync (#10692)
* cache test

* oh

* syntax fix

* error fix

* tinker

* tinker

* newlines?

* no-whitespace?

* feedback

* fix

* comment

* comments

* need to figure out how to lint locally...

* feedback

* fixes

* progress

* progress

* dedupe

* s

* working

* remove empty lines

* update test

* return errors properly

* make helpers publicly visible

* fix tests

Co-authored-by: rkapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-06-14 17:47:09 +00:00
terencechain
434018a4b9 Add Sepolia config (#10868) 2022-06-14 14:50:05 +00:00
Nishant Das
54624569bf Fix Fuzzing Failures in Our CI (#10875)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-14 13:12:28 +00:00
Håvard Anda Estensen
b55ddb5a34 Use go:build lines and remove obsolete +build lines (#10704)
* Use go:build lines and remove obsolete +build lines

* Run gazelle

* Update crypto/bls/blst/stub.go

* Update crypto/bls/blst/stub.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-06-14 11:47:27 +00:00
terencechain
a38de90435 Move computeCheckpoints to private (#10874)
* Move computeCheckpoints to private

* Feedback

* Godoc

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-14 09:25:41 +00:00
Michael Blau
d454d30f19 Merge ascii art banner (#10773)
* Add Merge ASCII art banner

* Add merge ASCII art banner

* gofmt

* Go fmt

* Fix go fmt again

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-06-14 08:25:42 +00:00
Jie Hou
b04dd9fe5c Enable gocognit linter (#10867)
* Enable gocognit linter

Currently the gocognit complexity threshold is set to 95 to make
sure no existing files will fail the linter. In future we will
reduce this threshold to a much lower one.

The recommended threshold is usually 30. Our code base has maximum
of 97 right now...But it's better late than never to pay attention
to our code compexity.

* Test to see github complains

* Resume to 97

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-06-14 14:19:34 +08:00
kasey
8140a1a7e0 update info message about ws checkpoint (#10871)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-14 00:55:00 +00:00
terencechain
cab9917317 Fix message typo for ErrorIs (#10873)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-06-14 00:19:04 +00:00
terencechain
4c4fb9f2c0 Fix gosec scan: G112 (CWE-400) Potential Slowloris Attack (#10872) 2022-06-13 22:29:26 +00:00
Mike Neuder
80f4f22401 Refactor validator accounts exit to remove cli context dependency (#10841)
* Refactor validator accounts exit to remove cli context dependency

* bazel run //:gazelle -- fix

* fixing deepsource findings

* fixing broken test

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-06-13 15:17:46 +00:00
terencechain
dd296cbd8a Disallow lower justified epoch to override higher epoch (#10865) 2022-06-11 17:37:37 +00:00
terencechain
f9e3b0a3c2 Active balance: return EFFECTIVE_BALANCE_INCREMENT as min (#10866) 2022-06-11 08:54:33 -07:00
terencechain
a58809597e Sync: don't process pending blocks w/o genesis time (#10750)
* Sync: don't process pending blocks w/o genesis time

* Update pending_blocks_queue.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-06-10 05:02:47 +00:00
Nishant Das
7f443e8387 Add Optimistic Sync Scenario Testing (#10836)
* add latest changes

* fix it

* add multiclient support

* fix tests

* Apply suggestions from code review

* fix test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 23:24:53 +00:00
Potuz
18fc17c903 Forkchoice checkpoints (#10823)
* double_tree_changes

* protoarray changes

* beacon-chain changes

* spec tests and debug rpc fixes

* more conflicts

* more conflicts

* Terence's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 22:28:30 +00:00
Sammy Rosso
d7b01b9d81 Fix default mainnet log when using chain config (#10855)
* Fix default mainnet log when using chain config

Add a log to specify the use of a chain-config-file rather than
defaulting to that of mainnet.
Related to #10821.

* Add more specific log message

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 21:31:59 +00:00
terencechain
7ebd9035dd Add ttd metric (#10851)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 20:35:38 +00:00
Radosław Kapka
578fea73d7 API's IsOptimistic - update header.StateRoot only when block is not missing (#10852)
* bug fix

* tests

* comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 19:35:35 +00:00
terencechain
7fcadbe3ef Add optimistic status to chainhead (#10842)
* Add optimistic status to chainhead

* Fix tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-09 17:37:52 +00:00
Radosław Kapka
fce9e6883d Native Blocks Ep. 1 - New types and functions (#10837)
* types and functions

* partially done tests

* refactor

* remaining Proto() tests

* remaining proto.go tests

* simplify UnmarshalSSZ and move BeaconBlockIsNil

* getters_test

* remove errAssertionFailed

* review feedback

* remove cloning protobuf

* fmt

* change IsNil

* fix tests
2022-06-09 13:13:02 +00:00
terencechain
5ee66a4a68 Update engine api buckets (#10847)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-08 20:12:54 +00:00
kasey
f4c7fb6182 checkpoint sync e2e to use 3m timeout w/ elapsed log (#10849)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-06-08 19:14:10 +00:00
Potuz
077dcdc643 enable dev on Ropsten (#10839)
* enable dev on Ropsten

* include only vectorized HTR and doublylinkedtree

* error handling

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-08 13:45:13 +00:00
kasey
1fa864cb1a use slot:block index correctly (#10820)
* adding splitRoots, refactor to use it

* use splitRoots & work in roots only

the most common use case for this method is to get a list of
candidate roots and check if they are canonical. there isn't a great
reason to look up all the non-canonical blocks, because forkchoice
checks based on the root only, so just return roots and defer the
responsibility of resolving those to full blocks.

* update comment

* clean up shadowing

* more clear non-error return

* add test case for single root in index slot

* fmt

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-07 16:47:42 +00:00
Mike Neuder
6357860cc2 Refactor validator accounts backup to remove cli context dependency (#10824)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-06-07 15:19:12 +00:00
mick
cc1ea81d4a Implement generate-auth-secret on beacon node CLI (#10733)
* s

* s

* typo

* typo

* s

* s

* fixes based on PR feedback

* PR feedback

* reverting log changes

* adding flag per feedback

* conventions

* main fixes

* Update cmd/beacon-chain/jwt/jwt.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update cmd/beacon-chain/jwt/jwt.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update cmd/flags.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* s

* tests

* test attempt

* test

* Update cmd/beacon-chain/jwt/jwt.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* err fix

* s

* further simplify

* cleanup

* namefix

* tests pass

* gaz

* rem deadcode

* Gaz

* shorthand

* naming

* test pass

* dedup

* success

* Ignore jwt.hex file

* logrus

* feedback

* junk

* Also check that no file was written

* local run config

* small fix

* jwt

* testfix

* s

* disabling test

* reverting main changes

* main revert

* removing temp folder

* comment

* gaz

* clarity

* rem

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-06-07 07:37:12 +00:00
Nishant Das
dd65622441 Efficiently Pack Uint64 Lists (#10830)
* make it more efficient

* radek's review

* review again
2022-06-07 06:34:13 +00:00
Nishant Das
6c39301f33 Integrate Engine Proxy into E2E (#10808)
* add it in

* support jwt secret

* fix it

* fix

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-06 23:35:54 +00:00
terencechain
5b12f5a27d Integrate builder client into builder service (#10825)
* Integrate builder client into builder service

* Do nothing if pubkey is not found

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-06 22:38:49 +00:00
698 changed files with 18896 additions and 12228 deletions

10
.github/CODEOWNERS vendored

@@ -9,8 +9,8 @@ deps.bzl @prysmaticlabs/core-team
# Radek and Nishant are responsible for changes that can affect the native state feature.
# See https://www.notion.so/prysmaticlabs/Native-Beacon-State-Redesign-6cc9744b4ec1439bb34fa829b36a35c1
/beacon-chain/state/fieldtrie/ @rkapka @nisdas
/beacon-chain/state/v1/ @rkapka @nisdas
/beacon-chain/state/v2/ @rkapka @nisdas
/beacon-chain/state/v3/ @rkapka @nisdas
/beacon-chain/state/state-native/ @rkapka @nisdas
/beacon-chain/state/fieldtrie/ @rkapka @nisdas @rauljordan
/beacon-chain/state/v1/ @rkapka @nisdas @rauljordan
/beacon-chain/state/v2/ @rkapka @nisdas @rauljordan
/beacon-chain/state/v3/ @rkapka @nisdas @rauljordan
/beacon-chain/state/state-native/ @rkapka @nisdas @rauljordan


@@ -64,7 +64,6 @@ jobs:
- name: Golangci-lint
uses: golangci/golangci-lint-action@v2
with:
args: --print-issued-lines --sort-results --no-config --timeout=10m --disable-all -E deadcode -E errcheck -E gosimple --skip-files=validator/web/site_data.go --skip-dirs=proto --go=1.18
version: v1.45.2
skip-go-installation: true
@@ -88,11 +87,14 @@ jobs:
- name: Build
# Use blst tag to allow go and bazel builds for blst.
run: go build -v ./...
env:
CGO_CFLAGS: "-O -D__BLST_PORTABLE__"
# fuzz leverage go tag based stubs at compile time.
# Building and testing with these tags should be checked and enforced at pre-submit.
- name: Test for fuzzing
run: go test -tags=fuzz,develop ./... -test.run=^Fuzz
env:
CGO_CFLAGS: "-O -D__BLST_PORTABLE__"
# Tests run via Bazel for now...
# - name: Test

3
.gitignore vendored

@@ -35,3 +35,6 @@ bin
# p2p metaData
metaData
# execution API authentication
jwt.hex


@@ -1,69 +1,26 @@
linters-settings:
govet:
check-shadowing: true
settings:
printf:
funcs:
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Infof
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Warnf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Errorf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Fatalf
golint:
min-confidence: 0
gocyclo:
min-complexity: 10
maligned:
suggest-new: true
dupl:
threshold: 100
goconst:
min-len: 2
min-occurrences: 2
depguard:
list-type: blacklist
packages:
# logging is allowed only by logutils.Log, logrus
# is allowed to use only in logutils package
- github.com/sirupsen/logrus
misspell:
locale: US
lll:
line-length: 140
goimports:
local-prefixes: github.com/golangci/golangci-lint
gocritic:
enabled-tags:
- performance
- style
- experimental
disabled-checks:
- wrapperFunc
run:
skip-files:
- validator/web/site_data.go
- .*_test.go
skip-dirs:
- proto
- tools/analyzers
timeout: 10m
go: '1.18'
linters:
disable-all: true
enable:
- deadcode
- goconst
- goimports
- golint
- gosec
- misspell
- structcheck
- typecheck
- unparam
- varcheck
- gofmt
- unused
disable-all: true
- errcheck
- gosimple
- gocognit
run:
skip-dirs:
- proto/
- ^contracts/
deadline: 10m
linters-settings:
gocognit:
# TODO: We should target for < 50
min-complexity: 65
# golangci.com configuration
# https://github.com/golangci/golangci/wiki/Configuration
service:
golangci-lint-version: 1.15.0 # use the fixed version to not introduce new linters unexpectedly
prepare:
- echo "here I can run custom commands, but no preparation needed for this repo"
output:
print-issued-lines: true
sort-results: true


@@ -28,6 +28,7 @@ const (
)
var errMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
var errMalformedRequest = errors.New("required request data are missing")
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
@@ -199,9 +200,15 @@ func (c *Client) GetHeader(ctx context.Context, slot types.Slot, parentHash [32]
// RegisterValidator encodes the SignedValidatorRegistrationV1 message to json (including hex-encoding the byte
// fields with 0x prefixes) and posts to the builder validator registration endpoint.
func (c *Client) RegisterValidator(ctx context.Context, svr *ethpb.SignedValidatorRegistrationV1) error {
v := &SignedValidatorRegistration{SignedValidatorRegistrationV1: svr}
body, err := json.Marshal(v)
func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error {
if len(svr) == 0 {
return errors.Wrap(errMalformedRequest, "empty validator registration list")
}
vs := make([]*SignedValidatorRegistration, len(svr))
for i := 0; i < len(svr); i++ {
vs[i] = &SignedValidatorRegistration{SignedValidatorRegistrationV1: svr[i]}
}
body, err := json.Marshal(vs)
if err != nil {
return errors.Wrap(err, "error encoding the SignedValidatorRegistration value body in RegisterValidator")
}


@@ -14,6 +14,7 @@ import (
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
)
@@ -73,7 +74,7 @@ func TestClient_Status(t *testing.T) {
func TestClient_RegisterValidator(t *testing.T) {
ctx := context.Background()
expectedBody := `{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"23","timestamp":"42","pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"}}`
expectedBody := `[{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"23","timestamp":"42","pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"}}]`
expectedPath := "/eth/v1/builder/validators"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
@@ -104,7 +105,7 @@ func TestClient_RegisterValidator(t *testing.T) {
Pubkey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
},
}
require.NoError(t, c.RegisterValidator(ctx, reg))
require.NoError(t, c.RegisterValidator(ctx, []*eth.SignedValidatorRegistrationV1{reg}))
}
func TestClient_GetHeader(t *testing.T) {
@@ -300,7 +301,7 @@ func testSignedBlindedBeaconBlockBellatrix(t *testing.T) *eth.SignedBlindedBeaco
SyncCommitteeSignature: make([]byte, 48),
SyncCommitteeBits: bitfield.Bitvector512{0x01},
},
ExecutionPayloadHeader: &eth.ExecutionPayloadHeader{
ExecutionPayloadHeader: &v1.ExecutionPayloadHeader{
ParentHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
FeeRecipient: ezDecode(t, "0xabcf8e0d4e9587369b2301d0790347320302cc09"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),


@@ -63,7 +63,7 @@ func sszBytesToUint256(b []byte) Uint256 {
// SSZBytes creates an ssz-style (little-endian byte slice) representation of the Uint256
func (s Uint256) SSZBytes() []byte {
return bytesutil.ReverseByteOrder(s.Int.Bytes())
return bytesutil.PadTo(bytesutil.ReverseByteOrder(s.Int.Bytes()), 32)
}
var errUnmarshalUint256Failed = errors.New("unable to UnmarshalText into a Uint256 value")
@@ -149,8 +149,8 @@ func (bb *BuilderBid) ToProto() (*eth.BuilderBid, error) {
}, nil
}
func (h *ExecutionPayloadHeader) ToProto() (*eth.ExecutionPayloadHeader, error) {
return &eth.ExecutionPayloadHeader{
func (h *ExecutionPayloadHeader) ToProto() (*v1.ExecutionPayloadHeader, error) {
return &v1.ExecutionPayloadHeader{
ParentHash: h.ParentHash,
FeeRecipient: h.FeeRecipient,
StateRoot: h.StateRoot,
@@ -189,7 +189,7 @@ type ExecutionPayloadHeader struct {
BaseFeePerGas Uint256 `json:"base_fee_per_gas,omitempty"`
BlockHash hexutil.Bytes `json:"block_hash,omitempty"`
TransactionsRoot hexutil.Bytes `json:"transactions_root,omitempty"`
*eth.ExecutionPayloadHeader
*v1.ExecutionPayloadHeader
}
func (h *ExecutionPayloadHeader) MarshalJSON() ([]byte, error) {


@@ -205,7 +205,7 @@ func TestExecutionHeaderResponseToProto(t *testing.T) {
expected := &eth.SignedBuilderBid{
Message: &eth.BuilderBid{
Header: &eth.ExecutionPayloadHeader{
Header: &v1.ExecutionPayloadHeader{
ParentHash: parentHash,
FeeRecipient: feeRecipient,
StateRoot: stateRoot,
@@ -567,9 +567,9 @@ func TestProposerSlashings(t *testing.T) {
require.Equal(t, expected, string(b))
}
func pbExecutionPayloadHeader(t *testing.T) *eth.ExecutionPayloadHeader {
func pbExecutionPayloadHeader(t *testing.T) *v1.ExecutionPayloadHeader {
bfpg := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
return &eth.ExecutionPayloadHeader{
return &v1.ExecutionPayloadHeader{
ParentHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
FeeRecipient: ezDecode(t, "0xabcf8e0d4e9587369b2301d0790347320302cc09"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
@@ -694,9 +694,10 @@ func TestMarshalBlindedBeaconBlockBodyBellatrix(t *testing.T) {
}
func TestRoundTripUint256(t *testing.T) {
vs := "452312848583266388373324160190187140051835877600158453279131187530910662656"
vs := "4523128485832663883733241601901871400518358776001584532791311875309106626"
u := stringToUint256(vs)
sb := u.SSZBytes()
require.Equal(t, 32, len(sb))
uu := sszBytesToUint256(sb)
require.Equal(t, true, bytes.Equal(u.SSZBytes(), uu.SSZBytes()))
require.Equal(t, vs, uu.String())
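Why the PadTo change above is needed: big.Int.Bytes() drops leading zero bytes, so for any value below 2^248 the reversed (little-endian) slice comes out shorter than 32 bytes and no longer behaves as a fixed-width SSZ uint256 — which is why the test now uses a smaller value and asserts a 32-byte result. Below is a minimal standalone sketch of the same logic; the reverse/padTo helpers are illustrative stand-ins for prysm's bytesutil.ReverseByteOrder and bytesutil.PadTo, not code from this diff.

package main

import (
	"fmt"
	"math/big"
)

// reverse mimics bytesutil.ReverseByteOrder: big-endian -> little-endian.
func reverse(b []byte) []byte {
	out := make([]byte, len(b))
	for i := range b {
		out[i] = b[len(b)-1-i]
	}
	return out
}

// padTo mimics bytesutil.PadTo: right-pad with zeros up to n bytes.
func padTo(b []byte, n int) []byte {
	if len(b) >= n {
		return b
	}
	return append(b, make([]byte, n-len(b))...)
}

func main() {
	v := big.NewInt(4196)    // far below 2^248, so Bytes() is only 2 bytes long
	le := reverse(v.Bytes()) // little-endian, but still only 2 bytes
	fmt.Println(len(le))            // 2  -- the un-padded form breaks the fixed-width round trip
	fmt.Println(len(padTo(le, 32))) // 32 -- the padded ssz-style representation
}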


@@ -5,6 +5,7 @@ go_library(
srcs = [
"gateway.go",
"log.go",
"modifiers.go",
"options.go",
],
importpath = "github.com/prysmaticlabs/prysm/api/gateway",
@@ -23,6 +24,7 @@ go_library(
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//connectivity:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)


@@ -15,6 +15,7 @@ go_library(
deps = [
"//api/grpc:go_default_library",
"//encoding/bytesutil:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_pkg_errors//:go_default_library",


@@ -111,7 +111,7 @@ func (m *ApiProxyMiddleware) WithMiddleware(path string) http.HandlerFunc {
}
}
if req.Method == "DELETE" {
if req.Method == "DELETE" && req.Body != http.NoBody {
if errJson := handleDeleteRequestForEndpoint(endpoint, req); errJson != nil {
WriteError(w, errJson, nil)
return


@@ -9,6 +9,7 @@ import (
"strings"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/wealdtech/go-bytesutil"
@@ -31,26 +32,26 @@ func processField(s interface{}, processors []fieldProcessor) error {
sliceElem := t.Field(i).Type.Elem()
kind := sliceElem.Kind()
// Recursively process slices to struct pointers.
if kind == reflect.Ptr && sliceElem.Elem().Kind() == reflect.Struct {
switch {
case kind == reflect.Ptr && sliceElem.Elem().Kind() == reflect.Struct:
for j := 0; j < v.Field(i).Len(); j++ {
if err := processField(v.Field(i).Index(j).Interface(), processors); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
// Process each string in string slices.
if kind == reflect.String {
case kind == reflect.String:
for _, proc := range processors {
_, hasTag := t.Field(i).Tag.Lookup(proc.tag)
if hasTag {
for j := 0; j < v.Field(i).Len(); j++ {
if err := proc.f(v.Field(i).Index(j)); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
if !hasTag {
continue
}
for j := 0; j < v.Field(i).Len(); j++ {
if err := proc.f(v.Field(i).Index(j)); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
}
// Recursively process struct pointers.
case reflect.Ptr:
@@ -100,6 +101,20 @@ func base64ToHexProcessor(v reflect.Value) error {
return nil
}
func base64ToChecksumAddressProcessor(v reflect.Value) error {
if v.String() == "" {
// Empty hex values are represented as "0x".
v.SetString("0x")
return nil
}
b, err := base64.StdEncoding.DecodeString(v.String())
if err != nil {
return err
}
v.SetString(common.BytesToAddress(b).Hex())
return nil
}
func base64ToUint256Processor(v reflect.Value) error {
if v.String() == "" {
return nil


@@ -104,6 +104,8 @@ func ReadGrpcResponseBody(r io.Reader) ([]byte, ErrorJson) {
}
// HandleGrpcResponseError acts on an error that resulted from a grpc-gateway's response.
// Whether there was an error is indicated by the bool return value. In case of an error,
// there is no need to write to the response because it's taken care of by the function.
func HandleGrpcResponseError(errJson ErrorJson, resp *http.Response, respBody []byte, w http.ResponseWriter) (bool, ErrorJson) {
responseHasError := false
if err := json.Unmarshal(respBody, errJson); err != nil {
@@ -149,6 +151,10 @@ func ProcessMiddlewareResponseFields(responseContainer interface{}) ErrorJson {
tag: "hex",
f: base64ToHexProcessor,
},
{
tag: "address",
f: base64ToChecksumAddressProcessor,
},
{
tag: "enum",
f: enumToLowercaseProcessor,
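For reference, a minimal standalone sketch of what the new address tag does end to end, using the same go-ethereum calls as base64ToChecksumAddressProcessor above and the same "Zm9v" input the middleware test below asserts on. The helper name is illustrative, not part of this diff.

package main

import (
	"encoding/base64"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// checksumAddressFromBase64 mirrors base64ToChecksumAddressProcessor without the
// reflection plumbing: decode the base64 payload and render it as an EIP-55
// checksummed Ethereum address (BytesToAddress left-pads short inputs to 20 bytes).
func checksumAddressFromBase64(s string) (string, error) {
	if s == "" {
		return "0x", nil // empty values are rendered as "0x", as in the processor
	}
	b, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return "", err
	}
	return common.BytesToAddress(b).Hex(), nil
}

func main() {
	addr, err := checksumAddressFromBase64("Zm9v") // base64 of "foo" = 0x666f6f
	if err != nil {
		panic(err)
	}
	fmt.Println(addr) // 0x0000000000000000000000000000000000666F6f
}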


@@ -31,21 +31,25 @@ func defaultRequestContainer() *testRequestContainer {
}
type testResponseContainer struct {
TestString string
TestHex string `hex:"true"`
TestEmptyHex string `hex:"true"`
TestUint256 string `uint256:"true"`
TestEnum string `enum:"true"`
TestTime string `time:"true"`
TestString string
TestHex string `hex:"true"`
TestEmptyHex string `hex:"true"`
TestAddress string `address:"true"`
TestEmptyAddress string `address:"true"`
TestUint256 string `uint256:"true"`
TestEnum string `enum:"true"`
TestTime string `time:"true"`
}
func defaultResponseContainer() *testResponseContainer {
return &testResponseContainer{
TestString: "test string",
TestHex: "Zm9v", // base64 encoding of "foo"
TestEmptyHex: "",
TestEnum: "Test Enum",
TestTime: "2006-01-02T15:04:05Z",
TestString: "test string",
TestHex: "Zm9v", // base64 encoding of "foo"
TestEmptyHex: "",
TestAddress: "Zm9v",
TestEmptyAddress: "",
TestEnum: "Test Enum",
TestTime: "2006-01-02T15:04:05Z",
// base64 encoding of 4196 in little-endian
TestUint256: "ZBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=",
@@ -247,6 +251,8 @@ func TestProcessMiddlewareResponseFields(t *testing.T) {
require.Equal(t, true, errJson == nil)
assert.Equal(t, "0x666f6f", container.TestHex)
assert.Equal(t, "0x", container.TestEmptyHex)
assert.Equal(t, "0x0000000000000000000000000000000000666F6f", container.TestAddress)
assert.Equal(t, "0x", container.TestEmptyAddress)
assert.Equal(t, "4196", container.TestUint256)
assert.Equal(t, "test enum", container.TestEnum)
assert.Equal(t, "1136214245", container.TestTime)
@@ -292,7 +298,7 @@ func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
v, ok = writer.Header()["Content-Length"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "181", v[0])
assert.Equal(t, "224", v[0])
assert.Equal(t, 204, writer.Code)
assert.DeepEqual(t, responseJson, writer.Body.Bytes())
})


@@ -121,8 +121,9 @@ func (g *Gateway) Start() {
}
g.server = &http.Server{
Addr: g.cfg.gatewayAddr,
Handler: corsMux,
Addr: g.cfg.gatewayAddr,
Handler: corsMux,
ReadHeaderTimeout: time.Second,
}
go func() {

30
api/gateway/modifiers.go Normal file

@@ -0,0 +1,30 @@
package gateway
import (
"context"
"net/http"
"strconv"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"google.golang.org/protobuf/proto"
)
func HttpResponseModifier(ctx context.Context, w http.ResponseWriter, _ proto.Message) error {
md, ok := gwruntime.ServerMetadataFromContext(ctx)
if !ok {
return nil
}
// set http status code
if vals := md.HeaderMD.Get("x-http-code"); len(vals) > 0 {
code, err := strconv.Atoi(vals[0])
if err != nil {
return err
}
// delete the headers to not expose any grpc-metadata in http response
delete(md.HeaderMD, "x-http-code")
delete(w.Header(), "Grpc-Metadata-X-Http-Code")
w.WriteHeader(code)
}
return nil
}
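The new HttpResponseModifier only takes effect once it is registered with the grpc-gateway mux. The sketch below shows the usual wiring pattern under that assumption — the mux construction and the gRPC-side helper are illustrative, not code from this change: a handler sets the x-http-code metadata header, and the modifier above translates it into the HTTP status while stripping Grpc-Metadata-X-Http-Code from the response.

package example

import (
	"context"
	"strconv"

	gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"

	"github.com/prysmaticlabs/prysm/api/gateway"
)

// newGatewayMux registers the modifier so every response forwarded by the
// gateway passes through HttpResponseModifier before headers are written.
func newGatewayMux() *gwruntime.ServeMux {
	return gwruntime.NewServeMux(
		gwruntime.WithForwardResponseOption(gateway.HttpResponseModifier),
	)
}

// setHTTPStatus is how a gRPC handler could request a specific HTTP status:
// it sets the x-http-code header that HttpResponseModifier reads and deletes.
func setHTTPStatus(ctx context.Context, code int) error {
	return grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.Itoa(code)))
}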


@@ -49,7 +49,7 @@ func (lk *Lock) Lock() {
lk.unlock <- 1
}
// Unlocks this lock. Must be called after Lock.
// Unlock unlocks this lock. Must be called after Lock.
// Can only be invoked if there is a previous call to Lock.
func (lk *Lock) Unlock() {
<-lk.unlock
@@ -65,14 +65,14 @@ func (lk *Lock) Unlock() {
<-lk.lock
}
// Temporarily unlocks, gives up the cpu time to other goroutine, and attempts to lock again.
// Yield temporarily unlocks, gives up the cpu time to other goroutine, and attempts to lock again.
func (lk *Lock) Yield() {
lk.Unlock()
runtime.Gosched()
lk.Lock()
}
// Creates a new multilock for the specified keys
// NewMultilock creates a new multilock for the specified keys
func NewMultilock(locks ...string) *Lock {
if len(locks) == 0 {
return nil
@@ -87,7 +87,7 @@ func NewMultilock(locks ...string) *Lock {
}
}
// Cleans old unused locks. Returns removed keys.
// Clean cleans old unused locks. Returns removed keys.
func Clean() []string {
locks.lock <- 1
defer func() { <-locks.lock }()


@@ -22,22 +22,22 @@ import (
func TestUnique(t *testing.T) {
var arr []string
assert := assert.New(t)
a := assert.New(t)
arr = []string{"a", "b", "c"}
assert.Equal(arr, unique(arr))
a.Equal(arr, unique(arr))
arr = []string{"a", "a", "a"}
assert.Equal([]string{"a"}, unique(arr))
a.Equal([]string{"a"}, unique(arr))
arr = []string{"a", "a", "b"}
assert.Equal([]string{"a", "b"}, unique(arr))
a.Equal([]string{"a", "b"}, unique(arr))
arr = []string{"a", "b", "a"}
assert.Equal([]string{"a", "b"}, unique(arr))
a.Equal([]string{"a", "b"}, unique(arr))
arr = []string{"a", "b", "c", "b", "d"}
assert.Equal([]string{"a", "b", "c", "d"}, unique(arr))
a.Equal([]string{"a", "b", "c", "d"}, unique(arr))
}
func TestGetChan(t *testing.T) {
@@ -45,9 +45,9 @@ func TestGetChan(t *testing.T) {
ch2 := getChan("aa")
ch3 := getChan("a")
assert := assert.New(t)
assert.NotEqual(ch1, ch2)
assert.Equal(ch1, ch3)
a := assert.New(t)
a.NotEqual(ch1, ch2)
a.Equal(ch1, ch3)
}
func TestLockUnlock(_ *testing.T) {


@@ -10,8 +10,8 @@ go_library(
"head_sync_committee_info.go",
"init_sync_process_block.go",
"log.go",
"merge_ascii_art.go",
"metrics.go",
"new_slot.go",
"options.go",
"pow_block.go",
"process_attestation.go",
@@ -34,7 +34,6 @@ go_library(
deps = [
"//async:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain/store:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
@@ -112,7 +111,6 @@ go_test(
"log_test.go",
"metrics_test.go",
"mock_test.go",
"new_slot_test.go",
"pow_block_test.go",
"process_attestation_test.go",
"process_block_test.go",
@@ -132,6 +130,7 @@ go_test(
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/powchain/testing:go_default_library",
@@ -188,6 +187,7 @@ go_test(
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/powchain/testing:go_default_library",


@@ -4,8 +4,6 @@ import (
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
@@ -81,10 +79,11 @@ type CanonicalFetcher interface {
// FinalizationFetcher defines a common interface for methods in blockchain service which
// directly retrieve finalization and justification related data.
type FinalizationFetcher interface {
FinalizedCheckpt() (*ethpb.Checkpoint, error)
CurrentJustifiedCheckpt() (*ethpb.Checkpoint, error)
PreviousJustifiedCheckpt() (*ethpb.Checkpoint, error)
FinalizedCheckpt() *ethpb.Checkpoint
CurrentJustifiedCheckpt() *ethpb.Checkpoint
PreviousJustifiedCheckpt() *ethpb.Checkpoint
VerifyFinalizedBlkDescendant(ctx context.Context, blockRoot [32]byte) error
IsFinalized(ctx context.Context, blockRoot [32]byte) bool
}
// OptimisticModeFetcher retrieves information about optimistic status of the node.
@@ -94,47 +93,27 @@ type OptimisticModeFetcher interface {
}
// FinalizedCheckpt returns the latest finalized checkpoint from chain store.
func (s *Service) FinalizedCheckpt() (*ethpb.Checkpoint, error) {
cp, err := s.store.FinalizedCheckpt()
if err != nil {
return nil, err
}
func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
cp := s.ForkChoicer().FinalizedCheckpoint()
return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
}
return ethpb.CopyCheckpoint(cp), nil
// PreviousJustifiedCheckpt returns the current justified checkpoint from chain store.
func (s *Service) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
cp := s.ForkChoicer().PreviousJustifiedCheckpoint()
return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
}
// CurrentJustifiedCheckpt returns the current justified checkpoint from chain store.
func (s *Service) CurrentJustifiedCheckpt() (*ethpb.Checkpoint, error) {
cp, err := s.store.JustifiedCheckpt()
if err != nil {
return nil, err
}
return ethpb.CopyCheckpoint(cp), nil
}
// PreviousJustifiedCheckpt returns the previous justified checkpoint from chain store.
func (s *Service) PreviousJustifiedCheckpt() (*ethpb.Checkpoint, error) {
cp, err := s.store.PrevJustifiedCheckpt()
if err != nil {
return nil, err
}
return ethpb.CopyCheckpoint(cp), nil
}
func (s *Service) CurrentJustifiedCheckpt() *ethpb.Checkpoint {
cp := s.ForkChoicer().JustifiedCheckpoint()
return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
}
// BestJustifiedCheckpt returns the best justified checkpoint from store.
func (s *Service) BestJustifiedCheckpt() (*ethpb.Checkpoint, error) {
cp, err := s.store.BestJustifiedCheckpt()
if err != nil {
// If there is no best justified checkpoint, return the checkpoint with root as zeros to be used for genesis cases.
if errors.Is(err, store.ErrNilCheckpoint) {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}, nil
}
return nil, err
}
return ethpb.CopyCheckpoint(cp), nil
}
func (s *Service) BestJustifiedCheckpt() *ethpb.Checkpoint {
cp := s.ForkChoicer().BestJustifiedCheckpoint()
return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
}
// HeadSlot returns the slot of the head of the chain.
@@ -154,9 +133,8 @@ func (s *Service) HeadRoot(ctx context.Context) ([]byte, error) {
s.headLock.RLock()
defer s.headLock.RUnlock()
if s.headRoot() != params.BeaconConfig().ZeroHash {
r := s.headRoot()
return r[:], nil
}
if s.head != nil && s.head.root != params.BeaconConfig().ZeroHash {
return bytesutil.SafeCopyBytes(s.head.root[:]), nil
}
b, err := s.cfg.BeaconDB.HeadBlock(ctx)
@@ -330,6 +308,15 @@ func (s *Service) IsOptimistic(ctx context.Context) (bool, error) {
return s.IsOptimisticForRoot(ctx, s.head.root)
}
// IsFinalized returns true if the input root is finalized.
// It first checks latest finalized root then checks finalized root index in DB.
func (s *Service) IsFinalized(ctx context.Context, root [32]byte) bool {
if s.ForkChoicer().FinalizedCheckpoint().Root == root {
return true
}
return s.cfg.BeaconDB.IsFinalizedBlock(ctx, root)
}
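
A hedged usage sketch: callers that previously compared a root against a stored finalized checkpoint can now ask the service directly, relying on the two-step lookup above (fork choice first, then the DB index). The guardReorg helper and its error are hypothetical; only the IsFinalized method comes from the code above.

package example

import (
	"context"
	"errors"
)

// finalizedChecker is the slice of FinalizationFetcher this sketch needs.
type finalizedChecker interface {
	IsFinalized(ctx context.Context, blockRoot [32]byte) bool
}

var errFinalizedAncestor = errors.New("cannot reorg a finalized block")

// guardReorg refuses to act on roots the chain service already considers
// finalized, whether they are the current fork choice checkpoint or an
// older root indexed as finalized in the database.
func guardReorg(ctx context.Context, c finalizedChecker, root [32]byte) error {
	if c.IsFinalized(ctx, root) {
		return errFinalizedAncestor
	}
	return nil
}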
// IsOptimisticForRoot takes the root as argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {

View File

@@ -5,11 +5,12 @@ import (
"testing"
"time"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
v3 "github.com/prysmaticlabs/prysm/beacon-chain/state/v3"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
@@ -17,6 +18,7 @@ import (
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
@@ -37,31 +39,23 @@ func prepareForkchoiceState(
blockRoot [32]byte,
parentRoot [32]byte,
payloadHash [32]byte,
justifiedEpoch types.Epoch,
finalizedEpoch types.Epoch,
justified *ethpb.Checkpoint,
finalized *ethpb.Checkpoint,
) (state.BeaconState, [32]byte, error) {
blockHeader := &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
}
executionHeader := &ethpb.ExecutionPayloadHeader{
executionHeader := &enginev1.ExecutionPayloadHeader{
BlockHash: payloadHash[:],
}
justifiedCheckpoint := &ethpb.Checkpoint{
Epoch: justifiedEpoch,
}
finalizedCheckpoint := &ethpb.Checkpoint{
Epoch: finalizedEpoch,
}
base := &ethpb.BeaconStateBellatrix{
Slot: slot,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
BlockRoots: make([][]byte, 1),
CurrentJustifiedCheckpoint: justifiedCheckpoint,
FinalizedCheckpoint: finalizedCheckpoint,
CurrentJustifiedCheckpoint: justified,
FinalizedCheckpoint: finalized,
LatestExecutionPayloadHeader: executionHeader,
LatestBlockHeader: blockHeader,
}
@@ -80,86 +74,53 @@ func TestHeadRoot_Nil(t *testing.T) {
}
func TestService_ForkChoiceStore(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
p := c.ForkChoiceStore()
require.Equal(t, 0, int(p.FinalizedEpoch()))
}
func TestFinalizedCheckpt_CanRetrieve(t *testing.T) {
beaconDB := testDB.SetupDB(t)
cp := &ethpb.Checkpoint{Epoch: 5, Root: bytesutil.PadTo([]byte("foo"), 32)}
c := setupBeaconChain(t, beaconDB)
c.store.SetFinalizedCheckptAndPayloadHash(cp, [32]byte{'a'})
cp, err := c.FinalizedCheckpt()
require.NoError(t, err)
assert.Equal(t, cp.Epoch, cp.Epoch, "Unexpected finalized epoch")
require.Equal(t, types.Epoch(0), p.FinalizedCheckpoint().Epoch)
}
func TestFinalizedCheckpt_GenesisRootOk(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
genesisRoot := [32]byte{'A'}
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c := setupBeaconChain(t, beaconDB)
c.store.SetFinalizedCheckptAndPayloadHash(cp, [32]byte{'a'})
c.originBlockRoot = genesisRoot
cp, err := c.FinalizedCheckpt()
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
assert.DeepEqual(t, c.originBlockRoot[:], cp.Root)
gs, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
cp := service.FinalizedCheckpt()
assert.DeepEqual(t, [32]byte{}, bytesutil.ToBytes32(cp.Root))
cp = service.CurrentJustifiedCheckpt()
assert.DeepEqual(t, [32]byte{}, bytesutil.ToBytes32(cp.Root))
// check that forkchoice has the right genesis root as the node root
root, err := fcs.Head(ctx, []uint64{})
require.NoError(t, err)
require.Equal(t, service.originBlockRoot, root)
}
func TestCurrentJustifiedCheckpt_CanRetrieve(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
c := setupBeaconChain(t, beaconDB)
_, err := c.CurrentJustifiedCheckpt()
require.ErrorIs(t, err, store.ErrNilCheckpoint)
cp := &ethpb.Checkpoint{Epoch: 6, Root: bytesutil.PadTo([]byte("foo"), 32)}
c.store.SetJustifiedCheckptAndPayloadHash(cp, [32]byte{})
jp, err := c.CurrentJustifiedCheckpt()
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
cp := &forkchoicetypes.Checkpoint{Epoch: 6, Root: [32]byte{'j'}}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(cp))
jp := service.CurrentJustifiedCheckpt()
assert.Equal(t, cp.Epoch, jp.Epoch, "Unexpected justified epoch")
}
func TestJustifiedCheckpt_GenesisRootOk(t *testing.T) {
beaconDB := testDB.SetupDB(t)
c := setupBeaconChain(t, beaconDB)
genesisRoot := [32]byte{'B'}
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c.store.SetJustifiedCheckptAndPayloadHash(cp, [32]byte{})
c.originBlockRoot = genesisRoot
cp, err := c.CurrentJustifiedCheckpt()
require.NoError(t, err)
assert.DeepEqual(t, c.originBlockRoot[:], cp.Root)
}
func TestPreviousJustifiedCheckpt_CanRetrieve(t *testing.T) {
beaconDB := testDB.SetupDB(t)
cp := &ethpb.Checkpoint{Epoch: 7, Root: bytesutil.PadTo([]byte("foo"), 32)}
c := setupBeaconChain(t, beaconDB)
_, err := c.PreviousJustifiedCheckpt()
require.ErrorIs(t, err, store.ErrNilCheckpoint)
c.store.SetPrevJustifiedCheckpt(cp)
pcp, err := c.PreviousJustifiedCheckpt()
require.NoError(t, err)
assert.Equal(t, cp.Epoch, pcp.Epoch, "Unexpected previous justified epoch")
}
func TestPrevJustifiedCheckpt_GenesisRootOk(t *testing.T) {
beaconDB := testDB.SetupDB(t)
genesisRoot := [32]byte{'C'}
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c := setupBeaconChain(t, beaconDB)
c.store.SetPrevJustifiedCheckpt(cp)
c.originBlockRoot = genesisRoot
pcp, err := c.PreviousJustifiedCheckpt()
require.NoError(t, err)
assert.DeepEqual(t, c.originBlockRoot[:], pcp.Root)
require.Equal(t, cp.Root, bytesutil.ToBytes32(jp.Root))
}
func TestHeadSlot_CanRetrieve(t *testing.T) {
@@ -171,26 +132,46 @@ func TestHeadSlot_CanRetrieve(t *testing.T) {
}
func TestHeadRoot_CanRetrieve(t *testing.T) {
c := &Service{}
c.head = &head{root: [32]byte{'A'}}
r, err := c.HeadRoot(context.Background())
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
assert.Equal(t, [32]byte{'A'}, bytesutil.ToBytes32(r))
gs, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
r, err := service.HeadRoot(ctx)
require.NoError(t, err)
assert.Equal(t, service.originBlockRoot, bytesutil.ToBytes32(r))
}
func TestHeadRoot_UseDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
c := &Service{cfg: &config{BeaconDB: beaconDB}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:]}))
require.NoError(t, beaconDB.SaveHeadBlockRoot(context.Background(), br))
r, err := c.HeadRoot(context.Background())
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
require.NoError(t, beaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: br[:]}))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, br))
r, err := service.HeadRoot(ctx)
require.NoError(t, err)
assert.Equal(t, br, bytesutil.ToBytes32(r))
}
@@ -285,9 +266,7 @@ func TestIsCanonical_Ok(t *testing.T) {
blk.Block.Slot = 0
root, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, blk)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, root))
can, err := c.IsCanonical(ctx, root)
require.NoError(t, err)
@@ -327,22 +306,24 @@ func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
}
func TestService_ChainHeads_ProtoArray(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0)}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
roots, slots := c.ChainHeads()
require.DeepEqual(t, [][32]byte{{'c'}, {'d'}, {'e'}}, roots)
@@ -357,25 +338,27 @@ func TestService_ChainHeads_ProtoArray(t *testing.T) {
func TestService_ChainHeads_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
roots, slots := c.ChainHeads()
require.Equal(t, 3, len(roots))
@@ -452,13 +435,15 @@ func TestService_IsOptimistic_ProtoArray(t *testing.T) {
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
@@ -472,13 +457,15 @@ func TestService_IsOptimistic_DoublyLinkedTree(t *testing.T) {
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
@@ -495,13 +482,15 @@ func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
func TestService_IsOptimisticForRoot_ProtoArray(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
opt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'})
require.NoError(t, err)
@@ -510,13 +499,15 @@ func TestService_IsOptimisticForRoot_ProtoArray(t *testing.T) {
func TestService_IsOptimisticForRoot_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
opt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'})
require.NoError(t, err)
@@ -526,32 +517,26 @@ func TestService_IsOptimisticForRoot_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimisticForRoot_DB_ProtoArray(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(optimisticBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(validatedBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
@@ -591,32 +576,26 @@ func TestService_IsOptimisticForRoot_DB_ProtoArray(t *testing.T) {
func TestService_IsOptimisticForRoot_DB_DoublyLinkedTree(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(optimisticBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(validatedBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
@@ -655,32 +634,26 @@ func TestService_IsOptimisticForRoot_DB_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(optimisticBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(validatedBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
@@ -696,3 +669,25 @@ func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
require.Equal(t, true, validated)
}
func TestService_IsFinalized(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}}
r1 := [32]byte{'a'}
require.NoError(t, c.ForkChoiceStore().UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{
Root: r1,
}))
b := util.NewBeaconBlock()
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: br[:], Slot: 10}))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, br))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{
Root: br[:],
}))
require.Equal(t, true, c.IsFinalized(ctx, r1))
require.Equal(t, true, c.IsFinalized(ctx, br))
require.Equal(t, false, c.IsFinalized(ctx, [32]byte{'c'}))
}

View File

@@ -1,5 +1,4 @@
//go:build !develop
// +build !develop
package blockchain

View File

@@ -18,7 +18,6 @@ import (
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -54,8 +53,8 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
finalizedHash := s.store.FinalizedPayloadBlockHash()
justifiedHash := s.store.JustifiedPayloadBlockHash()
finalizedHash := s.ForkChoicer().FinalizedPayloadBlockHash()
justifiedHash := s.ForkChoicer().JustifiedPayloadBlockHash()
fcs := &enginev1.ForkchoiceState{
HeadBlockHash: headPayload.BlockHash,
SafeBlockHash: justifiedHash[:],
@@ -90,7 +89,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
return nil, err
}
r, err := s.updateHead(ctx, s.justifiedBalances.balances)
r, err := s.cfg.ForkChoiceStore.Head(ctx, s.justifiedBalances.balances)
if err != nil {
return nil, err
}
@@ -111,10 +110,15 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
return nil, err
}
if err := s.saveHead(ctx, r, b, st); err != nil {
log.WithError(err).Error("could not save head after pruning invalid blocks")
}
log.WithFields(logrus.Fields{
"slot": headBlk.Slot(),
"blockRoot": fmt.Sprintf("%#x", headRoot),
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(headRoot[:])),
"invalidCount": len(invalidRoots),
"newHeadRoot": fmt.Sprintf("%#x", bytesutil.Trunc(r[:])),
}).Warn("Pruned invalid blocks")
return pid, ErrInvalidPayload
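
One of the smaller changes in this hunk swaps the full 32-byte roots for truncated forms in the log fields. A minimal sketch of that formatting, assuming only Prysm's bytesutil package and logrus as used in the lines above:

package example

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/encoding/bytesutil"
	"github.com/sirupsen/logrus"
)

// logPrunedHead mirrors the log fields above: bytesutil.Trunc keeps only a
// short prefix of each root so the warning stays readable, and %#x adds the
// 0x prefix.
func logPrunedHead(headRoot, newHeadRoot [32]byte, invalidCount int) {
	logrus.WithFields(logrus.Fields{
		"blockRoot":    fmt.Sprintf("%#x", bytesutil.Trunc(headRoot[:])),
		"invalidCount": invalidCount,
		"newHeadRoot":  fmt.Sprintf("%#x", bytesutil.Trunc(newHeadRoot[:])),
	}).Warn("Pruned invalid blocks")
}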
@@ -154,7 +158,7 @@ func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, er
// notifyNewPayload signals execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
postStateHeader *ethpb.ExecutionPayloadHeader, blk interfaces.SignedBeaconBlock) (bool, error) {
postStateHeader *enginev1.ExecutionPayloadHeader, blk interfaces.SignedBeaconBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()

View File

@@ -11,6 +11,7 @@ import (
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
mockPOW "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
bstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
@@ -26,24 +27,20 @@ import (
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/prysmaticlabs/prysm/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func Test_NotifyForkchoiceUpdate(t *testing.T) {
params.BeaconConfig().SafeSlotsToImportOptimistically = 0
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
altairBlk, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlockAltair())
require.NoError(t, err)
altairBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockAltair())
altairBlkRoot, err := altairBlk.Block().HashTreeRoot()
require.NoError(t, err)
bellatrixBlk, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlockBellatrix())
require.NoError(t, err)
bellatrixBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockBellatrix())
bellatrixBlkRoot, err := bellatrixBlk.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, altairBlk))
require.NoError(t, beaconDB.SaveBlock(ctx, bellatrixBlk))
fcs := protoarray.New(0, 0)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -56,13 +53,15 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
state: st,
}
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -184,9 +183,6 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, beaconDB.SaveState(ctx, st, tt.finalizedRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, tt.finalizedRoot))
fc := &ethpb.Checkpoint{Epoch: 1, Root: tt.finalizedRoot[:]}
service.store.SetFinalizedCheckptAndPayloadHash(fc, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(fc, [32]byte{'b'})
arg := &notifyForkchoiceUpdateArg{
headState: st,
headRoot: tt.headRoot,
@@ -217,107 +213,92 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
// 2. forkchoice removes the weights of these blocks
// 3. the blockchain package calls fcu to obtain heads G -> F -> D.
func Test_NotifyForkchoiceUpdateRecursive(t *testing.T) {
func Test_NotifyForkchoiceUpdateRecursive_Protoarray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
// Prepare blocks
ba := util.NewBeaconBlockBellatrix()
ba.Block.Body.ExecutionPayload.BlockNumber = 1
wba, err := wrapper.WrappedSignedBeaconBlock(ba)
require.NoError(t, err)
wba := util.SaveBlock(t, ctx, beaconDB, ba)
bra, err := wba.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wba))
bb := util.NewBeaconBlockBellatrix()
bb.Block.Body.ExecutionPayload.BlockNumber = 2
wbb, err := wrapper.WrappedSignedBeaconBlock(bb)
require.NoError(t, err)
wbb := util.SaveBlock(t, ctx, beaconDB, bb)
brb, err := wbb.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbb))
bc := util.NewBeaconBlockBellatrix()
bc.Block.Body.ExecutionPayload.BlockNumber = 3
wbc, err := wrapper.WrappedSignedBeaconBlock(bc)
require.NoError(t, err)
wbc := util.SaveBlock(t, ctx, beaconDB, bc)
brc, err := wbc.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbc))
bd := util.NewBeaconBlockBellatrix()
pd := [32]byte{'D'}
bd.Block.Body.ExecutionPayload.BlockHash = pd[:]
bd.Block.Body.ExecutionPayload.BlockNumber = 4
wbd, err := wrapper.WrappedSignedBeaconBlock(bd)
require.NoError(t, err)
wbd := util.SaveBlock(t, ctx, beaconDB, bd)
brd, err := wbd.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbd))
be := util.NewBeaconBlockBellatrix()
pe := [32]byte{'E'}
be.Block.Body.ExecutionPayload.BlockHash = pe[:]
be.Block.Body.ExecutionPayload.BlockNumber = 5
wbe, err := wrapper.WrappedSignedBeaconBlock(be)
require.NoError(t, err)
wbe := util.SaveBlock(t, ctx, beaconDB, be)
bre, err := wbe.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbe))
bf := util.NewBeaconBlockBellatrix()
pf := [32]byte{'F'}
bf.Block.Body.ExecutionPayload.BlockHash = pf[:]
bf.Block.Body.ExecutionPayload.BlockNumber = 6
bf.Block.ParentRoot = bre[:]
wbf, err := wrapper.WrappedSignedBeaconBlock(bf)
require.NoError(t, err)
wbf := util.SaveBlock(t, ctx, beaconDB, bf)
brf, err := wbf.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbf))
bg := util.NewBeaconBlockBellatrix()
bg.Block.Body.ExecutionPayload.BlockNumber = 7
pg := [32]byte{'G'}
bg.Block.Body.ExecutionPayload.BlockHash = pg[:]
bg.Block.ParentRoot = bre[:]
wbg, err := wrapper.WrappedSignedBeaconBlock(bg)
require.NoError(t, err)
wbg := util.SaveBlock(t, ctx, beaconDB, bg)
brg, err := wbg.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wbg))
// Insert blocks into forkchoice
fcs := doublylinkedtree.New(0, 0)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
service, err := NewService(ctx, opts...)
service := setupBeaconChain(t, beaconDB)
fcs := protoarray.New()
service.cfg.ForkChoiceStore = fcs
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
service.justifiedBalances.balances = []uint64{50, 100, 200}
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, 0, 0)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -326,19 +307,22 @@ func Test_NotifyForkchoiceUpdateRecursive(t *testing.T) {
fcs.ProcessAttestation(ctx, []uint64{0}, brd, 1)
fcs.ProcessAttestation(ctx, []uint64{1}, brf, 1)
fcs.ProcessAttestation(ctx, []uint64{2}, brg, 1)
headRoot, err := fcs.Head(ctx, bra, []uint64{50, 100, 200})
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: bra}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(jc))
headRoot, err := fcs.Head(ctx, []uint64{50, 100, 200})
require.NoError(t, err)
require.Equal(t, brg, headRoot)
// Prepare Engine Mock to return invalid unless head is D, LVH = E
service.cfg.ExecutionEngineCaller = &mockPOW.EngineClient{ErrForkchoiceUpdated: powchain.ErrInvalidPayloadStatus, ForkChoiceUpdatedResp: pe[:], OverrideValidHash: [32]byte{'D'}}
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
state: st,
block: wba,
}
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
fc := &ethpb.Checkpoint{Epoch: 0, Root: bra[:]}
service.store.SetFinalizedCheckptAndPayloadHash(fc, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(fc, [32]byte{'b'})
a := &notifyForkchoiceUpdateArg{
headState: st,
headBlock: wbg.Block(),
@@ -347,7 +331,150 @@ func Test_NotifyForkchoiceUpdateRecursive(t *testing.T) {
_, err = service.notifyForkchoiceUpdate(ctx, a)
require.ErrorIs(t, ErrInvalidPayload, err)
// Ensure Head is D
headRoot, err = fcs.Head(ctx, bra, service.justifiedBalances.balances)
headRoot, err = fcs.Head(ctx, service.justifiedBalances.balances)
require.NoError(t, err)
require.Equal(t, brd, headRoot)
// Ensure F and G were removed but their parent E wasn't
require.Equal(t, false, fcs.HasNode(brf))
require.Equal(t, false, fcs.HasNode(brg))
require.Equal(t, true, fcs.HasNode(bre))
}
//
//
//  A <- B <- C <- D
//       \
//         ---------- E <- F
//                     \
//                       ------ G
// D is the current head, attestations for F and G come late, both are invalid.
// We switch recursively to F then G and finally to D.
//
// We test:
// 1. forkchoice removes blocks F and G from the forkchoice implementation
// 2. forkchoice removes the weights of these blocks
// 3. the blockchain package calls fcu to obtain heads G -> F -> D.
func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
// Prepare blocks
ba := util.NewBeaconBlockBellatrix()
ba.Block.Body.ExecutionPayload.BlockNumber = 1
wba := util.SaveBlock(t, ctx, beaconDB, ba)
bra, err := wba.Block().HashTreeRoot()
require.NoError(t, err)
bb := util.NewBeaconBlockBellatrix()
bb.Block.Body.ExecutionPayload.BlockNumber = 2
wbb := util.SaveBlock(t, ctx, beaconDB, bb)
brb, err := wbb.Block().HashTreeRoot()
require.NoError(t, err)
bc := util.NewBeaconBlockBellatrix()
bc.Block.Body.ExecutionPayload.BlockNumber = 3
wbc := util.SaveBlock(t, ctx, beaconDB, bc)
brc, err := wbc.Block().HashTreeRoot()
require.NoError(t, err)
bd := util.NewBeaconBlockBellatrix()
pd := [32]byte{'D'}
bd.Block.Body.ExecutionPayload.BlockHash = pd[:]
bd.Block.Body.ExecutionPayload.BlockNumber = 4
wbd := util.SaveBlock(t, ctx, beaconDB, bd)
brd, err := wbd.Block().HashTreeRoot()
require.NoError(t, err)
be := util.NewBeaconBlockBellatrix()
pe := [32]byte{'E'}
be.Block.Body.ExecutionPayload.BlockHash = pe[:]
be.Block.Body.ExecutionPayload.BlockNumber = 5
wbe := util.SaveBlock(t, ctx, beaconDB, be)
bre, err := wbe.Block().HashTreeRoot()
require.NoError(t, err)
bf := util.NewBeaconBlockBellatrix()
pf := [32]byte{'F'}
bf.Block.Body.ExecutionPayload.BlockHash = pf[:]
bf.Block.Body.ExecutionPayload.BlockNumber = 6
bf.Block.ParentRoot = bre[:]
wbf := util.SaveBlock(t, ctx, beaconDB, bf)
brf, err := wbf.Block().HashTreeRoot()
require.NoError(t, err)
bg := util.NewBeaconBlockBellatrix()
bg.Block.Body.ExecutionPayload.BlockNumber = 7
pg := [32]byte{'G'}
bg.Block.Body.ExecutionPayload.BlockHash = pg[:]
bg.Block.ParentRoot = bre[:]
wbg := util.SaveBlock(t, ctx, beaconDB, bg)
brg, err := wbg.Block().HashTreeRoot()
require.NoError(t, err)
// Insert blocks into forkchoice
service := setupBeaconChain(t, beaconDB)
fcs := doublylinkedtree.New()
service.cfg.ForkChoiceStore = fcs
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
service.justifiedBalances.balances = []uint64{50, 100, 200}
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
// Insert attestations to D, F and G so that F and G have higher weight than D
// Ensure G is head
fcs.ProcessAttestation(ctx, []uint64{0}, brd, 1)
fcs.ProcessAttestation(ctx, []uint64{1}, brf, 1)
fcs.ProcessAttestation(ctx, []uint64{2}, brg, 1)
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: bra}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(jc))
headRoot, err := fcs.Head(ctx, []uint64{50, 100, 200})
require.NoError(t, err)
require.Equal(t, brg, headRoot)
// Prepare Engine Mock to return invalid unless head is D, LVH = E
service.cfg.ExecutionEngineCaller = &mockPOW.EngineClient{ErrForkchoiceUpdated: powchain.ErrInvalidPayloadStatus, ForkChoiceUpdatedResp: pe[:], OverrideValidHash: [32]byte{'D'}}
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
state: st,
block: wba,
}
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &notifyForkchoiceUpdateArg{
headState: st,
headBlock: wbg.Block(),
headRoot: brg,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
require.ErrorIs(t, ErrInvalidPayload, err)
// Ensure Head is D
headRoot, err = fcs.Head(ctx, service.justifiedBalances.balances)
require.NoError(t, err)
require.Equal(t, brd, headRoot)
@@ -364,7 +491,7 @@ func Test_NotifyNewPayload(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -402,12 +529,16 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, err)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
service.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
r, err := bellatrixBlk.Block().HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, r, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, r, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -556,7 +687,7 @@ func Test_NotifyNewPayload(t *testing.T) {
}
service.cfg.ExecutionEngineCaller = e
root := [32]byte{'a'}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, root, root, params.BeaconConfig().ZeroHash, 0, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, root, root, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
postVersion, postHeader, err := getStateVersionAndPayload(tt.postState)
@@ -582,7 +713,7 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -625,7 +756,7 @@ func Test_IsOptimisticCandidateBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -712,14 +843,7 @@ func Test_IsOptimisticCandidateBlock(t *testing.T) {
},
}
for _, tt := range tests {
jRoot, err := tt.justified.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, tt.justified))
service.store.SetJustifiedCheckptAndPayloadHash(
&ethpb.Checkpoint{
Root: jRoot[:],
Epoch: slots.ToEpoch(tt.justified.Block().Slot()),
}, [32]byte{'a'})
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrappedParentBlock))
err = service.optimisticCandidateBlock(ctx, tt.blk)
@@ -762,9 +886,7 @@ func Test_IsOptimisticShallowExecutionParent(t *testing.T) {
b := &ethpb.BeaconBlockBellatrix{Body: body, Slot: 200}
rawSigned := &ethpb.SignedBeaconBlockBellatrix{Block: b}
blk := util.HydrateSignedBeaconBlockBellatrix(rawSigned)
wr, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wr))
wr := util.SaveBlock(t, ctx, service.cfg.BeaconDB, blk)
blkRoot, err := wr.Block().HashTreeRoot()
require.NoError(t, err)
@@ -772,9 +894,7 @@ func Test_IsOptimisticShallowExecutionParent(t *testing.T) {
childBlock.Block.ParentRoot = blkRoot[:]
// shallow block
childBlock.Block.Slot = 201
wrappedChild, err := wrapper.WrappedSignedBeaconBlock(childBlock)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrappedChild))
wrappedChild := util.SaveBlock(t, ctx, service.cfg.BeaconDB, childBlock)
err = service.optimisticCandidateBlock(ctx, wrappedChild.Block())
require.NoError(t, err)
}
@@ -827,7 +947,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
stateGen := stategen.New(beaconDB)
fcs := protoarray.New(0, 0)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stateGen),
@@ -837,15 +957,19 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
require.NoError(t, err)
genesisStateRoot := [32]byte{}
genesisBlk := blocks.NewGenesisBlock(genesisStateRoot[:])
wr, err := wrapper.WrappedSignedBeaconBlock(genesisBlk)
require.NoError(t, err)
assert.NoError(t, beaconDB.SaveBlock(ctx, wr))
util.SaveBlock(t, ctx, beaconDB, genesisBlk)
genesisRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
assert.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 0, 0)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
fjc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(fjc))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(fjc))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
fcs.SetOriginRoot(genesisRoot)
genesisSummary := &ethpb.StateSummary{
Root: genesisStateRoot[:],
Slot: 0,
@@ -861,9 +985,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
blk := util.NewBeaconBlock()
blk.Block.Slot = 320
blk.Block.ParentRoot = genesisRoot[:]
wr, err = wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wr))
util.SaveBlock(t, ctx, beaconDB, blk)
opRoot, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
@@ -876,7 +998,9 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
Slot: 320,
}
require.NoError(t, beaconDB.SaveStateSummary(ctx, opStateSummary))
state, blkRoot, err = prepareForkchoiceState(ctx, 320, opRoot, genesisRoot, params.BeaconConfig().ZeroHash, 10, 10)
tenjc := &ethpb.Checkpoint{Epoch: 10, Root: genesisRoot[:]}
tenfc := &ethpb.Checkpoint{Epoch: 10, Root: genesisRoot[:]}
state, blkRoot, err = prepareForkchoiceState(ctx, 320, opRoot, genesisRoot, params.BeaconConfig().ZeroHash, tenjc, tenfc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
assert.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, opRoot))
@@ -890,9 +1014,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
blk = util.NewBeaconBlock()
blk.Block.Slot = 640
blk.Block.ParentRoot = opRoot[:]
wr, err = wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wr))
util.SaveBlock(t, ctx, beaconDB, blk)
validRoot, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
@@ -905,7 +1027,9 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
Slot: 640,
}
require.NoError(t, beaconDB.SaveStateSummary(ctx, validSummary))
state, blkRoot, err = prepareForkchoiceState(ctx, 640, validRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 20, 20)
twentyjc := &ethpb.Checkpoint{Epoch: 20, Root: validRoot[:]}
twentyfc := &ethpb.Checkpoint{Epoch: 20, Root: validRoot[:]}
state, blkRoot, err = prepareForkchoiceState(ctx, 640, validRoot, genesisRoot, params.BeaconConfig().ZeroHash, twentyjc, twentyfc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
require.NoError(t, fcs.SetOptimisticToValid(ctx, validRoot))
@@ -927,7 +1051,7 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(protoarray.New(0, 0)),
WithForkChoiceStore(protoarray.New()),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -938,12 +1062,10 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
// Happy case
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
blk1, err := wrapper.WrappedSignedBeaconBlock(b1)
require.NoError(t, err)
blk1 := util.SaveBlock(t, ctx, service.cfg.BeaconDB, b1)
r1, err := blk1.Block().HashTreeRoot()
require.NoError(t, err)
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, blk1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: 1,
Root: r1[:],
@@ -952,11 +1074,9 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
b2 := util.NewBeaconBlock()
b2.Block.Slot = 2
blk2, err := wrapper.WrappedSignedBeaconBlock(b2)
require.NoError(t, err)
blk2 := util.SaveBlock(t, ctx, service.cfg.BeaconDB, b2)
r2, err := blk2.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, blk2))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: 2,
Root: r2[:],
@@ -983,7 +1103,7 @@ func TestService_getPayloadHash(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(protoarray.New(0, 0)),
WithForkChoiceStore(protoarray.New()),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -996,7 +1116,7 @@ func TestService_getPayloadHash(t *testing.T) {
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
service.saveInitSyncBlock(r, wsb)
require.NoError(t, service.saveInitSyncBlock(ctx, r, wsb))
h, err := service.getPayloadHash(ctx, r[:])
require.NoError(t, err)
@@ -1009,7 +1129,7 @@ func TestService_getPayloadHash(t *testing.T) {
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(bb)
require.NoError(t, err)
service.saveInitSyncBlock(r, wsb)
require.NoError(t, service.saveInitSyncBlock(ctx, r, wsb))
h, err = service.getPayloadHash(ctx, r[:])
require.NoError(t, err)

View File

@@ -10,10 +10,7 @@ import (
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
@@ -29,17 +26,14 @@ import (
// UpdateAndSaveHeadWithBalances updates the beacon state head after getting the justified balances from cache.
// This function is only used in spec-tests; it does save the head after updating it.
func (s *Service) UpdateAndSaveHeadWithBalances(ctx context.Context) error {
jp, err := s.store.JustifiedCheckpt()
if err != nil {
return err
}
jp := s.CurrentJustifiedCheckpt()
balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(jp.Root))
if err != nil {
msg := fmt.Sprintf("could not read balances for state w/ justified checkpoint %#x", jp.Root)
return errors.Wrap(err, msg)
}
headRoot, err := s.updateHead(ctx, balances)
headRoot, err := s.cfg.ForkChoiceStore.Head(ctx, balances)
if err != nil {
return errors.Wrap(err, "could not update head")
}
@@ -62,51 +56,6 @@ type head struct {
state state.BeaconState // current head state.
}
// Determines the head from the fork choice service and saves its new data
// (head root, head block, and head state) to the local service cache.
func (s *Service) updateHead(ctx context.Context, balances []uint64) ([32]byte, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.updateHead")
defer span.End()
// Get head from the fork choice service.
f, err := s.store.FinalizedCheckpt()
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get finalized checkpoint")
}
j, err := s.store.JustifiedCheckpt()
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get justified checkpoint")
}
// To get head before the first justified epoch, the fork choice will start with origin root
// instead of zero hashes.
headStartRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(j.Root))
// In order to process head, fork choice store requires justified info.
// If the fork choice store is missing justified block info, a node should
// re-initiate fork choice store using the latest justified info.
// This recovers a fatal condition and should not happen in run time.
if !s.cfg.ForkChoiceStore.HasNode(headStartRoot) {
jb, err := s.getBlock(ctx, headStartRoot)
if err != nil {
return [32]byte{}, err
}
st, err := s.cfg.StateGen.StateByRoot(ctx, s.ensureRootNotZeros(headStartRoot))
if err != nil {
return [32]byte{}, err
}
if features.Get().EnableForkChoiceDoublyLinkedTree {
s.cfg.ForkChoiceStore = doublylinkedtree.New(j.Epoch, f.Epoch)
} else {
s.cfg.ForkChoiceStore = protoarray.New(j.Epoch, f.Epoch)
}
if err := s.insertBlockToForkChoiceStore(ctx, jb.Block(), headStartRoot, st, f, j); err != nil {
return [32]byte{}, err
}
}
return s.cfg.ForkChoiceStore.Head(ctx, headStartRoot, balances)
}
// This saves head info to the local service cache; it also saves the
// new head root to the DB.
func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock interfaces.SignedBeaconBlock, headState state.BeaconState) error {
@@ -114,11 +63,15 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
defer span.End()
// Do nothing if head hasn't changed.
oldHeadroot, err := s.HeadRoot(ctx)
if err != nil {
return err
var oldHeadRoot [32]byte
s.headLock.RLock()
if s.head == nil {
oldHeadRoot = s.originBlockRoot
} else {
oldHeadRoot = s.head.root
}
if newHeadRoot == bytesutil.ToBytes32(oldHeadroot) {
s.headLock.RUnlock()
if newHeadRoot == oldHeadRoot {
return nil
}
if err := wrapper.BeaconBlockIsNil(headBlock); err != nil {
@@ -136,13 +89,12 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
// A chain re-org occurred, so we fire an event notifying the rest of the services.
s.headLock.RLock()
oldHeadRoot := s.headRoot()
oldStateRoot := s.headBlock().Block().StateRoot()
s.headLock.RUnlock()
headSlot := s.HeadSlot()
newHeadSlot := headBlock.Block().Slot()
newStateRoot := headBlock.Block().StateRoot()
if bytesutil.ToBytes32(headBlock.Block().ParentRoot()) != bytesutil.ToBytes32(oldHeadroot) {
if bytesutil.ToBytes32(headBlock.Block().ParentRoot()) != oldHeadRoot {
log.WithFields(logrus.Fields{
"newSlot": fmt.Sprintf("%d", newHeadSlot),
"oldSlot": fmt.Sprintf("%d", headSlot),
@@ -166,7 +118,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
},
})
if err := s.saveOrphanedAtts(ctx, bytesutil.ToBytes32(oldHeadroot), newHeadRoot); err != nil {
if err := s.saveOrphanedAtts(ctx, oldHeadRoot, newHeadRoot); err != nil {
return err
}
reorgCount.Inc()
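Note: the two hunks above replace the DB-backed HeadRoot lookup with a read of the cached head under the service's read lock, falling back to the origin root, and then compare parent roots to detect a re-org. A generic sketch of that locking/fallback pattern with illustrative names, not Prysm's types:

package main

import (
	"fmt"
	"sync"
)

type chain struct {
	mu         sync.RWMutex
	head       *[32]byte // nil until the first head is cached
	originRoot [32]byte
}

// currentHeadRoot reads the cached head under a read lock and falls back to
// the origin root when no head has been cached yet.
func (c *chain) currentHeadRoot() [32]byte {
	c.mu.RLock()
	defer c.mu.RUnlock()
	if c.head == nil {
		return c.originRoot
	}
	return *c.head
}

func main() {
	c := &chain{originRoot: [32]byte{'g'}}
	oldHeadRoot := c.currentHeadRoot()
	newParentRoot := [32]byte{'x'}
	// A new head whose parent is not the old head indicates a re-org.
	fmt.Println("reorg:", newParentRoot != oldHeadRoot)
}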
@@ -361,7 +313,7 @@ func (s *Service) notifyNewHeadEvent(
func (s *Service) saveOrphanedAtts(ctx context.Context, orphanedRoot [32]byte, newHeadRoot [32]byte) error {
commonAncestorRoot, err := s.ForkChoicer().CommonAncestorRoot(ctx, newHeadRoot, orphanedRoot)
switch {
// Exit early if there's no common ancestor as there would be nothing to save.
// Exit early if there's no common ancestor or the root doesn't exist, as there would be nothing to save.
case errors.Is(err, forkchoice.ErrUnknownCommonAncestor):
return nil
case err != nil:

View File

@@ -44,15 +44,12 @@ func TestSaveHead_Different(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
util.NewBeaconBlock()
oldBlock, err := wrapper.WrappedSignedBeaconBlock(
util.NewBeaconBlock(),
)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), oldBlock))
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, bytesutil.ToBytes32(oldBlock.Block().ParentRoot()), [32]byte{}, 0, 0)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, bytesutil.ToBytes32(oldBlock.Block().ParentRoot()), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
@@ -65,12 +62,10 @@ func TestSaveHead_Different(t *testing.T) {
newHeadSignedBlock.Block.Slot = 1
newHeadBlock := newHeadSignedBlock.Block
wsb, err := wrapper.WrappedSignedBeaconBlock(newHeadSignedBlock)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wsb))
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, bytesutil.ToBytes32(wsb.Block().ParentRoot()), [32]byte{}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, bytesutil.ToBytes32(wsb.Block().ParentRoot()), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -95,14 +90,12 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
oldBlock, err := wrapper.WrappedSignedBeaconBlock(
util.NewBeaconBlock(),
)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), oldBlock))
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, bytesutil.ToBytes32(oldBlock.Block().ParentRoot()), [32]byte{}, 0, 0)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, bytesutil.ToBytes32(oldBlock.Block().ParentRoot()), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
@@ -117,12 +110,10 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
newHeadSignedBlock.Block.ParentRoot = reorgChainParent[:]
newHeadBlock := newHeadSignedBlock.Block
wsb, err := wrapper.WrappedSignedBeaconBlock(newHeadSignedBlock)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wsb))
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, bytesutil.ToBytes32(wsb.Block().ParentRoot()), [32]byte{}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, bytesutil.ToBytes32(wsb.Block().ParentRoot()), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -158,28 +149,6 @@ func TestCacheJustifiedStateBalances_CanCache(t *testing.T) {
require.DeepEqual(t, balances, state.Balances(), "Incorrect justified balances")
}
func TestUpdateHead_MissingJustifiedRoot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
b := util.NewBeaconBlock()
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wsb))
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
state, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), state, r))
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'a'})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{}, [32]byte{'b'})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{})
headRoot, err := service.updateHead(context.Background(), []uint64{})
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, service.saveHead(context.Background(), headRoot, wsb, st))
}
func Test_notifyNewHeadEvent(t *testing.T) {
t.Run("genesis_state_root", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
@@ -258,9 +227,7 @@ func TestSaveOrphanedAtts_NoCommonAncestor(t *testing.T) {
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, blkG)
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
@@ -290,12 +257,12 @@ func TestSaveOrphanedAtts_NoCommonAncestor(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
@@ -314,9 +281,8 @@ func TestSaveOrphanedAtts(t *testing.T) {
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, blkG)
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
@@ -343,16 +309,16 @@ func TestSaveOrphanedAtts(t *testing.T) {
blk4.Block.ParentRoot = rG[:]
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
@@ -381,9 +347,7 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, blkG)
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
@@ -404,16 +368,16 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
blk4.Block.ParentRoot = rG[:]
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r2, r4))
@@ -437,9 +401,7 @@ func TestSaveOrphanedAtts_NoCommonAncestor_DoublyLinkedTrie(t *testing.T) {
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, blkG)
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
@@ -466,15 +428,15 @@ func TestSaveOrphanedAtts_NoCommonAncestor_DoublyLinkedTrie(t *testing.T) {
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
@@ -498,9 +460,7 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, blkG)
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
@@ -528,15 +488,15 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
@@ -570,9 +530,7 @@ func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
b, err := wrapper.WrappedSignedBeaconBlock(blkG)
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, blkG)
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
@@ -594,15 +552,15 @@ func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, 0, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r2, r4))
@@ -613,7 +571,7 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -622,18 +580,18 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
bellatrixBlk, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlockBellatrix())
ojp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, ojp, ojp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, blkRoot))
bellatrixBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockBellatrix())
bellatrixBlkRoot, err := bellatrixBlk.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, bellatrixBlk))
fcp := &ethpb.Checkpoint{
Root: bellatrixBlkRoot[:],
Epoch: 1,
Epoch: 0,
}
service.store.SetFinalizedCheckptAndPayloadHash(fcp, [32]byte{'a'})
service.store.SetJustifiedCheckptAndPayloadHash(fcp, [32]byte{'b'})
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bellatrixBlkRoot))
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
@@ -643,10 +601,10 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
headRoot := service.headRoot()
require.Equal(t, [32]byte{}, headRoot)
st, blkRoot, err := prepareForkchoiceState(ctx, 0, bellatrixBlkRoot, [32]byte{}, [32]byte{}, 0, 0)
st, blkRoot, err = prepareForkchoiceState(ctx, 0, bellatrixBlkRoot, [32]byte{}, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, blkRoot))
newRoot, err := service.updateHead(ctx, []uint64{1, 2})
newRoot, err := service.cfg.ForkChoiceStore.Head(ctx, []uint64{1, 2})
require.NoError(t, err)
require.NotEqual(t, headRoot, newRoot)
require.Equal(t, headRoot, service.headRoot())

View File

@@ -8,11 +8,20 @@ import (
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
)
// This saves a beacon block to the initial sync blocks cache.
func (s *Service) saveInitSyncBlock(r [32]byte, b interfaces.SignedBeaconBlock) {
// This saves a beacon block to the initial sync blocks cache. It rate limits how many blocks
// the cache keeps in memory (two epochs' worth of blocks) and saves them to the DB when it hits this limit.
func (s *Service) saveInitSyncBlock(ctx context.Context, r [32]byte, b interfaces.SignedBeaconBlock) error {
s.initSyncBlocksLock.Lock()
defer s.initSyncBlocksLock.Unlock()
s.initSyncBlocks[r] = b
numBlocks := len(s.initSyncBlocks)
s.initSyncBlocksLock.Unlock()
if uint64(numBlocks) > initialSyncBlockCacheSize {
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
s.clearInitSyncBlocks()
}
return nil
}
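Note: a self-contained sketch of the rate-limiting behaviour described in the comment above, using generic stand-in types rather than Prysm's BeaconDB and block interfaces:

package main

import (
	"fmt"
	"sync"
)

const cacheLimit = 64 // e.g. two epochs of 32 slots each

type blockCache struct {
	mu     sync.Mutex
	blocks map[[32]byte]string // value type is a stand-in for a block
	flush  func(map[[32]byte]string) error
}

// save caches a block and, once the cache exceeds the limit, flushes the
// cached blocks to persistent storage and clears the cache, mirroring what
// saveInitSyncBlock does with SaveBlocks and clearInitSyncBlocks.
func (c *blockCache) save(root [32]byte, blk string) error {
	c.mu.Lock()
	c.blocks[root] = blk
	n := len(c.blocks)
	c.mu.Unlock()

	if n > cacheLimit {
		c.mu.Lock()
		toFlush := c.blocks
		c.blocks = make(map[[32]byte]string)
		c.mu.Unlock()
		return c.flush(toFlush)
	}
	return nil
}

func main() {
	c := &blockCache{
		blocks: make(map[[32]byte]string),
		flush:  func(m map[[32]byte]string) error { fmt.Println("flushed", len(m)); return nil },
	}
	_ = c.save([32]byte{1}, "blk")
}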
// This checks if a beacon block exists in the initial sync blocks cache using the root

View File

@@ -29,15 +29,13 @@ func TestService_getBlock(t *testing.T) {
// block in cache
b, err := wrapper.WrappedSignedBeaconBlock(b1)
require.NoError(t, err)
s.saveInitSyncBlock(r1, b)
s.saveInitSyncBlock(ctx, r1, b)
got, err := s.getBlock(ctx, r1)
require.NoError(t, err)
require.DeepEqual(t, b, got)
// block in db
b, err = wrapper.WrappedSignedBeaconBlock(b2)
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveBlock(ctx, b))
b = util.SaveBlock(t, ctx, s.cfg.BeaconDB, b2)
got, err = s.getBlock(ctx, r2)
require.NoError(t, err)
require.DeepEqual(t, b, got)
@@ -61,12 +59,10 @@ func TestService_hasBlockInInitSyncOrDB(t *testing.T) {
// block in cache
b, err := wrapper.WrappedSignedBeaconBlock(b1)
require.NoError(t, err)
s.saveInitSyncBlock(r1, b)
s.saveInitSyncBlock(ctx, r1, b)
require.Equal(t, true, s.hasBlockInInitSyncOrDB(ctx, r1))
// block in db
b, err = wrapper.WrappedSignedBeaconBlock(b2)
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveBlock(ctx, b))
util.SaveBlock(t, ctx, s.cfg.BeaconDB, b2)
require.Equal(t, true, s.hasBlockInInitSyncOrDB(ctx, r2))
}
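Note: a generic sketch of the cache-then-DB lookup the test above exercises, with stand-in maps rather than Prysm's cache and database:

package main

import "fmt"

type store struct {
	cache map[[32]byte]bool // init-sync block cache
	db    map[[32]byte]bool // persistent database
}

// hasBlock reports whether a block root is known, checking the cache first
// and falling back to the database.
func (s *store) hasBlock(root [32]byte) bool {
	if s.cache[root] {
		return true
	}
	return s.db[root]
}

func main() {
	s := &store{
		cache: map[[32]byte]bool{{1}: true},
		db:    map[[32]byte]bool{{2}: true},
	}
	fmt.Println(s.hasBlock([32]byte{1}), s.hasBlock([32]byte{2}), s.hasBlock([32]byte{3}))
}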

View File

@@ -5,6 +5,8 @@ import (
"fmt"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
@@ -60,23 +62,56 @@ func logBlockSyncStatus(block interfaces.BeaconBlock, blockRoot [32]byte, justif
return err
}
level := log.Logger.GetLevel()
log = log.WithField("slot", block.Slot())
if level >= logrus.DebugLevel {
log = log.WithField("slotInEpoch", block.Slot()%params.BeaconConfig().SlotsPerEpoch)
log = log.WithField("justifiedEpoch", justified.Epoch)
log = log.WithField("justifiedRoot", fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]))
log = log.WithField("parentRoot", fmt.Sprintf("0x%s...", hex.EncodeToString(block.ParentRoot())[:8]))
log = log.WithField("version", version.String(block.Version()))
log = log.WithField("sinceSlotStartTime", prysmTime.Now().Sub(startTime))
log = log.WithField("chainServiceProcessedTime", prysmTime.Now().Sub(receivedTime))
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(block.ParentRoot())[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime),
}).Debug("Synced new block")
} else {
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"epoch": slots.ToEpoch(block.Slot()),
}).Info("Synced new block")
}
log.WithFields(logrus.Fields{
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
}).Info("Synced new block")
return nil
}
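Note: the block above gates the verbose fields on the logger's level. A small standalone sketch of that pattern with logrus; the field values here are hypothetical:

package main

import "github.com/sirupsen/logrus"

func main() {
	log := logrus.New()
	log.SetLevel(logrus.DebugLevel)

	fields := logrus.Fields{"slot": 640, "epoch": 20}
	if log.GetLevel() >= logrus.DebugLevel {
		// Extra detail is only attached when debug logging is enabled.
		fields["justifiedEpoch"] = 19
		log.WithFields(fields).Debug("Synced new block")
		return
	}
	log.WithFields(fields).Info("Synced new block")
}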
// logs payload-related data every slot.
func logPayload(block interfaces.BeaconBlock) error {
isExecutionBlk, err := blocks.IsExecutionBlock(block.Body())
if err != nil {
return errors.Wrap(err, "could not determine if block is execution block")
}
if !isExecutionBlk {
return nil
}
payload, err := block.Body().ExecutionPayload()
if err != nil {
return err
}
if payload.GasLimit == 0 {
return errors.New("gas limit should not be 0")
}
gasUtilized := float64(payload.GasUsed) / float64(payload.GasLimit)
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash)),
"parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash)),
"blockNumber": payload.BlockNumber,
"gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
}).Debug("Synced new payload")
return nil
}
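Note: the utilization figure logged above is simply gasUsed divided by gasLimit. With hypothetical numbers, 15,000,000 gas used against a 30,000,000 limit is logged as 0.50:

package main

import "fmt"

func main() {
	var gasUsed, gasLimit uint64 = 15_000_000, 30_000_000
	if gasLimit == 0 {
		panic("gas limit should not be 0") // mirrors the guard in logPayload
	}
	gasUtilized := float64(gasUsed) / float64(gasLimit)
	fmt.Printf("gasUtilized: %.2f\n", gasUtilized) // prints 0.50
}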

View File

@@ -0,0 +1,81 @@
package blockchain
var mergeAsciiArt = `
+?$$$$$$?*; ;*?$$$$?*; +!$$$$$$?!;
!##$???$@##$+ !&#@$??$&#@* +@#&$????$##+
!##; +@#&; !##* ;$##* @#$ ;&#+
!##; !##+ ;##$ @#@ $#&* ++;
!##; ;@#&; *##* ?##; ;$##&$!+;
!##?!!!?$##$+ !##+ !##+ ;!$@##&$!;
!##@@@@$$!+ *##* ?##; ;*?@#&!
!##; ;##$ @#@ ;?$; ?##+
!##; ?##! ;?##+ ;##+ ;$#&;
!##; !&#&$??$&#@* ;&#&$$??$$&#@+
+??; ;*?$$$$?+ ;+!?$$$$$!+
;;;;
;+!??$$$?!*+; ;*?$@&&&&&@@$!*;
*?@############&$?+ ;!@###############&$!;
;!@####&@$????$$@#####@! ;?&####$?*++;++*!?@####&?;
*@###&$*; ;*$&###@* *&###@!; ;!@###&!
!###&!; ;?&###? *####! *&###?
!###@+ ;$###$ +###&+ ;$###?
;###&; $###? ;;+*!??$$$$$$$$??!$###* ;@###*
!###! &###?$@&#####################@$?!+; +###$
$###+ ;*?&#################################&$?*; &##&;
$###+ ;!$&########################################&$!; &###;
*###? ;!$################################################$*; +###@
;@###+ +$&####################################################&?; $###!
+###&+ *$##########################################################$+ ;$###$;
*&###?; +$##############################################################?*@###$;
+$###&?+ ;$#####################################################################?;
*@####@&#####################################################################*
+$&##################@?!*++*!$&###################&$?*++*!?$###############&*
$###############&?+ ;!@###############@!; ;!@##############!
;$##############&!; *&###########&!; !&#############!
$##############@+ +@* ;$#########$; +@* ;$#############!
?##############$; *###* $#######$; +&##! ?#############+
+##############$ !#####! $#####@; *#####? ?############@;
@#############@; !#######! ;&####+ *#######? $############?
+##############+ ?#########? $###$ !#########$ +############&;
$#############$ ;$###########? !###? !###########$; $############!
@#############* !#############! ?###$ *#############? +############@
;&############&; +?@#######&$+; ;&####; ;+?@#######&?+; @############;
;#############@ +$&#&$*; ;$#####@; +$&#&$*; $############+
;#############$ *+ ;+; ;*; *&#######&! ;*; ;+; +*; $############*
&############@ ;$@!; +$@! ;?###########$; *@$* ;*$@+ $############*
$############&; ;$#&$*+!@##* +@#############@+ +&#@?++$&#@; @############+
*#############* $######&+ !#################? +@######$; +#############;
@############$ ?####@+ ;$###################$; ;@####$ $############@
*#############* !##@; +@#####################&* ;$##? +#############!
$#############+ *$; ?#########################?; $! +&############&;
;&#############! +$###########################@+ *&#############?
+##############$*; ;?###############################$+ *$##############@;
*###############&$?!!$###################################$?!?$&################+
*###################################&@$$$$@&#################################!
+&##############################&?+; ;+?&#############################?
;$############################@; ;@###########################!
?###########################* *##########################*
+@#############&$!+$#######? ?########$+!$&###########@+
!&###########&; $#######$+ +$########? +&##########?;
;?###########&* *@#######@$!; ;$@########@* *##########$+
;?&##########?; ;*$&####&$* ;!$&####&$* ;$#########@*
;!@#########@!; ;++*+; ;*; ;+*++; ;!&########$*
*$&########&$*; ;*$&#&$*; +*$&#######&?+
;*$&#########&@$$@&#########&@$$@&########&$*;
;+?$&##############################&$?+;
;+!?$@&###################&$$!+;
;++*!??$$$$$$$?!!*+;
;;; ;;+*++; ;;;++;;;++;;; ;+++;;;++++; ;;; ;; ;;; ;;;++;;;+++;; ;;;+++++++; ;+++++;
;@#&+ +$&&@$@&#? @#@@@&#&@@@#$ ;$@@@@#&@@@@! !#@ !#$ +&#&; !#&@@@#&@@@&&; !#&@@@@@@@* $#&@@@&&$+
$#?#@; ?#&!; *#$ &&;;;$#!;;*#$ ;;;*#@;;;; $#? +#&; ;@#?#$ !#!;;*#@;;;@#; ?#$;;;;;;; @#* ;!&#?
*#$ ?#$ *#&; ;+; ++ $#! ;+; *#$ ;&#+ $#* ?#? $#! ;+; +#@ ++ ?#$ $#* ;&#*
;&&; @#* $#$ $#! *#$ !#@; !#$ +#@ ;&#; +#@ ?#@??????! $#* $#$
$#!;;;*#&; $#? $#! *#$ $#? ;&&; ;@#*;;;!#$ +#@ ?#@??????! $#* $#$
!##@@@@@&#$ !#@; $#! *#$ ;&#+ $#* ?#&@@@@@##! +#@ ?#$ $#* @#*
;&&+;;;;;;@#* ;$#$+ @$ $#! *#$ *#@!#? *#@;;;;;;+##+ +#@ ?#$ $#* +$#$
$#* +#&; ;?&#@$$$$#$ +$$&#@$$; ;$$$$@##$$$$* $##@; ;&#+ !#@; $$@##$$* ?#&$$$$$$$! @#@$$$@&@!
;** +*; +*!?!*+; ;******* ;***********+ ;**; ;*+ **; *******+ ;*********+ +*!!!!*;
`

View File

@@ -35,7 +35,7 @@ func TestReportEpochMetrics_SlashedValidatorOutOfBound(t *testing.T) {
require.NoError(t, err)
v.Slashed = true
require.NoError(t, h.UpdateValidatorAtIndex(0, v))
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 1, Data: util.HydrateAttestationData(&eth.AttestationData{})}))
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 1, Data: util.NewAttestationUtil().HydrateAttestationData(&eth.AttestationData{})}))
err = reportEpochMetrics(context.Background(), h, h)
require.ErrorContains(t, "slot 0 out of bounds", err)
}

View File

@@ -13,7 +13,7 @@ import (
func testServiceOptsWithDB(t *testing.T) []Option {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0)
fcs := protoarray.New()
return []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),

View File

@@ -1,129 +0,0 @@
package blockchain
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestService_newSlot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
ctx := context.Background()
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
wsb, err := wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
assert.NoError(t, beaconDB.SaveBlock(ctx, wsb))
bj, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // genesis
state, blkRoot, err = prepareForkchoiceState(ctx, 32, [32]byte{'a'}, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // finalized
state, blkRoot, err = prepareForkchoiceState(ctx, 64, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // justified
state, blkRoot, err = prepareForkchoiceState(ctx, 96, bj, [32]byte{'a'}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // best justified
state, blkRoot, err = prepareForkchoiceState(ctx, 97, [32]byte{'d'}, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot)) // bad
type args struct {
slot types.Slot
finalized *ethpb.Checkpoint
justified *ethpb.Checkpoint
bestJustified *ethpb.Checkpoint
shouldEqual bool
}
tests := []struct {
name string
args args
}{
{
name: "Not epoch boundary. No change",
args: args{
slot: params.BeaconConfig().SlotsPerEpoch + 1,
finalized: &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'a'}, 32)},
justified: &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'b'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 3, Root: bj[:]},
shouldEqual: false,
},
},
{
name: "Justified higher than best justified. No change",
args: args{
slot: params.BeaconConfig().SlotsPerEpoch,
finalized: &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'a'}, 32)},
justified: &ethpb.Checkpoint{Epoch: 3, Root: bytesutil.PadTo([]byte{'b'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 2, Root: bj[:]},
shouldEqual: false,
},
},
{
name: "Best justified not on the same chain as finalized. No change",
args: args{
slot: params.BeaconConfig().SlotsPerEpoch,
finalized: &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'a'}, 32)},
justified: &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'b'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 3, Root: bytesutil.PadTo([]byte{'d'}, 32)},
shouldEqual: false,
},
},
{
name: "Best justified on the same chain as finalized. Yes change",
args: args{
slot: params.BeaconConfig().SlotsPerEpoch,
finalized: &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'a'}, 32)},
justified: &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'b'}, 32)},
bestJustified: &ethpb.Checkpoint{Epoch: 3, Root: bj[:]},
shouldEqual: true,
},
},
}
for _, test := range tests {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
s := store.New(test.args.justified, test.args.finalized)
s.SetBestJustifiedCheckpt(test.args.bestJustified)
service.store = s
require.NoError(t, service.NewSlot(ctx, test.args.slot))
if test.args.shouldEqual {
bcp, err := service.store.BestJustifiedCheckpt()
require.NoError(t, err)
cp, err := service.store.JustifiedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, bcp, cp)
} else {
bcp, err := service.store.BestJustifiedCheckpt()
require.NoError(t, err)
cp, err := service.store.JustifiedCheckpt()
require.NoError(t, err)
require.DeepNotSSZEqual(t, bcp, cp)
}
}
}

View File

@@ -78,6 +78,8 @@ func (s *Service) validateMergeBlock(ctx context.Context, b interfaces.SignedBea
"mergeBlockParentTotalDifficulty": mergeBlockParentTD,
}).Info("Validated terminal block")
log.Info(mergeAsciiArt)
return nil
}

View File

@@ -109,7 +109,7 @@ func Test_validateMergeBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -152,7 +152,7 @@ func Test_validateMergeBlock(t *testing.T) {
func Test_getBlkParentHashAndTD(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),

View File

@@ -9,11 +9,11 @@ import (
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
@@ -28,7 +28,7 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0)),
WithForkChoiceStore(protoarray.New()),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
@@ -37,20 +37,16 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
_, err = blockTree1(t, beaconDB, []byte{'g'})
require.NoError(t, err)
BlkWithOutState := util.NewBeaconBlock()
BlkWithOutState.Block.Slot = 0
wsb, err := wrapper.WrappedSignedBeaconBlock(BlkWithOutState)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
BlkWithOutStateRoot, err := BlkWithOutState.Block.HashTreeRoot()
blkWithoutState := util.NewBeaconBlock()
blkWithoutState.Block.Slot = 0
util.SaveBlock(t, ctx, beaconDB, blkWithoutState)
BlkWithOutStateRoot, err := blkWithoutState.Block.HashTreeRoot()
require.NoError(t, err)
BlkWithStateBadAtt := util.NewBeaconBlock()
BlkWithStateBadAtt.Block.Slot = 1
wsb, err = wrapper.WrappedSignedBeaconBlock(BlkWithStateBadAtt)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
BlkWithStateBadAttRoot, err := BlkWithStateBadAtt.Block.HashTreeRoot()
blkWithStateBadAtt := util.NewBeaconBlock()
blkWithStateBadAtt.Block.Slot = 1
util.SaveBlock(t, ctx, beaconDB, blkWithStateBadAtt)
BlkWithStateBadAttRoot, err := blkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
s, err := util.NewBeaconState()
@@ -58,13 +54,11 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
require.NoError(t, s.SetSlot(100*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithStateBadAttRoot))
BlkWithValidState := util.NewBeaconBlock()
BlkWithValidState.Block.Slot = 2
wsb, err = wrapper.WrappedSignedBeaconBlock(BlkWithValidState)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
blkWithValidState := util.NewBeaconBlock()
blkWithValidState.Block.Slot = 2
util.SaveBlock(t, ctx, beaconDB, blkWithValidState)
BlkWithValidStateRoot, err := BlkWithValidState.Block.HashTreeRoot()
blkWithValidStateRoot, err := blkWithValidState.Block.HashTreeRoot()
require.NoError(t, err)
s, err = util.NewBeaconState()
require.NoError(t, err)
@@ -74,8 +68,9 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
})
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithValidStateRoot))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, blkWithValidStateRoot))
au := util.AttestationUtil{}
tests := []struct {
name string
a *ethpb.Attestation
@@ -83,17 +78,17 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
}{
{
name: "attestation's data slot not aligned with target vote",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
wantedErr: "slot 32 does not match target epoch 0",
},
{
name: "no pre state for attestations's target block",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
wantedErr: "could not get pre state for epoch 0",
},
{
name: "process attestation doesn't match current epoch",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}}),
wantedErr: "target epoch 100 does not match current epoch",
},
@@ -140,7 +135,7 @@ func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(doublylinkedtree.New(0, 0)),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
@@ -149,20 +144,16 @@ func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
_, err = blockTree1(t, beaconDB, []byte{'g'})
require.NoError(t, err)
BlkWithOutState := util.NewBeaconBlock()
BlkWithOutState.Block.Slot = 0
wsb, err := wrapper.WrappedSignedBeaconBlock(BlkWithOutState)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
BlkWithOutStateRoot, err := BlkWithOutState.Block.HashTreeRoot()
blkWithoutState := util.NewBeaconBlock()
blkWithoutState.Block.Slot = 0
util.SaveBlock(t, ctx, beaconDB, blkWithoutState)
BlkWithOutStateRoot, err := blkWithoutState.Block.HashTreeRoot()
require.NoError(t, err)
BlkWithStateBadAtt := util.NewBeaconBlock()
BlkWithStateBadAtt.Block.Slot = 1
wsb, err = wrapper.WrappedSignedBeaconBlock(BlkWithStateBadAtt)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
BlkWithStateBadAttRoot, err := BlkWithStateBadAtt.Block.HashTreeRoot()
blkWithStateBadAtt := util.NewBeaconBlock()
blkWithStateBadAtt.Block.Slot = 1
util.SaveBlock(t, ctx, beaconDB, blkWithStateBadAtt)
BlkWithStateBadAttRoot, err := blkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
s, err := util.NewBeaconState()
@@ -170,13 +161,11 @@ func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
require.NoError(t, s.SetSlot(100*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithStateBadAttRoot))
BlkWithValidState := util.NewBeaconBlock()
BlkWithValidState.Block.Slot = 2
wsb, err = wrapper.WrappedSignedBeaconBlock(BlkWithValidState)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
blkWithValidState := util.NewBeaconBlock()
blkWithValidState.Block.Slot = 2
util.SaveBlock(t, ctx, beaconDB, blkWithValidState)
BlkWithValidStateRoot, err := BlkWithValidState.Block.HashTreeRoot()
blkWithValidStateRoot, err := blkWithValidState.Block.HashTreeRoot()
require.NoError(t, err)
s, err = util.NewBeaconState()
require.NoError(t, err)
@@ -186,8 +175,9 @@ func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
})
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithValidStateRoot))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, blkWithValidStateRoot))
au := util.AttestationUtil{}
tests := []struct {
name string
a *ethpb.Attestation
@@ -195,17 +185,17 @@ func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
}{
{
name: "attestation's data slot not aligned with target vote",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
wantedErr: "slot 32 does not match target epoch 0",
},
{
name: "no pre state for attestations's target block",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
wantedErr: "could not get pre state for epoch 0",
},
{
name: "process attestation doesn't match current epoch",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}}),
wantedErr: "target epoch 100 does not match current epoch",
},
@@ -250,7 +240,7 @@ func TestStore_OnAttestation_Ok_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -261,14 +251,16 @@ func TestStore_OnAttestation_Ok_ProtoArray(t *testing.T) {
genesisState, pks := util.DeterministicGenesisState(t, 64)
service.SetGenesisTime(time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
att, err := util.NewAttestationUtil().GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1)
ojc := &ethpb.Checkpoint{Epoch: 1, Root: tRoot[:]}
ofc := &ethpb.Checkpoint{Epoch: 1, Root: tRoot[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.OnAttestation(ctx, att[0]))
@@ -278,7 +270,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -289,14 +281,16 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
genesisState, pks := util.DeterministicGenesisState(t, 64)
service.SetGenesisTime(time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
att, err := util.NewAttestationUtil().GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1)
ojc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
ofc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.OnAttestation(ctx, att[0]))
@@ -328,12 +322,6 @@ func TestStore_SaveCheckpointState(t *testing.T) {
r := [32]byte{'g'}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, r))
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'a'})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'b'})
service.store.SetPrevFinalizedCheckpt(&ethpb.Checkpoint{Root: r[:]})
r = bytesutil.ToBytes32([]byte{'A'})
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)}))
@@ -362,10 +350,6 @@ func TestStore_SaveCheckpointState(t *testing.T) {
assert.Equal(t, 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot(), "Unexpected state slot")
require.NoError(t, s.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))
service.store.SetJustifiedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'a'})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r[:]}, [32]byte{'b'})
service.store.SetPrevFinalizedCheckpt(&ethpb.Checkpoint{Root: r[:]})
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'C'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}))
@@ -440,8 +424,7 @@ func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
d := util.HydrateAttestationData(&ethpb.AttestationData{})
d := util.NewAttestationUtil().HydrateAttestationData(&ethpb.AttestationData{})
require.Equal(t, errBlockNotFoundInCacheOrDB, service.verifyBeaconBlock(ctx, d))
}
@@ -454,9 +437,7 @@ func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Slot = 2
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
d := &ethpb.AttestationData{Slot: 1, BeaconBlockRoot: r[:]}
@@ -473,9 +454,7 @@ func TestVerifyBeaconBlock_OK(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Slot = 2
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
d := &ethpb.AttestationData{Slot: 2, BeaconBlockRoot: r[:]}
@@ -487,7 +466,7 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -498,19 +477,15 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_ProtoArray(t *testing.T) {
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
wsb, err := wrapper.WrappedSignedBeaconBlock(b32)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 1}, [32]byte{})
require.NoError(t, service.ForkChoicer().UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 1}))
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
wsb, err = wrapper.WrappedSignedBeaconBlock(b33)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
@@ -522,7 +497,7 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
@@ -533,19 +508,15 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
wsb, err := wrapper.WrappedSignedBeaconBlock(b32)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 1}, [32]byte{})
require.NoError(t, service.ForkChoicer().UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 1}))
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
wsb, err = wrapper.WrappedSignedBeaconBlock(b33)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
@@ -562,20 +533,15 @@ func TestVerifyFinalizedConsistency_OK(t *testing.T) {
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
wsb, err := wrapper.WrappedSignedBeaconBlock(b32)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r32[:], Epoch: 1}, [32]byte{})
require.NoError(t, service.ForkChoicer().UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 1, Root: r32}))
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
wsb, err = wrapper.WrappedSignedBeaconBlock(b33)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
@@ -595,22 +561,24 @@ func TestVerifyFinalizedConsistency_IsCanonical(t *testing.T) {
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: r32[:], Epoch: 1}, [32]byte{})
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, b32.Block.Slot, r32, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, b32.Block.Slot, r32, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, b33.Block.Slot, r33, r32, params.BeaconConfig().ZeroHash, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, b33.Block.Slot, r33, r32, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
_, err = service.cfg.ForkChoiceStore.Head(ctx, r32, []uint64{})
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r32}
require.NoError(t, service.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(jc))
_, err = service.cfg.ForkChoiceStore.Head(ctx, []uint64{})
require.NoError(t, err)
err = service.VerifyFinalizedConsistency(context.Background(), r33[:])
require.NoError(t, err)

@@ -24,6 +24,7 @@ import (
"github.com/prysmaticlabs/prysm/crypto/bls"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpbv1 "github.com/prysmaticlabs/prysm/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
@@ -105,6 +106,12 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
return err
}
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.ForkChoicer().JustifiedCheckpoint().Epoch
currStoreFinalizedEpoch := s.ForkChoicer().FinalizedCheckpoint().Epoch
preStateFinalizedEpoch := preState.FinalizedCheckpoint().Epoch
preStateJustifiedEpoch := preState.CurrentJustifiedCheckpoint().Epoch
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
@@ -129,6 +136,7 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState); err != nil {
return err
}
if err := s.insertBlockAndAttestationsToForkChoiceStore(ctx, signed.Block(), blockRoot, postState); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", signed.Block().Slot())
}
@@ -139,18 +147,6 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
}
}
// We add a proposer score boost to fork choice for the block root if applicable, right after
// running a successful state transition for the block.
secondsIntoSlot := uint64(time.Since(s.genesisTime).Seconds()) % params.BeaconConfig().SecondsPerSlot
if err := s.cfg.ForkChoiceStore.BoostProposerRoot(ctx, &forkchoicetypes.ProposerBoostRootArgs{
BlockRoot: blockRoot,
BlockSlot: signed.Block().Slot(),
CurrentSlot: slots.SinceGenesis(s.genesisTime),
SecondsIntoSlot: secondsIntoSlot,
}); err != nil {
return err
}
// If slasher is configured, forward the attestations in the block via
// an event feed for processing.
if features.Get().EnableSlasher {
@@ -178,64 +174,13 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
}()
}
// Update justified checkpoint.
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
currJustifiedEpoch := justified.Epoch
psj := postState.CurrentJustifiedCheckpoint()
if psj == nil {
return errNilJustifiedCheckpoint
}
if psj.Epoch > currJustifiedEpoch {
if err := s.updateJustified(ctx, postState); err != nil {
return err
}
}
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
if finalized == nil {
return errNilFinalizedInStore
}
psf := postState.FinalizedCheckpoint()
if psf == nil {
return errNilFinalizedCheckpoint
}
newFinalized := psf.Epoch > finalized.Epoch
if newFinalized {
s.store.SetPrevFinalizedCheckpt(finalized)
h, err := s.getPayloadHash(ctx, psf.Root)
if err != nil {
return err
}
s.store.SetFinalizedCheckptAndPayloadHash(psf, h)
s.store.SetPrevJustifiedCheckpt(justified)
h, err = s.getPayloadHash(ctx, psj.Root)
if err != nil {
return err
}
s.store.SetJustifiedCheckptAndPayloadHash(postState.CurrentJustifiedCheckpoint(), h)
// Update Forkchoice checkpoints
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(psj); err != nil {
return err
}
if err := s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(psf); err != nil {
return err
}
}
balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(justified.Root))
justified := s.ForkChoicer().JustifiedCheckpoint()
balances, err := s.justifiedBalances.get(ctx, justified.Root)
if err != nil {
msg := fmt.Sprintf("could not read balances for state w/ justified checkpoint %#x", justified.Root)
return errors.Wrap(err, msg)
}
headRoot, err := s.updateHead(ctx, balances)
headRoot, err := s.cfg.ForkChoiceStore.Head(ctx, balances)
if err != nil {
log.WithError(err).Warn("Could not update head")
}
@@ -269,22 +214,23 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
}()
// Save justified checkpoint to DB.
if postState.CurrentJustifiedCheckpoint().Epoch > currJustifiedEpoch {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, postState.CurrentJustifiedCheckpoint()); err != nil {
postStateJustifiedEpoch := postState.CurrentJustifiedCheckpoint().Epoch
if justified.Epoch > currStoreJustifiedEpoch || (justified.Epoch == postStateJustifiedEpoch && justified.Epoch > preStateJustifiedEpoch) {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{
Epoch: justified.Epoch, Root: justified.Root[:],
}); err != nil {
return err
}
}
// Update finalized checkpoint.
if newFinalized {
if err := s.updateFinalized(ctx, postState.FinalizedCheckpoint()); err != nil {
// Save finalized checkpoint to DB and more.
postStateFinalizedEpoch := postState.FinalizedCheckpoint().Epoch
finalized := s.ForkChoicer().FinalizedCheckpoint()
if finalized.Epoch > currStoreFinalizedEpoch || (finalized.Epoch == postStateFinalizedEpoch && finalized.Epoch > preStateFinalizedEpoch) {
if err := s.updateFinalized(ctx, &ethpb.Checkpoint{Epoch: finalized.Epoch, Root: finalized.Root[:]}); err != nil {
return err
}
fRoot := bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root)
if err := s.cfg.ForkChoiceStore.Prune(ctx, fRoot); err != nil {
return errors.Wrap(err, "could not prune fork choice nodes")
}
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(fRoot)
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(finalized.Root)
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
@@ -305,23 +251,21 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
// with a custom deadline, therefore using the background context instead.
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
defer cancel()
if err := s.insertFinalizedDeposits(depCtx, fRoot); err != nil {
if err := s.insertFinalizedDeposits(depCtx, finalized.Root); err != nil {
log.WithError(err).Error("Could not insert finalized deposits.")
}
}()
}
defer reportAttestationInclusion(b)
return s.handleEpochBoundary(ctx, postState)
}
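A minimal, self-contained sketch (illustrative names only, reduced to bare epoch numbers) of the persistence rule applied above for both the justified and the finalized checkpoint: persist when forkchoice has advanced past what it reported before the block, or when the checkpoint matches the block's post-state and is ahead of the pre-state.
package main
import "fmt"
// checkpointEpochs captures the epochs compared by onBlock when deciding whether to persist a checkpoint.
type checkpointEpochs struct {
	forkchoice uint64 // epoch reported by forkchoice after processing the block
	preStore   uint64 // epoch forkchoice reported before processing the block
	preState   uint64 // epoch recorded in the block's pre-state
	postState  uint64 // epoch recorded in the block's post-state
}
// shouldPersist returns true when forkchoice has advanced past its previous value, or when the
// new checkpoint originates from this block's post-state and is ahead of the pre-state.
func shouldPersist(c checkpointEpochs) bool {
	return c.forkchoice > c.preStore ||
		(c.forkchoice == c.postState && c.forkchoice > c.preState)
}
func main() {
	fmt.Println(shouldPersist(checkpointEpochs{forkchoice: 5, preStore: 4, preState: 4, postState: 5})) // true
	fmt.Println(shouldPersist(checkpointEpochs{forkchoice: 4, preStore: 4, preState: 4, postState: 4})) // false
}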
func getStateVersionAndPayload(st state.BeaconState) (int, *ethpb.ExecutionPayloadHeader, error) {
func getStateVersionAndPayload(st state.BeaconState) (int, *enginev1.ExecutionPayloadHeader, error) {
if st == nil {
return 0, nil, errors.New("nil state")
}
var preStateHeader *ethpb.ExecutionPayloadHeader
var preStateHeader *enginev1.ExecutionPayloadHeader
var err error
preStateVersion := st.Version()
switch preStateVersion {
@@ -379,7 +323,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
}
type versionAndHeader struct {
version int
header *ethpb.ExecutionPayloadHeader
header *enginev1.ExecutionPayloadHeader
}
preVersionAndHeaders := make([]*versionAndHeader, len(blks))
postVersionAndHeaders := make([]*versionAndHeader, len(blks))
@@ -441,14 +385,32 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
}
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b.Block(),
JustifiedEpoch: jCheckpoints[i].Epoch,
FinalizedEpoch: fCheckpoints[i].Epoch}
JustifiedCheckpoint: jCheckpoints[i],
FinalizedCheckpoint: fCheckpoints[i]}
pendingNodes[len(blks)-i-1] = args
s.saveInitSyncBlock(blockRoots[i], b)
if err = s.handleBlockAfterBatchVerify(ctx, b, blockRoots[i], fCheckpoints[i], jCheckpoints[i]); err != nil {
if err := s.saveInitSyncBlock(ctx, blockRoots[i], b); err != nil {
tracing.AnnotateError(span, err)
return err
}
if err := s.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: b.Block().Slot(),
Root: blockRoots[i][:],
}); err != nil {
tracing.AnnotateError(span, err)
return err
}
if i > 0 && jCheckpoints[i].Epoch > jCheckpoints[i-1].Epoch {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, jCheckpoints[i]); err != nil {
tracing.AnnotateError(span, err)
return err
}
}
if i > 0 && fCheckpoints[i].Epoch > fCheckpoints[i-1].Epoch {
if err := s.updateFinalized(ctx, fCheckpoints[i]); err != nil {
tracing.AnnotateError(span, err)
return err
}
}
}
// Insert all nodes but the last one to forkchoice
if err := s.cfg.ForkChoiceStore.InsertOptimisticChain(ctx, pendingNodes); err != nil {
@@ -459,14 +421,6 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, preState, lastBR); err != nil {
return errors.Wrap(err, "could not insert last block in batch to forkchoice")
}
// Prune forkchoice store only if the new finalized checkpoint is higher
// than the finalized checkpoint in forkchoice store.
if fCheckpoints[len(blks)-1].Epoch > s.cfg.ForkChoiceStore.FinalizedEpoch() {
if err := s.cfg.ForkChoiceStore.Prune(ctx, s.ensureRootNotZeros(bytesutil.ToBytes32(fCheckpoints[len(blks)-1].Root))); err != nil {
return errors.Wrap(err, "could not prune fork choice nodes")
}
}
// Set their optimistic status
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, lastBR); err != nil {
@@ -495,61 +449,6 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
return s.saveHeadNoDB(ctx, lastB, lastBR, preState)
}
// handles a block after the block's batch has been verified, where we can save blocks,
// their state summaries, and split them off to relative hot/cold storage.
func (s *Service) handleBlockAfterBatchVerify(ctx context.Context, signed interfaces.SignedBeaconBlock,
blockRoot [32]byte, fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
if err := s.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: signed.Block().Slot(),
Root: blockRoot[:],
}); err != nil {
return err
}
// Rate limit how many blocks (2 epochs' worth) a node keeps in memory.
if uint64(len(s.getInitSyncBlocks())) > initialSyncBlockCacheSize {
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
s.clearInitSyncBlocks()
}
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
if jCheckpoint.Epoch > justified.Epoch {
if err := s.updateJustifiedInitSync(ctx, jCheckpoint); err != nil {
return err
}
}
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
if finalized == nil {
return errNilFinalizedInStore
}
// Update finalized checkpoint. Prune the block cache and helper caches on every new finalized epoch.
if fCheckpoint.Epoch > finalized.Epoch {
if err := s.updateFinalized(ctx, fCheckpoint); err != nil {
return err
}
s.store.SetPrevFinalizedCheckpt(finalized)
h, err := s.getPayloadHash(ctx, fCheckpoint.Root)
if err != nil {
return err
}
s.store.SetFinalizedCheckptAndPayloadHash(fCheckpoint, h)
if err := s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(fCheckpoint); err != nil {
return err
}
}
return nil
}
// Epoch boundary bookkeeping such as logging epoch summaries.
func (s *Service) handleEpochBoundary(ctx context.Context, postState state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.handleEpochBoundary")
@@ -601,9 +500,15 @@ func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Contex
ctx, span := trace.StartSpan(ctx, "blockChain.insertBlockAndAttestationsToForkChoiceStore")
defer span.End()
fCheckpoint := st.FinalizedCheckpoint()
jCheckpoint := st.CurrentJustifiedCheckpoint()
if err := s.insertBlockToForkChoiceStore(ctx, blk, root, st, fCheckpoint, jCheckpoint); err != nil {
if !s.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(blk.ParentRoot())) {
fCheckpoint := st.FinalizedCheckpoint()
jCheckpoint := st.CurrentJustifiedCheckpoint()
if err := s.fillInForkChoiceMissingBlocks(ctx, blk, fCheckpoint, jCheckpoint); err != nil {
return err
}
}
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, st, root); err != nil {
return err
}
// Feed in block's attestations to fork choice store.
@@ -621,14 +526,7 @@ func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Contex
return nil
}
func (s *Service) insertBlockToForkChoiceStore(ctx context.Context, blk interfaces.BeaconBlock, root [32]byte, st state.BeaconState, fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
if err := s.fillInForkChoiceMissingBlocks(ctx, blk, fCheckpoint, jCheckpoint); err != nil {
return err
}
return s.cfg.ForkChoiceStore.InsertNode(ctx, st, root)
}
// Inserts attester slashing indices to fork choice store.
// InsertSlashingsToForkChoiceStore inserts attester slashing indices to fork choice store.
// To call this function, it's caller's responsibility to ensure the slashing object is valid.
func (s *Service) InsertSlashingsToForkChoiceStore(ctx context.Context, slashings []*ethpb.AttesterSlashing) {
for _, slashing := range slashings {
@@ -656,10 +554,6 @@ func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b interface
// This removes the attestations from the mem pool. It will only remove the attestations if input root `r` is canonical,
// meaning the block `b` is part of the canonical chain.
func (s *Service) pruneCanonicalAttsFromPool(ctx context.Context, r [32]byte, b interfaces.SignedBeaconBlock) error {
if !features.Get().CorrectlyPruneCanonicalAtts {
return nil
}
canonical, err := s.IsCanonical(ctx, r)
if err != nil {
return err
@@ -684,7 +578,7 @@ func (s *Service) pruneCanonicalAttsFromPool(ctx context.Context, r [32]byte, b
}
// validateMergeTransitionBlock validates the merge transition block.
func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion int, stateHeader *ethpb.ExecutionPayloadHeader, blk interfaces.SignedBeaconBlock) error {
func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion int, stateHeader *enginev1.ExecutionPayloadHeader, blk interfaces.SignedBeaconBlock) error {
// Skip validation if block is older than Bellatrix.
if blocks.IsPreBellatrixVersion(blk.Block().Version()) {
return nil

@@ -95,17 +95,13 @@ func (s *Service) verifyBlkPreState(ctx context.Context, b interfaces.BeaconBloc
func (s *Service) VerifyFinalizedBlkDescendant(ctx context.Context, root [32]byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.VerifyFinalizedBlkDescendant")
defer span.End()
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
fRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
finalizedBlkSigned, err := s.getBlock(ctx, fRoot)
finalized := s.ForkChoicer().FinalizedCheckpoint()
fRoot := s.ensureRootNotZeros(finalized.Root)
fSlot, err := slots.EpochStart(finalized.Epoch)
if err != nil {
return err
}
finalizedBlk := finalizedBlkSigned.Block()
bFinalizedRoot, err := s.ancestor(ctx, root[:], finalizedBlk.Slot())
bFinalizedRoot, err := s.ancestor(ctx, root[:], fSlot)
if err != nil {
return errors.Wrap(err, "could not get finalized block root")
}
@@ -115,7 +111,7 @@ func (s *Service) VerifyFinalizedBlkDescendant(ctx context.Context, root [32]byt
if !bytes.Equal(bFinalizedRoot, fRoot[:]) {
err := fmt.Errorf("block %#x is not a descendant of the current finalized block slot %d, %#x != %#x",
bytesutil.Trunc(root[:]), finalizedBlk.Slot(), bytesutil.Trunc(bFinalizedRoot),
bytesutil.Trunc(root[:]), fSlot, bytesutil.Trunc(bFinalizedRoot),
bytesutil.Trunc(fRoot[:]))
tracing.AnnotateError(span, err)
return invalidBlock{err}
@@ -126,10 +122,7 @@ func (s *Service) VerifyFinalizedBlkDescendant(ctx context.Context, root [32]byt
// verifyBlkFinalizedSlot validates that the input block's slot is not less than or equal
// to the current finalized slot.
func (s *Service) verifyBlkFinalizedSlot(b interfaces.BeaconBlock) error {
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
finalized := s.ForkChoicer().FinalizedCheckpoint()
finalizedSlot, err := slots.EpochStart(finalized.Epoch)
if err != nil {
return err
@@ -141,112 +134,8 @@ func (s *Service) verifyBlkFinalizedSlot(b interfaces.BeaconBlock) error {
return nil
}
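For reference, a tiny sketch of the slot arithmetic behind verifyBlkFinalizedSlot, assuming the mainnet value of 32 slots per epoch; names are illustrative, not the service's API.
package main
import "fmt"
const slotsPerEpoch = 32
// epochStart stands in for slots.EpochStart in this sketch.
func epochStart(epoch uint64) uint64 { return epoch * slotsPerEpoch }
// violatesFinality reports whether a block at blockSlot is at or before the finalized slot
// and must therefore be rejected.
func violatesFinality(blockSlot, finalizedEpoch uint64) bool {
	return blockSlot <= epochStart(finalizedEpoch)
}
func main() {
	fmt.Println(violatesFinality(64, 2)) // true: slot 64 is exactly the start of epoch 2
	fmt.Println(violatesFinality(65, 2)) // false
}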
// shouldUpdateCurrentJustified prevents the bouncing attack by only updating conflicting justified
// checkpoints in the fork choice if we are in the early slots of the epoch.
// Otherwise, delay incorporation of new justified checkpoint until next epoch boundary.
//
// Spec code:
// def should_update_justified_checkpoint(store: Store, new_justified_checkpoint: Checkpoint) -> bool:
// """
// To address the bouncing attack, only update conflicting justified
// checkpoints in the fork choice if in the early slots of the epoch.
// Otherwise, delay incorporation of new justified checkpoint until next epoch boundary.
//
// See https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114 for more detailed analysis and discussion.
// """
// if compute_slots_since_epoch_start(get_current_slot(store)) < SAFE_SLOTS_TO_UPDATE_JUSTIFIED:
// return True
//
// justified_slot = compute_start_slot_at_epoch(store.justified_checkpoint.epoch)
// if not get_ancestor(store, new_justified_checkpoint.root, justified_slot) == store.justified_checkpoint.root:
// return False
//
// return True
func (s *Service) shouldUpdateCurrentJustified(ctx context.Context, newJustifiedCheckpt *ethpb.Checkpoint) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.shouldUpdateCurrentJustified")
defer span.End()
if slots.SinceEpochStarts(s.CurrentSlot()) < params.BeaconConfig().SafeSlotsToUpdateJustified {
return true, nil
}
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return false, errors.Wrap(err, "could not get justified checkpoint")
}
jSlot, err := slots.EpochStart(justified.Epoch)
if err != nil {
return false, err
}
justifiedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(newJustifiedCheckpt.Root))
b, err := s.ancestor(ctx, justifiedRoot[:], jSlot)
if err != nil {
return false, err
}
if !bytes.Equal(b, justified.Root) {
return false, nil
}
return true, nil
}
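A standalone Go rendering of the spec logic quoted in the comment above; the helper signatures and the ancestor lookup are simplified stand-ins, with SAFE_SLOTS_TO_UPDATE_JUSTIFIED set to 8 for illustration.
package main
import "fmt"
const (
	slotsPerEpoch              = 32
	safeSlotsToUpdateJustified = 8
)
type checkpoint struct {
	Epoch uint64
	Root  [32]byte
}
func slotsSinceEpochStart(slot uint64) uint64 { return slot % slotsPerEpoch }
func epochStart(epoch uint64) uint64          { return epoch * slotsPerEpoch }
// shouldUpdateJustified accepts a conflicting justified checkpoint only in the early slots of
// the epoch, or when the candidate descends from the currently justified checkpoint.
// ancestorAt is assumed to return the ancestor of root at the given slot.
func shouldUpdateJustified(currentSlot uint64, current, candidate checkpoint,
	ancestorAt func(root [32]byte, slot uint64) [32]byte) bool {
	if slotsSinceEpochStart(currentSlot) < safeSlotsToUpdateJustified {
		return true
	}
	return ancestorAt(candidate.Root, epochStart(current.Epoch)) == current.Root
}
func main() {
	current := checkpoint{Epoch: 1, Root: [32]byte{'a'}}
	candidate := checkpoint{Epoch: 2, Root: [32]byte{'b'}}
	// Pretend the candidate's ancestor at the justified slot is the current justified root.
	descends := func(_ [32]byte, _ uint64) [32]byte { return current.Root }
	fmt.Println(shouldUpdateJustified(75, current, candidate, descends)) // true: late in the epoch, but the candidate descends from the current checkpoint
}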
func (s *Service) updateJustified(ctx context.Context, state state.ReadOnlyBeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.updateJustified")
defer span.End()
cpt := state.CurrentJustifiedCheckpoint()
bestJustified, err := s.store.BestJustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get best justified checkpoint")
}
if cpt.Epoch > bestJustified.Epoch {
s.store.SetBestJustifiedCheckpt(cpt)
}
canUpdate, err := s.shouldUpdateCurrentJustified(ctx, cpt)
if err != nil {
return err
}
if canUpdate {
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
s.store.SetPrevJustifiedCheckpt(justified)
h, err := s.getPayloadHash(ctx, cpt.Root)
if err != nil {
return err
}
s.store.SetJustifiedCheckptAndPayloadHash(cpt, h)
// Update forkchoice's justified checkpoint
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(cpt); err != nil {
return err
}
}
return nil
}
// This caches the input checkpoint as justified for the service struct. It rotates the current justified checkpoint to previous justified,
// caches justified checkpoint balances for fork choice, and saves the justified checkpoint in the DB.
// This method does not defend against the fork choice bouncing attack, which is why it is only recommended during initial sync.
func (s *Service) updateJustifiedInitSync(ctx context.Context, cp *ethpb.Checkpoint) error {
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
s.store.SetPrevJustifiedCheckpt(justified)
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, cp); err != nil {
return err
}
h, err := s.getPayloadHash(ctx, cp.Root)
if err != nil {
return err
}
s.store.SetJustifiedCheckptAndPayloadHash(cp, h)
return s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(cp)
}
// updateFinalized saves the init sync blocks and the finalized checkpoint, migrates
// old states to cold storage, and saves the last validated checkpoint to the DB.
func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) error {
ctx, span := trace.StartSpan(ctx, "blockChain.updateFinalized")
defer span.End()
@@ -319,7 +208,8 @@ func (s *Service) ancestorByForkChoiceStore(ctx context.Context, r [32]byte, slo
if !s.cfg.ForkChoiceStore.HasParent(r) {
return nil, errors.New("could not find root in fork choice store")
}
return s.cfg.ForkChoiceStore.AncestorRoot(ctx, r, slot)
root, err := s.cfg.ForkChoiceStore.AncestorRoot(ctx, r, slot)
return root[:], err
}
// This retrieves an ancestor root using DB. The look up is recursively looking up DB. Slower than `ancestorByForkChoiceStore`.
@@ -351,16 +241,13 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, blk interfa
pendingNodes := make([]*forkchoicetypes.BlockAndCheckpoints, 0)
// Fork choice only matters from the last finalized slot.
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return err
}
finalized := s.ForkChoicer().FinalizedCheckpoint()
fSlot, err := slots.EpochStart(finalized.Epoch)
if err != nil {
return err
}
pendingNodes = append(pendingNodes, &forkchoicetypes.BlockAndCheckpoints{Block: blk,
JustifiedEpoch: jCheckpoint.Epoch, FinalizedEpoch: fCheckpoint.Epoch})
JustifiedCheckpoint: jCheckpoint, FinalizedCheckpoint: fCheckpoint})
// Walk up the chain as long as the parent node is not in the fork choice store but is present in the DB.
root := bytesutil.ToBytes32(blk.ParentRoot())
for !s.cfg.ForkChoiceStore.HasNode(root) && s.cfg.BeaconDB.HasBlock(ctx, root) {
@@ -373,14 +260,14 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, blk interfa
}
root = bytesutil.ToBytes32(b.Block().ParentRoot())
args := &forkchoicetypes.BlockAndCheckpoints{Block: b.Block(),
JustifiedEpoch: jCheckpoint.Epoch,
FinalizedEpoch: fCheckpoint.Epoch}
JustifiedCheckpoint: jCheckpoint,
FinalizedCheckpoint: fCheckpoint}
pendingNodes = append(pendingNodes, args)
}
if len(pendingNodes) == 1 {
return nil
}
if root != s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root)) {
if root != s.ensureRootNotZeros(finalized.Root) {
return errNotDescendantOfFinalized
}
return s.cfg.ForkChoiceStore.InsertOptimisticChain(ctx, pendingNodes)
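A map-backed sketch of the parent walk above, with simplified types; the real loop wraps SignedBeaconBlocks and additionally verifies that the walk ended at the finalized root before inserting the chain.
package main
import "fmt"
// chainIndex is a stand-in for the forkchoice store and the block DB.
type chainIndex struct {
	inForkchoice map[[32]byte]bool
	parentOf     map[[32]byte][32]byte // block root -> parent root, for blocks present in the DB
}
// collectMissing walks from a block's parent toward the finalized root, gathering every root
// that is in the DB but not yet tracked by forkchoice. It stops at the first tracked node or
// when the DB no longer has the block.
func collectMissing(idx chainIndex, parentRoot [32]byte) [][32]byte {
	var pending [][32]byte
	root := parentRoot
	for !idx.inForkchoice[root] {
		parent, inDB := idx.parentOf[root]
		if !inDB {
			break
		}
		pending = append(pending, root)
		root = parent
	}
	return pending
}
func main() {
	a, b, c := [32]byte{'a'}, [32]byte{'b'}, [32]byte{'c'}
	idx := chainIndex{
		inForkchoice: map[[32]byte]bool{a: true},
		parentOf:     map[[32]byte][32]byte{c: b, b: a},
	}
	fmt.Println(len(collectMissing(idx, c))) // 2: roots c and b still need to be inserted
}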

@@ -71,10 +71,7 @@ func (s *Service) VerifyFinalizedConsistency(ctx context.Context, root []byte) e
return nil
}
f, err := s.FinalizedCheckpt()
if err != nil {
return err
}
f := s.FinalizedCheckpt()
ss, err := slots.EpochStart(f.Epoch)
if err != nil {
return err
@@ -123,7 +120,7 @@ func (s *Service) spawnProcessAttestationsRoutine(stateFeed *event.Feed) {
case <-s.ctx.Done():
return
case <-st.C():
if err := s.NewSlot(s.ctx, s.CurrentSlot()); err != nil {
if err := s.ForkChoicer().NewSlot(s.ctx, s.CurrentSlot()); err != nil {
log.WithError(err).Error("Could not process new slot")
return
}
@@ -152,15 +149,12 @@ func (s *Service) UpdateHead(ctx context.Context) error {
s.processAttestations(ctx)
justified, err := s.store.JustifiedCheckpt()
justified := s.ForkChoicer().JustifiedCheckpoint()
balances, err := s.justifiedBalances.get(ctx, justified.Root)
if err != nil {
return err
}
balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(justified.Root))
if err != nil {
return err
}
newHeadRoot, err := s.updateHead(ctx, balances)
newHeadRoot, err := s.cfg.ForkChoiceStore.Head(ctx, balances)
if err != nil {
log.WithError(err).Warn("Resolving fork due to new attestation")
}

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
@@ -48,22 +49,18 @@ func TestVerifyLMDFFGConsistent_NotOK(t *testing.T) {
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
wsb, err := wrapper.WrappedSignedBeaconBlock(b32)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
wsb, err = wrapper.WrappedSignedBeaconBlock(b33)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
wanted := "FFG and LMD votes are not consistent"
a := util.NewAttestation()
a := util.NewAttestationUtil().NewAttestation()
a.Data.Target.Epoch = 1
a.Data.Target.Root = []byte{'a'}
a.Data.BeaconBlockRoot = r33[:]
@@ -79,21 +76,16 @@ func TestVerifyLMDFFGConsistent_OK(t *testing.T) {
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
wsb, err := wrapper.WrappedSignedBeaconBlock(b32)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
wsb, err = wrapper.WrappedSignedBeaconBlock(b33)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
a := util.NewAttestation()
a := util.NewAttestationUtil().NewAttestation()
a.Data.Target.Epoch = 1
a.Data.Target.Root = r32[:]
a.Data.BeaconBlockRoot = r33[:]
@@ -113,14 +105,16 @@ func TestProcessAttestations_Ok(t *testing.T) {
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(prysmTime.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
atts, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
atts, err := util.NewAttestationUtil().GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(atts[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1)
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
@@ -144,7 +138,7 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
require.LogsDoNotContain(t, hook, hookErr)
gb, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
service.saveInitSyncBlock([32]byte{'a'}, gb)
require.NoError(t, service.saveInitSyncBlock(ctx, [32]byte{'a'}, gb))
service.notifyEngineIfChangedHead(ctx, [32]byte{'a'})
require.LogsContain(t, hook, invalidStateErr)
@@ -161,8 +155,7 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
require.NoError(t, err)
r1, err := b.Block.HashTreeRoot()
require.NoError(t, err)
service.saveInitSyncBlock(r1, wsb)
finalized := &ethpb.Checkpoint{Root: r1[:], Epoch: 0}
require.NoError(t, service.saveInitSyncBlock(ctx, r1, wsb))
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
slot: 1,
@@ -171,7 +164,6 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1})
service.store.SetFinalizedCheckptAndPayloadHash(finalized, [32]byte{})
service.notifyEngineIfChangedHead(ctx, r1)
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
@@ -179,12 +171,9 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
// Block in DB
b = util.NewBeaconBlock()
b.Block.Slot = 3
wsb, err = wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
r1, err = b.Block.HashTreeRoot()
require.NoError(t, err)
finalized = &ethpb.Checkpoint{Root: r1[:], Epoch: 0}
st, _ = util.DeterministicGenesisState(t, 1)
service.head = &head{
slot: 1,
@@ -193,7 +182,6 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1})
service.store.SetFinalizedCheckptAndPayloadHash(finalized, [32]byte{})
service.notifyEngineIfChangedHead(ctx, r1)
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
@@ -211,38 +199,56 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
ctx := context.Background()
opts := testServiceOptsWithDB(t)
opts = append(opts, WithAttestationPool(attestations.NewPool()), WithStateNotifier(&mockBeaconNode{}))
fcs := protoarray.New()
opts = append(opts,
WithAttestationPool(attestations.NewPool()),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fcs),
)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.genesisTime = prysmTime.Now().Add(-1 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
service.genesisTime = prysmTime.Now().Add(-2 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(prysmTime.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
atts, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(atts[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
// Generate a new block for attesters to attest
blk, err := util.GenerateFullBlock(copied, pks, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: tRoot[:]}))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, 1, 1)
tRoot, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, tRoot))
copied, err = service.cfg.StateGen.StateByRoot(ctx, tRoot)
require.NoError(t, err)
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
// Generate attestations for this block in slot 1
atts, err := util.NewAttestationUtil().GenerateAttestations(copied, pks, 1, 1, false)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
b := util.NewBeaconBlock()
wb, err := wrapper.WrappedSignedBeaconBlock(b)
// Verify the target is in forkchoice
require.Equal(t, true, fcs.HasNode(bytesutil.ToBytes32(atts[0].Data.BeaconBlockRoot)))
// Insert a new block to forkchoice
ojc := &ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}
b, err := util.GenerateFullBlock(genesisState, pks, util.DefaultBlockGenConfig(), 2)
require.NoError(t, err)
b.Block.ParentRoot = service.originBlockRoot[:]
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wb))
state, blkRoot, err = prepareForkchoiceState(ctx, wb.Block().Slot(), r, bytesutil.ToBytes32(wb.Block().ParentRoot()), [32]byte{}, 0, 0)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.Equal(t, 3, fcs.NodeCount())
service.head.root = r // Old head
require.Equal(t, 1, len(service.cfg.AttPool.ForkchoiceAttestations()))
require.NoError(t, err, service.UpdateHead(ctx))
require.Equal(t, tRoot, service.head.root) // Validate head is the new one
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations())) // Validate att pool is empty
require.Equal(t, tRoot, service.head.root) // Validate head is the new one
}

@@ -59,23 +59,21 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.SignedBeaco
}
// Reports on block and fork choice metrics.
justified, err := s.store.JustifiedCheckpt()
if err != nil {
return err
}
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errNilFinalizedInStore
}
finalized := s.FinalizedCheckpt()
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
// Log block sync status.
justified := s.CurrentJustifiedCheckpt()
if err := logBlockSyncStatus(blockCopy.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix())); err != nil {
return err
log.WithError(err).Error("Unable to log block sync status")
}
// Log payload data
if err := logPayload(blockCopy.Block()); err != nil {
log.WithError(err).Error("Unable to log debug block payload data")
}
// Log state transition data.
if err := logStateTransitionData(blockCopy.Block()); err != nil {
return err
log.WithError(err).Error("Unable to log state transition data")
}
return nil
@@ -109,20 +107,14 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Sig
})
// Reports on blockCopy and fork choice metrics.
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
finalized := s.FinalizedCheckpt()
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
}
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
finalized := s.FinalizedCheckpt()
if finalized == nil {
return errNilFinalizedInStore
}
@@ -175,10 +167,7 @@ func (s *Service) checkSaveHotStateDB(ctx context.Context) error {
currentEpoch := slots.ToEpoch(s.CurrentSlot())
// Prevent `sinceFinality` from underflowing.
var sinceFinality types.Epoch
finalized, err := s.store.FinalizedCheckpt()
if err != nil {
return err
}
finalized := s.FinalizedCheckpt()
if finalized == nil {
return errNilFinalizedInStore
}
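A small sketch of the underflow guard referenced in the comment above, grounded only in that comment: epochs are unsigned, so the epochs-since-finality difference is computed only once the current epoch has passed the finalized epoch (the real code uses typed epochs rather than raw uint64).
package main
import "fmt"
// epochsSinceFinality avoids unsigned underflow by returning 0 whenever the node's current
// epoch has not yet passed the finalized epoch.
func epochsSinceFinality(currentEpoch, finalizedEpoch uint64) uint64 {
	if currentEpoch <= finalizedEpoch {
		return 0
	}
	return currentEpoch - finalizedEpoch
}
func main() {
	fmt.Println(epochsSinceFinality(10, 8)) // 2
	fmt.Println(epochsSinceFinality(8, 10)) // 0, instead of wrapping around
}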

@@ -127,7 +127,7 @@ func TestService_ReceiveBlock(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0)),
WithForkChoiceStore(protoarray.New()),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
@@ -137,12 +137,6 @@ func TestService_ReceiveBlock(t *testing.T) {
s, err := NewService(ctx, opts...)
require.NoError(t, err)
require.NoError(t, s.saveGenesisData(ctx, genesis))
gBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
h := [32]byte{'a'}
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, h)
root, err := tt.args.block.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(tt.args.block)
@@ -168,7 +162,7 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0)),
WithForkChoiceStore(protoarray.New()),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
@@ -178,11 +172,6 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
s, err := NewService(ctx, opts...)
require.NoError(t, err)
require.NoError(t, s.saveGenesisData(ctx, genesis))
gBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'a'})
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wg := sync.WaitGroup{}
@@ -248,7 +237,7 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0)),
WithForkChoiceStore(protoarray.New()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
}
@@ -256,12 +245,6 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
require.NoError(t, err)
err = s.saveGenesisData(ctx, genesis)
require.NoError(t, err)
gBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Root: gRoot[:]}, [32]byte{'a'})
root, err := tt.args.block.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(tt.args.block)
@@ -290,17 +273,15 @@ func TestService_HasBlock(t *testing.T) {
}
wsb, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
s.saveInitSyncBlock(r, wsb)
require.NoError(t, s.saveInitSyncBlock(context.Background(), r, wsb))
if !s.HasBlock(context.Background(), r) {
t.Error("Should have block")
}
b := util.NewBeaconBlock()
b.Block.Slot = 1
wsb, err = wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
util.SaveBlock(t, context.Background(), s.cfg.BeaconDB, b)
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveBlock(context.Background(), wsb))
require.Equal(t, true, s.HasBlock(context.Background(), r))
}
@@ -311,7 +292,6 @@ func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
require.NoError(t, err)
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{}, [32]byte{})
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
assert.LogsContain(t, hook, "Entering mode to save hot states in DB")
@@ -322,7 +302,8 @@ func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
opts := testServiceOptsWithDB(t)
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{}, [32]byte{})
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
s.genesisTime = time.Now()
@@ -335,7 +316,6 @@ func TestCheckSaveHotStateDB_Overflow(t *testing.T) {
opts := testServiceOptsWithDB(t)
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{}, [32]byte{})
s.genesisTime = time.Now()
require.NoError(t, s.checkSaveHotStateDB(context.Background()))

@@ -12,7 +12,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
@@ -23,6 +22,7 @@ import (
f "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/slashings"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
@@ -64,7 +64,6 @@ type Service struct {
initSyncBlocksLock sync.RWMutex
justifiedBalances *stateBalanceCache
wsVerifier *WeakSubjectivityVerifier
store *store.Store
processAttestationsLock sync.Mutex
}
@@ -102,7 +101,6 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
checkpointStateCache: cache.NewCheckpointStateCache(),
initSyncBlocks: make(map[[32]byte]interfaces.SignedBeaconBlock),
cfg: &config{},
store: &store.Store{},
}
for _, opt := range opts {
if err := opt(srv); err != nil {
@@ -203,16 +201,25 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
if finalized == nil {
return errNilFinalizedCheckpoint
}
s.store = store.New(justified, finalized)
var forkChoicer f.ForkChoicer
fRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
if features.Get().EnableForkChoiceDoublyLinkedTree {
forkChoicer = doublylinkedtree.New(justified.Epoch, finalized.Epoch)
forkChoicer = doublylinkedtree.New()
} else {
forkChoicer = protoarray.New(justified.Epoch, finalized.Epoch)
forkChoicer = protoarray.New()
}
s.cfg.ForkChoiceStore = forkChoicer
if err := forkChoicer.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
}
if err := forkChoicer.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
Root: bytesutil.ToBytes32(finalized.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's finalized checkpoint")
}
forkChoicer.SetGenesisTime(uint64(s.genesisTime.Unix()))
st, err := s.cfg.StateGen.StateByRoot(s.ctx, fRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint state")
@@ -461,16 +468,15 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
s.originBlockRoot = genesisBlkRoot
s.cfg.StateGen.SaveFinalizedState(0 /*slot*/, genesisBlkRoot, genesisState)
// Finalized checkpoint at genesis is a zero hash.
genesisCheckpoint := genesisState.FinalizedCheckpoint()
s.store = store.New(genesisCheckpoint, genesisCheckpoint)
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, genesisState, genesisBlkRoot); err != nil {
log.Fatalf("Could not process genesis block for fork choice: %v", err)
}
s.cfg.ForkChoiceStore.SetOriginRoot(genesisBlkRoot)
// Set genesis as fully validated
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, genesisBlkRoot); err != nil {
log.Fatalf("Could not set optimistic status of genesis block to false: %v", err)
return errors.Wrap(err, "Could not set optimistic status of genesis block to false")
}
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(s.genesisTime.Unix()))
s.setHead(genesisBlkRoot, genesisBlk, genesisState)
return nil
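A self-contained sketch of the startup wiring shown in the hunks above: construct the forkchoice store with no checkpoint arguments, then seed its justified and finalized checkpoints and the genesis time explicitly. The interface and fake store below are local stand-ins listing only the calls that appear in the diff, not the real forkchoice package.
package main
import "fmt"
type checkpoint struct {
	Epoch uint64
	Root  [32]byte
}
// forkChoicer lists only the startup calls used above.
type forkChoicer interface {
	UpdateJustifiedCheckpoint(*checkpoint) error
	UpdateFinalizedCheckpoint(*checkpoint) error
	SetGenesisTime(uint64)
}
// bootstrap seeds a freshly constructed store: checkpoints first, then the genesis time.
func bootstrap(f forkChoicer, justified, finalized checkpoint, genesis uint64) error {
	if err := f.UpdateJustifiedCheckpoint(&justified); err != nil {
		return err
	}
	if err := f.UpdateFinalizedCheckpoint(&finalized); err != nil {
		return err
	}
	f.SetGenesisTime(genesis)
	return nil
}
// fakeStore is a no-op implementation used only to exercise the sketch.
type fakeStore struct{ justified, finalized checkpoint }
func (s *fakeStore) UpdateJustifiedCheckpoint(c *checkpoint) error { s.justified = *c; return nil }
func (s *fakeStore) UpdateFinalizedCheckpoint(c *checkpoint) error { s.finalized = *c; return nil }
func (s *fakeStore) SetGenesisTime(uint64)                         {}
func main() {
	s := &fakeStore{}
	if err := bootstrap(s, checkpoint{Epoch: 3}, checkpoint{Epoch: 2}, 1606824023); err != nil {
		fmt.Println(err)
	}
	fmt.Println(s.justified.Epoch, s.finalized.Epoch) // 3 2
}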

@@ -9,10 +9,9 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
@@ -130,7 +129,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(protoarray.New(0, 0)),
WithForkChoiceStore(protoarray.New()),
WithAttestationService(attService),
WithStateGen(stateGen),
}
@@ -152,9 +151,7 @@ func TestChainStartStop_Initialized(t *testing.T) {
genesisBlk := util.NewBeaconBlock()
blkRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(genesisBlk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, genesisBlk)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(1))
@@ -189,9 +186,7 @@ func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
genesisBlk := util.NewBeaconBlock()
blkRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(genesisBlk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
wsb := util.SaveBlock(t, ctx, beaconDB, genesisBlk)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
@@ -222,9 +217,9 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
count := uint64(10)
deposits, _, err := util.DeterministicDepositsAndKeys(count)
require.NoError(t, err)
trie, _, err := util.DepositTrieFromDeposits(deposits)
dt, _, err := util.DepositTrieFromDeposits(deposits)
require.NoError(t, err)
hashTreeRoot, err := trie.HashTreeRoot()
hashTreeRoot, err := dt.HashTreeRoot()
require.NoError(t, err)
genState, err := transition.EmptyGenesisState()
require.NoError(t, err)
@@ -234,7 +229,7 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
BlockHash: make([]byte, 32),
})
require.NoError(t, err)
genState, err = b.ProcessPreGenesisDeposits(ctx, genState, deposits)
genState, err = blocks.ProcessPreGenesisDeposits(ctx, genState, deposits)
require.NoError(t, err)
_, err = bc.initializeBeaconChain(ctx, time.Unix(0, 0), genState, &ethpb.Eth1Data{DepositRoot: hashTreeRoot[:], BlockHash: make([]byte, 32)})
@@ -263,9 +258,7 @@ func TestChainService_CorrectGenesisRoots(t *testing.T) {
genesisBlk := util.NewBeaconBlock()
blkRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(genesisBlk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, genesisBlk)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(0))
@@ -277,10 +270,9 @@ func TestChainService_CorrectGenesisRoots(t *testing.T) {
// Test the start function.
chainService.Start()
cp, err := chainService.store.FinalizedCheckpt()
require.NoError(t, err)
cp := chainService.FinalizedCheckpt()
require.DeepEqual(t, blkRoot[:], cp.Root, "Finalize Checkpoint root is incorrect")
cp, err = chainService.store.JustifiedCheckpt()
cp = chainService.CurrentJustifiedCheckpt()
require.NoError(t, err)
require.DeepEqual(t, params.BeaconConfig().ZeroHash[:], cp.Root, "Justified Checkpoint root is incorrect")
@@ -296,9 +288,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
wsb, err := wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, genesis)
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := util.NewBeaconBlock()
@@ -312,9 +302,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
wsb, err = wrapper.WrappedSignedBeaconBlock(headBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
@@ -346,9 +334,7 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
wsb, err := wrapper.WrappedSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, genesis)
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := util.NewBeaconBlock()
@@ -362,9 +348,7 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
wsb, err = wrapper.WrappedSignedBeaconBlock(headBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, headBlock)
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
ss := &ethpb.StateSummary{
@@ -403,18 +387,14 @@ func TestChainService_InitializeChainInfo_HeadSync(t *testing.T) {
genesisRoot, err := genesisBlock.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
wsb, err := wrapper.WrappedSignedBeaconBlock(genesisBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, genesisBlock)
finalizedBlock := util.NewBeaconBlock()
finalizedBlock.Block.Slot = finalizedSlot
finalizedBlock.Block.ParentRoot = genesisRoot[:]
finalizedRoot, err := finalizedBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(finalizedBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, finalizedBlock)
// Set head slot close to the finalization point, no head sync is triggered.
headBlock := util.NewBeaconBlock()
@@ -422,9 +402,7 @@ func TestChainService_InitializeChainInfo_HeadSync(t *testing.T) {
headBlock.Block.ParentRoot = finalizedRoot[:]
headRoot, err := headBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(headBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, headBlock)
headState, err := util.NewBeaconState()
require.NoError(t, err)
@@ -459,9 +437,7 @@ func TestChainService_InitializeChainInfo_HeadSync(t *testing.T) {
headBlock.Block.ParentRoot = finalizedRoot[:]
headRoot, err = headBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(headBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, headRoot))
@@ -481,7 +457,7 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB), ForkChoiceStore: protoarray.New()},
}
blk := util.NewBeaconBlock()
blk.Block.Slot = 1
@@ -505,10 +481,8 @@ func TestHasBlock_ForkChoiceAndDB_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: protoarray.New(0, 0), BeaconDB: beaconDB},
store: &store.Store{},
cfg: &config{ForkChoiceStore: protoarray.New(), BeaconDB: beaconDB},
}
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}, [32]byte{})
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -526,10 +500,8 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0), BeaconDB: beaconDB},
store: &store.Store{},
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
}
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}, [32]byte{})
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -552,12 +524,12 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
cancel: cancel,
initSyncBlocks: make(map[[32]byte]interfaces.SignedBeaconBlock),
}
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
bb := util.NewBeaconBlock()
r, err := bb.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
wsb, err := wrapper.WrappedSignedBeaconBlock(bb)
require.NoError(t, err)
s.saveInitSyncBlock(r, wsb)
require.NoError(t, s.saveInitSyncBlock(ctx, r, wsb))
require.NoError(t, s.Stop())
require.Equal(t, true, s.cfg.BeaconDB.HasBlock(ctx, r))
}
@@ -599,10 +571,8 @@ func BenchmarkHasBlockForkChoiceStore_ProtoArray(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: protoarray.New(0, 0), BeaconDB: beaconDB},
store: &store.Store{},
cfg: &config{ForkChoiceStore: protoarray.New(), BeaconDB: beaconDB},
}
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}, [32]byte{})
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
@@ -622,10 +592,8 @@ func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0), BeaconDB: beaconDB},
store: &store.Store{},
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
}
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}, [32]byte{})
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)

@@ -117,7 +117,7 @@ func TestStateBalanceCache(t *testing.T) {
balances []uint64
name string
}
sentinelCacheMiss := errors.New("Cache missed, as expected!")
sentinelCacheMiss := errors.New("cache missed, as expected")
sentinelBalances := []uint64{1, 2, 3, 4, 5}
halfExpiredValidators, halfExpiredBalances := testHalfExpiredValidators()
halfQueuedValidators, halfQueuedBalances := testHalfQueuedValidators()

@@ -1,30 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"new.go",
"setter_getter.go",
"type.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"new_test.go",
"setter_getter_test.go",
],
embed = [":go_default_library"],
deps = [
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
],
)

@@ -1,4 +0,0 @@
// Package store implements the store object defined in the phase0 fork choice spec.
// It serves as a helpful middleware layer between the blockchain pkg and the fork choice protoarray pkg.
// All the getters and setters are safe for concurrent use.
package store
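A condensed, self-contained illustration of the locking pattern this package documents: checkpoint getters and setters guarded by an embedded RWMutex. Field and method names are simplified to a single finalized checkpoint; the full Store applies the same pattern to its justified and finalized fields.
package main
import (
	"errors"
	"fmt"
	"sync"
)
type checkpoint struct {
	Epoch uint64
	Root  []byte
}
var errNilCheckpoint = errors.New("nil checkpoint")
// miniStore keeps one checkpoint behind a read-write lock.
type miniStore struct {
	sync.RWMutex
	finalized *checkpoint
}
func (s *miniStore) FinalizedCheckpt() (*checkpoint, error) {
	s.RLock()
	defer s.RUnlock()
	if s.finalized == nil {
		return nil, errNilCheckpoint
	}
	return s.finalized, nil
}
func (s *miniStore) SetFinalizedCheckpt(cp *checkpoint) {
	s.Lock()
	defer s.Unlock()
	s.finalized = cp
}
func main() {
	s := &miniStore{}
	s.SetFinalizedCheckpt(&checkpoint{Epoch: 7, Root: []byte("root")})
	cp, err := s.FinalizedCheckpt()
	fmt.Println(cp.Epoch, err) // 7 <nil>
}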

@@ -1,16 +0,0 @@
package store
import (
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// New creates a store object.
func New(justifiedCheckpt *ethpb.Checkpoint, finalizedCheckpt *ethpb.Checkpoint) *Store {
return &Store{
justifiedCheckpt: justifiedCheckpt,
prevJustifiedCheckpt: justifiedCheckpt,
bestJustifiedCheckpt: justifiedCheckpt,
finalizedCheckpt: finalizedCheckpt,
prevFinalizedCheckpt: finalizedCheckpt,
}
}

View File

@@ -1,35 +0,0 @@
package store
import (
"testing"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestNew(t *testing.T) {
j := &ethpb.Checkpoint{
Epoch: 0,
Root: []byte("hi"),
}
f := &ethpb.Checkpoint{
Epoch: 0,
Root: []byte("hello"),
}
s := New(j, f)
cp, err := s.JustifiedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, j, cp)
cp, err = s.BestJustifiedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, j, cp)
cp, err = s.PrevJustifiedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, j, cp)
cp, err = s.FinalizedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, f, cp)
cp, err = s.PrevFinalizedCheckpt()
require.NoError(t, err)
require.DeepSSZEqual(t, f, cp)
}

View File

@@ -1,111 +0,0 @@
package store
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
var (
ErrNilCheckpoint = errors.New("nil checkpoint")
)
// PrevJustifiedCheckpt returns the previous justified checkpoint in the Store.
func (s *Store) PrevJustifiedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
if s.prevJustifiedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.prevJustifiedCheckpt, nil
}
// BestJustifiedCheckpt returns the best justified checkpoint in the Store.
func (s *Store) BestJustifiedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
if s.bestJustifiedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.bestJustifiedCheckpt, nil
}
// JustifiedCheckpt returns the justified checkpoint in the Store.
func (s *Store) JustifiedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
if s.justifiedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.justifiedCheckpt, nil
}
// JustifiedPayloadBlockHash returns the justified payload block hash reflecting the justified checkpoint.
func (s *Store) JustifiedPayloadBlockHash() [32]byte {
s.RLock()
defer s.RUnlock()
return s.justifiedPayloadBlockHash
}
// PrevFinalizedCheckpt returns the previous finalized checkpoint in the Store.
func (s *Store) PrevFinalizedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
if s.prevFinalizedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.prevFinalizedCheckpt, nil
}
// FinalizedCheckpt returns the finalized checkpoint in the Store.
func (s *Store) FinalizedCheckpt() (*ethpb.Checkpoint, error) {
s.RLock()
defer s.RUnlock()
if s.finalizedCheckpt == nil {
return nil, ErrNilCheckpoint
}
return s.finalizedCheckpt, nil
}
// FinalizedPayloadBlockHash returns the finalized payload block hash reflecting the finalized checkpoint.
func (s *Store) FinalizedPayloadBlockHash() [32]byte {
s.RLock()
defer s.RUnlock()
return s.finalizedPayloadBlockHash
}
// SetPrevJustifiedCheckpt sets the previous justified checkpoint in the Store.
func (s *Store) SetPrevJustifiedCheckpt(cp *ethpb.Checkpoint) {
s.Lock()
defer s.Unlock()
s.prevJustifiedCheckpt = cp
}
// SetBestJustifiedCheckpt sets the best justified checkpoint in the Store.
func (s *Store) SetBestJustifiedCheckpt(cp *ethpb.Checkpoint) {
s.Lock()
defer s.Unlock()
s.bestJustifiedCheckpt = cp
}
// SetJustifiedCheckptAndPayloadHash sets the justified checkpoint and blockhash in the Store.
func (s *Store) SetJustifiedCheckptAndPayloadHash(cp *ethpb.Checkpoint, h [32]byte) {
s.Lock()
defer s.Unlock()
s.justifiedCheckpt = cp
s.justifiedPayloadBlockHash = h
}
// SetFinalizedCheckptAndPayloadHash sets the finalized checkpoint and blockhash in the Store.
func (s *Store) SetFinalizedCheckptAndPayloadHash(cp *ethpb.Checkpoint, h [32]byte) {
s.Lock()
defer s.Unlock()
s.finalizedCheckpt = cp
s.finalizedPayloadBlockHash = h
}
// SetPrevFinalizedCheckpt sets the previous finalized checkpoint in the Store.
func (s *Store) SetPrevFinalizedCheckpt(cp *ethpb.Checkpoint) {
s.Lock()
defer s.Unlock()
s.prevFinalizedCheckpt = cp
}

View File

@@ -1,72 +0,0 @@
package store
import (
"testing"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
)
func Test_store_PrevJustifiedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
_, err := s.PrevJustifiedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
s.SetPrevJustifiedCheckpt(cp)
got, err := s.PrevJustifiedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
}
func Test_store_BestJustifiedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
_, err := s.BestJustifiedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
s.SetBestJustifiedCheckpt(cp)
got, err := s.BestJustifiedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
}
func Test_store_JustifiedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
_, err := s.JustifiedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
h := [32]byte{'b'}
s.SetJustifiedCheckptAndPayloadHash(cp, h)
got, err := s.JustifiedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
require.Equal(t, h, s.JustifiedPayloadBlockHash())
}
func Test_store_FinalizedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
_, err := s.FinalizedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
h := [32]byte{'b'}
s.SetFinalizedCheckptAndPayloadHash(cp, h)
got, err := s.FinalizedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
require.Equal(t, h, s.FinalizedPayloadBlockHash())
}
func Test_store_PrevFinalizedCheckpt(t *testing.T) {
s := &Store{}
var cp *ethpb.Checkpoint
_, err := s.PrevFinalizedCheckpt()
require.ErrorIs(t, ErrNilCheckpoint, err)
cp = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'a'}}
s.SetPrevFinalizedCheckpt(cp)
got, err := s.PrevFinalizedCheckpt()
require.NoError(t, err)
require.Equal(t, cp, got)
}

View File

@@ -1,30 +0,0 @@
package store
import (
"sync"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// Store is defined in the fork choice consensus spec for tracking current time and various versions of checkpoints.
//
// Spec code:
// class Store(object):
// time: uint64
// genesis_time: uint64
// justified_checkpoint: Checkpoint
// finalized_checkpoint: Checkpoint
// best_justified_checkpoint: Checkpoint
// proposerBoostRoot: Root
type Store struct {
justifiedCheckpt *ethpb.Checkpoint
justifiedPayloadBlockHash [32]byte
finalizedCheckpt *ethpb.Checkpoint
finalizedPayloadBlockHash [32]byte
bestJustifiedCheckpt *ethpb.Checkpoint
sync.RWMutex
// These are not part of the consensus spec, but we use them when serving gRPC API requests.
// TODO(10094): Consider removing in v3.
prevFinalizedCheckpt *ethpb.Checkpoint
prevJustifiedCheckpt *ethpb.Checkpoint
}

View File

@@ -62,6 +62,8 @@ type ChainService struct {
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
OptimisticCheckRootReceived [32]byte
FinalizedRoots map[[32]byte]bool
}
// ForkChoicer mocks the same method in the chain service
@@ -282,18 +284,18 @@ func (s *ChainService) CurrentFork() *ethpb.Fork {
}
// FinalizedCheckpt mocks FinalizedCheckpt method in chain service.
func (s *ChainService) FinalizedCheckpt() (*ethpb.Checkpoint, error) {
return s.FinalizedCheckPoint, nil
func (s *ChainService) FinalizedCheckpt() *ethpb.Checkpoint {
return s.FinalizedCheckPoint
}
// CurrentJustifiedCheckpt mocks CurrentJustifiedCheckpt method in chain service.
func (s *ChainService) CurrentJustifiedCheckpt() (*ethpb.Checkpoint, error) {
return s.CurrentJustifiedCheckPoint, nil
func (s *ChainService) CurrentJustifiedCheckpt() *ethpb.Checkpoint {
return s.CurrentJustifiedCheckPoint
}
// PreviousJustifiedCheckpt mocks PreviousJustifiedCheckpt method in chain service.
func (s *ChainService) PreviousJustifiedCheckpt() (*ethpb.Checkpoint, error) {
return s.PreviousJustifiedCheckPoint, nil
func (s *ChainService) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
return s.PreviousJustifiedCheckPoint
}
// ReceiveAttestation mocks ReceiveAttestation method in chain service.
@@ -447,7 +449,8 @@ func (s *ChainService) IsOptimistic(_ context.Context) (bool, error) {
}
// IsOptimisticForRoot mocks the same method in the chain service.
func (s *ChainService) IsOptimisticForRoot(_ context.Context, _ [32]byte) (bool, error) {
func (s *ChainService) IsOptimisticForRoot(_ context.Context, root [32]byte) (bool, error) {
s.OptimisticCheckRootReceived = root
return s.Optimistic, nil
}
@@ -456,3 +459,8 @@ func (s *ChainService) UpdateHead(_ context.Context) error { return nil }
// ReceiveAttesterSlashing mocks the same method in the chain service.
func (s *ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterSlashing) {}
// IsFinalized mocks the same method in the chain service.
func (s *ChainService) IsFinalized(_ context.Context, blockRoot [32]byte) bool {
return s.FinalizedRoots[blockRoot]
}

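The new mock fields make finality and optimistic-status checks controllable from tests. Below is a minimal sketch of how a test might use them; the import path assumes the mock lives at Prysm's usual beacon-chain/blockchain/testing location, and the test name is illustrative.

```go
package blockchain_test

import (
	"context"
	"testing"

	mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
)

func TestMockFinalityAndOptimisticTracking(t *testing.T) {
	root := [32]byte{'a'}
	chain := &mock.ChainService{
		// FinalizedRoots backs the new IsFinalized mock.
		FinalizedRoots: map[[32]byte]bool{root: true},
	}
	if !chain.IsFinalized(context.Background(), root) {
		t.Fatal("expected root to be reported as finalized")
	}
	// IsOptimisticForRoot now records the root it was queried with.
	if _, err := chain.IsOptimisticForRoot(context.Background(), root); err != nil {
		t.Fatal(err)
	}
	if chain.OptimisticCheckRootReceived != root {
		t.Fatal("expected the queried root to be recorded")
	}
}
```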
View File

@@ -30,8 +30,8 @@ type WeakSubjectivityVerifier struct {
// NewWeakSubjectivityVerifier validates a checkpoint, and if valid, uses it to initialize a weak subjectivity verifier.
func NewWeakSubjectivityVerifier(wsc *ethpb.Checkpoint, db weakSubjectivityDB) (*WeakSubjectivityVerifier, error) {
if wsc == nil || len(wsc.Root) == 0 || wsc.Epoch == 0 {
log.Info("No checkpoint for syncing provided, node will begin syncing from genesis. Checkpoint Sync is an optional feature that allows your node to sync from a more recent checkpoint, " +
"which enhances the security of your local beacon node and the broader network. See https://docs.prylabs.network/docs/next/prysm-usage/checkpoint-sync/ to learn how to configure Checkpoint Sync.")
log.Info("--weak-subjectivity-checkpoint not provided. Prysm recommends providing a weak subjectivity checkpoint" +
"for nodes synced from genesis, or manual verification of block and state roots for checkpoint sync nodes.")
return &WeakSubjectivityVerifier{
enabled: false,
}, nil

View File

@@ -5,11 +5,11 @@ import (
"testing"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
@@ -22,9 +22,7 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Slot = 1792480
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
util.SaveBlock(t, context.Background(), beaconDB, b)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -74,14 +72,13 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
wv, err := NewWeakSubjectivityVerifier(tt.checkpt, beaconDB)
require.Equal(t, !tt.disabled, wv.enabled)
require.NoError(t, err)
fcs := protoarray.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, WeakSubjectivityCheckpt: tt.checkpt},
store: &store.Store{},
cfg: &config{BeaconDB: beaconDB, WeakSubjectivityCheckpt: tt.checkpt, ForkChoiceStore: fcs},
wsVerifier: wv,
}
s.store.SetFinalizedCheckptAndPayloadHash(&ethpb.Checkpoint{Epoch: tt.finalizedEpoch}, [32]byte{})
cp, err := s.store.FinalizedCheckpt()
require.NoError(t, err)
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: tt.finalizedEpoch}))
cp := s.ForkChoicer().FinalizedCheckpoint()
err = s.wsVerifier.VerifyWeakSubjectivity(context.Background(), cp.Epoch)
if tt.wantErr == nil {
require.NoError(t, err)

View File

@@ -3,6 +3,8 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"error.go",
"metric.go",
"option.go",
"service.go",
],
@@ -10,13 +12,20 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api/client/builder:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/db:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network:go_default_library",
"//network/authorization:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -0,0 +1,7 @@
package builder
import "github.com/pkg/errors"
var (
ErrNotRunning = errors.New("builder is not running")
)

View File

@@ -0,0 +1,37 @@
package builder
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
submitBlindedBlockLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "submit_blinded_block_latency_milliseconds",
Help: "Captures RPC latency for submitting blinded block in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
getHeaderLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_header_latency_milliseconds",
Help: "Captures RPC latency for get header in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
getStatusLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_status_latency_milliseconds",
Help: "Captures RPC latency for get status in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
registerValidatorLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "register_validator_latency_milliseconds",
Help: "Captures RPC latency for register validator in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
)

View File

@@ -1,6 +1,8 @@
package builder
import (
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/network"
"github.com/prysmaticlabs/prysm/network/authorization"
@@ -26,6 +28,22 @@ func WithBuilderEndpoints(endpoint string) Option {
}
}
// WithHeadFetcher gets the head info from chain service.
func WithHeadFetcher(svc *blockchain.Service) Option {
return func(s *Service) error {
s.cfg.headFetcher = svc
return nil
}
}
// WithDatabase for head access.
func WithDatabase(beaconDB db.HeadAccessDatabase) Option {
return func(s *Service) error {
s.cfg.beaconDB = beaconDB
return nil
}
}
func covertEndPoint(ep string) network.Endpoint {
return network.Endpoint{
Url: ep,

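Taken together, these options let the node wire the builder service up with a head fetcher and database in one place. A minimal sketch of the composition follows; the package declaration and argument names are assumptions, while the option constructors and NewService are the ones added in this change.

```go
package node // hypothetical call site; the real wiring lives in the beacon node setup

import (
	"context"

	"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
	"github.com/prysmaticlabs/prysm/beacon-chain/builder"
	"github.com/prysmaticlabs/prysm/beacon-chain/db"
)

// registerBuilder composes the new options; endpoint, chain and beaconDB are
// assumed inputs from the surrounding node configuration.
func registerBuilder(ctx context.Context, endpoint string, chain *blockchain.Service, beaconDB db.HeadAccessDatabase) (*builder.Service, error) {
	return builder.NewService(ctx,
		builder.WithBuilderEndpoints(endpoint),
		builder.WithHeadFetcher(chain),
		builder.WithDatabase(beaconDB),
	)
}
```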
View File

@@ -2,13 +2,19 @@ package builder
import (
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/api/client/builder"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/network"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// BlockBuilder defines the interface for interacting with the block builder
@@ -16,23 +22,33 @@ type BlockBuilder interface {
SubmitBlindedBlock(ctx context.Context, block *ethpb.SignedBlindedBeaconBlockBellatrix) (*v1.ExecutionPayload, error)
GetHeader(ctx context.Context, slot types.Slot, parentHash [32]byte, pubKey [48]byte) (*ethpb.SignedBuilderBid, error)
Status() error
RegisterValidator(ctx context.Context, reg *ethpb.SignedValidatorRegistrationV1) error
RegisterValidator(ctx context.Context, reg []*ethpb.SignedValidatorRegistrationV1) error
Configured() bool
}
// config defines a config struct for dependencies into the service.
type config struct {
builderEndpoint network.Endpoint
beaconDB db.HeadAccessDatabase
headFetcher blockchain.HeadFetcher
}
// Service defines a service that provides a client for interacting with the beacon chain and MEV relay network.
type Service struct {
cfg *config
c *builder.Client
cfg *config
c *builder.Client
ctx context.Context
cancel context.CancelFunc
}
// NewService instantiates a new service.
func NewService(ctx context.Context, opts ...Option) (*Service, error) {
s := &Service{}
ctx, cancel := context.WithCancel(ctx)
s := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{},
}
for _, opt := range opts {
if err := opt(s); err != nil {
return nil, err
@@ -44,6 +60,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
return nil, err
}
s.c = c
log.WithField("endpoint", c.NodeURL()).Info("Builder has been configured")
}
return s, nil
}
@@ -56,22 +73,81 @@ func (*Service) Stop() error {
return nil
}
// SubmitBlindedBlock is currently a stub.
func (*Service) SubmitBlindedBlock(context.Context, *ethpb.SignedBlindedBeaconBlockBellatrix) (*v1.ExecutionPayload, error) {
return nil, errors.New("not implemented")
// SubmitBlindedBlock submits a blinded block to the builder relay network.
func (s *Service) SubmitBlindedBlock(ctx context.Context, b *ethpb.SignedBlindedBeaconBlockBellatrix) (*v1.ExecutionPayload, error) {
ctx, span := trace.StartSpan(ctx, "builder.SubmitBlindedBlock")
defer span.End()
start := time.Now()
defer func() {
submitBlindedBlockLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
return s.c.SubmitBlindedBlock(ctx, b)
}
// GetHeader is currently a stub.
func (*Service) GetHeader(context.Context, types.Slot, [32]byte, [48]byte) (*ethpb.SignedBuilderBid, error) {
return nil, errors.New("not implemented")
// GetHeader retrieves the header for a given slot and parent hash from the builder relay network.
func (s *Service) GetHeader(ctx context.Context, slot types.Slot, parentHash [32]byte, pubKey [48]byte) (*ethpb.SignedBuilderBid, error) {
ctx, span := trace.StartSpan(ctx, "builder.GetHeader")
defer span.End()
start := time.Now()
defer func() {
getHeaderLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
return s.c.GetHeader(ctx, slot, parentHash, pubKey)
}
// Status is currently a stub.
func (*Service) Status() error {
return errors.New("not implemented")
// Status retrieves the status of the builder relay network.
func (s *Service) Status() error {
ctx, span := trace.StartSpan(context.Background(), "builder.Status")
defer span.End()
start := time.Now()
defer func() {
getStatusLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
// Return early if builder isn't initialized in service.
if s.c == nil {
return nil
}
return s.c.Status(ctx)
}
// RegisterValidator is currently a stub.
func (*Service) RegisterValidator(context.Context, *ethpb.SignedValidatorRegistrationV1) error {
return errors.New("not implemented")
// RegisterValidator registers a validator with the builder relay network.
// It also saves the registration object to the DB.
func (s *Service) RegisterValidator(ctx context.Context, reg []*ethpb.SignedValidatorRegistrationV1) error {
ctx, span := trace.StartSpan(ctx, "builder.RegisterValidator")
defer span.End()
start := time.Now()
defer func() {
registerValidatorLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
idxs := make([]types.ValidatorIndex, 0)
msgs := make([]*ethpb.ValidatorRegistrationV1, 0)
valid := make([]*ethpb.SignedValidatorRegistrationV1, 0)
for i := 0; i < len(reg); i++ {
r := reg[i]
nx, exists := s.cfg.headFetcher.HeadPublicKeyToValidatorIndex(bytesutil.ToBytes48(r.Message.Pubkey))
if !exists {
// We want to allow validators to register keys that have not yet been added to the beacon state
// validator list, so tolerate keys that do not appear valid by skipping past them.
log.Warnf("Skipping validator registration for pubkey=%#x - not in current validator set.", r.Message.Pubkey)
continue
}
idxs = append(idxs, nx)
msgs = append(msgs, r.Message)
valid = append(valid, r)
}
if err := s.c.RegisterValidator(ctx, valid); err != nil {
return errors.Wrap(err, "could not register validator(s)")
}
return s.cfg.beaconDB.SaveRegistrationsByValidatorIDs(ctx, idxs, msgs)
}
// Configured returns true if the user has input a builder URL.
func (s *Service) Configured() bool {
return s.cfg.builderEndpoint.Url != ""
}

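Because Status tolerates an unconfigured client and Configured simply checks for a non-empty endpoint, callers are expected to guard on Configured before forwarding registrations. Here is a hedged sketch of such a call site; the function and its arguments are illustrative and not part of this change.

```go
package rpc // hypothetical caller

import (
	"context"

	"github.com/prysmaticlabs/prysm/beacon-chain/builder"
	ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)

// forwardRegistrations relays a batch of signed registrations to the relay
// network only when a builder endpoint has been configured.
func forwardRegistrations(ctx context.Context, s *builder.Service, regs []*ethpb.SignedValidatorRegistrationV1) error {
	if !s.Configured() {
		// No builder endpoint supplied; nothing to forward.
		return nil
	}
	return s.RegisterValidator(ctx, regs)
}
```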
View File

@@ -0,0 +1,14 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
testonly = True,
srcs = ["mock.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/builder/testing",
visibility = ["//visibility:public"],
deps = [
"//consensus-types/primitives:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
],
)

View File

@@ -0,0 +1,45 @@
package testing
import (
"context"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// MockBuilderService to mock builder.
type MockBuilderService struct {
HasConfigured bool
Payload *v1.ExecutionPayload
ErrSubmitBlindedBlock error
Bid *ethpb.SignedBuilderBid
ErrGetHeader error
ErrStatus error
ErrRegisterValidator error
}
// Configured for mocking.
func (s *MockBuilderService) Configured() bool {
return s.HasConfigured
}
// SubmitBlindedBlock for mocking.
func (s *MockBuilderService) SubmitBlindedBlock(context.Context, *ethpb.SignedBlindedBeaconBlockBellatrix) (*v1.ExecutionPayload, error) {
return s.Payload, s.ErrSubmitBlindedBlock
}
// GetHeader for mocking.
func (s *MockBuilderService) GetHeader(context.Context, types.Slot, [32]byte, [48]byte) (*ethpb.SignedBuilderBid, error) {
return s.Bid, s.ErrGetHeader
}
// Status for mocking.
func (s *MockBuilderService) Status() error {
return s.ErrStatus
}
// RegisterValidator for mocking.
func (s *MockBuilderService) RegisterValidator(context.Context, []*ethpb.SignedValidatorRegistrationV1) error {
return s.ErrRegisterValidator
}

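A short sketch of driving the mock from a test follows; the import path matches the BUILD file above, while the test name and package are illustrative.

```go
package rpc_test // hypothetical test package

import (
	"context"
	"testing"

	builderTest "github.com/prysmaticlabs/prysm/beacon-chain/builder/testing"
	v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
)

func TestUsesMockBuilder(t *testing.T) {
	m := &builderTest.MockBuilderService{
		HasConfigured: true,
		Payload:       &v1.ExecutionPayload{},
	}
	if !m.Configured() {
		t.Fatal("expected mock to report configured")
	}
	payload, err := m.SubmitBlindedBlock(context.Background(), nil)
	if err != nil || payload == nil {
		t.Fatal("expected stubbed payload and nil error")
	}
}
```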
View File

@@ -1,5 +1,4 @@
//go:build !fuzz
// +build !fuzz
package cache

View File

@@ -1,5 +1,4 @@
//go:build fuzz
// +build fuzz
package cache

View File

@@ -1,5 +1,4 @@
//go:build !fuzz
// +build !fuzz
package cache

View File

@@ -1,5 +1,4 @@
//go:build !fuzz
// +build !fuzz
package cache

View File

@@ -1,5 +1,4 @@
//go:build fuzz
// +build fuzz
// This file is used in fuzzer builds to bypass global committee caches.
package cache

View File

@@ -1,5 +1,4 @@
//go:build !fuzz
// +build !fuzz
package cache

View File

@@ -1,5 +1,4 @@
//go:build !fuzz
// +build !fuzz
package cache

View File

@@ -428,9 +428,9 @@ func TestFinalizedDeposits_DepositsCachedCorrectly(t *testing.T) {
require.NoError(t, err, "Could not hash deposit data")
deps = append(deps, hash[:])
}
trie, err := trie.GenerateTrieFromItems(deps, params.BeaconConfig().DepositContractTreeDepth)
generatedTrie, err := trie.GenerateTrieFromItems(deps, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate deposit trie")
rootA, err := trie.HashTreeRoot()
rootA, err := generatedTrie.HashTreeRoot()
require.NoError(t, err)
rootB, err := cachedDeposits.Deposits.HashTreeRoot()
require.NoError(t, err)
@@ -490,9 +490,9 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
require.NoError(t, err, "Could not hash deposit data")
deps = append(deps, hash[:])
}
trie, err := trie.GenerateTrieFromItems(deps, params.BeaconConfig().DepositContractTreeDepth)
generatedTrie, err := trie.GenerateTrieFromItems(deps, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate deposit trie")
rootA, err := trie.HashTreeRoot()
rootA, err := generatedTrie.HashTreeRoot()
require.NoError(t, err)
rootB, err := cachedDeposits.Deposits.HashTreeRoot()
require.NoError(t, err)

View File

@@ -109,12 +109,12 @@ func (dc *DepositCache) RemovePendingDeposit(ctx context.Context, d *ethpb.Depos
idx := -1
for i, ctnr := range dc.pendingDeposits {
hash, err := hash.HashProto(ctnr.Deposit)
h, err := hash.HashProto(ctnr.Deposit)
if err != nil {
log.Errorf("Could not hash deposit %v", err)
continue
}
if hash == depRoot {
if h == depRoot {
idx = i
break
}

View File

@@ -1,5 +1,4 @@
//go:build !fuzz
// +build !fuzz
package cache

View File

@@ -1,5 +1,4 @@
//go:build fuzz
// +build fuzz
// This file is used in fuzzer builds to bypass proposer indices caches.
package cache

View File

@@ -1,5 +1,4 @@
//go:build !fuzz
// +build !fuzz
package cache

View File

@@ -1,5 +1,4 @@
//go:build !fuzz
// +build !fuzz
package cache

View File

@@ -1,5 +1,4 @@
//go:build fuzz
// +build fuzz
package cache

View File

@@ -161,16 +161,16 @@ func TestSyncCommitteeCache_CanUpdateAndRetrieve(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, uint64(numValidators))
require.NoError(t, s.SetCurrentSyncCommittee(tt.currentSyncCommittee))
require.NoError(t, s.SetNextSyncCommittee(tt.nextSyncCommittee))
cache := cache.NewSyncCommittee()
c := cache.NewSyncCommittee()
r := [32]byte{'a'}
require.NoError(t, cache.UpdatePositionsInCommittee(r, s))
require.NoError(t, c.UpdatePositionsInCommittee(r, s))
for key, indices := range tt.currentSyncMap {
pos, err := cache.CurrentPeriodIndexPosition(r, key)
pos, err := c.CurrentPeriodIndexPosition(r, key)
require.NoError(t, err)
require.DeepEqual(t, indices, pos)
}
for key, indices := range tt.nextSyncMap {
pos, err := cache.NextPeriodIndexPosition(r, key)
pos, err := c.NextPeriodIndexPosition(r, key)
require.NoError(t, err)
require.DeepEqual(t, indices, pos)
}

View File

@@ -34,8 +34,8 @@ func ProcessAttestationsNoVerifySignature(
if err != nil {
return nil, err
}
for idx, attestation := range body.Attestations() {
beaconState, err = ProcessAttestationNoVerifySignature(ctx, beaconState, attestation, totalBalance)
for idx, att := range body.Attestations() {
beaconState, err = ProcessAttestationNoVerifySignature(ctx, beaconState, att, totalBalance)
if err != nil {
return nil, errors.Wrapf(err, "could not verify attestation at index %d in block", idx)
}

View File

@@ -27,7 +27,7 @@ import (
func TestProcessAttestations_InclusionDelayFailure(t *testing.T) {
attestations := []*ethpb.Attestation{
util.HydrateAttestation(&ethpb.Attestation{
util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 0, Root: make([]byte, fieldparams.RootLength)},
Slot: 5,
@@ -55,7 +55,7 @@ func TestProcessAttestations_InclusionDelayFailure(t *testing.T) {
}
func TestProcessAttestations_NeitherCurrentNorPrevEpoch(t *testing.T) {
att := util.HydrateAttestation(&ethpb.Attestation{
att := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Target: &ethpb.Checkpoint{Epoch: 0}}})
@@ -201,7 +201,7 @@ func TestProcessAttestations_OK(t *testing.T) {
aggBits.SetBitAt(0, true)
var mockRoot [32]byte
copy(mockRoot[:], "hello-world")
att := util.HydrateAttestation(&ethpb.Attestation{
att := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Root: mockRoot[:]},
@@ -411,21 +411,21 @@ func TestValidatorFlag_Add_ExceedsLength(t *testing.T) {
func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
st := &ethpb.BeaconStateAltair{}
b := &ethpb.SignedBeaconBlockAltair{Block: &ethpb.BeaconBlockAltair{}}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(st)
fuzzer.Fuzz(b)
if b.Block == nil {
b.Block = &ethpb.BeaconBlockAltair{}
}
s, err := stateAltair.InitializeFromProtoUnsafe(state)
s, err := stateAltair.InitializeFromProtoUnsafe(st)
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
r, err := altair.ProcessAttestationsNoVerifySignature(context.Background(), s, wsb)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, b)
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, s, b)
}
}
}

View File

@@ -25,7 +25,7 @@ func ProcessDeposits(
if deposit == nil || deposit.Data == nil {
return nil, errors.New("got a nil deposit in block")
}
beaconState, err = ProcessDeposit(ctx, beaconState, deposit, batchVerified)
beaconState, err = ProcessDeposit(beaconState, deposit, batchVerified)
if err != nil {
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
@@ -34,7 +34,7 @@ func ProcessDeposits(
}
// ProcessDeposit processes validator deposit for beacon state Altair.
func ProcessDeposit(ctx context.Context, beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, error) {
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, error) {
beaconState, isNewValidator, err := blocks.ProcessDeposit(beaconState, deposit, verifySignature)
if err != nil {
return nil, err

View File

@@ -40,7 +40,7 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer.Fuzz(deposit)
s, err := stateAltair.InitializeFromProtoUnsafe(state)
require.NoError(t, err)
r, err := altair.ProcessDeposit(context.Background(), s, deposit, true)
r, err := altair.ProcessDeposit(s, deposit, true)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}

View File

@@ -183,7 +183,7 @@ func TestProcessDeposit_AddsNewValidatorDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposit(context.Background(), beaconState, dep[0], true)
newState, err := altair.ProcessDeposit(beaconState, dep[0], true)
require.NoError(t, err, "Process deposit failed")
require.Equal(t, 2, len(newState.Validators()), "Expected validator list to have length 2")
require.Equal(t, 2, len(newState.Balances()), "Expected validator balances list to have length 2")
@@ -201,9 +201,9 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
trie, _, err := util.DepositTrieFromDeposits(dep)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
root, err := trie.HashTreeRoot()
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
@@ -226,7 +226,7 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposit(context.Background(), beaconState, dep[0], true)
newState, err := altair.ProcessDeposit(beaconState, dep[0], true)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
if newState.Eth1DepositIndex() != 1 {

View File

@@ -214,9 +214,12 @@ func ValidateSyncMessageTime(slot types.Slot, genesisTime time.Time, clockDispar
// Verify sync message slot is within the time range.
if messageTime.Before(lowerBound) || messageTime.After(upperBound) {
return fmt.Errorf(
"sync message slot %d not within allowable range of %d to %d (current slot)",
"sync message time %v (slot %d) not within allowable range of %v (slot %d) to %v (slot %d)",
messageTime,
slot,
lowerBound,
uint64(lowerBound.Unix()-genesisTime.Unix())/params.BeaconConfig().SecondsPerSlot,
upperBound,
uint64(upperBound.Unix()-genesisTime.Unix())/params.BeaconConfig().SecondsPerSlot,
)
}

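The improved message converts each time bound back into a slot with the same integer arithmetic shown above. A standalone sketch of that conversion is below; the 12-second slot duration is the mainnet value and is assumed here rather than read from params.

```go
package main

import (
	"fmt"
	"time"
)

// slotAt mirrors the bound-to-slot arithmetic in the error message:
// seconds elapsed since genesis, divided by the slot duration.
func slotAt(t, genesis time.Time, secondsPerSlot uint64) uint64 {
	return uint64(t.Unix()-genesis.Unix()) / secondsPerSlot
}

func main() {
	genesis := time.Unix(1_606_824_023, 0)         // an arbitrary fixed genesis time
	msgTime := genesis.Add(101 * 12 * time.Second) // a message arriving 101 slots later
	fmt.Println(slotAt(msgTime, genesis, 12))      // prints 101
}
```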
View File

@@ -28,12 +28,12 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
EffectiveBalance: params.BeaconConfig().MinDepositAmount,
}
}
state, err := stateAltair.InitializeFromProto(&ethpb.BeaconStateAltair{
st, err := stateAltair.InitializeFromProto(&ethpb.BeaconStateAltair{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
return state
return st
}
type args struct {
@@ -103,27 +103,27 @@ func TestSyncCommitteeIndices_DifferentPeriods(t *testing.T) {
EffectiveBalance: params.BeaconConfig().MinDepositAmount,
}
}
state, err := stateAltair.InitializeFromProto(&ethpb.BeaconStateAltair{
st, err := stateAltair.InitializeFromProto(&ethpb.BeaconStateAltair{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
return state
return st
}
state := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
got1, err := altair.NextSyncCommitteeIndices(context.Background(), state)
st := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
got1, err := altair.NextSyncCommitteeIndices(context.Background(), st)
require.NoError(t, err)
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch))
got2, err := altair.NextSyncCommitteeIndices(context.Background(), state)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch))
got2, err := altair.NextSyncCommitteeIndices(context.Background(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch*types.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(context.Background(), state)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*types.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(context.Background(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch*types.Slot(2*params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(context.Background(), state)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*types.Slot(2*params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(context.Background(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
}
@@ -140,12 +140,12 @@ func TestSyncCommittee_CanGet(t *testing.T) {
PublicKey: blsKey.PublicKey().Marshal(),
}
}
state, err := stateAltair.InitializeFromProto(&ethpb.BeaconStateAltair{
st, err := stateAltair.InitializeFromProto(&ethpb.BeaconStateAltair{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
return state
return st
}
type args struct {
@@ -260,8 +260,8 @@ func TestValidateNilSyncContribution(t *testing.T) {
func TestSyncSubCommitteePubkeys_CanGet(t *testing.T) {
helpers.ClearCache()
state := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
com, err := altair.NextSyncCommittee(context.Background(), state)
st := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
com, err := altair.NextSyncCommittee(context.Background(), st)
require.NoError(t, err)
sub, err := altair.SyncSubCommitteePubkeys(com, 0)
require.NoError(t, err)
@@ -328,7 +328,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
syncMessageSlot: 16,
genesisTime: prysmTime.Now().Add(-(15 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)),
},
wantedErr: "sync message slot 16 not within allowable range of",
wantedErr: "(slot 16) not within allowable range of",
},
{
name: "sync_message.slot == current_slot+CLOCK_DISPARITY",
@@ -344,7 +344,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
syncMessageSlot: 100,
genesisTime: prysmTime.Now().Add(-(100 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second) + params.BeaconNetworkConfig().MaximumGossipClockDisparity + 1000*time.Millisecond),
},
wantedErr: "sync message slot 100 not within allowable range of",
wantedErr: "(slot 100) not within allowable range of",
},
{
name: "sync_message.slot == current_slot-CLOCK_DISPARITY",
@@ -360,7 +360,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
syncMessageSlot: 101,
genesisTime: prysmTime.Now().Add(-(100*time.Duration(params.BeaconConfig().SecondsPerSlot)*time.Second + params.BeaconNetworkConfig().MaximumGossipClockDisparity)),
},
wantedErr: "sync message slot 101 not within allowable range of",
wantedErr: "(slot 101) not within allowable range of",
},
{
name: "sync_message.slot is well beyond current slot",
@@ -395,10 +395,10 @@ func getState(t *testing.T, count uint64) state.BeaconState {
PublicKey: blsKey.PublicKey().Marshal(),
}
}
state, err := stateAltair.InitializeFromProto(&ethpb.BeaconStateAltair{
st, err := stateAltair.InitializeFromProto(&ethpb.BeaconStateAltair{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
return state
return st
}

View File

@@ -31,8 +31,8 @@ func ProcessAttestationsNoVerifySignature(
}
body := b.Block().Body()
var err error
for idx, attestation := range body.Attestations() {
beaconState, err = ProcessAttestationNoVerifySignature(ctx, beaconState, attestation)
for idx, att := range body.Attestations() {
beaconState, err = ProcessAttestationNoVerifySignature(ctx, beaconState, att)
if err != nil {
return nil, errors.Wrapf(err, "could not verify attestation at index %d in block", idx)
}

View File

@@ -25,7 +25,7 @@ import (
func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
data := util.HydrateAttestationData(&ethpb.AttestationData{
data := util.NewAttestationUtil().HydrateAttestationData(&ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: bytesutil.PadTo([]byte("hello-world"), 32)},
Target: &ethpb.Checkpoint{Epoch: 0, Root: bytesutil.PadTo([]byte("hello-world"), 32)},
})
@@ -85,7 +85,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
func TestVerifyAttestationNoVerifySignature_IncorrectSlotTargetEpoch(t *testing.T) {
beaconState, _ := util.DeterministicGenesisState(t, 1)
att := util.HydrateAttestation(&ethpb.Attestation{
att := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: params.BeaconConfig().SlotsPerEpoch,
Target: &ethpb.Checkpoint{Root: make([]byte, 32)},
@@ -218,7 +218,7 @@ func TestConvertToIndexed_OK(t *testing.T) {
var sig [fieldparams.BLSSignatureLength]byte
copy(sig[:], "signed")
att := util.HydrateAttestation(&ethpb.Attestation{
att := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Signature: sig[:],
})
for _, tt := range tests {
@@ -261,11 +261,12 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
au := util.AttestationUtil{}
tests := []struct {
attestation *ethpb.IndexedAttestation
}{
{attestation: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Epoch: 2,
},
@@ -275,7 +276,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
Signature: make([]byte, fieldparams.BLSSignatureLength),
}},
{attestation: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Epoch: 1,
},
@@ -284,7 +285,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
Signature: make([]byte, fieldparams.BLSSignatureLength),
}},
{attestation: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Epoch: 4,
},
@@ -293,7 +294,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
Signature: make([]byte, fieldparams.BLSSignatureLength),
}},
{attestation: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Epoch: 7,
},
@@ -411,7 +412,8 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
Data: &ethpb.AttestationData{
Slot: 1,
@@ -430,7 +432,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1*params.BeaconConfig().SlotsPerEpoch+1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
att2 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
Data: &ethpb.AttestationData{
Slot: 1*params.BeaconConfig().SlotsPerEpoch + 1,
@@ -470,7 +472,8 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
Data: &ethpb.AttestationData{
Slot: 1,
@@ -489,7 +492,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
att2 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
Data: &ethpb.AttestationData{
Slot: 1,
@@ -534,7 +537,8 @@ func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
comm1, err := helpers.BeaconCommitteeFromState(ctx, st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
Data: &ethpb.AttestationData{
Slot: 1,
@@ -553,7 +557,7 @@ func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
comm2, err := helpers.BeaconCommitteeFromState(ctx, st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
att2 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
Data: &ethpb.AttestationData{
Slot: 1,

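The tests above switch from package-level helpers to an AttestationUtil value, obtained either via NewAttestationUtil() or as a zero value. A hedged sketch of the pattern as it appears in this diff; the two forms are assumed to be equivalent, and the package name is illustrative.

```go
package example // hypothetical

import (
	ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
	"github.com/prysmaticlabs/prysm/testing/util"
)

// hydrateFixtures shows both ways this diff obtains the helper before
// calling its Hydrate* methods.
func hydrateFixtures() (*ethpb.Attestation, *ethpb.AttestationData) {
	att := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{})
	au := util.AttestationUtil{}
	data := au.HydrateAttestationData(&ethpb.AttestationData{})
	return att, data
}
```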
View File

@@ -19,11 +19,12 @@ import (
)
func TestSlashableAttestationData_CanSlash(t *testing.T) {
att1 := util.HydrateAttestationData(&ethpb.AttestationData{
au := util.AttestationUtil{}
att1 := au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 1, Root: make([]byte, 32)},
Source: &ethpb.Checkpoint{Root: bytesutil.PadTo([]byte{'A'}, 32)},
})
att2 := util.HydrateAttestationData(&ethpb.AttestationData{
att2 := au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 1, Root: make([]byte, 32)},
Source: &ethpb.Checkpoint{Root: bytesutil.PadTo([]byte{'B'}, 32)},
})
@@ -35,9 +36,10 @@ func TestSlashableAttestationData_CanSlash(t *testing.T) {
}
func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
au := util.AttestationUtil{}
slashings := []*ethpb.AttesterSlashing{{
Attestation_1: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{}),
Attestation_2: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_1: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{}),
Attestation_2: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
Target: &ethpb.Checkpoint{Epoch: 1}},
@@ -71,15 +73,16 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
})
require.NoError(t, err)
au := util.AttestationUtil{}
slashings := []*ethpb.AttesterSlashing{
{
Attestation_1: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_1: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: make([]uint64, params.BeaconConfig().MaxValidatorsPerCommittee+1),
}),
Attestation_2: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_2: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: make([]uint64, params.BeaconConfig().MaxValidatorsPerCommittee+1),
}),
},
@@ -102,7 +105,8 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
vv.WithdrawableEpoch = types.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
au := util.AttestationUtil{}
att1 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
@@ -117,7 +121,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
@@ -171,7 +175,8 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusAltair(t *testing.T) {
vv.WithdrawableEpoch = types.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
au := util.AttestationUtil{}
att1 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
@@ -186,7 +191,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusAltair(t *testing.T) {
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
@@ -240,7 +245,8 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
vv.WithdrawableEpoch = types.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
au := util.AttestationUtil{}
att1 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
@@ -255,7 +261,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)

View File

@@ -39,8 +39,9 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
expectedSlashedVal := 2800
root1 := [32]byte{'d', 'o', 'u', 'b', 'l', 'e', '1'}
au := util.AttestationUtil{}
att1 := &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{Target: &ethpb.Checkpoint{Epoch: 0, Root: root1[:]}}),
Data: au.HydrateAttestationData(&ethpb.AttestationData{Target: &ethpb.Checkpoint{Epoch: 0, Root: root1[:]}}),
AttestingIndices: setA,
Signature: make([]byte, 96),
}
@@ -58,7 +59,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
root2 := [32]byte{'d', 'o', 'u', 'b', 'l', 'e', '2'}
att2 := &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: root2[:]},
}),
AttestingIndices: setB,

View File

@@ -37,8 +37,8 @@ func ProcessPreGenesisDeposits(
// ActivateValidatorWithEffectiveBalance updates a validator's effective balance and, if it is above MaxEffectiveBalance, marks the validator as active in genesis.
func ActivateValidatorWithEffectiveBalance(beaconState state.BeaconState, deposits []*ethpb.Deposit) (state.BeaconState, error) {
for _, deposit := range deposits {
pubkey := deposit.Data.PublicKey
for _, d := range deposits {
pubkey := d.Data.PublicKey
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubkey))
// In the event of the pubkey not existing, we continue processing the other
// deposits.
@@ -85,13 +85,13 @@ func ProcessDeposits(
return nil, err
}
for _, deposit := range deposits {
if deposit == nil || deposit.Data == nil {
for _, d := range deposits {
if d == nil || d.Data == nil {
return nil, errors.New("got a nil deposit in block")
}
beaconState, _, err = ProcessDeposit(beaconState, deposit, batchVerified)
beaconState, _, err = ProcessDeposit(beaconState, d, batchVerified)
if err != nil {
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(d.Data.PublicKey))
}
}
return beaconState, nil

View File

@@ -232,9 +232,9 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
trie, _, err := util.DepositTrieFromDeposits(dep)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
root, err := trie.HashTreeRoot()
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
@@ -283,15 +283,15 @@ func TestPreGenesisDeposits_SkipInvalidDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(100)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
trie, _, err := util.DepositTrieFromDeposits(dep)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
for i := range dep {
proof, err := trie.MerkleProof(i)
proof, err := dt.MerkleProof(i)
require.NoError(t, err)
dep[i].Proof = proof
}
root, err := trie.HashTreeRoot()
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{

View File

@@ -12,7 +12,6 @@ import (
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/time/slots"
)
@@ -46,7 +45,7 @@ func IsMergeTransitionComplete(st state.BeaconState) (bool, error) {
// IsMergeTransitionBlockUsingPreStatePayloadHeader returns true if the input block is the terminal merge block.
// Terminal merge block must be associated with an empty payload header.
// This assumes the header `h` is referenced as the parent state for block body `body`.
func IsMergeTransitionBlockUsingPreStatePayloadHeader(h *ethpb.ExecutionPayloadHeader, body interfaces.BeaconBlockBody) (bool, error) {
func IsMergeTransitionBlockUsingPreStatePayloadHeader(h *enginev1.ExecutionPayloadHeader, body interfaces.BeaconBlockBody) (bool, error) {
if h == nil || body == nil {
return false, errors.New("nil header or block body")
}
@@ -98,7 +97,7 @@ func IsExecutionEnabled(st state.BeaconState, body interfaces.BeaconBlockBody) (
// IsExecutionEnabledUsingHeader returns true if execution is enabled, based on the post-processed payload header and block body.
// This is an optimized version of IsExecutionEnabled where beacon state is not required as an argument.
func IsExecutionEnabledUsingHeader(header *ethpb.ExecutionPayloadHeader, body interfaces.BeaconBlockBody) (bool, error) {
func IsExecutionEnabledUsingHeader(header *enginev1.ExecutionPayloadHeader, body interfaces.BeaconBlockBody) (bool, error) {
if !bellatrix.IsEmptyHeader(header) {
return true, nil
}
@@ -215,7 +214,7 @@ func ProcessPayload(st state.BeaconState, payload *enginev1.ExecutionPayload) (s
}
// ValidatePayloadHeaderWhenMergeCompletes validates the payload header when the merge completes.
func ValidatePayloadHeaderWhenMergeCompletes(st state.BeaconState, header *ethpb.ExecutionPayloadHeader) error {
func ValidatePayloadHeaderWhenMergeCompletes(st state.BeaconState, header *enginev1.ExecutionPayloadHeader) error {
// Skip validation if the state is not merge compatible.
complete, err := IsMergeTransitionComplete(st)
if err != nil {
@@ -236,7 +235,7 @@ func ValidatePayloadHeaderWhenMergeCompletes(st state.BeaconState, header *ethpb
}
// ValidatePayloadHeader validates the payload header.
func ValidatePayloadHeader(st state.BeaconState, header *ethpb.ExecutionPayloadHeader) error {
func ValidatePayloadHeader(st state.BeaconState, header *enginev1.ExecutionPayloadHeader) error {
// Validate header's random mix matches with state in current epoch
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
@@ -258,7 +257,7 @@ func ValidatePayloadHeader(st state.BeaconState, header *ethpb.ExecutionPayloadH
}
// ProcessPayloadHeader processes the payload header.
func ProcessPayloadHeader(st state.BeaconState, header *ethpb.ExecutionPayloadHeader) (state.BeaconState, error) {
func ProcessPayloadHeader(st state.BeaconState, header *enginev1.ExecutionPayloadHeader) (state.BeaconState, error) {
if err := ValidatePayloadHeaderWhenMergeCompletes(st, header); err != nil {
return nil, err
}

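With this migration, callers pass the engine-API header type straight through to the validation helpers. A minimal sketch of the new signature in use; it assumes these helpers live in Prysm's core/blocks package and that st and header come from the caller's block-processing context.

```go
package example // hypothetical caller

import (
	"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
	"github.com/prysmaticlabs/prysm/beacon-chain/state"
	enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
)

// validateHeader forwards to the updated helper, which now takes an
// enginev1.ExecutionPayloadHeader rather than the old ethpb type.
func validateHeader(st state.BeaconState, header *enginev1.ExecutionPayloadHeader) error {
	return blocks.ValidatePayloadHeader(st, header)
}
```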
View File

@@ -14,7 +14,6 @@ import (
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/prysmaticlabs/prysm/time/slots"
@@ -23,7 +22,7 @@ import (
func Test_IsMergeComplete(t *testing.T) {
tests := []struct {
name string
payload *ethpb.ExecutionPayloadHeader
payload *enginev1.ExecutionPayloadHeader
want bool
}{
{
@@ -33,7 +32,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has parent hash",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -42,7 +41,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has fee recipient",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.FeeRecipient = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -51,7 +50,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has state root",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.StateRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -60,7 +59,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has receipt root",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ReceiptsRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -69,7 +68,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has logs bloom",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.LogsBloom = bytesutil.PadTo([]byte{'a'}, fieldparams.LogsBloomLength)
return h
@@ -78,7 +77,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has random",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -87,7 +86,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has base fee",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BaseFeePerGas = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -96,7 +95,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has block hash",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -105,7 +104,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has tx root",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.TransactionsRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -114,7 +113,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has extra data",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ExtraData = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -123,7 +122,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has block number",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockNumber = 1
return h
@@ -132,7 +131,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has gas limit",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.GasLimit = 1
return h
@@ -141,7 +140,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has gas used",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.GasUsed = 1
return h
@@ -150,7 +149,7 @@ func Test_IsMergeComplete(t *testing.T) {
},
{
name: "has time stamp",
payload: func() *ethpb.ExecutionPayloadHeader {
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.Timestamp = 1
return h
@@ -175,7 +174,7 @@ func Test_IsMergeTransitionBlockUsingPayloadHeader(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *ethpb.ExecutionPayloadHeader
header *enginev1.ExecutionPayloadHeader
want bool
}{
{
@@ -187,7 +186,7 @@ func Test_IsMergeTransitionBlockUsingPayloadHeader(t *testing.T) {
{
name: "non-empty header, empty payload",
payload: emptyPayload(),
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -388,7 +387,7 @@ func Test_IsExecutionEnabled(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *ethpb.ExecutionPayloadHeader
header *enginev1.ExecutionPayloadHeader
useAltairSt bool
want bool
}{
@@ -408,7 +407,7 @@ func Test_IsExecutionEnabled(t *testing.T) {
{
name: "non-empty header, empty payload",
payload: emptyPayload(),
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -427,7 +426,7 @@ func Test_IsExecutionEnabled(t *testing.T) {
},
{
name: "non-empty header, non-empty payload",
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -464,7 +463,7 @@ func Test_IsExecutionEnabledUsingHeader(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *ethpb.ExecutionPayloadHeader
header *enginev1.ExecutionPayloadHeader
want bool
}{
{
@@ -476,7 +475,7 @@ func Test_IsExecutionEnabledUsingHeader(t *testing.T) {
{
name: "non-empty header, empty payload",
payload: emptyPayload(),
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -495,7 +494,7 @@ func Test_IsExecutionEnabledUsingHeader(t *testing.T) {
},
{
name: "non-empty header, non-empty payload",
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -527,7 +526,7 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *ethpb.ExecutionPayloadHeader
header *enginev1.ExecutionPayloadHeader
err error
}{
{
@@ -543,7 +542,7 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
@@ -557,7 +556,7 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockHash = bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength)
return h
@@ -688,12 +687,12 @@ func Test_ProcessPayloadHeader(t *testing.T) {
require.NoError(t, err)
tests := []struct {
name string
header *ethpb.ExecutionPayloadHeader
header *enginev1.ExecutionPayloadHeader
err error
}{
{
name: "process passes",
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = uint64(ts.Unix())
@@ -707,7 +706,7 @@ func Test_ProcessPayloadHeader(t *testing.T) {
},
{
name: "incorrect timestamp",
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = 1
@@ -739,12 +738,12 @@ func Test_ValidatePayloadHeader(t *testing.T) {
require.NoError(t, err)
tests := []struct {
name string
header *ethpb.ExecutionPayloadHeader
header *enginev1.ExecutionPayloadHeader
err error
}{
{
name: "process passes",
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = uint64(ts.Unix())
@@ -758,7 +757,7 @@ func Test_ValidatePayloadHeader(t *testing.T) {
},
{
name: "incorrect timestamp",
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = 1
@@ -778,16 +777,16 @@ func Test_ValidatePayloadHeader(t *testing.T) {
func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
emptySt := st.Copy()
require.NoError(t, st.SetLatestExecutionPayloadHeader(&ethpb.ExecutionPayloadHeader{BlockHash: []byte{'a'}}))
require.NoError(t, st.SetLatestExecutionPayloadHeader(&enginev1.ExecutionPayloadHeader{BlockHash: []byte{'a'}}))
tests := []struct {
name string
state state.BeaconState
header *ethpb.ExecutionPayloadHeader
header *enginev1.ExecutionPayloadHeader
err error
}{
{
name: "no merge",
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
return h
}(),
@@ -796,7 +795,7 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
},
{
name: "process passes",
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = []byte{'a'}
return h
@@ -806,7 +805,7 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
},
{
name: "invalid block hash",
header: func() *ethpb.ExecutionPayloadHeader {
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = []byte{'b'}
return h
@@ -873,8 +872,8 @@ func BenchmarkBellatrixComplete(b *testing.B) {
}
}
func emptyPayloadHeader() *ethpb.ExecutionPayloadHeader {
return &ethpb.ExecutionPayloadHeader{
func emptyPayloadHeader() *enginev1.ExecutionPayloadHeader {
return &enginev1.ExecutionPayloadHeader{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),

View File

@@ -39,6 +39,7 @@ go_test(
"attestation_test.go",
"justification_finalization_test.go",
"new_test.go",
"precompute_test.go",
"reward_penalty_test.go",
"slashing_test.go",
],

View File

@@ -1,11 +1,14 @@
package precompute
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time/slots"
)
@@ -14,11 +17,17 @@ var errNilState = errors.New("nil state")
// UnrealizedCheckpoints returns the justification and finalization checkpoints of the
// given state as if it were progressed with empty slots until the next epoch.
func UnrealizedCheckpoints(st state.BeaconState) (*ethpb.Checkpoint, *ethpb.Checkpoint, error) {
func UnrealizedCheckpoints(ctx context.Context, st state.BeaconState) (*ethpb.Checkpoint, *ethpb.Checkpoint, error) {
if st == nil || st.IsNil() {
return nil, nil, errNilState
}
if slots.ToEpoch(st.Slot()) <= params.BeaconConfig().GenesisEpoch+1 {
jc := st.CurrentJustifiedCheckpoint()
fc := st.FinalizedCheckpoint()
return jc, fc, nil
}
activeBalance, prevTarget, currentTarget, err := st.UnrealizedCheckpointBalances()
if err != nil {
return nil, nil, err
@@ -149,14 +158,14 @@ func computeCheckpoints(state state.BeaconState, newBits bitfield.Bitvector4) (*
finalizedCheckpoint := state.FinalizedCheckpoint()
// If 2/3 or more of the total balance attested in the current epoch.
if newBits.BitAt(0) {
if newBits.BitAt(0) && currentEpoch >= justifiedCheckpoint.Epoch {
blockRoot, err := helpers.BlockRoot(state, currentEpoch)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get block root for current epoch %d", currentEpoch)
}
justifiedCheckpoint.Epoch = currentEpoch
justifiedCheckpoint.Root = blockRoot
} else if newBits.BitAt(1) {
} else if newBits.BitAt(1) && prevEpoch >= justifiedCheckpoint.Epoch {
// If 2/3 or more of total balance attested in the previous epoch.
blockRoot, err := helpers.BlockRoot(state, prevEpoch)
if err != nil {

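Two behavioral changes land in the precompute file above. First, UnrealizedCheckpoints gains a context.Context parameter and returns the state's current justified and finalized checkpoints unchanged while the chain is still at or before GenesisEpoch + 1. Second, computeCheckpoints only moves the justified checkpoint when the candidate epoch is at least the epoch already recorded, so a previous-epoch supermajority can never roll the checkpoint backwards; the new Test_ComputeCheckpoints_CantUpdateToLower in the next file covers exactly that case. The sketch below is a stand-alone illustration of the guard, not Prysm's code: the checkpoint type and root values are placeholders.

// Stand-alone sketch of the monotonic justified-checkpoint guard added above.
// checkpoint is a placeholder for ethpb.Checkpoint; roots are illustrative strings.
package main

import "fmt"

type checkpoint struct {
	Epoch uint64
	Root  string
}

// updateJustified mirrors the guarded update: a supermajority in the current epoch
// (bit 0) or the previous epoch (bit 1) may only move the checkpoint forward.
func updateJustified(jc checkpoint, currentEpoch uint64, bit0, bit1 bool) checkpoint {
	prevEpoch := currentEpoch - 1 // the sketch assumes currentEpoch >= 1
	switch {
	case bit0 && currentEpoch >= jc.Epoch:
		return checkpoint{Epoch: currentEpoch, Root: "block-root@current"}
	case bit1 && prevEpoch >= jc.Epoch:
		return checkpoint{Epoch: prevEpoch, Root: "block-root@previous"}
	default:
		return jc // candidate epoch is older than the one already justified
	}
}

func main() {
	jc := checkpoint{Epoch: 2, Root: "existing"}
	// Previous-epoch supermajority at epoch 2 points to epoch 1: the guard keeps epoch 2.
	fmt.Println(updateJustified(jc, 2, false, true)) // {2 existing}
	// Current-epoch supermajority at epoch 3 advances the checkpoint as before.
	fmt.Println(updateJustified(jc, 3, true, false)) // {3 block-root@current}
}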
View File

@@ -243,10 +243,25 @@ func TestUnrealizedCheckpoints(t *testing.T) {
_, _, err = altair.InitializePrecomputeValidators(context.Background(), state)
require.NoError(t, err)
jc, fc, err := precompute.UnrealizedCheckpoints(state)
jc, fc, err := precompute.UnrealizedCheckpoints(context.Background(), state)
require.NoError(t, err)
require.DeepEqual(t, test.expectedJustified, jc.Epoch)
require.DeepEqual(t, test.expectedFinalized, fc.Epoch)
})
}
}
func Test_ComputeCheckpoints_CantUpdateToLower(t *testing.T) {
st, err := v2.InitializeFromProto(&ethpb.BeaconStateAltair{
Slot: params.BeaconConfig().SlotsPerEpoch * 2,
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 2,
},
})
require.NoError(t, err)
jb := make(bitfield.Bitvector4, 1)
jb.SetBitAt(1, true)
cp, _, err := precompute.ComputeCheckpoints(st, jb)
require.NoError(t, err)
require.Equal(t, types.Epoch(2), cp.Epoch)
}

View File

@@ -0,0 +1,3 @@
package precompute
var ComputeCheckpoints = computeCheckpoints

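The three-line export_test.go above is the standard Go idiom for reaching an unexported function from an external test package: because files ending in _test.go are compiled only for go test, the alias ComputeCheckpoints exposes computeCheckpoints to the precompute_test package (used by Test_ComputeCheckpoints_CantUpdateToLower earlier) without widening the production API. A generic illustration of the idiom follows, split across three files as labeled; the names and module path are placeholders, not Prysm's.

// mypkg/double.go -- production code with an unexported helper.
package mypkg

func computeDouble(x int) int { return 2 * x }

// mypkg/export_test.go -- compiled only under `go test`, so the alias never ships.
package mypkg

var ComputeDouble = computeDouble

// mypkg/double_test.go -- the external test package reaches the helper via the alias.
package mypkg_test

import (
	"testing"

	"example.com/mypkg" // placeholder module path
)

func TestComputeDouble(t *testing.T) {
	if got := mypkg.ComputeDouble(21); got != 42 {
		t.Fatalf("computeDouble(21) = %d, want 42", got)
	}
}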
View File

@@ -14,6 +14,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/v3:go_default_library",
"//config/params:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
],
)
@@ -25,6 +26,7 @@ go_test(
":go_default_library",
"//beacon-chain/core/time:go_default_library",
"//config/params:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",

View File

@@ -1,18 +1,17 @@
package execution
import (
"context"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
v3 "github.com/prysmaticlabs/prysm/beacon-chain/state/v3"
"github.com/prysmaticlabs/prysm/config/params"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// UpgradeToBellatrix upgrades a generic (Altair) beacon state to a Bellatrix state.
// It inserts an empty `ExecutionPayloadHeader` into the state.
func UpgradeToBellatrix(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
func UpgradeToBellatrix(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
currentSyncCommittee, err := state.CurrentSyncCommittee()
@@ -65,7 +64,7 @@ func UpgradeToBellatrix(ctx context.Context, state state.BeaconState) (state.Bea
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &ethpb.ExecutionPayloadHeader{
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeader{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),

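UpgradeToBellatrix drops its context.Context parameter and seeds the upgraded state with an empty enginev1.ExecutionPayloadHeader whose byte fields are zero-valued at their fixed lengths: 32-byte hashes and roots and a 20-byte fee recipient as shown above, plus (per the usual Bellatrix layout, not visible in this excerpt) a 256-byte logs bloom. The sketch below builds such a zero-valued header with a stand-in struct; it is illustrative only, not the proto type itself.

// Stand-in sketch of the empty header UpgradeToBellatrix installs; the struct mirrors
// enginev1.ExecutionPayloadHeader's byte fields but is not the real proto type.
package main

import "fmt"

type executionPayloadHeader struct {
	ParentHash       []byte
	FeeRecipient     []byte
	StateRoot        []byte
	ReceiptsRoot     []byte
	LogsBloom        []byte
	PrevRandao       []byte
	BaseFeePerGas    []byte
	BlockHash        []byte
	TransactionsRoot []byte
	ExtraData        []byte
	BlockNumber      uint64
	GasLimit         uint64
	GasUsed          uint64
	Timestamp        uint64
}

// emptyHeader returns a header with every byte field zeroed at its fixed length,
// matching the construction shown in the diff above.
func emptyHeader() *executionPayloadHeader {
	return &executionPayloadHeader{
		ParentHash:       make([]byte, 32),
		FeeRecipient:     make([]byte, 20),
		StateRoot:        make([]byte, 32),
		ReceiptsRoot:     make([]byte, 32),
		LogsBloom:        make([]byte, 256),
		PrevRandao:       make([]byte, 32),
		BaseFeePerGas:    make([]byte, 32),
		BlockHash:        make([]byte, 32),
		TransactionsRoot: make([]byte, 32),
		ExtraData:        make([]byte, 0),
	}
}

func main() {
	h := emptyHeader()
	fmt.Println(len(h.ParentHash), len(h.FeeRecipient), len(h.LogsBloom)) // 32 20 256
}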
View File

@@ -1,12 +1,12 @@
package execution_test
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/execution"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/config/params"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
@@ -15,7 +15,7 @@ import (
func TestUpgradeToBellatrix(t *testing.T) {
st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
preForkState := st.Copy()
mSt, err := execution.UpgradeToBellatrix(context.Background(), st)
mSt, err := execution.UpgradeToBellatrix(st)
require.NoError(t, err)
require.Equal(t, preForkState.GenesisTime(), mSt.GenesisTime())
@@ -61,7 +61,7 @@ func TestUpgradeToBellatrix(t *testing.T) {
header, err := mSt.LatestExecutionPayloadHeader()
require.NoError(t, err)
wanted := &ethpb.ExecutionPayloadHeader{
wanted := &enginev1.ExecutionPayloadHeader{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),

Some files were not shown because too many files have changed in this diff.