Compare commits

...

312 Commits

Author SHA1 Message Date
Zahoor Mohamed
59e97d268e add db functions for last checkpoint 2022-03-11 16:35:21 +05:30
terence tsao
f6eb6cd6bf Fix run time 2022-03-10 14:01:18 -08:00
terence tsao
a08c809073 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-10 13:53:01 -08:00
james-prysm
c58ce41b3b Flagutil: utility for json file from directory or url and unmarshal to struct (#10333)
* initial commit

* adding first testcase wip

* fixing test

* adding more unit tests

* adding bazel file

* adding more unit tests and file checks

* addressing comments

* refactoring based on comments

* added bazel

* fixing build
2022-03-10 14:25:26 -06:00
terence tsao
ef0435493d Revert "Update receive_block.go"
This reverts commit 5b4a87c512.
2022-03-10 11:09:17 -08:00
terence tsao
6f15d2b0b2 Update bzl 2022-03-10 11:02:07 -08:00
kasey
d3b09d1e9d --curses=no for cleaner logs (no term refresh) (#10334)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-10 18:05:36 +00:00
Michael Neuder
27082e2cd2 Adding ExtractKeystores to the Keymanager interface (#10313)
* adding ExtractKeystores to the Keymanager interface

* adding ExtractKeystore to mockKeymanager struct types

* bazel run //:gazelle -- fix

Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-03-10 17:19:04 +00:00
terence tsao
12080727ea Save and retrieve fee recipients for db (#10336)
* Can save fee recipients in db

* Update BUILD.bazel

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-03-10 15:05:10 +00:00
Potuz
a731b8c0bc Boost proposer later (#10338)
* Boost Proposer after inserting block

* add regression test
2022-03-10 13:13:06 +00:00
terence tsao
5b4a87c512 Update receive_block.go 2022-03-09 19:47:11 -08:00
terence tsao
5d7704e3a9 Update process_block.go 2022-03-09 19:41:03 -08:00
terence tsao
c4093f8adb Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-09 19:36:56 -08:00
Raul Jordan
f6eed74500 PrepareBeaconProposer Protobuf Schema (#10332)
* prepare v1 protos

* add in protos

* prepare beacon proposer

* stubs

* builds

* gaz

* build

* rem

* ssz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-09 21:40:26 +00:00
Potuz
329a4a600c Add USE_PRYSM_MODERN environment variable (#10229)
* Add USE_PRYSM_MODERN environment variable

* fix deepsource

* change naming convention

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-09 21:00:12 +00:00
terence tsao
7ce712bb5e Kiln run time changes (#10321)
* First take

* Update proposer_execution_payload.go

* Update optimistic_sync_test.go

* Add tests

* Update proposer_execution_payload.go

* Fix tests

* Add deprecation

* Fix bad merge

* New test

* Update beacon-chain/core/blocks/payload.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Proposer test can get and compare payload

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-03-09 20:26:23 +00:00
kasey
17a43c1158 Retrieving state by slot to always apply canonical block at slot, when available (#10255)
* new stategen.StateReplayer/ReplayerBuilder to give more fine-grained
  control of replaying state+block history
* all rpc/api methods updated to use the new interface, return post-state

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2022-03-09 13:33:18 -06:00
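A rough sketch of the builder-style replay API this entry describes; the interface and method names below are illustrative assumptions, not Prysm's actual stategen types.

package example

import "context"

// Stand-ins for Prysm's concrete slot and state types.
type Slot uint64
type BeaconState interface{}

// Replayer replays canonical blocks, when available, on top of the nearest
// saved state and returns the post-state at the target slot.
type Replayer interface {
	ReplayToSlot(ctx context.Context, target Slot) (BeaconState, error)
}

// ReplayerBuilder hands out Replayers for specific targets, giving RPC and API
// handlers fine-grained control over state+block replay.
type ReplayerBuilder interface {
	ReplayerForSlot(target Slot) Replayer
}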
Radosław Kapka
77b8b13eff Bellatrix API support for block endpoints (#10324)
* refactor GetBlockV2

* Add bellatrix to GetBlockSSZV2

* Add bellatrix to ListBlockAttestations

* Add bellatrix to SubmitBlock

* gzl

* return error from SubmitBlock

* return nil

* Better code flow when getting blocks

* remove tautology

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-09 17:18:52 +00:00
Potuz
b59484b285 Export SetOptimisticToInvalid (#10330)
* temp

* export remove invalid blocks
2022-03-09 13:50:57 -03:00
Radosław Kapka
1964fb8146 Correct package name in features' README file (#10329) 2022-03-09 14:24:00 +00:00
Nishant Das
63825290cb Handle Port Registration Better (#10325) 2022-03-09 13:52:50 +01:00
kmax.eth
1619d880d4 fix TestSyncHandlers_WaitTillSynced (#10328)
* fix TestLockUnlock_CleansUnused

* fix TestSyncHandlers_WaitTillSynced
The test is failing silently (while go test shows 'PASS') due to a panic caused by a feed type mismatch. As a result, the intended testing logic is not exercised at all. There are a couple of fixes here:
1. fix the feed event type to be a pointer
2. add seendCache to avoid a nil pointer panic
3. fill block in beaconDB so validateBeaconBlockPubSub takes the shortcut
4. replace time.Sleep() with deterministic channel waiting (see the sketch after this entry)

* handle cancel func
2022-03-09 05:42:03 +00:00
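A minimal, generic illustration of point 4 above, not the actual Prysm test: instead of sleeping and hoping the goroutine has finished, the test blocks on a channel that the goroutine closes when it is done.

package example

import (
	"testing"
	"time"
)

func TestHandlerFinishesDeterministically(t *testing.T) {
	done := make(chan struct{})
	go func() {
		defer close(done)
		// ... exercise the handler under test here ...
	}()

	select {
	case <-done:
		// The goroutine finished; assertions can run safely now.
	case <-time.After(5 * time.Second):
		t.Fatal("timed out waiting for the handler to finish")
	}
}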
kmax.eth
af2b858aa2 fix TestLockUnlock_CleansUnused (#10326) 2022-03-09 04:08:28 +00:00
Potuz
57a323f083 Forkchoice featureflag (#10299)
* Compiling main beacon-chain binary

* Add feature flag

* passing protoarray tests

* passing nodetree tests

* passing blockchain package tests

* passing rpc tests

* go fmt

* re-export forkchoice store from blockchain package

* remove duplicated import

* remove unused var

* add nodetree rpc method

* remove slot from IsOptimisticForRoot

* release lock in IsOptimistic

* change package name

* Revert "change package name"

This reverts commit 679112f9ef.

* rename package

* Update doc

* Fix span names

* Terence + Raul review

* remove go:build flags

* add errors dep

* spec tests

* fix call to IsOptimisticForRoot

* fix test

* Fix conflict

* change name of function

* remove ctx from store.head

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2022-03-09 03:05:51 +00:00
Nishant Das
738f00129b control test (#10323) 2022-03-08 07:58:19 -08:00
Radosław Kapka
3a03623094 E2E: Cleanup old Eth1 code (#10320) 2022-03-07 23:14:10 +00:00
terence tsao
4724b8430f Sync with develop 2022-03-07 12:23:37 -08:00
terence tsao
55c8922f51 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-07 12:20:36 -08:00
terence tsao
cde58f6924 Rest of kiln changes (#10319)
* Update configs

* gazelle

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-07 19:47:08 +00:00
Preston Van Loon
78fe712e53 Validator: deduplicate fork logic for block proposals (#10297)
* Deduplicate block proposals

* fix tests, more dedup of tests

* Add godoc to BuildSignedBeaconBlock.

* Rename SignRequest_Object => SignRequestObject

* Fix error messages

* Add tests for new methods

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-07 19:18:54 +00:00
Raul Jordan
7f8d66c919 Merge branch 'develop' into kiln 2022-03-07 18:55:37 +00:00
terence tsao
40e5a5d796 RPC: add prepare execution payload (#10311)
* Add prepare execution payload

* Add prepare execution payload

* Add prepare execution payload

* Update client.go

* Update proposer_execution_payload.go

* Update proposer_execution_payload_test.go

* Handle post-Bellatrix finalized block

* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer_execution_payload.go

* Use BeaconBlockIsNil

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-03-07 18:43:20 +00:00
Radosław Kapka
87507cbfe2 Support Bellatrix state in getStateV2 standard API (#10314) 2022-03-07 17:56:54 +01:00
Mohamed Zahoor
b516cfd998 cherry picked PR of #10233 (#10301)
* fix merge conflicts

* fix more merge conflicts

* fix fmt

* remove unnecessary cfg

* setting bad block if the bellatrix validation fails

* added an edge condition

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-07 13:19:07 +00:00
Radosław Kapka
f98d1ce64b Run multiple go-ethereum nodes in e2e (#10277)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-03-07 13:23:15 +01:00
Mohamed Zahoor
a103dd91c0 cherry picked PR for #10274 (#10300)
* ignore topic messages (except block topic) during optimistic sync

* address review comments

* nit pick fix

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-03-06 08:09:12 +00:00
terence tsao
74fe2cc8d0 Spawn attestation routine at a better place (#10303)
* Spawn attestation routine at a better place

* Revert

* Update service.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-04 22:25:04 +00:00
Nishant Das
a4bbaac262 Add Better Error Logs for Context Deadlines (#10310)
* better log here too

* remove period

* Update beacon-chain/p2p/pubsub.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-03-04 16:21:19 +00:00
Nishant Das
1af11885ee Remove Support for 2d-list Hashers (#10290)
* add changes

* fix logic bug

* fix

* potuz's review

* Update beacon-chain/state/stateutil/eth1_root.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-03-04 15:19:07 +00:00
Nishant Das
1437cb8982 Update GoHashTree Library (#10308) 2022-03-04 14:18:52 +00:00
terence tsao
e2e5a0d86c Return early if ttd is not reached 2022-03-04 05:55:30 -08:00
Nishant Das
0b559afe30 Better Error Log In Context Deadlines (#10309)
* add better error log

* radek's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-04 09:54:06 +00:00
Radosław Kapka
54915850a2 Refactor E2E port registration (#10306)
* Refactor e2e port registration

* uncomment tests

* explain calculation

* fix things

* change param to pointer

* fix errors

* unit test and constant
2022-03-04 09:26:28 +00:00
terence tsao
4acc40ffed Update proposer_execution_payload.go 2022-03-03 15:48:28 -08:00
terence tsao
acc528ff75 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-03 15:45:27 -08:00
terence tsao
69618d157a Bellatrix validator terminal block helpers (#10305)
* Add get transition block

* Update proposer_execution_payload.go
2022-03-03 15:00:11 -05:00
terence tsao
87395141e8 Bypass eth1 data checks 2022-03-03 09:36:48 -08:00
Nishant Das
fa750650ed Disable Vectorized HTR from Our Dev Flag (#10304) 2022-03-03 03:49:26 +00:00
terence tsao
a69901bd7c Rm post state check 2022-03-02 16:45:03 -08:00
terence tsao
72d2bc7ce1 Sync with develop 2022-03-02 16:32:07 -08:00
terence tsao
aecd34a1ea Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-02 16:22:28 -08:00
terence tsao
7303985232 Validator RPC: add optimistic status check (#10291)
* Add optimistic status check

* Simplify a bit more

* Update status_test.go

* Add non opt tests

* Update aggregator_test.go

* More tests

* Preston's feedback
2022-03-03 00:16:34 +00:00
Raul Jordan
958dd9d783 Check Engine API Transition Configuration in Background (#10250)
* transition proto

* gen pb

* builds

* impl transition config

* begin tests

* transition config messed up

* amend proto

* use str

* passing

* gaz

* config

* client test

* pb

* set to 0

* rem log

* gaz

* check transition config

* check config differences

* check transition config in background

* gaz

* pass

* redundant

* fix up error handling and healthz

* simplify status

* gazelle

* build

* err config check

* test

* gaz

* Fix run time

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-01 21:38:51 +00:00
Preston Van Loon
a3f8ccd924 validator: Fix flaky test TestServer_RefreshJWTSecretOnFileChange (#10296) 2022-03-01 20:34:05 +00:00
terence tsao
1daae0f5cf Update proposer_execution_payload.go 2022-03-01 09:15:41 -08:00
terence tsao
1583c77bdf Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-01 09:10:10 -08:00
terence tsao
d64f6cb7a8 Add engine methods to block processing (#10285)
* Add notify newPayload and forkchoiceUpdate

* Tests

* Raul's feedback

* Update optimistic_sync.go

* Simplify

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-01 16:43:06 +00:00
terence tsao
a9a75e0004 Release lock before returning validatedTips (#10289)
* Release lock before return

* add test

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-03-01 15:36:47 +00:00
terence tsao
8baf2179b3 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-01 07:26:05 -08:00
terence tsao
d41947c60d Fix deadlock, uncomment duty opt sync 2022-02-28 20:12:27 -08:00
Nishant Das
339540274b Integration of Vectorized Sha256 In Prysm (#10166)
* add changes

* fix for vectorize

* fix bug

* add new bench

* use new algorithms

* add latest updates

* save progress

* hack even more

* add more changes

* change library

* go mod

* fix deps

* fix dumb bug

* add flag and remove redundant code

* clean up better

* remove those ones

* clean up benches

* clean up benches

* cleanup

* gaz

* revert change

* potuz's review

* potuz's review

* potuz's review

* gaz

* potuz's review

* remove cyclical import

* revert ide changes

* potuz's review

* return
2022-02-28 21:56:12 +08:00
terence tsao
d8f9ecbd4d Update process_block.go 2022-02-27 14:43:28 -08:00
terence tsao
9da43e4170 refactor engine calls 2022-02-27 14:42:09 -08:00
Raul Jordan
12ba8f3645 Renaming Random in ExecutionPayloads to PrevRandao (#10283)
* rename proto

* p header

* regen

* regen ssz

* fix randao

* random name changes

* bazel builds

* bt

* incorrect prev randao

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-02-26 03:47:16 +00:00
terence tsao
d3756ea4ea clean ups 2022-02-25 18:41:42 -08:00
Raul Jordan
66418ec0ff Merge branch 'develop' into kiln 2022-02-25 16:52:46 -05:00
Raul Jordan
f3a7f399c0 Engine API Client Authentication for the Merge via HTTP (#10236)
* round tripper with claims

* auth

* edit auth

* test out jwt

* passing

* jwt flag

* comment

* passing

* commentary

* fix up jwt parsing

* gaz

* update jwt libs

* tidy

* gaz

* lint

* tidy up

* comment too long

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-02-25 19:08:43 +00:00
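The "round tripper with claims" bullets above refer to attaching a signed JWT to every engine API HTTP call. Below is a minimal sketch of that pattern, assuming the golang-jwt library and an HS256 shared secret; the type and field names are illustrative, not Prysm's actual client code.

package example

import (
	"fmt"
	"net/http"
	"time"

	"github.com/golang-jwt/jwt/v4"
)

// jwtTransport signs a fresh short-lived token for every request and attaches
// it as a Bearer credential before delegating to the wrapped transport.
type jwtTransport struct {
	wrapped http.RoundTripper
	secret  []byte // shared secret, e.g. read from a file named by a jwt-secret style flag
}

func (t *jwtTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"iat": time.Now().Unix(), // issued-at claim
	})
	signed, err := token.SignedString(t.secret)
	if err != nil {
		return nil, fmt.Errorf("could not sign JWT: %w", err)
	}
	// Clone so the caller's request is not mutated by the transport.
	req = req.Clone(req.Context())
	req.Header.Set("Authorization", "Bearer "+signed)
	return t.wrapped.RoundTrip(req)
}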
terence tsao
e3963094d4 Sync with develop 2022-02-24 20:26:57 -08:00
james-prysm
6163e091a7 web3signer: fixes for e2e (#10281)
* fixing logs and caught bug in mappers

* Fix schema

* improving logging

* Update validator/keymanager/remote-web3signer/internal/client.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* adding logrus dependency

Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
2022-02-25 02:42:26 +00:00
terence tsao
a5240cf4b8 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-24 15:09:08 -08:00
terence tsao
2fb4ddcbe7 Engine API: add payload status handling and tests (#10282)
* Add status handling and tests

* Update client.go

* Fmt

* Update mock_engine_test.go

* Update client_test.go
2022-02-24 19:35:01 +00:00
Radosław Kapka
1c2e463a30 --api-timeout flag (#10260)
* `--api-timeout` flag

* simplify code

* review feedback

* better error handling

* better docs

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-02-24 18:01:37 +00:00
james-prysm
01e9125761 web3signer: url parsing bug (#10278)
* adding in fixes for url

* fixing gazelle

* fixing wrong keymanager kind

* adding required scheme to urls

* fixing another unit test

* removing unused file

* adding new commit to retrigger deepsource ci
2022-02-24 10:24:11 -06:00
terence tsao
02a088d93c Add validate_merge_block (#10273)
* Add

* Update pow_block.go

* Update BUILD.bazel

* Update beacon-chain/blockchain/pow_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* feedbacks

* Feedbacks

* Fmt

* Update BUILD.bazel

* Update BUILD.bazel

* Update pow_block_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-02-23 22:41:11 +00:00
terence tsao
8cadb2ac6f Update IsOptimistic to always false (#10276)
* Update IsOptimistic to always false

* Use epoch

* Update chain_info_test.go

* Update chain_info_test.go
2022-02-23 20:16:45 +00:00
terence tsao
78a90af679 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-23 10:11:49 -08:00
Potuz
be722604f7 Fix logarithm of 2 (#10275)
* Fix logarithm of 2

* add regression test

* Update encoding/ssz/merkleize.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-02-23 12:44:04 +00:00
Raul Jordan
3bb2acfc7d Update Web UI to v1.0.3 (#10264)
* update web UI version

* fixing format

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: James He <james@prysmaticlabs.com>
2022-02-22 18:26:15 +00:00
terence tsao
b280e796da Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-22 10:00:50 -08:00
terence tsao
7719356b69 Add opt sync bool to IsOptimistic (#10270)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-02-22 16:46:42 +00:00
Raul Jordan
75b9bdba7c Small Comment Fix in Exchanging Transition Config (#10271) 2022-02-22 16:16:02 +00:00
terence tsao
1e32cd5596 Sync with develop 2022-02-22 07:50:44 -08:00
terence tsao
72c1720704 Merge branch 'kiln' of github.com:prysmaticlabs/prysm into kiln 2022-02-22 07:45:21 -08:00
terence tsao
b6fd9e5315 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-22 07:45:06 -08:00
Nishant Das
fa1509c970 remove kiln flag here (#10269) 2022-02-22 06:48:39 -08:00
Nishant Das
525c818672 Remove SSZ Cache (#10256)
* remove ssz cache

* gaz

* lint

* analyze more

* fix
2022-02-22 17:27:51 +08:00
terence tsao
f9fbda80c2 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-21 08:18:37 -08:00
terence tsao
a55fdf8949 Use type string for total_difficulty (#10265)
* Use string for difficulty

* fix go

* fix test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-02-21 14:03:12 +00:00
Preston Van Loon
7f41b69281 Wrapper: Update block interface and reorganize fork logic (#10267)
* Add ssz.HashRoot interface composition to BeaconBlock interface, move fork-specific logic into its own files

* Remove needless underscore
2022-02-20 20:53:05 +00:00
terence tsao
9a56a5d101 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-20 10:38:43 -07:00
Preston Van Loon
c5189a6862 Deduplicate TestProposer_ProposeBlock_OK (#10266)
* Deduplicate tests for TestProposer_ProposeBlock

* remove erroneous save of block

* ensure block root is returned
2022-02-19 22:52:03 +00:00
Raul Jordan
5e8c49c871 Merge branch 'develop' into kiln 2022-02-19 13:22:44 -07:00
terence tsao
4c7daf7a1f Comment out optimistic status 2022-02-19 11:51:05 -07:00
Leo Lara
b4b976c28b Experimental prototype of Apple M1 processor support (#10192)
* Experimental prototype of Apple M1 processor support

* Enable Apple M1 compilation of herumi MCL by adding a precompiled library

* Re-enable nogo

* Fix by gazelle

* Update go.mod to reflect go 1.17.6 changes in WORKSPACE

* go mod tidy

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
2022-02-19 18:17:08 +00:00
Raul Jordan
c4454cae78 gaz 2022-02-19 11:32:13 -06:00
Raul Jordan
1cedf4ba9a json 2022-02-18 17:43:49 -07:00
Raul Jordan
4ce3da7ecc set string 2022-02-18 17:28:45 -07:00
Raul Jordan
70a6fc4222 marshal un 2022-02-18 17:21:28 -07:00
Raul Jordan
bb126a9829 str 2022-02-18 18:20:34 -06:00
Raul Jordan
99deee57d1 string total diff 2022-02-18 17:19:44 -07:00
terence tsao
4c23401a3b Log 2022-02-16 11:45:47 -08:00
terence tsao
9636fde1eb Sync 2022-02-16 07:51:34 -08:00
terence tsao
a424f523a1 Update json_marshal_unmarshal.go 2022-02-15 20:26:26 -08:00
terence tsao
68e75d5851 Fix marshal 2022-02-15 15:42:39 -08:00
terence tsao
176ea137ee Merge branch 'payload-pointers' into kiln 2022-02-15 14:55:47 -08:00
terence tsao
b15cd763b6 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-15 14:55:00 -08:00
terence tsao
8e78eae897 Update json_marshal_unmarshal.go 2022-02-15 11:19:44 -08:00
terence tsao
f6883f2aa9 Use pointers 2022-02-15 11:04:19 -08:00
terence tsao
19782d2563 Use pointers 2022-02-15 11:00:58 -08:00
terence tsao
032cf433c5 Use pointers 2022-02-15 11:00:21 -08:00
terence tsao
fa656a86a5 Logs to reproduce 2022-02-15 09:38:31 -08:00
terence tsao
5f414b3e82 Optimistic sync: prysm validator rpcs (#10200) 2022-02-15 07:25:02 -08:00
terence tsao
41b8b1a0f8 Merge branch 'kiln2' into kiln 2022-02-15 06:50:17 -08:00
terence tsao
1d36ecb98d Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-15 06:50:03 -08:00
terence tsao
80cd539297 Rm uncommented 2022-02-15 06:48:33 -08:00
terence tsao
f47b6af910 Done 2022-02-14 22:09:29 -08:00
terence tsao
443df77bb3 c 2022-02-14 18:25:59 -08:00
terence tsao
94fe3884a0 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-14 08:07:12 -08:00
terence tsao
7d6046276d Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-11 18:02:58 -08:00
terence tsao
4f77ad20c8 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-10 14:24:28 -08:00
terence tsao
1b5a6d4195 Add back get payload 2022-02-10 14:22:35 -08:00
terence tsao
eae0db383f Clean ups 2022-02-10 12:29:52 -08:00
terence tsao
b56bd9e9d8 Fix build 2022-02-10 12:21:35 -08:00
terence tsao
481d8847c2 Merge branch 'kintsugi' of github.com:prysmaticlabs/prysm into kiln 2022-02-10 08:54:35 -08:00
terence tsao
42d5416658 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-10 08:51:51 -08:00
terence tsao
a1d8833749 Logs and err handling 2022-02-10 08:47:23 -08:00
terence tsao
695389b7bb Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-10 08:04:48 -08:00
Potuz
a67b8610f0 Change optimistic logic (#10194) 2022-02-10 09:59:09 -03:00
terence tsao
29eceba4d2 delete deprecated client, update testnet flag 2022-02-09 16:05:42 -08:00
terence tsao
4c34e5d424 Sync with develop 2022-02-09 15:53:01 -08:00
terence tsao
569375286e Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-09 15:25:28 -08:00
terence tsao
924758a557 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-08 18:28:31 -08:00
terence tsao
eedcb529fd Merge commit '8eaf3919189cd6d5f51904d8e9d74995ab70d4ac' into kintsugi 2022-02-08 07:43:49 -08:00
terence tsao
30e796a4f1 Update proposer.go 2022-02-08 07:29:51 -08:00
terence tsao
ea6ca456e6 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-08 07:29:47 -08:00
terence tsao
4b75b991dd Fix merge transition block validation 2022-02-07 15:11:05 -08:00
Potuz
8eaf391918 allow optimistic sync 2022-02-07 17:35:17 -03:00
terence tsao
cbdb3c9e86 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-07 12:02:32 -08:00
Potuz
12754adddc Sync optimistically candidate blocks (#10193) 2022-02-07 07:22:45 -03:00
Potuz
08a5155ee3 Revert "Sync optimistically candidate blocks (#10193)"
This reverts commit f99a0419ef.
2022-02-07 07:20:40 -03:00
Potuz
f99a0419ef Sync optimistically candidate blocks (#10193) 2022-02-07 10:14:25 +00:00
terence tsao
4ad31f9c05 Sync with develop 2022-02-06 19:41:39 -08:00
terence tsao
26876d64d7 Clean ups 2022-02-04 14:20:37 -08:00
terence tsao
3450923661 Sync with develop 2022-02-04 10:08:46 -08:00
terence tsao
aba628b56b Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-28 13:59:31 -08:00
terence tsao
5effb92d11 Update mainnet_config.go 2022-01-27 11:40:45 -08:00
terence tsao
2b55368c99 sync with develop 2022-01-27 11:40:39 -08:00
terence tsao
327903b7bb Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-27 10:35:38 -08:00
terence tsao
77f815a39f Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-24 09:06:44 -08:00
terence tsao
80dc725412 sync with develop 2022-01-14 18:42:45 -08:00
terence tsao
263c18992e Update generate_keys.go 2022-01-13 15:26:54 -08:00
terence tsao
9e220f9052 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-13 15:22:58 -08:00
terence tsao
99878d104c Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-12 10:40:04 -08:00
terence tsao
a870bf7a74 Clean up after sync 2022-01-10 18:50:27 -08:00
terence tsao
dc42ff382f Sync with develop 2022-01-10 11:20:05 -08:00
terence tsao
53b78a38a3 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-10 11:07:23 -08:00
terence tsao
b45826e731 Speed up syncing, hide cosmetic errors 2022-01-04 10:15:40 -08:00
terence tsao
7b59ecac5e Sync with develop, fix payload nil check bug 2022-01-03 07:55:37 -08:00
terence tsao
9149178a9c Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-03 07:54:56 -08:00
terence tsao
51ef502b04 clean ups 2021-12-23 09:29:26 -08:00
terence tsao
8d891821ee Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-23 08:44:36 -08:00
terence tsao
762863ce6a Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-20 08:20:07 -08:00
terence tsao
41f5fa7524 visibility 2021-12-16 12:41:30 -08:00
terence tsao
09744bac70 correct gossip sizes this time 2021-12-16 11:57:17 -08:00
terence tsao
f5db847237 use merge gossip sizes 2021-12-16 11:15:00 -08:00
terence tsao
8600f70b0b sync with develop 2021-12-16 07:25:02 -08:00
terence tsao
6fe430de44 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-16 07:23:52 -08:00
Zahoor Mohamed
42a5f96d3f ReverseByteOrder function does not mess up the input 2021-12-15 22:19:50 +05:30
Mohamed Zahoor
e7f0fcf202 converting base fee to big endian format (#10018) 2021-12-15 06:41:06 -08:00
terence tsao
5ae564f1bf fix conflicts 2021-12-09 09:01:23 +01:00
terence tsao
719109c219 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-09 08:42:02 +01:00
terence tsao
64533a4b0c Merge branch 'kintsugi' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-08 17:27:01 +01:00
terence tsao
9fecd761d7 latest kintsugi execution api 2021-12-08 17:24:45 +01:00
Zahoor Mohamed
f84c95667c change EP field names 2021-12-08 21:52:03 +05:30
terence tsao
9af081797e Go mod tidy 2021-12-06 09:34:54 +01:00
terence tsao
e4e9f12c8b Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-06 09:24:49 +01:00
terence tsao
2f4e8beae6 Sync 2021-12-04 15:40:18 +01:00
terence tsao
81c7b90d26 Sync 2021-12-04 15:30:59 +01:00
Potuz
dd3d65ff18 Add v2 endpoint for merge blocks (#9802)
* Add V2 blocks endpoint for merge blocks

* Update beacon-chain/rpc/apimiddleware/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* go mod

* fix transactions

* Terence's comments

* add missing file

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-12-01 14:09:28 -03:00
terence tsao
ac5a227aeb Fix transactions root 2021-11-29 13:56:58 -08:00
terence tsao
33f4d5c3cc Fix a bug with loading mainnet state 2021-11-29 09:59:41 -08:00
terence tsao
67d7f8baee State pkg cleanup 2021-11-24 11:29:01 -08:00
terence tsao
3c54aef7b1 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-23 15:34:47 -08:00
terence tsao
938c28c42e Fix build 2021-11-23 14:55:31 -08:00
terence tsao
8ddb2c26c4 Merge commit '4858de787558c792b01aae44bc3902859b98fcac' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-23 14:35:39 -08:00
terence tsao
cf0e78c2f6 Handle merge test case for update balance 2021-11-23 09:56:38 -08:00
terence tsao
4c0b262fdc Fix state merge 2021-11-23 09:13:50 -08:00
terence tsao
33e675e204 Update config to devnet1 2021-11-23 08:21:44 -08:00
terence tsao
e599f6a8a1 Fix build 2021-11-22 19:58:00 -08:00
terence tsao
49c9ab9fda Clean up conflicts 2021-11-22 19:40:57 -08:00
terence tsao
f90dec287b Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-22 19:29:07 -08:00
terence tsao
12c36cff9d Update state_trie.go 2021-11-17 08:07:26 -08:00
terence tsao
bc565d9ee6 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-17 08:07:03 -08:00
terence tsao
db67d5bad8 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-15 11:07:42 -08:00
terence tsao
3bc0c2be54 Merge branch 'develop' into kintsugi 2021-11-15 09:42:21 -08:00
terence tsao
1bed9ef749 Sync with develop 2021-11-15 09:41:24 -08:00
terence tsao
ec772beeaf Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-15 09:35:29 -08:00
Mohamed Zahoor
56407dde02 Change Gossip message size and Chunk Size from 1 MB to 10 MB (#9860)
* change gossip size and chunk size after merge

* change ssz to accommodate both changes

* gofmt config file

* add testcase for merge MsgId

* Update beacon-chain/p2p/message_id.go

Change MB to MiB in comment

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* change function name from altairMsgID to postAltairMsgID

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-15 10:37:02 +05:30
terence tsao
445f17881e Fix bad hex conversion 2021-11-12 11:56:22 -08:00
terence tsao
183d40d8f1 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-11 09:36:26 -08:00
terence tsao
87bc6aa5e5 Manually override nil transaction field. M2 works 2021-11-09 16:06:01 -08:00
terence tsao
5b5065b01d Remove unused merge genesis state gen tool 2021-11-09 11:09:59 -08:00
terence tsao
ee1c567561 Remove secp256k1 2021-11-09 08:43:10 -08:00
terence tsao
ff1416c98d Update Kintsugi consensus implementations (#9872) 2021-11-08 21:26:58 -08:00
terence tsao
471c94031f Update spec test shas 2021-11-08 19:39:13 -08:00
terence tsao
9863fb3d6a All spec tests pass 2021-11-08 19:31:28 -08:00
kasey
f3c2d1a00b Kintsugi ssz (#9867) 2021-11-08 18:42:23 -08:00
terence tsao
5d8879a4df Update Kintsugi engine API (#9865) 2021-11-08 09:56:14 -08:00
terence tsao
abea0a11bc Update WORKSPACE 2021-11-05 12:06:19 -07:00
terence tsao
80ce1603bd Merge branch 'kintsugi' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-03 20:40:22 -07:00
terence tsao
ca478244e0 Add and use TBH_ACTIVATION_EPOCH 2021-11-03 20:39:51 -07:00
terence tsao
8a864b66a1 Add and use 2021-11-03 20:38:40 -07:00
terence tsao
72f3b9e84b Remove extraneous p2p condition 2021-11-03 19:17:12 -07:00
terence tsao
493e95060f Fix gossip and tx size limits for the merge part 1 2021-11-03 17:03:06 -07:00
terence tsao
e7e1ecd72f Update penalty params for Merge 2021-11-03 16:37:17 -07:00
terence tsao
c286ac8b87 Remove gas validations 2021-11-03 14:47:33 -07:00
terence tsao
bde315224c Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-11-03 12:52:50 -07:00
terence tsao
00520705bc Sync with develop 2021-11-02 20:52:33 -07:00
Zahoor Mohamed
c7fcd804d7 all gossip tests passing 2021-10-27 18:48:22 +05:30
terence tsao
985ac2e848 Update htrutils.go 2021-10-24 11:35:59 -07:00
terence tsao
f4a0e98926 Disable genesis ETH1.0 chain header logging 2021-10-19 22:13:59 -07:00
terence tsao
5f93ff10ea Merge: switch from go bindings to raw rpc calls (#9803) 2021-10-19 21:00:11 -07:00
terence tsao
544248f60f Go fmt 2021-10-18 22:38:57 -07:00
terence tsao
3b41968510 Merge branch 'merge-oct' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-18 22:38:21 -07:00
terence tsao
7fc418042a Disable deposit contract lookback 2021-10-18 22:38:09 -07:00
terence tsao
9a03946706 Disable contract lookback 2021-10-18 22:34:50 -07:00
terence tsao
33dd6dd5f2 Use proper receive block path for initial syncing 2021-10-18 21:28:16 -07:00
terence tsao
56542e1958 Correctly upgrade to merge state + object mapping fixes 2021-10-18 17:46:55 -07:00
terence tsao
e82d7b4c0b Use uint64 for ttd 2021-10-18 14:00:45 -07:00
terence tsao
6cb69d8ff0 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-18 09:26:42 -07:00
terence tsao
70b55a0191 Proper upgrade altair to merge state 2021-10-15 12:48:21 -07:00
terence tsao
50f4951194 Various fixes to pass all spec tests for Merge (#9777) 2021-10-14 15:34:31 -07:00
terence tsao
1a14f2368d Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-14 11:52:28 -07:00
terence tsao
bb8cad58f1 Update beacon_block.pb.go 2021-10-13 13:49:16 -07:00
terence tsao
05412c1f0e Update mainnet_config.go 2021-10-13 13:26:48 -07:00
terence tsao
b03441fed8 Fix finding terminal block hash calculation 2021-10-13 11:29:17 -07:00
terence tsao
fa7d7cef69 Merge: support terminal difficulty override (#9769) 2021-10-12 20:40:01 -07:00
terence tsao
1caa6c969f Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-12 09:43:42 -07:00
Kasey Kirkham
eeb7d5bbfb tell bazel about this new file 2021-10-08 13:31:57 -05:00
Kasey Kirkham
d7c7d150b1 separate ExecutionPayload/Header from codegen 2021-10-08 11:06:21 -05:00
Kasey Kirkham
63c4d2eb2b defensive nil check 2021-10-08 09:18:02 -05:00
Kasey Kirkham
9de1f694a0 restoring generated pb field ordering 2021-10-08 08:16:43 -05:00
terence tsao
8a79d06cbd Fix bazel build //... 2021-10-07 15:31:49 -07:00
terence tsao
5290ad93b8 Merge conflict. Sync with upstream 2021-10-07 15:07:29 -07:00
terence tsao
2128208ef7 M2 works with Geth 🎉 2021-10-07 14:57:20 -07:00
Kasey Kirkham
296323719c get rid of codegen garbage 2021-10-07 16:31:35 -05:00
Kasey Kirkham
5e9583ea85 noisy commit, restoring pb field order codegen 2021-10-07 15:59:28 -05:00
Zahoor Mohamed
17196e0f80 changes test cases per ssz changes 2021-10-08 01:39:30 +05:30
kasey
c50d54000d Merge union debugging (#9751) 2021-10-07 10:44:26 -07:00
terence tsao
85b3061d1b Update go commit 2021-10-07 10:10:46 -07:00
terence tsao
0146c5317a Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-07 09:06:55 -07:00
Zahoor Mohamed
fcbc48ffd9 fix finding Transactions size 2021-10-07 14:16:15 +05:30
terence tsao
76ee51af9d Interop merge beacon state 2021-10-06 17:22:47 -07:00
terence tsao
370b0b97ed Fix beacon chain build 2021-10-06 14:41:43 -07:00
terence tsao
990ebd3fe3 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-06 14:34:33 -07:00
Zahoor Mohamed
54449c72e8 Merge branch 'merge-oct' of https://github.com/prysmaticlabs/prysm into merge-oct 2021-10-06 23:53:43 +05:30
Zahoor Mohamed
1dbd0b98eb add merge specific checks when receiving a block from gossip 2021-10-06 23:53:24 +05:30
terence tsao
09c3896c6b Go fmt 2021-10-06 09:38:24 -07:00
terence tsao
d494845e19 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-06 09:36:13 -07:00
terence tsao
4d0c0f7234 Update todo strings 2021-10-05 14:43:01 -07:00
terence tsao
bfe570b1aa Merge branch 'merge-oct-net' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-05 14:41:24 -07:00
terence tsao
56db696823 Clean up and fix a test 2021-10-05 14:38:09 -07:00
terence tsao
d312e15db8 Clean up misc state store 2021-10-05 14:17:44 -07:00
terence tsao
907d4cf7e6 Clean up validator additions 2021-10-05 14:06:03 -07:00
terence tsao
891353d6ad Clean up beacon chain additions 2021-10-05 11:28:36 -07:00
terence tsao
0adc08660c Rest of the validator changes 2021-10-05 10:18:26 -07:00
terence tsao
de31425dcd Add proposer get execution payload helpers 2021-10-04 16:37:22 -07:00
terence tsao
2094e0f21f Update rpc service and proposer get block 2021-10-04 16:37:01 -07:00
terence tsao
2c6f554500 Update process_block.go 2021-10-04 10:45:56 -07:00
terence tsao
18a1e07711 Update and use forked go-ethereum with catalyst go binding 2021-10-04 10:45:39 -07:00
prestonvanloon
5e432f5aaa Use MariusVanDerWijden go-ethereum fork with latest catalyst updates 2021-10-03 22:05:21 -05:00
prestonvanloon
284e2696cb Merge branch 'rm-bazel-go-ethereum' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-03 21:52:13 -05:00
terence tsao
7547aaa6ce Fix build, update comments 2021-10-03 19:11:43 -07:00
prestonvanloon
953315c2cc fix geth e2e flags 2021-10-03 14:01:26 -05:00
terence tsao
9662d06b08 Update catalyst merge commit 2021-10-03 11:51:12 -07:00
prestonvanloon
ecaea26ace fix geth e2e flags 2021-10-03 13:31:52 -05:00
prestonvanloon
63819e2690 move vendor stuff to third_party so that go mod won't be mad anymore 2021-10-03 13:21:27 -05:00
prestonvanloon
a6d0cd06b3 Remove bazel-go-ethereum, use vendored libraries only 2021-10-03 13:11:50 -05:00
prestonvanloon
2dbe4f5e67 viz improvement 2021-10-03 13:11:26 -05:00
prestonvanloon
2689d6814d Add karalabe/usb 2021-10-03 13:11:16 -05:00
prestonvanloon
69a681ddc0 gaz 2021-10-03 12:29:17 -05:00
prestonvanloon
7f9f1fd36c Check in go-ethereum crypto/sepc256k1 package with proper build rules 2021-10-03 12:27:40 -05:00
terence tsao
57c97eb561 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-03 09:42:34 -07:00
terence tsao
f0f94a8193 Handle more version merge cases 2021-10-02 11:43:50 -07:00
Zahoor Mohamed
87b0bf2c2a fix more merge conflicts 2021-10-02 12:27:12 +05:30
Zahoor Mohamed
d8ad317dec fix merge conflicts 2021-10-02 12:19:29 +05:30
terence tsao
ab5f488cf4 Fix spectest merge fork 2021-10-01 16:27:21 -07:00
terence tsao
296d7464ad Add powchain execution methods 2021-10-01 16:07:33 -07:00
terence tsao
221c542e4f Go mod tidy and build 2021-10-01 13:43:57 -07:00
terence tsao
7ad32aaa96 Add execution caller engine interface 2021-10-01 12:59:04 -07:00
terence tsao
3dc0969c0c Point go-ethereum to https://github.com/ethereum/go-ethereum/pull/23607 2021-10-01 08:30:15 -07:00
Zahoor Mohamed
0e18e835c3 req/resp structure has not changed, so no need for a new version 2021-09-30 21:34:19 +05:30
terence tsao
8adfbfc382 Update sync_committee.go 2021-09-30 08:21:40 -07:00
Zahoor Mohamed
68b0b5e0ce add merge in fork watcher 2021-09-30 14:09:18 +05:30
terence tsao
eede309e0f Fix build 2021-09-29 16:26:13 -07:00
terence tsao
b11628dc53 Can configure flags 2021-09-29 16:04:39 -07:00
terence tsao
ea3ae22d3b Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-09-29 14:25:16 -07:00
terence tsao
02bb39ddeb Minor clean up to improve readability 2021-09-29 10:21:00 -07:00
terence tsao
1618c1f55d Fix comment 2021-09-29 07:57:43 -07:00
terence tsao
73c8493fd7 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-09-28 14:54:24 -07:00
terence tsao
a4f59a4f15 Forkchoice and upgrade changes 2021-09-28 14:53:11 -07:00
Zahoor Mohamed
3c497efdb8 Merge branch 'merge-oct' of https://github.com/prysmaticlabs/prysm into merge-oct-net 2021-09-28 21:58:22 +05:30
Zahoor Mohamed
9f5daafbb7 initial networking code 2021-09-28 20:02:47 +05:30
terence tsao
11d7ffdfa8 Add merge spec tests 2021-09-26 11:07:31 -07:00
terence tsao
c26b3305e6 Resolve conflict 2021-09-25 09:49:53 -07:00
terence tsao
38d8b63fbf Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-09-25 09:15:20 -07:00
terence tsao
aea67405c8 Add upgrade to merge path 2021-09-21 14:34:03 -07:00
terence tsao
57d830f8b3 Add wrapper, cloner and interface 2021-09-21 13:34:10 -07:00
terence tsao
ac4b1ef4ea Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-09-21 13:07:01 -07:00
terence tsao
1d32119f5a can process execution header 2021-09-20 17:08:53 -07:00
terence tsao
3540cc7b05 Add state v3 2021-09-16 21:31:08 -07:00
terence tsao
191e7767a6 Add beacon block and state protos 2021-09-16 16:15:55 -07:00
372 changed files with 17754 additions and 6706 deletions

View File

@@ -34,7 +34,7 @@ build --sandbox_tmpfs_path=/tmp
build --verbose_failures
build --announce_rc
build --show_progress_rate_limit=5
build --curses=yes --color=no
build --curses=no --color=no
build --keep_going
build --test_output=errors
build --flaky_test_attempts=5

View File

@@ -183,7 +183,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.16.4",
go_version = "1.17.6",
nogo = "@//:nogo",
)
@@ -349,9 +349,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "f196fe4367c2d2d01d36565c0dc6eecfa4f03adba1fc03a61d62953fce606e1f",
sha256 = "4797a7e594a5b1f4c1c8080701613f3ee451b01ec0861499ea7d9b60877a6b23",
urls = [
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v1.0.2/prysm-web-ui.tar.gz",
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v1.0.3/prysm-web-ui.tar.gz",
],
)

View File

@@ -3,6 +3,7 @@ package apimiddleware
import (
"net/http"
"reflect"
"time"
"github.com/gorilla/mux"
)
@@ -14,6 +15,7 @@ import (
type ApiProxyMiddleware struct {
GatewayAddress string
EndpointCreator EndpointFactory
Timeout time.Duration
router *mux.Router
}
@@ -120,7 +122,7 @@ func (m *ApiProxyMiddleware) WithMiddleware(path string) http.HandlerFunc {
WriteError(w, errJson, nil)
return
}
grpcResp, errJson := ProxyRequest(req)
grpcResp, errJson := m.ProxyRequest(req)
if errJson != nil {
WriteError(w, errJson, nil)
return

View File

@@ -5,10 +5,10 @@ import (
"encoding/json"
"io"
"io/ioutil"
"net"
"net/http"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/api/grpc"
@@ -75,11 +75,14 @@ func (m *ApiProxyMiddleware) PrepareRequestForProxying(endpoint Endpoint, req *h
}
// ProxyRequest proxies the request to grpc-gateway.
func ProxyRequest(req *http.Request) (*http.Response, ErrorJson) {
func (m *ApiProxyMiddleware) ProxyRequest(req *http.Request) (*http.Response, ErrorJson) {
// We do not use http.DefaultClient because it does not have any timeout.
netClient := &http.Client{Timeout: time.Minute * 2}
netClient := &http.Client{Timeout: m.Timeout}
grpcResp, err := netClient.Do(req)
if err != nil {
if err, ok := err.(net.Error); ok && err.Timeout() {
return nil, TimeoutError()
}
return nil, InternalServerErrorWithMessage(err, "could not proxy request")
}
if grpcResp == nil {
@@ -111,9 +114,14 @@ func HandleGrpcResponseError(errJson ErrorJson, resp *http.Response, respBody []
w.Header().Set(h, v)
}
}
// Set code to HTTP code because unmarshalled body contained gRPC code.
errJson.SetCode(resp.StatusCode)
WriteError(w, errJson, resp.Header)
// Handle gRPC timeout.
if resp.StatusCode == http.StatusGatewayTimeout {
WriteError(w, TimeoutError(), resp.Header)
} else {
// Set code to HTTP code because unmarshalled body contained gRPC code.
errJson.SetCode(resp.StatusCode)
WriteError(w, errJson, resp.Header)
}
}
return responseHasError, nil
}
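The hunk above replaces the hard-coded two-minute client timeout with the configurable m.Timeout and maps client-side timeouts to a dedicated timeout error. A stand-alone sketch of that pattern, using hypothetical names rather than the middleware's own:

package example

import (
	"net"
	"net/http"
	"time"
)

// doWithTimeout issues the request with a per-call timeout and reports whether
// a failure was a timeout, so the caller can answer 408 instead of 500.
func doWithTimeout(req *http.Request, timeout time.Duration) (*http.Response, bool, error) {
	client := &http.Client{Timeout: timeout} // http.DefaultClient has no timeout at all
	resp, err := client.Do(req)
	if err != nil {
		if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
			return nil, true, err // the request timed out
		}
		return nil, false, err // some other transport failure
	}
	return resp, false, nil
}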

View File

@@ -41,6 +41,13 @@ func InternalServerError(err error) *DefaultErrorJson {
}
}
func TimeoutError() *DefaultErrorJson {
return &DefaultErrorJson{
Message: "Request timeout",
Code: http.StatusRequestTimeout,
}
}
// StatusCode returns the error's underlying error code.
func (e *DefaultErrorJson) StatusCode() int {
return e.Code

View File

@@ -52,6 +52,7 @@ type config struct {
muxHandler MuxHandler
pbHandlers []*PbMux
router *mux.Router
timeout time.Duration
}
// Gateway is the gRPC gateway to serve HTTP JSON traffic as a proxy and forward it to the gRPC server.
@@ -248,6 +249,7 @@ func (g *Gateway) registerApiMiddleware() {
g.proxy = &apimiddleware.ApiProxyMiddleware{
GatewayAddress: g.cfg.gatewayAddr,
EndpointCreator: g.cfg.apiMiddlewareEndpointFactory,
Timeout: g.cfg.timeout,
}
log.Info("Starting API middleware")
g.proxy.Run(g.cfg.router)

View File

@@ -1,7 +1,10 @@
package gateway
import (
"time"
"github.com/gorilla/mux"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/prysmaticlabs/prysm/api/gateway/apimiddleware"
)
@@ -79,3 +82,12 @@ func WithApiMiddleware(endpointFactory apimiddleware.EndpointFactory) Option {
return nil
}
}
// WithTimeout allows changing the timeout value for API calls.
func WithTimeout(seconds uint64) Option {
return func(g *Gateway) error {
g.cfg.timeout = time.Second * time.Duration(seconds)
gwruntime.DefaultContextTimeout = time.Second * time.Duration(seconds)
return nil
}
}
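WithTimeout follows the functional-options pattern already used by this gateway package: each Option mutates the configuration while the Gateway is being assembled. A self-contained sketch of that pattern, assuming simplified Gateway and config types rather than the real ones:

package example

import "time"

type config struct {
	timeout time.Duration
}

type Gateway struct {
	cfg config
}

// Option mutates the Gateway configuration during construction.
type Option func(g *Gateway) error

// WithTimeout converts a whole-second flag value into the duration applied to
// every proxied API call.
func WithTimeout(seconds uint64) Option {
	return func(g *Gateway) error {
		g.cfg.timeout = time.Second * time.Duration(seconds)
		return nil
	}
}

// NewGateway applies each option in order, so later options can override
// earlier ones and all flag handling stays in one place.
func NewGateway(opts ...Option) (*Gateway, error) {
	g := &Gateway{}
	for _, opt := range opts {
		if err := opt(g); err != nil {
			return nil, err
		}
	}
	return g, nil
}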

View File

@@ -110,9 +110,8 @@ func TestLockUnlock_CleansUnused(t *testing.T) {
lock := NewMultilock("dog", "cat", "owl")
lock.Lock()
assert.Equal(t, 3, len(locks.list))
defer lock.Unlock()
lock.Unlock()
<-time.After(100 * time.Millisecond)
wg.Done()
}()
wg.Wait()

View File

@@ -7,7 +7,6 @@ go_library(
"error.go",
"head.go",
"head_sync_committee_info.go",
"info.go",
"init_sync_process_block.go",
"log.go",
"metrics.go",
@@ -50,12 +49,14 @@ go_library(
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
@@ -65,6 +66,7 @@ go_library(
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
@@ -72,7 +74,8 @@ go_library(
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_emicklei_dot//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
@@ -100,10 +103,10 @@ go_test(
"checktags_test.go",
"head_sync_committee_info_test.go",
"head_test.go",
"info_test.go",
"init_test.go",
"log_test.go",
"metrics_test.go",
"mock_engine_test.go",
"mock_test.go",
"optimistic_sync_test.go",
"pow_block_test.go",

View File

@@ -6,13 +6,14 @@ import (
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/time/slots"
"go.opencensus.io/trace"
)
@@ -52,12 +53,12 @@ type HeadFetcher interface {
HeadETH1Data() *ethpb.Eth1Data
HeadPublicKeyToValidatorIndex(pubKey [fieldparams.BLSPubkeyLength]byte) (types.ValidatorIndex, bool)
HeadValidatorIndexToPublicKey(ctx context.Context, index types.ValidatorIndex) ([fieldparams.BLSPubkeyLength]byte, error)
ProtoArrayStore() *protoarray.Store
ChainHeads() ([][32]byte, []types.Slot)
IsOptimistic(ctx context.Context) (bool, error)
IsOptimisticForRoot(ctx context.Context, root [32]byte, slot types.Slot) (bool, error)
IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error)
HeadSyncCommitteeFetcher
HeadDomainFetcher
ForkChoicer() forkchoice.ForkChoicer
}
// ForkFetcher retrieves the current fork information of the Ethereum beacon chain.
@@ -237,11 +238,6 @@ func (s *Service) HeadETH1Data() *ethpb.Eth1Data {
return s.head.state.Eth1Data()
}
// ProtoArrayStore returns the proto array store object.
func (s *Service) ProtoArrayStore() *protoarray.Store {
return s.cfg.ForkChoiceStore.Store()
}
// GenesisTime returns the genesis time of beacon chain.
func (s *Service) GenesisTime() time.Time {
return s.genesisTime
@@ -287,23 +283,7 @@ func (s *Service) IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, er
// ChainHeads returns all possible chain heads (leaves of fork choice tree).
// Heads roots and heads slots are returned.
func (s *Service) ChainHeads() ([][32]byte, []types.Slot) {
nodes := s.ProtoArrayStore().Nodes()
// Deliberate choice to not preallocate space for below.
// Heads cant be more than 2-3 in the worst case where pre-allocation will be 64 to begin with.
headsRoots := make([][32]byte, 0)
headsSlots := make([]types.Slot, 0)
nonExistentNode := ^uint64(0)
for _, node := range nodes {
// Possible heads have no children.
if node.BestDescendant() == nonExistentNode && node.BestChild() == nonExistentNode {
headsRoots = append(headsRoots, node.Root())
headsSlots = append(headsSlots, node.Slot())
}
}
return headsRoots, headsSlots
return s.cfg.ForkChoiceStore.Tips()
}
// HeadPublicKeyToValidatorIndex returns the validator index of the `pubkey` in current head state.
@@ -330,20 +310,34 @@ func (s *Service) HeadValidatorIndexToPublicKey(_ context.Context, index types.V
return v.PublicKey(), nil
}
// ForkChoicer returns the forkchoice interface
func (s *Service) ForkChoicer() forkchoice.ForkChoicer {
return s.cfg.ForkChoiceStore
}
// IsOptimistic returns true if the current head is optimistic.
func (s *Service) IsOptimistic(ctx context.Context) (bool, error) {
s.headLock.RLock()
defer s.headLock.RUnlock()
return s.cfg.ForkChoiceStore.Optimistic(ctx, s.head.root, s.head.slot)
if slots.ToEpoch(s.CurrentSlot()) < params.BeaconConfig().BellatrixForkEpoch {
return false, nil
}
return s.cfg.ForkChoiceStore.IsOptimistic(ctx, s.head.root)
}
// IsOptimisticForRoot takes the root and slot as aguments instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte, slot types.Slot) (bool, error) {
return s.cfg.ForkChoiceStore.Optimistic(ctx, root, slot)
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {
return s.cfg.ForkChoiceStore.IsOptimistic(ctx, root)
}
// SetGenesisTime sets the genesis time of beacon chain.
func (s *Service) SetGenesisTime(t time.Time) {
s.genesisTime = t
}
// ForkChoiceStore returns the fork choice store in the service
func (s *Service) ForkChoiceStore() forkchoice.ForkChoicer {
return s.cfg.ForkChoiceStore
}

View File

@@ -8,6 +8,7 @@ import (
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
@@ -40,6 +41,12 @@ func TestHeadRoot_Nil(t *testing.T) {
assert.DeepEqual(t, params.BeaconConfig().ZeroHash[:], headRoot, "Incorrect pre chain start value")
}
func TestService_ForkChoiceStore(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}}
p := c.ForkChoiceStore()
require.Equal(t, 0, int(p.FinalizedEpoch()))
}
func TestFinalizedCheckpt_CanRetrieve(t *testing.T) {
beaconDB := testDB.SetupDB(t)
@@ -277,27 +284,40 @@ func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
root = c.HeadGenesisValidatorsRoot()
require.DeepEqual(t, root[:], s.GenesisValidatorsRoot())
}
func TestService_ProtoArrayStore(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}}
p := c.ProtoArrayStore()
require.Equal(t, 0, int(p.FinalizedEpoch()))
}
func TestService_ChainHeads(t *testing.T) {
func TestService_ChainHeads_ProtoArray(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}}
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{}, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{}, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{}, [32]byte{}, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, [32]byte{}, 0, 0))
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0,
params.BeaconConfig().ZeroHash)}}
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, 0, 0, false))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 0, 0, false))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, 0, 0, false))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{}, 0, 0, false))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, 0, 0, false))
roots, slots := c.ChainHeads()
require.DeepEqual(t, [][32]byte{{'c'}, {'d'}, {'e'}}, roots)
require.DeepEqual(t, []types.Slot{102, 103, 104}, slots)
}
func TestService_ChainHeads_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}}
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, 0, 0, false))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 0, 0, false))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, 0, 0, false))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{}, 0, 0, false))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, 0, 0, false))
roots, slots := c.ChainHeads()
require.Equal(t, 3, len(roots))
rootMap := map[[32]byte]types.Slot{[32]byte{'c'}: 102, [32]byte{'d'}: 103, [32]byte{'e'}: 104}
for i, root := range roots {
slot, ok := rootMap[root]
require.Equal(t, true, ok)
require.Equal(t, slot, slots[i])
}
}
func TestService_HeadPublicKeyToValidatorIndex(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
@@ -356,24 +376,64 @@ func TestService_HeadValidatorIndexToPublicKeyNil(t *testing.T) {
require.Equal(t, [fieldparams.BLSPubkeyLength]byte{}, p)
}
func TestService_IsOptimistic(t *testing.T) {
func TestService_IsOptimistic_ProtoArray(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.BellatrixForkEpoch = 0
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}, head: &head{slot: 101, root: [32]byte{'b'}}}
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{}, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, 0, 0, true))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 0, 0, true))
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
}
func TestService_IsOptimisticForRoot(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}, head: &head{slot: 101, root: [32]byte{'b'}}}
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{}, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 0, 0))
func TestService_IsOptimistic_DoublyLinkedTree(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.BellatrixForkEpoch = 0
params.OverrideBeaconConfig(cfg)
opt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'}, 100)
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, 0, 0, true))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 0, 0, true))
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
}
func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
ctx := context.Background()
c := &Service{genesisTime: time.Now()}
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, false, opt)
}
func TestService_IsOptimisticForRoot_ProtoArray(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}, head: &head{slot: 101, root: [32]byte{'b'}}}
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, 0, 0, true))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 0, 0, true))
opt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'})
require.NoError(t, err)
require.Equal(t, true, opt)
}
func TestService_IsOptimisticForRoot_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, 0, 0, true))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 0, 0, true))
opt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'})
require.NoError(t, err)
require.Equal(t, true, opt)
}
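The optimistic-status accessors change shape here as well: IsOptimisticForRoot drops its slot argument, and IsOptimistic reports the status of the current head (always false before Bellatrix, per the test above). A brief usage sketch mirroring the wiring in these tests:

opt, err := c.IsOptimistic(ctx) // status of the current head block
require.NoError(t, err)
rootOpt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'}) // status of a specific known root
require.NoError(t, err)
require.Equal(t, true, opt && rootOpt)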


@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
@@ -77,8 +78,13 @@ func (s *Service) updateHead(ctx context.Context, balances []uint64) error {
if err != nil {
return err
}
s.cfg.ForkChoiceStore = protoarray.New(j.Epoch, f.Epoch, bytesutil.ToBytes32(f.Root))
if err := s.insertBlockToForkChoiceStore(ctx, jb.Block(), headStartRoot, f, j); err != nil {
if features.Get().EnableForkChoiceDoublyLinkedTree {
s.cfg.ForkChoiceStore = doublylinkedtree.New(j.Epoch, f.Epoch)
} else {
s.cfg.ForkChoiceStore = protoarray.New(j.Epoch, f.Epoch, bytesutil.ToBytes32(f.Root))
}
// TODO(10261) send optimistic status
if err := s.insertBlockToForkChoiceStore(ctx, jb.Block(), headStartRoot, f, j, false /* optimistic status */); err != nil {
return err
}
}


@@ -1,99 +0,0 @@
package blockchain
import (
"encoding/hex"
"fmt"
"net/http"
"github.com/emicklei/dot"
"github.com/prysmaticlabs/prysm/config/params"
)
const template = `<html>
<head>
<script src="//cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/viz.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/full.render.js"></script>
<body>
<script type="application/javascript">
var graph = ` + "`%s`;" + `
var viz = new Viz();
viz.renderSVGElement(graph) // reading the graph.
.then(function(element) {
document.body.appendChild(element); // appends to document.
})
.catch(error => {
// Create a new Viz instance (@see Caveats page for more info)
viz = new Viz();
// Possibly display the error
console.error(error);
});
</script>
</head>
</body>
</html>`
// TreeHandler is a handler to serve /tree page in metrics.
func (s *Service) TreeHandler(w http.ResponseWriter, r *http.Request) {
headState, err := s.HeadState(r.Context())
if err != nil {
log.WithError(err).Error("Could not get head state")
return
}
if headState == nil || headState.IsNil() {
if _, err := w.Write([]byte("Unavailable during initial syncing")); err != nil {
log.WithError(err).Error("Failed to render p2p info page")
}
}
nodes := s.cfg.ForkChoiceStore.Nodes()
graph := dot.NewGraph(dot.Directed)
graph.Attr("rankdir", "RL")
graph.Attr("labeljust", "l")
dotNodes := make([]*dot.Node, len(nodes))
avgBalance := uint64(averageBalance(headState.Balances()))
for i := len(nodes) - 1; i >= 0; i-- {
// Construct label for each node.
slot := fmt.Sprintf("%d", nodes[i].Slot())
weight := fmt.Sprintf("%d", nodes[i].Weight()/1e9) // Convert unit Gwei to unit ETH.
votes := fmt.Sprintf("%d", nodes[i].Weight()/1e9/avgBalance)
index := fmt.Sprintf("%d", i)
g := nodes[i].Graffiti()
graffiti := hex.EncodeToString(g[:8])
label := "slot: " + slot + "\n votes: " + votes + "\n weight: " + weight + "\n graffiti: " + graffiti
var dotN dot.Node
if nodes[i].Parent() != ^uint64(0) {
dotN = graph.Node(index).Box().Attr("label", label)
}
if nodes[i].Slot() == s.HeadSlot() &&
nodes[i].BestDescendant() == ^uint64(0) &&
nodes[i].Parent() != ^uint64(0) {
dotN = dotN.Attr("color", "green")
}
dotNodes[i] = &dotN
}
for i := len(nodes) - 1; i >= 0; i-- {
if nodes[i].Parent() != ^uint64(0) && nodes[i].Parent() < uint64(len(dotNodes)) {
graph.Edge(*dotNodes[i], *dotNodes[nodes[i].Parent()])
}
}
w.WriteHeader(http.StatusOK)
w.Header().Set("Content-Type", "text/html")
if _, err := fmt.Fprintf(w, template, graph.String()); err != nil {
log.WithError(err).Error("Failed to render p2p info page")
}
}
func averageBalance(balances []uint64) float64 {
total := uint64(0)
for i := 0; i < len(balances); i++ {
total += balances[i]
}
return float64(total) / float64(len(balances)) / float64(params.BeaconConfig().GweiPerEth)
}


@@ -1,50 +0,0 @@
package blockchain
import (
"context"
"net/http"
"net/http/httptest"
"testing"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
)
func TestService_TreeHandler(t *testing.T) {
req, err := http.NewRequest("GET", "/tree", nil)
require.NoError(t, err)
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetBalances([]uint64{params.BeaconConfig().GweiPerEth}))
fcs := protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
[32]byte{'a'},
)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
s, err := NewService(ctx, opts...)
require.NoError(t, err)
require.NoError(t, s.cfg.ForkChoiceStore.ProcessBlock(ctx, 0, [32]byte{'a'}, [32]byte{'g'}, [32]byte{'c'}, 0, 0))
require.NoError(t, s.cfg.ForkChoiceStore.ProcessBlock(ctx, 1, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'c'}, 0, 0))
s.setHead([32]byte{'a'}, wrapper.WrappedPhase0SignedBeaconBlock(util.NewBeaconBlock()), headState)
rr := httptest.NewRecorder()
handler := http.HandlerFunc(s.TreeHandler)
handler.ServeHTTP(rr, req)
assert.Equal(t, http.StatusOK, rr.Code)
}


@@ -0,0 +1,49 @@
package blockchain
import (
"context"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
)
type mockEngineService struct {
newPayloadError error
forkchoiceError error
blks map[[32]byte]*enginev1.ExecutionBlock
}
func (m *mockEngineService) NewPayload(context.Context, *enginev1.ExecutionPayload) ([]byte, error) {
return nil, m.newPayloadError
}
func (m *mockEngineService) ForkchoiceUpdated(context.Context, *enginev1.ForkchoiceState, *enginev1.PayloadAttributes) (*enginev1.PayloadIDBytes, []byte, error) {
return nil, nil, m.forkchoiceError
}
func (*mockEngineService) GetPayloadV1(
_ context.Context, _ enginev1.PayloadIDBytes,
) *enginev1.ExecutionPayload {
return nil
}
func (*mockEngineService) GetPayload(context.Context, [8]byte) (*enginev1.ExecutionPayload, error) {
return nil, nil
}
func (*mockEngineService) ExchangeTransitionConfiguration(context.Context, *enginev1.TransitionConfiguration) error {
return nil
}
func (*mockEngineService) LatestExecutionBlock(context.Context) (*enginev1.ExecutionBlock, error) {
return nil, nil
}
func (m *mockEngineService) ExecutionBlockByHash(_ context.Context, hash common.Hash) (*enginev1.ExecutionBlock, error) {
blk, ok := m.blks[common.BytesToHash(hash.Bytes())]
if !ok {
return nil, errors.New("block not found")
}
return blk, nil
}
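A short usage sketch of this mock, mirroring how the tests later in this diff wire it into the blockchain service under test (hash keys and difficulty strings are taken from those tests):

blks := map[[32]byte]*enginev1.ExecutionBlock{}
blks[[32]byte{'a'}] = &enginev1.ExecutionBlock{ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength), TotalDifficulty: "0x2"}
blks[[32]byte{'b'}] = &enginev1.ExecutionBlock{ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength), TotalDifficulty: "0x1"}
engine := &mockEngineService{blks: blks}
service.cfg.ExecutionEngineCaller = engine // any implementation of the engine caller interface works here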


@@ -25,11 +25,11 @@ func TestService_newSlot(t *testing.T) {
}
ctx := context.Background()
require.NoError(t, fcs.ProcessBlock(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, 0, 0)) // genesis
require.NoError(t, fcs.ProcessBlock(ctx, 32, [32]byte{'a'}, [32]byte{}, [32]byte{}, 0, 0)) // finalized
require.NoError(t, fcs.ProcessBlock(ctx, 64, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 0, 0)) // justified
require.NoError(t, fcs.ProcessBlock(ctx, 96, [32]byte{'c'}, [32]byte{'a'}, [32]byte{}, 0, 0)) // best justified
require.NoError(t, fcs.ProcessBlock(ctx, 97, [32]byte{'d'}, [32]byte{}, [32]byte{}, 0, 0)) // bad
require.NoError(t, fcs.ProcessBlock(ctx, 0, [32]byte{}, [32]byte{}, 0, 0, false)) // genesis
require.NoError(t, fcs.ProcessBlock(ctx, 32, [32]byte{'a'}, [32]byte{}, 0, 0, false)) // finalized
require.NoError(t, fcs.ProcessBlock(ctx, 64, [32]byte{'b'}, [32]byte{'a'}, 0, 0, false)) // justified
require.NoError(t, fcs.ProcessBlock(ctx, 96, [32]byte{'c'}, [32]byte{'a'}, 0, 0, false)) // best justified
require.NoError(t, fcs.ProcessBlock(ctx, 97, [32]byte{'d'}, [32]byte{}, 0, 0, false)) // bad
type args struct {
slot types.Slot


@@ -2,15 +2,140 @@ package blockchain
import (
"context"
"fmt"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/sirupsen/logrus"
)
// notifyForkchoiceUpdate signals the fork choice updates to the execution engine. The execution engine should:
// 1. Re-organize the execution payload chain and corresponding state to make head_block_hash the head.
// 2. Apply finality to the execution state: it irreversibly persists the chain of all execution payloads and the corresponding state, up to and including finalized_block_hash.
func (s *Service) notifyForkchoiceUpdate(ctx context.Context, headBlk block.BeaconBlock, finalizedRoot [32]byte) (*enginev1.PayloadIDBytes, error) {
if headBlk == nil || headBlk.IsNil() || headBlk.Body().IsNil() {
return nil, errors.New("nil head block")
}
// Must not call forkchoice updated until the transition conditions are met on the PoW network.
if isPreBellatrix(headBlk.Version()) {
return nil, nil
}
isExecutionBlk, err := blocks.ExecutionBlock(headBlk.Body())
if err != nil {
return nil, errors.Wrap(err, "could not determine if block is execution block")
}
if !isExecutionBlk {
return nil, nil
}
headPayload, err := headBlk.Body().ExecutionPayload()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
finalizedBlock, err := s.cfg.BeaconDB.Block(ctx, s.ensureRootNotZeros(finalizedRoot))
if err != nil {
return nil, errors.Wrap(err, "could not get finalized block")
}
var finalizedHash []byte
if isPreBellatrix(finalizedBlock.Block().Version()) {
finalizedHash = params.BeaconConfig().ZeroHash[:]
} else {
payload, err := finalizedBlock.Block().Body().ExecutionPayload()
if err != nil {
return nil, errors.Wrap(err, "could not get finalized block execution payload")
}
finalizedHash = payload.BlockHash
}
fcs := &enginev1.ForkchoiceState{
HeadBlockHash: headPayload.BlockHash,
SafeBlockHash: headPayload.BlockHash,
FinalizedBlockHash: finalizedHash,
}
// The payload attribute is only required when requesting a payload. Here we are only updating fork choice, so it is nil.
payloadID, _, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, nil /*payload attribute*/)
if err != nil {
switch err {
case v1.ErrAcceptedSyncingPayloadStatus:
log.WithFields(logrus.Fields{
"headSlot": headBlk.Slot(),
"headHash": fmt.Sprintf("%#x", bytesutil.Trunc(headPayload.BlockHash)),
"finalizedHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash)),
}).Info("Called fork choice updated with optimistic block")
return payloadID, nil
default:
return nil, errors.Wrap(err, "could not notify forkchoice update from execution engine")
}
}
return payloadID, nil
}
// notifyNewPayload signals the execution engine that a new execution payload is available to be processed.
func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int, header *ethpb.ExecutionPayloadHeader, postState state.BeaconState, blk block.SignedBeaconBlock) error {
if postState == nil {
return errors.New("pre and post states must not be nil")
}
// Execution payload is only supported in Bellatrix and beyond.
if isPreBellatrix(postState.Version()) {
return nil
}
if err := helpers.BeaconBlockIsNil(blk); err != nil {
return err
}
body := blk.Block().Body()
enabled, err := blocks.ExecutionEnabled(postState, blk.Block().Body())
if err != nil {
return errors.Wrap(err, "could not determine if execution is enabled")
}
if !enabled {
return nil
}
payload, err := body.ExecutionPayload()
if err != nil {
return errors.Wrap(err, "could not get execution payload")
}
_, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload)
if err != nil {
switch err {
case v1.ErrAcceptedSyncingPayloadStatus:
log.WithFields(logrus.Fields{
"slot": postState.Slot(),
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash)),
}).Info("Called new payload with optimistic block")
return nil
default:
return errors.Wrap(err, "could not validate execution payload from execution engine")
}
}
// During the transition event, the transition block should be verified for sanity.
if isPreBellatrix(preStateVersion) {
return nil
}
atTransition, err := blocks.IsMergeTransitionBlockUsingPayloadHeader(header, body)
if err != nil {
return errors.Wrap(err, "could not check if merge block is terminal")
}
if !atTransition {
return nil
}
return s.validateMergeBlock(ctx, blk)
}
// isPreBellatrix returns true if the input version is before the Bellatrix fork.
func isPreBellatrix(v int) bool {
return v == version.Phase0 || v == version.Altair
}
// optimisticCandidateBlock returns true if this block can be optimistically synced.
//
// Spec pseudocode definition:
@@ -33,21 +158,3 @@ func (s *Service) optimisticCandidateBlock(ctx context.Context, blk block.Beacon
}
return blocks.ExecutionBlock(jBlock.Block().Body())
}
// loadSyncedTips loads a previously saved synced Tips from DB
// if no synced tips are saved, then it creates one from the given
// root and slot number.
func (s *Service) loadSyncedTips(root [32]byte, slot types.Slot) error {
// Initialize synced tips
tips, err := s.cfg.BeaconDB.ValidatedTips(s.ctx)
if err != nil || len(tips) == 0 {
tips[root] = slot
if err != nil {
log.WithError(err).Warn("Could not read synced tips from DB, using finalized checkpoint as synced tip")
}
}
if err := s.cfg.ForkChoiceStore.SetSyncedTips(tips); err != nil {
return errors.Wrap(err, "could not set synced tips")
}
return nil
}


@@ -7,18 +7,327 @@ import (
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
engine "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/prysmaticlabs/prysm/time/slots"
)
func Test_NotifyForkchoiceUpdate(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
altairBlk, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlockAltair())
require.NoError(t, err)
altairBlkRoot, err := altairBlk.Block().HashTreeRoot()
require.NoError(t, err)
bellatrixBlk, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlockBellatrix())
require.NoError(t, err)
bellatrixBlkRoot, err := bellatrixBlk.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, altairBlk))
require.NoError(t, beaconDB.SaveBlock(ctx, bellatrixBlk))
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
tests := []struct {
name string
blk block.BeaconBlock
finalizedRoot [32]byte
newForkchoiceErr error
errString string
}{
{
name: "nil block",
errString: "nil head block",
},
{
name: "phase0 block",
blk: func() block.BeaconBlock {
b, err := wrapper.WrappedBeaconBlock(&ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}})
require.NoError(t, err)
return b
}(),
},
{
name: "altair block",
blk: func() block.BeaconBlock {
b, err := wrapper.WrappedBeaconBlock(&ethpb.BeaconBlockAltair{Body: &ethpb.BeaconBlockBodyAltair{}})
require.NoError(t, err)
return b
}(),
},
{
name: "not execution block",
blk: func() block.BeaconBlock {
b, err := wrapper.WrappedBeaconBlock(&ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
},
},
})
require.NoError(t, err)
return b
}(),
},
{
name: "happy case: finalized root is altair block",
blk: func() block.BeaconBlock {
b, err := wrapper.WrappedBeaconBlock(&ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
require.NoError(t, err)
return b
}(),
finalizedRoot: altairBlkRoot,
},
{
name: "happy case: finalized root is bellatrix block",
blk: func() block.BeaconBlock {
b, err := wrapper.WrappedBeaconBlock(&ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
require.NoError(t, err)
return b
}(),
finalizedRoot: bellatrixBlkRoot,
},
{
name: "forkchoice updated with optimistic block",
blk: func() block.BeaconBlock {
b, err := wrapper.WrappedBeaconBlock(&ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
require.NoError(t, err)
return b
}(),
newForkchoiceErr: engine.ErrAcceptedSyncingPayloadStatus,
finalizedRoot: bellatrixBlkRoot,
},
{
name: "forkchoice updated with invalid block",
blk: func() block.BeaconBlock {
b, err := wrapper.WrappedBeaconBlock(&ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
require.NoError(t, err)
return b
}(),
newForkchoiceErr: engine.ErrInvalidPayloadStatus,
finalizedRoot: bellatrixBlkRoot,
errString: "could not notify forkchoice update from execution engine: payload status is INVALID",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
engine := &mockEngineService{forkchoiceError: tt.newForkchoiceErr}
service.cfg.ExecutionEngineCaller = engine
_, err := service.notifyForkchoiceUpdate(ctx, tt.blk, tt.finalizedRoot)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
} else {
require.NoError(t, err)
}
})
}
}
func Test_NotifyNewPayload(t *testing.T) {
cfg := params.BeaconConfig()
cfg.TerminalTotalDifficulty = "2"
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
phase0State, _ := util.DeterministicGenesisState(t, 1)
altairState, _ := util.DeterministicGenesisStateAltair(t, 1)
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
},
}
bellatrixBlk, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
tests := []struct {
name string
preState state.BeaconState
postState state.BeaconState
blk block.SignedBeaconBlock
newPayloadErr error
errString string
}{
{
name: "phase 0 post state",
postState: phase0State,
preState: phase0State,
},
{
name: "altair post state",
postState: altairState,
preState: altairState,
},
{
name: "nil post state",
preState: phase0State,
errString: "pre and post states must not be nil",
},
{
name: "nil beacon block",
postState: bellatrixState,
preState: bellatrixState,
errString: "signed beacon block can't be nil",
},
{
name: "new payload with optimistic block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
newPayloadErr: engine.ErrAcceptedSyncingPayloadStatus,
},
{
name: "new payload with invalid block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
newPayloadErr: engine.ErrInvalidPayloadStatus,
errString: "could not validate execution payload from execution engine: payload status is INVALID",
},
{
name: "altair pre state",
postState: bellatrixState,
preState: altairState,
blk: bellatrixBlk,
},
{
name: "could not get merge block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
errString: "could not get merge block parent hash and total difficulty",
},
{
name: "not at merge transition",
postState: bellatrixState,
preState: bellatrixState,
blk: func() block.SignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
},
},
},
}
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
return b
}(),
},
{
name: "could not get merge block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
errString: "could not get merge block parent hash and total difficulty",
},
{
name: "happy case",
postState: bellatrixState,
preState: bellatrixState,
blk: func() block.SignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
return b
}(),
},
}
for _, tt := range tests {
engine := &mockEngineService{newPayloadError: tt.newPayloadErr, blks: map[[32]byte]*v1.ExecutionBlock{}}
engine.blks[[32]byte{'a'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
TotalDifficulty: "0x2",
}
engine.blks[[32]byte{'b'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength),
TotalDifficulty: "0x1",
}
service.cfg.ExecutionEngineCaller = engine
var payload *ethpb.ExecutionPayloadHeader
if tt.preState.Version() == version.Bellatrix {
payload, err = tt.preState.LatestExecutionPayloadHeader()
require.NoError(t, err)
}
err := service.notifyNewPayload(ctx, tt.preState.Version(), payload, tt.postState, tt.blk)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
} else {
require.NoError(t, err)
}
}
}
func Test_IsOptimisticCandidateBlock(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MainnetConfig())
@@ -115,7 +424,7 @@ func Test_IsOptimisticCandidateBlock(t *testing.T) {
blk.Block.Body.ExecutionPayload.StateRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.ReceiptsRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.LogsBloom = bytesutil.PadTo([]byte{'a'}, fieldparams.LogsBloomLength)
blk.Block.Body.ExecutionPayload.Random = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.BaseFeePerGas = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
wr, err := wrapper.WrappedBellatrixSignedBeaconBlock(blk)


@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
enginev1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
@@ -50,6 +51,14 @@ func WithChainStartFetcher(f powchain.ChainStartFetcher) Option {
}
}
// WithExecutionEngineCaller to call execution engine.
func WithExecutionEngineCaller(c enginev1.Caller) Option {
return func(s *Service) error {
s.cfg.ExecutionEngineCaller = c
return nil
}
}
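A hedged wiring sketch for this option, combined with the other options used throughout this diff; engineClient stands in for any concrete enginev1.Caller (for example the JSON-RPC engine API client, or the mock used in tests):

service, err := NewService(ctx,
	WithDatabase(beaconDB),
	WithStateGen(stategen.New(beaconDB)),
	WithForkChoiceStore(protoarray.New(0, 0, [32]byte{})),
	WithExecutionEngineCaller(engineClient), // hypothetical enginev1.Caller value
)
if err != nil {
	return err
}
_ = service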
// WithDepositCache for deposit lifecycle after chain inclusion.
func WithDepositCache(c *depositcache.DepositCache) Option {
return func(s *Service) error {


@@ -1,20 +1,132 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/holiman/uint256"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
)
// validates terminal pow block by comparing own total difficulty with parent's total difficulty.
// validateMergeBlock validates the terminal block hash in the event of a manual override before checking the total difficulty.
//
// def validate_merge_block(block: BeaconBlock) -> None:
// if TERMINAL_BLOCK_HASH != Hash32():
// # If `TERMINAL_BLOCK_HASH` is used as an override, the activation epoch must be reached.
// assert compute_epoch_at_slot(block.slot) >= TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH
// assert block.body.execution_payload.parent_hash == TERMINAL_BLOCK_HASH
// return
//
// pow_block = get_pow_block(block.body.execution_payload.parent_hash)
// # Check if `pow_block` is available
// assert pow_block is not None
// pow_parent = get_pow_block(pow_block.parent_hash)
// # Check if `pow_parent` is available
// assert pow_parent is not None
// # Check if `pow_block` is a valid terminal PoW block
// assert is_valid_terminal_pow_block(pow_block, pow_parent)
func (s *Service) validateMergeBlock(ctx context.Context, b block.SignedBeaconBlock) error {
if err := helpers.BeaconBlockIsNil(b); err != nil {
return err
}
payload, err := b.Block().Body().ExecutionPayload()
if err != nil {
return err
}
if payload == nil {
return errors.New("nil execution payload")
}
if err := validateTerminalBlockHash(b.Block().Slot(), payload); err != nil {
return errors.Wrap(err, "could not validate terminal block hash")
}
mergeBlockParentHash, mergeBlockTD, err := s.getBlkParentHashAndTD(ctx, payload.ParentHash)
if err != nil {
return errors.Wrap(err, "could not get merge block parent hash and total difficulty")
}
_, mergeBlockParentTD, err := s.getBlkParentHashAndTD(ctx, mergeBlockParentHash)
if err != nil {
return errors.Wrap(err, "could not get merge parent block total difficulty")
}
valid, err := validateTerminalBlockDifficulties(mergeBlockTD, mergeBlockParentTD)
if err != nil {
return err
}
if !valid {
return fmt.Errorf("invalid TTD, configTTD: %s, currentTTD: %s, parentTTD: %s",
params.BeaconConfig().TerminalTotalDifficulty, mergeBlockTD, mergeBlockParentTD)
}
log.WithFields(logrus.Fields{
"slot": b.Block().Slot(),
"mergeBlockHash": common.BytesToHash(payload.ParentHash).String(),
"mergeBlockParentHash": common.BytesToHash(mergeBlockParentHash).String(),
"terminalTotalDifficulty": params.BeaconConfig().TerminalTotalDifficulty,
"mergeBlockTotalDifficulty": mergeBlockTD,
"mergeBlockParentTotalDifficulty": mergeBlockParentTD,
}).Info("Validated terminal block")
return nil
}
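A worked example of the terminal-difficulty rule this function ultimately enforces (the is_valid_terminal_pow_block predicate quoted below), using the same numbers as the test fixtures in this diff, which override TERMINAL_TOTAL_DIFFICULTY to 2:

ttd := uint256.NewInt(2)      // configured terminal total difficulty
blockTD := uint256.NewInt(2)  // merge block "a": TotalDifficulty 0x2
parentTD := uint256.NewInt(1) // its parent "b": TotalDifficulty 0x1
// Valid terminal block: the block reaches the TTD while its parent has not.
isValidTerminal := blockTD.Cmp(ttd) >= 0 && parentTD.Cmp(ttd) < 0 // true
_ = isValidTerminal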
// getBlkParentHashAndTD retrieves the parent hash and total difficulty of the given block.
func (s *Service) getBlkParentHashAndTD(ctx context.Context, blkHash []byte) ([]byte, *uint256.Int, error) {
blk, err := s.cfg.ExecutionEngineCaller.ExecutionBlockByHash(ctx, common.BytesToHash(blkHash))
if err != nil {
return nil, nil, errors.Wrap(err, "could not get pow block")
}
if blk == nil {
return nil, nil, errors.New("pow block is nil")
}
blkTDBig, err := hexutil.DecodeBig(blk.TotalDifficulty)
if err != nil {
return nil, nil, errors.Wrap(err, "could not decode merge block total difficulty")
}
blkTDUint256, overflows := uint256.FromBig(blkTDBig)
if overflows {
return nil, nil, errors.New("total difficulty overflows")
}
return blk.ParentHash, blkTDUint256, nil
}
// validateTerminalBlockHash validates the terminal block hash override, if one is configured, against the merge block's parent hash.
// spec code:
// if TERMINAL_BLOCK_HASH != Hash32():
// # If `TERMINAL_BLOCK_HASH` is used as an override, the activation epoch must be reached.
// assert compute_epoch_at_slot(block.slot) >= TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH
// assert block.body.execution_payload.parent_hash == TERMINAL_BLOCK_HASH
// return
func validateTerminalBlockHash(blkSlot types.Slot, payload *enginev1.ExecutionPayload) error {
if bytesutil.ToBytes32(params.BeaconConfig().TerminalBlockHash.Bytes()) == [32]byte{} {
return nil
}
if params.BeaconConfig().TerminalBlockHashActivationEpoch > slots.ToEpoch(blkSlot) {
return errors.New("terminal block hash activation epoch not reached")
}
if !bytes.Equal(payload.ParentHash, params.BeaconConfig().TerminalBlockHash.Bytes()) {
return errors.New("parent hash does not match terminal block hash")
}
return nil
}
// validateTerminalBlockDifficulties validates the terminal PoW block by comparing its total difficulty with its parent's total difficulty.
//
// def is_valid_terminal_pow_block(block: PowBlock, parent: PowBlock) -> bool:
// is_total_difficulty_reached = block.total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
// is_parent_total_difficulty_valid = parent.total_difficulty < TERMINAL_TOTAL_DIFFICULTY
// return is_total_difficulty_reached and is_parent_total_difficulty_valid
func validTerminalPowBlock(currentDifficulty *uint256.Int, parentDifficulty *uint256.Int) (bool, error) {
func validateTerminalBlockDifficulties(currentDifficulty *uint256.Int, parentDifficulty *uint256.Int) (bool, error) {
b, ok := new(big.Int).SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
if !ok {
return false, errors.New("failed to parse terminal total difficulty")


@@ -1,12 +1,21 @@
package blockchain
import (
"context"
"fmt"
"math/big"
"testing"
"github.com/holiman/uint256"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/testing/require"
)
@@ -66,10 +75,10 @@ func Test_validTerminalPowBlock(t *testing.T) {
cfg := params.BeaconConfig()
cfg.TerminalTotalDifficulty = fmt.Sprint(tt.ttd)
params.OverrideBeaconConfig(cfg)
got, err := validTerminalPowBlock(tt.currentDifficulty, tt.parentDifficulty)
got, err := validateTerminalBlockDifficulties(tt.currentDifficulty, tt.parentDifficulty)
require.NoError(t, err)
if got != tt.want {
t.Errorf("validTerminalPowBlock() = %v, want %v", got, tt.want)
t.Errorf("validateTerminalBlockDifficulties() = %v, want %v", got, tt.want)
}
})
}
@@ -87,7 +96,115 @@ func Test_validTerminalPowBlockSpecConfig(t *testing.T) {
parent, of := uint256.FromBig(i)
require.Equal(t, of, false)
got, err := validTerminalPowBlock(current, parent)
got, err := validateTerminalBlockDifficulties(current, parent)
require.NoError(t, err)
require.Equal(t, true, got)
}
func Test_validateMergeBlock(t *testing.T) {
cfg := params.BeaconConfig()
cfg.TerminalTotalDifficulty = "2"
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
engine := &mockEngineService{blks: map[[32]byte]*enginev1.ExecutionBlock{}}
service.cfg.ExecutionEngineCaller = engine
engine.blks[[32]byte{'a'}] = &enginev1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
TotalDifficulty: "0x2",
}
engine.blks[[32]byte{'b'}] = &enginev1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength),
TotalDifficulty: "0x1",
}
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Slot: 1,
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &enginev1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.validateMergeBlock(ctx, b))
cfg.TerminalTotalDifficulty = "1"
params.OverrideBeaconConfig(cfg)
require.ErrorContains(t, "invalid TTD, configTTD: 1, currentTTD: 2, parentTTD: 1", service.validateMergeBlock(ctx, b))
}
func Test_getBlkParentHashAndTD(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
engine := &mockEngineService{blks: map[[32]byte]*enginev1.ExecutionBlock{}}
service.cfg.ExecutionEngineCaller = engine
h := [32]byte{'a'}
p := [32]byte{'b'}
td := "0x1"
engine.blks[h] = &enginev1.ExecutionBlock{
ParentHash: p[:],
TotalDifficulty: td,
}
parentHash, totalDifficulty, err := service.getBlkParentHashAndTD(ctx, h[:])
require.NoError(t, err)
require.Equal(t, p, bytesutil.ToBytes32(parentHash))
require.Equal(t, td, totalDifficulty.String())
_, _, err = service.getBlkParentHashAndTD(ctx, []byte{'c'})
require.ErrorContains(t, "could not get pow block: block not found", err)
engine.blks[h] = nil
_, _, err = service.getBlkParentHashAndTD(ctx, h[:])
require.ErrorContains(t, "pow block is nil", err)
engine.blks[h] = &enginev1.ExecutionBlock{
ParentHash: p[:],
TotalDifficulty: "1",
}
_, _, err = service.getBlkParentHashAndTD(ctx, h[:])
require.ErrorContains(t, "could not decode merge block total difficulty: hex string without 0x prefix", err)
engine.blks[h] = &enginev1.ExecutionBlock{
ParentHash: p[:],
TotalDifficulty: "0XFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
}
_, _, err = service.getBlkParentHashAndTD(ctx, h[:])
require.ErrorContains(t, "could not decode merge block total difficulty: hex number > 256 bits", err)
}
func Test_validateTerminalBlockHash(t *testing.T) {
require.NoError(t, validateTerminalBlockHash(1, &enginev1.ExecutionPayload{}))
cfg := params.BeaconConfig()
cfg.TerminalBlockHash = [32]byte{0x01}
params.OverrideBeaconConfig(cfg)
require.ErrorContains(t, "terminal block hash activation epoch not reached", validateTerminalBlockHash(1, &enginev1.ExecutionPayload{}))
cfg.TerminalBlockHashActivationEpoch = 0
params.OverrideBeaconConfig(cfg)
require.ErrorContains(t, "parent hash does not match terminal block hash", validateTerminalBlockHash(1, &enginev1.ExecutionPayload{}))
require.NoError(t, validateTerminalBlockHash(1, &enginev1.ExecutionPayload{ParentHash: cfg.TerminalBlockHash.Bytes()}))
}


@@ -8,6 +8,7 @@ import (
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
@@ -21,7 +22,7 @@ import (
"github.com/prysmaticlabs/prysm/time/slots"
)
func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -127,7 +128,113 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
}
}
func TestStore_OnAttestation_Ok(t *testing.T) {
func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(doublylinkedtree.New(0, 0)),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
_, err = blockTree1(t, beaconDB, []byte{'g'})
require.NoError(t, err)
BlkWithOutState := util.NewBeaconBlock()
BlkWithOutState.Block.Slot = 0
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(BlkWithOutState)))
BlkWithOutStateRoot, err := BlkWithOutState.Block.HashTreeRoot()
require.NoError(t, err)
BlkWithStateBadAtt := util.NewBeaconBlock()
BlkWithStateBadAtt.Block.Slot = 1
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(BlkWithStateBadAtt)))
BlkWithStateBadAttRoot, err := BlkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(100*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithStateBadAttRoot))
BlkWithValidState := util.NewBeaconBlock()
BlkWithValidState.Block.Slot = 2
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(BlkWithValidState)))
BlkWithValidStateRoot, err := BlkWithValidState.Block.HashTreeRoot()
require.NoError(t, err)
s, err = util.NewBeaconState()
require.NoError(t, err)
err = s.SetFork(&ethpb.Fork{
Epoch: 0,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
})
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithValidStateRoot))
tests := []struct {
name string
a *ethpb.Attestation
wantedErr string
}{
{
name: "attestation's data slot not aligned with target vote",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
wantedErr: "slot 32 does not match target epoch 0",
},
{
name: "no pre state for attestations's target block",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
wantedErr: "could not get pre state for epoch 0",
},
{
name: "process attestation doesn't match current epoch",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}}),
wantedErr: "target epoch 100 does not match current epoch",
},
{
name: "process nil attestation",
a: nil,
wantedErr: "attestation can't be nil",
},
{
name: "process nil field (a.Data) in attestation",
a: &ethpb.Attestation{},
wantedErr: "attestation's data can't be nil",
},
{
name: "process nil field (a.Target) in attestation",
a: &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
Target: nil,
Source: &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)},
},
AggregationBits: make([]byte, 1),
Signature: make([]byte, 96),
},
wantedErr: "attestation's target can't be nil",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := service.OnAttestation(ctx, tt.a)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {
assert.NoError(t, err)
}
})
}
}
func TestStore_OnAttestation_Ok_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -149,7 +256,33 @@ func TestStore_OnAttestation_Ok(t *testing.T) {
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(ctx, 0, tRoot, tRoot, tRoot, 1, 1))
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(ctx, 0, tRoot, tRoot, 1, 1, false))
require.NoError(t, service.OnAttestation(ctx, att[0]))
}
func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisState, pks := util.DeterministicGenesisState(t, 64)
service.SetGenesisTime(time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(ctx, 0, tRoot, tRoot, 1, 1, false))
require.NoError(t, service.OnAttestation(ctx, att[0]))
}
@@ -330,7 +463,7 @@ func TestVerifyBeaconBlock_OK(t *testing.T) {
assert.NoError(t, service.verifyBeaconBlock(ctx, d), "Did not receive the wanted error")
}
func TestVerifyFinalizedConsistency_InconsistentRoot(t *testing.T) {
func TestVerifyFinalizedConsistency_InconsistentRoot_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -361,6 +494,37 @@ func TestVerifyFinalizedConsistency_InconsistentRoot(t *testing.T) {
require.ErrorContains(t, "Root and finalized store are not consistent", err)
}
func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b32)))
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 1})
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b33)))
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
err = service.VerifyFinalizedConsistency(context.Background(), r33[:])
require.ErrorContains(t, "Root and finalized store are not consistent", err)
}
func TestVerifyFinalizedConsistency_OK(t *testing.T) {
ctx := context.Background()
@@ -407,8 +571,8 @@ func TestVerifyFinalizedConsistency_IsCanonical(t *testing.T) {
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(ctx, b32.Block.Slot, r32, [32]byte{}, [32]byte{}, 0, 0))
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(ctx, b33.Block.Slot, r33, r32, [32]byte{}, 0, 0))
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(ctx, b32.Block.Slot, r32, [32]byte{}, 0, 0, false))
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(ctx, b33.Block.Slot, r33, r32, 0, 0, false))
_, err = service.cfg.ForkChoiceStore.Head(ctx, 0, r32, []uint64{}, 0)
require.NoError(t, err)


@@ -21,6 +21,7 @@ import (
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/time/slots"
"go.opencensus.io/trace"
)
@@ -98,11 +99,24 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
return err
}
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
}
postState, err := transition.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
return err
}
if err := s.notifyNewPayload(ctx, preStateVersion, preStateHeader, postState, signed); err != nil {
return errors.Wrap(err, "could not verify new payload")
}
// TODO(10261) Check optimistic status
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState, false /* reg sync */, false /*optimistic sync*/); err != nil {
return err
}
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, blockRoot); err != nil {
return err
}
// We add a proposer score boost to fork choice for the block root if applicable, right after
// running a successful state transition for the block.
if err := s.cfg.ForkChoiceStore.BoostProposerRoot(
@@ -111,10 +125,6 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
return err
}
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState, false /* reg sync */); err != nil {
return err
}
// If slasher is configured, forward the attestations in the block via
// an event feed for processing.
if features.Get().EnableSlasher {
@@ -174,9 +184,8 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
if err := s.updateHead(ctx, balances); err != nil {
log.WithError(err).Warn("Could not update head")
}
if err := s.saveSyncedTipsDB(ctx); err != nil {
return errors.Wrap(err, "could not save synced tips")
if _, err := s.notifyForkchoiceUpdate(ctx, s.headBlock().Block(), bytesutil.ToBytes32(finalized.Root)); err != nil {
return err
}
if err := s.pruneCanonicalAttsFromPool(ctx, blockRoot, signed); err != nil {
@@ -250,33 +259,49 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
return s.handleEpochBoundary(ctx, postState)
}
func getStateVersionAndPayload(preState state.BeaconState) (int, *ethpb.ExecutionPayloadHeader, error) {
var preStateHeader *ethpb.ExecutionPayloadHeader
var err error
preStateVersion := preState.Version()
switch preStateVersion {
case version.Phase0, version.Altair:
default:
preStateHeader, err = preState.LatestExecutionPayloadHeader()
if err != nil {
return 0, nil, err
}
}
return preStateVersion, preStateHeader, nil
}
func (s *Service) onBlockBatch(ctx context.Context, blks []block.SignedBeaconBlock,
blockRoots [][32]byte) ([]*ethpb.Checkpoint, []*ethpb.Checkpoint, error) {
blockRoots [][32]byte) ([]*ethpb.Checkpoint, []*ethpb.Checkpoint, []bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
if len(blks) == 0 || len(blockRoots) == 0 {
return nil, nil, errors.New("no blocks provided")
return nil, nil, nil, errors.New("no blocks provided")
}
if err := helpers.BeaconBlockIsNil(blks[0]); err != nil {
return nil, nil, err
return nil, nil, nil, err
}
b := blks[0].Block()
// Retrieve incoming block's pre state.
if err := s.verifyBlkPreState(ctx, b); err != nil {
return nil, nil, err
return nil, nil, nil, err
}
preState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, bytesutil.ToBytes32(b.ParentRoot()))
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
if preState == nil || preState.IsNil() {
return nil, nil, fmt.Errorf("nil pre state for slot %d", b.Slot())
return nil, nil, nil, fmt.Errorf("nil pre state for slot %d", b.Slot())
}
jCheckpoints := make([]*ethpb.Checkpoint, len(blks))
fCheckpoints := make([]*ethpb.Checkpoint, len(blks))
optimistic := make([]bool, len(blks))
sigSet := &bls.SignatureBatch{
Signatures: [][]byte{},
PublicKeys: []bls.PublicKey{},
@@ -285,59 +310,67 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []block.SignedBeaconBlo
var set *bls.SignatureBatch
boundaries := make(map[[32]byte]state.BeaconState)
for i, b := range blks {
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return nil, nil, nil, err
}
set, preState, err = transition.ExecuteStateTransitionNoVerifyAnySig(ctx, preState, b)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
if err := s.notifyNewPayload(ctx, preStateVersion, preStateHeader, preState, b); err != nil {
return nil, nil, nil, err
}
// Save potential boundary states.
if slots.IsEpochStart(preState.Slot()) {
boundaries[blockRoots[i]] = preState.Copy()
if err := s.handleEpochBoundary(ctx, preState); err != nil {
return nil, nil, errors.Wrap(err, "could not handle epoch boundary state")
return nil, nil, nil, errors.Wrap(err, "could not handle epoch boundary state")
}
}
jCheckpoints[i] = preState.CurrentJustifiedCheckpoint()
fCheckpoints[i] = preState.FinalizedCheckpoint()
sigSet.Join(set)
}
verify, err := sigSet.Verify()
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
if !verify {
return nil, nil, errors.New("batch block signature verification failed")
return nil, nil, nil, errors.New("batch block signature verification failed")
}
for r, st := range boundaries {
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
return nil, nil, err
return nil, nil, nil, err
}
}
// Also save the last post state, to be used as the pre state for the next batch.
lastB := blks[len(blks)-1]
lastBR := blockRoots[len(blockRoots)-1]
if err := s.cfg.StateGen.SaveState(ctx, lastBR, preState); err != nil {
return nil, nil, err
return nil, nil, nil, err
}
if err := s.saveHeadNoDB(ctx, lastB, lastBR, preState); err != nil {
return nil, nil, err
return nil, nil, nil, err
}
return fCheckpoints, jCheckpoints, nil
return fCheckpoints, jCheckpoints, optimistic, nil
}
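Because onBlockBatch now also returns a per-block optimistic slice, its caller in the initial-sync path (not part of this diff) has to thread that slice through to handleBlockAfterBatchVerify below. A hypothetical sketch of the updated call-site shape:

fCps, jCps, optimistic, err := s.onBlockBatch(ctx, blks, blockRoots)
if err != nil {
	return err
}
for i, b := range blks {
	if err := s.handleBlockAfterBatchVerify(ctx, b, blockRoots[i], fCps[i], jCps[i], optimistic[i]); err != nil {
		return err
	}
}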
// handleBlockAfterBatchVerify handles a block after its batch has been verified, saving the block and
// its state summary and splitting them off to the relevant hot/cold storage.
func (s *Service) handleBlockAfterBatchVerify(ctx context.Context, signed block.SignedBeaconBlock,
blockRoot [32]byte, fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
blockRoot [32]byte, fCheckpoint, jCheckpoint *ethpb.Checkpoint, optimistic bool) error {
b := signed.Block()
s.saveInitSyncBlock(blockRoot, signed)
if err := s.insertBlockToForkChoiceStore(ctx, b, blockRoot, fCheckpoint, jCheckpoint); err != nil {
if err := s.insertBlockToForkChoiceStore(ctx, b, blockRoot, fCheckpoint, jCheckpoint, optimistic); err != nil {
return err
}
if err := s.saveSyncedTipsDB(ctx); err != nil {
return errors.Wrap(err, "could not save synced tips")
if _, err := s.notifyForkchoiceUpdate(ctx, b, bytesutil.ToBytes32(fCheckpoint.Root)); err != nil {
return err
}
if err := s.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: signed.Block().Slot(),
Root: blockRoot[:],
@@ -422,13 +455,13 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
// This feeds the block and the block's attestations into the fork choice store. It allows the fork choice store
// to gain information on the most current chain.
func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Context, blk block.BeaconBlock, root [32]byte,
st state.BeaconState) error {
st state.BeaconState, optimistic bool) error {
ctx, span := trace.StartSpan(ctx, "blockChain.insertBlockAndAttestationsToForkChoiceStore")
defer span.End()
fCheckpoint := st.FinalizedCheckpoint()
jCheckpoint := st.CurrentJustifiedCheckpoint()
if err := s.insertBlockToForkChoiceStore(ctx, blk, root, fCheckpoint, jCheckpoint); err != nil {
if err := s.insertBlockToForkChoiceStore(ctx, blk, root, fCheckpoint, jCheckpoint, optimistic); err != nil {
return err
}
// Feed in block's attestations to fork choice store.
@@ -447,15 +480,16 @@ func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Contex
}
func (s *Service) insertBlockToForkChoiceStore(ctx context.Context, blk block.BeaconBlock,
root [32]byte, fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
root [32]byte, fCheckpoint, jCheckpoint *ethpb.Checkpoint, optimistic bool) error {
// TODO(10261) check whether the blocks are optimistic when filling fork choice
if err := s.fillInForkChoiceMissingBlocks(ctx, blk, fCheckpoint, jCheckpoint); err != nil {
return err
}
// Feed in block to fork choice store.
if err := s.cfg.ForkChoiceStore.ProcessBlock(ctx,
blk.Slot(), root, bytesutil.ToBytes32(blk.ParentRoot()), bytesutil.ToBytes32(blk.Body().Graffiti()),
blk.Slot(), root, bytesutil.ToBytes32(blk.ParentRoot()),
jCheckpoint.Epoch,
fCheckpoint.Epoch); err != nil {
fCheckpoint.Epoch, optimistic); err != nil {
return errors.Wrap(err, "could not process block for proto array fork choice")
}
return nil
@@ -463,7 +497,7 @@ func (s *Service) insertBlockToForkChoiceStore(ctx context.Context, blk block.Be
// This saves post state info to DB or cache. This also saves post state info to fork choice store.
// Post state info consists of processed block and state. Do not call this method unless the block and state are verified.
func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b block.SignedBeaconBlock, st state.BeaconState, initSync bool) error {
func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b block.SignedBeaconBlock, st state.BeaconState, initSync bool, optimistic bool) error {
ctx, span := trace.StartSpan(ctx, "blockChain.savePostStateInfo")
defer span.End()
if initSync {
@@ -474,7 +508,7 @@ func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b block.Sig
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
return errors.Wrap(err, "could not save state")
}
if err := s.insertBlockAndAttestationsToForkChoiceStore(ctx, b.Block(), r, st); err != nil {
if err := s.insertBlockAndAttestationsToForkChoiceStore(ctx, b.Block(), r, st, optimistic); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", b.Block().Slot())
}
return nil
@@ -509,12 +543,3 @@ func (s *Service) pruneCanonicalAttsFromPool(ctx context.Context, r [32]byte, b
}
return nil
}
// Saves synced and validated tips to DB.
func (s *Service) saveSyncedTipsDB(ctx context.Context) error {
tips := s.cfg.ForkChoiceStore.SyncedTips()
if len(tips) == 0 {
return nil
}
return s.cfg.BeaconDB.UpdateValidatedTips(ctx, tips)
}


@@ -367,9 +367,9 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, blk block.B
b := pendingNodes[i]
r := pendingRoots[i]
if err := s.cfg.ForkChoiceStore.ProcessBlock(ctx,
b.Slot(), r, bytesutil.ToBytes32(b.ParentRoot()), bytesutil.ToBytes32(b.Body().Graffiti()),
b.Slot(), r, bytesutil.ToBytes32(b.ParentRoot()),
jCheckpoint.Epoch,
fCheckpoint.Epoch); err != nil {
fCheckpoint.Epoch, false /* optimistic status */); err != nil {
return errors.Wrap(err, "could not process block for proto array fork choice")
}
}


@@ -16,6 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
@@ -27,13 +28,14 @@ import (
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
prysmTime "github.com/prysmaticlabs/prysm/time"
)
func TestStore_OnBlock(t *testing.T) {
func TestStore_OnBlock_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -130,7 +132,122 @@ func TestStore_OnBlock(t *testing.T) {
}
}
func TestStore_OnBlockBatch(t *testing.T) {
func TestStore_OnBlock_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
assert.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
validGenesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
random := util.NewBeaconBlock()
random.Block.Slot = 1
random.Block.ParentRoot = validGenesisRoot[:]
assert.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(random)))
randomParentRoot, err := random.Block.HashTreeRoot()
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: st.Slot(), Root: randomParentRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), randomParentRoot))
randomParentRoot2 := roots[1]
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: st.Slot(), Root: randomParentRoot2}))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), bytesutil.ToBytes32(randomParentRoot2)))
tests := []struct {
name string
blk *ethpb.SignedBeaconBlock
s state.BeaconState
time uint64
wantErrString string
}{
{
name: "parent block root does not have a state",
blk: util.NewBeaconBlock(),
s: st.Copy(),
wantErrString: "could not reconstruct parent state",
},
{
name: "block is from the future",
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.ParentRoot = randomParentRoot2
b.Block.Slot = params.BeaconConfig().FarFutureSlot
return b
}(),
s: st.Copy(),
wantErrString: "is in the far distant future",
},
{
name: "could not get finalized block",
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.ParentRoot = randomParentRoot[:]
return b
}(),
s: st.Copy(),
wantErrString: "is not a descendant of the current finalized block",
},
{
name: "same slot as finalized block",
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.Slot = 0
b.Block.ParentRoot = randomParentRoot2
return b
}(),
s: st.Copy(),
wantErrString: "block is equal or earlier than finalized block, slot 0 < slot 0",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: validGenesisRoot[:]})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{Root: validGenesisRoot[:]})
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: roots[0]})
service.store.SetPrevFinalizedCheckpt(&ethpb.Checkpoint{Root: validGenesisRoot[:]})
root, err := tt.blk.Block.HashTreeRoot()
assert.NoError(t, err)
err = service.onBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(tt.blk), root)
assert.ErrorContains(t, tt.wantErrString, err)
})
}
}
func TestStore_OnBlock_ProposerBoostEarly(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
opts := []Option{
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.BoostProposerRoot(ctx, 0, [32]byte{'A'}, time.Now()))
_, err = service.cfg.ForkChoiceStore.Head(ctx, 0,
params.BeaconConfig().ZeroHash, []uint64{}, 0)
require.ErrorContains(t, "could not apply proposer boost score: invalid proposer boost root", err)
}
func TestStore_OnBlockBatch_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -173,6 +290,58 @@ func TestStore_OnBlockBatch(t *testing.T) {
blkRoots = append(blkRoots, root)
}
rBlock, err := blks[0].PbPhase0Block()
assert.NoError(t, err)
rBlock.Block.ParentRoot = gRoot[:]
require.NoError(t, beaconDB.SaveBlock(context.Background(), blks[0]))
require.NoError(t, service.cfg.StateGen.SaveState(ctx, blkRoots[0], firstState))
_, _, _, err = service.onBlockBatch(ctx, blks[1:], blkRoots[1:])
require.NoError(t, err)
}
func TestStore_OnBlockBatch_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
assert.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.saveInitSyncBlock(gRoot, wrapper.WrappedPhase0SignedBeaconBlock(genesis))
st, keys := util.DeterministicGenesisState(t, 64)
bState := st.Copy()
var blks []block.SignedBeaconBlock
var blkRoots [][32]byte
var firstState state.BeaconState
for i := 1; i < 10; i++ {
b, err := util.GenerateFullBlock(bState, keys, util.DefaultBlockGenConfig(), types.Slot(i))
require.NoError(t, err)
bState, err = transition.ExecuteStateTransition(ctx, bState, wrapper.WrappedPhase0SignedBeaconBlock(b))
require.NoError(t, err)
if i == 1 {
firstState = bState.Copy()
}
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
service.saveInitSyncBlock(root, wrapper.WrappedPhase0SignedBeaconBlock(b))
blks = append(blks, wrapper.WrappedPhase0SignedBeaconBlock(b))
blkRoots = append(blkRoots, root)
}
rBlock, err := blks[0].PbPhase0Block()
assert.NoError(t, err)
rBlock.Block.ParentRoot = gRoot[:]
@@ -215,7 +384,7 @@ func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
assert.Equal(t, true, update, "Should be able to update justified")
}
func TestShouldUpdateJustified_ReturnFalse(t *testing.T) {
func TestShouldUpdateJustified_ReturnFalse_ProtoArray(t *testing.T) {
ctx := context.Background()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
@@ -244,7 +413,36 @@ func TestShouldUpdateJustified_ReturnFalse(t *testing.T) {
assert.Equal(t, false, update, "Should not be able to update justified, received true")
}
func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
func TestShouldUpdateJustified_ReturnFalse_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
lastJustifiedBlk := util.NewBeaconBlock()
lastJustifiedBlk.Block.ParentRoot = bytesutil.PadTo([]byte{'G'}, 32)
lastJustifiedRoot, err := lastJustifiedBlk.Block.HashTreeRoot()
require.NoError(t, err)
newJustifiedBlk := util.NewBeaconBlock()
newJustifiedBlk.Block.ParentRoot = bytesutil.PadTo(lastJustifiedRoot[:], 32)
newJustifiedRoot, err := newJustifiedBlk.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(newJustifiedBlk)))
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(lastJustifiedBlk)))
diff := params.BeaconConfig().SlotsPerEpoch.Sub(1).Mul(params.BeaconConfig().SecondsPerSlot)
service.genesisTime = time.Unix(time.Now().Unix()-int64(diff), 0)
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: lastJustifiedRoot[:]})
update, err := service.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
require.NoError(t, err)
assert.Equal(t, false, update, "Should not be able to update justified, received true")
}
func TestCachedPreState_CanGetFromStateSummary_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -275,6 +473,37 @@ func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
require.NoError(t, service.verifyBlkPreState(ctx, wrapper.WrappedPhase0BeaconBlock(b.Block)))
}
func TestCachedPreState_CanGetFromStateSummary_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
s, err := v1.InitializeFromProto(&ethpb.BeaconState{Slot: 1, GenesisValidatorsRoot: params.BeaconConfig().ZeroHash[:]})
require.NoError(t, err)
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
assert.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.saveInitSyncBlock(gRoot, wrapper.WrappedPhase0SignedBeaconBlock(genesis))
b := util.NewBeaconBlock()
b.Block.Slot = 1
b.Block.ParentRoot = gRoot[:]
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: 1, Root: gRoot[:]}))
require.NoError(t, service.cfg.StateGen.SaveState(ctx, gRoot, s))
require.NoError(t, service.verifyBlkPreState(ctx, wrapper.WrappedPhase0BeaconBlock(b.Block)))
}
func TestCachedPreState_CanGetFromDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -347,7 +576,7 @@ func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
assert.Equal(t, types.Epoch(2), service.store.BestJustifiedCheckpt().Epoch, "Incorrect justified epoch in service")
}
func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
func TestFillForkChoiceMissingBlocks_CanSave_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -382,13 +611,54 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
assert.Equal(t, 5, len(service.cfg.ForkChoiceStore.Nodes()), "Miss match nodes")
assert.Equal(t, 5, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[4])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[6])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[8])), "Didn't save node")
}
func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
func TestFillForkChoiceMissingBlocks_CanSave_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: make([]byte, 32)})
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
validGenesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
beaconState, _ := util.DeterministicGenesisState(t, 32)
block := util.NewBeaconBlock()
block.Block.Slot = 9
block.Block.ParentRoot = roots[8]
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(block).Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
assert.Equal(t, 5, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[4])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[6])), "Didn't save node")
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(roots[8])), "Didn't save node")
}
func TestFillForkChoiceMissingBlocks_RootsMatch_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -423,7 +693,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
assert.Equal(t, 5, len(service.cfg.ForkChoiceStore.Nodes()), "Miss match nodes")
assert.Equal(t, 5, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// Ensure all roots and their respective blocks exist.
wantedRoots := [][]byte{roots[0], roots[3], roots[4], roots[6], roots[8]}
for i, rt := range wantedRoots {
@@ -432,7 +702,51 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
}
}
func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
func TestFillForkChoiceMissingBlocks_RootsMatch_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: make([]byte, 32)})
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
validGenesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
beaconState, _ := util.DeterministicGenesisState(t, 32)
block := util.NewBeaconBlock()
block.Block.Slot = 9
block.Block.ParentRoot = roots[8]
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(block).Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
assert.Equal(t, 5, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// Ensure all roots and their respective blocks exist.
wantedRoots := [][]byte{roots[0], roots[3], roots[4], roots[6], roots[8]}
for i, rt := range wantedRoots {
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(rt)), fmt.Sprintf("Didn't save node: %d", i))
assert.Equal(t, true, service.cfg.BeaconDB.HasBlock(context.Background(), bytesutil.ToBytes32(rt)))
}
}
func TestFillForkChoiceMissingBlocks_FilterFinalized_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -479,7 +793,60 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
require.NoError(t, err)
// There should be 2 nodes, block 65 and block 64.
assert.Equal(t, 2, len(service.cfg.ForkChoiceStore.Nodes()), "Miss match nodes")
assert.Equal(t, 2, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// Block with slot 63 should be in fork choice because it's less than finalized epoch 1.
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(r63), "Didn't save node")
}
func TestFillForkChoiceMissingBlocks_FilterFinalized_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = doublylinkedtree.New(0, 0)
// Set finalized epoch to 1.
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 1})
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
assert.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
validGenesisRoot, err := genesis.Block.HashTreeRoot()
assert.NoError(t, err)
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
// Define a tree branch, slot 63 <- 64 <- 65
b63 := util.NewBeaconBlock()
b63.Block.Slot = 63
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b63)))
r63, err := b63.Block.HashTreeRoot()
require.NoError(t, err)
b64 := util.NewBeaconBlock()
b64.Block.Slot = 64
b64.Block.ParentRoot = r63[:]
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b64)))
r64, err := b64.Block.HashTreeRoot()
require.NoError(t, err)
b65 := util.NewBeaconBlock()
b65.Block.Slot = 65
b65.Block.ParentRoot = r64[:]
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b65)))
beaconState, _ := util.DeterministicGenesisState(t, 32)
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(b65).Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// There should be 2 nodes, block 65 and block 64.
assert.Equal(t, 2, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
// Block with slot 63 should be in fork choice because it's less than finalized epoch 1.
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(r63), "Didn't save node")
@@ -668,7 +1035,7 @@ func TestAncestor_CanUseForkchoice(t *testing.T) {
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), [32]byte{}, 0, 0)) // Saves blocks to fork choice store.
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), 0, 0, false)) // Saves blocks to fork choice store.
}
r, err := service.ancestor(context.Background(), r200[:], 150)
@@ -713,7 +1080,7 @@ func TestAncestor_CanUseDB(t *testing.T) {
require.NoError(t, beaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(beaconBlock))) // Saves blocks to DB.
}
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(context.Background(), 200, r200, r200, [32]byte{}, 0, 0))
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(context.Background(), 200, r200, r200, 0, 0, false))
r, err := service.ancestor(context.Background(), r200[:], 150)
require.NoError(t, err)
@@ -913,6 +1280,48 @@ func TestOnBlock_CanFinalize(t *testing.T) {
require.Equal(t, f.Epoch, service.FinalizedCheckpt().Epoch)
}
func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.AltairForkEpoch = 1
config.BellatrixForkEpoch = 2
params.OverrideBeaconConfig(config)
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
depositCache, err := depositcache.New()
require.NoError(t, err)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithDepositCache(depositCache),
WithStateNotifier(&mock.MockStateNotifier{}),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
gs, keys := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
gBlk, err := service.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
testState := gs.Copy()
for i := types.Slot(1); i < params.BeaconConfig().SlotsPerEpoch; i++ {
blk, err := util.GenerateFullBlock(testState, keys, util.DefaultBlockGenConfig(), i)
require.NoError(t, err)
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(blk), r))
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
}
func TestInsertFinalizedDeposits(t *testing.T) {
ctx := context.Background()
opts := testServiceOptsWithDB(t)
@@ -997,48 +1406,52 @@ func TestRemoveBlockAttestationsInPool_NonCanonical(t *testing.T) {
require.Equal(t, 1, service.cfg.AttPool.AggregatedAttestationCount())
}
func TestService_saveSyncedTipsDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.ParentRoot = bytesutil.PadTo([]byte{'a'}, 32)
r1, err := b1.Block.HashTreeRoot()
require.NoError(t, err)
b100 := util.NewBeaconBlock()
b100.Block.Slot = 100
b100.Block.ParentRoot = r1[:]
r100, err := b100.Block.HashTreeRoot()
require.NoError(t, err)
b200 := util.NewBeaconBlock()
b200.Block.Slot = 200
b200.Block.ParentRoot = r1[:]
r200, err := b200.Block.HashTreeRoot()
require.NoError(t, err)
for _, b := range []*ethpb.SignedBeaconBlock{b1, b100, b200} {
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), [32]byte{}, 0, 0))
func Test_getStateVersionAndPayload(t *testing.T) {
tests := []struct {
name string
st state.BeaconState
version int
header *ethpb.ExecutionPayloadHeader
}{
{
name: "phase 0 state",
st: func() state.BeaconState {
s, _ := util.DeterministicGenesisState(t, 1)
return s
}(),
version: version.Phase0,
header: (*ethpb.ExecutionPayloadHeader)(nil),
},
{
name: "altair state",
st: func() state.BeaconState {
s, _ := util.DeterministicGenesisStateAltair(t, 1)
return s
}(),
version: version.Altair,
header: (*ethpb.ExecutionPayloadHeader)(nil),
},
{
name: "bellatrix state",
st: func() state.BeaconState {
s, _ := util.DeterministicGenesisStateBellatrix(t, 1)
require.NoError(t, s.SetLatestExecutionPayloadHeader(&ethpb.ExecutionPayloadHeader{
BlockNumber: 1,
}))
return s
}(),
version: version.Bellatrix,
header: &ethpb.ExecutionPayloadHeader{
BlockNumber: 1,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
version, header, err := getStateVersionAndPayload(tt.st)
require.NoError(t, err)
require.Equal(t, tt.version, version)
require.DeepEqual(t, tt.header, header)
})
}
require.NoError(t, service.cfg.ForkChoiceStore.UpdateSyncedTipsWithValidRoot(ctx, r100))
require.NoError(t, service.saveSyncedTipsDB(ctx))
savedTips, err := service.cfg.BeaconDB.ValidatedTips(ctx)
require.NoError(t, err)
require.Equal(t, 2, len(savedTips))
require.Equal(t, types.Slot(1), savedTips[r1])
require.Equal(t, types.Slot(100), savedTips[r100])
// Delete invalid root
require.NoError(t, service.cfg.ForkChoiceStore.UpdateSyncedTipsWithInvalidRoot(ctx, r200))
require.NoError(t, service.saveSyncedTipsDB(ctx))
savedTips, err = service.cfg.BeaconDB.ValidatedTips(ctx)
require.NoError(t, err)
require.Equal(t, 1, len(savedTips))
require.Equal(t, types.Slot(100), savedTips[r100])
}


@@ -111,7 +111,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(ctx, 0, tRoot, tRoot, tRoot, 1, 1))
require.NoError(t, service.cfg.ForkChoiceStore.ProcessBlock(ctx, 0, tRoot, tRoot, 1, 1, false))
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
service.processAttestations(ctx)
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations()))


@@ -77,7 +77,7 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []block.SignedBe
defer span.End()
// Apply state transition on the incoming newly received blockCopy without verifying its BLS contents.
fCheckpoints, jCheckpoints, err := s.onBlockBatch(ctx, blocks, blkRoots)
fCheckpoints, jCheckpoints, optimistic, err := s.onBlockBatch(ctx, blocks, blkRoots)
if err != nil {
err := errors.Wrap(err, "could not process block in batch")
tracing.AnnotateError(span, err)
@@ -86,10 +86,21 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []block.SignedBe
for i, b := range blocks {
blockCopy := b.Copy()
if err = s.handleBlockAfterBatchVerify(ctx, blockCopy, blkRoots[i], fCheckpoints[i], jCheckpoints[i]); err != nil {
// TODO(10261) check optimistic status
if err = s.handleBlockAfterBatchVerify(ctx, blockCopy, blkRoots[i], fCheckpoints[i], jCheckpoints[i], false /*optimistic status*/); err != nil {
tracing.AnnotateError(span, err)
return err
}
if !optimistic[i] {
root, err := b.Block().HashTreeRoot()
if err != nil {
return err
}
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, root); err != nil {
return err
}
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,


@@ -192,7 +192,7 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
// Verify fork choice has processed the block. (Genesis block and the new block)
assert.Equal(t, 2, len(s.cfg.ForkChoiceStore.Nodes()))
assert.Equal(t, 2, s.cfg.ForkChoiceStore.NodeCount())
}
func TestService_ReceiveBlockBatch(t *testing.T) {


@@ -21,15 +21,18 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
f "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/slashings"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
enginev1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
@@ -82,7 +85,9 @@ type config struct {
StateGen *stategen.State
SlasherAttestationsFeed *event.Feed
WeakSubjectivityCheckpt *ethpb.Checkpoint
BlockFetcher powchain.POWBlockFetcher
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller enginev1.Caller
}
// NewService instantiates a new block service instance that will
@@ -130,6 +135,7 @@ func (s *Service) Start() {
log.Fatal(err)
}
}
s.spawnProcessAttestationsRoutine(s.cfg.StateNotifier.StateFeed())
}
// Stop the blockchain service's main event loop and associated goroutines.
@@ -184,12 +190,13 @@ func (s *Service) startFromSavedState(saved state.BeaconState) error {
}
s.store = store.New(justified, finalized)
store := protoarray.New(justified.Epoch, finalized.Epoch, bytesutil.ToBytes32(finalized.Root))
s.cfg.ForkChoiceStore = store
if err := s.loadSyncedTips(originRoot, saved.Slot()); err != nil {
return err
var store f.ForkChoicer
if features.Get().EnableForkChoiceDoublyLinkedTree {
store = doublylinkedtree.New(justified.Epoch, finalized.Epoch)
} else {
store = protoarray.New(justified.Epoch, finalized.Epoch, bytesutil.ToBytes32(finalized.Root))
}
s.cfg.ForkChoiceStore = store
ss, err := slots.EpochStart(finalized.Epoch)
if err != nil {
@@ -221,8 +228,6 @@ func (s *Service) startFromSavedState(saved state.BeaconState) error {
},
})
s.spawnProcessAttestationsRoutine(s.cfg.StateNotifier.StateFeed())
return nil
}
@@ -331,7 +336,6 @@ func (s *Service) startFromPOWChain() error {
stateChannel := make(chan *feed.Event, 1)
stateSub := s.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
s.spawnProcessAttestationsRoutine(s.cfg.StateNotifier.StateFeed())
for {
select {
case event := <-stateChannel:
@@ -449,9 +453,8 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
genesisBlk.Block().Slot(),
genesisBlkRoot,
params.BeaconConfig().ZeroHash,
[32]byte{},
genesisCheckpoint.Epoch,
genesisCheckpoint.Epoch); err != nil {
genesisCheckpoint.Epoch, false /* optimistic status */); err != nil {
log.Fatalf("Could not process genesis block for fork choice: %v", err)
}
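As a hedged aside on the startup change above: both constructors return implementations of the shared ForkChoicer interface, so the feature flag only decides the backing store. The helper below is illustrative; the constructors and import paths are the ones shown in this diff.

// Illustrative sketch, not part of the commits above.
package example

import (
	types "github.com/prysmaticlabs/eth2-types"
	"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
	doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
	"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
)

// newForkChoiceStore picks the doubly linked tree store when the feature flag is
// on, and the proto array store (which also needs the finalized root) otherwise.
func newForkChoiceStore(useDoublyLinkedTree bool, jEpoch, fEpoch types.Epoch, fRoot [32]byte) forkchoice.ForkChoicer {
	if useDoublyLinkedTree {
		return doublylinkedtree.New(jEpoch, fEpoch)
	}
	return protoarray.New(jEpoch, fEpoch, fRoot)
}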


@@ -8,7 +8,6 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/store"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
@@ -20,6 +19,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
@@ -164,74 +164,6 @@ func TestChainStartStop_Initialized(t *testing.T) {
require.LogsContain(t, hook, "data already exists")
}
func TestChainStart_SyncedTipsInDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
chainService := setupBeaconChain(t, beaconDB)
genesisBlk := util.NewBeaconBlock()
blkRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesisBlk)))
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(1))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
chainService.cfg.FinalizedStateAtStartUp = s
tips := make(map[[32]byte]types.Slot)
tips[bytesutil.ToBytes32([]byte{'a'})] = 1
tips[bytesutil.ToBytes32([]byte{'b'})] = 2
require.NoError(t, beaconDB.UpdateValidatedTips(ctx, tips))
// Test the start function.
chainService.Start()
// Test synced Tips in DB
tips2 := chainService.cfg.ForkChoiceStore.SyncedTips()
require.Equal(t, len(tips2), len(tips))
for k, v := range tips {
v2, ok := tips2[k]
require.Equal(t, true, ok)
require.Equal(t, v, v2)
}
}
func TestChainStart_SyncedTipsNotInDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
chainService := setupBeaconChain(t, beaconDB)
genesisBlk := util.NewBeaconBlock()
blkRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesisBlk)))
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(1))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
chainService.cfg.FinalizedStateAtStartUp = s
// Test the start function.
chainService.Start()
// Test synced Tips in DB
tips := chainService.cfg.ForkChoiceStore.SyncedTips()
require.Equal(t, 1, len(tips))
slot, ok := tips[blkRoot]
require.Equal(t, true, ok)
require.Equal(t, types.Slot(1), slot)
}
func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
@@ -515,7 +447,7 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
}
}
func TestHasBlock_ForkChoiceAndDB(t *testing.T) {
func TestHasBlock_ForkChoiceAndDB_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
@@ -528,7 +460,26 @@ func TestHasBlock_ForkChoiceAndDB(t *testing.T) {
require.NoError(t, err)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wrapper.WrappedPhase0SignedBeaconBlock(block).Block(), r, beaconState))
require.NoError(t, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wrapper.WrappedPhase0SignedBeaconBlock(block).Block(), r, beaconState, false))
assert.Equal(t, false, s.hasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.hasBlock(ctx, r), "Should have block")
}
func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0), BeaconDB: beaconDB},
store: &store.Store{},
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]})
block := util.NewBeaconBlock()
r, err := block.Block.HashTreeRoot()
require.NoError(t, err)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wrapper.WrappedPhase0SignedBeaconBlock(block).Block(), r, beaconState, false))
assert.Equal(t, false, s.hasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.hasBlock(ctx, r), "Should have block")
@@ -582,7 +533,7 @@ func BenchmarkHasBlockDB(b *testing.B) {
}
}
func BenchmarkHasBlockForkChoiceStore(b *testing.B) {
func BenchmarkHasBlockForkChoiceStore_ProtoArray(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
@@ -596,7 +547,28 @@ func BenchmarkHasBlockForkChoiceStore(b *testing.B) {
bs := &ethpb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}}
beaconState, err := v1.InitializeFromProto(bs)
require.NoError(b, err)
require.NoError(b, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wrapper.WrappedPhase0SignedBeaconBlock(block).Block(), r, beaconState))
require.NoError(b, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wrapper.WrappedPhase0SignedBeaconBlock(block).Block(), r, beaconState, false))
b.ResetTimer()
for i := 0; i < b.N; i++ {
require.Equal(b, true, s.cfg.ForkChoiceStore.HasNode(r), "Block is not in fork choice store")
}
}
func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(0, 0), BeaconDB: beaconDB},
store: &store.Store{},
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]})
block := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := block.Block.HashTreeRoot()
require.NoError(b, err)
bs := &ethpb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}}
beaconState, err := v1.InitializeFromProto(bs)
require.NoError(b, err)
require.NoError(b, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wrapper.WrappedPhase0SignedBeaconBlock(block).Block(), r, beaconState, false))
b.ResetTimer()
for i := 0; i < b.N; i++ {


@@ -18,7 +18,7 @@ go_library(
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",


@@ -18,7 +18,7 @@ import (
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
@@ -32,34 +32,40 @@ var ErrNilState = errors.New("nil state")
// ChainService defines the mock interface for testing
type ChainService struct {
State state.BeaconState
Root []byte
Block block.SignedBeaconBlock
Optimistic bool
ValidAttestation bool
ValidatorsRoot [32]byte
PublicKey [fieldparams.BLSPubkeyLength]byte
FinalizedCheckPoint *ethpb.Checkpoint
CurrentJustifiedCheckPoint *ethpb.Checkpoint
PreviousJustifiedCheckPoint *ethpb.Checkpoint
BlocksReceived []block.SignedBeaconBlock
Slot *types.Slot // Pointer because 0 is a useful value, so checking against it can be incorrect.
Balance *precompute.Balance
Genesis time.Time
ValidatorsRoot [32]byte
CanonicalRoots map[[32]byte]bool
Fork *ethpb.Fork
ETH1Data *ethpb.Eth1Data
InitSyncBlockRoots map[[32]byte]bool
DB db.Database
State state.BeaconState
Block block.SignedBeaconBlock
VerifyBlkDescendantErr error
stateNotifier statefeed.Notifier
BlocksReceived []block.SignedBeaconBlock
SyncCommitteeIndices []types.CommitteeIndex
blockNotifier blockfeed.Notifier
opNotifier opfeed.Notifier
ValidAttestation bool
ForkChoiceStore *protoarray.Store
VerifyBlkDescendantErr error
Slot *types.Slot // Pointer because 0 is a useful value, so checking against it can be incorrect.
SyncCommitteeIndices []types.CommitteeIndex
Root []byte
SyncCommitteeDomain []byte
SyncSelectionProofDomain []byte
SyncContributionProofDomain []byte
PublicKey [fieldparams.BLSPubkeyLength]byte
SyncCommitteePubkeys [][]byte
InitSyncBlockRoots map[[32]byte]bool
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
}
// ForkChoicer mocks the same method in the chain service
func (s *ChainService) ForkChoicer() forkchoice.ForkChoicer {
return s.ForkChoiceStore
}
// StateNotifier mocks the same method in the chain service.
@@ -319,11 +325,6 @@ func (s *ChainService) HeadETH1Data() *ethpb.Eth1Data {
return s.ETH1Data
}
// ProtoArrayStore mocks the same method in the chain service.
func (s *ChainService) ProtoArrayStore() *protoarray.Store {
return s.ForkChoiceStore
}
// GenesisTime mocks the same method in the chain service.
func (s *ChainService) GenesisTime() time.Time {
return s.Genesis
@@ -442,10 +443,10 @@ func (s *ChainService) HeadSyncContributionProofDomain(_ context.Context, _ type
// IsOptimistic mocks the same method in the chain service.
func (s *ChainService) IsOptimistic(_ context.Context) (bool, error) {
return false, nil
return s.Optimistic, nil
}
// IsOptimisticForRoot mocks the same method in the chain service.
func (s *ChainService) IsOptimisticForRoot(_ context.Context, _ [32]byte, _ types.Slot) (bool, error) {
return false, nil
func (s *ChainService) IsOptimisticForRoot(_ context.Context, _ [32]byte) (bool, error) {
return s.Optimistic, nil
}
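For orientation, a hedged sketch of how a test might exercise the new Optimistic field on this mock; the test name is hypothetical, while the import paths and method signatures match the ones in this diff.

// Illustrative sketch, not part of the commits above.
package example

import (
	"context"
	"testing"

	mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
	"github.com/prysmaticlabs/prysm/testing/require"
)

func TestMockChain_OptimisticStatus(t *testing.T) {
	c := &mock.ChainService{Optimistic: true}

	opt, err := c.IsOptimistic(context.Background())
	require.NoError(t, err)
	require.Equal(t, true, opt)

	// The per-root variant now returns the same mocked flag and no longer takes a slot.
	optForRoot, err := c.IsOptimisticForRoot(context.Background(), [32]byte{'a'})
	require.NoError(t, err)
	require.Equal(t, true, optForRoot)
}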


@@ -23,6 +23,8 @@ import (
// Spec code:
// def is_merge_transition_complete(state: BeaconState) -> bool:
// return state.latest_execution_payload_header != ExecutionPayloadHeader()
//
// Deprecated: Use `IsMergeTransitionBlockUsingPayloadHeader` instead.
func MergeTransitionComplete(st state.BeaconState) (bool, error) {
h, err := st.LatestExecutionPayloadHeader()
if err != nil {
@@ -51,6 +53,16 @@ func MergeTransitionBlock(st state.BeaconState, body block.BeaconBlockBody) (boo
return ExecutionBlock(body)
}
// IsMergeTransitionBlockUsingPayloadHeader returns true if the input block is the terminal merge block.
// Terminal merge block must be associated with an empty payload header.
// This is an optimized version of MergeTransitionComplete where beacon state is not required as an argument.
func IsMergeTransitionBlockUsingPayloadHeader(h *ethpb.ExecutionPayloadHeader, body block.BeaconBlockBody) (bool, error) {
if !isEmptyHeader(h) {
return false, nil
}
return ExecutionBlock(body)
}
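// Illustrative sketch (not part of this change set): a caller that already holds
// the parent block's payload header can use the optimized check above and fall
// back to the state-based MergeTransitionBlock otherwise. The helper name is
// hypothetical; the two called functions are the ones defined in this file.
func checkMergeTransitionBlock(st state.BeaconState, parentHeader *ethpb.ExecutionPayloadHeader, body block.BeaconBlockBody) (bool, error) {
	if parentHeader != nil {
		// Skips loading the latest payload header out of the beacon state.
		return IsMergeTransitionBlockUsingPayloadHeader(parentHeader, body)
	}
	return MergeTransitionBlock(st, body)
}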
// ExecutionBlock returns whether the block has a non-empty ExecutionPayload.
//
// Spec code:
@@ -124,8 +136,8 @@ func ValidatePayload(st state.BeaconState, payload *enginev1.ExecutionPayload) e
return err
}
if !bytes.Equal(payload.Random, random) {
return errors.New("incorrect random")
if !bytes.Equal(payload.PrevRandao, random) {
return errors.New("incorrect prev randao")
}
t, err := slots.ToTime(st.GenesisTime(), st.Slot())
if err != nil {
@@ -201,7 +213,7 @@ func PayloadToHeader(payload *enginev1.ExecutionPayload) (*ethpb.ExecutionPayloa
StateRoot: bytesutil.SafeCopyBytes(payload.StateRoot),
ReceiptRoot: bytesutil.SafeCopyBytes(payload.ReceiptsRoot),
LogsBloom: bytesutil.SafeCopyBytes(payload.LogsBloom),
Random: bytesutil.SafeCopyBytes(payload.Random),
PrevRandao: bytesutil.SafeCopyBytes(payload.PrevRandao),
BlockNumber: payload.BlockNumber,
GasLimit: payload.GasLimit,
GasUsed: payload.GasUsed,
@@ -229,7 +241,7 @@ func isEmptyPayload(p *enginev1.ExecutionPayload) bool {
if !bytes.Equal(p.LogsBloom, make([]byte, fieldparams.LogsBloomLength)) {
return false
}
if !bytes.Equal(p.Random, make([]byte, fieldparams.RootLength)) {
if !bytes.Equal(p.PrevRandao, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(p.BaseFeePerGas, make([]byte, fieldparams.RootLength)) {
@@ -275,7 +287,7 @@ func isEmptyHeader(h *ethpb.ExecutionPayloadHeader) bool {
if !bytes.Equal(h.LogsBloom, make([]byte, fieldparams.LogsBloomLength)) {
return false
}
if !bytes.Equal(h.Random, make([]byte, fieldparams.RootLength)) {
if !bytes.Equal(h.PrevRandao, make([]byte, fieldparams.RootLength)) {
return false
}
if !bytes.Equal(h.BaseFeePerGas, make([]byte, fieldparams.RootLength)) {


@@ -78,7 +78,7 @@ func Test_MergeComplete(t *testing.T) {
name: "has random",
payload: func() *ethpb.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.Random = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
h.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
@@ -246,7 +246,7 @@ func Test_MergeBlock(t *testing.T) {
name: "empty header, payload has random",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.Random = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
p.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: emptyPayloadHeader(),
@@ -350,6 +350,185 @@ func Test_MergeBlock(t *testing.T) {
}
}
func Test_IsMergeTransitionBlockUsingPayloadHeader(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *ethpb.ExecutionPayloadHeader
want bool
}{
{
name: "empty header, empty payload",
payload: emptyPayload(),
header: emptyPayloadHeader(),
want: false,
},
{
name: "non-empty header, empty payload",
payload: emptyPayload(),
header: func() *ethpb.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: false,
},
{
name: "empty header, payload has parent hash",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has fee recipient",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.FeeRecipient = bytesutil.PadTo([]byte{'a'}, fieldparams.FeeRecipientLength)
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has state root",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.StateRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has receipt root",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.ReceiptsRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has logs bloom",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.LogsBloom = bytesutil.PadTo([]byte{'a'}, fieldparams.LogsBloomLength)
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has random",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has base fee",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.BaseFeePerGas = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has block hash",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has tx",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.Transactions = [][]byte{{'a'}}
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has extra data",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.ExtraData = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has block number",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.BlockNumber = 1
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has gas limit",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.GasLimit = 1
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has gas used",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.GasUsed = 1
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
{
name: "empty header, payload has timestamp",
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.Timestamp = 1
return p
}(),
header: emptyPayloadHeader(),
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload = tt.payload
body, err := wrapper.WrappedBellatrixBeaconBlockBody(blk.Block.Body)
require.NoError(t, err)
got, err := blocks.IsMergeTransitionBlockUsingPayloadHeader(tt.header, body)
require.NoError(t, err)
if got != tt.want {
t.Errorf("MergeTransitionBlock() got = %v, want %v", got, tt.want)
}
})
}
}
func Test_IsExecutionBlock(t *testing.T) {
tests := []struct {
name string
@@ -520,21 +699,21 @@ func Test_ValidatePayload(t *testing.T) {
name: "validate passes",
payload: func() *enginev1.ExecutionPayload {
h := emptyPayload()
h.Random = random
h.PrevRandao = random
h.Timestamp = uint64(ts.Unix())
return h
}(), err: nil,
},
{
name: "incorrect random",
name: "incorrect prev randao",
payload: emptyPayload(),
err: errors.New("incorrect random"),
err: errors.New("incorrect prev randao"),
},
{
name: "incorrect timestamp",
payload: func() *enginev1.ExecutionPayload {
h := emptyPayload()
h.Random = random
h.PrevRandao = random
h.Timestamp = 1
return h
}(),
@@ -568,21 +747,21 @@ func Test_ProcessPayload(t *testing.T) {
name: "process passes",
payload: func() *enginev1.ExecutionPayload {
h := emptyPayload()
h.Random = random
h.PrevRandao = random
h.Timestamp = uint64(ts.Unix())
return h
}(), err: nil,
},
{
name: "incorrect random",
name: "incorrect prev randao",
payload: emptyPayload(),
err: errors.New("incorrect random"),
err: errors.New("incorrect prev randao"),
},
{
name: "incorrect timestamp",
payload: func() *enginev1.ExecutionPayload {
h := emptyPayload()
h.Random = random
h.PrevRandao = random
h.Timestamp = 1
return h
}(),
@@ -621,7 +800,7 @@ func Test_PayloadToHeader(t *testing.T) {
p.StateRoot = b
p.ReceiptsRoot = b
p.LogsBloom = b
p.Random = b
p.PrevRandao = b
p.ExtraData = b
p.BaseFeePerGas = b
p.BlockHash = b
@@ -635,7 +814,7 @@ func Test_PayloadToHeader(t *testing.T) {
require.DeepSSZEqual(t, h.StateRoot, make([]byte, fieldparams.RootLength))
require.DeepSSZEqual(t, h.ReceiptRoot, make([]byte, fieldparams.RootLength))
require.DeepSSZEqual(t, h.LogsBloom, make([]byte, fieldparams.LogsBloomLength))
require.DeepSSZEqual(t, h.Random, make([]byte, fieldparams.RootLength))
require.DeepSSZEqual(t, h.PrevRandao, make([]byte, fieldparams.RootLength))
require.DeepSSZEqual(t, h.ExtraData, make([]byte, 0))
require.DeepSSZEqual(t, h.BaseFeePerGas, make([]byte, fieldparams.RootLength))
require.DeepSSZEqual(t, h.BlockHash, make([]byte, fieldparams.RootLength))
@@ -663,7 +842,7 @@ func emptyPayloadHeader() *ethpb.ExecutionPayloadHeader {
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
Random: make([]byte, fieldparams.RootLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
@@ -678,7 +857,7 @@ func emptyPayload() *enginev1.ExecutionPayload {
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
Random: make([]byte, fieldparams.RootLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),


@@ -71,7 +71,7 @@ func UpgradeToBellatrix(ctx context.Context, state state.BeaconState) (state.Bea
StateRoot: make([]byte, 32),
ReceiptRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
Random: make([]byte, 32),
PrevRandao: make([]byte, 32),
BlockNumber: 0,
GasLimit: 0,
GasUsed: 0,


@@ -67,7 +67,7 @@ func TestUpgradeToBellatrix(t *testing.T) {
StateRoot: make([]byte, 32),
ReceiptRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
Random: make([]byte, 32),
PrevRandao: make([]byte, 32),
BlockNumber: 0,
GasLimit: 0,
GasUsed: 0,


@@ -11,18 +11,22 @@ import (
"github.com/prysmaticlabs/prysm/time/slots"
)
var ErrNilSignedBeaconBlock = errors.New("signed beacon block can't be nil")
var ErrNilBeaconBlock = errors.New("beacon block can't be nil")
var ErrNilBeaconBlockBody = errors.New("beacon block body can't be nil")
// BeaconBlockIsNil checks if any composite field of input signed beacon block is nil.
// Access to these nil fields will result in run time panic,
// it is recommended to run these checks as first line of defense.
func BeaconBlockIsNil(b block.SignedBeaconBlock) error {
if b == nil || b.IsNil() {
return errors.New("signed beacon block can't be nil")
return ErrNilSignedBeaconBlock
}
if b.Block().IsNil() {
return errors.New("beacon block can't be nil")
return ErrNilBeaconBlock
}
if b.Block().Body().IsNil() {
return errors.New("beacon block body can't be nil")
return ErrNilBeaconBlockBody
}
return nil
}
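A hedged note on why the sentinels are exported: callers can branch on the exact nil condition with errors.Is instead of matching message strings. The sketch below is illustrative; it assumes same-package access to BeaconBlockIsNil and an errors package providing errors.Is (standard library or github.com/pkg/errors).

// Illustrative sketch, not part of the commits above.
func describeNilBlock(b block.SignedBeaconBlock) string {
	err := BeaconBlockIsNil(b)
	switch {
	case err == nil:
		return "block is usable"
	case errors.Is(err, ErrNilSignedBeaconBlock):
		return "signed beacon block is nil"
	case errors.Is(err, ErrNilBeaconBlock):
		return "inner beacon block is nil"
	case errors.Is(err, ErrNilBeaconBlockBody):
		return "beacon block body is nil"
	default:
		return err.Error()
	}
}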


@@ -244,7 +244,7 @@ func createFullBellatrixBlockWithOperations(t *testing.T) (state.BeaconState,
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
Random: make([]byte, fieldparams.RootLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),


@@ -137,16 +137,6 @@ func CalculateStateRoot(
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process block")
}
if signed.Version() == version.Altair || signed.Version() == version.Bellatrix {
sa, err := signed.Block().Body().SyncAggregate()
if err != nil {
return [32]byte{}, err
}
state, err = altair.ProcessSyncAggregate(ctx, state, sa)
if err != nil {
return [32]byte{}, err
}
}
return state.HashTreeRoot(ctx)
}
@@ -183,16 +173,6 @@ func ProcessBlockNoVerifyAnySig(
if err != nil {
return nil, nil, err
}
if signed.Version() == version.Altair || signed.Version() == version.Bellatrix {
sa, err := signed.Block().Body().SyncAggregate()
if err != nil {
return nil, nil, err
}
state, err = altair.ProcessSyncAggregate(ctx, state, sa)
if err != nil {
return nil, nil, err
}
}
bSet, err := b.BlockSignatureBatch(state, blk.ProposerIndex(), signed.Signature(), blk.HashTreeRoot)
if err != nil {
@@ -340,6 +320,19 @@ func ProcessBlockForStateRoot(
return nil, errors.Wrap(err, "could not process block operation")
}
if signed.Block().Version() == version.Phase0 {
return state, nil
}
sa, err := signed.Block().Body().SyncAggregate()
if err != nil {
return nil, errors.Wrap(err, "could not get sync aggregate from block")
}
state, err = altair.ProcessSyncAggregate(ctx, state, sa)
if err != nil {
return nil, errors.Wrap(err, "process_sync_aggregate failed")
}
return state, nil
}


@@ -7,3 +7,9 @@ import "github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
// i/o error. This variable copies the value in the kv package to the same scope as the Database interfaces,
// so that it is available to code paths that do not interact directly with the kv package.
var ErrNotFound = kv.ErrNotFound
// ErrNotFoundState wraps ErrNotFound for an error specific to a state not being found in the database.
var ErrNotFoundState = kv.ErrNotFoundState
// ErrNotFoundOriginBlockRoot wraps ErrNotFound for an error specific to the origin block root.
var ErrNotFoundOriginBlockRoot = kv.ErrNotFoundOriginBlockRoot
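For context, a hedged sketch of how a caller might use these exported sentinels to treat a missing state as a soft failure; the helper is illustrative, and it assumes StateOrError returns or wraps ErrNotFoundState such that errors.Is matches.

// Illustrative sketch, not part of the commits above.
package example

import (
	"context"
	"errors"

	"github.com/prysmaticlabs/prysm/beacon-chain/db"
	"github.com/prysmaticlabs/prysm/beacon-chain/state"
)

// stateIfPresent returns (nil, nil) when the state is simply absent and
// propagates any other error.
func stateIfPresent(ctx context.Context, d db.ReadOnlyDatabase, root [32]byte) (state.BeaconState, error) {
	st, err := d.StateOrError(ctx, root)
	if errors.Is(err, db.ErrNotFoundState) {
		return nil, nil
	}
	return st, err
}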


@@ -30,9 +30,9 @@ type ReadOnlyDatabase interface {
IsFinalizedBlock(ctx context.Context, blockRoot [32]byte) bool
FinalizedChildBlock(ctx context.Context, blockRoot [32]byte) (block.SignedBeaconBlock, error)
HighestSlotBlocksBelow(ctx context.Context, slot types.Slot) ([]block.SignedBeaconBlock, error)
ValidatedTips(ctx context.Context) (map[[32]byte]types.Slot, error)
// State related methods.
State(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error)
StateOrError(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error)
GenesisState(ctx context.Context) (state.BeaconState, error)
HasState(ctx context.Context, blockRoot [32]byte) bool
StateSummary(ctx context.Context, blockRoot [32]byte) (*ethpb.StateSummary, error)
@@ -49,7 +49,8 @@ type ReadOnlyDatabase interface {
DepositContractAddress(ctx context.Context) ([]byte, error)
// Powchain operations.
PowchainData(ctx context.Context) (*ethpb.ETH1ChainData, error)
// Fee recipients operations.
FeeRecipientByValidatorID(ctx context.Context, id uint64) (common.Address, error)
// origin checkpoint sync support
OriginBlockRoot(ctx context.Context) ([32]byte, error)
}
@@ -63,7 +64,6 @@ type NoHeadAccessDatabase interface {
SaveBlock(ctx context.Context, block block.SignedBeaconBlock) error
SaveBlocks(ctx context.Context, blocks []block.SignedBeaconBlock) error
SaveGenesisBlockRoot(ctx context.Context, blockRoot [32]byte) error
UpdateValidatedTips(ctx context.Context, newVals map[[32]byte]types.Slot) error
// State related methods.
SaveState(ctx context.Context, state state.ReadOnlyBeaconState, blockRoot [32]byte) error
SaveStates(ctx context.Context, states []state.ReadOnlyBeaconState, blockRoots [][32]byte) error
@@ -80,6 +80,8 @@ type NoHeadAccessDatabase interface {
SavePowchainData(ctx context.Context, data *ethpb.ETH1ChainData) error
// Run any required database migrations.
RunMigrations(ctx context.Context) error
// Fee recipients operations.
SaveFeeRecipientsByValidatorIDs(ctx context.Context, ids []uint64, addrs []common.Address) error
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint types.Slot) error
}

View File

@@ -25,7 +25,7 @@ go_library(
"state_summary.go",
"state_summary_cache.go",
"utils.go",
"validated_tips.go",
"validated_checkpoint.go",
"wss.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/db/kv",
@@ -81,7 +81,6 @@ go_test(
"checkpoint_test.go",
"deposit_contract_test.go",
"encoding_test.go",
"error_test.go",
"finalized_block_roots_test.go",
"genesis_test.go",
"init_test.go",
@@ -93,7 +92,7 @@ go_test(
"state_summary_test.go",
"state_test.go",
"utils_test.go",
"validated_tips_test.go",
"validated_checkpoint_test.go",
],
data = glob(["testdata/**"]),
embed = [":go_default_library"],
@@ -116,6 +115,7 @@ go_test(
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
"@io_etcd_go_bbolt//:go_default_library",

View File

@@ -5,6 +5,7 @@ import (
"context"
"fmt"
"github.com/ethereum/go-ethereum/common"
"github.com/golang/snappy"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
@@ -392,6 +393,44 @@ func (s *Store) HighestSlotBlocksBelow(ctx context.Context, slot types.Slot) ([]
return []block.SignedBeaconBlock{blk}, nil
}
// FeeRecipientByValidatorID returns the fee recipient for a validator id.
// `ErrNotFoundFeeRecipient` is returned if the validator id is not found.
func (s *Store) FeeRecipientByValidatorID(ctx context.Context, id uint64) (common.Address, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.FeeRecipientByValidatorID")
defer span.End()
var addr []byte
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(feeRecipientBucket)
addr = bkt.Get(bytesutil.Uint64ToBytesBigEndian(id))
if addr == nil {
return errors.Wrapf(ErrNotFoundFeeRecipient, "validator id %d", id)
}
return nil
})
return common.BytesToAddress(addr), err
}
// SaveFeeRecipientsByValidatorIDs saves the fee recipients for the given validator ids.
// An error is returned if `ids` and `feeRecipients` are not the same length.
func (s *Store) SaveFeeRecipientsByValidatorIDs(ctx context.Context, ids []uint64, feeRecipients []common.Address) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveFeeRecipientByValidatorID")
defer span.End()
if len(ids) != len(feeRecipients) {
return errors.New("validatorIDs and feeRecipients must be the same length")
}
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(feeRecipientBucket)
for i, id := range ids {
if err := bkt.Put(bytesutil.Uint64ToBytesBigEndian(id), feeRecipients[i].Bytes()); err != nil {
return err
}
}
return nil
})
}
// blockRootsByFilter retrieves the block roots given the filter criteria.
func blockRootsByFilter(ctx context.Context, tx *bolt.Tx, f *filters.QueryFilter) ([][]byte, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.blockRootsByFilter")

View File

@@ -4,6 +4,8 @@ import (
"context"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/config/params"
@@ -589,3 +591,27 @@ func TestStore_BlocksBySlot_BlockRootsBySlot(t *testing.T) {
})
}
}
func TestStore_FeeRecipientByValidatorID(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
ids := []uint64{0, 0, 0}
feeRecipients := []common.Address{{}, {}, {}, {}}
require.ErrorContains(t, "validatorIDs and feeRecipients must be the same length", db.SaveFeeRecipientsByValidatorIDs(ctx, ids, feeRecipients))
ids = []uint64{0, 1, 2}
feeRecipients = []common.Address{{'a'}, {'b'}, {'c'}}
require.NoError(t, db.SaveFeeRecipientsByValidatorIDs(ctx, ids, feeRecipients))
f, err := db.FeeRecipientByValidatorID(ctx, 0)
require.NoError(t, err)
require.Equal(t, common.Address{'a'}, f)
f, err = db.FeeRecipientByValidatorID(ctx, 1)
require.NoError(t, err)
require.Equal(t, common.Address{'b'}, f)
f, err = db.FeeRecipientByValidatorID(ctx, 2)
require.NoError(t, err)
require.Equal(t, common.Address{'c'}, f)
_, err = db.FeeRecipientByValidatorID(ctx, 3)
want := errors.Wrap(ErrNotFoundFeeRecipient, "validator id 3")
require.Equal(t, want.Error(), err.Error())
}

View File

@@ -1,6 +1,6 @@
package kv
import "errors"
import "github.com/pkg/errors"
// errDeleteFinalized is raised when we attempt to delete a finalized block/state
var errDeleteFinalized = errors.New("cannot delete finalized block or state")
@@ -8,42 +8,10 @@ var errDeleteFinalized = errors.New("cannot delete finalized block or state")
// ErrNotFound can be used directly, or as a wrapped DBError, whenever a db method needs to
// indicate that a value couldn't be found.
var ErrNotFound = errors.New("not found in db")
// ErrNotFoundState is a not found error specifically for the state getters.
var ErrNotFoundState = errors.Wrap(ErrNotFound, "state not found")
// ErrNotFoundOriginBlockRoot is an error specifically for the origin block root getter
var ErrNotFoundOriginBlockRoot = WrapDBError(ErrNotFound, "OriginBlockRoot")
var ErrNotFoundOriginBlockRoot = errors.Wrap(ErrNotFound, "OriginBlockRoot")
// WrapDBError wraps an error in a DBError. See commentary on DBError for more context.
func WrapDBError(e error, outer string) error {
return DBError{
Wraps: e,
Outer: errors.New(outer),
}
}
// DBError implements the Error method so that it can be asserted as an error.
// The Unwrap method supports error wrapping, enabling it to be used with errors.Is/As.
// The primary use case is to make it simple for database methods to return errors
// that wrap ErrNotFound, allowing calling code to check for "not found" errors
// like: `error.Is(err, ErrNotFound)`. This is intended to improve error handling
// in db lookup methods that need to differentiate between a missing value and some
// other database error. for more background see:
// https://go.dev/blog/go1.13-errors
type DBError struct {
Wraps error
Outer error
}
// Error satisfies the error interface, so that DBErrors can be used anywhere that
// expects an `error`.
func (e DBError) Error() string {
es := e.Outer.Error()
if e.Wraps != nil {
es += ": " + e.Wraps.Error()
}
return es
}
// Unwrap is used by the errors package Is and As methods.
func (e DBError) Unwrap() error {
return e.Wraps
}
// ErrNotFoundFeeRecipient is a not found error specifically for the fee recipient getter
var ErrNotFoundFeeRecipient = errors.Wrap(ErrNotFound, "fee recipient")

View File
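With the custom DBError type removed, callers distinguish a missing value from other database failures by checking the wrapped sentinels with the standard library's errors.Is (github.com/pkg/errors' Wrap supports unwrapping in v0.9+). A minimal caller-side sketch, assuming only the kv package shown above; the surrounding package and function names are illustrative:

package kvusage // illustrative only, not part of the Prysm tree

import (
	"context"
	"errors" // standard library; errors.Is unwraps github.com/pkg/errors wrapping

	"github.com/ethereum/go-ethereum/common"
	"github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
)

// feeRecipientOrDefault returns the stored fee recipient for a validator id,
// falling back to a default address when the not-found sentinel is hit.
func feeRecipientOrDefault(ctx context.Context, store *kv.Store, id uint64, def common.Address) (common.Address, error) {
	addr, err := store.FeeRecipientByValidatorID(ctx, id)
	switch {
	case errors.Is(err, kv.ErrNotFound): // also matches kv.ErrNotFoundFeeRecipient, which wraps it
		return def, nil
	case err != nil:
		return common.Address{}, err
	}
	return addr, nil
}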

@@ -1,24 +0,0 @@
package kv
import (
"errors"
"testing"
)
func TestWrappedSentinelError(t *testing.T) {
e := ErrNotFoundOriginBlockRoot
if !errors.Is(e, ErrNotFoundOriginBlockRoot) {
t.Error("expected that a copy of ErrNotFoundOriginBlockRoot would have an is-a relationship")
}
outer := errors.New("wrapped error")
e2 := DBError{Wraps: ErrNotFoundOriginBlockRoot, Outer: outer}
if !errors.Is(e2, ErrNotFoundOriginBlockRoot) {
t.Error("expected that errors.Is would know DBError wraps ErrNotFoundOriginBlockRoot")
}
// test that the innermost not found error is detected
if !errors.Is(e2, ErrNotFound) {
t.Error("expected that errors.Is would know ErrNotFoundOriginBlockRoot wraps ErrNotFound")
}
}

View File

@@ -175,7 +175,7 @@ func NewKVStore(ctx context.Context, dirPath string, config *Config) (*Store, er
powchainBucket,
stateSummaryBucket,
stateValidatorsBucket,
validatedTips,
lastValidatedCheckpoint,
// Indices buckets.
attestationHeadBlockRootBucket,
attestationSourceRootIndicesBucket,
@@ -191,6 +191,8 @@ func NewKVStore(ctx context.Context, dirPath string, config *Config) (*Store, er
newStateServiceCompatibleBucket,
// Migrations
migrationsBucket,
feeRecipientBucket,
)
}); err != nil {
log.WithField("elapsed", time.Since(start)).Error("Failed to update db and create buckets")

View File

@@ -18,7 +18,8 @@ var (
checkpointBucket = []byte("check-point")
powchainBucket = []byte("powchain")
stateValidatorsBucket = []byte("state-validators")
validatedTips = []byte("validated-synced-tips")
feeRecipientBucket = []byte("fee-recipient")
lastValidatedCheckpoint = []byte("last-validated-checkpoint")
// Deprecated: This bucket was migrated in PR 6461. Do not use, except for migrations.
slotsHasObjectBucket = []byte("slots-has-objects")

View File

@@ -3,6 +3,7 @@ package kv
import (
"bytes"
"context"
"fmt"
"github.com/golang/snappy"
"github.com/pkg/errors"
@@ -46,6 +47,19 @@ func (s *Store) State(ctx context.Context, blockRoot [32]byte) (state.BeaconStat
return s.unmarshalState(ctx, enc, valEntries)
}
// StateOrError is just like State(), except it only returns a non-error response
// if the requested state is found in the database.
func (s *Store) StateOrError(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error) {
st, err := s.State(ctx, blockRoot)
if err != nil {
return nil, err
}
if st == nil || st.IsNil() {
return nil, errors.Wrap(ErrNotFoundState, fmt.Sprintf("no state with blockroot=%#x", blockRoot))
}
return st, nil
}
// GenesisState returns the genesis state in beacon chain.
func (s *Store) GenesisState(ctx context.Context) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.GenesisState")

View File

@@ -21,6 +21,12 @@ import (
bolt "go.etcd.io/bbolt"
)
func TestStateNil(t *testing.T) {
db := setupDB(t)
_, err := db.StateOrError(context.Background(), [32]byte{})
require.ErrorIs(t, err, ErrNotFoundState)
}
func TestState_CanSaveRetrieve(t *testing.T) {
db := setupDB(t)

View File

@@ -0,0 +1,47 @@
package kv
import (
"context"
"encoding/binary"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
// LastValidatedCheckpoint returns the root and slot of the last validated checkpoint stored in the database.
func (s *Store) LastValidatedCheckpoint(ctx context.Context) ([32]byte, types.Slot, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.LastValidatedCheckpoint")
defer span.End()
var lastChkPoint [32]byte
var slot types.Slot
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(lastValidatedCheckpoint)
val := bkt.Get([]byte("lastChkPoint"))
if len(val) != 40 {
return errors.New("invalid checkpoint point")
}
lastChkPoint = bytesutil.ToBytes32(val[:32])
slot = types.Slot(binary.LittleEndian.Uint64(val[32:]))
return nil
})
return lastChkPoint, slot, err
}
func (s *Store) saveLastValidatedCheckpoint(ctx context.Context, checkPoint [32]byte, slot types.Slot) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.saveLastValidatedCheckpoint")
defer span.End()
updateErr := s.db.Update(func(tx *bolt.Tx) error {
value := make([]byte, 40)
copy(value[:32], checkPoint[:])
binary.LittleEndian.PutUint64(value[32:], uint64(slot))
bkt := tx.Bucket(lastValidatedCheckpoint)
err := bkt.Put([]byte("lastChkPoint"), value)
return err
})
return updateErr
}

View File
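The checkpoint is persisted as a single 40-byte value: the 32-byte root followed by the slot encoded as an 8-byte little-endian integer. A small standalone sketch of that layout (the helper names are made up; only the encoding mirrors the code above):

package kvusage // illustrative only

import (
	"encoding/binary"

	types "github.com/prysmaticlabs/eth2-types"
)

// encodeLastValidatedCheckpoint packs a root and a slot into the 40-byte on-disk layout.
func encodeLastValidatedCheckpoint(root [32]byte, slot types.Slot) []byte {
	val := make([]byte, 40)
	copy(val[:32], root[:])
	binary.LittleEndian.PutUint64(val[32:], uint64(slot))
	return val
}

// decodeLastValidatedCheckpoint is the inverse; it mirrors the length check in LastValidatedCheckpoint.
func decodeLastValidatedCheckpoint(val []byte) ([32]byte, types.Slot, bool) {
	if len(val) != 40 {
		return [32]byte{}, 0, false
	}
	var root [32]byte
	copy(root[:], val[:32])
	return root, types.Slot(binary.LittleEndian.Uint64(val[32:])), true
}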

@@ -0,0 +1,39 @@
package kv
import (
"bytes"
"context"
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestValidateCheckpoint(t *testing.T) {
ctx := context.Background()
db := setupDB(t)
checkpointA := [32]byte{'A'}
slotA := types.Slot(1)
checkpointB := [32]byte{'B'}
slotB := types.Slot(2)
// add first checkpoint
require.NoError(t, db.saveLastValidatedCheckpoint(ctx, checkpointA, slotA))
rcvdRoot, rcvdSlot, err := db.LastValidatedCheckpoint(ctx)
require.NoError(t, err)
require.Equal(t, 0, bytes.Compare(checkpointA[:], rcvdRoot[:]))
require.Equal(t, true, uint64(slotA) == uint64(rcvdSlot))
// update the checkpoint and slot
require.NoError(t, db.saveLastValidatedCheckpoint(ctx, checkpointB, slotB))
rcvdRoot, rcvdSlot, err = db.LastValidatedCheckpoint(ctx)
require.NoError(t, err)
require.Equal(t, 0, bytes.Compare(checkpointB[:], rcvdRoot[:]))
require.Equal(t, true, uint64(slotB) == uint64(rcvdSlot))
}

View File

@@ -1,71 +0,0 @@
package kv
import (
"context"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
// ValidatedTips returns all the validated_tips that are present in the DB.
func (s *Store) ValidatedTips(ctx context.Context) (map[[32]byte]types.Slot, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.ValidatedTips")
defer span.End()
valTips := make(map[[32]byte]types.Slot, 1)
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(validatedTips)
c := bkt.Cursor()
for k, v := c.First(); k != nil; k, v = c.Next() {
if ctx.Err() != nil {
return ctx.Err()
}
valTips[bytesutil.ToBytes32(k)] = bytesutil.BytesToSlotBigEndian(v)
}
return nil
})
return valTips, err
}
// UpdateValidatedTips clears off all the old validated_tips from the DB and
// adds the new tips that are provided.
func (s *Store) UpdateValidatedTips(ctx context.Context, newVals map[[32]byte]types.Slot) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.UpdateValidatedTips")
defer span.End()
// Get the already existing tips.
oldVals, err := s.ValidatedTips(ctx)
if err != nil {
return err
}
updateErr := s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(validatedTips)
// Delete keys that are present and not in the new set.
for k := range oldVals {
if _, ok := newVals[k]; !ok {
deleteErr := bkt.Delete(k[:])
if deleteErr != nil {
return deleteErr
}
}
}
// Add keys not present already.
for k, v := range newVals {
if _, ok := oldVals[k]; !ok {
putErr := bkt.Put(k[:], bytesutil.SlotToBytesBigEndian(v))
if putErr != nil {
return putErr
}
}
}
return nil
})
return updateErr
}

View File

@@ -1,93 +0,0 @@
package kv
import (
"context"
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestTips_AddNewTips(t *testing.T) {
ctx := context.Background()
db := setupDB(t)
newTips := make(map[[32]byte]types.Slot)
newTips[[32]byte{'A'}] = types.Slot(1)
newTips[[32]byte{'B'}] = types.Slot(2)
newTips[[32]byte{'C'}] = types.Slot(3)
require.NoError(t, db.UpdateValidatedTips(ctx, newTips))
gotTips, err := db.ValidatedTips(ctx)
require.NoError(t, err)
require.Equal(t, true, areTipsSame(gotTips, newTips))
}
func TestTips_UpdateTipsWithoutOverlap(t *testing.T) {
ctx := context.Background()
db := setupDB(t)
oldTips := make(map[[32]byte]types.Slot)
oldTips[[32]byte{'A'}] = types.Slot(1)
oldTips[[32]byte{'B'}] = types.Slot(2)
oldTips[[32]byte{'C'}] = types.Slot(3)
require.NoError(t, db.UpdateValidatedTips(ctx, oldTips))
// create a new non-overlapping tips to add
newTips := make(map[[32]byte]types.Slot)
newTips[[32]byte{'D'}] = types.Slot(4)
newTips[[32]byte{'E'}] = types.Slot(5)
newTips[[32]byte{'F'}] = types.Slot(6)
require.NoError(t, db.UpdateValidatedTips(ctx, newTips))
gotTips, err := db.ValidatedTips(ctx)
require.NoError(t, err)
require.Equal(t, true, areTipsSame(gotTips, newTips))
}
func TestTips_UpdateTipsWithOverlap(t *testing.T) {
ctx := context.Background()
db := setupDB(t)
oldTips := make(map[[32]byte]types.Slot)
oldTips[[32]byte{'A'}] = types.Slot(1)
oldTips[[32]byte{'B'}] = types.Slot(2)
oldTips[[32]byte{'C'}] = types.Slot(3)
require.NoError(t, db.UpdateValidatedTips(ctx, oldTips))
// create a new overlapping tips to add
newTips := make(map[[32]byte]types.Slot)
newTips[[32]byte{'C'}] = types.Slot(3)
newTips[[32]byte{'D'}] = types.Slot(4)
newTips[[32]byte{'E'}] = types.Slot(5)
require.NoError(t, db.UpdateValidatedTips(ctx, newTips))
gotTips, err := db.ValidatedTips(ctx)
require.NoError(t, err)
require.Equal(t, true, areTipsSame(gotTips, newTips))
}
func areTipsSame(got map[[32]byte]types.Slot, required map[[32]byte]types.Slot) bool {
if len(got) != len(required) {
return false
}
for k, v := range got {
if val, ok := required[k]; ok {
if uint64(v) != uint64(val) {
return false
}
} else {
return false
}
}
return true
}

View File

@@ -12,7 +12,8 @@ go_library(
"//testing/spectest:__subpackages__",
],
deps = [
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//config/fieldparams:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
],
)

View File

@@ -0,0 +1,55 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"errors.go",
"forkchoice.go",
"metrics.go",
"node.go",
"optimistic_sync.go",
"proposer_boost.go",
"store.go",
"types.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree",
visibility = [
"//beacon-chain:__subpackages__",
"//testing/spectest:__subpackages__",
],
deps = [
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"ffg_update_test.go",
"forkchoice_test.go",
"no_vote_test.go",
"node_test.go",
"optimistic_sync_test.go",
"proposer_boost_test.go",
"store_test.go",
"vote_test.go",
],
embed = [":go_default_library"],
deps = [
"//config/params:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
],
)

View File

@@ -0,0 +1,4 @@
/*
Package doublylinkedtree implements eth2 LMD GHOST fork choice using a doubly linked tree node structure.
*/
package doublylinkedtree

View File

@@ -0,0 +1,10 @@
package doublylinkedtree
import "errors"
var errNilNode = errors.New("invalid nil or unknown node")
var errInvalidBalance = errors.New("invalid node balance")
var errInvalidProposerBoostRoot = errors.New("invalid proposer boost root")
var errUnknownFinalizedRoot = errors.New("unknown finalized root")
var errUnknownJustifiedRoot = errors.New("unknown justified root")
var errInvalidOptimisticStatus = errors.New("invalid optimistic status")

View File

@@ -0,0 +1,193 @@
package doublylinkedtree
import (
"context"
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestFFGUpdates_OneBranch(t *testing.T) {
balances := []uint64{1, 1}
f := setup(0, 0)
// The head should always start at the finalized block.
r, err := f.Head(context.Background(), 0, params.BeaconConfig().ZeroHash, balances, 0)
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().ZeroHash, r, "Incorrect head with genesis")
// Define the following tree:
// 0 <- justified: 0, finalized: 0
// |
// 1 <- justified: 0, finalized: 0
// |
// 2 <- justified: 1, finalized: 0
// |
// 3 <- justified: 2, finalized: 1
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(2), indexToHash(1), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(3), indexToHash(2), 2, 1, false))
// With starting justified epoch at 0, the head should be 3:
// 0 <- start
// |
// 1
// |
// 2
// |
// 3 <- head
r, err = f.Head(context.Background(), 0, params.BeaconConfig().ZeroHash, balances, 0)
require.NoError(t, err)
assert.Equal(t, indexToHash(3), r, "Incorrect head with justified epoch at 0")
// With starting justified epoch at 1, the head should be 2:
// 0
// |
// 1 <- start
// |
// 2 <- head
// |
// 3
r, err = f.Head(context.Background(), 1, indexToHash(2), balances, 0)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// With starting justified epoch at 2, the head should be 3:
// 0
// |
// 1
// |
// 2 <- start
// |
// 3 <- head
r, err = f.Head(context.Background(), 2, indexToHash(3), balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(3), r, "Incorrect head with justified epoch at 2")
}
func TestFFGUpdates_TwoBranches(t *testing.T) {
balances := []uint64{1, 1}
f := setup(0, 0)
r, err := f.Head(context.Background(), 0, params.BeaconConfig().ZeroHash, balances, 0)
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().ZeroHash, r, "Incorrect head with genesis")
// Define the following tree:
// 0
// / \
// justified: 0, finalized: 0 -> 1 2 <- justified: 0, finalized: 0
// | |
// justified: 1, finalized: 0 -> 3 4 <- justified: 0, finalized: 0
// | |
// justified: 1, finalized: 0 -> 5 6 <- justified: 0, finalized: 0
// | |
// justified: 1, finalized: 0 -> 7 8 <- justified: 1, finalized: 0
// | |
// justified: 2, finalized: 0 -> 9 10 <- justified: 2, finalized: 0
// Left branch.
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(3), indexToHash(1), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(5), indexToHash(3), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(7), indexToHash(5), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(9), indexToHash(7), 2, 0, false))
// Right branch.
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(2), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(4), indexToHash(2), 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(6), indexToHash(4), 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(8), indexToHash(6), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(10), indexToHash(8), 2, 0, false))
// With start at 0, the head should be 10:
// 0 <-- start
// / \
// 1 2
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// 9 10 <-- head
r, err = f.Head(context.Background(), 0, params.BeaconConfig().ZeroHash, balances, 0)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 0")
// Add a vote to 1:
// 0
// / \
// +1 vote -> 1 2
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// 9 10
f.ProcessAttestation(context.Background(), []uint64{0}, indexToHash(1), 0)
// With the additional vote to the left branch, the head should be 9:
// 0 <-- start
// / \
// 1 2
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// head -> 9 10
r, err = f.Head(context.Background(), 0, params.BeaconConfig().ZeroHash, balances, 0)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head with justified epoch at 0")
// Add a vote to 2:
// 0
// / \
// 1 2 <- +1 vote
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// 9 10
f.ProcessAttestation(context.Background(), []uint64{1}, indexToHash(2), 0)
// With the additional vote to the right branch, the head should be 10:
// 0 <-- start
// / \
// 1 2
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// 9 10 <-- head
r, err = f.Head(context.Background(), 0, params.BeaconConfig().ZeroHash, balances, 0)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 0")
r, err = f.Head(context.Background(), 1, indexToHash(1), balances, 0)
require.NoError(t, err)
assert.Equal(t, indexToHash(7), r, "Incorrect head with justified epoch at 1")
}
func setup(justifiedEpoch, finalizedEpoch types.Epoch) *ForkChoice {
ctx := context.Background()
f := New(justifiedEpoch, finalizedEpoch)
err := f.ProcessBlock(ctx, 0, params.BeaconConfig().ZeroHash, [32]byte{}, justifiedEpoch, finalizedEpoch, false)
if err != nil {
return nil
}
return f
}

View File

@@ -0,0 +1,307 @@
package doublylinkedtree
import (
"context"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
pbrpc "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"go.opencensus.io/trace"
)
// New initializes a new fork choice store.
func New(justifiedEpoch, finalizedEpoch types.Epoch) *ForkChoice {
s := &Store{
justifiedEpoch: justifiedEpoch,
finalizedEpoch: finalizedEpoch,
proposerBoostRoot: [32]byte{},
nodeByRoot: make(map[[fieldparams.RootLength]byte]*Node),
pruneThreshold: defaultPruneThreshold,
}
b := make([]uint64, 0)
v := make([]Vote, 0)
return &ForkChoice{store: s, balances: b, votes: v}
}
// NodeCount returns the current number of nodes in the Store.
func (f *ForkChoice) NodeCount() int {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
return len(f.store.nodeByRoot)
}
// Head returns the head root from the fork choice store.
// It first computes the validators' balance changes, then recalculates the block tree weights from leaves to root.
func (f *ForkChoice) Head(
ctx context.Context,
justifiedEpoch types.Epoch,
justifiedRoot [32]byte,
justifiedStateBalances []uint64,
finalizedEpoch types.Epoch,
) ([32]byte, error) {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.Head")
defer span.End()
f.votesLock.Lock()
defer f.votesLock.Unlock()
calledHeadCount.Inc()
// Use the write lock here because `applyWeightChanges`, called below, requires write access.
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
f.store.updateCheckpoints(justifiedEpoch, finalizedEpoch)
if err := f.updateBalances(justifiedStateBalances); err != nil {
return [32]byte{}, errors.Wrap(err, "could not update balances")
}
if err := f.store.applyProposerBoostScore(justifiedStateBalances); err != nil {
return [32]byte{}, errors.Wrap(err, "could not apply proposer boost score")
}
if err := f.store.treeRootNode.applyWeightChanges(ctx); err != nil {
return [32]byte{}, errors.Wrap(err, "could not apply weight changes")
}
if err := f.store.treeRootNode.updateBestDescendant(ctx, justifiedEpoch, finalizedEpoch); err != nil {
return [32]byte{}, errors.Wrap(err, "could not update best descendant")
}
return f.store.head(ctx, justifiedRoot)
}
// ProcessAttestation processes an attestation for vote accounting. It iterates over the validator indices
// and updates their votes accordingly.
func (f *ForkChoice) ProcessAttestation(ctx context.Context, validatorIndices []uint64, blockRoot [32]byte, targetEpoch types.Epoch) {
_, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.ProcessAttestation")
defer span.End()
f.votesLock.Lock()
defer f.votesLock.Unlock()
for _, index := range validatorIndices {
// Validator indices will grow the vote cache.
for index >= uint64(len(f.votes)) {
f.votes = append(f.votes, Vote{currentRoot: params.BeaconConfig().ZeroHash, nextRoot: params.BeaconConfig().ZeroHash})
}
// Newly allocated vote if the root fields are untouched.
newVote := f.votes[index].nextRoot == params.BeaconConfig().ZeroHash &&
f.votes[index].currentRoot == params.BeaconConfig().ZeroHash
// The vote gets updated if it is newly allocated or has a higher target epoch.
if newVote || targetEpoch > f.votes[index].nextEpoch {
f.votes[index].nextEpoch = targetEpoch
f.votes[index].nextRoot = blockRoot
}
}
processedAttestationCount.Inc()
}
// ProcessBlock processes a new block by inserting it into the fork choice store.
func (f *ForkChoice) ProcessBlock(
ctx context.Context,
slot types.Slot,
blockRoot, parentRoot [fieldparams.RootLength]byte,
justifiedEpoch, finalizedEpoch types.Epoch, optimistic bool,
) error {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.ProcessBlock")
defer span.End()
return f.store.insert(ctx, slot, blockRoot, parentRoot, justifiedEpoch, finalizedEpoch, optimistic)
}
// Prune prunes the fork choice store with the new finalized root. The store is only pruned if the input
// root differs from the store's current finalized root and the number of nodes in the store has met the prune threshold.
func (f *ForkChoice) Prune(ctx context.Context, finalizedRoot [32]byte) error {
return f.store.prune(ctx, finalizedRoot)
}
// HasNode returns true if the node exists in the fork choice store,
// false otherwise.
func (f *ForkChoice) HasNode(root [32]byte) bool {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
_, ok := f.store.nodeByRoot[root]
return ok
}
// HasParent returns true if the node's parent exists in the fork choice store,
// false otherwise.
func (f *ForkChoice) HasParent(root [32]byte) bool {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
return false
}
return node.parent != nil
}
// IsCanonical returns true if the given root is part of the canonical chain.
func (f *ForkChoice) IsCanonical(root [32]byte) bool {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
return false
}
if node.bestDescendant == nil {
if f.store.headNode.bestDescendant == nil {
return node == f.store.headNode
}
return node == f.store.headNode.bestDescendant
}
if f.store.headNode.bestDescendant == nil {
return node.bestDescendant == f.store.headNode
}
return node.bestDescendant == f.store.headNode.bestDescendant
}
// IsOptimistic returns true if the given root has been optimistically synced.
func (f *ForkChoice) IsOptimistic(_ context.Context, root [32]byte) (bool, error) {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
return false, errNilNode
}
return node.optimistic, nil
}
// AncestorRoot returns the ancestor root of the input block root at the given slot.
func (f *ForkChoice) AncestorRoot(ctx context.Context, root [32]byte, slot types.Slot) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.AncestorRoot")
defer span.End()
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
return nil, errNilNode
}
n := node
for n != nil && n.slot > slot {
if ctx.Err() != nil {
return nil, ctx.Err()
}
n = n.parent
}
if n == nil {
return nil, errNilNode
}
return n.root[:], nil
}
// updateBalances updates the balances attributed to each directly voted-for block, taking into account the
// validators' latest votes.
func (f *ForkChoice) updateBalances(newBalances []uint64) error {
for index, vote := range f.votes {
// Skip if the validator has never voted for either the current root or the next root (i.e. the
// votes are the zero hash, aka the genesis block); there is nothing to compute.
if vote.currentRoot == params.BeaconConfig().ZeroHash && vote.nextRoot == params.BeaconConfig().ZeroHash {
continue
}
oldBalance := uint64(0)
newBalance := uint64(0)
// If the validator index did not exist in `f.balances` or
// `newBalances` list above, the balance is just 0.
if index < len(f.balances) {
oldBalance = f.balances[index]
}
if index < len(newBalances) {
newBalance = newBalances[index]
}
// Update only if the validator's balance or vote has changed.
if vote.currentRoot != vote.nextRoot || oldBalance != newBalance {
// Ignore the vote if the root is not in fork choice
// store, that means we have not seen the block before.
nextNode, ok := f.store.nodeByRoot[vote.nextRoot]
if ok && vote.nextRoot != params.BeaconConfig().ZeroHash {
// Protection against nil node
if nextNode == nil {
return errNilNode
}
nextNode.balance += newBalance
}
currentNode, ok := f.store.nodeByRoot[vote.currentRoot]
if ok && vote.currentRoot != params.BeaconConfig().ZeroHash {
// Protection against nil node
if currentNode == nil {
return errNilNode
}
if currentNode.balance < oldBalance {
return errInvalidBalance
}
currentNode.balance -= oldBalance
}
}
// Rotate the validator vote.
f.votes[index].currentRoot = vote.nextRoot
}
f.balances = newBalances
return nil
}
// Tips returns a list of possible heads from the fork choice store. It returns the
// roots and slots of the leaf nodes.
func (f *ForkChoice) Tips() ([][32]byte, []types.Slot) {
return f.store.tips()
}
// ProposerBoost returns the proposerBoost of the store
func (f *ForkChoice) ProposerBoost() [fieldparams.RootLength]byte {
return f.store.proposerBoost()
}
// SetOptimisticToValid sets the node with the given root as a fully validated node
func (f *ForkChoice) SetOptimisticToValid(ctx context.Context, root [fieldparams.RootLength]byte) error {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
return errNilNode
}
return node.setNodeAndParentValidated(ctx)
}
// JustifiedEpoch of fork choice store.
func (f *ForkChoice) JustifiedEpoch() types.Epoch {
return f.store.justifiedEpoch
}
// FinalizedEpoch of fork choice store.
func (f *ForkChoice) FinalizedEpoch() types.Epoch {
return f.store.finalizedEpoch
}
// ForkChoiceNodes returns the RPC representation of every node in the fork choice store.
func (f *ForkChoice) ForkChoiceNodes() []*pbrpc.ForkChoiceNode {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
ret := make([]*pbrpc.ForkChoiceNode, len(f.store.nodeByRoot))
return f.store.treeRootNode.rpcNodes(ret)
}
// SetOptimisticToInvalid removes a block with an invalid execution payload from fork choice store
func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root [fieldparams.RootLength]byte) error {
return f.store.removeNode(ctx, root)
}

View File
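Putting the exported surface of the new store together, here is a hedged end-to-end sketch of how a caller drives it: insert blocks, account an attestation, then ask for the head. The roots, slots and balances are made up, and genesis is inserted the same way the package's own test setup does:

package main // illustrative driver only

import (
	"context"
	"fmt"

	doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
	"github.com/prysmaticlabs/prysm/config/params"
)

func main() {
	ctx := context.Background()
	f := doublylinkedtree.New(0, 0) // justified and finalized epochs

	zero := params.BeaconConfig().ZeroHash
	blockA := [32]byte{'a'} // made-up block root

	// Insert the genesis node, then one child block at slot 1.
	if err := f.ProcessBlock(ctx, 0, zero, [32]byte{}, 0, 0, false); err != nil {
		panic(err)
	}
	if err := f.ProcessBlock(ctx, 1, blockA, zero, 0, 0, false); err != nil {
		panic(err)
	}

	// Validator index 0 attests to blockA with target epoch 0.
	f.ProcessAttestation(ctx, []uint64{0}, blockA, 0)

	// Head applies the justified balances, recomputes weights and returns the best leaf.
	head, err := f.Head(ctx, 0, zero, []uint64{32}, 0)
	if err != nil {
		panic(err)
	}
	fmt.Printf("head: %#x\n", head)
}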

@@ -0,0 +1,168 @@
package doublylinkedtree
import (
"context"
"encoding/binary"
"testing"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/crypto/hash"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestForkChoice_UpdateBalancesPositiveChange(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), indexToHash(1), 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 3, indexToHash(3), indexToHash(2), 0, 0, false))
f.votes = []Vote{
{indexToHash(1), indexToHash(1), 0},
{indexToHash(2), indexToHash(2), 0},
{indexToHash(3), indexToHash(3), 0},
}
// Each node gets one unique vote; the new balances are applied directly to the voted-for nodes
// (weights are only propagated back toward the root once applyWeightChanges runs).
require.NoError(t, f.updateBalances([]uint64{10, 20, 30}))
s := f.store
assert.Equal(t, uint64(10), s.nodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.nodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.nodeByRoot[indexToHash(3)].balance)
}
func TestForkChoice_UpdateBalancesNegativeChange(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), indexToHash(1), 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 3, indexToHash(3), indexToHash(2), 0, 0, false))
s := f.store
s.nodeByRoot[indexToHash(1)].balance = 100
s.nodeByRoot[indexToHash(2)].balance = 100
s.nodeByRoot[indexToHash(3)].balance = 100
f.balances = []uint64{100, 100, 100}
f.votes = []Vote{
{indexToHash(1), indexToHash(1), 0},
{indexToHash(2), indexToHash(2), 0},
{indexToHash(3), indexToHash(3), 0},
}
require.NoError(t, f.updateBalances([]uint64{10, 20, 30}))
assert.Equal(t, uint64(10), s.nodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.nodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.nodeByRoot[indexToHash(3)].balance)
}
func TestForkChoice_IsCanonical(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 3, indexToHash(3), indexToHash(1), 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 4, indexToHash(4), indexToHash(2), 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 5, indexToHash(5), indexToHash(4), 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 6, indexToHash(6), indexToHash(5), 1, 1, false))
require.Equal(t, true, f.IsCanonical(params.BeaconConfig().ZeroHash))
require.Equal(t, false, f.IsCanonical(indexToHash(1)))
require.Equal(t, true, f.IsCanonical(indexToHash(2)))
require.Equal(t, false, f.IsCanonical(indexToHash(3)))
require.Equal(t, true, f.IsCanonical(indexToHash(4)))
require.Equal(t, true, f.IsCanonical(indexToHash(5)))
require.Equal(t, true, f.IsCanonical(indexToHash(6)))
}
func TestForkChoice_IsCanonicalReorg(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, [32]byte{'1'}, params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 2, [32]byte{'2'}, params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 3, [32]byte{'3'}, [32]byte{'1'}, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 4, [32]byte{'4'}, [32]byte{'2'}, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 5, [32]byte{'5'}, [32]byte{'4'}, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 6, [32]byte{'6'}, [32]byte{'5'}, 1, 1, false))
f.store.nodesLock.Lock()
f.store.nodeByRoot[[32]byte{'3'}].balance = 10
require.NoError(t, f.store.treeRootNode.applyWeightChanges(ctx))
require.Equal(t, uint64(10), f.store.nodeByRoot[[32]byte{'1'}].weight)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'2'}].weight)
require.NoError(t, f.store.treeRootNode.updateBestDescendant(ctx, 1, 1))
require.DeepEqual(t, [32]byte{'3'}, f.store.treeRootNode.bestDescendant.root)
f.store.nodesLock.Unlock()
h, err := f.store.head(ctx, [32]byte{'1'})
require.NoError(t, err)
require.DeepEqual(t, [32]byte{'3'}, h)
require.DeepEqual(t, h, f.store.headNode.root)
require.Equal(t, true, f.IsCanonical(params.BeaconConfig().ZeroHash))
require.Equal(t, true, f.IsCanonical([32]byte{'1'}))
require.Equal(t, false, f.IsCanonical([32]byte{'2'}))
require.Equal(t, true, f.IsCanonical([32]byte{'3'}))
require.Equal(t, false, f.IsCanonical([32]byte{'4'}))
require.Equal(t, false, f.IsCanonical([32]byte{'5'}))
require.Equal(t, false, f.IsCanonical([32]byte{'6'}))
}
func TestForkChoice_AncestorRoot(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), indexToHash(1), 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 5, indexToHash(3), indexToHash(2), 1, 1, false))
f.store.treeRootNode = f.store.nodeByRoot[indexToHash(1)]
f.store.treeRootNode.parent = nil
r, err := f.AncestorRoot(ctx, indexToHash(3), 6)
assert.NoError(t, err)
assert.Equal(t, bytesutil.ToBytes32(r), indexToHash(3))
_, err = f.AncestorRoot(ctx, indexToHash(3), 0)
assert.ErrorContains(t, errNilNode.Error(), err)
root, err := f.AncestorRoot(ctx, indexToHash(3), 5)
require.NoError(t, err)
hash3 := indexToHash(3)
require.DeepEqual(t, hash3[:], root)
root, err = f.AncestorRoot(ctx, indexToHash(3), 1)
require.NoError(t, err)
hash1 := indexToHash(1)
require.DeepEqual(t, hash1[:], root)
}
func TestForkChoice_AncestorEqualSlot(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'1'}, params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'3'}, [32]byte{'1'}, 1, 1, false))
r, err := f.AncestorRoot(ctx, [32]byte{'3'}, 100)
require.NoError(t, err)
root := bytesutil.ToBytes32(r)
require.Equal(t, root, [32]byte{'1'})
}
func TestForkChoice_AncestorLowerSlot(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'1'}, params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 200, [32]byte{'3'}, [32]byte{'1'}, 1, 1, false))
r, err := f.AncestorRoot(ctx, [32]byte{'3'}, 150)
require.NoError(t, err)
root := bytesutil.ToBytes32(r)
require.Equal(t, root, [32]byte{'1'})
}
func indexToHash(i uint64) [32]byte {
var b [8]byte
binary.LittleEndian.PutUint64(b[:], i)
return hash.Hash(b[:])
}

View File

@@ -0,0 +1,57 @@
package doublylinkedtree
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
headSlotNumber = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "doublylinkedtree_head_slot",
Help: "The slot number of the current head.",
},
)
nodeCount = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "doublylinkedtree_node_count",
Help: "The number of nodes in the DAG array based store structure.",
},
)
headChangesCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "doublylinkedtree_head_changed_count",
Help: "The number of times head changes.",
},
)
calledHeadCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "doublylinkedtree_head_requested_count",
Help: "The number of times someone called head.",
},
)
processedBlockCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "doublylinkedtree_block_processed_count",
Help: "The number of times a block is processed for fork choice.",
},
)
processedAttestationCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "doublylinkedtree_attestation_processed_count",
Help: "The number of times an attestation is processed for fork choice.",
},
)
prunedCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "doublylinkedtree_pruned_count",
Help: "The number of times pruning happened.",
},
)
optimisticCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "doublylinkedtree_optimistic_count",
Help: "The number of blocks that have been optimistically synced.",
},
)
)

View File

@@ -0,0 +1,114 @@
package doublylinkedtree
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestNoVote_CanFindHead(t *testing.T) {
balances := make([]uint64, 16)
f := setup(1, 1)
// The head should always start at the finalized block.
r, err := f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
if r != params.BeaconConfig().ZeroHash {
t.Errorf("Incorrect head with genesis")
}
// Insert block 2 into the tree and verify head is at 2:
// 0
// /
// 2 <- head
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// Insert block 1 into the tree and verify head is still at 2:
// 0
// / \
// head -> 2 1
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// Insert block 3 into the tree and verify head is still at 2:
// 0
// / \
// head -> 2 1
// |
// 3
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(3), indexToHash(1), 1, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// Insert block 4 into the tree and verify head is at 4:
// 0
// / \
// 2 1
// | |
// head -> 4 3
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(4), indexToHash(2), 1, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(4), r, "Incorrect head with justified epoch at 1")
// Insert block 5 with justified epoch of 2, verify head is still at 4.
// 0
// / \
// 2 1
// | |
// head -> 4 3
// |
// 5 <- justified epoch = 2
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(5), indexToHash(4), 2, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(4), r, "Incorrect head with justified epoch at 1")
// Verify there's an error when starting from a block with wrong justified epoch.
// 0
// / \
// 2 1
// | |
// head -> 4 3
// |
// 5 <- starting from 5 with justified epoch 0 should error
_, err = f.Head(context.Background(), 1, indexToHash(5), balances, 1)
wanted := "head at slot 0 with weight 0 is not eligible, finalizedEpoch 1 != 1, justifiedEpoch 2 != 1"
require.ErrorContains(t, wanted, err)
// Set the justified epoch to 2 and start block to 5 to verify head is 5.
// 0
// / \
// 2 1
// | |
// 4 3
// |
// 5 <- head
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(5), r, "Incorrect head with justified epoch at 2")
// Insert block 6 with justified epoch of 2, verify head is at 6.
// 0
// / \
// 2 1
// | |
// 4 3
// |
// 5
// |
// 6 <- head
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(6), indexToHash(5), 2, 1, false))
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(6), r, "Incorrect head with justified epoch at 2")
}

View File

@@ -0,0 +1,152 @@
package doublylinkedtree
import (
"bytes"
"context"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
pbrpc "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// depth returns the length of the path from this node to the root of the fork choice tree.
func (n *Node) depth() uint64 {
ret := uint64(0)
for node := n.parent; node != nil; node = node.parent {
ret += 1
}
return ret
}
// applyWeightChanges recomputes the weight of the node passed as an argument and of all of its descendants,
// using the current balance stored in each node. The caller must hold a lock
// on Store.nodesLock.
func (n *Node) applyWeightChanges(ctx context.Context) error {
// Recursively calling the children to sum their weights.
childrenWeight := uint64(0)
for _, child := range n.children {
if ctx.Err() != nil {
return ctx.Err()
}
if err := child.applyWeightChanges(ctx); err != nil {
return err
}
childrenWeight += child.weight
}
if n.root == params.BeaconConfig().ZeroHash {
return nil
}
n.weight = n.balance + childrenWeight
return nil
}
// updateBestDescendant updates the best descendant of this node and its children.
func (n *Node) updateBestDescendant(ctx context.Context, justifiedEpoch, finalizedEpoch types.Epoch) error {
if ctx.Err() != nil {
return ctx.Err()
}
if len(n.children) == 0 {
n.bestDescendant = nil
return nil
}
var bestChild *Node
bestWeight := uint64(0)
hasViableDescendant := false
for _, child := range n.children {
if child == nil {
return errNilNode
}
if err := child.updateBestDescendant(ctx, justifiedEpoch, finalizedEpoch); err != nil {
return err
}
childLeadsToViableHead := child.leadsToViableHead(justifiedEpoch, finalizedEpoch)
if childLeadsToViableHead && !hasViableDescendant {
// The child leads to a viable head, but the current
// parent's best child doesn't.
bestWeight = child.weight
bestChild = child
hasViableDescendant = true
} else if childLeadsToViableHead {
// If both are viable, compare their weights.
if child.weight == bestWeight {
// Tie-breaker of equal weights by root.
if bytes.Compare(child.root[:], bestChild.root[:]) > 0 {
bestChild = child
}
} else if child.weight > bestWeight {
bestChild = child
bestWeight = child.weight
}
}
}
if hasViableDescendant {
if bestChild.bestDescendant == nil {
n.bestDescendant = bestChild
} else {
n.bestDescendant = bestChild.bestDescendant
}
} else {
n.bestDescendant = nil
}
return nil
}
// viableForHead returns true if the node is viable as a head.
// Any node whose finalized or justified epoch differs from
// the ones in the fork choice store is not viable as a head.
func (n *Node) viableForHead(justifiedEpoch, finalizedEpoch types.Epoch) bool {
justified := justifiedEpoch == n.justifiedEpoch || justifiedEpoch == 0
finalized := finalizedEpoch == n.finalizedEpoch || finalizedEpoch == 0
return justified && finalized
}
func (n *Node) leadsToViableHead(justifiedEpoch, finalizedEpoch types.Epoch) bool {
if n.bestDescendant == nil {
return n.viableForHead(justifiedEpoch, finalizedEpoch)
}
return n.bestDescendant.viableForHead(justifiedEpoch, finalizedEpoch)
}
// setNodeAndParentValidated sets the current node and all of its optimistic ancestors as validated (i.e. non-optimistic).
func (n *Node) setNodeAndParentValidated(ctx context.Context) error {
if ctx.Err() != nil {
return ctx.Err()
}
if !n.optimistic || n.parent == nil {
return nil
}
n.optimistic = false
return n.parent.setNodeAndParentValidated(ctx)
}
// rpcNodes is used by the RPC Debug endpoint to return information
// about all nodes in the fork choice store
func (n *Node) rpcNodes(ret []*pbrpc.ForkChoiceNode) []*pbrpc.ForkChoiceNode {
for _, child := range n.children {
ret = child.rpcNodes(ret)
}
r := n.root
p := [32]byte{}
if n.parent != nil {
copy(p[:], n.parent.root[:])
}
b := [32]byte{}
if n.bestDescendant != nil {
copy(b[:], n.bestDescendant.root[:])
}
node := &pbrpc.ForkChoiceNode{
Slot: n.slot,
Root: r[:],
Parent: p[:],
JustifiedEpoch: n.justifiedEpoch,
FinalizedEpoch: n.finalizedEpoch,
Weight: n.weight,
BestDescendant: b[:],
}
ret = append(ret, node)
return ret
}

View File
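applyWeightChanges makes each node's weight equal to its own balance plus the sum of its children's weights, computed bottom-up. A self-contained toy version of that recursion (the simplified node type is not Prysm's), reproducing the 300/200/100 chain from the tests that follow:

package main // toy illustration only

import "fmt"

// toyNode mirrors just the fields that applyWeightChanges touches.
type toyNode struct {
	balance  uint64
	weight   uint64
	children []*toyNode
}

// applyWeights recomputes weights bottom-up: weight = balance + sum of children's weights.
func applyWeights(n *toyNode) uint64 {
	var childrenWeight uint64
	for _, c := range n.children {
		childrenWeight += applyWeights(c)
	}
	n.weight = n.balance + childrenWeight
	return n.weight
}

func main() {
	// Chain 1 <- 2 <- 3 with a balance of 100 on each node.
	n3 := &toyNode{balance: 100}
	n2 := &toyNode{balance: 100, children: []*toyNode{n3}}
	n1 := &toyNode{balance: 100, children: []*toyNode{n2}}
	applyWeights(n1)
	fmt.Println(n1.weight, n2.weight, n3.weight) // 300 200 100
}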

@@ -0,0 +1,204 @@
package doublylinkedtree
import (
"context"
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestNode_ApplyWeightChanges_PositiveChange(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), indexToHash(1), 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 3, indexToHash(3), indexToHash(2), 0, 0, false))
// The updated balance of each node is 100
s := f.store
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
s.nodeByRoot[indexToHash(1)].balance = 100
s.nodeByRoot[indexToHash(2)].balance = 100
s.nodeByRoot[indexToHash(3)].balance = 100
assert.NoError(t, s.treeRootNode.applyWeightChanges(ctx))
assert.Equal(t, uint64(300), s.nodeByRoot[indexToHash(1)].weight)
assert.Equal(t, uint64(200), s.nodeByRoot[indexToHash(2)].weight)
assert.Equal(t, uint64(100), s.nodeByRoot[indexToHash(3)].weight)
}
func TestNode_ApplyWeightChanges_NegativeChange(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), indexToHash(1), 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 3, indexToHash(3), indexToHash(2), 0, 0, false))
// The updated balance of each node is 100
s := f.store
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
s.nodeByRoot[indexToHash(1)].weight = 400
s.nodeByRoot[indexToHash(2)].weight = 400
s.nodeByRoot[indexToHash(3)].weight = 400
s.nodeByRoot[indexToHash(1)].balance = 100
s.nodeByRoot[indexToHash(2)].balance = 100
s.nodeByRoot[indexToHash(3)].balance = 100
assert.NoError(t, s.treeRootNode.applyWeightChanges(ctx))
assert.Equal(t, uint64(300), s.nodeByRoot[indexToHash(1)].weight)
assert.Equal(t, uint64(200), s.nodeByRoot[indexToHash(2)].weight)
assert.Equal(t, uint64(100), s.nodeByRoot[indexToHash(3)].weight)
}
func TestNode_UpdateBestDescendant_NonViableChild(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
// Input child is not viable.
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 2, 3, false))
// Verify parent's best child and best descendant are `none`.
s := f.store
assert.Equal(t, 1, len(s.treeRootNode.children))
nilBestDescendant := s.treeRootNode.bestDescendant == nil
assert.Equal(t, true, nilBestDescendant)
}
func TestNode_UpdateBestDescendant_ViableChild(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
// Input child is best descendant
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
s := f.store
assert.Equal(t, 1, len(s.treeRootNode.children))
assert.Equal(t, s.treeRootNode.children[0], s.treeRootNode.bestDescendant)
}
func TestNode_UpdateBestDescendant_HigherWeightChild(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
// Input child is best descendant
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1, false))
s := f.store
s.nodeByRoot[indexToHash(1)].weight = 100
s.nodeByRoot[indexToHash(2)].weight = 200
assert.NoError(t, s.treeRootNode.updateBestDescendant(ctx, 1, 1))
assert.Equal(t, 2, len(s.treeRootNode.children))
assert.Equal(t, s.treeRootNode.children[1], s.treeRootNode.bestDescendant)
}
func TestNode_UpdateBestDescendant_LowerWeightChild(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
// Input child is best descendant
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1, false))
s := f.store
s.nodeByRoot[indexToHash(1)].weight = 200
s.nodeByRoot[indexToHash(2)].weight = 100
assert.NoError(t, s.treeRootNode.updateBestDescendant(ctx, 1, 1))
assert.Equal(t, 2, len(s.treeRootNode.children))
assert.Equal(t, s.treeRootNode.children[0], s.treeRootNode.bestDescendant)
}
func TestNode_TestDepth(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
// Input child is best descendant
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), indexToHash(1), 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 3, indexToHash(3), params.BeaconConfig().ZeroHash, 1, 1, false))
s := f.store
require.Equal(t, s.nodeByRoot[indexToHash(2)].depth(), uint64(2))
require.Equal(t, s.nodeByRoot[indexToHash(3)].depth(), uint64(1))
}
func TestNode_ViableForHead(t *testing.T) {
tests := []struct {
n *Node
justifiedEpoch types.Epoch
finalizedEpoch types.Epoch
want bool
}{
{&Node{}, 0, 0, true},
{&Node{}, 1, 0, false},
{&Node{}, 0, 1, false},
{&Node{finalizedEpoch: 1, justifiedEpoch: 1}, 1, 1, true},
{&Node{finalizedEpoch: 1, justifiedEpoch: 1}, 2, 2, false},
{&Node{finalizedEpoch: 3, justifiedEpoch: 4}, 4, 3, true},
}
for _, tc := range tests {
got := tc.n.viableForHead(tc.justifiedEpoch, tc.finalizedEpoch)
assert.Equal(t, tc.want, got)
}
}
func TestNode_LeadsToViableHead(t *testing.T) {
f := setup(4, 3)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 3, indexToHash(3), indexToHash(1), 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 4, indexToHash(4), indexToHash(2), 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 5, indexToHash(5), indexToHash(3), 4, 3, false))
require.Equal(t, true, f.store.treeRootNode.leadsToViableHead(4, 3))
require.Equal(t, true, f.store.nodeByRoot[indexToHash(5)].leadsToViableHead(4, 3))
require.Equal(t, false, f.store.nodeByRoot[indexToHash(2)].leadsToViableHead(4, 3))
require.Equal(t, false, f.store.nodeByRoot[indexToHash(4)].leadsToViableHead(4, 3))
}
func TestNode_SetFullyValidated(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
// Insert blocks in the fork pattern below (optimistic status in parentheses).
//
// 0 (false) -- 1 (false) -- 2 (false) -- 3 (true) -- 4 (true)
// \
// -- 5 (true)
//
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), indexToHash(1), 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 3, indexToHash(3), indexToHash(2), 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 4, indexToHash(4), indexToHash(3), 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 5, indexToHash(5), indexToHash(1), 1, 1, true))
opt, err := f.IsOptimistic(ctx, indexToHash(5))
require.NoError(t, err)
require.Equal(t, true, opt)
opt, err = f.IsOptimistic(ctx, indexToHash(4))
require.NoError(t, err)
require.Equal(t, true, opt)
require.NoError(t, f.store.nodeByRoot[indexToHash(4)].setNodeAndParentValidated(ctx))
// block 5 should still be optimistic
opt, err = f.IsOptimistic(ctx, indexToHash(5))
require.NoError(t, err)
require.Equal(t, true, opt)
// block 4 and 3 should now be valid
opt, err = f.IsOptimistic(ctx, indexToHash(4))
require.NoError(t, err)
require.Equal(t, false, opt)
opt, err = f.IsOptimistic(ctx, indexToHash(3))
require.NoError(t, err)
require.Equal(t, false, opt)
}


@@ -0,0 +1,50 @@
package doublylinkedtree
import (
"context"
)
// removeNode removes the node with the given root and all of its children
// from the Fork Choice Store.
func (s *Store) removeNode(ctx context.Context, root [32]byte) error {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
node, ok := s.nodeByRoot[root]
if !ok || node == nil {
return errNilNode
}
if !node.optimistic || node.parent == nil {
return errInvalidOptimisticStatus
}
children := node.parent.children
if len(children) == 1 {
node.parent.children = []*Node{}
} else {
for i, n := range children {
if n == node {
if i != len(children)-1 {
children[i] = children[len(children)-1]
}
node.parent.children = children[:len(children)-1]
break
}
}
}
return s.removeNodeAndChildren(ctx, node)
}
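// The loop above uses Go's swap-remove idiom: the entry being deleted is
// overwritten with the last entry and the slice is shrunk by one. The helper
// below is a minimal, self-contained sketch of that idiom; the name removeChild
// is hypothetical and not part of this file.
func removeChild(children []*Node, target *Node) []*Node {
	for i, c := range children {
		if c == target {
			// Move the last element into the freed slot, then drop the tail.
			children[i] = children[len(children)-1]
			return children[:len(children)-1]
		}
	}
	// Target not found; return the slice unchanged.
	return children
}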
// removeNodeAndChildren removes `node` and all of its descendants from the Store.
func (s *Store) removeNodeAndChildren(ctx context.Context, node *Node) error {
for _, child := range node.children {
if ctx.Err() != nil {
return ctx.Err()
}
if err := s.removeNodeAndChildren(ctx, child); err != nil {
return err
}
}
delete(s.nodeByRoot, node.root)
return nil
}


@@ -0,0 +1,70 @@
package doublylinkedtree
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/require"
)
// We test the algorithm to update a node from SYNCING to INVALID.
// We start with the following diagram:
//
//                  E -- F
//                 /
//            C -- D
//           /      \
//     A -- B        G -- H -- I
//           \        \
//            J        K -- L
//
// And every block in the Fork choice is optimistic.
//
func TestPruneInvalid(t *testing.T) {
tests := []struct {
root [32]byte // the root of the new INVALID block
wantedNodeNumber int
}{
{
[32]byte{'j'},
12,
},
{
[32]byte{'c'},
4,
},
{
[32]byte{'i'},
12,
},
{
[32]byte{'h'},
11,
},
{
[32]byte{'g'},
8,
},
}
for _, tc := range tests {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, 1, 1, true))
require.NoError(t, f.store.removeNode(context.Background(), tc.root))
require.Equal(t, tc.wantedNodeNumber, f.NodeCount())
}
}


@@ -0,0 +1,73 @@
package doublylinkedtree
import (
"context"
"time"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/time/slots"
)
// BoostProposerRoot sets the block root which should be boosted during
// the LMD fork choice algorithm calculations. This is meant to reward timely,
// proposed blocks which occur before a cutoff interval set to
// SECONDS_PER_SLOT // INTERVALS_PER_SLOT.
//
// time_into_slot = (store.time - store.genesis_time) % SECONDS_PER_SLOT
// is_before_attesting_interval = time_into_slot < SECONDS_PER_SLOT // INTERVALS_PER_SLOT
// if get_current_slot(store) == block.slot and is_before_attesting_interval:
// store.proposer_boost_root = hash_tree_root(block)
func (f *ForkChoice) BoostProposerRoot(_ context.Context, blockSlot types.Slot, blockRoot [32]byte, genesisTime time.Time) error {
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
timeIntoSlot := uint64(time.Since(genesisTime).Seconds()) % secondsPerSlot
isBeforeAttestingInterval := timeIntoSlot < secondsPerSlot/params.BeaconConfig().IntervalsPerSlot
currentSlot := slots.SinceGenesis(genesisTime)
// Only update the boosted proposer root to the incoming block root
// if the block is for the current, clock-based slot and the block was timely.
if currentSlot == blockSlot && isBeforeAttestingInterval {
f.store.proposerBoostLock.Lock()
f.store.proposerBoostRoot = blockRoot
f.store.proposerBoostLock.Unlock()
}
return nil
}
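// boostTimelyBlockSketch shows a timely BoostProposerRoot call under mainnet
// parameters (SecondsPerSlot = 12, IntervalsPerSlot = 3), where the cutoff above
// works out to the first 12/3 = 4 seconds of a slot. This is an illustrative
// sketch only, not part of the original change; it assumes this package's test
// setup helper and an arbitrary block root.
func boostTimelyBlockSketch(ctx context.Context) error {
	f := setup(0, 0)
	blockRoot := [32]byte{'a'}
	// Genesis exactly one slot in the past puts us 0 seconds into slot 1,
	// well inside the 4 second attesting interval, so the root gets boosted.
	genesis := time.Now().Add(-time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
	return f.BoostProposerRoot(ctx, types.Slot(1), blockRoot, genesis)
}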
// ResetBoostedProposerRoot sets the value of the proposer boosted root to zeros.
func (f *ForkChoice) ResetBoostedProposerRoot(_ context.Context) error {
f.store.proposerBoostLock.Lock()
f.store.proposerBoostRoot = [32]byte{}
f.store.proposerBoostLock.Unlock()
return nil
}
// Given a list of validator balances, we compute the proposer boost score
// that should be given to a proposer based on their committee weight, derived from
// the total active balances, the size of a committee, and a boost score constant.
// IMPORTANT: The caller MUST pass in a list of validator balances where balances > 0 refer to active
// validators while balances == 0 are for inactive validators.
func computeProposerBoostScore(validatorBalances []uint64) (score uint64, err error) {
totalActiveBalance := uint64(0)
numActive := uint64(0)
for _, balance := range validatorBalances {
// We only consider balances > 0. The input slice should be constructed
// as balance > 0 for all active validators and 0 for inactive ones.
if balance == 0 {
continue
}
totalActiveBalance += balance
numActive += 1
}
if numActive == 0 {
// Should never happen.
err = errors.New("no active validators")
return
}
avgBalance := totalActiveBalance / numActive
committeeSize := numActive / uint64(params.BeaconConfig().SlotsPerEpoch)
committeeWeight := committeeSize * avgBalance
score = (committeeWeight * params.BeaconConfig().ProposerScoreBoost) / 100
return
}
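// computeProposerBoostScoreSketch walks through the arithmetic above with the
// same numbers the tests below use: 64 active validators with balance 10 each
// and the config's ProposerScoreBoost of 70. Committee size is 64/32 = 2,
// committee weight is 2*10 = 20, and the score is 20*70/100 = 14.
// Illustrative sketch only, not part of the original file.
func computeProposerBoostScoreSketch() (uint64, error) {
	balances := make([]uint64, 64)
	for i := range balances {
		balances[i] = 10 // every validator is active with balance 10
	}
	return computeProposerBoostScore(balances) // 14
}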


@@ -0,0 +1,491 @@
package doublylinkedtree
import (
"context"
"fmt"
"testing"
"time"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
// Simple, ex-ante attack mitigation using proposer boost.
// In a nutshell, an adversarial block proposer in slot n+1 keeps its proposal hidden.
// The honest block proposer in slot n+2 will then propose an honest block. The
// adversary can now use its committee members' votes from both slots n+1 and n+2,
// and release their withheld block of slot n+1 in an attempt to win fork choice.
// If the honest proposal is boosted at slot n+2, it will win against this attacker.
func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
ctx := context.Background()
zeroHash := params.BeaconConfig().ZeroHash
balances := make([]uint64, 64) // 64 active validators.
for i := 0; i < len(balances); i++ {
balances[i] = 10
}
jEpoch, fEpoch := types.Epoch(0), types.Epoch(0)
t.Run("back-propagates boost score to ancestors after proposer boosting", func(t *testing.T) {
f := setup(jEpoch, fEpoch)
// The head should always start at the finalized block.
headRoot, err := f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, zeroHash, headRoot, "Incorrect head with genesis")
// Insert block at slot 1 into the tree and verify head is at that block:
// 0
// |
// 1 <- HEAD
slot := types.Slot(1)
newRoot := indexToHash(1)
require.NoError(t,
f.ProcessBlock(
ctx,
slot,
newRoot,
headRoot,
jEpoch,
fEpoch,
true,
),
)
f.ProcessAttestation(ctx, []uint64{0}, newRoot, fEpoch)
headRoot, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 1")
// Insert block at slot 2 into the tree and verify head is at that block:
// 0
// |
// 1
// |
// 2 <- HEAD
slot = types.Slot(2)
newRoot = indexToHash(2)
require.NoError(t,
f.ProcessBlock(
ctx,
slot,
newRoot,
headRoot,
jEpoch,
fEpoch,
true,
),
)
f.ProcessAttestation(ctx, []uint64{1}, newRoot, fEpoch)
headRoot, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 2")
// Insert block at slot 3 into the tree and verify head is at that block:
// 0
// |
// 1
// |
// 2
// |
// 3 <- HEAD
slot = types.Slot(3)
newRoot = indexToHash(3)
require.NoError(t,
f.ProcessBlock(
ctx,
slot,
newRoot,
headRoot,
jEpoch,
fEpoch,
true,
),
)
f.ProcessAttestation(ctx, []uint64{2}, newRoot, fEpoch)
headRoot, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 3")
// Insert a second block at slot 3 into the tree and boost its score.
// 0
// |
// 1
// |
// 2
// / \
// 3 4 <- HEAD
slot = types.Slot(3)
newRoot = indexToHash(4)
require.NoError(t,
f.ProcessBlock(
ctx,
slot,
newRoot,
headRoot,
jEpoch,
fEpoch,
true,
),
)
f.ProcessAttestation(ctx, []uint64{3}, newRoot, fEpoch)
threeSlots := 3 * params.BeaconConfig().SecondsPerSlot
genesisTime := time.Now().Add(-time.Second * time.Duration(threeSlots))
require.NoError(t, f.BoostProposerRoot(ctx, slot, newRoot, genesisTime))
headRoot, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 3")
// Check the ancestor scores from the store.
require.Equal(t, 5, len(f.store.nodeByRoot))
// Expect nodes to have a boosted, back-propagated score.
// Ancestors have the added weights of their children. Genesis is a special exception at 0 weight.
require.Equal(t, f.store.treeRootNode.weight, uint64(0))
// Otherwise, assuming a block, A, that is not-genesis:
//
// A -> B -> C
//
// Where each one has a weight of 10 individually, the final weights will look like
//
// (A: 30) -> (B: 20) -> (C: 10)
//
// The boost adds 14 to the weight, so if C is boosted, we would have
//
// (A: 44) -> (B: 34) -> (C: 24)
//
// In this case, we have a small fork:
//
// (A: 54) -> (B: 44) -> (C: 34)
// \_->(D: 24)
//
// So B has its own weight, 10, plus the weights of both C and D. That's why A ends up
// at weight 54 instead of following the plain progression of (44 -> 34 -> 24).
node1 := f.store.nodeByRoot[indexToHash(1)]
require.Equal(t, node1.weight, uint64(54))
node2 := f.store.nodeByRoot[indexToHash(2)]
require.Equal(t, node2.weight, uint64(44))
node3 := f.store.nodeByRoot[indexToHash(4)]
require.Equal(t, node3.weight, uint64(24))
})
t.Run("vanilla ex ante attack", func(t *testing.T) {
f := setup(jEpoch, fEpoch)
// The head should always start at the finalized block.
r, err := f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, zeroHash, r, "Incorrect head with genesis")
// Proposer from slot 1 does not reveal their block, B, at slot 1.
// Proposer at slot 2 does reveal their block, C, and it becomes the head.
// C builds on A, as proposer at slot 1 did not reveal B.
// A
// / \
// (B?) \
// \
// C <- Slot 2 HEAD
honestBlockSlot := types.Slot(2)
honestBlock := indexToHash(2)
require.NoError(t,
f.ProcessBlock(
ctx,
honestBlockSlot,
honestBlock,
zeroHash,
jEpoch,
fEpoch,
true,
),
)
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
maliciouslyWithheldBlockSlot := types.Slot(1)
maliciouslyWithheldBlock := indexToHash(1)
require.NoError(t,
f.ProcessBlock(
ctx,
maliciouslyWithheldBlockSlot,
maliciouslyWithheldBlock,
zeroHash,
jEpoch,
fEpoch,
true,
),
)
// Ensure the head is C, the honest block.
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
// We boost the honest proposal at slot 2.
secondsPerSlot := time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot)
genesis := time.Now().Add(-2 * secondsPerSlot)
require.NoError(t, f.BoostProposerRoot(ctx, honestBlockSlot, honestBlock, genesis))
// The maliciously withheld block has one vote.
votes := []uint64{1}
f.ProcessAttestation(ctx, votes, maliciouslyWithheldBlock, fEpoch)
// Ensure the head is STILL C, the honest block, as the honest block had proposer boost.
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
})
t.Run("adversarial attestations > proposer boosting", func(t *testing.T) {
f := setup(jEpoch, fEpoch)
// The head should always start at the finalized block.
r, err := f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, zeroHash, r, "Incorrect head with genesis")
// Proposer from slot 1 does not reveal their block, B, at slot 1.
// Proposer at slot 2 does reveal their block, C, and it becomes the head.
// C builds on A, as proposer at slot 1 did not reveal B.
// A
// / \
// (B?) \
// \
// C <- Slot 2 HEAD
honestBlockSlot := types.Slot(2)
honestBlock := indexToHash(2)
require.NoError(t,
f.ProcessBlock(
ctx,
honestBlockSlot,
honestBlock,
zeroHash,
jEpoch,
fEpoch,
true,
),
)
// Ensure C is the head.
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
maliciouslyWithheldBlockSlot := types.Slot(1)
maliciouslyWithheldBlock := indexToHash(1)
require.NoError(t,
f.ProcessBlock(
ctx,
maliciouslyWithheldBlockSlot,
maliciouslyWithheldBlock,
zeroHash,
jEpoch,
fEpoch,
true,
),
)
// Ensure C is still the head after the malicious proposer reveals their block.
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
// We boost the honest proposal at slot 2.
secondsPerSlot := time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot)
genesis := time.Now().Add(-2 * secondsPerSlot)
require.NoError(t, f.BoostProposerRoot(ctx, honestBlockSlot, honestBlock, genesis))
// An attestation is received for B that has more voting power than C with the proposer boost,
// allowing B to then become the head if their attestation has enough adversarial votes.
votes := []uint64{1, 2}
f.ProcessAttestation(ctx, votes, maliciouslyWithheldBlock, fEpoch)
// Expect the head to have switched to B.
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, maliciouslyWithheldBlock, r, "Expected B to become the head")
})
t.Run("boosting necessary to sandwich attack", func(t *testing.T) {
// Boosting necessary to sandwich attack.
// Objects:
// Block A - slot N
// Block B (parent A) - slot N+1
// Block C (parent A) - slot N+2
// Block D (parent B) - slot N+3
// Attestation_1 (Block C); size 1 - slot N+2 (honest)
// Steps:
// Block A received at N — A is head
// Block C received at N+2 — C is head
// Block B received at N+2 — C is head
// Attestation_1 received at N+3 — C is head
// Block D received at N+3 — D is head
f := setup(jEpoch, fEpoch)
a := zeroHash
// The head should always start at the finalized block.
r, err := f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, zeroHash, r, "Incorrect head with genesis")
cSlot := types.Slot(2)
c := indexToHash(2)
require.NoError(t,
f.ProcessBlock(
ctx,
cSlot,
c,
a, // parent
jEpoch,
fEpoch,
true,
),
)
// Ensure C is the head.
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, c, r, "Incorrect head for justified epoch at slot 2")
// We boost C.
secondsPerSlot := time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot)
genesis := time.Now().Add(-2 * secondsPerSlot)
require.NoError(t, f.BoostProposerRoot(ctx, cSlot /* slot */, c, genesis))
bSlot := types.Slot(1)
b := indexToHash(1)
require.NoError(t,
f.ProcessBlock(
ctx,
bSlot,
b,
a, // parent
jEpoch,
fEpoch,
true,
),
)
// Ensure C is still the head.
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, c, r, "Incorrect head for justified epoch at slot 2")
// An attestation for C is received at slot N+3.
votes := []uint64{1}
f.ProcessAttestation(ctx, votes, c, fEpoch)
// A block D, building on B, is received at slot N+3. It should not be able to win without boosting.
dSlot := types.Slot(3)
d := indexToHash(3)
require.NoError(t,
f.ProcessBlock(
ctx,
dSlot,
d,
b, // parent
jEpoch,
fEpoch,
true,
),
)
// D cannot win without a boost.
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, c, r, "Expected C to remain the head")
// Block D receives the boost.
genesis = time.Now().Add(-3 * secondsPerSlot)
require.NoError(t, f.BoostProposerRoot(ctx, dSlot /* slot */, d, genesis))
// Ensure D becomes the head thanks to boosting.
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
require.NoError(t, err)
assert.Equal(t, d, r, "Expected D to become the head")
})
}
func TestForkChoice_BoostProposerRoot(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.SecondsPerSlot = 6
cfg.IntervalsPerSlot = 3
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
t.Run("does not boost block from different slot", func(t *testing.T) {
f := &ForkChoice{
store: &Store{},
}
// Genesis set to 1 slot ago.
genesis := time.Now().Add(-time.Duration(cfg.SecondsPerSlot) * time.Second)
blockRoot := [32]byte{'A'}
// Trying to boost a block from slot 0 should not work.
err := f.BoostProposerRoot(ctx, types.Slot(0), blockRoot, genesis)
require.NoError(t, err)
require.DeepEqual(t, [32]byte{}, f.store.proposerBoostRoot)
})
t.Run("does not boost untimely block from same slot", func(t *testing.T) {
f := &ForkChoice{
store: &Store{},
}
// Genesis set to 1 slot ago + X where X > attesting interval.
genesis := time.Now().Add(-time.Duration(cfg.SecondsPerSlot) * time.Second)
attestingInterval := time.Duration(cfg.SecondsPerSlot/cfg.IntervalsPerSlot) * time.Second
greaterThanAttestingInterval := attestingInterval + 100*time.Millisecond
genesis = genesis.Add(-greaterThanAttestingInterval)
blockRoot := [32]byte{'A'}
// Trying to boost a block from slot 1 that is untimely should not work.
err := f.BoostProposerRoot(ctx, types.Slot(1), blockRoot, genesis)
require.NoError(t, err)
require.DeepEqual(t, [32]byte{}, f.store.proposerBoostRoot)
})
t.Run("boosts perfectly timely block from same slot", func(t *testing.T) {
f := &ForkChoice{
store: &Store{},
}
// Genesis set to 1 slot ago + 0 seconds into the attesting interval.
genesis := time.Now().Add(-time.Duration(cfg.SecondsPerSlot) * time.Second)
fmt.Println(genesis)
blockRoot := [32]byte{'A'}
err := f.BoostProposerRoot(ctx, types.Slot(1), blockRoot, genesis)
require.NoError(t, err)
require.DeepEqual(t, [32]byte{'A'}, f.store.proposerBoostRoot)
})
t.Run("boosts timely block from same slot", func(t *testing.T) {
f := &ForkChoice{
store: &Store{},
}
// Genesis set to 1 slot ago + (attesting interval / 2).
genesis := time.Now().Add(-time.Duration(cfg.SecondsPerSlot) * time.Second)
blockRoot := [32]byte{'A'}
halfAttestingInterval := time.Second
genesis = genesis.Add(-halfAttestingInterval)
err := f.BoostProposerRoot(ctx, types.Slot(1), blockRoot, genesis)
require.NoError(t, err)
require.DeepEqual(t, [32]byte{'A'}, f.store.proposerBoostRoot)
})
}
func TestForkChoice_computeProposerBoostScore(t *testing.T) {
t.Run("nil justified balances throws error", func(t *testing.T) {
_, err := computeProposerBoostScore(nil)
require.ErrorContains(t, "no active validators", err)
})
t.Run("normal active balances computes score", func(t *testing.T) {
validatorBalances := make([]uint64, 64) // Num validators
for i := 0; i < len(validatorBalances); i++ {
validatorBalances[i] = 10
}
// Avg balance is 10, and the number of validators is 64.
// With a committee size of num validators (64) / slots per epoch (32) == 2.
// we then have a committee weight of avg balance * committee size = 10 * 2 = 20.
// The score then becomes 20 * PROPOSER_SCORE_BOOST // 100, which is
// 20 * 70 / 100 = 14.
score, err := computeProposerBoostScore(validatorBalances)
require.NoError(t, err)
require.Equal(t, uint64(14), score)
})
}


@@ -0,0 +1,223 @@
package doublylinkedtree
import (
"context"
"fmt"
types "github.com/prysmaticlabs/eth2-types"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"go.opencensus.io/trace"
)
// defaultPruneThreshold defines the minimal number of block nodes that must be in the tree
// before it gets pruned upon a new finalization.
const defaultPruneThreshold = 256
// applyProposerBoostScore applies the current proposer boost scores to the
// relevant nodes
func (s *Store) applyProposerBoostScore(newBalances []uint64) error {
s.proposerBoostLock.Lock()
defer s.proposerBoostLock.Unlock()
proposerScore := uint64(0)
var err error
if s.previousProposerBoostRoot != params.BeaconConfig().ZeroHash {
previousNode, ok := s.nodeByRoot[s.previousProposerBoostRoot]
if !ok || previousNode == nil {
return errInvalidProposerBoostRoot
}
previousNode.balance -= s.previousProposerBoostScore
}
if s.proposerBoostRoot != params.BeaconConfig().ZeroHash {
currentNode, ok := s.nodeByRoot[s.proposerBoostRoot]
if !ok || currentNode == nil {
return errInvalidProposerBoostRoot
}
proposerScore, err = computeProposerBoostScore(newBalances)
if err != nil {
return err
}
currentNode.balance += proposerScore
}
s.previousProposerBoostRoot = s.proposerBoostRoot
s.previousProposerBoostScore = proposerScore
return nil
}
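// In effect the boost is moved rather than stacked: the score previously credited
// to previousProposerBoostRoot is subtracted before the freshly computed score is
// added to the current proposerBoostRoot. A rough numeric sketch, assuming the
// boost score of 14 derived in the proposer boost tests:
//
//	before: previousProposerBoostRoot = A, previousProposerBoostScore = 14, A.balance = 24
//	applyProposerBoostScore(newBalances) with proposerBoostRoot = B:
//	    A.balance -= 14   // old boost removed, A.balance back to 10
//	    B.balance += 14   // newly computed score credited to B
//	    previousProposerBoostRoot, previousProposerBoostScore = B, 14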
// proposerBoost returns the proposer boost root of the fork choice store.
func (s *Store) proposerBoost() [fieldparams.RootLength]byte {
s.proposerBoostLock.RLock()
defer s.proposerBoostLock.RUnlock()
return s.proposerBoostRoot
}
// PruneThreshold of fork choice store.
func (s *Store) PruneThreshold() uint64 {
return s.pruneThreshold
}
// head starts from justified root and then follows the best descendant links
// to find the best block for head. This function assumes a lock on s.nodesLock
func (s *Store) head(ctx context.Context, justifiedRoot [32]byte) ([32]byte, error) {
_, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.head")
defer span.End()
// JustifiedRoot has to be known
justifiedNode, ok := s.nodeByRoot[justifiedRoot]
if !ok || justifiedNode == nil {
return [32]byte{}, errUnknownJustifiedRoot
}
// If the justified node doesn't have a best descendant,
// the best node is itself.
bestDescendant := justifiedNode.bestDescendant
if bestDescendant == nil {
bestDescendant = justifiedNode
}
if !bestDescendant.viableForHead(s.justifiedEpoch, s.finalizedEpoch) {
return [32]byte{}, fmt.Errorf("head at slot %d with weight %d is not eligible, finalizedEpoch %d != %d, justifiedEpoch %d != %d",
bestDescendant.slot, bestDescendant.weight/10e9, bestDescendant.finalizedEpoch, s.finalizedEpoch, bestDescendant.justifiedEpoch, s.justifiedEpoch)
}
// Update metrics.
if bestDescendant != s.headNode {
headChangesCount.Inc()
headSlotNumber.Set(float64(bestDescendant.slot))
s.headNode = bestDescendant
}
return bestDescendant.root, nil
}
// insert registers a new block node to the fork choice store's node list.
// It then updates the new node's parent with best child and descendant node.
func (s *Store) insert(ctx context.Context,
slot types.Slot,
root, parentRoot [fieldparams.RootLength]byte,
justifiedEpoch, finalizedEpoch types.Epoch, optimistic bool) error {
_, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.insert")
defer span.End()
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
// Return if the block has been inserted into Store before.
if _, ok := s.nodeByRoot[root]; ok {
return nil
}
parent := s.nodeByRoot[parentRoot]
n := &Node{
slot: slot,
root: root,
parent: parent,
justifiedEpoch: justifiedEpoch,
finalizedEpoch: finalizedEpoch,
optimistic: optimistic,
}
s.nodeByRoot[root] = n
if parent != nil {
parent.children = append(parent.children, n)
if err := s.treeRootNode.updateBestDescendant(ctx, s.justifiedEpoch, s.finalizedEpoch); err != nil {
return err
}
}
if !optimistic {
if err := n.setNodeAndParentValidated(ctx); err != nil {
return err
}
} else {
optimisticCount.Inc()
}
// Set the node as root if the store was empty
if s.treeRootNode == nil {
s.treeRootNode = n
s.headNode = n
}
// Update metrics.
processedBlockCount.Inc()
nodeCount.Set(float64(len(s.nodeByRoot)))
return nil
}
// updateCheckpoints updates the justified and finalized epochs in the store if necessary.
func (s *Store) updateCheckpoints(justifiedEpoch, finalizedEpoch types.Epoch) {
s.justifiedEpoch = justifiedEpoch
s.finalizedEpoch = finalizedEpoch
}
// pruneFinalizedNodeByRootMap prunes the `nodeByRoot` map
// starting from `node` down to the finalized Node or to a leaf of the Fork
// choice store. This method assumes a lock on nodesLock.
func (s *Store) pruneFinalizedNodeByRootMap(ctx context.Context, node, finalizedNode *Node) error {
if ctx.Err() != nil {
return ctx.Err()
}
if node == finalizedNode {
return nil
}
for _, child := range node.children {
if err := s.pruneFinalizedNodeByRootMap(ctx, child, finalizedNode); err != nil {
return err
}
}
delete(s.nodeByRoot, node.root)
return nil
}
// prune prunes the fork choice store with the new finalized root. The store is only pruned if the input
// root is different from the current store finalized root and the number of nodes in the store has met the prune threshold.
// This function does not prune invalid, optimistically synced nodes; it deals only with pruning upon finalization.
func (s *Store) prune(ctx context.Context, finalizedRoot [32]byte) error {
_, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.Prune")
defer span.End()
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
finalizedNode, ok := s.nodeByRoot[finalizedRoot]
if !ok || finalizedNode == nil {
return errUnknownFinalizedRoot
}
// The number of nodes has not met the prune threshold.
// Pruning at small numbers incurs more cost than benefit.
if finalizedNode.depth() < s.pruneThreshold {
return nil
}
// Prune nodeByRoot starting from root
if err := s.pruneFinalizedNodeByRootMap(ctx, s.treeRootNode, finalizedNode); err != nil {
return err
}
finalizedNode.parent = nil
s.treeRootNode = finalizedNode
prunedCount.Inc()
return nil
}
// tips returns a list of possible heads from fork choice store, it returns the
// roots and the slots of the leaf nodes.
func (s *Store) tips() ([][32]byte, []types.Slot) {
var roots [][32]byte
var slots []types.Slot
for root, node := range s.nodeByRoot {
if len(node.children) == 0 {
roots = append(roots, root)
slots = append(slots, node.slot)
}
}
return roots, slots
}


@@ -0,0 +1,274 @@
package doublylinkedtree
import (
"context"
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestStore_PruneThreshold(t *testing.T) {
s := &Store{
pruneThreshold: defaultPruneThreshold,
}
if got := s.PruneThreshold(); got != defaultPruneThreshold {
t.Errorf("PruneThreshold() = %v, want %v", got, defaultPruneThreshold)
}
}
func TestStore_JustifiedEpoch(t *testing.T) {
j := types.Epoch(100)
f := setup(j, j)
require.Equal(t, j, f.JustifiedEpoch())
}
func TestStore_FinalizedEpoch(t *testing.T) {
j := types.Epoch(50)
f := setup(j, j)
require.Equal(t, j, f.FinalizedEpoch())
}
func TestStore_NodeCount(t *testing.T) {
f := setup(0, 0)
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.Equal(t, 2, f.NodeCount())
}
func TestStore_NodeByRoot(t *testing.T) {
f := setup(0, 0)
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(2), indexToHash(1), 0, 0, false))
node0 := f.store.treeRootNode
node1 := node0.children[0]
node2 := node1.children[0]
expectedRoots := map[[32]byte]*Node{
params.BeaconConfig().ZeroHash: node0,
indexToHash(1): node1,
indexToHash(2): node2,
}
require.Equal(t, 3, f.NodeCount())
for root, node := range f.store.nodeByRoot {
v, ok := expectedRoots[root]
require.Equal(t, ok, true)
require.Equal(t, v, node)
}
}
func TestForkChoice_HasNode(t *testing.T) {
f := setup(0, 0)
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.Equal(t, true, f.HasNode(indexToHash(1)))
}
func TestStore_Head_UnknownJustifiedRoot(t *testing.T) {
f := setup(0, 0)
_, err := f.store.head(context.Background(), [32]byte{'a'})
assert.ErrorContains(t, errUnknownJustifiedRoot.Error(), err)
}
func TestStore_Head_Itself(t *testing.T) {
f := setup(0, 0)
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
// Since the justified node does not have a best descendant, the best node
// is itself.
h, err := f.store.head(context.Background(), indexToHash(1))
require.NoError(t, err)
assert.Equal(t, indexToHash(1), h)
}
func TestStore_Head_BestDescendant(t *testing.T) {
f := setup(0, 0)
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(2), indexToHash(1), 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(3), indexToHash(1), 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(4), indexToHash(2), 0, 0, false))
h, err := f.store.head(context.Background(), indexToHash(1))
require.NoError(t, err)
require.Equal(t, h, indexToHash(4))
}
func TestStore_UpdateBestDescendant_ContextCancelled(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
f := setup(0, 0)
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
cancel()
err := f.ProcessBlock(ctx, 2, indexToHash(2), indexToHash(1), 0, 0, false)
require.ErrorContains(t, "context canceled", err)
}
func TestStore_Insert(t *testing.T) {
// The new node does not have a parent.
treeRootNode := &Node{slot: 0, root: indexToHash(0)}
nodeByRoot := map[[32]byte]*Node{indexToHash(0): treeRootNode}
s := &Store{nodeByRoot: nodeByRoot, treeRootNode: treeRootNode}
require.NoError(t, s.insert(context.Background(), 100, indexToHash(100), indexToHash(0), 1, 1, false))
assert.Equal(t, 2, len(s.nodeByRoot), "Did not insert block")
assert.Equal(t, (*Node)(nil), treeRootNode.parent, "Incorrect parent")
assert.Equal(t, 1, len(treeRootNode.children), "Incorrect children number")
child := treeRootNode.children[0]
assert.Equal(t, types.Epoch(1), child.justifiedEpoch, "Incorrect justification")
assert.Equal(t, types.Epoch(1), child.finalizedEpoch, "Incorrect finalization")
assert.Equal(t, indexToHash(100), child.root, "Incorrect root")
}
func TestStore_updateCheckpoints(t *testing.T) {
f := setup(0, 0)
s := f.store
s.updateCheckpoints(1, 1)
assert.Equal(t, types.Epoch(1), s.justifiedEpoch, "Did not update justified epoch")
assert.Equal(t, types.Epoch(1), s.finalizedEpoch, "Did not update finalized epoch")
}
func TestStore_Prune_LessThanThreshold(t *testing.T) {
// Define 100 nodes in store.
numOfNodes := uint64(100)
f := setup(0, 0)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
for i := uint64(2); i < numOfNodes; i++ {
require.NoError(t, f.ProcessBlock(ctx, types.Slot(i), indexToHash(i), indexToHash(i-1), 0, 0, false))
}
s := f.store
s.pruneThreshold = 100
// Finalized root has depth 99 so everything before it should be pruned,
// but PruneThreshold is at 100 so nothing will be pruned.
require.NoError(t, s.prune(context.Background(), indexToHash(99)))
assert.Equal(t, 100, len(s.nodeByRoot), "Incorrect nodes count")
}
func TestStore_Prune_MoreThanThreshold(t *testing.T) {
// Define 100 nodes in store.
numOfNodes := uint64(100)
f := setup(0, 0)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
for i := uint64(2); i < numOfNodes; i++ {
require.NoError(t, f.ProcessBlock(ctx, types.Slot(i), indexToHash(i), indexToHash(i-1), 0, 0, false))
}
s := f.store
s.pruneThreshold = 0
// Finalized root is at index 99 so everything before 99 should be pruned.
require.NoError(t, s.prune(context.Background(), indexToHash(99)))
assert.Equal(t, 1, len(s.nodeByRoot), "Incorrect nodes count")
}
func TestStore_Prune_MoreThanOnce(t *testing.T) {
// Define 100 nodes in store.
numOfNodes := uint64(100)
f := setup(0, 0)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
for i := uint64(2); i < numOfNodes; i++ {
require.NoError(t, f.ProcessBlock(ctx, types.Slot(i), indexToHash(i), indexToHash(i-1), 0, 0, false))
}
s := f.store
s.pruneThreshold = 0
// Finalized root is at index 10 so everything before 10 should be pruned.
require.NoError(t, s.prune(context.Background(), indexToHash(10)))
assert.Equal(t, 90, len(s.nodeByRoot), "Incorrect nodes count")
// One more time.
require.NoError(t, s.prune(context.Background(), indexToHash(20)))
assert.Equal(t, 80, len(s.nodeByRoot), "Incorrect nodes count")
}
// This unit test starts with a simple branch like this:
//
// - 1
// /
// -- 0 -- 2
//
// We then finalize 1. As a result, only node 1 should survive.
func TestStore_Prune_NoDanglingBranch(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), params.BeaconConfig().ZeroHash, 0, 0, false))
f.store.pruneThreshold = 0
s := f.store
require.NoError(t, s.prune(context.Background(), indexToHash(1)))
require.Equal(t, len(s.nodeByRoot), 1)
}
// This test starts with the following branching diagram:
//
//                  E -- F
//                 /
//            C -- D
//           /      \
//     A -- B        G -- H -- I
//           \        \
//            J        K -- L
//
//
func TestStore_tips(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, 1, 1, true))
expectedMap := map[[32]byte]types.Slot{
[32]byte{'f'}: 105,
[32]byte{'i'}: 106,
[32]byte{'l'}: 106,
[32]byte{'j'}: 102,
}
roots, slots := f.store.tips()
for i, r := range roots {
expectedSlot, ok := expectedMap[r]
require.Equal(t, true, ok)
require.Equal(t, slots[i], expectedSlot)
}
}
func TestStore_PruneMapsNodes(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), params.BeaconConfig().ZeroHash, 0, 0, false))
s := f.store
s.pruneThreshold = 0
require.NoError(t, s.prune(context.Background(), indexToHash(uint64(1))))
require.Equal(t, len(s.nodeByRoot), 1)
}
func TestStore_HasParent(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
require.NoError(t, f.ProcessBlock(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 2, indexToHash(2), indexToHash(1), 1, 1, false))
require.NoError(t, f.ProcessBlock(ctx, 3, indexToHash(3), indexToHash(2), 1, 1, false))
require.Equal(t, false, f.HasParent(params.BeaconConfig().ZeroHash))
require.Equal(t, true, f.HasParent(indexToHash(1)))
require.Equal(t, true, f.HasParent(indexToHash(2)))
require.Equal(t, true, f.HasParent(indexToHash(3)))
require.Equal(t, false, f.HasParent(indexToHash(4)))
}


@@ -0,0 +1,53 @@
package doublylinkedtree
import (
"sync"
types "github.com/prysmaticlabs/eth2-types"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
)
// ForkChoice defines the overall fork choice store which includes all block nodes, validator's latest votes and balances.
type ForkChoice struct {
store *Store
votes []Vote // tracks individual validator's last vote.
votesLock sync.RWMutex
balances []uint64 // tracks individual validator's last justified balances.
}
// Store defines the fork choice store which includes block nodes and the last view of checkpoint information.
type Store struct {
justifiedEpoch types.Epoch // latest justified epoch in store.
finalizedEpoch types.Epoch // latest finalized epoch in store.
pruneThreshold uint64 // do not prune tree unless threshold is reached.
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node
nodeByRoot map[[fieldparams.RootLength]byte]*Node // nodes indexed by roots.
nodesLock sync.RWMutex
proposerBoostLock sync.RWMutex
}
// Node defines the individual block which includes its block parent, ancestor and how much weight accounted for it.
// This is used as an array based stateful DAG for efficient fork choice look up.
type Node struct {
slot types.Slot // slot of the block converted to the node.
root [fieldparams.RootLength]byte // root of the block converted to the node.
parent *Node // parent index of this node.
children []*Node // the list of direct children of this Node
justifiedEpoch types.Epoch // justifiedEpoch of this node.
finalizedEpoch types.Epoch // finalizedEpoch of this node.
balance uint64 // the balance that voted for this node directly
weight uint64 // weight of this node: the total balance including children
bestDescendant *Node // bestDescendant node of this node.
optimistic bool // whether the block has been fully validated or not
}
// Vote defines an individual validator's vote.
type Vote struct {
currentRoot [fieldparams.RootLength]byte // current voting root.
nextRoot [fieldparams.RootLength]byte // next voting root.
nextEpoch types.Epoch // epoch of next voting period.
}
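// The balance and weight fields encode an invariant the store maintains
// incrementally: a node's weight equals its directly attested balance plus the
// weights of all of its children. The recursive helper below is a hypothetical
// sketch of that invariant and is not part of the package, which updates
// weights in place instead of recomputing them.
func subtreeWeight(n *Node) uint64 {
	w := n.balance
	for _, child := range n.children {
		w += subtreeWeight(child)
	}
	return w
}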


@@ -0,0 +1,297 @@
package doublylinkedtree
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestVotes_CanFindHead(t *testing.T) {
balances := []uint64{1, 1}
f := setup(1, 1)
// The head should always start at the finalized block.
r, err := f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().ZeroHash, r, "Incorrect head with genesis")
// Insert block 2 into the tree and verify head is at 2:
// 0
// /
// 2 <- head
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
// Insert block 1 into the tree and verify head is still at 2:
// 0
// / \
// head -> 2 1
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
// Add a vote to block 1 of the tree and verify head is switched to 1:
// 0
// / \
// 2 1 <- +vote, new head
f.ProcessAttestation(context.Background(), []uint64{0}, indexToHash(1), 2)
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(1), r, "Incorrect head for with justified epoch at 1")
// Add a vote to block 2 of the tree and verify head is switched to 2:
// 0
// / \
// vote, new head -> 2 1
f.ProcessAttestation(context.Background(), []uint64{1}, indexToHash(2), 2)
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
// Insert block 3 into the tree and verify head is still at 2:
// 0
// / \
// head -> 2 1
// |
// 3
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(3), indexToHash(1), 1, 1, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
// Move validator 0's vote from 1 to 3 and verify head is still at 2:
// 0
// / \
// head -> 2 1 <- old vote
// |
// 3 <- new vote
f.ProcessAttestation(context.Background(), []uint64{0}, indexToHash(3), 3)
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
// Move validator 1's vote from 2 to 1 and verify head is switched to 3:
// 0
// / \
// old vote -> 2 1 <- new vote
// |
// 3 <- head
f.ProcessAttestation(context.Background(), []uint64{1}, indexToHash(1), 3)
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(3), r, "Incorrect head for with justified epoch at 1")
// Insert block 4 into the tree and verify head is at 4:
// 0
// / \
// 2 1
// |
// 3
// |
// 4 <- head
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(4), indexToHash(3), 1, 1, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(4), r, "Incorrect head for with justified epoch at 1")
// Insert block 5 with justified epoch 2; it should be filtered out:
// 0
// / \
// 2 1
// |
// 3
// |
// 4 <- head
// /
// 5 <- justified epoch = 2
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(5), indexToHash(4), 2, 2, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(4), r, "Incorrect head for with justified epoch at 1")
// Insert block 6 with justified epoch 0:
// 0
// / \
// 2 1
// |
// 3
// |
// 4 <- head
// / \
// 5 6 <- justified epoch = 0
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(6), indexToHash(4), 1, 1, true))
// Moved 2 votes to block 5:
// 0
// / \
// 2 1
// |
// 3
// |
// 4
// / \
// 2 votes-> 5 6
f.ProcessAttestation(context.Background(), []uint64{0, 1}, indexToHash(5), 4)
// Insert blocks 7, 8 and 9:
// 6 should still be the head, even though 5 has all the votes.
// 0
// / \
// 2 1
// |
// 3
// |
// 4
// / \
// 5 6 <- head
// |
// 7
// |
// 8
// |
// 9
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(7), indexToHash(5), 2, 2, true))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(8), indexToHash(7), 2, 2, true))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(9), indexToHash(8), 2, 2, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(6), r, "Incorrect head for with justified epoch at 1")
// Update fork choice justified epoch to 1 and start block to 5.
// Verify 9 is the head:
// 0
// / \
// 2 1
// |
// 3
// |
// 4
// / \
// 5 6
// |
// 7
// |
// 8
// |
// 9 <- head
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head for with justified epoch at 2")
// Insert block 10 and 2 validators updated their vote to 9.
// Verify 9 is the head:
// 0
// / \
// 2 1
// |
// 3
// |
// 4
// / \
// 5 6
// |
// 7
// |
// 8
// / \
// 2 votes->9 10
f.ProcessAttestation(context.Background(), []uint64{0, 1}, indexToHash(9), 5)
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(10), indexToHash(8), 2, 2, true))
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head for with justified epoch at 2")
// Add 3 more validators to the system.
balances = []uint64{1, 1, 1, 1, 1}
// The new validators voted for 10.
f.ProcessAttestation(context.Background(), []uint64{2, 3, 4}, indexToHash(10), 5)
// The new head should be 10.
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head for with justified epoch at 2")
// Set the balances of the last 2 validators to 0.
balances = []uint64{1, 1, 1, 0, 0}
// The head should be back to 9.
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head for with justified epoch at 1")
// Set the balances back to normal.
balances = []uint64{1, 1, 1, 1, 1}
// The head should be back to 10.
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head for with justified epoch at 2")
// Remove the last 2 validators.
balances = []uint64{1, 1, 1}
// The head should be back to 9.
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head for with justified epoch at 1")
// Verify pruning below the prune threshold does not affect head.
f.store.pruneThreshold = 1000
require.NoError(t, f.store.prune(context.Background(), indexToHash(5)))
assert.Equal(t, 11, len(f.store.nodeByRoot), "Incorrect nodes length after prune")
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head for with justified epoch at 2")
// Verify pruning above the prune threshold does prune:
// 0
// / \
// 2 1
// |
// 3
// |
// 4
// -------pruned here ------
// 5 6
// |
// 7
// |
// 8
// / \
// 9 10
f.store.pruneThreshold = 1
require.NoError(t, f.store.prune(context.Background(), indexToHash(5)))
assert.Equal(t, 5, len(f.store.nodeByRoot), "Incorrect nodes length after prune")
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head for with justified epoch at 2")
// Insert new block 11 and verify head is at 11.
// 5 6
// |
// 7
// |
// 8
// / \
// 9 10
// |
// head-> 11
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(11), indexToHash(9), 2, 2, true))
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
assert.Equal(t, indexToHash(11), r, "Incorrect head for with justified epoch at 2")
}


@@ -5,7 +5,8 @@ import (
"time"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
pbrpc "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// ForkChoicer represents the full fork choice interface composed of all the sub-interfaces.
@@ -15,19 +16,26 @@ type ForkChoicer interface {
AttestationProcessor // to track new attestation for fork choice.
Pruner // to clean old data for fork choice.
Getter // to retrieve fork choice information.
Setter // to set fork choice information.
ProposerBooster // ability to boost timely-proposed block roots.
SyncTipper // to update and retrieve validated sync tips.
}
// HeadRetriever retrieves head root and optimistic info of the current chain.
type HeadRetriever interface {
Head(context.Context, types.Epoch, [32]byte, []uint64, types.Epoch) ([32]byte, error)
Optimistic(ctx context.Context, root [32]byte, slot types.Slot) (bool, error)
Tips() ([][32]byte, []types.Slot)
IsOptimistic(ctx context.Context, root [32]byte) (bool, error)
}
// BlockProcessor processes the block that's used for accounting fork choice.
type BlockProcessor interface {
ProcessBlock(context.Context, types.Slot, [32]byte, [32]byte, [32]byte, types.Epoch, types.Epoch) error
ProcessBlock(ctx context.Context,
slot types.Slot,
root [32]byte,
parentRoot [32]byte,
justifiedEpoch types.Epoch,
finalizedEpoch types.Epoch,
optimisticStatus bool) error
}
// AttestationProcessor processes the attestation that's used for accounting fork choice.
@@ -48,19 +56,19 @@ type ProposerBooster interface {
// Getter returns fork choice related information.
type Getter interface {
Nodes() []*protoarray.Node
Node([32]byte) *protoarray.Node
HasNode([32]byte) bool
Store() *protoarray.Store
ProposerBoost() [fieldparams.RootLength]byte
HasParent(root [32]byte) bool
AncestorRoot(ctx context.Context, root [32]byte, slot types.Slot) ([]byte, error)
IsCanonical(root [32]byte) bool
FinalizedEpoch() types.Epoch
JustifiedEpoch() types.Epoch
ForkChoiceNodes() []*pbrpc.ForkChoiceNode
NodeCount() int
}
// SyncTipper returns sync tips related information.
type SyncTipper interface {
SyncedTips() map[[32]byte]types.Slot
SetSyncedTips(tips map[[32]byte]types.Slot) error
UpdateSyncedTipsWithValidRoot(ctx context.Context, root [32]byte) error
UpdateSyncedTipsWithInvalidRoot(ctx context.Context, root [32]byte) error
// Setter allows setting fork choice information.
type Setter interface {
SetOptimisticToValid(context.Context, [fieldparams.RootLength]byte) error
SetOptimisticToInvalid(context.Context, [fieldparams.RootLength]byte) error
}


@@ -21,7 +21,7 @@ go_library(
deps = [
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",


@@ -5,6 +5,7 @@ import "errors"
var errUnknownFinalizedRoot = errors.New("unknown finalized root")
var errUnknownJustifiedRoot = errors.New("unknown justified root")
var errInvalidNodeIndex = errors.New("node index is invalid")
var errUnknownNodeRoot = errors.New("unknown block root")
var errInvalidJustifiedIndex = errors.New("justified index is invalid")
var errInvalidBestChildIndex = errors.New("best child index is invalid")
var errInvalidBestDescendantIndex = errors.New("best descendant index is invalid")


@@ -27,9 +27,9 @@ func TestFFGUpdates_OneBranch(t *testing.T) {
// 2 <- justified: 1, finalized: 0
// |
// 3 <- justified: 2, finalized: 1
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, [32]byte{}, 0, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(2), indexToHash(1), [32]byte{}, 1, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(3), indexToHash(2), [32]byte{}, 2, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(2), indexToHash(1), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(3), indexToHash(2), 2, 1, false))
// With starting justified epoch at 0, the head should be 3:
// 0 <- start
@@ -89,17 +89,17 @@ func TestFFGUpdates_TwoBranches(t *testing.T) {
// | |
// justified: 2, finalized: 0 -> 9 10 <- justified: 2, finalized: 0
// Left branch.
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, [32]byte{}, 0, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(3), indexToHash(1), [32]byte{}, 1, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(5), indexToHash(3), [32]byte{}, 1, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(7), indexToHash(5), [32]byte{}, 1, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(9), indexToHash(7), [32]byte{}, 2, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(3), indexToHash(1), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(5), indexToHash(3), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(7), indexToHash(5), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(9), indexToHash(7), 2, 0, false))
// Right branch.
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(2), params.BeaconConfig().ZeroHash, [32]byte{}, 0, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(4), indexToHash(2), [32]byte{}, 0, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(6), indexToHash(4), [32]byte{}, 0, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(8), indexToHash(6), [32]byte{}, 1, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(10), indexToHash(8), [32]byte{}, 2, 0))
require.NoError(t, f.ProcessBlock(context.Background(), 1, indexToHash(2), params.BeaconConfig().ZeroHash, 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 2, indexToHash(4), indexToHash(2), 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 3, indexToHash(6), indexToHash(4), 0, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(8), indexToHash(6), 1, 0, false))
require.NoError(t, f.ProcessBlock(context.Background(), 4, indexToHash(10), indexToHash(8), 2, 0, false))
// With start at 0, the head should be 10:
// 0 <-- start
@@ -183,7 +183,7 @@ func TestFFGUpdates_TwoBranches(t *testing.T) {
}
func setup(justifiedEpoch, finalizedEpoch types.Epoch) *ForkChoice {
f := New(0, 0, params.BeaconConfig().ZeroHash)
f := New(justifiedEpoch, finalizedEpoch, params.BeaconConfig().ZeroHash)
f.store.nodesIndices[params.BeaconConfig().ZeroHash] = 0
f.store.nodes = append(f.store.nodes, &Node{
slot: 0,


@@ -15,7 +15,7 @@ func computeDeltas(
votes []Vote,
oldBalances, newBalances []uint64,
) ([]int, []Vote, error) {
_, span := trace.StartSpan(ctx, "protoArrayForkChoice.computeDeltas")
_, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.computeDeltas")
defer span.End()
deltas := make([]int, len(blockIndices))


@@ -24,7 +24,7 @@ func TestNoVote_CanFindHead(t *testing.T) {
// 0
// /
// 2 <- head
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(2), params.BeaconConfig().ZeroHash, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
@@ -33,7 +33,7 @@ func TestNoVote_CanFindHead(t *testing.T) {
// 0
// / \
// head -> 2 1
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(1), params.BeaconConfig().ZeroHash, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
@@ -44,7 +44,7 @@ func TestNoVote_CanFindHead(t *testing.T) {
// head -> 2 1
// |
// 3
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(3), indexToHash(1), [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(3), indexToHash(1), 1, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
@@ -55,7 +55,7 @@ func TestNoVote_CanFindHead(t *testing.T) {
// 2 1
// | |
// head -> 4 3
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(4), indexToHash(2), [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(4), indexToHash(2), 1, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(4), r, "Incorrect head for with justified epoch at 1")
@@ -68,7 +68,7 @@ func TestNoVote_CanFindHead(t *testing.T) {
// head -> 4 3
// |
// 5 <- justified epoch = 2
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(5), indexToHash(4), [32]byte{}, 2, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(5), indexToHash(4), 2, 1, false))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(4), r, "Incorrect head for with justified epoch at 1")
@@ -107,7 +107,7 @@ func TestNoVote_CanFindHead(t *testing.T) {
// 5
// |
// 6 <- head
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(6), indexToHash(5), [32]byte{}, 2, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(6), indexToHash(5), 2, 1, false))
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 1)
require.NoError(t, err)
assert.Equal(t, indexToHash(6), r, "Incorrect head for with justified epoch at 2")

View File

@@ -2,11 +2,9 @@ package protoarray
import (
"context"
"fmt"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
)
// This returns the minimum and maximum slot of the synced_tips tree
@@ -27,26 +25,32 @@ func (f *ForkChoice) boundarySyncedTips() (types.Slot, types.Slot) {
return min, max
}
// Optimistic returns true if this node is optimistically synced
// IsOptimistic returns true if this node is optimistically synced
// An optimistically synced block is synced as usual, but its
// execution payload is not validated, while the EL is still syncing.
// WARNING: this function does not check if slot corresponds to the
// block with the given root. An incorrect response may be
// returned when requesting earlier than finalized epoch due
// to pruning of non-canonical branches. A requests for a
// combination root/slot of an available block is guaranteed
// to yield the correct result. The caller is responsible for
// checking the block's availability. A consensus bug could be
// a cause of getting this wrong, so think twice before passing
// a wrong pair.
func (f *ForkChoice) Optimistic(ctx context.Context, root [32]byte, slot types.Slot) (bool, error) {
// This function returns an error if the block is not found in the fork choice
// store
func (f *ForkChoice) IsOptimistic(ctx context.Context, root [32]byte) (bool, error) {
if ctx.Err() != nil {
return false, ctx.Err()
}
// If we reached this point then the block has to be in the Fork Choice
// Store!
f.store.nodesLock.RLock()
index, ok := f.store.nodesIndices[root]
if !ok {
f.store.nodesLock.RUnlock()
return false, errUnknownNodeRoot
}
node := f.store.nodes[index]
slot := node.slot
// If the node is a synced tip, then it's fully validated
f.syncedTips.RLock()
_, ok := f.syncedTips.validatedTips[root]
_, ok = f.syncedTips.validatedTips[root]
if ok {
f.syncedTips.RUnlock()
f.store.nodesLock.RUnlock()
return false, nil
}
f.syncedTips.RUnlock()
@@ -54,39 +58,29 @@ func (f *ForkChoice) Optimistic(ctx context.Context, root [32]byte, slot types.S
// If the slot is higher than the max synced tip, it's optimistic
min, max := f.boundarySyncedTips()
if slot > max {
f.store.nodesLock.RUnlock()
return true, nil
}
// If the slot is lower than the min synced tip, it's fully validated
if slot <= min {
f.store.nodesLock.RUnlock()
return false, nil
}
// If we reached this point then the block has to be in the Fork Choice
// Store!
f.store.nodesLock.RLock()
index, ok := f.store.nodesIndices[root]
if !ok {
// This should not happen
f.store.nodesLock.RUnlock()
return false, fmt.Errorf("invalid root, slot combination, got %#x, %d",
bytesutil.Trunc(root[:]), slot)
}
node := f.store.nodes[index]
// if the node is a leaf of the Fork Choice tree, then it's
// optimistic
childIndex := node.BestChild()
if childIndex == NonExistentNode {
f.store.nodesLock.RUnlock()
return true, nil
}
// recurse to the child
child := f.store.nodes[childIndex]
root = child.root
slot = child.slot
f.store.nodesLock.RUnlock()
return f.Optimistic(ctx, root, slot)
return f.IsOptimistic(ctx, root)
}
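
For orientation (this snippet is not part of the diff): a minimal usage sketch of the renamed method, assuming it lives alongside the protoarray tests so the setup helper shown elsewhere in this changeset is in scope.

package protoarray

import (
	"context"
	"fmt"

	"github.com/prysmaticlabs/prysm/config/params"
)

// ExampleIsOptimistic is a sketch, not repository code. IsOptimistic now takes
// only the block root; the slot is looked up from the store, and an unknown
// root is reported as errUnknownNodeRoot instead of a silent false.
func ExampleIsOptimistic() {
	ctx := context.Background()
	f := setup(1, 1)
	// Insert a block whose payload has not yet been validated (optimistic flag = true).
	if err := f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 1, 1, true); err != nil {
		panic(err)
	}
	optimistic, err := f.IsOptimistic(ctx, [32]byte{'a'})
	if err != nil {
		panic(err) // errUnknownNodeRoot when the root was never inserted.
	}
	fmt.Println(optimistic)
}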
// This function returns the index of sync tip node that's ancestor to the input node.
@@ -107,10 +101,10 @@ func (s *Store) findSyncedTip(ctx context.Context, node *Node, syncedTips *optim
}
}
// UpdateSyncedTipsWithValidRoot is called with the root of a block that was returned as
// SetOptimisticToValid is called with the root of a block that was returned as
// VALID by the EL. This routine recomputes and updates the synced_tips map to
// account for this new tip.
func (f *ForkChoice) UpdateSyncedTipsWithValidRoot(ctx context.Context, root [32]byte) error {
func (f *ForkChoice) SetOptimisticToValid(ctx context.Context, root [32]byte) error {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
// We can only update if given root is in Fork Choice
@@ -211,8 +205,8 @@ func (f *ForkChoice) UpdateSyncedTipsWithValidRoot(ctx context.Context, root [32
return nil
}
// UpdateSyncedTipsWithInvalidRoot updates the synced_tips map when the block with the given root becomes INVALID.
func (f *ForkChoice) UpdateSyncedTipsWithInvalidRoot(ctx context.Context, root [32]byte) error {
// SetOptimisticToInvalid updates the synced_tips map when the block with the given root becomes INVALID.
func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root [32]byte) error {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
idx, ok := f.store.nodesIndices[root]

View File

@@ -28,9 +28,7 @@ import (
func TestOptimistic(t *testing.T) {
root0 := bytesutil.ToBytes32([]byte("hello0"))
slot0 := types.Slot(98)
root1 := bytesutil.ToBytes32([]byte("hello1"))
slot1 := types.Slot(99)
nodeA := &Node{
slot: types.Slot(100),
@@ -148,58 +146,60 @@ func TestOptimistic(t *testing.T) {
require.Equal(t, max, types.Slot(103), "maximum tip slot is different")
// We first test nodes outside the Fork Choice store
op, err := f.Optimistic(ctx, root0, slot0)
require.NoError(t, err)
require.Equal(t, op, false)
_, err := f.IsOptimistic(ctx, root0)
require.ErrorIs(t, errUnknownNodeRoot, err)
op, err = f.Optimistic(ctx, root1, slot1)
require.NoError(t, err)
require.Equal(t, op, false)
_, err = f.IsOptimistic(ctx, root1)
require.ErrorIs(t, errUnknownNodeRoot, err)
// We check all nodes in the Fork Choice store.
op, err = f.Optimistic(ctx, nodeA.root, nodeA.slot)
op, err := f.IsOptimistic(ctx, nodeA.root)
require.NoError(t, err)
require.Equal(t, op, false)
op, err = f.Optimistic(ctx, nodeB.root, nodeB.slot)
op, err = f.IsOptimistic(ctx, nodeB.root)
require.NoError(t, err)
require.Equal(t, op, false)
op, err = f.Optimistic(ctx, nodeC.root, nodeC.slot)
op, err = f.IsOptimistic(ctx, nodeC.root)
require.NoError(t, err)
require.Equal(t, op, false)
op, err = f.Optimistic(ctx, nodeD.root, nodeD.slot)
op, err = f.IsOptimistic(ctx, nodeD.root)
require.NoError(t, err)
require.Equal(t, op, false)
op, err = f.Optimistic(ctx, nodeE.root, nodeE.slot)
op, err = f.IsOptimistic(ctx, nodeE.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.Optimistic(ctx, nodeF.root, nodeF.slot)
op, err = f.IsOptimistic(ctx, nodeF.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.Optimistic(ctx, nodeG.root, nodeG.slot)
op, err = f.IsOptimistic(ctx, nodeG.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.Optimistic(ctx, nodeH.root, nodeH.slot)
op, err = f.IsOptimistic(ctx, nodeH.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.Optimistic(ctx, nodeI.root, nodeI.slot)
op, err = f.IsOptimistic(ctx, nodeI.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.Optimistic(ctx, nodeJ.root, nodeJ.slot)
op, err = f.IsOptimistic(ctx, nodeJ.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.Optimistic(ctx, nodeK.root, nodeK.slot)
op, err = f.IsOptimistic(ctx, nodeK.root)
require.NoError(t, err)
require.Equal(t, op, true)
// Request a write lock on synced tips, regression test for #10289
f.syncedTips.Lock()
defer f.syncedTips.Unlock()
}
// This tests the algorithm to update syncedTips
@@ -216,22 +216,22 @@ func TestOptimistic(t *testing.T) {
// And every block in the Fork choice is optimistic. Synced_Tips contains a
// single block that is outside of Fork choice
//
func TestUpdateSyncTipsWithValidRoots(t *testing.T) {
func TestSetOptimisticToValid(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, 1, 1, true))
tests := []struct {
root [32]byte // the root of the new VALID block
tips map[[32]byte]types.Slot // the old synced tips
@@ -321,7 +321,7 @@ func TestUpdateSyncTipsWithValidRoots(t *testing.T) {
f.syncedTips.Lock()
f.syncedTips.validatedTips = tc.tips
f.syncedTips.Unlock()
err := f.UpdateSyncedTipsWithValidRoot(context.Background(), tc.root)
err := f.SetOptimisticToValid(context.Background(), tc.root)
if tc.wantedErr != nil {
require.ErrorIs(t, err, tc.wantedErr)
} else {
@@ -348,7 +348,7 @@ func TestUpdateSyncTipsWithValidRoots(t *testing.T) {
// single block that is outside of Fork choice. The numbers in parentheses are
// the weights of the nodes before removal
//
func TestUpdateSyncTipsWithInvalidRoot(t *testing.T) {
func TestSetOptimisticToInvalid(t *testing.T) {
tests := []struct {
root [32]byte // the root of the new INVALID block
tips map[[32]byte]types.Slot // the old synced tips
@@ -409,18 +409,18 @@ func TestUpdateSyncTipsWithInvalidRoot(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, 1, 1, true))
weights := []uint64{10, 10, 9, 7, 1, 6, 2, 3, 1, 1, 1, 0, 0}
f.syncedTips.Lock()
f.syncedTips.validatedTips = tc.tips
@@ -439,7 +439,7 @@ func TestUpdateSyncTipsWithInvalidRoot(t *testing.T) {
require.NotEqual(t, NonExistentNode, parentIndex)
parent := f.store.nodes[parentIndex]
f.store.nodesLock.Unlock()
err := f.UpdateSyncedTipsWithInvalidRoot(context.Background(), tc.root)
err := f.SetOptimisticToInvalid(context.Background(), tc.root)
require.NoError(t, err)
f.syncedTips.RLock()
_, parentSyncedTip := f.syncedTips.validatedTips[parent.root]
@@ -467,18 +467,18 @@ func TestFindSyncedTip(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, 1, 1, true))
tests := []struct {
root [32]byte // the root of the block
tips map[[32]byte]types.Slot // the synced tips

View File

@@ -21,7 +21,6 @@ import (
func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
ctx := context.Background()
zeroHash := params.BeaconConfig().ZeroHash
graffiti := [32]byte{}
balances := make([]uint64, 64) // 64 active validators.
for i := 0; i < len(balances); i++ {
balances[i] = 10
@@ -47,9 +46,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
slot,
newRoot,
headRoot,
graffiti,
jEpoch,
fEpoch,
false,
),
)
f.ProcessAttestation(ctx, []uint64{0}, newRoot, fEpoch)
@@ -71,9 +70,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
slot,
newRoot,
headRoot,
graffiti,
jEpoch,
fEpoch,
false,
),
)
f.ProcessAttestation(ctx, []uint64{1}, newRoot, fEpoch)
@@ -97,9 +96,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
slot,
newRoot,
headRoot,
graffiti,
jEpoch,
fEpoch,
false,
),
)
f.ProcessAttestation(ctx, []uint64{2}, newRoot, fEpoch)
@@ -123,9 +122,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
slot,
newRoot,
headRoot,
graffiti,
jEpoch,
fEpoch,
false,
),
)
f.ProcessAttestation(ctx, []uint64{3}, newRoot, fEpoch)
@@ -190,9 +189,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
honestBlockSlot,
honestBlock,
zeroHash,
graffiti,
jEpoch,
fEpoch,
false,
),
)
r, err = f.Head(ctx, jEpoch, zeroHash, balances, fEpoch)
@@ -207,9 +206,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
maliciouslyWithheldBlockSlot,
maliciouslyWithheldBlock,
zeroHash,
graffiti,
jEpoch,
fEpoch,
false,
),
)
@@ -256,9 +255,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
honestBlockSlot,
honestBlock,
zeroHash,
graffiti,
jEpoch,
fEpoch,
false,
),
)
@@ -275,9 +274,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
maliciouslyWithheldBlockSlot,
maliciouslyWithheldBlock,
zeroHash,
graffiti,
jEpoch,
fEpoch,
false,
),
)
@@ -331,9 +330,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
cSlot,
c,
a, // parent
graffiti,
jEpoch,
fEpoch,
false,
),
)
@@ -355,9 +354,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
bSlot,
b,
a, // parent
graffiti,
jEpoch,
fEpoch,
false,
),
)
@@ -379,9 +378,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
dSlot,
d,
b, // parent
graffiti,
jEpoch,
fEpoch,
false,
),
)

View File

@@ -9,6 +9,7 @@ import (
types "github.com/prysmaticlabs/eth2-types"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
pbrpc "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"go.opencensus.io/trace"
)
@@ -130,17 +131,35 @@ func (f *ForkChoice) ProcessAttestation(ctx context.Context, validatorIndices []
processedAttestationCount.Inc()
}
// NodeCount returns the current number of nodes in the Store
func (f *ForkChoice) NodeCount() int {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
return len(f.store.nodes)
}
// ProposerBoost returns the proposerBoost of the store
func (f *ForkChoice) ProposerBoost() [fieldparams.RootLength]byte {
return f.store.proposerBoost()
}
// ProcessBlock processes a new block by inserting it into the fork choice store.
func (f *ForkChoice) ProcessBlock(
ctx context.Context,
slot types.Slot,
blockRoot, parentRoot, graffiti [32]byte,
justifiedEpoch, finalizedEpoch types.Epoch,
) error {
blockRoot, parentRoot [32]byte,
justifiedEpoch, finalizedEpoch types.Epoch, optimistic bool) error {
ctx, span := trace.StartSpan(ctx, "protoArrayForkChoice.ProcessBlock")
defer span.End()
return f.store.insert(ctx, slot, blockRoot, parentRoot, graffiti, justifiedEpoch, finalizedEpoch)
if err := f.store.insert(ctx, slot, blockRoot, parentRoot, justifiedEpoch, finalizedEpoch); err != nil {
return err
}
if !optimistic {
return f.SetOptimisticToValid(ctx, blockRoot)
}
return nil
}
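
A hedged sketch (not repository code) of how the new optimistic flag is meant to be used from a caller's perspective: a block is inserted optimistically, and once the execution layer reports its payload as VALID the root is promoted via SetOptimisticToValid.

package protoarray

import (
	"context"

	"github.com/prysmaticlabs/prysm/config/params"
)

// exampleOptimisticFlow is illustrative only; the root and epochs are made up.
func exampleOptimisticFlow(ctx context.Context, f *ForkChoice) error {
	root := [32]byte{'a'}
	// Insert the block before its execution payload has been validated.
	if err := f.ProcessBlock(ctx, 100, root, params.BeaconConfig().ZeroHash, 1, 1, true); err != nil {
		return err
	}
	// ...later, the engine API reports VALID for this payload...
	// Promote the block out of the optimistic set.
	return f.SetOptimisticToValid(ctx, root)
}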
// Prune prunes the fork choice store with the new finalized root. The store is only pruned if the input
@@ -149,36 +168,6 @@ func (f *ForkChoice) Prune(ctx context.Context, finalizedRoot [32]byte) error {
return f.store.prune(ctx, finalizedRoot, f.syncedTips)
}
// Nodes returns the copied list of block nodes in the fork choice store.
func (f *ForkChoice) Nodes() []*Node {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
cpy := make([]*Node, len(f.store.nodes))
copy(cpy, f.store.nodes)
return cpy
}
// Store returns the fork choice store object which contains all the information regarding proto array fork choice.
func (f *ForkChoice) Store() *Store {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
return f.store
}
// Node returns the copied node in the fork choice store.
func (f *ForkChoice) Node(root [32]byte) *Node {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
index, ok := f.store.nodesIndices[root]
if !ok {
return nil
}
return copyNode(f.store.nodes[index])
}
// HasNode returns true if the node exists in fork choice store,
// false otherwise.
func (f *ForkChoice) HasNode(root [32]byte) bool {
@@ -248,36 +237,22 @@ func (s *Store) PruneThreshold() uint64 {
}
// JustifiedEpoch of fork choice store.
func (s *Store) JustifiedEpoch() types.Epoch {
return s.justifiedEpoch
func (f *ForkChoice) JustifiedEpoch() types.Epoch {
return f.store.justifiedEpoch
}
// FinalizedEpoch of fork choice store.
func (s *Store) FinalizedEpoch() types.Epoch {
return s.finalizedEpoch
func (f *ForkChoice) FinalizedEpoch() types.Epoch {
return f.store.finalizedEpoch
}
// ProposerBoost of fork choice store.
func (s *Store) ProposerBoost() [fieldparams.RootLength]byte {
// proposerBoost of fork choice store.
func (s *Store) proposerBoost() [fieldparams.RootLength]byte {
s.proposerBoostLock.RLock()
defer s.proposerBoostLock.RUnlock()
return s.proposerBoostRoot
}
// Nodes of fork choice store.
func (s *Store) Nodes() []*Node {
s.nodesLock.RLock()
defer s.nodesLock.RUnlock()
return s.nodes
}
// NodesIndices of fork choice store.
func (s *Store) NodesIndices() map[[32]byte]uint64 {
s.nodesLock.RLock()
defer s.nodesLock.RUnlock()
return s.nodesIndices
}
// head starts from justified root and then follows the best descendant links
// to find the best block for head.
func (s *Store) head(ctx context.Context, justifiedRoot [32]byte) ([32]byte, error) {
@@ -373,7 +348,7 @@ func (s *Store) updateCanonicalNodes(ctx context.Context, root [32]byte) error {
// It then updates the new node's parent with best child and descendant node.
func (s *Store) insert(ctx context.Context,
slot types.Slot,
root, parent, graffiti [32]byte,
root, parent [32]byte,
justifiedEpoch, finalizedEpoch types.Epoch) error {
_, span := trace.StartSpan(ctx, "protoArrayForkChoice.insert")
defer span.End()
@@ -396,7 +371,6 @@ func (s *Store) insert(ctx context.Context,
n := &Node{
slot: slot,
root: root,
graffiti: graffiti,
parent: parentIndex,
justifiedEpoch: justifiedEpoch,
finalizedEpoch: finalizedEpoch,
@@ -744,3 +718,60 @@ func (s *Store) leaves() ([]uint64, error) {
}
return leaves, nil
}
// Tips returns all possible chain heads (leaves of fork choice tree).
// Heads roots and heads slots are returned.
func (f *ForkChoice) Tips() ([][32]byte, []types.Slot) {
// Deliberate choice not to preallocate space below.
// Heads can't be more than 2-3 in the worst case, while pre-allocation would be 64 to begin with.
headsRoots := make([][32]byte, 0)
headsSlots := make([]types.Slot, 0)
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
for _, node := range f.store.nodes {
// Possible heads have no children.
if node.BestDescendant() == NonExistentNode && node.BestChild() == NonExistentNode {
headsRoots = append(headsRoots, node.Root())
headsSlots = append(headsSlots, node.Slot())
}
}
return headsRoots, headsSlots
}
func (f *ForkChoice) ForkChoiceNodes() []*pbrpc.ForkChoiceNode {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
ret := make([]*pbrpc.ForkChoiceNode, len(f.store.nodes))
var parentRoot [32]byte
for i, node := range f.store.nodes {
root := node.Root()
parentIdx := node.parent
if parentIdx == NonExistentNode {
parentRoot = params.BeaconConfig().ZeroHash
} else {
parent := f.store.nodes[parentIdx]
parentRoot = parent.Root()
}
bestDescendantIdx := node.BestDescendant()
var bestDescendantRoot [32]byte
if bestDescendantIdx == NonExistentNode {
bestDescendantRoot = params.BeaconConfig().ZeroHash
} else {
bestDescendantNode := f.store.nodes[bestDescendantIdx]
bestDescendantRoot = bestDescendantNode.Root()
}
ret[i] = &pbrpc.ForkChoiceNode{
Slot: node.Slot(),
Root: root[:],
Parent: parentRoot[:],
JustifiedEpoch: node.JustifiedEpoch(),
FinalizedEpoch: node.FinalizedEpoch(),
Weight: node.Weight(),
BestDescendant: bestDescendantRoot[:],
}
}
return ret
}

View File

@@ -22,44 +22,14 @@ func TestStore_PruneThreshold(t *testing.T) {
func TestStore_JustifiedEpoch(t *testing.T) {
j := types.Epoch(100)
s := &Store{
justifiedEpoch: j,
}
if got := s.JustifiedEpoch(); got != j {
t.Errorf("JustifiedEpoch() = %v, want %v", got, j)
}
f := setup(j, j)
require.Equal(t, j, f.JustifiedEpoch())
}
func TestStore_FinalizedEpoch(t *testing.T) {
f := types.Epoch(50)
s := &Store{
finalizedEpoch: f,
}
if got := s.FinalizedEpoch(); got != f {
t.Errorf("FinalizedEpoch() = %v, want %v", got, f)
}
}
func TestStore_Nodes(t *testing.T) {
nodes := []*Node{
{slot: 100},
{slot: 101},
}
s := &Store{
nodes: nodes,
}
require.DeepEqual(t, nodes, s.Nodes())
}
func TestStore_NodesIndices(t *testing.T) {
nodeIndices := map[[32]byte]uint64{
{'a'}: 1,
{'b'}: 2,
}
s := &Store{
nodesIndices: nodeIndices,
}
require.DeepEqual(t, nodeIndices, s.NodesIndices())
j := types.Epoch(50)
f := setup(j, j)
require.Equal(t, j, f.FinalizedEpoch())
}
func TestForkChoice_HasNode(t *testing.T) {
@@ -74,30 +44,6 @@ func TestForkChoice_HasNode(t *testing.T) {
require.Equal(t, true, f.HasNode([32]byte{'a'}))
}
func TestForkChoice_Store(t *testing.T) {
nodeIndices := map[[32]byte]uint64{
{'a'}: 1,
{'b'}: 2,
}
s := &Store{
nodesIndices: nodeIndices,
}
f := &ForkChoice{store: s}
require.DeepEqual(t, s, f.Store())
}
func TestForkChoice_Nodes(t *testing.T) {
nodes := []*Node{
{slot: 100},
{slot: 101},
}
s := &Store{
nodes: nodes,
}
f := &ForkChoice{store: s}
require.DeepEqual(t, s.nodes, f.Nodes())
}
func TestStore_Head_UnknownJustifiedRoot(t *testing.T) {
s := &Store{nodesIndices: make(map[[32]byte]uint64)}
@@ -155,7 +101,7 @@ func TestStore_Head_ContextCancelled(t *testing.T) {
func TestStore_Insert_UnknownParent(t *testing.T) {
// The new node does not have a parent.
s := &Store{nodesIndices: make(map[[32]byte]uint64)}
require.NoError(t, s.insert(context.Background(), 100, [32]byte{'A'}, [32]byte{'B'}, [32]byte{}, 1, 1))
require.NoError(t, s.insert(context.Background(), 100, [32]byte{'A'}, [32]byte{'B'}, 1, 1))
assert.Equal(t, 1, len(s.nodes), "Did not insert block")
assert.Equal(t, 1, len(s.nodesIndices), "Did not insert block")
assert.Equal(t, NonExistentNode, s.nodes[0].parent, "Incorrect parent")
@@ -171,7 +117,7 @@ func TestStore_Insert_KnownParent(t *testing.T) {
s.nodes = []*Node{{}}
p := [32]byte{'B'}
s.nodesIndices[p] = 0
require.NoError(t, s.insert(context.Background(), 100, [32]byte{'A'}, p, [32]byte{}, 1, 1))
require.NoError(t, s.insert(context.Background(), 100, [32]byte{'A'}, p, 1, 1))
assert.Equal(t, 2, len(s.nodes), "Did not insert block")
assert.Equal(t, 2, len(s.nodesIndices), "Did not insert block")
assert.Equal(t, uint64(0), s.nodes[1].parent, "Incorrect parent")
@@ -545,18 +491,18 @@ func TestStore_PruneSyncedTips(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, 1, 1, true))
require.NoError(t, f.ProcessBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, 1, 1, true))
syncedTips := &optimisticStore{
validatedTips: map[[32]byte]types.Slot{
[32]byte{'b'}: 101,
@@ -721,8 +667,10 @@ func TestStore_UpdateCanonicalNodes_WholeList(t *testing.T) {
require.Equal(t, true, f.IsCanonical([32]byte{'a'}))
require.Equal(t, true, f.IsCanonical([32]byte{'b'}))
require.Equal(t, true, f.IsCanonical([32]byte{'c'}))
require.DeepEqual(t, f.Node([32]byte{'c'}), f.store.nodes[2])
require.Equal(t, f.Node([32]byte{'d'}), (*Node)(nil))
idxc := f.store.nodesIndices[[32]byte{'c'}]
_, ok := f.store.nodesIndices[[32]byte{'d'}]
require.Equal(t, idxc, uint64(2))
require.Equal(t, false, ok)
}
func TestStore_UpdateCanonicalNodes_ParentAlreadyIn(t *testing.T) {

View File

@@ -23,7 +23,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 0
// /
// 2 <- head
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(2), params.BeaconConfig().ZeroHash, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
@@ -33,7 +33,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 0
// / \
// head -> 2 1
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(1), params.BeaconConfig().ZeroHash, [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
@@ -63,7 +63,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// head -> 2 1
// |
// 3
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(3), indexToHash(1), [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(3), indexToHash(1), 1, 1, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
@@ -99,7 +99,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 3
// |
// 4 <- head
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(4), indexToHash(3), [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(4), indexToHash(3), 1, 1, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
@@ -115,7 +115,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 4 <- head
// /
// 5 <- justified epoch = 2
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(5), indexToHash(4), [32]byte{}, 2, 2))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(5), indexToHash(4), 2, 2, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
@@ -131,7 +131,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 4 <- head
// / \
// 5 6 <- justified epoch = 0
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(6), indexToHash(4), [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(6), indexToHash(4), 1, 1, true))
// Moved 2 votes to block 5:
// 0
@@ -143,7 +143,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 4
// / \
// 2 votes-> 5 6
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(6), indexToHash(4), [32]byte{}, 1, 1))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(6), indexToHash(4), 1, 1, true))
f.ProcessAttestation(context.Background(), []uint64{0, 1}, indexToHash(5), 4)
@@ -164,9 +164,9 @@ func TestVotes_CanFindHead(t *testing.T) {
// 8
// |
// 9
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(7), indexToHash(5), [32]byte{}, 2, 2))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(8), indexToHash(7), [32]byte{}, 2, 2))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(9), indexToHash(8), [32]byte{}, 2, 2))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(7), indexToHash(5), 2, 2, true))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(8), indexToHash(7), 2, 2, true))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(9), indexToHash(8), 2, 2, true))
r, err = f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
require.NoError(t, err)
@@ -211,7 +211,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// / \
// 2 votes->9 10
f.ProcessAttestation(context.Background(), []uint64{0, 1}, indexToHash(9), 5)
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(10), indexToHash(8), [32]byte{}, 2, 2))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(10), indexToHash(8), 2, 2, true))
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)
@@ -290,7 +290,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 9 10
// |
// head-> 11
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(11), indexToHash(9), [32]byte{}, 2, 2))
require.NoError(t, f.ProcessBlock(context.Background(), 0, indexToHash(11), indexToHash(9), 2, 2, true))
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
require.NoError(t, err)

View File

@@ -24,6 +24,7 @@ go_library(
"//beacon-chain/db/slasherkv:go_default_library",
"//beacon-chain/deterministic-genesis:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/gateway:go_default_library",
"//beacon-chain/monitor:go_default_library",

View File

@@ -27,6 +27,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/db/slasherkv"
interopcoldstart "github.com/prysmaticlabs/prysm/beacon-chain/deterministic-genesis"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/gateway"
"github.com/prysmaticlabs/prysm/beacon-chain/monitor"
@@ -119,6 +120,7 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
configureEth1Config(cliCtx)
configureNetwork(cliCtx)
configureInteropConfig(cliCtx)
configureExecutionSetting(cliCtx)
// Initializes any forks here.
params.BeaconConfig().InitializeForkSchedule()
@@ -310,8 +312,11 @@ func (b *BeaconNode) Close() {
}
func (b *BeaconNode) startForkChoice() {
f := protoarray.New(0, 0, params.BeaconConfig().ZeroHash)
b.forkChoiceStore = f
if features.Get().EnableForkChoiceDoublyLinkedTree {
b.forkChoiceStore = doublylinkedtree.New(0, 0)
} else {
b.forkChoiceStore = protoarray.New(0, 0, params.BeaconConfig().ZeroHash)
}
}
func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
@@ -554,6 +559,7 @@ func (b *BeaconNode) registerBlockchainService() error {
blockchain.WithDatabase(b.db),
blockchain.WithDepositCache(b.depositCache),
blockchain.WithChainStartFetcher(web3Service),
blockchain.WithExecutionEngineCaller(web3Service.EngineAPIClient()),
blockchain.WithAttestationPool(b.attestationPool),
blockchain.WithExitPool(b.exitPool),
blockchain.WithSlashingPool(b.slashingsPool),
@@ -780,6 +786,7 @@ func (b *BeaconNode) registerRPCService() error {
StateGen: b.stateGen,
EnableDebugRPCEndpoints: enableDebugRPCEndpoints,
MaxMsgSize: maxMsgSize,
ExecutionEngineCaller: web3Service.EngineAPIClient(),
})
return b.services.RegisterService(rpcService)
@@ -808,8 +815,6 @@ func (b *BeaconNode) registerPrometheusService(cliCtx *cli.Context) error {
)
}
additionalHandlers = append(additionalHandlers, prometheus.Handler{Path: "/tree", Handler: c.TreeHandler})
service := prometheus.NewService(
fmt.Sprintf("%s:%d", b.cliCtx.String(cmd.MonitoringHostFlag.Name), b.cliCtx.Int(flags.MonitoringPortFlag.Name)),
b.services,
@@ -834,6 +839,7 @@ func (b *BeaconNode) registerGRPCGateway() error {
selfCert := b.cliCtx.String(flags.CertFlag.Name)
maxCallSize := b.cliCtx.Uint64(cmd.GrpcMaxCallRecvMsgSizeFlag.Name)
httpModules := b.cliCtx.String(flags.HTTPModules.Name)
timeout := b.cliCtx.Int(cmd.ApiTimeoutFlag.Name)
if enableDebugRPCEndpoints {
maxCallSize = uint64(math.Max(float64(maxCallSize), debugGrpcMaxMsgSize))
}
@@ -855,6 +861,7 @@ func (b *BeaconNode) registerGRPCGateway() error {
apigateway.WithRemoteCert(selfCert),
apigateway.WithMaxCallRecvMsgSize(maxCallSize),
apigateway.WithAllowedOrigins(allowedOrigins),
apigateway.WithTimeout(uint64(timeout)),
}
if flags.EnableHTTPEthAPI(httpModules) {
opts = append(opts, apigateway.WithApiMiddleware(&apimiddleware.BeaconEndpointFactory{}))

View File

@@ -89,7 +89,7 @@ func (s *Service) PublishToTopic(ctx context.Context, topic string, data []byte,
}
select {
case <-ctx.Done():
return ctx.Err()
return errors.Wrapf(ctx.Err(), "unable to find requisite number of peers for topic %s, 0 peers found to publish to", topic)
default:
time.Sleep(100 * time.Millisecond)
}

View File

@@ -62,7 +62,8 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
wg := new(sync.WaitGroup)
for {
if err := ctx.Err(); err != nil {
return false, err
return false, errors.Errorf("unable to find requisite number of peers for topic %s - "+
"only %d out of %d peers were able to be found", topic, currNum, threshold)
}
if currNum >= threshold {
break

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"block_cache.go",
"block_reader.go",
"check_transition_config.go",
"deposit.go",
"log.go",
"log_processing.go",
@@ -44,6 +45,7 @@ go_library(
"//monitoring/tracing:go_default_library",
"//network:go_default_library",
"//network/authorization:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
@@ -54,6 +56,7 @@ go_library(
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//ethclient:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
@@ -69,6 +72,7 @@ go_test(
srcs = [
"block_cache_test.go",
"block_reader_test.go",
"check_transition_config_test.go",
"deposit_test.go",
"init_test.go",
"log_processing_test.go",
@@ -87,6 +91,8 @@ go_test(
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1/mocks:go_default_library",
"//beacon-chain/powchain/testing:go_default_library",
"//beacon-chain/powchain/types:go_default_library",
"//beacon-chain/state/stategen:go_default_library",

View File

@@ -0,0 +1,82 @@
package powchain
import (
"context"
"errors"
"math/big"
"time"
"github.com/holiman/uint256"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/config/params"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
)
var (
checkTransitionPollingInterval = time.Second * 10
configMismatchLog = "Configuration mismatch between your execution client and Prysm. " +
"Please check your execution client and restart it with the proper configuration. If this is not done, " +
"your node will not be able to complete the proof-of-stake transition"
)
// Checks the transition configuration between Prysm and the connected execution node to ensure
// there are no differences in terminal block difficulty and block hash.
// If there are any discrepancies, we must log errors to ensure users can resolve
// the problem and be ready for the merge transition.
func (s *Service) checkTransitionConfiguration(ctx context.Context) {
if s.engineAPIClient == nil {
return
}
i := new(big.Int)
i.SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
ttd := new(uint256.Int)
ttd.SetFromBig(i)
cfg := &pb.TransitionConfiguration{
TerminalTotalDifficulty: ttd.Hex(),
TerminalBlockHash: params.BeaconConfig().TerminalBlockHash[:],
TerminalBlockNumber: big.NewInt(0).Bytes(), // A value of 0 is recommended in the request.
}
err := s.engineAPIClient.ExchangeTransitionConfiguration(ctx, cfg)
if err != nil {
if errors.Is(err, v1.ErrConfigMismatch) {
log.WithError(err).Fatal(configMismatchLog)
}
log.WithError(err).Error("Could not check configuration values between execution and consensus client")
}
// We poll the execution client to see if the transition configuration has changed.
// This serves as a heartbeat to ensure the execution client and Prysm are ready for the
// Bellatrix hard-fork transition.
ticker := time.NewTicker(checkTransitionPollingInterval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
err = s.engineAPIClient.ExchangeTransitionConfiguration(ctx, cfg)
s.handleExchangeConfigurationError(err)
}
}
}
// We check if there is a configuration mismatch error between the execution client
// and the Prysm beacon node. If so, we need to log errors in the node as it cannot successfully
// complete the merge transition for the Bellatrix hard fork.
func (s *Service) handleExchangeConfigurationError(err error) {
if err == nil {
// If there is no error when checking the exchange configuration, we clear
// the run error of the service if we had previously set it to ErrConfigMismatch.
if errors.Is(s.runError, v1.ErrConfigMismatch) {
s.runError = nil
}
return
}
// If the error is a configuration mismatch, we set a runtime error in the service.
if errors.Is(err, v1.ErrConfigMismatch) {
s.runError = err
log.WithError(err).Error(configMismatchLog)
return
}
log.WithError(err).Error("Could not check configuration values between execution and consensus client")
}
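
For clarity (not part of the diff), the decimal-to-hex round trip that the configuration exchange relies on can be sketched on its own; the TTD value below is illustrative, not a real network parameter.

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/holiman/uint256"
)

func main() {
	// Sending side: the config stores TTD as a decimal string and transmits it hex-encoded.
	ttdDecimal := "1000000000000000" // illustrative value
	i := new(big.Int)
	i.SetString(ttdDecimal, 10)
	ttd := new(uint256.Int)
	ttd.SetFromBig(i)
	sent := ttd.Hex() // "0x38d7ea4c68000"

	// Receiving side: decode the hex string and compare against the decimal config.
	got, err := hexutil.DecodeBig(sent)
	if err != nil {
		panic(err)
	}
	fmt.Println(got.String() == ttdDecimal) // true
}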

View File

@@ -0,0 +1,59 @@
package powchain
import (
"context"
"errors"
"testing"
"time"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks"
"github.com/prysmaticlabs/prysm/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func Test_checkTransitionConfiguration(t *testing.T) {
ctx := context.Background()
hook := logTest.NewGlobal()
m := &mocks.EngineClient{}
m.Err = errors.New("something went wrong")
srv := &Service{}
srv.engineAPIClient = m
checkTransitionPollingInterval = time.Millisecond
ctx, cancel := context.WithCancel(ctx)
go srv.checkTransitionConfiguration(ctx)
<-time.After(100 * time.Millisecond)
cancel()
require.LogsContain(t, hook, "Could not check configuration values")
}
func TestService_handleExchangeConfigurationError(t *testing.T) {
hook := logTest.NewGlobal()
t.Run("clears existing service error", func(t *testing.T) {
srv := &Service{isRunning: true}
srv.runError = v1.ErrConfigMismatch
srv.handleExchangeConfigurationError(nil)
require.Equal(t, true, srv.Status() == nil)
})
t.Run("does not clear existing service error if wrong kind", func(t *testing.T) {
srv := &Service{isRunning: true}
err := errors.New("something else went wrong")
srv.runError = err
srv.handleExchangeConfigurationError(nil)
require.ErrorIs(t, err, srv.Status())
})
t.Run("sets service error on config mismatch", func(t *testing.T) {
srv := &Service{isRunning: true}
srv.handleExchangeConfigurationError(v1.ErrConfigMismatch)
require.Equal(t, v1.ErrConfigMismatch, srv.Status())
require.LogsContain(t, hook, configMismatchLog)
})
t.Run("does not set service error if unrelated problem", func(t *testing.T) {
srv := &Service{isRunning: true}
srv.handleExchangeConfigurationError(errors.New("foo"))
require.Equal(t, true, srv.Status() == nil)
require.LogsContain(t, hook, "Could not check configuration values")
})
}

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"auth.go",
"client.go",
"errors.go",
"options.go",
@@ -13,16 +14,22 @@ go_library(
"//config/params:go_default_library",
"//proto/engine/v1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_golang_jwt_jwt_v4//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["client_test.go"],
srcs = [
"auth_test.go",
"client_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/powchain/engine-api-client/v1/mocks:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
@@ -30,6 +37,8 @@ go_test(
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_golang_jwt_jwt_v4//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],

View File

@@ -0,0 +1,44 @@
package v1
import (
"net/http"
"time"
"github.com/golang-jwt/jwt/v4"
"github.com/pkg/errors"
)
// This creates a custom HTTP transport which we can attach to our HTTP client
// in order to inject JWT auth strings into our HTTP request headers. Authentication
// is required when interacting with an Ethereum engine API server via HTTP, and JWT
// is the chosen scheme.
// For more details on the requirements of authentication when using the engine API, see
// the specification here: https://github.com/ethereum/execution-apis/blob/main/src/engine/authentication.md
//
// To use this transport, initialize a new &http.Client{} from the standard library
// and set the Transport field to &jwtTransport{} with values
// http.DefaultTransport and a JWT secret.
type jwtTransport struct {
underlyingTransport http.RoundTripper
jwtSecret []byte
}
// RoundTrip ensures our transport implements http.RoundTripper interface from the
// standard library. When used as the transport for an HTTP client, the code below
// will run every time our client makes an HTTP request. This is used to inject
// a JWT bearer token in the Authorization request header of every outgoing request
// our HTTP client makes.
func (t *jwtTransport) RoundTrip(req *http.Request) (*http.Response, error) {
token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
// Required claim for engine API auth. "iat" stands for issued at
// and it must be a unix timestamp that is +/- 5 seconds from the current
// timestamp at the moment the server verifies this value.
"iat": time.Now().Unix(),
})
tokenString, err := token.SignedString(t.jwtSecret)
if err != nil {
return nil, errors.Wrap(err, "could not produce signed JWT token")
}
req.Header.Set("Authorization", "Bearer "+tokenString)
return t.underlyingTransport.RoundTrip(req)
}
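
As the comment above spells out, the only wiring a caller needs is to hand this transport to an http.Client; a minimal hedged sketch follows (the helper name is hypothetical, DefaultTimeout is the package's existing constant).

package v1

import "net/http"

// newAuthenticatedHTTPClient is a sketch, not repository code. Every request
// issued through the returned client carries a freshly signed JWT bearer token.
func newAuthenticatedHTTPClient(jwtSecret []byte) *http.Client {
	return &http.Client{
		Timeout: DefaultTimeout,
		Transport: &jwtTransport{
			underlyingTransport: http.DefaultTransport,
			jwtSecret:           jwtSecret,
		},
	}
}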

View File

@@ -0,0 +1,53 @@
package v1
import (
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/golang-jwt/jwt/v4"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestJWTAuthTransport(t *testing.T) {
secret := bytesutil.PadTo([]byte("foo"), 32)
authTransport := &jwtTransport{
underlyingTransport: http.DefaultTransport,
jwtSecret: secret,
}
client := &http.Client{
Timeout: DefaultTimeout,
Transport: authTransport,
}
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
reqToken := r.Header.Get("Authorization")
splitToken := strings.Split(reqToken, "Bearer")
// The format should be `Bearer ${token}`.
require.Equal(t, 2, len(splitToken))
reqToken = strings.TrimSpace(splitToken[1])
token, err := jwt.Parse(reqToken, func(token *jwt.Token) (interface{}, error) {
// We should be doing HMAC signing.
_, ok := token.Method.(*jwt.SigningMethodHMAC)
require.Equal(t, true, ok)
return secret, nil
})
require.NoError(t, err)
require.Equal(t, true, token.Valid)
claims, ok := token.Claims.(jwt.MapClaims)
require.Equal(t, true, ok)
item, ok := claims["iat"]
require.Equal(t, true, ok)
iat, ok := item.(float64)
require.Equal(t, true, ok)
issuedAt := time.Unix(int64(iat), 0)
// The claims should have an "iat" (issued at) field that is at most 5 seconds old.
since := time.Since(issuedAt)
require.Equal(t, true, since <= time.Second*5)
}))
defer srv.Close()
_, err := client.Get(srv.URL)
require.NoError(t, err)
}

View File

@@ -6,11 +6,13 @@ package v1
import (
"bytes"
"context"
"fmt"
"math/big"
"net/url"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/rpc"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/config/params"
@@ -41,17 +43,17 @@ type ForkchoiceUpdatedResponse struct {
PayloadId *pb.PayloadIDBytes `json:"payloadId"`
}
// EngineCaller defines a client that can interact with an Ethereum
// Caller defines a client that can interact with an Ethereum
// execution node's engine service via JSON-RPC.
type EngineCaller interface {
NewPayload(ctx context.Context, payload *pb.ExecutionPayload) (*pb.PayloadStatus, error)
type Caller interface {
NewPayload(ctx context.Context, payload *pb.ExecutionPayload) ([]byte, error)
ForkchoiceUpdated(
ctx context.Context, state *pb.ForkchoiceState, attrs *pb.PayloadAttributes,
) (*ForkchoiceUpdatedResponse, error)
) (*pb.PayloadIDBytes, []byte, error)
GetPayload(ctx context.Context, payloadId [8]byte) (*pb.ExecutionPayload, error)
ExchangeTransitionConfiguration(
ctx context.Context, cfg *pb.TransitionConfiguration,
) (*pb.TransitionConfiguration, error)
) error
LatestExecutionBlock(ctx context.Context) (*pb.ExecutionBlock, error)
ExecutionBlockByHash(ctx context.Context, hash common.Hash) (*pb.ExecutionBlock, error)
}
@@ -73,6 +75,11 @@ func New(ctx context.Context, endpoint string, opts ...Option) (*Client, error)
c := &Client{
cfg: defaultConfig(),
}
for _, opt := range opts {
if err := opt(c); err != nil {
return nil, err
}
}
switch u.Scheme {
case "http", "https":
c.rpc, err = rpc.DialHTTPWithClient(endpoint, c.cfg.httpClient)
@@ -84,28 +91,59 @@ func New(ctx context.Context, endpoint string, opts ...Option) (*Client, error)
if err != nil {
return nil, err
}
for _, opt := range opts {
if err := opt(c); err != nil {
return nil, err
}
}
return c, nil
}
// NewPayload calls the engine_newPayloadV1 method via JSON-RPC.
func (c *Client) NewPayload(ctx context.Context, payload *pb.ExecutionPayload) (*pb.PayloadStatus, error) {
func (c *Client) NewPayload(ctx context.Context, payload *pb.ExecutionPayload) ([]byte, error) {
result := &pb.PayloadStatus{}
err := c.rpc.CallContext(ctx, result, NewPayloadMethod, payload)
return result, handleRPCError(err)
if err != nil {
return nil, handleRPCError(err)
}
switch result.Status {
case pb.PayloadStatus_INVALID_BLOCK_HASH:
return nil, fmt.Errorf("could not validate block hash: %v", result.ValidationError)
case pb.PayloadStatus_INVALID_TERMINAL_BLOCK:
return nil, fmt.Errorf("could not satisfy terminal block condition: %v", result.ValidationError)
case pb.PayloadStatus_ACCEPTED, pb.PayloadStatus_SYNCING:
return nil, ErrAcceptedSyncingPayloadStatus
case pb.PayloadStatus_INVALID:
return result.LatestValidHash, ErrInvalidPayloadStatus
case pb.PayloadStatus_VALID:
return result.LatestValidHash, nil
default:
return nil, ErrUnknownPayloadStatus
}
}
// ForkchoiceUpdated calls the engine_forkchoiceUpdatedV1 method via JSON-RPC.
func (c *Client) ForkchoiceUpdated(
ctx context.Context, state *pb.ForkchoiceState, attrs *pb.PayloadAttributes,
) (*ForkchoiceUpdatedResponse, error) {
) (*pb.PayloadIDBytes, []byte, error) {
result := &ForkchoiceUpdatedResponse{}
err := c.rpc.CallContext(ctx, result, ForkchoiceUpdatedMethod, state, attrs)
return result, handleRPCError(err)
if err != nil {
return nil, nil, handleRPCError(err)
}
if result.Status == nil {
return nil, nil, ErrNilResponse
}
resp := result.Status
switch resp.Status {
case pb.PayloadStatus_INVALID_TERMINAL_BLOCK:
return nil, nil, fmt.Errorf("could not satisfy terminal block condition: %v", resp.ValidationError)
case pb.PayloadStatus_SYNCING:
return nil, nil, ErrAcceptedSyncingPayloadStatus
case pb.PayloadStatus_INVALID:
return nil, resp.LatestValidHash, ErrInvalidPayloadStatus
case pb.PayloadStatus_VALID:
return result.PayloadId, resp.LatestValidHash, nil
default:
return nil, nil, ErrUnknownPayloadStatus
}
}
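Likewise, a hypothetical wrapper around the new three-value ForkchoiceUpdated return (payload ID for block building, latest valid hash, error); notifyForkchoice and the enginev1 alias are assumptions, the method signature and sentinel errors are taken from this diff.

package consumer

import (
    "context"

    "github.com/pkg/errors"
    enginev1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
    pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
)

func notifyForkchoice(ctx context.Context, c enginev1.Caller, st *pb.ForkchoiceState, attrs *pb.PayloadAttributes) (*pb.PayloadIDBytes, error) {
    payloadID, latestValidHash, err := c.ForkchoiceUpdated(ctx, st, attrs)
    if errors.Is(err, enginev1.ErrAcceptedSyncingPayloadStatus) {
        // SYNCING: no payload ID is available yet; callers typically retry.
        return nil, nil
    }
    if errors.Is(err, enginev1.ErrInvalidPayloadStatus) {
        // INVALID: latestValidHash identifies the last valid ancestor.
        return nil, errors.Wrapf(err, "last valid hash %#x", latestValidHash)
    }
    if err != nil {
        return nil, err
    }
    // VALID: payloadID can later be passed to GetPayload when proposing.
    return payloadID, nil
}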
// GetPayload calls the engine_getPayloadV1 method via JSON-RPC.
@@ -118,35 +156,39 @@ func (c *Client) GetPayload(ctx context.Context, payloadId [8]byte) (*pb.Executi
// ExchangeTransitionConfiguration calls the engine_exchangeTransitionConfigurationV1 method via JSON-RPC.
func (c *Client) ExchangeTransitionConfiguration(
ctx context.Context, cfg *pb.TransitionConfiguration,
) (*pb.TransitionConfiguration, error) {
// Terminal block number should be set to 0
) error {
// We set terminal block number to 0 as the parameter is not set on the consensus layer.
zeroBigNum := big.NewInt(0)
cfg.TerminalBlockNumber = zeroBigNum.Bytes()
result := &pb.TransitionConfiguration{}
if err := c.rpc.CallContext(ctx, result, ExchangeTransitionConfigurationMethod, cfg); err != nil {
return nil, handleRPCError(err)
return handleRPCError(err)
}
// We surface an error to the user if the local configuration settings do not
// match those reported by the execution node.
cfgTerminalHash := params.BeaconConfig().TerminalBlockHash[:]
if !bytes.Equal(cfgTerminalHash, result.TerminalBlockHash) {
return nil, errors.Wrapf(
ErrMismatchTerminalBlockHash,
return errors.Wrapf(
ErrConfigMismatch,
"got %#x from execution node, wanted %#x",
result.TerminalBlockHash,
cfgTerminalHash,
)
}
ttdCfg := params.BeaconConfig().TerminalTotalDifficulty
if ttdCfg != result.TerminalTotalDifficulty {
return nil, errors.Wrapf(
ErrMismatchTerminalTotalDiff,
ttdResult, err := hexutil.DecodeBig(result.TerminalTotalDifficulty)
if err != nil {
return errors.Wrap(err, "could not decode received terminal total difficulty")
}
if ttdResult.String() != ttdCfg {
return errors.Wrapf(
ErrConfigMismatch,
"got %s from execution node, wanted %s",
result.TerminalTotalDifficulty,
ttdResult.String(),
ttdCfg,
)
}
return result, nil
return nil
}
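The terminal total difficulty check above hinges on format: the execution node returns a 0x-prefixed hex quantity, while Prysm's config stores a decimal string. A standalone illustration of the same decode-and-compare step follows; the two values are made up for the example.

package main

import (
    "fmt"
    "log"

    "github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
    received := "0xf4240"   // hypothetical TerminalTotalDifficulty from the execution node
    configured := "1000000" // hypothetical value from Prysm's beacon config

    // Decode the hex quantity into a big.Int, as the client does above.
    ttd, err := hexutil.DecodeBig(received)
    if err != nil {
        log.Fatalf("could not decode received terminal total difficulty: %v", err)
    }
    // Compare the decimal string forms; a mismatch would be reported as ErrConfigMismatch.
    if ttd.String() != configured {
        log.Fatalf("config mismatch: got %s, wanted %s", ttd.String(), configured)
    }
    fmt.Println("terminal total difficulty matches:", ttd.String())
}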
// LatestExecutionBlock fetches the latest execution engine block by calling


@@ -13,7 +13,9 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/rpc"
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
@@ -22,7 +24,10 @@ import (
"google.golang.org/protobuf/proto"
)
var _ = EngineCaller(&Client{})
var (
_ = Caller(&Client{})
_ = Caller(&mocks.EngineClient{})
)
func TestClient_IPC(t *testing.T) {
server := newTestIPCServer(t)
@@ -45,35 +50,25 @@ func TestClient_IPC(t *testing.T) {
t.Run(ForkchoiceUpdatedMethod, func(t *testing.T) {
want, ok := fix["ForkchoiceUpdatedResponse"].(*ForkchoiceUpdatedResponse)
require.Equal(t, true, ok)
resp, err := client.ForkchoiceUpdated(ctx, &pb.ForkchoiceState{}, &pb.PayloadAttributes{})
payloadID, validHash, err := client.ForkchoiceUpdated(ctx, &pb.ForkchoiceState{}, &pb.PayloadAttributes{})
require.NoError(t, err)
require.DeepEqual(t, want.Status, resp.Status)
require.DeepEqual(t, want.PayloadId, resp.PayloadId)
require.DeepEqual(t, want.Status.LatestValidHash, validHash)
require.DeepEqual(t, want.PayloadId, payloadID)
})
t.Run(NewPayloadMethod, func(t *testing.T) {
want, ok := fix["PayloadStatus"].(*pb.PayloadStatus)
want, ok := fix["ValidPayloadStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
req, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
resp, err := client.NewPayload(ctx, req)
latestValidHash, err := client.NewPayload(ctx, req)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
t.Run(NewPayloadMethod, func(t *testing.T) {
want, ok := fix["PayloadStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
req, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
resp, err := client.NewPayload(ctx, req)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
})
t.Run(ExchangeTransitionConfigurationMethod, func(t *testing.T) {
want, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
resp, err := client.ExchangeTransitionConfiguration(ctx, want)
err := client.ExchangeTransitionConfiguration(ctx, want)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
t.Run(ExecutionBlockByNumberMethod, func(t *testing.T) {
want, ok := fix["ExecutionBlock"].(*pb.ExecutionBlock)
@@ -138,7 +133,7 @@ func TestClient_HTTP(t *testing.T) {
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
t.Run(ForkchoiceUpdatedMethod, func(t *testing.T) {
t.Run(ForkchoiceUpdatedMethod+" VALID status", func(t *testing.T) {
forkChoiceState := &pb.ForkchoiceState{
HeadBlockHash: []byte("head"),
SafeBlockHash: []byte("safe"),
@@ -146,98 +141,174 @@ func TestClient_HTTP(t *testing.T) {
}
payloadAttributes := &pb.PayloadAttributes{
Timestamp: 1,
Random: []byte("random"),
PrevRandao: []byte("random"),
SuggestedFeeRecipient: []byte("suggestedFeeRecipient"),
}
want, ok := fix["ForkchoiceUpdatedResponse"].(*ForkchoiceUpdatedResponse)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := ioutil.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
forkChoiceStateReq, err := json.Marshal(forkChoiceState)
require.NoError(t, err)
payloadAttrsReq, err := json.Marshal(payloadAttributes)
require.NoError(t, err)
// We expect the JSON-RPC request string to contain the right arguments.
require.Equal(t, true, strings.Contains(
jsonRequestString, string(forkChoiceStateReq),
))
require.Equal(t, true, strings.Contains(
jsonRequestString, string(payloadAttrsReq),
))
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": want,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
client := &Client{}
client.rpc = rpcClient
client := forkchoiceUpdateSetup(t, forkChoiceState, payloadAttributes, want)
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.ForkchoiceUpdated(ctx, forkChoiceState, payloadAttributes)
payloadID, validHash, err := client.ForkchoiceUpdated(ctx, forkChoiceState, payloadAttributes)
require.NoError(t, err)
require.DeepEqual(t, want.Status, resp.Status)
require.DeepEqual(t, want.PayloadId, resp.PayloadId)
require.DeepEqual(t, want.Status.LatestValidHash, validHash)
require.DeepEqual(t, want.PayloadId, payloadID)
})
t.Run(NewPayloadMethod, func(t *testing.T) {
t.Run(ForkchoiceUpdatedMethod+" SYNCING status", func(t *testing.T) {
forkChoiceState := &pb.ForkchoiceState{
HeadBlockHash: []byte("head"),
SafeBlockHash: []byte("safe"),
FinalizedBlockHash: []byte("finalized"),
}
payloadAttributes := &pb.PayloadAttributes{
Timestamp: 1,
PrevRandao: []byte("random"),
SuggestedFeeRecipient: []byte("suggestedFeeRecipient"),
}
want, ok := fix["ForkchoiceUpdatedSyncingResponse"].(*ForkchoiceUpdatedResponse)
require.Equal(t, true, ok)
client := forkchoiceUpdateSetup(t, forkChoiceState, payloadAttributes, want)
// We call the RPC method via HTTP and expect a proper result.
payloadID, validHash, err := client.ForkchoiceUpdated(ctx, forkChoiceState, payloadAttributes)
require.ErrorIs(t, err, ErrAcceptedSyncingPayloadStatus)
require.DeepEqual(t, (*pb.PayloadIDBytes)(nil), payloadID)
require.DeepEqual(t, []byte(nil), validHash)
})
t.Run(ForkchoiceUpdatedMethod+" INVALID status", func(t *testing.T) {
forkChoiceState := &pb.ForkchoiceState{
HeadBlockHash: []byte("head"),
SafeBlockHash: []byte("safe"),
FinalizedBlockHash: []byte("finalized"),
}
payloadAttributes := &pb.PayloadAttributes{
Timestamp: 1,
PrevRandao: []byte("random"),
SuggestedFeeRecipient: []byte("suggestedFeeRecipient"),
}
want, ok := fix["ForkchoiceUpdatedInvalidResponse"].(*ForkchoiceUpdatedResponse)
require.Equal(t, true, ok)
client := forkchoiceUpdateSetup(t, forkChoiceState, payloadAttributes, want)
// We call the RPC method via HTTP and expect a proper result.
payloadID, validHash, err := client.ForkchoiceUpdated(ctx, forkChoiceState, payloadAttributes)
require.ErrorIs(t, err, ErrInvalidPayloadStatus)
require.DeepEqual(t, (*pb.PayloadIDBytes)(nil), payloadID)
require.DeepEqual(t, want.Status.LatestValidHash, validHash)
})
t.Run(ForkchoiceUpdatedMethod+" UNKNOWN status", func(t *testing.T) {
forkChoiceState := &pb.ForkchoiceState{
HeadBlockHash: []byte("head"),
SafeBlockHash: []byte("safe"),
FinalizedBlockHash: []byte("finalized"),
}
payloadAttributes := &pb.PayloadAttributes{
Timestamp: 1,
PrevRandao: []byte("random"),
SuggestedFeeRecipient: []byte("suggestedFeeRecipient"),
}
want, ok := fix["ForkchoiceUpdatedAcceptedResponse"].(*ForkchoiceUpdatedResponse)
require.Equal(t, true, ok)
client := forkchoiceUpdateSetup(t, forkChoiceState, payloadAttributes, want)
// We call the RPC method via HTTP and expect a proper result.
payloadID, validHash, err := client.ForkchoiceUpdated(ctx, forkChoiceState, payloadAttributes)
require.ErrorIs(t, err, ErrUnknownPayloadStatus)
require.DeepEqual(t, (*pb.PayloadIDBytes)(nil), payloadID)
require.DeepEqual(t, []byte(nil), validHash)
})
t.Run(ForkchoiceUpdatedMethod+" INVALID_TERMINAL_BLOCK status", func(t *testing.T) {
forkChoiceState := &pb.ForkchoiceState{
HeadBlockHash: []byte("head"),
SafeBlockHash: []byte("safe"),
FinalizedBlockHash: []byte("finalized"),
}
payloadAttributes := &pb.PayloadAttributes{
Timestamp: 1,
PrevRandao: []byte("random"),
SuggestedFeeRecipient: []byte("suggestedFeeRecipient"),
}
want, ok := fix["ForkchoiceUpdatedInvalidTerminalBlockResponse"].(*ForkchoiceUpdatedResponse)
require.Equal(t, true, ok)
client := forkchoiceUpdateSetup(t, forkChoiceState, payloadAttributes, want)
// We call the RPC method via HTTP and expect a proper result.
payloadID, validHash, err := client.ForkchoiceUpdated(ctx, forkChoiceState, payloadAttributes)
require.ErrorContains(t, "could not satisfy terminal block condition", err)
require.DeepEqual(t, (*pb.PayloadIDBytes)(nil), payloadID)
require.DeepEqual(t, []byte(nil), validHash)
})
t.Run(NewPayloadMethod+" VALID status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
want, ok := fix["PayloadStatus"].(*pb.PayloadStatus)
want, ok := fix["ValidPayloadStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := ioutil.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
reqArg, err := json.Marshal(execPayload)
require.NoError(t, err)
// We expect the JSON-RPC request string to contain the right arguments.
require.Equal(t, true, strings.Contains(
jsonRequestString, string(reqArg),
))
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": want,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
client := &Client{}
client.rpc = rpcClient
client := newPayloadSetup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.NewPayload(ctx, execPayload)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
require.DeepEqual(t, want.LatestValidHash, resp)
})
t.Run(NewPayloadMethod+" SYNCING status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
want, ok := fix["SyncingStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
client := newPayloadSetup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.NewPayload(ctx, execPayload)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
t.Run(NewPayloadMethod+" INVALID_BLOCK_HASH status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
want, ok := fix["InvalidBlockHashStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
client := newPayloadSetup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.NewPayload(ctx, execPayload)
require.ErrorContains(t, "could not validate block hash", err)
require.DeepEqual(t, []uint8(nil), resp)
})
t.Run(NewPayloadMethod+" INVALID_TERMINAL_BLOCK status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
want, ok := fix["InvalidTerminalBlockStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
client := newPayloadSetup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.NewPayload(ctx, execPayload)
require.ErrorContains(t, "could not satisfy terminal block condition", err)
require.DeepEqual(t, []uint8(nil), resp)
})
t.Run(NewPayloadMethod+" INVALID status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
want, ok := fix["InvalidStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
client := newPayloadSetup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.NewPayload(ctx, execPayload)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
t.Run(NewPayloadMethod+" UNKNOWN status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
want, ok := fix["UnknownStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
client := newPayloadSetup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.NewPayload(ctx, execPayload)
require.ErrorIs(t, ErrUnknownPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
t.Run(ExecutionBlockByNumberMethod, func(t *testing.T) {
want, ok := fix["ExecutionBlock"].(*pb.ExecutionBlock)
@@ -304,9 +375,8 @@ func TestClient_HTTP(t *testing.T) {
client.rpc = rpcClient
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.ExchangeTransitionConfiguration(ctx, want)
err = client.ExchangeTransitionConfiguration(ctx, want)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
t.Run(ExecutionBlockByHashMethod, func(t *testing.T) {
arg := common.BytesToHash([]byte("foo"))
@@ -382,8 +452,8 @@ func TestExchangeTransitionConfiguration(t *testing.T) {
client := &Client{}
client.rpc = rpcClient
_, err = client.ExchangeTransitionConfiguration(ctx, request)
require.Equal(t, true, errors.Is(err, ErrMismatchTerminalBlockHash))
err = client.ExchangeTransitionConfiguration(ctx, request)
require.Equal(t, true, errors.Is(err, ErrConfigMismatch))
})
t.Run("wrong terminal total difficulty", func(t *testing.T) {
request, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
@@ -398,7 +468,7 @@ func TestExchangeTransitionConfiguration(t *testing.T) {
}()
// Change the terminal total difficulty.
resp.TerminalTotalDifficulty = "bar"
resp.TerminalTotalDifficulty = "0x1"
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
@@ -415,8 +485,8 @@ func TestExchangeTransitionConfiguration(t *testing.T) {
client := &Client{}
client.rpc = rpcClient
_, err = client.ExchangeTransitionConfiguration(ctx, request)
require.Equal(t, true, errors.Is(err, ErrMismatchTerminalTotalDiff))
err = client.ExchangeTransitionConfiguration(ctx, request)
require.Equal(t, true, errors.Is(err, ErrConfigMismatch))
})
}
@@ -533,7 +603,7 @@ func fixtures() map[string]interface{} {
StateRoot: foo[:],
ReceiptsRoot: foo[:],
LogsBloom: baz,
Random: foo[:],
PrevRandao: foo[:],
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
@@ -563,7 +633,7 @@ func fixtures() map[string]interface{} {
ReceiptsRoot: receiptsRoot,
LogsBloom: logsBloom,
Difficulty: bytesutil.PadTo([]byte("1"), fieldparams.RootLength),
TotalDifficulty: bytesutil.PadTo([]byte("2"), fieldparams.RootLength),
TotalDifficulty: "2",
GasLimit: 3,
GasUsed: 4,
Timestamp: 5,
@@ -574,7 +644,7 @@ func fixtures() map[string]interface{} {
Uncles: [][]byte{foo[:]},
}
status := &pb.PayloadStatus{
Status: pb.PayloadStatus_ACCEPTED,
Status: pb.PayloadStatus_VALID,
LatestValidHash: foo[:],
ValidationError: "",
}
@@ -583,17 +653,86 @@ func fixtures() map[string]interface{} {
Status: status,
PayloadId: &id,
}
forkChoiceSyncingResp := &ForkchoiceUpdatedResponse{
Status: &pb.PayloadStatus{
Status: pb.PayloadStatus_SYNCING,
LatestValidHash: nil,
},
PayloadId: &id,
}
forkChoiceInvalidTerminalBlockResp := &ForkchoiceUpdatedResponse{
Status: &pb.PayloadStatus{
Status: pb.PayloadStatus_INVALID_TERMINAL_BLOCK,
LatestValidHash: nil,
},
PayloadId: &id,
}
forkChoiceAcceptedResp := &ForkchoiceUpdatedResponse{
Status: &pb.PayloadStatus{
Status: pb.PayloadStatus_ACCEPTED,
LatestValidHash: nil,
},
PayloadId: &id,
}
forkChoiceInvalidResp := &ForkchoiceUpdatedResponse{
Status: &pb.PayloadStatus{
Status: pb.PayloadStatus_INVALID,
LatestValidHash: []byte("latestValidHash"),
},
PayloadId: &id,
}
b, _ := new(big.Int).SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
ttd, _ := uint256.FromBig(b)
transitionCfg := &pb.TransitionConfiguration{
TerminalBlockHash: params.BeaconConfig().TerminalBlockHash[:],
TerminalTotalDifficulty: params.BeaconConfig().TerminalTotalDifficulty,
TerminalTotalDifficulty: ttd.Hex(),
TerminalBlockNumber: big.NewInt(0).Bytes(),
}
validStatus := &pb.PayloadStatus{
Status: pb.PayloadStatus_VALID,
LatestValidHash: foo[:],
ValidationError: "",
}
inValidBlockHashStatus := &pb.PayloadStatus{
Status: pb.PayloadStatus_INVALID_BLOCK_HASH,
LatestValidHash: nil,
}
inValidTerminalBlockStatus := &pb.PayloadStatus{
Status: pb.PayloadStatus_INVALID_TERMINAL_BLOCK,
LatestValidHash: nil,
}
acceptedStatus := &pb.PayloadStatus{
Status: pb.PayloadStatus_ACCEPTED,
LatestValidHash: nil,
}
syncingStatus := &pb.PayloadStatus{
Status: pb.PayloadStatus_SYNCING,
LatestValidHash: nil,
}
invalidStatus := &pb.PayloadStatus{
Status: pb.PayloadStatus_INVALID,
LatestValidHash: foo[:],
}
unknownStatus := &pb.PayloadStatus{
Status: pb.PayloadStatus_UNKNOWN,
LatestValidHash: foo[:],
}
return map[string]interface{}{
"ExecutionBlock": executionBlock,
"ExecutionPayload": executionPayloadFixture,
"PayloadStatus": status,
"ForkchoiceUpdatedResponse": forkChoiceResp,
"TransitionConfiguration": transitionCfg,
"ExecutionBlock": executionBlock,
"ExecutionPayload": executionPayloadFixture,
"ValidPayloadStatus": validStatus,
"InvalidBlockHashStatus": inValidBlockHashStatus,
"InvalidTerminalBlockStatus": inValidTerminalBlockStatus,
"AcceptedStatus": acceptedStatus,
"SyncingStatus": syncingStatus,
"InvalidStatus": invalidStatus,
"UnknownStatus": unknownStatus,
"ForkchoiceUpdatedResponse": forkChoiceResp,
"ForkchoiceUpdatedSyncingResponse": forkChoiceSyncingResp,
"ForkchoiceUpdatedInvalidTerminalBlockResponse": forkChoiceInvalidTerminalBlockResp,
"ForkchoiceUpdatedAcceptedResponse": forkChoiceAcceptedResp,
"ForkchoiceUpdatedInvalidResponse": forkChoiceInvalidResp,
"TransitionConfiguration": transitionCfg,
}
}
@@ -653,6 +792,7 @@ func (*testEngineService) ForkchoiceUpdatedV1(
if !ok {
panic("not found")
}
item.Status.Status = pb.PayloadStatus_VALID
return item
}
@@ -660,9 +800,83 @@ func (*testEngineService) NewPayloadV1(
_ context.Context, _ *pb.ExecutionPayload,
) *pb.PayloadStatus {
fix := fixtures()
item, ok := fix["PayloadStatus"].(*pb.PayloadStatus)
item, ok := fix["ValidPayloadStatus"].(*pb.PayloadStatus)
if !ok {
panic("not found")
}
return item
}
func forkchoiceUpdateSetup(t *testing.T, fcs *pb.ForkchoiceState, att *pb.PayloadAttributes, res *ForkchoiceUpdatedResponse) *Client {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := ioutil.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
forkChoiceStateReq, err := json.Marshal(fcs)
require.NoError(t, err)
payloadAttrsReq, err := json.Marshal(att)
require.NoError(t, err)
// We expect the JSON-RPC request string to contain the right arguments.
require.Equal(t, true, strings.Contains(
jsonRequestString, string(forkChoiceStateReq),
))
require.Equal(t, true, strings.Contains(
jsonRequestString, string(payloadAttrsReq),
))
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": res,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
client := &Client{}
client.rpc = rpcClient
return client
}
func newPayloadSetup(t *testing.T, status *pb.PayloadStatus, payload *pb.ExecutionPayload) *Client {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := ioutil.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
reqArg, err := json.Marshal(payload)
require.NoError(t, err)
// We expect the JSON-RPC request string to contain the right arguments.
require.Equal(t, true, strings.Contains(
jsonRequestString, string(reqArg),
))
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": status,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
client := &Client{}
client.rpc = rpcClient
return client
}


@@ -17,12 +17,23 @@ var (
ErrServer = errors.New("client error while processing request")
// ErrUnknownPayload corresponds to JSON-RPC code -32001.
ErrUnknownPayload = errors.New("payload does not exist or is not available")
// ErrUnknownPayloadStatus when the payload status is unknown.
ErrUnknownPayloadStatus = errors.New("unknown payload status")
// ErrUnsupportedScheme for unsupported URL schemes.
ErrUnsupportedScheme = errors.New("unsupported url scheme, only http(s) and ipc are supported")
// ErrConfigMismatch when the execution node's terminal total difficulty or
// terminal block hash received via the API mismatches Prysm's configuration value.
ErrConfigMismatch = errors.New("execution client configuration mismatch")
// ErrMismatchTerminalBlockHash when the terminal block hash value received via
// the API mismatches Prysm's configuration value.
ErrMismatchTerminalBlockHash = errors.New("terminal block hash mismatch")
// ErrMismatchTerminalTotalDiff when the terminal total difficulty value received via
// the API mismatches Prysm's configuration value.
ErrMismatchTerminalTotalDiff = errors.New("terminal total difficulty mismatch")
// ErrAcceptedSyncingPayloadStatus when the status of the payload is syncing or accepted.
ErrAcceptedSyncingPayloadStatus = errors.New("payload status is SYNCING or ACCEPTED")
// ErrInvalidPayloadStatus when the status of the payload is invalid.
ErrInvalidPayloadStatus = errors.New("payload status is INVALID")
// ErrNilResponse when the response is nil.
ErrNilResponse = errors.New("nil response")
)
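Because the client wraps ErrConfigMismatch with context via errors.Wrapf, callers can still detect it with errors.Is, as the updated tests do. A minimal self-contained sketch; the sentinel is redeclared locally only so the example compiles on its own.

package main

import (
    "fmt"

    "github.com/pkg/errors"
)

// Local stand-in for the exported ErrConfigMismatch sentinel above.
var errConfigMismatch = errors.New("execution client configuration mismatch")

func main() {
    // Wrapping adds context while preserving the sentinel in the error chain.
    wrapped := errors.Wrapf(errConfigMismatch, "got %#x from execution node, wanted %#x", []byte{0x01}, []byte{0x02})
    fmt.Println(errors.Is(wrapped, errConfigMismatch)) // true
}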


@@ -0,0 +1,13 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["client.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks",
visibility = ["//visibility:public"],
deps = [
"//proto/engine/v1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)


@@ -0,0 +1,59 @@
package mocks
import (
"context"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
)
// EngineClient --
type EngineClient struct {
NewPayloadResp []byte
PayloadIDBytes *pb.PayloadIDBytes
ForkChoiceUpdatedResp []byte
ExecutionPayload *pb.ExecutionPayload
ExecutionBlock *pb.ExecutionBlock
Err error
ErrLatestExecBlock error
ErrExecBlockByHash error
ErrForkchoiceUpdated error
BlockByHashMap map[[32]byte]*pb.ExecutionBlock
}
// NewPayload --
func (e *EngineClient) NewPayload(_ context.Context, _ *pb.ExecutionPayload) ([]byte, error) {
return e.NewPayloadResp, nil
}
// ForkchoiceUpdated --
func (e *EngineClient) ForkchoiceUpdated(
_ context.Context, _ *pb.ForkchoiceState, _ *pb.PayloadAttributes,
) (*pb.PayloadIDBytes, []byte, error) {
return e.PayloadIDBytes, e.ForkChoiceUpdatedResp, e.ErrForkchoiceUpdated
}
// GetPayload --
func (e *EngineClient) GetPayload(_ context.Context, _ [8]byte) (*pb.ExecutionPayload, error) {
return e.ExecutionPayload, nil
}
// ExchangeTransitionConfiguration --
func (e *EngineClient) ExchangeTransitionConfiguration(_ context.Context, _ *pb.TransitionConfiguration) error {
return e.Err
}
// LatestExecutionBlock --
func (e *EngineClient) LatestExecutionBlock(_ context.Context) (*pb.ExecutionBlock, error) {
return e.ExecutionBlock, e.ErrLatestExecBlock
}
// ExecutionBlockByHash --
func (e *EngineClient) ExecutionBlockByHash(_ context.Context, h common.Hash) (*pb.ExecutionBlock, error) {
b, ok := e.BlockByHashMap[h]
if !ok {
return nil, errors.New("block not found")
}
return b, e.ErrExecBlockByHash
}
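A hypothetical test sketch using the new mock: the exported fields above let a test script each engine call without a live execution node. The package and test names are invented; the mock type, its fields, and its NewPayload signature come from the file above.

package consumer_test

import (
    "context"
    "testing"

    "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks"
)

func TestWithEngineMock(t *testing.T) {
    // Configure the canned response the mock should return for NewPayload.
    m := &mocks.EngineClient{
        NewPayloadResp: []byte("latestValidHash"),
    }
    gotHash, err := m.NewPayload(context.Background(), nil)
    if err != nil {
        t.Fatal(err)
    }
    if string(gotHash) != "latestValidHash" {
        t.Fatalf("unexpected hash %q", gotHash)
    }
}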

Some files were not shown because too many files have changed in this diff.