Compare commits


401 Commits

Author SHA1 Message Date
terence tsao
ebd6d9a0b3 Refactor state input 2022-04-06 10:38:02 -07:00
terence tsao
1242e1521e Merge branch 'payloadid-cache' of github.com:prysmaticlabs/prysm into kiln 2022-04-06 10:15:53 -07:00
terence tsao
e9c921c78d Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-04-06 10:15:05 -07:00
terence tsao
c8b2d46187 Merge branch 'develop' of github.com:prysmaticlabs/prysm into payloadid-cache 2022-04-06 07:54:14 -07:00
Potuz
83a83279d4 Remove synced tips and use last valid hash (#10439)
* Remove synced tips

Use the last valid hash when removing invalid nodes.

* add test

* Remove unused code

* More unused parameters

* Fix proposer boost

* terence's review #1

* Fix conflicts

* terence's review 2

* rename argument

* terence's review #3

* rename optimistic -> status

* Minor clean up

* revert loop variable change

* do not mark lvh as valid

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2022-04-06 14:18:30 +00:00
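
The bullets above amount to: when the execution engine reports an invalid payload together with a last valid hash (lvh), walk back from the invalid block and mark everything after the last valid hash as invalid, deliberately not marking the lvh node itself valid. A minimal self-contained sketch of that walk, with hypothetical types and names rather than Prysm's actual forkchoice store, and treating beacon roots and payload hashes interchangeably for brevity:

package forkchoice

// status is an illustrative stand-in for a forkchoice node's validation state.
type status int

const (
    statusOptimistic status = iota
    statusValid
    statusInvalid
)

type node struct {
    parent [32]byte
    status status
}

// markInvalid walks from invalidRoot back toward lastValidHash, marking every
// node it visits as invalid. The node carrying lastValidHash is left as-is:
// per the commit above, it is intentionally not marked valid here.
func markInvalid(nodes map[[32]byte]*node, invalidRoot, lastValidHash [32]byte) {
    for root := invalidRoot; root != lastValidHash; {
        n, ok := nodes[root]
        if !ok {
            return // unknown ancestor; nothing left to invalidate
        }
        n.status = statusInvalid
        root = n.parent
    }
}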
terence tsao
bdab34fd01 Use correct head for notifyForkchoiceUpdate (#10485)
* Use correct head for notifyForkchoiceUpdate

* Fix existing tests

* Update receive_attestation_test.go
2022-04-06 19:05:53 +08:00
terence tsao
00d782bb55 Merge branch 'develop' of github.com:prysmaticlabs/prysm into payloadid-cache 2022-04-05 10:34:22 -07:00
terence tsao
32a4103785 Fix tests 2022-04-05 10:30:00 -07:00
terence tsao
9a3002d538 feedbacks 2022-04-05 10:22:44 -07:00
kasey
de0143e036 save origin block root before finalize (#10463)
* save origin block root before finalize

* add test for SaveOrigin

* goimports :(

* signature to LoadGenesis changed in a diff PR

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-04-05 17:13:53 +00:00
Raul Jordan
e5b418b164 Merge branch 'develop' into kiln 2022-04-05 12:55:10 -04:00
Nishant Das
2a7a09b112 Add Lock Analyzer (#10430)
* add lock analyzer

* fix locks

* progress

* fix failures

* fix error log

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-04-05 16:39:48 +00:00
terence tsao
984575ed57 Optimistic check: handle zeros check point root (#10464)
* Handle zero roots

* Update chain_info_test.go

* Update chain_info_test.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-04-05 14:37:25 +00:00
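
The fix, visible in the chain_info.go diff further down this page, substitutes the genesis root before the state-summary lookup whenever the last validated checkpoint root is still all zeros. A self-contained sketch of that substitution (the helper name ensureRootNotZeros appears in the diff; passing the genesis root as a parameter is a simplification of how the service actually tracks it):

// zeroHash is the all-zero root a checkpoint carries before the first
// finalized epoch.
var zeroHash [32]byte

// ensureRootNotZeros returns the genesis block root in place of an all-zero
// checkpoint root, so lookups keyed by the root cannot miss.
func ensureRootNotZeros(root, genesisRoot [32]byte) [32]byte {
    if root == zeroHash {
        return genesisRoot
    }
    return root
}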
Nishant Das
927e338f9e Enable Bellatrix E2E (#10437)
* Fix finding terminal block hash calculation

* Update mainnet_config.go

* Update beacon_block.pb.go

* Various fixes to pass all spec tests for Merge (#9777)

* Proper upgrade altair to merge state

* Use uint64 for ttd

* Correctly upgrade to merge state + object mapping fixes

* Use proper receive block path for initial syncing

* Disable contract lookback

* Disable deposit contract lookback

* Go fmt

* Merge: switch from go bindings to raw rpc calls (#9803)

* Disable genesis ETH1.0 chain header logging

* Update htrutils.go

* all gossip tests passing

* Remove gas validations

* Update penalty params for Merge

* Fix gossip and tx size limits for the merge part 1

* Remove extraneous p2p condition

* Add and use

* Add and use TBH_ACTIVATION_EPOCH

* Update WORKSPACE

* Update Kintsugi engine API (#9865)

* Kintsugi ssz (#9867)

* All spec tests pass

* Update spec test shas

* Update Kintsugi consensus implementations (#9872)

* Remove secp256k1

* Remove unused merge genesis state gen tool

* Manually override nil transaction field. M2 works

* Fix bad hex conversion

* Change Gossip message size and Chunk Size from 1 MB to 10 MB (#9860)

* change gossip size and chunk size after merge

* change ssz to accommodate both changes

* gofmt config file

* add testcase for merge MsgId

* Update beacon-chain/p2p/message_id.go

Change MB to MiB in comment

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* change function name from altairMsgID to postAltairMsgID

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* Sync with develop

* Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi

* Update state_trie.go

* Clean up conflicts

* Fix build

* Update config to devnet1

* Fix state merge

* Handle merge test case for update balance

* Fix build

* State pkg cleanup

* Fix a bug with loading mainnet state

* Fix transactions root

* Add v2 endpoint for merge blocks (#9802)

* Add V2 blocks endpoint for merge blocks

* Update beacon-chain/rpc/apimiddleware/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* go mod

* fix transactions

* Terence's comments

* add missing file

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Sync

* Go mod tidy

* change EP field names

* latest kintsugi execution api

* fix conflicts

* converting base fee to big endian format (#10018)

* ReverseByteOrder function does not mutate the input

* sync with develop

* use merge gossip sizes

* correct gossip sizes this time

* visibility

* clean ups

* Sync with develop, fix payload nil check bug

* Speed up syncing, hide cosmetic errors

* Sync with develop

* Clean up after sync

* Update generate_keys.go

* sync with develop

* Update mainnet_config.go

* Clean ups

* Sync optimistically candidate blocks (#10193)

* Revert "Sync optimistically candidate blocks (#10193)"

This reverts commit f99a0419ef.

* Sync optimistically candidate blocks (#10193)

* allow optimistic sync

* Fix merge transition block validation

* Update proposer.go

* Sync with develop

* delete deprecated client, update testnet flag

* Change optimistic logic (#10194)

* Logs and err handling

* Fix build

* Clean ups

* Add back get payload

* c

* Done

* Rm uncommented

* Optimistic sync: prysm validator rpcs (#10200)

* Logs to reproduce

* Use pointers

* Use pointers

* Use pointers

* Update json_marshal_unmarshal.go

* Fix marshal

* Update json_marshal_unmarshal.go

* Log

* string total diff

* str

* marshal un

* set string

* json

* gaz

* Comment out optimistic status

* remove kiln flag here (#10269)

* Sync with develop

* Sync with develop

* clean ups

* refactor engine calls

* Update process_block.go

* Fix deadlock, uncomment duty opt sync

* Update proposer_execution_payload.go

* Sync with develop

* Rm post state check

* Bypass eth1 data checks

* Update proposer_execution_payload.go

* Return early if ttd is not reached

* Sync with develop

* Update process_block.go

* Update receive_block.go

* Update bzl

* Revert "Update receive_block.go"

This reverts commit 5b4a87c512.

* Fix run time

* add in all the fixes

* fix evaluator bugs

* latest fixes

* sum

* fix to be configurable

* Update go.mod

* Fix AltairCompatible to account for future state version

* Update proposer_execution_payload.go

* fix broken conditional checks

* fix all issues

* Handle pre state Altair with valid payload

* Handle pre state Altair with valid payload

* Log bellatrix fields

* Update log.go

* Revert "fix broken conditional checks"

This reverts commit e118db6c20.

* LH multiclient working

* Friendly fee recipient log

* Remove extra SetOptimisticToValid

* fix race

* fix test

* Fix base fee per gas

* Fix notifypayload headroot

* tx fuzzer

* clean up with develop branch

* save progress

* 200tx/block

* add LH flags

* Sync with develop

* cleanup

* cleanup

* hash

* fix build

* fix test

* fix go check

* fmt

* gosec

* add deps

* cleanup

* fix up

* change gas price

* remove flag

* last fix

* use new LH version

* fix up

* fix finalized block panic

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Zahoor Mohamed <zahoor@zahoor.in>
Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Zahoor Mohamed <zahoor@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-04-05 14:02:46 +00:00
terence tsao
f44c99d92a Add blinded beacon block protobufs and wrappers (#10473)
Co-authored-by: rkapka <rkapka@wp.pl>
2022-04-05 14:57:19 +02:00
terence tsao
029e52be1e Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-04-04 20:43:06 -07:00
terence tsao
7d669f23ab Sync: process pending block with optimistic parent (#10480) 2022-04-04 18:46:05 -07:00
terence tsao
9b64c33bd1 RPC: GetValidatorPerformance for Bellatrix (#10482) 2022-04-04 19:50:57 -04:00
terence tsao
0800025005 Revert "Another option, remove optimistic sync assumption"
This reverts commit 0b5bdcb065.
2022-04-04 11:07:04 -07:00
terence tsao
0b5bdcb065 Another option, remove optimistic sync assumption 2022-04-04 11:05:36 -07:00
terence tsao
42f69523d5 Update service.go 2022-04-04 10:16:25 -07:00
terence tsao
a9fcd92485 Update BUILD.bazel 2022-04-04 10:12:43 -07:00
terence tsao
cac8a51091 Add payload id cache 2022-04-04 09:01:47 -07:00
Raul Jordan
7b7199b175 Merge branch 'develop' into kiln 2022-04-04 10:16:03 -04:00
Potuz
defa602e50 Adapt Doppelganger to Altair (#9969)
Co-authored-by: rkapka <rkapka@wp.pl>
2022-04-04 15:55:55 +02:00
Radosław Kapka
67c8776f3c Fix execution payload field names in API Middleware (#10479) 2022-04-04 13:19:31 +02:00
Mohamed Zahoor
896d186e3b process the optimistic blocks whose parents are optimistic too (#10350)
* process the optimistic blocks whose parents are optimistic too

* adding unit test

* fix test

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-04-03 23:57:52 -03:00
terence tsao
8005200819 Build 2022-04-03 17:21:42 -07:00
terence tsao
c9ddf90b66 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-04-03 11:57:09 -07:00
Raul Jordan
edb98cf499 Move Engine API to Powchain and Consolidate Connection Management (#10438)
* gazelle

* tests passing

* use the same engine client

* pass

* initialize in right place

* merge

* build

* imports

* ensure engine checks work

* pass powchain tests

* powchain tests pass

* deepsource

* fix up node issues

* gaz

* endpoint use

* baz

* b

* conf

* lint

* gaz

* move to start function

* test pass
2022-04-01 14:04:24 -04:00
Potuz
ce15823f8d Save head after FCU (#10466)
* Update head after FCU

* rename function

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2022-04-01 00:08:01 +00:00
terence tsao
7cdc741b2f Sync: don't set block to bad due to timeout (#10470)
* Add filter error and tests

* Update BUILD.bazel

* Kasey's feedback
2022-03-31 23:29:27 +00:00
terence tsao
5704fb34be Remove duplicated engine mock (#10472)
* Rm mock

* update

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-03-31 19:14:09 -04:00
Preston Van Loon
3e2a037d42 github workflows: pin go version to 1.17 (#10471)
* github workflows: pin go version to 1.17

* Update go.yml

* Revert "Update go.yml"

This reverts commit 4a2d36d05d.

* pin golangci-lint

* try go 1.17

* Revert "Revert "Update go.yml""

This reverts commit 8a89663874.

* move and increase version of checkout

* attempt to ignore export path

* try with entrypoint only

* try some rearranging of stuff

* Split up jobs

* Use hack mentioned in https://github.com/securego/gosec/issues/469#issuecomment-643823092

* Delete dappnode release trigger

* rm id

* try pin golangci-lint version

* try pin golangci-lint version

* Do not provide a specific go version and let's see what happens

* comment checkout, wtf is wrong with github actions

* it works locally...

* trying with some cache key for lint...

* Revert "trying with some cache key for lint..."

This reverts commit c4f5ae4495.

* try telling it to skip go installation

* revert commented line, do something to satisfy deepsource

* do something to satisfy deepsource
2022-03-31 22:16:14 +00:00
Potuz
177f9ccab0 Return historical non-canonical blocks as optimistic (#10446)
* Return historical non-canonical blocks as optimistic

* terence's review
2022-03-31 12:51:14 +00:00
Potuz
649dee532f insert block to forkchoice after notifyNewPayload (#10453)
* insert block to forkchoice after notifyNewPayload

* Add optimistic candidate check

* Terence's review

* Apply suggestions from code review

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2022-03-31 09:17:49 -03:00
terence tsao
4b3364ac6b Log terminal total difficulty status (#10457)
* Log terminal total diff status

* Update check_transition_config.go

* Update BUILD.bazel

* Update check_transition_config.go

* Update check_transition_config.go

* Update check_transition_config.go
2022-03-31 03:05:23 +00:00
terence tsao
0950ba37a9 Don't set bad block for timeouts 2022-03-30 15:57:38 -07:00
kasey
0df8d7f0c0 refactor genesis state flag handling, support url (#10449)
* refactor genesis state flag handling, support url

* lint fix

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-03-30 22:23:34 +00:00
Mike Neuder
ade7d705ec Cleanup of Keymanager Deleter interface (#10415)
* Migrating Keymanager account list functionality into each keymanager type

* Addressing review comments

* Adding newline at end of BUILD.bazel

* bazel run //:gazelle -- fix

* account deleter cleanup

* bazel run //:gazelle -- fix

* remove stale logging statement

* adding deleter interface to mock functions

* fixing deepsource findings

* go.sum

* bazel run //:gazelle -- fix

* go mod tidy -compat=1.17

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
2022-03-30 16:52:54 -05:00
Preston Van Loon
c68894b77d Spectest: Improve test size and timeouts (#10461)
* Adjust test sizes

* Improve the bazel test attributes for forkchoice and other tests

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2022-03-30 20:04:42 +00:00
terence tsao
fe98d69c0a Clean up blockchain pkg (#10452)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-03-30 17:45:24 +00:00
terence tsao
7516bc0316 Fix execution block's base fee endianess marshal/unmarshal (#10459) 2022-03-30 10:17:18 -07:00
Radosław Kapka
be6f3892e8 Align Beacon API with 2.2.0 spec (#10455) 2022-03-30 15:17:19 +00:00
terence tsao
b9ffd66bf4 Merge branch 'kiln-payload-cache' of github.com:prysmaticlabs/prysm into kiln 2022-03-29 10:23:04 -07:00
terence tsao
9dc5df66e3 Reset state if valid proposal epoch 2022-03-29 10:22:16 -07:00
terence tsao
d3ca5ce9df Update optimistic_sync_test.go 2022-03-28 10:01:02 -07:00
terence tsao
78339b92b3 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln-payload-cache 2022-03-28 09:53:04 -07:00
terence tsao
d821a6e7ca Add comments 2022-03-28 09:52:05 -07:00
terence tsao
47212f9d92 Update payload_test.go 2022-03-28 08:55:00 -07:00
terence tsao
deed872b6c Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-28 08:54:00 -07:00
terence tsao
f95d753cf5 Merge branch 'develop' into kiln-payload-cache 2022-03-28 07:34:08 -07:00
terence tsao
b13f4a8df4 Tests 2022-03-28 07:32:48 -07:00
terence tsao
0ab6b0410b Fix payload cache 2022-03-26 13:22:04 -07:00
terence tsao
f59aa60f77 Fix payload cache 2022-03-26 13:21:43 -07:00
terence tsao
769e652119 Update payload_id_test.go 2022-03-26 11:04:26 -07:00
terence tsao
42129c07c2 Merge branch 'kiln-payload-cache' of github.com:prysmaticlabs/prysm into kiln 2022-03-26 10:59:15 -07:00
terence tsao
13cefc61a1 Update BUILD.bazel 2022-03-26 10:57:46 -07:00
terence tsao
2338d3a874 Tests and metrics 2022-03-26 10:57:20 -07:00
terence tsao
dfd7854a4f Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln-payload-cache 2022-03-26 10:18:12 -07:00
terence tsao
5c39ddacd1 Update optimistic_sync.go 2022-03-25 15:34:19 -07:00
terence tsao
e0e8c30686 Clean up 2022-03-25 15:09:53 -07:00
terence tsao
ce6005fb28 Fix sync status 2022-03-25 14:57:40 -07:00
terence tsao
c7b0e819f4 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln-payload-cache 2022-03-25 14:50:51 -07:00
terence tsao
a6df54882c Merge branch 'cleanup' of github.com:prysmaticlabs/prysm into kiln 2022-03-25 11:51:52 -07:00
terence tsao
172606bc7b clean up helpers for checking merge status 2022-03-25 11:46:33 -07:00
terence tsao
66642b553b Merge branch 'less-fragile-payload-check' of github.com:prysmaticlabs/prysm into cleanup 2022-03-25 11:23:13 -07:00
terence tsao
763b81ce79 Merge branch 'revert-10307-consolidate-endpoints' of github.com:prysmaticlabs/prysm into kiln 2022-03-25 09:02:24 -07:00
terence tsao
ac6ecbac0b Revert "Use --http-web3provider for Execution Engine Connection (#10307)"
This reverts commit 3579551f15.
2022-03-25 08:52:33 -07:00
terence tsao
1130d4af22 Merge branch 'use-correct-pre-state' of github.com:prysmaticlabs/prysm into kiln 2022-03-25 08:16:22 -07:00
terence tsao
d83c9cf43e Merge branch 'init-client' of github.com:prysmaticlabs/prysm into kiln 2022-03-25 08:16:13 -07:00
terence tsao
a8a30a5305 Merge branch 'protoarray_set_inner_valid' of github.com:prysmaticlabs/prysm into kiln 2022-03-25 08:15:37 -07:00
terence tsao
83ee276362 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-25 08:14:41 -07:00
Raul Jordan
dce3d1e8ea Merge branch 'develop' into init-client 2022-03-25 15:09:30 +00:00
terence tsao
bd255a5790 Comment 2022-03-25 08:08:53 -07:00
prylabs-bulldozer[bot]
c00e69d49f Merge refs/heads/develop into use-correct-pre-state 2022-03-25 15:08:45 +00:00
prylabs-bulldozer[bot]
dd8fb34e70 Merge refs/heads/develop into protoarray_set_inner_valid 2022-03-25 15:08:43 +00:00
terence tsao
ad400fcf6e Fix 2022-03-25 07:49:37 -07:00
terence tsao
7adf8d78ef Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-25 07:06:59 -07:00
terence tsao
6852310a35 Merge branch 'develop' into protoarray_set_inner_valid 2022-03-25 07:06:16 -07:00
Potuz
b5cf982a57 Simplify IsExecutionEnabledUsingHeader 2022-03-25 10:07:04 -03:00
prylabs-bulldozer[bot]
2f8bdb43d1 Merge refs/heads/develop into use-correct-pre-state 2022-03-24 23:56:02 +00:00
prylabs-bulldozer[bot]
fea3376bf9 Merge refs/heads/develop into use-correct-pre-state 2022-03-24 23:12:53 +00:00
terence tsao
0aae327d40 Fix nil finalized block 2022-03-24 16:07:04 -07:00
terence tsao
7d4008910f Properly initialize replayer builder 2022-03-24 15:40:01 -07:00
Raul Jordan
3aaf49de42 Merge branch 'develop' into less-fragile-payload-check 2022-03-24 21:12:44 +00:00
prylabs-bulldozer[bot]
66774b5e66 Merge refs/heads/develop into use-correct-pre-state 2022-03-24 20:53:12 +00:00
Raul Jordan
43d53f5c42 include field 2022-03-24 16:15:15 -04:00
terence tsao
053876643e Update optimistic_sync_test.go 2022-03-24 12:57:31 -07:00
Raul Jordan
f30b261abf Merge branch 'develop' into less-fragile-payload-check 2022-03-24 15:54:38 -04:00
Raul Jordan
a9b74ae10a gaz 2022-03-24 15:54:28 -04:00
prylabs-bulldozer[bot]
fc2e4e7d62 Merge refs/heads/develop into use-correct-pre-state 2022-03-24 19:00:58 +00:00
prylabs-bulldozer[bot]
7e83bc0cc4 Merge refs/heads/develop into use-correct-pre-state 2022-03-24 18:32:46 +00:00
terence tsao
f1a2d932be Merge branch 'use-correct-pre-state' of github.com:prysmaticlabs/prysm into use-correct-pre-state 2022-03-24 10:30:00 -07:00
terence tsao
21227e3efd fix tests 2022-03-24 10:29:35 -07:00
terence tsao
a2e86e6fb2 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-24 09:57:10 -07:00
terence tsao
aad1ea3e6c Merge branch 'develop' into use-correct-pre-state 2022-03-24 09:38:48 -07:00
Raul Jordan
56277d5e21 gaz 2022-03-24 12:34:21 -04:00
terence tsao
ee31675b64 revert some changes 2022-03-24 09:33:53 -07:00
Raul Jordan
48045e96c6 less fragile check for execution payload 2022-03-24 12:32:58 -04:00
terence tsao
5aa80098cb Merge branch 'kiln-fx' into use-correct-pre-state 2022-03-24 09:31:55 -07:00
terence tsao
43e6496a91 fix tests 2022-03-24 09:28:02 -07:00
terence tsao
dd3d421dec Fix 2022-03-24 09:15:50 -07:00
terence tsao
9c96f80db9 fix 2022-03-24 09:00:53 -07:00
Potuz
eb78422563 Allow inner protoarray nodes to become VALID 2022-03-24 08:50:59 -03:00
terence tsao
0d85422ebc Fix 2022-03-23 22:15:48 -07:00
terence tsao
293b9761ef Fix 2022-03-23 22:04:43 -07:00
terence tsao
8f9cfb0e0a Cache next epoch 2022-03-23 17:07:07 -07:00
terence tsao
b9945d21dd fix nil 2022-03-22 14:32:34 -07:00
terence tsao
dc965d3a23 Merge branch 'kiln' of github.com:prysmaticlabs/prysm into kiln-payload-cache 2022-03-22 14:32:10 -07:00
terence tsao
7ef03b767d Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln-payload-cache 2022-03-22 14:32:03 -07:00
terence tsao
0a1e93a0b7 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-22 07:28:18 -07:00
terence tsao
876fd5bfb1 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-21 13:03:42 -07:00
terence tsao
1a4c073337 Works run time 2022-03-18 14:10:30 -07:00
terence tsao
11e00f61b1 Merge branch 'valid-pre-merge' into kiln-payload-cache 2022-03-18 13:18:05 -07:00
terence tsao
2c6e3916bb Merge branch 'kiln' of github.com:prysmaticlabs/prysm into kiln-payload-cache 2022-03-18 13:14:29 -07:00
terence tsao
8dd9e0f850 Sync with develop 2022-03-18 13:14:26 -07:00
terence tsao
7529ceda2f Sync with develop 2022-03-18 09:50:33 -07:00
terence tsao
37baf27b5b Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-18 09:44:21 -07:00
terence tsao
c812117e68 valid pre bellatrix block 2022-03-17 14:19:51 -07:00
Potuz
f289f690da mark pre-merge blocks as valid 2022-03-17 14:00:19 -03:00
terence tsao
4d278de34d clean up with develop branch 2022-03-17 07:48:15 -07:00
terence tsao
49e8b9f1fe Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-17 07:44:57 -07:00
terence tsao
17cc44a3fd finish cache and test 2022-03-17 07:43:18 -07:00
terence tsao
0c41a34b2e starting 2022-03-16 21:53:00 -07:00
terence tsao
2716184852 Merge branch 'log-bellatrix-blk' into kiln 2022-03-16 15:43:46 -07:00
terence tsao
d056b21f09 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-16 15:27:07 -07:00
terence tsao
3ea8b79697 Fix notifypayload headroot 2022-03-15 12:40:10 -07:00
terence tsao
5566b2bb15 Fix base fee per gas 2022-03-15 12:02:13 -07:00
terence tsao
52171da8c8 Remove extra SetOptimisticToValid 2022-03-15 07:35:44 -07:00
terence tsao
ab9ece5263 Friendly fee recipient log 2022-03-15 06:44:48 -07:00
terence tsao
29296de277 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-15 06:20:24 -07:00
prylabs-bulldozer[bot]
1d4477c6d4 Merge refs/heads/develop into log-bellatrix-blk 2022-03-15 13:17:56 +00:00
terence tsao
649a345664 Merge branch 'develop' into log-bellatrix-blk 2022-03-14 20:36:15 -07:00
terence tsao
8d13ed12e1 Merge branch 'develop' into log-bellatrix-blk 2022-03-14 15:48:29 -07:00
terence tsao
ce86bfae66 Update log.go 2022-03-14 15:48:04 -07:00
terence tsao
7423c61292 Merge branch 'log-bellatrix-blk' into kiln 2022-03-14 14:30:05 -07:00
terence tsao
8029648849 Log bellatrix fields 2022-03-14 14:29:48 -07:00
terence tsao
ce0bd748eb Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-14 13:19:59 -07:00
terence tsao
9fe7d5833c Merge branch 'handle-pre-state-altair' of github.com:prysmaticlabs/prysm into kiln 2022-03-14 11:36:22 -07:00
terence tsao
3ae6dc9068 Handle pre state Altair with valid payload 2022-03-14 11:31:23 -07:00
terence tsao
c489687336 Handle pre state Altair with valid payload 2022-03-14 11:16:11 -07:00
terence tsao
a37e0f19c3 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-13 08:27:54 -07:00
terence tsao
ee1ee623c0 Update proposer_execution_payload.go 2022-03-12 10:54:06 -08:00
terence tsao
e62cdcea16 Fix AltairCompatible to account for future state version 2022-03-12 08:20:01 -08:00
terence tsao
546939dd33 Update go.mod 2022-03-12 08:18:45 -08:00
terence tsao
f6eb6cd6bf Fix run time 2022-03-10 14:01:18 -08:00
terence tsao
a08c809073 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-10 13:53:01 -08:00
terence tsao
ef0435493d Revert "Update receive_block.go"
This reverts commit 5b4a87c512.
2022-03-10 11:09:17 -08:00
terence tsao
6f15d2b0b2 Update bzl 2022-03-10 11:02:07 -08:00
terence tsao
5b4a87c512 Update receive_block.go 2022-03-09 19:47:11 -08:00
terence tsao
5d7704e3a9 Update process_block.go 2022-03-09 19:41:03 -08:00
terence tsao
c4093f8adb Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-09 19:36:56 -08:00
terence tsao
4724b8430f Sync with develop 2022-03-07 12:23:37 -08:00
terence tsao
55c8922f51 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-07 12:20:36 -08:00
Raul Jordan
7f8d66c919 Merge branch 'develop' into kiln 2022-03-07 18:55:37 +00:00
terence tsao
e2e5a0d86c Return early if ttd is not reached 2022-03-04 05:55:30 -08:00
terence tsao
4acc40ffed Update proposer_execution_payload.go 2022-03-03 15:48:28 -08:00
terence tsao
acc528ff75 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-03 15:45:27 -08:00
terence tsao
87395141e8 Bypass eth1 data checks 2022-03-03 09:36:48 -08:00
terence tsao
a69901bd7c Rm post state check 2022-03-02 16:45:03 -08:00
terence tsao
72d2bc7ce1 Sync with develop 2022-03-02 16:32:07 -08:00
terence tsao
aecd34a1ea Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-02 16:22:28 -08:00
terence tsao
1daae0f5cf Update proposer_execution_payload.go 2022-03-01 09:15:41 -08:00
terence tsao
1583c77bdf Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-01 09:10:10 -08:00
terence tsao
8baf2179b3 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-03-01 07:26:05 -08:00
terence tsao
d41947c60d Fix deadlock, uncomment duty opt sync 2022-02-28 20:12:27 -08:00
terence tsao
d8f9ecbd4d Update process_block.go 2022-02-27 14:43:28 -08:00
terence tsao
9da43e4170 refactor engine calls 2022-02-27 14:42:09 -08:00
terence tsao
d3756ea4ea clean ups 2022-02-25 18:41:42 -08:00
Raul Jordan
66418ec0ff Merge branch 'develop' into kiln 2022-02-25 16:52:46 -05:00
terence tsao
e3963094d4 Sync with develop 2022-02-24 20:26:57 -08:00
terence tsao
a5240cf4b8 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-24 15:09:08 -08:00
terence tsao
78a90af679 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-23 10:11:49 -08:00
terence tsao
b280e796da Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-22 10:00:50 -08:00
terence tsao
1e32cd5596 Sync with develop 2022-02-22 07:50:44 -08:00
terence tsao
72c1720704 Merge branch 'kiln' of github.com:prysmaticlabs/prysm into kiln 2022-02-22 07:45:21 -08:00
terence tsao
b6fd9e5315 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-22 07:45:06 -08:00
Nishant Das
fa1509c970 remove kiln flag here (#10269) 2022-02-22 06:48:39 -08:00
terence tsao
f9fbda80c2 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-21 08:18:37 -08:00
terence tsao
9a56a5d101 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-20 10:38:43 -07:00
Raul Jordan
5e8c49c871 Merge branch 'develop' into kiln 2022-02-19 13:22:44 -07:00
terence tsao
4c7daf7a1f Comment out optimistic status 2022-02-19 11:51:05 -07:00
Raul Jordan
c4454cae78 gaz 2022-02-19 11:32:13 -06:00
Raul Jordan
1cedf4ba9a json 2022-02-18 17:43:49 -07:00
Raul Jordan
4ce3da7ecc set string 2022-02-18 17:28:45 -07:00
Raul Jordan
70a6fc4222 marshal un 2022-02-18 17:21:28 -07:00
Raul Jordan
bb126a9829 str 2022-02-18 18:20:34 -06:00
Raul Jordan
99deee57d1 string total diff 2022-02-18 17:19:44 -07:00
terence tsao
4c23401a3b Log 2022-02-16 11:45:47 -08:00
terence tsao
9636fde1eb Sync 2022-02-16 07:51:34 -08:00
terence tsao
a424f523a1 Update json_marshal_unmarshal.go 2022-02-15 20:26:26 -08:00
terence tsao
68e75d5851 Fix marshal 2022-02-15 15:42:39 -08:00
terence tsao
176ea137ee Merge branch 'payload-pointers' into kiln 2022-02-15 14:55:47 -08:00
terence tsao
b15cd763b6 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-15 14:55:00 -08:00
terence tsao
8e78eae897 Update json_marshal_unmarshal.go 2022-02-15 11:19:44 -08:00
terence tsao
f6883f2aa9 Use pointers 2022-02-15 11:04:19 -08:00
terence tsao
19782d2563 Use pointers 2022-02-15 11:00:58 -08:00
terence tsao
032cf433c5 Use pointers 2022-02-15 11:00:21 -08:00
terence tsao
fa656a86a5 Logs to reproduce 2022-02-15 09:38:31 -08:00
terence tsao
5f414b3e82 Optimistic sync: prysm validator rpcs (#10200) 2022-02-15 07:25:02 -08:00
terence tsao
41b8b1a0f8 Merge branch 'kiln2' into kiln 2022-02-15 06:50:17 -08:00
terence tsao
1d36ecb98d Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-15 06:50:03 -08:00
terence tsao
80cd539297 Rm uncommented 2022-02-15 06:48:33 -08:00
terence tsao
f47b6af910 Done 2022-02-14 22:09:29 -08:00
terence tsao
443df77bb3 c 2022-02-14 18:25:59 -08:00
terence tsao
94fe3884a0 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-14 08:07:12 -08:00
terence tsao
7d6046276d Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-11 18:02:58 -08:00
terence tsao
4f77ad20c8 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kiln 2022-02-10 14:24:28 -08:00
terence tsao
1b5a6d4195 Add back get payload 2022-02-10 14:22:35 -08:00
terence tsao
eae0db383f Clean ups 2022-02-10 12:29:52 -08:00
terence tsao
b56bd9e9d8 Fix build 2022-02-10 12:21:35 -08:00
terence tsao
481d8847c2 Merge branch 'kintsugi' of github.com:prysmaticlabs/prysm into kiln 2022-02-10 08:54:35 -08:00
terence tsao
42d5416658 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-10 08:51:51 -08:00
terence tsao
a1d8833749 Logs and err handling 2022-02-10 08:47:23 -08:00
terence tsao
695389b7bb Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-10 08:04:48 -08:00
Potuz
a67b8610f0 Change optimistic logic (#10194) 2022-02-10 09:59:09 -03:00
terence tsao
29eceba4d2 delete deprecated client, update testnet flag 2022-02-09 16:05:42 -08:00
terence tsao
4c34e5d424 Sync with develop 2022-02-09 15:53:01 -08:00
terence tsao
569375286e Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-09 15:25:28 -08:00
terence tsao
924758a557 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-08 18:28:31 -08:00
terence tsao
eedcb529fd Merge commit '8eaf3919189cd6d5f51904d8e9d74995ab70d4ac' into kintsugi 2022-02-08 07:43:49 -08:00
terence tsao
30e796a4f1 Update proposer.go 2022-02-08 07:29:51 -08:00
terence tsao
ea6ca456e6 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-08 07:29:47 -08:00
terence tsao
4b75b991dd Fix merge transition block validation 2022-02-07 15:11:05 -08:00
Potuz
8eaf391918 allow optimistic sync 2022-02-07 17:35:17 -03:00
terence tsao
cbdb3c9e86 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-02-07 12:02:32 -08:00
Potuz
12754adddc Sync optimistically candidate blocks (#10193) 2022-02-07 07:22:45 -03:00
Potuz
08a5155ee3 Revert "Sync optimistically candidate blocks (#10193)"
This reverts commit f99a0419ef.
2022-02-07 07:20:40 -03:00
Potuz
f99a0419ef Sync optimistically candidate blocks (#10193) 2022-02-07 10:14:25 +00:00
terence tsao
4ad31f9c05 Sync with develop 2022-02-06 19:41:39 -08:00
terence tsao
26876d64d7 Clean ups 2022-02-04 14:20:37 -08:00
terence tsao
3450923661 Sync with develop 2022-02-04 10:08:46 -08:00
terence tsao
aba628b56b Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-28 13:59:31 -08:00
terence tsao
5effb92d11 Update mainnet_config.go 2022-01-27 11:40:45 -08:00
terence tsao
2b55368c99 sync with develop 2022-01-27 11:40:39 -08:00
terence tsao
327903b7bb Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-27 10:35:38 -08:00
terence tsao
77f815a39f Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-24 09:06:44 -08:00
terence tsao
80dc725412 sync with develop 2022-01-14 18:42:45 -08:00
terence tsao
263c18992e Update generate_keys.go 2022-01-13 15:26:54 -08:00
terence tsao
9e220f9052 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-13 15:22:58 -08:00
terence tsao
99878d104c Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-12 10:40:04 -08:00
terence tsao
a870bf7a74 Clean up after sync 2022-01-10 18:50:27 -08:00
terence tsao
dc42ff382f Sync with develop 2022-01-10 11:20:05 -08:00
terence tsao
53b78a38a3 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-10 11:07:23 -08:00
terence tsao
b45826e731 Speed up syncing, hide cosmetic errors 2022-01-04 10:15:40 -08:00
terence tsao
7b59ecac5e Sync with develop, fix payload nil check bug 2022-01-03 07:55:37 -08:00
terence tsao
9149178a9c Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2022-01-03 07:54:56 -08:00
terence tsao
51ef502b04 clean ups 2021-12-23 09:29:26 -08:00
terence tsao
8d891821ee Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-23 08:44:36 -08:00
terence tsao
762863ce6a Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-20 08:20:07 -08:00
terence tsao
41f5fa7524 visibility 2021-12-16 12:41:30 -08:00
terence tsao
09744bac70 correct gossip sizes this time 2021-12-16 11:57:17 -08:00
terence tsao
f5db847237 use merge gossip sizes 2021-12-16 11:15:00 -08:00
terence tsao
8600f70b0b sync with develop 2021-12-16 07:25:02 -08:00
terence tsao
6fe430de44 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-16 07:23:52 -08:00
Zahoor Mohamed
42a5f96d3f ReverseByteOrder function does not mutate the input 2021-12-15 22:19:50 +05:30
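
A minimal sketch of the property this commit describes: the reversal copies its input first (useful when converting the execution payload's big-endian base fee to SSZ's little-endian layout, per #10018 below) so the caller's slice is never mutated. Prysm's actual signature may differ:

// ReverseByteOrder returns a new slice with input's bytes in reverse order.
// The input slice itself is left untouched, so callers may keep using it.
func ReverseByteOrder(input []byte) []byte {
    b := make([]byte, len(input))
    copy(b, input)
    for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
        b[i], b[j] = b[j], b[i]
    }
    return b
}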
Mohamed Zahoor
e7f0fcf202 converting base fee to big endian format (#10018) 2021-12-15 06:41:06 -08:00
terence tsao
5ae564f1bf fix conflicts 2021-12-09 09:01:23 +01:00
terence tsao
719109c219 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-09 08:42:02 +01:00
terence tsao
64533a4b0c Merge branch 'kintsugi' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-08 17:27:01 +01:00
terence tsao
9fecd761d7 latest kintsugi execution api 2021-12-08 17:24:45 +01:00
Zahoor Mohamed
f84c95667c change EP field names 2021-12-08 21:52:03 +05:30
terence tsao
9af081797e Go mod tidy 2021-12-06 09:34:54 +01:00
terence tsao
e4e9f12c8b Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-12-06 09:24:49 +01:00
terence tsao
2f4e8beae6 Sync 2021-12-04 15:40:18 +01:00
terence tsao
81c7b90d26 Sync 2021-12-04 15:30:59 +01:00
Potuz
dd3d65ff18 Add v2 endpoint for merge blocks (#9802)
* Add V2 blocks endpoint for merge blocks

* Update beacon-chain/rpc/apimiddleware/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* go mod

* fix transactions

* Terence's comments

* add missing file

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-12-01 14:09:28 -03:00
terence tsao
ac5a227aeb Fix transactions root 2021-11-29 13:56:58 -08:00
terence tsao
33f4d5c3cc Fix a bug with loading mainnet state 2021-11-29 09:59:41 -08:00
terence tsao
67d7f8baee State pkg cleanup 2021-11-24 11:29:01 -08:00
terence tsao
3c54aef7b1 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-23 15:34:47 -08:00
terence tsao
938c28c42e Fix build 2021-11-23 14:55:31 -08:00
terence tsao
8ddb2c26c4 Merge commit '4858de787558c792b01aae44bc3902859b98fcac' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-23 14:35:39 -08:00
terence tsao
cf0e78c2f6 Handle merge test case for update balance 2021-11-23 09:56:38 -08:00
terence tsao
4c0b262fdc Fix state merge 2021-11-23 09:13:50 -08:00
terence tsao
33e675e204 Update config to devnet1 2021-11-23 08:21:44 -08:00
terence tsao
e599f6a8a1 Fix build 2021-11-22 19:58:00 -08:00
terence tsao
49c9ab9fda Clean up conflicts 2021-11-22 19:40:57 -08:00
terence tsao
f90dec287b Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-22 19:29:07 -08:00
terence tsao
12c36cff9d Update state_trie.go 2021-11-17 08:07:26 -08:00
terence tsao
bc565d9ee6 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-17 08:07:03 -08:00
terence tsao
db67d5bad8 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-15 11:07:42 -08:00
terence tsao
3bc0c2be54 Merge branch 'develop' into kintsugi 2021-11-15 09:42:21 -08:00
terence tsao
1bed9ef749 Sync with develop 2021-11-15 09:41:24 -08:00
terence tsao
ec772beeaf Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-15 09:35:29 -08:00
Mohamed Zahoor
56407dde02 Change Gossip message size and Chunk Size from 1 MB to 10 MB (#9860)
* change gossip size and chunk size after merge

* change ssz to accommodate both changes

* gofmt config file

* add testcase for merge MsgId

* Update beacon-chain/p2p/message_id.go

Change MB to MiB in comment

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* change function name from altairMsgID to postAltairMsgID

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-15 10:37:02 +05:30
terence tsao
445f17881e Fix bad hex conversion 2021-11-12 11:56:22 -08:00
terence tsao
183d40d8f1 Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-11 09:36:26 -08:00
terence tsao
87bc6aa5e5 Manually override nil transaction field. M2 works 2021-11-09 16:06:01 -08:00
terence tsao
5b5065b01d Remove unused merge genesis state gen tool 2021-11-09 11:09:59 -08:00
terence tsao
ee1c567561 Remove secp256k1 2021-11-09 08:43:10 -08:00
terence tsao
ff1416c98d Update Kintsugi consensus implementations (#9872) 2021-11-08 21:26:58 -08:00
terence tsao
471c94031f Update spec test shas 2021-11-08 19:39:13 -08:00
terence tsao
9863fb3d6a All spec tests pass 2021-11-08 19:31:28 -08:00
kasey
f3c2d1a00b Kintsugi ssz (#9867) 2021-11-08 18:42:23 -08:00
terence tsao
5d8879a4df Update Kintsugi engine API (#9865) 2021-11-08 09:56:14 -08:00
terence tsao
abea0a11bc Update WORKSPACE 2021-11-05 12:06:19 -07:00
terence tsao
80ce1603bd Merge branch 'kintsugi' of github.com:prysmaticlabs/prysm into kintsugi 2021-11-03 20:40:22 -07:00
terence tsao
ca478244e0 Add and use TBH_ACTIVATION_EPOCH 2021-11-03 20:39:51 -07:00
terence tsao
8a864b66a1 Add and use 2021-11-03 20:38:40 -07:00
terence tsao
72f3b9e84b Remove extraneous p2p condition 2021-11-03 19:17:12 -07:00
terence tsao
493e95060f Fix gossip and tx size limits for the merge part 1 2021-11-03 17:03:06 -07:00
terence tsao
e7e1ecd72f Update penalty params for Merge 2021-11-03 16:37:17 -07:00
terence tsao
c286ac8b87 Remove gas validations 2021-11-03 14:47:33 -07:00
terence tsao
bde315224c Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-11-03 12:52:50 -07:00
terence tsao
00520705bc Sync with develop 2021-11-02 20:52:33 -07:00
Zahoor Mohamed
c7fcd804d7 all gossip tests passing 2021-10-27 18:48:22 +05:30
terence tsao
985ac2e848 Update htrutils.go 2021-10-24 11:35:59 -07:00
terence tsao
f4a0e98926 Disable genesis ETH1.0 chain header logging 2021-10-19 22:13:59 -07:00
terence tsao
5f93ff10ea Merge: switch from go bindings to raw rpc calls (#9803) 2021-10-19 21:00:11 -07:00
terence tsao
544248f60f Go fmt 2021-10-18 22:38:57 -07:00
terence tsao
3b41968510 Merge branch 'merge-oct' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-18 22:38:21 -07:00
terence tsao
7fc418042a Disable deposit contract lookback 2021-10-18 22:38:09 -07:00
terence tsao
9a03946706 Disable contract lookback 2021-10-18 22:34:50 -07:00
terence tsao
33dd6dd5f2 Use proper receive block path for initial syncing 2021-10-18 21:28:16 -07:00
terence tsao
56542e1958 Correctly upgrade to merge state + object mapping fixes 2021-10-18 17:46:55 -07:00
terence tsao
e82d7b4c0b Use uint64 for ttd 2021-10-18 14:00:45 -07:00
terence tsao
6cb69d8ff0 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-18 09:26:42 -07:00
terence tsao
70b55a0191 Proper upgrade altair to merge state 2021-10-15 12:48:21 -07:00
terence tsao
50f4951194 Various fixes to pass all spec tests for Merge (#9777) 2021-10-14 15:34:31 -07:00
terence tsao
1a14f2368d Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-14 11:52:28 -07:00
terence tsao
bb8cad58f1 Update beacon_block.pb.go 2021-10-13 13:49:16 -07:00
terence tsao
05412c1f0e Update mainnet_config.go 2021-10-13 13:26:48 -07:00
terence tsao
b03441fed8 Fix finding terminal block hash calculation 2021-10-13 11:29:17 -07:00
terence tsao
fa7d7cef69 Merge: support terminal difficulty override (#9769) 2021-10-12 20:40:01 -07:00
terence tsao
1caa6c969f Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-12 09:43:42 -07:00
Kasey Kirkham
eeb7d5bbfb tell bazel about this new file 2021-10-08 13:31:57 -05:00
Kasey Kirkham
d7c7d150b1 separate ExecutionPayload/Header from codegen 2021-10-08 11:06:21 -05:00
Kasey Kirkham
63c4d2eb2b defensive nil check 2021-10-08 09:18:02 -05:00
Kasey Kirkham
9de1f694a0 restoring generated pb field ordering 2021-10-08 08:16:43 -05:00
terence tsao
8a79d06cbd Fix bazel build //... 2021-10-07 15:31:49 -07:00
terence tsao
5290ad93b8 Merge conflict. Sync with upstream 2021-10-07 15:07:29 -07:00
terence tsao
2128208ef7 M2 works with Geth 🎉 2021-10-07 14:57:20 -07:00
Kasey Kirkham
296323719c get rid of codegen garbage 2021-10-07 16:31:35 -05:00
Kasey Kirkham
5e9583ea85 noisy commit, restoring pb field order codegen 2021-10-07 15:59:28 -05:00
Zahoor Mohamed
17196e0f80 changes test cases per ssz changes 2021-10-08 01:39:30 +05:30
kasey
c50d54000d Merge union debugging (#9751) 2021-10-07 10:44:26 -07:00
terence tsao
85b3061d1b Update go commit 2021-10-07 10:10:46 -07:00
terence tsao
0146c5317a Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-07 09:06:55 -07:00
Zahoor Mohamed
fcbc48ffd9 fix finding Transactions size 2021-10-07 14:16:15 +05:30
terence tsao
76ee51af9d Interop merge beacon state 2021-10-06 17:22:47 -07:00
terence tsao
370b0b97ed Fix beacon chain build 2021-10-06 14:41:43 -07:00
terence tsao
990ebd3fe3 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-06 14:34:33 -07:00
Zahoor Mohamed
54449c72e8 Merge branch 'merge-oct' of https://github.com/prysmaticlabs/prysm into merge-oct 2021-10-06 23:53:43 +05:30
Zahoor Mohamed
1dbd0b98eb add merge-specific checks when receiving a block from gossip 2021-10-06 23:53:24 +05:30
terence tsao
09c3896c6b Go fmt 2021-10-06 09:38:24 -07:00
terence tsao
d494845e19 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-06 09:36:13 -07:00
terence tsao
4d0c0f7234 Update todo strings 2021-10-05 14:43:01 -07:00
terence tsao
bfe570b1aa Merge branch 'merge-oct-net' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-05 14:41:24 -07:00
terence tsao
56db696823 Clean up and fix a test 2021-10-05 14:38:09 -07:00
terence tsao
d312e15db8 Clean up misc state store 2021-10-05 14:17:44 -07:00
terence tsao
907d4cf7e6 Clean up validator additions 2021-10-05 14:06:03 -07:00
terence tsao
891353d6ad Clean up beacon chain additions 2021-10-05 11:28:36 -07:00
terence tsao
0adc08660c Rest of the validator changes 2021-10-05 10:18:26 -07:00
terence tsao
de31425dcd Add proposer get execution payload helpers 2021-10-04 16:37:22 -07:00
terence tsao
2094e0f21f Update rpc service and proposer get block 2021-10-04 16:37:01 -07:00
terence tsao
2c6f554500 Update process_block.go 2021-10-04 10:45:56 -07:00
terence tsao
18a1e07711 Update and use forked go-ethereum with catalyst go binding 2021-10-04 10:45:39 -07:00
prestonvanloon
5e432f5aaa Use MariusVanDerWijden go-ethereum fork with latest catalyst updates 2021-10-03 22:05:21 -05:00
prestonvanloon
284e2696cb Merge branch 'rm-bazel-go-ethereum' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-03 21:52:13 -05:00
terence tsao
7547aaa6ce Fix build, update comments 2021-10-03 19:11:43 -07:00
prestonvanloon
953315c2cc fix geth e2e flags 2021-10-03 14:01:26 -05:00
terence tsao
9662d06b08 Update catalyst merge commit 2021-10-03 11:51:12 -07:00
prestonvanloon
ecaea26ace fix geth e2e flags 2021-10-03 13:31:52 -05:00
prestonvanloon
63819e2690 move vendor stuff to third_party so that go mod wont be mad anymore 2021-10-03 13:21:27 -05:00
prestonvanloon
a6d0cd06b3 Remove bazel-go-ethereum, use vendored libraries only 2021-10-03 13:11:50 -05:00
prestonvanloon
2dbe4f5e67 viz improvement 2021-10-03 13:11:26 -05:00
prestonvanloon
2689d6814d Add karalabe/usb 2021-10-03 13:11:16 -05:00
prestonvanloon
69a681ddc0 gaz 2021-10-03 12:29:17 -05:00
prestonvanloon
7f9f1fd36c Check in go-ethereum crypto/sepc256k1 package with proper build rules 2021-10-03 12:27:40 -05:00
terence tsao
57c97eb561 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-10-03 09:42:34 -07:00
terence tsao
f0f94a8193 Handle more version merge cases 2021-10-02 11:43:50 -07:00
Zahoor Mohamed
87b0bf2c2a fix more merge conflicts 2021-10-02 12:27:12 +05:30
Zahoor Mohamed
d8ad317dec fix merge conflicts 2021-10-02 12:19:29 +05:30
terence tsao
ab5f488cf4 Fix spectest merge fork 2021-10-01 16:27:21 -07:00
terence tsao
296d7464ad Add powchain execution methods 2021-10-01 16:07:33 -07:00
terence tsao
221c542e4f Go mod tidy and build 2021-10-01 13:43:57 -07:00
terence tsao
7ad32aaa96 Add execution caller engine interface 2021-10-01 12:59:04 -07:00
terence tsao
3dc0969c0c Point go-ethereum to https://github.com/ethereum/go-ethereum/pull/23607 2021-10-01 08:30:15 -07:00
Zahoor Mohamed
0e18e835c3 req/resp structure has not changed, so no need for a new version 2021-09-30 21:34:19 +05:30
terence tsao
8adfbfc382 Update sync_committee.go 2021-09-30 08:21:40 -07:00
Zahoor Mohamed
68b0b5e0ce add merge in fork watcher 2021-09-30 14:09:18 +05:30
terence tsao
eede309e0f Fix build 2021-09-29 16:26:13 -07:00
terence tsao
b11628dc53 Can configure flags 2021-09-29 16:04:39 -07:00
terence tsao
ea3ae22d3b Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-09-29 14:25:16 -07:00
terence tsao
02bb39ddeb Minor clean up to improve readability 2021-09-29 10:21:00 -07:00
terence tsao
1618c1f55d Fix comment 2021-09-29 07:57:43 -07:00
terence tsao
73c8493fd7 Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-09-28 14:54:24 -07:00
terence tsao
a4f59a4f15 Forkchoice and upgrade changes 2021-09-28 14:53:11 -07:00
Zahoor Mohamed
3c497efdb8 Merge branch 'merge-oct' of https://github.com/prysmaticlabs/prysm into merge-oct-net 2021-09-28 21:58:22 +05:30
Zahoor Mohamed
9f5daafbb7 initial networking code 2021-09-28 20:02:47 +05:30
terence tsao
11d7ffdfa8 Add merge spec tests 2021-09-26 11:07:31 -07:00
terence tsao
c26b3305e6 Resolve conflict 2021-09-25 09:49:53 -07:00
terence tsao
38d8b63fbf Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-09-25 09:15:20 -07:00
terence tsao
aea67405c8 Add upgrade to merge path 2021-09-21 14:34:03 -07:00
terence tsao
57d830f8b3 Add wrapper, cloner and interface 2021-09-21 13:34:10 -07:00
terence tsao
ac4b1ef4ea Merge branch 'develop' of github.com:prysmaticlabs/prysm into merge-oct 2021-09-21 13:07:01 -07:00
terence tsao
1d32119f5a can process execution header 2021-09-20 17:08:53 -07:00
terence tsao
3540cc7b05 Add state v3 2021-09-16 21:31:08 -07:00
terence tsao
191e7767a6 Add beacon block and state protos 2021-09-16 16:15:55 -07:00
213 changed files with 7216 additions and 3447 deletions

View File

@@ -1,8 +1,8 @@
 #!/bin/sh -l
 set -e
-export PATH=$PATH:/usr/local/go/bin
-cd $GITHUB_WORKSPACE
+export PATH="$PATH:/usr/local/go/bin"
+cd "$GITHUB_WORKSPACE"
 cp go.mod go.mod.orig
 cp go.sum go.sum.orig

View File

@@ -1,41 +0,0 @@
-name: Update DAppNodePackages
-on:
-  push:
-    tags:
-      - '*'
-jobs:
-  dappnode-update-beacon-chain:
-    name: Trigger a beacon-chain release
-    runs-on: ubuntu-latest
-    steps:
-      - name: Get latest tag
-        id: get_tag
-        run: echo ::set-output name=TAG::${GITHUB_REF/refs\/tags\//}
-      - name: Send dispatch event to DAppNodePackage-prysm-beacon-chain
-        env:
-          DISPATCH_REPO: dappnode/DAppNodePackage-prysm-beacon-chain
-        run: |
-          curl -v -X POST -u "${{ secrets.PAT_GITHUB }}" \
-            -H "Accept: application/vnd.github.everest-preview+json" \
-            -H "Content-Type: application/json" \
-            --data '{"event_type":"new_release", "client_payload": { "tag":"${{ steps.get_tag.outputs.TAG }}"}}' \
-            https://api.github.com/repos/$DISPATCH_REPO/dispatches
-  dappnode-update-validator:
-    name: Trigger a validator release
-    runs-on: ubuntu-latest
-    steps:
-      - name: Get latest tag
-        id: get_tag
-        run: echo ::set-output name=TAG::${GITHUB_REF/refs\/tags\//}
-      - name: Send dispatch event to DAppNodePackage validator repository
-        env:
-          DISPATCH_REPO: dappnode/DAppNodePackage-prysm-validator
-        run: |
-          curl -v -X POST -u "${{ secrets.PAT_GITHUB }}" \
-            -H "Accept: application/vnd.github.everest-preview+json" \
-            -H "Content-Type: application/json" \
-            --data '{"event_type":"new_release", "client_payload": { "tag":"${{ steps.get_tag.outputs.TAG }}"}}' \
-            https://api.github.com/repos/$DISPATCH_REPO/dispatches

View File

@@ -7,13 +7,12 @@ on:
     branches: [ '*' ]
 
 jobs:
-  check:
-    name: Check
+  formatting:
+    name: Formatting
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v1
+        uses: actions/checkout@v2
       - name: Go mod tidy checker
         id: gomodtidy
@@ -31,15 +30,43 @@ jobs:
         with:
           goimports-path: ./
-      - name: Gosec security scanner
-        uses: securego/gosec@master
+  gosec:
+    name: Gosec scan
+    runs-on: ubuntu-latest
+    env:
+      GO111MODULE: on
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+      - name: Set up Go 1.17
+        uses: actions/setup-go@v3
         with:
-          args: '-exclude=G307 -exclude-dir=crypto/bls/herumi ./...'
+          go-version: 1.17
+      - name: Run Gosec Security Scanner
+        run: | # https://github.com/securego/gosec/issues/469
+          export PATH=$PATH:$(go env GOPATH)/bin
+          go install github.com/securego/gosec/v2/cmd/gosec@latest
+          gosec -exclude=G307 -exclude-dir=crypto/bls/herumi ./...
+  lint:
+    name: Lint
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+      - name: Set up Go 1.17
+        uses: actions/setup-go@v3
+        with:
+          go-version: 1.17
+        id: go
       - name: Golangci-lint
         uses: golangci/golangci-lint-action@v2
         with:
-          args: --print-issued-lines --sort-results --no-config --timeout=10m --disable-all -E deadcode -E errcheck -E gosimple --skip-files=validator/web/site_data.go --skip-dirs=proto
+          args: --print-issued-lines --sort-results --no-config --timeout=10m --disable-all -E deadcode -E errcheck -E gosimple --skip-files=validator/web/site_data.go --skip-dirs=proto --go=1.17
+          version: v1.45.2
+          skip-go-installation: true
   build:
     name: Build
@@ -48,7 +75,7 @@ jobs:
       - name: Set up Go 1.x
         uses: actions/setup-go@v2
         with:
-          go-version: ^1.14
+          go-version: 1.17
         id: go
       - name: Check out code into the Go module directory

View File

@@ -132,6 +132,15 @@ func validHostname(h string) (string, error) {
     return fmt.Sprintf("%s:%s", host, port), nil
 }
 
+// NodeURL returns a human-readable string representation of the beacon node base url.
+func (c *Client) NodeURL() string {
+    u := &url.URL{
+        Scheme: c.scheme,
+        Host: c.host,
+    }
+    return u.String()
+}
+
 func (c *Client) urlForPath(methodPath string) *url.URL {
     u := &url.URL{
         Scheme: c.scheme,
View File

@@ -48,6 +48,7 @@ go_library(
         "//beacon-chain/core/transition:go_default_library",
         "//beacon-chain/db:go_default_library",
         "//beacon-chain/db/filters:go_default_library",
+        "//beacon-chain/db/kv:go_default_library",
         "//beacon-chain/forkchoice:go_default_library",
         "//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
         "//beacon-chain/forkchoice/protoarray:go_default_library",
@@ -57,7 +58,6 @@ go_library(
         "//beacon-chain/operations/voluntaryexits:go_default_library",
         "//beacon-chain/p2p:go_default_library",
         "//beacon-chain/powchain:go_default_library",
-        "//beacon-chain/powchain/engine-api-client/v1:go_default_library",
         "//beacon-chain/state:go_default_library",
         "//beacon-chain/state/stategen:go_default_library",
         "//cmd/beacon-chain/flags:go_default_library",
@@ -107,7 +107,6 @@ go_test(
         "init_test.go",
         "log_test.go",
         "metrics_test.go",
-        "mock_engine_test.go",
         "mock_test.go",
         "optimistic_sync_test.go",
         "pow_block_test.go",
@@ -131,6 +130,7 @@ go_test(
         "//beacon-chain/db/testing:go_default_library",
         "//beacon-chain/p2p:go_default_library",
         "//beacon-chain/powchain:go_default_library",
+        "//beacon-chain/powchain/testing:go_default_library",
         "//beacon-chain/state/stateutil:go_default_library",
         "//beacon-chain/state/v1:go_default_library",
         "//config/fieldparams:go_default_library",
@@ -184,6 +184,7 @@ go_test(
         "//beacon-chain/db/testing:go_default_library",
         "//beacon-chain/p2p:go_default_library",
         "//beacon-chain/powchain:go_default_library",
+        "//beacon-chain/powchain/testing:go_default_library",
         "//config/params:go_default_library",
         "//encoding/bytesutil:go_default_library",
         "//proto/prysm/v1alpha1:go_default_library",

View File

@@ -328,10 +328,10 @@ func (s *Service) IsOptimistic(ctx context.Context) (bool, error) {
     return s.IsOptimisticForRoot(ctx, s.head.root)
 }
 
-// IsOptimisticForRoot takes the root and slot as aguments instead of the current head
+// IsOptimisticForRoot takes the root and slot as arguments instead of the current head
 // and returns true if it is optimistic.
 func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {
-    optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(ctx, root)
+    optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(root)
     if err == nil {
         return optimistic, nil
     }
@@ -358,11 +358,26 @@ func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool,
         return false, nil
     }
 
-    lastValidated, err := s.cfg.BeaconDB.StateSummary(ctx, bytesutil.ToBytes32(validatedCheckpoint.Root))
+    // checkpoint root could be zeros before the first finalized epoch. Use genesis root if the case.
+    lastValidated, err := s.cfg.BeaconDB.StateSummary(ctx, s.ensureRootNotZeros(bytesutil.ToBytes32(validatedCheckpoint.Root)))
     if err != nil {
         return false, err
     }
-    return ss.Slot > lastValidated.Slot, nil
+    if lastValidated == nil {
+        return false, errInvalidNilSummary
+    }
+    if ss.Slot > lastValidated.Slot {
+        return true, nil
+    }
+
+    isCanonical, err := s.IsCanonical(ctx, root)
+    if err != nil {
+        return false, err
+    }
+    // historical non-canonical blocks here are returned as optimistic for safety.
+    return !isCanonical, nil
 }
 
 // SetGenesisTime sets the genesis time of beacon chain.

View File

@@ -9,6 +9,7 @@ import (
     ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
     "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
     "github.com/prysmaticlabs/prysm/testing/require"
+    "github.com/prysmaticlabs/prysm/testing/util"
 )
 
 func TestHeadSlot_DataRace(t *testing.T) {
@@ -16,10 +17,13 @@ func TestHeadSlot_DataRace(t *testing.T) {
     s := &Service{
         cfg: &config{BeaconDB: beaconDB},
     }
+    b, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
+    require.NoError(t, err)
+    st, _ := util.DeterministicGenesisState(t, 1)
     wait := make(chan struct{})
     go func() {
         defer close(wait)
-        require.NoError(t, s.saveHead(context.Background(), [32]byte{}))
+        require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
     }()
     s.HeadSlot()
     <-wait
@@ -31,12 +35,16 @@ func TestHeadRoot_DataRace(t *testing.T) {
         cfg:  &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
         head: &head{root: [32]byte{'A'}},
     }
+    b, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
+    require.NoError(t, err)
     wait := make(chan struct{})
+    st, _ := util.DeterministicGenesisState(t, 1)
     go func() {
         defer close(wait)
-        require.NoError(t, s.saveHead(context.Background(), [32]byte{}))
+        require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
     }()
-    _, err := s.HeadRoot(context.Background())
+    _, err = s.HeadRoot(context.Background())
     require.NoError(t, err)
     <-wait
 }
@@ -49,10 +57,14 @@ func TestHeadBlock_DataRace(t *testing.T) {
         cfg:  &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
         head: &head{block: wsb},
     }
+    b, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
+    require.NoError(t, err)
     wait := make(chan struct{})
+    st, _ := util.DeterministicGenesisState(t, 1)
     go func() {
         defer close(wait)
-        require.NoError(t, s.saveHead(context.Background(), [32]byte{}))
+        require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
     }()
     _, err = s.HeadBlock(context.Background())
     require.NoError(t, err)
@@ -64,12 +76,16 @@ func TestHeadState_DataRace(t *testing.T) {
     s := &Service{
         cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
     }
+    b, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
+    require.NoError(t, err)
     wait := make(chan struct{})
+    st, _ := util.DeterministicGenesisState(t, 1)
     go func() {
         defer close(wait)
-        require.NoError(t, s.saveHead(context.Background(), [32]byte{}))
+        require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
     }()
-    _, err := s.HeadState(context.Background())
+    _, err = s.HeadState(context.Background())
     require.NoError(t, err)
     <-wait
 }


@@ -459,7 +459,7 @@ func TestService_IsOptimisticForRoot_DB_ProtoArray(t *testing.T) {
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 11
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(optimisticBlock)
@@ -477,18 +477,39 @@ func TestService_IsOptimisticForRoot_DB_ProtoArray(t *testing.T) {
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
_, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.ErrorContains(t, "nil summary returned from the DB", err)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err := c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
cp := &ethpb.Checkpoint{
Epoch: 1,
Root: validatedRoot[:],
}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, validatedRoot))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, cp))
validated, err := c.IsOptimisticForRoot(ctx, validatedRoot)
require.NoError(t, err)
require.Equal(t, false, validated)
// Before the first finalized epoch, the finalized root could be zeros.
validatedCheckpoint = &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, br))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
}
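Both the ProtoArray and DoublyLinkedTree variants of this test exercise the zero-root guard: before the first finalized epoch the stored checkpoint root is all zeros and must be resolved to the genesis/origin block root before any DB lookup. A sketch of such a guard, assuming the service caches its origin block root (the field name here is an assumption):
func (s *Service) ensureRootNotZeros(root [32]byte) [32]byte {
	if root == params.BeaconConfig().ZeroHash {
		return s.originBlockRoot // assumed field holding the origin/genesis root
	}
	return root
}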
func TestService_IsOptimisticForRoot__DB_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimisticForRoot_DB_DoublyLinkedTree(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
@@ -503,7 +524,71 @@ func TestService_IsOptimisticForRoot__DB_DoublyLinkedTree(t *testing.T) {
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 11
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(optimisticBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(validatedBlock)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
_, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.ErrorContains(t, "nil summary returned from the DB", err)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err := c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
cp := &ethpb.Checkpoint{
Epoch: 1,
Root: validatedRoot[:],
}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, validatedRoot))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, cp))
validated, err := c.IsOptimisticForRoot(ctx, validatedRoot)
require.NoError(t, err)
require.Equal(t, false, validated)
// Before the first finalized epoch, the finalized root could be zeros.
validatedCheckpoint = &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, br))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
}
func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New(0, 0)}, head: &head{slot: 101, root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wsb))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err = wrapper.WrappedSignedBeaconBlock(optimisticBlock)
@@ -529,5 +614,6 @@ func TestService_IsOptimisticForRoot__DB_DoublyLinkedTree(t *testing.T) {
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
validated, err := c.IsOptimisticForRoot(ctx, validatedRoot)
require.NoError(t, err)
require.Equal(t, false, validated)
require.Equal(t, true, validated)
}


@@ -13,7 +13,9 @@ var (
errInvalidNilSummary = errors.New("nil summary returned from the DB")
// errNilParentInDB is returned when a nil parent block is returned from the DB.
errNilParentInDB = errors.New("nil parent block in DB")
// errWrongBlockCound is returned when the wrong number of blocks or
// errWrongBlockCount is returned when the wrong number of blocks or
// block roots is used
errWrongBlockCount = errors.New("wrong number of blocks or block roots")
// block is not a valid optimistic candidate block
errNotOptimisticCandidate = errors.New("block is not suitable for optimistic sync")
)


@@ -24,8 +24,9 @@ import (
"go.opencensus.io/trace"
)
// UpdateHeadWithBalances updates the beacon state head after getting justified balanced from cache.
func (s *Service) UpdateHeadWithBalances(ctx context.Context) error {
// UpdateAndSaveHeadWithBalances updates the beacon state head after getting justified balances from cache.
// This function is only used in spec tests; it saves the head after updating it.
func (s *Service) UpdateAndSaveHeadWithBalances(ctx context.Context) error {
cp := s.store.JustifiedCheckpt()
if cp == nil {
return errors.New("no justified checkpoint")
@@ -35,8 +36,19 @@ func (s *Service) UpdateHeadWithBalances(ctx context.Context) error {
msg := fmt.Sprintf("could not read balances for state w/ justified checkpoint %#x", cp.Root)
return errors.Wrap(err, msg)
}
return s.updateHead(ctx, balances)
headRoot, err := s.updateHead(ctx, balances)
if err != nil {
return errors.Wrap(err, "could not update head")
}
headBlock, err := s.cfg.BeaconDB.Block(ctx, headRoot)
if err != nil {
return err
}
headState, err := s.cfg.StateGen.StateByRoot(ctx, headRoot)
if err != nil {
return errors.Wrap(err, "could not retrieve head state in DB")
}
return s.saveHead(ctx, headRoot, headBlock, headState)
}
// This defines the current chain service's view of head.
@@ -49,18 +61,18 @@ type head struct {
// Determines the head from the fork choice service and saves its new data
// (head root, head block, and head state) to the local service cache.
func (s *Service) updateHead(ctx context.Context, balances []uint64) error {
func (s *Service) updateHead(ctx context.Context, balances []uint64) ([32]byte, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.updateHead")
defer span.End()
// Get head from the fork choice service.
f := s.store.FinalizedCheckpt()
if f == nil {
return errNilFinalizedInStore
return [32]byte{}, errNilFinalizedInStore
}
j := s.store.JustifiedCheckpt()
if j == nil {
return errNilJustifiedInStore
return [32]byte{}, errNilJustifiedInStore
}
// To get head before the first justified epoch, the fork choice will start with origin root
// instead of zero hashes.
@@ -76,7 +88,7 @@ func (s *Service) updateHead(ctx context.Context, balances []uint64) error {
if !s.cfg.ForkChoiceStore.HasNode(headStartRoot) {
jb, err := s.cfg.BeaconDB.Block(ctx, headStartRoot)
if err != nil {
return err
return [32]byte{}, err
}
if features.Get().EnableForkChoiceDoublyLinkedTree {
s.cfg.ForkChoiceStore = doublylinkedtree.New(j.Epoch, f.Epoch)
@@ -84,22 +96,16 @@ func (s *Service) updateHead(ctx context.Context, balances []uint64) error {
s.cfg.ForkChoiceStore = protoarray.New(j.Epoch, f.Epoch, bytesutil.ToBytes32(f.Root))
}
if err := s.insertBlockToForkChoiceStore(ctx, jb.Block(), headStartRoot, f, j); err != nil {
return err
return [32]byte{}, err
}
}
headRoot, err := s.cfg.ForkChoiceStore.Head(ctx, j.Epoch, headStartRoot, balances, f.Epoch)
if err != nil {
return err
}
// Save head to the local service cache.
return s.saveHead(ctx, headRoot)
return s.cfg.ForkChoiceStore.Head(ctx, j.Epoch, headStartRoot, balances, f.Epoch)
}
// This saves head info to the local service cache; it also saves the
// new head root to the DB.
func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
func (s *Service) saveHead(ctx context.Context, headRoot [32]byte, headBlock block.SignedBeaconBlock, headState state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.saveHead")
defer span.End()
@@ -111,6 +117,12 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
if headRoot == bytesutil.ToBytes32(r) {
return nil
}
if err := helpers.BeaconBlockIsNil(headBlock); err != nil {
return err
}
if headState == nil || headState.IsNil() {
return errors.New("cannot save nil head state")
}
// If the head state is not available, just return nil.
// There's nothing to cache
@@ -118,31 +130,13 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
return nil
}
// Get the new head block from DB.
newHeadBlock, err := s.cfg.BeaconDB.Block(ctx, headRoot)
if err != nil {
return err
}
if err := helpers.BeaconBlockIsNil(newHeadBlock); err != nil {
return err
}
// Get the new head state from cached state or DB.
newHeadState, err := s.cfg.StateGen.StateByRoot(ctx, headRoot)
if err != nil {
return errors.Wrap(err, "could not retrieve head state in DB")
}
if newHeadState == nil || newHeadState.IsNil() {
return errors.New("cannot save nil head state")
}
// A chain re-org occurred, so we fire an event notifying the rest of the services.
headSlot := s.HeadSlot()
newHeadSlot := newHeadBlock.Block().Slot()
newHeadSlot := headBlock.Block().Slot()
oldHeadRoot := s.headRoot()
oldStateRoot := s.headBlock().Block().StateRoot()
newStateRoot := newHeadBlock.Block().StateRoot()
if bytesutil.ToBytes32(newHeadBlock.Block().ParentRoot()) != bytesutil.ToBytes32(r) {
newStateRoot := headBlock.Block().StateRoot()
if bytesutil.ToBytes32(headBlock.Block().ParentRoot()) != bytesutil.ToBytes32(r) {
log.WithFields(logrus.Fields{
"newSlot": fmt.Sprintf("%d", newHeadSlot),
"oldSlot": fmt.Sprintf("%d", headSlot),
@@ -174,7 +168,7 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
}
// Cache the new head info.
s.setHead(headRoot, newHeadBlock, newHeadState)
s.setHead(headRoot, headBlock, headState)
// Save the new head root to DB.
if err := s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, headRoot); err != nil {
@@ -184,7 +178,7 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
// Forward an event capturing a new chain head over a common event feed
// done in a goroutine to avoid blocking the critical runtime main routine.
go func() {
if err := s.notifyNewHeadEvent(ctx, newHeadSlot, newHeadState, newStateRoot, headRoot[:]); err != nil {
if err := s.notifyNewHeadEvent(ctx, newHeadSlot, headState, newStateRoot, headRoot[:]); err != nil {
log.WithError(err).Error("Could not notify event feed of new chain head")
}
}()
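Net effect on head.go: updateHead is now a pure computation that returns the head root, while saveHead takes the block and state from the caller instead of re-reading them, so the cached head triple cannot drift from what the caller resolved. A condensed sketch of the new contract (re-org detection and event plumbing elided):
func (s *Service) saveHead(ctx context.Context, root [32]byte,
	blk block.SignedBeaconBlock, st state.BeaconState) error {
	if err := helpers.BeaconBlockIsNil(blk); err != nil {
		return err
	}
	if st == nil || st.IsNil() {
		return errors.New("cannot save nil head state")
	}
	// Cache the (root, block, state) triple, then persist the head root.
	s.setHead(root, blk, st)
	return s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, root)
}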


@@ -9,6 +9,8 @@ import (
types "github.com/prysmaticlabs/eth2-types"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
ethpbv1 "github.com/prysmaticlabs/prysm/proto/eth/v1"
@@ -27,8 +29,10 @@ func TestSaveHead_Same(t *testing.T) {
r := [32]byte{'A'}
service.head = &head{slot: 0, root: r}
require.NoError(t, service.saveHead(context.Background(), r))
b, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, service.saveHead(context.Background(), r, b, st))
assert.Equal(t, types.Slot(0), service.headSlot(), "Head did not stay the same")
assert.Equal(t, r, service.headRoot(), "Head did not stay the same")
}
@@ -66,7 +70,7 @@ func TestSaveHead_Different(t *testing.T) {
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
assert.Equal(t, types.Slot(1), service.HeadSlot(), "Head did not change")
@@ -112,7 +116,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
assert.Equal(t, types.Slot(1), service.HeadSlot(), "Head did not change")
@@ -154,8 +158,10 @@ func TestUpdateHead_MissingJustifiedRoot(t *testing.T) {
service.store.SetJustifiedCheckpt(&ethpb.Checkpoint{Root: r[:]})
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{})
service.store.SetBestJustifiedCheckpt(&ethpb.Checkpoint{})
require.NoError(t, service.updateHead(context.Background(), []uint64{}))
headRoot, err := service.updateHead(context.Background(), []uint64{})
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, service.saveHead(context.Background(), headRoot, wsb, st))
}
func Test_notifyNewHeadEvent(t *testing.T) {
@@ -279,3 +285,43 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
atts := b.Block.Body.Attestations
require.DeepNotSSZEqual(t, atts, savedAtts)
}
func TestUpdateHead_noSavedChanges(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New(0, 0)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
bellatrixBlk, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlockBellatrix())
require.NoError(t, err)
bellatrixBlkRoot, err := bellatrixBlk.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, bellatrixBlk))
fcp := &ethpb.Checkpoint{
Root: bellatrixBlkRoot[:],
Epoch: 1,
}
service.store.SetFinalizedCheckpt(fcp)
service.store.SetJustifiedCheckpt(fcp)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bellatrixBlkRoot))
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
require.NoError(t, beaconDB.SaveState(ctx, bellatrixState, bellatrixBlkRoot))
service.cfg.StateGen.SaveFinalizedState(0, bellatrixBlkRoot, bellatrixState)
headRoot := service.headRoot()
require.Equal(t, [32]byte{}, headRoot)
newRoot, err := service.updateHead(ctx, []uint64{1, 2})
require.NoError(t, err)
require.NotEqual(t, headRoot, newRoot)
require.Equal(t, headRoot, service.headRoot())
}


@@ -1,49 +0,0 @@
package blockchain
import (
"context"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
)
type mockEngineService struct {
newPayloadError error
forkchoiceError error
blks map[[32]byte]*enginev1.ExecutionBlock
}
func (m *mockEngineService) NewPayload(context.Context, *enginev1.ExecutionPayload) ([]byte, error) {
return nil, m.newPayloadError
}
func (m *mockEngineService) ForkchoiceUpdated(context.Context, *enginev1.ForkchoiceState, *enginev1.PayloadAttributes) (*enginev1.PayloadIDBytes, []byte, error) {
return nil, nil, m.forkchoiceError
}
func (*mockEngineService) GetPayloadV1(
_ context.Context, _ enginev1.PayloadIDBytes,
) *enginev1.ExecutionPayload {
return nil
}
func (*mockEngineService) GetPayload(context.Context, [8]byte) (*enginev1.ExecutionPayload, error) {
return nil, nil
}
func (*mockEngineService) ExchangeTransitionConfiguration(context.Context, *enginev1.TransitionConfiguration) error {
return nil
}
func (*mockEngineService) LatestExecutionBlock(context.Context) (*enginev1.ExecutionBlock, error) {
return nil, nil
}
func (m *mockEngineService) ExecutionBlockByHash(_ context.Context, hash common.Hash) (*enginev1.ExecutionBlock, error) {
blk, ok := m.blks[common.BytesToHash(hash.Bytes())]
if !ok {
return nil, errors.New("block not found")
}
return blk, nil
}


@@ -86,9 +86,9 @@ func TestService_newSlot(t *testing.T) {
for _, test := range tests {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
store := store.New(test.args.justified, test.args.finalized)
store.SetBestJustifiedCheckpt(test.args.bestJustified)
service.store = store
s := store.New(test.args.justified, test.args.finalized)
s.SetBestJustifiedCheckpt(test.args.bestJustified)
service.store = s
require.NoError(t, service.NewSlot(ctx, test.args.slot))
if test.args.shouldEqual {


@@ -5,14 +5,21 @@ import (
"fmt"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -20,7 +27,7 @@ import (
// notifyForkchoiceUpdate signals the execution engine of fork choice updates. The execution engine should:
// 1. Re-organize the execution payload chain and corresponding state to make head_block_hash the head.
// 2. Apply finality to the execution state: it irreversibly persists the chain of all execution payloads and corresponding state, up to and including finalized_block_hash.
func (s *Service) notifyForkchoiceUpdate(ctx context.Context, headBlk block.BeaconBlock, headRoot [32]byte, finalizedRoot [32]byte) (*enginev1.PayloadIDBytes, error) {
func (s *Service) notifyForkchoiceUpdate(ctx context.Context, headState state.BeaconState, headBlk block.BeaconBlock, headRoot [32]byte, finalizedRoot [32]byte) (*enginev1.PayloadIDBytes, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyForkchoiceUpdate")
defer span.End()
@@ -43,6 +50,12 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, headBlk block.Beac
if err != nil {
return nil, errors.Wrap(err, "could not get finalized block")
}
if finalizedBlock == nil || finalizedBlock.IsNil() {
finalizedBlock = s.getInitSyncBlock(s.ensureRootNotZeros(finalizedRoot))
if finalizedBlock == nil || finalizedBlock.IsNil() {
return nil, errors.Errorf("finalized block with root %#x does not exist in the db or our cache", s.ensureRootNotZeros(finalizedRoot))
}
}
var finalizedHash []byte
if blocks.IsPreBellatrixVersion(finalizedBlock.Block().Version()) {
finalizedHash = params.BeaconConfig().ZeroHash[:]
@@ -60,11 +73,16 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, headBlk block.Beac
FinalizedBlockHash: finalizedHash,
}
// payload attribute is only required when requesting payload, here we are just updating fork choice, so it is nil.
payloadID, _, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, nil /*payload attribute*/)
nextSlot := s.CurrentSlot() + 1 // Cache payload ID for next slot proposer.
hasAttr, attr, proposerId, err := s.getPayloadAttribute(ctx, headState, nextSlot)
if err != nil {
return nil, errors.Wrap(err, "could not get payload attribute")
}
payloadID, _, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, attr)
if err != nil {
switch err {
case v1.ErrAcceptedSyncingPayloadStatus:
case powchain.ErrAcceptedSyncingPayloadStatus:
log.WithFields(logrus.Fields{
"headSlot": headBlk.Slot(),
"headHash": fmt.Sprintf("%#x", bytesutil.Trunc(headPayload.BlockHash)),
@@ -78,67 +96,69 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, headBlk block.Beac
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, headRoot); err != nil {
return nil, errors.Wrap(err, "could not set block to valid")
}
if hasAttr { // If the forkchoice update call has an attribute, update the proposer payload ID cache.
var pId [8]byte
copy(pId[:], payloadID[:])
s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pId)
}
return payloadID, nil
}
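With attributes attached, a successful engine_forkchoiceUpdated response carries a payload ID that must be remembered for the next slot's proposer. The handoff, sketched with an explicit nil guard added for illustration:
// Remember the engine-issued payload ID under the next slot so the proposer
// can later fetch the built payload via engine_getPayload.
if hasAttr && payloadID != nil {
	var pID [8]byte
	copy(pID[:], payloadID[:])
	s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pID)
}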
// notifyNewPayload signals the execution engine on a new payload
// notifyNewPayload signals the execution engine on a new payload.
// It returns true if the execution engine has returned VALID for the block.
func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion, postStateVersion int,
preStateHeader, postStateHeader *ethpb.ExecutionPayloadHeader, blk block.SignedBeaconBlock, root [32]byte) error {
preStateHeader, postStateHeader *ethpb.ExecutionPayloadHeader, blk block.SignedBeaconBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()
// Execution payloads are only supported in Bellatrix and beyond. Pre-merge
// blocks are never optimistic.
if blocks.IsPreBellatrixVersion(postStateVersion) {
return s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, root)
return true, nil
}
if err := helpers.BeaconBlockIsNil(blk); err != nil {
return err
return false, err
}
body := blk.Block().Body()
enabled, err := blocks.IsExecutionEnabledUsingHeader(postStateHeader, body)
if err != nil {
return errors.Wrap(err, "could not determine if execution is enabled")
return false, errors.Wrap(err, "could not determine if execution is enabled")
}
if !enabled {
return s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, root)
return true, nil
}
payload, err := body.ExecutionPayload()
if err != nil {
return errors.Wrap(err, "could not get execution payload")
return false, errors.Wrap(err, "could not get execution payload")
}
_, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload)
if err != nil {
switch err {
case v1.ErrAcceptedSyncingPayloadStatus:
case powchain.ErrAcceptedSyncingPayloadStatus:
log.WithFields(logrus.Fields{
"slot": blk.Block().Slot(),
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash)),
}).Info("Called new payload with optimistic block")
return nil
return false, nil
default:
return errors.Wrap(err, "could not validate execution payload from execution engine")
return false, errors.Wrap(err, "could not validate execution payload from execution engine")
}
}
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, root); err != nil {
return errors.Wrap(err, "could not set optimistic status")
}
// During the transition event, the transition block should be verified for sanity.
if blocks.IsPreBellatrixVersion(preStateVersion) {
// Handle case where pre-state is Altair but block contains payload.
// To reach here, the block must have contained a valid payload.
return s.validateMergeBlock(ctx, blk)
return true, s.validateMergeBlock(ctx, blk)
}
atTransition, err := blocks.IsMergeTransitionBlockUsingPreStatePayloadHeader(preStateHeader, body)
if err != nil {
return errors.Wrap(err, "could not check if merge block is terminal")
return true, errors.Wrap(err, "could not check if merge block is terminal")
}
if !atTransition {
return nil
return true, nil
}
return s.validateMergeBlock(ctx, blk)
return true, s.validateMergeBlock(ctx, blk)
}
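The new bool return encodes the engine's verdict for callers. A sketch of that mapping, using the same sentinel comparison style as the code above:
// payloadVerdict folds an engine NewPayload error into the (valid, error)
// pair notifyNewPayload now returns: SYNCING/ACCEPTED is tolerated as an
// optimistic import, anything else is fatal for the block.
func payloadVerdict(err error) (bool, error) {
	switch err {
	case nil:
		return true, nil
	case powchain.ErrAcceptedSyncingPayloadStatus:
		return false, nil // optimistic import only
	default:
		return false, errors.Wrap(err, "could not validate execution payload from execution engine")
	}
}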
// optimisticCandidateBlock returns true if this block can be optimistically synced.
@@ -148,10 +168,6 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion, postSta
// if is_execution_block(opt_store.blocks[block.parent_root]):
// return True
//
// justified_root = opt_store.block_states[opt_store.head_block_root].current_justified_checkpoint.root
// if is_execution_block(opt_store.blocks[justified_root]):
// return True
//
// if block.slot + SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY <= current_slot:
// return True
//
@@ -173,17 +189,54 @@ func (s *Service) optimisticCandidateBlock(ctx context.Context, blk block.Beacon
if err != nil {
return false, err
}
if parentIsExecutionBlock {
return true, nil
return parentIsExecutionBlock, nil
}
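With the justified-checkpoint branch dropped from the spec comment above, the candidate rule reduces to two conditions. A sketch of the reduced rule (the config field name for SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY is an assumption here):
// A block may be imported optimistically when its parent already carries an
// execution payload, or when the block is old enough relative to the clock.
func optimisticCandidate(parentIsExecutionBlock bool, blkSlot, currentSlot types.Slot) bool {
	if parentIsExecutionBlock {
		return true
	}
	return blkSlot+params.BeaconConfig().SafeSlotsToImportOptimistically <= currentSlot
}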
// getPayloadAttribute returns the payload attributes for the given state and slot.
// The attribute is required to initiate a payload build process in the context of an `engine_forkchoiceUpdated` call.
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot types.Slot) (bool, *enginev1.PayloadAttributes, types.ValidatorIndex, error) {
proposerID, _, ok := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(slot)
if !ok { // There's no need to build an attribute if there is no proposer for the slot.
return false, nil, 0, nil
}
j := s.store.JustifiedCheckpt()
if j == nil {
return false, errNilJustifiedInStore
}
jBlock, err := s.cfg.BeaconDB.Block(ctx, bytesutil.ToBytes32(j.Root))
// Get previous randao.
st = st.Copy()
st, err := transition.ProcessSlotsIfPossible(ctx, st, slot)
if err != nil {
return false, err
return false, nil, 0, err
}
return blocks.IsExecutionBlock(jBlock.Block().Body())
prevRando, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
return false, nil, 0, err
}
// Get fee recipient.
feeRecipient := params.BeaconConfig().DefaultFeeRecipient
recipient, err := s.cfg.BeaconDB.FeeRecipientByValidatorID(ctx, proposerID)
switch {
case errors.Is(err, kv.ErrNotFoundFeeRecipient):
if feeRecipient.String() == fieldparams.EthBurnAddressHex {
logrus.WithFields(logrus.Fields{
"validatorIndex": proposerID,
"burnAddress": fieldparams.EthBurnAddressHex,
}).Error("Fee recipient not set. Using burn address")
}
case err != nil:
return false, nil, 0, errors.Wrap(err, "could not get fee recipient in db")
default:
feeRecipient = recipient
}
// Get timestamp.
t, err := slots.ToTime(uint64(s.genesisTime.Unix()), slot)
if err != nil {
return false, nil, 0, err
}
attr := &enginev1.PayloadAttributes{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
}
return true, attr, proposerID, nil
}
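Caller-side, the builder is a quiet no-op on a proposer-cache miss and yields full attributes on a hit; a usage sketch from the forkchoice path:
hasAttr, attr, proposerID, err := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1)
if err != nil {
	return errors.Wrap(err, "could not get payload attribute")
}
if hasAttr {
	// attr carries the slot timestamp, the previous randao mix, and the
	// proposer's configured (or burn-address fallback) fee recipient.
	log.WithField("proposerIndex", proposerID).Info("Requesting payload build")
}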


@@ -5,10 +5,14 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
engine "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
mockPOW "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
@@ -23,6 +27,7 @@ import (
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/prysmaticlabs/prysm/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func Test_NotifyForkchoiceUpdate(t *testing.T) {
@@ -43,8 +48,13 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
service, err := NewService(ctx, opts...)
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
state: st,
}
require.NoError(t, err)
require.NoError(t, fcs.InsertOptimisticBlock(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, 0, 0))
@@ -133,7 +143,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
require.NoError(t, err)
return b
}(),
newForkchoiceErr: engine.ErrAcceptedSyncingPayloadStatus,
newForkchoiceErr: powchain.ErrAcceptedSyncingPayloadStatus,
finalizedRoot: bellatrixBlkRoot,
},
{
@@ -147,7 +157,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
require.NoError(t, err)
return b
}(),
newForkchoiceErr: engine.ErrInvalidPayloadStatus,
newForkchoiceErr: powchain.ErrInvalidPayloadStatus,
finalizedRoot: bellatrixBlkRoot,
errString: "could not notify forkchoice update from execution engine: payload status is INVALID",
},
@@ -155,9 +165,9 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
engine := &mockEngineService{forkchoiceError: tt.newForkchoiceErr}
service.cfg.ExecutionEngineCaller = engine
_, err := service.notifyForkchoiceUpdate(ctx, tt.blk, service.headRoot(), tt.finalizedRoot)
service.cfg.ExecutionEngineCaller = &mockPOW.EngineClient{ErrForkchoiceUpdated: tt.newForkchoiceErr}
st, _ := util.DeterministicGenesisState(t, 1)
_, err := service.notifyForkchoiceUpdate(ctx, st, tt.blk, service.headRoot(), tt.finalizedRoot)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
} else {
@@ -203,49 +213,56 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, err)
tests := []struct {
name string
preState state.BeaconState
postState state.BeaconState
blk block.SignedBeaconBlock
newPayloadErr error
errString string
name string
preState state.BeaconState
postState state.BeaconState
isValidPayload bool
blk block.SignedBeaconBlock
newPayloadErr error
errString string
}{
{
name: "phase 0 post state",
postState: phase0State,
preState: phase0State,
name: "phase 0 post state",
postState: phase0State,
preState: phase0State,
isValidPayload: true,
},
{
name: "altair post state",
postState: altairState,
preState: altairState,
name: "altair post state",
postState: altairState,
preState: altairState,
isValidPayload: true,
},
{
name: "nil beacon block",
postState: bellatrixState,
preState: bellatrixState,
errString: "signed beacon block can't be nil",
name: "nil beacon block",
postState: bellatrixState,
preState: bellatrixState,
errString: "signed beacon block can't be nil",
isValidPayload: false,
},
{
name: "new payload with optimistic block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
newPayloadErr: engine.ErrAcceptedSyncingPayloadStatus,
name: "new payload with optimistic block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
newPayloadErr: powchain.ErrAcceptedSyncingPayloadStatus,
isValidPayload: false,
},
{
name: "new payload with invalid block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
newPayloadErr: engine.ErrInvalidPayloadStatus,
errString: "could not validate execution payload from execution engine: payload status is INVALID",
name: "new payload with invalid block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
newPayloadErr: powchain.ErrInvalidPayloadStatus,
errString: "could not validate execution payload from execution engine: payload status is INVALID",
isValidPayload: false,
},
{
name: "altair pre state, altair block",
postState: bellatrixState,
preState: altairState,
blk: altairBlk,
name: "altair pre state, altair block",
postState: bellatrixState,
preState: altairState,
blk: altairBlk,
isValidPayload: true,
},
{
name: "altair pre state, happy case",
@@ -265,13 +282,15 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, err)
return b
}(),
isValidPayload: true,
},
{
name: "could not get merge block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
errString: "could not get merge block parent hash and total difficulty",
name: "could not get merge block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
errString: "could not get merge block parent hash and total difficulty",
isValidPayload: false,
},
{
name: "not at merge transition",
@@ -298,13 +317,15 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, err)
return b
}(),
isValidPayload: true,
},
{
name: "could not get merge block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
errString: "could not get merge block parent hash and total difficulty",
name: "could not get merge block",
postState: bellatrixState,
preState: bellatrixState,
blk: bellatrixBlk,
errString: "could not get merge block parent hash and total difficulty",
isValidPayload: false,
},
{
name: "happy case",
@@ -324,20 +345,21 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, err)
return b
}(),
isValidPayload: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
engine := &mockEngineService{newPayloadError: tt.newPayloadErr, blks: map[[32]byte]*v1.ExecutionBlock{}}
engine.blks[[32]byte{'a'}] = &v1.ExecutionBlock{
e := &mockPOW.EngineClient{ErrNewPayload: tt.newPayloadErr, BlockByHashMap: map[[32]byte]*v1.ExecutionBlock{}}
e.BlockByHashMap[[32]byte{'a'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
TotalDifficulty: "0x2",
}
engine.blks[[32]byte{'b'}] = &v1.ExecutionBlock{
e.BlockByHashMap[[32]byte{'b'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength),
TotalDifficulty: "0x1",
}
service.cfg.ExecutionEngineCaller = engine
service.cfg.ExecutionEngineCaller = e
var payload *ethpb.ExecutionPayloadHeader
if tt.preState.Version() == version.Bellatrix {
payload, err = tt.preState.LatestExecutionPayloadHeader()
@@ -347,11 +369,12 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 0, root, root, params.BeaconConfig().ZeroHash, 0, 0))
postVersion, postHeader, err := getStateVersionAndPayload(tt.postState)
require.NoError(t, err)
err = service.notifyNewPayload(ctx, tt.preState.Version(), postVersion, payload, postHeader, tt.blk, root)
isValidPayload, err := service.notifyNewPayload(ctx, tt.preState.Version(), postVersion, payload, postHeader, tt.blk)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
} else {
require.NoError(t, err)
require.Equal(t, tt.isValidPayload, isValidPayload)
}
})
}
@@ -383,27 +406,23 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
require.NoError(t, err)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
engine := &mockEngineService{blks: map[[32]byte]*v1.ExecutionBlock{}}
engine.blks[[32]byte{'a'}] = &v1.ExecutionBlock{
e := &mockPOW.EngineClient{BlockByHashMap: map[[32]byte]*v1.ExecutionBlock{}}
e.BlockByHashMap[[32]byte{'a'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
TotalDifficulty: "0x2",
}
engine.blks[[32]byte{'b'}] = &v1.ExecutionBlock{
e.BlockByHashMap[[32]byte{'b'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength),
TotalDifficulty: "0x1",
}
service.cfg.ExecutionEngineCaller = engine
service.cfg.ExecutionEngineCaller = e
payload, err := bellatrixState.LatestExecutionPayloadHeader()
require.NoError(t, err)
root := [32]byte{'c'}
require.NoError(t, service.cfg.ForkChoiceStore.InsertOptimisticBlock(ctx, 1, root, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 0, 0))
postVersion, postHeader, err := getStateVersionAndPayload(bellatrixState)
require.NoError(t, err)
err = service.notifyNewPayload(ctx, bellatrixState.Version(), postVersion, payload, postHeader, bellatrixBlk, root)
validated, err := service.notifyNewPayload(ctx, bellatrixState.Version(), postVersion, payload, postHeader, bellatrixBlk)
require.NoError(t, err)
optimistic, err := service.IsOptimisticForRoot(ctx, root)
require.NoError(t, err)
require.Equal(t, false, optimistic)
require.Equal(t, true, validated)
}
func Test_IsOptimisticCandidateBlock(t *testing.T) {
@@ -443,7 +462,7 @@ func Test_IsOptimisticCandidateBlock(t *testing.T) {
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = 1
blk.Block.ParentRoot = parentRoot[:]
wr, err := wrapper.WrappedBellatrixBeaconBlock(blk.Block)
wr, err := wrapper.WrappedBeaconBlock(blk.Block)
require.NoError(tt, err)
return wr
}(t),
@@ -463,7 +482,7 @@ func Test_IsOptimisticCandidateBlock(t *testing.T) {
blk := util.NewBeaconBlockAltair()
blk.Block.Slot = 200
blk.Block.ParentRoot = parentRoot[:]
wr, err := wrapper.WrappedAltairBeaconBlock(blk.Block)
wr, err := wrapper.WrappedBeaconBlock(blk.Block)
require.NoError(tt, err)
return wr
}(t),
@@ -483,7 +502,7 @@ func Test_IsOptimisticCandidateBlock(t *testing.T) {
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = 200
blk.Block.ParentRoot = parentRoot[:]
wr, err := wrapper.WrappedBellatrixBeaconBlock(blk.Block)
wr, err := wrapper.WrappedBeaconBlock(blk.Block)
require.NoError(tt, err)
return wr
}(t),
@@ -497,42 +516,14 @@ func Test_IsOptimisticCandidateBlock(t *testing.T) {
}(t),
want: false,
},
{
name: "shallow block, execution enabled justified chkpt",
blk: func(tt *testing.T) block.BeaconBlock {
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = 200
blk.Block.ParentRoot = parentRoot[:]
wr, err := wrapper.WrappedBellatrixBeaconBlock(blk.Block)
require.NoError(tt, err)
return wr
}(t),
justified: func(tt *testing.T) block.SignedBeaconBlock {
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = 32
blk.Block.ParentRoot = parentRoot[:]
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.FeeRecipient = bytesutil.PadTo([]byte{'a'}, fieldparams.FeeRecipientLength)
blk.Block.Body.ExecutionPayload.StateRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.ReceiptsRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.LogsBloom = bytesutil.PadTo([]byte{'a'}, fieldparams.LogsBloomLength)
blk.Block.Body.ExecutionPayload.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.BaseFeePerGas = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
blk.Block.Body.ExecutionPayload.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
wr, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(tt, err)
return wr
}(t),
want: true,
},
}
for _, tt := range tests {
jroot, err := tt.justified.Block().HashTreeRoot()
jRoot, err := tt.justified.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, tt.justified))
service.store.SetJustifiedCheckpt(
&ethpb.Checkpoint{
Root: jroot[:],
Root: jRoot[:],
Epoch: slots.ToEpoch(tt.justified.Block().Slot()),
})
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrappedParentBlock))
@@ -571,8 +562,8 @@ func Test_IsOptimisticShallowExecutionParent(t *testing.T) {
BlockNumber: 100,
}
body := &ethpb.BeaconBlockBodyBellatrix{ExecutionPayload: payload}
block := &ethpb.BeaconBlockBellatrix{Body: body, Slot: 200}
rawSigned := &ethpb.SignedBeaconBlockBellatrix{Block: block}
b := &ethpb.BeaconBlockBellatrix{Body: body, Slot: 200}
rawSigned := &ethpb.SignedBeaconBlockBellatrix{Block: b}
blk := util.HydrateSignedBeaconBlockBellatrix(rawSigned)
wr, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
@@ -592,6 +583,47 @@ func Test_IsOptimisticShallowExecutionParent(t *testing.T) {
require.Equal(t, true, candidate)
}
func Test_GetPayloadAttribute(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
// Cache miss
service, err := NewService(ctx, opts...)
require.NoError(t, err)
hasPayload, _, vId, err := service.getPayloadAttribute(ctx, nil, 0)
require.NoError(t, err)
require.Equal(t, false, hasPayload)
require.Equal(t, types.ValidatorIndex(0), vId)
// Cache hit, advance state, no fee recipient
suggestedVid := types.ValidatorIndex(1)
slot := types.Slot(1)
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{})
st, _ := util.DeterministicGenesisState(t, 1)
hook := logTest.NewGlobal()
hasPayload, attr, vId, err := service.getPayloadAttribute(ctx, st, slot)
require.NoError(t, err)
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
require.Equal(t, fieldparams.EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient).String())
require.LogsContain(t, hook, "Fee recipient not set. Using burn address")
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
require.NoError(t, service.cfg.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, []types.ValidatorIndex{suggestedVid}, []common.Address{suggestedAddr}))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{})
hasPayload, attr, vId, err = service.getPayloadAttribute(ctx, st, slot)
require.NoError(t, err)
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient))
}
func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MainnetConfig())


@@ -2,6 +2,7 @@ package blockchain
import (
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
@@ -11,7 +12,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
enginev1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
@@ -52,7 +52,7 @@ func WithChainStartFetcher(f powchain.ChainStartFetcher) Option {
}
// WithExecutionEngineCaller to call execution engine.
func WithExecutionEngineCaller(c enginev1.Caller) Option {
func WithExecutionEngineCaller(c powchain.EngineCaller) Option {
return func(s *Service) error {
s.cfg.ExecutionEngineCaller = c
return nil
@@ -67,6 +67,14 @@ func WithDepositCache(c *depositcache.DepositCache) Option {
}
}
// WithProposerIdsCache for the proposer payload IDs cache.
func WithProposerIdsCache(c *cache.ProposerPayloadIDsCache) Option {
return func(s *Service) error {
s.cfg.ProposerSlotIndexCache = c
return nil
}
}
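Wiring the new cache follows the existing functional-options pattern, mirroring how the tests in this changeset construct the service:
service, err := blockchain.NewService(ctx,
	blockchain.WithDatabase(beaconDB),
	blockchain.WithStateGen(stategen.New(beaconDB)),
	blockchain.WithForkChoiceStore(doublylinkedtree.New(0, 0)),
	blockchain.WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
)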
// WithAttestationPool for attestation lifecycle after chain inclusion.
func WithAttestationPool(p attestations.Pool) Option {
return func(s *Service) error {


@@ -9,6 +9,7 @@ import (
"github.com/holiman/uint256"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
mocks "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
@@ -117,13 +118,13 @@ func Test_validateMergeBlock(t *testing.T) {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
engine := &mockEngineService{blks: map[[32]byte]*enginev1.ExecutionBlock{}}
engine := &mocks.EngineClient{BlockByHashMap: map[[32]byte]*enginev1.ExecutionBlock{}}
service.cfg.ExecutionEngineCaller = engine
engine.blks[[32]byte{'a'}] = &enginev1.ExecutionBlock{
engine.BlockByHashMap[[32]byte{'a'}] = &enginev1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
TotalDifficulty: "0x2",
}
engine.blks[[32]byte{'b'}] = &enginev1.ExecutionBlock{
engine.BlockByHashMap[[32]byte{'b'}] = &enginev1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength),
TotalDifficulty: "0x1",
}
@@ -158,12 +159,12 @@ func Test_getBlkParentHashAndTD(t *testing.T) {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
engine := &mockEngineService{blks: map[[32]byte]*enginev1.ExecutionBlock{}}
engine := &mocks.EngineClient{BlockByHashMap: map[[32]byte]*enginev1.ExecutionBlock{}}
service.cfg.ExecutionEngineCaller = engine
h := [32]byte{'a'}
p := [32]byte{'b'}
td := "0x1"
engine.blks[h] = &enginev1.ExecutionBlock{
engine.BlockByHashMap[h] = &enginev1.ExecutionBlock{
ParentHash: p[:],
TotalDifficulty: td,
}
@@ -175,18 +176,18 @@ func Test_getBlkParentHashAndTD(t *testing.T) {
_, _, err = service.getBlkParentHashAndTD(ctx, []byte{'c'})
require.ErrorContains(t, "could not get pow block: block not found", err)
engine.blks[h] = nil
engine.BlockByHashMap[h] = nil
_, _, err = service.getBlkParentHashAndTD(ctx, h[:])
require.ErrorContains(t, "pow block is nil", err)
engine.blks[h] = &enginev1.ExecutionBlock{
engine.BlockByHashMap[h] = &enginev1.ExecutionBlock{
ParentHash: p[:],
TotalDifficulty: "1",
}
_, _, err = service.getBlkParentHashAndTD(ctx, h[:])
require.ErrorContains(t, "could not decode merge block total difficulty: hex string without 0x prefix", err)
engine.blks[h] = &enginev1.ExecutionBlock{
engine.BlockByHashMap[h] = &enginev1.ExecutionBlock{
ParentHash: p[:],
TotalDifficulty: "0XFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
}


@@ -109,16 +109,32 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
if err != nil {
return err
}
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState, false /* reg sync */); err != nil {
return err
}
postStateVersion, postStateHeader, err := getStateVersionAndPayload(postState)
if err != nil {
return err
}
if err := s.notifyNewPayload(ctx, preStateVersion, postStateVersion, preStateHeader, postStateHeader, signed, blockRoot); err != nil {
isValidPayload, err := s.notifyNewPayload(ctx, preStateVersion, postStateVersion, preStateHeader, postStateHeader, signed)
if err != nil {
return errors.Wrap(err, "could not verify new payload")
}
if !isValidPayload {
candidate, err := s.optimisticCandidateBlock(ctx, b)
if err != nil {
return errors.Wrap(err, "could not check if block is optimistic candidate")
}
if !candidate {
return errNotOptimisticCandidate
}
}
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState, false /* reg sync */); err != nil {
return err
}
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, blockRoot); err != nil {
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
// We add a proposer score boost to fork choice for the block root if applicable, right after
// running a successful state transition for the block.
@@ -188,12 +204,24 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
msg := fmt.Sprintf("could not read balances for state w/ justified checkpoint %#x", justified.Root)
return errors.Wrap(err, msg)
}
if err := s.updateHead(ctx, balances); err != nil {
headRoot, err := s.updateHead(ctx, balances)
if err != nil {
log.WithError(err).Warn("Could not update head")
}
if _, err := s.notifyForkchoiceUpdate(ctx, s.headBlock().Block(), s.headRoot(), bytesutil.ToBytes32(finalized.Root)); err != nil {
headBlock, err := s.cfg.BeaconDB.Block(ctx, headRoot)
if err != nil {
return err
}
headState, err := s.cfg.StateGen.StateByRoot(ctx, headRoot)
if err != nil {
return err
}
if _, err := s.notifyForkchoiceUpdate(ctx, headState, headBlock.Block(), headRoot, bytesutil.ToBytes32(finalized.Root)); err != nil {
return err
}
if err := s.saveHead(ctx, headRoot, headBlock, headState); err != nil {
return errors.Wrap(err, "could not save head")
}
if err := s.pruneCanonicalAttsFromPool(ctx, blockRoot, signed); err != nil {
return err
@@ -238,7 +266,7 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
if err := s.cfg.ForkChoiceStore.Prune(ctx, fRoot); err != nil {
return errors.Wrap(err, "could not prune proto array fork choice nodes")
}
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(ctx, fRoot)
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(fRoot)
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
@@ -376,18 +404,35 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []block.SignedBeaconBlo
// blocks have been verified, add them to forkchoice and call the engine
for i, b := range blks {
s.saveInitSyncBlock(blockRoots[i], b)
if err := s.insertBlockToForkChoiceStore(ctx, b.Block(), blockRoots[i], fCheckpoints[i], jCheckpoints[i]); err != nil {
return nil, nil, err
}
if err := s.notifyNewPayload(ctx,
isValidPayload, err := s.notifyNewPayload(ctx,
preVersionAndHeaders[i].version,
postVersionAndHeaders[i].version,
preVersionAndHeaders[i].header,
postVersionAndHeaders[i].header, b, blockRoots[i]); err != nil {
postVersionAndHeaders[i].header, b)
if err != nil {
return nil, nil, err
}
if !isValidPayload {
candidate, err := s.optimisticCandidateBlock(ctx, b.Block())
if err != nil {
return nil, nil, errors.Wrap(err, "could not check if block is optimistic candidate")
}
if !candidate {
return nil, nil, errNotOptimisticCandidate
}
}
if _, err := s.notifyForkchoiceUpdate(ctx, b.Block(), blockRoots[i], bytesutil.ToBytes32(fCheckpoints[i].Root)); err != nil {
if err := s.insertBlockToForkChoiceStore(ctx, b.Block(), blockRoots[i], fCheckpoints[i], jCheckpoints[i]); err != nil {
return nil, nil, err
}
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, blockRoots[i]); err != nil {
return nil, nil, errors.Wrap(err, "could not set optimistic block to valid")
}
}
if _, err := s.notifyForkchoiceUpdate(ctx, preState, b.Block(), blockRoots[i], bytesutil.ToBytes32(fCheckpoints[i].Root)); err != nil {
return nil, nil, err
}
}
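onBlock and onBlockBatch now share the same verify-then-gate prologue: call notifyNewPayload, and if the payload was not validated, require the block to pass the optimistic-candidate checks before import. A hypothetical helper that factors it out (not part of this changeset):
func (s *Service) verifyPayloadOrCandidate(ctx context.Context, signed block.SignedBeaconBlock,
	preVer, postVer int, preHdr, postHdr *ethpb.ExecutionPayloadHeader) (bool, error) {
	isValidPayload, err := s.notifyNewPayload(ctx, preVer, postVer, preHdr, postHdr, signed)
	if err != nil {
		return false, errors.Wrap(err, "could not verify new payload")
	}
	if !isValidPayload {
		candidate, err := s.optimisticCandidateBlock(ctx, signed.Block())
		if err != nil {
			return false, errors.Wrap(err, "could not check if block is optimistic candidate")
		}
		if !candidate {
			return false, errNotOptimisticCandidate
		}
	}
	return isValidPayload, nil
}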


@@ -247,7 +247,7 @@ func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) err
}
fRoot := bytesutil.ToBytes32(cp.Root)
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(ctx, fRoot)
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(fRoot)
if err != nil {
return err
}


@@ -1628,9 +1628,9 @@ func Test_getStateVersionAndPayload(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
version, header, err := getStateVersionAndPayload(tt.st)
ver, header, err := getStateVersionAndPayload(tt.st)
require.NoError(t, err)
require.Equal(t, tt.version, version)
require.Equal(t, tt.version, ver)
require.DeepEqual(t, tt.header, header)
})
}


@@ -161,19 +161,19 @@ func (s *Service) spawnProcessAttestationsRoutine(stateFeed *event.Feed) {
log.WithError(err).Errorf("Unable to get justified balances for root %v", justified.Root)
continue
}
prevHead := s.headRoot()
if err := s.updateHead(s.ctx, balances); err != nil {
newHeadRoot, err := s.updateHead(s.ctx, balances)
if err != nil {
log.WithError(err).Warn("Resolving fork due to new attestation")
}
s.notifyEngineIfChangedHead(prevHead)
s.notifyEngineIfChangedHead(s.ctx, newHeadRoot)
}
}
}()
}
// This calls notifyForkchoiceUpdate in the event that the head has changed.
func (s *Service) notifyEngineIfChangedHead(prevHead [32]byte) {
if s.headRoot() == prevHead {
func (s *Service) notifyEngineIfChangedHead(ctx context.Context, newHeadRoot [32]byte) {
if s.headRoot() == newHeadRoot {
return
}
finalized := s.store.FinalizedCheckpt()
@@ -181,14 +181,28 @@ func (s *Service) notifyEngineIfChangedHead(prevHead [32]byte) {
log.WithError(errNilFinalizedInStore).Error("could not get finalized checkpoint")
return
}
_, err := s.notifyForkchoiceUpdate(s.ctx,
s.headBlock().Block(),
s.headRoot(),
newHeadBlock, err := s.cfg.BeaconDB.Block(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("Could not get block from db")
return
}
headState, err := s.cfg.StateGen.StateByRoot(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("Could not get state from db")
return
}
_, err = s.notifyForkchoiceUpdate(s.ctx,
headState,
newHeadBlock.Block(),
newHeadRoot,
bytesutil.ToBytes32(finalized.Root),
)
if err != nil {
log.WithError(err).Error("could not notify forkchoice update")
}
if err := s.saveHead(ctx, newHeadRoot, newHeadBlock, headState); err != nil {
log.WithError(err).Error("could not save head")
}
}
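Note the contract this implies: an unchanged root returns before any DB or engine traffic, so the attestation routine can hand every updateHead result straight to the hook:
newHeadRoot, err := s.updateHead(s.ctx, balances)
if err != nil {
	log.WithError(err).Warn("Resolving fork due to new attestation")
}
// Cheap when the head is unchanged; DB reads, the engine call, and the
// head save happen only on an actual head change.
s.notifyEngineIfChangedHead(s.ctx, newHeadRoot)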
// This processes fork choice attestations from the pool to account for validator votes and fork choice.


@@ -6,6 +6,7 @@ import (
"time"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
@@ -133,31 +134,43 @@ func TestNotifyEngineIfChangedHead(t *testing.T) {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.notifyEngineIfChangedHead(service.headRoot())
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
service.notifyEngineIfChangedHead(ctx, service.headRoot())
hookErr := "could not notify forkchoice update"
finalizedErr := "could not get finalized checkpoint"
require.LogsDoNotContain(t, hook, finalizedErr)
require.LogsDoNotContain(t, hook, hookErr)
service.notifyEngineIfChangedHead([32]byte{'a'})
service.notifyEngineIfChangedHead(ctx, [32]byte{'a'})
require.LogsContain(t, hook, finalizedErr)
hook.Reset()
service.head = &head{
root: [32]byte{'a'},
block: nil, /* should not panic if notify head uses correct head */
}
b := util.NewBeaconBlock()
b.Block.Slot = 1
b.Block.Slot = 2
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
r, err := b.Block.HashTreeRoot()
r1, err := b.Block.HashTreeRoot()
require.NoError(t, err)
finalized := &ethpb.Checkpoint{Root: r[:], Epoch: 0}
finalized := &ethpb.Checkpoint{Root: r1[:], Epoch: 0}
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
slot: 1,
root: r,
root: r1,
block: wsb,
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1})
service.store.SetFinalizedCheckpt(finalized)
service.notifyEngineIfChangedHead([32]byte{'b'})
service.notifyEngineIfChangedHead(ctx, r1)
require.LogsDoNotContain(t, hook, finalizedErr)
require.LogsDoNotContain(t, hook, hookErr)
vId, payloadID, has := service.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(2)
require.Equal(t, true, has)
require.Equal(t, types.ValidatorIndex(1), vId)
require.Equal(t, [8]byte{1}, payloadID)
}


@@ -29,7 +29,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
enginev1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
@@ -75,6 +74,7 @@ type config struct {
ChainStartFetcher powchain.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
DepositCache *depositcache.DepositCache
ProposerSlotIndexCache *cache.ProposerPayloadIDsCache
AttPool attestations.Pool
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
@@ -88,7 +88,7 @@ type config struct {
WeakSubjectivityCheckpt *ethpb.Checkpoint
BlockFetcher powchain.POWBlockFetcher
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller enginev1.Caller
ExecutionEngineCaller powchain.EngineCaller
}
// NewService instantiates a new block service instance that will
@@ -191,14 +191,14 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
}
s.store = store.New(justified, finalized)
var store f.ForkChoicer
var f f.ForkChoicer
fRoot := bytesutil.ToBytes32(finalized.Root)
if features.Get().EnableForkChoiceDoublyLinkedTree {
store = doublylinkedtree.New(justified.Epoch, finalized.Epoch)
f = doublylinkedtree.New(justified.Epoch, finalized.Epoch)
} else {
store = protoarray.New(justified.Epoch, finalized.Epoch, fRoot)
f = protoarray.New(justified.Epoch, finalized.Epoch, fRoot)
}
s.cfg.ForkChoiceStore = store
s.cfg.ForkChoiceStore = f
fb, err := s.cfg.BeaconDB.Block(s.ctx, s.ensureRootNotZeros(fRoot))
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint block")
@@ -211,7 +211,7 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
return errors.Wrap(err, "could not get execution payload hash")
}
fSlot := fb.Block().Slot()
if err := store.InsertOptimisticBlock(s.ctx, fSlot, fRoot, params.BeaconConfig().ZeroHash,
if err := f.InsertOptimisticBlock(s.ctx, fSlot, fRoot, params.BeaconConfig().ZeroHash,
payloadHash, justified.Epoch, finalized.Epoch); err != nil {
return errors.Wrap(err, "could not insert finalized block to forkchoice")
}
@@ -221,7 +221,7 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
return errors.Wrap(err, "could not get last validated checkpoint")
}
if bytes.Equal(finalized.Root, lastValidatedCheckpoint.Root) {
if err := store.SetOptimisticToValid(s.ctx, fRoot); err != nil {
if err := f.SetOptimisticToValid(s.ctx, fRoot); err != nil {
return errors.Wrap(err, "could not set finalized block as validated")
}
}
@@ -362,9 +362,9 @@ func (s *Service) startFromPOWChain() error {
defer stateSub.Unsubscribe()
for {
select {
case event := <-stateChannel:
if event.Type == statefeed.ChainStarted {
data, ok := event.Data.(*statefeed.ChainStartedData)
case e := <-stateChannel:
if e.Type == statefeed.ChainStarted {
data, ok := e.Data.(*statefeed.ChainStartedData)
if !ok {
log.Error("event data is not type *statefeed.ChainStartedData")
return


@@ -6,7 +6,9 @@ import (
"testing"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/sirupsen/logrus"
)
@@ -20,8 +22,11 @@ func TestChainService_SaveHead_DataRace(t *testing.T) {
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
b, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, err)
go func() {
require.NoError(t, s.saveHead(context.Background(), [32]byte{}))
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
}()
require.NoError(t, s.saveHead(context.Background(), [32]byte{}))
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
}


@@ -24,6 +24,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
mockPOW "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
@@ -74,10 +75,14 @@ func (mb *mockBroadcaster) BroadcastSyncCommitteeMessage(_ context.Context, _ ui
var _ p2p.Broadcaster = (*mockBroadcaster)(nil)
func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
endpoint := "http://127.0.0.1"
ctx := context.Background()
var web3Service *powchain.Service
var err error
srv, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
srv.Stop()
})
bState, _ := util.DeterministicGenesisState(t, 10)
pbState, err := v1.ProtobufBeaconState(bState.InnerStateUnsafe())
require.NoError(t, err)
@@ -496,12 +501,12 @@ func TestHasBlock_ForkChoiceAndDB_ProtoArray(t *testing.T) {
store: &store.Store{},
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]})
block := util.NewBeaconBlock()
r, err := block.Block.HashTreeRoot()
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(block)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wsb.Block(), r, beaconState))
@@ -517,12 +522,12 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
store: &store.Store{},
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]})
block := util.NewBeaconBlock()
r, err := block.Block.HashTreeRoot()
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(block)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wsb.Block(), r, beaconState))
@@ -539,10 +544,10 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
cancel: cancel,
initSyncBlocks: make(map[[32]byte]block.SignedBeaconBlock),
}
block := util.NewBeaconBlock()
r, err := block.Block.HashTreeRoot()
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(block)
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
s.saveInitSyncBlock(r, wsb)
require.NoError(t, s.Stop())
@@ -569,11 +574,11 @@ func BenchmarkHasBlockDB(b *testing.B) {
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
block := util.NewBeaconBlock()
wsb, err := wrapper.WrappedSignedBeaconBlock(block)
blk := util.NewBeaconBlock()
wsb, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(b, err)
require.NoError(b, s.cfg.BeaconDB.SaveBlock(ctx, wsb))
r, err := block.Block.HashTreeRoot()
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
b.ResetTimer()
@@ -613,13 +618,13 @@ func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
store: &store.Store{},
}
s.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]})
block := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := block.Block.HashTreeRoot()
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
bs := &ethpb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}}
beaconState, err := v1.InitializeFromProto(bs)
require.NoError(b, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(block)
wsb, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(b, err)
require.NoError(b, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wsb.Block(), r, beaconState))


@@ -24,7 +24,7 @@ type stateByRooter interface {
StateByRoot(context.Context, [32]byte) (state.BeaconState, error)
}
// newStateBalanceCache exists to remind us that stateBalanceCache needs a stagegen
// newStateBalanceCache exists to remind us that stateBalanceCache needs a state gen
// to avoid nil pointer bugs when updating the cache in the read path (get())
func newStateBalanceCache(sg *stategen.State) (*stateBalanceCache, error) {
if sg == nil {


@@ -15,15 +15,6 @@ import (
"github.com/prysmaticlabs/prysm/time/slots"
)
type mockStateByRoot struct {
state state.BeaconState
err error
}
func (m *mockStateByRoot) StateByRoot(context.Context, [32]byte) (state.BeaconState, error) {
return m.state, m.err
}
type testStateOpt func(*ethpb.BeaconStateAltair)
func testStateWithValidators(v []*ethpb.Validator) testStateOpt {


@@ -61,6 +61,7 @@ type ChainService struct {
SyncCommitteePubkeys [][]byte
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
}
// ForkChoicer mocks the same method in the chain service
@@ -195,32 +196,35 @@ func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []block.Signe
if s.State == nil {
return ErrNilState
}
for _, block := range blks {
if !bytes.Equal(s.Root, block.Block().ParentRoot()) {
return errors.Errorf("wanted %#x but got %#x", s.Root, block.Block().ParentRoot())
for _, b := range blks {
if !bytes.Equal(s.Root, b.Block().ParentRoot()) {
return errors.Errorf("wanted %#x but got %#x", s.Root, b.Block().ParentRoot())
}
if err := s.State.SetSlot(block.Block().Slot()); err != nil {
if err := s.State.SetSlot(b.Block().Slot()); err != nil {
return err
}
s.BlocksReceived = append(s.BlocksReceived, block)
signingRoot, err := block.Block().HashTreeRoot()
s.BlocksReceived = append(s.BlocksReceived, b)
signingRoot, err := b.Block().HashTreeRoot()
if err != nil {
return err
}
if s.DB != nil {
if err := s.DB.SaveBlock(ctx, block); err != nil {
if err := s.DB.SaveBlock(ctx, b); err != nil {
return err
}
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block().Slot())
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, b.Block().Slot())
}
s.Root = signingRoot[:]
s.Block = block
s.Block = b
}
return nil
}
// ReceiveBlock mocks ReceiveBlock method in chain service.
func (s *ChainService) ReceiveBlock(ctx context.Context, block block.SignedBeaconBlock, _ [32]byte) error {
if s.ReceiveBlockMockErr != nil {
return s.ReceiveBlockMockErr
}
if s.State == nil {
return ErrNilState
}
@@ -417,7 +421,7 @@ func (s *ChainService) HeadValidatorIndexToPublicKey(_ context.Context, _ types.
}
// HeadSyncCommitteeIndices mocks HeadSyncCommitteeIndices and always return `HeadNextSyncCommitteeIndices`.
func (s *ChainService) HeadSyncCommitteeIndices(_ context.Context, index types.ValidatorIndex, slot types.Slot) ([]types.CommitteeIndex, error) {
func (s *ChainService) HeadSyncCommitteeIndices(_ context.Context, _ types.ValidatorIndex, _ types.Slot) ([]types.CommitteeIndex, error) {
return s.SyncCommitteeIndices, nil
}


@@ -13,6 +13,7 @@ go_library(
"common.go",
"doc.go",
"error.go",
"payload_id.go",
"proposer_indices.go",
"proposer_indices_disabled.go", # keep
"proposer_indices_type.go",
@@ -26,6 +27,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/cache",
visibility = [
"//beacon-chain:__subpackages__",
"//testing/spectest:__subpackages__",
"//tools:__subpackages__",
],
deps = [
@@ -61,6 +63,7 @@ go_test(
"checkpoint_state_test.go",
"committee_fuzz_test.go",
"committee_test.go",
"payload_id_test.go",
"proposer_indices_test.go",
"skip_slot_cache_test.go",
"subnet_ids_test.go",

beacon-chain/cache/payload_id.go (vendored, new file, 73 lines)

@@ -0,0 +1,73 @@
package cache
import (
"sync"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
)
const vIdLength = 8
const pIdLength = 8
const vpIdsLength = vIdLength + pIdLength
// ProposerPayloadIDsCache is a cache of proposer payload IDs.
// The key is the slot. The value is the concatenation of the proposer and payload IDs. 8 bytes each.
type ProposerPayloadIDsCache struct {
slotToProposerAndPayloadIDs map[types.Slot][vpIdsLength]byte
sync.RWMutex
}
// NewProposerPayloadIDsCache creates a new proposer payload IDs cache.
func NewProposerPayloadIDsCache() *ProposerPayloadIDsCache {
return &ProposerPayloadIDsCache{
slotToProposerAndPayloadIDs: make(map[types.Slot][vpIdsLength]byte),
}
}
// GetProposerPayloadIDs returns the proposer and payload IDs for the given slot.
func (f *ProposerPayloadIDsCache) GetProposerPayloadIDs(slot types.Slot) (types.ValidatorIndex, [8]byte, bool) {
f.RLock()
defer f.RUnlock()
ids, ok := f.slotToProposerAndPayloadIDs[slot]
if !ok {
return 0, [8]byte{}, false
}
vId := ids[:vIdLength]
b := ids[vIdLength:]
var pId [pIdLength]byte
copy(pId[:], b)
return types.ValidatorIndex(bytesutil.BytesToUint64BigEndian(vId)), pId, true
}
// SetProposerAndPayloadIDs sets the proposer and payload IDs for the given slot.
func (f *ProposerPayloadIDsCache) SetProposerAndPayloadIDs(slot types.Slot, vId types.ValidatorIndex, pId [8]byte) {
f.Lock()
defer f.Unlock()
var vIdBytes [vIdLength]byte
copy(vIdBytes[:], bytesutil.Uint64ToBytesBigEndian(uint64(vId)))
var bytes [vpIdsLength]byte
copy(bytes[:], append(vIdBytes[:], pId[:]...))
_, ok := f.slotToProposerAndPayloadIDs[slot]
// Write the entry when the slot is unset; overwrite an existing entry
// only when the incoming payload ID is non-zero. This handles re-orgs,
// where the payload ID assigned to a slot can change.
if !ok || (ok && pId != [pIdLength]byte{}) {
f.slotToProposerAndPayloadIDs[slot] = bytes
}
}
// PrunePayloadIDs removes the payload ID entries older than the given slot.
func (f *ProposerPayloadIDsCache) PrunePayloadIDs(slot types.Slot) {
f.Lock()
defer f.Unlock()
for s := range f.slotToProposerAndPayloadIDs {
if slot > s {
delete(f.slotToProposerAndPayloadIDs, s)
}
}
}
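For orientation, a minimal usage sketch of this cache as a standalone program (import paths taken from the BUILD file above; the slot, validator index, and payload ID values are hypothetical):

package main

import (
	"fmt"

	types "github.com/prysmaticlabs/eth2-types"
	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
)

func main() {
	c := cache.NewProposerPayloadIDsCache()
	// Record that validator 7 proposes at slot 42 with the given payload ID.
	c.SetProposerAndPayloadIDs(types.Slot(42), types.ValidatorIndex(7), [8]byte{1, 2, 3, 4, 5, 6, 7, 8})
	vIdx, pID, ok := c.GetProposerPayloadIDs(types.Slot(42))
	fmt.Println(vIdx, pID, ok) // 7 [1 2 3 4 5 6 7 8] true
	// Pruning deletes only entries strictly below the given slot,
	// so the slot-42 entry survives a prune at slot 42.
	c.PrunePayloadIDs(types.Slot(42))
	_, _, ok = c.GetProposerPayloadIDs(types.Slot(42))
	fmt.Println(ok) // true
}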

beacon-chain/cache/payload_id_test.go (vendored, new file, 60 lines)

@@ -0,0 +1,60 @@
package cache
import (
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestValidatorPayloadIDsCache_GetAndSaveValidatorPayloadIDs(t *testing.T) {
cache := NewProposerPayloadIDsCache()
i, p, ok := cache.GetProposerPayloadIDs(0)
require.Equal(t, false, ok)
require.Equal(t, types.ValidatorIndex(0), i)
require.Equal(t, [pIdLength]byte{}, p)
slot := types.Slot(1234)
vid := types.ValidatorIndex(34234324)
pid := [8]byte{1, 2, 3, 3, 7, 8, 7, 8}
cache.SetProposerAndPayloadIDs(slot, vid, pid)
i, p, ok = cache.GetProposerPayloadIDs(slot)
require.Equal(t, true, ok)
require.Equal(t, vid, i)
require.Equal(t, pid, p)
slot = types.Slot(9456456)
vid = types.ValidatorIndex(6786745)
cache.SetProposerAndPayloadIDs(slot, vid, [pIdLength]byte{})
i, p, ok = cache.GetProposerPayloadIDs(slot)
require.Equal(t, true, ok)
require.Equal(t, vid, i)
require.Equal(t, [pIdLength]byte{}, p)
// reset cache without pid
slot = types.Slot(9456456)
vid = types.ValidatorIndex(11111)
pid = [8]byte{3, 2, 3, 33, 72, 8, 7, 8}
cache.SetProposerAndPayloadIDs(slot, vid, pid)
i, p, ok = cache.GetProposerPayloadIDs(slot)
require.Equal(t, true, ok)
require.Equal(t, vid, i)
require.Equal(t, pid, p)
// reset cache with existing pid
slot = types.Slot(9456456)
vid = types.ValidatorIndex(11111)
newPid := [8]byte{1, 2, 3, 33, 72, 8, 7, 1}
cache.SetProposerAndPayloadIDs(slot, vid, newPid)
i, p, ok = cache.GetProposerPayloadIDs(slot)
require.Equal(t, true, ok)
require.Equal(t, vid, i)
require.Equal(t, newPid, p)
// remove cache entry
cache.PrunePayloadIDs(slot + 1)
i, p, ok = cache.GetProposerPayloadIDs(slot)
require.Equal(t, false, ok)
require.Equal(t, types.ValidatorIndex(0), i)
require.Equal(t, [pIdLength]byte{}, p)
}
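One corner the test above leaves implicit, sketched here as a hypothetical in-package test in the same style: writing an empty payload ID over an existing non-empty entry is a no-op, because the setter only overwrites when the incoming payload ID is non-zero.

func TestSetProposerAndPayloadIDs_EmptyIDDoesNotOverwrite(t *testing.T) {
	c := NewProposerPayloadIDsCache()
	c.SetProposerAndPayloadIDs(types.Slot(1), types.ValidatorIndex(2), [8]byte{9})
	// The empty payload ID below must not clobber the existing entry.
	c.SetProposerAndPayloadIDs(types.Slot(1), types.ValidatorIndex(3), [8]byte{})
	i, p, ok := c.GetProposerPayloadIDs(types.Slot(1))
	require.Equal(t, true, ok)
	require.Equal(t, types.ValidatorIndex(2), i)
	require.Equal(t, [8]byte{9}, p)
}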


@@ -175,9 +175,7 @@ func CommitteeAssignments(
return nil, nil, err
}
proposerIndexToSlots := make(map[types.ValidatorIndex][]types.Slot, params.BeaconConfig().SlotsPerEpoch)
// Proposal epochs do not have a look ahead, so we skip them over here.
validProposalEpoch := epoch < nextEpoch
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch && validProposalEpoch; slot++ {
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
// Skip proposer assignment for genesis slot.
if slot == 0 {
continue
@@ -192,6 +190,15 @@ func CommitteeAssignments(
proposerIndexToSlots[i] = append(proposerIndexToSlots[i], slot)
}
// If the proposer indices computation above ran for an epoch outside the current proposal epoch range,
// we need to reset the state slot back to the start slot so that we can compute the correct committees.
currentProposalEpoch := epoch < nextEpoch
if !currentProposalEpoch {
if err := state.SetSlot(state.Slot() - params.BeaconConfig().SlotsPerEpoch); err != nil {
return nil, nil, err
}
}
activeValidatorIndices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return nil, nil, err
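As a rough worked example of the reset (hypothetical numbers, assuming 32 slots per epoch): subtracting SlotsPerEpoch undoes exactly a one-epoch advance of the state slot, which is what the SetSlot call above relies on.

package main

import (
	"fmt"

	types "github.com/prysmaticlabs/eth2-types"
)

func main() {
	// Hypothetical values for SlotsPerEpoch = 32.
	advanced := types.Slot(96)           // a state slot one epoch ahead
	reset := advanced - 32               // mirrors state.SetSlot(state.Slot() - SlotsPerEpoch)
	fmt.Println(reset == types.Slot(64)) // true: exactly one epoch earlier
}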


@@ -232,7 +232,7 @@ func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
_, proposerIndxs, err = CommitteeAssignments(context.Background(), state, time.CurrentEpoch(state)+1)
require.NoError(t, err)
require.Equal(t, 0, len(proposerIndxs), "wanted empty proposer index set")
require.NotEqual(t, 0, len(proposerIndxs), "wanted non-zero proposer index set")
}
func TestCommitteeAssignments_EverySlotHasMin1Proposer(t *testing.T) {


@@ -99,7 +99,7 @@ type HeadAccessDatabase interface {
SaveHeadBlockRoot(ctx context.Context, blockRoot [32]byte) error
// Genesis operations.
LoadGenesis(ctx context.Context, r io.Reader) error
LoadGenesis(ctx context.Context, stateBytes []byte) error
SaveGenesisData(ctx context.Context, state state.BeaconState) error
EnsureEmbeddedGenesis(ctx context.Context) error


@@ -94,6 +94,7 @@ go_test(
"state_test.go",
"utils_test.go",
"validated_checkpoint_test.go",
"wss_test.go",
],
data = glob(["testdata/**"]),
embed = [":go_default_library"],
@@ -101,6 +102,7 @@ go_test(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/genesis:go_default_library",
"//beacon-chain/state/v1:go_default_library",
"//beacon-chain/state/v2:go_default_library",
"//config/features:go_default_library",


@@ -4,8 +4,6 @@ import (
"bytes"
"context"
"fmt"
"io"
"io/ioutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
@@ -54,14 +52,10 @@ func (s *Store) SaveGenesisData(ctx context.Context, genesisState state.BeaconSt
return nil
}
// LoadGenesis loads a genesis state from a given file path, if no genesis exists already.
func (s *Store) LoadGenesis(ctx context.Context, r io.Reader) error {
b, err := ioutil.ReadAll(r)
if err != nil {
return err
}
// LoadGenesis loads a genesis state from an SSZ-serialized byte slice, if no genesis exists already.
func (s *Store) LoadGenesis(ctx context.Context, sb []byte) error {
st := &ethpb.BeaconState{}
if err := st.UnmarshalSSZ(b); err != nil {
if err := st.UnmarshalSSZ(sb); err != nil {
return err
}
gs, err := statev1.InitializeFromProtoUnsafe(st)


@@ -53,21 +53,16 @@ func TestLoadGenesisFromFile(t *testing.T) {
if err == nil {
fp = rfp
}
r, err := os.Open(fp)
sb, err := os.ReadFile(fp)
assert.NoError(t, err)
defer func() {
assert.NoError(t, r.Close())
}()
db := setupDB(t)
assert.NoError(t, db.LoadGenesis(context.Background(), r))
assert.NoError(t, db.LoadGenesis(context.Background(), sb))
testGenesisDataSaved(t, db)
// Loading the same genesis again should not throw an error
_, err = r.Seek(0, 0)
assert.NoError(t, err)
assert.NoError(t, db.LoadGenesis(context.Background(), r))
assert.NoError(t, db.LoadGenesis(context.Background(), sb))
}
func TestLoadGenesisFromFile_mismatchedForkVersion(t *testing.T) {
@@ -76,15 +71,12 @@ func TestLoadGenesisFromFile_mismatchedForkVersion(t *testing.T) {
if err == nil {
fp = rfp
}
r, err := os.Open(fp)
sb, err := os.ReadFile(fp)
assert.NoError(t, err)
defer func() {
assert.NoError(t, r.Close())
}()
// Loading a genesis with the wrong fork version as beacon config should throw an error.
db := setupDB(t)
assert.ErrorContains(t, "does not match config genesis fork version", db.LoadGenesis(context.Background(), r))
assert.ErrorContains(t, "does not match config genesis fork version", db.LoadGenesis(context.Background(), sb))
}
func TestEnsureEmbeddedGenesis(t *testing.T) {


@@ -77,6 +77,12 @@ func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error
return errors.Wrap(err, "could not save head block root")
}
// save origin block root in a special key, to be used when the canonical
// origin (start of chain, ie alternative to genesis) block or state is needed
if err = s.SaveOriginCheckpointBlockRoot(ctx, blockRoot); err != nil {
return errors.Wrap(err, "could not save origin block root")
}
// rebuild the checkpoint from the block
// use it to mark the block as justified and finalized
slotEpoch, err := wblk.Block().Slot().SafeDivSlot(params.BeaconConfig().SlotsPerEpoch)
@@ -94,11 +100,5 @@ func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error
return errors.Wrap(err, "could not mark checkpoint sync block as finalized")
}
// save origin block root in a special key, to be used when the canonical
// origin (start of chain, ie alternative to genesis) block or state is needed
if err = s.SaveOriginCheckpointBlockRoot(ctx, blockRoot); err != nil {
return errors.Wrap(err, "could not save origin block root")
}
return nil
}


@@ -0,0 +1,45 @@
package kv
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state/genesis"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
)
func TestSaveOrigin(t *testing.T) {
// Embedded Genesis works with Mainnet config
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.ConfigName = params.ConfigNames[params.Mainnet]
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
db := setupDB(t)
st, err := genesis.State(params.Mainnet.String())
require.NoError(t, err)
sb, err := st.MarshalSSZ()
require.NoError(t, err)
require.NoError(t, db.LoadGenesis(ctx, sb))
// this is necessary for mainnet, because LoadGenesis is short-circuited by the embedded state,
// so the genesis root key is never written to the db.
require.NoError(t, db.EnsureEmbeddedGenesis(ctx))
cst, err := util.NewBeaconState()
require.NoError(t, err)
csb, err := cst.MarshalSSZ()
require.NoError(t, err)
cb := util.NewBeaconBlock()
scb, err := wrapper.WrappedSignedBeaconBlock(cb)
require.NoError(t, err)
cbb, err := scb.MarshalSSZ()
require.NoError(t, err)
require.NoError(t, db.SaveOrigin(ctx, csb, cbb))
}


@@ -8,3 +8,4 @@ var errInvalidProposerBoostRoot = errors.New("invalid proposer boost root")
var errUnknownFinalizedRoot = errors.New("unknown finalized root")
var errUnknownJustifiedRoot = errors.New("unknown justified root")
var errInvalidOptimisticStatus = errors.New("invalid optimistic status")
var errUnknownPayloadHash = errors.New("unknown payload hash")


@@ -18,6 +18,7 @@ func New(justifiedEpoch, finalizedEpoch types.Epoch) *ForkChoice {
finalizedEpoch: finalizedEpoch,
proposerBoostRoot: [32]byte{},
nodeByRoot: make(map[[fieldparams.RootLength]byte]*Node),
nodeByPayload: make(map[[fieldparams.RootLength]byte]*Node),
pruneThreshold: defaultPruneThreshold,
}
@@ -168,7 +169,7 @@ func (f *ForkChoice) IsCanonical(root [32]byte) bool {
}
// IsOptimistic returns true if the given root has been optimistically synced.
func (f *ForkChoice) IsOptimistic(_ context.Context, root [32]byte) (bool, error) {
func (f *ForkChoice) IsOptimistic(root [32]byte) (bool, error) {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
@@ -302,6 +303,6 @@ func (f *ForkChoice) ForkChoiceNodes() []*pbrpc.ForkChoiceNode {
}
// SetOptimisticToInvalid removes a block with an invalid execution payload from fork choice store
func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root [fieldparams.RootLength]byte) ([][32]byte, error) {
return f.store.removeNode(ctx, root)
func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root, payloadHash [fieldparams.RootLength]byte) ([][32]byte, error) {
return f.store.setOptimisticToInvalid(ctx, root, payloadHash)
}


@@ -109,7 +109,7 @@ func (n *Node) leadsToViableHead(justifiedEpoch, finalizedEpoch types.Epoch) boo
return n.bestDescendant.viableForHead(justifiedEpoch, finalizedEpoch)
}
// setNodeAndParentValidated sets the current node and the parent as validated (i.e. non-optimistic).
// setNodeAndParentValidated sets the current node and all the ancestors as validated (i.e. non-optimistic).
func (n *Node) setNodeAndParentValidated(ctx context.Context) error {
if ctx.Err() != nil {
return ctx.Err()


@@ -180,27 +180,27 @@ func TestNode_SetFullyValidated(t *testing.T) {
require.NoError(t, f.InsertOptimisticBlock(ctx, 4, indexToHash(4), indexToHash(3), params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 5, indexToHash(5), indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1))
opt, err := f.IsOptimistic(ctx, indexToHash(5))
opt, err := f.IsOptimistic(indexToHash(5))
require.NoError(t, err)
require.Equal(t, true, opt)
opt, err = f.IsOptimistic(ctx, indexToHash(4))
opt, err = f.IsOptimistic(indexToHash(4))
require.NoError(t, err)
require.Equal(t, true, opt)
require.NoError(t, f.store.nodeByRoot[indexToHash(4)].setNodeAndParentValidated(ctx))
// block 5 should still be optimistic
opt, err = f.IsOptimistic(ctx, indexToHash(5))
opt, err = f.IsOptimistic(indexToHash(5))
require.NoError(t, err)
require.Equal(t, true, opt)
// block 4 and 3 should now be valid
opt, err = f.IsOptimistic(ctx, indexToHash(4))
opt, err = f.IsOptimistic(indexToHash(4))
require.NoError(t, err)
require.Equal(t, false, opt)
opt, err = f.IsOptimistic(ctx, indexToHash(3))
opt, err = f.IsOptimistic(indexToHash(3))
require.NoError(t, err)
require.Equal(t, false, opt)
}


@@ -2,22 +2,54 @@ package doublylinkedtree
import (
"context"
"github.com/prysmaticlabs/prysm/config/params"
)
func (s *Store) setOptimisticToInvalid(ctx context.Context, root, payloadHash [32]byte) ([][32]byte, error) {
s.nodesLock.Lock()
invalidRoots := make([][32]byte, 0)
node, ok := s.nodeByRoot[root]
if !ok || node == nil {
s.nodesLock.Unlock()
return invalidRoots, ErrNilNode
}
// Check if last valid hash is an ancestor of the passed node.
lastValid, ok := s.nodeByPayload[payloadHash]
if !ok || lastValid == nil {
s.nodesLock.Unlock()
return invalidRoots, errUnknownPayloadHash
}
firstInvalid := node
for ; firstInvalid.parent != nil && firstInvalid.parent.payloadHash != payloadHash; firstInvalid = firstInvalid.parent {
if ctx.Err() != nil {
s.nodesLock.Unlock()
return invalidRoots, ctx.Err()
}
}
// If the last valid payload is in a different fork, we remove only the
// passed node.
if firstInvalid.parent == nil {
firstInvalid = node
}
s.nodesLock.Unlock()
return s.removeNode(ctx, firstInvalid)
}
// removeNode removes the given node and all of its children
// from the Fork Choice Store.
func (s *Store) removeNode(ctx context.Context, root [32]byte) ([][32]byte, error) {
func (s *Store) removeNode(ctx context.Context, node *Node) ([][32]byte, error) {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
invalidRoots := make([][32]byte, 0)
node, ok := s.nodeByRoot[root]
if !ok || node == nil {
if node == nil {
return invalidRoots, ErrNilNode
}
if !node.optimistic || node.parent == nil {
return invalidRoots, errInvalidOptimisticStatus
}
children := node.parent.children
if len(children) == 1 {
node.parent.children = []*Node{}
@@ -47,6 +79,16 @@ func (s *Store) removeNodeAndChildren(ctx context.Context, node *Node, invalidRo
}
}
invalidRoots = append(invalidRoots, node.root)
s.proposerBoostLock.Lock()
if node.root == s.proposerBoostRoot {
s.proposerBoostRoot = [32]byte{}
}
if node.root == s.previousProposerBoostRoot {
s.previousProposerBoostRoot = params.BeaconConfig().ZeroHash
s.previousProposerBoostScore = 0
}
s.proposerBoostLock.Unlock()
delete(s.nodeByRoot, node.root)
delete(s.nodeByPayload, node.payloadHash)
return invalidRoots, nil
}
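To make the last-valid-hash semantics concrete, here is a hedged, test-style sketch that could sit next to the package's own tests (it reuses the setup and InsertOptimisticBlock helpers that appear in the test file further below): with the chain a <- b <- c, where block a carries payload hash 'A', invalidating c with last valid payload 'A' prunes b as well, because b is the first invalid block above the last valid payload.

func TestSetOptimisticToInvalid_AncestorSketch(t *testing.T) {
	ctx := context.Background()
	f := setup(1, 1)
	require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1))
	require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1))
	require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1))
	// c is INVALID and the last valid payload is a's, so both b and c are
	// removed from the store and returned as invalid roots.
	roots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'A'})
	require.NoError(t, err)
	require.Equal(t, 2, len(roots))
	require.Equal(t, 2, f.NodeCount()) // only the setup root and a remain
}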


@@ -24,32 +24,71 @@ import (
func TestPruneInvalid(t *testing.T) {
tests := []struct {
root [32]byte // the root of the new INVALID block
payload [32]byte // the last valid hash
wantedNodeNumber int
wantedRoots [][32]byte
}{
{
[32]byte{'j'},
[32]byte{'B'},
12,
[][32]byte{[32]byte{'j'}},
},
{
[32]byte{'c'},
[32]byte{'B'},
4,
[][32]byte{[32]byte{'f'}, [32]byte{'e'}, [32]byte{'i'}, [32]byte{'h'}, [32]byte{'l'},
[32]byte{'k'}, [32]byte{'g'}, [32]byte{'d'}, [32]byte{'c'}},
},
{
[32]byte{'i'},
[32]byte{'H'},
12,
[][32]byte{[32]byte{'i'}},
},
{
[32]byte{'h'},
[32]byte{'G'},
11,
[][32]byte{[32]byte{'i'}, [32]byte{'h'}},
},
{
[32]byte{'g'},
[32]byte{'D'},
8,
[][32]byte{[32]byte{'i'}, [32]byte{'h'}, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'g'}},
},
{
[32]byte{'i'},
[32]byte{'D'},
8,
[][32]byte{[32]byte{'i'}, [32]byte{'h'}, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'g'}},
},
{
[32]byte{'f'},
[32]byte{'D'},
11,
[][32]byte{[32]byte{'f'}, [32]byte{'e'}},
},
{
[32]byte{'h'},
[32]byte{'C'},
5,
[][32]byte{
[32]byte{'f'},
[32]byte{'e'},
[32]byte{'i'},
[32]byte{'h'},
[32]byte{'l'},
[32]byte{'k'},
[32]byte{'g'},
[32]byte{'d'},
},
},
{
[32]byte{'g'},
[32]byte{'E'},
8,
[][32]byte{[32]byte{'i'}, [32]byte{'h'}, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'g'}},
},
@@ -58,22 +97,45 @@ func TestPruneInvalid(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{'J'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{'E'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{'G'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{'K'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{'I'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'L'}, 1, 1))
roots, err := f.store.removeNode(context.Background(), tc.root)
roots, err := f.store.setOptimisticToInvalid(context.Background(), tc.root, tc.payload)
require.NoError(t, err)
require.DeepEqual(t, tc.wantedRoots, roots)
require.Equal(t, tc.wantedNodeNumber, f.NodeCount())
}
}
// This is a regression test (10445)
func TestSetOptimisticToInvalid_ProposerBoost(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1))
f.store.proposerBoostLock.Lock()
f.store.proposerBoostRoot = [32]byte{'c'}
f.store.previousProposerBoostScore = 10
f.store.previousProposerBoostRoot = [32]byte{'b'}
f.store.proposerBoostLock.Unlock()
_, err := f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'A'})
require.NoError(t, err)
f.store.proposerBoostLock.RLock()
require.Equal(t, uint64(0), f.store.previousProposerBoostScore)
require.DeepEqual(t, [32]byte{}, f.store.proposerBoostRoot)
require.DeepEqual(t, params.BeaconConfig().ZeroHash, f.store.previousProposerBoostRoot)
f.store.proposerBoostLock.RUnlock()
}


@@ -121,6 +121,7 @@ func (s *Store) insert(ctx context.Context,
payloadHash: payloadHash,
}
s.nodeByPayload[payloadHash] = n
s.nodeByRoot[root] = n
if parent != nil {
parent.children = append(parent.children, n)


@@ -107,7 +107,8 @@ func TestStore_Insert(t *testing.T) {
// The new node does not have a parent.
treeRootNode := &Node{slot: 0, root: indexToHash(0)}
nodeByRoot := map[[32]byte]*Node{indexToHash(0): treeRootNode}
s := &Store{nodeByRoot: nodeByRoot, treeRootNode: treeRootNode}
nodeByPayload := map[[32]byte]*Node{indexToHash(0): treeRootNode}
s := &Store{nodeByRoot: nodeByRoot, treeRootNode: treeRootNode, nodeByPayload: nodeByPayload}
payloadHash := [32]byte{'a'}
require.NoError(t, s.insert(context.Background(), 100, indexToHash(100), indexToHash(0), payloadHash, 1, 1))
assert.Equal(t, 2, len(s.nodeByRoot), "Did not insert block")


@@ -26,6 +26,7 @@ type Store struct {
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node
nodeByRoot map[[fieldparams.RootLength]byte]*Node // nodes indexed by roots.
nodeByPayload map[[fieldparams.RootLength]byte]*Node // nodes indexed by payload Hash
nodesLock sync.RWMutex
proposerBoostLock sync.RWMutex
}


@@ -24,7 +24,7 @@ type ForkChoicer interface {
type HeadRetriever interface {
Head(context.Context, types.Epoch, [32]byte, []uint64, types.Epoch) ([32]byte, error)
Tips() ([][32]byte, []types.Slot)
IsOptimistic(ctx context.Context, root [32]byte) (bool, error)
IsOptimistic(root [32]byte) (bool, error)
}
// BlockProcessor processes the block that's used for accounting fork choice.
@@ -71,5 +71,5 @@ type Getter interface {
// Setter allows to set forkchoice information
type Setter interface {
SetOptimisticToValid(context.Context, [fieldparams.RootLength]byte) error
SetOptimisticToInvalid(context.Context, [fieldparams.RootLength]byte) ([][32]byte, error)
SetOptimisticToInvalid(context.Context, [fieldparams.RootLength]byte, [fieldparams.RootLength]byte) ([][32]byte, error)
}


@@ -5,11 +5,11 @@ import "errors"
var errUnknownFinalizedRoot = errors.New("unknown finalized root")
var errUnknownJustifiedRoot = errors.New("unknown justified root")
var errInvalidNodeIndex = errors.New("node index is invalid")
var errInvalidFinalizedNode = errors.New("invalid finalized block on chain")
var ErrUnknownNodeRoot = errors.New("unknown block root")
var errInvalidJustifiedIndex = errors.New("justified index is invalid")
var errInvalidBestChildIndex = errors.New("best child index is invalid")
var errInvalidBestDescendantIndex = errors.New("best descendant index is invalid")
var errInvalidParentDelta = errors.New("parent delta is invalid")
var errInvalidNodeDelta = errors.New("node delta is invalid")
var errInvalidDeltaLength = errors.New("delta length is invalid")
var errInvalidSyncedTips = errors.New("invalid synced tips")
var errInvalidOptimisticStatus = errors.New("invalid optimistic status")


@@ -85,12 +85,9 @@ func copyNode(node *Node) *Node {
return &Node{}
}
copiedRoot := [32]byte{}
copy(copiedRoot[:], node.root[:])
return &Node{
slot: node.slot,
root: copiedRoot,
root: node.root,
parent: node.parent,
payloadHash: node.payloadHash,
justifiedEpoch: node.justifiedEpoch,
@@ -98,5 +95,6 @@ func copyNode(node *Node) *Node {
weight: node.weight,
bestChild: node.bestChild,
bestDescendant: node.bestDescendant,
status: node.status,
}
}


@@ -48,16 +48,10 @@ var (
Help: "The number of times pruning happened.",
},
)
lastSyncedTipSlot = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "proto_array_last_synced_tip_slot",
Help: "The slot of the last fully validated block added to the proto array.",
},
)
syncedTipsCount = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "proto_array_synced_tips_count",
Help: "The number of elements in the syncedTips structure.",
validatedNodesCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "proto_array_validated_nodes_count",
Help: "The number of nodes that have been fully validated.",
},
)
)


@@ -43,8 +43,3 @@ func (n *Node) BestChild() uint64 {
func (n *Node) BestDescendant() uint64 {
return n.bestDescendant
}
// Graffiti of the fork choice node.
func (n *Node) Graffiti() [32]byte {
return n.graffiti
}


@@ -16,7 +16,6 @@ func TestNode_Getters(t *testing.T) {
weight := uint64(10000)
bestChild := uint64(5)
bestDescendant := uint64(4)
graffiti := [32]byte{'b'}
n := &Node{
slot: slot,
root: root,
@@ -26,7 +25,6 @@ func TestNode_Getters(t *testing.T) {
weight: weight,
bestChild: bestChild,
bestDescendant: bestDescendant,
graffiti: graffiti,
}
require.Equal(t, slot, n.Slot())
@@ -37,5 +35,4 @@ func TestNode_Getters(t *testing.T) {
require.Equal(t, weight, n.Weight())
require.Equal(t, bestChild, n.BestChild())
require.Equal(t, bestDescendant, n.BestDescendant())
require.Equal(t, graffiti, n.Graffiti())
}


@@ -3,314 +3,160 @@ package protoarray
import (
"context"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
)
// This returns the minimum and maximum slot of the synced_tips tree
func (f *ForkChoice) boundarySyncedTips() (types.Slot, types.Slot) {
f.syncedTips.RLock()
defer f.syncedTips.RUnlock()
min := params.BeaconConfig().FarFutureSlot
max := types.Slot(0)
for _, slot := range f.syncedTips.validatedTips {
if slot > max {
max = slot
}
if slot < min {
min = slot
}
}
return min, max
}
// IsOptimistic returns true if this node is optimistically synced
// An optimistically synced block is synced as usual, but its
// execution payload is not validated, while the EL is still syncing.
// This function returns an error if the block is not found in the fork choice
// store
func (f *ForkChoice) IsOptimistic(ctx context.Context, root [32]byte) (bool, error) {
if ctx.Err() != nil {
return false, ctx.Err()
}
func (f *ForkChoice) IsOptimistic(root [32]byte) (bool, error) {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
index, ok := f.store.nodesIndices[root]
if !ok {
f.store.nodesLock.RUnlock()
return false, ErrUnknownNodeRoot
}
node := f.store.nodes[index]
slot := node.slot
// If the node is a synced tip, then it's fully validated
f.syncedTips.RLock()
_, ok = f.syncedTips.validatedTips[root]
if ok {
f.syncedTips.RUnlock()
f.store.nodesLock.RUnlock()
return false, nil
}
f.syncedTips.RUnlock()
// If the slot is higher than the max synced tip, it's optimistic
min, max := f.boundarySyncedTips()
if slot > max {
f.store.nodesLock.RUnlock()
return true, nil
}
// If the slot is lower than the min synced tip, it's fully validated
if slot <= min {
f.store.nodesLock.RUnlock()
return false, nil
}
// if the node is a leaf of the Fork Choice tree, then it's
// optimistic
childIndex := node.BestChild()
if childIndex == NonExistentNode {
f.store.nodesLock.RUnlock()
return true, nil
}
// recurse to the child
child := f.store.nodes[childIndex]
root = child.root
f.store.nodesLock.RUnlock()
return f.IsOptimistic(ctx, root)
}
// This function returns the index of sync tip node that's ancestor to the input node.
// In the event of none, `NonExistentNode` is returned.
// This internal method assumes the caller holds a lock on syncedTips and s.nodesLock
func (s *Store) findSyncedTip(ctx context.Context, node *Node, syncedTips *optimisticStore) (uint64, error) {
for {
if ctx.Err() != nil {
return 0, ctx.Err()
}
if _, ok := syncedTips.validatedTips[node.root]; ok {
return s.nodesIndices[node.root], nil
}
if node.parent == NonExistentNode {
return NonExistentNode, nil
}
node = s.nodes[node.parent]
}
return node.status == syncing, nil
}
// SetOptimisticToValid is called with the root of a block that was returned as
// VALID by the EL. This routine recomputes and updates the synced_tips map to
// account for this new tip.
// WARNING: This method returns an error if the root is not found in forkchoice or
// if the root is not a leaf of the fork choice tree.
// VALID by the EL.
// WARNING: This method returns an error if the root is not found in forkchoice
func (f *ForkChoice) SetOptimisticToValid(ctx context.Context, root [32]byte) error {
f.store.nodesLock.RLock()
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
// We can only update if given root is in Fork Choice
index, ok := f.store.nodesIndices[root]
if !ok {
return errInvalidNodeIndex
return ErrUnknownNodeRoot
}
node := f.store.nodes[index]
f.store.nodesLock.RUnlock()
// Stop early if the node is Valid
optimistic, err := f.IsOptimistic(ctx, root)
if err != nil {
return err
}
if !optimistic {
return nil
}
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
// Cache root and slot to validated tips
newTips := make(map[[32]byte]types.Slot)
newValidSlot := node.slot
newTips[root] = newValidSlot
// Compute the full valid path from the given node to its previous synced tip
// This path will now consist of fully validated blocks. Notice that
// the previous tip may have been outside the Fork Choice store.
// In this case, only one block can be in syncedTips as the whole
// Fork Choice would be a descendant of this block.
validPath := make(map[uint64]bool)
validPath[index] = true
for {
for node := f.store.nodes[index]; node.status == syncing; node = f.store.nodes[index] {
if ctx.Err() != nil {
return ctx.Err()
}
parentIndex := node.parent
if parentIndex == NonExistentNode {
node.status = valid
index = node.parent
if index == NonExistentNode {
break
}
if parentIndex >= uint64(len(f.store.nodes)) {
return errInvalidNodeIndex
}
node = f.store.nodes[parentIndex]
_, ok = f.syncedTips.validatedTips[node.root]
if ok {
break
}
validPath[parentIndex] = true
validatedNodesCount.Inc()
}
// Retrieve the list of leaves in the Fork Choice
// These are all the nodes that have NonExistentNode as best child.
leaves, err := f.store.leaves()
if err != nil {
return err
}
// For each leaf, recompute the new tip.
for _, i := range leaves {
node = f.store.nodes[i]
j := i
for {
if ctx.Err() != nil {
return ctx.Err()
}
// Stop if we reached the previous tip
_, ok = f.syncedTips.validatedTips[node.root]
if ok {
newTips[node.root] = node.slot
break
}
// Stop if we reach valid path
_, ok = validPath[j]
if ok {
newTips[node.root] = node.slot
break
}
j = node.parent
if j == NonExistentNode {
break
}
if j >= uint64(len(f.store.nodes)) {
return errInvalidNodeIndex
}
node = f.store.nodes[j]
}
}
f.syncedTips.validatedTips = newTips
lastSyncedTipSlot.Set(float64(newValidSlot))
syncedTipsCount.Set(float64(len(newTips)))
return nil
}
// SetOptimisticToInvalid updates the synced_tips map when the block with the given root becomes INVALID.
func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root [32]byte) ([][32]byte, error) {
// SetOptimisticToInvalid marks the block with the given root, and its optimistic descendants, as INVALID.
// It takes two parameters: the root of the INVALID block and the payload hash
// of the last valid block.
func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root, payloadHash [32]byte) ([][32]byte, error) {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
invalidRoots := make([][32]byte, 0)
idx, ok := f.store.nodesIndices[root]
// We only support setting invalid a node existing in Forkchoice
invalidIndex, ok := f.store.nodesIndices[root]
if !ok {
return invalidRoots, errInvalidNodeIndex
return invalidRoots, ErrUnknownNodeRoot
}
node := f.store.nodes[idx]
// We only support changing status for the tips in Fork Choice store.
if node.bestChild != NonExistentNode {
return invalidRoots, errInvalidNodeIndex
node := f.store.nodes[invalidIndex]
lastValidIndex, ok := f.store.payloadIndices[payloadHash]
if !ok || lastValidIndex == NonExistentNode {
return invalidRoots, errInvalidFinalizedNode
}
parentIndex := node.parent
// This should not happen
if parentIndex == NonExistentNode {
return invalidRoots, errInvalidNodeIndex
// Check if last valid hash is an ancestor of the passed node
firstInvalidIndex := node.parent
for ; firstInvalidIndex != NonExistentNode && firstInvalidIndex != lastValidIndex; firstInvalidIndex = node.parent {
node = f.store.nodes[firstInvalidIndex]
}
// Update the weights of the nodes subtracting the INVALID node's weight
// if the last valid hash is not an ancestor of the invalid block, we
// just remove the invalid block.
if node.parent != lastValidIndex {
node = f.store.nodes[invalidIndex]
firstInvalidIndex = invalidIndex
lastValidIndex = node.parent
if lastValidIndex == NonExistentNode {
return invalidRoots, errInvalidFinalizedNode
}
} else {
firstInvalidIndex = f.store.nodesIndices[node.root]
}
// Update the weights of the nodes subtracting the first INVALID node's weight
weight := node.weight
node = f.store.nodes[parentIndex]
for {
if ctx.Err() != nil {
return invalidRoots, ctx.Err()
}
node.weight -= weight
if node.parent == NonExistentNode {
break
}
node = f.store.nodes[node.parent]
}
parent := copyNode(f.store.nodes[parentIndex])
// delete the invalid node, order is important
f.store.nodes = append(f.store.nodes[:idx], f.store.nodes[idx+1:]...)
delete(f.store.nodesIndices, root)
invalidRoots = append(invalidRoots, root)
// Fix parent and best child for each node
for _, node := range f.store.nodes {
if node.parent == NonExistentNode {
node.parent = NonExistentNode
} else if node.parent > idx {
node.parent -= 1
}
if node.bestChild == NonExistentNode || node.bestChild == idx {
node.bestChild = NonExistentNode
} else if node.bestChild > idx {
node.bestChild -= 1
}
if node.bestDescendant == NonExistentNode || node.bestDescendant == idx {
node.bestDescendant = NonExistentNode
} else if node.bestDescendant > idx {
node.bestDescendant -= 1
}
var validNode *Node
for index := lastValidIndex; index != NonExistentNode; index = validNode.parent {
validNode = f.store.nodes[index]
validNode.weight -= weight
}
// Update the parent's best child and best descendant if necessary.
if parent.bestChild == idx || parent.bestDescendant == idx {
for childIndex, child := range f.store.nodes {
if child.parent == parentIndex {
err := f.store.updateBestChildAndDescendant(
parentIndex, uint64(childIndex))
if err != nil {
return invalidRoots, err
}
break
// Find the current proposer boost (it should be set to zero if an
// INVALID block was boosted)
f.store.proposerBoostLock.RLock()
boostRoot := f.store.proposerBoostRoot
previousBoostRoot := f.store.previousProposerBoostRoot
f.store.proposerBoostLock.RUnlock()
// Remove the invalid roots from our store maps and adjust their weight
// to zero
boosted := node.root == boostRoot
previouslyBoosted := node.root == previousBoostRoot
invalidIndices := map[uint64]bool{firstInvalidIndex: true}
node.status = invalid
node.weight = 0
delete(f.store.nodesIndices, node.root)
delete(f.store.canonicalNodes, node.root)
delete(f.store.payloadIndices, node.payloadHash)
for index := firstInvalidIndex + 1; index < uint64(len(f.store.nodes)); index++ {
invalidNode := f.store.nodes[index]
if _, ok := invalidIndices[invalidNode.parent]; !ok {
continue
}
if invalidNode.status == valid {
return invalidRoots, errInvalidOptimisticStatus
}
if !boosted && invalidNode.root == boostRoot {
boosted = true
}
if !previouslyBoosted && invalidNode.root == previousBoostRoot {
previouslyBoosted = true
}
invalidNode.status = invalid
invalidIndices[index] = true
invalidNode.weight = 0
delete(f.store.nodesIndices, invalidNode.root)
delete(f.store.canonicalNodes, invalidNode.root)
delete(f.store.payloadIndices, invalidNode.payloadHash)
}
if boosted {
if err := f.ResetBoostedProposerRoot(ctx); err != nil {
return invalidRoots, err
}
}
if previouslyBoosted {
f.store.proposerBoostLock.Lock()
f.store.previousProposerBoostRoot = params.BeaconConfig().ZeroHash
f.store.previousProposerBoostScore = 0
f.store.proposerBoostLock.Unlock()
}
for index := range invalidIndices {
invalidRoots = append(invalidRoots, f.store.nodes[index].root)
}
// Update the best child and descendant
for i := len(f.store.nodes) - 1; i >= 0; i-- {
n := f.store.nodes[i]
if n.parent != NonExistentNode {
if err := f.store.updateBestChildAndDescendant(n.parent, uint64(i)); err != nil {
return invalidRoots, err
}
}
}
// Return early if the parent is not a synced_tip.
f.syncedTips.Lock()
defer f.syncedTips.Unlock()
parentRoot := parent.root
_, ok = f.syncedTips.validatedTips[parentRoot]
if !ok {
return invalidRoots, nil
}
leaves, err := f.store.leaves()
if err != nil {
return invalidRoots, err
}
for _, i := range leaves {
node = f.store.nodes[i]
for {
if ctx.Err() != nil {
return invalidRoots, ctx.Err()
}
// Return early if the parent is still a synced tip
if node.root == parentRoot {
return invalidRoots, nil
}
_, ok = f.syncedTips.validatedTips[node.root]
if ok {
break
}
if node.parent == NonExistentNode {
break
}
node = f.store.nodes[node.parent]
}
}
delete(f.syncedTips.validatedTips, parentRoot)
syncedTipsCount.Set(float64(len(f.syncedTips.validatedTips)))
return invalidRoots, nil
}
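The same scenario as a hedged sketch for this array-backed implementation, again in the style of the package's tests below. Unlike the doubly-linked-tree store, which removes nodes outright, this version keeps invalid nodes in the backing slice but zeroes their weight, flips their status to invalid, and deletes them from the root and payload lookup maps.

func TestSetOptimisticToInvalid_ProtoArraySketch(t *testing.T) {
	ctx := context.Background()
	f := setup(1, 1)
	require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1))
	require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1))
	require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1))
	// Invalidating c with last valid payload 'A' marks b and c INVALID:
	// both roots come back and can no longer be looked up in the store.
	roots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'A'})
	require.NoError(t, err)
	require.Equal(t, 2, len(roots))
}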


@@ -10,116 +10,38 @@ import (
"github.com/prysmaticlabs/prysm/testing/require"
)
// We test the algorithm to check the optimistic status of a node. The
// status for this test is the following branching diagram
//
// -- E -- F
// /
// -- C -- D
// /
// 0 -- 1 -- A -- B -- J -- K
// \ /
// -- G -- H -- I
//
// Here nodes 0, 1, A, B, C, D are fully validated and nodes
// E, F, G, H, J, K are optimistic.
// Synced Tips are nodes B, C, D
// nodes 0 and 1 are outside the Fork Choice Store.
func slicesEqual(a, b [][32]byte) bool {
if len(a) != len(b) {
return false
}
func TestOptimistic(t *testing.T) {
mapA := make(map[[32]byte]bool, len(a))
for _, root := range a {
mapA[root] = true
}
for _, root := range b {
_, ok := mapA[root]
if !ok {
return false
}
}
return true
}
func TestOptimistic_Outside_ForkChoice(t *testing.T) {
root0 := bytesutil.ToBytes32([]byte("hello0"))
root1 := bytesutil.ToBytes32([]byte("hello1"))
nodeA := &Node{
slot: types.Slot(100),
root: bytesutil.ToBytes32([]byte("helloA")),
bestChild: 1,
}
nodeB := &Node{
slot: types.Slot(101),
root: bytesutil.ToBytes32([]byte("helloB")),
bestChild: 2,
parent: 0,
}
nodeC := &Node{
slot: types.Slot(102),
root: bytesutil.ToBytes32([]byte("helloC")),
bestChild: 3,
parent: 1,
}
nodeD := &Node{
slot: types.Slot(103),
root: bytesutil.ToBytes32([]byte("helloD")),
bestChild: NonExistentNode,
parent: 2,
}
nodeE := &Node{
slot: types.Slot(103),
root: bytesutil.ToBytes32([]byte("helloE")),
bestChild: 5,
parent: 2,
}
nodeF := &Node{
slot: types.Slot(104),
root: bytesutil.ToBytes32([]byte("helloF")),
bestChild: NonExistentNode,
parent: 4,
}
nodeG := &Node{
slot: types.Slot(102),
root: bytesutil.ToBytes32([]byte("helloG")),
bestChild: 7,
parent: 1,
}
nodeH := &Node{
slot: types.Slot(103),
root: bytesutil.ToBytes32([]byte("helloH")),
bestChild: 8,
parent: 6,
}
nodeI := &Node{
slot: types.Slot(104),
root: bytesutil.ToBytes32([]byte("helloI")),
bestChild: NonExistentNode,
parent: 7,
}
nodeJ := &Node{
slot: types.Slot(103),
root: bytesutil.ToBytes32([]byte("helloJ")),
bestChild: 10,
parent: 6,
}
nodeK := &Node{
slot: types.Slot(104),
root: bytesutil.ToBytes32([]byte("helloK")),
bestChild: NonExistentNode,
parent: 9,
status: valid,
}
nodes := []*Node{
nodeA,
nodeB,
nodeC,
nodeD,
nodeE,
nodeF,
nodeG,
nodeH,
nodeI,
nodeJ,
nodeK,
}
ni := map[[32]byte]uint64{
nodeA.root: 0,
nodeB.root: 1,
nodeC.root: 2,
nodeD.root: 3,
nodeE.root: 4,
nodeF.root: 5,
nodeG.root: 6,
nodeH.root: 7,
nodeI.root: 8,
nodeJ.root: 9,
nodeK.root: 10,
}
s := &Store{
@@ -127,82 +49,14 @@ func TestOptimistic(t *testing.T) {
nodesIndices: ni,
}
tips := map[[32]byte]types.Slot{
nodeB.root: nodeB.slot,
nodeC.root: nodeC.slot,
nodeD.root: nodeD.slot,
}
st := &optimisticStore{
validatedTips: tips,
}
f := &ForkChoice{
store: s,
syncedTips: st,
store: s,
}
ctx := context.Background()
// We test the implementation of boundarySyncedTips
min, max := f.boundarySyncedTips()
require.Equal(t, min, types.Slot(101), "minimum tip slot is different")
require.Equal(t, max, types.Slot(103), "maximum tip slot is different")
// We test first nodes outside the Fork Choice store
_, err := f.IsOptimistic(ctx, root0)
_, err := f.IsOptimistic(root0)
require.ErrorIs(t, ErrUnknownNodeRoot, err)
_, err = f.IsOptimistic(ctx, root1)
require.ErrorIs(t, ErrUnknownNodeRoot, err)
// We check all nodes in the Fork Choice store.
op, err := f.IsOptimistic(ctx, nodeA.root)
require.NoError(t, err)
require.Equal(t, op, false)
op, err = f.IsOptimistic(ctx, nodeB.root)
require.NoError(t, err)
require.Equal(t, op, false)
op, err = f.IsOptimistic(ctx, nodeC.root)
require.NoError(t, err)
require.Equal(t, op, false)
op, err = f.IsOptimistic(ctx, nodeD.root)
require.NoError(t, err)
require.Equal(t, op, false)
op, err = f.IsOptimistic(ctx, nodeE.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.IsOptimistic(ctx, nodeF.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.IsOptimistic(ctx, nodeG.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.IsOptimistic(ctx, nodeH.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.IsOptimistic(ctx, nodeI.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.IsOptimistic(ctx, nodeJ.root)
require.NoError(t, err)
require.Equal(t, op, true)
op, err = f.IsOptimistic(ctx, nodeK.root)
require.NoError(t, err)
require.Equal(t, op, true)
// request a write Lock to synced Tips regression #10289
f.syncedTips.Lock()
defer f.syncedTips.Unlock()
}
// This tests the algorithm to update syncedTips
// This tests the algorithm to update the optimistic status
// We start with the following diagram
//
// E -- F
@@ -213,165 +67,105 @@ func TestOptimistic(t *testing.T) {
// \ \
// J -- K -- L
//
// And every block in the Fork choice is optimistic. Synced_Tips contains a
// single block that is outside of Fork choice
// The Chain A -- B -- C -- D -- E is VALID.
//
func TestSetOptimisticToValid(t *testing.T) {
ctx := context.Background()
tests := []struct {
root [32]byte // the root of the new VALID block
testRoot [32]byte // root of the node we will test optimistic status
wantedOptimistic bool // wanted optimistic status for tested node
wantedErr error // wanted error message
}{
{
[32]byte{'i'},
[32]byte{'i'},
false,
nil,
},
{
[32]byte{'i'},
[32]byte{'f'},
true,
nil,
},
{
[32]byte{'g'},
[32]byte{'b'},
false,
nil,
},
{
[32]byte{'i'},
[32]byte{'h'},
false,
nil,
},
{
[32]byte{'b'},
[32]byte{'b'},
false,
nil,
},
{
[32]byte{'b'},
[32]byte{'h'},
true,
nil,
},
{
[32]byte{'b'},
[32]byte{'a'},
false,
nil,
},
{
[32]byte{'k'},
[32]byte{'k'},
false,
nil,
},
{
[32]byte{'k'},
[32]byte{'l'},
true,
nil,
},
{
[32]byte{'p'},
[32]byte{},
false,
ErrUnknownNodeRoot,
},
}
for _, tc := range tests {
f := setup(1, 1)
require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{'J'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{'E'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{'G'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{'K'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{'I'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'L'}, 1, 1))
require.NoError(t, f.SetOptimisticToValid(context.Background(), [32]byte{'e'}))
optimistic, err := f.IsOptimistic([32]byte{'b'})
require.NoError(t, err)
require.Equal(t, false, optimistic)
err = f.SetOptimisticToValid(context.Background(), tc.root)
if tc.wantedErr != nil {
require.ErrorIs(t, err, tc.wantedErr)
} else {
require.NoError(t, err)
optimistic, err := f.IsOptimistic(tc.testRoot)
require.NoError(t, err)
require.Equal(t, tc.wantedOptimistic, optimistic)
}
}
}
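The walk behind SetOptimisticToValid is worth spelling out: validity is inherited upwards, so marking a block VALID only requires climbing the parent chain until an already-valid ancestor is found. A minimal sketch under that assumption, using the Node and status types defined later in this diff (illustrative, not the committed implementation):

func (s *Store) markValidWalk(root [32]byte) error {
	idx, ok := s.nodesIndices[root]
	if !ok {
		return ErrUnknownNodeRoot
	}
	for idx != NonExistentNode {
		node := s.nodes[idx]
		if node.status == valid {
			break // ancestors of a valid node are valid already
		}
		node.status = valid
		idx = node.parent
	}
	return nil
}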
@@ -387,70 +181,81 @@ func TestSetOptimisticToValid(t *testing.T) {
// \ \
// J(1) -- K(1) -- L(0)
//
// And the chain A -- B -- C -- D -- E has been fully validated. The numbers in parentheses are
// the weights of the nodes.
//
func TestSetOptimisticToInvalid(t *testing.T) {
tests := []struct {
name string // test description
root [32]byte // the root of the new INVALID block
payload [32]byte // the payload of the last valid hash
newBestChild uint64
newBestDescendant uint64
newParentWeight uint64
returnedRoots [][32]byte
}{
{
"Remove tip, parent was valid",
[32]byte{'j'},
[32]byte{'B'},
3,
4,
8,
[][32]byte{[32]byte{'j'}},
},
{
"Remove tip, parent was optimistic",
[32]byte{'i'},
[32]byte{'H'},
NonExistentNode,
NonExistentNode,
1,
[][32]byte{[32]byte{'i'}},
},
{
"Remove tip, lvh is inner and valid",
[32]byte{'i'},
[32]byte{'D'},
6,
8,
3,
[][32]byte{[32]byte{'g'}, [32]byte{'h'}, [32]byte{'k'}, [32]byte{'i'}, [32]byte{'l'}},
},
{
"Remove inner, lvh is inner and optimistic",
[32]byte{'h'},
[32]byte{'G'},
10,
12,
2,
[][32]byte{[32]byte{'h'}, [32]byte{'i'}},
},
{
"Remove tip, lvh is inner and optimistic",
[32]byte{'l'},
[32]byte{'G'},
9,
11,
2,
[][32]byte{[32]byte{'k'}, [32]byte{'l'}},
},
{
"Remove tip, lvh is not an ancestor",
[32]byte{'j'},
[32]byte{'C'},
5,
12,
7,
[][32]byte{[32]byte{'j'}},
},
{
"Remove inner, lvh is not an ancestor",
[32]byte{'g'},
[32]byte{'J'},
NonExistentNode,
NonExistentNode,
1,
[][32]byte{[32]byte{'g'}, [32]byte{'h'}, [32]byte{'k'}, [32]byte{'i'}, [32]byte{'l'}},
},
}
for _, tc := range tests {
@@ -458,184 +263,71 @@ func TestSetOptimisticToInvalid(t *testing.T) {
f := setup(1, 1)
require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{'J'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{'E'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{'G'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{'K'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{'I'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'L'}, 1, 1))
weights := []uint64{10, 10, 9, 7, 1, 6, 2, 3, 1, 1, 1, 0, 0}
f.store.nodesLock.Lock()
for i, node := range f.store.nodes {
node.weight = weights[i]
}
// Make j be the best child and descendant of b
nodeB := f.store.nodes[2]
nodeB.bestChild = 4
nodeB.bestDescendant = 4
f.store.nodesLock.Unlock()
require.NoError(t, f.SetOptimisticToValid(ctx, [32]byte{'e'}))
roots, err := f.SetOptimisticToInvalid(ctx, tc.root, tc.payload)
require.NoError(t, err)
_, ok := f.store.nodesIndices[tc.root]
require.Equal(t, false, ok)
lvh := f.store.nodes[f.store.payloadIndices[tc.payload]]
require.Equal(t, true, slicesEqual(tc.returnedRoots, roots))
require.Equal(t, tc.newBestChild, lvh.bestChild)
require.Equal(t, tc.newBestDescendant, lvh.bestDescendant)
require.Equal(t, tc.newParentWeight, lvh.weight)
require.Equal(t, syncing, f.store.nodes[8].status /* F */)
require.Equal(t, valid, f.store.nodes[5].status /* E */)
}
}
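The committed SetOptimisticToInvalid also resolves the last valid hash and repairs weights and best-child pointers; the subtree collection at its core can be sketched as below. It relies on protoarray's invariant that children are appended after their parents, so a single forward pass finds every descendant (a sketch, not the exact implementation):

func (s *Store) collectInvalid(root [32]byte) ([][32]byte, error) {
	idx, ok := s.nodesIndices[root]
	if !ok {
		return nil, ErrUnknownNodeRoot
	}
	invalidated := map[uint64]bool{idx: true}
	roots := [][32]byte{s.nodes[idx].root}
	// A child always has a larger index than its parent.
	for i := idx + 1; i < uint64(len(s.nodes)); i++ {
		if invalidated[s.nodes[i].parent] {
			invalidated[i] = true
			roots = append(roots, s.nodes[i].root)
		}
	}
	return roots, nil
}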
func TestSetOptimisticToInvalid_InvalidRoots(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1))
_, err := f.SetOptimisticToInvalid(ctx, [32]byte{'p'}, [32]byte{'B'})
require.ErrorIs(t, ErrUnknownNodeRoot, err)
_, err = f.SetOptimisticToInvalid(ctx, [32]byte{'a'}, [32]byte{'p'})
require.ErrorIs(t, errInvalidFinalizedNode, err)
}
// This is a regression test (#10445).
func TestSetOptimisticToInvalid_ProposerBoost(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1))
f.store.proposerBoostLock.Lock()
f.store.proposerBoostRoot = [32]byte{'c'}
f.store.previousProposerBoostScore = 10
f.store.previousProposerBoostRoot = [32]byte{'b'}
f.store.proposerBoostLock.Unlock()
_, err := f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'A'})
require.NoError(t, err)
f.store.proposerBoostLock.RLock()
require.Equal(t, uint64(0), f.store.previousProposerBoostScore)
require.DeepEqual(t, [32]byte{}, f.store.proposerBoostRoot)
require.DeepEqual(t, params.BeaconConfig().ZeroHash, f.store.previousProposerBoostRoot)
f.store.proposerBoostLock.RUnlock()
}
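The regression test above pins down one rule: when the block carrying the proposer boost is invalidated, the boost state must be cleared entirely so the score cannot be re-applied on the next weight update. A sketch of that reset, assuming the store fields used in the test (the helper name is hypothetical):

func (s *Store) resetBoostForInvalid(invalidRoots [][32]byte) {
	s.proposerBoostLock.Lock()
	defer s.proposerBoostLock.Unlock()
	for _, r := range invalidRoots {
		if r == s.proposerBoostRoot {
			// Drop current and previous boost state together.
			s.proposerBoostRoot = [32]byte{}
			s.previousProposerBoostRoot = params.BeaconConfig().ZeroHash
			s.previousProposerBoostScore = 0
		}
	}
}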


@@ -30,43 +30,14 @@ func New(justifiedEpoch, finalizedEpoch types.Epoch, finalizedRoot [32]byte) *Fo
proposerBoostRoot: [32]byte{},
nodes: make([]*Node, 0),
nodesIndices: make(map[[32]byte]uint64),
payloadIndices: make(map[[32]byte]uint64),
canonicalNodes: make(map[[32]byte]bool),
pruneThreshold: defaultPruneThreshold,
}
b := make([]uint64, 0)
v := make([]Vote, 0)
return &ForkChoice{store: s, balances: b, votes: v}
}
// Head returns the head root from fork choice store.
@@ -159,7 +130,7 @@ func (f *ForkChoice) InsertOptimisticBlock(
// Prune prunes the fork choice store with the new finalized root. The store is only pruned if the input
// root is different than the current store finalized root, and the number of nodes in the store has met the prune threshold.
func (f *ForkChoice) Prune(ctx context.Context, finalizedRoot [32]byte) error {
return f.store.prune(ctx, finalizedRoot)
}
// HasNode returns true if the node exists in fork choice store,
@@ -375,6 +346,7 @@ func (s *Store) insert(ctx context.Context,
}
s.nodesIndices[root] = index
s.payloadIndices[payloadHash] = index
s.nodes = append(s.nodes, n)
// Update parent with the best child and descendant only if it's available.
@@ -443,6 +415,7 @@ func (s *Store) applyWeightChanges(
}
iProposerScore, err := pmath.Int(proposerScore)
if err != nil {
s.proposerBoostLock.Unlock()
return err
}
nodeDelta = nodeDelta + iProposerScore
@@ -598,7 +571,7 @@ func (s *Store) updateBestChildAndDescendant(parentIndex, childIndex uint64) err
// prune prunes the store with the new finalized root. The tree is only
// pruned if the input finalized root is different than the stored one and
// the number of nodes in the store has met the prune threshold.
func (s *Store) prune(ctx context.Context, finalizedRoot [32]byte) error {
_, span := trace.StartSpan(ctx, "protoArrayForkChoice.prune")
defer span.End()
@@ -618,18 +591,9 @@ func (s *Store) prune(ctx context.Context, finalizedRoot [32]byte, syncedTips *o
return nil
}
// Traverse through the node list starting from the finalized node at index 0.
// Nodes that are not branching off from the finalized node will be removed.
canonicalNodesMap := make(map[uint64]uint64, uint64(len(s.nodes))-finalizedIndex)
canonicalNodes := make([]*Node, 1, uint64(len(s.nodes))-finalizedIndex)
finalizedNode := s.nodes[finalizedIndex]
finalizedNode.parent = NonExistentNode
canonicalNodes[0] = finalizedNode
canonicalNodesMap[finalizedIndex] = uint64(0)
@@ -645,10 +609,6 @@ func (s *Store) prune(ctx context.Context, finalizedRoot [32]byte, syncedTips *o
} else {
// Remove the node that is not part of the finalized branch.
delete(s.nodesIndices, node.root)
}
}
s.nodesIndices[finalizedRoot] = uint64(0)
@@ -665,7 +625,6 @@ func (s *Store) prune(ctx context.Context, finalizedRoot [32]byte, syncedTips *o
s.nodes = canonicalNodes
prunedCount.Inc()
return nil
}
@@ -673,6 +632,10 @@ func (s *Store) prune(ctx context.Context, finalizedRoot [32]byte, syncedTips *o
// Any node with diff finalized or justified epoch than the ones in fork choice store
// should not be viable to head.
func (s *Store) leadsToViableHead(node *Node) (bool, error) {
if node.status == invalid {
return false, nil
}
var bestDescendantViable bool
bestDescendantIndex := node.bestDescendant
@@ -704,20 +667,6 @@ func (s *Store) viableForHead(node *Node) bool {
return justified && finalized
}
// Returns the list of leaves in the Fork Choice store.
// These are all the nodes that have NonExistentNode as best child.
// This internal method assumes that the caller holds a lock in s.nodesLock.
func (s *Store) leaves() ([]uint64, error) {
var leaves []uint64
for i := uint64(0); i < uint64(len(s.nodes)); i++ {
node := s.nodes[i]
if node.bestChild == NonExistentNode {
leaves = append(leaves, i)
}
}
return leaves, nil
}
// Tips returns all possible chain heads (leaves of fork choice tree).
// Heads roots and heads slots are returned.
func (f *ForkChoice) Tips() ([][32]byte, []types.Slot) {


@@ -100,7 +100,7 @@ func TestStore_Head_ContextCancelled(t *testing.T) {
func TestStore_Insert_UnknownParent(t *testing.T) {
// The new node does not have a parent.
s := &Store{nodesIndices: make(map[[32]byte]uint64), payloadIndices: make(map[[32]byte]uint64)}
require.NoError(t, s.insert(context.Background(), 100, [32]byte{'A'}, [32]byte{'B'}, params.BeaconConfig().ZeroHash, 1, 1))
assert.Equal(t, 1, len(s.nodes), "Did not insert block")
assert.Equal(t, 1, len(s.nodesIndices), "Did not insert block")
@@ -113,7 +113,7 @@ func TestStore_Insert_UnknownParent(t *testing.T) {
func TestStore_Insert_KnownParent(t *testing.T) {
// Similar to UnknownParent test, but this time the new node has a valid parent already in store.
// The new node builds on top of the parent.
s := &Store{nodesIndices: make(map[[32]byte]uint64), payloadIndices: make(map[[32]byte]uint64)}
s.nodes = []*Node{{}}
p := [32]byte{'B'}
s.nodesIndices[p] = 0
@@ -336,11 +336,10 @@ func TestStore_Prune_LessThanThreshold(t *testing.T) {
})
s := &Store{nodes: nodes, nodesIndices: indices, pruneThreshold: 100}
// Finalized root is at index 99 so everything before 99 should be pruned,
// but PruneThreshold is at 100 so nothing will be pruned.
require.NoError(t, s.prune(context.Background(), indexToHash(99)))
assert.Equal(t, 100, len(s.nodes), "Incorrect nodes count")
assert.Equal(t, 100, len(s.nodesIndices), "Incorrect node indices count")
}
@@ -377,10 +376,9 @@ func TestStore_Prune_MoreThanThreshold(t *testing.T) {
})
indices[indexToHash(uint64(numOfNodes-1))] = uint64(numOfNodes - 1)
s := &Store{nodes: nodes, nodesIndices: indices}
// Finalized root is at index 99 so everything before 99 should be pruned.
require.NoError(t, s.prune(context.Background(), indexToHash(99)))
assert.Equal(t, 1, len(s.nodes), "Incorrect nodes count")
assert.Equal(t, 1, len(s.nodesIndices), "Incorrect node indices count")
}
@@ -416,15 +414,14 @@ func TestStore_Prune_MoreThanOnce(t *testing.T) {
})
s := &Store{nodes: nodes, nodesIndices: indices}
// Finalized root is at index 11 so everything before 11 should be pruned.
require.NoError(t, s.prune(context.Background(), indexToHash(10)))
assert.Equal(t, 90, len(s.nodes), "Incorrect nodes count")
assert.Equal(t, 90, len(s.nodesIndices), "Incorrect node indices count")
// One more time.
require.NoError(t, s.prune(context.Background(), indexToHash(20)))
assert.Equal(t, 80, len(s.nodes), "Incorrect nodes count")
assert.Equal(t, 80, len(s.nodesIndices), "Incorrect node indices count")
}
@@ -460,7 +457,6 @@ func TestStore_Prune_NoDanglingBranch(t *testing.T) {
bestDescendant: NonExistentNode,
},
}
s := &Store{
pruneThreshold: 0,
nodes: nodes,
@@ -470,7 +466,7 @@ func TestStore_Prune_NoDanglingBranch(t *testing.T) {
indexToHash(uint64(2)): 2,
},
}
require.NoError(t, s.prune(context.Background(), indexToHash(uint64(1))))
require.Equal(t, len(s.nodes), 1)
}
@@ -486,9 +482,6 @@ func TestStore_Prune_NoDanglingBranch(t *testing.T) {
// J -- K -- L
//
//
func TestStore_PruneSyncedTips(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
@@ -505,19 +498,9 @@ func TestStore_PruneSyncedTips(t *testing.T) {
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, params.BeaconConfig().ZeroHash, 1, 1))
f.store.pruneThreshold = 0
require.NoError(t, f.Prune(ctx, [32]byte{'f'}))
require.Equal(t, 1, f.NodeCount())
}
func TestStore_LeadsToViableHead(t *testing.T) {
@@ -546,20 +529,6 @@ func TestStore_LeadsToViableHead(t *testing.T) {
}
}
func TestStore_SetSyncedTips(t *testing.T) {
f := setup(1, 1)
tips := make(map[[32]byte]types.Slot)
require.ErrorIs(t, errInvalidSyncedTips, f.SetSyncedTips(tips))
tips[bytesutil.ToBytes32([]byte{'a'})] = 1
require.NoError(t, f.SetSyncedTips(tips))
f.syncedTips.RLock()
defer f.syncedTips.RUnlock()
require.Equal(t, 1, len(f.syncedTips.validatedTips))
slot, ok := f.syncedTips.validatedTips[bytesutil.ToBytes32([]byte{'a'})]
require.Equal(t, true, ok)
require.Equal(t, types.Slot(1), slot)
}
func TestStore_ViableForHead(t *testing.T) {
tests := []struct {
n *Node


@@ -9,11 +9,10 @@ import (
// ForkChoice defines the overall fork choice store which includes all block nodes, validator's latest votes and balances.
type ForkChoice struct {
store *Store
votes []Vote // tracks individual validator's last vote.
votesLock sync.RWMutex
balances []uint64 // tracks individual validator's last justified balances.
}
// Store defines the fork choice store which includes block nodes and the last view of checkpoint information.
@@ -28,6 +27,7 @@ type Store struct {
nodes []*Node // list of block nodes, each node is a representation of one block.
nodesIndices map[[fieldparams.RootLength]byte]uint64 // the root of block node and the nodes index in the list.
canonicalNodes map[[fieldparams.RootLength]byte]bool // the canonical block nodes.
payloadIndices map[[fieldparams.RootLength]byte]uint64 // the payload hash of a block node and its index in the list.
nodesLock sync.RWMutex
proposerBoostLock sync.RWMutex
}
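payloadIndices exists so that the last valid hash reported by the execution engine, which is an execution payload hash rather than a beacon block root, can be resolved back to a fork choice node. A hypothetical lookup helper mirroring how the tests index into it (the error value matches the one the invalid-roots test expects for an unknown payload):

func (s *Store) nodeByPayload(payloadHash [fieldparams.RootLength]byte) (*Node, error) {
	s.nodesLock.RLock()
	defer s.nodesLock.RUnlock()
	idx, ok := s.payloadIndices[payloadHash]
	if !ok {
		return nil, errInvalidFinalizedNode
	}
	return s.nodes[idx], nil
}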
@@ -44,15 +44,17 @@ type Node struct {
weight uint64 // weight of this node.
bestChild uint64 // bestChild index of this node.
bestDescendant uint64 // bestDescendant of this node.
graffiti [fieldparams.RootLength]byte // graffiti of the block node.
status status // optimistic status of this node
}
// optimisticStore defines a structure that tracks the tips of the fully
// validated blocks tree.
type optimisticStore struct {
validatedTips map[[32]byte]types.Slot
sync.RWMutex
}
// enum used as optimistic status of a node
type status uint8
const (
syncing status = iota // the node is optimistic
valid // fully validated node
invalid // invalid execution payload
)
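With this tri-state status the synced-tips map becomes redundant: a block is optimistic exactly while its node is still syncing. A minimal sketch of the lookup the tests in this diff call, assuming the Store fields above (not the literal committed code):

func (f *ForkChoice) IsOptimistic(root [fieldparams.RootLength]byte) (bool, error) {
	f.store.nodesLock.RLock()
	defer f.store.nodesLock.RUnlock()
	idx, ok := f.store.nodesIndices[root]
	if !ok {
		return false, ErrUnknownNodeRoot
	}
	return f.store.nodes[idx].status == syncing, nil
}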
// Vote defines an individual validator's vote.
type Vote struct {


@@ -12,7 +12,6 @@ import (
func TestVotes_CanFindHead(t *testing.T) {
balances := []uint64{1, 1}
f := setup(1, 1)
// The head should always start at the finalized block.
r, err := f.Head(context.Background(), 1, params.BeaconConfig().ZeroHash, balances, 1)
@@ -249,7 +248,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// Verify pruning below the prune threshold does not affect head.
f.store.pruneThreshold = 1000
require.NoError(t, f.store.prune(context.Background(), indexToHash(5)))
assert.Equal(t, 11, len(f.store.nodes), "Incorrect nodes length after prune")
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)
@@ -273,7 +272,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// / \
// 9 10
f.store.pruneThreshold = 1
require.NoError(t, f.store.prune(context.Background(), indexToHash(5)))
assert.Equal(t, 5, len(f.store.nodes), "Incorrect nodes length after prune")
r, err = f.Head(context.Background(), 2, indexToHash(5), balances, 2)


@@ -18,6 +18,7 @@ go_library(
"//api/gateway:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
@@ -43,6 +44,7 @@ go_library(
"//beacon-chain/sync:go_default_library",
"//beacon-chain/sync/backfill:go_default_library",
"//beacon-chain/sync/checkpoint:go_default_library",
"//beacon-chain/sync/genesis:go_default_library",
"//beacon-chain/sync/initial-sync:go_default_library",
"//cmd:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
@@ -76,6 +78,8 @@ go_test(
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/powchain/testing:go_default_library",
"//cmd:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/params:go_default_library",


@@ -21,6 +21,7 @@ import (
apigateway "github.com/prysmaticlabs/prysm/api/gateway"
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
@@ -46,6 +47,7 @@ import (
regularsync "github.com/prysmaticlabs/prysm/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/beacon-chain/sync/backfill"
"github.com/prysmaticlabs/prysm/beacon-chain/sync/checkpoint"
"github.com/prysmaticlabs/prysm/beacon-chain/sync/genesis"
initialsync "github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync"
"github.com/prysmaticlabs/prysm/cmd"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
@@ -93,6 +95,7 @@ type BeaconNode struct {
slashingsPool slashings.PoolManager
syncCommitteePool synccommittee.Pool
depositCache *depositcache.DepositCache
proposerIdsCache *cache.ProposerPayloadIDsCache
stateFeed *event.Feed
blockFeed *event.Feed
opFeed *event.Feed
@@ -104,6 +107,7 @@ type BeaconNode struct {
finalizedStateAtStartUp state.BeaconState
serviceFlagOpts *serviceFlagOpts
blockchainFlagOpts []blockchain.Option
GenesisInitializer genesis.Initializer
CheckpointInitializer checkpoint.Initializer
}
@@ -150,6 +154,7 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
slasherBlockHeadersFeed: new(event.Feed),
slasherAttestationsFeed: new(event.Feed),
serviceFlagOpts: &serviceFlagOpts{},
proposerIdsCache: cache.NewProposerPayloadIDsCache(),
}
for _, opt := range opts {
@@ -381,20 +386,10 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
if err != nil {
return errors.Wrap(err, "could not create deposit cache")
}
b.depositCache = depositCache
if b.GenesisInitializer != nil {
if err := b.GenesisInitializer.Initialize(b.ctx, d); err != nil {
if err == db.ErrExistingGenesisState {
return errors.New("Genesis state flag specified but a genesis state " +
"exists already. Run again with --clear-db and/or ensure you are using the " +
@@ -403,6 +398,7 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
return errors.Wrap(err, "could not load genesis from file")
}
}
if err := b.db.EnsureEmbeddedGenesis(b.ctx); err != nil {
return err
}
@@ -581,7 +577,7 @@ func (b *BeaconNode) registerBlockchainService() error {
blockchain.WithDatabase(b.db),
blockchain.WithDepositCache(b.depositCache),
blockchain.WithChainStartFetcher(web3Service),
blockchain.WithExecutionEngineCaller(web3Service),
blockchain.WithAttestationPool(b.attestationPool),
blockchain.WithExitPool(b.exitPool),
blockchain.WithSlashingPool(b.slashingsPool),
@@ -592,6 +588,7 @@ func (b *BeaconNode) registerBlockchainService() error {
blockchain.WithStateGen(b.stateGen),
blockchain.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
blockchain.WithFinalizedStateAtStartUp(b.finalizedStateAtStartUp),
blockchain.WithProposerIdsCache(b.proposerIdsCache),
)
blockchainService, err := blockchain.NewService(b.ctx, opts...)
if err != nil {
@@ -808,7 +805,8 @@ func (b *BeaconNode) registerRPCService() error {
StateGen: b.stateGen,
EnableDebugRPCEndpoints: enableDebugRPCEndpoints,
MaxMsgSize: maxMsgSize,
ProposerIdsCache: b.proposerIdsCache,
ExecutionEngineCaller: web3Service,
})
return b.services.RegisterService(rpcService)


@@ -8,6 +8,8 @@ import (
"testing"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
mockPOW "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
"github.com/prysmaticlabs/prysm/cmd"
"github.com/prysmaticlabs/prysm/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
@@ -45,6 +47,11 @@ func TestNodeClose_OK(t *testing.T) {
// TestClearDB tests clearing the database
func TestClearDB(t *testing.T) {
hook := logTest.NewGlobal()
srv, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
srv.Stop()
})
tmp := filepath.Join(t.TempDir(), "datadirtest")
@@ -54,7 +61,9 @@ func TestClearDB(t *testing.T) {
set.Bool(cmd.ForceClearDB.Name, true, "force clear db")
context := cli.NewContext(&app, set, nil)
_, err = New(context, WithPowchainFlagOptions([]powchain.Option{
powchain.WithHttpEndpoints([]string{endpoint}),
}))
require.NoError(t, err)
require.LogsContain(t, hook, "Removing database")


@@ -3,12 +3,16 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"auth.go",
"block_cache.go",
"block_reader.go",
"check_transition_config.go",
"deposit.go",
"engine_client.go",
"errors.go",
"log.go",
"log_processing.go",
"metrics.go",
"options.go",
"prometheus.go",
"provider.go",
@@ -28,7 +32,6 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1:go_default_library",
"//beacon-chain/powchain/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native/v1:go_default_library",
@@ -56,6 +59,7 @@ go_library(
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//ethclient:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_golang_jwt_jwt_v4//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
@@ -70,10 +74,12 @@ go_test(
name = "go_default_test",
size = "medium",
srcs = [
"auth_test.go",
"block_cache_test.go",
"block_reader_test.go",
"check_transition_config_test.go",
"deposit_test.go",
"engine_client_test.go",
"init_test.go",
"log_processing_test.go",
"powchain_test.go",
@@ -92,8 +98,6 @@ go_test(
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1/mocks:go_default_library",
"//beacon-chain/powchain/testing:go_default_library",
"//beacon-chain/powchain/types:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
@@ -119,10 +123,15 @@ go_test(
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_ethereum_go_ethereum//trie:go_default_library",
"@com_github_golang_jwt_jwt_v4//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)


@@ -1,4 +1,4 @@
package powchain
import (
"net/http"


@@ -1,4 +1,4 @@
package powchain
import (
"net/http"
@@ -19,7 +19,7 @@ func TestJWTAuthTransport(t *testing.T) {
jwtSecret: secret,
}
client := &http.Client{
Timeout: DefaultRPCHTTPTimeout,
Transport: authTransport,
}
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
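For context on this test: the transport signs a fresh HS256 token per request, as the engine API's JWT authentication requires. A minimal sketch of such a round tripper, assuming the golang-jwt/jwt/v4 dependency added in the BUILD file above (field names here are illustrative):

type jwtTransport struct {
	underlyingTransport http.RoundTripper
	jwtSecret           []byte
}

func (t *jwtTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Issue a short-lived token carrying only the issued-at claim.
	claims := jwt.MapClaims{"iat": time.Now().Unix()}
	token, err := jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(t.jwtSecret)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	return t.underlyingTransport.RoundTrip(req)
}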


@@ -18,8 +18,6 @@ import (
"github.com/prysmaticlabs/prysm/testing/require"
)
var endpoint = "http://127.0.0.1"
func setDefaultMocks(service *Service) *Service {
service.eth1DataFetcher = &goodFetcher{}
service.httpLogger = &goodLogger{}
@@ -32,6 +30,11 @@ func TestLatestMainchainInfo_OK(t *testing.T) {
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -70,6 +73,11 @@ func TestLatestMainchainInfo_OK(t *testing.T) {
func TestBlockHashByHeight_ReturnsHash(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -97,6 +105,11 @@ func TestBlockHashByHeight_ReturnsHash(t *testing.T) {
func TestBlockHashByHeight_ReturnsError_WhenNoEth1Client(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -113,6 +126,11 @@ func TestBlockHashByHeight_ReturnsError_WhenNoEth1Client(t *testing.T) {
func TestBlockExists_ValidHash(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -144,6 +162,11 @@ func TestBlockExists_ValidHash(t *testing.T) {
func TestBlockExists_InvalidHash(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -158,6 +181,11 @@ func TestBlockExists_InvalidHash(t *testing.T) {
func TestBlockExists_UsesCachedBlockInfo(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -181,6 +209,11 @@ func TestBlockExists_UsesCachedBlockInfo(t *testing.T) {
func TestBlockExistsWithCache_UsesCachedHeaderInfo(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -202,6 +235,11 @@ func TestBlockExistsWithCache_UsesCachedHeaderInfo(t *testing.T) {
func TestBlockExistsWithCache_HeaderNotCached(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -218,6 +256,11 @@ func TestService_BlockNumberByTimestamp(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -245,6 +288,11 @@ func TestService_BlockNumberByTimestampLessTargetTime(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -278,6 +326,11 @@ func TestService_BlockNumberByTimestampMoreTargetTime(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -309,6 +362,11 @@ func TestService_BlockNumberByTimestampMoreTargetTime(t *testing.T) {
func TestService_BlockTimeByHeight_ReturnsError_WhenNoEth1Client(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),


@@ -7,12 +7,14 @@ import (
"math/big"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/holiman/uint256"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/config/params"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
"github.com/sirupsen/logrus"
)
var (
@@ -27,16 +29,12 @@ var (
// If there are any discrepancies, we must log errors to ensure users can resolve
// the problem and be ready for the merge transition.
func (s *Service) checkTransitionConfiguration(
ctx context.Context, blockNotifications chan *feed.Event,
) {
// If Bellatrix fork epoch is not set, we do not run this check.
if params.BeaconConfig().BellatrixForkEpoch == math.MaxUint64 {
return
}
i := new(big.Int)
i.SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
ttd := new(uint256.Int)
@@ -46,9 +44,9 @@ func (s *Service) checkTransitionConfiguration(
TerminalBlockHash: params.BeaconConfig().TerminalBlockHash[:],
TerminalBlockNumber: big.NewInt(0).Bytes(), // A value of 0 is recommended in the request.
}
err := s.ExchangeTransitionConfiguration(ctx, cfg)
if err != nil {
if errors.Is(err, ErrConfigMismatch) {
log.WithError(err).Fatal(configMismatchLog)
}
log.WithError(err).Error("Could not check configuration values between execution and consensus client")
@@ -58,6 +56,7 @@ func (s *Service) checkTransitionConfiguration(
// This serves as a heartbeat to ensure the execution client and Prysm are ready for the
// Bellatrix hard-fork transition.
ticker := time.NewTicker(checkTransitionPollingInterval)
hasTtdReached := false
defer ticker.Stop()
sub := s.cfg.stateNotifier.StateFeed().Subscribe(blockNotifications)
defer sub.Unsubscribe()
@@ -68,7 +67,11 @@ func (s *Service) checkTransitionConfiguration(
case <-sub.Err():
return
case ev := <-blockNotifications:
data, ok := ev.Data.(*statefeed.BlockProcessedData)
if !ok {
continue
}
isExecutionBlock, err := blocks.IsExecutionBlock(data.SignedBlock.Block().Body())
if err != nil {
log.WithError(err).Debug("Could not check whether signed block is execution block")
continue
@@ -77,9 +80,17 @@ func (s *Service) checkTransitionConfiguration(
log.Debug("PoS transition is complete, no longer checking for configuration changes")
return
}
case tm := <-ticker.C:
ctx, cancel := context.WithDeadline(ctx, tm.Add(DefaultRPCHTTPTimeout))
err = s.ExchangeTransitionConfiguration(ctx, cfg)
s.handleExchangeConfigurationError(err)
if !hasTtdReached {
hasTtdReached, err = s.logTtdStatus(ctx, ttd)
if err != nil {
log.WithError(err).Error("Could not log ttd status")
}
}
cancel()
}
}
}
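One detail in the loop above deserves emphasis: every tick derives its own deadline from the tick timestamp, so a stalled execution client cannot block the heartbeat indefinitely. The pattern in isolation, reusing the package's ticker interval and timeout constants (doPoll is a hypothetical stand-in for the RPC call):

ticker := time.NewTicker(checkTransitionPollingInterval)
defer ticker.Stop()
for tm := range ticker.C {
	// Bound this poll to the tick time plus the RPC timeout.
	ctx, cancel := context.WithDeadline(context.Background(), tm.Add(DefaultRPCHTTPTimeout))
	doPoll(ctx) // hypothetical call that honors ctx cancellation
	cancel()    // release the deadline timer before the next tick
}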
@@ -91,16 +102,39 @@ func (s *Service) handleExchangeConfigurationError(err error) {
if err == nil {
// If there is no error in checking the exchange configuration error, we clear
// the run error of the service if we had previously set it to ErrConfigMismatch.
if errors.Is(s.runError, ErrConfigMismatch) {
s.runError = nil
}
return
}
// If the error is a configuration mismatch, we set a runtime error in the service.
if errors.Is(err, ErrConfigMismatch) {
s.runError = err
log.WithError(err).Error(configMismatchLog)
return
}
log.WithError(err).Error("Could not check configuration values between execution and consensus client")
}
// Logs the terminal total difficulty status.
func (s *Service) logTtdStatus(ctx context.Context, ttd *uint256.Int) (bool, error) {
latest, err := s.LatestExecutionBlock(ctx)
if err != nil {
return false, err
}
if latest == nil {
return false, errors.New("latest block is nil")
}
latestTtd, err := hexutil.DecodeBig(latest.TotalDifficulty)
if err != nil {
return false, err
}
if latestTtd.Cmp(ttd.ToBig()) >= 0 {
return true, nil
}
log.WithFields(logrus.Fields{
"latestDifficulty": latestTtd.String(),
"terminalDifficulty": ttd.ToBig().String(),
}).Info("terminal difficulty has not been reached yet")
return false, nil
}
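A quick worked example of the comparison above, using the values from the test later in this diff: the engine reports total difficulty as a 0x-prefixed quantity, which must be decoded into an arbitrary-precision integer before comparing against the configured TTD:

latestTtd, _ := hexutil.DecodeBig("0x12345678") // 305419896
ttd := uint256.NewInt(24343)
reached := latestTtd.Cmp(ttd.ToBig()) >= 0 // true: 305419896 >= 24343
// With a configured TTD of 323423484 the same check yields false.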


@@ -2,21 +2,28 @@ package powchain
import (
"context"
"encoding/json"
"errors"
"net/http"
"net/http/httptest"
"testing"
"time"
mockChain2 "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/rpc"
"github.com/holiman/uint256"
mockChain "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks"
mocks "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
enginev1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"google.golang.org/protobuf/proto"
)
func Test_checkTransitionConfiguration(t *testing.T) {
@@ -31,14 +38,11 @@ func Test_checkTransitionConfiguration(t *testing.T) {
m := &mocks.EngineClient{}
m.Err = errors.New("something went wrong")
srv := setupTransitionConfigTest(t)
srv.cfg.stateNotifier = &mockChain.MockStateNotifier{}
checkTransitionPollingInterval = time.Millisecond
ctx, cancel := context.WithCancel(ctx)
go srv.checkTransitionConfiguration(ctx, make(chan *feed.Event, 1))
<-time.After(100 * time.Millisecond)
cancel()
require.LogsContain(t, hook, "Could not check configuration values")
@@ -48,16 +52,13 @@ func Test_checkTransitionConfiguration(t *testing.T) {
ctx := context.Background()
m := &mocks.EngineClient{}
m.Err = errors.New("something went wrong")
srv := setupTransitionConfigTest(t)
srv.cfg.stateNotifier = &mockChain.MockStateNotifier{}
checkTransitionPollingInterval = time.Millisecond
ctx, cancel := context.WithCancel(ctx)
exit := make(chan bool)
notification := make(chan *feed.Event)
go func() {
srv.checkTransitionConfiguration(ctx, notification)
exit <- true
@@ -72,8 +73,11 @@ func Test_checkTransitionConfiguration(t *testing.T) {
}},
)
require.NoError(t, err)
notification <- &feed.Event{
Data: &statefeed.BlockProcessedData{
SignedBlock: wrappedBlock,
},
Type: statefeed.BlockProcessed,
}
<-exit
cancel()
@@ -84,33 +88,110 @@ func Test_checkTransitionConfiguration(t *testing.T) {
func TestService_handleExchangeConfigurationError(t *testing.T) {
hook := logTest.NewGlobal()
t.Run("clears existing service error", func(t *testing.T) {
srv := setupTransitionConfigTest(t)
srv.isRunning = true
srv.runError = ErrConfigMismatch
srv.handleExchangeConfigurationError(nil)
require.Equal(t, true, srv.Status() == nil)
})
t.Run("does not clear existing service error if wrong kind", func(t *testing.T) {
srv := setupTransitionConfigTest(t)
srv.isRunning = true
err := errors.New("something else went wrong")
srv.runError = err
srv.handleExchangeConfigurationError(nil)
require.ErrorIs(t, err, srv.Status())
})
t.Run("sets service error on config mismatch", func(t *testing.T) {
srv := setupTransitionConfigTest(t)
srv.isRunning = true
srv.handleExchangeConfigurationError(ErrConfigMismatch)
require.Equal(t, ErrConfigMismatch, srv.Status())
require.LogsContain(t, hook, configMismatchLog)
})
t.Run("does not set service error if unrelated problem", func(t *testing.T) {
srv := setupTransitionConfigTest(t)
srv.isRunning = true
srv.handleExchangeConfigurationError(errors.New("foo"))
require.Equal(t, true, srv.Status() == nil)
require.LogsContain(t, hook, "Could not check configuration values")
})
}
func setupTransitionConfigTest(t testing.TB) *Service {
fix := fixtures()
request, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
resp, ok := proto.Clone(request).(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
// Change the terminal block hash.
h := common.BytesToHash([]byte("foo"))
resp.TerminalBlockHash = h[:]
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{
cfg: &config{},
}
service.rpcClient = rpcClient
return service
}
func TestService_logTtdStatus(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
resp := &pb.ExecutionBlock{TotalDifficulty: "0x12345678"}
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{
cfg: &config{},
}
service.rpcClient = rpcClient
ttd := new(uint256.Int)
reached, err := service.logTtdStatus(context.Background(), ttd.SetUint64(24343))
require.NoError(t, err)
require.Equal(t, true, reached)
reached, err = service.logTtdStatus(context.Background(), ttd.SetUint64(323423484))
require.NoError(t, err)
require.Equal(t, false, reached)
}
func emptyPayload() *pb.ExecutionPayload {
return &pb.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),


@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/signing"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
testing2 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/container/trie"
@@ -52,6 +53,11 @@ func TestDepositContractAddress_OK(t *testing.T) {
func TestProcessDeposit_OK(t *testing.T) {
beaconDB := testDB.SetupDB(t)
server, endpoint, err := testing2.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -76,6 +82,11 @@ func TestProcessDeposit_OK(t *testing.T) {
func TestProcessDeposit_InvalidMerkleBranch(t *testing.T) {
beaconDB := testDB.SetupDB(t)
server, endpoint, err := testing2.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -102,6 +113,11 @@ func TestProcessDeposit_InvalidMerkleBranch(t *testing.T) {
func TestProcessDeposit_InvalidPublicKey(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := testDB.SetupDB(t)
server, endpoint, err := testing2.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -138,6 +154,11 @@ func TestProcessDeposit_InvalidPublicKey(t *testing.T) {
func TestProcessDeposit_InvalidSignature(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := testDB.SetupDB(t)
server, endpoint, err := testing2.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -173,6 +194,11 @@ func TestProcessDeposit_InvalidSignature(t *testing.T) {
func TestProcessDeposit_UnableToVerify(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := testDB.SetupDB(t)
server, endpoint, err := testing2.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -205,6 +231,11 @@ func TestProcessDeposit_UnableToVerify(t *testing.T) {
func TestProcessDeposit_IncompleteDeposit(t *testing.T) {
beaconDB := testDB.SetupDB(t)
server, endpoint, err := testing2.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -267,6 +298,11 @@ func TestProcessDeposit_IncompleteDeposit(t *testing.T) {
func TestProcessDeposit_AllDepositedSuccessfully(t *testing.T) {
beaconDB := testDB.SetupDB(t)
server, endpoint, err := testing2.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),

View File

@@ -1,49 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"auth.go",
"client.go",
"errors.go",
"metrics.go",
"options.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//config/params:go_default_library",
"//proto/engine/v1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_golang_jwt_jwt_v4//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"auth_test.go",
"client_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/powchain/engine-api-client/v1/mocks:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_golang_jwt_jwt_v4//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -1,13 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["client.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks",
visibility = ["//visibility:public"],
deps = [
"//proto/engine/v1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -1,39 +0,0 @@
package v1
import (
"net/http"
)
// Option for configuring the engine API client.
type Option func(c *Client) error
type config struct {
httpClient *http.Client
}
func defaultConfig() *config {
return &config{
httpClient: &http.Client{
Timeout: DefaultTimeout,
},
}
}
// WithJWTSecret allows setting a JWT secret for authenticating
// the client via HTTP connections.
func WithJWTSecret(secret []byte) Option {
return func(c *Client) error {
if len(secret) == 0 {
return nil
}
authTransport := &jwtTransport{
underlyingTransport: http.DefaultTransport,
jwtSecret: secret,
}
c.cfg.httpClient = &http.Client{
Timeout: DefaultTimeout,
Transport: authTransport,
}
return nil
}
}

View File

@@ -1,14 +1,10 @@
// Package v1 defines an API client for the engine API defined in https://github.com/ethereum/execution-apis.
// This client is used by the Prysm consensus node to connect to an execution node as part of
// the Ethereum proof-of-stake machinery.
package v1
package powchain
import (
"bytes"
"context"
"fmt"
"math/big"
"net/url"
"time"
"github.com/ethereum/go-ethereum/common"
@@ -33,8 +29,6 @@ const (
ExecutionBlockByHashMethod = "eth_getBlockByHash"
// ExecutionBlockByNumberMethod request string for JSON-RPC.
ExecutionBlockByNumberMethod = "eth_getBlockByNumber"
// DefaultTimeout for HTTP.
DefaultTimeout = time.Second * 5
)
// ForkchoiceUpdatedResponse is the response kind received by the
@@ -44,9 +38,9 @@ type ForkchoiceUpdatedResponse struct {
PayloadId *pb.PayloadIDBytes `json:"payloadId"`
}
// Caller defines a client that can interact with an Ethereum
// EngineCaller defines a client that can interact with an Ethereum
// execution node's engine service via JSON-RPC.
type Caller interface {
type EngineCaller interface {
NewPayload(ctx context.Context, payload *pb.ExecutionPayload) ([]byte, error)
ForkchoiceUpdated(
ctx context.Context, state *pb.ForkchoiceState, attrs *pb.PayloadAttributes,
@@ -59,44 +53,8 @@ type Caller interface {
ExecutionBlockByHash(ctx context.Context, hash common.Hash) (*pb.ExecutionBlock, error)
}
// Client defines a new engine API client for the Prysm consensus node
// to interact with an Ethereum execution node.
type Client struct {
cfg *config
rpc *rpc.Client
}
// New returns a ready, engine API client from an endpoint and configuration options.
// Only http(s) and ipc (inter-process communication) URL schemes are supported.
func New(ctx context.Context, endpoint string, opts ...Option) (*Client, error) {
u, err := url.Parse(endpoint)
if err != nil {
return nil, err
}
c := &Client{
cfg: defaultConfig(),
}
for _, opt := range opts {
if err := opt(c); err != nil {
return nil, err
}
}
switch u.Scheme {
case "http", "https":
c.rpc, err = rpc.DialHTTPWithClient(endpoint, c.cfg.httpClient)
case "":
c.rpc, err = rpc.DialIPC(ctx, endpoint)
default:
return nil, errors.Wrapf(ErrUnsupportedScheme, "%q", u.Scheme)
}
if err != nil {
return nil, err
}
return c, nil
}
// NewPayload calls the engine_newPayloadV1 method via JSON-RPC.
func (c *Client) NewPayload(ctx context.Context, payload *pb.ExecutionPayload) ([]byte, error) {
func (s *Service) NewPayload(ctx context.Context, payload *pb.ExecutionPayload) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.NewPayload")
defer span.End()
start := time.Now()
@@ -105,7 +63,7 @@ func (c *Client) NewPayload(ctx context.Context, payload *pb.ExecutionPayload) (
}()
result := &pb.PayloadStatus{}
err := c.rpc.CallContext(ctx, result, NewPayloadMethod, payload)
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethod, payload)
if err != nil {
return nil, handleRPCError(err)
}
@@ -127,7 +85,7 @@ func (c *Client) NewPayload(ctx context.Context, payload *pb.ExecutionPayload) (
}
// ForkchoiceUpdated calls the engine_forkchoiceUpdatedV1 method via JSON-RPC.
func (c *Client) ForkchoiceUpdated(
func (s *Service) ForkchoiceUpdated(
ctx context.Context, state *pb.ForkchoiceState, attrs *pb.PayloadAttributes,
) (*pb.PayloadIDBytes, []byte, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ForkchoiceUpdated")
@@ -138,7 +96,7 @@ func (c *Client) ForkchoiceUpdated(
}()
result := &ForkchoiceUpdatedResponse{}
err := c.rpc.CallContext(ctx, result, ForkchoiceUpdatedMethod, state, attrs)
err := s.rpcClient.CallContext(ctx, result, ForkchoiceUpdatedMethod, state, attrs)
if err != nil {
return nil, nil, handleRPCError(err)
}
@@ -162,7 +120,7 @@ func (c *Client) ForkchoiceUpdated(
}
// GetPayload calls the engine_getPayloadV1 method via JSON-RPC.
func (c *Client) GetPayload(ctx context.Context, payloadId [8]byte) (*pb.ExecutionPayload, error) {
func (s *Service) GetPayload(ctx context.Context, payloadId [8]byte) (*pb.ExecutionPayload, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetPayload")
defer span.End()
start := time.Now()
@@ -171,12 +129,12 @@ func (c *Client) GetPayload(ctx context.Context, payloadId [8]byte) (*pb.Executi
}()
result := &pb.ExecutionPayload{}
err := c.rpc.CallContext(ctx, result, GetPayloadMethod, pb.PayloadIDBytes(payloadId))
err := s.rpcClient.CallContext(ctx, result, GetPayloadMethod, pb.PayloadIDBytes(payloadId))
return result, handleRPCError(err)
}
// ExchangeTransitionConfiguration calls the engine_exchangeTransitionConfigurationV1 method via JSON-RPC.
func (c *Client) ExchangeTransitionConfiguration(
func (s *Service) ExchangeTransitionConfiguration(
ctx context.Context, cfg *pb.TransitionConfiguration,
) error {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExchangeTransitionConfiguration")
@@ -186,7 +144,7 @@ func (c *Client) ExchangeTransitionConfiguration(
zeroBigNum := big.NewInt(0)
cfg.TerminalBlockNumber = zeroBigNum.Bytes()
result := &pb.TransitionConfiguration{}
if err := c.rpc.CallContext(ctx, result, ExchangeTransitionConfigurationMethod, cfg); err != nil {
if err := s.rpcClient.CallContext(ctx, result, ExchangeTransitionConfigurationMethod, cfg); err != nil {
return handleRPCError(err)
}
// We surface an error to the user if local configuration settings mismatch
@@ -218,12 +176,12 @@ func (c *Client) ExchangeTransitionConfiguration(
// LatestExecutionBlock fetches the latest execution engine block by calling
// eth_blockByNumber via JSON-RPC.
func (c *Client) LatestExecutionBlock(ctx context.Context) (*pb.ExecutionBlock, error) {
func (s *Service) LatestExecutionBlock(ctx context.Context) (*pb.ExecutionBlock, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.LatestExecutionBlock")
defer span.End()
result := &pb.ExecutionBlock{}
err := c.rpc.CallContext(
err := s.rpcClient.CallContext(
ctx,
result,
ExecutionBlockByNumberMethod,
@@ -235,12 +193,12 @@ func (c *Client) LatestExecutionBlock(ctx context.Context) (*pb.ExecutionBlock,
// ExecutionBlockByHash fetches an execution engine block by hash by calling
// eth_blockByHash via JSON-RPC.
func (c *Client) ExecutionBlockByHash(ctx context.Context, hash common.Hash) (*pb.ExecutionBlock, error) {
func (s *Service) ExecutionBlockByHash(ctx context.Context, hash common.Hash) (*pb.ExecutionBlock, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExecutionBlockByHash")
defer span.End()
result := &pb.ExecutionBlock{}
err := c.rpc.CallContext(ctx, result, ExecutionBlockByHashMethod, hash, false /* no full transaction objects */)
err := s.rpcClient.CallContext(ctx, result, ExecutionBlockByHashMethod, hash, false /* no full transaction objects */)
return result, handleRPCError(err)
}
@@ -249,6 +207,9 @@ func handleRPCError(err error) error {
if err == nil {
return nil
}
if isTimeout(err) {
return errors.Wrapf(ErrHTTPTimeout, "%s", err)
}
e, ok := err.(rpc.Error)
if !ok {
return errors.Wrap(err, "got an unexpected error")
@@ -277,3 +238,16 @@ func handleRPCError(err error) error {
return err
}
}
// ErrHTTPTimeout is returned when an http.Client request exceeds its configured timeout.
var ErrHTTPTimeout = errors.New("timeout from http.Client")
type httpTimeoutError interface {
Error() string
Timeout() bool
}
func isTimeout(e error) bool {
t, ok := e.(httpTimeoutError)
return ok && t.Timeout()
}
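
The new timeout branch classifies any error that exposes a Timeout() bool method and wraps it in ErrHTTPTimeout so callers can match on the class rather than the message. A self-contained sketch of the mechanism; the lower-case names are local stand-ins for the powchain identifiers above, and fakeTimeout is an illustrative substitute for a timed-out HTTP request:

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

var errHTTPTimeout = errors.New("timeout from http.Client")

// Any error exposing Timeout() bool matches, e.g. a *url.Error from net/http.
type httpTimeoutError interface {
	Error() string
	Timeout() bool
}

func isTimeout(e error) bool {
	t, ok := e.(httpTimeoutError)
	return ok && t.Timeout()
}

// fakeTimeout stands in for a request that hit its deadline.
type fakeTimeout struct{}

func (fakeTimeout) Error() string { return "context deadline exceeded" }
func (fakeTimeout) Timeout() bool { return true }

func main() {
	var err error = fakeTimeout{}
	if isTimeout(err) {
		err = errors.Wrapf(errHTTPTimeout, "%s", err)
	}
	// Callers test for the class with errors.Is, as Test_handleRPCError does.
	fmt.Println(errors.Is(err, errHTTPTimeout)) // true
}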

View File

@@ -1,4 +1,4 @@
package v1
package powchain
import (
"context"
@@ -15,7 +15,7 @@ import (
"github.com/ethereum/go-ethereum/rpc"
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks"
mocks "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
@@ -25,8 +25,8 @@ import (
)
var (
_ = Caller(&Client{})
_ = Caller(&mocks.EngineClient{})
_ = EngineCaller(&Service{})
_ = EngineCaller(&mocks.EngineClient{})
)
func TestClient_IPC(t *testing.T) {
@@ -34,8 +34,8 @@ func TestClient_IPC(t *testing.T) {
defer server.Stop()
rpcClient := rpc.DialInProc(server)
defer rpcClient.Close()
client := &Client{}
client.rpc = rpcClient
srv := &Service{}
srv.rpcClient = rpcClient
ctx := context.Background()
fix := fixtures()
@@ -43,14 +43,14 @@ func TestClient_IPC(t *testing.T) {
want, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
payloadId := [8]byte{1}
resp, err := client.GetPayload(ctx, payloadId)
resp, err := srv.GetPayload(ctx, payloadId)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
t.Run(ForkchoiceUpdatedMethod, func(t *testing.T) {
want, ok := fix["ForkchoiceUpdatedResponse"].(*ForkchoiceUpdatedResponse)
require.Equal(t, true, ok)
payloadID, validHash, err := client.ForkchoiceUpdated(ctx, &pb.ForkchoiceState{}, &pb.PayloadAttributes{})
payloadID, validHash, err := srv.ForkchoiceUpdated(ctx, &pb.ForkchoiceState{}, &pb.PayloadAttributes{})
require.NoError(t, err)
require.DeepEqual(t, want.Status.LatestValidHash, validHash)
require.DeepEqual(t, want.PayloadId, payloadID)
@@ -60,20 +60,20 @@ func TestClient_IPC(t *testing.T) {
require.Equal(t, true, ok)
req, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
latestValidHash, err := client.NewPayload(ctx, req)
latestValidHash, err := srv.NewPayload(ctx, req)
require.NoError(t, err)
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
})
t.Run(ExchangeTransitionConfigurationMethod, func(t *testing.T) {
want, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
err := client.ExchangeTransitionConfiguration(ctx, want)
err := srv.ExchangeTransitionConfiguration(ctx, want)
require.NoError(t, err)
})
t.Run(ExecutionBlockByNumberMethod, func(t *testing.T) {
want, ok := fix["ExecutionBlock"].(*pb.ExecutionBlock)
require.Equal(t, true, ok)
resp, err := client.LatestExecutionBlock(ctx)
resp, err := srv.LatestExecutionBlock(ctx)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
@@ -81,7 +81,7 @@ func TestClient_IPC(t *testing.T) {
want, ok := fix["ExecutionBlock"].(*pb.ExecutionBlock)
require.Equal(t, true, ok)
arg := common.BytesToHash([]byte("foo"))
resp, err := client.ExecutionBlockByHash(ctx, arg)
resp, err := srv.ExecutionBlockByHash(ctx, arg)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
@@ -125,8 +125,8 @@ func TestClient_HTTP(t *testing.T) {
require.NoError(t, err)
defer rpcClient.Close()
client := &Client{}
client.rpc = rpcClient
client := &Service{}
client.rpcClient = rpcClient
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.GetPayload(ctx, payloadId)
@@ -146,10 +146,10 @@ func TestClient_HTTP(t *testing.T) {
}
want, ok := fix["ForkchoiceUpdatedResponse"].(*ForkchoiceUpdatedResponse)
require.Equal(t, true, ok)
client := forkchoiceUpdateSetup(t, forkChoiceState, payloadAttributes, want)
srv := forkchoiceUpdateSetup(t, forkChoiceState, payloadAttributes, want)
// We call the RPC method via HTTP and expect a proper result.
payloadID, validHash, err := client.ForkchoiceUpdated(ctx, forkChoiceState, payloadAttributes)
payloadID, validHash, err := srv.ForkchoiceUpdated(ctx, forkChoiceState, payloadAttributes)
require.NoError(t, err)
require.DeepEqual(t, want.Status.LatestValidHash, validHash)
require.DeepEqual(t, want.PayloadId, payloadID)
@@ -332,11 +332,11 @@ func TestClient_HTTP(t *testing.T) {
require.NoError(t, err)
defer rpcClient.Close()
client := &Client{}
client.rpc = rpcClient
service := &Service{}
service.rpcClient = rpcClient
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.LatestExecutionBlock(ctx)
resp, err := service.LatestExecutionBlock(ctx)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
@@ -371,8 +371,8 @@ func TestClient_HTTP(t *testing.T) {
require.NoError(t, err)
defer rpcClient.Close()
client := &Client{}
client.rpc = rpcClient
client := &Service{}
client.rpcClient = rpcClient
// We call the RPC method via HTTP and expect a proper result.
err = client.ExchangeTransitionConfiguration(ctx, want)
@@ -408,11 +408,11 @@ func TestClient_HTTP(t *testing.T) {
require.NoError(t, err)
defer rpcClient.Close()
client := &Client{}
client.rpc = rpcClient
service := &Service{}
service.rpcClient = rpcClient
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.ExecutionBlockByHash(ctx, arg)
resp, err := service.ExecutionBlockByHash(ctx, arg)
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
@@ -449,10 +449,10 @@ func TestExchangeTransitionConfiguration(t *testing.T) {
require.NoError(t, err)
defer rpcClient.Close()
client := &Client{}
client.rpc = rpcClient
service := &Service{}
service.rpcClient = rpcClient
err = client.ExchangeTransitionConfiguration(ctx, request)
err = service.ExchangeTransitionConfiguration(ctx, request)
require.Equal(t, true, errors.Is(err, ErrConfigMismatch))
})
t.Run("wrong terminal total difficulty", func(t *testing.T) {
@@ -482,16 +482,17 @@ func TestExchangeTransitionConfiguration(t *testing.T) {
require.NoError(t, err)
defer rpcClient.Close()
client := &Client{}
client.rpc = rpcClient
service := &Service{}
service.rpcClient = rpcClient
err = client.ExchangeTransitionConfiguration(ctx, request)
err = service.ExchangeTransitionConfiguration(ctx, request)
require.Equal(t, true, errors.Is(err, ErrConfigMismatch))
})
}
type customError struct {
code int
code int
timeout bool
}
func (c *customError) ErrorCode() int {
@@ -502,6 +503,10 @@ func (*customError) Error() string {
return "something went wrong"
}
func (c *customError) Timeout() bool {
return c.timeout
}
type dataError struct {
code int
data interface{}
@@ -534,6 +539,11 @@ func Test_handleRPCError(t *testing.T) {
expectedContains: "got an unexpected error",
given: errors.New("foo"),
},
{
name: "HTTP times out",
expectedContains: ErrHTTPTimeout.Error(),
given: &customError{timeout: true},
},
{
name: "ErrParse",
expectedContains: ErrParse.Error(),
@@ -807,7 +817,7 @@ func (*testEngineService) NewPayloadV1(
return item
}
func forkchoiceUpdateSetup(t *testing.T, fcs *pb.ForkchoiceState, att *pb.PayloadAttributes, res *ForkchoiceUpdatedResponse) *Client {
func forkchoiceUpdateSetup(t *testing.T, fcs *pb.ForkchoiceState, att *pb.PayloadAttributes, res *ForkchoiceUpdatedResponse) *Service {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
@@ -841,13 +851,12 @@ func forkchoiceUpdateSetup(t *testing.T, fcs *pb.ForkchoiceState, att *pb.Payloa
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
client := &Client{}
client.rpc = rpcClient
return client
service := &Service{}
service.rpcClient = rpcClient
return service
}
func newPayloadSetup(t *testing.T, status *pb.PayloadStatus, payload *pb.ExecutionPayload) *Client {
func newPayloadSetup(t *testing.T, status *pb.PayloadStatus, payload *pb.ExecutionPayload) *Service {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
@@ -876,7 +885,7 @@ func newPayloadSetup(t *testing.T, status *pb.PayloadStatus, payload *pb.Executi
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
client := &Client{}
client.rpc = rpcClient
return client
service := &Service{}
service.rpcClient = rpcClient
return service
}

View File

@@ -1,4 +1,4 @@
package v1
package powchain
import "github.com/pkg/errors"
@@ -24,12 +24,6 @@ var (
// ErrConfigMismatch when the execution node's terminal total difficulty or
// terminal block hash received via the API mismatches Prysm's configuration value.
ErrConfigMismatch = errors.New("execution client configuration mismatch")
// ErrMismatchTerminalBlockHash when the terminal block hash value received via
// the API mismatches Prysm's configuration value.
ErrMismatchTerminalBlockHash = errors.New("terminal block hash mismatch")
// ErrMismatchTerminalTotalDiff when the terminal total difficulty value received via
// the API mismatches Prysm's configuration value.
ErrMismatchTerminalTotalDiff = errors.New("terminal total difficulty mismatch")
// ErrAcceptedSyncingPayloadStatus when the status of the payload is syncing or accepted.
ErrAcceptedSyncingPayloadStatus = errors.New("payload status is SYNCING or ACCEPTED")
// ErrInvalidPayloadStatus when the status of the payload is invalid.
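
With the two granular errors folded away, ErrConfigMismatch is the single sentinel callers match on. A minimal, self-contained sketch of the intended usage, with a local stand-in for the exported variable:

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// Stand-in for powchain.ErrConfigMismatch declared above.
var errConfigMismatch = errors.New("execution client configuration mismatch")

func exchange() error {
	// Pretend the execution node reported a different terminal block hash;
	// the detail travels in the wrap, the class in the sentinel.
	return errors.Wrap(errConfigMismatch, "terminal block hash mismatch")
}

func main() {
	if err := exchange(); errors.Is(err, errConfigMismatch) {
		fmt.Println("configuration mismatch:", err)
	}
}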

View File

@@ -34,6 +34,11 @@ func TestProcessDepositLog_OK(t *testing.T) {
depositCache, err := depositcache.New()
require.NoError(t, err)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -97,6 +102,11 @@ func TestProcessDepositLog_InsertsPendingDeposit(t *testing.T) {
beaconDB := testDB.SetupDB(t)
depositCache, err := depositcache.New()
require.NoError(t, err)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
@@ -154,6 +164,11 @@ func TestUnpackDepositLogData_OK(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := testDB.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -203,6 +218,11 @@ func TestProcessETH2GenesisLog_8DuplicatePubkeys(t *testing.T) {
beaconDB := testDB.SetupDB(t)
depositCache, err := depositcache.New()
require.NoError(t, err)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
@@ -274,6 +294,11 @@ func TestProcessETH2GenesisLog(t *testing.T) {
depositCache, err := depositcache.New()
require.NoError(t, err)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -359,6 +384,11 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
kvStore := testDB.SetupDB(t)
depositCache, err := depositcache.New()
require.NoError(t, err)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
@@ -452,6 +482,11 @@ func TestProcessETH2GenesisLog_LargePeriodOfNoLogs(t *testing.T) {
kvStore := testDB.SetupDB(t)
depositCache, err := depositcache.New()
require.NoError(t, err)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
@@ -561,6 +596,11 @@ func TestCheckForChainstart_NoValidator(t *testing.T) {
func newPowchainService(t *testing.T, eth1Backend *mock.TestAccount, beaconDB db.Database) *Service {
depositCache, err := depositcache.New()
require.NoError(t, err)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(eth1Backend.ContractAddr),

View File

@@ -1,4 +1,4 @@
package v1
package powchain
import (
"github.com/prometheus/client_golang/prometheus"

View File

@@ -1,6 +1,9 @@
package powchain
import (
"net/http"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
@@ -10,6 +13,9 @@ import (
"github.com/prysmaticlabs/prysm/network"
)
// DefaultRPCHTTPTimeout for HTTP requests via an RPC connection to an execution node.
const DefaultRPCHTTPTimeout = time.Second * 6
type Option func(s *Service) error
// WithHttpEndpoints deduplicates and parses http endpoints for the powchain service to use,
@@ -32,10 +38,20 @@ func WithHttpEndpoints(endpointStrings []string) Option {
}
}
// WithExecutionClientJWTSecret for authenticating the execution node JSON-RPC endpoint.
func WithExecutionClientJWTSecret(jwtSecret []byte) Option {
return func(s *Service) error {
s.cfg.executionEndpointJWTSecret = jwtSecret
// WithJWTSecret for authenticating the execution node JSON-RPC endpoint.
func WithJWTSecret(secret []byte) Option {
return func(c *Service) error {
if len(secret) == 0 {
return nil
}
authTransport := &jwtTransport{
underlyingTransport: http.DefaultTransport,
jwtSecret: secret,
}
c.cfg.httpRPCClient = &http.Client{
Timeout: DefaultRPCHTTPTimeout,
Transport: authTransport,
}
return nil
}
}
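
Usage stays within the functional-options pattern the service already exposes. A hedged wiring sketch; the endpoint and secret below are placeholders, since a real node reads the 32-byte secret from the operator's JWT file:

package main

import (
	"context"
	"log"

	"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
)

func main() {
	// Placeholder values for illustration only.
	secret := []byte("0123456789abcdef0123456789abcdef")
	svc, err := powchain.NewService(context.Background(),
		powchain.WithHttpEndpoints([]string{"http://127.0.0.1:8551"}),
		powchain.WithJWTSecret(secret),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = svc // svc.Start() would then begin the polling and event loops.
}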

View File

@@ -7,6 +7,7 @@ import (
"context"
"fmt"
"math/big"
"net/http"
"reflect"
"runtime/debug"
"sort"
@@ -24,10 +25,10 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
engine "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
nativev1 "github.com/prysmaticlabs/prysm/beacon-chain/state/state-native/v1"
@@ -72,8 +73,6 @@ var (
logPeriod = 1 * time.Minute
// threshold of how old we will accept an eth1 node's head to be.
eth1Threshold = 20 * time.Minute
// error when eth1 node is not synced.
errNotSynced = errors.New("eth1 node is still syncing")
// error when eth1 node is too far behind.
errFarBehind = errors.Errorf("eth1 head is more than %s behind from current wall clock time", eth1Threshold.String())
)
@@ -123,21 +122,22 @@ type RPCDataFetcher interface {
// RPCClient defines the rpc methods required to interact with the eth1 node.
type RPCClient interface {
BatchCall(b []gethRPC.BatchElem) error
CallContext(ctx context.Context, result interface{}, method string, args ...interface{}) error
}
// config defines a config struct for dependencies into the service.
type config struct {
depositContractAddr common.Address
beaconDB db.HeadAccessDatabase
depositCache *depositcache.DepositCache
stateNotifier statefeed.Notifier
stateGen *stategen.State
eth1HeaderReqLimit uint64
beaconNodeStatsUpdater BeaconNodeStatsUpdater
httpEndpoints []network.Endpoint
executionEndpointJWTSecret []byte
currHttpEndpoint network.Endpoint
finalizedStateAtStartup state.BeaconState
depositContractAddr common.Address
beaconDB db.HeadAccessDatabase
depositCache *depositcache.DepositCache
stateNotifier statefeed.Notifier
stateGen *stategen.State
eth1HeaderReqLimit uint64
beaconNodeStatsUpdater BeaconNodeStatsUpdater
httpEndpoints []network.Endpoint
httpRPCClient *http.Client
currHttpEndpoint network.Endpoint
finalizedStateAtStartup state.BeaconState
}
// Service fetches important information about the canonical
@@ -156,7 +156,6 @@ type Service struct {
headTicker *time.Ticker
httpLogger bind.ContractFilterer
eth1DataFetcher RPCDataFetcher
engineAPIClient engine.Caller
rpcClient RPCClient
headerCache *headerCache // cache to store block hash/block height.
latestEth1Data *ethpb.LatestETH1Data
@@ -220,6 +219,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
if err != nil {
return nil, errors.Wrap(err, "unable to retrieve eth1 data")
}
if err := s.initializeEth1Data(ctx, eth1Data); err != nil {
return nil, err
}
@@ -228,6 +228,14 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
// Start a web3 service's main event loop.
func (s *Service) Start() {
if err := s.connectToPowChain(); err != nil {
log.WithError(err).Fatal("Could not connect to execution endpoint")
}
log.WithFields(logrus.Fields{
"endpoint": logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url),
}).Info("Connected to Ethereum execution client RPC")
// If the chain has not started already and we don't have access to eth1 nodes, we will not be
// able to generate the genesis state.
if !s.chainStartData.Chainstarted && s.cfg.currHttpEndpoint.Url == "" {
@@ -242,27 +250,15 @@ func (s *Service) Start() {
}
}
// Exit early if eth1 endpoint is not set.
if s.cfg.currHttpEndpoint.Url == "" {
return
}
s.isRunning = true
if err := s.initializeEngineAPIClient(s.ctx); err != nil {
log.WithError(err).Fatal("unable to initialize engine API client")
}
// Poll the execution client connection and fallback if errors occur.
go s.pollConnectionStatus()
// Check transition configuration for the engine API client in the background.
go s.checkTransitionConfiguration(s.ctx, make(chan *statefeed.BlockProcessedData, 1))
go s.checkTransitionConfiguration(s.ctx, make(chan *feed.Event, 1))
go func() {
s.isRunning = true
s.waitForConnection()
if s.ctx.Err() != nil {
log.Info("Context closed, exiting pow goroutine")
return
}
s.run(s.ctx.Done())
}()
go s.run(s.ctx.Done())
}
// Stop the web3 service's main event loop and associated goroutines.
@@ -305,12 +301,6 @@ func (s *Service) Status() error {
return s.runError
}
// EngineAPIClient returns the associated engine API client to interact
// with an execution node via JSON-RPC.
func (s *Service) EngineAPIClient() engine.Caller {
return s.engineAPIClient
}
func (s *Service) updateBeaconNodeStats() {
bs := clientstats.BeaconNodeStats{}
if len(s.cfg.httpEndpoints) > 1 {
@@ -389,19 +379,24 @@ func (s *Service) followBlockHeight(_ context.Context) (uint64, error) {
func (s *Service) connectToPowChain() error {
httpClient, rpcClient, err := s.dialETH1Nodes(s.cfg.currHttpEndpoint)
if err != nil {
return errors.Wrap(err, "could not dial eth1 nodes")
return errors.Wrap(err, "could not dial execution node")
}
depositContractCaller, err := contracts.NewDepositContractCaller(s.cfg.depositContractAddr, httpClient)
if err != nil {
return errors.Wrap(err, "could not create deposit contract caller")
return errors.Wrap(err, "could not initialize deposit contract caller")
}
if httpClient == nil || rpcClient == nil || depositContractCaller == nil {
return errors.New("eth1 client is nil")
return errors.New("execution client RPC is nil")
}
s.httpLogger = httpClient
s.eth1DataFetcher = httpClient
s.depositContractCaller = depositContractCaller
s.rpcClient = rpcClient
s.initializeConnection(httpClient, rpcClient, depositContractCaller)
s.updateConnectedETH1(true)
s.runError = nil
return nil
}
@@ -424,15 +419,6 @@ func (s *Service) dialETH1Nodes(endpoint network.Endpoint) (*ethclient.Client, *
httpRPCClient.Close()
httpClient.Close()
}
syncProg, err := httpClient.SyncProgress(s.ctx)
if err != nil {
closeClients()
return nil, nil, err
}
if syncProg != nil {
closeClients()
return nil, nil, errors.New("eth1 node has not finished syncing yet")
}
// Make a simple call to ensure we are actually connected to a working node.
cID, err := httpClient.ChainID(s.ctx)
if err != nil {
@@ -456,17 +442,6 @@ func (s *Service) dialETH1Nodes(endpoint network.Endpoint) (*ethclient.Client, *
return httpClient, httpRPCClient, nil
}
func (s *Service) initializeConnection(
httpClient *ethclient.Client,
rpcClient *gethRPC.Client,
contractCaller *contracts.DepositContractCaller,
) {
s.httpLogger = httpClient
s.eth1DataFetcher = httpClient
s.depositContractCaller = contractCaller
s.rpcClient = rpcClient
}
// closes down our active eth1 clients.
func (s *Service) closeClients() {
gethClient, ok := s.rpcClient.(*gethRPC.Client)
@@ -479,30 +454,8 @@ func (s *Service) closeClients() {
}
}
func (s *Service) waitForConnection() {
errConnect := s.connectToPowChain()
if errConnect == nil {
synced, errSynced := s.isEth1NodeSynced()
// Resume if eth1 node is synced.
if synced {
s.updateConnectedETH1(true)
s.runError = nil
log.WithFields(logrus.Fields{
"endpoint": logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url),
}).Info("Connected to eth1 proof-of-work chain")
return
}
if errSynced != nil {
s.runError = errSynced
log.WithError(errSynced).Error("Could not check sync status of eth1 chain")
}
}
if errConnect != nil {
s.runError = errConnect
log.WithError(errConnect).Error("Could not connect to powchain endpoint")
}
func (s *Service) pollConnectionStatus() {
// Use a custom logger to only log errors
// once in a while.
logCounter := 0
errorLogger := func(err error, msg string) {
if logCounter > logThreshold {
@@ -511,7 +464,6 @@ func (s *Service) waitForConnection() {
}
logCounter++
}
ticker := time.NewTicker(backOffPeriod)
defer ticker.Stop()
for {
@@ -525,23 +477,6 @@ func (s *Service) waitForConnection() {
s.fallbackToNextEndpoint()
continue
}
synced, errSynced := s.isEth1NodeSynced()
if errSynced != nil {
errorLogger(errSynced, "Could not check sync status of eth1 chain")
s.runError = errSynced
s.fallbackToNextEndpoint()
continue
}
if synced {
s.updateConnectedETH1(true)
s.runError = nil
log.WithFields(logrus.Fields{
"endpoint": logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url),
}).Info("Connected to eth1 proof-of-work chain")
return
}
s.runError = errNotSynced
log.Debug("Eth1 node is currently syncing")
case <-s.ctx.Done():
log.Debug("Received cancelled context,closing existing powchain service")
return
@@ -573,7 +508,10 @@ func (s *Service) retryETH1Node(err error) {
// Back off for a while before
// resuming dialing the eth1 node.
time.Sleep(backOffPeriod)
s.waitForConnection()
if err := s.connectToPowChain(); err != nil {
s.runError = err
return
}
// Reset run error in the event of a successful connection.
s.runError = nil
}
@@ -643,6 +581,7 @@ func (s *Service) processBlockHeader(header *gethTypes.Header) {
log.WithFields(logrus.Fields{
"blockNumber": s.latestEth1Data.BlockHeight,
"blockHash": hexutil.Encode(s.latestEth1Data.BlockHash),
"difficulty": header.Difficulty.String(),
}).Debug("Latest eth1 chain event")
}
@@ -1056,19 +995,6 @@ func (s *Service) ensureValidPowchainData(ctx context.Context) error {
return nil
}
// Initializes a connection to the engine API if an execution provider endpoint is set.
func (s *Service) initializeEngineAPIClient(ctx context.Context) error {
opts := []engine.Option{
engine.WithJWTSecret(s.cfg.executionEndpointJWTSecret),
}
client, err := engine.New(ctx, s.cfg.currHttpEndpoint.Url, opts...)
if err != nil {
return err
}
s.engineAPIClient = client
return nil
}
func dedupEndpoints(endpoints []string) []string {
selectionMap := make(map[string]bool)
newEndpoints := make([]string, 0, len(endpoints))

View File

@@ -3,11 +3,9 @@ package powchain
import (
"bytes"
"context"
"errors"
"fmt"
"math/big"
"strings"
"sync"
"testing"
"time"
@@ -15,6 +13,7 @@ import (
"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
"github.com/ethereum/go-ethereum/common"
gethTypes "github.com/ethereum/go-ethereum/core/types"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
dbutil "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
@@ -128,6 +127,11 @@ func TestStart_OK(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -157,90 +161,13 @@ func TestStart_NoHttpEndpointDefinedFails_WithoutChainStarted(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
s, err := NewService(context.Background(),
_, err = NewService(context.Background(),
WithHttpEndpoints([]string{""}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err)
// Set custom exit func so test can proceed
log.Logger.ExitFunc = func(i int) {
panic(i)
}
defer func() {
log.Logger.ExitFunc = nil
}()
wg := new(sync.WaitGroup)
wg.Add(1)
// Expect Start function to fail from a fatal call due
// to no state existing.
go func() {
defer func() {
if r := recover(); r != nil {
wg.Done()
}
}()
s.Start()
}()
util.WaitTimeout(wg, time.Second)
require.LogsContain(t, hook, "cannot create genesis state: no eth1 http endpoint defined")
hook.Reset()
}
func TestStart_NoHttpEndpointDefinedSucceeds_WithGenesisState(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := dbutil.SetupDB(t)
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
st, _ := util.DeterministicGenesisState(t, 10)
b := util.NewBeaconBlock()
genRoot, err := b.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(context.Background(), st, genRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(context.Background(), genRoot))
depositCache, err := depositcache.New()
require.NoError(t, err)
s, err := NewService(context.Background(),
WithHttpEndpoints([]string{""}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
WithDepositCache(depositCache),
)
require.NoError(t, err)
wg := new(sync.WaitGroup)
wg.Add(1)
go func() {
s.Start()
wg.Done()
}()
s.cancel()
util.WaitTimeout(wg, time.Second)
require.LogsDoNotContain(t, hook, "cannot create genesis state: no eth1 http endpoint defined")
hook.Reset()
}
func TestStart_NoHttpEndpointDefinedSucceeds_WithChainStarted(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := dbutil.SetupDB(t)
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
require.NoError(t, beaconDB.SavePowchainData(context.Background(), &ethpb.ETH1ChainData{
ChainstartData: &ethpb.ChainStartData{Chainstarted: true},
Trie: &ethpb.SparseMerkleTrie{},
}))
s, err := NewService(context.Background(),
WithHttpEndpoints([]string{""}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err)
s.Start()
require.LogsDoNotContain(t, hook, "cannot create genesis state: no eth1 http endpoint defined")
hook.Reset()
require.LogsDoNotContain(t, hook, "missing address")
}
func TestStop_OK(t *testing.T) {
@@ -248,6 +175,11 @@ func TestStop_OK(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -273,6 +205,11 @@ func TestService_Eth1Synced(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -298,6 +235,11 @@ func TestFollowBlock_OK(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -371,6 +313,11 @@ func TestStatus(t *testing.T) {
func TestHandlePanic_OK(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
@@ -403,6 +350,11 @@ func TestLogTillGenesis_OK(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -537,6 +489,11 @@ func TestNewService_EarliestVotingBlock(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -588,6 +545,11 @@ func TestNewService_Eth1HeaderRequLimit(t *testing.T) {
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
server, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
s1, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
@@ -616,13 +578,26 @@ func (mbs *mockBSUpdater) Update(bs clientstats.BeaconNodeStats) {
var _ BeaconNodeStatsUpdater = &mockBSUpdater{}
func TestServiceFallbackCorrectly(t *testing.T) {
firstEndpoint := "A"
secondEndpoint := "B"
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
server, firstEndpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
server2, secondEndpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server2.Stop()
})
server3, thirdEndpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server3.Stop()
})
mbs := &mockBSUpdater{}
s1, err := NewService(context.Background(),
WithHttpEndpoints([]string{firstEndpoint}),
@@ -630,10 +605,11 @@ func TestServiceFallbackCorrectly(t *testing.T) {
WithDatabase(beaconDB),
WithBeaconNodeStatsUpdater(mbs),
)
s1.cfg.beaconNodeStatsUpdater = mbs
require.NoError(t, err)
s1.cfg.beaconNodeStatsUpdater = mbs
assert.Equal(t, firstEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
// Stay at the first endpoint.
s1.fallbackToNextEndpoint()
assert.Equal(t, firstEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
@@ -645,17 +621,11 @@ func TestServiceFallbackCorrectly(t *testing.T) {
assert.Equal(t, secondEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
assert.Equal(t, true, mbs.lastBS.SyncEth1FallbackConfigured, "SyncEth1FallbackConfigured in clientstats update should be true when > 1 endpoint is configured")
thirdEndpoint := "C"
fourthEndpoint := "D"
s1.cfg.httpEndpoints = append(s1.cfg.httpEndpoints, network.Endpoint{Url: thirdEndpoint}, network.Endpoint{Url: fourthEndpoint})
s1.cfg.httpEndpoints = append(s1.cfg.httpEndpoints, network.Endpoint{Url: thirdEndpoint})
s1.fallbackToNextEndpoint()
assert.Equal(t, thirdEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
s1.fallbackToNextEndpoint()
assert.Equal(t, fourthEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
// Rollover correctly back to the first endpoint
s1.fallbackToNextEndpoint()
assert.Equal(t, firstEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
@@ -685,8 +655,13 @@ func TestService_EnsureConsistentPowchainData(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
cache, err := depositcache.New()
require.NoError(t, err)
srv, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
srv.Stop()
})
s1, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
WithDepositCache(cache),
)
@@ -710,7 +685,13 @@ func TestService_InitializeCorrectly(t *testing.T) {
cache, err := depositcache.New()
require.NoError(t, err)
srv, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
srv.Stop()
})
s1, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
WithDepositCache(cache),
)
@@ -733,8 +714,13 @@ func TestService_EnsureValidPowchainData(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
cache, err := depositcache.New()
require.NoError(t, err)
srv, endpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
srv.Stop()
})
s1, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
WithDepositCache(cache),
)
@@ -825,8 +811,16 @@ func TestTimestampIsChecked(t *testing.T) {
}
func TestETH1Endpoints(t *testing.T) {
firstEndpoint := "A"
secondEndpoint := "B"
server, firstEndpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
server, secondEndpoint, err := mockPOW.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
endpoints := []string{firstEndpoint, secondEndpoint}
testAcc, err := mock.Setup()

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
testonly = True,
srcs = [
"mock_engine_client.go",
"mock_faulty_powchain.go",
"mock_powchain.go",
],
@@ -16,7 +17,9 @@ go_library(
"//beacon-chain/powchain/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/v1:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//accounts/abi/bind/backends:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",

View File

@@ -1,4 +1,4 @@
package mocks
package testing
import (
"context"
@@ -19,12 +19,13 @@ type EngineClient struct {
ErrLatestExecBlock error
ErrExecBlockByHash error
ErrForkchoiceUpdated error
ErrNewPayload error
BlockByHashMap map[[32]byte]*pb.ExecutionBlock
}
// NewPayload --
func (e *EngineClient) NewPayload(_ context.Context, _ *pb.ExecutionPayload) ([]byte, error) {
return e.NewPayloadResp, nil
return e.NewPayloadResp, e.ErrNewPayload
}
// ForkchoiceUpdated --

View File

@@ -6,6 +6,7 @@ import (
"context"
"fmt"
"math/big"
"net/http/httptest"
"time"
"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
@@ -16,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
@@ -142,6 +144,10 @@ type RPCClient struct {
Backend *backends.SimulatedBackend
}
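// CallContext --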
func (*RPCClient) CallContext(_ context.Context, _ interface{}, _ string, _ ...interface{}) error {
return nil
}
// BatchCall --
func (r *RPCClient) BatchCall(b []rpc.BatchElem) error {
if r.Backend == nil {
@@ -175,3 +181,28 @@ func (m *POWChain) InsertBlock(height int, time uint64, hash []byte) *POWChain {
func (m *POWChain) BlockExistsWithCache(ctx context.Context, hash common.Hash) (bool, *big.Int, error) {
return m.BlockExists(ctx, hash)
}
func SetupRPCServer() (*rpc.Server, string, error) {
srv := rpc.NewServer()
if err := srv.RegisterName("eth", &testETHRPC{}); err != nil {
return nil, "", err
}
if err := srv.RegisterName("net", &testETHRPC{}); err != nil {
return nil, "", err
}
hs := httptest.NewUnstartedServer(srv)
hs.Start()
return srv, hs.URL, nil
}
type testETHRPC struct{}
func (*testETHRPC) NoArgsRets() {}
func (*testETHRPC) ChainId(_ context.Context) *hexutil.Big {
return (*hexutil.Big)(big.NewInt(int64(params.BeaconConfig().DepositChainID)))
}
func (*testETHRPC) Version(_ context.Context) string {
return fmt.Sprintf("%d", params.BeaconConfig().DepositNetworkID)
}
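
Registering both the "eth" and "net" namespaces is what lets the service's connection probe succeed against the mock, since dialETH1Nodes calls ChainID (eth_chainId) and, presumably, the net_version check behind Version. An illustrative client-side sketch against such an endpoint; the URL is a placeholder:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Assumes a server from SetupRPCServer is listening here.
	endpoint := "http://127.0.0.1:8545" // placeholder
	client, err := ethclient.Dial(endpoint)
	if err != nil {
		log.Fatal(err)
	}
	id, err := client.ChainID(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("chain id:", id) // DepositChainID from the test config
}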

View File

@@ -22,7 +22,6 @@ go_library(
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1:go_default_library",
"//beacon-chain/rpc/eth/beacon:go_default_library",
"//beacon-chain/rpc/eth/debug:go_default_library",
"//beacon-chain/rpc/eth/events:go_default_library",

View File

@@ -424,11 +424,11 @@ type beaconBlockBodyBellatrixJson struct {
type executionPayloadJson struct {
ParentHash string `json:"parent_hash" hex:"true"`
CoinBase string `json:"coinbase" hex:"true"`
FeeRecipient string `json:"fee_recipient" hex:"true"`
StateRoot string `json:"state_root" hex:"true"`
ReceiptRoot string `json:"receipt_root" hex:"true"`
ReceiptsRoot string `json:"receipts_root" hex:"true"`
LogsBloom string `json:"logs_bloom" hex:"true"`
Random string `json:"random" hex:"true"`
PrevRandao string `json:"prev_randao" hex:"true"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
@@ -441,11 +441,11 @@ type executionPayloadJson struct {
type executionPayloadHeaderJson struct {
ParentHash string `json:"parent_hash" hex:"true"`
CoinBase string `json:"coinbase" hex:"true"`
FeeRecipient string `json:"fee_recipient" hex:"true"`
StateRoot string `json:"state_root" hex:"true"`
ReceiptRoot string `json:"receipt_root" hex:"true"`
ReceiptsRoot string `json:"receipts_root" hex:"true"`
LogsBloom string `json:"logs_bloom" hex:"true"`
Random string `json:"random" hex:"true"`
PrevRandao string `json:"prev_randao" hex:"true"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`

View File

@@ -63,7 +63,6 @@ go_test(
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/p2p/types:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1/mocks:go_default_library",
"//beacon-chain/powchain/testing:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/validator:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",

View File

@@ -414,7 +414,7 @@ func (vs *Server) GetAggregateAttestation(ctx context.Context, req *ethpbv1.Aggr
}
if bestMatchingAtt == nil {
return nil, status.Error(codes.InvalidArgument, "No matching attestation found")
return nil, status.Error(codes.NotFound, "No matching attestation found")
}
return &ethpbv1.AggregateAttestationResponse{
Data: migration.V1Alpha1AttestationToV1(bestMatchingAtt),
@@ -631,6 +631,9 @@ func (vs *Server) ProduceSyncCommitteeContribution(
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get sync subcommittee messages: %v", err)
}
if msgs == nil {
return nil, status.Errorf(codes.NotFound, "No subcommittee messages found")
}
aggregatedSig, bits, err := vs.V1Alpha1Server.AggregatedSigAndAggregationBits(ctx, msgs, req.Slot, req.SubcommitteeIndex, req.BeaconBlockRoot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get contribution data: %v", err)
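
Both hunks turn a "nothing available" condition into codes.NotFound instead of InvalidArgument or a nil dereference. A client-side sketch of how callers can distinguish the case; the message string mirrors the one above:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Stand-in for the error GetAggregateAttestation or
	// ProduceSyncCommitteeContribution now returns when nothing matches.
	err := status.Error(codes.NotFound, "No subcommittee messages found")
	if status.Code(err) == codes.NotFound {
		fmt.Println("no data yet; safe to retry next slot")
	}
}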

View File

@@ -25,7 +25,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
p2pmock "github.com/prysmaticlabs/prysm/beacon-chain/p2p/testing"
p2pType "github.com/prysmaticlabs/prysm/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks"
mockPOW "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
v1alpha1validator "github.com/prysmaticlabs/prysm/beacon-chain/rpc/prysm/v1alpha1/validator"
"github.com/prysmaticlabs/prysm/beacon-chain/rpc/testutil"
@@ -1020,24 +1019,25 @@ func TestProduceBlockV2(t *testing.T) {
require.NoError(t, db.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
v1Alpha1Server := &v1alpha1validator.Server{
ExecutionEngineCaller: &mocks.EngineClient{
ExecutionEngineCaller: &mockPOW.EngineClient{
ExecutionBlock: &enginev1.ExecutionBlock{
TotalDifficulty: "0x1",
},
},
TimeFetcher: &mockChain.ChainService{},
HeadFetcher: &mockChain.ChainService{State: beaconState, Root: parentRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
BlockReceiver: &mockChain.ChainService{},
ChainStartFetcher: &mockPOW.POWChain{},
Eth1InfoFetcher: &mockPOW.POWChain{},
Eth1BlockFetcher: &mockPOW.POWChain{},
MockEth1Votes: true,
AttPool: attestations.NewPool(),
SlashingsPool: slashings.NewPool(),
ExitPool: voluntaryexits.NewPool(),
StateGen: stategen.New(db),
SyncCommitteePool: synccommittee.NewStore(),
TimeFetcher: &mockChain.ChainService{},
HeadFetcher: &mockChain.ChainService{State: beaconState, Root: parentRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
BlockReceiver: &mockChain.ChainService{},
ChainStartFetcher: &mockPOW.POWChain{},
Eth1InfoFetcher: &mockPOW.POWChain{},
Eth1BlockFetcher: &mockPOW.POWChain{},
MockEth1Votes: true,
AttPool: attestations.NewPool(),
SlashingsPool: slashings.NewPool(),
ExitPool: voluntaryexits.NewPool(),
StateGen: stategen.New(db),
SyncCommitteePool: synccommittee.NewStore(),
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
proposerSlashings := make([]*ethpbalpha.ProposerSlashing, params.BeaconConfig().MaxProposerSlashings)
@@ -1994,6 +1994,25 @@ func TestProduceSyncCommitteeContribution(t *testing.T) {
aggregationBits := resp.Data.AggregationBits
assert.Equal(t, true, aggregationBits.BitAt(0))
assert.DeepEqual(t, sig, resp.Data.Signature)
syncCommitteePool = synccommittee.NewStore()
v1Server = &v1alpha1validator.Server{
SyncCommitteePool: syncCommitteePool,
HeadFetcher: &mockChain.ChainService{
SyncCommitteeIndices: []types.CommitteeIndex{0},
},
}
server = Server{
V1Alpha1Server: v1Server,
SyncCommitteePool: syncCommitteePool,
}
req = &ethpbv2.ProduceSyncCommitteeContributionRequest{
Slot: 0,
SubcommitteeIndex: 0,
BeaconBlockRoot: root,
}
_, err = server.ProduceSyncCommitteeContribution(ctx, req)
assert.ErrorContains(t, "No subcommittee messages found", err)
}
func TestSubmitContributionAndProofs(t *testing.T) {

View File

@@ -693,7 +693,7 @@ func (bs *Server) GetValidatorPerformance(
return nil, err
}
validatorSummary = vp
case version.Altair:
case version.Altair, version.Bellatrix:
vp, bp, err := altair.InitializePrecomputeValidators(ctx, headState)
if err != nil {
return nil, err
@@ -871,7 +871,7 @@ func (bs *Server) GetIndividualVotes(
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not pre compute attestations: %v", err)
}
case version.Altair:
case version.Altair, version.Bellatrix:
v, bal, err = altair.InitializePrecomputeValidators(ctx, st)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not set up altair pre compute instance: %v", err)
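Both hunks extend the same version dispatch: a Bellatrix state reuses the Altair precompute path, since participation accounting did not change between those forks. A toy sketch of the pattern follows, with invented identifiers standing in for Prysm's version constants and precompute routines.

package main

import "fmt"

// Stand-ins for the runtime/version constants used in the hunks above.
const (
	phase0 = iota
	altair
	bellatrix
)

// precomputeFor picks the validator precompute routine by state version.
// Bellatrix shares the Altair path, mirroring the case labels in the diff.
func precomputeFor(v int) (string, error) {
	switch v {
	case phase0:
		return "phase0 precompute", nil
	case altair, bellatrix:
		return "altair precompute", nil
	default:
		return "", fmt.Errorf("state version %d is not supported", v)
	}
}

func main() {
	for _, v := range []int{phase0, altair, bellatrix} {
		route, err := precomputeFor(v)
		if err != nil {
			panic(err)
		}
		fmt.Printf("version %d -> %s\n", v, route)
	}
}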

View File

@@ -2100,6 +2100,76 @@ func TestGetValidatorPerformanceAltair_OK(t *testing.T) {
}
}
func TestGetValidatorPerformanceBellatrix_OK(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
ctx := context.Background()
epoch := types.Epoch(1)
headState, _ := util.DeterministicGenesisStateBellatrix(t, 32)
require.NoError(t, headState.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epoch+1))))
defaultBal := params.BeaconConfig().MaxEffectiveBalance
extraBal := params.BeaconConfig().MaxEffectiveBalance + params.BeaconConfig().GweiPerEth
balances := []uint64{defaultBal, extraBal, extraBal + params.BeaconConfig().GweiPerEth}
require.NoError(t, headState.SetBalances(balances))
publicKey1 := bytesutil.ToBytes48([]byte{1})
publicKey2 := bytesutil.ToBytes48([]byte{2})
publicKey3 := bytesutil.ToBytes48([]byte{3})
validators := []*ethpb.Validator{
{
PublicKey: publicKey1[:],
ActivationEpoch: 5,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
{
PublicKey: publicKey2[:],
EffectiveBalance: defaultBal,
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
{
PublicKey: publicKey3[:],
EffectiveBalance: defaultBal,
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
require.NoError(t, headState.SetValidators(validators))
require.NoError(t, headState.SetInactivityScores([]uint64{0, 0, 0}))
require.NoError(t, headState.SetBalances([]uint64{100, 101, 102}))
offset := int64(headState.Slot().Mul(params.BeaconConfig().SecondsPerSlot))
bs := &Server{
HeadFetcher: &mock.ChainService{
State: headState,
},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
want := &ethpb.ValidatorPerformanceResponse{
PublicKeys: [][]byte{publicKey2[:], publicKey3[:]},
CurrentEffectiveBalances: []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
InclusionSlots: nil,
InclusionDistances: nil,
CorrectlyVotedSource: []bool{false, false},
CorrectlyVotedTarget: []bool{false, false},
CorrectlyVotedHead: []bool{false, false},
BalancesBeforeEpochTransition: []uint64{101, 102},
BalancesAfterEpochTransition: []uint64{0, 0},
MissingValidators: [][]byte{publicKey1[:]},
InactivityScores: []uint64{0, 0},
}
res, err := bs.GetValidatorPerformance(ctx, &ethpb.ValidatorPerformanceRequest{
PublicKeys: [][]byte{publicKey1[:], publicKey3[:], publicKey2[:]},
})
require.NoError(t, err)
if !proto.Equal(want, res) {
t.Errorf("Wanted %v\nReceived %v", want, res)
}
}
func BenchmarkListValidatorBalances(b *testing.B) {
b.StopTimer()
beaconDB := dbTest.SetupDB(b)
@@ -2475,6 +2545,98 @@ func TestServer_GetIndividualVotes_AltairEndOfEpoch(t *testing.T) {
assert.DeepEqual(t, wanted, res, "Unexpected response")
}
func TestServer_GetIndividualVotes_BellatrixEndOfEpoch(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MainnetConfig())
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
validators := uint64(32)
beaconState, _ := util.DeterministicGenesisStateBellatrix(t, validators)
startSlot, err := slots.EpochStart(1)
assert.NoError(t, err)
require.NoError(t, beaconState.SetSlot(startSlot))
b := util.NewBeaconBlock()
b.Block.Slot = startSlot
wsb, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
gRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
gen := stategen.New(beaconDB)
require.NoError(t, gen.SaveState(ctx, gRoot, beaconState))
require.NoError(t, beaconDB.SaveState(ctx, beaconState, gRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, gRoot))
// Save State at the end of the epoch:
endSlot, err := slots.EpochEnd(1)
assert.NoError(t, err)
beaconState, _ = util.DeterministicGenesisStateBellatrix(t, validators)
require.NoError(t, beaconState.SetSlot(endSlot))
pb, err := beaconState.CurrentEpochParticipation()
require.NoError(t, err)
for i := range pb {
pb[i] = 0xff
}
require.NoError(t, beaconState.SetCurrentParticipationBits(pb))
require.NoError(t, beaconState.SetPreviousParticipationBits(pb))
b.Block.Slot = endSlot
wsb, err = wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
gRoot, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, gen.SaveState(ctx, gRoot, beaconState))
require.NoError(t, beaconDB.SaveState(ctx, beaconState, gRoot))
bs := &Server{
StateGen: gen,
GenesisTimeFetcher: &mock.ChainService{},
}
addDefaultReplayerBuilder(bs, beaconDB)
res, err := bs.GetIndividualVotes(ctx, &ethpb.IndividualVotesRequest{
Indices: []types.ValidatorIndex{0, 1},
Epoch: 1,
})
require.NoError(t, err)
wanted := &ethpb.IndividualVotesRespond{
IndividualVotes: []*ethpb.IndividualVotesRespond_IndividualVote{
{
ValidatorIndex: 0,
PublicKey: beaconState.Validators()[0].PublicKey,
IsActiveInCurrentEpoch: true,
IsActiveInPreviousEpoch: true,
IsCurrentEpochTargetAttester: true,
IsCurrentEpochAttester: true,
IsPreviousEpochAttester: true,
IsPreviousEpochHeadAttester: true,
IsPreviousEpochTargetAttester: true,
CurrentEpochEffectiveBalanceGwei: params.BeaconConfig().MaxEffectiveBalance,
Epoch: 1,
},
{
ValidatorIndex: 1,
PublicKey: beaconState.Validators()[1].PublicKey,
IsActiveInCurrentEpoch: true,
IsActiveInPreviousEpoch: true,
IsCurrentEpochTargetAttester: true,
IsCurrentEpochAttester: true,
IsPreviousEpochAttester: true,
IsPreviousEpochHeadAttester: true,
IsPreviousEpochTargetAttester: true,
CurrentEpochEffectiveBalanceGwei: params.BeaconConfig().MaxEffectiveBalance,
Epoch: 1,
},
},
}
assert.DeepEqual(t, wanted, res, "Unexpected response")
}
func Test_validatorStatus(t *testing.T) {
tests := []struct {
name string

View File

@@ -49,7 +49,6 @@ go_library(
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",
@@ -80,6 +79,8 @@ go_library(
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
@@ -134,7 +135,6 @@ go_test(
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/powchain/engine-api-client/v1/mocks:go_default_library",
"//beacon-chain/powchain/testing:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",

View File

@@ -138,7 +138,7 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
return nil, status.Errorf(codes.Internal, "Could not compute committee assignments: %v", err)
}
// Query the next epoch assignments for committee subnet subscriptions.
nextCommitteeAssignments, _, err := helpers.CommitteeAssignments(ctx, s, req.Epoch+1)
nextCommitteeAssignments, nextProposerIndexToSlots, err := helpers.CommitteeAssignments(ctx, s, req.Epoch+1)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not compute next committee assignments: %v", err)
}
@@ -180,6 +180,16 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
nextAssignment.AttesterSlot = ca.AttesterSlot
nextAssignment.CommitteeIndex = ca.CommitteeIndex
}
// Cache proposer assignment for the current epoch.
for _, slot := range proposerIndexToSlots[idx] {
vs.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, idx, [8]byte{} /* payloadID */)
}
// Cache proposer assignment for the next epoch.
for _, slot := range nextProposerIndexToSlots[idx] {
vs.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, idx, [8]byte{} /* payloadID */)
}
// Prune payload ID cache for any slots before request slot.
vs.ProposerSlotIndexCache.PrunePayloadIDs(epochStartSlot)
} else {
// If the validator isn't in the beacon state, try finding their deposit to determine their status.
vStatus, _ := vs.validatorStatus(ctx, s, pubKey)
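The two loops above warm the proposer payload-ID cache with an empty payload ID for every slot the validator is due to propose in the current and next epoch, and PrunePayloadIDs then evicts entries older than the epoch's start slot; the fork-choice updater is expected to overwrite the placeholder with the execution engine's real payload ID. For orientation, a cache with this shape could look like the sketch below. The method names mirror the calls in the diff, but the mutex-guarded map, the type aliases, and the demo in main are assumptions, not Prysm's actual implementation.

package main

import (
	"fmt"
	"sync"
)

type (
	Slot           = uint64 // stand-in for types.Slot
	ValidatorIndex = uint64 // stand-in for types.ValidatorIndex
)

type slotEntry struct {
	proposer  ValidatorIndex
	payloadID [8]byte
}

// ProposerPayloadIDsCache maps a slot to its proposer index and the payload ID
// handed back by the execution engine. Sketch only: the real cache package's
// internals may differ.
type ProposerPayloadIDsCache struct {
	mu      sync.RWMutex
	entries map[Slot]slotEntry
}

func NewProposerPayloadIDsCache() *ProposerPayloadIDsCache {
	return &ProposerPayloadIDsCache{entries: make(map[Slot]slotEntry)}
}

// SetProposerAndPayloadIDs records the proposer for a slot. Duty computation
// stores an empty payload ID; forkchoice later overwrites it with a real one.
func (c *ProposerPayloadIDsCache) SetProposerAndPayloadIDs(slot Slot, idx ValidatorIndex, pid [8]byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[slot] = slotEntry{proposer: idx, payloadID: pid}
}

// GetProposerPayloadIDs returns the cached proposer and payload ID for a slot.
func (c *ProposerPayloadIDsCache) GetProposerPayloadIDs(slot Slot) (ValidatorIndex, [8]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	e, ok := c.entries[slot]
	return e.proposer, e.payloadID, ok
}

// PrunePayloadIDs evicts every entry for a slot before the given slot.
func (c *ProposerPayloadIDsCache) PrunePayloadIDs(before Slot) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for s := range c.entries {
		if s < before {
			delete(c.entries, s)
		}
	}
}

func main() {
	c := NewProposerPayloadIDsCache()
	c.SetProposerAndPayloadIDs(33, 7, [8]byte{})  // duty discovered, no payload yet
	c.SetProposerAndPayloadIDs(33, 7, [8]byte{1}) // forkchoice fills in the payload ID
	c.PrunePayloadIDs(32)
	idx, pid, ok := c.GetProposerPayloadIDs(33)
	fmt.Println(idx, pid, ok) // 7 [1 0 0 0 0 0 0 0] true
}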

View File

@@ -60,9 +60,10 @@ func TestGetDuties_OK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
// Test the first validator in registry.
@@ -144,10 +145,11 @@ func TestGetAltairDuties_SyncCommitteeOK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockPOW.POWChain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockPOW.POWChain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
// Test the first validator in registry.
@@ -181,12 +183,12 @@ func TestGetAltairDuties_SyncCommitteeOK(t *testing.T) {
res, err = vs.GetDuties(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
for i := 0; i < len(res.CurrentEpochDuties); i++ {
assert.Equal(t, types.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
require.Equal(t, types.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
}
for i := 0; i < len(res.CurrentEpochDuties); i++ {
assert.Equal(t, true, res.CurrentEpochDuties[i].IsSyncCommittee)
require.Equal(t, true, res.CurrentEpochDuties[i].IsSyncCommittee)
// Current epoch and next epoch duties should be equal before the sync period epoch boundary.
assert.Equal(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
require.Equal(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
}
// Current epoch and next epoch duties should not be equal at the sync period epoch boundary.
@@ -197,7 +199,7 @@ func TestGetAltairDuties_SyncCommitteeOK(t *testing.T) {
res, err = vs.GetDuties(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
for i := 0; i < len(res.CurrentEpochDuties); i++ {
assert.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
require.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
}
}
@@ -249,10 +251,11 @@ func TestGetBellatrixDuties_SyncCommitteeOK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockPOW.POWChain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockPOW.POWChain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
// Test the first validator in registry.
@@ -302,7 +305,7 @@ func TestGetBellatrixDuties_SyncCommitteeOK(t *testing.T) {
res, err = vs.GetDuties(context.Background(), req)
require.NoError(t, err, "Could not call epoch committee assignment")
for i := 0; i < len(res.CurrentEpochDuties); i++ {
assert.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
require.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
}
}
@@ -340,11 +343,12 @@ func TestGetAltairDuties_UnknownPubkey(t *testing.T) {
require.NoError(t, err)
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockPOW.POWChain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
DepositFetcher: depositCache,
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockPOW.POWChain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
DepositFetcher: depositCache,
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
unknownPubkey := bytesutil.PadTo([]byte{'u'}, 48)
@@ -399,9 +403,10 @@ func TestGetDuties_CurrentEpoch_ShouldNotFail(t *testing.T) {
State: bState, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
// Test the first validator in registry.
@@ -437,9 +442,10 @@ func TestGetDuties_MultipleKeys_OK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
pubkey0 := deposits[0].Data.PublicKey
@@ -503,11 +509,12 @@ func TestStreamDuties_OK(t *testing.T) {
Genesis: time.Now(),
}
vs := &Server{
Ctx: ctx,
HeadFetcher: &mockChain.ChainService{State: bs, Root: genesisRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: c,
StateNotifier: &mockChain.MockStateNotifier{},
Ctx: ctx,
HeadFetcher: &mockChain.ChainService{State: bs, Root: genesisRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: c,
StateNotifier: &mockChain.MockStateNotifier{},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
// Test the first validator in registry.
@@ -560,11 +567,12 @@ func TestStreamDuties_OK_ChainReorg(t *testing.T) {
Genesis: time.Now(),
}
vs := &Server{
Ctx: ctx,
HeadFetcher: &mockChain.ChainService{State: bs, Root: genesisRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: c,
StateNotifier: &mockChain.MockStateNotifier{},
Ctx: ctx,
HeadFetcher: &mockChain.ChainService{State: bs, Root: genesisRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: c,
StateNotifier: &mockChain.MockStateNotifier{},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
// Test the first validator in registry.

View File

@@ -9,6 +9,8 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
@@ -24,9 +26,31 @@ import (
"github.com/sirupsen/logrus"
)
var (
// payloadIDCacheMiss tracks the number of payload ID requests that aren't present in the cache.
payloadIDCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "payload_id_cache_miss",
Help: "The number of payload id get requests that aren't present in the cache.",
})
// payloadIDCacheHit tracks the number of payload ID requests that are present in the cache.
payloadIDCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "payload_id_cache_hit",
Help: "The number of payload id get requests that are present in the cache.",
})
)
// This returns the execution payload for a given slot. The function is aware of both pre- and post-merge conditions.
// The payload is computed with respect to the merge transition time.
func (vs *Server) getExecutionPayload(ctx context.Context, slot types.Slot, vIdx types.ValidatorIndex) (*enginev1.ExecutionPayload, error) {
proposerID, payloadId, ok := vs.ProposerSlotIndexCache.GetProposerPayloadIDs(slot)
if ok && proposerID == vIdx && payloadId != [8]byte{} { // Cache hit: use the cached payload ID to fetch the payload.
var pid [8]byte
copy(pid[:], payloadId[:])
payloadIDCacheHit.Inc()
return vs.ExecutionEngineCaller.GetPayload(ctx, pid)
}
payloadIDCacheMiss.Inc()
st, err := vs.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, err
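On this path a cache hit copies the stored 8-byte payload ID and fetches the payload straight from the execution engine, while a miss falls through to recomputing from the head state; each branch bumps its payload_id_cache_hit / payload_id_cache_miss counter. The self-contained sketch below shows the same promauto hit/miss instrumentation around a toy map lookup; the example_cache_* metric names, the lookup helper, and the :2112 listen address are illustrative only.

package main

import (
	"fmt"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	// Counters registered on the default registry, as in the diff above.
	cacheHit = promauto.NewCounter(prometheus.CounterOpts{
		Name: "example_cache_hit",
		Help: "Lookups served from the cache.",
	})
	cacheMiss = promauto.NewCounter(prometheus.CounterOpts{
		Name: "example_cache_miss",
		Help: "Lookups that fell through to the slow path.",
	})
)

// lookup increments the hit or miss counter around a plain map access.
func lookup(cache map[string]string, key string) (string, bool) {
	if v, ok := cache[key]; ok {
		cacheHit.Inc()
		return v, true
	}
	cacheMiss.Inc()
	return "", false
}

func main() {
	c := map[string]string{"slot-33": "payload-id-0x01"}
	lookup(c, "slot-33") // hit
	lookup(c, "slot-34") // miss
	// Expose the counters for scraping, e.g. at localhost:2112/metrics.
	http.Handle("/metrics", promhttp.Handler())
	fmt.Println("serving metrics on :2112")
	_ = http.ListenAndServe(":2112", nil)
}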

View File

@@ -9,8 +9,8 @@ import (
"github.com/holiman/uint256"
types "github.com/prysmaticlabs/eth2-types"
chainMock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
dbTest "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks"
powtesting "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
@@ -107,6 +107,11 @@ func TestServer_getExecutionPayload(t *testing.T) {
payloadID: &pb.PayloadIDBytes{0x1},
validatorIndx: 1,
},
{
name: "transition completed, happy case, payload ID cached)",
st: transitionSt,
validatorIndx: 100,
},
{
name: "transition completed, could not prepare payload",
st: transitionSt,
@@ -133,10 +138,12 @@ func TestServer_getExecutionPayload(t *testing.T) {
params.OverrideBeaconConfig(cfg)
vs := &Server{
ExecutionEngineCaller: &mocks.EngineClient{PayloadIDBytes: tt.payloadID, ErrForkchoiceUpdated: tt.forkchoiceErr},
HeadFetcher: &chainMock.ChainService{State: tt.st},
BeaconDB: beaconDB,
ExecutionEngineCaller: &powtesting.EngineClient{PayloadIDBytes: tt.payloadID, ErrForkchoiceUpdated: tt.forkchoiceErr},
HeadFetcher: &chainMock.ChainService{State: tt.st},
BeaconDB: beaconDB,
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
vs.ProposerSlotIndexCache.SetProposerAndPayloadIDs(tt.st.Slot(), 100, [8]byte{100})
_, err := vs.getExecutionPayload(context.Background(), tt.st.Slot(), tt.validatorIndx)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
@@ -260,7 +267,7 @@ func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
}
}
vs := &Server{
ExecutionEngineCaller: &mocks.EngineClient{
ExecutionEngineCaller: &powtesting.EngineClient{
ErrLatestExecBlock: tt.errLatestExecutionBlk,
ExecutionBlock: tt.currentPowBlock,
BlockByHashMap: m,
@@ -335,7 +342,7 @@ func TestServer_getTerminalBlockHashIfExists(t *testing.T) {
c.HashesByHeight[0] = tt.wantTerminalBlockHash
vs := &Server{
Eth1BlockFetcher: c,
ExecutionEngineCaller: &mocks.EngineClient{
ExecutionEngineCaller: &powtesting.EngineClient{
ExecutionBlock: tt.currentPowBlock,
BlockByHashMap: m,
},

View File

@@ -10,6 +10,7 @@ import (
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/go-bitfield"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
@@ -21,7 +22,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/operations/synccommittee"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
mockp2p "github.com/prysmaticlabs/prysm/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1/mocks"
mockPOW "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
@@ -2341,11 +2341,12 @@ func TestProposer_GetBeaconBlock_BellatrixEpoch(t *testing.T) {
ExitPool: voluntaryexits.NewPool(),
StateGen: stategen.New(db),
SyncCommitteePool: synccommittee.NewStore(),
ExecutionEngineCaller: &mocks.EngineClient{
ExecutionEngineCaller: &mockPOW.EngineClient{
PayloadIDBytes: &enginev1.PayloadIDBytes{1},
ExecutionPayload: payload,
},
BeaconDB: db,
BeaconDB: db,
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
randaoReveal, err := util.RandaoReveal(beaconState, 0, privKeys)

View File

@@ -23,7 +23,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
enginev1 "github.com/prysmaticlabs/prysm/beacon-chain/powchain/engine-api-client/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/config/params"
@@ -42,6 +41,7 @@ import (
type Server struct {
Ctx context.Context
AttestationCache *cache.AttestationCache
ProposerSlotIndexCache *cache.ProposerPayloadIDsCache
HeadFetcher blockchain.HeadFetcher
ForkFetcher blockchain.ForkFetcher
FinalizationFetcher blockchain.FinalizationFetcher
@@ -66,7 +66,7 @@ type Server struct {
StateGen stategen.StateManager
ReplayerBuilder stategen.ReplayerBuilder
BeaconDB db.HeadAccessDatabase
ExecutionEngineCaller enginev1.Caller
ExecutionEngineCaller powchain.EngineCaller
}
// WaitForActivation checks if a validator public key exists in the active validator registry of the current

View File

@@ -15,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/time/slots"
"go.opencensus.io/trace"
"google.golang.org/grpc/codes"
@@ -25,7 +26,7 @@ var errPubkeyDoesNotExist = errors.New("pubkey does not exist")
var errOptimisticMode = errors.New("the node is currently optimistic and cannot serve validators")
var nonExistentIndex = types.ValidatorIndex(^uint64(0))
const numStatesToCheck = 2
var errParticipation = status.Errorf(codes.Internal, "Failed to obtain epoch participation")
// ValidatorStatus returns the validator status of the current epoch.
// The status response can be one of the following:
@@ -110,44 +111,67 @@ func (vs *Server) CheckDoppelGanger(ctx context.Context, req *ethpb.DoppelGanger
return nil, status.Error(codes.Internal, "Could not get head state")
}
currEpoch := slots.ToEpoch(headState.Slot())
isRecent, resp := checkValidatorsAreRecent(currEpoch, req)
// Return early if we are in phase0.
if headState.Version() == version.Phase0 {
log.Info("Skipping goppelganger check for Phase 0")
resp := &ethpb.DoppelGangerResponse{
Responses: []*ethpb.DoppelGangerResponse_ValidatorResponse{},
}
for _, v := range req.ValidatorRequests {
resp.Responses = append(resp.Responses,
&ethpb.DoppelGangerResponse_ValidatorResponse{
PublicKey: v.PublicKey,
DuplicateExists: false,
})
}
return resp, nil
}
headSlot := headState.Slot()
currEpoch := slots.ToEpoch(headSlot)
// If all provided keys are recent we skip this check
// as we are unable to effectively determine if a doppelganger
// is active.
isRecent, resp := checkValidatorsAreRecent(currEpoch, req)
if isRecent {
return resp, nil
}
// We walk back from the current head state to the state at the beginning of the previous 2 epochs.
// Where S_i, i := 0,1,2. i = 0 would signify the current head state in this epoch.
previousEpoch, err := currEpoch.SafeSub(1)
if err != nil {
previousEpoch = currEpoch
}
olderEpoch, err := previousEpoch.SafeSub(1)
if err != nil {
olderEpoch = previousEpoch
}
prevState, err := vs.retrieveAfterEpochTransition(ctx, previousEpoch)
// We request a state 32 slots ago. We are guaranteed to have
// currentSlot > 32 since we assume that we are in Altair's fork.
prevState, err := vs.ReplayerBuilder.ReplayerForSlot(headSlot - params.BeaconConfig().SlotsPerEpoch).ReplayBlocks(ctx)
if err != nil {
return nil, status.Error(codes.Internal, "Could not get previous state")
}
olderState, err := vs.retrieveAfterEpochTransition(ctx, olderEpoch)
headCurrentParticipation, err := headState.CurrentEpochParticipation()
if err != nil {
return nil, status.Error(codes.Internal, "Could not get older state")
return nil, errParticipation
}
headPreviousParticipation, err := headState.PreviousEpochParticipation()
if err != nil {
return nil, errParticipation
}
prevCurrentParticipation, err := prevState.CurrentEpochParticipation()
if err != nil {
return nil, errParticipation
}
prevPreviousParticipation, err := prevState.PreviousEpochParticipation()
if err != nil {
return nil, errParticipation
}
resp = &ethpb.DoppelGangerResponse{
Responses: []*ethpb.DoppelGangerResponse_ValidatorResponse{},
}
for _, v := range req.ValidatorRequests {
// If the validator's last recorded epoch was
// less than or equal to `numStatesToCheck` epochs ago, this method will not
// be able to catch duplicates. This is due to how attestation
// inclusion works, where an attestation for the current epoch
// is able to be included in the current or next epoch. Depending
// on which epoch it is included in, the balance change will be
// reflected in the following epoch.
if v.Epoch+numStatesToCheck >= currEpoch {
// If the validator's last recorded epoch was less than 1 epoch
// ago, the current doppelganger check will not be able to
// identify doppelgangers since an attestation can take up to
// 31 slots to be included.
if v.Epoch+1 >= currEpoch {
resp.Responses = append(resp.Responses,
&ethpb.DoppelGangerResponse_ValidatorResponse{
PublicKey: v.PublicKey,
@@ -155,37 +179,15 @@ func (vs *Server) CheckDoppelGanger(ctx context.Context, req *ethpb.DoppelGanger
})
continue
}
valIndex, ok := olderState.ValidatorIndexByPubkey(bytesutil.ToBytes48(v.PublicKey))
valIndex, ok := prevState.ValidatorIndexByPubkey(bytesutil.ToBytes48(v.PublicKey))
if !ok {
// Ignore if validator pubkey doesn't exist.
continue
}
baseBal, err := olderState.BalanceAtIndex(valIndex)
if err != nil {
return nil, status.Error(codes.Internal, "Could not get validator's balance")
}
nextBal, err := prevState.BalanceAtIndex(valIndex)
if err != nil {
return nil, status.Error(codes.Internal, "Could not get validator's balance")
}
// If the next epoch's balance is higher, we mark it as an existing
// duplicate.
if nextBal > baseBal {
log.Infof("current %d with last epoch %d and difference in bal %d gwei", currEpoch, v.Epoch, nextBal-baseBal)
resp.Responses = append(resp.Responses,
&ethpb.DoppelGangerResponse_ValidatorResponse{
PublicKey: v.PublicKey,
DuplicateExists: true,
})
continue
}
currBal, err := headState.BalanceAtIndex(valIndex)
if err != nil {
return nil, status.Error(codes.Internal, "Could not get validator's balance")
}
// If the current epoch's balance is higher, we mark it as an existing
// duplicate.
if currBal > nextBal {
if (headCurrentParticipation[valIndex] != 0) || (headPreviousParticipation[valIndex] != 0) ||
(prevCurrentParticipation[valIndex] != 0) || (prevPreviousParticipation[valIndex] != 0) {
log.WithField("ValidatorIndex", valIndex).Infof("Participation flag found")
resp.Responses = append(resp.Responses,
&ethpb.DoppelGangerResponse_ValidatorResponse{
PublicKey: v.PublicKey,
@@ -374,8 +376,8 @@ func checkValidatorsAreRecent(headEpoch types.Epoch, req *ethpb.DoppelGangerRequ
// Due to how balances are reflected for individual
// validators, we can only effectively determine if a
// validator voted or not if we are able to look
// back more than `numStatesToCheck` epochs into the past.
if v.Epoch+numStatesToCheck < headEpoch {
// back more than 1 epoch into the past.
if v.Epoch+1 < headEpoch {
validatorsAreRecent = false
// Zero out response if we encounter non-recent validators to
// guard against potential misuse.
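Net effect of the rewrite: instead of comparing balances across three states, a key is flagged as a duplicate as soon as any participation flag is set for its index in the head state or in the state from one epoch back. Below is a compact sketch of that predicate, assuming the four participation bitfields are plain byte slices; the duplicateSeen helper is invented for illustration.

package main

import "fmt"

// duplicateSeen reports whether any epoch-participation flag is set for the
// validator at valIdx in any of the supplied bitfields. Post-Altair states
// record participation directly, so a non-zero byte means the key recently
// signed something. Simplified: the real check reads the current and previous
// participation of both the head state and the state one epoch back.
func duplicateSeen(valIdx int, fields ...[]byte) bool {
	for _, f := range fields {
		if valIdx < len(f) && f[valIdx] != 0 {
			return true
		}
	}
	return false
}

func main() {
	headCurr := make([]byte, 64)
	headPrev := make([]byte, 64)
	prevCurr := make([]byte, 64)
	prevPrev := make([]byte, 64)
	prevPrev[2] = 1 // index 2 attested in an earlier epoch
	fmt.Println(duplicateSeen(2, headCurr, headPrev, prevCurr, prevPrev)) // true
	fmt.Println(duplicateSeen(3, headCurr, headPrev, prevCurr, prevPrev)) // false
}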

View File

@@ -8,7 +8,6 @@ import (
"github.com/d4l3k/messagediff"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/go-bitfield"
mockChain "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
@@ -17,7 +16,6 @@ import (
mockstategen "github.com/prysmaticlabs/prysm/beacon-chain/state/stategen/mock"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
mockSync "github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync/testing"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/container/trie"
"github.com/prysmaticlabs/prysm/crypto/bls"
@@ -961,27 +959,15 @@ func TestServer_CheckDoppelGanger(t *testing.T) {
name: "normal doppelganger request",
wantErr: false,
svSetup: func(t *testing.T) (*Server, *ethpb.DoppelGangerRequest, *ethpb.DoppelGangerResponse) {
hs, ps, os, keys, builder := createStateSetup(t, 4)
// Previous Epoch State
for i := 0; i < 3; i++ {
bal, err := ps.BalanceAtIndex(types.ValidatorIndex(i))
assert.NoError(t, err)
// Add 100 gwei, to mock an inactivity leak
assert.NoError(t, ps.UpdateBalancesAtIndex(types.ValidatorIndex(i), bal+1000000000))
}
// Older Epoch State
for i := 0; i < 3; i++ {
bal, err := os.BalanceAtIndex(types.ValidatorIndex(i))
assert.NoError(t, err)
// Add 200 gwei, to mock an inactivity leak
assert.NoError(t, os.UpdateBalancesAtIndex(types.ValidatorIndex(i), bal+2000000000))
}
hs, ps, keys := createStateSetupAltair(t, 3)
rb := mockstategen.NewMockReplayerBuilder()
rb.SetMockStateForSlot(ps, 20)
vs := &Server{
HeadFetcher: &mockChain.ChainService{
State: hs,
},
SyncChecker: &mockSync.Sync{IsSyncing: false},
ReplayerBuilder: builder,
ReplayerBuilder: rb,
}
request := &ethpb.DoppelGangerRequest{
ValidatorRequests: make([]*ethpb.DoppelGangerRequest_ValidatorRequest, 0),
@@ -1005,37 +991,19 @@ func TestServer_CheckDoppelGanger(t *testing.T) {
name: "doppelganger exists current epoch",
wantErr: false,
svSetup: func(t *testing.T) (*Server, *ethpb.DoppelGangerRequest, *ethpb.DoppelGangerResponse) {
hs, ps, os, keys, builder := createStateSetup(t, 4)
// Previous Epoch State
for i := 0; i < 2; i++ {
bal, err := ps.BalanceAtIndex(types.ValidatorIndex(i))
assert.NoError(t, err)
// Add 100 gwei, to mock an inactivity leak
assert.NoError(t, ps.UpdateBalancesAtIndex(types.ValidatorIndex(i), bal+1000000000))
}
bal, err := ps.BalanceAtIndex(types.ValidatorIndex(2))
assert.NoError(t, err)
// Sub 100 gwei, to mock an active validator.
assert.NoError(t, ps.UpdateBalancesAtIndex(types.ValidatorIndex(2), bal-1000000000))
// Older Epoch State
for i := 0; i < 2; i++ {
bal, err := os.BalanceAtIndex(types.ValidatorIndex(i))
assert.NoError(t, err)
// Add 200 gwei, to mock an inactivity leak
assert.NoError(t, os.UpdateBalancesAtIndex(types.ValidatorIndex(i), bal+2000000000))
}
bal, err = os.BalanceAtIndex(types.ValidatorIndex(2))
assert.NoError(t, err)
// Sub 100 gwei, to mock an active validator.
assert.NoError(t, os.UpdateBalancesAtIndex(types.ValidatorIndex(2), bal-1000000000))
hs, ps, keys := createStateSetupAltair(t, 3)
rb := mockstategen.NewMockReplayerBuilder()
rb.SetMockStateForSlot(ps, 20)
currentIndices := make([]byte, 64)
currentIndices[2] = 1
require.NoError(t, hs.SetCurrentParticipationBits(currentIndices))
vs := &Server{
HeadFetcher: &mockChain.ChainService{
State: hs,
},
SyncChecker: &mockSync.Sync{IsSyncing: false},
ReplayerBuilder: builder,
ReplayerBuilder: rb,
}
request := &ethpb.DoppelGangerRequest{
ValidatorRequests: make([]*ethpb.DoppelGangerRequest_ValidatorRequest, 0),
@@ -1070,37 +1038,19 @@ func TestServer_CheckDoppelGanger(t *testing.T) {
name: "doppelganger exists previous epoch",
wantErr: false,
svSetup: func(t *testing.T) (*Server, *ethpb.DoppelGangerRequest, *ethpb.DoppelGangerResponse) {
hs, ps, os, keys, builder := createStateSetup(t, 4)
// Previous Epoch State
for i := 0; i < 2; i++ {
bal, err := ps.BalanceAtIndex(types.ValidatorIndex(i))
assert.NoError(t, err)
// Add 100 gwei, to mock an inactivity leak
assert.NoError(t, ps.UpdateBalancesAtIndex(types.ValidatorIndex(i), bal+1000000000))
}
bal, err := ps.BalanceAtIndex(types.ValidatorIndex(2))
assert.NoError(t, err)
// Sub 100 gwei, to mock an active validator.
assert.NoError(t, ps.UpdateBalancesAtIndex(types.ValidatorIndex(2), bal-1000000000))
// Older Epoch State
for i := 0; i < 2; i++ {
bal, err := os.BalanceAtIndex(types.ValidatorIndex(i))
assert.NoError(t, err)
// Add 200 gwei, to mock an inactivity leak
assert.NoError(t, os.UpdateBalancesAtIndex(types.ValidatorIndex(i), bal+2000000000))
}
bal, err = os.BalanceAtIndex(types.ValidatorIndex(2))
assert.NoError(t, err)
// Sub 200 gwei, to mock an active validator.
assert.NoError(t, os.UpdateBalancesAtIndex(types.ValidatorIndex(2), bal-2000000000))
hs, ps, keys := createStateSetupAltair(t, 3)
prevIndices := make([]byte, 64)
prevIndices[2] = 1
require.NoError(t, ps.SetPreviousParticipationBits(prevIndices))
rb := mockstategen.NewMockReplayerBuilder()
rb.SetMockStateForSlot(ps, 20)
vs := &Server{
HeadFetcher: &mockChain.ChainService{
State: hs,
},
SyncChecker: &mockSync.Sync{IsSyncing: false},
ReplayerBuilder: builder,
ReplayerBuilder: rb,
}
request := &ethpb.DoppelGangerRequest{
ValidatorRequests: make([]*ethpb.DoppelGangerRequest_ValidatorRequest, 0),
@@ -1135,29 +1085,26 @@ func TestServer_CheckDoppelGanger(t *testing.T) {
name: "multiple doppelganger exists",
wantErr: false,
svSetup: func(t *testing.T) (*Server, *ethpb.DoppelGangerRequest, *ethpb.DoppelGangerResponse) {
hs, ps, os, keys, builder := createStateSetup(t, 4)
// Previous Epoch State
for i := 10; i < 15; i++ {
bal, err := ps.BalanceAtIndex(types.ValidatorIndex(i))
assert.NoError(t, err)
// Sub 100 gwei, to mock an active validator.
assert.NoError(t, ps.UpdateBalancesAtIndex(types.ValidatorIndex(i), bal-1000000000))
}
hs, ps, keys := createStateSetupAltair(t, 3)
currentIndices := make([]byte, 64)
currentIndices[10] = 1
currentIndices[11] = 2
require.NoError(t, hs.SetPreviousParticipationBits(currentIndices))
rb := mockstategen.NewMockReplayerBuilder()
rb.SetMockStateForSlot(ps, 20)
// Older Epoch State
for i := 10; i < 15; i++ {
bal, err := os.BalanceAtIndex(types.ValidatorIndex(i))
assert.NoError(t, err)
// Sub 200 gwei, to mock an active validator.
assert.NoError(t, os.UpdateBalancesAtIndex(types.ValidatorIndex(i), bal-2000000000))
prevIndices := make([]byte, 64)
for i := 12; i < 20; i++ {
prevIndices[i] = 1
}
require.NoError(t, ps.SetCurrentParticipationBits(prevIndices))
vs := &Server{
HeadFetcher: &mockChain.ChainService{
State: hs,
},
SyncChecker: &mockSync.Sync{IsSyncing: false},
ReplayerBuilder: builder,
ReplayerBuilder: rb,
}
request := &ethpb.DoppelGangerRequest{
ValidatorRequests: make([]*ethpb.DoppelGangerRequest_ValidatorRequest, 0),
@@ -1175,6 +1122,17 @@ func TestServer_CheckDoppelGanger(t *testing.T) {
DuplicateExists: true,
})
}
for i := 15; i < 20; i++ {
request.ValidatorRequests = append(request.ValidatorRequests, &ethpb.DoppelGangerRequest_ValidatorRequest{
PublicKey: keys[i].PublicKey().Marshal(),
Epoch: 3,
SignedRoot: []byte{'A'},
})
response.Responses = append(response.Responses, &ethpb.DoppelGangerResponse_ValidatorResponse{
PublicKey: keys[i].PublicKey().Marshal(),
DuplicateExists: false,
})
}
return vs, request, response
},
@@ -1183,14 +1141,16 @@ func TestServer_CheckDoppelGanger(t *testing.T) {
name: "attesters are too recent",
wantErr: false,
svSetup: func(t *testing.T) (*Server, *ethpb.DoppelGangerRequest, *ethpb.DoppelGangerResponse) {
hs, _, _, keys, _ := createStateSetup(t, 4)
hs, ps, keys := createStateSetupAltair(t, 3)
rb := mockstategen.NewMockReplayerBuilder()
rb.SetMockStateForSlot(ps, 20)
vs := &Server{
HeadFetcher: &mockChain.ChainService{
State: hs,
},
SyncChecker: &mockSync.Sync{IsSyncing: false},
ReplayerBuilder: nil,
ReplayerBuilder: rb,
}
request := &ethpb.DoppelGangerRequest{
ValidatorRequests: make([]*ethpb.DoppelGangerRequest_ValidatorRequest, 0),
@@ -1208,6 +1168,39 @@ func TestServer_CheckDoppelGanger(t *testing.T) {
})
}
return vs, request, response
},
},
{
name: "exit early for Phase 0",
wantErr: false,
svSetup: func(t *testing.T) (*Server, *ethpb.DoppelGangerRequest, *ethpb.DoppelGangerResponse) {
hs, _, keys := createStateSetupPhase0(t, 3)
vs := &Server{
HeadFetcher: &mockChain.ChainService{
State: hs,
},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := &ethpb.DoppelGangerRequest{
ValidatorRequests: []*ethpb.DoppelGangerRequest_ValidatorRequest{
{
PublicKey: keys[0].PublicKey().Marshal(),
Epoch: 1,
SignedRoot: []byte{'A'},
},
},
}
response := &ethpb.DoppelGangerResponse{
Responses: []*ethpb.DoppelGangerResponse_ValidatorResponse{
{
PublicKey: keys[0].PublicKey().Marshal(),
DuplicateExists: false,
},
},
}
return vs, request, response
},
},
@@ -1228,104 +1221,36 @@ func TestServer_CheckDoppelGanger(t *testing.T) {
}
}
func createStateSetup(t *testing.T, head types.Epoch) (state.BeaconState,
state.BeaconState, state.BeaconState, []bls.SecretKey, *mockstategen.MockReplayerBuilder) {
rb := &mockstategen.MockReplayerBuilder{}
func createStateSetupPhase0(t *testing.T, head types.Epoch) (state.BeaconState,
state.BeaconState, []bls.SecretKey) {
gs, keys := util.DeterministicGenesisState(t, 64)
hs := gs.Copy()
// Head State
headEpoch := head
headSlot := types.Slot(headEpoch) * params.BeaconConfig().SlotsPerEpoch
headSlot := types.Slot(head)*params.BeaconConfig().SlotsPerEpoch + params.BeaconConfig().SlotsPerEpoch/2
assert.NoError(t, hs.SetSlot(headSlot))
assignments, _, err := helpers.CommitteeAssignments(context.Background(), hs, headEpoch)
assert.NoError(t, err)
for _, ctr := range assignments {
pendingAtt := &ethpb.PendingAttestation{
AggregationBits: bitfield.NewBitlist64(uint64(len(ctr.Committee))).ToBitlist().Not(),
Data: &ethpb.AttestationData{
Slot: ctr.AttesterSlot,
CommitteeIndex: ctr.CommitteeIndex,
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: make([]byte, fieldparams.RootLength),
},
Target: &ethpb.Checkpoint{
Epoch: 1,
Root: make([]byte, fieldparams.RootLength),
},
},
InclusionDelay: 1,
ProposerIndex: 10,
}
assert.NoError(t, hs.AppendCurrentEpochAttestations(pendingAtt))
}
rb.SetMockState(hs)
// Previous Epoch State
prevEpoch := headEpoch - 1
prevSlot := headSlot - params.BeaconConfig().SlotsPerEpoch
ps := gs.Copy()
prevSlot, err := slots.EpochEnd(prevEpoch)
assert.NoError(t, err)
assert.NoError(t, ps.SetSlot(prevSlot))
assignments, _, err = helpers.CommitteeAssignments(context.Background(), ps, prevEpoch)
assert.NoError(t, err)
for _, ctr := range assignments {
pendingAtt := &ethpb.PendingAttestation{
AggregationBits: bitfield.NewBitlist64(uint64(len(ctr.Committee))).ToBitlist().Not(),
Data: &ethpb.AttestationData{
Slot: ctr.AttesterSlot,
CommitteeIndex: ctr.CommitteeIndex,
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: make([]byte, fieldparams.RootLength),
},
Target: &ethpb.Checkpoint{
Epoch: 1,
Root: make([]byte, fieldparams.RootLength),
},
},
InclusionDelay: 1,
ProposerIndex: 10,
}
assert.NoError(t, ps.AppendCurrentEpochAttestations(pendingAtt))
}
rb.SetMockState(ps)
// Older Epoch State
olderEpoch := prevEpoch - 1
os := gs.Copy()
olderSlot, err := slots.EpochEnd(olderEpoch)
assert.NoError(t, err)
assert.NoError(t, os.SetSlot(olderSlot))
assignments, _, err = helpers.CommitteeAssignments(context.Background(), os, olderEpoch)
assert.NoError(t, err)
for _, ctr := range assignments {
attSlot := ctr.AttesterSlot
if attSlot == olderSlot {
continue
}
pendingAtt := &ethpb.PendingAttestation{
AggregationBits: bitfield.NewBitlist64(uint64(len(ctr.Committee))).ToBitlist().Not(),
Data: &ethpb.AttestationData{
Slot: attSlot,
CommitteeIndex: ctr.CommitteeIndex,
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: make([]byte, fieldparams.RootLength),
},
Target: &ethpb.Checkpoint{
Epoch: 1,
Root: make([]byte, fieldparams.RootLength),
},
},
InclusionDelay: 1,
ProposerIndex: 10,
}
assert.NoError(t, os.AppendCurrentEpochAttestations(pendingAtt))
}
rb.SetMockState(os)
return hs, ps, os, keys, rb
return hs, ps, keys
}
func createStateSetupAltair(t *testing.T, head types.Epoch) (state.BeaconState,
state.BeaconState, []bls.SecretKey) {
gs, keys := util.DeterministicGenesisStateAltair(t, 64)
hs := gs.Copy()
// Head State
headSlot := types.Slot(head)*params.BeaconConfig().SlotsPerEpoch + params.BeaconConfig().SlotsPerEpoch/2
assert.NoError(t, hs.SetSlot(headSlot))
// Previous Epoch State
prevSlot := headSlot - params.BeaconConfig().SlotsPerEpoch
ps := gs.Copy()
assert.NoError(t, ps.SetSlot(prevSlot))
return hs, ps, keys
}

Some files were not shown because too many files have changed in this diff.