Compare commits

...

128 Commits

Author SHA1 Message Date
Raul Jordan
4f0662bc49 confs 2022-07-08 09:40:40 -04:00
Raul Jordan
74ee0eded0 Merge branch 'develop' into reconstruct-engine 2022-07-08 03:31:23 +00:00
Raul Jordan
98b34ef942 Merge branch 'reconstruct-engine' of github.com:prysmaticlabs/prysm into reconstruct-engine 2022-07-07 23:31:06 -04:00
Raul Jordan
d2e6fe9c78 add test based on recs 2022-07-07 23:30:56 -04:00
Preston Van Loon
f9b3cde005 Batch build API requests for RegisterValidator (#11002)
* Add UnmarshalJSON for SignedValidatorRequest

* add failing test for batch limits

* Add functionality

* gofmt

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-07 23:50:48 +00:00
Raul Jordan
d8f9750387 Only Unmarshal Full Tx Bodies in ExecutionBlock JSON Unmarshaler (#11006)
* full tx unmarshaling fixed

* prefix check
2022-07-07 23:08:44 +00:00
james-prysm
5e8b9dde5b Simplify Push Proposer settings (#11005)
* initial commit

* fixing unit tests

* fixing more unit tests
2022-07-07 22:24:06 +00:00
Raul Jordan
6d6e4fe48a Merge branch 'develop' into reconstruct-engine 2022-07-07 21:56:57 +00:00
terencechain
c2caff4230 Minor UX improvement to validator registration (#11004) 2022-07-07 19:32:02 +00:00
Raul Jordan
539b997c66 build 2022-07-06 21:02:34 -04:00
Raul Jordan
8e11b1be74 prevent nil block 2022-07-06 20:59:17 -04:00
Raul Jordan
e58a83d53e deadcode 2022-07-06 20:57:44 -04:00
Raul Jordan
91a122e70e metrics 2022-07-06 20:55:27 -04:00
Raul Jordan
6d91db594d powchain pass 2022-07-06 20:54:19 -04:00
Raul Jordan
5104bb646d gaz 2022-07-06 20:45:39 -04:00
Raul Jordan
86ac0ed09b engine reconstructor 2022-07-06 20:11:47 -04:00
Raul Jordan
97fa814193 fix confs 2022-07-06 20:06:12 -04:00
Raul Jordan
b67c885995 Major Simplification of JSON Handling for Execution Blocks (#10993)
* no more execution block custom type

* simpler json rpc data unmarshaling

* simplicify

* included hash and txs fix

* all tests

* pass

* build

* mock fix

* attempt build

* builds

* build

* builds

* pass

* pass

* build

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-06 22:06:00 +00:00
james-prysm
60ed488428 changing gaslimit to validator registration (#10992)
* changing gaslimit to validator registration

* adding new flag to enable validator registration for suggested fee recipient

* making sure default gaslimit is set

* Update cmd/validator/flags/flags.go

Co-authored-by: terencechain <terence@prysmaticlabs.com>

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-06 18:42:21 +00:00
Potuz
2f0e2886b4 Do not error if the LVH is bogus (#10996)
* Do not error if the LVH is bogus

* add tests and mark the regression PR

* dead code

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-06 17:37:15 +00:00
terencechain
2d53ae79df Cleanups to pulltips (#10984)
* Minor cleanups to pulltips

* Feedbacks

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-06 16:55:17 +00:00
Nishant Das
ce277cb388 Add Fuzzing For JSON Marshalling/Unmarshalling Methods (#10995)
* modify it

* add gaz

* revert

* deps

* revert change

* fix it
2022-07-06 15:15:14 +00:00
Raul Jordan
c9a366c36a Revert "Move Slasher E2E To Scenario Test" (#10986)
This reverts commit 65900115fc.

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-06 13:03:30 +00:00
terencechain
3a957c567f Handle invalid_block_hash error from ee (#10991)
* Handle invalid_block_hash error from ee

* Update beacon-chain/blockchain/error.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Remove invalid block and state

* Revert "Remove invalid block and state"

This reverts commit 9ca011b8ce.

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-07-06 00:22:12 +00:00
Raul Jordan
77a63f871d Included Blinded Beacon Block in V1alpha1 Protobuf (#10989)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-05 21:03:14 +00:00
Raul Jordan
8dd8ccc147 pure funcs (#10988) 2022-07-05 16:24:17 -04:00
Preston Van Loon
3c48bce3a3 Annotate build client requests (#10987)
* Annotate build client requests

* Use named return arguments to annotate errors

* Unhandled error was bad

* Error level is better than warning for this

* Clarifying commentary while i'm here

* delete the pasta
2022-07-05 19:33:33 +00:00
Raul Jordan
e9b4c0110b Merge branch 'develop' into only-save-payload 2022-07-05 13:38:05 -04:00
Nishant Das
0ed5007d2e Fix Pubsub Panic In Handling Dead Peers (#10976)
* fix

* fix it

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-04 00:41:33 +00:00
Raul Jordan
65900115fc Move Slasher E2E To Scenario Test (#10973)
* consolidate into slasher scenario test

* pattern

* revert

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-03 23:55:33 +00:00
Potuz
379bed9268 add heuristics for pulltips (#10955)
* add heuristics for pulltips

* gazelle

* add unit test

* fix unit test

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-03 20:27:39 +00:00
Potuz
2dd2e74369 update finalization on onblock (#10980)
* update finalization on onblock

* add unit test

* Minor cleanups

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-07-03 19:39:31 +00:00
Potuz
73237826d3 ensure there are as many deltas as nodes (#10979)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-03 00:53:33 -03:00
terencechain
af4d0c84c8 Check finalized beyond DB (#10978)
* Check finalized beyond DB

* Unhandle error

* Remove debug log

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-07-02 16:31:05 -03:00
Potuz
c68f1883d6 Save Head after pruning invalid nodes (#10977)
* Save Head after prunning

* fix unit test
2022-07-02 16:38:08 +00:00
terencechain
ae1685d937 Log invalid finalized root (#10975)
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-07-02 10:58:54 +00:00
terencechain
2a5f05bc29 Improve "rasied file descriptor limit..." log (#10970)
* Improvement to raise file descriptor log

* Radek feedback

* Change to debug
2022-07-01 18:05:01 -04:00
Potuz
49e5e73ec0 Default SafeSlotsToImportOptimistically to 128 (#10967)
* Default SafeSlotsToImportOptimistically to 128

* fix tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-07-01 17:04:51 +00:00
Nishant Das
2ecb905ae5 Update Prysm Libp2p Dependencies (#10958)
* add all changes in

* fix issues

* fix build

* remove curve check

* fix tool

* add test

* add tidy

* fmt

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-01 15:34:11 +00:00
Radosław Kapka
d4e7da8200 Change log level to debug in fetchBlocksFromPeer (#10969)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-01 14:49:31 +00:00
Nishant Das
4b042a7103 Fix Multiclient E2E (#10965)
* fix it

* gaz

* fix

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-07-01 14:02:01 +00:00
Radosław Kapka
e59859c78f Wrap client-stats flags (#10966) 2022-07-01 12:49:38 +00:00
Potuz
6bcc7d3a5e Do not fill in missing blocks on regular sync (#10957) 2022-07-01 11:03:49 +00:00
terencechain
7b597bb130 More sepolia boot nodes (#10962)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-30 19:49:35 +00:00
Potuz
2c7b273260 do not overwrite log (#10963) 2022-06-30 19:02:23 +00:00
Radosław Kapka
8bedaaf0a8 Log error in fetchBlocksFromPeer (#10959)
* Log error in `fetchBlocksFromPeer`

* update case
2022-06-30 16:22:10 +00:00
james-prysm
69350a6a80 Fee Recipient E2E misscounting deterministic keys leading to flakes (#10960) 2022-06-30 15:01:33 +00:00
terencechain
93e8c749f8 Can get payload header from builder (#10954) 2022-06-29 21:13:25 -07:00
james-prysm
96fecf8c57 E2E: fee-recipient evaluator (#10528)
* testing out fee-recipient evaluator

* fixing bazel linter

* adjusting comparison

* typo on file rolling back

* adding fee recipient is present to minimal e2e

* fixing gofmt

* fixing gofmt

* fixing flag usage name

* adding in log to help debug

* fixing log build

* trying to figure out why suggested fee recipient isn't working in e2e, adding more logging temporarily

* rolling back logs

* making e2e test more dynamic

* fixing deepsource issue

* fixing bazel

* adding in condition for latest release

* duplicate condtion check.

* fixing gofmt

* rolling back changes

* adding fee recipient evaluator in new file

* fixing validator component logic

* testing rpc client addition

* testing fee recipient evaluator

* fixing bazel:

* testing casting

* test casting

* reverting

* testing casting

* testing casting

* testing log

* adding bazel fix

* switching mixed case and adding temp logging

* fixing gofmt

* rolling back changes

* removing fee recipient evaluator when web3signer is used

* test only minimal config

* reverting changes

* adding fee recipient evaluator to mainnet

* current version uses wrong flag name

* optimizing key usage

* making mining address a variable

* moving from global to local variable

* removing unneeded log

* removing redundant check

* make proposer settings mroe deterministic and also have the evaluator compare the wanting values

* fixing err return

* fixing bazel

* checking file too much moving it out

* fixing gosec

* trying to fix gosec error

* trying to fix address

* fixing linting

* trying to gerenate key and random address

* fixing linting

* fixing check for proposer config

* trying with multi config files

* fixing is dir check

* testing for older previous balance

* adding logging to help debug

* changing how i get the block numbers

* fixing missed error check

* adding gasused check

* adding log for current gas used

* taking suggestion to make fee recipient more deterministic

* fixing linting

* fixing check

* fixing the address check

* fixing format error

* logic to differentiate recipients

* fixing linting
2022-06-30 00:24:39 +00:00
Potuz
5d29ca4984 Experimental disable boundary checks (#10936)
* init

* bellatrix + altair tests passing

* Add Phase0 support

* add feature flag

* phase0 test

* restore testvectors

* mod tidy

* state tests

* gaz

* do not call precompute

* fix test

* Fix context

* move to own's method

* remove spectests pulltips

* time import

* remove phase0

* mod tidy

* fix getters

* Update beacon-chain/forkchoice/doubly-linked-tree/types.go

* reviews

* fix workspace

* Recursive rlocks

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-06-29 23:37:21 +00:00
Raul Jordan
23aeb4df6f Merge branch 'only-save-payload' of github.com:prysmaticlabs/prysm into only-save-payload 2022-06-29 16:50:04 -04:00
Raul Jordan
ca5b368d15 build 2022-06-29 16:49:29 -04:00
Raul Jordan
9cc1076fee conflicts 2022-06-29 15:44:29 -05:00
Raul Jordan
6453e98dc6 fix up confs 2022-06-29 16:40:23 -04:00
terencechain
43523c0266 RPC adds builder service (#10953)
* RPC adds builder service

* Update beacon-chain/builder/service.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-06-29 18:54:24 +00:00
Sammy Rosso
8ebbde7836 Testutil refactor attestations (#10952)
* Add AttestationUtil receiver

* Modify usage to account for the receiver

* Add missing explanatory comments

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-06-29 16:42:33 +00:00
Radosław Kapka
44c39a0b40 Don't log terminal difficulty has not been reached yet... until Bellatrix (#10951) 2022-06-29 13:53:59 +00:00
Radosław Kapka
f376f3fb9b Integrate better fastssz validation errors into Prysm (#10945)
* update dep

* regenerate SSZ, update test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-29 16:05:56 +08:00
Sammy Rosso
8510743406 Add additional logging fields for post-merge transition blocks (#10944)
Added additional logging information to Bellatrix blocks.

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-28 13:55:36 +00:00
Radosław Kapka
b82e2e7d40 Use prysmaticlabs/fastssz as a direct dependency (#10941)
* Update dependency

* Regenerate SSZ files

* fix BUILD files

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-06-28 13:03:24 +00:00
james-prysm
747db024ad regenerating beacon_chain proto 2022-06-16 09:04:54 -05:00
Raul Jordan
cdcb7ee389 fixing some rpc tests 2022-06-15 18:40:00 -06:00
Raul Jordan
5754f9d271 blinded bellatrix block in apis 2022-06-15 18:31:57 -06:00
Raul Jordan
f7a6167c1d pass db tests, fix cache issue 2022-06-15 18:23:02 -06:00
Raul Jordan
5f2fd08255 Merge branch 'develop' into only-save-payload 2022-06-08 17:39:10 -04:00
Raul Jordan
dda2064e07 resolve conflicts 2022-06-08 11:21:11 -04:00
Raul Jordan
0299d7a036 Merge branch 'only-save-payload' of github.com:prysmaticlabs/prysm into only-save-payload 2022-06-01 13:18:32 -05:00
Raul Jordan
7bb5bd0fba Merge branch 'develop' into only-save-payload 2022-05-26 21:32:03 +00:00
Raul Jordan
f5b2dd986a Merge branch 'develop' into only-save-payload 2022-05-26 17:28:28 -04:00
Raul Jordan
b9b24afb69 no log 2022-05-26 16:01:36 -04:00
Raul Jordan
8103095cc0 fix up 2022-05-26 15:58:41 -04:00
Raul Jordan
76f201ee8f no log 2022-05-26 15:31:03 -04:00
Raul Jordan
31c39aac96 fix up some failing tests 2022-05-26 15:21:07 -04:00
Raul Jordan
51109f61b4 Merge branch 'develop' into only-save-payload 2022-05-26 19:14:06 +00:00
Raul Jordan
dc94612272 Merge branch 'develop' into only-save-payload 2022-05-25 13:43:49 +00:00
Raul Jordan
eb150622ed Merge branch 'only-save-payload' of github.com:prysmaticlabs/prysm into only-save-payload 2022-05-24 23:29:59 -04:00
Raul Jordan
27e210f6b8 item 2022-05-24 23:29:54 -04:00
Raul Jordan
bbcbb8dc26 Merge branch 'develop' into only-save-payload 2022-05-25 03:17:13 +00:00
Raul Jordan
04a96da75d build 2022-05-24 23:16:21 -04:00
Raul Jordan
1acb3b6346 gaz 2022-05-24 14:07:25 -04:00
Raul Jordan
ff69994b7b reconstructs with payloads 2022-05-24 14:06:27 -04:00
Raul Jordan
7ad27324fd right test 2022-05-24 13:59:22 -04:00
Raul Jordan
1cba6c306e Merge branch 'develop' into only-save-payload 2022-05-24 13:46:56 -04:00
Raul Jordan
247c2da608 work on more rpc tests 2022-05-24 12:58:29 -04:00
Raul Jordan
f749702ed7 begin populating 2022-05-23 20:50:33 -04:00
Raul Jordan
dbd6232e6f Merge branch 'develop' into only-save-payload 2022-05-23 20:42:27 -04:00
Raul Jordan
d8b6b6d17c test with real txs 2022-05-23 20:40:44 -04:00
Raul Jordan
e2a06625cf Merge branch 'develop' into only-save-payload 2022-05-23 19:53:45 -04:00
Raul Jordan
92f9aff295 begin testing happy case of reconstruction func 2022-05-23 19:53:34 -04:00
Raul Jordan
ab734442a3 test fixes 2022-05-23 17:44:55 -04:00
Raul Jordan
36b1efb12f Merge branch 'develop' into only-save-payload 2022-05-23 16:32:54 -04:00
Raul Jordan
5a4a4c2016 engine client correct use 2022-05-20 18:32:21 -04:00
Raul Jordan
ba6c28c48d correct logic 2022-05-20 18:29:48 -04:00
Raul Jordan
01ae8d58d5 wrapper tests 2022-05-20 18:21:58 -04:00
Raul Jordan
574b03d2ed begin some tests 2022-05-20 18:02:35 -04:00
Raul Jordan
0c6feb60b1 builds 2022-05-20 17:51:56 -04:00
Raul Jordan
70143cff56 Merge branch 'payload-utils' into only-save-payload 2022-05-20 17:43:15 -04:00
Raul Jordan
49aedf8459 move to consensus-types folder 2022-05-20 17:43:00 -04:00
Raul Jordan
ea5e8b99b7 clean move 2022-05-20 17:30:40 -04:00
Raul Jordan
3611afb448 gaz 2022-05-20 17:28:04 -04:00
Raul Jordan
d3a1cff406 beacon block is nil wrapper 2022-05-20 17:26:45 -04:00
Raul Jordan
e3c07ac84f gazelle 2022-05-20 17:24:41 -04:00
Raul Jordan
57d52089bc handle err 2022-05-20 17:18:46 -04:00
Raul Jordan
d0b92aa42b future proof the code 2022-05-20 17:17:11 -04:00
Raul Jordan
35a7cc43e3 Merge branch 'develop' into only-save-payload 2022-05-20 17:10:15 -04:00
Raul Jordan
c214525e70 simpler logic 2022-05-20 16:40:43 -04:00
Raul Jordan
fcd9f0830e simplify 2022-05-20 16:27:08 -04:00
Raul Jordan
8c8380f28c better version check 2022-05-20 16:02:53 -04:00
Raul Jordan
5885e44670 Merge branch 'develop' into only-save-payload 2022-05-20 15:49:42 -04:00
Raul Jordan
11e0f4025a metrics for responding to blocks by range 2022-05-17 20:58:30 -04:00
Raul Jordan
05ed96dc25 add empty payload check 2022-05-17 20:35:40 -04:00
Raul Jordan
c57baa00f7 Merge branch 'develop' into only-save-payload 2022-05-17 19:55:41 -04:00
Raul Jordan
76b2e23232 ensure can respond to peer requests 2022-05-13 16:07:15 -04:00
Raul Jordan
68e67c3023 add logging times 2022-05-13 15:53:01 -04:00
Raul Jordan
eaa3d756e7 reconstruct full blocks during blocks by range request 2022-05-13 15:50:07 -04:00
Raul Jordan
cdf4c8d3fe api responses with real txs 2022-05-13 15:35:36 -04:00
Raul Jordan
ada07f5358 block reconstruction working as expected 2022-05-13 13:56:58 -04:00
Raul Jordan
114277d0b0 payload api reconstruct 2022-05-13 13:24:47 -04:00
Raul Jordan
0b6bf2c316 ensure can overcome sync hurdles 2022-05-13 00:05:07 -04:00
Raul Jordan
2299b00cd8 building up payloads 2022-05-12 17:08:42 -04:00
Raul Jordan
4ba8c98acd Merge branch 'develop' into only-save-payload 2022-05-12 15:52:59 -04:00
Raul Jordan
63f858d2da Merge branch 'develop' into only-save-payload 2022-05-12 15:52:49 -04:00
Raul Jordan
e7d9b33904 reconstruct full 2022-05-11 23:58:49 -04:00
Raul Jordan
77657dca93 reconstruct 2022-05-11 23:53:32 -04:00
Raul Jordan
c755751410 Merge branch 'develop' into only-save-payload 2022-05-11 14:24:58 -04:00
Raul Jordan
571edeaf43 wrap signed blinded 2022-05-05 17:10:04 -04:00
Raul Jordan
e2e8528f97 build from blinded 2022-05-05 17:05:23 -04:00
Raul Jordan
2cfbc92c17 experiment with only saving blinded blocks 2022-05-05 17:03:10 -04:00
271 changed files with 6586 additions and 3836 deletions

View File

@@ -12,11 +12,15 @@ go_library(
deps = [
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)
@@ -36,6 +40,7 @@ go_test(
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_golang_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -12,11 +12,15 @@ import (
"text/template"
"time"
"github.com/pkg/errors"
mathprysm "github.com/prysmaticlabs/prysm/math"
"go.opencensus.io/trace"
"golang.org/x/sync/errgroup"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
log "github.com/sirupsen/logrus"
)
@@ -30,6 +34,8 @@ const (
var errMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
var errMalformedRequest = errors.New("required request data are missing")
const registerValidatorBatchLimit = 100
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
@@ -128,37 +134,49 @@ func (c *Client) NodeURL() string {
type reqOption func(*http.Request)
// do is a generic, opinionated GET function to reduce boilerplate amongst the getters in this package.
func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, opts ...reqOption) ([]byte, error) {
// do is a generic, opinionated request function to reduce boilerplate amongst the methods in this package.
func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, opts ...reqOption) (res []byte, err error) {
ctx, span := trace.StartSpan(ctx, "builder.client.do")
defer func() {
tracing.AnnotateError(span, err)
span.End()
}()
u := c.baseURL.ResolveReference(&url.URL{Path: path})
log.Printf("requesting %s", u.String())
span.AddAttributes(trace.StringAttribute("url", u.String()),
trace.StringAttribute("method", method))
req, err := http.NewRequestWithContext(ctx, method, u.String(), body)
if err != nil {
return nil, err
return
}
for _, o := range opts {
o(req)
}
for _, o := range c.obvs {
if err := o.observe(req); err != nil {
return nil, err
if err = o.observe(req); err != nil {
return
}
}
r, err := c.hc.Do(req)
if err != nil {
return nil, err
return
}
defer func() {
err = r.Body.Close()
closeErr := r.Body.Close()
if closeErr != nil {
log.WithError(closeErr).Error("Failed to close response body")
}
}()
if r.StatusCode != http.StatusOK {
return nil, non200Err(r)
err = non200Err(r)
return
}
b, err := io.ReadAll(r.Body)
res, err = io.ReadAll(r.Body)
if err != nil {
return nil, errors.Wrap(err, "error reading http response body from GetBlock")
err = errors.Wrap(err, "error reading http response body from builder server")
return
}
return b, nil
return
}
var execHeaderTemplate = template.Must(template.New("").Parse(getExecHeaderPath))
@@ -201,19 +219,38 @@ func (c *Client) GetHeader(ctx context.Context, slot types.Slot, parentHash [32]
// RegisterValidator encodes the SignedValidatorRegistrationV1 message to json (including hex-encoding the byte
// fields with 0x prefixes) and posts to the builder validator registration endpoint.
func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error {
ctx, span := trace.StartSpan(ctx, "builder.client.RegisterValidator")
defer span.End()
if len(svr) == 0 {
return errors.Wrap(errMalformedRequest, "empty validator registration list")
err := errors.Wrap(errMalformedRequest, "empty validator registration list")
tracing.AnnotateError(span, err)
return err
}
vs := make([]*SignedValidatorRegistration, len(svr))
for i := 0; i < len(svr); i++ {
vs[i] = &SignedValidatorRegistration{SignedValidatorRegistrationV1: svr[i]}
eg, ctx := errgroup.WithContext(ctx)
for i := 0; i < len(svr); i += registerValidatorBatchLimit {
end := int(mathprysm.Min(uint64(len(svr)), uint64(i+registerValidatorBatchLimit))) // lint:ignore uintcast -- Request will never exceed int.
vs := make([]*SignedValidatorRegistration, 0, registerValidatorBatchLimit)
for j := i; j < end; j++ {
vs = append(vs, &SignedValidatorRegistration{SignedValidatorRegistrationV1: svr[j]})
}
body, err := json.Marshal(vs)
if err != nil {
err := errors.Wrap(err, "error encoding the SignedValidatorRegistration value body in RegisterValidator")
tracing.AnnotateError(span, err)
return err
}
eg.Go(func() error {
ctx, span := trace.StartSpan(ctx, "builder.client.RegisterValidator.Go")
defer span.End()
span.AddAttributes(trace.Int64Attribute("reqs", int64(len(vs))))
_, err = c.do(ctx, http.MethodPost, postRegisterValidatorPath, bytes.NewBuffer(body))
return err
})
}
body, err := json.Marshal(vs)
if err != nil {
return errors.Wrap(err, "error encoding the SignedValidatorRegistration value body in RegisterValidator")
}
_, err = c.do(ctx, http.MethodPost, postRegisterValidatorPath, bytes.NewBuffer(body))
return err
return eg.Wait()
}
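
The RegisterValidator change above splits the registrations into batches of at most registerValidatorBatchLimit (100) and posts the batches concurrently through an errgroup, returning the first error once all requests finish. Below is a minimal, self-contained sketch of that split-and-post pattern; postBatch and the []int payload are illustrative placeholders, not the builder client's actual code.

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

const batchLimit = 100

// postBatch is a hypothetical stand-in for the HTTP POST the client performs per batch.
func postBatch(ctx context.Context, batch []int) error {
	fmt.Printf("posting %d items\n", len(batch))
	return nil
}

// registerAll splits items into batches of at most batchLimit and posts them concurrently.
func registerAll(ctx context.Context, items []int) error {
	eg, ctx := errgroup.WithContext(ctx)
	for i := 0; i < len(items); i += batchLimit {
		end := i + batchLimit
		if end > len(items) {
			end = len(items)
		}
		batch := items[i:end] // new variable each iteration, safe to capture in the closure
		eg.Go(func() error {
			return postBatch(ctx, batch)
		})
	}
	return eg.Wait() // first non-nil error from any batch, if any
}

func main() {
	// 301 items produce batches of 100, 100, 100 and 1, mirroring the test above.
	fmt.Println(registerAll(context.Background(), make([]int, 301)))
}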
// SubmitBlindedBlock calls the builder API endpoint that binds the validator to the builder and submits the block.

View File

@@ -3,6 +3,7 @@ package builder
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
@@ -108,6 +109,52 @@ func TestClient_RegisterValidator(t *testing.T) {
require.NoError(t, c.RegisterValidator(ctx, []*eth.SignedValidatorRegistrationV1{reg}))
}
func TestClient_RegisterValidator_Over100Requests(t *testing.T) {
reqs := make([]*eth.SignedValidatorRegistrationV1, 301)
for i := 0; i < len(reqs); i++ {
reqs[i] = &eth.SignedValidatorRegistrationV1{
Message: &eth.ValidatorRegistrationV1{
FeeRecipient: ezDecode(t, params.BeaconConfig().EthBurnAddressHex),
GasLimit: 23,
Timestamp: 42,
Pubkey: []byte(fmt.Sprint(i)),
},
}
}
var total int
ctx := context.Background()
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
body, err := io.ReadAll(r.Body)
require.NoError(t, r.Body.Close())
require.NoError(t, err)
recvd := make([]*SignedValidatorRegistration, 0)
require.NoError(t, json.Unmarshal(body, &recvd))
if len(recvd) > registerValidatorBatchLimit {
t.Errorf("Number of requests (%d) exceeds limit (%d)", len(recvd), registerValidatorBatchLimit)
}
total += len(recvd)
require.Equal(t, http.MethodPost, r.Method)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBuffer(nil)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
require.NoError(t, c.RegisterValidator(ctx, reqs))
require.Equal(t, len(reqs), total)
}
func TestClient_GetHeader(t *testing.T) {
ctx := context.Background()
expectedPath := "/eth/v1/builder/header/23/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"

View File

@@ -31,6 +31,22 @@ func (r *SignedValidatorRegistration) MarshalJSON() ([]byte, error) {
})
}
func (r *SignedValidatorRegistration) UnmarshalJSON(b []byte) error {
if r.SignedValidatorRegistrationV1 == nil {
r.SignedValidatorRegistrationV1 = &eth.SignedValidatorRegistrationV1{}
}
o := struct {
Message *ValidatorRegistration `json:"message,omitempty"`
Signature hexutil.Bytes `json:"signature,omitempty"`
}{}
if err := json.Unmarshal(b, &o); err != nil {
return err
}
r.Message = o.Message.ValidatorRegistrationV1
r.Signature = o.Signature
return nil
}
func (r *ValidatorRegistration) MarshalJSON() ([]byte, error) {
return json.Marshal(struct {
FeeRecipient hexutil.Bytes `json:"fee_recipient,omitempty"`
@@ -45,6 +61,33 @@ func (r *ValidatorRegistration) MarshalJSON() ([]byte, error) {
})
}
func (r *ValidatorRegistration) UnmarshalJSON(b []byte) error {
if r.ValidatorRegistrationV1 == nil {
r.ValidatorRegistrationV1 = &eth.ValidatorRegistrationV1{}
}
o := struct {
FeeRecipient hexutil.Bytes `json:"fee_recipient,omitempty"`
GasLimit string `json:"gas_limit,omitempty"`
Timestamp string `json:"timestamp,omitempty"`
Pubkey hexutil.Bytes `json:"pubkey,omitempty"`
}{}
if err := json.Unmarshal(b, &o); err != nil {
return err
}
r.FeeRecipient = o.FeeRecipient
r.Pubkey = o.Pubkey
var err error
if r.GasLimit, err = strconv.ParseUint(o.GasLimit, 10, 64); err != nil {
return errors.Wrap(err, "failed to parse gas limit")
}
if r.Timestamp, err = strconv.ParseUint(o.Timestamp, 10, 64); err != nil {
return errors.Wrap(err, "failed to parse timestamp")
}
return nil
}
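
The new UnmarshalJSON methods mirror the existing MarshalJSON: byte fields decode through hexutil.Bytes, while gas_limit and timestamp arrive as decimal strings and are parsed with strconv.ParseUint. The sketch below isolates that string-encoded-integer pattern in a runnable form; the registration struct and its fields are illustrative, not the builder API types.

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// registration is an illustrative struct, not the builder API type.
type registration struct {
	GasLimit  uint64
	Timestamp uint64
}

func (r *registration) UnmarshalJSON(b []byte) error {
	// Decode into string fields first, since the wire format carries decimal strings.
	o := struct {
		GasLimit  string `json:"gas_limit"`
		Timestamp string `json:"timestamp"`
	}{}
	if err := json.Unmarshal(b, &o); err != nil {
		return err
	}
	var err error
	if r.GasLimit, err = strconv.ParseUint(o.GasLimit, 10, 64); err != nil {
		return fmt.Errorf("failed to parse gas limit: %w", err)
	}
	if r.Timestamp, err = strconv.ParseUint(o.Timestamp, 10, 64); err != nil {
		return fmt.Errorf("failed to parse timestamp: %w", err)
	}
	return nil
}

func main() {
	var r registration
	if err := json.Unmarshal([]byte(`{"gas_limit":"23","timestamp":"42"}`), &r); err != nil {
		panic(err)
	}
	fmt.Println(r.GasLimit, r.Timestamp) // 23 42
}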
type Uint256 struct {
*big.Int
}

View File

@@ -9,6 +9,7 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/golang/protobuf/proto"
"github.com/prysmaticlabs/go-bitfield"
v1 "github.com/prysmaticlabs/prysm/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
@@ -31,7 +32,8 @@ func TestSignedValidatorRegistration_MarshalJSON(t *testing.T) {
},
Signature: make([]byte, 96),
}
je, err := json.Marshal(&SignedValidatorRegistration{SignedValidatorRegistrationV1: svr})
a := &SignedValidatorRegistration{SignedValidatorRegistrationV1: svr}
je, err := json.Marshal(a)
require.NoError(t, err)
// decode with a struct w/ plain strings so we can check the string encoding of the hex fields
un := struct {
@@ -45,6 +47,14 @@ func TestSignedValidatorRegistration_MarshalJSON(t *testing.T) {
require.Equal(t, "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", un.Signature)
require.Equal(t, "0x0000000000000000000000000000000000000000", un.Message.FeeRecipient)
require.Equal(t, "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", un.Message.Pubkey)
t.Run("roundtrip", func(t *testing.T) {
b := &SignedValidatorRegistration{}
if err := json.Unmarshal(je, b); err != nil {
require.NoError(t, err)
}
require.Equal(t, proto.Equal(a.SignedValidatorRegistrationV1, b.SignedValidatorRegistrationV1), true)
})
}
var testExampleHeaderResponse = `{

View File

@@ -83,6 +83,7 @@ type FinalizationFetcher interface {
CurrentJustifiedCheckpt() *ethpb.Checkpoint
PreviousJustifiedCheckpt() *ethpb.Checkpoint
VerifyFinalizedBlkDescendant(ctx context.Context, blockRoot [32]byte) error
IsFinalized(ctx context.Context, blockRoot [32]byte) bool
}
// OptimisticModeFetcher retrieves information about optimistic status of the node.
@@ -307,6 +308,15 @@ func (s *Service) IsOptimistic(ctx context.Context) (bool, error) {
return s.IsOptimisticForRoot(ctx, s.head.root)
}
// IsFinalized returns true if the input root is finalized.
// It first checks latest finalized root then checks finalized root index in DB.
func (s *Service) IsFinalized(ctx context.Context, root [32]byte) bool {
if s.ForkChoicer().FinalizedCheckpoint().Root == root {
return true
}
return s.cfg.BeaconDB.IsFinalizedBlock(ctx, root)
}
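
IsFinalized above checks the in-memory fork choice finalized root first and only falls back to the database's finalized-block index on a miss. A small sketch of that lookup order, with a plain map standing in for BeaconDB.IsFinalizedBlock:

package main

import "fmt"

// finalizedChecker illustrates the two-step lookup; it is not the Service type.
type finalizedChecker struct {
	latestFinalizedRoot [32]byte          // fork choice finalized checkpoint root
	dbFinalizedIndex    map[[32]byte]bool // stand-in for the DB finalized-block index
}

func (c *finalizedChecker) isFinalized(root [32]byte) bool {
	if c.latestFinalizedRoot == root {
		return true // fast path: matches the latest finalized checkpoint
	}
	return c.dbFinalizedIndex[root] // slow path: previously finalized blocks in the DB
}

func main() {
	c := &finalizedChecker{
		latestFinalizedRoot: [32]byte{'a'},
		dbFinalizedIndex:    map[[32]byte]bool{{'b'}: true},
	}
	fmt.Println(c.isFinalized([32]byte{'a'}), c.isFinalized([32]byte{'b'}), c.isFinalized([32]byte{'c'})) // true true false
}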
// IsOptimisticForRoot takes the root as argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {

View File

@@ -669,3 +669,25 @@ func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
require.Equal(t, true, validated)
}
func TestService_IsFinalized(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}}
r1 := [32]byte{'a'}
require.NoError(t, c.ForkChoiceStore().UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{
Root: r1,
}))
b := util.NewBeaconBlock()
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: br[:], Slot: 10}))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, br))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{
Root: br[:],
}))
require.Equal(t, true, c.IsFinalized(ctx, r1))
require.Equal(t, true, c.IsFinalized(ctx, br))
require.Equal(t, false, c.IsFinalized(ctx, [32]byte{'c'}))
}

View File

@@ -4,7 +4,9 @@ import "github.com/pkg/errors"
var (
// ErrInvalidPayload is returned when the payload is invalid
ErrInvalidPayload = errors.New("recevied an INVALID payload from execution engine")
ErrInvalidPayload = errors.New("received an INVALID payload from execution engine")
// ErrInvalidBlockHashPayloadStatus is returned when the payload has invalid block hash.
ErrInvalidBlockHashPayloadStatus = errors.New("received an INVALID_BLOCK_HASH payload from execution engine")
// ErrUndefinedExecutionEngineError is returned when the execution engine returns an error that is not defined
ErrUndefinedExecutionEngineError = errors.New("received an undefined ee error")
// errNilFinalizedInStore is returned when a nil finalized checkpt is returned from store.

View File

@@ -110,10 +110,15 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
return nil, err
}
if err := s.saveHead(ctx, r, b, st); err != nil {
log.WithError(err).Error("could not save head after pruning invalid blocks")
}
log.WithFields(logrus.Fields{
"slot": headBlk.Slot(),
"blockRoot": fmt.Sprintf("%#x", headRoot),
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(headRoot[:])),
"invalidCount": len(invalidRoots),
"newHeadRoot": fmt.Sprintf("%#x", bytesutil.Trunc(r[:])),
}).Warn("Pruned invalid blocks")
return pid, ErrInvalidPayload
@@ -140,14 +145,7 @@ func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, er
if err != nil {
return [32]byte{}, err
}
if blocks.IsPreBellatrixVersion(blk.Block().Version()) {
return params.BeaconConfig().ZeroHash, nil
}
payload, err := blk.Block().Body().ExecutionPayload()
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get execution payload")
}
return bytesutil.ToBytes32(payload.BlockHash), nil
return getBlockPayloadHash(blk.Block())
}
// notifyForkchoiceUpdate signals execution engine on a new payload.
@@ -208,6 +206,9 @@ func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
"invalidCount": len(invalidRoots),
}).Warn("Pruned invalid blocks")
return false, invalidBlock{ErrInvalidPayload}
case powchain.ErrInvalidBlockHashPayloadStatus:
newPayloadInvalidNodeCount.Inc()
return false, invalidBlock{ErrInvalidBlockHashPayloadStatus}
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}

View File

@@ -6,6 +6,7 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
gethtypes "github.com/ethereum/go-ethereum/core/types"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
@@ -31,6 +32,7 @@ import (
)
func Test_NotifyForkchoiceUpdate(t *testing.T) {
params.BeaconConfig().SafeSlotsToImportOptimistically = 0
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
altairBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockAltair())
@@ -212,7 +214,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
// 2. forkchoice removes the weights of these blocks
// 3. the blockchain package calls fcu to obtain heads G -> F -> D.
func Test_NotifyForkchoiceUpdateRecursive(t *testing.T) {
func Test_NotifyForkchoiceUpdateRecursive_Protoarray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -270,14 +272,11 @@ func Test_NotifyForkchoiceUpdateRecursive(t *testing.T) {
require.NoError(t, err)
// Insert blocks into forkchoice
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
service, err := NewService(ctx, opts...)
service := setupBeaconChain(t, beaconDB)
fcs := protoarray.New()
service.cfg.ForkChoiceStore = fcs
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
service.justifiedBalances.balances = []uint64{50, 100, 200}
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -318,6 +317,153 @@ func Test_NotifyForkchoiceUpdateRecursive(t *testing.T) {
// Prepare Engine Mock to return invalid unless head is D, LVH = E
service.cfg.ExecutionEngineCaller = &mockPOW.EngineClient{ErrForkchoiceUpdated: powchain.ErrInvalidPayloadStatus, ForkChoiceUpdatedResp: pe[:], OverrideValidHash: [32]byte{'D'}}
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
state: st,
block: wba,
}
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &notifyForkchoiceUpdateArg{
headState: st,
headBlock: wbg.Block(),
headRoot: brg,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
require.ErrorIs(t, ErrInvalidPayload, err)
// Ensure Head is D
headRoot, err = fcs.Head(ctx, service.justifiedBalances.balances)
require.NoError(t, err)
require.Equal(t, brd, headRoot)
// Ensure F and G were removed but their parent E wasn't
require.Equal(t, false, fcs.HasNode(brf))
require.Equal(t, false, fcs.HasNode(brg))
require.Equal(t, true, fcs.HasNode(bre))
}
//
//  A <- B <- C <- D
//       \
//         ---------- E <- F
//                    \
//                      ------ G
// D is the current head, attestations for F and G come late, both are invalid.
// We switch recursively to F then G and finally to D.
//
// We test:
// 1. forkchoice removes blocks F and G from the forkchoice implementation
// 2. forkchoice removes the weights of these blocks
// 3. the blockchain package calls fcu to obtain heads G -> F -> D.
func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
// Prepare blocks
ba := util.NewBeaconBlockBellatrix()
ba.Block.Body.ExecutionPayload.BlockNumber = 1
wba := util.SaveBlock(t, ctx, beaconDB, ba)
bra, err := wba.Block().HashTreeRoot()
require.NoError(t, err)
bb := util.NewBeaconBlockBellatrix()
bb.Block.Body.ExecutionPayload.BlockNumber = 2
wbb := util.SaveBlock(t, ctx, beaconDB, bb)
brb, err := wbb.Block().HashTreeRoot()
require.NoError(t, err)
bc := util.NewBeaconBlockBellatrix()
bc.Block.Body.ExecutionPayload.BlockNumber = 3
wbc := util.SaveBlock(t, ctx, beaconDB, bc)
brc, err := wbc.Block().HashTreeRoot()
require.NoError(t, err)
bd := util.NewBeaconBlockBellatrix()
pd := [32]byte{'D'}
bd.Block.Body.ExecutionPayload.BlockHash = pd[:]
bd.Block.Body.ExecutionPayload.BlockNumber = 4
wbd := util.SaveBlock(t, ctx, beaconDB, bd)
brd, err := wbd.Block().HashTreeRoot()
require.NoError(t, err)
be := util.NewBeaconBlockBellatrix()
pe := [32]byte{'E'}
be.Block.Body.ExecutionPayload.BlockHash = pe[:]
be.Block.Body.ExecutionPayload.BlockNumber = 5
wbe := util.SaveBlock(t, ctx, beaconDB, be)
bre, err := wbe.Block().HashTreeRoot()
require.NoError(t, err)
bf := util.NewBeaconBlockBellatrix()
pf := [32]byte{'F'}
bf.Block.Body.ExecutionPayload.BlockHash = pf[:]
bf.Block.Body.ExecutionPayload.BlockNumber = 6
bf.Block.ParentRoot = bre[:]
wbf := util.SaveBlock(t, ctx, beaconDB, bf)
brf, err := wbf.Block().HashTreeRoot()
require.NoError(t, err)
bg := util.NewBeaconBlockBellatrix()
bg.Block.Body.ExecutionPayload.BlockNumber = 7
pg := [32]byte{'G'}
bg.Block.Body.ExecutionPayload.BlockHash = pg[:]
bg.Block.ParentRoot = bre[:]
wbg := util.SaveBlock(t, ctx, beaconDB, bg)
brg, err := wbg.Block().HashTreeRoot()
require.NoError(t, err)
// Insert blocks into forkchoice
service := setupBeaconChain(t, beaconDB)
fcs := doublylinkedtree.New()
service.cfg.ForkChoiceStore = fcs
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
service.justifiedBalances.balances = []uint64{50, 100, 200}
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
// Insert Attestations to D, F and G so that they have higher weight than D
// Ensure G is head
fcs.ProcessAttestation(ctx, []uint64{0}, brd, 1)
fcs.ProcessAttestation(ctx, []uint64{1}, brf, 1)
fcs.ProcessAttestation(ctx, []uint64{2}, brg, 1)
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: bra}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(jc))
headRoot, err := fcs.Head(ctx, []uint64{50, 100, 200})
require.NoError(t, err)
require.Equal(t, brg, headRoot)
// Prepare Engine Mock to return invalid unless head is D, LVH = E
service.cfg.ExecutionEngineCaller = &mockPOW.EngineClient{ErrForkchoiceUpdated: powchain.ErrInvalidPayloadStatus, ForkChoiceUpdatedResp: pe[:], OverrideValidHash: [32]byte{'D'}}
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
state: st,
block: wba,
}
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
@@ -528,16 +674,40 @@ func Test_NotifyNewPayload(t *testing.T) {
newPayloadErr: ErrUndefinedExecutionEngineError,
errString: ErrUndefinedExecutionEngineError.Error(),
},
{
name: "invalid block hash error from ee",
postState: bellatrixState,
blk: func() interfaces.SignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
b, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
return b
}(),
newPayloadErr: ErrInvalidBlockHashPayloadStatus,
errString: ErrInvalidBlockHashPayloadStatus.Error(),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
e := &mockPOW.EngineClient{ErrNewPayload: tt.newPayloadErr, BlockByHashMap: map[[32]byte]*v1.ExecutionBlock{}}
e.BlockByHashMap[[32]byte{'a'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("b")),
},
TotalDifficulty: "0x2",
}
e.BlockByHashMap[[32]byte{'b'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("3")),
},
TotalDifficulty: "0x1",
}
service.cfg.ExecutionEngineCaller = e
@@ -590,11 +760,15 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
require.NoError(t, err)
e := &mockPOW.EngineClient{BlockByHashMap: map[[32]byte]*v1.ExecutionBlock{}}
e.BlockByHashMap[[32]byte{'a'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("b")),
},
TotalDifficulty: "0x2",
}
e.BlockByHashMap[[32]byte{'b'}] = &v1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("3")),
},
TotalDifficulty: "0x1",
}
service.cfg.ExecutionEngineCaller = e

View File

@@ -62,24 +62,30 @@ func logBlockSyncStatus(block interfaces.BeaconBlock, blockRoot [32]byte, justif
return err
}
level := log.Logger.GetLevel()
log = log.WithField("slot", block.Slot())
if level >= logrus.DebugLevel {
log = log.WithField("slotInEpoch", block.Slot()%params.BeaconConfig().SlotsPerEpoch)
log = log.WithField("justifiedEpoch", justified.Epoch)
log = log.WithField("justifiedRoot", fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]))
log = log.WithField("parentRoot", fmt.Sprintf("0x%s...", hex.EncodeToString(block.ParentRoot())[:8]))
log = log.WithField("version", version.String(block.Version()))
log = log.WithField("sinceSlotStartTime", prysmTime.Now().Sub(startTime))
log = log.WithField("chainServiceProcessedTime", prysmTime.Now().Sub(receivedTime))
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(block.ParentRoot())[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime),
}).Debug("Synced new block")
} else {
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"epoch": slots.ToEpoch(block.Slot()),
}).Info("Synced new block")
}
log.WithFields(logrus.Fields{
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
}).Info("Synced new block")
return nil
}
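
The logging refactor above replaces chained WithField calls with a single WithFields map and gates the verbose field set on the configured logrus level, so the extra fields are only assembled when debug logging is enabled. A minimal sketch of that level-gated pattern (logger setup and field values are illustrative):

package main

import "github.com/sirupsen/logrus"

func logSynced(log *logrus.Logger, slot uint64, root string) {
	if log.GetLevel() >= logrus.DebugLevel {
		// Verbose form: extra fields only built when debug (or trace) logging is on.
		log.WithFields(logrus.Fields{
			"slot":       slot,
			"block":      root,
			"extraField": "detail that is only useful when debugging",
		}).Debug("Synced new block")
		return
	}
	log.WithFields(logrus.Fields{
		"slot":  slot,
		"block": root,
	}).Info("Synced new block")
}

func main() {
	log := logrus.New()
	log.SetLevel(logrus.InfoLevel) // with InfoLevel, only the compact form is emitted
	logSynced(log, 42, "0x12345678...")
}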

View File

@@ -35,7 +35,7 @@ func TestReportEpochMetrics_SlashedValidatorOutOfBound(t *testing.T) {
require.NoError(t, err)
v.Slashed = true
require.NoError(t, h.UpdateValidatorAtIndex(0, v))
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 1, Data: util.HydrateAttestationData(&eth.AttestationData{})}))
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 1, Data: util.NewAttestationUtil().HydrateAttestationData(&eth.AttestationData{})}))
err = reportEpochMetrics(context.Background(), h, h)
require.ErrorContains(t, "slot 0 out of bounds", err)
}

View File

@@ -100,7 +100,7 @@ func (s *Service) getBlkParentHashAndTD(ctx context.Context, blkHash []byte) ([]
if overflows {
return nil, nil, errors.New("total difficulty overflows")
}
return blk.ParentHash, blkTDUint256, nil
return blk.ParentHash[:], blkTDUint256, nil
}
// validateTerminalBlockHash validates if the merge block is a valid terminal PoW block.

View File

@@ -6,12 +6,12 @@ import (
"math/big"
"testing"
gethtypes "github.com/ethereum/go-ethereum/core/types"
"github.com/holiman/uint256"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
mocks "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
@@ -120,12 +120,19 @@ func Test_validateMergeBlock(t *testing.T) {
engine := &mocks.EngineClient{BlockByHashMap: map[[32]byte]*enginev1.ExecutionBlock{}}
service.cfg.ExecutionEngineCaller = engine
engine.BlockByHashMap[[32]byte{'a'}] = &enginev1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
a := [32]byte{'a'}
b := [32]byte{'b'}
mergeBlockParentHash := [32]byte{'3'}
engine.BlockByHashMap[a] = &enginev1.ExecutionBlock{
Header: gethtypes.Header{
ParentHash: b,
},
TotalDifficulty: "0x2",
}
engine.BlockByHashMap[[32]byte{'b'}] = &enginev1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength),
engine.BlockByHashMap[b] = &enginev1.ExecutionBlock{
Header: gethtypes.Header{
ParentHash: mergeBlockParentHash,
},
TotalDifficulty: "0x1",
}
blk := &ethpb.SignedBeaconBlockBellatrix{
@@ -133,18 +140,18 @@ func Test_validateMergeBlock(t *testing.T) {
Slot: 1,
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &enginev1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
ParentHash: a[:],
},
},
},
}
b, err := wrapper.WrappedSignedBeaconBlock(blk)
bk, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.validateMergeBlock(ctx, b))
require.NoError(t, service.validateMergeBlock(ctx, bk))
cfg.TerminalTotalDifficulty = "1"
params.OverrideBeaconConfig(cfg)
err = service.validateMergeBlock(ctx, b)
err = service.validateMergeBlock(ctx, bk)
require.ErrorContains(t, "invalid TTD, configTTD: 1, currentTTD: 2, parentTTD: 1", err)
require.Equal(t, true, IsInvalidBlock(err))
}
@@ -167,7 +174,9 @@ func Test_getBlkParentHashAndTD(t *testing.T) {
p := [32]byte{'b'}
td := "0x1"
engine.BlockByHashMap[h] = &enginev1.ExecutionBlock{
ParentHash: p[:],
Header: gethtypes.Header{
ParentHash: p,
},
TotalDifficulty: td,
}
parentHash, totalDifficulty, err := service.getBlkParentHashAndTD(ctx, h[:])
@@ -183,14 +192,18 @@ func Test_getBlkParentHashAndTD(t *testing.T) {
require.ErrorContains(t, "pow block is nil", err)
engine.BlockByHashMap[h] = &enginev1.ExecutionBlock{
ParentHash: p[:],
Header: gethtypes.Header{
ParentHash: p,
},
TotalDifficulty: "1",
}
_, _, err = service.getBlkParentHashAndTD(ctx, h[:])
require.ErrorContains(t, "could not decode merge block total difficulty: hex string without 0x prefix", err)
engine.BlockByHashMap[h] = &enginev1.ExecutionBlock{
ParentHash: p[:],
Header: gethtypes.Header{
ParentHash: p,
},
TotalDifficulty: "0XFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
}
_, _, err = service.getBlkParentHashAndTD(ctx, h[:])

View File

@@ -70,6 +70,7 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, blkWithValidStateRoot))
au := util.AttestationUtil{}
tests := []struct {
name string
a *ethpb.Attestation
@@ -77,17 +78,17 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
}{
{
name: "attestation's data slot not aligned with target vote",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
wantedErr: "slot 32 does not match target epoch 0",
},
{
name: "no pre state for attestations's target block",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
wantedErr: "could not get pre state for epoch 0",
},
{
name: "process attestation doesn't match current epoch",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}}),
wantedErr: "target epoch 100 does not match current epoch",
},
@@ -176,6 +177,7 @@ func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, blkWithValidStateRoot))
au := util.AttestationUtil{}
tests := []struct {
name string
a *ethpb.Attestation
@@ -183,17 +185,17 @@ func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
}{
{
name: "attestation's data slot not aligned with target vote",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
wantedErr: "slot 32 does not match target epoch 0",
},
{
name: "no pre state for attestations's target block",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
wantedErr: "could not get pre state for epoch 0",
},
{
name: "process attestation doesn't match current epoch",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
a: au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}}),
wantedErr: "target epoch 100 does not match current epoch",
},
@@ -249,7 +251,7 @@ func TestStore_OnAttestation_Ok_ProtoArray(t *testing.T) {
genesisState, pks := util.DeterministicGenesisState(t, 64)
service.SetGenesisTime(time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
att, err := util.NewAttestationUtil().GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].Data.Target.Root)
copied := genesisState.Copy()
@@ -279,7 +281,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
genesisState, pks := util.DeterministicGenesisState(t, 64)
service.SetGenesisTime(time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
att, err := util.NewAttestationUtil().GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].Data.Target.Root)
copied := genesisState.Copy()
@@ -422,8 +424,7 @@ func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
d := util.HydrateAttestationData(&ethpb.AttestationData{})
d := util.NewAttestationUtil().HydrateAttestationData(&ethpb.AttestationData{})
require.Equal(t, errBlockNotFoundInCacheOrDB, service.verifyBeaconBlock(ctx, d))
}

View File

@@ -106,6 +106,12 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
return err
}
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.ForkChoicer().JustifiedCheckpoint().Epoch
currStoreFinalizedEpoch := s.ForkChoicer().FinalizedCheckpoint().Epoch
preStateFinalizedEpoch := preState.FinalizedCheckpoint().Epoch
preStateJustifiedEpoch := preState.CurrentJustifiedCheckpoint().Epoch
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
@@ -130,9 +136,6 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState); err != nil {
return err
}
// save current justified and finalized epochs for future use
currJustifiedEpoch := s.ForkChoicer().JustifiedCheckpoint().Epoch
currFinalizedEpoch := s.ForkChoicer().FinalizedCheckpoint().Epoch
if err := s.insertBlockAndAttestationsToForkChoiceStore(ctx, signed.Block(), blockRoot, postState); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", signed.Block().Slot())
@@ -211,16 +214,20 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
}()
// Save justified check point to db.
if justified.Epoch > currJustifiedEpoch {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, postState.CurrentJustifiedCheckpoint()); err != nil {
postStateJustifiedEpoch := postState.CurrentJustifiedCheckpoint().Epoch
if justified.Epoch > currStoreJustifiedEpoch || (justified.Epoch == postStateJustifiedEpoch && justified.Epoch > preStateJustifiedEpoch) {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{
Epoch: justified.Epoch, Root: justified.Root[:],
}); err != nil {
return err
}
}
// Update finalized check point.
// Save finalized check point to db and more.
postStateFinalizedEpoch := postState.FinalizedCheckpoint().Epoch
finalized := s.ForkChoicer().FinalizedCheckpoint()
if finalized.Epoch > currFinalizedEpoch {
if err := s.updateFinalized(ctx, postState.FinalizedCheckpoint()); err != nil {
if finalized.Epoch > currStoreFinalizedEpoch || (finalized.Epoch == postStateFinalizedEpoch && finalized.Epoch > preStateFinalizedEpoch) {
if err := s.updateFinalized(ctx, &ethpb.Checkpoint{Epoch: finalized.Epoch, Root: finalized.Root[:]}); err != nil {
return err
}
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(finalized.Root)
@@ -493,9 +500,15 @@ func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Contex
ctx, span := trace.StartSpan(ctx, "blockChain.insertBlockAndAttestationsToForkChoiceStore")
defer span.End()
fCheckpoint := st.FinalizedCheckpoint()
jCheckpoint := st.CurrentJustifiedCheckpoint()
if err := s.insertBlockToForkChoiceStore(ctx, blk, root, st, fCheckpoint, jCheckpoint); err != nil {
if !s.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(blk.ParentRoot())) {
fCheckpoint := st.FinalizedCheckpoint()
jCheckpoint := st.CurrentJustifiedCheckpoint()
if err := s.fillInForkChoiceMissingBlocks(ctx, blk, fCheckpoint, jCheckpoint); err != nil {
return err
}
}
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, st, root); err != nil {
return err
}
// Feed in block's attestations to fork choice store.
@@ -513,13 +526,6 @@ func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Contex
return nil
}
func (s *Service) insertBlockToForkChoiceStore(ctx context.Context, blk interfaces.BeaconBlock, root [32]byte, st state.BeaconState, fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
if err := s.fillInForkChoiceMissingBlocks(ctx, blk, fCheckpoint, jCheckpoint); err != nil {
return err
}
return s.cfg.ForkChoiceStore.InsertNode(ctx, st, root)
}
// InsertSlashingsToForkChoiceStore inserts attester slashing indices to fork choice store.
// To call this function, it's caller's responsibility to ensure the slashing object is valid.
func (s *Service) InsertSlashingsToForkChoiceStore(ctx context.Context, slashings []*ethpb.AttesterSlashing) {
@@ -531,6 +537,27 @@ func (s *Service) InsertSlashingsToForkChoiceStore(ctx context.Context, slashing
}
}
func getBlockPayloadHash(blk interfaces.BeaconBlock) ([32]byte, error) {
var blockHashFromPayload [32]byte
if blocks.IsPreBellatrixVersion(blk.Version()) {
return blockHashFromPayload, nil
}
payload, err := blk.Body().ExecutionPayload()
switch {
case errors.Is(err, wrapper.ErrUnsupportedField):
payloadHeader, err := blk.Body().ExecutionPayloadHeader()
if err != nil {
return blockHashFromPayload, err
}
blockHashFromPayload = bytesutil.ToBytes32(payloadHeader.BlockHash)
case err != nil:
return blockHashFromPayload, err
default:
blockHashFromPayload = bytesutil.ToBytes32(payload.BlockHash)
}
return blockHashFromPayload, nil
}
// This saves post state info to DB or cache. This also saves post state info to fork choice store.
// Post state info consists of processed block and state. Do not call this method unless the block and state are verified.
func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b interfaces.SignedBeaconBlock, st state.BeaconState) error {
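Note on the hunks above: onBlock now snapshots the fork-choice store's justified/finalized epochs and the pre-state's checkpoint epochs before running the state transition, and persists a checkpoint when either the store's epoch advanced past that snapshot or the post-state agrees with the store and both moved past the pre-state. A minimal, self-contained sketch of that predicate (shouldPersistCheckpoint is a hypothetical name used for illustration, not a Prysm function):

package main

import "fmt"

// shouldPersistCheckpoint mirrors the persistence rule used above for both the
// justified and the finalized checkpoint. Prysm inlines this condition in onBlock;
// this helper exists only to make the rule easy to read in isolation.
func shouldPersistCheckpoint(storeEpoch, currStoreEpoch, postStateEpoch, preStateEpoch uint64) bool {
	return storeEpoch > currStoreEpoch ||
		(storeEpoch == postStateEpoch && storeEpoch > preStateEpoch)
}

func main() {
	// The store already tracked epoch 3 before processing, but the block's post-state
	// also reached epoch 3 while the pre-state was still at 2, so the checkpoint is
	// still written to the DB.
	fmt.Println(shouldPersistCheckpoint(3, 3, 3, 2)) // true
	// Nothing advanced relative to the store snapshot: skip the DB write.
	fmt.Println(shouldPersistCheckpoint(2, 3, 2, 2)) // false
}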

View File

@@ -9,6 +9,8 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
gethtypes "github.com/ethereum/go-ethereum/core/types"
"github.com/pkg/errors"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
@@ -1148,6 +1150,55 @@ func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
require.Equal(t, 3*params.BeaconConfig().SlotsPerEpoch, service.nextEpochBoundarySlot)
}
func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
depositCache, err := depositcache.New()
require.NoError(t, err)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithDepositCache(depositCache),
WithStateNotifier(&mock.MockStateNotifier{}),
WithAttestationPool(attestations.NewPool()),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
gs, keys := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
testState := gs.Copy()
for i := types.Slot(1); i <= 4*params.BeaconConfig().SlotsPerEpoch; i++ {
blk, err := util.GenerateFullBlock(testState, keys, util.DefaultBlockGenConfig(), i)
require.NoError(t, err)
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := wrapper.WrappedSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, fcs.NewSlot(ctx, i))
require.NoError(t, service.onBlock(ctx, wsb, r))
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
cp := service.CurrentJustifiedCheckpt()
require.Equal(t, types.Epoch(3), cp.Epoch)
cp = service.FinalizedCheckpt()
require.Equal(t, types.Epoch(2), cp.Epoch)
// The update should persist in DB.
j, err := service.cfg.BeaconDB.JustifiedCheckpoint(ctx)
require.NoError(t, err)
cp = service.CurrentJustifiedCheckpt()
require.Equal(t, j.Epoch, cp.Epoch)
f, err := service.cfg.BeaconDB.FinalizedCheckpoint(ctx)
require.NoError(t, err)
cp = service.FinalizedCheckpt()
require.Equal(t, f.Epoch, cp.Epoch)
}
func TestOnBlock_CanFinalize(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -1478,6 +1529,9 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
aHash := common.BytesToHash([]byte("a"))
bHash := common.BytesToHash([]byte("b"))
tests := []struct {
name string
stateVersion int
@@ -1508,7 +1562,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state older than Bellatrix, non empty payload",
stateVersion: 1,
payload: &enginev1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
ParentHash: aHash[:],
},
},
{
@@ -1534,7 +1588,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state is Bellatrix, non empty payload, empty header",
stateVersion: 2,
payload: &enginev1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
ParentHash: aHash[:],
},
header: &enginev1.ExecutionPayloadHeader{
ParentHash: make([]byte, fieldparams.RootLength),
@@ -1552,7 +1606,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state is Bellatrix, non empty payload, non empty header",
stateVersion: 2,
payload: &enginev1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
ParentHash: aHash[:],
},
header: &enginev1.ExecutionPayloadHeader{
BlockNumber: 1,
@@ -1562,7 +1616,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state is Bellatrix, non empty payload, nil header",
stateVersion: 2,
payload: &enginev1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
ParentHash: aHash[:],
},
errString: "nil header or block body",
},
@@ -1570,12 +1624,16 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
e := &mockPOW.EngineClient{BlockByHashMap: map[[32]byte]*enginev1.ExecutionBlock{}}
e.BlockByHashMap[[32]byte{'a'}] = &enginev1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
e.BlockByHashMap[aHash] = &enginev1.ExecutionBlock{
Header: gethtypes.Header{
ParentHash: bHash,
},
TotalDifficulty: "0x2",
}
e.BlockByHashMap[[32]byte{'b'}] = &enginev1.ExecutionBlock{
ParentHash: bytesutil.PadTo([]byte{'3'}, fieldparams.RootLength),
e.BlockByHashMap[bHash] = &enginev1.ExecutionBlock{
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("3")),
},
TotalDifficulty: "0x1",
}
service.cfg.ExecutionEngineCaller = e
@@ -1606,8 +1664,9 @@ func TestService_insertSlashingsToForkChoiceStore(t *testing.T) {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
au := util.AttestationUtil{}
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att1 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
@@ -1622,7 +1681,7 @@ func TestService_insertSlashingsToForkChoiceStore(t *testing.T) {
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)

View File

@@ -60,7 +60,7 @@ func TestVerifyLMDFFGConsistent_NotOK(t *testing.T) {
require.NoError(t, err)
wanted := "FFG and LMD votes are not consistent"
a := util.NewAttestation()
a := util.NewAttestationUtil().NewAttestation()
a.Data.Target.Epoch = 1
a.Data.Target.Root = []byte{'a'}
a.Data.BeaconBlockRoot = r33[:]
@@ -85,8 +85,7 @@ func TestVerifyLMDFFGConsistent_OK(t *testing.T) {
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
a := util.NewAttestation()
a := util.NewAttestationUtil().NewAttestation()
a.Data.Target.Epoch = 1
a.Data.Target.Root = r32[:]
a.Data.BeaconBlockRoot = r33[:]
@@ -106,7 +105,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(prysmTime.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
atts, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
atts, err := util.NewAttestationUtil().GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(atts[0].Data.Target.Root)
copied := genesisState.Copy()
@@ -227,7 +226,7 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
// Generate attestations for this block in Slot 1
atts, err := util.GenerateAttestations(copied, pks, 1, 1, false)
atts, err := util.NewAttestationUtil().GenerateAttestations(copied, pks, 1, 1, false)
require.NoError(t, err)
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
// Verify the target is in forkchoice

View File

@@ -63,6 +63,7 @@ type ChainService struct {
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
OptimisticCheckRootReceived [32]byte
FinalizedRoots map[[32]byte]bool
}
// ForkChoicer mocks the same method in the chain service
@@ -458,3 +459,8 @@ func (s *ChainService) UpdateHead(_ context.Context) error { return nil }
// ReceiveAttesterSlashing mocks the same method in the chain service.
func (s *ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterSlashing) {}
// IsFinalized mocks the same method in the chain service.
func (s *ChainService) IsFinalized(_ context.Context, blockRoot [32]byte) bool {
return s.FinalizedRoots[blockRoot]
}
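The new FinalizedRoots field backs IsFinalized in the mock, with unknown roots reporting false through Go's map zero value. A stripped-down stand-in (not Prysm's mock package) showing that behavior:

package main

import (
	"context"
	"fmt"
)

// chainServiceMock is an illustrative stand-in for the mock ChainService above;
// only the FinalizedRoots field and the IsFinalized method come from the diff.
type chainServiceMock struct {
	FinalizedRoots map[[32]byte]bool
}

func (s *chainServiceMock) IsFinalized(_ context.Context, blockRoot [32]byte) bool {
	// A lookup on a missing key returns the bool zero value, so roots the test
	// never registered are reported as not finalized.
	return s.FinalizedRoots[blockRoot]
}

func main() {
	m := &chainServiceMock{FinalizedRoots: map[[32]byte]bool{{'a'}: true}}
	fmt.Println(m.IsFinalized(context.Background(), [32]byte{'a'})) // true
	fmt.Println(m.IsFinalized(context.Background(), [32]byte{'b'})) // false
}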

View File

@@ -60,6 +60,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
return nil, err
}
s.c = c
log.WithField("endpoint", c.NodeURL()).Info("Builder has been configured")
}
return s, nil
}

View File

@@ -27,7 +27,7 @@ import (
func TestProcessAttestations_InclusionDelayFailure(t *testing.T) {
attestations := []*ethpb.Attestation{
util.HydrateAttestation(&ethpb.Attestation{
util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 0, Root: make([]byte, fieldparams.RootLength)},
Slot: 5,
@@ -55,7 +55,7 @@ func TestProcessAttestations_InclusionDelayFailure(t *testing.T) {
}
func TestProcessAttestations_NeitherCurrentNorPrevEpoch(t *testing.T) {
att := util.HydrateAttestation(&ethpb.Attestation{
att := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Target: &ethpb.Checkpoint{Epoch: 0}}})
@@ -201,7 +201,7 @@ func TestProcessAttestations_OK(t *testing.T) {
aggBits.SetBitAt(0, true)
var mockRoot [32]byte
copy(mockRoot[:], "hello-world")
att := util.HydrateAttestation(&ethpb.Attestation{
att := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Root: mockRoot[:]},

View File

@@ -25,7 +25,7 @@ import (
func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
data := util.HydrateAttestationData(&ethpb.AttestationData{
data := util.NewAttestationUtil().HydrateAttestationData(&ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: bytesutil.PadTo([]byte("hello-world"), 32)},
Target: &ethpb.Checkpoint{Epoch: 0, Root: bytesutil.PadTo([]byte("hello-world"), 32)},
})
@@ -85,7 +85,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
func TestVerifyAttestationNoVerifySignature_IncorrectSlotTargetEpoch(t *testing.T) {
beaconState, _ := util.DeterministicGenesisState(t, 1)
att := util.HydrateAttestation(&ethpb.Attestation{
att := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: params.BeaconConfig().SlotsPerEpoch,
Target: &ethpb.Checkpoint{Root: make([]byte, 32)},
@@ -218,7 +218,7 @@ func TestConvertToIndexed_OK(t *testing.T) {
var sig [fieldparams.BLSSignatureLength]byte
copy(sig[:], "signed")
att := util.HydrateAttestation(&ethpb.Attestation{
att := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{
Signature: sig[:],
})
for _, tt := range tests {
@@ -261,11 +261,12 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
au := util.AttestationUtil{}
tests := []struct {
attestation *ethpb.IndexedAttestation
}{
{attestation: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Epoch: 2,
},
@@ -275,7 +276,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
Signature: make([]byte, fieldparams.BLSSignatureLength),
}},
{attestation: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Epoch: 1,
},
@@ -284,7 +285,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
Signature: make([]byte, fieldparams.BLSSignatureLength),
}},
{attestation: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Epoch: 4,
},
@@ -293,7 +294,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
Signature: make([]byte, fieldparams.BLSSignatureLength),
}},
{attestation: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Epoch: 7,
},
@@ -411,7 +412,8 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
Data: &ethpb.AttestationData{
Slot: 1,
@@ -430,7 +432,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1*params.BeaconConfig().SlotsPerEpoch+1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
att2 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
Data: &ethpb.AttestationData{
Slot: 1*params.BeaconConfig().SlotsPerEpoch + 1,
@@ -470,7 +472,8 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
Data: &ethpb.AttestationData{
Slot: 1,
@@ -489,7 +492,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
att2 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
Data: &ethpb.AttestationData{
Slot: 1,
@@ -534,7 +537,8 @@ func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
comm1, err := helpers.BeaconCommitteeFromState(ctx, st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
Data: &ethpb.AttestationData{
Slot: 1,
@@ -553,7 +557,7 @@ func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
comm2, err := helpers.BeaconCommitteeFromState(ctx, st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
att2 := au.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
Data: &ethpb.AttestationData{
Slot: 1,

View File

@@ -19,11 +19,12 @@ import (
)
func TestSlashableAttestationData_CanSlash(t *testing.T) {
att1 := util.HydrateAttestationData(&ethpb.AttestationData{
au := util.AttestationUtil{}
att1 := au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 1, Root: make([]byte, 32)},
Source: &ethpb.Checkpoint{Root: bytesutil.PadTo([]byte{'A'}, 32)},
})
att2 := util.HydrateAttestationData(&ethpb.AttestationData{
att2 := au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 1, Root: make([]byte, 32)},
Source: &ethpb.Checkpoint{Root: bytesutil.PadTo([]byte{'B'}, 32)},
})
@@ -35,9 +36,10 @@ func TestSlashableAttestationData_CanSlash(t *testing.T) {
}
func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
au := util.AttestationUtil{}
slashings := []*ethpb.AttesterSlashing{{
Attestation_1: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{}),
Attestation_2: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_1: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{}),
Attestation_2: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
Target: &ethpb.Checkpoint{Epoch: 1}},
@@ -71,15 +73,16 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
})
require.NoError(t, err)
au := util.AttestationUtil{}
slashings := []*ethpb.AttesterSlashing{
{
Attestation_1: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_1: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: make([]uint64, params.BeaconConfig().MaxValidatorsPerCommittee+1),
}),
Attestation_2: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_2: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: make([]uint64, params.BeaconConfig().MaxValidatorsPerCommittee+1),
}),
},
@@ -102,7 +105,8 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
vv.WithdrawableEpoch = types.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
au := util.AttestationUtil{}
att1 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
@@ -117,7 +121,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
@@ -171,7 +175,8 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusAltair(t *testing.T) {
vv.WithdrawableEpoch = types.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
au := util.AttestationUtil{}
att1 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
@@ -186,7 +191,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusAltair(t *testing.T) {
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
@@ -240,7 +245,8 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
vv.WithdrawableEpoch = types.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
au := util.AttestationUtil{}
att1 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
@@ -255,7 +261,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)

View File

@@ -39,8 +39,9 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
expectedSlashedVal := 2800
root1 := [32]byte{'d', 'o', 'u', 'b', 'l', 'e', '1'}
au := util.AttestationUtil{}
att1 := &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{Target: &ethpb.Checkpoint{Epoch: 0, Root: root1[:]}}),
Data: au.HydrateAttestationData(&ethpb.AttestationData{Target: &ethpb.Checkpoint{Epoch: 0, Root: root1[:]}}),
AttestingIndices: setA,
Signature: make([]byte, 96),
}
@@ -58,7 +59,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
root2 := [32]byte{'d', 'o', 'u', 'b', 'l', 'e', '2'}
att2 := &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: root2[:]},
}),
AttestingIndices: setB,

View File

@@ -6,6 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time/slots"
)
@@ -19,6 +20,12 @@ func UnrealizedCheckpoints(st state.BeaconState) (*ethpb.Checkpoint, *ethpb.Chec
return nil, nil, errNilState
}
if slots.ToEpoch(st.Slot()) <= params.BeaconConfig().GenesisEpoch+1 {
jc := st.CurrentJustifiedCheckpoint()
fc := st.FinalizedCheckpoint()
return jc, fc, nil
}
activeBalance, prevTarget, currentTarget, err := st.UnrealizedCheckpointBalances()
if err != nil {
return nil, nil, err
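The guard added to UnrealizedCheckpoints above exits early during the genesis epoch and the epoch after it, where there is no prior-epoch participation to evaluate, and simply hands back the state's stored checkpoints. A rough sketch of that control flow with placeholder types and names (not Prysm's implementation):

package main

import "fmt"

// checkpoint is a stand-in for ethpb.Checkpoint used only in this sketch.
type checkpoint struct {
	Epoch uint64
}

// unrealizedCheckpointsSketch: for epochs at or before genesis+1 the state's
// current justified/finalized checkpoints are returned unchanged; otherwise the
// (elided) balance-based computation would run.
func unrealizedCheckpointsSketch(stateEpoch, genesisEpoch uint64, justified, finalized *checkpoint) (*checkpoint, *checkpoint) {
	if stateEpoch <= genesisEpoch+1 {
		return justified, finalized
	}
	// ... derive unrealized checkpoints from the state's participation balances ...
	return justified, finalized
}

func main() {
	j, f := &checkpoint{Epoch: 0}, &checkpoint{Epoch: 0}
	uj, uf := unrealizedCheckpointsSketch(1, 0, j, f)
	fmt.Println(uj.Epoch, uf.Epoch) // 0 0: early return during the first two epochs
}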

View File

@@ -16,8 +16,8 @@ go_library(
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
],
)

View File

@@ -1,8 +1,8 @@
package signing
import (
fssz "github.com/ferranbt/fastssz"
"github.com/pkg/errors"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"

View File

@@ -133,15 +133,16 @@ func TestProcessBlock_IncorrectProcessExits(t *testing.T) {
}),
},
}
au := util.AttestationUtil{}
attesterSlashings := []*ethpb.AttesterSlashing{
{
Attestation_1: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{}),
Data: au.HydrateAttestationData(&ethpb.AttestationData{}),
AttestingIndices: []uint64{0, 1},
Signature: make([]byte, 96),
},
Attestation_2: &ethpb.IndexedAttestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{}),
Data: au.HydrateAttestationData(&ethpb.AttestationData{}),
AttestingIndices: []uint64{0, 1},
Signature: make([]byte, 96),
},
@@ -152,7 +153,7 @@ func TestProcessBlock_IncorrectProcessExits(t *testing.T) {
blockRoots = append(blockRoots, []byte{byte(i)})
}
require.NoError(t, beaconState.SetBlockRoots(blockRoots))
blockAtt := util.HydrateAttestation(&ethpb.Attestation{
blockAtt := au.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: bytesutil.PadTo([]byte("hello-world"), 32)},
},
@@ -255,7 +256,8 @@ func createFullBlockWithOperations(t *testing.T) (state.BeaconState,
require.NoError(t, beaconState.SetValidators(validators))
mockRoot2 := [32]byte{'A'}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
au := util.AttestationUtil{}
att1 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot2[:]},
},
@@ -271,7 +273,7 @@ func createFullBlockWithOperations(t *testing.T) (state.BeaconState,
att1.Signature = aggregateSig.Marshal()
mockRoot3 := [32]byte{'B'}
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att2 := au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot3[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: make([]byte, fieldparams.RootLength)},
@@ -301,7 +303,7 @@ func createFullBlockWithOperations(t *testing.T) (state.BeaconState,
aggBits := bitfield.NewBitlist(1)
aggBits.SetBitAt(0, true)
blockAtt := util.HydrateAttestation(&ethpb.Attestation{
blockAtt := au.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: beaconState.Slot(),
Target: &ethpb.Checkpoint{Epoch: time.CurrentEpoch(beaconState)},

View File

@@ -60,11 +60,11 @@ go_library(
"@com_github_dgraph_io_ristretto//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_prombbolt//:go_default_library",
"@com_github_schollz_progressbar_v3//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -6,9 +6,9 @@ import (
"fmt"
"github.com/ethereum/go-ethereum/common"
ssz "github.com/ferranbt/fastssz"
"github.com/golang/snappy"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
@@ -304,6 +304,13 @@ func (s *Store) SaveBlocks(ctx context.Context, blocks []interfaces.SignedBeacon
if err := updateValueForIndices(ctx, indicesForBlocks[i], blockRoots[i], tx); err != nil {
return errors.Wrap(err, "could not update DB indices")
}
if _, err := blk.Block().Body().ExecutionPayload(); err == nil {
blindedBlock, err := wrapper.WrapSignedBlindedBeaconBlock(blk)
if err != nil {
return err
}
blk = blindedBlock
}
s.blockCache.Set(string(blockRoots[i]), blk, int64(len(encodedBlocks[i])))
if err := bkt.Put(blockRoots[i], encodedBlocks[i]); err != nil {
return err
@@ -768,11 +775,6 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.SignedBeaconBlock
if err := rawBlock.UnmarshalSSZ(enc[len(altairKey):]); err != nil {
return nil, err
}
case hasBellatrixKey(enc):
rawBlock = &ethpb.SignedBeaconBlockBellatrix{}
if err := rawBlock.UnmarshalSSZ(enc[len(bellatrixKey):]); err != nil {
return nil, err
}
case hasBellatrixBlindKey(enc):
rawBlock = &ethpb.SignedBlindedBeaconBlockBellatrix{}
if err := rawBlock.UnmarshalSSZ(enc[len(bellatrixBlindKey):]); err != nil {
@@ -790,19 +792,34 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.SignedBeaconBlock
// marshal versioned beacon block from struct type down to bytes.
func marshalBlock(_ context.Context, blk interfaces.SignedBeaconBlock) ([]byte, error) {
obj, err := blk.MarshalSSZ()
if err != nil {
var encodedBlock []byte
var blindedBlock interfaces.SignedBeaconBlock
var err error
// If the block supports blinding of execution payloads, we wrap it as
// a signed, blinded beacon block and then marshal it to bytes. Otherwise,
// we just marshal the block as it is.
blindedBlock, err = wrapper.WrapSignedBlindedBeaconBlock(blk)
switch {
case errors.Is(err, wrapper.ErrUnsupportedSignedBeaconBlock):
encodedBlock, err = blk.MarshalSSZ()
if err != nil {
return nil, err
}
case err != nil:
return nil, err
default:
encodedBlock, err = blindedBlock.MarshalSSZ()
if err != nil {
return nil, err
}
}
switch blk.Version() {
case version.BellatrixBlind:
return snappy.Encode(nil, append(bellatrixBlindKey, obj...)), nil
case version.Bellatrix:
return snappy.Encode(nil, append(bellatrixKey, obj...)), nil
case version.Bellatrix, version.BellatrixBlind:
return snappy.Encode(nil, append(bellatrixBlindKey, encodedBlock...)), nil
case version.Altair:
return snappy.Encode(nil, append(altairKey, obj...)), nil
return snappy.Encode(nil, append(altairKey, encodedBlock...)), nil
case version.Phase0:
return snappy.Encode(nil, obj), nil
return snappy.Encode(nil, encodedBlock), nil
default:
return nil, errors.New("Unknown block version")
}

View File

@@ -134,11 +134,17 @@ func TestStore_BlocksCRUD(t *testing.T) {
retrievedBlock, err := db.Block(ctx, blockRoot)
require.NoError(t, err)
assert.DeepEqual(t, nil, retrievedBlock, "Expected nil block")
require.NoError(t, db.SaveBlock(ctx, blk))
assert.Equal(t, true, db.HasBlock(ctx, blockRoot), "Expected block to exist in the db")
retrievedBlock, err = db.Block(ctx, blockRoot)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(blk.Proto(), retrievedBlock.Proto()), "Wanted: %v, received: %v", blk, retrievedBlock)
wanted := retrievedBlock
if _, err := retrievedBlock.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(retrievedBlock)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(wanted.Proto(), retrievedBlock.Proto()), "Wanted: %v, received: %v", wanted, retrievedBlock)
})
}
}
@@ -314,7 +320,13 @@ func TestStore_BlocksCRUD_NoCache(t *testing.T) {
assert.Equal(t, true, db.HasBlock(ctx, blockRoot), "Expected block to exist in the db")
retrievedBlock, err = db.Block(ctx, blockRoot)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(blk.Proto(), retrievedBlock.Proto()), "Wanted: %v, received: %v", blk, retrievedBlock)
wanted := blk
if _, err := blk.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(blk)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(wanted.Proto(), retrievedBlock.Proto()), "Wanted: %v, received: %v", wanted, retrievedBlock)
})
}
}
@@ -524,7 +536,12 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
root := roots[0]
b, err := db.Block(ctx, root)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(block1.Proto(), b.Proto()), "Wanted: %v, received: %v", block1, b)
wanted := block1
if _, err := block1.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(wanted)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(wanted.Proto(), b.Proto()), "Wanted: %v, received: %v", wanted, b)
_, roots, err = db.HighestRootsBelowSlot(ctx, 11)
require.NoError(t, err)
@@ -533,7 +550,12 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
root = roots[0]
b, err = db.Block(ctx, root)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(block2.Proto(), b.Proto()), "Wanted: %v, received: %v", block2, b)
wanted2 := block2
if _, err := block2.PbBellatrixBlock(); err == nil {
wanted2, err = wrapper.WrapSignedBlindedBeaconBlock(block2)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(wanted2.Proto(), b.Proto()), "Wanted: %v, received: %v", wanted2, b)
_, roots, err = db.HighestRootsBelowSlot(ctx, 101)
require.NoError(t, err)
@@ -542,7 +564,12 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
root = roots[0]
b, err = db.Block(ctx, root)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(block3.Proto(), b.Proto()), "Wanted: %v, received: %v", block3, b)
wanted = block3
if _, err := block3.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(wanted)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(wanted.Proto(), b.Proto()), "Wanted: %v, received: %v", wanted, b)
})
}
}
@@ -569,7 +596,12 @@ func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
root := roots[0]
b, err := db.Block(ctx, root)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(block1.Proto(), b.Proto()), "Wanted: %v, received: %v", block1, b)
wanted := block1
if _, err := block1.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(block1)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(wanted.Proto(), b.Proto()), "Wanted: %v, received: %v", wanted, b)
_, roots, err = db.HighestRootsBelowSlot(ctx, 1)
require.NoError(t, err)
@@ -577,7 +609,12 @@ func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
root = roots[0]
b, err = db.Block(ctx, root)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(genesisBlock.Proto(), b.Proto()), "Wanted: %v, received: %v", genesisBlock, b)
wanted = genesisBlock
if _, err := genesisBlock.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(genesisBlock)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(wanted.Proto(), b.Proto()), "Wanted: %v, received: %v", wanted, b)
_, roots, err = db.HighestRootsBelowSlot(ctx, 0)
require.NoError(t, err)
@@ -585,7 +622,12 @@ func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
root = roots[0]
b, err = db.Block(ctx, root)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(genesisBlock.Proto(), b.Proto()), "Wanted: %v, received: %v", genesisBlock, b)
wanted = genesisBlock
if _, err := genesisBlock.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(genesisBlock)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(wanted.Proto(), b.Proto()), "Wanted: %v, received: %v", wanted, b)
})
}
}
@@ -671,15 +713,31 @@ func TestStore_BlocksBySlot_BlockRootsBySlot(t *testing.T) {
assert.Equal(t, 0, len(retrievedBlocks), "Unexpected number of blocks received, expected none")
retrievedBlocks, err = db.BlocksBySlot(ctx, 20)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(b1.Proto(), retrievedBlocks[0].Proto()), "Wanted: %v, received: %v", b1, retrievedBlocks[0])
wanted := b1
if _, err := b1.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(b1)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(retrievedBlocks[0].Proto(), wanted.Proto()), "Wanted: %v, received: %v", retrievedBlocks[0], wanted)
assert.Equal(t, true, len(retrievedBlocks) > 0, "Expected to have blocks")
retrievedBlocks, err = db.BlocksBySlot(ctx, 100)
require.NoError(t, err)
if len(retrievedBlocks) != 2 {
t.Fatalf("Expected 2 blocks, received %d blocks", len(retrievedBlocks))
}
assert.Equal(t, true, proto.Equal(b2.Proto(), retrievedBlocks[0].Proto()), "Wanted: %v, received: %v", b2, retrievedBlocks[0])
assert.Equal(t, true, proto.Equal(b3.Proto(), retrievedBlocks[1].Proto()), "Wanted: %v, received: %v", b3, retrievedBlocks[1])
wanted = b2
if _, err := b2.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(b2)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(wanted.Proto(), retrievedBlocks[0].Proto()), "Wanted: %v, received: %v", retrievedBlocks[0], wanted)
wanted = b3
if _, err := b3.PbBellatrixBlock(); err == nil {
wanted, err = wrapper.WrapSignedBlindedBeaconBlock(b3)
require.NoError(t, err)
}
assert.Equal(t, true, proto.Equal(retrievedBlocks[1].Proto(), wanted.Proto()), "Wanted: %v, received: %v", retrievedBlocks[1], wanted)
assert.Equal(t, true, len(retrievedBlocks) > 0, "Expected to have blocks")
hasBlockRoots, retrievedBlockRoots, err := db.BlockRootsBySlot(ctx, 1)

View File

@@ -5,8 +5,8 @@ import (
"errors"
"reflect"
fastssz "github.com/ferranbt/fastssz"
"github.com/golang/snappy"
fastssz "github.com/prysmaticlabs/fastssz"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"

View File

@@ -21,11 +21,11 @@ go_library(
"//io/file:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_etcd_go_bbolt//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
@@ -51,7 +51,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//time/slots:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@io_etcd_go_bbolt//:go_default_library",

View File

@@ -5,7 +5,7 @@ import (
"context"
"encoding/binary"
fssz "github.com/ferranbt/fastssz"
fssz "github.com/prysmaticlabs/fastssz"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/time/slots"
bolt "go.etcd.io/bbolt"

View File

@@ -8,9 +8,9 @@ import (
"sort"
"sync"
ssz "github.com/ferranbt/fastssz"
"github.com/golang/snappy"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
slashertypes "github.com/prysmaticlabs/prysm/beacon-chain/slasher/types"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"

View File

@@ -8,7 +8,7 @@ import (
"sort"
"testing"
ssz "github.com/ferranbt/fastssz"
ssz "github.com/prysmaticlabs/fastssz"
slashertypes "github.com/prysmaticlabs/prysm/beacon-chain/slasher/types"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"

View File

@@ -22,9 +22,11 @@ go_library(
],
deps = [
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -60,6 +62,7 @@ go_test(
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/v3:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",

View File

@@ -8,7 +8,6 @@ var errInvalidProposerBoostRoot = errors.New("invalid proposer boost root")
var errUnknownFinalizedRoot = errors.New("unknown finalized root")
var errUnknownJustifiedRoot = errors.New("unknown justified root")
var errInvalidOptimisticStatus = errors.New("invalid optimistic status")
var errUnknownPayloadHash = errors.New("unknown payload hash")
var errInvalidNilCheckpoint = errors.New("invalid nil checkpoint")
var errInvalidUnrealizedJustifiedEpoch = errors.New("invalid unrealized justified epoch")
var errInvalidUnrealizedFinalizedEpoch = errors.New("invalid unrealized finalized epoch")

View File

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
@@ -23,15 +24,17 @@ import (
// New initializes a new fork choice store.
func New() *ForkChoice {
s := &Store{
justifiedCheckpoint: &forkchoicetypes.Checkpoint{},
bestJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
prevJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
finalizedCheckpoint: &forkchoicetypes.Checkpoint{},
proposerBoostRoot: [32]byte{},
nodeByRoot: make(map[[fieldparams.RootLength]byte]*Node),
nodeByPayload: make(map[[fieldparams.RootLength]byte]*Node),
slashedIndices: make(map[types.ValidatorIndex]bool),
pruneThreshold: defaultPruneThreshold,
justifiedCheckpoint: &forkchoicetypes.Checkpoint{},
bestJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
unrealizedJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
unrealizedFinalizedCheckpoint: &forkchoicetypes.Checkpoint{},
prevJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
finalizedCheckpoint: &forkchoicetypes.Checkpoint{},
proposerBoostRoot: [32]byte{},
nodeByRoot: make(map[[fieldparams.RootLength]byte]*Node),
nodeByPayload: make(map[[fieldparams.RootLength]byte]*Node),
slashedIndices: make(map[types.ValidatorIndex]bool),
pruneThreshold: defaultPruneThreshold,
}
b := make([]uint64, 0)
@@ -112,7 +115,7 @@ func (f *ForkChoice) ProcessAttestation(ctx context.Context, validatorIndices []
}
// InsertNode processes a new block by inserting it to the fork choice store.
func (f *ForkChoice) InsertNode(ctx context.Context, state state.ReadOnlyBeaconState, root [32]byte) error {
func (f *ForkChoice) InsertNode(ctx context.Context, state state.BeaconState, root [32]byte) error {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.InsertNode")
defer span.End()
@@ -142,10 +145,14 @@ func (f *ForkChoice) InsertNode(ctx context.Context, state state.ReadOnlyBeaconS
return errInvalidNilCheckpoint
}
finalizedEpoch := fc.Epoch
err := f.store.insert(ctx, slot, root, parentRoot, payloadHash, justifiedEpoch, finalizedEpoch)
node, err := f.store.insert(ctx, slot, root, parentRoot, payloadHash, justifiedEpoch, finalizedEpoch)
if err != nil {
return err
}
if features.Get().EnablePullTips {
jc, fc = f.store.pullTips(state, node, jc, fc)
}
return f.updateCheckpoints(ctx, jc, fc)
}
@@ -546,7 +553,7 @@ func (f *ForkChoice) InsertOptimisticChain(ctx context.Context, chain []*forkcho
if err != nil {
return err
}
if err := f.store.insert(ctx,
if _, err := f.store.insert(ctx,
b.Slot(), r, parentRoot, payloadHash,
chain[i].JustifiedCheckpoint.Epoch, chain[i].FinalizedCheckpoint.Epoch); err != nil {
return err

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/config/features"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/time/slots"
)
@@ -64,5 +65,8 @@ func (f *ForkChoice) NewSlot(ctx context.Context, slot types.Slot) error {
f.store.justifiedCheckpoint = bjcp
}
}
if features.Get().EnablePullTips {
f.updateUnrealizedCheckpoints()
}
return nil
}

View File

@@ -32,12 +32,6 @@ func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, pa
return invalidRoots, errInvalidParentRoot
}
}
// Check if last valid hash is an ancestor of the passed node.
lastValid, ok := s.nodeByPayload[payloadHash]
if !ok || lastValid == nil {
s.nodesLock.Unlock()
return invalidRoots, errUnknownPayloadHash
}
firstInvalid := node
for ; firstInvalid.parent != nil && firstInvalid.parent.payloadHash != payloadHash; firstInvalid = firstInvalid.parent {
if ctx.Err() != nil {

View File

@@ -30,6 +30,23 @@ func TestPruneInvalid(t *testing.T) {
wantedRoots [][32]byte
wantedErr error
}{
{ // Bogus LVH, root not in forkchoice
[32]byte{'x'},
[32]byte{'i'},
[32]byte{'R'},
13,
[][32]byte{},
nil,
},
{
// Bogus LVH
[32]byte{'i'},
[32]byte{'h'},
[32]byte{'R'},
12,
[][32]byte{{'i'}},
nil,
},
{
[32]byte{'j'},
[32]byte{'b'},

View File

@@ -108,7 +108,7 @@ func (s *Store) head(ctx context.Context) ([32]byte, error) {
func (s *Store) insert(ctx context.Context,
slot types.Slot,
root, parentRoot, payloadHash [fieldparams.RootLength]byte,
justifiedEpoch, finalizedEpoch types.Epoch) error {
justifiedEpoch, finalizedEpoch types.Epoch) (*Node, error) {
_, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.insert")
defer span.End()
@@ -116,8 +116,8 @@ func (s *Store) insert(ctx context.Context,
defer s.nodesLock.Unlock()
// Return if the block has been inserted into Store before.
if _, ok := s.nodeByRoot[root]; ok {
return nil
if n, ok := s.nodeByRoot[root]; ok {
return n, nil
}
parent := s.nodeByRoot[parentRoot]
@@ -141,14 +141,14 @@ func (s *Store) insert(ctx context.Context,
s.treeRootNode = n
s.headNode = n
} else {
return errInvalidParentRoot
return n, errInvalidParentRoot
}
} else {
parent.children = append(parent.children, n)
// Apply proposer boost
timeNow := uint64(time.Now().Unix())
if timeNow < s.genesisTime {
return nil
return n, nil
}
secondsIntoSlot := (timeNow - s.genesisTime) % params.BeaconConfig().SecondsPerSlot
currentSlot := slots.CurrentSlot(s.genesisTime)
@@ -162,14 +162,14 @@ func (s *Store) insert(ctx context.Context,
// Update best descendants
if err := s.treeRootNode.updateBestDescendant(ctx,
s.justifiedCheckpoint.Epoch, s.finalizedCheckpoint.Epoch); err != nil {
return err
return n, err
}
}
// Update metrics.
processedBlockCount.Inc()
nodeCount.Set(float64(len(s.nodeByRoot)))
return nil
return n, nil
}
// pruneFinalizedNodeByRootMap prunes the `nodeByRoot` map

View File

@@ -141,7 +141,8 @@ func TestStore_Insert(t *testing.T) {
fc := &forkchoicetypes.Checkpoint{Epoch: 0}
s := &Store{nodeByRoot: nodeByRoot, treeRootNode: treeRootNode, nodeByPayload: nodeByPayload, justifiedCheckpoint: jc, finalizedCheckpoint: fc}
payloadHash := [32]byte{'a'}
require.NoError(t, s.insert(context.Background(), 100, indexToHash(100), indexToHash(0), payloadHash, 1, 1))
_, err := s.insert(context.Background(), 100, indexToHash(100), indexToHash(0), payloadHash, 1, 1)
require.NoError(t, err)
assert.Equal(t, 2, len(s.nodeByRoot), "Did not insert block")
assert.Equal(t, (*Node)(nil), treeRootNode.parent, "Incorrect parent")
assert.Equal(t, 1, len(treeRootNode.children), "Incorrect children number")

View File

@@ -18,24 +18,26 @@ type ForkChoice struct {
// Store defines the fork choice store which includes block nodes and the last view of checkpoint information.
type Store struct {
justifiedCheckpoint *forkchoicetypes.Checkpoint // latest justified epoch in store.
bestJustifiedCheckpoint *forkchoicetypes.Checkpoint // best justified checkpoint in store.
prevJustifiedCheckpoint *forkchoicetypes.Checkpoint // previous justified checkpoint in store.
finalizedCheckpoint *forkchoicetypes.Checkpoint // latest finalized epoch in store.
pruneThreshold uint64 // do not prune tree unless threshold is reached.
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node
nodeByRoot map[[fieldparams.RootLength]byte]*Node // nodes indexed by roots.
nodeByPayload map[[fieldparams.RootLength]byte]*Node // nodes indexed by payload Hash
slashedIndices map[types.ValidatorIndex]bool // the list of equivocating validator indices
originRoot [fieldparams.RootLength]byte // The genesis block root
nodesLock sync.RWMutex
proposerBoostLock sync.RWMutex
checkpointsLock sync.RWMutex
genesisTime uint64
justifiedCheckpoint *forkchoicetypes.Checkpoint // latest justified epoch in store.
bestJustifiedCheckpoint *forkchoicetypes.Checkpoint // best justified checkpoint in store.
unrealizedJustifiedCheckpoint *forkchoicetypes.Checkpoint // best unrealized justified checkpoint in store.
unrealizedFinalizedCheckpoint *forkchoicetypes.Checkpoint // best unrealized finalized checkpoint in store.
prevJustifiedCheckpoint *forkchoicetypes.Checkpoint // previous justified checkpoint in store.
finalizedCheckpoint *forkchoicetypes.Checkpoint // latest finalized epoch in store.
pruneThreshold uint64 // do not prune tree unless threshold is reached.
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node
nodeByRoot map[[fieldparams.RootLength]byte]*Node // nodes indexed by roots.
nodeByPayload map[[fieldparams.RootLength]byte]*Node // nodes indexed by payload Hash
slashedIndices map[types.ValidatorIndex]bool // the list of equivocating validator indices
originRoot [fieldparams.RootLength]byte // The genesis block root
nodesLock sync.RWMutex
proposerBoostLock sync.RWMutex
checkpointsLock sync.RWMutex
genesisTime uint64
}
// Node defines the individual block which includes its block parent, ancestor and how much weight accounted for it.

View File

@@ -2,7 +2,14 @@ package doublylinkedtree
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time/slots"
)
func (s *Store) setUnrealizedJustifiedEpoch(root [32]byte, epoch types.Epoch) error {
@@ -35,20 +42,78 @@ func (s *Store) setUnrealizedFinalizedEpoch(root [32]byte, epoch types.Epoch) er
return nil
}
// UpdateUnrealizedCheckpoints "realizes" the unrealized justified and finalized
// epochs stored within nodes. It should be called at the beginning of each
// epoch
func (f *ForkChoice) UpdateUnrealizedCheckpoints() {
// updateUnrealizedCheckpoints "realizes" the unrealized justified and finalized
// epochs stored within nodes. It should be called at the beginning of each epoch.
func (f *ForkChoice) updateUnrealizedCheckpoints() {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
for _, node := range f.store.nodeByRoot {
node.justifiedEpoch = node.unrealizedJustifiedEpoch
node.finalizedEpoch = node.unrealizedFinalizedEpoch
if node.justifiedEpoch > f.store.justifiedCheckpoint.Epoch {
f.store.justifiedCheckpoint.Epoch = node.justifiedEpoch
f.store.justifiedCheckpoint = f.store.unrealizedJustifiedCheckpoint
if node.justifiedEpoch > f.store.bestJustifiedCheckpoint.Epoch {
f.store.bestJustifiedCheckpoint = f.store.unrealizedJustifiedCheckpoint
}
}
if node.finalizedEpoch > f.store.finalizedCheckpoint.Epoch {
f.store.finalizedCheckpoint.Epoch = node.finalizedEpoch
f.store.justifiedCheckpoint = f.store.unrealizedJustifiedCheckpoint
f.store.finalizedCheckpoint = f.store.unrealizedFinalizedCheckpoint
}
}
}
func (s *Store) pullTips(state state.BeaconState, node *Node, jc, fc *ethpb.Checkpoint) (*ethpb.Checkpoint, *ethpb.Checkpoint) {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
if node.parent == nil { // Nothing to do if the parent is nil.
return jc, fc
}
s.checkpointsLock.Lock()
defer s.checkpointsLock.Unlock()
currentEpoch := slots.ToEpoch(slots.CurrentSlot(s.genesisTime))
stateSlot := state.Slot()
stateEpoch := slots.ToEpoch(stateSlot)
currJustified := node.parent.unrealizedJustifiedEpoch == currentEpoch
prevJustified := node.parent.unrealizedJustifiedEpoch+1 == currentEpoch
tooEarlyForCurr := slots.SinceEpochStarts(stateSlot)*3 < params.BeaconConfig().SlotsPerEpoch*2
// Exit early if it's justified or too early to be justified.
if currJustified || (stateEpoch == currentEpoch && prevJustified && tooEarlyForCurr) {
node.unrealizedJustifiedEpoch = node.parent.unrealizedJustifiedEpoch
node.unrealizedFinalizedEpoch = node.parent.unrealizedFinalizedEpoch
return jc, fc
}
uj, uf, err := precompute.UnrealizedCheckpoints(state)
if err != nil {
log.WithError(err).Debug("could not compute unrealized checkpoints")
uj, uf = jc, fc
}
// Update store's unrealized checkpoints.
if uj.Epoch > s.unrealizedJustifiedCheckpoint.Epoch {
s.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uj.Epoch, Root: bytesutil.ToBytes32(uj.Root),
}
}
if uf.Epoch > s.unrealizedFinalizedCheckpoint.Epoch {
s.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uj.Epoch, Root: bytesutil.ToBytes32(uj.Root),
}
s.unrealizedFinalizedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uf.Epoch, Root: bytesutil.ToBytes32(uf.Root),
}
}
// Update node's checkpoints.
node.unrealizedJustifiedEpoch, node.unrealizedFinalizedEpoch = uj.Epoch, uf.Epoch
if stateEpoch < currentEpoch {
jc, fc = uj, uf
node.justifiedEpoch = uj.Epoch
node.finalizedEpoch = uf.Epoch
}
return jc, fc
}
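The early-exit heuristic in pullTips above avoids recomputing unrealized checkpoints when the answer is forced: either the parent already justified the current epoch, or the previous epoch is justified and fewer than two thirds of the current epoch's slots have elapsed in the state, so the current epoch cannot be justified yet. In both cases the child node just inherits the parent's unrealized epochs. A self-contained sketch of that check (slotsPerEpoch and the helper name are assumptions for illustration):

package main

import "fmt"

const slotsPerEpoch = 32 // mainnet value, assumed here for illustration

// canCopyParentTips is a hypothetical stand-alone version of the early-exit
// condition used by pullTips: skip the expensive recomputation when the parent
// already justified the current epoch, or when the previous epoch is justified
// and fewer than 2/3 of the current epoch's slots have elapsed.
func canCopyParentTips(parentUnrealizedJustified, currentEpoch, stateEpoch, stateSlot uint64) bool {
	currJustified := parentUnrealizedJustified == currentEpoch
	prevJustified := parentUnrealizedJustified+1 == currentEpoch
	tooEarlyForCurr := (stateSlot%slotsPerEpoch)*3 < slotsPerEpoch*2
	return currJustified || (stateEpoch == currentEpoch && prevJustified && tooEarlyForCurr)
}

func main() {
	// Epoch 3, slot 100 (4th slot of the epoch): previous epoch justified and it is
	// too early for the current one, so the parent's tips are copied as-is.
	fmt.Println(canCopyParentTips(2, 3, 3, 100)) // true
	// Same setup but late in the epoch (slot 127): the heuristic no longer applies.
	fmt.Println(canCopyParentTips(2, 3, 3, 127)) // false
}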

View File

@@ -5,6 +5,7 @@ import (
"testing"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/testing/require"
@@ -97,7 +98,7 @@ func TestStore_LongFork(t *testing.T) {
require.Equal(t, uint64(100), f.store.nodeByRoot[[32]byte{'c'}].weight)
// Update unrealized justification, c becomes head
f.UpdateUnrealizedCheckpoints()
f.updateUnrealizedCheckpoints()
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, headRoot)
@@ -147,6 +148,8 @@ func TestStore_NoDeadLock(t *testing.T) {
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'g'}, 2))
require.NoError(t, f.store.setUnrealizedFinalizedEpoch([32]byte{'g'}, 1))
f.store.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 2}
f.store.unrealizedFinalizedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 1}
state, blkRoot, err = prepareForkchoiceState(ctx, 107, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
@@ -176,7 +179,7 @@ func TestStore_NoDeadLock(t *testing.T) {
require.Equal(t, types.Epoch(0), f.FinalizedCheckpoint().Epoch)
// Realized Justified checkpoints, H becomes head
f.UpdateUnrealizedCheckpoints()
f.updateUnrealizedCheckpoints()
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'h'}, headRoot)
@@ -236,7 +239,8 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'d'}, 1))
f.UpdateUnrealizedCheckpoints()
f.store.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 1}
f.updateUnrealizedCheckpoints()
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'d'}, headRoot)
@@ -244,3 +248,90 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.nodeByRoot[[32]byte{'h'}].weight)
}
func TestStore_PullTips_Heuristics(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{
EnablePullTips: true,
})
defer resetCfg()
ctx := context.Background()
t.Run("Current epoch is justified", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 65, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodeByRoot[[32]byte{'p'}].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 66, 0)
st, root, err = prepareForkchoiceState(ctx, 66, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
require.Equal(tt, types.Epoch(2), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
require.Equal(tt, types.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedFinalizedEpoch)
})
t.Run("Previous Epoch is justified and too early for current", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 95, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodeByRoot[[32]byte{'p'}].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 96, 0)
st, root, err = prepareForkchoiceState(ctx, 96, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
require.Equal(tt, types.Epoch(2), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
require.Equal(tt, types.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedFinalizedEpoch)
})
t.Run("Previous Epoch is justified and not too early for current", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 95, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodeByRoot[[32]byte{'p'}].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 127, 0)
st, root, err = prepareForkchoiceState(ctx, 127, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, types.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
})
t.Run("Block from previous Epoch", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 94, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodeByRoot[[32]byte{'p'}].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 96, 0)
st, root, err = prepareForkchoiceState(ctx, 95, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, types.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
})
t.Run("Previous Epoch is not justified", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 128, [32]byte{'p'}, [32]byte{}, [32]byte{}, 2, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
driftGenesisTime(f, 129, 0)
st, root, err = prepareForkchoiceState(ctx, 129, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 2, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, types.Epoch(2), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
})
}
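
The slots picked by these subtests sit on either side of the two-thirds-of-an-epoch boundary that the heuristic checks. A small worked example of that arithmetic, assuming the mainnet value of 32 slots per epoch (an assumption; the code reads params.BeaconConfig().SlotsPerEpoch):

package main

import "fmt"

const slotsPerEpoch = 32 // mainnet assumption

func main() {
	for _, slot := range []uint64{65, 66, 95, 96, 127} {
		epoch := slot / slotsPerEpoch
		intoEpoch := slot % slotsPerEpoch
		tooEarly := intoEpoch*3 < slotsPerEpoch*2 // fewer than 2/3 of the epoch has elapsed
		fmt.Printf("slot %3d: epoch %d, %2d slots in, tooEarlyForCurr=%v\n", slot, epoch, intoEpoch, tooEarly)
	}
	// slot  66: epoch 2,  2 slots in, tooEarlyForCurr=true   (parent's checkpoints reused)
	// slot  96: epoch 3,  0 slots in, tooEarlyForCurr=true   (reused when the previous epoch is justified)
	// slot 127: epoch 3, 31 slots in, tooEarlyForCurr=false  (unrealized checkpoints recomputed)
}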

View File

@@ -30,7 +30,7 @@ type HeadRetriever interface {
// BlockProcessor processes the block that's used for accounting fork choice.
type BlockProcessor interface {
InsertNode(context.Context, state.ReadOnlyBeaconState, [32]byte) error
InsertNode(context.Context, state.BeaconState, [32]byte) error
InsertOptimisticChain(context.Context, []*forkchoicetypes.BlockAndCheckpoints) error
}

View File

@@ -22,9 +22,11 @@ go_library(
],
deps = [
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -61,6 +63,7 @@ go_test(
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/v3:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",

View File

@@ -17,3 +17,4 @@ var errInvalidNilCheckpoint = errors.New("invalid nil checkpoint")
var errInvalidUnrealizedJustifiedEpoch = errors.New("invalid unrealized justified epoch")
var errInvalidUnrealizedFinalizedEpoch = errors.New("invalid unrealized finalized epoch")
var errNilBlockHeader = errors.New("invalid nil block header")
var errInvalidParentRoot = errors.New("invalid parent root")

View File

@@ -13,6 +13,7 @@ import (
// It returns a list of deltas that represents the difference between old balances and new balances.
func computeDeltas(
ctx context.Context,
count int,
blockIndices map[[32]byte]uint64,
votes []Vote,
oldBalances, newBalances []uint64,
@@ -21,7 +22,7 @@ func computeDeltas(
_, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.computeDeltas")
defer span.End()
deltas := make([]int, len(blockIndices))
deltas := make([]int, count)
for validatorIndex, vote := range votes {
// Skip if validator has been slashed

View File

@@ -27,7 +27,7 @@ func TestComputeDelta_ZeroHash(t *testing.T) {
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), indices, votes, oldBalances, newBalances, slashedIndices)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, int(validatorCount), len(delta))
@@ -55,7 +55,7 @@ func TestComputeDelta_AllVoteTheSame(t *testing.T) {
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), indices, votes, oldBalances, newBalances, slashedIndices)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, int(validatorCount), len(delta))
@@ -88,7 +88,7 @@ func TestComputeDelta_DifferentVotes(t *testing.T) {
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), indices, votes, oldBalances, newBalances, slashedIndices)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, int(validatorCount), len(delta))
@@ -118,7 +118,7 @@ func TestComputeDelta_MovingVotes(t *testing.T) {
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), indices, votes, oldBalances, newBalances, slashedIndices)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, int(validatorCount), len(delta))
@@ -151,7 +151,7 @@ func TestComputeDelta_MoveOutOfTree(t *testing.T) {
Vote{indexToHash(1), [32]byte{'A'}, 0})
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), indices, votes, oldBalances, newBalances, slashedIndices)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, 1, len(delta))
assert.Equal(t, 0-2*int(balance), delta[0])
@@ -180,7 +180,7 @@ func TestComputeDelta_ChangingBalances(t *testing.T) {
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), indices, votes, oldBalances, newBalances, slashedIndices)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, 16, len(delta))
@@ -214,7 +214,7 @@ func TestComputeDelta_ValidatorAppear(t *testing.T) {
Vote{indexToHash(1), indexToHash(2), 0})
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), indices, votes, oldBalances, newBalances, slashedIndices)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, 2, len(delta))
assert.Equal(t, 0-int(balance), delta[0])
@@ -240,7 +240,7 @@ func TestComputeDelta_ValidatorDisappears(t *testing.T) {
Vote{indexToHash(1), indexToHash(2), 0})
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), indices, votes, oldBalances, newBalances, slashedIndices)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, 2, len(delta))
assert.Equal(t, 0-2*int(balance), delta[0])

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/config/features"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/time/slots"
)
@@ -64,5 +65,8 @@ func (f *ForkChoice) NewSlot(ctx context.Context, slot types.Slot) error {
f.store.justifiedCheckpoint = bjcp
}
}
if features.Get().EnablePullTips {
f.UpdateUnrealizedCheckpoints()
}
return nil
}

View File

@@ -56,8 +56,8 @@ func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root, parentRoo
defer f.store.nodesLock.Unlock()
invalidRoots := make([][32]byte, 0)
lastValidIndex, ok := f.store.payloadIndices[payloadHash]
if !ok || lastValidIndex == NonExistentNode {
return invalidRoots, errInvalidFinalizedNode
if !ok {
lastValidIndex = uint64(len(f.store.nodes))
}
invalidIndex, ok := f.store.nodesIndices[root]

View File

@@ -385,8 +385,6 @@ func TestSetOptimisticToInvalid_InvalidRoots(t *testing.T) {
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
_, err = f.SetOptimisticToInvalid(ctx, [32]byte{'p'}, [32]byte{'p'}, [32]byte{'B'})
require.ErrorIs(t, ErrUnknownNodeRoot, err)
_, err = f.SetOptimisticToInvalid(ctx, [32]byte{'a'}, [32]byte{}, [32]byte{'p'})
require.ErrorIs(t, errInvalidFinalizedNode, err)
}
// This is a regression test (10445)
@@ -417,3 +415,40 @@ func TestSetOptimisticToInvalid_ProposerBoost(t *testing.T) {
require.DeepEqual(t, params.BeaconConfig().ZeroHash, f.store.previousProposerBoostRoot)
f.store.proposerBoostLock.RUnlock()
}
// This is a regression test (10996)
func TestSetOptimisticToInvalid_BogusLVH(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
state, root, err := prepareForkchoiceState(ctx, 1, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, root))
state, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, root))
invalidRoots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'R'})
require.NoError(t, err)
require.Equal(t, 1, len(invalidRoots))
require.Equal(t, [32]byte{'b'}, invalidRoots[0])
}
// This is a regression test (10996)
func TestSetOptimisticToInvalid_BogusLVH_RotNotImported(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
state, root, err := prepareForkchoiceState(ctx, 1, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, root))
state, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, root))
invalidRoots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'R'})
require.NoError(t, err)
require.Equal(t, 0, len(invalidRoots))
}
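
The two regression tests pin down the new fallback: an unknown last valid hash no longer returns errInvalidFinalizedNode; only the reported block is marked invalid, and nothing is marked when that block was never imported. A heavily simplified sketch of that decision (the real code also walks the ancestors between the invalid node and lastValidIndex):

package sketch

// invalidRootsOnBogusLVH sketches the fallback exercised above: with a bogus
// last valid hash, invalidate just the reported block when it is known, and
// nothing when it is not.
func invalidRootsOnBogusLVH(nodesIndices map[[32]byte]uint64, root [32]byte) [][32]byte {
	if _, ok := nodesIndices[root]; !ok {
		return nil // block was never imported, nothing to invalidate
	}
	return [][32]byte{root}
}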

View File

@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
@@ -30,17 +31,19 @@ const defaultPruneThreshold = 256
// New initializes a new fork choice store.
func New() *ForkChoice {
s := &Store{
justifiedCheckpoint: &forkchoicetypes.Checkpoint{},
bestJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
prevJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
finalizedCheckpoint: &forkchoicetypes.Checkpoint{},
proposerBoostRoot: [32]byte{},
nodes: make([]*Node, 0),
nodesIndices: make(map[[32]byte]uint64),
payloadIndices: make(map[[32]byte]uint64),
canonicalNodes: make(map[[32]byte]bool),
slashedIndices: make(map[types.ValidatorIndex]bool),
pruneThreshold: defaultPruneThreshold,
justifiedCheckpoint: &forkchoicetypes.Checkpoint{},
bestJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
unrealizedJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
prevJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
finalizedCheckpoint: &forkchoicetypes.Checkpoint{},
unrealizedFinalizedCheckpoint: &forkchoicetypes.Checkpoint{},
proposerBoostRoot: [32]byte{},
nodes: make([]*Node, 0),
nodesIndices: make(map[[32]byte]uint64),
payloadIndices: make(map[[32]byte]uint64),
canonicalNodes: make(map[[32]byte]bool),
slashedIndices: make(map[types.ValidatorIndex]bool),
pruneThreshold: defaultPruneThreshold,
}
b := make([]uint64, 0)
@@ -62,7 +65,7 @@ func (f *ForkChoice) Head(ctx context.Context, justifiedStateBalances []uint64)
// Using the write lock here because `updateCanonicalNodes` that gets called subsequently requires a write operation.
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
deltas, newVotes, err := computeDeltas(ctx, f.store.nodesIndices, f.votes, f.balances, newBalances, f.store.slashedIndices)
deltas, newVotes, err := computeDeltas(ctx, len(f.store.nodes), f.store.nodesIndices, f.votes, f.balances, newBalances, f.store.slashedIndices)
if err != nil {
return [32]byte{}, errors.Wrap(err, "Could not compute deltas")
}
@@ -117,7 +120,7 @@ func (f *ForkChoice) ProposerBoost() [fieldparams.RootLength]byte {
}
// InsertNode processes a new block by inserting it to the fork choice store.
func (f *ForkChoice) InsertNode(ctx context.Context, state state.ReadOnlyBeaconState, root [32]byte) error {
func (f *ForkChoice) InsertNode(ctx context.Context, state state.BeaconState, root [32]byte) error {
ctx, span := trace.StartSpan(ctx, "protoArrayForkChoice.InsertNode")
defer span.End()
@@ -147,10 +150,14 @@ func (f *ForkChoice) InsertNode(ctx context.Context, state state.ReadOnlyBeaconS
return errInvalidNilCheckpoint
}
finalizedEpoch := fc.Epoch
err := f.store.insert(ctx, slot, root, parentRoot, payloadHash, justifiedEpoch, finalizedEpoch)
node, err := f.store.insert(ctx, slot, root, parentRoot, payloadHash, justifiedEpoch, finalizedEpoch)
if err != nil {
return err
}
if features.Get().EnablePullTips {
jc, fc = f.store.pullTips(state, node, jc, fc)
}
return f.updateCheckpoints(ctx, jc, fc)
}
@@ -462,7 +469,7 @@ func (s *Store) updateCanonicalNodes(ctx context.Context, root [32]byte) error {
func (s *Store) insert(ctx context.Context,
slot types.Slot,
root, parent, payloadHash [32]byte,
justifiedEpoch, finalizedEpoch types.Epoch) error {
justifiedEpoch, finalizedEpoch types.Epoch) (*Node, error) {
_, span := trace.StartSpan(ctx, "protoArrayForkChoice.insert")
defer span.End()
@@ -470,8 +477,8 @@ func (s *Store) insert(ctx context.Context,
defer s.nodesLock.Unlock()
// Return if the block has been inserted into Store before.
if _, ok := s.nodesIndices[root]; ok {
return nil
if idx, ok := s.nodesIndices[root]; ok {
return s.nodes[idx], nil
}
index := uint64(len(s.nodes))
@@ -502,7 +509,7 @@ func (s *Store) insert(ctx context.Context,
// Apply proposer boost
timeNow := uint64(time.Now().Unix())
if timeNow < s.genesisTime {
return nil
return n, nil
}
secondsIntoSlot := (timeNow - s.genesisTime) % params.BeaconConfig().SecondsPerSlot
currentSlot := slots.CurrentSlot(s.genesisTime)
@@ -516,7 +523,7 @@ func (s *Store) insert(ctx context.Context,
// Update parent with the best child and descendant only if it's available.
if n.parent != NonExistentNode {
if err := s.updateBestChildAndDescendant(parentIndex, index); err != nil {
return err
return n, err
}
}
@@ -524,7 +531,7 @@ func (s *Store) insert(ctx context.Context,
processedBlockCount.Inc()
nodeCount.Set(float64(len(s.nodes)))
return nil
return n, nil
}
// applyWeightChanges iterates backwards through the nodes in store. It checks all nodes parent
@@ -991,7 +998,7 @@ func (f *ForkChoice) InsertOptimisticChain(ctx context.Context, chain []*forkcho
if err != nil {
return err
}
if err := f.store.insert(ctx,
if _, err := f.store.insert(ctx,
b.Slot(), r, parentRoot, payloadHash,
chain[i].JustifiedCheckpoint.Epoch, chain[i].FinalizedCheckpoint.Epoch); err != nil {
return err
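
With insert now returning the node, a duplicate root hands back the node that is already stored instead of silently returning, which lets InsertNode pass it straight to pullTips and lets InsertOptimisticChain discard it. A compact sketch of that idempotent-insert contract with stand-in types:

package sketch

type node struct {
	root [32]byte
}

type store struct {
	nodes   []*node
	indices map[[32]byte]uint64
}

func newStore() *store {
	return &store{indices: make(map[[32]byte]uint64)}
}

// insert hands back the existing node for a known root instead of treating the
// duplicate as an error, so callers always receive a usable node.
func (s *store) insert(root [32]byte) *node {
	if idx, ok := s.indices[root]; ok {
		return s.nodes[idx]
	}
	n := &node{root: root}
	s.indices[root] = uint64(len(s.nodes))
	s.nodes = append(s.nodes, n)
	return n
}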

View File

@@ -114,7 +114,8 @@ func TestStore_Head_ContextCancelled(t *testing.T) {
func TestStore_Insert_UnknownParent(t *testing.T) {
// The new node does not have a parent.
s := &Store{nodesIndices: make(map[[32]byte]uint64), payloadIndices: make(map[[32]byte]uint64)}
require.NoError(t, s.insert(context.Background(), 100, [32]byte{'A'}, [32]byte{'B'}, params.BeaconConfig().ZeroHash, 1, 1))
_, err := s.insert(context.Background(), 100, [32]byte{'A'}, [32]byte{'B'}, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
assert.Equal(t, 1, len(s.nodes), "Did not insert block")
assert.Equal(t, 1, len(s.nodesIndices), "Did not insert block")
assert.Equal(t, NonExistentNode, s.nodes[0].parent, "Incorrect parent")
@@ -133,7 +134,8 @@ func TestStore_Insert_KnownParent(t *testing.T) {
payloadHash := [32]byte{'c'}
s.justifiedCheckpoint = &forkchoicetypes.Checkpoint{}
s.finalizedCheckpoint = &forkchoicetypes.Checkpoint{}
require.NoError(t, s.insert(context.Background(), 100, [32]byte{'A'}, p, payloadHash, 1, 1))
_, err := s.insert(context.Background(), 100, [32]byte{'A'}, p, payloadHash, 1, 1)
require.NoError(t, err)
assert.Equal(t, 2, len(s.nodes), "Did not insert block")
assert.Equal(t, 2, len(s.nodesIndices), "Did not insert block")
assert.Equal(t, uint64(0), s.nodes[1].parent, "Incorrect parent")

View File

@@ -18,25 +18,27 @@ type ForkChoice struct {
// Store defines the fork choice store which includes block nodes and the last view of checkpoint information.
type Store struct {
pruneThreshold uint64 // do not prune tree unless threshold is reached.
justifiedCheckpoint *forkchoicetypes.Checkpoint // latest justified checkpoint in store.
bestJustifiedCheckpoint *forkchoicetypes.Checkpoint // best justified checkpoint in store.
prevJustifiedCheckpoint *forkchoicetypes.Checkpoint // previous justified checkpoint in store.
finalizedCheckpoint *forkchoicetypes.Checkpoint // latest finalized checkpoint in store.
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
nodes []*Node // list of block nodes, each node is a representation of one block.
nodesIndices map[[fieldparams.RootLength]byte]uint64 // the root of block node and the nodes index in the list.
canonicalNodes map[[fieldparams.RootLength]byte]bool // the canonical block nodes.
payloadIndices map[[fieldparams.RootLength]byte]uint64 // the payload hash of block node and the index in the list
slashedIndices map[types.ValidatorIndex]bool // The list of equivocating validators
originRoot [fieldparams.RootLength]byte // The genesis block root
lastHeadRoot [fieldparams.RootLength]byte // The last cached head block root
nodesLock sync.RWMutex
proposerBoostLock sync.RWMutex
checkpointsLock sync.RWMutex
genesisTime uint64
pruneThreshold uint64 // do not prune tree unless threshold is reached.
justifiedCheckpoint *forkchoicetypes.Checkpoint // latest justified checkpoint in store.
bestJustifiedCheckpoint *forkchoicetypes.Checkpoint // best justified checkpoint in store.
unrealizedJustifiedCheckpoint *forkchoicetypes.Checkpoint // best unrealized justified checkpoint in store.
unrealizedFinalizedCheckpoint *forkchoicetypes.Checkpoint // best unrealized finalized checkpoint in store.
prevJustifiedCheckpoint *forkchoicetypes.Checkpoint // previous justified checkpoint in store.
finalizedCheckpoint *forkchoicetypes.Checkpoint // latest finalized checkpoint in store.
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
nodes []*Node // list of block nodes, each node is a representation of one block.
nodesIndices map[[fieldparams.RootLength]byte]uint64 // the root of block node and the nodes index in the list.
canonicalNodes map[[fieldparams.RootLength]byte]bool // the canonical block nodes.
payloadIndices map[[fieldparams.RootLength]byte]uint64 // the payload hash of block node and the index in the list
slashedIndices map[types.ValidatorIndex]bool // The list of equivocating validators
originRoot [fieldparams.RootLength]byte // The genesis block root
lastHeadRoot [fieldparams.RootLength]byte // The last cached head block root
nodesLock sync.RWMutex
proposerBoostLock sync.RWMutex
checkpointsLock sync.RWMutex
genesisTime uint64
}
// Node defines the individual block which includes its block parent, ancestor and how much weight accounted for it.

View File

@@ -1,7 +1,14 @@
package protoarray
import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time/slots"
)
func (s *Store) setUnrealizedJustifiedEpoch(root [32]byte, epoch types.Epoch) error {
@@ -56,10 +63,70 @@ func (f *ForkChoice) UpdateUnrealizedCheckpoints() {
node.justifiedEpoch = node.unrealizedJustifiedEpoch
node.finalizedEpoch = node.unrealizedFinalizedEpoch
if node.justifiedEpoch > f.store.justifiedCheckpoint.Epoch {
f.store.justifiedCheckpoint.Epoch = node.justifiedEpoch
if node.justifiedEpoch > f.store.bestJustifiedCheckpoint.Epoch {
f.store.bestJustifiedCheckpoint = f.store.unrealizedJustifiedCheckpoint
}
f.store.justifiedCheckpoint = f.store.unrealizedJustifiedCheckpoint
}
if node.finalizedEpoch > f.store.finalizedCheckpoint.Epoch {
f.store.finalizedCheckpoint.Epoch = node.finalizedEpoch
f.store.justifiedCheckpoint = f.store.unrealizedJustifiedCheckpoint
f.store.finalizedCheckpoint = f.store.unrealizedFinalizedCheckpoint
}
}
}
func (s *Store) pullTips(state state.BeaconState, node *Node, jc, fc *ethpb.Checkpoint) (*ethpb.Checkpoint, *ethpb.Checkpoint) {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
if node.parent == NonExistentNode { // Nothing to do if the parent is nil.
return jc, fc
}
currentEpoch := slots.ToEpoch(slots.CurrentSlot(s.genesisTime))
stateSlot := state.Slot()
stateEpoch := slots.ToEpoch(stateSlot)
parent := s.nodes[node.parent]
currJustified := parent.unrealizedJustifiedEpoch == currentEpoch
prevJustified := parent.unrealizedJustifiedEpoch+1 == currentEpoch
tooEarlyForCurr := slots.SinceEpochStarts(stateSlot)*3 < params.BeaconConfig().SlotsPerEpoch*2
if currJustified || (stateEpoch == currentEpoch && prevJustified && tooEarlyForCurr) {
node.unrealizedJustifiedEpoch = parent.unrealizedJustifiedEpoch
node.unrealizedFinalizedEpoch = parent.unrealizedFinalizedEpoch
return jc, fc
}
uj, uf, err := precompute.UnrealizedCheckpoints(state)
if err != nil {
log.WithError(err).Debug("could not compute unrealized checkpoints")
uj, uf = jc, fc
}
// Update store's unrealized checkpoints.
s.checkpointsLock.Lock()
if uj.Epoch > s.unrealizedJustifiedCheckpoint.Epoch {
s.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uj.Epoch, Root: bytesutil.ToBytes32(uj.Root),
}
}
if uf.Epoch > s.unrealizedFinalizedCheckpoint.Epoch {
s.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uj.Epoch, Root: bytesutil.ToBytes32(uj.Root),
}
s.unrealizedFinalizedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uf.Epoch, Root: bytesutil.ToBytes32(uf.Root),
}
}
s.checkpointsLock.Unlock()
// Update node's checkpoints.
node.unrealizedJustifiedEpoch, node.unrealizedFinalizedEpoch = uj.Epoch, uf.Epoch
if stateEpoch < currentEpoch {
jc, fc = uj, uf
node.justifiedEpoch = uj.Epoch
node.finalizedEpoch = uf.Epoch
}
return jc, fc
}
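
The gate near the top of pullTips skips the epoch-transition computation when it cannot change the answer: either the parent already carries the current epoch as its unrealized justification, or it carries the previous epoch and fewer than two thirds of the current epoch's slots have elapsed, the intuition being that attestations cannot yet justify the current epoch in that window. The predicate restated on its own:

package sketch

// reuseParentCheckpoints restates the condition used by pullTips above: inherit
// the parent's unrealized checkpoints whenever recomputing them could not yield
// a newer justification.
func reuseParentCheckpoints(parentUnrealizedJustified, currentEpoch, stateEpoch, slotsIntoEpoch, slotsPerEpoch uint64) bool {
	currJustified := parentUnrealizedJustified == currentEpoch
	prevJustified := parentUnrealizedJustified+1 == currentEpoch
	tooEarlyForCurr := slotsIntoEpoch*3 < slotsPerEpoch*2
	return currJustified || (stateEpoch == currentEpoch && prevJustified && tooEarlyForCurr)
}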

View File

@@ -5,6 +5,7 @@ import (
"testing"
forkchoicetypes "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/testing/require"
@@ -147,6 +148,8 @@ func TestStore_NoDeadLock(t *testing.T) {
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'g'}, 2))
require.NoError(t, f.store.setUnrealizedFinalizedEpoch([32]byte{'g'}, 1))
f.store.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 2}
f.store.unrealizedFinalizedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 1}
state, blkRoot, err = prepareForkchoiceState(ctx, 107, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
@@ -236,6 +239,7 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'d'}, 1))
f.store.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 1}
f.UpdateUnrealizedCheckpoints()
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
@@ -245,3 +249,90 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.Equal(t, uint64(0), f.store.nodes[8].weight)
require.Equal(t, uint64(100), f.store.nodes[7].weight)
}
func TestStore_PullTips_Heuristics(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{
EnablePullTips: true,
})
defer resetCfg()
ctx := context.Background()
t.Run("Current epoch is justified", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 65, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodes[1].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 66, 0)
st, root, err = prepareForkchoiceState(ctx, 66, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
require.Equal(tt, types.Epoch(2), f.store.nodes[2].unrealizedJustifiedEpoch)
require.Equal(tt, types.Epoch(1), f.store.nodes[2].unrealizedFinalizedEpoch)
})
t.Run("Previous Epoch is justified and too early for current", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 95, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodes[1].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 96, 0)
st, root, err = prepareForkchoiceState(ctx, 96, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
require.Equal(tt, types.Epoch(2), f.store.nodes[2].unrealizedJustifiedEpoch)
require.Equal(tt, types.Epoch(1), f.store.nodes[2].unrealizedFinalizedEpoch)
})
t.Run("Previous Epoch is justified and not too early for current", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 95, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodes[1].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 127, 0)
st, root, err = prepareForkchoiceState(ctx, 127, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, types.Epoch(1), f.store.nodes[2].unrealizedJustifiedEpoch)
})
t.Run("Block from previous Epoch", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 94, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodes[1].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 96, 0)
st, root, err = prepareForkchoiceState(ctx, 95, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, types.Epoch(1), f.store.nodes[2].unrealizedJustifiedEpoch)
})
t.Run("Previous Epoch is not justified", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 128, [32]byte{'p'}, [32]byte{}, [32]byte{}, 2, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
driftGenesisTime(f, 129, 0)
st, root, err = prepareForkchoiceState(ctx, 129, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 2, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, types.Epoch(2), f.store.nodes[2].unrealizedJustifiedEpoch)
})
}

View File

@@ -18,6 +18,7 @@ import (
)
func TestProcessSlashings(t *testing.T) {
au := util.AttestationUtil{}
tests := []struct {
name string
block *ethpb.BeaconBlock
@@ -77,13 +78,13 @@ func TestProcessSlashings(t *testing.T) {
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: []*ethpb.AttesterSlashing{
{
Attestation_1: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_1: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{1, 3, 4},
}),
Attestation_2: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_2: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{1, 5, 6},
}),
},
@@ -99,13 +100,13 @@ func TestProcessSlashings(t *testing.T) {
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: []*ethpb.AttesterSlashing{
{
Attestation_1: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_1: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{1, 3, 4},
}),
Attestation_2: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Attestation_2: au.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{3, 5, 6},
}),
},

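These tests trade the package-level util.Hydrate* helpers for methods on a util.AttestationUtil value, also built via util.NewAttestationUtil() elsewhere in this change. A rough sketch of what such a hydration helper might look like; the stand-in attestation type and the field lengths are assumptions, not the real testing/util code:

package sketch

type attestation struct {
	Signature       []byte
	BeaconBlockRoot []byte
}

// AttestationUtil groups test hydration helpers behind a value receiver so tests
// can share one instance (au := AttestationUtil{}) instead of free functions.
type AttestationUtil struct{}

func NewAttestationUtil() AttestationUtil { return AttestationUtil{} }

// HydrateAttestation fills required zero-value fields so hashing a partially
// specified test attestation does not fail; the lengths below are assumptions.
func (AttestationUtil) HydrateAttestation(a *attestation) *attestation {
	if a.Signature == nil {
		a.Signature = make([]byte, 96) // assumed BLS signature length
	}
	if a.BeaconBlockRoot == nil {
		a.BeaconBlockRoot = make([]byte, 32) // assumed root length
	}
	return a
}
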
View File

@@ -62,9 +62,9 @@ go_library(
"//runtime/prereqs:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],

View File

@@ -4,8 +4,8 @@ import (
"fmt"
"github.com/ethereum/go-ethereum/common"
fastssz "github.com/ferranbt/fastssz"
"github.com/pkg/errors"
fastssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/cmd"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/config/features"

View File

@@ -694,6 +694,7 @@ func (b *BeaconNode) registerSyncService() error {
regularsync.WithStateGen(b.stateGen),
regularsync.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
regularsync.WithSlasherBlockHeadersFeed(b.slasherBlockHeadersFeed),
regularsync.WithExecutionPayloadReconstructor(web3Service),
)
return b.services.RegisterService(rs)
}
@@ -799,48 +800,50 @@ func (b *BeaconNode) registerRPCService() error {
p2pService := b.fetchP2P()
rpcService := rpc.NewService(b.ctx, &rpc.Config{
Host: host,
Port: port,
BeaconMonitoringHost: beaconMonitoringHost,
BeaconMonitoringPort: beaconMonitoringPort,
CertFlag: cert,
KeyFlag: key,
BeaconDB: b.db,
Broadcaster: p2pService,
PeersFetcher: p2pService,
PeerManager: p2pService,
MetadataProvider: p2pService,
ChainInfoFetcher: chainService,
HeadUpdater: chainService,
HeadFetcher: chainService,
CanonicalFetcher: chainService,
ForkFetcher: chainService,
FinalizationFetcher: chainService,
BlockReceiver: chainService,
AttestationReceiver: chainService,
GenesisTimeFetcher: chainService,
GenesisFetcher: chainService,
OptimisticModeFetcher: chainService,
AttestationsPool: b.attestationPool,
ExitPool: b.exitPool,
SlashingsPool: b.slashingsPool,
SlashingChecker: slasherService,
SyncCommitteeObjectPool: b.syncCommitteePool,
POWChainService: web3Service,
POWChainInfoFetcher: web3Service,
ChainStartFetcher: chainStartFetcher,
MockEth1Votes: mockEth1DataVotes,
SyncService: syncService,
DepositFetcher: depositFetcher,
PendingDepositFetcher: b.depositCache,
BlockNotifier: b,
StateNotifier: b,
OperationNotifier: b,
StateGen: b.stateGen,
EnableDebugRPCEndpoints: enableDebugRPCEndpoints,
MaxMsgSize: maxMsgSize,
ProposerIdsCache: b.proposerIdsCache,
ExecutionEngineCaller: web3Service,
ExecutionEngineCaller: web3Service,
ExecutionPayloadReconstructor: web3Service,
Host: host,
Port: port,
BeaconMonitoringHost: beaconMonitoringHost,
BeaconMonitoringPort: beaconMonitoringPort,
CertFlag: cert,
KeyFlag: key,
BeaconDB: b.db,
Broadcaster: p2pService,
PeersFetcher: p2pService,
PeerManager: p2pService,
MetadataProvider: p2pService,
ChainInfoFetcher: chainService,
HeadUpdater: chainService,
HeadFetcher: chainService,
CanonicalFetcher: chainService,
ForkFetcher: chainService,
FinalizationFetcher: chainService,
BlockReceiver: chainService,
AttestationReceiver: chainService,
GenesisTimeFetcher: chainService,
GenesisFetcher: chainService,
OptimisticModeFetcher: chainService,
AttestationsPool: b.attestationPool,
ExitPool: b.exitPool,
SlashingsPool: b.slashingsPool,
SlashingChecker: slasherService,
SyncCommitteeObjectPool: b.syncCommitteePool,
POWChainService: web3Service,
POWChainInfoFetcher: web3Service,
ChainStartFetcher: chainStartFetcher,
MockEth1Votes: mockEth1DataVotes,
SyncService: syncService,
DepositFetcher: depositFetcher,
PendingDepositFetcher: b.depositCache,
BlockNotifier: b,
StateNotifier: b,
OperationNotifier: b,
StateGen: b.stateGen,
EnableDebugRPCEndpoints: enableDebugRPCEndpoints,
MaxMsgSize: maxMsgSize,
ProposerIdsCache: b.proposerIdsCache,
BlockBuilder: b.fetchBuilderService(),
})
return b.services.RegisterService(rpcService)
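
web3Service is registered with the sync service and the RPC service under two roles here, ExecutionEngineCaller and ExecutionPayloadReconstructor. A small sketch of that wiring pattern; the interface method below is hypothetical and only illustrates the shape, the real interface lives in the powchain package:

package sketch

import "context"

// payloadReconstructor is a stand-in for the ExecutionPayloadReconstructor role;
// the method is hypothetical and only illustrates the dependency's shape.
type payloadReconstructor interface {
	ReconstructPayload(ctx context.Context, blockRoot [32]byte) error
}

type syncConfig struct{ reconstructor payloadReconstructor }
type rpcConfig struct{ reconstructor payloadReconstructor }

// wire shows the pattern above: one concrete service satisfies several narrow
// interfaces and is handed once to each consumer that needs it.
func wire(web3 payloadReconstructor) (syncConfig, rpcConfig) {
	return syncConfig{reconstructor: web3}, rpcConfig{reconstructor: web3}
}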

View File

@@ -46,9 +46,9 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -5,9 +5,9 @@ import (
"sort"
"testing"
fssz "github.com/ferranbt/fastssz"
c "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
types "github.com/prysmaticlabs/prysm/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/crypto/bls"
@@ -23,14 +23,15 @@ func TestKV_Aggregated_AggregateUnaggregatedAttestations(t *testing.T) {
require.NoError(t, err)
sig1 := priv.Sign([]byte{'a'})
sig2 := priv.Sign([]byte{'b'})
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig1.Marshal()})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1010}, Signature: sig1.Marshal()})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1100}, Signature: sig1.Marshal()})
att4 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig2.Marshal()})
att5 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig1.Marshal()})
att6 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1010}, Signature: sig1.Marshal()})
att7 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1100}, Signature: sig1.Marshal()})
att8 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig2.Marshal()})
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig1.Marshal()})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1010}, Signature: sig1.Marshal()})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1100}, Signature: sig1.Marshal()})
att4 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig2.Marshal()})
att5 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig1.Marshal()})
att6 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1010}, Signature: sig1.Marshal()})
att7 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1100}, Signature: sig1.Marshal()})
att8 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig2.Marshal()})
atts := []*ethpb.Attestation{att1, att2, att3, att4, att5, att6, att7, att8}
require.NoError(t, cache.SaveUnaggregatedAttestations(atts))
require.NoError(t, cache.AggregateUnaggregatedAttestations(context.Background()))
@@ -42,7 +43,7 @@ func TestKV_Aggregated_AggregateUnaggregatedAttestations(t *testing.T) {
func TestKV_Aggregated_AggregateUnaggregatedAttestationsBySlotIndex(t *testing.T) {
cache := NewAttCaches()
genData := func(slot types.Slot, committeeIndex types.CommitteeIndex) *ethpb.AttestationData {
return util.HydrateAttestationData(&ethpb.AttestationData{
return util.NewAttestationUtil().HydrateAttestationData(&ethpb.AttestationData{
Slot: slot,
CommitteeIndex: committeeIndex,
})
@@ -95,6 +96,7 @@ func TestKV_Aggregated_AggregateUnaggregatedAttestationsBySlotIndex(t *testing.T
}
func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
au := util.AttestationUtil{}
tests := []struct {
name string
att *ethpb.Attestation
@@ -113,23 +115,23 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
},
{
name: "not aggregated",
att: util.HydrateAttestation(&ethpb.Attestation{
att: au.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b10100}}),
wantErrString: "attestation is not aggregated",
},
{
name: "invalid hash",
att: &ethpb.Attestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
BeaconBlockRoot: []byte{0b0},
}),
AggregationBits: bitfield.Bitlist{0b10111},
},
wantErrString: "could not tree hash attestation: " + fssz.ErrBytesLength.Error(),
wantErrString: "could not tree hash attestation: --.BeaconBlockRoot (" + fssz.ErrBytesLength.Error() + ")",
},
{
name: "already seen",
att: util.HydrateAttestation(&ethpb.Attestation{
att: au.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 100,
},
@@ -139,7 +141,7 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
},
{
name: "normal save",
att: util.HydrateAttestation(&ethpb.Attestation{
att: au.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
},
@@ -148,7 +150,7 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
count: 1,
},
}
r, err := hashFn(util.HydrateAttestationData(&ethpb.AttestationData{
r, err := hashFn(au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 100,
}))
require.NoError(t, err)
@@ -172,6 +174,7 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
}
func TestKV_Aggregated_SaveAggregatedAttestations(t *testing.T) {
au := util.AttestationUtil{}
tests := []struct {
name string
atts []*ethpb.Attestation
@@ -181,9 +184,9 @@ func TestKV_Aggregated_SaveAggregatedAttestations(t *testing.T) {
{
name: "no duplicates",
atts: []*ethpb.Attestation{
util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1},
au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1},
AggregationBits: bitfield.Bitlist{0b1101}}),
util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1},
au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1},
AggregationBits: bitfield.Bitlist{0b1101}}),
},
count: 1,
@@ -207,6 +210,7 @@ func TestKV_Aggregated_SaveAggregatedAttestations(t *testing.T) {
}
func TestKV_Aggregated_SaveAggregatedAttestations_SomeGoodSomeBad(t *testing.T) {
au := util.AttestationUtil{}
tests := []struct {
name string
atts []*ethpb.Attestation
@@ -216,9 +220,9 @@ func TestKV_Aggregated_SaveAggregatedAttestations_SomeGoodSomeBad(t *testing.T)
{
name: "the first attestation is bad",
atts: []*ethpb.Attestation{
util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1},
au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1},
AggregationBits: bitfield.Bitlist{0b1100}}),
util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1},
au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1},
AggregationBits: bitfield.Bitlist{0b1101}}),
},
count: 1,
@@ -243,10 +247,11 @@ func TestKV_Aggregated_SaveAggregatedAttestations_SomeGoodSomeBad(t *testing.T)
func TestKV_Aggregated_AggregatedAttestations(t *testing.T) {
cache := NewAttCaches()
au := util.AttestationUtil{}
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {
@@ -261,16 +266,17 @@ func TestKV_Aggregated_AggregatedAttestations(t *testing.T) {
}
func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
au := util.AttestationUtil{}
t.Run("nil attestation", func(t *testing.T) {
cache := NewAttCaches()
assert.ErrorContains(t, "attestation can't be nil", cache.DeleteAggregatedAttestation(nil))
att := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10101}, Data: &ethpb.AttestationData{Slot: 2}})
att := au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10101}, Data: &ethpb.AttestationData{Slot: 2}})
assert.NoError(t, cache.DeleteAggregatedAttestation(att))
})
t.Run("non aggregated attestation", func(t *testing.T) {
cache := NewAttCaches()
att := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b1001}, Data: &ethpb.AttestationData{Slot: 2}})
att := au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b1001}, Data: &ethpb.AttestationData{Slot: 2}})
err := cache.DeleteAggregatedAttestation(att)
assert.ErrorContains(t, "attestation is not aggregated", err)
})
@@ -286,22 +292,22 @@ func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
},
}
err := cache.DeleteAggregatedAttestation(att)
wantErr := "could not tree hash attestation data: " + fssz.ErrBytesLength.Error()
wantErr := "could not tree hash attestation data: --.BeaconBlockRoot (" + fssz.ErrBytesLength.Error() + ")"
assert.ErrorContains(t, wantErr, err)
})
t.Run("nonexistent attestation", func(t *testing.T) {
cache := NewAttCaches()
att := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b1111}, Data: &ethpb.AttestationData{Slot: 2}})
att := au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b1111}, Data: &ethpb.AttestationData{Slot: 2}})
assert.NoError(t, cache.DeleteAggregatedAttestation(att))
})
t.Run("non-filtered deletion", func(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b11010}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b11010}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b11010}})
att4 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b10101}})
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b11010}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b11010}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b11010}})
att4 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b10101}})
atts := []*ethpb.Attestation{att1, att2, att3, att4}
require.NoError(t, cache.SaveAggregatedAttestations(atts))
require.NoError(t, cache.DeleteAggregatedAttestation(att1))
@@ -314,10 +320,10 @@ func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
t.Run("filtered deletion", func(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b110101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110111}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110100}})
att4 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110101}})
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b110101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110111}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110100}})
att4 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110101}})
atts := []*ethpb.Attestation{att1, att2, att3, att4}
require.NoError(t, cache.SaveAggregatedAttestations(atts))
@@ -334,6 +340,7 @@ func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
}
func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
au := util.AttestationUtil{}
tests := []struct {
name string
existing []*ethpb.Attestation
@@ -357,7 +364,7 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
},
{
name: "empty cache aggregated",
input: util.HydrateAttestation(&ethpb.Attestation{
input: au.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
},
@@ -366,7 +373,7 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
},
{
name: "empty cache unaggregated",
input: util.HydrateAttestation(&ethpb.Attestation{
input: au.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
},
@@ -376,13 +383,13 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
{
name: "single attestation in cache with exact match",
existing: []*ethpb.Attestation{{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1111}},
},
input: &ethpb.Attestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1111}},
@@ -391,13 +398,13 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
{
name: "single attestation in cache with subset aggregation",
existing: []*ethpb.Attestation{{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1111}},
},
input: &ethpb.Attestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1110}},
@@ -406,13 +413,13 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
{
name: "single attestation in cache with superset aggregation",
existing: []*ethpb.Attestation{{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1110}},
},
input: &ethpb.Attestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1111}},
@@ -422,20 +429,20 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
name: "multiple attestations with same data in cache with overlapping aggregation, input is subset",
existing: []*ethpb.Attestation{
{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1111000},
},
{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1100111},
},
},
input: &ethpb.Attestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1100000}},
@@ -445,20 +452,20 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
name: "multiple attestations with same data in cache with overlapping aggregation and input is superset",
existing: []*ethpb.Attestation{
{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1111000},
},
{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1100111},
},
},
input: &ethpb.Attestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1111111}},
@@ -468,20 +475,20 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
name: "multiple attestations with different data in cache",
existing: []*ethpb.Attestation{
{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 2,
}),
AggregationBits: bitfield.Bitlist{0b1111000},
},
{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 3,
}),
AggregationBits: bitfield.Bitlist{0b1100111},
},
},
input: &ethpb.Attestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 1,
}),
AggregationBits: bitfield.Bitlist{0b1111111}},
@@ -491,14 +498,14 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
name: "attestations with different bitlist lengths",
existing: []*ethpb.Attestation{
{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 2,
}),
AggregationBits: bitfield.Bitlist{0b1111000},
},
},
input: &ethpb.Attestation{
Data: util.HydrateAttestationData(&ethpb.AttestationData{
Data: au.HydrateAttestationData(&ethpb.AttestationData{
Slot: 2,
}),
AggregationBits: bitfield.Bitlist{0b1111},
@@ -540,8 +547,9 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
func TestKV_Aggregated_DuplicateAggregatedAttestations(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1111}})
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1111}})
atts := []*ethpb.Attestation{att1, att2}
for _, att := range atts {

View File

@@ -14,16 +14,17 @@ import (
func TestKV_BlockAttestation_CanSaveRetrieve(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveBlockAttestation(att))
}
// Diff bit length should not panic.
att4 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b11011}})
att4 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b11011}})
if err := cache.SaveBlockAttestation(att4); err != bitfield.ErrBitlistDifferentLength {
t.Errorf("Unexpected error: wanted %v, got %v", bitfield.ErrBitlistDifferentLength, err)
}
@@ -40,9 +41,10 @@ func TestKV_BlockAttestation_CanSaveRetrieve(t *testing.T) {
func TestKV_BlockAttestation_CanDelete(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {

View File

@@ -14,9 +14,10 @@ import (
func TestKV_Forkchoice_CanSaveRetrieve(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {
@@ -35,9 +36,10 @@ func TestKV_Forkchoice_CanSaveRetrieve(t *testing.T) {
func TestKV_Forkchoice_CanDelete(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {
@@ -55,9 +57,10 @@ func TestKV_Forkchoice_CanDelete(t *testing.T) {
func TestKV_Forkchoice_CanCount(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {

View File

@@ -12,20 +12,21 @@ import (
func TestAttCaches_hasSeenBit(t *testing.T) {
c := NewAttCaches()
seenA1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000011}})
seenA2 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11100000}})
au := util.AttestationUtil{}
seenA1 := au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000011}})
seenA2 := au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11100000}})
require.NoError(t, c.insertSeenBit(seenA1))
require.NoError(t, c.insertSeenBit(seenA2))
tests := []struct {
att *ethpb.Attestation
want bool
}{
{att: util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000000}}), want: true},
{att: util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000001}}), want: true},
{att: util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11100000}}), want: true},
{att: util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000011}}), want: true},
{att: util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10001000}}), want: false},
{att: util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11110111}}), want: false},
{att: au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000000}}), want: true},
{att: au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000001}}), want: true},
{att: au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11100000}}), want: true},
{att: au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000011}}), want: true},
{att: au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10001000}}), want: false},
{att: au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11110111}}), want: false},
}
for _, tt := range tests {
got, err := c.hasSeenBit(tt.att)
@@ -38,7 +39,8 @@ func TestAttCaches_hasSeenBit(t *testing.T) {
func TestAttCaches_insertSeenBitDuplicates(t *testing.T) {
c := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000011}})
att1 := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000011}})
r, err := hashFn(att1.Data)
require.NoError(t, err)
require.NoError(t, c.insertSeenBit(att1))

View File

@@ -6,8 +6,8 @@ import (
"sort"
"testing"
fssz "github.com/ferranbt/fastssz"
c "github.com/patrickmn/go-cache"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
@@ -17,6 +17,7 @@ import (
)
func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
au := util.AttestationUtil{}
tests := []struct {
name string
att *ethpb.Attestation
@@ -43,12 +44,12 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
},
{
name: "normal save",
att: util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b0001}}),
att: au.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b0001}}),
count: 1,
},
{
name: "already seen",
att: util.HydrateAttestation(&ethpb.Attestation{
att: au.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 100,
},
@@ -57,7 +58,7 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
count: 0,
},
}
r, err := hashFn(util.HydrateAttestationData(&ethpb.AttestationData{Slot: 100}))
r, err := hashFn(au.HydrateAttestationData(&ethpb.AttestationData{Slot: 100}))
require.NoError(t, err)
for _, tt := range tests {
@@ -83,6 +84,7 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
}
func TestKV_Unaggregated_SaveUnaggregatedAttestations(t *testing.T) {
au := util.AttestationUtil{}
tests := []struct {
name string
atts []*ethpb.Attestation
@@ -92,18 +94,18 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestations(t *testing.T) {
{
name: "unaggregated only",
atts: []*ethpb.Attestation{
util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}}),
util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}}),
util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}}),
au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}}),
au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}}),
au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}}),
},
count: 3,
},
{
name: "has aggregated",
atts: []*ethpb.Attestation{
util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}}),
au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}}),
{AggregationBits: bitfield.Bitlist{0b1111}, Data: &ethpb.AttestationData{Slot: 2}},
util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}}),
au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}}),
},
wantErrString: "attestation is aggregated",
count: 1,
@@ -141,10 +143,11 @@ func TestKV_Unaggregated_DeleteUnaggregatedAttestation(t *testing.T) {
})
t.Run("successful deletion", func(t *testing.T) {
au := util.AttestationUtil{}
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b110}})
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b110}})
atts := []*ethpb.Attestation{att1, att2, att3}
require.NoError(t, cache.SaveUnaggregatedAttestations(atts))
for _, att := range atts {
@@ -157,7 +160,8 @@ func TestKV_Unaggregated_DeleteUnaggregatedAttestation(t *testing.T) {
}
func TestKV_Unaggregated_DeleteSeenUnaggregatedAttestations(t *testing.T) {
d := util.HydrateAttestationData(&ethpb.AttestationData{})
au := util.AttestationUtil{}
d := au.HydrateAttestationData(&ethpb.AttestationData{})
t.Run("no attestations", func(t *testing.T) {
cache := NewAttCaches()
@@ -169,9 +173,9 @@ func TestKV_Unaggregated_DeleteSeenUnaggregatedAttestations(t *testing.T) {
t.Run("none seen", func(t *testing.T) {
cache := NewAttCaches()
atts := []*ethpb.Attestation{
util.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1001}}),
util.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1010}}),
util.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1100}}),
au.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1001}}),
au.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1010}}),
au.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1100}}),
}
require.NoError(t, cache.SaveUnaggregatedAttestations(atts))
assert.Equal(t, 3, cache.UnaggregatedAttestationCount())
@@ -186,9 +190,9 @@ func TestKV_Unaggregated_DeleteSeenUnaggregatedAttestations(t *testing.T) {
t.Run("some seen", func(t *testing.T) {
cache := NewAttCaches()
atts := []*ethpb.Attestation{
util.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1001}}),
util.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1010}}),
util.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1100}}),
au.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1001}}),
au.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1010}}),
au.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1100}}),
}
require.NoError(t, cache.SaveUnaggregatedAttestations(atts))
assert.Equal(t, 3, cache.UnaggregatedAttestationCount())
@@ -211,9 +215,9 @@ func TestKV_Unaggregated_DeleteSeenUnaggregatedAttestations(t *testing.T) {
t.Run("all seen", func(t *testing.T) {
cache := NewAttCaches()
atts := []*ethpb.Attestation{
util.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1001}}),
util.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1010}}),
util.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1100}}),
au.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1001}}),
au.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1010}}),
au.HydrateAttestation(&ethpb.Attestation{Data: d, AggregationBits: bitfield.Bitlist{0b1100}}),
}
require.NoError(t, cache.SaveUnaggregatedAttestations(atts))
assert.Equal(t, 3, cache.UnaggregatedAttestationCount())
@@ -236,9 +240,10 @@ func TestKV_Unaggregated_DeleteSeenUnaggregatedAttestations(t *testing.T) {
func TestKV_Unaggregated_UnaggregatedAttestationsBySlotIndex(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b101}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 2}, AggregationBits: bitfield.Bitlist{0b110}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b110}})
au := util.AttestationUtil{}
att1 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b101}})
att2 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 2}, AggregationBits: bitfield.Bitlist{0b110}})
att3 := au.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b110}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {

View File

@@ -234,7 +234,7 @@ func TestSeenAttestations_PresentInCache(t *testing.T) {
s, err := NewService(context.Background(), &Config{Pool: NewPool()})
require.NoError(t, err)
ad1 := util.HydrateAttestationData(&ethpb.AttestationData{})
ad1 := util.NewAttestationUtil().HydrateAttestationData(&ethpb.AttestationData{})
att1 := &ethpb.Attestation{Data: ad1, Signature: []byte{'A'}, AggregationBits: bitfield.Bitlist{0x13} /* 0b00010011 */}
got, err := s.seen(att1)
require.NoError(t, err)
@@ -252,9 +252,9 @@ func TestSeenAttestations_PresentInCache(t *testing.T) {
}
func TestService_seen(t *testing.T) {
ad1 := util.HydrateAttestationData(&ethpb.AttestationData{Slot: 1})
ad2 := util.HydrateAttestationData(&ethpb.AttestationData{Slot: 2})
au := util.AttestationUtil{}
ad1 := au.HydrateAttestationData(&ethpb.AttestationData{Slot: 1})
ad2 := au.HydrateAttestationData(&ethpb.AttestationData{Slot: 2})
// Attestations are checked in order of this list.
tests := []struct {

View File

@@ -26,9 +26,9 @@ func TestPruneExpired_Ticker(t *testing.T) {
})
require.NoError(t, err)
ad1 := util.HydrateAttestationData(&ethpb.AttestationData{})
ad2 := util.HydrateAttestationData(&ethpb.AttestationData{Slot: 1})
au := util.AttestationUtil{}
ad1 := au.HydrateAttestationData(&ethpb.AttestationData{})
ad2 := au.HydrateAttestationData(&ethpb.AttestationData{Slot: 1})
atts := []*ethpb.Attestation{
{Data: ad1, AggregationBits: bitfield.Bitlist{0b1000, 0b1}, Signature: make([]byte, fieldparams.BLSSignatureLength)},
@@ -85,9 +85,9 @@ func TestPruneExpired_PruneExpiredAtts(t *testing.T) {
s, err := NewService(context.Background(), &Config{Pool: NewPool()})
require.NoError(t, err)
ad1 := util.HydrateAttestationData(&ethpb.AttestationData{})
ad2 := util.HydrateAttestationData(&ethpb.AttestationData{})
au := util.AttestationUtil{}
ad1 := au.HydrateAttestationData(&ethpb.AttestationData{})
ad2 := au.HydrateAttestationData(&ethpb.AttestationData{})
att1 := &ethpb.Attestation{Data: ad1, AggregationBits: bitfield.Bitlist{0b1101}}
att2 := &ethpb.Attestation{Data: ad1, AggregationBits: bitfield.Bitlist{0b1111}}

View File
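The attestation-pool test changes above all follow the same migration: the package-level util.HydrateAttestation and util.HydrateAttestationData helpers give way to methods on a util.AttestationUtil value, created either as a zero-value struct or via util.NewAttestationUtil(). A minimal sketch of the new pattern, assuming the same helper names as in the diffs; the slot and bitlist values are illustrative only:

au := util.AttestationUtil{} // or: au := util.NewAttestationUtil(), as some tests above do
att := au.HydrateAttestation(&ethpb.Attestation{
	Data:            au.HydrateAttestationData(&ethpb.AttestationData{Slot: 1}),
	AggregationBits: bitfield.Bitlist{0b1101},
})
_ = att // nil fields are hydrated with valid defaults so the cache tests can hash and store it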

@@ -57,6 +57,7 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//crypto/ecdsa:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//io/file:go_default_library",
@@ -70,16 +71,16 @@ go_library(
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_kevinms_leakybucket_go//:go_default_library",
"@com_github_kr_pretty//:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p//config:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/protocol/identify:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
"@com_github_libp2p_go_libp2p_core//connmgr:go_default_library",
"@com_github_libp2p_go_libp2p_core//control:go_default_library",
"@com_github_libp2p_go_libp2p_core//crypto:go_default_library",
@@ -87,15 +88,14 @@ go_library(
"@com_github_libp2p_go_libp2p_core//network:go_default_library",
"@com_github_libp2p_go_libp2p_core//peer:go_default_library",
"@com_github_libp2p_go_libp2p_core//protocol:go_default_library",
"@com_github_libp2p_go_libp2p_noise//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_libp2p_go_tcp_transport//:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_multiformats_go_multiaddr//net:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
@@ -149,6 +149,7 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//crypto/ecdsa:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network:go_default_library",
@@ -168,15 +169,15 @@ go_test(
"@com_github_golang_snappy//:go_default_library",
"@com_github_kevinms_leakybucket_go//:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p_blankhost//:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/host/blank:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/net/swarm/testing:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p_core//crypto:go_default_library",
"@com_github_libp2p_go_libp2p_core//host:go_default_library",
"@com_github_libp2p_go_libp2p_core//network:go_default_library",
"@com_github_libp2p_go_libp2p_core//peer:go_default_library",
"@com_github_libp2p_go_libp2p_noise//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_libp2p_go_libp2p_swarm//testing:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -7,8 +7,8 @@ import (
"reflect"
"time"
ssz "github.com/ferranbt/fastssz"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/crypto/hash"

View File

@@ -162,7 +162,7 @@ func TestService_BroadcastAttestation(t *testing.T) {
}),
}
msg := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.NewBitlist(7)})
msg := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.NewBitlist(7)})
subnet := uint64(5)
topic := AttestationSubnetTopicFormat
@@ -323,7 +323,7 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
go p.listenForNewNodes()
go p2.listenForNewNodes()
msg := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.NewBitlist(7)})
msg := util.NewAttestationUtil().HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.NewBitlist(7)})
topic := AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(msg)] = topic
digest, err := p.currentForkDigest()

View File

@@ -5,8 +5,8 @@ import (
"fmt"
"testing"
bh "github.com/libp2p/go-libp2p-blankhost"
swarmt "github.com/libp2p/go-libp2p-swarm/testing"
bh "github.com/libp2p/go-libp2p/p2p/host/blank"
swarmt "github.com/libp2p/go-libp2p/p2p/net/swarm/testing"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)

View File

@@ -16,6 +16,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/config/params"
ecdsaprysm "github.com/prysmaticlabs/prysm/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/time/slots"
)
@@ -391,7 +392,10 @@ func convertToAddrInfo(node *enode.Node) (*peer.AddrInfo, ma.Multiaddr, error) {
func convertToSingleMultiAddr(node *enode.Node) (ma.Multiaddr, error) {
pubkey := node.Pubkey()
assertedKey := convertToInterfacePubkey(pubkey)
assertedKey, err := ecdsaprysm.ConvertToInterfacePubkey(pubkey)
if err != nil {
return nil, errors.Wrap(err, "could not get pubkey")
}
id, err := peer.IDFromPublicKey(assertedKey)
if err != nil {
return nil, errors.Wrap(err, "could not get peer id")
@@ -401,7 +405,10 @@ func convertToSingleMultiAddr(node *enode.Node) (ma.Multiaddr, error) {
func convertToUdpMultiAddr(node *enode.Node) ([]ma.Multiaddr, error) {
pubkey := node.Pubkey()
assertedKey := convertToInterfacePubkey(pubkey)
assertedKey, err := ecdsaprysm.ConvertToInterfacePubkey(pubkey)
if err != nil {
return nil, errors.Wrap(err, "could not get pubkey")
}
id, err := peer.IDFromPublicKey(assertedKey)
if err != nil {
return nil, errors.Wrap(err, "could not get peer id")

View File

@@ -15,10 +15,10 @@ go_library(
deps = [
"//config/params:go_default_library",
"//math:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
],
)

View File

@@ -3,7 +3,7 @@ package encoder
import (
"io"
ssz "github.com/ferranbt/fastssz"
ssz "github.com/prysmaticlabs/fastssz"
)
// NetworkEncoding represents an encoder compatible with Ethereum consensus p2p.

View File

@@ -5,10 +5,10 @@ import (
"io"
"sync"
fastssz "github.com/ferranbt/fastssz"
"github.com/gogo/protobuf/proto"
"github.com/golang/snappy"
"github.com/pkg/errors"
fastssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/math"
)

View File

@@ -7,10 +7,11 @@ import (
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p-core/peer"
noise "github.com/libp2p/go-libp2p-noise"
"github.com/libp2p/go-tcp-transport"
noise "github.com/libp2p/go-libp2p/p2p/security/noise"
"github.com/libp2p/go-libp2p/p2p/transport/tcp"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
ecdsaprysm "github.com/prysmaticlabs/prysm/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/runtime/version"
)
@@ -30,7 +31,10 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) []libp2p.Opt
log.Fatalf("Failed to p2p listen: %v", err)
}
}
ifaceKey := convertToInterfacePrivkey(priKey)
ifaceKey, err := ecdsaprysm.ConvertToInterfacePrivkey(priKey)
if err != nil {
log.Fatalf("Failed to retrieve private key: %v", err)
}
id, err := peer.IDFromPublicKey(ifaceKey.GetPublic())
if err != nil {
log.Fatalf("Failed to retrieve peer id: %v", err)
@@ -113,7 +117,11 @@ func multiAddressBuilderWithID(ipAddr, protocol string, port uint, id peer.ID) (
// private key contents cannot be marshaled, an exception is thrown.
func privKeyOption(privkey *ecdsa.PrivateKey) libp2p.Option {
return func(cfg *libp2p.Config) error {
ifaceKey, err := ecdsaprysm.ConvertToInterfacePrivkey(privkey)
if err != nil {
return err
}
log.Debug("ECDSA private key generated")
return cfg.Apply(libp2p.Identity(convertToInterfacePrivkey(privkey)))
return cfg.Apply(libp2p.Identity(ifaceKey))
}
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p-core/crypto"
"github.com/prysmaticlabs/prysm/config/params"
ecdsaprysm "github.com/prysmaticlabs/prysm/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
@@ -36,7 +37,8 @@ func TestPrivateKeyLoading(t *testing.T) {
}
pKey, err := privKey(cfg)
require.NoError(t, err, "Could not apply option")
newPkey := convertToInterfacePrivkey(pKey)
newPkey, err := ecdsaprysm.ConvertToInterfacePrivkey(pKey)
require.NoError(t, err)
rawBytes, err := key.Raw()
require.NoError(t, err)
newRaw, err := newPkey.Raw()

View File

@@ -3,12 +3,12 @@ package p2p
import (
"context"
ssz "github.com/ferranbt/fastssz"
"github.com/kr/pretty"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p-core/protocol"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"

View File

@@ -13,7 +13,7 @@ import (
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p-core/host"
"github.com/libp2p/go-libp2p-core/peer"
noise "github.com/libp2p/go-libp2p-noise"
noise "github.com/libp2p/go-libp2p/p2p/security/noise"
"github.com/multiformats/go-multiaddr"
"github.com/prysmaticlabs/prysm/async/event"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"

View File

@@ -19,6 +19,7 @@ import (
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/crypto/ecdsa"
pb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
@@ -163,7 +164,7 @@ func Test_AttSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
localNode = initializeAttSubnets(localNode)
@@ -179,7 +180,7 @@ func Test_AttSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
entry := enr.WithEntry(attSubnetEnrKey, []byte{})
@@ -197,7 +198,7 @@ func Test_AttSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
entry := enr.WithEntry(attSubnetEnrKey, make([]byte, 4))
@@ -215,7 +216,7 @@ func Test_AttSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
entry := enr.WithEntry(attSubnetEnrKey, make([]byte, byteCount(int(attestationSubnetCount))+1))
@@ -233,7 +234,7 @@ func Test_AttSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
entry := enr.WithEntry(attSubnetEnrKey, make([]byte, byteCount(int(attestationSubnetCount))+100))
@@ -251,7 +252,7 @@ func Test_AttSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
bitV := bitfield.NewBitvector64()
@@ -270,7 +271,7 @@ func Test_AttSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
bitV := bitfield.NewBitvector64()
@@ -298,7 +299,7 @@ func Test_AttSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
bitV := bitfield.NewBitvector64()
@@ -348,7 +349,7 @@ func Test_SyncSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
localNode = initializeSyncCommSubnets(localNode)
@@ -364,7 +365,7 @@ func Test_SyncSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
entry := enr.WithEntry(syncCommsSubnetEnrKey, []byte{})
@@ -382,7 +383,7 @@ func Test_SyncSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
entry := enr.WithEntry(syncCommsSubnetEnrKey, make([]byte, byteCount(int(syncCommsSubnetCount))+1))
@@ -400,7 +401,7 @@ func Test_SyncSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
entry := enr.WithEntry(syncCommsSubnetEnrKey, make([]byte, byteCount(int(syncCommsSubnetCount))+100))
@@ -418,7 +419,7 @@ func Test_SyncSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
bitV := bitfield.Bitvector4{byte(0x00)}
@@ -437,7 +438,7 @@ func Test_SyncSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
bitV := bitfield.Bitvector4{byte(0x00)}
@@ -463,7 +464,7 @@ func Test_SyncSubnets(t *testing.T) {
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey := convertFromInterfacePrivKey(priv)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
bitV := bitfield.Bitvector4{byte(0x00)}

View File

@@ -25,8 +25,8 @@ go_library(
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_libp2p_go_libp2p_blankhost//:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/host/blank:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/net/swarm/testing:go_default_library",
"@com_github_libp2p_go_libp2p_core//:go_default_library",
"@com_github_libp2p_go_libp2p_core//connmgr:go_default_library",
"@com_github_libp2p_go_libp2p_core//control:go_default_library",
@@ -37,8 +37,8 @@ go_library(
"@com_github_libp2p_go_libp2p_core//peerstore:go_default_library",
"@com_github_libp2p_go_libp2p_core//protocol:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_swarm//testing:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],

View File

@@ -10,8 +10,6 @@ import (
"time"
"github.com/ethereum/go-ethereum/p2p/enr"
ssz "github.com/ferranbt/fastssz"
bhost "github.com/libp2p/go-libp2p-blankhost"
core "github.com/libp2p/go-libp2p-core"
"github.com/libp2p/go-libp2p-core/control"
"github.com/libp2p/go-libp2p-core/host"
@@ -19,8 +17,10 @@ import (
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p-core/protocol"
pubsub "github.com/libp2p/go-libp2p-pubsub"
swarmt "github.com/libp2p/go-libp2p-swarm/testing"
bhost "github.com/libp2p/go-libp2p/p2p/host/blank"
swarmt "github.com/libp2p/go-libp2p/p2p/net/swarm/testing"
"github.com/multiformats/go-multiaddr"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p/peers/scorers"

View File

@@ -23,8 +23,8 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
],
)

View File

@@ -4,8 +4,8 @@
package types
import (
ssz "github.com/ferranbt/fastssz"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/config/params"
)

View File

@@ -12,12 +12,12 @@ import (
"path"
"time"
gcrypto "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p-core/crypto"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/io/file"
"github.com/prysmaticlabs/prysm/network"
pb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
@@ -44,22 +44,6 @@ func SerializeENR(record *enr.Record) (string, error) {
return enrString, nil
}
func convertFromInterfacePrivKey(privkey crypto.PrivKey) *ecdsa.PrivateKey {
typeAssertedKey := (*ecdsa.PrivateKey)(privkey.(*crypto.Secp256k1PrivateKey))
typeAssertedKey.Curve = gcrypto.S256() // Temporary hack, so libp2p Secp256k1 is recognized as geth Secp256k1 in disc v5.1.
return typeAssertedKey
}
func convertToInterfacePrivkey(privkey *ecdsa.PrivateKey) crypto.PrivKey {
typeAssertedKey := crypto.PrivKey((*crypto.Secp256k1PrivateKey)(privkey))
return typeAssertedKey
}
func convertToInterfacePubkey(pubkey *ecdsa.PublicKey) crypto.PubKey {
typeAssertedKey := crypto.PubKey((*crypto.Secp256k1PublicKey)(pubkey))
return typeAssertedKey
}
// Determines a private key for p2p networking from the p2p service's
// configuration struct. If no key is found, it generates a new one.
func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
@@ -77,8 +61,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
if err != nil {
return nil, err
}
convertedKey := convertFromInterfacePrivKey(priv)
return convertedKey, nil
return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
}
if defaultKeysExist && privateKeyPath == "" {
privateKeyPath = defaultKeyPath
@@ -102,7 +85,7 @@ func privKeyFromFile(path string) (*ecdsa.PrivateKey, error) {
if err != nil {
return nil, err
}
return convertFromInterfacePrivKey(unmarshalledKey), nil
return ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledKey)
}
// Retrieves node p2p metadata from a set of configuration values

View File
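The local key-conversion helpers deleted above are replaced throughout this changeset by the shared crypto/ecdsa package, whose ConvertFromInterfacePrivKey, ConvertToInterfacePrivkey and ConvertToInterfacePubkey functions also return errors. A rough round-trip sketch under that assumption, reusing the key-generation call from the subnet tests; everything outside the two conversion calls is illustrative:

package example

import (
	"crypto/rand"

	"github.com/libp2p/go-libp2p-core/crypto"
	ecdsaprysm "github.com/prysmaticlabs/prysm/crypto/ecdsa"
)

func keyRoundTrip() error {
	// Generate a libp2p secp256k1 key, as the subnet tests in this changeset do.
	priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
	if err != nil {
		return err
	}
	// libp2p key -> geth-style *ecdsa.PrivateKey (what enode.NewLocalNode expects).
	gethKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
	if err != nil {
		return err
	}
	// ...and back to a libp2p private key for the host identity option.
	ifaceKey, err := ecdsaprysm.ConvertToInterfacePrivkey(gethKey)
	if err != nil {
		return err
	}
	_ = ifaceKey
	return nil
}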

@@ -39,6 +39,8 @@ go_library(
"//beacon-chain/state/v1:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/trie:go_default_library",
"//contracts/deposit:go_default_library",
"//crypto/hash:go_default_library",
@@ -77,6 +79,7 @@ go_test(
"block_reader_test.go",
"check_transition_config_test.go",
"deposit_test.go",
"engine_client_fuzz_test.go",
"engine_client_test.go",
"init_test.go",
"log_processing_test.go",
@@ -85,6 +88,7 @@ go_test(
"provider_test.go",
"service_test.go",
],
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//async/event:go_default_library",
@@ -101,6 +105,7 @@ go_test(
"//beacon-chain/state/stategen:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/forks/bellatrix:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/trie:go_default_library",
@@ -121,6 +126,7 @@ go_test(
"@com_github_ethereum_go_ethereum//accounts/abi/bind/backends:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/beacon:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_ethereum_go_ethereum//trie:go_default_library",

View File

@@ -15,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/network"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
)
@@ -85,7 +86,8 @@ func (s *Service) checkTransitionConfiguration(
ctx, cancel := context.WithDeadline(ctx, tm.Add(network.DefaultRPCHTTPTimeout))
err = s.ExchangeTransitionConfiguration(ctx, cfg)
s.handleExchangeConfigurationError(err)
if !hasTtdReached {
currentEpoch := slots.ToEpoch(slots.CurrentSlot(s.chainStartData.GetGenesisTime()))
if currentEpoch >= params.BeaconConfig().BellatrixForkEpoch && !hasTtdReached {
hasTtdReached, err = s.logTtdStatus(ctx, ttd)
if err != nil {
log.WithError(err).Error("Could not log ttd status")

View File
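The checkTransitionConfiguration change above gates the TTD status query on the chain having reached the Bellatrix fork epoch. A small sketch of that derivation, assuming genesisTime stands in for s.chainStartData.GetGenesisTime():

currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
if currentEpoch >= params.BeaconConfig().BellatrixForkEpoch && !hasTtdReached {
	// Only past the Bellatrix fork epoch is it worth asking the execution
	// client whether the terminal total difficulty has been reached.
}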

@@ -4,12 +4,14 @@ import (
"context"
"encoding/json"
"errors"
"math/big"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
gethtypes "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/rpc"
"github.com/holiman/uint256"
mockChain "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
@@ -161,7 +163,27 @@ func TestService_logTtdStatus(t *testing.T) {
require.NoError(t, r.Body.Close())
}()
resp := &pb.ExecutionBlock{TotalDifficulty: "0x12345678"}
resp := &pb.ExecutionBlock{
Header: gethtypes.Header{
ParentHash: common.Hash{},
UncleHash: common.Hash{},
Coinbase: common.Address{},
Root: common.Hash{},
TxHash: common.Hash{},
ReceiptHash: common.Hash{},
Bloom: gethtypes.Bloom{},
Difficulty: big.NewInt(1),
Number: big.NewInt(2),
GasLimit: 3,
GasUsed: 4,
Time: 5,
Extra: nil,
MixDigest: common.Hash{},
Nonce: gethtypes.BlockNonce{},
BaseFee: big.NewInt(6),
},
TotalDifficulty: "0x12345678",
}
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,

View File

@@ -14,6 +14,8 @@ import (
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
"github.com/sirupsen/logrus"
@@ -46,6 +48,15 @@ type ForkchoiceUpdatedResponse struct {
PayloadId *pb.PayloadIDBytes `json:"payloadId"`
}
// ExecutionPayloadReconstructor defines a service that can reconstruct a full beacon
// block with an execution payload from a signed beacon block and a connection
// to an execution client's engine API.
type ExecutionPayloadReconstructor interface {
ReconstructFullBellatrixBlock(
ctx context.Context, blindedBlock interfaces.SignedBeaconBlock,
) (interfaces.SignedBeaconBlock, error)
}
// EngineCaller defines a client that can interact with an Ethereum
// execution node's engine service via JSON-RPC.
type EngineCaller interface {
@@ -57,7 +68,7 @@ type EngineCaller interface {
ExchangeTransitionConfiguration(
ctx context.Context, cfg *pb.TransitionConfiguration,
) error
ExecutionBlockByHash(ctx context.Context, hash common.Hash) (*pb.ExecutionBlock, error)
ExecutionBlockByHash(ctx context.Context, hash common.Hash, withTxs bool) (*pb.ExecutionBlock, error)
GetTerminalBlockHash(ctx context.Context) ([]byte, bool, error)
}
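The ExecutionPayloadReconstructor interface above is satisfied further down in this file by Service.ReconstructFullBellatrixBlock. A hypothetical caller-side sketch; the package paths are taken from imports elsewhere in this changeset, and everything apart from the interface and its method is illustrative:

package example

import (
	"context"

	"github.com/pkg/errors"
	"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
	"github.com/prysmaticlabs/prysm/consensus-types/interfaces"
)

func unblind(ctx context.Context, r powchain.ExecutionPayloadReconstructor, blk interfaces.SignedBeaconBlock) (interfaces.SignedBeaconBlock, error) {
	if !blk.Block().IsBlinded() {
		// Already a full block; nothing to reconstruct.
		return blk, nil
	}
	// Internally the service refetches the execution block by hash with
	// withTxs=true and rebuilds the full payload from the blinded header.
	full, err := r.ReconstructFullBellatrixBlock(ctx, blk)
	if err != nil {
		return nil, errors.Wrap(err, "could not reconstruct full bellatrix block")
	}
	return full, nil
}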
@@ -80,7 +91,7 @@ func (s *Service) NewPayload(ctx context.Context, payload *pb.ExecutionPayload)
switch result.Status {
case pb.PayloadStatus_INVALID_BLOCK_HASH:
return nil, fmt.Errorf("could not validate block hash: %v", result.ValidationError)
return nil, ErrInvalidBlockHashPayloadStatus
case pb.PayloadStatus_ACCEPTED, pb.PayloadStatus_SYNCING:
return nil, ErrAcceptedSyncingPayloadStatus
case pb.PayloadStatus_INVALID:
@@ -228,11 +239,11 @@ func (s *Service) GetTerminalBlockHash(ctx context.Context) ([]byte, bool, error
}
blockReachedTTD := currentTotalDifficulty.Cmp(terminalTotalDifficulty) >= 0
parentHash := bytesutil.ToBytes32(blk.ParentHash)
if len(blk.ParentHash) == 0 || parentHash == params.BeaconConfig().ZeroHash {
parentHash := blk.ParentHash
if parentHash == params.BeaconConfig().ZeroHash {
return nil, false, nil
}
parentBlk, err := s.ExecutionBlockByHash(ctx, parentHash)
parentBlk, err := s.ExecutionBlockByHash(ctx, parentHash, false /* no txs */)
if err != nil {
return nil, false, errors.Wrap(err, "could not get parent execution block")
}
@@ -248,12 +259,12 @@ func (s *Service) GetTerminalBlockHash(ctx context.Context) ([]byte, bool, error
if !parentReachedTTD {
log.WithFields(logrus.Fields{
"number": blk.Number,
"hash": fmt.Sprintf("%#x", bytesutil.Trunc(blk.Hash)),
"hash": fmt.Sprintf("%#x", bytesutil.Trunc(blk.Hash[:])),
"td": blk.TotalDifficulty,
"parentTd": parentBlk.TotalDifficulty,
"ttd": terminalTotalDifficulty,
}).Info("Retrieved terminal block hash")
return blk.Hash, true, nil
return blk.Hash[:], true, nil
}
} else {
return nil, false, nil
@@ -281,15 +292,88 @@ func (s *Service) LatestExecutionBlock(ctx context.Context) (*pb.ExecutionBlock,
// ExecutionBlockByHash fetches an execution engine block by hash by calling
// eth_blockByHash via JSON-RPC.
func (s *Service) ExecutionBlockByHash(ctx context.Context, hash common.Hash) (*pb.ExecutionBlock, error) {
func (s *Service) ExecutionBlockByHash(ctx context.Context, hash common.Hash, withTxs bool) (*pb.ExecutionBlock, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExecutionBlockByHash")
defer span.End()
result := &pb.ExecutionBlock{}
err := s.rpcClient.CallContext(ctx, result, ExecutionBlockByHashMethod, hash, false /* no full transaction objects */)
err := s.rpcClient.CallContext(ctx, result, ExecutionBlockByHashMethod, hash, withTxs)
return result, handleRPCError(err)
}
// ReconstructFullBellatrixBlock takes in a blinded beacon block and reconstructs
// a beacon block with a full execution payload via the engine API.
func (s *Service) ReconstructFullBellatrixBlock(
ctx context.Context, blindedBlock interfaces.SignedBeaconBlock,
) (interfaces.SignedBeaconBlock, error) {
if err := wrapper.BeaconBlockIsNil(blindedBlock); err != nil {
return nil, errors.Wrap(err, "cannot reconstruct bellatrix block from nil data")
}
if !blindedBlock.Block().IsBlinded() {
return nil, errors.New("can only reconstruct block from blinded block format")
}
header, err := blindedBlock.Block().Body().ExecutionPayloadHeader()
if err != nil {
return nil, err
}
executionBlockHash := common.BytesToHash(header.BlockHash)
executionBlock, err := s.ExecutionBlockByHash(ctx, executionBlockHash, true /* with txs */)
if err != nil {
return nil, fmt.Errorf("could not fetch execution block with txs by hash %#x: %v", executionBlockHash, err)
}
if executionBlock == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionBlockHash)
}
payload, err := fullPayloadFromExecutionBlock(header, executionBlock)
if err != nil {
return nil, err
}
fullBlock, err := wrapper.BuildSignedBeaconBlockFromExecutionPayload(blindedBlock, payload)
if err != nil {
return nil, err
}
reconstructedExecutionPayloadCount.Add(1)
return fullBlock, nil
}
func fullPayloadFromExecutionBlock(
header *pb.ExecutionPayloadHeader, block *pb.ExecutionBlock,
) (*pb.ExecutionPayload, error) {
if header == nil || block == nil {
return nil, errors.New("execution block and header cannot be nil")
}
if !bytes.Equal(header.BlockHash, block.Hash[:]) {
return nil, fmt.Errorf(
"block hash field in execution header %#x does not match execution block hash %#x",
header.BlockHash,
block.Hash,
)
}
txs := make([][]byte, len(block.Transactions))
for i, tx := range block.Transactions {
txBin, err := tx.MarshalBinary()
if err != nil {
return nil, err
}
txs[i] = txBin
}
return &pb.ExecutionPayload{
ParentHash: header.ParentHash,
FeeRecipient: header.FeeRecipient,
StateRoot: header.StateRoot,
ReceiptsRoot: header.ReceiptsRoot,
LogsBloom: header.LogsBloom,
PrevRandao: header.PrevRandao,
BlockNumber: header.BlockNumber,
GasLimit: header.GasLimit,
GasUsed: header.GasUsed,
Timestamp: header.Timestamp,
ExtraData: header.ExtraData,
BaseFeePerGas: header.BaseFeePerGas,
BlockHash: block.Hash[:],
Transactions: txs,
}, nil
}
// Handles errors received from the RPC server according to the specification.
func handleRPCError(err error) error {
if err == nil {

View File

@@ -0,0 +1,114 @@
//go:build go1.18
// +build go1.18
package powchain_test
import (
"encoding/json"
"fmt"
"math"
"math/big"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/beacon"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
"github.com/prysmaticlabs/prysm/testing/assert"
)
func FuzzForkChoiceResponse(f *testing.F) {
valHash := common.Hash([32]byte{0xFF, 0x01})
payloadID := beacon.PayloadID([8]byte{0x01, 0xFF, 0xAA, 0x00, 0xEE, 0xFE, 0x00, 0x00})
valErr := "asjajshjahsaj"
seed := &beacon.ForkChoiceResponse{
PayloadStatus: beacon.PayloadStatusV1{
Status: "INVALID_TERMINAL_BLOCK",
LatestValidHash: &valHash,
ValidationError: &valErr,
},
PayloadID: &payloadID,
}
output, err := json.Marshal(seed)
assert.NoError(f, err)
f.Add(output)
f.Fuzz(func(t *testing.T, jsonBlob []byte) {
gethResp := &beacon.ForkChoiceResponse{}
prysmResp := &powchain.ForkchoiceUpdatedResponse{}
gethErr := json.Unmarshal(jsonBlob, gethResp)
prysmErr := json.Unmarshal(jsonBlob, prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, fmt.Sprintf("geth and prysm unmarshaller return inconsistent errors. %v and %v", gethErr, prysmErr))
// Nothing to marshal if we have an error.
if gethErr != nil {
return
}
gethBlob, gethErr := json.Marshal(gethResp)
prysmBlob, prysmErr := json.Marshal(prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, "geth and prysm unmarshaller return inconsistent errors")
newGethResp := &beacon.ForkChoiceResponse{}
newGethErr := json.Unmarshal(prysmBlob, newGethResp)
assert.NoError(t, newGethErr)
if newGethResp.PayloadStatus.Status == "UNKNOWN" {
return
}
newGethResp2 := &beacon.ForkChoiceResponse{}
newGethErr = json.Unmarshal(gethBlob, newGethResp2)
assert.NoError(t, newGethErr)
assert.DeepEqual(t, newGethResp.PayloadID, newGethResp2.PayloadID)
assert.DeepEqual(t, newGethResp.PayloadStatus.Status, newGethResp2.PayloadStatus.Status)
assert.DeepEqual(t, newGethResp.PayloadStatus.LatestValidHash, newGethResp2.PayloadStatus.LatestValidHash)
isNilOrEmpty := newGethResp.PayloadStatus.ValidationError == nil || (*newGethResp.PayloadStatus.ValidationError == "")
isNilOrEmpty2 := newGethResp2.PayloadStatus.ValidationError == nil || (*newGethResp2.PayloadStatus.ValidationError == "")
assert.DeepEqual(t, isNilOrEmpty, isNilOrEmpty2)
if !isNilOrEmpty {
assert.DeepEqual(t, *newGethResp.PayloadStatus.ValidationError, *newGethResp2.PayloadStatus.ValidationError)
}
})
}
func FuzzExecutionPayload(f *testing.F) {
logsBloom := [256]byte{'j', 'u', 'n', 'k'}
execData := &beacon.ExecutableDataV1{
ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
ReceiptsRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
LogsBloom: logsBloom[:],
Random: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Number: math.MaxUint64,
GasLimit: math.MaxUint64,
GasUsed: math.MaxUint64,
Timestamp: 100,
ExtraData: nil,
BaseFeePerGas: big.NewInt(math.MaxInt),
BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},
}
output, err := json.Marshal(execData)
assert.NoError(f, err)
f.Add(output)
f.Fuzz(func(t *testing.T, jsonBlob []byte) {
gethResp := &beacon.ExecutableDataV1{}
prysmResp := &pb.ExecutionPayload{}
gethErr := json.Unmarshal(jsonBlob, gethResp)
prysmErr := json.Unmarshal(jsonBlob, prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, fmt.Sprintf("geth and prysm unmarshaller return inconsistent errors. %v and %v", gethErr, prysmErr))
// Nothing to marshal if we have an error.
if gethErr != nil {
return
}
gethBlob, gethErr := json.Marshal(gethResp)
prysmBlob, prysmErr := json.Marshal(prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, "geth and prysm unmarshaller return inconsistent errors")
newGethResp := &beacon.ExecutableDataV1{}
newGethErr := json.Unmarshal(prysmBlob, newGethResp)
assert.NoError(t, newGethErr)
newGethResp2 := &beacon.ExecutableDataV1{}
newGethErr = json.Unmarshal(gethBlob, newGethResp2)
assert.NoError(t, newGethErr)
assert.DeepEqual(t, newGethResp.LogsBloom, newGethResp2.LogsBloom)
})
}
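Both fuzz targets above use Go 1.18's native fuzzing (hence the go1.18 build tags at the top of the file), so they can be run locally with an invocation along the lines of go test -fuzz FuzzExecutionPayload against this package; the assertions treat geth's beacon types as the reference for JSON round-tripping against the Prysm protobuf types.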

View File

@@ -12,20 +12,27 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
gethtypes "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/rpc"
"github.com/holiman/uint256"
"github.com/pkg/errors"
mocks "github.com/prysmaticlabs/prysm/beacon-chain/powchain/testing"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/consensus-types/forks/bellatrix"
"github.com/prysmaticlabs/prysm/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"google.golang.org/protobuf/proto"
)
var (
_ = ExecutionPayloadReconstructor(&Service{})
_ = EngineCaller(&Service{})
_ = ExecutionPayloadReconstructor(&Service{})
_ = EngineCaller(&mocks.EngineClient{})
)
@@ -250,7 +257,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.NewPayload(ctx, execPayload)
require.ErrorContains(t, "could not validate block hash", err)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
t.Run(NewPayloadMethod+" INVALID status", func(t *testing.T) {
@@ -385,6 +392,99 @@ func TestClient_HTTP(t *testing.T) {
})
}
func TestReconstructFullBellatrixBlock(t *testing.T) {
ctx := context.Background()
t.Run("nil block", func(t *testing.T) {
service := &Service{}
_, err := service.ReconstructFullBellatrixBlock(ctx, nil)
require.ErrorContains(t, "nil data", err)
})
t.Run("only blinded block", func(t *testing.T) {
want := "can only reconstruct block from blinded block format"
service := &Service{}
bellatrixBlock := util.NewBeaconBlockBellatrix()
wrapped, err := wrapper.WrappedSignedBeaconBlock(bellatrixBlock)
require.NoError(t, err)
_, err = service.ReconstructFullBellatrixBlock(ctx, wrapped)
require.ErrorContains(t, want, err)
})
t.Run("properly reconstructs block with correct payload", func(t *testing.T) {
fix := fixtures()
payload, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
jsonPayload := make(map[string]interface{})
tx := gethtypes.NewTransaction(
0,
common.HexToAddress("095e7baea6a6c7c4c2dfeb977efac326af552d87"),
big.NewInt(0), 0, big.NewInt(0),
nil,
)
txs := []*gethtypes.Transaction{tx}
encodedBinaryTxs := make([][]byte, 1)
var err error
encodedBinaryTxs[0], err = txs[0].MarshalBinary()
require.NoError(t, err)
payload.Transactions = encodedBinaryTxs
jsonPayload["transactions"] = txs
num := big.NewInt(1)
encodedNum := hexutil.EncodeBig(num)
jsonPayload["hash"] = hexutil.Encode(payload.BlockHash)
jsonPayload["parentHash"] = common.BytesToHash([]byte("parent"))
jsonPayload["sha3Uncles"] = common.BytesToHash([]byte("uncles"))
jsonPayload["miner"] = common.BytesToAddress([]byte("miner"))
jsonPayload["stateRoot"] = common.BytesToHash([]byte("state"))
jsonPayload["transactionsRoot"] = common.BytesToHash([]byte("txs"))
jsonPayload["receiptsRoot"] = common.BytesToHash([]byte("receipts"))
jsonPayload["logsBloom"] = gethtypes.BytesToBloom([]byte("bloom"))
jsonPayload["gasLimit"] = hexutil.EncodeUint64(1)
jsonPayload["gasUsed"] = hexutil.EncodeUint64(2)
jsonPayload["timestamp"] = hexutil.EncodeUint64(3)
jsonPayload["number"] = encodedNum
jsonPayload["extraData"] = common.BytesToHash([]byte("extra"))
jsonPayload["totalDifficulty"] = "0x123456"
jsonPayload["difficulty"] = encodedNum
jsonPayload["size"] = encodedNum
jsonPayload["baseFeePerGas"] = encodedNum
header, err := bellatrix.PayloadToHeader(payload)
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": jsonPayload,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{}
service.rpcClient = rpcClient
blindedBlock := util.NewBlindedBeaconBlockBellatrix()
blindedBlock.Block.Body.ExecutionPayloadHeader = header
wrapped, err := wrapper.WrappedSignedBeaconBlock(blindedBlock)
require.NoError(t, err)
reconstructed, err := service.ReconstructFullBellatrixBlock(ctx, wrapped)
require.NoError(t, err)
got, err := reconstructed.Block().Body().ExecutionPayload()
require.NoError(t, err)
require.DeepEqual(t, payload, got)
})
}
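For orientation, the happy-path case above is essentially the intended usage of ReconstructFullBellatrixBlock. The caller-side sketch below reuses only helpers already visible in the test (util, wrapper) and assumes a Service whose rpcClient points at a live execution client; it is illustrative only and not part of this change.

// Caller-side sketch; mirrors the happy-path test above. Assumes `service`
// already has a working rpcClient behind it.
func exampleReconstruct(ctx context.Context, service *Service) error {
    blinded := util.NewBlindedBeaconBlockBellatrix()
    wrapped, err := wrapper.WrappedSignedBeaconBlock(blinded)
    if err != nil {
        return err
    }
    full, err := service.ReconstructFullBellatrixBlock(ctx, wrapped)
    if err != nil {
        // e.g. the block was not blinded, or the execution client had no matching payload
        return err
    }
    payload, err := full.Block().Body().ExecutionPayload()
    if err != nil {
        return err
    }
    _ = payload // reconstructed, non-blinded execution payload
    return nil
}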
func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
tests := []struct {
name string
@@ -416,7 +516,7 @@ func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
name: "current execution block invalid TD",
paramsTd: "1",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
Hash: common.BytesToHash([]byte("a")),
TotalDifficulty: "1115792089237316195423570985008687907853269984665640564039457584007913129638912",
},
errString: "could not convert total difficulty to uint256",
@@ -425,8 +525,10 @@ func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
name: "current execution block has zero hash parent",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: params.BeaconConfig().ZeroHash[:],
Hash: common.BytesToHash([]byte("a")),
Header: gethtypes.Header{
ParentHash: common.BytesToHash(params.BeaconConfig().ZeroHash[:]),
},
TotalDifficulty: "0x3",
},
},
@@ -434,8 +536,10 @@ func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
name: "could not get parent block",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
Hash: common.BytesToHash([]byte("a")),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("b")),
},
TotalDifficulty: "0x3",
},
errString: "could not get parent execution block",
@@ -444,13 +548,17 @@ func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
name: "parent execution block invalid TD",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
Hash: common.BytesToHash([]byte("a")),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("b")),
},
TotalDifficulty: "0x3",
},
parentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'b'},
ParentHash: []byte{'c'},
Hash: common.BytesToHash([]byte("b")),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("c")),
},
TotalDifficulty: "1",
},
errString: "could not convert total difficulty to uint256",
@@ -459,29 +567,37 @@ func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
name: "happy case",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
Hash: common.BytesToHash([]byte("a")),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("b")),
},
TotalDifficulty: "0x3",
},
parentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'b'},
ParentHash: []byte{'c'},
Hash: common.BytesToHash([]byte("b")),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("c")),
},
TotalDifficulty: "0x1",
},
wantExists: true,
wantTerminalBlockHash: []byte{'a'},
wantTerminalBlockHash: common.BytesToHash([]byte("a")).Bytes(),
},
{
name: "ttd not reached",
paramsTd: "3",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
Hash: common.BytesToHash([]byte("a")),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("b")),
},
TotalDifficulty: "0x2",
},
parentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'b'},
ParentHash: []byte{'c'},
Hash: common.BytesToHash([]byte("b")),
Header: gethtypes.Header{
ParentHash: common.BytesToHash([]byte("c")),
},
TotalDifficulty: "0x1",
},
},
@@ -494,7 +610,7 @@ func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
var m map[[32]byte]*pb.ExecutionBlock
if tt.parentPowBlock != nil {
m = map[[32]byte]*pb.ExecutionBlock{
bytesutil.ToBytes32(tt.parentPowBlock.Hash): tt.parentPowBlock,
tt.parentPowBlock.Hash: tt.parentPowBlock,
}
}
client := mocks.EngineClient{
@@ -534,6 +650,87 @@ func Test_tDStringToUint256(t *testing.T) {
require.ErrorContains(t, "hex number > 256 bits", err)
}
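As a reading aid for Test_tDStringToUint256 and the terminal-total-difficulty table above, the conversion they exercise can be sketched as below. This is an assumed reconstruction from the asserted error strings ("hex number > 256 bits", "could not convert total difficulty to uint256"); the exact split between helper and call site in the repository may differ.

// Assumed sketch of total-difficulty parsing, reconstructed from the error
// strings asserted in the tests; not necessarily the exact implementation.
func parseTotalDifficultySketch(td string) (*uint256.Int, error) {
    b, err := hexutil.DecodeBig(td) // yields "hex number > 256 bits" on oversized values
    if err != nil {
        return nil, errors.Wrap(err, "could not convert total difficulty to uint256")
    }
    v, overflows := uint256.FromBig(b)
    if overflows { // defensive: DecodeBig already caps inputs at 256 bits
        return nil, errors.New("could not convert total difficulty to uint256")
    }
    return v, nil
}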
func TestReconstructFullBellatrixBlock(t *testing.T) {
ctx := context.Background()
t.Run("nil block", func(t *testing.T) {
service := &Service{}
_, err := service.ReconstructFullBellatrixBlock(ctx, nil)
require.ErrorContains(t, "nil data", err)
})
t.Run("only blinded block", func(t *testing.T) {
want := "can only reconstruct block from blinded block format"
service := &Service{}
bellatrixBlock := util.NewBeaconBlockBellatrix()
wrapped, err := wrapper.WrappedSignedBeaconBlock(bellatrixBlock)
require.NoError(t, err)
_, err = service.ReconstructFullBellatrixBlock(ctx, wrapped)
require.ErrorContains(t, want, err)
})
t.Run("properly reconstructs block with correct payload", func(t *testing.T) {
fix := fixtures()
payload, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
jsonPayload := make(map[string]interface{})
tx := gethtypes.NewTransaction(
0,
common.HexToAddress("095e7baea6a6c7c4c2dfeb977efac326af552d87"),
big.NewInt(0), 0, big.NewInt(0),
nil,
)
txs := []*gethtypes.Transaction{tx}
encodedBinaryTxs := make([][]byte, 1)
var err error
encodedBinaryTxs[0], err = txs[0].MarshalBinary()
require.NoError(t, err)
payload.Transactions = encodedBinaryTxs
jsonPayload["transactions"] = txs
num := big.NewInt(1)
encodedNum := hexutil.EncodeBig(num)
jsonPayload["hash"] = hexutil.Encode(payload.BlockHash)
jsonPayload["number"] = encodedNum
jsonPayload["difficulty"] = encodedNum
jsonPayload["size"] = encodedNum
jsonPayload["baseFeePerGas"] = encodedNum
header, err := bellatrix.PayloadToHeader(payload)
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": jsonPayload,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{}
service.rpcClient = rpcClient
blindedBlock := util.NewBlindedBeaconBlockBellatrix()
blindedBlock.Block.Body.ExecutionPayloadHeader = header
wrapped, err := wrapper.WrappedSignedBeaconBlock(blindedBlock)
require.NoError(t, err)
reconstructed, err := service.ReconstructFullBellatrixBlock(ctx, wrapped)
require.NoError(t, err)
got, err := reconstructed.Block().Body().ExecutionPayload()
require.NoError(t, err)
require.DeepEqual(t, payload, got)
})
}
func TestExchangeTransitionConfiguration(t *testing.T) {
fix := fixtures()
ctx := context.Background()
@@ -749,8 +946,6 @@ func fixtures() map[string]interface{} {
BlockHash: foo[:],
Transactions: [][]byte{foo[:]},
}
number := bytesutil.PadTo([]byte("100"), fieldparams.RootLength)
hash := bytesutil.PadTo([]byte("hash"), fieldparams.RootLength)
parent := bytesutil.PadTo([]byte("parentHash"), fieldparams.RootLength)
sha3Uncles := bytesutil.PadTo([]byte("sha3Uncles"), fieldparams.RootLength)
miner := bytesutil.PadTo([]byte("miner"), fieldparams.FeeRecipientLength)
@@ -759,25 +954,24 @@ func fixtures() map[string]interface{} {
receiptsRoot := bytesutil.PadTo([]byte("receiptsRoot"), fieldparams.RootLength)
logsBloom := bytesutil.PadTo([]byte("logs"), fieldparams.LogsBloomLength)
executionBlock := &pb.ExecutionBlock{
Number: number,
Hash: hash,
ParentHash: parent,
Sha3Uncles: sha3Uncles,
Miner: miner,
StateRoot: stateRoot,
TransactionsRoot: transactionsRoot,
ReceiptsRoot: receiptsRoot,
LogsBloom: logsBloom,
Difficulty: bytesutil.PadTo([]byte("1"), fieldparams.RootLength),
TotalDifficulty: "2",
GasLimit: 3,
GasUsed: 4,
Timestamp: 5,
Size: bytesutil.PadTo([]byte("6"), fieldparams.RootLength),
ExtraData: bytesutil.PadTo([]byte("extraData"), fieldparams.RootLength),
BaseFeePerGas: bytesutil.PadTo([]byte("baseFeePerGas"), fieldparams.RootLength),
Transactions: [][]byte{foo[:]},
Uncles: [][]byte{foo[:]},
Header: gethtypes.Header{
ParentHash: common.BytesToHash(parent),
UncleHash: common.BytesToHash(sha3Uncles),
Coinbase: common.BytesToAddress(miner),
Root: common.BytesToHash(stateRoot),
TxHash: common.BytesToHash(transactionsRoot),
ReceiptHash: common.BytesToHash(receiptsRoot),
Bloom: gethtypes.BytesToBloom(logsBloom),
Difficulty: big.NewInt(1),
Number: big.NewInt(2),
GasLimit: 3,
GasUsed: 4,
Time: 5,
Extra: []byte("extra"),
MixDigest: common.BytesToHash([]byte("mix")),
Nonce: gethtypes.EncodeNonce(6),
BaseFee: big.NewInt(7),
},
}
status := &pb.PayloadStatus{
Status: pb.PayloadStatus_VALID,
@@ -806,7 +1000,7 @@ func fixtures() map[string]interface{} {
forkChoiceInvalidResp := &ForkchoiceUpdatedResponse{
Status: &pb.PayloadStatus{
Status: pb.PayloadStatus_INVALID,
LatestValidHash: []byte("latestValidHash"),
LatestValidHash: bytesutil.PadTo([]byte("latestValidHash"), 32),
},
PayloadId: &id,
}
@@ -859,6 +1053,66 @@ func fixtures() map[string]interface{} {
}
}
func Test_fullPayloadFromExecutionBlock(t *testing.T) {
type args struct {
header *pb.ExecutionPayloadHeader
block *pb.ExecutionBlock
}
wantedHash := common.BytesToHash([]byte("foo"))
tests := []struct {
name string
args args
want *pb.ExecutionPayload
err string
}{
{
name: "nil header fails",
args: args{header: nil, block: &pb.ExecutionBlock{}},
err: "cannot be nil",
},
{
name: "nil block fails",
args: args{header: &pb.ExecutionPayloadHeader{}, block: nil},
err: "cannot be nil",
},
{
name: "block hash field in header and block hash mismatch",
args: args{
header: &pb.ExecutionPayloadHeader{
BlockHash: []byte("foo"),
},
block: &pb.ExecutionBlock{
Hash: common.BytesToHash([]byte("bar")),
},
},
err: "does not match execution block hash",
},
{
name: "ok",
args: args{
header: &pb.ExecutionPayloadHeader{
BlockHash: wantedHash[:],
},
block: &pb.ExecutionBlock{
Hash: wantedHash,
},
},
want: &pb.ExecutionPayload{
BlockHash: wantedHash[:],
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := fullPayloadFromExecutionBlock(tt.args.header, tt.args.block)
if (err != nil) && !strings.Contains(err.Error(), tt.err) {
t.Fatalf("Wanted err %s got %v", tt.err, err)
}
require.DeepEqual(t, tt.want, got)
})
}
}
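The behaviour this table pins down can be condensed into a short sketch. Only the nil and hash-mismatch checks are taken from the cases above; the note about copying the remaining fields is an assumption about the real fullPayloadFromExecutionBlock.

// Condensed sketch of the checks exercised by the table above; the real function
// presumably also copies the remaining payload fields from header and block (assumption).
func fullPayloadFromExecutionBlockSketch(header *pb.ExecutionPayloadHeader, block *pb.ExecutionBlock) (*pb.ExecutionPayload, error) {
    if header == nil || block == nil {
        return nil, errors.New("header and block cannot be nil")
    }
    if common.BytesToHash(header.BlockHash) != block.Hash {
        return nil, errors.Errorf("header block hash %#x does not match execution block hash %#x", header.BlockHash, block.Hash)
    }
    return &pb.ExecutionPayload{
        BlockHash: header.BlockHash,
        // ...plus parent hash, fee recipient, state root, transactions, and so on
    }, nil
}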
type testEngineService struct{}
func (*testEngineService) NoArgsRets() {}


@@ -30,6 +30,8 @@ var (
ErrAcceptedSyncingPayloadStatus = errors.New("payload status is SYNCING or ACCEPTED")
// ErrInvalidPayloadStatus when the status of the payload is invalid.
ErrInvalidPayloadStatus = errors.New("payload status is INVALID")
// ErrInvalidBlockHashPayloadStatus when the status of the payload fails to validate block hash.
ErrInvalidBlockHashPayloadStatus = errors.New("payload status is INVALID_BLOCK_HASH")
// ErrNilResponse when the response is nil.
ErrNilResponse = errors.New("nil response")
)
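The new sentinel pairs with the require.ErrorIs assertion in the HTTP test above: callers can branch on the status with errors.Is instead of matching message text. Below is a hedged sketch of such a call site; the function and its placement are assumptions.

// Assumed call site: `service` is this package's engine Service.
func handleNewPayloadSketch(ctx context.Context, service *Service, payload *pb.ExecutionPayload) {
    _, err := service.NewPayload(ctx, payload)
    switch {
    case errors.Is(err, ErrInvalidBlockHashPayloadStatus):
        // the execution client could not validate the payload's block hash
    case errors.Is(err, ErrAcceptedSyncingPayloadStatus):
        // payload accepted optimistically while the execution client syncs
    case err != nil:
        // other failure (invalid payload status, transport error, ...)
    }
}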


@@ -31,4 +31,8 @@ var (
Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
},
)
reconstructedExecutionPayloadCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "reconstructed_execution_payload_count",
Help: "Count the number of execution payloads that are reconstructed using JSON-RPC from payload headers",
})
)
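The counter is presumably bumped once per payload that is successfully rebuilt from its header over JSON-RPC; the exact call site is not shown in this diff, so the placement below is an assumption.

// Assumed placement only: increment once per successful reconstruction.
func recordReconstructionSketch(header *pb.ExecutionPayloadHeader, block *pb.ExecutionBlock) (*pb.ExecutionPayload, error) {
    payload, err := fullPayloadFromExecutionBlock(header, block)
    if err != nil {
        return nil, err
    }
    reconstructedExecutionPayloadCount.Inc() // one payload rebuilt from its header via JSON-RPC
    return payload, nil
}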


@@ -0,0 +1,2 @@
go test fuzz v1
[]byte("{\"parentHash\":\"0xff01ff01ff01ff01ff01ff01ff01ff0100000000000000000000000000000000\",\"feeRecipient\":\"0xff01ff01ff01ff01ffff01ff01ff01ff01ff0000\",\"stateRoot\":\"0xff01ff01ff01ff01ff01ff01ff01ff0100000000000000000000000000000000\",\"receiptsRoot\":\"0xff01ff01ff01ff01ff01ff01ff01ff01000000fffffff0000000000000000000\",\"logsBloom\":\"0x6a756e6b0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\",\"prevRandao\":\"0xff01ff01ff01ff01ff01ff01ff01ff0100000000000000000000000000000000\",\"blockNumber\":\"0xffffffffffffffff\",\"gasLimit\":\"0xffffffffffffffff\",\"gasUsed\":\"0xffffffffffffffff\",\"timestamp\":\"0x64\",\"extraData\":\"0x\",\"baseFeePerGas\":\"0x7fffff0000000fff\",\"blockHash\":\"0xff01ff01ff01ff01ff01ff01ff0100000000000000000000000000000000\",\"transactions\":[\"0xff01ff01ff01ff01ff01ff01ff01ff01\",\"0xff01ff01ff01ff01ff01ff01ff01ff01\",\"0xff01ff01ff01ff01ff01ff01ff01ff01\",\"0xff01ff01ff01ff01ff01ff01ff01ff01\"]}")


@@ -0,0 +1,2 @@
go test fuzz v1
[]byte("{\"BAseFeePerGAs\":\"0X0\"}")
