Compare commits

...

251 Commits

Author SHA1 Message Date
Raul Jordan
d6338f6042 Update Web UI Version to v1.0.2 (#10009)
* latest web ui

* fmt

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-12-11 11:36:00 -06:00
Antonio Sanso
c84c6ab547 Add Missing spec tests in ssz_static.go (#10003)
* Update ssz_static.go

* Update ssz_static.go

* Update testing/spectest/shared/phase0/ssz_static/ssz_static.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Add Missing spec tests in ssz_static.go

* Update testing/spectest/shared/altair/ssz_static/ssz_static.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-12-10 15:18:57 -06:00
Radosław Kapka
52d8a1646f Standardize config overriding in tests across repo (#9998)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-10 04:18:47 +00:00
kasey
3c61cc7d8a allow checkpoint or genesis origin; refactoring (#9976)
* allow checkpoint or genesis origin; refactoring

some quick readability improvements and simplifying the logic enforcing
the startup ordering of the attestation processing routine

* address PR feedback

* gofmt

* Update beacon-chain/blockchain/receive_attestation.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Apply suggestions from code review

use log.WithError for aggregation friendliness

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

Co-authored-by: kasey <kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-09 22:23:00 +00:00
Raul Jordan
37ca409cc1 Add More Fields to Remote Signer Sign Request (#10004)
* add in new fields

* sign request added

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-09 21:53:54 +00:00
Raul Jordan
b381ad49b5 Add Justifications for Gosec Ignored (#10005)
* pin gosec

* edit

* go back to master

* justifications

* Update crypto/bls/blst/signature.go

* proper format

* gosec
2021-12-09 19:40:48 +00:00
james-prysm
00c3a7dcaf Web3signer: Add an HTTP Client Wrapper to Interact with Remote Server (#9991)
* initial commit for web3signer code work in progress

* adding more functions for web3signer

* more improvements to unit tests and web3signer functions

* fixing unit test

* removing path construction

* fixing failing unit test

* adding more happy path unit tests

* fixing unit tests

* temp removing keymanagerfiles being wip

* removing some comments

* Update validator/keymanager/remote-web3signer/client.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update validator/keymanager/remote-web3signer/client.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* making errors lowercase

* addressing review comments

* missed resolving a conflict

* addressing deepsource issues

* bazel test and gazelle

* no lint

* deadcode

* addressing comments

* fixing comments

* small fix for readability

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-12-09 09:33:53 -06:00
james-prysm
d66edc9670 fix cors middleware (#9999)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-12-08 17:13:28 -05:00
Raul Jordan
4b249607da Return ERROR Status in ImportKeystores standard API endpoint if slashing protection fails (#9995)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-08 19:52:05 +00:00
Radosław Kapka
02483ba89c Add missing config cleanup in tests (#9996)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-08 19:24:16 +00:00
terence tsao
9629c354f1 Use merge configs for processing (#9982) 2021-12-08 11:02:53 -08:00
Raul Jordan
886332c8fa Add Safe Sub64 Method (#9993)
* add sub 64

* add sub

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-08 09:44:54 +00:00
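For context on what a "safe" Sub64 typically does, here is a minimal sketch of an underflow-checked 64-bit subtraction; the package name, signature, and error value are illustrative and may not match Prysm's actual math helpers.

```go
package mathutil

import "errors"

// ErrUnderflow is an illustrative sentinel error; the real helper may
// return a different error value.
var ErrUnderflow = errors.New("subtraction underflow")

// Sub64 returns a - b, or an error instead of silently wrapping around
// when b is larger than a.
func Sub64(a, b uint64) (uint64, error) {
	if b > a {
		return 0, ErrUnderflow
	}
	return a - b, nil
}
```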
Raul Jordan
424c8f6b46 API Middleware for Keymanager Standard API Endpoints (#9936)
* begin the middleware approach

* attempt middleware

* middleware works in tandem with web ui

* handle delete as well

* delete request

* DELETE working

* tool to perform imports

* functioning

* commentary

* build

* gaz

* smol test

* enable keymanager api use protonames

* edit

* one rule

* rem gw

* Fix custom compiler

(cherry picked from commit 3b1f65919e04ddf7e07c8f60cba1be883a736476)

* gen proto

* imports

* Update validator/node/node.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* remaining comments

* update item

* rpc

* add

* run gateway

* simplify

* rem flag

* deep source

Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-07 20:26:21 +00:00
terence tsao
cee3b626f3 make db merge compatible (#9987)
* make db merge compatible

* util merge.go per @rauljordan

* Go fmt

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-07 18:51:05 +00:00
Radosław Kapka
5569a68452 Code cleanup (#9992)
* Value assigned to a variable is never read before being overwritten

* The result of append is not used anywhere

* Suspicious assignment of range-loop vars detected

* Unused method receiver detected

* Revert "Auxiliary commit to revert individual files from 54edcb445484a2e5d79612e19af8e949b8861253"

This reverts commit bbd1e1beabf7b0c5cfc4f514dcc820062ad6c063.

* Method modifies receiver

* Fix test

* Duplicate imports detected

* Incorrectly formatted error string

* Types of function parameters can be combined

* One more "Unused method receiver detected"

* Unused parameter detected in function
2021-12-07 17:52:39 +00:00
Raul Jordan
1eff00fb33 Display Num Pruned Items in Slasher Only When Actually Prunes (#9989)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-07 14:38:19 +00:00
terence tsao
2bcda2e021 Add more merge related protobufs (#9986)
* Add merge protos

* Update proto/eth/v2/beacon_block.proto

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-12-07 08:08:14 +00:00
Raul Jordan
98fea2e94d Slasher Significant Optimizations (#9833)
* optimizations to slasher runtime

* remove unnecessary code

* test for epoch update

* commentary

* Gaz

* fmt

* amend test

* better logging

* better logs

* log

* div 0

* more logging

* no log

* use map instead

* passing

* comments

* passing

* for select loop wait for init

* sub

* srv

* debug

* fix panic

* gaz

* builds

* sim gen

* ineff

* commentary

* data

* log

* base

* try

* rem logs

* sim logs

* fix wait for sync event

* ev

* init

* init

* Update beacon-chain/slasher/service.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* comments

* elapsed

* Update testing/slasher/simulator/simulator.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* timeout

* inner cancel

* ctx err everywhere

* Add context aware to several potentially long running db operations

* Fix missing param after updating with develop

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-12-06 21:45:38 +00:00
Radosław Kapka
e53be1acbe Do not rely on peer order in TestDebugServer_ListPeers (#9988)
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-12-06 19:29:48 +00:00
Nishant Das
c9f5f8dad5 Use Cached Finalized State When Pruning Deposits (#9985)
* use cached finalized state

* Minor grammar edits

* fmt

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-12-06 12:38:23 +00:00
terence tsao
03bf463d0f Remove unused validator proto imports (#9983) 2021-12-04 21:05:54 +01:00
terence tsao
afb49aaeb9 Add and use merge penalty params (#9878)
* Add and use merge penalty params

* add attester slashing test

* Update config_test.go

* Use DeterministicGenesisStateMerge
2021-12-03 22:00:34 +00:00
Potuz
1fd65ad7fd typos (#9979) 2021-12-03 20:18:43 +00:00
Potuz
b1df3b55b0 more typos (#9980) 2021-12-03 16:31:30 -03:00
Preston Van Loon
3767574c77 stateutil: Reduce allocations by specifying slice capacity eagerly (#9977)
* Reduce allocations by specifying slice capacity eagerly

* gofmt
2021-12-03 09:06:05 +00:00
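The allocation reduction referenced above is the standard Go pattern of sizing a slice's capacity up front so append does not repeatedly grow and copy the backing array; the function below is an illustrative example, not code from the PR.

```go
package main

import "fmt"

// squareAll pre-allocates the result slice with its final capacity, so the
// loop performs a single allocation instead of several geometric growths.
func squareAll(in []uint64) []uint64 {
	out := make([]uint64, 0, len(in)) // capacity specified eagerly
	for _, v := range in {
		out = append(out, v*v)
	}
	return out
}

func main() {
	fmt.Println(squareAll([]uint64{1, 2, 3, 4})) // [1 4 9 16]
}
```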
Raul Jordan
d3c97da4e1 Ensure Slashing Protection Exports and Keymanager API Work According to Spec (#9938)
* password compliance

* delete keys tests

* changes to slashing protection exports

* export tests pass

* fix up failures

* gaz

* table driven tests for delete keystores

* comment

* rem deletion logic

* look ma, no db

* fix up tests

* ineff

* gaz

* broken test fix

* Update validator/keymanager/imported/delete.go

* rem
2021-12-02 09:58:49 -05:00
Raul Jordan
1d216a8737 Filter Errored Keys from Returned Slashing Protection History in Standard API (#9968)
* add err condition

* naming
2021-12-02 03:32:34 +00:00
Raul Jordan
790bf03123 Replace a Few IntFlags with Uint64Flags (#9959)
* use uints instead of ints

* fix method

* fix

* fix

* builds

* deepsource

* deep source
2021-12-01 23:34:53 +00:00
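Prysm's CLI flags are built on urfave/cli, so swapping an IntFlag for a Uint64Flag roughly looks like the sketch below; the flag name and default value are made up for illustration.

```go
package flags

import "github.com/urfave/cli/v2"

// Before: an int-typed flag that can accept negative values.
// var ExampleCountFlag = &cli.IntFlag{
// 	Name:  "example-count",
// 	Usage: "Number of items to process.",
// 	Value: 64,
// }

// After: the same flag as an unsigned 64-bit value, which better matches
// quantities that can never be negative.
var ExampleCountFlag = &cli.Uint64Flag{
	Name:  "example-count",
	Usage: "Number of items to process.",
	Value: 64,
}
```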
Preston Van Loon
ab60b1c7b2 Update go-ethereum to v1.10.13 (#9967)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-01 19:31:21 +00:00
Nishant Das
236a5c4167 Cleanup From Deepsource (#9961)
* ds cleanup

* fix

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-01 18:56:07 +00:00
Nishant Das
5e2229ce9d Update Libp2p to v0.15.1 (#9960)
* fix deps

* tidy it all

* fix build

* remove tls patch

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-01 18:09:34 +00:00
Potuz
6ffba5c769 Add v1alpha1_to_v2.go (#9966)
* Add v1alpha1_to_v2.go

* add tests

* gazelle

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-12-01 17:39:43 +00:00
Potuz
3e61763bd7 fix operation precedence (#9965) 2021-12-01 17:14:08 +00:00
terence tsao
23bdce2354 Fix grpc client connected... logging (#9956)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-12-01 14:45:39 +00:00
Nishant Das
d94bf32dcf Faster Doppelganger Check (#9964)
* faster check

* potuz's review

* potuz's review
2021-12-01 12:37:10 +00:00
Nishant Das
7cbef104b0 Remove Balances Timeout (#9957)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-12-01 04:03:26 +00:00
Potuz
cd6d0d9cf1 Monitor aggregated logs (#9943) 2021-12-01 03:35:55 +00:00
Potuz
afbe02697d Monitor service (#9933)
* Add a service for the monitor

* Do not block service start

* gaz

* move channel subscription outside go routine

* add service start test

* fix panic on node tests

* Radek's first pass

* Radek's take 2

* uncap error messages

* revert reversal

* Terence take 1

* gaz

* Missing locks found by Terence

* Track via bool not empty interface

* Add tests for every function

* fix allocation of slice

* Minor cleanups

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-30 19:27:03 -03:00
terence tsao
2c921ec628 Update spec tests to v1.1.6 (#9955)
* Update spec test to v1.1.6

* Update spec test to v1.1.6
2021-11-30 21:21:59 +00:00
terence tsao
0e72938914 Uncap error messages (#9952) 2021-11-30 07:41:07 -08:00
Potuz
71d55d1cff Check for syncstatus before performing a voluntary exit (#9951)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-11-30 10:40:59 +00:00
terence tsao
d8aa0f8827 Alter config field name to devnet if it's not populated in file (#9949) 2021-11-29 19:27:26 -08:00
Nishant Das
37bc407b56 Refactor States To Allow for Single Cached Hasher (#9922)
* initial changes

* gaz

* unexport and add in godoc

* nocache

* fix edge case

* fix bad implementation

* fix build file

* add it in

* terence's review

* gaz

* fix build

* Apply suggestions from code review

remove assigned ctx

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-29 16:30:17 +00:00
Potuz
5983d0a397 Allow requests for next sync committee (#9945)
* Allow requests for next sync committee

* fix deepsource and variable rename

* Minor cleanup

* Potuz's comments

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-28 17:34:24 +00:00
terence tsao
85faecf2ca Add test utility merge state (#9944)
* Add test utility merge state

* gaz

* gaz
2021-11-26 15:53:25 +00:00
terence tsao
f42227aa04 Rest of the merge state implementation (#9939)
* Add rest of the state implementations

* Update BUILD.bazel

* Update state_trie_test.go

* fix test

* fix test

* Update beacon-chain/state/v3/state_trie.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Update beacon-chain/state/v3/state_trie.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* add ctx

* go fmt

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2021-11-25 17:41:05 -03:00
terence tsao
c9d5b4ba0e Add merge beacon block wrappers (#9906) 2021-11-24 14:26:17 -08:00
Potuz
fed004686b Add verbosity to aggregation logs (#9937) 2021-11-24 11:35:45 -08:00
Raul Jordan
4ae7513835 Import Keystores Standard API Implementation (#9924)
* begin

* rem deleted code

* delete keystores all tests

* surface errors to user

* add in changes

* del

* tests

* slice

* begin import process

* add import keystores logic

* unit tests for import

* tests for all import keystores keymanager issues

* change proto

* pbs

* renaming works

* use proper request

* pb

* comment

* gaz

* fix up cli cmd

* test

* add gw

* precond

* tests

* radek comments
2021-11-24 10:40:49 -05:00
Nishant Das
1d53fd2fd3 revert change (#9931) 2021-11-24 07:09:15 -08:00
Potuz
a2c1185032 Monitor sync committee (#9923)
* Add sync committee contributions to monitor

* gaz

* Raul's review

* Added lock around TrackedValidators

* add comment to trackedIndex

* add missing locks because of trackedIndex

* Terence fixes 2

* moved TrackedValidator to service from config

* Terence comment fix

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-11-24 09:56:34 +08:00
terence tsao
448d62d6e3 Add merge beacon chain objects and generate ssz.go (#9929) 2021-11-23 23:34:31 +00:00
terence tsao
4858de7875 Use prysmaticlabs/fastssz (#9928)
* Use prysmaticlabs/fastssz

* Generated code
2021-11-23 21:28:24 +00:00
terence tsao
cd1e3f2b3e Rename coinbase to fee recipient (#9918)
* Rename coinbase to fee recipient

* Fix imports

* Update field name

* Fee recipient

* Fix goimports

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-23 17:49:06 +00:00
Nishant Das
6f20d17d15 Rename To Signature Batch (#9926)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-11-23 17:57:06 +01:00
terence tsao
94fd99f5cd Add getters and setters for beacon state v3 (part 2) (#9916) 2021-11-22 15:56:23 -08:00
Potuz
d4a420ddfd Monitor blocks (#9910)
* Add proposer logging to validator monitor

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-22 18:20:31 -03:00
terence tsao
838b19e985 Add getters and setters for beacon state v3 (part 1) (#9915) 2021-11-22 09:37:55 -08:00
Potuz
788338a004 Stop packing deposits early if we reach max allowed (#9806)
* Stop packing deposits early if we reach max allowed

* Add logs to proposals without deposits

* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer_deposits.go

* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer_deposits.go

* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer_deposits.go

* reinsert debug log

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2021-11-20 14:14:07 +00:00
kasey
39c33b82ad Switch to lazy state balance cache (#9822)
* quick lazy balance cache proof of concept

* WIP refactoring to use lazy cache

* updating tests to use functional opts

* updating the rest of the tests, all passing

* use mock stategen where possible

reduces the number of test cases that require db setup

* rename test opt method for clear link

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* test assumption that zerohash is in db

* remove unused MockDB (mocking stategen instead)

* fix cache bug, switch to sync.Mutex

* improve test coverage for the state cache

* uncomment failing genesis test for discussion

* gofmt

* remove unused Service struct member

* cleanup unused func input

* combining type declaration in signature

* don't export the state cache constructor

* work around blockchain deps w/ new file

service_test brings in a ton of dependencies that make bazel rules
for blockchain complex, so just sticking these mocks in their own
file simplifies things.

* gofmt

* remove intentionally failing test

this test established that the zero root can't be used to look up the
state, resulting in a change in another PR to update stategen to use the
GenesisState db method instead when the zero root is detected.

* fixed error introduced by develop refresh

* fix import ordering

* appease deepsource

* remove unused function

* godoc comments on new requires/assert

* defensive constructor per terence's PR comment

* more differentiated balance cache metric names

Co-authored-by: kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-19 15:59:26 +00:00
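As a rough illustration of the lazy-cache pattern this change moves to, the sketch below computes and stores balances only when a root is first requested, guarded by a sync.Mutex as mentioned in the commit notes; the type and method names are invented for the example and are not Prysm's actual implementation.

```go
package cache

import "sync"

// lazyBalanceCache holds the balances for the most recently requested root
// and recomputes them only on a cache miss.
type lazyBalanceCache struct {
	mu       sync.Mutex
	root     [32]byte
	balances []uint64
}

// Get returns cached balances for root, calling compute only when the
// requested root differs from the cached one.
func (c *lazyBalanceCache) Get(root [32]byte, compute func([32]byte) ([]uint64, error)) ([]uint64, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.balances != nil && c.root == root {
		return c.balances, nil // hit: reuse previously computed balances
	}
	b, err := compute(root) // miss: compute lazily, on demand
	if err != nil {
		return nil, err
	}
	c.root, c.balances = root, b
	return b, nil
}
```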
Potuz
905e0f4c1c Monitor metrics (#9921)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-11-19 15:34:28 +00:00
Nishant Das
0ade1f121d Add Balance Field Trie (#9793)
* save stuff

* fix in v1

* clean up more

* fix bugs

* add comments and clean up

* add flag + test

* add tests

* fmt

* radek's review

* gaz

* kasey's review

* gaz and new conditional

* improve naming
2021-11-19 20:01:15 +08:00
Raul Jordan
ee52f8dff3 Implement Validator Standard Key Manager API Delete Keystores (#9886)
* begin

* implement delete and filter export history

* rem deleted code

* delete keystores all tests

* gaz

* test

* double import fix

* test

* surface errors to user

* add in changes

* edit proto

* edit

* del

* tests

* gaz

* slice

* duplicate key found in request
2021-11-19 04:11:54 +00:00
Potuz
50159c2e48 Monitor attestations (#9901)
Log attestation performance on the validator monitor
2021-11-18 22:14:56 -03:00
Preston Van Loon
cae58bbbd8 Unskip v2 end to end check for prior release (#9920) 2021-11-18 23:17:19 +00:00
Raul Jordan
9b37418761 Warn Users In Case Slashing Protection Exports are Empty (#9919)
* export text

* Update cmd/validator/slashing-protection/export.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2021-11-18 20:49:19 +00:00
Raul Jordan
a78cdf86cc Warn Users if Slashing Protection History is Empty (#9909)
* warning in case not found

* better comment

* fatal
2021-11-17 15:39:43 +00:00
Nishant Das
1c4ea75a18 Prevent Reprocessing of a Block From Our Pending Queue (#9904)
* fix bugs

* test

* raul's review
2021-11-17 10:05:50 -05:00
terence tsao
6f4c80531c Add field roots for beacon state v3 (#9914)
* Add field roots for beacon state

* Update BUILD.bazel

* Adding an exception for state v3

* fix deadcode

Co-authored-by: nisdas <nishdas93@gmail.com>
2021-11-17 09:04:49 +00:00
terence tsao
e3246922eb Add merge state type definitions (#9908) 2021-11-16 08:36:13 -08:00
Nishant Das
5962363847 Add in Deposit Map To Cache (#9885)
* add in map + tests

* tes

* raul's review

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-16 02:07:42 +00:00
Potuz
5e8cf9cd28 Monitor exits (#9899)
* Validator monitor process slashings

* Preston's comment

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* add test for tracked index

* Prestons requested changes

* Process voluntary exits in validator monitor

* Address Reviewers' comments

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-15 19:45:18 +00:00
Raul Jordan
b5ca09bce6 Add Network Flags to Slashing Protection Export Command (#9907) 2021-11-15 19:21:54 +00:00
terence tsao
720ee3f2a4 Add merge state protobuf (#9888)
* Add beacon state protobuf

* Update proto/prysm/v1alpha1/beacon_state.proto

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update proto/prysm/v1alpha1/beacon_state.proto

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Regenerate pbs

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-15 18:53:43 +00:00
Potuz
3d139d35f6 Validator monitor process slashings (#9898)
* Validator monitor process slashings

* Preston's comment

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* add test for tracked index

* Prestons requested changes

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-11-15 17:39:55 +00:00
terence tsao
ea38969af2 Register start-up state when interop mode (#9900)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-15 16:32:54 +00:00
terence tsao
f753ce81cc Add merge block protobuf (#9887)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-15 15:42:18 +00:00
Nishant Das
9d678b0c47 Clean Up Methods In Prysm (#9903)
* clean up

* go simple
2021-11-15 10:13:52 -05:00
Nishant Das
4a4a7e97df handle canceled contexts (#9893)
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-13 16:03:27 +08:00
Nishant Das
652b1617ed Use Next Slot Cache In More Places (#9884)
* add changes

* terence's review

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-12 23:46:06 +00:00
Nishant Das
42edc4f8dd Improve RNG Documentation (#9892)
* add comments

* comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-12 20:30:09 +00:00
terence tsao
70d5bc448f Add cli override setting for merge (#9891) 2021-11-12 12:01:14 -08:00
Preston Van Loon
d1159308c8 Update security.txt (#9896)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-12 15:55:45 +00:00
Nishant Das
e01298bd08 Remove Superfluous Errors From Parameter Registration (#9894) 2021-11-12 15:28:21 +00:00
terence tsao
2bcb62db28 Remove unused import (#9890) 2021-11-11 22:54:48 +00:00
terence tsao
672fb72a7f Update spec tests to v1.1.5 (#9875)
* Update spec tests to v1.1.5

* Ignore `TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH`

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-10 02:41:16 +00:00
terence tsao
7fbd5b06da time pkg: can upgrade to merge helper (#9879)
* Helper: can upgrade to merge

* Typo
2021-11-09 18:36:50 +00:00
Raul Jordan
0fb91437fc Group Slashing Protection History Packages Idiomatically (#9873)
* rename

* gaz

* gaz

* Gaz

* rename

* edit

* gaz

* gaz

* build

* fix

* build

* fix up

* fix

* gaz

* cli import export

* gaz

* flag

* rev

* comm

* package renames

* radek
2021-11-09 16:49:28 +00:00
Raul Jordan
6e731bdedd Implement List Keystores for Standard API (#9863)
* start api

* keystores list

* gaz

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-11-08 19:08:17 +00:00
Dan Loewenherz
40eb718ba2 Fix typos: repones -> response, attestion -> attestation (#9868)
* Fix typo: repones -> response

* Fix typo: attestion -> attestation
2021-11-07 11:02:01 -06:00
terence tsao
e1840f7523 Share finalized state at start up (#9843)
* Reuse finalized beacon state at startup

* Better logging for replay

* Update tests

* Fix lint

* Add `WithFinalizedStateAtStartup`

* Update service.go

* Remove unused fields

* Update service_test.go

* Update service_test.go

* Update service_test.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-06 13:45:16 +00:00
terence tsao
b07e1ba7a4 GenesisState when zero hash tests (#9866) 2021-11-06 06:57:54 +01:00
Raul Jordan
233171d17c [Service Config Revamp] - Sync Service With Functional Options (#9859)
* sync config refactor

* rem

* rem

* testing

* gaz

* next

* fuzz

* build

* fuzz

* rev

* log

* cfg
2021-11-05 19:08:58 +00:00
kasey
2b0e132201 fix #9851 using GenesisState when zero hash in stategen StateByRoot/StateByRootInitialSync (#9852)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-05 18:14:37 +00:00
Raul Jordan
ad9e5331f5 Update Prysm Web to Use v1.0.1 (#9858)
* update web

* site data update

* fmt
2021-11-04 16:12:32 -04:00
terence tsao
341a2f1ea3 Use math.MaxUint64 (#9857)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-04 18:48:09 +00:00
Raul Jordan
7974fe01cd [Service Revamp] - Powchain Service With Functional Options (#9856)
* begin powchain service refactor

* begin refactor

* powchain passes

* options pkg

* gaz

* rev

* rev

* comments

* move to right place

* bazel powchain

* fix test

* log

* contract addr

* happy path and comments

* gaz

* new service
2021-11-04 14:19:44 -04:00
Raul Jordan
ae56f643eb Rename Web UI Performance Endpoint to Summary (#9855)
* rename endpoint

* rename endpoint
2021-11-04 15:13:16 +00:00
terence tsao
2ea09b621e Simplify should update justified (#9837)
* Simplify should update justified

* Update tests

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-03 22:29:03 +00:00
terence tsao
40fedee137 Use prev epoch source naming correctly (#9840)
* Use prev epoch source correctly

* Update epoch_precompute_test.go

* Update type.go

* Update tests

* Gazelle

* Move version to read only

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-11-03 21:50:41 +00:00
kasey
7cdddcb015 Weak subjectivity verification refactor (#9832)
* weak subjectivity verification refactor

This separates weak subjectivity verification into a
distinct type which does not have a dependency on
blockchain.Service

* remove unused variable

* saving enqueued init blocks before ws verify

* remove TODO, handled in previous commit

* accept suggested comment change

start comment w/ name of function NewWeakSubjectivityVerifier

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* update log w/ Raul's suggested language

* explicit zero value for clarity

* add comments clarifying how we adhere to spec

* more clear TODO per Raul's feedback

* gofmt

Co-authored-by: kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-11-03 15:07:46 +00:00
Raul Jordan
4440ac199f Empty Genesis Validators Root Check in Slashing Protection Export (#9849)
* test for empty genesis validators root

* precod

* fix test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-03 14:16:57 +00:00
Radosław Kapka
d78428c49e Return proper responses from KeyManagement service (#9846)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-02 15:46:11 +00:00
Raul Jordan
2e45fada34 Validate Password on RPC CreateWallet Request (#9848)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-02 15:20:43 +00:00
Raul Jordan
4c18d291f4 Rename Interop-Cold-Start Package to Deterministic-Genesis (#9841)
* rename interop-cold-start

* rev

* rev

* BUILD

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-02 14:55:36 +00:00
Preston Van Loon
dfe33b0770 stateutil: Remove duplicated MerkleizeTrieLeaves method (#9847) 2021-11-02 14:31:16 +00:00
Raul Jordan
63308239d9 Define Validator Key Management Standard API Schema (#9817)
* begin service for key management

* begin defining schema

* generate bindings

* rev

* add in custom compiler

* use custom plugin with option

* goimports

* fix up proto to take in multiple passwords

* keymanagent proto edit

* rev

* rev

* dev

* builds

* comment

* indent

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-01 17:26:15 +00:00
terence tsao
712cc18ee0 Use BeaconBlockIsNil helper more (#9834)
* Replace manual checks with helper

* Rename to `BeaconBlockIsNil`

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-01 14:13:05 +00:00
Nishant Das
3d318cffa2 Fix Individual Votes RPC Endpoint (#9831)
* fix

* fix tests

* add terence's review

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-01 18:36:08 +08:00
terence tsao
026207fc16 Remove de-duplication condition for seen aggregates (#9830)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-29 21:53:49 +00:00
Raul Jordan
6b7b30ce47 Use GET Request for Slashing Protection Export (#9838) 2021-10-29 18:23:45 +00:00
Yash Bhutwala
105bb70b5e remove extra condition (#9836) 2021-10-29 09:29:33 -07:00
Potuz
9564ab1f7f Fix validator performance logs (#9828)
* Fix validator performance logs

* Add test

* Clean both phase 0 and altair calculations

* Add back total inclusion distance

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-10-28 16:56:27 +00:00
Raul Jordan
0b09e3e955 Refresh Web Backend JWT Secret Upon File Changes With FSNotify (#9810)
* refresh auth file

* refresh auth token from file changes

* gaz

* test for refresh

* rem token

* secret

* refresh

* remove wallet dir on test end
2021-10-28 10:24:39 -04:00
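The refresh mechanism described above is the usual fsnotify watch loop: observe the token file and reload the secret whenever a write event arrives. The sketch below shows that pattern with a placeholder reload function; it is illustrative, not the validator's actual code.

```go
package auth

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// watchAuthFile reloads the JWT secret whenever the file at path is written.
// reload is a placeholder for whatever re-reads and re-installs the secret.
func watchAuthFile(path string, reload func(string) error) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer watcher.Close()
	if err := watcher.Add(path); err != nil {
		return err
	}
	for {
		select {
		case ev := <-watcher.Events:
			if ev.Op&fsnotify.Write != 0 { // file changed on disk
				if err := reload(path); err != nil {
					log.Printf("could not refresh auth token: %v", err)
				}
			}
		case err := <-watcher.Errors:
			log.Printf("file watcher error: %v", err)
		}
	}
}
```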
Radosław Kapka
bb319e02e8 Event support for contribution_and_proof and voluntary_exit (#9779)
* Event support for `contribution_and_proof`

* event test

* fix panic in tests

* fix

* Revert "Auxiliary commit to revert individual files from dc8d01a15f0056c1fb48733219feab6461f71695"

This reverts commit f5f198564079781f80e1a045cefad7c27f89af25.

* remove receiver

* revive test

* move sending events to sync package

* remove receiver

* remove notification test

* build file

* notifier tests

* revert removal of exit event in API

* simplify exit test

* send notification in contribution API method

* test fix

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-10-28 08:56:22 +00:00
terence tsao
53c86429e4 Add altair test for RPC end point (#9829) 2021-10-27 21:00:09 -04:00
Raul Jordan
61172d5007 Add Missing Objects to Keymanager Protobuf (#9827)
* add missing objects

* update keymanager

* fix up missing sign calls

* gaz

* msg block root

* naming
2021-10-27 18:30:53 +00:00
Yash Bhutwala
1507719613 fix altair individual votes endpoint (#9825) 2021-10-27 15:15:51 +00:00
Raul Jordan
4c677e7b40 Remove JWT Expiration (#9813)
* remove expiry from claims

* fix tests

* gaz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-26 10:24:09 +00:00
Raul Jordan
28f50862cb Goimport All Items in Proto Folder In Bash Script (#9815)
* update goimports to format all, including gw.pb.go files

* update hack

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-26 10:07:49 +00:00
Nishant Das
2dfb0696f7 Fix Validator Exit in our V1 Method (#9819) 2021-10-26 10:14:11 +02:00
terence tsao
ae2c883aaf Generate secret key from big number test only (#9816)
* Generate secret key from big number test only

* Gazelle

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-10-25 19:14:19 +00:00
Raul Jordan
ad9ef9d803 Simplify Prysm Backend Password Requirements (#9814)
* simpler reqs

* gaz

* tidy mod
2021-10-25 13:17:47 -05:00
Kirill Fedoseev
b837f90b35 Fix GetDuties (#9811)
* Fix GetDuties

* Add regression test
2021-10-25 09:09:47 -05:00
Håvard Anda Estensen
7f3ec4221f Add errcheck and gosimple linters (#9729)
* Add errcheck linter

* Check unchecked error

* Add gosimple linter

* Remove type assertion to same type

* Omit nil check

len() for nil slices is defined as zero

* Revert "Remove type assertion to same type"

This reverts commit af69ca1ac8.

* Revert "Revert "Remove type assertion to same type""

This reverts commit 5fe8931504.

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-10-22 17:40:03 -05:00
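The "Omit nil check" item above relies on a Go guarantee: len of a nil slice is zero and ranging over a nil slice does nothing, so a separate nil check before measuring or iterating is redundant. A tiny illustration:

```go
package main

import "fmt"

func main() {
	var s []int                   // nil slice, never assigned
	fmt.Println(s == nil, len(s)) // true 0
	for range s {                 // safe: the loop body never runs
		fmt.Println("never printed")
	}
}
```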
Radosław Kapka
5b3375638a Improve description of datadir flag (#9809)
* Improve description of `datadir` flag

* improve doc
2021-10-22 12:53:13 +00:00
terence tsao
3c721418db Check participation flag offset length (#9784)
* Check participation flag offset length

* Fix test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-10-21 15:01:09 +00:00
terence tsao
d7cad27cc3 Pool returns empty contribution slice instead of nil (#9808) 2021-10-20 14:45:57 -07:00
terence tsao
290b4273dd InitializePrecomputeValidators check overflows (#9807) 2021-10-20 13:06:43 -05:00
Raul Jordan
a797a7aaac Refactor Web Authentication to Use Auth Token (#9740)
* builds

* initialize auth token from scratch

* change auth

* add improvements

* moar auth fixes

* web auth changes running

* url encode

* tests

* gaz

* navigate auth

* separate line

* auth token

* 304

* persistent auth tokens and jwts

* fixed up test

* auth token path test and integration test

* auth token test

* auth token command

* gaz

* gaz

* Radek feedback

* fix up

* test

* Update validator/rpc/auth_token.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-10-20 16:37:05 +00:00
Raul Jordan
f7c34b0fd6 Validate Keystores Validator Client RPC Endpoint (#9799)
* validate endpoint

* validate keystores proto

* wallet validate

* validate keystores endpoint

* better err message

* added in gaz
2021-10-20 09:23:59 -05:00
Nishant Das
c6874e33f7 fix transitions (#9804) 2021-10-20 19:36:08 +08:00
Radosław Kapka
13ddc171eb Ignore validators without committee assignment when fetching attester duties (#9780)
* Ignore validators without committee assignment when fetching attester duties

* simplify

* fix tests

* slice capacity

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-10-19 17:41:12 +00:00
Radosław Kapka
acde184aa7 Fill out Version for SSZ block (#9801) 2021-10-19 16:45:38 +00:00
Radosław Kapka
5fd6474e56 Allow submitting sync committee subscriptions for next period (#9798)
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-10-19 14:04:04 +00:00
Preston Van Loon
65db331eaf Remove bazel-go-ethereum fork, use upstream go-ethereum (#9725)
* Check in go-ethereum crypto/sepc256k1 package with proper build rules

* gaz

* Add karalabe/usb

* viz improvement

* Remove bazel-go-ethereum, use vendored libraries only

* move vendor stuff to third_party so that go mod wont be mad anymore

* fix geth e2e flags

* fix geth e2e flags

* remove old rules_foreign_cc toolchain

* Update cross compile docker image to support os x

* works for geth build

* remove copy of sepc256k1

* revert changes in tools/cross-toolchain

* gaz

* Update go-ethereum to 1.10.10

* Revert "revert changes in tools/cross-toolchain"

This reverts commit 2e8128f7c3.

* revert tags changes

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-19 00:00:22 +00:00
Radosław Kapka
7b8aedbfe4 Update Beacon API to v2.1.0 (#9797)
* `Eth-Consensus-Version` header

* rename unused receiver

* omit receiver name

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-18 18:55:18 +00:00
terence tsao
cf956c718d Update to Spectest 1.1.3 (#9786)
* Update transition spec test setup

* Update WORKSPACE

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-10-18 18:13:40 +00:00
Raul Jordan
975f0ea1af [Service Config Revamp] - Blockchain Initialization (#9783)
* viz

* unexport config

* builds

* viz

* viz

* register cfg

* fuzz

* blockchain opts

* deepsource

* rename flag opts
2021-10-18 17:48:05 +00:00
terence tsao
a80b1c252a Refactor rpc proposer into smaller files (#9796)
* Refactor rpc proposer into smaller files

* Update BUILD.bazel

* Fix test

* Update BUILD.bazel
2021-10-18 11:35:32 -05:00
Preston Van Loon
1f51e59bfd Bazel: minimal build config alias (#9794) 2021-10-18 14:22:57 +00:00
Nishant Das
bfcb113f78 Fix Doppelganger Protection (#9748)
* add fix

* fix tests

* Update beacon-chain/rpc/prysm/v1alpha1/validator/status.go

* fix tests

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-18 13:09:14 +00:00
terence tsao
20c7efda2c Process slashings return error on unknown state (#9778)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-18 12:40:47 +00:00
Radosław Kapka
f2990d8fdd Return errors for unknown block/state versions (#9781)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-18 12:18:49 +00:00
Nishant Das
d1f3050d20 Ignore G307 for Gosec (#9790) 2021-10-18 14:04:01 +02:00
Raul Jordan
4dbb5d6974 Add WebUI Security Headers (#9775) 2021-10-15 12:40:23 +02:00
Raul Jordan
59547aea66 Deregister Remote Slashing Protection Until Further Notice (#9774)
* avoid registering

* disable endpoints

* remote slasher protection register

* gaz

* fatal on external protection flag call

* radek comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-14 19:33:05 +00:00
Radosław Kapka
545424dd09 Fix epoch calculation when fetching sync committee duties (#9767)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-10-14 18:52:26 +00:00
Nishant Das
508b18f1bd Return Error For Batch Verifier (#9766)
* fix it

* make err check happy

* radek's review

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-13 23:52:22 +00:00
Radosław Kapka
e644e6b626 Remove special preparation of graffiti (#9770)
* add attestations to migration tests

* remove special preparation of graffiti

* Revert "add attestations to migration tests"

This reverts commit adbe8cf4bf.
2021-10-13 15:32:15 +00:00
Radosław Kapka
280dc4ecf0 Register v1alpha2 endpoints in the gateway (#9768)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-10-12 15:13:52 +00:00
terence tsao
7a825a79ae E2e test: stricter participation check (#9718)
* Stricter participation check

* 0.99 is still better than 0.95...

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-10-12 11:32:36 +00:00
Nishant Das
b81f5fc7a5 Handle Invalid Number Of Leaves (#9761)
* fix bad trie

* handle edge case

* radek's review

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-10-12 10:36:57 +00:00
Nishant Das
06c084ff52 Log out Gossip Handling Errors (#9765) 2021-10-12 07:44:47 +00:00
Preston Van Loon
76e06438e9 Update bazel version to 4.x (#9763)
* Update to bazel 4.0.0

* bazel 4.2.1

* Regenerate crosstool configs

* restore manual tags

* restore manual tags
2021-10-12 06:17:24 +00:00
Nishant Das
f8f037b63d updateInterval (#9764) 2021-10-11 22:12:07 -05:00
terence tsao
f114a47b5b Improve "synced block.." log (#9760) 2021-10-09 15:27:34 +02:00
Nishant Das
65d2df4609 Add in Stronger Length Checks (#9758)
* add changes

* radek's review

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-10-08 17:41:36 +00:00
Radosław Kapka
63349d863b Do not return data for validators without sync duties (#9756)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-10-08 15:20:40 +00:00
Nishant Das
e2a00d3e2e Update to v1.1.2 version of the Spec (#9755) 2021-10-08 15:08:57 +02:00
terence tsao
271ee2ed32 Cleanup slot helpers into intended locations (#9752)
* Refactor slot time helpers to intended locations

* Gazelle

* Update beacon-chain/core/time/slot_epoch.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/core/time/slot_epoch_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Go fmt

* Update transition_fuzz_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-10-07 19:15:35 +00:00
Radosław Kapka
b5f0bd88b0 Add sync details to error messages (#9750) 2021-10-07 20:50:03 +02:00
terence tsao
e7085897ad Use BLS signature length config value (#9746)
* Use `BeaconConfig().BLSSignatureLength`

* Update BUILD.bazel

* Fix build

* Update BUILD.bazel
2021-10-07 10:37:53 -05:00
Raul Jordan
d9b98e9913 Health Endpoints Not Registered in Validator gRPC Gateway Fix (#9747) 2021-10-06 19:07:29 +00:00
terence tsao
a9f9026c78 Minor cleanups (#9743)
* Minor cleanups

* Delete slasher client files

* Revert "Delete slasher client files"

This reverts commit 0c995a1d4a.

* Update slasher_client.go
2021-10-06 13:23:40 -05:00
Nishant Das
b128d446f2 Bring Eth2FastAggregateVerify Inline With the Spec (#9742)
* fix

* tie it closer to the spec
2021-10-06 10:48:56 +08:00
Radosław Kapka
9aa50352b6 Allow fetching sync committee duties for current and next period's epochs (#9728) 2021-10-05 08:46:16 -07:00
Mohamed Zahoor
362dfa691a GetBlock() - Simple optimisations (#9541)
* GetBlock() optimizations

* addressed preston's comments

* fix fmt

* added benchmarks

* fix deepsource

* merge fix

* remove unnecessary check

* fixed terence's comments

* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2021-10-05 17:28:02 +08:00
Raul Jordan
843ed50e0a Register Slashing Protection Client in Validator (#9735) 2021-10-05 00:41:10 +00:00
Preston Van Loon
da58b4e017 Bump github.com/libp2p/go-tcp-transport to v0.2.8 (#9734) 2021-10-04 23:53:22 +00:00
Raul Jordan
800f78e279 Web UI Fix for Prysm V2.0.0 (#9732) 2021-10-04 22:26:55 +00:00
terence tsao
644038ba61 eth2 api: use balance instead of effective balance (#9722)
* Use balance instead of effective balance

* Update test to reflect validator balance

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-10-04 20:57:40 +00:00
Nishant Das
865ef4e948 Pin Base Images (#9727)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-10-04 18:55:56 +00:00
Håvard Anda Estensen
b793d6258f Add Golangci-lint to GitHub Actions and add Deadcode linter (#9597)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-10-04 13:06:21 +02:00
Raul Jordan
f7845afa57 Use Unique Slot Time Tickers for Slasher (#9723) 2021-10-03 07:49:01 +00:00
terence tsao
c21e43e4c5 Refactor: move functions beacon-chain/core/time -> time/slots (#9719)
* Move necessary functions beacon-chain/core/time -> time/slots

* Fix fuzz

* Fix build

* Update slot_epoch.go
2021-10-01 15:17:57 -05:00
Raul Jordan
4f31ba6489 Changes to E2E for Optimized Slasher (#9698)
* remaining slasher e2e changes

* testing gaz

* slash e2e

* deepsource

* gaz

* viz

* revert wait group changes

* comment

* lock around reset cache

* add slashing reason

* revert changes

* is sync

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
2021-09-30 20:28:14 +00:00
terence tsao
2bc3f4bc6a Update sync time error message (#9713)
* Update sync_committee.go

* Update beacon-chain/core/altair/sync_committee.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Fix slot calculation

* Go fmt

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-30 20:03:57 +00:00
Radosław Kapka
601493098b Return error when request JSON contains unknown fields (#9710)
* Return error when request JSON contains unknown fields

* Update api/gateway/apimiddleware/process_request.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-09-30 19:42:20 +00:00
terence tsao
8219af46e4 Move slot epoch from core to time pkg (#9714)
* Move slot epoch from core to time pkg

* Fix fuzz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-30 19:00:14 +00:00
Raul Jordan
f5234634d6 Prevent Saving Empty Chunks to Disk (#9707)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-30 18:29:40 +00:00
Raul Jordan
26978fcc50 Make Jaeger in E2E Sharding Aware (#9711)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-30 17:35:05 +00:00
Nishant Das
7c67d381f4 Remove Peer Scorer From Dev (#9712)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-30 16:51:13 +00:00
Nishant Das
a196c78bed clean up (#9709) 2021-09-30 08:10:49 -07:00
Nishant Das
3c052d917f Use Scoring Parameters To Enable Global Score (#8794)
* fix score parameters

* add check

* fix deadlock issues

* fix

* fix again

* gaz

* comment

* fix tests and victor's review

* gaz

* clean up

* fix tests

* fix tests

* gate behind flag

* add scorer to dev

Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-30 03:39:53 +00:00
Raul Jordan
7dae66afc9 Highest Attestations Endpoint for Optimized Slasher (#9706)
* begin highest atts

* highest atts

* deep source
2021-09-29 21:33:28 -05:00
Radosław Kapka
62dc74af2f Move head event's epoch transition to epoch start (#9704)
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-09-29 17:18:10 -05:00
Raul Jordan
13cdb83591 Validator Changes for Optimized Slasher (#9705)
* val changes

* mock

* mocks

* gomock any

* gaz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-29 21:25:45 +00:00
Preston Van Loon
520bc9d955 Update validator reporting logs and metrics for Altair (#9589)
* Mark fields as deprecated due to Altair

* Only print inclusion distance fields before Altair fork

* Report phase0 and altair metrics respectively

* only set phase0 fields in phase0, only set altair fields in altair

* better use of fields

* Update go pbs

* Update individual votes method

* regen go proto files

* formatting

* Feedback from @potuz

* Annotate metrics per @potuz suggestion

* Set previous release e2e to end 1 epoch before altair. Add some out of bounds checks for validator metrics reporting and a panic catch

* gofmt

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-29 20:49:58 +00:00
Raul Jordan
df33ce3309 Remaining Slasher Beacon Node Changes (#9701)
* slasher beacon node changes

* remaining beacon node items

* moar changes

* gaz

* flag fix

* rem slashable

* builds

* imports

* fix up

* pruning faster test

* deepsource

* fix wrong item

* node node feature flags

* broken test

* preston review

* more preston comments

* comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-29 18:17:37 +00:00
terence tsao
86efa87101 Refactor sync and participation field roots (#9703)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-29 17:46:51 +00:00
Potuz
ff625d55df Use target root for pending attestations in tests instead of unrelated one (#9699)
* Use target root for pending attestations instead of unrelated one

* remove unnecessary block saves

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-09-29 12:27:47 -05:00
Mohamed Zahoor
0678e9f718 finalise deposits before we initialise the node (#9639)
* finalize deposits before we initialize the node

* Update beacon-chain/blockchain/service.go

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/blockchain/service.go

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* moved this when intiializing caches

* added test case

* satisfy deepsource

* fix gazel

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-09-29 15:43:24 +00:00
Nishant Das
10251c4191 increase size (#9702) 2021-09-29 10:01:10 -05:00
Potuz
861c2f5120 remove unnecessary asserts from tests (#9700) 2021-09-29 22:17:06 +08:00
terence tsao
bfc821d03a Can save justified checkpoint to DB (#9697)
* Can save justified checkpoint to DB

* Update beacon-chain/blockchain/process_block_test.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2021-09-29 03:00:52 +00:00
Raul Jordan
9edba29f64 Slasher Simulator Code for Testing Optimized Slasher Behavior (#9695)
* slashing simulator

* add in necessary items for slasher sim

* sim item

* fix up

* fixed build

* rev

* slasher sim in testing

* testonly

* gaz

* gaz

* fix viz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
2021-09-29 02:27:21 +00:00
Nishant Das
0edb3b9e65 Flatten Attestation Packing (#9683)
* flatten attestation packing

* dedup again

* terence's review

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-09-29 01:56:55 +00:00
Preston Van Loon
6c5bf70021 CI: fix remote caching (#9696) 2021-09-29 00:18:31 +00:00
Nishant Das
393549ad19 Add in Balance Safety Check (#9419)
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-28 22:19:10 +00:00
terence tsao
8f8ccf11e4 Update head more timely before (#9651)
* Move update head closer to transition

* Update process_block.go

* Update tests

* Move savePostStateInfo back

* Update process_block.go

* Update process_block_helpers.go

* Minor clean up for better diff

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-28 21:51:11 +00:00
terence tsao
806bcf1d29 Clean up unused types & function comments (#9691)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-28 21:21:07 +00:00
Raul Jordan
2a2239d937 Detect Slashable Attestations in Optimized Slasher (#9694)
* detect slashable attestations

* Update beacon-chain/slasher/detect_attestations.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-09-28 20:54:17 +00:00
Preston Van Loon
c94ba40db2 config: ensure config matches altair presets (#9690)
* config: ensure config matches altair presets

* gofmt

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-09-28 20:02:12 +00:00
Raul Jordan
1816906bc7 Detect Slashable Blocks in Optimized Slasher (#9693)
* pass

* Update beacon-chain/slasher/detect_blocks.go

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/slasher/detect_blocks.go

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-28 19:33:45 +00:00
Radosław Kapka
c32090aae5 Allow sending Altair blocks to /eth/v1/beacon/blocks (#9685)
* Allow sending Altair blocks to `/eth/v1/beacon/blocks`

* tests

* add documentation

* fix ineffectual assignment

* change type of sync committee bits

* remove unused import

* fix Altair epoch calculation

* compare slot against slot

* do not publicly export E2E constant

* tests for setInitialPublishBlockPostRequest

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-28 19:07:32 +00:00
Raul Jordan
57f965df50 Include Slasher Receiving Methods (#9692)
* first

* add receive details

* ensure most builds

* add slasherkv changes

* db iface additions

* build

* gaz

* proper todo comment

* terence comments

* sig check

* bad sig checks

* proper lock issue

* fix test

* fix up tests
2021-09-28 18:13:16 +00:00
Radosław Kapka
8a6e2a5c63 Add MIN_SYNC_COMMITTEE_PARTICIPANTS parameter to config (#9689)
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-09-28 16:54:12 +00:00
Raul Jordan
6d79f61fda Process Slashings for Newly Optimized Slasher (#9687)
* add process slashings

* verify sig

* add method

* add test

* add in process slashings functionality

* target state for aggregate

* comment

* Radek comments
2021-09-28 10:27:40 -05:00
Nishant Das
bf2d2cf981 make opt max cover default (#9684) 2021-09-28 17:34:29 +08:00
Raul Jordan
f2840c9ffa Slasher Min/Max Chunk Logic (#9673)
* slasher chunks code

* slasher chunks code

* avoid using shared

* testing helper

* slasher gaz

* radek comments

* preston feedback
2021-09-28 02:04:32 +00:00
Preston Van Loon
cbb4361c64 Update spectests to v1.1.0 (#9680)
* Update spectests to v1.1.0

* ignore placeholder fields, fix spec config check
2021-09-28 00:50:06 +00:00
terence tsao
328e3e6caf Update mainnet_config.go (#9678) 2021-09-27 13:39:55 -07:00
Preston Van Loon
ee0a453b7b core: refactor signing and domain methods from helper to core/signing pkg (#9520)
* Move domain function and all signing root functions from beacon-chain/core/helpers to beacon-chain/core

* @terencechain suggestion to put these methods under core/signing
2021-09-27 16:19:20 +00:00
Radosław Kapka
3e640fe79f Remove unused Eth1Data-related code from the proposer (#9670)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-27 14:29:41 +00:00
Radosław Kapka
bf41fd854d Remove fmt.Println from committee cache (#9677) 2021-09-27 11:18:20 +00:00
Nishant Das
6eb158c16a Check Head State For New Head Methods (#9676)
* check head state

* add tests
2021-09-27 08:27:11 +00:00
terence tsao
376d248c22 Add in progress handler to committee cache (#9664)
* Add in progress handler for committee cache

* Remove debug print

* Update validators.go

* Fix all the tests

* More tests

* Update committee_disabled.go

* Update committee_disabled.go

* Update testing util

* Update main.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2021-09-26 23:27:57 +08:00
Nishant Das
6e4c2b4b20 Wrap Gossip Validators With Error (#9660) 2021-09-25 18:06:48 -07:00
Raul Jordan
75936853af Optimized Slasher Docs and Helpers (#9578)
* bring over helpers

* slasher helpers pass tests

* fix dead link

* rem eth2

* gaz

* params

* gaz

* builds

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-09-24 13:38:13 -05:00
terence tsao
ea9ceeff03 Various clean up before v2 (#9672)
* Update package names

* Various clean up

* Gazelle

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-24 17:42:16 +00:00
Radosław Kapka
78ea402d9d Correct the semantics of startEpoch calculation in registerSyncSubnet (#9671) 2021-09-24 16:02:55 +00:00
Radosław Kapka
fae806c73e Small changes in API Middleware (#9666)
* Small changes in API Middleware's custom hooks

* reorder fields in `Endpoint`

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-24 14:34:19 +00:00
Radosław Kapka
45510fafea Fix error handing for block v2 endpoints (#9667)
* Fix error handing for block v2 endpoints

* rename helper func
2021-09-24 13:52:00 +00:00
Radosław Kapka
12480e12b2 Add flags for disabling selected API (#9606)
* Add flags for disabling selected API

* tests

* build file

* Use comma-separated modules

* test fix

* fix gateway tests

* fix import in flag tests
2021-09-24 09:25:42 +00:00
Radosław Kapka
7dd99de69f Restructure API Middleware (#9663)
* Restructure API Middleware

* fix package name in tests

* build file

* gzl

* fix one more test
2021-09-23 20:41:04 +00:00
Raul Jordan
a9a4bb9163 Move Shared/Testutil into Testing (#9659)
* move testutil

* util pkg

* build

* gaz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-23 18:53:46 +00:00
Radosław Kapka
2952bb3570 Increase debug gRPC message size to 128MB (#9661)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-23 18:31:24 +00:00
Radosław Kapka
1fbe05292f Fix SubmitPoolSyncCommitteeSignatures API endpoint (#9646)
* Fix `SubmitPoolSyncCommitteeSignatures` API endpoint

* fix test

* fix another test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-23 18:01:07 +00:00
Radosław Kapka
36b1f322c0 Various bug fixes in Eth API (#9649)
* fix block structure

* correct status code when block is not found

* make `/internal` work with events and SSZ

* test fix

* better block serialize tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-23 17:34:11 +00:00
Nishant Das
5225d97fd4 Remove Update Timely Feature Config (#9655)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-23 17:04:40 +00:00
Nishant Das
57b7bf6ea0 Update Mainnet Bootnodes (#9656)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-23 16:23:01 +00:00
Raul Jordan
29513c804c Create Encoding Bytesutil (#9658)
* bytesutil

* gaz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-23 15:23:37 +00:00
Nishant Das
b9ad81db50 bump it up (#9657) 2021-09-23 09:20:19 -05:00
Nishant Das
d694a775d7 Check For Blocks In Our Initial Sync Cache (#9558)
* add change

* test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-09-23 05:18:19 +00:00
Nishant Das
4f3762f1f7 Clean Up V2 BeaconState (#9648)
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-09-23 01:10:25 +00:00
terence tsao
8678610683 Update package name comments (#9653) 2021-09-22 22:38:06 +00:00
Radosław Kapka
8d4cdde07e Clean up code in Eth API (#9650)
* typo fix

* code cleanup

* fix spans

* move state conversion to migration package

* add migration package to v1 visibility
2021-09-22 17:59:06 +00:00
Raul Jordan
191bce3655 Add True Eth2 Deposit Contract, Bytecode, ABI (#9637)
* solidity contract, abi, and deposit util

* gaz

* gaz

* drain contract remove

* build

* fix up deploy

* add readme

* fix e2e

* revert

* revert flag

* fix

* revert test flag

* fix broken test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-22 17:27:13 +00:00
terence tsao
f52f737d2b Review Altair core epoch files with changes (#9556)
* Clean up core epoch for Altair

* Update BUILD.bazel

* Remove check for spec test

* Update validators_test.go

* Update validators_test.go

* Update beacon-chain/core/altair/epoch_precompute.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/core/altair/epoch_precompute_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/core/altair/epoch_spec.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-09-22 15:07:05 +00:00
Preston Van Loon
2cbe10c701 Fix long running e2e altair exception for previous release (#9645)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-22 04:01:43 +00:00
Preston Van Loon
36a7575437 e2e: fix trace_sync http handler registration (#9644)
* Avoid a global http server so that multiple test runs aren't registering global http handlers and causing a panic

* gofmt
2021-09-22 03:34:09 +00:00
terence tsao
8e3e6d0156 Fix package comment typo (#9641)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-21 20:25:12 +00:00
Raul Jordan
f3d6dbcc1e Move Shared/Params Into Config/Params (#9642)
* config params into pkg

* gaz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-21 19:59:25 +00:00
Radosław Kapka
26893bcb8c Production version of Altair Eth APIs (#9640)
* internal prefix in proto services

* half-baked inmplementation

* rename v1 to eth

* use router and remove old flag

* uncomment cors

* update v2 methods in proto services

* move adding path prefix after param processing

* remove unneeded code

* remove flag

* fix e2e

* uncomment sync committee e2e

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-09-21 19:20:57 +00:00
Raul Jordan
eebcd52ee6 Miscellaneous Packages from Shared Into Proper Folders (#9638)
* slashutil

* builds

* interop

* viz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-21 18:11:16 +00:00
Raul Jordan
45bfd82c88 Add Encoding SSZ Package (#9630)
* ssz package

* compile

* htrutils

* rem pkg doc

* fix cloners_test.go

* fix circular dep/build issues

Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-21 15:02:48 +00:00
Nishant Das
b943f7bce5 Add Flag To Parameterize Number of Peers in A Subnet (#9631)
* add flag

* use old value

* gaz

* raul's review
2021-09-21 07:55:52 +00:00
Radosław Kapka
28364d9f33 E2E for API Middleware (#9635)
* rename file

* Enable SSZ-serialization in v2 endpoints

# Conflicts:
#	proto/eth/v2/beacon_state.pb.go

* e2e

* rename file

* improve logging errors

* Revert "improve logging errors"

This reverts commit 796bcf3e97.

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-09-21 04:31:41 +00:00
deepsource-autofix[bot]
531f05d30d Unused parameter should be replaced by underscore (#9632)
Co-authored-by: deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-09-20 20:51:59 +00:00
Preston Van Loon
728c77cc0c CI: use nostamp for code coverage and fuzzer uploads (#9634)
* Follow up to #9633, nostamp for code coverage

* nostamp for fuzzer uploads too

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-09-20 18:59:44 +00:00
1479 changed files with 50578 additions and 24118 deletions

View File

@@ -36,6 +36,9 @@ run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true
build:blst_disabled --define gotags=blst_disabled
build:minimal --//proto:network=minimal
build:minimal --@io_bazel_rules_go//go/config:tags=minimal
# Release flags
build:release --compilation_mode=opt
build:release --config=llvm

View File

@@ -1 +1 @@
3.7.0
4.2.1

View File

@@ -10,7 +10,7 @@
# Prysm specific remote-cache properties.
#build:remote-cache --disk_cache=
build:remote-cache --remote_download_minimal
build:remote-cache --remote_download_toplevel
build:remote-cache --remote_cache=grpc://bazel-remote-cache:9092
build:remote-cache --experimental_remote_downloader=grpc://bazel-remote-cache:9092
build:remote-cache --remote_local_fallback
@@ -46,4 +46,4 @@ test:fuzz --flaky_test_attempts=1
# Better caching
build:nostamp --nostamp
build:nostamp --workspace_status_command=./hack/workspace_status_ci.sh
build:nostamp --workspace_status_command=./hack/workspace_status_ci.sh

View File

@@ -24,7 +24,7 @@ jobs:
uses: ./.github/actions/gofmt
with:
path: ./
- name: GoImports checker
id: goimports
uses: Jerome1337/goimports-action@v1.0.2
@@ -34,7 +34,13 @@ jobs:
- name: Gosec security scanner
uses: securego/gosec@master
with:
args: '-exclude-dir=crypto/bls/herumi ./...'
args: '-exclude=G307 -exclude-dir=crypto/bls/herumi ./...'
- name: Golangci-lint
uses: golangci/golangci-lint-action@v2
with:
args: --print-issued-lines --sort-results --no-config --timeout=10m --disable-all -E deadcode -E errcheck -E gosimple --skip-files=validator/web/site_data.go
build:
name: Build
runs-on: ubuntu-latest

View File

@@ -5,21 +5,22 @@ Contact: mailto:security@prysmaticlabs.com
Encryption: openpgp4fpr:0AE0051D647BA3C1A917AF4072E33E4DF1A5036E
Encryption: openpgp4fpr:CD08DE68C60B82D3EE2A3F7D95452A701810FEDB
Encryption: openpgp4fpr:317D6E91058F8F3C2303BA7756313E44581297A6
Encryption: openpgp4fpr:79C59A585E3FD3AFFA00F5C22940A6479DA7C9EC
Preferred-Languages: en
Canonical: https://github.com/prysmaticlabs/prysm/tree/master/.well-known/security.txt
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAl++klgACgkQcuM+TfGl
A27rQw/6A29p1W20J0v+h218p8XWLSUpTIGLnZTxw6KqdyVXMzlsQK0YG4G2s2AB
0LKh7Ae/Di5E0U+Z4AjUW5nc5eaCxK36GMscH9Ah0rgJwNYxEJw7/2o8ZqVT/Ip2
+56rFihRqxFZfaCNKFVuZFaL9jKewV9FKYP38ID6/SnTcrOHiu2AoAlyZGmB03p+
iT57SPRHatygeY4xb/gwcfREFWEv+VHGyBTv8A+6ABZDxyurboCFMERHzFICrbmk
8UdHxxlWZDnHAbAUyAwpERC5znx6IHXQJwF8TMtu6XY6a6axT2XBOyJDF9/mZOz+
kdkz6loX5uxaQBGLtTv6Kqf1yUGANOZ16VhHvWwL209LmHmigIVQ+qSM6c79PsW/
vrsqdz3GBsiMC5Fq2vYgnbgzpfE8Atjn0y7E+j4R7IvwOAE/Ro/b++nqnc4YqhME
P/yTcfGftaCrdSNnQCXeoV9JxpFM5Xy8KV3eexvNKbcgA/9DtgxL5i+s5ZJkUT9A
+qJvoRrRyIym32ghkHgtFJKB3PLCdobeoOVRk6EnMo9zKSiSK2rZEJW8Ccbo515D
W9qUOn3GF7lNVuUFAU/YKEdmDp/AVaViZ7vH+8aq0LC0HBkZ8XlzWnWoArS8sMhw
fX0R9g/HMgrwNte/d0mwim5lJ2Plgv60Bh4grJqwZJeWbU0zi1U=
=uW+X
iQIzBAEBCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAmGOfiYACgkQcuM+TfGl
A24YwRAAiQk3w6yzqSEggrOlNoNn04iu/rWZdn5ihkQgzACXy8XH2D1gdKLChE/X
7e5bUtgE2aCuHryQjwoKxqZakviBJFstVmHgF64rXv2zKhpqA30Mj4fI+T3zn8I+
+FpFV0TTsxNLDx+AcR1eQ1nSayO7ImUDIfOQNDDnSZZy42Bc+F+QIGKB3aH/8bpG
kT+bDTZrXvX+TE1gZTbAtZG8sH8g/zadoWEHIhfXUuYb0kTz+DRzAxoqU4j4Z4ee
1zSfFAgfJwxJP4kWD7s4xkE1sBbCgGBeD6cW/C2lbcfIei+XSizLpHW3jD9dNqh4
fLkmEspSa/LV/iXFq8nFzu/GLww4q+sQZDzzDKZyws54CrATinRitZMhzoIL0bTn
yFZVOGHosFAMEVZ36dl1Aw2+B2W6tr2CVr9c5zfV+kup5/KZH1EmT5nYY/zFwfg2
jYCFB5wmYeiyWZvuprgJXRArgVZLZaJxwWazlPVk4i/4vPvRgvfHqOwHCBe8DXy0
VHPhpewwb/ECYek1KoaNQflgR8iH2GMHkC5RjhGDAt1S0AQDtite5m4ZYt1kvO9E
k/znkv89dduhL9CKDvZvnI+DICwsTrf//4KJ8PM/qaPAJa4GvtiUU/eS/jKBivtv
OP5dZQtX6KPc9ewqqZgn622uHSezoBidgeTkdZsJ6tw2eIu0lsY=
=V7L0
-----END PGP SIGNATURE-----

View File

@@ -144,12 +144,6 @@ common_files = {
"//:README.md": "README.md",
}
toolchain(
name = "built_cmake_toolchain",
toolchain = "@rules_foreign_cc//tools/build_defs/native_tools:built_cmake",
toolchain_type = "@rules_foreign_cc//tools/build_defs:cmake_toolchain",
)
string_setting(
name = "gotags",
build_setting_default = "",

View File

@@ -76,9 +76,9 @@ http_archive(
http_archive(
name = "io_bazel_rules_docker",
sha256 = "59d5b42ac315e7eadffa944e86e90c2990110a1c8075f1cd145f487e999d22b3",
strip_prefix = "rules_docker-0.17.0",
urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.17.0/rules_docker-v0.17.0.tar.gz"],
sha256 = "1f4e59843b61981a96835dc4ac377ad4da9f8c334ebe5e0bb3f58f80c09735f4",
strip_prefix = "rules_docker-0.19.0",
urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.19.0/rules_docker-v0.19.0.tar.gz"],
)
http_archive(
@@ -139,6 +139,34 @@ load(
"container_pull",
)
container_pull(
name = "cc_image_base",
digest = "sha256:2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb",
registry = "gcr.io",
repository = "distroless/cc",
)
container_pull(
name = "cc_debug_image_base",
digest = "sha256:3680c61e81f68fc00bfb5e1ec65e8e678aaafa7c5f056bc2681c29527ebbb30c",
registry = "gcr.io",
repository = "distroless/cc",
)
container_pull(
name = "go_image_base",
digest = "sha256:ba7a315f86771332e76fa9c3d423ecfdbb8265879c6f1c264d6fff7d4fa460a4",
registry = "gcr.io",
repository = "distroless/base",
)
container_pull(
name = "go_debug_image_base",
digest = "sha256:efd8711717d9e9b5d0dbb20ea10876dab0609c923bc05321b912f9239090ca80",
registry = "gcr.io",
repository = "distroless/base",
)
container_pull(
name = "alpine_cc_linux_amd64",
digest = "sha256:752aa0c9a88461ffc50c5267bb7497ef03a303e38b2c8f7f2ded9bebe5f1f00e",
@@ -197,7 +225,7 @@ filegroup(
url = "https://github.com/eth2-clients/slashing-protection-interchange-tests/archive/b8413ca42dc92308019d0d4db52c87e9e125c4e9.tar.gz",
)
consensus_spec_version = "v1.1.0-beta.4"
consensus_spec_version = "v1.1.6"
bls_test_version = "v0.1.1"
@@ -213,7 +241,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "d2d501453cf29777896a5f9ae52e93921f34330e57fcc5b55e4982d4795236b9",
sha256 = "58dbf798e86017b5561af38f2217b99e9fa5b6be0e928b4c73dad6040bb94d65",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -229,7 +257,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "b152f1f03a4fdacd1882fe867bdfbd5067e74ed3c532bd01d81e7adf6cc3737a",
sha256 = "5be19f7fca9733686ca25dad5ae306327e98830ef6354549d1ddfc56c10e0e9a",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -245,7 +273,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "6046f89c679a9ffda217dee6f021eb7c4fe91aad6ff940fae4d2cf894cc61cbb",
sha256 = "cc110528fcf7ede049e6a05788c77f4a865c3110b49508149d61bb2a992bb896",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -260,7 +288,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "26a0a2c3022a3c5354f3204c7e441580df2e585dbde22a43a1d43f885b3a221c",
sha256 = "c318d7b909ab39db9cc861f645ddd364e7475a4a3425bb702ab407fad3807acd",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -334,9 +362,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "54ce527b83d092da01127f2e3816f4d5cfbab69354caba8537f1ea55889b6d7c",
sha256 = "f196fe4367c2d2d01d36565c0dc6eecfa4f03adba1fc03a61d62953fce606e1f",
urls = [
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v1.0.0-beta.4/prysm-web-ui.tar.gz",
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v1.0.2/prysm-web-ui.tar.gz",
],
)
@@ -367,10 +395,6 @@ load(
_cc_image_repos()
load("@com_github_ethereum_go_ethereum//:deps.bzl", "geth_dependencies")
geth_dependencies()
load("@io_bazel_rules_go//extras:embed_data_deps.bzl", "go_embed_data_dependencies")
go_embed_data_dependencies()

View File

@@ -3,12 +3,9 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"api_middleware.go",
"api_middleware_processing.go",
"api_middleware_structs.go",
"gateway.go",
"log.go",
"param_handling.go",
"options.go",
],
importpath = "github.com/prysmaticlabs/prysm/api/gateway",
visibility = [
@@ -16,16 +13,13 @@ go_library(
"//validator:__subpackages__",
],
deps = [
"//api/grpc:go_default_library",
"//api/gateway/apimiddleware:go_default_library",
"//runtime:go_default_library",
"//shared/bytesutil:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_rs_cors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_wealdtech_go_bytesutil//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//connectivity:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library",
@@ -34,17 +28,13 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"api_middleware_processing_test.go",
"gateway_test.go",
"param_handling_test.go",
],
srcs = ["gateway_test.go"],
embed = [":go_default_library"],
deps = [
"//api/grpc:go_default_library",
"//api/gateway/apimiddleware:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",

View File

@@ -1,217 +0,0 @@
package gateway
import (
"net/http"
"reflect"
"github.com/gorilla/mux"
)
// ApiProxyMiddleware is a proxy between an Ethereum consensus API HTTP client and grpc-gateway.
// The purpose of the proxy is to handle HTTP requests and gRPC responses in such a way that:
// - Ethereum consensus API requests can be handled by grpc-gateway correctly
// - gRPC responses can be returned as spec-compliant Ethereum consensus API responses
type ApiProxyMiddleware struct {
GatewayAddress string
ProxyAddress string
EndpointCreator EndpointFactory
router *mux.Router
}
// EndpointFactory is responsible for creating new instances of Endpoint values.
type EndpointFactory interface {
Create(path string) (*Endpoint, error)
Paths() []string
IsNil() bool
}
// Endpoint is a representation of an API HTTP endpoint that should be proxied by the middleware.
type Endpoint struct {
Path string // The path of the HTTP endpoint.
PostRequest interface{} // The struct corresponding to the JSON structure used in a POST request.
PostResponse interface{} // The struct corresponding to the JSON structure used in a POST response.
RequestURLLiterals []string // Names of URL parameters that should not be base64-encoded.
RequestQueryParams []QueryParam // Query parameters of the request.
GetResponse interface{} // The struct corresponding to the JSON structure used in a GET response.
Err ErrorJson // The struct corresponding to the error that should be returned in case of a request failure.
Hooks HookCollection // A collection of functions that can be invoked at various stages of the request/response cycle.
CustomHandlers []CustomHandler // Functions that will be executed instead of the default request/response behaviour.
}
// DefaultEndpoint returns an Endpoint with default configuration, e.g. DefaultErrorJson for error handling.
func DefaultEndpoint() Endpoint {
return Endpoint{
Err: &DefaultErrorJson{},
}
}
// QueryParam represents a single query parameter's metadata.
type QueryParam struct {
Name string
Hex bool
Enum bool
}
// Hook is a function that can be invoked at various stages of the request/response cycle, leading to custom behaviour for a specific endpoint.
type Hook = func(endpoint Endpoint, w http.ResponseWriter, req *http.Request) ErrorJson
// CustomHandler is a function that can be invoked at the very beginning of the request,
// essentially replacing the whole default request/response logic with custom logic for a specific endpoint.
type CustomHandler = func(m *ApiProxyMiddleware, endpoint Endpoint, w http.ResponseWriter, req *http.Request) (handled bool)
// HookCollection contains hooks that can be used to amend the default request/response cycle with custom logic for a specific endpoint.
type HookCollection struct {
OnPreDeserializeRequestBodyIntoContainer []Hook
OnPostDeserializeRequestBodyIntoContainer []Hook
OnPreDeserializeGrpcResponseBodyIntoContainer []func([]byte, interface{}) (bool, ErrorJson)
OnPreSerializeMiddlewareResponseIntoJson []func(interface{}) (bool, []byte, ErrorJson)
}
// fieldProcessor applies the processing function f to a value when the tag is present on the field.
type fieldProcessor struct {
tag string
f func(value reflect.Value) error
}
// Run starts the proxy, registering all proxy endpoints on ApiProxyMiddleware.ProxyAddress.
func (m *ApiProxyMiddleware) Run() error {
m.router = mux.NewRouter()
for _, path := range m.EndpointCreator.Paths() {
m.handleApiPath(path, m.EndpointCreator)
}
return http.ListenAndServe(m.ProxyAddress, m.router)
}
func (m *ApiProxyMiddleware) handleApiPath(path string, endpointFactory EndpointFactory) {
m.router.HandleFunc(path, func(w http.ResponseWriter, req *http.Request) {
endpoint, err := endpointFactory.Create(path)
if err != nil {
errJson := InternalServerErrorWithMessage(err, "could not create endpoint")
WriteError(w, errJson, nil)
}
for _, handler := range endpoint.CustomHandlers {
if handler(m, *endpoint, w, req) {
return
}
}
if req.Method == "POST" {
for _, hook := range endpoint.Hooks.OnPreDeserializeRequestBodyIntoContainer {
if errJson := hook(*endpoint, w, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if errJson := DeserializeRequestBodyIntoContainer(req.Body, endpoint.PostRequest); errJson != nil {
WriteError(w, errJson, nil)
return
}
for _, hook := range endpoint.Hooks.OnPostDeserializeRequestBodyIntoContainer {
if errJson := hook(*endpoint, w, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if errJson := ProcessRequestContainerFields(endpoint.PostRequest); errJson != nil {
WriteError(w, errJson, nil)
return
}
if errJson := SetRequestBodyToRequestContainer(endpoint.PostRequest, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if errJson := m.PrepareRequestForProxying(*endpoint, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
grpcResponse, errJson := ProxyRequest(req)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
grpcResponseBody, errJson := ReadGrpcResponseBody(grpcResponse.Body)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
var responseJson []byte
if !GrpcResponseIsEmpty(grpcResponseBody) {
if errJson := DeserializeGrpcResponseBodyIntoErrorJson(endpoint.Err, grpcResponseBody); errJson != nil {
WriteError(w, errJson, nil)
return
}
if endpoint.Err.Msg() != "" {
HandleGrpcResponseError(endpoint.Err, grpcResponse, w)
return
}
var response interface{}
if req.Method == "GET" {
response = endpoint.GetResponse
} else {
response = endpoint.PostResponse
}
runDefault := true
for _, hook := range endpoint.Hooks.OnPreDeserializeGrpcResponseBodyIntoContainer {
ok, errJson := hook(grpcResponseBody, response)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
if ok {
runDefault = false
break
}
}
if runDefault {
if errJson := DeserializeGrpcResponseBodyIntoContainer(grpcResponseBody, response); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if errJson := ProcessMiddlewareResponseFields(response); errJson != nil {
WriteError(w, errJson, nil)
return
}
var ok bool
runDefault = true
for _, hook := range endpoint.Hooks.OnPreSerializeMiddlewareResponseIntoJson {
ok, responseJson, errJson = hook(response)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
if ok {
runDefault = false
break
}
}
if runDefault {
responseJson, errJson = SerializeMiddlewareResponseIntoJson(response)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
}
}
if errJson := WriteMiddlewareResponseHeadersAndBody(grpcResponse, responseJson, w); errJson != nil {
WriteError(w, errJson, nil)
return
}
if errJson := Cleanup(grpcResponse.Body); errJson != nil {
WriteError(w, errJson, nil)
return
}
})
}

View File

@@ -0,0 +1,40 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"api_middleware.go",
"log.go",
"param_handling.go",
"process_field.go",
"process_request.go",
"structs.go",
],
importpath = "github.com/prysmaticlabs/prysm/api/gateway/apimiddleware",
visibility = ["//visibility:public"],
deps = [
"//api/grpc:go_default_library",
"//encoding/bytesutil:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_wealdtech_go_bytesutil//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"param_handling_test.go",
"process_request_test.go",
],
embed = [":go_default_library"],
deps = [
"//api/grpc:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)

View File

@@ -0,0 +1,262 @@
package apimiddleware
import (
"net/http"
"reflect"
"github.com/gorilla/mux"
)
// ApiProxyMiddleware is a proxy between an Ethereum consensus API HTTP client and grpc-gateway.
// The purpose of the proxy is to handle HTTP requests and gRPC responses in such a way that:
// - Ethereum consensus API requests can be handled by grpc-gateway correctly
// - gRPC responses can be returned as spec-compliant Ethereum consensus API responses
type ApiProxyMiddleware struct {
GatewayAddress string
EndpointCreator EndpointFactory
router *mux.Router
}
// EndpointFactory is responsible for creating new instances of Endpoint values.
type EndpointFactory interface {
Create(path string) (*Endpoint, error)
Paths() []string
IsNil() bool
}
// Endpoint is a representation of an API HTTP endpoint that should be proxied by the middleware.
type Endpoint struct {
Path string // The path of the HTTP endpoint.
GetResponse interface{} // The struct corresponding to the JSON structure used in a GET response.
PostRequest interface{} // The struct corresponding to the JSON structure used in a POST request.
PostResponse interface{} // The struct corresponding to the JSON structure used in a POST response.
DeleteRequest interface{} // The struct corresponding to the JSON structure used in a DELETE request.
DeleteResponse interface{} // The struct corresponding to the JSON structure used in a DELETE response.
RequestURLLiterals []string // Names of URL parameters that should not be base64-encoded.
RequestQueryParams []QueryParam // Query parameters of the request.
Err ErrorJson // The struct corresponding to the error that should be returned in case of a request failure.
Hooks HookCollection // A collection of functions that can be invoked at various stages of the request/response cycle.
CustomHandlers []CustomHandler // Functions that will be executed instead of the default request/response behaviour.
}
// RunDefault expresses whether the default processing logic should be carried out after running a pre hook.
type RunDefault bool
// DefaultEndpoint returns an Endpoint with default configuration, e.g. DefaultErrorJson for error handling.
func DefaultEndpoint() Endpoint {
return Endpoint{
Err: &DefaultErrorJson{},
}
}
// QueryParam represents a single query parameter's metadata.
type QueryParam struct {
Name string
Hex bool
Enum bool
}
// CustomHandler is a function that can be invoked at the very beginning of the request,
// essentially replacing the whole default request/response logic with custom logic for a specific endpoint.
type CustomHandler = func(m *ApiProxyMiddleware, endpoint Endpoint, w http.ResponseWriter, req *http.Request) (handled bool)
// HookCollection contains hooks that can be used to amend the default request/response cycle with custom logic for a specific endpoint.
type HookCollection struct {
OnPreDeserializeRequestBodyIntoContainer func(endpoint *Endpoint, w http.ResponseWriter, req *http.Request) (RunDefault, ErrorJson)
OnPostDeserializeRequestBodyIntoContainer func(endpoint *Endpoint, w http.ResponseWriter, req *http.Request) ErrorJson
OnPreDeserializeGrpcResponseBodyIntoContainer func([]byte, interface{}) (RunDefault, ErrorJson)
OnPreSerializeMiddlewareResponseIntoJson func(interface{}) (RunDefault, []byte, ErrorJson)
}
// fieldProcessor applies the processing function f to a value when the tag is present on the field.
type fieldProcessor struct {
tag string
f func(value reflect.Value) error
}
// Run starts the proxy, registering all proxy endpoints.
func (m *ApiProxyMiddleware) Run(gatewayRouter *mux.Router) {
for _, path := range m.EndpointCreator.Paths() {
gatewayRouter.HandleFunc(path, m.WithMiddleware(path))
}
m.router = gatewayRouter
}
// ServeHTTP for the proxy middleware.
func (m *ApiProxyMiddleware) ServeHTTP(w http.ResponseWriter, req *http.Request) {
m.router.ServeHTTP(w, req)
}
// WithMiddleware wraps the given endpoint handler with the middleware logic.
func (m *ApiProxyMiddleware) WithMiddleware(path string) http.HandlerFunc {
endpoint, err := m.EndpointCreator.Create(path)
if err != nil {
log.WithError(err).Errorf("Could not create endpoint for path: %s", path)
return nil
}
return func(w http.ResponseWriter, req *http.Request) {
for _, handler := range endpoint.CustomHandlers {
if handler(m, *endpoint, w, req) {
return
}
}
if req.Method == "POST" {
if errJson := handlePostRequestForEndpoint(endpoint, w, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if req.Method == "DELETE" {
if errJson := handleDeleteRequestForEndpoint(endpoint, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if errJson := m.PrepareRequestForProxying(*endpoint, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
grpcResp, errJson := ProxyRequest(req)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
grpcRespBody, errJson := ReadGrpcResponseBody(grpcResp.Body)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
var respJson []byte
if !GrpcResponseIsEmpty(grpcRespBody) {
respHasError, errJson := HandleGrpcResponseError(endpoint.Err, grpcResp, grpcRespBody, w)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
if respHasError {
return
}
var resp interface{}
if req.Method == "GET" {
resp = endpoint.GetResponse
} else if req.Method == "DELETE" {
resp = endpoint.DeleteResponse
} else {
resp = endpoint.PostResponse
}
if errJson := deserializeGrpcResponseBodyIntoContainerWrapped(endpoint, grpcRespBody, resp); errJson != nil {
WriteError(w, errJson, nil)
return
}
if errJson := ProcessMiddlewareResponseFields(resp); errJson != nil {
WriteError(w, errJson, nil)
return
}
respJson, errJson = serializeMiddlewareResponseIntoJsonWrapped(endpoint, respJson, resp)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if errJson := WriteMiddlewareResponseHeadersAndBody(grpcResp, respJson, w); errJson != nil {
WriteError(w, errJson, nil)
return
}
if errJson := Cleanup(grpcResp.Body); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
}
func handlePostRequestForEndpoint(endpoint *Endpoint, w http.ResponseWriter, req *http.Request) ErrorJson {
if errJson := deserializeRequestBodyIntoContainerWrapped(endpoint, req, w); errJson != nil {
return errJson
}
if errJson := ProcessRequestContainerFields(endpoint.PostRequest); errJson != nil {
return errJson
}
return SetRequestBodyToRequestContainer(endpoint.PostRequest, req)
}
func handleDeleteRequestForEndpoint(endpoint *Endpoint, req *http.Request) ErrorJson {
if errJson := DeserializeRequestBodyIntoContainer(req.Body, endpoint.DeleteRequest); errJson != nil {
return errJson
}
if errJson := ProcessRequestContainerFields(endpoint.DeleteRequest); errJson != nil {
return errJson
}
return SetRequestBodyToRequestContainer(endpoint.DeleteRequest, req)
}
func deserializeRequestBodyIntoContainerWrapped(endpoint *Endpoint, req *http.Request, w http.ResponseWriter) ErrorJson {
runDefault := true
if endpoint.Hooks.OnPreDeserializeRequestBodyIntoContainer != nil {
run, errJson := endpoint.Hooks.OnPreDeserializeRequestBodyIntoContainer(endpoint, w, req)
if errJson != nil {
return errJson
}
if !run {
runDefault = false
}
}
if runDefault {
if errJson := DeserializeRequestBodyIntoContainer(req.Body, endpoint.PostRequest); errJson != nil {
return errJson
}
}
if endpoint.Hooks.OnPostDeserializeRequestBodyIntoContainer != nil {
if errJson := endpoint.Hooks.OnPostDeserializeRequestBodyIntoContainer(endpoint, w, req); errJson != nil {
return errJson
}
}
return nil
}
func deserializeGrpcResponseBodyIntoContainerWrapped(endpoint *Endpoint, grpcResponseBody []byte, resp interface{}) ErrorJson {
runDefault := true
if endpoint.Hooks.OnPreDeserializeGrpcResponseBodyIntoContainer != nil {
run, errJson := endpoint.Hooks.OnPreDeserializeGrpcResponseBodyIntoContainer(grpcResponseBody, resp)
if errJson != nil {
return errJson
}
if !run {
runDefault = false
}
}
if runDefault {
if errJson := DeserializeGrpcResponseBodyIntoContainer(grpcResponseBody, resp); errJson != nil {
return errJson
}
}
return nil
}
func serializeMiddlewareResponseIntoJsonWrapped(endpoint *Endpoint, respJson []byte, resp interface{}) ([]byte, ErrorJson) {
runDefault := true
var errJson ErrorJson
if endpoint.Hooks.OnPreSerializeMiddlewareResponseIntoJson != nil {
var run RunDefault
run, respJson, errJson = endpoint.Hooks.OnPreSerializeMiddlewareResponseIntoJson(resp)
if errJson != nil {
return nil, errJson
}
if !run {
runDefault = false
}
}
if runDefault {
respJson, errJson = SerializeMiddlewareResponseIntoJson(resp)
if errJson != nil {
return nil, errJson
}
}
return respJson, nil
}
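
A minimal sketch of how an endpoint with a pre-serialization hook could be defined against this new apimiddleware package. The response container type, path, and hook body below are illustrative assumptions rather than code from this change; only the Endpoint, HookCollection, RunDefault, ErrorJson, DefaultEndpoint and InternalServerErrorWithMessage identifiers come from the diff above.

package example

import (
	"encoding/json"

	"github.com/prysmaticlabs/prysm/api/gateway/apimiddleware"
)

// blockResponseJson is a hypothetical response container used only for illustration.
type blockResponseJson struct {
	Data string `json:"data"`
}

// newBlockEndpoint shows one way an endpoint factory might assemble an Endpoint:
// start from DefaultEndpoint (which wires up DefaultErrorJson) and attach a
// pre-serialization hook. Returning RunDefault(false) tells the middleware to
// skip the default SerializeMiddlewareResponseIntoJson call.
func newBlockEndpoint() apimiddleware.Endpoint {
	e := apimiddleware.DefaultEndpoint()
	e.Path = "/eth/v1/beacon/blocks/{block_id}"
	e.GetResponse = &blockResponseJson{}
	e.Hooks = apimiddleware.HookCollection{
		OnPreSerializeMiddlewareResponseIntoJson: func(resp interface{}) (apimiddleware.RunDefault, []byte, apimiddleware.ErrorJson) {
			j, err := json.Marshal(resp)
			if err != nil {
				return false, nil, apimiddleware.InternalServerErrorWithMessage(err, "could not marshal response")
			}
			return false, j, nil
		},
	}
	return e
}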

View File

@@ -0,0 +1,5 @@
package apimiddleware
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "apimiddleware")

View File

@@ -1,4 +1,4 @@
package gateway
package apimiddleware
import (
"encoding/base64"
@@ -7,7 +7,7 @@ import (
"strings"
"github.com/gorilla/mux"
butil "github.com/prysmaticlabs/prysm/shared/bytesutil"
butil "github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/wealdtech/go-bytesutil"
)

View File

@@ -1,4 +1,4 @@
package gateway
package apimiddleware
import (
"bytes"
@@ -6,8 +6,8 @@ import (
"testing"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestHandleURLParameters(t *testing.T) {

View File

@@ -0,0 +1,108 @@
package apimiddleware
import (
"encoding/base64"
"fmt"
"reflect"
"strconv"
"strings"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/wealdtech/go-bytesutil"
)
// processField calls each processor function on any field that has the matching tag set.
// It is a recursive function.
func processField(s interface{}, processors []fieldProcessor) error {
kind := reflect.TypeOf(s).Kind()
if kind != reflect.Ptr && kind != reflect.Slice && kind != reflect.Array {
return fmt.Errorf("processing fields of kind '%v' is unsupported", kind)
}
t := reflect.TypeOf(s).Elem()
v := reflect.Indirect(reflect.ValueOf(s))
for i := 0; i < t.NumField(); i++ {
switch v.Field(i).Kind() {
case reflect.Slice:
sliceElem := t.Field(i).Type.Elem()
kind := sliceElem.Kind()
// Recursively process slices to struct pointers.
if kind == reflect.Ptr && sliceElem.Elem().Kind() == reflect.Struct {
for j := 0; j < v.Field(i).Len(); j++ {
if err := processField(v.Field(i).Index(j).Interface(), processors); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
// Process each string in string slices.
if kind == reflect.String {
for _, proc := range processors {
_, hasTag := t.Field(i).Tag.Lookup(proc.tag)
if hasTag {
for j := 0; j < v.Field(i).Len(); j++ {
if err := proc.f(v.Field(i).Index(j)); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
}
}
// Recursively process struct pointers.
case reflect.Ptr:
if v.Field(i).Elem().Kind() == reflect.Struct {
if err := processField(v.Field(i).Interface(), processors); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
default:
field := t.Field(i)
for _, proc := range processors {
if _, hasTag := field.Tag.Lookup(proc.tag); hasTag {
if err := proc.f(v.Field(i)); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
}
}
return nil
}
func hexToBase64Processor(v reflect.Value) error {
b, err := bytesutil.FromHexString(v.String())
if err != nil {
return err
}
v.SetString(base64.StdEncoding.EncodeToString(b))
return nil
}
func base64ToHexProcessor(v reflect.Value) error {
if v.String() == "" {
return nil
}
b, err := base64.StdEncoding.DecodeString(v.String())
if err != nil {
return err
}
v.SetString(hexutil.Encode(b))
return nil
}
func enumToLowercaseProcessor(v reflect.Value) error {
v.SetString(strings.ToLower(v.String()))
return nil
}
func timeToUnixProcessor(v reflect.Value) error {
t, err := time.Parse(time.RFC3339, v.String())
if err != nil {
return err
}
v.SetString(strconv.FormatUint(uint64(t.Unix()), 10))
return nil
}
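
The processors above are driven entirely by struct tags discovered through reflection. A short sketch of how they might be exercised, written as if inside the apimiddleware package (processField and fieldProcessor are unexported); the "hex" tag name and the container type are assumptions for illustration only.

// exampleContainer is a hypothetical request container; the tag name "hex"
// is assumed here purely to demonstrate the mechanism.
type exampleContainer struct {
	Root string `hex:"true"`
}

// exampleProcess converts every hex-tagged string field into its base64 form,
// mirroring what the request-processing path does before proxying to grpc-gateway.
func exampleProcess() error {
	c := &exampleContainer{Root: "0x01020304"}
	return processField(c, []fieldProcessor{
		{tag: "hex", f: hexToBase64Processor},
	})
}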

View File

@@ -1,27 +1,31 @@
package gateway
package apimiddleware
import (
"bytes"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"reflect"
"strconv"
"strings"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/api/grpc"
"github.com/wealdtech/go-bytesutil"
)
// DeserializeRequestBodyIntoContainer deserializes the request's body into an endpoint-specific struct.
func DeserializeRequestBodyIntoContainer(body io.Reader, requestContainer interface{}) ErrorJson {
if err := json.NewDecoder(body).Decode(&requestContainer); err != nil {
decoder := json.NewDecoder(body)
decoder.DisallowUnknownFields()
if err := decoder.Decode(&requestContainer); err != nil {
if strings.Contains(err.Error(), "json: unknown field") {
e := errors.Wrap(err, "could not decode request body")
return &DefaultErrorJson{
Message: e.Error(),
Code: http.StatusBadRequest,
}
}
return InternalServerErrorWithMessage(err, "could not decode request body")
}
return nil
@@ -62,7 +66,12 @@ func (m *ApiProxyMiddleware) PrepareRequestForProxying(endpoint Endpoint, req *h
if errJson := HandleURLParameters(endpoint.Path, req, endpoint.RequestURLLiterals); errJson != nil {
return errJson
}
return HandleQueryParameters(req, endpoint.RequestQueryParams)
if errJson := HandleQueryParameters(req, endpoint.RequestQueryParams); errJson != nil {
return errJson
}
// We have to add the prefix after handling parameters because adding the prefix changes URL segment indexing.
req.URL.Path = "/internal" + req.URL.Path
return nil
}
// ProxyRequest proxies the request to grpc-gateway.
@@ -88,26 +97,25 @@ func ReadGrpcResponseBody(r io.Reader) ([]byte, ErrorJson) {
return body, nil
}
// DeserializeGrpcResponseBodyIntoErrorJson deserializes the body from the grpc-gateway's response into an error struct.
// The struct can be later examined to check if the request resulted in an error.
func DeserializeGrpcResponseBodyIntoErrorJson(errJson ErrorJson, body []byte) ErrorJson {
if err := json.Unmarshal(body, errJson); err != nil {
return InternalServerErrorWithMessage(err, "could not unmarshal error")
}
return nil
}
// HandleGrpcResponseError acts on an error that resulted from a grpc-gateway's response.
func HandleGrpcResponseError(errJson ErrorJson, resp *http.Response, w http.ResponseWriter) {
// Something went wrong, but the request completed, meaning we can write headers and the error message.
for h, vs := range resp.Header {
for _, v := range vs {
w.Header().Set(h, v)
}
func HandleGrpcResponseError(errJson ErrorJson, resp *http.Response, respBody []byte, w http.ResponseWriter) (bool, ErrorJson) {
responseHasError := false
if err := json.Unmarshal(respBody, errJson); err != nil {
return false, InternalServerErrorWithMessage(err, "could not unmarshal error")
}
// Set code to HTTP code because unmarshalled body contained gRPC code.
errJson.SetCode(resp.StatusCode)
WriteError(w, errJson, resp.Header)
if errJson.Msg() != "" {
responseHasError = true
// Something went wrong, but the request completed, meaning we can write headers and the error message.
for h, vs := range resp.Header {
for _, v := range vs {
w.Header().Set(h, v)
}
}
// Set code to HTTP code because unmarshalled body contained gRPC code.
errJson.SetCode(resp.StatusCode)
WriteError(w, errJson, resp.Header)
}
return responseHasError, nil
}
// GrpcResponseIsEmpty determines whether the grpc-gateway's response body contains no data.
@@ -192,9 +200,11 @@ func WriteMiddlewareResponseHeadersAndBody(grpcResp *http.Response, responseJson
// WriteError writes the error by manipulating headers and the body of the final response.
func WriteError(w http.ResponseWriter, errJson ErrorJson, responseHeader http.Header) {
// Include custom error in the error JSON.
hasCustomError := false
if responseHeader != nil {
customError, ok := responseHeader["Grpc-Metadata-"+grpc.CustomErrorMetadataKey]
if ok {
hasCustomError = true
// Assume header has only one value and read the 0 index.
if err := json.Unmarshal([]byte(customError[0]), errJson); err != nil {
log.WithError(err).Error("Could not unmarshal custom error message")
@@ -203,10 +213,29 @@ func WriteError(w http.ResponseWriter, errJson ErrorJson, responseHeader http.He
}
}
j, err := json.Marshal(errJson)
if err != nil {
log.WithError(err).Error("Could not marshal error message")
return
var j []byte
if hasCustomError {
var err error
j, err = json.Marshal(errJson)
if err != nil {
log.WithError(err).Error("Could not marshal error message")
return
}
} else {
var err error
// We marshal the response body into a DefaultErrorJson if the custom error is not present.
// This is because the ErrorJson argument is the endpoint's error definition, which may contain custom fields.
// In such a scenario marshaling the endpoint's error would populate the resulting JSON
// with these fields even if they are not present in the gRPC header.
d := &DefaultErrorJson{
Message: errJson.Msg(),
Code: errJson.StatusCode(),
}
j, err = json.Marshal(d)
if err != nil {
log.WithError(err).Error("Could not marshal error message")
return
}
}
w.Header().Set("Content-Length", strconv.Itoa(len(j)))
@@ -224,97 +253,3 @@ func Cleanup(grpcResponseBody io.ReadCloser) ErrorJson {
}
return nil
}
// processField calls each processor function on any field that has the matching tag set.
// It is a recursive function.
func processField(s interface{}, processors []fieldProcessor) error {
kind := reflect.TypeOf(s).Kind()
if kind != reflect.Ptr && kind != reflect.Slice && kind != reflect.Array {
return fmt.Errorf("processing fields of kind '%v' is unsupported", kind)
}
t := reflect.TypeOf(s).Elem()
v := reflect.Indirect(reflect.ValueOf(s))
for i := 0; i < t.NumField(); i++ {
switch v.Field(i).Kind() {
case reflect.Slice:
sliceElem := t.Field(i).Type.Elem()
kind := sliceElem.Kind()
// Recursively process slices to struct pointers.
if kind == reflect.Ptr && sliceElem.Elem().Kind() == reflect.Struct {
for j := 0; j < v.Field(i).Len(); j++ {
if err := processField(v.Field(i).Index(j).Interface(), processors); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
// Process each string in string slices.
if kind == reflect.String {
for _, proc := range processors {
_, hasTag := t.Field(i).Tag.Lookup(proc.tag)
if hasTag {
for j := 0; j < v.Field(i).Len(); j++ {
if err := proc.f(v.Field(i).Index(j)); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
}
}
// Recursively process struct pointers.
case reflect.Ptr:
if v.Field(i).Elem().Kind() == reflect.Struct {
if err := processField(v.Field(i).Interface(), processors); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
default:
field := t.Field(i)
for _, proc := range processors {
if _, hasTag := field.Tag.Lookup(proc.tag); hasTag {
if err := proc.f(v.Field(i)); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
}
}
return nil
}
func hexToBase64Processor(v reflect.Value) error {
b, err := bytesutil.FromHexString(v.String())
if err != nil {
return err
}
v.SetString(base64.StdEncoding.EncodeToString(b))
return nil
}
func base64ToHexProcessor(v reflect.Value) error {
if v.String() == "" {
return nil
}
b, err := base64.StdEncoding.DecodeString(v.String())
if err != nil {
return err
}
v.SetString(hexutil.Encode(b))
return nil
}
func enumToLowercaseProcessor(v reflect.Value) error {
v.SetString(strings.ToLower(v.String()))
return nil
}
func timeToUnixProcessor(v reflect.Value) error {
t, err := time.Parse(time.RFC3339, v.String())
if err != nil {
return err
}
v.SetString(strconv.FormatUint(uint64(t.Unix()), 10))
return nil
}
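
One behavioural change worth noting in the rewritten request path: the body decoder now sets DisallowUnknownFields, so a payload containing a field the container does not declare is rejected with 400 Bad Request instead of falling through to a 500. A sketch, assuming a hypothetical container type and written as if inside the apimiddleware package:

import (
	"bytes"
	"fmt"
)

// pubkeyRequestJson is a hypothetical request container used only for illustration.
type pubkeyRequestJson struct {
	Pubkey string `json:"pubkey"`
}

func exampleUnknownField() {
	body := bytes.NewBufferString(`{"pubkey":"0x1234","foo":"bar"}`)
	if errJson := DeserializeRequestBodyIntoContainer(body, &pubkeyRequestJson{}); errJson != nil {
		// "foo" is not declared on the container, so the decoder rejects it
		// and the middleware reports http.StatusBadRequest.
		fmt.Println(errJson.StatusCode()) // 400
	}
}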

View File

@@ -1,4 +1,4 @@
package gateway
package apimiddleware
import (
"bytes"
@@ -9,8 +9,8 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/api/grpc"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/sirupsen/logrus/hooks/test"
)
@@ -63,6 +63,11 @@ func (e *testErrorJson) SetCode(code int) {
e.Code = code
}
// SetMsg sets the error's underlying message.
func (e *testErrorJson) SetMsg(msg string) {
e.Message = msg
}
func TestDeserializeRequestBodyIntoContainer(t *testing.T) {
t.Run("ok", func(t *testing.T) {
var bodyJson bytes.Buffer
@@ -83,6 +88,15 @@ func TestDeserializeRequestBodyIntoContainer(t *testing.T) {
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode request body"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
t.Run("unknown field", func(t *testing.T) {
var bodyJson bytes.Buffer
bodyJson.Write([]byte("{\"foo\":\"foo\"}"))
errJson := DeserializeRequestBodyIntoContainer(&bodyJson, &testRequestContainer{})
require.NotNil(t, errJson)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode request body"))
assert.Equal(t, http.StatusBadRequest, errJson.StatusCode())
})
}
func TestProcessRequestContainerFields(t *testing.T) {
@@ -147,29 +161,6 @@ func TestReadGrpcResponseBody(t *testing.T) {
assert.Equal(t, "foo", string(body))
}
func TestDeserializeGrpcResponseBodyIntoErrorJson(t *testing.T) {
t.Run("ok", func(t *testing.T) {
e := &testErrorJson{
Message: "foo",
Code: 500,
}
body, err := json.Marshal(e)
require.NoError(t, err)
eToDeserialize := &testErrorJson{}
errJson := DeserializeGrpcResponseBodyIntoErrorJson(eToDeserialize, body)
require.Equal(t, true, errJson == nil)
assert.Equal(t, "foo", eToDeserialize.Msg())
assert.Equal(t, 500, eToDeserialize.StatusCode())
})
t.Run("error", func(t *testing.T) {
errJson := DeserializeGrpcResponseBodyIntoErrorJson(nil, nil)
require.NotNil(t, errJson)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not unmarshal error"))
})
}
func TestHandleGrpcResponseError(t *testing.T) {
response := &http.Response{
StatusCode: 400,
@@ -181,10 +172,14 @@ func TestHandleGrpcResponseError(t *testing.T) {
writer := httptest.NewRecorder()
errJson := &testErrorJson{
Message: "foo",
Code: 500,
Code: 400,
}
b, err := json.Marshal(errJson)
require.NoError(t, err)
HandleGrpcResponseError(errJson, response, writer)
hasError, e := HandleGrpcResponseError(errJson, response, b, writer)
require.Equal(t, true, e == nil)
assert.Equal(t, true, hasError)
v, ok := writer.Header()["Foo"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")

View File

@@ -1,4 +1,4 @@
package gateway
package apimiddleware
import (
"net/http"
@@ -15,6 +15,7 @@ type ErrorJson interface {
StatusCode() int
SetCode(code int)
Msg() string
SetMsg(msg string)
}
// DefaultErrorJson is a JSON representation of a simple error value, containing only a message and an error code.
@@ -54,3 +55,8 @@ func (e *DefaultErrorJson) Msg() string {
func (e *DefaultErrorJson) SetCode(code int) {
e.Code = code
}
// SetMsg sets the error's underlying message.
func (e *DefaultErrorJson) SetMsg(msg string) {
e.Message = msg
}
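
The ErrorJson interface now also requires SetMsg, so any custom error container has to implement it. A sketch of a hypothetical endpoint-specific error type; embedding DefaultErrorJson is one way to pick up Msg, SetMsg, StatusCode and SetCode while adding extra fields (the Failures field below is an assumption, not part of this change):

import "github.com/prysmaticlabs/prysm/api/gateway/apimiddleware"

// customErrorJson is a hypothetical error container for a single endpoint.
// *customErrorJson satisfies apimiddleware.ErrorJson through the embedded
// DefaultErrorJson's pointer-receiver methods.
type customErrorJson struct {
	apimiddleware.DefaultErrorJson
	Failures []string `json:"failures"`
}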

View File

@@ -10,8 +10,10 @@ import (
"strings"
"time"
"github.com/gorilla/mux"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/api/gateway/apimiddleware"
"github.com/prysmaticlabs/prysm/runtime"
"github.com/rs/cors"
"google.golang.org/grpc"
@@ -25,83 +27,58 @@ var _ runtime.Service = (*Gateway)(nil)
type PbMux struct {
Registrations []PbHandlerRegistration // Protobuf registrations to be registered in Mux.
Patterns []string // URL patterns that will be handled by Mux.
Mux *gwruntime.ServeMux // The mux that will be used for grpc-gateway requests.
Mux *gwruntime.ServeMux // The router that will be used for grpc-gateway requests.
}
// PbHandlerRegistration is a function that registers a protobuf handler.
type PbHandlerRegistration func(context.Context, *gwruntime.ServeMux, *grpc.ClientConn) error
// MuxHandler is a function that implements the mux handler functionality.
type MuxHandler func(http.Handler, http.ResponseWriter, *http.Request)
type MuxHandler func(
apiMiddlewareHandler *apimiddleware.ApiProxyMiddleware,
h http.HandlerFunc,
w http.ResponseWriter,
req *http.Request,
)
// Config parameters for setting up the gateway service.
type config struct {
maxCallRecvMsgSize uint64
remoteCert string
gatewayAddr string
remoteAddr string
allowedOrigins []string
apiMiddlewareEndpointFactory apimiddleware.EndpointFactory
muxHandler MuxHandler
pbHandlers []*PbMux
router *mux.Router
}
// Gateway is the gRPC gateway to serve HTTP JSON traffic as a proxy and forward it to the gRPC server.
type Gateway struct {
conn *grpc.ClientConn
pbHandlers []PbMux
muxHandler MuxHandler
maxCallRecvMsgSize uint64
mux *http.ServeMux
server *http.Server
cancel context.CancelFunc
remoteCert string
gatewayAddr string
apiMiddlewareAddr string
apiMiddlewareEndpointFactory EndpointFactory
ctx context.Context
startFailure error
remoteAddr string
allowedOrigins []string
cfg *config
conn *grpc.ClientConn
server *http.Server
cancel context.CancelFunc
proxy *apimiddleware.ApiProxyMiddleware
ctx context.Context
startFailure error
}
// New returns a new instance of the Gateway.
func New(
ctx context.Context,
pbHandlers []PbMux,
muxHandler MuxHandler,
remoteAddr,
gatewayAddress string,
) *Gateway {
func New(ctx context.Context, opts ...Option) (*Gateway, error) {
g := &Gateway{
pbHandlers: pbHandlers,
muxHandler: muxHandler,
mux: http.NewServeMux(),
gatewayAddr: gatewayAddress,
ctx: ctx,
remoteAddr: remoteAddr,
allowedOrigins: []string{},
ctx: ctx,
cfg: &config{
router: mux.NewRouter(),
},
}
return g
}
// WithMux allows adding a custom http.ServeMux to the gateway.
func (g *Gateway) WithMux(m *http.ServeMux) *Gateway {
g.mux = m
return g
}
// WithAllowedOrigins allows adding a set of allowed origins to the gateway.
func (g *Gateway) WithAllowedOrigins(origins []string) *Gateway {
g.allowedOrigins = origins
return g
}
// WithRemoteCert allows adding a custom certificate to the gateway,
func (g *Gateway) WithRemoteCert(cert string) *Gateway {
g.remoteCert = cert
return g
}
// WithMaxCallRecvMsgSize allows specifying the maximum allowed gRPC message size.
func (g *Gateway) WithMaxCallRecvMsgSize(size uint64) *Gateway {
g.maxCallRecvMsgSize = size
return g
}
// WithApiMiddleware allows adding API Middleware proxy to the gateway.
func (g *Gateway) WithApiMiddleware(address string, endpointFactory EndpointFactory) *Gateway {
g.apiMiddlewareAddr = address
g.apiMiddlewareEndpointFactory = endpointFactory
return g
for _, opt := range opts {
if err := opt(g); err != nil {
return nil, err
}
}
return g, nil
}
// Start the gateway service.
@@ -109,7 +86,7 @@ func (g *Gateway) Start() {
ctx, cancel := context.WithCancel(g.ctx)
g.cancel = cancel
conn, err := g.dial(ctx, "tcp", g.remoteAddr)
conn, err := g.dial(ctx, "tcp", g.cfg.remoteAddr)
if err != nil {
log.WithError(err).Error("Failed to connect to gRPC server")
g.startFailure = err
@@ -117,7 +94,7 @@ func (g *Gateway) Start() {
}
g.conn = conn
for _, h := range g.pbHandlers {
for _, h := range g.cfg.pbHandlers {
for _, r := range h.Registrations {
if err := r(ctx, h.Mux, g.conn); err != nil {
log.WithError(err).Error("Failed to register handler")
@@ -126,35 +103,35 @@ func (g *Gateway) Start() {
}
}
for _, p := range h.Patterns {
g.mux.Handle(p, h.Mux)
g.cfg.router.PathPrefix(p).Handler(h.Mux)
}
}
corsMux := g.corsMiddleware(g.mux)
corsMux := g.corsMiddleware(g.cfg.router)
if g.muxHandler != nil {
g.mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
g.muxHandler(corsMux, w, r)
if g.cfg.apiMiddlewareEndpointFactory != nil && !g.cfg.apiMiddlewareEndpointFactory.IsNil() {
g.registerApiMiddleware()
}
if g.cfg.muxHandler != nil {
g.cfg.router.PathPrefix("/").HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
g.cfg.muxHandler(g.proxy, corsMux.ServeHTTP, w, r)
})
}
g.server = &http.Server{
Addr: g.gatewayAddr,
Addr: g.cfg.gatewayAddr,
Handler: corsMux,
}
go func() {
log.WithField("address", g.gatewayAddr).Info("Starting gRPC gateway")
log.WithField("address", g.cfg.gatewayAddr).Info("Starting gRPC gateway")
if err := g.server.ListenAndServe(); err != http.ErrServerClosed {
log.WithError(err).Error("Failed to start gRPC gateway")
g.startFailure = err
return
}
}()
if g.apiMiddlewareAddr != "" && g.apiMiddlewareEndpointFactory != nil && !g.apiMiddlewareEndpointFactory.IsNil() {
go g.registerApiMiddleware()
}
}
// Status of grpc gateway. Returns an error if this service is unhealthy.
@@ -162,11 +139,9 @@ func (g *Gateway) Status() error {
if g.startFailure != nil {
return g.startFailure
}
if s := g.conn.GetState(); s != connectivity.Ready {
return fmt.Errorf("grpc server is %s", s)
}
return nil
}
@@ -183,18 +158,16 @@ func (g *Gateway) Stop() error {
}
}
}
if g.cancel != nil {
g.cancel()
}
return nil
}
func (g *Gateway) corsMiddleware(h http.Handler) http.Handler {
c := cors.New(cors.Options{
AllowedOrigins: g.allowedOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodOptions},
AllowedOrigins: g.cfg.allowedOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
AllowCredentials: true,
MaxAge: 600,
AllowedHeaders: []string{"*"},
@@ -236,8 +209,8 @@ func (g *Gateway) dial(ctx context.Context, network, addr string) (*grpc.ClientC
// "addr" must be a valid TCP address with a port number.
func (g *Gateway) dialTCP(ctx context.Context, addr string) (*grpc.ClientConn, error) {
security := grpc.WithInsecure()
if len(g.remoteCert) > 0 {
creds, err := credentials.NewClientTLSFromFile(g.remoteCert, "")
if len(g.cfg.remoteCert) > 0 {
creds, err := credentials.NewClientTLSFromFile(g.cfg.remoteCert, "")
if err != nil {
return nil, err
}
@@ -245,7 +218,7 @@ func (g *Gateway) dialTCP(ctx context.Context, addr string) (*grpc.ClientConn, e
}
opts := []grpc.DialOption{
security,
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.maxCallRecvMsgSize))),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.cfg.maxCallRecvMsgSize))),
}
return grpc.DialContext(ctx, addr, opts...)
@@ -266,21 +239,16 @@ func (g *Gateway) dialUnix(ctx context.Context, addr string) (*grpc.ClientConn,
opts := []grpc.DialOption{
grpc.WithInsecure(),
grpc.WithContextDialer(f),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.maxCallRecvMsgSize))),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.cfg.maxCallRecvMsgSize))),
}
return grpc.DialContext(ctx, addr, opts...)
}
func (g *Gateway) registerApiMiddleware() {
proxy := &ApiProxyMiddleware{
GatewayAddress: g.gatewayAddr,
ProxyAddress: g.apiMiddlewareAddr,
EndpointCreator: g.apiMiddlewareEndpointFactory,
}
log.WithField("API middleware address", g.apiMiddlewareAddr).Info("Starting API middleware")
if err := proxy.Run(); err != http.ErrServerClosed {
log.WithError(err).Error("Failed to start API middleware")
g.startFailure = err
return
g.proxy = &apimiddleware.ApiProxyMiddleware{
GatewayAddress: g.cfg.gatewayAddr,
EndpointCreator: g.cfg.apiMiddlewareEndpointFactory,
}
log.Info("Starting API middleware")
g.proxy.Run(g.cfg.router)
}


@@ -9,9 +9,11 @@ import (
"net/url"
"testing"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/api/gateway/apimiddleware"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli/v2"
)
@@ -23,7 +25,7 @@ func (*mockEndpointFactory) Paths() []string {
return []string{}
}
func (*mockEndpointFactory) Create(_ string) (*Endpoint, error) {
func (*mockEndpointFactory) Create(_ string) (*apimiddleware.Endpoint, error) {
return nil, nil
}
@@ -32,34 +34,36 @@ func (*mockEndpointFactory) IsNil() bool {
}
func TestGateway_Customized(t *testing.T) {
mux := http.NewServeMux()
r := mux.NewRouter()
cert := "cert"
origins := []string{"origin"}
size := uint64(100)
middlewareAddr := "middleware"
endpointFactory := &mockEndpointFactory{}
g := New(
context.Background(),
[]PbMux{},
func(handler http.Handler, writer http.ResponseWriter, request *http.Request) {
opts := []Option{
WithRouter(r),
WithRemoteCert(cert),
WithAllowedOrigins(origins),
WithMaxCallRecvMsgSize(size),
WithApiMiddleware(endpointFactory),
WithMuxHandler(func(
_ *apimiddleware.ApiProxyMiddleware,
_ http.HandlerFunc,
_ http.ResponseWriter,
_ *http.Request,
) {
}),
}
},
"",
"",
).WithMux(mux).
WithRemoteCert(cert).
WithAllowedOrigins(origins).
WithMaxCallRecvMsgSize(size).
WithApiMiddleware(middlewareAddr, endpointFactory)
g, err := New(context.Background(), opts...)
require.NoError(t, err)
assert.Equal(t, mux, g.mux)
assert.Equal(t, cert, g.remoteCert)
require.Equal(t, 1, len(g.allowedOrigins))
assert.Equal(t, origins[0], g.allowedOrigins[0])
assert.Equal(t, size, g.maxCallRecvMsgSize)
assert.Equal(t, middlewareAddr, g.apiMiddlewareAddr)
assert.Equal(t, endpointFactory, g.apiMiddlewareEndpointFactory)
assert.Equal(t, r, g.cfg.router)
assert.Equal(t, cert, g.cfg.remoteCert)
require.Equal(t, 1, len(g.cfg.allowedOrigins))
assert.Equal(t, origins[0], g.cfg.allowedOrigins[0])
assert.Equal(t, size, g.cfg.maxCallRecvMsgSize)
assert.Equal(t, endpointFactory, g.cfg.apiMiddlewareEndpointFactory)
}
func TestGateway_StartStop(t *testing.T) {
@@ -75,23 +79,27 @@ func TestGateway_StartStop(t *testing.T) {
selfAddress := fmt.Sprintf("%s:%d", rpcHost, ctx.Int(flags.RPCPort.Name))
gatewayAddress := fmt.Sprintf("%s:%d", gatewayHost, gatewayPort)
g := New(
ctx.Context,
[]PbMux{},
func(handler http.Handler, writer http.ResponseWriter, request *http.Request) {
opts := []Option{
WithGatewayAddr(gatewayAddress),
WithRemoteAddr(selfAddress),
WithMuxHandler(func(
_ *apimiddleware.ApiProxyMiddleware,
_ http.HandlerFunc,
_ http.ResponseWriter,
_ *http.Request,
) {
}),
}
},
selfAddress,
gatewayAddress,
)
g, err := New(context.Background(), opts...)
require.NoError(t, err)
g.Start()
go func() {
require.LogsContain(t, hook, "Starting gRPC gateway")
require.LogsDoNotContain(t, hook, "Starting API middleware")
}()
err := g.Stop()
err = g.Stop()
require.NoError(t, err)
}
@@ -106,15 +114,15 @@ func TestGateway_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
selfAddress := fmt.Sprintf("%s:%d", rpcHost, ctx.Int(flags.RPCPort.Name))
gatewayAddress := fmt.Sprintf("%s:%d", gatewayHost, gatewayPort)
g := New(
ctx.Context,
[]PbMux{},
/* muxHandler */ nil,
selfAddress,
gatewayAddress,
)
opts := []Option{
WithGatewayAddr(gatewayAddress),
WithRemoteAddr(selfAddress),
}
g, err := New(context.Background(), opts...)
require.NoError(t, err)
writer := httptest.NewRecorder()
g.mux.ServeHTTP(writer, &http.Request{Method: "GET", Host: "localhost", URL: &url.URL{Path: "/foo"}})
g.cfg.router.ServeHTTP(writer, &http.Request{Method: "GET", Host: "localhost", URL: &url.URL{Path: "/foo"}})
assert.Equal(t, http.StatusNotFound, writer.Code)
}

api/gateway/options.go

@@ -0,0 +1,81 @@
package gateway
import (
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/api/gateway/apimiddleware"
)
type Option func(g *Gateway) error
func (g *Gateway) SetRouter(r *mux.Router) *Gateway {
g.cfg.router = r
return g
}
func WithPbHandlers(handlers []*PbMux) Option {
return func(g *Gateway) error {
g.cfg.pbHandlers = handlers
return nil
}
}
func WithMuxHandler(m MuxHandler) Option {
return func(g *Gateway) error {
g.cfg.muxHandler = m
return nil
}
}
func WithGatewayAddr(addr string) Option {
return func(g *Gateway) error {
g.cfg.gatewayAddr = addr
return nil
}
}
func WithRemoteAddr(addr string) Option {
return func(g *Gateway) error {
g.cfg.remoteAddr = addr
return nil
}
}
// WithRouter allows adding a custom mux router to the gateway.
func WithRouter(r *mux.Router) Option {
return func(g *Gateway) error {
g.cfg.router = r
return nil
}
}
// WithAllowedOrigins allows adding a set of allowed origins to the gateway.
func WithAllowedOrigins(origins []string) Option {
return func(g *Gateway) error {
g.cfg.allowedOrigins = origins
return nil
}
}
// WithRemoteCert allows adding a custom certificate to the gateway.
func WithRemoteCert(cert string) Option {
return func(g *Gateway) error {
g.cfg.remoteCert = cert
return nil
}
}
// WithMaxCallRecvMsgSize allows specifying the maximum allowed gRPC message size.
func WithMaxCallRecvMsgSize(size uint64) Option {
return func(g *Gateway) error {
g.cfg.maxCallRecvMsgSize = size
return nil
}
}
// WithApiMiddleware allows adding an API middleware proxy to the gateway.
func WithApiMiddleware(endpointFactory apimiddleware.EndpointFactory) Option {
return func(g *Gateway) error {
g.cfg.apiMiddlewareEndpointFactory = endpointFactory
return nil
}
}
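For orientation, here is a minimal, hypothetical sketch of how these functional options might be combined to build a gateway, mirroring the customized-gateway test above. The github.com/prysmaticlabs/prysm/api/gateway import path and the listen/remote addresses are assumptions for illustration and are not taken from this diff.

	package main

	import (
		"context"
		"net/http"

		"github.com/gorilla/mux"
		"github.com/prysmaticlabs/prysm/api/gateway"
		"github.com/prysmaticlabs/prysm/api/gateway/apimiddleware"
	)

	func buildGateway(ctx context.Context) (*gateway.Gateway, error) {
		r := mux.NewRouter() // router the gateway registers its handlers on
		opts := []gateway.Option{
			gateway.WithRouter(r),
			gateway.WithGatewayAddr("127.0.0.1:3500"), // assumed HTTP listen address
			gateway.WithRemoteAddr("127.0.0.1:4000"),  // assumed gRPC server address
			gateway.WithAllowedOrigins([]string{"*"}),
			gateway.WithMaxCallRecvMsgSize(1 << 22),
			gateway.WithMuxHandler(func(
				_ *apimiddleware.ApiProxyMiddleware,
				h http.HandlerFunc,
				w http.ResponseWriter,
				req *http.Request,
			) {
				h(w, req) // pass straight through to the gRPC-gateway mux
			}),
		}
		// New applies each option in turn and returns an error if any option fails.
		return gateway.New(ctx, opts...)
	}

After construction, calling g.Start() (as in TestGateway_StartStop above) spins up the HTTP server and, when an endpoint factory was supplied via WithApiMiddleware, the API middleware proxy as well.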


@@ -20,8 +20,8 @@ go_test(
srcs = ["grpcutils_test.go"],
embed = [":go_default_library"],
deps = [
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_grpc//:go_default_library",


@@ -7,8 +7,8 @@ import (
"testing"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"


@@ -6,7 +6,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/api/pagination",
visibility = ["//visibility:public"],
deps = [
"//shared/params:go_default_library",
"//config/params:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
@@ -16,7 +16,7 @@ go_test(
srcs = ["pagination_test.go"],
deps = [
":go_default_library",
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)


@@ -6,7 +6,7 @@ import (
"strconv"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/config/params"
)
// StartAndEndPage takes in the requested page token, wanted page size, total page size.


@@ -4,8 +4,8 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/api/pagination"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestStartAndEndPage(t *testing.T) {


@@ -24,9 +24,9 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//shared/testutil:go_default_library",
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_stretchr_testify//assert:go_default_library",
],


@@ -7,7 +7,7 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/async"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
log "github.com/sirupsen/logrus"
)


@@ -7,9 +7,9 @@ import (
"time"
"github.com/prysmaticlabs/prysm/async"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
)
func TestDebounce_NoEvents(t *testing.T) {
@@ -30,7 +30,7 @@ func TestDebounce_NoEvents(t *testing.T) {
})
wg.Done()
}()
if testutil.WaitTimeout(wg, interval*2) {
if util.WaitTimeout(wg, interval*2) {
t.Fatalf("Test should have exited by now, timed out")
}
assert.Equal(t, 0, timesHandled, "Wrong number of handled calls")
@@ -66,7 +66,7 @@ func TestDebounce_CtxClosing(t *testing.T) {
})
wg.Done()
}()
if testutil.WaitTimeout(wg, interval*2) {
if util.WaitTimeout(wg, interval*2) {
t.Fatalf("Test should have exited by now, timed out")
}
assert.Equal(t, 0, timesHandled, "Wrong number of handled calls")


@@ -23,7 +23,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)


@@ -23,7 +23,7 @@ import (
"testing"
"time"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/testing/assert"
)
func TestFeedPanics(t *testing.T) {


@@ -23,7 +23,7 @@ import (
"testing"
"time"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
)
var errInts = errors.New("error in subscribeInts")


@@ -1,4 +1,4 @@
// Package runutil includes helpers for scheduling runnable, periodic functions.
// Package async includes helpers for scheduling runnable, periodic functions and contains useful helpers for converting multi-processor computation.
package async
import (


@@ -50,7 +50,7 @@ func TestGetChan(t *testing.T) {
assert.Equal(ch1, ch3)
}
func TestLockUnlock(t *testing.T) {
func TestLockUnlock(_ *testing.T) {
var wg sync.WaitGroup
wg.Add(5)


@@ -1,5 +1,3 @@
// Package mputil contains useful helpers for converting
// multi-processor computation.
package async
import (


@@ -6,8 +6,8 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/async"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestDouble(t *testing.T) {


@@ -10,6 +10,7 @@ go_library(
"init_sync_process_block.go",
"log.go",
"metrics.go",
"options.go",
"process_attestation.go",
"process_attestation_helpers.go",
"process_block.go",
@@ -17,23 +18,28 @@ go_library(
"receive_attestation.go",
"receive_block.go",
"service.go",
"state_balance_cache.go",
"weak_subjectivity_checks.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain",
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/beacon-chain:__subpackages__",
"//testing/fuzz:__pkg__",
"//testing/slasher/simulator:__pkg__",
],
deps = [
"//async:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filters:go_default_library",
@@ -48,15 +54,15 @@ go_library(
"//beacon-chain/state/stategen:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/block:go_default_library",
"//runtime/version:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_emicklei_dot//:go_default_library",
@@ -90,6 +96,7 @@ go_test(
"init_test.go",
"log_test.go",
"metrics_test.go",
"mock_test.go",
"process_attestation_test.go",
"process_block_test.go",
"receive_attestation_test.go",
@@ -112,13 +119,13 @@ go_test(
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//beacon-chain/state/v1:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/wrapper:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
@@ -136,6 +143,7 @@ go_test(
"chain_info_norace_test.go",
"checktags_test.go",
"init_test.go",
"mock_test.go",
"receive_block_test.go",
"service_norace_test.go",
],
@@ -161,13 +169,13 @@ go_test(
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/wrapper:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",


@@ -8,10 +8,10 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"go.opencensus.io/trace"
)
@@ -182,7 +182,7 @@ func (s *Service) HeadValidatorsIndices(ctx context.Context, epoch types.Epoch)
if !s.hasHeadState() {
return []types.ValidatorIndex{}, nil
}
return helpers.ActiveValidatorIndices(s.headState(ctx), epoch)
return helpers.ActiveValidatorIndices(ctx, s.headState(ctx), epoch)
}
// HeadSeed returns the seed from the head view of a given epoch.
@@ -293,15 +293,19 @@ func (s *Service) ChainHeads() ([][32]byte, []types.Slot) {
func (s *Service) HeadPublicKeyToValidatorIndex(ctx context.Context, pubKey [48]byte) (types.ValidatorIndex, bool) {
s.headLock.RLock()
defer s.headLock.RUnlock()
if !s.hasHeadState() {
return 0, false
}
return s.headState(ctx).ValidatorIndexByPubkey(pubKey)
}
// HeadValidatorIndexToPublicKey returns the pubkey of the validator `index` in current head state.
func (s *Service) HeadValidatorIndexToPublicKey(ctx context.Context, index types.ValidatorIndex) ([48]byte, error) {
func (s *Service) HeadValidatorIndexToPublicKey(_ context.Context, index types.ValidatorIndex) ([48]byte, error) {
s.headLock.RLock()
defer s.headLock.RUnlock()
if !s.hasHeadState() {
return [48]byte{}, nil
}
v, err := s.headValidatorAtIndex(index)
if err != nil {
return [48]byte{}, err


@@ -8,13 +8,13 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestHeadSlot_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &Config{BeaconDB: beaconDB},
cfg: &config{BeaconDB: beaconDB},
}
go func() {
require.NoError(t, s.saveHead(context.Background(), [32]byte{}))
@@ -25,7 +25,7 @@ func TestHeadSlot_DataRace(t *testing.T) {
func TestHeadRoot_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
head: &head{root: [32]byte{'A'}},
}
go func() {
@@ -38,7 +38,7 @@ func TestHeadRoot_DataRace(t *testing.T) {
func TestHeadBlock_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
head: &head{block: wrapper.WrappedPhase0SignedBeaconBlock(&ethpb.SignedBeaconBlock{})},
}
go func() {
@@ -51,7 +51,7 @@ func TestHeadBlock_DataRace(t *testing.T) {
func TestHeadState_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
}
go func() {
require.NoError(t, s.saveHead(context.Background(), [32]byte{}))


@@ -10,13 +10,13 @@ import (
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"google.golang.org/protobuf/proto"
)
@@ -56,8 +56,8 @@ func TestFinalizedCheckpt_GenesisRootOk(t *testing.T) {
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c := setupBeaconChain(t, beaconDB)
c.finalizedCheckpt = cp
c.genesisRoot = genesisRoot
assert.DeepEqual(t, c.genesisRoot[:], c.FinalizedCheckpt().Root)
c.originBlockRoot = genesisRoot
assert.DeepEqual(t, c.originBlockRoot[:], c.FinalizedCheckpt().Root)
}
func TestCurrentJustifiedCheckpt_CanRetrieve(t *testing.T) {
@@ -77,8 +77,8 @@ func TestJustifiedCheckpt_GenesisRootOk(t *testing.T) {
genesisRoot := [32]byte{'B'}
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c.justifiedCheckpt = cp
c.genesisRoot = genesisRoot
assert.DeepEqual(t, c.genesisRoot[:], c.CurrentJustifiedCheckpt().Root)
c.originBlockRoot = genesisRoot
assert.DeepEqual(t, c.originBlockRoot[:], c.CurrentJustifiedCheckpt().Root)
}
func TestPreviousJustifiedCheckpt_CanRetrieve(t *testing.T) {
@@ -98,8 +98,8 @@ func TestPrevJustifiedCheckpt_GenesisRootOk(t *testing.T) {
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c := setupBeaconChain(t, beaconDB)
c.prevJustifiedCheckpt = cp
c.genesisRoot = genesisRoot
assert.DeepEqual(t, c.genesisRoot[:], c.PreviousJustifiedCheckpt().Root)
c.originBlockRoot = genesisRoot
assert.DeepEqual(t, c.originBlockRoot[:], c.PreviousJustifiedCheckpt().Root)
}
func TestHeadSlot_CanRetrieve(t *testing.T) {
@@ -120,9 +120,9 @@ func TestHeadRoot_CanRetrieve(t *testing.T) {
func TestHeadRoot_UseDB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
c := &Service{cfg: &Config{BeaconDB: beaconDB}}
c := &Service{cfg: &config{BeaconDB: beaconDB}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(b)))
@@ -134,7 +134,7 @@ func TestHeadRoot_UseDB(t *testing.T) {
}
func TestHeadBlock_CanRetrieve(t *testing.T) {
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
b.Block.Slot = 1
s, err := v1.InitializeFromProto(&ethpb.BeaconState{})
require.NoError(t, err)
@@ -217,7 +217,7 @@ func TestIsCanonical_Ok(t *testing.T) {
beaconDB := testDB.SetupDB(t)
c := setupBeaconChain(t, beaconDB)
blk := testutil.NewBeaconBlock()
blk := util.NewBeaconBlock()
blk.Block.Slot = 0
root, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
@@ -233,7 +233,7 @@ func TestIsCanonical_Ok(t *testing.T) {
}
func TestService_HeadValidatorsIndices(t *testing.T) {
s, _ := testutil.DeterministicGenesisState(t, 10)
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
c.head = &head{}
@@ -248,7 +248,7 @@ func TestService_HeadValidatorsIndices(t *testing.T) {
}
func TestService_HeadSeed(t *testing.T) {
s, _ := testutil.DeterministicGenesisState(t, 1)
s, _ := util.DeterministicGenesisState(t, 1)
c := &Service{}
seed, err := helpers.Seed(s, 0, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
@@ -265,7 +265,7 @@ func TestService_HeadSeed(t *testing.T) {
}
func TestService_HeadGenesisValidatorRoot(t *testing.T) {
s, _ := testutil.DeterministicGenesisState(t, 1)
s, _ := util.DeterministicGenesisState(t, 1)
c := &Service{}
c.head = &head{}
@@ -278,14 +278,14 @@ func TestService_HeadGenesisValidatorRoot(t *testing.T) {
}
func TestService_ProtoArrayStore(t *testing.T) {
c := &Service{cfg: &Config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}}
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}}
p := c.ProtoArrayStore()
require.Equal(t, 0, int(p.FinalizedEpoch()))
}
func TestService_ChainHeads(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &Config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}}
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}}
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{}, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 0, 0))
require.NoError(t, c.cfg.ForkChoiceStore.ProcessBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{}, 0, 0))
@@ -298,7 +298,7 @@ func TestService_ChainHeads(t *testing.T) {
}
func TestService_HeadPublicKeyToValidatorIndex(t *testing.T) {
s, _ := testutil.DeterministicGenesisState(t, 10)
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
c.head = &head{state: s}
@@ -313,8 +313,22 @@ func TestService_HeadPublicKeyToValidatorIndex(t *testing.T) {
require.Equal(t, types.ValidatorIndex(0), i)
}
func TestService_HeadPublicKeyToValidatorIndexNil(t *testing.T) {
c := &Service{}
c.head = nil
idx, e := c.HeadPublicKeyToValidatorIndex(context.Background(), [48]byte{})
require.Equal(t, false, e)
require.Equal(t, types.ValidatorIndex(0), idx)
c.head = &head{state: nil}
i, e := c.HeadPublicKeyToValidatorIndex(context.Background(), [48]byte{})
require.Equal(t, false, e)
require.Equal(t, types.ValidatorIndex(0), i)
}
func TestService_HeadValidatorIndexToPublicKey(t *testing.T) {
s, _ := testutil.DeterministicGenesisState(t, 10)
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
c.head = &head{state: s}
@@ -326,3 +340,17 @@ func TestService_HeadValidatorIndexToPublicKey(t *testing.T) {
require.Equal(t, bytesutil.ToBytes48(v.PublicKey), p)
}
func TestService_HeadValidatorIndexToPublicKeyNil(t *testing.T) {
c := &Service{}
c.head = nil
p, err := c.HeadValidatorIndexToPublicKey(context.Background(), 0)
require.NoError(t, err)
require.Equal(t, [48]byte{}, p)
c.head = &head{state: nil}
p, err = c.HeadValidatorIndexToPublicKey(context.Background(), 0)
require.NoError(t, err)
require.Equal(t, [48]byte{}, p)
}


@@ -7,17 +7,16 @@ import (
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpbv1 "github.com/prysmaticlabs/prysm/proto/eth/v1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -42,19 +41,16 @@ func (s *Service) updateHead(ctx context.Context, balances []uint64) error {
// ensure head gets its best justified info.
if s.bestJustifiedCheckpt.Epoch > s.justifiedCheckpt.Epoch {
s.justifiedCheckpt = s.bestJustifiedCheckpt
if err := s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root)); err != nil {
return err
}
}
// Get head from the fork choice service.
f := s.finalizedCheckpt
j := s.justifiedCheckpt
// To get head before the first justified epoch, the fork choice will start with genesis root
// To get head before the first justified epoch, the fork choice will start with origin root
// instead of zero hashes.
headStartRoot := bytesutil.ToBytes32(j.Root)
if headStartRoot == params.BeaconConfig().ZeroHash {
headStartRoot = s.genesisRoot
headStartRoot = s.originBlockRoot
}
// In order to process head, fork choice store requires justified info.
@@ -107,8 +103,8 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
if err != nil {
return err
}
if newHeadBlock == nil || newHeadBlock.IsNil() || newHeadBlock.Block().IsNil() {
return errors.New("cannot save nil head block")
if err := helpers.BeaconBlockIsNil(newHeadBlock); err != nil {
return err
}
// Get the new head state from cached state or DB.
@@ -141,7 +137,7 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
NewHeadBlock: headRoot[:],
OldHeadState: oldStateRoot,
NewHeadState: newStateRoot,
Epoch: core.SlotToEpoch(newHeadSlot),
Epoch: slots.ToEpoch(newHeadSlot),
},
})
@@ -175,7 +171,7 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
// root in DB. With the inception of the initial-sync-cache-state flag, it uses the finalized
// checkpoint as an anchor to resume sync, so the head no longer needs to be saved on a per-slot basis.
func (s *Service) saveHeadNoDB(ctx context.Context, b block.SignedBeaconBlock, r [32]byte, hs state.BeaconState) error {
if err := helpers.VerifyNilBeaconBlock(b); err != nil {
if err := helpers.BeaconBlockIsNil(b); err != nil {
return err
}
cachedHeadRoot, err := s.HeadRoot(ctx)
@@ -248,7 +244,7 @@ func (s *Service) headBlock() block.SignedBeaconBlock {
// It does a full copy on head state for immutability.
// This is a lock free version.
func (s *Service) headState(ctx context.Context) state.BeaconState {
ctx, span := trace.StartSpan(ctx, "blockChain.headState")
_, span := trace.StartSpan(ctx, "blockChain.headState")
defer span.End()
return s.head.state.Copy()
@@ -273,57 +269,6 @@ func (s *Service) hasHeadState() bool {
return s.head != nil && s.head.state != nil
}
// This caches justified state balances to be used for fork choice.
func (s *Service) cacheJustifiedStateBalances(ctx context.Context, justifiedRoot [32]byte) error {
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
s.clearInitSyncBlocks()
var justifiedState state.BeaconState
var err error
if justifiedRoot == s.genesisRoot {
justifiedState, err = s.cfg.BeaconDB.GenesisState(ctx)
if err != nil {
return err
}
} else {
justifiedState, err = s.cfg.StateGen.StateByRoot(ctx, justifiedRoot)
if err != nil {
return err
}
}
if justifiedState == nil || justifiedState.IsNil() {
return errors.New("justified state can't be nil")
}
epoch := core.CurrentEpoch(justifiedState)
justifiedBalances := make([]uint64, justifiedState.NumValidators())
if err := justifiedState.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
if helpers.IsActiveValidatorUsingTrie(val, epoch) {
justifiedBalances[idx] = val.EffectiveBalance()
} else {
justifiedBalances[idx] = 0
}
return nil
}); err != nil {
return err
}
s.justifiedBalancesLock.Lock()
defer s.justifiedBalancesLock.Unlock()
s.justifiedBalances = justifiedBalances
return nil
}
func (s *Service) getJustifiedBalances() []uint64 {
s.justifiedBalancesLock.RLock()
defer s.justifiedBalancesLock.RUnlock()
return s.justifiedBalances
}
// Notifies a common event feed of a new chain head event. Called right after a new
// chain head is determined, set, and saved to disk.
func (s *Service) notifyNewHeadEvent(
@@ -332,19 +277,19 @@ func (s *Service) notifyNewHeadEvent(
newHeadStateRoot,
newHeadRoot []byte,
) error {
previousDutyDependentRoot := s.genesisRoot[:]
currentDutyDependentRoot := s.genesisRoot[:]
previousDutyDependentRoot := s.originBlockRoot[:]
currentDutyDependentRoot := s.originBlockRoot[:]
var previousDutyEpoch types.Epoch
currentDutyEpoch := core.SlotToEpoch(newHeadSlot)
currentDutyEpoch := slots.ToEpoch(newHeadSlot)
if currentDutyEpoch > 0 {
previousDutyEpoch = currentDutyEpoch.Sub(1)
}
currentDutySlot, err := core.StartSlot(currentDutyEpoch)
currentDutySlot, err := slots.EpochStart(currentDutyEpoch)
if err != nil {
return errors.Wrap(err, "could not get duty slot")
}
previousDutySlot, err := core.StartSlot(previousDutyEpoch)
previousDutySlot, err := slots.EpochStart(previousDutyEpoch)
if err != nil {
return errors.Wrap(err, "could not get duty slot")
}
@@ -366,7 +311,7 @@ func (s *Service) notifyNewHeadEvent(
Slot: newHeadSlot,
Block: newHeadRoot,
State: newHeadStateRoot,
EpochTransition: core.IsEpochEnd(newHeadSlot),
EpochTransition: slots.IsEpochStart(newHeadSlot),
PreviousDutyDependentRoot: previousDutyDependentRoot,
CurrentDutyDependentRoot: currentDutyDependentRoot,
},


@@ -8,13 +8,14 @@ import (
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/async"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/time/slots"
)
// Initialize the state cache for sync committees.
@@ -61,14 +62,14 @@ func (s *Service) HeadSyncContributionProofDomain(ctx context.Context, slot type
// rather than for the range
// [compute_start_slot_at_epoch(epoch), compute_start_slot_at_epoch(epoch) + SLOTS_PER_EPOCH)
func (s *Service) HeadSyncCommitteeIndices(ctx context.Context, index types.ValidatorIndex, slot types.Slot) ([]types.CommitteeIndex, error) {
nextSlotEpoch := core.SlotToEpoch(slot + 1)
currentEpoch := core.SlotToEpoch(slot)
nextSlotEpoch := slots.ToEpoch(slot + 1)
currentEpoch := slots.ToEpoch(slot)
switch {
case core.SyncCommitteePeriod(nextSlotEpoch) == core.SyncCommitteePeriod(currentEpoch):
case slots.SyncCommitteePeriod(nextSlotEpoch) == slots.SyncCommitteePeriod(currentEpoch):
return s.headCurrentSyncCommitteeIndices(ctx, index, slot)
// At sync committee period boundary, validator should sample the next epoch sync committee.
case core.SyncCommitteePeriod(nextSlotEpoch) == core.SyncCommitteePeriod(currentEpoch)+1:
case slots.SyncCommitteePeriod(nextSlotEpoch) == slots.SyncCommitteePeriod(currentEpoch)+1:
return s.headNextSyncCommitteeIndices(ctx, index, slot)
default:
// Impossible condition.
@@ -104,11 +105,11 @@ func (s *Service) HeadSyncCommitteePubKeys(ctx context.Context, slot types.Slot,
return nil, err
}
nextSlotEpoch := core.SlotToEpoch(headState.Slot() + 1)
currEpoch := core.SlotToEpoch(headState.Slot())
nextSlotEpoch := slots.ToEpoch(headState.Slot() + 1)
currEpoch := slots.ToEpoch(headState.Slot())
var syncCommittee *ethpb.SyncCommittee
if currEpoch == nextSlotEpoch || core.SyncCommitteePeriod(currEpoch) == core.SyncCommitteePeriod(nextSlotEpoch) {
if currEpoch == nextSlotEpoch || slots.SyncCommitteePeriod(currEpoch) == slots.SyncCommitteePeriod(nextSlotEpoch) {
syncCommittee, err = headState.CurrentSyncCommittee()
if err != nil {
return nil, err
@@ -129,7 +130,7 @@ func (s *Service) domainWithHeadState(ctx context.Context, slot types.Slot, doma
if err != nil {
return nil, err
}
return helpers.Domain(headState.Fork(), core.SlotToEpoch(headState.Slot()), domain, headState.GenesisValidatorRoot())
return signing.Domain(headState.Fork(), slots.ToEpoch(headState.Slot()), domain, headState.GenesisValidatorRoot())
}
// returns the head state that is advanced up to `slot`, using the `syncCommitteeHeadState` cache keyed by `slot`.


@@ -6,19 +6,19 @@ import (
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/signing"
dbtest "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/prysmaticlabs/prysm/time/slots"
)
func TestService_headSyncCommitteeFetcher_Errors(t *testing.T) {
beaconDB := dbtest.SetupDB(t)
c := &Service{
cfg: &Config{
cfg: &config{
StateGen: stategen.New(beaconDB),
},
}
@@ -36,7 +36,7 @@ func TestService_headSyncCommitteeFetcher_Errors(t *testing.T) {
func TestService_HeadDomainFetcher_Errors(t *testing.T) {
beaconDB := dbtest.SetupDB(t)
c := &Service{
cfg: &Config{
cfg: &config{
StateGen: stategen.New(beaconDB),
},
}
@@ -52,7 +52,7 @@ func TestService_HeadDomainFetcher_Errors(t *testing.T) {
}
func TestService_HeadSyncCommitteeIndices(t *testing.T) {
s, _ := testutil.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c.head = &head{state: s}
@@ -75,7 +75,7 @@ func TestService_HeadSyncCommitteeIndices(t *testing.T) {
}
func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
s, _ := testutil.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c.head = &head{state: s}
@@ -89,7 +89,7 @@ func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
}
func TestService_headNextSyncCommitteeIndices(t *testing.T) {
s, _ := testutil.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c.head = &head{state: s}
@@ -103,7 +103,7 @@ func TestService_headNextSyncCommitteeIndices(t *testing.T) {
}
func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
s, _ := testutil.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c.head = &head{state: s}
@@ -118,11 +118,11 @@ func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
}
func TestService_HeadSyncCommitteeDomain(t *testing.T) {
s, _ := testutil.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c.head = &head{state: s}
wanted, err := helpers.Domain(s.Fork(), core.SlotToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommittee, s.GenesisValidatorRoot())
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommittee, s.GenesisValidatorRoot())
require.NoError(t, err)
d, err := c.HeadSyncCommitteeDomain(context.Background(), 0)
@@ -132,11 +132,11 @@ func TestService_HeadSyncCommitteeDomain(t *testing.T) {
}
func TestService_HeadSyncContributionProofDomain(t *testing.T) {
s, _ := testutil.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c.head = &head{state: s}
wanted, err := helpers.Domain(s.Fork(), core.SlotToEpoch(s.Slot()), params.BeaconConfig().DomainContributionAndProof, s.GenesisValidatorRoot())
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainContributionAndProof, s.GenesisValidatorRoot())
require.NoError(t, err)
d, err := c.HeadSyncContributionProofDomain(context.Background(), 0)
@@ -146,11 +146,11 @@ func TestService_HeadSyncContributionProofDomain(t *testing.T) {
}
func TestService_HeadSyncSelectionProofDomain(t *testing.T) {
s, _ := testutil.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c.head = &head{state: s}
wanted, err := helpers.Domain(s.Fork(), core.SlotToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommitteeSelectionProof, s.GenesisValidatorRoot())
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommitteeSelectionProof, s.GenesisValidatorRoot())
require.NoError(t, err)
d, err := c.HeadSyncSelectionProofDomain(context.Background(), 0)
@@ -164,7 +164,7 @@ func TestSyncCommitteeHeadStateCache_RoundTrip(t *testing.T) {
t.Cleanup(func() {
syncCommitteeHeadStateCache = cache.NewSyncCommitteeHeadState()
})
beaconState, _ := testutil.DeterministicGenesisStateAltair(t, 100)
beaconState, _ := util.DeterministicGenesisStateAltair(t, 100)
require.NoError(t, beaconState.SetSlot(100))
cachedState, err := c.Get(101)
require.ErrorContains(t, cache.ErrNotFound.Error(), err)


@@ -8,16 +8,16 @@ import (
types "github.com/prysmaticlabs/eth2-types"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
ethpbv1 "github.com/prysmaticlabs/prysm/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/prysmaticlabs/prysm/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -38,9 +38,9 @@ func TestSaveHead_Different(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
testutil.NewBeaconBlock()
util.NewBeaconBlock()
oldBlock := wrapper.WrappedPhase0SignedBeaconBlock(
testutil.NewBeaconBlock(),
util.NewBeaconBlock(),
)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), oldBlock))
oldRoot, err := oldBlock.Block().HashTreeRoot()
@@ -51,14 +51,14 @@ func TestSaveHead_Different(t *testing.T) {
block: oldBlock,
}
newHeadSignedBlock := testutil.NewBeaconBlock()
newHeadSignedBlock := util.NewBeaconBlock()
newHeadSignedBlock.Block.Slot = 1
newHeadBlock := newHeadSignedBlock.Block
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(newHeadSignedBlock)))
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
headState, err := testutil.NewBeaconState()
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
@@ -81,7 +81,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
service := setupBeaconChain(t, beaconDB)
oldBlock := wrapper.WrappedPhase0SignedBeaconBlock(
testutil.NewBeaconBlock(),
util.NewBeaconBlock(),
)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), oldBlock))
oldRoot, err := oldBlock.Block().HashTreeRoot()
@@ -93,7 +93,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
}
reorgChainParent := [32]byte{'B'}
newHeadSignedBlock := testutil.NewBeaconBlock()
newHeadSignedBlock := util.NewBeaconBlock()
newHeadSignedBlock.Block.Slot = 1
newHeadSignedBlock.Block.ParentRoot = reorgChainParent[:]
newHeadBlock := newHeadSignedBlock.Block
@@ -101,7 +101,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(newHeadSignedBlock)))
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
headState, err := testutil.NewBeaconState()
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
@@ -123,20 +123,22 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
func TestCacheJustifiedStateBalances_CanCache(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
ctx := context.Background()
state, _ := testutil.DeterministicGenesisState(t, 100)
state, _ := util.DeterministicGenesisState(t, 100)
r := [32]byte{'a'}
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: r[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), state, r))
require.NoError(t, service.cacheJustifiedStateBalances(context.Background(), r))
require.DeepEqual(t, service.getJustifiedBalances(), state.Balances(), "Incorrect justified balances")
balances, err := service.justifiedBalances.get(ctx, r)
require.NoError(t, err)
require.DeepEqual(t, balances, state.Balances(), "Incorrect justified balances")
}
func TestUpdateHead_MissingJustifiedRoot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(b)))
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -150,13 +152,13 @@ func TestUpdateHead_MissingJustifiedRoot(t *testing.T) {
func Test_notifyNewHeadEvent(t *testing.T) {
t.Run("genesis_state_root", func(t *testing.T) {
bState, _ := testutil.DeterministicGenesisState(t, 10)
bState, _ := util.DeterministicGenesisState(t, 10)
notifier := &mock.MockStateNotifier{RecordEvents: true}
srv := &Service{
cfg: &Config{
cfg: &config{
StateNotifier: notifier,
},
genesisRoot: [32]byte{1},
originBlockRoot: [32]byte{1},
}
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
@@ -172,24 +174,24 @@ func Test_notifyNewHeadEvent(t *testing.T) {
Block: newHeadRoot[:],
State: newHeadStateRoot[:],
EpochTransition: false,
PreviousDutyDependentRoot: srv.genesisRoot[:],
CurrentDutyDependentRoot: srv.genesisRoot[:],
PreviousDutyDependentRoot: srv.originBlockRoot[:],
CurrentDutyDependentRoot: srv.originBlockRoot[:],
}
require.DeepSSZEqual(t, wanted, eventHead)
})
t.Run("non_genesis_values", func(t *testing.T) {
bState, _ := testutil.DeterministicGenesisState(t, 10)
bState, _ := util.DeterministicGenesisState(t, 10)
notifier := &mock.MockStateNotifier{RecordEvents: true}
genesisRoot := [32]byte{1}
srv := &Service{
cfg: &Config{
cfg: &config{
StateNotifier: notifier,
},
genesisRoot: genesisRoot,
originBlockRoot: genesisRoot,
}
epoch1Start, err := core.StartSlot(1)
epoch1Start, err := slots.EpochStart(1)
require.NoError(t, err)
epoch2Start, err := core.StartSlot(1)
epoch2Start, err := slots.EpochStart(1)
require.NoError(t, err)
require.NoError(t, bState.SetSlot(epoch1Start))
@@ -206,7 +208,7 @@ func Test_notifyNewHeadEvent(t *testing.T) {
Slot: epoch2Start,
Block: newHeadRoot[:],
State: newHeadStateRoot[:],
EpochTransition: false,
EpochTransition: true,
PreviousDutyDependentRoot: genesisRoot[:],
CurrentDutyDependentRoot: make([]byte, 32),
}
@@ -220,8 +222,8 @@ func TestSaveOrphanedAtts(t *testing.T) {
})
defer resetCfg()
genesis, keys := testutil.DeterministicGenesisState(t, 64)
b, err := testutil.GenerateFullBlock(genesis, keys, testutil.DefaultBlockGenConfig(), 1)
genesis, keys := util.DeterministicGenesisState(t, 64)
b, err := util.GenerateFullBlock(genesis, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -246,8 +248,8 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
})
defer resetCfg()
genesis, keys := testutil.DeterministicGenesisState(t, 64)
b, err := testutil.GenerateFullBlock(genesis, keys, testutil.DefaultBlockGenConfig(), 1)
genesis, keys := util.DeterministicGenesisState(t, 64)
b, err := util.GenerateFullBlock(genesis, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)


@@ -6,7 +6,7 @@ import (
"net/http"
"github.com/emicklei/dot"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/config/params"
)
const template = `<html>


@@ -9,11 +9,11 @@ import (
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
)
func TestService_TreeHandler(t *testing.T) {
@@ -22,23 +22,24 @@ func TestService_TreeHandler(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
headState, err := testutil.NewBeaconState()
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetBalances([]uint64{params.BeaconConfig().GweiPerEth}))
cfg := &Config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
[32]byte{'a'},
),
StateGen: stategen.New(beaconDB),
fcs := protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
[32]byte{'a'},
)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
s, err := NewService(ctx, cfg)
s, err := NewService(ctx, opts...)
require.NoError(t, err)
require.NoError(t, s.cfg.ForkChoiceStore.ProcessBlock(ctx, 0, [32]byte{'a'}, [32]byte{'g'}, [32]byte{'c'}, 0, 0))
require.NoError(t, s.cfg.ForkChoiceStore.ProcessBlock(ctx, 1, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'c'}, 0, 0))
s.setHead([32]byte{'a'}, wrapper.WrappedPhase0SignedBeaconBlock(testutil.NewBeaconBlock()), headState)
s.setHead([32]byte{'a'}, wrapper.WrappedPhase0SignedBeaconBlock(util.NewBeaconBlock()), headState)
rr := httptest.NewRecorder()
handler := http.HandlerFunc(s.TreeHandler)


@@ -1,7 +1,7 @@
package blockchain
import (
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/config/params"
)
func init() {


@@ -5,12 +5,12 @@ import (
"fmt"
"time"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/shared/params"
prysmTime "github.com/prysmaticlabs/prysm/time"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
)
@@ -44,7 +44,7 @@ func logStateTransitionData(b block.BeaconBlock) {
}
func logBlockSyncStatus(block block.BeaconBlock, blockRoot [32]byte, finalized *ethpb.Checkpoint, receivedTime time.Time, genesisTime uint64) error {
startTime, err := core.SlotToTime(genesisTime, block.Slot())
startTime, err := slots.ToTime(genesisTime, block.Slot())
if err != nil {
return err
}
@@ -52,9 +52,11 @@ func logBlockSyncStatus(block block.BeaconBlock, blockRoot [32]byte, finalized *
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": core.SlotToEpoch(block.Slot()),
"epoch": slots.ToEpoch(block.Slot()),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(block.ParentRoot())[:8]),
"version": version.String(block.Version()),
}).Info("Synced new block")
log.WithFields(logrus.Fields{
"slot": block.Slot,

View File

@@ -6,7 +6,7 @@ import (
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)


@@ -10,11 +10,11 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
var (
@@ -130,6 +130,14 @@ var (
Name: "sync_head_state_hit",
Help: "The number of sync head state requests that are present in the cache.",
})
stateBalanceCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "state_balance_cache_hit",
Help: "Count the number of state balance cache hits.",
})
stateBalanceCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "state_balance_cache_miss",
Help: "Count the number of state balance cache misses.",
})
)
// reportSlotMetrics reports slot related metrics.
@@ -245,8 +253,8 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
if err != nil {
return err
}
case version.Altair:
v, b, err = altair.InitializeEpochValidators(ctx, headState)
case version.Altair, version.Merge:
v, b, err = altair.InitializePrecomputeValidators(ctx, headState)
if err != nil {
return err
}

View File

@@ -5,14 +5,14 @@ import (
"testing"
eth "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
)
func TestReportEpochMetrics_BadHeadState(t *testing.T) {
s, err := testutil.NewBeaconState()
s, err := util.NewBeaconState()
require.NoError(t, err)
h, err := testutil.NewBeaconState()
h, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, h.SetValidators(nil))
err = reportEpochMetrics(context.Background(), s, h)
@@ -20,9 +20,9 @@ func TestReportEpochMetrics_BadHeadState(t *testing.T) {
}
func TestReportEpochMetrics_BadAttestation(t *testing.T) {
s, err := testutil.NewBeaconState()
s, err := util.NewBeaconState()
require.NoError(t, err)
h, err := testutil.NewBeaconState()
h, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 0}))
err = reportEpochMetrics(context.Background(), s, h)
@@ -30,12 +30,12 @@ func TestReportEpochMetrics_BadAttestation(t *testing.T) {
}
func TestReportEpochMetrics_SlashedValidatorOutOfBound(t *testing.T) {
h, _ := testutil.DeterministicGenesisState(t, 1)
h, _ := util.DeterministicGenesisState(t, 1)
v, err := h.ValidatorAtIndex(0)
require.NoError(t, err)
v.Slashed = true
require.NoError(t, h.UpdateValidatorAtIndex(0, v))
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 1, Data: testutil.HydrateAttestationData(&eth.AttestationData{})}))
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 1, Data: util.HydrateAttestationData(&eth.AttestationData{})}))
err = reportEpochMetrics(context.Background(), h, h)
require.ErrorContains(t, "slot 0 out of bounds", err)
}


@@ -0,0 +1,50 @@
package blockchain
import (
"context"
"errors"
"testing"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
)
func testServiceOptsWithDB(t *testing.T) []Option {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
return []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
}
// warning: only use these opts when you are certain there are no db calls
// in your code path. this is a lightweight way to satisfy the stategen/beacondb
// initialization requirements w/o the overhead of db init.
func testServiceOptsNoDB() []Option {
return []Option{
withStateBalanceCache(satisfactoryStateBalanceCache()),
}
}
type mockStateByRooter struct {
state state.BeaconState
err error
}
var _ stateByRooter = &mockStateByRooter{}
func (m mockStateByRooter) StateByRoot(_ context.Context, _ [32]byte) (state.BeaconState, error) {
return m.state, m.err
}
// returns an instance of the state balance cache that can be used
// to satisfy the requirement for one in NewService, but which will
// always return an error if used.
func satisfactoryStateBalanceCache() *stateBalanceCache {
err := errors.New("satisfactoryStateBalanceCache doesn't perform real caching")
return &stateBalanceCache{stateGen: mockStateByRooter{err: err}}
}
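As a hedged illustration of how these helpers might be consumed (a hypothetical test, not part of this diff), the no-DB options satisfy NewService's stategen/beacon-DB requirements without opening a database:

	func TestService_NewWithNoDBOpts(t *testing.T) {
		// withStateBalanceCache is wired with a cache whose stateGen always errors,
		// so this setup is only safe for code paths that never regenerate state or hit the DB.
		opts := testServiceOptsNoDB()
		s, err := NewService(context.Background(), opts...)
		require.NoError(t, err)
		require.NotNil(t, s)
	}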


@@ -0,0 +1,146 @@
package blockchain
import (
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/slashings"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
type Option func(s *Service) error
// WithMaxGoroutines to control resource use of the blockchain service.
func WithMaxGoroutines(x int) Option {
return func(s *Service) error {
s.cfg.MaxRoutines = x
return nil
}
}
// WithWeakSubjectivityCheckpoint for checkpoint sync.
func WithWeakSubjectivityCheckpoint(c *ethpb.Checkpoint) Option {
return func(s *Service) error {
s.cfg.WeakSubjectivityCheckpt = c
return nil
}
}
// WithDatabase for head access.
func WithDatabase(beaconDB db.HeadAccessDatabase) Option {
return func(s *Service) error {
s.cfg.BeaconDB = beaconDB
return nil
}
}
// WithChainStartFetcher to retrieve information about genesis.
func WithChainStartFetcher(f powchain.ChainStartFetcher) Option {
return func(s *Service) error {
s.cfg.ChainStartFetcher = f
return nil
}
}
// WithDepositCache for deposit lifecycle after chain inclusion.
func WithDepositCache(c *depositcache.DepositCache) Option {
return func(s *Service) error {
s.cfg.DepositCache = c
return nil
}
}
// WithAttestationPool for attestation lifecycle after chain inclusion.
func WithAttestationPool(p attestations.Pool) Option {
return func(s *Service) error {
s.cfg.AttPool = p
return nil
}
}
// WithExitPool for exits lifecycle after chain inclusion.
func WithExitPool(p voluntaryexits.PoolManager) Option {
return func(s *Service) error {
s.cfg.ExitPool = p
return nil
}
}
// WithSlashingPool for slashings lifecycle after chain inclusion.
func WithSlashingPool(p slashings.PoolManager) Option {
return func(s *Service) error {
s.cfg.SlashingPool = p
return nil
}
}
// WithP2PBroadcaster to broadcast messages after appropriate processing.
func WithP2PBroadcaster(p p2p.Broadcaster) Option {
return func(s *Service) error {
s.cfg.P2p = p
return nil
}
}
// WithStateNotifier to notify an event feed of state processing.
func WithStateNotifier(n statefeed.Notifier) Option {
return func(s *Service) error {
s.cfg.StateNotifier = n
return nil
}
}
// WithForkChoiceStore to update an optimized fork-choice representation.
func WithForkChoiceStore(f forkchoice.ForkChoicer) Option {
return func(s *Service) error {
s.cfg.ForkChoiceStore = f
return nil
}
}
// WithAttestationService for dealing with attestation lifecycles.
func WithAttestationService(srv *attestations.Service) Option {
return func(s *Service) error {
s.cfg.AttService = srv
return nil
}
}
// WithStateGen for managing state regeneration and replay.
func WithStateGen(g *stategen.State) Option {
return func(s *Service) error {
s.cfg.StateGen = g
return nil
}
}
// WithSlasherAttestationsFeed to forward attestations into slasher if enabled.
func WithSlasherAttestationsFeed(f *event.Feed) Option {
return func(s *Service) error {
s.cfg.SlasherAttestationsFeed = f
return nil
}
}
func withStateBalanceCache(c *stateBalanceCache) Option {
return func(s *Service) error {
s.justifiedBalances = c
return nil
}
}
// WithFinalizedStateAtStartUp to store finalized state at start up.
func WithFinalizedStateAtStartUp(st state.BeaconState) Option {
return func(s *Service) error {
s.cfg.FinalizedStateAtStartUp = st
return nil
}
}
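Taken together, these functional options are how callers assemble the blockchain service. A minimal sketch of composing them, assuming the NewService(ctx, opts ...Option) constructor used by the tests in this diff; the function name, parameter names, and the goroutine limit below are illustrative, and a "context" import would also be needed alongside those above:
// Illustrative wiring of the service from its dependencies; beaconDB, fcs,
// and notifier stand in for values constructed elsewhere in the node.
func newBlockchainServiceSketch(ctx context.Context, beaconDB db.HeadAccessDatabase, fcs forkchoice.ForkChoicer, notifier statefeed.Notifier) (*Service, error) {
	opts := []Option{
		WithDatabase(beaconDB),
		WithStateGen(stategen.New(beaconDB)),
		WithForkChoiceStore(fcs),
		WithStateNotifier(notifier),
		WithMaxGoroutines(20),
	}
	return NewService(ctx, opts...)
}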


@@ -4,13 +4,13 @@ import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/time"
"github.com/prysmaticlabs/prysm/time/slots"
"go.opencensus.io/trace"
)
@@ -62,7 +62,7 @@ func (s *Service) onAttestation(ctx context.Context, a *ethpb.Attestation) error
genesisTime := baseState.GenesisTime()
// Verify attestation target is from current epoch or previous epoch.
if err := s.verifyAttTargetEpoch(ctx, genesisTime, uint64(time.Now().Unix()), tgt); err != nil {
if err := verifyAttTargetEpoch(ctx, genesisTime, uint64(time.Now().Unix()), tgt); err != nil {
return err
}
@@ -75,12 +75,12 @@ func (s *Service) onAttestation(ctx context.Context, a *ethpb.Attestation) error
// validate_aggregate_proof.go and validate_beacon_attestation.go
// Verify attestations can only affect the fork choice of subsequent slots.
if err := core.VerifySlotTime(genesisTime, a.Data.Slot+1, params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
if err := slots.VerifyTime(genesisTime, a.Data.Slot+1, params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
return err
}
// Use the target state to verify attesting indices are valid.
committee, err := helpers.BeaconCommitteeFromState(baseState, a.Data.Slot, a.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(ctx, baseState, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return err
}


@@ -8,13 +8,13 @@ import (
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/async"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/time/slots"
)
// getAttPreState retrieves the att pre state from either the cache or the DB.
@@ -38,7 +38,7 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (stat
return nil, errors.Wrapf(err, "could not get pre state for epoch %d", c.Epoch)
}
epochStartSlot, err := core.StartSlot(c.Epoch)
epochStartSlot, err := slots.EpochStart(c.Epoch)
if err != nil {
return nil, err
}
@@ -61,9 +61,9 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (stat
}
// verifyAttTargetEpoch validates attestation is from the current or previous epoch.
func (s *Service) verifyAttTargetEpoch(_ context.Context, genesisTime, nowTime uint64, c *ethpb.Checkpoint) error {
func verifyAttTargetEpoch(_ context.Context, genesisTime, nowTime uint64, c *ethpb.Checkpoint) error {
currentSlot := types.Slot((nowTime - genesisTime) / params.BeaconConfig().SecondsPerSlot)
currentEpoch := core.SlotToEpoch(currentSlot)
currentEpoch := slots.ToEpoch(currentSlot)
var prevEpoch types.Epoch
// Prevents previous epoch underflow
if currentEpoch > 1 {
@@ -87,7 +87,7 @@ func (s *Service) verifyBeaconBlock(ctx context.Context, data *ethpb.Attestation
if (b == nil || b.IsNil()) && s.hasInitSyncBlock(r) {
b = s.getInitSyncBlock(r)
}
if err := helpers.VerifyNilBeaconBlock(b); err != nil {
if err := helpers.BeaconBlockIsNil(b); err != nil {
return err
}
if b.Block().Slot() > data.Slot {


@@ -5,60 +5,60 @@ import (
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/prysmaticlabs/prysm/time"
"github.com/prysmaticlabs/prysm/time/slots"
)
func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, [32]byte{})),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
_, err = blockTree1(t, beaconDB, []byte{'g'})
require.NoError(t, err)
BlkWithOutState := testutil.NewBeaconBlock()
BlkWithOutState := util.NewBeaconBlock()
BlkWithOutState.Block.Slot = 0
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(BlkWithOutState)))
BlkWithOutStateRoot, err := BlkWithOutState.Block.HashTreeRoot()
require.NoError(t, err)
BlkWithStateBadAtt := testutil.NewBeaconBlock()
BlkWithStateBadAtt := util.NewBeaconBlock()
BlkWithStateBadAtt.Block.Slot = 1
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(BlkWithStateBadAtt)))
BlkWithStateBadAttRoot, err := BlkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
s, err := testutil.NewBeaconState()
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(100*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithStateBadAttRoot))
BlkWithValidState := testutil.NewBeaconBlock()
BlkWithValidState := util.NewBeaconBlock()
BlkWithValidState.Block.Slot = 2
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(BlkWithValidState)))
BlkWithValidStateRoot, err := BlkWithValidState.Block.HashTreeRoot()
require.NoError(t, err)
s, err = testutil.NewBeaconState()
s, err = util.NewBeaconState()
require.NoError(t, err)
err = s.SetFork(&ethpb.Fork{
Epoch: 0,
@@ -75,17 +75,17 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
}{
{
name: "attestation's data slot not aligned with target vote",
a: testutil.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
wantedErr: "slot 32 does not match target epoch 0",
},
{
name: "no pre state for attestations's target block",
a: testutil.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
wantedErr: "could not get pre state for epoch 0",
},
{
name: "process attestation doesn't match current epoch",
a: testutil.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}}),
wantedErr: "target epoch 100 does not match current epoch",
},
@@ -130,17 +130,18 @@ func TestStore_OnAttestation_Ok(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
StateGen: stategen.New(beaconDB),
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisState, pks := testutil.DeterministicGenesisState(t, 64)
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(time.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := testutil.GenerateAttestations(genesisState, pks, 1, 0, false)
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].Data.Target.Root)
copied := genesisState.Copy()
@@ -155,14 +156,14 @@ func TestStore_SaveCheckpointState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
s, err := testutil.NewBeaconState()
s, err := util.NewBeaconState()
require.NoError(t, err)
err = s.SetFinalizedCheckpoint(&ethpb.Checkpoint{Root: bytesutil.PadTo([]byte{'A'}, 32)})
require.NoError(t, err)
@@ -227,15 +228,15 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
epoch := types.Epoch(1)
baseState, _ := testutil.DeterministicGenesisState(t, 1)
baseState, _ := util.DeterministicGenesisState(t, 1)
checkpoint := &ethpb.Checkpoint{Epoch: epoch, Root: bytesutil.PadTo([]byte("hi"), 32)}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(checkpoint.Root)))
returned, err := service.getAttPreState(ctx, checkpoint)
@@ -251,7 +252,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(newCheckpoint.Root)))
returned, err = service.getAttPreState(ctx, newCheckpoint)
require.NoError(t, err)
s, err := core.StartSlot(newCheckpoint.Epoch)
s, err := slots.EpochStart(newCheckpoint.Epoch)
require.NoError(t, err)
baseState, err = transition.ProcessSlots(ctx, baseState, s)
require.NoError(t, err)
@@ -264,62 +265,44 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
require.NoError(t, err)
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, service.verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, 32)}))
require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, 32)}))
}
func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
require.NoError(t, err)
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, service.verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Epoch: 1}))
require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Epoch: 1}))
}
func TestAttEpoch_NotMatch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
require.NoError(t, err)
nowTime := 2 * uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
err = service.verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, 32)})
err := verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, 32)})
assert.ErrorContains(t, "target epoch 0 does not match current epoch 2 or prev epoch 1", err)
}
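The TestAttEpoch_* cases above pin down the target-epoch window that verifyAttTargetEpoch enforces: a target must vote for the current or the previous epoch. As a minimal sketch, assuming the same slot/epoch arithmetic shown in the attestation hunk earlier in this diff (the helper name is hypothetical and the real function returns a descriptive error rather than a bool):
// Illustrative helper capturing the window rule verified above, with
// prevEpoch clamped at 0 to avoid underflow.
func targetEpochInWindowSketch(genesisTime, nowTime uint64, target types.Epoch) bool {
	currentSlot := types.Slot((nowTime - genesisTime) / params.BeaconConfig().SecondsPerSlot)
	currentEpoch := slots.ToEpoch(currentSlot)
	var prevEpoch types.Epoch
	if currentEpoch > 1 {
		prevEpoch = currentEpoch - 1
	}
	return target == currentEpoch || target == prevEpoch
}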
func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
d := testutil.HydrateAttestationData(&ethpb.AttestationData{})
d := util.HydrateAttestationData(&ethpb.AttestationData{})
assert.ErrorContains(t, "signed beacon block can't be nil", service.verifyBeaconBlock(ctx, d))
}
func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
b.Block.Slot = 2
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b)))
r, err := b.Block.HashTreeRoot()
@@ -331,13 +314,12 @@ func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
func TestVerifyBeaconBlock_OK(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
b.Block.Slot = 2
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b)))
r, err := b.Block.HashTreeRoot()
@@ -351,11 +333,16 @@ func TestVerifyFinalizedConsistency_InconsistentRoot(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := testutil.NewBeaconBlock()
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b32)))
r32, err := b32.Block.HashTreeRoot()
@@ -363,7 +350,7 @@ func TestVerifyFinalizedConsistency_InconsistentRoot(t *testing.T) {
service.finalizedCheckpt = &ethpb.Checkpoint{Epoch: 1}
b33 := testutil.NewBeaconBlock()
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b33)))
@@ -376,13 +363,12 @@ func TestVerifyFinalizedConsistency_InconsistentRoot(t *testing.T) {
func TestVerifyFinalizedConsistency_OK(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := testutil.NewBeaconBlock()
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b32)))
r32, err := b32.Block.HashTreeRoot()
@@ -390,7 +376,7 @@ func TestVerifyFinalizedConsistency_OK(t *testing.T) {
service.finalizedCheckpt = &ethpb.Checkpoint{Epoch: 1, Root: r32[:]}
b33 := testutil.NewBeaconBlock()
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b33)))
@@ -403,20 +389,19 @@ func TestVerifyFinalizedConsistency_OK(t *testing.T) {
func TestVerifyFinalizedConsistency_IsCanonical(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := testutil.NewBeaconBlock()
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
service.finalizedCheckpt = &ethpb.Checkpoint{Epoch: 1, Root: r32[:]}
b33 := testutil.NewBeaconBlock()
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
r33, err := b33.Block.HashTreeRoot()


@@ -6,20 +6,22 @@ import (
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/crypto/bls"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
ethpbv1 "github.com/prysmaticlabs/prysm/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/time/slots"
"go.opencensus.io/trace"
)
@@ -86,9 +88,8 @@ var initialSyncBlockCacheSize = uint64(2 * params.BeaconConfig().SlotsPerEpoch)
func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, blockRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlock")
defer span.End()
if signed == nil || signed.IsNil() || signed.Block().IsNil() {
return errors.New("nil block")
if err := helpers.BeaconBlockIsNil(signed); err != nil {
return err
}
b := signed.Block()
@@ -106,6 +107,74 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
return err
}
// If slasher is configured, forward the attestations in the block via
// an event feed for processing.
if features.Get().EnableSlasher {
// Feed the indexed attestation to slasher if enabled. This action
// is done in the background to avoid adding more load to this critical code path.
go func() {
// Using a different context to prevent timeouts as this operation can be expensive
// and we want to avoid affecting the critical code path.
ctx := context.TODO()
for _, att := range signed.Block().Body().Attestations() {
committee, err := helpers.BeaconCommitteeFromState(ctx, preState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
log.WithError(err).Error("Could not get attestation committee")
tracing.AnnotateError(span, err)
return
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
log.WithError(err).Error("Could not convert to indexed attestation")
tracing.AnnotateError(span, err)
return
}
s.cfg.SlasherAttestationsFeed.Send(indexedAtt)
}
}()
}
// Update justified check point.
currJustifiedEpoch := s.justifiedCheckpt.Epoch
if postState.CurrentJustifiedCheckpoint().Epoch > currJustifiedEpoch {
if err := s.updateJustified(ctx, postState); err != nil {
return err
}
}
newFinalized := postState.FinalizedCheckpointEpoch() > s.finalizedCheckpt.Epoch
if newFinalized {
if err := s.finalizedImpliesNewJustified(ctx, postState); err != nil {
return errors.Wrap(err, "could not save new justified")
}
s.prevFinalizedCheckpt = s.finalizedCheckpt
s.finalizedCheckpt = postState.FinalizedCheckpoint()
}
balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if err != nil {
msg := fmt.Sprintf("could not read balances for state w/ justified checkpoint %#x", s.justifiedCheckpt.Root)
return errors.Wrap(err, msg)
}
if err := s.updateHead(ctx, balances); err != nil {
log.WithError(err).Warn("Could not update head")
}
if err := s.pruneCanonicalAttsFromPool(ctx, blockRoot, signed); err != nil {
return err
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: signed.Block().Slot(),
BlockRoot: blockRoot,
SignedBlock: signed,
Verified: true,
},
})
// Updating next slot state cache can happen in the background. It shouldn't block the rest of the process.
if features.Get().EnableNextSlotStateCache {
go func() {
@@ -120,43 +189,13 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
}()
}
// Update justified check point.
if postState.CurrentJustifiedCheckpoint().Epoch > s.justifiedCheckpt.Epoch {
if err := s.updateJustified(ctx, postState); err != nil {
// Save justified check point to db.
if postState.CurrentJustifiedCheckpoint().Epoch > currJustifiedEpoch {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, postState.CurrentJustifiedCheckpoint()); err != nil {
return err
}
}
newFinalized := postState.FinalizedCheckpointEpoch() > s.finalizedCheckpt.Epoch
if features.Get().UpdateHeadTimely {
if newFinalized {
if err := s.finalizedImpliesNewJustified(ctx, postState); err != nil {
return errors.Wrap(err, "could not save new justified")
}
s.prevFinalizedCheckpt = s.finalizedCheckpt
s.finalizedCheckpt = postState.FinalizedCheckpoint()
}
if err := s.updateHead(ctx, s.getJustifiedBalances()); err != nil {
log.WithError(err).Warn("Could not update head")
}
if err := s.pruneCanonicalAttsFromPool(ctx, blockRoot, signed); err != nil {
return err
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: signed.Block().Slot(),
BlockRoot: blockRoot,
SignedBlock: signed,
Verified: true,
},
})
}
// Update finalized check point.
if newFinalized {
if err := s.updateFinalized(ctx, postState.FinalizedCheckpoint()); err != nil {
@@ -166,11 +205,6 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
if err := s.cfg.ForkChoiceStore.Prune(ctx, fRoot); err != nil {
return errors.Wrap(err, "could not prune proto array fork choice nodes")
}
if !features.Get().UpdateHeadTimely {
if err := s.finalizedImpliesNewJustified(ctx, postState); err != nil {
return errors.Wrap(err, "could not save new justified")
}
}
go func() {
// Send an event regarding the new finalized checkpoint over a common event feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
@@ -207,8 +241,8 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []block.SignedBeaconBlo
if len(blks) == 0 || len(blockRoots) == 0 {
return nil, nil, errors.New("no blocks provided")
}
if blks[0] == nil || blks[0].IsNil() || blks[0].Block().IsNil() {
return nil, nil, errors.New("nil block")
if err := helpers.BeaconBlockIsNil(blks[0]); err != nil {
return nil, nil, err
}
b := blks[0].Block()
@@ -226,12 +260,12 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []block.SignedBeaconBlo
jCheckpoints := make([]*ethpb.Checkpoint, len(blks))
fCheckpoints := make([]*ethpb.Checkpoint, len(blks))
sigSet := &bls.SignatureSet{
sigSet := &bls.SignatureBatch{
Signatures: [][]byte{},
PublicKeys: []bls.PublicKey{},
Messages: [][32]byte{},
}
var set *bls.SignatureSet
var set *bls.SignatureBatch
boundaries := make(map[[32]byte]state.BeaconState)
for i, b := range blks {
set, preState, err = transition.ExecuteStateTransitionNoVerifyAnySig(ctx, preState, b)
@@ -239,7 +273,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []block.SignedBeaconBlo
return nil, nil, err
}
// Save potential boundary states.
if core.IsEpochStart(preState.Slot()) {
if slots.IsEpochStart(preState.Slot()) {
boundaries[blockRoots[i]] = preState.Copy()
if err := s.handleEpochBoundary(ctx, preState); err != nil {
return nil, nil, errors.Wrap(err, "could not handle epoch boundary state")
@@ -309,10 +343,8 @@ func (s *Service) handleBlockAfterBatchVerify(ctx context.Context, signed block.
if err := s.updateFinalized(ctx, fCheckpoint); err != nil {
return err
}
if features.Get().UpdateHeadTimely {
s.prevFinalizedCheckpt = s.finalizedCheckpt
s.finalizedCheckpt = fCheckpoint
}
s.prevFinalizedCheckpt = s.finalizedCheckpt
s.finalizedCheckpt = fCheckpoint
}
return nil
}
@@ -324,7 +356,7 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
if postState.Slot()+1 == s.nextEpochBoundarySlot {
// Update caches for the next epoch at epoch boundary slot - 1.
if err := helpers.UpdateCommitteeCache(postState, core.NextEpoch(postState)); err != nil {
if err := helpers.UpdateCommitteeCache(postState, coreTime.NextEpoch(postState)); err != nil {
return err
}
copied := postState.Copy()
@@ -332,7 +364,7 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
if err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(copied); err != nil {
if err := helpers.UpdateProposerIndicesInCache(ctx, copied); err != nil {
return err
}
} else if postState.Slot() >= s.nextEpochBoundarySlot {
@@ -340,17 +372,17 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
return err
}
var err error
s.nextEpochBoundarySlot, err = core.StartSlot(core.NextEpoch(postState))
s.nextEpochBoundarySlot, err = slots.EpochStart(coreTime.NextEpoch(postState))
if err != nil {
return err
}
// Update caches at epoch boundary slot.
// The following updates have a shortcut to return nil cheaply if they were already fulfilled during boundary slot - 1.
if err := helpers.UpdateCommitteeCache(postState, core.CurrentEpoch(postState)); err != nil {
if err := helpers.UpdateCommitteeCache(postState, coreTime.CurrentEpoch(postState)); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(postState); err != nil {
if err := helpers.UpdateProposerIndicesInCache(ctx, postState); err != nil {
return err
}
}
@@ -372,7 +404,7 @@ func (s *Service) insertBlockAndAttestationsToForkChoiceStore(ctx context.Contex
}
// Feed in block's attestations to fork choice store.
for _, a := range blk.Body().Attestations() {
committee, err := helpers.BeaconCommitteeFromState(st, a.Data.Slot, a.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(ctx, st, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return err
}


@@ -7,22 +7,21 @@ import (
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/time/slots"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.
func (s *Service) CurrentSlot() types.Slot {
return core.CurrentSlot(uint64(s.genesisTime.Unix()))
return slots.CurrentSlot(uint64(s.genesisTime.Unix()))
}
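CurrentSlot delegates to slots.CurrentSlot. A minimal sketch of the underlying arithmetic, assuming the same formula used by verifyAttTargetEpoch earlier in this diff, i.e. slot = (now - genesisTime) / SecondsPerSlot; the helper name and the underflow guard below are illustrative:
// Hypothetical sketch of the slot-from-genesis arithmetic. With mainnet's
// 12-second slots, now = genesisTime + 600 yields slot 50.
func currentSlotSketch(genesisTime, now uint64) types.Slot {
	if now < genesisTime {
		// Clock is before genesis; report slot 0 rather than underflowing.
		return 0
	}
	return types.Slot((now - genesisTime) / params.BeaconConfig().SecondsPerSlot)
}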
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block
@@ -46,7 +45,7 @@ func (s *Service) getBlockPreState(ctx context.Context, b block.BeaconBlock) (st
}
// Verify block slot time is not from the future.
if err := core.VerifySlotTime(preState.GenesisTime(), b.Slot(), params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
if err := slots.VerifyTime(preState.GenesisTime(), b.Slot(), params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
return nil, err
}
@@ -123,7 +122,7 @@ func (s *Service) VerifyBlkDescendant(ctx context.Context, root [32]byte) error
// verifyBlkFinalizedSlot validates that the input block's slot is not less than or equal
// to the current finalized slot.
func (s *Service) verifyBlkFinalizedSlot(b block.BeaconBlock) error {
finalizedSlot, err := core.StartSlot(s.finalizedCheckpt.Epoch)
finalizedSlot, err := slots.EpochStart(s.finalizedCheckpt.Epoch)
if err != nil {
return err
}
@@ -136,59 +135,45 @@ func (s *Service) verifyBlkFinalizedSlot(b block.BeaconBlock) error {
// shouldUpdateCurrentJustified prevents the bouncing attack by only updating conflicting justified
// checkpoints in the fork choice if in the early slots of the epoch.
// Otherwise, delay incorporation of new justified checkpoint until next epoch boundary.
// See https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114 for more detailed analysis and discussion.
//
// Spec code:
// def should_update_justified_checkpoint(store: Store, new_justified_checkpoint: Checkpoint) -> bool:
// """
// To address the bouncing attack, only update conflicting justified
// checkpoints in the fork choice if in the early slots of the epoch.
// Otherwise, delay incorporation of new justified checkpoint until next epoch boundary.
//
// See https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114 for more detailed analysis and discussion.
// """
// if compute_slots_since_epoch_start(get_current_slot(store)) < SAFE_SLOTS_TO_UPDATE_JUSTIFIED:
// return True
//
// justified_slot = compute_start_slot_at_epoch(store.justified_checkpoint.epoch)
// if not get_ancestor(store, new_justified_checkpoint.root, justified_slot) == store.justified_checkpoint.root:
// return False
//
// return True
func (s *Service) shouldUpdateCurrentJustified(ctx context.Context, newJustifiedCheckpt *ethpb.Checkpoint) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.shouldUpdateCurrentJustified")
defer span.End()
if core.SlotsSinceEpochStarts(s.CurrentSlot()) < params.BeaconConfig().SafeSlotsToUpdateJustified {
if slots.SinceEpochStarts(s.CurrentSlot()) < params.BeaconConfig().SafeSlotsToUpdateJustified {
return true, nil
}
var newJustifiedBlockSigned block.SignedBeaconBlock
justifiedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(newJustifiedCheckpt.Root))
var err error
if s.hasInitSyncBlock(justifiedRoot) {
newJustifiedBlockSigned = s.getInitSyncBlock(justifiedRoot)
} else {
newJustifiedBlockSigned, err = s.cfg.BeaconDB.Block(ctx, justifiedRoot)
if err != nil {
return false, err
}
}
if newJustifiedBlockSigned == nil || newJustifiedBlockSigned.IsNil() || newJustifiedBlockSigned.Block().IsNil() {
return false, errors.New("nil new justified block")
}
newJustifiedBlock := newJustifiedBlockSigned.Block()
jSlot, err := core.StartSlot(s.justifiedCheckpt.Epoch)
jSlot, err := slots.EpochStart(s.justifiedCheckpt.Epoch)
if err != nil {
return false, err
}
if newJustifiedBlock.Slot() <= jSlot {
return false, nil
}
var justifiedBlockSigned block.SignedBeaconBlock
cachedJustifiedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if s.hasInitSyncBlock(cachedJustifiedRoot) {
justifiedBlockSigned = s.getInitSyncBlock(cachedJustifiedRoot)
} else {
justifiedBlockSigned, err = s.cfg.BeaconDB.Block(ctx, cachedJustifiedRoot)
if err != nil {
return false, err
}
}
if justifiedBlockSigned == nil || justifiedBlockSigned.IsNil() || justifiedBlockSigned.Block().IsNil() {
return false, errors.New("nil justified block")
}
justifiedBlock := justifiedBlockSigned.Block()
b, err := s.ancestor(ctx, justifiedRoot[:], justifiedBlock.Slot())
justifiedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(newJustifiedCheckpt.Root))
b, err := s.ancestor(ctx, justifiedRoot[:], jSlot)
if err != nil {
return false, err
}
if !bytes.Equal(b, s.justifiedCheckpt.Root) {
return false, nil
}
return true, nil
}
@@ -208,12 +193,9 @@ func (s *Service) updateJustified(ctx context.Context, state state.ReadOnlyBeaco
if canUpdate {
s.prevJustifiedCheckpt = s.justifiedCheckpt
s.justifiedCheckpt = cpt
if err := s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root)); err != nil {
return err
}
}
return s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, cpt)
return nil
}
// This caches input checkpoint as justified for the service struct. It rotates current justified to previous justified,
@@ -221,12 +203,13 @@ func (s *Service) updateJustified(ctx context.Context, state state.ReadOnlyBeaco
// This method does not have a defense against the fork choice bouncing attack, which is why it's only recommended for use during initial syncing.
func (s *Service) updateJustifiedInitSync(ctx context.Context, cp *ethpb.Checkpoint) error {
s.prevJustifiedCheckpt = s.justifiedCheckpt
s.justifiedCheckpt = cp
if err := s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root)); err != nil {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, cp); err != nil {
return err
}
s.justifiedCheckpt = cp
return s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, cp)
return nil
}
func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) error {
@@ -243,10 +226,6 @@ func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) err
if err := s.cfg.BeaconDB.SaveFinalizedCheckpoint(ctx, cp); err != nil {
return err
}
if !features.Get().UpdateHeadTimely {
s.prevFinalizedCheckpt = s.finalizedCheckpt
s.finalizedCheckpt = cp
}
fRoot := bytesutil.ToBytes32(cp.Root)
if err := s.cfg.StateGen.MigrateToCold(ctx, fRoot); err != nil {
@@ -349,11 +328,13 @@ func (s *Service) finalizedImpliesNewJustified(ctx context.Context, state state.
if !attestation.CheckPointIsEqual(s.justifiedCheckpt, state.CurrentJustifiedCheckpoint()) {
if state.CurrentJustifiedCheckpoint().Epoch > s.justifiedCheckpt.Epoch {
s.justifiedCheckpt = state.CurrentJustifiedCheckpoint()
return s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
// we don't need to check if the previous justified checkpoint was an ancestor since the new
// finalized checkpoint is overriding it.
return nil
}
// Update justified if store justified is not in chain with finalized check point.
finalizedSlot, err := core.StartSlot(s.finalizedCheckpt.Epoch)
finalizedSlot, err := slots.EpochStart(s.finalizedCheckpt.Epoch)
if err != nil {
return err
}
@@ -364,9 +345,6 @@ func (s *Service) finalizedImpliesNewJustified(ctx context.Context, state state.
}
if !bytes.Equal(anc, s.finalizedCheckpt.Root) {
s.justifiedCheckpt = state.CurrentJustifiedCheckpoint()
if err := s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root)); err != nil {
return err
}
}
}
return nil
@@ -382,7 +360,7 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, blk block.B
parentRoot := bytesutil.ToBytes32(blk.ParentRoot())
slot := blk.Slot()
// Fork choice only matters from last finalized slot.
fSlot, err := core.StartSlot(s.finalizedCheckpt.Epoch)
fSlot, err := slots.EpochStart(s.finalizedCheckpt.Epoch)
if err != nil {
return err
}
@@ -461,7 +439,7 @@ func (s *Service) deletePoolAtts(atts []*ethpb.Attestation) error {
// fork choice justification routine.
func (s *Service) ensureRootNotZeros(root [32]byte) [32]byte {
if root == params.BeaconConfig().ZeroHash {
return s.genesisRoot
return s.originBlockRoot
}
return root
}


@@ -21,40 +21,42 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
prysmTime "github.com/prysmaticlabs/prysm/time"
)
func TestStore_OnBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
assert.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
validGenesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
st, err := testutil.NewBeaconState()
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
random := testutil.NewBeaconBlock()
random := util.NewBeaconBlock()
random.Block.Slot = 1
random.Block.ParentRoot = validGenesisRoot[:]
assert.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(random)))
@@ -75,14 +77,14 @@ func TestStore_OnBlock(t *testing.T) {
}{
{
name: "parent block root does not have a state",
blk: testutil.NewBeaconBlock(),
blk: util.NewBeaconBlock(),
s: st.Copy(),
wantErrString: "could not reconstruct parent state",
},
{
name: "block is from the future",
blk: func() *ethpb.SignedBeaconBlock {
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
b.Block.ParentRoot = randomParentRoot2
b.Block.Slot = params.BeaconConfig().FarFutureSlot
return b
@@ -93,7 +95,7 @@ func TestStore_OnBlock(t *testing.T) {
{
name: "could not get finalized block",
blk: func() *ethpb.SignedBeaconBlock {
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
b.Block.ParentRoot = randomParentRoot[:]
return b
}(),
@@ -103,7 +105,7 @@ func TestStore_OnBlock(t *testing.T) {
{
name: "same slot as finalized block",
blk: func() *ethpb.SignedBeaconBlock {
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
b.Block.Slot = 0
b.Block.ParentRoot = randomParentRoot2
return b
@@ -133,11 +135,11 @@ func TestStore_OnBlockBatch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisStateRoot := [32]byte{}
@@ -151,7 +153,7 @@ func TestStore_OnBlockBatch(t *testing.T) {
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{})
service.saveInitSyncBlock(gRoot, wrapper.WrappedPhase0SignedBeaconBlock(genesis))
st, keys := testutil.DeterministicGenesisState(t, 64)
st, keys := util.DeterministicGenesisState(t, 64)
bState := st.Copy()
@@ -159,7 +161,7 @@ func TestStore_OnBlockBatch(t *testing.T) {
var blkRoots [][32]byte
var firstState state.BeaconState
for i := 1; i < 10; i++ {
b, err := testutil.GenerateFullBlock(bState, keys, testutil.DefaultBlockGenConfig(), types.Slot(i))
b, err := util.GenerateFullBlock(bState, keys, util.DefaultBlockGenConfig(), types.Slot(i))
require.NoError(t, err)
bState, err = transition.ExecuteStateTransition(ctx, bState, wrapper.WrappedPhase0SignedBeaconBlock(b))
require.NoError(t, err)
@@ -184,23 +186,22 @@ func TestStore_OnBlockBatch(t *testing.T) {
func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
cfg := &Config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.genesisTime = time.Now()
update, err := service.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: make([]byte, 32)})
require.NoError(t, err)
assert.Equal(t, true, update, "Should be able to update justified")
lastJustifiedBlk := testutil.NewBeaconBlock()
lastJustifiedBlk := util.NewBeaconBlock()
lastJustifiedBlk.Block.ParentRoot = bytesutil.PadTo([]byte{'G'}, 32)
lastJustifiedRoot, err := lastJustifiedBlk.Block.HashTreeRoot()
require.NoError(t, err)
newJustifiedBlk := testutil.NewBeaconBlock()
newJustifiedBlk := util.NewBeaconBlock()
newJustifiedBlk.Block.Slot = 1
newJustifiedBlk.Block.ParentRoot = bytesutil.PadTo(lastJustifiedRoot[:], 32)
newJustifiedRoot, err := newJustifiedBlk.Block.HashTreeRoot()
@@ -218,18 +219,18 @@ func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
func TestShouldUpdateJustified_ReturnFalse(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
lastJustifiedBlk := testutil.NewBeaconBlock()
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{})
lastJustifiedBlk := util.NewBeaconBlock()
lastJustifiedBlk.Block.ParentRoot = bytesutil.PadTo([]byte{'G'}, 32)
lastJustifiedRoot, err := lastJustifiedBlk.Block.HashTreeRoot()
require.NoError(t, err)
newJustifiedBlk := testutil.NewBeaconBlock()
newJustifiedBlk := util.NewBeaconBlock()
newJustifiedBlk.Block.ParentRoot = bytesutil.PadTo(lastJustifiedRoot[:], 32)
newJustifiedRoot, err := newJustifiedBlk.Block.HashTreeRoot()
require.NoError(t, err)
@@ -249,11 +250,11 @@ func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
s, err := v1.InitializeFromProto(&ethpb.BeaconState{Slot: 1, GenesisValidatorsRoot: params.BeaconConfig().ZeroHash[:]})
@@ -270,7 +271,7 @@ func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{})
service.saveInitSyncBlock(gRoot, wrapper.WrappedPhase0SignedBeaconBlock(genesis))
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
b.Block.Slot = 1
b.Block.ParentRoot = gRoot[:]
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: 1, Root: gRoot[:]}))
@@ -282,11 +283,11 @@ func TestCachedPreState_CanGetFromDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisStateRoot := [32]byte{}
@@ -300,7 +301,7 @@ func TestCachedPreState_CanGetFromDB(t *testing.T) {
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{})
service.saveInitSyncBlock(gRoot, wrapper.WrappedPhase0SignedBeaconBlock(genesis))
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
b.Block.Slot = 1
service.finalizedCheckpt = &ethpb.Checkpoint{Root: gRoot[:]}
err = service.verifyBlkPreState(ctx, wrapper.WrappedPhase0BeaconBlock(b.Block))
@@ -319,22 +320,26 @@ func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)}
service, err := NewService(ctx, cfg)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(protoarray.New(0, 0, [32]byte{})),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
signedBlock := testutil.NewBeaconBlock()
signedBlock := util.NewBeaconBlock()
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(signedBlock)))
r, err := signedBlock.Block.HashTreeRoot()
require.NoError(t, err)
service.justifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
service.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
st, err := testutil.NewBeaconState()
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, st.Copy(), r))
// Could update
s, err := testutil.NewBeaconState()
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetCurrentJustifiedCheckpoint(&ethpb.Checkpoint{Epoch: 1, Root: r[:]}))
require.NoError(t, service.updateJustified(context.Background(), s))
@@ -352,8 +357,11 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
service.finalizedCheckpt = &ethpb.Checkpoint{Root: make([]byte, 32)}
@@ -363,15 +371,15 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
validGenesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
st, err := testutil.NewBeaconState()
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
block := testutil.NewBeaconBlock()
beaconState, _ := util.DeterministicGenesisState(t, 32)
block := util.NewBeaconBlock()
block.Block.Slot = 9
block.Block.ParentRoot = roots[8]
@@ -390,8 +398,11 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
service.finalizedCheckpt = &ethpb.Checkpoint{Root: make([]byte, 32)}
@@ -401,15 +412,15 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
validGenesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
st, err := testutil.NewBeaconState()
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
block := testutil.NewBeaconBlock()
beaconState, _ := util.DeterministicGenesisState(t, 32)
block := util.NewBeaconBlock()
block.Block.Slot = 9
block.Block.ParentRoot = roots[8]
@@ -431,8 +442,11 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
// Set finalized epoch to 1.
@@ -443,29 +457,29 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
assert.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
validGenesisRoot, err := genesis.Block.HashTreeRoot()
assert.NoError(t, err)
st, err := testutil.NewBeaconState()
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
// Define a tree branch, slot 63 <- 64 <- 65
b63 := testutil.NewBeaconBlock()
b63 := util.NewBeaconBlock()
b63.Block.Slot = 63
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b63)))
r63, err := b63.Block.HashTreeRoot()
require.NoError(t, err)
b64 := testutil.NewBeaconBlock()
b64 := util.NewBeaconBlock()
b64.Block.Slot = 64
b64.Block.ParentRoot = r63[:]
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b64)))
r64, err := b64.Block.HashTreeRoot()
require.NoError(t, err)
b65 := testutil.NewBeaconBlock()
b65 := util.NewBeaconBlock()
b65.Block.Slot = 65
b65.Block.ParentRoot = r64[:]
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b65)))
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState, _ := util.DeterministicGenesisState(t, 32)
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(b65).Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
@@ -484,67 +498,67 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
// (B1, and B3 are all from the same slots)
func blockTree1(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][]byte, error) {
genesisRoot = bytesutil.PadTo(genesisRoot, 32)
b0 := testutil.NewBeaconBlock()
b0 := util.NewBeaconBlock()
b0.Block.Slot = 0
b0.Block.ParentRoot = genesisRoot
r0, err := b0.Block.HashTreeRoot()
if err != nil {
return nil, err
}
b1 := testutil.NewBeaconBlock()
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.ParentRoot = r0[:]
r1, err := b1.Block.HashTreeRoot()
if err != nil {
return nil, err
}
b3 := testutil.NewBeaconBlock()
b3 := util.NewBeaconBlock()
b3.Block.Slot = 3
b3.Block.ParentRoot = r0[:]
r3, err := b3.Block.HashTreeRoot()
if err != nil {
return nil, err
}
b4 := testutil.NewBeaconBlock()
b4 := util.NewBeaconBlock()
b4.Block.Slot = 4
b4.Block.ParentRoot = r3[:]
r4, err := b4.Block.HashTreeRoot()
if err != nil {
return nil, err
}
b5 := testutil.NewBeaconBlock()
b5 := util.NewBeaconBlock()
b5.Block.Slot = 5
b5.Block.ParentRoot = r4[:]
r5, err := b5.Block.HashTreeRoot()
if err != nil {
return nil, err
}
b6 := testutil.NewBeaconBlock()
b6 := util.NewBeaconBlock()
b6.Block.Slot = 6
b6.Block.ParentRoot = r4[:]
r6, err := b6.Block.HashTreeRoot()
if err != nil {
return nil, err
}
b7 := testutil.NewBeaconBlock()
b7 := util.NewBeaconBlock()
b7.Block.Slot = 7
b7.Block.ParentRoot = r5[:]
r7, err := b7.Block.HashTreeRoot()
if err != nil {
return nil, err
}
b8 := testutil.NewBeaconBlock()
b8 := util.NewBeaconBlock()
b8.Block.Slot = 8
b8.Block.ParentRoot = r6[:]
r8, err := b8.Block.HashTreeRoot()
if err != nil {
return nil, err
}
st, err := testutil.NewBeaconState()
st, err := util.NewBeaconState()
require.NoError(t, err)
for _, b := range []*ethpb.SignedBeaconBlock{b0, b1, b3, b4, b5, b6, b7, b8} {
beaconBlock := testutil.NewBeaconBlock()
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
if err := beaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(beaconBlock)); err != nil {
@@ -574,7 +588,8 @@ func TestCurrentSlot_HandlesOverflow(t *testing.T) {
}
func TestAncestorByDB_CtxErr(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
service, err := NewService(ctx, &Config{})
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
cancel()
@@ -586,27 +601,32 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b1 := testutil.NewBeaconBlock()
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.ParentRoot = bytesutil.PadTo([]byte{'a'}, 32)
r1, err := b1.Block.HashTreeRoot()
require.NoError(t, err)
b100 := testutil.NewBeaconBlock()
b100 := util.NewBeaconBlock()
b100.Block.Slot = 100
b100.Block.ParentRoot = r1[:]
r100, err := b100.Block.HashTreeRoot()
require.NoError(t, err)
b200 := testutil.NewBeaconBlock()
b200 := util.NewBeaconBlock()
b200.Block.Slot = 200
b200.Block.ParentRoot = r100[:]
r200, err := b200.Block.HashTreeRoot()
require.NoError(t, err)
for _, b := range []*ethpb.SignedBeaconBlock{b1, b100, b200} {
beaconBlock := testutil.NewBeaconBlock()
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(beaconBlock)))
@@ -629,27 +649,27 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
func TestAncestor_CanUseForkchoice(t *testing.T) {
ctx := context.Background()
cfg := &Config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b1 := testutil.NewBeaconBlock()
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.ParentRoot = bytesutil.PadTo([]byte{'a'}, 32)
r1, err := b1.Block.HashTreeRoot()
require.NoError(t, err)
b100 := testutil.NewBeaconBlock()
b100 := util.NewBeaconBlock()
b100.Block.Slot = 100
b100.Block.ParentRoot = r1[:]
r100, err := b100.Block.HashTreeRoot()
require.NoError(t, err)
b200 := testutil.NewBeaconBlock()
b200 := util.NewBeaconBlock()
b200.Block.Slot = 200
b200.Block.ParentRoot = r100[:]
r200, err := b200.Block.HashTreeRoot()
require.NoError(t, err)
for _, b := range []*ethpb.SignedBeaconBlock{b1, b100, b200} {
beaconBlock := testutil.NewBeaconBlock()
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
r, err := b.Block.HashTreeRoot()
@@ -668,27 +688,32 @@ func TestAncestor_CanUseDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b1 := testutil.NewBeaconBlock()
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.ParentRoot = bytesutil.PadTo([]byte{'a'}, 32)
r1, err := b1.Block.HashTreeRoot()
require.NoError(t, err)
b100 := testutil.NewBeaconBlock()
b100 := util.NewBeaconBlock()
b100.Block.Slot = 100
b100.Block.ParentRoot = r1[:]
r100, err := b100.Block.HashTreeRoot()
require.NoError(t, err)
b200 := testutil.NewBeaconBlock()
b200 := util.NewBeaconBlock()
b200.Block.Slot = 200
b200.Block.ParentRoot = r100[:]
r200, err := b200.Block.HashTreeRoot()
require.NoError(t, err)
for _, b := range []*ethpb.SignedBeaconBlock{b1, b100, b200} {
beaconBlock := testutil.NewBeaconBlock()
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
require.NoError(t, beaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(beaconBlock))) // Saves blocks to DB.
@@ -705,13 +730,13 @@ func TestAncestor_CanUseDB(t *testing.T) {
func TestEnsureRootNotZeroHashes(t *testing.T) {
ctx := context.Background()
cfg := &Config{}
service, err := NewService(ctx, cfg)
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.genesisRoot = [32]byte{'a'}
service.originBlockRoot = [32]byte{'a'}
r := service.ensureRootNotZeros(params.BeaconConfig().ZeroHash)
assert.Equal(t, service.genesisRoot, r, "Did not get wanted justified root")
assert.Equal(t, service.originBlockRoot, r, "Did not get wanted justified root")
root := [32]byte{'b'}
r = service.ensureRootNotZeros(root)
assert.Equal(t, root, r, "Did not get wanted justified root")
@@ -719,6 +744,12 @@ func TestEnsureRootNotZeroHashes(t *testing.T) {
func TestFinalizedImpliesNewJustified(t *testing.T) {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
ctx := context.Background()
type args struct {
cachedCheckPoint *ethpb.Checkpoint
@@ -757,30 +788,30 @@ func TestFinalizedImpliesNewJustified(t *testing.T) {
},
}
for _, test := range tests {
beaconState, err := testutil.NewBeaconState()
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(test.args.stateCheckPoint))
service, err := NewService(ctx, &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB), ForkChoiceStore: protoarray.New(0, 0, [32]byte{})})
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.justifiedCheckpt = test.args.cachedCheckPoint
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo(test.want.Root, 32)}))
genesisState, err := testutil.NewBeaconState()
genesisState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, genesisState, bytesutil.ToBytes32(test.want.Root)))
if test.args.diffFinalizedCheckPoint {
b1 := testutil.NewBeaconBlock()
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.ParentRoot = bytesutil.PadTo([]byte{'a'}, 32)
r1, err := b1.Block.HashTreeRoot()
require.NoError(t, err)
b100 := testutil.NewBeaconBlock()
b100 := util.NewBeaconBlock()
b100.Block.Slot = 100
b100.Block.ParentRoot = r1[:]
r100, err := b100.Block.HashTreeRoot()
require.NoError(t, err)
for _, b := range []*ethpb.SignedBeaconBlock{b1, b100} {
beaconBlock := testutil.NewBeaconBlock()
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(beaconBlock)))
@@ -798,13 +829,19 @@ func TestVerifyBlkDescendant(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
b := testutil.NewBeaconBlock()
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
b := util.NewBeaconBlock()
b.Block.Slot = 1
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b)))
b1 := testutil.NewBeaconBlock()
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.Body.Graffiti = bytesutil.PadTo([]byte{'a'}, 32)
r1, err := b1.Block.HashTreeRoot()
@@ -852,7 +889,7 @@ func TestVerifyBlkDescendant(t *testing.T) {
},
}
for _, tt := range tests {
service, err := NewService(ctx, &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB), ForkChoiceStore: protoarray.New(0, 0, [32]byte{})})
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.finalizedCheckpt = &ethpb.Checkpoint{
Root: tt.args.finalizedRoot[:],
@@ -867,21 +904,20 @@ func TestVerifyBlkDescendant(t *testing.T) {
}
func TestUpdateJustifiedInitSync(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
cfg := &Config{BeaconDB: beaconDB}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
gBlk := testutil.NewBeaconBlock()
gBlk := util.NewBeaconBlock()
gRoot, err := gBlk.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(gBlk)))
require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, gRoot))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: gRoot[:]}))
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, beaconState, gRoot))
service.genesisRoot = gRoot
service.originBlockRoot = gRoot
currentCp := &ethpb.Checkpoint{Epoch: 1}
service.justifiedCheckpt = currentCp
newCp := &ethpb.Checkpoint{Epoch: 2, Root: gRoot[:]}
@@ -897,11 +933,11 @@ func TestUpdateJustifiedInitSync(t *testing.T) {
func TestHandleEpochBoundary_BadMetrics(t *testing.T) {
ctx := context.Background()
cfg := &Config{}
service, err := NewService(ctx, cfg)
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
s, err := testutil.NewBeaconState()
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(1))
service.head = &head{state: (*v1.BeaconState)(nil)}
@@ -911,11 +947,11 @@ func TestHandleEpochBoundary_BadMetrics(t *testing.T) {
func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
ctx := context.Background()
cfg := &Config{}
service, err := NewService(ctx, cfg)
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
s, _ := testutil.DeterministicGenesisState(t, 1024)
s, _ := util.DeterministicGenesisState(t, 1024)
service.head = &head{state: s}
require.NoError(t, s.SetSlot(2*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, service.handleEpochBoundary(ctx, s))
@@ -925,19 +961,20 @@ func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
func TestOnBlock_CanFinalize(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
depositCache, err := depositcache.New()
require.NoError(t, err)
cfg := &Config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
DepositCache: depositCache,
StateNotifier: &mock.MockStateNotifier{},
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithDepositCache(depositCache),
WithStateNotifier(&mock.MockStateNotifier{}),
}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
gs, keys := testutil.DeterministicGenesisState(t, 32)
gs, keys := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
gBlk, err := service.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
@@ -947,7 +984,7 @@ func TestOnBlock_CanFinalize(t *testing.T) {
testState := gs.Copy()
for i := types.Slot(1); i <= 4*params.BeaconConfig().SlotsPerEpoch; i++ {
blk, err := testutil.GenerateFullBlock(testState, keys, testutil.DefaultBlockGenConfig(), i)
blk, err := util.GenerateFullBlock(testState, keys, util.DefaultBlockGenConfig(), i)
require.NoError(t, err)
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
@@ -957,23 +994,26 @@ func TestOnBlock_CanFinalize(t *testing.T) {
}
require.Equal(t, types.Epoch(3), service.CurrentJustifiedCheckpt().Epoch)
require.Equal(t, types.Epoch(2), service.FinalizedCheckpt().Epoch)
// The update should persist in DB.
j, err := service.cfg.BeaconDB.JustifiedCheckpoint(ctx)
require.NoError(t, err)
require.Equal(t, j.Epoch, service.CurrentJustifiedCheckpt().Epoch)
f, err := service.cfg.BeaconDB.FinalizedCheckpoint(ctx)
require.NoError(t, err)
require.Equal(t, f.Epoch, service.FinalizedCheckpt().Epoch)
}
func TestInsertFinalizedDeposits(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := testServiceOptsWithDB(t)
depositCache, err := depositcache.New()
require.NoError(t, err)
cfg := &Config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
DepositCache: depositCache,
}
service, err := NewService(ctx, cfg)
opts = append(opts, WithDepositCache(depositCache))
service, err := NewService(ctx, opts...)
require.NoError(t, err)
gs, _ := testutil.DeterministicGenesisState(t, 32)
gs, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
gBlk, err := service.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
@@ -1008,8 +1048,8 @@ func TestRemoveBlockAttestationsInPool_Canonical(t *testing.T) {
})
defer resetCfg()
genesis, keys := testutil.DeterministicGenesisState(t, 64)
b, err := testutil.GenerateFullBlock(genesis, keys, testutil.DefaultBlockGenConfig(), 1)
genesis, keys := util.DeterministicGenesisState(t, 64)
b, err := util.GenerateFullBlock(genesis, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -1032,8 +1072,8 @@ func TestRemoveBlockAttestationsInPool_NonCanonical(t *testing.T) {
})
defer resetCfg()
genesis, keys := testutil.DeterministicGenesisState(t, 64)
b, err := testutil.GenerateFullBlock(genesis, keys, testutil.DefaultBlockGenConfig(), 1)
genesis, keys := util.DeterministicGenesisState(t, 64)
b, err := util.GenerateFullBlock(genesis, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)


@@ -7,22 +7,28 @@ import (
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// AttestationStateFetcher allows for retrieving a beacon state corresponding to the block
// root of an attestation's target checkpoint.
type AttestationStateFetcher interface {
AttestationTargetState(ctx context.Context, target *ethpb.Checkpoint) (state.BeaconState, error)
}
// AttestationReceiver defines the methods of the chain service for receiving and processing new attestations.
type AttestationReceiver interface {
AttestationStateFetcher
ReceiveAttestationNoPubsub(ctx context.Context, att *ethpb.Attestation) error
AttestationPreState(ctx context.Context, att *ethpb.Attestation) (state.BeaconState, error)
VerifyLmdFfgConsistency(ctx context.Context, att *ethpb.Attestation) error
VerifyFinalizedConsistency(ctx context.Context, root []byte) error
}
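The interface change above narrows what callers must supply: instead of handing the whole attestation to AttestationPreState, they now pass only its target checkpoint to AttestationTargetState. A hypothetical call site (the att variable and error handling are illustrative, not taken from this diff):

// Illustrative only: resolve the state an attestation should be validated against,
// keyed by its target checkpoint rather than the full attestation object.
targetState, err := chainService.AttestationTargetState(ctx, att.Data.Target)
if err != nil {
    return err
}
_ = targetState // validate the attestation against the target state here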
@@ -43,21 +49,21 @@ func (s *Service) ReceiveAttestationNoPubsub(ctx context.Context, att *ethpb.Att
return nil
}
// AttestationPreState returns the pre state of attestation.
func (s *Service) AttestationPreState(ctx context.Context, att *ethpb.Attestation) (state.BeaconState, error) {
ss, err := core.StartSlot(att.Data.Target.Epoch)
// AttestationTargetState returns the pre state for the given attestation target checkpoint.
func (s *Service) AttestationTargetState(ctx context.Context, target *ethpb.Checkpoint) (state.BeaconState, error) {
ss, err := slots.EpochStart(target.Epoch)
if err != nil {
return nil, err
}
if err := core.ValidateSlotClock(ss, uint64(s.genesisTime.Unix())); err != nil {
if err := slots.ValidateClock(ss, uint64(s.genesisTime.Unix())); err != nil {
return nil, err
}
return s.getAttPreState(ctx, att.Data.Target)
return s.getAttPreState(ctx, target)
}
// VerifyLmdFfgConsistency verifies that an attestation's LMD and FFG votes are consistent with each other.
func (s *Service) VerifyLmdFfgConsistency(ctx context.Context, a *ethpb.Attestation) error {
targetSlot, err := core.StartSlot(a.Data.Target.Epoch)
targetSlot, err := slots.EpochStart(a.Data.Target.Epoch)
if err != nil {
return err
}
@@ -82,7 +88,7 @@ func (s *Service) VerifyFinalizedConsistency(ctx context.Context, root []byte) e
}
f := s.FinalizedCheckpt()
ss, err := core.StartSlot(f.Epoch)
ss, err := slots.EpochStart(f.Epoch)
if err != nil {
return err
}
@@ -98,39 +104,56 @@ func (s *Service) VerifyFinalizedConsistency(ctx context.Context, root []byte) e
}
// This routine processes fork choice attestations from the pool to account for validator votes and fork choice.
func (s *Service) processAttestationsRoutine(subscribedToStateEvents chan<- struct{}) {
func (s *Service) spawnProcessAttestationsRoutine(stateFeed *event.Feed) {
// Wait for state to be initialized.
stateChannel := make(chan *feed.Event, 1)
stateSub := s.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
subscribedToStateEvents <- struct{}{}
<-stateChannel
stateSub.Unsubscribe()
if s.genesisTime.IsZero() {
log.Warn("ProcessAttestations routine waiting for genesis time")
for s.genesisTime.IsZero() {
time.Sleep(1 * time.Second)
}
log.Warn("Genesis time received, now available to process attestations")
}
st := slots.NewSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
for {
stateSub := stateFeed.Subscribe(stateChannel)
go func() {
select {
case <-s.ctx.Done():
stateSub.Unsubscribe()
return
case <-st.C():
// Continue when there's no fork choice attestation, there's nothing to process and update head.
// This covers the condition when the node is still initial syncing to the head of the chain.
if s.cfg.AttPool.ForkchoiceAttestationCount() == 0 {
continue
case <-stateChannel:
stateSub.Unsubscribe()
break
}
if s.genesisTime.IsZero() {
log.Warn("ProcessAttestations routine waiting for genesis time")
for s.genesisTime.IsZero() {
if err := s.ctx.Err(); err != nil {
log.WithError(err).Error("Giving up waiting for genesis time")
return
}
time.Sleep(1 * time.Second)
}
s.processAttestations(s.ctx)
if err := s.updateHead(s.ctx, s.getJustifiedBalances()); err != nil {
log.Warnf("Resolving fork due to new attestation: %v", err)
log.Warn("Genesis time received, now available to process attestations")
}
st := slots.NewSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
for {
select {
case <-s.ctx.Done():
return
case <-st.C():
// Continue when there are no fork choice attestations; there is nothing to process or to update the head with.
// This covers the case where the node is still initial-syncing to the head of the chain.
if s.cfg.AttPool.ForkchoiceAttestationCount() == 0 {
continue
}
s.processAttestations(s.ctx)
balances, err := s.justifiedBalances.get(s.ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if err != nil {
log.WithError(err).Errorf("Unable to get justified balances for root %v", s.justifiedCheckpt.Root)
continue
}
if err := s.updateHead(s.ctx, balances); err != nil {
log.WithError(err).Warn("Resolving fork due to new attestation")
}
}
}
}
}()
}
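With the removal markers stripped from this view, the old processAttestationsRoutine and its replacement read as interleaved above. As a reading aid only, the added lines reduce to roughly the following control flow for spawnProcessAttestationsRoutine; logging and a few guards are abbreviated, so treat this as a condensed sketch rather than the exact code:

// Condensed sketch assembled from the added lines above.
func (s *Service) spawnProcessAttestationsRoutine(stateFeed *event.Feed) {
    stateChannel := make(chan *feed.Event, 1)
    stateSub := stateFeed.Subscribe(stateChannel)
    go func() {
        // Wait for the state-initialized event (or shutdown) before doing anything.
        select {
        case <-s.ctx.Done():
            stateSub.Unsubscribe()
            return
        case <-stateChannel:
            stateSub.Unsubscribe()
        }
        // Block until genesis time is known, giving up if the context is canceled.
        for s.genesisTime.IsZero() {
            if err := s.ctx.Err(); err != nil {
                return
            }
            time.Sleep(time.Second)
        }
        ticker := slots.NewSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
        for {
            select {
            case <-s.ctx.Done():
                return
            case <-ticker.C():
                if s.cfg.AttPool.ForkchoiceAttestationCount() == 0 {
                    continue // nothing to process yet, e.g. still initial-syncing
                }
                s.processAttestations(s.ctx)
                balances, err := s.justifiedBalances.get(s.ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
                if err != nil {
                    continue
                }
                if err := s.updateHead(s.ctx, balances); err != nil {
                    log.WithError(err).Warn("Resolving fork due to new attestation")
                }
            }
        }
    }()
}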
// This processes fork choice attestations from the pool to account for validator votes and fork choice.
@@ -141,7 +164,7 @@ func (s *Service) processAttestations(ctx context.Context) {
// This delays consideration in the fork choice until their slot is in the past.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/fork-choice.md#validate_on_attestation
nextSlot := a.Data.Slot + 1
if err := core.VerifySlotTime(uint64(s.genesisTime.Unix()), nextSlot, params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), nextSlot, params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
continue
}


@@ -6,23 +6,26 @@ import (
"time"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
prysmTime "github.com/prysmaticlabs/prysm/time"
"github.com/prysmaticlabs/prysm/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
var (
_ = AttestationReceiver(&Service{})
_ = AttestationStateFetcher(&Service{})
)
func TestAttestationCheckPtState_FarFutureSlot(t *testing.T) {
helpers.ClearCache()
beaconDB := testDB.SetupDB(t)
@@ -30,25 +33,24 @@ func TestAttestationCheckPtState_FarFutureSlot(t *testing.T) {
chainService := setupBeaconChain(t, beaconDB)
chainService.genesisTime = time.Now()
e := types.Epoch(core.MaxSlotBuffer/uint64(params.BeaconConfig().SlotsPerEpoch) + 1)
_, err := chainService.AttestationPreState(context.Background(), &ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Epoch: e}}})
e := types.Epoch(slots.MaxSlotBuffer/uint64(params.BeaconConfig().SlotsPerEpoch) + 1)
_, err := chainService.AttestationTargetState(context.Background(), &ethpb.Checkpoint{Epoch: e})
require.ErrorContains(t, "exceeds max allowed value relative to the local clock", err)
}
func TestVerifyLMDFFGConsistent_NotOK(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := testServiceOptsWithDB(t)
cfg := &Config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := testutil.NewBeaconBlock()
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b32)))
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
b33 := testutil.NewBeaconBlock()
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b33)))
@@ -56,7 +58,7 @@ func TestVerifyLMDFFGConsistent_NotOK(t *testing.T) {
require.NoError(t, err)
wanted := "FFG and LMD votes are not consistent"
a := testutil.NewAttestation()
a := util.NewAttestation()
a.Data.Target.Epoch = 1
a.Data.Target.Root = []byte{'a'}
a.Data.BeaconBlockRoot = r33[:]
@@ -65,25 +67,24 @@ func TestVerifyLMDFFGConsistent_NotOK(t *testing.T) {
func TestVerifyLMDFFGConsistent_OK(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &Config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := testutil.NewBeaconBlock()
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b32)))
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
b33 := testutil.NewBeaconBlock()
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b33)))
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
a := testutil.NewAttestation()
a := util.NewAttestation()
a.Data.Target.Epoch = 1
a.Data.Target.Root = r32[:]
a.Data.BeaconBlockRoot = r33[:]
@@ -94,21 +95,16 @@ func TestVerifyLMDFFGConsistent_OK(t *testing.T) {
func TestProcessAttestations_Ok(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := testServiceOptsWithDB(t)
opts = append(opts, WithAttestationPool(attestations.NewPool()))
cfg := &Config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
StateGen: stategen.New(beaconDB),
AttPool: attestations.NewPool(),
}
service, err := NewService(ctx, cfg)
service.genesisTime = prysmTime.Now().Add(-1 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisState, pks := testutil.DeterministicGenesisState(t, 64)
service.genesisTime = prysmTime.Now().Add(-1 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(prysmTime.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
atts, err := testutil.GenerateAttestations(genesisState, pks, 1, 0, false)
atts, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(atts[0].Data.Target.Root)
copied := genesisState.Copy()


@@ -5,13 +5,12 @@ import (
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/time"
"github.com/prysmaticlabs/prysm/time/slots"
"go.opencensus.io/trace"
)
@@ -43,28 +42,6 @@ func (s *Service) ReceiveBlock(ctx context.Context, block block.SignedBeaconBloc
return err
}
// Update and save head block after fork choice.
if !features.Get().UpdateHeadTimely {
if err := s.updateHead(ctx, s.getJustifiedBalances()); err != nil {
log.WithError(err).Warn("Could not update head")
}
if err := s.pruneCanonicalAttsFromPool(ctx, blockRoot, block); err != nil {
return err
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: blockCopy.Block().Slot(),
BlockRoot: blockRoot,
SignedBlock: blockCopy,
Verified: true,
},
})
}
// Handle post block operations such as attestations and exits.
if err := s.handlePostBlockOperations(blockCopy.Block()); err != nil {
return err
@@ -124,7 +101,10 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []block.SignedBe
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), s.finalizedCheckpt)
}
if err := s.VerifyWeakSubjectivityRoot(s.ctx); err != nil {
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
if err := s.wsVerifier.VerifyWeakSubjectivity(s.ctx, s.finalizedCheckpt.Epoch); err != nil {
// log.Fatalf will prevent defer from being called
span.End()
// Exit run time if the node failed to verify weak subjectivity checkpoint.
@@ -165,7 +145,7 @@ func (s *Service) handlePostBlockOperations(b block.BeaconBlock) error {
// This checks whether it's time to start saving hot state to DB.
// It's time when there's `epochsSinceFinalitySaveHotStateDB` epochs of non-finality.
func (s *Service) checkSaveHotStateDB(ctx context.Context) error {
currentEpoch := core.SlotToEpoch(s.CurrentSlot())
currentEpoch := slots.ToEpoch(s.CurrentSlot())
// Prevent `sinceFinality` going underflow.
var sinceFinality types.Epoch
if currentEpoch > s.finalizedCheckpt.Epoch {


@@ -8,33 +8,32 @@ import (
types "github.com/prysmaticlabs/eth2-types"
blockchainTesting "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestService_ReceiveBlock(t *testing.T) {
ctx := context.Background()
genesis, keys := testutil.DeterministicGenesisState(t, 64)
genFullBlock := func(t *testing.T, conf *testutil.BlockGenConfig, slot types.Slot) *ethpb.SignedBeaconBlock {
blk, err := testutil.GenerateFullBlock(genesis, keys, conf, slot)
genesis, keys := util.DeterministicGenesisState(t, 64)
genFullBlock := func(t *testing.T, conf *util.BlockGenConfig, slot types.Slot) *ethpb.SignedBeaconBlock {
blk, err := util.GenerateFullBlock(genesis, keys, conf, slot)
assert.NoError(t, err)
return blk
}
params.SetupTestConfigCleanup(t)
bc := params.BeaconConfig()
bc.ShardCommitteePeriod = 0 // Required for voluntary exits test in reasonable time.
params.OverrideBeaconConfig(bc)
@@ -51,7 +50,7 @@ func TestService_ReceiveBlock(t *testing.T) {
{
name: "applies block with state transition",
args: args{
block: genFullBlock(t, testutil.DefaultBlockGenConfig(), 2 /*slot*/),
block: genFullBlock(t, util.DefaultBlockGenConfig(), 2 /*slot*/),
},
check: func(t *testing.T, s *Service) {
if hs := s.head.state.Slot(); hs != 2 {
@@ -66,7 +65,7 @@ func TestService_ReceiveBlock(t *testing.T) {
name: "saves attestations to pool",
args: args{
block: genFullBlock(t,
&testutil.BlockGenConfig{
&util.BlockGenConfig{
NumProposerSlashings: 0,
NumAttesterSlashings: 0,
NumAttestations: 2,
@@ -86,7 +85,7 @@ func TestService_ReceiveBlock(t *testing.T) {
{
name: "updates exit pool",
args: args{
block: genFullBlock(t, &testutil.BlockGenConfig{
block: genFullBlock(t, &util.BlockGenConfig{
NumProposerSlashings: 0,
NumAttesterSlashings: 0,
NumAttestations: 0,
@@ -110,7 +109,7 @@ func TestService_ReceiveBlock(t *testing.T) {
{
name: "notifies block processed on state feed",
args: args{
block: genFullBlock(t, testutil.DefaultBlockGenConfig(), 1 /*slot*/),
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
@@ -126,19 +125,15 @@ func TestService_ReceiveBlock(t *testing.T) {
genesisBlockRoot := bytesutil.ToBytes32(nil)
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
cfg := &Config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
genesisBlockRoot,
),
AttPool: attestations.NewPool(),
ExitPool: voluntaryexits.NewPool(),
StateNotifier: &blockchainTesting.MockStateNotifier{RecordEvents: true},
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, genesisBlockRoot)),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
}
s, err := NewService(ctx, cfg)
s, err := NewService(ctx, opts...)
require.NoError(t, err)
require.NoError(t, s.saveGenesisData(ctx, genesis))
gBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
@@ -161,25 +156,22 @@ func TestService_ReceiveBlock(t *testing.T) {
func TestService_ReceiveBlockUpdateHead(t *testing.T) {
ctx := context.Background()
genesis, keys := testutil.DeterministicGenesisState(t, 64)
b, err := testutil.GenerateFullBlock(genesis, keys, testutil.DefaultBlockGenConfig(), 1)
genesis, keys := util.DeterministicGenesisState(t, 64)
b, err := util.GenerateFullBlock(genesis, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
beaconDB := testDB.SetupDB(t)
genesisBlockRoot := bytesutil.ToBytes32(nil)
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
cfg := &Config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
genesisBlockRoot,
),
AttPool: attestations.NewPool(),
ExitPool: voluntaryexits.NewPool(),
StateNotifier: &blockchainTesting.MockStateNotifier{RecordEvents: true},
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, genesisBlockRoot)),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
}
s, err := NewService(ctx, cfg)
s, err := NewService(ctx, opts...)
require.NoError(t, err)
require.NoError(t, s.saveGenesisData(ctx, genesis))
gBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
@@ -206,9 +198,9 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
func TestService_ReceiveBlockBatch(t *testing.T) {
ctx := context.Background()
genesis, keys := testutil.DeterministicGenesisState(t, 64)
genFullBlock := func(t *testing.T, conf *testutil.BlockGenConfig, slot types.Slot) *ethpb.SignedBeaconBlock {
blk, err := testutil.GenerateFullBlock(genesis, keys, conf, slot)
genesis, keys := util.DeterministicGenesisState(t, 64)
genFullBlock := func(t *testing.T, conf *util.BlockGenConfig, slot types.Slot) *ethpb.SignedBeaconBlock {
blk, err := util.GenerateFullBlock(genesis, keys, conf, slot)
assert.NoError(t, err)
return blk
}
@@ -225,7 +217,7 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
{
name: "applies block with state transition",
args: args{
block: genFullBlock(t, testutil.DefaultBlockGenConfig(), 2 /*slot*/),
block: genFullBlock(t, util.DefaultBlockGenConfig(), 2 /*slot*/),
},
check: func(t *testing.T, s *Service) {
assert.Equal(t, types.Slot(2), s.head.state.Slot(), "Incorrect head state slot")
@@ -235,7 +227,7 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
{
name: "notifies block processed on state feed",
args: args{
block: genFullBlock(t, testutil.DefaultBlockGenConfig(), 1 /*slot*/),
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
@@ -250,17 +242,13 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
beaconDB := testDB.SetupDB(t)
genesisBlockRoot, err := genesis.HashTreeRoot(ctx)
require.NoError(t, err)
cfg := &Config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
genesisBlockRoot,
),
StateNotifier: &blockchainTesting.MockStateNotifier{RecordEvents: true},
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, genesisBlockRoot)),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
}
s, err := NewService(ctx, cfg)
s, err := NewService(ctx, opts...)
require.NoError(t, err)
err = s.saveGenesisData(ctx, genesis)
require.NoError(t, err)
@@ -285,79 +273,25 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
}
}
func TestService_ReceiveBlockBatch_UpdateFinalizedCheckpoint(t *testing.T) {
// Must enable head timely feature flag to test this.
resetCfg := features.InitWithReset(&features.Flags{
UpdateHeadTimely: true,
})
defer resetCfg()
ctx := context.Background()
genesis, keys := testutil.DeterministicGenesisState(t, 64)
// Generate 5 epochs worth of blocks.
var blks []block.SignedBeaconBlock
var roots [][32]byte
copied := genesis.Copy()
for i := types.Slot(1); i < params.BeaconConfig().SlotsPerEpoch*5; i++ {
b, err := testutil.GenerateFullBlock(copied, keys, testutil.DefaultBlockGenConfig(), i)
assert.NoError(t, err)
copied, err = transition.ExecuteStateTransition(context.Background(), copied, wrapper.WrappedPhase0SignedBeaconBlock(b))
assert.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
blks = append(blks, wrapper.WrappedPhase0SignedBeaconBlock(b))
roots = append(roots, r)
}
beaconDB := testDB.SetupDB(t)
genesisBlockRoot, err := genesis.HashTreeRoot(ctx)
require.NoError(t, err)
cfg := &Config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
genesisBlockRoot,
),
StateNotifier: &blockchainTesting.MockStateNotifier{RecordEvents: false},
StateGen: stategen.New(beaconDB),
}
s, err := NewService(ctx, cfg)
require.NoError(t, err)
err = s.saveGenesisData(ctx, genesis)
require.NoError(t, err)
gBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
s.finalizedCheckpt = &ethpb.Checkpoint{Root: gRoot[:]}
// Process 5 epochs worth of blocks.
require.NoError(t, s.ReceiveBlockBatch(ctx, blks, roots))
// Finalized epoch must be updated.
require.Equal(t, types.Epoch(2), s.finalizedCheckpt.Epoch)
}
func TestService_HasInitSyncBlock(t *testing.T) {
s, err := NewService(context.Background(), &Config{StateNotifier: &blockchainTesting.MockStateNotifier{}})
opts := testServiceOptsNoDB()
opts = append(opts, WithStateNotifier(&blockchainTesting.MockStateNotifier{}))
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
r := [32]byte{'a'}
if s.HasInitSyncBlock(r) {
t.Error("Should not have block")
}
s.saveInitSyncBlock(r, wrapper.WrappedPhase0SignedBeaconBlock(testutil.NewBeaconBlock()))
s.saveInitSyncBlock(r, wrapper.WrappedPhase0SignedBeaconBlock(util.NewBeaconBlock()))
if !s.HasInitSyncBlock(r) {
t.Error("Should have block")
}
}
func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
beaconDB := testDB.SetupDB(t)
opts := testServiceOptsWithDB(t)
hook := logTest.NewGlobal()
s, err := NewService(context.Background(), &Config{StateGen: stategen.New(beaconDB)})
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -368,9 +302,9 @@ func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
}
func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
beaconDB := testDB.SetupDB(t)
hook := logTest.NewGlobal()
s, err := NewService(context.Background(), &Config{StateGen: stategen.New(beaconDB)})
opts := testServiceOptsWithDB(t)
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.finalizedCheckpt = &ethpb.Checkpoint{}
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
@@ -381,9 +315,9 @@ func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
}
func TestCheckSaveHotStateDB_Overflow(t *testing.T) {
beaconDB := testDB.SetupDB(t)
hook := logTest.NewGlobal()
s, err := NewService(context.Background(), &Config{StateGen: stategen.New(beaconDB)})
opts := testServiceOptsWithDB(t)
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.finalizedCheckpt = &ethpb.Checkpoint{Epoch: 10000000}
s.genesisTime = time.Now()


@@ -11,9 +11,9 @@ import (
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/async/event"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
@@ -29,10 +29,11 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
prysmTime "github.com/prysmaticlabs/prysm/time"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -45,13 +46,14 @@ const headSyncMinEpochsAfterCheckpoint = 128
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *Config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
genesisRoot [32]byte
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
// originBlockRoot is the genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
originBlockRoot [32]byte
justifiedCheckpt *ethpb.Checkpoint
prevJustifiedCheckpt *ethpb.Checkpoint
bestJustifiedCheckpt *ethpb.Checkpoint
@@ -62,13 +64,13 @@ type Service struct {
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]block.SignedBeaconBlock
initSyncBlocksLock sync.RWMutex
justifiedBalances []uint64
justifiedBalancesLock sync.RWMutex
wsVerified bool
//justifiedBalances []uint64
justifiedBalances *stateBalanceCache
wsVerifier *WeakSubjectivityVerifier
}
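The justifiedBalances field changes here from a plain []uint64 guarded by its own lock to a dedicated stateBalanceCache, which the attestation routine earlier in this diff queries with justifiedBalances.get(ctx, root). The cache's implementation is not part of this excerpt; one plausible shape, purely as an illustration (imports omitted, relying only on packages already used in this diff: context, sync, errors, stategen), is:

// Hypothetical sketch of a root-keyed balance cache backed by stategen.
// Only newStateBalanceCache and get are visible in this diff; everything else is assumed.
type stateBalanceCache struct {
    sync.Mutex
    stateGen *stategen.State
    root     [32]byte
    balances []uint64
}

func newStateBalanceCache(sg *stategen.State) (*stateBalanceCache, error) {
    if sg == nil {
        return nil, errors.New("stategen is required")
    }
    return &stateBalanceCache{stateGen: sg}, nil
}

// get returns balances for the given justified root, regenerating the state on a miss.
func (c *stateBalanceCache) get(ctx context.Context, root [32]byte) ([]uint64, error) {
    c.Lock()
    defer c.Unlock()
    if root == c.root && c.balances != nil {
        return c.balances, nil
    }
    st, err := c.stateGen.StateByRoot(ctx, root)
    if err != nil {
        return nil, err
    }
    // The real cache derives active-validator balances; this sketch takes the
    // state's balance registry as-is to keep the example short.
    c.root, c.balances = root, st.Balances()
    return c.balances, nil
}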
// Config options for the service.
type Config struct {
// config options for the service.
type config struct {
BeaconBlockBuf int
ChainStartFetcher powchain.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
@@ -82,166 +84,287 @@ type Config struct {
ForkChoiceStore f.ForkChoicer
AttService *attestations.Service
StateGen *stategen.State
SlasherAttestationsFeed *event.Feed
WeakSubjectivityCheckpt *ethpb.Checkpoint
FinalizedStateAtStartUp state.BeaconState
}
// NewService instantiates a new block service instance that will
// be registered into a running beacon node.
func NewService(ctx context.Context, cfg *Config) (*Service, error) {
func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
return &Service{
cfg: cfg,
srv := &Service{
ctx: ctx,
cancel: cancel,
boundaryRoots: [][32]byte{},
checkpointStateCache: cache.NewCheckpointStateCache(),
initSyncBlocks: make(map[[32]byte]block.SignedBeaconBlock),
justifiedBalances: make([]uint64, 0),
}, nil
cfg: &config{},
}
for _, opt := range opts {
if err := opt(srv); err != nil {
return nil, err
}
}
var err error
if srv.justifiedBalances == nil {
srv.justifiedBalances, err = newStateBalanceCache(srv.cfg.StateGen)
if err != nil {
return nil, err
}
}
srv.wsVerifier, err = NewWeakSubjectivityVerifier(srv.cfg.WeakSubjectivityCheckpt, srv.cfg.BeaconDB)
if err != nil {
return nil, err
}
return srv, nil
}
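NewService now accepts variadic Option values and applies each one to a Service holding a private config, replacing the exported Config struct. The option constructors themselves are outside this excerpt; given the opt(srv) loop above, they are presumably of the following shape (a sketch under that assumption, using two of the options exercised by the tests in this diff):

// Option configures the blockchain Service; the signature is inferred from the
// for-loop in NewService and is an assumption, not a quote from the repository.
type Option func(s *Service) error

// WithDatabase records the beacon database on the service's private config.
func WithDatabase(beaconDB db.HeadAccessDatabase) Option {
    return func(s *Service) error {
        s.cfg.BeaconDB = beaconDB
        return nil
    }
}

// WithForkChoiceStore records the fork choice store on the private config.
func WithForkChoiceStore(fcs f.ForkChoicer) Option {
    return func(s *Service) error {
        s.cfg.ForkChoiceStore = fcs
        return nil
    }
}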
// Start a blockchain service's main event loop.
func (s *Service) Start() {
// For running initial sync with state cache, in an event of restart, we use
// last finalized check point as start point to sync instead of head
// state. This is because we no longer save state every slot during sync.
cp, err := s.cfg.BeaconDB.FinalizedCheckpoint(s.ctx)
if err != nil {
log.Fatalf("Could not fetch finalized cp: %v", err)
}
saved := s.cfg.FinalizedStateAtStartUp
r := bytesutil.ToBytes32(cp.Root)
// Before the first finalized epoch, in the current epoch,
// the finalized root is defined as zero hashes instead of genesis root hash.
// We want to use genesis root to retrieve for state.
if r == params.BeaconConfig().ZeroHash {
genesisBlock, err := s.cfg.BeaconDB.GenesisBlock(s.ctx)
if err != nil {
log.Fatalf("Could not fetch finalized cp: %v", err)
if saved != nil && !saved.IsNil() {
if err := s.startFromSavedState(saved); err != nil {
log.Fatal(err)
}
if genesisBlock != nil && !genesisBlock.IsNil() {
r, err = genesisBlock.Block().HashTreeRoot()
if err != nil {
log.Fatalf("Could not tree hash genesis block: %v", err)
}
}
}
beaconState, err := s.cfg.StateGen.StateByRoot(s.ctx, r)
if err != nil {
log.Fatalf("Could not fetch beacon state by root: %v", err)
}
// Make sure that attestation processor is subscribed and ready for state initializing event.
attestationProcessorSubscribed := make(chan struct{}, 1)
// If the chain has already been initialized, simply start the block processing routine.
if beaconState != nil && !beaconState.IsNil() {
log.Info("Blockchain data already exists in DB, initializing...")
s.genesisTime = time.Unix(int64(beaconState.GenesisTime()), 0)
s.cfg.AttService.SetGenesisTime(beaconState.GenesisTime())
if err := s.initializeChainInfo(s.ctx); err != nil {
log.Fatalf("Could not set up chain info: %v", err)
}
// We start a counter to genesis, if needed.
gState, err := s.cfg.BeaconDB.GenesisState(s.ctx)
if err != nil {
log.Fatalf("Could not retrieve genesis state: %v", err)
}
gRoot, err := gState.HashTreeRoot(s.ctx)
if err != nil {
log.Fatalf("Could not hash tree root genesis state: %v", err)
}
go slots.CountdownToGenesis(s.ctx, s.genesisTime, uint64(gState.NumValidators()), gRoot)
justifiedCheckpoint, err := s.cfg.BeaconDB.JustifiedCheckpoint(s.ctx)
if err != nil {
log.Fatalf("Could not get justified checkpoint: %v", err)
}
finalizedCheckpoint, err := s.cfg.BeaconDB.FinalizedCheckpoint(s.ctx)
if err != nil {
log.Fatalf("Could not get finalized checkpoint: %v", err)
}
// Resume fork choice.
s.justifiedCheckpt = ethpb.CopyCheckpoint(justifiedCheckpoint)
if err := s.cacheJustifiedStateBalances(s.ctx, s.ensureRootNotZeros(bytesutil.ToBytes32(s.justifiedCheckpt.Root))); err != nil {
log.Fatalf("Could not cache justified state balances: %v", err)
}
s.prevJustifiedCheckpt = ethpb.CopyCheckpoint(justifiedCheckpoint)
s.bestJustifiedCheckpt = ethpb.CopyCheckpoint(justifiedCheckpoint)
s.finalizedCheckpt = ethpb.CopyCheckpoint(finalizedCheckpoint)
s.prevFinalizedCheckpt = ethpb.CopyCheckpoint(finalizedCheckpoint)
s.resumeForkChoice(justifiedCheckpoint, finalizedCheckpoint)
ss, err := core.StartSlot(s.finalizedCheckpt.Epoch)
if err != nil {
log.Fatalf("Could not get start slot of finalized epoch: %v", err)
}
h := s.headBlock().Block()
if h.Slot() > ss {
log.WithFields(logrus.Fields{
"startSlot": ss,
"endSlot": h.Slot(),
}).Info("Loading blocks to fork choice store, this may take a while.")
if err := s.fillInForkChoiceMissingBlocks(s.ctx, h, s.finalizedCheckpt, s.justifiedCheckpt); err != nil {
log.Fatalf("Could not fill in fork choice store missing blocks: %v", err)
}
}
if err := s.VerifyWeakSubjectivityRoot(s.ctx); err != nil {
// Exit run time if the node failed to verify weak subjectivity checkpoint.
log.Fatalf("Could not verify weak subjectivity checkpoint: %v", err)
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.Initialized,
Data: &statefeed.InitializedData{
StartTime: s.genesisTime,
GenesisValidatorsRoot: beaconState.GenesisValidatorRoot(),
},
})
} else {
log.Info("Waiting to reach the validator deposit threshold to start the beacon chain...")
if s.cfg.ChainStartFetcher == nil {
log.Fatal("Not configured web3Service for POW chain")
return // return need for TestStartUninitializedChainWithoutConfigPOWChain.
if err := s.startFromPOWChain(); err != nil {
log.Fatal(err)
}
go func() {
stateChannel := make(chan *feed.Event, 1)
stateSub := s.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
<-attestationProcessorSubscribed
for {
select {
case event := <-stateChannel:
if event.Type == statefeed.ChainStarted {
data, ok := event.Data.(*statefeed.ChainStartedData)
if !ok {
log.Error("event data is not type *statefeed.ChainStartedData")
return
}
log.WithField("starttime", data.StartTime).Debug("Received chain start event")
s.processChainStartTime(s.ctx, data.StartTime)
return
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
return
case err := <-stateSub.Err():
log.WithError(err).Error("Subscription to state notifier failed")
return
}
}
}()
}
go s.processAttestationsRoutine(attestationProcessorSubscribed)
}
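With the removal markers stripped, the old and new bodies of Start are interleaved above. The decision the rewritten Start makes is simply whether a finalized state was handed to the service at construction time; a condensed sketch of that flow, as recoverable from the added lines:

// Condensed sketch of the rewritten Start, assembled from the added lines above.
func (s *Service) Start() {
    saved := s.cfg.FinalizedStateAtStartUp
    if saved != nil && !saved.IsNil() {
        // Chain data already exists: resume from the saved finalized state.
        if err := s.startFromSavedState(saved); err != nil {
            log.Fatal(err)
        }
    } else {
        // Fresh node: wait on the eth1 deposit contract before the chain can start.
        if err := s.startFromPOWChain(); err != nil {
            log.Fatal(err)
        }
    }
}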
// processChainStartTime initializes a series of deposits from the ChainStart deposits in the eth1
// Stop the blockchain service's main event loop and associated goroutines.
func (s *Service) Stop() error {
defer s.cancel()
if s.cfg.StateGen != nil && s.head != nil && s.head.state != nil {
if err := s.cfg.StateGen.ForceCheckpoint(s.ctx, s.head.state.FinalizedCheckpoint().Root); err != nil {
return err
}
}
// Save initial sync cached blocks to the DB before stop.
return s.cfg.BeaconDB.SaveBlocks(s.ctx, s.getInitSyncBlocks())
}
// Status always returns nil unless there is an error condition that causes
// this service to be unhealthy.
func (s *Service) Status() error {
if s.originBlockRoot == params.BeaconConfig().ZeroHash {
return errors.New("genesis state has not been created")
}
if runtime.NumGoroutine() > s.cfg.MaxRoutines {
return fmt.Errorf("too many goroutines %d", runtime.NumGoroutine())
}
return nil
}
func (s *Service) startFromSavedState(saved state.BeaconState) error {
log.Info("Blockchain data already exists in DB, initializing...")
s.genesisTime = time.Unix(int64(saved.GenesisTime()), 0)
s.cfg.AttService.SetGenesisTime(saved.GenesisTime())
originRoot, err := s.originRootFromSavedState(s.ctx)
if err != nil {
return err
}
s.originBlockRoot = originRoot
if err := s.initializeHeadFromDB(s.ctx); err != nil {
return errors.Wrap(err, "could not set up chain info")
}
spawnCountdownIfPreGenesis(s.ctx, s.genesisTime, s.cfg.BeaconDB)
justified, err := s.cfg.BeaconDB.JustifiedCheckpoint(s.ctx)
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
s.prevJustifiedCheckpt = ethpb.CopyCheckpoint(justified)
s.bestJustifiedCheckpt = ethpb.CopyCheckpoint(justified)
s.justifiedCheckpt = ethpb.CopyCheckpoint(justified)
finalized, err := s.cfg.BeaconDB.FinalizedCheckpoint(s.ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
}
s.prevFinalizedCheckpt = ethpb.CopyCheckpoint(finalized)
s.finalizedCheckpt = ethpb.CopyCheckpoint(finalized)
store := protoarray.New(justified.Epoch, finalized.Epoch, bytesutil.ToBytes32(finalized.Root))
s.cfg.ForkChoiceStore = store
ss, err := slots.EpochStart(s.finalizedCheckpt.Epoch)
if err != nil {
return errors.Wrap(err, "could not get start slot of finalized epoch")
}
h := s.headBlock().Block()
if h.Slot() > ss {
log.WithFields(logrus.Fields{
"startSlot": ss,
"endSlot": h.Slot(),
}).Info("Loading blocks to fork choice store, this may take a while.")
if err := s.fillInForkChoiceMissingBlocks(s.ctx, h, s.finalizedCheckpt, s.justifiedCheckpt); err != nil {
return errors.Wrap(err, "could not fill in fork choice store missing blocks")
}
}
// not attempting to save initial sync blocks here, because there shouldn't be any until
// after the statefeed.Initialized event is fired (below)
if err := s.wsVerifier.VerifyWeakSubjectivity(s.ctx, s.finalizedCheckpt.Epoch); err != nil {
// Exit run time if the node failed to verify weak subjectivity checkpoint.
return errors.Wrap(err, "could not verify initial checkpoint provided for chain sync")
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.Initialized,
Data: &statefeed.InitializedData{
StartTime: s.genesisTime,
GenesisValidatorsRoot: saved.GenesisValidatorRoot(),
},
})
s.spawnProcessAttestationsRoutine(s.cfg.StateNotifier.StateFeed())
return nil
}
func (s *Service) originRootFromSavedState(ctx context.Context) ([32]byte, error) {
// first check if we have started from checkpoint sync and have a root
originRoot, err := s.cfg.BeaconDB.OriginBlockRoot(ctx)
if err == nil {
return originRoot, nil
}
if !errors.Is(err, db.ErrNotFound) {
return originRoot, errors.Wrap(err, "could not retrieve checkpoint sync chain origin data from db")
}
// we got here because OriginBlockRoot gave us an ErrNotFound. this means the node was started from a genesis state,
// so we should have a value for GenesisBlock
genesisBlock, err := s.cfg.BeaconDB.GenesisBlock(ctx)
if err != nil {
return originRoot, errors.Wrap(err, "could not get genesis block from db")
}
if err := helpers.BeaconBlockIsNil(genesisBlock); err != nil {
return originRoot, err
}
genesisBlkRoot, err := genesisBlock.Block().HashTreeRoot()
if err != nil {
return genesisBlkRoot, errors.Wrap(err, "could not get signing root of genesis block")
}
return genesisBlkRoot, nil
}
// initializeHeadFromDB uses the finalized checkpoint and head block found in the database to set the current head
// note that this may block until stategen replays blocks between the finalized and head blocks
// if the head sync flag was specified and the gap between the finalized and head blocks is at least 128 epochs long
func (s *Service) initializeHeadFromDB(ctx context.Context) error {
finalized, err := s.cfg.BeaconDB.FinalizedCheckpoint(ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint from db")
}
if finalized == nil {
// This should never happen. At chain start, the finalized checkpoint
// would be the genesis state and block.
return errors.New("no finalized epoch in the database")
}
finalizedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
var finalizedState state.BeaconState
finalizedState, err = s.cfg.StateGen.Resume(ctx, s.cfg.FinalizedStateAtStartUp)
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}
if flags.Get().HeadSync {
headBlock, err := s.cfg.BeaconDB.HeadBlock(ctx)
if err != nil {
return errors.Wrap(err, "could not retrieve head block")
}
headEpoch := slots.ToEpoch(headBlock.Block().Slot())
var epochsSinceFinality types.Epoch
if headEpoch > finalized.Epoch {
epochsSinceFinality = headEpoch - finalized.Epoch
}
// Head sync applies when the node is far enough beyond the known finalized epoch;
// this becomes really useful during long periods of non-finality.
if epochsSinceFinality >= headSyncMinEpochsAfterCheckpoint {
headRoot, err := headBlock.Block().HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not hash head block")
}
finalizedState, err := s.cfg.StateGen.Resume(ctx, s.cfg.FinalizedStateAtStartUp)
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}
log.Infof("Regenerating state from the last checkpoint at slot %d to current head slot of %d."+
"This process may take a while, please wait.", finalizedState.Slot(), headBlock.Block().Slot())
headState, err := s.cfg.StateGen.StateByRoot(ctx, headRoot)
if err != nil {
return errors.Wrap(err, "could not retrieve head state")
}
s.setHead(headRoot, headBlock, headState)
return nil
} else {
log.Warnf("Finalized checkpoint at slot %d is too close to the current head slot, "+
"resetting head from the checkpoint ('--%s' flag is ignored).",
finalizedState.Slot(), flags.HeadSync.Name)
}
}
finalizedBlock, err := s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized block from db")
}
if finalizedState == nil || finalizedState.IsNil() || finalizedBlock == nil || finalizedBlock.IsNil() {
return errors.New("finalized state and block can't be nil")
}
s.setHead(finalizedRoot, finalizedBlock, finalizedState)
return nil
}
func (s *Service) startFromPOWChain() error {
log.Info("Waiting to reach the validator deposit threshold to start the beacon chain...")
if s.cfg.ChainStartFetcher == nil {
return errors.New("not configured web3Service for POW chain")
}
go func() {
stateChannel := make(chan *feed.Event, 1)
stateSub := s.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
s.spawnProcessAttestationsRoutine(s.cfg.StateNotifier.StateFeed())
for {
select {
case event := <-stateChannel:
if event.Type == statefeed.ChainStarted {
data, ok := event.Data.(*statefeed.ChainStartedData)
if !ok {
log.Error("event data is not type *statefeed.ChainStartedData")
return
}
log.WithField("starttime", data.StartTime).Debug("Received chain start event")
s.onPowchainStart(s.ctx, data.StartTime)
return
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
return
case err := <-stateSub.Err():
log.WithError(err).Error("Subscription to state notifier failed")
return
}
}
}()
return nil
}
// onPowchainStart initializes a series of deposits from the ChainStart deposits in the eth1
// deposit contract, initializes the beacon chain's state, and kicks off the beacon chain.
func (s *Service) processChainStartTime(ctx context.Context, genesisTime time.Time) {
func (s *Service) onPowchainStart(ctx context.Context, genesisTime time.Time) {
preGenesisState := s.cfg.ChainStartFetcher.PreGenesisState()
initializedState, err := s.initializeBeaconChain(ctx, genesisTime, preGenesisState, s.cfg.ChainStartFetcher.ChainStartEth1Data())
if err != nil {
@@ -296,7 +419,7 @@ func (s *Service) initializeBeaconChain(
if err := helpers.UpdateCommitteeCache(genesisState, 0 /* genesis epoch */); err != nil {
return nil, err
}
if err := helpers.UpdateProposerIndicesInCache(genesisState); err != nil {
if err := helpers.UpdateProposerIndicesInCache(ctx, genesisState); err != nil {
return nil, err
}
@@ -305,32 +428,6 @@ func (s *Service) initializeBeaconChain(
return genesisState, nil
}
// Stop the blockchain service's main event loop and associated goroutines.
func (s *Service) Stop() error {
defer s.cancel()
if s.cfg.StateGen != nil && s.head != nil && s.head.state != nil {
if err := s.cfg.StateGen.ForceCheckpoint(s.ctx, s.head.state.FinalizedCheckpoint().Root); err != nil {
return err
}
}
// Save initial sync cached blocks to the DB before stop.
return s.cfg.BeaconDB.SaveBlocks(s.ctx, s.getInitSyncBlocks())
}
// Status always returns nil unless there is an error condition that causes
// this service to be unhealthy.
func (s *Service) Status() error {
if s.genesisRoot == params.BeaconConfig().ZeroHash {
return errors.New("genesis state has not been created")
}
if runtime.NumGoroutine() > s.cfg.MaxRoutines {
return fmt.Errorf("too many goroutines %d", runtime.NumGoroutine())
}
return nil
}
// This gets called when beacon chain is first initialized to save genesis data (state, block, and more) in db.
func (s *Service) saveGenesisData(ctx context.Context, genesisState state.BeaconState) error {
if err := s.cfg.BeaconDB.SaveGenesisData(ctx, genesisState); err != nil {
@@ -345,16 +442,13 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
return errors.Wrap(err, "could not get genesis block root")
}
s.genesisRoot = genesisBlkRoot
s.originBlockRoot = genesisBlkRoot
s.cfg.StateGen.SaveFinalizedState(0 /*slot*/, genesisBlkRoot, genesisState)
// Finalized checkpoint at genesis is a zero hash.
genesisCheckpoint := genesisState.FinalizedCheckpoint()
s.justifiedCheckpt = ethpb.CopyCheckpoint(genesisCheckpoint)
if err := s.cacheJustifiedStateBalances(ctx, genesisBlkRoot); err != nil {
return err
}
s.prevJustifiedCheckpt = ethpb.CopyCheckpoint(genesisCheckpoint)
s.bestJustifiedCheckpt = ethpb.CopyCheckpoint(genesisCheckpoint)
s.finalizedCheckpt = ethpb.CopyCheckpoint(genesisCheckpoint)
@@ -374,94 +468,6 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
return nil
}
// This gets called to initialize chain info variables using the finalized checkpoint stored in DB
func (s *Service) initializeChainInfo(ctx context.Context) error {
genesisBlock, err := s.cfg.BeaconDB.GenesisBlock(ctx)
if err != nil {
return errors.Wrap(err, "could not get genesis block from db")
}
if genesisBlock == nil || genesisBlock.IsNil() {
return errors.New("no genesis block in db")
}
genesisBlkRoot, err := genesisBlock.Block().HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not get signing root of genesis block")
}
s.genesisRoot = genesisBlkRoot
finalized, err := s.cfg.BeaconDB.FinalizedCheckpoint(ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint from db")
}
if finalized == nil {
// This should never happen. At chain start, the finalized checkpoint
// would be the genesis state and block.
return errors.New("no finalized epoch in the database")
}
finalizedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
var finalizedState state.BeaconState
finalizedState, err = s.cfg.StateGen.Resume(ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}
if flags.Get().HeadSync {
headBlock, err := s.cfg.BeaconDB.HeadBlock(ctx)
if err != nil {
return errors.Wrap(err, "could not retrieve head block")
}
headEpoch := core.SlotToEpoch(headBlock.Block().Slot())
var epochsSinceFinality types.Epoch
if headEpoch > finalized.Epoch {
epochsSinceFinality = headEpoch - finalized.Epoch
}
// Run head sync when the node is far enough beyond the known finalized epoch;
// this becomes really useful during long periods of non-finality.
if epochsSinceFinality >= headSyncMinEpochsAfterCheckpoint {
headRoot, err := headBlock.Block().HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not hash head block")
}
finalizedState, err := s.cfg.StateGen.Resume(ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}
log.Infof("Regenerating state from the last checkpoint at slot %d to current head slot of %d. "+
"This process may take a while, please wait.", finalizedState.Slot(), headBlock.Block().Slot())
headState, err := s.cfg.StateGen.StateByRoot(ctx, headRoot)
if err != nil {
return errors.Wrap(err, "could not retrieve head state")
}
s.setHead(headRoot, headBlock, headState)
return nil
} else {
log.Warnf("Finalized checkpoint at slot %d is too close to the current head slot, "+
"resetting head from the checkpoint ('--%s' flag is ignored).",
finalizedState.Slot(), flags.HeadSync.Name)
}
}
finalizedBlock, err := s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized block from db")
}
if finalizedState == nil || finalizedState.IsNil() || finalizedBlock == nil || finalizedBlock.IsNil() {
return errors.New("finalized state and block can't be nil")
}
s.setHead(finalizedRoot, finalizedBlock, finalizedState)
return nil
}
// This is called when a client starts from a non-genesis slot. It passes the last justified and finalized
// information to the fork choice service to initialize the fork choice store.
func (s *Service) resumeForkChoice(justifiedCheckpoint, finalizedCheckpoint *ethpb.Checkpoint) {
store := protoarray.New(justifiedCheckpoint.Epoch, finalizedCheckpoint.Epoch, bytesutil.ToBytes32(finalizedCheckpoint.Root))
s.cfg.ForkChoiceStore = store
}
// This returns true if block has been processed before. Two ways to verify the block has been processed:
// 1.) Check fork choice store.
// 2.) Check DB.
@@ -473,3 +479,20 @@ func (s *Service) hasBlock(ctx context.Context, root [32]byte) bool {
return s.cfg.BeaconDB.HasBlock(ctx, root)
}
func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db db.HeadAccessDatabase) {
currentTime := prysmTime.Now()
if currentTime.After(genesisTime) {
return
}
gState, err := db.GenesisState(ctx)
if err != nil {
log.Fatalf("Could not retrieve genesis state: %v", err)
}
gRoot, err := gState.HashTreeRoot(ctx)
if err != nil {
log.Fatalf("Could not hash tree root genesis state: %v", err)
}
go slots.CountdownToGenesis(ctx, genesisTime, uint64(gState.NumValidators()), gRoot)
}

View File

@@ -6,7 +6,7 @@ import (
"testing"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/sirupsen/logrus"
)
@@ -18,7 +18,7 @@ func init() {
func TestChainService_SaveHead_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &Config{BeaconDB: beaconDB},
cfg: &config{BeaconDB: beaconDB},
}
go func() {
require.NoError(t, s.saveHead(context.Background(), [32]byte{}))

View File

@@ -9,8 +9,8 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/async/event"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
@@ -25,14 +25,15 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/prysmaticlabs/prysm/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
"google.golang.org/protobuf/proto"
)
@@ -75,7 +76,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
ctx := context.Background()
var web3Service *powchain.Service
var err error
bState, _ := testutil.DeterministicGenesisState(t, 10)
bState, _ := util.DeterministicGenesisState(t, 10)
pbState, err := v1.ProtobufBeaconState(bState.InnerStateUnsafe())
require.NoError(t, err)
err = beaconDB.SavePowchainData(ctx, &ethpb.ETH1ChainData{
@@ -94,11 +95,12 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
DepositContainers: []*ethpb.DepositContainer{},
})
require.NoError(t, err)
web3Service, err = powchain.NewService(ctx, &powchain.Web3ServiceConfig{
BeaconDB: beaconDB,
HttpEndpoints: []string{endpoint},
DepositContract: common.Address{},
})
web3Service, err = powchain.NewService(
ctx,
powchain.WithDatabase(beaconDB),
powchain.WithHttpEndpoints([]string{endpoint}),
powchain.WithDepositContractAddress(common.Address{}),
)
require.NoError(t, err, "Unable to set up web3 service")
attService, err := attestations.NewService(ctx, &attestations.Config{Pool: attestations.NewPool()})
@@ -107,23 +109,23 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
depositCache, err := depositcache.New()
require.NoError(t, err)
cfg := &Config{
BeaconBlockBuf: 0,
BeaconDB: beaconDB,
DepositCache: depositCache,
ChainStartFetcher: web3Service,
P2p: &mockBroadcaster{},
StateNotifier: &mockBeaconNode{},
AttPool: attestations.NewPool(),
StateGen: stategen.New(beaconDB),
ForkChoiceStore: protoarray.New(0, 0, params.BeaconConfig().ZeroHash),
AttService: attService,
stateGen := stategen.New(beaconDB)
// Save a state in stategen for purposes of testing a service stop / shutdown.
require.NoError(t, stateGen.SaveState(ctx, bytesutil.ToBytes32(bState.FinalizedCheckpoint().Root), bState))
opts := []Option{
WithDatabase(beaconDB),
WithDepositCache(depositCache),
WithChainStartFetcher(web3Service),
WithAttestationPool(attestations.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(protoarray.New(0, 0, params.BeaconConfig().ZeroHash)),
WithAttestationService(attService),
WithStateGen(stateGen),
}
// Save a state in stategen for purposes of testing a service stop / shutdown.
require.NoError(t, cfg.StateGen.SaveState(ctx, bytesutil.ToBytes32(bState.FinalizedCheckpoint().Root), bState))
chainService, err := NewService(ctx, cfg)
chainService, err := NewService(ctx, opts...)
require.NoError(t, err, "Unable to setup chain service")
chainService.genesisTime = time.Unix(1, 0) // non-zero time
@@ -137,11 +139,11 @@ func TestChainStartStop_Initialized(t *testing.T) {
chainService := setupBeaconChain(t, beaconDB)
genesisBlk := testutil.NewBeaconBlock()
genesisBlk := util.NewBeaconBlock()
blkRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesisBlk)))
s, err := testutil.NewBeaconState()
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(1))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
@@ -149,7 +151,7 @@ func TestChainStartStop_Initialized(t *testing.T) {
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
chainService.cfg.FinalizedStateAtStartUp = s
// Test the start function.
chainService.Start()
@@ -167,16 +169,16 @@ func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
chainService := setupBeaconChain(t, beaconDB)
genesisBlk := testutil.NewBeaconBlock()
genesisBlk := util.NewBeaconBlock()
blkRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesisBlk)))
s, err := testutil.NewBeaconState()
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}))
chainService.cfg.FinalizedStateAtStartUp = s
// Test the start function.
chainService.Start()
@@ -197,9 +199,9 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
// Set up 10 deposits pre chain start for validators to register
count := uint64(10)
deposits, _, err := testutil.DeterministicDepositsAndKeys(count)
deposits, _, err := util.DeterministicDepositsAndKeys(count)
require.NoError(t, err)
trie, _, err := testutil.DepositTrieFromDeposits(deposits)
trie, _, err := util.DepositTrieFromDeposits(deposits)
require.NoError(t, err)
hashTreeRoot := trie.HashTreeRoot()
genState, err := transition.EmptyGenesisState()
@@ -236,18 +238,18 @@ func TestChainService_CorrectGenesisRoots(t *testing.T) {
chainService := setupBeaconChain(t, beaconDB)
genesisBlk := testutil.NewBeaconBlock()
genesisBlk := util.NewBeaconBlock()
blkRoot, err := genesisBlk.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesisBlk)))
s, err := testutil.NewBeaconState()
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(0))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
chainService.cfg.FinalizedStateAtStartUp = s
// Test the start function.
chainService.Start()
@@ -262,17 +264,17 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
genesis := testutil.NewBeaconBlock()
genesis := util.NewBeaconBlock()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := testutil.NewBeaconBlock()
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = finalizedSlot
headBlock.Block.ParentRoot = bytesutil.PadTo(genesisRoot[:], 32)
headState, err := testutil.NewBeaconState()
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(finalizedSlot))
require.NoError(t, headState.SetGenesisValidatorRoot(params.BeaconConfig().ZeroHash[:]))
@@ -281,9 +283,12 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(headBlock)))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: core.SlotToEpoch(finalizedSlot), Root: headRoot[:]}))
c := &Service{cfg: &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)}}
require.NoError(t, c.initializeChainInfo(ctx))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
c, err := NewService(ctx, WithDatabase(beaconDB), WithStateGen(stategen.New(beaconDB)), WithAttestationService(attSrv), WithStateNotifier(&mock.MockStateNotifier{}), WithFinalizedStateAtStartUp(headState))
require.NoError(t, err)
require.NoError(t, c.startFromSavedState(headState))
headBlk, err := c.HeadBlock(ctx)
require.NoError(t, err)
assert.DeepEqual(t, headBlock, headBlk.Proto(), "Head block incorrect")
@@ -296,24 +301,24 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
if !bytes.Equal(headRoot[:], r) {
t.Error("head slot incorrect")
}
assert.Equal(t, genesisRoot, c.genesisRoot, "Genesis block root incorrect")
assert.Equal(t, genesisRoot, c.originBlockRoot, "Genesis block root incorrect")
}
func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
genesis := testutil.NewBeaconBlock()
genesis := util.NewBeaconBlock()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := testutil.NewBeaconBlock()
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = finalizedSlot
headBlock.Block.ParentRoot = bytesutil.PadTo(genesisRoot[:], 32)
headState, err := testutil.NewBeaconState()
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(finalizedSlot))
require.NoError(t, headState.SetGenesisValidatorRoot(params.BeaconConfig().ZeroHash[:]))
@@ -322,12 +327,15 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(headBlock)))
c := &Service{cfg: &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)}}
require.NoError(t, c.initializeChainInfo(ctx))
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
c, err := NewService(ctx, WithDatabase(beaconDB), WithStateGen(stategen.New(beaconDB)), WithAttestationService(attSrv), WithStateNotifier(&mock.MockStateNotifier{}))
require.NoError(t, err)
require.NoError(t, c.startFromSavedState(headState))
s, err := c.HeadState(ctx)
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.InnerStateUnsafe(), s.InnerStateUnsafe(), "Head state incorrect")
assert.Equal(t, genesisRoot, c.genesisRoot, "Genesis block root incorrect")
assert.Equal(t, genesisRoot, c.originBlockRoot, "Genesis block root incorrect")
assert.DeepEqual(t, genesis, c.head.block.Proto())
}
@@ -345,13 +353,13 @@ func TestChainService_InitializeChainInfo_HeadSync(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
genesisBlock := testutil.NewBeaconBlock()
genesisBlock := util.NewBeaconBlock()
genesisRoot, err := genesisBlock.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesisBlock)))
finalizedBlock := testutil.NewBeaconBlock()
finalizedBlock := util.NewBeaconBlock()
finalizedBlock.Block.Slot = finalizedSlot
finalizedBlock.Block.ParentRoot = genesisRoot[:]
finalizedRoot, err := finalizedBlock.Block.HashTreeRoot()
@@ -359,14 +367,14 @@ func TestChainService_InitializeChainInfo_HeadSync(t *testing.T) {
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(finalizedBlock)))
// Set head slot close to the finalization point, no head sync is triggered.
headBlock := testutil.NewBeaconBlock()
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = finalizedSlot + params.BeaconConfig().SlotsPerEpoch*5
headBlock.Block.ParentRoot = finalizedRoot[:]
headRoot, err := headBlock.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(headBlock)))
headState, err := testutil.NewBeaconState()
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(headBlock.Block.Slot))
require.NoError(t, headState.SetGenesisValidatorRoot(params.BeaconConfig().ZeroHash[:]))
@@ -375,24 +383,26 @@ func TestChainService_InitializeChainInfo_HeadSync(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, headRoot))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{
Epoch: core.SlotToEpoch(finalizedBlock.Block.Slot),
Epoch: slots.ToEpoch(finalizedBlock.Block.Slot),
Root: finalizedRoot[:],
}))
c := &Service{cfg: &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)}}
require.NoError(t, c.initializeChainInfo(ctx))
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
c, err := NewService(ctx, WithDatabase(beaconDB), WithStateGen(stategen.New(beaconDB)), WithAttestationService(attSrv), WithStateNotifier(&mock.MockStateNotifier{}), WithFinalizedStateAtStartUp(headState))
require.NoError(t, err)
require.NoError(t, c.startFromSavedState(headState))
s, err := c.HeadState(ctx)
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.InnerStateUnsafe(), s.InnerStateUnsafe(), "Head state incorrect")
assert.Equal(t, genesisRoot, c.genesisRoot, "Genesis block root incorrect")
assert.Equal(t, genesisRoot, c.originBlockRoot, "Genesis block root incorrect")
// Since head sync is not triggered, chain is initialized to the last finalization checkpoint.
assert.DeepEqual(t, finalizedBlock, c.head.block.Proto())
assert.LogsContain(t, hook, "resetting head from the checkpoint ('--head-sync' flag is ignored)")
assert.LogsDoNotContain(t, hook, "Regenerating state from the last checkpoint at slot")
// Set head slot far beyond the finalization point, head sync should be triggered.
headBlock = testutil.NewBeaconBlock()
headBlock = util.NewBeaconBlock()
headBlock.Block.Slot = finalizedSlot + params.BeaconConfig().SlotsPerEpoch*headSyncMinEpochsAfterCheckpoint
headBlock.Block.ParentRoot = finalizedRoot[:]
headRoot, err = headBlock.Block.HashTreeRoot()
@@ -402,11 +412,11 @@ func TestChainService_InitializeChainInfo_HeadSync(t *testing.T) {
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, headRoot))
hook.Reset()
require.NoError(t, c.initializeChainInfo(ctx))
require.NoError(t, c.initializeHeadFromDB(ctx))
s, err = c.HeadState(ctx)
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.InnerStateUnsafe(), s.InnerStateUnsafe(), "Head state incorrect")
assert.Equal(t, genesisRoot, c.genesisRoot, "Genesis block root incorrect")
assert.Equal(t, genesisRoot, c.originBlockRoot, "Genesis block root incorrect")
// Head slot is far beyond the latest finalized checkpoint, head sync is triggered.
assert.DeepEqual(t, headBlock, c.head.block.Proto())
assert.LogsContain(t, hook, "Regenerating state from the last checkpoint at slot 225")
@@ -417,13 +427,13 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
s := &Service{
cfg: &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
}
blk := testutil.NewBeaconBlock()
blk := util.NewBeaconBlock()
blk.Block.Slot = 1
r, err := blk.HashTreeRoot()
require.NoError(t, err)
newState, err := testutil.NewBeaconState()
newState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.cfg.StateGen.SaveState(ctx, r, newState))
require.NoError(t, s.saveHeadNoDB(ctx, wrapper.WrappedPhase0SignedBeaconBlock(blk), r, newState))
@@ -439,13 +449,13 @@ func TestHasBlock_ForkChoiceAndDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &Config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{}), BeaconDB: beaconDB},
cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{}), BeaconDB: beaconDB},
finalizedCheckpt: &ethpb.Checkpoint{Root: make([]byte, 32)},
}
block := testutil.NewBeaconBlock()
block := util.NewBeaconBlock()
r, err := block.Block.HashTreeRoot()
require.NoError(t, err)
beaconState, err := testutil.NewBeaconState()
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.insertBlockAndAttestationsToForkChoiceStore(ctx, wrapper.WrappedPhase0SignedBeaconBlock(block).Block(), r, beaconState))
@@ -457,12 +467,12 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &Config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
ctx: ctx,
cancel: cancel,
initSyncBlocks: make(map[[32]byte]block.SignedBeaconBlock),
}
block := testutil.NewBeaconBlock()
block := util.NewBeaconBlock()
r, err := block.Block.HashTreeRoot()
require.NoError(t, err)
s.saveInitSyncBlock(r, wrapper.WrappedPhase0SignedBeaconBlock(block))
@@ -476,7 +486,7 @@ func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
stateChannel := make(chan *feed.Event, 1)
stateSub := service.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
service.processChainStartTime(context.Background(), time.Now())
service.onPowchainStart(context.Background(), time.Now())
stateEvent := <-stateChannel
require.Equal(t, int(stateEvent.Type), statefeed.Initialized)
@@ -488,9 +498,9 @@ func BenchmarkHasBlockDB(b *testing.B) {
beaconDB := testDB.SetupDB(b)
ctx := context.Background()
s := &Service{
cfg: &Config{BeaconDB: beaconDB},
cfg: &config{BeaconDB: beaconDB},
}
block := testutil.NewBeaconBlock()
block := util.NewBeaconBlock()
require.NoError(b, s.cfg.BeaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(block)))
r, err := block.Block.HashTreeRoot()
require.NoError(b, err)
@@ -505,7 +515,7 @@ func BenchmarkHasBlockForkChoiceStore(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &Config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{}), BeaconDB: beaconDB},
cfg: &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{}), BeaconDB: beaconDB},
finalizedCheckpt: &ethpb.Checkpoint{Root: make([]byte, 32)},
}
block := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}

View File

@@ -0,0 +1,82 @@
package blockchain
import (
"context"
"errors"
"sync"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
)
var errNilStateFromStategen = errors.New("justified state can't be nil")
type stateBalanceCache struct {
sync.Mutex
balances []uint64
root [32]byte
stateGen stateByRooter
}
type stateByRooter interface {
StateByRoot(context.Context, [32]byte) (state.BeaconState, error)
}
// newStateBalanceCache exists to remind us that stateBalanceCache needs a stategen
// to avoid nil pointer bugs when updating the cache in the read path (get())
func newStateBalanceCache(sg *stategen.State) (*stateBalanceCache, error) {
if sg == nil {
return nil, errors.New("can't initialize state balance cache without stategen")
}
return &stateBalanceCache{stateGen: sg}, nil
}
// update is called by get() when the requested root doesn't match
// the previously read value. This cache assumes we only want to cache one
// set of balances for a single root (the current justified root).
//
// warning: this is not thread-safe on its own, relies on get() for locking
func (c *stateBalanceCache) update(ctx context.Context, justifiedRoot [32]byte) ([]uint64, error) {
stateBalanceCacheMiss.Inc()
justifiedState, err := c.stateGen.StateByRoot(ctx, justifiedRoot)
if err != nil {
return nil, err
}
if justifiedState == nil || justifiedState.IsNil() {
return nil, errNilStateFromStategen
}
epoch := time.CurrentEpoch(justifiedState)
justifiedBalances := make([]uint64, justifiedState.NumValidators())
var balanceAccumulator = func(idx int, val state.ReadOnlyValidator) error {
if helpers.IsActiveValidatorUsingTrie(val, epoch) {
justifiedBalances[idx] = val.EffectiveBalance()
} else {
justifiedBalances[idx] = 0
}
return nil
}
if err := justifiedState.ReadFromEveryValidator(balanceAccumulator); err != nil {
return nil, err
}
c.balances = justifiedBalances
c.root = justifiedRoot
return c.balances, nil
}
// get takes an explicit justifiedRoot so it can invalidate the singleton cache key
// when the justified root changes, and takes a context so that the long-running stategen
// read path can connect to the upstream cancellation/timeout chain.
func (c *stateBalanceCache) get(ctx context.Context, justifiedRoot [32]byte) ([]uint64, error) {
c.Lock()
defer c.Unlock()
if justifiedRoot == c.root {
stateBalanceCacheHit.Inc()
return c.balances, nil
}
return c.update(ctx, justifiedRoot)
}
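A hedged usage sketch for the cache above, written as if from inside the blockchain package; stateGen, ctx, and justifiedRoot are assumed caller-side names. It shows the intended call pattern: construct once with a stategen, then call get with the current justified root, which either returns the cached balances or rebuilds them via update.
balanceCache, err := newStateBalanceCache(stateGen)
if err != nil {
	return err
}
// Hits the cache when justifiedRoot matches the last request; otherwise reads
// the justified state from stategen and recomputes active balances.
justifiedBalances, err := balanceCache.get(ctx, justifiedRoot)
if err != nil {
	return err
}
_ = justifiedBalances // effective balances; inactive validators are zeroed
The singleton-key design (one root, one balance slice) keeps the cache tiny while matching how fork choice only ever needs balances for the current justified root.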

View File

@@ -0,0 +1,225 @@
package blockchain
import (
"context"
"encoding/binary"
"errors"
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
v2 "github.com/prysmaticlabs/prysm/beacon-chain/state/v2"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/time/slots"
)
type mockStateByRooter struct {
state state.BeaconState
err error
}
func (m *mockStateByRooter) StateByRoot(context.Context, [32]byte) (state.BeaconState, error) {
return m.state, m.err
}
type testStateOpt func(*ethpb.BeaconStateAltair)
func testStateWithValidators(v []*ethpb.Validator) testStateOpt {
return func(a *ethpb.BeaconStateAltair) {
a.Validators = v
}
}
func testStateWithSlot(slot types.Slot) testStateOpt {
return func(a *ethpb.BeaconStateAltair) {
a.Slot = slot
}
}
func testStateFixture(opts ...testStateOpt) state.BeaconState {
a := &ethpb.BeaconStateAltair{}
for _, o := range opts {
o(a)
}
s, _ := v2.InitializeFromProtoUnsafe(a)
return s
}
func generateTestValidators(count int, opts ...func(*ethpb.Validator)) []*ethpb.Validator {
vs := make([]*ethpb.Validator, count)
var i uint32 = 0
for ; i < uint32(count); i++ {
pk := make([]byte, 48)
binary.LittleEndian.PutUint32(pk, i)
v := &ethpb.Validator{PublicKey: pk}
for _, o := range opts {
o(v)
}
vs[i] = v
}
return vs
}
func oddValidatorsExpired(currentSlot types.Slot) func(*ethpb.Validator) {
return func(v *ethpb.Validator) {
pki := binary.LittleEndian.Uint64(v.PublicKey)
if pki%2 == 0 {
v.ExitEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) + 1)
} else {
v.ExitEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) - 1)
}
}
}
func oddValidatorsQueued(currentSlot types.Slot) func(*ethpb.Validator) {
return func(v *ethpb.Validator) {
v.ExitEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) + 1)
pki := binary.LittleEndian.Uint64(v.PublicKey)
if pki%2 == 0 {
v.ActivationEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) - 1)
} else {
v.ActivationEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) + 1)
}
}
}
func allValidatorsValid(currentSlot types.Slot) func(*ethpb.Validator) {
return func(v *ethpb.Validator) {
v.ActivationEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) - 1)
v.ExitEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) + 1)
}
}
func balanceIsKeyTimes2(v *ethpb.Validator) {
pki := binary.LittleEndian.Uint64(v.PublicKey)
v.EffectiveBalance = uint64(pki) * 2
}
func testHalfExpiredValidators() ([]*ethpb.Validator, []uint64) {
balances := []uint64{0, 0, 4, 0, 8, 0, 12, 0, 16, 0}
return generateTestValidators(10,
oddValidatorsExpired(types.Slot(99)),
balanceIsKeyTimes2), balances
}
func testHalfQueuedValidators() ([]*ethpb.Validator, []uint64) {
balances := []uint64{0, 0, 4, 0, 8, 0, 12, 0, 16, 0}
return generateTestValidators(10,
oddValidatorsQueued(types.Slot(99)),
balanceIsKeyTimes2), balances
}
func testAllValidValidators() ([]*ethpb.Validator, []uint64) {
balances := []uint64{0, 2, 4, 6, 8, 10, 12, 14, 16, 18}
return generateTestValidators(10,
allValidatorsValid(types.Slot(99)),
balanceIsKeyTimes2), balances
}
func TestStateBalanceCache(t *testing.T) {
type sbcTestCase struct {
err error
root [32]byte
sbc *stateBalanceCache
balances []uint64
name string
}
sentinelCacheMiss := errors.New("Cache missed, as expected!")
sentinelBalances := []uint64{1, 2, 3, 4, 5}
halfExpiredValidators, halfExpiredBalances := testHalfExpiredValidators()
halfQueuedValidators, halfQueuedBalances := testHalfQueuedValidators()
allValidValidators, allValidBalances := testAllValidValidators()
cases := []sbcTestCase{
{
root: bytesutil.ToBytes32([]byte{'A'}),
balances: sentinelBalances,
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
err: sentinelCacheMiss,
},
root: bytesutil.ToBytes32([]byte{'A'}),
balances: sentinelBalances,
},
name: "cache hit",
},
// This works by using a stateByRooter that returns a known error,
// so really we're testing the miss by making sure stategen got called.
// This also tells us stategen errors are propagated.
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
//state: generateTestValidators(1, testWithBadEpoch),
err: sentinelCacheMiss,
},
root: bytesutil.ToBytes32([]byte{'B'}),
},
err: sentinelCacheMiss,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "cache miss",
},
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{},
root: bytesutil.ToBytes32([]byte{'B'}),
},
err: errNilStateFromStategen,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "error for nil state upon cache miss",
},
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
state: testStateFixture(
testStateWithSlot(99),
testStateWithValidators(halfExpiredValidators)),
},
},
balances: halfExpiredBalances,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "test filtering by exit epoch",
},
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
state: testStateFixture(
testStateWithSlot(99),
testStateWithValidators(halfQueuedValidators)),
},
},
balances: halfQueuedBalances,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "test filtering by activation epoch",
},
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
state: testStateFixture(
testStateWithSlot(99),
testStateWithValidators(allValidValidators)),
},
},
balances: allValidBalances,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "happy path",
},
}
ctx := context.Background()
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
cache := c.sbc
cacheRootStart := cache.root
b, err := cache.get(ctx, c.root)
require.ErrorIs(t, err, c.err)
require.DeepEqual(t, c.balances, b)
if c.err != nil {
// if there was an error somewhere, the root should not have changed (unless it already matched)
require.Equal(t, cacheRootStart, cache.root)
} else {
// when successful, the cache should always end with a root matching the request
require.Equal(t, c.root, cache.root)
}
})
}
}

View File

@@ -7,6 +7,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing",
visibility = [
"//beacon-chain:__subpackages__",
"//testing:__subpackages__",
"//testing/fuzz:__pkg__",
],
deps = [
@@ -21,10 +22,10 @@ go_library(
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/v1:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/block:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -21,10 +21,10 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
)
@@ -57,6 +57,7 @@ type ChainService struct {
SyncContributionProofDomain []byte
PublicKey [48]byte
SyncCommitteePubkeys [][]byte
InitSyncBlockRoots map[[32]byte]bool
}
// StateNotifier mocks the same method in the chain service.
@@ -284,26 +285,26 @@ func (s *ChainService) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
}
// ReceiveAttestation mocks ReceiveAttestation method in chain service.
func (s *ChainService) ReceiveAttestation(_ context.Context, _ *ethpb.Attestation) error {
func (_ *ChainService) ReceiveAttestation(_ context.Context, _ *ethpb.Attestation) error {
return nil
}
// ReceiveAttestationNoPubsub mocks ReceiveAttestationNoPubsub method in chain service.
func (s *ChainService) ReceiveAttestationNoPubsub(context.Context, *ethpb.Attestation) error {
func (_ *ChainService) ReceiveAttestationNoPubsub(context.Context, *ethpb.Attestation) error {
return nil
}
// AttestationPreState mocks AttestationPreState method in chain service.
func (s *ChainService) AttestationPreState(_ context.Context, _ *ethpb.Attestation) (state.BeaconState, error) {
// AttestationTargetState mocks AttestationTargetState method in chain service.
func (s *ChainService) AttestationTargetState(_ context.Context, _ *ethpb.Checkpoint) (state.BeaconState, error) {
return s.State, nil
}
// HeadValidatorsIndices mocks the same method in the chain service.
func (s *ChainService) HeadValidatorsIndices(_ context.Context, epoch types.Epoch) ([]types.ValidatorIndex, error) {
func (s *ChainService) HeadValidatorsIndices(ctx context.Context, epoch types.Epoch) ([]types.ValidatorIndex, error) {
if s.State == nil {
return []types.ValidatorIndex{}, nil
}
return helpers.ActiveValidatorIndices(s.State, epoch)
return helpers.ActiveValidatorIndices(ctx, s.State, epoch)
}
// HeadSeed mocks the same method in the chain service.
@@ -360,12 +361,15 @@ func (s *ChainService) IsCanonical(_ context.Context, r [32]byte) (bool, error)
}
// HasInitSyncBlock mocks the same method in the chain service.
func (s *ChainService) HasInitSyncBlock(_ [32]byte) bool {
return false
func (s *ChainService) HasInitSyncBlock(rt [32]byte) bool {
if s.InitSyncBlockRoots == nil {
return false
}
return s.InitSyncBlockRoots[rt]
}
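A hedged test-usage sketch of the new mock field (block root value assumed): tests can now make the mock report specific roots as init-sync blocks instead of always returning false.
chain := &mock.ChainService{
	InitSyncBlockRoots: map[[32]byte]bool{blockRoot: true},
}
_ = chain.HasInitSyncBlock(blockRoot) // true; any other root still returns false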
// HeadGenesisValidatorRoot mocks HeadGenesisValidatorRoot method in chain service.
func (s *ChainService) HeadGenesisValidatorRoot() [32]byte {
func (_ *ChainService) HeadGenesisValidatorRoot() [32]byte {
return [32]byte{}
}
@@ -375,7 +379,7 @@ func (s *ChainService) VerifyBlkDescendant(_ context.Context, _ [32]byte) error
}
// VerifyLmdFfgConsistency mocks VerifyLmdFfgConsistency and always returns nil.
func (s *ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attestation) error {
func (_ *ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attestation) error {
if !bytes.Equal(a.Data.BeaconBlockRoot, a.Data.Target.Root) {
return errors.New("LMD and FFG miss matched")
}
@@ -391,7 +395,7 @@ func (s *ChainService) VerifyFinalizedConsistency(_ context.Context, r []byte) e
}
// ChainHeads mocks ChainHeads and always return nil.
func (s *ChainService) ChainHeads() ([][32]byte, []types.Slot) {
func (_ *ChainService) ChainHeads() ([][32]byte, []types.Slot) {
return [][32]byte{
bytesutil.ToBytes32(bytesutil.PadTo([]byte("foo"), 32)),
bytesutil.ToBytes32(bytesutil.PadTo([]byte("bar"), 32)),
@@ -400,36 +404,36 @@ func (s *ChainService) ChainHeads() ([][32]byte, []types.Slot) {
}
// HeadPublicKeyToValidatorIndex mocks HeadPublicKeyToValidatorIndex and always return 0 and true.
func (s *ChainService) HeadPublicKeyToValidatorIndex(ctx context.Context, pubKey [48]byte) (types.ValidatorIndex, bool) {
func (_ *ChainService) HeadPublicKeyToValidatorIndex(_ context.Context, _ [48]byte) (types.ValidatorIndex, bool) {
return 0, true
}
// HeadValidatorIndexToPublicKey mocks HeadValidatorIndexToPublicKey and always return empty and nil.
func (s *ChainService) HeadValidatorIndexToPublicKey(ctx context.Context, index types.ValidatorIndex) ([48]byte, error) {
func (s *ChainService) HeadValidatorIndexToPublicKey(_ context.Context, _ types.ValidatorIndex) ([48]byte, error) {
return s.PublicKey, nil
}
// HeadSyncCommitteeIndices mocks HeadSyncCommitteeIndices and always return `HeadNextSyncCommitteeIndices`.
func (s *ChainService) HeadSyncCommitteeIndices(ctx context.Context, index types.ValidatorIndex, slot types.Slot) ([]types.CommitteeIndex, error) {
func (s *ChainService) HeadSyncCommitteeIndices(_ context.Context, index types.ValidatorIndex, slot types.Slot) ([]types.CommitteeIndex, error) {
return s.SyncCommitteeIndices, nil
}
// HeadSyncCommitteePubKeys mocks HeadSyncCommitteePubKeys and always return empty nil.
func (s *ChainService) HeadSyncCommitteePubKeys(ctx context.Context, slot types.Slot, committeeIndex types.CommitteeIndex) ([][]byte, error) {
func (s *ChainService) HeadSyncCommitteePubKeys(_ context.Context, _ types.Slot, _ types.CommitteeIndex) ([][]byte, error) {
return s.SyncCommitteePubkeys, nil
}
// HeadSyncCommitteeDomain mocks HeadSyncCommitteeDomain and always return empty nil.
func (s *ChainService) HeadSyncCommitteeDomain(ctx context.Context, slot types.Slot) ([]byte, error) {
func (s *ChainService) HeadSyncCommitteeDomain(_ context.Context, _ types.Slot) ([]byte, error) {
return s.SyncCommitteeDomain, nil
}
// HeadSyncSelectionProofDomain mocks HeadSyncSelectionProofDomain and always return empty nil.
func (s *ChainService) HeadSyncSelectionProofDomain(ctx context.Context, slot types.Slot) ([]byte, error) {
func (s *ChainService) HeadSyncSelectionProofDomain(_ context.Context, _ types.Slot) ([]byte, error) {
return s.SyncSelectionProofDomain, nil
}
// HeadSyncContributionProofDomain mocks HeadSyncContributionProofDomain and always return empty nil.
func (s *ChainService) HeadSyncContributionProofDomain(ctx context.Context, slot types.Slot) ([]byte, error) {
func (s *ChainService) HeadSyncContributionProofDomain(_ context.Context, _ types.Slot) ([]byte, error) {
return s.SyncContributionProofDomain, nil
}

View File

@@ -4,57 +4,94 @@ import (
"context"
"fmt"
"github.com/prysmaticlabs/prysm/beacon-chain/core"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time/slots"
)
// VerifyWeakSubjectivityRoot verifies the weak subjectivity root in the service struct.
var errWSBlockNotFound = errors.New("weak subjectivity root not found in db")
var errWSBlockNotFoundInEpoch = errors.New("weak subjectivity root not found in db within epoch")
type weakSubjectivityDB interface {
HasBlock(ctx context.Context, blockRoot [32]byte) bool
BlockRoots(ctx context.Context, f *filters.QueryFilter) ([][32]byte, error)
}
type WeakSubjectivityVerifier struct {
enabled bool
verified bool
root [32]byte
epoch types.Epoch
slot types.Slot
db weakSubjectivityDB
}
// NewWeakSubjectivityVerifier validates a checkpoint, and if valid, uses it to initialize a weak subjectivity verifier
func NewWeakSubjectivityVerifier(wsc *ethpb.Checkpoint, db weakSubjectivityDB) (*WeakSubjectivityVerifier, error) {
// TODO(7342): Weak subjectivity checks are currently optional. When we require the flag to be specified
// per 7342, a nil checkpoint, zero-root or zero-epoch should all fail validation
// and return an error instead of creating a WeakSubjectivityVerifier that permits any chain history.
if wsc == nil || len(wsc.Root) == 0 || wsc.Epoch == 0 {
return &WeakSubjectivityVerifier{
enabled: false,
}, nil
}
startSlot, err := slots.EpochStart(wsc.Epoch)
if err != nil {
return nil, err
}
return &WeakSubjectivityVerifier{
enabled: true,
verified: false,
root: bytesutil.ToBytes32(wsc.Root),
epoch: wsc.Epoch,
db: db,
slot: startSlot,
}, nil
}
// VerifyWeakSubjectivity verifies the weak subjectivity root in the service struct.
// Reference design: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/weak-subjectivity.md#weak-subjectivity-sync-procedure
func (s *Service) VerifyWeakSubjectivityRoot(ctx context.Context) error {
// TODO(7342): Remove the following to fully use weak subjectivity in production.
if s.cfg.WeakSubjectivityCheckpt == nil || len(s.cfg.WeakSubjectivityCheckpt.Root) == 0 || s.cfg.WeakSubjectivityCheckpt.Epoch == 0 {
func (v *WeakSubjectivityVerifier) VerifyWeakSubjectivity(ctx context.Context, finalizedEpoch types.Epoch) error {
if v.verified || !v.enabled {
return nil
}
// Do nothing if the weak subjectivity has previously been verified,
// or weak subjectivity epoch is higher than last finalized epoch.
if s.wsVerified {
return nil
}
if s.cfg.WeakSubjectivityCheckpt.Epoch > s.finalizedCheckpt.Epoch {
// Two conditions are described in the specs:
// IF epoch_number > store.finalized_checkpoint.epoch,
// then ASSERT during block sync that block with root block_root
// is in the sync path at epoch epoch_number. Emit descriptive critical error if this assert fails,
// then exit client process.
// we do not handle this case ^, because we can only verify blocks that have been processed / are currently
// in line for finalization; we don't have the ability to look ahead. So we only satisfy the following:
// IF epoch_number <= store.finalized_checkpoint.epoch,
// then ASSERT that the block in the canonical chain at epoch epoch_number has root block_root.
// Emit descriptive critical error if this assert fails, then exit client process.
if v.epoch > finalizedEpoch {
return nil
}
log.Infof("Performing weak subjectivity check for root %#x in epoch %d", v.root, v.epoch)
r := bytesutil.ToBytes32(s.cfg.WeakSubjectivityCheckpt.Root)
log.Infof("Performing weak subjectivity check for root %#x in epoch %d", r, s.cfg.WeakSubjectivityCheckpt.Epoch)
// Save initial sync cached blocks to DB.
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
// A node should have the weak subjectivity block in the DB.
if !s.cfg.BeaconDB.HasBlock(ctx, r) {
return fmt.Errorf("node does not have root in DB: %#x", r)
}
startSlot, err := core.StartSlot(s.cfg.WeakSubjectivityCheckpt.Epoch)
if err != nil {
return err
if !v.db.HasBlock(ctx, v.root) {
return errors.Wrap(errWSBlockNotFound, fmt.Sprintf("missing root %#x", v.root))
}
filter := filters.NewFilter().SetStartSlot(v.slot).SetEndSlot(v.slot + params.BeaconConfig().SlotsPerEpoch)
// A node should have the weak subjectivity block that corresponds to the correct epoch in the DB.
filter := filters.NewFilter().SetStartSlot(startSlot).SetEndSlot(startSlot + params.BeaconConfig().SlotsPerEpoch)
roots, err := s.cfg.BeaconDB.BlockRoots(ctx, filter)
roots, err := v.db.BlockRoots(ctx, filter)
if err != nil {
return err
return errors.Wrap(err, "error while retrieving block roots to verify weak subjectivity")
}
for _, root := range roots {
if r == root {
if v.root == root {
log.Info("Weak subjectivity check has passed")
s.wsVerified = true
v.verified = true
return nil
}
}
return fmt.Errorf("node does not have root in db corresponding to epoch: %#x %d", r, s.cfg.WeakSubjectivityCheckpt.Epoch)
return errors.Wrap(errWSBlockNotFoundInEpoch, fmt.Sprintf("root=%#x, epoch=%d", v.root, v.epoch))
}
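A hedged wiring sketch for the new verifier, using only the exported API added above; the surrounding names (wsCheckpoint, beaconDB, finalized) are assumptions. The verifier is built once from the configured checkpoint and then re-checked as finalization advances; it is a no-op once verified or when no checkpoint was supplied.
wsVerifier, err := NewWeakSubjectivityVerifier(wsCheckpoint, beaconDB)
if err != nil {
	return err
}
// Called whenever the finalized checkpoint moves forward; returns nil until
// the finalized epoch reaches the weak subjectivity epoch.
if err := wsVerifier.VerifyWeakSubjectivity(ctx, finalized.Epoch); err != nil {
	// Per the spec reference above, a failed check is a critical error.
	log.WithError(err).Fatal("Weak subjectivity verification failed")
}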

View File

@@ -4,78 +4,77 @@ import (
"context"
"testing"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
)
func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
b := testutil.NewBeaconBlock()
b := util.NewBeaconBlock()
b.Block.Slot = 32
require.NoError(t, beaconDB.SaveBlock(context.Background(), wrapper.WrappedPhase0SignedBeaconBlock(b)))
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
tests := []struct {
wsVerified bool
wantErr bool
wantErr error
checkpt *ethpb.Checkpoint
finalizedEpoch types.Epoch
errString string
name string
}{
{
name: "nil root and epoch",
wantErr: false,
name: "nil root and epoch",
},
{
name: "already verified",
checkpt: &ethpb.Checkpoint{Epoch: 2},
finalizedEpoch: 2,
wsVerified: true,
wantErr: false,
},
{
name: "not yet to verify, ws epoch higher than finalized epoch",
checkpt: &ethpb.Checkpoint{Epoch: 2},
finalizedEpoch: 1,
wantErr: false,
},
{
name: "can't find the block in DB",
checkpt: &ethpb.Checkpoint{Root: bytesutil.PadTo([]byte{'a'}, 32), Epoch: 1},
finalizedEpoch: 3,
wantErr: true,
errString: "node does not have root in DB",
wantErr: errWSBlockNotFound,
},
{
name: "can't find the block corresponds to ws epoch in DB",
checkpt: &ethpb.Checkpoint{Root: r[:], Epoch: 2}, // Root belongs in epoch 1.
finalizedEpoch: 3,
wantErr: true,
errString: "node does not have root in db corresponding to epoch",
wantErr: errWSBlockNotFoundInEpoch,
},
{
name: "can verify and pass",
checkpt: &ethpb.Checkpoint{Root: r[:], Epoch: 1},
finalizedEpoch: 3,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wv, err := NewWeakSubjectivityVerifier(tt.checkpt, beaconDB)
require.NoError(t, err)
s := &Service{
cfg: &Config{BeaconDB: beaconDB, WeakSubjectivityCheckpt: tt.checkpt},
wsVerified: tt.wsVerified,
cfg: &config{BeaconDB: beaconDB, WeakSubjectivityCheckpt: tt.checkpt},
finalizedCheckpt: &ethpb.Checkpoint{Epoch: tt.finalizedEpoch},
wsVerifier: wv,
}
if err := s.VerifyWeakSubjectivityRoot(context.Background()); (err != nil) != tt.wantErr {
require.ErrorContains(t, tt.errString, err)
err = s.wsVerifier.VerifyWeakSubjectivity(context.Background(), s.finalizedCheckpt.Epoch)
if tt.wantErr == nil {
require.NoError(t, err)
} else {
require.Equal(t, true, errors.Is(err, tt.wantErr))
}
})
}

View File

@@ -41,13 +41,13 @@ go_library(
"//beacon-chain/state/v2:go_default_library",
"//cache/lru:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//container/slice:go_default_library",
"//crypto/hash:go_default_library",
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -82,12 +82,12 @@ go_test(
"//beacon-chain/state/v1:go_default_library",
"//beacon-chain/state/v2:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",

View File

@@ -14,7 +14,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state"
lruwrpr "github.com/prysmaticlabs/prysm/cache/lru"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/config/params"
)
var (

View File

@@ -8,9 +8,9 @@ import (
types "github.com/prysmaticlabs/eth2-types"
state "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestBalanceCache_AddGetBalance(t *testing.T) {

View File

@@ -6,7 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/testing/assert"
"google.golang.org/protobuf/proto"
)

View File

@@ -6,11 +6,11 @@ import (
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"google.golang.org/protobuf/proto"
)

View File

@@ -3,17 +3,20 @@
package cache
import (
"context"
"errors"
"math"
"sync"
"time"
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
types "github.com/prysmaticlabs/eth2-types"
lruwrpr "github.com/prysmaticlabs/prysm/cache/lru"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/container/slice"
"github.com/prysmaticlabs/prysm/math"
"github.com/prysmaticlabs/prysm/shared/params"
mathutil "github.com/prysmaticlabs/prysm/math"
)
var (
@@ -37,6 +40,7 @@ var (
type CommitteeCache struct {
CommitteeCache *lru.Cache
lock sync.RWMutex
inProgress map[string]bool
}
// committeeKeyFn takes the seed as the key to retrieve shuffled indices of a committee in a given epoch.
@@ -52,14 +56,16 @@ func committeeKeyFn(obj interface{}) (string, error) {
func NewCommitteesCache() *CommitteeCache {
return &CommitteeCache{
CommitteeCache: lruwrpr.New(int(maxCommitteesCacheSize)),
inProgress: make(map[string]bool),
}
}
// Committee fetches the shuffled indices by slot and committee index. Every list of indices
// represents one committee. Returns true if the list exists with slot and committee index. Otherwise returns false, nil.
func (c *CommitteeCache) Committee(slot types.Slot, seed [32]byte, index types.CommitteeIndex) ([]types.ValidatorIndex, error) {
c.lock.RLock()
defer c.lock.RUnlock()
func (c *CommitteeCache) Committee(ctx context.Context, slot types.Slot, seed [32]byte, index types.CommitteeIndex) ([]types.ValidatorIndex, error) {
if err := c.checkInProgress(ctx, seed); err != nil {
return nil, err
}
obj, exists := c.CommitteeCache.Get(key(seed))
if exists {
@@ -79,7 +85,7 @@ func (c *CommitteeCache) Committee(slot types.Slot, seed [32]byte, index types.C
committeeCountPerSlot = item.CommitteeCount / uint64(params.BeaconConfig().SlotsPerEpoch)
}
indexOffSet, err := math.Add64(uint64(index), uint64(slot.ModSlot(params.BeaconConfig().SlotsPerEpoch).Mul(committeeCountPerSlot)))
indexOffSet, err := mathutil.Add64(uint64(index), uint64(slot.ModSlot(params.BeaconConfig().SlotsPerEpoch).Mul(committeeCountPerSlot)))
if err != nil {
return nil, err
}
@@ -106,9 +112,10 @@ func (c *CommitteeCache) AddCommitteeShuffledList(committees *Committees) error
}
// ActiveIndices returns the active indices of a given seed stored in cache.
func (c *CommitteeCache) ActiveIndices(seed [32]byte) ([]types.ValidatorIndex, error) {
c.lock.RLock()
defer c.lock.RUnlock()
func (c *CommitteeCache) ActiveIndices(ctx context.Context, seed [32]byte) ([]types.ValidatorIndex, error) {
if err := c.checkInProgress(ctx, seed); err != nil {
return nil, err
}
obj, exists := c.CommitteeCache.Get(key(seed))
if exists {
@@ -127,9 +134,11 @@ func (c *CommitteeCache) ActiveIndices(seed [32]byte) ([]types.ValidatorIndex, e
}
// ActiveIndicesCount returns the active indices count of a given seed stored in cache.
func (c *CommitteeCache) ActiveIndicesCount(seed [32]byte) (int, error) {
c.lock.RLock()
defer c.lock.RUnlock()
func (c *CommitteeCache) ActiveIndicesCount(ctx context.Context, seed [32]byte) (int, error) {
if err := c.checkInProgress(ctx, seed); err != nil {
return 0, err
}
obj, exists := c.CommitteeCache.Get(key(seed))
if exists {
CommitteeCacheHit.Inc()
@@ -152,6 +161,29 @@ func (c *CommitteeCache) HasEntry(seed string) bool {
return ok
}
// MarkInProgress marks a request as in progress so that any other similar
// requests will block on Get until MarkNotInProgress is called.
func (c *CommitteeCache) MarkInProgress(seed [32]byte) error {
c.lock.Lock()
defer c.lock.Unlock()
s := key(seed)
if c.inProgress[s] {
return ErrAlreadyInProgress
}
c.inProgress[s] = true
return nil
}
// MarkNotInProgress will release the lock on a given request. This should be
// called after put.
func (c *CommitteeCache) MarkNotInProgress(seed [32]byte) error {
c.lock.Lock()
defer c.lock.Unlock()
s := key(seed)
delete(c.inProgress, s)
return nil
}
func startEndIndices(c *Committees, index uint64) (uint64, uint64) {
validatorCount := uint64(len(c.ShuffledIndices))
start := slice.SplitOffset(validatorCount, c.CommitteeCount, index)
@@ -166,3 +198,28 @@ func startEndIndices(c *Committees, index uint64) (uint64, uint64) {
func key(seed [32]byte) string {
return string(seed[:])
}
func (c *CommitteeCache) checkInProgress(ctx context.Context, seed [32]byte) error {
delay := minDelay
// Another identical request may be in progress already. Let's wait until
// any in progress request resolves or our timeout is exceeded.
for {
if ctx.Err() != nil {
return ctx.Err()
}
c.lock.RLock()
if !c.inProgress[key(seed)] {
c.lock.RUnlock()
break
}
c.lock.RUnlock()
// This increasing backoff is to decrease the CPU cycles while waiting
// for the in progress boolean to flip to false.
time.Sleep(time.Duration(delay) * time.Nanosecond)
delay *= delayFactor
delay = math.Min(delay, maxDelay)
}
return nil
}
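
The in-progress bookkeeping added above only pays off if writers bracket their cache fill with MarkInProgress and MarkNotInProgress; readers calling Committee, ActiveIndices, or ActiveIndicesCount then wait in checkInProgress until the fill lands. A minimal sketch of that call pattern, assuming a hypothetical computeCommittees helper standing in for the real shuffle computation (everything else is taken from this diff):

package example

import (
	"context"
	"errors"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
)

// computeCommittees is a hypothetical stand-in for the real shuffle computation.
func computeCommittees(_ context.Context, seed [32]byte) (*cache.Committees, error) {
	return &cache.Committees{Seed: seed}, nil
}

// fillOnce marks the seed as in progress, fills the cache, then releases the marker so
// that concurrent readers stop waiting in checkInProgress.
func fillOnce(ctx context.Context, c *cache.CommitteeCache, seed [32]byte) error {
	if err := c.MarkInProgress(seed); err != nil {
		if errors.Is(err, cache.ErrAlreadyInProgress) {
			// Another goroutine is already filling this seed; readers will block
			// until it calls MarkNotInProgress.
			return nil
		}
		return err
	}
	defer func() {
		_ = c.MarkNotInProgress(seed)
	}()
	committees, err := computeCommittees(ctx, seed)
	if err != nil {
		return err
	}
	return c.AddCommitteeShuffledList(committees)
}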

View File

@@ -3,7 +3,11 @@
// This file is used in fuzzer builds to bypass global committee caches.
package cache
import types "github.com/prysmaticlabs/eth2-types"
import (
"context"
types "github.com/prysmaticlabs/eth2-types"
)
// FakeCommitteeCache is a struct with 1 queue for looking up shuffled indices list by seed.
type FakeCommitteeCache struct {
@@ -16,7 +20,7 @@ func NewCommitteesCache() *FakeCommitteeCache {
// Committee fetches the shuffled indices by slot and committee index. Every list of indices
// represent one committee. Returns true if the list exists with slot and committee index. Otherwise returns false, nil.
func (c *FakeCommitteeCache) Committee(slot types.Slot, seed [32]byte, index types.CommitteeIndex) ([]types.ValidatorIndex, error) {
func (c *FakeCommitteeCache) Committee(ctx context.Context, slot types.Slot, seed [32]byte, index types.CommitteeIndex) ([]types.ValidatorIndex, error) {
return nil, nil
}
@@ -32,12 +36,12 @@ func (c *FakeCommitteeCache) AddProposerIndicesList(seed [32]byte, indices []typ
}
// ActiveIndices returns the active indices of a given seed stored in cache.
func (c *FakeCommitteeCache) ActiveIndices(seed [32]byte) ([]types.ValidatorIndex, error) {
func (c *FakeCommitteeCache) ActiveIndices(ctx context.Context, seed [32]byte) ([]types.ValidatorIndex, error) {
return nil, nil
}
// ActiveIndicesCount returns the active indices count of a given seed stored in cache.
func (c *FakeCommitteeCache) ActiveIndicesCount(seed [32]byte) (int, error) {
func (c *FakeCommitteeCache) ActiveIndicesCount(ctx context.Context, seed [32]byte) (int, error) {
return 0, nil
}
@@ -55,3 +59,13 @@ func (c *FakeCommitteeCache) ProposerIndices(seed [32]byte) ([]types.ValidatorIn
func (c *FakeCommitteeCache) HasEntry(string) bool {
return false
}
// MarkInProgress is a stub.
func (c *FakeCommitteeCache) MarkInProgress(seed [32]byte) error {
return nil
}
// MarkNotInProgress is a stub.
func (c *FakeCommitteeCache) MarkNotInProgress(seed [32]byte) error {
return nil
}

View File

@@ -1,11 +1,12 @@
package cache
import (
"context"
"testing"
fuzz "github.com/google/gofuzz"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestCommitteeKeyFuzz_OK(t *testing.T) {
@@ -28,7 +29,7 @@ func TestCommitteeCache_FuzzCommitteesByEpoch(t *testing.T) {
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(c))
_, err := cache.Committee(0, c.Seed, 0)
_, err := cache.Committee(context.Background(), 0, c.Seed, 0)
require.NoError(t, err)
}
@@ -44,7 +45,7 @@ func TestCommitteeCache_FuzzActiveIndices(t *testing.T) {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(c))
indices, err := cache.ActiveIndices(c.Seed)
indices, err := cache.ActiveIndices(context.Background(), c.Seed)
require.NoError(t, err)
assert.DeepEqual(t, c.SortedIndices, indices)
}

View File

@@ -1,16 +1,17 @@
package cache
import (
"context"
"math"
"sort"
"strconv"
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestCommitteeKeyFn_OK(t *testing.T) {
@@ -41,7 +42,7 @@ func TestCommitteeCache_CommitteesByEpoch(t *testing.T) {
slot := params.BeaconConfig().SlotsPerEpoch
committeeIndex := types.CommitteeIndex(1)
indices, err := cache.Committee(slot, item.Seed, committeeIndex)
indices, err := cache.Committee(context.Background(), slot, item.Seed, committeeIndex)
require.NoError(t, err)
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
@@ -49,7 +50,7 @@ func TestCommitteeCache_CommitteesByEpoch(t *testing.T) {
require.NoError(t, cache.AddCommitteeShuffledList(item))
wantedIndex := types.CommitteeIndex(0)
indices, err = cache.Committee(slot, item.Seed, wantedIndex)
indices, err = cache.Committee(context.Background(), slot, item.Seed, wantedIndex)
require.NoError(t, err)
start, end := startEndIndices(item, uint64(wantedIndex))
@@ -60,7 +61,7 @@ func TestCommitteeCache_ActiveIndices(t *testing.T) {
cache := NewCommitteesCache()
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []types.ValidatorIndex{1, 2, 3, 4, 5, 6}}
indices, err := cache.ActiveIndices(item.Seed)
indices, err := cache.ActiveIndices(context.Background(), item.Seed)
require.NoError(t, err)
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
@@ -68,7 +69,7 @@ func TestCommitteeCache_ActiveIndices(t *testing.T) {
require.NoError(t, cache.AddCommitteeShuffledList(item))
indices, err = cache.ActiveIndices(item.Seed)
indices, err = cache.ActiveIndices(context.Background(), item.Seed)
require.NoError(t, err)
assert.DeepEqual(t, item.SortedIndices, indices)
}
@@ -77,13 +78,13 @@ func TestCommitteeCache_ActiveCount(t *testing.T) {
cache := NewCommitteesCache()
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []types.ValidatorIndex{1, 2, 3, 4, 5, 6}}
count, err := cache.ActiveIndicesCount(item.Seed)
count, err := cache.ActiveIndicesCount(context.Background(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count, "Expected active count not to exist in empty cache")
require.NoError(t, cache.AddCommitteeShuffledList(item))
count, err = cache.ActiveIndicesCount(item.Seed)
count, err = cache.ActiveIndicesCount(context.Background(), item.Seed)
require.NoError(t, err)
assert.Equal(t, len(item.SortedIndices), count)
}
@@ -127,6 +128,6 @@ func TestCommitteeCacheOutOfRange(t *testing.T) {
assert.NoError(t, err)
_ = cache.CommitteeCache.Add(key, comms)
_, err = cache.Committee(0, seed, math.MaxUint64) // Overflow!
_, err = cache.Committee(context.Background(), 0, seed, math.MaxUint64) // Overflow!
require.NotNil(t, err, "Did not fail as expected")
}
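
The out-of-range test above expects Committee to fail for a math.MaxUint64 index; the offset arithmetic in committee_cache.go earlier in this diff goes through the repo's checked math package (aliased mathutil there), which reports overflow as an error instead of wrapping. A minimal, self-contained illustration of that helper, assuming it keeps the Add64(a, b uint64) (uint64, error) shape used in the hunk above:

package example

import (
	"fmt"
	"math"

	mathutil "github.com/prysmaticlabs/prysm/math"
)

// demoOverflow shows checked addition rejecting a wrap-around.
func demoOverflow() {
	if _, err := mathutil.Add64(math.MaxUint64, 1); err != nil {
		fmt.Println("overflow detected:", err)
	}
	sum, _ := mathutil.Add64(2, 3)
	fmt.Println(sum) // 5
}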

View File

@@ -10,9 +10,6 @@ import (
// a Committee struct.
var ErrNotCommittee = errors.New("object is not a committee struct")
// ErrNonCommitteeKey will be returned when the committee key does not exist in cache.
var ErrNonCommitteeKey = errors.New("committee key does not exist")
// Committees defines the shuffled committees seed.
type Committees struct {
CommitteeCount uint64

View File

@@ -1,7 +1,7 @@
package cache
import (
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/config/params"
"k8s.io/client-go/tools/cache"
)

View File

@@ -10,11 +10,11 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//config/params:go_default_library",
"//container/trie:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
@@ -31,12 +31,12 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//config/params:go_default_library",
"//container/trie:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],

View File

@@ -5,7 +5,6 @@
package depositcache
import (
"bytes"
"context"
"encoding/hex"
"math/big"
@@ -15,11 +14,10 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/container/trie"
dbpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -51,9 +49,10 @@ type FinalizedDeposits struct {
// stores all the deposit related data that is required by the beacon-node.
type DepositCache struct {
// Beacon chain deposits in memory.
pendingDeposits []*dbpb.DepositContainer
deposits []*dbpb.DepositContainer
pendingDeposits []*ethpb.DepositContainer
deposits []*ethpb.DepositContainer
finalizedDeposits *FinalizedDeposits
depositsByKey map[[48]byte][]*ethpb.DepositContainer
depositsLock sync.RWMutex
}
@@ -67,8 +66,9 @@ func New() (*DepositCache, error) {
// finalizedDeposits.MerkleTrieIndex is initialized to -1 because it represents the index of the last trie item.
// Inserting the first item into the trie will set the value of the index to 0.
return &DepositCache{
pendingDeposits: []*dbpb.DepositContainer{},
deposits: []*dbpb.DepositContainer{},
pendingDeposits: []*ethpb.DepositContainer{},
deposits: []*ethpb.DepositContainer{},
depositsByKey: map[[48]byte][]*ethpb.DepositContainer{},
finalizedDeposits: &FinalizedDeposits{Deposits: finalizedDepositsTrie, MerkleTrieIndex: -1},
}, nil
}
@@ -76,7 +76,7 @@ func New() (*DepositCache, error) {
// InsertDeposit into the database. If deposit or block number are nil
// then this method does nothing.
func (dc *DepositCache) InsertDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "DepositsCache.InsertDeposit")
_, span := trace.StartSpan(ctx, "DepositsCache.InsertDeposit")
defer span.End()
if d == nil {
log.WithFields(logrus.Fields{
@@ -95,29 +95,41 @@ func (dc *DepositCache) InsertDeposit(ctx context.Context, d *ethpb.Deposit, blo
}
// Keep the slice sorted on insertion in order to avoid costly sorting on retrieval.
heightIdx := sort.Search(len(dc.deposits), func(i int) bool { return dc.deposits[i].Index >= index })
depCtr := &ethpb.DepositContainer{Deposit: d, Eth1BlockHeight: blockNum, DepositRoot: depositRoot[:], Index: index}
newDeposits := append(
[]*dbpb.DepositContainer{{Deposit: d, Eth1BlockHeight: blockNum, DepositRoot: depositRoot[:], Index: index}},
[]*ethpb.DepositContainer{depCtr},
dc.deposits[heightIdx:]...)
dc.deposits = append(dc.deposits[:heightIdx], newDeposits...)
// Append the deposit to our map, in the event no deposits
// exist for the pubkey , it is simply added to the map.
pubkey := bytesutil.ToBytes48(d.Data.PublicKey)
dc.depositsByKey[pubkey] = append(dc.depositsByKey[pubkey], depCtr)
historicalDepositsCount.Inc()
return nil
}
// InsertDepositContainers inserts a set of deposit containers into our deposit cache.
func (dc *DepositCache) InsertDepositContainers(ctx context.Context, ctrs []*dbpb.DepositContainer) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.InsertDepositContainers")
func (dc *DepositCache) InsertDepositContainers(ctx context.Context, ctrs []*ethpb.DepositContainer) {
_, span := trace.StartSpan(ctx, "DepositsCache.InsertDepositContainers")
defer span.End()
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
sort.SliceStable(ctrs, func(i int, j int) bool { return ctrs[i].Index < ctrs[j].Index })
dc.deposits = ctrs
for _, c := range ctrs {
// Use a new value, as the reference
// of c changes in the next iteration.
newPtr := c
pKey := bytesutil.ToBytes48(newPtr.Deposit.Data.PublicKey)
dc.depositsByKey[pKey] = append(dc.depositsByKey[pKey], newPtr)
}
historicalDepositsCount.Add(float64(len(ctrs)))
}
// InsertFinalizedDeposits inserts deposits up to eth1DepositIndex (inclusive) into the finalized deposits cache.
func (dc *DepositCache) InsertFinalizedDeposits(ctx context.Context, eth1DepositIndex int64) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.InsertFinalizedDeposits")
_, span := trace.StartSpan(ctx, "DepositsCache.InsertFinalizedDeposits")
defer span.End()
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
@@ -136,7 +148,10 @@ func (dc *DepositCache) InsertFinalizedDeposits(ctx context.Context, eth1Deposit
log.WithError(err).Error("Could not hash deposit data. Finalized deposit cache not updated.")
return
}
depositTrie.Insert(depHash[:], insertIndex)
if err = depositTrie.Insert(depHash[:], insertIndex); err != nil {
log.WithError(err).Error("Could not insert deposit hash")
return
}
insertIndex++
}
@@ -147,8 +162,8 @@ func (dc *DepositCache) InsertFinalizedDeposits(ctx context.Context, eth1Deposit
}
// AllDepositContainers returns all historical deposit containers.
func (dc *DepositCache) AllDepositContainers(ctx context.Context) []*dbpb.DepositContainer {
ctx, span := trace.StartSpan(ctx, "DepositsCache.AllDepositContainers")
func (dc *DepositCache) AllDepositContainers(ctx context.Context) []*ethpb.DepositContainer {
_, span := trace.StartSpan(ctx, "DepositsCache.AllDepositContainers")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
@@ -159,7 +174,7 @@ func (dc *DepositCache) AllDepositContainers(ctx context.Context) []*dbpb.Deposi
// AllDeposits returns a list of historical deposits until the given block number
// (inclusive). If no block is specified then this method returns all historical deposits.
func (dc *DepositCache) AllDeposits(ctx context.Context, untilBlk *big.Int) []*ethpb.Deposit {
ctx, span := trace.StartSpan(ctx, "DepositsCache.AllDeposits")
_, span := trace.StartSpan(ctx, "DepositsCache.AllDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
@@ -176,7 +191,7 @@ func (dc *DepositCache) AllDeposits(ctx context.Context, untilBlk *big.Int) []*e
// DepositsNumberAndRootAtHeight returns number of deposits made up to blockheight and the
// root that corresponds to the latest deposit at that blockheight.
func (dc *DepositCache) DepositsNumberAndRootAtHeight(ctx context.Context, blockHeight *big.Int) (uint64, [32]byte) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.DepositsNumberAndRootAtHeight")
_, span := trace.StartSpan(ctx, "DepositsCache.DepositsNumberAndRootAtHeight")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
@@ -192,26 +207,28 @@ func (dc *DepositCache) DepositsNumberAndRootAtHeight(ctx context.Context, block
// DepositByPubkey looks through historical deposits and finds one which contains
// a certain public key within its deposit data.
func (dc *DepositCache) DepositByPubkey(ctx context.Context, pubKey []byte) (*ethpb.Deposit, *big.Int) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.DepositByPubkey")
_, span := trace.StartSpan(ctx, "DepositsCache.DepositByPubkey")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var deposit *ethpb.Deposit
var blockNum *big.Int
for _, ctnr := range dc.deposits {
if bytes.Equal(ctnr.Deposit.Data.PublicKey, pubKey) {
deposit = ctnr.Deposit
blockNum = big.NewInt(int64(ctnr.Eth1BlockHeight))
break
}
deps, ok := dc.depositsByKey[bytesutil.ToBytes48(pubKey)]
if !ok || len(deps) == 0 {
return deposit, blockNum
}
// We always return the first deposit if a particular
// validator key has multiple deposits assigned to
// it.
deposit = deps[0].Deposit
blockNum = big.NewInt(int64(deps[0].Eth1BlockHeight))
return deposit, blockNum
}
// FinalizedDeposits returns the finalized deposits trie.
func (dc *DepositCache) FinalizedDeposits(ctx context.Context) *FinalizedDeposits {
ctx, span := trace.StartSpan(ctx, "DepositsCache.FinalizedDeposits")
_, span := trace.StartSpan(ctx, "DepositsCache.FinalizedDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
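
With the new depositsByKey map, DepositByPubkey resolves through a direct 48-byte key lookup instead of scanning dc.deposits, so any deposit inserted with a populated Data.PublicKey is immediately addressable by that key. A small sketch of the round trip, with an illustrative key, block number, and amount:

package example

import (
	"context"
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
	"github.com/prysmaticlabs/prysm/encoding/bytesutil"
	ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)

// roundTrip inserts one deposit and reads it back by public key; the key bytes, block
// number, and amount are placeholders.
func roundTrip(ctx context.Context) error {
	dc, err := depositcache.New()
	if err != nil {
		return err
	}
	pk := bytesutil.PadTo([]byte("example-key"), 48)
	dep := &ethpb.Deposit{Data: &ethpb.Deposit_Data{PublicKey: pk, Amount: 32}}
	if err := dc.InsertDeposit(ctx, dep, 100 /*eth1 block*/, 0 /*index*/, [32]byte{}); err != nil {
		return err
	}
	found, blk := dc.DepositByPubkey(ctx, pk) // map lookup
	fmt.Println(found.Data.Amount, blk.Uint64())
	return nil
}

When a key has more than one deposit recorded, DepositByPubkey returns the first one, as the comment in the hunk above states.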

View File

@@ -7,13 +7,12 @@ import (
"math/big"
"testing"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/container/trie"
dbpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -44,31 +43,31 @@ func TestInsertDeposit_MaintainsSortedOrderByIndex(t *testing.T) {
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &ethpb.Deposit_Data{PublicKey: []byte{'A'}}},
index: 0,
expectedErr: "",
},
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &ethpb.Deposit_Data{PublicKey: []byte{'B'}}},
index: 3,
expectedErr: "wanted deposit with index 1 to be inserted but received 3",
},
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &ethpb.Deposit_Data{PublicKey: []byte{'C'}}},
index: 1,
expectedErr: "",
},
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &ethpb.Deposit_Data{PublicKey: []byte{'D'}}},
index: 4,
expectedErr: "wanted deposit with index 2 to be inserted but received 4",
},
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &ethpb.Deposit_Data{PublicKey: []byte{'E'}}},
index: 2,
expectedErr: "",
},
@@ -93,7 +92,7 @@ func TestAllDeposits_ReturnsAllDeposits(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []*dbpb.DepositContainer{
deposits := []*ethpb.DepositContainer{
{
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
@@ -133,7 +132,7 @@ func TestAllDeposits_FiltersDepositUpToAndIncludingBlockNumber(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []*dbpb.DepositContainer{
deposits := []*ethpb.DepositContainer{
{
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
@@ -174,7 +173,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
t.Run("requesting_last_item_works", func(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.deposits = []*dbpb.DepositContainer{
dc.deposits = []*ethpb.DepositContainer{
{
Eth1BlockHeight: 10,
Index: 0,
@@ -205,7 +204,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.deposits = []*dbpb.DepositContainer{
dc.deposits = []*ethpb.DepositContainer{
{
Eth1BlockHeight: 10,
Index: 0,
@@ -221,7 +220,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.deposits = []*dbpb.DepositContainer{
dc.deposits = []*ethpb.DepositContainer{
{
Eth1BlockHeight: 8,
Index: 0,
@@ -247,7 +246,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.deposits = []*dbpb.DepositContainer{
dc.deposits = []*ethpb.DepositContainer{
{
Eth1BlockHeight: 8,
Index: 0,
@@ -263,7 +262,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.deposits = []*dbpb.DepositContainer{
dc.deposits = []*ethpb.DepositContainer{
{
Eth1BlockHeight: 8,
Index: 0,
@@ -279,7 +278,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.deposits = []*dbpb.DepositContainer{
dc.deposits = []*ethpb.DepositContainer{
{
Eth1BlockHeight: 8,
Index: 0,
@@ -316,8 +315,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.deposits = []*dbpb.DepositContainer{
ctrs := []*ethpb.DepositContainer{
{
Eth1BlockHeight: 9,
Deposit: &ethpb.Deposit{
@@ -359,6 +357,7 @@ func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
},
},
}
dc.InsertDepositContainers(context.Background(), ctrs)
pk1 := bytesutil.PadTo([]byte("pk1"), 48)
dep, blkNum := dc.DepositByPubkey(context.Background(), pk1)
@@ -374,7 +373,7 @@ func TestFinalizedDeposits_DepositsCachedCorrectly(t *testing.T) {
dc, err := New()
require.NoError(t, err)
finalizedDeposits := []*dbpb.DepositContainer{
finalizedDeposits := []*ethpb.DepositContainer{
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
@@ -406,7 +405,7 @@ func TestFinalizedDeposits_DepositsCachedCorrectly(t *testing.T) {
Index: 2,
},
}
dc.deposits = append(finalizedDeposits, &dbpb.DepositContainer{
dc.deposits = append(finalizedDeposits, &ethpb.DepositContainer{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{3}, 48),
@@ -438,7 +437,7 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
dc, err := New()
require.NoError(t, err)
oldFinalizedDeposits := []*dbpb.DepositContainer{
oldFinalizedDeposits := []*ethpb.DepositContainer{
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
@@ -460,7 +459,7 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
Index: 1,
},
}
newFinalizedDeposit := dbpb.DepositContainer{
newFinalizedDeposit := ethpb.DepositContainer{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{2}, 48),
@@ -473,7 +472,7 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
dc.deposits = oldFinalizedDeposits
dc.InsertFinalizedDeposits(context.Background(), 1)
// Artificially exclude old deposits so that they can only be retrieved from previously finalized deposits.
dc.deposits = []*dbpb.DepositContainer{&newFinalizedDeposit}
dc.deposits = []*ethpb.DepositContainer{&newFinalizedDeposit}
dc.InsertFinalizedDeposits(context.Background(), 2)
@@ -506,7 +505,7 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits(t *testing.T) {
dc, err := New()
require.NoError(t, err)
finalizedDeposits := []*dbpb.DepositContainer{
finalizedDeposits := []*ethpb.DepositContainer{
{
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{
@@ -531,7 +530,7 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits(t *testing.T) {
},
}
dc.deposits = append(finalizedDeposits,
&dbpb.DepositContainer{
&ethpb.DepositContainer{
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
@@ -542,7 +541,7 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits(t *testing.T) {
},
Index: 2,
},
&dbpb.DepositContainer{
&ethpb.DepositContainer{
Eth1BlockHeight: 11,
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
@@ -563,7 +562,7 @@ func TestNonFinalizedDeposits_ReturnsNonFinalizedDepositsUpToBlockNumber(t *test
dc, err := New()
require.NoError(t, err)
finalizedDeposits := []*dbpb.DepositContainer{
finalizedDeposits := []*ethpb.DepositContainer{
{
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{
@@ -588,7 +587,7 @@ func TestNonFinalizedDeposits_ReturnsNonFinalizedDepositsUpToBlockNumber(t *test
},
}
dc.deposits = append(finalizedDeposits,
&dbpb.DepositContainer{
&ethpb.DepositContainer{
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
@@ -599,7 +598,7 @@ func TestNonFinalizedDeposits_ReturnsNonFinalizedDepositsUpToBlockNumber(t *test
},
Index: 2,
},
&dbpb.DepositContainer{
&ethpb.DepositContainer{
Eth1BlockHeight: 11,
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
@@ -626,24 +625,28 @@ func TestPruneProofs_Ok(t *testing.T) {
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 0,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 1,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 2,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 3,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
@@ -669,24 +672,26 @@ func TestPruneProofs_SomeAlreadyPruned(t *testing.T) {
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil},
index: 0,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}}, index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
deposit: &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 3,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
@@ -709,24 +714,28 @@ func TestPruneProofs_PruneAllWhenDepositIndexTooBig(t *testing.T) {
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 0,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 1,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 2,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 3,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
@@ -752,24 +761,28 @@ func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 0,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 1,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 2,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 3,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
@@ -785,6 +798,36 @@ func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestDepositMap_WorksCorrectly(t *testing.T) {
dc, err := New()
require.NoError(t, err)
pk0 := bytesutil.PadTo([]byte("pk0"), 48)
dep, _ := dc.DepositByPubkey(context.Background(), pk0)
var nilDep *ethpb.Deposit
assert.DeepEqual(t, nilDep, dep)
dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 1000}}
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 0, [32]byte{}))
dep, _ = dc.DepositByPubkey(context.Background(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)
dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 10000}}
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 1, [32]byte{}))
// Make sure we have the same deposit returned over here.
dep, _ = dc.DepositByPubkey(context.Background(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)
// Make sure another key doesn't work.
pk1 := bytesutil.PadTo([]byte("pk1"), 48)
dep, _ = dc.DepositByPubkey(context.Background(), pk1)
assert.DeepEqual(t, nilDep, dep)
}
func makeDepositProof() [][]byte {
proof := make([][]byte, int(params.BeaconConfig().DepositContractTreeDepth)+1)
for i := range proof {

View File

@@ -8,7 +8,6 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/crypto/hash"
dbpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -24,13 +23,13 @@ var (
// PendingDepositsFetcher specifically outlines a struct that can retrieve deposits
// which have not yet been included in the chain.
type PendingDepositsFetcher interface {
PendingContainers(ctx context.Context, untilBlk *big.Int) []*dbpb.DepositContainer
PendingContainers(ctx context.Context, untilBlk *big.Int) []*ethpb.DepositContainer
}
// InsertPendingDeposit into the database. If deposit or block number are nil
// then this method does nothing.
func (dc *DepositCache) InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.InsertPendingDeposit")
_, span := trace.StartSpan(ctx, "DepositsCache.InsertPendingDeposit")
defer span.End()
if d == nil {
log.WithFields(logrus.Fields{
@@ -42,7 +41,7 @@ func (dc *DepositCache) InsertPendingDeposit(ctx context.Context, d *ethpb.Depos
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
dc.pendingDeposits = append(dc.pendingDeposits,
&dbpb.DepositContainer{Deposit: d, Eth1BlockHeight: blockNum, Index: index, DepositRoot: depositRoot[:]})
&ethpb.DepositContainer{Deposit: d, Eth1BlockHeight: blockNum, Index: index, DepositRoot: depositRoot[:]})
pendingDepositsCount.Inc()
span.AddAttributes(trace.Int64Attribute("count", int64(len(dc.pendingDeposits))))
}
@@ -66,13 +65,13 @@ func (dc *DepositCache) PendingDeposits(ctx context.Context, untilBlk *big.Int)
// PendingContainers returns a list of deposit containers until the given block number
// (inclusive).
func (dc *DepositCache) PendingContainers(ctx context.Context, untilBlk *big.Int) []*dbpb.DepositContainer {
ctx, span := trace.StartSpan(ctx, "DepositsCache.PendingDeposits")
func (dc *DepositCache) PendingContainers(ctx context.Context, untilBlk *big.Int) []*ethpb.DepositContainer {
_, span := trace.StartSpan(ctx, "DepositsCache.PendingDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var depositCntrs []*dbpb.DepositContainer
var depositCntrs []*ethpb.DepositContainer
for _, ctnr := range dc.pendingDeposits {
if untilBlk == nil || untilBlk.Uint64() >= ctnr.Eth1BlockHeight {
depositCntrs = append(depositCntrs, ctnr)
@@ -91,7 +90,7 @@ func (dc *DepositCache) PendingContainers(ctx context.Context, untilBlk *big.Int
// RemovePendingDeposit from the database. The deposit is indexed by the
// Index. This method does nothing if deposit ptr is nil.
func (dc *DepositCache) RemovePendingDeposit(ctx context.Context, d *ethpb.Deposit) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.RemovePendingDeposit")
_, span := trace.StartSpan(ctx, "DepositsCache.RemovePendingDeposit")
defer span.End()
if d == nil {
@@ -129,7 +128,7 @@ func (dc *DepositCache) RemovePendingDeposit(ctx context.Context, d *ethpb.Depos
// PrunePendingDeposits removes any deposit which is older than the given deposit merkle tree index.
func (dc *DepositCache) PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.PrunePendingDeposits")
_, span := trace.StartSpan(ctx, "DepositsCache.PrunePendingDeposits")
defer span.End()
if merkleTreeIndex == 0 {
@@ -140,7 +139,7 @@ func (dc *DepositCache) PrunePendingDeposits(ctx context.Context, merkleTreeInde
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
var cleanDeposits []*dbpb.DepositContainer
var cleanDeposits []*ethpb.DepositContainer
for _, dp := range dc.pendingDeposits {
if dp.Index >= merkleTreeIndex {
cleanDeposits = append(cleanDeposits, dp)
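
PendingDepositsFetcher is the narrow interface consumers should depend on when they only need deposits not yet included in the chain; after this change it hands back the ethpb container type. A sketch of a consumer that counts pending containers up to an eth1 block, with an illustrative block number:

package example

import (
	"context"
	"math/big"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
)

// countPending depends only on the fetcher interface rather than the full cache.
func countPending(ctx context.Context, fetcher depositcache.PendingDepositsFetcher, untilBlk uint64) int {
	ctrs := fetcher.PendingContainers(ctx, new(big.Int).SetUint64(untilBlk))
	return len(ctrs)
}

Passing nil for untilBlk returns every pending container, per the block filter in PendingContainers above.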

View File

@@ -5,10 +5,9 @@ import (
"math/big"
"testing"
dbpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/testing/assert"
"google.golang.org/protobuf/proto"
)
@@ -42,7 +41,7 @@ func TestRemovePendingDeposit_OK(t *testing.T) {
}
depToRemove := &ethpb.Deposit{Proof: proof1, Data: data}
otherDep := &ethpb.Deposit{Proof: proof2, Data: data}
db.pendingDeposits = []*dbpb.DepositContainer{
db.pendingDeposits = []*ethpb.DepositContainer{
{Deposit: depToRemove, Index: 1},
{Deposit: otherDep, Index: 5},
}
@@ -55,7 +54,7 @@ func TestRemovePendingDeposit_OK(t *testing.T) {
func TestRemovePendingDeposit_IgnoresNilDeposit(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*dbpb.DepositContainer{{Deposit: &ethpb.Deposit{}}}
dc.pendingDeposits = []*ethpb.DepositContainer{{Deposit: &ethpb.Deposit{}}}
dc.RemovePendingDeposit(context.Background(), nil /*deposit*/)
assert.Equal(t, 1, len(dc.pendingDeposits), "Deposit unexpectedly removed")
}
@@ -79,7 +78,7 @@ func TestPendingDeposit_RoundTrip(t *testing.T) {
func TestPendingDeposits_OK(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*dbpb.DepositContainer{
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}},
{Eth1BlockHeight: 4, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("B")}}},
{Eth1BlockHeight: 6, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("c")}}},
@@ -99,7 +98,7 @@ func TestPendingDeposits_OK(t *testing.T) {
func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*dbpb.DepositContainer{
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
@@ -109,7 +108,7 @@ func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
}
dc.PrunePendingDeposits(context.Background(), 0)
expected := []*dbpb.DepositContainer{
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
@@ -123,7 +122,7 @@ func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
func TestPrunePendingDeposits_OK(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*dbpb.DepositContainer{
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
@@ -133,7 +132,7 @@ func TestPrunePendingDeposits_OK(t *testing.T) {
}
dc.PrunePendingDeposits(context.Background(), 6)
expected := []*dbpb.DepositContainer{
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
@@ -142,7 +141,7 @@ func TestPrunePendingDeposits_OK(t *testing.T) {
assert.DeepEqual(t, expected, dc.pendingDeposits)
dc.pendingDeposits = []*dbpb.DepositContainer{
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
@@ -152,7 +151,7 @@ func TestPrunePendingDeposits_OK(t *testing.T) {
}
dc.PrunePendingDeposits(context.Background(), 10)
expected = []*dbpb.DepositContainer{
expected = []*ethpb.DepositContainer{
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}

View File

@@ -5,9 +5,9 @@ import (
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestProposerKeyFn_OK(t *testing.T) {

View File

@@ -120,26 +120,22 @@ func (c *SkipSlotCache) MarkInProgress(r [32]byte) error {
// MarkNotInProgress will release the lock on a given request. This should be
// called after put.
func (c *SkipSlotCache) MarkNotInProgress(r [32]byte) error {
func (c *SkipSlotCache) MarkNotInProgress(r [32]byte) {
if c.disabled {
return nil
return
}
c.lock.Lock()
defer c.lock.Unlock()
delete(c.inProgress, r)
return nil
}
// Put the response in the cache.
func (c *SkipSlotCache) Put(_ context.Context, r [32]byte, state state.BeaconState) error {
func (c *SkipSlotCache) Put(_ context.Context, r [32]byte, state state.BeaconState) {
if c.disabled {
return nil
return
}
// Copy state so cached value is not mutated.
c.cache.Add(r, state.Copy())
return nil
}
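
Since Put and MarkNotInProgress no longer return errors, call sites drop their require.NoError wrappers, as the test change below shows. A sketch of the updated write path, with placeholder root and state values and an err comment that is only an example:

package example

import (
	"context"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
	"github.com/prysmaticlabs/prysm/beacon-chain/state"
)

// cacheState fills the skip-slot cache for root r and releases the in-progress marker.
func cacheState(ctx context.Context, c *cache.SkipSlotCache, r [32]byte, st state.BeaconState) error {
	if err := c.MarkInProgress(r); err != nil {
		return err // e.g. a request for r is already in progress
	}
	c.Put(ctx, r, st)      // no error to check anymore
	c.MarkNotInProgress(r) // likewise
	return nil
}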

View File

@@ -8,8 +8,8 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestSkipSlotCache_RoundTrip(t *testing.T) {
@@ -28,8 +28,8 @@ func TestSkipSlotCache_RoundTrip(t *testing.T) {
})
require.NoError(t, err)
require.NoError(t, c.Put(ctx, r, s))
require.NoError(t, c.MarkNotInProgress(r))
c.Put(ctx, r, s)
c.MarkNotInProgress(r)
res, err := c.Get(ctx, r)
require.NoError(t, err)

View File

@@ -8,8 +8,8 @@ import (
"github.com/patrickmn/go-cache"
types "github.com/prysmaticlabs/eth2-types"
lruwrpr "github.com/prysmaticlabs/prysm/cache/lru"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/container/slice"
"github.com/prysmaticlabs/prysm/shared/params"
)
type subnetIDs struct {

View File

@@ -4,8 +4,8 @@ import (
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestSubnetIDsCache_RoundTrip(t *testing.T) {

View File

@@ -9,7 +9,7 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"k8s.io/client-go/tools/cache"
)

View File

@@ -7,9 +7,9 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state"
v1 "github.com/prysmaticlabs/prysm/beacon-chain/state/v1"
v2 "github.com/prysmaticlabs/prysm/beacon-chain/state/v2"
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestSyncCommitteeHeadState(t *testing.T) {

View File

@@ -6,15 +6,13 @@ import (
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
)
func TestSyncCommitteeCache_CanUpdateAndRetrieve(t *testing.T) {
numValidators := 101
deterministicState, _ := testutil.DeterministicGenesisStateAltair(t, uint64(numValidators))
deterministicState, _ := util.DeterministicGenesisStateAltair(t, uint64(numValidators))
pubKeys := make([][]byte, deterministicState.NumValidators())
for i, val := range deterministicState.Validators() {
pubKeys[i] = val.PublicKey
@@ -28,10 +26,10 @@ func TestSyncCommitteeCache_CanUpdateAndRetrieve(t *testing.T) {
}{
{
name: "only current epoch",
currentSyncCommittee: convertToCommittee([][]byte{
currentSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[1], pubKeys[2], pubKeys[3], pubKeys[2], pubKeys[2],
}),
nextSyncCommittee: convertToCommittee([][]byte{}),
nextSyncCommittee: util.ConvertToCommittee([][]byte{}),
currentSyncMap: map[types.ValidatorIndex][]types.CommitteeIndex{
1: {0},
2: {1, 3, 4},
@@ -45,8 +43,8 @@ func TestSyncCommitteeCache_CanUpdateAndRetrieve(t *testing.T) {
},
{
name: "only next epoch",
currentSyncCommittee: convertToCommittee([][]byte{}),
nextSyncCommittee: convertToCommittee([][]byte{
currentSyncCommittee: util.ConvertToCommittee([][]byte{}),
nextSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[1], pubKeys[2], pubKeys[3], pubKeys[2], pubKeys[2],
}),
currentSyncMap: map[types.ValidatorIndex][]types.CommitteeIndex{
@@ -62,14 +60,14 @@ func TestSyncCommitteeCache_CanUpdateAndRetrieve(t *testing.T) {
},
{
name: "some current epoch and some next epoch",
currentSyncCommittee: convertToCommittee([][]byte{
currentSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[1],
pubKeys[2],
pubKeys[3],
pubKeys[2],
pubKeys[2],
}),
nextSyncCommittee: convertToCommittee([][]byte{
nextSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[7],
pubKeys[6],
pubKeys[5],
@@ -90,14 +88,14 @@ func TestSyncCommitteeCache_CanUpdateAndRetrieve(t *testing.T) {
},
{
name: "some current epoch and some next epoch duplicated across",
currentSyncCommittee: convertToCommittee([][]byte{
currentSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[1],
pubKeys[2],
pubKeys[3],
pubKeys[2],
pubKeys[2],
}),
nextSyncCommittee: convertToCommittee([][]byte{
nextSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[2],
pubKeys[1],
pubKeys[3],
@@ -117,13 +115,13 @@ func TestSyncCommitteeCache_CanUpdateAndRetrieve(t *testing.T) {
},
{
name: "all duplicated",
currentSyncCommittee: convertToCommittee([][]byte{
currentSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[100],
pubKeys[100],
pubKeys[100],
pubKeys[100],
}),
nextSyncCommittee: convertToCommittee([][]byte{
nextSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[100],
pubKeys[100],
pubKeys[100],
@@ -138,13 +136,13 @@ func TestSyncCommitteeCache_CanUpdateAndRetrieve(t *testing.T) {
},
{
name: "unknown keys",
currentSyncCommittee: convertToCommittee([][]byte{
currentSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[100],
pubKeys[100],
pubKeys[100],
pubKeys[100],
}),
nextSyncCommittee: convertToCommittee([][]byte{
nextSyncCommittee: util.ConvertToCommittee([][]byte{
pubKeys[100],
pubKeys[100],
pubKeys[100],
@@ -160,7 +158,7 @@ func TestSyncCommitteeCache_CanUpdateAndRetrieve(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s, _ := testutil.DeterministicGenesisStateAltair(t, uint64(numValidators))
s, _ := util.DeterministicGenesisStateAltair(t, uint64(numValidators))
require.NoError(t, s.SetCurrentSyncCommittee(tt.currentSyncCommittee))
require.NoError(t, s.SetNextSyncCommittee(tt.nextSyncCommittee))
cache := cache.NewSyncCommittee()
@@ -188,14 +186,14 @@ func TestSyncCommitteeCache_RootDoesNotExist(t *testing.T) {
func TestSyncCommitteeCache_CanRotate(t *testing.T) {
c := cache.NewSyncCommittee()
s, _ := testutil.DeterministicGenesisStateAltair(t, 64)
require.NoError(t, s.SetCurrentSyncCommittee(convertToCommittee([][]byte{{1}})))
s, _ := util.DeterministicGenesisStateAltair(t, 64)
require.NoError(t, s.SetCurrentSyncCommittee(util.ConvertToCommittee([][]byte{{1}})))
require.NoError(t, c.UpdatePositionsInCommittee([32]byte{'a'}, s))
require.NoError(t, s.SetCurrentSyncCommittee(convertToCommittee([][]byte{{2}})))
require.NoError(t, s.SetCurrentSyncCommittee(util.ConvertToCommittee([][]byte{{2}})))
require.NoError(t, c.UpdatePositionsInCommittee([32]byte{'b'}, s))
require.NoError(t, s.SetCurrentSyncCommittee(convertToCommittee([][]byte{{3}})))
require.NoError(t, s.SetCurrentSyncCommittee(util.ConvertToCommittee([][]byte{{3}})))
require.NoError(t, c.UpdatePositionsInCommittee([32]byte{'c'}, s))
require.NoError(t, s.SetCurrentSyncCommittee(convertToCommittee([][]byte{{4}})))
require.NoError(t, s.SetCurrentSyncCommittee(util.ConvertToCommittee([][]byte{{4}})))
require.NoError(t, c.UpdatePositionsInCommittee([32]byte{'d'}, s))
_, err := c.CurrentPeriodIndexPosition([32]byte{'a'}, 0)
@@ -204,19 +202,3 @@ func TestSyncCommitteeCache_CanRotate(t *testing.T) {
_, err = c.CurrentPeriodIndexPosition([32]byte{'c'}, 0)
require.NoError(t, err)
}
func convertToCommittee(inputKeys [][]byte) *ethpb.SyncCommittee {
var pubKeys [][]byte
for i := uint64(0); i < params.BeaconConfig().SyncCommitteeSize; i++ {
if i < uint64(len(inputKeys)) {
pubKeys = append(pubKeys, bytesutil.PadTo(inputKeys[i], params.BeaconConfig().BLSPubkeyLength))
} else {
pubKeys = append(pubKeys, bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength))
}
}
return &ethpb.SyncCommittee{
Pubkeys: pubKeys,
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
}
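
The local convertToCommittee helper deleted above was replaced by the shared util.ConvertToCommittee, which pads the supplied keys out to SyncCommitteeSize exactly as the removed code did. A small test-shaped sketch of calling the shared helper; the two one-byte keys are placeholders:

package cache_test

import (
	"testing"

	"github.com/prysmaticlabs/prysm/config/params"
	"github.com/prysmaticlabs/prysm/testing/require"
	"github.com/prysmaticlabs/prysm/testing/util"
)

// TestConvertToCommittee_PadsToFullSize checks the helper fills the committee with
// zero-padded keys up to the configured size.
func TestConvertToCommittee_PadsToFullSize(t *testing.T) {
	c := util.ConvertToCommittee([][]byte{{1}, {2}})
	require.Equal(t, params.BeaconConfig().SyncCommitteeSize, uint64(len(c.Pubkeys)))
}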

View File

@@ -6,10 +6,10 @@ import (
"github.com/patrickmn/go-cache"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/container/slice"
"github.com/prysmaticlabs/prysm/crypto/rand"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
)
type syncSubnetIDs struct {

View File

@@ -3,9 +3,9 @@ package cache
import (
"testing"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil/assert"
"github.com/prysmaticlabs/prysm/shared/testutil/require"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestSyncSubnetIDsCache_Roundtrip(t *testing.T) {

View File

@@ -1,50 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["slot_epoch.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core",
visibility = [
"//beacon-chain:__subpackages__",
"//fuzz:__pkg__",
"//network/forks:__pkg__",
"//proto/prysm/v1alpha1/attestation:__pkg__",
"//shared/attestationutil:__pkg__",
"//shared/depositutil:__pkg__",
"//shared/interop:__pkg__",
"//shared/keystore:__pkg__",
"//shared/testutil:__pkg__",
"//shared/testutil/altair:__pkg__",
"//slasher:__subpackages__",
"//testing/benchmark/benchmark_files:__subpackages__",
"//testing/endtoend/evaluators:__pkg__",
"//testing/fuzz:__pkg__",
"//testing/spectest:__subpackages__",
"//tools:__subpackages__",
"//validator:__subpackages__",
],
deps = [
"//beacon-chain/state:go_default_library",
"//runtime/version:go_default_library",
"//shared/params:go_default_library",
"//time:go_default_library",
"@com_github_ethereum_go_ethereum//common/math:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["slot_epoch_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/state/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//time:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
],
)

View File

@@ -16,29 +16,33 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core/altair",
visibility = [
"//beacon-chain:__subpackages__",
"//shared/testutil:__pkg__",
"//testing/spectest:__subpackages__",
"//testing/util:__pkg__",
"//validator/client:__pkg__",
],
deps = [
"//beacon-chain/core:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/p2p/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/v2:go_default_library",
"//config/params:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/block:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
@@ -59,25 +63,28 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/p2p/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/v2:go_default_library",
"//beacon-chain/state/v3:go_default_library",
"//config/params:go_default_library",
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/wrapper:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"//shared/testutil/assert:go_default_library",
"//shared/testutil/require:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

Some files were not shown because too many files have changed in this diff.