Compare commits

...

239 Commits

Author SHA1 Message Date
terence tsao
89eedd2123 Efficient computation of epoch participation (#4430)
* Remove custody (#3986)

* Update proto fields

* Updated block operations

* Fixed all block operation tests

* Fixed tests part 1

* Fixed tests part 1

* All tests pass

* Clean up

* Skip spec test

* Fixed ssz test

* Skip ssz test

* Skip mainnet tests

* Update beacon-chain/operations/attestation.go

* Update beacon-chain/operations/attestation.go
* Decoy flip flop check (#3987)
* Bounce attack check (#3989)

* New store values

* Update process block

* Update process attestation

* Update tests

* Helper

* Fixed blockchain package tests

* Update beacon-chain/blockchain/forkchoice/process_block.go
* Conflict
* Unskip mainnet spec tests (#3998)

* Starting

* Fixed attestation mainnet test

* Unskip ssz static and block processing tests

* Fixed workspace

* fixed workspace

* fixed workspace

* Update beacon-chain/core/blocks/block_operations.go
* Unskip minimal spec tests (#3999)

* Starting

* Fixed attestation mainnet test

* Unskip ssz static and block processing tests

* Fixed workspace

* fixed workspace

* fixed workspace

* Update workspace

* Unskip all minimal spec tests

* Update workspace for general test
* Unskip test (#4001)
* Update minimal seconds per slot to 6 (#3978)
* Bounce attack tests (#3993)

* New store values

* Update process block

* Update process attestation

* Update tests

* Helper

* Fixed blockchain package tests

* Slots since epoch starts tests

* Update justified checkpt tests

* Conflict

* Fixed logic

* Update process_block.go

* Use helper
* Conflict
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into v0.9.1
* Conflict
* Fixed failed tests
* Lower MinGenesisActiveValidatorCount to 16384 (#4100)
* Fork choice beacon block checks (#4107)

* Prevent future blocks check and test

* Removed old code
* Update aggregation proto (#4121)

* Update def
* Update spec test
* Conflict
* Update workspace
* patch
* Resolve conflict
* Patch
* Change workspace
* Update ethereumapis to a forked branch at commit 6eb1193e47
* Fixed all the tests
* Merge branch 'v0.9.2' of https://github.com/prysmaticlabs/prysm into conflict
* fix patch
* Need to regenerate test data
* Merge branch 'master' into v0.9.2
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Enable snappy compression for all (#4157)

* enable snappy compression for all
* enable snappy compression for all
* enable snappy compression for all
* enable snappy compression for all
* Validate aggregate and proof subscriber (#4159)
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Conflict
* Update workspace
* Conflict
* Conflict
* Conflict
* Merge branch 'master' into v0.9.2
* Merge branch 'master' into v0.9.2
* Conflict
* Merge branch 'v0.9.2' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Remove migrate to snappy (#4205)
* Feature flag: Deprecate --prune-states, release to all (#4204)

* Deprecated prune-states, release to all

* imports

* remove unused import

* remove unused import

* Rm prune state test

* gaz
* Refactoring for dynamic pubsub subscriptions for non-aggregated attestations (#4189)

* checkpoint progress

* chkpt

* checkpoint progress

* put pipeline in its own file

* remove unused imports

* add test, it's failing though

* fix test

* remove head state issue

* add clear db flag to e2e

* add some more error handling, debug logging

* skip processing if chain has not started

* fix test

* wrap in go routine to see if anything breaks

* remove duplicated topic

* Add a regression test. Thanks @nisdas for finding the original problem. May it never happen again *fingers crossed*

* Comments

* gofmt

* comment out with TODO
* Sync with master
* Sync with master
* RPC servers use attestation pool (#4223)
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Refactor RPC to Fully Utilize Ethereum APIs (#4243)

* include attester as a file in the validator server

* remove old proposer server impl

* include new patch and properly sync changes

* align with public pbs

* ensure matches rpc def

* fix up status tests

* resolve all broken test files in the validator rpc package

* gazelle include

* fix up the duties implementation

* fixed up all get duties functions

* all tests pass

* utilize new ethereum apis

* amend validator client to use the new beacon node validator rpc client

* fix up most of validator items

* added in mock

* fix up test

* readd test

* add chain serv mock

* fix a few more validator methods

* all validator tests passingggg

* fix broken test

* resolve even more broken tests

* all tests passsssss

* fix lint

* try PR

* fix up test

* resolve broken other tests
* Sync with master
* Merge branch 'v0.9.2' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Aggregate and proof subscriber (#4240)

* Added subscribers

* Fixed conflict

* Tests

* fix up patch

* Use upstream pb

* include latest patch

* Fmt

* Save state before head block
* skip tests (#4275)
* Delete block attestations from the pool (#4241)

* Added subscribers
* Clean up
* Fixed conflict
* Delete atts in pool in validate pipeline
* Moved it to subscriber
* Merge branch 'v0.9.2' of https://github.com/prysmaticlabs/prysm into use-att-pool-3
* Test
* Fixed test
* Initial work on voluntary exit (#4207)

* Initial implementation of voluntary exit: RPC call

* Update for recent merges

* Break out validation logic for voluntary exits to core module

* RequestExit -> ProposeExit

* Decrease exit package visibility

* Move to operation feed

* Wrap errors
* Fix critical proposer selection bug #4259 (#4265)

* fix critical proposer selection bug #4259

* gofmt

* add 1 more validator to make it 5

* more tests

* Fixed archivedProposerIndex

* Fixed TestFilterAttestation_OK

* Refactor ComputeProposerIndex, add regression test for potential out of range panic

* handle case of nil validator

* Update validators_test.go
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Leftover merge files, oops
* gaz
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into v0.9.2
* Fixes Duplicate Validator Bug (#4322)

* Update dict

* Test helper

* Regression test

* Comment

* Reset test cache
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* fixes after PR #4328
* Complete attestation pool for run time (#4286)

* Added subscribers

* Fixed conflict

* Delete atts in pool in validate pipeline

* Moved it to subscriber

* Test

* Fixed test

* New curl for forkchoice attestations

* Starting att pool service for fork choice

* Update pool interface

* Update pool interface

* Update sync and node

* Lint

* Gazelle

* Updated servers, filled in missing functionalities

* RPC working with 1 beacon node 64 validators

* Started writing tests. Yay

* Test to aggregate and save multiple fork choice atts

* Tests for BatchAttestations for fork choice

* Fixed existing tests

* Minor fixes

* Fmt

* Added batch saves

* Lint

* Mo tests yay

* Delete test

* Fmt

* Update interval

* Fixed aggregation broadcast

* Clean up based on design review comment

* Fixed setupBeaconChain

* Raul's feedback. s/error/err
* resolve conflicts
* Merge branch 'v0.9.2' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge refs/heads/master into v0.9.2
* Removed old protos and fixed tests (#4336)
* Merge refs/heads/master into v0.9.2
* Disallow duplicated indices and test (#4339)
* Explicit use of GENESIS_SLOT in fork choice (#4343)
* Update from 2 to 3 (#4345)
* Remove verify unaggregated attestation when aggregating (#4347)
* use slot ticker instead of run every (#4348)
* Add context check for unbounded loop work (#4346)
* Revert "Explicit use of GENESIS_SLOT in fork choice (#4343)" (#4349)

This reverts commit d3f6753c77.
* Refactor Powchain Service (#4306)

* add data structures

* generate proto

* add in new fields

* add comments

* add new mock state

* add new mock state

* add new methods

* some more changes

* check genesis time properly

* lint

* fix refs

* fix tests

* lint

* lint

* lint

* gaz

* fix lint

* raul's comments

* use one method

* fix test

* raul's comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Ensure best better-justification is stored for fork choice (#4342)

* Ensure best better-justification is stored. Minor refactor
* Tests
* Merge refs/heads/v0.9.2 into better-best-justified
* Merge refs/heads/v0.9.2 into better-best-justified
* Ensure that epoch of attestation slot matches the target epoch (#4341)

* Disallow duplicated indices and test
* Add slot to target epoch check to on_attestation
* Add slot to target epoch check to process_attestation
* Merge branch 'v0.9.2' of git+ssh://github.com/prysmaticlabs/prysm into no-dup-att-indices
* Fixed TestProcessAttestations_PrevEpochFFGDataMismatches
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Update beacon-chain/blockchain/forkchoice/process_attestation_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Filter viable branches in fork choice (#4355)
* Only activate upon finality (#4359)

* Updated functions
* Tests
* Merge branch 'v0.9.2' of git+ssh://github.com/prysmaticlabs/prysm into queue-fix-on-finality
* Comment
* Merge refs/heads/v0.9.2 into queue-fix-on-finality
* Fixed failing test from 4359 (#4360)

* Fixed
* Skip registry spec tests
* Wait for state to be initialized at least once before running slot ticker based on genesis time (#4364)
* Sync with master
* Fix checkpoint root to use genesis block root (#4368)
* Return an error on nil head state in fork choice (#4369)

* Return error if nil head state

* Fixed tests. Saved children blocks state

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
* Update metrics every epoch (#4367)
* return empty slice if state is nil (#4365)
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge refs/heads/master into v0.9.2
* Pubsub: Broadcast attestations to committee based subnets (#4316)

* Working on un-aggregated pubsub topics

* update subscriber to call pool

* checkpointing

* fix

* untested message validation

* minor fixes

* rename slotsSinceGenesis to slotsSince

* some progress on a unit test, subscribe is not being called still...

* dont change topic

* need to set the data on the message

* restore topic

* fixes

* some helpful parameter changes for mainnet operations

* lint

* Terence feedback

* unskip e2e

* Unit test for validate committee index beacon attestation

* PR feedback

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into resolveConflicts
* remove condition
* Remove unused operation pool (#4361)
* Merge refs/heads/master into v0.9.2
* Aggregate attestations periodically (#4376)
* Persist ETH1 Data to Disk (#4329)

* add data structures

* generate proto

* add in new fields

* add comments

* add new mock state

* add new mock state

* add new methods

* some more changes

* check genesis time properly

* lint

* fix refs

* fix tests

* lint

* lint

* lint

* gaz

* adding in new proto message

* remove outdated vars

* add new changes

* remove latest eth1data

* continue refactoring

* finally works

* lint

* fix test

* fix all tests

* fix all tests again

* fix build

* change back

* add full eth1 test

* fix logs and test

* add constant

* changes

* fix bug

* lint

* fix another bug

* change back

* Apply suggestions from code review

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
* Fixed VerifyIndexedAttestation (#4382)
* rm signing root (#4381)

* rm signing root

* Fixed VerifyIndexedAttestation

* Check proposer slashed status inside ProcessBlockHeaderNoVerify

* Fixed TestUpdateJustified_CouldUpdateBest

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Remove Redundant Trie Generation (#4383)

* remove trie generation
* remove deposit hashes
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into v0.9.2
* fix build
* Conflict
* Implement StreamAttestations RPC Endpoint (#4390)

* started attestation stream

* stream attestations test

* on slot tick test passing

* imports

* gaz

* Update beacon-chain/rpc/beacon/attestations_test.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

Co-authored-by: shayzluf <thezluf@gmail.com>
* Fixed goimport (#4394)
* Use custom stateutil ssz for ssz HTR spec tests (#4396)

* Use custom stateutil ssz for ssz HTR spec tests

* gofmt
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge refs/heads/master into v0.9.2
* set mainnet to be the default for build and run (#4398)

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* gracefully handle deduplicated registration of topic validators (#4399)

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* SSZ: temporarily disable roots cache until cache issues can be resolved (#4407)

* temporarily disable roots cache until cache issues can be resolved

* Also use custom ssz for spectests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Remove process block attestations as separate routine (#4408)

* Removed old save/process block atts

* Fixed tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Save Deposit Cache to Disk (#4384)

* change to protos

* fix build

* glue everything together

* fix test

* raul's review

* preston's comments

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Fix activation queue sorting (#4409)

* Removed old save/process block atts

* Fixed tests

* Proper sorting by eligibility epoch then by indices

* Deleted old code
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge branch 'master' into v0.9.2
* Merge refs/heads/master into v0.9.2
* stop recursive lookup if context is cancelled (#4420)
* Fix proposal bug (#4419)
* Add Pending Deposits Safely (#4422)

* safely prune cache

* use proper method

* preston's, terence's reviews and comments

* revert change to build files

* use as feature config instead
* Release custom state ssz (#4421)

* Release custom state ssz, change all HTR of beacon state to use custom method

* typo

* use mainnet config

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Define framework
* Use participation fetcher
* Build
* Fixed all tests
* Lint
* Update initial sync save justified to align with v0.9.3 (#4432)
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* fix build
* don't blacklist on pubsub (#4435)
* Fix Flakey Slot Ticker Test (#4434)

* use interface instead for the slot ticker

* fixed up flakey tests

* add gen time

* get duties comment

* fix lifecycle test

* more fixes
* Fixed rest of the test
* Pass in correct chain service
* Pass in another chain service
* Run time
* Configurable min genesis delay (#4437)

* Configurable min genesis delay based on https://github.com/ethereum/eth2.0-specs/pull/1557

* remove feature flag for genesis delay

* fix

* demo config feedback
* Current -> Prev
* Tests
* patch readme
* save keys unencrypted for validators (#4439)
* Add new demo configuration targeting mainnet scale (#4397)

* Add new demo configuration targeting mainnet, with 1/10th of the deposit value

* reduce quotient by 1/10th. Use 1/10th mainnet values

* only change the inactivity quotient

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Save justified checkpoint state (#4433)

* Save justified checkpoint state

* Lint

* Feedback

* Fixed test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Update shared/testutil/deposits.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update proto/testing/ssz_regression_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/core/epoch/epoch_processing.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/kv/forkchoice.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/pool.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/pool.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/powchain/log_processing_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/service.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/sync/subscriber_beacon_blocks_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/sync/subscriber_beacon_blocks_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/sync/subscriber.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/sync/subscriber.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/proposer.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/prepare_forkchoice.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/powchain/log_processing_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/pool.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/powchain/log_processing_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/aggregator/server.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/cache/depositcache/pending_deposits.go
* Update beacon-chain/cache/depositcache/pending_deposits_test.go
* Update beacon-chain/rpc/validator/proposer.go
* Merge refs/heads/master into v0.9.2
* Update test
* Conflict
* Update beacon-chain/blockchain/chain_info.go
* Conflict
* Merge branch 'efficient-participation' of git+ssh://github.com/prysmaticlabs/prysm into efficient-participation
* Merge refs/heads/master into efficient-participation
2020-01-07 19:28:25 +00:00
Preston Van Loon
2182e1cdc9 Fix pk manager db (#4447)
* fix pk manager db
2020-01-07 19:19:40 +00:00
terence tsao
6d2a2ebadf Update run time to v0.9.3 (#4154)
* Remove custody (#3986)

* Update proto fields

* Updated block operations

* Fixed all block operation tests

* Fixed tests part 1

* Fixed tests part 1

* All tests pass

* Clean up

* Skip spec test

* Fixed ssz test

* Skip ssz test

* Skip mainnet tests

* Update beacon-chain/operations/attestation.go

* Update beacon-chain/operations/attestation.go
* Decoy flip flop check (#3987)
* Bounce attack check (#3989)

* New store values

* Update process block

* Update process attestation

* Update tests

* Helper

* Fixed blockchain package tests

* Update beacon-chain/blockchain/forkchoice/process_block.go
* Conflict
* Unskip mainnet spec tests (#3998)

* Starting

* Fixed attestation mainnet test

* Unskip ssz static and block processing tests

* Fixed workspace

* fixed workspace

* fixed workspace

* Update beacon-chain/core/blocks/block_operations.go
* Unskip minimal spec tests (#3999)

* Starting

* Fixed attestation mainnet test

* Unskip ssz static and block processing tests

* Fixed workspace

* fixed workspace

* fixed workspace

* Update workspace

* Unskip all minimal spec tests

* Update workspace for general test
* Unskip test (#4001)
* Update minimal seconds per slot to 6 (#3978)
* Bounce attack tests (#3993)

* New store values

* Update process block

* Update process attestation

* Update tests

* Helper

* Fixed blockchain package tests

* Slots since epoch starts tests

* Update justified checkpt tests

* Conflict

* Fixed logic

* Update process_block.go

* Use helper
* Conflict
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into v0.9.1
* Conflict
* Fixed failed tests
* Lower MinGenesisActiveValidatorCount to 16384 (#4100)
* Fork choice beacon block checks (#4107)

* Prevent future blocks check and test

* Removed old code
* Update aggregation proto (#4121)

* Update def
* Update spec test
* Conflict
* Update workspace
* patch
* Resolve conflict
* Patch
* Change workspace
* Update ethereumapis to a forked branch at commit 6eb1193e47
* Fixed all the tests
* Merge branch 'v0.9.2' of https://github.com/prysmaticlabs/prysm into conflict
* fix patch
* Need to regenerate test data
* Merge branch 'master' into v0.9.2
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Enable snappy compression for all (#4157)

* enable snappy compression for all
* enable snappy compression for all
* enable snappy compression for all
* enable snappy compression for all
* Validate aggregate and proof subscriber (#4159)
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Conflict
* Update workspace
* Conflict
* Conflict
* Conflict
* Merge branch 'master' into v0.9.2
* Merge branch 'master' into v0.9.2
* Conflict
* Merge branch 'v0.9.2' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Remove migrate to snappy (#4205)
* Feature flag: Deprecate --prune-states, release to all (#4204)

* Deprecated prune-states, release to all

* imports

* remove unused import

* remove unused import

* Rm prune state test

* gaz
* Refactoring for dynamic pubsub subscriptions for non-aggregated attestations (#4189)

* checkpoint progress

* chkpt

* checkpoint progress

* put pipeline in its own file

* remove unused imports

* add test, it's failing though

* fix test

* remove head state issue

* add clear db flag to e2e

* add some more error handling, debug logging

* skip processing if chain has not started

* fix test

* wrap in go routine to see if anything breaks

* remove duplicated topic

* Add a regression test. Thanks @nisdas for finding the original problem. May it never happen again *fingers crossed*

* Comments

* gofmt

* comment out with TODO
* Sync with master
* Sync with master
* RPC servers use attestation pool (#4223)
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Refactor RPC to Fully Utilize Ethereum APIs (#4243)

* include attester as a file in the validator server

* remove old proposer server impl

* include new patch and properly sync changes

* align with public pbs

* ensure matches rpc def

* fix up status tests

* resolve all broken test files in the validator rpc package

* gazelle include

* fix up the duties implementation

* fixed up all get duties functions

* all tests pass

* utilize new ethereum apis

* amend validator client to use the new beacon node validator rpc client

* fix up most of validator items

* added in mock

* fix up test

* readd test

* add chain serv mock

* fix a few more validator methods

* all validator tests passingggg

* fix broken test

* resolve even more broken tests

* all tests passsssss

* fix lint

* try PR

* fix up test

* resolve broken other tests
* Sync with master
* Merge branch 'v0.9.2' of https://github.com/prysmaticlabs/prysm into v0.9.2
* Aggregate and proof subscriber (#4240)

* Added subscribers

* Fixed conflict

* Tests

* fix up patch

* Use upstream pb

* include latest patch

* Fmt

* Save state before head block
* skip tests (#4275)
* Delete block attestations from the pool (#4241)

* Added subscribers
* Clean up
* Fixed conflict
* Delete atts in pool in validate pipeline
* Moved it to subscriber
* Merge branch 'v0.9.2' of https://github.com/prysmaticlabs/prysm into use-att-pool-3
* Test
* Fixed test
* Initial work on voluntary exit (#4207)

* Initial implementation of voluntary exit: RPC call

* Update for recent merges

* Break out validation logic for voluntary exits to core module

* RequestExit -> ProposeExit

* Decrease exit package visibility

* Move to operation feed

* Wrap errors
* Fix critical proposer selection bug #4259 (#4265)

* fix critical proposer selection bug #4259

* gofmt

* add 1 more validator to make it 5

* more tests

* Fixed archivedProposerIndex

* Fixed TestFilterAttestation_OK

* Refactor ComputeProposerIndex, add regression test for potential out of range panic

* handle case of nil validator

* Update validators_test.go
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Leftover merge files, oops
* gaz
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into v0.9.2
* Fixes Duplicate Validator Bug (#4322)

* Update dict

* Test helper

* Regression test

* Comment

* Reset test cache
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* fixes after PR #4328
* Complete attestation pool for run time (#4286)

* Added subscribers

* Fixed conflict

* Delete atts in pool in validate pipeline

* Moved it to subscriber

* Test

* Fixed test

* New curl for forkchoice attestations

* Starting att pool service for fork choice

* Update pool interface

* Update pool interface

* Update sync and node

* Lint

* Gazelle

* Updated servers, filled in missing functionalities

* RPC working with 1 beacon node 64 validators

* Started writing tests. Yay

* Test to aggregate and save multiple fork choice atts

* Tests for BatchAttestations for fork choice

* Fixed existing tests

* Minor fixes

* Fmt

* Added batch saves

* Lint

* Mo tests yay

* Delete test

* Fmt

* Update interval

* Fixed aggregation broadcast

* Clean up based on design review comment

* Fixed setupBeaconChain

* Raul's feedback. s/error/err
* resolve conflicts
* Merge branch 'v0.9.2' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge refs/heads/master into v0.9.2
* Removed old protos and fixed tests (#4336)
* Merge refs/heads/master into v0.9.2
* Disallow duplicated indices and test (#4339)
* Explicit use of GENESIS_SLOT in fork choice (#4343)
* Update from 2 to 3 (#4345)
* Remove verify unaggregated attestation when aggregating (#4347)
* use slot ticker instead of run every (#4348)
* Add context check for unbounded loop work (#4346)
* Revert "Explicit use of GENESIS_SLOT in fork choice (#4343)" (#4349)

This reverts commit d3f6753c77.
* Refactor Powchain Service (#4306)

* add data structures

* generate proto

* add in new fields

* add comments

* add new mock state

* add new mock state

* add new methods

* some more changes

* check genesis time properly

* lint

* fix refs

* fix tests

* lint

* lint

* lint

* gaz

* fix lint

* raul's comments

* use one method

* fix test

* raul's comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Ensure best better-justification is stored for fork choice (#4342)

* Ensure best better-justification is stored. Minor refactor
* Tests
* Merge refs/heads/v0.9.2 into better-best-justified
* Merge refs/heads/v0.9.2 into better-best-justified
* Ensure that epoch of attestation slot matches the target epoch (#4341)

* Disallow duplicated indices and test
* Add slot to target epoch check to on_attestation
* Add slot to target epoch check to process_attestation
* Merge branch 'v0.9.2' of git+ssh://github.com/prysmaticlabs/prysm into no-dup-att-indices
* Fixed TestProcessAttestations_PrevEpochFFGDataMismatches
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Update beacon-chain/blockchain/forkchoice/process_attestation_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>
* Merge refs/heads/v0.9.2 into no-dup-att-indices
* Filter viable branches in fork choice (#4355)
* Only activate upon finality (#4359)

* Updated functions
* Tests
* Merge branch 'v0.9.2' of git+ssh://github.com/prysmaticlabs/prysm into queue-fix-on-finality
* Comment
* Merge refs/heads/v0.9.2 into queue-fix-on-finality
* Fixed failing test from 4359 (#4360)

* Fixed
* Skip registry spec tests
* Wait for state to be initialized at least once before running slot ticker based on genesis time (#4364)
* Sync with master
* Fix checkpoint root to use genesis block root (#4368)
* Return an error on nil head state in fork choice (#4369)

* Return error if nil head state

* Fixed tests. Saved children blocks state

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
* Update metrics every epoch (#4367)
* return empty slice if state is nil (#4365)
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge refs/heads/master into v0.9.2
* Pubsub: Broadcast attestations to committee based subnets (#4316)

* Working on un-aggregated pubsub topics

* update subscriber to call pool

* checkpointing

* fix

* untested message validation

* minor fixes

* rename slotsSinceGenesis to slotsSince

* some progress on a unit test, subscribe is not being called still...

* dont change topic

* need to set the data on the message

* restore topic

* fixes

* some helpful parameter changes for mainnet operations

* lint

* Terence feedback

* unskip e2e

* Unit test for validate committee index beacon attestation

* PR feedback

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into resolveConflicts
* remove condition
* Remove unused operation pool (#4361)
* Merge refs/heads/master into v0.9.2
* Aggregate attestations periodically (#4376)
* Persist ETH1 Data to Disk (#4329)

* add data structures

* generate proto

* add in new fields

* add comments

* add new mock state

* add new mock state

* add new methods

* some more changes

* check genesis time properly

* lint

* fix refs

* fix tests

* lint

* lint

* lint

* gaz

* adding in new proto message

* remove outdated vars

* add new changes

* remove latest eth1data

* continue refactoring

* finally works

* lint

* fix test

* fix all tests

* fix all tests again

* fix build

* change back

* add full eth1 test

* fix logs and test

* add constant

* changes

* fix bug

* lint

* fix another bug

* change back

* Apply suggestions from code review

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
* Fixed VerifyIndexedAttestation (#4382)
* rm signing root (#4381)

* rm signing root

* Fixed VerifyIndexedAttestation

* Check proposer slashed status inside ProcessBlockHeaderNoVerify

* Fixed TestUpdateJustified_CouldUpdateBest

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Remove Redundant Trie Generation (#4383)

* remove trie generation
* remove deposit hashes
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into v0.9.2
* fix build
* Conflict
* Implement StreamAttestations RPC Endpoint (#4390)

* started attestation stream

* stream attestations test

* on slot tick test passing

* imports

* gaz

* Update beacon-chain/rpc/beacon/attestations_test.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

Co-authored-by: shayzluf <thezluf@gmail.com>
* Fixed goimport (#4394)
* Use custom stateutil ssz for ssz HTR spec tests (#4396)

* Use custom stateutil ssz for ssz HTR spec tests

* gofmt
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge refs/heads/master into v0.9.2
* set mainnet to be the default for build and run (#4398)

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* gracefully handle deduplicated registration of topic validators (#4399)

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* SSZ: temporarily disable roots cache until cache issues can be resolved (#4407)

* temporarily disable roots cache until cache issues can be resolved

* Also use custom ssz for spectests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Remove process block attestations as separate routine (#4408)

* Removed old save/process block atts

* Fixed tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Save Deposit Cache to Disk (#4384)

* change to protos

* fix build

* glue everything together

* fix test

* raul's review

* preston's comments

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Fix activation queue sorting (#4409)

* Removed old save/process block atts

* Fixed tests

* Proper sorting by eligibility epoch then by indices

* Deleted old code
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Merge branch 'master' into v0.9.2
* Merge refs/heads/master into v0.9.2
* stop recursive lookup if context is cancelled (#4420)
* Fix proposal bug (#4419)
* Add Pending Deposits Safely (#4422)

* safely prune cache

* use proper method

* preston's, terence's reviews and comments

* revert change to build files

* use as feature config instead
* Release custom state ssz (#4421)

* Release custom state ssz, change all HTR of beacon state to use custom method

* typo

* use mainnet config

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.9.2
* Update initial sync save justified to align with v0.9.3 (#4432)
* Merge refs/heads/master into v0.9.2
* Merge refs/heads/master into v0.9.2
* fix build
* don't blacklist on pubsub (#4435)
* Fix Flakey Slot Ticker Test (#4434)

* use interface instead for the slot ticker

* fixed up flakey tests

* add gen time

* get duties comment

* fix lifecycle test

* more fixes
* Configurable min genesis delay (#4437)

* Configurable min genesis delay based on https://github.com/ethereum/eth2.0-specs/pull/1557

* remove feature flag for genesis delay

* fix

* demo config feedback
* patch readme
* save keys unencrypted for validators (#4439)
* Add new demo configuration targeting mainnet scale (#4397)

* Add new demo configuration targeting mainnet, with 1/10th of the deposit value

* reduce quotient by 1/10th. Use 1/10th mainnet values

* only change the inactivity quotient

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Save justified checkpoint state (#4433)

* Save justified checkpoint state

* Lint

* Feedback

* Fixed test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Update shared/testutil/deposits.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update proto/testing/ssz_regression_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/core/epoch/epoch_processing.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/kv/forkchoice.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/pool.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/pool.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/powchain/log_processing_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/service.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/sync/subscriber_beacon_blocks_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/sync/subscriber_beacon_blocks_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/sync/subscriber.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/sync/subscriber.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/proposer.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/prepare_forkchoice.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/powchain/log_processing_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/operations/attestations/pool.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/powchain/log_processing_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/aggregator/server.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/rpc/validator/exit_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/cache/depositcache/pending_deposits.go
* Update beacon-chain/cache/depositcache/pending_deposits_test.go
* Update beacon-chain/rpc/validator/proposer.go
* Merge refs/heads/master into v0.9.2
* Fix e2e genesis delay issues (#4442)

* fix e2e genesis delay issues

* register flag

* typo

* Update shared/featureconfig/config.go

Co-Authored-By: Nishant Das <nishdas93@gmail.com>

* Apply suggestions from code review

Co-Authored-By: Nishant Das <nishdas93@gmail.com>

* skip demo e2e

* fix validator

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Nishant Das <nish1993@hotmail.com>
Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
* Batch Eth1 RPC Calls (#4392)

* add new methods

* get it working

* optimize past deposit logs processing

* revert change

* fix all tests

* use mock

* lint

* lint

* check for nil

* stop panics

* Apply suggestions from code review

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Terence's Review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-01-07 18:47:39 +00:00
Ivan Martinez
9aed0034ec Vastly improve E2E logs and add README (#4440)
* Improve E2E logs to help debugging
* Add README to E2E
* Remove newline logs
* Remove removedb
* Try releasing after killing process
* Fix validator output
* Fix e2e
* Solve eth1 issue by clearing eth1 db
* Whoops
* Fix log spacing
2020-01-07 17:00:51 +00:00
terence tsao
f764522cbe Log warn and cont if validator pub key not exist in DB (#4429)
* log warn and cont
* assignment
* fixed
* Merge refs/heads/master into log-warn-cont
2020-01-06 21:16:36 +00:00
Ivan Martinez
c7ae03e1b2 E2E cleanup and fix ETH1 chain startup (#4431)
* E2E cleanup and fixes

* Fix build issue
2020-01-06 14:50:36 -06:00
Preston Van Loon
4efc0f5286 Require a state to exist to save justified checkpoint (#4423)
* Add validation to save justified checkpoint in db
* gofmt
2020-01-06 14:41:51 +00:00
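
The commit above only states the rule; below is a minimal sketch of that kind of guard, assuming a simplified checkpoint type and database interface (placeholders, not Prysm's actual types): refuse to persist a justified checkpoint unless a state for its root already exists.

```go
// Sketch only: Checkpoint and checkpointDB are simplified stand-ins.
package blockchain

import (
	"context"
	"errors"
)

type Checkpoint struct {
	Epoch uint64
	Root  [32]byte
}

type checkpointDB interface {
	HasState(ctx context.Context, blockRoot [32]byte) bool
	SaveJustifiedCheckpoint(ctx context.Context, cp *Checkpoint) error
}

// saveJustifiedCheckpoint persists cp only if the state for its root is known.
func saveJustifiedCheckpoint(ctx context.Context, db checkpointDB, cp *Checkpoint) error {
	if !db.HasState(ctx, cp.Root) {
		return errors.New("no state exists for justified checkpoint root")
	}
	return db.SaveJustifiedCheckpoint(ctx, cp)
}
```
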
Preston Van Loon
9052620453 Release feature --fast-assignments (#4416)
* Deprecated --fast-assignments
* gaz
* Merge branch 'master' of github.com:prysmaticlabs/prysm into release-fast-assignments
2020-01-06 02:57:52 +00:00
Preston Van Loon
0174397f6e Release --enable-bls-pubkey-cache (#4417)
* Release bls pubkey-cache
2020-01-05 20:08:49 +00:00
terence tsao
9f5caf8fea total and target balances metrics (#4414) 2020-01-05 11:12:48 -08:00
Nishant Das
3b3f2c78e2 unskip test (#4411) 2020-01-05 13:39:14 +08:00
Ivan Martinez
242e4bccbf Move confirmDelete from beacon-chain to shared/cmd (#4410)
* Move confirmDelete to shared/cmd as ConfirmAction

* Finish moving function to shared/cmd

* Pass in both text

* Fix for comments
2020-01-04 23:32:09 -05:00
terence tsao
59ab89c98a Validator caches index (#4406) 2020-01-04 11:50:16 -08:00
terence tsao
ac768207ac Safe delete states (#4401)
* Filter block roots by finalization and head
* Tests
* Comments
* Merge branch 'master' into safe-delete-states
* Fixed existing tests
* Merge branch 'safe-delete-states' of git+ssh://github.com/prysmaticlabs/prysm into safe-delete-states
* Merge refs/heads/master into safe-delete-states
* Merge refs/heads/master into safe-delete-states
* Merge refs/heads/master into safe-delete-states
* Merge refs/heads/master into safe-delete-states
2020-01-04 19:20:20 +00:00
terence tsao
77d41024dc Revert "Use poststate for calculating att votes (#4395)" (#4404) 2020-01-04 09:25:42 -08:00
Preston Van Loon
f03083f6c8 PK manager: don't panic on bad key (#4405)
* don't panic on bad key
2020-01-04 05:32:47 +00:00
Jim McDonald
5ff9ae2108 Validator keymanager refactor (#4340)
* Move to keymanager
* Move to keymanager
* Merge branch 'keymanager' of github.com:mcdee/prysm into keymanager
* Lint
* Fix visibility
* Bazel fix
* Merge remote-tracking branch 'upstream/master' into keymanager
* logrus->log
* Merge branch 'master' into keymanager
* Merge remote-tracking branch 'upstream/master' into keymanager
* Merge branch 'master' into keymanager
* Merge branch 'master' into keymanager
* Merge branch 'master' into keymanager
* Merge branch 'master' into keymanager
* Fix test after merge
* Merge branch 'master' into keymanager
* And again
2020-01-04 03:51:53 +00:00
terence tsao
5fa03edb29 Update committee cache prev epoch (#4402)
* Update cache base on input epoch, not state epoch
* Tests
* Fixed benchmarks
* Use epochs
* One more
2020-01-03 23:47:54 +00:00
Preston Van Loon
ebe4c9c971 Use a single lock for arrays cache (#4400)
* use a single lock for arrays cache
* Merge refs/heads/master into one-lock
2020-01-03 20:24:19 +00:00
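
A minimal sketch of the pattern named in "Use a single lock for arrays cache" (#4400): one `sync.RWMutex` guards every entry instead of per-array locks, trading a little contention for much simpler invariants. The cache shape and names are illustrative, not Prysm's actual cache.

```go
package cache

import "sync"

// balancesCache guards all entries with a single RWMutex.
type balancesCache struct {
	lock    sync.RWMutex
	entries map[[32]byte][]uint64
}

func newBalancesCache() *balancesCache {
	return &balancesCache{entries: make(map[[32]byte][]uint64)}
}

// Get returns the cached slice for root, if present.
func (c *balancesCache) Get(root [32]byte) ([]uint64, bool) {
	c.lock.RLock()
	defer c.lock.RUnlock()
	v, ok := c.entries[root]
	return v, ok
}

// Put stores balances under root.
func (c *balancesCache) Put(root [32]byte, balances []uint64) {
	c.lock.Lock()
	defer c.lock.Unlock()
	c.entries[root] = balances
}
```
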
Preston Van Loon
6efe5ef496 Slot ticker: panic on zero genesis time given (#4366)
* panic on zero genesis time given

* fix test

* fix test

* fix test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-03 13:42:35 -06:00
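
"Slot ticker: panic on zero genesis time given" (#4366) describes a fail-fast guard: constructing a ticker with an unset genesis time is a programming error, so fail loudly instead of silently ticking from the Unix epoch. Below is a sketch of that idea with an illustrative ticker, not Prysm's real slotutil implementation.

```go
package slotutil

import "time"

// SlotTicker sends the slot number on C at the start of each slot.
type SlotTicker struct {
	C    chan uint64
	done chan struct{}
}

// NewSlotTicker panics on a zero genesis time rather than producing nonsense slots.
func NewSlotTicker(genesisTime time.Time, secondsPerSlot uint64) *SlotTicker {
	if genesisTime.IsZero() {
		panic("zero genesis time")
	}
	t := &SlotTicker{C: make(chan uint64), done: make(chan struct{})}
	go t.start(genesisTime, secondsPerSlot)
	return t
}

// Stop shuts the ticker down.
func (t *SlotTicker) Stop() { close(t.done) }

func (t *SlotTicker) start(genesisTime time.Time, secondsPerSlot uint64) {
	d := time.Duration(secondsPerSlot) * time.Second
	for slot := uint64(0); ; slot++ {
		nextTick := genesisTime.Add(time.Duration(slot+1) * d)
		select {
		case <-time.After(time.Until(nextTick)):
			select {
			case t.C <- slot + 1:
			case <-t.done:
				return
			}
		case <-t.done:
			return
		}
	}
}
```
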
Celeste Ariana Seberras
fbbf5514d1 Syncing gitbook information with README (#4323)
* Syncing gitbook information with README

Updated to match https://prysmaticlabs.gitbook.io/prysm/
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge branch 'master' into master
* Curl readded
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge branch 'master' into master
2020-01-03 19:05:16 +00:00
terence tsao
220af25bce Use poststate for calculating att votes (#4395)
* Use poststate for votes
* Merge branch 'master' into use-post-state
* Merge branch 'master' into use-post-state
2020-01-03 18:29:26 +00:00
Nishant Das
c9252c06c4 Change Skip Slot Cache Key (#4391)
* use different cache key

* add build

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-01-03 11:44:48 -06:00
Jim McDonald
2c565f5d59 Harden BLS (#4393) 2020-01-03 07:34:15 -08:00
Ivan Martinez
1cb58e859e Add protos for validator proposal slashing protection (#4387)
* Add protos for validator proposal protection
* Fix formatting
* Fix formatting
* Rename protos
* remove extra line
2020-01-03 02:41:31 +00:00
terence tsao
d26839c1f2 Add aggregator indices to logs (#4385)
* Add validator_log.go

* Use new logging scheme

* Go fmt

* Better name

* Tests

* Tests

* Add wg.done, moved logging before span end

* Add aggregator indices to submit attestation log

* Rename

* Fixed test

* Add proposer index

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-02 17:45:48 -06:00
terence tsao
2cb8430ad4 Enhance attester logging (#4380)
* Add validator_log.go
* Use new logging scheme
* Go fmt
* Better name
* Tests
* Tests
* Merge refs/heads/master into better-logging
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into better-logging
* Add wg.done, moved logging before span end
* Merge branch 'better-logging' of git+ssh://github.com/prysmaticlabs/prysm into better-logging
2020-01-02 17:04:07 +00:00
Nishant Das
03356fc7b5 Add Ability to Resync Node (#4279)
* add resyncing functionality

* add more validation to status message

* lint and build

* jim's review

* preston's review

* clean up

* remove log

* remove no sync

* change again

* change back

* remove spaces

* Update shared/slotutil/slottime.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Apply suggestions from code review

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* fix refs

* raul's review

* goimports

* goimports

* add counter

* removed condition

* change back

* gaz

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-02 16:09:28 +08:00
Preston Van Loon
bdc4045e23 Add eth1data deposit count metric (#4374) 2019-12-30 08:34:46 -08:00
Preston Van Loon
dc1bd1ef62 Revert 4372 and 4373 (#4375)
* Revert "only add chain start deposits up to min genesis active validator count (#4373)"

This reverts commit 35380dd9bf.
* Revert "Return an error if the wrong number of deposits are provided for genesis state (#4372)"

This reverts commit 9674575892.
2019-12-30 01:21:08 +00:00
Preston Van Loon
35380dd9bf only add chain start deposits up to min genesis active validator count (#4373) 2019-12-29 16:27:17 -08:00
Preston Van Loon
9674575892 Return an error if the wrong number of deposits are provided for genesis state (#4372)
* Return an error if the wrong number of deposits are provided for genesis state
* add regression test
2019-12-29 20:10:23 +00:00
Nishant Das
b7d0d7cbb6 Shift Deposit Contract Tools (#4357)
* move tools
* Merge refs/heads/master into shiftTools
* Merge refs/heads/master into shiftTools
2019-12-27 00:41:43 +00:00
Nishant Das
28eadac172 Fix Deposit Log Processing (#4352)
* fix log processing
* Merge branch 'master' into fixLogs
* Merge refs/heads/master into fixLogs
2019-12-26 17:44:56 +00:00
Nishant Das
d5181496c4 Add Docker image for slasher (#4356)
* add docker image for slasher

* load docker rules

* change to c base image

* switch off pure builds
2019-12-26 10:53:27 -06:00
Nishant Das
b337a5720c Handle Pubsub Panics (#4350)
* handle panics
* lint
* gaz
* preston's review
2019-12-24 04:59:08 +00:00
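
"Handle Pubsub Panics" (#4350) suggests wrapping topic handlers so a single malformed message cannot crash the node. A hedged sketch of such a wrapper (the handler signature is an assumption, not Prysm's interface):

```go
package sync

import (
	"context"
	"errors"
	"runtime/debug"

	"github.com/sirupsen/logrus"
)

type subHandler func(ctx context.Context, msg interface{}) error

// wrapAndRecover converts a panic in the wrapped handler into a logged error.
func wrapAndRecover(h subHandler) subHandler {
	return func(ctx context.Context, msg interface{}) (err error) {
		defer func() {
			if r := recover(); r != nil {
				logrus.WithField("panic", r).Error("Recovered from panic while handling a pubsub message")
				debug.PrintStack()
				err = errors.New("message handler panicked")
			}
		}()
		return h(ctx, msg)
	}
}
```
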
terence tsao
53b8eb57ee Fuzz ProcessFinalUpdates (#4308) 2019-12-23 10:27:16 -08:00
terence tsao
30b4b045f5 Add justified check points to chain info getters (#4335)
* Add justified checkpoint getters

* Use it for chainhead

* Mock

* Fixed tests

* Fixed TestServer_StreamChainHead_OnHeadUpdated

* Caught a run time bug. Fixed

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2019-12-22 00:14:23 -06:00
Ivan Martinez
ec1e7ae005 Remove proto/sharding and move slashing to own dir (#4332)
* Clean proto and move slasher proto to own folder

* Change package name to match files

* Fix typo

* Fix tests

* Undo out of scope changes

* Run gazelle

* Fix build.bazel

* goimports
2019-12-20 21:47:00 -06:00
Preston Van Loon
a949673e33 Pubsub ignore messages from yourself (#4337)
* ignore messages from myself
2019-12-20 19:37:25 +00:00
terence tsao
996f4c7f5a Clean up list beacon committees to not use head state (#4333) 2019-12-20 08:02:12 -08:00
Valentin Mihov
3915a6e15a Fix the links in the TOC (#4334)
The links for running the client were pointing to the wrong sections.
2019-12-20 18:44:03 +08:00
Preston Van Loon
961dd21554 Use libp2p gossipsub upstream validator framework (#4318)
* add reject all pubsub validator to stop automatic propagation of messages
* gaz
* Merge branch 'master' of github.com:prysmaticlabs/prysm into pubsub-validator
* refactor p2p validator pipeline
* add sanity check
* Merge branch 'pubsub-validator' of github.com:prysmaticlabs/prysm into pubsub-validator
* fixed up test
* rem
* gaz
* Merge refs/heads/master into pubsub-validator
* fix from self test
* ensure validator data is set
* resolve todo
* Merge refs/heads/master into pubsub-validator
* gaz
* Merge refs/heads/master into pubsub-validator
* Merge branch 'pubsub-validator' of github.com:prysmaticlabs/prysm into pubsub-validator
* Merge refs/heads/master into pubsub-validator
* remove all of the 'from self' logic. filed https://github.com/libp2p/go-libp2p-pubsub/issues/250
* Merge branch 'pubsub-validator' of github.com:prysmaticlabs/prysm into pubsub-validator
* gaz
* update comment
* Merge refs/heads/master into pubsub-validator
* rename "VaidatorData"
* Merge branch 'pubsub-validator' of github.com:prysmaticlabs/prysm into pubsub-validator
* refactor
* one more bit of refactoring
* Update beacon-chain/sync/validate_beacon_attestation.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* skip validation on self messages, add @nisdas feedback to increment failure counter
* Merge branch 'pubsub-validator' of github.com:prysmaticlabs/prysm into pubsub-validator
* remove flakey
2019-12-20 03:18:08 +00:00
terence tsao
2e4908e7c4 Optimize committee helpers (#4328) 2019-12-19 15:40:51 -08:00
Preston Van Loon
da637668a8 Minor fixes to create keys errors (#4330)
* minor fixes
2019-12-19 19:44:06 +00:00
Jim McDonald
20168ad729 More complete validator metrics (#4327)
* More complete validator metrics
* Merge branch 'master' into metrics
* Merge branch 'master' into metrics
2019-12-19 16:14:44 +00:00
Jim McDonald
0b07a9f227 Migrate periodic function to use RunEvery (#4324) 2019-12-19 07:02:10 -08:00
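
The RunEvery helper referenced by #4324 is, at its core, a context-aware ticker loop. A minimal sketch under that assumption follows; the signature is illustrative, not necessarily the shared package's exact one.

```go
package runutil

import (
	"context"
	"time"
)

// RunEvery calls f every period in a goroutine until ctx is cancelled.
func RunEvery(ctx context.Context, period time.Duration, f func()) {
	ticker := time.NewTicker(period)
	go func() {
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				f()
			case <-ctx.Done():
				return
			}
		}
	}()
}
```
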
Jim McDonald
5dca662d01 Comment typo (#4325) 2019-12-19 06:01:23 -08:00
Nishant Das
8c28d1080c Revert "Fix same deposits from same validator in same block" (#4321)
* Revert "Fix same deposits from same validator in same block (#4319)"

This reverts commit 908d220eb2.
2019-12-19 05:36:19 +00:00
Raul Jordan
6a54a430e1 Add Filter by Epoch in kv/blocks.go (#4303)
* allow for epoch based filtering
* modify repo to include filter by epoch
* resolve items
* revamped to use epoch filter
* Merge branch 'master' into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* gazelle rem unused
* Merge refs/heads/master into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* Merge refs/heads/master into roots-by-epoch
* Update beacon-chain/db/kv/blocks_test.go

Co-Authored-By: shayzluf <thezluf@gmail.com>
* Update beacon-chain/db/kv/blocks_test.go

Co-Authored-By: shayzluf <thezluf@gmail.com>
* fmt
* lint res
2019-12-19 00:15:31 +00:00
terence tsao
908d220eb2 Fix same deposits from same validator in same block (#4319)
* Update dict

* Test helper

* Regression test

* Comment

* Reset test cache
2019-12-18 16:53:30 -06:00
Preston Van Loon
ff1fd77425 Build docker images for non-root user (#4320)
* build docker images as non-root user
* search and replace mistake
* buildifier
* Change uid to 1001
2019-12-18 20:52:25 +00:00
Nishant Das
e27bc8312f Persist ETH1 Information (#4305)
* add data structures
* generate proto
* add in new fields
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into saveETH1Data
* add comments
* Merge branch 'master' into saveETH1Data
* remove file
* Merge branch 'saveETH1Data' of https://github.com/prysmaticlabs/geth-sharding into saveETH1Data
* Merge branch 'master' into saveETH1Data
* Merge branch 'master' into saveETH1Data
* Merge refs/heads/master into saveETH1Data
* Merge refs/heads/master into saveETH1Data
2019-12-18 05:57:54 +00:00
Jim McDonald
78968c1e29 Add individual p2p host counts (#4312)
* Add individual p2p host counts
* Merge branch 'master' into p2pmetrics
* Merge branch 'master' into p2pmetrics
* Merge branch 'master' into p2pmetrics
* Merge branch 'master' into p2pmetrics
2019-12-18 05:31:46 +00:00
terence tsao
fb431c11c1 Process slots exit early if same slot (#4314)
* skip process slot if it's the same slot
* Merge branch 'master' into optimize-process-slots
* Merge refs/heads/master into optimize-process-slots
2019-12-18 04:37:28 +00:00
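
#4314's "skip process slot if it's the same slot" amounts to an early return before the per-slot transition loop. A simplified sketch with placeholder types (not Prysm's real state-transition code):

```go
package state

import (
	"context"
	"fmt"
)

// BeaconState and processSlot stand in for the real types; only the early
// exit at the top mirrors the change described by the commit.
type BeaconState struct{ Slot uint64 }

func processSlot(ctx context.Context, st *BeaconState) error { return nil }

func ProcessSlots(ctx context.Context, st *BeaconState, slot uint64) (*BeaconState, error) {
	if st.Slot == slot {
		return st, nil // already at the requested slot: skip the transition work
	}
	if st.Slot > slot {
		return nil, fmt.Errorf("requested slot %d is older than state slot %d", slot, st.Slot)
	}
	for st.Slot < slot {
		if err := processSlot(ctx, st); err != nil {
			return nil, err
		}
		st.Slot++
	}
	return st, nil
}
```
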
Raul Jordan
30ed59e9c8 Make Sure ChainHeadStream Remains Open (#4282)
* do not return from stream
* fix test
* Merge branch 'master' into no-stream-return
* Merge refs/heads/master into no-stream-return
2019-12-18 04:07:11 +00:00
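
#4282's fix ("do not return from stream") is the classic long-lived gRPC stream loop: keep selecting on events and only return when the send fails or the client's context ends. A sketch with simplified stand-ins for the feed and stream types:

```go
package rpc

import "context"

type chainHead struct{ Slot uint64 }

// headSender is a simplified stand-in for the generated gRPC stream interface.
type headSender interface {
	Send(*chainHead) error
	Context() context.Context
}

func streamChainHead(headCh <-chan *chainHead, stream headSender) error {
	for {
		select {
		case head := <-headCh:
			if err := stream.Send(head); err != nil {
				return err
			}
			// No return on success: keep the stream open for the next head.
		case <-stream.Context().Done():
			return stream.Context().Err()
		}
	}
}
```
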
Jim McDonald
2e2d5199e8 Remove ChainStartFeed mocks (#4310)
* Remove ChainStartFeed from interop service
* Remove final ChainStartFeed mocks
* Gazelle
* Merge branch 'master' into coldstart
* Merge branch 'master' into coldstart
* Merge branch 'master' into coldstart
2019-12-18 03:36:07 +00:00
Raul Jordan
4fe31cf1b3 Add Benchmarks for Custom SSZ Hash Tree Root (#4313)
* bench ssz tree root
* more benches
* Merge branch 'master' into ssz-bench
2019-12-18 02:57:40 +00:00
Preston Van Loon
e82e582cdf Config to exclude kafka dep at build time (#4309)
* add flag to exclude kafka
* Add config flag to exclude kafka
* Merge branch 'master' into buildtime-exclude-kafka
2019-12-18 02:07:49 +00:00
Jim McDonald
0b2d9d8576 Tidy up interop commands (#4311) 2019-12-17 15:49:21 -08:00
Preston Van Loon
65e3f3e007 Add pubsub message ID function (#4304)
* add pubsub message ID
* thanks linter
* Update rules_go, gogo protobuf, comment
* Merge branch 'master' into add-msg-fn-id
2019-12-17 05:17:54 +00:00
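
For context, gossipsub lets a node derive message IDs from message contents rather than from sender-assigned sequence numbers, so identical payloads received from different peers are de-duplicated. The sketch below shows only the ID derivation itself (a hash of the payload); the option name and wire types of go-libp2p-pubsub are not reproduced here, so treat the `msgID` helper as an assumption about the general shape of such a function.

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// msgID derives a content-based message ID from the raw payload bytes.
// Hashing the payload means two peers forwarding the same attestation
// produce the same ID, so the second copy is dropped by the router.
func msgID(payload []byte) string {
	h := sha256.Sum256(payload)
	// Base64 keeps the ID compact and printable; any stable encoding works.
	return base64.URLEncoding.EncodeToString(h[:20])
}

func main() {
	a := msgID([]byte("attestation bytes"))
	b := msgID([]byte("attestation bytes"))
	fmt.Println(a == b) // true: identical payloads share one ID
}
```
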
Preston Van Loon
2c28e4e7a3 Improvements to Committee Assignments for multiple key requests (#4294)
* Add committees helper and benchmark; results show 62ms for 8k validators, which previously took 4 minutes
* Add regression test with same data
* fix epoch conversion
* lint
* undo and lint
* Merge branch 'master' of github.com:prysmaticlabs/prysm into zoom-zoom-assignments
* remove validator index span
* fix comment, add test to test against spec definition method for consistency.
* Deprecate CommitteeAssignment, delete unused reference to CommitteeAssignment
* Merge branch 'master' of github.com:prysmaticlabs/prysm into zoom-zoom-assignments
* remove new line
* make test be more complicated with validators activated in an epoch transition
* add feature flag for fast-assignments
* Merge branch 'master' of github.com:prysmaticlabs/prysm into zoom-zoom-assignments
* gaz, gofmt, add deprecated code back
* Update beacon-chain/core/helpers/committee.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Merge refs/heads/master into zoom-zoom-assignments
* Merge refs/heads/master into zoom-zoom-assignments
* Merge refs/heads/master into zoom-zoom-assignments
2019-12-17 03:05:26 +00:00
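
The speedup described above (roughly 62ms versus minutes for thousands of validators) comes from computing the epoch's committees once and looking up each requested validator in the result, instead of recomputing the shuffling per validator. A rough sketch of that batching idea, with hypothetical types that are not Prysm's real ones:

```go
package main

import "fmt"

// Assignment records which slot and committee a validator is assigned to.
type Assignment struct {
	Slot           uint64
	CommitteeIndex uint64
}

// computeEpochCommittees is a stand-in for the expensive shuffling computation.
// It runs once per epoch rather than once per requested validator.
func computeEpochCommittees(epoch uint64, validatorCount uint64) map[uint64]Assignment {
	assignments := make(map[uint64]Assignment, validatorCount)
	slotsPerEpoch := uint64(32)
	for idx := uint64(0); idx < validatorCount; idx++ {
		assignments[idx] = Assignment{
			Slot:           epoch*slotsPerEpoch + idx%slotsPerEpoch,
			CommitteeIndex: idx % 4,
		}
	}
	return assignments
}

// assignmentsFor answers a multi-key request with a single committee computation.
func assignmentsFor(epoch uint64, indices []uint64) map[uint64]Assignment {
	all := computeEpochCommittees(epoch, 16384) // one shuffle for the whole epoch
	out := make(map[uint64]Assignment, len(indices))
	for _, idx := range indices {
		out[idx] = all[idx] // O(1) lookup per requested validator
	}
	return out
}

func main() {
	fmt.Println(assignmentsFor(10, []uint64{1, 2, 3}))
}
```
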
Raul Jordan
642254daa6 Update README Instructions to Resolve Deposit Contract Address (#4302)
* dep addr resolver
* Merge refs/heads/master into update-read
2019-12-17 02:52:54 +00:00
Nishant Das
c41140e15a Optimize Insertion in Deposit Trie (#4299)
* current changes
* change algorithm for tree insert
* almost done with getting this to pass
* unit test passes
* tests now pass
* fix in repo
* Merge branch 'master' into optimizeDepositLogs
* fix build
* Merge branch 'optimizeDepositLogs' of github.com:prysmaticlabs/prysm into optimizeDepositLogs
* remove tautology
* fix tautology
* fix up sparsity
* Merge branch 'master' into optimizeDepositLogs
* further fixes
* Merge branch 'optimizeDepositLogs' of github.com:prysmaticlabs/prysm into optimizeDepositLogs
* Update shared/trieutil/sparse_merkle.go
* comments
* Merge branch 'optimizeDepositLogs' of github.com:prysmaticlabs/prysm into optimizeDepositLogs
* add bench for optimized
* gaz
* Merge refs/heads/master into optimizeDepositLogs
2019-12-17 02:19:12 +00:00
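
The change above reworks how new leaves are inserted into the sparse deposit Merkle trie. A standard way to make insertion O(depth) instead of rebuilding the tree is the incremental algorithm used by the Ethereum deposit contract: keep one cached left sibling per level and fold each new leaf upward until it lands on a left branch. The sketch below implements that textbook algorithm, not necessarily the exact code in this PR.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const depth = 32 // tree depth used by the deposit contract

func hash(left, right [32]byte) [32]byte {
	return sha256.Sum256(append(left[:], right[:]...))
}

// IncrementalMerkle keeps one cached left sibling per level plus the leaf
// count, which is all that is needed to insert leaves and recompute the root.
type IncrementalMerkle struct {
	branch [depth][32]byte // cached left siblings
	zeroes [depth][32]byte // hashes of empty subtrees per level
	count  uint64
}

func NewIncrementalMerkle() *IncrementalMerkle {
	t := &IncrementalMerkle{}
	for i := 1; i < depth; i++ {
		t.zeroes[i] = hash(t.zeroes[i-1], t.zeroes[i-1])
	}
	return t
}

// Insert folds the new leaf upward until it becomes a left child: O(depth) work.
func (t *IncrementalMerkle) Insert(leaf [32]byte) {
	node, idx := leaf, t.count
	for level := 0; level < depth; level++ {
		if idx%2 == 0 {
			t.branch[level] = node // cache as the left sibling at this level
			break
		}
		node = hash(t.branch[level], node)
		idx /= 2
	}
	t.count++
}

// Root recombines cached siblings with empty-subtree hashes on the right.
// (The real deposit root additionally mixes in the leaf count.)
func (t *IncrementalMerkle) Root() [32]byte {
	var node [32]byte
	idx := t.count
	for level := 0; level < depth; level++ {
		if idx%2 == 1 {
			node = hash(t.branch[level], node)
		} else {
			node = hash(node, t.zeroes[level])
		}
		idx /= 2
	}
	return node
}

func main() {
	t := NewIncrementalMerkle()
	t.Insert([32]byte{1})
	fmt.Printf("%x\n", t.Root())
}
```
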
terence tsao
23a6c20dd4 Service as proper names (#4293) 2019-12-16 19:53:55 -06:00
Preston Van Loon
514f5f904f Add prometheus gRPC time histograms (#4300)
* Add grpc_prometheus.EnableHandlingTimeHistogram()
* Merge refs/heads/master into enable-prom-hist
2019-12-16 22:00:34 +00:00
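
This commit turns on the handling-time histogram in the go-grpc-prometheus interceptors. A minimal server-wiring sketch, assuming the github.com/grpc-ecosystem/go-grpc-prometheus package; where Prysm actually calls this is not shown here.

```go
package main

import (
	grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
	"google.golang.org/grpc"
)

func newInstrumentedServer() *grpc.Server {
	// The interceptors record per-RPC counters; the histogram call below adds
	// latency buckets on top (it is off by default because of its cost).
	srv := grpc.NewServer(
		grpc.UnaryInterceptor(grpc_prometheus.UnaryServerInterceptor),
		grpc.StreamInterceptor(grpc_prometheus.StreamServerInterceptor),
	)
	grpc_prometheus.EnableHandlingTimeHistogram()
	grpc_prometheus.Register(srv)
	return srv
}

func main() {
	_ = newInstrumentedServer()
}
```
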
Preston Van Loon
5844436716 Don't serialize bls signature just to deserialize it again (#4298)
* Don't serialize bls signature just to deserialize it again
* gaz
* Merge branch 'master' into minor-thing
* Merge branch 'master' into minor-thing
2019-12-16 19:01:40 +00:00
terence tsao
5879b26b4b Hardening Committee Cache for Runtime (#4270) 2019-12-16 10:14:21 -08:00
terence tsao
566efaef89 Optimize aggregator process slots (#4297)
* Advance slots up to epoch start
* Merge branch 'master' into opt-process-slots-aggregator
* Merge branch 'master' into opt-process-slots-aggregator
2019-12-16 17:43:33 +00:00
Jim McDonald
d9062a7e30 Use RunEvery in place of custom tickers (#4290) 2019-12-16 11:00:15 -06:00
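
RunEvery is a small shared helper that replaces hand-rolled ticker loops scattered across services. A sketch of what such a helper typically looks like; the signature is assumed, not copied from Prysm.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// RunEvery calls f once per period on a background goroutine until ctx is
// cancelled, which is the pattern the custom tickers were re-implementing.
func RunEvery(ctx context.Context, period time.Duration, f func()) {
	go func() {
		ticker := time.NewTicker(period)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				f()
			case <-ctx.Done():
				return
			}
		}
	}()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()
	RunEvery(ctx, 100*time.Millisecond, func() { fmt.Println("tick") })
	<-ctx.Done()
}
```
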
Preston Van Loon
3f344aee55 add a few fuzz tests (#4291) 2019-12-16 00:52:20 -06:00
Nishant Das
fd93751bf7 Fix Goerli Faucet (#4289)
* fix faucet

* minor fixes
2019-12-15 08:21:29 -06:00
Preston Van Loon
325a2503f7 AttestingIndices: Make beacon committee be an argument (#4284)
* make beacon committee be an argument
* remove state from ConvertToIndexed
* Merge branch 'master' into refactor-AttestingIndices-committee
* Merge branch 'master' into refactor-AttestingIndices-committee
* Merge branch 'master' into refactor-AttestingIndices-committee
* Merge refs/heads/master into refactor-AttestingIndices-committee
2019-12-15 05:02:50 +00:00
Preston Van Loon
2179ac683e Fuzz testing for custom state ssz (#4234)
* Add a random fuzz test to ssz to capture panics and compare the effectiveness of the cache. This comment shows a difference in state root calculation 52% of the time, and what is even more concerning is that spec tests pass with the flag on.
* added case for one
* bring down failure rate
* prevent caching operations if no cache enabled
* unit test and pretty printer
* identify further sources of problems
* no more panics
* not panicking anymore
* fix lint
* Merge branch 'master' into fuzz-ssz
* Merge branch 'master' into fuzz-ssz
* passing up to 68
* Merge branch 'fuzz-ssz' of github.com:prysmaticlabs/prysm into fuzz-ssz
* need to find the culprit for 100
* 100 passes, now only 16 out of 1000
* state roots being mutated
* one out of 10k
* fuzzing stuff
* fix up lint
* Merge branch 'master' into fuzz-ssz
* cleanup
* fixing more comments
* Merge branch 'master' into fuzz-ssz
2019-12-15 04:32:19 +00:00
terence tsao
0f4dabfad8 Fix cloning target state for check point state cache (#4288) 2019-12-14 16:06:30 -08:00
terence tsao
8724dcd41b Sort received atts by sig (#4287) 2019-12-14 10:51:27 -06:00
Jim McDonald
89e1200b73 Add ticker shared helper (#4285) 2019-12-13 15:14:56 -08:00
metanull-operator
0f677a09b6 Added 'Prysm' to version information. (#4281)
* Added 'Prysm' to version information.
* Merge branch 'master' into versionUpdate
2019-12-13 18:52:28 +00:00
Nishant Das
c5dcf49ded Add Flag For Minimum Handshakes (#4280)
* add flag
* jim and preston's review
* check max peers
* gaz
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into minStatusCount
* remove space
* add references
* add warning log
* change log
* gaz
2019-12-13 15:12:49 +00:00
terence tsao
a5881f924f Deprecate active count and committee cache flags (#4276)
* Deprecate active count and committee cache flags
* Merge branch 'master' into deprecate-flags
2019-12-13 14:00:29 +00:00
shayzluf
d93ec64b21 Slasher Grpc client (#4230)
* grpc connection
* fix order
* Merge branch 'fixInteropGenesis' of https://github.com/prysmaticlabs/prysm into grpc_client
* gaz
* grpc setup
* running version
* added comments
* Merge branch 'master' of github.com:prysmaticlabs/prysm into grpc_client
* fix test
* terence feedback
* terence feedback
* feedback changes
* feedback changes
* comment fix
* Merge branch 'master' of github.com:prysmaticlabs/prysm into grpc_client
* logging when there is no chain head
* rename function
* terence and nishant feedback
* fix imports
* nishant feedback
* fix wait for stop
* fix imports
* fix tests
2019-12-13 07:31:37 +00:00
Raul Jordan
a9a5973b98 Add Getter for Genesis Block (#4271)
* test passing

* kafka
2019-12-12 16:27:22 -06:00
Jim McDonald
570efe3d04 Give peers a chance (#4268)
* Add decay function for peer badresponses count
* Activate peer decay in p2p
2019-12-12 14:34:28 +00:00
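
The idea behind this change is that a peer's bad-response count should decay over time, so a few early failures do not permanently bar an otherwise healthy peer. A sketch of such a decay, using hypothetical types rather than the real peer-status store:

```go
package main

import (
	"fmt"
	"sync"
)

// PeerScores tracks bad-response counts per peer ID.
type PeerScores struct {
	mu           sync.Mutex
	badResponses map[string]int
}

func NewPeerScores() *PeerScores {
	return &PeerScores{badResponses: make(map[string]int)}
}

// Increment records a bad response from a peer.
func (p *PeerScores) Increment(id string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.badResponses[id]++
}

// Decay is meant to run periodically (e.g. via a RunEvery-style helper); it
// knocks one bad response off each peer, giving misbehaving peers a chance.
func (p *PeerScores) Decay() {
	p.mu.Lock()
	defer p.mu.Unlock()
	for id, n := range p.badResponses {
		if n > 0 {
			p.badResponses[id] = n - 1
		}
	}
}

func main() {
	s := NewPeerScores()
	s.Increment("peerA")
	s.Decay()
	fmt.Println(s.badResponses["peerA"]) // 0: the peer is back in good standing
}
```
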
Raul Jordan
2e9c3895f4 Bring Back Epoch Filtering for ListBlocks API (#4262)
* bring back the epochs!
* fix up
* Merge refs/heads/master into bring-back-epoch-filter
* add in patch
* Merge branch 'bring-back-epoch-filter' of github.com:prysmaticlabs/prysm into bring-back-epoch-filter
* import spacing
* lint
* build
* gaz
* Merge refs/heads/master into bring-back-epoch-filter
* gaz
* Merge branch 'bring-back-epoch-filter' of github.com:prysmaticlabs/prysm into bring-back-epoch-filter
* move back perf
* update ethapis
* fix build
* Merge refs/heads/master into bring-back-epoch-filter
2019-12-12 02:27:19 +00:00
terence tsao
9033f6801b Removed active count and shuffling cache (#4266)
* Removed
* All tests pass
* Gaz
* Removed new lines
* A few more lines...
* I think I got them all
* and I didn't : )
* Could this be last...
2019-12-12 01:15:44 +00:00
Preston Van Loon
c0b3767757 remove old cache for active indices. this is not used in production and will soon be replaced (#4264) 2019-12-11 15:48:48 -06:00
Preston Van Loon
e72ff1bb4f Add unit test to ActiveValidatorIndices (#4263)
* Add regression test to ActiveValidatorIndices

* fix test, more comments

* imports
2019-12-11 12:27:25 -08:00
Jim McDonald
0cb59bb018 Tidy up "Requesting blocks" log in initial sync (#4256)
* Tidy up log
* Merge branch 'master' into logfix
2019-12-11 17:32:38 +00:00
Preston Van Loon
6e549c90ba Initialize server context for beacon server (#4260)
* Initialize server context for beacon server
* Merge branch 'master' into fix-4254
2019-12-11 16:37:05 +00:00
Jim McDonald
813233373e Advanced peer tracking (#4233)
* Advanced peer status

* Rework errors; add tests

* Gazelle

* time->roughtime

* Update beacon-chain/p2p/handshake.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/p2p/interfaces.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Downgrade log

* Tidy up handshaking logic and commentary

* Downgrade log message

* Protect connected peers from disconnection; increase high water level to avoid bad interactions at maxPeers
2019-12-11 18:31:36 +08:00
Preston Van Loon
5757ce8894 Fix flaky TestKV_Aggregated_CanSaveRetrieve (#4253)
* Update aggregated_test.go
2019-12-11 06:08:54 +00:00
Nishant Das
7c11367cd8 Update SSZ (#4252)
* update ssz to latest
* Merge branch 'master' into updateSSZ
* Merge refs/heads/master into updateSSZ
2019-12-11 05:16:37 +00:00
terence tsao
6d2c37caf1 Removed process epoch (#4251) 2019-12-10 20:40:14 -08:00
Ivan Martinez
812311f6f7 Add more detail to README and add benchmark for HashTreeRootState (#4247)
* Add more detail to readme and add benchmark for HashTreeRootState
* Add hashtreerootstate benchmark results to readme
* Merge branch 'master' into benchmarks-readme
2019-12-11 00:14:33 +00:00
terence tsao
22d81ef0ed Update process_epoch benchmark (#4245)
* Update to ProcessEpochPrecompute
* Comment
* Add b.N back
2019-12-10 23:32:11 +00:00
Ivan Martinez
414fcda9a2 Change jaeger default endpoint (#4242)
* Change jaeger default
2019-12-10 19:45:14 +00:00
Nishant Das
bb2fc4cd5e Fix Deposit Sender Utility (#4239)
* add in fix

* change gas limit
2019-12-10 09:35:24 -08:00
Nishant Das
5fd6a92052 Fix DiscoveryV5 (#4237)
* add fallback

* fix test
2019-12-10 13:35:16 +08:00
Nishant Das
7ccbe48f54 fix order (#4228) 2019-12-09 22:31:53 +08:00
Preston Van Loon
7a46cc0681 Enforce stronger head state operations (#4216)
* Enforce stronger head state operations

* fix genesis state generation

* one test left to fix

* all tests passing now

* gofmt

* Update beacon-chain/db/kv/state_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update beacon-chain/db/kv/state.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* fix tests
2019-12-09 15:35:18 +08:00
Preston Van Loon
92d21c72b8 Skip accessing peer status if it does not exist (#4226)
* skip accessing peer status if it does not exist
* Merge refs/heads/master into fix-panic-rr
2019-12-08 23:30:39 +00:00
Preston Van Loon
0cb681476e Add span for saveCheckpointState (#4227)
* Add span for saveCheckpointState
* Add span for saveCheckpointState
2019-12-08 22:53:09 +00:00
Preston Van Loon
fa7b8ab60d A few improvements to handshake (#4214)
* A few improvements to handshake and exit round robin
* revert beacon-chain/sync/initial-sync/round_robin.go
* Merge refs/heads/master into p2p-fixes
* make handshake non-blocking
* Merge branch 'p2p-fixes' of github.com:prysmaticlabs/prysm into p2p-fixes
* Merge refs/heads/master into p2p-fixes
* Merge refs/heads/master into p2p-fixes
* Merge refs/heads/master into p2p-fixes
* Update handshake.go
2019-12-08 05:12:56 +00:00
Preston Van Loon
bdb80271a3 Allow faucet on prylabs.network (#4220)
* fix faucet hostname issue
2019-12-08 03:01:44 +00:00
terence tsao
1b8eb16fc7 --initial-sync-cache-state doesn't need to save head root (#4219)
* Test

* Run time works

* Revert
2019-12-07 16:45:39 -06:00
metanull-operator
1222ebb6db Graffiti flag (#4213)
* Implementation of graffiti flag without tests.
* Updated to pass graffiti as a string instead of []byte all the way to the ProposeBlock RPC call. This ensures the ToBytes32() call is handled in ProposeBlock rather than relying on the caller to guarantee the value passed is only 32 bytes. It adds work by performing that conversion on each proposed block for a static graffiti value, but it also protects against an RPC call to ProposeBlock that carries more than 32 bytes of graffiti.
* Added test case for validator.
* Added GraffitiFlag to validate usage test.
* Updated data structures and logic to convert graffiti flag from string to byte array earlier in the process. Now converting when setting up ValidatorService.
* Updated test case to correctly set up validator using byte array.
* Merge branch 'master' into graffitiFlag
2019-12-07 19:13:56 +00:00
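
A small sketch of the string-to-fixed-array conversion the description above refers to: the graffiti flag arrives as a string and is truncated or zero-padded into a 32-byte value before it goes into the block body. The helper below mirrors the behavior of a ToBytes32-style utility but is written inline here as an assumption, not Prysm's actual function.

```go
package main

import "fmt"

// toBytes32 copies at most 32 bytes of s into a fixed-size array,
// zero-padding on the right. Extra bytes are silently dropped, which is
// why doing the conversion server-side protects against oversized input.
func toBytes32(s string) [32]byte {
	var out [32]byte
	copy(out[:], s)
	return out
}

func main() {
	graffiti := toBytes32("prysm-validator")
	fmt.Printf("%x\n", graffiti)
}
```
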
terence tsao
3e15e2fc1e Add operation feed (#4215)
* Events
* Notifiers
* Refactor
* Gaz
* Fixed rest
* Lint
* Lint
* Visibility
* Typo
* Typo
* Apply suggestions from code review

Co-Authored-By: Nishant Das <nishdas93@gmail.com>
2019-12-07 17:57:26 +00:00
Nishant Das
667466020e Change All Caches To Ristretto (#4208)
* new caches
* goimports, gaz
* fix all tests
* Merge branch 'swapP2PCaches' of https://github.com/prysmaticlabs/geth-sharding into swapP2PCaches
* remove from bls
* remove ccache
* fix handshake
* Merge branch 'master' into swapP2PCaches
* gofmt
* Merge branch 'master' into swapP2PCaches
2019-12-06 20:06:37 +00:00
terence tsao
f63ab1e136 Remove formatting error for signature fail to verify (#4211)
* Remove formatting error for sig
* Update beacon-chain/core/blocks/block_operations.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Merge branch 'master' into sig-error-log
2019-12-06 18:21:57 +00:00
Preston Van Loon
6841d96f36 Remove formatting from error (#4210)
* remove formatting from error
* Fix err
2019-12-06 17:48:38 +00:00
Preston Van Loon
cae24068d4 prevent OR on bitlists of different length (#4209)
* prevent OR on bitlists of different length
* prevent OR on bitlists of different length
2019-12-06 14:33:40 +00:00
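
ORing two aggregation bitlists is only meaningful when they describe the same committee, i.e. have the same length; otherwise the operation can read or write out of range. A generic length-guarded OR, written against plain byte slices rather than the go-bitfield types used in the repository:

```go
package main

import (
	"errors"
	"fmt"
)

// orBits returns the bytewise OR of two equal-length bit vectors.
// Rejecting mismatched lengths is the guard this change adds.
func orBits(a, b []byte) ([]byte, error) {
	if len(a) != len(b) {
		return nil, errors.New("cannot OR bitfields of different lengths")
	}
	out := make([]byte, len(a))
	for i := range a {
		out[i] = a[i] | b[i]
	}
	return out, nil
}

func main() {
	merged, err := orBits([]byte{0b0001}, []byte{0b0100})
	fmt.Println(merged, err) // [5] <nil>

	_, err = orBits([]byte{0b0001}, []byte{0b0001, 0b0001})
	fmt.Println(err) // length mismatch is rejected instead of misbehaving
}
```
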
terence tsao
dc0b8fad4f Move recently seen roots (#4206)
* Move recently seen roots earlier
* Preston's feedback
2019-12-06 06:30:43 +00:00
Preston Van Loon
d3375d98a8 Kafka exporter (#3840)
* abstract db interface, kafka build, work in progress
* checkpoint
* Merge branch 'master' of github.com:prysmaticlabs/prysm into es-exporter
* feature flag
* move passthrough
* flag change
* gofmt
* Merge branch 'master' of github.com:prysmaticlabs/prysm into es-exporter
* missing db methods
* Merge branch 'master' of github.com:prysmaticlabs/prysm into es-exporter
* fix interface
* Merge branch 'master' of github.com:prysmaticlabs/prysm into es-exporter
* Merge branch 'master' of github.com:prysmaticlabs/prysm into es-exporter
* Merge branch 'master' of github.com:prysmaticlabs/prysm into es-exporter
* try using cmake built from source
* lint godocs
* lint godocs
* lint godocs
* Update BUILD.bazel
* Merge branch 'master' into es-exporter
* Merge branch 'master' into es-exporter
* Merge branch 'master' into es-exporter
* Merge branch 'master' of github.com:prysmaticlabs/prysm into es-exporter
* gaz
2019-12-06 02:05:58 +00:00
terence tsao
9d4c7cb4f7 Use cache state during init sync (#4199)
* Initial sync state cache
* Gaz
* Gaz
* Don't save head root
* Fix config validator
* Uncomment save head
* Merge branch 'master' into initial-sync-no-verify
* Minor refactor
* Merge branch 'initial-sync-no-verify' of https://github.com/prysmaticlabs/prysm into initial-sync-no-verify
* Merge branch 'master' into initial-sync-no-verify
* Tests
* Merge branch 'initial-sync-no-verify' of https://github.com/prysmaticlabs/prysm into initial-sync-no-verify
* Merge branch 'master' into initial-sync-no-verify
* Add lock
* Merge branch 'initial-sync-no-verify' of https://github.com/prysmaticlabs/prysm into initial-sync-no-verify
* Tests
* Removed save head
* One more test
* Merge branch 'master' into initial-sync-no-verify
* Raul's feedback
* Merge branch 'initial-sync-no-verify' of https://github.com/prysmaticlabs/prysm into initial-sync-no-verify
* Comment
* Gazelle
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into initial-sync-no-verify
* revert
* Update beacon-chain/blockchain/service.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into initial-sync-no-verify
* Merge branch 'initial-sync-no-verify' of https://github.com/prysmaticlabs/prysm into initial-sync-no-verify
* Fixed test
* Fixed feature flag
* Merge branch 'master' into initial-sync-no-verify
* Fixed cache genesis state test
* Merge branch 'initial-sync-no-verify' of https://github.com/prysmaticlabs/prysm into initial-sync-no-verify
2019-12-06 00:49:19 +00:00
Raul Jordan
ae2b2e74ca Create Functional Cache for Custom State SSZ (#4197)
* better abstraction
* using ristretto
* begin on custom, cached array roots merkleization
* do cache initialization
* passing with new cache
* works
* fix up test
* fixed up cache
* include proper comments
* remove old hash tree root
* rem validator bottleneck
* gaz
* Merge branch 'master' into caching-ssz
* optimized!!!!
* Merge branch 'caching-ssz' of github.com:prysmaticlabs/prysm into caching-ssz
* add mutex
* Merge branch 'master' into caching-ssz
* add read lock
* fmt
* add mathutil
* Merge branch 'master' into caching-ssz
* Merge refs/heads/master into caching-ssz
* Merge refs/heads/master into caching-ssz
* Merge refs/heads/master into caching-ssz
2019-12-05 20:23:59 +00:00
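
The cache referenced above memoizes expensive hash-tree-root computations keyed by the encoded input. A minimal sketch using the dgraph-io/ristretto API; the configuration numbers, the key scheme, and the sha256 stand-in for the real hash-tree-root are illustrative assumptions.

```go
package main

import (
	"crypto/sha256"
	"fmt"

	"github.com/dgraph-io/ristretto"
)

func main() {
	// Admission and eviction are managed by ristretto based on cost; the
	// numbers below are illustrative, not tuned values from the repository.
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1 << 20, // keys to track frequency for
		MaxCost:     1 << 26, // roughly 64 MiB worth of cached roots
		BufferItems: 64,
	})
	if err != nil {
		panic(err)
	}

	root := func(encoded []byte) [32]byte {
		key := string(encoded)
		if v, ok := cache.Get(key); ok {
			return v.([32]byte) // cache hit: skip recomputing the root
		}
		r := sha256.Sum256(encoded) // stand-in for the real hash-tree-root
		cache.Set(key, r, int64(len(r)))
		return r
	}

	fmt.Printf("%x\n", root([]byte("beacon state field")))
}
```
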
Ivan Martinez
83179376d4 Cleanup testutil and change name scheme to reference deterministic (#4167)
* Clean testutil, change tool names to Deterministic
* Cleanup errors
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into clean-testutil
* Fix bug with generating deposits
* Fix a few tests
* Fix most tests
* Clean up some tests
* Remove err pt. 1
* Remove err pt. 2
* Change tests to use genesis state util
* Remove err from deposits
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into clean-testutil
* Remove circular dependency
* Remove uncompressed signature test
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into clean-testutil
* Merge branch 'master' into clean-testutil
* Goimports
* gazelle
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into clean-testutil
* Add back error handling
* New attestation pool (#4185)

* New pool
* Better namings
* Fmt
* Gazelle
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into define-pool
* Raul's feedback
* Raul's feedback
* Log peer connected log for incoming connections (#4173)

* Log peer connected log for incoming connections
* Merge branch 'master' into peerconnected
* Merge branch 'master' into peerconnected
* Update handshake.go
* Update handshake.go
* Merge branch 'master' into peerconnected
* Merge branch 'master' into peerconnected
* Attestation pool to use go-cache (#4187)
* Update EthereumAPIs  (#4186)

* include new patch targeting latest ethapis master
* ensure project builds
* Merge branch 'master' into update-all-api
* fix up committees
* Merge branch 'update-all-api' of github.com:prysmaticlabs/prysm into update-all-api
* include latest eth apis
* Merge branch 'master' into update-all-api
* update block tests
* Merge branch 'update-all-api' of github.com:prysmaticlabs/prysm into update-all-api
* Merge branch 'master' into update-all-api
* add todos
* Implement GetValidator RPC Endpoint (#4188)

* include new patch targeting latest ethapis master
* ensure project builds
* Merge branch 'master' into update-all-api
* fix up committees
* Merge branch 'update-all-api' of github.com:prysmaticlabs/prysm into update-all-api
* include latest eth apis
* Merge branch 'master' into update-all-api
* update block tests
* Merge branch 'update-all-api' of github.com:prysmaticlabs/prysm into update-all-api
* Merge branch 'master' into update-all-api
* add todos
* implement get validator rpc
* add test for get validator
* table driven test
* fix up test
* fix confs
* tests for more cases
* fix up tests and add out of range
* Slasher optimization (#4172)

* size

* batching and concurrency improvements

* gaz

* merge fixes

* fix comment

* fix test

* fix test

* fix build

* ethpb

* ethpb

* fix test

* fix comment

* add benchmark

* fix benchmark
* Handle error for all testutil uses
* Fix errors
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into clean-testutil
* Revert error handling

Revert "Fix errors"

This reverts commit db081f5486.

Revert "Handle error for all testutil uses"

This reverts commit bdabef2306.

Revert "Add back error handling"

This reverts commit da7e3d2020.
* Change genesis state func to use testing.T
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into clean-testutil
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into clean-testutil
* Fix conflict
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into clean-testutil
* Merge branch 'master' into clean-testutil
* Merge branch 'master' into clean-testutil
* Capitalize other logs
* Merge branch 'clean-testutil' of https://github.com/0xKiwi/Prysm into clean-testutil
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into clean-testutil
* Merge branch 'master' into clean-testutil
2019-12-05 19:51:33 +00:00
Nishant Das
c36a852329 Swap to Ristretto Cache (#4070)
* add new cache
* change to larger size
* Merge branch 'master' into swapCache
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into swapCache
* remove imports
* cache fixes
* Merge branch 'master' into swapCache
* add better costing
* Merge branch 'swapCache' of https://github.com/prysmaticlabs/geth-sharding into swapCache
* comment
* change back to var
* Merge branch 'master' into swapCache
* Merge branch 'master' into swapCache
* Merge branch 'master' into swapCache
* Merge branch 'master' into swapCache
* Merge refs/heads/master into swapCache
2019-12-05 19:13:11 +00:00
Jim McDonald
650a278fee Harden BLS against invalid input (#4203)
* Harden BLS against invalid input
* Merge branch 'master' into blsharden
* Merge branch 'master' into blsharden
* Merge branch 'master' into blsharden
2019-12-05 18:33:29 +00:00
Ivan Martinez
6816337589 Make logs more helpful for E2E (#4198)
* Make logs more helpful for E2E
* gofmt
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into helpful-logs
* Add extra info
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into helpful-logs
* gofmt and fix error output
* Use errors
* Gazelle
* Revert "gofmt and fix error output"

This reverts commit 9fc85f2dd2.
* Formatting and fix
* add f
* Add more details to logs
* Merge branch 'master' into helpful-logs
* Change text a bit
* Merge branch 'helpful-logs' of https://github.com/0xKiwi/Prysm into helpful-logs
* Merge branch 'master' into helpful-logs
2019-12-05 18:00:55 +00:00
Preston Van Loon
2950e4aeb4 Faucet: Add score in error (#4200)
* Update server.go
* Merge refs/heads/master into prestonvanloon-patch-2
2019-12-05 17:37:19 +00:00
Jim McDonald
746cc142d0 Remove erroneous err (#4202) 2019-12-05 05:53:39 -08:00
Ivan Martinez
261428118e Isolate BLS pubkey cache to only when cache is enabled (#4195)
* Only add to cache when cache is enabled
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into isolate-cache
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into isolate-cache
* Merge branch 'master' into isolate-cache
2019-12-05 02:56:36 +00:00
Preston Van Loon
544e5309ad Faucet: Use client IP in captcha requests (#4196)
* Use client IP in captcha requests
2019-12-05 02:18:15 +00:00
terence tsao
23dd951e59 Chain head state return nil instead of err (#4193)
* Return nil instead of err
* Preston's feedback
* Merge branch 'master' into return-nil
2019-12-04 23:48:30 +00:00
Preston Van Loon
498417a8fc Wait for 3 peers to start sync (#4194)
* Update service.go
2019-12-04 23:09:47 +00:00
Preston Van Loon
617325b726 Faucet improvements (#4192)
* Add faucet reCaptcha improvements in verification
* Add faucet reCaptcha improvements in verification
* add roughtime
2019-12-04 20:33:46 +00:00
Raul Jordan
9e5cc81340 Implement Prysm-Specific HashTreeRootState (#4077)
* new ssz hash tree root
* Merge branch 'master' into new-ssz-state
* better comments on func
* add errors instead of panic in state
* utilize errors wrap everywhere
* include bench
* added bench info
* equality test
* dup
* gaz
* use new hash tree root in state transition
* fix build
* separate test package
* three targets failing
* single target fails
* please test targets...pass for me
* revert
* Merge branch 'master' into new-ssz-state
* rev
* Merge branch 'new-ssz-state' of github.com:prysmaticlabs/prysm into new-ssz-state
* broken build
* Merge branch 'master' into new-ssz-state
* gaz
* Merge branch 'new-ssz-state' of github.com:prysmaticlabs/prysm into new-ssz-state
* ssz workspace
* master ssz
* Merge branch 'master' into new-ssz-state
* resolve conf
* resolve some conflicts and fix up broken file
* fix up build file issues and sync
* eth1 data votes included
* further abstractions, simplifications
* Merge branch 'master' into new-ssz-state
* gaz
* Merge branch 'new-ssz-state' of github.com:prysmaticlabs/prysm into new-ssz-state
* feature flag gating
* add field count test
* Merge branch 'master' into new-ssz-state
* resolving ivan feedback
* Merge branch 'new-ssz-state' of github.com:prysmaticlabs/prysm into new-ssz-state
* gaz
* Merge branch 'master' into new-ssz-state
* addressed
* Merge branch 'new-ssz-state' of github.com:prysmaticlabs/prysm into new-ssz-state
2019-12-04 19:20:33 +00:00
terence tsao
f75a5a5df8 Implement Atts Pool CRUD Methods (#4191)
* New pool
* Better namings
* Fmt
* Gazelle
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into define-pool
* Raul's feedback
* Raul's feedback
* Update to use go-cache
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into define-pool-1
* Update workspace
* Update workspace
* Update pool to use interface
* Move kv init methods
* CRUD for aggregated
* CRUD for unaggregated
* Gaz
* Tests for aggregated
* Fixed test
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into curd
* Minor fixes
* Typo
* pool test
* Added deletions as well
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into curd
* Update beacon-chain/operations/attestations/kv/aggregated.go
* Update beacon-chain/operations/attestations/kv/aggregated.go
* Update beacon-chain/operations/attestations/kv/unaggregated_test.go
* Update beacon-chain/operations/attestations/kv/kv.go
2019-12-04 18:30:45 +00:00
shayzluf
ae8df9c32b Slasher optimization (#4172)
* size

* batching and concurrency improvements

* gaz

* merge fixes

* fix comment

* fix test

* fix test

* fix build

* ethpb

* ethpb

* fix test

* fix comment

* add benchmark

* fix benchmark
2019-12-04 12:09:38 +05:30
Raul Jordan
90cbe49496 Implement GetValidator RPC Endpoint (#4188)
* include new patch targeting latest ethapis master
* ensure project builds
* Merge branch 'master' into update-all-api
* fix up committees
* Merge branch 'update-all-api' of github.com:prysmaticlabs/prysm into update-all-api
* include latest eth apis
* Merge branch 'master' into update-all-api
* update block tests
* Merge branch 'update-all-api' of github.com:prysmaticlabs/prysm into update-all-api
* Merge branch 'master' into update-all-api
* add todos
* implement get validator rpc
* add test for get validator
* table driven test
* fix up test
* fix confs
* tests for more cases
* fix up tests and add out of range
2019-12-04 00:33:34 +00:00
Raul Jordan
c31f46d973 Update EthereumAPIs (#4186)
* include new patch targeting latest ethapis master
* ensure project builds
* Merge branch 'master' into update-all-api
* fix up committees
* Merge branch 'update-all-api' of github.com:prysmaticlabs/prysm into update-all-api
* include latest eth apis
* Merge branch 'master' into update-all-api
* update block tests
* Merge branch 'update-all-api' of github.com:prysmaticlabs/prysm into update-all-api
* Merge branch 'master' into update-all-api
* add todos
2019-12-03 23:44:58 +00:00
terence tsao
83781d0b74 Attestation pool to use go-cache (#4187) 2019-12-03 15:07:44 -08:00
Jim McDonald
6488b0527c Log peer connected log for incoming connections (#4173)
* Log peer connected log for incoming connections
* Merge branch 'master' into peerconnected
* Merge branch 'master' into peerconnected
* Update handshake.go
* Update handshake.go
* Merge branch 'master' into peerconnected
* Merge branch 'master' into peerconnected
2019-12-03 22:37:49 +00:00
terence tsao
eeb8779cfc New attestation pool (#4185)
* New pool
* Better namings
* Fmt
* Gazelle
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into define-pool
* Raul's feedback
* Raul's feedback
2019-12-03 22:04:11 +00:00
Raul Jordan
f40bbb92d1 Resolve Broken Active Changes for Validators (#4182)
* fix issues with exited validator indices
* tests pass for validator active set changes and exited keys
* Merge branch 'master' into cached-active-changes
* resolve archive test
* Merge branch 'cached-active-changes' of github.com:prysmaticlabs/prysm into cached-active-changes
* Merge branch 'master' into cached-active-changes
* Merge branch 'master' into cached-active-changes
2019-12-03 21:34:52 +00:00
Nishant Das
4f0bef929f Change BLS to Herumi Again (#4181)
* change to herumi's bls
* change alias
* change to better
* add benchmark
* build
* change to bazel fork
* fix prefix
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into herumiBLS
* make it work with library
* update to latest
* change again
* add import
* update to latest
* add sha commit
* new static lib with groups swapped
* using herumis new lib
* fix dep paths in c headers
* update again
* new changes
* fix commit
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into herumiBLS
* fix serialization
* comment
* fix test
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into herumiBLS
* fix to herumis latest version
* fix test
* fix benchmarks
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into herumiBLS
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into herumiBLS
* add new workspace
* change commit and remove init
* get test to pass
* remove parameter
* remove reverse byte order
* make gazelle happy
* set pure to off
* fix failing tests
* Merge branch 'master' into herumiBLS
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into herumiBLS
* Merge branch 'herumiBLS' of https://github.com/prysmaticlabs/geth-sharding into herumiBLS
* remove old ref
* use HashWithDomain functions
* update to latest version
* clean up
* gaz
* add back removed code
* switch off pure
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into herumiBLS
* use local repo
* resolve docker issues
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into herumiBLS
* fix build and tests
* gaz
* Merge branch 'master' into herumiBLS
* Merge refs/heads/master into herumiBLS
* Merge refs/heads/master into herumiBLS
2019-12-03 20:29:05 +00:00
Raul Jordan
81a83cf100 Implement Chain Head Stream & Naming Consistency (#4160)
* include stream chain head mock
* uncomment test
* stream chain head implemented
* remove imports
* chain head stream test
* include stream test with mockgen
* test now passes
* checkin items
* stream tests all passing
* rem learn
* fix up fork checker
* add stream ctx
* gaz, fix test
* fix broken test
* Merge branch 'master' into chain-head-stream
* include context in chain head stream happy path test
* Merge branch 'master' into chain-head-stream
* Merge branch 'master' into chain-head-stream
* Merge refs/heads/master into chain-head-stream
* Merge refs/heads/master into chain-head-stream
2019-12-03 19:48:11 +00:00
terence tsao
8bbc589edd Spans and check type (#4164)
* Spans and check type
* Typos
* Remove type checks
* Fixed a test bug
* Merge branch 'master' into subs-fixes
* Merge branch 'master' into subs-fixes
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into subs-fixes
* Revert back type assertions
* Merge branch 'master' into subs-fixes
* Merge branch 'subs-fixes' of https://github.com/prysmaticlabs/prysm into subs-fixes
* Merge branch 'master' into subs-fixes
2019-12-03 19:15:01 +00:00
Preston Van Loon
32245a9062 Deprecate --init-sync-no-verify, make it the default (#4179)
* deprecated --init-sync-no-verify, make it the default
* Merge branch 'master' into deprecate-init-sync-verify-flag
* add more flag info
* Merge branch 'deprecate-init-sync-verify-flag' of github.com:prysmaticlabs/prysm into deprecate-init-sync-verify-flag
* gofmt
* Merge refs/heads/master into deprecate-init-sync-verify-flag
* Merge refs/heads/master into deprecate-init-sync-verify-flag
2019-12-03 18:46:04 +00:00
Nishant Das
28c4f28d32 Add Strict Connection Manager (#4110)
* add forked connMgr
* gaz
* add license header
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into connMgr
* add conn manager test
* gaz
* fix connManager
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into connMgr
* gaz
* remove todo
* add new dep
* lint
* lint
* lint
* space
* visibility
* Merge branch 'master' into connMgr
* Merge branch 'master' into connMgr
* Merge refs/heads/master into connMgr
2019-12-03 18:18:57 +00:00
Peter Pratscher
a2d4701f6e Update API to return empty next page token on last page response (#4176)
* Update API to return empty next page token on last page response
* Update tests
* Merge branch 'master' into api-return-empty-page-token-on-last-page
2019-12-03 17:45:43 +00:00
Nishant Das
8e4022f8aa Remove Outdated Proto Files (#4178)
* remove gateway folder
* update att container
* Merge branch 'master' into removeProtoFiles
* Merge refs/heads/master into removeProtoFiles
2019-12-03 16:45:20 +00:00
terence tsao
42e766e909 Fix on participation RPC return (#4171)
* Edit returning epoch
* Merge branch 'master' into fix-participation-typos
2019-12-03 16:23:03 +00:00
Nishant Das
a686be8bd0 Revert "Revert "Update Pending Queue (#4066)" (#4101)" (#4168)
This reverts commit 7a9c297206.
2019-12-03 07:56:04 -08:00
Nishant Das
e3c3dea5d2 Regenerate Missing Proto Files (#4163)
* regen missing proto files
* Merge branch 'master' into regenProto
2019-12-02 23:19:37 +00:00
Ivan Martinez
7754cfb6c6 Reduce amount of time for benchmark tests (#4166)
* Change benchmarks to use different cache

* Fix bench tests and cache

* Add back sig check for test
2019-12-02 15:54:57 -05:00
Andrei Ivasko
f55a380ade Deposit testing (#4043)
* debugging...
* debugging... feedback required
* moved sendDeposits_test to powchain package
* need some guidance to proceed further
* further guidance needed
* match depositData to depositEvent
* debugging validating merkle root
* fixed compile error
* test passed for a single deposit
* Unable to verify deposit merkle branch
* fix test
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into AndreisPR
* ready for review
* Merge branch 'master' into deposit-testing
* Merge branch 'master' into deposit-testing
* applied requested changes
* Merge branch 'master' into deposit-testing
2019-12-01 22:23:55 +00:00
Preston Van Loon
9a317ffc0f Update tracing dependencies (#4158)
* update tracing deps
2019-12-01 06:01:53 +00:00
Raul Jordan
3be4894b8a Update Ethereum APIs, Allow Genesis Data Retrieval for Blocks + Attestations (#4150)
* update apis

* include block filter genesis

* genesis atts

* add in workspace file

* include proper diff targeting master of ethereum apis

* genesis block fetching fixes

* remove fmt

* tests for genesis list blocks passing

* fixed up container tests

* tests now passing

* fix up tests
2019-11-30 22:30:48 -06:00
Jim McDonald
646411b881 Log connections (#4143)
* Log connections
* Merge branch 'master' into logconnection
* Merge branch 'master' into logconnection
* Merge branch 'master' into logconnection
2019-12-01 01:37:42 +00:00
Jim McDonald
0e99e4af4f Update lastUpdated on peerstatus.Set() (#4152)
* Update lastUpdated on Set()
* Merge branch 'master' into lastupdated
* gazelle
* Merge branch 'master' into lastupdated
* Merge branch 'master' into lastupdated
2019-11-30 19:47:08 +00:00
terence tsao
e87337a97a Update forkchoice spec link to v0.9.0 (#4147)
* Update forkchoice doc link to v0.9.0
* Merge refs/heads/master into update-link
2019-11-30 05:48:18 +00:00
Jim McDonald
53523b3eef Implement ListPeers API call (#4151)
* update ethereumapis from https://github.com/prysmaticlabs/ethereumapis/pull/55
* add stub for https://github.com/prysmaticlabs/prysm/issues/4141
* Add ListPeers API call
* Merge
* Add comment for exported method
* Fix visibility of new peers package.
* Merge branch 'master' into peersapi
2019-11-30 05:36:02 +00:00
terence tsao
5ec02b28a5 Remove pruned states check (#4153)
* Removed already pruned check
* Tested run time
2019-11-30 05:07:13 +00:00
Raul Jordan
1620290305 Check in archive.pb.go (#4148)
* gen archive.pb
* Merge branch 'master' into regen-protos
2019-11-29 18:28:21 +00:00
Preston Van Loon
fc171434c5 Update README.md to reflect spec version (#4146)
* Update README.md
2019-11-29 17:15:14 +00:00
Preston Van Loon
b08f3f760d Update ethereumapis (#4142)
* update ethereumapis from https://github.com/prysmaticlabs/ethereumapis/pull/55
* add stub for https://github.com/prysmaticlabs/prysm/issues/4141
2019-11-29 16:44:51 +00:00
terence tsao
7495961d6b Prune boundary state (#4139)
* Delete epoch boundary slot of last finalized epoch
* Case to cover start slot is skipped
* Test
* Feature flag
* feature gate the new functionality only
* Update DB for migration
* Test
* Fmt
* Fixed test
* Gazelle
2019-11-28 23:05:47 +00:00
Preston Van Loon
4dbf68b50c Fix log message from PR #4130 (#4136)
* fix log message from PR #4130
* Merge refs/heads/master into fix-log-2
2019-11-27 23:49:47 +00:00
Raul Jordan
e24b060eb6 README for Third Party Directory (#4134)
* begin readme
* add common bugs
* include more details for third party readme
* patch diff
* add readme
* complete readme
* Merge branch 'master' into third-party-readme
* rev
* Merge branch 'third-party-readme' of github.com:prysmaticlabs/prysm into third-party-readme
* revert
* Update third_party/README.md
* Update third_party/README.md
2019-11-27 23:27:13 +00:00
terence tsao
e90358cd8e Removed unused mocks (#4135)
* Removed unused mocks

* Lint

* Gaz
2019-11-27 14:52:24 -06:00
Nishant Das
80865ff3f2 Account for Skipped Slots When Requesting Blocks (#4130)
* add check

* Update beacon-chain/sync/initial-sync/round_robin.go
2019-11-27 11:18:18 -06:00
Jim McDonald
60469ec7ee Avoid crash if peer goes missing (#4115)
* Migrate ChainStarted and StateInitialized to state notifier
* Provide state notifier to powchain service
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Remove commented line
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge remote-tracking branch 'upstream/master'
* Merge branch 'master' of github.com:mcdee/prysm
* Accept err from HeadState() as non-fatal
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge remote-tracking branch 'upstream/master'
* Merge branch 'master' of github.com:mcdee/prysm
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Do not crash if peer goes missing
* Additional catches
* Merge branch 'master' into rrfix
* Use single refresh time
* Merge branch 'master' into rrfix
2019-11-27 16:00:59 +00:00
Jim McDonald
67be8bd4f0 Mirror run definitions in build (#4129)
* Migrate ChainStarted and StateInitialized to state notifier
* Provide state notifier to powchain service
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Remove commented line
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge remote-tracking branch 'upstream/master'
* Merge branch 'master' of github.com:mcdee/prysm
* Accept err from HeadState() as non-fatal
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge remote-tracking branch 'upstream/master'
* Merge branch 'master' of github.com:mcdee/prysm
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Mirror run arguments in build
* Reset ssz to mainnet for testing
2019-11-27 15:34:57 +00:00
Nishant Das
3682bf1cda Check Best Peer Before Syncing (#4128)
* add check
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into checkBestPeer
* Merge refs/heads/master into checkBestPeer
2019-11-27 06:56:02 +00:00
Preston Van Loon
e203f66fe0 DB Improvements: Snappy compression, remove some unnecessary batch / goroutines (#4125)
* do not use batch for SaveAttestations
* use snappy compression
* Encode / decode everything with snappy
* Add snappy migration path
* batch is probably fine...
* fix test
* gofmt
* Merge branch 'master' of github.com:prysmaticlabs/prysm into remove-batch-attestations
* add sanity check
* remove that thing
* gaz
* Merge branch 'master' of github.com:prysmaticlabs/prysm into remove-batch-attestations
2019-11-27 06:32:56 +00:00
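
The snappy change wraps every value written to the key-value store in a cheap compression pass. A minimal sketch of the encode/decode helpers, assuming the github.com/golang/snappy package; the real change also carries a migration path for previously-uncompressed values.

```go
package main

import (
	"fmt"

	"github.com/golang/snappy"
)

// encode compresses a value before it is written to the database.
func encode(value []byte) []byte {
	return snappy.Encode(nil, value)
}

// decode decompresses a value read back from the database.
func decode(stored []byte) ([]byte, error) {
	return snappy.Decode(nil, stored)
}

func main() {
	in := []byte("serialized beacon block bytes")
	out, err := decode(encode(in))
	fmt.Println(string(out), err)
}
```
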
terence tsao
04df922ac9 Add votes to tree graph (#4127)
* Added block tree tool
* Gaz
* Updated workspace
* Playing around
* Adding votes
* Votes work
* Comments
* Gaz
* Add tools to subpackage
* Merge branch 'master' into block-tree-tool-1
2019-11-27 06:07:52 +00:00
Raul Jordan
0326be86b5 Apply Patch Rules to Use EthereumAPIs Generated Protos in Prysm (#4112)
* starting on patch
* finish determining all required patches
* properly redefine the patch rules
* new patch
* rem double semicolon
* fix patch file
* Merge branch 'master' of github.com:prysmaticlabs/prysm into deprecate-eth-protos
* building the deps
* test target passes using ethereumapis
* compile gateway
* attempting to build everything
* e2e use ethereumapis
* more fixes for slasher
* other item
* getting closer to compiling slasher
* build slasher package
* Merge branch 'master' into deprecate-eth-protos
* Merge branch 'master' into deprecate-eth-protos
* fix benches
* lint gazelle
* Merge branch 'deprecate-eth-protos' of github.com:prysmaticlabs/prysm into deprecate-eth-protos
* proper gateway
* lint
* Merge branch 'master' into deprecate-eth-protos
* fix build
* Merge branch 'deprecate-eth-protos' of github.com:prysmaticlabs/prysm into deprecate-eth-protos
* use swag
* resolve
* ignore change
* include new patch changes
* fix test
* builds
* fix e2e
* gaz
2019-11-27 05:08:18 +00:00
Nishant Das
a7ccd52a95 Save Deposit Contract Address (#4114)
* save contract address
* Update beacon-chain/node/node.go
* Merge branch 'master' into saveContract
* Merge refs/heads/master into saveContract
2019-11-26 21:01:56 +00:00
terence tsao
1ced4754db Add signatures to logs (#4095)
* Enhance logging with sig
* Fixed
* Merge branch 'master' into add-sig
* Merge branch 'master' into add-sig
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into add-sig
* signature via debug
* Merge branch 'add-sig' of https://github.com/prysmaticlabs/prysm into add-sig
* Merge branch 'master' into add-sig
* Merge branch 'master' into add-sig
* Merge branch 'master' into add-sig
2019-11-26 20:36:18 +00:00
terence tsao
b872f74fd3 Do not save duplicated indices (#4118)
* Added a duplication test

* Refactor

* Updated test

* Do not save dups for indices bucket
2019-11-26 13:26:35 -06:00
Ivan Martinez
c1c48a8af5 Create Benchmarks Package for State Transition (#3688)
* Begin benchmarks file for block processing
* Complete block processing benchmarks
* Begin epoch benchmarks
* Write most of epoch benchmarks
* Start config
* Make cases for max conditions
* Begin work on benchmarking doc
* Update benchmark numbers
* Complete epoch benchmarks
* Minor changes
* Make createFullBlock function
* Clean up block benchmarks
* Begin fixing merge issues
* Start adding 4M benchmarks
* Almost finish epoch benchmarks
* Test blocks under real life conditions
* More progress on benchmarks
* Fixes
* Fix benchmark errors
* Begin fixing benchmarks
* More progress on tests
* Complete epoch benchmarks
* More progress on block benches
* Finish epoch benchmarks, get progress on block benchmarks
* Undo unneeded changes
* Fix
* Fix block benchmarks
* Complete block benchmarks
* Finish block benchmarks
* Complete benchmarks
* Increase block benchmarks to 65536
* Fix everything
* Reset configs after benchmarks
* Fix logging and suggestions
* Fix comments
* Fix benchmarks after merge
* Fix merge issues
* Add sanity tests for benchmark
* Make sanity check simpler
* Begin fixing after merge
* Add log
* Remove extra line
* Remove unneeded change
* Finally get block benchmarks to pass
* Begin fixing epoch test
* Finetuning constants
* Revert "Finetuning constants"

This reverts commit a872790d67.
* Finetuning
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Add benches for helper functions
* Abstract block generation to testutil
* Create block generation util in testutil
* Gazelle
* Fix deps
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into block-util
* Fix imports
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into block-util
* Merge branch 'master' into block-util
* Change tests to use config and fix integer division
* Merge branch 'block-util' of https://github.com/0xKiwi/prysm into block-util
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into block-util
* Remove logs
* Fix build
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Merge branch 'master' into block-util
* Add test to ensure finalization occurs
* Add check for finalization
* Merge branch 'block-util' of https://github.com/0xKiwi/prysm into block-util
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into block-util
* Add comment for incrementing the state
* Fix test
* Fix test
* Merge branch 'master' into block-util
* Fix testutil use
* Fix tests
* Change var name
* Merge branch 'master' into block-util
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Merge branch 'block-util' of https://github.com/0xKiwi/prysm into new-benchmarks
* Begin cleaning benchmarks
* Get some numbers going
* Use state saved to disk
* Remove cruft
* Cleanup
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Fix merge arrows
* Set up block util and benchmarks for 128 attestations
* Use intended config for benchmark
* Add more benchmark functions
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Add benchmark epoch and modify block gen config to exclude signing
* Cleanup
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Begin unstaleling
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Update block gen util to v0.9 changes
* Prepare benchmarks to use marshalled files
* Cleanup block gen tool some more
* split up into file generation and benchmarking
* Remove logrus
* Merge branch 'master' into new-benchmarks
* Get benchmarks work, start work on process epoch benchmark
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Merge branch 'new-benchmarks' of https://github.com/0xKiwi/prysm into new-benchmarks
* All benchmarks working
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Fix after merge
* Cleanup
* Add bazel target
* Added TestBenchmarkExecuteStateTransition_WithCache
* Change tests to use SSZ and begin making binary
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Merge branch 'new-benchmarks' of https://github.com/0xKiwi/prysm into new-benchmarks
* bazel binary
* Fully change to binary
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Create go_binary to handle benchmark files
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Gofmt
* Remove genesis state from generated files
* Fix tests
* Gazelle
* Fix tests
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Fix block util
* Allow attestations to be in future for block util
* Fix inclusion delay issue
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Finally fix test
* Add README detailing usage and results
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Change test to run with bazel test
* Fix imports
* Merge branch 'master' into new-benchmarks
* Accidentally removed  config change
* Merge branch 'new-benchmarks' of https://github.com/0xKiwi/prysm into new-benchmarks
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-benchmarks
* Move to core/state/
* Update readme
* Gazelle
* Remove test for cached block
2019-11-26 18:09:57 +00:00
Nishant Das
b88e6dc918 Speed Up Block Processing In Sync (#4075)
* fix proto
* make them non-batched
* gate behind flag
* fix refs
* fix refs
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into speedUpProcessing
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into speedUpProcessing
* use global archiver flags
* lint
* Merge branch 'master' into speedUpProcessing
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into speedUpProcessing
* preston's review
* Merge branch 'speedUpProcessing' of https://github.com/prysmaticlabs/geth-sharding into speedUpProcessing
* Merge branch 'master' into speedUpProcessing
* Merge branch 'master' into speedUpProcessing
* Merge branch 'master' into speedUpProcessing
2019-11-26 07:15:54 +00:00
terence tsao
3868837471 Tool to dump graphviz data for block tree (#4108) 2019-11-25 21:06:25 -08:00
Preston Van Loon
60b1596c4d Lower conn mgr grace period to 1s (#4109)
* Update options.go
2019-11-26 01:32:10 +00:00
Raul Jordan
4f0dcd5e6e Prevent Requesting Current Epoch for Validator Participation (#4104)
* cannot request current epoch
* test for prev epoch instead
* Merge branch 'master' into no-curr-epoch-participation
* Merge branch 'master' into no-curr-epoch-participation
2019-11-25 23:31:21 +00:00
Preston Van Loon
ac405c714f Enforce --p2p-max-peers (#4106)
* Enforce p2p-max-peers
* high == low
2019-11-25 18:55:20 +00:00
Nishant Das
7d0e5a9dc4 Use Latest Vote Map (#4102)
* add latest vote map
* fix all tests
* remove db crud methods
* Merge branch 'master' into latestVoteMap
* preston's review
* Merge branch 'latestVoteMap' of https://github.com/prysmaticlabs/geth-sharding into latestVoteMap
2019-11-25 16:34:20 +00:00
Raul Jordan
feb1267fee Properly Return Finalized Epoch in GetValidatorParticipation Archival Endpoint (#4091)
* properly handle retrieving archived finalized epochs
* test passes for determining if epoch finalized
* Merge branch 'master' into archive-finality
* Merge refs/heads/master into archive-finality
* Merge refs/heads/master into archive-finality
* Merge refs/heads/master into archive-finality
* Merge refs/heads/master into archive-finality
* Merge refs/heads/master into archive-finality
* Merge branch 'master' into archive-finality
* prevent setup panic
* Merge branch 'archive-finality' of github.com:prysmaticlabs/prysm into archive-finality
* Merge refs/heads/master into archive-finality
2019-11-25 15:26:32 +00:00
Nishant Das
7a9c297206 Revert "Update Pending Queue (#4066)" (#4101)
This reverts commit a264a097cc.
2019-11-24 21:05:51 -08:00
terence tsao
21deed0fb7 Revert "Add Lock When Accessing Checkpoints" (#4094)
* Revert "Add Lock When Accessing Checkpoints (#4086)"

This reverts commit 2f392544a6.
* Merge branch 'master' into revert-4086-checkpointLock
2019-11-24 17:12:56 +00:00
terence tsao
627791c54e Complete finalization metrics (#4096)
* Complete finalization metrics

* Fixed test
2019-11-23 17:12:29 -08:00
Jim McDonald
3358bde42d Feedfixes (#4093)
* Migrate ChainStarted and StateInitialized to state notifier
* Provide state notifier to powchain service
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Remove commented line
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge remote-tracking branch 'upstream/master'
* Merge branch 'master' of github.com:mcdee/prysm
* Accept err from HeadState() as non-fatal
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge remote-tracking branch 'upstream/master'
* Merge branch 'master' of github.com:mcdee/prysm
* Explicit unsubscribes from state channels where required
2019-11-23 11:15:02 +00:00
Jim McDonald
9e45cffabc Move StateInitialized and ChainStarted to state feed (#4084)
* Migrate ChainStarted and StateInitialized to state notifier
* Provide state notifier to powchain service
* Merge remote-tracking branch 'upstream/master'
* Merge remote-tracking branch 'upstream/master'
* Remove commented line
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge remote-tracking branch 'upstream/master'
* Merge branch 'master' of github.com:mcdee/prysm
* Accept err from HeadState() as non-fatal
* Merge branch 'master' into master
* Merge branch 'master' into master
* Merge branch 'master' into master
2019-11-23 03:35:47 +00:00
terence tsao
2c8ff7b36f Update attester to wait till one-third (#4090)
* Update to 1/3
* Use tag
* Test
* Fixed test
* Merge branch 'master' into update-1/3
* Merge branch 'master' into update-1/3
2019-11-23 02:23:20 +00:00
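
Attesters are expected to wait until one third of the slot has elapsed (or a block arrives) before producing an attestation. A sketch of computing that delay from the genesis time and slot duration, using an assumed constant rather than Prysm's params package:

```go
package main

import (
	"fmt"
	"time"
)

const secondsPerSlot = 12 // assumed mainnet value

// oneThirdMark returns the point in time one third into the given slot.
func oneThirdMark(genesis time.Time, slot uint64) time.Time {
	slotStart := genesis.Add(time.Duration(slot*secondsPerSlot) * time.Second)
	return slotStart.Add(secondsPerSlot * time.Second / 3)
}

func main() {
	genesis := time.Now().Add(-90 * time.Second)
	target := oneThirdMark(genesis, 7)
	// The validator sleeps until the one-third mark before attesting.
	fmt.Println("wait for", time.Until(target).Round(time.Millisecond))
}
```
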
terence tsao
a7ec0679b5 Implement wait till two thirds (#4089)
* Implement two thirds

* Test
2019-11-22 16:48:40 -06:00
Preston Van Loon
f717c5d852 Release --prune-finalized-states to all (#4082)
* Release --prune-finalized-states to all
* Merge branch 'master' into deprecate-ff-prune-finalized-states
* Merge branch 'master' of github.com:prysmaticlabs/prysm into deprecate-ff-prune-finalized-states
* Merge refs/heads/master into deprecate-ff-prune-finalized-states
* Merge refs/heads/master into deprecate-ff-prune-finalized-states
* Merge refs/heads/master into deprecate-ff-prune-finalized-states
* Merge refs/heads/master into deprecate-ff-prune-finalized-states
* Merge branch 'master' of github.com:prysmaticlabs/prysm into deprecate-ff-prune-finalized-states
2019-11-22 20:29:46 +00:00
Preston Van Loon
0cec0ee6c3 Release --optimize-process-epoch to all (#4080)
* Release optimize-process-epoch to all
* Merge branch 'master' into deprecate-ff-optimize-process-epoch
* Merge branch 'master' of github.com:prysmaticlabs/prysm into deprecate-ff-optimize-process-epoch
* Merge refs/heads/master into deprecate-ff-optimize-process-epoch
* Merge refs/heads/master into deprecate-ff-optimize-process-epoch
* Merge refs/heads/master into deprecate-ff-optimize-process-epoch
* Merge refs/heads/master into deprecate-ff-optimize-process-epoch
2019-11-22 09:40:53 +00:00
Nishant Das
2f392544a6 Add Lock When Accessing Checkpoints (#4086)
* fix data races
* Merge branch 'master' into checkpointLock
* Merge branch 'master' into checkpointLock
* Merge refs/heads/master into checkpointLock
2019-11-22 06:34:42 +00:00
Preston Van Loon
75ce8359eb Buildkite: Disable failing BES (#4087)
* Disable failing BES
2019-11-22 06:11:41 +00:00
terence tsao
f5cb04012e Aggregator selection from RPC to validator client (#4071)
* Config
* Updated proto
* Updated pool
* Updated RPC
* Updated validator client
* run time works
* Clean ups
* Fix tests
* Visibility
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into aggregator
* Raul's feedback
* Tests for RPC server
* Tests for validator client
* Span
* More tests
* Use go routine for SubmitAggregateAndProof
* Go routines
* Updated comments
* Use array of roles
* Fixed tests
* Build
* Update validator/client/runner.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update validator/client/runner.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* If
* Merge branch 'refactor-validator-roles' of https://github.com/prysmaticlabs/prysm into refactor-validator-roles
* Empty
* Feedback
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into aggregator
* Removed proto/eth/v1alpha1/shard_chain.pb.go?
* Cleaned up
* Revert
* Comments
* Lint
* Comment
* Merge branch 'master' into aggregator
2019-11-22 05:11:38 +00:00
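Aggregator selection (#4071) follows the spec rule that a validator aggregates for a committee when the first eight bytes of the hash of its slot signature are zero modulo `len(committee) / TARGET_AGGREGATORS_PER_COMMITTEE`. A sketch of that check, assuming SHA-256 and a target of 16 aggregators per committee rather than quoting Prysm's exact helpers:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// targetAggregatorsPerCommittee mirrors the spec constant; treat the exact
// value as an assumption for this sketch.
const targetAggregatorsPerCommittee = 16

// isAggregator reports whether the owner of slotSig should aggregate for a
// committee of committeeLen members.
func isAggregator(committeeLen uint64, slotSig []byte) bool {
	modulo := committeeLen / targetAggregatorsPerCommittee
	if modulo < 1 {
		modulo = 1
	}
	h := sha256.Sum256(slotSig)
	return binary.LittleEndian.Uint64(h[:8])%modulo == 0
}

func main() {
	sig := make([]byte, 96) // placeholder for a BLS signature over the slot
	fmt.Println(isAggregator(128, sig))
}
```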
Raul Jordan
f461d1e024 Resolve Panic in ListValidatorBalances RPC (#4051)
* adding default response
* regression test for archival
* include full regression test
* Merge branch 'master' into resolve-panic-rpc
* Merge branch 'master' into resolve-panic-rpc
* Merge branch 'master' into resolve-panic-rpc
* Merge branch 'master' into resolve-panic-rpc
* listbal
* Merge branch 'master' into resolve-panic-rpc
* Merge branch 'master' into resolve-panic-rpc
* Merge branch 'master' into resolve-panic-rpc
* Merge refs/heads/master into resolve-panic-rpc
2019-11-22 04:39:28 +00:00
Preston Van Loon
bdbd0aaeb8 Deprecate feature flag --scatter (#4079)
* deprecate --scatter. issue #4031
* forgot one for #4061
* use deprecatedUsage
* hidden
* Merge branch 'master' into deprecate-ff-scatter
* Merge branch 'master' into deprecate-ff-scatter
2019-11-22 04:08:49 +00:00
Preston Van Loon
715d06a215 gRPC Gateway: Emit JSON empty fields by default (#4085)
* Emit JSON empty fields by default
* Merge branch 'master' into emit-json-empty-fields
2019-11-22 03:36:47 +00:00
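For #4085, grpc-gateway's JSON marshaler can be told to include zero-valued fields instead of omitting them. A sketch of that wiring with the v1 gateway runtime; the option names come from the upstream library, while the port and handler setup are illustrative rather than Prysm's gateway code:

```go
package main

import (
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/runtime"
)

func main() {
	// EmitDefaults makes the JSONPb marshaler keep fields that hold their
	// zero value rather than dropping them from the response body.
	mux := runtime.NewServeMux(
		runtime.WithMarshalerOption(runtime.MIMEWildcard, &runtime.JSONPb{EmitDefaults: true}),
	)
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```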
terence tsao
976a3af637 Refactor validator roles into an array (#4081) 2019-11-21 14:35:20 -08:00
Raul Jordan
8f8d2d36c0 Filter ListValidators by Active in RPC (#4061)
* update workspace

* include active filter

* fix up latest changes to match naming

* better comments, fix evaluators

* latest master

* filter items

* filter only active validators
2019-11-21 14:29:24 -06:00
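The active filter in #4061 reduces to the spec's definition of an active validator: one whose activation epoch has been reached and whose exit epoch has not. A minimal sketch of that predicate and the filtering pass; the struct is illustrative rather than the generated protobuf type:

```go
package main

import "fmt"

// Validator carries only the fields the activity check needs; the real type
// comes from the Ethereum APIs protobufs.
type Validator struct {
	ActivationEpoch uint64
	ExitEpoch       uint64
}

// isActive follows the spec predicate: activation_epoch <= epoch < exit_epoch.
func isActive(v Validator, epoch uint64) bool {
	return v.ActivationEpoch <= epoch && epoch < v.ExitEpoch
}

// activeIndices keeps the indices of validators active in the given epoch.
func activeIndices(vals []Validator, epoch uint64) []uint64 {
	var out []uint64
	for i, v := range vals {
		if isActive(v, epoch) {
			out = append(out, uint64(i))
		}
	}
	return out
}

func main() {
	farFuture := ^uint64(0) // stand-in for FAR_FUTURE_EPOCH
	vals := []Validator{
		{ActivationEpoch: 0, ExitEpoch: 100},
		{ActivationEpoch: 50, ExitEpoch: farFuture},
	}
	fmt.Println(activeIndices(vals, 10)) // [0]
}
```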
Nishant Das
a264a097cc Update Pending Queue (#4066)
* update queue

* fix test

* put this all in validate method

* remove ancestors too

* not needed

* terence's review

* period

* preston's review
2019-11-21 21:24:50 +08:00
shayzluf
4330839bc1 Add surround check to endpoint (#4065)
* first version of the watchtower api

* service files

* Begin work on grpc server

* More changes to server

* Renames and mock setup

* working test

* merge

* double propose detection test

* nishant review

* todo change

* gaz

* fix service

* gaz

* remove unused import

* gaz

* resolve circular dependency

* resolve circular dependency 2nd try

* remove package

* fix package

* fix test

* added tests

* gaz

* remove status check

* gaz

* remove context

* remove context

* change var name

* moved to rpc dir

* gaz

* remove server code

* gaz

* slasher server

* visibility change

* pb

* service update

* gaz

* slasher grpc server

* making it work

* setup db and start

* gaz

* service flags fixes

* grpc service running

* go imports

* remove new initializer

* gaz

* remove feature flags

* change back SetupSlasherDB

* fix SetupSlasherDB calls

* define err

* fix bad merge

* fix test

* fix imports

* fix imports

* fix imports

* add cancel

* comment stop

* fix cancel issue

* remove unneeded code

* bring back bad merge that removed TODO

* remove use of epoch as an input

* fixed slasher to be runnable again

* wait for channel close

* gaz

* small test

* flags fix

* fix flag order

* double vote detection

* remove source epoch from indexed attestation indices

* change server method to receive indexed attestation

* start implementation

* double vote detection

* proto

* pb

* fix comment

* add surround detection and retrieval to endpoint

* nishant review

* import fix

* fix miss order

* fix detection 0 case
added tests

* terence review
2019-11-21 12:41:23 +05:30
terence tsao
835418d1e3 Fix UpdateCommitteeCache slot (#4074) 2019-11-20 21:29:43 -08:00
Raul Jordan
ae07dc7962 Archive Data Even Through Skip Slots (#4054)
* red test first

* does not archive through skip slot

* test out at runtime

* underflow check

* fix tests

* rem info log
2019-11-19 23:53:28 -06:00
shayzluf
d071a0a90a Double vote detection (#4049)
* first version of the watchtower api

* service files

* Begin work on grpc server

* More changes to server

* Renames and mock setup

* working test

* merge

* double propose detection test

* nishant review

* todo change

* gaz

* fix service

* gaz

* remove unused import

* gaz

* resolve circular dependency

* resolve circular dependency 2nd try

* remove package

* fix package

* fix test

* added tests

* gaz

* remove status check

* gaz

* remove context

* remove context

* change var name

* moved to rpc dir

* gaz

* remove server code

* gaz

* slasher server

* visibility change

* pb

* service update

* gaz

* slasher grpc server

* making it work

* setup db and start

* gaz

* service flags fixes

* grpc service running

* go imports

* remove new initializer

* gaz

* remove feature flags

* change back SetupSlasherDB

* fix SetupSlasherDB calls

* define err

* fix bad merge

* fix test

* fix imports

* fix imports

* fix imports

* add cancel

* comment stop

* fix cancel issue

* remove unneeded code

* bring back bad merge that removed TODO

* remove use of epoch as an input

* fixed slasher to be runnable again

* wait for channel close

* gaz

* small test

* flags fix

* fix flag order

* double vote detection

* remove source epoch from indexed attestation indices

* change server method to receive indexed attestation

* start implementation

* double vote detection

* proto

* pb

* fix comment

* nishant review

* import fix

* Update slasher/db/indexed_attestations.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* terence feedback
2019-11-20 10:44:50 +05:30
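Double vote detection (#4049) flags two distinct attestations signed by the same validator for the same target epoch. A rough sketch that keeps an in-memory map from (validator, target epoch) to the first seen attestation data root; the real slasher persists this state in its database:

```go
package main

import "fmt"

type voteKey struct {
	validatorIdx uint64
	targetEpoch  uint64
}

// DoubleVoteDetector remembers the first data root seen per validator and
// target epoch, and reports any later, different root as a double vote.
type DoubleVoteDetector struct {
	seen map[voteKey][32]byte
}

func NewDoubleVoteDetector() *DoubleVoteDetector {
	return &DoubleVoteDetector{seen: make(map[voteKey][32]byte)}
}

// Check returns true when the attestation conflicts with one seen earlier.
func (d *DoubleVoteDetector) Check(validatorIdx, targetEpoch uint64, dataRoot [32]byte) bool {
	k := voteKey{validatorIdx, targetEpoch}
	prev, ok := d.seen[k]
	if !ok {
		d.seen[k] = dataRoot
		return false
	}
	return prev != dataRoot
}

func main() {
	d := NewDoubleVoteDetector()
	fmt.Println(d.Check(7, 12, [32]byte{1})) // false: first vote for epoch 12
	fmt.Println(d.Check(7, 12, [32]byte{2})) // true: conflicting second vote
}
```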
Ivan Martinez
2d7802c637 Rename featureconfig.Flag to Flags (#4063) 2019-11-19 21:03:00 -06:00
terence tsao
fcb663acde Implement aggregation helpers (#4062)
* Aggregation helpers

* Tests

* Config

* Faulty test cases

* Err
2019-11-19 20:24:39 -06:00
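The aggregation helpers in #4062 merge attestations that share the same data and have non-overlapping participation bits. A rough sketch of the bitfield side of that check using plain byte slices; Prysm itself works with the go-bitfield package and also aggregates the BLS signatures:

```go
package main

import "fmt"

// overlaps reports whether two equal-length participation bitfields share any
// set bit; only non-overlapping attestations can be merged.
func overlaps(a, b []byte) bool {
	for i := range a {
		if a[i]&b[i] != 0 {
			return true
		}
	}
	return false
}

// merge ORs two non-overlapping bitfields together; in the real helper the
// corresponding BLS signatures are aggregated alongside this.
func merge(a, b []byte) []byte {
	out := make([]byte, len(a))
	for i := range a {
		out[i] = a[i] | b[i]
	}
	return out
}

func main() {
	a := []byte{0b0001}
	b := []byte{0b0100}
	if !overlaps(a, b) {
		fmt.Printf("%04b\n", merge(a, b)[0]) // 0101
	}
}
```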
Raul Jordan
858dbbf038 Update Ethereum APIs and Match Schemas (#4059)
* update workspace

* include active filter

* fix up latest changes to match naming

* better comments, fix evaluators

* latest master

* Update proto/eth/v1alpha1/beacon_chain.proto
2019-11-19 18:36:45 -06:00
Jim McDonald
49c2dd2cfc Move the state notifier to a different module (#4058)
* Move state notifier to statefeed

* Updates to state notifier

* Create state feed in beacon node

* Formatting
2019-11-19 16:15:48 -06:00
terence tsao
7a22e98c0f Update ChainHead (#4053)
* Can build

* All tests pass

* Update beacon-chain/blockchain/chain_info.go

* Fix context

* Update chainhead

* Tests

* Tests

* e2e

* Update ordering

* Typo

* Use root to get slot

* Division
2019-11-19 13:33:13 -06:00
terence tsao
26da7c4114 Nil state fallback in Blockchain.HeadState() (#4042)
* Can build

* All tests pass

* Update beacon-chain/blockchain/chain_info.go

* Fix context
2019-11-19 10:12:50 -06:00
Nishant Das
7acb45d186 add one more return (#4050) 2019-11-19 09:40:15 -06:00
Preston Van Loon
24a5000e47 return to exit select loop rather than break (#4040) 2019-11-19 21:22:45 +08:00
Jim McDonald
65d920e13a Add generic state feed (#4004)
* Initial implementation of state feed

* Add instructions on adding new events

* Tidy up log messages

* Tidy up mock

* Update beacon-chain/core/statefeed/events.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/core/statefeed/events.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Remove unused BlockReceivedData

* Rename BlockHash to BlockRoot in BlockProcessedData

* Punctuation

* Use correct root for block processed event

* StateFeeder -> StateNotifier; fix up tests.

* Add Verified flag to BlockProcessed event

* Fix visibility in Bazel
2019-11-19 17:17:41 +08:00
Preston Van Loon
d27d18b192 Update protobuf to v3.10.1 and rules_docker to v0.12.1 (#4048)
* update protobuf v3.10.1

* update rules_docker
2019-11-18 23:02:25 -08:00
Nishant Das
0e88085661 add step fix (#4047) 2019-11-19 12:51:40 +08:00
Preston Van Loon
3f6435ac80 Fix server side beacon blocks by range (#4046) 2019-11-18 19:56:37 -08:00
Preston Van Loon
64b69d9216 remove fully async from bes upload (#4044) 2019-11-19 10:57:34 +08:00
Preston Van Loon
13207a9de5 Improve validator status method (#4032)
* Cleanup validatorStatus

* gaz

* fix tests

* fix tests
2019-11-18 16:47:02 -08:00
Raul Jordan
ab756ec094 Return Empty Results Instead of Pagination Error in RPC + Prevent Future Epoch Requests (#4030)
* return empty if no attestations

* list balances proper response

* standardize epoch error

* future epoch error test

* no results test

* no results in list attestations

* test for list blocks no results

* cannot request future epoch for balances rpc

* test for no results in balances

* adding tests for get validator

* cannot request future in participation

* useless conditional

* resolve old epoch test

* completed failing tests

* fix request bug
2019-11-18 17:24:33 -06:00
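The future-epoch guard in #4030 is a simple comparison against the chain's current epoch before any pagination happens. Sketched here with a hypothetical helper rather than Prysm's RPC types:

```go
package main

import (
	"errors"
	"fmt"
)

// validateEpoch rejects queries for epochs the chain has not reached yet;
// both arguments are assumed inputs for the sketch.
func validateEpoch(requested, current uint64) error {
	if requested > current {
		return errors.New("cannot retrieve information for a future epoch")
	}
	return nil
}

func main() {
	fmt.Println(validateEpoch(5, 10))  // <nil>
	fmt.Println(validateEpoch(20, 10)) // error
}
```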
terence tsao
499f05f34b Return error instead of logging (#4039) 2019-11-18 14:09:26 -08:00
Preston Van Loon
0077654fb5 Fix deleted branch from ethereumapis (#4034) 2019-11-18 13:14:49 -08:00
terence tsao
f8cac0fb41 RPC assignment Nil state check (#4033)
* State nil check and test

* One more check
2019-11-18 14:33:27 -06:00
shayzluf
607f086de9 Surround detection (#3967)
* min max span update logic

* add comment to exported method

* Update slasher/rpc/update_min_max_span.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/update_min_max_span.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/update_min_max_span.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/update_min_max_span_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/update_min_max_span.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/rpc/update_min_max_span.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* weak subjectivity error

* add context

* SlasherDb change to SlasherDB

* gaz

* raul feedback

* fix old problem

* gofmt goimports

* gaz

* import fix

* change order

* min max span detection

* added benchmark

* max diff without error

* Update slasher/rpc/detect_update_min_max_span_bench_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/db/indexed_attestations.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span_bench_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span_bench_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* raul feedback, benchmark fix

* raul feedback

* gaz

* fix merge

* bench fix

* another bench fix

* comments

* changed names of functions and proto

* name change fix

* name change fix

* fix test

* clarification comment

* change to interface

* Update proto/eth/v1alpha1/slasher.proto

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>

* Update slasher/rpc/detect_update_min_max_span.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* change order to reduce confusion

* Update proto/eth/v1alpha1/slasher.proto

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Apply suggestions from code review

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/rpc/detect_update_min_max_span.go

* Fix some comments

* terence feedback

* preston feedback

* fix test

* fix comments
2019-11-18 13:49:39 -06:00
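Surround vote detection (#3967) looks for a pair of attestations by the same validator where one source/target span strictly encloses the other (s1 < s2 and t2 < t1, or the reverse); the min-max span table exists to answer that question without scanning every past attestation. The direct pairwise condition is sketched below:

```go
package main

import "fmt"

// Vote is a simplified attestation carrying only source and target epochs.
type Vote struct {
	Source uint64
	Target uint64
}

// isSurround reports whether either vote surrounds the other, the slashable
// condition the min-max span index is built to detect quickly.
func isSurround(a, b Vote) bool {
	surrounds := func(x, y Vote) bool { return x.Source < y.Source && y.Target < x.Target }
	return surrounds(a, b) || surrounds(b, a)
}

func main() {
	older := Vote{Source: 2, Target: 10}
	newer := Vote{Source: 3, Target: 8} // sits strictly inside the older vote
	fmt.Println(isSurround(older, newer)) // true
}
```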
Nishant Das
3b18aee181 Handle Missing Logs (#4012)
* make it non-recursive

* add new test case

* Update beacon-chain/powchain/log_processing.go

* Update beacon-chain/powchain/log_processing_test.go

* Update beacon-chain/powchain/log_processing_test.go

* Update beacon-chain/powchain/log_processing.go

* Update beacon-chain/powchain/log_processing.go

* standardize error
2019-11-18 12:34:34 -06:00
terence tsao
f43a7c67f2 Process attestation to use operation service's pool (#4014)
* Starting

* Routine working

* Single client working

* Fixed all the tests

* Lint

* Gazelle

* 12

* Tests
2019-11-18 11:19:03 -06:00
Nishant Das
199ddc6cdb Update To Latest Eth API (#4028)
* update to latest

* add container

* update to current eth repo

* fix test

* change to signing root

* gaz

* fix test

* fix test
2019-11-18 10:15:45 -06:00
Nishant Das
023dfebc73 Revert "Reverts the Revert (#4011)" (#4026)
This reverts commit c4ca8a47b3.
2019-11-17 22:48:22 -08:00
Nishant Das
53c4a26184 Optimize Processing Of Past Logs (#4015)
* add test and new code

* fix failing test

* better clean up

* change back to debug

* remove space
2019-11-17 10:16:20 -06:00
terence tsao
5acc362f7e End slot can't be greater than start slot (#4008) 2019-11-15 16:23:43 -08:00
Ivan Martinez
68edad13bc End To End Tests for Demo and Minimal config (#3932)
* Begin working on end to end tests using geth dev chain

* Start on beacon node set up

* More progress on bnode setup

* Complete flow until chainstart, begin work on evaluators

* More progress on evaluators

* Start changing bazel run to direct binary

* Move endtoend to inside beacon-chain

* use bazel provided geth, use bazel test

* tempdir

* use fork rules_go

* Change to use UUID dir and bazel binaries

* Truncate UUID a bit

* Get full run from chainstart to evaluating

* Rewrite to react to logs rather than arbitrarily wait

* Fix export

* Move evaluators to evaluators.go

* Add peer check test

* Add more comments

* Remove unneeded exports

* Check all nodes have the correct amount of peers

* Change name to onGenesisEpoch

* Remove extra wait times where not needed

* Cleanup

* Add log for beacon start

* Fix deposit amount

* Make room for eth1 follow distance

* Cleanup and fix minimal test

* Goimports

* Fix imports

* gazelle and minimal

* manual

* Fix for comments

* Make timing rely on reading logs, and cleanup

* Fix for comments

* Fix workspace

* Cleanup

* Fix visibility

* Cleanup and some comments

* Address comments

* Fix for v0.9

* Modify for v0.9

* Move to own package outside of beacon-chain

* Gazelle

* Polishing, logging

* Fix filenames

* Add more logs

* Add flag logging

* Cover for page not having libp2p info

* Improve multiAddr detection

* Add more logs

* Add missing flags

* Add log printing to defer

* Get multiAddr from logs

* Fix logging and detection

* Change evaluators to rely on EpochTimer

* Add evaluator for ValidatorParticipation

* Fix validator participation evaluator

* Cleanup, comments and fix participation calculation

* Cleanup

* Let the file searcher search for longer

* Change participation to check for full

* Log out file contents if text isn't found

* Split into different files

* Disable IPC and use RPC instead, change tmp dir to bazel dir

* Change visibility

* Gazelle

* Add e2e tag

* new line
2019-11-15 13:56:26 -05:00
shayzluf
bb2f329562 remove source epoch from indexed attestation indices (#4010) 2019-11-15 10:48:45 -06:00
Raul Jordan
5169209360 Properly Archive Active Set Changes (#4007)
* archiving information properly

* tests passing

* broken test fix
2019-11-15 10:45:02 -06:00
Nishant Das
c4ca8a47b3 Reverts the Revert (#4011)
* Revert "Revert "Change BLS Library to Herumi (#3752)" (#4006)"

This reverts commit 904898e405.

* turn it on

* make all docker images with cgo deps static

* change back

* fix build

* switch back

* address gateway

* fix library again
2019-11-15 10:27:23 -06:00
Nishant Das
904898e405 Revert "Change BLS Library to Herumi (#3752)" (#4006)
This reverts commit 24583864b4.
2019-11-14 13:00:50 -05:00
Raul Jordan
7f96fcc51b nil check in active set changes (#4005) 2019-11-15 00:21:30 +08:00
Nishant Das
24583864b4 Change BLS Library to Herumi (#3752)
* change to herumi's bls

* change alias

* change to better

* add benchmark

* build

* change to bazel fork

* fix prefix

* make it work with library

* update to latest

* change again

* add import

* update to latest

* add sha commit

* new static lib with groups swapped

* using herumis new lib

* fix dep paths in c headers

* update again

* new changes

* fix commit

* fix serialization

* comment

* fix test

* fix to herumis latest version

* fix test

* fix benchmarks

* add new workspace

* change commit and remove init

* get test to pass

* remove parameter

* remove reverse byte order

* make gazelle happy

* set pure to off

* fix failing tests

* remove old ref

* use HashWithDomain functions

* update to latest version

* clean up

* gaz

* add back removed code

* switch off pure
2019-11-14 09:51:42 -06:00
Celeste A.S
db9153e8e4 Local dev instructions added (#3980)
* Interop instructions added

Interop instructions have been merged to the main README in addition to a number of formatting adjustments

* Interop instruction adjustments

* Formatting adjustments

Changes to resolve PR comments
2019-11-13 15:43:38 -06:00
Raul Jordan
cd6e3e8a09 Productionize RPC Server Error Codes (#3994)
* carefully return grpc status codes in attester server

* import spacing

* work on status codes

* codes in validator

* most changes done

* gaz and imports

* done

* fix broken tests

* tests fixed
2019-11-13 15:03:12 -06:00
Preston Van Loon
fc7c530696 Use a data table for common power of 2 roots (#3995)
* use a data table for common power of 2 roots

* revert beacon-chain/rpc/proposer/server.go
2019-11-13 14:03:42 -06:00
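#3995 trades repeated integer square-root computation for a lookup table covering the power-of-two inputs that recur in balance math. A small sketch of the idea; the table contents are illustrative, and the float fallback is only adequate for a demo, not the exact integer method the real helper uses:

```go
package main

import (
	"fmt"
	"math"
)

// sqrtTable precomputes integer square roots for power-of-two inputs that
// show up repeatedly; the entries here are illustrative.
var sqrtTable = map[uint64]uint64{
	4: 2, 16: 4, 64: 8, 256: 16, 1024: 32,
	4096: 64, 16384: 128, 65536: 256, 1 << 20: 1 << 10,
}

// integerSquareRoot consults the table first and only falls back to a general
// computation on a miss.
func integerSquareRoot(n uint64) uint64 {
	if r, ok := sqrtTable[n]; ok {
		return r
	}
	return uint64(math.Sqrt(float64(n))) // approximation, fine for the sketch
}

func main() {
	fmt.Println(integerSquareRoot(16384)) // table hit: 128
	fmt.Println(integerSquareRoot(12345)) // fallback: 111
}
```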
Nishant Das
8f05f14b36 Validate Deposit Transactions (#3992)
* check deposit txs

* add comment

* gaz

* docker build

* Update tools/cluster-pk-manager/server/server.go

* Update tools/cluster-pk-manager/server/server.go
2019-11-13 10:31:57 -06:00
Jim McDonald
3b8701296b Avoid repeated hashing (#3981) 2019-11-14 00:03:27 +08:00
Raul Jordan
48f69c0762 better comment (#3990) 2019-11-13 23:37:23 +08:00
581 changed files with 32802 additions and 36414 deletions

View File

@@ -16,7 +16,9 @@ run --host_force_python=PY2
--experimental_sandbox_default_allow_network=false
# Use minimal protobufs at runtime
run --define ssz=minimal
run --define ssz=mainnet
test --define ssz=mainnet
build --define ssz=mainnet
# Prevent PATH changes from rebuilding when switching from IDE to command line.
build --incompatible_strict_action_env

View File

@@ -11,11 +11,10 @@ build:remote-cache --strategy=Closure=standalone
build:remote-cache --strategy=Genrule=standalone
# Build results backend.
build:remote-cache --bes_results_url="https://source.cloud.google.com/results/invocations/"
build:remote-cache --bes_backend=buildeventservice.googleapis.com
build:remote-cache --bes_timeout=60s
build:remote-cache --project_id=prysmaticlabs
build:remote-cache --bes_upload_mode=fully_async
#build:remote-cache --bes_results_url="https://source.cloud.google.com/results/invocations/"
#build:remote-cache --bes_backend=buildeventservice.googleapis.com
#build:remote-cache --bes_timeout=60s
#build:remote-cache --project_id=prysmaticlabs
# Prysm specific remote-cache properties.
build:remote-cache --disk_cache=

View File

@@ -31,27 +31,21 @@ alias(
alias(
name = "grpc_proto_compiler",
actual = "@io_bazel_rules_go//proto:gogofast_grpc",
visibility = [
"//proto:__subpackages__",
],
visibility = ["//visibility:public"],
)
# Protobuf gRPC compiler without gogoproto. Required for gRPC gateway.
alias(
name = "grpc_nogogo_proto_compiler",
actual = "@io_bazel_rules_go//proto:go_grpc",
visibility = [
"//proto:__subpackages__",
],
visibility = ["//visibility:public"],
)
# Protobuf gRPC gateway compiler
alias(
name = "grpc_gateway_proto_compiler",
actual = "@grpc_ecosystem_grpc_gateway//protoc-gen-grpc-gateway:go_gen_grpc_gateway",
visibility = [
"//proto:__subpackages__",
],
visibility = ["//visibility:public"],
)
gometalinter(
@@ -143,3 +137,9 @@ common_files = {
),
tags = ["manual"],
) for pair in binary_targets]
toolchain(
name = "built_cmake_toolchain",
toolchain = "@rules_foreign_cc//tools/build_defs/native_tools:built_cmake",
toolchain_type = "@rules_foreign_cc//tools/build_defs:cmake_toolchain",
)

View File

@@ -45,10 +45,9 @@ Open up two terminal windows, run:
```
bazel run //beacon-chain -- \
--no-genesis-delay \
--bootstrap-node= \
--deposit-contract 0xD775140349E6A5D12524C6ccc3d6A1d4519D4029 \
--clear-db \
--deposit-contract $(curl -s https://prylabs.net/contract) \
--force-clear-db \
--interop-num-validators 64 \
--interop-eth1data-votes
```
@@ -62,7 +61,6 @@ bazel run //validator -- --interop-num-validators 64
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.
specify which keys
### Launching from `genesis.ssz`
@@ -70,10 +68,9 @@ Assuming you generated a `genesis.ssz` file with 64 validators, open up two term
```
bazel run //beacon-chain -- \
--no-genesis-delay \
--bootstrap-node= \
--deposit-contract 0xD775140349E6A5D12524C6ccc3d6A1d4519D4029 \
--clear-db \
--deposit-contract $(curl -s https://prylabs.net/contract) \
--force-clear-db \
--interop-genesis-state /path/to/genesis.ssz \
--interop-eth1data-votes
```

234
README.md
View File

@@ -1,206 +1,236 @@
# Prysm: Ethereum 'Serenity' 2.0 Go Implementation
# Prysm: An Ethereum 2.0 Client Written in Go
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![ETH2.0_Spec_Version 0.8.1](https://img.shields.io/badge/ETH2.0%20Spec%20Version-v0.8.1-blue.svg)](https://github.com/ethereum/eth2.0-specs/commit/452ecf8e27c7852c7854597f2b1bb4a62b80c7ec)
[![ETH2.0_Spec_Version 0.9.3](https://img.shields.io/badge/ETH2.0%20Spec%20Version-v0.9.3-blue.svg)](https://github.com/ethereum/eth2.0-specs/tree/v0.9.3)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/KSA7rPr)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
This is the Core repository for Prysm, [Prysmatic Labs](https://prysmaticlabs.com)' [Go](https://golang.org/) implementation of the Ethereum protocol 2.0 (Serenity).
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the Ethereum 2.0 client specifications developed by [Prysmatic Labs](https://prysmaticlabs.com).
### Need assistance?
A more detailed set of installation and usage instructions as well as explanations of each component are available on our [official documentation portal](https://prysmaticlabs.gitbook.io/prysm/). If you still have questions, feel free to stop by either our [Discord](https://discord.gg/KSA7rPr) or [Gitter](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) and a member of the team or our community will be happy to assist you.
**Interested in what's next?** Be sure to read our [Roadmap Reference Implementation](https://github.com/prysmaticlabs/prysm/blob/master/docs/ROADMAP.md) document. This page outlines the basics of sharding as well as the various short-term milestones that we hope to achieve over the coming year.
A more detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the [official documentation portal](https://prysmaticlabs.gitbook.io/prysm/). If you still have questions, feel free to stop by either our [Discord](https://discord.gg/KSA7rPr) or [Gitter](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) and a member of the team or our community will be happy to assist you.
### Come join the testnet!
Participation is now open to the public in our testnet release for Ethereum 2.0 phase 0. Visit [prylabs.net](https://prylabs.net) for more information on the project itself or to sign up as a validator on the network.
Participation is now open to the public for our Ethereum 2.0 phase 0 testnet release. Visit [prylabs.net](https://prylabs.net) for more information on the project or to sign up as a validator on the network.
# Table of Contents
- [Dependencies](#dependencies)
- [Installation](#installation)
- [Build Via Docker](#build-via-docker)
- [Build Via Bazel](#build-via-bazel)
- [Running an Ethereum 2.0 Beacon Node](#running-an-ethereum-20-beacon-node)
- [Staking ETH: Running a Validator Client](#staking-eth-running-a-validator-client)
- [Installation](#installing-prysm)
- [Build via Docker](#build-via-docker)
- [Build via Bazel](#build-via-bazel)
- [Connecting to the public testnet: running a beacon node](#connecting-to-the-testnet-running-a-beacon-node)
- [Running via Docker](#running-via-docker)
- [Running via Bazel](#running-via-bazel)
- [Staking ETH: running a validator client](#staking-eth-running-a-validator-client)
- [Activating your validator: depositing 3.2 Goerli ETH](#activating-your-validator-depositing-32-göerli-eth)
- [Starting the validator with Bazel](#starting-the-validator-with-bazel)
- [Setting up a local ETH2 development chain](#setting-up-a-local-eth2-development-chain)
- [Installation and dependencies](#installation-and-dependencies)
- [Running a local beacon node and validator client](#running-a-local-beacon-node-and-validator-client)
- [Testing Prysm](#testing-prysm)
- [Contributing](#contributing)
- [License](#license)
## Dependencies
Prysm can be installed either with Docker **(recommended method)** or using our build tool, Bazel. The below instructions include sections for performing both.
**For Docker installations:**
- The latest release of [Docker](https://docs.docker.com/install/)
Prysm can be installed either with Docker **\(recommended\)** or using our build tool, Bazel. The below instructions include sections for performing both.
**For Bazel installations:**
- The latest release of [Bazel](https://docs.bazel.build/versions/master/install.html)
- A modern UNIX operating system (MacOS included)
#### **For Docker installations:**
## Installation
* The latest release of [Docker](https://docs.docker.com/install/)
#### **For Bazel installations:**
* The latest release of [Bazel](https://docs.bazel.build/versions/master/install.html)
* The latest release of `cmake`
* The latest release of `git`
* A modern UNIX operating system \(macOS included\)
## Installing Prysm
### Build via Docker
1. Ensure you are running the most recent version of Docker by issuing the command:
```
```text
docker -v
```
2. To pull the Prysm images from the server, issue the following commands:
```
2. To pull the Prysm images, issue the following commands:
```text
docker pull gcr.io/prysmaticlabs/prysm/validator:latest
docker pull gcr.io/prysmaticlabs/prysm/beacon-chain:latest
```
This process will also install any related dependencies.
### Build via Bazel
1. Open a terminal window. Ensure you are running the most recent version of Bazel by issuing the command:
```
```text
bazel version
```
2. Clone this repository and enter the directory:
```
2. Clone Prysm's [main repository](https://github.com/prysmaticlabs/prysm) and enter the directory:
```text
git clone https://github.com/prysmaticlabs/prysm
cd prysm
```
3. Build both the beacon chain node implementation and the validator client:
```
3. Build both the beacon chain node and the validator client:
```text
bazel build //beacon-chain:beacon-chain
bazel build //validator:validator
```
Bazel will automatically pull and install any dependencies as well, including Go and necessary compilers.
4. Build the configuration for the Prysm testnet by issuing the commands:
## Connecting to the testnet: running a beacon node
```
bazel build --define ssz=minimal //beacon-chain:beacon-chain
bazel build --define ssz=minimal //validator:validator
```
Below are instructions for initialising a beacon node and connecting to the public testnet. To further understand the role that the beacon node plays in Prysm, see [this section of the documentation.](https://prysmaticlabs.gitbook.io/prysm/how-prysm-works/overview-technical)
The binaries will be built in an architecture-dependent subdirectory of `bazel-bin`, and are supplied as part of Bazel's build process. To fetch the location, issue the command:
```
$ bazel build --define ssz=minimal //beacon-chain:beacon-chain
...
Target //beacon-chain:beacon-chain up-to-date:
bazel-bin/beacon-chain/linux_amd64_stripped/beacon-chain
...
```
In the example above, the beacon chain binary has been created in `bazel-bin/beacon-chain/linux_amd64_stripped/beacon-chain`.
## Running an Ethereum 2.0 Beacon Node
To understand the role that both the beacon node and validator play in Prysm, see [this section of our documentation](https://prysmaticlabs.gitbook.io/prysm/how-prysm-works/overview-technical).
**NOTE:** It is recommended to open up port 13000 on your local router to improve connectivity and receive more peers from the network. To do so, navigate to `192.168.0.1` in your browser and login if required. Follow along with the interface to modify your router's firewall settings. When this task is completed, append the parameter `--p2p-host-ip=$(curl -s ident.me)` to your selected beacon startup command presented in this section to use the newly opened port.
### Running via Docker
**Docker on Linux/Mac:**
#### **Docker on Linux/macOS:**
To start your beacon node, issue the following command:
```
docker run -v $HOME/prysm-data:/data -p 4000:4000 \
--name beacon-node \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--no-genesis-delay \
--datadir=/data
```
(Optional) If you want to enable gRPC, then run this command instead of the one above:
```
docker run -v $HOME/prysm-data:/data -p 4000:4000 -p 7000:7000 \
--name beacon-node \
```text
docker run -it -v $HOME/prysm:/data -p 4000:4000 --name beacon-node \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--datadir=/data \
--no-genesis-delay \
--grpc-gateway-port=7000
--init-sync-no-verify
```
You can stop the beacon node using `Ctrl+c` or with the following command:
=======
The beacon node can be halted by either using `Ctrl+c` or with the command:
```
```text
docker stop beacon-node
```
To restart the beacon node, issue the command:
To restart the beacon node, issue the following command:
```
```text
docker start -ai beacon-node
```
To delete a corrupted container, issue the command:
To delete a corrupted container, issue the following command:
```
```text
docker rm beacon-node
```
To recreate a deleted container and refresh the chain database, issue the start command with an additional `--force-clear-db` parameter:
To recreate a deleted container and refresh the chain database, issue the start command with an additional `--clear-db` parameter:
```
docker run -it -v $HOME/prysm-data:/data -p 4000:4000 --name beacon-node \
```text
docker run -it -v $HOME/prysm:/data -p 4000:4000 --name beacon-node \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--datadir=/data \
--force-clear-db
--clear-db
```
**Docker on Windows:**
#### **Docker on Windows:**
1) You will need to share the local drive you wish to mount to to container (e.g. C:).
1. Enter Docker settings (right click the tray icon)
2. Click 'Shared Drives'
3. Select a drive to share
4. Click 'Apply'
1. You will need to 'share' the local drive you wish to mount to \(e.g. C:\).
1. Enter Docker settings \(right click the tray icon\)
2. Click 'Shared Drives'
3. Select a drive to share
4. Click 'Apply'
2. You will next need to create a directory named `/prysm/` within your selected shared Drive. This folder will be used as a local data directory for Beacon Node chain data as well as account and keystore information required by the validator. Docker will **not** create this directory if it does not exist already. For the purposes of these instructions, it is assumed that `C:` is your prior-selected shared Drive.
3. To run the beacon node, issue the following command:
2) You will next need to create a directory named ```/tmp/prysm-data/``` within your selected shared Drive. This folder will be used as a local data directory for Beacon Node chain data as well as account and keystore information required by the validator. Docker will **not** create this directory if it does not exist already. For the purposes of these instructions, it is assumed that ```C:``` is your prior-selected shared Drive.
4) To run the beacon node, issue the command:
```
docker run -it -v c:/tmp/prysm-data:/data -p 4000:4000 gcr.io/prysmaticlabs/prysm/beacon-chain:latest --datadir=/data
```text
docker run -it -v c:/prysm/:/data -p 4000:4000 gcr.io/prysmaticlabs/prysm/beacon-chain:latest --datadir=/data --init-sync-no-verify --clear-db
```
### Running via Bazel
1) To start your Beacon Node with Bazel, issue the command:
To start your Beacon Node with Bazel, issue the following command:
```text
bazel run //beacon-chain -- --clear-db --datadir=$HOME/prysm
```
bazel run //beacon-chain -- --datadir=/tmp/prysm-data
```
This will sync up the Beacon Node with the latest head block in the network. Note that the beacon node must be **completely synced** before attempting to initialise a validator client, otherwise the validator will not be able to complete the deposit and funds will be lost.
This will sync up the beacon node with the latest head block in the network.
## Staking ETH: Running a Validator Client
**NOTE:** The beacon node must be **completely synced** before attempting to initialise a validator client, otherwise the validator will not be able to complete the deposit and **funds will be lost**.
Once your beacon node is up, the chain will be waiting for you to deposit 3.2 Goerli ETH into the Validator Deposit Contract to activate your validator (discussed in the section below). First though, you will need to create a validator client to connect to this node in order to stake and participate. Each validator represents 3.2 Goerli ETH being staked in the system, and it is possible to spin up as many as you desire in order to have more stake in the network.
### Activating Your Validator: Depositing 3.2 Goerli ETH
## Staking ETH: Running a validator client
Using your validator deposit data from the previous step, follow the instructions found on https://prylabs.net/participate to make a deposit.
Once your beacon node is up, the chain will be waiting for you to deposit 3.2 Goerli ETH into a [validator deposit contract](how-prysm-works/validator-deposit-contract.md) in order to activate your validator \(discussed in the section below\). First though, you will need to create this validator and connect to this node to participate in consensus.
It will take a while for the nodes in the network to process your deposit, but once your node is active, the validator will begin doing its responsibility. In your validator client, you will be able to frequently see your validator balance as it goes up over time. Note that, should your node ever go offline for a long period, you'll start gradually losing your deposit until you are removed from the system.
Each validator represents 3.2 Goerli ETH being staked in the system, and it is possible to spin up as many as you desire in order to have more stake in the network.
### Starting the validator with Bazel
### Activating your validator: depositing 3.2 Göerli ETH
To begin setting up a validator, follow the instructions found on [prylabs.net](https://prylabs.net) to use the Göerli ETH faucet and make a deposit. For step-by-step assistance with the deposit page, see the [Activating a Validator ](activating-a-validator.md)section of this documentation.
It will take a while for the nodes in the network to process a deposit. Once the node is active, the validator will immediately begin performing its responsibilities.
In your validator client, you will be able to frequently see your validator balance as it goes up over time. Note that, should your node ever go offline for a long period, a validator will start gradually losing its deposit until it is removed from the network entirely.
1. Open another terminal window. Enter your Prysm directory and run the validator by issuing the following command:
```
cd prysm
bazel run //validator
```
**Congratulations, you are now running Ethereum 2.0 Phase 0!**
## Setting up a local ETH2 development chain
This section outlines the process of setting up Prysm for local testing with other Ethereum 2.0 client implementations. See the [INTEROP.md](https://github.com/prysmaticlabs/prysm/blob/master/INTEROP.md) file for advanced configuration options. For more background information on interoperability development, see [this blog post](https://blog.ethereum.org/2019/09/19/eth2-interop-in-review/).
### Installation and dependencies
To begin setting up a local ETH2 development chain, follow the **Bazel** instructions found in the [dependencies](https://github.com/prysmaticlabs/prysm#dependencies) and [installation](https://github.com/prysmaticlabs/prysm#installation) sections respectively.
### Running a local beacon node and validator client
The example below will generate a beacon genesis state and initiate Prysm with 64 validators, with the genesis time set to your machine's UNIX time.
Open up two terminal windows. In the first, issue the command:
```text
bazel run //beacon-chain -- \
--no-genesis-delay \
--bootstrap-node= \
--deposit-contract $(curl https://prylabs.net/contract) \
--clear-db \
--interop-num-validators 64 \
--interop-eth1data-votes
```
Wait a moment for the beacon chain to start. In the other terminal, issue the command:
```text
bazel run //validator -- --interop-num-validators 64
```
This command will kickstart the system with your 64 validators performing their duties accordingly.
## Testing Prysm
To run the unit tests of our system, issue the command:
```
```text
bazel test //...
```
To run the linter, make sure you have [golangci-lint](https://github.com/golangci/golangci-lint) installed and then issue the command:
```
To run our linter, make sure you have [golangci-lint](https://github.com/golangci/golangci-lint) installed and then issue the command:
```text
golangci-lint run
```
## Contributing
We have put all of our contribution guidelines into [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/master/CONTRIBUTING.md)! Check it out to get started.
Want to get involved? Check out our [Contribution Guide](https://prysmaticlabs.gitbook.io/prysm/getting-involved/contribution-guidelines) to learn more!
## License
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)

218
WORKSPACE
View File

@@ -1,3 +1,5 @@
workspace(name = "prysm")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
@@ -8,21 +10,12 @@ http_archive(
url = "https://github.com/bazelbuild/bazel-skylib/archive/0.8.0.tar.gz",
)
http_archive(
name = "io_bazel_rules_go",
sha256 = "513c12397db1bc9aa46dd62f02dd94b49a9b5d17444d49b5a04c5a89f3053c1c",
urls = [
"https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/rules_go/releases/download/v0.19.5/rules_go-v0.19.5.tar.gz",
"https://github.com/bazelbuild/rules_go/releases/download/v0.19.5/rules_go-v0.19.5.tar.gz",
],
)
http_archive(
name = "bazel_gazelle",
sha256 = "7fc87f4170011201b1690326e8c16c5d802836e3a0d617d8f75c3af2b23180c4",
sha256 = "86c6d481b3f7aedc1d60c1c211c6f76da282ae197c3b3160f54bd3a8f847896f",
urls = [
"https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/bazel-gazelle/releases/download/0.18.2/bazel-gazelle-0.18.2.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/0.18.2/bazel-gazelle-0.18.2.tar.gz",
"https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/bazel-gazelle/releases/download/v0.19.1/bazel-gazelle-v0.19.1.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.19.1/bazel-gazelle-v0.19.1.tar.gz",
],
)
@@ -35,9 +28,18 @@ http_archive(
http_archive(
name = "io_bazel_rules_docker",
sha256 = "9ff889216e28c918811b77999257d4ac001c26c1f7c7fb17a79bc28abf74182e",
strip_prefix = "rules_docker-0.10.1",
url = "https://github.com/bazelbuild/rules_docker/archive/v0.10.1.tar.gz",
# sha256 = "9ff889216e28c918811b77999257d4ac001c26c1f7c7fb17a79bc28abf74182e",
strip_prefix = "rules_docker-0.12.1",
url = "https://github.com/bazelbuild/rules_docker/archive/v0.12.1.tar.gz",
)
http_archive(
name = "io_bazel_rules_go",
sha256 = "e88471aea3a3a4f19ec1310a55ba94772d087e9ce46e41ae38ecebe17935de7b",
urls = [
"https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/rules_go/releases/download/v0.20.3/rules_go-v0.20.3.tar.gz",
"https://github.com/bazelbuild/rules_go/releases/download/v0.20.3/rules_go-v0.20.3.tar.gz",
],
)
http_archive(
@@ -57,14 +59,15 @@ git_repository(
# https://github.com/gogo/protobuf/pull/582 is merged.
git_repository(
name = "com_github_gogo_protobuf",
commit = "ba06b47c162d49f2af050fb4c75bcbc86a159d5c", # v1.2.1, as of 2019-03-03
# v1.3.0 (latest) as of 2019-10-05
commit = "0ca988a254f991240804bf9821f3450d87ccbb1b",
patch_args = ["-p1"],
patches = [
"@io_bazel_rules_go//third_party:com_github_gogo_protobuf-gazelle.patch",
"//third_party:com_github_gogo_protobuf-equal.patch",
],
remote = "https://github.com/gogo/protobuf",
shallow_since = "1550471403 +0200",
shallow_since = "1567336231 +0200",
# gazelle args: -go_prefix github.com/gogo/protobuf -proto legacy
)
@@ -96,6 +99,24 @@ load(
_go_image_repos()
# Golang images
# This is using gcr.io/distroless/base
load(
"@io_bazel_rules_docker//go:image.bzl",
_go_image_repos = "repositories",
)
_go_image_repos()
# CC images
# This is using gcr.io/distroless/base
load(
"@io_bazel_rules_docker//cc:image.bzl",
_cc_image_repos = "repositories",
)
_cc_image_repos()
http_archive(
name = "prysm_testnet_site",
build_file_content = """
@@ -128,8 +149,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "5c5b65a961b5e7251435efc9548648b45142a07993ad3e100850c240cb76e9af",
url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.9.0/general.tar.gz",
sha256 = "72c6ee3c20d19736b1203f364a6eb0ddee2c173073e20bee2beccd288fdc42be",
url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.9.4/general.tar.gz",
)
http_archive(
@@ -144,8 +165,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "3b5f0168af4331d09da52bebc26609def9d11be3e6c784ce7c3df3596617808d",
url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.9.0/minimal.tar.gz",
sha256 = "a3cc860a3679f6f62ee57b65677a9b48a65fdebb151cdcbf50f23852632845ef",
url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.9.4/minimal.tar.gz",
)
http_archive(
@@ -160,8 +181,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "f3ff68508dfe9696f23506daf0ca895cda955e30398741e00cffa33a01b0565c",
url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.9.0/mainnet.tar.gz",
sha256 = "8fc1b6220973ca30fa4ddc4ed24d66b1719abadca8bedb5e06c3bd9bc0df28e9",
url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.9.4/mainnet.tar.gz",
)
http_archive(
@@ -171,6 +192,13 @@ http_archive(
url = "https://github.com/bazelbuild/buildtools/archive/bf564b4925ab5876a3f64d8b90fab7f769013d42.zip",
)
http_archive(
name = "com_github_herumi_bls_eth_go_binary",
sha256 = "15a41ddb0bf7d142ebffae68337f19c16e747676cb56794c5d80dbe388ce004c",
strip_prefix = "bls-go-binary-ac038c7cb6d3185c4a46f3bca0c99ebf7b191e16",
url = "https://github.com/nisdas/bls-go-binary/archive/ac038c7cb6d3185c4a46f3bca0c99ebf7b191e16.zip",
)
load("@com_github_bazelbuild_buildtools//buildifier:deps.bzl", "buildifier_dependencies")
buildifier_dependencies()
@@ -183,7 +211,7 @@ go_repository(
git_repository(
name = "com_google_protobuf",
commit = "09745575a923640154bcf307fba8aedff47f240a",
commit = "d09d649aea36f02c03f8396ba39a8d4db8a607e4",
remote = "https://github.com/protocolbuffers/protobuf",
shallow_since = "1558721209 -0700",
)
@@ -192,6 +220,28 @@ load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
# Group the sources of the library so that CMake rule have access to it
all_content = """filegroup(name = "all", srcs = glob(["**"]), visibility = ["//visibility:public"])"""
http_archive(
name = "rules_foreign_cc",
strip_prefix = "rules_foreign_cc-master",
url = "https://github.com/bazelbuild/rules_foreign_cc/archive/master.zip",
)
load("@rules_foreign_cc//:workspace_definitions.bzl", "rules_foreign_cc_dependencies")
rules_foreign_cc_dependencies([
"@prysm//:built_cmake_toolchain",
])
http_archive(
name = "librdkafka",
build_file_content = all_content,
strip_prefix = "librdkafka-1.2.1",
urls = ["https://github.com/edenhill/librdkafka/archive/v1.2.1.tar.gz"],
)
# External dependencies
go_repository(
@@ -209,7 +259,7 @@ go_repository(
go_repository(
name = "com_github_prysmaticlabs_go_ssz",
commit = "58b2f86b0f02f06e634db06dee0c838ad41849f8",
commit = "e24db4d9e9637cf88ee9e4a779e339a1686a84ee",
importpath = "github.com/prysmaticlabs/go-ssz",
)
@@ -589,22 +639,23 @@ go_repository(
go_repository(
name = "io_opencensus_go",
commit = "7bbec1755a8162b5923fc214a494773a701d506a", # v0.22.0
importpath = "go.opencensus.io",
sum = "h1:75k/FF0Q2YM8QYo07VPddOLBslDt1MZOdEslOHvmzAs=",
version = "v0.22.2",
)
go_repository(
name = "io_opencensus_go_contrib_exporter_jaeger",
commit = "5b8293c22f362562285c2acbc52f4a1870a47a33",
importpath = "contrib.go.opencensus.io/exporter/jaeger",
remote = "http://github.com/census-ecosystem/opencensus-go-exporter-jaeger",
vcs = "git",
sum = "h1:nhTv/Ry3lGmqbJ/JGvCjWxBl5ozRfqo86Ngz59UAlfk=",
version = "v0.2.0",
)
go_repository(
name = "org_golang_google_api",
commit = "aac82e61c0c8fe133c297b4b59316b9f481e1f0a", # v0.6.0
importpath = "google.golang.org/api",
sum = "h1:uMf5uLi4eQMRrMKhCplNik4U4H8Z6C1br3zOtAa/aDE=",
version = "v0.14.0",
)
go_repository(
@@ -694,8 +745,9 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p_pubsub",
build_file_proto_mode = "disable_global",
commit = "9f04364996b415168f0e0d7e9fc82272fbed4005", # v0.1.1
importpath = "github.com/libp2p/go-libp2p-pubsub",
sum = "h1:+Iz8zeI1KO6HX8cexU9g98cCGjae52Vujeg087SkuME=",
version = "v0.2.6-0.20191219233527-97846b574895",
)
go_repository(
@@ -786,8 +838,9 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p_discovery",
commit = "d248d63b0af8c023307da18ad7000a12020e06f0", # v0.1.0
importpath = "github.com/libp2p/go-libp2p-discovery",
sum = "h1:1p3YSOq7VsgaL+xVHPi8XAmtGyas6D2J6rWBEfz/aiY=",
version = "v0.2.0",
)
go_repository(
@@ -830,8 +883,9 @@ go_repository(
go_repository(
name = "com_github_google_gofuzz",
commit = "f140a6486e521aad38f5917de355cbf147cc0496", # v1.0.0
importpath = "github.com/google/gofuzz",
sum = "h1:A8PeW59pxE9IoFRqBp37U+mSNaQoZ46F1f0f863XSXw=",
version = "v1.0.0",
)
go_repository(
@@ -993,12 +1047,6 @@ go_repository(
importpath = "github.com/grpc-ecosystem/go-grpc-prometheus",
)
go_repository(
name = "com_github_karlseguin_ccache",
commit = "ec06cd93a07565b373789b0078ba88fe697fddd9", # v2.0.3
importpath = "github.com/karlseguin/ccache",
)
go_repository(
name = "com_github_libp2p_go_libp2p_connmgr",
commit = "b46e9bdbcd8436b4fe4b30a53ec913c07e5e09c9", # v0.1.1
@@ -1199,10 +1247,20 @@ go_repository(
importpath = "github.com/googleapis/gnostic",
)
go_repository(
name = "com_github_patrickmn_go_cache",
commit = "46f407853014144407b6c2ec7ccc76bf67958d93",
importpath = "github.com/patrickmn/go-cache",
)
go_repository(
name = "com_github_prysmaticlabs_ethereumapis",
commit = "c7f1fd03716c94dcc287a0d35905ed35b8a0afe1",
commit = "87118fb893cc6f32b25793d819790fd3bcce3221",
importpath = "github.com/prysmaticlabs/ethereumapis",
patch_args = ["-p1"],
patches = [
"//third_party:com_github_prysmaticlabs_ethereumapis-tags.patch",
],
)
go_repository(
@@ -1238,13 +1296,6 @@ go_repository(
version = "v0.0.0-20161005185022-dfcf01d20ee9",
)
go_repository(
name = "com_github_kilic_bls12-381",
importpath = "github.com/kilic/bls12-381",
sum = "h1:hCD4IWWYsETkACK7U+isYppKfB/6d54sBkCDk3k+w2U=",
version = "v0.0.0-20191005202515-c798d6202457",
)
go_repository(
name = "com_github_minio_highwayhash",
importpath = "github.com/minio/highwayhash",
@@ -1259,6 +1310,15 @@ go_repository(
version = "v0.0.0-20191002040644-a1355ae1e2c3",
)
go_repository(
name = "in_gopkg_confluentinc_confluent_kafka_go_v1",
importpath = "gopkg.in/confluentinc/confluent-kafka-go.v1",
patch_args = ["-p1"],
patches = ["//third_party:in_gopkg_confluentinc_confluent_kafka_go_v1.patch"],
sum = "h1:roy97m/3wj9/o8OuU3sZ5wildk30ep38k2x8nhNbKrI=",
version = "v1.1.0",
)
go_repository(
name = "com_github_naoina_toml",
importpath = "github.com/naoina/toml",
@@ -1349,3 +1409,67 @@ go_repository(
sum = "h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=",
version = "v1.7.0",
)
go_repository(
name = "com_github_protolambda_zssz",
commit = "632f11e5e281660402bd0ac58f76090f3503def0",
importpath = "github.com/protolambda/zssz",
)
go_repository(
name = "com_github_emicklei_dot",
commit = "f4a04130244d60cef56086d2f649b4b55e9624aa",
importpath = "github.com/emicklei/dot",
)
go_repository(
name = "com_github_googleapis_gax_go_v2",
importpath = "github.com/googleapis/gax-go/v2",
sum = "h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=",
version = "v2.0.5",
)
go_repository(
name = "com_github_golang_groupcache",
importpath = "github.com/golang/groupcache",
sum = "h1:uHTyIjqVhYRhLbJ8nIiOJHkEZZ+5YoOsAbD3sk82NiE=",
version = "v0.0.0-20191027212112-611e8accdfc9",
)
go_repository(
name = "com_github_uber_jaeger_client_go",
importpath = "github.com/uber/jaeger-client-go",
sum = "h1:HgqpYBng0n7tLJIlyT4kPCIv5XgCsF+kai1NnnrJzEU=",
version = "v2.20.1+incompatible",
)
go_repository(
name = "com_github_dgraph_io_ristretto",
commit = "99d1bbbf28e64530eb246be0568fc7709a35ebdd",
importpath = "github.com/dgraph-io/ristretto",
)
go_repository(
name = "com_github_cespare_xxhash",
commit = "d7df74196a9e781ede915320c11c378c1b2f3a1f",
importpath = "github.com/cespare/xxhash",
)
go_repository(
name = "com_github_ipfs_go_detect_race",
importpath = "github.com/ipfs/go-detect-race",
sum = "h1:qX/xay2W3E4Q1U7d9lNs1sU9nvguX0a7319XbyQ6cOk=",
version = "v0.0.1",
)
go_repository(
name = "com_github_dgraph_io_ristretto",
commit = "99d1bbbf28e64530eb246be0568fc7709a35ebdd",
importpath = "github.com/dgraph-io/ristretto",
)
go_repository(
name = "com_github_cespare_xxhash",
commit = "d7df74196a9e781ede915320c11c378c1b2f3a1f",
importpath = "github.com/cespare/xxhash",
)

View File

@@ -36,6 +36,7 @@ go_image(
"main.go",
"usage.go",
],
base = "//tools:cc_image",
goarch = "amd64",
goos = "linux",
importpath = "github.com/prysmaticlabs/prysm/beacon-chain",
@@ -79,7 +80,10 @@ docker_push(
go_binary(
name = "beacon-chain",
embed = [":go_default_library"],
visibility = ["//beacon-chain:__subpackages__"],
visibility = [
"//beacon-chain:__subpackages__",
"//endtoend:__pkg__",
],
)
go_test(

View File

@@ -7,14 +7,15 @@ go_library(
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/db:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/params:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
@@ -25,14 +26,17 @@ go_test(
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",

View File

@@ -5,13 +5,14 @@ import (
"fmt"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
)
@@ -21,31 +22,33 @@ var log = logrus.WithField("prefix", "archiver")
// Service defining archiver functionality for persisting checkpointed
// beacon chain information to a database backend for historical purposes.
type Service struct {
ctx context.Context
cancel context.CancelFunc
beaconDB db.Database
headFetcher blockchain.HeadFetcher
newHeadNotifier blockchain.NewHeadNotifier
newHeadRootChan chan [32]byte
ctx context.Context
cancel context.CancelFunc
beaconDB db.Database
headFetcher blockchain.HeadFetcher
participationFetcher blockchain.ParticipationFetcher
stateNotifier statefeed.Notifier
lastArchivedEpoch uint64
}
// Config options for the archiver service.
type Config struct {
BeaconDB db.Database
HeadFetcher blockchain.HeadFetcher
NewHeadNotifier blockchain.NewHeadNotifier
BeaconDB db.Database
HeadFetcher blockchain.HeadFetcher
ParticipationFetcher blockchain.ParticipationFetcher
StateNotifier statefeed.Notifier
}
// NewArchiverService initializes the service from configuration options.
func NewArchiverService(ctx context.Context, cfg *Config) *Service {
ctx, cancel := context.WithCancel(ctx)
return &Service{
ctx: ctx,
cancel: cancel,
beaconDB: cfg.BeaconDB,
headFetcher: cfg.HeadFetcher,
newHeadNotifier: cfg.NewHeadNotifier,
newHeadRootChan: make(chan [32]byte, 1),
ctx: ctx,
cancel: cancel,
beaconDB: cfg.BeaconDB,
headFetcher: cfg.HeadFetcher,
participationFetcher: cfg.ParticipationFetcher,
stateNotifier: cfg.StateNotifier,
}
}
@@ -67,41 +70,45 @@ func (s *Service) Status() error {
}
// We archive committee information pertaining to the head state's epoch.
func (s *Service) archiveCommitteeInfo(ctx context.Context, headState *pb.BeaconState) error {
currentEpoch := helpers.SlotToEpoch(headState.Slot)
proposerSeed, err := helpers.Seed(headState, currentEpoch, params.BeaconConfig().DomainBeaconProposer)
func (s *Service) archiveCommitteeInfo(ctx context.Context, headState *pb.BeaconState, epoch uint64) error {
proposerSeed, err := helpers.Seed(headState, epoch, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
return errors.Wrap(err, "could not generate seed")
}
attesterSeed, err := helpers.Seed(headState, currentEpoch, params.BeaconConfig().DomainBeaconAttester)
attesterSeed, err := helpers.Seed(headState, epoch, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return errors.Wrap(err, "could not generate seed")
}
info := &ethpb.ArchivedCommitteeInfo{
info := &pb.ArchivedCommitteeInfo{
ProposerSeed: proposerSeed[:],
AttesterSeed: attesterSeed[:],
}
if err := s.beaconDB.SaveArchivedCommitteeInfo(ctx, currentEpoch, info); err != nil {
if err := s.beaconDB.SaveArchivedCommitteeInfo(ctx, epoch, info); err != nil {
return errors.Wrap(err, "could not archive committee info")
}
return nil
}
// We archive active validator set changes that happened during the epoch.
func (s *Service) archiveActiveSetChanges(ctx context.Context, headState *pb.BeaconState) error {
activations := validators.ActivatedValidatorIndices(headState)
slashings := validators.SlashedValidatorIndices(headState)
exited, err := validators.ExitedValidatorIndices(headState)
// We archive active validator set changes that happened during the previous epoch.
func (s *Service) archiveActiveSetChanges(ctx context.Context, headState *pb.BeaconState, epoch uint64) error {
prevEpoch := epoch - 1
activations := validators.ActivatedValidatorIndices(prevEpoch, headState.Validators)
slashings := validators.SlashedValidatorIndices(prevEpoch, headState.Validators)
activeValidatorCount, err := helpers.ActiveValidatorCount(headState, prevEpoch)
if err != nil {
return errors.Wrap(err, "could not get active validator count")
}
exited, err := validators.ExitedValidatorIndices(prevEpoch, headState.Validators, activeValidatorCount)
if err != nil {
return errors.Wrap(err, "could not determine exited validator indices")
}
activeSetChanges := &ethpb.ArchivedActiveSetChanges{
activeSetChanges := &pb.ArchivedActiveSetChanges{
Activated: activations,
Exited: exited,
Slashed: slashings,
}
if err := s.beaconDB.SaveArchivedActiveValidatorChanges(ctx, helpers.CurrentEpoch(headState), activeSetChanges); err != nil {
if err := s.beaconDB.SaveArchivedActiveValidatorChanges(ctx, prevEpoch, activeSetChanges); err != nil {
return errors.Wrap(err, "could not archive active validator set changes")
}
return nil
@@ -109,60 +116,78 @@ func (s *Service) archiveActiveSetChanges(ctx context.Context, headState *pb.Bea
// We compute participation metrics by first retrieving the head state and
// matching validator attestations during the epoch.
func (s *Service) archiveParticipation(ctx context.Context, headState *pb.BeaconState) error {
participation, err := epoch.ComputeValidatorParticipation(headState, helpers.SlotToEpoch(headState.Slot))
if err != nil {
return errors.Wrap(err, "could not compute participation")
func (s *Service) archiveParticipation(ctx context.Context, epoch uint64) error {
p := s.participationFetcher.Participation(epoch)
participation := &ethpb.ValidatorParticipation{}
if p != nil {
participation = &ethpb.ValidatorParticipation{
EligibleEther: p.PrevEpoch,
VotedEther: p.PrevEpochTargetAttesters,
GlobalParticipationRate: float32(p.PrevEpochTargetAttesters) / float32(p.PrevEpoch),
}
}
return s.beaconDB.SaveArchivedValidatorParticipation(ctx, helpers.SlotToEpoch(headState.Slot), participation)
return s.beaconDB.SaveArchivedValidatorParticipation(ctx, epoch, participation)
}
// We archive validator balances and active indices.
func (s *Service) archiveBalances(ctx context.Context, headState *pb.BeaconState) error {
func (s *Service) archiveBalances(ctx context.Context, headState *pb.BeaconState, epoch uint64) error {
balances := headState.Balances
currentEpoch := helpers.CurrentEpoch(headState)
if err := s.beaconDB.SaveArchivedBalances(ctx, currentEpoch, balances); err != nil {
if err := s.beaconDB.SaveArchivedBalances(ctx, epoch, balances); err != nil {
return errors.Wrap(err, "could not archive balances")
}
return nil
}
func (s *Service) run(ctx context.Context) {
sub := s.newHeadNotifier.HeadUpdatedFeed().Subscribe(s.newHeadRootChan)
defer sub.Unsubscribe()
stateChannel := make(chan *feed.Event, 1)
stateSub := s.stateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
for {
select {
case r := <-s.newHeadRootChan:
log.WithField("headRoot", fmt.Sprintf("%#x", r)).Debug("New chain head event")
headState := s.headFetcher.HeadState()
if !helpers.IsEpochEnd(headState.Slot) {
continue
case event := <-stateChannel:
if event.Type == statefeed.BlockProcessed {
data := event.Data.(*statefeed.BlockProcessedData)
log.WithField("headRoot", fmt.Sprintf("%#x", data.BlockRoot)).Debug("Received block processed event")
headState, err := s.headFetcher.HeadState(ctx)
if err != nil {
log.WithError(err).Error("Head state is not available")
continue
}
currentEpoch := helpers.CurrentEpoch(headState)
if !helpers.IsEpochEnd(headState.Slot) && currentEpoch <= s.lastArchivedEpoch {
continue
}
epochToArchive := currentEpoch
if !helpers.IsEpochEnd(headState.Slot) {
epochToArchive--
}
if err := s.archiveCommitteeInfo(ctx, headState, epochToArchive); err != nil {
log.WithError(err).Error("Could not archive committee info")
continue
}
if err := s.archiveActiveSetChanges(ctx, headState, epochToArchive); err != nil {
log.WithError(err).Error("Could not archive active validator set changes")
continue
}
if err := s.archiveParticipation(ctx, epochToArchive); err != nil {
log.WithError(err).Error("Could not archive validator participation")
continue
}
if err := s.archiveBalances(ctx, headState, epochToArchive); err != nil {
log.WithError(err).Error("Could not archive validator balances and active indices")
continue
}
log.WithField(
"epoch",
epochToArchive,
).Debug("Successfully archived beacon chain data during epoch")
s.lastArchivedEpoch = epochToArchive
}
if err := s.archiveCommitteeInfo(ctx, headState); err != nil {
log.WithError(err).Error("Could not archive committee info")
continue
}
if err := s.archiveActiveSetChanges(ctx, headState); err != nil {
log.WithError(err).Error("Could not archive active validator set changes")
continue
}
if err := s.archiveParticipation(ctx, headState); err != nil {
log.WithError(err).Error("Could not archive validator participation")
continue
}
if err := s.archiveBalances(ctx, headState); err != nil {
log.WithError(err).Error("Could not archive validator balances and active indices")
continue
}
log.WithField(
"epoch",
helpers.CurrentEpoch(headState),
).Debug("Successfully archived beacon chain data during epoch")
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
return
case err := <-sub.Err():
log.WithError(err).Error("Subscription to new chain head notifier failed")
case err := <-stateSub.Err():
log.WithError(err).Error("Subscription to state feed notifier failed")
return
}
}
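For reference, the participation record written by archiveParticipation above reduces to a single ratio over the precomputed balances. The following standalone sketch uses a hypothetical Balance struct in place of precompute.Balance and plain return values in place of the protobuf message; it only makes the arithmetic, including the nil-snapshot case, explicit.

package main

import "fmt"

// Balance mirrors the two precompute fields the archiver reads
// (a hypothetical stand-in for precompute.Balance).
type Balance struct {
    PrevEpoch                uint64 // total eligible balance of the previous epoch, in Gwei
    PrevEpochTargetAttesters uint64 // balance that attested to the correct target, in Gwei
}

// participationFor mirrors archiveParticipation: a nil snapshot yields an
// empty record, otherwise the rate is voted / eligible.
func participationFor(p *Balance) (eligible, voted uint64, rate float32) {
    if p == nil {
        return 0, 0, 0
    }
    return p.PrevEpoch, p.PrevEpochTargetAttesters,
        float32(p.PrevEpochTargetAttesters) / float32(p.PrevEpoch)
}

func main() {
    eligible, voted, rate := participationFor(&Balance{
        PrevEpoch:                3200000000000,
        PrevEpochTargetAttesters: 2400000000000,
    })
    fmt.Println(eligible, voted, rate) // 3200000000000 2400000000000 0.75
}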

View File

@@ -8,13 +8,16 @@ import (
"testing"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
dbutil "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/sirupsen/logrus"
@@ -27,17 +30,24 @@ func init() {
params.OverrideBeaconConfig(params.MinimalSpecConfig())
}
func TestArchiverService_ReceivesNewChainHeadEvent(t *testing.T) {
func TestArchiverService_ReceivesBlockProcessedEvent(t *testing.T) {
hook := logTest.NewGlobal()
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: &pb.BeaconState{Slot: 1},
}
headRoot := [32]byte{1, 2, 3}
triggerNewHeadEvent(t, svc, headRoot)
testutil.AssertLogsContain(t, hook, fmt.Sprintf("%#x", headRoot))
testutil.AssertLogsContain(t, hook, "New chain head event")
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: [32]byte{1, 2, 3},
Verified: true,
},
}
triggerStateEvent(t, svc, event)
testutil.AssertLogsContain(t, hook, fmt.Sprintf("%#x", event.Data.(*statefeed.BlockProcessedData).BlockRoot))
testutil.AssertLogsContain(t, hook, "Received block processed event")
}
func TestArchiverService_OnlyArchiveAtEpochEnd(t *testing.T) {
@@ -46,20 +56,77 @@ func TestArchiverService_OnlyArchiveAtEpochEnd(t *testing.T) {
defer dbutil.TeardownDB(t, beaconDB)
// The head state is NOT an epoch end.
svc.headFetcher = &mock.ChainService{
State: &pb.BeaconState{Slot: params.BeaconConfig().SlotsPerEpoch - 3},
State: &pb.BeaconState{Slot: params.BeaconConfig().SlotsPerEpoch - 2},
}
triggerNewHeadEvent(t, svc, [32]byte{})
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: [32]byte{1, 2, 3},
Verified: true,
},
}
triggerStateEvent(t, svc, event)
// The context should have been canceled.
if svc.ctx.Err() != context.Canceled {
t.Error("context was not canceled")
}
testutil.AssertLogsContain(t, hook, "New chain head event")
testutil.AssertLogsContain(t, hook, "Received block processed event")
// The service should ONLY log any archival logs if we receive a
// head slot that is an epoch end.
testutil.AssertLogsDoNotContain(t, hook, "Successfully archived")
}
func TestArchiverService_ArchivesEvenThroughSkipSlot(t *testing.T) {
hook := logTest.NewGlobal()
svc, beaconDB := setupService(t)
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
defer dbutil.TeardownDB(t, beaconDB)
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: [32]byte{1, 2, 3},
Verified: true,
},
}
exitRoutine := make(chan bool)
go func() {
svc.run(svc.ctx)
<-exitRoutine
}()
// Send out an event every slot, skipping the end slot of the epoch.
for i := uint64(0); i < params.BeaconConfig().SlotsPerEpoch+1; i++ {
headState.Slot = i
svc.headFetcher = &mock.ChainService{
State: headState,
}
if helpers.IsEpochEnd(i) {
continue
}
// Send in a loop to ensure it is delivered (busy wait for the service to subscribe to the state feed).
for sent := 0; sent == 0; {
sent = svc.stateNotifier.StateFeed().Send(event)
}
}
if err := svc.Stop(); err != nil {
t.Fatal(err)
}
exitRoutine <- true
// The context should have been canceled.
if svc.ctx.Err() != context.Canceled {
t.Error("context was not canceled")
}
testutil.AssertLogsContain(t, hook, "Received block processed event")
// Even though there was a skip slot, we should still be able to archive
// upon the next block event afterwards.
testutil.AssertLogsContain(t, hook, "Successfully archived")
}
func TestArchiverService_ComputesAndSavesParticipation(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
@@ -69,9 +136,17 @@ func TestArchiverService_ComputesAndSavesParticipation(t *testing.T) {
svc.headFetcher = &mock.ChainService{
State: headState,
}
triggerNewHeadEvent(t, svc, [32]byte{})
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: [32]byte{1, 2, 3},
Verified: true,
},
}
triggerStateEvent(t, svc, event)
attestedBalance := uint64(1)
currentEpoch := helpers.CurrentEpoch(headState)
wanted := &ethpb.ValidatorParticipation{
VotedEther: attestedBalance,
@@ -85,7 +160,7 @@ func TestArchiverService_ComputesAndSavesParticipation(t *testing.T) {
}
if !proto.Equal(wanted, retrieved) {
t.Errorf("Wanted participation for epoch %d %v, retrieved %v", currentEpoch, wanted, retrieved)
t.Errorf("Wanted participation for epoch %d %v, retrieved %v", currentEpoch-1, wanted, retrieved)
}
testutil.AssertLogsContain(t, hook, "Successfully archived")
}
@@ -99,7 +174,14 @@ func TestArchiverService_SavesIndicesAndBalances(t *testing.T) {
svc.headFetcher = &mock.ChainService{
State: headState,
}
triggerNewHeadEvent(t, svc, [32]byte{})
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: [32]byte{1, 2, 3},
Verified: true,
},
}
triggerStateEvent(t, svc, event)
retrieved, err := svc.beaconDB.ArchivedBalances(svc.ctx, helpers.CurrentEpoch(headState))
if err != nil {
@@ -125,7 +207,14 @@ func TestArchiverService_SavesCommitteeInfo(t *testing.T) {
svc.headFetcher = &mock.ChainService{
State: headState,
}
triggerNewHeadEvent(t, svc, [32]byte{})
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: [32]byte{1, 2, 3},
Verified: true,
},
}
triggerStateEvent(t, svc, event)
currentEpoch := helpers.CurrentEpoch(headState)
proposerSeed, err := helpers.Seed(headState, currentEpoch, params.BeaconConfig().DomainBeaconProposer)
@@ -136,7 +225,7 @@ func TestArchiverService_SavesCommitteeInfo(t *testing.T) {
if err != nil {
t.Fatal(err)
}
wanted := &ethpb.ArchivedCommitteeInfo{
wanted := &pb.ArchivedCommitteeInfo{
ProposerSeed: proposerSeed[:],
AttesterSeed: attesterSeed[:],
}
@@ -165,16 +254,26 @@ func TestArchiverService_SavesActivatedValidatorChanges(t *testing.T) {
svc.headFetcher = &mock.ChainService{
State: headState,
}
currentEpoch := helpers.CurrentEpoch(headState)
delayedActEpoch := helpers.DelayedActivationExitEpoch(currentEpoch)
prevEpoch := helpers.PrevEpoch(headState)
delayedActEpoch := helpers.DelayedActivationExitEpoch(prevEpoch)
headState.Validators[4].ActivationEpoch = delayedActEpoch
headState.Validators[5].ActivationEpoch = delayedActEpoch
triggerNewHeadEvent(t, svc, [32]byte{})
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: [32]byte{1, 2, 3},
Verified: true,
},
}
triggerStateEvent(t, svc, event)
retrieved, err := beaconDB.ArchivedActiveValidatorChanges(svc.ctx, currentEpoch)
retrieved, err := beaconDB.ArchivedActiveValidatorChanges(svc.ctx, prevEpoch)
if err != nil {
t.Fatal(err)
}
if retrieved == nil {
t.Fatal("Retrieved indices are nil")
}
if !reflect.DeepEqual(retrieved.Activated, []uint64{4, 5}) {
t.Errorf("Wanted indices 4 5 activated, received %v", retrieved.Activated)
}
@@ -190,15 +289,25 @@ func TestArchiverService_SavesSlashedValidatorChanges(t *testing.T) {
svc.headFetcher = &mock.ChainService{
State: headState,
}
currentEpoch := helpers.CurrentEpoch(headState)
prevEpoch := helpers.PrevEpoch(headState)
headState.Validators[95].Slashed = true
headState.Validators[96].Slashed = true
triggerNewHeadEvent(t, svc, [32]byte{})
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: [32]byte{1, 2, 3},
Verified: true,
},
}
triggerStateEvent(t, svc, event)
retrieved, err := beaconDB.ArchivedActiveValidatorChanges(svc.ctx, currentEpoch)
retrieved, err := beaconDB.ArchivedActiveValidatorChanges(svc.ctx, prevEpoch)
if err != nil {
t.Fatal(err)
}
if retrieved == nil {
t.Fatal("Retrieved indices are nil")
}
if !reflect.DeepEqual(retrieved.Slashed, []uint64{95, 96}) {
t.Errorf("Wanted indices 95, 96 slashed, received %v", retrieved.Slashed)
}
@@ -214,19 +323,28 @@ func TestArchiverService_SavesExitedValidatorChanges(t *testing.T) {
svc.headFetcher = &mock.ChainService{
State: headState,
}
currentEpoch := helpers.CurrentEpoch(headState)
headState.Validators[95].ExitEpoch = currentEpoch + 1
headState.Validators[95].WithdrawableEpoch = currentEpoch + 1 + params.BeaconConfig().MinValidatorWithdrawabilityDelay
triggerNewHeadEvent(t, svc, [32]byte{})
retrieved, err := beaconDB.ArchivedActiveValidatorChanges(svc.ctx, currentEpoch)
prevEpoch := helpers.PrevEpoch(headState)
headState.Validators[95].ExitEpoch = prevEpoch
headState.Validators[95].WithdrawableEpoch = prevEpoch + params.BeaconConfig().MinValidatorWithdrawabilityDelay
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: [32]byte{1, 2, 3},
Verified: true,
},
}
triggerStateEvent(t, svc, event)
testutil.AssertLogsContain(t, hook, "Successfully archived")
retrieved, err := beaconDB.ArchivedActiveValidatorChanges(svc.ctx, prevEpoch)
if err != nil {
t.Fatal(err)
}
if retrieved == nil {
t.Fatal("Retrieved indices are nil")
}
if !reflect.DeepEqual(retrieved.Exited, []uint64{95}) {
t.Errorf("Wanted indices 95 exited, received %v", retrieved.Exited)
}
testutil.AssertLogsContain(t, hook, "Successfully archived")
}
func setupState(t *testing.T, validatorCount uint64) *pb.BeaconState {
@@ -262,23 +380,30 @@ func setupState(t *testing.T, validatorCount uint64) *pb.BeaconState {
func setupService(t *testing.T) (*Service, db.Database) {
beaconDB := dbutil.SetupDB(t)
ctx, cancel := context.WithCancel(context.Background())
validatorCount := uint64(100)
totalBalance := validatorCount * params.BeaconConfig().MaxEffectiveBalance
mockChainService := &mock.ChainService{}
return &Service{
beaconDB: beaconDB,
ctx: ctx,
cancel: cancel,
newHeadRootChan: make(chan [32]byte, 0),
newHeadNotifier: &mock.ChainService{},
beaconDB: beaconDB,
ctx: ctx,
cancel: cancel,
stateNotifier: mockChainService.StateNotifier(),
participationFetcher: &mock.ChainService{
Balance: &precompute.Balance{PrevEpoch: totalBalance, PrevEpochTargetAttesters: 1}},
}, beaconDB
}
func triggerNewHeadEvent(t *testing.T, svc *Service, headRoot [32]byte) {
func triggerStateEvent(t *testing.T, svc *Service, event *feed.Event) {
exitRoutine := make(chan bool)
go func() {
svc.run(svc.ctx)
<-exitRoutine
}()
svc.newHeadRootChan <- headRoot
// Send in a loop to ensure it is delivered (busy wait for the service to subscribe to the state feed).
for sent := 0; sent == 0; {
sent = svc.stateNotifier.StateFeed().Send(event)
}
if err := svc.Stop(); err != nil {
t.Fatal(err)
}
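The busy-wait loops above exist because the archiver only subscribes to the state feed once its run goroutine starts; sending on a feed with no subscribers simply reports zero deliveries. Prysm's shared/event package is a fork of go-ethereum's event package, and the sketch below uses the upstream package (assumed here to have the same Send semantics, i.e. Send returns the number of subscribers the value was delivered to) to show why the loop terminates as soon as the subscription exists.

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/event"
)

func main() {
    var feed event.Feed
    ch := make(chan int, 1)

    // With no subscribers yet, Send reports zero deliveries.
    fmt.Println(feed.Send(1)) // 0

    sub := feed.Subscribe(ch)
    defer sub.Unsubscribe()

    // Busy-wait until at least one subscriber receives the value,
    // mirroring triggerStateEvent in the tests above.
    sent := 0
    for sent == 0 {
        sent = feed.Send(42)
    }
    fmt.Println(sent, <-ch) // 1 42
}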

View File

@@ -17,23 +17,26 @@ go_library(
"//beacon-chain/blockchain/forkchoice:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/operations:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/event:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/params:go_default_library",
"//shared/slotutil:go_default_library",
"//shared/traceutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
@@ -68,15 +71,16 @@ go_test(
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/event:go_default_library",
"//shared/params:go_default_library",
"//shared/stateutil:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
@@ -103,7 +107,6 @@ go_test(
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/event:go_default_library",
"//shared/params:go_default_library",
@@ -112,6 +115,7 @@ go_test(
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",

View File

@@ -1,11 +1,15 @@
package blockchain
import (
"bytes"
"context"
"time"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -27,8 +31,10 @@ type GenesisTimeFetcher interface {
type HeadFetcher interface {
HeadSlot() uint64
HeadRoot() []byte
HeadBlock() *ethpb.BeaconBlock
HeadState() *pb.BeaconState
HeadBlock() *ethpb.SignedBeaconBlock
HeadState(ctx context.Context) (*pb.BeaconState, error)
HeadValidatorsIndices(epoch uint64) ([]uint64, error)
HeadSeed(epoch uint64) ([32]byte, error)
}
// CanonicalRootFetcher defines a common interface for methods in blockchain service which
@@ -43,19 +49,62 @@ type ForkFetcher interface {
}
// FinalizationFetcher defines a common interface for methods in blockchain service which
// directly retrieves finalization related data.
// directly retrieves finalization and justification related data.
type FinalizationFetcher interface {
FinalizedCheckpt() *ethpb.Checkpoint
CurrentJustifiedCheckpt() *ethpb.Checkpoint
PreviousJustifiedCheckpt() *ethpb.Checkpoint
}
// FinalizedCheckpt returns the latest finalized checkpoint tracked in fork choice service.
// ParticipationFetcher defines a common interface for methods in blockchain service which
// directly retrieves validator participation related data.
type ParticipationFetcher interface {
Participation(epoch uint64) *precompute.Balance
}
// FinalizedCheckpt returns the latest finalized checkpoint from head state.
func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
cp := s.forkChoiceStore.FinalizedCheckpt()
if cp != nil {
return cp
if s.headState == nil || s.headState.FinalizedCheckpoint == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
}
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
// If head state exists but there hasn't been a finalized check point,
// the check point's root should refer to genesis block root.
if bytes.Equal(s.headState.FinalizedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
return &ethpb.Checkpoint{Root: s.genesisRoot[:]}
}
return s.headState.FinalizedCheckpoint
}
// CurrentJustifiedCheckpt returns the current justified checkpoint from head state.
func (s *Service) CurrentJustifiedCheckpt() *ethpb.Checkpoint {
if s.headState == nil || s.headState.CurrentJustifiedCheckpoint == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
}
// If head state exists but there hasn't been a justified check point,
// the check point root should refer to genesis block root.
if bytes.Equal(s.headState.CurrentJustifiedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
return &ethpb.Checkpoint{Root: s.genesisRoot[:]}
}
return s.headState.CurrentJustifiedCheckpoint
}
// PreviousJustifiedCheckpt returns the previous justified checkpoint from head state.
func (s *Service) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
if s.headState == nil || s.headState.PreviousJustifiedCheckpoint == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
}
// If head state exists but there hasn't been a justified check point,
// the check point root should refer to genesis block root.
if bytes.Equal(s.headState.PreviousJustifiedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
return &ethpb.Checkpoint{Root: s.genesisRoot[:]}
}
return s.headState.PreviousJustifiedCheckpoint
}
// HeadSlot returns the slot of the head of the chain.
@@ -80,19 +129,42 @@ func (s *Service) HeadRoot() []byte {
}
// HeadBlock returns the head block of the chain.
func (s *Service) HeadBlock() *ethpb.BeaconBlock {
func (s *Service) HeadBlock() *ethpb.SignedBeaconBlock {
s.headLock.RLock()
defer s.headLock.RUnlock()
return proto.Clone(s.headBlock).(*ethpb.BeaconBlock)
return proto.Clone(s.headBlock).(*ethpb.SignedBeaconBlock)
}
// HeadState returns the head state of the chain.
func (s *Service) HeadState() *pb.BeaconState {
// If the head state is nil in the service struct,
// it will attempt to retrieve the state from the DB.
func (s *Service) HeadState(ctx context.Context) (*pb.BeaconState, error) {
s.headLock.RLock()
defer s.headLock.RUnlock()
return proto.Clone(s.headState).(*pb.BeaconState)
if s.headState == nil {
return s.beaconDB.HeadState(ctx)
}
return proto.Clone(s.headState).(*pb.BeaconState), nil
}
// HeadValidatorsIndices returns a list of active validator indices from the head view of a given epoch.
func (s *Service) HeadValidatorsIndices(epoch uint64) ([]uint64, error) {
if s.headState == nil {
return []uint64{}, nil
}
return helpers.ActiveValidatorIndices(s.headState, epoch)
}
// HeadSeed returns the seed from the head view of a given epoch.
func (s *Service) HeadSeed(epoch uint64) ([32]byte, error) {
if s.headState == nil {
return [32]byte{}, nil
}
return helpers.Seed(s.headState, epoch, params.BeaconConfig().DomainBeaconAttester)
}
// CanonicalRoot returns the canonical root of a given slot.
@@ -118,3 +190,11 @@ func (s *Service) CurrentFork() *pb.Fork {
}
return proto.Clone(s.headState.Fork).(*pb.Fork)
}
// Participation returns the participation stats of a given epoch.
func (s *Service) Participation(epoch uint64) *precompute.Balance {
s.epochParticipationLock.RLock()
defer s.epochParticipationLock.RUnlock()
return s.epochParticipation[epoch]
}
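The three checkpoint getters above share the same fallback rules: no head state (or no checkpoint) yields a zero root, a checkpoint whose root is still the zero hash is mapped to the genesis block root, and anything else is returned as stored. A compact sketch of that decision, with a trimmed-down Checkpoint type standing in for ethpb.Checkpoint:

package main

import (
    "bytes"
    "fmt"
)

// Checkpoint is a hypothetical stand-in for ethpb.Checkpoint.
type Checkpoint struct {
    Epoch uint64
    Root  []byte
}

var zeroHash = make([]byte, 32)

// finalizedCheckpt reproduces the fallback order used by the getters above.
func finalizedCheckpt(cp *Checkpoint, genesisRoot [32]byte) *Checkpoint {
    if cp == nil {
        return &Checkpoint{Root: zeroHash}
    }
    if bytes.Equal(cp.Root, zeroHash) {
        return &Checkpoint{Root: genesisRoot[:]}
    }
    return cp
}

func main() {
    genesis := [32]byte{'A'}
    fmt.Printf("%x\n", finalizedCheckpt(nil, genesis).Root)                         // zero hash
    fmt.Printf("%x\n", finalizedCheckpt(&Checkpoint{Root: zeroHash}, genesis).Root) // genesis root
    fmt.Printf("%x\n", finalizedCheckpt(&Checkpoint{Epoch: 5, Root: []byte{0xaa}}, genesis).Root) // aa
}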

View File

@@ -4,8 +4,8 @@ import (
"context"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)
func TestHeadSlot_DataRace(t *testing.T) {
@@ -18,7 +18,7 @@ func TestHeadSlot_DataRace(t *testing.T) {
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()
@@ -35,7 +35,7 @@ func TestHeadRoot_DataRace(t *testing.T) {
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()
@@ -52,7 +52,7 @@ func TestHeadBlock_DataRace(t *testing.T) {
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()
@@ -69,9 +69,9 @@ func TestHeadState_DataRace(t *testing.T) {
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()
s.HeadState()
s.HeadState(context.Background())
}

View File

@@ -7,10 +7,11 @@ import (
"testing"
"time"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
// Ensure Service implements chain info interface.
@@ -19,14 +20,19 @@ var _ = GenesisTimeFetcher(&Service{})
var _ = ForkFetcher(&Service{})
func TestFinalizedCheckpt_Nil(t *testing.T) {
c := setupBeaconChain(t, nil)
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
c := setupBeaconChain(t, db)
c.headState, _ = testutil.DeterministicGenesisState(t, 1)
if !bytes.Equal(c.FinalizedCheckpt().Root, params.BeaconConfig().ZeroHash[:]) {
t.Error("Incorrect pre chain start value")
}
}
func TestHeadRoot_Nil(t *testing.T) {
c := setupBeaconChain(t, nil)
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
c := setupBeaconChain(t, db)
if !bytes.Equal(c.HeadRoot(), params.BeaconConfig().ZeroHash[:]) {
t.Error("Incorrect pre chain start value")
}
@@ -35,16 +41,81 @@ func TestHeadRoot_Nil(t *testing.T) {
func TestFinalizedCheckpt_CanRetrieve(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
cp := &ethpb.Checkpoint{Epoch: 5}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{FinalizedCheckpoint: cp}
if err := c.forkChoiceStore.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
t.Fatal(err)
if c.FinalizedCheckpt().Epoch != cp.Epoch {
t.Errorf("Finalized epoch at genesis should be %d, got: %d", cp.Epoch, c.FinalizedCheckpt().Epoch)
}
}
if c.FinalizedCheckpt().Epoch != 0 {
t.Errorf("Finalized epoch at genesis should be 0, got: %d", c.FinalizedCheckpt().Epoch)
func TestFinalizedCheckpt_GenesisRootOk(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{FinalizedCheckpoint: cp}
c.genesisRoot = [32]byte{'A'}
if !bytes.Equal(c.FinalizedCheckpt().Root, c.genesisRoot[:]) {
t.Errorf("Got: %v, wanted: %v", c.FinalizedCheckpt().Root, c.genesisRoot[:])
}
}
func TestCurrentJustifiedCheckpt_CanRetrieve(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Epoch: 6}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{CurrentJustifiedCheckpoint: cp}
if c.CurrentJustifiedCheckpt().Epoch != cp.Epoch {
t.Errorf("Current Justified epoch at genesis should be %d, got: %d", cp.Epoch, c.CurrentJustifiedCheckpt().Epoch)
}
}
func TestJustifiedCheckpt_GenesisRootOk(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{CurrentJustifiedCheckpoint: cp}
c.genesisRoot = [32]byte{'B'}
if !bytes.Equal(c.CurrentJustifiedCheckpt().Root, c.genesisRoot[:]) {
t.Errorf("Got: %v, wanted: %v", c.CurrentJustifiedCheckpt().Root, c.genesisRoot[:])
}
}
func TestPreviousJustifiedCheckpt_CanRetrieve(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Epoch: 7}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{PreviousJustifiedCheckpoint: cp}
if c.PreviousJustifiedCheckpt().Epoch != cp.Epoch {
t.Errorf("Previous Justified epoch at genesis should be %d, got: %d", cp.Epoch, c.PreviousJustifiedCheckpt().Epoch)
}
}
func TestPrevJustifiedCheckpt_GenesisRootOk(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{PreviousJustifiedCheckpoint: cp}
c.genesisRoot = [32]byte{'C'}
if !bytes.Equal(c.PreviousJustifiedCheckpt().Root, c.genesisRoot[:]) {
t.Errorf("Got: %v, wanted: %v", c.PreviousJustifiedCheckpt().Root, c.genesisRoot[:])
}
}
@@ -66,7 +137,7 @@ func TestHeadRoot_CanRetrieve(t *testing.T) {
}
func TestHeadBlock_CanRetrieve(t *testing.T) {
b := &ethpb.BeaconBlock{Slot: 1}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
c := &Service{headBlock: b}
if !reflect.DeepEqual(b, c.HeadBlock()) {
t.Error("incorrect head block received")
@@ -76,7 +147,11 @@ func TestHeadBlock_CanRetrieve(t *testing.T) {
func TestHeadState_CanRetrieve(t *testing.T) {
s := &pb.BeaconState{Slot: 2}
c := &Service{headState: s}
if !reflect.DeepEqual(s, c.HeadState()) {
headState, err := c.HeadState(context.Background())
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(s, headState) {
t.Error("incorrect head state received")
}
}

View File

@@ -15,21 +15,23 @@ go_library(
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/flags:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"//shared/stateutil:go_default_library",
"//shared/traceutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
@@ -57,12 +59,12 @@ go_test(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"//shared/stateutil:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@in_gopkg_yaml_v2//:go_default_library",

View File

@@ -4,10 +4,10 @@ import (
"context"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
@@ -18,7 +18,7 @@ func BenchmarkForkChoiceTree1(b *testing.B) {
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
b.Fatal(err)
}
@@ -50,17 +50,11 @@ func BenchmarkForkChoiceTree1(b *testing.B) {
for i := 0; i < len(validators); i++ {
switch {
case i < 256:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[1]}); err != nil {
b.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[1]}
case i > 768:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[7]}); err != nil {
b.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[7]}
default:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[8]}); err != nil {
b.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[8]}
}
}
@@ -110,9 +104,7 @@ func BenchmarkForkChoiceTree2(b *testing.B) {
// Spread out the votes evenly for all the leaf nodes. 8 to 15
nodeIndex := 8
for i := 0; i < len(validators); i++ {
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[nodeIndex]}); err != nil {
b.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[nodeIndex]}
if i%155 == 0 {
nodeIndex++
}
@@ -163,9 +155,7 @@ func BenchmarkForkChoiceTree3(b *testing.B) {
// All validators vote on the same head
for i := 0; i < len(validators); i++ {
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[len(roots)-1]}); err != nil {
b.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[len(roots)-1]}
}
b.ResetTimer()

View File

@@ -4,6 +4,6 @@ Sub-Tree) algorithm as the Ethereum Serenity beacon chain fork choice rule. This
properly detect the canonical chain based on validator votes even in the presence of high network
latency, network partitions, and many conflicting blocks. To read more about fork choice, read the
official accompanying document:
https://github.com/ethereum/eth2.0-specs/blob/v0.8.3/specs/core/0_fork-choice.md
https://github.com/ethereum/eth2.0-specs/blob/v0.9.0/specs/core/0_fork-choice.md
*/
package forkchoice
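The package comment above summarizes what the store computes: starting from the justified checkpoint, repeatedly descend to the child whose subtree carries the most latest-vote weight. The toy sketch below is a deliberately simplified, unweighted illustration of that rule (every vote counts as 1; effective balances and the spec's tie-breaking are ignored). It is not the store implementation itself.

package main

import "fmt"

func main() {
    // A small block tree: "a" is the justified root, with children "b" and "c".
    parent := map[string]string{
        "b": "a", "c": "a",
        "d": "b", "e": "c", "f": "c",
    }
    children := map[string][]string{}
    for child, p := range parent {
        children[p] = append(children[p], child)
    }

    // Latest messages of four validators (LMD: only the latest vote counts).
    votes := []string{"d", "e", "f", "f"}

    // weight(b) = number of latest votes for b or any descendant of b.
    var weight func(block string) int
    weight = func(block string) int {
        w := 0
        for _, v := range votes {
            if v == block {
                w++
            }
        }
        for _, c := range children[block] {
            w += weight(c)
        }
        return w
    }

    // GHOST walk: greedily follow the heaviest child until reaching a leaf.
    head := "a"
    for len(children[head]) > 0 {
        best, bestWeight := "", -1
        for _, c := range children[head] {
            if w := weight(c); w > bestWeight {
                best, bestWeight = c, w
            }
        }
        head = best
    }
    fmt.Println(head) // "f": the subtree of "c" holds 3 votes versus 1 for "b"
}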

View File

@@ -8,13 +8,13 @@ import (
"strconv"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"gopkg.in/yaml.v2"
)
@@ -39,6 +39,8 @@ func TestGetHeadFromYaml(t *testing.T) {
var c *Config
err = yaml.Unmarshal(yamlFile, &c)
params.UseMainnetConfig()
for _, test := range c.TestCases {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
@@ -49,10 +51,10 @@ func TestGetHeadFromYaml(t *testing.T) {
// genesis block condition
if blk.ID == blk.Parent {
b := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
if err := db.SaveBlock(ctx, b); err != nil {
if err := db.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: b}); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
root, err := ssz.HashTreeRoot(b)
if err != nil {
t.Fatal(err)
}
@@ -66,18 +68,23 @@ func TestGetHeadFromYaml(t *testing.T) {
if err != nil {
t.Fatal(err)
}
b := &ethpb.BeaconBlock{Slot: uint64(slot), ParentRoot: blksRoot[parentSlot]}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: uint64(slot), ParentRoot: blksRoot[parentSlot]}}
if err := db.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
root, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
blksRoot[slot] = root[:]
if err := db.SaveState(ctx, &pb.BeaconState{}, root); err != nil {
t.Fatal(err)
}
}
}
store := NewForkChoiceService(ctx, db)
// Assign validator votes to the blocks as weights.
count := 0
for blk, votes := range test.Weights {
@@ -87,14 +94,11 @@ func TestGetHeadFromYaml(t *testing.T) {
}
max := count + votes
for i := count; i < max; i++ {
if err := db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: blksRoot[slot]}); err != nil {
t.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: blksRoot[slot]}
count++
}
}
store := NewForkChoiceService(ctx, db)
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
@@ -102,12 +106,10 @@ func TestGetHeadFromYaml(t *testing.T) {
s := &pb.BeaconState{Validators: validators}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(blksRoot[0])); err != nil {
t.Fatal(err)
}
store.justifiedCheckpt.Root = blksRoot[0]
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(blksRoot[0])); err != nil {
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{Root: blksRoot[0]}, &ethpb.Checkpoint{Root: blksRoot[0]}); err != nil {
t.Fatal(err)
}
@@ -133,8 +135,6 @@ func TestGetHeadFromYaml(t *testing.T) {
t.Errorf("wanted root %#x, got root %#x", wantedHead, head)
}
helpers.ClearAllCaches()
testDB.TeardownDB(t, db)
}
}

View File

@@ -3,6 +3,7 @@ package forkchoice
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
@@ -17,6 +18,14 @@ var (
Name: "beacon_finalized_root",
Help: "Last finalized root of the processed state",
})
cacheFinalizedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "cache_finalized_epoch",
Help: "Last cached finalized epoch",
})
cacheFinalizedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "cache_finalized_root",
Help: "Last cached finalized root",
})
beaconCurrentJustifiedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_current_justified_epoch",
Help: "Current justified epoch of the processed state",
@@ -33,46 +42,98 @@ var (
Name: "beacon_previous_justified_root",
Help: "Previous justified root of the processed state",
})
activeValidatorsGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "state_active_validators",
Help: "Total number of active validators",
sigFailsToVerify = promauto.NewCounter(prometheus.CounterOpts{
Name: "att_signature_failed_to_verify_with_cache",
Help: "Number of attestation signatures that failed to verify with cache on, but succeeded without cache",
})
slashedValidatorsGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "state_slashed_validators",
Help: "Total slashed validators",
validatorsCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validator_count",
Help: "The total number of validators",
}, []string{"state"})
validatorsBalance = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validators_total_balance",
Help: "The total balance of validators, in GWei",
}, []string{"state"})
validatorsEffectiveBalance = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validators_total_effective_balance",
Help: "The total effective balance of validators, in GWei",
}, []string{"state"})
currentEth1DataDepositCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "current_eth1_data_deposit_count",
Help: "The current eth1 deposit count in the last processed state eth1data field.",
})
withdrawnValidatorsGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "state_withdrawn_validators",
Help: "Total withdrawn validators",
totalEligibleBalances = promauto.NewGauge(prometheus.GaugeOpts{
Name: "total_eligible_balances",
Help: "The total amount of ether, in gwei, that is eligible for voting of previous epoch",
})
totalValidatorsGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_current_validators",
Help: "Number of status=pending|active|exited|withdrawable validators in current epoch",
totalVotedTargetBalances = promauto.NewGauge(prometheus.GaugeOpts{
Name: "total_voted_target_balances",
Help: "The total amount of ether, in gwei, that has been used in voting attestation target of previous epoch",
})
)
func reportEpochMetrics(state *pb.BeaconState) {
currentEpoch := state.Slot / params.BeaconConfig().SlotsPerEpoch
// Validator counts
var active float64
var slashed float64
var withdrawn float64
for _, v := range state.Validators {
if v.ActivationEpoch <= currentEpoch && currentEpoch < v.ExitEpoch {
active++
// Validator instances
pendingInstances := 0
activeInstances := 0
slashingInstances := 0
slashedInstances := 0
exitingInstances := 0
exitedInstances := 0
// Validator balances
pendingBalance := uint64(0)
activeBalance := uint64(0)
activeEffectiveBalance := uint64(0)
exitingBalance := uint64(0)
exitingEffectiveBalance := uint64(0)
slashingBalance := uint64(0)
slashingEffectiveBalance := uint64(0)
for i, validator := range state.Validators {
if validator.Slashed {
if currentEpoch < validator.ExitEpoch {
slashingInstances++
slashingBalance += state.Balances[i]
slashingEffectiveBalance += validator.EffectiveBalance
} else {
slashedInstances++
}
continue
}
if v.Slashed {
slashed++
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
if currentEpoch < validator.ExitEpoch {
exitingInstances++
exitingBalance += state.Balances[i]
exitingEffectiveBalance += validator.EffectiveBalance
} else {
exitedInstances++
}
continue
}
if currentEpoch >= v.ExitEpoch {
withdrawn++
if currentEpoch < validator.ActivationEpoch {
pendingInstances++
pendingBalance += state.Balances[i]
continue
}
activeInstances++
activeBalance += state.Balances[i]
activeEffectiveBalance += validator.EffectiveBalance
}
activeValidatorsGauge.Set(active)
slashedValidatorsGauge.Set(slashed)
withdrawnValidatorsGauge.Set(withdrawn)
totalValidatorsGauge.Set(float64(len(state.Validators)))
validatorsCount.WithLabelValues("Pending").Set(float64(pendingInstances))
validatorsCount.WithLabelValues("Active").Set(float64(activeInstances))
validatorsCount.WithLabelValues("Exiting").Set(float64(exitingInstances))
validatorsCount.WithLabelValues("Exited").Set(float64(exitedInstances))
validatorsCount.WithLabelValues("Slashing").Set(float64(slashingInstances))
validatorsCount.WithLabelValues("Slashed").Set(float64(slashedInstances))
validatorsBalance.WithLabelValues("Pending").Set(float64(pendingBalance))
validatorsBalance.WithLabelValues("Active").Set(float64(activeBalance))
validatorsBalance.WithLabelValues("Exiting").Set(float64(exitingBalance))
validatorsBalance.WithLabelValues("Slashing").Set(float64(slashingBalance))
validatorsEffectiveBalance.WithLabelValues("Active").Set(float64(activeEffectiveBalance))
validatorsEffectiveBalance.WithLabelValues("Exiting").Set(float64(exitingEffectiveBalance))
validatorsEffectiveBalance.WithLabelValues("Slashing").Set(float64(slashingEffectiveBalance))
// Last justified slot
if state.CurrentJustifiedCheckpoint != nil {
@@ -89,4 +150,12 @@ func reportEpochMetrics(state *pb.BeaconState) {
beaconFinalizedEpoch.Set(float64(state.FinalizedCheckpoint.Epoch))
beaconFinalizedRoot.Set(float64(bytesutil.ToLowInt64(state.FinalizedCheckpoint.Root)))
}
if state.Eth1Data != nil {
currentEth1DataDepositCount.Set(float64(state.Eth1Data.DepositCount))
}
if precompute.Balances != nil {
totalEligibleBalances.Set(float64(precompute.Balances.PrevEpoch))
totalVotedTargetBalances.Set(float64(precompute.Balances.PrevEpochTargetAttesters))
}
}
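reportEpochMetrics above assigns each validator to exactly one label before setting the gauges, and the order of the checks matters: slashed status is evaluated first, then a scheduled exit, then pending activation, with everything else counted as active. The sketch below reproduces that classification with a trimmed-down validator type; farFutureEpoch stands in for params.BeaconConfig().FarFutureEpoch.

package main

import "fmt"

const farFutureEpoch = ^uint64(0) // stand-in for params.BeaconConfig().FarFutureEpoch

// validator is a hypothetical, trimmed-down version of ethpb.Validator with
// only the fields the metrics loop inspects.
type validator struct {
    Slashed         bool
    ActivationEpoch uint64
    ExitEpoch       uint64
}

// bucket reproduces the classification order of reportEpochMetrics.
func bucket(v validator, currentEpoch uint64) string {
    switch {
    case v.Slashed && currentEpoch < v.ExitEpoch:
        return "Slashing"
    case v.Slashed:
        return "Slashed"
    case v.ExitEpoch != farFutureEpoch && currentEpoch < v.ExitEpoch:
        return "Exiting"
    case v.ExitEpoch != farFutureEpoch:
        return "Exited"
    case currentEpoch < v.ActivationEpoch:
        return "Pending"
    default:
        return "Active"
    }
}

func main() {
    current := uint64(10)
    vals := []validator{
        {ActivationEpoch: 0, ExitEpoch: farFutureEpoch},    // Active
        {ActivationEpoch: 12, ExitEpoch: farFutureEpoch},   // Pending
        {ActivationEpoch: 0, ExitEpoch: 15},                // Exiting
        {ActivationEpoch: 0, ExitEpoch: 8},                 // Exited
        {Slashed: true, ActivationEpoch: 0, ExitEpoch: 20}, // Slashing
        {Slashed: true, ActivationEpoch: 0, ExitEpoch: 5},  // Slashed
    }
    for _, v := range vals {
        fmt.Println(bucket(v, current))
    }
}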

View File

@@ -7,132 +7,144 @@ import (
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// ErrTargetRootNotInDB is returned when the target block root of an attestation cannot be found in the
// beacon database.
var ErrTargetRootNotInDB = errors.New("target root does not exist in db")
// OnAttestation is called whenever an attestation is received, it updates validators latest vote,
// as well as the fork choice store struct.
//
// Spec pseudocode definition:
// def on_attestation(store: Store, attestation: Attestation) -> None:
// """
// Run ``on_attestation`` upon receiving a new ``attestation`` from either within a block or directly on the wire.
//
// An ``attestation`` that is asserted as invalid may be valid at a later time,
// consider scheduling it for later processing in such case.
// """
// target = attestation.data.target
//
// # Cannot calculate the current shuffling if have not seen the target
// assert target.root in store.blocks
// # Attestations must be from the current or previous epoch
// current_epoch = compute_epoch_at_slot(get_current_slot(store))
// # Use GENESIS_EPOCH for previous when genesis to avoid underflow
// previous_epoch = current_epoch - 1 if current_epoch > GENESIS_EPOCH else GENESIS_EPOCH
// assert target.epoch in [current_epoch, previous_epoch]
// assert target.epoch == compute_epoch_at_slot(attestation.data.slot)
//
// # Attestations must target a known block. If the target block is unknown, delay consideration until the block is found
// assert target.root in store.blocks
// # Attestations cannot be from future epochs. If they are, delay consideration until the epoch arrives
// base_state = store.block_states[target.root].copy()
// assert store.time >= base_state.genesis_time + compute_start_slot_of_epoch(target.epoch) * SECONDS_PER_SLOT
// assert store.time >= base_state.genesis_time + compute_start_slot_at_epoch(target.epoch) * SECONDS_PER_SLOT
//
// # Attestations must be for a known block. If block is unknown, delay consideration until the block is found
// assert attestation.data.beacon_block_root in store.blocks
// # Attestations must not be for blocks in the future. If not, the attestation should not be considered
// assert store.blocks[attestation.data.beacon_block_root].slot <= attestation.data.slot
//
// # Store target checkpoint state if not yet seen
// if target not in store.checkpoint_states:
// process_slots(base_state, compute_start_slot_of_epoch(target.epoch))
// process_slots(base_state, compute_start_slot_at_epoch(target.epoch))
// store.checkpoint_states[target] = base_state
// target_state = store.checkpoint_states[target]
//
// # Attestations can only affect the fork choice of subsequent slots.
// # Delay consideration in the fork choice until their slot is in the past.
// attestation_slot = get_attestation_data_slot(target_state, attestation.data)
// assert store.time >= (attestation_slot + 1) * SECONDS_PER_SLOT
// assert store.time >= (attestation.data.slot + 1) * SECONDS_PER_SLOT
//
// # Get state at the `target` to validate attestation and calculate the committees
// indexed_attestation = get_indexed_attestation(target_state, attestation)
// assert is_valid_indexed_attestation(target_state, indexed_attestation)
//
// # Update latest messages
// for i in indexed_attestation.custody_bit_0_indices + indexed_attestation.custody_bit_1_indices:
// for i in indexed_attestation.attesting_indices:
// if i not in store.latest_messages or target.epoch > store.latest_messages[i].epoch:
// store.latest_messages[i] = LatestMessage(epoch=target.epoch, root=attestation.data.beacon_block_root)
func (s *Store) OnAttestation(ctx context.Context, a *ethpb.Attestation) (uint64, error) {
func (s *Store) OnAttestation(ctx context.Context, a *ethpb.Attestation) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.onAttestation")
defer span.End()
tgt := proto.Clone(a.Data.Target).(*ethpb.Checkpoint)
tgtSlot := helpers.StartSlot(tgt.Epoch)
if helpers.SlotToEpoch(a.Data.Slot) != a.Data.Target.Epoch {
return fmt.Errorf("data slot is not in the same epoch as target %d != %d", helpers.SlotToEpoch(a.Data.Slot), a.Data.Target.Epoch)
}
// Verify beacon node has seen the target block before.
if !s.db.HasBlock(ctx, bytesutil.ToBytes32(tgt.Root)) {
return 0, fmt.Errorf("target root %#x does not exist in db", bytesutil.Trunc(tgt.Root))
return ErrTargetRootNotInDB
}
// Verify attestation target has had a valid pre state produced by the target block.
baseState, err := s.verifyAttPreState(ctx, tgt)
if err != nil {
return 0, err
return err
}
// Verify attestation target is from current epoch or previous epoch.
if err := s.verifyAttTargetEpoch(ctx, baseState.GenesisTime, uint64(time.Now().Unix()), tgt); err != nil {
return err
}
// Verify Attestations cannot be from future epochs.
if err := helpers.VerifySlotTime(baseState.GenesisTime, tgtSlot); err != nil {
return 0, errors.Wrap(err, "could not verify attestation target slot")
return errors.Wrap(err, "could not verify attestation target slot")
}
// Verify attestation beacon block is known and not from the future.
if err := s.verifyBeaconBlock(ctx, a.Data); err != nil {
return errors.Wrap(err, "could not verify attestation beacon block")
}
// Store target checkpoint state if not yet seen.
baseState, err = s.saveCheckpointState(ctx, baseState, tgt)
if err != nil {
return 0, err
}
// Delay attestation processing until the subsequent slot.
if err := s.waitForAttInclDelay(ctx, a, baseState); err != nil {
return 0, err
return err
}
// Verify attestations can only affect the fork choice of subsequent slots.
if err := helpers.VerifySlotTime(baseState.GenesisTime, a.Data.Slot+1); err != nil {
return 0, err
return err
}
s.attsQueueLock.Lock()
defer s.attsQueueLock.Unlock()
atts := make([]*ethpb.Attestation, 0, len(s.attsQueue))
for root, a := range s.attsQueue {
log := log.WithFields(logrus.Fields{
"AggregatedBitfield": fmt.Sprintf("%08b", a.AggregationBits),
"Root": fmt.Sprintf("%#x", root),
})
log.Debug("Updating latest votes")
// Use the target state to validate the attestation and calculate the committees.
indexedAtt, err := s.verifyAttestation(ctx, baseState, a)
if err != nil {
log.WithError(err).Warn("Removing attestation from queue.")
delete(s.attsQueue, root)
continue
}
// Update every validator's latest vote.
if err := s.updateAttVotes(ctx, indexedAtt, tgt.Root, tgt.Epoch); err != nil {
return 0, err
}
// Mark attestation as seen so we don't update votes when it appears in a block.
if err := s.setSeenAtt(a); err != nil {
return 0, err
}
delete(s.attsQueue, root)
att, err := s.aggregatedAttestations(ctx, a)
if err != nil {
return 0, err
}
atts = append(atts, att...)
// Use the target state to validate the attestation and calculate the committees.
indexedAtt, err := s.verifyAttestation(ctx, baseState, a)
if err != nil {
return err
}
if err := s.db.SaveAttestations(ctx, atts); err != nil {
return 0, err
// Update every validator's latest vote.
if err := s.updateAttVotes(ctx, indexedAtt, tgt.Root, tgt.Epoch); err != nil {
return err
}
return tgtSlot, nil
if err := s.db.SaveAttestation(ctx, a); err != nil {
return err
}
log := log.WithFields(logrus.Fields{
"Slot": a.Data.Slot,
"Index": a.Data.CommitteeIndex,
"AggregatedBitfield": fmt.Sprintf("%08b", a.AggregationBits),
"BeaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.Data.BeaconBlockRoot)),
})
log.Debug("Updated latest votes")
return nil
}
// verifyAttPreState validates input attested check point has a valid pre-state.
@@ -147,8 +159,41 @@ func (s *Store) verifyAttPreState(ctx context.Context, c *ethpb.Checkpoint) (*pb
return baseState, nil
}
// verifyAttTargetEpoch validates attestation is from the current or previous epoch.
func (s *Store) verifyAttTargetEpoch(ctx context.Context, genesisTime uint64, nowTime uint64, c *ethpb.Checkpoint) error {
currentSlot := (nowTime - genesisTime) / params.BeaconConfig().SecondsPerSlot
currentEpoch := helpers.SlotToEpoch(currentSlot)
var prevEpoch uint64
// Prevents previous epoch underflow
if currentEpoch > 1 {
prevEpoch = currentEpoch - 1
}
if c.Epoch != prevEpoch && c.Epoch != currentEpoch {
return fmt.Errorf("target epoch %d does not match current epoch %d or prev epoch %d", c.Epoch, currentEpoch, prevEpoch)
}
return nil
}
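// Worked example with illustrative values (SECONDS_PER_SLOT = 12,
// SLOTS_PER_EPOCH = 32): a node whose genesis was 25,000 seconds ago
// computes currentSlot = 25000/12 = 2083 and currentEpoch = 2083/32 = 65,
// so only attestation targets for epochs 64 and 65 are accepted. The
// currentEpoch > 1 guard keeps prevEpoch at 0 near genesis so the
// subtraction cannot underflow.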
// verifyBeaconBlock verifies beacon head block is known and not from the future.
func (s *Store) verifyBeaconBlock(ctx context.Context, data *ethpb.AttestationData) error {
b, err := s.db.Block(ctx, bytesutil.ToBytes32(data.BeaconBlockRoot))
if err != nil {
return err
}
if b == nil || b.Block == nil {
return fmt.Errorf("beacon block %#x does not exist", bytesutil.Trunc(data.BeaconBlockRoot))
}
if b.Block.Slot > data.Slot {
return fmt.Errorf("could not process attestation for future block, %d > %d", b.Block.Slot, data.Slot)
}
return nil
}
// saveCheckpointState saves and returns the processed state with the associated check point.
func (s *Store) saveCheckpointState(ctx context.Context, baseState *pb.BeaconState, c *ethpb.Checkpoint) (*pb.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.saveCheckpointState")
defer span.End()
s.checkpointStateLock.Lock()
defer s.checkpointStateLock.Unlock()
cachedState, err := s.checkpointState.StateByCheckpoint(c)
@@ -162,68 +207,57 @@ func (s *Store) saveCheckpointState(ctx context.Context, baseState *pb.BeaconSta
// Advance slots only when it's higher than current state slot.
if helpers.StartSlot(c.Epoch) > baseState.Slot {
stateCopy := proto.Clone(baseState).(*pb.BeaconState)
baseState, err = state.ProcessSlots(ctx, stateCopy, helpers.StartSlot(c.Epoch))
stateCopy, err = state.ProcessSlots(ctx, stateCopy, helpers.StartSlot(c.Epoch))
if err != nil {
return nil, errors.Wrapf(err, "could not process slots up to %d", helpers.StartSlot(c.Epoch))
}
}
if err := s.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: c,
State: baseState,
}); err != nil {
return nil, errors.Wrap(err, "could not save checkpoint state to cache")
if err := s.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: c,
State: stateCopy,
}); err != nil {
return nil, errors.Wrap(err, "could not save checkpoint state to cache")
}
return stateCopy, nil
}
return baseState, nil
}
// waitForAttInclDelay waits until the next slot because an attestation can only affect
// the fork choice of subsequent slots. This delays attestation inclusion in fork choice
// until the attested slot is in the past.
func (s *Store) waitForAttInclDelay(ctx context.Context, a *ethpb.Attestation, targetState *pb.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.forkchoice.waitForAttInclDelay")
defer span.End()
nextSlot := a.Data.Slot + 1
duration := time.Duration(nextSlot*params.BeaconConfig().SecondsPerSlot) * time.Second
timeToInclude := time.Unix(int64(targetState.GenesisTime), 0).Add(duration)
if err := s.aggregateAttestation(ctx, a); err != nil {
return errors.Wrap(err, "could not aggregate attestation")
}
time.Sleep(time.Until(timeToInclude))
return nil
}
// aggregateAttestation aggregates the attestations in the pending queue.
func (s *Store) aggregateAttestation(ctx context.Context, att *ethpb.Attestation) error {
s.attsQueueLock.Lock()
defer s.attsQueueLock.Unlock()
root, err := ssz.HashTreeRoot(att.Data)
if err != nil {
return err
}
if a, ok := s.attsQueue[root]; ok {
a, err := helpers.AggregateAttestation(a, att)
if err != nil {
return nil
}
s.attsQueue[root] = a
return nil
}
s.attsQueue[root] = proto.Clone(att).(*ethpb.Attestation)
return nil
}
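The aggregation step above merges attestations that share the same data. Below is a compilable sketch of just the bitfield part, matching the byte values asserted in the aggregation test later in this diff ({3, 1} means bits 0 and 1 set plus the length bit); the real helpers.AggregateAttestation also aggregates the BLS signatures, which is omitted here:

package main

import (
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

func main() {
	// Two single-bit attestation bitlists over an 8-validator committee.
	b1 := bitfield.NewBitlist(8)
	b1.SetBitAt(0, true)
	b2 := bitfield.NewBitlist(8)
	b2.SetBitAt(1, true)

	// Bitlist is a []byte under the hood, so with equal lengths the merge is a byte-wise OR.
	merged := bitfield.NewBitlist(8)
	for i := range merged {
		merged[i] = b1[i] | b2[i]
	}
	fmt.Printf("%v\n", []byte(merged)) // [3 1]: bits 0 and 1 set, plus the length bit
}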
// verifyAttestation validates the input attestation and returns its indexed form.
func (s *Store) verifyAttestation(ctx context.Context, baseState *pb.BeaconState, a *ethpb.Attestation) (*ethpb.IndexedAttestation, error) {
indexedAtt, err := blocks.ConvertToIndexed(ctx, baseState, a)
committee, err := helpers.BeaconCommitteeFromState(baseState, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return nil, err
}
indexedAtt, err := blocks.ConvertToIndexed(ctx, a, committee)
if err != nil {
return nil, errors.Wrap(err, "could not convert attestation to indexed attestation")
}
if err := blocks.VerifyIndexedAttestation(ctx, baseState, indexedAtt); err != nil {
// TODO(3603): Delete the following signature verify fallback when issue 3603 closes.
// When signature fails to verify with committee cache enabled at run time,
// the following re-runs the same signature verify routine without cache in play.
// This provides extra assurance that committee cache can't break run time.
if err == blocks.ErrSigFailedToVerify {
committee, err = helpers.BeaconCommitteeWithoutCache(baseState, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return nil, errors.Wrap(err, "could not convert attestation to indexed attestation without cache")
}
indexedAtt, err = blocks.ConvertToIndexed(ctx, a, committee)
if err != nil {
return nil, errors.Wrap(err, "could not convert attestation to indexed attestation")
}
if err := blocks.VerifyIndexedAttestation(ctx, baseState, indexedAtt); err != nil {
return nil, errors.Wrap(err, "could not verify indexed attestation without cache")
}
sigFailsToVerify.Inc()
return indexedAtt, nil
}
return nil, errors.Wrap(err, "could not verify indexed attestation")
}
return indexedAtt, nil
@@ -236,36 +270,18 @@ func (s *Store) updateAttVotes(
tgtRoot []byte,
tgtEpoch uint64) error {
indices := append(indexedAtt.CustodyBit_0Indices, indexedAtt.CustodyBit_1Indices...)
newVoteIndices := make([]uint64, 0, len(indices))
newVotes := make([]*pb.ValidatorLatestVote, 0, len(indices))
indices := indexedAtt.AttestingIndices
s.voteLock.Lock()
defer s.voteLock.Unlock()
for _, i := range indices {
vote, err := s.db.ValidatorLatestVote(ctx, i)
if err != nil {
return errors.Wrapf(err, "could not get latest vote for validator %d", i)
}
if vote == nil || tgtEpoch > vote.Epoch {
newVotes = append(newVotes, &pb.ValidatorLatestVote{
vote, ok := s.latestVoteMap[i]
if !ok || tgtEpoch > vote.Epoch {
s.latestVoteMap[i] = &pb.ValidatorLatestVote{
Epoch: tgtEpoch,
Root: tgtRoot,
})
newVoteIndices = append(newVoteIndices, i)
}
}
}
return s.db.SaveValidatorLatestVotes(ctx, newVoteIndices, newVotes)
}
// setSeenAtt sets the attestation hash in seen attestation map to true.
func (s *Store) setSeenAtt(a *ethpb.Attestation) error {
s.seenAttsLock.Lock()
defer s.seenAttsLock.Unlock()
r, err := hashutil.HashProto(a)
if err != nil {
return err
}
s.seenAtts[r] = true
return nil
}

View File

@@ -1,18 +1,18 @@
package forkchoice
import (
"bytes"
"context"
"reflect"
"strings"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -24,31 +24,31 @@ func TestStore_OnAttestation(t *testing.T) {
store := NewForkChoiceService(ctx, db)
_, err := blockTree1(db)
_, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
BlkWithOutState := &ethpb.BeaconBlock{Slot: 0}
BlkWithOutState := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 0}}
if err := db.SaveBlock(ctx, BlkWithOutState); err != nil {
t.Fatal(err)
}
BlkWithOutStateRoot, _ := ssz.SigningRoot(BlkWithOutState)
BlkWithOutStateRoot, _ := ssz.HashTreeRoot(BlkWithOutState.Block)
BlkWithStateBadAtt := &ethpb.BeaconBlock{Slot: 1}
BlkWithStateBadAtt := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
if err := db.SaveBlock(ctx, BlkWithStateBadAtt); err != nil {
t.Fatal(err)
}
BlkWithStateBadAttRoot, _ := ssz.SigningRoot(BlkWithStateBadAtt)
BlkWithStateBadAttRoot, _ := ssz.HashTreeRoot(BlkWithStateBadAtt.Block)
if err := store.db.SaveState(ctx, &pb.BeaconState{}, BlkWithStateBadAttRoot); err != nil {
t.Fatal(err)
}
BlkWithValidState := &ethpb.BeaconBlock{Slot: 2}
BlkWithValidState := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 2}}
if err := db.SaveBlock(ctx, BlkWithValidState); err != nil {
t.Fatal(err)
}
BlkWithValidStateRoot, _ := ssz.SigningRoot(BlkWithValidState)
BlkWithValidStateRoot, _ := ssz.HashTreeRoot(BlkWithValidState.Block)
if err := store.db.SaveState(ctx, &pb.BeaconState{
Fork: &pb.Fork{
Epoch: 0,
@@ -67,12 +67,19 @@ func TestStore_OnAttestation(t *testing.T) {
wantErr bool
wantErrString string
}{
{
name: "attestation's data slot not aligned with target vote",
a: &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{}}},
s: &pb.BeaconState{},
wantErr: true,
wantErrString: "data slot is not in the same epoch as target 1 != 0",
},
{
name: "attestation's target root not in db",
a: &ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: []byte{'A'}}}},
s: &pb.BeaconState{},
wantErr: true,
wantErrString: "target root 0x41 does not exist in db",
wantErrString: "target root does not exist in db",
},
{
name: "no pre state for attestations's target block",
@@ -82,22 +89,25 @@ func TestStore_OnAttestation(t *testing.T) {
wantErrString: "pre state of target block 0 does not exist",
},
{
name: "process attestation from future epoch",
a: &ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Epoch: params.BeaconConfig().FarFutureEpoch,
name: "process attestation doesn't match current epoch",
a: &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}},
s: &pb.BeaconState{},
wantErr: true,
wantErrString: "could not process slot from the future",
wantErrString: "does not match current epoch",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
if err := store.GenesisStore(
ctx,
&ethpb.Checkpoint{Root: BlkWithValidStateRoot[:]},
&ethpb.Checkpoint{Root: BlkWithValidStateRoot[:]}); err != nil {
t.Fatal(err)
}
_, err := store.OnAttestation(ctx, tt.a)
err := store.OnAttestation(ctx, tt.a)
if tt.wantErr {
if !strings.Contains(err.Error(), tt.wantErrString) {
t.Errorf("Store.OnAttestation() error = %v, wantErr = %v", err, tt.wantErrString)
@@ -131,7 +141,11 @@ func TestStore_SaveCheckpointState(t *testing.T) {
Slashings: make([]uint64, params.BeaconConfig().EpochsPerSlashingsVector),
FinalizedCheckpoint: &ethpb.Checkpoint{},
}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
r := [32]byte{'g'}
if err := store.db.SaveState(ctx, s, r); err != nil {
t.Fatal(err)
}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{Root: r[:]}, &ethpb.Checkpoint{Root: r[:]}); err != nil {
t.Fatal(err)
}
@@ -178,7 +192,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
}
s.Slot = params.BeaconConfig().SlotsPerEpoch + 1
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{Root: r[:]}, &ethpb.Checkpoint{Root: r[:]}); err != nil {
t.Fatal(err)
}
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'C'}}
@@ -191,51 +205,6 @@ func TestStore_SaveCheckpointState(t *testing.T) {
}
}
func TestStore_AggregateAttestation(t *testing.T) {
_, _, privKeys := testutil.SetupInitialDeposits(t, 100)
f := &pb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
Epoch: 0,
}
domain := helpers.Domain(f, 0, params.BeaconConfig().DomainBeaconAttester)
sig := privKeys[0].Sign([]byte{}, domain)
store := &Store{attsQueue: make(map[[32]byte]*ethpb.Attestation)}
b1 := bitfield.NewBitlist(8)
b1.SetBitAt(0, true)
a := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: b1, Signature: sig.Marshal()}
if err := store.aggregateAttestation(context.Background(), a); err != nil {
t.Fatal(err)
}
r, _ := ssz.HashTreeRoot(a.Data)
if !bytes.Equal(store.attsQueue[r].AggregationBits, b1) {
t.Error("Received incorrect aggregation bitfield")
}
b2 := bitfield.NewBitlist(8)
b2.SetBitAt(1, true)
a = &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: b2, Signature: sig.Marshal()}
if err := store.aggregateAttestation(context.Background(), a); err != nil {
t.Fatal(err)
}
if !bytes.Equal(store.attsQueue[r].AggregationBits, []byte{3, 1}) {
t.Error("Received incorrect aggregation bitfield")
}
b3 := bitfield.NewBitlist(8)
b3.SetBitAt(7, true)
a = &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: b3, Signature: sig.Marshal()}
if err := store.aggregateAttestation(context.Background(), a); err != nil {
t.Fatal(err)
}
if !bytes.Equal(store.attsQueue[r].AggregationBits, []byte{131, 1}) {
t.Error("Received incorrect aggregation bitfield")
}
}
func TestStore_ReturnAggregatedAttestation(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
@@ -258,3 +227,143 @@ func TestStore_ReturnAggregatedAttestation(t *testing.T) {
t.Error("did not retrieve saved attestation")
}
}
func TestStore_UpdateCheckpointState(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
epoch := uint64(1)
baseState, _ := testutil.DeterministicGenesisState(t, 1)
baseState.Slot = epoch * params.BeaconConfig().SlotsPerEpoch
checkpoint := &ethpb.Checkpoint{Epoch: epoch}
returned, err := store.saveCheckpointState(ctx, baseState, checkpoint)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(baseState, returned) {
t.Error("Incorrectly returned base state")
}
cached, err := store.checkpointState.StateByCheckpoint(checkpoint)
if err != nil {
t.Fatal(err)
}
if cached != nil {
t.Error("State shouldn't have been cached")
}
epoch = uint64(2)
newCheckpoint := &ethpb.Checkpoint{Epoch: epoch}
returned, err = store.saveCheckpointState(ctx, baseState, newCheckpoint)
if err != nil {
t.Fatal(err)
}
baseState, err = state.ProcessSlots(ctx, baseState, helpers.StartSlot(newCheckpoint.Epoch))
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(baseState, returned) {
t.Error("Incorrectly returned base state")
}
cached, err = store.checkpointState.StateByCheckpoint(newCheckpoint)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(returned, cached) {
t.Error("Incorrectly cached base state")
}
}
func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
if err := store.verifyAttTargetEpoch(
ctx,
0,
params.BeaconConfig().SlotsPerEpoch*params.BeaconConfig().SecondsPerSlot,
&ethpb.Checkpoint{}); err != nil {
t.Error(err)
}
}
func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
if err := store.verifyAttTargetEpoch(
ctx,
0,
params.BeaconConfig().SlotsPerEpoch*params.BeaconConfig().SecondsPerSlot,
&ethpb.Checkpoint{Epoch: 1}); err != nil {
t.Error(err)
}
}
func TestAttEpoch_NotMatch(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
if err := store.verifyAttTargetEpoch(
ctx,
0,
2*params.BeaconConfig().SlotsPerEpoch*params.BeaconConfig().SecondsPerSlot,
&ethpb.Checkpoint{}); !strings.Contains(err.Error(),
"target epoch 0 does not match current epoch 2 or prev epoch 1") {
t.Error("Did not receive wanted error")
}
}
func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := NewForkChoiceService(ctx, db)
d := &ethpb.AttestationData{}
if err := s.verifyBeaconBlock(ctx, d); !strings.Contains(err.Error(), "beacon block does not exist") {
t.Error("Did not receive the wanted error")
}
}
func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := NewForkChoiceService(ctx, db)
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 2}}
s.db.SaveBlock(ctx, b)
r, _ := ssz.HashTreeRoot(b.Block)
d := &ethpb.AttestationData{Slot: 1, BeaconBlockRoot: r[:]}
if err := s.verifyBeaconBlock(ctx, d); !strings.Contains(err.Error(), "could not process attestation for future block") {
t.Error("Did not receive the wanted error")
}
}
func TestVerifyBeaconBlock_OK(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := NewForkChoiceService(ctx, db)
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 2}}
s.db.SaveBlock(ctx, b)
r, _ := ssz.HashTreeRoot(b.Block)
d := &ethpb.AttestationData{Slot: 2, BeaconBlockRoot: r[:]}
if err := s.verifyBeaconBlock(ctx, d); err != nil {
t.Error("Did not receive the wanted error")
}
}

View File

@@ -5,18 +5,19 @@ import (
"context"
"encoding/hex"
"fmt"
"time"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/beacon-chain/flags"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/traceutil"
"github.com/sirupsen/logrus"
@@ -49,15 +50,22 @@ import (
//
// # Update justified checkpoint
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// store.justified_checkpoint = state.current_justified_checkpoint
// if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
// store.best_justified_checkpoint = state.current_justified_checkpoint
//
// # Update finalized checkpoint
// if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
// store.finalized_checkpoint = state.finalized_checkpoint
func (s *Store) OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error {
func (s *Store) OnBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.onBlock")
defer span.End()
if signed == nil || signed.Block == nil {
return errors.New("nil block")
}
b := signed.Block
// Retrieve incoming block's pre state.
preState, err := s.getBlockPreState(ctx, b)
if err != nil {
@@ -65,7 +73,7 @@ func (s *Store) OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error {
}
preStateValidatorCount := len(preState.Validators)
root, err := ssz.SigningRoot(b)
root, err := ssz.HashTreeRoot(b)
if err != nil {
return errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
@@ -73,15 +81,12 @@ func (s *Store) OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error {
"slot": b.Slot,
"root": fmt.Sprintf("0x%s...", hex.EncodeToString(root[:])[:8]),
}).Info("Executing state transition on block")
postState, err := state.ExecuteStateTransition(ctx, preState, b)
postState, err := state.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
return errors.Wrap(err, "could not execute state transition")
}
if err := s.updateBlockAttestationsVotes(ctx, b.Body.Attestations); err != nil {
return errors.Wrap(err, "could not update votes for attestations in block")
}
if err := s.db.SaveBlock(ctx, b); err != nil {
if err := s.db.SaveBlock(ctx, signed); err != nil {
return errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
if err := s.db.SaveState(ctx, postState, root); err != nil {
@@ -89,27 +94,26 @@ func (s *Store) OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error {
}
// Update justified check point.
if postState.CurrentJustifiedCheckpoint.Epoch > s.JustifiedCheckpt().Epoch {
s.justifiedCheckpt = postState.CurrentJustifiedCheckpoint
if err := s.db.SaveJustifiedCheckpoint(ctx, postState.CurrentJustifiedCheckpoint); err != nil {
return errors.Wrap(err, "could not save justified checkpoint")
if postState.CurrentJustifiedCheckpoint.Epoch > s.justifiedCheckpt.Epoch {
if err := s.updateJustified(ctx, postState); err != nil {
return err
}
}
// Update finalized check point.
// Prune the block cache and helper caches on every new finalized epoch.
if postState.FinalizedCheckpoint.Epoch > s.finalizedCheckpt.Epoch {
s.clearSeenAtts()
helpers.ClearAllCaches()
if err := s.db.SaveFinalizedCheckpoint(ctx, postState.FinalizedCheckpoint); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch) + 1
startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch)
endSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot+params.BeaconConfig().SlotsPerEpoch)
if endSlot > startSlot {
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot)
}
}
s.prevFinalizedCheckpt = s.finalizedCheckpt
@@ -126,30 +130,42 @@ func (s *Store) OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error {
}
// Epoch boundary bookkeeping such as logging epoch summaries.
if helpers.IsEpochStart(postState.Slot) {
if postState.Slot >= s.nextEpochBoundarySlot {
logEpochData(postState)
reportEpochMetrics(postState)
// Update committee shuffled indices at the end of every epoch
// Update committees cache at epoch boundary slot.
if featureconfig.Get().EnableNewCache {
if err := helpers.UpdateCommitteeCache(postState); err != nil {
if err := helpers.UpdateCommitteeCache(postState, helpers.CurrentEpoch(postState)); err != nil {
return err
}
}
s.nextEpochBoundarySlot = helpers.StartSlot(helpers.NextEpoch(postState))
}
return nil
}
// OnBlockNoVerifyStateTransition is called when an initial sync block is received.
// OnBlockInitialSyncStateTransition is called when an initial sync block is received.
// It runs the state transition on the block without any BLS verification. The BLS verification
// includes proposer signature, randao and attestation's aggregated signature.
func (s *Store) OnBlockNoVerifyStateTransition(ctx context.Context, b *ethpb.BeaconBlock) error {
// includes proposer signature, randao and attestation's aggregated signature. It also does not save
// attestations.
func (s *Store) OnBlockInitialSyncStateTransition(ctx context.Context, signed *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.onBlock")
defer span.End()
if signed == nil || signed.Block == nil {
return errors.New("nil block")
}
b := signed.Block
s.initSyncStateLock.Lock()
defer s.initSyncStateLock.Unlock()
// Retrieve incoming block's pre state.
preState, err := s.getBlockPreState(ctx, b)
preState, err := s.cachedPreState(ctx, b)
if err != nil {
return err
}
@@ -157,41 +173,48 @@ func (s *Store) OnBlockNoVerifyStateTransition(ctx context.Context, b *ethpb.Bea
log.WithField("slot", b.Slot).Debug("Executing state transition on block")
postState, err := state.ExecuteStateTransitionNoVerify(ctx, preState, b)
postState, err := state.ExecuteStateTransitionNoVerify(ctx, preState, signed)
if err != nil {
return errors.Wrap(err, "could not execute state transition")
}
if err := s.db.SaveBlock(ctx, b); err != nil {
if err := s.db.SaveBlock(ctx, signed); err != nil {
return errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
root, err := ssz.SigningRoot(b)
root, err := ssz.HashTreeRoot(b)
if err != nil {
return errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
if err := s.db.SaveState(ctx, postState, root); err != nil {
return errors.Wrap(err, "could not save state")
if featureconfig.Get().InitSyncCacheState {
s.initSyncState[root] = postState
} else {
if err := s.db.SaveState(ctx, postState, root); err != nil {
return errors.Wrap(err, "could not save state")
}
}
// Update justified check point.
if postState.CurrentJustifiedCheckpoint.Epoch > s.JustifiedCheckpt().Epoch {
s.justifiedCheckpt = postState.CurrentJustifiedCheckpoint
if err := s.db.SaveJustifiedCheckpoint(ctx, postState.CurrentJustifiedCheckpoint); err != nil {
return errors.Wrap(err, "could not save justified checkpoint")
if postState.CurrentJustifiedCheckpoint.Epoch > s.justifiedCheckpt.Epoch {
if err := s.updateJustified(ctx, postState); err != nil {
return err
}
}
// Update finalized check point.
// Prune the block cache and helper caches on every new finalized epoch.
if postState.FinalizedCheckpoint.Epoch > s.finalizedCheckpt.Epoch {
s.clearSeenAtts()
helpers.ClearAllCaches()
startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch) + 1
startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch)
endSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot+params.BeaconConfig().SlotsPerEpoch)
if endSlot > startSlot {
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot)
}
}
if err := s.saveInitState(ctx, postState); err != nil {
return errors.Wrap(err, "could not save init sync finalized state")
}
if err := s.db.SaveFinalizedCheckpoint(ctx, postState.FinalizedCheckpoint); err != nil {
@@ -206,21 +229,19 @@ func (s *Store) OnBlockNoVerifyStateTransition(ctx context.Context, b *ethpb.Bea
if err := s.saveNewValidators(ctx, preStateValidatorCount, postState); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
// Save the unseen attestations from block to db.
if err := s.saveNewBlockAttestations(ctx, b.Body.Attestations); err != nil {
return errors.Wrap(err, "could not save attestations")
if flags.Get().EnableArchive {
// Save the unseen attestations from block to db.
if err := s.saveNewBlockAttestations(ctx, b.Body.Attestations); err != nil {
return errors.Wrap(err, "could not save attestations")
}
}
// Epoch boundary bookkeeping such as logging epoch summaries.
if helpers.IsEpochStart(postState.Slot) {
if postState.Slot >= s.nextEpochBoundarySlot {
reportEpochMetrics(postState)
// Update committee shuffled indices at the end of every epoch
if featureconfig.Get().EnableNewCache {
if err := helpers.UpdateCommitteeCache(postState); err != nil {
return err
}
}
s.nextEpochBoundarySlot = helpers.StartSlot(helpers.NextEpoch(postState))
}
return nil
@@ -257,60 +278,6 @@ func (s *Store) getBlockPreState(ctx context.Context, b *ethpb.BeaconBlock) (*pb
return preState, nil
}
// updateBlockAttestationsVotes checks the attestations in block and filter out the seen ones,
// the unseen ones get passed to updateBlockAttestationVote for updating fork choice votes.
func (s *Store) updateBlockAttestationsVotes(ctx context.Context, atts []*ethpb.Attestation) error {
s.seenAttsLock.Lock()
defer s.seenAttsLock.Unlock()
for _, att := range atts {
// If we have not seen the attestation yet
r, err := hashutil.HashProto(att)
if err != nil {
return err
}
if s.seenAtts[r] {
continue
}
if err := s.updateBlockAttestationVote(ctx, att); err != nil {
log.WithError(err).Warn("Attestation failed to update vote")
}
s.seenAtts[r] = true
}
return nil
}
// updateBlockAttestationVotes checks the attestation to update validator's latest votes.
func (s *Store) updateBlockAttestationVote(ctx context.Context, att *ethpb.Attestation) error {
tgt := att.Data.Target
baseState, err := s.db.State(ctx, bytesutil.ToBytes32(tgt.Root))
if err != nil {
return errors.Wrap(err, "could not get state for attestation tgt root")
}
if baseState == nil {
return errors.New("no state found in db with attestation tgt root")
}
indexedAtt, err := blocks.ConvertToIndexed(ctx, baseState, att)
if err != nil {
return errors.Wrap(err, "could not convert attestation to indexed attestation")
}
for _, i := range append(indexedAtt.CustodyBit_0Indices, indexedAtt.CustodyBit_1Indices...) {
vote, err := s.db.ValidatorLatestVote(ctx, i)
if err != nil {
return errors.Wrapf(err, "could not get latest vote for validator %d", i)
}
if vote == nil || tgt.Epoch > vote.Epoch {
if err := s.db.SaveValidatorLatestVote(ctx, i, &pb.ValidatorLatestVote{
Epoch: tgt.Epoch,
Root: tgt.Root,
}); err != nil {
return errors.Wrapf(err, "could not save latest vote for validator %d", i)
}
}
}
return nil
}
// verifyBlkPreState validates input block has a valid pre-state.
func (s *Store) verifyBlkPreState(ctx context.Context, b *ethpb.BeaconBlock) (*pb.BeaconState, error) {
preState, err := s.db.State(ctx, bytesutil.ToBytes32(b.ParentRoot))
@@ -329,10 +296,11 @@ func (s *Store) verifyBlkDescendant(ctx context.Context, root [32]byte, slot uin
ctx, span := trace.StartSpan(ctx, "forkchoice.verifyBlkDescendant")
defer span.End()
finalizedBlk, err := s.db.Block(ctx, bytesutil.ToBytes32(s.finalizedCheckpt.Root))
if err != nil || finalizedBlk == nil {
finalizedBlkSigned, err := s.db.Block(ctx, bytesutil.ToBytes32(s.finalizedCheckpt.Root))
if err != nil || finalizedBlkSigned == nil || finalizedBlkSigned.Block == nil {
return errors.Wrap(err, "could not get finalized block")
}
finalizedBlk := finalizedBlkSigned.Block
bFinalizedRoot, err := s.ancestor(ctx, root[:], finalizedBlk.Slot)
if err != nil {
@@ -393,20 +361,22 @@ func (s *Store) saveNewBlockAttestations(ctx context.Context, atts []*ethpb.Atte
return nil
}
// clearSeenAtts clears seen attestations map, it gets called upon new finalization.
func (s *Store) clearSeenAtts() {
s.seenAttsLock.Lock()
s.seenAttsLock.Unlock()
s.seenAtts = make(map[[32]byte]bool)
}
// rmStatesOlderThanLastFinalized deletes the states in db since last finalized check point.
func (s *Store) rmStatesOlderThanLastFinalized(ctx context.Context, startSlot uint64, endSlot uint64) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.rmStatesBySlots")
defer span.End()
if !featureconfig.Get().PruneFinalizedStates {
return nil
// Make sure start slot is not a skipped slot
for i := startSlot; i > 0; i-- {
filter := filters.NewFilter().SetStartSlot(i).SetEndSlot(i)
b, err := s.db.Blocks(ctx, filter)
if err != nil {
return err
}
if len(b) > 0 {
startSlot = i
break
}
}
// Make sure finalized slot is not a skipped slot.
@@ -426,17 +396,177 @@ func (s *Store) rmStatesOlderThanLastFinalized(ctx context.Context, startSlot ui
if startSlot == 0 {
startSlot++
}
// If the end slot is less than the start slot
if endSlot < startSlot {
endSlot = startSlot
}
// Do not remove finalized state that's in the middle of slot ranges.
filter := filters.NewFilter().SetStartSlot(startSlot).SetEndSlot(endSlot)
roots, err := s.db.BlockRoots(ctx, filter)
if err != nil {
return err
}
roots, err = s.filterBlockRoots(ctx, roots)
if err != nil {
return err
}
if err := s.db.DeleteStates(ctx, roots); err != nil {
return err
}
return nil
}
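A standalone sketch of the slot-range adjustment above: the start slot is walked back to the most recent slot that actually contains a block, and the end slot is clamped so it never precedes the start. hasBlock is a hypothetical stand-in for the db.Blocks lookup:

package main

import "fmt"

func pruneRange(startSlot, endSlot uint64, hasBlock func(slot uint64) bool) (uint64, uint64) {
	// Make sure the start slot is not a skipped slot.
	for i := startSlot; i > 0; i-- {
		if hasBlock(i) {
			startSlot = i
			break
		}
	}
	if startSlot == 0 {
		startSlot++
	}
	// Clamp so the end never precedes the start.
	if endSlot < startSlot {
		endSlot = startSlot
	}
	return startSlot, endSlot
}

func main() {
	// Blocks exist at slots 1..4; slots 5..10 are skipped, so a requested range of
	// [10, 11] collapses to [4, 11].
	blocks := map[uint64]bool{1: true, 2: true, 3: true, 4: true}
	fmt.Println(pruneRange(10, 11, func(s uint64) bool { return blocks[s] }))
}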
// shouldUpdateCurrentJustified prevents the bouncing attack by only updating conflicting justified
// checkpoints in the fork choice during the early slots of the epoch.
// Otherwise, delay incorporation of new justified checkpoint until next epoch boundary.
// See https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114 for more detailed analysis and discussion.
func (s *Store) shouldUpdateCurrentJustified(ctx context.Context, newJustifiedCheckpt *ethpb.Checkpoint) (bool, error) {
if helpers.SlotsSinceEpochStarts(s.currentSlot()) < params.BeaconConfig().SafeSlotsToUpdateJustified {
return true, nil
}
newJustifiedBlockSigned, err := s.db.Block(ctx, bytesutil.ToBytes32(newJustifiedCheckpt.Root))
if err != nil {
return false, err
}
if newJustifiedBlockSigned == nil || newJustifiedBlockSigned.Block == nil {
return false, errors.New("nil new justified block")
}
newJustifiedBlock := newJustifiedBlockSigned.Block
if newJustifiedBlock.Slot <= helpers.StartSlot(s.justifiedCheckpt.Epoch) {
return false, nil
}
justifiedBlockSigned, err := s.db.Block(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if err != nil {
return false, err
}
if justifiedBlockSigned == nil || justifiedBlockSigned.Block == nil {
return false, errors.New("nil justified block")
}
justifiedBlock := justifiedBlockSigned.Block
b, err := s.ancestor(ctx, newJustifiedCheckpt.Root, justifiedBlock.Slot)
if err != nil {
return false, err
}
if !bytes.Equal(b, s.justifiedCheckpt.Root) {
return false, nil
}
return true, nil
}
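As a rough illustration of the early-epoch gate used by shouldUpdateCurrentJustified, the sketch below assumes 32-slot epochs and a SAFE_SLOTS_TO_UPDATE_JUSTIFIED of 8 (the mainnet spec value; the real code reads params.BeaconConfig().SafeSlotsToUpdateJustified):

package main

import "fmt"

const (
	slotsPerEpoch              = 32
	safeSlotsToUpdateJustified = 8
)

// canUpdateEarlyInEpoch reports whether a new justified checkpoint may be adopted
// immediately, i.e. whether we are still in the first few slots of the epoch.
func canUpdateEarlyInEpoch(currentSlot uint64) bool {
	slotsSinceEpochStart := currentSlot % slotsPerEpoch
	return slotsSinceEpochStart < safeSlotsToUpdateJustified
}

func main() {
	fmt.Println(canUpdateEarlyInEpoch(64)) // true: slot 0 of epoch 2
	fmt.Println(canUpdateEarlyInEpoch(95)) // false: slot 31 of epoch 2, defer to the next boundary
}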
func (s *Store) updateJustified(ctx context.Context, state *pb.BeaconState) error {
if state.CurrentJustifiedCheckpoint.Epoch > s.bestJustifiedCheckpt.Epoch {
s.bestJustifiedCheckpt = state.CurrentJustifiedCheckpoint
}
canUpdate, err := s.shouldUpdateCurrentJustified(ctx, state.CurrentJustifiedCheckpoint)
if err != nil {
return err
}
if canUpdate {
s.justifiedCheckpt = state.CurrentJustifiedCheckpoint
}
if featureconfig.Get().InitSyncCacheState {
justifiedRoot := bytesutil.ToBytes32(state.CurrentJustifiedCheckpoint.Root)
justifiedState := s.initSyncState[justifiedRoot]
if err := s.db.SaveState(ctx, justifiedState, justifiedRoot); err != nil {
return errors.Wrap(err, "could not save justified state")
}
}
return s.db.SaveJustifiedCheckpoint(ctx, state.CurrentJustifiedCheckpoint)
}
// currentSlot returns the current slot based on time.
func (s *Store) currentSlot() uint64 {
return (uint64(time.Now().Unix()) - s.genesisTime) / params.BeaconConfig().SecondsPerSlot
}
// updateJustifiedCheckpoint updates the justified checkpoint in store if a better checkpoint is known
func (s *Store) updateJustifiedCheckpoint() {
// Update at epoch boundary slot only
if !helpers.IsEpochStart(s.currentSlot()) {
return
}
if s.bestJustifiedCheckpt.Epoch > s.justifiedCheckpt.Epoch {
s.justifiedCheckpt = s.bestJustifiedCheckpt
}
}
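A minimal sketch of the boundary-only promotion performed by updateJustifiedCheckpoint, with a simplified checkpoint type and an assumed 32-slot epoch; together with the gate above, this is what bounds the bouncing attack:

package main

import "fmt"

const slotsPerEpoch = 32

type checkpoint struct{ Epoch uint64 }

// promoteAtBoundary only replaces the active justified checkpoint with the best known
// one at an epoch start slot; mid-epoch the current checkpoint is kept.
func promoteAtBoundary(currentSlot uint64, justified, best checkpoint) checkpoint {
	if currentSlot%slotsPerEpoch != 0 {
		return justified
	}
	if best.Epoch > justified.Epoch {
		return best
	}
	return justified
}

func main() {
	fmt.Println(promoteAtBoundary(64, checkpoint{1}, checkpoint{2})) // {2}: promoted at the boundary
	fmt.Println(promoteAtBoundary(65, checkpoint{1}, checkpoint{2})) // {1}: mid-epoch, deferred
}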
// This retrieves the cached pre-state in memory; it is only used during initial sync.
func (s *Store) cachedPreState(ctx context.Context, b *ethpb.BeaconBlock) (*pb.BeaconState, error) {
if featureconfig.Get().InitSyncCacheState {
preState := s.initSyncState[bytesutil.ToBytes32(b.ParentRoot)]
var err error
if preState == nil {
preState, err = s.db.State(ctx, bytesutil.ToBytes32(b.ParentRoot))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot)
}
if preState == nil {
return nil, fmt.Errorf("pre state of slot %d does not exist", b.Slot)
}
}
return proto.Clone(preState).(*pb.BeaconState), nil
}
preState, err := s.db.State(ctx, bytesutil.ToBytes32(b.ParentRoot))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot)
}
if preState == nil {
return nil, fmt.Errorf("pre state of slot %d does not exist", b.Slot)
}
return preState, nil
}
// This saves every finalized state in DB during initial sync, needed as part of optimization to
// use cache state during initial sync in case of restart.
func (s *Store) saveInitState(ctx context.Context, state *pb.BeaconState) error {
if !featureconfig.Get().InitSyncCacheState {
return nil
}
finalizedRoot := bytesutil.ToBytes32(state.FinalizedCheckpoint.Root)
fs := s.initSyncState[finalizedRoot]
if err := s.db.SaveState(ctx, fs, finalizedRoot); err != nil {
return errors.Wrap(err, "could not save state")
}
for r, oldState := range s.initSyncState {
if oldState.Slot < state.FinalizedCheckpoint.Epoch*params.BeaconConfig().SlotsPerEpoch {
delete(s.initSyncState, r)
}
}
return nil
}
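A simplified sketch of the cache pruning done in saveInitState: every cached initial-sync state older than the newly finalized epoch boundary is dropped, keeping the cache roughly bounded to the unfinalized range. cachedState is a stand-in type, not the real BeaconState:

package main

import "fmt"

const slotsPerEpoch = 32

type cachedState struct{ Slot uint64 }

// pruneInitSyncCache removes every cached state whose slot is below the finalized epoch boundary.
func pruneInitSyncCache(cache map[[32]byte]*cachedState, finalizedEpoch uint64) {
	for root, st := range cache {
		if st.Slot < finalizedEpoch*slotsPerEpoch {
			delete(cache, root)
		}
	}
}

func main() {
	cache := map[[32]byte]*cachedState{
		{1}: {Slot: 10},
		{2}: {Slot: 40},
	}
	pruneInitSyncCache(cache, 1) // finalized epoch 1: drop everything below slot 32
	fmt.Println(len(cache))      // 1
}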
// This filters out block roots that match the current head root or the finalized root in DB.
// It serves as the last line of defence before we prune states.
func (s *Store) filterBlockRoots(ctx context.Context, roots [][32]byte) ([][32]byte, error) {
f, err := s.db.FinalizedCheckpoint(ctx)
if err != nil {
return nil, err
}
fRoot := f.Root
h, err := s.db.HeadBlock(ctx)
if err != nil {
return nil, err
}
hRoot, err := ssz.SigningRoot(h)
if err != nil {
return nil, err
}
filtered := make([][32]byte, 0, len(roots))
for _, root := range roots {
if bytes.Equal(root[:], fRoot[:]) || bytes.Equal(root[:], hRoot[:]) {
continue
}
filtered = append(filtered, root)
}
return filtered, nil
}
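A compilable sketch of the same last-line-of-defence filtering, with the head and finalized roots passed in directly instead of being looked up from the DB:

package main

import (
	"bytes"
	"fmt"
)

// filterProtectedRoots drops any root equal to the head root or the finalized root
// so the corresponding states are never pruned.
func filterProtectedRoots(roots [][32]byte, headRoot, finalizedRoot [32]byte) [][32]byte {
	filtered := make([][32]byte, 0, len(roots))
	for _, r := range roots {
		if bytes.Equal(r[:], headRoot[:]) || bytes.Equal(r[:], finalizedRoot[:]) {
			continue
		}
		filtered = append(filtered, r)
	}
	return filtered
}

func main() {
	head, fin := [32]byte{'H'}, [32]byte{'F'}
	roots := [][32]byte{{'C'}, head, {'D'}, fin}
	fmt.Println(len(filterProtectedRoots(roots, head, fin))) // 2
}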

View File

@@ -1,32 +1,26 @@
package forkchoice
import (
"bytes"
"context"
"reflect"
"strings"
"testing"
"time"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/stateutil"
)
func init() {
fc := featureconfig.Get()
fc.PruneFinalizedStates = true
featureconfig.Init(fc)
}
func TestStore_OnBlock(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
@@ -34,23 +28,40 @@ func TestStore_OnBlock(t *testing.T) {
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
genesisStateRoot, err := stateutil.HashTreeRootState(&pb.BeaconState{})
if err != nil {
t.Error(err)
}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
if err := db.SaveBlock(ctx, genesis); err != nil {
t.Error(err)
}
validGenesisRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Error(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, validGenesisRoot); err != nil {
t.Fatal(err)
}
roots, err := blockTree1(db, validGenesisRoot[:])
if err != nil {
t.Fatal(err)
}
randomParentRoot := []byte{'a'}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, bytesutil.ToBytes32(randomParentRoot)); err != nil {
random := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1, ParentRoot: validGenesisRoot[:]}}
if err := db.SaveBlock(ctx, random); err != nil {
t.Error(err)
}
randomParentRoot, err := ssz.HashTreeRoot(random.Block)
if err != nil {
t.Error(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, randomParentRoot); err != nil {
t.Fatal(err)
}
randomParentRoot2 := roots[1]
if err := store.db.SaveState(ctx, &pb.BeaconState{}, bytesutil.ToBytes32(randomParentRoot2)); err != nil {
t.Fatal(err)
}
validGenesisRoot := []byte{'g'}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, bytesutil.ToBytes32(validGenesisRoot)); err != nil {
t.Fatal(err)
}
tests := []struct {
name string
@@ -67,13 +78,13 @@ func TestStore_OnBlock(t *testing.T) {
},
{
name: "block is from the feature",
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot, Slot: params.BeaconConfig().FarFutureEpoch},
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot[:], Slot: params.BeaconConfig().FarFutureEpoch},
s: &pb.BeaconState{},
wantErrString: "could not process slot from the future",
},
{
name: "could not get finalized block",
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot},
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot[:]},
s: &pb.BeaconState{},
wantErrString: "block from slot 0 is not a descendent of the current finalized block",
},
@@ -87,12 +98,12 @@ func TestStore_OnBlock(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{Root: validGenesisRoot[:]}, &ethpb.Checkpoint{Root: validGenesisRoot[:]}); err != nil {
t.Fatal(err)
}
store.finalizedCheckpt.Root = roots[0]
err := store.OnBlock(ctx, tt.blk)
err := store.OnBlock(ctx, &ethpb.SignedBeaconBlock{Block: tt.blk})
if !strings.Contains(err.Error(), tt.wantErrString) {
t.Errorf("Store.OnBlock() error = %v, wantErr = %v", err, tt.wantErrString)
}
@@ -126,108 +137,14 @@ func TestStore_SaveNewValidators(t *testing.T) {
}
}
func TestStore_UpdateBlockAttestationVote(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
deposits, _, _ := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, uint64(0), &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
if err != nil {
t.Fatal(err)
}
store := NewForkChoiceService(ctx, db)
r := [32]byte{'A'}
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: r[:]},
},
AggregationBits: []byte{255},
CustodyBits: []byte{255},
}
if err := store.db.SaveState(ctx, beaconState, r); err != nil {
t.Fatal(err)
}
indices, err := blocks.ConvertToIndexed(ctx, beaconState, att)
if err != nil {
t.Fatal(err)
}
var attestedIndices []uint64
for _, k := range append(indices.CustodyBit_0Indices, indices.CustodyBit_1Indices...) {
attestedIndices = append(attestedIndices, k)
}
if err := store.updateBlockAttestationVote(ctx, att); err != nil {
t.Fatal(err)
}
for _, i := range attestedIndices {
v, err := store.db.ValidatorLatestVote(ctx, i)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(v.Root, r[:]) {
t.Error("Attested roots don't match")
}
}
}
func TestStore_UpdateBlockAttestationsVote(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
deposits, _, _ := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, uint64(0), &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
if err != nil {
t.Fatal(err)
}
store := NewForkChoiceService(ctx, db)
r := [32]byte{'A'}
atts := make([]*ethpb.Attestation, 5)
hashes := make([][32]byte, 5)
for i := 0; i < len(atts); i++ {
atts[i] = &ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: r[:]},
},
AggregationBits: []byte{255},
CustodyBits: []byte{255},
}
h, _ := hashutil.HashProto(atts[i])
hashes[i] = h
}
if err := store.db.SaveState(ctx, beaconState, r); err != nil {
t.Fatal(err)
}
if err := store.updateBlockAttestationsVotes(ctx, atts); err != nil {
t.Fatal(err)
}
for _, h := range hashes {
if !store.seenAtts[h] {
t.Error("Seen attestation did not get recorded")
}
}
}
func TestStore_SavesNewBlockAttestations(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
a1 := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b101}, CustodyBits: bitfield.NewBitlist(2)}
a2 := &ethpb.Attestation{Data: &ethpb.AttestationData{BeaconBlockRoot: []byte{'A'}}, AggregationBits: bitfield.Bitlist{0b110}, CustodyBits: bitfield.NewBitlist(2)}
a1 := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b101}}
a2 := &ethpb.Attestation{Data: &ethpb.AttestationData{BeaconBlockRoot: []byte{'A'}}, AggregationBits: bitfield.Bitlist{0b110}}
r1, _ := ssz.HashTreeRoot(a1.Data)
r2, _ := ssz.HashTreeRoot(a2.Data)
@@ -251,8 +168,8 @@ func TestStore_SavesNewBlockAttestations(t *testing.T) {
t.Error("did not retrieve saved attestation")
}
a1 = &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b111}, CustodyBits: bitfield.NewBitlist(2)}
a2 = &ethpb.Attestation{Data: &ethpb.AttestationData{BeaconBlockRoot: []byte{'A'}}, AggregationBits: bitfield.Bitlist{0b111}, CustodyBits: bitfield.NewBitlist(2)}
a1 = &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b111}}
a2 = &ethpb.Attestation{Data: &ethpb.AttestationData{BeaconBlockRoot: []byte{'A'}}, AggregationBits: bitfield.Bitlist{0b111}}
if err := store.saveNewBlockAttestations(ctx, []*ethpb.Attestation{a1, a2}); err != nil {
t.Fatal(err)
@@ -286,13 +203,15 @@ func TestRemoveStateSinceLastFinalized(t *testing.T) {
// Save 100 blocks in DB, each has a state.
numBlocks := 100
totalBlocks := make([]*ethpb.BeaconBlock, numBlocks)
totalBlocks := make([]*ethpb.SignedBeaconBlock, numBlocks)
blockRoots := make([][32]byte, 0)
for i := 0; i < len(totalBlocks); i++ {
totalBlocks[i] = &ethpb.BeaconBlock{
Slot: uint64(i),
totalBlocks[i] = &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: uint64(i),
},
}
r, err := ssz.SigningRoot(totalBlocks[i])
r, err := ssz.HashTreeRoot(totalBlocks[i].Block)
if err != nil {
t.Fatal(err)
}
@@ -303,6 +222,9 @@ func TestRemoveStateSinceLastFinalized(t *testing.T) {
t.Fatal(err)
}
blockRoots = append(blockRoots, r)
if err := store.db.SaveHeadBlockRoot(ctx, r); err != nil {
t.Fatal(err)
}
}
// New finalized epoch: 1
@@ -341,3 +263,348 @@ func TestRemoveStateSinceLastFinalized(t *testing.T) {
}
}
}
func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
store.genesisTime = uint64(time.Now().Unix())
update, err := store.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{})
if err != nil {
t.Fatal(err)
}
if !update {
t.Error("Should be able to update justified, received false")
}
lastJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{ParentRoot: []byte{'G'}}}
lastJustifiedRoot, _ := ssz.HashTreeRoot(lastJustifiedBlk.Block)
newJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1, ParentRoot: lastJustifiedRoot[:]}}
newJustifiedRoot, _ := ssz.HashTreeRoot(newJustifiedBlk.Block)
if err := store.db.SaveBlock(ctx, newJustifiedBlk); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, lastJustifiedBlk); err != nil {
t.Fatal(err)
}
diff := (params.BeaconConfig().SlotsPerEpoch - 1) * params.BeaconConfig().SecondsPerSlot
store.genesisTime = uint64(time.Now().Unix()) - diff
store.justifiedCheckpt = &ethpb.Checkpoint{Root: lastJustifiedRoot[:]}
update, err = store.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
if err != nil {
t.Fatal(err)
}
if !update {
t.Error("Should be able to update justified, received false")
}
}
func TestShouldUpdateJustified_ReturnFalse(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
lastJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{ParentRoot: []byte{'G'}}}
lastJustifiedRoot, _ := ssz.HashTreeRoot(lastJustifiedBlk.Block)
newJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{ParentRoot: lastJustifiedRoot[:]}}
newJustifiedRoot, _ := ssz.HashTreeRoot(newJustifiedBlk.Block)
if err := store.db.SaveBlock(ctx, newJustifiedBlk); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, lastJustifiedBlk); err != nil {
t.Fatal(err)
}
diff := (params.BeaconConfig().SlotsPerEpoch - 1) * params.BeaconConfig().SecondsPerSlot
store.genesisTime = uint64(time.Now().Unix()) - diff
store.justifiedCheckpt = &ethpb.Checkpoint{Root: lastJustifiedRoot[:]}
update, err := store.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
if err != nil {
t.Fatal(err)
}
if update {
t.Error("Should not be able to update justified, received true")
}
}
func TestUpdateJustifiedCheckpoint_Update(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
store.genesisTime = uint64(time.Now().Unix())
store.justifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
store.bestJustifiedCheckpt = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'B'}}
store.updateJustifiedCheckpoint()
if !bytes.Equal(store.justifiedCheckpt.Root, []byte{'B'}) {
t.Error("Justified check point root did not update")
}
}
func TestUpdateJustifiedCheckpoint_NoUpdate(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
store.genesisTime = uint64(time.Now().Unix()) - params.BeaconConfig().SecondsPerSlot
store.justifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
store.bestJustifiedCheckpt = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'B'}}
store.updateJustifiedCheckpoint()
if bytes.Equal(store.justifiedCheckpt.Root, []byte{'B'}) {
t.Error("Justified check point root was not suppose to update")
store := NewForkChoiceService(ctx, db)
// Save 5 blocks in DB, each has a state.
numBlocks := 5
totalBlocks := make([]*ethpb.SignedBeaconBlock, numBlocks)
blockRoots := make([][32]byte, 0)
for i := 0; i < len(totalBlocks); i++ {
totalBlocks[i] = &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: uint64(i),
},
}
r, err := ssz.HashTreeRoot(totalBlocks[i].Block)
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{Slot: uint64(i)}, r); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, totalBlocks[i]); err != nil {
t.Fatal(err)
}
blockRoots = append(blockRoots, r)
}
if err := store.db.SaveHeadBlockRoot(ctx, blockRoots[0]); err != nil {
t.Fatal(err)
}
if err := store.rmStatesOlderThanLastFinalized(ctx, 10, 11); err != nil {
t.Fatal(err)
}
// Since 5-10 are skip slots, block with slot 4 should be deleted
s, err := store.db.State(ctx, blockRoots[4])
if err != nil {
t.Fatal(err)
}
if s != nil {
t.Error("Did not delete state for start slot")
}
}
}
func TestCachedPreState_CanGetFromCache(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
s := &pb.BeaconState{Slot: 1}
r := [32]byte{'A'}
b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]}
store.initSyncState[r] = s
wanted := "pre state of slot 1 does not exist"
if _, err := store.cachedPreState(ctx, b); !strings.Contains(err.Error(), wanted) {
t.Fatal("Not expected error")
}
}
func TestCachedPreState_CanGetFromCacheWithFeature(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
config := &featureconfig.Flags{
InitSyncCacheState: true,
}
featureconfig.Init(config)
store := NewForkChoiceService(ctx, db)
s := &pb.BeaconState{Slot: 1}
r := [32]byte{'A'}
b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]}
store.initSyncState[r] = s
received, err := store.cachedPreState(ctx, b)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(s, received) {
t.Error("cached state not the same")
}
}
func TestCachedPreState_CanGetFromDB(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
r := [32]byte{'A'}
b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]}
_, err := store.cachedPreState(ctx, b)
wanted := "pre state of slot 1 does not exist"
if err.Error() != wanted {
t.Error("Did not get wanted error")
}
s := &pb.BeaconState{Slot: 1}
store.db.SaveState(ctx, s, r)
received, err := store.cachedPreState(ctx, b)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(s, received) {
t.Error("cached state not the same")
}
}
func TestSaveInitState_CanSaveDelete(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
config := &featureconfig.Flags{
InitSyncCacheState: true,
}
featureconfig.Init(config)
for i := uint64(0); i < 64; i++ {
b := &ethpb.BeaconBlock{Slot: i}
s := &pb.BeaconState{Slot: i}
r, _ := ssz.HashTreeRoot(b)
store.initSyncState[r] = s
}
// Set finalized root as slot 32
finalizedRoot, _ := ssz.HashTreeRoot(&ethpb.BeaconBlock{Slot: 32})
if err := store.saveInitState(ctx, &pb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{
Epoch: 1, Root: finalizedRoot[:]}}); err != nil {
t.Fatal(err)
}
// Verify finalized state is saved in DB
finalizedState, err := store.db.State(ctx, finalizedRoot)
if err != nil {
t.Fatal(err)
}
if finalizedState == nil {
t.Error("finalized state can't be nil")
}
// Verify cached state is properly pruned
if len(store.initSyncState) != int(params.BeaconConfig().SlotsPerEpoch) {
t.Errorf("wanted: %d, got: %d", len(store.initSyncState), params.BeaconConfig().SlotsPerEpoch)
}
}
func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
signedBlock := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := db.SaveBlock(ctx, signedBlock); err != nil {
t.Fatal(err)
}
r, err := ssz.HashTreeRoot(signedBlock.Block)
if err != nil {
t.Fatal(err)
}
store.justifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
store.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
store.initSyncState[r] = &pb.BeaconState{}
if err := db.SaveState(ctx, &pb.BeaconState{}, r); err != nil {
t.Fatal(err)
}
// Could update
s := &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: r[:]}}
if err := store.updateJustified(context.Background(), s); err != nil {
t.Fatal(err)
}
if store.bestJustifiedCheckpt.Epoch != s.CurrentJustifiedCheckpoint.Epoch {
t.Error("Incorrect justified epoch in store")
}
// Could not update
store.bestJustifiedCheckpt.Epoch = 2
if err := store.updateJustified(context.Background(), s); err != nil {
t.Fatal(err)
}
if store.bestJustifiedCheckpt.Epoch != 2 {
t.Error("Incorrect justified epoch in store")
}
}
func TestFilterBlockRoots_CanFilter(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
fBlock := &ethpb.BeaconBlock{}
fRoot, _ := ssz.HashTreeRoot(fBlock)
hBlock := &ethpb.BeaconBlock{Slot: 1}
headRoot, _ := ssz.HashTreeRoot(hBlock)
if err := store.db.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: fBlock}); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, fRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: fRoot[:]}); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: hBlock}); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, headRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveHeadBlockRoot(ctx, headRoot); err != nil {
t.Fatal(err)
}
roots := [][32]byte{{'C'}, {'D'}, headRoot, {'E'}, fRoot, {'F'}}
wanted := [][32]byte{{'C'}, {'D'}, {'E'}, {'F'}}
received, err := store.filterBlockRoots(ctx, roots)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(wanted, received) {
t.Error("Did not filter correctly")
}
}

View File

@@ -3,17 +3,23 @@ package forkchoice
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"sync"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/stateutil"
"go.opencensus.io/trace"
)
@@ -21,9 +27,9 @@ import (
// to beacon blocks to compute head.
type ForkChoicer interface {
Head(ctx context.Context) ([]byte, error)
OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error
OnBlockNoVerifyStateTransition(ctx context.Context, b *ethpb.BeaconBlock) error
OnAttestation(ctx context.Context, a *ethpb.Attestation) (uint64, error)
OnBlock(ctx context.Context, b *ethpb.SignedBeaconBlock) error
OnBlockInitialSyncStateTransition(ctx context.Context, b *ethpb.SignedBeaconBlock) error
OnAttestation(ctx context.Context, a *ethpb.Attestation) error
GenesisStore(ctx context.Context, justifiedCheckpoint *ethpb.Checkpoint, finalizedCheckpoint *ethpb.Checkpoint) error
FinalizedCheckpt() *ethpb.Checkpoint
}
@@ -31,18 +37,21 @@ type ForkChoicer interface {
// Store represents a service struct that handles the forkchoice
// logic of managing the full PoS beacon chain.
type Store struct {
ctx context.Context
cancel context.CancelFunc
db db.Database
justifiedCheckpt *ethpb.Checkpoint
finalizedCheckpt *ethpb.Checkpoint
prevFinalizedCheckpt *ethpb.Checkpoint
checkpointState *cache.CheckpointStateCache
checkpointStateLock sync.Mutex
attsQueue map[[32]byte]*ethpb.Attestation
attsQueueLock sync.Mutex
seenAtts map[[32]byte]bool
seenAttsLock sync.Mutex
ctx context.Context
cancel context.CancelFunc
db db.Database
justifiedCheckpt *ethpb.Checkpoint
finalizedCheckpt *ethpb.Checkpoint
prevFinalizedCheckpt *ethpb.Checkpoint
checkpointState *cache.CheckpointStateCache
checkpointStateLock sync.Mutex
genesisTime uint64
bestJustifiedCheckpt *ethpb.Checkpoint
latestVoteMap map[uint64]*pb.ValidatorLatestVote
voteLock sync.RWMutex
initSyncState map[[32]byte]*pb.BeaconState
initSyncStateLock sync.RWMutex
nextEpochBoundarySlot uint64
}
// NewForkChoiceService instantiates a new service instance that will
@@ -54,8 +63,8 @@ func NewForkChoiceService(ctx context.Context, db db.Database) *Store {
cancel: cancel,
db: db,
checkpointState: cache.NewCheckpointStateCache(),
attsQueue: make(map[[32]byte]*ethpb.Attestation),
seenAtts: make(map[[32]byte]bool),
latestVoteMap: make(map[uint64]*pb.ValidatorLatestVote),
initSyncState: make(map[[32]byte]*pb.BeaconState),
}
}
@@ -82,6 +91,7 @@ func (s *Store) GenesisStore(
finalizedCheckpoint *ethpb.Checkpoint) error {
s.justifiedCheckpt = proto.Clone(justifiedCheckpoint).(*ethpb.Checkpoint)
s.bestJustifiedCheckpt = proto.Clone(justifiedCheckpoint).(*ethpb.Checkpoint)
s.finalizedCheckpt = proto.Clone(finalizedCheckpoint).(*ethpb.Checkpoint)
s.prevFinalizedCheckpt = proto.Clone(finalizedCheckpoint).(*ethpb.Checkpoint)
@@ -97,6 +107,35 @@ func (s *Store) GenesisStore(
return errors.Wrap(err, "could not save genesis state in check point cache")
}
s.genesisTime = justifiedState.GenesisTime
if err := s.cacheGenesisState(ctx); err != nil {
return errors.Wrap(err, "could not cache initial sync state")
}
return nil
}
// This sets up genesis for the initial sync state cache.
func (s *Store) cacheGenesisState(ctx context.Context) error {
if !featureconfig.Get().InitSyncCacheState {
return nil
}
genesisState, err := s.db.GenesisState(ctx)
if err != nil {
return err
}
stateRoot, err := stateutil.HashTreeRootState(genesisState)
if err != nil {
return errors.Wrap(err, "could not tree hash genesis state")
}
genesisBlk := blocks.NewGenesisBlock(stateRoot[:])
genesisBlkRoot, err := ssz.HashTreeRoot(genesisBlk.Block)
if err != nil {
return errors.Wrap(err, "could not get genesis block root")
}
s.initSyncState[genesisBlkRoot] = genesisState
return nil
}
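For illustration only, a minimal standalone sketch of the caching pattern above: fill a one-time genesis cache gated by a feature flag and key it by the block root. The toy types, the flag field, and the sha256 stand-in for the SSZ hash tree root are assumptions of the sketch, not Prysm code.

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Toy stand-ins for the beacon state and forkchoice store; sha256 replaces the
// real SSZ hash-tree-root purely to keep the sketch self-contained.
type toyState struct{ GenesisTime uint64 }

type toyStore struct {
	initSyncCacheEnabled bool                   // stand-in for the InitSyncCacheState feature flag
	initSyncState        map[[32]byte]*toyState // state cache keyed by block root
}

func (s *toyStore) cacheGenesisState(genesis *toyState) {
	if !s.initSyncCacheEnabled {
		return // cache is only populated when the feature flag is on
	}
	var buf [8]byte
	binary.LittleEndian.PutUint64(buf[:], genesis.GenesisTime)
	root := sha256.Sum256(buf[:]) // placeholder for the genesis block root
	s.initSyncState[root] = genesis
}

func main() {
	s := &toyStore{initSyncCacheEnabled: true, initSyncState: map[[32]byte]*toyState{}}
	s.cacheGenesisState(&toyState{GenesisTime: 99})
	fmt.Println("cached states:", len(s.initSyncState))
}
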
@@ -115,10 +154,19 @@ func (s *Store) ancestor(ctx context.Context, root []byte, slot uint64) ([]byte,
ctx, span := trace.StartSpan(ctx, "forkchoice.ancestor")
defer span.End()
b, err := s.db.Block(ctx, bytesutil.ToBytes32(root))
// Stop recursive ancestry lookup if context is cancelled.
if ctx.Err() != nil {
return nil, ctx.Err()
}
signed, err := s.db.Block(ctx, bytesutil.ToBytes32(root))
if err != nil {
return nil, errors.Wrap(err, "could not get ancestor block")
}
if signed == nil || signed.Block == nil {
return nil, errors.New("nil block")
}
b := signed.Block
// If we don't have the ancestor in the DB, simply return nil so the rest of the fork choice
// operation can proceed. This is not an error condition.
@@ -162,18 +210,21 @@ func (s *Store) latestAttestingBalance(ctx context.Context, root []byte) (uint64
return 0, errors.Wrap(err, "could not get active indices for last justified checkpoint")
}
wantedBlk, err := s.db.Block(ctx, bytesutil.ToBytes32(root))
wantedBlkSigned, err := s.db.Block(ctx, bytesutil.ToBytes32(root))
if err != nil {
return 0, errors.Wrap(err, "could not get target block")
}
if wantedBlkSigned == nil || wantedBlkSigned.Block == nil {
return 0, errors.New("nil wanted block")
}
wantedBlk := wantedBlkSigned.Block
balances := uint64(0)
s.voteLock.RLock()
defer s.voteLock.RUnlock()
for _, i := range activeIndices {
vote, err := s.db.ValidatorLatestVote(ctx, i)
if err != nil {
return 0, errors.Wrapf(err, "could not get validator %d's latest vote", i)
}
if vote == nil {
vote, ok := s.latestVoteMap[i]
if !ok {
continue
}
@@ -191,14 +242,16 @@ func (s *Store) latestAttestingBalance(ctx context.Context, root []byte) (uint64
// Head returns the head of the beacon chain.
//
// Spec pseudocode definition:
// def get_head(store: Store) -> Hash:
// def get_head(store: Store) -> Root:
// # Get filtered block tree that only includes viable branches
// blocks = get_filtered_block_tree(store)
// # Execute the LMD-GHOST fork choice
// head = store.justified_checkpoint.root
// justified_slot = compute_start_slot_of_epoch(store.justified_checkpoint.epoch)
// justified_slot = compute_start_slot_at_epoch(store.justified_checkpoint.epoch)
// while True:
// children = [
// root for root in store.blocks.keys()
// if store.blocks[root].parent_root == head and store.blocks[root].slot > justified_slot
// root for root in blocks.keys()
// if blocks[root].parent_root == head and blocks[root].slot > justified_slot
// ]
// if len(children) == 0:
// return head
@@ -209,13 +262,18 @@ func (s *Store) Head(ctx context.Context) ([]byte, error) {
defer span.End()
head := s.JustifiedCheckpt().Root
filteredBlocks, err := s.getFilterBlockTree(ctx)
if err != nil {
return nil, err
}
justifiedSlot := helpers.StartSlot(s.justifiedCheckpt.Epoch)
for {
startSlot := s.JustifiedCheckpt().Epoch * params.BeaconConfig().SlotsPerEpoch
filter := filters.NewFilter().SetParentRoot(head).SetStartSlot(startSlot)
children, err := s.db.BlockRoots(ctx, filter)
if err != nil {
return nil, errors.Wrap(err, "could not retrieve children info")
children := make([][32]byte, 0, len(filteredBlocks))
for root, block := range filteredBlocks {
if bytes.Equal(block.ParentRoot, head) && block.Slot > justifiedSlot {
children = append(children, root)
}
}
if len(children) == 0 {
@@ -246,6 +304,124 @@ func (s *Store) Head(ctx context.Context) ([]byte, error) {
}
}
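A standalone sketch of the LMD-GHOST loop described by the pseudocode above, using toy types rather than Prysm's store: starting at the justified root, repeatedly descend into the child with the greatest attesting weight until a leaf is reached. The weight callback and the byte-wise tie-break are assumptions for the sketch.

package main

import (
	"bytes"
	"fmt"
)

type toyBlock struct {
	Slot       uint64
	ParentRoot [32]byte
}

// getHead walks the filtered block tree from the justified root, always
// descending into the heaviest child until no children remain.
func getHead(blocks map[[32]byte]toyBlock, justifiedRoot [32]byte, justifiedSlot uint64, weight func([32]byte) uint64) [32]byte {
	head := justifiedRoot
	for {
		var children [][32]byte
		for root, blk := range blocks {
			if blk.ParentRoot == head && blk.Slot > justifiedSlot {
				children = append(children, root)
			}
		}
		if len(children) == 0 {
			return head
		}
		best := children[0]
		for _, c := range children[1:] {
			// Higher attesting weight wins; root bytes break ties deterministically.
			if weight(c) > weight(best) || (weight(c) == weight(best) && bytes.Compare(c[:], best[:]) > 0) {
				best = c
			}
		}
		head = best
	}
}

func main() {
	var gen, a, b [32]byte
	a[0], b[0] = 0xaa, 0xbb
	blocks := map[[32]byte]toyBlock{
		a: {Slot: 1, ParentRoot: gen},
		b: {Slot: 1, ParentRoot: gen},
	}
	weights := map[[32]byte]uint64{a: 10, b: 20}
	head := getHead(blocks, gen, 0, func(r [32]byte) uint64 { return weights[r] })
	fmt.Printf("head: %x\n", head[:1]) // b carries more weight, so head is 0xbb
}
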
// getFilterBlockTree retrieves a filtered block tree from the store; it only returns branches
// whose leaf state's justified and finalized info agrees with what's in the store.
// Rationale: https://notes.ethereum.org/Fj-gVkOSTpOyUx-zkWjuwg?view
//
// Spec pseudocode definition:
// def get_filtered_block_tree(store: Store) -> Dict[Root, BeaconBlock]:
// """
// Retrieve a filtered block tree from ``store``, only returning branches
// whose leaf state's justified/finalized info agrees with that in ``store``.
// """
// base = store.justified_checkpoint.root
// blocks: Dict[Root, BeaconBlock] = {}
// filter_block_tree(store, base, blocks)
// return blocks
func (s *Store) getFilterBlockTree(ctx context.Context) (map[[32]byte]*ethpb.BeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.getFilterBlockTree")
defer span.End()
baseRoot := bytesutil.ToBytes32(s.justifiedCheckpt.Root)
filteredBlocks := make(map[[32]byte]*ethpb.BeaconBlock)
if _, err := s.filterBlockTree(ctx, baseRoot, filteredBlocks); err != nil {
return nil, err
}
return filteredBlocks, nil
}
// filterBlockTree filters for branches that see latest finalized and justified info as correct on-chain
// before running Head.
//
// Spec pseudocode definition:
// def filter_block_tree(store: Store, block_root: Root, blocks: Dict[Root, BeaconBlock]) -> bool:
// block = store.blocks[block_root]
// children = [
// root for root in store.blocks.keys()
// if store.blocks[root].parent_root == block_root
// ]
// # If any children branches contain expected finalized/justified checkpoints,
// # add to filtered block-tree and signal viability to parent.
// if any(children):
// filter_block_tree_result = [filter_block_tree(store, child, blocks) for child in children]
// if any(filter_block_tree_result):
// blocks[block_root] = block
// return True
// return False
// # If leaf block, check finalized/justified checkpoints as matching latest.
// head_state = store.block_states[block_root]
// correct_justified = (
// store.justified_checkpoint.epoch == GENESIS_EPOCH
// or head_state.current_justified_checkpoint == store.justified_checkpoint
// )
// correct_finalized = (
// store.finalized_checkpoint.epoch == GENESIS_EPOCH
// or head_state.finalized_checkpoint == store.finalized_checkpoint
// )
// # If expected finalized/justified, add to viable block-tree and signal viability to parent.
// if correct_justified and correct_finalized:
// blocks[block_root] = block
// return True
// # Otherwise, branch not viable
// return False
func (s *Store) filterBlockTree(ctx context.Context, blockRoot [32]byte, filteredBlocks map[[32]byte]*ethpb.BeaconBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.filterBlockTree")
defer span.End()
signed, err := s.db.Block(ctx, blockRoot)
if err != nil {
return false, err
}
if signed == nil || signed.Block == nil {
return false, errors.New("nil block")
}
block := signed.Block
filter := filters.NewFilter().SetParentRoot(blockRoot[:])
childrenRoots, err := s.db.BlockRoots(ctx, filter)
if err != nil {
return false, err
}
if len(childrenRoots) != 0 {
var filtered bool
for _, childRoot := range childrenRoots {
didFilter, err := s.filterBlockTree(ctx, childRoot, filteredBlocks)
if err != nil {
return false, err
}
if didFilter {
filtered = true
}
}
if filtered {
filteredBlocks[blockRoot] = block
return true, nil
}
return false, nil
}
headState, err := s.db.State(ctx, blockRoot)
if err != nil {
return false, err
}
if headState == nil {
return false, fmt.Errorf("no state matching block root %v", hex.EncodeToString(blockRoot[:]))
}
correctJustified := s.justifiedCheckpt.Epoch == 0 ||
proto.Equal(s.justifiedCheckpt, headState.CurrentJustifiedCheckpoint)
correctFinalized := s.finalizedCheckpt.Epoch == 0 ||
proto.Equal(s.finalizedCheckpt, headState.FinalizedCheckpoint)
if correctJustified && correctFinalized {
filteredBlocks[blockRoot] = block
return true, nil
}
return false, nil
}
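A standalone sketch of the same recursion over toy types: an interior node is kept if any child branch is kept, and a leaf is kept only if it passes a viability predicate, which here stands in for the justified/finalized checkpoint comparison.

package main

import "fmt"

type node struct {
	Parent string
}

// filterTree mirrors filter_block_tree: recurse into children first; an interior
// node is kept iff any child branch is kept, a leaf is kept iff viable(leaf).
func filterTree(blocks map[string]node, root string, viable func(string) bool, out map[string]node) bool {
	var children []string
	for name, n := range blocks {
		if n.Parent == root {
			children = append(children, name)
		}
	}
	if len(children) > 0 {
		kept := false
		for _, c := range children {
			if filterTree(blocks, c, viable, out) {
				kept = true
			}
		}
		if kept {
			out[root] = blocks[root]
		}
		return kept
	}
	if viable(root) {
		out[root] = blocks[root]
		return true
	}
	return false
}

func main() {
	// g -> a -> b (viable leaf), g -> c (non-viable leaf)
	blocks := map[string]node{"g": {}, "a": {Parent: "g"}, "b": {Parent: "a"}, "c": {Parent: "g"}}
	out := map[string]node{}
	filterTree(blocks, "g", func(leaf string) bool { return leaf == "b" }, out)
	fmt.Println(len(out), "blocks kept") // g, a, b kept; c dropped
}
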
// JustifiedCheckpt returns the latest justified check point from fork choice store.
func (s *Store) JustifiedCheckpt() *ethpb.Checkpoint {
return proto.Clone(s.justifiedCheckpt).(*ethpb.Checkpoint)


@@ -7,15 +7,16 @@ import (
"testing"
"time"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/stateutil"
)
func TestStore_GenesisStoreOk(t *testing.T) {
@@ -27,18 +28,21 @@ func TestStore_GenesisStoreOk(t *testing.T) {
genesisTime := time.Unix(9999, 0)
genesisState := &pb.BeaconState{GenesisTime: uint64(genesisTime.Unix())}
genesisStateRoot, err := ssz.HashTreeRoot(genesisState)
genesisStateRoot, err := stateutil.HashTreeRootState(genesisState)
if err != nil {
t.Fatal(err)
}
genesisBlk := blocks.NewGenesisBlock(genesisStateRoot[:])
genesisBlkRoot, err := ssz.SigningRoot(genesisBlk)
genesisBlkRoot, err := ssz.HashTreeRoot(genesisBlk.Block)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, genesisState, genesisBlkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveGenesisBlockRoot(ctx, genesisBlkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: genesisBlkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
@@ -68,7 +72,7 @@ func TestStore_AncestorOk(t *testing.T) {
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
@@ -108,7 +112,7 @@ func TestStore_AncestorNotPartOfTheChain(t *testing.T) {
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
@@ -139,7 +143,7 @@ func TestStore_LatestAttestingBalance(t *testing.T) {
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
@@ -150,18 +154,21 @@ func TestStore_LatestAttestingBalance(t *testing.T) {
}
s := &pb.BeaconState{Validators: validators}
stateRoot, err := ssz.HashTreeRoot(s)
stateRoot, err := stateutil.HashTreeRootState(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.SigningRoot(b)
blkRoot, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
@@ -174,17 +181,11 @@ func TestStore_LatestAttestingBalance(t *testing.T) {
for i := 0; i < len(validators); i++ {
switch {
case i < 33:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[1]}); err != nil {
t.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[1]}
case i > 66:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[7]}); err != nil {
t.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[7]}
default:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[8]}); err != nil {
t.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[8]}
}
}
@@ -211,15 +212,13 @@ func TestStore_LatestAttestingBalance(t *testing.T) {
}
func TestStore_ChildrenBlocksFromParentRoot(t *testing.T) {
helpers.ClearAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
@@ -250,7 +249,7 @@ func TestStore_GetHead(t *testing.T) {
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
@@ -261,15 +260,21 @@ func TestStore_GetHead(t *testing.T) {
}
s := &pb.BeaconState{Validators: validators}
stateRoot, err := ssz.HashTreeRoot(s)
stateRoot, err := stateutil.HashTreeRootState(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.SigningRoot(b)
blkRoot, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
@@ -293,17 +298,11 @@ func TestStore_GetHead(t *testing.T) {
for i := 0; i < len(validators); i++ {
switch {
case i < 33:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[1]}); err != nil {
t.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[1]}
case i > 66:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[7]}); err != nil {
t.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[7]}
default:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[8]}); err != nil {
t.Fatal(err)
}
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[8]}
}
}
@@ -317,9 +316,8 @@ func TestStore_GetHead(t *testing.T) {
}
// 1 validator switches vote to B7 to gain 34%, enough to switch head
if err := store.db.SaveValidatorLatestVote(ctx, 50, &pb.ValidatorLatestVote{Root: roots[7]}); err != nil {
t.Fatal(err)
}
store.latestVoteMap[uint64(50)] = &pb.ValidatorLatestVote{Root: roots[7]}
head, err = store.Head(ctx)
if err != nil {
t.Fatal(err)
@@ -331,9 +329,7 @@ func TestStore_GetHead(t *testing.T) {
// 18 validators switch their vote to B1 to gain 51%, enough to switch head
for i := 0; i < 18; i++ {
idx := 50 + uint64(i)
if err := store.db.SaveValidatorLatestVote(ctx, idx, &pb.ValidatorLatestVote{Root: roots[1]}); err != nil {
t.Fatal(err)
}
store.latestVoteMap[uint64(idx)] = &pb.ValidatorLatestVote{Root: roots[1]}
}
head, err = store.Head(ctx)
if err != nil {
@@ -344,3 +340,178 @@ func TestStore_GetHead(t *testing.T) {
t.Error("Incorrect head")
}
}
func TestCacheGenesisState_Correct(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
config := &featureconfig.Flags{
InitSyncCacheState: true,
}
featureconfig.Init(config)
b := &ethpb.BeaconBlock{Slot: 1}
r, _ := ssz.HashTreeRoot(b)
s := &pb.BeaconState{GenesisTime: 99}
store.db.SaveState(ctx, s, r)
store.db.SaveGenesisBlockRoot(ctx, r)
if err := store.cacheGenesisState(ctx); err != nil {
t.Fatal(err)
}
for _, state := range store.initSyncState {
if !reflect.DeepEqual(s, state) {
t.Error("Did not get wanted state")
}
}
}
func TestStore_GetFilterBlockTree_CorrectLeaf(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
s := &pb.BeaconState{}
stateRoot, err := stateutil.HashTreeRootState(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
t.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
t.Fatal(err)
}
tree, err := store.getFilterBlockTree(ctx)
if err != nil {
t.Fatal(err)
}
wanted := make(map[[32]byte]*ethpb.BeaconBlock)
for _, root := range roots {
root32 := bytesutil.ToBytes32(root)
b, _ := store.db.Block(ctx, root32)
if b != nil {
wanted[root32] = b.Block
}
}
if !reflect.DeepEqual(tree, wanted) {
t.Error("Did not filter tree correctly")
}
}
func TestStore_GetFilterBlockTree_IncorrectLeaf(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
s := &pb.BeaconState{}
stateRoot, err := stateutil.HashTreeRootState(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
t.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
t.Fatal(err)
}
// Filter for incorrect leaves for 1, 7 and 8
store.db.SaveState(ctx, &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{}}, bytesutil.ToBytes32(roots[1]))
store.db.SaveState(ctx, &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{}}, bytesutil.ToBytes32(roots[7]))
store.db.SaveState(ctx, &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{}}, bytesutil.ToBytes32(roots[8]))
store.justifiedCheckpt.Epoch = 1
tree, err := store.getFilterBlockTree(ctx)
if err != nil {
t.Fatal(err)
}
if len(tree) != 0 {
t.Error("filtered tree should be 0 length")
}
// Set leaf 1 as correct
store.db.SaveState(ctx, &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: store.justifiedCheckpt.Root}}, bytesutil.ToBytes32(roots[1]))
tree, err = store.getFilterBlockTree(ctx)
if err != nil {
t.Fatal(err)
}
wanted := make(map[[32]byte]*ethpb.BeaconBlock)
root32 := bytesutil.ToBytes32(roots[0])
b, err = store.db.Block(ctx, root32)
if err != nil {
t.Fatal(err)
}
wanted[root32] = b.Block
root32 = bytesutil.ToBytes32(roots[1])
b, err = store.db.Block(ctx, root32)
if err != nil {
t.Fatal(err)
}
wanted[root32] = b.Block
if !reflect.DeepEqual(tree, wanted) {
t.Error("Did not filter tree correctly")
}
}


@@ -3,10 +3,10 @@ package forkchoice
import (
"context"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
@@ -15,31 +15,40 @@ import (
// B0            /- B5 - B7
//    \- B3 - B4 - B6 - B8
// (B1, and B3 are all from the same slots)
func blockTree1(db db.Database) ([][]byte, error) {
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
r0, _ := ssz.SigningRoot(b0)
func blockTree1(db db.Database, genesisRoot []byte) ([][]byte, error) {
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: genesisRoot}
r0, _ := ssz.HashTreeRoot(b0)
b1 := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r0[:]}
r1, _ := ssz.SigningRoot(b1)
r1, _ := ssz.HashTreeRoot(b1)
b3 := &ethpb.BeaconBlock{Slot: 3, ParentRoot: r0[:]}
r3, _ := ssz.SigningRoot(b3)
r3, _ := ssz.HashTreeRoot(b3)
b4 := &ethpb.BeaconBlock{Slot: 4, ParentRoot: r3[:]}
r4, _ := ssz.SigningRoot(b4)
r4, _ := ssz.HashTreeRoot(b4)
b5 := &ethpb.BeaconBlock{Slot: 5, ParentRoot: r4[:]}
r5, _ := ssz.SigningRoot(b5)
r5, _ := ssz.HashTreeRoot(b5)
b6 := &ethpb.BeaconBlock{Slot: 6, ParentRoot: r4[:]}
r6, _ := ssz.SigningRoot(b6)
r6, _ := ssz.HashTreeRoot(b6)
b7 := &ethpb.BeaconBlock{Slot: 7, ParentRoot: r5[:]}
r7, _ := ssz.SigningRoot(b7)
r7, _ := ssz.HashTreeRoot(b7)
b8 := &ethpb.BeaconBlock{Slot: 8, ParentRoot: r6[:]}
r8, _ := ssz.SigningRoot(b8)
r8, _ := ssz.HashTreeRoot(b8)
for _, b := range []*ethpb.BeaconBlock{b0, b1, b3, b4, b5, b6, b7, b8} {
if err := db.SaveBlock(context.Background(), b); err != nil {
if err := db.SaveBlock(context.Background(), &ethpb.SignedBeaconBlock{Block: b}); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, bytesutil.ToBytes32(b.ParentRoot)); err != nil {
return nil, err
}
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, r1); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, r7); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, r8); err != nil {
return nil, err
}
return [][]byte{r0[:], r1[:], nil, r3[:], r4[:], r5[:], r6[:], r7[:], r8[:]}, nil
}
@@ -72,39 +81,39 @@ func blockTree1(db db.Database) ([][]byte, error) {
//}
func blockTree2(db db.Database) ([][]byte, error) {
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
r0, _ := ssz.SigningRoot(b0)
r0, _ := ssz.HashTreeRoot(b0)
b1 := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r0[:]}
r1, _ := ssz.SigningRoot(b1)
r1, _ := ssz.HashTreeRoot(b1)
b2 := &ethpb.BeaconBlock{Slot: 2, ParentRoot: r0[:]}
r2, _ := ssz.SigningRoot(b2)
r2, _ := ssz.HashTreeRoot(b2)
b3 := &ethpb.BeaconBlock{Slot: 3, ParentRoot: r1[:]}
r3, _ := ssz.SigningRoot(b3)
r3, _ := ssz.HashTreeRoot(b3)
b4 := &ethpb.BeaconBlock{Slot: 4, ParentRoot: r1[:]}
r4, _ := ssz.SigningRoot(b4)
r4, _ := ssz.HashTreeRoot(b4)
b5 := &ethpb.BeaconBlock{Slot: 5, ParentRoot: r2[:]}
r5, _ := ssz.SigningRoot(b5)
r5, _ := ssz.HashTreeRoot(b5)
b6 := &ethpb.BeaconBlock{Slot: 6, ParentRoot: r2[:]}
r6, _ := ssz.SigningRoot(b6)
r6, _ := ssz.HashTreeRoot(b6)
b7 := &ethpb.BeaconBlock{Slot: 7, ParentRoot: r3[:]}
r7, _ := ssz.SigningRoot(b7)
r7, _ := ssz.HashTreeRoot(b7)
b8 := &ethpb.BeaconBlock{Slot: 8, ParentRoot: r3[:]}
r8, _ := ssz.SigningRoot(b8)
r8, _ := ssz.HashTreeRoot(b8)
b9 := &ethpb.BeaconBlock{Slot: 9, ParentRoot: r3[:]}
r9, _ := ssz.SigningRoot(b9)
r9, _ := ssz.HashTreeRoot(b9)
b10 := &ethpb.BeaconBlock{Slot: 10, ParentRoot: r3[:]}
r10, _ := ssz.SigningRoot(b10)
r10, _ := ssz.HashTreeRoot(b10)
b11 := &ethpb.BeaconBlock{Slot: 11, ParentRoot: r4[:]}
r11, _ := ssz.SigningRoot(b11)
r11, _ := ssz.HashTreeRoot(b11)
b12 := &ethpb.BeaconBlock{Slot: 12, ParentRoot: r6[:]}
r12, _ := ssz.SigningRoot(b12)
r12, _ := ssz.HashTreeRoot(b12)
b13 := &ethpb.BeaconBlock{Slot: 13, ParentRoot: r6[:]}
r13, _ := ssz.SigningRoot(b13)
r13, _ := ssz.HashTreeRoot(b13)
b14 := &ethpb.BeaconBlock{Slot: 14, ParentRoot: r7[:]}
r14, _ := ssz.SigningRoot(b14)
r14, _ := ssz.HashTreeRoot(b14)
b15 := &ethpb.BeaconBlock{Slot: 15, ParentRoot: r7[:]}
r15, _ := ssz.SigningRoot(b15)
r15, _ := ssz.HashTreeRoot(b15)
for _, b := range []*ethpb.BeaconBlock{b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, b10, b11, b12, b13, b14, b15} {
if err := db.SaveBlock(context.Background(), b); err != nil {
if err := db.SaveBlock(context.Background(), &ethpb.SignedBeaconBlock{Block: b}); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, bytesutil.ToBytes32(b.ParentRoot)); err != nil {
@@ -121,19 +130,19 @@ func blockTree3(db db.Database) ([][]byte, error) {
roots := make([][]byte, 0, blkCount)
blks := make([]*ethpb.BeaconBlock, 0, blkCount)
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
r0, _ := ssz.SigningRoot(b0)
r0, _ := ssz.HashTreeRoot(b0)
roots = append(roots, r0[:])
blks = append(blks, b0)
for i := 1; i < blkCount; i++ {
b := &ethpb.BeaconBlock{Slot: uint64(i), ParentRoot: roots[len(roots)-1]}
r, _ := ssz.SigningRoot(b)
r, _ := ssz.HashTreeRoot(b)
roots = append(roots, r[:])
blks = append(blks, b)
}
for _, b := range blks {
if err := db.SaveBlock(context.Background(), b); err != nil {
if err := db.SaveBlock(context.Background(), &ethpb.SignedBeaconBlock{Block: b}); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, bytesutil.ToBytes32(b.ParentRoot)); err != nil {


@@ -1,7 +1,7 @@
package blockchain
import (
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/sirupsen/logrus"
)


@@ -47,10 +47,22 @@ var (
Name: "processed_attestation_counter",
Help: "The # of processed attestation with pubsub and fork choice, this ususally means attestations from rpc",
})
headFinalizedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "head_finalized_epoch",
Help: "Last finalized epoch of the head state",
})
headFinalizedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "head_finalized_root",
Help: "Last finalized root of the head state",
})
)
func (s *Service) reportSlotMetrics(currentSlot uint64) {
beaconSlot.Set(float64(currentSlot))
beaconHeadSlot.Set(float64(s.HeadSlot()))
beaconHeadRoot.Set(float64(bytesutil.ToLowInt64(s.HeadRoot())))
if s.headState != nil {
headFinalizedEpoch.Set(float64(s.headState.FinalizedCheckpoint.Epoch))
headFinalizedRoot.Set(float64(bytesutil.ToLowInt64(s.headState.FinalizedCheckpoint.Root)))
}
}
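The two new gauges follow the standard Prometheus Go client pattern. A minimal standalone sketch of the same register-then-set flow is below; the metric name and value are examples, not Prysm's.

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// promauto.NewGauge registers the gauge with the default registry on creation,
// which is the same pattern the metrics above rely on.
var exampleFinalizedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
	Name: "example_head_finalized_epoch",
	Help: "Last finalized epoch of the head state (example metric)",
})

func main() {
	exampleFinalizedEpoch.Set(float64(42)) // gauges take float64 values
	fmt.Println("gauge registered and set")
}
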


@@ -3,57 +3,23 @@ package blockchain
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/slotutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// AttestationReceiver interface defines the methods of chain service receive and processing new attestations.
type AttestationReceiver interface {
ReceiveAttestation(ctx context.Context, att *ethpb.Attestation) error
ReceiveAttestationNoPubsub(ctx context.Context, att *ethpb.Attestation) error
}
// ReceiveAttestation is a function that defines the operations that are performed on an
// attestation that is received from regular sync. The operations consist of:
// 1. Gossip attestation to other peers
// 2. Validate attestation, update validator's latest vote
// 3. Apply fork choice to the processed attestation
// 4. Save latest head info
func (s *Service) ReceiveAttestation(ctx context.Context, att *ethpb.Attestation) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveAttestation")
defer span.End()
// Broadcast the new attestation to the network.
if err := s.p2p.Broadcast(ctx, att); err != nil {
return errors.Wrap(err, "could not broadcast attestation")
}
attDataRoot, err := ssz.HashTreeRoot(att.Data)
if err != nil {
log.WithError(err).Error("Failed to hash attestation")
}
log.WithFields(logrus.Fields{
"attRoot": fmt.Sprintf("%#x", attDataRoot),
"blockRoot": fmt.Sprintf("%#x", att.Data.BeaconBlockRoot),
}).Debug("Broadcasting attestation")
if err := s.ReceiveAttestationNoPubsub(ctx, att); err != nil {
return err
}
processedAtt.Inc()
return nil
}
// ReceiveAttestationNoPubsub is a function that defines the operations that are performed on an
// attestation that is received from regular sync. The operations consist of:
// 1. Validate attestation, update validator's latest vote
@@ -64,8 +30,7 @@ func (s *Service) ReceiveAttestationNoPubsub(ctx context.Context, att *ethpb.Att
defer span.End()
// Update forkchoice store for the new attestation
attSlot, err := s.forkChoiceStore.OnAttestation(ctx, att)
if err != nil {
if err := s.forkChoiceStore.OnAttestation(ctx, att); err != nil {
return errors.Wrap(err, "could not process attestation from fork choice service")
}
@@ -76,37 +41,49 @@ func (s *Service) ReceiveAttestationNoPubsub(ctx context.Context, att *ethpb.Att
}
// Only save head if it's different than the current head.
if !bytes.Equal(headRoot, s.HeadRoot()) {
headBlk, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(headRoot))
signed, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(headRoot))
if err != nil {
return errors.Wrap(err, "could not compute state from block head")
}
if err := s.saveHead(ctx, headBlk, bytesutil.ToBytes32(headRoot)); err != nil {
if signed == nil || signed.Block == nil {
return errors.New("nil head block")
}
if err := s.saveHead(ctx, signed, bytesutil.ToBytes32(headRoot)); err != nil {
return errors.Wrap(err, "could not save head")
}
}
// Skip checking for competing attestation's target roots at epoch boundary.
if !helpers.IsEpochStart(attSlot) {
s.headLock.RLock()
defer s.headLock.RUnlock()
targetRoot, err := helpers.BlockRoot(s.headState, att.Data.Target.Epoch)
if err != nil {
return errors.Wrapf(err, "could not get target root for epoch %d", att.Data.Target.Epoch)
}
isCompetingAtts(targetRoot, att.Data.Target.Root[:])
}
processedAttNoPubsub.Inc()
return nil
}
// This checks if the attestation is from a competing chain, emits warning and updates metrics.
func isCompetingAtts(headTargetRoot []byte, attTargetRoot []byte) {
if !bytes.Equal(attTargetRoot, headTargetRoot) {
log.WithFields(logrus.Fields{
"attTargetRoot": hex.EncodeToString(attTargetRoot),
"headTargetRoot": hex.EncodeToString(headTargetRoot),
}).Warn("target heads different from new attestation")
competingAtts.Inc()
// This processes attestations from the attestation pool to account for validator votes and fork choice.
func (s *Service) processAttestation() {
// Wait for state to be initialized.
stateChannel := make(chan *feed.Event, 1)
stateSub := s.stateNotifier.StateFeed().Subscribe(stateChannel)
<-stateChannel
stateSub.Unsubscribe()
st := slotutil.GetSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
for {
select {
case <-s.ctx.Done():
return
case <-st.C():
ctx := context.Background()
atts := s.attPool.ForkchoiceAttestations()
for _, a := range atts {
if err := s.attPool.DeleteForkchoiceAttestation(a); err != nil {
log.WithError(err).Error("Could not delete fork choice attestation in pool")
}
if err := s.ReceiveAttestationNoPubsub(ctx, a); err != nil {
log.WithFields(logrus.Fields{
"targetRoot": fmt.Sprintf("%#x", a.Data.Target.Root),
}).WithError(err).Error("Could not receive attestation in chain service")
}
}
}
}
}
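A standalone sketch of the drain-on-each-tick pattern above, using only the standard library: wait for an initialization signal, then on every tick pull all pending items out of a pool and process them. The time.Ticker and mutex-guarded slice stand in for Prysm's slot ticker and attestation pool.

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type toyPool struct {
	mu   sync.Mutex
	atts []string
}

func (p *toyPool) Add(a string) { p.mu.Lock(); p.atts = append(p.atts, a); p.mu.Unlock() }

// Drain returns everything in the pool and empties it, like fetching and
// deleting fork-choice attestations each slot.
func (p *toyPool) Drain() []string {
	p.mu.Lock()
	defer p.mu.Unlock()
	out := p.atts
	p.atts = nil
	return out
}

func processAttestations(ctx context.Context, initialized <-chan struct{}, pool *toyPool, slot time.Duration) {
	<-initialized // wait for the chain/state to be initialized first
	ticker := time.NewTicker(slot)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			for _, a := range pool.Drain() {
				fmt.Println("processing attestation", a)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Millisecond)
	defer cancel()
	initialized := make(chan struct{})
	pool := &toyPool{}
	pool.Add("att-1")
	pool.Add("att-2")
	go processAttestations(ctx, initialized, pool, 50*time.Millisecond)
	close(initialized)
	<-ctx.Done()
}
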


@@ -3,82 +3,15 @@ package blockchain
import (
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
"golang.org/x/net/context"
)
func TestReceiveAttestation_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
r, _ := ssz.SigningRoot(&ethpb.BeaconBlock{})
chainService.forkChoiceStore = &store{headRoot: r[:]}
b := &ethpb.BeaconBlock{}
if err := chainService.beaconDB.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
if err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveState(ctx, &pb.BeaconState{}, root); err != nil {
t.Fatal(err)
}
a := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: root[:]},
}}
if err := chainService.ReceiveAttestation(ctx, a); err != nil {
t.Fatal(err)
}
testutil.AssertLogsContain(t, hook, "Saved new head info")
testutil.AssertLogsContain(t, hook, "Broadcasting attestation")
}
func TestReceiveAttestation_SameHead(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
r, _ := ssz.SigningRoot(&ethpb.BeaconBlock{})
chainService.forkChoiceStore = &store{headRoot: r[:]}
chainService.canonicalRoots[0] = r[:]
b := &ethpb.BeaconBlock{}
if err := chainService.beaconDB.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
if err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveState(ctx, &pb.BeaconState{}, root); err != nil {
t.Fatal(err)
}
a := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: root[:]},
}}
if err := chainService.ReceiveAttestation(ctx, a); err != nil {
t.Fatal(err)
}
testutil.AssertLogsDoNotContain(t, hook, "Saved new head info")
testutil.AssertLogsContain(t, hook, "Broadcasting attestation")
}
func TestReceiveAttestationNoPubsub_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
@@ -86,14 +19,14 @@ func TestReceiveAttestationNoPubsub_ProcessCorrectly(t *testing.T) {
ctx := context.Background()
chainService := setupBeaconChain(t, db)
r, _ := ssz.SigningRoot(&ethpb.BeaconBlock{})
r, _ := ssz.HashTreeRoot(&ethpb.BeaconBlock{})
chainService.forkChoiceStore = &store{headRoot: r[:]}
b := &ethpb.BeaconBlock{}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := chainService.beaconDB.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
root, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}


@@ -7,9 +7,14 @@ import (
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/traceutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -17,10 +22,10 @@ import (
// BlockReceiver interface defines the methods of chain service receive and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block *ethpb.BeaconBlock) error
ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.BeaconBlock) error
ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.BeaconBlock) error
ReceiveBlockNoVerify(ctx context.Context, block *ethpb.BeaconBlock) error
ReceiveBlock(ctx context.Context, block *ethpb.SignedBeaconBlock) error
ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.SignedBeaconBlock) error
ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.SignedBeaconBlock) error
ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedBeaconBlock) error
}
// ReceiveBlock is a function that defines the operations that are performed on
@@ -29,11 +34,11 @@ type BlockReceiver interface {
// 2. Validate block, apply state transition and update check points
// 3. Apply fork choice to the processed block
// 4. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block *ethpb.BeaconBlock) error {
func (s *Service) ReceiveBlock(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlock")
defer span.End()
root, err := ssz.SigningRoot(block)
root, err := ssz.HashTreeRoot(block.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
@@ -59,10 +64,10 @@ func (s *Service) ReceiveBlock(ctx context.Context, block *ethpb.BeaconBlock) er
// 1. Validate block, apply state transition and update check points
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.BeaconBlock) error {
func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlockNoPubsub")
defer span.End()
blockCopy := proto.Clone(block).(*ethpb.BeaconBlock)
blockCopy := proto.Clone(block).(*ethpb.SignedBeaconBlock)
// Apply state transition on the new block.
if err := s.forkChoiceStore.OnBlock(ctx, blockCopy); err != nil {
@@ -70,7 +75,7 @@ func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.BeaconB
traceutil.AnnotateError(span, err)
return err
}
root, err := ssz.SigningRoot(blockCopy)
root, err := ssz.HashTreeRoot(blockCopy.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
@@ -80,36 +85,51 @@ func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.BeaconB
if err != nil {
return errors.Wrap(err, "could not get head from fork choice service")
}
headBlk, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(headRoot))
signedHeadBlock, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(headRoot))
if err != nil {
return errors.Wrap(err, "could not compute state from block head")
}
if signedHeadBlock == nil || signedHeadBlock.Block == nil {
return errors.New("nil head block")
}
// Only save head if it's different than the current head.
if !bytes.Equal(headRoot, s.HeadRoot()) {
if err := s.saveHead(ctx, headBlk, bytesutil.ToBytes32(headRoot)); err != nil {
if err := s.saveHead(ctx, signedHeadBlock, bytesutil.ToBytes32(headRoot)); err != nil {
return errors.Wrap(err, "could not save head")
}
}
// Remove block's contained deposits, attestations, and other operations from persistent storage.
if err := s.cleanupBlockOperations(ctx, blockCopy); err != nil {
return errors.Wrap(err, "could not clean up block deposits, attestations, and other operations")
// Send notification of the processed block to the state feed.
s.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: root,
Verified: true,
},
})
// Add attestations from the block to the pool for fork choice.
if err := s.attPool.SaveBlockAttestations(blockCopy.Block.Body.Attestations); err != nil {
log.Errorf("Could not save attestation for fork choice: %v", err)
return nil
}
// Reports on block and fork choice metrics.
s.reportSlotMetrics(blockCopy.Slot)
s.reportSlotMetrics(blockCopy.Block.Slot)
// Log if block is a competing block.
isCompetingBlock(root[:], blockCopy.Slot, headRoot, headBlk.Slot)
isCompetingBlock(root[:], blockCopy.Block.Slot, headRoot, signedHeadBlock.Block.Slot)
// Log state transition data.
logStateTransitionData(blockCopy, root[:])
logStateTransitionData(blockCopy.Block, root[:])
s.epochParticipationLock.Lock()
defer s.epochParticipationLock.Unlock()
s.epochParticipation[helpers.SlotToEpoch(blockCopy.Block.Slot)] = precompute.Balances
processedBlkNoPubsub.Inc()
// We write the latest saved head root to a feed for consumption by other services.
s.headUpdatedFeed.Send(bytesutil.ToBytes32(headRoot))
return nil
}
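A standalone sketch of the save-head-only-if-changed step, with toy types: once fork choice returns a head root, persist and announce it only when it differs from the current head. The buffered channel stands in for the head-updated feed.

package main

import (
	"bytes"
	"fmt"
)

type toyChain struct {
	headRoot []byte
	updates  chan []byte // stand-in for the head-updated feed
}

// maybeUpdateHead persists and broadcasts the head only when it actually moved,
// mirroring the !bytes.Equal(headRoot, s.HeadRoot()) guard above.
func (c *toyChain) maybeUpdateHead(newRoot []byte) {
	if bytes.Equal(newRoot, c.headRoot) {
		return // same head: nothing to save or announce
	}
	c.headRoot = newRoot
	select {
	case c.updates <- newRoot:
	default: // don't block if nobody is listening in this sketch
	}
}

func main() {
	c := &toyChain{headRoot: []byte{0x01}, updates: make(chan []byte, 1)}
	c.maybeUpdateHead([]byte{0x01}) // no-op, head unchanged
	c.maybeUpdateHead([]byte{0x02}) // head moved, update sent
	fmt.Printf("head is now %x, %d update(s) queued\n", c.headRoot, len(c.updates))
}
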
@@ -117,10 +137,10 @@ func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.BeaconB
// that are performed on blocks received from the initial sync service. The operations consist of:
// 1. Validate block, apply state transition and update check points
// 2. Save latest head info
func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.BeaconBlock) error {
func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlockNoForkchoice")
defer span.End()
blockCopy := proto.Clone(block).(*ethpb.BeaconBlock)
blockCopy := proto.Clone(block).(*ethpb.SignedBeaconBlock)
// Apply state transition on the incoming newly received block.
if err := s.forkChoiceStore.OnBlock(ctx, blockCopy); err != nil {
@@ -128,7 +148,7 @@ func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *eth
traceutil.AnnotateError(span, err)
return err
}
root, err := ssz.SigningRoot(blockCopy)
root, err := ssz.HashTreeRoot(blockCopy.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
@@ -139,19 +159,25 @@ func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *eth
}
}
// Remove block's contained deposits, attestations, and other operations from persistent storage.
if err := s.cleanupBlockOperations(ctx, blockCopy); err != nil {
return errors.Wrap(err, "could not clean up block deposits, attestations, and other operations")
}
// Send notification of the processed block to the state feed.
s.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: root,
Verified: true,
},
})
// Reports on block and fork choice metrics.
s.reportSlotMetrics(blockCopy.Slot)
s.reportSlotMetrics(blockCopy.Block.Slot)
// Log state transition data.
logStateTransitionData(blockCopy, root[:])
logStateTransitionData(blockCopy.Block, root[:])
s.epochParticipationLock.Lock()
defer s.epochParticipationLock.Unlock()
s.epochParticipation[helpers.SlotToEpoch(blockCopy.Block.Slot)] = precompute.Balances
// We write the latest saved head root to a feed for consumption by other services.
s.headUpdatedFeed.Send(root)
processedBlkNoPubsubForkchoice.Inc()
return nil
}
@@ -159,57 +185,61 @@ func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *eth
// ReceiveBlockNoVerify runs the state transition on an input block without verifying the block's BLS contents.
// Depending on the security model, this is the "minimal" work a node can do to sync the chain.
// It simulates light client behavior and assumes 100% trust in the syncing peer.
func (s *Service) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.BeaconBlock) error {
func (s *Service) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlockNoVerify")
defer span.End()
blockCopy := proto.Clone(block).(*ethpb.BeaconBlock)
blockCopy := proto.Clone(block).(*ethpb.SignedBeaconBlock)
// Apply state transition on the incoming newly received blockCopy without verifying its BLS contents.
if err := s.forkChoiceStore.OnBlockNoVerifyStateTransition(ctx, blockCopy); err != nil {
if err := s.forkChoiceStore.OnBlockInitialSyncStateTransition(ctx, blockCopy); err != nil {
return errors.Wrap(err, "could not process blockCopy from fork choice service")
}
root, err := ssz.SigningRoot(blockCopy)
root, err := ssz.HashTreeRoot(blockCopy.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received blockCopy")
}
if !bytes.Equal(root[:], s.HeadRoot()) {
if err := s.saveHead(ctx, blockCopy, root); err != nil {
err := errors.Wrap(err, "could not save head")
traceutil.AnnotateError(span, err)
return err
if featureconfig.Get().InitSyncCacheState {
if !bytes.Equal(root[:], s.HeadRoot()) {
if err := s.saveHeadNoDB(ctx, blockCopy, root); err != nil {
err := errors.Wrap(err, "could not save head")
traceutil.AnnotateError(span, err)
return err
}
}
} else {
if !bytes.Equal(root[:], s.HeadRoot()) {
if err := s.saveHead(ctx, blockCopy, root); err != nil {
err := errors.Wrap(err, "could not save head")
traceutil.AnnotateError(span, err)
return err
}
}
}
// Send notification of the processed block to the state feed.
s.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
BlockRoot: root,
Verified: false,
},
})
// Reports on blockCopy and fork choice metrics.
s.reportSlotMetrics(blockCopy.Slot)
s.reportSlotMetrics(blockCopy.Block.Slot)
// Log state transition data.
log.WithFields(logrus.Fields{
"slot": blockCopy.Slot,
"attestations": len(blockCopy.Body.Attestations),
"deposits": len(blockCopy.Body.Deposits),
"slot": blockCopy.Block.Slot,
"attestations": len(blockCopy.Block.Body.Attestations),
"deposits": len(blockCopy.Block.Body.Deposits),
}).Debug("Finished applying state transition")
// We write the latest saved head root to a feed for consumption by other services.
s.headUpdatedFeed.Send(root)
return nil
}
s.epochParticipationLock.Lock()
defer s.epochParticipationLock.Unlock()
s.epochParticipation[helpers.SlotToEpoch(blockCopy.Block.Slot)] = precompute.Balances
// cleanupBlockOperations processes and cleans up any block operations relevant to the beacon node
// such as attestations, exits, and deposits. We update the latest seen attestation by validator
// in the local node's runtime, cleanup and remove pending deposits which have been included in the block
// from our node's local cache, and process validator exits and more.
func (s *Service) cleanupBlockOperations(ctx context.Context, block *ethpb.BeaconBlock) error {
// Forward processed block to operation pool to remove individual operation from DB.
if s.opsPoolService.IncomingProcessedBlockFeed().Send(block) == 0 {
log.Error("Sent processed block to no subscribers")
}
// Remove pending deposits from the deposit queue.
for _, dep := range block.Body.Deposits {
s.depositCache.RemovePendingDeposit(ctx, dep)
}
return nil
}


@@ -6,12 +6,14 @@ import (
"reflect"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/stateutil"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -23,89 +25,38 @@ func TestReceiveBlock_ProcessCorrectly(t *testing.T) {
ctx := context.Background()
chainService := setupBeaconChain(t, db)
deposits, _, privKeys := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, 0, &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
if err != nil {
t.Fatal(err)
}
beaconState.Eth1Data.BlockHash = nil
beaconState.Eth1DepositIndex = 100
stateRoot, err := ssz.HashTreeRoot(beaconState)
if err != nil {
t.Fatal(err)
}
genesis := b.NewGenesisBlock(stateRoot[:])
bodyRoot, err := ssz.HashTreeRoot(genesis.Body)
if err != nil {
t.Fatal(err)
}
genesisBlkRoot, err := ssz.SigningRoot(genesis)
beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
genesis, _ := testutil.GenerateFullBlock(beaconState, privKeys, nil, beaconState.Slot+1)
beaconState, err := state.ExecuteStateTransition(ctx, beaconState, genesis)
if err != nil {
t.Fatal(err)
}
genesisBlkRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, beaconState, genesisBlkRoot); err != nil {
t.Fatal(err)
}
cp := &ethpb.Checkpoint{Root: genesisBlkRoot[:]}
if err := chainService.forkChoiceStore.GenesisStore(ctx, cp, cp); err != nil {
t.Fatal(err)
}
beaconState.LatestBlockHeader = &ethpb.BeaconBlockHeader{
Slot: genesis.Slot,
ParentRoot: genesis.ParentRoot,
BodyRoot: bodyRoot[:],
StateRoot: genesis.StateRoot,
}
if err := chainService.beaconDB.SaveBlock(ctx, genesis); err != nil {
t.Fatalf("Could not save block to db: %v", err)
}
parentRoot, err := ssz.SigningRoot(genesis)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, beaconState, parentRoot); err != nil {
if err := db.SaveState(ctx, beaconState, genesisBlkRoot); err != nil {
t.Fatal(err)
}
slot := beaconState.Slot + 1
epoch := helpers.SlotToEpoch(slot)
beaconState.Slot++
randaoReveal, err := testutil.CreateRandaoReveal(beaconState, epoch, privKeys)
block, err := testutil.GenerateFullBlock(beaconState, privKeys, nil, slot)
if err != nil {
t.Fatal(err)
}
beaconState.Slot--
block := &ethpb.BeaconBlock{
Slot: slot,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBody{
Eth1Data: &ethpb.Eth1Data{
DepositCount: uint64(len(deposits)),
DepositRoot: []byte("a"),
BlockHash: []byte("b"),
},
RandaoReveal: randaoReveal[:],
Attestations: nil,
},
}
stateRootCandidate, err := state.ExecuteStateTransitionNoVerify(context.Background(), beaconState, block)
if err != nil {
t.Fatal(err)
}
stateRoot, err = ssz.HashTreeRoot(stateRootCandidate)
if err != nil {
t.Fatal(err)
}
block.StateRoot = stateRoot[:]
block, err = testutil.SignBlock(beaconState, block, privKeys)
if err != nil {
t.Error(err)
}
if err := chainService.beaconDB.SaveBlock(ctx, block); err != nil {
t.Fatal(err)
}
@@ -123,19 +74,26 @@ func TestReceiveReceiveBlockNoPubsub_CanSaveHeadInfo(t *testing.T) {
chainService := setupBeaconChain(t, db)
headBlk := &ethpb.BeaconBlock{Slot: 100}
headBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 100}}
if err := db.SaveBlock(ctx, headBlk); err != nil {
t.Fatal(err)
}
r, err := ssz.SigningRoot(headBlk)
r, err := ssz.HashTreeRoot(headBlk.Block)
if err != nil {
t.Fatal(err)
}
head := &pb.BeaconState{Slot: 100, FinalizedCheckpoint: &ethpb.Checkpoint{Root: r[:]}}
if err := db.SaveState(ctx, head, r); err != nil {
t.Fatal(err)
}
chainService.forkChoiceStore = &store{headRoot: r[:]}
if err := chainService.ReceiveBlockNoPubsub(ctx, &ethpb.BeaconBlock{
Slot: 1,
Body: &ethpb.BeaconBlockBody{}}); err != nil {
if err := chainService.ReceiveBlockNoPubsub(ctx, &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: 1,
Body: &ethpb.BeaconBlockBody{},
},
}); err != nil {
t.Fatal(err)
}
@@ -158,14 +116,17 @@ func TestReceiveReceiveBlockNoPubsub_SameHead(t *testing.T) {
chainService := setupBeaconChain(t, db)
headBlk := &ethpb.BeaconBlock{}
headBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := db.SaveBlock(ctx, headBlk); err != nil {
t.Fatal(err)
}
newBlk := &ethpb.BeaconBlock{
Slot: 1,
Body: &ethpb.BeaconBlockBody{}}
newRoot, _ := ssz.SigningRoot(newBlk)
newBlk := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: 1,
Body: &ethpb.BeaconBlockBody{},
},
}
newRoot, _ := ssz.HashTreeRoot(newBlk.Block)
if err := db.SaveBlock(ctx, newBlk); err != nil {
t.Fatal(err)
}
@@ -187,81 +148,40 @@ func TestReceiveBlockNoPubsubForkchoice_ProcessCorrectly(t *testing.T) {
ctx := context.Background()
chainService := setupBeaconChain(t, db)
deposits, _, privKeys := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, 0, &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
block, err := testutil.GenerateFullBlock(beaconState, privKeys, nil, beaconState.Slot)
if err != nil {
t.Fatal(err)
}
beaconState.Eth1DepositIndex = 100
stateRoot, err := ssz.HashTreeRoot(beaconState)
stateRoot, err := stateutil.HashTreeRootState(beaconState)
if err != nil {
t.Fatal(err)
}
genesis := b.NewGenesisBlock(stateRoot[:])
bodyRoot, err := ssz.HashTreeRoot(genesis.Body)
parentRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Fatal(err)
}
if err := chainService.forkChoiceStore.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
t.Fatal(err)
}
beaconState.LatestBlockHeader = &ethpb.BeaconBlockHeader{
Slot: genesis.Slot,
ParentRoot: genesis.ParentRoot,
BodyRoot: bodyRoot[:],
StateRoot: genesis.StateRoot,
}
if err := chainService.beaconDB.SaveBlock(ctx, genesis); err != nil {
t.Fatalf("Could not save block to db: %v", err)
}
parentRoot, err := ssz.SigningRoot(genesis)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, beaconState, parentRoot); err != nil {
t.Fatal(err)
}
slot := beaconState.Slot + 1
epoch := helpers.SlotToEpoch(slot)
beaconState.Slot++
randaoReveal, err := testutil.CreateRandaoReveal(beaconState, epoch, privKeys)
if err != nil {
t.Fatal(err)
}
beaconState.Slot--
block := &ethpb.BeaconBlock{
Slot: slot,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBody{
Eth1Data: &ethpb.Eth1Data{
DepositCount: uint64(len(deposits)),
DepositRoot: []byte("a"),
BlockHash: []byte("b"),
},
RandaoReveal: randaoReveal[:],
Attestations: nil,
},
}
stateRootCandidate, err := state.ExecuteStateTransitionNoVerify(context.Background(), beaconState, block)
if err != nil {
if err := chainService.forkChoiceStore.GenesisStore(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}, &ethpb.Checkpoint{Root: parentRoot[:]}); err != nil {
t.Fatal(err)
}
stateRoot, err = ssz.HashTreeRoot(stateRootCandidate)
if err := chainService.beaconDB.SaveBlock(ctx, block); err != nil {
t.Fatalf("Could not save block to db: %v", err)
}
block, err = testutil.GenerateFullBlock(beaconState, privKeys, nil, beaconState.Slot)
if err != nil {
t.Fatal(err)
}
block.StateRoot = stateRoot[:]
block, err = testutil.SignBlock(beaconState, block, privKeys)
if err != nil {
t.Error(err)
if err := db.SaveState(ctx, beaconState, bytesutil.ToBytes32(block.Block.ParentRoot)); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveBlock(ctx, block); err != nil {


@@ -11,58 +11,49 @@ import (
"time"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/forkchoice"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/operations"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/event"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// ChainFeeds interface defines the methods of the Service which provide state related
// information feeds to consumers.
type ChainFeeds interface {
StateInitializedFeed() *event.Feed
}
// NewHeadNotifier defines a struct which can notify many consumers of a new,
// canonical chain head event occurring in the node.
type NewHeadNotifier interface {
HeadUpdatedFeed() *event.Feed
}
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
ctx context.Context
cancel context.CancelFunc
beaconDB db.Database
depositCache *depositcache.DepositCache
chainStartFetcher powchain.ChainStartFetcher
opsPoolService operations.OperationFeeds
forkChoiceStore forkchoice.ForkChoicer
chainStartChan chan time.Time
genesisTime time.Time
stateInitializedFeed *event.Feed
headUpdatedFeed *event.Feed
p2p p2p.Broadcaster
maxRoutines int64
headSlot uint64
headBlock *ethpb.BeaconBlock
headState *pb.BeaconState
canonicalRoots map[uint64][]byte
headLock sync.RWMutex
ctx context.Context
cancel context.CancelFunc
beaconDB db.Database
depositCache *depositcache.DepositCache
chainStartFetcher powchain.ChainStartFetcher
attPool attestations.Pool
forkChoiceStore forkchoice.ForkChoicer
genesisTime time.Time
p2p p2p.Broadcaster
maxRoutines int64
headSlot uint64
headBlock *ethpb.SignedBeaconBlock
headState *pb.BeaconState
canonicalRoots map[uint64][]byte
headLock sync.RWMutex
stateNotifier statefeed.Notifier
genesisRoot [32]byte
epochParticipation map[uint64]*precompute.Balance
epochParticipationLock sync.RWMutex
}
// Config options for the service.
@@ -71,9 +62,10 @@ type Config struct {
ChainStartFetcher powchain.ChainStartFetcher
BeaconDB db.Database
DepositCache *depositcache.DepositCache
OpsPoolService operations.OperationFeeds
AttPool attestations.Pool
P2p p2p.Broadcaster
MaxRoutines int64
StateNotifier statefeed.Notifier
}
// NewService instantiates a new block service instance that will
@@ -82,19 +74,18 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
store := forkchoice.NewForkChoiceService(ctx, cfg.BeaconDB)
return &Service{
ctx: ctx,
cancel: cancel,
beaconDB: cfg.BeaconDB,
depositCache: cfg.DepositCache,
chainStartFetcher: cfg.ChainStartFetcher,
opsPoolService: cfg.OpsPoolService,
forkChoiceStore: store,
chainStartChan: make(chan time.Time),
stateInitializedFeed: new(event.Feed),
headUpdatedFeed: new(event.Feed),
p2p: cfg.P2p,
canonicalRoots: make(map[uint64][]byte),
maxRoutines: cfg.MaxRoutines,
ctx: ctx,
cancel: cancel,
beaconDB: cfg.BeaconDB,
depositCache: cfg.DepositCache,
chainStartFetcher: cfg.ChainStartFetcher,
attPool: cfg.AttPool,
forkChoiceStore: store,
p2p: cfg.P2p,
canonicalRoots: make(map[uint64][]byte),
maxRoutines: cfg.MaxRoutines,
stateNotifier: cfg.StateNotifier,
epochParticipation: make(map[uint64]*precompute.Balance),
}, nil
}
@@ -105,6 +96,23 @@ func (s *Service) Start() {
if err != nil {
log.Fatalf("Could not fetch beacon state: %v", err)
}
// For running initial sync with the state cache, in the event of a restart, we use the
// last finalized checkpoint as the starting point to sync instead of the head
// state. This is because we no longer save the state every slot during sync.
if featureconfig.Get().InitSyncCacheState {
cp, err := s.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
log.Fatalf("Could not fetch finalized cp: %v", err)
}
if beaconState == nil {
beaconState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(cp.Root))
if err != nil {
log.Fatalf("Could not fetch beacon state: %v", err)
}
}
}
// If the chain has already been initialized, simply start the block processing routine.
if beaconState != nil {
log.Info("Blockchain data already exists in DB, initializing...")
@@ -123,31 +131,58 @@ func (s *Service) Start() {
if err := s.forkChoiceStore.GenesisStore(ctx, justifiedCheckpoint, finalizedCheckpoint); err != nil {
log.Fatalf("Could not start fork choice service: %v", err)
}
s.stateInitializedFeed.Send(s.genesisTime)
s.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.Initialized,
Data: &statefeed.InitializedData{
StartTime: s.genesisTime,
},
})
} else {
log.Info("Waiting to reach the validator deposit threshold to start the beacon chain...")
if s.chainStartFetcher == nil {
log.Fatal("Not configured web3Service for POW chain")
return // return need for TestStartUninitializedChainWithoutConfigPOWChain.
}
subChainStart := s.chainStartFetcher.ChainStartFeed().Subscribe(s.chainStartChan)
go func() {
genesisTime := <-s.chainStartChan
s.processChainStartTime(ctx, genesisTime, subChainStart)
return
stateChannel := make(chan *feed.Event, 1)
stateSub := s.stateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
for {
select {
case event := <-stateChannel:
if event.Type == statefeed.ChainStarted {
data := event.Data.(*statefeed.ChainStartedData)
log.WithField("starttime", data.StartTime).Debug("Received chain start event")
s.processChainStartTime(ctx, data.StartTime)
return
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
return
case err := <-stateSub.Err():
log.WithError(err).Error("Subscription to state notifier failed")
return
}
}
}()
}
go s.processAttestation()
}
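With this change, chain start and state initialization are delivered over the shared state feed rather than dedicated channels, so any consumer can wait on them the same way Start does above. Below is a minimal sketch of such a consumer, assuming the feed, statefeed, context, and time packages already imported in this file; waitForChainStart is an illustrative name, not part of this change.
func waitForChainStart(ctx context.Context, notifier statefeed.Notifier) (time.Time, error) {
	// Buffer of 1 mirrors the subscription pattern used in Start above.
	stateChannel := make(chan *feed.Event, 1)
	stateSub := notifier.StateFeed().Subscribe(stateChannel)
	defer stateSub.Unsubscribe()
	for {
		select {
		case ev := <-stateChannel:
			// Only the ChainStarted event carries the genesis start time.
			if ev.Type == statefeed.ChainStarted {
				data := ev.Data.(*statefeed.ChainStartedData)
				return data.StartTime, nil
			}
		case <-ctx.Done():
			return time.Time{}, ctx.Err()
		case err := <-stateSub.Err():
			return time.Time{}, err
		}
	}
}
Note that event.Feed.Send returns the number of subscribers a value was delivered to, which is why the test later in this diff resends the ChainStarted event until more than its own local subscriber has received it.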
// processChainStartTime initializes a series of deposits from the ChainStart deposits in the eth1
// deposit contract, initializes the beacon chain's state, and kicks off the beacon chain.
func (s *Service) processChainStartTime(ctx context.Context, genesisTime time.Time, chainStartSub event.Subscription) {
initialDeposits := s.chainStartFetcher.ChainStartDeposits()
if err := s.initializeBeaconChain(ctx, genesisTime, initialDeposits, s.chainStartFetcher.ChainStartEth1Data()); err != nil {
func (s *Service) processChainStartTime(ctx context.Context, genesisTime time.Time) {
preGenesisState := s.chainStartFetcher.PreGenesisState()
if err := s.initializeBeaconChain(ctx, genesisTime, preGenesisState, s.chainStartFetcher.ChainStartEth1Data()); err != nil {
log.Fatalf("Could not initialize beacon chain: %v", err)
}
s.stateInitializedFeed.Send(genesisTime)
chainStartSub.Unsubscribe()
s.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.Initialized,
Data: &statefeed.InitializedData{
StartTime: genesisTime,
},
})
}
// initializes the state and genesis block of the beacon chain to persistent storage
@@ -156,7 +191,7 @@ func (s *Service) processChainStartTime(ctx context.Context, genesisTime time.Ti
func (s *Service) initializeBeaconChain(
ctx context.Context,
genesisTime time.Time,
deposits []*ethpb.Deposit,
preGenesisState *pb.BeaconState,
eth1data *ethpb.Eth1Data) error {
_, span := trace.StartSpan(context.Background(), "beacon-chain.Service.initializeBeaconChain")
defer span.End()
@@ -164,7 +199,7 @@ func (s *Service) initializeBeaconChain(
s.genesisTime = genesisTime
unixTime := uint64(genesisTime.Unix())
genesisState, err := state.GenesisBeaconState(deposits, unixTime, eth1data)
genesisState, err := state.OptimizedGenesisBeaconState(unixTime, preGenesisState, eth1data)
if err != nil {
return errors.Wrap(err, "could not initialize genesis state")
}
@@ -175,7 +210,7 @@ func (s *Service) initializeBeaconChain(
// Update committee shuffled indices for genesis epoch.
if featureconfig.Get().EnableNewCache {
if err := helpers.UpdateCommitteeCache(genesisState); err != nil {
if err := helpers.UpdateCommitteeCache(genesisState, 0 /* genesis epoch */); err != nil {
return err
}
}
@@ -198,30 +233,48 @@ func (s *Service) Status() error {
return nil
}
// StateInitializedFeed returns a feed that is written to
// when the beacon state is first initialized.
func (s *Service) StateInitializedFeed() *event.Feed {
return s.stateInitializedFeed
}
// HeadUpdatedFeed is a feed containing the head block root and
// is written to when a new head block is saved to DB.
func (s *Service) HeadUpdatedFeed() *event.Feed {
return s.headUpdatedFeed
}
// This gets called to update canonical root mapping.
func (s *Service) saveHead(ctx context.Context, b *ethpb.BeaconBlock, r [32]byte) error {
func (s *Service) saveHead(ctx context.Context, signed *ethpb.SignedBeaconBlock, r [32]byte) error {
s.headLock.Lock()
defer s.headLock.Unlock()
s.headSlot = b.Slot
if signed == nil || signed.Block == nil {
return errors.New("cannot save nil head block")
}
s.canonicalRoots[b.Slot] = r[:]
s.headSlot = signed.Block.Slot
s.canonicalRoots[signed.Block.Slot] = r[:]
if err := s.beaconDB.SaveHeadBlockRoot(ctx, r); err != nil {
return errors.Wrap(err, "could not save head root in DB")
}
s.headBlock = signed
headState, err := s.beaconDB.State(ctx, r)
if err != nil {
return errors.Wrap(err, "could not retrieve head state in DB")
}
s.headState = headState
log.WithFields(logrus.Fields{
"slot": signed.Block.Slot,
"headRoot": fmt.Sprintf("%#x", r),
}).Debug("Saved new head info")
return nil
}
// This gets called to update the canonical root mapping. It does not save the head block
// root in the DB. With the inception of the initial-sync-cache-state flag, it uses the finalized
// checkpoint as an anchor to resume sync, so the head no longer needs to be saved on a per-slot basis.
func (s *Service) saveHeadNoDB(ctx context.Context, b *ethpb.SignedBeaconBlock, r [32]byte) error {
s.headLock.Lock()
defer s.headLock.Unlock()
s.headSlot = b.Block.Slot
s.canonicalRoots[b.Block.Slot] = r[:]
s.headBlock = b
headState, err := s.beaconDB.State(ctx, r)
@@ -231,7 +284,7 @@ func (s *Service) saveHead(ctx context.Context, b *ethpb.BeaconBlock, r [32]byte
s.headState = headState
log.WithFields(logrus.Fields{
"slot": b.Slot,
"slot": b.Block.Slot,
"headRoot": fmt.Sprintf("%#x", r),
}).Debug("Saved new head info")
return nil
@@ -257,7 +310,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState *pb.BeaconSt
return errors.Wrap(err, "could not tree hash genesis state")
}
genesisBlk := blocks.NewGenesisBlock(stateRoot[:])
genesisBlkRoot, err := ssz.SigningRoot(genesisBlk)
genesisBlkRoot, err := ssz.HashTreeRoot(genesisBlk.Block)
if err != nil {
return errors.Wrap(err, "could not get genesis block root")
}
@@ -265,15 +318,15 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState *pb.BeaconSt
if err := s.beaconDB.SaveBlock(ctx, genesisBlk); err != nil {
return errors.Wrap(err, "could not save genesis block")
}
if err := s.beaconDB.SaveState(ctx, genesisState, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could not save genesis state")
}
if err := s.beaconDB.SaveHeadBlockRoot(ctx, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could not save head block root")
}
if err := s.beaconDB.SaveGenesisBlockRoot(ctx, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could save genesis block root")
}
if err := s.beaconDB.SaveState(ctx, genesisState, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could not save genesis state")
}
if err := s.saveGenesisValidators(ctx, genesisState); err != nil {
return errors.Wrap(err, "could not save genesis validators")
}
@@ -283,6 +336,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState *pb.BeaconSt
return errors.Wrap(err, "Could not start fork choice service: %v")
}
s.genesisRoot = genesisBlkRoot
s.headBlock = genesisBlk
s.headState = genesisState
s.canonicalRoots[genesisState.Slot] = genesisBlkRoot[:]
@@ -295,6 +349,19 @@ func (s *Service) initializeChainInfo(ctx context.Context) error {
s.headLock.Lock()
defer s.headLock.Unlock()
genesisBlock, err := s.beaconDB.GenesisBlock(ctx)
if err != nil {
return errors.Wrap(err, "could not get genesis block from db")
}
if genesisBlock == nil {
return errors.New("no genesis block in db")
}
genesisBlkRoot, err := ssz.HashTreeRoot(genesisBlock.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root of genesis block")
}
s.genesisRoot = genesisBlkRoot
finalized, err := s.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint from db")
@@ -313,7 +380,9 @@ func (s *Service) initializeChainInfo(ctx context.Context) error {
return errors.Wrap(err, "could not get finalized block from db")
}
s.headSlot = s.headState.Slot
if s.headBlock != nil && s.headBlock.Block != nil {
s.headSlot = s.headBlock.Block.Slot
}
s.canonicalRoots[s.headSlot] = finalized.Root
return nil

View File

@@ -5,8 +5,8 @@ import (
"io/ioutil"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/sirupsen/logrus"
)
@@ -25,13 +25,13 @@ func TestChainService_SaveHead_DataRace(t *testing.T) {
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 888},
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 888}},
[32]byte{},
)
}

View File

@@ -4,27 +4,27 @@ import (
"bytes"
"context"
"encoding/hex"
"errors"
"io/ioutil"
"math/big"
"reflect"
"testing"
"time"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
gethTypes "github.com/ethereum/go-ethereum/core/types"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
ssz "github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/event"
"github.com/prysmaticlabs/prysm/shared/params"
@@ -33,10 +33,6 @@ import (
logTest "github.com/sirupsen/logrus/hooks/test"
)
// Ensure Service implements interfaces.
var _ = ChainFeeds(&Service{})
var _ = NewHeadNotifier(&Service{})
func init() {
logrus.SetLevel(logrus.DebugLevel)
logrus.SetOutput(ioutil.Discard)
@@ -46,16 +42,16 @@ type store struct {
headRoot []byte
}
func (s *store) OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error {
func (s *store) OnBlock(ctx context.Context, b *ethpb.SignedBeaconBlock) error {
return nil
}
func (s *store) OnBlockNoVerifyStateTransition(ctx context.Context, b *ethpb.BeaconBlock) error {
func (s *store) OnBlockInitialSyncStateTransition(ctx context.Context, b *ethpb.SignedBeaconBlock) error {
return nil
}
func (s *store) OnAttestation(ctx context.Context, a *ethpb.Attestation) (uint64, error) {
return 0, nil
func (s *store) OnAttestation(ctx context.Context, a *ethpb.Attestation) error {
return nil
}
func (s *store) GenesisStore(ctx context.Context, justifiedCheckpoint *ethpb.Checkpoint, finalizedCheckpoint *ethpb.Checkpoint) error {
@@ -70,105 +66,16 @@ func (s *store) Head(ctx context.Context) ([]byte, error) {
return s.headRoot, nil
}
type mockOperationService struct{}
func (ms *mockOperationService) IncomingProcessedBlockFeed() *event.Feed {
return new(event.Feed)
type mockBeaconNode struct {
stateFeed *event.Feed
}
func (ms *mockOperationService) IncomingAttFeed() *event.Feed {
return nil
}
func (ms *mockOperationService) IncomingExitFeed() *event.Feed {
return nil
}
type mockClient struct{}
func (m *mockClient) SubscribeNewHead(ctx context.Context, ch chan<- *gethTypes.Header) (ethereum.Subscription, error) {
return new(event.Feed).Subscribe(ch), nil
}
func (m *mockClient) BlockByHash(ctx context.Context, hash common.Hash) (*gethTypes.Block, error) {
head := &gethTypes.Header{Number: big.NewInt(0), Difficulty: big.NewInt(100)}
return gethTypes.NewBlockWithHeader(head), nil
}
func (m *mockClient) BlockByNumber(ctx context.Context, number *big.Int) (*gethTypes.Block, error) {
head := &gethTypes.Header{Number: big.NewInt(0), Difficulty: big.NewInt(100)}
return gethTypes.NewBlockWithHeader(head), nil
}
func (m *mockClient) HeaderByNumber(ctx context.Context, number *big.Int) (*gethTypes.Header, error) {
return &gethTypes.Header{Number: big.NewInt(0), Difficulty: big.NewInt(100)}, nil
}
func (m *mockClient) SubscribeFilterLogs(ctx context.Context, q ethereum.FilterQuery, ch chan<- gethTypes.Log) (ethereum.Subscription, error) {
return new(event.Feed).Subscribe(ch), nil
}
func (m *mockClient) CallContract(ctx context.Context, call ethereum.CallMsg, blockNumber *big.Int) ([]byte, error) {
return []byte{'t', 'e', 's', 't'}, nil
}
func (m *mockClient) CodeAt(ctx context.Context, account common.Address, blockNumber *big.Int) ([]byte, error) {
return []byte{'t', 'e', 's', 't'}, nil
}
func (m *mockClient) FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]gethTypes.Log, error) {
logs := make([]gethTypes.Log, 3)
for i := 0; i < len(logs); i++ {
logs[i].Address = common.Address{}
logs[i].Topics = make([]common.Hash, 5)
logs[i].Topics[0] = common.Hash{'a'}
logs[i].Topics[1] = common.Hash{'b'}
logs[i].Topics[2] = common.Hash{'c'}
// StateFeed mocks the same method in the beacon node.
func (mbn *mockBeaconNode) StateFeed() *event.Feed {
if mbn.stateFeed == nil {
mbn.stateFeed = new(event.Feed)
}
return logs, nil
}
func (m *mockClient) LatestBlockHash() common.Hash {
return common.BytesToHash([]byte{'A'})
}
type faultyClient struct{}
func (f *faultyClient) SubscribeNewHead(ctx context.Context, ch chan<- *gethTypes.Header) (ethereum.Subscription, error) {
return new(event.Feed).Subscribe(ch), nil
}
func (f *faultyClient) BlockByHash(ctx context.Context, hash common.Hash) (*gethTypes.Block, error) {
return nil, errors.New("failed")
}
func (f *faultyClient) BlockByNumber(ctx context.Context, number *big.Int) (*gethTypes.Block, error) {
return nil, errors.New("failed")
}
func (f *faultyClient) HeaderByNumber(ctx context.Context, number *big.Int) (*gethTypes.Header, error) {
return nil, errors.New("failed")
}
func (f *faultyClient) SubscribeFilterLogs(ctx context.Context, q ethereum.FilterQuery, ch chan<- gethTypes.Log) (ethereum.Subscription, error) {
return new(event.Feed).Subscribe(ch), nil
}
func (f *faultyClient) FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]gethTypes.Log, error) {
return nil, errors.New("unable to retrieve logs")
}
func (f *faultyClient) CallContract(ctx context.Context, call ethereum.CallMsg, blockNumber *big.Int) ([]byte, error) {
return []byte{}, errors.New("unable to retrieve contract code")
}
func (f *faultyClient) CodeAt(ctx context.Context, account common.Address, blockNumber *big.Int) ([]byte, error) {
return []byte{}, errors.New("unable to retrieve contract code")
}
func (f *faultyClient) LatestBlockHash() common.Hash {
return common.BytesToHash([]byte{'A'})
return mbn.stateFeed
}
type mockBroadcaster struct {
@@ -182,24 +89,13 @@ func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
var _ = p2p.Broadcaster(&mockBroadcaster{})
func setupGenesisBlock(t *testing.T, cs *Service) ([32]byte, *ethpb.BeaconBlock) {
genesis := b.NewGenesisBlock([]byte{})
if err := cs.beaconDB.SaveBlock(context.Background(), genesis); err != nil {
t.Fatalf("could not save block to db: %v", err)
}
parentHash, err := ssz.SigningRoot(genesis)
if err != nil {
t.Fatalf("unable to get tree hash root of canonical head: %v", err)
}
return parentHash, genesis
}
func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
endpoint := "ws://127.0.0.1"
ctx := context.Background()
var web3Service *powchain.Service
var err error
web3Service, err = powchain.NewService(ctx, &powchain.Web3ServiceConfig{
BeaconDB: beaconDB,
ETH1Endpoint: endpoint,
DepositContract: common.Address{},
})
@@ -212,8 +108,9 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
BeaconDB: beaconDB,
DepositCache: depositcache.NewDepositCache(),
ChainStartFetcher: web3Service,
OpsPoolService: &mockOperationService{},
P2p: &mockBroadcaster{},
StateNotifier: &mockBeaconNode{},
AttPool: attestations.NewPool(),
}
if err != nil {
t.Fatalf("could not register blockchain service: %v", err)
@@ -222,31 +119,47 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
if err != nil {
t.Fatalf("unable to setup chain service: %v", err)
}
chainService.genesisTime = time.Unix(1, 0) // non-zero time
return chainService
}
func TestChainStartStop_Uninitialized(t *testing.T) {
helpers.ClearAllCaches()
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
chainService := setupBeaconChain(t, db)
// Test the start function.
genesisChan := make(chan time.Time, 0)
sub := chainService.stateInitializedFeed.Subscribe(genesisChan)
defer sub.Unsubscribe()
// Listen for state events.
stateSubChannel := make(chan *feed.Event, 1)
stateSub := chainService.stateNotifier.StateFeed().Subscribe(stateSubChannel)
// Test the chain start state notifier.
genesisTime := time.Unix(1, 0)
chainService.Start()
chainService.chainStartChan <- time.Unix(0, 0)
genesisTime := <-genesisChan
if genesisTime != time.Unix(0, 0) {
t.Errorf(
"Expected genesis time to equal chainstart time (%v), received %v",
time.Unix(0, 0),
genesisTime,
)
event := &feed.Event{
Type: statefeed.ChainStarted,
Data: &statefeed.ChainStartedData{
StartTime: genesisTime,
},
}
// Send in a loop to ensure it is delivered (busy wait for the service to subscribe to the state feed).
for sent := 1; sent == 1; {
sent = chainService.stateNotifier.StateFeed().Send(event)
if sent == 1 {
// Flush our local subscriber.
<-stateSubChannel
}
}
// Now wait for notification the state is ready.
for stateInitialized := false; stateInitialized == false; {
recv := <-stateSubChannel
if recv.Type == statefeed.Initialized {
stateInitialized = true
}
}
stateSub.Unsubscribe()
beaconState, err := db.HeadState(context.Background())
if err != nil {
@@ -275,22 +188,22 @@ func TestChainStartStop_Initialized(t *testing.T) {
chainService := setupBeaconChain(t, db)
genesisBlk := b.NewGenesisBlock([]byte{})
blkRoot, err := ssz.SigningRoot(genesisBlk)
blkRoot, err := ssz.HashTreeRoot(genesisBlk.Block)
if err != nil {
t.Fatal(err)
}
if err := db.SaveBlock(ctx, genesisBlk); err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, &pb.BeaconState{Slot: 1}, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveHeadBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, &pb.BeaconState{Slot: 1}, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}); err != nil {
t.Fatal(err)
}
@@ -315,11 +228,28 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
ctx := context.Background()
bc := setupBeaconChain(t, db)
var err error
// Set up 10 deposits pre chain start for validators to register
count := uint64(10)
deposits, _, _ := testutil.SetupInitialDeposits(t, count)
if err := bc.initializeBeaconChain(ctx, time.Unix(0, 0), deposits, &ethpb.Eth1Data{}); err != nil {
deposits, _, _ := testutil.DeterministicDepositsAndKeys(count)
trie, _, err := testutil.DepositTrieFromDeposits(deposits)
if err != nil {
t.Fatal(err)
}
hashTreeRoot := trie.HashTreeRoot()
genState := state.EmptyGenesisState()
genState.Eth1Data = &ethpb.Eth1Data{
DepositRoot: hashTreeRoot[:],
DepositCount: uint64(len(deposits)),
}
genState, err = b.ProcessDeposits(ctx, genState, &ethpb.BeaconBlockBody{Deposits: deposits})
if err != nil {
t.Fatal(err)
}
if err := bc.initializeBeaconChain(ctx, time.Unix(0, 0), genState, &ethpb.Eth1Data{
DepositRoot: hashTreeRoot[:],
}); err != nil {
t.Fatal(err)
}
@@ -334,8 +264,8 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
}
}
if bc.HeadState() == nil {
t.Error("Head state can't be nil after initialize beacon chain")
if _, err := bc.HeadState(ctx); err != nil {
t.Error(err)
}
if bc.HeadBlock() == nil {
t.Error("Head state can't be nil after initialize beacon chain")
@@ -351,7 +281,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
ctx := context.Background()
genesis := b.NewGenesisBlock([]byte{})
genesisRoot, err := ssz.SigningRoot(genesis)
genesisRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Fatal(err)
}
@@ -363,9 +293,9 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
}
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := &ethpb.BeaconBlock{Slot: finalizedSlot, ParentRoot: genesisRoot[:]}
headBlock := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: finalizedSlot, ParentRoot: genesisRoot[:]}}
headState := &pb.BeaconState{Slot: finalizedSlot}
headRoot, _ := ssz.SigningRoot(headBlock)
headRoot, _ := ssz.HashTreeRoot(headBlock.Block)
if err := db.SaveState(ctx, headState, headRoot); err != nil {
t.Fatal(err)
}
@@ -388,13 +318,43 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
if !reflect.DeepEqual(c.HeadBlock(), headBlock) {
t.Error("head block incorrect")
}
if !reflect.DeepEqual(c.HeadState(), headState) {
t.Error("head block incorrect")
s, err := c.HeadState(ctx)
if err != nil {
t.Fatal(err)
}
if headBlock.Slot != c.HeadSlot() {
if !reflect.DeepEqual(s, headState) {
t.Error("head state incorrect")
}
if headBlock.Block.Slot != c.HeadSlot() {
t.Error("head slot incorrect")
}
if !bytes.Equal(headRoot[:], c.HeadRoot()) {
t.Error("head slot incorrect")
}
if c.genesisRoot != genesisRoot {
t.Error("genesis block root incorrect")
}
}
func TestChainService_SaveHeadNoDB(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
r, _ := ssz.HashTreeRoot(b)
if err := s.saveHeadNoDB(ctx, b, r); err != nil {
t.Fatal(err)
}
newB, err := s.beaconDB.HeadBlock(ctx)
if err != nil {
t.Fatal(err)
}
if reflect.DeepEqual(newB, b) {
t.Error("head block should not be equal")
}
}

View File

@@ -7,11 +7,16 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/event:go_default_library",
"//shared/params:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],

View File

@@ -6,53 +6,104 @@ import (
"time"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
opfeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/event"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
)
// ChainService defines the mock interface for testing
type ChainService struct {
State *pb.BeaconState
Root []byte
Block *ethpb.BeaconBlock
FinalizedCheckPoint *ethpb.Checkpoint
StateFeed *event.Feed
BlocksReceived []*ethpb.BeaconBlock
Genesis time.Time
Fork *pb.Fork
DB db.Database
State *pb.BeaconState
Root []byte
Block *ethpb.SignedBeaconBlock
FinalizedCheckPoint *ethpb.Checkpoint
CurrentJustifiedCheckPoint *ethpb.Checkpoint
PreviousJustifiedCheckPoint *ethpb.Checkpoint
BlocksReceived []*ethpb.SignedBeaconBlock
Balance *precompute.Balance
Genesis time.Time
Fork *pb.Fork
DB db.Database
stateNotifier statefeed.Notifier
opNotifier opfeed.Notifier
}
// StateNotifier mocks the same method in the chain service.
func (ms *ChainService) StateNotifier() statefeed.Notifier {
if ms.stateNotifier == nil {
ms.stateNotifier = &MockStateNotifier{}
}
return ms.stateNotifier
}
// MockStateNotifier mocks the state notifier.
type MockStateNotifier struct {
feed *event.Feed
}
// StateFeed returns a state feed.
func (msn *MockStateNotifier) StateFeed() *event.Feed {
if msn.feed == nil {
msn.feed = new(event.Feed)
}
return msn.feed
}
// OperationNotifier mocks the same method in the chain service.
func (ms *ChainService) OperationNotifier() opfeed.Notifier {
if ms.opNotifier == nil {
ms.opNotifier = &MockOperationNotifier{}
}
return ms.opNotifier
}
// MockOperationNotifier mocks the operation notifier.
type MockOperationNotifier struct {
feed *event.Feed
}
// OperationFeed returns an operation feed.
func (mon *MockOperationNotifier) OperationFeed() *event.Feed {
if mon.feed == nil {
mon.feed = new(event.Feed)
}
return mon.feed
}
// ReceiveBlock mocks ReceiveBlock method in chain service.
func (ms *ChainService) ReceiveBlock(ctx context.Context, block *ethpb.BeaconBlock) error {
func (ms *ChainService) ReceiveBlock(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
return nil
}
// ReceiveBlockNoVerify mocks ReceiveBlockNoVerify method in chain service.
func (ms *ChainService) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.BeaconBlock) error {
func (ms *ChainService) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
return nil
}
// ReceiveBlockNoPubsub mocks ReceiveBlockNoPubsub method in chain service.
func (ms *ChainService) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.BeaconBlock) error {
func (ms *ChainService) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
return nil
}
// ReceiveBlockNoPubsubForkchoice mocks ReceiveBlockNoPubsubForkchoice method in chain service.
func (ms *ChainService) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.BeaconBlock) error {
func (ms *ChainService) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
if ms.State == nil {
ms.State = &pb.BeaconState{}
}
if !bytes.Equal(ms.Root, block.ParentRoot) {
return errors.Errorf("wanted %#x but got %#x", ms.Root, block.ParentRoot)
if !bytes.Equal(ms.Root, block.Block.ParentRoot) {
return errors.Errorf("wanted %#x but got %#x", ms.Root, block.Block.ParentRoot)
}
ms.State.Slot = block.Slot
ms.State.Slot = block.Block.Slot
ms.BlocksReceived = append(ms.BlocksReceived, block)
signingRoot, err := ssz.SigningRoot(block)
signingRoot, err := ssz.HashTreeRoot(block.Block)
if err != nil {
return err
}
@@ -60,7 +111,7 @@ func (ms *ChainService) ReceiveBlockNoPubsubForkchoice(ctx context.Context, bloc
if err := ms.DB.SaveBlock(ctx, block); err != nil {
return err
}
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Slot)
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block.Slot)
}
ms.Root = signingRoot[:]
ms.Block = block
@@ -69,6 +120,9 @@ func (ms *ChainService) ReceiveBlockNoPubsubForkchoice(ctx context.Context, bloc
// HeadSlot mocks HeadSlot method in chain service.
func (ms *ChainService) HeadSlot() uint64 {
if ms.State == nil {
return 0
}
return ms.State.Slot
}
@@ -80,13 +134,13 @@ func (ms *ChainService) HeadRoot() []byte {
}
// HeadBlock mocks HeadBlock method in chain service.
func (ms *ChainService) HeadBlock() *ethpb.BeaconBlock {
func (ms *ChainService) HeadBlock() *ethpb.SignedBeaconBlock {
return ms.Block
}
// HeadState mocks HeadState method in chain service.
func (ms *ChainService) HeadState() *pb.BeaconState {
return ms.State
func (ms *ChainService) HeadState(context.Context) (*pb.BeaconState, error) {
return ms.State, nil
}
// CurrentFork mocks the same method in the chain service.
@@ -99,6 +153,16 @@ func (ms *ChainService) FinalizedCheckpt() *ethpb.Checkpoint {
return ms.FinalizedCheckPoint
}
// CurrentJustifiedCheckpt mocks CurrentJustifiedCheckpt method in chain service.
func (ms *ChainService) CurrentJustifiedCheckpt() *ethpb.Checkpoint {
return ms.CurrentJustifiedCheckPoint
}
// PreviousJustifiedCheckpt mocks PreviousJustifiedCheckpt method in chain service.
func (ms *ChainService) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
return ms.PreviousJustifiedCheckPoint
}
// ReceiveAttestation mocks ReceiveAttestation method in chain service.
func (ms *ChainService) ReceiveAttestation(context.Context, *ethpb.Attestation) error {
return nil
@@ -109,21 +173,25 @@ func (ms *ChainService) ReceiveAttestationNoPubsub(context.Context, *ethpb.Attes
return nil
}
// StateInitializedFeed mocks the same method in the chain service.
func (ms *ChainService) StateInitializedFeed() *event.Feed {
if ms.StateFeed != nil {
return ms.StateFeed
// HeadValidatorsIndices mocks the same method in the chain service.
func (ms *ChainService) HeadValidatorsIndices(epoch uint64) ([]uint64, error) {
if ms.State == nil {
return []uint64{}, nil
}
ms.StateFeed = new(event.Feed)
return ms.StateFeed
return helpers.ActiveValidatorIndices(ms.State, epoch)
}
// HeadUpdatedFeed mocks the same method in the chain service.
func (ms *ChainService) HeadUpdatedFeed() *event.Feed {
return new(event.Feed)
// HeadSeed mocks the same method in the chain service.
func (ms *ChainService) HeadSeed(epoch uint64) ([32]byte, error) {
return helpers.Seed(ms.State, epoch, params.BeaconConfig().DomainBeaconAttester)
}
// GenesisTime mocks the same method in the chain service.
func (ms *ChainService) GenesisTime() time.Time {
return ms.Genesis
}
// Participation mocks the same method in the chain service.
func (ms *ChainService) Participation(epoch uint64) *precompute.Balance {
return ms.Balance
}
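The Participation mock above mirrors the accessor this PR adds to the chain service, which reads the epochParticipation map (guarded by epochParticipationLock) introduced on the Service struct earlier in this diff. A rough sketch of that accessor follows, for orientation only; the actual implementation in the chain service may differ in detail.
func (s *Service) Participation(epoch uint64) *precompute.Balance {
	s.epochParticipationLock.RLock()
	defer s.epochParticipationLock.RUnlock()
	// Returns nil if participation for the requested epoch has not been precomputed yet.
	return s.epochParticipation[epoch]
}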

View File

@@ -3,8 +3,6 @@ load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"active_count.go",
"active_indices.go",
"attestation_data.go",
"checkpoint_state.go",
"committee.go",
@@ -15,8 +13,6 @@ go_library(
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//proto/beacon/p2p/v1:go_default_library",
"//proto/beacon/rpc/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
@@ -24,6 +20,7 @@ go_library(
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@io_k8s_client_go//tools/cache:go_default_library",
],
)
@@ -32,10 +29,7 @@ go_test(
name = "go_default_test",
size = "small",
srcs = [
"active_count_test.go",
"active_indices_test.go",
"attestation_data_test.go",
"benchmarks_test.go",
"checkpoint_state_test.go",
"committee_test.go",
"eth1_data_test.go",
@@ -45,11 +39,11 @@ go_test(
race = "on",
deps = [
"//proto/beacon/p2p/v1:go_default_library",
"//proto/beacon/rpc/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
],
)

View File

@@ -1,102 +0,0 @@
package cache
import (
"errors"
"strconv"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"k8s.io/client-go/tools/cache"
)
var (
// ErrNotActiveCountInfo will be returned when a cache object is not a pointer to
// a ActiveCountByEpoch struct.
ErrNotActiveCountInfo = errors.New("object is not a active count obj")
// maxActiveCountListSize defines the max number of active count can cache.
maxActiveCountListSize = 1000
// Metrics.
activeCountCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "active_validator_count_cache_miss",
Help: "The number of active validator count requests that aren't present in the cache.",
})
activeCountCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "active_validator_count_cache_hit",
Help: "The number of active validator count requests that are present in the cache.",
})
)
// ActiveCountByEpoch defines the active validator count per epoch.
type ActiveCountByEpoch struct {
Epoch uint64
ActiveCount uint64
}
// ActiveCountCache is a struct with 1 queue for looking up active count by epoch.
type ActiveCountCache struct {
activeCountCache *cache.FIFO
lock sync.RWMutex
}
// activeCountKeyFn takes the epoch as the key for the active count of a given epoch.
func activeCountKeyFn(obj interface{}) (string, error) {
aInfo, ok := obj.(*ActiveCountByEpoch)
if !ok {
return "", ErrNotActiveCountInfo
}
return strconv.Itoa(int(aInfo.Epoch)), nil
}
// NewActiveCountCache creates a new active count cache for storing/accessing active validator count.
func NewActiveCountCache() *ActiveCountCache {
return &ActiveCountCache{
activeCountCache: cache.NewFIFO(activeCountKeyFn),
}
}
// ActiveCountInEpoch fetches ActiveCountByEpoch by epoch. Returns true with a
// reference to the ActiveCountInEpoch info, if exists. Otherwise returns false, nil.
func (c *ActiveCountCache) ActiveCountInEpoch(epoch uint64) (uint64, error) {
if !featureconfig.Get().EnableActiveCountCache {
return params.BeaconConfig().FarFutureEpoch, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.activeCountCache.GetByKey(strconv.Itoa(int(epoch)))
if err != nil {
return params.BeaconConfig().FarFutureEpoch, err
}
if exists {
activeCountCacheHit.Inc()
} else {
activeCountCacheMiss.Inc()
return params.BeaconConfig().FarFutureEpoch, nil
}
aInfo, ok := obj.(*ActiveCountByEpoch)
if !ok {
return params.BeaconConfig().FarFutureEpoch, ErrNotActiveCountInfo
}
return aInfo.ActiveCount, nil
}
// AddActiveCount adds ActiveCountByEpoch object to the cache. This method also trims the least
// recently added ActiveCountByEpoch object if the cache size has ready the max cache size limit.
func (c *ActiveCountCache) AddActiveCount(activeCount *ActiveCountByEpoch) error {
c.lock.Lock()
defer c.lock.Unlock()
if err := c.activeCountCache.AddIfNotPresent(activeCount); err != nil {
return err
}
trim(c.activeCountCache, maxActiveCountListSize)
return nil
}

View File

@@ -1,83 +0,0 @@
package cache
import (
"reflect"
"strconv"
"testing"
"github.com/prysmaticlabs/prysm/shared/params"
)
func TestActiveCountKeyFn_OK(t *testing.T) {
aInfo := &ActiveCountByEpoch{
Epoch: 999,
ActiveCount: 10,
}
key, err := activeCountKeyFn(aInfo)
if err != nil {
t.Fatal(err)
}
if key != strconv.Itoa(int(aInfo.Epoch)) {
t.Errorf("Incorrect hash key: %s, expected %s", key, strconv.Itoa(int(aInfo.Epoch)))
}
}
func TestActiveCountKeyFn_InvalidObj(t *testing.T) {
_, err := activeCountKeyFn("bad")
if err != ErrNotActiveCountInfo {
t.Errorf("Expected error %v, got %v", ErrNotActiveCountInfo, err)
}
}
func TestActiveCountCache_ActiveCountByEpoch(t *testing.T) {
cache := NewActiveCountCache()
aInfo := &ActiveCountByEpoch{
Epoch: 99,
ActiveCount: 11,
}
activeCount, err := cache.ActiveCountInEpoch(aInfo.Epoch)
if err != nil {
t.Fatal(err)
}
if activeCount != params.BeaconConfig().FarFutureEpoch {
t.Error("Expected active count not to exist in empty cache")
}
if err := cache.AddActiveCount(aInfo); err != nil {
t.Fatal(err)
}
activeCount, err = cache.ActiveCountInEpoch(aInfo.Epoch)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(activeCount, aInfo.ActiveCount) {
t.Errorf(
"Expected fetched active count to be %v, got %v",
aInfo.ActiveCount,
activeCount,
)
}
}
func TestActiveCount_MaxSize(t *testing.T) {
cache := NewActiveCountCache()
for i := uint64(0); i < 1001; i++ {
aInfo := &ActiveCountByEpoch{
Epoch: i,
}
if err := cache.AddActiveCount(aInfo); err != nil {
t.Fatal(err)
}
}
if len(cache.activeCountCache.ListKeys()) != maxActiveCountListSize {
t.Errorf(
"Expected hash cache key size to be %d, got %d",
maxActiveCountListSize,
len(cache.activeCountCache.ListKeys()),
)
}
}

View File

@@ -1,106 +0,0 @@
package cache
import (
"errors"
"strconv"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"k8s.io/client-go/tools/cache"
)
var (
// ErrNotActiveIndicesInfo will be returned when a cache object is not a pointer to
// a ActiveIndicesByEpoch struct.
ErrNotActiveIndicesInfo = errors.New("object is not a active indices list")
// maxActiveIndicesListSize defines the max number of active indices can cache.
maxActiveIndicesListSize = 4
// Metrics.
activeIndicesCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "active_validator_indices_cache_miss",
Help: "The number of active validator indices requests that aren't present in the cache.",
})
activeIndicesCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "active_validator_indices_cache_hit",
Help: "The number of active validator indices requests that are present in the cache.",
})
)
// ActiveIndicesByEpoch defines the active validator indices per epoch.
type ActiveIndicesByEpoch struct {
Epoch uint64
ActiveIndices []uint64
}
// ActiveIndicesCache is a struct with 1 queue for looking up active indices by epoch.
type ActiveIndicesCache struct {
activeIndicesCache *cache.FIFO
lock sync.RWMutex
}
// activeIndicesKeyFn takes the epoch as the key for the active indices of a given epoch.
func activeIndicesKeyFn(obj interface{}) (string, error) {
aInfo, ok := obj.(*ActiveIndicesByEpoch)
if !ok {
return "", ErrNotActiveIndicesInfo
}
return strconv.Itoa(int(aInfo.Epoch)), nil
}
// NewActiveIndicesCache creates a new active indices cache for storing/accessing active validator indices.
func NewActiveIndicesCache() *ActiveIndicesCache {
return &ActiveIndicesCache{
activeIndicesCache: cache.NewFIFO(activeIndicesKeyFn),
}
}
// ActiveIndicesInEpoch fetches ActiveIndicesByEpoch by epoch. Returns true with a
// reference to the ActiveIndicesInEpoch info, if exists. Otherwise returns false, nil.
func (c *ActiveIndicesCache) ActiveIndicesInEpoch(epoch uint64) ([]uint64, error) {
if !featureconfig.Get().EnableActiveIndicesCache {
return nil, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.activeIndicesCache.GetByKey(strconv.Itoa(int(epoch)))
if err != nil {
return nil, err
}
if exists {
activeIndicesCacheHit.Inc()
} else {
activeIndicesCacheMiss.Inc()
return nil, nil
}
aInfo, ok := obj.(*ActiveIndicesByEpoch)
if !ok {
return nil, ErrNotActiveIndicesInfo
}
return aInfo.ActiveIndices, nil
}
// AddActiveIndicesList adds ActiveIndicesByEpoch object to the cache. This method also trims the least
// recently added ActiveIndicesByEpoch object if the cache size has ready the max cache size limit.
func (c *ActiveIndicesCache) AddActiveIndicesList(activeIndices *ActiveIndicesByEpoch) error {
c.lock.Lock()
defer c.lock.Unlock()
if err := c.activeIndicesCache.AddIfNotPresent(activeIndices); err != nil {
return err
}
trim(c.activeIndicesCache, maxActiveIndicesListSize)
return nil
}
// ActiveIndicesKeys returns the keys of the active indices cache.
func (c *ActiveIndicesCache) ActiveIndicesKeys() []string {
return c.activeIndicesCache.ListKeys()
}

View File

@@ -1,82 +0,0 @@
package cache
import (
"reflect"
"strconv"
"testing"
)
func TestActiveIndicesKeyFn_OK(t *testing.T) {
aInfo := &ActiveIndicesByEpoch{
Epoch: 999,
ActiveIndices: []uint64{1, 2, 3, 4, 5},
}
key, err := activeIndicesKeyFn(aInfo)
if err != nil {
t.Fatal(err)
}
if key != strconv.Itoa(int(aInfo.Epoch)) {
t.Errorf("Incorrect hash key: %s, expected %s", key, strconv.Itoa(int(aInfo.Epoch)))
}
}
func TestActiveIndicesKeyFn_InvalidObj(t *testing.T) {
_, err := activeIndicesKeyFn("bad")
if err != ErrNotActiveIndicesInfo {
t.Errorf("Expected error %v, got %v", ErrNotActiveIndicesInfo, err)
}
}
func TestActiveIndicesCache_ActiveIndicesByEpoch(t *testing.T) {
cache := NewActiveIndicesCache()
aInfo := &ActiveIndicesByEpoch{
Epoch: 99,
ActiveIndices: []uint64{1, 2, 3, 4},
}
activeIndices, err := cache.ActiveIndicesInEpoch(aInfo.Epoch)
if err != nil {
t.Fatal(err)
}
if activeIndices != nil {
t.Error("Expected active indices not to exist in empty cache")
}
if err := cache.AddActiveIndicesList(aInfo); err != nil {
t.Fatal(err)
}
activeIndices, err = cache.ActiveIndicesInEpoch(aInfo.Epoch)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(activeIndices, aInfo.ActiveIndices) {
t.Errorf(
"Expected fetched active indices to be %v, got %v",
aInfo.ActiveIndices,
activeIndices,
)
}
}
func TestActiveIndices_MaxSize(t *testing.T) {
cache := NewActiveIndicesCache()
for i := uint64(0); i < 100; i++ {
aInfo := &ActiveIndicesByEpoch{
Epoch: i,
}
if err := cache.AddActiveIndicesList(aInfo); err != nil {
t.Fatal(err)
}
}
if len(cache.activeIndicesCache.ListKeys()) != maxActiveIndicesListSize {
t.Errorf(
"Expected hash cache key size to be %d, got %d",
maxActiveIndicesListSize,
len(cache.activeIndicesCache.ListKeys()),
)
}
}

View File

@@ -10,8 +10,7 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
pb "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"k8s.io/client-go/tools/cache"
)
@@ -59,7 +58,7 @@ func NewAttestationCache() *AttestationCache {
// Get waits for any in progress calculation to complete before returning a
// cached response, if any.
func (c *AttestationCache) Get(ctx context.Context, req *pb.AttestationRequest) (*ethpb.AttestationData, error) {
func (c *AttestationCache) Get(ctx context.Context, req *ethpb.AttestationDataRequest) (*ethpb.AttestationData, error) {
if !featureconfig.Get().EnableAttestationCache {
// Return a miss result if cache is not enabled.
attestationCacheMiss.Inc()
@@ -113,7 +112,7 @@ func (c *AttestationCache) Get(ctx context.Context, req *pb.AttestationRequest)
// MarkInProgress marks a request as in progress so that any other similar requests will block on
// Get until MarkNotInProgress is called.
func (c *AttestationCache) MarkInProgress(req *pb.AttestationRequest) error {
func (c *AttestationCache) MarkInProgress(req *ethpb.AttestationDataRequest) error {
if !featureconfig.Get().EnableAttestationCache {
return nil
}
@@ -135,7 +134,7 @@ func (c *AttestationCache) MarkInProgress(req *pb.AttestationRequest) error {
// MarkNotInProgress will release the lock on a given request. This should be
// called after put.
func (c *AttestationCache) MarkNotInProgress(req *pb.AttestationRequest) error {
func (c *AttestationCache) MarkNotInProgress(req *ethpb.AttestationDataRequest) error {
if !featureconfig.Get().EnableAttestationCache {
return nil
}
@@ -151,7 +150,7 @@ func (c *AttestationCache) MarkNotInProgress(req *pb.AttestationRequest) error {
}
// Put the response in the cache.
func (c *AttestationCache) Put(ctx context.Context, req *pb.AttestationRequest, res *ethpb.AttestationData) error {
func (c *AttestationCache) Put(ctx context.Context, req *ethpb.AttestationDataRequest, res *ethpb.AttestationData) error {
if !featureconfig.Get().EnableAttestationCache {
return nil
}
@@ -180,11 +179,11 @@ func wrapperToKey(i interface{}) (string, error) {
return reqToKey(w.req)
}
func reqToKey(req *pb.AttestationRequest) (string, error) {
func reqToKey(req *ethpb.AttestationDataRequest) (string, error) {
return fmt.Sprintf("%d-%d", req.CommitteeIndex, req.Slot), nil
}
type attestationReqResWrapper struct {
req *pb.AttestationRequest
req *ethpb.AttestationDataRequest
res *ethpb.AttestationData
}
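The Get, MarkInProgress, Put, and MarkNotInProgress comments above describe a request-deduplication protocol: the first caller marks a request in progress, later identical requests block inside Get, and the result is published with Put before the mark is released. Below is a hedged usage sketch from a hypothetical caller outside the cache package, using the ethereumapis ethpb alias shown in this file; computeAttestationData is an illustrative placeholder, not part of the cache API, and error handling is abbreviated.
func respond(ctx context.Context, c *cache.AttestationCache, req *ethpb.AttestationDataRequest) (*ethpb.AttestationData, error) {
	// A cache hit (or an error) means no recomputation is needed.
	if res, err := c.Get(ctx, req); err != nil || res != nil {
		return res, err
	}
	// Claim the request so that concurrent identical requests block in Get.
	if err := c.MarkInProgress(req); err != nil {
		return nil, err
	}
	defer c.MarkNotInProgress(req)
	res := computeAttestationData(req) // illustrative placeholder
	if err := c.Put(ctx, req, res); err != nil {
		return nil, err
	}
	return res, nil
}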

View File

@@ -5,16 +5,15 @@ import (
"testing"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
pb "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)
func TestAttestationCache_RoundTrip(t *testing.T) {
ctx := context.Background()
c := cache.NewAttestationCache()
req := &pb.AttestationRequest{
req := &ethpb.AttestationDataRequest{
CommitteeIndex: 0,
Slot: 1,
}

View File

@@ -1,45 +0,0 @@
package cache
import (
"testing"
)
var indices300k = createIndices(300000)
var epoch = uint64(1)
func createIndices(count int) *ActiveIndicesByEpoch {
indices := make([]uint64, 0, count)
for i := 0; i < count; i++ {
indices = append(indices, uint64(i))
}
return &ActiveIndicesByEpoch{
Epoch: epoch,
ActiveIndices: indices,
}
}
func BenchmarkCachingAddRetrieve(b *testing.B) {
c := NewActiveIndicesCache()
b.Run("ADD300K", func(b *testing.B) {
b.N = 10
b.ResetTimer()
for i := 0; i < b.N; i++ {
if err := c.AddActiveIndicesList(indices300k); err != nil {
b.Fatal(err)
}
}
})
b.Run("RETR300K", func(b *testing.B) {
b.N = 10
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := c.ActiveIndicesInEpoch(epoch); err != nil {
b.Fatal(err)
}
}
})
}

View File

@@ -7,8 +7,8 @@ import (
"github.com/gogo/protobuf/proto"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"k8s.io/client-go/tools/cache"
)

View File

@@ -4,8 +4,8 @@ import (
"reflect"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/hashutil"
)

View File

@@ -2,7 +2,6 @@ package cache
import (
"errors"
"strconv"
"sync"
"github.com/prometheus/client_golang/prometheus"
@@ -18,9 +17,10 @@ var (
// a Committee struct.
ErrNotCommittee = errors.New("object is not a committee struct")
// maxShuffledIndicesSize defines the max number of shuffled indices list can cache.
// 3 for previous, current epoch and next epoch.
maxShuffledIndicesSize = 3
// maxCommitteesCacheSize defines the max number of shuffled committees, keyed per RANDAO seed, that can be cached.
// Due to reorgs, it's good to keep the old cache around for a quick switch-over. 10 is a generous
// cache size as it accommodates 3 concurrent branches over 3 epochs.
maxCommitteesCacheSize = 10
// CommitteeCacheMiss tracks the number of committee requests that aren't present in the cache.
CommitteeCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
@@ -34,47 +34,47 @@ var (
})
)
// Committee defines the committee per epoch and index.
type Committee struct {
CommitteeCount uint64
Epoch uint64
Committee []uint64
// Committees defines the shuffled committees seed.
type Committees struct {
CommitteeCount uint64
Seed [32]byte
ShuffledIndices []uint64
SortedIndices []uint64
}
// CommitteeCache is a struct with 1 queue for looking up shuffled indices list by epoch and committee index.
// CommitteeCache is a struct with 1 queue for looking up shuffled indices list by seed.
type CommitteeCache struct {
CommitteeCache *cache.FIFO
lock sync.RWMutex
}
// committeeKeyFn takes the epoch as the key to retrieve shuffled indices of a committee in a given epoch.
// committeeKeyFn takes the seed as the key to retrieve shuffled indices of a committee in a given epoch.
func committeeKeyFn(obj interface{}) (string, error) {
info, ok := obj.(*Committee)
info, ok := obj.(*Committees)
if !ok {
return "", ErrNotCommittee
}
return strconv.Itoa(int(info.Epoch)), nil
return key(info.Seed), nil
}
// NewCommitteeCache creates a new committee cache for storing/accessing shuffled indices of a committee.
func NewCommitteeCache() *CommitteeCache {
// NewCommitteesCache creates a new committee cache for storing/accessing shuffled indices of a committee.
func NewCommitteesCache() *CommitteeCache {
return &CommitteeCache{
CommitteeCache: cache.NewFIFO(committeeKeyFn),
}
}
// ShuffledIndices fetches the shuffled indices by slot and committee index. Every list of indices
// Committee fetches the shuffled indices by slot and committee index. Every list of indices
// represents one committee. Returns the committee's indices if the list exists for the slot and committee index; otherwise returns nil, nil.
func (c *CommitteeCache) ShuffledIndices(slot uint64, index uint64) ([]uint64, error) {
func (c *CommitteeCache) Committee(slot uint64, seed [32]byte, index uint64) ([]uint64, error) {
if !featureconfig.Get().EnableShuffledIndexCache && !featureconfig.Get().EnableNewCache {
return nil, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
epoch := int(slot / params.BeaconConfig().SlotsPerEpoch)
obj, exists, err := c.CommitteeCache.GetByKey(strconv.Itoa(epoch))
obj, exists, err := c.CommitteeCache.GetByKey(key(seed))
if err != nil {
return nil, err
}
@@ -86,7 +86,7 @@ func (c *CommitteeCache) ShuffledIndices(slot uint64, index uint64) ([]uint64, e
return nil, nil
}
item, ok := obj.(*Committee)
item, ok := obj.(*Committees)
if !ok {
return nil, ErrNotCommittee
}
@@ -98,100 +98,36 @@ func (c *CommitteeCache) ShuffledIndices(slot uint64, index uint64) ([]uint64, e
indexOffSet := index + (slot%params.BeaconConfig().SlotsPerEpoch)*committeeCountPerSlot
start, end := startEndIndices(item, indexOffSet)
return item.Committee[start:end], nil
return item.ShuffledIndices[start:end], nil
}
// AddCommitteeShuffledList adds a Committees shuffled list object to the cache. This
// method also trims the least recently added list if the cache size has reached the max cache size limit.
func (c *CommitteeCache) AddCommitteeShuffledList(committee *Committee) error {
func (c *CommitteeCache) AddCommitteeShuffledList(committees *Committees) error {
if !featureconfig.Get().EnableShuffledIndexCache && !featureconfig.Get().EnableNewCache {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
if err := c.CommitteeCache.AddIfNotPresent(committee); err != nil {
if err := c.CommitteeCache.AddIfNotPresent(committees); err != nil {
return err
}
trim(c.CommitteeCache, maxShuffledIndicesSize)
trim(c.CommitteeCache, maxCommitteesCacheSize)
return nil
}
// Epochs returns the epochs stored in the committee cache. These are the keys to the cache.
func (c *CommitteeCache) Epochs() ([]uint64, error) {
if !featureconfig.Get().EnableShuffledIndexCache {
return nil, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
epochs := make([]uint64, len(c.CommitteeCache.ListKeys()))
for i, s := range c.CommitteeCache.ListKeys() {
epoch, err := strconv.Atoi(s)
if err != nil {
return nil, err
}
epochs[i] = uint64(epoch)
}
return epochs, nil
}
// EpochInCache returns true if an input epoch is part of keys in cache.
func (c *CommitteeCache) EpochInCache(wantedEpoch uint64) (bool, error) {
if !featureconfig.Get().EnableShuffledIndexCache && !featureconfig.Get().EnableNewCache {
return false, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
for _, s := range c.CommitteeCache.ListKeys() {
epoch, err := strconv.Atoi(s)
if err != nil {
return false, err
}
if wantedEpoch == uint64(epoch) {
return true, nil
}
}
return false, nil
}
// CommitteeCountPerSlot returns the number of committees in a given slot as stored in cache.
func (c *CommitteeCache) CommitteeCountPerSlot(slot uint64) (uint64, bool, error) {
if !featureconfig.Get().EnableShuffledIndexCache && !featureconfig.Get().EnableNewCache {
return 0, false, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
epoch := int(slot / params.BeaconConfig().SlotsPerEpoch)
obj, exists, err := c.CommitteeCache.GetByKey(strconv.Itoa(int(epoch)))
if err != nil {
return 0, false, err
}
if exists {
CommitteeCacheHit.Inc()
} else {
CommitteeCacheMiss.Inc()
return 0, false, nil
}
item, ok := obj.(*Committee)
if !ok {
return 0, false, ErrNotCommittee
}
return item.CommitteeCount / params.BeaconConfig().SlotsPerEpoch, true, nil
}
// ActiveIndices returns the active indices of a given epoch stored in cache.
func (c *CommitteeCache) ActiveIndices(epoch uint64) ([]uint64, error) {
// ActiveIndices returns the active indices of a given seed stored in cache.
func (c *CommitteeCache) ActiveIndices(seed [32]byte) ([]uint64, error) {
if !featureconfig.Get().EnableShuffledIndexCache && !featureconfig.Get().EnableNewCache {
return nil, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.CommitteeCache.GetByKey(strconv.Itoa(int(epoch)))
obj, exists, err := c.CommitteeCache.GetByKey(key(seed))
if err != nil {
return nil, err
}
@@ -203,18 +139,25 @@ func (c *CommitteeCache) ActiveIndices(epoch uint64) ([]uint64, error) {
return nil, nil
}
item, ok := obj.(*Committee)
item, ok := obj.(*Committees)
if !ok {
return nil, ErrNotCommittee
}
return item.Committee, nil
return item.SortedIndices, nil
}
func startEndIndices(c *Committee, index uint64) (uint64, uint64) {
validatorCount := uint64(len(c.Committee))
func startEndIndices(c *Committees, index uint64) (uint64, uint64) {
validatorCount := uint64(len(c.ShuffledIndices))
start := sliceutil.SplitOffset(validatorCount, c.CommitteeCount, index)
end := sliceutil.SplitOffset(validatorCount, c.CommitteeCount, index+1)
return start, end
}
// Using the seed as the source for the key handles reorgs within the same epoch.
// The seed is derived from the state's array of randao mixes and the epoch value
// hashed together. This avoids collisions across different validator sets. Spec definition:
// https://github.com/ethereum/eth2.0-specs/blob/v0.9.2/specs/core/0_beacon-chain.md#get_seed
func key(seed [32]byte) string {
return string(seed[:])
}
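Because entries are now keyed by this seed rather than by an epoch number, two competing forks at the same epoch produce different seeds and therefore occupy distinct cache entries. A minimal usage sketch follows, written as a hypothetical helper inside the cache package and assuming the Committees type and NewCommitteesCache above with the relevant feature flags enabled; the seed values and index slices are made up.
func cacheBothBranches(c *CommitteeCache) error {
	// Competing branches at the same epoch carry different RANDAO-derived
	// seeds, so caching one does not evict or overwrite the other.
	branchA := &Committees{Seed: [32]byte{'a'}, CommitteeCount: 4, ShuffledIndices: []uint64{2, 0, 3, 1}, SortedIndices: []uint64{0, 1, 2, 3}}
	branchB := &Committees{Seed: [32]byte{'b'}, CommitteeCount: 4, ShuffledIndices: []uint64{1, 3, 0, 2}, SortedIndices: []uint64{0, 1, 2, 3}}
	if err := c.AddCommitteeShuffledList(branchA); err != nil {
		return err
	}
	if err := c.AddCommitteeShuffledList(branchB); err != nil {
		return err
	}
	// Lookups resolve per seed: the same slot and committee index can map to
	// different committees on different branches.
	indicesA, err := c.Committee(0 /* slot */, branchA.Seed, 0 /* committee index */)
	if err != nil {
		return err
	}
	indicesB, err := c.Committee(0, branchB.Seed, 0)
	if err != nil {
		return err
	}
	_, _ = indicesA, indicesB
	return nil
}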

View File

@@ -2,25 +2,27 @@ package cache
import (
"reflect"
"sort"
"strconv"
"testing"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
func TestCommitteeKeyFn_OK(t *testing.T) {
item := &Committee{
Epoch: 999,
CommitteeCount: 1,
Committee: []uint64{1, 2, 3, 4, 5},
item := &Committees{
CommitteeCount: 1,
Seed: [32]byte{'A'},
ShuffledIndices: []uint64{1, 2, 3, 4, 5},
}
key, err := committeeKeyFn(item)
k, err := committeeKeyFn(item)
if err != nil {
t.Fatal(err)
}
if key != strconv.Itoa(int(item.Epoch)) {
t.Errorf("Incorrect hash key: %s, expected %s", key, strconv.Itoa(int(item.Epoch)))
if k != key(item.Seed) {
t.Errorf("Incorrect hash k: %s, expected %s", k, key(item.Seed))
}
}
@@ -32,17 +34,17 @@ func TestCommitteeKeyFn_InvalidObj(t *testing.T) {
}
func TestCommitteeCache_CommitteesByEpoch(t *testing.T) {
cache := NewCommitteeCache()
cache := NewCommitteesCache()
item := &Committee{
Epoch: 1,
Committee: []uint64{1, 2, 3, 4, 5, 6},
CommitteeCount: 3,
item := &Committees{
ShuffledIndices: []uint64{1, 2, 3, 4, 5, 6},
Seed: [32]byte{'A'},
CommitteeCount: 3,
}
slot := uint64(item.Epoch * params.BeaconConfig().SlotsPerEpoch)
slot := params.BeaconConfig().SlotsPerEpoch
committeeIndex := uint64(1)
indices, err := cache.ShuffledIndices(slot, committeeIndex)
indices, err := cache.Committee(slot, item.Seed, committeeIndex)
if err != nil {
t.Fatal(err)
}
@@ -54,102 +56,26 @@ func TestCommitteeCache_CommitteesByEpoch(t *testing.T) {
t.Fatal(err)
}
wantedIndex := uint64(0)
indices, err = cache.ShuffledIndices(slot, wantedIndex)
indices, err = cache.Committee(slot, item.Seed, wantedIndex)
if err != nil {
t.Fatal(err)
}
start, end := startEndIndices(item, wantedIndex)
if !reflect.DeepEqual(indices, item.Committee[start:end]) {
if !reflect.DeepEqual(indices, item.ShuffledIndices[start:end]) {
t.Errorf(
"Expected fetched active indices to be %v, got %v",
indices,
item.Committee[start:end],
item.ShuffledIndices[start:end],
)
}
}
func TestCommitteeCache_CanRotate(t *testing.T) {
cache := NewCommitteeCache()
item1 := &Committee{Epoch: 1}
if err := cache.AddCommitteeShuffledList(item1); err != nil {
t.Fatal(err)
}
item2 := &Committee{Epoch: 2}
if err := cache.AddCommitteeShuffledList(item2); err != nil {
t.Fatal(err)
}
epochs, err := cache.Epochs()
if err != nil {
t.Fatal(err)
}
wanted := item1.Epoch + item2.Epoch
if sum(epochs) != wanted {
t.Errorf("Wanted: %v, got: %v", wanted, sum(epochs))
}
item3 := &Committee{Epoch: 4}
if err := cache.AddCommitteeShuffledList(item3); err != nil {
t.Fatal(err)
}
epochs, err = cache.Epochs()
if err != nil {
t.Fatal(err)
}
wanted = item1.Epoch + item2.Epoch + item3.Epoch
if sum(epochs) != wanted {
t.Errorf("Wanted: %v, got: %v", wanted, sum(epochs))
}
item4 := &Committee{Epoch: 6}
if err := cache.AddCommitteeShuffledList(item4); err != nil {
t.Fatal(err)
}
epochs, err = cache.Epochs()
if err != nil {
t.Fatal(err)
}
wanted = item2.Epoch + item3.Epoch + item4.Epoch
if sum(epochs) != wanted {
t.Errorf("Wanted: %v, got: %v", wanted, sum(epochs))
}
}
func TestCommitteeCache_EpochInCache(t *testing.T) {
cache := NewCommitteeCache()
if err := cache.AddCommitteeShuffledList(&Committee{Epoch: 1}); err != nil {
t.Fatal(err)
}
if err := cache.AddCommitteeShuffledList(&Committee{Epoch: 2}); err != nil {
t.Fatal(err)
}
if err := cache.AddCommitteeShuffledList(&Committee{Epoch: 99}); err != nil {
t.Fatal(err)
}
if err := cache.AddCommitteeShuffledList(&Committee{Epoch: 100}); err != nil {
t.Fatal(err)
}
inCache, err := cache.EpochInCache(1)
if err != nil {
t.Fatal(err)
}
if inCache {
t.Error("Epoch shouldn't be in cache")
}
inCache, err = cache.EpochInCache(100)
if err != nil {
t.Fatal(err)
}
if !inCache {
t.Error("Epoch should be in cache")
}
}
func TestCommitteeCache_ActiveIndices(t *testing.T) {
cache := NewCommitteeCache()
cache := NewCommitteesCache()
item := &Committee{Epoch: 1, Committee: []uint64{1, 2, 3, 4, 5, 6}}
indices, err := cache.ActiveIndices(1)
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []uint64{1, 2, 3, 4, 5, 6}}
indices, err := cache.ActiveIndices(item.Seed)
if err != nil {
t.Fatal(err)
}
@@ -161,19 +87,41 @@ func TestCommitteeCache_ActiveIndices(t *testing.T) {
t.Fatal(err)
}
indices, err = cache.ActiveIndices(1)
indices, err = cache.ActiveIndices(item.Seed)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(indices, item.Committee) {
if !reflect.DeepEqual(indices, item.SortedIndices) {
t.Error("Did not receive correct active indices from cache")
}
}
func sum(values []uint64) uint64 {
sum := uint64(0)
for _, v := range values {
sum = v + sum
func TestCommitteeCache_CanRotate(t *testing.T) {
cache := NewCommitteesCache()
// Should rotate out all the epochs except 190 through 199.
for i := 100; i < 200; i++ {
s := []byte(strconv.Itoa(i))
item := &Committees{Seed: bytesutil.ToBytes32(s)}
if err := cache.AddCommitteeShuffledList(item); err != nil {
t.Fatal(err)
}
}
k := cache.CommitteeCache.ListKeys()
if len(k) != maxCommitteesCacheSize {
t.Errorf("wanted: %d, got: %d", maxCommitteesCacheSize, len(k))
}
sort.Slice(k, func(i, j int) bool {
return k[i] < k[j]
})
s := bytesutil.ToBytes32([]byte(strconv.Itoa(190)))
if k[0] != key(s) {
t.Error("incorrect key received for slot 190")
}
s = bytesutil.ToBytes32([]byte(strconv.Itoa(199)))
if k[len(k)-1] != key(s) {
t.Error("incorrect key received for slot 199")
}
return sum
}
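The rotation test above implies the cache keeps only the most recently added maxCommitteesCacheSize entries (10, judging by the 190 through 199 expectation). A minimal FIFO sketch of that behavior, independent of the actual cache implementation:
package main

import "fmt"

const maxSize = 10 // assumed value of maxCommitteesCacheSize, inferred from the test

func main() {
	var keys []int
	for i := 100; i < 200; i++ {
		keys = append(keys, i)
		if len(keys) > maxSize {
			keys = keys[1:] // rotate out the oldest entry
		}
	}
	fmt.Println(keys) // [190 191 192 193 194 195 196 197 198 199]
}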

View File

@@ -9,10 +9,12 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//proto/eth/v1alpha1:go_default_library",
"//proto/beacon/db:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/hashutil:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
@@ -26,9 +28,10 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//proto/eth/v1alpha1:go_default_library",
"//proto/beacon/db:go_default_library",
"//shared/bytesutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)

View File

@@ -10,7 +10,9 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
dbpb "github.com/prysmaticlabs/prysm/proto/beacon/db"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -33,28 +35,19 @@ type DepositFetcher interface {
// stores all the deposit related data that is required by the beacon-node.
type DepositCache struct {
// Beacon chain deposits in memory.
pendingDeposits []*DepositContainer
deposits []*DepositContainer
pendingDeposits []*dbpb.DepositContainer
deposits []*dbpb.DepositContainer
depositsLock sync.RWMutex
chainStartDeposits []*ethpb.Deposit
chainstartPubkeys map[string]bool
chainstartPubkeysLock sync.RWMutex
}
// DepositContainer object for holding the deposit and a reference to the block in
// which the deposit transaction was included in the proof of work chain.
type DepositContainer struct {
Deposit *ethpb.Deposit
Block *big.Int
Index int
depositRoot [32]byte
}
// NewDepositCache instantiates a new deposit cache
func NewDepositCache() *DepositCache {
return &DepositCache{
pendingDeposits: []*DepositContainer{},
deposits: []*DepositContainer{},
pendingDeposits: []*dbpb.DepositContainer{},
deposits: []*dbpb.DepositContainer{},
chainstartPubkeys: make(map[string]bool),
chainStartDeposits: make([]*ethpb.Deposit, 0),
}
@@ -62,10 +55,10 @@ func NewDepositCache() *DepositCache {
// InsertDeposit into the database. If deposit or block number are nil
// then this method does nothing.
func (dc *DepositCache) InsertDeposit(ctx context.Context, d *ethpb.Deposit, blockNum *big.Int, index int, depositRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.InsertDeposit")
func (dc *DepositCache) InsertDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.InsertDeposit")
defer span.End()
if d == nil || blockNum == nil {
if d == nil {
log.WithFields(log.Fields{
"block": blockNum,
"deposit": d,
@@ -78,14 +71,36 @@ func (dc *DepositCache) InsertDeposit(ctx context.Context, d *ethpb.Deposit, blo
defer dc.depositsLock.Unlock()
// keep the slice sorted on insertion in order to avoid costly sorting on retrieval.
heightIdx := sort.Search(len(dc.deposits), func(i int) bool { return dc.deposits[i].Index >= index })
newDeposits := append([]*DepositContainer{{Deposit: d, Block: blockNum, depositRoot: depositRoot, Index: index}}, dc.deposits[heightIdx:]...)
newDeposits := append([]*dbpb.DepositContainer{{Deposit: d, Eth1BlockHeight: blockNum, DepositRoot: depositRoot[:], Index: index}}, dc.deposits[heightIdx:]...)
dc.deposits = append(dc.deposits[:heightIdx], newDeposits...)
historicalDepositsCount.Inc()
}
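For reference, a self-contained sketch of the sort.Search insertion pattern InsertDeposit relies on to keep deposits ordered by index (container and insertSorted are local illustrations, not the cache's types):
package main

import (
	"fmt"
	"sort"
)

type container struct{ Index int64 }

// insertSorted places d so the slice stays ordered by Index, mirroring the
// binary-search-then-splice approach used in InsertDeposit.
func insertSorted(deposits []container, d container) []container {
	i := sort.Search(len(deposits), func(j int) bool { return deposits[j].Index >= d.Index })
	deposits = append(deposits, container{})
	copy(deposits[i+1:], deposits[i:])
	deposits[i] = d
	return deposits
}

func main() {
	var deps []container
	for _, idx := range []int64{0, 3, 1, 4} {
		deps = insertSorted(deps, container{Index: idx})
	}
	fmt.Println(deps) // [{0} {1} {3} {4}]
}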
// InsertDepositContainers inserts a set of deposit containers into our deposit cache.
func (dc *DepositCache) InsertDepositContainers(ctx context.Context, ctrs []*dbpb.DepositContainer) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.InsertDepositContainers")
defer span.End()
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
sort.SliceStable(ctrs, func(i int, j int) bool { return ctrs[i].Index < ctrs[j].Index })
dc.deposits = ctrs
historicalDepositsCount.Add(float64(len(ctrs)))
}
// AllDepositContainers returns all historical deposit containers.
func (dc *DepositCache) AllDepositContainers(ctx context.Context) []*dbpb.DepositContainer {
ctx, span := trace.StartSpan(ctx, "BeaconDB.AllDepositContainers")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
return dc.deposits
}
// MarkPubkeyForChainstart sets the pubkey deposit status to true.
func (dc *DepositCache) MarkPubkeyForChainstart(ctx context.Context, pubkey string) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.MarkPubkeyForChainstart")
ctx, span := trace.StartSpan(ctx, "DepositsCache.MarkPubkeyForChainstart")
defer span.End()
dc.chainstartPubkeysLock.Lock()
defer dc.chainstartPubkeysLock.Unlock()
@@ -94,7 +109,7 @@ func (dc *DepositCache) MarkPubkeyForChainstart(ctx context.Context, pubkey stri
// PubkeyInChainstart returns bool for whether the pubkey passed in has deposited.
func (dc *DepositCache) PubkeyInChainstart(ctx context.Context, pubkey string) bool {
ctx, span := trace.StartSpan(ctx, "BeaconDB.PubkeyInChainstart")
ctx, span := trace.StartSpan(ctx, "DepositsCache.PubkeyInChainstart")
defer span.End()
dc.chainstartPubkeysLock.Lock()
defer dc.chainstartPubkeysLock.Unlock()
@@ -108,14 +123,14 @@ func (dc *DepositCache) PubkeyInChainstart(ctx context.Context, pubkey string) b
// AllDeposits returns a list of all historical deposits up to the given block number
// (inclusive). If no block is specified then this method returns all historical deposits.
func (dc *DepositCache) AllDeposits(ctx context.Context, beforeBlk *big.Int) []*ethpb.Deposit {
ctx, span := trace.StartSpan(ctx, "BeaconDB.AllDeposits")
ctx, span := trace.StartSpan(ctx, "DepositsCache.AllDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var deposits []*ethpb.Deposit
for _, ctnr := range dc.deposits {
if beforeBlk == nil || beforeBlk.Cmp(ctnr.Block) > -1 {
if beforeBlk == nil || beforeBlk.Uint64() >= ctnr.Eth1BlockHeight {
deposits = append(deposits, ctnr.Deposit)
}
}
@@ -125,23 +140,23 @@ func (dc *DepositCache) AllDeposits(ctx context.Context, beforeBlk *big.Int) []*
// DepositsNumberAndRootAtHeight returns the number of deposits made prior to the given block height and the
// root that corresponds to the latest deposit at that block height.
func (dc *DepositCache) DepositsNumberAndRootAtHeight(ctx context.Context, blockHeight *big.Int) (uint64, [32]byte) {
ctx, span := trace.StartSpan(ctx, "Beacondb.DepositsNumberAndRootAtHeight")
ctx, span := trace.StartSpan(ctx, "DepositsCache.DepositsNumberAndRootAtHeight")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
heightIdx := sort.Search(len(dc.deposits), func(i int) bool { return dc.deposits[i].Block.Cmp(blockHeight) > 0 })
heightIdx := sort.Search(len(dc.deposits), func(i int) bool { return dc.deposits[i].Eth1BlockHeight > blockHeight.Uint64() })
// Send the deposit root of the empty trie if the eth1 follow distance reaches back further than the earliest
// deposit.
if heightIdx == 0 {
return 0, [32]byte{}
}
return uint64(heightIdx), dc.deposits[heightIdx-1].depositRoot
return uint64(heightIdx), bytesutil.ToBytes32(dc.deposits[heightIdx-1].DepositRoot)
}
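A small sketch of the height lookup: sort.Search finds the first deposit whose Eth1BlockHeight is strictly greater than the queried height, so the returned index doubles as the count of deposits included at or before it (the heights below are illustrative):
package main

import (
	"fmt"
	"sort"
)

func main() {
	heights := []uint64{10, 10, 10, 11, 12} // Eth1BlockHeight values, kept sorted on insertion
	blockHeight := uint64(11)
	n := sort.Search(len(heights), func(i int) bool { return heights[i] > blockHeight })
	fmt.Println(n) // 4 deposits were included at or before block 11
}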
// DepositByPubkey looks through historical deposits and finds one which contains
// a certain public key within its deposit data.
func (dc *DepositCache) DepositByPubkey(ctx context.Context, pubKey []byte) (*ethpb.Deposit, *big.Int) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.DepositByPubkey")
ctx, span := trace.StartSpan(ctx, "DepositsCache.DepositByPubkey")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
@@ -151,7 +166,7 @@ func (dc *DepositCache) DepositByPubkey(ctx context.Context, pubKey []byte) (*et
for _, ctnr := range dc.deposits {
if bytes.Equal(ctnr.Deposit.Data.PublicKey, pubKey) {
deposit = ctnr.Deposit
blockNum = ctnr.Block
blockNum = big.NewInt(int64(ctnr.Eth1BlockHeight))
break
}
}

View File

@@ -6,7 +6,8 @@ import (
"math/big"
"testing"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
dbpb "github.com/prysmaticlabs/prysm/proto/beacon/db"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -19,21 +20,7 @@ func TestBeaconDB_InsertDeposit_LogsOnNilDepositInsertion(t *testing.T) {
hook := logTest.NewGlobal()
dc := DepositCache{}
dc.InsertDeposit(context.Background(), nil, big.NewInt(1), 0, [32]byte{})
if len(dc.deposits) != 0 {
t.Fatal("Number of deposits changed")
}
if hook.LastEntry().Message != nilDepositErr {
t.Errorf("Did not log correct message, wanted \"Ignoring nil deposit insertion\", got \"%s\"", hook.LastEntry().Message)
}
}
func TestBeaconDB_InsertDeposit_LogsOnNilBlockNumberInsertion(t *testing.T) {
hook := logTest.NewGlobal()
dc := DepositCache{}
dc.InsertDeposit(context.Background(), &ethpb.Deposit{}, nil, 0, [32]byte{})
dc.InsertDeposit(context.Background(), nil, 1, 0, [32]byte{})
if len(dc.deposits) != 0 {
t.Fatal("Number of deposits changed")
@@ -47,27 +34,27 @@ func TestBeaconDB_InsertDeposit_MaintainsSortedOrderByIndex(t *testing.T) {
dc := DepositCache{}
insertions := []struct {
blkNum *big.Int
blkNum uint64
deposit *ethpb.Deposit
index int
index int64
}{
{
blkNum: big.NewInt(0),
blkNum: 0,
deposit: &ethpb.Deposit{},
index: 0,
},
{
blkNum: big.NewInt(0),
blkNum: 0,
deposit: &ethpb.Deposit{},
index: 3,
},
{
blkNum: big.NewInt(0),
blkNum: 0,
deposit: &ethpb.Deposit{},
index: 1,
},
{
blkNum: big.NewInt(0),
blkNum: 0,
deposit: &ethpb.Deposit{},
index: 4,
},
@@ -77,7 +64,7 @@ func TestBeaconDB_InsertDeposit_MaintainsSortedOrderByIndex(t *testing.T) {
dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{})
}
expectedIndices := []int{0, 1, 3, 4}
expectedIndices := []int64{0, 1, 3, 4}
for i, ei := range expectedIndices {
if dc.deposits[i].Index != ei {
t.Errorf("dc.deposits[%d].Index = %d, wanted %d", i, dc.deposits[i].Index, ei)
@@ -88,34 +75,34 @@ func TestBeaconDB_InsertDeposit_MaintainsSortedOrderByIndex(t *testing.T) {
func TestBeaconDB_AllDeposits_ReturnsAllDeposits(t *testing.T) {
dc := DepositCache{}
deposits := []*DepositContainer{
deposits := []*dbpb.DepositContainer{
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 11,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 11,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 12,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 12,
Deposit: &ethpb.Deposit{},
},
}
dc.deposits = deposits
@@ -129,34 +116,34 @@ func TestBeaconDB_AllDeposits_ReturnsAllDeposits(t *testing.T) {
func TestBeaconDB_AllDeposits_FiltersDepositUpToAndIncludingBlockNumber(t *testing.T) {
dc := DepositCache{}
deposits := []*DepositContainer{
deposits := []*dbpb.DepositContainer{
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 11,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 11,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 12,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 12,
Deposit: &ethpb.Deposit{},
},
}
dc.deposits = deposits
@@ -171,35 +158,35 @@ func TestBeaconDB_AllDeposits_FiltersDepositUpToAndIncludingBlockNumber(t *testi
func TestBeaconDB_DepositsNumberAndRootAtHeight_ReturnsAppropriateCountAndRoot(t *testing.T) {
dc := DepositCache{}
dc.deposits = []*DepositContainer{
dc.deposits = []*dbpb.DepositContainer{
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
depositRoot: bytesutil.ToBytes32([]byte("root")),
Eth1BlockHeight: 11,
Deposit: &ethpb.Deposit{},
DepositRoot: []byte("root"),
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 12,
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
Eth1BlockHeight: 12,
Deposit: &ethpb.Deposit{},
},
}
@@ -216,16 +203,16 @@ func TestBeaconDB_DepositsNumberAndRootAtHeight_ReturnsAppropriateCountAndRoot(t
func TestBeaconDB_DepositsNumberAndRootAtHeight_ReturnsEmptyTrieIfBlockHeightLessThanOldestDeposit(t *testing.T) {
dc := DepositCache{}
dc.deposits = []*DepositContainer{
dc.deposits = []*dbpb.DepositContainer{
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
depositRoot: bytesutil.ToBytes32([]byte("root")),
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{},
DepositRoot: []byte("root"),
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
depositRoot: bytesutil.ToBytes32([]byte("root")),
Eth1BlockHeight: 11,
Deposit: &ethpb.Deposit{},
DepositRoot: []byte("root"),
},
}
@@ -242,9 +229,9 @@ func TestBeaconDB_DepositsNumberAndRootAtHeight_ReturnsEmptyTrieIfBlockHeightLes
func TestBeaconDB_DepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
dc := DepositCache{}
dc.deposits = []*DepositContainer{
dc.deposits = []*dbpb.DepositContainer{
{
Block: big.NewInt(9),
Eth1BlockHeight: 9,
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: []byte("pk0"),
@@ -252,7 +239,7 @@ func TestBeaconDB_DepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
},
},
{
Block: big.NewInt(10),
Eth1BlockHeight: 10,
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: []byte("pk1"),
@@ -260,7 +247,7 @@ func TestBeaconDB_DepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
},
},
{
Block: big.NewInt(11),
Eth1BlockHeight: 11,
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: []byte("pk1"),
@@ -268,7 +255,7 @@ func TestBeaconDB_DepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
},
},
{
Block: big.NewInt(12),
Eth1BlockHeight: 12,
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: []byte("pk2"),

View File

@@ -7,7 +7,8 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
dbpb "github.com/prysmaticlabs/prysm/proto/beacon/db"
"github.com/prysmaticlabs/prysm/shared/hashutil"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -23,15 +24,15 @@ var (
// PendingDepositsFetcher specifically outlines a struct that can retrieve deposits
// which have not yet been included in the chain.
type PendingDepositsFetcher interface {
PendingContainers(ctx context.Context, beforeBlk *big.Int) []*DepositContainer
PendingContainers(ctx context.Context, beforeBlk *big.Int) []*dbpb.DepositContainer
}
// InsertPendingDeposit into the database. If deposit or block number are nil
// then this method does nothing.
func (dc *DepositCache) InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum *big.Int, index int, depositRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.InsertPendingDeposit")
func (dc *DepositCache) InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.InsertPendingDeposit")
defer span.End()
if d == nil || blockNum == nil {
if d == nil {
log.WithFields(log.Fields{
"block": blockNum,
"deposit": d,
@@ -40,7 +41,8 @@ func (dc *DepositCache) InsertPendingDeposit(ctx context.Context, d *ethpb.Depos
}
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
dc.pendingDeposits = append(dc.pendingDeposits, &DepositContainer{Deposit: d, Block: blockNum, Index: index, depositRoot: depositRoot})
dc.pendingDeposits = append(dc.pendingDeposits,
&dbpb.DepositContainer{Deposit: d, Eth1BlockHeight: blockNum, Index: index, DepositRoot: depositRoot[:]})
pendingDepositsCount.Inc()
span.AddAttributes(trace.Int64Attribute("count", int64(len(dc.pendingDeposits))))
}
@@ -54,9 +56,9 @@ func (dc *DepositCache) PendingDeposits(ctx context.Context, beforeBlk *big.Int)
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var depositCntrs []*DepositContainer
var depositCntrs []*dbpb.DepositContainer
for _, ctnr := range dc.pendingDeposits {
if beforeBlk == nil || beforeBlk.Cmp(ctnr.Block) > -1 {
if beforeBlk == nil || beforeBlk.Uint64() >= ctnr.Eth1BlockHeight {
depositCntrs = append(depositCntrs, ctnr)
}
}
@@ -77,15 +79,15 @@ func (dc *DepositCache) PendingDeposits(ctx context.Context, beforeBlk *big.Int)
// PendingContainers returns a list of deposit containers until the given block number
// (inclusive).
func (dc *DepositCache) PendingContainers(ctx context.Context, beforeBlk *big.Int) []*DepositContainer {
func (dc *DepositCache) PendingContainers(ctx context.Context, beforeBlk *big.Int) []*dbpb.DepositContainer {
ctx, span := trace.StartSpan(ctx, "DepositsCache.PendingDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var depositCntrs []*DepositContainer
var depositCntrs []*dbpb.DepositContainer
for _, ctnr := range dc.pendingDeposits {
if beforeBlk == nil || beforeBlk.Cmp(ctnr.Block) > -1 {
if beforeBlk == nil || beforeBlk.Uint64() >= ctnr.Eth1BlockHeight {
depositCntrs = append(depositCntrs, ctnr)
}
}
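The same filtering predicate now appears in AllDeposits, PendingDeposits, and PendingContainers: a nil beforeBlk means no upper bound, otherwise containers at or below beforeBlk are kept. A standalone sketch of that comparison with the new uint64 block heights:
package main

import (
	"fmt"
	"math/big"
)

func main() {
	heights := []uint64{2, 4, 6} // Eth1BlockHeight of each pending container
	beforeBlk := big.NewInt(4)
	var kept []uint64
	for _, h := range heights {
		if beforeBlk == nil || beforeBlk.Uint64() >= h {
			kept = append(kept, h) // keep containers at or below the cutoff
		}
	}
	fmt.Println(kept) // [2 4]
}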
@@ -151,9 +153,9 @@ func (dc *DepositCache) PrunePendingDeposits(ctx context.Context, merkleTreeInde
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
var cleanDeposits []*DepositContainer
var cleanDeposits []*dbpb.DepositContainer
for _, dp := range dc.pendingDeposits {
if dp.Index >= merkleTreeIndex {
if dp.Index >= int64(merkleTreeIndex) {
cleanDeposits = append(cleanDeposits, dp)
}
}

View File

@@ -7,14 +7,15 @@ import (
"testing"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
dbpb "github.com/prysmaticlabs/prysm/proto/beacon/db"
)
var _ = PendingDepositsFetcher(&DepositCache{})
func TestInsertPendingDeposit_OK(t *testing.T) {
dc := DepositCache{}
dc.InsertPendingDeposit(context.Background(), &ethpb.Deposit{}, big.NewInt(111), 100, [32]byte{})
dc.InsertPendingDeposit(context.Background(), &ethpb.Deposit{}, 111, 100, [32]byte{})
if len(dc.pendingDeposits) != 1 {
t.Error("Deposit not inserted")
@@ -23,7 +24,7 @@ func TestInsertPendingDeposit_OK(t *testing.T) {
func TestInsertPendingDeposit_ignoresNilDeposit(t *testing.T) {
dc := DepositCache{}
dc.InsertPendingDeposit(context.Background(), nil /*deposit*/, nil /*blockNum*/, 0, [32]byte{})
dc.InsertPendingDeposit(context.Background(), nil /*deposit*/, 0 /*blockNum*/, 0, [32]byte{})
if len(dc.pendingDeposits) > 0 {
t.Error("Unexpected deposit insertion")
@@ -34,7 +35,7 @@ func TestRemovePendingDeposit_OK(t *testing.T) {
db := DepositCache{}
depToRemove := &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}
otherDep := &ethpb.Deposit{Proof: [][]byte{[]byte("B")}}
db.pendingDeposits = []*DepositContainer{
db.pendingDeposits = []*dbpb.DepositContainer{
{Deposit: depToRemove, Index: 1},
{Deposit: otherDep, Index: 5},
}
@@ -47,7 +48,7 @@ func TestRemovePendingDeposit_OK(t *testing.T) {
func TestRemovePendingDeposit_IgnoresNilDeposit(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*DepositContainer{{Deposit: &ethpb.Deposit{}}}
dc.pendingDeposits = []*dbpb.DepositContainer{{Deposit: &ethpb.Deposit{}}}
dc.RemovePendingDeposit(context.Background(), nil /*deposit*/)
if len(dc.pendingDeposits) != 1 {
t.Errorf("Deposit unexpectedly removed")
@@ -57,7 +58,7 @@ func TestRemovePendingDeposit_IgnoresNilDeposit(t *testing.T) {
func TestPendingDeposit_RoundTrip(t *testing.T) {
dc := DepositCache{}
dep := &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}
dc.InsertPendingDeposit(context.Background(), dep, big.NewInt(111), 100, [32]byte{})
dc.InsertPendingDeposit(context.Background(), dep, 111, 100, [32]byte{})
dc.RemovePendingDeposit(context.Background(), dep)
if len(dc.pendingDeposits) != 0 {
t.Error("Failed to insert & delete a pending deposit")
@@ -67,10 +68,10 @@ func TestPendingDeposit_RoundTrip(t *testing.T) {
func TestPendingDeposits_OK(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*DepositContainer{
{Block: big.NewInt(2), Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}},
{Block: big.NewInt(4), Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("B")}}},
{Block: big.NewInt(6), Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("c")}}},
dc.pendingDeposits = []*dbpb.DepositContainer{
{Eth1BlockHeight: 2, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}},
{Eth1BlockHeight: 4, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("B")}}},
{Eth1BlockHeight: 6, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("c")}}},
}
deposits := dc.PendingDeposits(context.Background(), big.NewInt(4))
@@ -92,25 +93,24 @@ func TestPendingDeposits_OK(t *testing.T) {
func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*DepositContainer{
{Block: big.NewInt(2), Index: 2},
{Block: big.NewInt(4), Index: 4},
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
dc.pendingDeposits = []*dbpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 0)
expected := []*DepositContainer{
{Block: big.NewInt(2), Index: 2},
{Block: big.NewInt(4), Index: 4},
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
expected := []*dbpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
if !reflect.DeepEqual(dc.pendingDeposits, expected) {
t.Errorf("Unexpected deposits. got=%+v want=%+v", dc.pendingDeposits, expected)
}
@@ -119,40 +119,40 @@ func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
func TestPrunePendingDeposits_OK(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*DepositContainer{
{Block: big.NewInt(2), Index: 2},
{Block: big.NewInt(4), Index: 4},
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
dc.pendingDeposits = []*dbpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 6)
expected := []*DepositContainer{
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
expected := []*dbpb.DepositContainer{
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
if !reflect.DeepEqual(dc.pendingDeposits, expected) {
t.Errorf("Unexpected deposits. got=%+v want=%+v", dc.pendingDeposits, expected)
}
dc.pendingDeposits = []*DepositContainer{
{Block: big.NewInt(2), Index: 2},
{Block: big.NewInt(4), Index: 4},
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
dc.pendingDeposits = []*dbpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 10)
expected = []*DepositContainer{
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
expected = []*dbpb.DepositContainer{
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
if !reflect.DeepEqual(dc.pendingDeposits, expected) {

View File

@@ -3,12 +3,9 @@ package cache
import "github.com/prysmaticlabs/prysm/shared/featureconfig"
func init() {
featureconfig.Init(&featureconfig.Flag{
featureconfig.Init(&featureconfig.Flags{
EnableAttestationCache: true,
EnableEth1DataVoteCache: true,
EnableShuffledIndexCache: true,
EnableCommitteeCache: true,
EnableActiveCountCache: true,
EnableActiveIndicesCache: true,
})
}

View File

@@ -17,16 +17,17 @@ go_library(
"//beacon-chain/core/state/stateutils:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bls:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/mathutil:go_default_library",
"//shared/params:go_default_library",
"//shared/sliceutil:go_default_library",
"//shared/trieutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
@@ -37,6 +38,7 @@ go_test(
name = "go_default_test",
size = "medium",
srcs = [
"block_operations_fuzz_test.go",
"block_operations_test.go",
"block_test.go",
"eth1_data_test.go",
@@ -44,17 +46,15 @@ go_test(
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/core/state/stateutils:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bls:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"//shared/trieutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_phoreproject_bls//:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -4,18 +4,20 @@
package blocks
import (
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)
// NewGenesisBlock returns the canonical, genesis block for the beacon chain protocol.
func NewGenesisBlock(stateRoot []byte) *ethpb.BeaconBlock {
func NewGenesisBlock(stateRoot []byte) *ethpb.SignedBeaconBlock {
zeroHash := params.BeaconConfig().ZeroHash[:]
genBlock := &ethpb.BeaconBlock{
ParentRoot: zeroHash,
StateRoot: stateRoot,
Body: &ethpb.BeaconBlockBody{},
Signature: params.BeaconConfig().EmptySignature[:],
}
return genBlock
return &ethpb.SignedBeaconBlock{
Block: genBlock,
Signature: params.BeaconConfig().EmptySignature[:],
}
}
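With NewGenesisBlock now returning a SignedBeaconBlock envelope, callers reach block fields through .Block while the signature lives on the wrapper. A usage sketch, assuming the prysm and forked ethereumapis modules from this revision are on the module path:
package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
)

func main() {
	signed := blocks.NewGenesisBlock([]byte{0})
	// Block-level fields moved behind the wrapper; the signature stays on the envelope.
	fmt.Println(len(signed.Block.ParentRoot), len(signed.Signature)) // 32 96
}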

View File

@@ -4,23 +4,24 @@ import (
"bytes"
"context"
"encoding/binary"
"fmt"
"reflect"
"sort"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state/stateutils"
v "github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/mathutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/sliceutil"
"github.com/prysmaticlabs/prysm/shared/trieutil"
@@ -32,7 +33,31 @@ var log = logrus.WithField("prefix", "blocks")
var eth1DataCache = cache.NewEth1DataVoteCache()
// ErrSigFailedToVerify is returned when the signature of a block object (i.e. attestation, slashing, exit, etc.)
// fails to verify.
var ErrSigFailedToVerify = errors.New("signature did not verify")
func verifySigningRoot(obj interface{}, pub []byte, signature []byte, domain uint64) error {
publicKey, err := bls.PublicKeyFromBytes(pub)
if err != nil {
return errors.Wrap(err, "could not convert bytes to public key")
}
sig, err := bls.SignatureFromBytes(signature)
if err != nil {
return errors.Wrap(err, "could not convert bytes to signature")
}
root, err := ssz.HashTreeRoot(obj)
if err != nil {
return errors.Wrap(err, "could not get signing root")
}
if !sig.Verify(root[:], publicKey, domain) {
return ErrSigFailedToVerify
}
return nil
}
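The verification flow above is: SSZ hash-tree-root the object, then BLS-verify the signature over that root under a fork- and role-specific domain. A toy sketch of the hashing half, assuming go-ssz can hash this illustrative struct (toy is not a real consensus type):
package main

import (
	"fmt"

	"github.com/prysmaticlabs/go-ssz"
)

// toy stands in for a block, header, or exit being signed.
type toy struct {
	Slot uint64
}

func main() {
	root, err := ssz.HashTreeRoot(toy{Slot: 7})
	if err != nil {
		panic(err)
	}
	// The real code then checks sig.Verify(root[:], publicKey, domain) and
	// returns ErrSigFailedToVerify on failure.
	fmt.Printf("signing root: %x\n", root)
}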
// Deprecated: This method uses deprecated ssz.SigningRoot.
func verifyDepositDataSigningRoot(obj *ethpb.Deposit_Data, pub []byte, signature []byte, domain uint64) error {
publicKey, err := bls.PublicKeyFromBytes(pub)
if err != nil {
return errors.Wrap(err, "could not convert bytes to public key")
@@ -46,7 +71,7 @@ func verifySigningRoot(obj interface{}, pub []byte, signature []byte, domain uin
return errors.Wrap(err, "could not get signing root")
}
if !sig.Verify(root[:], publicKey, domain) {
return fmt.Errorf("signature did not verify")
return ErrSigFailedToVerify
}
return nil
}
@@ -61,7 +86,7 @@ func verifySignature(signedData []byte, pub []byte, signature []byte, domain uin
return errors.Wrap(err, "could not convert bytes to signature")
}
if !sig.Verify(signedData, publicKey, domain) {
return fmt.Errorf("signature did not verify")
return ErrSigFailedToVerify
}
return nil
}
@@ -157,9 +182,9 @@ func Eth1DataHasEnoughSupport(beaconState *pb.BeaconState, data *ethpb.Eth1Data)
// assert bls_verify(proposer.pubkey, signing_root(block), block.signature, get_domain(state, DOMAIN_BEACON_PROPOSER))
func ProcessBlockHeader(
beaconState *pb.BeaconState,
block *ethpb.BeaconBlock,
block *ethpb.SignedBeaconBlock,
) (*pb.BeaconState, error) {
beaconState, err := ProcessBlockHeaderNoVerify(beaconState, block)
beaconState, err := ProcessBlockHeaderNoVerify(beaconState, block.Block)
if err != nil {
return nil, err
}
@@ -169,15 +194,12 @@ func ProcessBlockHeader(
return nil, err
}
proposer := beaconState.Validators[idx]
if proposer.Slashed {
return nil, fmt.Errorf("proposer at index %d was previously slashed", idx)
}
// Verify proposer signature.
currentEpoch := helpers.CurrentEpoch(beaconState)
domain := helpers.Domain(beaconState.Fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer)
if err := verifySigningRoot(block, proposer.PublicKey, block.Signature, domain); err != nil {
return nil, errors.Wrap(err, "could not verify block signature")
if err := verifySigningRoot(block.Block, proposer.PublicKey, block.Signature, domain); err != nil {
return nil, ErrSigFailedToVerify
}
return beaconState, nil
@@ -210,11 +232,14 @@ func ProcessBlockHeaderNoVerify(
beaconState *pb.BeaconState,
block *ethpb.BeaconBlock,
) (*pb.BeaconState, error) {
if block == nil {
return nil, errors.New("nil block")
}
if beaconState.Slot != block.Slot {
return nil, fmt.Errorf("state slot: %d is different then block slot: %d", beaconState.Slot, block.Slot)
}
parentRoot, err := ssz.SigningRoot(beaconState.LatestBlockHeader)
parentRoot, err := ssz.HashTreeRoot(beaconState.LatestBlockHeader)
if err != nil {
return nil, err
}
@@ -224,17 +249,24 @@ func ProcessBlockHeaderNoVerify(
block.ParentRoot, parentRoot)
}
idx, err := helpers.BeaconProposerIndex(beaconState)
if err != nil {
return nil, err
}
proposer := beaconState.Validators[idx]
if proposer.Slashed {
return nil, fmt.Errorf("proposer at index %d was previously slashed", idx)
}
bodyRoot, err := ssz.HashTreeRoot(block.Body)
if err != nil {
return nil, err
}
emptySig := make([]byte, 96)
beaconState.LatestBlockHeader = &ethpb.BeaconBlockHeader{
Slot: block.Slot,
ParentRoot: block.ParentRoot,
StateRoot: params.BeaconConfig().ZeroHash[:],
BodyRoot: bodyRoot[:],
Signature: emptySig,
}
return beaconState, nil
}
@@ -358,8 +390,8 @@ func VerifyProposerSlashing(
) error {
proposer := beaconState.Validators[slashing.ProposerIndex]
if slashing.Header_1.Slot != slashing.Header_2.Slot {
return fmt.Errorf("mismatched header slots, received %d == %d", slashing.Header_1.Slot, slashing.Header_2.Slot)
if slashing.Header_1.Header.Slot != slashing.Header_2.Header.Slot {
return fmt.Errorf("mismatched header slots, received %d == %d", slashing.Header_1.Header.Slot, slashing.Header_2.Header.Slot)
}
if proto.Equal(slashing.Header_1, slashing.Header_2) {
return errors.New("expected slashing headers to differ")
@@ -368,10 +400,10 @@ func VerifyProposerSlashing(
return fmt.Errorf("validator with key %#x is not slashable", proposer.PublicKey)
}
// Using headerEpoch1 here because both of the headers should have the same epoch.
domain := helpers.Domain(beaconState.Fork, helpers.StartSlot(slashing.Header_1.Slot), params.BeaconConfig().DomainBeaconProposer)
headers := append([]*ethpb.BeaconBlockHeader{slashing.Header_1}, slashing.Header_2)
domain := helpers.Domain(beaconState.Fork, helpers.StartSlot(slashing.Header_1.Header.Slot), params.BeaconConfig().DomainBeaconProposer)
headers := []*ethpb.SignedBeaconBlockHeader{slashing.Header_1, slashing.Header_2}
for _, header := range headers {
if err := verifySigningRoot(header, proposer.PublicKey, header.Signature, domain); err != nil {
if err := verifySigningRoot(header.Header, proposer.PublicKey, header.Signature, domain); err != nil {
return errors.Wrap(err, "could not verify beacon block header")
}
}
@@ -384,19 +416,15 @@ func VerifyProposerSlashing(
//
// Spec pseudocode definition:
// def process_attester_slashing(state: BeaconState, attester_slashing: AttesterSlashing) -> None:
// """
// Process ``AttesterSlashing`` operation.
// """
// attestation_1 = attester_slashing.attestation_1
// attestation_2 = attester_slashing.attestation_2
// assert is_slashable_attestation_data(attestation_1.data, attestation_2.data)
// validate_indexed_attestation(state, attestation_1)
// validate_indexed_attestation(state, attestation_2)
// assert is_valid_indexed_attestation(state, attestation_1)
// assert is_valid_indexed_attestation(state, attestation_2)
//
// slashed_any = False
// attesting_indices_1 = attestation_1.custody_bit_0_indices + attestation_1.custody_bit_1_indices
// attesting_indices_2 = attestation_2.custody_bit_0_indices + attestation_2.custody_bit_1_indices
// for index in sorted(set(attesting_indices_1).intersection(attesting_indices_2)):
// indices = set(attestation_1.attesting_indices).intersection(attestation_2.attesting_indices)
// for index in sorted(indices):
// if is_slashable_validator(state.validators[index], get_current_epoch(state)):
// slash_validator(state, index)
// slashed_any = True
@@ -468,10 +496,8 @@ func IsSlashableAttestationData(data1 *ethpb.AttestationData, data2 *ethpb.Attes
}
func slashableAttesterIndices(slashing *ethpb.AttesterSlashing) []uint64 {
att1 := slashing.Attestation_1
att2 := slashing.Attestation_1
indices1 := append(att1.CustodyBit_0Indices, att1.CustodyBit_1Indices...)
indices2 := append(att2.CustodyBit_0Indices, att2.CustodyBit_1Indices...)
indices1 := slashing.Attestation_1.AttestingIndices
indices2 := slashing.Attestation_1.AttestingIndices
return sliceutil.IntersectionUint64(indices1, indices2)
}
@@ -509,10 +535,11 @@ func ProcessAttestationsNoVerify(ctx context.Context, beaconState *pb.BeaconStat
// data = attestation.data
// assert data.index < get_committee_count_at_slot(state, data.slot)
// assert data.target.epoch in (get_previous_epoch(state), get_current_epoch(state))
// assert data.target.epoch == compute_epoch_at_slot(data.slot)
// assert data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot <= data.slot + SLOTS_PER_EPOCH
//
// committee = get_beacon_committee(state, data.slot, data.index)
// assert len(attestation.aggregation_bits) == len(attestation.custody_bits) == len(committee)
// assert len(attestation.aggregation_bits) == len(committee)
//
// pending_attestation = PendingAttestation(
// data=data,
@@ -544,6 +571,10 @@ func ProcessAttestationNoVerify(ctx context.Context, beaconState *pb.BeaconState
ctx, span := trace.StartSpan(ctx, "core.ProcessAttestationNoVerify")
defer span.End()
if att == nil || att.Data == nil || att.Data.Target == nil {
return nil, errors.New("nil attestation data target")
}
data := att.Data
if data.Target.Epoch != helpers.PrevEpoch(beaconState) && data.Target.Epoch != helpers.CurrentEpoch(beaconState) {
return nil, fmt.Errorf(
@@ -553,6 +584,9 @@ func ProcessAttestationNoVerify(ctx context.Context, beaconState *pb.BeaconState
helpers.CurrentEpoch(beaconState),
)
}
if helpers.SlotToEpoch(data.Slot) != data.Target.Epoch {
return nil, fmt.Errorf("data slot is not in the same epoch as target %d != %d", helpers.SlotToEpoch(data.Slot), data.Target.Epoch)
}
s := att.Data.Slot
minInclusionCheck := s+params.BeaconConfig().MinAttestationInclusionDelay <= beaconState.Slot
@@ -618,60 +652,38 @@ func ProcessAttestationNoVerify(ctx context.Context, beaconState *pb.BeaconState
// ConvertToIndexed converts an attestation to its (almost) indexed-verifiable form.
//
// Note on the spec pseudocode definition: get_attesting_indices previously used the state to determine
// the attestation committee. Since the committee is now passed in as an argument, the state is no
// longer needed.
//
// Spec pseudocode definition:
// def get_indexed_attestation(state: BeaconState, attestation: Attestation) -> IndexedAttestation:
// """
// Return the indexed attestation corresponding to ``attestation``.
// """
// attesting_indices = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)
// custody_bit_1_indices = get_attesting_indices(state, attestation.data, attestation.custody_bits)
// assert custody_bit_1_indices.issubset(attesting_indices)
// custody_bit_0_indices = attesting_indices.difference(custody_bit_1_indices)
//
// return IndexedAttestation(
// custody_bit_0_indices=sorted(custody_bit_0_indices),
// custody_bit_1_indices=sorted(custody_bit_1_indices),
// attesting_indices=sorted(attesting_indices),
// data=attestation.data,
// signature=attestation.signature,
// )
func ConvertToIndexed(ctx context.Context, state *pb.BeaconState, attestation *ethpb.Attestation) (*ethpb.IndexedAttestation, error) {
func ConvertToIndexed(ctx context.Context, attestation *ethpb.Attestation, committee []uint64) (*ethpb.IndexedAttestation, error) {
ctx, span := trace.StartSpan(ctx, "core.ConvertToIndexed")
defer span.End()
attIndices, err := helpers.AttestingIndices(state, attestation.Data, attestation.AggregationBits)
attIndices, err := helpers.AttestingIndices(attestation.AggregationBits, committee)
if err != nil {
return nil, errors.Wrap(err, "could not get attesting indices")
}
cb1i, err := helpers.AttestingIndices(state, attestation.Data, attestation.CustodyBits)
if err != nil {
return nil, err
}
if !sliceutil.SubsetUint64(cb1i, attIndices) {
return nil, fmt.Errorf("%v is not a subset of %v", cb1i, attIndices)
}
cb1Map := make(map[uint64]bool)
for _, idx := range cb1i {
cb1Map[idx] = true
}
cb0i := []uint64{}
for _, idx := range attIndices {
if !cb1Map[idx] {
cb0i = append(cb0i, idx)
}
}
sort.Slice(cb0i, func(i, j int) bool {
return cb0i[i] < cb0i[j]
})
sort.Slice(cb1i, func(i, j int) bool {
return cb1i[i] < cb1i[j]
sort.Slice(attIndices, func(i, j int) bool {
return attIndices[i] < attIndices[j]
})
inAtt := &ethpb.IndexedAttestation{
Data: attestation.Data,
Signature: attestation.Signature,
CustodyBit_0Indices: cb0i,
CustodyBit_1Indices: cb1i,
Data: attestation.Data,
Signature: attestation.Signature,
AttestingIndices: attIndices,
}
return inAtt, nil
}
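A sketch of the new call pattern, mirroring how VerifyAttestation in this diff resolves the committee first and then converts (the example package and function name are illustrative only):
package example

import (
	"context"

	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
	"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)

// indexAttestation resolves the committee once and hands it to ConvertToIndexed,
// which no longer needs the full state.
func indexAttestation(ctx context.Context, st *pb.BeaconState, att *ethpb.Attestation) (*ethpb.IndexedAttestation, error) {
	committee, err := helpers.BeaconCommitteeFromState(st, att.Data.Slot, att.Data.CommitteeIndex)
	if err != nil {
		return nil, err
	}
	return blocks.ConvertToIndexed(ctx, att, committee)
}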
@@ -683,33 +695,19 @@ func ConvertToIndexed(ctx context.Context, state *pb.BeaconState, attestation *e
// """
// Check if ``indexed_attestation`` has valid indices and signature.
// """
// bit_0_indices = indexed_attestation.custody_bit_0_indices
// bit_1_indices = indexed_attestation.custody_bit_1_indices
// indices = indexed_attestation.attesting_indices
//
// # Verify no index has custody bit equal to 1 [to be removed in phase 1]
// if not len(bit_1_indices) == 0:
// return False
// # Verify max number of indices
// if not len(bit_0_indices) + len(bit_1_indices) <= MAX_VALIDATORS_PER_COMMITTEE:
// return False
// # Verify index sets are disjoint
// if not len(set(bit_0_indices).intersection(bit_1_indices)) == 0:
// return False
// # Verify indices are sorted
// if not (bit_0_indices == sorted(bit_0_indices) and bit_1_indices == sorted(bit_1_indices)):
// if not len(indices) <= MAX_VALIDATORS_PER_COMMITTEE:
// return False
// # Verify indices are sorted and unique
// if not indices == sorted(set(indices)):
// # Verify aggregate signature
// if not bls_verify_multiple(
// pubkeys=[
// bls_aggregate_pubkeys([state.validators[i].pubkey for i in bit_0_indices]),
// bls_aggregate_pubkeys([state.validators[i].pubkey for i in bit_1_indices]),
// ],
// message_hashes=[
// hash_tree_root(AttestationDataAndCustodyBit(data=indexed_attestation.data, custody_bit=0b0)),
// hash_tree_root(AttestationDataAndCustodyBit(data=indexed_attestation.data, custody_bit=0b1)),
// ],
// if not bls_verify(
// pubkey=bls_aggregate_pubkeys([state.validators[i].pubkey for i in indices]),
// message_hash=hash_tree_root(indexed_attestation.data),
// signature=indexed_attestation.signature,
// domain=get_domain(state, DOMAIN_ATTESTATION, indexed_attestation.data.target.epoch),
// domain=get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch),
// ):
// return False
// return True
@@ -717,87 +715,48 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState *pb.BeaconState,
ctx, span := trace.StartSpan(ctx, "core.VerifyIndexedAttestation")
defer span.End()
custodyBit0Indices := indexedAtt.CustodyBit_0Indices
custodyBit1Indices := indexedAtt.CustodyBit_1Indices
indices := indexedAtt.AttestingIndices
// To be removed in phase 1
if len(custodyBit1Indices) != 0 {
return fmt.Errorf("expected no bit 1 indices, received %v", len(custodyBit1Indices))
if uint64(len(indices)) > params.BeaconConfig().MaxValidatorsPerCommittee {
return fmt.Errorf("validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE, %d > %d", len(indices), params.BeaconConfig().MaxValidatorsPerCommittee)
}
maxIndices := params.BeaconConfig().MaxValidatorsPerCommittee
totalIndicesLength := uint64(len(custodyBit0Indices) + len(custodyBit1Indices))
if totalIndicesLength > maxIndices {
return fmt.Errorf("over max number of allowed indices per attestation: %d", totalIndicesLength)
set := make(map[uint64]bool)
setIndices := make([]uint64, 0, len(indices))
for _, i := range indices {
if ok := set[i]; ok {
continue
}
setIndices = append(setIndices, i)
set[i] = true
}
custodyBitIntersection := sliceutil.IntersectionUint64(custodyBit0Indices, custodyBit1Indices)
if len(custodyBitIntersection) != 0 {
return fmt.Errorf("expected disjoint indices intersection, received %v", custodyBitIntersection)
}
custodyBit0IndicesIsSorted := sort.SliceIsSorted(custodyBit0Indices, func(i, j int) bool {
return custodyBit0Indices[i] < custodyBit0Indices[j]
sort.SliceStable(setIndices, func(i, j int) bool {
return setIndices[i] < setIndices[j]
})
if !custodyBit0IndicesIsSorted {
return fmt.Errorf("custody Bit0 indices are not sorted, got %v", custodyBit0Indices)
}
custodyBit1IndicesIsSorted := sort.SliceIsSorted(custodyBit1Indices, func(i, j int) bool {
return custodyBit1Indices[i] < custodyBit1Indices[j]
})
if !custodyBit1IndicesIsSorted {
return fmt.Errorf("custody Bit1 indices are not sorted, got %v", custodyBit1Indices)
if !reflect.DeepEqual(setIndices, indices) {
return errors.New("attesting indices is not uniquely sorted")
}
domain := helpers.Domain(beaconState.Fork, indexedAtt.Data.Target.Epoch, params.BeaconConfig().DomainBeaconAttester)
var pubkeys []*bls.PublicKey
if len(custodyBit0Indices) > 0 {
pubkey, err := bls.PublicKeyFromBytes(beaconState.Validators[custodyBit0Indices[0]].PublicKey)
var pubkey *bls.PublicKey
var err error
if len(indices) > 0 {
pubkey, err = bls.PublicKeyFromBytes(beaconState.Validators[indices[0]].PublicKey)
if err != nil {
return errors.Wrap(err, "could not deserialize validator public key")
}
for _, i := range custodyBit0Indices[1:] {
for _, i := range indices[1:] {
pk, err := bls.PublicKeyFromBytes(beaconState.Validators[i].PublicKey)
if err != nil {
return errors.Wrap(err, "could not deserialize validator public key")
}
pubkey.Aggregate(pk)
}
pubkeys = append(pubkeys, pubkey)
}
if len(custodyBit1Indices) > 0 {
pubkey, err := bls.PublicKeyFromBytes(beaconState.Validators[custodyBit1Indices[0]].PublicKey)
if err != nil {
return errors.Wrap(err, "could not deserialize validator public key")
}
for _, i := range custodyBit1Indices[1:] {
pk, err := bls.PublicKeyFromBytes(beaconState.Validators[i].PublicKey)
if err != nil {
return errors.Wrap(err, "could not deserialize validator public key")
}
pubkey.Aggregate(pk)
}
pubkeys = append(pubkeys, pubkey)
}
var msgs [][32]byte
cus0 := &pb.AttestationDataAndCustodyBit{Data: indexedAtt.Data, CustodyBit: false}
cus1 := &pb.AttestationDataAndCustodyBit{Data: indexedAtt.Data, CustodyBit: true}
if len(custodyBit0Indices) > 0 {
cus0Root, err := ssz.HashTreeRoot(cus0)
if err != nil {
return errors.Wrap(err, "could not tree hash att data and custody bit 0")
}
msgs = append(msgs, cus0Root)
}
if len(custodyBit1Indices) > 0 {
cus1Root, err := ssz.HashTreeRoot(cus1)
if err != nil {
return errors.Wrap(err, "could not tree hash att data and custody bit 1")
}
msgs = append(msgs, cus1Root)
messageHash, err := ssz.HashTreeRoot(indexedAtt.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash att data")
}
sig, err := bls.SignatureFromBytes(indexedAtt.Signature)
@@ -805,10 +764,9 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState *pb.BeaconState,
return errors.Wrap(err, "could not convert bytes to signature")
}
hasVotes := len(custodyBit0Indices) > 0 || len(custodyBit1Indices) > 0
if hasVotes && !sig.VerifyAggregate(pubkeys, msgs, domain) {
return fmt.Errorf("attestation aggregation signature did not verify")
voted := len(indices) > 0
if voted && !sig.Verify(messageHash[:], pubkey, domain) {
return ErrSigFailedToVerify
}
return nil
}
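The attesting-indices validity check above boils down to: the list must be sorted and contain no duplicates. A standalone sketch of that predicate (uniquelySorted is a local illustration):
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// uniquelySorted reports whether indices are strictly increasing with no duplicates,
// the same dedupe-sort-compare check VerifyIndexedAttestation performs.
func uniquelySorted(indices []uint64) bool {
	seen := make(map[uint64]bool)
	dedup := make([]uint64, 0, len(indices))
	for _, i := range indices {
		if seen[i] {
			continue
		}
		seen[i] = true
		dedup = append(dedup, i)
	}
	sort.Slice(dedup, func(i, j int) bool { return dedup[i] < dedup[j] })
	return reflect.DeepEqual(dedup, indices)
}

func main() {
	fmt.Println(uniquelySorted([]uint64{1, 2, 5})) // true
	fmt.Println(uniquelySorted([]uint64{2, 1, 5})) // false: not sorted
	fmt.Println(uniquelySorted([]uint64{1, 1, 5})) // false: duplicate index
}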
@@ -816,7 +774,11 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState *pb.BeaconState,
// VerifyAttestation converts an attestation into an indexed attestation and verifies
// the signature in that attestation.
func VerifyAttestation(ctx context.Context, beaconState *pb.BeaconState, att *ethpb.Attestation) error {
indexedAtt, err := ConvertToIndexed(ctx, beaconState, att)
committee, err := helpers.BeaconCommitteeFromState(beaconState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
return err
}
indexedAtt, err := ConvertToIndexed(ctx, att, committee)
if err != nil {
return errors.Wrap(err, "could not convert to indexed attestation")
}
@@ -844,6 +806,29 @@ func ProcessDeposits(ctx context.Context, beaconState *pb.BeaconState, body *eth
return beaconState, nil
}
// ProcessPreGenesisDeposit processes a deposit for the beacon state before chainstart.
func ProcessPreGenesisDeposit(ctx context.Context, beaconState *pb.BeaconState,
deposit *ethpb.Deposit, validatorIndices map[[48]byte]int) (*pb.BeaconState, error) {
var err error
beaconState, err = ProcessDeposit(beaconState, deposit, validatorIndices)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit")
}
pubkey := deposit.Data.PublicKey
index, ok := validatorIndices[bytesutil.ToBytes48(pubkey)]
if !ok {
return beaconState, nil
}
balance := beaconState.Balances[index]
beaconState.Validators[index].EffectiveBalance = mathutil.Min(balance-balance%params.BeaconConfig().EffectiveBalanceIncrement, params.BeaconConfig().MaxEffectiveBalance)
if beaconState.Validators[index].EffectiveBalance ==
params.BeaconConfig().MaxEffectiveBalance {
beaconState.Validators[index].ActivationEligibilityEpoch = 0
beaconState.Validators[index].ActivationEpoch = 0
}
return beaconState, nil
}
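Worked example of the pre-genesis effective-balance rule, using the mainnet constants (EFFECTIVE_BALANCE_INCREMENT = 1 ETH, MAX_EFFECTIVE_BALANCE = 32 ETH, both in Gwei): the balance is rounded down to the increment and capped at the max, and only a validator that hits the cap is activated at epoch 0.
package main

import "fmt"

const (
	increment           uint64 = 1000000000  // EFFECTIVE_BALANCE_INCREMENT (1 ETH in Gwei)
	maxEffectiveBalance uint64 = 32000000000 // MAX_EFFECTIVE_BALANCE (32 ETH in Gwei)
)

func effectiveBalance(balance uint64) uint64 {
	eb := balance - balance%increment // round down to the nearest increment
	if eb > maxEffectiveBalance {
		eb = maxEffectiveBalance // cap at the maximum
	}
	return eb
}

func main() {
	fmt.Println(effectiveBalance(32500000000)) // 32000000000: at the cap, so activation epoch becomes 0
	fmt.Println(effectiveBalance(31700000000)) // 31000000000: below the cap, so activation is not fast-tracked
}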
// ProcessDeposit takes in a deposit object and inserts it
// into the registry as a new validator or balance change.
//
@@ -898,7 +883,7 @@ func ProcessDeposit(beaconState *pb.BeaconState, deposit *ethpb.Deposit, valInde
if !ok {
domain := bls.ComputeDomain(params.BeaconConfig().DomainDeposit)
depositSig := deposit.Data.Signature
if err := verifySigningRoot(deposit.Data, pubKey, depositSig, domain); err != nil {
if err := verifyDepositDataSigningRoot(deposit.Data, pubKey, depositSig, domain); err != nil {
// Ignore this error as in the spec pseudo code.
log.Errorf("Skipping deposit: could not verify deposit data signature: %v", err)
return beaconState, nil
@@ -918,6 +903,7 @@ func ProcessDeposit(beaconState *pb.BeaconState, deposit *ethpb.Deposit, valInde
EffectiveBalance: effectiveBalance,
})
beaconState.Balances = append(beaconState.Balances, amount)
valIndexMap[bytesutil.ToBytes48(pubKey)] = len(beaconState.Validators) - 1
} else {
beaconState = helpers.IncreaseBalance(beaconState, uint64(index), amount)
}
@@ -977,7 +963,7 @@ func ProcessVoluntaryExits(ctx context.Context, beaconState *pb.BeaconState, bod
if err := VerifyExit(beaconState, exit); err != nil {
return nil, errors.Wrapf(err, "could not verify exit %d", idx)
}
beaconState, err = v.InitiateValidatorExit(beaconState, exit.ValidatorIndex)
beaconState, err = v.InitiateValidatorExit(beaconState, exit.Exit.ValidatorIndex)
if err != nil {
return nil, err
}
@@ -995,7 +981,7 @@ func ProcessVoluntaryExitsNoVerify(
exits := body.VoluntaryExits
for idx, exit := range exits {
beaconState, err = v.InitiateValidatorExit(beaconState, exit.ValidatorIndex)
beaconState, err = v.InitiateValidatorExit(beaconState, exit.Exit.ValidatorIndex)
if err != nil {
return nil, errors.Wrapf(err, "failed to process voluntary exit at index %d", idx)
}
@@ -1022,7 +1008,12 @@ func ProcessVoluntaryExitsNoVerify(
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, exit.epoch)
// assert bls_verify(validator.pubkey, signing_root(exit), exit.signature, domain)
func VerifyExit(beaconState *pb.BeaconState, exit *ethpb.VoluntaryExit) error {
func VerifyExit(beaconState *pb.BeaconState, signed *ethpb.SignedVoluntaryExit) error {
if signed == nil || signed.Exit == nil {
return errors.New("nil exit")
}
exit := signed.Exit
if int(exit.ValidatorIndex) >= len(beaconState.Validators) {
return fmt.Errorf("validator index out of bound %d > %d", exit.ValidatorIndex, len(beaconState.Validators))
}
@@ -1050,8 +1041,8 @@ func VerifyExit(beaconState *pb.BeaconState, exit *ethpb.VoluntaryExit) error {
)
}
domain := helpers.Domain(beaconState.Fork, exit.Epoch, params.BeaconConfig().DomainVoluntaryExit)
if err := verifySigningRoot(exit, validator.PublicKey, exit.Signature, domain); err != nil {
return errors.Wrap(err, "could not verify voluntary exit signature")
if err := verifySigningRoot(exit, validator.PublicKey, signed.Signature, domain); err != nil {
return ErrSigFailedToVerify
}
return nil
}

View File

@@ -0,0 +1,36 @@
package blocks_test
import (
"context"
"testing"
fuzz "github.com/google/gofuzz"
eth "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
ethereum_beacon_p2p_v1 "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
func TestFuzzProcessAttestation_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
ctx := context.Background()
state := &ethereum_beacon_p2p_v1.BeaconState{}
att := &eth.Attestation{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(att)
_, _ = blocks.ProcessAttestationNoVerify(ctx, state, att)
}
}
func TestFuzzProcessBlockHeader_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
block := &eth.SignedBeaconBlock{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(block)
_, _ = blocks.ProcessBlockHeader(state, block)
}
}

File diff suppressed because it is too large

View File

@@ -11,11 +11,11 @@ func TestGenesisBlock_InitializedCorrectly(t *testing.T) {
stateHash := []byte{0}
b1 := blocks.NewGenesisBlock(stateHash)
if b1.ParentRoot == nil {
if b1.Block.ParentRoot == nil {
t.Error("genesis block missing ParentHash field")
}
if !bytes.Equal(b1.StateRoot, stateHash) {
if !bytes.Equal(b1.Block.StateRoot, stateHash) {
t.Error("genesis block StateRootHash32 isn't initialized correctly")
}
}

View File

@@ -4,9 +4,9 @@ import (
"fmt"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)

View File

@@ -36,10 +36,10 @@ go_test(
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/core/state/stateutils:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/params/spectest:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@in_gopkg_d4l3k_messagediff_v1//:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
@@ -69,10 +69,10 @@ go_test(
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/core/state/stateutils:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/params/spectest:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@in_gopkg_d4l3k_messagediff_v1//:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",

View File

@@ -4,9 +4,9 @@ import (
"path"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
)

View File

@@ -4,9 +4,9 @@ import (
"path"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
)

View File

@@ -8,11 +8,10 @@ import (
"github.com/bazelbuild/rules_go/go/tools/bazel"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
"gopkg.in/d4l3k/messagediff.v1"
@@ -26,7 +25,6 @@ func runBlockHeaderTest(t *testing.T, config string) {
testFolders, testsFolderPath := testutil.TestFolders(t, config, "operations/block_header/pyspec_tests")
for _, folder := range testFolders {
t.Run(folder.Name(), func(t *testing.T) {
helpers.ClearAllCaches()
blockFile, err := testutil.BazelFileBytes(testsFolderPath, folder.Name(), "block.ssz")
if err != nil {
t.Fatal(err)
@@ -54,7 +52,8 @@ func runBlockHeaderTest(t *testing.T, config string) {
t.Fatal(err)
}
beaconState, err := blocks.ProcessBlockHeader(preBeaconState, block)
// Spectest blocks are not signed, so we'll call NoVerify to skip sig verification.
beaconState, err := blocks.ProcessBlockHeaderNoVerify(preBeaconState, block)
if postSSZExists {
if err != nil {
t.Fatalf("Unexpected error: %v", err)

View File

@@ -10,11 +10,10 @@ import (
"github.com/bazelbuild/rules_go/go/tools/bazel"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
"gopkg.in/d4l3k/messagediff.v1"
@@ -28,7 +27,6 @@ func runBlockProcessingTest(t *testing.T, config string) {
testFolders, testsFolderPath := testutil.TestFolders(t, config, "sanity/blocks/pyspec_tests")
for _, folder := range testFolders {
t.Run(folder.Name(), func(t *testing.T) {
helpers.ClearAllCaches()
preBeaconStateFile, err := testutil.BazelFileBytes(testsFolderPath, folder.Name(), "pre.ssz")
if err != nil {
t.Fatal(err)
@@ -55,7 +53,7 @@ func runBlockProcessingTest(t *testing.T, config string) {
if err != nil {
t.Fatal(err)
}
block := &ethpb.BeaconBlock{}
block := &ethpb.SignedBeaconBlock{}
if err := ssz.Unmarshal(blockFile, block); err != nil {
t.Fatalf("Failed to unmarshal: %v", err)
}

View File

@@ -4,9 +4,9 @@ import (
"path"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
)

View File

@@ -4,9 +4,9 @@ import (
"path"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
)

View File

@@ -4,9 +4,9 @@ import (
"path"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -24,12 +24,12 @@ func runVoluntaryExitTest(t *testing.T, config string) {
if err != nil {
t.Fatal(err)
}
voluntaryExit := &ethpb.VoluntaryExit{}
voluntaryExit := &ethpb.SignedVoluntaryExit{}
if err := ssz.Unmarshal(exitFile, voluntaryExit); err != nil {
t.Fatalf("Failed to unmarshal: %v", err)
}
body := &ethpb.BeaconBlockBody{VoluntaryExits: []*ethpb.VoluntaryExit{voluntaryExit}}
body := &ethpb.BeaconBlockBody{VoluntaryExits: []*ethpb.SignedVoluntaryExit{voluntaryExit}}
testutil.RunBlockOperationTest(t, folderPath, body, blocks.ProcessVoluntaryExits)
})
}

View File

@@ -2,20 +2,17 @@ load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"epoch_processing.go",
"participation.go",
],
srcs = ["epoch_processing.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core/epoch",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/mathutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)
@@ -24,17 +21,17 @@ go_test(
name = "go_default_test",
size = "small",
srcs = [
"epoch_processing_fuzz_test.go",
"epoch_processing_test.go",
"participation_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -5,93 +5,32 @@
package epoch
import (
"bytes"
"fmt"
"sort"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/mathutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
// MatchedAttestations is an object that contains the correctly
// voted attestations based on source, target and head criteria.
type MatchedAttestations struct {
source []*pb.PendingAttestation
Target []*pb.PendingAttestation
head []*pb.PendingAttestation
}
var epochState *pb.BeaconState
// MatchAttestations matches the attestations gathered in a span of an epoch
// and categorizes them by whether they correctly voted for source, target and head.
// We combine the individual spec helpers for efficiency and to achieve O(N) run time.
//
// Spec pseudocode definition:
// def get_matching_source_attestations(state: BeaconState, epoch: Epoch) -> List[PendingAttestation]:
// assert epoch in (get_current_epoch(state), get_previous_epoch(state))
// return state.current_epoch_attestations if epoch == get_current_epoch(state) else state.previous_epoch_attestations
//
// def get_matching_target_attestations(state: BeaconState, epoch: Epoch) -> List[PendingAttestation]:
// return [
// a for a in get_matching_source_attestations(state, epoch)
// if a.data.target_root == get_block_root(state, epoch)
// ]
//
// def get_matching_head_attestations(state: BeaconState, epoch: Epoch) -> List[PendingAttestation]:
// return [
// a for a in get_matching_source_attestations(state, epoch)
// if a.data.beacon_block_root == get_block_root_at_slot(state, get_attestation_data_slot(state, a.data))
// ]
func MatchAttestations(state *pb.BeaconState, epoch uint64) (*MatchedAttestations, error) {
currentEpoch := helpers.CurrentEpoch(state)
previousEpoch := helpers.PrevEpoch(state)
// sortableIndices implements the Sort interface to sort newly activated validator indices
// by activation eligibility epoch and then by index number.
type sortableIndices []uint64
// Input epoch for matching the source attestations has to be within range
// of current epoch & previous epoch.
if epoch != currentEpoch && epoch != previousEpoch {
return nil, fmt.Errorf("input epoch: %d != current epoch: %d or previous epoch: %d",
epoch, currentEpoch, previousEpoch)
func (s sortableIndices) Len() int { return len(s) }
func (s sortableIndices) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s sortableIndices) Less(i, j int) bool {
if epochState.Validators[s[i]].ActivationEligibilityEpoch == epochState.Validators[s[j]].ActivationEligibilityEpoch {
return s[i] < s[j]
}
// Decide if the source attestations are coming from current or previous epoch.
var srcAtts []*pb.PendingAttestation
if epoch == currentEpoch {
srcAtts = state.CurrentEpochAttestations
} else {
srcAtts = state.PreviousEpochAttestations
}
targetRoot, err := helpers.BlockRoot(state, epoch)
if err != nil {
return nil, errors.Wrapf(err, "could not get block root for epoch %d", epoch)
}
tgtAtts := make([]*pb.PendingAttestation, 0, len(srcAtts))
headAtts := make([]*pb.PendingAttestation, 0, len(srcAtts))
for _, srcAtt := range srcAtts {
// If the target root matches attestation's target root,
// then we know this attestation has correctly voted for target.
if bytes.Equal(srcAtt.Data.Target.Root, targetRoot) {
tgtAtts = append(tgtAtts, srcAtt)
}
headRoot, err := helpers.BlockRootAtSlot(state, srcAtt.Data.Slot)
if err != nil {
return nil, errors.Wrapf(err, "could not get block root for slot %d", srcAtt.Data.Slot)
}
if bytes.Equal(srcAtt.Data.BeaconBlockRoot, headRoot) {
headAtts = append(headAtts, srcAtt)
}
}
return &MatchedAttestations{
source: srcAtts,
Target: tgtAtts,
head: headAtts,
}, nil
return epochState.Validators[s[i]].ActivationEligibilityEpoch < epochState.Validators[s[j]].ActivationEligibilityEpoch
}
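The comparator above relies on the package-level epochState variable so that sort.Sort can see the validator set. A small runnable sketch of the same ordering expressed with sort.Slice and a closure, which avoids the shared global (the validator struct here is a minimal stand-in, not the ethpb type):

package main

import (
	"fmt"
	"sort"
)

// Minimal stand-in for the one field the comparator needs.
type validator struct{ ActivationEligibilityEpoch uint64 }

func main() {
	validators := []validator{{5}, {3}, {3}, {7}}
	activationQ := []uint64{0, 1, 2, 3}

	// Same ordering as sortableIndices: eligibility epoch first,
	// then index number as the tie-breaker.
	sort.Slice(activationQ, func(i, j int) bool {
		vi, vj := validators[activationQ[i]], validators[activationQ[j]]
		if vi.ActivationEligibilityEpoch == vj.ActivationEligibilityEpoch {
			return activationQ[i] < activationQ[j]
		}
		return vi.ActivationEligibilityEpoch < vj.ActivationEligibilityEpoch
	})
	fmt.Println(activationQ) // [1 2 0 3]
}

Whether to close over the state or park it in a package-level variable for sort.Sort is a design choice; the closure keeps the comparator free of shared mutable state.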
// AttestingBalance returns the total balance from all the attesting indices.
@@ -111,183 +50,39 @@ func AttestingBalance(state *pb.BeaconState, atts []*pb.PendingAttestation) (uin
return helpers.TotalBalance(state, indices), nil
}
// ProcessJustificationAndFinalization processes justification and finalization during
// epoch processing. This is where a beacon node can justify and finalize a new epoch.
//
// Spec pseudocode definition:
// def process_justification_and_finalization(state: BeaconState) -> None:
// if get_current_epoch(state) <= GENESIS_EPOCH + 1:
// return
//
// previous_epoch = get_previous_epoch(state)
// current_epoch = get_current_epoch(state)
// old_previous_justified_checkpoint = state.previous_justified_checkpoint
// old_current_justified_checkpoint = state.current_justified_checkpoint
//
// # Process justifications
// state.previous_justified_checkpoint = state.current_justified_checkpoint
// state.justification_bits[1:] = state.justification_bits[:-1]
// state.justification_bits[0] = 0b0
// matching_target_attestations = get_matching_target_attestations(state, previous_epoch) # Previous epoch
// if get_attesting_balance(state, matching_target_attestations) * 3 >= get_total_active_balance(state) * 2:
// state.current_justified_checkpoint = Checkpoint(epoch=previous_epoch,
// root=get_block_root(state, previous_epoch))
// state.justification_bits[1] = 0b1
// matching_target_attestations = get_matching_target_attestations(state, current_epoch) # Current epoch
// if get_attesting_balance(state, matching_target_attestations) * 3 >= get_total_active_balance(state) * 2:
// state.current_justified_checkpoint = Checkpoint(epoch=current_epoch,
// root=get_block_root(state, current_epoch))
// state.justification_bits[0] = 0b1
//
// # Process finalizations
// bits = state.justification_bits
// # The 2nd/3rd/4th most recent epochs are justified, the 2nd using the 4th as source
// if all(bits[1:4]) and old_previous_justified_checkpoint.epoch + 3 == current_epoch:
// state.finalized_checkpoint = old_previous_justified_checkpoint
// # The 2nd/3rd most recent epochs are justified, the 2nd using the 3rd as source
// if all(bits[1:3]) and old_previous_justified_checkpoint.epoch + 2 == current_epoch:
// state.finalized_checkpoint = old_previous_justified_checkpoint
// # The 1st/2nd/3rd most recent epochs are justified, the 1st using the 3rd as source
// if all(bits[0:3]) and old_current_justified_checkpoint.epoch + 2 == current_epoch:
// state.finalized_checkpoint = old_current_justified_checkpoint
// # The 1st/2nd most recent epochs are justified, the 1st using the 2nd as source
// if all(bits[0:2]) and old_current_justified_checkpoint.epoch + 1 == current_epoch:
// state.finalized_checkpoint = old_current_justified_checkpoint
func ProcessJustificationAndFinalization(state *pb.BeaconState, prevAttestedBal uint64, currAttestedBal uint64) (*pb.BeaconState, error) {
if state.Slot <= helpers.StartSlot(2) {
return state, nil
}
prevEpoch := helpers.PrevEpoch(state)
currentEpoch := helpers.CurrentEpoch(state)
oldPrevJustifiedCheckpoint := state.PreviousJustifiedCheckpoint
oldCurrJustifiedCheckpoint := state.CurrentJustifiedCheckpoint
totalBal, err := helpers.TotalActiveBalance(state)
if err != nil {
return nil, errors.Wrap(err, "could not get total balance")
}
// Process justifications
state.PreviousJustifiedCheckpoint = state.CurrentJustifiedCheckpoint
state.JustificationBits.Shift(1)
// Note: the spec refers to the bit index position starting at 1 instead of starting at zero.
// We will use that paradigm here for consistency with the godoc spec definition.
// If 2/3 or more of total balance attested in the previous epoch.
if 3*prevAttestedBal >= 2*totalBal {
blockRoot, err := helpers.BlockRoot(state, prevEpoch)
if err != nil {
return nil, errors.Wrapf(err, "could not get block root for previous epoch %d", prevEpoch)
}
state.CurrentJustifiedCheckpoint = &ethpb.Checkpoint{Epoch: prevEpoch, Root: blockRoot}
state.JustificationBits.SetBitAt(1, true)
}
// If 2/3 or more of the total balance attested in the current epoch.
if 3*currAttestedBal >= 2*totalBal {
blockRoot, err := helpers.BlockRoot(state, currentEpoch)
if err != nil {
return nil, errors.Wrapf(err, "could not get block root for current epoch %d", prevEpoch)
}
state.CurrentJustifiedCheckpoint = &ethpb.Checkpoint{Epoch: currentEpoch, Root: blockRoot}
state.JustificationBits.SetBitAt(0, true)
}
// Process finalization according to ETH2.0 specifications.
justification := state.JustificationBits.Bytes()[0]
// 2nd/3rd/4th (0b1110) most recent epochs are justified, the 2nd using the 4th as source.
if justification&0x0E == 0x0E && (oldPrevJustifiedCheckpoint.Epoch+3) == currentEpoch {
state.FinalizedCheckpoint = oldPrevJustifiedCheckpoint
}
// 2nd/3rd (0b0110) most recent epochs are justified, the 2nd using the 3rd as source.
if justification&0x06 == 0x06 && (oldPrevJustifiedCheckpoint.Epoch+2) == currentEpoch {
state.FinalizedCheckpoint = oldPrevJustifiedCheckpoint
}
// 1st/2nd/3rd (0b0111) most recent epochs are justified, the 1st using the 3rd as source.
if justification&0x07 == 0x07 && (oldCurrJustifiedCheckpoint.Epoch+2) == currentEpoch {
state.FinalizedCheckpoint = oldCurrJustifiedCheckpoint
}
// The 1st/2nd (0b0011) most recent epochs are justified, the 1st using the 2nd as source
if justification&0x03 == 0x03 && (oldCurrJustifiedCheckpoint.Epoch+1) == currentEpoch {
state.FinalizedCheckpoint = oldCurrJustifiedCheckpoint
}
return state, nil
}
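A tiny runnable illustration of the bitmask checks used above, assuming the same bit convention (bit 0 is the current epoch after the shift):

package main

import "fmt"

func main() {
	// Justification bits after the shift. Suppose the 2nd, 3rd and 4th most
	// recent epochs were justified but not the current one: 0b1110.
	justification := byte(0x0E)

	// Matches the "2nd/3rd/4th most recent epochs justified" rule, so the old
	// previous justified checkpoint may finalize if the epoch distance matches.
	fmt.Println(justification&0x0E == 0x0E) // true
	// Does not match the "1st/2nd most recent epochs justified" rule,
	// because bit 0 (the current epoch) is not set.
	fmt.Println(justification&0x03 == 0x03) // false
}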
// ProcessRewardsAndPenalties processes the rewards and penalties of individual validators.
//
// Spec pseudocode definition:
// def process_rewards_and_penalties(state: BeaconState) -> None:
// if get_current_epoch(state) == GENESIS_EPOCH:
// return
//
// rewards1, penalties1 = get_attestation_deltas(state)
// rewards2, penalties2 = get_crosslink_deltas(state)
// for i in range(len(state.validator_registry)):
// increase_balance(state, i, rewards1[i] + rewards2[i])
// decrease_balance(state, i, penalties1[i] + penalties2[i])
func ProcessRewardsAndPenalties(state *pb.BeaconState) (*pb.BeaconState, error) {
// Can't process rewards and penalties in genesis epoch.
if helpers.CurrentEpoch(state) == 0 {
return state, nil
}
attsRewards, attsPenalties, err := attestationDelta(state)
if err != nil {
return nil, errors.Wrap(err, "could not get attestation delta")
}
for i := 0; i < len(state.Validators); i++ {
state = helpers.IncreaseBalance(state, uint64(i), attsRewards[i])
state = helpers.DecreaseBalance(state, uint64(i), attsPenalties[i])
}
return state, nil
}
// ProcessRegistryUpdates rotates validators in and out of the active pool.
// The amount to rotate is determined by the churn limit.
//
// Spec pseudocode definition:
// def process_registry_updates(state: BeaconState) -> None:
// # Process activation eligibility and ejections
// for index, validator in enumerate(state.validator_registry):
// if (
// validator.activation_eligibility_epoch == FAR_FUTURE_EPOCH and
// validator.effective_balance >= MAX_EFFECTIVE_BALANCE
// ):
// validator.activation_eligibility_epoch = get_current_epoch(state)
// for index, validator in enumerate(state.validators):
// if is_eligible_for_activation_queue(validator):
// validator.activation_eligibility_epoch = get_current_epoch(state) + 1
//
// if is_active_validator(validator, get_current_epoch(state)) and validator.effective_balance <= EJECTION_BALANCE:
// initiate_validator_exit(state, index)
// initiate_validator_exit(state, ValidatorIndex(index))
//
// # Queue validators eligible for activation and not dequeued for activation prior to finalized epoch
// # Queue validators eligible for activation and not yet dequeued for activation
// activation_queue = sorted([
// index for index, validator in enumerate(state.validator_registry) if
// validator.activation_eligibility_epoch != FAR_FUTURE_EPOCH and
// validator.activation_epoch >= get_delayed_activation_exit_epoch(state.finalized_epoch)
// ], key=lambda index: state.validator_registry[index].activation_eligibility_epoch)
// # Dequeued validators for activation up to churn limit (without resetting activation epoch)
// for index in activation_queue[:get_churn_limit(state)]:
// validator = state.validator_registry[index]
// if validator.activation_epoch == FAR_FUTURE_EPOCH:
// validator.activation_epoch = get_delayed_activation_exit_epoch(get_current_epoch(state))
// index for index, validator in enumerate(state.validators)
// if is_eligible_for_activation(state, validator)
// # Order by the sequence of activation_eligibility_epoch setting and then index
// ], key=lambda index: (state.validators[index].activation_eligibility_epoch, index))
// # Dequeued validators for activation up to churn limit
// for index in activation_queue[:get_validator_churn_limit(state)]:
// validator = state.validators[index]
// validator.activation_epoch = compute_activation_exit_epoch(get_current_epoch(state))
func ProcessRegistryUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
currentEpoch := helpers.CurrentEpoch(state)
var err error
for idx, validator := range state.Validators {
// Process the validators for activation eligibility.
eligibleToActivate := validator.ActivationEligibilityEpoch == params.BeaconConfig().FarFutureEpoch
properBalance := validator.EffectiveBalance >= params.BeaconConfig().MaxEffectiveBalance
if eligibleToActivate && properBalance {
validator.ActivationEligibilityEpoch = currentEpoch
if helpers.IsEligibleForActivationQueue(validator) {
validator.ActivationEligibilityEpoch = helpers.CurrentEpoch(state) + 1
}
// Process the validators for ejection.
isActive := helpers.IsActiveValidator(validator, currentEpoch)
belowEjectionBalance := validator.EffectiveBalance <= params.BeaconConfig().EjectionBalance
@@ -299,22 +94,25 @@ func ProcessRegistryUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
}
}
// Queue the validators who are eligible to activate and sort them by activation eligibility epoch number
// Queue validators eligible for activation and not yet dequeued for activation.
var activationQ []uint64
for idx, validator := range state.Validators {
eligibleActivated := validator.ActivationEligibilityEpoch != params.BeaconConfig().FarFutureEpoch
canBeActive := validator.ActivationEpoch >= helpers.DelayedActivationExitEpoch(state.FinalizedCheckpoint.Epoch)
if eligibleActivated && canBeActive {
if helpers.IsEligibleForActivation(state, validator) {
activationQ = append(activationQ, uint64(idx))
}
}
sort.Slice(activationQ, func(i, j int) bool {
return state.Validators[i].ActivationEligibilityEpoch < state.Validators[j].ActivationEligibilityEpoch
})
epochState = state
sort.Sort(sortableIndices(activationQ))
// Only activate just enough validators according to the activation churn limit.
limit := len(activationQ)
churnLimit, err := helpers.ValidatorChurnLimit(state)
activeValidatorCount, err := helpers.ActiveValidatorCount(state, currentEpoch)
if err != nil {
return nil, errors.Wrap(err, "could not get active validator count")
}
churnLimit, err := helpers.ValidatorChurnLimit(activeValidatorCount)
if err != nil {
return nil, errors.Wrap(err, "could not get churn limit")
}
@@ -323,12 +121,12 @@ func ProcessRegistryUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
if int(churnLimit) < limit {
limit = int(churnLimit)
}
for _, index := range activationQ[:limit] {
validator := state.Validators[index]
if validator.ActivationEpoch == params.BeaconConfig().FarFutureEpoch {
validator.ActivationEpoch = helpers.DelayedActivationExitEpoch(currentEpoch)
}
validator.ActivationEpoch = helpers.DelayedActivationExitEpoch(currentEpoch)
}
return state, nil
}
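The hunk above switches ValidatorChurnLimit to take an active validator count rather than the whole state. A minimal sketch of what that limit computes; the constants are the familiar mainnet-style values and are assumptions here rather than values read from params.BeaconConfig():

package main

import "fmt"

// validatorChurnLimit mirrors the helper used above: the number of validators
// that can be activated (or exited) per epoch, floored at a minimum.
func validatorChurnLimit(activeValidatorCount uint64) uint64 {
	const (
		minPerEpochChurnLimit = 4     // assumed mainnet-style constant
		churnLimitQuotient    = 65536 // assumed mainnet-style constant
	)
	churn := activeValidatorCount / churnLimitQuotient
	if churn < minPerEpochChurnLimit {
		return minPerEpochChurnLimit
	}
	return churn
}

func main() {
	fmt.Println(validatorChurnLimit(16384))  // 4: small sets fall back to the minimum
	fmt.Println(validatorChurnLimit(524288)) // 8: 524288 / 65536
}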
@@ -420,6 +218,12 @@ func ProcessFinalUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
// Update effective balances with hysteresis.
for i, v := range state.Validators {
if v == nil {
return nil, fmt.Errorf("validator %d is nil in state", i)
}
if i >= len(state.Balances) {
return nil, fmt.Errorf("validator index exceeds validator length in state %d >= %d", i, len(state.Balances))
}
balance := state.Balances[i]
halfInc := params.BeaconConfig().EffectiveBalanceIncrement / 2
if balance < v.EffectiveBalance || v.EffectiveBalance+3*halfInc < balance {
@@ -432,10 +236,17 @@ func ProcessFinalUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
// Set total slashed balances.
slashedExitLength := params.BeaconConfig().EpochsPerSlashingsVector
state.Slashings[nextEpoch%slashedExitLength] = 0
slashedEpoch := int(nextEpoch % slashedExitLength)
if len(state.Slashings) != int(slashedExitLength) {
return nil, fmt.Errorf("state slashing length %d different than EpochsPerHistoricalVector %d", len(state.Slashings), slashedExitLength)
}
state.Slashings[slashedEpoch] = 0
// Set RANDAO mix.
randaoMixLength := params.BeaconConfig().EpochsPerHistoricalVector
if len(state.RandaoMixes) != int(randaoMixLength) {
return nil, fmt.Errorf("state randao length %d different than EpochsPerHistoricalVector %d", len(state.RandaoMixes), randaoMixLength)
}
mix := helpers.RandaoMix(state, currentEpoch)
state.RandaoMixes[nextEpoch%randaoMixLength] = mix
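The new length checks guard the modular writes into the fixed-length slashings and RANDAO ring buffers. A short sketch of that index pattern, with EpochsPerHistoricalVector assumed to be the usual 65536:

package main

import "fmt"

func main() {
	// The RANDAO mixes (and slashings) vectors are fixed-length ring buffers
	// indexed by epoch modulo their length; the added checks simply verify the
	// stored vector really has that fixed length before writing into it.
	const epochsPerHistoricalVector = 65536 // assumed mainnet-style value
	randaoMixes := make([][]byte, epochsPerHistoricalVector)

	nextEpoch := uint64(70000)
	idx := nextEpoch % epochsPerHistoricalVector // 4464
	if len(randaoMixes) != epochsPerHistoricalVector {
		panic("unexpected vector length")
	}
	randaoMixes[idx] = []byte{0xaa}
	fmt.Println(idx, len(randaoMixes[idx]))
}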
@@ -473,8 +284,13 @@ func ProcessFinalUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
func unslashedAttestingIndices(state *pb.BeaconState, atts []*pb.PendingAttestation) ([]uint64, error) {
var setIndices []uint64
seen := make(map[uint64]bool)
for _, att := range atts {
attestingIndices, err := helpers.AttestingIndices(state, att.Data, att.AggregationBits)
committee, err := helpers.BeaconCommitteeFromState(state, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
return nil, err
}
attestingIndices, err := helpers.AttestingIndices(att.AggregationBits, committee)
if err != nil {
return nil, errors.Wrap(err, "could not get attester indices")
}
@@ -521,183 +337,3 @@ func BaseReward(state *pb.BeaconState, index uint64) (uint64, error) {
mathutil.IntegerSquareRoot(totalBalance) / params.BeaconConfig().BaseRewardsPerEpoch
return baseReward, nil
}
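BaseReward implements the formula visible at the end of the hunk: effective_balance * BASE_REWARD_FACTOR // integer_sqrt(total_active_balance) // BASE_REWARDS_PER_EPOCH. A self-contained sketch with assumed mainnet-style constants and math.Sqrt standing in for mathutil.IntegerSquareRoot:

package main

import (
	"fmt"
	"math"
)

// baseReward sketches the reward formula above. The 64 and 4 are the usual
// mainnet-style constants and are assumptions here, not values read from
// params.BeaconConfig().
func baseReward(effectiveBalance, totalBalance uint64) uint64 {
	const (
		baseRewardFactor    = 64
		baseRewardsPerEpoch = 4
	)
	sqrt := uint64(math.Sqrt(float64(totalBalance))) // stand-in for mathutil.IntegerSquareRoot
	return effectiveBalance * baseRewardFactor / sqrt / baseRewardsPerEpoch
}

func main() {
	// A 32 ETH validator against roughly 524288 ETH of total active balance (in Gwei).
	fmt.Println(baseReward(32e9, 524288e9))
}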
// attestationDelta calculates the rewards and penalties for each individual
// validator for voting the correct FFG source, FFG target, and head. It
// also calculates proposer, inclusion delay, and inactivity rewards
// and penalties. Individual rewards and penalties are returned as lists.
//
// Note: we calculate the adjusted quotient outside of base reward because it would be inefficient
// to repeat the same calculation for every validator instead of doing it once.
//
// Spec pseudocode definition:
// def get_attestation_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence[Gwei]]:
// previous_epoch = get_previous_epoch(state)
// total_balance = get_total_active_balance(state)
// rewards = [Gwei(0) for _ in range(len(state.validators))]
// penalties = [Gwei(0) for _ in range(len(state.validators))]
// eligible_validator_indices = [
// ValidatorIndex(index) for index, v in enumerate(state.validators)
// if is_active_validator(v, previous_epoch) or (v.slashed and previous_epoch + 1 < v.withdrawable_epoch)
// ]
//
// # Micro-incentives for matching FFG source, FFG target, and head
// matching_source_attestations = get_matching_source_attestations(state, previous_epoch)
// matching_target_attestations = get_matching_target_attestations(state, previous_epoch)
// matching_head_attestations = get_matching_head_attestations(state, previous_epoch)
// for attestations in (matching_source_attestations, matching_target_attestations, matching_head_attestations):
// unslashed_attesting_indices = get_unslashed_attesting_indices(state, attestations)
// attesting_balance = get_total_balance(state, unslashed_attesting_indices)
// for index in eligible_validator_indices:
// if index in unslashed_attesting_indices:
// rewards[index] += get_base_reward(state, index) * attesting_balance // total_balance
// else:
// penalties[index] += get_base_reward(state, index)
//
// # Proposer and inclusion delay micro-rewards
// for index in get_unslashed_attesting_indices(state, matching_source_attestations):
// index = ValidatorIndex(index)
// attestation = min([
// a for a in matching_source_attestations
// if index in get_attesting_indices(state, a.data, a.aggregation_bits)
// ], key=lambda a: a.inclusion_delay)
// proposer_reward = Gwei(get_base_reward(state, index) // PROPOSER_REWARD_QUOTIENT)
// rewards[attestation.proposer_index] += proposer_reward
// max_attester_reward = get_base_reward(state, index) - proposer_reward
// rewards[index] += Gwei(max_attester_reward // attestation.inclusion_delay)
//
// # Inactivity penalty
// finality_delay = previous_epoch - state.finalized_checkpoint.epoch
// if finality_delay > MIN_EPOCHS_TO_INACTIVITY_PENALTY:
// matching_target_attesting_indices = get_unslashed_attesting_indices(state, matching_target_attestations)
// for index in eligible_validator_indices:
// index = ValidatorIndex(index)
// penalties[index] += Gwei(BASE_REWARDS_PER_EPOCH * get_base_reward(state, index))
// if index not in matching_target_attesting_indices:
// penalties[index] += Gwei(
// state.validators[index].effective_balance * finality_delay // INACTIVITY_PENALTY_QUOTIENT
// )
//
// return rewards, penalties
func attestationDelta(state *pb.BeaconState) ([]uint64, []uint64, error) {
prevEpoch := helpers.PrevEpoch(state)
totalBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get total active balance")
}
rewards := make([]uint64, len(state.Validators))
penalties := make([]uint64, len(state.Validators))
// Filter the list of eligible validator indices. An eligible validator
// has to be active, or slashed but not yet withdrawable.
var eligible []uint64
for i, v := range state.Validators {
isActive := helpers.IsActiveValidator(v, prevEpoch)
isSlashed := v.Slashed && (prevEpoch+1 < v.WithdrawableEpoch)
if isActive || isSlashed {
eligible = append(eligible, uint64(i))
}
}
// Apply rewards and penalties for voting the correct source, target and head.
// Construct an attestations list that contains source, target and head attestations.
atts, err := MatchAttestations(state, prevEpoch)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get source, target and head attestations")
}
var attsPackage [][]*pb.PendingAttestation
attsPackage = append(attsPackage, atts.source)
attsPackage = append(attsPackage, atts.Target)
attsPackage = append(attsPackage, atts.head)
// Cache the validators who voted correctly for source in a map
// to calculate earliest attestation rewards later.
attestersVotedSource := make(map[uint64]*pb.PendingAttestation)
// Compute rewards / penalties for each attestation in the list and update
// the rewards and penalties lists.
for i, matchAtt := range attsPackage {
indices, err := unslashedAttestingIndices(state, matchAtt)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get attestation indices")
}
attested := make(map[uint64]bool)
// Construct a map to look up validators that voted for source, target or head.
for _, index := range indices {
if i == 0 {
attestersVotedSource[index] = &pb.PendingAttestation{InclusionDelay: params.BeaconConfig().FarFutureEpoch}
}
attested[index] = true
}
attestedBalance := helpers.TotalBalance(state, indices)
// Update rewards and penalties to each eligible validator index.
for _, index := range eligible {
base, err := BaseReward(state, index)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get base reward")
}
if _, ok := attested[index]; ok {
rewards[index] += base * attestedBalance / totalBalance
} else {
penalties[index] += base
}
}
}
// For every index, filter the matching source attestations that correspond to the index,
// sort by inclusion delay and take the one that was included on chain first.
for _, att := range atts.source {
indices, err := helpers.AttestingIndices(state, att.Data, att.AggregationBits)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get attester indices")
}
for _, i := range indices {
if _, ok := attestersVotedSource[i]; ok {
if attestersVotedSource[i].InclusionDelay > att.InclusionDelay {
attestersVotedSource[i] = att
}
}
}
}
for i, a := range attestersVotedSource {
baseReward, err := BaseReward(state, i)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get proposer reward")
}
proposerReward := baseReward / params.BeaconConfig().ProposerRewardQuotient
rewards[a.ProposerIndex] += proposerReward
attesterReward := baseReward - proposerReward
rewards[i] += attesterReward / a.InclusionDelay
}
// Apply penalties for quadratic leaks.
// When the number of epochs since finality exceeds the inactivity penalty constant, the penalty is increased
// based on the finality delay.
finalityDelay := prevEpoch - state.FinalizedCheckpoint.Epoch
if finalityDelay > params.BeaconConfig().MinEpochsToInactivityPenalty {
targetIndices, err := unslashedAttestingIndices(state, atts.Target)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get attestation indices")
}
attestedTarget := make(map[uint64]bool)
for _, index := range targetIndices {
attestedTarget[index] = true
}
for _, index := range eligible {
base, err := BaseReward(state, index)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get base reward")
}
penalties[index] += params.BeaconConfig().BaseRewardsPerEpoch * base
if _, ok := attestedTarget[index]; !ok {
penalties[index] += state.Validators[index].EffectiveBalance * finalityDelay /
params.BeaconConfig().InactivityPenaltyQuotient
}
}
}
return rewards, penalties, nil
}
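A small worked example of the proposer / inclusion-delay micro-reward split performed near the end of attestationDelta, with an arbitrary base reward and the usual PROPOSER_REWARD_QUOTIENT of 8 assumed:

package main

import "fmt"

func main() {
	// The proposer of the earliest including block receives base/PROPOSER_REWARD_QUOTIENT;
	// the attester receives the remainder, scaled down by the inclusion delay.
	const proposerRewardQuotient = 8 // assumed mainnet-style value
	base := uint64(22360)            // arbitrary example base reward, in Gwei
	inclusionDelay := uint64(1)      // included in the very next slot

	proposerReward := base / proposerRewardQuotient            // 2795
	attesterReward := (base - proposerReward) / inclusionDelay // 19565
	fmt.Println(proposerReward, attesterReward)
}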

View File

@@ -0,0 +1,18 @@
package epoch
import (
"testing"
fuzz "github.com/google/gofuzz"
ethereum_beacon_p2p_v1 "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
func TestFuzzFinalUpdates_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
_, _ = ProcessFinalUpdates(state)
}
}

View File

@@ -2,15 +2,13 @@ package epoch
import (
"bytes"
"reflect"
"strings"
"testing"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -107,8 +105,6 @@ func TestUnslashedAttestingIndices_DuplicatedAttestations(t *testing.T) {
}
func TestAttestingBalance_CorrectBalance(t *testing.T) {
helpers.ClearAllCaches()
// Generate 2 attestations.
atts := make([]*pb.PendingAttestation, 2)
for i := 0; i < len(atts); i++ {
@@ -151,159 +147,7 @@ func TestAttestingBalance_CorrectBalance(t *testing.T) {
}
}
func TestMatchAttestations_PrevEpoch(t *testing.T) {
helpers.ClearAllCaches()
e := params.BeaconConfig().SlotsPerEpoch
s := uint64(0) // slot
// The correct epoch for source is the first epoch
// The correct vote for target is '1'
// The correct vote for head is '2'
prevAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{}}}, // source
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{Root: []byte{1}}}}, // source, target
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{Root: []byte{3}}}}, // source
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{Root: []byte{1}}}}, // source, target
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{}}}, // source, head
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{4}, Target: &ethpb.Checkpoint{}}}, // source
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{1}}}}, // source, target, head
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{5}, Target: &ethpb.Checkpoint{Root: []byte{1}}}}, // source, target
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{6}}}}, // source, head
}
currentAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{}}}, // none
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{2}, Target: &ethpb.Checkpoint{Root: []byte{1}}}}, // none
}
blockRoots := make([][]byte, 128)
for i := 0; i < len(blockRoots); i++ {
blockRoots[i] = []byte{byte(i + 1)}
}
state := &pb.BeaconState{
Slot: s + e + 2,
CurrentEpochAttestations: currentAtts,
PreviousEpochAttestations: prevAtts,
BlockRoots: blockRoots,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}
mAtts, err := MatchAttestations(state, 0)
if err != nil {
t.Fatal(err)
}
wantedSrcAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{}}},
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{Root: []byte{1}}}},
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{Root: []byte{3}}}},
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{Root: []byte{1}}}},
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{}}},
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{4}, Target: &ethpb.Checkpoint{}}},
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{1}}}},
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{5}, Target: &ethpb.Checkpoint{Root: []byte{1}}}},
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{6}}}},
}
if !reflect.DeepEqual(mAtts.source, wantedSrcAtts) {
t.Error("source attestations don't match")
}
wantedTgtAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{Root: []byte{1}}}},
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{Root: []byte{1}}}},
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{1}}}},
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{5}, Target: &ethpb.Checkpoint{Root: []byte{1}}}},
}
if !reflect.DeepEqual(mAtts.Target, wantedTgtAtts) {
t.Error("target attestations don't match")
}
wantedHeadAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{}}},
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{1}}}},
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{6}}}},
}
if !reflect.DeepEqual(mAtts.head, wantedHeadAtts) {
t.Error("head attestations don't match")
}
}
func TestMatchAttestations_CurrentEpoch(t *testing.T) {
helpers.ClearAllCaches()
e := params.BeaconConfig().SlotsPerEpoch
s := uint64(0) // slot
// The correct epoch for source is the first epoch
// The correct vote for target is '33'
// The correct vote for head is '34'
prevAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{}}}, // none
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{2}, Target: &ethpb.Checkpoint{Root: []byte{1}}}}, // none
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{5}, Target: &ethpb.Checkpoint{Root: []byte{1}}}}, // none
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{2}, Target: &ethpb.Checkpoint{Root: []byte{6}}}}, // none
}
currentAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{}}}, // source
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{33}}}}, // source, target, head
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{69}, Target: &ethpb.Checkpoint{Root: []byte{33}}}}, // source, target
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{68}}}}, // source, head
}
blockRoots := make([][]byte, 128)
for i := 0; i < len(blockRoots); i++ {
blockRoots[i] = []byte{byte(i + 1)}
}
state := &pb.BeaconState{
Slot: s + e + 2,
CurrentEpochAttestations: currentAtts,
PreviousEpochAttestations: prevAtts,
BlockRoots: blockRoots,
}
mAtts, err := MatchAttestations(state, 1)
if err != nil {
t.Fatal(err)
}
wantedSrcAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, Target: &ethpb.Checkpoint{}}},
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{33}}}},
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{69}, Target: &ethpb.Checkpoint{Root: []byte{33}}}},
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{68}}}},
}
if !reflect.DeepEqual(mAtts.source, wantedSrcAtts) {
t.Error("source attestations don't match")
}
wantedTgtAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{33}}}},
{Data: &ethpb.AttestationData{Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{69}, Target: &ethpb.Checkpoint{Root: []byte{33}}}},
}
if !reflect.DeepEqual(mAtts.Target, wantedTgtAtts) {
t.Error("target attestations don't match")
}
wantedHeadAtts := []*pb.PendingAttestation{
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{33}}}},
{Data: &ethpb.AttestationData{Slot: 33, Source: &ethpb.Checkpoint{}, BeaconBlockRoot: []byte{34}, Target: &ethpb.Checkpoint{Root: []byte{68}}}},
}
if !reflect.DeepEqual(mAtts.head, wantedHeadAtts) {
t.Error("head attestations don't match")
}
}
func TestMatchAttestations_EpochOutOfBound(t *testing.T) {
_, err := MatchAttestations(&pb.BeaconState{Slot: 1}, 2 /* epoch */)
if !strings.Contains(err.Error(), "input epoch: 2 != current epoch: 0") {
t.Fatal("Did not receive wanted error")
}
}
func TestBaseReward_AccurateRewards(t *testing.T) {
helpers.ClearAllCaches()
tests := []struct {
a uint64
b uint64
@@ -315,7 +159,6 @@ func TestBaseReward_AccurateRewards(t *testing.T) {
{40 * 1e9, params.BeaconConfig().MaxEffectiveBalance, 2862174},
}
for _, tt := range tests {
helpers.ClearAllCaches()
state := &pb.BeaconState{
Validators: []*ethpb.Validator{
{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: tt.b}},
@@ -332,211 +175,6 @@ func TestBaseReward_AccurateRewards(t *testing.T) {
}
}
func TestProcessJustificationAndFinalization_CantJustifyFinalize(t *testing.T) {
e := params.BeaconConfig().FarFutureEpoch
a := params.BeaconConfig().MaxEffectiveBalance
state := &pb.BeaconState{
JustificationBits: []byte{0x00},
Slot: params.BeaconConfig().SlotsPerEpoch * 2,
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
Validators: []*ethpb.Validator{{ExitEpoch: e, EffectiveBalance: a}, {ExitEpoch: e, EffectiveBalance: a},
{ExitEpoch: e, EffectiveBalance: a}, {ExitEpoch: e, EffectiveBalance: a}},
}
// Since Attested balances are less than total balances, nothing happened.
newState, err := ProcessJustificationAndFinalization(state, 0, 0)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(state, newState) {
t.Error("Did not get the original state")
}
}
func TestProcessJustificationAndFinalization_NoBlockRootCurrentEpoch(t *testing.T) {
e := params.BeaconConfig().FarFutureEpoch
a := params.BeaconConfig().MaxEffectiveBalance
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerEpoch*2+1)
for i := 0; i < len(blockRoots); i++ {
blockRoots[i] = []byte{byte(i)}
}
state := &pb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch * 3,
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
FinalizedCheckpoint: &ethpb.Checkpoint{},
JustificationBits: []byte{0x03}, // 0b0011
Validators: []*ethpb.Validator{{ExitEpoch: e}, {ExitEpoch: e}, {ExitEpoch: e}, {ExitEpoch: e}},
Balances: []uint64{a, a, a, a}, // validator total balance should be 128000000000
BlockRoots: blockRoots,
}
attestedBalance := 4 * e * 3 / 2
_, err := ProcessJustificationAndFinalization(state, 0, attestedBalance)
want := "could not get block root for current epoch"
if err == nil || !strings.Contains(err.Error(), want) {
t.Fatal("Did not receive correct error")
}
}
func TestProcessJustificationAndFinalization_ConsecutiveEpochs(t *testing.T) {
e := params.BeaconConfig().FarFutureEpoch
a := params.BeaconConfig().MaxEffectiveBalance
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerEpoch*2+1)
for i := 0; i < len(blockRoots); i++ {
blockRoots[i] = []byte{byte(i)}
}
state := &pb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch*2 + 1,
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
FinalizedCheckpoint: &ethpb.Checkpoint{},
JustificationBits: bitfield.Bitvector4{0x0F}, // 0b1111
Validators: []*ethpb.Validator{{ExitEpoch: e}, {ExitEpoch: e}, {ExitEpoch: e}, {ExitEpoch: e}},
Balances: []uint64{a, a, a, a}, // validator total balance should be 128000000000
BlockRoots: blockRoots,
}
attestedBalance := 4 * e * 3 / 2
newState, err := ProcessJustificationAndFinalization(state, 0, attestedBalance)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(newState.CurrentJustifiedCheckpoint.Root, []byte{byte(64)}) {
t.Errorf("Wanted current justified root: %v, got: %v",
[]byte{byte(64)}, newState.CurrentJustifiedCheckpoint.Root)
}
if newState.CurrentJustifiedCheckpoint.Epoch != 2 {
t.Errorf("Wanted justified epoch: %d, got: %d",
2, newState.CurrentJustifiedCheckpoint.Epoch)
}
if newState.PreviousJustifiedCheckpoint.Epoch != 0 {
t.Errorf("Wanted previous justified epoch: %d, got: %d",
0, newState.PreviousJustifiedCheckpoint.Epoch)
}
if !bytes.Equal(newState.FinalizedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
t.Errorf("Wanted current finalized root: %v, got: %v",
params.BeaconConfig().ZeroHash, newState.FinalizedCheckpoint.Root)
}
if newState.FinalizedCheckpoint.Epoch != 0 {
t.Errorf("Wanted finalized epoch: 0, got: %d", newState.FinalizedCheckpoint.Epoch)
}
}
func TestProcessJustificationAndFinalization_JustifyCurrentEpoch(t *testing.T) {
e := params.BeaconConfig().FarFutureEpoch
a := params.BeaconConfig().MaxEffectiveBalance
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerEpoch*2+1)
for i := 0; i < len(blockRoots); i++ {
blockRoots[i] = []byte{byte(i)}
}
state := &pb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch*2 + 1,
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
FinalizedCheckpoint: &ethpb.Checkpoint{},
JustificationBits: bitfield.Bitvector4{0x03}, // 0b0011
Validators: []*ethpb.Validator{{ExitEpoch: e}, {ExitEpoch: e}, {ExitEpoch: e}, {ExitEpoch: e}},
Balances: []uint64{a, a, a, a}, // validator total balance should be 128000000000
BlockRoots: blockRoots,
}
attestedBalance := 4 * e * 3 / 2
newState, err := ProcessJustificationAndFinalization(state, 0, attestedBalance)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(newState.CurrentJustifiedCheckpoint.Root, []byte{byte(64)}) {
t.Errorf("Wanted current justified root: %v, got: %v",
[]byte{byte(64)}, newState.CurrentJustifiedCheckpoint.Root)
}
if newState.CurrentJustifiedCheckpoint.Epoch != 2 {
t.Errorf("Wanted justified epoch: %d, got: %d",
2, newState.CurrentJustifiedCheckpoint.Epoch)
}
if newState.PreviousJustifiedCheckpoint.Epoch != 0 {
t.Errorf("Wanted previous justified epoch: %d, got: %d",
0, newState.PreviousJustifiedCheckpoint.Epoch)
}
if !bytes.Equal(newState.FinalizedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
t.Errorf("Wanted current finalized root: %v, got: %v",
params.BeaconConfig().ZeroHash, newState.FinalizedCheckpoint.Root)
}
if newState.FinalizedCheckpoint.Epoch != 0 {
t.Errorf("Wanted finalized epoch: 0, got: %d", newState.FinalizedCheckpoint.Epoch)
}
}
func TestProcessJustificationAndFinalization_JustifyPrevEpoch(t *testing.T) {
helpers.ClearAllCaches()
e := params.BeaconConfig().FarFutureEpoch
a := params.BeaconConfig().MaxEffectiveBalance
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerEpoch*2+1)
for i := 0; i < len(blockRoots); i++ {
blockRoots[i] = []byte{byte(i)}
}
state := &pb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch*2 + 1,
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
Root: params.BeaconConfig().ZeroHash[:],
},
JustificationBits: bitfield.Bitvector4{0x03}, // 0b0011
Validators: []*ethpb.Validator{{ExitEpoch: e}, {ExitEpoch: e}, {ExitEpoch: e}, {ExitEpoch: e}},
Balances: []uint64{a, a, a, a}, // validator total balance should be 128000000000
BlockRoots: blockRoots, FinalizedCheckpoint: &ethpb.Checkpoint{},
}
attestedBalance := 4 * e * 3 / 2
newState, err := ProcessJustificationAndFinalization(state, attestedBalance, 0)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(newState.CurrentJustifiedCheckpoint.Root, []byte{byte(64)}) {
t.Errorf("Wanted current justified root: %v, got: %v",
[]byte{byte(64)}, newState.CurrentJustifiedCheckpoint.Root)
}
if newState.PreviousJustifiedCheckpoint.Epoch != 0 {
t.Errorf("Wanted previous justified epoch: %d, got: %d",
0, newState.PreviousJustifiedCheckpoint.Epoch)
}
if newState.CurrentJustifiedCheckpoint.Epoch != 2 {
t.Errorf("Wanted justified epoch: %d, got: %d",
2, newState.CurrentJustifiedCheckpoint.Epoch)
}
if !bytes.Equal(newState.FinalizedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
t.Errorf("Wanted current finalized root: %v, got: %v",
params.BeaconConfig().ZeroHash, newState.FinalizedCheckpoint.Root)
}
if newState.FinalizedCheckpoint.Epoch != 0 {
t.Errorf("Wanted finalized epoch: 0, got: %d", newState.FinalizedCheckpoint.Epoch)
}
}
func TestProcessSlashings_NotSlashed(t *testing.T) {
s := &pb.BeaconState{
Slot: 0,
@@ -555,7 +193,6 @@ func TestProcessSlashings_NotSlashed(t *testing.T) {
}
func TestProcessSlashings_SlashedLess(t *testing.T) {
tests := []struct {
state *pb.BeaconState
want uint64
@@ -624,8 +261,6 @@ func TestProcessSlashings_SlashedLess(t *testing.T) {
for i, tt := range tests {
t.Run(string(i), func(t *testing.T) {
helpers.ClearAllCaches()
original := proto.Clone(tt.state)
newState, err := ProcessSlashings(tt.state)
if err != nil {
@@ -709,218 +344,12 @@ func TestProcessRegistryUpdates_NoRotation(t *testing.T) {
}
}
func TestAttestationDelta_CantGetBlockRoot(t *testing.T) {
e := params.BeaconConfig().SlotsPerEpoch
state := buildState(2*e, 1)
state.Slot = 0
_, _, err := attestationDelta(state)
wanted := "could not get block root for epoch"
if !strings.Contains(err.Error(), wanted) {
t.Fatalf("Got: %v, want: %v", err.Error(), wanted)
}
}
func TestAttestationDelta_CantGetAttestation(t *testing.T) {
state := buildState(0, 1)
_, _, err := attestationDelta(state)
wanted := "could not get source, target and head attestations"
if !strings.Contains(err.Error(), wanted) {
t.Fatalf("Got: %v, want: %v", err.Error(), wanted)
}
}
func TestAttestationDelta_NoOneAttested(t *testing.T) {
e := params.BeaconConfig().SlotsPerEpoch
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount / 32
state := buildState(e+2, validatorCount)
//startShard := uint64(960)
atts := make([]*pb.PendingAttestation, 2)
for i := 0; i < len(atts); i++ {
atts[i] = &pb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{},
Source: &ethpb.Checkpoint{},
},
InclusionDelay: uint64(i + 100),
AggregationBits: bitfield.Bitlist{0xC0, 0x01},
}
}
rewards, penalties, err := attestationDelta(state)
if err != nil {
t.Fatal(err)
}
for i := uint64(0); i < validatorCount; i++ {
// Since no one attested, all the validators should gain 0 reward
if rewards[i] != 0 {
t.Errorf("Wanted reward balance 0, got %d", rewards[i])
}
// Since no one attested, all the validators should get penalized the same amount:
// 3 times the base reward, for missing source, target and head.
base, err := BaseReward(state, i)
if err != nil {
t.Errorf("Could not get base reward: %v", err)
}
wanted := 3 * base
if penalties[i] != wanted {
t.Errorf("Wanted penalty balance %d, got %d",
wanted, penalties[i])
}
}
}
func TestAttestationDelta_SomeAttested(t *testing.T) {
helpers.ClearAllCaches()
e := params.BeaconConfig().SlotsPerEpoch
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount / 8
state := buildState(e+2, validatorCount)
atts := make([]*pb.PendingAttestation, 3)
for i := 0; i < len(atts); i++ {
atts[i] = &pb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{},
Source: &ethpb.Checkpoint{},
},
AggregationBits: bitfield.Bitlist{0xC0, 0xC0, 0xC0, 0xC0, 0x01},
InclusionDelay: 1,
}
}
state.PreviousEpochAttestations = atts
rewards, penalties, err := attestationDelta(state)
if err != nil {
t.Fatal(err)
}
attestedBalance, err := AttestingBalance(state, atts)
if err != nil {
t.Error(err)
}
totalBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
t.Fatal(err)
}
attestedIndices := []uint64{100, 106, 196, 641, 654, 1606}
for _, i := range attestedIndices {
base, err := BaseReward(state, i)
if err != nil {
t.Errorf("Could not get base reward: %v", err)
}
// Base rewards for getting source, target and head right
wanted := 3 * (base * attestedBalance / totalBalance)
// Base rewards for proposer and attesters working together getting attestation
// on chain in the fastest manner
proposerReward := base / params.BeaconConfig().ProposerRewardQuotient
wanted += (base - proposerReward) / params.BeaconConfig().MinAttestationInclusionDelay
if rewards[i] != wanted {
t.Errorf("Wanted reward balance %d, got %d", wanted, rewards[i])
}
// Since all these validators attested, they shouldn't get penalized.
if penalties[i] != 0 {
t.Errorf("Wanted penalty balance 0, got %d", penalties[i])
}
}
nonAttestedIndices := []uint64{12, 23, 45, 79}
for _, i := range nonAttestedIndices {
base, err := BaseReward(state, i)
if err != nil {
t.Errorf("Could not get base reward: %v", err)
}
wanted := 3 * base
// Since all these validators did not attest, they shouldn't get rewarded.
if rewards[i] != 0 {
t.Errorf("Wanted reward balance 0, got %d", rewards[i])
}
// Base penalties for not attesting.
if penalties[i] != wanted {
t.Errorf("Wanted penalty balance %d, got %d", wanted, penalties[i])
}
}
}
func TestAttestationDelta_SomeAttestedFinalityDelay(t *testing.T) {
helpers.ClearAllCaches()
e := params.BeaconConfig().SlotsPerEpoch
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount / 8
state := buildState(e+4, validatorCount)
atts := make([]*pb.PendingAttestation, 3)
for i := 0; i < len(atts); i++ {
atts[i] = &pb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{},
Source: &ethpb.Checkpoint{},
},
AggregationBits: bitfield.Bitlist{0xC0, 0xC0, 0xC0, 0xC0, 0x01},
InclusionDelay: 1,
}
}
state.PreviousEpochAttestations = atts
state.FinalizedCheckpoint.Epoch = 0
rewards, penalties, err := attestationDelta(state)
if err != nil {
t.Fatal(err)
}
attestedBalance, err := AttestingBalance(state, atts)
if err != nil {
t.Error(err)
}
totalBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
t.Fatal(err)
}
attestedIndices := []uint64{100, 106, 196, 641, 654, 1606}
for _, i := range attestedIndices {
base, err := BaseReward(state, i)
if err != nil {
t.Errorf("Could not get base reward: %v", err)
}
// Base rewards for getting source right
wanted := 3 * (base * attestedBalance / totalBalance)
// Base rewards for proposer and attesters working together getting attestation
// on chain in the fastest manner
proposerReward := base / params.BeaconConfig().ProposerRewardQuotient
wanted += (base - proposerReward) * params.BeaconConfig().MinAttestationInclusionDelay
if rewards[i] != wanted {
t.Errorf("Wanted reward balance %d, got %d", wanted, rewards[i])
}
// Since all these validators attested, they shouldn't get penalized.
if penalties[i] != 0 {
t.Errorf("Wanted penalty balance 0, got %d", penalties[i])
}
}
nonAttestedIndices := []uint64{12, 23, 45, 79}
for _, i := range nonAttestedIndices {
base, err := BaseReward(state, i)
if err != nil {
t.Errorf("Could not get base reward: %v", err)
}
wanted := 3 * base
// Since all these validators did not attest, they shouldn't get rewarded.
if rewards[i] != 0 {
t.Errorf("Wanted reward balance 0, got %d", rewards[i])
}
// Base penalties for not attesting.
if penalties[i] != wanted {
t.Errorf("Wanted penalty balance %d, got %d", wanted, penalties[i])
}
}
}
func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
state := &pb.BeaconState{
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedCheckpoint: &ethpb.Checkpoint{},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 6},
}
limit, err := helpers.ValidatorChurnLimit(state)
limit, err := helpers.ValidatorChurnLimit(0)
if err != nil {
t.Error(err)
}
@@ -937,7 +366,7 @@ func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
t.Error(err)
}
for i, validator := range newState.Validators {
if validator.ActivationEligibilityEpoch != currentEpoch {
if validator.ActivationEligibilityEpoch != currentEpoch+1 {
t.Errorf("Could not update registry %d, wanted activation eligibility epoch %d got %d",
i, currentEpoch, validator.ActivationEligibilityEpoch)
}
@@ -1033,51 +462,6 @@ func TestProcessRegistryUpdates_CanExits(t *testing.T) {
}
}
func TestProcessRewardsAndPenalties_GenesisEpoch(t *testing.T) {
state := &pb.BeaconState{Slot: params.BeaconConfig().SlotsPerEpoch - 1}
newState, err := ProcessRewardsAndPenalties(state)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(state, newState) {
t.Error("genesis state mutated")
}
}
func TestProcessRewardsAndPenalties_SomeAttested(t *testing.T) {
helpers.ClearAllCaches()
e := params.BeaconConfig().SlotsPerEpoch
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount / 8
state := buildState(e+2, validatorCount)
atts := make([]*pb.PendingAttestation, 3)
for i := 0; i < len(atts); i++ {
atts[i] = &pb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{},
Source: &ethpb.Checkpoint{},
},
AggregationBits: bitfield.Bitlist{0xC0, 0xC0, 0xC0, 0xC0, 0x01},
InclusionDelay: 1,
}
}
state.PreviousEpochAttestations = atts
state, err := ProcessRewardsAndPenalties(state)
if err != nil {
t.Fatal(err)
}
wanted := uint64(31999873505)
if state.Balances[0] != wanted {
t.Errorf("wanted balance: %d, got: %d",
wanted, state.Balances[0])
}
wanted = uint64(31999810265)
if state.Balances[4] != wanted {
t.Errorf("wanted balance: %d, got: %d",
wanted, state.Balances[4])
}
}
func buildState(slot uint64, validatorCount uint64) *pb.BeaconState {
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {

View File

@@ -1,42 +0,0 @@
package epoch
import (
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)
// ComputeValidatorParticipation computes validator participation for the requested epoch by matching
// validator attestations, computing the attesting balance, and comparing it against the total active balance.
func ComputeValidatorParticipation(state *pb.BeaconState, epoch uint64) (*ethpb.ValidatorParticipation, error) {
currentEpoch := helpers.CurrentEpoch(state)
previousEpoch := helpers.PrevEpoch(state)
if epoch != currentEpoch && epoch != previousEpoch {
return nil, fmt.Errorf(
"requested epoch is not previous epoch %d or current epoch %d, requested %d",
previousEpoch,
currentEpoch,
epoch,
)
}
atts, err := MatchAttestations(state, epoch)
if err != nil {
return nil, errors.Wrap(err, "could not retrieve head attestations")
}
attestedBalances, err := AttestingBalance(state, atts.Target)
if err != nil {
return nil, errors.Wrap(err, "could not retrieve attested balances")
}
totalBalances, err := helpers.TotalActiveBalance(state)
if err != nil {
return nil, errors.Wrap(err, "could not retrieve total balances")
}
return &ethpb.ValidatorParticipation{
GlobalParticipationRate: float32(attestedBalances) / float32(totalBalances),
VotedEther: attestedBalances,
EligibleEther: totalBalances,
}, nil
}
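A minimal usage sketch of the function above, assuming the fmt, helpers and pb imports already present in this file; logPreviousEpochParticipation is a hypothetical caller, shown only to illustrate how the returned ValidatorParticipation fields are meant to be read.
// logPreviousEpochParticipation is a hypothetical caller that reports the
// previous-epoch participation computed by ComputeValidatorParticipation.
func logPreviousEpochParticipation(state *pb.BeaconState) error {
	p, err := ComputeValidatorParticipation(state, helpers.PrevEpoch(state))
	if err != nil {
		return err
	}
	fmt.Printf("participation rate %.2f (%d of %d Gwei attested)\n",
		p.GlobalParticipationRate, p.VotedEther, p.EligibleEther)
	return nil
}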

View File

@@ -1,166 +0,0 @@
package epoch_test
import (
"reflect"
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
func TestComputeValidatorParticipation_PreviousEpoch(t *testing.T) {
params.OverrideBeaconConfig(params.MinimalSpecConfig())
e := uint64(1)
attestedBalance := uint64(20) * params.BeaconConfig().MaxEffectiveBalance
validatorCount := uint64(100)
validators := make([]*ethpb.Validator, validatorCount)
balances := make([]uint64, validatorCount)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
}
balances[i] = params.BeaconConfig().MaxEffectiveBalance
}
blockRoots := make([][]byte, 256)
for i := 0; i < len(blockRoots); i++ {
slot := bytesutil.Bytes32(uint64(i))
blockRoots[i] = slot
}
target := &ethpb.Checkpoint{
Epoch: e,
Root: blockRoots[0],
}
atts := []*pb.PendingAttestation{
{
Data: &ethpb.AttestationData{Target: target, Slot: 0},
AggregationBits: []byte{0xFF, 0xFF, 0xFF, 0xFF},
},
{
Data: &ethpb.AttestationData{Target: target, Slot: 1},
AggregationBits: []byte{0xFF, 0xFF, 0xFF, 0xFF},
},
{
Data: &ethpb.AttestationData{Target: target, Slot: 2},
AggregationBits: []byte{0xFF, 0xFF, 0xFF, 0xFF},
},
{
Data: &ethpb.AttestationData{Target: target, Slot: 3},
AggregationBits: []byte{0xFF, 0xFF, 0xFF, 0xFF},
},
{
Data: &ethpb.AttestationData{Target: target, Slot: 4},
AggregationBits: []byte{0xFF, 0xFF, 0xFF, 0xFF},
},
}
s := &pb.BeaconState{
Slot: e*params.BeaconConfig().SlotsPerEpoch + 1,
Validators: validators,
Balances: balances,
BlockRoots: blockRoots,
Slashings: []uint64{0, 1e9, 1e9},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
PreviousEpochAttestations: atts,
FinalizedCheckpoint: &ethpb.Checkpoint{},
JustificationBits: bitfield.Bitvector4{0x00},
PreviousJustifiedCheckpoint: target,
}
res, err := epoch.ComputeValidatorParticipation(s, e-1)
if err != nil {
t.Fatal(err)
}
wanted := &ethpb.ValidatorParticipation{
VotedEther: attestedBalance,
EligibleEther: validatorCount * params.BeaconConfig().MaxEffectiveBalance,
GlobalParticipationRate: float32(attestedBalance) / float32(validatorCount*params.BeaconConfig().MaxEffectiveBalance),
}
if !reflect.DeepEqual(res, wanted) {
t.Errorf("Incorrect validator participation, wanted %v received %v", wanted, res)
}
}
func TestComputeValidatorParticipation_CurrentEpoch(t *testing.T) {
params.OverrideBeaconConfig(params.MinimalSpecConfig())
e := uint64(1)
attestedBalance := uint64(16) * params.BeaconConfig().MaxEffectiveBalance
validatorCount := uint64(100)
validators := make([]*ethpb.Validator, validatorCount)
balances := make([]uint64, validatorCount)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
}
balances[i] = params.BeaconConfig().MaxEffectiveBalance
}
slot := e*params.BeaconConfig().SlotsPerEpoch + 4
blockRoots := make([][]byte, 256)
for i := 0; i < len(blockRoots); i++ {
slot := bytesutil.Bytes32(uint64(i))
blockRoots[i] = slot
}
target := &ethpb.Checkpoint{
Epoch: e,
Root: blockRoots[params.BeaconConfig().SlotsPerEpoch],
}
atts := []*pb.PendingAttestation{
{
Data: &ethpb.AttestationData{Target: target, Slot: slot - 4},
AggregationBits: []byte{0xFF, 0xFF, 0xFF, 0xFF},
},
{
Data: &ethpb.AttestationData{Target: target, Slot: slot - 3},
AggregationBits: []byte{0xFF, 0xFF, 0xFF, 0xFF},
},
{
Data: &ethpb.AttestationData{Target: target, Slot: slot - 2},
AggregationBits: []byte{0xFF, 0xFF, 0xFF, 0xFF},
},
{
Data: &ethpb.AttestationData{Target: target, Slot: slot - 1},
AggregationBits: []byte{0xFF, 0xFF, 0xFF, 0xFF},
},
}
s := &pb.BeaconState{
Slot: slot,
Validators: validators,
Balances: balances,
BlockRoots: blockRoots,
Slashings: []uint64{0, 1e9, 1e9},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
CurrentEpochAttestations: atts,
FinalizedCheckpoint: &ethpb.Checkpoint{},
JustificationBits: bitfield.Bitvector4{0x00},
CurrentJustifiedCheckpoint: target,
}
res, err := epoch.ComputeValidatorParticipation(s, e)
if err != nil {
t.Fatal(err)
}
wanted := &ethpb.ValidatorParticipation{
VotedEther: attestedBalance,
EligibleEther: validatorCount * params.BeaconConfig().MaxEffectiveBalance,
GlobalParticipationRate: float32(attestedBalance) / float32(validatorCount*params.BeaconConfig().MaxEffectiveBalance),
}
if !reflect.DeepEqual(res, wanted) {
t.Errorf("Incorrect validator participation, wanted %v received %v", wanted, res)
}
}

View File

@@ -15,11 +15,11 @@ go_library(
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/mathutil:go_default_library",
"//shared/params:go_default_library",
"//shared/traceutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
@@ -37,12 +37,11 @@ go_test(
deps = [
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -11,6 +11,10 @@ import (
"go.opencensus.io/trace"
)
// Balances stores balances such as prev/current total validator balances, attested balances and more.
// It's used for metrics reporting.
var Balances *Balance
// ProcessAttestations processes the attestations in state and updates each validator's pre-computed
// fields. It also tracks and updates the epoch attesting balances.
func ProcessAttestations(
@@ -23,6 +27,7 @@ func ProcessAttestations(
v := &Validator{}
var err error
for _, a := range append(state.PreviousEpochAttestations, state.CurrentEpochAttestations...) {
v.IsCurrentEpochAttester, v.IsCurrentEpochTargetAttester, err = AttestedCurrentEpoch(state, a)
if err != nil {
@@ -35,8 +40,11 @@ func ProcessAttestations(
return nil, nil, errors.Wrap(err, "could not check validator attested previous epoch")
}
// Get attested indices and update the pre computed fields for each attested validators.
indices, err := helpers.AttestingIndices(state, a.Data, a.AggregationBits)
committee, err := helpers.BeaconCommitteeFromState(state, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return nil, nil, err
}
indices, err := helpers.AttestingIndices(a.AggregationBits, committee)
if err != nil {
return nil, nil, err
}
@@ -44,6 +52,7 @@ func ProcessAttestations(
}
bp = UpdateBalance(vp, bp)
Balances = bp
return vp, bp, nil
}
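As a sketch of the intended flow (assuming the New constructor that the spec-test wrapper later in this diff uses, and the package's context import), a caller builds the per-validator and balance pre-computes and then folds the attestations in; precomputeEpochAttestations is a hypothetical convenience name.
// precomputeEpochAttestations is a hypothetical wrapper: build the
// per-validator and balance caches, then process attestations into them.
func precomputeEpochAttestations(ctx context.Context, state *pb.BeaconState) ([]*Validator, *Balance, error) {
	vp, bp := New(ctx, state)
	return ProcessAttestations(ctx, state, vp, bp)
}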

View File

@@ -5,11 +5,10 @@ import (
"reflect"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -73,12 +72,7 @@ func TestUpdateBalance(t *testing.T) {
}
func TestSameHead(t *testing.T) {
helpers.ClearAllCaches()
deposits, _, _ := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, uint64(0), &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
if err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot = 1
att := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 0}}}
@@ -103,11 +97,7 @@ func TestSameHead(t *testing.T) {
}
func TestSameTarget(t *testing.T) {
deposits, _, _ := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, uint64(0), &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
if err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot = 1
att := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 0}}}
@@ -132,11 +122,7 @@ func TestSameTarget(t *testing.T) {
}
func TestAttestedPrevEpoch(t *testing.T) {
deposits, _, _ := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, uint64(0), &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
if err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot = params.BeaconConfig().SlotsPerEpoch
att := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 0}}}
@@ -160,11 +146,7 @@ func TestAttestedPrevEpoch(t *testing.T) {
}
func TestAttestedCurrentEpoch(t *testing.T) {
deposits, _, _ := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, uint64(0), &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
if err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot = params.BeaconConfig().SlotsPerEpoch + 1
att := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 1}}}
@@ -185,17 +167,11 @@ func TestAttestedCurrentEpoch(t *testing.T) {
}
func TestProcessAttestations(t *testing.T) {
helpers.ClearAllCaches()
params.UseMinimalConfig()
defer params.UseMainnetConfig()
validators := uint64(64)
deposits, _, _ := testutil.SetupInitialDeposits(t, validators)
beaconState, err := state.GenesisBeaconState(deposits, uint64(0), &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
if err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, validators)
beaconState.Slot = params.BeaconConfig().SlotsPerEpoch
bf := []byte{0xff}
@@ -219,17 +195,26 @@ func TestProcessAttestations(t *testing.T) {
vp[i] = &precompute.Validator{CurrentEpochEffectiveBalance: 100}
}
bp := &precompute.Balance{}
vp, bp, err = precompute.ProcessAttestations(context.Background(), beaconState, vp, bp)
vp, bp, err := precompute.ProcessAttestations(context.Background(), beaconState, vp, bp)
if err != nil {
t.Fatal(err)
}
indices, _ := helpers.AttestingIndices(beaconState, att1.Data, att1.AggregationBits)
committee, err := helpers.BeaconCommitteeFromState(beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
if err != nil {
t.Error(err)
}
indices, _ := helpers.AttestingIndices(att1.AggregationBits, committee)
for _, i := range indices {
if !vp[i].IsPrevEpochAttester {
t.Error("Not a prev epoch attester")
}
}
indices, _ = helpers.AttestingIndices(beaconState, att2.Data, att2.AggregationBits)
committee, err = helpers.BeaconCommitteeFromState(beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
if err != nil {
t.Error(err)
}
indices, _ = helpers.AttestingIndices(att2.AggregationBits, committee)
for _, i := range indices {
if !vp[i].IsPrevEpochAttester {
t.Error("Not a prev epoch attester")

View File

@@ -2,9 +2,9 @@ package precompute
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)
// ProcessJustificationAndFinalizationPreCompute processes justification and finalization during

View File

@@ -4,10 +4,10 @@ import (
"bytes"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)

View File

@@ -5,9 +5,9 @@ import (
"reflect"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)

View File

@@ -76,7 +76,7 @@ func attestationDelta(state *pb.BeaconState, bp *Balance, v *Validator) (uint64,
p += br
}
// Process heard reward / penalty
// Process head reward / penalty
if v.IsPrevEpochHeadAttester && !v.IsSlashed {
r += br * bp.PrevEpochHeadAttesters / bp.CurrentEpoch
} else {

View File

@@ -4,16 +4,15 @@ import (
"context"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)
func TestProcessRewardsAndPenaltiesPrecompute(t *testing.T) {
helpers.ClearAllCaches()
e := params.BeaconConfig().SlotsPerEpoch
validatorCount := uint64(2048)
state := buildState(e+3, validatorCount)
@@ -57,7 +56,6 @@ func TestProcessRewardsAndPenaltiesPrecompute(t *testing.T) {
}
func TestAttestationDeltaPrecompute(t *testing.T) {
helpers.ClearAllCaches()
e := params.BeaconConfig().SlotsPerEpoch
validatorCount := uint64(2048)
state := buildState(e+2, validatorCount)

View File

@@ -4,10 +4,9 @@ import (
"testing"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -97,8 +96,6 @@ func TestProcessSlashingsPrecompute_SlashedLess(t *testing.T) {
for i, tt := range tests {
t.Run(string(i), func(t *testing.T) {
helpers.ClearAllCaches()
ab := uint64(0)
for i, b := range tt.state.Balances {
// Skip validator 0 since it's slashed

View File

@@ -1,15 +1,11 @@
package spectest
import (
"bytes"
"context"
"fmt"
"path"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
@@ -25,40 +21,11 @@ func runJustificationAndFinalizationTests(t *testing.T, config string) {
for _, folder := range testFolders {
t.Run(folder.Name(), func(t *testing.T) {
folderPath := path.Join(testsFolderPath, folder.Name())
testutil.RunEpochOperationTest(t, folderPath, processJustificationAndFinalizationWrapper)
testutil.RunEpochOperationTest(t, folderPath, processJustificationAndFinalizationPrecomputeWrapper)
})
}
}
// This is a subset of state.ProcessEpoch. The spec test defines input data for
// `justification_and_finalization` only.
func processJustificationAndFinalizationWrapper(t *testing.T, state *pb.BeaconState) (*pb.BeaconState, error) {
prevEpochAtts, err := targetAtts(state, helpers.PrevEpoch(state))
if err != nil {
t.Fatalf("could not get target atts prev epoch %d: %v", helpers.PrevEpoch(state), err)
}
currentEpochAtts, err := targetAtts(state, helpers.CurrentEpoch(state))
if err != nil {
t.Fatalf("could not get target atts current epoch %d: %v", helpers.CurrentEpoch(state), err)
}
prevEpochAttestedBalance, err := epoch.AttestingBalance(state, prevEpochAtts)
if err != nil {
t.Fatalf("could not get attesting balance prev epoch: %v", err)
}
currentEpochAttestedBalance, err := epoch.AttestingBalance(state, currentEpochAtts)
if err != nil {
t.Fatalf("could not get attesting balance current epoch: %v", err)
}
state, err = epoch.ProcessJustificationAndFinalization(state, prevEpochAttestedBalance, currentEpochAttestedBalance)
if err != nil {
t.Fatalf("could not process justification: %v", err)
}
return state, nil
}
func processJustificationAndFinalizationPrecomputeWrapper(t *testing.T, state *pb.BeaconState) (*pb.BeaconState, error) {
ctx := context.Background()
vp, bp := precompute.New(ctx, state)
@@ -74,36 +41,3 @@ func processJustificationAndFinalizationPrecomputeWrapper(t *testing.T, state *p
return state, nil
}
func targetAtts(state *pb.BeaconState, epoch uint64) ([]*pb.PendingAttestation, error) {
currentEpoch := helpers.CurrentEpoch(state)
previousEpoch := helpers.PrevEpoch(state)
// Input epoch for matching the source attestations has to be within range
// of current epoch & previous epoch.
if epoch != currentEpoch && epoch != previousEpoch {
return nil, fmt.Errorf("input epoch: %d != current epoch: %d or previous epoch: %d",
epoch, currentEpoch, previousEpoch)
}
// Decide if the source attestations are coming from current or previous epoch.
var srcAtts []*pb.PendingAttestation
if epoch == currentEpoch {
srcAtts = state.CurrentEpochAttestations
} else {
srcAtts = state.PreviousEpochAttestations
}
targetRoot, err := helpers.BlockRoot(state, epoch)
if err != nil {
return nil, err
}
tgtAtts := make([]*pb.PendingAttestation, 0, len(srcAtts))
for _, srcAtt := range srcAtts {
if bytes.Equal(srcAtt.Data.Target.Root, targetRoot) {
tgtAtts = append(tgtAtts, srcAtt)
}
}
return tgtAtts, nil
}

View File

@@ -0,0 +1,37 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["validation.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core/exit",
visibility = [
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bls:go_default_library",
"//shared/mathutil:go_default_library",
"//shared/params:go_default_library",
"//shared/roughtime:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["validation_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)

View File

@@ -0,0 +1,66 @@
package exit
import (
"fmt"
"time"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/mathutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/roughtime"
)
// ValidateVoluntaryExit validates the voluntary exit.
// If the exit is invalid for some reason it returns an error; if it is valid it returns nil.
func ValidateVoluntaryExit(state *pb.BeaconState, genesisTime time.Time, signed *ethpb.SignedVoluntaryExit) error {
if signed == nil || signed.Exit == nil {
return errors.New("nil signed voluntary exit")
}
ve := signed.Exit
if ve.ValidatorIndex >= uint64(len(state.Validators)) {
return fmt.Errorf("unknown validator index %d", ve.ValidatorIndex)
}
validator := state.Validators[ve.ValidatorIndex]
if !helpers.IsActiveValidator(validator, ve.Epoch) {
return fmt.Errorf("validator %d not active at epoch %d", ve.ValidatorIndex, ve.Epoch)
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return fmt.Errorf("validator %d already exiting or exited", ve.ValidatorIndex)
}
secondsPerEpoch := params.BeaconConfig().SecondsPerSlot * params.BeaconConfig().SlotsPerEpoch
currentEpoch := uint64(roughtime.Now().Unix()-genesisTime.Unix()) / secondsPerEpoch
earliestRequestedExitEpoch := mathutil.Max(ve.Epoch, currentEpoch)
earliestExitEpoch := validator.ActivationEpoch + params.BeaconConfig().PersistentCommitteePeriod
if earliestRequestedExitEpoch < earliestExitEpoch {
return fmt.Errorf("validator %d cannot exit before epoch %d", ve.ValidatorIndex, earliestExitEpoch)
}
// Confirm signature is valid
root, err := ssz.HashTreeRoot(ve)
if err != nil {
return errors.Wrap(err, "cannot confirm signature")
}
sig, err := bls.SignatureFromBytes(signed.Signature)
if err != nil {
return errors.Wrap(err, "malformed signature")
}
validatorPubKey, err := bls.PublicKeyFromBytes(validator.PublicKey)
if err != nil {
return errors.Wrap(err, "invalid validator public key")
}
domain := bls.ComputeDomain(params.BeaconConfig().DomainVoluntaryExit)
verified := sig.Verify(root[:], validatorPubKey, domain)
if !verified {
return errors.New("incorrect signature")
}
// Parameters are valid.
return nil
}
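The timing check above can be restated compactly. This sketch (exitEligible is a hypothetical name) uses only the quantities computed in the function and matches the EarlyExit case in the tests that follow, where a validator activated at epoch 0 cannot exit before epoch 2048.
// exitEligible restates the two timing constraints: the requested (or current)
// epoch must be at least activation epoch + PersistentCommitteePeriod.
func exitEligible(requestedEpoch, currentEpoch, activationEpoch uint64) bool {
	earliestRequested := mathutil.Max(requestedEpoch, currentEpoch)
	earliestAllowed := activationEpoch + params.BeaconConfig().PersistentCommitteePeriod
	return earliestRequested >= earliestAllowed
}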

View File

@@ -0,0 +1,125 @@
package exit_test
import (
"context"
"errors"
"testing"
"time"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
mockChain "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
blk "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/exit"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
dbutil "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
// Use a small genesis validator set for faster test processing.
func init() {
p := params.BeaconConfig()
p.MinGenesisActiveValidatorCount = 8
params.OverrideBeaconConfig(p)
}
func TestValidation(t *testing.T) {
tests := []struct {
name string
epoch uint64
validatorIndex uint64
signature []byte
err error
}{
{
name: "MissingValidator",
epoch: 2048,
validatorIndex: 16,
err: errors.New("unknown validator index 16"),
},
{
name: "EarlyExit",
epoch: 2047,
validatorIndex: 0,
err: errors.New("validator 0 cannot exit before epoch 2048"),
},
{
name: "NoSignature",
epoch: 2048,
validatorIndex: 0,
err: errors.New("malformed signature: signature must be 96 bytes"),
},
{
name: "InvalidSignature",
epoch: 2048,
validatorIndex: 0,
signature: []byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
err: errors.New("malformed signature: could not unmarshal bytes into signature: err blsSignatureDeserialize 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
},
{
name: "IncorrectSignature",
epoch: 2048,
validatorIndex: 0,
signature: []byte{0xab, 0xb0, 0x12, 0x4c, 0x75, 0x74, 0xf2, 0x81, 0xa2, 0x93, 0xf4, 0x18, 0x5c, 0xad, 0x3c, 0xb2, 0x26, 0x81, 0xd5, 0x20, 0x91, 0x7c, 0xe4, 0x66, 0x65, 0x24, 0x3e, 0xac, 0xb0, 0x51, 0x00, 0x0d, 0x8b, 0xac, 0xf7, 0x5e, 0x14, 0x51, 0x87, 0x0c, 0xa6, 0xb3, 0xb9, 0xe6, 0xc9, 0xd4, 0x1a, 0x7b, 0x02, 0xea, 0xd2, 0x68, 0x5a, 0x84, 0x18, 0x8a, 0x4f, 0xaf, 0xd3, 0x82, 0x5d, 0xaf, 0x6a, 0x98, 0x96, 0x25, 0xd7, 0x19, 0xcc, 0xd2, 0xd8, 0x3a, 0x40, 0x10, 0x1f, 0x4a, 0x45, 0x3f, 0xca, 0x62, 0x87, 0x8c, 0x89, 0x0e, 0xca, 0x62, 0x23, 0x63, 0xf9, 0xdd, 0xb8, 0xf3, 0x67, 0xa9, 0x1e, 0x84},
err: errors.New("incorrect signature"),
},
{
name: "Good",
epoch: 2048,
validatorIndex: 0,
signature: []byte{0xb3, 0xe1, 0x9d, 0xc6, 0x7c, 0x78, 0x6c, 0xcf, 0x33, 0x1d, 0xb9, 0x6f, 0x59, 0x64, 0x44, 0xe1, 0x29, 0xd0, 0x87, 0x03, 0x26, 0x6e, 0x49, 0x1c, 0x05, 0xae, 0x16, 0x7b, 0x04, 0x0f, 0x3f, 0xf8, 0x82, 0x77, 0x60, 0xfc, 0xcf, 0x2f, 0x59, 0xc7, 0x40, 0x0b, 0x2c, 0xa9, 0x23, 0x8a, 0x6c, 0x8d, 0x01, 0x21, 0x5e, 0xa8, 0xac, 0x36, 0x70, 0x31, 0xb0, 0xe1, 0xa8, 0xb8, 0x8f, 0x93, 0x8c, 0x1c, 0xa2, 0x86, 0xe7, 0x22, 0x00, 0x6a, 0x7d, 0x36, 0xc0, 0x2b, 0x86, 0x2c, 0xf5, 0xf9, 0x10, 0xb9, 0xf2, 0xbd, 0x5e, 0xa6, 0x5f, 0x12, 0x86, 0x43, 0x20, 0x4d, 0xa2, 0x9d, 0x8b, 0xe6, 0x6f, 0x09},
},
}
db := dbutil.SetupDB(t)
defer dbutil.TeardownDB(t, db)
ctx := context.Background()
deposits, _, _ := testutil.DeterministicDepositsAndKeys(params.BeaconConfig().MinGenesisActiveValidatorCount)
beaconState, err := state.GenesisBeaconState(deposits, 0, &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
if err != nil {
t.Fatal(err)
}
block := blk.NewGenesisBlock([]byte{})
if err := db.SaveBlock(ctx, block); err != nil {
t.Fatalf("Could not save genesis block: %v", err)
}
genesisRoot, err := ssz.HashTreeRoot(block.Block)
if err != nil {
t.Fatalf("Could not get signing root %v", err)
}
// Set genesis time to be 100 epochs ago
genesisTime := time.Now().Add(time.Duration(-100*int64(params.BeaconConfig().SecondsPerSlot*params.BeaconConfig().SlotsPerEpoch)) * time.Second)
mockChainService := &mockChain.ChainService{State: beaconState, Root: genesisRoot[:], Genesis: genesisTime}
headState, err := mockChainService.HeadState(context.Background())
if err != nil {
t.Fatal("Failed to obtain head state")
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: test.epoch,
ValidatorIndex: test.validatorIndex,
},
Signature: test.signature,
}
err := exit.ValidateVoluntaryExit(headState, genesisTime, req)
if test.err == nil {
if err != nil {
t.Errorf("Unexpected error: received %v", err)
}
} else {
if err == nil {
t.Error("Failed to receive expected error")
}
if err.Error() != test.err.Error() {
t.Errorf("Unexpected error: expected %s, received %s", test.err.Error(), err.Error())
}
}
})
}
}

View File

@@ -0,0 +1,8 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["event.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core/feed",
visibility = ["//beacon-chain:__subpackages__"],
)

View File

@@ -0,0 +1,19 @@
package feed
// How to add a new event to the feed:
// 1. Add a file for the new type of feed.
// 2. Add a constant for each event to the list of events.
// 3. Add a structure with the name `<event>Data` containing any data fields that should be supplied with the event.
//
// Note that the same event is supplied to all subscribers, so the event received by subscribers should be considered read-only.
// EventType is the type that defines the type of event.
type EventType int
// Event is the event that is sent with operation feed updates.
type Event struct {
// Type is the type of event.
Type EventType
// Data is event-specific data.
Data interface{}
}
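A hypothetical illustration of the steps above: a new deposit feed would live in its own package (step 1) and define an event constant (step 2) and a data type (step 3). The names below are illustrative only and not part of this change.
// Step 2: a constant identifying the hypothetical event.
const DepositReceived = iota + 1

// Step 3: DepositReceivedData is the data sent with DepositReceived events.
type DepositReceivedData struct {
	// Index is the index of the received deposit; subscribers must treat it as read-only.
	Index uint64
}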

View File

@@ -0,0 +1,15 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"events.go",
"notifier.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/operation",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//shared/event:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
],
)

View File

@@ -0,0 +1,36 @@
package operation
import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
)
const (
// UnaggregatedAttReceived is sent after an unaggregated attestation object has been received
// from the outside world. (eg. in RPC or sync)
UnaggregatedAttReceived = iota + 1
// AggregatedAttReceived is sent after an aggregated attestation object has been received
// from the outside world. (eg. in sync)
AggregatedAttReceived
// ExitReceived is sent after a voluntary exit object has been received from the outside world (eg. in RPC or sync)
ExitReceived
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
type UnAggregatedAttReceivedData struct {
// Attestation is the unaggregated attestation object.
Attestation *ethpb.Attestation
}
// AggregatedAttReceivedData is the data sent with AggregatedAttReceived events.
type AggregatedAttReceivedData struct {
// Attestation is the aggregated attestation object.
Attestation *ethpb.AggregateAttestationAndProof
}
// ExitReceivedData is the data sent with ExitReceived events.
type ExitReceivedData struct {
// Exit is the voluntary exit object.
Exit *ethpb.SignedVoluntaryExit
}

View File

@@ -0,0 +1,8 @@
package operation
import "github.com/prysmaticlabs/prysm/shared/event"
// Notifier interface defines the methods of the service that provides beacon block operation updates to consumers.
type Notifier interface {
OperationFeed() *event.Feed
}

View File

@@ -0,0 +1,12 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"events.go",
"notifier.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state",
visibility = ["//beacon-chain:__subpackages__"],
deps = ["//shared/event:go_default_library"],
)

View File

@@ -0,0 +1,32 @@
package state
import "time"
const (
// BlockProcessed is sent after a block has been processed and updated the state database.
BlockProcessed = iota + 1
// ChainStarted is sent when enough validators are active to start proposing blocks.
ChainStarted
// Initialized is sent when the internal beacon node's state is ready to be accessed.
Initialized
)
// BlockProcessedData is the data sent with BlockProcessed events.
type BlockProcessedData struct {
// BlockRoot is the root of the processed block.
BlockRoot [32]byte
// Verified is true if the block's BLS contents have been verified.
Verified bool
}
// ChainStartedData is the data sent with ChainStarted events.
type ChainStartedData struct {
// StartTime is the time at which the chain started.
StartTime time.Time
}
// InitializedData is the data sent with Initialized events.
type InitializedData struct {
// StartTime is the time at which the chain started.
StartTime time.Time
}
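A consumer-side sketch of how these events are meant to be handled. handleStateEvent is hypothetical, assumes imports of the parent core/feed package and the standard log package, and relies only on the Event and data types defined in this change.
// handleStateEvent reacts to state feed events by type-asserting the
// event-specific data defined above.
func handleStateEvent(ev *feed.Event) {
	switch ev.Type {
	case BlockProcessed:
		d := ev.Data.(*BlockProcessedData)
		log.Printf("processed block %#x (verified=%v)", d.BlockRoot, d.Verified)
	case ChainStarted:
		d := ev.Data.(*ChainStartedData)
		log.Printf("chain started at %v", d.StartTime)
	case Initialized:
		d := ev.Data.(*InitializedData)
		log.Printf("node initialized, start time %v", d.StartTime)
	}
}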

View File

@@ -0,0 +1,8 @@
package state
import "github.com/prysmaticlabs/prysm/shared/event"
// Notifier interface defines the methods of the service that provides state updates to consumers.
type Notifier interface {
StateFeed() *event.Feed
}

View File

@@ -5,7 +5,6 @@ go_library(
srcs = [
"attestation.go",
"block.go",
"cache.go",
"committee.go",
"randao.go",
"rewards_penalties.go",
@@ -18,12 +17,12 @@ go_library(
"//beacon-chain:__subpackages__",
"//shared/testutil:__pkg__",
"//slasher:__subpackages__",
"//tools:__subpackages__",
"//validator:__subpackages__",
],
deps = [
"//beacon-chain/cache:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bls:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
@@ -33,13 +32,15 @@ go_library(
"//shared/sliceutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)
go_test(
name = "go_default_test",
size = "small",
size = "medium",
srcs = [
"attestation_test.go",
"block_test.go",
@@ -54,12 +55,14 @@ go_test(
shard_count = 2,
deps = [
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bls:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/params:go_default_library",
"//shared/sliceutil:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
@@ -83,8 +86,8 @@ go_test(
"no-cache",
],
deps = [
"//proto/eth/v1alpha1:go_default_library",
"//shared/bls:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -1,19 +1,19 @@
package helpers
import (
"encoding/binary"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
var (
// ErrAttestationDataSlotNilState is returned when a nil state argument
// is provided to AttestationDataSlot.
ErrAttestationDataSlotNilState = errors.New("nil state provided for AttestationDataSlot")
// ErrAttestationDataSlotNilData is returned when a nil attestation data
// argument is provided to AttestationDataSlot.
ErrAttestationDataSlotNilData = errors.New("nil data provided for AttestationDataSlot")
// ErrAttestationAggregationBitsOverlap is returned when two attestations aggregation
// bits overlap with each other.
ErrAttestationAggregationBitsOverlap = errors.New("overlapping aggregation bits")
@@ -105,3 +105,61 @@ func AggregateAttestation(a1 *ethpb.Attestation, a2 *ethpb.Attestation) (*ethpb.
return baseAtt, nil
}
// SlotSignature returns the BLS signature over the hash tree root of the input slot.
//
// Spec pseudocode definition:
// def get_slot_signature(state: BeaconState, slot: Slot, privkey: int) -> BLSSignature:
// domain = get_domain(state, DOMAIN_BEACON_ATTESTER, compute_epoch_at_slot(slot))
// return bls_sign(privkey, hash_tree_root(slot), domain)
func SlotSignature(state *pb.BeaconState, slot uint64, privKey *bls.SecretKey) (*bls.Signature, error) {
d := Domain(state.Fork, CurrentEpoch(state), params.BeaconConfig().DomainBeaconAttester)
s, err := ssz.HashTreeRoot(slot)
if err != nil {
return nil, err
}
return privKey.Sign(s[:], d), nil
}
// IsAggregator returns true if the validator is selected as an aggregator based on its slot signature.
// The committee count is provided as an argument rather than recomputed from the state as in the spec
// pseudocode; having it as an argument allows cheaper computation at run time.
//
// Spec pseudocode definition:
// def is_aggregator(state: BeaconState, slot: Slot, index: CommitteeIndex, slot_signature: BLSSignature) -> bool:
// committee = get_beacon_committee(state, slot, index)
// modulo = max(1, len(committee) // TARGET_AGGREGATORS_PER_COMMITTEE)
// return bytes_to_int(hash(slot_signature)[0:8]) % modulo == 0
func IsAggregator(committeeCount uint64, slot uint64, index uint64, slotSig []byte) (bool, error) {
modulo := uint64(1)
if committeeCount/params.BeaconConfig().TargetAggregatorsPerCommittee > 1 {
modulo = committeeCount / params.BeaconConfig().TargetAggregatorsPerCommittee
}
b := hashutil.Hash(slotSig)
return binary.LittleEndian.Uint64(b[:8])%modulo == 0, nil
}
// AggregateSignature returns the aggregated signature of the input attestations.
//
// Spec pseudocode definition:
// def get_aggregate_signature(attestations: Sequence[Attestation]) -> BLSSignature:
// signatures = [attestation.signature for attestation in attestations]
// return bls_aggregate_signatures(signatures)
func AggregateSignature(attestations []*ethpb.Attestation) (*bls.Signature, error) {
sigs := make([]*bls.Signature, len(attestations))
var err error
for i := 0; i < len(sigs); i++ {
sigs[i], err = signatureFromBytes(attestations[i].Signature)
if err != nil {
return nil, err
}
}
return aggregateSignatures(sigs), nil
}
// IsAggregated returns true if the attestation is an aggregated attestation,
// false otherwise.
func IsAggregated(attestation *ethpb.Attestation) bool {
return attestation.AggregationBits.Count() > 1
}
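A sketch tying the helpers above together (shouldAggregate is a hypothetical name; the committee length is passed in, matching the argument convention noted on IsAggregator, and the pb and bls imports already exist in this file).
// shouldAggregate signs the slot and checks whether this validator is selected
// as an aggregator for its committee.
func shouldAggregate(state *pb.BeaconState, slot, committeeIndex, committeeLen uint64, priv *bls.SecretKey) (bool, error) {
	sig, err := SlotSignature(state, slot, priv)
	if err != nil {
		return false, err
	}
	return IsAggregator(committeeLen, slot, committeeIndex, sig.Marshal())
}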

View File

@@ -3,8 +3,8 @@ package helpers
import (
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bls"
)
@@ -89,7 +89,6 @@ func BenchmarkAggregateAttestations(b *testing.B) {
atts[i] = &ethpb.Attestation{
AggregationBits: b,
Data: nil,
CustodyBits: nil,
Signature: bls.NewAggregateSignature().Marshal(),
}
}

Some files were not shown because too many files have changed in this diff.