Align code base to v0.11 (#5127)

* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* add in new patch and workspace
* update cloners
* Handle rewards overflow (#5122)

* Refactoring of initial sync (#5096)

* implements blocks queue

* refactors updateCounter method

* fixes deadlock on stop w/o start

* refactors updateSchedulerState

* more tests on scheduler

* parseFetchResponse tests

* wraps up tests for blocks queue

* eod commit

* fixes data race in round robin

* revamps fetcher

* fixes race conditions + livelocks + deadlocks

* less verbose output

* fixes data race, by isolating critical sections

* minor refactoring: resolves blocking calls

* implements init-sync queue

* update fetch/send buffers in blocks fetcher

* blockState enum-like type alias

* refactors common code into releaseTicket()

* better gc

* linter

* minor fix to round robin

* moves original round robin into its own package

* adds enableInitSyncQueue flag

* fixes issue with init-sync service selection

* Update beacon-chain/sync/initial-sync/round_robin.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* initsyncv1 -> initsyncold

* adds span

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* Handle rewards overflow

* Revert "Refactoring of initial sync (#5096)"

This reverts commit 3ec2a0f9e0.

Co-authored-by: Victor Farazdagi <simple.square@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
* updated block operations
* updated validator client
* Merge refs/heads/master into v0.10.1
* updated block operations test
* skip benchmark test
* updated transition test
* updated db kv tests
* updated ops tests
* updated ops tests
* updated slashing tests
* updated rpc tests
* updated state utils
* updated test utils and miscs
* Temp skips minimal spec tests
* Fixed proposer slashing test
* Gaz
* Skip 2 more minimal tests
* Skip 2 more minimal tests
* Update readme
* gaz
* Conflict
* Fix import and not use
* Update workspace for new spec test
* Fix workspace
* Merge refs/heads/master into v0.10.1
* Update workspace with new ethapi commit
* Unblock a few tests
* Merge refs/heads/master into v0.10.1
* fixed block op test
* gaz
* Merge refs/heads/master into v0.10.1
* Skip gen state test (test setup issue)
* Updated hysteresis config
* Updated epoch processing for new hysteresis
* Updated tests
* regen proto beacon
* update state util for state root
* update state types
* update getter and setters
* update compute domain and get domain and tests
* update validators
* Add forkdata proto
* Updated compute domain api, moved it to helper pkg
* Merge refs/heads/master into v0.10.1
* Fixed all core tests
* Fixed all the sync tests
* Fixed all the rpc tests
* Merge refs/heads/master into v0.10.1
* Merge refs/heads/master into v0.10.1
* Fixed conflict
* Fixed conflict
* Conflict fix
* visibility
* Fixed validator tests
* Fixing test util
* Fixed rest of non spec tests
* Fixed a bug proposer index wasn't included
* gaz
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Updated eth1 data voting period to epoch based
* Fixed failed tests
* fix bug
* fix error
* Fixed more misc tests
* Add new SignedAggregateAndProof to pass spec test
* Update minimalConfig.PersistentCommitteePeriod
* allow to rebuild trie
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Skip e2e tests
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Align aggregator action with v0.11 (#5146)
* Remove Head Root from Beacon Block by Range Request (#5165)

* make proto changes
* remove head root
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into v0.11
* add back herumi's library
* Update ethapi in workspace, started fixing test. Hand off to Nishant
* fix build
* All tests passing
* Align finalized slot check with v0.11 (#5166)
* Merge branch 'master' into v0.11
* Add DoS resistance for v0.11 (#5158)
* Add Fork Digest Helper (#5173)
* Extend DoS prevention to rest of operation objects (#5174)

* Update mapping

* Add caches

* Update seen block in validation pipeline

* Update seen att in validation pipeline

* Update seen att in validation pipeline

* Fixed rest of tests

* Gazelle

* Better writes

* Lint

* Preston's feedback

* Switched to LRU cache and fixed tests

* Gazelle

* Fix test

* Update proposer slashing

* Update proposer slashing

* Fixed a block test

* Update exit

* Update attester slashing

* Raul's feedback

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
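The DoS-prevention work above tracks already-seen blocks, attestations, and other operation objects so the gossip validation pipeline can drop duplicates cheaply. Prysm switched to an LRU cache for this; the bounded FIFO map below is a simplified, stdlib-only stand-in for the idea, not the real implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// seenCache remembers a bounded number of recently seen object roots so a
// validation pipeline can reject duplicates without reprocessing them.
type seenCache struct {
	mu    sync.Mutex
	seen  map[[32]byte]struct{}
	order [][32]byte
	cap   int
}

func newSeenCache(capacity int) *seenCache {
	return &seenCache{seen: make(map[[32]byte]struct{}), cap: capacity}
}

// markSeen records root and reports whether it had been seen before.
func (c *seenCache) markSeen(root [32]byte) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.seen[root]; ok {
		return true
	}
	if len(c.order) >= c.cap { // evict the oldest entry (FIFO, not true LRU)
		oldest := c.order[0]
		c.order = c.order[1:]
		delete(c.seen, oldest)
	}
	c.seen[root] = struct{}{}
	c.order = append(c.order, root)
	return false
}

func main() {
	c := newSeenCache(2)
	fmt.Println(c.markSeen([32]byte{1})) // false: first sighting
	fmt.Println(c.markSeen([32]byte{1})) // true: duplicate dropped
	c.markSeen([32]byte{2})
	c.markSeen([32]byte{3})              // evicts {1}
	fmt.Println(c.markSeen([32]byte{1})) // false: forgotten after eviction
}
```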
* Add remote keymanager (#5133)

* Add remote keymanager

* Add generic signRoot() helper

* Add tests for remote keymanager

* NewRemote -> NewRemoteWallet

* signRoot -> signObject, to increase reuse

* Fix end-to-end compile error

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
* Add Snappy Framing to the Encoder (#5172)

* change to framing

* more fixes

* fix everything

* add stricter limits

* preston feedback

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: rauljordan <raul@prysmaticlabs.com>
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Move Subnet Functionality to its Own File (#5179)

* move subnets to their own file

* fix build fail

* build

* Update beacon-chain/p2p/discovery_test.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Sync with master
* Verify proposer signature in sync (#5206)
* Fix Signed Attestation In Sync (#5207)
* Add Eth2 Fork ENR Functionality (#5181)

* add fork entry enr

* add in fork

* add the required fork entry to node

* add and retrieve fork entry

* await state initialized

* utilize new structure

* more progress, utilizing a config map instead

* send the genesis validators root via the event feed

* struct method for discovery

* fix broken builds

* fixed up more tests using state feed initializer

* fix up most tests

* only one more failing test

* almost done with tests

* p2p tests all pass

* config fix

* fix blockchain test

* gaz

* add in todo

* lint

* add compare func

* ensure fork ENR versions match between peers

* add in test for discovery

* test name

* tests complete

* tests done

* done

* comments

* fix all flakes

* addressed comments

* build using ssz gen

* marshal record

* use custom ssz

* deduplicate import

* fix build

* add enr proto

* p2p tests done

Co-authored-by: nisdas <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Verify aggregator signature in sync (#5208)
* Add Fork Digest For Gossip Topics (#5191)

* update for the day

* fix remaining failing test

* fix one more test

* change message

* Apply suggestions from code review

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* terence's review

* implement fork digest

* align digest to interface

* passed all tests

* spawn in goroutine

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
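The fork-digest work above scopes gossip topics and ENR entries to a specific fork. Per the v0.11 spec, compute_fork_digest takes the first 4 bytes of the hash tree root of ForkData(current_version, genesis_validators_root); for a container of two fields that each fit in one 32-byte chunk, that root is sha256 over the two zero-padded chunks. A stdlib-only sketch of that math (the function name is illustrative, not Prysm's API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// computeForkDigest sketches the spec's compute_fork_digest: the digest is
// the first 4 bytes of hash_tree_root(ForkData(current_version,
// genesis_validators_root)). With two single-chunk fields, the SSZ hash
// tree root reduces to sha256(pad32(version) || genesis_validators_root).
func computeForkDigest(currentVersion [4]byte, genesisValidatorsRoot [32]byte) [4]byte {
	var versionChunk [32]byte
	copy(versionChunk[:], currentVersion[:]) // pad the 4-byte version to a 32-byte chunk
	root := sha256.Sum256(append(versionChunk[:], genesisValidatorsRoot[:]...))
	var digest [4]byte
	copy(digest[:], root[:4])
	return digest
}

func main() {
	digest := computeForkDigest([4]byte{0, 0, 0, 0}, [32]byte{})
	fmt.Printf("%x\n", digest)
}
```

Two peers on different forks derive different digests, so mismatched topic subscriptions and ENR fork entries fail fast.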
* Fix Incorrect Attester Slashing Method (#5229)
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Remove keystore keymanager from validator (#5236)

* Remove keystore keymanager from validator

* Update dependency

* Update validator/flags/flags.go

* Update validator/flags/flags.go

Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
* fix broadcaster
* update metrics with fork digest for p2p (#5251)

* update metrics with fork digest for p2p

* update p2p metrics

* update metrics using att values

* wrapped up

* fix bug

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Fix incorrect domain type comments (#5250)

* Fix incorrect domain type comments
* resolve conflicts
* fix broken broadcast test
* fix tests
* include protocol suffix
* fix confs
* lint
* fix test
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.11
* resolve broken slasher test
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Merge branch 'master' into v0.11
* fix config override
* Remove deprecated parameters (#5249)
* Avoid div by zero in extreme balance case (#5273)

* Return effective balance increment instead of 1

* Update to new spec tests v0.11.1

* Revert "Regen historical states for `new-state-mgmt` compatibility (#5261)"

This reverts commit df9a534826.

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
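The extreme-balance fix above floors the total balance at EFFECTIVE_BALANCE_INCREMENT (rather than 1) so downstream reward math never divides by zero. A minimal sketch of that guard, assuming mainnet's 1 ETH (in Gwei) increment — illustrative, not Prysm's actual helper:

```go
package main

import "fmt"

// effectiveBalanceIncrement mirrors the mainnet EFFECTIVE_BALANCE_INCREMENT
// constant: 1 ETH in Gwei.
const effectiveBalanceIncrement = 1_000_000_000

// totalBalance sums effective balances, flooring the result at
// EFFECTIVE_BALANCE_INCREMENT so later divisions by the total are safe.
// v0.11 raised this floor from 1 Gwei to the full increment.
func totalBalance(balances []uint64) uint64 {
	var sum uint64
	for _, b := range balances {
		sum += b
	}
	if sum < effectiveBalanceIncrement {
		return effectiveBalanceIncrement
	}
	return sum
}

func main() {
	fmt.Println(totalBalance(nil))                      // floor applies: 1000000000
	fmt.Println(totalBalance([]uint64{32_000_000_000})) // 32000000000
}
```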
* Revert "Remove deprecated parameters (#5249)" (#5276)

This reverts commit 7d17c9ac34.
* Verify block proposer index before gossip  (#5274)

* Update pipeline

* Update tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Add in Proposer Index to Custom HTR (#5269)

* fix test

* Update beacon-chain/state/stateutil/blocks_test.go

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Resolve Flakey P2P Tests (#5285)

* double time for flakey test

* fix test flakiness in p2p

* flakey

* time tolerance

* greater tolerance
* Merge branch 'master' into v0.11
* release resources correctly (#5287)
* Merge refs/heads/master into v0.11
* Enable NOISE Handshake by Default v0.11 (#5272)

* noise handshakes by default

* fix build

* noisy noise everywhere

* deprecated noisy noise flag with more noise

* add secio as fallback

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: nisdas <nishdas93@gmail.com>
* Merge refs/heads/master into v0.11
* new ports
* fix broken build
* Make `new-state-mgmt` canonical  (#5289)

* Invert the flags
* Update checking messages
* Fixed all db tests
* Fixed rest of the block chain tests
* Fix chain race tests
* Fixed rpc tests
* Disable sounds better...
* Merge branch 'v0.11' into invert-new-state-mgmt
* Merge refs/heads/v0.11 into invert-new-state-mgmt
* Fix export
* Merge branch 'invert-new-state-mgmt' of github.com:prysmaticlabs/prysm into invert-new-state-mgmt
* Fix conflict tests
* Gazelle
* Merge refs/heads/v0.11 into invert-new-state-mgmt
* Merge refs/heads/v0.11 into invert-new-state-mgmt
* Merge branch 'master' into v0.11
* resolve flakiness
* Merge refs/heads/master into v0.11
* Merge refs/heads/master into v0.11
* Detect Proposer Slashing Implementation (#5139)

* detect blocks

* detect blocks

* use stub

* use stub

* use stub

* todo

* fix test

* add tests and utils

* fix imports

* fix imports

* fix comment

* todo

* proposerIndex

* fix broken test

* formatting and simplified if

* Update slasher/detection/service.go

* Update slasher/detection/testing/utils.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* fixed up final comments

* better naming

* Update slasher/detection/service.go

* Update slasher/detection/service.go

* Update slasher/detection/service.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>

* no more named args

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into v0.11
* Add Metadata And Ping RPC methods (#5271)

* add new proto files

* add flag and helper

* add initializer

* imports

* add ping method

* add receive/send ping request

* add ping test

* refactor rpc methods and add ping test

* finish adding all tests

* fix up tests

* Apply suggestions from code review

* lint

* imports

* lint

* Update beacon-chain/p2p/service.go

* Update shared/cmd/flags.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into v0.11
* Updates for remote keymanager (#5260)
* Merge branch 'spec-v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Merge remote-tracking branch 'origin' into v0.11
* Update to slash by slot instead of epoch (#5297)

* change to slash by slot instead of epoch

* gaz

* fix test

* fix test

* fix infinite loop on error parse
* Sync with master
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Update proposer protection to v0.11 (#5292)

* Complete most of changes

* Fix other tests

* Test progress

* Tests

* Finish tests

* update pbs

* Fix mocked tests

* Gazelle

* pt 2

* Fix

* Fixes

* Fix tests with wrong copying
* Merge refs/heads/master into v0.11
* Merge refs/heads/master into v0.11
* Implement `SubscribeCommitteeSubnet` method (#5299)

* Add client implementation

* Update workspace

* Update server

* Update service

* Gaz

* Mocks

* Fixed validator tests

* Add round trip tests

* Fixed subnet test

* Comment

* Update committee cache

* Comment

* Update RPC

* Fixed test

* Nishant's comment

* Gaz

* Refresh ENR is for epoch

* Needs to be append
* Merge refs/heads/master into v0.11
* resolve confs
* Validator subscribe subnet to next epoch (#5312)

* Alert to subscribe to next epoch

* Fixed tests

* Comments

* Fixed tests

* Update validator/client/validator.go

Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Revert "Revert "Remove deprecated parameters (#5249)" (#5276)" (#5277)

This reverts commit 47e5a2cf96.
* Aggregate on demand for v0.11 (#5302)

* Add client implementation

* Update workspace

* Update server

* Update service

* Gaz

* Mocks

* Fixed validator tests

* Add round trip tests

* Fixed subnet test

* Wait 1/3 on validator side

* Lint

* Comment

* Update committee cache

* Comment

* Update RPC

* Fixed test

* Nishant's comment

* Gaz

* Refresh ENR is for epoch

* Needs to be append

* Fixed duplication

* Tests

* Skip e2e

* Update beacon-chain/rpc/validator/aggregator.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

* Apply suggestions from code review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: shayzluf <thezluf@gmail.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
* Refactor Dynamic Subscriptions (#5318)

* clean up

* comment

* metrics

* fix

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge refs/heads/master into v0.11
* Fix listindexed attestations and detect historic attestations (#5321)

* fix list indexed attestations

* fix tests

* goimports

* names
* Add check for slot == 0 (#5322)
* Change attester protection to return default if DB is empty (#5323)

* Change how default values are set

* Remove unused imports

* Remove wasteful db call

* Fix db tests

* Fix db test
* Merge refs/heads/master into v0.11
* fix it (#5326)
* V0.11 run time fixes to use interop config (#5324)

* Started testing
* Bunch of fixes
* use-interop
* Sync with v0.11
* Conflict
* Uncomment wait for activation
* Move pending block queue from subscriber to validator pipeline
* Merge branch 'v0.11' into use-interop-config
* passing tests
* Merge refs/heads/v0.11 into use-interop-config
* Merge refs/heads/v0.11 into use-interop-config
* Merge refs/heads/master into v0.11
* Merge refs/heads/master into v0.11
* Merge refs/heads/master into v0.11
* Nil Checks in Process Attestation v0.11 (#5331)

* Started testing

* Bunch of fixes

* use-interop

* Sync with v0.11

* Uncomment wait for activation

* Move pending block queue from subscriber to validator pipeline

* passing tests

* nil checks to prevent panics

* lint

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
* Validator batch subscribe subnets (#5332)

* Update both beacon node and validator

* Comments

* Tests

* Lint

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Validator smarter subscribe (#5334)
* Fix incorrect proposer index calculation (#5336)

* Use correct parent state

* Fixed test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* enhance error
* enhance error
* Update P2P Service to Handle Local Metadata (#5319)

* add metadata to ENR

* add new methods

* glue everything

* fix all tests and refs

* add tests

* add more tests

* Apply suggestions from code review

* fix method

* raul's review

* gaz

* fix test setup

* fix all tests

* better naming

* fix broken test

* validate nil

Co-authored-by: rauljordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Revert "Revert "Revert "Remove deprecated parameters (#5249)" (#5276)" (#5277)" (#5343)

This reverts commit e5aef1686e.
* Wait for Genesis Event to Start P2P (#5303)

* use event feed for state initialized events

* add in handler for tests

* wait till genesis for p2p

* Apply suggestions from code review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge refs/heads/master into v0.11
* Avoid duplicated aggregation request (#5346)

* Avoid duplicated aggregation request

* Test and lock

* Gaz
* Fix Validate For Metadata (#5348)

* return true

* shay's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Multiple Proposer Slots Allowed Per Epoch for Validators (#5344)

* allow multiple proposer slots

* multi propose

* proposer indices to slots map

* remove deprecated comm assign

* Apply suggestions from code review

* resolve broken tests, add logic in validator client

* fix val tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Networking Fixes (#5349)

* close stream later

* add ping method

* add method

* lint
* More efficient aggregation on demand (#5354)
* Return Nil Error if Pre-Genesis in P2P Service Healthz Check (#5355)

* pregenesis healthz check

* optimal

* right order

* Update beacon-chain/p2p/service.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update beacon-chain/p2p/service.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* no comment

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
* Release DiscoveryV5 for Testnet Restart (#5357)

* release discv5

* fix build
* Fix Overflow in Status Check (#5361)

* fix overflow

* Apply suggestions from code review
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.11
* fix after merge
* Merge refs/heads/master into v0.11
* Make Mainnet Config Default, No More Demo Config  (#5367)

* bye bye demo config

* gaz

* fix usage

* fix dep

* gaz

* Update default balance for sendDeposits

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
* Use FastSSZ Marshal/Unmarshal for DB Encodings in v0.11.1 (#5351)

* try

* use marshaler structure for db instead of proto

* white list types

* attempt

* revert

* testutil.NewBeaconState()

* Fully populate fields for round trip ssz marshal

* fix //beacon-chain/db/kv:go_default_test

* more passing tests

* another test target passed

* fixed stategen

* blockchain tests green

* passing sync

* more targets fixed

* more test fixes in rpc/validator

* most rpc val

* validators test fixes

* skip round robin old

* aggregate test

* whitelist done

* Update beacon-chain/rpc/validator/attester_test.go

* edit baz

* Fixed tests

* Fixed getblock test

* Add back init

* reduce test size

* fix broken build

* tests pass

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
* Reconnect slasher streams on beacon node shutdown (#5376)

* restart streams on beacon node shutdown

* fix comment

* remove export

* ivan feedback

* ivan feedback

* case insensitive

* Update slasher/beaconclient/receivers.go

* raul feedback

Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge branch 'master' into v0.11
* Merge refs/heads/master into v0.11
* Amend Faucet to Offer 32.5 ETH for v0.11 (#5378)

* deposit amount in faucet

* fix eth amount

* gas cost
* unskip exec transition test
* Revert "Enable NOISE Handshake by Default v0.11 (#5272)" (#5381)

This reverts commit a8d32d504a.
* Merge refs/heads/master into v0.11
* use string for deposit flag
* Update Bootnode to v0.11 (#5387)

* fix bootnode

* add changes

* gaz

* fix docker
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.11
* build fix
* fix flaky test
* Merge refs/heads/master into v0.11
* Unskip E2E for V0.11 (#5386)

* Begin work on fixing e2e for v0.11

* Start bootnode work

* Begin implementing bootnode into e2e

* Fix E2E for v0.11

* Remove extra

* gaz

* Remove unused key gen code

* Remove trailing multiaddr code

* add skip for slashing

* Fix slashing e2e

* Fix docker image build
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into v0.11
* Merge refs/heads/master into v0.11
* Merge branch 'master' of github.com:prysmaticlabs/prysm into v0.11
* Update beacon-chain/p2p/broadcaster_test.go
* Merge refs/heads/master into v0.11
* Pass E2E Tests for v0.11 and Enable Attestation Subnets By Default (#5407)
* Update README.md

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Apply suggestions from code review

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update beacon-chain/p2p/config.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update shared/keystore/deposit_input.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update tools/faucet/server.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update beacon-chain/p2p/service.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update shared/benchutil/pregen_test.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update shared/benchutil/pregen_test.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update proto/beacon/p2p/v1/BUILD.bazel

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update shared/benchutil/pregen_test.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Update shared/bls/spectest/aggregate_verify_test.go
* Addressed feedback. All test passing
* Merge branch 'v0.11' of github.com:prysmaticlabs/prysm into v0.11
* Update beacon-chain/core/blocks/block_operations_fuzz_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/core/blocks/block_operations_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update shared/testutil/helpers.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/core/helpers/signing_root.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Resolve Misc v0.11 Items (Raul) (#5414)

* address all comments

* set faucet

* nishant feedback

* Update beacon-chain/p2p/service.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Revert keymanager changes (#5416)

* Revert "Updates for remote keymanager (#5260)"

This reverts commit bbcd895db5.

* Revert "Remove keystore keymanager from validator (#5236)"

This reverts commit 46008770c1.

* Revert "Update eth2 wallet keymanager (#4984)"

This reverts commit 7f7ef43f21.

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Update BLS and limit visibility (#5415)

* remove duplicated BLS, add golang.org/x/mod

* Update BLS and restrict visibility

* fix build
* Fix eth1data test and fix order of ops (#5413)
* use multiaddr builder (#5419)
* Unskip benchutil and minor v0.11 fixes (#5417)

* Unskip benchutil tests

* Remove protos and gaz

* Fixes
* Networking Fixes (#5421)

* check

* fix test

* fix size

* fix test

* more fixes

* fix test again
* Update ethereum APIs with latest master
* Error handling for v0.11 tests (#5428)

* Proper err handling for tests

* Lint

* Fixed rest of the tests

* Gaz

* Fixed old master tests
* Sync with master
* Rm old aggregate_test.go
Author: terence tsao
Date: 2020-04-14 13:27:03 -07:00
Committed by: GitHub
Parent: 748d513c62
Commit: cb045dd0e3
328 changed files with 9632 additions and 5775 deletions


@@ -15,7 +15,7 @@ run --host_force_python=PY2
 # Network sandboxing only works on linux.
 --experimental_sandbox_default_allow_network=false
-# Use minimal protobufs at runtime
+# Use mainnet protobufs at runtime
 run --define ssz=mainnet
 test --define ssz=mainnet
 build --define ssz=mainnet


@@ -1,7 +1,7 @@
 # Prysm: An Ethereum 2.0 Client Written in Go
 [![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
-[![ETH2.0_Spec_Version 0.9.3](https://img.shields.io/badge/ETH2.0%20Spec%20Version-v0.9.3-blue.svg)](https://github.com/ethereum/eth2.0-specs/tree/v0.9.3)
+[![ETH2.0_Spec_Version 0.11.1](https://img.shields.io/badge/ETH2.0%20Spec%20Version-v0.11.1-blue.svg)](https://github.com/ethereum/eth2.0-specs/tree/v0.11.1)
 [![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/KSA7rPr)
 [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)


@@ -197,8 +197,8 @@ filegroup(
     visibility = ["//visibility:public"],
 )
 """,
-    sha256 = "72c6ee3c20d19736b1203f364a6eb0ddee2c173073e20bee2beccd288fdc42be",
-    url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.9.4/general.tar.gz",
+    sha256 = "b90221d87b3b4cb17d7f195f8852f5dd8fec1cf623d42443b97bdb5a216ae61d",
+    url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.11.1/general.tar.gz",
 )
 http_archive(
@@ -213,8 +213,8 @@ filegroup(
     visibility = ["//visibility:public"],
 )
 """,
-    sha256 = "a3cc860a3679f6f62ee57b65677a9b48a65fdebb151cdcbf50f23852632845ef",
-    url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.9.4/minimal.tar.gz",
+    sha256 = "316b227c0198f55872e46d601a578afeac88aab36ed38e3f01af753e98db156f",
+    url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.11.1/minimal.tar.gz",
 )
 http_archive(
@@ -229,8 +229,8 @@ filegroup(
     visibility = ["//visibility:public"],
 )
 """,
-    sha256 = "8fc1b6220973ca30fa4ddc4ed24d66b1719abadca8bedb5e06c3bd9bc0df28e9",
-    url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.9.4/mainnet.tar.gz",
+    sha256 = "b9c52f60293bcc1acfd4f8ab7ddf8bf8222ddd6a105e93d384542d1396e1b07a",
+    url = "https://github.com/ethereum/eth2.0-spec-tests/releases/download/v0.11.1/mainnet.tar.gz",
 )
 http_archive(
@@ -1305,7 +1305,7 @@ go_repository(
 go_repository(
     name = "com_github_prysmaticlabs_ethereumapis",
-    commit = "62fd1d2ec119bc93b0473fde17426c63a85197ed",
+    commit = "6607cc86ddb7c78acfe3b1f0dfb115489a96d46d",
     importpath = "github.com/prysmaticlabs/ethereumapis",
     patch_args = ["-p1"],
     patches = [
@@ -1639,6 +1639,14 @@ go_repository(
     version = "v1.20.0",
 )
+go_repository(
+    name = "com_github_wealdtech_eth2_signer_api",
+    build_file_proto_mode = "disable_global",
+    importpath = "github.com/wealdtech/eth2-signer-api",
+    sum = "h1:fqJYjKwG/FeUAJYYiZblIP6agiz3WWB+Hxpw85Fnr5I=",
+    version = "v1.0.1",
+)
 go_repository(
     name = "com_github_prysmaticlabs_prombbolt",
     importpath = "github.com/prysmaticlabs/prombbolt",
@@ -1656,3 +1664,10 @@ go_repository(
     sum = "h1:GWsU1WjSE2rtvyTYGcndqmPPkQkBNV7pEuZdnGtwtu4=",
     version = "v0.0.0-20200321040036-d43e30eacb43",
 )
+go_repository(
+    name = "org_golang_x_mod",
+    importpath = "golang.org/x/mod",
+    sum = "h1:KU7oHjnv3XNWfa5COkzUifxZmxp1TyI7ImMXqFxLwvQ=",
+    version = "v0.2.0",
+)


@@ -39,7 +39,6 @@ go_test(
     "//shared/testutil:go_default_library",
     "@com_github_gogo_protobuf//proto:go_default_library",
     "@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
-    "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
     "@com_github_sirupsen_logrus//:go_default_library",
     "@com_github_sirupsen_logrus//hooks/test:go_default_library",
 ],


@@ -9,7 +9,6 @@ import (
 	"github.com/gogo/protobuf/proto"
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
-	"github.com/prysmaticlabs/go-bitfield"
 	mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
@@ -34,10 +33,8 @@ func TestArchiverService_ReceivesBlockProcessedEvent(t *testing.T) {
 	hook := logTest.NewGlobal()
 	svc, beaconDB := setupService(t)
 	defer dbutil.TeardownDB(t, beaconDB)
-	st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
-		Slot: 1,
-	})
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(1); err != nil {
 		t.Fatal(err)
 	}
 	svc.headFetcher = &mock.ChainService{
@@ -61,10 +58,8 @@ func TestArchiverService_OnlyArchiveAtEpochEnd(t *testing.T) {
 	svc, beaconDB := setupService(t)
 	defer dbutil.TeardownDB(t, beaconDB)
 	// The head state is NOT an epoch end.
-	st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
-		Slot: params.BeaconConfig().SlotsPerEpoch - 2,
-	})
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(params.BeaconConfig().SlotsPerEpoch - 2); err != nil {
 		t.Fatal(err)
 	}
 	svc.headFetcher = &mock.ChainService{
@@ -433,18 +428,20 @@ func setupState(validatorCount uint64) (*stateTrie.BeaconState, error) {
 	// We initialize a head state that has attestations from participated
 	// validators in a simulated fashion.
-	return stateTrie.InitializeFromProto(&pb.BeaconState{
-		Slot:                       (2 * params.BeaconConfig().SlotsPerEpoch) - 1,
-		Validators:                 validators,
-		Balances:                   balances,
-		BlockRoots:                 make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
-		Slashings:                  []uint64{0, 1e9, 1e9},
-		RandaoMixes:                make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
-		CurrentEpochAttestations:   atts,
-		FinalizedCheckpoint:        &ethpb.Checkpoint{},
-		JustificationBits:          bitfield.Bitvector4{0x00},
-		CurrentJustifiedCheckpoint: &ethpb.Checkpoint{},
-	})
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot((2 * params.BeaconConfig().SlotsPerEpoch) - 1); err != nil {
+		return nil, err
+	}
+	if err := st.SetValidators(validators); err != nil {
+		return nil, err
+	}
+	if err := st.SetBalances(balances); err != nil {
+		return nil, err
+	}
+	if err := st.SetCurrentEpochAttestations(atts); err != nil {
+		return nil, err
+	}
+	return st, nil
 }
 
 func setupService(t *testing.T) (*Service, db.Database) {
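The recurring test change above swaps large `InitializeFromProto` struct literals for a `testutil.NewBeaconState()` builder plus per-field setters that each return an error. A minimal sketch of that pattern follows; the `BeaconState` type and its setters here are simplified stand-ins for illustration, not Prysm's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// BeaconState is a simplified stand-in for Prysm's state wrapper:
// fields are private, so every mutation goes through a validating setter.
type BeaconState struct {
	slot     uint64
	balances []uint64
}

// SetSlot mirrors the setter-with-error style used in the tests above.
func (s *BeaconState) SetSlot(slot uint64) error {
	s.slot = slot
	return nil
}

// SetBalances can reject invalid input, which a raw struct literal cannot.
func (s *BeaconState) SetBalances(b []uint64) error {
	if b == nil {
		return errors.New("nil balances")
	}
	s.balances = b
	return nil
}

func (s *BeaconState) Slot() uint64 { return s.slot }

// NewBeaconState plays the role of testutil.NewBeaconState: a fully
// initialized default state that tests then tweak via setters.
func NewBeaconState() *BeaconState {
	return &BeaconState{balances: []uint64{}}
}

func main() {
	st := NewBeaconState()
	if err := st.SetSlot(31); err != nil {
		panic(err)
	}
	fmt.Println(st.Slot()) // prints: 31
}
```

The payoff is that a default-constructed state always has every required field populated, so individual tests only touch the fields they care about.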


@@ -5,7 +5,9 @@ import (
 	"testing"
 
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
+	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
 	testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
+	"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
 )
 
 func TestHeadSlot_DataRace(t *testing.T) {
@@ -28,6 +30,7 @@ func TestHeadRoot_DataRace(t *testing.T) {
 	s := &Service{
 		beaconDB: db,
 		head:     &head{root: [32]byte{'A'}},
+		stateGen: stategen.New(db, cache.NewStateSummaryCache()),
 	}
 	go func() {
 		if err := s.saveHead(context.Background(), [32]byte{}); err != nil {
@@ -45,6 +48,7 @@ func TestHeadBlock_DataRace(t *testing.T) {
 	s := &Service{
 		beaconDB: db,
 		head:     &head{block: &ethpb.SignedBeaconBlock{}},
+		stateGen: stategen.New(db, cache.NewStateSummaryCache()),
 	}
 	go func() {
 		if err := s.saveHead(context.Background(), [32]byte{}); err != nil {
@@ -61,6 +65,7 @@ func TestHeadState_DataRace(t *testing.T) {
 	defer testDB.TeardownDB(t, db)
 	s := &Service{
 		beaconDB: db,
+		stateGen: stategen.New(db, cache.NewStateSummaryCache()),
 	}
 	go func() {
 		if err := s.saveHead(context.Background(), [32]byte{}); err != nil {


@@ -166,7 +166,7 @@ func TestHeadBlock_CanRetrieve(t *testing.T) {
 }
 
 func TestHeadState_CanRetrieve(t *testing.T) {
-	s, err := state.InitializeFromProto(&pb.BeaconState{Slot: 2})
+	s, err := state.InitializeFromProto(&pb.BeaconState{Slot: 2, GenesisValidatorsRoot: params.BeaconConfig().ZeroHash[:]})
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -176,7 +176,7 @@ func TestHeadState_CanRetrieve(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	if !reflect.DeepEqual(s.InnerStateUnsafe(), headState.InnerStateUnsafe()) {
+	if !proto.Equal(s.InnerStateUnsafe(), headState.InnerStateUnsafe()) {
 		t.Error("incorrect head state received")
 	}
 }


@@ -59,7 +59,7 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
 	// If the head state is not available, just return nil.
 	// There's nothing to cache
-	if featureconfig.Get().NewStateMgmt {
+	if !featureconfig.Get().DisableNewStateMgmt {
 		if !s.stateGen.StateSummaryExists(ctx, headRoot) {
 			return nil
 		}
@@ -81,7 +81,7 @@ func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
 	// Get the new head state from cached state or DB.
 	var newHeadState *state.BeaconState
-	if featureconfig.Get().NewStateMgmt {
+	if !featureconfig.Get().DisableNewStateMgmt {
 		newHeadState, err = s.stateGen.StateByRoot(ctx, headRoot)
 		if err != nil {
 			return errors.Wrap(err, "could not retrieve head state in DB")
@@ -121,7 +121,7 @@ func (s *Service) saveHeadNoDB(ctx context.Context, b *ethpb.SignedBeaconBlock,
 	var headState *state.BeaconState
 	var err error
-	if featureconfig.Get().NewStateMgmt {
+	if !featureconfig.Get().DisableNewStateMgmt {
 		headState, err = s.stateGen.StateByRoot(ctx, r)
 		if err != nil {
 			return errors.Wrap(err, "could not retrieve head state in DB")
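Most hunks in this diff flip `featureconfig.Get().NewStateMgmt` to `!featureconfig.Get().DisableNewStateMgmt`: the feature moves from opt-in to on-by-default with an opt-out kill switch, so every call site inverts its condition. A toy sketch of the two flag styles; the `Flags` struct and helper names are illustrative stand-ins, not Prysm's actual featureconfig API:

```go
package main

import "fmt"

// Flags mimics a feature-flag registry. The old opt-in flag and the
// new opt-out flag describe the same feature from opposite directions.
type Flags struct {
	NewStateMgmt        bool // old: feature off unless explicitly enabled
	DisableNewStateMgmt bool // new: feature on unless explicitly disabled
}

// useNewStateMgmtOld is the pre-v0.11 call-site shape.
func useNewStateMgmtOld(f Flags) bool { return f.NewStateMgmt }

// useNewStateMgmtNew is the post-v0.11 shape: note the negation,
// which is exactly the mechanical rewrite seen throughout this diff.
func useNewStateMgmtNew(f Flags) bool { return !f.DisableNewStateMgmt }

func main() {
	// With default-constructed flags, the old style disables the
	// feature while the new style enables it -- that is the change.
	var f Flags
	fmt.Println(useNewStateMgmtOld(f), useNewStateMgmtNew(f)) // prints: false true
}
```

Renaming the flag rather than changing its default forces every call site through review, which is why the rewrite appears in so many files here.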


@@ -9,8 +9,8 @@ import (
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	"github.com/prysmaticlabs/go-ssz"
 	testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
-	"github.com/prysmaticlabs/prysm/beacon-chain/state"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
+	"github.com/prysmaticlabs/prysm/shared/testutil"
 )
 
 func TestSaveHead_Same(t *testing.T) {
@@ -44,6 +44,7 @@ func TestSaveHead_Different(t *testing.T) {
 	newHeadBlock := &ethpb.BeaconBlock{Slot: 1}
 	newHeadSignedBlock := &ethpb.SignedBeaconBlock{Block: newHeadBlock}
 	if err := service.beaconDB.SaveBlock(context.Background(), newHeadSignedBlock); err != nil {
 		t.Fatal(err)
 	}
@@ -51,8 +52,11 @@ func TestSaveHead_Different(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	headState, err := state.InitializeFromProto(&pb.BeaconState{Slot: 1})
-	if err != nil {
+	headState := testutil.NewBeaconState()
+	if err := headState.SetSlot(1); err != nil {
+		t.Fatal(err)
+	}
+	if err := service.beaconDB.SaveStateSummary(context.Background(), &pb.StateSummary{Slot: 1, Root: newRoot[:]}); err != nil {
 		t.Fatal(err)
 	}
 	if err := service.beaconDB.SaveState(context.Background(), headState, newRoot); err != nil {


@@ -98,7 +98,7 @@ func (s *Service) onAttestation(ctx context.Context, a *ethpb.Attestation) ([]ui
 	}
 
 	// Verify attestations cannot be from future epochs.
-	if err := helpers.VerifySlotTime(genesisTime, tgtSlot); err != nil {
+	if err := helpers.VerifySlotTime(genesisTime, tgtSlot, helpers.TimeShiftTolerance); err != nil {
 		return nil, errors.Wrap(err, "could not verify attestation target slot")
 	}
@@ -108,7 +108,7 @@ func (s *Service) onAttestation(ctx context.Context, a *ethpb.Attestation) ([]ui
 	}
 
 	// Verify attestations can only affect the fork choice of subsequent slots.
-	if err := helpers.VerifySlotTime(genesisTime, a.Data.Slot); err != nil {
+	if err := helpers.VerifySlotTime(genesisTime, a.Data.Slot, helpers.TimeShiftTolerance); err != nil {
 		return nil, err
 	}
@@ -125,6 +125,16 @@ func (s *Service) onAttestation(ctx context.Context, a *ethpb.Attestation) ([]ui
 		}
 	}
 
+	if indexedAtt.AttestingIndices == nil {
+		return nil, errors.New("nil attesting indices")
+	}
+	if a.Data == nil {
+		return nil, errors.New("nil att data")
+	}
+	if a.Data.Target == nil {
+		return nil, errors.New("nil att target")
+	}
+
 	// Update forkchoice store with the new attestation for updating weight.
 	s.forkChoiceStore.ProcessAttestation(ctx, indexedAtt.AttestingIndices, bytesutil.ToBytes32(a.Data.BeaconBlockRoot), a.Data.Target.Epoch)
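The new guards in `onAttestation` reject attestations with nil indices, data, or target before they reach the forkchoice store, where a nil dereference would panic. A hedged sketch of that validation order; the types are trimmed stand-ins covering only the fields the checks touch, not Prysm's real protobuf definitions:

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the protobuf types the nil checks inspect.
type Checkpoint struct{ Epoch uint64 }
type AttestationData struct{ Target *Checkpoint }
type Attestation struct{ Data *AttestationData }
type IndexedAttestation struct{ AttestingIndices []uint64 }

// validateAttestation mirrors the guard order added in the diff:
// indices first, then data, then target, each failing fast with a
// message naming the missing field.
func validateAttestation(ia *IndexedAttestation, a *Attestation) error {
	if ia.AttestingIndices == nil {
		return errors.New("nil attesting indices")
	}
	if a.Data == nil {
		return errors.New("nil att data")
	}
	if a.Data.Target == nil {
		return errors.New("nil att target")
	}
	return nil
}

func main() {
	ia := &IndexedAttestation{AttestingIndices: []uint64{1, 2}}
	ok := &Attestation{Data: &AttestationData{Target: &Checkpoint{Epoch: 3}}}
	fmt.Println(validateAttestation(ia, ok))              // prints: <nil>
	fmt.Println(validateAttestation(ia, &Attestation{})) // prints: nil att data
}
```

Returning a descriptive error for each missing field keeps a malformed gossip message from crashing the node and makes the rejection reason visible in logs.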


@@ -32,7 +32,7 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (*sta
 	}
 
 	var baseState *stateTrie.BeaconState
-	if featureconfig.Get().NewStateMgmt {
+	if !featureconfig.Get().DisableNewStateMgmt {
 		baseState, err = s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(c.Root))
 		if err != nil {
 			return nil, errors.Wrapf(err, "could not get pre state for slot %d", helpers.StartSlot(c.Epoch))
@@ -123,21 +123,25 @@ func (s *Service) verifyAttestation(ctx context.Context, baseState *stateTrie.Be
 	}
 
 	indexedAtt := attestationutil.ConvertToIndexed(ctx, a, committee)
 	if err := blocks.VerifyIndexedAttestation(ctx, baseState, indexedAtt); err != nil {
-		if err == blocks.ErrSigFailedToVerify {
+		if err == helpers.ErrSigFailedToVerify {
 			// When sig fails to verify, check if there's a differences in committees due to
 			// different seeds.
 			var aState *stateTrie.BeaconState
 			var err error
-			if featureconfig.Get().NewStateMgmt {
+			if !featureconfig.Get().DisableNewStateMgmt {
 				aState, err = s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
-				return nil, err
-			}
-			aState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
-			if err != nil {
-				return nil, err
+				if err != nil {
+					return nil, err
+				}
+			} else {
+				aState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
+				if err != nil {
+					return nil, err
+				}
+			}
+			if aState == nil {
+				return nil, fmt.Errorf("nil state for block root %#x", a.Data.BeaconBlockRoot)
 			}
 			epoch := helpers.SlotToEpoch(a.Data.Slot)
 			origSeed, err := helpers.Seed(baseState, epoch, params.BeaconConfig().DomainBeaconAttester)
 			if err != nil {


@@ -2,18 +2,19 @@ package blockchain
 
 import (
 	"context"
-	"reflect"
 	"strings"
 	"testing"
 
+	"github.com/gogo/protobuf/proto"
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	"github.com/prysmaticlabs/go-ssz"
+	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
 	testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
 	"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
-	beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
 	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
+	"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/bytesutil"
 	"github.com/prysmaticlabs/prysm/shared/params"
@@ -25,7 +26,11 @@ func TestStore_OnAttestation(t *testing.T) {
 	db := testDB.SetupDB(t)
 	defer testDB.TeardownDB(t, db)
 
-	cfg := &Config{BeaconDB: db, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
+	cfg := &Config{
+		BeaconDB:        db,
+		ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
+		StateGen:        stategen.New(db, cache.NewStateSummaryCache()),
+	}
 	service, err := NewService(ctx, cfg)
 	if err != nil {
 		t.Fatal(err)
@@ -54,7 +59,7 @@ func TestStore_OnAttestation(t *testing.T) {
 		t.Fatal(err)
 	}
-	s, err := beaconstate.InitializeFromProto(&pb.BeaconState{})
+	s := testutil.NewBeaconState()
 	if err := s.SetSlot(100 * params.BeaconConfig().SlotsPerEpoch); err != nil {
 		t.Fatal(err)
 	}
@@ -66,19 +71,17 @@ func TestStore_OnAttestation(t *testing.T) {
 	if err := db.SaveBlock(ctx, BlkWithValidState); err != nil {
 		t.Fatal(err)
 	}
 
 	BlkWithValidStateRoot, err := ssz.HashTreeRoot(BlkWithValidState.Block)
 	if err != nil {
 		t.Fatal(err)
 	}
-	s, err = stateTrie.InitializeFromProto(&pb.BeaconState{
-		Fork: &pb.Fork{
+	s = testutil.NewBeaconState()
+	if err := s.SetFork(&pb.Fork{
 		Epoch:           0,
 		CurrentVersion:  params.BeaconConfig().GenesisForkVersion,
 		PreviousVersion: params.BeaconConfig().GenesisForkVersion,
-		},
-		RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
-	})
-	if err != nil {
+	}); err != nil {
 		t.Fatal(err)
 	}
 	if err := service.beaconDB.SaveState(ctx, s, BlkWithValidStateRoot); err != nil {
@@ -111,7 +114,7 @@ func TestStore_OnAttestation(t *testing.T) {
 			a:             &ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}},
 			s:             &pb.BeaconState{},
 			wantErr:       true,
-			wantErrString: "pre state of target block 0 does not exist",
+			wantErrString: "could not get pre state for slot 0: unknown boundary state",
 		},
 		{
 			name: "process attestation doesn't match current epoch",
@@ -141,9 +144,11 @@ func TestStore_SaveCheckpointState(t *testing.T) {
 	ctx := context.Background()
 	db := testDB.SetupDB(t)
 	defer testDB.TeardownDB(t, db)
+	params.UseDemoBeaconConfig()
 
-	cfg := &Config{BeaconDB: db}
+	cfg := &Config{
+		BeaconDB: db,
+		StateGen: stategen.New(db, cache.NewStateSummaryCache()),
+	}
 	service, err := NewService(ctx, cfg)
 	if err != nil {
 		t.Fatal(err)
@@ -172,15 +177,20 @@ func TestStore_SaveCheckpointState(t *testing.T) {
 	if err := service.beaconDB.SaveState(ctx, s, r); err != nil {
 		t.Fatal(err)
 	}
 	service.justifiedCheckpt = &ethpb.Checkpoint{Root: r[:]}
 	service.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: r[:]}
 	service.finalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
 	service.prevFinalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
-	r = bytesutil.ToBytes32([]byte{'A'})
 
 	cp1 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'A'}, 32)}
 	if err := service.beaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})); err != nil {
 		t.Fatal(err)
 	}
+	if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{Root: bytesutil.PadTo([]byte{'A'}, 32)}); err != nil {
+		t.Fatal(err)
+	}
 	s1, err := service.getAttPreState(ctx, cp1)
 	if err != nil {
 		t.Fatal(err)
@@ -193,6 +203,9 @@ func TestStore_SaveCheckpointState(t *testing.T) {
 	if err := service.beaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'B'})); err != nil {
 		t.Fatal(err)
 	}
+	if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{Root: bytesutil.PadTo([]byte{'B'}, 32)}); err != nil {
+		t.Fatal(err)
+	}
 	s2, err := service.getAttPreState(ctx, cp2)
 	if err != nil {
 		t.Fatal(err)
@@ -236,6 +249,9 @@ func TestStore_SaveCheckpointState(t *testing.T) {
 	if err := service.beaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'C'})); err != nil {
 		t.Fatal(err)
 	}
+	if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{Root: bytesutil.PadTo([]byte{'C'}, 32)}); err != nil {
+		t.Fatal(err)
+	}
 	s3, err := service.getAttPreState(ctx, cp3)
 	if err != nil {
 		t.Fatal(err)
@@ -250,7 +266,10 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
 	db := testDB.SetupDB(t)
 	defer testDB.TeardownDB(t, db)
 
-	cfg := &Config{BeaconDB: db}
+	cfg := &Config{
+		BeaconDB: db,
+		StateGen: stategen.New(db, cache.NewStateSummaryCache()),
+	}
 	service, err := NewService(ctx, cfg)
 	if err != nil {
 		t.Fatal(err)
@@ -302,7 +321,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	if !reflect.DeepEqual(returned, cached) {
+	if !proto.Equal(returned.InnerStateUnsafe(), cached.InnerStateUnsafe()) {
 		t.Error("Incorrectly cached base state")
 	}
 }


@@ -92,7 +92,7 @@ func (s *Service) onBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock)
 		return nil, errors.Wrapf(err, "could not insert block %d to fork choice store", b.Slot)
 	}
 
-	if featureconfig.Get().NewStateMgmt {
+	if !featureconfig.Get().DisableNewStateMgmt {
 		if err := s.stateGen.SaveState(ctx, root, postState); err != nil {
 			return nil, errors.Wrap(err, "could not save state")
 		}
@@ -122,7 +122,7 @@ func (s *Service) onBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock)
 			return nil, errors.Wrap(err, "could not save finalized checkpoint")
 		}
 
-		if !featureconfig.Get().NewStateMgmt {
+		if featureconfig.Get().DisableNewStateMgmt {
 			startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch)
 			endSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
 			if endSlot > startSlot {
@@ -147,7 +147,7 @@ func (s *Service) onBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock)
 			return nil, errors.Wrap(err, "could not save new justified")
 		}
 
-		if featureconfig.Get().NewStateMgmt {
+		if !featureconfig.Get().DisableNewStateMgmt {
 			fRoot := bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root)
 			fBlock, err := s.beaconDB.Block(ctx, fRoot)
 			if err != nil {
@@ -233,7 +233,7 @@ func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed
 		return errors.Wrapf(err, "could not insert block %d to fork choice store", b.Slot)
 	}
 
-	if featureconfig.Get().NewStateMgmt {
+	if !featureconfig.Get().DisableNewStateMgmt {
 		if err := s.stateGen.SaveState(ctx, root, postState); err != nil {
 			return errors.Wrap(err, "could not save state")
 		}
@@ -268,7 +268,7 @@ func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed
 	// Update finalized check point. Prune the block cache and helper caches on every new finalized epoch.
 	if postState.FinalizedCheckpointEpoch() > s.finalizedCheckpt.Epoch {
-		if !featureconfig.Get().NewStateMgmt {
+		if featureconfig.Get().DisableNewStateMgmt {
 			startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch)
 			endSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
 			if endSlot > startSlot {
@@ -301,7 +301,7 @@ func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed
 			return errors.Wrap(err, "could not save new justified")
 		}
 
-		if featureconfig.Get().NewStateMgmt {
+		if !featureconfig.Get().DisableNewStateMgmt {
 			fRoot := bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root)
 			fBlock, err := s.beaconDB.Block(ctx, fRoot)
 			if err != nil {
@@ -313,7 +313,7 @@ func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed
 		}
 	}
 
-	if !featureconfig.Get().NewStateMgmt {
+	if featureconfig.Get().DisableNewStateMgmt {
 		numOfStates := len(s.boundaryRoots)
 		if numOfStates > initialSyncCacheSize {
 			if err = s.persistCachedStates(ctx, numOfStates); err != nil {
@@ -338,7 +338,7 @@ func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed
 		return err
 	}
 
-	if !featureconfig.Get().NewStateMgmt && helpers.IsEpochStart(postState.Slot()) {
+	if featureconfig.Get().DisableNewStateMgmt && helpers.IsEpochStart(postState.Slot()) {
 		if err := s.beaconDB.SaveState(ctx, postState, root); err != nil {
 			return errors.Wrap(err, "could not save state")
 		}


@@ -38,7 +38,7 @@ func (s *Service) getBlockPreState(ctx context.Context, b *ethpb.BeaconBlock) (*
 	}
 
 	// Verify block slot time is not from the future.
-	if err := helpers.VerifySlotTime(preState.GenesisTime(), b.Slot); err != nil {
+	if err := helpers.VerifySlotTime(preState.GenesisTime(), b.Slot, helpers.TimeShiftTolerance); err != nil {
 		return nil, err
 	}
@@ -60,8 +60,12 @@ func (s *Service) verifyBlkPreState(ctx context.Context, b *ethpb.BeaconBlock) (
 	ctx, span := trace.StartSpan(ctx, "chainService.verifyBlkPreState")
 	defer span.End()
 
-	if featureconfig.Get().NewStateMgmt {
-		preState, err := s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(b.ParentRoot))
+	if !featureconfig.Get().DisableNewStateMgmt {
+		parentRoot := bytesutil.ToBytes32(b.ParentRoot)
+		if !s.stateGen.StateSummaryExists(ctx, parentRoot) {
+			return nil, errors.New("provided block root does not have block saved in the db")
+		}
+		preState, err := s.stateGen.StateByRoot(ctx, parentRoot)
 		if err != nil {
 			return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot)
 		}
@@ -265,7 +269,7 @@ func (s *Service) updateJustified(ctx context.Context, state *stateTrie.BeaconSt
 		s.justifiedCheckpt = cpt
 	}
 
-	if !featureconfig.Get().NewStateMgmt {
+	if featureconfig.Get().DisableNewStateMgmt {
 		justifiedRoot := bytesutil.ToBytes32(cpt.Root)
 		justifiedState := s.initSyncState[justifiedRoot]
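`verifyBlkPreState` now checks that a state summary exists before asking stategen for the parent state, turning a deep replay failure into an early, explicit error. A simplified sketch of that guard; the in-memory `stateGen` struct here is a toy stand-in for Prysm's stategen and DB, and the method names mimic the ones in the hunk above:

```go
package main

import (
	"errors"
	"fmt"
)

// stateGen is a toy stand-in for Prysm's stategen: summaries record
// which roots have a state on record; states holds the (expensive)
// state payload itself.
type stateGen struct {
	summaries map[[32]byte]bool
	states    map[[32]byte]string
}

func (g *stateGen) StateSummaryExists(root [32]byte) bool { return g.summaries[root] }

func (g *stateGen) StateByRoot(root [32]byte) (string, error) {
	st, ok := g.states[root]
	if !ok {
		return "", errors.New("state not found")
	}
	return st, nil
}

// preState mirrors the new guard: fail fast with a clear message when
// the parent root was never saved, instead of erroring deep in replay.
func preState(g *stateGen, parentRoot [32]byte) (string, error) {
	if !g.StateSummaryExists(parentRoot) {
		return "", errors.New("provided block root does not have block saved in the db")
	}
	return g.StateByRoot(parentRoot)
}

func main() {
	g := &stateGen{
		summaries: map[[32]byte]bool{{'A'}: true},
		states:    map[[32]byte]string{{'A'}: "state-A"},
	}
	st, err := preState(g, [32]byte{'A'})
	fmt.Println(st, err == nil) // prints: state-A true
	_, err = preState(g, [32]byte{'B'})
	fmt.Println(err)
}
```

The cheap existence check also explains the matching test changes below, which now save a `StateSummary` alongside each state before calling `verifyBlkPreState`.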


@@ -9,12 +9,14 @@ import (
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	"github.com/prysmaticlabs/go-ssz"
+	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
 	"github.com/prysmaticlabs/prysm/beacon-chain/db"
 	testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
 	"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
 	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
+	"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/bytesutil"
 	"github.com/prysmaticlabs/prysm/shared/params"
@@ -26,7 +28,10 @@ func TestStore_OnBlock(t *testing.T) {
 	db := testDB.SetupDB(t)
 	defer testDB.TeardownDB(t, db)
 
-	cfg := &Config{BeaconDB: db}
+	cfg := &Config{
+		BeaconDB: db,
+		StateGen: stategen.New(db, cache.NewStateSummaryCache()),
+	}
 	service, err := NewService(ctx, cfg)
 	if err != nil {
 		t.Fatal(err)
@@ -41,10 +46,7 @@ func TestStore_OnBlock(t *testing.T) {
 	if err != nil {
 		t.Error(err)
 	}
-	st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
-	if err != nil {
-		t.Fatal(err)
-	}
+	st := testutil.NewBeaconState()
 	if err := service.beaconDB.SaveState(ctx, st.Copy(), validGenesisRoot); err != nil {
 		t.Fatal(err)
 	}
@@ -60,10 +62,16 @@ func TestStore_OnBlock(t *testing.T) {
 	if err != nil {
 		t.Error(err)
 	}
+	if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{Slot: st.Slot(), Root: randomParentRoot[:]}); err != nil {
+		t.Fatal(err)
+	}
 	if err := service.beaconDB.SaveState(ctx, st.Copy(), randomParentRoot); err != nil {
 		t.Fatal(err)
 	}
 	randomParentRoot2 := roots[1]
+	if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{Slot: st.Slot(), Root: randomParentRoot2[:]}); err != nil {
+		t.Fatal(err)
+	}
 	if err := service.beaconDB.SaveState(ctx, st.Copy(), bytesutil.ToBytes32(randomParentRoot2)); err != nil {
 		t.Fatal(err)
 	}
@@ -144,8 +152,8 @@ func TestRemoveStateSinceLastFinalized(t *testing.T) {
 		if err != nil {
 			t.Fatal(err)
 		}
-		s, err := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: uint64(i)})
-		if err != nil {
+		s := testutil.NewBeaconState()
+		if err := s.SetSlot(uint64(i)); err != nil {
 			t.Fatal(err)
 		}
 		if err := service.beaconDB.SaveState(ctx, s, r); err != nil {
@@ -290,24 +298,32 @@ func TestShouldUpdateJustified_ReturnFalse(t *testing.T) {
} }
} }
func TestCachedPreState_CanGetFromCache(t *testing.T) { func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
ctx := context.Background() ctx := context.Background()
db := testDB.SetupDB(t) db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db) defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db} cfg := &Config{
BeaconDB: db,
StateGen: stategen.New(db, cache.NewStateSummaryCache()),
}
service, err := NewService(ctx, cfg) service, err := NewService(ctx, cfg)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
s, err := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: 1}) s, err := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: 1, GenesisValidatorsRoot: params.BeaconConfig().ZeroHash[:]})
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
r := [32]byte{'A'} r := [32]byte{'A'}
b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]} b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]}
service.initSyncState[r] = s if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{Slot: 1, Root: r[:]}); err != nil {
t.Fatal(err)
}
if err := service.stateGen.SaveState(ctx, r, s); err != nil {
t.Fatal(err)
}
received, err := service.verifyBlkPreState(ctx, b) received, err := service.verifyBlkPreState(ctx, b)
if err != nil { if err != nil {
@@ -323,7 +339,10 @@ func TestCachedPreState_CanGetFromDB(t *testing.T) {
 	db := testDB.SetupDB(t)
 	defer testDB.TeardownDB(t, db)
-	cfg := &Config{BeaconDB: db}
+	cfg := &Config{
+		BeaconDB: db,
+		StateGen: stategen.New(db, cache.NewStateSummaryCache()),
+	}
 	service, err := NewService(ctx, cfg)
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -334,8 +353,8 @@ func TestCachedPreState_CanGetFromDB(t *testing.T) {
 	service.finalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
 	_, err = service.verifyBlkPreState(ctx, b)
-	wanted := "pre state of slot 1 does not exist"
-	if err == nil || err.Error() != wanted {
+	wanted := "provided block root does not have block saved in the db"
+	if err.Error() != wanted {
 		t.Error("Did not get wanted error")
 	}
@@ -343,7 +362,10 @@ func TestCachedPreState_CanGetFromDB(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	if err := service.beaconDB.SaveState(ctx, s, r); err != nil {
+	if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{Slot: 1, Root: r[:]}); err != nil {
+		t.Fatal(err)
+	}
+	if err := service.stateGen.SaveState(ctx, r, s); err != nil {
 		t.Fatal(err)
 	}
@@ -351,7 +373,7 @@ func TestCachedPreState_CanGetFromDB(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	if !reflect.DeepEqual(s, received) {
+	if s.Slot() != received.Slot() {
 		t.Error("cached state not the same")
 	}
 }
@@ -369,8 +391,8 @@ func TestSaveInitState_CanSaveDelete(t *testing.T) {
 	for i := uint64(0); i < 64; i++ {
 		b := &ethpb.BeaconBlock{Slot: i}
-		s, err := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: i})
-		if err != nil {
+		s := testutil.NewBeaconState()
+		if err := s.SetSlot(i); err != nil {
 			t.Fatal(err)
 		}
 		r, err := ssz.HashTreeRoot(b)
@@ -385,10 +407,9 @@ func TestSaveInitState_CanSaveDelete(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	s, err := stateTrie.InitializeFromProto(&pb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{
-		Epoch: 1, Root: finalizedRoot[:]}})
-	if err != nil {
+	s := testutil.NewBeaconState()
+	if err := s.SetFinalizedCheckpoint(&ethpb.Checkpoint{
+		Epoch: 1, Root: finalizedRoot[:]}); err != nil {
 		t.Fatal(err)
 	}
 	if err := service.saveInitState(ctx, s); err != nil {
@@ -426,18 +447,15 @@ func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
 	}
 	service.justifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
 	service.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
-	st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
-	if err != nil {
-		t.Fatal(err)
-	}
+	st := testutil.NewBeaconState()
 	service.initSyncState[r] = st.Copy()
 	if err := db.SaveState(ctx, st.Copy(), r); err != nil {
 		t.Fatal(err)
 	}
 	// Could update
-	s, err := stateTrie.InitializeFromProto(&pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: r[:]}})
-	if err != nil {
+	s := testutil.NewBeaconState()
+	if err := s.SetCurrentJustifiedCheckpoint(&ethpb.Checkpoint{Epoch: 1, Root: r[:]}); err != nil {
 		t.Fatal(err)
 	}
 	if err := service.updateJustified(context.Background(), s); err != nil {
@@ -480,10 +498,7 @@ func TestFilterBlockRoots_CanFilter(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
-	if err != nil {
-		t.Fatal(err)
-	}
+	st := testutil.NewBeaconState()
 	if err := service.beaconDB.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: fBlock}); err != nil {
 		t.Fatal(err)
 	}
@@ -526,10 +541,7 @@ func TestPersistCache_CanSave(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
-	if err != nil {
-		t.Fatal(err)
-	}
+	st := testutil.NewBeaconState()
 	for i := uint64(0); i < initialSyncCacheSize; i++ {
 		if err := st.SetSlot(i); err != nil {
@@ -583,10 +595,8 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
 	if err != nil {
 		t.Error(err)
 	}
-	st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
-	if err != nil {
-		t.Fatal(err)
-	}
+	st := testutil.NewBeaconState()
 	if err := service.beaconDB.SaveState(ctx, st.Copy(), validGenesisRoot); err != nil {
 		t.Fatal(err)
 	}
@@ -640,10 +650,8 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
 	if err != nil {
 		t.Error(err)
 	}
-	st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
-	if err != nil {
-		t.Fatal(err)
-	}
+	st := testutil.NewBeaconState()
 	if err := service.beaconDB.SaveState(ctx, st.Copy(), validGenesisRoot); err != nil {
 		t.Fatal(err)
 	}
@@ -732,10 +740,8 @@ func blockTree1(db db.Database, genesisRoot []byte) ([][]byte, error) {
 	if err != nil {
 		return nil, err
 	}
-	st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
-	if err != nil {
-		return nil, err
-	}
+	st := testutil.NewBeaconState()
 	for _, b := range []*ethpb.BeaconBlock{b0, b1, b3, b4, b5, b6, b7, b8} {
 		if err := db.SaveBlock(context.Background(), &ethpb.SignedBeaconBlock{Block: b}); err != nil {
 			return nil, err


@@ -91,7 +91,7 @@ func (s *Service) processAttestation(subscribedToStateEvents chan struct{}) {
 			atts := s.attPool.ForkchoiceAttestations()
 			for _, a := range atts {
 				var hasState bool
-				if featureconfig.Get().NewStateMgmt {
+				if !featureconfig.Get().DisableNewStateMgmt {
 					hasState = s.stateGen.StateSummaryExists(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
 				} else {
 					hasState = s.beaconDB.HasState(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot)) && s.beaconDB.HasState(ctx, bytesutil.ToBytes32(a.Data.Target.Root))
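The `NewStateMgmt` → `DisableNewStateMgmt` rename in the hunks above flips the feature's default: the new state management is now on unless explicitly disabled, so every call site checks the negated flag. A minimal sketch of that inversion (`featureFlags` is a hypothetical stand-in for the real `featureconfig.Flags` struct, reduced to the one field this diff touches):

```go
package main

import "fmt"

// featureFlags is a hypothetical stand-in for featureconfig.Flags.
type featureFlags struct {
	DisableNewStateMgmt bool
}

// newStateMgmtEnabled mirrors the negated check used throughout the diff:
// the feature is enabled by default; setting the flag opts out.
func newStateMgmtEnabled(f featureFlags) bool {
	return !f.DisableNewStateMgmt
}

func main() {
	fmt.Println(newStateMgmtEnabled(featureFlags{}))                          // zero value: enabled
	fmt.Println(newStateMgmtEnabled(featureFlags{DisableNewStateMgmt: true})) // opt-out
}
```

Shipping a `Disable*` flag instead of an `Enable*` flag lets the zero value of the config struct select the new behavior, which is why only the condition polarity changes at each call site.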


@@ -140,7 +140,7 @@ func (s *Service) Start() {
 		}
 		if beaconState == nil {
-			if featureconfig.Get().NewStateMgmt {
+			if !featureconfig.Get().DisableNewStateMgmt {
 				beaconState, err = s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(cp.Root))
 				if err != nil {
 					log.Fatalf("Could not fetch beacon state by root: %v", err)
@@ -181,7 +181,7 @@ func (s *Service) Start() {
 		s.prevFinalizedCheckpt = stateTrie.CopyCheckpoint(finalizedCheckpoint)
 		s.resumeForkChoice(justifiedCheckpoint, finalizedCheckpoint)
-		if !featureconfig.Get().NewStateMgmt {
+		if featureconfig.Get().DisableNewStateMgmt {
 			if finalizedCheckpoint.Epoch > 1 {
 				if err := s.pruneGarbageState(ctx, helpers.StartSlot(finalizedCheckpoint.Epoch)-params.BeaconConfig().SlotsPerEpoch); err != nil {
 					log.WithError(err).Warn("Could not prune old states")
@@ -192,7 +192,8 @@ func (s *Service) Start() {
 		s.stateNotifier.StateFeed().Send(&feed.Event{
 			Type: statefeed.Initialized,
 			Data: &statefeed.InitializedData{
-				StartTime: s.genesisTime,
+				StartTime:             s.genesisTime,
+				GenesisValidatorsRoot: beaconState.GenesisValidatorRoot(),
 			},
 		})
 	} else {
@@ -237,13 +238,15 @@ func (s *Service) Start() {
 // deposit contract, initializes the beacon chain's state, and kicks off the beacon chain.
 func (s *Service) processChainStartTime(ctx context.Context, genesisTime time.Time) {
 	preGenesisState := s.chainStartFetcher.PreGenesisState()
-	if err := s.initializeBeaconChain(ctx, genesisTime, preGenesisState, s.chainStartFetcher.ChainStartEth1Data()); err != nil {
+	initializedState, err := s.initializeBeaconChain(ctx, genesisTime, preGenesisState, s.chainStartFetcher.ChainStartEth1Data())
+	if err != nil {
 		log.Fatalf("Could not initialize beacon chain: %v", err)
 	}
 	s.stateNotifier.StateFeed().Send(&feed.Event{
 		Type: statefeed.Initialized,
 		Data: &statefeed.InitializedData{
 			StartTime:             genesisTime,
+			GenesisValidatorsRoot: initializedState.GenesisValidatorRoot(),
 		},
 	})
 }
@@ -255,7 +258,7 @@ func (s *Service) initializeBeaconChain(
 	ctx context.Context,
 	genesisTime time.Time,
 	preGenesisState *stateTrie.BeaconState,
-	eth1data *ethpb.Eth1Data) error {
+	eth1data *ethpb.Eth1Data) (*stateTrie.BeaconState, error) {
 	_, span := trace.StartSpan(context.Background(), "beacon-chain.Service.initializeBeaconChain")
 	defer span.End()
 	s.genesisTime = genesisTime
@@ -263,11 +266,11 @@ func (s *Service) initializeBeaconChain(
 	genesisState, err := state.OptimizedGenesisBeaconState(unixTime, preGenesisState, eth1data)
 	if err != nil {
-		return errors.Wrap(err, "could not initialize genesis state")
+		return nil, errors.Wrap(err, "could not initialize genesis state")
 	}
 	if err := s.saveGenesisData(ctx, genesisState); err != nil {
-		return errors.Wrap(err, "could not save genesis data")
+		return nil, errors.Wrap(err, "could not save genesis data")
 	}
 	log.Info("Initialized beacon chain genesis state")
@@ -277,15 +280,15 @@ func (s *Service) initializeBeaconChain(
 	// Update committee shuffled indices for genesis epoch.
 	if err := helpers.UpdateCommitteeCache(genesisState, 0 /* genesis epoch */); err != nil {
-		return err
+		return nil, err
 	}
 	if err := helpers.UpdateProposerIndicesInCache(genesisState, 0 /* genesis epoch */); err != nil {
-		return err
+		return nil, err
 	}
 	s.opsService.SetGenesisTime(genesisState.GenesisTime())
-	return nil
+	return genesisState, nil
 }

 // Stop the blockchain service's main event loop and associated goroutines.
@@ -324,7 +327,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState *stateTrie.B
 	if err := s.beaconDB.SaveBlock(ctx, genesisBlk); err != nil {
 		return errors.Wrap(err, "could not save genesis block")
 	}
-	if featureconfig.Get().NewStateMgmt {
+	if !featureconfig.Get().DisableNewStateMgmt {
 		if err := s.stateGen.SaveState(ctx, genesisBlkRoot, genesisState); err != nil {
 			return errors.Wrap(err, "could not save genesis state")
 		}
@@ -412,7 +415,7 @@ func (s *Service) initializeChainInfo(ctx context.Context) error {
 	}
 	finalizedRoot := bytesutil.ToBytes32(finalized.Root)
 	var finalizedState *stateTrie.BeaconState
-	if featureconfig.Get().NewStateMgmt {
+	if !featureconfig.Get().DisableNewStateMgmt {
 		finalizedRoot = s.beaconDB.LastArchivedIndexRoot(ctx)
 		finalizedState, err = s.stateGen.Resume(ctx)
 		if err != nil {


@@ -12,6 +12,7 @@ import (
 	"github.com/gogo/protobuf/proto"
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	ssz "github.com/prysmaticlabs/go-ssz"
+	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
 	"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
 	b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
@@ -26,6 +27,7 @@ import (
 	"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
 	"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
 	beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
+	"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
 	protodb "github.com/prysmaticlabs/prysm/proto/beacon/db"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/event"
@@ -144,12 +146,10 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
 		P2p:             &mockBroadcaster{},
 		StateNotifier:   &mockBeaconNode{},
 		AttPool:         attestations.NewPool(),
+		StateGen:        stategen.New(beaconDB, cache.NewStateSummaryCache()),
 		ForkChoiceStore: protoarray.New(0, 0, params.BeaconConfig().ZeroHash),
 		OpsService:      opsService,
 	}
-	if err != nil {
-		t.Fatalf("could not register blockchain service: %v", err)
-	}
 	chainService, err := NewService(ctx, cfg)
 	if err != nil {
@@ -231,8 +231,8 @@ func TestChainStartStop_Initialized(t *testing.T) {
 	if err := db.SaveBlock(ctx, genesisBlk); err != nil {
 		t.Fatal(err)
 	}
-	s, err := beaconstate.InitializeFromProto(&pb.BeaconState{Slot: 1})
-	if err != nil {
+	s := testutil.NewBeaconState()
+	if err := s.SetSlot(1); err != nil {
 		t.Fatal(err)
 	}
 	if err := db.SaveState(ctx, s, blkRoot); err != nil {
@@ -289,14 +289,14 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
 		DepositRoot:  hashTreeRoot[:],
 		DepositCount: uint64(len(deposits)),
 	})
-	if err != nil {
-		t.Fatal(err)
-	}
-	genState, err = b.ProcessDeposits(ctx, genState, &ethpb.BeaconBlockBody{Deposits: deposits})
-	if err != nil {
-		t.Fatal(err)
-	}
-	if err := bc.initializeBeaconChain(ctx, time.Unix(0, 0), genState, &ethpb.Eth1Data{
+	for _, deposit := range deposits {
+		genState, err = b.ProcessPreGenesisDeposit(ctx, genState, deposit)
+		if err != nil {
+			t.Fatal(err)
+		}
+	}
+
+	if _, err := bc.initializeBeaconChain(ctx, time.Unix(0, 0), genState, &ethpb.Eth1Data{
 		DepositRoot: hashTreeRoot[:],
 	}); err != nil {
 		t.Fatal(err)
@@ -336,8 +336,11 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
 	finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
 	headBlock := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: finalizedSlot, ParentRoot: genesisRoot[:]}}
-	headState, err := beaconstate.InitializeFromProto(&pb.BeaconState{Slot: finalizedSlot})
-	if err != nil {
+	headState := testutil.NewBeaconState()
+	if err := headState.SetSlot(finalizedSlot); err != nil {
+		t.Fatal(err)
+	}
+	if err := headState.SetGenesisValidatorRoot(params.BeaconConfig().ZeroHash[:]); err != nil {
 		t.Fatal(err)
 	}
 	headRoot, err := ssz.HashTreeRoot(headBlock.Block)
@@ -347,6 +350,9 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
 	if err := db.SaveState(ctx, headState, headRoot); err != nil {
 		t.Fatal(err)
 	}
+	if err := db.SaveState(ctx, headState, genesisRoot); err != nil {
+		t.Fatal(err)
+	}
 	if err := db.SaveBlock(ctx, headBlock); err != nil {
 		t.Fatal(err)
 	}
@@ -359,7 +365,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
 	if err := db.SaveBlock(ctx, headBlock); err != nil {
 		t.Fatal(err)
 	}
-	c := &Service{beaconDB: db}
+	c := &Service{beaconDB: db, stateGen: stategen.New(db, cache.NewStateSummaryCache())}
 	if err := c.initializeChainInfo(ctx); err != nil {
 		t.Fatal(err)
 	}
@@ -398,17 +404,18 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
 	ctx := context.Background()
 	s := &Service{
 		beaconDB: db,
+		stateGen: stategen.New(db, cache.NewStateSummaryCache()),
 	}
 	b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
 	r, err := ssz.HashTreeRoot(b)
 	if err != nil {
 		t.Fatal(err)
 	}
-	state := &pb.BeaconState{}
-	newState, err := beaconstate.InitializeFromProto(state)
-	if err := s.beaconDB.SaveState(ctx, newState, r); err != nil {
+	newState := testutil.NewBeaconState()
+	if err := s.stateGen.SaveState(ctx, r, newState); err != nil {
 		t.Fatal(err)
 	}
 	if err := s.saveHeadNoDB(ctx, b, r); err != nil {
 		t.Fatal(err)
 	}
@@ -439,9 +446,8 @@ func TestChainService_PruneOldStates(t *testing.T) {
 		if err != nil {
 			t.Fatal(err)
 		}
-		state := &pb.BeaconState{Slot: uint64(i)}
-		newState, err := beaconstate.InitializeFromProto(state)
-		if err != nil {
+		newState := testutil.NewBeaconState()
+		if err := newState.SetSlot(uint64(i)); err != nil {
 			t.Fatal(err)
 		}
 		if err := s.beaconDB.SaveState(ctx, newState, r); err != nil {


@@ -41,6 +41,7 @@ go_test(
         "attestation_data_test.go",
         "checkpoint_state_test.go",
         "committee_fuzz_test.go",
+        "committee_ids_test.go",
         "committee_test.go",
         "eth1_data_test.go",
         "feature_flag_test.go",


@@ -4,10 +4,12 @@ import (
 	"reflect"
 	"testing"

-	"github.com/prysmaticlabs/prysm/shared/bytesutil"
+	"github.com/gogo/protobuf/proto"
+	"github.com/prysmaticlabs/prysm/shared/params"
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
+	"github.com/prysmaticlabs/prysm/shared/bytesutil"
 	"github.com/prysmaticlabs/prysm/shared/hashutil"
 )
@@ -48,7 +50,8 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
 	cp1 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'A'}, 32)}
 	st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
-		Slot: 64,
+		GenesisValidatorsRoot: params.BeaconConfig().ZeroHash[:],
+		Slot:                  64,
 	})
 	if err != nil {
 		t.Fatal(err)
@@ -72,7 +75,7 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	if !reflect.DeepEqual(state.InnerStateUnsafe(), info1.State.InnerStateUnsafe()) {
+	if !proto.Equal(state.InnerStateUnsafe(), info1.State.InnerStateUnsafe()) {
 		t.Error("incorrectly cached state")
 	}


@@ -4,39 +4,82 @@ import (
 	"sync"

 	lru "github.com/hashicorp/golang-lru"
+	"github.com/prysmaticlabs/prysm/shared/params"
 	"github.com/prysmaticlabs/prysm/shared/sliceutil"
 )

 type committeeIDs struct {
-	cache *lru.Cache
-	lock  sync.RWMutex
+	attester       *lru.Cache
+	attesterLock   sync.RWMutex
+	aggregator     *lru.Cache
+	aggregatorLock sync.RWMutex
 }

-// CommitteeIDs for attestations.
+// CommitteeIDs for attester and aggregator.
 var CommitteeIDs = newCommitteeIDs()

 func newCommitteeIDs() *committeeIDs {
-	cache, err := lru.New(8)
+	// Given a node can calculate committee assignments of current epoch and next epoch.
+	// Max size is set to 2 epoch length.
+	cacheSize := int(params.BeaconConfig().MaxCommitteesPerSlot * params.BeaconConfig().SlotsPerEpoch * 2)
+	attesterCache, err := lru.New(cacheSize)
 	if err != nil {
 		panic(err)
 	}
-	return &committeeIDs{cache: cache}
-}
-
-// AddIDs to the cache for attestation committees by epoch.
-func (t *committeeIDs) AddIDs(indices []uint64, epoch uint64) {
-	t.lock.Lock()
-	defer t.lock.Unlock()
-	val, exists := t.cache.Get(epoch)
-	if exists {
-		indices = sliceutil.UnionUint64(append(indices, val.([]uint64)...))
+	aggregatorCache, err := lru.New(cacheSize)
+	if err != nil {
+		panic(err)
 	}
-	t.cache.Add(epoch, indices)
+	return &committeeIDs{attester: attesterCache, aggregator: aggregatorCache}
 }

-// GetIDs from the cache for attestation committees by epoch.
-func (t *committeeIDs) GetIDs(epoch uint64) []uint64 {
-	val, exists := t.cache.Get(epoch)
+// AddAttesterCommiteeID adds committee ID for subscribing subnet for the attester of a given slot.
+func (c *committeeIDs) AddAttesterCommiteeID(slot uint64, committeeID uint64) {
+	c.attesterLock.Lock()
+	defer c.attesterLock.Unlock()
+	ids := []uint64{committeeID}
+	val, exists := c.attester.Get(slot)
+	if exists {
+		ids = sliceutil.UnionUint64(append(val.([]uint64), ids...))
+	}
+	c.attester.Add(slot, ids)
+}
+
+// GetAttesterCommitteeIDs gets the committee ID for subscribing subnet for attester of the slot.
+func (c *committeeIDs) GetAttesterCommitteeIDs(slot uint64) []uint64 {
+	c.attesterLock.RLock()
+	defer c.attesterLock.RUnlock()
+	val, exists := c.attester.Get(slot)
+	if !exists {
+		return nil
+	}
+	if v, ok := val.([]uint64); ok {
+		return v
+	}
+	return nil
+}
+
+// AddAggregatorCommiteeID adds committee ID for subscribing subnet for the aggregator of a given slot.
+func (c *committeeIDs) AddAggregatorCommiteeID(slot uint64, committeeID uint64) {
+	c.aggregatorLock.Lock()
+	defer c.aggregatorLock.Unlock()
+	ids := []uint64{committeeID}
+	val, exists := c.aggregator.Get(slot)
+	if exists {
+		ids = sliceutil.UnionUint64(append(val.([]uint64), ids...))
+	}
+	c.aggregator.Add(slot, ids)
+}
+
+// GetAggregatorCommitteeIDs gets the committee ID for subscribing subnet for aggregator of the slot.
+func (c *committeeIDs) GetAggregatorCommitteeIDs(slot uint64) []uint64 {
+	c.aggregatorLock.RLock()
+	defer c.aggregatorLock.RUnlock()
+	val, exists := c.aggregator.Get(slot)
 	if !exists {
 		return []uint64{}
 	}
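The pattern above (per-slot key, union-with-existing on insert, read under an RWMutex) can be sketched without the Prysm dependencies. This is a simplified stand-in: a plain map replaces the `lru.Cache`, `newSlotIDs`/`add`/`get` are hypothetical names, and the dedup-plus-sort loop plays the role of `sliceutil.UnionUint64`:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// slotIDs is a simplified sketch of the per-slot committee-ID cache:
// a map guarded by a RWMutex stands in for the bounded lru.Cache.
type slotIDs struct {
	mu  sync.RWMutex
	ids map[uint64][]uint64
}

func newSlotIDs() *slotIDs {
	return &slotIDs{ids: make(map[uint64][]uint64)}
}

// add unions the new committee ID with any IDs already recorded for the slot,
// mirroring the sliceutil.UnionUint64 step in the real cache.
func (s *slotIDs) add(slot, committeeID uint64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	seen := map[uint64]bool{committeeID: true}
	merged := []uint64{committeeID}
	for _, id := range s.ids[slot] {
		if !seen[id] {
			seen[id] = true
			merged = append(merged, id)
		}
	}
	sort.Slice(merged, func(i, j int) bool { return merged[i] < merged[j] })
	s.ids[slot] = merged
}

// get reads under the read lock, like GetAttesterCommitteeIDs.
func (s *slotIDs) get(slot uint64) []uint64 {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.ids[slot]
}

func main() {
	c := newSlotIDs()
	c.add(100, 2)
	c.add(100, 1)
	c.add(100, 2) // duplicate, unioned away
	fmt.Println(c.get(100)) // [1 2]
}
```

Keying by slot (rather than epoch, as the old `AddIDs`/`GetIDs` did) is what lets attester and aggregator subnet subscriptions be looked up at the granularity the subscription logic needs.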


@@ -0,0 +1,56 @@
+package cache
+
+import (
+	"reflect"
+	"testing"
+)
+
+func TestCommitteeIDCache_RoundTrip(t *testing.T) {
+	c := newCommitteeIDs()
+	slot := uint64(100)
+	committeeIDs := c.GetAggregatorCommitteeIDs(slot)
+	if len(committeeIDs) != 0 {
+		t.Errorf("Empty cache returned an object: %v", committeeIDs)
+	}
+
+	c.AddAggregatorCommiteeID(slot, 1)
+	res := c.GetAggregatorCommitteeIDs(slot)
+	if !reflect.DeepEqual(res, []uint64{1}) {
+		t.Error("Expected equal value to return from cache")
+	}
+
+	c.AddAggregatorCommiteeID(slot, 2)
+	res = c.GetAggregatorCommitteeIDs(slot)
+	if !reflect.DeepEqual(res, []uint64{1, 2}) {
+		t.Error("Expected equal value to return from cache")
+	}
+
+	c.AddAggregatorCommiteeID(slot, 3)
+	res = c.GetAggregatorCommitteeIDs(slot)
+	if !reflect.DeepEqual(res, []uint64{1, 2, 3}) {
+		t.Error("Expected equal value to return from cache")
+	}
+
+	committeeIDs = c.GetAttesterCommitteeIDs(slot)
+	if len(committeeIDs) != 0 {
+		t.Errorf("Empty cache returned an object: %v", committeeIDs)
+	}
+
+	c.AddAttesterCommiteeID(slot, 11)
+	res = c.GetAttesterCommitteeIDs(slot)
+	if !reflect.DeepEqual(res, []uint64{11}) {
+		t.Error("Expected equal value to return from cache")
+	}
+
+	c.AddAttesterCommiteeID(slot, 22)
+	res = c.GetAttesterCommitteeIDs(slot)
+	if !reflect.DeepEqual(res, []uint64{11, 22}) {
+		t.Error("Expected equal value to return from cache")
+	}
+
+	c.AddAttesterCommiteeID(slot, 33)
+	res = c.GetAttesterCommitteeIDs(slot)
+	if !reflect.DeepEqual(res, []uint64{11, 22, 33}) {
+		t.Error("Expected equal value to return from cache")
+	}
+}


@@ -42,6 +42,7 @@ go_test(
     srcs = [
         "block_operations_fuzz_test.go",
         "block_operations_test.go",
+        "block_regression_test.go",
         "block_test.go",
         "eth1_data_test.go",
     ],
@@ -49,6 +50,7 @@ go_test(
     deps = [
         "//beacon-chain/core/helpers:go_default_library",
         "//beacon-chain/state:go_default_library",
+        "//beacon-chain/state/stateutil:go_default_library",
         "//proto/beacon/p2p/v1:go_default_library",
         "//shared/attestationutil:go_default_library",
         "//shared/bls:go_default_library",


@@ -35,50 +35,8 @@ var log = logrus.WithField("prefix", "blocks")
 var eth1DataCache = cache.NewEth1DataVoteCache()

-// ErrSigFailedToVerify returns when a signature of a block object(ie attestation, slashing, exit... etc)
-// failed to verify.
-var ErrSigFailedToVerify = errors.New("signature did not verify")
-
-func verifySigningRoot(obj interface{}, pub []byte, signature []byte, domain uint64) error {
-	publicKey, err := bls.PublicKeyFromBytes(pub)
-	if err != nil {
-		return errors.Wrap(err, "could not convert bytes to public key")
-	}
-	sig, err := bls.SignatureFromBytes(signature)
-	if err != nil {
-		return errors.Wrap(err, "could not convert bytes to signature")
-	}
-	root, err := ssz.HashTreeRoot(obj)
-	if err != nil {
-		return errors.Wrap(err, "could not get signing root")
-	}
-	if !sig.Verify(root[:], publicKey, domain) {
-		return ErrSigFailedToVerify
-	}
-	return nil
-}
-
-func verifyBlockRoot(blk *ethpb.BeaconBlock, pub []byte, signature []byte, domain uint64) error {
-	publicKey, err := bls.PublicKeyFromBytes(pub)
-	if err != nil {
-		return errors.Wrap(err, "could not convert bytes to public key")
-	}
-	sig, err := bls.SignatureFromBytes(signature)
-	if err != nil {
-		return errors.Wrap(err, "could not convert bytes to signature")
-	}
-	root, err := stateutil.BlockRoot(blk)
-	if err != nil {
-		return errors.Wrap(err, "could not get signing root")
-	}
-	if !sig.Verify(root[:], publicKey, domain) {
-		return ErrSigFailedToVerify
-	}
-	return nil
-}
-
 // Deprecated: This method uses deprecated ssz.SigningRoot.
-func verifyDepositDataSigningRoot(obj *ethpb.Deposit_Data, pub []byte, signature []byte, domain uint64) error {
+func verifyDepositDataSigningRoot(obj *ethpb.Deposit_Data, pub []byte, signature []byte, domain []byte) error {
 	publicKey, err := bls.PublicKeyFromBytes(pub)
 	if err != nil {
 		return errors.Wrap(err, "could not convert bytes to public key")
@@ -91,13 +49,21 @@ func verifyDepositDataSigningRoot(obj *ethpb.Deposit_Data, pub []byte, signature
 	if err != nil {
 		return errors.Wrap(err, "could not get signing root")
 	}
-	if !sig.Verify(root[:], publicKey, domain) {
-		return ErrSigFailedToVerify
+	sigRoot := &pb.SigningRoot{
+		ObjectRoot: root[:],
+		Domain:     domain,
+	}
+	ctrRoot, err := ssz.HashTreeRoot(sigRoot)
+	if err != nil {
+		return errors.Wrap(err, "could not get container root")
+	}
+	if !sig.Verify(ctrRoot[:], publicKey) {
+		return helpers.ErrSigFailedToVerify
 	}
 	return nil
 }

-func verifySignature(signedData []byte, pub []byte, signature []byte, domain uint64) error {
+func verifySignature(signedData []byte, pub []byte, signature []byte, domain []byte) error {
 	publicKey, err := bls.PublicKeyFromBytes(pub)
 	if err != nil {
 		return errors.Wrap(err, "could not convert bytes to public key")
@@ -106,8 +72,16 @@ func verifySignature(signedData []byte, pub []byte, signature []byte, domain uin
 	if err != nil {
 		return errors.Wrap(err, "could not convert bytes to signature")
 	}
-	if !sig.Verify(signedData, publicKey, domain) {
-		return ErrSigFailedToVerify
+	ctr := &pb.SigningRoot{
+		ObjectRoot: signedData,
+		Domain:     domain,
+	}
+	root, err := ssz.HashTreeRoot(ctr)
+	if err != nil {
+		return errors.Wrap(err, "could not hash container")
+	}
+	if !sig.Verify(root[:], publicKey) {
+		return helpers.ErrSigFailedToVerify
 	}
 	return nil
 }
@@ -119,7 +93,7 @@ func verifySignature(signedData []byte, pub []byte, signature []byte, domain uin
 // Official spec definition:
 //  def process_eth1_data(state: BeaconState, body: BeaconBlockBody) -> None:
 //    state.eth1_data_votes.append(body.eth1_data)
-//    if state.eth1_data_votes.count(body.eth1_data) * 2 > SLOTS_PER_ETH1_VOTING_PERIOD:
+//    if state.eth1_data_votes.count(body.eth1_data) * 2 > EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH:
 //      state.latest_eth1_data = body.eth1_data
 func ProcessEth1DataInBlock(beaconState *stateTrie.BeaconState, block *ethpb.BeaconBlock) (*stateTrie.BeaconState, error) {
 	if beaconState == nil {
@@ -170,7 +144,6 @@ func Eth1DataHasEnoughSupport(beaconState *stateTrie.BeaconState, data *ethpb.Et
 		if err != nil {
 			return false, errors.Wrap(err, "could not retrieve eth1 data vote cache")
 		}
 	}
 	if voteCount == 0 {
 		for _, vote := range beaconState.Eth1DataVotes() {
@@ -193,7 +166,8 @@ func Eth1DataHasEnoughSupport(beaconState *stateTrie.BeaconState, data *ethpb.Et
 	// If 50+% majority converged on the same eth1data, then it has enough support to update the
 	// state.
-	return voteCount*2 > params.BeaconConfig().SlotsPerEth1VotingPeriod, nil
+	support := params.BeaconConfig().EpochsPerEth1VotingPeriod * params.BeaconConfig().SlotsPerEpoch
+	return voteCount*2 > support, nil
 }
 // ProcessBlockHeader validates a block by its header.
@@ -203,6 +177,8 @@ func Eth1DataHasEnoughSupport(beaconState *stateTrie.BeaconState, data *ethpb.Et
 //  def process_block_header(state: BeaconState, block: BeaconBlock) -> None:
 //    # Verify that the slots match
 //    assert block.slot == state.slot
+//    # Verify that proposer index is the correct index
+//    assert block.proposer_index == get_beacon_proposer_index(state)
 //    # Verify that the parent matches
 //    assert block.parent_root == signing_root(state.latest_block_header)
 //    # Save current block as the new latest block
@@ -227,28 +203,29 @@ func ProcessBlockHeader(
 		return nil, err
 	}
-	idx, err := helpers.BeaconProposerIndex(beaconState)
-	if err != nil {
-		return nil, err
-	}
-	proposer, err := beaconState.ValidatorAtIndex(idx)
-	if err != nil {
-		return nil, err
-	}
 	// Verify proposer signature.
-	currentEpoch := helpers.SlotToEpoch(beaconState.Slot())
-	domain, err := helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
-	if err != nil {
+	if err := VerifyBlockHeaderSignature(beaconState, block); err != nil {
 		return nil, err
 	}
-	if err := verifyBlockRoot(block.Block, proposer.PublicKey, block.Signature, domain); err != nil {
-		return nil, ErrSigFailedToVerify
-	}
 	return beaconState, nil
 }
+// VerifyBlockHeaderSignature verifies the proposer signature of a beacon block.
+func VerifyBlockHeaderSignature(beaconState *stateTrie.BeaconState, block *ethpb.SignedBeaconBlock) error {
+	proposer, err := beaconState.ValidatorAtIndex(block.Block.ProposerIndex)
+	if err != nil {
+		return err
+	}
+	currentEpoch := helpers.SlotToEpoch(beaconState.Slot())
+	domain, err := helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorRoot())
+	if err != nil {
+		return err
+	}
+	return helpers.VerifySigningRoot(block.Block, proposer.PublicKey, block.Signature, domain)
+}
 // ProcessBlockHeaderNoVerify validates a block by its header but skips proposer
 // signature verification.
 //
@@ -259,6 +236,8 @@ func ProcessBlockHeader(
 //  def process_block_header(state: BeaconState, block: BeaconBlock) -> None:
 //    # Verify that the slots match
 //    assert block.slot == state.slot
+//    # Verify that proposer index is the correct index
+//    assert block.proposer_index == get_beacon_proposer_index(state)
 //    # Verify that the parent matches
 //    assert block.parent_root == signing_root(state.latest_block_header)
 //    # Save current block as the new latest block
@@ -280,7 +259,14 @@ func ProcessBlockHeaderNoVerify(
 		return nil, errors.New("nil block")
 	}
 	if beaconState.Slot() != block.Slot {
-		return nil, fmt.Errorf("state slot: %d is different then block slot: %d", beaconState.Slot(), block.Slot)
+		return nil, fmt.Errorf("state slot: %d is different than block slot: %d", beaconState.Slot(), block.Slot)
+	}
+	idx, err := helpers.BeaconProposerIndex(beaconState)
+	if err != nil {
+		return nil, err
+	}
+	if block.ProposerIndex != idx {
+		return nil, fmt.Errorf("proposer index: %d is different than calculated: %d", block.ProposerIndex, idx)
 	}
 	parentRoot, err := stateutil.BlockHeaderRoot(beaconState.LatestBlockHeader())
 	if err != nil {
@@ -293,10 +279,6 @@ func ProcessBlockHeaderNoVerify(
 			block.ParentRoot, parentRoot)
 	}
-	idx, err := helpers.BeaconProposerIndex(beaconState)
-	if err != nil {
-		return nil, err
-	}
 	proposer, err := beaconState.ValidatorAtIndex(idx)
 	if err != nil {
 		return nil, err
@@ -310,10 +292,11 @@ func ProcessBlockHeaderNoVerify(
 		return nil, err
 	}
 	if err := beaconState.SetLatestBlockHeader(&ethpb.BeaconBlockHeader{
-		Slot:       block.Slot,
-		ParentRoot: block.ParentRoot,
-		StateRoot:  params.BeaconConfig().ZeroHash[:],
-		BodyRoot:   bodyRoot[:],
+		Slot:          block.Slot,
+		ProposerIndex: block.ProposerIndex,
+		ParentRoot:    block.ParentRoot,
+		StateRoot:     params.BeaconConfig().ZeroHash[:],
+		BodyRoot:      bodyRoot[:],
 	}); err != nil {
 		return nil, err
 	}
@@ -353,7 +336,7 @@ func ProcessRandao(
 	buf := make([]byte, 32)
 	binary.LittleEndian.PutUint64(buf, currentEpoch)
-	domain, err := helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainRandao)
+	domain, err := helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainRandao, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		return nil, err
 	}
@@ -434,17 +417,14 @@ func ProcessProposerSlashings(
 		if slashing == nil {
 			return nil, errors.New("nil proposer slashings in block body")
 		}
-		if int(slashing.ProposerIndex) >= beaconState.NumValidators() {
-			return nil, fmt.Errorf("invalid proposer index given in slashing %d", slashing.ProposerIndex)
-		}
 		if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
 			return nil, errors.Wrapf(err, "could not verify proposer slashing %d", idx)
 		}
 		beaconState, err = v.SlashValidator(
-			beaconState, slashing.ProposerIndex, 0, /* proposer is whistleblower */
+			beaconState, slashing.Header_1.Header.ProposerIndex, 0, /* proposer is whistleblower */
 		)
 		if err != nil {
-			return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.ProposerIndex)
+			return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)
 		}
 	}
 	return beaconState, nil
@@ -455,30 +435,33 @@ func VerifyProposerSlashing(
 	beaconState *stateTrie.BeaconState,
 	slashing *ethpb.ProposerSlashing,
 ) error {
-	proposer, err := beaconState.ValidatorAtIndex(slashing.ProposerIndex)
-	if err != nil {
-		return err
-	}
 	if slashing.Header_1 == nil || slashing.Header_1.Header == nil || slashing.Header_2 == nil || slashing.Header_2.Header == nil {
 		return errors.New("nil header cannot be verified")
 	}
 	if slashing.Header_1.Header.Slot != slashing.Header_2.Header.Slot {
 		return fmt.Errorf("mismatched header slots, received %d == %d", slashing.Header_1.Header.Slot, slashing.Header_2.Header.Slot)
 	}
+	if slashing.Header_1.Header.ProposerIndex != slashing.Header_2.Header.ProposerIndex {
+		return fmt.Errorf("mismatched indices, received %d == %d", slashing.Header_1.Header.ProposerIndex, slashing.Header_2.Header.ProposerIndex)
+	}
 	if proto.Equal(slashing.Header_1, slashing.Header_2) {
 		return errors.New("expected slashing headers to differ")
 	}
+	proposer, err := beaconState.ValidatorAtIndex(slashing.Header_1.Header.ProposerIndex)
+	if err != nil {
+		return err
+	}
 	if !helpers.IsSlashableValidator(proposer, helpers.SlotToEpoch(beaconState.Slot())) {
 		return fmt.Errorf("validator with key %#x is not slashable", proposer.PublicKey)
 	}
 	// Using headerEpoch1 here because both of the headers should have the same epoch.
-	domain, err := helpers.Domain(beaconState.Fork(), helpers.SlotToEpoch(slashing.Header_1.Header.Slot), params.BeaconConfig().DomainBeaconProposer)
+	domain, err := helpers.Domain(beaconState.Fork(), helpers.SlotToEpoch(slashing.Header_1.Header.Slot), params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		return err
 	}
 	headers := []*ethpb.SignedBeaconBlockHeader{slashing.Header_1, slashing.Header_2}
 	for _, header := range headers {
-		if err := verifySigningRoot(header.Header, proposer.PublicKey, header.Signature, domain); err != nil {
+		if err := helpers.VerifySigningRoot(header.Header, proposer.PublicKey, header.Signature, domain); err != nil {
 			return errors.Wrap(err, "could not verify beacon block header")
 		}
 	}
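The reordered checks above validate a slashing's structure before touching validator state: matching slots, matching proposer indices (new in v0.11, since `ProposerIndex` moved from the slashing container into the header), and headers that actually differ. A pure-function sketch of that precondition order, with a hypothetical pared-down header type:

```go
package main

import (
	"errors"
	"fmt"
)

// header is a hypothetical pared-down BeaconBlockHeader for illustration.
type header struct {
	ProposerIndex uint64
	Slot          uint64
	StateRoot     string
}

// checkSlashingHeaders mirrors the v0.11 precondition order: matching slots,
// matching proposer indices, and two headers that actually differ.
func checkSlashingHeaders(h1, h2 header) error {
	if h1.Slot != h2.Slot {
		return fmt.Errorf("mismatched header slots, received %d == %d", h1.Slot, h2.Slot)
	}
	if h1.ProposerIndex != h2.ProposerIndex {
		return fmt.Errorf("mismatched indices, received %d == %d", h1.ProposerIndex, h2.ProposerIndex)
	}
	if h1 == h2 {
		return errors.New("expected slashing headers to differ")
	}
	return nil
}

func main() {
	a := header{ProposerIndex: 1, Slot: 0, StateRoot: "A"}
	b := header{ProposerIndex: 1, Slot: 0, StateRoot: "B"}
	fmt.Println(checkSlashingHeaders(a, b)) // valid pair: same slot, same proposer, different content
	fmt.Println(checkSlashingHeaders(a, a)) // rejected: identical headers are not a slashing
}
```

Only after these cheap structural checks pass does the real code pay for the state lookup, slashability check, and two signature verifications.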
@@ -596,7 +579,7 @@ func slashableAttesterIndices(slashing *ethpb.AttesterSlashing) []uint64 {
 		return nil
 	}
 	indices1 := slashing.Attestation_1.AttestingIndices
-	indices2 := slashing.Attestation_1.AttestingIndices
+	indices2 := slashing.Attestation_2.AttestingIndices
 	return sliceutil.IntersectionUint64(indices1, indices2)
 }
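The one-character fix above matters: taking both slices from `Attestation_1` made the intersection trivially equal to the first attestation's indices, so validators could be marked slashable without appearing in both conflicting attestations. A sketch of the intersection logic (a hypothetical reimplementation of what `sliceutil.IntersectionUint64` computes, not Prysm's actual code):

```go
package main

import "fmt"

// intersectionUint64 returns the values present in both input slices, which
// for an attester slashing yields the validators that signed both
// conflicting attestations and are therefore slashable.
func intersectionUint64(a, b []uint64) []uint64 {
	seen := make(map[uint64]bool, len(a))
	for _, v := range a {
		seen[v] = true
	}
	out := []uint64{}
	for _, v := range b {
		if seen[v] {
			out = append(out, v)
			seen[v] = false // emit each shared value only once
		}
	}
	return out
}

func main() {
	indices1 := []uint64{1, 2, 5, 7}
	indices2 := []uint64{2, 7, 9}
	fmt.Println(intersectionUint64(indices1, indices2)) // [2 7]
}
```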
@@ -827,30 +810,25 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState *stateTrie.Beacon
 		return errors.New("attesting indices is not uniquely sorted")
 	}
-	domain, err := helpers.Domain(beaconState.Fork(), indexedAtt.Data.Target.Epoch, params.BeaconConfig().DomainBeaconAttester)
+	domain, err := helpers.Domain(beaconState.Fork(), indexedAtt.Data.Target.Epoch, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		return err
 	}
-	var pubkey *bls.PublicKey
+	pubkeys := []*bls.PublicKey{}
 	if len(indices) > 0 {
-		pubkeyAtIdx := beaconState.PubkeyAtIndex(indices[0])
-		pubkey, err = bls.PublicKeyFromBytes(pubkeyAtIdx[:])
-		if err != nil {
-			return errors.Wrap(err, "could not deserialize validator public key")
-		}
-		for i := 1; i < len(indices); i++ {
-			pubkeyAtIdx = beaconState.PubkeyAtIndex(indices[i])
+		for i := 0; i < len(indices); i++ {
+			pubkeyAtIdx := beaconState.PubkeyAtIndex(indices[i])
 			pk, err := bls.PublicKeyFromBytes(pubkeyAtIdx[:])
 			if err != nil {
 				return errors.Wrap(err, "could not deserialize validator public key")
 			}
-			pubkey.Aggregate(pk)
+			pubkeys = append(pubkeys, pk)
 		}
 	}
-	messageHash, err := ssz.HashTreeRoot(indexedAtt.Data)
+	messageHash, err := helpers.ComputeSigningRoot(indexedAtt.Data, domain)
 	if err != nil {
-		return errors.Wrap(err, "could not tree hash att data")
+		return errors.Wrap(err, "could not get signing root of object")
 	}
 	sig, err := bls.SignatureFromBytes(indexedAtt.Signature)
@@ -859,8 +837,8 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState *stateTrie.Beacon
 	}
 	voted := len(indices) > 0
-	if voted && !sig.Verify(messageHash[:], pubkey, domain) {
-		return ErrSigFailedToVerify
+	if voted && !sig.FastAggregateVerify(pubkeys, messageHash) {
+		return helpers.ErrSigFailedToVerify
 	}
 	return nil
 }
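Before the hunk above runs, `VerifyIndexedAttestation` has already rejected attesting indices that are not "uniquely sorted" — strictly increasing, hence sorted with no duplicates. That invariant is what lets the loop collect one pubkey per index for `FastAggregateVerify` without deduplication. A sketch of that precondition:

```go
package main

import "fmt"

// isUniquelySorted mirrors the precondition enforced on attesting indices:
// strictly increasing, so the list is sorted and contains no duplicates.
func isUniquelySorted(indices []uint64) bool {
	for i := 1; i < len(indices); i++ {
		if indices[i] <= indices[i-1] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isUniquelySorted([]uint64{1, 3, 9}))    // strictly increasing
	fmt.Println(isUniquelySorted([]uint64{1, 3, 3, 9})) // duplicate index
	fmt.Println(isUniquelySorted([]uint64{3, 1}))       // out of order
}
```

Rejecting duplicates up front also prevents the same validator's key from being counted twice in the aggregate.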
@@ -1002,7 +980,10 @@ func ProcessDeposit(
 	index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
 	numVals := beaconState.NumValidators()
 	if !ok {
-		domain := bls.ComputeDomain(params.BeaconConfig().DomainDeposit)
+		domain, err := helpers.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
+		if err != nil {
+			return nil, err
+		}
 		depositSig := deposit.Data.Signature
 		if err := verifyDepositDataSigningRoot(deposit.Data, pubKey, depositSig, domain); err != nil {
 			// Ignore this error as in the spec pseudo code.
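The switch above from `bls.ComputeDomain` to `helpers.ComputeDomain(..., nil, nil)` reflects the v0.11 domain shape: a 32-byte value built from the 4-byte domain type plus the first 28 bytes of a fork-data root, with `nil` arguments falling back to the genesis fork version and a zeroed genesis validators root (as deposits require). The sketch below illustrates that shape; real code SSZ-hashes a `ForkData` container, and the sha256 here is only a stand-in:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// computeDomain sketches the v0.11 domain layout: domainType (4 bytes)
// followed by the first 28 bytes of a fork-data root. Real code SSZ-hashes
// a ForkData{currentVersion, genesisValidatorsRoot} container; sha256 over
// the concatenation is an illustrative stand-in.
func computeDomain(domainType [4]byte, forkVersion, genesisRoot []byte) []byte {
	if forkVersion == nil {
		forkVersion = make([]byte, 4) // nil falls back to the genesis fork version
	}
	if genesisRoot == nil {
		genesisRoot = make([]byte, 32) // nil falls back to a zeroed genesis root
	}
	forkData := make([]byte, 0, len(forkVersion)+len(genesisRoot))
	forkData = append(forkData, forkVersion...)
	forkData = append(forkData, genesisRoot...)
	forkDataRoot := sha256.Sum256(forkData)

	domain := make([]byte, 0, 32)
	domain = append(domain, domainType[:]...)
	domain = append(domain, forkDataRoot[:28]...)
	return domain
}

func main() {
	domainDeposit := [4]byte{3, 0, 0, 0} // hypothetical DomainDeposit value
	domain := computeDomain(domainDeposit, nil, nil)
	fmt.Println(len(domain)) // always 32 bytes
}
```

Deposits use the zeroed genesis root because they can be made before genesis, when no genesis validators root exists yet.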
@@ -1112,7 +1093,7 @@ func ProcessVoluntaryExits(
 		if err != nil {
 			return nil, err
 		}
-		if err := VerifyExit(val, beaconState.Slot(), beaconState.Fork(), exit); err != nil {
+		if err := VerifyExit(val, beaconState.Slot(), beaconState.Fork(), exit, beaconState.GenesisValidatorRoot()); err != nil {
 			return nil, errors.Wrapf(err, "could not verify exit %d", idx)
 		}
 		beaconState, err = v.InitiateValidatorExit(beaconState, exit.Exit.ValidatorIndex)
@@ -1163,7 +1144,7 @@ func ProcessVoluntaryExitsNoVerify(
 //    # Verify signature
 //    domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, exit.epoch)
 //    assert bls_verify(validator.pubkey, signing_root(exit), exit.signature, domain)
-func VerifyExit(validator *ethpb.Validator, currentSlot uint64, fork *pb.Fork, signed *ethpb.SignedVoluntaryExit) error {
+func VerifyExit(validator *ethpb.Validator, currentSlot uint64, fork *pb.Fork, signed *ethpb.SignedVoluntaryExit, genesisRoot []byte) error {
 	if signed == nil || signed.Exit == nil {
 		return errors.New("nil exit")
 	}
@@ -1190,12 +1171,12 @@ func VerifyExit(validator *ethpb.Validator, currentSlot uint64, fork *pb.Fork, s
 			validator.ActivationEpoch+params.BeaconConfig().PersistentCommitteePeriod,
 		)
 	}
-	domain, err := helpers.Domain(fork, exit.Epoch, params.BeaconConfig().DomainVoluntaryExit)
+	domain, err := helpers.Domain(fork, exit.Epoch, params.BeaconConfig().DomainVoluntaryExit, genesisRoot)
 	if err != nil {
 		return err
 	}
-	if err := verifySigningRoot(exit, validator.PublicKey, signed.Signature, domain); err != nil {
-		return ErrSigFailedToVerify
+	if err := helpers.VerifySigningRoot(exit, validator.PublicKey, signed.Signature, domain); err != nil {
+		return helpers.ErrSigFailedToVerify
 	}
 	return nil
 }
@@ -4,12 +4,11 @@ import (
 	"context"
 	"testing"
-	fuzz "github.com/google/gofuzz"
 	eth "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
+	"github.com/prysmaticlabs/prysm/shared/params"
+	fuzz "github.com/google/gofuzz"
-	"github.com/prysmaticlabs/prysm/shared/bytesutil"
 	//"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
 	beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
@@ -54,35 +53,6 @@ func TestFuzzProcessBlockHeader_10000(t *testing.T) {
 	}
 }
-func TestFuzzverifySigningRoot_10000(t *testing.T) {
-	fuzzer := fuzz.NewWithSeed(0)
-	state := &ethereum_beacon_p2p_v1.BeaconState{}
-	pubkey := [48]byte{}
-	sig := [96]byte{}
-	domain := [4]byte{}
-	p := []byte{}
-	s := []byte{}
-	d := uint64(0)
-	for i := 0; i < 10000; i++ {
-		fuzzer.Fuzz(state)
-		fuzzer.Fuzz(&pubkey)
-		fuzzer.Fuzz(&sig)
-		fuzzer.Fuzz(&domain)
-		fuzzer.Fuzz(state)
-		fuzzer.Fuzz(&p)
-		fuzzer.Fuzz(&s)
-		fuzzer.Fuzz(&d)
-		domain := bytesutil.FromBytes4(domain[:])
-		if err := verifySigningRoot(state, pubkey[:], sig[:], domain); err != nil {
-			t.Log(err)
-		}
-		if err := verifySigningRoot(state, p, s, d); err != nil {
-			t.Log(err)
-		}
-	}
-}
 func TestFuzzverifyDepositDataSigningRoot_10000(t *testing.T) {
 	fuzzer := fuzz.NewWithSeed(0)
 	ba := []byte{}
@@ -91,7 +61,7 @@ func TestFuzzverifyDepositDataSigningRoot_10000(t *testing.T) {
 	domain := [4]byte{}
 	p := []byte{}
 	s := []byte{}
-	d := uint64(0)
+	d := []byte{}
 	for i := 0; i < 10000; i++ {
 		fuzzer.Fuzz(&ba)
 		fuzzer.Fuzz(&pubkey)
@@ -100,13 +70,13 @@ func TestFuzzverifyDepositDataSigningRoot_10000(t *testing.T) {
 		fuzzer.Fuzz(&p)
 		fuzzer.Fuzz(&s)
 		fuzzer.Fuzz(&d)
-		domain := bytesutil.FromBytes4(domain[:])
-		if err := verifySignature(ba, pubkey[:], sig[:], domain); err != nil {
+		if err := verifySignature(ba, pubkey[:], sig[:], domain[:]); err != nil {
 			t.Log(err)
 		}
 		if err := verifySignature(ba, p, s, d); err != nil {
 			t.Log(err)
 		}
 	}
 }
@@ -525,7 +495,7 @@ func TestFuzzVerifyExit_10000(t *testing.T) {
 		fuzzer.Fuzz(val)
 		fuzzer.Fuzz(fork)
 		fuzzer.Fuzz(&slot)
-		if err := VerifyExit(val, slot, fork, ve); err != nil {
+		if err := VerifyExit(val, slot, fork, ve, params.BeaconConfig().ZeroHash[:]); err != nil {
 			t.Log(err)
 		}
 	}
@@ -17,9 +17,11 @@ import (
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
 	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
+	"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/attestationutil"
 	"github.com/prysmaticlabs/prysm/shared/bls"
+	"github.com/prysmaticlabs/prysm/shared/bytesutil"
 	"github.com/prysmaticlabs/prysm/shared/params"
 	"github.com/prysmaticlabs/prysm/shared/testutil"
 	"github.com/prysmaticlabs/prysm/shared/trieutil"
@@ -37,7 +39,7 @@ func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
 		t.Fatal(err)
 	}
-	lbhsr, err := ssz.HashTreeRoot(beaconState.LatestBlockHeader())
+	lbhdr, err := stateutil.BlockHeaderRoot(beaconState.LatestBlockHeader())
 	if err != nil {
 		t.Error(err)
 	}
@@ -49,22 +51,23 @@ func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
 	block := &ethpb.SignedBeaconBlock{
 		Block: &ethpb.BeaconBlock{
-			Slot: 0,
+			ProposerIndex: proposerIdx,
+			Slot:          0,
 			Body: &ethpb.BeaconBlockBody{
 				RandaoReveal: []byte{'A', 'B', 'C'},
 			},
-			ParentRoot: lbhsr[:],
+			ParentRoot: lbhdr[:],
 		},
 	}
-	signingRoot, err := ssz.HashTreeRoot(block.Block)
-	if err != nil {
-		t.Fatalf("Failed to get signing root of block: %v", err)
-	}
-	dt, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconProposer)
+	dt, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatalf("Failed to get domain form state: %v", err)
 	}
-	blockSig := privKeys[proposerIdx+1].Sign(signingRoot[:], dt)
+	signingRoot, err := helpers.ComputeSigningRoot(block.Block, dt)
+	if err != nil {
+		t.Fatalf("Failed to get signing root of block: %v", err)
+	}
+	blockSig := privKeys[proposerIdx+1].Sign(signingRoot[:])
 	block.Signature = blockSig.Marshal()[:]
 	_, err = blocks.ProcessBlockHeader(beaconState, block)
@@ -103,12 +106,16 @@ func TestProcessBlockHeader_DifferentSlots(t *testing.T) {
 		t.Error(err)
 	}
 	currentEpoch := helpers.CurrentEpoch(state)
-	dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
+	dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer, state.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatalf("Failed to get domain form state: %v", err)
 	}
 	priv := bls.RandKey()
-	blockSig := priv.Sign([]byte("hello"), dt)
+	root, err := helpers.ComputeSigningRoot([]byte("hello"), dt)
+	if err != nil {
+		t.Error(err)
+	}
+	blockSig := priv.Sign(root[:])
 	validators[5896].PublicKey = priv.PublicKey().Marshal()
 	block := &ethpb.SignedBeaconBlock{
 		Block: &ethpb.BeaconBlock{
@@ -122,7 +129,7 @@
 	}
 	_, err = blocks.ProcessBlockHeader(state, block)
-	want := "is different then block slot"
+	want := "is different than block slot"
 	if err == nil || !strings.Contains(err.Error(), want) {
 		t.Errorf("Expected %v, received %v", want, err)
 	}
@@ -152,16 +159,21 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
 	}
 	currentEpoch := helpers.CurrentEpoch(state)
-	dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
+	dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer, state.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatalf("Failed to get domain form state: %v", err)
 	}
 	priv := bls.RandKey()
-	blockSig := priv.Sign([]byte("hello"), dt)
+	root, err := helpers.ComputeSigningRoot([]byte("hello"), dt)
+	if err != nil {
+		t.Error(err)
+	}
+	blockSig := priv.Sign(root[:])
 	validators[5896].PublicKey = priv.PublicKey().Marshal()
 	block := &ethpb.SignedBeaconBlock{
 		Block: &ethpb.BeaconBlock{
-			Slot: 0,
+			ProposerIndex: 5669,
+			Slot:          0,
 			Body: &ethpb.BeaconBlockBody{
 				RandaoReveal: []byte{'A', 'B', 'C'},
 			},
@@ -200,21 +212,26 @@ func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
 		t.Fatal(err)
 	}
-	parentRoot, err := ssz.HashTreeRoot(state.LatestBlockHeader())
+	parentRoot, err := stateutil.BlockHeaderRoot(state.LatestBlockHeader())
 	if err != nil {
 		t.Error(err)
 	}
 	currentEpoch := helpers.CurrentEpoch(state)
-	dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
+	dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer, state.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatalf("Failed to get domain form state: %v", err)
 	}
 	priv := bls.RandKey()
-	blockSig := priv.Sign([]byte("hello"), dt)
+	root, err := helpers.ComputeSigningRoot([]byte("hello"), dt)
+	if err != nil {
+		t.Error(err)
+	}
+	blockSig := priv.Sign(root[:])
 	validators[12683].PublicKey = priv.PublicKey().Marshal()
 	block := &ethpb.SignedBeaconBlock{
 		Block: &ethpb.BeaconBlock{
-			Slot: 0,
+			ProposerIndex: 5669,
+			Slot:          0,
 			Body: &ethpb.BeaconBlockBody{
 				RandaoReveal: []byte{'A', 'B', 'C'},
 			},
@@ -253,30 +270,32 @@ func TestProcessBlockHeader_OK(t *testing.T) {
 		t.Fatal(err)
 	}
-	latestBlockSignedRoot, err := ssz.HashTreeRoot(state.LatestBlockHeader())
+	latestBlockSignedRoot, err := stateutil.BlockHeaderRoot(state.LatestBlockHeader())
 	if err != nil {
 		t.Error(err)
 	}
 	currentEpoch := helpers.CurrentEpoch(state)
-	dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
+	dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer, state.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatalf("Failed to get domain form state: %v", err)
 	}
 	priv := bls.RandKey()
 	block := &ethpb.SignedBeaconBlock{
 		Block: &ethpb.BeaconBlock{
-			Slot: 0,
+			ProposerIndex: 5669,
+			Slot:          0,
 			Body: &ethpb.BeaconBlockBody{
 				RandaoReveal: []byte{'A', 'B', 'C'},
 			},
 			ParentRoot: latestBlockSignedRoot[:],
 		},
 	}
-	signingRoot, err := ssz.HashTreeRoot(block.Block)
+	signingRoot, err := helpers.ComputeSigningRoot(block.Block, dt)
 	if err != nil {
 		t.Fatalf("Failed to get signing root of block: %v", err)
 	}
-	blockSig := priv.Sign(signingRoot[:], dt)
+	blockSig := priv.Sign(signingRoot[:])
 	block.Signature = blockSig.Marshal()[:]
 	bodyRoot, err := ssz.HashTreeRoot(block.Block.Body)
 	if err != nil {
@@ -301,10 +320,11 @@ func TestProcessBlockHeader_OK(t *testing.T) {
 	var zeroHash [32]byte
 	nsh := newState.LatestBlockHeader()
 	expected := &ethpb.BeaconBlockHeader{
-		Slot:       block.Block.Slot,
-		ParentRoot: latestBlockSignedRoot[:],
-		BodyRoot:   bodyRoot[:],
-		StateRoot:  zeroHash[:],
+		ProposerIndex: 5669,
+		Slot:          block.Block.Slot,
+		ParentRoot:    latestBlockSignedRoot[:],
+		BodyRoot:      bodyRoot[:],
+		StateRoot:     zeroHash[:],
 	}
 	if !proto.Equal(nsh, expected) {
 		t.Errorf("Expected %v, received %v", expected, nsh)
@@ -321,12 +341,16 @@ func TestProcessRandao_IncorrectProposerFailsVerification(t *testing.T) {
 	epoch := uint64(0)
 	buf := make([]byte, 32)
 	binary.LittleEndian.PutUint64(buf, epoch)
-	domain, err := helpers.Domain(beaconState.Fork(), epoch, params.BeaconConfig().DomainRandao)
+	domain, err := helpers.Domain(beaconState.Fork(), epoch, params.BeaconConfig().DomainRandao, beaconState.GenesisValidatorRoot())
+	if err != nil {
+		t.Fatal(err)
+	}
+	root, err := ssz.HashTreeRoot(&pb.SigningRoot{ObjectRoot: buf, Domain: domain})
 	if err != nil {
 		t.Fatal(err)
 	}
 	// We make the previous validator's index sign the message instead of the proposer.
-	epochSignature := privKeys[proposerIdx-1].Sign(buf, domain)
+	epochSignature := privKeys[proposerIdx-1].Sign(root[:])
 	block := &ethpb.BeaconBlock{
 		Body: &ethpb.BeaconBlockBody{
 			RandaoReveal: epochSignature.Marshal(),
@@ -391,7 +415,9 @@ func TestProcessEth1Data_SetsCorrectly(t *testing.T) {
 		},
 	}
-	for i := uint64(0); i < params.BeaconConfig().SlotsPerEth1VotingPeriod; i++ {
+	period := params.BeaconConfig().EpochsPerEth1VotingPeriod * params.BeaconConfig().SlotsPerEpoch
+	for i := uint64(0); i < period; i++ {
 		beaconState, err = blocks.ProcessEth1DataInBlock(beaconState, block)
 		if err != nil {
 			t.Fatal(err)
@@ -415,15 +441,16 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
 	currentSlot := uint64(0)
 	slashings := []*ethpb.ProposerSlashing{
 		{
-			ProposerIndex: 1,
 			Header_1: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: params.BeaconConfig().SlotsPerEpoch + 1,
+					ProposerIndex: 1,
+					Slot:          params.BeaconConfig().SlotsPerEpoch + 1,
 				},
 			},
 			Header_2: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: 0,
+					ProposerIndex: 1,
+					Slot:          0,
 				},
 			},
 		},
@@ -449,15 +476,16 @@ func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
 	currentSlot := uint64(0)
 	slashings := []*ethpb.ProposerSlashing{
 		{
-			ProposerIndex: 1,
 			Header_1: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: 0,
+					ProposerIndex: 1,
+					Slot:          0,
 				},
 			},
 			Header_2: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: 0,
+					ProposerIndex: 1,
+					Slot:          0,
 				},
 			},
 		},
@@ -490,16 +518,17 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
 	currentSlot := uint64(0)
 	slashings := []*ethpb.ProposerSlashing{
 		{
-			ProposerIndex: 0,
 			Header_1: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: 0,
+					ProposerIndex: 0,
+					Slot:          0,
 				},
 				Signature: []byte("A"),
 			},
 			Header_2: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: 0,
+					ProposerIndex: 0,
+					Slot:          0,
 				},
 				Signature: []byte("B"),
 			},
@@ -535,39 +564,40 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
 	beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
 	proposerIdx := uint64(1)
-	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconProposer)
+	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
 	header1 := &ethpb.SignedBeaconBlockHeader{
 		Header: &ethpb.BeaconBlockHeader{
-			Slot:      0,
-			StateRoot: []byte("A"),
+			ProposerIndex: proposerIdx,
+			Slot:          0,
+			StateRoot:     []byte("A"),
 		},
 	}
-	signingRoot, err := ssz.HashTreeRoot(header1.Header)
+	signingRoot, err := helpers.ComputeSigningRoot(header1.Header, domain)
 	if err != nil {
 		t.Errorf("Could not get signing root of beacon block header: %v", err)
 	}
-	header1.Signature = privKeys[proposerIdx].Sign(signingRoot[:], domain).Marshal()[:]
+	header1.Signature = privKeys[proposerIdx].Sign(signingRoot[:]).Marshal()[:]
 	header2 := &ethpb.SignedBeaconBlockHeader{
 		Header: &ethpb.BeaconBlockHeader{
-			Slot:      0,
-			StateRoot: []byte("B"),
+			ProposerIndex: proposerIdx,
+			Slot:          0,
+			StateRoot:     []byte("B"),
 		},
 	}
-	signingRoot, err = ssz.HashTreeRoot(header2.Header)
+	signingRoot, err = helpers.ComputeSigningRoot(header2.Header, domain)
 	if err != nil {
 		t.Errorf("Could not get signing root of beacon block header: %v", err)
 	}
-	header2.Signature = privKeys[proposerIdx].Sign(signingRoot[:], domain).Marshal()[:]
+	header2.Signature = privKeys[proposerIdx].Sign(signingRoot[:]).Marshal()[:]
 	slashings := []*ethpb.ProposerSlashing{
 		{
-			ProposerIndex: proposerIdx,
-			Header_1:      header1,
-			Header_2:      header2,
+			Header_1: header1,
+			Header_2: header2,
 		},
 	}
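The three proposer-slashing tests above exercise the v0.11 rule that the proposer index now lives inside the block header, and that a slashing is valid only when both headers come from the same proposer at the same slot yet are otherwise distinct. A minimal sketch of that predicate (the `header` struct and `isProposerSlashable` helper are illustrative stand-ins, not Prysm's API):

```go
package main

import "fmt"

// header is a stripped-down stand-in for ethpb.BeaconBlockHeader; since v0.11
// the proposer index is a field of the header itself.
type header struct {
	ProposerIndex uint64
	Slot          uint64
	StateRoot     string
}

// isProposerSlashable mirrors the spec condition: same proposer, same slot,
// but different header contents (a double proposal).
func isProposerSlashable(h1, h2 header) bool {
	return h1.ProposerIndex == h2.ProposerIndex && h1.Slot == h2.Slot && h1 != h2
}

func main() {
	a := header{ProposerIndex: 1, Slot: 0, StateRoot: "A"}
	b := header{ProposerIndex: 1, Slot: 0, StateRoot: "B"}
	fmt.Println(isProposerSlashable(a, b)) // distinct headers, same slot and proposer
}
```

The unmatched-slots, same-headers, and not-slashable cases above each falsify one conjunct of this predicate.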
@@ -706,16 +736,16 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
 		},
 		AttestingIndices: []uint64{0, 1},
 	}
-	hashTreeRoot, err := ssz.HashTreeRoot(att1.Data)
-	if err != nil {
-		t.Error(err)
-	}
-	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
+	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
-	sig0 := privKeys[0].Sign(hashTreeRoot[:], domain)
-	sig1 := privKeys[1].Sign(hashTreeRoot[:], domain)
+	signingRoot, err := helpers.ComputeSigningRoot(att1.Data, domain)
+	if err != nil {
+		t.Errorf("Could not get signing root of beacon block header: %v", err)
+	}
+	sig0 := privKeys[0].Sign(signingRoot[:])
+	sig1 := privKeys[1].Sign(signingRoot[:])
 	aggregateSig := bls.AggregateSignatures([]*bls.Signature{sig0, sig1})
 	att1.Signature = aggregateSig.Marshal()[:]
@@ -726,12 +756,12 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
 		},
 		AttestingIndices: []uint64{0, 1},
 	}
-	hashTreeRoot, err = ssz.HashTreeRoot(att2.Data)
+	signingRoot, err = helpers.ComputeSigningRoot(att2.Data, domain)
 	if err != nil {
-		t.Error(err)
+		t.Errorf("Could not get signing root of beacon block header: %v", err)
 	}
-	sig0 = privKeys[0].Sign(hashTreeRoot[:], domain)
-	sig1 = privKeys[1].Sign(hashTreeRoot[:], domain)
+	sig0 = privKeys[0].Sign(signingRoot[:])
+	sig1 = privKeys[1].Sign(signingRoot[:])
 	aggregateSig = bls.AggregateSignatures([]*bls.Signature{sig0, sig1})
 	att2.Signature = aggregateSig.Marshal()[:]
@@ -1020,17 +1050,17 @@ func TestProcessAttestations_OK(t *testing.T) {
 	if err != nil {
 		t.Error(err)
 	}
-	hashTreeRoot, err := ssz.HashTreeRoot(att.Data)
-	if err != nil {
-		t.Error(err)
-	}
-	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
+	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
+	hashTreeRoot, err := helpers.ComputeSigningRoot(att.Data, domain)
+	if err != nil {
+		t.Error(err)
+	}
 	sigs := make([]*bls.Signature, len(attestingIndices))
 	for i, indice := range attestingIndices {
-		sig := privKeys[indice].Sign(hashTreeRoot[:], domain)
+		sig := privKeys[indice].Sign(hashTreeRoot[:])
 		sigs[i] = sig
 	}
 	att.Signature = bls.AggregateSignatures(sigs).Marshal()[:]
@@ -1054,7 +1084,7 @@ func TestProcessAttestations_OK(t *testing.T) {
 func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
 	beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
-	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
+	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -1088,13 +1118,13 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	hashTreeRoot, err := ssz.HashTreeRoot(att1.Data)
+	hashTreeRoot, err := helpers.ComputeSigningRoot(att1.Data, domain)
 	if err != nil {
-		t.Fatal(err)
+		t.Error(err)
 	}
 	sigs := make([]*bls.Signature, len(attestingIndices1))
 	for i, indice := range attestingIndices1 {
-		sig := privKeys[indice].Sign(hashTreeRoot[:], domain)
+		sig := privKeys[indice].Sign(hashTreeRoot[:])
 		sigs[i] = sig
 	}
 	att1.Signature = bls.AggregateSignatures(sigs).Marshal()[:]
@@ -1116,13 +1146,13 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	hashTreeRoot, err = ssz.HashTreeRoot(data)
+	hashTreeRoot, err = helpers.ComputeSigningRoot(data, domain)
 	if err != nil {
-		t.Fatal(err)
+		t.Error(err)
 	}
 	sigs = make([]*bls.Signature, len(attestingIndices2))
 	for i, indice := range attestingIndices2 {
-		sig := privKeys[indice].Sign(hashTreeRoot[:], domain)
+		sig := privKeys[indice].Sign(hashTreeRoot[:])
 		sigs[i] = sig
 	}
 	att2.Signature = bls.AggregateSignatures(sigs).Marshal()[:]
@@ -1135,7 +1165,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
 func TestProcessAggregatedAttestation_NoOverlappingBits(t *testing.T) {
 	beaconState, privKeys := testutil.DeterministicGenesisState(t, 300)
-	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
+	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -1170,13 +1200,13 @@ func TestProcessAggregatedAttestation_NoOverlappingBits(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	hashTreeRoot, err := ssz.HashTreeRoot(data)
+	hashTreeRoot, err := helpers.ComputeSigningRoot(data, domain)
 	if err != nil {
-		t.Fatal(err)
+		t.Error(err)
 	}
 	sigs := make([]*bls.Signature, len(attestingIndices1))
 	for i, indice := range attestingIndices1 {
-		sig := privKeys[indice].Sign(hashTreeRoot[:], domain)
+		sig := privKeys[indice].Sign(hashTreeRoot[:])
 		sigs[i] = sig
 	}
 	att1.Signature = bls.AggregateSignatures(sigs).Marshal()[:]
@@ -1197,13 +1227,13 @@ func TestProcessAggregatedAttestation_NoOverlappingBits(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	hashTreeRoot, err = ssz.HashTreeRoot(data)
+	hashTreeRoot, err = helpers.ComputeSigningRoot(data, domain)
 	if err != nil {
-		t.Fatal(err)
+		t.Error(err)
 	}
 	sigs = make([]*bls.Signature, len(attestingIndices2))
 	for i, indice := range attestingIndices2 {
-		sig := privKeys[indice].Sign(hashTreeRoot[:], domain)
+		sig := privKeys[indice].Sign(hashTreeRoot[:])
 		sigs[i] = sig
 	}
 	att2.Signature = bls.AggregateSignatures(sigs).Marshal()[:]
@@ -1412,18 +1442,17 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
 	}
 	for _, tt := range tests {
-		domain, err := helpers.Domain(state.Fork(), tt.attestation.Data.Target.Epoch, params.BeaconConfig().DomainBeaconAttester)
+		domain, err := helpers.Domain(state.Fork(), tt.attestation.Data.Target.Epoch, params.BeaconConfig().DomainBeaconAttester, state.GenesisValidatorRoot())
 		if err != nil {
 			t.Fatal(err)
 		}
-		root, err := ssz.HashTreeRoot(tt.attestation.Data)
+		root, err := helpers.ComputeSigningRoot(tt.attestation.Data, domain)
 		if err != nil {
-			t.Errorf("Could not find the ssz root: %v", err)
-			continue
+			t.Error(err)
 		}
 		var sig []*bls.Signature
 		for _, idx := range tt.attestation.AttestingIndices {
-			validatorSig := keys[idx].Sign(root[:], domain)
+			validatorSig := keys[idx].Sign(root[:])
 			sig = append(sig, validatorSig)
 		}
 		aggSig := bls.AggregateSignatures(sig)
@@ -1604,11 +1633,11 @@ func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
 			Amount: 1000,
 		},
 	}
-	sr, err := ssz.HashTreeRoot(deposit.Data)
+	sr, err := helpers.ComputeSigningRoot(deposit.Data, bytesutil.ToBytes(3, 8))
 	if err != nil {
 		t.Fatal(err)
 	}
-	sig := sk.Sign(sr[:], 3)
+	sig := sk.Sign(sr[:])
 	deposit.Data.Signature = sig.Marshal()
 	leaf, err := ssz.HashTreeRoot(deposit.Data)
 	if err != nil {
@@ -1915,15 +1944,15 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
 	if err := state.UpdateValidatorAtIndex(0, val); err != nil {
 		t.Fatal(err)
 	}
-	signingRoot, err := ssz.HashTreeRoot(exits[0].Exit)
-	if err != nil {
-		t.Error(err)
-	}
-	domain, err := helpers.Domain(state.Fork(), helpers.CurrentEpoch(state), params.BeaconConfig().DomainVoluntaryExit)
+	domain, err := helpers.Domain(state.Fork(), helpers.CurrentEpoch(state), params.BeaconConfig().DomainVoluntaryExit, state.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
-	sig := priv.Sign(signingRoot[:], domain)
+	signingRoot, err := helpers.ComputeSigningRoot(exits[0].Exit, domain)
+	if err != nil {
+		t.Error(err)
+	}
+	sig := priv.Sign(signingRoot[:])
 	exits[0].Signature = sig.Marshal()
 	block := &ethpb.BeaconBlock{
 		Body: &ethpb.BeaconBlockBody{


@@ -0,0 +1,113 @@
package blocks_test

import (
	"context"
	"testing"

	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
	"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
	"github.com/prysmaticlabs/prysm/shared/bls"
	"github.com/prysmaticlabs/prysm/shared/params"
	"github.com/prysmaticlabs/prysm/shared/testutil"
)

func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
	testutil.ResetCache()
	beaconState, privKeys := testutil.DeterministicGenesisState(t, 5500)
	for _, vv := range beaconState.Validators() {
		vv.WithdrawableEpoch = 1 * params.BeaconConfig().SlotsPerEpoch
	}

	// This set of indices is very similar to the one from our sapphire testnet,
	// where close to 100 validators were incorrectly slashed. The indices are
	// drawn from 0-5500 rather than 0-55000, as generating a state that large
	// would take too long.
	setA := []uint64{21, 92, 236, 244, 281, 321, 510, 524,
		538, 682, 828, 858, 913, 920, 922, 959, 1176, 1207,
		1222, 1229, 1354, 1394, 1436, 1454, 1510, 1550,
		1552, 1576, 1645, 1704, 1842, 1967, 2076, 2111, 2134, 2307,
		2343, 2354, 2417, 2524, 2532, 2555, 2740, 2749, 2759, 2762,
		2800, 2809, 2824, 2987, 3110, 3125, 3559, 3583, 3599, 3608,
		3657, 3685, 3723, 3756, 3759, 3761, 3820, 3826, 3979, 4030,
		4141, 4170, 4205, 4247, 4257, 4479, 4492, 4569, 5091,
	}
	// Only 2800 is the slashable index.
	setB := []uint64{1361, 1438, 2383, 2800}
	expectedSlashedVal := 2800
	root1 := [32]byte{'d', 'o', 'u', 'b', 'l', 'e', '1'}
	att1 := &ethpb.IndexedAttestation{
		Data: &ethpb.AttestationData{
			Source: &ethpb.Checkpoint{Epoch: 0},
			Target: &ethpb.Checkpoint{Epoch: 0, Root: root1[:]},
		},
		AttestingIndices: setA,
	}
	domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorRoot())
	if err != nil {
		t.Fatal(err)
	}
	signingRoot, err := helpers.ComputeSigningRoot(att1.Data, domain)
	if err != nil {
		t.Errorf("Could not get signing root of beacon block header: %v", err)
	}
	aggSigs := []*bls.Signature{}
	for _, index := range setA {
		sig := privKeys[index].Sign(signingRoot[:])
		aggSigs = append(aggSigs, sig)
	}
	aggregateSig := bls.AggregateSignatures(aggSigs)
	att1.Signature = aggregateSig.Marshal()[:]

	root2 := [32]byte{'d', 'o', 'u', 'b', 'l', 'e', '2'}
	att2 := &ethpb.IndexedAttestation{
		Data: &ethpb.AttestationData{
			Source: &ethpb.Checkpoint{Epoch: 0},
			Target: &ethpb.Checkpoint{Epoch: 0, Root: root2[:]},
		},
		AttestingIndices: setB,
	}
	signingRoot, err = helpers.ComputeSigningRoot(att2.Data, domain)
	if err != nil {
		t.Errorf("Could not get signing root of beacon block header: %v", err)
	}
	aggSigs = []*bls.Signature{}
	for _, index := range setB {
		sig := privKeys[index].Sign(signingRoot[:])
		aggSigs = append(aggSigs, sig)
	}
	aggregateSig = bls.AggregateSignatures(aggSigs)
	att2.Signature = aggregateSig.Marshal()[:]

	slashings := []*ethpb.AttesterSlashing{
		{
			Attestation_1: att1,
			Attestation_2: att2,
		},
	}
	currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
	if err := beaconState.SetSlot(currentSlot); err != nil {
		t.Fatal(err)
	}
	block := &ethpb.BeaconBlock{
		Body: &ethpb.BeaconBlockBody{
			AttesterSlashings: slashings,
		},
	}
	newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, block.Body)
	if err != nil {
		t.Fatal(err)
	}
	newRegistry := newState.Validators()
	if !newRegistry[expectedSlashedVal].Slashed {
		t.Errorf("Validator with index %d was not slashed despite performing a double vote", expectedSlashedVal)
	}
	for idx, val := range newRegistry {
		if val.Slashed && idx != expectedSlashedVal {
			t.Errorf("validator with index: %d was unintentionally slashed", idx)
		}
	}
}
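The regression above comes down to computing the set of indices that attested in both conflicting attestations; only validators in that intersection made a double vote and may be slashed, while the buggy behavior slashed members of the union. A minimal sketch of the intersection (illustrative helper, not the Prysm implementation):

```go
package main

import "fmt"

// slashableIndices returns the indices present in both attesting sets;
// only these validators signed both conflicting attestations.
func slashableIndices(setA, setB []uint64) []uint64 {
	seen := make(map[uint64]bool, len(setA))
	for _, idx := range setA {
		seen[idx] = true
	}
	var out []uint64
	for _, idx := range setB {
		if seen[idx] {
			out = append(out, idx)
		}
	}
	return out
}

func main() {
	setA := []uint64{21, 92, 2524, 2800, 4170} // subset of the test's first set
	setB := []uint64{1361, 1438, 2383, 2800}
	fmt.Println(slashableIndices(setA, setB)) // only 2800 appears in both
}
```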


@@ -11,6 +11,17 @@ import (
 	"github.com/prysmaticlabs/prysm/shared/params"
 )

+func FakeDeposits(n int) []*ethpb.Eth1Data {
+	deposits := make([]*ethpb.Eth1Data, n)
+	for i := 0; i < n; i++ {
+		deposits[i] = &ethpb.Eth1Data{
+			DepositCount: 1,
+			DepositRoot:  []byte("root"),
+		}
+	}
+	return deposits
+}
+
 func TestEth1DataHasEnoughSupport(t *testing.T) {
 	tests := []struct {
 		stateVotes []*ethpb.Eth1Data
@@ -19,21 +30,7 @@ func TestEth1DataHasEnoughSupport(t *testing.T) {
 		votingPeriodLength uint64
 	}{
 		{
-			stateVotes: []*ethpb.Eth1Data{
-				{
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				}, {
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				}, {
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				}, {
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				},
-			},
+			stateVotes: FakeDeposits(4 * int(params.BeaconConfig().SlotsPerEpoch)),
 			data: &ethpb.Eth1Data{
 				DepositCount: 1,
 				DepositRoot:  []byte("root"),
@@ -41,21 +38,7 @@ func TestEth1DataHasEnoughSupport(t *testing.T) {
 			hasSupport:         true,
 			votingPeriodLength: 7,
 		}, {
-			stateVotes: []*ethpb.Eth1Data{
-				{
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				}, {
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				}, {
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				}, {
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				},
-			},
+			stateVotes: FakeDeposits(4 * int(params.BeaconConfig().SlotsPerEpoch)),
 			data: &ethpb.Eth1Data{
 				DepositCount: 1,
 				DepositRoot:  []byte("root"),
@@ -63,21 +46,7 @@ func TestEth1DataHasEnoughSupport(t *testing.T) {
 			hasSupport:         false,
 			votingPeriodLength: 8,
 		}, {
-			stateVotes: []*ethpb.Eth1Data{
-				{
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				}, {
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				}, {
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				}, {
-					DepositCount: 1,
-					DepositRoot:  []byte("root"),
-				},
-			},
+			stateVotes: FakeDeposits(4 * int(params.BeaconConfig().SlotsPerEpoch)),
 			data: &ethpb.Eth1Data{
 				DepositCount: 1,
 				DepositRoot:  []byte("root"),
@@ -90,7 +59,7 @@ func TestEth1DataHasEnoughSupport(t *testing.T) {
 	for i, tt := range tests {
 		t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
 			c := params.BeaconConfig()
-			c.SlotsPerEth1VotingPeriod = tt.votingPeriodLength
+			c.EpochsPerEth1VotingPeriod = tt.votingPeriodLength
 			params.OverrideBeaconConfig(c)
 			s, err := beaconstate.InitializeFromProto(&pb.BeaconState{
@@ -106,8 +75,7 @@ func TestEth1DataHasEnoughSupport(t *testing.T) {
 			if result != tt.hasSupport {
 				t.Errorf(
-					"blocks.Eth1DataHasEnoughSupport(%+v, %+v) = %t, wanted %t",
-					s,
+					"blocks.Eth1DataHasEnoughSupport(%+v) = %t, wanted %t",
 					tt.data,
 					result,
 					tt.hasSupport,
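The reworked test cases read naturally once the majority rule is spelled out: a vote has enough support when it occupies more than half the slots of the voting period, and the period is now measured as `EpochsPerEth1VotingPeriod * SlotsPerEpoch` slots. A sketch of that rule (hypothetical helper; 32 slots per epoch is the mainnet value, assumed here):

```go
package main

import "fmt"

const slotsPerEpoch = uint64(32) // mainnet value, assumed

// hasEnoughSupport sketches the majority rule the test exercises: a vote wins
// once it appears in more than half of the voting-period slots.
func hasEnoughSupport(voteCount, epochsPerPeriod uint64) bool {
	periodSlots := epochsPerPeriod * slotsPerEpoch
	return voteCount*2 > periodSlots
}

func main() {
	votes := 4 * slotsPerEpoch                  // the tests insert 4*SlotsPerEpoch identical votes
	fmt.Println(hasEnoughSupport(votes, 7))     // 2*128 > 224: supported
	fmt.Println(hasEnoughSupport(votes, 8))     // 2*128 > 256 fails: not supported
}
```

This is exactly why the same 128 votes flip from `hasSupport: true` at a 7-epoch period to `hasSupport: false` at 8 epochs.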


@@ -98,8 +98,8 @@ func runBlockProcessingTest(t *testing.T, config string) {
 		t.Fatalf("Failed to unmarshal: %v", err)
 	}
-	if !proto.Equal(beaconState.CloneInnerState(), postBeaconState) {
-		diff, _ := messagediff.PrettyDiff(beaconState.CloneInnerState(), postBeaconState)
+	if !proto.Equal(beaconState.InnerStateUnsafe(), postBeaconState) {
+		diff, _ := messagediff.PrettyDiff(beaconState.InnerStateUnsafe(), postBeaconState)
 		t.Log(diff)
 		t.Fatal("Post state does not match expected")
 	}


@@ -194,15 +194,18 @@ func ProcessSlashings(state *stateTrie.BeaconState) (*stateTrie.BeaconState, err
 //    current_epoch = get_current_epoch(state)
 //    next_epoch = Epoch(current_epoch + 1)
 //    # Reset eth1 data votes
-//    if (state.slot + 1) % SLOTS_PER_ETH1_VOTING_PERIOD == 0:
+//    if next_epoch % EPOCHS_PER_ETH1_VOTING_PERIOD == 0:
 //        state.eth1_data_votes = []
 //    # Update effective balances with hysteresis
 //    for index, validator in enumerate(state.validators):
 //        balance = state.balances[index]
-//        HALF_INCREMENT = EFFECTIVE_BALANCE_INCREMENT // 2
-//        if balance < validator.effective_balance or validator.effective_balance + 3 * HALF_INCREMENT < balance:
-//            validator.effective_balance = min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
-//    # Set active index root
+//        HYSTERESIS_INCREMENT = EFFECTIVE_BALANCE_INCREMENT // HYSTERESIS_QUOTIENT
+//        DOWNWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_DOWNWARD_MULTIPLIER
+//        UPWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_UPWARD_MULTIPLIER
+//        if (
+//            balance + DOWNWARD_THRESHOLD < validator.effective_balance
+//            or validator.effective_balance + UPWARD_THRESHOLD < balance
+//        ):
 //    index_epoch = Epoch(next_epoch + ACTIVATION_EXIT_DELAY)
 //    index_root_position = index_epoch % EPOCHS_PER_HISTORICAL_VECTOR
 //    indices_list = List[ValidatorIndex, VALIDATOR_REGISTRY_LIMIT](get_active_validator_indices(state, index_epoch))
@@ -228,7 +231,7 @@ func ProcessFinalUpdates(state *stateTrie.BeaconState) (*stateTrie.BeaconState,
 	nextEpoch := currentEpoch + 1
 	// Reset ETH1 data votes.
-	if (state.Slot()+1)%params.BeaconConfig().SlotsPerEth1VotingPeriod == 0 {
+	if nextEpoch%params.BeaconConfig().EpochsPerEth1VotingPeriod == 0 {
 		if err := state.SetEth1DataVotes([]*ethpb.Eth1Data{}); err != nil {
 			return nil, err
 		}
@@ -244,8 +247,11 @@ func ProcessFinalUpdates(state *stateTrie.BeaconState) (*stateTrie.BeaconState,
 			return false, fmt.Errorf("validator index exceeds validator length in state %d >= %d", idx, len(state.Balances()))
 		}
 		balance := bals[idx]
-		halfInc := params.BeaconConfig().EffectiveBalanceIncrement / 2
-		if balance < val.EffectiveBalance || val.EffectiveBalance+3*halfInc < balance {
+		hysteresisInc := params.BeaconConfig().EffectiveBalanceIncrement / params.BeaconConfig().HysteresisQuotient
+		downwardThreshold := hysteresisInc * params.BeaconConfig().HysteresisDownwardMultiplier
+		upwardThreshold := hysteresisInc * params.BeaconConfig().HysteresisUpwardMultiplier
+		if balance+downwardThreshold < val.EffectiveBalance || val.EffectiveBalance+upwardThreshold < balance {
 			val.EffectiveBalance = params.BeaconConfig().MaxEffectiveBalance
 			if val.EffectiveBalance > balance-balance%params.BeaconConfig().EffectiveBalanceIncrement {
 				val.EffectiveBalance = balance - balance%params.BeaconConfig().EffectiveBalanceIncrement
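The hysteresis change is easier to see with the mainnet constants plugged in (EFFECTIVE_BALANCE_INCREMENT = 1 ETH in Gwei, HYSTERESIS_QUOTIENT = 4, downward and upward multipliers 1 and 5; these are spec values, assumed here). The effective balance only moves once the actual balance drifts past a threshold, which stops it from flapping every epoch:

```go
package main

import "fmt"

const (
	effectiveBalanceIncrement = uint64(1_000_000_000)  // 1 ETH in Gwei
	hysteresisQuotient        = uint64(4)
	hysteresisDownwardMult    = uint64(1)
	hysteresisUpwardMult      = uint64(5)
	maxEffectiveBalance       = uint64(32_000_000_000) // 32 ETH in Gwei
)

// updateEffectiveBalance applies the v0.11 hysteresis rule from the spec
// comment above: move the effective balance only when the actual balance
// crosses the downward (0.25 ETH) or upward (1.25 ETH) threshold.
func updateEffectiveBalance(balance, effective uint64) uint64 {
	hysteresisInc := effectiveBalanceIncrement / hysteresisQuotient
	downward := hysteresisInc * hysteresisDownwardMult
	upward := hysteresisInc * hysteresisUpwardMult
	if balance+downward < effective || effective+upward < balance {
		effective = balance - balance%effectiveBalanceIncrement
		if effective > maxEffectiveBalance {
			effective = maxEffectiveBalance
		}
	}
	return effective
}

func main() {
	// 31.75 ETH is within 0.25 ETH of 32: no change, as the updated test expects.
	fmt.Println(updateEffectiveBalance(31_750_000_000, maxEffectiveBalance))
	// 31.74 ETH crosses the downward threshold: effective drops to 31 ETH.
	fmt.Println(updateEffectiveBalance(31_740_000_000, maxEffectiveBalance))
}
```

These two balances are exactly the ones the updated `TestProcessFinalUpdates_CanProcess` below feeds validators 0 and 1.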


@@ -317,10 +317,12 @@ func TestProcessFinalUpdates_CanProcess(t *testing.T) {
 		t.Fatal(err)
 	}
 	balances := s.Balances()
-	balances[0] = 29 * 1e9
+	balances[0] = 31.75 * 1e9
+	balances[1] = 31.74 * 1e9
 	if err := s.SetBalances(balances); err != nil {
 		t.Fatal(err)
 	}
 	slashings := s.Slashings()
 	slashings[ce] = 0
 	if err := s.SetSlashings(slashings); err != nil {
@@ -337,9 +339,12 @@ func TestProcessFinalUpdates_CanProcess(t *testing.T) {
 	}
 	// Verify effective balance is correctly updated.
-	if newS.Validators()[0].EffectiveBalance != 29*1e9 {
+	if newS.Validators()[0].EffectiveBalance != params.BeaconConfig().MaxEffectiveBalance {
 		t.Errorf("effective balance incorrectly updated, got %d", s.Validators()[0].EffectiveBalance)
 	}
+	if newS.Validators()[1].EffectiveBalance != 31*1e9 {
+		t.Errorf("effective balance incorrectly updated, got %d", s.Validators()[1].EffectiveBalance)
+	}
 	// Verify slashed balances correctly updated.
 	if newS.Slashings()[ce] != newS.Slashings()[ne] {


@@ -83,7 +83,9 @@ func attestationDelta(state *stateTrie.BeaconState, bp *Balance, v *Validator) (
 	// Process source reward / penalty
 	if v.IsPrevEpochAttester && !v.IsSlashed {
-		r += br * bp.PrevEpochAttesters / bp.CurrentEpoch
+		inc := params.BeaconConfig().EffectiveBalanceIncrement
+		rewardNumerator := br * (bp.PrevEpochAttesters / inc)
+		r += rewardNumerator / (bp.CurrentEpoch / inc)
 		proposerReward := br / params.BeaconConfig().ProposerRewardQuotient
 		maxAtteserReward := br - proposerReward
 		r += maxAtteserReward / v.InclusionDistance
@@ -93,14 +95,18 @@ func attestationDelta(state *stateTrie.BeaconState, bp *Balance, v *Validator) (
 	// Process target reward / penalty
 	if v.IsPrevEpochTargetAttester && !v.IsSlashed {
-		r += br * bp.PrevEpochTargetAttesters / bp.CurrentEpoch
+		inc := params.BeaconConfig().EffectiveBalanceIncrement
+		rewardNumerator := br * (bp.PrevEpochTargetAttesters / inc)
+		r += rewardNumerator / (bp.CurrentEpoch / inc)
 	} else {
 		p += br
 	}
 	// Process head reward / penalty
 	if v.IsPrevEpochHeadAttester && !v.IsSlashed {
-		r += br * bp.PrevEpochHeadAttesters / bp.CurrentEpoch
+		inc := params.BeaconConfig().EffectiveBalanceIncrement
+		rewardNumerator := br * (bp.PrevEpochHeadAttesters / inc)
+		r += rewardNumerator / (bp.CurrentEpoch / inc)
 	} else {
 		p += br
 	}


@@ -31,4 +31,6 @@ type ChainStartedData struct {
 type InitializedData struct {
 	// StartTime is the time at which the chain started.
 	StartTime time.Time
+	// GenesisValidatorsRoot represents ssz.HashTreeRoot(state.validators).
+	GenesisValidatorsRoot []byte
 }


@@ -9,6 +9,7 @@ go_library(
         "randao.go",
         "rewards_penalties.go",
         "shuffle.go",
+        "signing_root.go",
         "slot_epoch.go",
         "validators.go",
     ],
@@ -17,9 +18,12 @@ go_library(
         "//beacon-chain:__subpackages__",
         "//shared/benchutil/benchmark_files:__subpackages__",
         "//shared/testutil:__pkg__",
+        "//shared/keystore:__pkg__",
+        "//shared/interop:__pkg__",
         "//slasher:__subpackages__",
         "//tools:__subpackages__",
         "//validator:__subpackages__",
+        "//endtoend/evaluators:__pkg__",
     ],
     deps = [
         "//beacon-chain/cache:go_default_library",
@@ -48,6 +52,7 @@ go_test(
         "randao_test.go",
         "rewards_penalties_test.go",
         "shuffle_test.go",
+        "signing_root_test.go",
         "slot_epoch_test.go",
         "validators_test.go",
     ],
@@ -64,6 +69,7 @@ go_test(
        "//shared/params:go_default_library",
         "//shared/sliceutil:go_default_library",
         "//shared/testutil:go_default_library",
+        "@com_github_google_gofuzz//:go_default_library",
         "@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
         "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
         "@com_github_prysmaticlabs_go_ssz//:go_default_library",


@@ -5,7 +5,6 @@ import (
 	"github.com/pkg/errors"
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
-	"github.com/prysmaticlabs/go-ssz"
 	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
 	"github.com/prysmaticlabs/prysm/shared/bls"
 	"github.com/prysmaticlabs/prysm/shared/hashutil"
@@ -124,15 +123,15 @@ func AggregateAttestation(a1 *ethpb.Attestation, a2 *ethpb.Attestation) (*ethpb.
 //    domain = get_domain(state, DOMAIN_BEACON_ATTESTER, compute_epoch_at_slot(slot))
 //    return bls_sign(privkey, hash_tree_root(slot), domain)
 func SlotSignature(state *stateTrie.BeaconState, slot uint64, privKey *bls.SecretKey) (*bls.Signature, error) {
-	d, err := Domain(state.Fork(), CurrentEpoch(state), params.BeaconConfig().DomainBeaconAttester)
+	d, err := Domain(state.Fork(), CurrentEpoch(state), params.BeaconConfig().DomainBeaconAttester, state.GenesisValidatorRoot())
 	if err != nil {
 		return nil, err
 	}
-	s, err := ssz.HashTreeRoot(slot)
+	s, err := ComputeSigningRoot(slot, d)
 	if err != nil {
 		return nil, err
 	}
-	return privKey.Sign(s[:], d), nil
+	return privKey.Sign(s[:]), nil
 }

 // IsAggregator returns true if the signature is from the input validator. The committee
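`ComputeSigningRoot`, which replaces the `hash_tree_root(...)` plus signature-domain pairing throughout this change, wraps the object root and the domain in a `SigningData` container and merkleizes that. Since `SigningData` holds exactly two 32-byte roots, its hash tree root reduces to a single sha256, which the following sketch exploits (the helper name is illustrative, not the Prysm signature, which takes an arbitrary SSZ object):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// computeSigningRoot sketches spec v0.11 compute_signing_root:
// hash_tree_root(SigningData(object_root, domain)). With exactly two 32-byte
// leaves, SSZ merkleization is one sha256 over their concatenation.
func computeSigningRoot(objectRoot, domain [32]byte) [32]byte {
	return sha256.Sum256(append(objectRoot[:], domain[:]...))
}

func main() {
	objectRoot := sha256.Sum256([]byte("some ssz object root"))
	var domain [32]byte
	domain[0] = 0x01 // e.g. an attester domain-type prefix
	fmt.Printf("%x\n", computeSigningRoot(objectRoot, domain))
}
```

This is also why every `Sign` call in the diff drops its second `domain` argument: the domain is folded into the message before signing instead of being passed to BLS.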


@@ -202,7 +202,7 @@ func TestAggregateAttestations(t *testing.T) {
 	atts := make([]*ethpb.Attestation, len(bl))
 	for i, b := range bl {
 		sk := bls.RandKey()
-		sig := sk.Sign([]byte("dummy_test_data"), 0 /*domain*/)
+		sig := sk.Sign([]byte("dummy_test_data"))
 		atts[i] = &ethpb.Attestation{
 			AggregationBits: b,
 			Data: nil,
@@ -258,15 +258,15 @@ func TestSlotSignature_Verify(t *testing.T) {
 		t.Fatal(err)
 	}
-	domain, err := helpers.Domain(state.Fork(), helpers.CurrentEpoch(state), params.BeaconConfig().DomainBeaconAttester)
+	domain, err := helpers.Domain(state.Fork(), helpers.CurrentEpoch(state), params.BeaconConfig().DomainBeaconAttester, state.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
-	msg, err := ssz.HashTreeRoot(slot)
+	msg, err := helpers.ComputeSigningRoot(slot, domain)
 	if err != nil {
 		t.Fatal(err)
 	}
-	if !sig.Verify(msg[:], pub, domain) {
+	if !sig.Verify(msg[:], pub) {
 		t.Error("Could not verify slot signature")
 	}
 }
@@ -278,7 +278,7 @@ func TestIsAggregator_True(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	sig := privKeys[0].Sign([]byte{}, 0)
+	sig := privKeys[0].Sign([]byte{'A'})
 	agg, err := helpers.IsAggregator(uint64(len(committee)), sig.Marshal())
 	if err != nil {
 		t.Fatal(err)
@@ -297,7 +297,7 @@ func TestIsAggregator_False(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	sig := privKeys[0].Sign([]byte{}, 0)
+	sig := privKeys[0].Sign([]byte{'A'})
 	agg, err := helpers.IsAggregator(uint64(len(committee)), sig.Marshal())
 	if err != nil {
 		t.Fatal(err)
@@ -310,11 +310,11 @@ func TestIsAggregator_False(t *testing.T) {
 func TestAggregateSignature_True(t *testing.T) {
 	pubkeys := make([]*bls.PublicKey, 0, 100)
 	atts := make([]*ethpb.Attestation, 0, 100)
-	msg := []byte("hello")
+	msg := bytesutil.ToBytes32([]byte("hello"))
 	for i := 0; i < 100; i++ {
 		priv := bls.RandKey()
 		pub := priv.PublicKey()
-		sig := priv.Sign(msg[:], 0)
+		sig := priv.Sign(msg[:])
 		pubkeys = append(pubkeys, pub)
 		att := &ethpb.Attestation{Signature: sig.Marshal()}
 		atts = append(atts, att)
@@ -323,7 +323,7 @@ func TestAggregateSignature_True(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	if !aggSig.VerifyAggregateCommon(pubkeys, bytesutil.ToBytes32(msg), 0) {
+	if !aggSig.FastAggregateVerify(pubkeys, msg) {
 		t.Error("Signature did not verify")
 	}
 }
@@ -335,7 +335,7 @@ func TestAggregateSignature_False(t *testing.T) {
 	for i := 0; i < 100; i++ {
 		priv := bls.RandKey()
 		pub := priv.PublicKey()
-		sig := priv.Sign(msg[:], 0)
+		sig := priv.Sign(msg[:])
 		pubkeys = append(pubkeys, pub)
 		att := &ethpb.Attestation{Signature: sig.Marshal()}
 		atts = append(atts, att)
@@ -344,7 +344,7 @@ func TestAggregateSignature_False(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	if aggSig.VerifyAggregateCommon(pubkeys, bytesutil.ToBytes32(msg), 0) {
+	if aggSig.FastAggregateVerify(pubkeys, bytesutil.ToBytes32(msg)) {
 		t.Error("Signature not suppose to verify")
 	}
 }


@@ -181,7 +181,10 @@ type CommitteeAssignmentContainer struct {
 // 2. Compute all committees.
 // 3. Determine the attesting slot for each committee.
 // 4. Construct a map of validator indices pointing to the respective committees.
-func CommitteeAssignments(state *stateTrie.BeaconState, epoch uint64) (map[uint64]*CommitteeAssignmentContainer, map[uint64]uint64, error) {
+func CommitteeAssignments(
+	state *stateTrie.BeaconState,
+	epoch uint64,
+) (map[uint64]*CommitteeAssignmentContainer, map[uint64][]uint64, error) {
 	nextEpoch := NextEpoch(state)
 	if epoch > nextEpoch {
 		return nil, nil, fmt.Errorf(
@@ -191,9 +194,11 @@ func CommitteeAssignments(state *stateTrie.BeaconState, epoch uint64) (map[uint6
 		)
 	}
-	// Track which slot has which proposer.
+	// We determine the slots in which proposers are supposed to act.
+	// Some validators may need to propose multiple times per epoch, so
+	// we use a map of proposer idx -> []slot to keep track of this possibility.
 	startSlot := StartSlot(epoch)
-	proposerIndexToSlot := make(map[uint64]uint64)
+	proposerIndexToSlots := make(map[uint64][]uint64)
 	for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
 		if err := state.SetSlot(slot); err != nil {
 			return nil, nil, err
@@ -202,7 +207,7 @@ func CommitteeAssignments(state *stateTrie.BeaconState, epoch uint64) (map[uint6
 		if err != nil {
 			return nil, nil, errors.Wrapf(err, "could not check proposer at slot %d", state.Slot())
 		}
-		proposerIndexToSlot[i] = slot
+		proposerIndexToSlots[i] = append(proposerIndexToSlots[i], slot)
 	}
 	activeValidatorIndices, err := ActiveValidatorIndices(state, epoch)
@@ -235,85 +240,7 @@ func CommitteeAssignments(state *stateTrie.BeaconState, epoch uint64) (map[uint6
 		}
 	}
-	return validatorIndexToCommittee, proposerIndexToSlot, nil
-}
-// CommitteeAssignment is used to query committee assignment from
-// current and previous epoch.
-//
-// Deprecated: Consider using CommitteeAssignments, especially when computing more than one
-// validator assignment as this method is O(n^2) in computational complexity. This method exists to
-// ensure spec definition conformance and otherwise should probably not be used.
-//
-// Spec pseudocode definition:
-// def get_committee_assignment(state: BeaconState,
-//                              epoch: Epoch,
-//                              validator_index: ValidatorIndex
-//                              ) -> Optional[Tuple[Sequence[ValidatorIndex], CommitteeIndex, Slot]]:
-//     """
-//     Return the committee assignment in the ``epoch`` for ``validator_index``.
-//     ``assignment`` returned is a tuple of the following form:
-//         * ``assignment[0]`` is the list of validators in the committee
-//         * ``assignment[1]`` is the index to which the committee is assigned
-//         * ``assignment[2]`` is the slot at which the committee is assigned
-//     Return None if no assignment.
-//     """
-//     next_epoch = get_current_epoch(state) + 1
-//     assert epoch <= next_epoch
-//
-//     start_slot = compute_start_slot_at_epoch(epoch)
-//     for slot in range(start_slot, start_slot + SLOTS_PER_EPOCH):
-//         for index in range(get_committee_count_at_slot(state, Slot(slot))):
-//             committee = get_beacon_committee(state, Slot(slot), CommitteeIndex(index))
-//             if validator_index in committee:
-//                 return committee, CommitteeIndex(index), Slot(slot)
-//     return None
-func CommitteeAssignment(
-	state *stateTrie.BeaconState,
-	epoch uint64,
-	validatorIndex uint64,
-) ([]uint64, uint64, uint64, uint64, error) {
-	nextEpoch := NextEpoch(state)
-	if epoch > nextEpoch {
-		return nil, 0, 0, 0, fmt.Errorf(
-			"epoch %d can't be greater than next epoch %d",
-			epoch, nextEpoch)
-	}
-	// Track which slot has which proposer.
-	startSlot := StartSlot(epoch)
-	proposerIndexToSlot := make(map[uint64]uint64)
-	for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
-		if err := state.SetSlot(slot); err != nil {
-			return nil, 0, 0, 0, err
-		}
-		i, err := BeaconProposerIndex(state)
-		if err != nil {
-			return nil, 0, 0, 0, errors.Wrapf(err, "could not check proposer at slot %d", state.Slot())
-		}
-		proposerIndexToSlot[i] = slot
-	}
-	activeValidatorIndices, err := ActiveValidatorIndices(state, epoch)
-	if err != nil {
-		return nil, 0, 0, 0, err
-	}
-	for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
-		countAtSlot := SlotCommitteeCount(uint64(len(activeValidatorIndices)))
-		for i := uint64(0); i < countAtSlot; i++ {
-			committee, err := BeaconCommitteeFromState(state, slot, i)
-			if err != nil {
-				return nil, 0, 0, 0, errors.Wrapf(err, "could not get crosslink committee at slot %d", slot)
-			}
-			for _, v := range committee {
-				if validatorIndex == v {
-					proposerSlot, _ := proposerIndexToSlot[v]
-					return committee, i, slot, proposerSlot, nil
-				}
-			}
-		}
-	}
-	return []uint64{}, 0, 0, 0, fmt.Errorf("validator with index %d not found in assignments", validatorIndex)
+	return validatorIndexToCommittee, proposerIndexToSlots, nil
 }
 // VerifyBitfieldLength verifies that a bitfield length matches the given committee size.
@@ -409,7 +336,6 @@ func UpdateCommitteeCache(state *stateTrie.BeaconState, epoch uint64) error {
 // UpdateProposerIndicesInCache updates proposer indices entry of the committee cache.
 func UpdateProposerIndicesInCache(state *stateTrie.BeaconState, epoch uint64) error {
 	indices, err := ActiveValidatorIndices(state, epoch)
 	if err != nil {
 		return nil


@@ -4,7 +4,6 @@ import (
 	"fmt"
 	"reflect"
 	"strconv"
-	"strings"
 	"testing"
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
@@ -193,160 +192,6 @@ func TestVerifyBitfieldLength_OK(t *testing.T) {
 	}
 }
-func TestCommitteeAssignment_CanRetrieve(t *testing.T) {
-	ClearCache()
-	// Initialize test with 128 validators, each slot and each index gets 2 validators.
-	validators := make([]*ethpb.Validator, 2*params.BeaconConfig().SlotsPerEpoch)
-	for i := 0; i < len(validators); i++ {
-		validators[i] = &ethpb.Validator{
-			ExitEpoch: params.BeaconConfig().FarFutureEpoch,
-		}
-	}
-	state, err := beaconstate.InitializeFromProto(&pb.BeaconState{
-		Validators: validators,
-		Slot: params.BeaconConfig().SlotsPerEpoch,
-		RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
-	})
-	if err != nil {
-		t.Fatal(err)
-	}
-	tests := []struct {
-		index uint64
-		slot uint64
-		committee []uint64
-		committeeIndex uint64
-		isProposer bool
-		proposerSlot uint64
-	}{
-		{
-			index: 0,
-			slot: 78,
-			committee: []uint64{0, 38},
-			committeeIndex: 0,
-			isProposer: false,
-		},
-		{
-			index: 1,
-			slot: 71,
-			committee: []uint64{1, 4},
-			committeeIndex: 0,
-			isProposer: true,
-			proposerSlot: 79,
-		},
-		{
-			index: 11,
-			slot: 90,
-			committee: []uint64{31, 11},
-			committeeIndex: 0,
-			isProposer: false,
-		},
-	}
-	for i, tt := range tests {
-		t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
-			committee, committeeIndex, slot, proposerSlot, err := CommitteeAssignment(state, tt.slot/params.BeaconConfig().SlotsPerEpoch, tt.index)
-			if err != nil {
-				t.Fatalf("failed to execute NextEpochCommitteeAssignment: %v", err)
-			}
-			if committeeIndex != tt.committeeIndex {
-				t.Errorf("wanted committeeIndex %d, got committeeIndex %d for validator index %d",
-					tt.committeeIndex, committeeIndex, tt.index)
-			}
-			if slot != tt.slot {
-				t.Errorf("wanted slot %d, got slot %d for validator index %d",
-					tt.slot, slot, tt.index)
-			}
-			if proposerSlot != tt.proposerSlot {
-				t.Errorf("wanted proposer slot %d, got proposer slot %d for validator index %d",
-					tt.proposerSlot, proposerSlot, tt.index)
-			}
-			if !reflect.DeepEqual(committee, tt.committee) {
-				t.Errorf("wanted committee %v, got committee %v for validator index %d",
-					tt.committee, committee, tt.index)
-			}
-			if proposerSlot != tt.proposerSlot {
-				t.Errorf("wanted proposer slot slot %d, got slot %d for validator index %d",
-					tt.slot, slot, tt.index)
-			}
-		})
-	}
-}
-func TestCommitteeAssignment_CantFindValidator(t *testing.T) {
-	ClearCache()
-	validators := make([]*ethpb.Validator, 1)
-	for i := 0; i < len(validators); i++ {
-		validators[i] = &ethpb.Validator{
-			ExitEpoch: params.BeaconConfig().FarFutureEpoch,
-		}
-	}
-	state, err := beaconstate.InitializeFromProto(&pb.BeaconState{
-		Validators: validators,
-		Slot: params.BeaconConfig().SlotsPerEpoch,
-		RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
-	})
-	if err != nil {
-		t.Fatal(err)
-	}
-	index := uint64(10000)
-	_, _, _, _, err = CommitteeAssignment(state, 1, index)
-	if err != nil && !strings.Contains(err.Error(), "not found in assignments") {
-		t.Errorf("Wanted 'not found in assignments', received %v", err)
-	}
-}
-// Test helpers.CommitteeAssignments against the results of helpers.CommitteeAssignment by validator
-// index. Warning: this test is a bit slow!
-func TestCommitteeAssignments_AgreesWithSpecDefinitionMethod(t *testing.T) {
-	ClearCache()
-	// Initialize test with 256 validators, each slot and each index gets 4 validators.
-	validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
-	for i := 0; i < len(validators); i++ {
-		validators[i] = &ethpb.Validator{
-			ExitEpoch: params.BeaconConfig().FarFutureEpoch,
-		}
-	}
-	state, err := beaconstate.InitializeFromProto(&pb.BeaconState{
-		Validators: validators,
-		Slot: params.BeaconConfig().SlotsPerEpoch,
-		RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
-	})
-	if err != nil {
-		t.Fatal(err)
-	}
-	// Test for 2 epochs.
-	for epoch := uint64(0); epoch < 2; epoch++ {
-		state, err := beaconstate.InitializeFromProto(state.CloneInnerState())
-		if err != nil {
-			t.Fatal(err)
-		}
-		assignments, proposers, err := CommitteeAssignments(state, epoch)
-		if err != nil {
-			t.Fatal(err)
-		}
-		for i := uint64(0); int(i) < len(validators); i++ {
-			committee, committeeIndex, slot, proposerSlot, err := CommitteeAssignment(state, epoch, i)
-			if err != nil {
-				t.Fatal(err)
-			}
-			if !reflect.DeepEqual(committee, assignments[i].Committee) {
-				t.Errorf("Computed different committees for validator %d", i)
-			}
-			if committeeIndex != assignments[i].CommitteeIndex {
-				t.Errorf("Computed different committee index for validator %d", i)
-			}
-			if slot != assignments[i].AttesterSlot {
-				t.Errorf("Computed different attesting slot for validator %d", i)
-			}
-			if proposerSlot != proposers[i] {
-				t.Errorf("Computed different proposing slot for validator %d", i)
-			}
-		}
-	}
-}
 func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
 	// Initialize test with 256 validators, each slot and each index gets 4 validators.
 	validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
@@ -412,7 +257,7 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
 	for i, tt := range tests {
 		t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
 			ClearCache()
-			validatorIndexToCommittee, proposerIndexToSlot, err := CommitteeAssignments(state, SlotToEpoch(tt.slot))
+			validatorIndexToCommittee, proposerIndexToSlots, err := CommitteeAssignments(state, SlotToEpoch(tt.slot))
 			if err != nil {
 				t.Fatalf("failed to determine CommitteeAssignments: %v", err)
 			}
@@ -425,9 +270,9 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
 				t.Errorf("wanted slot %d, got slot %d for validator index %d",
 					tt.slot, cac.AttesterSlot, tt.index)
 			}
-			if proposerIndexToSlot[tt.index] != tt.proposerSlot {
+			if len(proposerIndexToSlots[tt.index]) > 0 && proposerIndexToSlots[tt.index][0] != tt.proposerSlot {
 				t.Errorf("wanted proposer slot %d, got proposer slot %d for validator index %d",
-					tt.proposerSlot, proposerIndexToSlot[tt.index], tt.index)
+					tt.proposerSlot, proposerIndexToSlots[tt.index][0], tt.index)
 			}
 			if !reflect.DeepEqual(cac.Committee, tt.committee) {
 				t.Errorf("wanted committee %v, got committee %v for validator index %d",


@@ -2,6 +2,7 @@ package helpers
 import (
 	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
+	"github.com/prysmaticlabs/prysm/shared/params"
 )
 // TotalBalance returns the total amount at stake in Gwei
@@ -10,9 +11,10 @@ import (
 // Spec pseudocode definition:
 // def get_total_balance(state: BeaconState, indices: Set[ValidatorIndex]) -> Gwei:
 //     """
-//     Return the combined effective balance of the ``indices``. (1 Gwei minimum to avoid divisions by zero.)
+//     Return the combined effective balance of the ``indices``.
+//     ``EFFECTIVE_BALANCE_INCREMENT`` Gwei minimum to avoid divisions by zero.
 //     """
-//     return Gwei(max(1, sum([state.validators[index].effective_balance for index in indices])))
+//     return Gwei(max(EFFECTIVE_BALANCE_INCREMENT, sum([state.validators[index].effective_balance for index in indices])))
 func TotalBalance(state *stateTrie.BeaconState, indices []uint64) uint64 {
 	total := uint64(0)
@@ -24,9 +26,9 @@ func TotalBalance(state *stateTrie.BeaconState, indices []uint64) uint64 {
 		total += val.EffectiveBalance()
 	}
-	// Return 1 Gwei minimum to avoid divisions by zero
+	// Return EFFECTIVE_BALANCE_INCREMENT to avoid divisions by zero.
 	if total == 0 {
-		return 1
+		return params.BeaconConfig().EffectiveBalanceIncrement
 	}
 	return total


@@ -27,14 +27,14 @@ func TestTotalBalance_OK(t *testing.T) {
 	}
 }
-func TestTotalBalance_ReturnsOne(t *testing.T) {
+func TestTotalBalance_ReturnsEffectiveBalanceIncrement(t *testing.T) {
 	state, err := beaconstate.InitializeFromProto(&pb.BeaconState{Validators: []*ethpb.Validator{}})
 	if err != nil {
 		t.Fatal(err)
 	}
 	balance := TotalBalance(state, []uint64{})
-	wanted := uint64(1)
+	wanted := params.BeaconConfig().EffectiveBalanceIncrement
 	if balance != wanted {
 		t.Errorf("Incorrect TotalBalance. Wanted: %d, got: %d", wanted, balance)


@@ -0,0 +1,146 @@
package helpers
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-ssz"
p2ppb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
// ForkVersionByteLength length of fork version byte array.
const ForkVersionByteLength = 4
// DomainByteLength length of domain byte array.
const DomainByteLength = 4
// ErrSigFailedToVerify is returned when a signature of a block object (i.e. attestation,
// slashing, exit, etc.) fails to verify.
var ErrSigFailedToVerify = errors.New("signature did not verify")
// ComputeSigningRoot computes the root of the object by calculating the root of the object domain tree.
//
// Spec pseudocode definition:
// def compute_signing_root(ssz_object: SSZObject, domain: Domain) -> Root:
// """
// Return the signing root of an object by calculating the root of the object-domain tree.
// """
// domain_wrapped_object = SigningRoot(
// object_root=hash_tree_root(ssz_object),
// domain=domain,
// )
// return hash_tree_root(domain_wrapped_object)
func ComputeSigningRoot(object interface{}, domain []byte) ([32]byte, error) {
objRoot, err := ssz.HashTreeRoot(object)
if err != nil {
return [32]byte{}, err
}
container := &p2ppb.SigningRoot{
ObjectRoot: objRoot[:],
Domain: domain,
}
return ssz.HashTreeRoot(container)
}
// VerifySigningRoot verifies the signing root of an object given its public key, signature and domain.
func VerifySigningRoot(obj interface{}, pub []byte, signature []byte, domain []byte) error {
publicKey, err := bls.PublicKeyFromBytes(pub)
if err != nil {
return errors.Wrap(err, "could not convert bytes to public key")
}
sig, err := bls.SignatureFromBytes(signature)
if err != nil {
return errors.Wrap(err, "could not convert bytes to signature")
}
root, err := ComputeSigningRoot(obj, domain)
if err != nil {
return errors.Wrap(err, "could not compute signing root")
}
if !sig.Verify(root[:], publicKey) {
return ErrSigFailedToVerify
}
return nil
}
// ComputeDomain returns the domain version for BLS private keys to sign and verify. When no fork
// version or genesis validators root is supplied, the genesis fork version and a zeroed root are used.
//
// def compute_domain(domain_type: DomainType, fork_version: Version=None, genesis_validators_root: Root=None) -> Domain:
// """
// Return the domain for the ``domain_type`` and ``fork_version``.
// """
// if fork_version is None:
// fork_version = GENESIS_FORK_VERSION
// if genesis_validators_root is None:
// genesis_validators_root = Root() # all bytes zero by default
// fork_data_root = compute_fork_data_root(fork_version, genesis_validators_root)
// return Domain(domain_type + fork_data_root[:28])
func ComputeDomain(domainType [DomainByteLength]byte, forkVersion []byte, genesisValidatorsRoot []byte) ([]byte, error) {
if forkVersion == nil {
forkVersion = params.BeaconConfig().GenesisForkVersion
}
if genesisValidatorsRoot == nil {
genesisValidatorsRoot = params.BeaconConfig().ZeroHash[:]
}
forkBytes := [ForkVersionByteLength]byte{}
copy(forkBytes[:], forkVersion)
forkDataRoot, err := computeForkDataRoot(forkBytes[:], genesisValidatorsRoot)
if err != nil {
return nil, err
}
return domain(domainType, forkDataRoot[:]), nil
}
// This returns the bls domain given by the domain type and fork data root.
func domain(domainType [DomainByteLength]byte, forkDataRoot []byte) []byte {
b := []byte{}
b = append(b, domainType[:4]...)
b = append(b, forkDataRoot[:28]...)
return b
}
// computeForkDataRoot returns the 32-byte fork data root for the ``current_version`` and ``genesis_validators_root``.
// This is used primarily in signature domains to avoid collisions across forks/chains.
//
// Spec pseudocode definition:
// def compute_fork_data_root(current_version: Version, genesis_validators_root: Root) -> Root:
// """
// Return the 32-byte fork data root for the ``current_version`` and ``genesis_validators_root``.
// This is used primarily in signature domains to avoid collisions across forks/chains.
// """
// return hash_tree_root(ForkData(
// current_version=current_version,
// genesis_validators_root=genesis_validators_root,
// ))
func computeForkDataRoot(version []byte, root []byte) ([32]byte, error) {
r, err := ssz.HashTreeRoot(&pb.ForkData{
CurrentVersion: version,
GenesisValidatorsRoot: root,
})
if err != nil {
return [32]byte{}, err
}
return r, nil
}
// ComputeForkDigest returns the 4-byte fork digest for the current version and genesis validators root.
//
// Spec pseudocode definition:
// def compute_fork_digest(current_version: Version, genesis_validators_root: Root) -> ForkDigest:
// """
// Return the 4-byte fork digest for the ``current_version`` and ``genesis_validators_root``.
// This is a digest primarily used for domain separation on the p2p layer.
// 4-bytes suffices for practical separation of forks/chains.
// """
// return ForkDigest(compute_fork_data_root(current_version, genesis_validators_root)[:4])
func ComputeForkDigest(version []byte, genesisValidatorsRoot []byte) ([4]byte, error) {
dataRoot, err := computeForkDataRoot(version, genesisValidatorsRoot)
if err != nil {
return [4]byte{}, err
}
return bytesutil.ToBytes4(dataRoot[:]), nil
}


@@ -0,0 +1,86 @@
package helpers
import (
"bytes"
"testing"
fuzz "github.com/google/gofuzz"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
ethereum_beacon_p2p_v1 "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
)
func TestSigningRoot_ComputeOK(t *testing.T) {
emptyBlock := &ethpb.BeaconBlock{}
_, err := ComputeSigningRoot(emptyBlock, []byte{'T', 'E', 'S', 'T'})
if err != nil {
t.Errorf("Could not compute signing root of block: %v", err)
}
}
func TestComputeDomain_OK(t *testing.T) {
tests := []struct {
epoch uint64
domainType [4]byte
domain []byte
}{
{epoch: 1, domainType: [4]byte{4, 0, 0, 0}, domain: []byte{4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}},
{epoch: 2, domainType: [4]byte{4, 0, 0, 0}, domain: []byte{4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}},
{epoch: 2, domainType: [4]byte{5, 0, 0, 0}, domain: []byte{5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}},
{epoch: 3, domainType: [4]byte{4, 0, 0, 0}, domain: []byte{4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}},
{epoch: 3, domainType: [4]byte{5, 0, 0, 0}, domain: []byte{5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}},
}
for _, tt := range tests {
if !bytes.Equal(domain(tt.domainType, params.BeaconConfig().ZeroHash[:]), tt.domain) {
t.Errorf("wanted domain version: %d, got: %d", tt.domain, domain(tt.domainType, params.BeaconConfig().ZeroHash[:]))
}
}
}
func TestComputeForkDigest_OK(t *testing.T) {
tests := []struct {
version []byte
root [32]byte
result [4]byte
}{
{version: []byte{'A', 'B', 'C', 'D'}, root: [32]byte{'i', 'o', 'p'}, result: [4]byte{0x69, 0x5c, 0x26, 0x47}},
{version: []byte{'i', 'm', 'n', 'a'}, root: [32]byte{'z', 'a', 'b'}, result: [4]byte{0x1c, 0x38, 0x84, 0x58}},
{version: []byte{'b', 'w', 'r', 't'}, root: [32]byte{'r', 'd', 'c'}, result: [4]byte{0x83, 0x34, 0x38, 0x88}},
}
for _, tt := range tests {
digest, err := ComputeForkDigest(tt.version, tt.root[:])
if err != nil {
t.Error(err)
}
if digest != tt.result {
t.Errorf("wanted domain version: %#x, got: %#x", digest, tt.result)
}
}
}
func TestFuzzverifySigningRoot_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
pubkey := [48]byte{}
sig := [96]byte{}
domain := [4]byte{}
p := []byte{}
s := []byte{}
d := []byte{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(&pubkey)
fuzzer.Fuzz(&sig)
fuzzer.Fuzz(&domain)
fuzzer.Fuzz(state)
fuzzer.Fuzz(&p)
fuzzer.Fuzz(&s)
fuzzer.Fuzz(&d)
if err := VerifySigningRoot(state, pubkey[:], sig[:], domain[:]); err != nil {
t.Log(err)
}
if err := VerifySigningRoot(state, p, s, d); err != nil {
t.Log(err)
}
}
}


@@ -89,15 +89,17 @@ func SlotsSinceEpochStarts(slot uint64) uint64 {
 	return slot - StartSlot(SlotToEpoch(slot))
 }
-// Allow for slots "from the future" within a certain tolerance.
-const timeShiftTolerance = 10 // ms
+// TimeShiftTolerance specifies the tolerance threshold for slots "from the future".
+const TimeShiftTolerance = 500 * time.Millisecond // ms
 // VerifySlotTime validates the input slot is not from the future.
-func VerifySlotTime(genesisTime uint64, slot uint64) error {
-	slotTime := genesisTime + slot*params.BeaconConfig().SecondsPerSlot
-	currentTime := uint64(roughtime.Now().Unix())
-	if slotTime > currentTime+timeShiftTolerance {
-		return fmt.Errorf("could not process slot from the future, slot time %d > current time %d", slotTime, currentTime)
+func VerifySlotTime(genesisTime uint64, slot uint64, timeTolerance time.Duration) error {
+	// denominate everything in milliseconds
+	slotTime := 1000 * (genesisTime + slot*params.BeaconConfig().SecondsPerSlot)
+	currentTime := 1000 * uint64(roughtime.Now().Unix())
+	tolerance := uint64(timeTolerance.Milliseconds())
+	if slotTime > currentTime+tolerance {
+		return fmt.Errorf("could not process slot from the future, slot time(ms) %d > current time(ms) %d", slotTime, currentTime)
 	}
 	return nil
 }


@@ -235,18 +235,16 @@ func ComputeProposerIndex(validators []*ethpb.Validator, activeIndices []uint64,
 // Domain returns the domain version for BLS private key to sign and verify.
 //
 // Spec pseudocode definition:
-// def get_domain(state: BeaconState,
-//                domain_type: int,
-//                message_epoch: Epoch=None) -> int:
+// def get_domain(state: BeaconState, domain_type: DomainType, epoch: Epoch=None) -> Domain:
 //     """
 //     Return the signature domain (fork version concatenated with domain type) of a message.
 //     """
-//     epoch = get_current_epoch(state) if message_epoch is None else message_epoch
+//     epoch = get_current_epoch(state) if epoch is None else epoch
 //     fork_version = state.fork.previous_version if epoch < state.fork.epoch else state.fork.current_version
-//     return bls_domain(domain_type, fork_version)
-func Domain(fork *pb.Fork, epoch uint64, domainType [bls.DomainByteLength]byte) (uint64, error) {
+//     return compute_domain(domain_type, fork_version, state.genesis_validators_root)
+func Domain(fork *pb.Fork, epoch uint64, domainType [bls.DomainByteLength]byte, genesisRoot []byte) ([]byte, error) {
 	if fork == nil {
-		return 0, errors.New("nil fork or domain type")
+		return []byte{}, errors.New("nil fork or domain type")
 	}
 	var forkVersion []byte
 	if epoch < fork.Epoch {
@@ -255,11 +253,11 @@ func Domain(fork *pb.Fork, epoch uint64, domainType [bls.DomainByteLength]byte)
 		forkVersion = fork.CurrentVersion
 	}
 	if len(forkVersion) != 4 {
-		return 0, errors.New("fork version length is not 4 byte")
+		return []byte{}, errors.New("fork version length is not 4 byte")
 	}
 	var forkVersionArray [4]byte
 	copy(forkVersionArray[:], forkVersion[:4])
-	return bls.Domain(domainType, forkVersionArray), nil
+	return ComputeDomain(domainType, forkVersionArray[:], genesisRoot)
 }
 // IsEligibleForActivationQueue checks if the validator is eligible to


@@ -1,6 +1,7 @@
 package helpers
 import (
+	"bytes"
 	"reflect"
 	"testing"
@@ -243,22 +244,22 @@ func TestDomain_OK(t *testing.T) {
 	}
 	tests := []struct {
 		epoch uint64
-		domainType uint64
-		version uint64
+		domainType [4]byte
+		result []byte
 	}{
-		{epoch: 1, domainType: 4, version: 144115188075855876},
-		{epoch: 2, domainType: 4, version: 144115188075855876},
-		{epoch: 2, domainType: 5, version: 144115188075855877},
-		{epoch: 3, domainType: 4, version: 216172782113783812},
-		{epoch: 3, domainType: 5, version: 216172782113783813},
+		{epoch: 1, domainType: bytesutil.ToBytes4(bytesutil.Bytes4(4)), result: bytesutil.ToBytes(947067381421703172, 32)},
+		{epoch: 2, domainType: bytesutil.ToBytes4(bytesutil.Bytes4(4)), result: bytesutil.ToBytes(947067381421703172, 32)},
+		{epoch: 2, domainType: bytesutil.ToBytes4(bytesutil.Bytes4(5)), result: bytesutil.ToBytes(947067381421703173, 32)},
+		{epoch: 3, domainType: bytesutil.ToBytes4(bytesutil.Bytes4(4)), result: bytesutil.ToBytes(9369798235163459588, 32)},
+		{epoch: 3, domainType: bytesutil.ToBytes4(bytesutil.Bytes4(5)), result: bytesutil.ToBytes(9369798235163459589, 32)},
 	}
 	for _, tt := range tests {
-		domain, err := Domain(state.Fork, tt.epoch, bytesutil.ToBytes4(bytesutil.Bytes4(tt.domainType)))
+		domain, err := Domain(state.Fork, tt.epoch, tt.domainType, nil)
 		if err != nil {
 			t.Fatal(err)
 		}
-		if domain != tt.version {
-			t.Errorf("wanted domain version: %d, got: %d", tt.version, domain)
+		if !bytes.Equal(domain[:8], tt.result[:8]) {
+			t.Errorf("wanted domain version: %d, got: %d", tt.result, domain)
 		}
 	}
 }

View File

@@ -57,6 +57,7 @@ go_test(
         "//beacon-chain/core/blocks:go_default_library",
         "//beacon-chain/core/helpers:go_default_library",
         "//beacon-chain/state:go_default_library",
+        "//beacon-chain/state/stateutil:go_default_library",
         "//proto/beacon/p2p/v1:go_default_library",
         "//shared/attestationutil:go_default_library",
         "//shared/bls:go_default_library",

View File

@@ -27,9 +27,14 @@ func TestBenchmarkExecuteStateTransition(t *testing.T) {
 		t.Fatal(err)
 	}
-	if _, err := state.ExecuteStateTransition(context.Background(), beaconState, block); err != nil {
+	oldSlot := beaconState.Slot()
+	beaconState, err = state.ExecuteStateTransition(context.Background(), beaconState, block)
+	if err != nil {
 		t.Fatalf("failed to process block, benchmarks will fail: %v", err)
 	}
+	if oldSlot == beaconState.Slot() {
+		t.Fatal("Expected slots to be different")
+	}
 }
 func BenchmarkExecuteStateTransition_FullBlock(b *testing.B) {

View File

@@ -12,6 +12,7 @@ import (
 	"github.com/prysmaticlabs/go-ssz"
 	b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
 	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
+	"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/params"
 	"github.com/prysmaticlabs/prysm/shared/trieutil"
@@ -137,10 +138,16 @@ func OptimizedGenesisBeaconState(genesisTime uint64, preState *stateTrie.BeaconS
 	slashings := make([]uint64, params.BeaconConfig().EpochsPerSlashingsVector)
+	genesisValidatorsRoot, err := stateutil.ValidatorRegistryRoot(preState.Validators())
+	if err != nil {
+		return nil, errors.Wrapf(err, "could not hash tree root genesis validators %v", err)
+	}
 	state := &pb.BeaconState{
 		// Misc fields.
 		Slot:        0,
 		GenesisTime: genesisTime,
+		GenesisValidatorsRoot: genesisValidatorsRoot[:],
 		Fork: &pb.Fork{
 			PreviousVersion: params.BeaconConfig().GenesisForkVersion,
View File

@@ -15,6 +15,7 @@ import (
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
 	beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
+	"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/attestationutil"
 	"github.com/prysmaticlabs/prysm/shared/bls"
@@ -75,7 +76,7 @@ func TestExecuteStateTransition_FullProcess(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	parentRoot, err := ssz.HashTreeRoot(beaconState.LatestBlockHeader())
+	parentRoot, err := stateutil.BlockHeaderRoot(beaconState.LatestBlockHeader())
 	if err != nil {
 		t.Error(err)
 	}
@@ -93,8 +94,9 @@ func TestExecuteStateTransition_FullProcess(t *testing.T) {
 	}
 	block := &ethpb.SignedBeaconBlock{
 		Block: &ethpb.BeaconBlock{
-			Slot:       beaconState.Slot() + 1,
-			ParentRoot: parentRoot[:],
+			ProposerIndex: 74,
+			Slot:          beaconState.Slot() + 1,
+			ParentRoot:    parentRoot[:],
 			Body: &ethpb.BeaconBlockBody{
 				RandaoReveal: randaoReveal,
 				Eth1Data:     eth1Data,
@@ -146,10 +148,6 @@ func TestProcessBlock_IncorrectProposerSlashing(t *testing.T) {
 	}
 	block.Block.Body.ProposerSlashings = []*ethpb.ProposerSlashing{slashing}
-	blockRoot, err := ssz.HashTreeRoot(block.Block)
-	if err != nil {
-		t.Fatal(err)
-	}
 	if err := beaconState.SetSlot(beaconState.Slot() + 1); err != nil {
 		t.Fatal(err)
 	}
@@ -160,11 +158,15 @@ func TestProcessBlock_IncorrectProposerSlashing(t *testing.T) {
 	if err := beaconState.SetSlot(beaconState.Slot() - 1); err != nil {
 		t.Fatal(err)
 	}
-	domain, err := helpers.Domain(beaconState.Fork(), helpers.CurrentEpoch(beaconState), params.BeaconConfig().DomainBeaconProposer)
+	domain, err := helpers.Domain(beaconState.Fork(), helpers.CurrentEpoch(beaconState), params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
-	sig := privKeys[proposerIdx].Sign(blockRoot[:], domain)
+	root, err := helpers.ComputeSigningRoot(block.Block, domain)
+	if err != nil {
+		t.Fatal(err)
+	}
+	sig := privKeys[proposerIdx].Sign(root[:])
 	block.Signature = sig.Marshal()
 	beaconState, err = state.ProcessSlots(context.Background(), beaconState, 1)
@@ -194,10 +196,6 @@ func TestProcessBlock_IncorrectProcessBlockAttestations(t *testing.T) {
 		t.Fatal(err)
 	}
 	block.Block.Body.Attestations = []*ethpb.Attestation{att}
-	blockRoot, err := ssz.HashTreeRoot(block.Block)
-	if err != nil {
-		t.Fatal(err)
-	}
 	if err := beaconState.SetSlot(beaconState.Slot() + 1); err != nil {
 		t.Fatal(err)
 	}
@@ -208,11 +206,15 @@ func TestProcessBlock_IncorrectProcessBlockAttestations(t *testing.T) {
 	if err := beaconState.SetSlot(beaconState.Slot() - 1); err != nil {
 		t.Fatal(err)
 	}
-	domain, err := helpers.Domain(beaconState.Fork(), helpers.CurrentEpoch(beaconState), params.BeaconConfig().DomainBeaconProposer)
+	domain, err := helpers.Domain(beaconState.Fork(), helpers.CurrentEpoch(beaconState), params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
-	sig := privKeys[proposerIdx].Sign(blockRoot[:], domain)
+	root, err := helpers.ComputeSigningRoot(block.Block, domain)
+	if err != nil {
+		t.Fatal(err)
+	}
+	sig := privKeys[proposerIdx].Sign(root[:])
 	block.Signature = sig.Marshal()
 	beaconState, err = state.ProcessSlots(context.Background(), beaconState, 1)
@@ -232,16 +234,17 @@ func TestProcessBlock_IncorrectProcessExits(t *testing.T) {
 	proposerSlashings := []*ethpb.ProposerSlashing{
 		{
-			ProposerIndex: 3,
 			Header_1: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: 1,
+					ProposerIndex: 3,
+					Slot:          1,
 				},
 				Signature: []byte("A"),
 			},
 			Header_2: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: 1,
+					ProposerIndex: 3,
+					Slot:          1,
 				},
 				Signature: []byte("B"),
 			},
@@ -378,6 +381,7 @@ func TestProcessBlock_PassesProcessingConditions(t *testing.T) {
 		beaconState.Fork(),
 		currentEpoch,
 		params.BeaconConfig().DomainBeaconProposer,
+		beaconState.GenesisValidatorRoot(),
 	)
 	if err != nil {
 		t.Fatal(err)
@@ -385,33 +389,34 @@ func TestProcessBlock_PassesProcessingConditions(t *testing.T) {
 	header1 := &ethpb.SignedBeaconBlockHeader{
 		Header: &ethpb.BeaconBlockHeader{
-			Slot:      1,
-			StateRoot: []byte("A"),
+			ProposerIndex: proposerSlashIdx,
+			Slot:          1,
+			StateRoot:     []byte("A"),
 		},
 	}
-	signingRoot, err := ssz.HashTreeRoot(header1.Header)
+	root, err := helpers.ComputeSigningRoot(header1.Header, domain)
 	if err != nil {
-		t.Errorf("Could not get signing root of beacon block header: %v", err)
+		t.Fatal(err)
 	}
-	header1.Signature = privKeys[proposerSlashIdx].Sign(signingRoot[:], domain).Marshal()[:]
+	header1.Signature = privKeys[proposerSlashIdx].Sign(root[:]).Marshal()[:]
 	header2 := &ethpb.SignedBeaconBlockHeader{
 		Header: &ethpb.BeaconBlockHeader{
-			Slot:      1,
-			StateRoot: []byte("B"),
+			ProposerIndex: proposerSlashIdx,
+			Slot:          1,
+			StateRoot:     []byte("B"),
 		},
 	}
-	signingRoot, err = ssz.HashTreeRoot(header2.Header)
+	root, err = helpers.ComputeSigningRoot(header2.Header, domain)
 	if err != nil {
-		t.Errorf("Could not get signing root of beacon block header: %v", err)
+		t.Fatal(err)
 	}
-	header2.Signature = privKeys[proposerSlashIdx].Sign(signingRoot[:], domain).Marshal()[:]
+	header2.Signature = privKeys[proposerSlashIdx].Sign(root[:]).Marshal()[:]
 	proposerSlashings := []*ethpb.ProposerSlashing{
 		{
-			ProposerIndex: proposerSlashIdx,
 			Header_1: header1,
 			Header_2: header2,
 		},
 	}
 	validators := beaconState.Validators()
@@ -427,16 +432,16 @@ func TestProcessBlock_PassesProcessingConditions(t *testing.T) {
 			Target: &ethpb.Checkpoint{Epoch: 0}},
 		AttestingIndices: []uint64{0, 1},
 	}
-	hashTreeRoot, err := ssz.HashTreeRoot(att1.Data)
-	if err != nil {
-		t.Error(err)
-	}
-	domain, err = helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconAttester)
+	domain, err = helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
-	sig0 := privKeys[0].Sign(hashTreeRoot[:], domain)
-	sig1 := privKeys[1].Sign(hashTreeRoot[:], domain)
+	hashTreeRoot, err := helpers.ComputeSigningRoot(att1.Data, domain)
+	if err != nil {
+		t.Error(err)
+	}
+	sig0 := privKeys[0].Sign(hashTreeRoot[:])
+	sig1 := privKeys[1].Sign(hashTreeRoot[:])
 	aggregateSig := bls.AggregateSignatures([]*bls.Signature{sig0, sig1})
 	att1.Signature = aggregateSig.Marshal()[:]
@@ -447,12 +452,13 @@ func TestProcessBlock_PassesProcessingConditions(t *testing.T) {
 			Target: &ethpb.Checkpoint{Epoch: 0}},
 		AttestingIndices: []uint64{0, 1},
 	}
-	hashTreeRoot, err = ssz.HashTreeRoot(att2.Data)
+	hashTreeRoot, err = helpers.ComputeSigningRoot(att2.Data, domain)
 	if err != nil {
 		t.Error(err)
 	}
-	sig0 = privKeys[0].Sign(hashTreeRoot[:], domain)
-	sig1 = privKeys[1].Sign(hashTreeRoot[:], domain)
+	sig0 = privKeys[0].Sign(hashTreeRoot[:])
+	sig1 = privKeys[1].Sign(hashTreeRoot[:])
 	aggregateSig = bls.AggregateSignatures([]*bls.Signature{sig0, sig1})
 	att2.Signature = aggregateSig.Marshal()[:]
@@ -492,13 +498,13 @@ func TestProcessBlock_PassesProcessingConditions(t *testing.T) {
 	if err != nil {
 		t.Error(err)
 	}
-	hashTreeRoot, err = ssz.HashTreeRoot(blockAtt.Data)
+	hashTreeRoot, err = helpers.ComputeSigningRoot(blockAtt.Data, domain)
 	if err != nil {
 		t.Error(err)
 	}
 	sigs := make([]*bls.Signature, len(attestingIndices))
 	for i, indice := range attestingIndices {
-		sig := privKeys[indice].Sign(hashTreeRoot[:], domain)
+		sig := privKeys[indice].Sign(hashTreeRoot[:])
 		sigs[i] = sig
 	}
 	blockAtt.Signature = bls.AggregateSignatures(sigs).Marshal()[:]
@@ -509,17 +515,17 @@ func TestProcessBlock_PassesProcessingConditions(t *testing.T) {
 			Epoch: 0,
 		},
 	}
-	signingRoot, err = ssz.HashTreeRoot(exit.Exit)
-	if err != nil {
-		t.Errorf("Could not get signing root of beacon block header: %v", err)
-	}
-	domain, err = helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainVoluntaryExit)
+	domain, err = helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainVoluntaryExit, beaconState.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
-	exit.Signature = privKeys[exit.Exit.ValidatorIndex].Sign(signingRoot[:], domain).Marshal()[:]
+	signingRoot, err := helpers.ComputeSigningRoot(exit.Exit, domain)
+	if err != nil {
+		t.Errorf("Could not get signing root of beacon block header: %v", err)
+	}
+	exit.Signature = privKeys[exit.Exit.ValidatorIndex].Sign(signingRoot[:]).Marshal()[:]
-	parentRoot, err := ssz.HashTreeRoot(beaconState.LatestBlockHeader())
+	parentRoot, err := stateutil.BlockHeaderRoot(beaconState.LatestBlockHeader())
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -530,8 +536,9 @@ func TestProcessBlock_PassesProcessingConditions(t *testing.T) {
 	}
 	block := &ethpb.SignedBeaconBlock{
 		Block: &ethpb.BeaconBlock{
 			ParentRoot:    parentRoot[:],
 			Slot:          beaconState.Slot(),
+			ProposerIndex: 17,
 			Body: &ethpb.BeaconBlockBody{
 				RandaoReveal:      randaoReveal,
 				ProposerSlashings: proposerSlashings,
@@ -557,12 +564,12 @@ func TestProcessBlock_PassesProcessingConditions(t *testing.T) {
 		t.Fatalf("Expected block to pass processing conditions: %v", err)
 	}
-	v, err := beaconState.ValidatorAtIndex(proposerSlashings[0].ProposerIndex)
+	v, err := beaconState.ValidatorAtIndex(proposerSlashings[0].Header_1.Header.ProposerIndex)
 	if err != nil {
 		t.Fatal(err)
 	}
 	if !v.Slashed {
-		t.Errorf("Expected validator at index %d to be slashed, received false", proposerSlashings[0].ProposerIndex)
+		t.Errorf("Expected validator at index %d to be slashed, received false", proposerSlashings[0].Header_1.Header.ProposerIndex)
 	}
 	v, err = beaconState.ValidatorAtIndex(1)
 	if err != nil {
@@ -661,16 +668,17 @@ func BenchmarkProcessBlk_65536Validators_FullBlock(b *testing.B) {
 	// Set up proposer slashing object for block
 	proposerSlashings := []*ethpb.ProposerSlashing{
 		{
-			ProposerIndex: 1,
 			Header_1: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: 0,
+					ProposerIndex: 1,
+					Slot:          0,
 				},
 				Signature: []byte("A"),
 			},
 			Header_2: &ethpb.SignedBeaconBlockHeader{
 				Header: &ethpb.BeaconBlockHeader{
-					Slot: 0,
+					ProposerIndex: 1,
+					Slot:          0,
 				},
 				Signature: []byte("B"),
 			},
@@ -723,11 +731,19 @@ func BenchmarkProcessBlk_65536Validators_FullBlock(b *testing.B) {
 	v[proposerIdx].PublicKey = priv.PublicKey().Marshal()
 	buf := make([]byte, 32)
 	binary.LittleEndian.PutUint64(buf, 0)
-	domain, err := helpers.Domain(s.Fork(), 0, params.BeaconConfig().DomainRandao)
+	domain, err := helpers.Domain(s.Fork(), 0, params.BeaconConfig().DomainRandao, s.GenesisValidatorRoot())
 	if err != nil {
 		b.Fatal(err)
 	}
-	epochSignature := priv.Sign(buf, domain)
+	ctr := &pb.SigningRoot{
+		ObjectRoot: buf,
+		Domain:     domain,
+	}
+	root, err = ssz.HashTreeRoot(ctr)
+	if err != nil {
+		b.Fatal(err)
+	}
+	epochSignature := priv.Sign(root[:])
 	buf = []byte{params.BeaconConfig().BLSWithdrawalPrefixByte}
 	pubKey := []byte("A")
@@ -826,17 +842,17 @@ func TestProcessBlk_AttsBasedOnValidatorCount(t *testing.T) {
 	if err != nil {
 		t.Error(err)
 	}
-	domain, err := helpers.Domain(s.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
+	domain, err := helpers.Domain(s.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, s.GenesisValidatorRoot())
 	if err != nil {
 		t.Fatal(err)
 	}
 	sigs := make([]*bls.Signature, len(attestingIndices))
 	for i, indice := range attestingIndices {
-		hashTreeRoot, err := ssz.HashTreeRoot(att.Data)
+		hashTreeRoot, err := helpers.ComputeSigningRoot(att.Data, domain)
 		if err != nil {
 			t.Error(err)
 		}
-		sig := privKeys[indice].Sign(hashTreeRoot[:], domain)
+		sig := privKeys[indice].Sign(hashTreeRoot[:])
 		sigs[i] = sig
 	}
 	att.Signature = bls.AggregateSignatures(sigs).Marshal()[:]
@@ -847,14 +863,15 @@ func TestProcessBlk_AttsBasedOnValidatorCount(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	parentRoot, err := ssz.HashTreeRoot(s.LatestBlockHeader())
+	parentRoot, err := stateutil.BlockHeaderRoot(s.LatestBlockHeader())
 	if err != nil {
 		t.Fatal(err)
 	}
 	blk := &ethpb.SignedBeaconBlock{
 		Block: &ethpb.BeaconBlock{
-			Slot:       s.Slot(),
-			ParentRoot: parentRoot[:],
+			ProposerIndex: 72,
+			Slot:          s.Slot(),
+			ParentRoot:    parentRoot[:],
 			Body: &ethpb.BeaconBlockBody{
 				Eth1Data:     &ethpb.Eth1Data{},
 				RandaoReveal: epochSignature,

View File

@@ -43,6 +43,7 @@ go_library(
         "//shared/traceutil:go_default_library",
         "@com_github_dgraph_io_ristretto//:go_default_library",
         "@com_github_ethereum_go_ethereum//common:go_default_library",
+        "@com_github_ferranbt_fastssz//:go_default_library",
         "@com_github_gogo_protobuf//proto:go_default_library",
         "@com_github_golang_snappy//:go_default_library",
         "@com_github_pkg_errors//:go_default_library",
@@ -78,7 +79,6 @@ go_test(
     deps = [
         "//beacon-chain/cache:go_default_library",
         "//beacon-chain/db/filters:go_default_library",
-        "//beacon-chain/state:go_default_library",
         "//proto/beacon/p2p/v1:go_default_library",
         "//proto/testing:go_default_library",
         "//shared/bytesutil:go_default_library",
@@ -89,5 +89,6 @@ go_test(
         "@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
         "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
         "@com_github_prysmaticlabs_go_ssz//:go_default_library",
+        "@in_gopkg_d4l3k_messagediff_v1//:go_default_library",
     ],
 )

View File

@@ -38,22 +38,23 @@ func TestStore_ArchivedActiveValidatorChanges(t *testing.T) {
 		},
 		ProposerSlashings: []*ethpb.ProposerSlashing{
 			{
-				ProposerIndex: 1212,
 				Header_1: &ethpb.SignedBeaconBlockHeader{
 					Header: &ethpb.BeaconBlockHeader{
-						Slot:       10,
-						ParentRoot: someRoot[:],
-						StateRoot:  someRoot[:],
-						BodyRoot:   someRoot[:],
+						ProposerIndex: 1212,
+						Slot:          10,
+						ParentRoot:    someRoot[:],
+						StateRoot:     someRoot[:],
+						BodyRoot:      someRoot[:],
 					},
 					Signature: make([]byte, 96),
 				},
 				Header_2: &ethpb.SignedBeaconBlockHeader{
 					Header: &ethpb.BeaconBlockHeader{
-						Slot:       10,
-						ParentRoot: someRoot[:],
-						StateRoot:  someRoot[:],
-						BodyRoot:   someRoot[:],
+						ProposerIndex: 1212,
+						Slot:          10,
+						ParentRoot:    someRoot[:],
+						StateRoot:     someRoot[:],
+						BodyRoot:      someRoot[:],
 					},
 					Signature: make([]byte, 96),
 				},

View File

@@ -8,8 +8,7 @@ import (
 	eth "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	"github.com/prysmaticlabs/go-ssz"
-	"github.com/prysmaticlabs/prysm/beacon-chain/state"
-	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
+	"github.com/prysmaticlabs/prysm/shared/testutil"
 )
 func TestStore_Backup(t *testing.T) {
@@ -26,7 +25,7 @@ func TestStore_Backup(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	st, err := state.InitializeFromProto(&pb.BeaconState{})
+	st := testutil.NewBeaconState()
 	if err := db.SaveState(ctx, st, root); err != nil {
 		t.Fatal(err)
 	}

View File

@@ -263,9 +263,12 @@ func (k *Store) SaveHeadBlockRoot(ctx context.Context, blockRoot [32]byte) error
 	ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveHeadBlockRoot")
 	defer span.End()
 	return k.db.Update(func(tx *bolt.Tx) error {
-		if featureconfig.Get().NewStateMgmt {
-			if tx.Bucket(stateSummaryBucket).Get(blockRoot[:]) == nil && !k.stateSummaryCache.Has(blockRoot) {
-				return errors.New("no state summary found with head block root")
+		if !featureconfig.Get().DisableNewStateMgmt {
+			hasStateSummaryInCache := k.stateSummaryCache.Has(blockRoot)
+			hasStateSummaryInDB := tx.Bucket(stateSummaryBucket).Get(blockRoot[:]) != nil
+			hasStateInDB := tx.Bucket(stateBucket).Get(blockRoot[:]) != nil
+			if !(hasStateInDB || hasStateSummaryInDB || hasStateSummaryInCache) {
+				return errors.New("no state or state summary found with head block root")
 			}
 		} else {
 			if tx.Bucket(stateBucket).Get(blockRoot[:]) == nil {
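The rewritten guard above replaces a single nil check with three explicit presence checks (state in DB, state summary in DB, state summary in cache), any one of which allows the head root to be saved. A stand-alone sketch of that pattern, with plain maps standing in for the bolt buckets and the summary cache; all names here are illustrative, not Prysm APIs:

```go
package main

import (
	"errors"
	"fmt"
)

// store is a toy stand-in for the real bolt-backed Store: two "buckets"
// and an in-memory summary cache, all keyed by a 32-byte block root.
type store struct {
	stateBucket        map[[32]byte][]byte
	stateSummaryBucket map[[32]byte][]byte
	summaryCache       map[[32]byte]bool
}

// checkHeadRoot mirrors the new guard: saving a head root is allowed if
// any of the three locations knows about it.
func (s *store) checkHeadRoot(root [32]byte) error {
	hasStateInDB := s.stateBucket[root] != nil
	hasSummaryInDB := s.stateSummaryBucket[root] != nil
	hasSummaryInCache := s.summaryCache[root]
	if !(hasStateInDB || hasSummaryInDB || hasSummaryInCache) {
		return errors.New("no state or state summary found with head block root")
	}
	return nil
}

func main() {
	s := &store{
		stateBucket:        map[[32]byte][]byte{},
		stateSummaryBucket: map[[32]byte][]byte{},
		summaryCache:       map[[32]byte]bool{},
	}
	root := [32]byte{1}
	fmt.Println(s.checkHeadRoot(root) != nil) // unknown root: error

	s.summaryCache[root] = true
	fmt.Println(s.checkHeadRoot(root) == nil) // cached summary satisfies the check
}
```

Spelling out the three booleans before combining them keeps the invariant readable and makes the error message accurate about what was actually missing.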

View File

@@ -12,7 +12,7 @@ import (
 var historicalStateDeletedKey = []byte("historical-states-deleted")
 func (kv *Store) ensureNewStateServiceCompatible(ctx context.Context) error {
-	if !featureconfig.Get().NewStateMgmt {
+	if featureconfig.Get().DisableNewStateMgmt {
 		return kv.db.Update(func(tx *bolt.Tx) error {
 			bkt := tx.Bucket(newStateServiceCompatibleBucket)
 			return bkt.Put(historicalStateDeletedKey, []byte{0x01})
@@ -32,9 +32,9 @@ func (kv *Store) ensureNewStateServiceCompatible(ctx context.Context) error {
 	regenHistoricalStatesConfirmed := false
 	var err error
 	if historicalStateDeleted {
-		actionText := "Looks like you stopped using --new-state-mgmt. To reuse it, the node will need " +
-			"to generate and save historical states. The process may take a while, - do you want to proceed? (Y/N)"
-		deniedText := "Historical states will not be generated. Please remove usage --new-state-mgmt"
+		actionText := "--disable-new-state-mgmt was used. To proceed without the flag, the db will need " +
+			"to generate and save historical states. This process may take a while, - do you want to proceed? (Y/N)"
+		deniedText := "Historical states will not be generated. Please continue use --disable-new-state-mgmt"
 		regenHistoricalStatesConfirmed, err = cmd.ConfirmAction(actionText, deniedText)
 		if err != nil {
@@ -42,7 +42,7 @@ func (kv *Store) ensureNewStateServiceCompatible(ctx context.Context) error {
 	}
 	if !regenHistoricalStatesConfirmed {
-		return errors.New("exiting... please do not run with flag --new-state-mgmt")
+		return errors.New("exiting... please use --disable-new-state-mgmt")
 	}
 	if err := kv.regenHistoricalStates(ctx); err != nil {

View File

@@ -12,7 +12,7 @@ import (
 	"go.opencensus.io/trace"
 )
-var errMissingStateForCheckpoint = errors.New("no state exists with checkpoint root")
+var errMissingStateForCheckpoint = errors.New("missing state summary for finalized root")
 // JustifiedCheckpoint returns the latest justified checkpoint in beacon chain.
 func (k *Store) JustifiedCheckpoint(ctx context.Context) (*ethpb.Checkpoint, error) {
@@ -65,8 +65,11 @@ func (k *Store) SaveJustifiedCheckpoint(ctx context.Context, checkpoint *ethpb.C
 	}
 	return k.db.Update(func(tx *bolt.Tx) error {
 		bucket := tx.Bucket(checkpointBucket)
-		if featureconfig.Get().NewStateMgmt {
-			if tx.Bucket(stateSummaryBucket).Get(checkpoint.Root) == nil && !k.stateSummaryCache.Has(bytesutil.ToBytes32(checkpoint.Root)) {
+		if !featureconfig.Get().DisableNewStateMgmt {
+			hasStateSummaryInDB := tx.Bucket(stateSummaryBucket).Get(checkpoint.Root) != nil
+			hasStateSummaryInCache := k.stateSummaryCache.Has(bytesutil.ToBytes32(checkpoint.Root))
+			hasStateInDB := tx.Bucket(stateBucket).Get(checkpoint.Root) != nil
+			if !(hasStateInDB || hasStateSummaryInDB || hasStateSummaryInCache) {
 				return errors.New("missing state summary for finalized root")
 			}
 		} else {
@@ -93,8 +96,11 @@ func (k *Store) SaveFinalizedCheckpoint(ctx context.Context, checkpoint *ethpb.C
 	}
 	return k.db.Update(func(tx *bolt.Tx) error {
 		bucket := tx.Bucket(checkpointBucket)
-		if featureconfig.Get().NewStateMgmt {
-			if tx.Bucket(stateSummaryBucket).Get(checkpoint.Root) == nil && !k.stateSummaryCache.Has(bytesutil.ToBytes32(checkpoint.Root)) {
+		if !featureconfig.Get().DisableNewStateMgmt {
+			hasStateSummaryInDB := tx.Bucket(stateSummaryBucket).Get(checkpoint.Root) != nil
+			hasStateSummaryInCache := k.stateSummaryCache.Has(bytesutil.ToBytes32(checkpoint.Root))
+			hasStateInDB := tx.Bucket(stateBucket).Get(checkpoint.Root) != nil
+			if !(hasStateInDB || hasStateSummaryInDB || hasStateSummaryInCache) {
 				return errors.New("missing state summary for finalized root")
 			}
 		} else {

View File

@@ -2,14 +2,14 @@ package kv
 import (
 	"context"
+	"strings"
 	"testing"
 	"github.com/gogo/protobuf/proto"
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	"github.com/prysmaticlabs/go-ssz"
-	"github.com/prysmaticlabs/prysm/beacon-chain/state"
-	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/bytesutil"
+	"github.com/prysmaticlabs/prysm/shared/testutil"
 )
 func TestStore_JustifiedCheckpoint_CanSaveRetrieve(t *testing.T) {
@@ -21,10 +21,11 @@ func TestStore_JustifiedCheckpoint_CanSaveRetrieve(t *testing.T) {
 		Epoch: 10,
 		Root:  root[:],
 	}
-	st, err := state.InitializeFromProto(&pb.BeaconState{Slot: 1})
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(1); err != nil {
 		t.Fatal(err)
 	}
 	if err := db.SaveState(ctx, st, root); err != nil {
 		t.Fatal(err)
 	}
@@ -73,8 +74,8 @@ func TestStore_FinalizedCheckpoint_CanSaveRetrieve(t *testing.T) {
 	if err := db.SaveBlock(ctx, blk); err != nil {
 		t.Fatal(err)
 	}
-	st, err := state.InitializeFromProto(&pb.BeaconState{Slot: 1})
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(1); err != nil {
 		t.Fatal(err)
 	}
 	// a state is required to save checkpoint
@@ -144,7 +145,7 @@ func TestStore_FinalizedCheckpoint_StateMustExist(t *testing.T) {
 		Root: []byte{'B'},
 	}
-	if err := db.SaveFinalizedCheckpoint(ctx, cp); err != errMissingStateForCheckpoint {
+	if err := db.SaveFinalizedCheckpoint(ctx, cp); !strings.Contains(err.Error(), errMissingStateForCheckpoint.Error()) {
 		t.Fatalf("wanted err %v, got %v", errMissingStateForCheckpoint, err)
 	}
 }

View File

@@ -4,8 +4,11 @@ import (
 	"errors"
 	"reflect"
 
+	fastssz "github.com/ferranbt/fastssz"
 	"github.com/gogo/protobuf/proto"
 	"github.com/golang/snappy"
+	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
+	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 )
 
 func decode(data []byte, dst proto.Message) error {
@@ -13,20 +16,49 @@ func decode(data []byte, dst proto.Message) error {
 	if err != nil {
 		return err
 	}
-	if err := proto.Unmarshal(data, dst); err != nil {
-		return err
-	}
-	return nil
+	if isWhitelisted(dst) {
+		return dst.(fastssz.Unmarshaler).UnmarshalSSZ(data)
+	}
+	return proto.Unmarshal(data, dst)
 }
 
 func encode(msg proto.Message) ([]byte, error) {
 	if msg == nil || reflect.ValueOf(msg).IsNil() {
 		return nil, errors.New("cannot encode nil message")
 	}
-	enc, err := proto.Marshal(msg)
-	if err != nil {
-		return nil, err
-	}
+	var enc []byte
+	var err error
+	if isWhitelisted(msg) {
+		enc, err = msg.(fastssz.Marshaler).MarshalSSZ()
+		if err != nil {
+			return nil, err
+		}
+	} else {
+		enc, err = proto.Marshal(msg)
+		if err != nil {
+			return nil, err
+		}
+	}
 	return snappy.Encode(nil, enc), nil
 }
+
+func isWhitelisted(obj interface{}) bool {
+	switch obj.(type) {
+	case *pb.BeaconState:
+		return true
+	case *ethpb.BeaconBlock:
+		return true
+	case *ethpb.Attestation:
+		return true
+	case *ethpb.Deposit:
+		return true
+	case *ethpb.AttesterSlashing:
+		return true
+	case *ethpb.ProposerSlashing:
+		return true
+	case *ethpb.VoluntaryExit:
+		return true
+	default:
+		return false
+	}
+}
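The codec change above routes whitelisted types through their fastssz methods and everything else through `proto.Marshal`, snappy-compressing the result either way. Below is a stdlib-only sketch of the same whitelist-dispatch pattern; the `fastMarshaler` interface and both struct types are stand-ins (not the real Prysm or fastssz APIs), and JSON stands in for the protobuf fallback:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// fastMarshaler plays the role of fastssz.Marshaler: whitelisted types
// provide their own optimized binary encoding.
type fastMarshaler interface {
	MarshalFast() ([]byte, error)
}

// beaconState is a stand-in for a whitelisted type (*pb.BeaconState in the diff).
type beaconState struct {
	Slot uint64
}

func (s beaconState) MarshalFast() ([]byte, error) {
	return []byte(fmt.Sprintf("ssz:%d", s.Slot)), nil
}

// attestation is a stand-in for a type that is not on the whitelist.
type attestation struct {
	Slot uint64
}

// isWhitelisted mirrors the explicit type switch in the diff.
func isWhitelisted(obj interface{}) bool {
	switch obj.(type) {
	case beaconState:
		return true
	default:
		return false
	}
}

// encode takes the fast path for whitelisted values and falls back to a
// generic encoder (proto.Marshal in the real code, JSON here).
func encode(msg interface{}) ([]byte, error) {
	if isWhitelisted(msg) {
		return msg.(fastMarshaler).MarshalFast()
	}
	return json.Marshal(msg)
}

func main() {
	fast, _ := encode(beaconState{Slot: 7})    // fast path
	generic, _ := encode(attestation{Slot: 7}) // generic fallback
	fmt.Println(string(fast), string(generic))
}
```

The type-assertion inside `encode` is safe only because the whitelist and the interface implementations are kept in sync, which is the same implicit contract the real diff relies on.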


@@ -6,10 +6,9 @@ import (
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	"github.com/prysmaticlabs/go-ssz"
-	"github.com/prysmaticlabs/prysm/beacon-chain/state"
-	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/bytesutil"
 	"github.com/prysmaticlabs/prysm/shared/params"
+	"github.com/prysmaticlabs/prysm/shared/testutil"
 )
 
 var genesisBlockRoot = bytesutil.ToBytes32([]byte{'G', 'E', 'N', 'E', 'S', 'I', 'S'})
@@ -39,10 +38,7 @@ func TestStore_IsFinalizedBlock(t *testing.T) {
 		Root: root[:],
 	}
-	st, err := state.InitializeFromProto(&pb.BeaconState{})
-	if err != nil {
-		t.Fatal(err)
-	}
+	st := testutil.NewBeaconState()
 	// a state is required to save checkpoint
 	if err := db.SaveState(ctx, st, root); err != nil {
 		t.Fatal(err)
@@ -115,10 +111,7 @@ func TestStore_IsFinalized_ForkEdgeCase(t *testing.T) {
 		Epoch: 1,
 	}
-	st, err := state.InitializeFromProto(&pb.BeaconState{})
-	if err != nil {
-		t.Fatal(err)
-	}
+	st := testutil.NewBeaconState()
 	// A state is required to save checkpoint
 	if err := db.SaveState(ctx, st, bytesutil.ToBytes32(checkpoint1.Root)); err != nil {
 		t.Fatal(err)


@@ -14,7 +14,24 @@ func TestStore_ProposerSlashing_CRUD(t *testing.T) {
 	defer teardownDB(t, db)
 	ctx := context.Background()
 	prop := &ethpb.ProposerSlashing{
-		ProposerIndex: 5,
+		Header_1: &ethpb.SignedBeaconBlockHeader{
+			Header: &ethpb.BeaconBlockHeader{
+				ProposerIndex: 5,
+				BodyRoot:      make([]byte, 32),
+				ParentRoot:    make([]byte, 32),
+				StateRoot:     make([]byte, 32),
+			},
+			Signature: make([]byte, 96),
+		},
+		Header_2: &ethpb.SignedBeaconBlockHeader{
+			Header: &ethpb.BeaconBlockHeader{
+				ProposerIndex: 5,
+				BodyRoot:      make([]byte, 32),
+				ParentRoot:    make([]byte, 32),
+				StateRoot:     make([]byte, 32),
+			},
+			Signature: make([]byte, 96),
+		},
 	}
 	slashingRoot, err := ssz.HashTreeRoot(prop)
 	if err != nil {
@@ -57,13 +74,31 @@ func TestStore_AttesterSlashing_CRUD(t *testing.T) {
 			Data: &ethpb.AttestationData{
 				BeaconBlockRoot: make([]byte, 32),
 				Slot:            5,
+				Source: &ethpb.Checkpoint{
+					Epoch: 0,
+					Root:  make([]byte, 32),
+				},
+				Target: &ethpb.Checkpoint{
+					Epoch: 0,
+					Root:  make([]byte, 32),
+				},
 			},
+			Signature: make([]byte, 96),
 		},
 		Attestation_2: &ethpb.IndexedAttestation{
 			Data: &ethpb.AttestationData{
 				BeaconBlockRoot: make([]byte, 32),
 				Slot:            7,
+				Source: &ethpb.Checkpoint{
+					Epoch: 0,
+					Root:  make([]byte, 32),
+				},
+				Target: &ethpb.Checkpoint{
+					Epoch: 0,
+					Root:  make([]byte, 32),
+				},
 			},
+			Signature: make([]byte, 96),
 		},
 	}
 	slashingRoot, err := ssz.HashTreeRoot(att)


@@ -11,7 +11,6 @@ import (
 	"github.com/prysmaticlabs/prysm/beacon-chain/state"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/bytesutil"
-	"github.com/prysmaticlabs/prysm/shared/featureconfig"
 	bolt "go.etcd.io/bbolt"
 	"go.opencensus.io/trace"
 )
@@ -194,15 +193,9 @@ func (k *Store) DeleteState(ctx context.Context, blockRoot [32]byte) error {
 		bkt = tx.Bucket(blocksBucket)
 		headBlkRoot := bkt.Get(headBlockRootKey)
-		if featureconfig.Get().NewStateMgmt {
-			if tx.Bucket(stateSummaryBucket).Get(blockRoot[:]) == nil {
-				return errors.New("cannot delete state without state summary")
-			}
-		} else {
-			// Safe guard against deleting genesis, finalized, head state.
-			if bytes.Equal(blockRoot[:], checkpoint.Root) || bytes.Equal(blockRoot[:], genesisBlockRoot) || bytes.Equal(blockRoot[:], headBlkRoot) {
-				return errors.New("cannot delete genesis, finalized, or head state")
-			}
+		// Safe guard against deleting genesis, finalized, head state.
+		if bytes.Equal(blockRoot[:], checkpoint.Root) || bytes.Equal(blockRoot[:], genesisBlockRoot) || bytes.Equal(blockRoot[:], headBlkRoot) {
+			return errors.New("cannot delete genesis, finalized, or head state")
 		}
 
 		slot, err := slotByBlockRoot(ctx, tx, blockRoot[:])
@@ -253,15 +246,9 @@ func (k *Store) DeleteStates(ctx context.Context, blockRoots [][32]byte) error {
 		for blockRoot, _ := c.First(); blockRoot != nil; blockRoot, _ = c.Next() {
 			if rootMap[bytesutil.ToBytes32(blockRoot)] {
-				if featureconfig.Get().NewStateMgmt {
-					if tx.Bucket(stateSummaryBucket).Get(blockRoot[:]) == nil {
-						return errors.New("cannot delete state without state summary")
-					}
-				} else {
-					// Safe guard against deleting genesis, finalized, head state.
-					if bytes.Equal(blockRoot[:], checkpoint.Root) || bytes.Equal(blockRoot[:], genesisBlockRoot) || bytes.Equal(blockRoot[:], headBlkRoot) {
-						return errors.New("cannot delete genesis, finalized, or head state")
-					}
+				// Safe guard against deleting genesis, finalized, head state.
+				if bytes.Equal(blockRoot[:], checkpoint.Root) || bytes.Equal(blockRoot[:], genesisBlockRoot) || bytes.Equal(blockRoot[:], headBlkRoot) {
+					return errors.New("cannot delete genesis, finalized, or head state")
 				}
 
 				slot, err := slotByBlockRoot(ctx, tx, blockRoot)
@@ -296,47 +283,45 @@ func slotByBlockRoot(ctx context.Context, tx *bolt.Tx, blockRoot []byte) (uint64
 	ctx, span := trace.StartSpan(ctx, "BeaconDB.slotByBlockRoot")
 	defer span.End()
-	if featureconfig.Get().NewStateMgmt {
-		bkt := tx.Bucket(stateSummaryBucket)
-		enc := bkt.Get(blockRoot)
-		if enc == nil {
-			return 0, errors.New("state summary enc can't be nil")
-		}
-		stateSummary := &pb.StateSummary{}
-		if err := decode(enc, stateSummary); err != nil {
-			return 0, err
-		}
-		return stateSummary.Slot, nil
-	}
-	bkt := tx.Bucket(blocksBucket)
+	bkt := tx.Bucket(stateSummaryBucket)
 	enc := bkt.Get(blockRoot)
 	if enc == nil {
-		// fallback and check the state.
-		bkt = tx.Bucket(stateBucket)
-		enc = bkt.Get(blockRoot)
-		if enc == nil {
-			return 0, errors.New("state enc can't be nil")
-		}
-		s, err := createState(enc)
-		if err != nil {
-			return 0, err
-		}
-		if s == nil {
-			return 0, errors.New("state can't be nil")
-		}
-		return s.Slot, nil
-	}
-	b := &ethpb.SignedBeaconBlock{}
-	err := decode(enc, b)
-	if err != nil {
-		return 0, err
-	}
-	if b.Block == nil {
-		return 0, errors.New("block can't be nil")
-	}
-	return b.Block.Slot, nil
+		// Fall back to check the block.
+		bkt := tx.Bucket(blocksBucket)
+		enc := bkt.Get(blockRoot)
+		if enc == nil {
+			// Fallback and check the state.
+			bkt = tx.Bucket(stateBucket)
+			enc = bkt.Get(blockRoot)
+			if enc == nil {
+				return 0, errors.New("state enc can't be nil")
+			}
+			s, err := createState(enc)
+			if err != nil {
+				return 0, err
+			}
+			if s == nil {
+				return 0, errors.New("state can't be nil")
+			}
+			return s.Slot, nil
+		}
+		b := &ethpb.SignedBeaconBlock{}
+		err := decode(enc, b)
+		if err != nil {
+			return 0, err
+		}
+		if b.Block == nil {
+			return 0, errors.New("block can't be nil")
+		}
+		return b.Block.Slot, nil
 	}
+	stateSummary := &pb.StateSummary{}
+	if err := decode(enc, stateSummary); err != nil {
+		return 0, err
+	}
+	return stateSummary.Slot, nil
 }
 
 // HighestSlotStates returns the states with the highest slot from the db.
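The new `slotByBlockRoot` above resolves a slot by trying the state-summary bucket first, then the block bucket, then the raw state bucket. A toy sketch of that lookup chain, with maps standing in for the bolt buckets (all names and values here are illustrative stand-ins, not the real Prysm API):

```go
package main

import (
	"errors"
	"fmt"
)

// In-memory stand-ins for the three bolt buckets consulted in the diff;
// keys are block roots, values are slots.
var (
	stateSummarySlots = map[string]uint64{"summary-root": 42}
	blockSlots        = map[string]uint64{"block-root": 7}
	stateSlots        = map[string]uint64{"state-root": 3}
)

// slotByBlockRoot mirrors the lookup order introduced above:
// state summary first, then the block, then the raw state.
func slotByBlockRoot(root string) (uint64, error) {
	if slot, ok := stateSummarySlots[root]; ok {
		return slot, nil
	}
	// Fall back to check the block.
	if slot, ok := blockSlots[root]; ok {
		return slot, nil
	}
	// Fall back to check the state.
	if slot, ok := stateSlots[root]; ok {
		return slot, nil
	}
	return 0, errors.New("state enc can't be nil")
}

func main() {
	for _, root := range []string{"summary-root", "block-root", "state-root", "missing"} {
		slot, err := slotByBlockRoot(root)
		fmt.Println(root, slot, err)
	}
}
```

The ordering matters: state summaries are cheap to decode, so the expensive state decode only happens when neither the summary nor the block is present.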


@@ -8,24 +8,23 @@ import (
 	"github.com/gogo/protobuf/proto"
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	"github.com/prysmaticlabs/go-ssz"
-	"github.com/prysmaticlabs/prysm/beacon-chain/state"
-	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/bytesutil"
+	"github.com/prysmaticlabs/prysm/shared/testutil"
+	"gopkg.in/d4l3k/messagediff.v1"
 )
 
 func TestState_CanSaveRetrieve(t *testing.T) {
 	db := setupDB(t)
 	defer teardownDB(t, db)
-	s := &pb.BeaconState{Slot: 100}
 	r := [32]byte{'A'}
 	if db.HasState(context.Background(), r) {
 		t.Fatal("Wanted false")
 	}
-	st, err := state.InitializeFromProto(s)
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(100); err != nil {
 		t.Fatal(err)
 	}
@@ -42,8 +41,9 @@ func TestState_CanSaveRetrieve(t *testing.T) {
 		t.Fatal(err)
 	}
-	if !reflect.DeepEqual(st, savedS) {
-		t.Errorf("Did not retrieve saved state: %v != %v", s, savedS)
+	if !reflect.DeepEqual(st.InnerStateUnsafe(), savedS.InnerStateUnsafe()) {
+		diff, _ := messagediff.PrettyDiff(st.InnerStateUnsafe(), savedS.InnerStateUnsafe())
+		t.Errorf("Did not retrieve saved state: %v", diff)
 	}
 	savedS, err = db.State(context.Background(), [32]byte{'B'})
@@ -60,11 +60,10 @@ func TestHeadState_CanSaveRetrieve(t *testing.T) {
 	db := setupDB(t)
 	defer teardownDB(t, db)
-	s := &pb.BeaconState{Slot: 100}
 	headRoot := [32]byte{'A'}
-	st, err := state.InitializeFromProto(s)
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(100); err != nil {
 		t.Fatal(err)
 	}
@@ -81,7 +80,7 @@ func TestHeadState_CanSaveRetrieve(t *testing.T) {
 		t.Fatal(err)
 	}
-	if !reflect.DeepEqual(st, savedHeadS) {
+	if !reflect.DeepEqual(st.InnerStateUnsafe(), savedHeadS.InnerStateUnsafe()) {
 		t.Error("did not retrieve saved state")
 	}
 }
@@ -90,11 +89,10 @@ func TestGenesisState_CanSaveRetrieve(t *testing.T) {
 	db := setupDB(t)
 	defer teardownDB(t, db)
-	s := &pb.BeaconState{Slot: 1}
 	headRoot := [32]byte{'B'}
-	st, err := state.InitializeFromProto(s)
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(1); err != nil {
 		t.Fatal(err)
 	}
@@ -111,7 +109,7 @@ func TestGenesisState_CanSaveRetrieve(t *testing.T) {
 		t.Fatal(err)
 	}
-	if !reflect.DeepEqual(st, savedGenesisS) {
+	if !reflect.DeepEqual(st.InnerStateUnsafe(), savedGenesisS.InnerStateUnsafe()) {
 		t.Error("did not retrieve saved state")
 	}
@@ -148,8 +146,8 @@ func TestStore_StatesBatchDelete(t *testing.T) {
 		if err != nil {
 			t.Fatal(err)
 		}
-		st, err := state.InitializeFromProto(&pb.BeaconState{Slot: uint64(i)})
-		if err != nil {
+		st := testutil.NewBeaconState()
+		if err := st.SetSlot(uint64(i)); err != nil {
 			t.Fatal(err)
 		}
 		if err := db.SaveState(context.Background(), st, r); err != nil {
@@ -191,9 +189,8 @@ func TestStore_DeleteGenesisState(t *testing.T) {
 	if err := db.SaveGenesisBlockRoot(ctx, genesisBlockRoot); err != nil {
 		t.Fatal(err)
 	}
-	genesisState := &pb.BeaconState{Slot: 100}
-	st, err := state.InitializeFromProto(genesisState)
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(100); err != nil {
 		t.Fatal(err)
 	}
 	if err := db.SaveState(ctx, st, genesisBlockRoot); err != nil {
@@ -230,8 +227,8 @@ func TestStore_DeleteFinalizedState(t *testing.T) {
 		t.Fatal(err)
 	}
-	finalizedState, err := state.InitializeFromProto(&pb.BeaconState{Slot: 100})
-	if err != nil {
+	finalizedState := testutil.NewBeaconState()
+	if err := finalizedState.SetSlot(100); err != nil {
 		t.Fatal(err)
 	}
 	if err := db.SaveState(ctx, finalizedState, finalizedBlockRoot); err != nil {
@@ -243,6 +240,7 @@ func TestStore_DeleteFinalizedState(t *testing.T) {
 	}
 	wantedErr := "cannot delete genesis, finalized, or head state"
 	if err := db.DeleteState(ctx, finalizedBlockRoot); err.Error() != wantedErr {
+		t.Log(err.Error())
 		t.Error("Did not receive wanted error")
 	}
 }
@@ -271,9 +269,8 @@ func TestStore_DeleteHeadState(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	headState := &pb.BeaconState{Slot: 100}
-	st, err := state.InitializeFromProto(headState)
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(100); err != nil {
 		t.Fatal(err)
 	}
 	if err := db.SaveState(ctx, st, headBlockRoot); err != nil {
@@ -292,7 +289,6 @@ func TestStore_SaveDeleteState_CanGetHighest(t *testing.T) {
 	db := setupDB(t)
 	defer teardownDB(t, db)
-	s0 := &pb.BeaconState{Slot: 1}
 	b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
 	r, err := ssz.HashTreeRoot(b.Block)
 	if err != nil {
@@ -301,15 +297,15 @@ func TestStore_SaveDeleteState_CanGetHighest(t *testing.T) {
 	if err := db.SaveBlock(context.Background(), b); err != nil {
 		t.Fatal(err)
 	}
-	st, err := state.InitializeFromProto(s0)
-	if err != nil {
-		t.Fatal(err)
-	}
+	st := testutil.NewBeaconState()
 	if err := db.SaveState(context.Background(), st, r); err != nil {
 		t.Fatal(err)
 	}
+	if err := db.SaveGenesisBlockRoot(context.Background(), r); err != nil {
+		t.Error(err)
+	}
+	s0 := st.InnerStateUnsafe()
-	s1 := &pb.BeaconState{Slot: 999}
 	b = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 999}}
 	r1, err := ssz.HashTreeRoot(b.Block)
 	if err != nil {
@@ -318,10 +314,11 @@ func TestStore_SaveDeleteState_CanGetHighest(t *testing.T) {
 	if err := db.SaveBlock(context.Background(), b); err != nil {
 		t.Fatal(err)
 	}
-	st, err = state.InitializeFromProto(s1)
-	if err != nil {
+	st = testutil.NewBeaconState()
+	if err := st.SetSlot(999); err != nil {
 		t.Fatal(err)
 	}
+	s1 := st.InnerStateUnsafe()
 	if err := db.SaveState(context.Background(), st, r1); err != nil {
 		t.Fatal(err)
 	}
@@ -334,7 +331,6 @@ func TestStore_SaveDeleteState_CanGetHighest(t *testing.T) {
 		t.Errorf("Did not retrieve saved state: %v != %v", highest, s1)
 	}
-	s2 := &pb.BeaconState{Slot: 1000}
 	b = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1000}}
 	r2, err := ssz.HashTreeRoot(b.Block)
 	if err != nil {
@@ -343,10 +339,11 @@ func TestStore_SaveDeleteState_CanGetHighest(t *testing.T) {
 	if err := db.SaveBlock(context.Background(), b); err != nil {
 		t.Fatal(err)
 	}
-	st, err = state.InitializeFromProto(s2)
-	if err != nil {
+	st = testutil.NewBeaconState()
+	if err := st.SetSlot(1000); err != nil {
 		t.Fatal(err)
 	}
+	s2 := st.InnerStateUnsafe()
 	if err := db.SaveState(context.Background(), st, r2); err != nil {
 		t.Fatal(err)
 	}
@@ -377,8 +374,12 @@ func TestStore_SaveDeleteState_CanGetHighest(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
+	if highest[0] == nil {
+		t.Fatal("returned nil state ")
+	}
 	if !proto.Equal(highest[0].InnerStateUnsafe(), s0) {
-		t.Errorf("Did not retrieve saved state: %v != %v", highest, s1)
+		diff, _ := messagediff.PrettyDiff(highest[0].InnerStateUnsafe(), s0)
+		t.Errorf("Did not retrieve saved state: %v", diff)
 	}
 }
@@ -386,7 +387,6 @@ func TestStore_SaveDeleteState_CanGetHighestBelow(t *testing.T) {
 	db := setupDB(t)
 	defer teardownDB(t, db)
-	s0 := &pb.BeaconState{Slot: 1}
 	b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
 	r, err := ssz.HashTreeRoot(b.Block)
 	if err != nil {
@@ -395,15 +395,15 @@ func TestStore_SaveDeleteState_CanGetHighestBelow(t *testing.T) {
 	if err := db.SaveBlock(context.Background(), b); err != nil {
 		t.Fatal(err)
 	}
-	st, err := state.InitializeFromProto(s0)
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(1); err != nil {
 		t.Fatal(err)
 	}
+	s0 := st.InnerStateUnsafe()
 	if err := db.SaveState(context.Background(), st, r); err != nil {
 		t.Fatal(err)
 	}
-	s1 := &pb.BeaconState{Slot: 100}
 	b = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 100}}
 	r1, err := ssz.HashTreeRoot(b.Block)
 	if err != nil {
@@ -412,10 +412,11 @@ func TestStore_SaveDeleteState_CanGetHighestBelow(t *testing.T) {
 	if err := db.SaveBlock(context.Background(), b); err != nil {
 		t.Fatal(err)
 	}
-	st, err = state.InitializeFromProto(s1)
-	if err != nil {
+	st = testutil.NewBeaconState()
+	if err := st.SetSlot(100); err != nil {
 		t.Fatal(err)
 	}
+	s1 := st.InnerStateUnsafe()
 	if err := db.SaveState(context.Background(), st, r1); err != nil {
 		t.Fatal(err)
 	}
@@ -428,7 +429,6 @@ func TestStore_SaveDeleteState_CanGetHighestBelow(t *testing.T) {
 		t.Errorf("Did not retrieve saved state: %v != %v", highest, s1)
 	}
-	s2 := &pb.BeaconState{Slot: 1000}
 	b = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1000}}
 	r2, err := ssz.HashTreeRoot(b.Block)
 	if err != nil {
@@ -437,10 +437,12 @@ func TestStore_SaveDeleteState_CanGetHighestBelow(t *testing.T) {
 	if err := db.SaveBlock(context.Background(), b); err != nil {
 		t.Fatal(err)
 	}
-	st, err = state.InitializeFromProto(s2)
-	if err != nil {
+	st = testutil.NewBeaconState()
+	if err := st.SetSlot(1000); err != nil {
 		t.Fatal(err)
 	}
+	s2 := st.InnerStateUnsafe()
 	if err := db.SaveState(context.Background(), st, r2); err != nil {
 		t.Fatal(err)
 	}
@@ -474,11 +476,7 @@ func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
 	db := setupDB(t)
 	defer teardownDB(t, db)
-	s := &pb.BeaconState{}
-	genesisState, err := state.InitializeFromProto(s)
-	if err != nil {
-		t.Fatal(err)
-	}
+	genesisState := testutil.NewBeaconState()
 	genesisRoot := [32]byte{'a'}
 	if err := db.SaveGenesisBlockRoot(context.Background(), genesisRoot); err != nil {
 		t.Fatal(err)
@@ -487,7 +485,6 @@ func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
 		t.Fatal(err)
 	}
-	s0 := &pb.BeaconState{Slot: 1}
 	b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
 	r, err := ssz.HashTreeRoot(b.Block)
 	if err != nil {
@@ -496,8 +493,9 @@ func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
 	if err := db.SaveBlock(context.Background(), b); err != nil {
 		t.Fatal(err)
 	}
-	st, err := state.InitializeFromProto(s0)
-	if err != nil {
+	st := testutil.NewBeaconState()
+	if err := st.SetSlot(1); err != nil {
 		t.Fatal(err)
 	}
 	if err := db.SaveState(context.Background(), st, r); err != nil {
@@ -508,8 +506,8 @@ func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	if !proto.Equal(highest[0].InnerStateUnsafe(), s0) {
-		t.Errorf("Did not retrieve saved state: %v != %v", highest, s0)
+	if !proto.Equal(highest[0].InnerStateUnsafe(), st.InnerStateUnsafe()) {
+		t.Errorf("Did not retrieve saved state: %v != %v", highest, st.InnerStateUnsafe())
 	}
 	highest, err = db.HighestSlotStatesBelow(context.Background(), 1)
@@ -517,13 +515,13 @@ func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
 		t.Fatal(err)
 	}
 	if !proto.Equal(highest[0].InnerStateUnsafe(), genesisState.InnerStateUnsafe()) {
-		t.Errorf("Did not retrieve saved state: %v != %v", highest, s0)
+		t.Errorf("Did not retrieve saved state: %v != %v", highest, genesisState.InnerStateUnsafe())
 	}
 	highest, err = db.HighestSlotStatesBelow(context.Background(), 0)
 	if err != nil {
 		t.Fatal(err)
 	}
 	if !proto.Equal(highest[0].InnerStateUnsafe(), genesisState.InnerStateUnsafe()) {
-		t.Errorf("Did not retrieve saved state: %v != %v", highest, s0)
+		t.Errorf("Did not retrieve saved state: %v != %v", highest, genesisState.InnerStateUnsafe())
 	}
 }


@@ -105,9 +105,9 @@ var (
 		Usage: "The slot durations of when an archived state gets saved in the DB.",
 		Value: 128,
 	}
-	// EnableDiscv5 enables running discv5.
-	EnableDiscv5 = &cli.BoolFlag{
-		Name:  "enable-discv5",
-		Usage: "Starts dv5 dht.",
+	// DisableDiscv5 disables running discv5.
+	DisableDiscv5 = &cli.BoolFlag{
+		Name:  "disable-discv5",
+		Usage: "Does not run the discoveryV5 dht.",
 	}
 )


@@ -14,7 +14,7 @@ type GlobalFlags struct {
 	EnableArchivedBlocks       bool
 	EnableArchivedAttestations bool
 	UnsafeSync                 bool
-	EnableDiscv5               bool
+	DisableDiscv5              bool
 	MinimumSyncPeers           int
 	MaxPageSize                int
 	DeploymentBlock            int
@@ -54,8 +54,8 @@ func ConfigureGlobalFlags(ctx *cli.Context) {
 	if ctx.Bool(UnsafeSync.Name) {
 		cfg.UnsafeSync = true
 	}
-	if ctx.Bool(EnableDiscv5.Name) {
-		cfg.EnableDiscv5 = true
+	if ctx.Bool(DisableDiscv5.Name) {
+		cfg.DisableDiscv5 = true
 	}
 	cfg.MaxPageSize = ctx.Int(RPCMaxPageSize.Name)
 	cfg.DeploymentBlock = ctx.Int(ContractDeploymentBlock.Name)
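The rename above inverts the flag's polarity: discv5 now runs by default, and the boolean only opts out. A hypothetical sketch of the same opt-out pattern using the stdlib `flag` package (the real code uses urfave/cli, so the registration API here is a stand-in):

```go
package main

import (
	"flag"
	"fmt"
)

// discv5Enabled applies the inverted-flag semantics: the discovery
// service runs unless the operator explicitly disables it.
func discv5Enabled(disableFlag bool) bool {
	return !disableFlag
}

func main() {
	// Stand-in for the urfave/cli BoolFlag in the diff: defaulting the
	// disable flag to false makes the feature on-by-default.
	disableDiscv5 := flag.Bool("disable-discv5", false, "Does not run the discoveryV5 dht.")
	flag.Parse()
	fmt.Println("discv5 enabled:", discv5Enabled(*disableDiscv5))
}
```

Flipping a default this way avoids touching every existing deployment: operators who never passed the old enable flag now get discv5, and only those who object pass the new flag.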


@@ -166,6 +166,12 @@ func (s *Service) saveGenesisState(ctx context.Context, genesisState *stateTrie.
 	if err := s.beaconDB.SaveBlock(ctx, genesisBlk); err != nil {
 		return errors.Wrap(err, "could not save genesis block")
 	}
+	if err := s.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
+		Slot: 0,
+		Root: genesisBlkRoot[:],
+	}); err != nil {
+		return err
+	}
 	if err := s.beaconDB.SaveState(ctx, genesisState, genesisBlkRoot); err != nil {
 		return errors.Wrap(err, "could not save genesis state")
 	}


@@ -38,7 +38,7 @@ var appFlags = []cli.Flag{
 	flags.ContractDeploymentBlock,
 	flags.SetGCPercent,
 	flags.UnsafeSync,
-	flags.EnableDiscv5,
+	flags.DisableDiscv5,
 	flags.InteropMockEth1DataVotesFlag,
 	flags.InteropGenesisStateFlag,
 	flags.InteropNumValidatorsFlag,
@@ -59,6 +59,7 @@ var appFlags = []cli.Flag{
 	cmd.P2PHostDNS,
 	cmd.P2PMaxPeers,
 	cmd.P2PPrivKey,
+	cmd.P2PMetadata,
 	cmd.P2PWhitelist,
 	cmd.P2PEncoding,
 	cmd.DataDirFlag,


@@ -298,13 +298,15 @@ func (b *BeaconNode) registerP2P(ctx *cli.Context) error {
 		HostAddress:   ctx.String(cmd.P2PHost.Name),
 		HostDNS:       ctx.String(cmd.P2PHostDNS.Name),
 		PrivateKey:    ctx.String(cmd.P2PPrivKey.Name),
+		MetaDataDir:   ctx.String(cmd.P2PMetadata.Name),
 		TCPPort:       ctx.Uint(cmd.P2PTCPPort.Name),
 		UDPPort:       ctx.Uint(cmd.P2PUDPPort.Name),
 		MaxPeers:      ctx.Uint(cmd.P2PMaxPeers.Name),
 		WhitelistCIDR: ctx.String(cmd.P2PWhitelist.Name),
 		EnableUPnP:    ctx.Bool(cmd.EnableUPnPFlag.Name),
-		EnableDiscv5:  ctx.Bool(flags.EnableDiscv5.Name),
+		DisableDiscv5: ctx.Bool(flags.DisableDiscv5.Name),
 		Encoding:      ctx.String(cmd.P2PEncoding.Name),
+		StateNotifier: b,
 	})
 	if err != nil {
 		return err
@@ -441,6 +443,7 @@ func (b *BeaconNode) registerSyncService(ctx *cli.Context) error {
 		ExitPool:          b.exitPool,
 		SlashingPool:      b.slashingsPool,
 		StateSummaryCache: b.stateSummaryCache,
+		StateGen:          b.stateGen,
 	})
 	return b.services.RegisterService(rs)


@@ -3,7 +3,6 @@ load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
 go_library(
     name = "go_default_library",
     srcs = [
-        "aggregate.go",
         "log.go",
         "metrics.go",
         "pool.go",
@@ -34,7 +33,6 @@ go_library(
 go_test(
     name = "go_default_test",
     srcs = [
-        "aggregate_test.go",
         "pool_test.go",
        "prepare_forkchoice_test.go",
         "prune_expired_test.go",
@@ -50,6 +48,5 @@ go_test(
         "@com_github_gogo_protobuf//proto:go_default_library",
         "@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
         "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
-        "@in_gopkg_d4l3k_messagediff_v1//:go_default_library",
     ],
 )


@@ -1,79 +0,0 @@
-package attestations
-
-import (
-	"context"
-	"time"
-
-	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
-	"github.com/prysmaticlabs/go-ssz"
-	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
-	"github.com/prysmaticlabs/prysm/shared/params"
-	"go.opencensus.io/trace"
-)
-
-// Define time to aggregate the unaggregated attestations at 2 times per slot, this gives
-// enough confidence all the unaggregated attestations will be aggregated as aggregator requests.
-var timeToAggregate = time.Duration(params.BeaconConfig().SecondsPerSlot/2) * time.Second
-
-// This kicks off a routine to aggregate the unaggregated attestations from pool.
-func (s *Service) aggregateRoutine() {
-	ticker := time.NewTicker(timeToAggregate)
-	ctx := context.TODO()
-	for {
-		select {
-		case <-s.ctx.Done():
-			return
-		case <-ticker.C:
-			attsToBeAggregated := append(s.pool.UnaggregatedAttestations(), s.pool.AggregatedAttestations()...)
-			if err := s.aggregateAttestations(ctx, attsToBeAggregated); err != nil {
-				log.WithError(err).Error("Could not aggregate attestation")
-			}
-			// Update metrics for aggregated and unaggregated attestations count.
-			s.updateMetrics()
-		}
-	}
-}
-
-// This aggregates the input attestations via AggregateAttestations helper
-// function.
-func (s *Service) aggregateAttestations(ctx context.Context, attsToBeAggregated []*ethpb.Attestation) error {
-	ctx, span := trace.StartSpan(ctx, "Operations.attestations.aggregateAttestations")
-	defer span.End()
-
-	attsByRoot := make(map[[32]byte][]*ethpb.Attestation)
-	for _, att := range attsToBeAggregated {
-		attDataRoot, err := ssz.HashTreeRoot(att.Data)
-		if err != nil {
-			return err
-		}
-		attsByRoot[attDataRoot] = append(attsByRoot[attDataRoot], att)
-	}
-
-	for _, atts := range attsByRoot {
-		for _, att := range atts {
-			if !helpers.IsAggregated(att) && len(atts) > 1 {
-				if err := s.pool.DeleteUnaggregatedAttestation(att); err != nil {
-					return err
-				}
-			}
-		}
-	}
-
-	for _, atts := range attsByRoot {
-		aggregatedAtts, err := helpers.AggregateAttestations(atts)
-		if err != nil {
-			return err
-		}
-		for _, att := range aggregatedAtts {
-			if helpers.IsAggregated(att) {
-				if err := s.pool.SaveAggregatedAttestation(att); err != nil {
-					return err
-				}
-			}
-		}
-	}
-
-	return nil
-}


@@ -1,139 +0,0 @@
-package attestations
-
-import (
-	"context"
-	"reflect"
-	"sort"
-	"testing"
-
-	"github.com/gogo/protobuf/proto"
-	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
-	"github.com/prysmaticlabs/go-bitfield"
-	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
-	"github.com/prysmaticlabs/prysm/shared/bls"
-	"gopkg.in/d4l3k/messagediff.v1"
-)
-
-func TestAggregateAttestations_SingleAttestation(t *testing.T) {
-	s, err := NewService(context.Background(), &Config{Pool: NewPool()})
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	sk := bls.RandKey()
-	sig := sk.Sign([]byte("dummy_test_data"), 0 /*domain*/)
-
-	unaggregatedAtts := []*ethpb.Attestation{
-		{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b100001}, Signature: sig.Marshal()},
-	}
-	if err := s.aggregateAttestations(context.Background(), unaggregatedAtts); err != nil {
-		t.Fatal(err)
-	}
-
-	if len(s.pool.AggregatedAttestations()) != 0 {
-		t.Error("Nothing should be aggregated")
-	}
-
-	if len(s.pool.UnaggregatedAttestations()) != 0 {
-		t.Error("Unaggregated pool should be empty")
-	}
-}
-
-func TestAggregateAttestations_MultipleAttestationsSameRoot(t *testing.T) {
-	s, err := NewService(context.Background(), &Config{Pool: NewPool()})
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	sk := bls.RandKey()
-	sig := sk.Sign([]byte("dummy_test_data"), 0 /*domain*/)
-
-	data := &ethpb.AttestationData{
-		Source: &ethpb.Checkpoint{},
-		Target: &ethpb.Checkpoint{},
-	}
-	attsToBeAggregated := []*ethpb.Attestation{
-		{Data: data, AggregationBits: bitfield.Bitlist{0b110001}, Signature: sig.Marshal()},
-		{Data: data, AggregationBits: bitfield.Bitlist{0b100010}, Signature: sig.Marshal()},
-		{Data: data, AggregationBits: bitfield.Bitlist{0b101100}, Signature: sig.Marshal()},
-	}
-
-	if err := s.aggregateAttestations(context.Background(), attsToBeAggregated); err != nil {
-		t.Fatal(err)
-	}
-
-	if len(s.pool.UnaggregatedAttestations()) != 0 {
-		t.Error("Nothing should be unaggregated")
-	}
-
-	wanted, err := helpers.AggregateAttestations(attsToBeAggregated)
-	if err != nil {
-		t.Fatal(err)
-	}
-	got := s.pool.AggregatedAttestations()
-	if !reflect.DeepEqual(wanted, got) {
-		diff, _ := messagediff.PrettyDiff(got[0], wanted[0])
-		t.Log(diff)
-		t.Error("Did not aggregate attestations")
-	}
-}
-
-func TestAggregateAttestations_MultipleAttestationsDifferentRoots(t *testing.T) {
-	s, err := NewService(context.Background(), &Config{Pool: NewPool()})
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	mockRoot := [32]byte{}
-	d := &ethpb.AttestationData{
-		BeaconBlockRoot: mockRoot[:],
-		Source:          &ethpb.Checkpoint{Root: mockRoot[:]},
-		Target:          &ethpb.Checkpoint{Root: mockRoot[:]},
-	}
-	d1, ok := proto.Clone(d).(*ethpb.AttestationData)
-	if !ok {
-		t.Fatal("Entity is not of type *ethpb.AttestationData")
}
d1.Slot = 1
d2, ok := proto.Clone(d).(*ethpb.AttestationData)
if !ok {
t.Fatal("Entity is not of type *ethpb.AttestationData")
}
d2.Slot = 2
sk := bls.RandKey()
sig := sk.Sign([]byte("dummy_test_data"), 0 /*domain*/)
atts := []*ethpb.Attestation{
{Data: d, AggregationBits: bitfield.Bitlist{0b100001}, Signature: sig.Marshal()},
{Data: d, AggregationBits: bitfield.Bitlist{0b100010}, Signature: sig.Marshal()},
{Data: d1, AggregationBits: bitfield.Bitlist{0b100001}, Signature: sig.Marshal()},
{Data: d1, AggregationBits: bitfield.Bitlist{0b100110}, Signature: sig.Marshal()},
{Data: d2, AggregationBits: bitfield.Bitlist{0b100100}, Signature: sig.Marshal()},
}
if err := s.aggregateAttestations(context.Background(), atts); err != nil {
t.Fatal(err)
}
if len(s.pool.UnaggregatedAttestations()) != 0 {
t.Error("Unaggregated att pool did not clean up")
}
received := s.pool.AggregatedAttestations()
sort.Slice(received, func(i, j int) bool {
return received[i].Data.Slot < received[j].Data.Slot
})
att1, err := helpers.AggregateAttestations([]*ethpb.Attestation{atts[0], atts[1]})
if err != nil {
t.Error(err)
}
att2, err := helpers.AggregateAttestations([]*ethpb.Attestation{atts[2], atts[3]})
if err != nil {
t.Error(err)
}
wanted := append(att1, att2...)
if !reflect.DeepEqual(wanted, received) {
t.Error("Did not aggregate attestations")
}
}

View File

@@ -17,6 +17,7 @@ go_library(
        "//shared/hashutil:go_default_library",
        "@com_github_pkg_errors//:go_default_library",
        "@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
+       "@com_github_prysmaticlabs_go_ssz//:go_default_library",
    ],
)
@@ -31,6 +32,7 @@ go_test(
    ],
    embed = [":go_default_library"],
    deps = [
+       "//shared/bls:go_default_library",
        "@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
        "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
    ],

View File

@@ -3,10 +3,68 @@ package kv
import (
	"github.com/pkg/errors"
	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
+	"github.com/prysmaticlabs/go-ssz"
	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
)
// AggregateUnaggregatedAttestations aggregates the unaggregated attestations and saves the
// newly aggregated attestations in the pool.
// It tracks the unaggregated attestations that weren't able to aggregate to prevent
// the deletion of unaggregated attestations in the pool.
func (p *AttCaches) AggregateUnaggregatedAttestations() error {
	attsByDataRoot := make(map[[32]byte][]*ethpb.Attestation)
	unaggregatedAtts := p.UnaggregatedAttestations()
	for _, att := range unaggregatedAtts {
		attDataRoot, err := ssz.HashTreeRoot(att.Data)
		if err != nil {
			return err
		}
		attsByDataRoot[attDataRoot] = append(attsByDataRoot[attDataRoot], att)
	}
	// Aggregate unaggregated attestations from the pool and save them in the pool.
	// Track the unaggregated attestations that aren't able to aggregate.
	leftOverUnaggregatedAtt := make(map[[32]byte]bool)
	for _, atts := range attsByDataRoot {
		aggregatedAtts := make([]*ethpb.Attestation, 0, len(atts))
		processedAtts, err := helpers.AggregateAttestations(atts)
		if err != nil {
			return err
		}
		for _, att := range processedAtts {
			if helpers.IsAggregated(att) {
				aggregatedAtts = append(aggregatedAtts, att)
			} else {
				h, err := ssz.HashTreeRoot(att)
				if err != nil {
					return err
				}
				leftOverUnaggregatedAtt[h] = true
			}
		}
		if err := p.SaveAggregatedAttestations(aggregatedAtts); err != nil {
			return err
		}
	}
	// Remove the unaggregated attestations from the pool that were successfully aggregated.
	for _, att := range unaggregatedAtts {
		h, err := ssz.HashTreeRoot(att)
		if err != nil {
			return err
		}
		if leftOverUnaggregatedAtt[h] {
			continue
		}
		if err := p.DeleteUnaggregatedAttestation(att); err != nil {
			return err
		}
	}
	return nil
}
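The key bookkeeping in `AggregateUnaggregatedAttestations` is the "left over" set: anything that stayed unaggregated is remembered by key, and only the inputs *not* in that set are deleted from the pool. A sketch of that pattern with string keys and an illustrative `consumed` helper (the real code keys by `ssz.HashTreeRoot(att)`):

```go
package main

import "fmt"

// consumed returns the keys that are safe to delete from the pool: every
// input except those recorded as left over (i.e. still unaggregated).
func consumed(all []string, leftOver map[string]bool) []string {
	var deletable []string
	for _, k := range all {
		if leftOver[k] {
			continue // could not be aggregated; keep it in the unaggregated pool
		}
		deletable = append(deletable, k)
	}
	return deletable
}

func main() {
	all := []string{"att1", "att2", "att3"}
	leftOver := map[string]bool{"att2": true} // att2 failed to aggregate
	fmt.Println(consumed(all, leftOver))      // only att1 and att3 are removed
}
```

Without this set, a lone attestation with a unique data root would be deleted from the unaggregated pool even though nothing replaced it, which is exactly the bug the tracking prevents.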
// SaveAggregatedAttestation saves an aggregated attestation in cache.
func (p *AttCaches) SaveAggregatedAttestation(att *ethpb.Attestation) error {
	if att == nil || att.Data == nil {

View File

@@ -3,21 +3,39 @@ package kv
import (
	"reflect"
	"sort"
-	"strings"
	"testing"
	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
	"github.com/prysmaticlabs/go-bitfield"
+	"github.com/prysmaticlabs/prysm/shared/bls"
)
-func TestKV_Aggregated_NotAggregated(t *testing.T) {
+func TestKV_AggregateUnaggregatedAttestations(t *testing.T) {
	cache := NewAttCaches()
+	priv := bls.RandKey()
+	sig1 := priv.Sign([]byte{'a'})
+	sig2 := priv.Sign([]byte{'b'})
+	att1 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig1.Marshal()}
+	att2 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1010}, Signature: sig1.Marshal()}
+	att3 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1100}, Signature: sig1.Marshal()}
+	att4 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig2.Marshal()}
+	att5 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig1.Marshal()}
+	att6 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1010}, Signature: sig1.Marshal()}
+	att7 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1100}, Signature: sig1.Marshal()}
+	att8 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1001}, Signature: sig2.Marshal()}
+	atts := []*ethpb.Attestation{att1, att2, att3, att4, att5, att6, att7, att8}
+	if err := cache.SaveUnaggregatedAttestations(atts); err != nil {
+		t.Fatal(err)
+	}
+	if err := cache.AggregateUnaggregatedAttestations(); err != nil {
+		t.Fatal(err)
+	}
-	att := &ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11}, Data: &ethpb.AttestationData{}}
-	wanted := "attestation is not aggregated"
-	if err := cache.SaveAggregatedAttestation(att); !strings.Contains(err.Error(), wanted) {
-		t.Error("Did not received wanted error")
-	}
+	if len(cache.AggregatedAttestationsBySlotIndex(1, 0)) != 1 {
+		t.Fatal("Did not aggregate correctly")
+	}
+	if len(cache.AggregatedAttestationsBySlotIndex(2, 0)) != 1 {
+		t.Fatal("Did not aggregate correctly")
+	}
}

View File

@@ -11,6 +11,7 @@ import (
// for aggregator actor.
type Pool interface {
	// For Aggregated attestations
+	AggregateUnaggregatedAttestations() error
	SaveAggregatedAttestation(att *ethpb.Attestation) error
	SaveAggregatedAttestations(atts []*ethpb.Attestation) error
	AggregatedAttestations() []*ethpb.Attestation

View File

@@ -46,8 +46,10 @@ func (s *Service) batchForkChoiceAtts(ctx context.Context) error {
	attsByDataRoot := make(map[[32]byte][]*ethpb.Attestation)
-	atts := append(s.pool.UnaggregatedAttestations(), s.pool.AggregatedAttestations()...)
-	atts = append(atts, s.pool.BlockAttestations()...)
+	if err := s.pool.AggregateUnaggregatedAttestations(); err != nil {
+		return err
+	}
+	atts := append(s.pool.AggregatedAttestations(), s.pool.BlockAttestations()...)
	atts = append(atts, s.pool.ForkchoiceAttestations()...)
	// Consolidate attestations by aggregating them by similar data root.

View File

@@ -20,7 +20,7 @@ func TestBatchAttestations_Multiple(t *testing.T) {
	}
	sk := bls.RandKey()
-	sig := sk.Sign([]byte("dummy_test_data"), 0 /*domain*/)
+	sig := sk.Sign([]byte("dummy_test_data"))
	var mockRoot [32]byte
	unaggregatedAtts := []*ethpb.Attestation{
@@ -98,21 +98,24 @@ func TestBatchAttestations_Multiple(t *testing.T) {
		t.Fatal(err)
	}
-	wanted, err := helpers.AggregateAttestations([]*ethpb.Attestation{unaggregatedAtts[0], aggregatedAtts[0], blockAtts[0]})
+	wanted, err := helpers.AggregateAttestations([]*ethpb.Attestation{aggregatedAtts[0], blockAtts[0]})
	if err != nil {
		t.Fatal(err)
	}
-	aggregated, err := helpers.AggregateAttestations([]*ethpb.Attestation{unaggregatedAtts[1], aggregatedAtts[1], blockAtts[1]})
+	aggregated, err := helpers.AggregateAttestations([]*ethpb.Attestation{aggregatedAtts[1], blockAtts[1]})
	if err != nil {
		t.Fatal(err)
	}
	wanted = append(wanted, aggregated...)
-	aggregated, err = helpers.AggregateAttestations([]*ethpb.Attestation{unaggregatedAtts[2], aggregatedAtts[2], blockAtts[2]})
+	aggregated, err = helpers.AggregateAttestations([]*ethpb.Attestation{aggregatedAtts[2], blockAtts[2]})
	if err != nil {
		t.Fatal(err)
	}
	wanted = append(wanted, aggregated...)
+	if err := s.pool.AggregateUnaggregatedAttestations(); err != nil {
+		return
+	}
	received := s.pool.ForkchoiceAttestations()
	sort.Slice(received, func(i, j int) bool {
@@ -134,7 +137,7 @@ func TestBatchAttestations_Single(t *testing.T) {
	}
	sk := bls.RandKey()
-	sig := sk.Sign([]byte("dummy_test_data"), 0 /*domain*/)
+	sig := sk.Sign([]byte("dummy_test_data"))
	mockRoot := [32]byte{}
	d := &ethpb.AttestationData{
		BeaconBlockRoot: mockRoot[:],
@@ -194,7 +197,7 @@ func TestAggregateAndSaveForkChoiceAtts_Single(t *testing.T) {
	}
	sk := bls.RandKey()
-	sig := sk.Sign([]byte("dummy_test_data"), 0 /*domain*/)
+	sig := sk.Sign([]byte("dummy_test_data"))
	mockRoot := [32]byte{}
	d := &ethpb.AttestationData{
		BeaconBlockRoot: mockRoot[:],
@@ -226,7 +229,7 @@ func TestAggregateAndSaveForkChoiceAtts_Multiple(t *testing.T) {
	}
	sk := bls.RandKey()
-	sig := sk.Sign([]byte("dummy_test_data"), 0 /*domain*/)
+	sig := sk.Sign([]byte("dummy_test_data"))
	mockRoot := [32]byte{}
	d := &ethpb.AttestationData{
		BeaconBlockRoot: mockRoot[:],

View File

@@ -43,7 +43,6 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
// Start an attestation pool service's main event loop.
func (s *Service) Start() {
	go s.prepareForkChoiceAtts()
-	go s.aggregateRoutine()
	go s.pruneAttsPool()
}

View File

@@ -150,7 +150,7 @@ func (p *Pool) InsertProposerSlashing(
		return errors.Wrap(err, "could not verify proposer slashing")
	}
-	idx := slashing.ProposerIndex
+	idx := slashing.Header_1.Header.ProposerIndex
	ok, err := p.validatorSlashingPreconditionCheck(state, idx)
	if err != nil {
		return err
@@ -166,16 +166,17 @@ func (p *Pool) InsertProposerSlashing(
	// Check if the validator already exists in the list of slashings.
	// Use binary search to find the answer.
	found := sort.Search(len(p.pendingProposerSlashing), func(i int) bool {
-		return p.pendingProposerSlashing[i].ProposerIndex >= slashing.ProposerIndex
+		return p.pendingProposerSlashing[i].Header_1.Header.ProposerIndex >= slashing.Header_1.Header.ProposerIndex
	})
-	if found != len(p.pendingProposerSlashing) && p.pendingProposerSlashing[found].ProposerIndex == slashing.ProposerIndex {
+	if found != len(p.pendingProposerSlashing) && p.pendingProposerSlashing[found].Header_1.Header.ProposerIndex ==
+		slashing.Header_1.Header.ProposerIndex {
		return errors.New("slashing object already exists in pending proposer slashings")
	}
	// Insert into pending list and sort again.
	p.pendingProposerSlashing = append(p.pendingProposerSlashing, slashing)
	sort.Slice(p.pendingProposerSlashing, func(i, j int) bool {
-		return p.pendingProposerSlashing[i].ProposerIndex < p.pendingProposerSlashing[j].ProposerIndex
+		return p.pendingProposerSlashing[i].Header_1.Header.ProposerIndex < p.pendingProposerSlashing[j].Header_1.Header.ProposerIndex
	})
	return nil
}
@@ -206,12 +207,12 @@ func (p *Pool) MarkIncludedProposerSlashing(ps *ethpb.ProposerSlashing) {
	p.lock.Lock()
	defer p.lock.Unlock()
	i := sort.Search(len(p.pendingProposerSlashing), func(i int) bool {
-		return p.pendingProposerSlashing[i].ProposerIndex >= ps.ProposerIndex
+		return p.pendingProposerSlashing[i].Header_1.Header.ProposerIndex >= ps.Header_1.Header.ProposerIndex
	})
-	if i != len(p.pendingProposerSlashing) && p.pendingProposerSlashing[i].ProposerIndex == ps.ProposerIndex {
+	if i != len(p.pendingProposerSlashing) && p.pendingProposerSlashing[i].Header_1.Header.ProposerIndex == ps.Header_1.Header.ProposerIndex {
		p.pendingProposerSlashing = append(p.pendingProposerSlashing[:i], p.pendingProposerSlashing[i+1:]...)
	}
-	p.included[ps.ProposerIndex] = true
+	p.included[ps.Header_1.Header.ProposerIndex] = true
	numProposerSlashingsIncluded.Inc()
}
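The pool keeps pending slashings sorted by proposer index and uses `sort.Search` both to reject duplicates on insert and to locate entries for removal. A self-contained sketch of that insert-if-absent pattern, with plain `uint64` indices standing in for `Header_1.Header.ProposerIndex` (the helper name `insertUnique` is illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// insertUnique mirrors the pool's binary-search insert: find the first
// position whose index is >= the new one, reject an exact match as a
// duplicate, otherwise append and re-sort.
func insertUnique(pending []uint64, idx uint64) ([]uint64, bool) {
	found := sort.Search(len(pending), func(i int) bool { return pending[i] >= idx })
	if found != len(pending) && pending[found] == idx {
		return pending, false // already present
	}
	pending = append(pending, idx)
	sort.Slice(pending, func(i, j int) bool { return pending[i] < pending[j] })
	return pending, true
}

func main() {
	p := []uint64{1, 3, 7}
	p, ok := insertUnique(p, 5)
	fmt.Println(p, ok) // [1 3 5 7] true
	_, ok = insertUnique(p, 3)
	fmt.Println(ok) // false: duplicate rejected
}
```

`sort.Search` requires the slice to already be sorted, which is why every successful insert ends with a re-sort; appending then sorting keeps the invariant without manual slice splicing.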

View File

@@ -15,7 +15,12 @@ import (
func proposerSlashingForValIdx(valIdx uint64) *ethpb.ProposerSlashing {
	return &ethpb.ProposerSlashing{
-		ProposerIndex: valIdx,
+		Header_1: &ethpb.SignedBeaconBlockHeader{
+			Header: &ethpb.BeaconBlockHeader{ProposerIndex: valIdx},
+		},
+		Header_2: &ethpb.SignedBeaconBlockHeader{
+			Header: &ethpb.BeaconBlockHeader{ProposerIndex: valIdx},
+		},
	}
}
@@ -191,12 +196,12 @@ func TestPool_InsertProposerSlashing(t *testing.T) {
		t.Fatalf("Mismatched lengths of pending list. Got %d, wanted %d.", len(p.pendingProposerSlashing), len(tt.want))
	}
	for i := range p.pendingAttesterSlashing {
-		if p.pendingProposerSlashing[i].ProposerIndex != tt.want[i].ProposerIndex {
+		if p.pendingProposerSlashing[i].Header_1.Header.ProposerIndex != tt.want[i].Header_1.Header.ProposerIndex {
			t.Errorf(
				"Pending proposer to slash at index %d does not match expected. Got=%v wanted=%v",
				i,
-				p.pendingProposerSlashing[i].ProposerIndex,
-				tt.want[i].ProposerIndex,
+				p.pendingProposerSlashing[i].Header_1.Header.ProposerIndex,
+				tt.want[i].Header_1.Header.ProposerIndex,
			)
		}
		if !proto.Equal(p.pendingProposerSlashing[i], tt.want[i]) {

View File

@@ -9,6 +9,7 @@ go_library(
        "dial_relay_node.go",
        "discovery.go",
        "doc.go",
+       "fork.go",
        "gossip_topic_mappings.go",
        "handshake.go",
        "info.go",
@@ -20,6 +21,7 @@ go_library(
        "rpc_topic_mappings.go",
        "sender.go",
        "service.go",
+       "subnets.go",
        "utils.go",
        "watch_peers.go",
    ],
@@ -30,6 +32,9 @@ go_library(
    ],
    deps = [
        "//beacon-chain/cache:go_default_library",
+       "//beacon-chain/core/feed:go_default_library",
+       "//beacon-chain/core/feed/state:go_default_library",
+       "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/p2p/connmgr:go_default_library",
        "//beacon-chain/p2p/encoder:go_default_library",
        "//beacon-chain/p2p/peers:go_default_library",
@@ -38,7 +43,9 @@ go_library(
        "//shared/featureconfig:go_default_library",
        "//shared/hashutil:go_default_library",
        "//shared/iputils:go_default_library",
+       "//shared/params:go_default_library",
        "//shared/runutil:go_default_library",
+       "//shared/sliceutil:go_default_library",
        "//shared/traceutil:go_default_library",
        "@com_github_btcsuite_btcd//btcec:go_default_library",
        "@com_github_dgraph_io_ristretto//:go_default_library",
@@ -72,6 +79,7 @@ go_library(
        "@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
        "@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
        "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
+       "@com_github_prysmaticlabs_go_ssz//:go_default_library",
        "@com_github_sirupsen_logrus//:go_default_library",
        "@io_opencensus_go//trace:go_default_library",
    ],
@@ -84,20 +92,29 @@ go_test(
        "broadcaster_test.go",
        "dial_relay_node_test.go",
        "discovery_test.go",
+       "fork_test.go",
        "gossip_topic_mappings_test.go",
        "options_test.go",
        "parameter_test.go",
        "sender_test.go",
        "service_test.go",
+       "subnets_test.go",
    ],
    embed = [":go_default_library"],
    flaky = True,
    tags = ["block-network"],
    deps = [
+       "//beacon-chain/blockchain/testing:go_default_library",
        "//beacon-chain/cache:go_default_library",
+       "//beacon-chain/core/feed:go_default_library",
+       "//beacon-chain/core/feed/state:go_default_library",
+       "//beacon-chain/core/helpers:go_default_library",
+       "//beacon-chain/db/testing:go_default_library",
        "//beacon-chain/p2p/testing:go_default_library",
+       "//proto/beacon/p2p/v1:go_default_library",
        "//proto/testing:go_default_library",
        "//shared/iputils:go_default_library",
+       "//shared/params:go_default_library",
        "//shared/testutil:go_default_library",
        "@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
        "@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
@@ -114,6 +131,8 @@ go_test(
        "@com_github_multiformats_go_multiaddr//:go_default_library",
        "@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
        "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
+       "@com_github_prysmaticlabs_go_ssz//:go_default_library",
+       "@com_github_sirupsen_logrus//:go_default_library",
        "@com_github_sirupsen_logrus//hooks/test:go_default_library",
    ],
)

View File

@@ -22,11 +22,15 @@ var ErrMessageNotMapped = errors.New("message type is not mapped to a PubSub topic")
func (s *Service) Broadcast(ctx context.Context, msg proto.Message) error {
	ctx, span := trace.StartSpan(ctx, "p2p.Broadcast")
	defer span.End()
+	forkDigest, err := s.ForkDigest()
+	if err != nil {
+		return err
+	}
	var topic string
	switch msg.(type) {
	case *eth.Attestation:
-		topic = attestationToTopic(msg.(*eth.Attestation))
+		topic = attestationToTopic(msg.(*eth.Attestation), forkDigest)
	default:
		var ok bool
		topic, ok = GossipTypeMapping[reflect.TypeOf(msg)]
@@ -34,6 +38,7 @@ func (s *Service) Broadcast(ctx context.Context, msg proto.Message) error {
			traceutil.AnnotateError(span, ErrMessageNotMapped)
			return ErrMessageNotMapped
		}
+		topic = fmt.Sprintf(topic, forkDigest)
	}
	span.AddAttributes(trace.StringAttribute("topic", topic))
@@ -59,11 +64,11 @@ func (s *Service) Broadcast(ctx context.Context, msg proto.Message) error {
	return nil
}
-const attestationSubnetTopicFormat = "/eth2/committee_index%d_beacon_attestation"
+const attestationSubnetTopicFormat = "/eth2/%x/committee_index%d_beacon_attestation"
-func attestationToTopic(att *eth.Attestation) string {
+func attestationToTopic(att *eth.Attestation, forkDigest [4]byte) string {
	if att == nil || att.Data == nil {
		return ""
	}
-	return fmt.Sprintf(attestationSubnetTopicFormat, att.Data.CommitteeIndex)
+	return fmt.Sprintf(attestationSubnetTopicFormat, forkDigest, att.Data.CommitteeIndex)
}
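The new topic format prefixes the subnet name with the 4-byte fork digest, which `%x` renders as 8 hex characters. A minimal reproduction of the constant and formatting from the diff above (the `topicFor` wrapper is illustrative; the real code inlines the `fmt.Sprintf` call):

```go
package main

import "fmt"

// attestationSubnetTopicFormat is the topic format from the diff above:
// fork digest first (%x on a [4]byte prints 8 hex chars), then the
// committee index.
const attestationSubnetTopicFormat = "/eth2/%x/committee_index%d_beacon_attestation"

func topicFor(digest [4]byte, committeeIndex uint64) string {
	return fmt.Sprintf(attestationSubnetTopicFormat, digest, committeeIndex)
}

func main() {
	// A zero digest matches the "/eth2/00000000/..." topics in the tests below.
	fmt.Println(topicFor([4]byte{}, 0)) // /eth2/00000000/committee_index0_beacon_attestation
}
```

This is why the updated test expectations read `/eth2/00000000/committee_index%d_beacon_attestation`: the tests pass a zero-valued `[4]byte` digest.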

View File

@@ -2,6 +2,7 @@ package p2p
import (
	"context"
+	"fmt"
	"reflect"
	"sync"
	"testing"
@@ -9,8 +10,8 @@ import (
	"github.com/gogo/protobuf/proto"
	eth "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
-	p2ptest "github.com/prysmaticlabs/prysm/beacon-chain/p2p/testing"
	testpb "github.com/prysmaticlabs/prysm/proto/testing"
+	p2ptest "github.com/prysmaticlabs/prysm/beacon-chain/p2p/testing"
	"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -34,11 +35,17 @@ func TestService_Broadcast(t *testing.T) {
		Bar: 55,
	}
+	topic := "/eth2/%x/testing"
	// Set a test gossip mapping for testpb.TestSimpleMessage.
-	GossipTypeMapping[reflect.TypeOf(msg)] = "/testing"
+	GossipTypeMapping[reflect.TypeOf(msg)] = topic
+	digest, err := p.ForkDigest()
+	if err != nil {
+		t.Fatal(err)
+	}
+	topic = fmt.Sprintf(topic, digest)
	// External peer subscribes to the topic.
-	topic := "/testing" + p.Encoding().ProtocolSuffix()
+	topic += p.Encoding().ProtocolSuffix()
	sub, err := p2.PubSub().Subscribe(topic)
	if err != nil {
		t.Fatal(err)
@@ -49,24 +56,24 @@ func TestService_Broadcast(t *testing.T) {
	// Async listen for the pubsub, must be before the broadcast.
	var wg sync.WaitGroup
	wg.Add(1)
-	go func() {
+	go func(tt *testing.T) {
		defer wg.Done()
		ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
		defer cancel()
		incomingMessage, err := sub.Next(ctx)
		if err != nil {
-			t.Fatal(err)
+			tt.Fatal(err)
		}
		result := &testpb.TestSimpleMessage{}
		if err := p.Encoding().Decode(incomingMessage.Data, result); err != nil {
-			t.Fatal(err)
+			tt.Fatal(err)
		}
		if !proto.Equal(result, msg) {
-			t.Errorf("Did not receive expected message, got %+v, wanted %+v", result, msg)
+			tt.Errorf("Did not receive expected message, got %+v, wanted %+v", result, msg)
		}
-	}()
+	}(t)
	// Broadcast to peers and wait.
	if err := p.Broadcast(context.Background(), msg); err != nil {
@@ -99,7 +106,7 @@ func TestService_Attestation_Subnet(t *testing.T) {
				CommitteeIndex: 0,
			},
		},
-		topic: "/eth2/committee_index0_beacon_attestation",
+		topic: "/eth2/00000000/committee_index0_beacon_attestation",
	},
	{
		att: &eth.Attestation{
@@ -107,7 +114,7 @@ func TestService_Attestation_Subnet(t *testing.T) {
				CommitteeIndex: 11,
			},
		},
-		topic: "/eth2/committee_index11_beacon_attestation",
+		topic: "/eth2/00000000/committee_index11_beacon_attestation",
	},
	{
		att: &eth.Attestation{
@@ -115,7 +122,7 @@ func TestService_Attestation_Subnet(t *testing.T) {
				CommitteeIndex: 55,
			},
		},
-		topic: "/eth2/committee_index55_beacon_attestation",
+		topic: "/eth2/00000000/committee_index55_beacon_attestation",
	},
	{
		att: &eth.Attestation{},
@@ -126,7 +133,7 @@ func TestService_Attestation_Subnet(t *testing.T) {
	},
	}
	for _, tt := range tests {
-		if res := attestationToTopic(tt.att); res != tt.topic {
+		if res := attestationToTopic(tt.att, [4]byte{} /* fork digest */); res != tt.topic {
			t.Errorf("Wrong topic, got %s wanted %s", res, tt.topic)
		}
	}

View File

@@ -1,11 +1,15 @@
package p2p
+import (
+	statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
+)
// Config for the p2p service. These parameters are set from application level flags
// to initialize the p2p service.
type Config struct {
	NoDiscovery bool
	EnableUPnP bool
-	EnableDiscv5 bool
+	DisableDiscv5 bool
	StaticPeers []string
	BootstrapNodeAddr []string
	KademliaBootStrapAddr []string
@@ -16,9 +20,11 @@ type Config struct {
	HostDNS string
	PrivateKey string
	DataDir string
+	MetaDataDir string
	TCPPort uint
	UDPPort uint
	MaxPeers uint
	WhitelistCIDR string
	Encoding string
+	StateNotifier statefeed.Notifier
}

View File

@@ -14,12 +14,8 @@ import (
     "github.com/libp2p/go-libp2p-core/peer"
     ma "github.com/multiformats/go-multiaddr"
     "github.com/pkg/errors"
-    "github.com/prysmaticlabs/go-bitfield"
 )

-const attestationSubnetCount = 64
-const attSubnetEnrKey = "attnets"
-
 // Listener defines the discovery V5 network interface that is used
 // to communicate with other peers.
 type Listener interface {
@@ -34,10 +30,13 @@ type Listener interface {
     LocalNode() *enode.LocalNode
 }

-func createListener(ipAddr net.IP, privKey *ecdsa.PrivateKey, cfg *Config) *discover.UDPv5 {
+func (s *Service) createListener(
+    ipAddr net.IP,
+    privKey *ecdsa.PrivateKey,
+) *discover.UDPv5 {
     udpAddr := &net.UDPAddr{
         IP:   ipAddr,
-        Port: int(cfg.UDPPort),
+        Port: int(s.cfg.UDPPort),
     }
     // assume ip is either ipv4 or ipv6
     networkVersion := ""
@@ -50,12 +49,17 @@ func createListener(ipAddr net.IP, privKey *ecdsa.PrivateKey, cfg *Config) *disc
     if err != nil {
         log.Fatal(err)
     }
-    localNode, err := createLocalNode(privKey, ipAddr, int(cfg.UDPPort), int(cfg.TCPPort))
+    localNode, err := s.createLocalNode(
+        privKey,
+        ipAddr,
+        int(s.cfg.UDPPort),
+        int(s.cfg.TCPPort),
+    )
     if err != nil {
         log.Fatal(err)
     }
-    if cfg.HostAddress != "" {
-        hostIP := net.ParseIP(cfg.HostAddress)
+    if s.cfg.HostAddress != "" {
+        hostIP := net.ParseIP(s.cfg.HostAddress)
         if hostIP.To4() == nil && hostIP.To16() == nil {
             log.Errorf("Invalid host address given: %s", hostIP.String())
         } else {
@@ -66,7 +70,7 @@ func createListener(ipAddr net.IP, privKey *ecdsa.PrivateKey, cfg *Config) *disc
         PrivateKey: privKey,
     }
     dv5Cfg.Bootnodes = []*enode.Node{}
-    for _, addr := range cfg.Discv5BootStrapAddr {
+    for _, addr := range s.cfg.Discv5BootStrapAddr {
         bootNode, err := enode.Parse(enode.ValidSchemes, addr)
         if err != nil {
             log.Fatal(err)
@@ -81,7 +85,12 @@ func createListener(ipAddr net.IP, privKey *ecdsa.PrivateKey, cfg *Config) *disc
     return network
 }

-func createLocalNode(privKey *ecdsa.PrivateKey, ipAddr net.IP, udpPort int, tcpPort int) (*enode.LocalNode, error) {
+func (s *Service) createLocalNode(
+    privKey *ecdsa.PrivateKey,
+    ipAddr net.IP,
+    udpPort int,
+    tcpPort int,
+) (*enode.LocalNode, error) {
     db, err := enode.OpenDB("")
     if err != nil {
         return nil, errors.Wrap(err, "could not open node's peer database")
@@ -96,11 +105,18 @@ func createLocalNode(privKey *ecdsa.PrivateKey, ipAddr net.IP, udpPort int, tcpP
     localNode.SetFallbackIP(ipAddr)
     localNode.SetFallbackUDP(udpPort)
+    localNode, err = addForkEntry(localNode, s.genesisTime, s.genesisValidatorsRoot)
+    if err != nil {
+        return nil, errors.Wrap(err, "could not add eth2 fork version entry to enr")
+    }
     return intializeAttSubnets(localNode), nil
 }

-func startDiscoveryV5(addr net.IP, privKey *ecdsa.PrivateKey, cfg *Config) (*discover.UDPv5, error) {
-    listener := createListener(addr, privKey, cfg)
+func (s *Service) startDiscoveryV5(
+    addr net.IP,
+    privKey *ecdsa.PrivateKey,
+) (*discover.UDPv5, error) {
+    listener := s.createListener(addr, privKey)
     record := listener.Self()
     log.WithField("ENR", record.String()).Info("Started discovery v5")
     return listener, nil
@@ -120,29 +136,6 @@ func startDHTDiscovery(host core.Host, bootstrapAddr string) error {
     return err
 }

-func intializeAttSubnets(node *enode.LocalNode) *enode.LocalNode {
-    bitV := bitfield.NewBitvector64()
-    entry := enr.WithEntry(attSubnetEnrKey, bitV.Bytes())
-    node.Set(entry)
-    return node
-}
-
-func retrieveAttSubnets(record *enr.Record) ([]uint64, error) {
-    bitV := bitfield.NewBitvector64()
-    entry := enr.WithEntry(attSubnetEnrKey, &bitV)
-    err := record.Load(entry)
-    if err != nil {
-        return nil, err
-    }
-    committeeIdxs := []uint64{}
-    for i := uint64(0); i < 64; i++ {
-        if bitV.BitAt(i) {
-            committeeIdxs = append(committeeIdxs, i)
-        }
-    }
-    return committeeIdxs, nil
-}
-
 func parseBootStrapAddrs(addrs []string) (discv5Nodes []string, kadDHTNodes []string) {
     discv5Nodes, kadDHTNodes = parseGenericAddrs(addrs)
     if len(discv5Nodes) == 0 && len(kadDHTNodes) == 0 {
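The removed `retrieveAttSubnets` helper decodes the `attnets` ENR entry: a 64-bit bitvector whose set bits mark the attestation subnets a peer subscribes to (the helper is presumably relocated rather than deleted, since `createLocalNode` still calls `intializeAttSubnets`). A stdlib-only sketch of that decoding, using the same little-endian bit order within each byte that go-bitfield's `Bitvector64` uses:

```go
package main

import "fmt"

// subnetIndices mimics the removed retrieveAttSubnets logic: walk a
// 64-bit attnets bitvector and collect the indices of set bits.
// Bit i lives in byte i/8 at position i%8 (little-endian within bytes).
func subnetIndices(bitV [8]byte) []uint64 {
    ids := []uint64{}
    for i := uint64(0); i < 64; i++ {
        if bitV[i/8]&(1<<(i%8)) != 0 {
            ids = append(ids, i)
        }
    }
    return ids
}

func main() {
    var bitV [8]byte
    bitV[0] |= 1 << 3 // subnet 3
    bitV[1] |= 1 << 2 // subnet 10 (byte 1, bit 2)
    fmt.Println(subnetIndices(bitV)) // [3 10]
}
```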


@@ -13,10 +13,11 @@ import (
     "github.com/ethereum/go-ethereum/p2p/discover"
     "github.com/ethereum/go-ethereum/p2p/enode"
-    "github.com/ethereum/go-ethereum/p2p/enr"
     "github.com/libp2p/go-libp2p-core/host"
-    "github.com/prysmaticlabs/go-bitfield"
-    "github.com/prysmaticlabs/prysm/beacon-chain/cache"
+    mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
+    "github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
+    statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
+    testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
     "github.com/prysmaticlabs/prysm/shared/iputils"
     "github.com/prysmaticlabs/prysm/shared/testutil"
     logTest "github.com/sirupsen/logrus/hooks/test"
@@ -51,7 +52,10 @@ func createAddrAndPrivKey(t *testing.T) (net.IP, *ecdsa.PrivateKey) {
 func TestCreateListener(t *testing.T) {
     port := 1024
     ipAddr, pkey := createAddrAndPrivKey(t)
-    listener := createListener(ipAddr, pkey, &Config{UDPPort: uint(port)})
+    s := &Service{
+        cfg: &Config{UDPPort: uint(port)},
+    }
+    listener := s.createListener(ipAddr, pkey)
     defer listener.Close()

     if !listener.Self().IP().Equal(ipAddr) {
@@ -73,26 +77,44 @@ func TestCreateListener(t *testing.T) {
 func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
     port := 2000
     ipAddr, pkey := createAddrAndPrivKey(t)
-    bootListener := createListener(ipAddr, pkey, &Config{UDPPort: uint(port)})
+    genesisTime := time.Now()
+    genesisValidatorsRoot := make([]byte, 32)
+    s := &Service{
+        cfg:                   &Config{UDPPort: uint(port)},
+        genesisTime:           genesisTime,
+        genesisValidatorsRoot: genesisValidatorsRoot,
+    }
+    bootListener := s.createListener(ipAddr, pkey)
     defer bootListener.Close()

     bootNode := bootListener.Self()
-    cfg := &Config{
-        Discv5BootStrapAddr: []string{bootNode.String()},
-        Encoding:            "ssz",
-    }

     var listeners []*discover.UDPv5
     for i := 1; i <= 5; i++ {
         port = 3000 + i
-        cfg.UDPPort = uint(port)
+        cfg := &Config{
+            Discv5BootStrapAddr: []string{bootNode.String()},
+            Encoding:            "ssz",
+            UDPPort:             uint(port),
+        }
         ipAddr, pkey := createAddrAndPrivKey(t)
-        listener, err := startDiscoveryV5(ipAddr, pkey, cfg)
+        s = &Service{
+            cfg:                   cfg,
+            genesisTime:           genesisTime,
+            genesisValidatorsRoot: genesisValidatorsRoot,
+        }
+        listener, err := s.startDiscoveryV5(ipAddr, pkey)
         if err != nil {
             t.Errorf("Could not start discovery for node: %v", err)
         }
         listeners = append(listeners, listener)
     }
+    defer func() {
+        // Close down all peers.
+        for _, listener := range listeners {
+            listener.Close()
+        }
+    }()

     // Wait for the nodes to have their local routing tables to be populated with the other nodes
     time.Sleep(discoveryWaitTime)
@@ -103,105 +125,13 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
         t.Errorf("The node's local table doesn't have the expected number of nodes. "+
             "Expected more than or equal to %d but got %d", 4, len(nodes))
     }
-
-    // Close all ports
-    for _, listener := range listeners {
-        listener.Close()
-    }
-}
-
-func TestStartDiscV5_DiscoverPeersWithSubnets(t *testing.T) {
-    port := 2000
-    ipAddr, pkey := createAddrAndPrivKey(t)
-    bootListener := createListener(ipAddr, pkey, &Config{UDPPort: uint(port)})
-    defer bootListener.Close()
-
-    bootNode := bootListener.Self()
-    cfg := &Config{
-        BootstrapNodeAddr:   []string{bootNode.String()},
-        Discv5BootStrapAddr: []string{bootNode.String()},
-        Encoding:            "ssz",
-        MaxPeers:            30,
-    }
-
-    // Use shorter period for testing.
-    currentPeriod := pollingPeriod
-    pollingPeriod = 1 * time.Second
-    defer func() {
-        pollingPeriod = currentPeriod
-    }()
-
-    var listeners []*discover.UDPv5
-    for i := 1; i <= 3; i++ {
-        port = 3000 + i
-        cfg.UDPPort = uint(port)
-        ipAddr, pkey := createAddrAndPrivKey(t)
-        listener, err := startDiscoveryV5(ipAddr, pkey, cfg)
-        if err != nil {
-            t.Errorf("Could not start discovery for node: %v", err)
-        }
-        bitV := bitfield.NewBitvector64()
-        bitV.SetBitAt(uint64(i), true)
-        entry := enr.WithEntry(attSubnetEnrKey, &bitV)
-        listener.LocalNode().Set(entry)
-        listeners = append(listeners, listener)
-    }
-
-    // Make one service on port 3001.
-    port = 4000
-    cfg.UDPPort = uint(port)
-    s, err := NewService(cfg)
-    if err != nil {
-        t.Fatal(err)
-    }
-    s.Start()
-    defer func() {
-        if err := s.Stop(); err != nil {
-            t.Log(err)
-        }
-    }()
-
-    // Wait for the nodes to have their local routing tables to be populated with the other nodes
-    time.Sleep(discoveryWaitTime)
-
-    // look up 3 different subnets
-    exists, err := s.FindPeersWithSubnet(1)
-    if err != nil {
-        t.Fatal(err)
-    }
-    exists2, err := s.FindPeersWithSubnet(2)
-    if err != nil {
-        t.Fatal(err)
-    }
-    exists3, err := s.FindPeersWithSubnet(3)
-    if err != nil {
-        t.Fatal(err)
-    }
-    if !exists || !exists2 || !exists3 {
-        t.Fatal("Peer with subnet doesn't exist")
-    }
-
-    // update ENR of a peer
-    testService := &Service{dv5Listener: listeners[0]}
-    cache.CommitteeIDs.AddIDs([]uint64{10}, 0)
-    testService.RefreshENR(0)
-    time.Sleep(2 * time.Second)
-
-    exists, err = s.FindPeersWithSubnet(2)
-    if err != nil {
-        t.Fatal(err)
-    }
-    if !exists {
-        t.Fatal("Peer with subnet doesn't exist")
-    }
 }

 func TestMultiAddrsConversion_InvalidIPAddr(t *testing.T) {
     addr := net.ParseIP("invalidIP")
     _, pkey := createAddrAndPrivKey(t)
-    node, err := createLocalNode(pkey, addr, 0, 0)
+    s := &Service{}
+    node, err := s.createLocalNode(pkey, addr, 0, 0)
     if err != nil {
         t.Fatal(err)
     }
@@ -214,7 +144,14 @@ func TestMultiAddrsConversion_InvalidIPAddr(t *testing.T) {
 func TestMultiAddrConversion_OK(t *testing.T) {
     hook := logTest.NewGlobal()
     ipAddr, pkey := createAddrAndPrivKey(t)
-    listener := createListener(ipAddr, pkey, &Config{})
+    s := &Service{
+        cfg: &Config{
+            TCPPort: 0,
+            UDPPort: 0,
+        },
+    }
+    listener := s.createListener(ipAddr, pkey)
+    defer listener.Close()

     _ = convertToMultiAddr([]*enode.Node{listener.Self()})
     testutil.AssertLogsDoNotContain(t, hook, "Node doesn't have an ip4 address")
@@ -223,8 +160,12 @@ func TestMultiAddrConversion_OK(t *testing.T) {
 }

 func TestStaticPeering_PeersAreAdded(t *testing.T) {
-    cfg := &Config{Encoding: "ssz", MaxPeers: 30}
-    port := 3000
+    db := testDB.SetupDB(t)
+    defer testDB.TeardownDB(t, db)
+    cfg := &Config{
+        Encoding: "ssz", MaxPeers: 30,
+    }
+    port := 6000
     var staticPeers []string
     var hosts []host.Host
     // setup other nodes
@@ -242,26 +183,37 @@ func TestStaticPeering_PeersAreAdded(t *testing.T) {
         }
     }()

-    cfg.TCPPort = 14001
-    cfg.UDPPort = 14000
+    cfg.TCPPort = 14500
+    cfg.UDPPort = 14501
     cfg.StaticPeers = staticPeers
+    cfg.StateNotifier = &mock.MockStateNotifier{}

     s, err := NewService(cfg)
     if err != nil {
         t.Fatal(err)
     }
-    s.Start()
-    s.dv5Listener = &mockListener{}
-    defer func() {
-        if err := s.Stop(); err != nil {
-            t.Log(err)
-        }
-    }()
-    time.Sleep(100 * time.Millisecond)
+    exitRoutine := make(chan bool)
+    go func() {
+        s.Start()
+        <-exitRoutine
+    }()
+    // Send in a loop to ensure it is delivered (busy wait for the service to subscribe to the state feed).
+    for sent := 0; sent == 0; {
+        sent = s.stateNotifier.StateFeed().Send(&feed.Event{
+            Type: statefeed.Initialized,
+            Data: &statefeed.InitializedData{
+                StartTime:             time.Now(),
+                GenesisValidatorsRoot: make([]byte, 32),
+            },
+        })
+    }
+    time.Sleep(4 * time.Second)
     peers := s.host.Network().Peers()
     if len(peers) != 5 {
         t.Errorf("Not all peers added to peerstore, wanted %d but got %d", 5, len(peers))
     }
+    if err := s.Stop(); err != nil {
+        t.Fatal(err)
+    }
+    exitRoutine <- true
 }
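The busy-wait above relies on a property of go-ethereum's `event.Feed`: `Send` returns the number of subscribers it delivered to, so looping until it returns non-zero waits for the service to subscribe without racing it. A minimal stand-in feed (hypothetical, not go-ethereum's implementation) illustrates the pattern:

```go
package main

import (
    "fmt"
    "sync"
)

// miniFeed imitates the event-feed semantics the test relies on: Send
// reports how many subscribers received the event, so sending in a loop
// until Send returns non-zero busy-waits for a subscriber to appear.
type miniFeed struct {
    mu   sync.Mutex
    subs []chan int
}

func (f *miniFeed) Subscribe() chan int {
    f.mu.Lock()
    defer f.mu.Unlock()
    ch := make(chan int, 1)
    f.subs = append(f.subs, ch)
    return ch
}

func (f *miniFeed) Send(v int) int {
    f.mu.Lock()
    defer f.mu.Unlock()
    for _, ch := range f.subs {
        ch <- v
    }
    return len(f.subs)
}

func main() {
    f := &miniFeed{}
    done := make(chan struct{})
    go func() {
        sub := f.Subscribe()
        fmt.Println("got event:", <-sub)
        close(done)
    }()
    // Busy-wait: Send returns 0 until the goroutine has subscribed.
    for sent := 0; sent == 0; {
        sent = f.Send(42)
    }
    <-done
}
```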


@@ -16,6 +16,7 @@ go_library(
         "@com_github_gogo_protobuf//proto:go_default_library",
         "@com_github_golang_snappy//:go_default_library",
         "@com_github_prysmaticlabs_go_ssz//:go_default_library",
+        "@com_github_sirupsen_logrus//:go_default_library",
     ],
 )


@@ -1,12 +1,14 @@
 package encoder

 import (
+    "bytes"
     "fmt"
     "io"

     "github.com/gogo/protobuf/proto"
     "github.com/golang/snappy"
     "github.com/prysmaticlabs/go-ssz"
+    "github.com/sirupsen/logrus"
 )

 var _ = NetworkEncoding(&SszNetworkEncoder{})
@@ -21,14 +23,7 @@ type SszNetworkEncoder struct {
 }

 func (e SszNetworkEncoder) doEncode(msg interface{}) ([]byte, error) {
-    b, err := ssz.Marshal(msg)
-    if err != nil {
-        return nil, err
-    }
-    if e.UseSnappyCompression {
-        b = snappy.Encode(nil /*dst*/, b)
-    }
-    return b, nil
+    return ssz.Marshal(msg)
 }

 // Encode the proto message to the io.Writer.
@@ -36,11 +31,13 @@ func (e SszNetworkEncoder) Encode(w io.Writer, msg interface{}) (int, error) {
     if msg == nil {
         return 0, nil
     }
     b, err := e.doEncode(msg)
     if err != nil {
         return 0, err
     }
+    if e.UseSnappyCompression {
+        return writeSnappyBuffer(w, b)
+    }
     return w.Write(b)
 }
@@ -54,7 +51,14 @@ func (e SszNetworkEncoder) EncodeWithLength(w io.Writer, msg interface{}) (int,
     if err != nil {
         return 0, err
     }
-    b = append(proto.EncodeVarint(uint64(len(b))), b...)
+    // write varint first
+    _, err = w.Write(proto.EncodeVarint(uint64(len(b))))
+    if err != nil {
+        return 0, err
+    }
+    if e.UseSnappyCompression {
+        return writeSnappyBuffer(w, b)
+    }
     return w.Write(b)
 }
@@ -71,21 +75,34 @@ func (e SszNetworkEncoder) EncodeWithMaxLength(w io.Writer, msg interface{}, max
     if uint64(len(b)) > maxSize {
         return 0, fmt.Errorf("size of encoded message is %d which is larger than the provided max limit of %d", len(b), maxSize)
     }
-    b = append(proto.EncodeVarint(uint64(len(b))), b...)
+    // write varint first
+    _, err = w.Write(proto.EncodeVarint(uint64(len(b))))
+    if err != nil {
+        return 0, err
+    }
+    if e.UseSnappyCompression {
+        return writeSnappyBuffer(w, b)
+    }
     return w.Write(b)
 }

+func (e SszNetworkEncoder) doDecode(b []byte, to interface{}) error {
+    return ssz.Unmarshal(b, to)
+}
+
 // Decode the bytes to the protobuf message provided.
 func (e SszNetworkEncoder) Decode(b []byte, to interface{}) error {
     if e.UseSnappyCompression {
-        var err error
-        b, err = snappy.Decode(nil /*dst*/, b)
+        newBuffer := bytes.NewBuffer(b)
+        r := snappy.NewReader(newBuffer)
+        newObj := make([]byte, len(b))
+        numOfBytes, err := r.Read(newObj)
         if err != nil {
             return err
         }
+        return e.doDecode(newObj[:numOfBytes], to)
     }
-    return ssz.Unmarshal(b, to)
+    return e.doDecode(b, to)
 }

 // DecodeWithLength the bytes from io.Reader to the protobuf message provided.
@@ -103,15 +120,18 @@ func (e SszNetworkEncoder) DecodeWithMaxLength(r io.Reader, to interface{}, maxS
     if err != nil {
         return err
     }
+    if e.UseSnappyCompression {
+        r = snappy.NewReader(r)
+    }
     if msgLen > maxSize {
         return fmt.Errorf("size of decoded message is %d which is larger than the provided max limit of %d", msgLen, maxSize)
     }
-    b := make([]byte, msgLen)
-    _, err = r.Read(b)
+    b := make([]byte, e.MaxLength(int(msgLen)))
+    numOfBytes, err := r.Read(b)
     if err != nil {
         return err
     }
-    return e.Decode(b, to)
+    return e.doDecode(b[:numOfBytes], to)
 }

 // ProtocolSuffix returns the appropriate suffix for protocol IDs.
@@ -121,3 +141,23 @@ func (e SszNetworkEncoder) ProtocolSuffix() string {
     }
     return "/ssz"
 }
+
+// MaxLength specifies the maximum possible length of an encoded
+// chunk of data.
+func (e SszNetworkEncoder) MaxLength(length int) int {
+    if e.UseSnappyCompression {
+        return snappy.MaxEncodedLen(length)
+    }
+    return length
+}
+
+// Writes a bytes value through a snappy buffered writer.
+func writeSnappyBuffer(w io.Writer, b []byte) (int, error) {
+    bufWriter := snappy.NewBufferedWriter(w)
+    defer func() {
+        if err := bufWriter.Close(); err != nil {
+            logrus.WithError(err).Error("Failed to close snappy buffered writer")
+        }
+    }()
+    return bufWriter.Write(b)
+}

beacon-chain/p2p/fork.go (new file, 147 lines)

@@ -0,0 +1,147 @@
package p2p
import (
"bytes"
"encoding/base64"
"fmt"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
)
// ENR key used for eth2-related fork data.
const eth2ENRKey = "eth2"
// ForkDigest returns the current fork digest of
// the node.
func (s *Service) ForkDigest() ([4]byte, error) {
return createForkDigest(s.genesisTime, s.genesisValidatorsRoot)
}
// Compares fork ENRs between an incoming peer's record and our node's
// local record values for current and next fork version/epoch.
func (s *Service) compareForkENR(record *enr.Record) error {
currentRecord := s.dv5Listener.LocalNode().Node().Record()
peerForkENR, err := retrieveForkEntry(record)
if err != nil {
return err
}
currentForkENR, err := retrieveForkEntry(currentRecord)
if err != nil {
return err
}
// Clients SHOULD connect to peers with current_fork_digest, next_fork_version,
// and next_fork_epoch that match local values.
if !bytes.Equal(peerForkENR.CurrentForkDigest, currentForkENR.CurrentForkDigest) {
return fmt.Errorf(
"fork digest of peer with ENR %v: %v, does not match local value: %v",
record,
peerForkENR.CurrentForkDigest,
currentForkENR.CurrentForkDigest,
)
}
// Clients MAY connect to peers with the same current_fork_version but a
// different next_fork_version/next_fork_epoch. Unless ENRForkID is manually
// updated to matching prior to the earlier next_fork_epoch of the two clients,
// these type of connecting clients will be unable to successfully interact
// starting at the earlier next_fork_epoch.
buf := bytes.NewBuffer([]byte{})
if err := record.EncodeRLP(buf); err != nil {
return errors.Wrap(err, "could not encode ENR record to bytes")
}
enrString := base64.URLEncoding.EncodeToString(buf.Bytes())
if peerForkENR.NextForkEpoch != currentForkENR.NextForkEpoch {
log.WithFields(logrus.Fields{
"peerNextForkEpoch": peerForkENR.NextForkEpoch,
"peerENR": enrString,
}).Debug("Peer matches fork digest but has different next fork epoch")
}
if !bytes.Equal(peerForkENR.NextForkVersion, currentForkENR.NextForkVersion) {
log.WithFields(logrus.Fields{
"peerNextForkVersion": peerForkENR.NextForkVersion,
"peerENR": enrString,
}).Debug("Peer matches fork digest but has different next fork version")
}
return nil
}
// Creates a fork digest from a genesis time and genesis
// validators root, utilizing the current slot to determine
// the active fork version in the node.
func createForkDigest(
genesisTime time.Time,
genesisValidatorsRoot []byte,
) ([4]byte, error) {
currentSlot := helpers.SlotsSince(genesisTime)
currentEpoch := helpers.SlotToEpoch(currentSlot)
// We retrieve a list of scheduled forks by epoch.
// We loop through the keys in this map to determine the current
// fork version based on the current, time-based epoch number
// since the genesis time.
currentForkVersion := params.BeaconConfig().GenesisForkVersion
scheduledForks := params.BeaconConfig().ForkVersionSchedule
for epoch, forkVersion := range scheduledForks {
if epoch <= currentEpoch {
currentForkVersion = forkVersion
}
}
digest, err := helpers.ComputeForkDigest(currentForkVersion, genesisValidatorsRoot)
if err != nil {
return [4]byte{}, err
}
return digest, nil
}
// Adds a fork entry as an ENR record under the eth2EnrKey for
// the local node. The fork entry is an ssz-encoded enrForkID type
// which takes into account the current fork version from the current
// epoch to create a fork digest, the next fork version,
// and the next fork epoch.
func addForkEntry(
node *enode.LocalNode,
genesisTime time.Time,
genesisValidatorsRoot []byte,
) (*enode.LocalNode, error) {
digest, err := createForkDigest(genesisTime, genesisValidatorsRoot)
if err != nil {
return nil, err
}
nextForkEpoch := params.BeaconConfig().NextForkEpoch
enrForkID := &pb.ENRForkID{
CurrentForkDigest: digest[:],
NextForkVersion: params.BeaconConfig().NextForkVersion,
NextForkEpoch: nextForkEpoch,
}
enc, err := ssz.Marshal(enrForkID)
if err != nil {
return nil, err
}
forkEntry := enr.WithEntry(eth2ENRKey, enc)
node.Set(forkEntry)
return node, nil
}
// Retrieves an enrForkID from an ENR record by key lookup
// under the eth2EnrKey.
func retrieveForkEntry(record *enr.Record) (*pb.ENRForkID, error) {
sszEncodedForkEntry := make([]byte, 16)
entry := enr.WithEntry(eth2ENRKey, &sszEncodedForkEntry)
err := record.Load(entry)
if err != nil {
return nil, err
}
forkEntry := &pb.ENRForkID{}
if err := ssz.Unmarshal(sszEncodedForkEntry, forkEntry); err != nil {
return nil, err
}
return forkEntry, nil
}
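`createForkDigest` above delegates to `helpers.ComputeForkDigest`. Per the eth2 v0.11 spec, the digest is the first four bytes of the SSZ hash tree root of `ForkData{current_version, genesis_validators_root}`; for this two-field container the root reduces to `sha256(version padded to 32 bytes || genesis_validators_root)`. A stdlib sketch of that reduction (an illustration of the spec math, not Prysm's code):

```go
package main

import (
    "crypto/sha256"
    "fmt"
)

// forkDigest computes the 4-byte fork digest: the SSZ hash tree root of
// a two-field ForkData container is sha256 over its two 32-byte chunks,
// where the 4-byte fork version is right-padded to a full chunk.
func forkDigest(version [4]byte, genesisValidatorsRoot [32]byte) [4]byte {
    var chunk [32]byte
    copy(chunk[:], version[:]) // pad version to 32 bytes
    h := sha256.Sum256(append(chunk[:], genesisValidatorsRoot[:]...))
    var digest [4]byte
    copy(digest[:], h[:4])
    return digest
}

func main() {
    d := forkDigest([4]byte{0, 0, 0, 0}, [32]byte{})
    fmt.Printf("%#x\n", d)
}
```

Because the genesis validators root feeds the hash, two networks with the same fork version but different genesis states still end up with distinct digests — which is exactly what the discovery tests above exploit.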


@@ -0,0 +1,267 @@
package p2p
import (
"bytes"
"math/rand"
"os"
"path"
"strconv"
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/sirupsen/logrus"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port)},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
bootListener := s.createListener(ipAddr, pkey)
defer bootListener.Close()
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddr: []string{bootNode.String()},
Encoding: "ssz",
UDPPort: uint(port),
}
var listeners []*discover.UDPv5
for i := 1; i <= 5; i++ {
port = 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
// We give every peer a different genesis validators root, which
// will cause each peer to have a different ForkDigest, preventing
// them from connecting according to our discovery rules for eth2.
root := make([]byte, 32)
copy(root, strconv.Itoa(port))
s = &Service{
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: root,
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
if err != nil {
t.Errorf("Could not start discovery for node: %v", err)
}
listeners = append(listeners, listener)
}
defer func() {
// Close down all peers.
for _, listener := range listeners {
listener.Close()
}
}()
// Wait for the nodes to have their local routing tables to be populated with the other nodes
time.Sleep(discoveryWaitTime)
lastListener := listeners[len(listeners)-1]
nodes := lastListener.Lookup(bootNode.ID())
if len(nodes) < 4 {
t.Errorf("The node's local table doesn't have the expected number of nodes. "+
"Expected more than or equal to %d but got %d", 4, len(nodes))
}
// Now, we start a new p2p service. It should have no peers aside from the
// bootnode given all nodes provided by discv5 will have different fork digests.
cfg.UDPPort = 14000
cfg.TCPPort = 14001
s, err := NewService(cfg)
if err != nil {
t.Fatal(err)
}
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
multiAddrs := s.processPeers(nodes)
// We should not have valid peers if the fork digest mismatched.
if len(multiAddrs) != 0 {
t.Errorf("Expected 0 valid peers, got %d", len(multiAddrs))
}
if err := s.Stop(); err != nil {
t.Fatal(err)
}
}
func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
hook := logTest.NewGlobal()
logrus.SetLevel(logrus.DebugLevel)
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port)},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
bootListener := s.createListener(ipAddr, pkey)
defer bootListener.Close()
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddr: []string{bootNode.String()},
Encoding: "ssz",
UDPPort: uint(port),
}
originalBeaconConfig := params.BeaconConfig()
var listeners []*discover.UDPv5
for i := 1; i <= 5; i++ {
port = 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
c := params.BeaconConfig()
nextForkEpoch := uint64(i)
c.NextForkEpoch = nextForkEpoch
params.OverrideBeaconConfig(c)
// We give every peer the same genesis validators root but a different
// next fork epoch, so all peers share a fork digest while advertising
// different next fork data in their ENRs.
s = &Service{
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
if err != nil {
t.Errorf("Could not start discovery for node: %v", err)
}
listeners = append(listeners, listener)
}
defer func() {
// Close down all peers.
for _, listener := range listeners {
listener.Close()
}
}()
// Wait for the nodes to have their local routing tables to be populated with the other nodes
time.Sleep(discoveryWaitTime)
lastListener := listeners[len(listeners)-1]
nodes := lastListener.Lookup(bootNode.ID())
if len(nodes) < 4 {
t.Errorf("The node's local table doesn't have the expected number of nodes. "+
"Expected more than or equal to %d but got %d", 4, len(nodes))
}
// Now, we start a new p2p service. The nodes provided by discv5 share our
// fork digest, so they should remain valid peers despite differing next fork data.
cfg.UDPPort = 14000
cfg.TCPPort = 14001
params.OverrideBeaconConfig(originalBeaconConfig)
s, err := NewService(cfg)
if err != nil {
t.Fatal(err)
}
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
multiAddrs := s.processPeers(nodes)
if len(multiAddrs) == 0 {
t.Error("Expected to have valid peers, got 0")
}
testutil.AssertLogsContain(t, hook, "Peer matches fork digest but has different next fork epoch")
if err := s.Stop(); err != nil {
t.Fatal(err)
}
}
func TestDiscv5_AddRetrieveForkEntryENR(t *testing.T) {
c := params.BeaconConfig()
originalConfig := c
c.ForkVersionSchedule = map[uint64][]byte{
0: params.BeaconConfig().GenesisForkVersion,
1: {0, 0, 0, 1},
}
nextForkEpoch := uint64(1)
nextForkVersion := []byte{0, 0, 0, 1}
c.NextForkEpoch = nextForkEpoch
c.NextForkVersion = nextForkVersion
params.OverrideBeaconConfig(c)
defer params.OverrideBeaconConfig(originalConfig)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
digest, err := createForkDigest(genesisTime, make([]byte, 32))
if err != nil {
t.Fatal(err)
}
enrForkID := &pb.ENRForkID{
CurrentForkDigest: digest[:],
NextForkVersion: nextForkVersion,
NextForkEpoch: nextForkEpoch,
}
enc, err := ssz.Marshal(enrForkID)
if err != nil {
t.Fatal(err)
}
forkEntry := enr.WithEntry(eth2ENRKey, enc)
// In epoch 1 of current time, the fork version should be
// {0, 0, 0, 1} according to the configuration override above.
temp := testutil.TempDir()
randNum := rand.Int()
tempPath := path.Join(temp, strconv.Itoa(randNum))
if err := os.Mkdir(tempPath, 0700); err != nil {
t.Fatal(err)
}
pkey, err := privKey(&Config{Encoding: "ssz", DataDir: tempPath})
if err != nil {
t.Fatalf("Could not get private key: %v", err)
}
db, err := enode.OpenDB("")
if err != nil {
t.Fatal(err)
}
localNode := enode.NewLocalNode(db, pkey)
localNode.Set(forkEntry)
want, err := helpers.ComputeForkDigest([]byte{0, 0, 0, 0}, genesisValidatorsRoot)
if err != nil {
t.Fatal(err)
}
resp, err := retrieveForkEntry(localNode.Node().Record())
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(resp.CurrentForkDigest, want[:]) {
t.Errorf("Wanted fork digest: %v, received %v", want, resp.CurrentForkDigest)
}
if !bytes.Equal(resp.NextForkVersion[:], nextForkVersion) {
t.Errorf("Wanted next fork version: %v, received %v", nextForkVersion, resp.NextForkVersion)
}
if resp.NextForkEpoch != nextForkEpoch {
t.Errorf("Wanted next fork epoch: %d, received: %d", nextForkEpoch, resp.NextForkEpoch)
}
}


@@ -10,12 +10,12 @@ import (
 // GossipTopicMappings represent the protocol ID to protobuf message type map for easy
 // lookup.
 var GossipTopicMappings = map[string]proto.Message{
-    "/eth2/beacon_block":                         &pb.SignedBeaconBlock{},
-    "/eth2/committee_index%d_beacon_attestation": &pb.Attestation{},
-    "/eth2/voluntary_exit":                       &pb.SignedVoluntaryExit{},
-    "/eth2/proposer_slashing":                    &pb.ProposerSlashing{},
-    "/eth2/attester_slashing":                    &pb.AttesterSlashing{},
-    "/eth2/beacon_aggregate_and_proof":           &pb.AggregateAttestationAndProof{},
+    "/eth2/%x/beacon_block":                         &pb.SignedBeaconBlock{},
+    "/eth2/%x/committee_index%d_beacon_attestation": &pb.Attestation{},
+    "/eth2/%x/voluntary_exit":                       &pb.SignedVoluntaryExit{},
+    "/eth2/%x/proposer_slashing":                    &pb.ProposerSlashing{},
+    "/eth2/%x/attester_slashing":                    &pb.AttesterSlashing{},
+    "/eth2/%x/beacon_aggregate_and_proof":           &pb.SignedAggregateAttestationAndProof{},
 }

 // GossipTypeMapping is the inverse of GossipTopicMappings so that an arbitrary protobuf message


@@ -26,7 +26,7 @@ func (s *Service) AddConnectionHandler(reqFunc func(ctx context.Context, id peer
         log.WithField("currentState", peerConnectionState).WithField("reason", "already active").Trace("Ignoring connection request")
         return
     }
-    s.peers.Add(conn.RemotePeer(), conn.RemoteMultiaddr(), conn.Stat().Direction, nil)
+    s.peers.Add(nil /* ENR */, conn.RemotePeer(), conn.RemoteMultiaddr(), conn.Stat().Direction)
     if len(s.peers.Active()) >= int(s.cfg.MaxPeers) {
         log.WithField("reason", "at peer limit").Trace("Ignoring connection request")
         if err := s.Disconnect(conn.RemotePeer()); err != nil {


@@ -22,7 +22,7 @@ self=%s
 %v
 `,
 		s.cfg.BootstrapNodeAddr,
-		selfAddresses(s.host),
+		s.selfAddresses(),
 		len(s.host.Network().Peers()),
 		formatPeers(s.host), // Must be last. Writes one entry per row.
 	); err != nil {
@@ -37,10 +37,13 @@ self=%s
 }
 // selfAddresses formats the host data into dialable strings, comma separated.
-func selfAddresses(h host.Host) string {
+func (s *Service) selfAddresses() string {
 	var addresses []string
-	for _, ma := range h.Addrs() {
-		addresses = append(addresses, ma.String()+"/p2p/"+h.ID().Pretty())
+	if s.dv5Listener != nil {
+		addresses = append(addresses, s.dv5Listener.Self().String())
+	}
+	for _, ma := range s.host.Addrs() {
+		addresses = append(addresses, ma.String()+"/p2p/"+s.host.ID().Pretty())
 	}
 	return strings.Join(addresses, ",")
 }


@@ -9,6 +9,7 @@ import (
 	pubsub "github.com/libp2p/go-libp2p-pubsub"
 	"github.com/prysmaticlabs/prysm/beacon-chain/p2p/encoder"
 	"github.com/prysmaticlabs/prysm/beacon-chain/p2p/peers"
+	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 )
 // P2P represents the full p2p interface composed of all of the sub-interfaces.
@@ -21,6 +22,7 @@ type P2P interface {
 	Sender
 	ConnectionHandler
 	PeersProvider
+	MetadataProvider
 }
 // Broadcaster broadcasts messages to peers over the p2p pubsub protocol.
@@ -42,6 +44,7 @@ type ConnectionHandler interface {
 // EncodingProvider provides p2p network encoding.
 type EncodingProvider interface {
 	Encoding() encoder.NetworkEncoding
+	ForkDigest() ([4]byte, error)
 }
 // PubSubProvider provides the p2p pubsub protocol.
@@ -55,14 +58,21 @@ type PeerManager interface {
 	PeerID() peer.ID
 	RefreshENR(epoch uint64)
 	FindPeersWithSubnet(index uint64) (bool, error)
+	AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) error)
 }
 // Sender abstracts the sending functionality from libp2p.
 type Sender interface {
-	Send(context.Context, interface{}, peer.ID) (network.Stream, error)
+	Send(context.Context, interface{}, string, peer.ID) (network.Stream, error)
 }
 // PeersProvider abstracts obtaining our current list of known peers status.
 type PeersProvider interface {
 	Peers() *peers.Status
 }
+// MetadataProvider returns the metadata related information for the local peer.
+type MetadataProvider interface {
+	Metadata() *pb.MetaData
+	MetadataSeq() uint64
+}
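A minimal sketch of what satisfying the new MetadataProvider interface could look like, with a local stand-in struct replacing `pb.MetaData` (the `MetaData` and `localMetadata` types here are illustrative, not part of the diff):

```go
package main

import "fmt"

// MetaData stands in for pb.MetaData: a sequence number plus an
// attestation-subnet bitvector.
type MetaData struct {
	SeqNumber uint64
	Attnets   [8]byte
}

// localMetadata sketches a MetadataProvider implementation: Metadata
// returns the local record, MetadataSeq just its sequence number.
type localMetadata struct {
	md MetaData
}

func (l *localMetadata) Metadata() *MetaData { return &l.md }
func (l *localMetadata) MetadataSeq() uint64 { return l.md.SeqNumber }

func main() {
	l := &localMetadata{md: MetaData{SeqNumber: 3}}
	fmt.Println(l.MetadataSeq()) // 3
}
```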


@@ -6,11 +6,6 @@ import (
 )
 var (
-	p2pTopicPeerCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
-		Name: "p2p_topic_peer_count",
-		Help: "The number of peers subscribed to a given topic.",
-	},
-		[]string{"topic"})
 	p2pPeerCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
 		Name: "p2p_peer_count",
 		Help: "The number of peers in a given state.",
@@ -19,10 +14,6 @@ var (
 )
 func (s *Service) updateMetrics() {
-	for topic := range GossipTopicMappings {
-		topic += s.Encoding().ProtocolSuffix()
-		p2pTopicPeerCount.WithLabelValues(topic).Set(float64(len(s.pubsub.ListPeers(topic))))
-	}
 	p2pPeerCount.WithLabelValues("Connected").Set(float64(len(s.peers.Connected())))
 	p2pPeerCount.WithLabelValues("Disconnected").Set(float64(len(s.peers.Disconnected())))
 	p2pPeerCount.WithLabelValues("Connecting").Set(float64(len(s.peers.Connecting())))


@@ -9,7 +9,6 @@ import (
 	"github.com/libp2p/go-libp2p"
 	noise "github.com/libp2p/go-libp2p-noise"
 	filter "github.com/libp2p/go-maddr-filter"
-	"github.com/multiformats/go-multiaddr"
 	ma "github.com/multiformats/go-multiaddr"
 	"github.com/pkg/errors"
 	"github.com/prysmaticlabs/prysm/beacon-chain/p2p/connmgr"
@@ -42,7 +41,7 @@ func buildOptions(cfg *Config, ip net.IP, priKey *ecdsa.PrivateKey) []libp2p.Opt
 		options = append(options, libp2p.AddrsFactory(withRelayAddrs(cfg.RelayNodeAddr)))
 	}
 	if cfg.HostAddress != "" {
-		options = append(options, libp2p.AddrsFactory(func(addrs []multiaddr.Multiaddr) []multiaddr.Multiaddr {
+		options = append(options, libp2p.AddrsFactory(func(addrs []ma.Multiaddr) []ma.Multiaddr {
 			external, err := multiAddressBuilder(cfg.HostAddress, cfg.TCPPort)
 			if err != nil {
 				log.WithError(err).Error("Unable to create external multiaddress")
@@ -53,8 +52,8 @@ func buildOptions(cfg *Config, ip net.IP, priKey *ecdsa.PrivateKey) []libp2p.Opt
 		}))
 	}
 	if cfg.HostDNS != "" {
-		options = append(options, libp2p.AddrsFactory(func(addrs []multiaddr.Multiaddr) []multiaddr.Multiaddr {
-			external, err := multiaddr.NewMultiaddr(fmt.Sprintf("/dns4/%s/tcp/%d", cfg.HostDNS, cfg.TCPPort))
+		options = append(options, libp2p.AddrsFactory(func(addrs []ma.Multiaddr) []ma.Multiaddr {
+			external, err := ma.NewMultiaddr(fmt.Sprintf("/dns4/%s/tcp/%d", cfg.HostDNS, cfg.TCPPort))
 			if err != nil {
 				log.WithError(err).Error("Unable to create external multiaddress")
 			} else {


@@ -10,9 +10,12 @@ go_library(
         "//proto/beacon/p2p/v1:go_default_library",
         "//shared/bytesutil:go_default_library",
         "//shared/roughtime:go_default_library",
+        "@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
+        "@com_github_gogo_protobuf//proto:go_default_library",
         "@com_github_libp2p_go_libp2p_core//network:go_default_library",
         "@com_github_libp2p_go_libp2p_core//peer:go_default_library",
         "@com_github_multiformats_go_multiaddr//:go_default_library",
+        "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
     ],
 )
@@ -23,8 +26,10 @@ go_test(
     deps = [
         "//proto/beacon/p2p/v1:go_default_library",
         "//shared/params:go_default_library",
+        "@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
         "@com_github_libp2p_go_libp2p_core//network:go_default_library",
         "@com_github_libp2p_go_libp2p_peer//:go_default_library",
         "@com_github_multiformats_go_multiaddr//:go_default_library",
+        "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
     ],
 )


@@ -25,9 +25,12 @@ import (
 	"sync"
 	"time"
+	"github.com/ethereum/go-ethereum/p2p/enr"
+	"github.com/gogo/protobuf/proto"
 	"github.com/libp2p/go-libp2p-core/network"
 	"github.com/libp2p/go-libp2p-core/peer"
 	ma "github.com/multiformats/go-multiaddr"
+	"github.com/prysmaticlabs/go-bitfield"
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/bytesutil"
@@ -66,9 +69,10 @@ type peerStatus struct {
 	direction             network.Direction
 	peerState             PeerConnectionState
 	chainState            *pb.Status
+	enr                   *enr.Record
+	metaData              *pb.MetaData
 	chainStateLastUpdated time.Time
 	badResponses          int
-	committeeIndices      []uint64
 }
 // NewStatus creates a new status entity.
@@ -86,7 +90,7 @@ func (p *Status) MaxBadResponses() int {
 // Add adds a peer.
 // If a peer already exists with this ID its address and direction are updated with the supplied data.
-func (p *Status) Add(pid peer.ID, address ma.Multiaddr, direction network.Direction, indices []uint64) {
+func (p *Status) Add(record *enr.Record, pid peer.ID, address ma.Multiaddr, direction network.Direction) {
 	p.lock.Lock()
 	defer p.lock.Unlock()
@@ -94,19 +98,21 @@ func (p *Status) Add(pid peer.ID, address ma.Multiaddr, direction network.Direct
 		// Peer already exists, just update its address info.
 		status.address = address
 		status.direction = direction
-		if indices != nil {
-			status.committeeIndices = indices
+		if record != nil {
+			status.enr = record
 		}
 		return
 	}
-	p.status[pid] = &peerStatus{
+	status := &peerStatus{
 		address:   address,
 		direction: direction,
 		// Peers start disconnected; state will be updated when the handshake process begins.
 		peerState: PeerDisconnected,
-		committeeIndices: indices,
 	}
+	if record != nil {
+		status.enr = record
+	}
+	p.status[pid] = status
 }
 // Address returns the multiaddress of the given remote peer.
@@ -133,6 +139,17 @@ func (p *Status) Direction(pid peer.ID) (network.Direction, error) {
 		return network.DirUnknown, ErrPeerUnknown
 	}
 }
+// ENR returns the enr for the corresponding peer id.
+func (p *Status) ENR(pid peer.ID) (*enr.Record, error) {
+	p.lock.RLock()
+	defer p.lock.RUnlock()
+	if status, ok := p.status[pid]; ok {
+		return status.enr, nil
+	}
+	return nil, ErrPeerUnknown
+}
 // SetChainState sets the chain state of the given remote peer.
 func (p *Status) SetChainState(pid peer.ID, chainState *pb.Status) {
 	p.lock.Lock()
@@ -165,16 +182,37 @@ func (p *Status) IsActive(pid peer.ID) bool {
 	return ok && (status.peerState == PeerConnected || status.peerState == PeerConnecting)
 }
+// SetMetadata sets the metadata of the given remote peer.
+func (p *Status) SetMetadata(pid peer.ID, metaData *pb.MetaData) {
+	p.lock.Lock()
+	defer p.lock.Unlock()
+	status := p.fetch(pid)
+	status.metaData = metaData
+}
+// Metadata returns a copy of the metadata corresponding to the provided
+// peer id.
+func (p *Status) Metadata(pid peer.ID) (*pb.MetaData, error) {
+	p.lock.RLock()
+	defer p.lock.RUnlock()
+	if status, ok := p.status[pid]; ok {
+		return proto.Clone(status.metaData).(*pb.MetaData), nil
+	}
+	return nil, ErrPeerUnknown
+}
 // CommitteeIndices retrieves the committee subnets the peer is subscribed to.
 func (p *Status) CommitteeIndices(pid peer.ID) ([]uint64, error) {
 	p.lock.RLock()
 	defer p.lock.RUnlock()
 	if status, ok := p.status[pid]; ok {
-		if status.committeeIndices == nil {
+		if status.enr == nil || status.metaData == nil {
 			return []uint64{}, nil
 		}
-		return status.committeeIndices, nil
+		return retrieveIndicesFromBitfield(status.metaData.Attnets), nil
 	}
 	return nil, ErrPeerUnknown
 }
@@ -189,10 +227,12 @@ func (p *Status) SubscribedToSubnet(index uint64) []peer.ID {
 	for pid, status := range p.status {
 		// look at active peers
 		if status.peerState == PeerConnecting || status.peerState == PeerConnected &&
-			status.committeeIndices != nil {
-			for _, idx := range status.committeeIndices {
+			status.metaData != nil {
+			indices := retrieveIndicesFromBitfield(status.metaData.Attnets)
+			for _, idx := range indices {
 				if idx == index {
 					peers = append(peers, pid)
+					break
 				}
 			}
 		}
@@ -455,3 +495,13 @@ func (p *Status) CurrentEpoch() uint64 {
 	}
 	return helpers.SlotToEpoch(highestSlot)
 }
+func retrieveIndicesFromBitfield(bitV bitfield.Bitvector64) []uint64 {
+	committeeIdxs := []uint64{}
+	for i := uint64(0); i < 64; i++ {
+		if bitV.BitAt(i) {
+			committeeIdxs = append(committeeIdxs, i)
+		}
+	}
+	return committeeIdxs
+}
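The bitfield-to-indices conversion introduced above can be sketched without the go-bitfield dependency by treating the attnets vector as a plain uint64 mask (`bitIndices` is an illustrative stand-in for `retrieveIndicesFromBitfield`):

```go
package main

import "fmt"

// bitIndices mirrors retrieveIndicesFromBitfield over a plain uint64
// mask instead of a bitfield.Bitvector64: it returns the positions of
// all set bits, in ascending order.
func bitIndices(mask uint64) []uint64 {
	indices := []uint64{}
	for i := uint64(0); i < 64; i++ {
		if mask&(1<<i) != 0 {
			indices = append(indices, i)
		}
	}
	return indices
}

func main() {
	// Bits 2, 8 and 9 set, matching the test fixture in this diff.
	fmt.Println(bitIndices(1<<2 | 1<<8 | 1<<9)) // [2 8 9]
}
```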


@@ -4,11 +4,14 @@ import (
 	"bytes"
 	"crypto/rand"
 	"fmt"
+	"reflect"
 	"testing"
+	"github.com/ethereum/go-ethereum/p2p/enr"
 	"github.com/libp2p/go-libp2p-core/network"
 	peer "github.com/libp2p/go-libp2p-peer"
 	ma "github.com/multiformats/go-multiaddr"
+	"github.com/prysmaticlabs/go-bitfield"
 	"github.com/prysmaticlabs/prysm/beacon-chain/p2p/peers"
 	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 	"github.com/prysmaticlabs/prysm/shared/params"
@@ -38,7 +41,7 @@ func TestPeerExplicitAdd(t *testing.T) {
 		t.Fatalf("Failed to create address: %v", err)
 	}
 	direction := network.DirInbound
-	p.Add(id, address, direction, []uint64{})
+	p.Add(new(enr.Record), id, address, direction)
 	resAddress, err := p.Address(id)
 	if err != nil {
@@ -62,7 +65,7 @@ func TestPeerExplicitAdd(t *testing.T) {
 		t.Fatalf("Failed to create address: %v", err)
 	}
 	direction2 := network.DirOutbound
-	p.Add(id, address2, direction2, []uint64{})
+	p.Add(new(enr.Record), id, address2, direction2)
 	resAddress2, err := p.Address(id)
 	if err != nil {
@@ -81,6 +84,58 @@ func TestPeerExplicitAdd(t *testing.T) {
 	}
 }
+func TestPeerNoENR(t *testing.T) {
+	maxBadResponses := 2
+	p := peers.NewStatus(maxBadResponses)
+	id, err := peer.IDB58Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
+	if err != nil {
+		t.Fatalf("Failed to create ID: %v", err)
+	}
+	address, err := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
+	if err != nil {
+		t.Fatalf("Failed to create address: %v", err)
+	}
+	direction := network.DirInbound
+	p.Add(nil, id, address, direction)
+	retrievedENR, err := p.ENR(id)
+	if err != nil {
+		t.Fatalf("Could not retrieve chainstate: %v", err)
+	}
+	if retrievedENR != nil {
+		t.Error("Wanted a nil enr to be saved")
+	}
+}
+func TestPeerNoOverwriteENR(t *testing.T) {
+	maxBadResponses := 2
+	p := peers.NewStatus(maxBadResponses)
+	id, err := peer.IDB58Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
+	if err != nil {
+		t.Fatalf("Failed to create ID: %v", err)
+	}
+	address, err := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
+	if err != nil {
+		t.Fatalf("Failed to create address: %v", err)
+	}
+	direction := network.DirInbound
+	record := new(enr.Record)
+	record.Set(enr.WithEntry("test", []byte{'a'}))
+	p.Add(record, id, address, direction)
+	// try to overwrite
+	p.Add(nil, id, address, direction)
+	retrievedENR, err := p.ENR(id)
+	if err != nil {
+		t.Fatalf("Could not retrieve chainstate: %v", err)
+	}
+	if retrievedENR == nil {
+		t.Error("Wanted a non-nil enr")
+	}
+}
 func TestErrUnknownPeer(t *testing.T) {
 	maxBadResponses := 2
 	p := peers.NewStatus(maxBadResponses)
@@ -121,6 +176,94 @@ func TestErrUnknownPeer(t *testing.T) {
 	}
 }
+func TestPeerCommitteeIndices(t *testing.T) {
+	maxBadResponses := 2
+	p := peers.NewStatus(maxBadResponses)
+	id, err := peer.IDB58Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
+	if err != nil {
+		t.Fatalf("Failed to create ID: %v", err)
+	}
+	address, err := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
+	if err != nil {
+		t.Fatalf("Failed to create address: %v", err)
+	}
+	direction := network.DirInbound
+	record := new(enr.Record)
+	record.Set(enr.WithEntry("test", []byte{'a'}))
+	p.Add(record, id, address, direction)
+	bitV := bitfield.NewBitvector64()
+	for i := 0; i < 64; i++ {
+		if i == 2 || i == 8 || i == 9 {
+			bitV.SetBitAt(uint64(i), true)
+		}
+	}
+	p.SetMetadata(id, &pb.MetaData{
+		SeqNumber: 2,
+		Attnets:   bitV,
+	})
+	wantedIndices := []uint64{2, 8, 9}
+	indices, err := p.CommitteeIndices(id)
+	if err != nil {
+		t.Fatalf("Could not retrieve committee indices: %v", err)
+	}
+	if !reflect.DeepEqual(indices, wantedIndices) {
+		t.Errorf("Wanted indices of %v but got %v", wantedIndices, indices)
+	}
+}
+func TestPeerSubscribedToSubnet(t *testing.T) {
+	maxBadResponses := 2
+	p := peers.NewStatus(maxBadResponses)
+	// Add some peers with different states
+	numPeers := 2
+	for i := 0; i < numPeers; i++ {
+		addPeer(t, p, peers.PeerConnected)
+	}
+	expectedPeer := p.All()[1]
+	bitV := bitfield.NewBitvector64()
+	for i := 0; i < 64; i++ {
+		if i == 2 || i == 8 || i == 9 {
+			bitV.SetBitAt(uint64(i), true)
+		}
+	}
+	p.SetMetadata(expectedPeer, &pb.MetaData{
+		SeqNumber: 2,
+		Attnets:   bitV,
+	})
+	numPeers = 3
+	for i := 0; i < numPeers; i++ {
+		addPeer(t, p, peers.PeerDisconnected)
+	}
+	peers := p.SubscribedToSubnet(2)
+	if len(peers) != 1 {
+		t.Errorf("Expected num of peers to be %d but got %d", 1, len(peers))
+	}
+	if peers[0] != expectedPeer {
+		t.Errorf("Expected peer of %s but got %s", expectedPeer, peers[0])
+	}
+	peers = p.SubscribedToSubnet(8)
+	if len(peers) != 1 {
+		t.Errorf("Expected num of peers to be %d but got %d", 1, len(peers))
+	}
+	if peers[0] != expectedPeer {
+		t.Errorf("Expected peer of %s but got %s", expectedPeer, peers[0])
+	}
+	peers = p.SubscribedToSubnet(9)
+	if len(peers) != 1 {
+		t.Errorf("Expected num of peers to be %d but got %d", 1, len(peers))
+	}
+	if peers[0] != expectedPeer {
+		t.Errorf("Expected peer of %s but got %s", expectedPeer, peers[0])
+	}
+}
 func TestPeerImplicitAdd(t *testing.T) {
 	maxBadResponses := 2
 	p := peers.NewStatus(maxBadResponses)
@@ -156,7 +299,7 @@ func TestPeerChainState(t *testing.T) {
 		t.Fatalf("Failed to create address: %v", err)
 	}
 	direction := network.DirInbound
-	p.Add(id, address, direction, []uint64{})
+	p.Add(new(enr.Record), id, address, direction)
 	oldChainStartLastUpdated, err := p.ChainStateLastUpdated(id)
 	if err != nil {
@@ -208,7 +351,7 @@ func TestPeerBadResponses(t *testing.T) {
 		t.Fatalf("Failed to create address: %v", err)
 	}
 	direction := network.DirInbound
-	p.Add(id, address, direction, []uint64{})
+	p.Add(new(enr.Record), id, address, direction)
 	resBadResponses, err := p.BadResponses(id)
 	if err != nil {
@@ -258,6 +401,32 @@ func TestPeerBadResponses(t *testing.T) {
 	}
 }
+func TestAddMetaData(t *testing.T) {
+	maxBadResponses := 2
+	p := peers.NewStatus(maxBadResponses)
+	// Add some peers with different states
+	numPeers := 5
+	for i := 0; i < numPeers; i++ {
+		addPeer(t, p, peers.PeerConnected)
+	}
+	newPeer := p.All()[2]
+	newMetaData := &pb.MetaData{
+		SeqNumber: 8,
+		Attnets:   bitfield.NewBitvector64(),
+	}
+	p.SetMetadata(newPeer, newMetaData)
+	md, err := p.Metadata(newPeer)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if md.SeqNumber != newMetaData.SeqNumber {
+		t.Errorf("Wanted sequence number of %d but got %d", newMetaData.SeqNumber, md.SeqNumber)
+	}
+}
 func TestPeerConnectionStatuses(t *testing.T) {
 	maxBadResponses := 2
 	p := peers.NewStatus(maxBadResponses)
@@ -470,7 +639,7 @@ func TestBestFinalized_returnsMaxValue(t *testing.T) {
 	p := peers.NewStatus(maxBadResponses)
 	for i := 0; i <= maxPeers+100; i++ {
-		p.Add(peer.ID(i), nil, network.DirOutbound, []uint64{})
+		p.Add(new(enr.Record), peer.ID(i), nil, network.DirOutbound)
 		p.SetConnectionState(peer.ID(i), peers.PeerConnected)
 		p.SetChainState(peer.ID(i), &pb.Status{
 			FinalizedEpoch: 10,
@@ -520,7 +689,11 @@ func addPeer(t *testing.T, p *peers.Status, state peers.PeerConnectionState) pee
 	if err != nil {
 		t.Fatalf("Unexpected error: %v", err)
 	}
-	p.Add(id, nil, network.DirUnknown, []uint64{})
+	p.Add(new(enr.Record), id, nil, network.DirUnknown)
 	p.SetConnectionState(id, state)
+	p.SetMetadata(id, &pb.MetaData{
+		SeqNumber: 0,
+		Attnets:   bitfield.NewBitvector64(),
+	})
 	return id
 }


@@ -1,27 +1,32 @@
 package p2p
 import (
-	"reflect"
 	p2ppb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
 )
+const (
+	// RPCStatusTopic defines the topic for the status rpc method.
+	RPCStatusTopic = "/eth2/beacon_chain/req/status/1"
+	// RPCGoodByeTopic defines the topic for the goodbye rpc method.
+	RPCGoodByeTopic = "/eth2/beacon_chain/req/goodbye/1"
+	// RPCBlocksByRangeTopic defines the topic for the blocks by range rpc method.
+	RPCBlocksByRangeTopic = "/eth2/beacon_chain/req/beacon_blocks_by_range/1"
+	// RPCBlocksByRootTopic defines the topic for the blocks by root rpc method.
+	RPCBlocksByRootTopic = "/eth2/beacon_chain/req/beacon_blocks_by_root/1"
+	// RPCPingTopic defines the topic for the ping rpc method.
+	RPCPingTopic = "/eth2/beacon_chain/req/ping/1"
+	// RPCMetaDataTopic defines the topic for the metadata rpc method.
+	RPCMetaDataTopic = "/eth2/beacon_chain/req/metadata/1"
+)
 // RPCTopicMappings represent the protocol ID to protobuf message type map for easy
 // lookup. These mappings should be used for outbound sending only. Peers may respond
 // with a different message type as defined by the p2p protocol.
 var RPCTopicMappings = map[string]interface{}{
-	"/eth2/beacon_chain/req/status/1":                 &p2ppb.Status{},
-	"/eth2/beacon_chain/req/goodbye/1":                new(uint64),
-	"/eth2/beacon_chain/req/beacon_blocks_by_range/1": &p2ppb.BeaconBlocksByRangeRequest{},
-	"/eth2/beacon_chain/req/beacon_blocks_by_root/1":  [][32]byte{},
+	RPCStatusTopic:        &p2ppb.Status{},
+	RPCGoodByeTopic:       new(uint64),
+	RPCBlocksByRangeTopic: &p2ppb.BeaconBlocksByRangeRequest{},
+	RPCBlocksByRootTopic:  [][32]byte{},
+	RPCPingTopic:          new(uint64),
+	RPCMetaDataTopic:      new(interface{}),
 }
-// RPCTypeMapping is the inverse of RPCTopicMappings so that an arbitrary protobuf message
-// can be mapped to a protocol ID string.
-var RPCTypeMapping = make(map[reflect.Type]string)
-func init() {
-	for k, v := range RPCTopicMappings {
-		RPCTypeMapping[reflect.TypeOf(v)] = k
-	}
-}
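With the RPC topics lifted into named constants, an outbound request resolves its full protocol ID by appending the encoder's suffix to the base topic, as the new `Send` signature does. A minimal sketch, assuming an SSZ encoder whose suffix is "/ssz" (the suffix value and helper name here are illustrative):

```go
package main

import "fmt"

const (
	// Mirrors RPCStatusTopic from the diff above.
	RPCStatusTopic = "/eth2/beacon_chain/req/status/1"
	// Assumed suffix contributed by the SSZ network encoder.
	protocolSuffix = "/ssz"
)

// fullProtocolID joins a base RPC topic with the encoding suffix,
// mirroring topic := baseTopic + s.Encoding().ProtocolSuffix().
func fullProtocolID(baseTopic string) string {
	return baseTopic + protocolSuffix
}

func main() {
	fmt.Println(fullProtocolID(RPCStatusTopic)) // /eth2/beacon_chain/req/status/1/ssz
}
```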


@@ -2,7 +2,6 @@ package p2p
 import (
 	"context"
-	"reflect"
 	"time"
 	"github.com/libp2p/go-libp2p-core/network"
@@ -14,10 +13,10 @@ import (
 // Send a message to a specific peer. The returned stream may be used for reading, but has been
 // closed for writing.
-func (s *Service) Send(ctx context.Context, message interface{}, pid peer.ID) (network.Stream, error) {
+func (s *Service) Send(ctx context.Context, message interface{}, baseTopic string, pid peer.ID) (network.Stream, error) {
 	ctx, span := trace.StartSpan(ctx, "p2p.Send")
 	defer span.End()
-	topic := RPCTypeMapping[reflect.TypeOf(message)] + s.Encoding().ProtocolSuffix()
+	topic := baseTopic + s.Encoding().ProtocolSuffix()
 	span.AddAttributes(trace.StringAttribute("topic", topic))
 	// TTFB_TIME (5s) + RESP_TIMEOUT (10s).
@@ -38,6 +37,11 @@ func (s *Service) Send(ctx context.Context, message interface{}, pid peer.ID) (n
 		traceutil.AnnotateError(span, err)
 		return nil, err
 	}
+	// do not encode anything if we are sending a metadata request
+	if baseTopic == RPCMetaDataTopic {
+		return stream, nil
+	}
 	if _, err := s.Encoding().EncodeWithLength(stream, message); err != nil {
 		traceutil.AnnotateError(span, err)
 		return nil, err


@@ -2,7 +2,6 @@ package p2p
 import (
 	"context"
-	"reflect"
 	"sync"
 	"testing"
 	"time"
@@ -29,29 +28,25 @@ func TestService_Send(t *testing.T) {
 		Bar: 55,
 	}
-	// Register testing topic.
-	RPCTypeMapping[reflect.TypeOf(msg)] = "/testing/1"
 	// Register external listener which will repeat the message back.
 	var wg sync.WaitGroup
 	wg.Add(1)
-	go func() {
-		p2.SetStreamHandler("/testing/1/ssz", func(stream network.Stream) {
-			rcvd := &testpb.TestSimpleMessage{}
-			if err := svc.Encoding().DecodeWithLength(stream, rcvd); err != nil {
-				t.Fatal(err)
-			}
-			if _, err := svc.Encoding().EncodeWithLength(stream, rcvd); err != nil {
-				t.Fatal(err)
-			}
-			if err := stream.Close(); err != nil {
-				t.Error(err)
-			}
-			wg.Done()
-		})
-	}()
-	stream, err := svc.Send(context.Background(), msg, p2.Host.ID())
+	p2.SetStreamHandler("/testing/1/ssz", func(stream network.Stream) {
+		rcvd := &testpb.TestSimpleMessage{}
+		if err := svc.Encoding().DecodeWithLength(stream, rcvd); err != nil {
+			t.Fatal(err)
+		}
+		if _, err := svc.Encoding().EncodeWithLength(stream, rcvd); err != nil {
+			t.Fatal(err)
+		}
+		if err := stream.Close(); err != nil {
+			t.Error(err)
+		}
+		wg.Done()
+	})
+	stream, err := svc.Send(context.Background(), msg, "/testing/1", p2.Host.ID())
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -65,5 +60,4 @@ func TestService_Send(t *testing.T) {
 	if !proto.Equal(rcvd, msg) {
 		t.Errorf("Expected identical message to be received. got %v want %v", rcvd, msg)
 	}
 }
