Compare commits

...

544 Commits

Author SHA1 Message Date
Preston Van Loon
0fff07a93b Create CODEOWNERS (#5330)
* Create CODEOWNERS
* Move codeowners file to proper location
2020-04-07 03:22:43 +00:00
Preston Van Loon
70e64be8d6 Remove old cross compile starlark rules (#5329)
* Add buildbuddy BES (#5325)

* Add buildbuddy BES
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* remove old cross compile rules
2020-04-07 03:01:20 +00:00
Preston Van Loon
33ffa34ea7 Use fewer goroutines in validator runner (#5328)
* Add buildbuddy BES (#5325)

* Add buildbuddy BES
* Use fewer goroutines when running validator
* per-role based goroutines
* Merge branch 'master' into validator-issue-4702
2020-04-07 01:34:01 +00:00
Preston Van Loon
bcebf63cab Add buildbuddy BES (#5325)
* Add buildbuddy BES
2020-04-07 00:54:21 +00:00
terence tsao
d6f7d67ee9 Interop batch save validator indices (#5320)
* Batch save indices

* Update beacon-chain/interop-cold-start/service.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

* Update beacon-chain/interop-cold-start/service.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

* Update service.go

Co-authored-by: shayzluf <thezluf@gmail.com>
2020-04-06 13:40:42 -05:00
Chris Hobcroft
b7afc90266 Fixed typo in stdout log (#5317)
"Round robin**g** sync request failed"

changed to

"Round robin sync request failed"
2020-04-06 20:34:24 +08:00
Ivan Martinez
4b64a75c77 Remove unused validator protos (#5304)
* Remove unneeded protos

* Remove unused api point

* Gazelle

* Fix visibility

* Rename

* Change type

* Use iota for validator role
2020-04-06 11:24:24 +08:00
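
One of the bullets in #5304 above converts the validator role constants to an iota enumeration. A minimal sketch of that pattern; the role names here are illustrative rather than Prysm's exact set:

```go
package validator

// validatorRole identifies what a validator must do in a given slot.
type validatorRole int8

const (
    roleUnknown validatorRole = iota // zero value doubles as "no assignment"
    roleAttester
    roleProposer
    roleAggregator
)
```
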
Ivan Martinez
fcf131412f Fix cluster in bazel and remove unused file (#5316)
* Fix cluster bazel

* remove unneeded file
2020-04-06 10:23:05 +08:00
Mattia
279dd5ac8d fix broken links in readme (#5313)
* fix broken links in readme

The "Activating a validator" link was broken. This commit points to the new location of the documentation.
* Merge branch 'master' into master
2020-04-05 20:19:36 +00:00
Jim McDonald
c7a4fcd098 Reduce noise in validator logs (#5307)
* Reduce number of info-level messages on start of validator

* Test service, not log entry

* Gazelle

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
2020-04-05 15:36:18 -04:00
Ivan Martinez
07753189fd Remove unused slashing protos (#5308) 2020-04-05 12:50:52 -04:00
terence tsao
e162d27634 Save init synced cached blocks to db (#5309) 2020-04-04 16:25:04 -07:00
Victor Farazdagi
f440c815f9 Init sync update highest slot (#5298)
* updates highest slot before wrapping up
* more verbose error message
* error w/o stack
* revert back
2020-04-04 17:11:38 +03:00
Preston Van Loon
9aac572c21 Update @terencechain public key. (#5301)
* Update @terencechain public key.
2020-04-03 22:01:44 +00:00
Preston Van Loon
7bdd1355b8 Add maligned struct static check (#5296)
* Add maligned static check
* Add file, oops
* lint
2020-04-03 05:09:15 +00:00
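
The maligned check added in #5296 flags structs whose field order wastes alignment padding. A small example of the kind of reordering it asks for, using made-up types rather than anything from the codebase:

```go
package example

// loose is 24 bytes on a 64-bit platform: each bool is padded out so
// the int64 field stays 8-byte aligned.
type loose struct {
    ready bool  // 1 byte + 7 bytes padding
    slot  int64 // 8 bytes
    done  bool  // 1 byte + 7 bytes padding
}

// packed holds the same fields in 16 bytes: widest field first, then
// the small fields share the trailing padding.
type packed struct {
    slot  int64 // 8 bytes
    ready bool  // 1 byte
    done  bool  // 1 byte + 6 bytes padding
}
```
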
Preston Van Loon
477b014bd1 Set a max limit for decoding ssz objects from p2p (#5295)
* Set a max limit for decoding ssz objects from p2p
2020-04-02 23:51:54 +00:00
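
#5295 caps how many bytes a peer can make the node buffer before SSZ decoding. A minimal sketch of that guard with io.LimitReader; the limit constant and the hand-off to the decoder are placeholders, not Prysm's actual p2p code:

```go
package p2p

import (
    "fmt"
    "io"
    "io/ioutil"
)

const maxChunkSize = 1 << 20 // placeholder cap: 1 MiB per message

// readChunk reads at most maxChunkSize+1 bytes and rejects anything larger,
// so a malicious peer cannot force an unbounded allocation before decoding.
func readChunk(r io.Reader) ([]byte, error) {
    data, err := ioutil.ReadAll(io.LimitReader(r, maxChunkSize+1))
    if err != nil {
        return nil, err
    }
    if len(data) > maxChunkSize {
        return nil, fmt.Errorf("message exceeds %d byte limit", maxChunkSize)
    }
    return data, nil // hand off to the SSZ unmarshaler
}
```
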
terence tsao
ec7f7aebdc Clear init sync blocks on the correct line (#5294)
* Add disable-init-sync-batch-save-blocks
* Fix test
* Remove flag
* Merge branch 'master' into disable-init-sync-batch-save
* Quick fix
* Quick fix
* Merge branch 'master' of github.com:prysmaticlabs/prysm into disable-init-sync-batch-save
* Merge branch 'disable-init-sync-batch-save' of github.com:prysmaticlabs/prysm into disable-init-sync-batch-save
* Clear init sync blocks at the right place
2020-04-02 22:27:51 +00:00
terence tsao
3544ed2818 Invert init-sync-batch-save-blocks flag for v0.11 (#5293) 2020-04-02 14:46:14 -07:00
Victor Farazdagi
b43e43b4a9 Init sync release queue (#5286)
* fix naming slot -> epoch

* better handling of long periods w/o finality

* bazel

* fixes issue with pointer going too far ahead

* adds func comment

* hides original sync behind --disable-init-sync-queue

* adds func comment

* deprecated

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-04-02 11:06:32 -05:00
Victor Farazdagi
c26a492225 Init sync optimizations (#5284)
* fix naming slot -> epoch
* better handling of long periods w/o finality
* fixes issue with pointer going too far ahead
2020-04-02 06:54:05 +03:00
shayzluf
0df12261a1 slasher retrieve and cache validator public key (#5220)
* cache and retrieval of validator public keys

* fix comments

* fix comment

* fix variables

* gaz

* ivan feedback fixes

* goimports

* fix test

* comments on in line slice update

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-04-02 06:08:23 +03:00
Raul Jordan
f385a1dea6 Release Skip Slot Cache to All (#5280)
* no more skip slot cache

* imports

* deprecated

* fix flakeyness

* disable in e2e

* build

* fix viz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-04-01 19:09:54 -07:00
Preston Van Loon
8376fb36ca Enable slashing protection in validator by default (#5278)
* Invert slashing protection validator flags for issue #5267
* remove from e2e flags
* Make error level
* Merge refs/heads/master into flip-propose
2020-04-01 23:17:32 +00:00
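
Turning slashing protection on by default in #5278 amounts to inverting the flag: instead of an opt-in enable flag, the feature runs unless explicitly disabled. A minimal sketch of that inversion with the standard flag package; the flag name is illustrative:

```go
package main

import (
    "flag"
    "fmt"
)

func main() {
    // Before the change: protection was off unless an enable flag was passed.
    // After: it defaults to on and must be switched off explicitly.
    disableProtection := flag.Bool("disable-slashing-protection", false,
        "Disable local slashing protection checks (not recommended).")
    flag.Parse()

    fmt.Println("slashing protection enabled:", !*disableProtection)
}
```
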
Jim McDonald
02b238bda2 Add latch for proposer warnings (#5258)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
2020-04-01 17:28:02 -05:00
Ivan Martinez
3dd5576e33 Improvement, flake fixes (#5263) 2020-04-01 10:23:23 -05:00
terence tsao
df9a534826 Regen historical states for new-state-mgmt compatibility (#5261) 2020-03-31 16:54:24 -07:00
Nishant Das
7e50c36725 Add Hasher To State Data Types (#5244)
* add hasher
* re-order stuff
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
2020-03-31 18:57:19 +00:00
terence tsao
e22365c4a8 Uncomment out cold state tests (#5252)
* Fixed most of the tests

* All tests passing

* All tests passing

* Fix merge conflict

* Fixed error test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-31 11:23:39 -05:00
Nishant Das
c8f8e3f1e0 Unmarshal Block instead of State (#5246)
* unmarshal block instead of state
* add fallback
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
2020-03-31 15:25:58 +00:00
Ivan Martinez
3e81afd7ab Skip anti-flake E2E tests (#5257)
* Skip anti-flake

* Log out the shard index to see it per shard

* Attempt fixes

* Remove unneeded log

* Change eth1 ports

* Remove skips

* Remove log

* Attempt local build

* Fix formatting

* Formatting

* Skip anti flake tests
2020-03-31 23:15:33 +08:00
Ivan Martinez
404a0f6bda Attempt E2E flaking fix (#5256)
* Fix test sharding

* Attempt fix
2020-03-31 12:39:56 +08:00
Preston Van Loon
00ef08b3dc Debug: add cgo symbolizer (#5255)
* Add cgo_symbolizer config

* Add comment

* use import block
2020-03-30 20:20:27 -07:00
Preston Van Loon
6edb3018f9 Add configurations for BLS builds (#5254)
* Add configurations for BLS builds
* Merge refs/heads/master into bls-configurations
2020-03-31 01:58:27 +00:00
Preston Van Loon
17516b625e Use math.Sqrt for IntegerSquareRoot (#5253)
* use std
* Merge refs/heads/master into faster-sqrt
2020-03-31 01:27:37 +00:00
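
#5253 builds the integer square root on top of math.Sqrt. A minimal sketch of that approach, with an explicit correction loop for floating-point rounding; this is not necessarily the exact code that was merged:

```go
package mathutil

import "math"

// IntegerSquareRoot returns the largest x such that x*x <= n, using the
// hardware sqrt as a starting point and correcting for rounding error.
func IntegerSquareRoot(n uint64) uint64 {
    x := uint64(math.Sqrt(float64(n)))
    // float64 only carries 53 bits of precision, so nudge x back down if
    // rounding pushed it past the true root (or past the overflow boundary).
    for x > 0 && (x > math.MaxUint32 || x*x > n) {
        x--
    }
    // And nudge it up if the estimate landed short.
    for x < math.MaxUint32 && (x+1)*(x+1) <= n {
        x++
    }
    return x
}
```
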
terence tsao
7f7866ff2a Micro optimizations on new-state-mgmt service for initial syncing (#5241)
* Starting a quick PoC

* Rate limit to one epoch worth of blocks in memory

* Proof of concept working

* Quick comment out

* Save previous finalized checkpoint

* Test

* Minor fixes

* More run time fixes

* Remove panic

* Feature flag

* Removed unused methods

* Fixed tests

* E2e test

* comment

* Compatible with current initial sync

* Starting

* New cache

* Cache getters and setters

* It should be part of state gen

* Need to use cache for DB

* Don't have to use finalized state

* Rm unused file

* some changes to memory mgmt when using mempool

* More run time fixes

* Can sync to head

* Feedback

* Revert "some changes to memory mgmt when using mempool"

This reverts commit f5b3e7ff47.

* Fixed sync tests

* Fixed existing tests

* Test for state summary getter

* Gaz

* Fix kafka passthrough

* Fixed inputs

* Gaz

* Fixed build

* Fixed visibility

* Trying without the ignore

* Didn't work..

* Fix kafka

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2020-03-30 17:10:45 -05:00
terence tsao
c5f186d56f Batch save blocks for initial sync. 80% faster BPS (#5215)
* Starting a quick PoC
* Rate limit to one epoch worth of blocks in memory
* Proof of concept working
* Quick comment out
* Save previous finalized checkpoint
* Merge branch 'master' of github.com:prysmaticlabs/prysm into batch-save
* Test
* Merge branch 'prev-finalized-getter' into batch-save
* Minor fixes
* Use a map
* More run time fixes
* Remove panic
* Feature flag
* Removed unused methods
* Fixed tests
* E2e test
* Merge branch 'master' into batch-save
* comment
* Merge branch 'master' into batch-save
* Compatible with current initial sync
* Merge branch 'batch-save' of github.com:prysmaticlabs/prysm into batch-save
* Merge refs/heads/master into batch-save
* Merge refs/heads/master into batch-save
* Merge refs/heads/master into batch-save
* Merge branch 'master' of github.com:prysmaticlabs/prysm into batch-save
* Feedback
* Merge branch 'batch-save' of github.com:prysmaticlabs/prysm into batch-save
* Merge refs/heads/master into batch-save
2020-03-30 18:04:10 +00:00
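
#5215 speeds up initial sync by buffering incoming blocks in memory and writing them to the database in one batch instead of one transaction per block. A rough sketch of that batching shape, using a hypothetical store interface rather than Prysm's real db package:

```go
package initialsync

import "context"

// batchSaver is a stand-in for the database: it persists many blocks in a
// single transaction.
type batchSaver interface {
    SaveBlocks(ctx context.Context, blocks []interface{}) error
}

// blockBatcher buffers incoming blocks and flushes them once an epoch's
// worth has accumulated, trading a little memory for far fewer commits.
type blockBatcher struct {
    db      batchSaver
    pending []interface{}
    limit   int // e.g. 32 blocks, one epoch
}

func (b *blockBatcher) add(ctx context.Context, blk interface{}) error {
    b.pending = append(b.pending, blk)
    if len(b.pending) < b.limit {
        return nil
    }
    return b.flush(ctx)
}

func (b *blockBatcher) flush(ctx context.Context) error {
    if len(b.pending) == 0 {
        return nil
    }
    if err := b.db.SaveBlocks(ctx, b.pending); err != nil {
        return err
    }
    b.pending = b.pending[:0]
    return nil
}
```
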
Ivan Martinez
0982ff124e Fix E2E test sharding (#5248) 2020-03-30 12:10:00 -04:00
Nishant Das
63df1d0b8d Add Merkleize With Customized Hasher (#5234)
* add buffer for merkleizer
* add comment
* Merge refs/heads/master into merkleize
* Merge refs/heads/master into merkleize
* Merge refs/heads/master into merkleize
* Merge refs/heads/master into merkleize
* Merge refs/heads/master into merkleize
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into merkleize
* Merge branch 'merkleize' of https://github.com/prysmaticlabs/geth-sharding into merkleize
* lint
* Merge refs/heads/master into merkleize
2020-03-29 06:13:24 +00:00
Ivan Martinez
cb9ac6282f Separate anti flakes to prevent E2E issues (#5238)
* Separate anti flakes

* Gaz
2020-03-29 13:54:13 +08:00
terence tsao
c67b01e5d3 Check new state mgmt service is compatible with DB (#5231) 2020-03-28 18:07:51 -07:00
terence tsao
b40e6db1e5 Fix save blocks return nil (#5237)
* Fixed save blocks return nil
* Merge refs/heads/master into fix-batch-save-blocks
* Merge refs/heads/master into fix-batch-save-blocks
2020-03-28 19:05:56 +00:00
Preston Van Loon
f89d753275 Add configurable e2e epochs (#5235)
* Add configurable e2e epochs
* Merge refs/heads/master into configurable-test-epochs
* Merge refs/heads/master into configurable-test-epochs
2020-03-28 18:47:31 +00:00
Preston Van Loon
a24546152b HashProto: Use fastssz when available (#5218)
* Use fastssz when available
* fix tests
* fix most tests
* Merge branch 'master' into faster-hash-proto
* Merge refs/heads/master into faster-hash-proto
* Merge refs/heads/master into faster-hash-proto
* Merge refs/heads/master into faster-hash-proto
* fix last test
* Merge branch 'faster-hash-proto' of github.com:prysmaticlabs/prysm into faster-hash-proto-2
* lint
* fix last test
* fix again
* Update beacon-chain/cache/checkpoint_state_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Merge refs/heads/master into faster-hash-proto
2020-03-28 18:32:11 +00:00
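
#5218 hashes objects through their generated fastssz HashTreeRoot method when one exists and only falls back to the generic path otherwise. A minimal sketch of that dispatch; the SHA-256-over-marshaled-proto fallback is a stand-in for whatever the real fallback does:

```go
package hashutil

import (
    "crypto/sha256"

    "github.com/golang/protobuf/proto"
)

// fastSSZ is satisfied by types with generated fastssz methods.
type fastSSZ interface {
    HashTreeRoot() ([32]byte, error)
}

// HashProto prefers the generated hasher and otherwise hashes the
// serialized message (placeholder fallback for this sketch).
func HashProto(msg proto.Message) ([32]byte, error) {
    if h, ok := msg.(fastSSZ); ok {
        return h.HashTreeRoot()
    }
    data, err := proto.Marshal(msg)
    if err != nil {
        return [32]byte{}, err
    }
    return sha256.Sum256(data), nil
}
```
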
Preston Van Loon
6bc70e228f Prevent panic for different size bitlists (#5233)
* Fix #5232
* Merge branch 'master' into bugfix-5232
2020-03-28 06:25:49 +00:00
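
#5233 avoids a panic when two bitlists of different lengths are compared. A minimal sketch of the length guard on raw byte-slice bitlists; the real go-bitfield types differ, so treat this as illustrative only:

```go
package bitutil

import "errors"

// overlaps reports whether two equal-length bitlists share any set bit.
// Mismatched lengths are rejected up front instead of panicking deeper in.
func overlaps(a, b []byte) (bool, error) {
    if len(a) != len(b) {
        return false, errors.New("bitlists are different lengths")
    }
    for i := range a {
        if a[i]&b[i] != 0 {
            return true, nil
        }
    }
    return false, nil
}
```
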
terence tsao
f2a3fadda7 Productionization new state service part 1 (#5230)
* Fixed last play methods

* Fixed a regression. Genesis case for state gen

* Comment

* Starting

* Update proto

* Remove boundary root usages

* Update migrate

* Clean up

* Remove unused db methods

* Kafka

* Kafka

* Update tests

* Comments

* Fix state summary tests

* Missed one pass through for Kafka
2020-03-27 13:28:38 -07:00
terence tsao
6a4b17f237 Prune garbage state is not for new state mgmt (#5225)
* Prune garbage state is not for new state mgmt
* Merge branch 'master' into state-mgmt-pruning
* Merge branch 'master' into state-mgmt-pruning
* Merge branch 'master' into state-mgmt-pruning
2020-03-27 14:30:24 +00:00
Victor Farazdagi
7ebb3c1784 init-sync revamp (#5148)
* fix issue with rate limiting
* force fetcher to wait for minimum peers
* adds backoff interval
* cap the max blocks requested from a peer
* queue rewritten
* adds docs to fsm
* fix visibility
* updates fsm
* fsm tests added
* optimizes queue resource allocations
* removes debug log
* replace auto-fixed comment
* fixes typo
* better handling of evil peers
* fixes test
* minor fixes to fsm
* better interface for findEpochState func
2020-03-27 09:54:57 +03:00
Nishant Das
33f6c22607 Revert "Add Fast Copy of Trie" (#5228)
* Revert "new fixes (#5221)"

This reverts commit 4118fa5242.
2020-03-27 01:06:30 +00:00
terence tsao
1a0a399bed Handle genesis case for blocks/states at slot index (#5224)
* Handle highest slot = 0
* TestStore_GenesisState_CanGetHighestBelow
* TestStore_GenesisBlock_CanGetHighestAt
* Merge refs/heads/master into handle-genesis
2020-03-27 00:09:14 +00:00
Preston Van Loon
c4c9a8465a Faster hashing for attestation pool (#5217)
* use faster hash proto
* Merge branch 'master' into faster-att-pool
* gaz
* Merge branch 'faster-att-pool' of github.com:prysmaticlabs/prysm into faster-att-pool
* nil checks and failing tests
* Merge refs/heads/master into faster-att-pool
* Merge refs/heads/master into faster-att-pool
* Merge refs/heads/master into faster-att-pool
* Merge refs/heads/master into faster-att-pool
* Merge refs/heads/master into faster-att-pool
* Fix
* Merge branch 'faster-att-pool' of github.com:prysmaticlabs/prysm into faster-att-pool
* Fix tests
2020-03-26 23:55:25 +00:00
terence tsao
5e2faf1a9d Short circuit genesis condition for new state mgmt (#5223)
* Fixed a regression. Genesis case for state gen

* Comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-26 14:13:45 -05:00
shayzluf
93e68db5e6 is slashable attestation endpoint implementation (#5209)
* is slashable attestation endpoint implementation
* fix todo
* comment
* Merge refs/heads/master into is_slashable_attestation
* Merge refs/heads/master into is_slashable_attestation
* Merge refs/heads/master into is_slashable_attestation
* Update slasher/rpc/server.go
* Update slasher/rpc/server.go
* Update slasher/rpc/service.go
2020-03-26 18:31:20 +00:00
Nishant Das
cdac3d61ea Custom Block HTR (#5219)
* add custom htr

* fix root

* fix everything

* Apply suggestions from code review

* Update beacon-chain/state/stateutil/blocks.go

* Update beacon-chain/blockchain/receive_block.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/blockchain/process_block.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* terence's review

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-26 13:10:22 -05:00
Nishant Das
4118fa5242 new fixes (#5221)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-26 11:21:02 -05:00
terence tsao
2df76798bc Add HighestSlotStatesBelow DB getter (#5213)
* Add HighestSlotStatesBelow
* Tests for HighestSlotStatesBelow
* Typos
* Comment
* Merge refs/heads/master into states-slots-saved-at
* Quick fix
* Merge branch 'states-slots-saved-at' of github.com:prysmaticlabs/prysm into states-slots-saved-at
* Prevent underflow for real, thanks Nishant!
2020-03-26 15:37:40 +00:00
Preston Van Loon
3792bf67b6 Add alpine based docker images for validator and beacon chain (#5214)
* Add alpine based images for validator and beacon chain

* Use an alpine image with glibc

* manual tags on transitional targets

* poke buildkite

* poke buildkite
2020-03-25 19:36:28 -05:00
Nishant Das
e077d3ddc9 Fix Incorrect Logging for IPV6 Addresses (#5204)
* fix ipv6 issues
* Merge branch 'master' into fixIPV6
* imports
* Merge branch 'fixIPV6' of https://github.com/prysmaticlabs/geth-sharding into fixIPV6
* Merge branch 'master' into fixIPV6
2020-03-25 17:19:11 +00:00
Preston Van Loon
2ad5cec56c Add gRPC headers flag support for validator client side (#5203)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-25 11:29:04 -05:00
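
The gRPC headers flag from #5203 lets users attach extra metadata, such as auth headers, to every validator RPC. A minimal sketch of turning a comma-separated key=value flag into outgoing gRPC metadata; the flag format is assumed, not taken from the PR:

```go
package client

import (
    "context"
    "strings"

    "google.golang.org/grpc/metadata"
)

// withGRPCHeaders parses "k1=v1,k2=v2" and attaches each pair as outgoing
// gRPC metadata on the returned context.
func withGRPCHeaders(ctx context.Context, headersFlag string) context.Context {
    for _, pair := range strings.Split(headersFlag, ",") {
        kv := strings.SplitN(pair, "=", 2)
        if len(kv) != 2 || kv[0] == "" {
            continue // skip malformed entries
        }
        ctx = metadata.AppendToOutgoingContext(ctx, kv[0], kv[1])
    }
    return ctx
}
```
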
terence tsao
6b1e60c404 Remove extra udp port log (#5205)
* Remove extra udp port log
* Merge branch 'master' into rm-log
2020-03-25 15:28:29 +00:00
terence tsao
48e984f526 Add HighestSlotBlocksBelow getter for db (#5195)
* Add HighestSlotBlockAt

* Start testing

* Apply fixes

* Typo

* Test

* Rename for clarity

* Use length

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-25 08:41:48 -05:00
Preston Van Loon
fbee94a0e9 Remove deprecated aggregator (#5200) 2020-03-25 06:14:21 -07:00
Preston Van Loon
9740245ca5 Add enable-state-field-trie for e2e (#5198)
* Add enable-state-field-trie for e2e
* Merge refs/heads/master into e2e-enable-state-field-trie
* Merge refs/heads/master into e2e-enable-state-field-trie
* fix all this
* Update shared/sliceutil/slice.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* terence's review
* comment
* Merge branch 'e2e-enable-state-field-trie' of https://github.com/prysmaticlabs/geth-sharding into e2e-enable-state-field-trie
2020-03-25 06:54:56 +00:00
Preston Van Loon
48d4a8655a Add ipv6 multiaddr support (#5199)
* Add ipv6 multiaddr support
* allow ipv6 for discv5
2020-03-25 04:03:51 +00:00
terence tsao
e15d92df06 Apply fixes to block slots methods in DB (#5194)
* Apply fixes
* Typo
* Merge refs/heads/master into fix-slots-saved-for-blocks
2020-03-25 03:05:20 +00:00
Preston Van Loon
729bd83734 Add span to HTR and skip slot cache (#5197)
* Add a bit more span data
* missing import
* Merge branch 'master' into more-spans
2020-03-25 01:15:00 +00:00
terence tsao
c63fb2cd44 Add HighestSlotState Getter for db (#5192) 2020-03-24 14:51:24 -07:00
terence tsao
78a865eb0b Replace boltdb imports with bbolt import (#5193)
* Replaced. Debugging missing strict dependencies...
* Merge branch 'master' into bbolt-import
* Update import path
* Merge branch 'bbolt-import' of github.com:prysmaticlabs/prysm into bbolt-import
* use forked prombbolt
* Merge branch 'bbolt-import' of github.com:prysmaticlabs/prysm into bbolt-import
* fix
* remove old boltdb reference
* Use correct bolt for pk manager
* Merge branch 'bbolt-import' of github.com:prysmaticlabs/prysm into bbolt-import
* fix for docker build
* gaz, oops
2020-03-24 20:00:54 +00:00
shayzluf
6e516dabf9 Setup Slasher RPC server (#5190)
* slasher rpc server

* fix comment

* fix comment

* remove server implementation from pr

* Apply suggestions from code review

* Gazelle

* Update slasher/rpc/service.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>

* Update slasher/detection/detect.go

* Update slasher/detection/detect.go

* Update slasher/detection/detect.go

* Update slasher/detection/detect.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-24 14:30:21 -04:00
Nishant Das
454e02ac4f Add Improvements to State Benchmarks (#5187)
* add improvements

* clean up
2020-03-24 09:16:07 -05:00
Preston Van Loon
35d74981a0 Correctly return attestation data for late requests (#5183)
* Add functionality to support attestation requests that are older than the head state

* lint

* gaz

* Handle nil state case

* handle underflow of first epoch

* Remove redundant and possibly wrong genesisTime struct field

* fix remaining tests

* gofmt

* remove debug comment

* use stategen.StateByRoot interchangably with beaconDB.State

* gofmt

* goimports

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-23 21:30:28 -07:00
Ivan Martinez
d8bcd891c4 Change E2E config ports to be non-commonly used (#5184)
* Change config ports to be non-commonly used
2020-03-24 03:20:38 +00:00
terence tsao
2e0158d7c5 Add HighestSlotBlock Getter for db (#5182)
* Starting

* Update block getters in db

* New test

* New test for save blocks as well

* Delete blocks can clear bits tests

* Fmt
2020-03-23 18:42:41 -05:00
Ivan Martinez
bdb80f4639 Change ListAttestations to get attestations from blocks (#5145)
* Start fixing api to get from blocks

* Fix listatts tests

* Fix slasher

* Improve blocks

* Change att grouping

* Use faster att concat

* Try to fix e2e

* Change back time

* tiny e2e fix

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-23 16:22:37 -04:00
Preston Van Loon
0043fb0441 Shard //beacon-chain/core/state:go_default_test (#5180)
* Add shards to core state tests, some of the fuzzing tests can be very slow
* Merge refs/heads/master into shard-core-state-tests
2020-03-23 19:08:39 +00:00
Preston Van Loon
f520472fc6 Buildkite changes (#5178)
* Do not override jobs, dont print colors
* Merge branch 'master' of github.com:prysmaticlabs/prysm into buildkite-changes
* use composite flag for minimal downloads
* Add repository cache
* use hardlinks
* repository cache common
* query and build repo cache
2020-03-23 19:00:37 +00:00
Preston Van Loon
5241582ece Add CORS preflight support (#5177)
* Add CORS preflight support

* lint

* clarify description
2020-03-23 13:17:17 -05:00
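
CORS preflight support in #5177 means answering the browser's OPTIONS request with the allowed origins, methods, and headers before the real request is sent. A minimal net/http middleware sketch with placeholder allowed values:

```go
package gateway

import "net/http"

// corsMiddleware answers OPTIONS preflight requests directly and adds the
// CORS headers to every other response it passes through.
func corsMiddleware(allowedOrigin string, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Access-Control-Allow-Origin", allowedOrigin)
        w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        w.Header().Set("Access-Control-Allow-Headers", "Content-Type, Authorization")
        if r.Method == http.MethodOptions {
            w.WriteHeader(http.StatusNoContent) // preflight handled, no body
            return
        }
        next.ServeHTTP(w, r)
    })
}
```
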
Nishant Das
b0128ad894 Add Attestation Subnet Bitfield (#4989)
* bump bitfield dep

* add new methods

* get it working

* add nil check

* add check

* one more check

* add flag

* everything works local run

* add debug log

* more changes

* ensuring p2p interface works enough for tests to pass

* all tests pass

* include proper naming and comments to fix lint

* Apply suggestions from code review

* discover by peers

* cannot figure out why 0 peers

* remove keys

* fix test

* fix it

* fix again

* remove log

* change back

* gaz

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-23 09:41:47 -05:00
Preston Van Loon
5d1c3da85c BLS: some minor improvements (#5161)
* some improvements
* gofmt
* Merge refs/heads/master into bls-improvements
* Merge refs/heads/master into bls-improvements
* Merge refs/heads/master into bls-improvements
2020-03-22 23:40:39 +00:00
terence tsao
301c2a1448 New byteutils for state gen (#5163)
* New byteutils for state gen
* Added empty slice and nil checks
* Merge branch 'master' into bit-utils
* SetBit to extend bit
* Merge branch 'bit-utils' of github.com:prysmaticlabs/prysm into bit-utils
* Comment
* Add HighestBitIndexBelow
* Test for HighestBitIndexBelow
* another test and better test fail output
* Feedback
* Merge branch 'bit-utils' of github.com:prysmaticlabs/prysm into bit-utils
* Feedback
* Preston's feedback, thanks!
* Use var
* Use var
* Merge refs/heads/master into bit-utils
2020-03-22 23:19:38 +00:00
Ivan Martinez
bc3d673ea4 Parallelize E2E Testing (#5168)
* Begin cleanup for E2E

* Parellize testing

* Add comments

* Add comment
2020-03-22 19:04:23 -04:00
Preston Van Loon
3d092d3eed Update go-bitfield (#5162)
* Update go-bitfield from https://github.com/prysmaticlabs/go-bitfield/pull/28
2020-03-22 04:51:06 +00:00
Preston Van Loon
4df5c042d9 Use faster bitfield BitIndices to build attesting indices (#5160)
* Refactor AttestingIndices to not return any error. Add tests. Add shortcut for fully attested attestation
* attestationutil.ConvertToIndexed never returned error either
* Working with benchmark:
* fix test
* Merge branch 'attestationutil-improvements-0' into attestationutil-improvements-1
* out of bounds check
* Update after merge of https://github.com/prysmaticlabs/go-bitfield/pull/26
* remove shortcut
* Merge refs/heads/attestationutil-improvements-0 into attestationutil-improvements-1
* Merge branch 'attestationutil-improvements-0' into attestationutil-improvements-1
* Merge branch 'attestationutil-improvements-1' of github.com:prysmaticlabs/prysm into attestationutil-improvements-1
* revert test...
* Merge refs/heads/attestationutil-improvements-0 into attestationutil-improvements-1
* Merge branch 'master' of github.com:prysmaticlabs/prysm into attestationutil-improvements-1
* Merge branch 'attestationutil-improvements-1' of github.com:prysmaticlabs/prysm into attestationutil-improvements-1
* Update go-bitfield after https://github.com/prysmaticlabs/go-bitfield/pull/27
2020-03-22 01:42:51 +00:00
Preston Van Loon
d06b0e8a86 Refactor attestationutil.AttestingIndices (#5159)
* Refactor AttestingIndices to not return any error. Add tests. Add shortcut for fully attested attestation
* attestationutil.ConvertToIndexed never returned error either
* fix test
* remove shortcut
* revert test...
2020-03-22 00:23:37 +00:00
Jim McDonald
4f8d9c59dd Replace default value for datadir (#5147) 2020-03-21 23:30:51 +08:00
Ivan Martinez
021d777b5e Add Anti-Flake test for E2E (#5149)
* Add antiflake test

* Respond to comments

* Comment

* Change issue num
2020-03-21 14:42:51 +08:00
terence tsao
dc3fb018fe Fix new state mgmt sync stuck in a loop (#5142) 2020-03-19 18:46:35 -07:00
Preston Van Loon
2ab4b86f9b Allow setting flags via yaml config file. (#4878) 2020-03-19 14:46:44 -07:00
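
#4878 lets flag values come from a YAML config file instead of a long command line. A minimal sketch of the idea with gopkg.in/yaml.v2 and a small, made-up subset of flags; Prysm's actual mechanism wires this into its CLI library:

```go
package main

import (
    "fmt"
    "io/ioutil"
    "log"

    "gopkg.in/yaml.v2"
)

// config mirrors a subset of command-line flags; yaml keys match flag names.
type config struct {
    Datadir     string `yaml:"datadir"`
    P2PHostIP   string `yaml:"p2p-host-ip"`
    GRPCGateway int    `yaml:"grpc-gateway-port"`
}

func main() {
    raw, err := ioutil.ReadFile("config.yaml")
    if err != nil {
        log.Fatal(err)
    }
    var cfg config
    if err := yaml.Unmarshal(raw, &cfg); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("flags from file: %+v\n", cfg)
}
```
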
Ivan Martinez
b30a089548 Add fetching validators by indices and public keys (#5141)
* update ethereumapis with patch
* Add indices and pubkeys to ListValidators request
* Add sorting
* Merge branch 'master' into validators-by-keys-indices
* Rename to index
* Merge branch 'validators-by-keys-indices' of https://github.com/prysmaticlabs/prysm into validators-by-keys-indices
* Add comment
2020-03-19 20:30:40 +00:00
Ivan Martinez
271938202e Improve validator logs (#5140)
* Improve validator logging

* Update validator/client/validator_log.go
2020-03-19 13:34:50 -04:00
shayzluf
6fe814c5aa double proposal detector (#5120)
* proposal detector

* comment fixes

* comment fixes

* raul feedback

* fix todo

* gaz

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-19 17:29:35 +05:30
Preston Van Loon
a9f4d1d02d Attestation: Add a check for overflow (#5136)
* Add a check for overflow
* gofmt beacon-chain/cache/committee_test.go
2020-03-19 04:41:05 +00:00
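
The overflow check in #5136 is the usual pre-flight test before doing unsigned arithmetic on values that could be large. A minimal sketch of that guard, illustrative rather than the exact expression used in the committee cache:

```go
package mathutil

import (
    "errors"
    "math"
)

// addUint64 returns a+b, or an error if the sum would wrap around.
func addUint64(a, b uint64) (uint64, error) {
    if a > math.MaxUint64-b {
        return 0, errors.New("uint64 addition overflows")
    }
    return a + b, nil
}
```
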
Preston Van Loon
7c110e54f0 Add ssz marshal and unmarshal for most data structures (#5121)
* Add ssz marshal and unmarshal for most data structures
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Update ferran SSZ
* Update ferran's SSZ
* Merge refs/heads/master into ssz-stuff
* fix tests
* Merge branch 'ssz-stuff' of github.com:prysmaticlabs/prysm into ssz-stuff
* gaz
2020-03-19 02:39:23 +00:00
Raul Jordan
3043d4722f Attestation Dynamic Committee Subnets (#5123)
* initiate cache
* imports fix
* add in feature config flag
* utilize a dynamic set of subnets
* Merge branch 'master' into att-subnets
* add in feature config flag
* Merge branch 'att-subnets' of github.com:prysmaticlabs/prysm into att-subnets
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into att-subnets
* shift
* more changes
* gaz
* Update beacon-chain/rpc/validator/assignments.go
* Update beacon-chain/rpc/validator/assignments.go
* add flag
* Merge branch 'att-subnets' of https://github.com/prysmaticlabs/geth-sharding into att-subnets
* Merge branch 'master' into att-subnets
* Merge refs/heads/master into att-subnets
* no double flag
* Merge branch 'att-subnets' of github.com:prysmaticlabs/prysm into att-subnets
* amend committee ids to better name
* gaz
2020-03-18 23:13:37 +00:00
Ivan Martinez
c96c8b4aa3 Minor slasher fixes (#5129)
* Minor fixes

* Change log
2020-03-18 14:49:20 -05:00
Nishant Das
9f46000dba change to batch size (#5128)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-18 17:57:20 +08:00
Nishant Das
5450b3155e Integrate Field Tries into Current State (#5082)
* add new methods
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into improvedHTRArrays
* new field trie
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into improvedHTRArrays
* perform better copying
* fix bug
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into improvedHTRArrays
* add support for variable length arrays
* get it running
* save all new progress
* more fixes
* more fixes
* more cleanup
* some more clean up
* new memory pool
* remove lock
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into improvedHTRArrays
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into improvedHTRArrays
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into improvedHTRArrays
* use wrapper
* remove redundant methods
* cleanup
* cleanup
* remove unused method
* change field
* Update beacon-chain/state/types.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Update beacon-chain/state/types.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Apply suggestions from code review

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
2020-03-18 04:52:08 +00:00
Nishant Das
1bb12c3568 Add Field Trie to State (#5118)
* add new helpers

* make zerohash public

* remove unused method

* add more tests

* cleanup

* add in new tests

* fix all tests

* Apply suggestions from code review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-18 10:09:31 +08:00
Ivan Martinez
1be8b3aa5e Slasher lag fix (#5124)
* Slasher fixes

* fix
2020-03-17 16:53:08 -05:00
Nishant Das
431762164e Add New State Utils (#5117)
* add new helpers

* make zerohash public

* remove unused method

* add more tests

* cleanup

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-17 14:25:17 -05:00
Victor Farazdagi
3ec2a0f9e0 Refactoring of initial sync (#5096)
* implements blocks queue

* refactors updateCounter method

* fixes deadlock on stop w/o start

* refactors updateSchedulerState

* more tests on scheduler

* parseFetchResponse tests

* wraps up tests for blocks queue

* eod commit

* fixes data race in round robin

* revamps fetcher

* fixes race conditions + livelocks + deadlocks

* less verbose output

* fixes data race, by isolating critical sections

* minor refactoring: resolves blocking calls

* implements init-sync queue

* update fetch/send buffers in blocks fetcher

* blockState enum-like type alias

* refactors common code into releaseTicket()

* better gc

* linter

* minor fix to round robin

* moves original round robin into its own package

* adds enableInitSyncQueue flag

* fixes issue with init-sync service selection

* Update beacon-chain/sync/initial-sync/round_robin.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* initsyncv1 -> initsyncold

* adds span

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-03-17 12:27:18 -05:00
Victor Farazdagi
e96b45b29c asserts non-nil state (#5115) 2020-03-17 07:58:16 -07:00
terence tsao
e529f5b1d6 Part 1 of integrating new state mgmt to run time (#5108) 2020-03-16 12:07:07 -07:00
Victor Farazdagi
f18bada8c9 Init sync blocks queue (#5064)
* fixes data race, by isolating critical sections

* minor refactoring: resolves blocking calls

* implements init-sync queue

* update fetch/send buffers in blocks fetcher

* blockState enum-like type alias

* refactors common code into releaseTicket()

* better gc

* linter

* Update beacon-chain/sync/initial-sync/blocks_queue.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

* Update beacon-chain/sync/initial-sync/blocks_queue_test.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: shayzluf <thezluf@gmail.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-16 18:21:36 +03:00
terence tsao
5657535c52 Fixed saveNewValidators error log (#5109) 2020-03-15 16:21:56 -07:00
Preston Van Loon
9da9fbdfba Fix reward and penalty zero epoch bug (#5107)
* Fix reward and penalty bug https://github.com/prysmaticlabs/prysm/issues/5105
* Merge branch 'master' into fuzz-fix-attestationDelta
2020-03-15 19:14:52 +00:00
Ivan Martinez
de2ec8e575 Update README for Slasher (#5106)
* Add readme
2020-03-15 18:46:21 +00:00
terence tsao
3660732f44 Resume new state mgmt (#5102) 2020-03-15 09:47:49 -07:00
Jim McDonald
8e6c16d416 Tweak validator logging (#5103)
* Tidy up logging
2020-03-15 15:46:22 +00:00
Ivan Martinez
8143cc36bc Add Slasher to E2E (#5061)
* Start adding "inject slashing into pool"

* Attempt at slashing

* Remove unneded

* Fix

* Begin adding slasher client to e2e

* Start slasher in e2e

* Get slashing detection working

* Get slashing evaluators working

* Progress on e2e

* Cleanup e2e

* Fix slasher e2e!

* lint

* Comment

* Fixes

* Improve accuracy of balance check

* REmove extra

* Remove extra

* Make more accurate
2020-03-15 01:09:23 -04:00
terence tsao
eeffa4fb30 New state getter (#5101)
* getter.go
* getter_test.go
* fixed a cold bug
* fmt gaz
* All tests pass
* Merge branch 'master' into new-state-getter
* Merge refs/heads/master into new-state-getter
2020-03-14 18:39:23 +00:00
Victor Farazdagi
1137403e4b Init sync pre queue (#5098)
* fixes data race, by isolating critical sections

* minor refactoring: resolves blocking calls

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-14 13:21:07 -05:00
terence tsao
f17818b1c0 New state setter (#5100)
* setter.go
* tests
* fmt
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into new-state-setter
* Merge refs/heads/master into new-state-setter
2020-03-14 16:31:21 +00:00
Nishant Das
691f0bba70 Minor Improvements (#5099)
* add fixes
2020-03-14 16:12:22 +00:00
terence tsao
b024191887 Get cold intermediate state with slot (#5097)
* loadColdIntermediateStateWithSlot

* Starting test

* Tests
2020-03-14 10:34:37 -05:00
Raul Jordan
1f87cb11fc Use Current Time Slot for Fetching Committees in RPC (#5094)
* use genesis time fetcher
* Merge branch 'master' into use-time-fetcher
* fix breaking
* Merge branch 'use-time-fetcher' of github.com:prysmaticlabs/prysm into use-time-fetcher
* list beacon committees tests fixed
* Merge branch 'master' into use-time-fetcher
* Merge branch 'master' into use-time-fetcher
* Merge refs/heads/master into use-time-fetcher
* Update beacon-chain/rpc/beacon/committees_test.go
2020-03-14 03:32:51 +00:00
Preston Van Loon
a0b142a26c Update to go 1.14 (#4947)
* Update to go 1.14
* Update with fix from https://github.com/bazelbuild/rules_go/pull/2388
* Merge branch 'master' into go-1.14
* Merge refs/heads/master into go-1.14
* Merge branch 'master' of github.com:prysmaticlabs/prysm into go-1.14
* Update gRPC
* Merge branch 'go-1.14' of github.com:prysmaticlabs/prysm into go-1.14
* Update golang.org/x/crypto
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Merge refs/heads/master into go-1.14
* Committing gc_goopts for issue repro
* Fix race and msan builds
* Merge branch 'master' of github.com:prysmaticlabs/prysm into go-1.14
* Merge refs/heads/master into go-1.14
* switch to LRU
* Merge branch 'go-1.14' of github.com:prysmaticlabs/prysm into go-1.14
* Fixed, but don't feel good about this
* Switch append ordering
2020-03-14 00:12:52 +00:00
shayzluf
035eaffd9d handle slashing from p2p (#5047)
* handle slashing from p2p

* gaz

* remove unneeded check

* add tests

* gaz  goimports

* text update

* Apply suggestions from code review

* add proto.equal

* fix test

* add context to call

* fix state bug found by terence

* fix tests add error type handling

* nil checks

* nil head state check

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-13 16:47:27 -05:00
Ivan Martinez
c41244ad34 Make spanner tests more thorough, fixes (#5093)
* Fix tests for spanner

* Start change to indexed atts

* Improve tests for spanner

* Fix tests

* Remove extra
2020-03-13 14:04:22 -04:00
terence tsao
c20d9ccbb3 Better attestation pool with map instead of expiration cache (#5087)
* update aggregated

* update block att

* update forkchoice att

* update unaggregated att

* gazelle

* Use copy

* Locks

* Genesis time

* Fixed tests

* Comments

* Fixed tests
2020-03-13 12:35:28 -05:00
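
#5087 swaps an expiration cache for a plain lock-guarded map keyed by the attestation data root, so aggregation can update entries in place. A rough sketch of that shape with hypothetical value types:

```go
package attestations

import "sync"

// pool stores aggregated attestations keyed by the hash tree root of their
// attestation data, replacing a TTL cache with an explicitly managed map.
type pool struct {
    lock       sync.RWMutex
    aggregated map[[32]byte][]interface{} // value type is a stand-in for the real attestation type
}

func newPool() *pool {
    return &pool{aggregated: make(map[[32]byte][]interface{})}
}

func (p *pool) save(root [32]byte, att interface{}) {
    p.lock.Lock()
    defer p.lock.Unlock()
    p.aggregated[root] = append(p.aggregated[root], att)
}

func (p *pool) get(root [32]byte) []interface{} {
    p.lock.RLock()
    defer p.lock.RUnlock()
    return p.aggregated[root]
}
```
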
Ivan Martinez
3380d14475 Include ejected indices in ActiveSetChanges endpoint (#5066)
* Fix ActiveSetChanges

* Include ejected indices in ActiveSetChanges RPC

* Fix test fails

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-13 12:23:19 -04:00
shayzluf
4f031d1988 fix slasher rpc disconnect on error (#5092) 2020-03-13 10:59:14 -05:00
Jim McDonald
02afb53ea4 Remove spurious error messages in wallet keymanager (#5090)
* Handle multiple passphrases

* Add tests
2020-03-13 05:26:10 -07:00
terence tsao
0974c02a00 Load cold state by root (#5086) 2020-03-12 15:27:55 -07:00
Raul Jordan
c6acf0a28c Use Target Epoch to Determine Indexed Attestations for Slasher (#5085)
* no more head fetchre

* no need for head fetcher

* nil checks
2020-03-12 17:02:12 -05:00
terence tsao
ed7ad4525e Method to retrieve block slot via block root (#5084)
* blockRootSlot

* Tests

* Gaz

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-12 16:04:24 -05:00
terence tsao
7fcc07fb45 Save hot state (#5083)
* loadEpochBoundaryRoot
* Tests
* Span
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into save-hot-state
* Starting test
* Tests
* Merge refs/heads/master into save-hot-state
* Merge branch 'master' into save-hot-state
* Use copy
* Merge branch 'save-hot-state' of https://github.com/prysmaticlabs/prysm into save-hot-state
* Merge refs/heads/master into save-hot-state
2020-03-12 20:48:07 +00:00
shayzluf
f937713fe9 Broadcast slashing (#5073)
* add flag
* broadcast slashings
* Merge branch 'master' of github.com:prysmaticlabs/prysm into broadcast_slashing

# Conflicts:
#	beacon-chain/rpc/beacon/slashings_test.go
* fix tests
* goimports
* goimports
* Merge branch 'master' into broadcast_slashing
* Merge branch 'master' into broadcast_slashing
* Merge branch 'master' into broadcast_slashing
2020-03-12 20:29:23 +00:00
terence tsao
359e0abe1d Load epoch boundary root (#5079)
* loadEpochBoundaryRoot

* Tests

* Span

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-12 15:00:37 -05:00
tzapu
0704ba685a Return statuses on duties (#5069)
* try to return something for everything
* default to unknown
* debug
* moar debug
* move else to outer check
* working
* reorder imports
* cleanup
* fix TestGetDuties_NextEpoch_CantFindValidatorIdx
* Merge branch 'master' into return-statuses-on-duties
* Update validator/client/validator.go
* Merge branch 'master' into return-statuses-on-duties
* Merge branch 'master' into return-statuses-on-duties
2020-03-12 19:07:37 +00:00
shayzluf
0f95b797af Save slashings to slasher DB (#5081)
* fix tests add error type handling

* Update slasher/detection/detect_test.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>

* goimports

Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
2020-03-12 22:08:58 +05:30
terence tsao
43722e45f4 Save cold state (#5077) 2020-03-12 05:58:06 -07:00
terence tsao
ff4ed413a3 State migration from hot to cold (archived) (#5076)
* Starting

* Test

* Tests

* comments

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-11 21:27:16 -05:00
Raul Jordan
f1a42eb589 Verify Slashing Signatures Before Putting Into Blocks (#5071)
* rem slasher proto
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* verify slashing
* added in test for pending att slashing
* tests starting to apss
* sig failed verify regression test
* tests passing for ops pool
* Update beacon-chain/operations/slashings/service.go
* Merge refs/heads/master into verify-slash-sig
* verify on insert
* tests starting to pass
* all code paths fixed
* imports
* fix build
* fix rpc errors
* Merge refs/heads/master into verify-slash-sig
2020-03-12 01:16:55 +00:00
terence tsao
a90ffaba49 Archived point retrieval and recovery (#5075) 2020-03-11 17:38:30 -07:00
Raul Jordan
663d919b6f Include Bazel Genrule for Fast SSZ (#5070)
* rem slasher proto
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* included new ssz bzl rule
* Merge branch 'master' into add-in-starlark-rule
* Update tools/ssz.bzl

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
2020-03-11 19:50:22 +00:00
Victor Farazdagi
7b30845c01 fixes races in blocks fetcher (#5068) 2020-03-11 14:21:41 +03:00
Victor Farazdagi
46eb228379 fixes data race in state.Slot (#5067)
* fixes data race in state/getters
2020-03-11 09:11:07 +00:00
Raul Jordan
8d3fc1ad3e Add in Slasher Metrics (#5060)
* added in slasher metrics
* Merge branch 'master' into slasher-metrics
* add in prom bolt metrics for slasher
* Merge branch 'slasher-metrics' of github.com:prysmaticlabs/prysm into slasher-metrics
* imports
* include all metrics
* no dup bolt collector
* Update slasher/detection/attestations/spanner.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* naming best practices for prom, thx Terence
* Merge branch 'slasher-metrics' of github.com:prysmaticlabs/prysm into slasher-metrics
2020-03-10 19:41:55 +00:00
Nishant Das
93195b762b Improve HTR of State (#5058)
* add cache
* Update beacon-chain/state/stateutil/blocks.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Update beacon-chain/state/stateutil/blocks.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Update beacon-chain/state/stateutil/hash_function.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Merge branch 'master' into improveHTR
* add back string casting
* fix imports
2020-03-10 16:26:54 +00:00
Jim McDonald
f0abf0d7d5 Reduce frequency of 'eth1 client not syncing' messages (#5057) 2020-03-10 09:51:41 -05:00
Nishant Das
9d27449212 Discovery Fixes (#5050)
* connect to dv5 bootnodes

* fix test

* change polling period

* ignore

* Update beacon-chain/p2p/service.go

* Update beacon-chain/p2p/service_test.go

* fix test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-03-09 19:53:37 -07:00
Preston Van Loon
edb6590764 Build herumi's BLS from source (#5055)
* Build herumi from source. Working so far on linux_amd64 for compile, but tests fail to initialize the curve appropriately

* Add copts to go_default_library

* llvm toolchain, still WIP

* Fixes, make llvm a config flag

* fix gazelle resolution

* comment

* comment

* update herumi to the v0.9.4 version

* Apply @nisdas patch from https://github.com/herumi/bls-eth-go-binary/pull/5
2020-03-09 21:22:41 -05:00
Raul Jordan
e77cf724b8 Better Nil Check in Slasher (#5053)
* rem slasher proto
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* some nil checks in slasher
2020-03-09 21:21:39 +00:00
Ivan Martinez
b633dfe880 Change detection and updating in Slasher to per attestation (#5043)
* Change span updates to update multiple validators at once

* Change detection to perform on multiple validators at once

* Fix minspan issue

* Fix indices

* Fix test

* Remove logs

* Remove more logs

* Update slasher/detection/attestations/spanner_test.go

* Update slasher/detection/attestations/spanner_test.go

* Update slasher/detection/attestations/spanner_test.go

* Update slasher/detection/detect.go

* nil check

* fix ununsed import

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-09 13:14:19 -05:00
Ivan Martinez
8334aac111 Batch saving of attestations from stream for slasher (#5041)
* Batch saving of attestations from stream for slasher

* Progress on test

* Fixes

* Fix test

* Rename

* Modify logs and timing

* Change

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-09 12:49:40 -05:00
Preston Van Loon
4c1e2ba196 Add prysm.sh script (#5042)
* Add prysm.sh script

* Add dist to gitignore

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-09 12:19:53 -05:00
terence tsao
25c13663d2 Add hot state by slot retrival (#5052)
* Update replay conditions

* loadHotStateBySlot

* Tests and gaz

* Tests
2020-03-09 11:22:45 -05:00
Jim McDonald
0c3af32274 Use BeaconBlockHeader in place of BeaconBlock (#5049) 2020-03-09 21:08:30 +08:00
shayzluf
01cb01a8f2 On eviction test fix (#5046) 2020-03-09 01:35:39 -04:00
Raul Jordan
0c9e99e04a Aggregate Attestations Before Streaming to Slasher (#5029)
* rem slasher proto
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* aggregate before streaming
* Merge branch 'agg-idx-atts' of github.com:prysmaticlabs/prysm into agg-idx-atts
* collect atts and increase buffer size
* fix test for func
* Merge refs/heads/master into agg-idx-atts
* Update beacon-chain/rpc/beacon/attestations.go
* Merge refs/heads/master into agg-idx-atts
* naming
* Merge branch 'agg-idx-atts' of github.com:prysmaticlabs/prysm into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* comment terence feedback
* Merge refs/heads/master into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* Merge refs/heads/master into agg-idx-atts
* Fix tests
2020-03-08 21:39:54 +00:00
Ivan Martinez
d4cd51f23e Change slasher cache to LRU cache (#5037)
* Change cache to LRU cache

* fixes

* REduce db usage

* Fix function name

* Merge issues

* Save on eviction

* Fixes

* Fix

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-03-08 17:11:59 -04:00
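
#5037 moves the slasher cache to an LRU with a save-on-eviction hook, so entries pushed out of memory are flushed to the database rather than lost. A minimal sketch with hashicorp/golang-lru; the key and value types and the persist callback are placeholders:

```go
package cache

import (
    "log"

    lru "github.com/hashicorp/golang-lru"
)

// newSpanCache builds an LRU that writes evicted entries back to storage
// via the supplied persist callback instead of silently dropping them.
func newSpanCache(persist func(validatorIdx uint64, spans interface{})) (*lru.Cache, error) {
    return lru.NewWithEvict(1024, func(key interface{}, value interface{}) {
        idx, ok := key.(uint64)
        if !ok {
            log.Printf("unexpected cache key type %T", key)
            return
        }
        persist(idx, value)
    })
}
```
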
terence tsao
962fe8552d Compute state up to slot (#5035) 2020-03-08 21:41:24 +01:00
Raul Jordan
eddaea869b Prepare Slasher for Production (#5020)
* rem slasher proto
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* add a bit better logging
* Empty db fix
* Improve logs
* Fix small issues in spanner, improvements
* Change costs back to 1 for now
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into cleanup-slasher
* Change the cache back to 0
* Cleanup
* Merge branch 'master' into cleanup-slasher
* lint
* added in better spans
* log
* rem spanner in super intensive operation
* Merge branch 'master' into cleanup-slasher
* add todo
* Merge branch 'cleanup-slasher' of github.com:prysmaticlabs/prysm into cleanup-slasher
* Merge branch 'master' into cleanup-slasher
* Apply suggestions from code review
* no logrus
* Merge branch 'master' into cleanup-slasher
* Merge branch 'cleanup-slasher' of https://github.com/prysmaticlabs/Prysm into cleanup-slasher
* Remove spammy logs
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into cleanup-slasher
* gaz
* Rename func
* Add back needed code
* Add todo
* Add span to cache func
2020-03-08 17:56:43 +00:00
Nishant Das
300d072456 Add Config Change for Validator (#5038)
* add config for validator
* gaz
* Merge refs/heads/master into configureValidator
* Merge refs/heads/master into configureValidator
2020-03-08 06:45:36 +00:00
Nishant Das
ac1c92e241 Add Prometheus Service for Slasher (#5039)
* add prometheus service
* Update slasher/node/node.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Merge refs/heads/master into addPromServiceSlasher
2020-03-08 06:35:37 +00:00
terence tsao
2452c7403b Load hot state by root (#5034)
* Add loadHotStateByRoot

* Touchup loadHotStateByRoot

* Tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-08 14:24:57 +08:00
Preston Van Loon
b97e22107c Update rbe_autoconf (#5036)
* Update rbe_autoconf
* Update timestamps
2020-03-07 21:18:16 +00:00
Preston Van Loon
98faf95943 Define -c opt for release builds (#5033)
* define -c opt for release builds
* Merge branch 'master' into c-opt
2020-03-07 05:50:26 +00:00
Preston Van Loon
af28862e94 Add sha256 to external dependency librdkafka (#5032)
* Add sha256 to external dependency librdkafka
2020-03-07 05:31:07 +00:00
Jim McDonald
b133eb6c4a Warn rather than fail on incorrect keystore password (#5024)
* Warn on failure to decrypt a keystore validator

* Update test

* Update tools

Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-06 23:05:48 -06:00
Nishant Das
345ec1bf8c Fix Custom Delay Flag (#5026)
* fix flag
* Merge refs/heads/master into fixFlag
* Merge refs/heads/master into fixFlag
* Merge refs/heads/master into fixFlag
* Merge refs/heads/master into fixFlag
* Merge refs/heads/master into fixFlag
* fix config
* Merge branch 'fixFlag' of https://github.com/prysmaticlabs/geth-sharding into fixFlag
2020-03-07 03:52:40 +00:00
Nishant Das
d1fea430d6 change limit (#5027)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-06 17:26:08 -06:00
terence tsao
054b15bc45 Add SlotsPerArchivedPoint flag and a check (#5023)
* Flag

* Service

* Tests

* Tests and comments

* Lint

* Add to usages

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-06 17:06:01 -06:00
Preston Van Loon
6a2955d43c Update bazel.sh (#5028)
* Add google auth creds as environment variable for CI. Add a comment why this script is helpful
* Add google auth creds as environment variable for CI. Add a comment why this script is helpful
2020-03-06 17:43:06 +00:00
Jim McDonald
0ecd83afbb Avoid crash due to invalid index (#5025)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-06 09:38:43 -06:00
Nishant Das
069f2c5fb6 Asynchronous Dials To Peers (#5021)
* make dial non-blocking

* add sleep

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-06 22:57:47 +08:00
Raul Jordan
acb15a1f04 Report Validator Status in Duties (#5017)
* fix up status reporting
* Merge refs/heads/master into properly-report-status
* Merge refs/heads/master into properly-report-status
* Merge refs/heads/master into properly-report-status
2020-03-06 06:18:14 +00:00
Preston Van Loon
e2af70f692 Run buildifer, remove duplicated WORKSPACE entries (#5018)
* Buildifier, add release config
* Merge branch 'master' into bazel-stuff
* Merge refs/heads/master into bazel-stuff
* Merge refs/heads/master into bazel-stuff
* revert gnostic
* Set kafka for CI tests only
* add bazel.sh script
* set home
2020-03-06 04:42:27 +00:00
Raul Jordan
15b5ec89b2 Cross-compile OSX: Remove dropbox link, add sha256 check (#5019)
* Remove dropbox link, add sha256 check

* Use docker's S3 link instead of dropbox

* Update image sha, workspace

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-05 16:32:52 -06:00
Victor Farazdagi
b4aaa610a1 Fixes race condition at genesis (#5016)
* fixes race condition at genesis
2020-03-06 00:20:38 +03:00
Raul Jordan
6158a648cd Plugging in spanner db (#5009)
* rem slasher proto

* with cache

* delete old code

* moving to bytes.go fix traces

* moving to bytes.go fix traces

* raul feedback

* raul feedback

* begin

* add eviction test

* ivan feedback

* ivan feedback

* test is running

* some comment improvements

* test included for bytes and bool

* import

* cleanup

* tests pass

* fill in all fields in test

* gaz

* fix integration

* gaz + goimports

* fix service.go

* remove sleep

* cleanup

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-05 12:11:54 -06:00
prylabs-bulldozer[bot]
e2a6f5a6ea First cut at multi-arch cross compiling toolchain (#4945)
* PRYSM-2849: first cut at multi-arch cross-compiling toolchain. Currently supports arm64 and amd64 via the Docker cross-compiler image
* picky linter
* some readme cleanup
* remove arm 8.2 revision for arm64 builds (cortex a72 is ARMv8.0-A)
remove arm32 toolchain from multiarch dockerfile
* remove extraneous WORKSPACE entries
* add docker remote execution configs for amd64 and arm64
* add osx bazelrc configs
* working osx toolchain
* update readme
* cleanup for amd, arm and osx cross before beginning windows
* initial stab at mingw windows cross
* add docker target for windows_amd64 and update readme for cross-compiling
* little more cleanup for readability
* Check in generated RBE. Still tweaking config but linux amd64 -> linux amd64 on RBE works OK. Cross compile does not work properly in RBE yet.
* fix
* update image
* Making some progress
* delete artifacts
* Working build
* Add remote config
* remove some things I added to README
* Tidy
* Update readme
* remove 2 commented lines
* buildifer
* Merge pull request #1 from prysmaticlabs/cross-compile-with-suburbandad

Cross compile with suburbandad
* Merge branch 'master' into clang-cross-compile
* buildifier on generated stuff
* Merge branch 'master' into clang-cross-compile
* Merge branch 'master' into clang-cross-compile
* Merge branch 'master' into clang-cross-compile
2020-03-05 16:59:56 +00:00
Raul Jordan
aebc883a0d Methods to retrieves last saved state and block for stategen pkg (#5005)
* Added last saved block and state

* Genesis tests

* Gaz

* Added state tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-05 10:22:20 -06:00
Raul Jordan
f3dc113dba add multi-options (#5012) 2020-03-05 08:49:26 -06:00
prylabs-bulldozer[bot]
5961aaf588 Windows friendly stdin reads for passwords (#5010)
* cast os.Stdin file handle since on Windows syscall.Stdin is not an int
* import ordering
* Merge branch 'master' into prysm-5008-windows-stdin-not-int
2020-03-05 07:31:11 +00:00
prylabs-bulldozer[bot]
e635e5b205 Feature flag to gate prune state upon start up (#5011)
* Added feature flag to gate prune state upon start up
2020-03-05 06:24:59 +00:00
Raul Jordan
66991f0efe Spanner db (#4997)
* rem slasher proto

* with cache

* delete old code

* moving to bytes.go fix traces

* moving to bytes.go fix traces

* raul feedback

* raul feedback

* add eviction test

* ivan feedback

* ivan feedback

* some comment improvements

* test included for bytes and bool

* import

* cleanup

* tests pass

* fill in all fields in test

* gaz

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-04 23:35:14 -06:00
prylabs-bulldozer[bot]
e339b07ac7 Remove unused DB functions and proto from Slasher (#4996)
* Remove unused DB functions
* goimports
* Fix bug and improve tests
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-remove-old-db
2020-03-04 22:29:36 +00:00
prylabs-bulldozer[bot]
139f51efaa LRU cache for state gen (#5004)
* Add hot state cache
* Gaz
* Test
* Merge branch 'master' of https://github.com/prysmaticlabs/prysm into state-gen-lru-cache
* Merge refs/heads/master into state-gen-lru-cache
2020-03-04 22:09:21 +00:00
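As a rough illustration of what this change describes, a fixed-size LRU keyed by block root that holds recently generated hot states, here is a minimal sketch using hashicorp/golang-lru; the hotStateCache and beaconState names are illustrative placeholders, not Prysm's actual types:

```go
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru"
)

// beaconState stands in for Prysm's beacon state type (illustrative only).
type beaconState struct{ Slot uint64 }

// hotStateCache keeps recently generated hot states keyed by block root.
type hotStateCache struct {
	cache *lru.Cache
}

func newHotStateCache(size int) (*hotStateCache, error) {
	c, err := lru.New(size) // least-recently-used entries are evicted once full
	if err != nil {
		return nil, err
	}
	return &hotStateCache{cache: c}, nil
}

// get returns the cached state for a block root, or nil on a miss.
func (h *hotStateCache) get(blockRoot [32]byte) *beaconState {
	if item, ok := h.cache.Get(blockRoot); ok {
		return item.(*beaconState)
	}
	return nil
}

// put stores a state by block root, possibly evicting the oldest entry.
func (h *hotStateCache) put(blockRoot [32]byte, st *beaconState) {
	h.cache.Add(blockRoot, st)
}

func main() {
	c, _ := newHotStateCache(16)
	root := [32]byte{0x01}
	c.put(root, &beaconState{Slot: 42})
	fmt.Println(c.get(root).Slot) // 42
}
```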
prylabs-bulldozer[bot]
a43a40c1c9 Create .bazelversion (#5003)
* Create .bazelversion
2020-03-04 21:49:22 +00:00
Raul Jordan
0bdd0dba67 Add warning if shell expansion characters make it in to the path (#5001) 2020-03-04 12:34:23 -06:00
Victor Farazdagi
239efe7410 init sync: adds blocks fetcher service (#4978)
* init sync: adds blocks fetcher service

* init-sync: rework ctx handling

* init-sync: fix long lines

* removes redundant method

* adds buffer to requests channel

* adds jaeger spans

* fixes overly long comment line
2020-03-04 20:19:09 +03:00
terence tsao
e5da756c47 Add error definitions for state gen pkg (#5000)
* Add error file
* Fmt
* Merge branch 'master' into error-file
2020-03-04 16:50:08 +00:00
terence tsao
a612557fe7 Add log file (#4999) 2020-03-04 10:30:52 -06:00
Raul Jordan
26582cbf2e Stub Slasher RPC Methods (#4995)
* rem slasher proto
* Remove unneeded protos
* Rework api proto
* Add back proto
* regen slashing proto
* Merge branch 'rem-rpc' of github.com:prysmaticlabs/prysm into rem-rpc
* Fix comments
* Merge branch 'rem-rpc' of https://github.com/prysmaticlabs/Prysm into rem-rpc
2020-03-03 22:09:35 +00:00
Raul Jordan
d68636bc7f Remove Deprecated Slasher Code (#4994)
* remove old code
* Clear out service from bazel
2020-03-03 19:40:09 +00:00
terence tsao
699e7efc61 Add epoch boundary root map (#4993)
* Add to struct

* Add implementations

* Tests
2020-03-03 13:07:34 -06:00
Ivan Martinez
ba6b8c9321 Update outdated spec function names and comments (#4992)
* Update outdated spec function names and comments
* VerifyMerkleBranch
* Remove error handle
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slash-spec-refresh
* Merge branch 'master' into slash-spec-refresh
2020-03-03 18:29:41 +00:00
Ivan Martinez
cc5fc0af1a Plug-in double voting detection into detection service (#4960)
* Add double vote detection to spanner
* Add documentation
* Update slasher/detection/attestations/spanner.go
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-spanner-double
* Merge branch 'slasher-spanner-double' of https://github.com/0xKiwi/Prysm into slasher-spanner-double
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-spanner-double
* Gazelle
* Add double vote detection func
* Implement double voting detection
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-implement-double
* Merge branch 'master' into slasher-implement-double
* Merge branch 'slasher-implement-double' of https://github.com/0xKiwi/Prysm into slasher-implement-double
* Fix typo
* Remove filter, replace with slot + committee index
* Change bloom filter to 2 sig bytes
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-change-filter
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-implement-double
* Merge branch 'slasher-change-filter' of https://github.com/0xKiwi/Prysm into slasher-implement-double
* Change detection to use prefix
* Fix runtime
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-implement-double
* Fix bug and comments
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-implement-double
* Fix flaky test
* Merge branch 'master' into slasher-implement-double
* Improve logs
* Merge branch 'slasher-implement-double' of https://github.com/0xKiwi/Prysm into slasher-implement-double
* Add ok check
* Fix test
* Merge branch 'master' into slasher-implement-double
2020-03-03 18:08:21 +00:00
Nishant Das
0093218e41 Add Noise Support To Prysm (#4991)
* add dep
* add feature config
* Merge branch 'master' into addNoiseSupport
* gaz and victor's review
* Merge branch 'addNoiseSupport' of https://github.com/prysmaticlabs/geth-sharding into addNoiseSupport
2020-03-03 17:45:51 +00:00
tzapu
c09ae21ab0 show full public key in metrics (#4988)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-03 10:36:49 -06:00
Jim McDonald
4c43616647 Add ProtectingKeymanager interface and calls (#4982)
* Add ProtectedKeymanager interface and calls

* Rename interface

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-03 10:21:58 -06:00
shayzluf
59575bcac9 fuzz core/blocks package (#4907)
* fuzz core/blocks package

* gaz goimports

* fix test

* terence feedback

* terence feedback

* add error to domain. halfway through

* adding error to domain

* goimports

* added error handling to test

* error instead of continue

* terence and nishant feedback

* domain error handling

* domain error handling

* handle nil validator in ReadOnlyValidator creation

* goimports

* [4]byte domain type

* [4]byte domain type

* [4]byte domain type fix tests

* fix tests

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-03-03 19:02:14 +05:30
Ivan Martinez
703ce63c12 Change span representation to a struct in Slasher (#4981)
* Remove filter, replace with slot + committee index

* Change bloom filter to 2 sig bytes

* Fix comment

* Fix line
2020-03-03 15:28:23 +05:30
Ivan Martinez
69845cad77 Fix stalling bug in slashing pool (#4985)
* Fix bug in slashing pool

* Remove unneeded logs

* remove line

* Remove val

* Fix tests

* Add regression test

* Add another regression test
2020-03-03 00:52:35 -05:00
terence tsao
a07e604eea Better logs for forking (#4966)
* Move `updateHead` to ReceiveAttestationNoPubsub and better forking messages

* Typo

* Import

* f

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-02 09:51:26 -08:00
Jim McDonald
044d72064f Pre-allocate slices when reporting validator performance (#4979) 2020-03-02 23:00:23 +08:00
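The gist of this change is sizing result slices up front when the final length is known, so repeated appends never reallocate. A minimal sketch with illustrative names:

```go
package main

import "fmt"

// balancesFor shows the pre-allocation pattern: capacity is reserved up front
// because the number of results (one per public key) is known in advance,
// so the appends below never trigger a reallocation.
func balancesFor(pubKeys [][]byte, lookup func([]byte) uint64) []uint64 {
	balances := make([]uint64, 0, len(pubKeys))
	for _, key := range pubKeys {
		balances = append(balances, lookup(key))
	}
	return balances
}

func main() {
	keys := [][]byte{{0x01}, {0x02}, {0x03}}
	fmt.Println(balancesFor(keys, func(k []byte) uint64 { return uint64(k[0]) * 1000 }))
}
```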
Jim McDonald
5cb51263b0 Fix crash when reporting validator metrics (#4968)
* Fix crash when reporting validator metrics
* Merge branch 'master' into validator-reporting-crash
* Merge branch 'master' into validator-reporting-crash
2020-03-02 08:51:07 +00:00
terence tsao
d9d4a9954e Archived point index DB methods (#4977) 2020-03-02 08:55:38 +01:00
Nishant Das
3989b65667 Add Flag for Checking HeadState (#4974)
* gate feature
* imports
* add flag
* Merge branch 'master' into gateFeature
2020-03-02 06:06:20 +00:00
terence tsao
9fe2cdd5ca State summary DB methods (#4971)
* Define proto
* Regen
* Delete slasher.pb.go
* Gaz
* Merge branch 'state-summary-proto' of https://github.com/prysmaticlabs/prysm into state-summary-proto
* Revert "Delete slasher.pb.go"

This reverts commit 19bfa21cd3.
* Add state_summary.go
* Test
* Gaz
* Interfaces
* pass through
* Merge refs/heads/master into state-summary-db
* Merge refs/heads/master into state-summary-db
2020-03-02 04:27:55 +00:00
terence tsao
cb163d8910 Define state summary proto (#4967)
* Define proto

* Regen

* Delete slasher.pb.go

* Gaz

* Revert "Delete slasher.pb.go"

This reverts commit 19bfa21cd3.

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-01 23:10:52 -05:00
Nishant Das
cd6e06f01e add helpers (#4972) 2020-03-01 16:22:49 +01:00
Preston Van Loon
af5cc31565 Use correct image name for validator debug image (#4963)
* Use correct image name for validator debug image
* Merge refs/heads/master into fix-validator-debug-image
2020-02-28 01:22:32 +00:00
terence tsao
5a5cdc1b02 Removed (#4962) 2020-02-27 17:06:25 -08:00
Ivan Martinez
31b1e6a7a8 Add double vote detection to spanner (#4954)
* Add double vote detection to spanner

* Add documentation

* Update slasher/detection/attestations/spanner.go

* Gazelle

* Fix filter output
2020-02-27 17:21:05 -05:00
Preston Van Loon
05a5bad476 Migrate SubmitAggregateAndProof (#4951)
* Remove unused services, mark everything as deprecated, regen pb.go
* remove some code from cluster pk manager, gazelle
* goimports
* remove mocks
* Update WORKSPACE, deprecate old method, stub new method
* Move implementation to ethereumapis definition
* gofmt
* Add TODO for #4952
* Merge branch 'master' into migrate-submitaggregateandproof
* Update validator client to use new submit aggregate and proof method
* Merge branch 'migrate-submitaggregateandproof' of github.com:prysmaticlabs/prysm into migrate-submitaggregateandproof
* gaz
* rename
* rename
* Merge refs/heads/master into migrate-submitaggregateandproof
* Merge refs/heads/master into migrate-submitaggregateandproof
* Merge refs/heads/master into migrate-submitaggregateandproof
* Merge refs/heads/master into migrate-submitaggregateandproof
* Merge refs/heads/master into migrate-submitaggregateandproof
* fix tests
* Merge branch 'migrate-submitaggregateandproof' of github.com:prysmaticlabs/prysm into migrate-submitaggregateandproof
2020-02-27 20:23:35 +00:00
terence tsao
2fef9d3e5e Move blockchain service metrics package (#4959)
* Moved metrics package
2020-02-27 19:38:22 +00:00
Raul Jordan
14b3181e67 Plug-In Attester Slashing Detection Into Slasher Runtime (#4937)
* more spanner additions

* implement iface

* begin implement

* wrapped up spanner functions

* rem interface

* added in necessary comments

* comments on enums

* begin adding tests

* plug in surround vote detection

* saved indexed db implementation

* finally plugin slashing for historical data

* Small fixes

* add in all gazelle

* save incoming new functions

* resolve todo

* fix broken test channel item

* tests passing when fixing certain arguments and setups

* Add comment and change unimplemented

* find surround

* added in gazelle

* gazz

* feedback from shay

* fixed up naming

* Update

* Add tests for detectSurroundVotes

* Remove logs

* Fix slasher test

* formatting

* Remove unneeded condition

* Test indices better

* fixing broken build

* pass tests

* skip tests

* imports

* Update slasher/detection/attestations/attestations_test.go

* Update slasher/beaconclient/historical_data_retrieval_test.go

* Address comments

* Rename function

* Add comment for future optimization

* Fix comment

Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
2020-02-27 12:22:39 -05:00
Ivan Martinez
e7b94123ce Add interface for spanner and MockSpanner (#4956)
* Add interface for spanner

* Add MockSpanner for future testing

* Add comment

* gazelle
2020-02-26 22:48:02 -06:00
Ivan Martinez
76aad0f444 Add simple bloom filter implementation for double vote detection (#4948)
* Add simple bloom filter implementation for detecting similarity of 1 key

* Change hash to keccak

* Fix receiver name

* Fix bug

* Fix comments and organize test

* Add comment detailing hash functions

* Add bloom to test names
2020-02-26 19:37:36 -05:00
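A toy version of the single-key bloom filter idea described in this entry: set a few bit positions derived from the key on insert, and report a hit (possibly a false positive) only when all of them are set. This sketch uses FNV hashing from the standard library for brevity, whereas the commit notes the real filter switched to keccak:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a fixed-size bit array; k positions per key are derived by
// double hashing (h1 + i*h2), a common way to simulate k hash functions.
type bloom struct {
	bits []byte
	k    uint64
}

func newBloom(nBits, k uint64) *bloom {
	return &bloom{bits: make([]byte, (nBits+7)/8), k: k}
}

func (b *bloom) hashes(key []byte) (uint64, uint64) {
	h1 := fnv.New64a()
	h1.Write(key)
	h2 := fnv.New64()
	h2.Write(key)
	return h1.Sum64(), h2.Sum64() | 1 // force an odd, nonzero stride
}

// Add sets the k bit positions derived from the key.
func (b *bloom) Add(key []byte) {
	h1, h2 := b.hashes(key)
	n := uint64(len(b.bits)) * 8
	for i := uint64(0); i < b.k; i++ {
		pos := (h1 + i*h2) % n
		b.bits[pos/8] |= 1 << (pos % 8)
	}
}

// Contains may return false positives but never false negatives.
func (b *bloom) Contains(key []byte) bool {
	h1, h2 := b.hashes(key)
	n := uint64(len(b.bits)) * 8
	for i := uint64(0); i < b.k; i++ {
		pos := (h1 + i*h2) % n
		if b.bits[pos/8]&(1<<(pos%8)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	f := newBloom(1024, 3)
	f.Add([]byte("attestation-data-root"))
	fmt.Println(f.Contains([]byte("attestation-data-root")), f.Contains([]byte("other")))
}
```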
terence tsao
2c1c41d1d6 Move stategen package under /state (#4950)
* Move state gen to state
* Merge branch 'master' into move-state-gen
* Merge refs/heads/master into move-state-gen
2020-02-26 22:09:14 +00:00
Preston Van Loon
921a44d9fd Clean up unused / deprecated protobuf definitions (#4949)
* Remove unused services, mark everything as deprecated, regen pb.go
* remove some code from cluster pk manager, gazelle
* goimports
* remove mocks
* Merge branch 'master' into remove-deprecated-proto-defs
2020-02-26 21:15:36 +00:00
Raul Jordan
22bbed0059 Stream Indexed Attestations RPC Implementation (#4941)
* stream indexed attestations impl
* mock regen
* test for stream indexed
* atts test
* no bls
* gaz
* Merge refs/heads/master into implement-stream-indexed
* use feed for atts instead
* remove unused imports
* Merge refs/heads/master into implement-stream-indexed
* fix tests in beacon
* properly use pointers
* imports
* import
2020-02-26 20:14:22 +00:00
terence tsao
b1231f3ddf Replay blocks and generate state without sig verification (#4943)
* New file

* Add transition_stategen.go

* Update ProcessBlockForStateRoot

* Feature flags

* Fixed tests

* Gaz

* Make them private

* E2e flags

* Gazelle

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-26 10:40:33 -06:00
Ivan Martinez
c2b30cf801 Prepare spanner for double vote detection and fix a few bugs (#4940)
* Rename vars for clarity

* Change spanner to take target epoch as key

* Fix tests, add multiple val test

* Fixes

* Change the spanner to take in att on detect

* Add back proto diagram tests

* Remove unneeded comments
2020-02-25 21:35:34 -06:00
Preston Van Loon
b647ca5dd2 Release --initial-sync-cache-state (#4938)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-24 19:20:00 -06:00
Preston Van Loon
c0f1a1d674 Validator: cache domain data calls (#4914)
* Use a domain data cache to reduce the number of calls per epoch

* fix fakevalidator

* Refactor to use a feature flag, use proto.clone, move interceptor to its own file

* gofmt

* fix comment

* tune cache slightly

* log if error on domain data

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-24 13:02:45 -08:00
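The caching idea here is memoizing DomainData responses per (epoch, domain type) so the validator does not repeat the same gRPC call within an epoch. A minimal sketch under that assumption; domainCache, fetchFn, and the key layout are illustrative, not Prysm's actual implementation:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// fetchFn stands in for the beacon node RPC that returns the signature
// domain for a given epoch and domain type (illustrative signature).
type fetchFn func(ctx context.Context, epoch uint64, domainType [4]byte) ([]byte, error)

type domainCache struct {
	mu    sync.RWMutex
	items map[string][]byte
	fetch fetchFn
}

func newDomainCache(fetch fetchFn) *domainCache {
	return &domainCache{items: make(map[string][]byte), fetch: fetch}
}

// DomainData returns a cached domain when present, otherwise fetches and caches it.
func (c *domainCache) DomainData(ctx context.Context, epoch uint64, dt [4]byte) ([]byte, error) {
	key := fmt.Sprintf("%d-%x", epoch, dt)

	c.mu.RLock()
	if d, ok := c.items[key]; ok {
		c.mu.RUnlock()
		return d, nil
	}
	c.mu.RUnlock()

	d, err := c.fetch(ctx, epoch, dt)
	if err != nil {
		return nil, err
	}
	c.mu.Lock()
	c.items[key] = d
	c.mu.Unlock()
	return d, nil
}

func main() {
	calls := 0
	cache := newDomainCache(func(ctx context.Context, epoch uint64, dt [4]byte) ([]byte, error) {
		calls++
		return []byte{byte(epoch)}, nil
	})
	cache.DomainData(context.Background(), 10, [4]byte{0, 0, 0, 1})
	cache.DomainData(context.Background(), 10, [4]byte{0, 0, 0, 1}) // served from cache
	fmt.Println("rpc calls:", calls)                                // 1
}
```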
Preston Van Loon
855f5d2986 Add a configurable flag for gRPC retries (#4926)
* Add a configurable flag for gRPC retries
* Merge refs/heads/master into configurable-retry
* Merge refs/heads/master into configurable-retry
* Merge refs/heads/master into configurable-retry
* Merge refs/heads/master into configurable-retry
* Merge refs/heads/master into configurable-retry
* add in flag to usage
2020-02-24 18:00:22 +00:00
Jim McDonald
5f0ed8388e Use --deposit-contract with default value (#4925)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-24 11:24:05 -06:00
Jim McDonald
a951c4f6ab Add Ethereum 1 block->timestamp cache (#4924)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-24 10:53:35 -06:00
terence tsao
0470d37072 Consider missing validator count for performance metric (#4928)
* Consider missing validator count
* Use validator count reported
* Merge branch 'master' into missing-validators
* Merge refs/heads/master into missing-validators
2020-02-24 16:28:22 +00:00
terence tsao
15b649d760 Fix aggregated attestation pool grows large in size (#4932)
* Add metrics

* Use it

* Use it

* Fixed exp time and tests

* Update on save too

* Expose getters

* One epoch purge time

* Fixed a timing issue

* Clean up

* Gazelle

* Interface

* Prune every epoch

* Aggregate twice per slot

* Revert attsToBeAggregated

* Delete expired atts

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-24 10:18:34 -06:00
terence tsao
2e56a59473 attestations in pool count metrics (#4930)
* Add metrics
* Use it
* Use it
* Expose getters
2020-02-23 22:30:52 +00:00
Raul Jordan
6fe86a3b30 Define an Efficient Spanner Struct Implementation for Slasher (#4920)
* more spanner additions

* implement iface

* begin implement

* wrapped up spanner functions

* rem interface

* added in necessary comments

* comments on enums

* begin adding tests

* test for detection

* add all detection tests

* moar tests

* tests for deleting pass

* dd test for update spans

* tests for updating

* include tracing utils

* gaz

* add mutexes

* ivan feedback
2020-02-22 08:57:24 -06:00
Nishant Das
83945ca54b Shift Stateutils to State Package (#4921)
* shift over
* new changes
* imports
* Merge branch 'master' into shiftUtils
2020-02-21 16:52:21 +00:00
Nishant Das
47bb927029 Fix Fork Copying (#4922)
* add fix and reg test

* goimports
2020-02-21 08:49:42 -05:00
garyschulte
597b21c40a fix missing metrics label on attestation fail (#4917)
Co-authored-by: garyschulteog <30323939+garyschulteog@users.noreply.github.com>
2020-02-20 21:28:55 -06:00
Raul Jordan
39aa791dcc Add Slashings Into Blocks from Pool (#4902)
* tests pass

* fix broken test

* addressed feedback

* Update beacon-chain/rpc/validator/proposer_test.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

* Update beacon-chain/rpc/validator/proposer_test.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: shayzluf <thezluf@gmail.com>
2020-02-20 15:10:51 -06:00
Ivan Martinez
90ed37a655 Cleanup detection code (#4915) 2020-02-20 08:56:37 -06:00
Raul Jordan
d143187b7e Request All Indexed Attestations Since Genesis in Slasher on Startup (#4894)
* include fixes

* rev

* logrus

* tests for query sync status and chain head

* begin tests for indexed atts

* test passing for requesting historical atts

* Update slasher/beaconclient/chain_data_test.go

* Update slasher/beaconclient/historical_data_retrieval.go

* lint

* fixed up wanted vs received

* fix mock

* gazelle

* fix broken build

* tests pass

* dep

* gaz

* add dep

* tests pass

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-19 16:26:14 -06:00
Preston Van Loon
3735e6b8af Add a clarifying comment from #4909 (#4911)
* Add a clarifying comment from #4909
* Merge refs/heads/master into clarify
2020-02-19 21:10:56 +00:00
Jim McDonald
deb76f1c15 Fix double period in span name (#4910) 2020-02-19 14:52:44 -06:00
Jim McDonald
6baffd4ccb Infostream (#4760)
* Add validators stream

* Ignore unknown keys rather than error on them

* Reduce accesses to common structures

* Ensure correct information returned for deposited validators

* Short-term cache for remote deposit data

* Name epoch duration for clarity

* Break out duplicated logic in to a single function

* Add capacities for slices and maps where appropriate

* Break out functions; add tests

* Allow stream errors not related to context

Co-authored-by: Nishant Das <nish1993@hotmail.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-19 11:09:23 -06:00
terence tsao
731cc0bd44 Handling pending atts if they don't have the state (#4904) 2020-02-19 06:46:30 -08:00
Nishant Das
641ad51dd4 check db for justified state (#4905) 2020-02-19 22:18:44 +08:00
Nishant Das
8e55c81bd5 Delete States More Efficiently (#4909)
* only sync at end of method

* chunk roots

* very fast iteration

* delete correctly
2020-02-19 19:36:37 +08:00
Preston Van Loon
f737267e54 gRPC retry requests (#4908)
* gRPC retry requests

* with 5 retries default
2020-02-19 18:42:17 +08:00
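A minimal sketch of wiring retries into a gRPC client connection with the go-grpc-middleware retry interceptor, using the 5-attempt default mentioned above; the endpoint and dial options are placeholders:

```go
package main

import (
	"log"

	grpc_retry "github.com/grpc-ecosystem/go-grpc-middleware/retry"
	"google.golang.org/grpc"
)

func main() {
	// Retry transient failures up to 5 times before surfacing the error.
	opts := []grpc_retry.CallOption{
		grpc_retry.WithMax(5),
	}
	conn, err := grpc.Dial(
		"localhost:4000", // placeholder beacon node RPC endpoint
		grpc.WithInsecure(),
		grpc.WithUnaryInterceptor(grpc_retry.UnaryClientInterceptor(opts...)),
		grpc.WithStreamInterceptor(grpc_retry.StreamClientInterceptor(opts...)),
	)
	if err != nil {
		log.Fatalf("could not dial: %v", err)
	}
	defer conn.Close()
}
```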
Nishant Das
44856f9500 Add Unsafe Sync (#4906)
* add unsafe flag

* imports

* Use finalized epoch

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2020-02-19 00:25:07 -08:00
Nishant Das
4389e9d3c9 Add mempool feature flag (#4824) (#4903)
* Add mempool feature flag

* gate put too

* fix

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-19 14:29:49 +08:00
Nishant Das
655f57e3f2 Bound Initial Sync Cache Size (#4844)
* bound initial sync

* fix lint

* Revert "Better block attestation inclusion (#4838)"

This reverts commit 090d9627fe.

* add memory pool

* more fixes

* revert changes

* add hack

* revert hack

* push halving

* bring back hack

* increase cache size

* more fixes

* more changes

* new fixes

* add test

* add reverse test

* more tests and clean up

* add helper

* more cleanup and tests

* fix test

* remove code

* set gc percent flag

* lint

* lint

* Fix comment formatting

* Fix some formatting

* inverse if statement

* remove debug log

* Apply suggestions from code review

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>

* Update beacon-chain/state/getters.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>

* Update beacon-chain/db/kv/state.go

* integrate state generator

* gaz

* fixes

* terence's review

* reduce bound further

* fix test

* separate into new files

* gaz

* mod build file

* add test

* revert changes

* fix test

* Update beacon-chain/core/helpers/slot_epoch.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* handle edge case

* add back test

* fix test again

* handle edge case

* Update beacon-chain/blockchain/init_sync_process_block.go

* Update beacon-chain/blockchain/init_sync_process_block.go

* Update beacon-chain/stategen/service_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update beacon-chain/blockchain/init_sync_process_block.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update beacon-chain/stategen/service.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update beacon-chain/stategen/service.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* raul's review

* raul's review

* fix refs

* terence's review

* one more fix

* Update beacon-chain/blockchain/init_sync_process_block.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-02-18 15:10:54 -06:00
terence tsao
c0d4cabdb7 Check seeds (#4901) 2020-02-18 12:05:36 -06:00
terence tsao
0e37b4926a Export LoadBlocks and ReplayBlocks (#4898)
* Define StateGenerator

* Gaz

* Delete interface.go

* Update BUILD.bazel

Co-authored-by: Nishant Das <nish1993@hotmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-18 13:54:56 +08:00
shayzluf
25308ef9fa fuzzing core/state package without skip slot cache (#4883)
* fuzzing core/state package

* named error msg

* err comment

* terence feedback

* preston feedback

* preston feedback

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-18 10:38:15 +05:30
Preston Van Loon
40afef8b9e Only set gc percent if the flag is set (#4899)
* only set gc percent if the flag is set
2020-02-18 01:37:35 +00:00
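The guard described here amounts to overriding Go's GC target only when the operator explicitly asks for it; roughly as follows, assuming a plain standard-library flag rather than Prysm's actual CLI flag:

```go
package main

import (
	"flag"
	"runtime/debug"
)

func main() {
	// Placeholder flag; 0 means "not set", leaving the runtime default (GOGC=100) alone.
	gcPercent := flag.Int("gc-percent", 0, "garbage collection target percentage")
	flag.Parse()

	// Only override GC pacing when a positive value was explicitly provided.
	if *gcPercent > 0 {
		debug.SetGCPercent(*gcPercent)
	}
}
```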
Nishant Das
c7d0ced5d1 Utilise a Flag to Toggle With the GC (#4897)
* set flag
* Merge refs/heads/master into setGC
2020-02-18 00:37:37 +00:00
Nishant Das
3d12322103 Use Memory Pool for Randao Mixes (#4896)
* add mem pool

* use mem pool

* Update shared/memorypool/memorypool.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update shared/memorypool/memorypool.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-02-18 08:25:39 +08:00
Raul Jordan
b4881e3cd5 Save Attestations In Initial Sync if Archive Enabled (#4895)
* receive block enable archive
* add to initial sync func
* Merge branch 'master' into archive-save-atts
2020-02-17 22:42:21 +00:00
Preston Van Loon
d1eaa8e09e Define debug images for Prysm beacon chain and validator binaries (#4893)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-17 14:13:34 -08:00
Raul Jordan
5db8c5ad0c Implement ListIndexedAttestations Endpoint in Prysm (#4892)
* update patch and workspace

* stub methods

* implementation of indexed attestations list

* include latest ethereumapis

* update request type

* compute committee pure function

* use compute committee helper

* add test into list indexed attestations

* regenerate mock

* imports and out of range check

* test passing for archived epoch

* add comment

* comment

* better comment on func

* throw in continue instead
2020-02-17 15:57:13 -06:00
terence tsao
d7db8b1f5d Replay block same slots different root edge case (#4889) 2020-02-17 12:33:00 -07:00
terence tsao
6b8ec26c56 Handle nil head block in cache (#4888)
* Nil check

* Fixed tests

* Covered rest of the codebase

* Race tests

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-17 12:21:42 -06:00
terence tsao
9b2aa66667 Fix replay block edge cases and tests (#4881)
* Covered 2 edge cases and tests

* Consider process epoch
2020-02-17 13:10:23 +08:00
terence tsao
b9c140c17d Hot/cold state management: Replay blocks and gen state (#4877)
* Starting stategen

* Replay implementations

* Replay tests

* Gazelle

* Fixed tests

* Dont have to verify sig
2020-02-17 07:28:20 +08:00
Aranha
8885d715f2 fixed panic: runtime error: integer divide by zero #4777 (#4823) 2020-02-16 15:13:03 -07:00
Preston Van Loon
0a2763b380 Check attestation bitlist length in aggregation to prevent panic (#4876)
* Check attestation bitlist length in aggregation to prevent panic

* Add case for overlap too

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-16 10:18:48 -07:00
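The panic guard described here boils down to refusing to aggregate two attestations whose bitlists have different lengths. A sketch using prysmaticlabs/go-bitfield, with a simplified attestation placeholder rather than the real aggregation code:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

// attestation is a simplified placeholder carrying only the aggregation bits.
type attestation struct {
	AggregationBits bitfield.Bitlist
}

var errBitlistLength = errors.New("aggregation bitlists are different lengths")

// aggregateBits ORs the two bitlists together, but only after confirming the
// lengths match; indexing mismatched bitlists is the kind of bug that panics.
func aggregateBits(a, b attestation) (bitfield.Bitlist, error) {
	if a.AggregationBits.Len() != b.AggregationBits.Len() {
		return nil, errBitlistLength
	}
	merged := bitfield.NewBitlist(a.AggregationBits.Len())
	for i := uint64(0); i < a.AggregationBits.Len(); i++ {
		merged.SetBitAt(i, a.AggregationBits.BitAt(i) || b.AggregationBits.BitAt(i))
	}
	return merged, nil
}

func main() {
	a := attestation{AggregationBits: bitfield.NewBitlist(8)}
	b := attestation{AggregationBits: bitfield.NewBitlist(16)}
	_, err := aggregateBits(a, b)
	fmt.Println(err) // aggregation bitlists are different lengths
}
```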
Raul Jordan
3fcb4e8a12 Update README for Patching Ethereum APIs (#4871)
* edit readme
* Merge branch 'master' into readme-patch
* Merge branch 'master' into readme-patch
2020-02-16 17:07:06 +00:00
Preston Van Loon
db68c8a57b Enable attestation cache flag by default, deprecate feature flag (#4873)
* Enable attester flag by default
2020-02-16 01:07:14 +00:00
SuburbanDad
7899dc115e prevent additional array OOB errors for validator balances (#4872)
Co-authored-by: garyschulte <garyschulte@gmail.com>
2020-02-15 15:47:45 -07:00
terence tsao
456ac5f9a3 Better head object coupling for chain service (#4869)
* Done
* Fixed lock
* Fixed all the tests
* Comments
* Fixed more tests
* Merge branch 'master' into better-head-obj
* Fixed more tests
* Merge branch 'better-head-obj' of git+ssh://github.com/prysmaticlabs/prysm into better-head-obj
* Prestons feedback & fixed test
* Nishant's feedback
* Participation edge case
* Gaz
* Merge branch 'master' into better-head-obj
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into better-head-obj
* Raul's feedback
* Merge branch 'better-head-obj' of git+ssh://github.com/prysmaticlabs/prysm into better-head-obj
2020-02-15 18:57:49 +00:00
terence tsao
92a91476ef Better log (#4870) 2020-02-15 04:09:26 -08:00
Raul Jordan
868c8f5dd4 Detection Service Creation (#4867)
* visibility added

* register in node

* fixed up imports

* include detection listeners for feed

* subscribe to blocks and todos

* tests passing

* todos

* pkg comment
2020-02-14 13:03:25 -06:00
Raul Jordan
38fed735b2 Send Slashing Objects to Beacon Node via RPC (#4866)
* submit slashing objects

* tests complete
2020-02-14 11:11:14 -07:00
terence tsao
4a446329b2 Prevent balance from going out of bounds (#4865)
* Prevent balance goes out of bound
* Prevent balance goes out of bound
* Merge branch 'master' into fix-balance
2020-02-14 16:59:20 +00:00
Ivan Martinez
6b40fa01ec Add detection package for slashing detection functions (#4861)
* Move detection to its own package

* Fix renames

* More fixes

* Revert "Fix renames"

This reverts commit 3200f89a1b.

* Fix

* Fix renames again

* Fix another rename

* Fix comment

* unused

* add comment

* gazelle

* Add spans

* Unexport helper functions

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-14 10:46:55 -06:00
Nishant Das
214121b0ab Don't Initialize Zeroed Out State (#4863)
* dont initialize map with empty registry
* check zeroed pointer
* check zeroed pointer
* imports
* gaz
* better check
* gaz
* fix it finally
* finally fix it
* gaz
2020-02-14 12:19:50 +00:00
Raul Jordan
b99779fe94 Implementing Slasher Node Runtime (#4856)
* include slasher node
* slasher node runtime added
* added in register for beacon client
* streaming blocks fixed up
* all subs working
* gazelle
* handle errors
* Merge branch 'master' into slasher-node
* Update slasher/node/BUILD.bazel
* x up slasher test
* Merge refs/heads/master into slasher-node
* Merge refs/heads/master into slasher-node
* add in force clear into usage
* Merge refs/heads/master into slasher-node
* usage
* Merge refs/heads/master into slasher-node
* Fix streamblocks test
* Merge refs/heads/master into slasher-node
* Fix docker image compile
* Merge branch 'slasher-node' of https://github.com/prysmaticlabs/Prysm into slasher-node
2020-02-14 07:09:54 +00:00
Nishant Das
b263efefeb Copy Checkpoint Root Properly (#4862)
* change to custom hashing
* Merge branch 'master' into minorOpt
* goimports
* Merge branch 'minorOpt' of https://github.com/prysmaticlabs/geth-sharding into minorOpt
* gaz
* pad to 32 bytes
* one more case
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into minorOpt
* one more case
* more cases
* some more cases
* do it better
2020-02-14 06:35:16 +00:00
Nishant Das
ecfd7bdfa1 Change to Custom Hashing for BlockHeaders (#4860)
* change to custom hashing
* Merge branch 'master' into minorOpt
* goimports
* Merge branch 'minorOpt' of https://github.com/prysmaticlabs/geth-sharding into minorOpt
* gaz
* pad to 32 bytes
2020-02-14 05:19:20 +00:00
Raul Jordan
549b0f66fa Include Slashing Submission Endpoints + Slashing Pool in Beacon Node (#4858)
* add to workspace

* impl

* include tests for func

* fix broken build

* test passing, found 2 bugs

* add errors package

* added in mockgen

* we check for insertion into the pool based on attester slashings

* test passing

* proper test

* Update beacon-chain/rpc/beacon/slashings.go

* Update beacon-chain/rpc/beacon/slashings_test.go
2020-02-13 22:20:45 -06:00
garyschulteog
27ec40f269 Remove remaining instances of proto.clone() (#4806)
* prysm-4757 remove proto.Clone() in favor of existing getters.Copy* methods
* prysm-4757 added a bunch of copy methods, and broke some tests
* squash commits
 fix tests and getter implementations
 remove usage of CopySignedBeaconBlock from ReceiveBlockNoVerify
* correctly copy Deposit proof and remove proto.clone() again
* Merge branch 'master' into prysm-4757-no-proto-clone
* Merge branch 'master' into prysm-4757-no-proto-clone
* Fix for comments, inline possible function calls
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into prysm-4757-no-proto-clone
* Merge branch 'master' into prysm-4757-no-proto-clone
* updated with feedback from review
* Merge branch 'master' into prysm-4757-no-proto-clone
* Merge branch 'master' into prysm-4757-no-proto-clone
* Merge branch 'master' into prysm-4757-no-proto-clone
2020-02-14 01:03:51 +00:00
terence tsao
bb60b2f523 Add balances to voting summary log (#4857) 2020-02-13 14:52:35 -08:00
Preston Van Loon
4072eb711f Beacon State: More consistent nil return for state (#4854)
* More consistent nil return for state
* Merge refs/heads/master into nil-state
* Add a check for encode(nil)
* Merge branch 'nil-state' of github.com:prysmaticlabs/prysm into nil-state
* fix test, thanks @rauljordan
* fix tests
* gofmt
2020-02-13 20:34:50 +00:00
Ivan Martinez
2473680759 Add spans to Slasher DB functions (#4855)
* Add interface and move slashing types to /types package

* Add spans for all DB functions

* Fix packages

* Fix func call
2020-02-13 13:51:30 -06:00
Ivan Martinez
c44a30672e Change slasher DB structure to mirror beacon-chains (#4848)
* Add interface and move slashing types to /types package

* WIP restructure to match beacon chain DB

* Fix build

* Fix comment

* Fix comments

* fix comments for sure

* Use wrapper function for evict

* Remove unused

* Update slasher/db/kv/kv.go

* Update slasher/db/testing/BUILD.bazel

* Update slasher/db/types/BUILD.bazel

* Update slasher/db/types/types.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-13 10:19:46 -06:00
Nishant Das
db21f98053 Change Positioning of Warning Log (#4850)
* change logging

* change positioning of log

* fix status test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-13 00:39:15 -08:00
Nishant Das
b7adf55336 Revert "Check HeadState First (#4830)" (#4851)
This reverts commit 601f93a0a1.
2020-02-12 23:33:45 -08:00
Preston Van Loon
f06dfd6108 Secure lock when accessing the map only (#4849)
* Secure lock when accessing the map

* wrong lock

* Remove some deadlocks
2020-02-12 20:33:14 -06:00
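The narrowing described here, holding the lock only around the shared map access rather than the whole call, looks roughly like the following; the registry type and fields are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

type registry struct {
	mu    sync.Mutex
	index map[string]int
}

// lookup holds the lock only for the map read, not for any slower work that
// follows, so other goroutines are not blocked behind it (and re-acquiring the
// same lock later in the call can no longer deadlock).
func (r *registry) lookup(key string) (int, bool) {
	r.mu.Lock()
	idx, ok := r.index[key]
	r.mu.Unlock()

	// ...slower, lock-free work using idx would go here...
	return idx, ok
}

func main() {
	r := &registry{index: map[string]int{"a": 1}}
	fmt.Println(r.lookup("a"))
}
```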
Preston Van Loon
bb4c8ba83e Create backups without freelist. (#4847)
* Create backups without freelist. This is a bit slower, but more accurate
* Merge refs/heads/master into better-backups
2020-02-12 22:01:07 +00:00
terence tsao
16fef1c658 Better attesting summary reporting (#4845) 2020-02-12 13:38:19 -08:00
Preston Van Loon
090d9627fe Better block attestation inclusion (#4838)
* Fill blocks with unaggregated atts when possible, don't delete from pool until the block is submitted

* Add back invalid atts removal

* Don't delete attestations unless they can be aggregated

* comment and test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-12 11:54:07 -08:00
Nishant Das
c7698cda1c Add Pending Attestation Lock fix (#4840)
* add pending lock fix

* use easier locking
2020-02-12 08:58:40 -08:00
Nishant Das
a11f1804a2 lock fix (#4839) 2020-02-11 23:33:15 -08:00
Preston Van Loon
0882908d2c Improve attestation cache check from O(n) to O(1) access time (#4837)
* use O(1) method to access the blocks
* Merge branch 'master' into better-has-aggregated-attestation
2020-02-11 21:57:42 +00:00
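The access-pattern change described here is the classic swap from scanning a slice to a map lookup keyed by the attestation data root. A minimal sketch with a hypothetical pool type, not Prysm's actual attestation pool:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// attData stands in for attestation data; the real key is its hash tree root.
type attData struct {
	Slot  uint64
	Index uint64
}

func rootOf(d attData) [32]byte {
	return sha256.Sum256([]byte(fmt.Sprintf("%d-%d", d.Slot, d.Index)))
}

// pool indexes seen attestation data by root, so membership checks are a
// single map lookup (O(1)) instead of a scan over every stored attestation (O(n)).
type pool struct {
	seen map[[32]byte]bool
}

func newPool() *pool { return &pool{seen: make(map[[32]byte]bool)} }

func (p *pool) Save(d attData)     { p.seen[rootOf(d)] = true }
func (p *pool) Has(d attData) bool { return p.seen[rootOf(d)] }

func main() {
	p := newPool()
	p.Save(attData{Slot: 100, Index: 2})
	fmt.Println(p.Has(attData{Slot: 100, Index: 2}), p.Has(attData{Slot: 101, Index: 0}))
}
```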
Raul Jordan
a7325315a8 Include Beacon Client Package in Slasher (#4835)
* begin beacon client

* adding in the proper receivers

* include all parts of the beacon client

* all comments included

* visibility and package comment
2020-02-11 15:35:31 -06:00
Raul Jordan
c3785e03ba Add Lock to Processing Pending Attestations (#4833)
* add lock to pending atts
* Merge branch 'master' into add-att-lock
* changed to rlock
* broken build
* Merge refs/heads/master into add-att-lock
* refactor to prevent deadlock
2020-02-11 20:01:33 +00:00
Raul Jordan
297247d915 Add Paginated Attestation Pool to Prysm (#4827)
* added pagination to atts

* tests pass for atts

* add mock

* fix

* add to val

* fix

* add in proper mock

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-11 13:18:30 -06:00
Nishant Das
601f93a0a1 Check HeadState First (#4830)
* easy optimization
* Merge refs/heads/master into easyOptimization
* cache miss
2020-02-11 16:22:51 +00:00
Nishant Das
8c90e38770 Fix Goodbye RPC handler (#4831)
* fix goodbye messages

* fix test

* fix test
2020-02-11 09:08:01 -06:00
Preston Van Loon
661e48f549 Revert state copy PR #4811 (#4825)
* Revert "Add mempool feature flag (#4824)"

This reverts commit b3f2a330dc.
* Revert "Optimize Copying of Fields (#4811)"

This reverts commit 4f654d30ac.
2020-02-11 01:47:31 +00:00
Preston Van Loon
b3f2a330dc Add mempool feature flag (#4824)
* Add mempool feature flag

* gate put too

* fix

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-10 16:20:48 -06:00
Preston Van Loon
5c14cd64c5 nil check (#4822)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-10 14:01:37 -08:00
terence tsao
56fcca69d7 Optimize HasBlock (#4821)
* Expose node

* Comments

* Extra line

* Add has block

* Test

* Usages

* Fixed tests
2020-02-10 15:48:28 -06:00
shayzluf
02b6d7706f Slasher committees cache (#4812)
* add committees cache
* committees cache usage
* fix test
* fix log
* goimports
* Merge branch 'master' of github.com:prysmaticlabs/prysm into slasher_committees_cache

# Conflicts:
#	slasher/service/data_update.go
* fix imports
* fix comment
* fix comment
* Merge refs/heads/master into slasher_committees_cache
* Merge refs/heads/master into slasher_committees_cache
* Update slasher/cache/BUILD.bazel

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Merge refs/heads/master into slasher_committees_cache
* Merge refs/heads/master into slasher_committees_cache
* Merge refs/heads/master into slasher_committees_cache
* added in the service context
* baz
* Merge refs/heads/master into slasher_committees_cache
* Merge refs/heads/master into slasher_committees_cache
2020-02-10 20:09:14 +00:00
Preston Van Loon
0ed8246e28 Release flag to aggregate attestations in fork choice. (#4820)
* Release flag to aggregate attestations in fork choice.
* Merge refs/heads/master into aggregate-atts-in-fc
* fix test
* Merge branch 'aggregate-atts-in-fc' of github.com:prysmaticlabs/prysm into aggregate-atts-in-fc
* Merge refs/heads/master into aggregate-atts-in-fc
2020-02-10 19:57:30 +00:00
Jim McDonald
dfe52e1752 Add command to display private keys from keystore (#4793)
* Add command to display private keys from keystore

* Update output format

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-10 13:47:54 -06:00
terence tsao
52524d5acc Expose fork choice node (#4819)
* Expose node

* Comments

* Extra line

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-10 13:09:12 -06:00
Preston Van Loon
4598344918 Clear initial sync state caches after round robin sync (#4817)
* Clear initial sync state caches after round robin sync

* fix test mock

* lint

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-10 10:39:18 -08:00
terence tsao
1a5c5153be Increase size to 10 (#4818) 2020-02-10 10:14:31 -08:00
terence tsao
6c00f5fff7 Update attester wait time (#4791)
* Update attester submit strategy

* Tests

* Gaz

* Fixed rest of the tests

* Updated design to use feed

* Use roughtime for Now

* gaz

* Gaz

* Send block processed after fork choice

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2020-02-10 10:59:55 -06:00
Nishant Das
4f654d30ac Optimize Copying of Fields (#4811)
* add new changes

* memory pool

* add test

* final optimization

* preston's review
2020-02-10 23:05:58 +08:00
Ivan Martinez
18fbdd53b9 Slasher proto and function renames (#4797)
* Rename elements for clarity
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-renames
* Fix test
* Rename more functions
* Cleanup
* Fix logs
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into slasher-renames
* Reorganize and clean up logs
* Address comments
* Add comments
2020-02-10 05:57:20 +00:00
terence tsao
5be4fee810 Removed spans for fork choice helpers (#4808)
* Removed spans
2020-02-09 23:32:08 +00:00
terence tsao
8a02003d4b Feature flag to disable head update on attestation basis (#4802) 2020-02-09 11:44:17 -08:00
terence tsao
bdcd06a708 Handle head state for init sync cache state (#4800)
* Don't save nil head state
* Update head
* Don't update head on new att
* Handle initial sync state in DB can be nil
* Don't update head if it's nil
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into handle-init-sync-cache-state
* Merge refs/heads/master into handle-init-sync-cache-state
* Update beacon-chain/blockchain/head.go
2020-02-09 16:12:06 +00:00
Preston Van Loon
7e0d0502aa Prevent panic on wrong interface conversion (#4803)
* Prevent panic on wrong interface conversion
* remove import
2020-02-09 08:41:50 +00:00
terence tsao
f14ff34797 Delete block attestations from the pool (#4798) 2020-02-08 21:30:45 -08:00
terence tsao
16a0c9f463 Don't save nil head state (#4799)
* Don't save nil head state
* Update head
2020-02-09 05:08:21 +00:00
Preston Van Loon
70cb06d50f Report unhealthy if we think we are out of sync (#4796)
* Report unhealthy if we think we are out of sync
* gofmt
* Merge refs/heads/master into out-of-sync-unhealthy
2020-02-08 18:50:14 +00:00
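One way to read this change: a health endpoint that reports failure while the node believes it is still syncing, so external checks stop treating it as ready. A heavily simplified sketch with a plain HTTP handler and an illustrative syncChecker interface (not Prysm's actual health service):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// syncChecker stands in for the sync service's status query (illustrative).
type syncChecker interface {
	Syncing() bool
}

type staticChecker bool

func (s staticChecker) Syncing() bool { return bool(s) }

// healthzHandler returns 503 while the node believes it is out of sync,
// so load balancers or orchestration can hold traffic until sync completes.
func healthzHandler(sc syncChecker) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if sc.Syncing() {
			http.Error(w, "node is syncing", http.StatusServiceUnavailable)
			return
		}
		fmt.Fprintln(w, "ok")
	}
}

func main() {
	http.Handle("/healthz", healthzHandler(staticChecker(false)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```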
Jim McDonald
0725e2dba7 Downgrade log entry (#4795) 2020-02-08 10:05:13 -08:00
terence tsao
031b51e294 Update head on per attestation and minor refactor clean ups (#4786)
* head.go
* Tests
* Tests
* Merge branch 'master' into call-head
* Merge refs/heads/master into call-head
* Merge refs/heads/master into call-head
* Merge refs/heads/master into call-head
* Merge branch 'master' into call-head
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into call-head
* Conflict
2020-02-08 02:05:43 +00:00
Preston Van Loon
015c8c4cd2 Use helper to aggregate attestations in pool (#4794)
* Aggregate attestations in pool
* test
* clarify test
* fix test
2020-02-08 01:03:50 +00:00
Nishant Das
efd27c7b2b Fix Dynamic Topic Subscriptions (#4767)
* check for sync status

* check for chainstart

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-08 06:49:17 +08:00
Nishant Das
f16a71f178 Revert "Update Slices More Efficiently" (#4790)
* Revert "Update Slices More Efficiently (#4789)"

This reverts commit 669e1ea787.
2020-02-07 16:58:36 +00:00
Nishant Das
669e1ea787 Update Slices More Efficiently (#4789)
* better cached states

* lint

* jim's review
2020-02-07 09:06:39 -06:00
Jim McDonald
7ba2c897ad Add location option for wallet keymanager (#4788)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-06 23:03:33 -06:00
Preston Van Loon
34178aff2a Strict verify attestations in pubsub (#4782)
* Verify attestations before putting them into the pool
* use existing method
* Validate aggregated ones too
* Revert "Validate aggregated ones too"

This reverts commit a55646d131.
* Merge branch 'master' of github.com:prysmaticlabs/prysm into verify-all-atts
* Add feature flag
* The remaining shared reference fields with conditional copy on write
* Merge branch 'master' into better-copy-2
* Merge branch 'better-copy-2' of github.com:prysmaticlabs/prysm into verify-all-atts
* gaz
* fix build, put into validate
* lint
* Merge branch 'master' of github.com:prysmaticlabs/prysm into verify-all-atts
* why does goland do this to me
* revert unrelated change
* fix tests
* Update shared/featureconfig/config.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Merge refs/heads/master into verify-all-atts
* Update beacon-chain/blockchain/testing/mock.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* gofmt
2020-02-07 03:21:55 +00:00
shayzluf
4df74a3b9d Slashing operations pool (#4726)
* first iteration

* with initial tests

* comment fix

* fix lint

* Start fixing other tests

* Cleanup

* Finish att slashing tests

* Finish up tests for proposer

* Fix docs

* Fix tests

* Fix pending att slashings to not include duplicates

* Fix max list

* Add test to make sure no duplicate slashings

* Address comments, improve insertion

* Fix error

* Update beacon-chain/operations/slashings/service.go

* Update beacon-chain/operations/slashings/service.go

* Update beacon-chain/operations/slashings/service.go

* Update beacon-chain/operations/slashings/service.go

* include a helper function for deduplication, fix some comments

Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-07 10:32:51 +08:00
Preston Van Loon
f6dfaef537 BeaconState: remaining shared reference fields with conditional copy on write (#4785)
* The remaining shared reference fields with conditional copy on write
* Merge branch 'master' into better-copy-2
2020-02-06 21:55:29 +00:00
Raul Jordan
a9d144ad1f Stream Blocks Functionality for RPC (#4771)
* stream blocks functionality included
* necessary tests for stream blocks and notifier
* gazelle and tests passing
* gazelle and tests passing
* Merge branch 'master' into stream-block
* Update beacon-chain/core/feed/block/events.go
* Merge refs/heads/master into stream-block
* Merge refs/heads/master into stream-block
* Merge refs/heads/master into stream-block
* Merge refs/heads/master into stream-block
* naming
* build
* Merge refs/heads/master into stream-block
* Merge refs/heads/master into stream-block
* Merge refs/heads/master into stream-block
* fix up tests
* Merge branch 'stream-block' of github.com:prysmaticlabs/prysm into stream-block
* Merge refs/heads/master into stream-block
* shay comment
* Merge refs/heads/master into stream-block
* Merge branch 'stream-block' of github.com:prysmaticlabs/prysm into stream-block
* Merge refs/heads/master into stream-block
2020-02-06 20:14:38 +00:00
terence tsao
9cf30002d4 Rlock for computing head (#4784)
* Add lock
* Space
2020-02-06 19:59:50 +00:00
AgentJ-WR
e2faa391c3 Fix(Genesis): Api genesis block now returns properly (#4736)
* begin state service
* begin on the state trie idea
* created beacon state structure
* add in the full clone getter
* return by value instead
* add all setters
* new state setters are being completed
* arrays roots exposed
*  close to finishing all these headerssss
* functionality complete
* added in proto benchmark test
* test for compatibility
* add test for compat
* comments fixed
* Merge branch 'master' into state-service
* add clone
* add clone
* remove underlying copies
* make it immutable
* integrate it into chainservice
* revert
* wrap up comments for package
* address all comments and godocs
* address all comments
* Merge branch 'master' into state-service
* clone the pending attestation properly
* Merge branch 'state-service' of github.com:prysmaticlabs/prysm into state-service
* properly clone remaining items
* tests pass fixed bug
* begin using it instead of head state
* prevent nil pointer exceptions
* Merge branch 'state-service' into use-state-in-runtime
* begin using new struct in db
* integrated new type into db package
* add proper nil checks
* using new state in archiver
* refactored much of core
* editing all the precompute functions
* done with most core refactor
* fixed up some bugs in the clone comparisons
* Merge branch 'state-service' into use-state-in-runtime
* append current epoch atts
* merged master
* add missing setters
* add new setters
* fix other core methods
* fix up transition
* main service and forkchoice
* fix rpc
* integrated to powchain
* some more changes
* fix build
* improve processing of deposits
* fix error
* prevent panic
* comment
* fix process att
* gaz
* fix up att process
* resolve existing review comments
* Merge branch 'master' into use-state-in-runtime
* resolve another batch of gh comments
* resolve broken cpt state
* revise testutil to use the new state
* begin updating the state transition func to pass in more compartmentalized args
* finish editing transition function to return errors
* block operations pretty much done with refactor
* state transition fully refactored
* got epoch processing completed
* fix build in fork choice
* fixing more of the build
* fix up broken sync package
* it builds nowww it buildssss
* revert registry changes
* Recompute on Read (#4627)

* compute on read

* fix up eth1 data votes

* looking into slashings bug introduced in core/

* able to advance more slots

* add logging

* can now sync with testnet yay

* remove the leaves algorithm and other merkle imports

* expose initialize unsafe funcs

* Update beacon-chain/db/kv/state.go

* lint

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
* include master
* More Optimizations for New State (#4641)

* map optimization

* more optimizations

* use a custom hasher

* comment

* block operations optimizations

* Update beacon-chain/state/types.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* fixed up various operations to use the validator index map access

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
* archiver tests pass
* fixing cache tests
* cache tests passing
* edited validator tests
* powchain tests passing
* halfway thru sync tests
* more sync test fixes
* add in tests for state/
* working through rpc tests
* assignments tests passed
* almost done with rpc/beacon tests
* resolved painful validator test
* fixed up even more tests
* resolve tests
* fix build
* reduce a randao mixes copy
* fixes under //beacon-chain/blockchain/...
* build //beacon-chain/core/...
* fixes
* Runtime Optimizations (#4648)

* parallelize shuffling

* clean up

* lint

* fix build

* use callback to read from registry

* fix array roots and size map

* new improvements

* reduce hash allocs

* improved shuffling

* terence's review

* use different method

* raul's comment

* new array roots

* remove clone in pre-compute

* Update beacon-chain/state/types.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* raul's review

* lint

* fix build issues

* fix visibility

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
* Merge branch 'use-state-in-runtime' of https://github.com/prysmaticlabs/geth-sharding into resolve-tests
* fix visibility
* build works for all
* fix blockchain test
* fix a few tests
* fix more tests
* resolve conf
* sync
* update validator in slashing
* Merge branch 'resolve-tests' of github.com:prysmaticlabs/prysm into resolve-tests
* archiver passing
* Merge branch 'resolve-tests' of github.com:prysmaticlabs/prysm into resolve-tests
* fixed rpc/validator
* progress on core tests
* Merge branch 'resolve-tests' of github.com:prysmaticlabs/prysm into resolve-tests
* resolve broken rpc tests
* Merge branch 'resolve-tests' of github.com:prysmaticlabs/prysm into resolve-tests
* blockchain tests passed
* fix up some tests in core
* Merge branch 'master' of github.com:prysmaticlabs/prysm into resolve-tests
* fix message diff
* remove unnecessary save
* Merge branch 'master' of github.com:prysmaticlabs/prysm into resolve-tests
* Save validator after slashing
* Update validators one by one
* another update
* fix everything
* Merge branch 'resolve-tests' of https://github.com/prysmaticlabs/geth-sharding into resolve-tests
* fix more precompute tests
* fix blocks tests
* Merge branch 'resolve-tests' of github.com:prysmaticlabs/prysm into resolve-tests
* more elegant fix
* more helper fixes
* Merge branch 'resolve-tests' of https://github.com/prysmaticlabs/geth-sharding into resolve-tests
* change back ?
* fix test
* fix skip slot
* fix test
* reset caches
* fix testutil
* raceoff fixed
* passing
* Retrieve cached state in the beginning
* lint
* Merge branch 'master' of github.com:prysmaticlabs/prysm into resolve-tests
* Fixed tests part 1
* Fixed rest of the tests
* Merge branch 'master' into optimize-process-att
* Minor changes to avoid copying, small refactor to reduce deplicated code
* Merge branch 'master' into resolve-tests
* Handle att req for slot 0
* New beacon state: Only populate merkle layers as needed, copy merkle layers on copy/clone. (#4689)

* Only populate merkle layers as needed, copy merkle layers on copy/clone.

* use custom copy

* Make maps of correct size

* slightly fast, doesn't wait for lock

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
* Merge branch 'resolve-tests' of github.com:prysmaticlabs/prysm into resolve-tests
* Merge branch 'master' into resolve-tests
* Target root can't be 0x00
* Merge refs/heads/master into resolve-tests
* Merge branch 'resolve-tests' of github.com:prysmaticlabs/prysm into resolve-tests
* Don't use cache for current slot (may not be the right fix)
* Merge branch 'resolve-tests' of git+ssh://github.com/prysmaticlabs/prysm into resolve-tests
* Merge branch 'resolve-tests' of github.com:prysmaticlabs/prysm into resolve-tests
* fixed up tests
* Remove some copy for init sync. Not sure if it is safe enough for runtime though... testing...
* Align with prev logic for process slots cachedState.Slot() < slot
* Fix Initial Sync Flag (#4692)

* fixes

* fix up some test failures due to lack of nil checks

* fix up some test failures due to lack of nil checks

* fix up imports

* revert some changes

* imports

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
* Conflict
* Conflict
* resolve confs
* resolving further conflicts
* Better skip slot cache (#4694)

* Return copy of skip slot cache state, disable skip slot cache on sync

* fix
* Fix pruning
* Merge refs/heads/master into resolve-tests
* Merge refs/heads/master into resolve-tests
* copy on write method
* gaz
* fix confs
* Merge refs/heads/resolve-tests into copy-on-write
* fix tests
* Merge branch 'copy-on-write' of github.com:prysmaticlabs/prysm into copy-on-write
* fix up issues with broken tests
* Merge refs/heads/resolve-tests into copy-on-write
* Merge branch 'master' of github.com:prysmaticlabs/prysm into copy-on-write
* remove extra update
* remove debugging lines
* gofmt
* Merge refs/heads/master into copy-on-write
* Merge refs/heads/master into copy-on-write
* Merge refs/heads/master into copy-on-write
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* Passing 1 to SetEndSlot
* Better way to get genesis block
* Merge branch 'master' of github.com:prysmaticlabs/prysm into bug/ApiGenesisBlock
* Checking for nil genBlk
* Testing conditional order
* Reverting conditions, and no idea why build fails
* Merge branch 'master' into bug/ApiGenesisBlock
* Saving to genesis block root in tests
* Merge branch 'bug/ApiGenesisBlock' of github.com:AgentJ-WR/prysm into bug/ApiGenesisBlock
* Saving root
* Merge branch 'master' of github.com:prysmaticlabs/prysm into bug/ApiGenesisBlock
* Adding regression test
* Updating regression test
* Merge branch 'master' into bug/ApiGenesisBlock
* Merge branch 'master' into bug/ApiGenesisBlock
* Merge branch 'master' into bug/ApiGenesisBlock
* Merge branch 'master' into bug/ApiGenesisBlock
2020-02-06 19:23:39 +00:00
terence tsao
5b83dffbe4 Use proto array forkchoice as default (#4778)
* Starting

* Removing feature flag

* Minor touchups service.go

* Conflict

* Started fixing test

* Init fork choice store for tests
2020-02-06 13:03:26 -06:00
terence tsao
b8383da468 Add forkchoiceAggregateAttestations to flag list (#4780)
* Added
* Merge branch 'master' into fix-flag
2020-02-06 18:09:51 +00:00
Preston Van Loon
c7fb28d42e Faster BLS publickey.Copy (#4770)
* Use balancesLength and randaoMixesLength to save copy on read
* use a cheaper copy for BLS publickey.Copy()
* Merge branch 'master' into bls-better-copy
* Merge refs/heads/master into bls-better-copy
* Merge refs/heads/master into bls-better-copy
* Merge refs/heads/master into bls-better-copy
* Merge refs/heads/master into bls-better-copy
* Merge branch 'master' of github.com:prysmaticlabs/prysm into bls-better-copy
* quick test
* Merge refs/heads/master into bls-better-copy
2020-02-06 17:35:38 +00:00
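
The Copy() speedup above comes from duplicating the key's bytes instead of round-tripping it through the BLS library. A minimal sketch of the idea, assuming a key backed by its 48-byte compressed encoding (the type and field are illustrative, not Prysm's herumi-backed implementation):

```go
package bls

// PublicKey is a simplified stand-in for a BLS public key, backed here by
// its 48-byte compressed encoding rather than a library curve-point object.
type PublicKey struct {
	raw [48]byte
}

// Copy duplicates the key with a plain value copy of the fixed-size array,
// which is far cheaper than re-deserializing through the BLS library.
func (p *PublicKey) Copy() *PublicKey {
	cp := *p
	return &cp
}
```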
Preston Van Loon
3a9c8eb8b1 Log a warning if attempting to save a nil state (#4779)
* Log a warning if attempting to save a nil state
* Log a warning if attempting to save a nil state
2020-02-06 17:22:44 +00:00
Preston Van Loon
a9f1de354b Use balancesLength and randaoMixesLength to save copy on read (#4769)
* Use balancesLength and randaoMixesLength to save copy on read

* Update beacon-chain/state/getters.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: shayzluf <thezluf@gmail.com>
2020-02-06 10:06:27 -06:00
terence tsao
69c3d9dec2 Save attestation to DB gated by archival flag (#4776)
* Gated by flag
* Gaz
* Merge branch 'master' into archival-saves-att
2020-02-06 15:14:52 +00:00
Nishant Das
b99ae2cbe4 Remove All Batch DB Calls (#4775)
* remove all batch calls
2020-02-06 08:23:06 +00:00
Preston Van Loon
bfa103317e Disable forkchoice pre-processing of attestations (#4774)
* Disable forkchoice pre-processing of attestations
* space
* gaz
2020-02-06 07:53:08 +00:00
Nishant Das
dc1432f8d8 Attestation Verification Improvements (#4753)
* add fixes for sig verify
* minor fix
* Merge branch 'master' into fixSigVerify
* fmt
* Merge branch 'fixSigVerify' of https://github.com/prysmaticlabs/geth-sharding into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* use custom att copy
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* Merge refs/heads/master into fixSigVerify
* finally fixed all this
* Merge branch 'fixSigVerify' of https://github.com/prysmaticlabs/geth-sharding into fixSigVerify
2020-02-06 05:46:25 +00:00
terence tsao
85b379c08c Fix block tree cosmetic bugs (#4768)
* Fixed
* Merge branch 'master' into check-ready
2020-02-06 04:49:12 +00:00
Preston Van Loon
91b8760632 Beacon State: Track field references (#4751)
* track field references
* gofmt, RLock
* Merge refs/heads/master into randao-ref-tracking
* Merge refs/heads/master into randao-ref-tracking
* Merge refs/heads/master into randao-ref-tracking
* Merge refs/heads/master into randao-ref-tracking
* Merge refs/heads/master into randao-ref-tracking
* Merge refs/heads/master into randao-ref-tracking
* Merge refs/heads/master into randao-ref-tracking
* cleanup comments
* Merge branch 'randao-ref-tracking' of github.com:prysmaticlabs/prysm into randao-ref-tracking
* Add a test for finalizer
* Merge refs/heads/master into randao-ref-tracking
* Merge refs/heads/master into randao-ref-tracking
* maybe fix data race
* maybe fix data race
* temp comment out test file to find which one fails race test
* its definitely something with the checkpoint state cache
* its definitely something with the checkpoint state cache
* its definitely something with the checkpoint state cache
* Merge refs/heads/master into randao-ref-tracking
* This should fix it
* Merge branch 'randao-ref-tracking' of github.com:prysmaticlabs/prysm into randao-ref-tracking
* gaz
* Merge refs/heads/master into randao-ref-tracking
* Merge refs/heads/master into randao-ref-tracking
* turn off race detection, i dont understand why is broken
* Merge branch 'randao-ref-tracking' of github.com:prysmaticlabs/prysm into randao-ref-tracking
* feedback
* @nisdas feedback
* Revert "@nisdas feedback"

This reverts commit 6129cf84e6.
* Merge refs/heads/master into randao-ref-tracking
2020-02-06 04:07:23 +00:00
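
The field-reference tracking above is what lets cloned states share large slices until one of them needs to write. A minimal sketch of such a reference counter, with illustrative names:

```go
package state

import "sync"

// reference counts how many state copies currently share a large field such
// as the randao mixes slice. A writer deep-copies the field only when more
// than one reference is outstanding (copy on write).
type reference struct {
	mu   sync.Mutex
	refs uint
}

// Refs reports how many states share the field.
func (r *reference) Refs() uint {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.refs
}

// AddRef records that another state now shares the field.
func (r *reference) AddRef() {
	r.mu.Lock()
	r.refs++
	r.mu.Unlock()
}

// MinusRef records that a state stopped sharing the field, e.g. because it
// replaced the field with its own copy before mutating it.
func (r *reference) MinusRef() {
	r.mu.Lock()
	if r.refs > 0 {
		r.refs--
	}
	r.mu.Unlock()
}
```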
Preston Van Loon
27e7be6279 Add & use FinalizedCheckpointEpoch() to state (#4766)
* Add and use FinalizedCheckpointEpoch() to state
* comment
2020-02-06 03:26:15 +00:00
Ivan Martinez
ebd4541dcd Add test for GetBlock (#4765)
* Add a test for GetBlock
* Fix formatting
* Fix graffiti
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into test-getblock
* Merge branch 'master' into test-getblock
2020-02-05 23:56:38 +00:00
Raul Jordan
32b5b8fa69 Include Latest Ethereum APIs Definitions in Prysm (#4759)
* include latest ethereumapis and implement streams
* add comments and error unimpl
* Merge branch 'master' into blocks-stream
* add strm
* Merge refs/heads/master into blocks-stream
* Merge refs/heads/master into blocks-stream
* Merge refs/heads/master into blocks-stream
* add in mocks
* Merge branch 'blocks-stream' of github.com:prysmaticlabs/prysm into blocks-stream
* use right mock
* gaz
* ptypes
* gaz
* gomock dep
* Merge refs/heads/master into blocks-stream
2020-02-05 23:43:36 +00:00
terence tsao
c6e3d67241 Block tree enhancements (#4764)
* Add votes, correct conversion and green for head

* Starting testing

* Fix for run time
2020-02-05 17:30:55 -06:00
Raul Jordan
cb33deab36 batch to update (#4763) 2020-02-05 13:01:56 -08:00
terence tsao
113ac460c0 Update higherThanFinalized in the loop (#4761)
* Update higherThanFinalized in the loop
* Added a test
* Fixed test
* Merge branch 'master' into update-finalized
2020-02-05 20:13:38 +00:00
Preston Van Loon
c496170c33 Do not attempt to save a nil state (#4758) 2020-02-05 11:52:14 -08:00
Jim McDonald
945edb6c8f Sync to highest possible head given the peers available (#4570)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-05 12:44:32 -06:00
garyschulte
24a5a9c112 add prometheus metrics for validator accounts (#4724)
* add prometheus metrics for validator accounts:
  * gauge for balances
  * counters for attestations and failures
  * counters for aggregations and failures
  * counters for proposals and failures

put validator account metrics behind flag

* run gazelle to reorganize deps

* fix typo

Co-authored-by: garyschulteog <30323939+garyschulteog@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-05 11:49:27 -06:00
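
The gauge-plus-counters layout described above maps directly onto the Prometheus Go client. A sketch with illustrative metric names (not necessarily the ones the PR registers), labeled by validator public key:

```go
package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// Balance gauge, one series per validator account.
	validatorBalance = promauto.NewGaugeVec(prometheus.GaugeOpts{
		Name: "validator_balance_gwei",
		Help: "Current balance of the validator account in gwei.",
	}, []string{"pubkey"})

	// Counters for attestation submissions and their failures.
	attestationsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "validator_attestations_total",
		Help: "Attestations submitted by the validator account.",
	}, []string{"pubkey"})
	attestationFailures = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "validator_attestation_failures_total",
		Help: "Attestation submissions that failed.",
	}, []string{"pubkey"})
)

// recordBalance updates the per-account balance gauge.
func recordBalance(pubkey string, gwei uint64) {
	validatorBalance.WithLabelValues(pubkey).Set(float64(gwei))
}

// recordAttestation bumps the success or failure counter for one attempt.
func recordAttestation(pubkey string, err error) {
	if err != nil {
		attestationFailures.WithLabelValues(pubkey).Inc()
		return
	}
	attestationsTotal.WithLabelValues(pubkey).Inc()
}
```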
terence tsao
0180051b5e Update head slot metric after compute head (#4754)
* Update head slot after compute head
* Merge branch 'master' into properly-update-head-metric
* Merge refs/heads/master into properly-update-head-metric
2020-02-05 17:24:05 +00:00
terence tsao
ce79d8e295 Insert initial sync missing blocks to fork choice store (#4750)
* Upon start up, don't insert head to proto array node DAG as index 0
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into fix-save-head-root
* Add HasNode
* Add fillInForkChoiceMissingBlocks
* Comments
* Test
* Fixed test
* Merge branch 'master' into proto-array-process-missing-block
* Merge branch 'master' into proto-array-process-missing-block
* Merge branch 'master' into proto-array-process-missing-block
2020-02-05 17:05:51 +00:00
Raul Jordan
9579a5520b Update All Libp2p Dependencies (#4746)
* all libp2p deps added
* Merge branch 'master' into update-libp2p
* add in dep
* kad
* fix p2p test
* p2p
* Merge refs/heads/master into update-libp2p
* Merge refs/heads/master into update-libp2p
* Merge refs/heads/master into update-libp2p
* Merge refs/heads/master into update-libp2p
2020-02-05 16:37:33 +00:00
Preston Van Loon
9958afe79d RPC: Use the proper db access level, use head root from head fetcher (#4752)
* Use the proper db access level, use head root from head fetcher
* Reuse head root
2020-02-05 08:50:07 +00:00
terence tsao
8ad174ffd8 Upon start up, don't insert head to proto array node DAG as index 0 (#4749) 2020-02-05 11:11:51 +08:00
Preston Van Loon
68b6a7c172 Render graphviz graph in page (#4748) 2020-02-04 17:14:43 -08:00
terence tsao
8c5c7352b1 Update blockchain metrics (#4747)
* Make new metrics the canonical one
* Both threads share same metric set
* Comments
* Fix vis
2020-02-04 22:57:36 +00:00
Raul Jordan
b705ab0239 Update dependencies from renovate (#4745)
* add in proto deps

* more deps

* more deps

* some revert

* add machinery

* downgrade k8s api

* gaz

* thrift
2020-02-04 14:13:40 -06:00
Ivan Martinez
923d5fc903 Cleanup slasher codebase (#4698)
* First wave of changes
* More changes
* More renames, changes
* Fix errors
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into cleanup-slasher
* Fix errors, more cleaning
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into cleanup-slasher
* Merge branch 'master' into cleanup-slasher
* fix err
* Merge branch 'cleanup-slasher' of https://github.com/0xKiwi/Prysm into cleanup-slasher
* Fix strings
* More cleanup
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into cleanup-slasher
* Fix interface
* Fix
* Merge branch 'master' into cleanup-slasher
* Merge branch 'master' into cleanup-slasher
* Merge branch 'master' into cleanup-slasher
* Address comments
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into cleanup-slasher
* Merge branch 'cleanup-slasher' of https://github.com/0xKiwi/Prysm into cleanup-slasher
2020-02-04 18:48:51 +00:00
terence tsao
9c1a294bf7 Misc fork choice improvements (#4744)
* Added missing spec implementation
* Use it
* Rename
* Merge branch 'master' into update-justified
2020-02-04 17:35:06 +00:00
Raul Jordan
061960c9e2 Resolve Miscellaneous Bugs in Beacon Node (#4743)
* add in nil check for head block

* fix logic

* unused import
2020-02-04 11:21:02 -06:00
terence tsao
80248cd296 Fix pre state of target block does not exist error (#4740)
* Save state even w/ initial sync cache state flag

* Tested

Co-authored-by: Nishant Das <nish1993@hotmail.com>
2020-02-04 20:19:09 +08:00
Nishant Das
95b6cca399 add lock (#4739) 2020-02-04 16:31:31 +08:00
Raul Jordan
1478882b41 Add Endpoint to Return Current Chain Config Parameters (#4595)
* add patch and endpoint
* formatting
* Merge branch 'master' into chain-config-endpoint
* Merge branch 'master' into chain-config-endpoint
* include beacon config
* config params
* Merge branch 'chain-config-endpoint' of github.com:prysmaticlabs/prysm into chain-config-endpoint
* include tests
* resolve confs
* use patch
* Merge branch 'master' into chain-config-endpoint
* passing tests
* Merge branch 'chain-config-endpoint' of github.com:prysmaticlabs/prysm into chain-config-endpoint
* Merge branch 'master' into chain-config-endpoint
* Merge branch 'master' into chain-config-endpoint
* Merge refs/heads/master into chain-config-endpoint
2020-02-04 05:28:35 +00:00
Preston Van Loon
7c4950832c Increase BLS pubkey cache to 100k from 10k (#4737)
* Increase BLS pubkey cache to 100k from 10k
2020-02-04 05:00:20 +00:00
Preston Van Loon
ce0b55d13e Ensure all fields are dirty on initialization (#4735) 2020-02-03 20:38:13 -06:00
Preston Van Loon
e63119b254 Better state locking (#4733)
* Rearrange lock a bit

* better locking without deadlock

* reorder lock
2020-02-03 17:19:22 -06:00
terence tsao
8492273fa7 Better log for Requesting block for pending attestation... (#4731)
* Better log
* Use debug
* Merge branch 'master' into req-blk-log
* Merge branch 'master' into req-blk-log
2020-02-03 21:11:21 +00:00
Preston Van Loon
5b4025efcd SSZ state cache: Only use cached value when flag is on (#4732)
* Only use cached value when flag is on
2020-02-03 20:49:07 +00:00
terence tsao
c69385e71d Clean up verify attestation and better error log (#4729) 2020-02-03 11:46:26 -08:00
terence tsao
cdfa969ced Insert block to fork choice after saving the block to DB (#4728)
* Let's try this
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm
* Better place to insert block to fork choice store
* Fmt
* Revert a few changes
* Revert a few changes
* Comments
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into insert-blk-forkchoice
* Merge refs/heads/master into insert-blk-forkchoice
2020-02-03 17:41:36 +00:00
shayzluf
a1dc4ddc40 Add get slashing endpoints (#4674)
* update go pbs
* protos
* merge
* pbs
* implement first version
* add slashing status endpoints and test
* Merge branch 'master' of github.com:prysmaticlabs/prysm into get_slashings
* add tests
* gaz and goimports
* gaz
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Merge refs/heads/master into get_slashings
* Update proto/slashing/slashing.proto
* Update proto/slashing/slashing.proto
* Update proto/slashing/slashing.proto
* Merge refs/heads/master into get_slashings
2020-02-03 17:31:54 +00:00
Jim McDonald
648584b356 Add wallet keymanager (#4687)
* Add wallet keymanager

* Read keymanageropts from file if not JSON

Co-authored-by: Nishant Das <nish1993@hotmail.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-02-03 11:13:58 -06:00
Nishant Das
fb7a75d2c3 Release Proposer Index Cache (#4717)
* release cache

* gaz

* fix all tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-02-03 10:23:04 -06:00
Jim McDonald
00a6361c66 Relegate some p2p messages (#4725)
* Relegate some p2p messages
2020-02-03 11:27:20 +00:00
Preston Van Loon
6213c94a14 Fork choice: only update head if the new block is a higher block slot (#4722)
* only update head IF the new block is a higher block slot
* Merge refs/heads/master into forkchoice-by-highest-blockslot
2020-02-03 01:40:20 +00:00
Preston Van Loon
397b7d807a Pubsub: Ignore block already in database (#4721)
* Ignore block already in database
* Merge refs/heads/master into skip-block-already-in-db
2020-02-03 00:37:50 +00:00
Preston Van Loon
05876d6250 Only update committee cache if it doesn't have that key already (#4719)
* Only update committee cache if it doesn't have that key already
2020-02-03 00:24:54 +00:00
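
The optimization above is a check-then-set: skip the cache write entirely when the key is already present. A sketch of the pattern with a hypothetical committee cache keyed by shuffling seed:

```go
package cache

import "sync"

// committeeCache is a minimal keyed cache whose writers first check for an
// existing entry, so an already-cached committee list is never written twice.
type committeeCache struct {
	mu    sync.RWMutex
	items map[[32]byte][]uint64 // seed -> shuffled validator indices
}

func newCommitteeCache() *committeeCache {
	return &committeeCache{items: make(map[[32]byte][]uint64)}
}

// AddIfMissing stores indices under key only when no entry exists yet.
func (c *committeeCache) AddIfMissing(key [32]byte, indices []uint64) {
	c.mu.RLock()
	_, ok := c.items[key]
	c.mu.RUnlock()
	if ok {
		return // already cached; skip the write
	}
	c.mu.Lock()
	if _, exists := c.items[key]; !exists { // re-check under the write lock
		c.items[key] = indices
	}
	c.mu.Unlock()
}
```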
Preston Van Loon
bd334c4192 Minor fixes (#4716)
* copy state in cache, ensure pre state exists before attempting to process attestation in fork choice
* copy state in cache, ensure pre state exists before attempting to process attestation in fork choice
* fix test
2020-02-02 05:23:05 +00:00
Preston Van Loon
4f38333e54 Copy head state to ensure it is never mutated (#4715)
* Copy head state to ensure it is never mutated
* Merge refs/heads/master into copy-head-state
* fix tests
* Merge branch 'copy-head-state' of github.com:prysmaticlabs/prysm into copy-head-state
2020-02-02 03:19:51 +00:00
Nishant Das
d6bd389d5c Custom Copy of Pending Attestations (#4711)
* custom copy
* lint
* Merge refs/heads/master into customCopy
* preston's review
* Merge branch 'customCopy' of https://github.com/prysmaticlabs/geth-sharding into customCopy
* Merge refs/heads/master into customCopy
* Merge refs/heads/master into customCopy
* nil check
* Merge refs/heads/master into customCopy
* Merge branch 'customCopy' of https://github.com/prysmaticlabs/geth-sharding into customCopy
* Merge refs/heads/master into customCopy
* fixed test
* Merge branch 'customCopy' of https://github.com/prysmaticlabs/geth-sharding into customCopy
2020-02-02 02:56:53 +00:00
Preston Van Loon
962be9b4f8 Propagate blocks again after we process it in pending blocks queue (#4714)
* propagate blocks again after we process it in pending blocks queue
* Merge refs/heads/master into reprop-block
2020-02-02 02:22:59 +00:00
terence tsao
f77049ae74 Handle attestations with missing block (#4705)
* Fmt
* Starting
* Cont
* Store aggregate attestation is better
* Conflict
* Done
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into process-pending-atts
* Comment
* Better logs
* Better logs
* Fix existing tests
* Update metric names
* Preston's feedback
* Broadcast atts once it's valid
* Gazelle
* Test for validating atts and pruning
* Tests
* Removed debug log
* Conflict
* Feedback
* Merge refs/heads/master into process-pending-atts
* Merge refs/heads/master into process-pending-atts
2020-02-02 01:42:29 +00:00
Preston Van Loon
f432f7851e BeaconState: Use copy on write for validator index map (#4713)
* Use copy on write for validator index map
* Merge refs/heads/master into copy-on-write-2
2020-02-02 01:21:50 +00:00
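
Copy-on-write for the validator index map means a cloned state keeps pointing at the same map until one side writes. A simplified sketch (types and method names are illustrative):

```go
package state

// valIdxMap sketches copy-on-write semantics for the pubkey -> validator
// index map: clones share the underlying map, and only the first write
// after a share pays for a full copy.
type valIdxMap struct {
	m      map[[48]byte]uint64
	shared bool // true once another state holds a reference to m
}

// Share hands the map to a cloned state without copying it.
func (v *valIdxMap) Share() *valIdxMap {
	v.shared = true
	return &valIdxMap{m: v.m, shared: true}
}

// Set copies the map lazily, only while it is still shared.
func (v *valIdxMap) Set(pubkey [48]byte, idx uint64) {
	if v.shared {
		cp := make(map[[48]byte]uint64, len(v.m)+1)
		for k, val := range v.m {
			cp[k] = val
		}
		v.m = cp
		v.shared = false
	}
	v.m[pubkey] = idx
}
```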
Preston Van Loon
b7e6012628 Better eth1data equals (#4712)
* Better eth1data equals
* Merge branch 'master' into better-eth1-data-equal
2020-02-02 01:03:21 +00:00
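
A "better equals" here means comparing the three Eth1Data fields directly instead of going through reflection or proto marshaling. A sketch with a local stand-in for the container (the field set follows the spec's Eth1Data):

```go
package blocks

import "bytes"

// eth1Data mirrors the Eth1Data container: deposit root, deposit count,
// and eth1 block hash.
type eth1Data struct {
	DepositRoot  []byte
	DepositCount uint64
	BlockHash    []byte
}

// eth1DataEqual compares fields directly, which is cheaper than
// reflect.DeepEqual or proto.Equal and makes the nil cases explicit.
func eth1DataEqual(a, b *eth1Data) bool {
	if a == nil || b == nil {
		return a == b // equal only when both are nil
	}
	return a.DepositCount == b.DepositCount &&
		bytes.Equal(a.DepositRoot, b.DepositRoot) &&
		bytes.Equal(a.BlockHash, b.BlockHash)
}
```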
Preston Van Loon
79434fc2d1 Use sync.pool for keccak256 and sha256 (#4710)
* Use sync.pool for keccak

* Add sha256 too

* custom hasher use pool too

* reset custom hasher

* fix?

* Add comment that customHasher should only be used in cases of more than 5 usages
2020-02-01 16:37:37 -08:00
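
Pooling hashers keeps hot hashing paths from allocating a fresh hasher on every call. A minimal sketch for the SHA-256 half using the standard library (the keccak256 pool follows the same shape with x/crypto's sha3 package); names are illustrative:

```go
package hashutil

import (
	"crypto/sha256"
	"hash"
	"sync"
)

// sha256Pool recycles hash.Hash instances across calls.
var sha256Pool = sync.Pool{
	New: func() interface{} {
		return sha256.New()
	},
}

// Hash returns the SHA-256 digest of data using a pooled hasher.
func Hash(data []byte) [32]byte {
	h := sha256Pool.Get().(hash.Hash)
	defer sha256Pool.Put(h)
	h.Reset() // clear any state left by the previous user

	var out [32]byte
	h.Write(data)
	h.Sum(out[:0]) // appends the 32-byte digest into out's backing array
	return out
}
```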
Preston Van Loon
069ec1726b Pending blocks queue: Better locking priority (#4709)
* Better locking priority, use correct lock on validating pending blocks
2020-02-01 22:47:51 +00:00
Preston Van Loon
2a79c572a5 Pruning old states: Use a warning level log instead of fatal (#4707)
* Use a warning level log instead of fatal
2020-02-01 20:51:20 +00:00
Preston Van Loon
c2fbb40909 Beacon state: copy on write for certain large fields (#4699)
* begin state service

* begin on the state trie idea

* created beacon state structure

* add in the full clone getter

* return by value instead

* add all setters

* new state setters are being completed

* arrays roots exposed

*  close to finishing all these headerssss

* functionality complete

* added in proto benchmark test

* test for compatibility

* add test for compat

* comments fixed

* add clone

* add clone

* remove underlying copies

* make it immutable

* integrate it into chainservice

* revert

* wrap up comments for package

* address all comments and godocs

* address all comments

* clone the pending attestation properly

* properly clone remaining items

* tests pass fixed bug

* begin using it instead of head state

* prevent nil pointer exceptions

* begin using new struct in db

* integrated new type into db package

* add proper nil checks

* using new state in archiver

* refactored much of core

* editing all the precompute functions

* done with most core refactor

* fixed up some bugs in the clone comparisons

* append current epoch atts

* add missing setters

* add new setters

* fix other core methods

* fix up transition

* main service and forkchoice

* fix rpc

* integrated to powchain

* some more changes

* fix build

* improve processing of deposits

* fix error

* prevent panic

* comment

* fix process att

* gaz

* fix up att process

* resolve existing review comments

* resolve another batch of gh comments

* resolve broken cpt state

* revise testutil to use the new state

* begin updating the state transition func to pass in more compartmentalized args

* finish editing transition function to return errors

* block operations pretty much done with refactor

* state transition fully refactored

* got epoch processing completed

* fix build in fork choice

* fixing more of the build

* fix up broken sync package

* it builds nowww it buildssss

* revert registry changes

* Recompute on Read (#4627)

* compute on read

* fix up eth1 data votes

* looking into slashings bug introduced in core/

* able to advance more slots

* add logging

* can now sync with testnet yay

* remove the leaves algorithm and other merkle imports

* expose initialize unsafe funcs

* Update beacon-chain/db/kv/state.go

* lint

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* More Optimizations for New State (#4641)

* map optimization

* more optimizations

* use a custom hasher

* comment

* block operations optimizations

* Update beacon-chain/state/types.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* fixed up various operations to use the validator index map access

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* archiver tests pass

* fixing cache tests

* cache tests passing

* edited validator tests

* powchain tests passing

* halfway thru sync tests

* more sync test fixes

* add in tests for state/

* working through rpc tests

* assignments tests passed

* almost done with rpc/beacon tests

* resolved painful validator test

* fixed up even more tests

* resolve tests

* fix build

* reduce a randao mixes copy

* fixes under //beacon-chain/blockchain/...

* build //beacon-chain/core/...

* fixes

* Runtime Optimizations (#4648)

* parallelize shuffling

* clean up

* lint

* fix build

* use callback to read from registry

* fix array roots and size map

* new improvements

* reduce hash allocs

* improved shuffling

* terence's review

* use different method

* raul's comment

* new array roots

* remove clone in pre-compute

* Update beacon-chain/state/types.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* raul's review

* lint

* fix build issues

* fix visibility

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* fix visibility

* build works for all

* fix blockchain test

* fix a few tests

* fix more tests

* update validator in slashing

* archiver passing

* fixed rpc/validator

* progress on core tests

* resolve broken rpc tests

* blockchain tests passed

* fix up some tests in core

* fix message diff

* remove unnecessary save

* Save validator after slashing

* Update validators one by one

* another update

* fix everything

* fix more precompute tests

* fix blocks tests

* more elegant fix

* more helper fixes

* change back ?

* fix test

* fix skip slot

* fix test

* reset caches

* fix testutil

* raceoff fixed

* passing

* Retrieve cached state in the beginning

* lint

* Fixed tests part 1

* Fixed rest of the tests

* Minor changes to avoid copying, small refactor to reduce duplicated code

* Handle att req for slot 0

* New beacon state: Only populate merkle layers as needed, copy merkle layers on copy/clone. (#4689)

* Only populate merkle layers as needed, copy merkle layers on copy/clone.

* use custom copy

* Make maps of correct size

* slightly fast, doesn't wait for lock

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* Target root can't be 0x00

* Don't use cache for current slot (may not be the right fix)

* fixed up tests

* Remove some copy for init sync. Not sure if it is safe enough for runtime though... testing...

* Align with prev logic for process slots cachedState.Slot() < slot

* Fix Initial Sync Flag (#4692)

* fixes

* fix up some test failures due to lack of nil checks

* fix up some test failures due to lack of nil checks

* fix up imports

* revert some changes

* imports

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* resolving further conflicts

* Better skip slot cache (#4694)

* Return copy of skip slot cache state, disable skip slot cache on sync

* fix

* Fix pruning

* copy on write method

* gaz

* fix tests

* fix up issues with broken tests

* remove extra update

* remove debugging lines

* gofmt

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Nishant Das <nish1993@hotmail.com>
Co-authored-by: shayzluf <thezluf@gmail.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-31 23:23:34 -08:00
Preston Van Loon
d32493d43b Ensure exits are not for an already exited validator (#4701)
* ensure exits are not for an already exited validator
2020-02-01 01:18:36 +00:00
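
The check is that a voluntary exit is only valid while the validator's exit epoch is still the far-future sentinel; once an exit has been initiated, further exits are rejected. A sketch under that assumption (names are illustrative):

```go
package blocks

import "errors"

// farFutureEpoch mirrors the spec's FAR_FUTURE_EPOCH sentinel (2^64 - 1).
const farFutureEpoch = ^uint64(0)

// validator carries only the field the check needs.
type validator struct {
	ExitEpoch uint64
}

// verifyExitable rejects an exit for a validator that has already initiated
// one, i.e. whose exit epoch has moved off the far-future sentinel.
func verifyExitable(v *validator) error {
	if v.ExitEpoch != farFutureEpoch {
		return errors.New("validator has already submitted an exit")
	}
	return nil
}
```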
terence tsao
0b2b77c5b0 Remove validate_beacon_attestation (#4700) 2020-01-31 15:35:13 -08:00
terence tsao
d8c26590ca Prune dangling states in DB upon start up (#4697)
* Add pruneGarbageState and test
* Comments
* Update beacon-chain/blockchain/service.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/blockchain/service.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update beacon-chain/blockchain/service.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Fixed test
* Merge refs/heads/master into prune-start-up
* Fixed test
2020-01-31 23:15:35 +00:00
Raul Jordan
cc741ed8af Ensure New State Type Tests Pass in Prysm (#4646)
* begin state service

* begin on the state trie idea

* created beacon state structure

* add in the full clone getter

* return by value instead

* add all setters

* new state setters are being completed

* arrays roots exposed

*  close to finishing all these headerssss

* functionality complete

* added in proto benchmark test

* test for compatibility

* add test for compat

* comments fixed

* add clone

* add clone

* remove underlying copies

* make it immutable

* integrate it into chainservice

* revert

* wrap up comments for package

* address all comments and godocs

* address all comments

* clone the pending attestation properly

* properly clone remaining items

* tests pass fixed bug

* begin using it instead of head state

* prevent nil pointer exceptions

* begin using new struct in db

* integrated new type into db package

* add proper nil checks

* using new state in archiver

* refactored much of core

* editing all the precompute functions

* done with most core refactor

* fixed up some bugs in the clone comparisons

* append current epoch atts

* add missing setters

* add new setters

* fix other core methods

* fix up transition

* main service and forkchoice

* fix rpc

* integrated to powchain

* some more changes

* fix build

* improve processing of deposits

* fix error

* prevent panic

* comment

* fix process att

* gaz

* fix up att process

* resolve existing review comments

* resolve another batch of gh comments

* resolve broken cpt state

* revise testutil to use the new state

* begin updating the state transition func to pass in more compartmentalized args

* finish editing transition function to return errors

* block operations pretty much done with refactor

* state transition fully refactored

* got epoch processing completed

* fix build in fork choice

* fixing more of the build

* fix up broken sync package

* it builds nowww it buildssss

* revert registry changes

* Recompute on Read (#4627)

* compute on read

* fix up eth1 data votes

* looking into slashings bug introduced in core/

* able to advance more slots

* add logging

* can now sync with testnet yay

* remove the leaves algorithm and other merkle imports

* expose initialize unsafe funcs

* Update beacon-chain/db/kv/state.go

* lint

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* More Optimizations for New State (#4641)

* map optimization

* more optimizations

* use a custom hasher

* comment

* block operations optimizations

* Update beacon-chain/state/types.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* fixed up various operations to use the validator index map access

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* archiver tests pass

* fixing cache tests

* cache tests passing

* edited validator tests

* powchain tests passing

* halfway thru sync tests

* more sync test fixes

* add in tests for state/

* working through rpc tests

* assignments tests passed

* almost done with rpc/beacon tests

* resolved painful validator test

* fixed up even more tests

* resolve tests

* fix build

* reduce a randao mixes copy

* fixes under //beacon-chain/blockchain/...

* build //beacon-chain/core/...

* fixes

* Runtime Optimizations (#4648)

* parallelize shuffling

* clean up

* lint

* fix build

* use callback to read from registry

* fix array roots and size map

* new improvements

* reduce hash allocs

* improved shuffling

* terence's review

* use different method

* raul's comment

* new array roots

* remove clone in pre-compute

* Update beacon-chain/state/types.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* raul's review

* lint

* fix build issues

* fix visibility

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* fix visibility

* build works for all

* fix blockchain test

* fix a few tests

* fix more tests

* update validator in slashing

* archiver passing

* fixed rpc/validator

* progress on core tests

* resolve broken rpc tests

* blockchain tests passed

* fix up some tests in core

* fix message diff

* remove unnecessary save

* Save validator after slashing

* Update validators one by one

* another update

* fix everything

* fix more precompute tests

* fix blocks tests

* more elegant fix

* more helper fixes

* change back ?

* fix test

* fix skip slot

* fix test

* reset caches

* fix testutil

* raceoff fixed

* passing

* Retrieve cached state in the beginning

* lint

* Fixed tests part 1

* Fixed rest of the tests

* Minor changes to avoid copying, small refactor to reduce duplicated code

* Handle att req for slot 0

* New beacon state: Only populate merkle layers as needed, copy merkle layers on copy/clone. (#4689)

* Only populate merkle layers as needed, copy merkle layers on copy/clone.

* use custom copy

* Make maps of correct size

* slightly fast, doesn't wait for lock

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* Target root can't be 0x00

* Don't use cache for current slot (may not be the right fix)

* fixed up tests

* Remove some copy for init sync. Not sure if it is safe enough for runtime though... testing...

* Align with prev logic for process slots cachedState.Slot() < slot

* Fix Initial Sync Flag (#4692)

* fixes

* fix up some test failures due to lack of nil checks

* fix up some test failures due to lack of nil checks

* fix up imports

* revert some changes

* imports

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* resolving further conflicts

* Better skip slot cache (#4694)

* Return copy of skip slot cache state, disable skip slot cache on sync

* fix

* Fix pruning

* fix up issues with broken tests

Co-authored-by: Nishant Das <nish1993@hotmail.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: shayzluf <thezluf@gmail.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-31 12:57:01 -08:00
Raul Jordan
f97ac5f0d7 Remove Already Exited Validators From Queue (#4695)
* added regression test

* fixed the regression test units

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-30 17:57:42 -08:00
Ivan Martinez
85a38e6053 Make E2E more consistent, change log file setup (#4696)
* Make E2E more consistent, change log file setup
* Merge branch 'master' into fix-e2e-sync
2020-01-31 00:16:36 +00:00
terence tsao
7f07ad831e Update seen for attestation pool (#4669)
* Use `contain` instead of `overlap`
* Tests
* Update seen
* Merge branch 'master' into use-contain
* Use OR to track all of the bits we have seen so far
* Added one more test case
* gofmt
* Conflict
* Picked up Preston's changes
* Merge branch 'use-contain' of git+ssh://github.com/prysmaticlabs/prysm into use-contain
2020-01-30 23:30:37 +00:00
Jim McDonald
ad7d9ab1da Validator status updates (#4675)
* Update ValidatorStatus to match Ethereum APIs
* Tidy up status calculation
* Merge branch 'master' into validator-status-updates
* Merge branch 'master' into validator-status-updates
* Update beacon-chain/rpc/beacon/config.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>
* Update test names
2020-01-30 19:46:37 +00:00
terence tsao
e452b950d0 Better caching of attestation pre state (#4688) 2020-01-30 11:06:20 -08:00
Ivan Martinez
2e2cec3a61 Make E2E more resilient, check balance and participation every epoch (#4679) 2020-01-29 16:14:10 -08:00
Ivan Martinez
c80ffc640f Fix flag bug (#4690) 2020-01-29 16:16:37 -06:00
Dmitri Tsumak
f6b4637a91 Update Eth1FollowDistance to 16 for minimal config (#4566)
* Update Eth1FollowDistance to 16 for minimal config
* Merge branch 'master' into fix-minimal-config
* Merge branch 'master' into fix-minimal-config
* Merge branch 'master' into fix-minimal-config
* Merge branch 'master' into fix-minimal-config
* Merge branch 'master' into fix-minimal-config
* Fix tests after minimal config changes
2020-01-29 17:56:21 +00:00
Ivan Martinez
3e9bf58d81 Fix validator assignments on slot 0 (#4682)
* Fix validator acting upon first slot
* Change log to debug
* Fix roles at slot 0
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into fix-val
* Add regression test
* Formatting
* Add slot ticker regression test
2020-01-29 04:30:30 +00:00
terence tsao
a22c97739e Remove prune state (#4680)
* Rm prune state
* Merge branch 'master' into rm-prune-state
2020-01-29 03:53:57 +00:00
shayzluf
ade61717a4 Slasher data update from archive (#4563)
* first version

* cli context

* fix service

* starting change to ccache

* ristretto cache

* added test

* test on evict

* remove evict test

* test onevict

* comment for exported flag

* update all span maps on load

* fix setup db

* span cache added to help flags

* start save cache on exit

* save cache to db before close

* comment fix

* fix flags

* setup db new

* data update from archive node

* gaz

* slashing detection on old attestations

* un-export

* rename

* nishant feedback

* workspace cr

* lint fix

* fix calls

* start db

* fix test

* Update slasher/db/db.go

Co-Authored-By: Nishant Das <nishdas93@gmail.com>

* add flag

* fix fail to start beacon client

* mock beacon service

* fix imports

* gaz

* goimports

* add clear db flag

* print finalized epoch

* better msg

* Update slasher/db/attester_slashings.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* raul feedback

* raul feedback

* raul feedback

* raul feedback

* raul feedback

* add detection in runtime

* fix tests

* raul feedbacks

* raul feedback

* raul feedback

* goimports

* Update beacon-chain/blockchain/process_attestation_helpers.go

* Update beacon-chain/blockchain/receive_block.go

* Update beacon-chain/core/blocks/block_operations_test.go

* Update beacon-chain/core/blocks/block_operations.go

* Update beacon-chain/core/epoch/epoch_processing.go

* Update beacon-chain/sync/validate_aggregate_proof_test.go

* Update shared/testutil/block.go

* Update slasher/service/data_update.go

* Update tools/blocktree/main.go

* Update slasher/service/service.go

* Update beacon-chain/core/epoch/precompute/attestation_test.go

* Update beacon-chain/core/helpers/committee_test.go

* Update beacon-chain/core/state/transition_test.go

* Update beacon-chain/rpc/aggregator/server_test.go

* Update beacon-chain/sync/validate_aggregate_proof.go

* Update beacon-chain/rpc/validator/proposer_test.go

* Update beacon-chain/blockchain/forkchoice/process_attestation.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/db/indexed_attestations.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/service/data_update.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* terence feedback

* terence feedback

* goimports

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Nishant Das <nish1993@hotmail.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-01-29 07:14:51 +05:30
Ivan Martinez
9149c2e4f4 Replace no-genesis-delay with custom-genesis-delay (#4678)
* Change NoGenesisDelay to CustomGenesisDelay
* Implement flag
* gazelle
* Fix docs
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into custom-genesis-delay
* Fix
* Gazelle
* Add case to fix bad math
* Merge branch 'master' into custom-genesis-delay
2020-01-28 21:07:43 +00:00
terence tsao
07ba594023 Fix initial sync cache state (#4677) 2020-01-28 12:43:54 -08:00
Ivan Martinez
ad01bfbcde Add sync test to E2E (#4654)
* Complete evaluator for chain consensus

* Add sync e2e test

* Cleanup

* Rename

* Add tad more offset for correct head

* Change offset to middle of slot

* Change head block root to head epoch

* comment

* Fix eth1

* Address comments

* Gazelle

* Change to use file

* Change to use reader

* Use fil
2020-01-28 13:16:00 -06:00
Raul Jordan
439a84fcb9 Clear Run Error in Powchain Service Upon Reconnect (#4671)
* clear the run err on reconnect
* Merge refs/heads/master into clear-err-on-reconnect
* nishant feedback
2020-01-28 04:04:38 +00:00
terence tsao
e2be2a21d0 Part 2 of block chain service refactor - move process attestation (#4672) 2020-01-27 18:04:43 -08:00
terence tsao
eaf7ae3774 Part 1 of block chain service refactor - move process block (#4670) 2020-01-27 13:48:16 -08:00
terence tsao
1fa301c79c Update node count based on insertion (#4653)
* Update node count based on insertion

* Update nodes.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-27 15:24:58 -06:00
Preston Van Loon
1c759f6404 Disable more fork choice options with flag on (#4665)
* Disable updating latest votes if disable fork choice
* do not recompute the block tree cache if the fork choice is not being used
2020-01-27 09:11:28 +00:00
Preston Van Loon
4960acb285 Fork choice: Ensure lengths are the same before checking overlap (#4663)
* Ensure lengths are the same before checking overlap
2020-01-27 07:18:02 +00:00
Ivan Martinez
127f05d531 Allow easy plugin of featureflags into E2E (#4659)
* Enable easy plugin of featureflags into E2E

* Gazelle

* Fix text

* Fix whitespace
2020-01-26 21:42:10 -05:00
Preston Van Loon
2f02a2baa3 Actually wire up exits (#4661)
* Actually wire up exits
* Merge branch 'master' into exit-fixes
2020-01-27 01:49:37 +00:00
terence tsao
d4bea51482 Proto array fork choice tree handler (#4658) 2020-01-26 12:25:33 -08:00
Nishant Das
4ea5661f8f Clear Pre-Genesis Objects (#4656)
* remove pre-genesis data
* Merge branch 'master' into clearUnusedObjects
* lint
* Merge branch 'clearUnusedObjects' of https://github.com/prysmaticlabs/geth-sharding into clearUnusedObjects
* fix build
* gaz
* faulty mock
* Update beacon-chain/blockchain/service.go
2020-01-26 17:50:40 +00:00
terence tsao
5eece9a507 Integrate proto array forkchoice to run time (#4649)
* Run time

* Fixed pruning

* Fixed test

* Fixed test

* Process attestations during init sync

* Raul's feedback

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-25 14:22:25 -06:00
Nishant Das
417480ffa8 fix bug (#4650) 2020-01-25 09:19:50 -08:00
Jim McDonald
dd5a3fe80d Update docs for keymanager (#4651) 2020-01-25 06:25:58 -08:00
Ivan Martinez
fa2acb3632 Improve E2E to be more consistent with timing, and allow for custom flags (#4620)
* Add committees helper, benchmark, results show 62ms for 8k validators which was previously 4 minutes

* Add regression test with same data

* fix epoch conversion

* lint

* undo and lint

* Begin work on adding mainnet config benchmark

* Try more to get mainnet e2e

* Try to fix delay

* Get past chainstart on e2e

* Try to fix flaky

* Get demo config working

* Remove unneeded changes

* Change how flags are enabled

* Lower shard count

* Temp skip

* Fix e2e

* Fix testing to run until last epoch

* Fix

* Add ending time log and remove att cache flag

* Fix ordering

* Reenable flag

* Change ports from default

* Add no log for if there are no err logs

* Add block evaluator

* Try to improve evaluators

* Progress on attestation evaluator

* Remove attestation evaluator

* Fix e2e

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2020-01-25 15:39:56 +08:00
Ivan Martinez
1562d3252b Allow ListBeaconCommittees API to return previous epoch (#4647)
* Allow committees API to request previous epoch

* Fix error log

* Fix previous epoch test
2020-01-24 19:54:18 -06:00
Preston Van Loon
10341cbf7f Add flags to cluster pk manager (#4645)
* Add flags to cluster pk manager
* Merge branch 'master' into cluster-pk-mgr
2020-01-24 19:34:24 +00:00
terence tsao
0f730b5887 Part 10 of proto array fork choice - Add Store (#4644) 2020-01-24 10:58:19 -08:00
terence tsao
b313b46f79 Part 9 of proto array fork choice - get head (#4643) 2020-01-24 10:15:01 -08:00
Jim McDonald
a78defcd26 Move to keymanager/keymanageropts command line parameters (#4590)
* Move to keymanager/keymanageropts command line parameters

* Add help for individual keymanagers

* gazelle

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-01-24 11:21:31 -06:00
terence tsao
86f6a44da6 Part 8 of proto array fork choice - prune (#4642)
* Docs
* Interface definitions
* Fmt and gazelle
* Rename interface to interfaces
* Define all the type for protoarray
* Gaz
* Add error types
* Add compute delta helper
* Compute delta tests
* Gaz
* Add checking if nodes viable
* Test for viable head
* Test for non leaf node can lead to viable head
* Conflict
* Extra space
* Remove fmt print
* Add updateBestChildAndDescendant
* Tests
* Merge branch 'master' into proto-array-forkchoice-6
* Conflict
* Merge branch 'proto-array-forkchoice-6' of git+ssh://github.com/prysmaticlabs/prysm into proto-array-forkchoice-6
* Add applyScoreChanges
* More test
* Rename score to weight
* Conflict
* Add insert function
* Test
* Merge refs/heads/master into proto-array-forkchoice-8
* Merge refs/heads/master into proto-array-forkchoice-8
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into proto-array-forkchoice-8
* Merge branch 'proto-array-forkchoice-8' of git+ssh://github.com/prysmaticlabs/prysm into proto-array-forkchoice-8
* Add prune method
* Tests
* Fixed long line
2020-01-24 15:51:11 +00:00
terence tsao
d978c19a41 Part 6 of proto array fork choice - update weight (#4636) 2020-01-23 20:32:27 -08:00
Preston Van Loon
588773cd0c Remove pubkey to validator ID map from validator (#4634)
* Remove map from validator
* remove failure check test
* fix
* Merge refs/heads/master into validator-fix
* Merge refs/heads/master into validator-fix
* Merge refs/heads/master into validator-fix
* Add error log if validator not found in committee
* Merge refs/heads/master into validator-fix
2020-01-24 01:50:07 +00:00
Preston Van Loon
62a5931843 Use a client side rate limit to reduce chance of getting banned (#4637)
* Use a client side rate limit to reduce chance of getting banned

* fix test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-23 17:33:43 -08:00
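
A client-side token bucket keeps outgoing requests under the rate at which peers would start penalizing the node. A sketch using golang.org/x/time/rate (the limits shown are illustrative, not the values in the PR):

```go
package sync

import (
	"context"

	"golang.org/x/time/rate"
)

// blockRequester gates outgoing block requests behind a token bucket so we
// never exceed the rate peers tolerate before down-scoring or banning us.
type blockRequester struct {
	limiter *rate.Limiter
}

// newBlockRequester allows roughly reqsPerSecond requests with a small burst.
func newBlockRequester(reqsPerSecond float64, burst int) *blockRequester {
	return &blockRequester{limiter: rate.NewLimiter(rate.Limit(reqsPerSecond), burst)}
}

// request blocks until the limiter grants a token, then performs the send.
func (b *blockRequester) request(ctx context.Context, send func(context.Context) error) error {
	if err := b.limiter.Wait(ctx); err != nil {
		return err // context cancelled or deadline exceeded
	}
	return send(ctx)
}
```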
Preston Van Loon
6d6e8be10a Disable kafka build by default (#4638)
* Disable kafka build by default
2020-01-23 23:44:09 +00:00
terence tsao
144dcc3a69 Part 5 of proto array fork choice - update best child and descendant (#4629) 2020-01-23 14:23:45 -08:00
Jim McDonald
0f27343364 Fix deposit inclusion slot calculation (#4635) 2020-01-23 15:48:51 -05:00
terence tsao
3388ab74cf Part 4 of proto array fork choice - check nodes viable (#4625)
* Docs

* Interface definitions

* Fmt and gazelle

* Rename interface to interfaces

* Define all the type for protoarray

* Gaz

* Add error types

* Add compute delta helper

* Compute delta tests

* Gaz

* Add checking if nodes viable

* Test for viable head

* Test for non leaf node can lead to viable head

* Extra space

* Remove fmt print

* Update beacon-chain/forkchoice/protoarray/nodes.go

Co-Authored-By: Nishant Das <nishdas93@gmail.com>

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Nishant Das <nish1993@hotmail.com>
2020-01-23 08:33:39 -08:00
Andre Miras
663557e44e Fixes broken links to docs.prylabs.network (#4628) 2020-01-23 08:14:35 -06:00
shayzluf
5df77848bb Fix go pbs (#4626)
* fix issues

* fix go pbs

* added services

* added services

* remove unused files

* bring back used files

* bring back db proto files

* gaz

* gaz and bring back faucet

* gaz and bring back rpc

* gaz and bring back rpc

* gaz and bring back rpc

* go imports

* remove unused
2020-01-23 16:03:11 +05:30
Ivan Martinez
ed3ab828a1 Implement attester protection into validator client (#4598)
* Add flag for attester protection

* Remove flags

* Add attestation history DB functions to validator client

* Fix comments

* Update interface to new funcs

* Fix test

* Add flags

* Implement most of attester protection

* Fix tests

* Add test for pruning

* Add more test cases for prunes

* Remove todo comment

* Fix comments

* Rename functions

* Fix logs

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-01-23 00:52:01 -05:00
Nishant Das
ee9b9e69dc Fix Faucet (#4624)
* fix faucet

* preston's review
2020-01-23 12:13:27 +08:00
Nishant Das
460250251d add flag (#4622) 2020-01-23 11:25:10 +08:00
Preston Van Loon
4aa7ebc2b7 Wire voluntary exits pool (#4613)
* Hookup voluntary exits pool
* Merge refs/heads/master into wire-voluntary-exits
* Merge refs/heads/master into wire-voluntary-exits
* Merge refs/heads/master into wire-voluntary-exits
* Merge refs/heads/master into wire-voluntary-exits
* Merge refs/heads/master into wire-voluntary-exits
* Merge refs/heads/master into wire-voluntary-exits
* fix tests
* Merge branch 'wire-voluntary-exits' of github.com:prysmaticlabs/prysm into wire-voluntary-exits
* Merge refs/heads/master into wire-voluntary-exits
* gofmt
* Merge branch 'wire-voluntary-exits' of github.com:prysmaticlabs/prysm into wire-voluntary-exits
* gofmt
* gaz
* Merge refs/heads/master into wire-voluntary-exits
2020-01-22 22:27:44 +00:00
Tim Myers
a1e3c2d47c Add --p2p-host-dns flag to specify p2p external DNS (#4608)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-01-22 16:07:22 -06:00
Preston Van Loon
cc58b5aca6 Refactor block operations for validating exits slightly (#4612)
* Refactor block operations for validating exits slightly so that we don't have to advance state in a pubsub validator

* current slot

* remove duplicated validation for exits

* nil request check

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-22 13:49:38 -08:00
Jim McDonald
9a395530b7 Tidy up error logging (#4609)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-01-22 15:12:49 -06:00
terence tsao
5cc6de9e67 Part 3 of proto array fork choice - compute delta helper (#4617)
* Docs

* Interface definitions

* Fmt and gazelle

* Rename interface to interfaces

* Define all the type for protoarray

* Gaz

* Add error types

* Add compute delta helper

* Compute delta tests

* Gaz

* Fix formatting and comments

* Apply suggestions from code review

Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
2020-01-22 14:19:52 -06:00
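
The compute-delta step of proto-array turns changed votes into per-node weight deltas. A simplified sketch of the helper (the real code also tracks balances differently and handles pruned roots):

```go
package protoarray

// vote tracks which block root a validator voted for previously and which
// root its latest attestation points to.
type vote struct {
	currentRoot [32]byte
	nextRoot    [32]byte
}

// computeDeltas subtracts each changed validator's old balance from the node
// it previously voted for and adds its new balance to the node it votes for
// now. The per-node deltas are later propagated up the tree to update weights.
func computeDeltas(numNodes int, indices map[[32]byte]int, votes []vote, oldBalances, newBalances []uint64) []int64 {
	deltas := make([]int64, numNodes)
	for i := range votes {
		v := &votes[i]
		var oldBal, newBal uint64
		if i < len(oldBalances) {
			oldBal = oldBalances[i]
		}
		if i < len(newBalances) {
			newBal = newBalances[i]
		}
		if v.currentRoot == v.nextRoot && oldBal == newBal {
			continue // nothing changed for this validator
		}
		if idx, ok := indices[v.currentRoot]; ok {
			deltas[idx] -= int64(oldBal)
		}
		if idx, ok := indices[v.nextRoot]; ok {
			deltas[idx] += int64(newBal)
		}
		v.currentRoot = v.nextRoot // the new vote becomes the current one
	}
	return deltas
}
```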
terence tsao
c041403a50 Part 2 of proto array fork choice - proto array types (#4616)
* Docs

* Interface definitions

* Fmt and gazelle

* Rename interface to interfaces

* Define all the type for protoarray

* Gaz
2020-01-22 13:12:41 -05:00
terence tsao
8d889f169e Part 1 of proto array fork choice - docs and interfaces (#4615)
* Docs

* Interface definitions

* Fmt and gazelle

* Rename interface to interfaces
2020-01-22 10:50:16 -06:00
shayzluf
b030771174 Slasher span cache (#4388)
* first version

* cli context

* fix service

* starting change to ccache

* ristretto cache

* added test

* test on evict

* remove evict test

* test onevict

* comment for exported flag

* update all span maps on load

* fix setup db

* span cache added to help flags

* start save cache on exit

* save cache to db before close

* comment fix

* fix flags

* setup db new

* nishant feedback

* workspace cr

* lint fix

* fix calls

* start db

* fix test

* Update slasher/db/db.go

Co-Authored-By: Nishant Das <nishdas93@gmail.com>

* add flag

* nishant feedback

* export Config

* fix imports

* fix imports

* fix imports

* Update slasher/service/service.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/service/service.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/service/service.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update slasher/service/service.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* remove mod print

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Nishant Das <nish1993@hotmail.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-01-21 23:39:21 -06:00
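
The PR backs the per-epoch span maps with a ristretto cache and flushes entries to the slasher DB on eviction and on shutdown. A much-simplified stand-in that captures the flush-on-evict idea (types, sizes, and the eviction policy are illustrative):

```go
package cache

import "sync"

// spanCache maps epoch -> (validator index -> min/max span) and hands
// evicted entries to a flush callback (e.g. a DB save) instead of dropping them.
type spanCache struct {
	mu      sync.Mutex
	maxSize int
	spans   map[uint64]map[uint64][2]uint16
	onEvict func(epoch uint64, spanMap map[uint64][2]uint16)
}

func newSpanCache(maxSize int, onEvict func(uint64, map[uint64][2]uint16)) *spanCache {
	return &spanCache{
		maxSize: maxSize,
		spans:   make(map[uint64]map[uint64][2]uint16),
		onEvict: onEvict,
	}
}

// Set stores a span map and, when over capacity, evicts the oldest epoch
// through the flush callback.
func (c *spanCache) Set(epoch uint64, spanMap map[uint64][2]uint16) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.spans[epoch] = spanMap
	if len(c.spans) <= c.maxSize {
		return
	}
	oldest := epoch
	for e := range c.spans {
		if e < oldest {
			oldest = e
		}
	}
	if c.onEvict != nil {
		c.onEvict(oldest, c.spans[oldest])
	}
	delete(c.spans, oldest)
}
```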
Raul Jordan
abe679e90e Create New Beacon State Data Structure (#4602)
* begin state service

* begin on the state trie idea

* created beacon state structure

* add in the full clone getter

* return by value instead

* add all setters

* new state setters are being completed

* arrays roots exposed

*  close to finishing all these headerssss

* functionality complete

* added in proto benchmark test

* test for compatibility

* add test for compat

* comments fixed

* add clone

* add clone

* remove underlying copies

* make it immutable

* integrate it into chainservice

* revert

* wrap up comments for package

* address all comments and godocs

* address all comments

* clone the pending attestation properly

* properly clone remaining items

* tests pass fixed bug

* prevent nil pointer exceptions

* fixed up some bugs in the clone comparisons

Co-authored-by: Nishant Das <nish1993@hotmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-21 22:36:12 -06:00
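
The new structure keeps the raw fields private behind a lock and returns copies from getters, which is what "make it immutable" and "return by value instead" refer to. A pared-down sketch with two fields:

```go
package state

import "sync"

// BeaconState hides its fields behind a lock; getters hand out copies so
// callers can never mutate the canonical state by accident.
type BeaconState struct {
	lock       sync.RWMutex
	slot       uint64
	blockRoots [][]byte
}

// Slot returns the state's slot.
func (b *BeaconState) Slot() uint64 {
	b.lock.RLock()
	defer b.lock.RUnlock()
	return b.slot
}

// BlockRoots returns a deep copy of the block roots, keeping the internal
// slice immutable from the caller's point of view.
func (b *BeaconState) BlockRoots() [][]byte {
	b.lock.RLock()
	defer b.lock.RUnlock()
	roots := make([][]byte, len(b.blockRoots))
	for i, r := range b.blockRoots {
		cp := make([]byte, len(r))
		copy(cp, r)
		roots[i] = cp
	}
	return roots
}

// SetSlot mutates the state under the write lock.
func (b *BeaconState) SetSlot(s uint64) {
	b.lock.Lock()
	defer b.lock.Unlock()
	b.slot = s
}
```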
Preston Van Loon
bfda29f2ad Implement voluntary exits pool (#4610) 2020-01-21 15:29:04 -08:00
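
An operations pool like this is essentially a mutex-guarded map keyed by validator index, deduplicating exits and handing a bounded batch to the block proposer. A minimal sketch (the container type and method names are illustrative):

```go
package voluntaryexits

import "sync"

// signedExit stands in for the SignedVoluntaryExit container.
type signedExit struct {
	ValidatorIndex uint64
	Epoch          uint64
}

// Pool holds pending voluntary exits, at most one per validator.
type Pool struct {
	mu      sync.RWMutex
	pending map[uint64]*signedExit
}

func NewPool() *Pool {
	return &Pool{pending: make(map[uint64]*signedExit)}
}

// Insert stores an exit unless one is already pending for the validator.
func (p *Pool) Insert(e *signedExit) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if _, ok := p.pending[e.ValidatorIndex]; !ok {
		p.pending[e.ValidatorIndex] = e
	}
}

// Pending returns up to max exits for inclusion in a block.
func (p *Pool) Pending(max int) []*signedExit {
	p.mu.RLock()
	defer p.mu.RUnlock()
	exits := make([]*signedExit, 0, max)
	for _, e := range p.pending {
		if len(exits) == max {
			break
		}
		exits = append(exits, e)
	}
	return exits
}
```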
Nishant Das
e96c2f4949 use proper bound (#4607) 2020-01-21 07:45:06 -08:00
Nishant Das
a52f9d4549 Save Deposit Data at Every Interval (#4606)
* save only every 100 logs
2020-01-21 06:07:12 +00:00
terence tsao
29a7a587cf Fix old markdown links (#4603)
* Fix old MD links
* Revert
* Merge branch 'master' into clean-up-old-mds
2020-01-21 03:30:35 +00:00
Preston Van Loon
27254ad362 Use a better skip slots cache with a lock around it for identical parallel ProcessSlots requests (#4597)
* Use a better skip slots cache with a lock around it for common requests
* Merge refs/heads/master into better-skip-slots-cache
* add test
* Merge branch 'better-skip-slots-cache' of github.com:prysmaticlabs/prysm into better-skip-slots-cache
* Merge refs/heads/master into better-skip-slots-cache
* exit process slots if the context expired
* Revert "exit process slots if the context expired"

This reverts commit 1430d8ab19.
* ensure validation has a pubsub timeout
* Merge refs/heads/master into better-skip-slots-cache
* PR feedback
* Merge branch 'better-skip-slots-cache' of github.com:prysmaticlabs/prysm into better-skip-slots-cache
2020-01-21 02:19:42 +00:00
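
The point of the lock is that identical ProcessSlots calls running in parallel should not each redo the slot-by-slot work. The commit does this with a lock around its cache; the same effect can be illustrated with golang.org/x/sync's singleflight, where concurrent callers sharing a key wait for a single computation (a sketch, not Prysm's API):

```go
package transition

import (
	"fmt"

	"golang.org/x/sync/singleflight"
)

// skipSlotGroup collapses identical, concurrent skip-slot requests: the
// first caller for a given (state root, target slot) key does the work and
// later callers on the same key receive the same result.
var skipSlotGroup singleflight.Group

type beaconState struct{ slot uint64 }

// processSlotsShared advances a copy of st to targetSlot, deduplicating
// identical concurrent requests by key.
func processSlotsShared(root [32]byte, st *beaconState, targetSlot uint64) (*beaconState, error) {
	key := fmt.Sprintf("%x-%d", root, targetSlot)
	v, err, _ := skipSlotGroup.Do(key, func() (interface{}, error) {
		cp := *st
		for cp.slot < targetSlot {
			// per-slot processing (caches, epoch transitions) would go here
			cp.slot++
		}
		return &cp, nil
	})
	if err != nil {
		return nil, err
	}
	return v.(*beaconState), nil
}
```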
Raul Jordan
eb5e814eb4 Disable Fork Choice Feature Flag (#4574) 2020-01-20 17:45:37 -08:00
Celeste Ariana Seberras
0a8dbaaabc Updated doc portal links (#4599) 2020-01-20 15:41:02 -08:00
Andre Miras
e65d98925b Updates README.md to expose docker port 13000 (#4596)
Port 13000 also needs to be exposed to improve connectivity and
receive more peers, refs #4323.
Also updates the "Docker on Windows" instructions for consistency.
2020-01-20 14:30:40 -06:00
Nishant Das
781b7d6870 Don't Panic if 0 Peers are Left (#4594)
* log error
* Merge branch 'master' into dontReturnError
* return blocks
* change back
* Merge branch 'dontReturnError' of https://github.com/prysmaticlabs/geth-sharding into dontReturnError
* use a more static finalized epoch
* Update beacon-chain/sync/initial-sync/round_robin.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>
* Merge refs/heads/master into dontReturnError
* jim's review
* Merge branch 'dontReturnError' of https://github.com/prysmaticlabs/geth-sharding into dontReturnError
* Update beacon-chain/sync/initial-sync/round_robin.go
2020-01-20 17:12:28 +00:00
Jim McDonald
dc4c1ca2b7 Ensure initial sync is initialised (#4587)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-01-20 10:38:27 -06:00
Andre Miras
d72e18ba60 Removes trailing backslash, refs #4562 (#4592)
* Removes trailing backslash, refs #4562
* Merge branch 'master' into feature/minor_documentation_fix
2020-01-19 22:46:19 +00:00
Ivan Martinez
a4db560e55 Prepare validator DB for attester protection implementation (#4584)
* Add flag for attester protection
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into protecc-attester-db
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into protecc-attester-db
* Remove flags
* Add attestation history DB functions to validator client
* Fix comments
* Update interface to new funcs
* Merge branch 'master' of https://github.com/prysmaticlabs/Prysm into protecc-attester-db
* Fix test
* Merge branch 'master' into protecc-attester-db
2020-01-19 22:05:48 +00:00
Preston Van Loon
e7ecd9329a Fix resync (#4585)
* reset synced to false
* comment
2020-01-19 03:29:08 +00:00
Nishant Das
3e7e447160 Make Status Requests Asynchronous (#4577)
* make rpc status requests async
* make whole block async
* fix nogo
* Update beacon-chain/sync/rpc_status.go
* Merge branch 'master' into makeAsync
* Merge refs/heads/master into makeAsync
2020-01-19 01:37:18 +00:00
Nishant Das
1b62e92159 Reset Status (#4576)
* reset status

* Update beacon-chain/powchain/service.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-01-19 09:24:19 +08:00
Preston Van Loon
aae27749d4 Release save deposits flag (#4581)
* release save deposits flag
* Merge refs/heads/master into release-save-deposits
2020-01-18 20:25:29 +00:00
Ivan Martinez
2ba81193b0 Add proto for attestation protection (#4579) 2020-01-18 14:46:33 -05:00
Preston Van Loon
68c1ca755d Don't mark peer as bad as part of this return. (#4575) 2020-01-18 12:46:12 +08:00
Preston Van Loon
ccfc650375 Better parent block request (#4572)
* Use a good peer instead of a random one, if we know about it
* Exit init sync if there is an issue
* Merge refs/heads/master into better-parent-block-processing
* Merge refs/heads/master into better-parent-block-processing
* Merge refs/heads/master into better-parent-block-processing
* Merge refs/heads/master into better-parent-block-processing
* Update pending_blocks_queue.go
2020-01-17 22:43:32 +00:00
terence tsao
3d3dccbdb4 Enabled proposer sig and randao verifications for init sync (#4573)
* Enabled proposer sig and randao verifications in init sync

* Comments

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-17 15:58:25 -06:00
Ivan Martinez
d04399ea96 Refactor generated benchmark files to allow for more general usage (#4436)
* Begin to refactor benchmark files to testutil

* Complete most of refactoring

* Fix file path

* gofmt

* Fix path

* Move generatego to tools/

* Move gen util to tools/benchmark-files-gen

* Add comments to pregen funcs

* Make function names consistent

* Update README

* Redo benchmarks with 16384 validators
2020-01-17 12:25:35 -05:00
Prince Sinha
0605118686 p2p: Added log for --p2p-host-ip (#4553)
* added log for external addr
* Merge branch 'master' into log-p2p-address
* Merge branch 'master' into log-p2p-address
* Merge branch 'master' into log-p2p-address
2020-01-17 11:07:37 +00:00
Jim McDonald
dab87ba252 Add --rpc-host option to beacon chain (#4571) 2020-01-16 20:18:26 -06:00
Raul Jordan
eb429ab719 Include Validator Index in GetDuties Response, Update EthereumAPIs (#4567)
* include new patch
* add patch and validator indices to duties resp
* test passing
* move call to validator index
* Merge branch 'master' into include-val-idx
* do not use wait groups anymore
* Merge branch 'include-val-idx' of github.com:prysmaticlabs/prysm into include-val-idx
* Update beacon-chain/rpc/validator/assignments_test.go
2020-01-16 22:37:51 +00:00
Raul Jordan
ed529965af Fix Up SSZ Cache Branch Recomputation (#4558)
* e2e ssz cache busting
* use ssz cache for e2e
* caching ensure
* fix up the cache even more
* gazelle
* formatting
* formatting
* add back cache for val registry
* sync
* fix up commented item
* add attestations
* Merge branch 'master' into e2e-ssz-cache
* Merge branch 'master' into e2e-ssz-cache
* Merge branch 'e2e-ssz-cache' of github.com:prysmaticlabs/prysm into e2e-ssz-cache
* formatting
* gaz
* Merge branch 'master' into e2e-ssz-cache
* resolve comments
* Merge branch 'master' into e2e-ssz-cache
* naming of test
* Merge refs/heads/master into e2e-ssz-cache
* Merge refs/heads/master into e2e-ssz-cache
2020-01-16 21:40:09 +00:00
Raul Jordan
c6343cac3a Enable RPCMaxPageSize via Beacon Node Flag (#4539)
* add new flag
* enforce max page size via flag
* ensure exists in flag group
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* Merge refs/heads/master into custom-max-page
* conflict with master
* resolved broken tests
* Update beacon-chain/flags/config.go
* Merge refs/heads/master into custom-max-page
2020-01-16 21:19:43 +00:00
Jim McDonald
06bc80d314 Add bad peer count (#4537)
* Add bad peer count
* Merge branch 'master' into badpeercount
* Merge branch 'master' into badpeercount
* Merge branch 'master' into badpeercount
* Merge branch 'master' into badpeercount
2020-01-16 21:07:09 +00:00
Nishant Das
60cab2dc73 update archive (#4443)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-16 12:37:04 -08:00
Prince Sinha
63d692a833 Fix deposit block slot before genesis state (#4495)
* before genesis state commit
* Merge branch 'master' into deposit-block-slot
* Merge branch 'master' into deposit-block-slot
* depositBlockSlot test added
* Merge branch 'deposit-block-slot' of https://github.com/princesinha19/prysm into deposit-block-slot
* Merge branch 'master' into deposit-block-slot
* Merge branch 'master' into deposit-block-slot
* resolve conflict
* status test commit
* Merge branch 'master' into deposit-block-slot
* Merge branch 'master' into deposit-block-slot
2020-01-16 19:38:30 +00:00
Jim McDonald
d744aaa2cd Better resync checking and running (#4516)
* Separate out fallen behind/resync check
* Remove hard-coded resync interval
* Merge branch 'master' into resync
* Merge branch 'master' into resync
* Merge branch 'master' into resync
* Merge branch 'master' into resync
* Merge branch 'master' into resync
* Merge branch 'master' into resync
* Merge branch 'master' into resync
* Merge branch 'master' into resync
* Merge branch 'master' into resync
2020-01-16 16:57:38 +00:00
JoshSnider
91d5ffae5b Remove invalid init-sync-no-verify option (#4562)
* Remove invalid `init-sync-no-verify` option

`init-sync-no-verify` was removed from `beacon-chain` in 32245a9062
* Merge branch 'master' into patch-1
2020-01-16 16:01:44 +00:00
terence tsao
cb49544fe3 Efficiently add proposer indices to cache (#4548)
* Use UpdateProposerIndicesInCache
* Merge branch 'master' into improve-proposer-cache
* Merge branch 'master' into improve-proposer-cache
* Merge branch 'master' into improve-proposer-cache
* Merge branch 'master' into improve-proposer-cache
2020-01-16 15:03:49 +00:00
Nishant Das
11731c4afe Fix RPC Panic (#4564) 2020-01-16 06:47:55 -08:00
Jim McDonald
5349b00e19 Tidy up peer logging (#4536)
* Tidy up peer logging
* Merge branch 'master' into peerlogs
* Merge branch 'master' into peerlogs
* Merge branch 'master' into peerlogs
2020-01-16 09:43:10 +00:00
Nishant Das
2e5429c94e Fix Stuck Beacon Node (#4454)
* Revert "Revert #4392 (#4449)"

This reverts commit 67c380b197.
* bound start req
* Merge refs/heads/master into revert-4449-revert-4392
* fix test
* Merge branch 'revert-4449-revert-4392' of https://github.com/prysmaticlabs/geth-sharding into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* add flag for deployment block
* Merge branch 'revert-4449-revert-4392' of https://github.com/prysmaticlabs/geth-sharding into revert-4449-revert-4392
* use constant and comments
* lint
* skip test for now
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Update shared/params/config.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Update beacon-chain/powchain/testing/mock.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* preston's review
* Merge branch 'revert-4449-revert-4392' of https://github.com/prysmaticlabs/geth-sharding into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* add flag
* Merge branch 'revert-4449-revert-4392' of https://github.com/prysmaticlabs/geth-sharding into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* use stateutils
* Merge branch 'revert-4449-revert-4392' of https://github.com/prysmaticlabs/geth-sharding into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
* Merge refs/heads/master into revert-4449-revert-4392
2020-01-16 01:46:15 +00:00
Nishant Das
0a632064d4 Fix Powchain Status (#4560)
* reset status
* Merge branch 'master' into fixStatus
* Merge refs/heads/master into fixStatus
2020-01-16 01:34:27 +00:00
Preston Van Loon
129bc763ee Rate limiter for rpc beacon blocks (#4549)
* use rate limiter for rpc beacon blocks

* gofmt

* don't delete empty buckets

* disconnect bad peers

* tell peer they are being rate limited

* defer disconnect

* fix tests

* set burst to x10

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-15 17:19:06 -08:00
Ivan Martinez
452cadc286 Cleanup featureconfig, make naming consistent (#4557)
* Cleanup featureconfig, make naming consistent

* Fix rename

* Change test package name
2020-01-15 18:41:40 -05:00
Ivan Martinez
2a4c89827d Add double proposal protection to validator client (#4460)
* Add double proposal protection to client

* Add mock test cases for past proposals, and after pruning

* Fix error

* Add force clear db to val in e2e

* Fix val tests

* Move saving proposal history to after broadcasting block

* Add featureflag

* Goimports

* Unexport flag

* Add flag to tests

* gazelle

* Move conditionals
2020-01-15 17:23:39 -05:00
terence tsao
5ab1efb537 Cached head root retrieve from DB on miss (#4552) 2020-01-14 21:02:02 -08:00
Preston Van Loon
d0793f00c5 Partially revert #4477 (#4550)
* Partially revert #4477
2020-01-15 00:29:02 +00:00
Preston Van Loon
d8d9f4482f p2p: Increment RPC metrics (#4547)
* Increment RPC metrics
* Merge refs/heads/master into rpc-metric
2020-01-14 17:02:50 +00:00
Nishant Das
4835ba7bdf Increment Metric at the Start of Validation (#4546)
* shift metric correctly
2020-01-14 16:49:15 +00:00
terence tsao
6ef1a712c2 OnBlockCacheFilteredTree (#4541) 2020-01-14 08:05:22 -08:00
Nishant Das
0bee1de486 Set Capacity for Slices (#4540)
* set capacities

* make it more accurate

* resolve it
2020-01-14 14:44:24 +08:00
Raul Jordan
d2d4e7e35d Benchmark and Optimize ListValidatorBalances (#4530)
* add balances api bench
* rename
* fix flakey test with sharding
* Merge branch 'master' into optimize-api
* optimizing the reqs for pagination
* Merge branch 'optimize-api' of github.com:prysmaticlabs/prysm into optimize-api
* Merge branch 'master' into optimize-api
* wrap up tests
* Merge branch 'optimize-api' of github.com:prysmaticlabs/prysm into optimize-api
* nishant comment
* Update beacon-chain/rpc/beacon/validators.go
2020-01-14 05:40:20 +00:00
Nishant Das
6e0248429f Fix Activation Queue (#4535)
* change operator
* Merge branch 'master' into fixQueue
* Merge refs/heads/master into fixQueue
* Merge refs/heads/master into fixQueue
* Merge refs/heads/master into fixQueue
* add test and fix issue
* Merge branch 'fixQueue' of https://github.com/prysmaticlabs/geth-sharding into fixQueue
2020-01-14 04:35:51 +00:00
terence tsao
884d2a159d Cache proposer indices (#4528)
* Precompute and plug it into run time
* Run time fix
* Testing
* More logging to debug
* More logging to debug
* This should fix it
* Clean up debug logs
* Removed last bit of debug log
* Comments
* Tests
* Merge branch 'master' into cache-proposer-index
* Gaz
* Merge branch 'cache-proposer-index' of git+ssh://github.com/prysmaticlabs/prysm into cache-proposer-index
* Merge refs/heads/master into cache-proposer-index
* Merge refs/heads/master into cache-proposer-index
* Merge refs/heads/master into cache-proposer-index
2020-01-14 04:08:32 +00:00
Preston Van Loon
415af93ad8 Minor tweaks to GetAttestationData (#4533)
* Maybe bugfix

* Maybe bugfix

* make GetAttestationData cheaper

* clone head state getter return values

* Fix tests

* fix e2e and revert most changes 😩

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-13 19:28:08 -08:00
shayzluf
0b35743d2c Attester proposer slashing store (#4315)
* Merge branch 'master' of github.com:prysmaticlabs/prysm into update_validators

# Conflicts:
#	slasher/flags/flags.go
#	slasher/main.go
#	slasher/service/data_update.go
#	slasher/service/service.go
#	slasher/service/service_test.go

* proposal and attester store

* day to status

* comment change

* one bucket

* Merge branch 'master' of github.com:prysmaticlabs/prysm into attester_proposer_slashing_store
# Please enter a commit message to explain why this merge is necessary,
# especially if it merges an updated upstream into a topic branch.
#
# Lines starting with '#' will be ignored, and an empty message aborts
# the commit.

added comments

* comment

* typo fix

* raul review fix

* raul review fix full

* nishant feedback

* test fix

* fix tests and remove update gofmt goimports

* remove blank line in imports

* nishant fixes

* comment and fix delete proposer slashings

* avoid marshal twice

* remove space

* Update slasher/db/attester_slashings.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* terence feedback

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2020-01-14 07:43:25 +05:30
Preston Van Loon
62811e8f7c Only advance to the correct epoch (#4532) 2020-01-13 15:34:05 -08:00
Preston Van Loon
d4ae063ad7 Revert "Filter attestation with ProcessAttestationNoSignatureVerify" (#4529)
* Revert "Filter attestation with ProcessAttestationNoSignatureVerify (#4513)"

This reverts commit 22e01aa9f2.
2020-01-13 21:02:13 +00:00
Jim McDonald
3d24a85121 Tidy-up of BestFinalized (#4505)
* Tidy up BestFinalized
* Ensure no more than maxPeers returned
* Merge branch 'master' into bestfinalized
* Merge branch 'master' into bestfinalized
* Merge branch 'master' into bestfinalized
* Merge branch 'master' into bestfinalized
* Remove swap file
* Provide potential PIDs array with capacity
* Add test for trimming and ordering in BestFinalized
* Merge branch 'master' into bestfinalized
2020-01-13 18:12:10 +00:00
Prince Sinha
888e8925ee cli: Added flag for GRPC max msg size (#4524)
* added grpc max msg size flag
* Merge branch 'master' into grpc-cli-flag
* Merge branch 'master' into grpc-cli-flag
* Merge branch 'master' into grpc-cli-flag
* Merge branch 'master' into grpc-cli-flag
2020-01-13 17:29:43 +00:00
Preston Van Loon
18333293d0 Refactor database interface to prefer blockchain.HeadFetcher (#4523)
* start refactoring and deprecation round 1
* Merge branch 'master' of github.com:prysmaticlabs/prysm into single-source-of-truth-1
* Refactoring of database interface. Preferring limited access interface
* revert some changes from 008f992993
* Fix tests
* gofmt
* Merge branch 'master' into single-source-of-truth-1
* lint
* Merge refs/heads/master into single-source-of-truth-1
* Apply suggestions from code review

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Merge refs/heads/master into single-source-of-truth-1
* Merge refs/heads/master into single-source-of-truth-1
* Merge refs/heads/master into single-source-of-truth-1
* Clone head block to avoid mutation
2020-01-13 17:02:20 +00:00
Nishant Das
e286069b20 Check Block Before Processing it (#4527)
* fix panic
* Update beacon-chain/sync/pending_blocks_queue.go
2020-01-13 15:09:22 +00:00
Jim McDonald
de2f1fbf5c Ignore VI swapfiles (#4525) 2020-01-13 06:43:14 -08:00
Jim McDonald
44fa2c6371 Only one handshake at a time with active peers (#4519) 2020-01-13 18:15:09 +08:00
terence tsao
a8edfa42cc Cache filtered block tree (#4515)
* Cache filtered block tree
* Merge refs/heads/master into cache-filtered-tree
* Merge refs/heads/master into cache-filtered-tree
* Add locks
* Merge branch 'cache-filtered-tree' of git+ssh://github.com/prysmaticlabs/prysm into cache-filtered-tree
* Conflict
* Merge refs/heads/master into cache-filtered-tree
* Merge refs/heads/master into cache-filtered-tree
* Update shared/featureconfig/flags.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
* Rlock
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into cache-filtered-tree
* Merge branch 'cache-filtered-tree' of git+ssh://github.com/prysmaticlabs/prysm into cache-filtered-tree
2020-01-13 04:12:50 +00:00
terence tsao
7edca61e44 #4506 take two (#4518)
* Samething, testing run time
* Check if the latest processed block root is the same
* Merge branch 'master' into improve-receive-block-reattempt
* Merge refs/heads/master into improve-receive-block-reattempt
* Merge refs/heads/master into improve-receive-block-reattempt
2020-01-12 23:55:28 +00:00
Preston Van Loon
1cb0edac00 PR #4502 take two (#4522)
* PR #4502 take two
2020-01-12 23:46:14 +00:00
Raul Jordan
88bce4af34 Revert "Deprecate new cache feature flag" (#4520)
* Revert "Deprecate new cache feature flag (#4502)"

This reverts commit 5287ddc114.
2020-01-12 23:15:08 +00:00
terence tsao
5287ddc114 Deprecate new cache feature flag (#4502)
* Starting to deprecate new cache flag
* All tests passing
* Fixed minimal test
* Merge branch 'master' of git+ssh://github.com/prysmaticlabs/prysm into deprecate-flag
* Fixed mainnet spec test
* Fixed a typo
* Merge refs/heads/master into deprecate-flag
* Merge refs/heads/master into deprecate-flag
* Merge refs/heads/master into deprecate-flag
2020-01-12 22:46:30 +00:00
terence tsao
22e01aa9f2 Filter attestation with ProcessAttestationNoSignatureVerify (#4513)
* Use no sig verify and comment

* Fixed all tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-12 16:32:33 -06:00
terence tsao
a79dab7919 Revert "ReceiveBlock: Only retrieve head block from DB if necessary (#4506)" (#4514)
This reverts commit 9a4bf6c1a2.
2020-01-12 14:08:25 -08:00
Preston Van Loon
9a4bf6c1a2 ReceiveBlock: Only retrieve head block from DB if necessary (#4506)
* Only retrieve head block from DB if necessary

* remove redundant comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-12 15:13:13 -06:00
Preston Van Loon
7992375a0e Check if we are already synced to the current epoch before querying all of our peers (#4504) 2020-01-11 18:31:30 -08:00
terence tsao
6c4bf22723 Fix up attestation pool (#4493)
* Update aggregated methods

* Update aggregated methods

* Use improved HasAttestation to check caches

* Add back some validations

* There's no need to save unaggregated att

* Fixed all the tests

* remove TODO for now

* Raul feedback

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-01-11 13:59:06 -08:00
Jim McDonald
ea12ffabba Log Ethereum 1 deposits before chainstart (#4499)
* Log Ethereum 1 deposits before chainstart
* Merge branch 'master' into logdeposits
2020-01-11 19:54:43 +00:00
Preston Van Loon
5077c009c3 Only advance slot when the request is for a future epoch (#4501) 2020-01-11 11:40:36 -08:00
Jim McDonald
7f1900e96c Remove unused function (#4496)
* Remove unused function
* Merge branch 'master' into rmunused
2020-01-11 11:20:05 +00:00
Jim McDonald
3c5d5bfc7b Use helper to calculate epoch (#4497) 2020-01-11 19:06:10 +08:00
Nishant Das
4e6c8c5b1a Batch Save Genesis Validators (#4494)
* save vals
* Merge branch 'master' into batchSaveGenesisValidators
2020-01-11 04:40:31 +00:00
Preston Van Loon
7919074a6a Add a step filter for beacon DB to retrieve blocks (#4488)
* Add a step filter for beacon DB to retrieve blocks
* Add a step filter for beacon DB to retrieve blocks
* gofmt
* Merge branch 'master' into db-step-filter
* Merge refs/heads/master into db-step-filter
* Merge refs/heads/master into db-step-filter
* Merge refs/heads/master into db-step-filter
* Merge refs/heads/master into db-step-filter
* fix tests
* Merge branch 'db-step-filter' of github.com:prysmaticlabs/prysm into db-step-filter
* Merge refs/heads/master into db-step-filter
2020-01-11 01:32:45 +00:00
Nishant Das
f6eea8e1fa Optimize Archival Assignment Retrieval (#4480)
* optimize further
* remove func
* Merge branch 'master' into optimizeArchival
* Merge refs/heads/master into optimizeArchival
* Merge refs/heads/master into optimizeArchival
* Merge refs/heads/master into optimizeArchival
* Merge refs/heads/master into optimizeArchival
* Merge refs/heads/master into optimizeArchival
* Merge refs/heads/master into optimizeArchival
* raul's review
* Merge branch 'optimizeArchival' of https://github.com/prysmaticlabs/geth-sharding into optimizeArchival
* preston's review
2020-01-11 01:19:52 +00:00
terence tsao
45e6eccfb4 Add epoch filter for fork choice attestation (#4487)
* Filter target epoch
* Test
* Comment
* Merge branch 'master' into fix-target-epoch
* Merge refs/heads/master into fix-target-epoch
* Merge refs/heads/master into fix-target-epoch
* Merge refs/heads/master into fix-target-epoch
2020-01-10 23:51:49 +00:00
terence tsao
b6c6b9b776 Filter block tree verifies block root has state (#4490)
* Construct block tree ensures block root has state
* Merge refs/heads/master into filter-tree-has-state
2020-01-10 23:40:46 +00:00
Raul Jordan
1a9b0da9ae Batch Save Validator Indices (#4489)
* add bolt alloc fix
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* const
* add a batch save for indices to speed up sync
* Merge branch 'batch-save-indices' into bolt-alloc-fix
* fix up
* fix broken build
* Merge branch 'batch-save-indices' into bolt-alloc-fix
* Merge branch 'master' into batch-save-indices
* ensure it saves each
* Merge branch 'batch-save-indices' of github.com:prysmaticlabs/prysm into batch-save-indices
* Merge branch 'master' into batch-save-indices
* revert ssz cache
* Merge branch 'batch-save-indices' of github.com:prysmaticlabs/prysm into batch-save-indices
2020-01-10 23:27:01 +00:00
Raul Jordan
025be93492 Allocate More Resources to BoltDB (#4485)
* add bolt alloc fix
* Merge branch 'master' of github.com:prysmaticlabs/prysm
* const
* Merge refs/heads/master into bolt-alloc-fix
2020-01-10 22:17:42 +00:00
Jim McDonald
22a3bf53ad Do not panic if initial sync fails (#4477)
* Do not panic if initial sync fails
* Only consider peers with non-0 finalized epoch
* Additional fixes
* Fix tests
* Merge branch 'master' into mvlog
* Merge branch 'master' into mvlog
2020-01-10 21:36:20 +00:00
terence tsao
37459ee765 Forkchoice att seen cache consider bitfield overlaps (#4483)
* Aggregate with previous aggregated attestations

* Update cache to bitfield as value

* Remove fmt print
2020-01-10 14:44:20 -06:00
terence tsao
e9d63e8dd3 Aggregate with previous aggregated attestations (#4478) 2020-01-10 10:56:28 -06:00
terence tsao
01b8a84e21 Check fork choice attestation's block and state in DB (#4475) 2020-01-10 06:25:43 -08:00
Preston Van Loon
9d8364bdfa only advance state in validate aggregate and proof if the epoch has changed between head state and attestation slot (#4474) 2020-01-09 19:11:02 -08:00
terence tsao
2b6a5aaaf9 Use db head info for request attestation (#4472) 2020-01-09 18:37:55 -08:00
Preston Van Loon
6aa92956f6 RPC: Use db.headBlock in getBlock (#4473)
* use headBlock in getBlock
* Merge branch 'master' into use-db-headblock
2020-01-10 02:00:36 +00:00
Preston Van Loon
eae2268dd1 DB: Prevent encoding a nil message (#4470)
* Prevent encoding a nil message
* Merge refs/heads/master into prevent-saving-nil-msg
2020-01-10 01:38:53 +00:00
Preston Van Loon
6de485c27e Use a longer deadline for processing pubsub messages (#4471)
* Use a longer deadline for processing pubsub messages
2020-01-10 01:27:20 +00:00
Jim McDonald
3839f577ec Change database *Index() to use slice (#4466)
* Change database *Index() to use slice

* Remove underscore from helper name
2020-01-09 14:45:05 -08:00
Jim McDonald
fc38a0413e Fix gauge description (#4468) 2020-01-09 11:39:11 -08:00
terence tsao
0dd0e23155 Optimize ListBeaconCommittees to use committees cache (#4464) 2020-01-09 08:28:11 -08:00
Nishant Das
699e1c8a23 Optimize List Validator Assignments (#4456)
* optimize
* Merge refs/heads/master into listValidatorAssingments
* Merge refs/heads/master into listValidatorAssingments
* Merge refs/heads/master into listValidatorAssingments
* Merge refs/heads/master into listValidatorAssingments
* Merge refs/heads/master into listValidatorAssingments
* Merge refs/heads/master into listValidatorAssingments
* Merge refs/heads/master into listValidatorAssingments
* Merge refs/heads/master into listValidatorAssingments
* Merge refs/heads/master into listValidatorAssingments
* terence's and preston's review
* Merge branch 'listValidatorAssingments' of https://github.com/prysmaticlabs/geth-sharding into listValidatorAssingments
2020-01-09 05:07:24 +00:00
shayzluf
39b2570af5 Beacon node slasher client infrastructure (#4111) 2020-01-08 20:49:32 -08:00
Preston Van Loon
200bb5e42a Docker: Make root user the default (#4461)
* make root user the default
* Merge branch 'master' into root-user
* Merge refs/heads/master into root-user
* Merge refs/heads/master into root-user
2020-01-08 19:52:59 +00:00
Preston Van Loon
d249f78d79 Update tool README.md (#4463)
* Update README.md
* Merge refs/heads/master into prestonvanloon-patch-2
2020-01-08 19:44:34 +00:00
Nishant Das
e110f038bc Add Back Eth1 Block Delay (#4458)
* add delay
* Merge refs/heads/master into addBackDelay
* Merge refs/heads/master into addBackDelay
* Merge refs/heads/master into addBackDelay
* Merge refs/heads/master into addBackDelay
* Merge refs/heads/master into addBackDelay
2020-01-08 19:36:20 +00:00
Jim McDonald
5ee79dc4a8 Log fork version mismatches at debug (#4457)
* Log fork version mismatches at debug
* Merge branch 'master' into synclogerrors
* Merge branch 'master' into synclogerrors
* Merge branch 'master' into synclogerrors
2020-01-08 19:18:21 +00:00
Ivan Martinez
4ab0a91e51 Validator Slashing Protection DB (#4389)
* Begin adding DB to validator client

Begin adding ValidatorProposalHistory

Implement most of proposal history

Finish tests

Fix marking a proposal for the first time

Change proposalhistory to not using bit shifting

Add pb.go

Change after proto/slashing added

Finally fix protos

Fix most tests

Fix all tests for double proposal protection

Start initialiing DB in validator client

Add db to validator struct

Add DB to ProposeBlock

Fix test errors and begin mocking

Fix test formatting and pass test for validator protection!

Fix merge issues

Fix renames

Fix tests

* Fix tests

* Fix first startup on DB

* Fix nil check tests

* Fix E2E

* Fix e2e flag

* Fix comments

* Fix for comments

* Move proposal hepers to validator/client to keep DB clean

* Add clear-db flag to validator client

* Fix formatting

* Clear out unintended changes

* Fix build issues

* Fix build issues

* Gazelle

* Fix mock test

* Remove proposal history

* Add terminal confirmation to DB clearing

* Add interface for validatorDB, add context to DB functions

* Add force-clear-db flag

* Cleanup

* Update validator/node/node.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Change db to clear file, not whole folder

* Fix db test

* Fix teardown test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-01-08 13:16:17 -05:00
terence tsao
a69cb5c6e4 Committee cache fuzz tests (#4459)
* Starting TestCommitteeKeyFuzz_OK

* Added fuzz tests for committeees by epoch and active indices
2020-01-08 11:13:39 -06:00
terence tsao
624c42421d Add HasAggregatedAttestation getter for pool (#4451) 2020-01-08 06:59:49 -08:00
Jim McDonald
f3ae67a94b Fix bad update in #4453 (#4455) 2020-01-08 21:54:44 +08:00
Ivan Martinez
b30a7d1e19 Fix typos and inconsistencies (#4453)
* Fix typos and inconsistencies

* goimports

* Gazelle
2020-01-07 20:36:55 -06:00
Ivan Martinez
0d400faea2 Remove unused parameters and unused code (#4452)
* Remove unused parameters
* Remove unused deposit contract config
2020-01-07 23:45:29 +00:00
Preston Van Loon
67c380b197 Revert #4392 (#4449)
* revert #4392
2020-01-07 21:15:40 +00:00
786 changed files with 60717 additions and 27019 deletions

.bazelrc (164 changed lines)

@@ -24,3 +24,167 @@ build --define ssz=mainnet
build --incompatible_strict_action_env
test --incompatible_strict_action_env
run --incompatible_strict_action_env
# Disable kafka by default, it takes a long time to build...
build --define kafka_enabled=false
test --define kafka_enabled=false
run --define kafka_enabled=false
# Release flags
build:release --workspace_status_command=./scripts/workspace_status.sh
build:release --stamp
build:release --compilation_mode=opt
# LLVM compiler for building C/C++ dependencies.
build:llvm --crosstool_top=@llvm_toolchain//:toolchain
build:llvm --define compiler=llvm
# Build binary with cgo symbolizer for debugging / profiling.
build:cgo_symbolizer --config=llvm
build:cgo_symbolizer --copt=-g
build:cgo_symbolizer --define=USE_CGO_SYMBOLIZER=true
build:cgo_symbolizer -c dbg
# multi-arch cross-compiling toolchain configs:
#-----------------------------------------------
build:cross --crosstool_top=@prysm_toolchains//:multiarch_toolchain
build:cross --host_platform=@io_bazel_rules_go//go/toolchain:linux_amd64
build:cross --host_crosstool_top=@prysm_toolchains//:hostonly_toolchain
# linux_amd64 config for cross compiler toolchain, not strictly necessary since host/exec env is amd64
build:linux_amd64 --platforms=@io_bazel_rules_go//go/toolchain:linux_amd64_cgo
# osx_amd64 config for cross compiler toolchain
build:osx_amd64 --config=cross
build:osx_amd64 --platforms=@io_bazel_rules_go//go/toolchain:darwin_amd64_cgo
build:osx_amd64 --compiler=osxcross
# windows
build:windows_amd64 --config=cross
build:windows_amd64 --platforms=@io_bazel_rules_go//go/toolchain:windows_amd64_cgo
build:windows_amd64 --compiler=mingw-w64
# linux_arm64 conifg for cross compiler toolchain
build:linux_arm64 --config=cross
build:linux_arm64 --platforms=@io_bazel_rules_go//go/toolchain:linux_arm64_cgo
build:linux_arm64 --copt=-funsafe-math-optimizations
build:linux_arm64 --copt=-ftree-vectorize
build:linux_arm64 --copt=-fomit-frame-pointer
build:linux_arm64 --cpu=aarch64
build:linux_arm64 --compiler=clang
build:linux_arm64 --copt=-march=armv8-a
# toolchain build debug configs
#------------------------------
build:debug --sandbox_debug
build:debug --toolchain_resolution_debug
build:debug --verbose_failures
build:debug -s
# windows debug
build:windows_amd64_debug --config=windows_amd64
build:windows_amd64_debug --config=debug
# osx_amd64 debug config
build:osx_amd64_debug --config=debug
build:osx_amd64_debug --config=osx_amd64
# linux_arm64_debug
build:linux_arm64_debug --config=linux_arm64
build:linux_arm64_debug --config=debug
# linux_amd64_debug
build:linux_amd64_debug --config=linux_amd64
build:linux_amd64_debug --config=debug
# Docker Sandbox Configs
#-----------------------
# Note all docker sandbox configs must run from a linux x86_64 host
# build:docker-sandbox --experimental_docker_image=gcr.io/prysmaticlabs/rbe-worker:latest
build:docker-sandbox --spawn_strategy=docker --strategy=Javac=docker --genrule_strategy=docker
build:docker-sandbox --define=EXECUTOR=remote
build:docker-sandbox --experimental_docker_verbose
build:docker-sandbox --experimental_enable_docker_sandbox
build:docker-sandbox --crosstool_top=@rbe_ubuntu_clang//cc:toolchain
build:docker-sandbox --host_javabase=@rbe_ubuntu_clang//java:jdk
build:docker-sandbox --javabase=@rbe_ubuntu_clang//java:jdk
build:docker-sandbox --host_java_toolchain=@bazel_tools//tools/jdk:toolchain_hostjdk8
build:docker-sandbox --java_toolchain=@bazel_tools//tools/jdk:toolchain_hostjdk8
build:docker-sandbox --extra_execution_platforms=@rbe_ubuntu_clang//config:platform
build:docker-sandbox --host_platform=@rbe_ubuntu_clang//config:platform
build:docker-sandbox --platforms=@rbe_ubuntu_clang//config:platform
build:docker-sandbox --extra_toolchains=@prysm_toolchains//:cc-toolchain-multiarch
# windows_amd64 docker sandbox build config
build:windows_amd64_docker --config=docker-sandbox --config=windows_amd64
build:windows_amd64_docker_debug --config=windows_amd64_docker --config=debug
# osx_amd64 docker sandbox build config
build:osx_amd64_docker --config=docker-sandbox --config=osx_amd64
build:osx_amd64_docker_debug --config=osx_amd64_docker --config=debug
# linux_arm64 docker sandbox build config
build:linux_arm64_docker --config=docker-sandbox --config=linux_arm64
build:linux_arm64_docker_debug --config=linux_arm64_docker --config=debug
# linux_amd64 docker sandbox build config
build:linux_amd64_docker --config=docker-sandbox --config=linux_amd64
build:linux_amd64_docker_debug --config=linux_amd64_docker --config=debug
# Remote Build Execution
#-----------------------
# Originally from https://github.com/bazelbuild/bazel-toolchains/blob/master/bazelrc/bazel-2.0.0.bazelrc
#
# Depending on how many machines are in the remote execution instance, setting
# this higher can make builds faster by allowing more jobs to run in parallel.
# Setting it too high can result in jobs that timeout, however, while waiting
# for a remote machine to execute them.
build:remote --jobs=50
# Set several flags related to specifying the platform, toolchain and java
# properties.
# These flags should only be used as is for the rbe-ubuntu16-04 container
# and need to be adapted to work with other toolchain containers.
build:remote --host_javabase=@rbe_ubuntu_clang//java:jdk
build:remote --javabase=@rbe_ubuntu_clang//java:jdk
build:remote --host_java_toolchain=@bazel_tools//tools/jdk:toolchain_hostjdk8
build:remote --java_toolchain=@bazel_tools//tools/jdk:toolchain_hostjdk8
build:remote --crosstool_top=@rbe_ubuntu_clang//cc:toolchain
build:remote --action_env=BAZEL_DO_NOT_DETECT_CPP_TOOLCHAIN=1
# Platform flags:
# The toolchain container used for execution is defined in the target indicated
# by "extra_execution_platforms", "host_platform" and "platforms".
# More about platforms: https://docs.bazel.build/versions/master/platforms.html
build:remote --extra_toolchains=@rbe_ubuntu_clang//config:cc-toolchain
build:remote --extra_execution_platforms=@rbe_ubuntu_clang//config:platform
build:remote --host_platform=@rbe_ubuntu_clang//config:platform
build:remote --platforms=@rbe_ubuntu_clang//config:platform
# Starting with Bazel 0.27.0 strategies do not need to be explicitly
# defined. See https://github.com/bazelbuild/bazel/issues/7480
build:remote --define=EXECUTOR=remote
# Enable remote execution so actions are performed on the remote systems.
# build:remote --remote_executor=grpcs://remotebuildexecution.googleapis.com
# Enforce stricter environment rules, which eliminates some non-hermetic
# behavior and therefore improves both the remote cache hit rate and the
# correctness and repeatability of the build.
build:remote --incompatible_strict_action_env=true
# Set a higher timeout value, just in case.
build:remote --remote_timeout=3600
# Enable authentication. This will pick up application default credentials by
# default. You can use --google_credentials=some_file.json to use a service
# account credential instead.
# build:remote --google_default_credentials=true
# Enable build without the bytes
# See: https://github.com/bazelbuild/bazel/issues/6862
build:remote --experimental_remote_download_outputs=toplevel --experimental_inmemory_jdeps_files --experimental_inmemory_dotd_files
build:remote --remote_local_fallback

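As a rough usage sketch of the configs defined above (assuming the cross-compile and remote toolchains referenced there resolve on the build host, and noting that the example `--remote_executor` endpoint is commented out), the named configs compose on the bazel command line:

```text
# Cross-compile the beacon chain for linux_arm64 inside the docker sandbox
# (must run from a linux x86_64 host, per the note above):
bazel build --config=linux_arm64_docker //beacon-chain

# Build with the remote execution settings
# (assumes a --remote_executor endpoint has been configured):
bazel build --config=remote //beacon-chain
```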
.bazelversion (new file, 1 line)

@@ -0,0 +1 @@
2.1.1

(another changed file)

@@ -20,15 +20,16 @@ build:remote-cache --strategy=Genrule=standalone
build:remote-cache --disk_cache=
build:remote-cache --host_platform_remote_properties_override='properties:{name:\"cache-silo-key\" value:\"prysm\"}'
build:remote-cache --remote_instance_name=projects/prysmaticlabs/instances/default_instance
build:remote-cache --experimental_remote_download_outputs=minimal
build:remote-cache --experimental_inmemory_jdeps_files
build:remote-cache --experimental_inmemory_dotd_files
build:remote-cache --remote_download_minimal
# Import workspace options.
import %workspace%/.bazelrc
startup --host_jvm_args=-Xmx1000m --host_jvm_args=-Xms1000m
query --repository_cache=/tmp/repositorycache
query --experimental_repository_cache_hardlinks
build --repository_cache=/tmp/repositorycache
build --experimental_repository_cache_hardlinks
build --experimental_strict_action_env
build --disk_cache=/tmp/bazelbuilds
build --experimental_multi_threaded_digest
@@ -36,12 +37,15 @@ build --sandbox_tmpfs_path=/tmp
build --verbose_failures
build --announce_rc
build --show_progress_rate_limit=5
build --curses=yes --color=yes
build --curses=yes --color=no
build --keep_going
build --test_output=errors
build --flaky_test_attempts=5
build --jobs=50
build --stamp
test --local_test_jobs=2
# Disabled race detection due to unstable test results under constrained environment build kite
# build --features=race
# Enable kafka for CI tests only.
test --define kafka_enabled=true
build --bes_backend=grpcs://builds.prylabs.net:1985
build --bes_results_url=https://builds.prylabs.net/invocation/

.github/CODEOWNERS (new file, 2 lines)

@@ -0,0 +1,2 @@
# Automatically require code review from core-team.
* @prysmaticlabs/core-team

.gitignore (7 changed lines)

@@ -4,6 +4,9 @@ bazel-*
.DS_Store
.swp
# Ignore VI/Vim swapfiles
.*.sw?
# IntelliJ
.idea
.ijwb
@@ -14,6 +17,7 @@ bazel-*
# Coverage outputs
coverage.txt
profile.out
profile.grind
# Nodejs
node_modules
@@ -26,3 +30,6 @@ password.txt
# go dependancy
/go.mod
/go.sum
# Dist files
dist

.well-known/security.txt

@@ -4,23 +4,23 @@ Hash: SHA512
Contact: mailto:security@prysmaticlabs.com
Encryption: openpgp4fpr:0AE0051D647BA3C1A917AF4072E33E4DF1A5036E
Encryption: openpgp4fpr:341396BAFACC28C5082327F889725027FC8EC0D4
Encryption: openpgp4fpr:8B7814F1B221A8E8AA465FC7BDBF744ADE1A0033
Encryption: openpgp4fpr:FEE44615A19049DF0CA0C2735E2B7E5734DFADCB
Preferred-Languages: en
Canonical: https://github.com/prysmaticlabs/prysm/tree/master/.well-known/security.txt
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAlzi0WgACgkQcuM+TfGl
A241pw/+Ks3Hxx8eGbjRIeuncuK811FkCiofNJS+MY2p4W2/tIrk48DtLRx8/k5L
Dh1QyypZsqUgofrK7PbGVdEin6oEb2jYbTWUarAVTbhlsUdM4YcxwpgmGVslW7+C
Hm8wMasQZhCkFfakzhfKX5hIQoFaFI/OvtVKIQsodP8dAieCDaGmtfq1Bs1LgFqi
KrpeEdC2XbBQs33ADheC5SdGT1mnatP3VX8cOhLsfoPksYgTSpwK0clkoWs1eZOQ
l1ImfW/FJCpSndBWgBR503ZgaU3Ic+5qxmAIuUP4chl0DFRMlPFEM5OWC6JkkCOd
5kKrXGRmrhgtQg+pA3zqJnFItRj7gxPBA/ypxCkKPrLEkRvbdpdZEl5vAlYkeBL6
iKSLHnMswGKldiYxy7ofam5bM3myhYYNFb25boV5pRptrnoUmWOACHioBGQHwWNt
B0XktD0j7+pCCiJyyYxmOnElsk/Y/u4Tv5pYWvfFuxTF2XOg+P/EH64AIFLWgB1U
VnITxhakxqejCBxZkuVCFNSzt+TXG0NS9EIj/UOYBY+wxrBZ62ITjdA16RS/3n3z
DuIDtxOOwUumbOO32+a5zIb+ARmnocYJviI7FuENb01/U6qb+nm9hQI6oIpSCNsv
Pb4O/ZlOx70U/7mt4Xn/dTKH9bnKOOVhOw00KJWFfAce73AVnLA=
=Uhqg
iQIzBAEBCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAl6HrcwACgkQcuM+TfGl
A26voQ/8DFB5wUHP0uyY8k7FGbxhLzSeImxomnUHJaUGfdczbCdYPMEHc9zI1iZP
6LRiy9wS6qhqj/GSKVwvDPr+ZymXuV3L22GOP2lRhl7Z9Mm21ZJNOcoQBFOZnyHu
DAy9HeTmeuJxYkf8weqZYXyzEoKJBDmfuWmEFjrtMcFXUfT3aJn1E2A/AQdcVQIC
9L+iGWwFwjsPhcfaMuwcB7QMheDO6KSB7XPPCbrZ036Np8UTZ4qbZ5y73tlfkcOc
tYTrMSPtS4eNutiDOP5Np36cLzRtNpm/BziAK+7ZKiYY0HI5h9IkCTLO4x2UmAMX
sPoeaAB5z2QLIwmU9J2NhJrwiNMGTpJ+0bowy8U4cgzAX20CXVjRqGhy+cir8Ewg
DjEGjWINUw6W0yzJp0mKSKzuOhdTTmzIYBeMBsyce+pgN1KGFCxeIwxGxyJzADdw
mYQdljRXn4yEYP/KEpu/F2o8L4ptRO2jZWKvTvdzSSGGSyKyF4HsIRJ7m98DaB6S
0oGq1KpbKKTbQi5g8UShGV2gPeMCs5ZIIqK2b/cRzUet18aUuofLmR4lkKZa9yEG
rbzuJq/gB2vgQwExUEgVQ3/DfVc+y80e3YZ5s+rzV0vbLxl4Gh4yExpLo7hRf9iY
EFvMzH+BEEb5VfCwByZyV1BmesZVIosr7K6UmVtPe0bZGvv3uIg=
=5qpD
-----END PGP SIGNATURE-----

(another changed file)

@@ -4,7 +4,6 @@ load("@com_github_atlassian_bazel_tools//goimports:def.bzl", "goimports")
load("@io_kubernetes_build//defs:run_in_workspace.bzl", "workspace_binary")
load("@io_bazel_rules_go//go:def.bzl", "nogo")
load("@graknlabs_bazel_distribution//common:rules.bzl", "assemble_targz", "assemble_versioned")
load("//tools:binary_targets.bzl", "binary_targets", "determine_targets")
prefix = "github.com/prysmaticlabs/prysm"
@@ -104,40 +103,15 @@ nogo(
"@org_golang_x_tools//go/analysis/passes/assign:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_tool_library",
"//tools/analyzers/maligned:go_tool_library",
],
)
assemble_versioned(
name = "assemble-versioned-all",
tags = ["manual"],
targets = [
":assemble-{}-{}-targz".format(
pair[0],
pair[1],
)
for pair in binary_targets
],
version_file = "//:VERSION",
)
common_files = {
"//:LICENSE.md": "LICENSE.md",
"//:README.md": "README.md",
}
[assemble_targz(
name = "assemble-{}-{}-targz".format(
pair[0],
pair[1],
),
additional_files = determine_targets(pair, common_files),
output_filename = "prysm-{}-{}".format(
pair[0],
pair[1],
),
tags = ["manual"],
) for pair in binary_targets]
toolchain(
name = "built_cmake_toolchain",
toolchain = "@rules_foreign_cc//tools/build_defs/native_tools:built_cmake",

(another changed file)

@@ -57,7 +57,7 @@ the system with 64 validators and the genesis time set to the current unix times
Wait a bit until your beacon chain starts, and in the other window:
```
bazel run //validator -- --interop-num-validators 64
bazel run //validator -- --keymanager=interop --keymanageropts='{"keys":64}'
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.
@@ -78,13 +78,7 @@ Assuming you generated a `genesis.ssz` file with 64 validators, open up two term
Wait a bit until your beacon chain starts, and in the other window:
```
bazel run //validator -- --interop-num-validators 64
bazel run //validator -- --keymanager=interop --keymanageropts='{"keys":64}'
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.

README.md

@@ -8,7 +8,7 @@
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the Ethereum 2.0 client specifications developed by [Prysmatic Labs](https://prysmaticlabs.com).
### Need assistance?
A more detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the [official documentation portal](https://prysmaticlabs.gitbook.io/prysm/). If you still have questions, feel free to stop by either our [Discord](https://discord.gg/KSA7rPr) or [Gitter](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) and a member of the team or our community will be happy to assist you.
A more detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the [official documentation portal](https://docs.prylabs.network). If you still have questions, feel free to stop by either our [Discord](https://discord.gg/KSA7rPr) or [Gitter](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) and a member of the team or our community will be happy to assist you.
### Come join the testnet!
Participation is now open to the public for our Ethereum 2.0 phase 0 testnet release. Visit [prylabs.net](https://prylabs.net) for more information on the project or to sign up as a validator on the network.
@@ -23,7 +23,7 @@ Participation is now open to the public for our Ethereum 2.0 phase 0 testnet rel
- [Running via Docker](#running-via-docker)
- [Running via Bazel](#running-via-bazel)
- [Staking ETH: running a validator client](#staking-eth-running-a-validator-client)
- [Activating your validator: depositing 3.2 Goerli ETH](#activating-your-validator-depositing-32-göerli-eth)
- [Activating your validator: depositing 3.2 Goerli ETH](#activating-your-validator-depositing-32-gerli-eth)
- [Starting the validator with Bazel](#starting-the-validator-with-bazel)
- [Setting up a local ETH2 development chain](#setting-up-a-local-eth2-development-chain)
- [Installation and dependencies](#installation-and-dependencies)
@@ -92,7 +92,7 @@ Bazel will automatically pull and install any dependencies as well, including Go
## Connecting to the testnet: running a beacon node
Below are instructions for initialising a beacon node and connecting to the public testnet. To further understand the role that the beacon node plays in Prysm, see [this section of the documentation.](https://prysmaticlabs.gitbook.io/prysm/how-prysm-works/overview-technical)
Below are instructions for initialising a beacon node and connecting to the public testnet. To further understand the role that the beacon node plays in Prysm, see [this section of the documentation.](https://docs.prylabs.network/docs/how-prysm-works/architecture-overview/)
**NOTE:** It is recommended to open up port 13000 on your local router to improve connectivity and receive more peers from the network. To do so, navigate to `192.168.0.1` in your browser and login if required. Follow along with the interface to modify your routers firewall settings. When this task is completed, append the parameter`--p2p-host-ip=$(curl -s ident.me)` to your selected beacon startup command presented in this section to use the newly opened port.
@@ -104,10 +104,9 @@ Below are instructions for initialising a beacon node and connecting to the publ
To start your beacon node, issue the following command:
```text
docker run -it -v $HOME/prysm:/data -p 4000:4000 --name beacon-node \
docker run -it -v $HOME/prysm:/data -p 4000:4000 -p 13000:13000 --name beacon-node \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--datadir=/data \
--init-sync-no-verify
--datadir=/data
```
The beacon node can be halted by either using `Ctrl+c` or with the command:
@@ -131,7 +130,7 @@ docker rm beacon-node
To recreate a deleted container and refresh the chain database, issue the start command with an additional `--clear-db` parameter:
```text
docker run -it -v $HOME/prysm:/data -p 4000:4000 --name beacon-node \
docker run -it -v $HOME/prysm:/data -p 4000:4000 -p 13000:13000 --name beacon-node \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--datadir=/data \
--clear-db
@@ -149,7 +148,7 @@ docker run -it -v $HOME/prysm:/data -p 4000:4000 --name beacon-node \
3. To run the beacon node, issue the following command:
```text
docker run -it -v c:/prysm/:/data -p 4000:4000 gcr.io/prysmaticlabs/prysm/beacon-chain:latest --datadir=/data --init-sync-no-verify --clear-db
docker run -it -v c:/prysm/:/data -p 4000:4000 -p 13000:13000 --name beacon-node gcr.io/prysmaticlabs/prysm/beacon-chain:latest --datadir=/data --clear-db
```
### Running via Bazel
@@ -168,13 +167,13 @@ This will sync up the beacon node with the latest head block in the network.
## Staking ETH: Running a validator client
Once your beacon node is up, the chain will be waiting for you to deposit 3.2 Goerli ETH into a [validator deposit contract](how-prysm-works/validator-deposit-contract.md) in order to activate your validator \(discussed in the section below\). First though, you will need to create this validator and connect to this node to participate in consensus.
Once your beacon node is up, the chain will be waiting for you to deposit 3.2 Goerli ETH into a [validator deposit contract](https://docs.prylabs.network/docs/how-prysm-works/validator-deposit-contract) in order to activate your validator \(discussed in the section below\). First though, you will need to create this validator and connect to this node to participate in consensus.
Each validator represents 3.2 Goerli ETH being staked in the system, and it is possible to spin up as many as you desire in order to have more stake in the network.
### Activating your validator: depositing 3.2 Göerli ETH
To begin setting up a validator, follow the instructions found on [prylabs.net](https://prylabs.net) to use the Göerli ETH faucet and make a deposit. For step-by-step assistance with the deposit page, see the [Activating a Validator ](activating-a-validator.md)section of this documentation.
To begin setting up a validator, follow the instructions found on [prylabs.net](https://prylabs.net) to use the Göerli ETH faucet and make a deposit. For step-by-step assistance with the deposit page, see the [Activating a Validator ](https://docs.prylabs.network/docs/prysm-usage/activating-a-validator)section of this documentation.
It will take a while for the nodes in the network to process a deposit. Once the node is active, the validator will immediately begin performing its responsibilities.
@@ -198,7 +197,7 @@ Open up two terminal windows. In the first, issue the command:
```text
bazel run //beacon-chain -- \
--no-genesis-delay \
--custom-genesis-delay=0 \
--bootstrap-node= \
--deposit-contract $(curl https://prylabs.net/contract) \
--clear-db \
@@ -209,7 +208,7 @@ bazel run //beacon-chain -- \
Wait a moment for the beacon chain to start. In the other terminal, issue the command:
```text
bazel run //validator -- --interop-num-validators 64
bazel run //validator -- --keymanager=interop --keymanageropts='{"keys":64}'
```
This command will kickstart the system with your 64 validators performing their duties accordingly.
@@ -230,7 +229,7 @@ golangci-lint run
## Contributing
Want to get involved? Check out our [Contribution Guide](https://prysmaticlabs.gitbook.io/prysm/getting-involved/contribution-guidelines) to learn more!
Want to get involved? Check out our [Contribution Guide](https://docs.prylabs.network/docs/contribute/contribution-guidelines/) to learn more!
## License
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)

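Tying together the README note above about opening port 13000, a minimal sketch of the beacon node start command with the connectivity flag appended (reusing the image, ports, and data directory shown in the README) would look like:

```text
docker run -it -v $HOME/prysm:/data -p 4000:4000 -p 13000:13000 --name beacon-node \
  gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
  --datadir=/data \
  --p2p-host-ip=$(curl -s ident.me)
```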
WORKSPACE (411 changed lines)

@@ -3,6 +3,46 @@ workspace(name = "prysm")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
http_archive(
name = "bazel_toolchains",
sha256 = "b5a8039df7119d618402472f3adff8a1bd0ae9d5e253f53fcc4c47122e91a3d2",
strip_prefix = "bazel-toolchains-2.1.1",
urls = [
"https://github.com/bazelbuild/bazel-toolchains/releases/download/2.1.1/bazel-toolchains-2.1.1.tar.gz",
"https://mirror.bazel.build/github.com/bazelbuild/bazel-toolchains/archive/2.1.1.tar.gz",
],
)
http_archive(
name = "com_grail_bazel_toolchain",
sha256 = "0bec89e35d8a141c87f28cfc506d6d344785c8eb2ff3a453140a1fe972ada79d",
strip_prefix = "bazel-toolchain-77a87103145f86f03f90475d19c2c8854398a444",
urls = ["https://github.com/grailbio/bazel-toolchain/archive/77a87103145f86f03f90475d19c2c8854398a444.tar.gz"],
)
load("@com_grail_bazel_toolchain//toolchain:deps.bzl", "bazel_toolchain_dependencies")
bazel_toolchain_dependencies()
load("@com_grail_bazel_toolchain//toolchain:rules.bzl", "llvm_toolchain")
llvm_toolchain(
name = "llvm_toolchain",
llvm_version = "9.0.0",
)
load("@llvm_toolchain//:toolchains.bzl", "llvm_register_toolchains")
llvm_register_toolchains()
load("@prysm//tools/cross-toolchain:prysm_toolchains.bzl", "configure_prysm_toolchains")
configure_prysm_toolchains()
load("@prysm//tools/cross-toolchain:rbe_toolchains_config.bzl", "rbe_toolchains_config")
rbe_toolchains_config()
http_archive(
name = "bazel_skylib",
sha256 = "2ea8a5ed2b448baf4a6855d3ce049c4c452a6470b1efd1504fdb7c1c134d220a",
@@ -12,10 +52,10 @@ http_archive(
http_archive(
name = "bazel_gazelle",
sha256 = "86c6d481b3f7aedc1d60c1c211c6f76da282ae197c3b3160f54bd3a8f847896f",
sha256 = "d8c45ee70ec39a57e7a05e5027c32b1576cc7f16d9dd37135b0eddde45cf1b10",
urls = [
"https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/bazel-gazelle/releases/download/v0.19.1/bazel-gazelle-v0.19.1.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.19.1/bazel-gazelle-v0.19.1.tar.gz",
"https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/bazel-gazelle/releases/download/v0.20.0/bazel-gazelle-v0.20.0.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.20.0/bazel-gazelle-v0.20.0.tar.gz",
],
)
@@ -28,17 +68,17 @@ http_archive(
http_archive(
name = "io_bazel_rules_docker",
# sha256 = "9ff889216e28c918811b77999257d4ac001c26c1f7c7fb17a79bc28abf74182e",
strip_prefix = "rules_docker-0.12.1",
url = "https://github.com/bazelbuild/rules_docker/archive/v0.12.1.tar.gz",
sha256 = "dc97fccceacd4c6be14e800b2a00693d5e8d07f69ee187babfd04a80a9f8e250",
strip_prefix = "rules_docker-0.14.1",
url = "https://github.com/bazelbuild/rules_docker/archive/v0.14.1.tar.gz",
)
http_archive(
name = "io_bazel_rules_go",
sha256 = "e88471aea3a3a4f19ec1310a55ba94772d087e9ce46e41ae38ecebe17935de7b",
sha256 = "e6a6c016b0663e06fa5fccf1cd8152eab8aa8180c583ec20c872f4f9953a7ac5",
urls = [
"https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/rules_go/releases/download/v0.20.3/rules_go-v0.20.3.tar.gz",
"https://github.com/bazelbuild/rules_go/releases/download/v0.20.3/rules_go-v0.20.3.tar.gz",
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.22.1/rules_go-v0.22.1.tar.gz",
"https://github.com/bazelbuild/rules_go/releases/download/v0.22.1/rules_go-v0.22.1.tar.gz",
],
)
@@ -52,22 +92,21 @@ git_repository(
name = "graknlabs_bazel_distribution",
commit = "962f3a7e56942430c0ec120c24f9e9f2a9c2ce1a",
remote = "https://github.com/graknlabs/bazel-distribution",
shallow_since = "1563544980 +0300",
shallow_since = "1569509514 +0300",
)
# Override default import in rules_go with special patch until
# https://github.com/gogo/protobuf/pull/582 is merged.
git_repository(
name = "com_github_gogo_protobuf",
# v1.3.0 (latest) as of 2019-10-05
commit = "0ca988a254f991240804bf9821f3450d87ccbb1b",
commit = "5628607bb4c51c3157aacc3a50f0ab707582b805",
patch_args = ["-p1"],
patches = [
"@io_bazel_rules_go//third_party:com_github_gogo_protobuf-gazelle.patch",
"//third_party:com_github_gogo_protobuf-equal.patch",
],
remote = "https://github.com/gogo/protobuf",
shallow_since = "1567336231 +0200",
shallow_since = "1571033717 +0200",
# gazelle args: -go_prefix github.com/gogo/protobuf -proto legacy
)
@@ -78,6 +117,22 @@ load(
container_repositories()
load(
"@io_bazel_rules_docker//container:container.bzl",
"container_pull",
)
container_pull(
name = "alpine_cc_linux_amd64",
digest = "sha256:d5cee45549351be7a03a96c7b319b9c1808979b10888b79acca4435cc068005e",
registry = "index.docker.io",
repository = "frolvlad/alpine-glibc",
)
load("@prysm//third_party/herumi:herumi.bzl", "bls_dependencies")
bls_dependencies()
load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")
go_rules_dependencies()
@@ -97,15 +152,8 @@ load(
_go_image_repos = "repositories",
)
_go_image_repos()
# Golang images
# This is using gcr.io/distroless/base
load(
"@io_bazel_rules_docker//go:image.bzl",
_go_image_repos = "repositories",
)
_go_image_repos()
# CC images
@@ -125,16 +173,16 @@ proto_library(
srcs = ["src/proto/faucet.proto"],
visibility = ["//visibility:public"],
)""",
sha256 = "1184e44a7a9b8b172e68e82c02cc3b15a80122340e05a92bd1edeafe5e68debe",
strip_prefix = "prysm-testnet-site-ec6a4a4e421bf4445845969167d06e93ee8d7acc",
url = "https://github.com/prestonvanloon/prysm-testnet-site/archive/ec6a4a4e421bf4445845969167d06e93ee8d7acc.tar.gz",
sha256 = "29742136ff9faf47343073c4569a7cf21b8ed138f726929e09e3c38ab83544f7",
strip_prefix = "prysm-testnet-site-5c711600f0a77fc553b18cf37b880eaffef4afdb",
url = "https://github.com/prestonvanloon/prysm-testnet-site/archive/5c711600f0a77fc553b18cf37b880eaffef4afdb.tar.gz",
)
http_archive(
name = "io_kubernetes_build",
sha256 = "5ab110312cd7665a1940ba0523b67b9fbb6053beb9dd4e147643867bebd7e809",
strip_prefix = "repo-infra-db6ceb5f992254db76af7c25db2edc5469b5ea82",
url = "https://github.com/kubernetes/repo-infra/archive/db6ceb5f992254db76af7c25db2edc5469b5ea82.tar.gz",
sha256 = "b84fbd1173acee9d02a7d3698ad269fdf4f7aa081e9cecd40e012ad0ad8cfa2a",
strip_prefix = "repo-infra-6537f2101fb432b679f3d103ee729dd8ac5d30a0",
url = "https://github.com/kubernetes/repo-infra/archive/6537f2101fb432b679f3d103ee729dd8ac5d30a0.tar.gz",
)
http_archive(
@@ -192,13 +240,6 @@ http_archive(
url = "https://github.com/bazelbuild/buildtools/archive/bf564b4925ab5876a3f64d8b90fab7f769013d42.zip",
)
http_archive(
name = "com_github_herumi_bls_eth_go_binary",
sha256 = "15a41ddb0bf7d142ebffae68337f19c16e747676cb56794c5d80dbe388ce004c",
strip_prefix = "bls-go-binary-ac038c7cb6d3185c4a46f3bca0c99ebf7b191e16",
url = "https://github.com/nisdas/bls-go-binary/archive/ac038c7cb6d3185c4a46f3bca0c99ebf7b191e16.zip",
)
load("@com_github_bazelbuild_buildtools//buildifier:deps.bzl", "buildifier_dependencies")
buildifier_dependencies()
@@ -211,7 +252,7 @@ go_repository(
git_repository(
name = "com_google_protobuf",
commit = "d09d649aea36f02c03f8396ba39a8d4db8a607e4",
commit = "4cf5bfee9546101d98754d23ff378ff718ba8438",
remote = "https://github.com/protocolbuffers/protobuf",
shallow_since = "1558721209 -0700",
)
@@ -225,8 +266,9 @@ all_content = """filegroup(name = "all", srcs = glob(["**"]), visibility = ["//v
http_archive(
name = "rules_foreign_cc",
strip_prefix = "rules_foreign_cc-master",
url = "https://github.com/bazelbuild/rules_foreign_cc/archive/master.zip",
sha256 = "450563dc2938f38566a59596bb30a3e905fbbcc35b3fff5a1791b122bc140465",
strip_prefix = "rules_foreign_cc-456425521973736ef346d93d3d6ba07d807047df",
url = "https://github.com/bazelbuild/rules_foreign_cc/archive/456425521973736ef346d93d3d6ba07d807047df.zip",
)
load("@rules_foreign_cc//:workspace_definitions.bzl", "rules_foreign_cc_dependencies")
@@ -238,6 +280,7 @@ rules_foreign_cc_dependencies([
http_archive(
name = "librdkafka",
build_file_content = all_content,
sha256 = "f6be27772babfdacbbf2e4c5432ea46c57ef5b7d82e52a81b885e7b804781fd6",
strip_prefix = "librdkafka-1.2.1",
urls = ["https://github.com/edenhill/librdkafka/archive/v1.2.1.tar.gz"],
)
@@ -246,7 +289,7 @@ http_archive(
go_repository(
name = "com_github_ethereum_go_ethereum",
commit = "40beaeef26d5a2a0918dec2b960c2556c71a90a0",
commit = "861ae1b1875c17d86a6a5d68118708ab2b099658",
importpath = "github.com/ethereum/go-ethereum",
# Note: go-ethereum is not bazel-friendly with regards to cgo. We have
# a fork that has resolved these issues by disabling HID/USB support and
@@ -261,12 +304,10 @@ go_repository(
name = "com_github_prysmaticlabs_go_ssz",
commit = "e24db4d9e9637cf88ee9e4a779e339a1686a84ee",
importpath = "github.com/prysmaticlabs/go-ssz",
)
go_repository(
name = "com_github_urfave_cli",
commit = "e6cf83ec39f6e1158ced1927d4ed14578fda8edb", # v1.21.0
importpath = "github.com/urfave/cli",
patch_args = ["-p1"],
patches = [
"//third_party:com_github_prysmaticlabs_go_ssz.patch",
],
)
go_repository(
@@ -295,7 +336,7 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p",
commit = "c1687281a5c19b61ee5e0dc07fad15697c3bde94", # v0.4.0
commit = "76944c4fc848530530f6be36fb22b70431ca506c", # v0.5.1
importpath = "github.com/libp2p/go-libp2p",
)
@@ -314,7 +355,7 @@ go_repository(
go_repository(
name = "com_github_multiformats_go_multiaddr",
commit = "f96df18bf0c217c77f6cc0f9e810a178cea12f38", # v0.1.1
commit = "8c6cee15b340d7210c30a82a19231ee333b69b1d", # v0.2.0
importpath = "github.com/multiformats/go-multiaddr",
)
@@ -326,7 +367,7 @@ go_repository(
go_repository(
name = "com_github_multiformats_go_multihash",
commit = "249ead2008065c476a2ee45e8e75e8b85d846a72", # v0.0.8
commit = "6b39927dce4869bc1726861b65ada415ee1f7fc7", # v0.0.13
importpath = "github.com/multiformats/go-multihash",
)
@@ -344,13 +385,13 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p_peerstore",
commit = "f4c9af195c69379f1cf284dba31985482a56f78e", # v0.1.3
commit = "dee88d7532302c001604811fa3fbb5a7f83225e7", # v0.1.4
importpath = "github.com/libp2p/go-libp2p-peerstore",
)
go_repository(
name = "com_github_libp2p_go_libp2p_circuit",
commit = "0305622f3f146485f0ff6df0ae6c010787331ca7", # v0.1.3
commit = "61af9db0dd78e01e53b9fb044be44dcc7255667e", # v0.1.4
importpath = "github.com/libp2p/go-libp2p-circuit",
)
@@ -429,7 +470,7 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p_secio",
build_file_proto_mode = "disable_global",
commit = "7c3f577d99debb69c3b68be35fe14d9445a6569c", # v0.2.0
commit = "6f83420d5715a8b1c4082aaf9c5c7785923e702e", # v0.2.1
importpath = "github.com/libp2p/go-libp2p-secio",
)
@@ -447,7 +488,7 @@ go_repository(
go_repository(
name = "com_github_jbenet_goprocess",
commit = "1dc239722b2ba3784472fb5301f62640fa5a8bc3", # v0.1.3
commit = "7f9d9ed286badffcf2122cfeb383ec37daf92508",
importpath = "github.com/jbenet/goprocess",
)
@@ -465,7 +506,7 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p_nat",
commit = "c50c291a61bceccb914366d93eb24f58594e9134", # v0.0.4
commit = "873ef75f6ab6273821d77197660c1fb3af4cc02e", # v0.0.5
importpath = "github.com/libp2p/go-libp2p-nat",
)
@@ -483,7 +524,7 @@ go_repository(
go_repository(
name = "com_github_mattn_go_isatty",
commit = "e1f7b56ace729e4a73a29a6b4fac6cd5fcda7ab3", # v0.0.9
commit = "7b513a986450394f7bbf1476909911b3aa3a55ce",
importpath = "github.com/mattn/go-isatty",
)
@@ -567,7 +608,7 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_flow_metrics",
commit = "1f5b3acc846b2c8ce4c4e713296af74f5c24df55", # v0.0.1
commit = "e5a6a4db89199d99b2a74b8da198277a826241d8", # v0.0.3
importpath = "github.com/libp2p/go-flow-metrics",
)
@@ -591,14 +632,15 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_ws_transport",
commit = "8cca0dbc7f3533b122bd2cbeaa4a9b07c2913b9d", # v0.1.2
commit = "370d1a3a7420e27423417c37630cad3754ad5702", # v0.2.0
importpath = "github.com/libp2p/go-ws-transport",
)
go_repository(
name = "org_golang_x_crypto",
commit = "4def268fd1a49955bfb3dda92fe3db4f924f2285",
importpath = "golang.org/x/crypto",
sum = "h1:1ZiEyfaQIg3Qh0EoqpwAakHVhecoE5wlSg5GjnafJGw=",
version = "v0.0.0-20200221231518-2aa609cf4a9d",
)
go_repository(
@@ -631,6 +673,12 @@ go_repository(
importpath = "github.com/syndtr/goleveldb",
)
go_repository(
name = "com_github_emicklei_dot",
commit = "5810de2f2ab7aac98cd7bcbd59147a7ca6071768",
importpath = "github.com/emicklei/dot",
)
go_repository(
name = "com_github_libp2p_go_libp2p_blankhost",
commit = "da3b45205dfce3ef3926054ffd5dee76f5903382", # v0.1.4
@@ -690,19 +738,19 @@ go_repository(
go_repository(
name = "com_github_prometheus_client_model",
commit = "fd36f4220a901265f90734c3183c5f0c91daa0b8",
commit = "7bc5445566f0fe75b15de23e6b93886e982d7bf9",
importpath = "github.com/prometheus/client_model",
)
go_repository(
name = "com_github_prometheus_common",
commit = "287d3e634a1e550c9e463dd7e5a75a422c614505", # v0.7.0
commit = "d978bcb1309602d68bb4ba69cf3f8ed900e07308",
importpath = "github.com/prometheus/common",
)
go_repository(
name = "com_github_prometheus_procfs",
commit = "499c85531f756d1129edd26485a5f73871eeb308", # v0.0.5
commit = "6d489fc7f1d9cd890a250f3ea3431b1744b9623f",
importpath = "github.com/prometheus/procfs",
)
@@ -718,12 +766,6 @@ go_repository(
importpath = "github.com/matttproud/golang_protobuf_extensions",
)
go_repository(
name = "com_github_boltdb_bolt",
commit = "2f1ce7a837dcb8da3ec595b1dac9d0632f0f99e8", # v1.3.1
importpath = "github.com/boltdb/bolt",
)
go_repository(
name = "com_github_pborman_uuid",
commit = "8b1b92947f46224e3b97bb1a3a5b0382be00d31e", # v1.2.0
@@ -771,7 +813,7 @@ go_repository(
go_repository(
name = "com_github_ipfs_go_datastore",
commit = "d0ca9bc39f9d5b77bd602abe1a897473e105be7f", # v0.1.1
commit = "e7a498916ccca1b0b40fb08630659cd4d68a01e8", # v0.3.1
importpath = "github.com/ipfs/go-datastore",
)
@@ -783,14 +825,14 @@ go_repository(
go_repository(
name = "com_github_ipfs_go_cid",
commit = "9bb7ea69202c6c9553479eb355ab8a8a97d43a2e", # v0.0.3
commit = "3da5bbbe45260437a44f777e6b2e5effa2606901", # v0.0.4
importpath = "github.com/ipfs/go-cid",
)
go_repository(
name = "com_github_libp2p_go_libp2p_record",
build_file_proto_mode = "disable_global",
commit = "3f535b1abcdf698e11ac16f618c2e64c4e5a114a", # v0.1.1
commit = "8ccbca30634f70a8f03d133ac64cbf245d079e1e", # v0.1.2
importpath = "github.com/libp2p/go-libp2p-record",
)
@@ -802,7 +844,7 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p_kbucket",
commit = "8b77351e0f784a5f71749d23000897c8aee71a76", # v0.2.1
commit = "a0cac6f63c491504b18eeba24be2ac0bbbfa0e5c", # v0.2.3
importpath = "github.com/libp2p/go-libp2p-kbucket",
)
@@ -826,7 +868,7 @@ go_repository(
go_repository(
name = "com_github_hashicorp_golang_lru",
commit = "7f827b33c0f158ec5dfbba01bb0b14a4541fd81d", # v0.5.3
commit = "14eae340515388ca95aa8e7b86f0de668e981f54", # v0.5.4
importpath = "github.com/hashicorp/golang-lru",
)
@@ -845,7 +887,7 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p_autonat",
commit = "3464f9b4f7bfbd7bb008813eacb626c7ab7fb9a3", # v0.1.0
commit = "60bf479cf6bc73c939f4db97ad711756e949e522", # v0.1.1
importpath = "github.com/libp2p/go-libp2p-autonat",
)
@@ -868,10 +910,17 @@ go_repository(
importpath = "k8s.io/client-go",
)
go_repository(
name = "io_etcd_go_bbolt",
importpath = "go.etcd.io/bbolt",
sum = "h1:hi1bXHMVrlQh6WwxAy+qZCV/SYIlqo+Ushwdpa4tAKg=",
version = "v1.3.4",
)
go_repository(
name = "io_k8s_apimachinery",
build_file_proto_mode = "disable_global",
commit = "bfcf53abc9f82bad3e534fcb1c36599d3c989ebf",
commit = "79c2a76c473a20cdc4ce59cae4b72529b5d9d16b", # v0.17.2
importpath = "k8s.io/apimachinery",
)
@@ -953,7 +1002,7 @@ go_repository(
go_repository(
name = "com_google_cloud_go",
commit = "264def2dd949cdb8a803bb9f50fa29a67b798a6a", # v0.46.3
commit = "6daa679260d92196ffca2362d652c924fdcb7a22", # v0.52.0
importpath = "cloud.google.com/go",
)
@@ -995,7 +1044,7 @@ go_repository(
go_repository(
name = "com_github_pkg_errors",
commit = "ba968bfe8b2f7e042a574c888954fccecfa385b4", # v0.8.1
commit = "614d223910a179a466c1767a985424175c39b465", # v0.9.1
importpath = "github.com/pkg/errors",
)
@@ -1037,7 +1086,7 @@ go_repository(
go_repository(
name = "com_github_apache_thrift",
commit = "384647d290e2e4a55a14b1b7ef1b7e66293a2c33", # v0.12.0
commit = "cecee50308fc7e6f77f55b3fd906c1c6c471fa2f", # v0.13.0
importpath = "github.com/apache/thrift",
)
@@ -1049,7 +1098,7 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p_connmgr",
commit = "b46e9bdbcd8436b4fe4b30a53ec913c07e5e09c9", # v0.1.1
commit = "273839464339f1885413b385feee35301c5cb76f", # v0.2.1
importpath = "github.com/libp2p/go-libp2p-connmgr",
)
@@ -1080,13 +1129,13 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_libp2p_core",
build_file_proto_mode = "disable_global",
commit = "26b960839df84e2783f8f6125fa822a9978c2b8f", # v0.2.3
commit = "f7f724862d85ec9f9ee7c58b0f79836abdee8cd9", # v0.3.0
importpath = "github.com/libp2p/go-libp2p-core",
)
go_repository(
name = "com_github_libp2p_go_libp2p_testing",
commit = "1fa303da162dc57872d8fc553497f7602aa11c10", # v0.1.0
commit = "82713a62880a5fe72d438bd58d737f0d3c4b7f36", # v0.1.1
importpath = "github.com/libp2p/go-libp2p-testing",
)
@@ -1114,6 +1163,12 @@ go_repository(
importpath = "github.com/multiformats/go-multiaddr-fmt",
)
go_repository(
name = "com_github_multiformats_go_varint",
commit = "0aa688902217dff2cba0f678c7e4a0f547b4983e",
importpath = "github.com/multiformats/go-varint",
)
go_repository(
name = "com_github_libp2p_go_yamux",
commit = "663972181d409e7263040f0b668462f87c85e1bd", # v1.2.3
@@ -1122,7 +1177,7 @@ go_repository(
go_repository(
name = "com_github_libp2p_go_nat",
commit = "d13fdefb3bbb2fde2c6fc090a7ea992cec8b26df", # v0.0.3
commit = "4b355d438085545df006ad9349686f30d8d37a27", # v0.0.4
importpath = "github.com/libp2p/go-nat",
)
@@ -1146,7 +1201,7 @@ go_repository(
go_repository(
name = "com_github_prysmaticlabs_go_bitfield",
commit = "dbb55b15e92f897ee230360c8d9695e2f224b117",
commit = "62c2aee7166951c456888f92237aee4303ba1b9d",
importpath = "github.com/prysmaticlabs/go-bitfield",
)
@@ -1157,8 +1212,9 @@ go_ssz_dependencies()
go_repository(
name = "org_golang_google_grpc",
build_file_proto_mode = "disable",
commit = "1d89a3c832915b2314551c1d2a506874d62e53f7", # v1.22.0
importpath = "google.golang.org/grpc",
sum = "h1:zvIju4sqAGvwKspUQOhwnpcqSbzi7/H6QomNNjTL4sk=",
version = "v1.27.1",
)
go_repository(
@@ -1187,7 +1243,7 @@ go_repository(
go_repository(
name = "com_github_googleapis_gnostic",
commit = "ab0dd09aa10e2952b28e12ecd35681b20463ebab", # v0.3.1
commit = "896953e6749863beec38e27029c804e88c3144b8", # v0.4.1
importpath = "github.com/googleapis/gnostic",
)
@@ -1211,7 +1267,7 @@ go_repository(
go_repository(
name = "com_github_google_go_cmp",
commit = "2d0692c2e9617365a95b295612ac0d4415ba4627", # v0.3.1
commit = "5a6f75716e1203a923a78c9efb94089d857df0f6", # v0.4.0
importpath = "github.com/google/go-cmp",
)
@@ -1241,12 +1297,6 @@ go_repository(
importpath = "k8s.io/utils",
)
go_repository(
name = "com_github_googleapis_gnostic",
commit = "25d8b0b6698593f520d9d8dc5a88e6b16ca9ecc0",
importpath = "github.com/googleapis/gnostic",
)
go_repository(
name = "com_github_patrickmn_go_cache",
commit = "46f407853014144407b6c2ec7ccc76bf67958d93",
@@ -1255,7 +1305,7 @@ go_repository(
go_repository(
name = "com_github_prysmaticlabs_ethereumapis",
commit = "87118fb893cc6f32b25793d819790fd3bcce3221",
commit = "62fd1d2ec119bc93b0473fde17426c63a85197ed",
importpath = "github.com/prysmaticlabs/ethereumapis",
patch_args = ["-p1"],
patches = [
@@ -1265,8 +1315,9 @@ go_repository(
go_repository(
name = "com_github_cloudflare_roughtime",
commit = "d41fdcee702eb3e5c3296288a453b9340184d37e",
importpath = "github.com/cloudflare/roughtime",
sum = "h1:jeSxE3fepJdhASERvBHI6RFkMhISv6Ir2JUybYLIVXs=",
version = "v0.0.0-20200205191924-a69ef1dab727",
)
go_repository(
@@ -1289,13 +1340,6 @@ go_repository(
version = "v0.0.4",
)
go_repository(
name = "com_github_mdlayher_prombolt",
importpath = "github.com/mdlayher/prombolt",
sum = "h1:N257g6TTx0LxYoskSDFxvkSJ3NOZpy9IF1xQ7Gu+K8I=",
version = "v0.0.0-20161005185022-dfcf01d20ee9",
)
go_repository(
name = "com_github_minio_highwayhash",
importpath = "github.com/minio/highwayhash",
@@ -1333,13 +1377,6 @@ go_repository(
version = "v0.10.5",
)
go_repository(
name = "in_gopkg_urfave_cli_v1",
importpath = "gopkg.in/urfave/cli.v1",
sum = "h1:NdAVW6RYxDif9DhDHaAortIu956m2c0v+09AZBPTbE0=",
version = "v1.20.0",
)
go_repository(
name = "com_github_naoina_go_stringutil",
importpath = "github.com/naoina/go-stringutil",
@@ -1417,9 +1454,9 @@ go_repository(
)
go_repository(
name = "com_github_emicklei_dot",
commit = "f4a04130244d60cef56086d2f649b4b55e9624aa",
importpath = "github.com/emicklei/dot",
name = "com_github_googleapis_gnostic",
commit = "25d8b0b6698593f520d9d8dc5a88e6b16ca9ecc0",
importpath = "github.com/googleapis/gnostic",
)
go_repository(
@@ -1445,7 +1482,7 @@ go_repository(
go_repository(
name = "com_github_dgraph_io_ristretto",
commit = "99d1bbbf28e64530eb246be0568fc7709a35ebdd",
commit = "99d1bbbf28e64530eb246be0568fc7709a35ebdd", # v0.0.1
importpath = "github.com/dgraph-io/ristretto",
)
@@ -1463,13 +1500,159 @@ go_repository(
)
go_repository(
name = "com_github_dgraph_io_ristretto",
commit = "99d1bbbf28e64530eb246be0568fc7709a35ebdd",
importpath = "github.com/dgraph-io/ristretto",
name = "com_github_kevinms_leakybucket_go",
importpath = "github.com/kevinms/leakybucket-go",
sum = "h1:oq6BiN7v0MfWCRcJAxSV+hesVMAAV8COrQbTjYNnso4=",
version = "v0.0.0-20190611015032-8a3d0352aa79",
)
go_repository(
name = "com_github_cespare_xxhash",
commit = "d7df74196a9e781ede915320c11c378c1b2f3a1f",
importpath = "github.com/cespare/xxhash",
name = "com_github_wealdtech_go_eth2_wallet",
commit = "6970d62e60d86fdae3c3e510e800e8a60d755a7d",
importpath = "github.com/wealdtech/go-eth2-wallet",
)
go_repository(
name = "com_github_wealdtech_go_eth2_wallet_hd",
commit = "ce0a252a01c621687e9786a64899cfbfe802ba73",
importpath = "github.com/wealdtech/go-eth2-wallet-hd",
)
go_repository(
name = "com_github_wealdtech_go_eth2_wallet_nd",
commit = "12c8c41cdbd16797ff292e27f58e126bb89e9706",
importpath = "github.com/wealdtech/go-eth2-wallet-nd",
)
go_repository(
name = "com_github_wealdtech_go_eth2_wallet_store_filesystem",
commit = "1eea6a48d75380047d2ebe7c8c4bd8985bcfdeca",
importpath = "github.com/wealdtech/go-eth2-wallet-store-filesystem",
)
go_repository(
name = "com_github_wealdtech_go_eth2_wallet_store_s3",
commit = "1c821b5161f7bb0b3efa2030eff687eea5e70e53",
importpath = "github.com/wealdtech/go-eth2-wallet-store-s3",
)
go_repository(
name = "com_github_wealdtech_go_eth2_wallet_encryptor_keystorev4",
commit = "0c11c07b9544eb662210fadded94f40f309d8c8f",
importpath = "github.com/wealdtech/go-eth2-wallet-encryptor-keystorev4",
)
go_repository(
name = "com_github_wealdtech_go_eth2_wallet_types",
commit = "af67d8101be61e7c4dd8126d2b3eba20cff5dab2",
importpath = "github.com/wealdtech/go-eth2-wallet-types",
)
go_repository(
name = "com_github_wealdtech_go_eth2_types",
commit = "f9c31ddf180537dd5712d5998a3d56c45864d71f",
importpath = "github.com/wealdtech/go-eth2-types",
)
go_repository(
name = "com_github_wealdtech_go_eth2_util",
commit = "326ebb1755651131bb8f4506ea9a23be6d9ad1dd",
importpath = "github.com/wealdtech/go-eth2-util",
)
go_repository(
name = "com_github_wealdtech_go_ecodec",
commit = "7473d835445a3490e61a5fcf48fe4e9755a37957",
importpath = "github.com/wealdtech/go-ecodec",
)
go_repository(
name = "com_github_wealdtech_go_bytesutil",
commit = "e564d0ade555b9f97494f0f669196ddcc6bc531d",
importpath = "github.com/wealdtech/go-bytesutil",
)
go_repository(
name = "com_github_wealdtech_go_indexer",
commit = "334862c32b1e3a5c6738a2618f5c0a8ebeb8cd51",
importpath = "github.com/wealdtech/go-indexer",
)
go_repository(
name = "com_github_shibukawa_configdir",
commit = "e180dbdc8da04c4fa04272e875ce64949f38bd3e",
importpath = "github.com/shibukawa/configdir",
)
go_repository(
name = "com_github_libp2p_go_libp2p_noise",
importpath = "github.com/libp2p/go-libp2p-noise",
sum = "h1:J1gHJRNFEk7NdiaPQQqAvxEy+7hhCsVv3uzduWybmqY=",
version = "v0.0.0-20200302201340-8c54356e12c9",
)
go_repository(
name = "com_github_ferranbt_fastssz",
commit = "06015a5d84f9e4eefe2c21377ca678fa8f1a1b09",
importpath = "github.com/ferranbt/fastssz",
)
go_repository(
name = "com_github_burntsushi_toml",
importpath = "github.com/BurntSushi/toml",
sum = "h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=",
version = "v0.3.1",
)
go_repository(
name = "com_github_cpuguy83_go_md2man_v2",
importpath = "github.com/cpuguy83/go-md2man/v2",
sum = "h1:EoUDS0afbrsXAZ9YQ9jdu/mZ2sXgT1/2yyNng4PGlyM=",
version = "v2.0.0",
)
go_repository(
name = "com_github_russross_blackfriday_v2",
importpath = "github.com/russross/blackfriday/v2",
sum = "h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q=",
version = "v2.0.1",
)
go_repository(
name = "com_github_shurcool_sanitized_anchor_name",
importpath = "github.com/shurcooL/sanitized_anchor_name",
sum = "h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=",
version = "v1.0.0",
)
go_repository(
name = "in_gopkg_urfave_cli_v2",
importpath = "gopkg.in/urfave/cli.v2",
sum = "h1:OvXt/p4cdwNl+mwcWMq/AxaKFkhdxcjx+tx+qf4EOvY=",
version = "v2.0.0-20190806201727-b62605953717",
)
go_repository(
name = "in_gopkg_urfave_cli_v1",
importpath = "gopkg.in/urfave/cli.v1",
sum = "h1:NdAVW6RYxDif9DhDHaAortIu956m2c0v+09AZBPTbE0=",
version = "v1.20.0",
)
go_repository(
name = "com_github_prysmaticlabs_prombbolt",
importpath = "github.com/prysmaticlabs/prombbolt",
sum = "h1:bVD46NhbqEE6bsIqj42TCS3ELUdumti3WfAw9DXNtkg=",
version = "v0.0.0-20200324184628-09789ef63796",
)
load("@com_github_prysmaticlabs_prombbolt//:repositories.bzl", "prombbolt_dependencies")
prombbolt_dependencies()
go_repository(
name = "com_github_ianlancetaylor_cgosymbolizer",
importpath = "github.com/ianlancetaylor/cgosymbolizer",
sum = "h1:GWsU1WjSE2rtvyTYGcndqmPPkQkBNV7pEuZdnGtwtu4=",
version = "v0.0.0-20200321040036-d43e30eacb43",
)

11
bazel.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/bash
# This script serves as a wrapper around bazel to limit the scope of environment variables that
# may change the action output. Using this script should result in a higher cache hit ratio for
# cached actions with a more hermetic build.
env -i \
PATH=/usr/bin:/bin \
HOME=$HOME \
GOOGLE_APPLICATION_CREDENTIALS=$GOOGLE_APPLICATION_CREDENTIALS \
bazel "$@"


@@ -1,7 +1,7 @@
load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library", "go_test")
load("@io_bazel_rules_docker//go:image.bzl", "go_image")
load("@io_bazel_rules_docker//container:container.bzl", "container_bundle")
load("//tools:binary_targets.bzl", "binary_targets")
load("//tools:go_image.bzl", "go_image_alpine", "go_image_debug")
load("@io_bazel_rules_docker//contrib:push-all.bzl", "docker_push")
go_library(
@@ -23,9 +23,10 @@ go_library(
"@com_github_ipfs_go_log//:go_default_library",
"@com_github_joonix_log//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@com_github_whyrusleeping_go_logging//:go_default_library",
"@com_github_x_cray_logrus_prefixed_formatter//:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
"@in_gopkg_urfave_cli_v2//altsrc:go_default_library",
"@org_uber_go_automaxprocs//:go_default_library",
],
)
@@ -36,7 +37,11 @@ go_image(
"main.go",
"usage.go",
],
base = "//tools:cc_image",
base = select({
"//tools:base_image_alpine": "//tools:alpine_cc_image",
"//tools:base_image_cc": "//tools:cc_image",
"//conditions:default": "//tools:cc_image",
}),
goarch = "amd64",
goos = "linux",
importpath = "github.com/prysmaticlabs/prysm/beacon-chain",
@@ -55,9 +60,10 @@ go_image(
"@com_github_ipfs_go_log//:go_default_library",
"@com_github_joonix_log//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@com_github_whyrusleeping_go_logging//:go_default_library",
"@com_github_x_cray_logrus_prefixed_formatter//:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
"@in_gopkg_urfave_cli_v2//altsrc:go_default_library",
"@org_uber_go_automaxprocs//:go_default_library",
],
)
@@ -71,12 +77,54 @@ container_bundle(
tags = ["manual"],
)
go_image_debug(
name = "image_debug",
image = ":image",
tags = ["manual"],
)
container_bundle(
name = "image_bundle_debug",
images = {
"gcr.io/prysmaticlabs/prysm/beacon-chain:latest-debug": ":image_debug",
"gcr.io/prysmaticlabs/prysm/beacon-chain:{DOCKER_TAG}-debug": ":image_debug",
},
tags = ["manual"],
)
go_image_alpine(
name = "image_alpine",
image = ":image",
tags = ["manual"],
)
container_bundle(
name = "image_bundle_alpine",
images = {
"gcr.io/prysmaticlabs/prysm/beacon-chain:latest-alpine": ":image_alpine",
"gcr.io/prysmaticlabs/prysm/beacon-chain:{DOCKER_TAG}-alpine": ":image_alpine",
},
tags = ["manual"],
)
docker_push(
name = "push_images",
bundle = ":image_bundle",
tags = ["manual"],
)
docker_push(
name = "push_images_debug",
bundle = ":image_bundle_debug",
tags = ["manual"],
)
docker_push(
name = "push_images_alpine",
bundle = ":image_bundle_alpine",
tags = ["manual"],
)
go_binary(
name = "beacon-chain",
embed = [":go_default_library"],
@@ -91,17 +139,8 @@ go_test(
size = "small",
srcs = ["usage_test.go"],
embed = [":go_default_library"],
deps = ["@com_github_urfave_cli//:go_default_library"],
deps = [
"//shared/featureconfig:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
],
)
[go_binary(
name = "beacon-chain-{}-{}".format(
pair[0],
pair[1],
),
embed = [":go_default_library"],
goarch = pair[1],
goos = pair[0],
tags = ["manual"],
visibility = ["//visibility:public"],
) for pair in binary_targets]


@@ -5,6 +5,6 @@ This is the main project folder for the beacon chain implementation of Ethereum
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/KSA7rPr)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/prysmaticlabs/prysm?badge&utm_medium=badge&utm_campaign=pr-badge)
Also, read the latest beacon chain [design spec](https://github.com/ethereum/eth2.0-specs/blob/dev/specs/core/0_beacon-chain.md), this design spec serves as a source of truth for the beacon chain implementation we follow at prysmatic labs.
Also, read the latest beacon chain [design spec](https://github.com/ethereum/eth2.0-specs/blob/dev/specs/phase0/beacon-chain.md), this design spec serves as a source of truth for the beacon chain implementation we follow at prysmatic labs.
Check out the [FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view). Refer this page on [why](http://email.mg2.substack.com/c/eJwlj9GOhCAMRb9G3jRQQPGBh5mM8xsbhKrsDGIAM9m_X9xN2qZtbpt7rCm4xvSjj5gLOTOmL-809CMbKXFaOKakIl4DZYr2AGyQIGjHOnWH22OiYnoIxmDijaBhhS6fcy7GvjobA9m0mSXOcnZq5GBqLkilXBZhBsus5ZK89VbKkRt-a-BZI6DzZ7iur1lQ953KJ9bemnxgahuQU9XJu6pFPdu8meT8vragzEjpMCwMGLlgLo6h5z1JumQTu4IJd4v15xqMf_8ZLP_Y1bSLdbnrD-LL71i2Kj7DLxaWWF4)
we are combining sharding and casper together.


@@ -12,6 +12,7 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/params:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -32,6 +33,7 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",


@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
@@ -24,7 +25,7 @@ var log = logrus.WithField("prefix", "archiver")
type Service struct {
ctx context.Context
cancel context.CancelFunc
beaconDB db.Database
beaconDB db.NoHeadAccessDatabase
headFetcher blockchain.HeadFetcher
participationFetcher blockchain.ParticipationFetcher
stateNotifier statefeed.Notifier
@@ -33,7 +34,7 @@ type Service struct {
// Config options for the archiver service.
type Config struct {
BeaconDB db.Database
BeaconDB db.NoHeadAccessDatabase
HeadFetcher blockchain.HeadFetcher
ParticipationFetcher blockchain.ParticipationFetcher
StateNotifier statefeed.Notifier
@@ -70,7 +71,7 @@ func (s *Service) Status() error {
}
// We archive committee information pertaining to the head state's epoch.
func (s *Service) archiveCommitteeInfo(ctx context.Context, headState *pb.BeaconState, epoch uint64) error {
func (s *Service) archiveCommitteeInfo(ctx context.Context, headState *state.BeaconState, epoch uint64) error {
proposerSeed, err := helpers.Seed(headState, epoch, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
return errors.Wrap(err, "could not generate seed")
@@ -91,15 +92,16 @@ func (s *Service) archiveCommitteeInfo(ctx context.Context, headState *pb.Beacon
}
// We archive active validator set changes that happened during the previous epoch.
func (s *Service) archiveActiveSetChanges(ctx context.Context, headState *pb.BeaconState, epoch uint64) error {
func (s *Service) archiveActiveSetChanges(ctx context.Context, headState *state.BeaconState, epoch uint64) error {
prevEpoch := epoch - 1
activations := validators.ActivatedValidatorIndices(prevEpoch, headState.Validators)
slashings := validators.SlashedValidatorIndices(prevEpoch, headState.Validators)
vals := headState.Validators()
activations := validators.ActivatedValidatorIndices(prevEpoch, vals)
slashings := validators.SlashedValidatorIndices(prevEpoch, vals)
activeValidatorCount, err := helpers.ActiveValidatorCount(headState, prevEpoch)
if err != nil {
return errors.Wrap(err, "could not get active validator count")
}
exited, err := validators.ExitedValidatorIndices(prevEpoch, headState.Validators, activeValidatorCount)
exited, err := validators.ExitedValidatorIndices(prevEpoch, vals, activeValidatorCount)
if err != nil {
return errors.Wrap(err, "could not determine exited validator indices")
}
@@ -130,8 +132,7 @@ func (s *Service) archiveParticipation(ctx context.Context, epoch uint64) error
}
// We archive validator balances and active indices.
func (s *Service) archiveBalances(ctx context.Context, headState *pb.BeaconState, epoch uint64) error {
balances := headState.Balances
func (s *Service) archiveBalances(ctx context.Context, balances []uint64, epoch uint64) error {
if err := s.beaconDB.SaveArchivedBalances(ctx, epoch, balances); err != nil {
return errors.Wrap(err, "could not archive balances")
}
@@ -153,12 +154,13 @@ func (s *Service) run(ctx context.Context) {
log.WithError(err).Error("Head state is not available")
continue
}
currentEpoch := helpers.CurrentEpoch(headState)
if !helpers.IsEpochEnd(headState.Slot) && currentEpoch <= s.lastArchivedEpoch {
slot := headState.Slot()
currentEpoch := helpers.SlotToEpoch(slot)
if !helpers.IsEpochEnd(slot) && currentEpoch <= s.lastArchivedEpoch {
continue
}
epochToArchive := currentEpoch
if !helpers.IsEpochEnd(headState.Slot) {
if !helpers.IsEpochEnd(slot) {
epochToArchive--
}
if err := s.archiveCommitteeInfo(ctx, headState, epochToArchive); err != nil {
@@ -173,7 +175,7 @@ func (s *Service) run(ctx context.Context) {
log.WithError(err).Error("Could not archive validator participation")
continue
}
if err := s.archiveBalances(ctx, headState, epochToArchive); err != nil {
if err := s.archiveBalances(ctx, headState.Balances(), epochToArchive); err != nil {
log.WithError(err).Error("Could not archive validator balances and active indices")
continue
}
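To make the epoch selection in run() concrete: at an epoch boundary the current epoch is archived, otherwise the previous (already completed) epoch is. Below is a standalone sketch of that arithmetic, assuming the mainnet value of 32 slots per epoch and stand-in versions of the helpers package functions; it is an illustration, not the helpers package itself.
package archiversketch
// slotsPerEpoch assumes the mainnet value; Prysm reads it from params.BeaconConfig().
const slotsPerEpoch = 32
// slotToEpoch and isEpochEnd are stand-ins for helpers.SlotToEpoch and helpers.IsEpochEnd.
func slotToEpoch(slot uint64) uint64 { return slot / slotsPerEpoch }
// isEpochEnd reports whether slot is the last slot of its epoch.
func isEpochEnd(slot uint64) bool { return (slot+1)%slotsPerEpoch == 0 }
// epochToArchive mirrors the selection above, with an extra guard against
// underflow at epoch 0 added for this standalone sketch.
func epochToArchive(slot uint64) uint64 {
	epoch := slotToEpoch(slot)
	if !isEpochEnd(slot) && epoch > 0 {
		epoch--
	}
	return epoch
}
// Example: slot 63 is the last slot of epoch 1, so epoch 1 is archived;
// slot 64 is mid-epoch 2, so the completed epoch 1 is archived instead.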


@@ -17,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
dbutil "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
@@ -27,15 +28,20 @@ import (
func init() {
logrus.SetLevel(logrus.DebugLevel)
logrus.SetOutput(ioutil.Discard)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
}
func TestArchiverService_ReceivesBlockProcessedEvent(t *testing.T) {
hook := logTest.NewGlobal()
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 1,
})
if err != nil {
t.Fatal(err)
}
svc.headFetcher = &mock.ChainService{
State: &pb.BeaconState{Slot: 1},
State: st,
}
event := &feed.Event{
@@ -55,8 +61,14 @@ func TestArchiverService_OnlyArchiveAtEpochEnd(t *testing.T) {
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
// The head state is NOT an epoch end.
st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch - 2,
})
if err != nil {
t.Fatal(err)
}
svc.headFetcher = &mock.ChainService{
State: &pb.BeaconState{Slot: params.BeaconConfig().SlotsPerEpoch - 2},
State: st,
}
event := &feed.Event{
Type: statefeed.BlockProcessed,
@@ -81,7 +93,10 @@ func TestArchiverService_ArchivesEvenThroughSkipSlot(t *testing.T) {
hook := logTest.NewGlobal()
svc, beaconDB := setupService(t)
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
headState, err := setupState(validatorCount)
if err != nil {
t.Fatal(err)
}
defer dbutil.TeardownDB(t, beaconDB)
event := &feed.Event{
Type: statefeed.BlockProcessed,
@@ -99,7 +114,9 @@ func TestArchiverService_ArchivesEvenThroughSkipSlot(t *testing.T) {
// Send out an event every slot, skipping the end slot of the epoch.
for i := uint64(0); i < params.BeaconConfig().SlotsPerEpoch+1; i++ {
headState.Slot = i
if err := headState.SetSlot(i); err != nil {
t.Fatal(err)
}
svc.headFetcher = &mock.ChainService{
State: headState,
}
@@ -130,7 +147,10 @@ func TestArchiverService_ArchivesEvenThroughSkipSlot(t *testing.T) {
func TestArchiverService_ComputesAndSavesParticipation(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
headState, err := setupState(validatorCount)
if err != nil {
t.Fatal(err)
}
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
@@ -168,7 +188,10 @@ func TestArchiverService_ComputesAndSavesParticipation(t *testing.T) {
func TestArchiverService_SavesIndicesAndBalances(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
headState, err := setupState(validatorCount)
if err != nil {
t.Fatal(err)
}
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
@@ -187,11 +210,11 @@ func TestArchiverService_SavesIndicesAndBalances(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(headState.Balances, retrieved) {
if !reflect.DeepEqual(headState.Balances(), retrieved) {
t.Errorf(
"Wanted balances for epoch %d %v, retrieved %v",
helpers.CurrentEpoch(headState),
headState.Balances,
headState.Balances(),
retrieved,
)
}
@@ -201,7 +224,10 @@ func TestArchiverService_SavesIndicesAndBalances(t *testing.T) {
func TestArchiverService_SavesCommitteeInfo(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
headState, err := setupState(validatorCount)
if err != nil {
t.Fatal(err)
}
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
@@ -248,16 +274,33 @@ func TestArchiverService_SavesCommitteeInfo(t *testing.T) {
func TestArchiverService_SavesActivatedValidatorChanges(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
headState, err := setupState(validatorCount)
if err != nil {
t.Fatal(err)
}
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: headState,
}
prevEpoch := helpers.PrevEpoch(headState)
delayedActEpoch := helpers.DelayedActivationExitEpoch(prevEpoch)
headState.Validators[4].ActivationEpoch = delayedActEpoch
headState.Validators[5].ActivationEpoch = delayedActEpoch
delayedActEpoch := helpers.ActivationExitEpoch(prevEpoch)
val1, err := headState.ValidatorAtIndex(4)
if err != nil {
t.Fatal(err)
}
val1.ActivationEpoch = delayedActEpoch
val2, err := headState.ValidatorAtIndex(5)
if err != nil {
t.Fatal(err)
}
val2.ActivationEpoch = delayedActEpoch
if err := headState.UpdateValidatorAtIndex(4, val1); err != nil {
t.Fatal(err)
}
if err := headState.UpdateValidatorAtIndex(5, val1); err != nil {
t.Fatal(err)
}
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
@@ -283,15 +326,32 @@ func TestArchiverService_SavesActivatedValidatorChanges(t *testing.T) {
func TestArchiverService_SavesSlashedValidatorChanges(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
headState, err := setupState(validatorCount)
if err != nil {
t.Fatal(err)
}
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: headState,
}
prevEpoch := helpers.PrevEpoch(headState)
headState.Validators[95].Slashed = true
headState.Validators[96].Slashed = true
val1, err := headState.ValidatorAtIndex(95)
if err != nil {
t.Fatal(err)
}
val1.Slashed = true
val2, err := headState.ValidatorAtIndex(96)
if err != nil {
t.Fatal(err)
}
val2.Slashed = true
if err := headState.UpdateValidatorAtIndex(95, val1); err != nil {
t.Fatal(err)
}
if err := headState.UpdateValidatorAtIndex(96, val1); err != nil {
t.Fatal(err)
}
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
@@ -317,15 +377,25 @@ func TestArchiverService_SavesSlashedValidatorChanges(t *testing.T) {
func TestArchiverService_SavesExitedValidatorChanges(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
headState, err := setupState(validatorCount)
if err != nil {
t.Fatal(err)
}
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: headState,
}
prevEpoch := helpers.PrevEpoch(headState)
headState.Validators[95].ExitEpoch = prevEpoch
headState.Validators[95].WithdrawableEpoch = prevEpoch + params.BeaconConfig().MinValidatorWithdrawabilityDelay
val, err := headState.ValidatorAtIndex(95)
if err != nil {
t.Fatal(err)
}
val.ExitEpoch = prevEpoch
val.WithdrawableEpoch = prevEpoch + params.BeaconConfig().MinValidatorWithdrawabilityDelay
if err := headState.UpdateValidatorAtIndex(95, val); err != nil {
t.Fatal(err)
}
event := &feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
@@ -347,7 +417,7 @@ func TestArchiverService_SavesExitedValidatorChanges(t *testing.T) {
}
}
func setupState(t *testing.T, validatorCount uint64) *pb.BeaconState {
func setupState(validatorCount uint64) (*stateTrie.BeaconState, error) {
validators := make([]*ethpb.Validator, validatorCount)
balances := make([]uint64, validatorCount)
for i := 0; i < len(validators); i++ {
@@ -363,18 +433,18 @@ func setupState(t *testing.T, validatorCount uint64) *pb.BeaconState {
// We initialize a head state that has attestations from participating
// validators in a simulated fashion.
return &pb.BeaconState{
return stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: (2 * params.BeaconConfig().SlotsPerEpoch) - 1,
Validators: validators,
Balances: balances,
BlockRoots: make([][]byte, 128),
BlockRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
Slashings: []uint64{0, 1e9, 1e9},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
CurrentEpochAttestations: atts,
FinalizedCheckpoint: &ethpb.Checkpoint{},
JustificationBits: bitfield.Bitvector4{0x00},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{},
}
})
}
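The tests above now follow a read-modify-write pattern against the new state wrapper instead of mutating proto fields directly. Condensed into a hedged helper for clarity; the helper name is an assumption, while the accessor names (ValidatorAtIndex, UpdateValidatorAtIndex) come from the diff itself.
package statesketch
import (
	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
)
// setActivationEpoch updates one validator's activation epoch through the
// state wrapper's accessors rather than direct struct mutation.
func setActivationEpoch(st *stateTrie.BeaconState, idx uint64, epoch uint64) error {
	val, err := st.ValidatorAtIndex(idx)
	if err != nil {
		return err
	}
	val.ActivationEpoch = epoch
	return st.UpdateValidatorAtIndex(idx, val)
}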
func setupService(t *testing.T) (*Service, db.Database) {


@@ -4,9 +4,15 @@ go_library(
name = "go_default_library",
srcs = [
"chain_info.go",
"head.go",
"info.go",
"init_sync_process_block.go",
"log.go",
"metrics.go",
"process_attestation.go",
"process_attestation_helpers.go",
"process_block.go",
"process_block_helpers.go",
"receive_attestation.go",
"receive_block.go",
"service.go",
@@ -14,7 +20,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/blockchain/forkchoice:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
@@ -23,16 +29,27 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/flags:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/params:go_default_library",
"//shared/roughtime:go_default_library",
"//shared/slotutil:go_default_library",
"//shared/traceutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_emicklei_dot//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
@@ -56,8 +73,11 @@ go_test(
size = "medium",
srcs = [
"chain_info_test.go",
"head_test.go",
"init_sync_process_block_test.go",
"process_attestation_test.go",
"process_block_test.go",
"receive_attestation_test.go",
"receive_block_test.go",
"service_test.go",
],
embed = [":go_default_library"],
@@ -70,11 +90,12 @@ go_test(
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//proto/beacon/db:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/event:go_default_library",
"//shared/params:go_default_library",
"//shared/stateutil:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
@@ -84,6 +105,7 @@ go_test(
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@in_gopkg_d4l3k_messagediff_v1//:go_default_library",
"@org_golang_x_net//context:go_default_library",
],
)
@@ -95,6 +117,13 @@ go_test(
"service_norace_test.go",
],
embed = [":go_default_library"],
gc_goopts = [
# Go 1.14 enables checkptr by default when building with -race or -msan. There is a pointer
# issue in boltdb, so must disable checkptr at compile time. This flag can be removed once
# the project is migrated to etcd's version of boltdb and the issue has been fixed.
# See: https://github.com/etcd-io/bbolt/issues/187.
"-d=checkptr=0",
],
race = "on",
tags = ["race_on"],
deps = [


@@ -5,10 +5,11 @@ import (
"context"
"time"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -17,32 +18,26 @@ import (
// directly retrieves chain info related data.
type ChainInfoFetcher interface {
HeadFetcher
CanonicalRootFetcher
FinalizationFetcher
}
// GenesisTimeFetcher retrieves the Eth2 genesis timestamp.
type GenesisTimeFetcher interface {
// TimeFetcher retrieves the Eth2 data that's related to time.
type TimeFetcher interface {
GenesisTime() time.Time
CurrentSlot() uint64
}
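As a side note, the CurrentSlot method added to TimeFetcher is typically derived from GenesisTime. A minimal, self-contained sketch of that derivation follows; the package, function, and parameter names are illustrative assumptions, not Prysm's actual implementation.
// sketch only: derive the current slot from the genesis time.
package timesketch
import "time"
// currentSlotFromGenesis stands in for a TimeFetcher implementation's CurrentSlot;
// secondsPerSlot plays the role of params.BeaconConfig().SecondsPerSlot.
func currentSlotFromGenesis(genesis time.Time, secondsPerSlot uint64) uint64 {
	if secondsPerSlot == 0 || time.Now().Before(genesis) {
		return 0
	}
	return uint64(time.Since(genesis).Seconds()) / secondsPerSlot
}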
// HeadFetcher defines a common interface for methods in blockchain service which
// directly retrieves head related data.
type HeadFetcher interface {
HeadSlot() uint64
HeadRoot() []byte
HeadBlock() *ethpb.SignedBeaconBlock
HeadState(ctx context.Context) (*pb.BeaconState, error)
HeadRoot(ctx context.Context) ([]byte, error)
HeadBlock(ctx context.Context) (*ethpb.SignedBeaconBlock, error)
HeadState(ctx context.Context) (*state.BeaconState, error)
HeadValidatorsIndices(epoch uint64) ([]uint64, error)
HeadSeed(epoch uint64) ([32]byte, error)
}
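For illustration, a hypothetical caller of the reworked HeadFetcher, showing that the head accessors now take a context and return errors instead of bare values; the helper name and the example package are assumptions made for this sketch.
// headsketch is a hypothetical consumer of the blockchain.HeadFetcher interface.
package headsketch
import (
	"context"
	"fmt"
	"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
)
// logHead prints basic head data; errors from the DB-backed fallbacks now surface to the caller.
func logHead(ctx context.Context, f blockchain.HeadFetcher) error {
	root, err := f.HeadRoot(ctx)
	if err != nil {
		return err
	}
	st, err := f.HeadState(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("head slot=%d root=%#x state slot=%d\n", f.HeadSlot(), root, st.Slot())
	return nil
}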
// CanonicalRootFetcher defines a common interface for methods in blockchain service which
// directly retrieves canonical roots related data.
type CanonicalRootFetcher interface {
CanonicalRoot(slot uint64) []byte
}
// ForkFetcher retrieves the current fork information of the Ethereum beacon chain.
type ForkFetcher interface {
CurrentFork() *pb.Fork
@@ -64,115 +59,118 @@ type ParticipationFetcher interface {
// FinalizedCheckpt returns the latest finalized checkpoint from head state.
func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
if s.headState == nil || s.headState.FinalizedCheckpoint == nil {
if s.finalizedCheckpt == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
}
// If head state exists but there hasn't been a finalized check point,
// the check point's root should refer to genesis block root.
if bytes.Equal(s.headState.FinalizedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
if bytes.Equal(s.finalizedCheckpt.Root, params.BeaconConfig().ZeroHash[:]) {
return &ethpb.Checkpoint{Root: s.genesisRoot[:]}
}
return s.headState.FinalizedCheckpoint
return state.CopyCheckpoint(s.finalizedCheckpt)
}
// CurrentJustifiedCheckpt returns the current justified checkpoint from head state.
func (s *Service) CurrentJustifiedCheckpt() *ethpb.Checkpoint {
if s.headState == nil || s.headState.CurrentJustifiedCheckpoint == nil {
if s.justifiedCheckpt == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
}
// If head state exists but there hasn't been a justified check point,
// the check point root should refer to genesis block root.
if bytes.Equal(s.headState.CurrentJustifiedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
if bytes.Equal(s.justifiedCheckpt.Root, params.BeaconConfig().ZeroHash[:]) {
return &ethpb.Checkpoint{Root: s.genesisRoot[:]}
}
return s.headState.CurrentJustifiedCheckpoint
return state.CopyCheckpoint(s.justifiedCheckpt)
}
// PreviousJustifiedCheckpt returns the previous justified checkpoint from head state.
func (s *Service) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
if s.headState == nil || s.headState.PreviousJustifiedCheckpoint == nil {
if s.prevJustifiedCheckpt == nil {
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
}
// If head state exists but there hasn't been a justified check point,
// the check point root should refer to genesis block root.
if bytes.Equal(s.headState.PreviousJustifiedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
if bytes.Equal(s.prevJustifiedCheckpt.Root, params.BeaconConfig().ZeroHash[:]) {
return &ethpb.Checkpoint{Root: s.genesisRoot[:]}
}
return s.headState.PreviousJustifiedCheckpoint
return state.CopyCheckpoint(s.prevJustifiedCheckpt)
}
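The checkpoint getters above now return state.CopyCheckpoint(...) rather than the service's internal pointer. A minimal sketch of that defensive-copy idea, assuming only the Epoch and Root fields visible in this diff; this is not the actual state.CopyCheckpoint code.
package checkpointsketch
import ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
// cloneCheckpoint returns a copy so callers cannot mutate the service's
// justified/finalized checkpoints through the returned pointer.
func cloneCheckpoint(cp *ethpb.Checkpoint) *ethpb.Checkpoint {
	if cp == nil {
		return nil
	}
	root := make([]byte, len(cp.Root))
	copy(root, cp.Root)
	return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: root}
}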
// HeadSlot returns the slot of the head of the chain.
func (s *Service) HeadSlot() uint64 {
s.headLock.RLock()
defer s.headLock.RUnlock()
if !s.hasHeadState() {
return 0
}
return s.headSlot
return s.headSlot()
}
// HeadRoot returns the root of the head of the chain.
func (s *Service) HeadRoot() []byte {
s.headLock.RLock()
defer s.headLock.RUnlock()
root := s.canonicalRoots[s.headSlot]
if len(root) != 0 {
return root
func (s *Service) HeadRoot(ctx context.Context) ([]byte, error) {
if s.headRoot() != params.BeaconConfig().ZeroHash {
r := s.headRoot()
return r[:], nil
}
return params.BeaconConfig().ZeroHash[:]
b, err := s.beaconDB.HeadBlock(ctx)
if err != nil {
return nil, err
}
if b == nil {
return params.BeaconConfig().ZeroHash[:], nil
}
r, err := ssz.HashTreeRoot(b.Block)
if err != nil {
return nil, err
}
return r[:], nil
}
// HeadBlock returns the head block of the chain.
func (s *Service) HeadBlock() *ethpb.SignedBeaconBlock {
s.headLock.RLock()
defer s.headLock.RUnlock()
// If the head state is nil from service struct,
// it will attempt to get the head block from DB.
func (s *Service) HeadBlock(ctx context.Context) (*ethpb.SignedBeaconBlock, error) {
if s.hasHeadState() {
return s.headBlock(), nil
}
return proto.Clone(s.headBlock).(*ethpb.SignedBeaconBlock)
return s.beaconDB.HeadBlock(ctx)
}
// HeadState returns the head state of the chain.
// If the head state is nil from service struct,
// it will attempt to get from DB and error if nil again.
func (s *Service) HeadState(ctx context.Context) (*pb.BeaconState, error) {
s.headLock.RLock()
defer s.headLock.RUnlock()
if s.headState == nil {
return s.beaconDB.HeadState(ctx)
// it will attempt to get the head state from DB.
func (s *Service) HeadState(ctx context.Context) (*state.BeaconState, error) {
if s.hasHeadState() {
return s.headState(), nil
}
return proto.Clone(s.headState).(*pb.BeaconState), nil
return s.beaconDB.HeadState(ctx)
}
// HeadValidatorsIndices returns a list of active validator indices from the head view of a given epoch.
func (s *Service) HeadValidatorsIndices(epoch uint64) ([]uint64, error) {
if s.headState == nil {
if !s.hasHeadState() {
return []uint64{}, nil
}
return helpers.ActiveValidatorIndices(s.headState, epoch)
return helpers.ActiveValidatorIndices(s.headState(), epoch)
}
// HeadSeed returns the seed from the head view of a given epoch.
func (s *Service) HeadSeed(epoch uint64) ([32]byte, error) {
if s.headState == nil {
if !s.hasHeadState() {
return [32]byte{}, nil
}
return helpers.Seed(s.headState, epoch, params.BeaconConfig().DomainBeaconAttester)
}
// CanonicalRoot returns the canonical root of a given slot.
func (s *Service) CanonicalRoot(slot uint64) []byte {
s.headLock.RLock()
defer s.headLock.RUnlock()
return s.canonicalRoots[slot]
return helpers.Seed(s.headState(), epoch, params.BeaconConfig().DomainBeaconAttester)
}
// GenesisTime returns the genesis time of beacon chain.
@@ -182,13 +180,13 @@ func (s *Service) GenesisTime() time.Time {
// CurrentFork retrieves the latest fork information of the beacon chain.
func (s *Service) CurrentFork() *pb.Fork {
if s.headState == nil {
if !s.hasHeadState() {
return &pb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
}
}
return proto.Clone(s.headState.Fork).(*pb.Fork)
return s.head.state.Fork()
}
// Participation returns the participation stats of a given epoch.


@@ -12,13 +12,11 @@ func TestHeadSlot_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
beaconDB: db,
}
go func() {
s.saveHead(
context.Background(),
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()
@@ -29,47 +27,45 @@ func TestHeadRoot_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
beaconDB: db,
head: &head{root: [32]byte{'A'}},
}
go func() {
s.saveHead(
context.Background(),
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()
s.HeadRoot()
if _, err := s.HeadRoot(context.Background()); err != nil {
t.Fatal(err)
}
}
func TestHeadBlock_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
beaconDB: db,
head: &head{block: &ethpb.SignedBeaconBlock{}},
}
go func() {
s.saveHead(
context.Background(),
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()
s.HeadBlock()
s.HeadBlock(context.Background())
}
func TestHeadState_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
beaconDB: db,
}
go func() {
s.saveHead(
context.Background(),
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()


@@ -7,23 +7,23 @@ import (
"testing"
"time"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
// Ensure Service implements chain info interface.
var _ = ChainInfoFetcher(&Service{})
var _ = GenesisTimeFetcher(&Service{})
var _ = TimeFetcher(&Service{})
var _ = ForkFetcher(&Service{})
func TestFinalizedCheckpt_Nil(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
c := setupBeaconChain(t, db)
c.headState, _ = testutil.DeterministicGenesisState(t, 1)
if !bytes.Equal(c.FinalizedCheckpt().Root, params.BeaconConfig().ZeroHash[:]) {
t.Error("Incorrect pre chain start value")
}
@@ -33,7 +33,11 @@ func TestHeadRoot_Nil(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
c := setupBeaconChain(t, db)
if !bytes.Equal(c.HeadRoot(), params.BeaconConfig().ZeroHash[:]) {
headRoot, err := c.HeadRoot(context.Background())
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(headRoot, params.BeaconConfig().ZeroHash[:]) {
t.Error("Incorrect pre chain start value")
}
}
@@ -42,9 +46,9 @@ func TestFinalizedCheckpt_CanRetrieve(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Epoch: 5}
cp := &ethpb.Checkpoint{Epoch: 5, Root: []byte("foo")}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{FinalizedCheckpoint: cp}
c.finalizedCheckpt = cp
if c.FinalizedCheckpt().Epoch != cp.Epoch {
t.Errorf("Finalized epoch at genesis should be %d, got: %d", cp.Epoch, c.FinalizedCheckpt().Epoch)
@@ -55,10 +59,11 @@ func TestFinalizedCheckpt_GenesisRootOk(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
genesisRoot := [32]byte{'A'}
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{FinalizedCheckpoint: cp}
c.genesisRoot = [32]byte{'A'}
c.finalizedCheckpt = cp
c.genesisRoot = genesisRoot
if !bytes.Equal(c.FinalizedCheckpt().Root, c.genesisRoot[:]) {
t.Errorf("Got: %v, wanted: %v", c.FinalizedCheckpt().Root, c.genesisRoot[:])
@@ -69,9 +74,9 @@ func TestCurrentJustifiedCheckpt_CanRetrieve(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Epoch: 6}
cp := &ethpb.Checkpoint{Epoch: 6, Root: []byte("foo")}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{CurrentJustifiedCheckpoint: cp}
c.justifiedCheckpt = cp
if c.CurrentJustifiedCheckpt().Epoch != cp.Epoch {
t.Errorf("Current Justifiied epoch at genesis should be %d, got: %d", cp.Epoch, c.CurrentJustifiedCheckpt().Epoch)
@@ -82,10 +87,11 @@ func TestJustifiedCheckpt_GenesisRootOk(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
genesisRoot := [32]byte{'B'}
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{CurrentJustifiedCheckpoint: cp}
c.genesisRoot = [32]byte{'B'}
c.justifiedCheckpt = cp
c.genesisRoot = genesisRoot
if !bytes.Equal(c.CurrentJustifiedCheckpt().Root, c.genesisRoot[:]) {
t.Errorf("Got: %v, wanted: %v", c.CurrentJustifiedCheckpt().Root, c.genesisRoot[:])
@@ -96,9 +102,9 @@ func TestPreviousJustifiedCheckpt_CanRetrieve(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Epoch: 7}
cp := &ethpb.Checkpoint{Epoch: 7, Root: []byte("foo")}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{PreviousJustifiedCheckpoint: cp}
c.prevJustifiedCheckpt = cp
if c.PreviousJustifiedCheckpt().Epoch != cp.Epoch {
t.Errorf("Previous Justifiied epoch at genesis should be %d, got: %d", cp.Epoch, c.PreviousJustifiedCheckpt().Epoch)
@@ -109,10 +115,11 @@ func TestPrevJustifiedCheckpt_GenesisRootOk(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
genesisRoot := [32]byte{'C'}
cp := &ethpb.Checkpoint{Root: genesisRoot[:]}
c := setupBeaconChain(t, db)
c.headState = &pb.BeaconState{PreviousJustifiedCheckpoint: cp}
c.genesisRoot = [32]byte{'C'}
c.prevJustifiedCheckpt = cp
c.genesisRoot = genesisRoot
if !bytes.Equal(c.PreviousJustifiedCheckpt().Root, c.genesisRoot[:]) {
t.Errorf("Got: %v, wanted: %v", c.PreviousJustifiedCheckpt().Root, c.genesisRoot[:])
@@ -121,37 +128,49 @@ func TestPrevJustifiedCheckpt_GenesisRootOk(t *testing.T) {
func TestHeadSlot_CanRetrieve(t *testing.T) {
c := &Service{}
c.headSlot = 100
s, _ := state.InitializeFromProto(&pb.BeaconState{})
c.head = &head{slot: 100, state: s}
if c.HeadSlot() != 100 {
t.Errorf("Wanted head slot: %d, got: %d", 100, c.HeadSlot())
}
}
func TestHeadRoot_CanRetrieve(t *testing.T) {
c := &Service{canonicalRoots: make(map[uint64][]byte)}
c.headSlot = 100
c.canonicalRoots[c.headSlot] = []byte{'A'}
if !bytes.Equal([]byte{'A'}, c.HeadRoot()) {
t.Errorf("Wanted head root: %v, got: %d", []byte{'A'}, c.HeadRoot())
c := &Service{}
c.head = &head{root: [32]byte{'A'}}
if [32]byte{'A'} != c.headRoot() {
t.Errorf("Wanted head root: %v, got: %d", []byte{'A'}, c.headRoot())
}
}
func TestHeadBlock_CanRetrieve(t *testing.T) {
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
c := &Service{headBlock: b}
if !reflect.DeepEqual(b, c.HeadBlock()) {
s, _ := state.InitializeFromProto(&pb.BeaconState{})
c := &Service{}
c.head = &head{block: b, state: s}
received, err := c.HeadBlock(context.Background())
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(b, received) {
t.Error("incorrect head block received")
}
}
func TestHeadState_CanRetrieve(t *testing.T) {
s := &pb.BeaconState{Slot: 2}
c := &Service{headState: s}
s, err := state.InitializeFromProto(&pb.BeaconState{Slot: 2})
if err != nil {
t.Fatal(err)
}
c := &Service{}
c.head = &head{state: s}
headState, err := c.HeadState(context.Background())
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(s, headState) {
if !reflect.DeepEqual(s.InnerStateUnsafe(), headState.InnerStateUnsafe()) {
t.Error("incorrect head state received")
}
}
@@ -166,19 +185,13 @@ func TestGenesisTime_CanRetrieve(t *testing.T) {
func TestCurrentFork_CanRetrieve(t *testing.T) {
f := &pb.Fork{Epoch: 999}
s := &pb.BeaconState{Fork: f}
c := &Service{headState: s}
if !reflect.DeepEqual(c.CurrentFork(), f) {
t.Error("Recieved incorrect fork version")
}
}
func TestCanonicalRoot_CanRetrieve(t *testing.T) {
c := &Service{canonicalRoots: make(map[uint64][]byte)}
slot := uint64(123)
r := []byte{'B'}
c.canonicalRoots[slot] = r
if !bytes.Equal(r, c.CanonicalRoot(slot)) {
t.Errorf("Wanted head root: %v, got: %d", []byte{'A'}, c.CanonicalRoot(slot))
s, err := state.InitializeFromProto(&pb.BeaconState{Fork: f})
if err != nil {
t.Fatal(err)
}
c := &Service{}
c.head = &head{state: s}
if !proto.Equal(c.CurrentFork(), f) {
t.Error("Received incorrect fork version")
}
}


@@ -1,168 +0,0 @@
package forkchoice
import (
"context"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
func BenchmarkForkChoiceTree1(b *testing.B) {
ctx := context.Background()
db := testDB.SetupDB(b)
defer testDB.TeardownDB(b, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
b.Fatal(err)
}
// Benchmark fork choice with 1024 validators
validators := make([]*ethpb.Validator, 1024)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
b.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
b.Fatal(err)
}
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
b.Fatal(err)
}
// Spread out the votes evenly for all 3 leaf nodes
for i := 0; i < len(validators); i++ {
switch {
case i < 256:
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[1]}
case i > 768:
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[7]}
default:
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[8]}
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := store.Head(ctx)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkForkChoiceTree2(b *testing.B) {
ctx := context.Background()
db := testDB.SetupDB(b)
defer testDB.TeardownDB(b, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree2(db)
if err != nil {
b.Fatal(err)
}
// Benchmark fork choice with 1024 validators
validators := make([]*ethpb.Validator, 1024)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
b.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
b.Fatal(err)
}
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
b.Fatal(err)
}
// Spread out the votes evenly for all the leaf nodes. 8 to 15
nodeIndex := 8
for i := 0; i < len(validators); i++ {
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[nodeIndex]}
if i%155 == 0 {
nodeIndex++
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := store.Head(ctx)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkForkChoiceTree3(b *testing.B) {
ctx := context.Background()
db := testDB.SetupDB(b)
defer testDB.TeardownDB(b, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree3(db)
if err != nil {
b.Fatal(err)
}
// Benchmark fork choice with 1024 validators
validators := make([]*ethpb.Validator, 1024)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
b.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
b.Fatal(err)
}
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
b.Fatal(err)
}
// All validators vote on the same head
for i := 0; i < len(validators); i++ {
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[len(roots)-1]}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := store.Head(ctx)
if err != nil {
b.Fatal(err)
}
}
}


@@ -1,9 +0,0 @@
/*
Package forkchoice implements the Latest Message Driven GHOST (Greediest Heaviest Observed
Sub-Tree) algorithm as the Ethereum Serenity beacon chain fork choice rule. This algorithm is designed to
properly detect the canonical chain based on validator votes even in the presence of high network
latency, network partitions, and many conflicting blocks. To read more about fork choice, read the
official accompanying document:
https://github.com/ethereum/eth2.0-specs/blob/v0.9.0/specs/core/0_fork-choice.md
*/
package forkchoice
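The package comment above summarizes the rule this diff removes. As a rough, standalone illustration only (not the deleted Store implementation), the sketch below applies LMD GHOST to a tiny in-memory block tree, breaking weight ties by the lexicographically higher block id, as the YAML test cases further down expect; the `block` type and the `ghostHead`/`subtreeWeight` helpers are hypothetical names introduced just for this example.

```go
package main

import (
	"fmt"
	"sort"
)

// block is a hypothetical tree node: its id, its parent's id, and the LMD vote weight cast directly on it.
type block struct {
	id     string
	parent string
	weight int
}

// ghostHead walks from the root, always descending into the child whose
// subtree carries the most weight; ties go to the lexicographically higher id.
func ghostHead(root string, blocks []block) string {
	children := map[string][]string{}
	weights := map[string]int{}
	for _, b := range blocks {
		if b.id != b.parent { // the genesis block is modeled as its own parent
			children[b.parent] = append(children[b.parent], b.id)
		}
		weights[b.id] = b.weight
	}
	// subtreeWeight sums a block's own weight plus all of its descendants'.
	var subtreeWeight func(id string) int
	subtreeWeight = func(id string) int {
		total := weights[id]
		for _, c := range children[id] {
			total += subtreeWeight(c)
		}
		return total
	}
	head := root
	for len(children[head]) > 0 {
		kids := children[head]
		sort.Strings(kids) // deterministic order before tie-breaking
		best := kids[0]
		for _, c := range kids[1:] {
			w, bw := subtreeWeight(c), subtreeWeight(best)
			if w > bw || (w == bw && c > best) {
				best = c
			}
		}
		head = best
	}
	return head
}

func main() {
	// First YAML case below: b3 carries the heaviest weight.
	blocks := []block{
		{"b0", "b0", 0}, {"b1", "b0", 0}, {"b2", "b1", 5}, {"b3", "b1", 10},
	}
	fmt.Println(ghostHead("b0", blocks)) // b3
}
```

Running it against the first YAML case prints b3, matching that case's expected head.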


@@ -1,59 +0,0 @@
test_cases:
# GHOST chooses b3 with the heaviest weight
- blocks:
- id: 'b0'
parent: 'b0'
- id: 'b1'
parent: 'b0'
- id: 'b2'
parent: 'b1'
- id: 'b3'
parent: 'b1'
weights:
b0: 0
b1: 0
b2: 5
b3: 10
head: 'b3'
# GHOST chooses b1 with the heaviest weight
- blocks:
- id: 'b0'
parent: 'b0'
- id: 'b1'
parent: 'b0'
- id: 'b2'
parent: 'b0'
- id: 'b3'
parent: 'b0'
weights:
b1: 5
b2: 4
b3: 3
head: 'b1'
# Equal weights children, GHOST chooses b3 because it is higher lexicographically than b2
- blocks:
- id: 'b0'
parent: 'b0'
- id: 'b1'
parent: 'b0'
- id: 'b2'
parent: 'b0'
- id: 'b3'
parent: 'b0'
weights:
b1: 5
b2: 6
b3: 6
head: 'b3'
# Equal weights children, GHOST chooses b2 because it is higher lexicographically than b1
- blocks:
- id: 'b0'
parent: 'b0'
- id: 'b1'
parent: 'b0'
- id: 'b2'
parent: 'b0'
weights:
b1: 0
b2: 0
head: 'b2'


@@ -1,140 +0,0 @@
package forkchoice
import (
"bytes"
"context"
"io/ioutil"
"path/filepath"
"strconv"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"gopkg.in/yaml.v2"
)
type Config struct {
TestCases []struct {
Blocks []struct {
ID string `yaml:"id"`
Parent string `yaml:"parent"`
} `yaml:"blocks"`
Weights map[string]int `yaml:"weights"`
Head string `yaml:"head"`
} `yaml:"test_cases"`
}
func TestGetHeadFromYaml(t *testing.T) {
ctx := context.Background()
filename, _ := filepath.Abs("./lmd_ghost_test.yaml")
yamlFile, err := ioutil.ReadFile(filename)
if err != nil {
t.Fatal(err)
}
var c *Config
err = yaml.Unmarshal(yamlFile, &c)
params.UseMainnetConfig()
for _, test := range c.TestCases {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
blksRoot := make(map[int][]byte)
// Construct block tree from yaml.
for _, blk := range test.Blocks {
// genesis block condition
if blk.ID == blk.Parent {
b := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
if err := db.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: b}); err != nil {
t.Fatal(err)
}
root, err := ssz.HashTreeRoot(b)
if err != nil {
t.Fatal(err)
}
blksRoot[0] = root[:]
} else {
slot, err := strconv.Atoi(blk.ID[1:])
if err != nil {
t.Fatal(err)
}
parentSlot, err := strconv.Atoi(blk.Parent[1:])
if err != nil {
t.Fatal(err)
}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: uint64(slot), ParentRoot: blksRoot[parentSlot]}}
if err := db.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
blksRoot[slot] = root[:]
if err := db.SaveState(ctx, &pb.BeaconState{}, root); err != nil {
t.Fatal(err)
}
}
}
store := NewForkChoiceService(ctx, db)
// Assign validator votes to the blocks as weights.
count := 0
for blk, votes := range test.Weights {
slot, err := strconv.Atoi(blk[1:])
if err != nil {
t.Fatal(err)
}
max := count + votes
for i := count; i < max; i++ {
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: blksRoot[slot]}
count++
}
}
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(blksRoot[0])); err != nil {
t.Fatal(err)
}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{Root: blksRoot[0]}, &ethpb.Checkpoint{Root: blksRoot[0]}); err != nil {
t.Fatal(err)
}
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
t.Fatal(err)
}
head, err := store.Head(ctx)
if err != nil {
t.Fatal(err)
}
headSlot, err := strconv.Atoi(test.Head[1:])
if err != nil {
t.Fatal(err)
}
wantedHead := blksRoot[headSlot]
if !bytes.Equal(head, wantedHead) {
t.Errorf("wanted root %#x, got root %#x", wantedHead, head)
}
testDB.TeardownDB(t, db)
}
}


@@ -1,40 +0,0 @@
package forkchoice
import (
"fmt"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "forkchoice")
// logs epoch related data during epoch boundary.
func logEpochData(beaconState *pb.BeaconState) {
log.WithFields(logrus.Fields{
"epoch": helpers.CurrentEpoch(beaconState),
"finalizedEpoch": beaconState.FinalizedCheckpoint.Epoch,
"justifiedEpoch": beaconState.CurrentJustifiedCheckpoint.Epoch,
"previousJustifiedEpoch": beaconState.PreviousJustifiedCheckpoint.Epoch,
}).Info("Starting next epoch")
activeVals, err := helpers.ActiveValidatorIndices(beaconState, helpers.CurrentEpoch(beaconState))
if err != nil {
log.WithError(err).Error("Could not get active validator indices")
return
}
log.WithFields(logrus.Fields{
"totalValidators": len(beaconState.Validators),
"activeValidators": len(activeVals),
"averageBalance": fmt.Sprintf("%.5f ETH", averageBalance(beaconState.Balances)),
}).Info("Validator registry information")
}
func averageBalance(balances []uint64) float64 {
total := uint64(0)
for i := 0; i < len(balances); i++ {
total += balances[i]
}
return float64(total) / float64(len(balances)) / float64(params.BeaconConfig().GweiPerEth)
}
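To make the gwei-to-ETH conversion in averageBalance concrete, here is a minimal standalone sketch that hard-codes the mainnet GweiPerEth value of 1e9 instead of reading it from params (an assumption for brevity); averageBalanceETH is an illustrative name, not part of this package.

```go
package main

import "fmt"

// averageBalanceETH mirrors the helper above: sum the gwei balances and
// convert to ETH by dividing by 1e9 (mainnet GweiPerEth).
func averageBalanceETH(balances []uint64) float64 {
	total := uint64(0)
	for _, b := range balances {
		total += b
	}
	return float64(total) / float64(len(balances)) / 1e9
}

func main() {
	// Three validators holding 31, 32 and 33 ETH worth of gwei average to 32 ETH.
	fmt.Printf("%.5f ETH\n", averageBalanceETH([]uint64{31e9, 32e9, 33e9})) // 32.00000 ETH
}
```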


@@ -1,161 +0,0 @@
package forkchoice
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
var (
beaconFinalizedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_finalized_epoch",
Help: "Last finalized epoch of the processed state",
})
beaconFinalizedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_finalized_root",
Help: "Last finalized root of the processed state",
})
cacheFinalizedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "cache_finalized_epoch",
Help: "Last cached finalized epoch",
})
cacheFinalizedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "cache_finalized_root",
Help: "Last cached finalized root",
})
beaconCurrentJustifiedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_current_justified_epoch",
Help: "Current justified epoch of the processed state",
})
beaconCurrentJustifiedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_current_justified_root",
Help: "Current justified root of the processed state",
})
beaconPrevJustifiedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_previous_justified_epoch",
Help: "Previous justified epoch of the processed state",
})
beaconPrevJustifiedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_previous_justified_root",
Help: "Previous justified root of the processed state",
})
sigFailsToVerify = promauto.NewCounter(prometheus.CounterOpts{
Name: "att_signature_failed_to_verify_with_cache",
Help: "Number of attestation signatures that failed to verify with cache on, but succeeded without cache",
})
validatorsCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validator_count",
Help: "The total number of validators, in GWei",
}, []string{"state"})
validatorsBalance = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validators_total_balance",
Help: "The total balance of validators, in GWei",
}, []string{"state"})
validatorsEffectiveBalance = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validators_total_effective_balance",
Help: "The total effective balance of validators, in GWei",
}, []string{"state"})
currentEth1DataDepositCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "current_eth1_data_deposit_count",
Help: "The current eth1 deposit count in the last processed state eth1data field.",
})
totalEligibleBalances = promauto.NewGauge(prometheus.GaugeOpts{
Name: "total_eligible_balances",
Help: "The total amount of ether, in gwei, that has been used in voting attestation target of previous epoch",
})
totalVotedTargetBalances = promauto.NewGauge(prometheus.GaugeOpts{
Name: "total_voted_target_balances",
Help: "The total amount of ether, in gwei, that is eligible for voting of previous epoch",
})
)
func reportEpochMetrics(state *pb.BeaconState) {
currentEpoch := state.Slot / params.BeaconConfig().SlotsPerEpoch
// Validator instances
pendingInstances := 0
activeInstances := 0
slashingInstances := 0
slashedInstances := 0
exitingInstances := 0
exitedInstances := 0
// Validator balances
pendingBalance := uint64(0)
activeBalance := uint64(0)
activeEffectiveBalance := uint64(0)
exitingBalance := uint64(0)
exitingEffectiveBalance := uint64(0)
slashingBalance := uint64(0)
slashingEffectiveBalance := uint64(0)
for i, validator := range state.Validators {
if validator.Slashed {
if currentEpoch < validator.ExitEpoch {
slashingInstances++
slashingBalance += state.Balances[i]
slashingEffectiveBalance += validator.EffectiveBalance
} else {
slashedInstances++
}
continue
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
if currentEpoch < validator.ExitEpoch {
exitingInstances++
exitingBalance += state.Balances[i]
exitingEffectiveBalance += validator.EffectiveBalance
} else {
exitedInstances++
}
continue
}
if currentEpoch < validator.ActivationEpoch {
pendingInstances++
pendingBalance += state.Balances[i]
continue
}
activeInstances++
activeBalance += state.Balances[i]
activeEffectiveBalance += validator.EffectiveBalance
}
validatorsCount.WithLabelValues("Pending").Set(float64(pendingInstances))
validatorsCount.WithLabelValues("Active").Set(float64(activeInstances))
validatorsCount.WithLabelValues("Exiting").Set(float64(exitingInstances))
validatorsCount.WithLabelValues("Exited").Set(float64(exitedInstances))
validatorsCount.WithLabelValues("Slashing").Set(float64(slashingInstances))
validatorsCount.WithLabelValues("Slashed").Set(float64(slashedInstances))
validatorsBalance.WithLabelValues("Pending").Set(float64(pendingBalance))
validatorsBalance.WithLabelValues("Active").Set(float64(activeBalance))
validatorsBalance.WithLabelValues("Exiting").Set(float64(exitingBalance))
validatorsBalance.WithLabelValues("Slashing").Set(float64(slashingBalance))
validatorsEffectiveBalance.WithLabelValues("Active").Set(float64(activeEffectiveBalance))
validatorsEffectiveBalance.WithLabelValues("Exiting").Set(float64(exitingEffectiveBalance))
validatorsEffectiveBalance.WithLabelValues("Slashing").Set(float64(slashingEffectiveBalance))
// Last justified slot
if state.CurrentJustifiedCheckpoint != nil {
beaconCurrentJustifiedEpoch.Set(float64(state.CurrentJustifiedCheckpoint.Epoch))
beaconCurrentJustifiedRoot.Set(float64(bytesutil.ToLowInt64(state.CurrentJustifiedCheckpoint.Root)))
}
// Last previous justified slot
if state.PreviousJustifiedCheckpoint != nil {
beaconPrevJustifiedEpoch.Set(float64(state.PreviousJustifiedCheckpoint.Epoch))
beaconPrevJustifiedRoot.Set(float64(bytesutil.ToLowInt64(state.PreviousJustifiedCheckpoint.Root)))
}
// Last finalized slot
if state.FinalizedCheckpoint != nil {
beaconFinalizedEpoch.Set(float64(state.FinalizedCheckpoint.Epoch))
beaconFinalizedRoot.Set(float64(bytesutil.ToLowInt64(state.FinalizedCheckpoint.Root)))
}
if state.Eth1Data != nil {
currentEth1DataDepositCount.Set(float64(state.Eth1Data.DepositCount))
}
if precompute.Balances != nil {
totalEligibleBalances.Set(float64(precompute.Balances.PrevEpoch))
totalVotedTargetBalances.Set(float64(precompute.Balances.PrevEpochTargetAttesters))
}
}


@@ -1,309 +0,0 @@
package forkchoice
import (
"context"
"fmt"
"time"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// ErrTargetRootNotInDB returns when the target block root of an attestation cannot be found in the
// beacon database.
var ErrTargetRootNotInDB = errors.New("target root does not exist in db")
// OnAttestation is called whenever an attestation is received, it updates validators latest vote,
// as well as the fork choice store struct.
//
// Spec pseudocode definition:
// def on_attestation(store: Store, attestation: Attestation) -> None:
// """
// Run ``on_attestation`` upon receiving a new ``attestation`` from either within a block or directly on the wire.
//
// An ``attestation`` that is asserted as invalid may be valid at a later time,
// consider scheduling it for later processing in such case.
// """
// target = attestation.data.target
//
// # Attestations must be from the current or previous epoch
// current_epoch = compute_epoch_at_slot(get_current_slot(store))
// # Use GENESIS_EPOCH for previous when genesis to avoid underflow
// previous_epoch = current_epoch - 1 if current_epoch > GENESIS_EPOCH else GENESIS_EPOCH
// assert target.epoch in [current_epoch, previous_epoch]
// assert target.epoch == compute_epoch_at_slot(attestation.data.slot)
//
// # Attestations' target must be for a known block. If target block is unknown, delay consideration until the block is found
// assert target.root in store.blocks
// # Attestations cannot be from future epochs. If they are, delay consideration until the epoch arrives
// base_state = store.block_states[target.root].copy()
// assert store.time >= base_state.genesis_time + compute_start_slot_at_epoch(target.epoch) * SECONDS_PER_SLOT
//
// # Attestations must be for a known block. If block is unknown, delay consideration until the block is found
// assert attestation.data.beacon_block_root in store.blocks
// # Attestations must not be for blocks in the future. If not, the attestation should not be considered
// assert store.blocks[attestation.data.beacon_block_root].slot <= attestation.data.slot
//
// # Store target checkpoint state if not yet seen
// if target not in store.checkpoint_states:
// process_slots(base_state, compute_start_slot_at_epoch(target.epoch))
// store.checkpoint_states[target] = base_state
// target_state = store.checkpoint_states[target]
//
// # Attestations can only affect the fork choice of subsequent slots.
// # Delay consideration in the fork choice until their slot is in the past.
// assert store.time >= (attestation.data.slot + 1) * SECONDS_PER_SLOT
//
// # Get state at the `target` to validate attestation and calculate the committees
// indexed_attestation = get_indexed_attestation(target_state, attestation)
// assert is_valid_indexed_attestation(target_state, indexed_attestation)
//
// # Update latest messages
// for i in indexed_attestation.attesting_indices:
// if i not in store.latest_messages or target.epoch > store.latest_messages[i].epoch:
// store.latest_messages[i] = LatestMessage(epoch=target.epoch, root=attestation.data.beacon_block_root)
func (s *Store) OnAttestation(ctx context.Context, a *ethpb.Attestation) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.onAttestation")
defer span.End()
tgt := proto.Clone(a.Data.Target).(*ethpb.Checkpoint)
tgtSlot := helpers.StartSlot(tgt.Epoch)
if helpers.SlotToEpoch(a.Data.Slot) != a.Data.Target.Epoch {
return fmt.Errorf("data slot is not in the same epoch as target %d != %d", helpers.SlotToEpoch(a.Data.Slot), a.Data.Target.Epoch)
}
// Verify beacon node has seen the target block before.
if !s.db.HasBlock(ctx, bytesutil.ToBytes32(tgt.Root)) {
return ErrTargetRootNotInDB
}
// Verify attestation target has had a valid pre state produced by the target block.
baseState, err := s.verifyAttPreState(ctx, tgt)
if err != nil {
return err
}
// Verify attestation target is from current epoch or previous epoch.
if err := s.verifyAttTargetEpoch(ctx, baseState.GenesisTime, uint64(time.Now().Unix()), tgt); err != nil {
return err
}
// Verify Attestations cannot be from future epochs.
if err := helpers.VerifySlotTime(baseState.GenesisTime, tgtSlot); err != nil {
return errors.Wrap(err, "could not verify attestation target slot")
}
// Verify attestation beacon block is known and not from the future.
if err := s.verifyBeaconBlock(ctx, a.Data); err != nil {
return errors.Wrap(err, "could not verify attestation beacon block")
}
// Store target checkpoint state if not yet seen.
baseState, err = s.saveCheckpointState(ctx, baseState, tgt)
if err != nil {
return err
}
// Verify attestations can only affect the fork choice of subsequent slots.
if err := helpers.VerifySlotTime(baseState.GenesisTime, a.Data.Slot+1); err != nil {
return err
}
// Use the target state to validate attestation and calculate the committees.
indexedAtt, err := s.verifyAttestation(ctx, baseState, a)
if err != nil {
return err
}
// Update every validator's latest vote.
if err := s.updateAttVotes(ctx, indexedAtt, tgt.Root, tgt.Epoch); err != nil {
return err
}
if err := s.db.SaveAttestation(ctx, a); err != nil {
return err
}
log := log.WithFields(logrus.Fields{
"Slot": a.Data.Slot,
"Index": a.Data.CommitteeIndex,
"AggregatedBitfield": fmt.Sprintf("%08b", a.AggregationBits),
"BeaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.Data.BeaconBlockRoot)),
})
log.Debug("Updated latest votes")
return nil
}
// verifyAttPreState validates input attested check point has a valid pre-state.
func (s *Store) verifyAttPreState(ctx context.Context, c *ethpb.Checkpoint) (*pb.BeaconState, error) {
baseState, err := s.db.State(ctx, bytesutil.ToBytes32(c.Root))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", helpers.StartSlot(c.Epoch))
}
if baseState == nil {
return nil, fmt.Errorf("pre state of target block %d does not exist", helpers.StartSlot(c.Epoch))
}
return baseState, nil
}
// verifyAttTargetEpoch validates attestation is from the current or previous epoch.
func (s *Store) verifyAttTargetEpoch(ctx context.Context, genesisTime uint64, nowTime uint64, c *ethpb.Checkpoint) error {
currentSlot := (nowTime - genesisTime) / params.BeaconConfig().SecondsPerSlot
currentEpoch := helpers.SlotToEpoch(currentSlot)
var prevEpoch uint64
// Prevents previous epoch underflow
if currentEpoch > 1 {
prevEpoch = currentEpoch - 1
}
if c.Epoch != prevEpoch && c.Epoch != currentEpoch {
return fmt.Errorf("target epoch %d does not match current epoch %d or prev epoch %d", c.Epoch, currentEpoch, prevEpoch)
}
return nil
}
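To make the epoch window enforced by verifyAttTargetEpoch concrete, below is a small self-contained sketch with mainnet-style constants hard-coded (12-second slots, 32-slot epochs, both assumptions for brevity rather than values read from params); targetEpochValid is an illustrative name only.

```go
package main

import "fmt"

const (
	secondsPerSlot = 12
	slotsPerEpoch  = 32
)

// targetEpochValid applies the same rule as verifyAttTargetEpoch above:
// the attestation target epoch must equal the current or the previous epoch,
// with the previous epoch floored at 0 to avoid underflow near genesis.
func targetEpochValid(genesisTime, nowTime, targetEpoch uint64) bool {
	currentSlot := (nowTime - genesisTime) / secondsPerSlot
	currentEpoch := currentSlot / slotsPerEpoch
	prevEpoch := uint64(0)
	if currentEpoch > 1 {
		prevEpoch = currentEpoch - 1
	}
	return targetEpoch == currentEpoch || targetEpoch == prevEpoch
}

func main() {
	genesis := uint64(0)
	now := uint64(3 * slotsPerEpoch * secondsPerSlot) // start of epoch 3
	fmt.Println(targetEpochValid(genesis, now, 3))    // true: current epoch
	fmt.Println(targetEpochValid(genesis, now, 2))    // true: previous epoch
	fmt.Println(targetEpochValid(genesis, now, 1))    // false: too old
}
```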
// verifyBeaconBlock verifies beacon head block is known and not from the future.
func (s *Store) verifyBeaconBlock(ctx context.Context, data *ethpb.AttestationData) error {
b, err := s.db.Block(ctx, bytesutil.ToBytes32(data.BeaconBlockRoot))
if err != nil {
return err
}
if b == nil || b.Block == nil {
return fmt.Errorf("beacon block %#x does not exist", bytesutil.Trunc(data.BeaconBlockRoot))
}
if b.Block.Slot > data.Slot {
return fmt.Errorf("could not process attestation for future block, %d > %d", b.Block.Slot, data.Slot)
}
return nil
}
// saveCheckpointState saves and returns the processed state with the associated check point.
func (s *Store) saveCheckpointState(ctx context.Context, baseState *pb.BeaconState, c *ethpb.Checkpoint) (*pb.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.saveCheckpointState")
defer span.End()
s.checkpointStateLock.Lock()
defer s.checkpointStateLock.Unlock()
cachedState, err := s.checkpointState.StateByCheckpoint(c)
if err != nil {
return nil, errors.Wrap(err, "could not get cached checkpoint state")
}
if cachedState != nil {
return cachedState, nil
}
// Advance slots only when it's higher than current state slot.
if helpers.StartSlot(c.Epoch) > baseState.Slot {
stateCopy := proto.Clone(baseState).(*pb.BeaconState)
stateCopy, err = state.ProcessSlots(ctx, stateCopy, helpers.StartSlot(c.Epoch))
if err != nil {
return nil, errors.Wrapf(err, "could not process slots up to %d", helpers.StartSlot(c.Epoch))
}
if err := s.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: c,
State: stateCopy,
}); err != nil {
return nil, errors.Wrap(err, "could not saved checkpoint state to cache")
}
return stateCopy, nil
}
return baseState, nil
}
// verifyAttestation validates input attestation is valid.
func (s *Store) verifyAttestation(ctx context.Context, baseState *pb.BeaconState, a *ethpb.Attestation) (*ethpb.IndexedAttestation, error) {
committee, err := helpers.BeaconCommitteeFromState(baseState, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return nil, err
}
indexedAtt, err := blocks.ConvertToIndexed(ctx, a, committee)
if err != nil {
return nil, errors.Wrap(err, "could not convert attestation to indexed attestation")
}
if err := blocks.VerifyIndexedAttestation(ctx, baseState, indexedAtt); err != nil {
// TODO(3603): Delete the following signature verify fallback when issue 3603 closes.
// When signature fails to verify with committee cache enabled at run time,
// the following re-runs the same signature verify routine without cache in play.
// This provides extra assurance that committee cache can't break run time.
if err == blocks.ErrSigFailedToVerify {
committee, err = helpers.BeaconCommitteeWithoutCache(baseState, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return nil, errors.Wrap(err, "could not convert attestation to indexed attestation without cache")
}
indexedAtt, err = blocks.ConvertToIndexed(ctx, a, committee)
if err != nil {
return nil, errors.Wrap(err, "could not convert attestation to indexed attestation")
}
if err := blocks.VerifyIndexedAttestation(ctx, baseState, indexedAtt); err != nil {
return nil, errors.Wrap(err, "could not verify indexed attestation without cache")
}
sigFailsToVerify.Inc()
return indexedAtt, nil
}
return nil, errors.Wrap(err, "could not verify indexed attestation")
}
return indexedAtt, nil
}
// updateAttVotes updates validator's latest votes based on the incoming attestation.
func (s *Store) updateAttVotes(
ctx context.Context,
indexedAtt *ethpb.IndexedAttestation,
tgtRoot []byte,
tgtEpoch uint64) error {
indices := indexedAtt.AttestingIndices
s.voteLock.Lock()
defer s.voteLock.Unlock()
for _, i := range indices {
vote, ok := s.latestVoteMap[i]
if !ok || tgtEpoch > vote.Epoch {
s.latestVoteMap[i] = &pb.ValidatorLatestVote{
Epoch: tgtEpoch,
Root: tgtRoot,
}
}
}
return nil
}
// aggregatedAttestations returns the aggregated attestations after checking the saved ones in db.
func (s *Store) aggregatedAttestations(ctx context.Context, att *ethpb.Attestation) ([]*ethpb.Attestation, error) {
r, err := ssz.HashTreeRoot(att.Data)
if err != nil {
return nil, err
}
saved, err := s.db.AttestationsByDataRoot(ctx, r)
if err != nil {
return nil, err
}
if saved == nil {
return []*ethpb.Attestation{att}, nil
}
aggregated, err := helpers.AggregateAttestations(append(saved, att))
if err != nil {
return nil, err
}
return aggregated, nil
}


@@ -1,572 +0,0 @@
package forkchoice
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"time"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/beacon-chain/flags"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/traceutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// OnBlock is called when a gossip block is received. It runs regular state transition on the block and
// updates the fork choice store.
//
// Spec pseudocode definition:
// def on_block(store: Store, block: BeaconBlock) -> None:
// # Make a copy of the state to avoid mutability issues
// assert block.parent_root in store.block_states
// pre_state = store.block_states[block.parent_root].copy()
// # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
// assert store.time >= pre_state.genesis_time + block.slot * SECONDS_PER_SLOT
// # Add new block to the store
// store.blocks[signing_root(block)] = block
// # Check block is a descendant of the finalized block
// assert (
// get_ancestor(store, signing_root(block), store.blocks[store.finalized_checkpoint.root].slot) ==
// store.finalized_checkpoint.root
// )
// # Check that block is later than the finalized epoch slot
// assert block.slot > compute_start_slot_of_epoch(store.finalized_checkpoint.epoch)
// # Check the block is valid and compute the post-state
// state = state_transition(pre_state, block)
// # Add new state for this block to the store
// store.block_states[signing_root(block)] = state
//
// # Update justified checkpoint
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
// store.best_justified_checkpoint = state.current_justified_checkpoint
//
// # Update finalized checkpoint
// if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
// store.finalized_checkpoint = state.finalized_checkpoint
func (s *Store) OnBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.onBlock")
defer span.End()
if signed == nil || signed.Block == nil {
return errors.New("nil block")
}
b := signed.Block
// Retrieve incoming block's pre state.
preState, err := s.getBlockPreState(ctx, b)
if err != nil {
return err
}
preStateValidatorCount := len(preState.Validators)
root, err := ssz.HashTreeRoot(b)
if err != nil {
return errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
log.WithFields(logrus.Fields{
"slot": b.Slot,
"root": fmt.Sprintf("0x%s...", hex.EncodeToString(root[:])[:8]),
}).Info("Executing state transition on block")
postState, err := state.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
return errors.Wrap(err, "could not execute state transition")
}
if err := s.db.SaveBlock(ctx, signed); err != nil {
return errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
if err := s.db.SaveState(ctx, postState, root); err != nil {
return errors.Wrap(err, "could not save state")
}
// Update justified check point.
if postState.CurrentJustifiedCheckpoint.Epoch > s.justifiedCheckpt.Epoch {
if err := s.updateJustified(ctx, postState); err != nil {
return err
}
}
// Update finalized check point.
// Prune the block cache and helper caches on every new finalized epoch.
if postState.FinalizedCheckpoint.Epoch > s.finalizedCheckpt.Epoch {
if err := s.db.SaveFinalizedCheckpoint(ctx, postState.FinalizedCheckpoint); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch)
endSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
if endSlot > startSlot {
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot)
}
}
s.prevFinalizedCheckpt = s.finalizedCheckpt
s.finalizedCheckpt = postState.FinalizedCheckpoint
}
// Update validator indices in database as needed.
if err := s.saveNewValidators(ctx, preStateValidatorCount, postState); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
// Save the unseen attestations from block to db.
if err := s.saveNewBlockAttestations(ctx, b.Body.Attestations); err != nil {
return errors.Wrap(err, "could not save attestations")
}
// Epoch boundary bookkeeping such as logging epoch summaries.
if postState.Slot >= s.nextEpochBoundarySlot {
logEpochData(postState)
reportEpochMetrics(postState)
// Update committees cache at epoch boundary slot.
if featureconfig.Get().EnableNewCache {
if err := helpers.UpdateCommitteeCache(postState, helpers.CurrentEpoch(postState)); err != nil {
return err
}
}
s.nextEpochBoundarySlot = helpers.StartSlot(helpers.NextEpoch(postState))
}
return nil
}
// OnBlockInitialSyncStateTransition is called when an initial sync block is received.
// It runs the state transition on the block without any BLS verification. The BLS verification
// includes proposer signature, randao and attestation's aggregated signature. It also does not save
// attestations.
func (s *Store) OnBlockInitialSyncStateTransition(ctx context.Context, signed *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.onBlock")
defer span.End()
if signed == nil || signed.Block == nil {
return errors.New("nil block")
}
b := signed.Block
s.initSyncStateLock.Lock()
defer s.initSyncStateLock.Unlock()
// Retrieve incoming block's pre state.
preState, err := s.cachedPreState(ctx, b)
if err != nil {
return err
}
preStateValidatorCount := len(preState.Validators)
log.WithField("slot", b.Slot).Debug("Executing state transition on block")
postState, err := state.ExecuteStateTransitionNoVerify(ctx, preState, signed)
if err != nil {
return errors.Wrap(err, "could not execute state transition")
}
if err := s.db.SaveBlock(ctx, signed); err != nil {
return errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
root, err := ssz.HashTreeRoot(b)
if err != nil {
return errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
if featureconfig.Get().InitSyncCacheState {
s.initSyncState[root] = postState
} else {
if err := s.db.SaveState(ctx, postState, root); err != nil {
return errors.Wrap(err, "could not save state")
}
}
// Update justified check point.
if postState.CurrentJustifiedCheckpoint.Epoch > s.justifiedCheckpt.Epoch {
if err := s.updateJustified(ctx, postState); err != nil {
return err
}
}
// Update finalized check point.
// Prune the block cache and helper caches on every new finalized epoch.
if postState.FinalizedCheckpoint.Epoch > s.finalizedCheckpt.Epoch {
startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch)
endSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
if endSlot > startSlot {
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot)
}
}
if err := s.saveInitState(ctx, postState); err != nil {
return errors.Wrap(err, "could not save init sync finalized state")
}
if err := s.db.SaveFinalizedCheckpoint(ctx, postState.FinalizedCheckpoint); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
s.prevFinalizedCheckpt = s.finalizedCheckpt
s.finalizedCheckpt = postState.FinalizedCheckpoint
}
// Update validator indices in database as needed.
if err := s.saveNewValidators(ctx, preStateValidatorCount, postState); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
if flags.Get().EnableArchive {
// Save the unseen attestations from block to db.
if err := s.saveNewBlockAttestations(ctx, b.Body.Attestations); err != nil {
return errors.Wrap(err, "could not save attestations")
}
}
// Epoch boundary bookkeeping such as logging epoch summaries.
if postState.Slot >= s.nextEpochBoundarySlot {
reportEpochMetrics(postState)
s.nextEpochBoundarySlot = helpers.StartSlot(helpers.NextEpoch(postState))
}
return nil
}
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block
// to retrieve the state in DB. It verifies the pre state's validity and that the incoming block
// is in the correct time window.
func (s *Store) getBlockPreState(ctx context.Context, b *ethpb.BeaconBlock) (*pb.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.getBlockPreState")
defer span.End()
// Verify incoming block has a valid pre state.
preState, err := s.verifyBlkPreState(ctx, b)
if err != nil {
return nil, err
}
// Verify block slot time is not from the future.
if err := helpers.VerifySlotTime(preState.GenesisTime, b.Slot); err != nil {
return nil, err
}
// Verify block is a descendent of a finalized block.
if err := s.verifyBlkDescendant(ctx, bytesutil.ToBytes32(b.ParentRoot), b.Slot); err != nil {
return nil, err
}
// Verify block is later than the finalized epoch slot.
if err := s.verifyBlkFinalizedSlot(b); err != nil {
return nil, err
}
return preState, nil
}
// verifyBlkPreState validates input block has a valid pre-state.
func (s *Store) verifyBlkPreState(ctx context.Context, b *ethpb.BeaconBlock) (*pb.BeaconState, error) {
preState, err := s.db.State(ctx, bytesutil.ToBytes32(b.ParentRoot))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot)
}
if preState == nil {
return nil, fmt.Errorf("pre state of slot %d does not exist", b.Slot)
}
return preState, nil
}
// verifyBlkDescendant validates input block root is a descendant of the
// current finalized block root.
func (s *Store) verifyBlkDescendant(ctx context.Context, root [32]byte, slot uint64) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.verifyBlkDescendant")
defer span.End()
finalizedBlkSigned, err := s.db.Block(ctx, bytesutil.ToBytes32(s.finalizedCheckpt.Root))
if err != nil || finalizedBlkSigned == nil || finalizedBlkSigned.Block == nil {
return errors.Wrap(err, "could not get finalized block")
}
finalizedBlk := finalizedBlkSigned.Block
bFinalizedRoot, err := s.ancestor(ctx, root[:], finalizedBlk.Slot)
if err != nil {
return errors.Wrap(err, "could not get finalized block root")
}
if !bytes.Equal(bFinalizedRoot, s.finalizedCheckpt.Root) {
err := fmt.Errorf("block from slot %d is not a descendent of the current finalized block slot %d, %#x != %#x",
slot, finalizedBlk.Slot, bytesutil.Trunc(bFinalizedRoot), bytesutil.Trunc(s.finalizedCheckpt.Root))
traceutil.AnnotateError(span, err)
return err
}
return nil
}
// verifyBlkFinalizedSlot validates input block is not less than or equal
// to current finalized slot.
func (s *Store) verifyBlkFinalizedSlot(b *ethpb.BeaconBlock) error {
finalizedSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
if finalizedSlot >= b.Slot {
return fmt.Errorf("block is equal or earlier than finalized block, slot %d < slot %d", b.Slot, finalizedSlot)
}
return nil
}
// saveNewValidators saves newly added validator index from state to db. Does nothing if validator count has not
// changed.
func (s *Store) saveNewValidators(ctx context.Context, preStateValidatorCount int, postState *pb.BeaconState) error {
postStateValidatorCount := len(postState.Validators)
if preStateValidatorCount != postStateValidatorCount {
for i := preStateValidatorCount; i < postStateValidatorCount; i++ {
pubKey := postState.Validators[i].PublicKey
if err := s.db.SaveValidatorIndex(ctx, bytesutil.ToBytes48(pubKey), uint64(i)); err != nil {
return errors.Wrapf(err, "could not save activated validator: %d", i)
}
log.WithFields(logrus.Fields{
"index": i,
"pubKey": hex.EncodeToString(bytesutil.Trunc(pubKey)),
"totalValidatorCount": i + 1,
}).Info("New validator index saved in DB")
}
}
return nil
}
// saveNewBlockAttestations saves the new attestations in block to DB.
func (s *Store) saveNewBlockAttestations(ctx context.Context, atts []*ethpb.Attestation) error {
attestations := make([]*ethpb.Attestation, 0, len(atts))
for _, att := range atts {
aggregated, err := s.aggregatedAttestations(ctx, att)
if err != nil {
continue
}
attestations = append(attestations, aggregated...)
}
if err := s.db.SaveAttestations(ctx, atts); err != nil {
return err
}
return nil
}
// rmStatesOlderThanLastFinalized deletes the states in db that are older than the last finalized check point.
func (s *Store) rmStatesOlderThanLastFinalized(ctx context.Context, startSlot uint64, endSlot uint64) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.rmStatesBySlots")
defer span.End()
// Make sure start slot is not a skipped slot
for i := startSlot; i > 0; i-- {
filter := filters.NewFilter().SetStartSlot(i).SetEndSlot(i)
b, err := s.db.Blocks(ctx, filter)
if err != nil {
return err
}
if len(b) > 0 {
startSlot = i
break
}
}
// Make sure finalized slot is not a skipped slot.
for i := endSlot; i > 0; i-- {
filter := filters.NewFilter().SetStartSlot(i).SetEndSlot(i)
b, err := s.db.Blocks(ctx, filter)
if err != nil {
return err
}
if len(b) > 0 {
endSlot = i - 1
break
}
}
// Do not remove genesis state
if startSlot == 0 {
startSlot++
}
// If end slot is less than start slot
if endSlot < startSlot {
endSlot = startSlot
}
filter := filters.NewFilter().SetStartSlot(startSlot).SetEndSlot(endSlot)
roots, err := s.db.BlockRoots(ctx, filter)
if err != nil {
return err
}
roots, err = s.filterBlockRoots(ctx, roots)
if err != nil {
return err
}
if err := s.db.DeleteStates(ctx, roots); err != nil {
return err
}
return nil
}
// shouldUpdateCurrentJustified prevents a bouncing attack by only updating conflicting justified
// checkpoints in the fork choice if in the early slots of the epoch.
// Otherwise, delay incorporation of new justified checkpoint until next epoch boundary.
// See https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114 for more detailed analysis and discussion.
func (s *Store) shouldUpdateCurrentJustified(ctx context.Context, newJustifiedCheckpt *ethpb.Checkpoint) (bool, error) {
if helpers.SlotsSinceEpochStarts(s.currentSlot()) < params.BeaconConfig().SafeSlotsToUpdateJustified {
return true, nil
}
newJustifiedBlockSigned, err := s.db.Block(ctx, bytesutil.ToBytes32(newJustifiedCheckpt.Root))
if err != nil {
return false, err
}
if newJustifiedBlockSigned == nil || newJustifiedBlockSigned.Block == nil {
return false, errors.New("nil new justified block")
}
newJustifiedBlock := newJustifiedBlockSigned.Block
if newJustifiedBlock.Slot <= helpers.StartSlot(s.justifiedCheckpt.Epoch) {
return false, nil
}
justifiedBlockSigned, err := s.db.Block(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if err != nil {
return false, err
}
if justifiedBlockSigned == nil || justifiedBlockSigned.Block == nil {
return false, errors.New("nil justified block")
}
justifiedBlock := justifiedBlockSigned.Block
b, err := s.ancestor(ctx, newJustifiedCheckpt.Root, justifiedBlock.Slot)
if err != nil {
return false, err
}
if !bytes.Equal(b, s.justifiedCheckpt.Root) {
return false, nil
}
return true, nil
}
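The first check in the function above is the heart of the bouncing-attack guard; as a tiny standalone illustration only, the sketch below hard-codes 32-slot epochs and the mainnet SafeSlotsToUpdateJustified of 8 (both assumptions rather than values read from params), and earlyInEpoch is an illustrative name, not a package helper.

```go
package main

import "fmt"

const (
	slotsPerEpoch              = 32
	safeSlotsToUpdateJustified = 8
)

// earlyInEpoch mirrors the guard above: a conflicting justified checkpoint may
// replace the current one only within the first safeSlotsToUpdateJustified
// slots of an epoch; otherwise the update waits for the next epoch boundary.
func earlyInEpoch(currentSlot uint64) bool {
	return currentSlot%slotsPerEpoch < safeSlotsToUpdateJustified
}

func main() {
	fmt.Println(earlyInEpoch(64)) // true:  slot 0 of epoch 2
	fmt.Println(earlyInEpoch(71)) // true:  slot 7 of epoch 2
	fmt.Println(earlyInEpoch(80)) // false: slot 16 of epoch 2
}
```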
func (s *Store) updateJustified(ctx context.Context, state *pb.BeaconState) error {
if state.CurrentJustifiedCheckpoint.Epoch > s.bestJustifiedCheckpt.Epoch {
s.bestJustifiedCheckpt = state.CurrentJustifiedCheckpoint
}
canUpdate, err := s.shouldUpdateCurrentJustified(ctx, state.CurrentJustifiedCheckpoint)
if err != nil {
return err
}
if canUpdate {
s.justifiedCheckpt = state.CurrentJustifiedCheckpoint
}
if featureconfig.Get().InitSyncCacheState {
justifiedRoot := bytesutil.ToBytes32(state.CurrentJustifiedCheckpoint.Root)
justifiedState := s.initSyncState[justifiedRoot]
if err := s.db.SaveState(ctx, justifiedState, justifiedRoot); err != nil {
return errors.Wrap(err, "could not save justified state")
}
}
return s.db.SaveJustifiedCheckpoint(ctx, state.CurrentJustifiedCheckpoint)
}
// currentSlot returns the current slot based on time.
func (s *Store) currentSlot() uint64 {
return (uint64(time.Now().Unix()) - s.genesisTime) / params.BeaconConfig().SecondsPerSlot
}
// updates justified check point in store if a better check point is known
func (s *Store) updateJustifiedCheckpoint() {
// Update at epoch boundary slot only
if !helpers.IsEpochStart(s.currentSlot()) {
return
}
if s.bestJustifiedCheckpt.Epoch > s.justifiedCheckpt.Epoch {
s.justifiedCheckpt = s.bestJustifiedCheckpt
}
}
// This retrieves the cached pre state from memory, used only during initial sync.
func (s *Store) cachedPreState(ctx context.Context, b *ethpb.BeaconBlock) (*pb.BeaconState, error) {
if featureconfig.Get().InitSyncCacheState {
preState := s.initSyncState[bytesutil.ToBytes32(b.ParentRoot)]
var err error
if preState == nil {
preState, err = s.db.State(ctx, bytesutil.ToBytes32(b.ParentRoot))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot)
}
if preState == nil {
return nil, fmt.Errorf("pre state of slot %d does not exist", b.Slot)
}
}
return proto.Clone(preState).(*pb.BeaconState), nil
}
preState, err := s.db.State(ctx, bytesutil.ToBytes32(b.ParentRoot))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot)
}
if preState == nil {
return nil, fmt.Errorf("pre state of slot %d does not exist", b.Slot)
}
return preState, nil
}
// This saves every finalized state in DB during initial sync, needed as part of optimization to
// use cache state during initial sync in case of restart.
func (s *Store) saveInitState(ctx context.Context, state *pb.BeaconState) error {
if !featureconfig.Get().InitSyncCacheState {
return nil
}
finalizedRoot := bytesutil.ToBytes32(state.FinalizedCheckpoint.Root)
fs := s.initSyncState[finalizedRoot]
if err := s.db.SaveState(ctx, fs, finalizedRoot); err != nil {
return errors.Wrap(err, "could not save state")
}
for r, oldState := range s.initSyncState {
if oldState.Slot < state.FinalizedCheckpoint.Epoch*params.BeaconConfig().SlotsPerEpoch {
delete(s.initSyncState, r)
}
}
return nil
}
// This filters block roots that are not known as head root and finalized root in DB.
// It serves as the last line of defence before we prune states.
func (s *Store) filterBlockRoots(ctx context.Context, roots [][32]byte) ([][32]byte, error) {
f, err := s.db.FinalizedCheckpoint(ctx)
if err != nil {
return nil, err
}
fRoot := f.Root
h, err := s.db.HeadBlock(ctx)
if err != nil {
return nil, err
}
hRoot, err := ssz.SigningRoot(h)
if err != nil {
return nil, err
}
filtered := make([][32]byte, 0, len(roots))
for _, root := range roots {
if bytes.Equal(root[:], fRoot[:]) || bytes.Equal(root[:], hRoot[:]) {
continue
}
filtered = append(filtered, root)
}
return filtered, nil
}


@@ -1,610 +0,0 @@
package forkchoice
import (
"bytes"
"context"
"reflect"
"strings"
"testing"
"time"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/stateutil"
)
func TestStore_OnBlock(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
genesisStateRoot, err := stateutil.HashTreeRootState(&pb.BeaconState{})
if err != nil {
t.Error(err)
}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
if err := db.SaveBlock(ctx, genesis); err != nil {
t.Error(err)
}
validGenesisRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Error(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, validGenesisRoot); err != nil {
t.Fatal(err)
}
roots, err := blockTree1(db, validGenesisRoot[:])
if err != nil {
t.Fatal(err)
}
random := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1, ParentRoot: validGenesisRoot[:]}}
if err := db.SaveBlock(ctx, random); err != nil {
t.Error(err)
}
randomParentRoot, err := ssz.HashTreeRoot(random.Block)
if err != nil {
t.Error(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, randomParentRoot); err != nil {
t.Fatal(err)
}
randomParentRoot2 := roots[1]
if err := store.db.SaveState(ctx, &pb.BeaconState{}, bytesutil.ToBytes32(randomParentRoot2)); err != nil {
t.Fatal(err)
}
tests := []struct {
name string
blk *ethpb.BeaconBlock
s *pb.BeaconState
time uint64
wantErrString string
}{
{
name: "parent block root does not have a state",
blk: &ethpb.BeaconBlock{},
s: &pb.BeaconState{},
wantErrString: "pre state of slot 0 does not exist",
},
{
name: "block is from the feature",
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot[:], Slot: params.BeaconConfig().FarFutureEpoch},
s: &pb.BeaconState{},
wantErrString: "could not process slot from the future",
},
{
name: "could not get finalized block",
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot[:]},
s: &pb.BeaconState{},
wantErrString: "block from slot 0 is not a descendent of the current finalized block",
},
{
name: "same slot as finalized block",
blk: &ethpb.BeaconBlock{Slot: 0, ParentRoot: randomParentRoot2},
s: &pb.BeaconState{},
wantErrString: "block is equal or earlier than finalized block, slot 0 < slot 0",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{Root: validGenesisRoot[:]}, &ethpb.Checkpoint{Root: validGenesisRoot[:]}); err != nil {
t.Fatal(err)
}
store.finalizedCheckpt.Root = roots[0]
err := store.OnBlock(ctx, &ethpb.SignedBeaconBlock{Block: tt.blk})
if !strings.Contains(err.Error(), tt.wantErrString) {
t.Errorf("Store.OnBlock() error = %v, wantErr = %v", err, tt.wantErrString)
}
})
}
}
func TestStore_SaveNewValidators(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
preCount := 2 // validators 0 and validators 1
s := &pb.BeaconState{Validators: []*ethpb.Validator{
{PublicKey: []byte{0}}, {PublicKey: []byte{1}},
{PublicKey: []byte{2}}, {PublicKey: []byte{3}},
}}
if err := store.saveNewValidators(ctx, preCount, s); err != nil {
t.Fatal(err)
}
if !db.HasValidatorIndex(ctx, bytesutil.ToBytes48([]byte{2})) {
t.Error("Wanted validator saved in db")
}
if !db.HasValidatorIndex(ctx, bytesutil.ToBytes48([]byte{3})) {
t.Error("Wanted validator saved in db")
}
if db.HasValidatorIndex(ctx, bytesutil.ToBytes48([]byte{1})) {
t.Error("validator not suppose to be saved in db")
}
}
func TestStore_SavesNewBlockAttestations(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
a1 := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b101}}
a2 := &ethpb.Attestation{Data: &ethpb.AttestationData{BeaconBlockRoot: []byte{'A'}}, AggregationBits: bitfield.Bitlist{0b110}}
r1, _ := ssz.HashTreeRoot(a1.Data)
r2, _ := ssz.HashTreeRoot(a2.Data)
if err := store.saveNewBlockAttestations(ctx, []*ethpb.Attestation{a1, a2}); err != nil {
t.Fatal(err)
}
saved, err := store.db.AttestationsByDataRoot(ctx, r1)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a1}, saved) {
t.Error("did not retrieve saved attestation")
}
saved, err = store.db.AttestationsByDataRoot(ctx, r2)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a2}, saved) {
t.Error("did not retrieve saved attestation")
}
a1 = &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b111}}
a2 = &ethpb.Attestation{Data: &ethpb.AttestationData{BeaconBlockRoot: []byte{'A'}}, AggregationBits: bitfield.Bitlist{0b111}}
if err := store.saveNewBlockAttestations(ctx, []*ethpb.Attestation{a1, a2}); err != nil {
t.Fatal(err)
}
saved, err = store.db.AttestationsByDataRoot(ctx, r1)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a1}, saved) {
t.Error("did not retrieve saved attestation")
}
saved, err = store.db.AttestationsByDataRoot(ctx, r2)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a2}, saved) {
t.Error("did not retrieve saved attestation")
}
}
func TestRemoveStateSinceLastFinalized(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
// Save 100 blocks in DB, each has a state.
numBlocks := 100
totalBlocks := make([]*ethpb.SignedBeaconBlock, numBlocks)
blockRoots := make([][32]byte, 0)
for i := 0; i < len(totalBlocks); i++ {
totalBlocks[i] = &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: uint64(i),
},
}
r, err := ssz.HashTreeRoot(totalBlocks[i].Block)
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{Slot: uint64(i)}, r); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, totalBlocks[i]); err != nil {
t.Fatal(err)
}
blockRoots = append(blockRoots, r)
if err := store.db.SaveHeadBlockRoot(ctx, r); err != nil {
t.Fatal(err)
}
}
// New finalized epoch: 1
finalizedEpoch := uint64(1)
finalizedSlot := finalizedEpoch * params.BeaconConfig().SlotsPerEpoch
endSlot := helpers.StartSlot(finalizedEpoch+1) - 1 // Inclusive
if err := store.rmStatesOlderThanLastFinalized(ctx, 0, endSlot); err != nil {
t.Fatal(err)
}
for _, r := range blockRoots {
s, err := store.db.State(ctx, r)
if err != nil {
t.Fatal(err)
}
// Also verifies genesis state didn't get deleted
if s != nil && s.Slot != finalizedSlot && s.Slot != 0 && s.Slot < endSlot {
t.Errorf("State with slot %d should not be in DB", s.Slot)
}
}
// New finalized epoch: 5
newFinalizedEpoch := uint64(5)
newFinalizedSlot := newFinalizedEpoch * params.BeaconConfig().SlotsPerEpoch
endSlot = helpers.StartSlot(newFinalizedEpoch+1) - 1 // Inclusive
if err := store.rmStatesOlderThanLastFinalized(ctx, helpers.StartSlot(finalizedEpoch+1)-1, endSlot); err != nil {
t.Fatal(err)
}
for _, r := range blockRoots {
s, err := store.db.State(ctx, r)
if err != nil {
t.Fatal(err)
}
// Also verifies genesis state didn't get deleted
if s != nil && s.Slot != newFinalizedSlot && s.Slot != finalizedSlot && s.Slot != 0 && s.Slot < endSlot {
t.Errorf("State with slot %d should not be in DB", s.Slot)
}
}
}
func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
store.genesisTime = uint64(time.Now().Unix())
update, err := store.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{})
if err != nil {
t.Fatal(err)
}
if !update {
t.Error("Should be able to update justified, received false")
}
lastJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{ParentRoot: []byte{'G'}}}
lastJustifiedRoot, _ := ssz.HashTreeRoot(lastJustifiedBlk.Block)
newJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1, ParentRoot: lastJustifiedRoot[:]}}
newJustifiedRoot, _ := ssz.HashTreeRoot(newJustifiedBlk.Block)
if err := store.db.SaveBlock(ctx, newJustifiedBlk); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, lastJustifiedBlk); err != nil {
t.Fatal(err)
}
diff := (params.BeaconConfig().SlotsPerEpoch - 1) * params.BeaconConfig().SecondsPerSlot
store.genesisTime = uint64(time.Now().Unix()) - diff
store.justifiedCheckpt = &ethpb.Checkpoint{Root: lastJustifiedRoot[:]}
update, err = store.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
if err != nil {
t.Fatal(err)
}
if !update {
t.Error("Should be able to update justified, received false")
}
}
func TestShouldUpdateJustified_ReturnFalse(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
lastJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{ParentRoot: []byte{'G'}}}
lastJustifiedRoot, _ := ssz.HashTreeRoot(lastJustifiedBlk.Block)
newJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{ParentRoot: lastJustifiedRoot[:]}}
newJustifiedRoot, _ := ssz.HashTreeRoot(newJustifiedBlk.Block)
if err := store.db.SaveBlock(ctx, newJustifiedBlk); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, lastJustifiedBlk); err != nil {
t.Fatal(err)
}
diff := (params.BeaconConfig().SlotsPerEpoch - 1) * params.BeaconConfig().SecondsPerSlot
store.genesisTime = uint64(time.Now().Unix()) - diff
store.justifiedCheckpt = &ethpb.Checkpoint{Root: lastJustifiedRoot[:]}
update, err := store.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
if err != nil {
t.Fatal(err)
}
if update {
t.Error("Should not be able to update justified, received true")
}
}
func TestUpdateJustifiedCheckpoint_Update(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
store.genesisTime = uint64(time.Now().Unix())
store.justifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
store.bestJustifiedCheckpt = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'B'}}
store.updateJustifiedCheckpoint()
if !bytes.Equal(store.justifiedCheckpt.Root, []byte{'B'}) {
t.Error("Justified check point root did not update")
}
}
func TestUpdateJustifiedCheckpoint_NoUpdate(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
store.genesisTime = uint64(time.Now().Unix()) - params.BeaconConfig().SecondsPerSlot
store.justifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
store.bestJustifiedCheckpt = &ethpb.Checkpoint{Epoch: 1, Root: []byte{'B'}}
store.updateJustifiedCheckpoint()
if bytes.Equal(store.justifiedCheckpt.Root, []byte{'B'}) {
t.Error("Justified check point root was not suppose to update")
store := NewForkChoiceService(ctx, db)
// Save 5 blocks in DB, each has a state.
numBlocks := 5
totalBlocks := make([]*ethpb.SignedBeaconBlock, numBlocks)
blockRoots := make([][32]byte, 0)
for i := 0; i < len(totalBlocks); i++ {
totalBlocks[i] = &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: uint64(i),
},
}
r, err := ssz.HashTreeRoot(totalBlocks[i].Block)
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{Slot: uint64(i)}, r); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, totalBlocks[i]); err != nil {
t.Fatal(err)
}
blockRoots = append(blockRoots, r)
}
if err := store.db.SaveHeadBlockRoot(ctx, blockRoots[0]); err != nil {
t.Fatal(err)
}
if err := store.rmStatesOlderThanLastFinalized(ctx, 10, 11); err != nil {
t.Fatal(err)
}
// Since 5-10 are skip slots, block with slot 4 should be deleted
s, err := store.db.State(ctx, blockRoots[4])
if err != nil {
t.Fatal(err)
}
if s != nil {
t.Error("Did not delete state for start slot")
}
}
func TestCachedPreState_CanGetFromCache(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
s := &pb.BeaconState{Slot: 1}
r := [32]byte{'A'}
b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]}
store.initSyncState[r] = s
wanted := "pre state of slot 1 does not exist"
if _, err := store.cachedPreState(ctx, b); !strings.Contains(err.Error(), wanted) {
t.Fatal("Not expected error")
}
}
func TestCachedPreState_CanGetFromCacheWithFeature(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
config := &featureconfig.Flags{
InitSyncCacheState: true,
}
featureconfig.Init(config)
store := NewForkChoiceService(ctx, db)
s := &pb.BeaconState{Slot: 1}
r := [32]byte{'A'}
b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]}
store.initSyncState[r] = s
received, err := store.cachedPreState(ctx, b)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(s, received) {
t.Error("cached state not the same")
}
}
func TestCachedPreState_CanGetFromDB(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
r := [32]byte{'A'}
b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]}
_, err := store.cachedPreState(ctx, b)
wanted := "pre state of slot 1 does not exist"
if err.Error() != wanted {
t.Error("Did not get wanted error")
}
s := &pb.BeaconState{Slot: 1}
store.db.SaveState(ctx, s, r)
received, err := store.cachedPreState(ctx, b)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(s, received) {
t.Error("cached state not the same")
}
}
func TestSaveInitState_CanSaveDelete(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
config := &featureconfig.Flags{
InitSyncCacheState: true,
}
featureconfig.Init(config)
for i := uint64(0); i < 64; i++ {
b := &ethpb.BeaconBlock{Slot: i}
s := &pb.BeaconState{Slot: i}
r, _ := ssz.HashTreeRoot(b)
store.initSyncState[r] = s
}
// Set finalized root as slot 32
finalizedRoot, _ := ssz.HashTreeRoot(&ethpb.BeaconBlock{Slot: 32})
if err := store.saveInitState(ctx, &pb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{
Epoch: 1, Root: finalizedRoot[:]}}); err != nil {
t.Fatal(err)
}
// Verify finalized state is saved in DB
finalizedState, err := store.db.State(ctx, finalizedRoot)
if err != nil {
t.Fatal(err)
}
if finalizedState == nil {
t.Error("finalized state can't be nil")
}
// Verify cached state is properly pruned
if len(store.initSyncState) != int(params.BeaconConfig().SlotsPerEpoch) {
t.Errorf("wanted: %d, got: %d", len(store.initSyncState), params.BeaconConfig().SlotsPerEpoch)
}
}
func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
signedBlock := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := db.SaveBlock(ctx, signedBlock); err != nil {
t.Fatal(err)
}
r, err := ssz.HashTreeRoot(signedBlock.Block)
if err != nil {
t.Fatal(err)
}
store.justifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
store.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
store.initSyncState[r] = &pb.BeaconState{}
if err := db.SaveState(ctx, &pb.BeaconState{}, r); err != nil {
t.Fatal(err)
}
// Could update
s := &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: r[:]}}
if err := store.updateJustified(context.Background(), s); err != nil {
t.Fatal(err)
}
if store.bestJustifiedCheckpt.Epoch != s.CurrentJustifiedCheckpoint.Epoch {
t.Error("Incorrect justified epoch in store")
}
// Could not update
store.bestJustifiedCheckpt.Epoch = 2
if err := store.updateJustified(context.Background(), s); err != nil {
t.Fatal(err)
}
if store.bestJustifiedCheckpt.Epoch != 2 {
t.Error("Incorrect justified epoch in store")
}
}
func TestFilterBlockRoots_CanFilter(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
fBlock := &ethpb.BeaconBlock{}
fRoot, _ := ssz.HashTreeRoot(fBlock)
hBlock := &ethpb.BeaconBlock{Slot: 1}
headRoot, _ := ssz.HashTreeRoot(hBlock)
if err := store.db.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: fBlock}); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, fRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: fRoot[:]}); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: hBlock}); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, headRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveHeadBlockRoot(ctx, headRoot); err != nil {
t.Fatal(err)
}
roots := [][32]byte{{'C'}, {'D'}, headRoot, {'E'}, fRoot, {'F'}}
wanted := [][32]byte{{'C'}, {'D'}, {'E'}, {'F'}}
received, err := store.filterBlockRoots(ctx, roots)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(wanted, received) {
t.Error("Did not filter correctly")
}
}


@@ -1,433 +0,0 @@
package forkchoice
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"sync"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/stateutil"
"go.opencensus.io/trace"
)
// ForkChoicer defines a common interface for methods useful for directly applying fork choice
// to beacon blocks to compute head.
type ForkChoicer interface {
Head(ctx context.Context) ([]byte, error)
OnBlock(ctx context.Context, b *ethpb.SignedBeaconBlock) error
OnBlockInitialSyncStateTransition(ctx context.Context, b *ethpb.SignedBeaconBlock) error
OnAttestation(ctx context.Context, a *ethpb.Attestation) error
GenesisStore(ctx context.Context, justifiedCheckpoint *ethpb.Checkpoint, finalizedCheckpoint *ethpb.Checkpoint) error
FinalizedCheckpt() *ethpb.Checkpoint
}
// Store represents a service struct that handles the forkchoice
// logic of managing the full PoS beacon chain.
type Store struct {
ctx context.Context
cancel context.CancelFunc
db db.Database
justifiedCheckpt *ethpb.Checkpoint
finalizedCheckpt *ethpb.Checkpoint
prevFinalizedCheckpt *ethpb.Checkpoint
checkpointState *cache.CheckpointStateCache
checkpointStateLock sync.Mutex
genesisTime uint64
bestJustifiedCheckpt *ethpb.Checkpoint
latestVoteMap map[uint64]*pb.ValidatorLatestVote
voteLock sync.RWMutex
initSyncState map[[32]byte]*pb.BeaconState
initSyncStateLock sync.RWMutex
nextEpochBoundarySlot uint64
}
// NewForkChoiceService instantiates a new service instance that will
// be registered into a running beacon node.
func NewForkChoiceService(ctx context.Context, db db.Database) *Store {
ctx, cancel := context.WithCancel(ctx)
return &Store{
ctx: ctx,
cancel: cancel,
db: db,
checkpointState: cache.NewCheckpointStateCache(),
latestVoteMap: make(map[uint64]*pb.ValidatorLatestVote),
initSyncState: make(map[[32]byte]*pb.BeaconState),
}
}
// GenesisStore initializes the store struct before beacon chain
// starts to advance.
//
// Spec pseudocode definition:
// def get_genesis_store(genesis_state: BeaconState) -> Store:
// genesis_block = BeaconBlock(state_root=hash_tree_root(genesis_state))
// root = signing_root(genesis_block)
// justified_checkpoint = Checkpoint(epoch=GENESIS_EPOCH, root=root)
// finalized_checkpoint = Checkpoint(epoch=GENESIS_EPOCH, root=root)
// return Store(
// time=genesis_state.genesis_time,
// justified_checkpoint=justified_checkpoint,
// finalized_checkpoint=finalized_checkpoint,
// blocks={root: genesis_block},
// block_states={root: genesis_state.copy()},
// checkpoint_states={justified_checkpoint: genesis_state.copy()},
// )
func (s *Store) GenesisStore(
ctx context.Context,
justifiedCheckpoint *ethpb.Checkpoint,
finalizedCheckpoint *ethpb.Checkpoint) error {
s.justifiedCheckpt = proto.Clone(justifiedCheckpoint).(*ethpb.Checkpoint)
s.bestJustifiedCheckpt = proto.Clone(justifiedCheckpoint).(*ethpb.Checkpoint)
s.finalizedCheckpt = proto.Clone(finalizedCheckpoint).(*ethpb.Checkpoint)
s.prevFinalizedCheckpt = proto.Clone(finalizedCheckpoint).(*ethpb.Checkpoint)
justifiedState, err := s.db.State(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if err != nil {
return errors.Wrap(err, "could not retrieve last justified state")
}
if err := s.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: s.justifiedCheckpt,
State: justifiedState,
}); err != nil {
return errors.Wrap(err, "could not save genesis state in check point cache")
}
s.genesisTime = justifiedState.GenesisTime
if err := s.cacheGenesisState(ctx); err != nil {
return errors.Wrap(err, "could not cache initial sync state")
}
return nil
}
// This sets up the genesis state for the initial sync state cache.
func (s *Store) cacheGenesisState(ctx context.Context) error {
if !featureconfig.Get().InitSyncCacheState {
return nil
}
genesisState, err := s.db.GenesisState(ctx)
if err != nil {
return err
}
stateRoot, err := stateutil.HashTreeRootState(genesisState)
if err != nil {
return errors.Wrap(err, "could not tree hash genesis state")
}
genesisBlk := blocks.NewGenesisBlock(stateRoot[:])
genesisBlkRoot, err := ssz.HashTreeRoot(genesisBlk.Block)
if err != nil {
return errors.Wrap(err, "could not get genesis block root")
}
s.initSyncState[genesisBlkRoot] = genesisState
return nil
}
// ancestor returns the block root of an ancestor block at the requested slot, walking back from the input block root.
//
// Spec pseudocode definition:
// def get_ancestor(store: Store, root: Hash, slot: Slot) -> Hash:
// block = store.blocks[root]
// if block.slot > slot:
// return get_ancestor(store, block.parent_root, slot)
// elif block.slot == slot:
// return root
// else:
// return Bytes32() # root is older than queried slot: no results.
func (s *Store) ancestor(ctx context.Context, root []byte, slot uint64) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.ancestor")
defer span.End()
// Stop recursive ancestry lookup if context is cancelled.
if ctx.Err() != nil {
return nil, ctx.Err()
}
signed, err := s.db.Block(ctx, bytesutil.ToBytes32(root))
if err != nil {
return nil, errors.Wrap(err, "could not get ancestor block")
}
if signed == nil || signed.Block == nil {
return nil, errors.New("nil block")
}
b := signed.Block
// If we don't have the ancestor in the DB, simply return nil so the rest of the fork choice
// operation can proceed. This is not an error condition.
if b == nil || b.Slot < slot {
return nil, nil
}
if b.Slot == slot {
return root, nil
}
return s.ancestor(ctx, b.ParentRoot, slot)
}
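// A minimal standalone sketch of the same recursion over an in-memory map,
// with hypothetical types (not the Store implementation above): walk parent
// links until the requested slot is reached, or report that the root is older
// than the queried slot.
type miniBlock struct {
	slot   uint64
	parent [32]byte
}

func miniAncestor(blocks map[[32]byte]miniBlock, root [32]byte, slot uint64) ([32]byte, bool) {
	b, ok := blocks[root]
	if !ok || b.slot < slot {
		return [32]byte{}, false // Root is older than the queried slot: no result.
	}
	if b.slot == slot {
		return root, true
	}
	return miniAncestor(blocks, b.parent, slot)
}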
// latestAttestingBalance returns the total attesting balance in favor of the block at the input block root.
//
// Spec pseudocode definition:
// def get_latest_attesting_balance(store: Store, root: Hash) -> Gwei:
// state = store.checkpoint_states[store.justified_checkpoint]
// active_indices = get_active_validator_indices(state, get_current_epoch(state))
// return Gwei(sum(
// state.validators[i].effective_balance for i in active_indices
// if (i in store.latest_messages
// and get_ancestor(store, store.latest_messages[i].root, store.blocks[root].slot) == root)
// ))
func (s *Store) latestAttestingBalance(ctx context.Context, root []byte) (uint64, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.latestAttestingBalance")
defer span.End()
lastJustifiedState, err := s.checkpointState.StateByCheckpoint(s.JustifiedCheckpt())
if err != nil {
return 0, errors.Wrap(err, "could not retrieve cached state via last justified check point")
}
if lastJustifiedState == nil {
return 0, errors.Wrapf(err, "could not get justified state at epoch %d", s.JustifiedCheckpt().Epoch)
}
lastJustifiedEpoch := helpers.CurrentEpoch(lastJustifiedState)
activeIndices, err := helpers.ActiveValidatorIndices(lastJustifiedState, lastJustifiedEpoch)
if err != nil {
return 0, errors.Wrap(err, "could not get active indices for last justified checkpoint")
}
wantedBlkSigned, err := s.db.Block(ctx, bytesutil.ToBytes32(root))
if err != nil {
return 0, errors.Wrap(err, "could not get target block")
}
if wantedBlkSigned == nil || wantedBlkSigned.Block == nil {
return 0, errors.New("nil wanted block")
}
wantedBlk := wantedBlkSigned.Block
balances := uint64(0)
s.voteLock.RLock()
defer s.voteLock.RUnlock()
for _, i := range activeIndices {
vote, ok := s.latestVoteMap[i]
if !ok {
continue
}
wantedRoot, err := s.ancestor(ctx, vote.Root, wantedBlk.Slot)
if err != nil {
return 0, errors.Wrapf(err, "could not get ancestor root for slot %d", wantedBlk.Slot)
}
if bytes.Equal(wantedRoot, root) {
balances += lastJustifiedState.Validators[i].EffectiveBalance
}
}
return balances, nil
}
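// A compact restatement of the weighting rule above, assuming an ancestorAt
// helper that resolves each validator's latest vote to its ancestor at the
// target block's slot (hypothetical signature, not the Store API):
func attestingBalanceSketch(effectiveBalances []uint64, latestVotes map[uint64][32]byte,
	ancestorAt func([32]byte) [32]byte, targetRoot [32]byte) uint64 {
	total := uint64(0)
	for validatorIdx, votedRoot := range latestVotes {
		// Count a validator's balance only if its latest vote descends from targetRoot.
		if ancestorAt(votedRoot) == targetRoot && validatorIdx < uint64(len(effectiveBalances)) {
			total += effectiveBalances[validatorIdx]
		}
	}
	return total
}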
// Head returns the head of the beacon chain.
//
// Spec pseudocode definition:
// def get_head(store: Store) -> Root:
// # Get filtered block tree that only includes viable branches
// blocks = get_filtered_block_tree(store)
// # Execute the LMD-GHOST fork choice
// head = store.justified_checkpoint.root
// justified_slot = compute_start_slot_at_epoch(store.justified_checkpoint.epoch)
// while True:
// children = [
// root for root in blocks.keys()
// if blocks[root].parent_root == head and blocks[root].slot > justified_slot
// ]
// if len(children) == 0:
// return head
// # Sort by latest attesting balance with ties broken lexicographically
// head = max(children, key=lambda root: (get_latest_attesting_balance(store, root), root))
func (s *Store) Head(ctx context.Context) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.head")
defer span.End()
head := s.JustifiedCheckpt().Root
filteredBlocks, err := s.getFilterBlockTree(ctx)
if err != nil {
return nil, err
}
justifiedSlot := helpers.StartSlot(s.justifiedCheckpt.Epoch)
for {
children := make([][32]byte, 0, len(filteredBlocks))
for root, block := range filteredBlocks {
if bytes.Equal(block.ParentRoot, head) && block.Slot > justifiedSlot {
children = append(children, root)
}
}
if len(children) == 0 {
return head, nil
}
// if a block has one child, then we don't have to lookup anything to
// know that this child will be the best child.
head = children[0][:]
if len(children) > 1 {
highest, err := s.latestAttestingBalance(ctx, head)
if err != nil {
return nil, errors.Wrap(err, "could not get latest balance")
}
for _, child := range children[1:] {
balance, err := s.latestAttestingBalance(ctx, child[:])
if err != nil {
return nil, errors.Wrap(err, "could not get latest balance")
}
// When there's a tie, it's broken lexicographically to favor the higher one.
if balance > highest ||
balance == highest && bytes.Compare(child[:], head) > 0 {
highest = balance
head = child[:]
}
}
}
}
}
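// The loop above is "max by (attesting balance, root)". A minimal sketch of the
// tie-break rule in isolation, assuming a non-empty children slice and weights
// supplied by the caller (hypothetical helper, not part of the Store API):
func bestChildSketch(children [][32]byte, weight func([32]byte) uint64) [32]byte {
	best := children[0]
	bestWeight := weight(best)
	for _, child := range children[1:] {
		w := weight(child)
		// Higher weight wins; on a tie the lexicographically larger root wins.
		if w > bestWeight || (w == bestWeight && bytes.Compare(child[:], best[:]) > 0) {
			best, bestWeight = child, w
		}
	}
	return best
}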
// getFilterBlockTree retrieves a filtered block tree from the store. It only returns branches
// whose leaf state's justified and finalized info agrees with what's in the store.
// Rationale: https://notes.ethereum.org/Fj-gVkOSTpOyUx-zkWjuwg?view
//
// Spec pseudocode definition:
// def get_filtered_block_tree(store: Store) -> Dict[Root, BeaconBlock]:
// """
// Retrieve a filtered block tree from ``store``, only returning branches
// whose leaf state's justified/finalized info agrees with that in ``store``.
// """
// base = store.justified_checkpoint.root
// blocks: Dict[Root, BeaconBlock] = {}
// filter_block_tree(store, base, blocks)
// return blocks
func (s *Store) getFilterBlockTree(ctx context.Context) (map[[32]byte]*ethpb.BeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.getFilterBlockTree")
defer span.End()
baseRoot := bytesutil.ToBytes32(s.justifiedCheckpt.Root)
filteredBlocks := make(map[[32]byte]*ethpb.BeaconBlock)
if _, err := s.filterBlockTree(ctx, baseRoot, filteredBlocks); err != nil {
return nil, err
}
return filteredBlocks, nil
}
// filterBlockTree filters for branches that see latest finalized and justified info as correct on-chain
// before running Head.
//
// Spec pseudocode definition:
// def filter_block_tree(store: Store, block_root: Root, blocks: Dict[Root, BeaconBlock]) -> bool:
// block = store.blocks[block_root]
// children = [
// root for root in store.blocks.keys()
// if store.blocks[root].parent_root == block_root
// ]
// # If any children branches contain expected finalized/justified checkpoints,
// # add to filtered block-tree and signal viability to parent.
// if any(children):
// filter_block_tree_result = [filter_block_tree(store, child, blocks) for child in children]
// if any(filter_block_tree_result):
// blocks[block_root] = block
// return True
// return False
// # If leaf block, check finalized/justified checkpoints as matching latest.
// head_state = store.block_states[block_root]
// correct_justified = (
// store.justified_checkpoint.epoch == GENESIS_EPOCH
// or head_state.current_justified_checkpoint == store.justified_checkpoint
// )
// correct_finalized = (
// store.finalized_checkpoint.epoch == GENESIS_EPOCH
// or head_state.finalized_checkpoint == store.finalized_checkpoint
// )
// # If expected finalized/justified, add to viable block-tree and signal viability to parent.
// if correct_justified and correct_finalized:
// blocks[block_root] = block
// return True
// # Otherwise, branch not viable
// return False
func (s *Store) filterBlockTree(ctx context.Context, blockRoot [32]byte, filteredBlocks map[[32]byte]*ethpb.BeaconBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.filterBlockTree")
defer span.End()
signed, err := s.db.Block(ctx, blockRoot)
if err != nil {
return false, err
}
if signed == nil || signed.Block == nil {
return false, errors.New("nil block")
}
block := signed.Block
filter := filters.NewFilter().SetParentRoot(blockRoot[:])
childrenRoots, err := s.db.BlockRoots(ctx, filter)
if err != nil {
return false, err
}
if len(childrenRoots) != 0 {
var filtered bool
for _, childRoot := range childrenRoots {
didFilter, err := s.filterBlockTree(ctx, childRoot, filteredBlocks)
if err != nil {
return false, err
}
if didFilter {
filtered = true
}
}
if filtered {
filteredBlocks[blockRoot] = block
return true, nil
}
return false, nil
}
headState, err := s.db.State(ctx, blockRoot)
if err != nil {
return false, err
}
if headState == nil {
return false, fmt.Errorf("no state matching block root %v", hex.EncodeToString(blockRoot[:]))
}
correctJustified := s.justifiedCheckpt.Epoch == 0 ||
proto.Equal(s.justifiedCheckpt, headState.CurrentJustifiedCheckpoint)
correctFinalized := s.finalizedCheckpt.Epoch == 0 ||
proto.Equal(s.finalizedCheckpt, headState.FinalizedCheckpoint)
if correctJustified && correctFinalized {
filteredBlocks[blockRoot] = block
return true, nil
}
return false, nil
}
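// The leaf check above reduces to two checkpoint comparisons. A minimal sketch
// of that predicate with an explicit signature (hypothetical helper; the method
// above reads the checkpoints from the store instead):
func leafViableSketch(storeJustified, storeFinalized, headJustified, headFinalized *ethpb.Checkpoint) bool {
	correctJustified := storeJustified.Epoch == 0 || proto.Equal(storeJustified, headJustified)
	correctFinalized := storeFinalized.Epoch == 0 || proto.Equal(storeFinalized, headFinalized)
	return correctJustified && correctFinalized
}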
// JustifiedCheckpt returns the latest justified check point from fork choice store.
func (s *Store) JustifiedCheckpt() *ethpb.Checkpoint {
return proto.Clone(s.justifiedCheckpt).(*ethpb.Checkpoint)
}
// FinalizedCheckpt returns the latest finalized check point from fork choice store.
func (s *Store) FinalizedCheckpt() *ethpb.Checkpoint {
return proto.Clone(s.finalizedCheckpt).(*ethpb.Checkpoint)
}


@@ -1,517 +0,0 @@
package forkchoice
import (
"bytes"
"context"
"reflect"
"testing"
"time"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/stateutil"
)
func TestStore_GenesisStoreOk(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
genesisTime := time.Unix(9999, 0)
genesisState := &pb.BeaconState{GenesisTime: uint64(genesisTime.Unix())}
genesisStateRoot, err := stateutil.HashTreeRootState(genesisState)
if err != nil {
t.Fatal(err)
}
genesisBlk := blocks.NewGenesisBlock(genesisStateRoot[:])
genesisBlkRoot, err := ssz.HashTreeRoot(genesisBlk.Block)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, genesisState, genesisBlkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveGenesisBlockRoot(ctx, genesisBlkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: genesisBlkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(store.justifiedCheckpt, checkPoint) {
t.Error("Justified check point from genesis store did not match")
}
if !reflect.DeepEqual(store.finalizedCheckpt, checkPoint) {
t.Error("Finalized check point from genesis store did not match")
}
cachedState, err := store.checkpointState.StateByCheckpoint(checkPoint)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(cachedState, genesisState) {
t.Error("Incorrect genesis state cached")
}
}
func TestStore_AncestorOk(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
type args struct {
root []byte
slot uint64
}
// /- B1
// B0 /- B5 - B7
// \- B3 - B4 - B6 - B8
tests := []struct {
args *args
want []byte
}{
{args: &args{roots[1], 0}, want: roots[0]},
{args: &args{roots[8], 0}, want: roots[0]},
{args: &args{roots[8], 4}, want: roots[4]},
{args: &args{roots[7], 4}, want: roots[4]},
{args: &args{roots[7], 0}, want: roots[0]},
}
for _, tt := range tests {
got, err := store.ancestor(ctx, tt.args.root, tt.args.slot)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("Store.ancestor(ctx, ) = %v, want %v", got, tt.want)
}
}
}
func TestStore_AncestorNotPartOfTheChain(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
// /- B1
// B0 /- B5 - B7
// \- B3 - B4 - B6 - B8
root, err := store.ancestor(ctx, roots[8], 1)
if err != nil {
t.Fatal(err)
}
if root != nil {
t.Error("block at slot 1 is not part of the chain")
}
root, err = store.ancestor(ctx, roots[8], 2)
if err != nil {
t.Fatal(err)
}
if root != nil {
t.Error("block at slot 2 is not part of the chain")
}
}
func TestStore_LatestAttestingBalance(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
validators := make([]*ethpb.Validator, 100)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
stateRoot, err := stateutil.HashTreeRootState(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
// /- B1 (33 votes)
// B0 /- B5 - B7 (33 votes)
// \- B3 - B4 - B6 - B8 (34 votes)
for i := 0; i < len(validators); i++ {
switch {
case i < 33:
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[1]}
case i > 66:
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[7]}
default:
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[8]}
}
}
tests := []struct {
root []byte
want uint64
}{
{root: roots[0], want: 100 * 1e9},
{root: roots[1], want: 33 * 1e9},
{root: roots[3], want: 67 * 1e9},
{root: roots[4], want: 67 * 1e9},
{root: roots[7], want: 33 * 1e9},
{root: roots[8], want: 34 * 1e9},
}
for _, tt := range tests {
got, err := store.latestAttestingBalance(ctx, tt.root)
if err != nil {
t.Fatal(err)
}
if got != tt.want {
t.Errorf("Store.latestAttestingBalance(ctx, ) = %v, want %v", got, tt.want)
}
}
}
func TestStore_ChildrenBlocksFromParentRoot(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
filter := filters.NewFilter().SetParentRoot(roots[0]).SetStartSlot(0)
children, err := store.db.BlockRoots(ctx, filter)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(children, [][32]byte{bytesutil.ToBytes32(roots[1]), bytesutil.ToBytes32(roots[3])}) {
t.Error("Did not receive correct children roots")
}
filter = filters.NewFilter().SetParentRoot(roots[0]).SetStartSlot(2)
children, err = store.db.BlockRoots(ctx, filter)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(children, [][32]byte{bytesutil.ToBytes32(roots[3])}) {
t.Error("Did not receive correct children roots")
}
}
func TestStore_GetHead(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
validators := make([]*ethpb.Validator, 100)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
stateRoot, err := stateutil.HashTreeRootState(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
t.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
t.Fatal(err)
}
// /- B1 (33 votes)
// B0 /- B5 - B7 (33 votes)
// \- B3 - B4 - B6 - B8 (34 votes)
for i := 0; i < len(validators); i++ {
switch {
case i < 33:
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[1]}
case i > 66:
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[7]}
default:
store.latestVoteMap[uint64(i)] = &pb.ValidatorLatestVote{Root: roots[8]}
}
}
// Default head is B8
head, err := store.Head(ctx)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(head, roots[8]) {
t.Error("Incorrect head")
}
// 1 validator switches vote to B7 to gain 34%, enough to switch head
store.latestVoteMap[uint64(50)] = &pb.ValidatorLatestVote{Root: roots[7]}
head, err = store.Head(ctx)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(head, roots[7]) {
t.Error("Incorrect head")
}
// 18 validators switch their votes to B1 to gain 51%, enough to switch head
for i := 0; i < 18; i++ {
idx := 50 + uint64(i)
store.latestVoteMap[uint64(idx)] = &pb.ValidatorLatestVote{Root: roots[1]}
}
head, err = store.Head(ctx)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(head, roots[1]) {
t.Log(head)
t.Error("Incorrect head")
}
}
func TestCacheGenesisState_Correct(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
config := &featureconfig.Flags{
InitSyncCacheState: true,
}
featureconfig.Init(config)
b := &ethpb.BeaconBlock{Slot: 1}
r, _ := ssz.HashTreeRoot(b)
s := &pb.BeaconState{GenesisTime: 99}
store.db.SaveState(ctx, s, r)
store.db.SaveGenesisBlockRoot(ctx, r)
if err := store.cacheGenesisState(ctx); err != nil {
t.Fatal(err)
}
for _, state := range store.initSyncState {
if !reflect.DeepEqual(s, state) {
t.Error("Did not get wanted state")
}
}
}
func TestStore_GetFilterBlockTree_CorrectLeaf(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
s := &pb.BeaconState{}
stateRoot, err := stateutil.HashTreeRootState(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
t.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
t.Fatal(err)
}
tree, err := store.getFilterBlockTree(ctx)
if err != nil {
t.Fatal(err)
}
wanted := make(map[[32]byte]*ethpb.BeaconBlock)
for _, root := range roots {
root32 := bytesutil.ToBytes32(root)
b, _ := store.db.Block(ctx, root32)
if b != nil {
wanted[root32] = b.Block
}
}
if !reflect.DeepEqual(tree, wanted) {
t.Error("Did not filter tree correctly")
}
}
func TestStore_GetFilterBlockTree_IncorrectLeaf(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
s := &pb.BeaconState{}
stateRoot, err := stateutil.HashTreeRootState(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
if err := store.db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
t.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
t.Fatal(err)
}
// Filter for incorrect leaves for 1, 7 and 8
store.db.SaveState(ctx, &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{}}, bytesutil.ToBytes32(roots[1]))
store.db.SaveState(ctx, &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{}}, bytesutil.ToBytes32(roots[7]))
store.db.SaveState(ctx, &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{}}, bytesutil.ToBytes32(roots[8]))
store.justifiedCheckpt.Epoch = 1
tree, err := store.getFilterBlockTree(ctx)
if err != nil {
t.Fatal(err)
}
if len(tree) != 0 {
t.Error("filtered tree should be 0 length")
}
// Set leaf 1 as correct
store.db.SaveState(ctx, &pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: store.justifiedCheckpt.Root}}, bytesutil.ToBytes32(roots[1]))
tree, err = store.getFilterBlockTree(ctx)
if err != nil {
t.Fatal(err)
}
wanted := make(map[[32]byte]*ethpb.BeaconBlock)
root32 := bytesutil.ToBytes32(roots[0])
b, err = store.db.Block(ctx, root32)
if err != nil {
t.Fatal(err)
}
wanted[root32] = b.Block
root32 = bytesutil.ToBytes32(roots[1])
b, err = store.db.Block(ctx, root32)
if err != nil {
t.Fatal(err)
}
wanted[root32] = b.Block
if !reflect.DeepEqual(tree, wanted) {
t.Error("Did not filter tree correctly")
}
}


@@ -1,153 +0,0 @@
package forkchoice
import (
"context"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
// blockTree1 constructs the following tree:
// /- B1
// B0 /- B5 - B7
// \- B3 - B4 - B6 - B8
// (B1 and B3 both branch off B0)
func blockTree1(db db.Database, genesisRoot []byte) ([][]byte, error) {
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: genesisRoot}
r0, _ := ssz.HashTreeRoot(b0)
b1 := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r0[:]}
r1, _ := ssz.HashTreeRoot(b1)
b3 := &ethpb.BeaconBlock{Slot: 3, ParentRoot: r0[:]}
r3, _ := ssz.HashTreeRoot(b3)
b4 := &ethpb.BeaconBlock{Slot: 4, ParentRoot: r3[:]}
r4, _ := ssz.HashTreeRoot(b4)
b5 := &ethpb.BeaconBlock{Slot: 5, ParentRoot: r4[:]}
r5, _ := ssz.HashTreeRoot(b5)
b6 := &ethpb.BeaconBlock{Slot: 6, ParentRoot: r4[:]}
r6, _ := ssz.HashTreeRoot(b6)
b7 := &ethpb.BeaconBlock{Slot: 7, ParentRoot: r5[:]}
r7, _ := ssz.HashTreeRoot(b7)
b8 := &ethpb.BeaconBlock{Slot: 8, ParentRoot: r6[:]}
r8, _ := ssz.HashTreeRoot(b8)
for _, b := range []*ethpb.BeaconBlock{b0, b1, b3, b4, b5, b6, b7, b8} {
if err := db.SaveBlock(context.Background(), &ethpb.SignedBeaconBlock{Block: b}); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, bytesutil.ToBytes32(b.ParentRoot)); err != nil {
return nil, err
}
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, r1); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, r7); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, r8); err != nil {
return nil, err
}
return [][]byte{r0[:], r1[:], nil, r3[:], r4[:], r5[:], r6[:], r7[:], r8[:]}, nil
}
// blockTree2 constructs the following tree:
// Scenario graph: shorturl.at/loyP6
//
//digraph G {
// rankdir=LR;
// node [shape="none"];
//
// subgraph blocks {
// rankdir=LR;
// node [shape="box"];
// a->b;
// a->c;
// b->d;
// b->e;
// c->f;
// c->g;
// d->h
// d->i
// d->j
// d->k
// h->l
// h->m
// g->n
// g->o
// e->p
// }
//}
func blockTree2(db db.Database) ([][]byte, error) {
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
r0, _ := ssz.HashTreeRoot(b0)
b1 := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r0[:]}
r1, _ := ssz.HashTreeRoot(b1)
b2 := &ethpb.BeaconBlock{Slot: 2, ParentRoot: r0[:]}
r2, _ := ssz.HashTreeRoot(b2)
b3 := &ethpb.BeaconBlock{Slot: 3, ParentRoot: r1[:]}
r3, _ := ssz.HashTreeRoot(b3)
b4 := &ethpb.BeaconBlock{Slot: 4, ParentRoot: r1[:]}
r4, _ := ssz.HashTreeRoot(b4)
b5 := &ethpb.BeaconBlock{Slot: 5, ParentRoot: r2[:]}
r5, _ := ssz.HashTreeRoot(b5)
b6 := &ethpb.BeaconBlock{Slot: 6, ParentRoot: r2[:]}
r6, _ := ssz.HashTreeRoot(b6)
b7 := &ethpb.BeaconBlock{Slot: 7, ParentRoot: r3[:]}
r7, _ := ssz.HashTreeRoot(b7)
b8 := &ethpb.BeaconBlock{Slot: 8, ParentRoot: r3[:]}
r8, _ := ssz.HashTreeRoot(b8)
b9 := &ethpb.BeaconBlock{Slot: 9, ParentRoot: r3[:]}
r9, _ := ssz.HashTreeRoot(b9)
b10 := &ethpb.BeaconBlock{Slot: 10, ParentRoot: r3[:]}
r10, _ := ssz.HashTreeRoot(b10)
b11 := &ethpb.BeaconBlock{Slot: 11, ParentRoot: r4[:]}
r11, _ := ssz.HashTreeRoot(b11)
b12 := &ethpb.BeaconBlock{Slot: 12, ParentRoot: r6[:]}
r12, _ := ssz.HashTreeRoot(b12)
b13 := &ethpb.BeaconBlock{Slot: 13, ParentRoot: r6[:]}
r13, _ := ssz.HashTreeRoot(b13)
b14 := &ethpb.BeaconBlock{Slot: 14, ParentRoot: r7[:]}
r14, _ := ssz.HashTreeRoot(b14)
b15 := &ethpb.BeaconBlock{Slot: 15, ParentRoot: r7[:]}
r15, _ := ssz.HashTreeRoot(b15)
for _, b := range []*ethpb.BeaconBlock{b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, b10, b11, b12, b13, b14, b15} {
if err := db.SaveBlock(context.Background(), &ethpb.SignedBeaconBlock{Block: b}); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, bytesutil.ToBytes32(b.ParentRoot)); err != nil {
return nil, err
}
}
return [][]byte{r0[:], r1[:], r2[:], r3[:], r4[:], r5[:], r6[:], r7[:], r8[:], r9[:], r10[:], r11[:], r12[:], r13[:], r14[:], r15[:]}, nil
}
// blockTree3 constructs a tree that is 512 blocks in a row.
// B0 - B1 - B2 - B3 - .... - B512
func blockTree3(db db.Database) ([][]byte, error) {
blkCount := 512
roots := make([][]byte, 0, blkCount)
blks := make([]*ethpb.BeaconBlock, 0, blkCount)
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
r0, _ := ssz.HashTreeRoot(b0)
roots = append(roots, r0[:])
blks = append(blks, b0)
for i := 1; i < blkCount; i++ {
b := &ethpb.BeaconBlock{Slot: uint64(i), ParentRoot: roots[len(roots)-1]}
r, _ := ssz.HashTreeRoot(b)
roots = append(roots, r[:])
blks = append(blks, b)
}
for _, b := range blks {
if err := db.SaveBlock(context.Background(), &ethpb.SignedBeaconBlock{Block: b}); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, bytesutil.ToBytes32(b.ParentRoot)); err != nil {
return nil, err
}
}
return roots, nil
}


@@ -0,0 +1,211 @@
package blockchain
import (
"context"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"go.opencensus.io/trace"
)
// This defines the current chain service's view of head.
type head struct {
slot uint64 // current head slot.
root [32]byte // current head root.
block *ethpb.SignedBeaconBlock // current head block.
state *state.BeaconState // current head state.
}
// This gets head from the fork choice service and saves head related items
// (ie root, block, state) to the local service cache.
func (s *Service) updateHead(ctx context.Context, balances []uint64) error {
ctx, span := trace.StartSpan(ctx, "blockchain.updateHead")
defer span.End()
// To get the proper head update, a node first checks whether its best justified checkpoint
// can become the justified checkpoint. This is designed to prevent bounce attacks and
// ensure the head is computed with the best justified info.
if s.bestJustifiedCheckpt.Epoch > s.justifiedCheckpt.Epoch {
s.justifiedCheckpt = s.bestJustifiedCheckpt
}
// Get head from the fork choice service.
f := s.finalizedCheckpt
j := s.justifiedCheckpt
headRoot, err := s.forkChoiceStore.Head(ctx, j.Epoch, bytesutil.ToBytes32(j.Root), balances, f.Epoch)
if err != nil {
return err
}
// Save head to the local service cache.
return s.saveHead(ctx, headRoot)
}
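// The promotion above is a plain epoch comparison: the best justified checkpoint
// replaces the justified checkpoint only when it is strictly newer. A minimal
// sketch of that rule (hypothetical helper, not part of the Service API):
func promoteJustifiedSketch(justified, bestJustified *ethpb.Checkpoint) *ethpb.Checkpoint {
	if bestJustified.Epoch > justified.Epoch {
		return bestJustified
	}
	return justified
}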
// This saves head info to the local service cache, it also saves the
// new head root to the DB.
func (s *Service) saveHead(ctx context.Context, headRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "blockchain.saveHead")
defer span.End()
// Do nothing if head hasn't changed.
if headRoot == s.headRoot() {
return nil
}
// If the head state is not available, just return nil.
// There's nothing to cache
if featureconfig.Get().NewStateMgmt {
if !s.stateGen.StateSummaryExists(ctx, headRoot) {
return nil
}
} else {
_, cached := s.initSyncState[headRoot]
if !cached && !s.beaconDB.HasState(ctx, headRoot) {
return nil
}
}
// Get the new head block from DB.
newHeadBlock, err := s.beaconDB.Block(ctx, headRoot)
if err != nil {
return err
}
if newHeadBlock == nil || newHeadBlock.Block == nil {
return errors.New("cannot save nil head block")
}
// Get the new head state from cached state or DB.
var newHeadState *state.BeaconState
if featureconfig.Get().NewStateMgmt {
newHeadState, err = s.stateGen.StateByRoot(ctx, headRoot)
if err != nil {
return errors.Wrap(err, "could not retrieve head state in DB")
}
} else {
var exists bool
newHeadState, exists = s.initSyncState[headRoot]
if !exists {
newHeadState, err = s.beaconDB.State(ctx, headRoot)
if err != nil {
return errors.Wrap(err, "could not retrieve head state in DB")
}
}
}
if newHeadState == nil {
return errors.New("cannot save nil head state")
}
// Cache the new head info.
s.setHead(headRoot, newHeadBlock, newHeadState)
// Save the new head root to DB.
if err := s.beaconDB.SaveHeadBlockRoot(ctx, headRoot); err != nil {
return errors.Wrap(err, "could not save head root in DB")
}
return nil
}
// This gets called to update the canonical root mapping. It does not save the head block
// root in the DB. With the initial-sync-cache-state flag, the finalized
// checkpoint is used as an anchor to resume sync, so the head no longer needs to be saved on a per-slot basis.
func (s *Service) saveHeadNoDB(ctx context.Context, b *ethpb.SignedBeaconBlock, r [32]byte) error {
if b == nil || b.Block == nil {
return errors.New("cannot save nil head block")
}
var headState *state.BeaconState
var err error
if featureconfig.Get().NewStateMgmt {
headState, err = s.stateGen.StateByRoot(ctx, r)
if err != nil {
return errors.Wrap(err, "could not retrieve head state in DB")
}
} else {
headState, err = s.beaconDB.State(ctx, r)
if err != nil {
return errors.Wrap(err, "could not retrieve head state in DB")
}
if headState == nil {
s.initSyncStateLock.RLock()
cachedHeadState, ok := s.initSyncState[r]
if ok {
headState = cachedHeadState
}
s.initSyncStateLock.RUnlock()
}
}
if headState == nil {
return errors.New("nil head state")
}
s.setHead(r, stateTrie.CopySignedBeaconBlock(b), headState)
return nil
}
// This sets head view object which is used to track the head slot, root, block and state.
func (s *Service) setHead(root [32]byte, block *ethpb.SignedBeaconBlock, state *state.BeaconState) {
s.headLock.Lock()
defer s.headLock.Unlock()
// This does a full copy of the block and state.
s.head = &head{
slot: block.Block.Slot,
root: root,
block: stateTrie.CopySignedBeaconBlock(block),
state: state.Copy(),
}
}
// This returns the head slot.
func (s *Service) headSlot() uint64 {
s.headLock.RLock()
defer s.headLock.RUnlock()
return s.head.slot
}
// This returns the head root.
// It does a full copy on head root for immutability.
func (s *Service) headRoot() [32]byte {
if s.head == nil {
return params.BeaconConfig().ZeroHash
}
s.headLock.RLock()
defer s.headLock.RUnlock()
return s.head.root
}
// This returns the head block.
// It does a full copy on head block for immutability.
func (s *Service) headBlock() *ethpb.SignedBeaconBlock {
s.headLock.RLock()
defer s.headLock.RUnlock()
return stateTrie.CopySignedBeaconBlock(s.head.block)
}
// This returns the head state.
// It does a full copy on head state for immutability.
func (s *Service) headState() *state.BeaconState {
s.headLock.RLock()
defer s.headLock.RUnlock()
return s.head.state.Copy()
}
// Returns true if head state exists.
func (s *Service) hasHeadState() bool {
s.headLock.RLock()
defer s.headLock.RUnlock()
return s.head != nil && s.head.state != nil
}
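// The accessors above follow a copy-on-read pattern under an RWMutex so callers
// can never mutate the cached head. A generic sketch of the same pattern,
// assuming a "sync" import (hypothetical type, not part of this package):
type guardedBytes struct {
	mu  sync.RWMutex
	val []byte
}

func (g *guardedBytes) get() []byte {
	g.mu.RLock()
	defer g.mu.RUnlock()
	out := make([]byte, len(g.val))
	copy(out, g.val) // Return a copy, never the shared slice.
	return out
}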


@@ -0,0 +1,72 @@
package blockchain
import (
"bytes"
"context"
"reflect"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
func TestSaveHead_Same(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := setupBeaconChain(t, db)
r := [32]byte{'A'}
service.head = &head{slot: 0, root: r}
if err := service.saveHead(context.Background(), r); err != nil {
t.Fatal(err)
}
if service.headSlot() != 0 {
t.Error("Head did not stay the same")
}
if service.headRoot() != r {
t.Error("Head did not stay the same")
}
}
func TestSaveHead_Different(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := setupBeaconChain(t, db)
oldRoot := [32]byte{'A'}
service.head = &head{slot: 0, root: oldRoot}
newHeadBlock := &ethpb.BeaconBlock{Slot: 1}
newHeadSignedBlock := &ethpb.SignedBeaconBlock{Block: newHeadBlock}
service.beaconDB.SaveBlock(context.Background(), newHeadSignedBlock)
newRoot, _ := ssz.HashTreeRoot(newHeadBlock)
headState, _ := state.InitializeFromProto(&pb.BeaconState{Slot: 1})
service.beaconDB.SaveState(context.Background(), headState, newRoot)
if err := service.saveHead(context.Background(), newRoot); err != nil {
t.Fatal(err)
}
if service.HeadSlot() != 1 {
t.Error("Head did not change")
}
cachedRoot, err := service.HeadRoot(context.Background())
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(cachedRoot, newRoot[:]) {
t.Error("Head did not change")
}
if !reflect.DeepEqual(service.headBlock(), newHeadSignedBlock) {
t.Error("Head did not change")
}
if !reflect.DeepEqual(service.headState().CloneInnerState(), headState.CloneInnerState()) {
t.Error("Head did not change")
}
}


@@ -1,57 +1,83 @@
package blockchain
import (
"bytes"
"encoding/hex"
"fmt"
"net/http"
"sort"
"strconv"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/sirupsen/logrus"
"github.com/emicklei/dot"
)
const latestSlotCount = 10
const template = `<html>
<head>
<script src="//cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/viz.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/full.render.js"></script>
<body>
<script type="application/javascript">
var graph = ` + "`%s`;" + `
var viz = new Viz();
viz.renderSVGElement(graph) // reading the graph.
.then(function(element) {
document.body.appendChild(element); // appends to document.
})
.catch(error => {
// Create a new Viz instance (@see Caveats page for more info)
viz = new Viz();
// Possibly display the error
console.error(error);
});
</script>
</head>
</body>
</html>`
// HeadsHandler is a handler to serve /heads page in metrics.
func (s *Service) HeadsHandler(w http.ResponseWriter, _ *http.Request) {
buf := new(bytes.Buffer)
if _, err := fmt.Fprintf(w, "\n %s\t%s\t", "Head slot", "Head root"); err != nil {
logrus.WithError(err).Error("Failed to render chain heads page")
return
// TreeHandler is a handler to serve /tree page in metrics.
func (s *Service) TreeHandler(w http.ResponseWriter, _ *http.Request) {
if s.headState() == nil {
if _, err := w.Write([]byte("Unavailable during initial syncing")); err != nil {
log.WithError(err).Error("Failed to render p2p info page")
}
}
if _, err := fmt.Fprintf(w, "\n %s\t%s\t", "---------", "---------"); err != nil {
logrus.WithError(err).Error("Failed to render chain heads page")
return
nodes := s.forkChoiceStore.Nodes()
graph := dot.NewGraph(dot.Directed)
graph.Attr("rankdir", "RL")
graph.Attr("labeljust", "l")
dotNodes := make([]*dot.Node, len(nodes))
avgBalance := uint64(averageBalance(s.headState().Balances()))
for i := len(nodes) - 1; i >= 0; i-- {
// Construct label for each node.
slot := strconv.Itoa(int(nodes[i].Slot))
weight := strconv.Itoa(int(nodes[i].Weight / 1e9)) // Convert unit Gwei to unit ETH.
votes := strconv.Itoa(int(nodes[i].Weight / 1e9 / avgBalance))
bestDescendent := strconv.Itoa(int(nodes[i].BestDescendent))
index := strconv.Itoa(int(i))
label := "slot: " + slot + "\n index: " + index + "\n bestDescendent: " + bestDescendent + "\n votes: " + votes + "\n weight: " + weight
var dotN dot.Node
if nodes[i].Parent != ^uint64(0) {
dotN = graph.Node(index).Box().Attr("label", label)
}
if nodes[i].Slot == s.headSlot() &&
nodes[i].BestDescendent == ^uint64(0) {
dotN = dotN.Attr("color", "green")
}
dotNodes[i] = &dotN
}
slots := s.latestHeadSlots()
for _, slot := range slots {
r := hex.EncodeToString(bytesutil.Trunc(s.canonicalRoots[uint64(slot)]))
if _, err := fmt.Fprintf(w, "\n %d\t\t%s\t", slot, r); err != nil {
logrus.WithError(err).Error("Failed to render chain heads page")
return
for i := len(nodes) - 1; i >= 0; i-- {
if nodes[i].Parent != ^uint64(0) && nodes[i].Parent < uint64(len(dotNodes)) {
graph.Edge(*dotNodes[i], *dotNodes[nodes[i].Parent])
}
}
w.WriteHeader(http.StatusOK)
if _, err := w.Write(buf.Bytes()); err != nil {
log.WithError(err).Error("Failed to render chain heads page")
w.Header().Set("Content-Type", "text/html")
if _, err := fmt.Fprintf(w, template, graph.String()); err != nil {
log.WithError(err).Error("Failed to render p2p info page")
}
}
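// The handler above renders the fork-choice nodes as a Graphviz digraph with
// emicklei/dot. A minimal, self-contained example of the same calls used above
// (two boxed nodes and one child-to-parent edge, rendered to DOT source):
func dotSketch() string {
	g := dot.NewGraph(dot.Directed)
	g.Attr("rankdir", "RL")
	parent := g.Node("0").Box().Attr("label", "slot: 0")
	child := g.Node("1").Box().Attr("label", "slot: 1")
	g.Edge(child, parent) // Edges point child -> parent, matching the loop above.
	return g.String()
}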
// This returns the latest head slots in a slice, up to latestSlotCount entries.
func (s *Service) latestHeadSlots() []int {
slots := make([]int, 0, len(s.canonicalRoots))
for k := range s.canonicalRoots {
slots = append(slots, int(k))
}
sort.Ints(slots)
if (len(slots)) > latestSlotCount {
return slots[len(slots)-latestSlotCount:]
}
return slots
}


@@ -0,0 +1,255 @@
package blockchain
import (
"context"
"sort"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
)
const maxCacheSize = 70
const initialSyncCacheSize = 45
const minimumCacheSize = initialSyncCacheSize / 3
func (s *Service) persistCachedStates(ctx context.Context, numOfStates int) error {
oldStates := make([]*stateTrie.BeaconState, 0, numOfStates)
// Collect the oldest epoch boundary states so they can be persisted to the DB.
for _, rt := range s.boundaryRoots[:numOfStates-minimumCacheSize] {
oldStates = append(oldStates, s.initSyncState[rt])
}
err := s.beaconDB.SaveStates(ctx, oldStates, s.boundaryRoots[:numOfStates-minimumCacheSize])
if err != nil {
return err
}
for _, rt := range s.boundaryRoots[:numOfStates-minimumCacheSize] {
delete(s.initSyncState, rt)
}
s.boundaryRoots = s.boundaryRoots[numOfStates-minimumCacheSize:]
return nil
}
// filter out boundary candidates from our currently processed batch of states.
func (s *Service) filterBoundaryCandidates(ctx context.Context, root [32]byte, postState *stateTrie.BeaconState) {
// Only trigger on epoch start.
if !helpers.IsEpochStart(postState.Slot()) {
return
}
stateSlice := make([][32]byte, 0, len(s.initSyncState))
// Add all cached state roots to the slice.
for rt := range s.initSyncState {
stateSlice = append(stateSlice, rt)
}
sort.Slice(stateSlice, func(i int, j int) bool {
return s.initSyncState[stateSlice[i]].Slot() < s.initSyncState[stateSlice[j]].Slot()
})
epochLength := params.BeaconConfig().SlotsPerEpoch
if len(s.boundaryRoots) > 0 {
// Retrieve previous boundary root.
previousBoundaryRoot := s.boundaryRoots[len(s.boundaryRoots)-1]
previousState, ok := s.initSyncState[previousBoundaryRoot]
if !ok {
// Remove the non-existent root and exit filtering.
s.boundaryRoots = s.boundaryRoots[:len(s.boundaryRoots)-1]
return
}
previousSlot := previousState.Slot()
// Round up slot number to account for skipped slots.
previousSlot = helpers.RoundUpToNearestEpoch(previousSlot)
if postState.Slot()-previousSlot >= epochLength {
targetSlot := postState.Slot()
tempRoots := s.loopThroughCandidates(stateSlice, previousBoundaryRoot, previousSlot, targetSlot)
s.boundaryRoots = append(s.boundaryRoots, tempRoots...)
}
}
s.boundaryRoots = append(s.boundaryRoots, root)
s.pruneOldStates()
s.pruneNonBoundaryStates()
}
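// "Round up slot number to account for skipped slots" above rounds a slot up to
// the next epoch boundary. A standalone sketch of that arithmetic using the
// usual integer-ceiling trick (a sketch, not the helpers package implementation):
func roundUpToEpochSketch(slot, slotsPerEpoch uint64) uint64 {
	return ((slot + slotsPerEpoch - 1) / slotsPerEpoch) * slotsPerEpoch
}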
// Loop through the provided candidate roots and filter for appropriate boundary roots.
func (s *Service) loopThroughCandidates(stateSlice [][32]byte, previousBoundaryRoot [32]byte,
previousSlot uint64, targetSlot uint64) [][32]byte {
tempRoots := [][32]byte{}
epochLength := params.BeaconConfig().SlotsPerEpoch
// Loop through current states to filter for valid boundary states.
for i := len(stateSlice) - 1; i >= 0 && stateSlice[i] != previousBoundaryRoot; i-- {
currentSlot := s.initSyncState[stateSlice[i]].Slot()
// Skip if the current slot is larger than the previous epoch
// boundary.
if currentSlot > targetSlot-epochLength {
continue
}
tempRoots = append(tempRoots, stateSlice[i])
// Switch target slot if the current slot is greater than
// 1 epoch boundary from the previously saved boundary slot.
if currentSlot > previousSlot+epochLength {
currentSlot = helpers.RoundUpToNearestEpoch(currentSlot)
targetSlot = currentSlot
continue
}
break
}
// Reverse to append the roots in ascending order corresponding
// to the respective slots.
tempRoots = bytesutil.ReverseBytes32Slice(tempRoots)
return tempRoots
}
// Prune states older than the current finalized checkpoint.
func (s *Service) pruneOldStates() {
prunedBoundaryRoots := [][32]byte{}
for _, rt := range s.boundaryRoots {
st, ok := s.initSyncState[rt]
// Skip non-existent roots.
if !ok {
continue
}
if st.Slot() < helpers.StartSlot(s.FinalizedCheckpt().Epoch) {
delete(s.initSyncState, rt)
continue
}
prunedBoundaryRoots = append(prunedBoundaryRoots, rt)
}
s.boundaryRoots = prunedBoundaryRoots
}
// pruneNonBoundaryStates removes non-boundary states from the cache.
func (s *Service) pruneNonBoundaryStates() {
boundaryMap := make(map[[32]byte]bool)
for i := range s.boundaryRoots {
boundaryMap[s.boundaryRoots[i]] = true
}
for rt := range s.initSyncState {
if !boundaryMap[rt] {
delete(s.initSyncState, rt)
}
}
}
func (s *Service) pruneOldNonFinalizedStates() {
stateSlice := make([][32]byte, 0, len(s.initSyncState))
// Collect all cached state roots.
for rt := range s.initSyncState {
stateSlice = append(stateSlice, rt)
}
// Sort by slots.
sort.Slice(stateSlice, func(i int, j int) bool {
return s.initSyncState[stateSlice[i]].Slot() < s.initSyncState[stateSlice[j]].Slot()
})
boundaryMap := make(map[[32]byte]bool)
for i := range s.boundaryRoots {
boundaryMap[s.boundaryRoots[i]] = true
}
for _, rt := range stateSlice[:initialSyncCacheSize] {
if boundaryMap[rt] {
continue
}
delete(s.initSyncState, rt)
}
}
func (s *Service) generateState(ctx context.Context, startRoot [32]byte, endRoot [32]byte) (*stateTrie.BeaconState, error) {
preState, err := s.beaconDB.State(ctx, startRoot)
if err != nil {
return nil, err
}
if preState == nil {
preState, err = s.stateGen.StateByRoot(ctx, startRoot)
if err != nil {
return nil, err
}
if preState == nil {
return nil, errors.New("finalized state does not exist in db")
}
}
if err := s.beaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return nil, err
}
var endBlock *ethpb.SignedBeaconBlock
if !featureconfig.Get().NoInitSyncBatchSaveBlocks && s.hasInitSyncBlock(endRoot) {
endBlock = s.getInitSyncBlock(endRoot)
s.clearInitSyncBlocks()
} else {
endBlock, err = s.beaconDB.Block(ctx, endRoot)
if err != nil {
return nil, err
}
}
if endBlock == nil {
return nil, errors.New("provided block root does not have block saved in the db")
}
log.Warnf("Generating missing state of slot %d and root %#x", endBlock.Block.Slot, endRoot)
blocks, err := s.stateGen.LoadBlocks(ctx, preState.Slot()+1, endBlock.Block.Slot, endRoot)
if err != nil {
return nil, errors.Wrap(err, "could not load the required blocks")
}
postState, err := s.stateGen.ReplayBlocks(ctx, preState, blocks, endBlock.Block.Slot)
if err != nil {
return nil, errors.Wrap(err, "could not replay the blocks to generate the resultant state")
}
return postState, nil
}
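// A stand-alone sketch of the replay shape used by generateState (hedged; stub types,
// not Prysm's stategen API): load the pre-state, then apply every block from
// preState.Slot()+1 through the end block's slot to reproduce the missing post-state.
package main

import "fmt"

type block struct{ slot uint64 }

type beaconState struct{ slot uint64 }

// applyBlock stands in for the full per-block state transition.
func applyBlock(st beaconState, b block) beaconState {
    return beaconState{slot: b.slot}
}

func main() {
    pre := beaconState{slot: 64}
    blocks := []block{{slot: 65}, {slot: 66}, {slot: 68}} // slot 67 was skipped
    post := pre
    for _, b := range blocks {
        post = applyBlock(post, b)
    }
    fmt.Println("regenerated state slot:", post.slot) // 68
}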
// This saves a beacon block to the initial sync blocks cache.
func (s *Service) saveInitSyncBlock(r [32]byte, b *ethpb.SignedBeaconBlock) {
s.initSyncBlocksLock.Lock()
defer s.initSyncBlocksLock.Unlock()
s.initSyncBlocks[r] = b
}
// This checks if a beacon block exists in the initial sync blocks cache using the root
// of the block.
func (s *Service) hasInitSyncBlock(r [32]byte) bool {
s.initSyncBlocksLock.RLock()
defer s.initSyncBlocksLock.RUnlock()
_, ok := s.initSyncBlocks[r]
return ok
}
// This retrieves a beacon block from the initial sync blocks cache using the root of
// the block.
func (s *Service) getInitSyncBlock(r [32]byte) *ethpb.SignedBeaconBlock {
s.initSyncBlocksLock.RLock()
defer s.initSyncBlocksLock.RUnlock()
b := s.initSyncBlocks[r]
return b
}
// This retrieves all the beacon blocks from the initial sync blocks cache; the returned
// blocks are unordered.
func (s *Service) getInitSyncBlocks() []*ethpb.SignedBeaconBlock {
s.initSyncBlocksLock.RLock()
defer s.initSyncBlocksLock.RUnlock()
blks := make([]*ethpb.SignedBeaconBlock, 0, len(s.initSyncBlocks))
for _, b := range s.initSyncBlocks {
blks = append(blks, b)
}
return blks
}
// This clears out the initial sync blocks cache.
func (s *Service) clearInitSyncBlocks() {
s.initSyncBlocksLock.Lock()
defer s.initSyncBlocksLock.Unlock()
s.initSyncBlocks = make(map[[32]byte]*ethpb.SignedBeaconBlock)
}
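// The helpers above implement a plain RWMutex-guarded map cache. This hedged,
// self-contained sketch (string keys standing in for [32]byte roots, ints for blocks)
// shows the same save / has / drain usage pattern outside of Prysm's types.
package main

import (
    "fmt"
    "sync"
)

type blockCache struct {
    mu     sync.RWMutex
    blocks map[string]int
}

func (c *blockCache) save(root string, blk int) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.blocks[root] = blk
}

func (c *blockCache) has(root string) bool {
    c.mu.RLock()
    defer c.mu.RUnlock()
    _, ok := c.blocks[root]
    return ok
}

// drain returns every cached block (unordered) and resets the cache, mirroring
// getInitSyncBlocks followed by clearInitSyncBlocks.
func (c *blockCache) drain() []int {
    c.mu.Lock()
    defer c.mu.Unlock()
    out := make([]int, 0, len(c.blocks))
    for _, b := range c.blocks {
        out = append(out, b)
    }
    c.blocks = make(map[string]int)
    return out
}

func main() {
    c := &blockCache{blocks: make(map[string]int)}
    c.save("root-a", 1)
    c.save("root-b", 2)
    fmt.Println(c.has("root-a"), len(c.drain()), c.has("root-a")) // true 2 false
}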

View File

@@ -0,0 +1,281 @@
package blockchain
import (
"context"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
messagediff "gopkg.in/d4l3k/messagediff.v1"
)
func TestFilterBoundaryCandidates_FilterCorrect(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
st, _ := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
for i := uint64(0); i < 500; i++ {
st.SetSlot(i)
root := [32]byte{}
copy(root[:], bytesutil.Bytes32(i))
service.initSyncState[root] = st.Copy()
if helpers.IsEpochStart(i) {
service.boundaryRoots = append(service.boundaryRoots, root)
}
}
lastIndex := len(service.boundaryRoots) - 1
for i := uint64(500); i < 2000; i++ {
st.SetSlot(i)
root := [32]byte{}
copy(root[:], bytesutil.Bytes32(i))
service.initSyncState[root] = st.Copy()
}
// Set current state.
latestSlot := helpers.RoundUpToNearestEpoch(2000)
st.SetSlot(latestSlot)
lastRoot := [32]byte{}
copy(lastRoot[:], bytesutil.Bytes32(latestSlot))
service.initSyncState[lastRoot] = st.Copy()
service.finalizedCheckpt = &ethpb.Checkpoint{
Epoch: 0,
Root: []byte{},
}
service.filterBoundaryCandidates(context.Background(), lastRoot, st)
if len(service.boundaryRoots[lastIndex+1:]) == 0 {
t.Fatal("Wanted non zero added boundary roots")
}
for _, rt := range service.boundaryRoots[lastIndex+1:] {
st, ok := service.initSyncState[rt]
if !ok {
t.Error("Root doen't exist in cache map")
continue
}
if !(helpers.IsEpochStart(st.Slot()) || helpers.IsEpochStart(st.Slot()-1) || helpers.IsEpochStart(st.Slot()+1)) {
t.Errorf("boundary roots not validly stored. They have slot %d", st.Slot())
}
}
}
func TestFilterBoundaryCandidates_HandleSkippedSlots(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
st, _ := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
for i := uint64(0); i < 500; i++ {
st.SetSlot(i)
root := [32]byte{}
copy(root[:], bytesutil.Bytes32(i))
service.initSyncState[root] = st.Copy()
if helpers.IsEpochStart(i) {
service.boundaryRoots = append(service.boundaryRoots, root)
}
}
lastIndex := len(service.boundaryRoots) - 1
for i := uint64(500); i < 2000; i++ {
st.SetSlot(i)
root := [32]byte{}
copy(root[:], bytesutil.Bytes32(i))
// Save states only for slots offset from the epoch start.
if helpers.IsEpochStart(i + 10) {
service.initSyncState[root] = st.Copy()
}
}
// Set current state.
latestSlot := helpers.RoundUpToNearestEpoch(2000)
st.SetSlot(latestSlot)
lastRoot := [32]byte{}
copy(lastRoot[:], bytesutil.Bytes32(latestSlot))
service.initSyncState[lastRoot] = st.Copy()
service.finalizedCheckpt = &ethpb.Checkpoint{
Epoch: 0,
Root: []byte{},
}
service.filterBoundaryCandidates(context.Background(), lastRoot, st)
if len(service.boundaryRoots[lastIndex+1:]) == 0 {
t.Fatal("Wanted non zero added boundary roots")
}
for _, rt := range service.boundaryRoots[lastIndex+1:] {
st, ok := service.initSyncState[rt]
if !ok {
t.Error("Root doen't exist in cache map")
continue
}
if st.Slot() >= 500 {
// Ignore head boundary root.
if st.Slot() == 2016 {
continue
}
if !helpers.IsEpochStart(st.Slot() + 10) {
t.Errorf("boundary roots not validly stored. They have slot %d "+
"instead of the offset from epoch start", st.Slot())
}
}
}
}
func TestPruneOldStates_AlreadyFinalized(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
st, _ := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
for i := uint64(100); i < 200; i++ {
st.SetSlot(i)
root := [32]byte{}
copy(root[:], bytesutil.Bytes32(i))
service.initSyncState[root] = st.Copy()
service.boundaryRoots = append(service.boundaryRoots, root)
}
finalizedEpoch := uint64(5)
service.finalizedCheckpt = &ethpb.Checkpoint{Epoch: finalizedEpoch}
service.pruneOldStates()
for _, rt := range service.boundaryRoots {
st, ok := service.initSyncState[rt]
if !ok {
t.Error("Root doen't exist in cache map")
continue
}
if st.Slot() < helpers.StartSlot(finalizedEpoch) {
t.Errorf("State with slot %d still exists and not pruned", st.Slot())
}
}
}
func TestPruneNonBoundary_CanPrune(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
st, _ := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
for i := uint64(0); i < 2000; i++ {
st.SetSlot(i)
root := [32]byte{}
copy(root[:], bytesutil.Bytes32(i))
service.initSyncState[root] = st.Copy()
if helpers.IsEpochStart(i) {
service.boundaryRoots = append(service.boundaryRoots, root)
}
}
service.pruneNonBoundaryStates()
for _, rt := range service.boundaryRoots {
st, ok := service.initSyncState[rt]
if !ok {
t.Error("Root doesn't exist in cache map")
continue
}
if !helpers.IsEpochStart(st.Slot()) {
t.Errorf("Non boundary state with slot %d still exists and not pruned", st.Slot())
}
}
}
func TestGenerateState_CorrectlyGenerated(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db, StateGen: stategen.New(db, cache.NewStateSummaryCache())}
service, err := NewService(context.Background(), cfg)
if err != nil {
t.Fatal(err)
}
beaconState, privs := testutil.DeterministicGenesisState(t, 32)
genesisBlock := blocks.NewGenesisBlock([]byte{})
bodyRoot, err := ssz.HashTreeRoot(genesisBlock.Block)
if err != nil {
t.Fatal(err)
}
beaconState.SetLatestBlockHeader(&ethpb.BeaconBlockHeader{
Slot: genesisBlock.Block.Slot,
ParentRoot: genesisBlock.Block.ParentRoot,
StateRoot: params.BeaconConfig().ZeroHash[:],
BodyRoot: bodyRoot[:],
})
beaconState.SetSlashings(make([]uint64, params.BeaconConfig().EpochsPerSlashingsVector))
cp := beaconState.CurrentJustifiedCheckpoint()
mockRoot := [32]byte{}
copy(mockRoot[:], "hello-world")
cp.Root = mockRoot[:]
beaconState.SetCurrentJustifiedCheckpoint(cp)
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{})
err = db.SaveBlock(context.Background(), genesisBlock)
if err != nil {
t.Fatal(err)
}
genRoot, err := ssz.HashTreeRoot(genesisBlock)
if err != nil {
t.Fatal(err)
}
err = db.SaveState(context.Background(), beaconState, genRoot)
if err != nil {
t.Fatal(err)
}
lastBlock := &ethpb.SignedBeaconBlock{}
for i := uint64(1); i < 10; i++ {
block, err := testutil.GenerateFullBlock(beaconState, privs, testutil.DefaultBlockGenConfig(), i)
if err != nil {
t.Fatal(err)
}
beaconState, err = state.ExecuteStateTransition(context.Background(), beaconState, block)
if err != nil {
t.Fatal(err)
}
err = db.SaveBlock(context.Background(), block)
if err != nil {
t.Fatal(err)
}
lastBlock = block
}
root, err := ssz.HashTreeRoot(lastBlock.Block)
if err != nil {
t.Fatal(err)
}
newState, err := service.generateState(context.Background(), genRoot, root)
if err != nil {
t.Fatal(err)
}
if !ssz.DeepEqual(newState.InnerStateUnsafe(), beaconState.InnerStateUnsafe()) {
diff, _ := messagediff.PrettyDiff(newState.InnerStateUnsafe(), beaconState.InnerStateUnsafe())
t.Errorf("Generated state is different from what is expected: %s", diff)
}
}

View File

@@ -1,17 +1,50 @@
package blockchain
import (
"fmt"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "blockchain")
// logs state transition related data every slot.
func logStateTransitionData(b *ethpb.BeaconBlock, r []byte) {
func logStateTransitionData(b *ethpb.BeaconBlock) {
log.WithFields(logrus.Fields{
"slot": b.Slot,
"attestations": len(b.Body.Attestations),
"deposits": len(b.Body.Deposits),
"slot": b.Slot,
"attestations": len(b.Body.Attestations),
"deposits": len(b.Body.Deposits),
"attesterSlashings": len(b.Body.AttesterSlashings),
}).Info("Finished applying state transition")
}
func logEpochData(beaconState *stateTrie.BeaconState) {
log.WithFields(logrus.Fields{
"epoch": helpers.CurrentEpoch(beaconState),
"finalizedEpoch": beaconState.FinalizedCheckpointEpoch(),
"justifiedEpoch": beaconState.CurrentJustifiedCheckpoint().Epoch,
"previousJustifiedEpoch": beaconState.PreviousJustifiedCheckpoint().Epoch,
}).Info("Starting next epoch")
activeVals, err := helpers.ActiveValidatorIndices(beaconState, helpers.CurrentEpoch(beaconState))
if err != nil {
log.WithError(err).Error("Could not get active validator indices")
return
}
log.WithFields(logrus.Fields{
"totalValidators": len(beaconState.Validators()),
"activeValidators": len(activeVals),
"averageBalance": fmt.Sprintf("%.5f ETH", averageBalance(beaconState.Balances())),
}).Info("Validator registry information")
}
func averageBalance(balances []uint64) float64 {
total := uint64(0)
for i := 0; i < len(balances); i++ {
total += balances[i]
}
return float64(total) / float64(len(balances)) / float64(params.BeaconConfig().GweiPerEth)
}
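// A hedged worked example of averageBalance above, assuming GweiPerEth is 1e9 (the
// real value comes from params.BeaconConfig()). The helper below mirrors the function
// with a hard-coded constant so it runs stand-alone.
package main

import "fmt"

func averageBalanceETH(balancesGwei []uint64) float64 {
    const gweiPerEth = 1e9
    total := uint64(0)
    for _, b := range balancesGwei {
        total += b
    }
    return float64(total) / float64(len(balancesGwei)) / gweiPerEth
}

func main() {
    // Two validators at 32 ETH and 31.5 ETH average out to 31.75 ETH.
    fmt.Printf("%.5f ETH\n", averageBalanceETH([]uint64{32_000_000_000, 31_500_000_000}))
}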

View File

@@ -3,7 +3,11 @@ package blockchain
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
var (
@@ -15,38 +19,10 @@ var (
Name: "beacon_head_slot",
Help: "Slot of the head block of the beacon chain",
})
beaconHeadRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_head_root",
Help: "Root of the head block of the beacon chain, it returns the lowest 8 bytes interpreted as little endian",
})
competingAtts = promauto.NewCounter(prometheus.CounterOpts{
Name: "competing_attestations",
Help: "The # of attestations received and processed from a competing chain",
})
competingBlks = promauto.NewCounter(prometheus.CounterOpts{
Name: "competing_blocks",
Help: "The # of blocks received and processed from a competing chain",
})
processedBlkNoPubsub = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_no_pubsub_block_counter",
Help: "The # of processed block without pubsub, this usually means the blocks from sync",
})
processedBlkNoPubsubForkchoice = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_no_pubsub_forkchoice_block_counter",
Help: "The # of processed block without pubsub and forkchoice, this means indicate blocks from initial sync",
})
processedBlk = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_block_counter",
Help: "The # of total processed in block chain service, with fork choice and pubsub",
})
processedAttNoPubsub = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_no_pubsub_attestation_counter",
Help: "The # of processed attestation without pubsub, this usually means the attestations from sync",
})
processedAtt = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_attestation_counter",
Help: "The # of processed attestation with pubsub and fork choice, this ususally means attestations from rpc",
})
headFinalizedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "head_finalized_epoch",
Help: "Last finalized epoch of the head state",
@@ -55,14 +31,150 @@ var (
Name: "head_finalized_root",
Help: "Last finalized root of the head state",
})
beaconFinalizedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_finalized_epoch",
Help: "Last finalized epoch of the processed state",
})
beaconFinalizedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_finalized_root",
Help: "Last finalized root of the processed state",
})
beaconCurrentJustifiedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_current_justified_epoch",
Help: "Current justified epoch of the processed state",
})
beaconCurrentJustifiedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_current_justified_root",
Help: "Current justified root of the processed state",
})
beaconPrevJustifiedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_previous_justified_epoch",
Help: "Previous justified epoch of the processed state",
})
beaconPrevJustifiedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_previous_justified_root",
Help: "Previous justified root of the processed state",
})
validatorsCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validator_count",
Help: "The total number of validators",
}, []string{"state"})
validatorsBalance = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validators_total_balance",
Help: "The total balance of validators, in GWei",
}, []string{"state"})
validatorsEffectiveBalance = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validators_total_effective_balance",
Help: "The total effective balance of validators, in GWei",
}, []string{"state"})
currentEth1DataDepositCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "current_eth1_data_deposit_count",
Help: "The current eth1 deposit count in the last processed state eth1data field.",
})
totalEligibleBalances = promauto.NewGauge(prometheus.GaugeOpts{
Name: "total_eligible_balances",
Help: "The total amount of ether, in gwei, that has been used in voting attestation target of previous epoch",
})
totalVotedTargetBalances = promauto.NewGauge(prometheus.GaugeOpts{
Name: "total_voted_target_balances",
Help: "The total amount of ether, in gwei, that is eligible for voting of previous epoch",
})
)
func (s *Service) reportSlotMetrics(currentSlot uint64) {
// reportSlotMetrics reports slot related metrics.
func reportSlotMetrics(currentSlot uint64, headSlot uint64, finalizedCheckpoint *ethpb.Checkpoint) {
beaconSlot.Set(float64(currentSlot))
beaconHeadSlot.Set(float64(s.HeadSlot()))
beaconHeadRoot.Set(float64(bytesutil.ToLowInt64(s.HeadRoot())))
if s.headState != nil {
headFinalizedEpoch.Set(float64(s.headState.FinalizedCheckpoint.Epoch))
headFinalizedRoot.Set(float64(bytesutil.ToLowInt64(s.headState.FinalizedCheckpoint.Root)))
beaconHeadSlot.Set(float64(headSlot))
if finalizedCheckpoint != nil {
headFinalizedEpoch.Set(float64(finalizedCheckpoint.Epoch))
headFinalizedRoot.Set(float64(bytesutil.ToLowInt64(finalizedCheckpoint.Root)))
}
}
// reportEpochMetrics reports epoch related metrics.
func reportEpochMetrics(state *stateTrie.BeaconState) {
currentEpoch := state.Slot() / params.BeaconConfig().SlotsPerEpoch
// Validator instances
pendingInstances := 0
activeInstances := 0
slashingInstances := 0
slashedInstances := 0
exitingInstances := 0
exitedInstances := 0
// Validator balances
pendingBalance := uint64(0)
activeBalance := uint64(0)
activeEffectiveBalance := uint64(0)
exitingBalance := uint64(0)
exitingEffectiveBalance := uint64(0)
slashingBalance := uint64(0)
slashingEffectiveBalance := uint64(0)
for i, validator := range state.Validators() {
bal, err := state.BalanceAtIndex(uint64(i))
if err != nil {
continue
}
if validator.Slashed {
if currentEpoch < validator.ExitEpoch {
slashingInstances++
slashingBalance += bal
slashingEffectiveBalance += validator.EffectiveBalance
} else {
slashedInstances++
}
continue
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
if currentEpoch < validator.ExitEpoch {
exitingInstances++
exitingBalance += bal
exitingEffectiveBalance += validator.EffectiveBalance
} else {
exitedInstances++
}
continue
}
if currentEpoch < validator.ActivationEpoch {
pendingInstances++
pendingBalance += bal
continue
}
activeInstances++
activeBalance += bal
activeEffectiveBalance += validator.EffectiveBalance
}
validatorsCount.WithLabelValues("Pending").Set(float64(pendingInstances))
validatorsCount.WithLabelValues("Active").Set(float64(activeInstances))
validatorsCount.WithLabelValues("Exiting").Set(float64(exitingInstances))
validatorsCount.WithLabelValues("Exited").Set(float64(exitedInstances))
validatorsCount.WithLabelValues("Slashing").Set(float64(slashingInstances))
validatorsCount.WithLabelValues("Slashed").Set(float64(slashedInstances))
validatorsBalance.WithLabelValues("Pending").Set(float64(pendingBalance))
validatorsBalance.WithLabelValues("Active").Set(float64(activeBalance))
validatorsBalance.WithLabelValues("Exiting").Set(float64(exitingBalance))
validatorsBalance.WithLabelValues("Slashing").Set(float64(slashingBalance))
validatorsEffectiveBalance.WithLabelValues("Active").Set(float64(activeEffectiveBalance))
validatorsEffectiveBalance.WithLabelValues("Exiting").Set(float64(exitingEffectiveBalance))
validatorsEffectiveBalance.WithLabelValues("Slashing").Set(float64(slashingEffectiveBalance))
// Current justified checkpoint.
beaconCurrentJustifiedEpoch.Set(float64(state.CurrentJustifiedCheckpoint().Epoch))
beaconCurrentJustifiedRoot.Set(float64(bytesutil.ToLowInt64(state.CurrentJustifiedCheckpoint().Root)))
// Previous justified checkpoint.
beaconPrevJustifiedEpoch.Set(float64(state.PreviousJustifiedCheckpoint().Epoch))
beaconPrevJustifiedRoot.Set(float64(bytesutil.ToLowInt64(state.PreviousJustifiedCheckpoint().Root)))
// Finalized checkpoint.
beaconFinalizedEpoch.Set(float64(state.FinalizedCheckpointEpoch()))
beaconFinalizedRoot.Set(float64(bytesutil.ToLowInt64(state.FinalizedCheckpoint().Root)))
currentEth1DataDepositCount.Set(float64(state.Eth1Data().DepositCount))
if precompute.Balances != nil {
totalEligibleBalances.Set(float64(precompute.Balances.PrevEpoch))
totalVotedTargetBalances.Set(float64(precompute.Balances.PrevEpochTargetAttesters))
}
}
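// The per-validator walk in reportEpochMetrics reduces to the decision tree below.
// Hedged, stand-alone sketch with a stub validator struct; farFutureEpoch is assumed
// to be the max-uint64 sentinel used for "no exit scheduled".
package main

import "fmt"

const farFutureEpoch = ^uint64(0)

type validator struct {
    slashed                    bool
    activationEpoch, exitEpoch uint64
}

func status(v validator, currentEpoch uint64) string {
    switch {
    case v.slashed && currentEpoch < v.exitEpoch:
        return "Slashing"
    case v.slashed:
        return "Slashed"
    case v.exitEpoch != farFutureEpoch && currentEpoch < v.exitEpoch:
        return "Exiting"
    case v.exitEpoch != farFutureEpoch:
        return "Exited"
    case currentEpoch < v.activationEpoch:
        return "Pending"
    default:
        return "Active"
    }
}

func main() {
    now := uint64(10)
    fmt.Println(status(validator{activationEpoch: 12, exitEpoch: farFutureEpoch}, now)) // Pending
    fmt.Println(status(validator{activationEpoch: 0, exitEpoch: farFutureEpoch}, now))  // Active
    fmt.Println(status(validator{activationEpoch: 0, exitEpoch: 15}, now))              // Exiting
    fmt.Println(status(validator{slashed: true, exitEpoch: 15}, now))                   // Slashing
}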

View File

@@ -0,0 +1,132 @@
package blockchain
import (
"context"
"fmt"
"time"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/flags"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"go.opencensus.io/trace"
)
// ErrTargetRootNotInDB returns when the target block root of an attestation cannot be found in the
// beacon database.
var ErrTargetRootNotInDB = errors.New("target root does not exist in db")
// onAttestation is called whenever an attestation is received, verifies the attestation is valid and saves
// it to the DB.
//
// Spec pseudocode definition:
// def on_attestation(store: Service, attestation: Attestation) -> None:
// """
// Run ``on_attestation`` upon receiving a new ``attestation`` from either within a block or directly on the wire.
//
// An ``attestation`` that is asserted as invalid may be valid at a later time,
// consider scheduling it for later processing in such case.
// """
// target = attestation.data.target
//
// # Attestations must be from the current or previous epoch
// current_epoch = compute_epoch_at_slot(get_current_slot(store))
// # Use GENESIS_EPOCH for previous when genesis to avoid underflow
// previous_epoch = current_epoch - 1 if current_epoch > GENESIS_EPOCH else GENESIS_EPOCH
// assert target.epoch in [current_epoch, previous_epoch]
// assert target.epoch == compute_epoch_at_slot(attestation.data.slot)
//
// # Attestation target must be for a known block. If target block is unknown, delay consideration until the block is found
// assert target.root in store.blocks
// # Attestations cannot be from future epochs. If they are, delay consideration until the epoch arrives
// base_state = store.block_states[target.root].copy()
// assert store.time >= base_state.genesis_time + compute_start_slot_at_epoch(target.epoch) * SECONDS_PER_SLOT
//
// # Attestations must be for a known block. If block is unknown, delay consideration until the block is found
// assert attestation.data.beacon_block_root in store.blocks
// # Attestations must not be for blocks in the future. If not, the attestation should not be considered
// assert store.blocks[attestation.data.beacon_block_root].slot <= attestation.data.slot
//
// # Store target checkpoint state if not yet seen
// if target not in store.checkpoint_states:
// process_slots(base_state, compute_start_slot_at_epoch(target.epoch))
// store.checkpoint_states[target] = base_state
// target_state = store.checkpoint_states[target]
//
// # Attestations can only affect the fork choice of subsequent slots.
// # Delay consideration in the fork choice until their slot is in the past.
// assert store.time >= (attestation.data.slot + 1) * SECONDS_PER_SLOT
//
// # Get state at the `target` to validate attestation and calculate the committees
// indexed_attestation = get_indexed_attestation(target_state, attestation)
// assert is_valid_indexed_attestation(target_state, indexed_attestation)
//
// # Update latest messages
// for i in indexed_attestation.attesting_indices:
// if i not in store.latest_messages or target.epoch > store.latest_messages[i].epoch:
// store.latest_messages[i] = LatestMessage(epoch=target.epoch, root=attestation.data.beacon_block_root)
func (s *Service) onAttestation(ctx context.Context, a *ethpb.Attestation) ([]uint64, error) {
ctx, span := trace.StartSpan(ctx, "blockchain.onAttestation")
defer span.End()
tgt := stateTrie.CopyCheckpoint(a.Data.Target)
tgtSlot := helpers.StartSlot(tgt.Epoch)
if helpers.SlotToEpoch(a.Data.Slot) != a.Data.Target.Epoch {
return nil, fmt.Errorf("data slot is not in the same epoch as target %d != %d", helpers.SlotToEpoch(a.Data.Slot), a.Data.Target.Epoch)
}
// Verify beacon node has seen the target block before.
if !s.hasBlock(ctx, bytesutil.ToBytes32(tgt.Root)) {
return nil, ErrTargetRootNotInDB
}
// Retrieve attestation's data beacon block pre state. Advance pre state to latest epoch if necessary and
// save it to the cache.
baseState, err := s.getAttPreState(ctx, tgt)
if err != nil {
return nil, err
}
genesisTime := baseState.GenesisTime()
// Verify attestation target is from current epoch or previous epoch.
if err := s.verifyAttTargetEpoch(ctx, genesisTime, uint64(time.Now().Unix()), tgt); err != nil {
return nil, err
}
// Verify Attestations cannot be from future epochs.
if err := helpers.VerifySlotTime(genesisTime, tgtSlot); err != nil {
return nil, errors.Wrap(err, "could not verify attestation target slot")
}
// Verify attestation beacon block is known and not from the future.
if err := s.verifyBeaconBlock(ctx, a.Data); err != nil {
return nil, errors.Wrap(err, "could not verify attestation beacon block")
}
// Verify attestations can only affect the fork choice of subsequent slots.
if err := helpers.VerifySlotTime(genesisTime, a.Data.Slot+1); err != nil {
return nil, err
}
// Use the target state to validate the attestation and calculate the committees.
indexedAtt, err := s.verifyAttestation(ctx, baseState, a)
if err != nil {
return nil, err
}
// Only save attestation in DB for archival node.
if flags.Get().EnableArchive {
if err := s.beaconDB.SaveAttestation(ctx, a); err != nil {
return nil, err
}
}
// Update forkchoice store with the new attestation for updating weight.
s.forkChoiceStore.ProcessAttestation(ctx, indexedAtt.AttestingIndices, bytesutil.ToBytes32(a.Data.BeaconBlockRoot), a.Data.Target.Epoch)
return indexedAtt.AttestingIndices, nil
}
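// onAttestation applies its checks strictly in order and returns on the first failure.
// Hedged, stand-alone sketch of that short-circuiting shape; the check functions here
// are hypothetical stand-ins, not Prysm's validators.
package main

import (
    "errors"
    "fmt"
)

type attestation struct{ slot, targetEpoch uint64 }

func main() {
    const slotsPerEpoch = uint64(32)
    a := attestation{slot: 65, targetEpoch: 2}

    checks := []struct {
        name string
        run  func(attestation) error
    }{
        {"slot is in the target epoch", func(a attestation) error {
            if a.slot/slotsPerEpoch != a.targetEpoch {
                return errors.New("data slot is not in the same epoch as target")
            }
            return nil
        }},
        // The real method also checks that the target block is known, that the target
        // epoch is current or previous, and that the attestation is not from the future.
        {"target block is known (stubbed)", func(attestation) error { return nil }},
    }
    for _, c := range checks {
        if err := c.run(a); err != nil {
            fmt.Println("rejected:", c.name+":", err)
            return
        }
    }
    fmt.Println("attestation accepted; indices would feed fork choice")
}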

View File

@@ -0,0 +1,160 @@
package blockchain
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
)
// getAttPreState retrieves the att pre state by either from the cache or the DB.
func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (*stateTrie.BeaconState, error) {
s.checkpointStateLock.Lock()
defer s.checkpointStateLock.Unlock()
cachedState, err := s.checkpointState.StateByCheckpoint(c)
if err != nil {
return nil, errors.Wrap(err, "could not get cached checkpoint state")
}
if cachedState != nil {
return cachedState, nil
}
var baseState *stateTrie.BeaconState
if featureconfig.Get().NewStateMgmt {
baseState, err = s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(c.Root))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", helpers.StartSlot(c.Epoch))
}
} else {
if featureconfig.Get().CheckHeadState {
headRoot, err := s.HeadRoot(ctx)
if err != nil {
return nil, errors.Wrapf(err, "could not get head root")
}
if bytes.Equal(headRoot, c.Root) {
st, err := s.HeadState(ctx)
if err != nil {
return nil, errors.Wrapf(err, "could not get head state")
}
if err := s.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: c,
State: st.Copy(),
}); err != nil {
return nil, errors.Wrap(err, "could not saved checkpoint state to cache")
}
return st, nil
}
}
baseState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(c.Root))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", helpers.StartSlot(c.Epoch))
}
}
if baseState == nil {
return nil, fmt.Errorf("pre state of target block %d does not exist", helpers.StartSlot(c.Epoch))
}
if helpers.StartSlot(c.Epoch) > baseState.Slot() {
baseState, err = state.ProcessSlots(ctx, baseState, helpers.StartSlot(c.Epoch))
if err != nil {
return nil, errors.Wrapf(err, "could not process slots up to %d", helpers.StartSlot(c.Epoch))
}
}
if err := s.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: c,
State: baseState.Copy(),
}); err != nil {
return nil, errors.Wrap(err, "could not saved checkpoint state to cache")
}
return baseState, nil
}
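// getAttPreState advances the base state to the checkpoint epoch's start slot only
// when the state is behind it. Hedged worked example of that arithmetic, assuming
// SlotsPerEpoch = 32 (the real value comes from params.BeaconConfig()).
package main

import "fmt"

func main() {
    const slotsPerEpoch = uint64(32)
    checkpointEpoch := uint64(3)
    startSlot := checkpointEpoch * slotsPerEpoch // 96, as helpers.StartSlot would compute

    for _, stateSlot := range []uint64{90, 96, 120} {
        if startSlot > stateSlot {
            fmt.Printf("state at slot %d: run process_slots up to slot %d\n", stateSlot, startSlot)
        } else {
            fmt.Printf("state at slot %d: already at or past the checkpoint start, use as is\n", stateSlot)
        }
    }
}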
// verifyAttTargetEpoch validates attestation is from the current or previous epoch.
func (s *Service) verifyAttTargetEpoch(ctx context.Context, genesisTime uint64, nowTime uint64, c *ethpb.Checkpoint) error {
currentSlot := (nowTime - genesisTime) / params.BeaconConfig().SecondsPerSlot
currentEpoch := helpers.SlotToEpoch(currentSlot)
var prevEpoch uint64
// Prevent previous epoch underflow.
if currentEpoch > 1 {
prevEpoch = currentEpoch - 1
}
if c.Epoch != prevEpoch && c.Epoch != currentEpoch {
return fmt.Errorf("target epoch %d does not match current epoch %d or prev epoch %d", c.Epoch, currentEpoch, prevEpoch)
}
return nil
}
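// A hedged worked example of the epoch-window check above, assuming mainnet-style
// constants SecondsPerSlot = 12 and SlotsPerEpoch = 32; the real values come from
// params.BeaconConfig(), and the genesis time is made up for the example.
package main

import "fmt"

func main() {
    const (
        secondsPerSlot = uint64(12)
        slotsPerEpoch  = uint64(32)
    )
    genesisTime := uint64(1_600_000_000)
    nowTime := genesisTime + 70*secondsPerSlot // 70 slots after genesis

    currentSlot := (nowTime - genesisTime) / secondsPerSlot // 70
    currentEpoch := currentSlot / slotsPerEpoch             // 2
    prevEpoch := uint64(0)
    if currentEpoch > 1 {
        prevEpoch = currentEpoch - 1 // 1
    }
    for _, targetEpoch := range []uint64{0, 1, 2} {
        ok := targetEpoch == currentEpoch || targetEpoch == prevEpoch
        fmt.Printf("target epoch %d accepted: %v\n", targetEpoch, ok) // 0:false 1:true 2:true
    }
}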
// verifyBeaconBlock verifies beacon head block is known and not from the future.
func (s *Service) verifyBeaconBlock(ctx context.Context, data *ethpb.AttestationData) error {
b, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(data.BeaconBlockRoot))
if err != nil {
return err
}
if b == nil || b.Block == nil {
return fmt.Errorf("beacon block %#x does not exist", bytesutil.Trunc(data.BeaconBlockRoot))
}
if b.Block.Slot > data.Slot {
return fmt.Errorf("could not process attestation for future block, block.Slot=%d > attestation.Data.Slot=%d", b.Block.Slot, data.Slot)
}
return nil
}
// verifyAttestation validates input attestation is valid.
func (s *Service) verifyAttestation(ctx context.Context, baseState *stateTrie.BeaconState, a *ethpb.Attestation) (*ethpb.IndexedAttestation, error) {
committee, err := helpers.BeaconCommitteeFromState(baseState, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return nil, err
}
indexedAtt := attestationutil.ConvertToIndexed(ctx, a, committee)
if err := blocks.VerifyIndexedAttestation(ctx, baseState, indexedAtt); err != nil {
if err == blocks.ErrSigFailedToVerify {
// When the signature fails to verify, check whether there is a difference in committees due to
// different seeds.
var aState *stateTrie.BeaconState
var err error
if featureconfig.Get().NewStateMgmt {
aState, err = s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
if err != nil {
return nil, err
}
} else {
aState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
if err != nil {
return nil, err
}
}
epoch := helpers.SlotToEpoch(a.Data.Slot)
origSeed, err := helpers.Seed(baseState, epoch, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return nil, errors.Wrap(err, "could not get original seed")
}
aSeed, err := helpers.Seed(aState, epoch, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return nil, errors.Wrap(err, "could not get attester's seed")
}
if origSeed != aSeed {
return nil, fmt.Errorf("could not verify indexed attestation due to differences in seeds: %v != %v",
hex.EncodeToString(bytesutil.Trunc(origSeed[:])), hex.EncodeToString(bytesutil.Trunc(aSeed[:])))
}
}
return nil, errors.Wrap(err, "could not verify indexed attestation")
}
return indexedAtt, nil
}

View File

@@ -1,4 +1,4 @@
package forkchoice
package blockchain
import (
"context"
@@ -7,12 +7,15 @@ import (
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -22,9 +25,13 @@ func TestStore_OnAttestation(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
cfg := &Config{BeaconDB: db, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
_, err := blockTree1(db, []byte{'g'})
_, err = blockTree1(db, []byte{'g'})
if err != nil {
t.Fatal(err)
}
@@ -40,7 +47,12 @@ func TestStore_OnAttestation(t *testing.T) {
t.Fatal(err)
}
BlkWithStateBadAttRoot, _ := ssz.HashTreeRoot(BlkWithStateBadAtt.Block)
if err := store.db.SaveState(ctx, &pb.BeaconState{}, BlkWithStateBadAttRoot); err != nil {
s, err := beaconstate.InitializeFromProto(&pb.BeaconState{})
if err := s.SetSlot(100 * params.BeaconConfig().SlotsPerEpoch); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveState(ctx, s, BlkWithStateBadAttRoot); err != nil {
t.Fatal(err)
}
@@ -49,14 +61,15 @@ func TestStore_OnAttestation(t *testing.T) {
t.Fatal(err)
}
BlkWithValidStateRoot, _ := ssz.HashTreeRoot(BlkWithValidState.Block)
if err := store.db.SaveState(ctx, &pb.BeaconState{
s, _ = stateTrie.InitializeFromProto(&pb.BeaconState{
Fork: &pb.Fork{
Epoch: 0,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}, BlkWithValidStateRoot); err != nil {
})
if err := service.beaconDB.SaveState(ctx, s, BlkWithValidStateRoot); err != nil {
t.Fatal(err)
}
@@ -92,7 +105,7 @@ func TestStore_OnAttestation(t *testing.T) {
name: "process attestation doesn't match current epoch",
a: &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}},
s: &pb.BeaconState{},
s: &pb.BeaconState{Slot: 100 * params.BeaconConfig().SlotsPerEpoch},
wantErr: true,
wantErrString: "does not match current epoch",
},
@@ -100,17 +113,10 @@ func TestStore_OnAttestation(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := store.GenesisStore(
ctx,
&ethpb.Checkpoint{Root: BlkWithValidStateRoot[:]},
&ethpb.Checkpoint{Root: BlkWithValidStateRoot[:]}); err != nil {
t.Fatal(err)
}
err := store.OnAttestation(ctx, tt.a)
_, err := service.onAttestation(ctx, tt.a)
if tt.wantErr {
if !strings.Contains(err.Error(), tt.wantErrString) {
t.Errorf("Store.OnAttestation() error = %v, wantErr = %v", err, tt.wantErrString)
t.Errorf("Store.onAttestation() error = %v, wantErr = %v", err, tt.wantErrString)
}
} else {
t.Error(err)
@@ -125,106 +131,92 @@ func TestStore_SaveCheckpointState(t *testing.T) {
defer testDB.TeardownDB(t, db)
params.UseDemoBeaconConfig()
store := NewForkChoiceService(ctx, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
s := &pb.BeaconState{
s, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Fork: &pb.Fork{
Epoch: 0,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
StateRoots: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
StateRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
BlockRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
LatestBlockHeader: &ethpb.BeaconBlockHeader{},
JustificationBits: []byte{0},
Slashings: make([]uint64, params.BeaconConfig().EpochsPerSlashingsVector),
FinalizedCheckpoint: &ethpb.Checkpoint{},
}
FinalizedCheckpoint: &ethpb.Checkpoint{Root: bytesutil.PadTo([]byte{'A'}, 32)},
})
r := [32]byte{'g'}
if err := store.db.SaveState(ctx, s, r); err != nil {
t.Fatal(err)
}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{Root: r[:]}, &ethpb.Checkpoint{Root: r[:]}); err != nil {
if err := service.beaconDB.SaveState(ctx, s, r); err != nil {
t.Fatal(err)
}
service.justifiedCheckpt = &ethpb.Checkpoint{Root: r[:]}
service.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: r[:]}
service.finalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
service.prevFinalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'A'}}
s1, err := store.saveCheckpointState(ctx, s, cp1)
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'A'}, 32)}
service.beaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'}))
s1, err := service.getAttPreState(ctx, cp1)
if err != nil {
t.Fatal(err)
}
if s1.Slot != 1*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot)
if s1.Slot() != 1*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot())
}
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: []byte{'B'}}
s2, err := store.saveCheckpointState(ctx, s, cp2)
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'B'}, 32)}
service.beaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'B'}))
s2, err := service.getAttPreState(ctx, cp2)
if err != nil {
t.Fatal(err)
}
if s2.Slot != 2*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot)
if s2.Slot() != 2*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot())
}
s1, err = store.saveCheckpointState(ctx, nil, cp1)
s1, err = service.getAttPreState(ctx, cp1)
if err != nil {
t.Fatal(err)
}
if s1.Slot != 1*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot)
if s1.Slot() != 1*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot())
}
s1, err = store.checkpointState.StateByCheckpoint(cp1)
s1, err = service.checkpointState.StateByCheckpoint(cp1)
if err != nil {
t.Fatal(err)
}
if s1.Slot != 1*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot)
if s1.Slot() != 1*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot())
}
s2, err = store.checkpointState.StateByCheckpoint(cp2)
s2, err = service.checkpointState.StateByCheckpoint(cp2)
if err != nil {
t.Fatal(err)
}
if s2.Slot != 2*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot)
if s2.Slot() != 2*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot())
}
s.Slot = params.BeaconConfig().SlotsPerEpoch + 1
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{Root: r[:]}, &ethpb.Checkpoint{Root: r[:]}); err != nil {
t.Fatal(err)
}
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'C'}}
s3, err := store.saveCheckpointState(ctx, s, cp3)
s.SetSlot(params.BeaconConfig().SlotsPerEpoch + 1)
service.justifiedCheckpt = &ethpb.Checkpoint{Root: r[:]}
service.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: r[:]}
service.finalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
service.prevFinalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'C'}, 32)}
service.beaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'C'}))
s3, err := service.getAttPreState(ctx, cp3)
if err != nil {
t.Fatal(err)
}
if s3.Slot != s.Slot {
t.Errorf("Wanted state slot: %d, got: %d", s.Slot, s3.Slot)
}
}
func TestStore_ReturnAggregatedAttestation(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
a1 := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0x02}}
err := store.db.SaveAttestation(ctx, a1)
if err != nil {
t.Fatal(err)
}
a2 := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0x03}}
saved, err := store.aggregatedAttestations(ctx, a2)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a2}, saved) {
t.Error("did not retrieve saved attestation")
if s3.Slot() != s.Slot() {
t.Errorf("Wanted state slot: %d, got: %d", s.Slot(), s3.Slot())
}
}
@@ -233,31 +225,37 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
epoch := uint64(1)
baseState, _ := testutil.DeterministicGenesisState(t, 1)
baseState.Slot = epoch * params.BeaconConfig().SlotsPerEpoch
baseState.SetSlot(epoch * params.BeaconConfig().SlotsPerEpoch)
checkpoint := &ethpb.Checkpoint{Epoch: epoch}
returned, err := store.saveCheckpointState(ctx, baseState, checkpoint)
service.beaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(checkpoint.Root))
returned, err := service.getAttPreState(ctx, checkpoint)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(baseState, returned) {
if baseState.Slot() != returned.Slot() {
t.Error("Incorrectly returned base state")
}
cached, err := store.checkpointState.StateByCheckpoint(checkpoint)
cached, err := service.checkpointState.StateByCheckpoint(checkpoint)
if err != nil {
t.Fatal(err)
}
if cached != nil {
t.Error("State shouldn't have been cached")
if cached == nil {
t.Error("State should have been cached")
}
epoch = uint64(2)
newCheckpoint := &ethpb.Checkpoint{Epoch: epoch}
returned, err = store.saveCheckpointState(ctx, baseState, newCheckpoint)
service.beaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(newCheckpoint.Root))
returned, err = service.getAttPreState(ctx, newCheckpoint)
if err != nil {
t.Fatal(err)
}
@@ -265,11 +263,11 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(baseState, returned) {
if baseState.Slot() != returned.Slot() {
t.Error("Incorrectly returned base state")
}
cached, err = store.checkpointState.StateByCheckpoint(newCheckpoint)
cached, err = service.checkpointState.StateByCheckpoint(newCheckpoint)
if err != nil {
t.Fatal(err)
}
@@ -283,8 +281,13 @@ func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
if err := store.verifyAttTargetEpoch(
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
if err := service.verifyAttTargetEpoch(
ctx,
0,
params.BeaconConfig().SlotsPerEpoch*params.BeaconConfig().SecondsPerSlot,
@@ -298,8 +301,13 @@ func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
if err := store.verifyAttTargetEpoch(
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
if err := service.verifyAttTargetEpoch(
ctx,
0,
params.BeaconConfig().SlotsPerEpoch*params.BeaconConfig().SecondsPerSlot,
@@ -313,8 +321,13 @@ func TestAttEpoch_NotMatch(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
if err := store.verifyAttTargetEpoch(
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
if err := service.verifyAttTargetEpoch(
ctx,
0,
2*params.BeaconConfig().SlotsPerEpoch*params.BeaconConfig().SecondsPerSlot,
@@ -329,9 +342,14 @@ func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := NewForkChoiceService(ctx, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
d := &ethpb.AttestationData{}
if err := s.verifyBeaconBlock(ctx, d); !strings.Contains(err.Error(), "beacon block does not exist") {
if err := service.verifyBeaconBlock(ctx, d); !strings.Contains(err.Error(), "beacon block does not exist") {
t.Error("Did not receive the wanted error")
}
}
@@ -341,13 +359,18 @@ func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := NewForkChoiceService(ctx, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 2}}
s.db.SaveBlock(ctx, b)
service.beaconDB.SaveBlock(ctx, b)
r, _ := ssz.HashTreeRoot(b.Block)
d := &ethpb.AttestationData{Slot: 1, BeaconBlockRoot: r[:]}
if err := s.verifyBeaconBlock(ctx, d); !strings.Contains(err.Error(), "could not process attestation for future block") {
if err := service.verifyBeaconBlock(ctx, d); !strings.Contains(err.Error(), "could not process attestation for future block") {
t.Error("Did not receive the wanted error")
}
}
@@ -357,13 +380,18 @@ func TestVerifyBeaconBlock_OK(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := NewForkChoiceService(ctx, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 2}}
s.db.SaveBlock(ctx, b)
service.beaconDB.SaveBlock(ctx, b)
r, _ := ssz.HashTreeRoot(b.Block)
d := &ethpb.AttestationData{Slot: 2, BeaconBlockRoot: r[:]}
if err := s.verifyBeaconBlock(ctx, d); err != nil {
if err := service.verifyBeaconBlock(ctx, d); err != nil {
t.Error("Did not receive the wanted error")
}
}

View File

@@ -0,0 +1,387 @@
package blockchain
import (
"context"
"encoding/hex"
"fmt"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/flags"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// This defines the upper bound on the initial sync block cache size.
var initialSyncBlockCacheSize = 2 * params.BeaconConfig().SlotsPerEpoch
// onBlock is called when a gossip block is received. It runs regular state transition on the block.
//
// Spec pseudocode definition:
// def on_block(store: Store, block: BeaconBlock) -> None:
// # Make a copy of the state to avoid mutability issues
// assert block.parent_root in store.block_states
// pre_state = store.block_states[block.parent_root].copy()
// # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
// assert store.time >= pre_state.genesis_time + block.slot * SECONDS_PER_SLOT
// # Add new block to the store
// store.blocks[signing_root(block)] = block
// # Check block is a descendant of the finalized block
// assert (
// get_ancestor(store, signing_root(block), store.blocks[store.finalized_checkpoint.root].slot) ==
// store.finalized_checkpoint.root
// )
// # Check that block is later than the finalized epoch slot
// assert block.slot > compute_start_slot_of_epoch(store.finalized_checkpoint.epoch)
// # Check the block is valid and compute the post-state
// state = state_transition(pre_state, block)
// # Add new state for this block to the store
// store.block_states[signing_root(block)] = state
//
// # Update justified checkpoint
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
// store.best_justified_checkpoint = state.current_justified_checkpoint
//
// # Update finalized checkpoint
// if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
// store.finalized_checkpoint = state.finalized_checkpoint
func (s *Service) onBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock) (*stateTrie.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "blockchain.onBlock")
defer span.End()
if signed == nil || signed.Block == nil {
return nil, errors.New("nil block")
}
b := signed.Block
// Retrieve incoming block's pre state.
preState, err := s.getBlockPreState(ctx, b)
if err != nil {
return nil, err
}
preStateValidatorCount := preState.NumValidators()
root, err := stateutil.BlockRoot(b)
if err != nil {
return nil, errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
log.WithFields(logrus.Fields{
"slot": b.Slot,
"root": fmt.Sprintf("0x%s...", hex.EncodeToString(root[:])[:8]),
}).Info("Executing state transition on block")
postState, err := state.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
return nil, errors.Wrap(err, "could not execute state transition")
}
if err := s.beaconDB.SaveBlock(ctx, signed); err != nil {
return nil, errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
if err := s.insertBlockToForkChoiceStore(ctx, b, root, postState); err != nil {
return nil, errors.Wrapf(err, "could not insert block %d to fork choice store", b.Slot)
}
if featureconfig.Get().NewStateMgmt {
if err := s.stateGen.SaveState(ctx, root, postState); err != nil {
return nil, errors.Wrap(err, "could not save state")
}
} else {
if err := s.beaconDB.SaveState(ctx, postState, root); err != nil {
return nil, errors.Wrap(err, "could not save state")
}
}
// Update justified checkpoint.
if postState.CurrentJustifiedCheckpoint().Epoch > s.justifiedCheckpt.Epoch {
if err := s.updateJustified(ctx, postState); err != nil {
return nil, err
}
}
// Update finalized checkpoint. Prune the block cache and helper caches on every new finalized epoch.
if postState.FinalizedCheckpointEpoch() > s.finalizedCheckpt.Epoch {
if !featureconfig.Get().NoInitSyncBatchSaveBlocks {
if err := s.beaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return nil, err
}
s.clearInitSyncBlocks()
}
if err := s.beaconDB.SaveFinalizedCheckpoint(ctx, postState.FinalizedCheckpoint()); err != nil {
return nil, errors.Wrap(err, "could not save finalized checkpoint")
}
if !featureconfig.Get().NewStateMgmt {
startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch)
endSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
if endSlot > startSlot {
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return nil, errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot)
}
}
}
fRoot := bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root)
// Prune proto array fork choice nodes; all nodes before the finalized checkpoint will
// be pruned.
s.forkChoiceStore.Prune(ctx, fRoot)
s.prevFinalizedCheckpt = s.finalizedCheckpt
s.finalizedCheckpt = postState.FinalizedCheckpoint()
if err := s.finalizedImpliesNewJustified(ctx, postState); err != nil {
return nil, errors.Wrap(err, "could not save new justified")
}
if featureconfig.Get().NewStateMgmt {
fRoot := bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root)
fBlock, err := s.beaconDB.Block(ctx, fRoot)
if err != nil {
return nil, errors.Wrap(err, "could not get finalized block to migrate")
}
if err := s.stateGen.MigrateToCold(ctx, fBlock.Block.Slot, fRoot); err != nil {
return nil, errors.Wrap(err, "could not migrate to cold")
}
}
}
// Update validator indices in database as needed.
if err := s.saveNewValidators(ctx, preStateValidatorCount, postState); err != nil {
return nil, errors.Wrap(err, "could not save new validators")
}
// Epoch boundary bookkeeping such as logging epoch summaries.
if postState.Slot() >= s.nextEpochBoundarySlot {
logEpochData(postState)
reportEpochMetrics(postState)
// Update committees cache at epoch boundary slot.
if err := helpers.UpdateCommitteeCache(postState, helpers.CurrentEpoch(postState)); err != nil {
return nil, err
}
if err := helpers.UpdateProposerIndicesInCache(postState, helpers.CurrentEpoch(postState)); err != nil {
return nil, err
}
s.nextEpochBoundarySlot = helpers.StartSlot(helpers.NextEpoch(postState))
}
// Delete the processed block attestations from attestation pool.
if err := s.deletePoolAtts(b.Body.Attestations); err != nil {
return nil, err
}
// Delete the processed block attester slashings from slashings pool.
for i := 0; i < len(b.Body.AttesterSlashings); i++ {
s.slashingPool.MarkIncludedAttesterSlashing(b.Body.AttesterSlashings[i])
}
return postState, nil
}
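// The epoch-boundary bookkeeping above only fires once the post-state slot reaches
// nextEpochBoundarySlot, then pushes the trigger to the start of the next epoch
// (helpers.StartSlot(helpers.NextEpoch(postState))). Hedged stand-alone sketch of that
// trigger, assuming SlotsPerEpoch = 32.
package main

import "fmt"

func main() {
    const slotsPerEpoch = uint64(32)
    nextEpochBoundarySlot := uint64(0)
    for slot := uint64(0); slot < 70; slot++ {
        if slot >= nextEpochBoundarySlot {
            fmt.Println("epoch summary, metrics and cache refresh at slot", slot) // slots 0, 32, 64
            nextEpochBoundarySlot = (slot/slotsPerEpoch + 1) * slotsPerEpoch
        }
    }
}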
// onBlockInitialSyncStateTransition is called when an initial sync block is received.
// It runs the state transition on the block without verifying attestation aggregate signatures,
// and it does not save attestations.
func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "blockchain.onBlock")
defer span.End()
if signed == nil || signed.Block == nil {
return errors.New("nil block")
}
b := signed.Block
// Retrieve incoming block's pre state.
preState, err := s.verifyBlkPreState(ctx, b)
if err != nil {
return err
}
// Exit early if the pre state slot is higher than incoming block's slot.
if preState.Slot() >= signed.Block.Slot {
return nil
}
preStateValidatorCount := preState.NumValidators()
postState, err := state.ExecuteStateTransitionNoVerifyAttSigs(ctx, preState, signed)
if err != nil {
return errors.Wrap(err, "could not execute state transition")
}
root, err := stateutil.BlockRoot(b)
if err != nil {
return errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
if !featureconfig.Get().NoInitSyncBatchSaveBlocks {
s.saveInitSyncBlock(root, signed)
} else {
if err := s.beaconDB.SaveBlock(ctx, signed); err != nil {
return errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
}
if err := s.insertBlockToForkChoiceStore(ctx, b, root, postState); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", b.Slot)
}
if featureconfig.Get().NewStateMgmt {
if err := s.stateGen.SaveState(ctx, root, postState); err != nil {
return errors.Wrap(err, "could not save state")
}
} else {
s.initSyncStateLock.Lock()
defer s.initSyncStateLock.Unlock()
s.initSyncState[root] = postState.Copy()
s.filterBoundaryCandidates(ctx, root, postState)
}
if flags.Get().EnableArchive {
atts := signed.Block.Body.Attestations
if err := s.beaconDB.SaveAttestations(ctx, atts); err != nil {
return errors.Wrapf(err, "could not save block attestations from slot %d", b.Slot)
}
}
// Update justified checkpoint.
if postState.CurrentJustifiedCheckpoint().Epoch > s.justifiedCheckpt.Epoch {
if err := s.updateJustified(ctx, postState); err != nil {
return err
}
}
// Rate limit how many blocks (two epochs' worth) the node keeps in memory.
if len(s.getInitSyncBlocks()) > int(initialSyncBlockCacheSize) {
if err := s.beaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
s.clearInitSyncBlocks()
}
// Update finalized checkpoint. Prune the block cache and helper caches on every new finalized epoch.
if postState.FinalizedCheckpointEpoch() > s.finalizedCheckpt.Epoch {
if !featureconfig.Get().NewStateMgmt {
startSlot := helpers.StartSlot(s.prevFinalizedCheckpt.Epoch)
endSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
if endSlot > startSlot {
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot)
}
}
if err := s.saveInitState(ctx, postState); err != nil {
return errors.Wrap(err, "could not save init sync finalized state")
}
}
if !featureconfig.Get().NoInitSyncBatchSaveBlocks {
if err := s.beaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
s.clearInitSyncBlocks()
}
if err := s.beaconDB.SaveFinalizedCheckpoint(ctx, postState.FinalizedCheckpoint()); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
s.prevFinalizedCheckpt = s.finalizedCheckpt
s.finalizedCheckpt = postState.FinalizedCheckpoint()
if err := s.finalizedImpliesNewJustified(ctx, postState); err != nil {
return errors.Wrap(err, "could not save new justified")
}
if featureconfig.Get().NewStateMgmt {
fRoot := bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root)
fBlock, err := s.beaconDB.Block(ctx, fRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized block to migrate")
}
if err := s.stateGen.MigrateToCold(ctx, fBlock.Block.Slot, fRoot); err != nil {
return errors.Wrap(err, "could not migrate to cold")
}
}
}
// Update validator indices in database as needed.
if err := s.saveNewValidators(ctx, preStateValidatorCount, postState); err != nil {
return errors.Wrap(err, "could not save new validators")
}
if !featureconfig.Get().NewStateMgmt {
numOfStates := len(s.boundaryRoots)
if numOfStates > initialSyncCacheSize {
if err = s.persistCachedStates(ctx, numOfStates); err != nil {
return err
}
}
if len(s.initSyncState) > maxCacheSize {
s.pruneOldNonFinalizedStates()
}
}
// Epoch boundary bookkeeping such as logging epoch summaries.
if postState.Slot() >= s.nextEpochBoundarySlot {
reportEpochMetrics(postState)
s.nextEpochBoundarySlot = helpers.StartSlot(helpers.NextEpoch(postState))
// Update committees cache at epoch boundary slot.
if err := helpers.UpdateCommitteeCache(postState, helpers.CurrentEpoch(postState)); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(postState, helpers.CurrentEpoch(postState)); err != nil {
return err
}
if !featureconfig.Get().NewStateMgmt && helpers.IsEpochStart(postState.Slot()) {
if err := s.beaconDB.SaveState(ctx, postState, root); err != nil {
return errors.Wrap(err, "could not save state")
}
}
}
return nil
}
// This feeds the block and the block's attestations into the fork choice store. It allows the fork choice store
// to gain information on the most current chain.
func (s *Service) insertBlockToForkChoiceStore(ctx context.Context, blk *ethpb.BeaconBlock, root [32]byte, state *stateTrie.BeaconState) error {
if err := s.fillInForkChoiceMissingBlocks(ctx, blk, state); err != nil {
return err
}
// Feed in block to fork choice store.
if err := s.forkChoiceStore.ProcessBlock(ctx,
blk.Slot, root, bytesutil.ToBytes32(blk.ParentRoot),
state.CurrentJustifiedCheckpoint().Epoch,
state.FinalizedCheckpointEpoch()); err != nil {
return errors.Wrap(err, "could not process block for proto array fork choice")
}
// Feed in block's attestations to fork choice store.
for _, a := range blk.Body.Attestations {
committee, err := helpers.BeaconCommitteeFromState(state, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return err
}
indices := attestationutil.AttestingIndices(a.AggregationBits, committee)
s.forkChoiceStore.ProcessAttestation(ctx, indices, bytesutil.ToBytes32(a.Data.BeaconBlockRoot), a.Data.Target.Epoch)
}
return nil
}

View File

@@ -0,0 +1,505 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/roughtime"
"github.com/prysmaticlabs/prysm/shared/traceutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.
func (s *Service) CurrentSlot() uint64 {
return uint64(roughtime.Now().Unix()-s.genesisTime.Unix()) / params.BeaconConfig().SecondsPerSlot
}
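// Worked example (illustrative, not part of the original file), assuming the mainnet
// config where SecondsPerSlot is 12: if genesis was 75 seconds ago, then
//   (now - genesisTime) / SecondsPerSlot = 75 / 12 = 6
// so CurrentSlot() reports slot 6. The integer division floors any partially elapsed slot.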
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block
// to retrieve the state in DB. It verifies the pre state's validity and that the incoming block
// is in the correct time window.
func (s *Service) getBlockPreState(ctx context.Context, b *ethpb.BeaconBlock) (*stateTrie.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.getBlockPreState")
defer span.End()
// Verify incoming block has a valid pre state.
preState, err := s.verifyBlkPreState(ctx, b)
if err != nil {
return nil, err
}
// Verify block slot time is not from the future.
if err := helpers.VerifySlotTime(preState.GenesisTime(), b.Slot); err != nil {
return nil, err
}
// Verify block is a descendant of a finalized block.
if err := s.verifyBlkDescendant(ctx, bytesutil.ToBytes32(b.ParentRoot), b.Slot); err != nil {
return nil, err
}
// Verify block is later than the finalized epoch slot.
if err := s.verifyBlkFinalizedSlot(b); err != nil {
return nil, err
}
return preState, nil
}
// verifyBlkPreState validates input block has a valid pre-state.
func (s *Service) verifyBlkPreState(ctx context.Context, b *ethpb.BeaconBlock) (*stateTrie.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "chainService.verifyBlkPreState")
defer span.End()
if featureconfig.Get().NewStateMgmt {
preState, err := s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(b.ParentRoot))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot)
}
if preState == nil {
return nil, errors.Wrapf(err, "nil pre state for slot %d", b.Slot)
}
return preState, nil // No copy needed from newly hydrated state gen object.
}
preState := s.initSyncState[bytesutil.ToBytes32(b.ParentRoot)]
var err error
if preState == nil {
if featureconfig.Get().CheckHeadState {
headRoot, err := s.HeadRoot(ctx)
if err != nil {
return nil, errors.Wrapf(err, "could not get head root")
}
if bytes.Equal(headRoot, b.ParentRoot) {
return s.HeadState(ctx)
}
}
preState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(b.ParentRoot))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot)
}
if preState == nil {
if bytes.Equal(s.finalizedCheckpt.Root, b.ParentRoot) {
return nil, fmt.Errorf("pre state of slot %d does not exist", b.Slot)
}
preState, err = s.generateState(ctx, bytesutil.ToBytes32(s.finalizedCheckpt.Root), bytesutil.ToBytes32(b.ParentRoot))
if err != nil {
return nil, err
}
}
return preState, nil // No copy needed from newly hydrated DB object.
}
return preState.Copy(), nil
}
// verifyBlkDescendant validates input block root is a descendant of the
// current finalized block root.
func (s *Service) verifyBlkDescendant(ctx context.Context, root [32]byte, slot uint64) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.verifyBlkDescendant")
defer span.End()
finalizedBlkSigned, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(s.finalizedCheckpt.Root))
if err != nil || finalizedBlkSigned == nil || finalizedBlkSigned.Block == nil {
return errors.Wrap(err, "could not get finalized block")
}
finalizedBlk := finalizedBlkSigned.Block
bFinalizedRoot, err := s.ancestor(ctx, root[:], finalizedBlk.Slot)
if err != nil {
return errors.Wrap(err, "could not get finalized block root")
}
if bFinalizedRoot == nil {
return fmt.Errorf("no finalized block known for block from slot %d", slot)
}
if !bytes.Equal(bFinalizedRoot, s.finalizedCheckpt.Root) {
err := fmt.Errorf("block from slot %d is not a descendent of the current finalized block slot %d, %#x != %#x",
slot, finalizedBlk.Slot, bytesutil.Trunc(bFinalizedRoot), bytesutil.Trunc(s.finalizedCheckpt.Root))
traceutil.AnnotateError(span, err)
return err
}
return nil
}
// verifyBlkFinalizedSlot validates that the input block's slot is not less than or equal
// to the current finalized slot.
func (s *Service) verifyBlkFinalizedSlot(b *ethpb.BeaconBlock) error {
finalizedSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
if finalizedSlot >= b.Slot {
return fmt.Errorf("block is equal or earlier than finalized block, slot %d < slot %d", b.Slot, finalizedSlot)
}
return nil
}
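// Worked example (illustrative), assuming the mainnet config where SlotsPerEpoch is 32:
// with a finalized epoch of 2, helpers.StartSlot(2) = 2 * 32 = 64, so a block at slot 64
// or earlier is rejected here and only blocks from slot 65 onward pass this check.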
// saveNewValidators saves newly added validator indices from the state to db.
// Does nothing if validator count has not changed.
func (s *Service) saveNewValidators(ctx context.Context, preStateValidatorCount int, postState *stateTrie.BeaconState) error {
postStateValidatorCount := postState.NumValidators()
if preStateValidatorCount != postStateValidatorCount {
indices := make([]uint64, 0)
pubKeys := make([][48]byte, 0)
for i := preStateValidatorCount; i < postStateValidatorCount; i++ {
indices = append(indices, uint64(i))
pubKeys = append(pubKeys, postState.PubkeyAtIndex(uint64(i)))
}
if err := s.beaconDB.SaveValidatorIndices(ctx, pubKeys, indices); err != nil {
return errors.Wrapf(err, "could not save activated validators: %v", indices)
}
log.WithFields(logrus.Fields{
"indices": indices,
"totalValidatorCount": postStateValidatorCount - preStateValidatorCount,
}).Trace("Validator indices saved in DB")
}
return nil
}
// rmStatesOlderThanLastFinalized deletes states in the db that are older than the last finalized check point.
func (s *Service) rmStatesOlderThanLastFinalized(ctx context.Context, startSlot uint64, endSlot uint64) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.rmStatesBySlots")
defer span.End()
// Make sure start slot is not a skipped slot
for i := startSlot; i > 0; i-- {
filter := filters.NewFilter().SetStartSlot(i).SetEndSlot(i)
b, err := s.beaconDB.Blocks(ctx, filter)
if err != nil {
return err
}
if len(b) > 0 {
startSlot = i
break
}
}
// Make sure finalized slot is not a skipped slot.
for i := endSlot; i > 0; i-- {
filter := filters.NewFilter().SetStartSlot(i).SetEndSlot(i)
b, err := s.beaconDB.Blocks(ctx, filter)
if err != nil {
return err
}
if len(b) > 0 {
endSlot = i - 1
break
}
}
// Do not remove genesis state
if startSlot == 0 {
startSlot++
}
// If the end slot is earlier than the start slot, clamp it to the start slot.
if endSlot < startSlot {
endSlot = startSlot
}
filter := filters.NewFilter().SetStartSlot(startSlot).SetEndSlot(endSlot)
roots, err := s.beaconDB.BlockRoots(ctx, filter)
if err != nil {
return err
}
roots, err = s.filterBlockRoots(ctx, roots)
if err != nil {
return err
}
if err := s.beaconDB.DeleteStates(ctx, roots); err != nil {
return err
}
return nil
}
// shouldUpdateCurrentJustified prevents the bouncing attack by only updating conflicting justified
// checkpoints in the fork choice if we are in the early slots of the epoch.
// Otherwise, incorporation of the new justified checkpoint is delayed until the next epoch boundary.
// See https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114 for more detailed analysis and discussion.
func (s *Service) shouldUpdateCurrentJustified(ctx context.Context, newJustifiedCheckpt *ethpb.Checkpoint) (bool, error) {
if helpers.SlotsSinceEpochStarts(s.CurrentSlot()) < params.BeaconConfig().SafeSlotsToUpdateJustified {
return true, nil
}
var newJustifiedBlockSigned *ethpb.SignedBeaconBlock
justifiedRoot := bytesutil.ToBytes32(newJustifiedCheckpt.Root)
var err error
if !featureconfig.Get().NoInitSyncBatchSaveBlocks && s.hasInitSyncBlock(justifiedRoot) {
newJustifiedBlockSigned = s.getInitSyncBlock(justifiedRoot)
} else {
newJustifiedBlockSigned, err = s.beaconDB.Block(ctx, justifiedRoot)
if err != nil {
return false, err
}
}
if newJustifiedBlockSigned == nil || newJustifiedBlockSigned.Block == nil {
return false, errors.New("nil new justified block")
}
newJustifiedBlock := newJustifiedBlockSigned.Block
if newJustifiedBlock.Slot <= helpers.StartSlot(s.justifiedCheckpt.Epoch) {
return false, nil
}
var justifiedBlockSigned *ethpb.SignedBeaconBlock
cachedJustifiedRoot := bytesutil.ToBytes32(s.justifiedCheckpt.Root)
if !featureconfig.Get().NoInitSyncBatchSaveBlocks && s.hasInitSyncBlock(cachedJustifiedRoot) {
justifiedBlockSigned = s.getInitSyncBlock(cachedJustifiedRoot)
} else {
justifiedBlockSigned, err = s.beaconDB.Block(ctx, cachedJustifiedRoot)
if err != nil {
return false, err
}
}
if justifiedBlockSigned == nil || justifiedBlockSigned.Block == nil {
return false, errors.New("nil justified block")
}
justifiedBlock := justifiedBlockSigned.Block
b, err := s.ancestor(ctx, newJustifiedCheckpt.Root, justifiedBlock.Slot)
if err != nil {
return false, err
}
if !bytes.Equal(b, s.justifiedCheckpt.Root) {
return false, nil
}
return true, nil
}
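// Illustrative sketch of the early-slot rule above, assuming hypothetical values of
// SafeSlotsToUpdateJustified = 8 and SlotsPerEpoch = 32: at slot 68 the node is
// 68 % 32 = 4 slots into the epoch and 4 < 8, so a conflicting justified checkpoint
// may be adopted immediately; at slot 75 it is 75 % 32 = 11 slots in, so the ancestry
// checks above must also pass before the stored justified checkpoint is replaced.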
func (s *Service) updateJustified(ctx context.Context, state *stateTrie.BeaconState) error {
cpt := state.CurrentJustifiedCheckpoint()
if cpt.Epoch > s.bestJustifiedCheckpt.Epoch {
s.bestJustifiedCheckpt = cpt
}
canUpdate, err := s.shouldUpdateCurrentJustified(ctx, cpt)
if err != nil {
return err
}
if canUpdate {
s.prevJustifiedCheckpt = s.justifiedCheckpt
s.justifiedCheckpt = cpt
}
if !featureconfig.Get().NewStateMgmt {
justifiedRoot := bytesutil.ToBytes32(cpt.Root)
justifiedState := s.initSyncState[justifiedRoot]
// If the justified state is nil, fall back to the normal syncing process and save
// the justified check point.
var err error
if justifiedState == nil {
if s.beaconDB.HasState(ctx, justifiedRoot) {
return s.beaconDB.SaveJustifiedCheckpoint(ctx, cpt)
}
justifiedState, err = s.generateState(ctx, bytesutil.ToBytes32(s.finalizedCheckpt.Root), justifiedRoot)
if err != nil {
log.Error(err)
return s.beaconDB.SaveJustifiedCheckpoint(ctx, cpt)
}
}
if err := s.beaconDB.SaveState(ctx, justifiedState, justifiedRoot); err != nil {
return errors.Wrap(err, "could not save justified state")
}
}
return s.beaconDB.SaveJustifiedCheckpoint(ctx, cpt)
}
// This saves every finalized state in the DB during initial sync. It is needed as part of the optimization to
// use cached states during initial sync in case of a restart.
func (s *Service) saveInitState(ctx context.Context, state *stateTrie.BeaconState) error {
cpt := state.FinalizedCheckpoint()
finalizedRoot := bytesutil.ToBytes32(cpt.Root)
fs := s.initSyncState[finalizedRoot]
if fs == nil {
var err error
fs, err = s.beaconDB.State(ctx, finalizedRoot)
if err != nil {
return err
}
if fs == nil {
fs, err = s.generateState(ctx, bytesutil.ToBytes32(s.prevFinalizedCheckpt.Root), finalizedRoot)
if err != nil {
// This might happen if the client was in sync and is now re-syncing for whatever reason.
log.Warn("Initial sync cache did not have finalized state root cached")
return err
}
}
}
if err := s.beaconDB.SaveState(ctx, fs, finalizedRoot); err != nil {
return errors.Wrap(err, "could not save state")
}
return nil
}
// This filters out the head root and the finalized root stored in the DB, keeping only block roots
// whose states are safe to prune. It serves as the last line of defense before we prune states.
func (s *Service) filterBlockRoots(ctx context.Context, roots [][32]byte) ([][32]byte, error) {
f, err := s.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
return nil, err
}
fRoot := f.Root
h, err := s.beaconDB.HeadBlock(ctx)
if err != nil {
return nil, err
}
hRoot, err := ssz.HashTreeRoot(h.Block)
if err != nil {
return nil, err
}
filtered := make([][32]byte, 0, len(roots))
for _, root := range roots {
if bytes.Equal(root[:], fRoot[:]) || bytes.Equal(root[:], hRoot[:]) {
continue
}
filtered = append(filtered, root)
}
return filtered, nil
}
// ancestor returns the block root of an ancestor block, at the requested slot, starting from the input block root.
//
// Spec pseudocode definition:
// def get_ancestor(store: Store, root: Hash, slot: Slot) -> Hash:
// block = store.blocks[root]
// if block.slot > slot:
// return get_ancestor(store, block.parent_root, slot)
// elif block.slot == slot:
// return root
// else:
// return Bytes32() # root is older than queried slot: no results.
func (s *Service) ancestor(ctx context.Context, root []byte, slot uint64) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.ancestor")
defer span.End()
// Stop recursive ancestry lookup if context is cancelled.
if ctx.Err() != nil {
return nil, ctx.Err()
}
signed, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(root))
if err != nil {
return nil, errors.Wrap(err, "could not get ancestor block")
}
if !featureconfig.Get().NoInitSyncBatchSaveBlocks && s.hasInitSyncBlock(bytesutil.ToBytes32(root)) {
signed = s.getInitSyncBlock(bytesutil.ToBytes32(root))
}
if signed == nil || signed.Block == nil {
return nil, errors.New("nil block")
}
b := signed.Block
// If we don't have the ancestor in the DB, simply return nil so the rest of the fork choice
// operation can proceed. This is not an error condition.
if b == nil || b.Slot < slot {
return nil, nil
}
if b.Slot == slot {
return root, nil
}
return s.ancestor(ctx, b.ParentRoot, slot)
}
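// Worked example of the recursion above (hypothetical roots): given a chain
// rootC (slot 9) -> parent rootB (slot 6) -> parent rootA (slot 4),
//   s.ancestor(ctx, rootC[:], 6) // rootC is newer than slot 6, recurses to rootB, 6 == 6, returns rootB
//   s.ancestor(ctx, rootC[:], 5) // walks back to rootA, which is older than slot 5, returns nil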
// This updates the justified check point in the store if the new justified checkpoint is later than the stored one, or
// if the store's justified checkpoint is not in the chain containing the finalized check point.
//
// Spec definition:
// if (
// state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch
// or get_ancestor(store, store.justified_checkpoint.root, finalized_slot) != store.finalized_checkpoint.root
// ):
// store.justified_checkpoint = state.current_justified_checkpoint
func (s *Service) finalizedImpliesNewJustified(ctx context.Context, state *stateTrie.BeaconState) error {
finalizedBlkSigned, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(s.finalizedCheckpt.Root))
if err != nil || finalizedBlkSigned == nil || finalizedBlkSigned.Block == nil {
return errors.Wrap(err, "could not get finalized block")
}
finalizedBlk := finalizedBlkSigned.Block
anc, err := s.ancestor(ctx, s.justifiedCheckpt.Root, finalizedBlk.Slot)
if err != nil {
return err
}
// Either the new justified is later than the stored justified, or it is not in the chain containing the finalized check point.
if cpt := state.CurrentJustifiedCheckpoint(); cpt != nil && cpt.Epoch > s.justifiedCheckpt.Epoch || !bytes.Equal(anc, s.finalizedCheckpt.Root) {
s.justifiedCheckpt = state.CurrentJustifiedCheckpoint()
}
return nil
}
// This retrieves missing blocks from the DB (i.e. the blocks that couldn't be received over sync) and inserts them into the fork choice store.
// This is useful for block tree visualizer and additional vote accounting.
func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, blk *ethpb.BeaconBlock, state *stateTrie.BeaconState) error {
pendingNodes := make([]*ethpb.BeaconBlock, 0)
parentRoot := bytesutil.ToBytes32(blk.ParentRoot)
slot := blk.Slot
// Fork choice only matters from last finalized slot.
higherThanFinalized := slot > helpers.StartSlot(s.finalizedCheckpt.Epoch)
// As long as parent node is not in fork choice store, and parent node is in DB.
for !s.forkChoiceStore.HasNode(parentRoot) && s.beaconDB.HasBlock(ctx, parentRoot) && higherThanFinalized {
b, err := s.beaconDB.Block(ctx, parentRoot)
if err != nil {
return err
}
pendingNodes = append(pendingNodes, b.Block)
parentRoot = bytesutil.ToBytes32(b.Block.ParentRoot)
slot = b.Block.Slot
higherThanFinalized = slot > helpers.StartSlot(s.finalizedCheckpt.Epoch)
}
// Insert parent nodes to fork choice store in reverse order.
// Lower slots should be at the end of the list.
for i := len(pendingNodes) - 1; i >= 0; i-- {
b := pendingNodes[i]
r, err := ssz.HashTreeRoot(b)
if err != nil {
return err
}
if err := s.forkChoiceStore.ProcessBlock(ctx,
b.Slot, r, bytesutil.ToBytes32(b.ParentRoot),
state.CurrentJustifiedCheckpoint().Epoch,
state.FinalizedCheckpointEpoch()); err != nil {
return errors.Wrap(err, "could not process block for proto array fork choice")
}
}
return nil
}
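// Illustrative walk-through (not from the original file): if the incoming block's parent B8
// and its ancestors B6 and B4 are in the DB but missing from fork choice, the loop above
// collects pendingNodes as [B8, B6, B4] while walking back toward the finalized block, and
// the reverse iteration then inserts B4, B6, B8 -- so each parent exists in the proto array
// store before its child is processed.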
// This deletes the input attestations from the attestation pool, so proposers don't include them in a future block.
func (s *Service) deletePoolAtts(atts []*ethpb.Attestation) error {
for _, att := range atts {
if helpers.IsAggregated(att) {
if err := s.attPool.DeleteAggregatedAttestation(att); err != nil {
return err
}
} else {
if err := s.attPool.DeleteUnaggregatedAttestation(att); err != nil {
return err
}
}
}
return nil
}

View File

@@ -0,0 +1,701 @@
package blockchain
import (
"context"
"reflect"
"strings"
"testing"
"time"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
func TestStore_OnBlock(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
if err := db.SaveBlock(ctx, genesis); err != nil {
t.Error(err)
}
validGenesisRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Error(err)
}
st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
if err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveState(ctx, st.Copy(), validGenesisRoot); err != nil {
t.Fatal(err)
}
roots, err := blockTree1(db, validGenesisRoot[:])
if err != nil {
t.Fatal(err)
}
random := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1, ParentRoot: validGenesisRoot[:]}}
if err := db.SaveBlock(ctx, random); err != nil {
t.Error(err)
}
randomParentRoot, err := ssz.HashTreeRoot(random.Block)
if err != nil {
t.Error(err)
}
if err := service.beaconDB.SaveState(ctx, st.Copy(), randomParentRoot); err != nil {
t.Fatal(err)
}
randomParentRoot2 := roots[1]
if err := service.beaconDB.SaveState(ctx, st.Copy(), bytesutil.ToBytes32(randomParentRoot2)); err != nil {
t.Fatal(err)
}
tests := []struct {
name string
blk *ethpb.BeaconBlock
s *stateTrie.BeaconState
time uint64
wantErrString string
}{
{
name: "parent block root does not have a state",
blk: &ethpb.BeaconBlock{},
s: st.Copy(),
wantErrString: "provided block root does not have block saved in the db",
},
{
name: "block is from the feature",
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot[:], Slot: params.BeaconConfig().FarFutureEpoch},
s: st.Copy(),
wantErrString: "could not process slot from the future",
},
{
name: "could not get finalized block",
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot[:]},
s: st.Copy(),
wantErrString: "block from slot 0 is not a descendent of the current finalized block",
},
{
name: "same slot as finalized block",
blk: &ethpb.BeaconBlock{Slot: 0, ParentRoot: randomParentRoot2},
s: st.Copy(),
wantErrString: "block is equal or earlier than finalized block, slot 0 < slot 0",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
service.justifiedCheckpt = &ethpb.Checkpoint{Root: validGenesisRoot[:]}
service.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: validGenesisRoot[:]}
service.finalizedCheckpt = &ethpb.Checkpoint{Root: validGenesisRoot[:]}
service.prevFinalizedCheckpt = &ethpb.Checkpoint{Root: validGenesisRoot[:]}
service.finalizedCheckpt.Root = roots[0]
_, err := service.onBlock(ctx, &ethpb.SignedBeaconBlock{Block: tt.blk})
if !strings.Contains(err.Error(), tt.wantErrString) {
t.Errorf("Store.OnBlock() error = %v, wantErr = %v", err, tt.wantErrString)
}
})
}
}
func TestStore_SaveNewValidators(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
preCount := 2 // validators 0 and 1
s, _ := stateTrie.InitializeFromProto(&pb.BeaconState{Validators: []*ethpb.Validator{
{PublicKey: []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}},
{PublicKey: []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}},
{PublicKey: []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2}},
{PublicKey: []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3}},
}})
if err := service.saveNewValidators(ctx, preCount, s); err != nil {
t.Fatal(err)
}
if !db.HasValidatorIndex(ctx, []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2}) {
t.Error("Wanted validator saved in db")
}
if !db.HasValidatorIndex(ctx, []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3}) {
t.Error("Wanted validator saved in db")
}
if db.HasValidatorIndex(ctx, []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}) {
t.Error("validator not suppose to be saved in db")
}
}
func TestRemoveStateSinceLastFinalized(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
// Save 100 blocks in DB, each has a state.
numBlocks := 100
totalBlocks := make([]*ethpb.SignedBeaconBlock, numBlocks)
blockRoots := make([][32]byte, 0)
for i := 0; i < len(totalBlocks); i++ {
totalBlocks[i] = &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: uint64(i),
},
}
r, err := ssz.HashTreeRoot(totalBlocks[i].Block)
if err != nil {
t.Fatal(err)
}
s, _ := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: uint64(i)})
if err := service.beaconDB.SaveState(ctx, s, r); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveBlock(ctx, totalBlocks[i]); err != nil {
t.Fatal(err)
}
blockRoots = append(blockRoots, r)
if err := service.beaconDB.SaveHeadBlockRoot(ctx, r); err != nil {
t.Fatal(err)
}
}
// New finalized epoch: 1
finalizedEpoch := uint64(1)
finalizedSlot := finalizedEpoch * params.BeaconConfig().SlotsPerEpoch
endSlot := helpers.StartSlot(finalizedEpoch+1) - 1 // Inclusive
if err := service.rmStatesOlderThanLastFinalized(ctx, 0, endSlot); err != nil {
t.Fatal(err)
}
for _, r := range blockRoots {
s, err := service.beaconDB.State(ctx, r)
if err != nil {
t.Fatal(err)
}
// Also verifies the genesis state didn't get deleted
if s != nil && s.Slot() != finalizedSlot && s.Slot() != 0 && s.Slot() < endSlot {
t.Errorf("State with slot %d should not be in DB", s.Slot())
}
}
// New finalized epoch: 5
newFinalizedEpoch := uint64(5)
newFinalizedSlot := newFinalizedEpoch * params.BeaconConfig().SlotsPerEpoch
endSlot = helpers.StartSlot(newFinalizedEpoch+1) - 1 // Inclusive
if err := service.rmStatesOlderThanLastFinalized(ctx, helpers.StartSlot(finalizedEpoch+1)-1, endSlot); err != nil {
t.Fatal(err)
}
for _, r := range blockRoots {
s, err := service.beaconDB.State(ctx, r)
if err != nil {
t.Fatal(err)
}
// Also verifies the genesis state didn't get deleted
if s != nil && s.Slot() != newFinalizedSlot && s.Slot() != finalizedSlot && s.Slot() != 0 && s.Slot() < endSlot {
t.Errorf("State with slot %d should not be in DB", s.Slot())
}
}
}
func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
service.genesisTime = time.Now()
update, err := service.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{})
if err != nil {
t.Fatal(err)
}
if !update {
t.Error("Should be able to update justified, received false")
}
lastJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{ParentRoot: []byte{'G'}}}
lastJustifiedRoot, _ := ssz.HashTreeRoot(lastJustifiedBlk.Block)
newJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1, ParentRoot: lastJustifiedRoot[:]}}
newJustifiedRoot, _ := ssz.HashTreeRoot(newJustifiedBlk.Block)
if err := service.beaconDB.SaveBlock(ctx, newJustifiedBlk); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveBlock(ctx, lastJustifiedBlk); err != nil {
t.Fatal(err)
}
diff := (params.BeaconConfig().SlotsPerEpoch - 1) * params.BeaconConfig().SecondsPerSlot
service.genesisTime = time.Unix(time.Now().Unix()-int64(diff), 0)
service.justifiedCheckpt = &ethpb.Checkpoint{Root: lastJustifiedRoot[:]}
update, err = service.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
if err != nil {
t.Fatal(err)
}
if !update {
t.Error("Should be able to update justified, received false")
}
}
func TestShouldUpdateJustified_ReturnFalse(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
lastJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{ParentRoot: []byte{'G'}}}
lastJustifiedRoot, _ := ssz.HashTreeRoot(lastJustifiedBlk.Block)
newJustifiedBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{ParentRoot: lastJustifiedRoot[:]}}
newJustifiedRoot, _ := ssz.HashTreeRoot(newJustifiedBlk.Block)
if err := service.beaconDB.SaveBlock(ctx, newJustifiedBlk); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveBlock(ctx, lastJustifiedBlk); err != nil {
t.Fatal(err)
}
diff := (params.BeaconConfig().SlotsPerEpoch - 1) * params.BeaconConfig().SecondsPerSlot
service.genesisTime = time.Unix(time.Now().Unix()-int64(diff), 0)
service.justifiedCheckpt = &ethpb.Checkpoint{Root: lastJustifiedRoot[:]}
update, err := service.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: newJustifiedRoot[:]})
if err != nil {
t.Fatal(err)
}
if update {
t.Error("Should not be able to update justified, received true")
}
}
func TestCachedPreState_CanGetFromCache(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
s, _ := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: 1})
r := [32]byte{'A'}
b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]}
service.initSyncState[r] = s
received, err := service.verifyBlkPreState(ctx, b)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(s.InnerStateUnsafe(), received.InnerStateUnsafe()) {
t.Error("cached state not the same")
}
}
func TestCachedPreState_CanGetFromDB(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
r := [32]byte{'A'}
b := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r[:]}
service.finalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
_, err = service.verifyBlkPreState(ctx, b)
wanted := "pre state of slot 1 does not exist"
if err.Error() != wanted {
t.Error("Did not get wanted error")
}
s, _ := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: 1})
service.beaconDB.SaveState(ctx, s, r)
received, err := service.verifyBlkPreState(ctx, b)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(s, received) {
t.Error("cached state not the same")
}
}
func TestSaveInitState_CanSaveDelete(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
for i := uint64(0); i < 64; i++ {
b := &ethpb.BeaconBlock{Slot: i}
s, _ := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: i})
r, _ := ssz.HashTreeRoot(b)
service.initSyncState[r] = s
}
// Set finalized root as slot 32
finalizedRoot, _ := ssz.HashTreeRoot(&ethpb.BeaconBlock{Slot: 32})
s, _ := stateTrie.InitializeFromProto(&pb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{
Epoch: 1, Root: finalizedRoot[:]}})
if err := service.saveInitState(ctx, s); err != nil {
t.Fatal(err)
}
// Verify finalized state is saved in DB
finalizedState, err := service.beaconDB.State(ctx, finalizedRoot)
if err != nil {
t.Fatal(err)
}
if finalizedState == nil {
t.Error("finalized state can't be nil")
}
}
func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
signedBlock := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := db.SaveBlock(ctx, signedBlock); err != nil {
t.Fatal(err)
}
r, err := ssz.HashTreeRoot(signedBlock.Block)
if err != nil {
t.Fatal(err)
}
service.justifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
service.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: []byte{'A'}}
st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
if err != nil {
t.Fatal(err)
}
service.initSyncState[r] = st.Copy()
if err := db.SaveState(ctx, st.Copy(), r); err != nil {
t.Fatal(err)
}
// Could update
s, _ := stateTrie.InitializeFromProto(&pb.BeaconState{CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: r[:]}})
if err := service.updateJustified(context.Background(), s); err != nil {
t.Fatal(err)
}
if service.bestJustifiedCheckpt.Epoch != s.CurrentJustifiedCheckpoint().Epoch {
t.Error("Incorrect justified epoch in service")
}
// Could not update
service.bestJustifiedCheckpt.Epoch = 2
if err := service.updateJustified(context.Background(), s); err != nil {
t.Fatal(err)
}
if service.bestJustifiedCheckpt.Epoch != 2 {
t.Error("Incorrect justified epoch in service")
}
}
func TestFilterBlockRoots_CanFilter(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
fBlock := &ethpb.BeaconBlock{}
fRoot, _ := ssz.HashTreeRoot(fBlock)
hBlock := &ethpb.BeaconBlock{Slot: 1}
headRoot, _ := ssz.HashTreeRoot(hBlock)
st, _ := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
if err := service.beaconDB.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: fBlock}); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveState(ctx, st.Copy(), fRoot); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: fRoot[:]}); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: hBlock}); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveState(ctx, st.Copy(), headRoot); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveHeadBlockRoot(ctx, headRoot); err != nil {
t.Fatal(err)
}
roots := [][32]byte{{'C'}, {'D'}, headRoot, {'E'}, fRoot, {'F'}}
wanted := [][32]byte{{'C'}, {'D'}, {'E'}, {'F'}}
received, err := service.filterBlockRoots(ctx, roots)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(wanted, received) {
t.Error("Did not filter correctly")
}
}
func TestPersistCache_CanSave(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
st, _ := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
for i := uint64(0); i < initialSyncCacheSize; i++ {
st.SetSlot(i)
root := [32]byte{}
copy(root[:], bytesutil.Bytes32(i))
service.initSyncState[root] = st.Copy()
service.boundaryRoots = append(service.boundaryRoots, root)
}
if err = service.persistCachedStates(ctx, initialSyncCacheSize); err != nil {
t.Fatal(err)
}
for i := uint64(0); i < initialSyncCacheSize-minimumCacheSize; i++ {
root := [32]byte{}
copy(root[:], bytesutil.Bytes32(i))
state, err := db.State(context.Background(), root)
if err != nil {
t.Errorf("State with root of %#x , could not be retrieved: %v", root, err)
}
if state == nil {
t.Errorf("State with root of %#x , does not exist", root)
}
if state.Slot() != i {
t.Errorf("Incorrect slot retrieved. Wanted %d but got %d", i, state.Slot())
}
}
}
func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
service.forkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
service.finalizedCheckpt = &ethpb.Checkpoint{}
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
if err := db.SaveBlock(ctx, genesis); err != nil {
t.Error(err)
}
validGenesisRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Error(err)
}
st, _ := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
if err := service.beaconDB.SaveState(ctx, st.Copy(), validGenesisRoot); err != nil {
t.Fatal(err)
}
roots, err := blockTree1(db, validGenesisRoot[:])
if err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
block := &ethpb.BeaconBlock{Slot: 9, ParentRoot: roots[8]}
if err := service.fillInForkChoiceMissingBlocks(context.Background(), block, beaconState); err != nil {
t.Fatal(err)
}
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
if len(service.forkChoiceStore.Nodes()) != 5 {
t.Error("Miss match nodes")
}
if !service.forkChoiceStore.HasNode(bytesutil.ToBytes32(roots[4])) {
t.Error("Didn't save node")
}
if !service.forkChoiceStore.HasNode(bytesutil.ToBytes32(roots[6])) {
t.Error("Didn't save node")
}
if !service.forkChoiceStore.HasNode(bytesutil.ToBytes32(roots[8])) {
t.Error("Didn't save node")
}
}
func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db}
service, err := NewService(ctx, cfg)
if err != nil {
t.Fatal(err)
}
service.forkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
// Set finalized epoch to 1.
service.finalizedCheckpt = &ethpb.Checkpoint{Epoch: 1}
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
if err := db.SaveBlock(ctx, genesis); err != nil {
t.Error(err)
}
validGenesisRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Error(err)
}
st, _ := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
if err := service.beaconDB.SaveState(ctx, st.Copy(), validGenesisRoot); err != nil {
t.Fatal(err)
}
// Define a tree branch, slot 63 <- 64 <- 65
b63 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 63}}
if err := service.beaconDB.SaveBlock(ctx, b63); err != nil {
t.Fatal(err)
}
r63, _ := ssz.HashTreeRoot(b63.Block)
b64 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 64, ParentRoot: r63[:]}}
if err := service.beaconDB.SaveBlock(ctx, b64); err != nil {
t.Fatal(err)
}
r64, _ := ssz.HashTreeRoot(b64.Block)
b65 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 65, ParentRoot: r64[:]}}
if err := service.beaconDB.SaveBlock(ctx, b65); err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
if err := service.fillInForkChoiceMissingBlocks(context.Background(), b65.Block, beaconState); err != nil {
t.Fatal(err)
}
// There should be 2 nodes, block 65 and block 64.
if len(service.forkChoiceStore.Nodes()) != 2 {
t.Error("Miss match nodes")
}
// Block with slot 63 should be in fork choice because it's less than finalized epoch 1.
if !service.forkChoiceStore.HasNode(r63) {
t.Error("Didn't save node")
}
}
// blockTree1 constructs the following tree:
//    /- B1
// B0           /- B5 - B7
//    \- B3 - B4 - B6 - B8
// (B1 and B3 share the same parent, B0)
func blockTree1(db db.Database, genesisRoot []byte) ([][]byte, error) {
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: genesisRoot}
r0, _ := ssz.HashTreeRoot(b0)
b1 := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r0[:]}
r1, _ := ssz.HashTreeRoot(b1)
b3 := &ethpb.BeaconBlock{Slot: 3, ParentRoot: r0[:]}
r3, _ := ssz.HashTreeRoot(b3)
b4 := &ethpb.BeaconBlock{Slot: 4, ParentRoot: r3[:]}
r4, _ := ssz.HashTreeRoot(b4)
b5 := &ethpb.BeaconBlock{Slot: 5, ParentRoot: r4[:]}
r5, _ := ssz.HashTreeRoot(b5)
b6 := &ethpb.BeaconBlock{Slot: 6, ParentRoot: r4[:]}
r6, _ := ssz.HashTreeRoot(b6)
b7 := &ethpb.BeaconBlock{Slot: 7, ParentRoot: r5[:]}
r7, _ := ssz.HashTreeRoot(b7)
b8 := &ethpb.BeaconBlock{Slot: 8, ParentRoot: r6[:]}
r8, _ := ssz.HashTreeRoot(b8)
st, err := stateTrie.InitializeFromProtoUnsafe(&pb.BeaconState{})
if err != nil {
return nil, err
}
for _, b := range []*ethpb.BeaconBlock{b0, b1, b3, b4, b5, b6, b7, b8} {
if err := db.SaveBlock(context.Background(), &ethpb.SignedBeaconBlock{Block: b}); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), st.Copy(), bytesutil.ToBytes32(b.ParentRoot)); err != nil {
return nil, err
}
}
if err := db.SaveState(context.Background(), st.Copy(), r1); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), st.Copy(), r7); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), st.Copy(), r8); err != nil {
return nil, err
}
return [][]byte{r0[:], r1[:], nil, r3[:], r4[:], r5[:], r6[:], r7[:], r8[:]}, nil
}

View File

@@ -1,14 +1,17 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"time"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/slotutil"
"github.com/sirupsen/logrus"
@@ -18,6 +21,7 @@ import (
// AttestationReceiver interface defines the methods of chain service receive and processing new attestations.
type AttestationReceiver interface {
ReceiveAttestationNoPubsub(ctx context.Context, att *ethpb.Attestation) error
IsValidAttestation(ctx context.Context, att *ethpb.Attestation) bool
}
// ReceiveAttestationNoPubsub is a function that defines the operations that are performed on
@@ -29,39 +33,51 @@ func (s *Service) ReceiveAttestationNoPubsub(ctx context.Context, att *ethpb.Att
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveAttestationNoPubsub")
defer span.End()
// Update forkchoice store for the new attestation
if err := s.forkChoiceStore.OnAttestation(ctx, att); err != nil {
return errors.Wrap(err, "could not process attestation from fork choice service")
}
// Run fork choice for head block after updating fork choice store.
headRoot, err := s.forkChoiceStore.Head(ctx)
_, err := s.onAttestation(ctx, att)
if err != nil {
return errors.Wrap(err, "could not get head from fork choice service")
return errors.Wrap(err, "could not process attestation")
}
// Only save head if it's different than the current head.
if !bytes.Equal(headRoot, s.HeadRoot()) {
signed, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(headRoot))
if !featureconfig.Get().DisableUpdateHeadPerAttestation {
baseState, err := s.getAttPreState(ctx, att.Data.Target)
if err != nil {
return errors.Wrap(err, "could not compute state from block head")
return err
}
if signed == nil || signed.Block == nil {
return errors.New("nil head block")
}
if err := s.saveHead(ctx, signed, bytesutil.ToBytes32(headRoot)); err != nil {
return errors.Wrap(err, "could not save head")
// This updates the fork choice head. If a new head could not be updated due to
// long-range or intermediate forking, it simply logs a warning and returns nil,
// as that's more appropriate than returning an error.
if err := s.updateHead(ctx, baseState.Balances()); err != nil {
log.Warnf("Resolving fork due to new attestation: %v", err)
return nil
}
}
processedAttNoPubsub.Inc()
return nil
}
// IsValidAttestation returns true if the attestation can be verified against its pre-state.
func (s *Service) IsValidAttestation(ctx context.Context, att *ethpb.Attestation) bool {
baseState, err := s.getAttPreState(ctx, att.Data.Target)
if err != nil {
log.WithError(err).Error("Failed to validate attestation")
return false
}
if err := blocks.VerifyAttestation(ctx, baseState, att); err != nil {
log.WithError(err).Error("Failed to validate attestation")
return false
}
return true
}
// This processes attestations from the attestation pool to account for validator votes and fork choice.
func (s *Service) processAttestation() {
func (s *Service) processAttestation(subscribedToStateEvents chan struct{}) {
// Wait for state to be initialized.
stateChannel := make(chan *feed.Event, 1)
stateSub := s.stateNotifier.StateFeed().Subscribe(stateChannel)
subscribedToStateEvents <- struct{}{}
<-stateChannel
stateSub.Unsubscribe()
@@ -74,16 +90,56 @@ func (s *Service) processAttestation() {
ctx := context.Background()
atts := s.attPool.ForkchoiceAttestations()
for _, a := range atts {
var hasState bool
if featureconfig.Get().NewStateMgmt {
hasState = s.stateGen.StateSummaryExists(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
} else {
hasState = s.beaconDB.HasState(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot)) && s.beaconDB.HasState(ctx, bytesutil.ToBytes32(a.Data.Target.Root))
}
hasBlock := s.hasBlock(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
if !(hasState && hasBlock) {
continue
}
if err := s.attPool.DeleteForkchoiceAttestation(a); err != nil {
log.WithError(err).Error("Could not delete fork choice attestation in pool")
}
if !s.verifyCheckpointEpoch(a.Data.Target) {
continue
}
if err := s.ReceiveAttestationNoPubsub(ctx, a); err != nil {
log.WithFields(logrus.Fields{
"targetRoot": fmt.Sprintf("%#x", a.Data.Target.Root),
}).WithError(err).Error("Could not receive attestation in chain service")
"slot": a.Data.Slot,
"committeeIndex": a.Data.CommitteeIndex,
"beaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.Data.BeaconBlockRoot)),
"targetRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.Data.Target.Root)),
"aggregationCount": a.AggregationBits.Count(),
}).WithError(err).Warn("Could not receive attestation in chain service")
}
}
}
}
}
// This verifies that the epoch of the input checkpoint is within the current epoch or the previous epoch
// with respect to the current time. Returns true if it is, false if it's not.
func (s *Service) verifyCheckpointEpoch(c *ethpb.Checkpoint) bool {
now := uint64(time.Now().Unix())
genesisTime := uint64(s.genesisTime.Unix())
currentSlot := (now - genesisTime) / params.BeaconConfig().SecondsPerSlot
currentEpoch := helpers.SlotToEpoch(currentSlot)
var prevEpoch uint64
if currentEpoch > 1 {
prevEpoch = currentEpoch - 1
}
if c.Epoch != prevEpoch && c.Epoch != currentEpoch {
return false
}
return true
}
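// Worked example (illustrative), assuming SecondsPerSlot = 12 and SlotsPerEpoch = 32:
// if genesis was 9600 seconds ago, currentSlot = 9600 / 12 = 800 and
// currentEpoch = 800 / 32 = 25, so checkpoints with epoch 24 or 25 pass this check,
// while a checkpoint with epoch 23 (or older) is rejected.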

View File

@@ -2,45 +2,24 @@ package blockchain
import (
"testing"
"time"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
"golang.org/x/net/context"
)
func TestReceiveAttestationNoPubsub_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
func TestVerifyCheckpointEpoch_Ok(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
r, _ := ssz.HashTreeRoot(&ethpb.BeaconBlock{})
chainService.forkChoiceStore = &store{headRoot: r[:]}
chainService.genesisTime = time.Now()
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := chainService.beaconDB.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.HashTreeRoot(b.Block)
if err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveState(ctx, &pb.BeaconState{}, root); err != nil {
t.Fatal(err)
if !chainService.verifyCheckpointEpoch(&ethpb.Checkpoint{}) {
t.Error("Wanted true, got false")
}
a := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: root[:]},
}}
if err := chainService.ReceiveAttestationNoPubsub(ctx, a); err != nil {
t.Fatal(err)
if chainService.verifyCheckpointEpoch(&ethpb.Checkpoint{Epoch: 1}) {
t.Error("Wanted false, got true")
}
testutil.AssertLogsContain(t, hook, "Saved new head info")
testutil.AssertLogsDoNotContain(t, hook, "Broadcasting attestation")
}

View File

@@ -5,7 +5,6 @@ import (
"context"
"encoding/hex"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
@@ -13,7 +12,8 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/traceutil"
"github.com/sirupsen/logrus"
@@ -26,6 +26,7 @@ type BlockReceiver interface {
ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.SignedBeaconBlock) error
ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.SignedBeaconBlock) error
ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedBeaconBlock) error
HasInitSyncBlock(root [32]byte) bool
}
// ReceiveBlock is a function that defines the operations that are performed on
@@ -55,7 +56,6 @@ func (s *Service) ReceiveBlock(ctx context.Context, block *ethpb.SignedBeaconBlo
return err
}
processedBlk.Inc()
return nil
}
@@ -67,35 +67,40 @@ func (s *Service) ReceiveBlock(ctx context.Context, block *ethpb.SignedBeaconBlo
func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlockNoPubsub")
defer span.End()
blockCopy := proto.Clone(block).(*ethpb.SignedBeaconBlock)
blockCopy := stateTrie.CopySignedBeaconBlock(block)
// Apply state transition on the new block.
if err := s.forkChoiceStore.OnBlock(ctx, blockCopy); err != nil {
err := errors.Wrap(err, "could not process block from fork choice service")
postState, err := s.onBlock(ctx, blockCopy)
if err != nil {
err := errors.Wrap(err, "could not process block")
traceutil.AnnotateError(span, err)
return err
}
root, err := ssz.HashTreeRoot(blockCopy.Block)
// Add attestations from the block to the pool for fork choice.
if err := s.attPool.SaveBlockAttestations(blockCopy.Block.Body.Attestations); err != nil {
log.Errorf("Could not save attestation for fork choice: %v", err)
return nil
}
for _, exit := range block.Block.Body.VoluntaryExits {
s.exitPool.MarkIncluded(exit)
}
s.epochParticipationLock.Lock()
defer s.epochParticipationLock.Unlock()
s.epochParticipation[helpers.SlotToEpoch(blockCopy.Block.Slot)] = precompute.Balances
root, err := stateutil.BlockRoot(blockCopy.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
// Run fork choice after applying state transition on the new block.
headRoot, err := s.forkChoiceStore.Head(ctx)
if err != nil {
return errors.Wrap(err, "could not get head from fork choice service")
}
signedHeadBlock, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(headRoot))
if err != nil {
return errors.Wrap(err, "could not compute state from block head")
}
if signedHeadBlock == nil || signedHeadBlock.Block == nil {
return errors.New("nil head block")
}
// Only save head if it's different than the current head.
if !bytes.Equal(headRoot, s.HeadRoot()) {
if err := s.saveHead(ctx, signedHeadBlock, bytesutil.ToBytes32(headRoot)); err != nil {
if featureconfig.Get().DisableForkChoice && block.Block.Slot > s.headSlot() {
if err := s.saveHead(ctx, root); err != nil {
return errors.Wrap(err, "could not save head")
}
} else {
if err := s.updateHead(ctx, postState.Balances()); err != nil {
return errors.Wrap(err, "could not save head")
}
}
@@ -104,31 +109,17 @@ func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.SignedB
s.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: blockCopy.Block.Slot,
BlockRoot: root,
Verified: true,
},
})
// Add attestations from the block to the pool for fork choice.
if err := s.attPool.SaveBlockAttestations(blockCopy.Block.Body.Attestations); err != nil {
log.Errorf("Could not save attestation for fork choice: %v", err)
return nil
}
// Reports on block and fork choice metrics.
s.reportSlotMetrics(blockCopy.Block.Slot)
// Log if block is a competing block.
isCompetingBlock(root[:], blockCopy.Block.Slot, headRoot, signedHeadBlock.Block.Slot)
reportSlotMetrics(blockCopy.Block.Slot, s.headSlot(), s.finalizedCheckpt)
// Log state transition data.
logStateTransitionData(blockCopy.Block, root[:])
s.epochParticipationLock.Lock()
defer s.epochParticipationLock.Unlock()
s.epochParticipation[helpers.SlotToEpoch(blockCopy.Block.Slot)] = precompute.Balances
processedBlkNoPubsub.Inc()
logStateTransitionData(blockCopy.Block)
return nil
}
@@ -140,21 +131,26 @@ func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.SignedB
func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlockNoForkchoice")
defer span.End()
blockCopy := proto.Clone(block).(*ethpb.SignedBeaconBlock)
blockCopy := stateTrie.CopySignedBeaconBlock(block)
// Apply state transition on the incoming newly received block.
if err := s.forkChoiceStore.OnBlock(ctx, blockCopy); err != nil {
err := errors.Wrap(err, "could not process block from fork choice service")
// Apply state transition on the new block.
_, err := s.onBlock(ctx, blockCopy)
if err != nil {
err := errors.Wrap(err, "could not process block")
traceutil.AnnotateError(span, err)
return err
}
root, err := ssz.HashTreeRoot(blockCopy.Block)
root, err := stateutil.BlockRoot(blockCopy.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
if !bytes.Equal(root[:], s.HeadRoot()) {
if err := s.saveHead(ctx, blockCopy, root); err != nil {
cachedHeadRoot, err := s.HeadRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not get head root from cache")
}
if !bytes.Equal(root[:], cachedHeadRoot) {
if err := s.saveHead(ctx, root); err != nil {
return errors.Wrap(err, "could not save head")
}
}
@@ -163,22 +159,22 @@ func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *eth
s.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: blockCopy.Block.Slot,
BlockRoot: root,
Verified: true,
},
})
// Reports on block and fork choice metrics.
s.reportSlotMetrics(blockCopy.Block.Slot)
reportSlotMetrics(blockCopy.Block.Slot, s.headSlot(), s.finalizedCheckpt)
// Log state transition data.
logStateTransitionData(blockCopy.Block, root[:])
logStateTransitionData(blockCopy.Block)
s.epochParticipationLock.Lock()
defer s.epochParticipationLock.Unlock()
s.epochParticipation[helpers.SlotToEpoch(blockCopy.Block.Slot)] = precompute.Balances
processedBlkNoPubsubForkchoice.Inc()
return nil
}
@@ -188,32 +184,30 @@ func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *eth
func (s *Service) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlockNoVerify")
defer span.End()
blockCopy := proto.Clone(block).(*ethpb.SignedBeaconBlock)
blockCopy := stateTrie.CopySignedBeaconBlock(block)
// Apply state transition on the incoming newly received blockCopy without verifying its BLS contents.
if err := s.forkChoiceStore.OnBlockInitialSyncStateTransition(ctx, blockCopy); err != nil {
return errors.Wrap(err, "could not process blockCopy from fork choice service")
if err := s.onBlockInitialSyncStateTransition(ctx, blockCopy); err != nil {
err := errors.Wrap(err, "could not process block")
traceutil.AnnotateError(span, err)
return err
}
root, err := ssz.HashTreeRoot(blockCopy.Block)
root, err := stateutil.BlockRoot(blockCopy.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received blockCopy")
}
if featureconfig.Get().InitSyncCacheState {
if !bytes.Equal(root[:], s.HeadRoot()) {
if err := s.saveHeadNoDB(ctx, blockCopy, root); err != nil {
err := errors.Wrap(err, "could not save head")
traceutil.AnnotateError(span, err)
return err
}
}
} else {
if !bytes.Equal(root[:], s.HeadRoot()) {
if err := s.saveHead(ctx, blockCopy, root); err != nil {
err := errors.Wrap(err, "could not save head")
traceutil.AnnotateError(span, err)
return err
}
cachedHeadRoot, err := s.HeadRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not get head root from cache")
}
if !bytes.Equal(root[:], cachedHeadRoot) {
if err := s.saveHeadNoDB(ctx, blockCopy, root); err != nil {
err := errors.Wrap(err, "could not save head")
traceutil.AnnotateError(span, err)
return err
}
}
@@ -221,13 +215,14 @@ func (s *Service) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedB
s.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: blockCopy.Block.Slot,
BlockRoot: root,
Verified: false,
},
})
// Reports on blockCopy and fork choice metrics.
s.reportSlotMetrics(blockCopy.Block.Slot)
reportSlotMetrics(blockCopy.Block.Slot, s.headSlot(), s.finalizedCheckpt)
// Log state transition data.
log.WithFields(logrus.Fields{
@@ -243,15 +238,7 @@ func (s *Service) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedB
return nil
}
// This checks if the block is from a competing chain, emits warning and updates metrics.
func isCompetingBlock(root []byte, slot uint64, headRoot []byte, headSlot uint64) {
if !bytes.Equal(root[:], headRoot) {
log.WithFields(logrus.Fields{
"blkSlot": slot,
"blkRoot": hex.EncodeToString(root[:]),
"headSlot": headSlot,
"headRoot": hex.EncodeToString(headRoot),
}).Warn("Calculated head diffs from new block")
competingBlks.Inc()
}
// HasInitSyncBlock returns true if the block of the input root exists in initial sync blocks cache.
func (s *Service) HasInitSyncBlock(root [32]byte) bool {
return s.hasInitSyncBlock(root)
}
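
The removed isCompetingBlock helper above boils down to a simple guard: compare the freshly computed block root against the current head root and, if they differ, log the divergence and bump a counter. A minimal standalone sketch of that check, with hypothetical roots and plain standard-library logging standing in for the logrus fields and Prometheus counter:

package main

import (
    "bytes"
    "encoding/hex"
    "log"
)

// warnIfCompeting mirrors the removed isCompetingBlock helper: it reports
// whether the new block's root diverges from the cached head root.
func warnIfCompeting(blkRoot []byte, blkSlot uint64, headRoot []byte, headSlot uint64) bool {
    if bytes.Equal(blkRoot, headRoot) {
        return false
    }
    // The original emits a logrus warning and increments a competingBlks counter here.
    log.Printf("competing block: blkSlot=%d blkRoot=%s headSlot=%d headRoot=%s",
        blkSlot, hex.EncodeToString(blkRoot), headSlot, hex.EncodeToString(headRoot))
    return true
}

func main() {
    head := []byte{0xaa}
    blk := []byte{0xbb}
    warnIfCompeting(blk, 101, head, 100) // roots differ, so the warning fires
}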


@@ -1,195 +0,0 @@
package blockchain
import (
"bytes"
"context"
"reflect"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/stateutil"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestReceiveBlock_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
genesis, _ := testutil.GenerateFullBlock(beaconState, privKeys, nil, beaconState.Slot+1)
beaconState, err := state.ExecuteStateTransition(ctx, beaconState, genesis)
if err != nil {
t.Fatal(err)
}
genesisBlkRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, beaconState, genesisBlkRoot); err != nil {
t.Fatal(err)
}
cp := &ethpb.Checkpoint{Root: genesisBlkRoot[:]}
if err := chainService.forkChoiceStore.GenesisStore(ctx, cp, cp); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveBlock(ctx, genesis); err != nil {
t.Fatalf("Could not save block to db: %v", err)
}
if err := db.SaveState(ctx, beaconState, genesisBlkRoot); err != nil {
t.Fatal(err)
}
slot := beaconState.Slot + 1
block, err := testutil.GenerateFullBlock(beaconState, privKeys, nil, slot)
if err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveBlock(ctx, block); err != nil {
t.Fatal(err)
}
if err := chainService.ReceiveBlock(context.Background(), block); err != nil {
t.Errorf("Block failed processing: %v", err)
}
testutil.AssertLogsContain(t, hook, "Finished applying state transition")
}
func TestReceiveReceiveBlockNoPubsub_CanSaveHeadInfo(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
headBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 100}}
if err := db.SaveBlock(ctx, headBlk); err != nil {
t.Fatal(err)
}
r, err := ssz.HashTreeRoot(headBlk.Block)
if err != nil {
t.Fatal(err)
}
head := &pb.BeaconState{Slot: 100, FinalizedCheckpoint: &ethpb.Checkpoint{Root: r[:]}}
if err := db.SaveState(ctx, head, r); err != nil {
t.Fatal(err)
}
chainService.forkChoiceStore = &store{headRoot: r[:]}
if err := chainService.ReceiveBlockNoPubsub(ctx, &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: 1,
Body: &ethpb.BeaconBlockBody{},
},
}); err != nil {
t.Fatal(err)
}
if !bytes.Equal(r[:], chainService.HeadRoot()) {
t.Error("Incorrect head root saved")
}
if !reflect.DeepEqual(headBlk, chainService.HeadBlock()) {
t.Error("Incorrect head block saved")
}
testutil.AssertLogsContain(t, hook, "Saved new head info")
}
func TestReceiveReceiveBlockNoPubsub_SameHead(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
headBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := db.SaveBlock(ctx, headBlk); err != nil {
t.Fatal(err)
}
newBlk := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: 1,
Body: &ethpb.BeaconBlockBody{},
},
}
newRoot, _ := ssz.HashTreeRoot(newBlk.Block)
if err := db.SaveBlock(ctx, newBlk); err != nil {
t.Fatal(err)
}
chainService.forkChoiceStore = &store{headRoot: newRoot[:]}
chainService.canonicalRoots[0] = newRoot[:]
if err := chainService.ReceiveBlockNoPubsub(ctx, newBlk); err != nil {
t.Fatal(err)
}
testutil.AssertLogsDoNotContain(t, hook, "Saved new head info")
}
func TestReceiveBlockNoPubsubForkchoice_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
block, err := testutil.GenerateFullBlock(beaconState, privKeys, nil, beaconState.Slot)
if err != nil {
t.Fatal(err)
}
stateRoot, err := stateutil.HashTreeRootState(beaconState)
if err != nil {
t.Fatal(err)
}
genesis := b.NewGenesisBlock(stateRoot[:])
parentRoot, err := ssz.HashTreeRoot(genesis.Block)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, beaconState, parentRoot); err != nil {
t.Fatal(err)
}
if err := chainService.forkChoiceStore.GenesisStore(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}, &ethpb.Checkpoint{Root: parentRoot[:]}); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveBlock(ctx, block); err != nil {
t.Fatalf("Could not save block to db: %v", err)
}
block, err = testutil.GenerateFullBlock(beaconState, privKeys, nil, beaconState.Slot)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, beaconState, bytesutil.ToBytes32(block.Block.ParentRoot)); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveBlock(ctx, block); err != nil {
t.Fatal(err)
}
if err := chainService.ReceiveBlockNoPubsubForkchoice(context.Background(), block); err != nil {
t.Errorf("Block failed processing: %v", err)
}
testutil.AssertLogsContain(t, hook, "Finished applying state transition")
testutil.AssertLogsDoNotContain(t, hook, "Finished fork choice")
}


@@ -13,7 +13,7 @@ import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/forkchoice"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
@@ -22,13 +22,21 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/beacon-chain/flags"
f "github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/slashings"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/sirupsen/logrus"
"github.com/prysmaticlabs/prysm/shared/params"
"go.opencensus.io/trace"
)
@@ -37,42 +45,61 @@ import (
type Service struct {
ctx context.Context
cancel context.CancelFunc
beaconDB db.Database
beaconDB db.HeadAccessDatabase
depositCache *depositcache.DepositCache
chainStartFetcher powchain.ChainStartFetcher
attPool attestations.Pool
forkChoiceStore forkchoice.ForkChoicer
slashingPool *slashings.Pool
exitPool *voluntaryexits.Pool
genesisTime time.Time
p2p p2p.Broadcaster
maxRoutines int64
headSlot uint64
headBlock *ethpb.SignedBeaconBlock
headState *pb.BeaconState
canonicalRoots map[uint64][]byte
head *head
headLock sync.RWMutex
stateNotifier statefeed.Notifier
genesisRoot [32]byte
epochParticipation map[uint64]*precompute.Balance
epochParticipationLock sync.RWMutex
forkChoiceStore f.ForkChoicer
justifiedCheckpt *ethpb.Checkpoint
prevJustifiedCheckpt *ethpb.Checkpoint
bestJustifiedCheckpt *ethpb.Checkpoint
finalizedCheckpt *ethpb.Checkpoint
prevFinalizedCheckpt *ethpb.Checkpoint
nextEpochBoundarySlot uint64
voteLock sync.RWMutex
initSyncState map[[32]byte]*stateTrie.BeaconState
boundaryRoots [][32]byte
initSyncStateLock sync.RWMutex
checkpointState *cache.CheckpointStateCache
checkpointStateLock sync.Mutex
stateGen *stategen.State
opsService *attestations.Service
initSyncBlocks map[[32]byte]*ethpb.SignedBeaconBlock
initSyncBlocksLock sync.RWMutex
}
// Config options for the service.
type Config struct {
BeaconBlockBuf int
ChainStartFetcher powchain.ChainStartFetcher
BeaconDB db.Database
BeaconDB db.HeadAccessDatabase
DepositCache *depositcache.DepositCache
AttPool attestations.Pool
ExitPool *voluntaryexits.Pool
SlashingPool *slashings.Pool
P2p p2p.Broadcaster
MaxRoutines int64
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
OpsService *attestations.Service
StateGen *stategen.State
}
// NewService instantiates a new block service instance that will
// be registered into a running beacon node.
func NewService(ctx context.Context, cfg *Config) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
store := forkchoice.NewForkChoiceService(ctx, cfg.BeaconDB)
return &Service{
ctx: ctx,
cancel: cancel,
@@ -80,12 +107,19 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
depositCache: cfg.DepositCache,
chainStartFetcher: cfg.ChainStartFetcher,
attPool: cfg.AttPool,
forkChoiceStore: store,
exitPool: cfg.ExitPool,
slashingPool: cfg.SlashingPool,
p2p: cfg.P2p,
canonicalRoots: make(map[uint64][]byte),
maxRoutines: cfg.MaxRoutines,
stateNotifier: cfg.StateNotifier,
epochParticipation: make(map[uint64]*precompute.Balance),
forkChoiceStore: cfg.ForkChoiceStore,
initSyncState: make(map[[32]byte]*stateTrie.BeaconState),
boundaryRoots: [][32]byte{},
checkpointState: cache.NewCheckpointStateCache(),
opsService: cfg.OpsService,
stateGen: cfg.StateGen,
initSyncBlocks: make(map[[32]byte]*ethpb.SignedBeaconBlock),
}, nil
}
@@ -100,12 +134,18 @@ func (s *Service) Start() {
// For running initial sync with the state cache, in the event of a restart we use the
// last finalized checkpoint as the starting point for sync instead of the head
// state. This is because we no longer save the state every slot during sync.
if featureconfig.Get().InitSyncCacheState {
cp, err := s.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
log.Fatalf("Could not fetch finalized cp: %v", err)
}
if beaconState == nil {
cp, err := s.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
log.Fatalf("Could not fetch finalized cp: %v", err)
}
if beaconState == nil {
if featureconfig.Get().NewStateMgmt {
beaconState, err = s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(cp.Root))
if err != nil {
log.Fatalf("Could not fetch beacon state by root: %v", err)
}
} else {
beaconState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(cp.Root))
if err != nil {
log.Fatalf("Could not fetch beacon state: %v", err)
@@ -113,10 +153,14 @@ func (s *Service) Start() {
}
}
// Make sure that attestation processor is subscribed and ready for state initializing event.
attestationProcessorSubscribed := make(chan struct{}, 1)
// If the chain has already been initialized, simply start the block processing routine.
if beaconState != nil {
log.Info("Blockchain data already exists in DB, initializing...")
s.genesisTime = time.Unix(int64(beaconState.GenesisTime), 0)
s.genesisTime = time.Unix(int64(beaconState.GenesisTime()), 0)
s.opsService.SetGenesisTime(beaconState.GenesisTime())
if err := s.initializeChainInfo(ctx); err != nil {
log.Fatalf("Could not set up chain info: %v", err)
}
@@ -128,9 +172,23 @@ func (s *Service) Start() {
if err != nil {
log.Fatalf("Could not get finalized checkpoint: %v", err)
}
if err := s.forkChoiceStore.GenesisStore(ctx, justifiedCheckpoint, finalizedCheckpoint); err != nil {
log.Fatalf("Could not start fork choice service: %v", err)
// Resume fork choice.
s.justifiedCheckpt = stateTrie.CopyCheckpoint(justifiedCheckpoint)
s.prevJustifiedCheckpt = stateTrie.CopyCheckpoint(justifiedCheckpoint)
s.bestJustifiedCheckpt = stateTrie.CopyCheckpoint(justifiedCheckpoint)
s.finalizedCheckpt = stateTrie.CopyCheckpoint(finalizedCheckpoint)
s.prevFinalizedCheckpt = stateTrie.CopyCheckpoint(finalizedCheckpoint)
s.resumeForkChoice(justifiedCheckpoint, finalizedCheckpoint)
if !featureconfig.Get().NewStateMgmt {
if finalizedCheckpoint.Epoch > 1 {
if err := s.pruneGarbageState(ctx, helpers.StartSlot(finalizedCheckpoint.Epoch)-params.BeaconConfig().SlotsPerEpoch); err != nil {
log.WithError(err).Warn("Could not prune old states")
}
}
}
s.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.Initialized,
Data: &statefeed.InitializedData{
@@ -147,6 +205,7 @@ func (s *Service) Start() {
stateChannel := make(chan *feed.Event, 1)
stateSub := s.stateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
<-attestationProcessorSubscribed
for {
select {
case event := <-stateChannel:
@@ -167,7 +226,7 @@ func (s *Service) Start() {
}()
}
go s.processAttestation()
go s.processAttestation(attestationProcessorSubscribed)
}
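
The attestationProcessorSubscribed channel added to Start is a small readiness handshake: the state-feed goroutine blocks on the channel until processAttestation has set up its own subscription, so no event can be published before the processor is able to see it. A minimal self-contained sketch of that pattern, under the assumption that the worker signals on the channel once its subscription is in place (the names here are illustrative, not the Prysm ones):

package main

import (
    "fmt"
    "sync"
)

func main() {
    ready := make(chan struct{}, 1) // buffered, like attestationProcessorSubscribed
    events := make(chan int)
    var wg sync.WaitGroup

    wg.Add(1)
    go func() {
        defer wg.Done()
        // ... set up the feed subscription first ...
        ready <- struct{}{} // signal readiness exactly once
        for e := range events {
            fmt.Println("processed event", e)
        }
    }()

    <-ready // wait until the worker has subscribed, so the event below is not missed
    events <- 42
    close(events)
    wg.Wait()
}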
// processChainStartTime initializes a series of deposits from the ChainStart deposits in the eth1
@@ -191,11 +250,10 @@ func (s *Service) processChainStartTime(ctx context.Context, genesisTime time.Ti
func (s *Service) initializeBeaconChain(
ctx context.Context,
genesisTime time.Time,
preGenesisState *pb.BeaconState,
preGenesisState *stateTrie.BeaconState,
eth1data *ethpb.Eth1Data) error {
_, span := trace.StartSpan(context.Background(), "beacon-chain.Service.initializeBeaconChain")
defer span.End()
log.Info("Genesis time reached, starting the beacon chain")
s.genesisTime = genesisTime
unixTime := uint64(genesisTime.Unix())
@@ -208,12 +266,20 @@ func (s *Service) initializeBeaconChain(
return errors.Wrap(err, "could not save genesis data")
}
log.Info("Initialized beacon chain genesis state")
// Clear out all pre-genesis data now that the state is initialized.
s.chainStartFetcher.ClearPreGenesisData()
// Update committee shuffled indices for genesis epoch.
if featureconfig.Get().EnableNewCache {
if err := helpers.UpdateCommitteeCache(genesisState, 0 /* genesis epoch */); err != nil {
return err
}
if err := helpers.UpdateCommitteeCache(genesisState, 0 /* genesis epoch */); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(genesisState, 0 /* genesis epoch */); err != nil {
return err
}
s.opsService.SetGenesisTime(genesisState.GenesisTime())
return nil
}
@@ -233,81 +299,29 @@ func (s *Service) Status() error {
return nil
}
// This gets called to update canonical root mapping.
func (s *Service) saveHead(ctx context.Context, signed *ethpb.SignedBeaconBlock, r [32]byte) error {
s.headLock.Lock()
defer s.headLock.Unlock()
if signed == nil || signed.Block == nil {
return errors.New("cannot save nil head block")
}
s.headSlot = signed.Block.Slot
s.canonicalRoots[signed.Block.Slot] = r[:]
if err := s.beaconDB.SaveHeadBlockRoot(ctx, r); err != nil {
return errors.Wrap(err, "could not save head root in DB")
}
s.headBlock = signed
headState, err := s.beaconDB.State(ctx, r)
if err != nil {
return errors.Wrap(err, "could not retrieve head state in DB")
}
s.headState = headState
log.WithFields(logrus.Fields{
"slot": signed.Block.Slot,
"headRoot": fmt.Sprintf("%#x", r),
}).Debug("Saved new head info")
return nil
// ClearCachedStates removes all stored cached states. This is done after the node
// is synced.
func (s *Service) ClearCachedStates() {
s.initSyncState = map[[32]byte]*stateTrie.BeaconState{}
}
// This gets called to update canonical root mapping. It does not save head block
// root in DB. With the inception of the initial-sync-cache-state flag, it uses the finalized
// checkpoint as an anchor to resume sync, so the head no longer needs to be saved on a per-slot basis.
func (s *Service) saveHeadNoDB(ctx context.Context, b *ethpb.SignedBeaconBlock, r [32]byte) error {
s.headLock.Lock()
defer s.headLock.Unlock()
// This gets called when beacon chain is first initialized to save validator indices and public keys in db.
func (s *Service) saveGenesisValidators(ctx context.Context, state *stateTrie.BeaconState) error {
pubkeys := make([][48]byte, state.NumValidators())
indices := make([]uint64, state.NumValidators())
s.headSlot = b.Block.Slot
s.canonicalRoots[b.Block.Slot] = r[:]
s.headBlock = b
headState, err := s.beaconDB.State(ctx, r)
if err != nil {
return errors.Wrap(err, "could not retrieve head state in DB")
for i := 0; i < state.NumValidators(); i++ {
pubkeys[i] = state.PubkeyAtIndex(uint64(i))
indices[i] = uint64(i)
}
s.headState = headState
log.WithFields(logrus.Fields{
"slot": b.Block.Slot,
"headRoot": fmt.Sprintf("%#x", r),
}).Debug("Saved new head info")
return nil
return s.beaconDB.SaveValidatorIndices(ctx, pubkeys, indices)
}
// This gets called when beacon chain is first initialized to save validator indices and pubkeys in db
func (s *Service) saveGenesisValidators(ctx context.Context, state *pb.BeaconState) error {
for i, v := range state.Validators {
if err := s.beaconDB.SaveValidatorIndex(ctx, bytesutil.ToBytes48(v.PublicKey), uint64(i)); err != nil {
return errors.Wrapf(err, "could not save validator index: %d", i)
}
}
return nil
}
// This gets called when beacon chain is first initialized to save genesis data (state, block, and more) in db
func (s *Service) saveGenesisData(ctx context.Context, genesisState *pb.BeaconState) error {
s.headLock.Lock()
defer s.headLock.Unlock()
stateRoot, err := ssz.HashTreeRoot(genesisState)
// This gets called when beacon chain is first initialized to save genesis data (state, block, and more) in db.
func (s *Service) saveGenesisData(ctx context.Context, genesisState *stateTrie.BeaconState) error {
stateRoot, err := genesisState.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not tree hash genesis state")
return err
}
genesisBlk := blocks.NewGenesisBlock(stateRoot[:])
genesisBlkRoot, err := ssz.HashTreeRoot(genesisBlk.Block)
@@ -318,8 +332,20 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState *pb.BeaconSt
if err := s.beaconDB.SaveBlock(ctx, genesisBlk); err != nil {
return errors.Wrap(err, "could not save genesis block")
}
if err := s.beaconDB.SaveState(ctx, genesisState, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could not save genesis state")
if featureconfig.Get().NewStateMgmt {
if err := s.stateGen.SaveState(ctx, genesisBlkRoot, genesisState); err != nil {
return errors.Wrap(err, "could not save genesis state")
}
if err := s.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
Slot: 0,
Root: genesisBlkRoot[:],
}); err != nil {
return err
}
} else {
if err := s.beaconDB.SaveState(ctx, genesisState, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could not save genesis state")
}
}
if err := s.beaconDB.SaveHeadBlockRoot(ctx, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could not save head block root")
@@ -332,23 +358,30 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState *pb.BeaconSt
}
genesisCheckpoint := &ethpb.Checkpoint{Root: genesisBlkRoot[:]}
if err := s.forkChoiceStore.GenesisStore(ctx, genesisCheckpoint, genesisCheckpoint); err != nil {
return errors.Wrap(err, "Could not start fork choice service: %v")
// Add the genesis block to the fork choice store.
s.justifiedCheckpt = stateTrie.CopyCheckpoint(genesisCheckpoint)
s.prevJustifiedCheckpt = stateTrie.CopyCheckpoint(genesisCheckpoint)
s.bestJustifiedCheckpt = stateTrie.CopyCheckpoint(genesisCheckpoint)
s.finalizedCheckpt = stateTrie.CopyCheckpoint(genesisCheckpoint)
s.prevFinalizedCheckpt = stateTrie.CopyCheckpoint(genesisCheckpoint)
if err := s.forkChoiceStore.ProcessBlock(ctx,
genesisBlk.Block.Slot,
genesisBlkRoot,
params.BeaconConfig().ZeroHash,
genesisCheckpoint.Epoch,
genesisCheckpoint.Epoch); err != nil {
log.Fatalf("Could not process genesis block for fork choice: %v", err)
}
s.genesisRoot = genesisBlkRoot
s.headBlock = genesisBlk
s.headState = genesisState
s.canonicalRoots[genesisState.Slot] = genesisBlkRoot[:]
s.setHead(genesisBlkRoot, genesisBlk, genesisState)
return nil
}
// This gets called to initialize chain info variables using the finalized checkpoint stored in DB
func (s *Service) initializeChainInfo(ctx context.Context) error {
s.headLock.Lock()
defer s.headLock.Unlock()
genesisBlock, err := s.beaconDB.GenesisBlock(ctx)
if err != nil {
return errors.Wrap(err, "could not get genesis block from db")
@@ -362,6 +395,23 @@ func (s *Service) initializeChainInfo(ctx context.Context) error {
}
s.genesisRoot = genesisBlkRoot
if flags.Get().UnsafeSync {
headBlock, err := s.beaconDB.HeadBlock(ctx)
if err != nil {
return errors.Wrap(err, "could not retrieve head block")
}
headRoot, err := ssz.HashTreeRoot(headBlock.Block)
if err != nil {
return errors.Wrap(err, "could not hash head block")
}
headState, err := s.beaconDB.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not retrieve head state")
}
s.setHead(headRoot, headBlock, headState)
return nil
}
finalized, err := s.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint from db")
@@ -371,19 +421,75 @@ func (s *Service) initializeChainInfo(ctx context.Context) error {
// would be the genesis state and block.
return errors.New("no finalized epoch in the database")
}
s.headState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(finalized.Root))
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
finalizedRoot := bytesutil.ToBytes32(finalized.Root)
var finalizedState *stateTrie.BeaconState
if featureconfig.Get().NewStateMgmt {
finalizedRoot = s.beaconDB.LastArchivedIndexRoot(ctx)
finalizedState, err = s.stateGen.Resume(ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}
if finalizedRoot == params.BeaconConfig().ZeroHash {
finalizedRoot = bytesutil.ToBytes32(finalized.Root)
}
} else {
finalizedState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(finalized.Root))
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}
}
s.headBlock, err = s.beaconDB.Block(ctx, bytesutil.ToBytes32(finalized.Root))
finalizedBlock, err := s.beaconDB.Block(ctx, finalizedRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized block from db")
}
if s.headBlock != nil && s.headBlock.Block != nil {
s.headSlot = s.headBlock.Block.Slot
if finalizedState == nil || finalizedBlock == nil {
return errors.New("finalized state and block can't be nil")
}
s.canonicalRoots[s.headSlot] = finalized.Root
s.setHead(finalizedRoot, finalizedBlock, finalizedState)
return nil
}
// This is called when a client starts from a non-genesis slot. It deletes the states in DB
// from slot 1 (avoid genesis state) to `slot`.
func (s *Service) pruneGarbageState(ctx context.Context, slot uint64) error {
if featureconfig.Get().DontPruneStateStartUp {
return nil
}
filter := filters.NewFilter().SetStartSlot(1).SetEndSlot(slot)
roots, err := s.beaconDB.BlockRoots(ctx, filter)
if err != nil {
return err
}
if err := s.beaconDB.DeleteStates(ctx, roots); err != nil {
return err
}
if err := s.beaconDB.SaveLastArchivedIndex(ctx, 0); err != nil {
return err
}
return nil
}
// This is called when a client starts from a non-genesis slot. It passes the last justified and finalized
// information to the fork choice service to initialize the fork choice store.
func (s *Service) resumeForkChoice(justifiedCheckpoint *ethpb.Checkpoint, finalizedCheckpoint *ethpb.Checkpoint) {
store := protoarray.New(justifiedCheckpoint.Epoch, finalizedCheckpoint.Epoch, bytesutil.ToBytes32(finalizedCheckpoint.Root))
s.forkChoiceStore = store
}
// This returns true if block has been processed before. Two ways to verify the block has been processed:
// 1.) Check fork choice store.
// 2.) Check DB.
// Checking 1.) is ten times faster than checking 2.)
func (s *Service) hasBlock(ctx context.Context, root [32]byte) bool {
if s.forkChoiceStore.HasNode(root) {
return true
}
return s.beaconDB.HasBlock(ctx, root)
}
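
Per the comment above, hasBlock is a two-tier membership check: consult the in-memory fork choice store first and only fall back to the (roughly ten times slower) database lookup on a miss. A generic, self-contained sketch of that lookup order, using plain maps as stand-ins for the fork choice store and the beacon DB:

package main

import "fmt"

// hasBlock checks the cheap in-memory index first and falls back to the
// slower persistent store only when the root is not found there.
func hasBlock(mem, disk map[[32]byte]bool, root [32]byte) bool {
    if mem[root] {
        return true
    }
    return disk[root]
}

func main() {
    root := [32]byte{1}
    mem := map[[32]byte]bool{}             // e.g. fork choice nodes
    disk := map[[32]byte]bool{root: true}  // e.g. blocks persisted in the DB
    fmt.Println(hasBlock(mem, disk, root)) // true, found via the fallback
}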


@@ -5,7 +5,6 @@ import (
"io/ioutil"
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/sirupsen/logrus"
)
@@ -19,19 +18,16 @@ func TestChainService_SaveHead_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
beaconDB: db,
}
go func() {
s.saveHead(
context.Background(),
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 777}},
[32]byte{},
)
}()
s.saveHead(
context.Background(),
&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 888}},
[32]byte{},
)
}


@@ -20,12 +20,15 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
protodb "github.com/prysmaticlabs/prysm/proto/beacon/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/event"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
@@ -42,16 +45,20 @@ type store struct {
headRoot []byte
}
func (s *store) OnBlock(ctx context.Context, b *ethpb.SignedBeaconBlock) error {
return nil
func (s *store) OnBlock(ctx context.Context, b *ethpb.SignedBeaconBlock) (*beaconstate.BeaconState, error) {
return nil, nil
}
func (s *store) OnBlockInitialSyncStateTransition(ctx context.Context, b *ethpb.SignedBeaconBlock) error {
return nil
func (s *store) OnBlockCacheFilteredTree(ctx context.Context, b *ethpb.SignedBeaconBlock) (*beaconstate.BeaconState, error) {
return nil, nil
}
func (s *store) OnAttestation(ctx context.Context, a *ethpb.Attestation) error {
return nil
func (s *store) OnBlockInitialSyncStateTransition(ctx context.Context, b *ethpb.SignedBeaconBlock) (*beaconstate.BeaconState, error) {
return nil, nil
}
func (s *store) OnAttestation(ctx context.Context, a *ethpb.Attestation) ([]uint64, error) {
return nil, nil
}
func (s *store) GenesisStore(ctx context.Context, justifiedCheckpoint *ethpb.Checkpoint, finalizedCheckpoint *ethpb.Checkpoint) error {
@@ -62,6 +69,10 @@ func (s *store) FinalizedCheckpt() *ethpb.Checkpoint {
return nil
}
func (s *store) JustifiedCheckpt() *ethpb.Checkpoint {
return nil
}
func (s *store) Head(ctx context.Context) ([]byte, error) {
return s.headRoot, nil
}
@@ -94,6 +105,25 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
ctx := context.Background()
var web3Service *powchain.Service
var err error
bState, _ := testutil.DeterministicGenesisState(t, 10)
err = beaconDB.SavePowchainData(ctx, &protodb.ETH1ChainData{
BeaconState: bState.InnerStateUnsafe(),
Trie: &protodb.SparseMerkleTrie{},
CurrentEth1Data: &protodb.LatestETH1Data{
BlockHash: make([]byte, 32),
},
ChainstartData: &protodb.ChainStartData{
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
DepositCount: 0,
BlockHash: make([]byte, 32),
},
},
DepositContainers: []*protodb.DepositContainer{},
})
if err != nil {
t.Fatal(err)
}
web3Service, err = powchain.NewService(ctx, &powchain.Web3ServiceConfig{
BeaconDB: beaconDB,
ETH1Endpoint: endpoint,
@@ -103,6 +133,10 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
t.Fatalf("unable to set up web3 service: %v", err)
}
opsService, err := attestations.NewService(ctx, &attestations.Config{Pool: attestations.NewPool()})
if err != nil {
t.Fatal(err)
}
cfg := &Config{
BeaconBlockBuf: 0,
BeaconDB: beaconDB,
@@ -111,10 +145,13 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
P2p: &mockBroadcaster{},
StateNotifier: &mockBeaconNode{},
AttPool: attestations.NewPool(),
ForkChoiceStore: protoarray.New(0, 0, params.BeaconConfig().ZeroHash),
OpsService: opsService,
}
if err != nil {
t.Fatalf("could not register blockchain service: %v", err)
}
chainService, err := NewService(ctx, cfg)
if err != nil {
t.Fatalf("unable to setup chain service: %v", err)
@@ -165,7 +202,7 @@ func TestChainStartStop_Uninitialized(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if beaconState == nil || beaconState.Slot != 0 {
if beaconState == nil || beaconState.Slot() != 0 {
t.Error("Expected canonical state feed to send a state with genesis block")
}
if err := chainService.Stop(); err != nil {
@@ -176,7 +213,7 @@ func TestChainStartStop_Uninitialized(t *testing.T) {
t.Error("Context was not canceled")
}
testutil.AssertLogsContain(t, hook, "Waiting")
testutil.AssertLogsContain(t, hook, "Genesis time reached")
testutil.AssertLogsContain(t, hook, "Initialized beacon chain genesis state")
}
func TestChainStartStop_Initialized(t *testing.T) {
@@ -195,7 +232,11 @@ func TestChainStartStop_Initialized(t *testing.T) {
if err := db.SaveBlock(ctx, genesisBlk); err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, &pb.BeaconState{Slot: 1}, blkRoot); err != nil {
s, err := beaconstate.InitializeFromProto(&pb.BeaconState{Slot: 1})
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveHeadBlockRoot(ctx, blkRoot); err != nil {
@@ -238,11 +279,14 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
t.Fatal(err)
}
hashTreeRoot := trie.HashTreeRoot()
genState := state.EmptyGenesisState()
genState.Eth1Data = &ethpb.Eth1Data{
genState, err := state.EmptyGenesisState()
if err != nil {
t.Fatal(err)
}
genState.SetEth1Data(&ethpb.Eth1Data{
DepositRoot: hashTreeRoot[:],
DepositCount: uint64(len(deposits)),
}
})
genState, err = b.ProcessDeposits(ctx, genState, &ethpb.BeaconBlockBody{Deposits: deposits})
if err != nil {
t.Fatal(err)
@@ -253,13 +297,13 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
t.Fatal(err)
}
s, err := bc.beaconDB.State(ctx, bytesutil.ToBytes32(bc.canonicalRoots[0]))
s, err := bc.beaconDB.State(ctx, bc.headRoot())
if err != nil {
t.Fatal(err)
}
for _, v := range s.Validators {
if !db.HasValidatorIndex(ctx, bytesutil.ToBytes48(v.PublicKey)) {
for _, v := range s.Validators() {
if !db.HasValidatorIndex(ctx, v.PublicKey) {
t.Errorf("Validator %s missing from db", hex.EncodeToString(v.PublicKey))
}
}
@@ -267,11 +311,15 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
if _, err := bc.HeadState(ctx); err != nil {
t.Error(err)
}
if bc.HeadBlock() == nil {
headBlk, err := bc.HeadBlock(ctx)
if err != nil {
t.Fatal(err)
}
if headBlk == nil {
t.Error("Head state can't be nil after initialize beacon chain")
}
if bc.CanonicalRoot(0) == nil {
t.Error("Canonical root for slot 0 can't be nil after initialize beacon chain")
if bc.headRoot() == params.BeaconConfig().ZeroHash {
t.Error("Canonical root for slot 0 can't be zeros after initialize beacon chain")
}
}
@@ -294,7 +342,10 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: finalizedSlot, ParentRoot: genesisRoot[:]}}
headState := &pb.BeaconState{Slot: finalizedSlot}
headState, err := beaconstate.InitializeFromProto(&pb.BeaconState{Slot: finalizedSlot})
if err != nil {
t.Fatal(err)
}
headRoot, _ := ssz.HashTreeRoot(headBlock.Block)
if err := db.SaveState(ctx, headState, headRoot); err != nil {
t.Fatal(err)
@@ -311,24 +362,32 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
if err := db.SaveBlock(ctx, headBlock); err != nil {
t.Fatal(err)
}
c := &Service{beaconDB: db, canonicalRoots: make(map[uint64][]byte)}
c := &Service{beaconDB: db}
if err := c.initializeChainInfo(ctx); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(c.HeadBlock(), headBlock) {
headBlk, err := c.HeadBlock(ctx)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(headBlk, headBlock) {
t.Error("head block incorrect")
}
s, err := c.HeadState(ctx)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(s, headState) {
if !reflect.DeepEqual(s.InnerStateUnsafe(), headState.InnerStateUnsafe()) {
t.Error("head state incorrect")
}
if headBlock.Block.Slot != c.HeadSlot() {
t.Error("head slot incorrect")
}
if !bytes.Equal(headRoot[:], c.HeadRoot()) {
r, err := c.HeadRoot(context.Background())
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(headRoot[:], r) {
t.Error("head slot incorrect")
}
if c.genesisRoot != genesisRoot {
@@ -341,11 +400,13 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
defer testDB.TeardownDB(t, db)
ctx := context.Background()
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
beaconDB: db,
}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
r, _ := ssz.HashTreeRoot(b)
state := &pb.BeaconState{}
newState, err := beaconstate.InitializeFromProto(state)
if err != nil {
t.Fatal(err)
}
s.beaconDB.SaveState(ctx, newState, r)
if err := s.saveHeadNoDB(ctx, b, r); err != nil {
t.Fatal(err)
}
@@ -358,3 +419,124 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
t.Error("head block should not be equal")
}
}
func TestChainService_PruneOldStates(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
s := &Service{
beaconDB: db,
}
for i := 0; i < 100; i++ {
block := &ethpb.BeaconBlock{Slot: uint64(i)}
if err := s.beaconDB.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: block}); err != nil {
t.Fatal(err)
}
r, err := ssz.HashTreeRoot(block)
if err != nil {
t.Fatal(err)
}
state := &pb.BeaconState{Slot: uint64(i)}
newState, err := beaconstate.InitializeFromProto(state)
if err != nil {
t.Fatal(err)
}
if err := s.beaconDB.SaveState(ctx, newState, r); err != nil {
t.Fatal(err)
}
}
// Delete half of the states.
if err := s.pruneGarbageState(ctx, 50); err != nil {
t.Fatal(err)
}
filter := filters.NewFilter().SetStartSlot(1).SetEndSlot(100)
roots, err := s.beaconDB.BlockRoots(ctx, filter)
if err != nil {
t.Fatal(err)
}
for i := 1; i < 50; i++ {
s, err := s.beaconDB.State(ctx, roots[i])
if err != nil {
t.Fatal(err)
}
if s != nil {
t.Errorf("wanted nil for slot %d", i)
}
}
}
func TestHasBlock_ForkChoiceAndDB(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
forkChoiceStore: protoarray.New(0, 0, [32]byte{}),
finalizedCheckpt: &ethpb.Checkpoint{},
beaconDB: db,
}
block := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, _ := ssz.HashTreeRoot(block.Block)
bs := &pb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{}}
state, _ := beaconstate.InitializeFromProto(bs)
if err := s.insertBlockToForkChoiceStore(ctx, block.Block, r, state); err != nil {
t.Fatal(err)
}
if s.hasBlock(ctx, [32]byte{}) {
t.Error("Should not have block")
}
if !s.hasBlock(ctx, r) {
t.Error("Should have block")
}
}
func BenchmarkHasBlockDB(b *testing.B) {
db := testDB.SetupDB(b)
defer testDB.TeardownDB(b, db)
ctx := context.Background()
s := &Service{
beaconDB: db,
}
block := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := s.beaconDB.SaveBlock(ctx, block); err != nil {
b.Fatal(err)
}
r, _ := ssz.HashTreeRoot(block.Block)
b.ResetTimer()
for i := 0; i < b.N; i++ {
if !s.beaconDB.HasBlock(ctx, r) {
b.Fatal("Block is not in DB")
}
}
}
func BenchmarkHasBlockForkChoiceStore(b *testing.B) {
ctx := context.Background()
db := testDB.SetupDB(b)
defer testDB.TeardownDB(b, db)
s := &Service{
forkChoiceStore: protoarray.New(0, 0, [32]byte{}),
finalizedCheckpt: &ethpb.Checkpoint{},
beaconDB: db,
}
block := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, _ := ssz.HashTreeRoot(block.Block)
bs := &pb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{}}
state, _ := beaconstate.InitializeFromProto(bs)
if err := s.insertBlockToForkChoiceStore(ctx, block.Block, r, state); err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if !s.forkChoiceStore.HasNode(r) {
b.Fatal("Block is not in fork choice store")
}
}
}


@@ -8,10 +8,12 @@ go_library(
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/event:go_default_library",
"//shared/params:go_default_library",


@@ -9,10 +9,12 @@ import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
blockfeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/block"
opfeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/event"
"github.com/prysmaticlabs/prysm/shared/params"
@@ -21,7 +23,7 @@ import (
// ChainService defines the mock interface for testing
type ChainService struct {
State *pb.BeaconState
State *stateTrie.BeaconState
Root []byte
Block *ethpb.SignedBeaconBlock
FinalizedCheckPoint *ethpb.Checkpoint
@@ -33,7 +35,9 @@ type ChainService struct {
Fork *pb.Fork
DB db.Database
stateNotifier statefeed.Notifier
blockNotifier blockfeed.Notifier
opNotifier opfeed.Notifier
ValidAttestation bool
}
// StateNotifier mocks the same method in the chain service.
@@ -44,6 +48,27 @@ func (ms *ChainService) StateNotifier() statefeed.Notifier {
return ms.stateNotifier
}
// BlockNotifier mocks the same method in the chain service.
func (ms *ChainService) BlockNotifier() blockfeed.Notifier {
if ms.blockNotifier == nil {
ms.blockNotifier = &MockBlockNotifier{}
}
return ms.blockNotifier
}
// MockBlockNotifier mocks the block notifier.
type MockBlockNotifier struct {
feed *event.Feed
}
// BlockFeed returns a block feed.
func (msn *MockBlockNotifier) BlockFeed() *event.Feed {
if msn.feed == nil {
msn.feed = new(event.Feed)
}
return msn.feed
}
// MockStateNotifier mocks the state notifier.
type MockStateNotifier struct {
feed *event.Feed
@@ -96,12 +121,14 @@ func (ms *ChainService) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.S
// ReceiveBlockNoPubsubForkchoice mocks ReceiveBlockNoPubsubForkchoice method in chain service.
func (ms *ChainService) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.SignedBeaconBlock) error {
if ms.State == nil {
ms.State = &pb.BeaconState{}
ms.State = &stateTrie.BeaconState{}
}
if !bytes.Equal(ms.Root, block.Block.ParentRoot) {
return errors.Errorf("wanted %#x but got %#x", ms.Root, block.Block.ParentRoot)
}
ms.State.Slot = block.Block.Slot
if err := ms.State.SetSlot(block.Block.Slot); err != nil {
return err
}
ms.BlocksReceived = append(ms.BlocksReceived, block)
signingRoot, err := ssz.HashTreeRoot(block.Block)
if err != nil {
@@ -123,23 +150,22 @@ func (ms *ChainService) HeadSlot() uint64 {
if ms.State == nil {
return 0
}
return ms.State.Slot
return ms.State.Slot()
}
// HeadRoot mocks HeadRoot method in chain service.
func (ms *ChainService) HeadRoot() []byte {
return ms.Root
func (ms *ChainService) HeadRoot(ctx context.Context) ([]byte, error) {
return ms.Root, nil
}
// HeadBlock mocks HeadBlock method in chain service.
func (ms *ChainService) HeadBlock() *ethpb.SignedBeaconBlock {
return ms.Block
func (ms *ChainService) HeadBlock(context.Context) (*ethpb.SignedBeaconBlock, error) {
return ms.Block, nil
}
// HeadState mocks HeadState method in chain service.
func (ms *ChainService) HeadState(context.Context) (*pb.BeaconState, error) {
func (ms *ChainService) HeadState(context.Context) (*stateTrie.BeaconState, error) {
return ms.State, nil
}
@@ -191,7 +217,25 @@ func (ms *ChainService) GenesisTime() time.Time {
return ms.Genesis
}
// CurrentSlot mocks the same method in the chain service.
func (ms *ChainService) CurrentSlot() uint64 {
return uint64(time.Now().Unix()-ms.Genesis.Unix()) / params.BeaconConfig().SecondsPerSlot
}
// Participation mocks the same method in the chain service.
func (ms *ChainService) Participation(epoch uint64) *precompute.Balance {
return ms.Balance
}
// IsValidAttestation always returns true.
func (ms *ChainService) IsValidAttestation(ctx context.Context, att *ethpb.Attestation) bool {
return ms.ValidAttestation
}
// ClearCachedStates does nothing.
func (ms *ChainService) ClearCachedStates() {}
// HasInitSyncBlock mocks the same method in the chain service.
func (ms *ChainService) HasInitSyncBlock(root [32]byte) bool {
return false
}


@@ -6,22 +6,31 @@ go_library(
"attestation_data.go",
"checkpoint_state.go",
"committee.go",
"committee_ids.go",
"common.go",
"eth1_data.go",
"hot_state_cache.go",
"skip_slot_cache.go",
"state_summary.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/cache",
visibility = ["//beacon-chain:__subpackages__"],
visibility = [
"//beacon-chain:__subpackages__",
"//tools:__subpackages__",
],
deps = [
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"//shared/sliceutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@io_k8s_client_go//tools/cache:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
@@ -31,19 +40,23 @@ go_test(
srcs = [
"attestation_data_test.go",
"checkpoint_state_test.go",
"committee_fuzz_test.go",
"committee_test.go",
"eth1_data_test.go",
"feature_flag_test.go",
"hot_state_cache_test.go",
"skip_slot_cache_test.go",
],
embed = [":go_default_library"],
race = "on",
deps = [
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
],
)


@@ -11,7 +11,6 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"k8s.io/client-go/tools/cache"
)
@@ -59,12 +58,6 @@ func NewAttestationCache() *AttestationCache {
// Get waits for any in progress calculation to complete before returning a
// cached response, if any.
func (c *AttestationCache) Get(ctx context.Context, req *ethpb.AttestationDataRequest) (*ethpb.AttestationData, error) {
if !featureconfig.Get().EnableAttestationCache {
// Return a miss result if cache is not enabled.
attestationCacheMiss.Inc()
return nil, nil
}
if req == nil {
return nil, errors.New("nil attestation data request")
}
@@ -113,10 +106,6 @@ func (c *AttestationCache) Get(ctx context.Context, req *ethpb.AttestationDataRe
// MarkInProgress marks a request so that any other similar requests will block on
// Get until MarkNotInProgress is called.
func (c *AttestationCache) MarkInProgress(req *ethpb.AttestationDataRequest) error {
if !featureconfig.Get().EnableAttestationCache {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
s, e := reqToKey(req)
@@ -126,19 +115,13 @@ func (c *AttestationCache) MarkInProgress(req *ethpb.AttestationDataRequest) err
if c.inProgress[s] {
return ErrAlreadyInProgress
}
if featureconfig.Get().EnableAttestationCache {
c.inProgress[s] = true
}
c.inProgress[s] = true
return nil
}
// MarkNotInProgress will release the lock on a given request. This should be
// called after put.
func (c *AttestationCache) MarkNotInProgress(req *ethpb.AttestationDataRequest) error {
if !featureconfig.Get().EnableAttestationCache {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
s, e := reqToKey(req)
@@ -151,10 +134,6 @@ func (c *AttestationCache) MarkNotInProgress(req *ethpb.AttestationDataRequest)
// Put the response in the cache.
func (c *AttestationCache) Put(ctx context.Context, req *ethpb.AttestationDataRequest, res *ethpb.AttestationData) error {
if !featureconfig.Get().EnableAttestationCache {
return nil
}
data := &attestationReqResWrapper{
req,
res,


@@ -4,11 +4,10 @@ import (
"errors"
"sync"
"github.com/gogo/protobuf/proto"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"k8s.io/client-go/tools/cache"
)
@@ -19,7 +18,9 @@ var (
ErrNotCheckpointState = errors.New("object is not a state by check point struct")
// maxCheckpointStateSize defines the max number of entries check point to state cache can contain.
maxCheckpointStateSize = 4
// Choosing 10 to account for multiple forks: this allows 5 forks per epoch boundary with the
// 2-epoch window for accepting attestations in the latest spec.
maxCheckpointStateSize = 10
// Metrics.
checkpointStateMiss = promauto.NewCounter(prometheus.CounterOpts{
@@ -35,7 +36,7 @@ var (
// CheckpointState defines the active validator indices per epoch.
type CheckpointState struct {
Checkpoint *ethpb.Checkpoint
State *pb.BeaconState
State *stateTrie.BeaconState
}
// CheckpointStateCache is a struct with 1 queue for looking up state by checkpoint.
@@ -67,7 +68,7 @@ func NewCheckpointStateCache() *CheckpointStateCache {
// StateByCheckpoint fetches state by checkpoint. Returns true with a
// reference to the CheckpointState info, if exists. Otherwise returns false, nil.
func (c *CheckpointStateCache) StateByCheckpoint(cp *ethpb.Checkpoint) (*pb.BeaconState, error) {
func (c *CheckpointStateCache) StateByCheckpoint(cp *ethpb.Checkpoint) (*stateTrie.BeaconState, error) {
c.lock.RLock()
defer c.lock.RUnlock()
h, err := hashutil.HashProto(cp)
@@ -92,7 +93,7 @@ func (c *CheckpointStateCache) StateByCheckpoint(cp *ethpb.Checkpoint) (*pb.Beac
return nil, ErrNotCheckpointState
}
return proto.Clone(info.State).(*pb.BeaconState), nil
return info.State.Copy(), nil
}
// AddCheckpointState adds CheckpointState object to the cache. This method also trims the least
@@ -100,7 +101,10 @@ func (c *CheckpointStateCache) StateByCheckpoint(cp *ethpb.Checkpoint) (*pb.Beac
func (c *CheckpointStateCache) AddCheckpointState(cp *CheckpointState) error {
c.lock.Lock()
defer c.lock.Unlock()
if err := c.cache.AddIfNotPresent(cp); err != nil {
if err := c.cache.AddIfNotPresent(&CheckpointState{
Checkpoint: stateTrie.CopyCheckpoint(cp.Checkpoint),
State: cp.State.Copy(),
}); err != nil {
return err
}
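
AddCheckpointState now stores copies of the checkpoint and state rather than the caller's pointers, so later mutations by the caller cannot corrupt the cached entry. A small self-contained sketch of that defensive-copy-on-insert pattern (the entry type is hypothetical, not the Prysm CheckpointState):

package main

import "fmt"

// entry is a stand-in for a cached value that holds mutable memory.
type entry struct {
    root []byte
}

// clone returns a deep copy so the cache never aliases caller-owned slices.
func (e *entry) clone() *entry {
    c := &entry{root: make([]byte, len(e.root))}
    copy(c.root, e.root)
    return c
}

type cache map[string]*entry

// add stores a defensive copy, mirroring AddCheckpointState copying the
// checkpoint and the state before handing them to the underlying cache.
func (c cache) add(key string, e *entry) {
    c[key] = e.clone()
}

func main() {
    c := cache{}
    e := &entry{root: []byte{1, 2, 3}}
    c.add("cp", e)

    e.root[0] = 9                // the caller mutates its own copy...
    fmt.Println(c["cp"].root[0]) // ...but the cached entry still prints 1
}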


@@ -4,16 +4,24 @@ import (
"reflect"
"testing"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/hashutil"
)
func TestCheckpointStateCacheKeyFn_OK(t *testing.T) {
cp := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'A'}}
cp := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'A'}, 32)}
st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 64,
})
if err != nil {
t.Fatal(err)
}
info := &CheckpointState{
Checkpoint: cp,
State: &pb.BeaconState{Slot: 64},
State: st,
}
key, err := checkpointState(info)
if err != nil {
@@ -38,10 +46,16 @@ func TestCheckpointStateCacheKeyFn_InvalidObj(t *testing.T) {
func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
cache := NewCheckpointStateCache()
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'A'}}
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'A'}, 32)}
st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 64,
})
if err != nil {
t.Fatal(err)
}
info1 := &CheckpointState{
Checkpoint: cp1,
State: &pb.BeaconState{Slot: 64},
State: st,
}
state, err := cache.StateByCheckpoint(cp1)
if err != nil {
@@ -58,14 +72,20 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(state, info1.State) {
if !reflect.DeepEqual(state.InnerStateUnsafe(), info1.State.InnerStateUnsafe()) {
t.Error("incorrectly cached state")
}
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: []byte{'B'}}
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'B'}, 32)}
st2, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 128,
})
if err != nil {
t.Fatal(err)
}
info2 := &CheckpointState{
Checkpoint: cp2,
State: &pb.BeaconState{Slot: 128},
State: st2,
}
if err := cache.AddCheckpointState(info2); err != nil {
t.Fatal(err)
@@ -74,7 +94,7 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(state, info2.State) {
if !reflect.DeepEqual(state.CloneInnerState(), info2.State.CloneInnerState()) {
t.Error("incorrectly cached state")
}
@@ -82,18 +102,26 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(state, info1.State) {
if !reflect.DeepEqual(state.CloneInnerState(), info1.State.CloneInnerState()) {
t.Error("incorrectly cached state")
}
}
func TestCheckpointStateCache__MaxSize(t *testing.T) {
func TestCheckpointStateCache_MaxSize(t *testing.T) {
c := NewCheckpointStateCache()
st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 0,
})
if err != nil {
t.Fatal(err)
}
for i := 0; i < maxCheckpointStateSize+100; i++ {
if err := st.SetSlot(uint64(i)); err != nil {
t.Fatal(err)
}
info := &CheckpointState{
Checkpoint: &ethpb.Checkpoint{Epoch: uint64(i)},
State: &pb.BeaconState{Slot: uint64(i)},
State: st,
}
if err := c.AddCheckpointState(info); err != nil {
t.Fatal(err)


@@ -6,7 +6,6 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/sliceutil"
"k8s.io/client-go/tools/cache"
@@ -40,6 +39,7 @@ type Committees struct {
Seed [32]byte
ShuffledIndices []uint64
SortedIndices []uint64
ProposerIndices []uint64
}
// CommitteeCache is a struct with 1 queue for looking up shuffled indices list by seed.
@@ -68,9 +68,6 @@ func NewCommitteesCache() *CommitteeCache {
// Committee fetches the shuffled indices by slot and committee index. Every list of indices
// represents one committee. Returns true if the list exists for the slot and committee index. Otherwise returns false, nil.
func (c *CommitteeCache) Committee(slot uint64, seed [32]byte, index uint64) ([]uint64, error) {
if !featureconfig.Get().EnableShuffledIndexCache && !featureconfig.Get().EnableNewCache {
return nil, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
@@ -99,21 +96,50 @@ func (c *CommitteeCache) Committee(slot uint64, seed [32]byte, index uint64) ([]
indexOffSet := index + (slot%params.BeaconConfig().SlotsPerEpoch)*committeeCountPerSlot
start, end := startEndIndices(item, indexOffSet)
if int(end) > len(item.ShuffledIndices) || end < start {
return nil, errors.New("requested index out of bound")
}
return item.ShuffledIndices[start:end], nil
}
// AddCommitteeShuffledList adds a Committees shuffled list object to the cache. This
// method also trims the least recently used list if the cache size has reached the max cache size limit.
func (c *CommitteeCache) AddCommitteeShuffledList(committees *Committees) error {
if !featureconfig.Get().EnableShuffledIndexCache && !featureconfig.Get().EnableNewCache {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
if err := c.CommitteeCache.AddIfNotPresent(committees); err != nil {
return err
}
trim(c.CommitteeCache, maxCommitteesCacheSize)
return nil
}
// AddProposerIndicesList updates the committee shuffled list with proposer indices.
func (c *CommitteeCache) AddProposerIndicesList(seed [32]byte, indices []uint64) error {
c.lock.Lock()
defer c.lock.Unlock()
obj, exists, err := c.CommitteeCache.GetByKey(key(seed))
if err != nil {
return err
}
if !exists {
committees := &Committees{ProposerIndices: indices}
if err := c.CommitteeCache.Add(committees); err != nil {
return err
}
} else {
committees, ok := obj.(*Committees)
if !ok {
return ErrNotCommittee
}
committees.ProposerIndices = indices
if err := c.CommitteeCache.Add(committees); err != nil {
return err
}
}
trim(c.CommitteeCache, maxCommitteesCacheSize)
return nil
@@ -121,10 +147,6 @@ func (c *CommitteeCache) AddCommitteeShuffledList(committees *Committees) error
// ActiveIndices returns the active indices of a given seed stored in cache.
func (c *CommitteeCache) ActiveIndices(seed [32]byte) ([]uint64, error) {
if !featureconfig.Get().EnableShuffledIndexCache && !featureconfig.Get().EnableNewCache {
return nil, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.CommitteeCache.GetByKey(key(seed))
@@ -147,6 +169,30 @@ func (c *CommitteeCache) ActiveIndices(seed [32]byte) ([]uint64, error) {
return item.SortedIndices, nil
}
// ProposerIndices returns the proposer indices of a given seed.
func (c *CommitteeCache) ProposerIndices(seed [32]byte) ([]uint64, error) {
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.CommitteeCache.GetByKey(key(seed))
if err != nil {
return nil, err
}
if exists {
CommitteeCacheHit.Inc()
} else {
CommitteeCacheMiss.Inc()
return nil, nil
}
item, ok := obj.(*Committees)
if !ok {
return nil, ErrNotCommittee
}
return item.ProposerIndices, nil
}
func startEndIndices(c *Committees, index uint64) (uint64, uint64) {
validatorCount := uint64(len(c.ShuffledIndices))
start := sliceutil.SplitOffset(validatorCount, c.CommitteeCount, index)
@@ -157,7 +203,7 @@ func startEndIndices(c *Committees, index uint64) (uint64, uint64) {
// Using seed as source for key to handle reorgs in the same epoch.
// The seed is derived from state's array of randao mixes and epoch value
// hashed together. This avoids collisions between different validator sets. Spec definition:
// https://github.com/ethereum/eth2.0-specs/blob/v0.9.2/specs/core/0_beacon-chain.md#get_seed
// https://github.com/ethereum/eth2.0-specs/blob/v0.9.3/specs/core/0_beacon-chain.md#get_seed
func key(seed [32]byte) string {
return string(seed[:])
}
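For context on how the two methods added above fit together, here is a minimal usage sketch; the import path is taken from the diff, while the seed and index values are illustrative only.

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
)

func main() {
	c := cache.NewCommitteesCache()
	seed := [32]byte{'A'}

	// Cache proposer indices under the seed; an entry is created if none exists yet.
	if err := c.AddProposerIndicesList(seed, []uint64{1, 2, 3}); err != nil {
		panic(err)
	}

	// A later lookup by the same seed returns the cached indices (nil on a miss).
	indices, err := c.ProposerIndices(seed)
	if err != nil {
		panic(err)
	}
	fmt.Println(indices) // [1 2 3]
}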

View File

@@ -0,0 +1,68 @@
package cache
import (
"reflect"
"testing"
fuzz "github.com/google/gofuzz"
)
func TestCommitteeKeyFuzz_OK(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
c := &Committees{}
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
k, err := committeeKeyFn(c)
if err != nil {
t.Fatal(err)
}
if k != key(c.Seed) {
t.Errorf("Incorrect hash k: %s, expected %s", k, key(c.Seed))
}
}
}
func TestCommitteeCache_FuzzCommitteesByEpoch(t *testing.T) {
cache := NewCommitteesCache()
fuzzer := fuzz.NewWithSeed(0)
c := &Committees{}
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
if err := cache.AddCommitteeShuffledList(c); err != nil {
t.Fatal(err)
}
if _, err := cache.Committee(0, c.Seed, 0); err != nil {
t.Fatal(err)
}
}
if len(cache.CommitteeCache.ListKeys()) != maxCommitteesCacheSize {
t.Error("Incorrect key size")
}
}
func TestCommitteeCache_FuzzActiveIndices(t *testing.T) {
cache := NewCommitteesCache()
fuzzer := fuzz.NewWithSeed(0)
c := &Committees{}
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
if err := cache.AddCommitteeShuffledList(c); err != nil {
t.Fatal(err)
}
indices, err := cache.ActiveIndices(c.Seed)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(indices, c.SortedIndices) {
t.Error("Saved indices not the same")
}
}
if len(cache.CommitteeCache.ListKeys()) != maxCommitteesCacheSize {
t.Error("Incorrect key size")
}
}

44
beacon-chain/cache/committee_ids.go vendored Normal file
View File

@@ -0,0 +1,44 @@
package cache
import (
"sync"
lru "github.com/hashicorp/golang-lru"
"github.com/prysmaticlabs/prysm/shared/sliceutil"
)
type committeeIDs struct {
cache *lru.Cache
lock sync.RWMutex
}
// CommitteeIDs for attestations.
var CommitteeIDs = newCommitteeIDs()
func newCommitteeIDs() *committeeIDs {
cache, err := lru.New(8)
if err != nil {
panic(err)
}
return &committeeIDs{cache: cache}
}
// AddIDs to the cache for attestation committees by epoch.
func (t *committeeIDs) AddIDs(indices []uint64, epoch uint64) {
t.lock.Lock()
defer t.lock.Unlock()
val, exists := t.cache.Get(epoch)
if exists {
indices = sliceutil.UnionUint64(append(indices, val.([]uint64)...))
}
t.cache.Add(epoch, indices)
}
// GetIDs from the cache for attestation committees by epoch.
func (t *committeeIDs) GetIDs(epoch uint64) []uint64 {
val, exists := t.cache.Get(epoch)
if !exists {
return []uint64{}
}
return val.([]uint64)
}
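A short sketch of the exported CommitteeIDs cache above, showing that AddIDs unions new IDs with anything already stored for an epoch; the epoch and ID values are illustrative.

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
)

func main() {
	epoch := uint64(5)

	// Repeated AddIDs calls for the same epoch are merged into one deduplicated set.
	cache.CommitteeIDs.AddIDs([]uint64{1, 2}, epoch)
	cache.CommitteeIDs.AddIDs([]uint64{2, 3}, epoch)

	// GetIDs returns the stored set, or an empty slice when the epoch is unknown.
	fmt.Println(cache.CommitteeIDs.GetIDs(epoch)) // e.g. [1 2 3]; ordering is not guaranteed
}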

View File

@@ -1,6 +1,7 @@
package cache
import (
"math"
"reflect"
"sort"
"strconv"
@@ -96,6 +97,53 @@ func TestCommitteeCache_ActiveIndices(t *testing.T) {
}
}
func TestCommitteeCache_AddProposerIndicesList(t *testing.T) {
cache := NewCommitteesCache()
seed := [32]byte{'A'}
indices := []uint64{1, 2, 3, 4, 5}
indices, err := cache.ProposerIndices(seed)
if err != nil {
t.Fatal(err)
}
if indices != nil {
t.Error("Expected committee count not to exist in empty cache")
}
if err := cache.AddProposerIndicesList(seed, indices); err != nil {
t.Fatal(err)
}
received, err := cache.ProposerIndices(seed)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(indices, received) {
t.Error("Did not receive correct proposer indices from cache")
}
item := &Committees{Seed: [32]byte{'B'}, SortedIndices: []uint64{1, 2, 3, 4, 5, 6}}
if err := cache.AddCommitteeShuffledList(item); err != nil {
t.Fatal(err)
}
indices, err = cache.ProposerIndices(item.Seed)
if err != nil {
t.Fatal(err)
}
if indices != nil {
t.Error("Expected committee count not to exist in empty cache")
}
if err := cache.AddProposerIndicesList(item.Seed, indices); err != nil {
t.Fatal(err)
}
received, err = cache.ProposerIndices(item.Seed)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(indices, received) {
t.Error("Did not receive correct proposer indices from cache")
}
}
func TestCommitteeCache_CanRotate(t *testing.T) {
cache := NewCommitteesCache()
@@ -125,3 +173,19 @@ func TestCommitteeCache_CanRotate(t *testing.T) {
t.Error("incorrect key received for slot 199")
}
}
func TestCommitteeCacheOutOfRange(t *testing.T) {
cache := NewCommitteesCache()
seed := bytesutil.ToBytes32([]byte("foo"))
cache.CommitteeCache.Add(&Committees{
CommitteeCount: 1,
Seed: seed,
ShuffledIndices: []uint64{0},
SortedIndices: []uint64{},
ProposerIndices: []uint64{},
})
_, err := cache.Committee(0, seed, math.MaxUint64) // Overflow!
if err == nil {
t.Fatal("Did not fail as expected")
}
}

View File

@@ -9,6 +9,7 @@ import (
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
dbpb "github.com/prysmaticlabs/prysm/proto/beacon/db"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
var _ = PendingDepositsFetcher(&DepositCache{})
@@ -33,8 +34,12 @@ func TestInsertPendingDeposit_ignoresNilDeposit(t *testing.T) {
func TestRemovePendingDeposit_OK(t *testing.T) {
db := DepositCache{}
depToRemove := &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}
otherDep := &ethpb.Deposit{Proof: [][]byte{[]byte("B")}}
proof1 := make([][]byte, 33)
proof1[0] = bytesutil.PadTo([]byte{'A'}, 32)
proof2 := make([][]byte, 33)
proof2[0] = bytesutil.PadTo([]byte{'A'}, 32)
depToRemove := &ethpb.Deposit{Proof: proof1}
otherDep := &ethpb.Deposit{Proof: proof2}
db.pendingDeposits = []*dbpb.DepositContainer{
{Deposit: depToRemove, Index: 1},
{Deposit: otherDep, Index: 5},
@@ -57,7 +62,9 @@ func TestRemovePendingDeposit_IgnoresNilDeposit(t *testing.T) {
func TestPendingDeposit_RoundTrip(t *testing.T) {
dc := DepositCache{}
dep := &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}
proof := make([][]byte, 33)
proof[0] = bytesutil.PadTo([]byte{'A'}, 32)
dep := &ethpb.Deposit{Proof: proof}
dc.InsertPendingDeposit(context.Background(), dep, 111, 100, [32]byte{})
dc.RemovePendingDeposit(context.Background(), dep)
if len(dc.pendingDeposits) != 0 {

View File

@@ -91,7 +91,7 @@ func TestEth1Data_MaxSize(t *testing.T) {
for i := 0; i < maxEth1DataVoteSize+1; i++ {
var hash [32]byte
copy(hash[:], []byte(strconv.Itoa(i)))
copy(hash[:], strconv.Itoa(i))
eInfo := &Eth1DataVote{
Eth1DataHash: hash,
}

View File

@@ -4,8 +4,6 @@ import "github.com/prysmaticlabs/prysm/shared/featureconfig"
func init() {
featureconfig.Init(&featureconfig.Flags{
EnableAttestationCache: true,
EnableEth1DataVoteCache: true,
EnableShuffledIndexCache: true,
EnableEth1DataVoteCache: true,
})
}

61
beacon-chain/cache/hot_state_cache.go vendored Normal file
View File

@@ -0,0 +1,61 @@
package cache
import (
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
)
var (
// hotStateCacheSize defines the max number of hot states this cache can hold.
hotStateCacheSize = 16
// Metrics
hotStateCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "hot_state_cache_hit",
Help: "The total number of cache hits on the hot state cache.",
})
hotStateCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "hot_state_cache_miss",
Help: "The total number of cache misses on the hot state cache.",
})
)
// HotStateCache is used to store the processed beacon state after the finalized checkpoint.
type HotStateCache struct {
cache *lru.Cache
}
// NewHotStateCache initializes the map and underlying cache.
func NewHotStateCache() *HotStateCache {
cache, err := lru.New(hotStateCacheSize)
if err != nil {
panic(err)
}
return &HotStateCache{
cache: cache,
}
}
// Get returns a cached response via input block root, if any.
// The response is copied by default.
func (c *HotStateCache) Get(root [32]byte) *stateTrie.BeaconState {
item, exists := c.cache.Get(root)
if exists && item != nil {
hotStateCacheHit.Inc()
return item.(*stateTrie.BeaconState).Copy()
}
hotStateCacheMiss.Inc()
return nil
}
// Put the response in the cache.
func (c *HotStateCache) Put(root [32]byte, state *stateTrie.BeaconState) {
c.cache.Add(root, state)
}
// Has returns true if the key exists in the cache.
func (c *HotStateCache) Has(root [32]byte) bool {
return c.cache.Contains(root)
}
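A brief round-trip sketch for the hot state cache above, mirroring the test added later in this diff; InitializeFromProto and the slot value come from that test.

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)

func main() {
	c := cache.NewHotStateCache()
	root := [32]byte{'A'}

	st, err := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: 10})
	if err != nil {
		panic(err)
	}

	// Put stores the state by block root; Get hands back a copy so callers
	// cannot mutate the cached value.
	c.Put(root, st)
	if c.Has(root) {
		fmt.Println(c.Get(root).Slot()) // 10
	}
}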

View File

@@ -0,0 +1,41 @@
package cache_test
import (
"reflect"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
func TestHotStateCache_RoundTrip(t *testing.T) {
c := cache.NewHotStateCache()
root := [32]byte{'A'}
state := c.Get(root)
if state != nil {
t.Errorf("Empty cache returned an object: %v", state)
}
if c.Has(root) {
t.Error("Empty cache has an object")
}
state, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 10,
})
if err != nil {
t.Fatal(err)
}
c.Put(root, state)
if !c.Has(root) {
t.Error("Empty cache does not have an object")
}
res := c.Get(root)
if state == nil {
t.Errorf("Empty cache returned an object: %v", state)
}
if !reflect.DeepEqual(state.CloneInnerState(), res.CloneInnerState()) {
t.Error("Expected equal protos to return from cache")
}
}

148
beacon-chain/cache/skip_slot_cache.go vendored Normal file
View File

@@ -0,0 +1,148 @@
package cache
import (
"context"
"math"
"sync"
"time"
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"go.opencensus.io/trace"
)
var (
// Metrics
skipSlotCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "skip_slot_cache_hit",
Help: "The total number of cache hits on the skip slot cache.",
})
skipSlotCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "skip_slot_cache_miss",
Help: "The total number of cache misses on the skip slot cache.",
})
)
// SkipSlotCache is used to store the cached results of processing skip slots in state.ProcessSlots.
type SkipSlotCache struct {
cache *lru.Cache
lock sync.RWMutex
disabled bool // Allow for programmatic toggling of the cache, useful during initial sync.
inProgress map[uint64]bool
}
// NewSkipSlotCache initializes the map and underlying cache.
func NewSkipSlotCache() *SkipSlotCache {
cache, err := lru.New(8)
if err != nil {
panic(err)
}
return &SkipSlotCache{
cache: cache,
inProgress: make(map[uint64]bool),
}
}
// Enable the skip slot cache.
func (c *SkipSlotCache) Enable() {
c.disabled = false
}
// Disable the skip slot cache.
func (c *SkipSlotCache) Disable() {
c.disabled = true
}
// Get waits for any in progress calculation to complete before returning a
// cached response, if any.
func (c *SkipSlotCache) Get(ctx context.Context, slot uint64) (*stateTrie.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "skipSlotCache.Get")
defer span.End()
if c.disabled {
// Return a miss result if cache is not enabled.
skipSlotCacheMiss.Inc()
return nil, nil
}
delay := minDelay
// Another identical request may be in progress already. Let's wait until
// any in progress request resolves or our timeout is exceeded.
inProgress := false
for {
if ctx.Err() != nil {
return nil, ctx.Err()
}
c.lock.RLock()
if !c.inProgress[slot] {
c.lock.RUnlock()
break
}
inProgress = true
c.lock.RUnlock()
// This increasing backoff is to decrease the CPU cycles while waiting
// for the in progress boolean to flip to false.
time.Sleep(time.Duration(delay) * time.Nanosecond)
delay *= delayFactor
delay = math.Min(delay, maxDelay)
}
span.AddAttributes(trace.BoolAttribute("inProgress", inProgress))
item, exists := c.cache.Get(slot)
if exists && item != nil {
skipSlotCacheHit.Inc()
span.AddAttributes(trace.BoolAttribute("hit", true))
return item.(*stateTrie.BeaconState).Copy(), nil
}
skipSlotCacheMiss.Inc()
span.AddAttributes(trace.BoolAttribute("hit", false))
return nil, nil
}
// MarkInProgress marks a request as in progress so that any other similar requests will block on
// Get until MarkNotInProgress is called.
func (c *SkipSlotCache) MarkInProgress(slot uint64) error {
if c.disabled {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
if c.inProgress[slot] {
return ErrAlreadyInProgress
}
c.inProgress[slot] = true
return nil
}
// MarkNotInProgress will release the lock on a given request. This should be
// called after Put.
func (c *SkipSlotCache) MarkNotInProgress(slot uint64) error {
if c.disabled {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
delete(c.inProgress, slot)
return nil
}
// Put the response in the cache.
func (c *SkipSlotCache) Put(ctx context.Context, slot uint64, state *stateTrie.BeaconState) error {
if c.disabled {
return nil
}
// Copy state so cached value is not mutated.
c.cache.Add(slot, state.Copy())
return nil
}
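A condensed sketch of the intended call sequence for the skip slot cache above (Get, MarkInProgress, compute, Put, MarkNotInProgress), assuming a single caller; error handling is shortened to panics for brevity and the slot value is illustrative.

package main

import (
	"context"
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)

func main() {
	ctx := context.Background()
	c := cache.NewSkipSlotCache()
	slot := uint64(5)

	// A miss returns (nil, nil). Mark the slot in progress so that concurrent
	// callers block inside Get until this computation is published.
	if st, err := c.Get(ctx, slot); err == nil && st == nil {
		if err := c.MarkInProgress(slot); err != nil {
			panic(err)
		}
		computed, err := stateTrie.InitializeFromProto(&pb.BeaconState{Slot: slot})
		if err != nil {
			panic(err)
		}
		if err := c.Put(ctx, slot, computed); err != nil {
			panic(err)
		}
		if err := c.MarkNotInProgress(slot); err != nil {
			panic(err)
		}
	}

	st, err := c.Get(ctx, slot)
	if err != nil {
		panic(err)
	}
	fmt.Println(st.Slot()) // 5
}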

View File

@@ -0,0 +1,53 @@
package cache_test
import (
"context"
"reflect"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
func TestSkipSlotCache_RoundTrip(t *testing.T) {
ctx := context.Background()
c := cache.NewSkipSlotCache()
state, err := c.Get(ctx, 5)
if err != nil {
t.Error(err)
}
if state != nil {
t.Errorf("Empty cache returned an object: %v", state)
}
if err := c.MarkInProgress(5); err != nil {
t.Error(err)
}
state, err = stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 10,
})
if err != nil {
t.Fatal(err)
}
if err = c.Put(ctx, 5, state); err != nil {
t.Error(err)
}
if err := c.MarkNotInProgress(5); err != nil {
t.Error(err)
}
res, err := c.Get(ctx, 5)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(state.CloneInnerState(), res.CloneInnerState()) {
t.Error("Expected equal protos to return from cache")
}
}

65
beacon-chain/cache/state_summary.go vendored Normal file
View File

@@ -0,0 +1,65 @@
package cache
import (
"sync"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
// StateSummaryCache caches state summary objects.
type StateSummaryCache struct {
initSyncStateSummaries map[[32]byte]*pb.StateSummary
initSyncStateSummariesLock sync.RWMutex
}
// NewStateSummaryCache creates a new state summary cache.
func NewStateSummaryCache() *StateSummaryCache {
return &StateSummaryCache{
initSyncStateSummaries: make(map[[32]byte]*pb.StateSummary),
}
}
// Put saves a state summary to the initial sync state summaries cache.
func (s *StateSummaryCache) Put(r [32]byte, b *pb.StateSummary) {
s.initSyncStateSummariesLock.Lock()
defer s.initSyncStateSummariesLock.Unlock()
s.initSyncStateSummaries[r] = b
}
// Has checks if a state summary exists in the initial sync state summaries cache using the root
// of the block.
func (s *StateSummaryCache) Has(r [32]byte) bool {
s.initSyncStateSummariesLock.RLock()
defer s.initSyncStateSummariesLock.RUnlock()
_, ok := s.initSyncStateSummaries[r]
return ok
}
// Get retrieves a state summary from the initial sync state summaries cache using the root of
// the block.
func (s *StateSummaryCache) Get(r [32]byte) *pb.StateSummary {
s.initSyncStateSummariesLock.RLock()
defer s.initSyncStateSummariesLock.RUnlock()
b := s.initSyncStateSummaries[r]
return b
}
// GetAll retrieves all the beacon state summaries from the initial sync state summaries cache. The returned
// state summaries are unordered.
func (s *StateSummaryCache) GetAll() []*pb.StateSummary {
s.initSyncStateSummariesLock.RLock()
defer s.initSyncStateSummariesLock.RUnlock()
blks := make([]*pb.StateSummary, 0, len(s.initSyncStateSummaries))
for _, b := range s.initSyncStateSummaries {
blks = append(blks, b)
}
return blks
}
// Clear clears out the initial sync state summaries cache.
func (s *StateSummaryCache) Clear() {
s.initSyncStateSummariesLock.Lock()
defer s.initSyncStateSummariesLock.Unlock()
s.initSyncStateSummaries = make(map[[32]byte]*pb.StateSummary)
}
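A minimal sketch of the state summary cache above, assuming the StateSummary proto carries Slot and Root fields; the root and slot values are illustrative.

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)

func main() {
	c := cache.NewStateSummaryCache()
	root := [32]byte{'A'}

	// Summaries are keyed by block root; Has and Get look them up by the same root.
	c.Put(root, &pb.StateSummary{Slot: 10, Root: root[:]})
	if c.Has(root) {
		fmt.Println(c.Get(root).Slot) // 10
	}

	// GetAll returns every cached summary in arbitrary order; Clear resets the cache.
	fmt.Println(len(c.GetAll())) // 1
	c.Clear()
}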

View File

@@ -14,9 +14,11 @@ go_library(
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state/stateutils:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/bls:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
@@ -46,9 +48,11 @@ go_test(
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state/stateutils:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/bls:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"//shared/trieutil:go_default_library",

View File

@@ -14,9 +14,11 @@ import (
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state/stateutils"
v "github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
@@ -56,6 +58,25 @@ func verifySigningRoot(obj interface{}, pub []byte, signature []byte, domain uin
return nil
}
func verifyBlockRoot(blk *ethpb.BeaconBlock, pub []byte, signature []byte, domain uint64) error {
publicKey, err := bls.PublicKeyFromBytes(pub)
if err != nil {
return errors.Wrap(err, "could not convert bytes to public key")
}
sig, err := bls.SignatureFromBytes(signature)
if err != nil {
return errors.Wrap(err, "could not convert bytes to signature")
}
root, err := stateutil.BlockRoot(blk)
if err != nil {
return errors.Wrap(err, "could not get signing root")
}
if !sig.Verify(root[:], publicKey, domain) {
return ErrSigFailedToVerify
}
return nil
}
// Deprecated: This method uses deprecated ssz.SigningRoot.
func verifyDepositDataSigningRoot(obj *ethpb.Deposit_Data, pub []byte, signature []byte, domain uint64) error {
publicKey, err := bls.PublicKeyFromBytes(pub)
@@ -100,29 +121,46 @@ func verifySignature(signedData []byte, pub []byte, signature []byte, domain uin
// state.eth1_data_votes.append(body.eth1_data)
// if state.eth1_data_votes.count(body.eth1_data) * 2 > SLOTS_PER_ETH1_VOTING_PERIOD:
// state.latest_eth1_data = body.eth1_data
func ProcessEth1DataInBlock(beaconState *pb.BeaconState, block *ethpb.BeaconBlock) (*pb.BeaconState, error) {
beaconState.Eth1DataVotes = append(beaconState.Eth1DataVotes, block.Body.Eth1Data)
func ProcessEth1DataInBlock(beaconState *stateTrie.BeaconState, block *ethpb.BeaconBlock) (*stateTrie.BeaconState, error) {
if beaconState == nil {
return nil, errors.New("nil state")
}
if block == nil || block.Body == nil {
return nil, errors.New("nil block or block withought body")
}
if err := beaconState.AppendEth1DataVotes(block.Body.Eth1Data); err != nil {
return nil, err
}
hasSupport, err := Eth1DataHasEnoughSupport(beaconState, block.Body.Eth1Data)
if err != nil {
return nil, err
}
if hasSupport {
beaconState.Eth1Data = block.Body.Eth1Data
if err := beaconState.SetEth1Data(block.Body.Eth1Data); err != nil {
return nil, err
}
}
return beaconState, nil
}
func areEth1DataEqual(a, b *ethpb.Eth1Data) bool {
if a == nil || b == nil {
return false
}
return a.DepositCount == b.DepositCount &&
bytes.Equal(a.BlockHash, b.BlockHash) &&
bytes.Equal(a.DepositRoot, b.DepositRoot)
}
// Eth1DataHasEnoughSupport returns true when the given eth1data has more than 50% votes in the
// eth1 voting period. A vote is cast by including eth1data in a block and part of state processing
// appends eth1data to the state in the Eth1DataVotes list. Iterating through this list checks the
// votes to see if they match the eth1data.
func Eth1DataHasEnoughSupport(beaconState *pb.BeaconState, data *ethpb.Eth1Data) (bool, error) {
func Eth1DataHasEnoughSupport(beaconState *stateTrie.BeaconState, data *ethpb.Eth1Data) (bool, error) {
voteCount := uint64(0)
var eth1DataHash [32]byte
var err error
data = stateTrie.CopyETH1Data(data)
if featureconfig.Get().EnableEth1DataVoteCache {
eth1DataHash, err = hashutil.HashProto(data)
if err != nil {
@@ -135,8 +173,8 @@ func Eth1DataHasEnoughSupport(beaconState *pb.BeaconState, data *ethpb.Eth1Data)
}
if voteCount == 0 {
for _, vote := range beaconState.Eth1DataVotes {
if proto.Equal(vote, data) {
for _, vote := range beaconState.Eth1DataVotes() {
if areEth1DataEqual(vote, data) {
voteCount++
}
}
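For reference, a tiny self-contained sketch of the majority rule described in the Eth1DataHasEnoughSupport comment and the spec pseudocode quoted above; hasEnoughSupport is a hypothetical helper and the numbers are illustrative.

package main

import "fmt"

// hasEnoughSupport mirrors the spec check: matching votes * 2 must exceed
// SLOTS_PER_ETH1_VOTING_PERIOD, i.e. strictly more than half of the voting period.
func hasEnoughSupport(voteCount, slotsPerEth1VotingPeriod uint64) bool {
	return voteCount*2 > slotsPerEth1VotingPeriod
}

func main() {
	fmt.Println(hasEnoughSupport(512, 1024)) // false: exactly half is not enough
	fmt.Println(hasEnoughSupport(513, 1024)) // true
}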
@@ -181,9 +219,9 @@ func Eth1DataHasEnoughSupport(beaconState *pb.BeaconState, data *ethpb.Eth1Data)
// # Verify proposer signature
// assert bls_verify(proposer.pubkey, signing_root(block), block.signature, get_domain(state, DOMAIN_BEACON_PROPOSER))
func ProcessBlockHeader(
beaconState *pb.BeaconState,
beaconState *stateTrie.BeaconState,
block *ethpb.SignedBeaconBlock,
) (*pb.BeaconState, error) {
) (*stateTrie.BeaconState, error) {
beaconState, err := ProcessBlockHeaderNoVerify(beaconState, block.Block)
if err != nil {
return nil, err
@@ -193,12 +231,18 @@ func ProcessBlockHeader(
if err != nil {
return nil, err
}
proposer := beaconState.Validators[idx]
proposer, err := beaconState.ValidatorAtIndex(idx)
if err != nil {
return nil, err
}
// Verify proposer signature.
currentEpoch := helpers.CurrentEpoch(beaconState)
domain := helpers.Domain(beaconState.Fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer)
if err := verifySigningRoot(block.Block, proposer.PublicKey, block.Signature, domain); err != nil {
currentEpoch := helpers.SlotToEpoch(beaconState.Slot())
domain, err := helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
return nil, err
}
if err := verifyBlockRoot(block.Block, proposer.PublicKey, block.Signature, domain); err != nil {
return nil, ErrSigFailedToVerify
}
@@ -229,20 +273,20 @@ func ProcessBlockHeader(
// proposer = state.validators[get_beacon_proposer_index(state)]
// assert not proposer.slashed
func ProcessBlockHeaderNoVerify(
beaconState *pb.BeaconState,
beaconState *stateTrie.BeaconState,
block *ethpb.BeaconBlock,
) (*pb.BeaconState, error) {
) (*stateTrie.BeaconState, error) {
if block == nil {
return nil, errors.New("nil block")
}
if beaconState.Slot != block.Slot {
return nil, fmt.Errorf("state slot: %d is different then block slot: %d", beaconState.Slot, block.Slot)
if beaconState.Slot() != block.Slot {
return nil, fmt.Errorf("state slot: %d is different then block slot: %d", beaconState.Slot(), block.Slot)
}
parentRoot, err := ssz.HashTreeRoot(beaconState.LatestBlockHeader)
parentRoot, err := stateutil.BlockHeaderRoot(beaconState.LatestBlockHeader())
if err != nil {
return nil, err
}
if !bytes.Equal(block.ParentRoot, parentRoot[:]) {
return nil, fmt.Errorf(
"parent root %#x does not match the latest block header signing root in state %#x",
@@ -253,20 +297,25 @@ func ProcessBlockHeaderNoVerify(
if err != nil {
return nil, err
}
proposer := beaconState.Validators[idx]
proposer, err := beaconState.ValidatorAtIndex(idx)
if err != nil {
return nil, err
}
if proposer.Slashed {
return nil, fmt.Errorf("proposer at index %d was previously slashed", idx)
}
bodyRoot, err := ssz.HashTreeRoot(block.Body)
bodyRoot, err := stateutil.BlockBodyRoot(block.Body)
if err != nil {
return nil, err
}
beaconState.LatestBlockHeader = &ethpb.BeaconBlockHeader{
if err := beaconState.SetLatestBlockHeader(&ethpb.BeaconBlockHeader{
Slot: block.Slot,
ParentRoot: block.ParentRoot,
StateRoot: params.BeaconConfig().ZeroHash[:],
BodyRoot: bodyRoot[:],
}); err != nil {
return nil, err
}
return beaconState, nil
}
@@ -291,21 +340,24 @@ func ProcessBlockHeaderNoVerify(
// hash(body.randao_reveal))
// )
func ProcessRandao(
beaconState *pb.BeaconState,
beaconState *stateTrie.BeaconState,
body *ethpb.BeaconBlockBody,
) (*pb.BeaconState, error) {
) (*stateTrie.BeaconState, error) {
proposerIdx, err := helpers.BeaconProposerIndex(beaconState)
if err != nil {
return nil, errors.Wrap(err, "could not get beacon proposer index")
}
proposerPub := beaconState.Validators[proposerIdx].PublicKey
proposerPub := beaconState.PubkeyAtIndex(proposerIdx)
currentEpoch := helpers.CurrentEpoch(beaconState)
currentEpoch := helpers.SlotToEpoch(beaconState.Slot())
buf := make([]byte, 32)
binary.LittleEndian.PutUint64(buf, currentEpoch)
domain := helpers.Domain(beaconState.Fork, currentEpoch, params.BeaconConfig().DomainRandao)
if err := verifySignature(buf, proposerPub, body.RandaoReveal, domain); err != nil {
domain, err := helpers.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainRandao)
if err != nil {
return nil, err
}
if err := verifySignature(buf, proposerPub[:], body.RandaoReveal, domain); err != nil {
return nil, errors.Wrap(err, "could not verify block randao")
}
@@ -326,19 +378,27 @@ func ProcessRandao(
// hash(body.randao_reveal))
// )
func ProcessRandaoNoVerify(
beaconState *pb.BeaconState,
beaconState *stateTrie.BeaconState,
body *ethpb.BeaconBlockBody,
) (*pb.BeaconState, error) {
currentEpoch := helpers.CurrentEpoch(beaconState)
) (*stateTrie.BeaconState, error) {
currentEpoch := helpers.SlotToEpoch(beaconState.Slot())
// If block randao passed verification, we XOR the state's latest randao mix with the block's
// randao and update the state's corresponding latest randao mix value.
latestMixesLength := params.BeaconConfig().EpochsPerHistoricalVector
latestMixSlice := beaconState.RandaoMixes[currentEpoch%latestMixesLength]
latestMixSlice, err := beaconState.RandaoMixAtIndex(currentEpoch % latestMixesLength)
if err != nil {
return nil, err
}
blockRandaoReveal := hashutil.Hash(body.RandaoReveal)
if len(blockRandaoReveal) != len(latestMixSlice) {
return nil, errors.New("blockRandaoReveal length doesnt match latestMixSlice length")
}
for i, x := range blockRandaoReveal {
latestMixSlice[i] ^= x
}
beaconState.RandaoMixes[currentEpoch%latestMixesLength] = latestMixSlice
if err := beaconState.UpdateRandaoMixesAtIndex(latestMixSlice, currentEpoch%latestMixesLength); err != nil {
return nil, err
}
return beaconState, nil
}
@@ -364,10 +424,17 @@ func ProcessRandaoNoVerify(
// assert bls_verify(proposer.pubkey, signing_root(header), header.signature, domain)
//
// slash_validator(state, proposer_slashing.proposer_index)
func ProcessProposerSlashings(ctx context.Context, beaconState *pb.BeaconState, body *ethpb.BeaconBlockBody) (*pb.BeaconState, error) {
func ProcessProposerSlashings(
ctx context.Context,
beaconState *stateTrie.BeaconState,
body *ethpb.BeaconBlockBody,
) (*stateTrie.BeaconState, error) {
var err error
for idx, slashing := range body.ProposerSlashings {
if int(slashing.ProposerIndex) >= len(beaconState.Validators) {
if slashing == nil {
return nil, errors.New("nil proposer slashings in block body")
}
if int(slashing.ProposerIndex) >= beaconState.NumValidators() {
return nil, fmt.Errorf("invalid proposer index given in slashing %d", slashing.ProposerIndex)
}
if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
@@ -385,22 +452,30 @@ func ProcessProposerSlashings(ctx context.Context, beaconState *pb.BeaconState,
// VerifyProposerSlashing verifies that the data provided for the slashing is valid.
func VerifyProposerSlashing(
beaconState *pb.BeaconState,
beaconState *stateTrie.BeaconState,
slashing *ethpb.ProposerSlashing,
) error {
proposer := beaconState.Validators[slashing.ProposerIndex]
proposer, err := beaconState.ValidatorAtIndex(slashing.ProposerIndex)
if err != nil {
return err
}
if slashing.Header_1 == nil || slashing.Header_1.Header == nil || slashing.Header_2 == nil || slashing.Header_2.Header == nil {
return errors.New("nil header cannot be verified")
}
if slashing.Header_1.Header.Slot != slashing.Header_2.Header.Slot {
return fmt.Errorf("mismatched header slots, received %d == %d", slashing.Header_1.Header.Slot, slashing.Header_2.Header.Slot)
}
if proto.Equal(slashing.Header_1, slashing.Header_2) {
return errors.New("expected slashing headers to differ")
}
if !helpers.IsSlashableValidator(proposer, helpers.CurrentEpoch(beaconState)) {
if !helpers.IsSlashableValidator(proposer, helpers.SlotToEpoch(beaconState.Slot())) {
return fmt.Errorf("validator with key %#x is not slashable", proposer.PublicKey)
}
// Using headerEpoch1 here because both of the headers should have the same epoch.
domain := helpers.Domain(beaconState.Fork, helpers.StartSlot(slashing.Header_1.Header.Slot), params.BeaconConfig().DomainBeaconProposer)
domain, err := helpers.Domain(beaconState.Fork(), helpers.StartSlot(slashing.Header_1.Header.Slot), params.BeaconConfig().DomainBeaconProposer)
if err != nil {
return err
}
headers := []*ethpb.SignedBeaconBlockHeader{slashing.Header_1, slashing.Header_2}
for _, header := range headers {
if err := verifySigningRoot(header.Header, proposer.PublicKey, header.Signature, domain); err != nil {
@@ -429,7 +504,11 @@ func VerifyProposerSlashing(
// slash_validator(state, index)
// slashed_any = True
// assert slashed_any
func ProcessAttesterSlashings(ctx context.Context, beaconState *pb.BeaconState, body *ethpb.BeaconBlockBody) (*pb.BeaconState, error) {
func ProcessAttesterSlashings(
ctx context.Context,
beaconState *stateTrie.BeaconState,
body *ethpb.BeaconBlockBody,
) (*stateTrie.BeaconState, error) {
for idx, slashing := range body.AttesterSlashings {
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
return nil, errors.Wrapf(err, "could not verify attester slashing %d", idx)
@@ -438,11 +517,16 @@ func ProcessAttesterSlashings(ctx context.Context, beaconState *pb.BeaconState,
sort.SliceStable(slashableIndices, func(i, j int) bool {
return slashableIndices[i] < slashableIndices[j]
})
currentEpoch := helpers.CurrentEpoch(beaconState)
currentEpoch := helpers.SlotToEpoch(beaconState.Slot())
var err error
var slashedAny bool
var val *ethpb.Validator
for _, validatorIndex := range slashableIndices {
if helpers.IsSlashableValidator(beaconState.Validators[validatorIndex], currentEpoch) {
val, err = beaconState.ValidatorAtIndex(validatorIndex)
if err != nil {
return nil, err
}
if helpers.IsSlashableValidator(val, currentEpoch) {
beaconState, err = v.SlashValidator(beaconState, validatorIndex, 0)
if err != nil {
return nil, errors.Wrapf(err, "could not slash validator index %d",
@@ -459,7 +543,16 @@ func ProcessAttesterSlashings(ctx context.Context, beaconState *pb.BeaconState,
}
// VerifyAttesterSlashing validates the attestation data in both attestations in the slashing object.
func VerifyAttesterSlashing(ctx context.Context, beaconState *pb.BeaconState, slashing *ethpb.AttesterSlashing) error {
func VerifyAttesterSlashing(ctx context.Context, beaconState *stateTrie.BeaconState, slashing *ethpb.AttesterSlashing) error {
if slashing == nil {
return errors.New("nil slashing")
}
if slashing.Attestation_1 == nil || slashing.Attestation_2 == nil {
return errors.New("nil attestation")
}
if slashing.Attestation_1.Data == nil || slashing.Attestation_2.Data == nil {
return errors.New("nil attestation data")
}
att1 := slashing.Attestation_1
att2 := slashing.Attestation_2
data1 := att1.Data
@@ -490,12 +583,18 @@ func VerifyAttesterSlashing(ctx context.Context, beaconState *pb.BeaconState, sl
// (data_1.source.epoch < data_2.source.epoch and data_2.target.epoch < data_1.target.epoch)
// )
func IsSlashableAttestationData(data1 *ethpb.AttestationData, data2 *ethpb.AttestationData) bool {
if data1 == nil || data2 == nil || data1.Target == nil || data2.Target == nil || data1.Source == nil || data2.Source == nil {
return false
}
isDoubleVote := !proto.Equal(data1, data2) && data1.Target.Epoch == data2.Target.Epoch
isSurroundVote := data1.Source.Epoch < data2.Source.Epoch && data2.Target.Epoch < data1.Target.Epoch
return isDoubleVote || isSurroundVote
}
func slashableAttesterIndices(slashing *ethpb.AttesterSlashing) []uint64 {
if slashing == nil || slashing.Attestation_1 == nil || slashing.Attestation_2 == nil {
return nil
}
indices1 := slashing.Attestation_1.AttestingIndices
indices2 := slashing.Attestation_2.AttestingIndices
return sliceutil.IntersectionUint64(indices1, indices2)
@@ -504,7 +603,11 @@ func slashableAttesterIndices(slashing *ethpb.AttesterSlashing) []uint64 {
// ProcessAttestations applies processing operations to a block's inner attestation
// records. This function returns a list of pending attestations which can then be
// appended to the BeaconState's latest attestations.
func ProcessAttestations(ctx context.Context, beaconState *pb.BeaconState, body *ethpb.BeaconBlockBody) (*pb.BeaconState, error) {
func ProcessAttestations(
ctx context.Context,
beaconState *stateTrie.BeaconState,
body *ethpb.BeaconBlockBody,
) (*stateTrie.BeaconState, error) {
var err error
for idx, attestation := range body.Attestations {
beaconState, err = ProcessAttestation(ctx, beaconState, attestation)
@@ -517,7 +620,11 @@ func ProcessAttestations(ctx context.Context, beaconState *pb.BeaconState, body
// ProcessAttestationsNoVerify applies processing operations to a block's inner attestation
// records. The only difference would be that the attestation signature would not be verified.
func ProcessAttestationsNoVerify(ctx context.Context, beaconState *pb.BeaconState, body *ethpb.BeaconBlockBody) (*pb.BeaconState, error) {
func ProcessAttestationsNoVerify(
ctx context.Context,
beaconState *stateTrie.BeaconState,
body *ethpb.BeaconBlockBody,
) (*stateTrie.BeaconState, error) {
var err error
for idx, attestation := range body.Attestations {
beaconState, err = ProcessAttestationNoVerify(ctx, beaconState, attestation)
@@ -557,7 +664,11 @@ func ProcessAttestationsNoVerify(ctx context.Context, beaconState *pb.BeaconStat
//
// # Check signature
// assert is_valid_indexed_attestation(state, get_indexed_attestation(state, attestation))
func ProcessAttestation(ctx context.Context, beaconState *pb.BeaconState, att *ethpb.Attestation) (*pb.BeaconState, error) {
func ProcessAttestation(
ctx context.Context,
beaconState *stateTrie.BeaconState,
att *ethpb.Attestation,
) (*stateTrie.BeaconState, error) {
beaconState, err := ProcessAttestationNoVerify(ctx, beaconState, att)
if err != nil {
return nil, err
@@ -567,7 +678,11 @@ func ProcessAttestation(ctx context.Context, beaconState *pb.BeaconState, att *e
// ProcessAttestationNoVerify processes the attestation without verifying the attestation signature. This
// method is used to validate attestations whose signatures have already been verified.
func ProcessAttestationNoVerify(ctx context.Context, beaconState *pb.BeaconState, att *ethpb.Attestation) (*pb.BeaconState, error) {
func ProcessAttestationNoVerify(
ctx context.Context,
beaconState *stateTrie.BeaconState,
att *ethpb.Attestation,
) (*stateTrie.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "core.ProcessAttestationNoVerify")
defer span.End()
@@ -575,13 +690,20 @@ func ProcessAttestationNoVerify(ctx context.Context, beaconState *pb.BeaconState
return nil, errors.New("nil attestation data target")
}
currEpoch := helpers.SlotToEpoch(beaconState.Slot())
var prevEpoch uint64
if currEpoch == 0 {
prevEpoch = 0
} else {
prevEpoch = currEpoch - 1
}
data := att.Data
if data.Target.Epoch != helpers.PrevEpoch(beaconState) && data.Target.Epoch != helpers.CurrentEpoch(beaconState) {
if data.Target.Epoch != prevEpoch && data.Target.Epoch != currEpoch {
return nil, fmt.Errorf(
"expected target epoch (%d) to be the previous epoch (%d) or the current epoch (%d)",
data.Target.Epoch,
helpers.PrevEpoch(beaconState),
helpers.CurrentEpoch(beaconState),
prevEpoch,
currEpoch,
)
}
if helpers.SlotToEpoch(data.Slot) != data.Target.Epoch {
@@ -589,20 +711,20 @@ func ProcessAttestationNoVerify(ctx context.Context, beaconState *pb.BeaconState
}
s := att.Data.Slot
minInclusionCheck := s+params.BeaconConfig().MinAttestationInclusionDelay <= beaconState.Slot
epochInclusionCheck := beaconState.Slot <= s+params.BeaconConfig().SlotsPerEpoch
minInclusionCheck := s+params.BeaconConfig().MinAttestationInclusionDelay <= beaconState.Slot()
epochInclusionCheck := beaconState.Slot() <= s+params.BeaconConfig().SlotsPerEpoch
if !minInclusionCheck {
return nil, fmt.Errorf(
"attestation slot %d + inclusion delay %d > state slot %d",
s,
params.BeaconConfig().MinAttestationInclusionDelay,
beaconState.Slot,
beaconState.Slot(),
)
}
if !epochInclusionCheck {
return nil, fmt.Errorf(
"state slot %d > attestation slot %d + SLOTS_PER_EPOCH %d",
beaconState.Slot,
beaconState.Slot(),
s,
params.BeaconConfig().SlotsPerEpoch,
)
@@ -619,23 +741,27 @@ func ProcessAttestationNoVerify(ctx context.Context, beaconState *pb.BeaconState
pendingAtt := &pb.PendingAttestation{
Data: data,
AggregationBits: att.AggregationBits,
InclusionDelay: beaconState.Slot - s,
InclusionDelay: beaconState.Slot() - s,
ProposerIndex: proposerIndex,
}
var ffgSourceEpoch uint64
var ffgSourceRoot []byte
var ffgTargetEpoch uint64
if data.Target.Epoch == helpers.CurrentEpoch(beaconState) {
ffgSourceEpoch = beaconState.CurrentJustifiedCheckpoint.Epoch
ffgSourceRoot = beaconState.CurrentJustifiedCheckpoint.Root
ffgTargetEpoch = helpers.CurrentEpoch(beaconState)
beaconState.CurrentEpochAttestations = append(beaconState.CurrentEpochAttestations, pendingAtt)
if data.Target.Epoch == currEpoch {
ffgSourceEpoch = beaconState.CurrentJustifiedCheckpoint().Epoch
ffgSourceRoot = beaconState.CurrentJustifiedCheckpoint().Root
ffgTargetEpoch = currEpoch
if err := beaconState.AppendCurrentEpochAttestations(pendingAtt); err != nil {
return nil, err
}
} else {
ffgSourceEpoch = beaconState.PreviousJustifiedCheckpoint.Epoch
ffgSourceRoot = beaconState.PreviousJustifiedCheckpoint.Root
ffgTargetEpoch = helpers.PrevEpoch(beaconState)
beaconState.PreviousEpochAttestations = append(beaconState.PreviousEpochAttestations, pendingAtt)
ffgSourceEpoch = beaconState.PreviousJustifiedCheckpoint().Epoch
ffgSourceRoot = beaconState.PreviousJustifiedCheckpoint().Root
ffgTargetEpoch = prevEpoch
if err := beaconState.AppendPreviousEpochAttestations(pendingAtt); err != nil {
return nil, err
}
}
if data.Source.Epoch != ffgSourceEpoch {
return nil, fmt.Errorf("expected source epoch %d, received %d", ffgSourceEpoch, data.Source.Epoch)
@@ -650,44 +776,6 @@ func ProcessAttestationNoVerify(ctx context.Context, beaconState *pb.BeaconState
return beaconState, nil
}
// ConvertToIndexed converts attestation to (almost) indexed-verifiable form.
//
// Note about spec pseudocode definition. The state was used by get_attesting_indices to determine
// the attestation committee. Now that we provide this as an argument, we no longer need to provide
// a state.
//
// Spec pseudocode definition:
// def get_indexed_attestation(state: BeaconState, attestation: Attestation) -> IndexedAttestation:
// """
// Return the indexed attestation corresponding to ``attestation``.
// """
// attesting_indices = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)
//
// return IndexedAttestation(
// attesting_indices=sorted(attesting_indices),
// data=attestation.data,
// signature=attestation.signature,
// )
func ConvertToIndexed(ctx context.Context, attestation *ethpb.Attestation, committee []uint64) (*ethpb.IndexedAttestation, error) {
ctx, span := trace.StartSpan(ctx, "core.ConvertToIndexed")
defer span.End()
attIndices, err := helpers.AttestingIndices(attestation.AggregationBits, committee)
if err != nil {
return nil, errors.Wrap(err, "could not get attesting indices")
}
sort.Slice(attIndices, func(i, j int) bool {
return attIndices[i] < attIndices[j]
})
inAtt := &ethpb.IndexedAttestation{
Data: attestation.Data,
Signature: attestation.Signature,
AttestingIndices: attIndices,
}
return inAtt, nil
}
// VerifyIndexedAttestation determines the validity of an indexed attestation.
//
// Spec pseudocode definition:
@@ -711,10 +799,12 @@ func ConvertToIndexed(ctx context.Context, attestation *ethpb.Attestation, commi
// ):
// return False
// return True
func VerifyIndexedAttestation(ctx context.Context, beaconState *pb.BeaconState, indexedAtt *ethpb.IndexedAttestation) error {
func VerifyIndexedAttestation(ctx context.Context, beaconState *stateTrie.BeaconState, indexedAtt *ethpb.IndexedAttestation) error {
ctx, span := trace.StartSpan(ctx, "core.VerifyIndexedAttestation")
defer span.End()
if indexedAtt == nil || indexedAtt.Data == nil || indexedAtt.Data.Target == nil {
return errors.New("nil or missing indexed attestation data")
}
indices := indexedAtt.AttestingIndices
if uint64(len(indices)) > params.BeaconConfig().MaxValidatorsPerCommittee {
@@ -737,16 +827,20 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState *pb.BeaconState,
return errors.New("attesting indices is not uniquely sorted")
}
domain := helpers.Domain(beaconState.Fork, indexedAtt.Data.Target.Epoch, params.BeaconConfig().DomainBeaconAttester)
domain, err := helpers.Domain(beaconState.Fork(), indexedAtt.Data.Target.Epoch, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return err
}
var pubkey *bls.PublicKey
var err error
if len(indices) > 0 {
pubkey, err = bls.PublicKeyFromBytes(beaconState.Validators[indices[0]].PublicKey)
pubkeyAtIdx := beaconState.PubkeyAtIndex(indices[0])
pubkey, err = bls.PublicKeyFromBytes(pubkeyAtIdx[:])
if err != nil {
return errors.Wrap(err, "could not deserialize validator public key")
}
for _, i := range indices[1:] {
pk, err := bls.PublicKeyFromBytes(beaconState.Validators[i].PublicKey)
for i := 1; i < len(indices); i++ {
pubkeyAtIdx = beaconState.PubkeyAtIndex(indices[i])
pk, err := bls.PublicKeyFromBytes(pubkeyAtIdx[:])
if err != nil {
return errors.Wrap(err, "could not deserialize validator public key")
}
@@ -773,15 +867,15 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState *pb.BeaconState,
// VerifyAttestation converts an attestation into an indexed attestation and verifies
// the signature in that attestation.
func VerifyAttestation(ctx context.Context, beaconState *pb.BeaconState, att *ethpb.Attestation) error {
func VerifyAttestation(ctx context.Context, beaconState *stateTrie.BeaconState, att *ethpb.Attestation) error {
if att == nil || att.Data == nil {
return fmt.Errorf("nil or missing attestation data: %v", att)
}
committee, err := helpers.BeaconCommitteeFromState(beaconState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
return err
}
indexedAtt, err := ConvertToIndexed(ctx, att, committee)
if err != nil {
return errors.Wrap(err, "could not convert to indexed attestation")
}
indexedAtt := attestationutil.ConvertToIndexed(ctx, att, committee)
return VerifyIndexedAttestation(ctx, beaconState, indexedAtt)
}
@@ -792,13 +886,18 @@ func VerifyAttestation(ctx context.Context, beaconState *pb.BeaconState, att *et
// Spec pseudocode definition:
// For each deposit in block.body.deposits:
// process_deposit(state, deposit)
func ProcessDeposits(ctx context.Context, beaconState *pb.BeaconState, body *ethpb.BeaconBlockBody) (*pb.BeaconState, error) {
func ProcessDeposits(
ctx context.Context,
beaconState *stateTrie.BeaconState,
body *ethpb.BeaconBlockBody,
) (*stateTrie.BeaconState, error) {
var err error
deposits := body.Deposits
valIndexMap := stateutils.ValidatorIndexMap(beaconState)
for _, deposit := range deposits {
beaconState, err = ProcessDeposit(beaconState, deposit, valIndexMap)
if deposit == nil || deposit.Data == nil {
return nil, errors.New("got a nil deposit in block")
}
beaconState, err = ProcessDeposit(beaconState, deposit)
if err != nil {
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
@@ -807,24 +906,37 @@ func ProcessDeposits(ctx context.Context, beaconState *pb.BeaconState, body *eth
}
// ProcessPreGenesisDeposit processes a deposit for the beacon state before chainstart.
func ProcessPreGenesisDeposit(ctx context.Context, beaconState *pb.BeaconState,
deposit *ethpb.Deposit, validatorIndices map[[48]byte]int) (*pb.BeaconState, error) {
func ProcessPreGenesisDeposit(
ctx context.Context,
beaconState *stateTrie.BeaconState,
deposit *ethpb.Deposit,
) (*stateTrie.BeaconState, error) {
var err error
beaconState, err = ProcessDeposit(beaconState, deposit, validatorIndices)
beaconState, err = ProcessDeposit(beaconState, deposit)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit")
}
pubkey := deposit.Data.PublicKey
index, ok := validatorIndices[bytesutil.ToBytes48(pubkey)]
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubkey))
if !ok {
return beaconState, nil
}
balance := beaconState.Balances[index]
beaconState.Validators[index].EffectiveBalance = mathutil.Min(balance-balance%params.BeaconConfig().EffectiveBalanceIncrement, params.BeaconConfig().MaxEffectiveBalance)
if beaconState.Validators[index].EffectiveBalance ==
balance, err := beaconState.BalanceAtIndex(index)
if err != nil {
return nil, err
}
validator, err := beaconState.ValidatorAtIndex(index)
if err != nil {
return nil, err
}
validator.EffectiveBalance = mathutil.Min(balance-balance%params.BeaconConfig().EffectiveBalanceIncrement, params.BeaconConfig().MaxEffectiveBalance)
if validator.EffectiveBalance ==
params.BeaconConfig().MaxEffectiveBalance {
beaconState.Validators[index].ActivationEligibilityEpoch = 0
beaconState.Validators[index].ActivationEpoch = 0
validator.ActivationEligibilityEpoch = 0
validator.ActivationEpoch = 0
}
if err := beaconState.UpdateValidatorAtIndex(uint64(index), validator); err != nil {
return nil, err
}
return beaconState, nil
}
@@ -872,14 +984,23 @@ func ProcessPreGenesisDeposit(ctx context.Context, beaconState *pb.BeaconState,
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// increase_balance(state, index, amount)
func ProcessDeposit(beaconState *pb.BeaconState, deposit *ethpb.Deposit, valIndexMap map[[48]byte]int) (*pb.BeaconState, error) {
func ProcessDeposit(
beaconState *stateTrie.BeaconState,
deposit *ethpb.Deposit,
) (*stateTrie.BeaconState, error) {
if err := verifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, err
}
return nil, errors.Wrapf(err, "could not verify deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
beaconState.Eth1DepositIndex++
if err := beaconState.SetEth1DepositIndex(beaconState.Eth1DepositIndex() + 1); err != nil {
return nil, err
}
pubKey := deposit.Data.PublicKey
amount := deposit.Data.Amount
index, ok := valIndexMap[bytesutil.ToBytes48(pubKey)]
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
numVals := beaconState.NumValidators()
if !ok {
domain := bls.ComputeDomain(params.BeaconConfig().DomainDeposit)
depositSig := deposit.Data.Signature
@@ -893,7 +1014,7 @@ func ProcessDeposit(beaconState *pb.BeaconState, deposit *ethpb.Deposit, valInde
if params.BeaconConfig().MaxEffectiveBalance < effectiveBalance {
effectiveBalance = params.BeaconConfig().MaxEffectiveBalance
}
beaconState.Validators = append(beaconState.Validators, &ethpb.Validator{
if err := beaconState.AppendValidator(&ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: deposit.Data.WithdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
@@ -901,27 +1022,42 @@ func ProcessDeposit(beaconState *pb.BeaconState, deposit *ethpb.Deposit, valInde
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: effectiveBalance,
})
beaconState.Balances = append(beaconState.Balances, amount)
valIndexMap[bytesutil.ToBytes48(pubKey)] = len(beaconState.Validators) - 1
}); err != nil {
return nil, err
}
if err := beaconState.AppendBalance(amount); err != nil {
return nil, err
}
numVals++
beaconState.SetValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey), uint64(numVals-1))
} else {
beaconState = helpers.IncreaseBalance(beaconState, uint64(index), amount)
if err := helpers.IncreaseBalance(beaconState, uint64(index), amount); err != nil {
return nil, err
}
}
return beaconState, nil
}
func verifyDeposit(beaconState *pb.BeaconState, deposit *ethpb.Deposit) error {
func verifyDeposit(beaconState *stateTrie.BeaconState, deposit *ethpb.Deposit) error {
// Verify Merkle proof of deposit and deposit trie root.
receiptRoot := beaconState.Eth1Data.DepositRoot
if deposit == nil || deposit.Data == nil {
return errors.New("received nil deposit or nil deposit data")
}
eth1Data := beaconState.Eth1Data()
if eth1Data == nil {
return errors.New("received nil eth1data in the beacon state")
}
receiptRoot := eth1Data.DepositRoot
leaf, err := ssz.HashTreeRoot(deposit.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash deposit data")
}
if ok := trieutil.VerifyMerkleProof(
if ok := trieutil.VerifyMerkleBranch(
receiptRoot,
leaf[:],
int(beaconState.Eth1DepositIndex),
int(beaconState.Eth1DepositIndex()),
deposit.Proof,
); !ok {
return fmt.Errorf(
@@ -955,12 +1091,28 @@ func verifyDeposit(beaconState *pb.BeaconState, deposit *ethpb.Deposit) error {
// assert bls_verify(validator.pubkey, signing_root(exit), exit.signature, domain)
// # Initiate exit
// initiate_validator_exit(state, exit.validator_index)
func ProcessVoluntaryExits(ctx context.Context, beaconState *pb.BeaconState, body *ethpb.BeaconBlockBody) (*pb.BeaconState, error) {
var err error
func ProcessVoluntaryExits(
ctx context.Context,
beaconState *stateTrie.BeaconState,
body *ethpb.BeaconBlockBody,
) (*stateTrie.BeaconState, error) {
exits := body.VoluntaryExits
for idx, exit := range exits {
if err := VerifyExit(beaconState, exit); err != nil {
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil voluntary exit in block body")
}
if int(exit.Exit.ValidatorIndex) >= beaconState.NumValidators() {
return nil, fmt.Errorf(
"validator index out of bound %d > %d",
exit.Exit.ValidatorIndex,
beaconState.NumValidators(),
)
}
val, err := beaconState.ValidatorAtIndex(exit.Exit.ValidatorIndex)
if err != nil {
return nil, err
}
if err := VerifyExit(val, beaconState.Slot(), beaconState.Fork(), exit); err != nil {
return nil, errors.Wrapf(err, "could not verify exit %d", idx)
}
beaconState, err = v.InitiateValidatorExit(beaconState, exit.Exit.ValidatorIndex)
@@ -974,13 +1126,16 @@ func ProcessVoluntaryExits(ctx context.Context, beaconState *pb.BeaconState, bod
// ProcessVoluntaryExitsNoVerify processes all the voluntary exits in
// a block body, without verifying their BLS signatures.
func ProcessVoluntaryExitsNoVerify(
beaconState *pb.BeaconState,
beaconState *stateTrie.BeaconState,
body *ethpb.BeaconBlockBody,
) (*pb.BeaconState, error) {
) (*stateTrie.BeaconState, error) {
var err error
exits := body.VoluntaryExits
for idx, exit := range exits {
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil exit")
}
beaconState, err = v.InitiateValidatorExit(beaconState, exit.Exit.ValidatorIndex)
if err != nil {
return nil, errors.Wrapf(err, "failed to process voluntary exit at index %d", idx)
@@ -1008,18 +1163,13 @@ func ProcessVoluntaryExitsNoVerify(
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, exit.epoch)
// assert bls_verify(validator.pubkey, signing_root(exit), exit.signature, domain)
func VerifyExit(beaconState *pb.BeaconState, signed *ethpb.SignedVoluntaryExit) error {
func VerifyExit(validator *ethpb.Validator, currentSlot uint64, fork *pb.Fork, signed *ethpb.SignedVoluntaryExit) error {
if signed == nil || signed.Exit == nil {
return errors.New("nil exit")
}
exit := signed.Exit
if int(exit.ValidatorIndex) >= len(beaconState.Validators) {
return fmt.Errorf("validator index out of bound %d > %d", exit.ValidatorIndex, len(beaconState.Validators))
}
validator := beaconState.Validators[exit.ValidatorIndex]
currentEpoch := helpers.CurrentEpoch(beaconState)
currentEpoch := helpers.SlotToEpoch(currentSlot)
// Verify the validator is active.
if !helpers.IsActiveValidator(validator, currentEpoch) {
return errors.New("non-active validator cannot exit")
@@ -1040,7 +1190,10 @@ func VerifyExit(beaconState *pb.BeaconState, signed *ethpb.SignedVoluntaryExit)
validator.ActivationEpoch+params.BeaconConfig().PersistentCommitteePeriod,
)
}
domain := helpers.Domain(beaconState.Fork, exit.Epoch, params.BeaconConfig().DomainVoluntaryExit)
domain, err := helpers.Domain(fork, exit.Epoch, params.BeaconConfig().DomainVoluntaryExit)
if err != nil {
return err
}
if err := verifySigningRoot(exit, validator.PublicKey, signed.Signature, domain); err != nil {
return ErrSigFailedToVerify
}
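Because VerifyExit now takes the validator, current slot, and fork instead of the whole beacon state, a caller has to look those up first, as the ProcessVoluntaryExits loop above does. A hedged sketch of such a wrapper (verifyExitFromState is a hypothetical helper name):

package exitcheck

import (
	"errors"

	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
	"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
	stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
)

// verifyExitFromState fetches the validator referenced by the exit and forwards the
// state's slot and fork to the refactored blocks.VerifyExit.
func verifyExitFromState(st *stateTrie.BeaconState, signed *ethpb.SignedVoluntaryExit) error {
	if signed == nil || signed.Exit == nil {
		return errors.New("nil exit")
	}
	val, err := st.ValidatorAtIndex(signed.Exit.ValidatorIndex)
	if err != nil {
		return err
	}
	return blocks.VerifyExit(val, st.Slot(), st.Fork(), signed)
}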

View File

@@ -1,16 +1,22 @@
package blocks_test
package blocks
import (
"context"
"testing"
fuzz "github.com/google/gofuzz"
eth "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
fuzz "github.com/google/gofuzz"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
//"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
ethereum_beacon_p2p_v1 "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
func TestFuzzProcessAttestation_10000(t *testing.T) {
func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
ctx := context.Background()
state := &ethereum_beacon_p2p_v1.BeaconState{}
@@ -19,7 +25,8 @@ func TestFuzzProcessAttestation_10000(t *testing.T) {
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(att)
_, _ = blocks.ProcessAttestationNoVerify(ctx, state, att)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
_, _ = ProcessAttestationNoVerify(ctx, s, att)
}
}
@@ -31,6 +38,404 @@ func TestFuzzProcessBlockHeader_10000(t *testing.T) {
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(block)
_, _ = blocks.ProcessBlockHeader(state, block)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
_, _ = ProcessBlockHeader(s, block)
}
}
func TestFuzzverifySigningRoot_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
pubkey := [48]byte{}
sig := [96]byte{}
domain := [4]byte{}
p := []byte{}
s := []byte{}
d := uint64(0)
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(&pubkey)
fuzzer.Fuzz(&sig)
fuzzer.Fuzz(&domain)
fuzzer.Fuzz(state)
fuzzer.Fuzz(&p)
fuzzer.Fuzz(&s)
fuzzer.Fuzz(&d)
domain := bytesutil.FromBytes4(domain[:])
verifySigningRoot(state, pubkey[:], sig[:], domain)
verifySigningRoot(state, p, s, d)
}
}
func TestFuzzverifyDepositDataSigningRoot_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
ba := []byte{}
pubkey := [48]byte{}
sig := [96]byte{}
domain := [4]byte{}
p := []byte{}
s := []byte{}
d := uint64(0)
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(&ba)
fuzzer.Fuzz(&pubkey)
fuzzer.Fuzz(&sig)
fuzzer.Fuzz(&domain)
fuzzer.Fuzz(&p)
fuzzer.Fuzz(&s)
fuzzer.Fuzz(&d)
domain := bytesutil.FromBytes4(domain[:])
verifySignature(ba, pubkey[:], sig[:], domain)
verifySignature(ba, p, s, d)
}
}
func TestFuzzProcessEth1DataInBlock_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
block := &eth.BeaconBlock{}
state := &stateTrie.BeaconState{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(block)
s, err := ProcessEth1DataInBlock(state, block)
if err != nil && s != nil {
t.Fatalf("state should be nil on err. found: %v on error: %v for state: %v and block: %v", s, err, state, block)
}
}
}
func TestFuzzareEth1DataEqual_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
eth1data := &eth.Eth1Data{}
eth1data2 := &eth.Eth1Data{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(eth1data)
fuzzer.Fuzz(eth1data2)
areEth1DataEqual(eth1data, eth1data2)
areEth1DataEqual(eth1data, eth1data)
}
}
func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
eth1data := &eth.Eth1Data{}
stateVotes := []*eth.Eth1Data{}
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(eth1data)
fuzzer.Fuzz(&stateVotes)
s, _ := beaconstate.InitializeFromProto(&ethereum_beacon_p2p_v1.BeaconState{
Eth1DataVotes: stateVotes,
})
Eth1DataHasEnoughSupport(s, eth1data)
}
}
func TestFuzzProcessBlockHeaderNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
block := &eth.BeaconBlock{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(block)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
_, _ = ProcessBlockHeaderNoVerify(s, block)
}
}
func TestFuzzProcessRandao_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
blockBody := &eth.BeaconBlockBody{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessRandao(s, blockBody)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
}
}
func TestFuzzProcessRandaoNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
blockBody := &eth.BeaconBlockBody{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessRandaoNoVerify(s, blockBody)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
}
}
func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
blockBody := &eth.BeaconBlockBody{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessProposerSlashings(ctx, s, blockBody)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
}
}
func TestFuzzVerifyProposerSlashing_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
proposerSlashing := &eth.ProposerSlashing{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(proposerSlashing)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
VerifyProposerSlashing(s, proposerSlashing)
}
}
func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
blockBody := &eth.BeaconBlockBody{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessAttesterSlashings(ctx, s, blockBody)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
}
}
func TestFuzzVerifyAttesterSlashing_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
attesterSlashing := &eth.AttesterSlashing{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(attesterSlashing)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
VerifyAttesterSlashing(ctx, s, attesterSlashing)
}
}
func TestFuzzIsSlashableAttestationData_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
attestationData := &eth.AttestationData{}
attestationData2 := &eth.AttestationData{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(attestationData)
fuzzer.Fuzz(attestationData2)
IsSlashableAttestationData(attestationData, attestationData2)
}
}
func TestFuzzslashableAttesterIndices_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
attesterSlashing := &eth.AttesterSlashing{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(attesterSlashing)
slashableAttesterIndices(attesterSlashing)
}
}
func TestFuzzProcessAttestations_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
blockBody := &eth.BeaconBlockBody{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessAttestations(ctx, s, blockBody)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
}
}
func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
blockBody := &eth.BeaconBlockBody{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessAttestationsNoVerify(ctx, s, blockBody)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
}
}
func TestFuzzProcessAttestation_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
attestation := &eth.Attestation{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(attestation)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessAttestation(ctx, s, attestation)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, attestation)
}
}
}
func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
idxAttestation := &eth.IndexedAttestation{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(idxAttestation)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
VerifyIndexedAttestation(ctx, s, idxAttestation)
}
}
func TestFuzzVerifyAttestation_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
attestation := &eth.Attestation{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(attestation)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
VerifyAttestation(ctx, s, attestation)
}
}
func TestFuzzProcessDeposits_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
blockBody := &eth.BeaconBlockBody{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessDeposits(ctx, s, blockBody)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
}
}
func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
deposit := &eth.Deposit{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessPreGenesisDeposit(ctx, s, deposit)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
deposit := &eth.Deposit{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessDeposit(s, deposit)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzverifyDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
deposit := &eth.Deposit{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
verifyDeposit(s, deposit)
}
}
func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
blockBody := &eth.BeaconBlockBody{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessVoluntaryExits(ctx, s, blockBody)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
}
}
func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
blockBody := &eth.BeaconBlockBody{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, _ := beaconstate.InitializeFromProtoUnsafe(state)
r, err := ProcessVoluntaryExitsNoVerify(s, blockBody)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
}
}
func TestFuzzVerifyExit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
ve := &eth.SignedVoluntaryExit{}
val := &eth.Validator{}
fork := &pb.Fork{}
var slot uint64
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(ve)
fuzzer.Fuzz(val)
fuzzer.Fuzz(fork)
fuzzer.Fuzz(&slot)
VerifyExit(val, slot, fork, ve)
}
}
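All of the fuzz tests above share one harness: seed gofuzz deterministically, fuzz the raw protobuf state (plus the other input), wrap the proto with beaconstate.InitializeFromProtoUnsafe, call the function under test, and assert that a non-nil result is never returned together with an error. A self-contained sketch of that harness, assuming a hypothetical fakeState/processFake pair in place of Prysm's real types and transition functions:

package fuzzsketch

import (
	"errors"
	"testing"

	fuzz "github.com/google/gofuzz"
)

// fakeState stands in for the protobuf BeaconState; gofuzz fills its exported fields.
type fakeState struct {
	Slot uint64
}

// processFake mimics a state-transition helper: on error it must return a nil state.
func processFake(s *fakeState) (*fakeState, error) {
	if s.Slot%7 == 0 {
		return nil, errors.New("synthetic failure")
	}
	return s, nil
}

func TestFuzzProcessFake_10000(t *testing.T) {
	fuzzer := fuzz.NewWithSeed(0) // deterministic inputs, as in the tests above
	st := &fakeState{}
	for i := 0; i < 10000; i++ {
		fuzzer.Fuzz(st)
		r, err := processFake(st)
		// The invariant asserted throughout this file: never a result and an error together.
		if err != nil && r != nil {
			t.Fatalf("return value should be nil on err. found: %v on error: %v", r, err)
		}
	}
}

The errors from InitializeFromProtoUnsafe and the process functions are deliberately ignored in the real tests, presumably because the goal is to surface panics and the nil-on-error invariant rather than to validate the fuzzed inputs.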


@@ -16,8 +16,9 @@ import (
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state/stateutils"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
@@ -32,9 +33,9 @@ func init() {
func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
beaconState.LatestBlockHeader = &ethpb.BeaconBlockHeader{Slot: 9}
beaconState.SetLatestBlockHeader(&ethpb.BeaconBlockHeader{Slot: 9})
lbhsr, err := ssz.HashTreeRoot(beaconState.LatestBlockHeader)
lbhsr, err := ssz.HashTreeRoot(beaconState.LatestBlockHeader())
if err != nil {
t.Error(err)
}
@@ -57,7 +58,10 @@ func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
if err != nil {
t.Fatalf("Failed to get signing root of block: %v", err)
}
dt := helpers.Domain(beaconState.Fork, 0, params.BeaconConfig().DomainBeaconProposer)
dt, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
t.Fatalf("Failed to get domain form state: %v", err)
}
blockSig := privKeys[proposerIdx+1].Sign(signingRoot[:], dt)
block.Signature = blockSig.Marshal()[:]
@@ -78,7 +82,7 @@ func TestProcessBlockHeader_DifferentSlots(t *testing.T) {
}
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: validators,
Slot: 0,
LatestBlockHeader: &ethpb.BeaconBlockHeader{Slot: 9},
@@ -87,14 +91,17 @@ func TestProcessBlockHeader_DifferentSlots(t *testing.T) {
CurrentVersion: []byte{0, 0, 0, 0},
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}
})
lbhsr, err := ssz.HashTreeRoot(state.LatestBlockHeader)
lbhsr, err := ssz.HashTreeRoot(state.LatestBlockHeader())
if err != nil {
t.Error(err)
}
currentEpoch := helpers.CurrentEpoch(state)
dt := helpers.Domain(state.Fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer)
dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
t.Fatalf("Failed to get domain form state: %v", err)
}
priv := bls.RandKey()
blockSig := priv.Sign([]byte("hello"), dt)
validators[5896].PublicKey = priv.PublicKey().Marshal()
@@ -125,7 +132,7 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
}
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: validators,
Slot: 0,
LatestBlockHeader: &ethpb.BeaconBlockHeader{Slot: 9},
@@ -134,10 +141,13 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
CurrentVersion: []byte{0, 0, 0, 0},
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}
})
currentEpoch := helpers.CurrentEpoch(state)
dt := helpers.Domain(state.Fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer)
dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
t.Fatalf("Failed to get domain form state: %v", err)
}
priv := bls.RandKey()
blockSig := priv.Sign([]byte("hello"), dt)
validators[5896].PublicKey = priv.PublicKey().Marshal()
@@ -152,7 +162,7 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
Signature: blockSig.Marshal(),
}
_, err := blocks.ProcessBlockHeader(state, block)
_, err = blocks.ProcessBlockHeader(state, block)
want := "does not match"
if !strings.Contains(err.Error(), want) {
t.Errorf("Expected %v, received %v", want, err)
@@ -168,7 +178,7 @@ func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
}
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: validators,
Slot: 0,
LatestBlockHeader: &ethpb.BeaconBlockHeader{Slot: 9},
@@ -177,14 +187,17 @@ func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
CurrentVersion: []byte{0, 0, 0, 0},
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}
})
parentRoot, err := ssz.HashTreeRoot(state.LatestBlockHeader)
parentRoot, err := ssz.HashTreeRoot(state.LatestBlockHeader())
if err != nil {
t.Error(err)
}
currentEpoch := helpers.CurrentEpoch(state)
dt := helpers.Domain(state.Fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer)
dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
t.Fatalf("Failed to get domain form state: %v", err)
}
priv := bls.RandKey()
blockSig := priv.Sign([]byte("hello"), dt)
validators[12683].PublicKey = priv.PublicKey().Marshal()
@@ -215,7 +228,7 @@ func TestProcessBlockHeader_OK(t *testing.T) {
}
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: validators,
Slot: 0,
LatestBlockHeader: &ethpb.BeaconBlockHeader{Slot: 9},
@@ -224,14 +237,17 @@ func TestProcessBlockHeader_OK(t *testing.T) {
CurrentVersion: []byte{0, 0, 0, 0},
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}
})
latestBlockSignedRoot, err := ssz.HashTreeRoot(state.LatestBlockHeader)
latestBlockSignedRoot, err := ssz.HashTreeRoot(state.LatestBlockHeader())
if err != nil {
t.Error(err)
}
currentEpoch := helpers.CurrentEpoch(state)
dt := helpers.Domain(state.Fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer)
dt, err := helpers.Domain(state.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
t.Fatalf("Failed to get domain form state: %v", err)
}
priv := bls.RandKey()
block := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
@@ -259,13 +275,17 @@ func TestProcessBlockHeader_OK(t *testing.T) {
}
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
err = state.UpdateValidatorAtIndex(proposerIdx, validators[proposerIdx])
if err != nil {
t.Fatal(err)
}
newState, err := blocks.ProcessBlockHeader(state, block)
if err != nil {
t.Fatalf("Failed to process block header got: %v", err)
}
var zeroHash [32]byte
nsh := newState.LatestBlockHeader
nsh := newState.LatestBlockHeader()
expected := &ethpb.BeaconBlockHeader{
Slot: block.Block.Slot,
ParentRoot: latestBlockSignedRoot[:],
@@ -287,8 +307,10 @@ func TestProcessRandao_IncorrectProposerFailsVerification(t *testing.T) {
epoch := uint64(0)
buf := make([]byte, 32)
binary.LittleEndian.PutUint64(buf, epoch)
domain := helpers.Domain(beaconState.Fork, epoch, params.BeaconConfig().DomainRandao)
domain, err := helpers.Domain(beaconState.Fork(), epoch, params.BeaconConfig().DomainRandao)
if err != nil {
t.Fatal(err)
}
// We make the previous validator's index sign the message instead of the proposer.
epochSignature := privKeys[proposerIdx-1].Sign(buf, domain)
block := &ethpb.BeaconBlock{
@@ -329,7 +351,7 @@ func TestProcessRandao_SignatureVerifiesAndUpdatesLatestStateMixes(t *testing.T)
t.Errorf("Unexpected error processing block randao: %v", err)
}
currentEpoch := helpers.CurrentEpoch(beaconState)
mix := newState.RandaoMixes[currentEpoch%params.BeaconConfig().EpochsPerHistoricalVector]
mix := newState.RandaoMixes()[currentEpoch%params.BeaconConfig().EpochsPerHistoricalVector]
if bytes.Equal(mix, params.BeaconConfig().ZeroHash[:]) {
t.Errorf(
@@ -340,9 +362,9 @@ func TestProcessRandao_SignatureVerifiesAndUpdatesLatestStateMixes(t *testing.T)
}
func TestProcessEth1Data_SetsCorrectly(t *testing.T) {
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Eth1DataVotes: []*ethpb.Eth1Data{},
}
})
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
@@ -360,20 +382,20 @@ func TestProcessEth1Data_SetsCorrectly(t *testing.T) {
}
}
newETH1DataVotes := beaconState.Eth1DataVotes
newETH1DataVotes := beaconState.Eth1DataVotes()
if len(newETH1DataVotes) <= 1 {
t.Error("Expected new ETH1 data votes to have length > 1")
}
if !proto.Equal(beaconState.Eth1Data, block.Body.Eth1Data) {
if !proto.Equal(beaconState.Eth1Data(), stateTrie.CopyETH1Data(block.Body.Eth1Data)) {
t.Errorf(
"Expected latest eth1 data to have been set to %v, received %v",
block.Body.Eth1Data,
beaconState.Eth1Data,
beaconState.Eth1Data(),
)
}
}
func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
registry := make([]*ethpb.Validator, 2)
beaconState, _ := testutil.DeterministicGenesisState(t, 20)
currentSlot := uint64(0)
slashings := []*ethpb.ProposerSlashing{
{
@@ -390,11 +412,8 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
},
},
}
beaconState.SetSlot(currentSlot)
beaconState := &pb.BeaconState{
Validators: registry,
Slot: currentSlot,
}
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
ProposerSlashings: slashings,
@@ -407,7 +426,7 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
}
func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
registry := make([]*ethpb.Validator, 2)
beaconState, _ := testutil.DeterministicGenesisState(t, 2)
currentSlot := uint64(0)
slashings := []*ethpb.ProposerSlashing{
{
@@ -425,10 +444,7 @@ func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
},
}
beaconState := &pb.BeaconState{
Validators: registry,
Slot: currentSlot,
}
beaconState.SetSlot(currentSlot)
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
ProposerSlashings: slashings,
@@ -468,10 +484,10 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
},
}
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Slot: currentSlot,
}
})
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
ProposerSlashings: slashings,
@@ -479,7 +495,7 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
}
want := fmt.Sprintf(
"validator with key %#x is not slashable",
beaconState.Validators[0].PublicKey,
beaconState.Validators()[0].PublicKey,
)
if _, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Body); !strings.Contains(err.Error(), want) {
@@ -493,7 +509,10 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
proposerIdx := uint64(1)
domain := helpers.Domain(beaconState.Fork, 0, params.BeaconConfig().DomainBeaconProposer)
domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
t.Fatal(err)
}
header1 := &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: 0,
@@ -537,10 +556,10 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
t.Fatalf("Unexpected error: %s", err)
}
newStateVals := newState.Validators
if newStateVals[1].ExitEpoch != beaconState.Validators[1].ExitEpoch {
newStateVals := newState.Validators()
if newStateVals[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf("Proposer with index 1 did not correctly exit,"+"wanted slot:%d, got:%d",
newStateVals[1].ExitEpoch, beaconState.Validators[1].ExitEpoch)
newStateVals[1].ExitEpoch, beaconState.Validators()[1].ExitEpoch)
}
}
@@ -584,10 +603,10 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
registry := []*ethpb.Validator{}
currentSlot := uint64(0)
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Slot: currentSlot,
}
})
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
@@ -604,10 +623,10 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
registry := []*ethpb.Validator{}
currentSlot := uint64(0)
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Slot: currentSlot,
}
})
slashings := []*ethpb.AttesterSlashing{
{
@@ -642,7 +661,7 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
for _, vv := range beaconState.Validators {
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = 1 * params.BeaconConfig().SlotsPerEpoch
}
@@ -657,7 +676,10 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
if err != nil {
t.Error(err)
}
domain := helpers.Domain(beaconState.Fork, 0, params.BeaconConfig().DomainBeaconAttester)
domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
t.Fatal(err)
}
sig0 := privKeys[0].Sign(hashTreeRoot[:], domain)
sig1 := privKeys[1].Sign(hashTreeRoot[:], domain)
aggregateSig := bls.AggregateSignatures([]*bls.Signature{sig0, sig1})
@@ -687,7 +709,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
beaconState.Slot = currentSlot
beaconState.SetSlot(currentSlot)
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
@@ -699,17 +721,17 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
if err != nil {
t.Fatal(err)
}
newRegistry := newState.Validators
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators[1].ExitEpoch {
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
beaconState.Validators[1].ExitEpoch,
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
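The comment about the intersection of slashable indices states the spec rule: only validators that attested in both conflicting attestations are slashed. A standalone sketch of that set intersection (the helper name and example indices are illustrative; Prysm's slashableAttesterIndices presumably computes the same intersection from the two attestations' attesting indices):

package main

import "fmt"

// intersectIndices returns the validator indices present in both attesting
// index lists; per the spec, the slashable indices are exactly this intersection.
func intersectIndices(a, b []uint64) []uint64 {
	seen := make(map[uint64]bool, len(a))
	for _, idx := range a {
		seen[idx] = true
	}
	var out []uint64
	for _, idx := range b {
		if seen[idx] {
			out = append(out, idx)
			seen[idx] = false // guard against duplicates in b
		}
	}
	return out
}

func main() {
	att1 := []uint64{0, 1}
	att2 := []uint64{1, 2}
	fmt.Println(intersectIndices(att1, att2)) // [1]: only validator 1 is slashed and exited
}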
@@ -735,7 +757,7 @@ func TestProcessAttestations_InclusionDelayFailure(t *testing.T) {
"attestation slot %d + inclusion delay %d > state slot %d",
attestations[0].Data.Slot,
params.BeaconConfig().MinAttestationInclusionDelay,
beaconState.Slot,
beaconState.Slot(),
)
if _, err := blocks.ProcessAttestations(context.Background(), beaconState, block.Body); !strings.Contains(err.Error(), want) {
t.Errorf("Expected %s, received %v", want, err)
@@ -754,9 +776,11 @@ func TestProcessAttestations_NeitherCurrentNorPrevEpoch(t *testing.T) {
},
}
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot += params.BeaconConfig().SlotsPerEpoch*4 + params.BeaconConfig().MinAttestationInclusionDelay
beaconState.PreviousJustifiedCheckpoint.Root = []byte("hello-world")
beaconState.PreviousEpochAttestations = []*pb.PendingAttestation{}
beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().SlotsPerEpoch*4 + params.BeaconConfig().MinAttestationInclusionDelay)
pfc := beaconState.PreviousJustifiedCheckpoint()
pfc.Root = []byte("hello-world")
beaconState.SetPreviousJustifiedCheckpoint(pfc)
beaconState.SetPreviousEpochAttestations([]*pb.PendingAttestation{})
want := fmt.Sprintf(
"expected target epoch (%d) to be the previous epoch (%d) or the current epoch (%d)",
@@ -786,9 +810,11 @@ func TestProcessAttestations_CurrentEpochFFGDataMismatches(t *testing.T) {
},
}
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot += params.BeaconConfig().MinAttestationInclusionDelay
beaconState.CurrentJustifiedCheckpoint.Root = []byte("hello-world")
beaconState.CurrentEpochAttestations = []*pb.PendingAttestation{}
beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
cfc := beaconState.CurrentJustifiedCheckpoint()
cfc.Root = []byte("hello-world")
beaconState.SetCurrentJustifiedCheckpoint(cfc)
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{})
want := fmt.Sprintf(
"expected source epoch %d, received %d",
@@ -804,7 +830,7 @@ func TestProcessAttestations_CurrentEpochFFGDataMismatches(t *testing.T) {
want = fmt.Sprintf(
"expected source root %#x, received %#x",
beaconState.CurrentJustifiedCheckpoint.Root,
beaconState.CurrentJustifiedCheckpoint().Root,
attestations[0].Data.Source.Root,
)
if _, err := blocks.ProcessAttestations(context.Background(), beaconState, block.Body); !strings.Contains(err.Error(), want) {
@@ -833,9 +859,11 @@ func TestProcessAttestations_PrevEpochFFGDataMismatches(t *testing.T) {
},
}
beaconState.Slot += params.BeaconConfig().SlotsPerEpoch + params.BeaconConfig().MinAttestationInclusionDelay
beaconState.PreviousJustifiedCheckpoint.Root = []byte("hello-world")
beaconState.PreviousEpochAttestations = []*pb.PendingAttestation{}
beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().SlotsPerEpoch + params.BeaconConfig().MinAttestationInclusionDelay)
pfc := beaconState.PreviousJustifiedCheckpoint()
pfc.Root = []byte("hello-world")
beaconState.SetPreviousJustifiedCheckpoint(pfc)
beaconState.SetPreviousEpochAttestations([]*pb.PendingAttestation{})
want := fmt.Sprintf(
"expected source epoch %d, received %d",
@@ -852,7 +880,7 @@ func TestProcessAttestations_PrevEpochFFGDataMismatches(t *testing.T) {
want = fmt.Sprintf(
"expected source root %#x, received %#x",
beaconState.CurrentJustifiedCheckpoint.Root,
beaconState.CurrentJustifiedCheckpoint().Root,
attestations[0].Data.Source.Root,
)
if _, err := blocks.ProcessAttestations(context.Background(), beaconState, block.Body); !strings.Contains(err.Error(), want) {
@@ -877,10 +905,12 @@ func TestProcessAttestations_InvalidAggregationBitsLength(t *testing.T) {
},
}
beaconState.Slot += params.BeaconConfig().MinAttestationInclusionDelay
beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
beaconState.CurrentJustifiedCheckpoint.Root = []byte("hello-world")
beaconState.CurrentEpochAttestations = []*pb.PendingAttestation{}
cfc := beaconState.CurrentJustifiedCheckpoint()
cfc.Root = []byte("hello-world")
beaconState.SetCurrentJustifiedCheckpoint(cfc)
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{})
expected := "failed to verify aggregation bitfield: wanted participants bitfield length 3, got: 4"
_, err := blocks.ProcessAttestations(context.Background(), beaconState, block.Body)
@@ -894,22 +924,26 @@ func TestProcessAttestations_OK(t *testing.T) {
aggBits := bitfield.NewBitlist(3)
aggBits.SetBitAt(0, true)
var mockRoot [32]byte
copy(mockRoot[:], "hello-world")
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Target: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
},
AggregationBits: aggBits,
}
beaconState.CurrentJustifiedCheckpoint.Root = []byte("hello-world")
beaconState.CurrentEpochAttestations = []*pb.PendingAttestation{}
cfc := beaconState.CurrentJustifiedCheckpoint()
cfc.Root = mockRoot[:]
beaconState.SetCurrentJustifiedCheckpoint(cfc)
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{})
committee, err := helpers.BeaconCommitteeFromState(beaconState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
t.Error(err)
}
attestingIndices, err := helpers.AttestingIndices(att.AggregationBits, committee)
attestingIndices := attestationutil.AttestingIndices(att.AggregationBits, committee)
if err != nil {
t.Error(err)
}
@@ -917,7 +951,10 @@ func TestProcessAttestations_OK(t *testing.T) {
if err != nil {
t.Error(err)
}
domain := helpers.Domain(beaconState.Fork, 0, params.BeaconConfig().DomainBeaconAttester)
domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
t.Fatal(err)
}
sigs := make([]*bls.Signature, len(attestingIndices))
for i, indice := range attestingIndices {
sig := privKeys[indice].Sign(hashTreeRoot[:], domain)
@@ -931,7 +968,7 @@ func TestProcessAttestations_OK(t *testing.T) {
},
}
beaconState.Slot += params.BeaconConfig().MinAttestationInclusionDelay
beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
if _, err := blocks.ProcessAttestations(context.Background(), beaconState, block.Body); err != nil {
t.Errorf("Unexpected error: %v", err)
@@ -941,7 +978,10 @@ func TestProcessAttestations_OK(t *testing.T) {
func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
domain := helpers.Domain(beaconState.Fork, 0, params.BeaconConfig().DomainBeaconAttester)
domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
t.Fatal(err)
}
data := &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Target: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
@@ -955,14 +995,16 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
AggregationBits: aggBits1,
}
beaconState.CurrentJustifiedCheckpoint.Root = []byte("hello-world")
beaconState.CurrentEpochAttestations = []*pb.PendingAttestation{}
cfc := beaconState.CurrentJustifiedCheckpoint()
cfc.Root = []byte("hello-world")
beaconState.SetCurrentJustifiedCheckpoint(cfc)
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{})
committee, err := helpers.BeaconCommitteeFromState(beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
if err != nil {
t.Error(err)
}
attestingIndices1, err := helpers.AttestingIndices(att1.AggregationBits, committee)
attestingIndices1 := attestationutil.AttestingIndices(att1.AggregationBits, committee)
if err != nil {
t.Fatal(err)
}
@@ -990,7 +1032,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices2, err := helpers.AttestingIndices(att2.AggregationBits, committee)
attestingIndices2 := attestationutil.AttestingIndices(att2.AggregationBits, committee)
if err != nil {
t.Fatal(err)
}
@@ -1013,10 +1055,15 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
func TestProcessAggregatedAttestation_NoOverlappingBits(t *testing.T) {
beaconState, privKeys := testutil.DeterministicGenesisState(t, 300)
domain := helpers.Domain(beaconState.Fork, 0, params.BeaconConfig().DomainBeaconAttester)
domain, err := helpers.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
t.Fatal(err)
}
var mockRoot [32]byte
copy(mockRoot[:], "hello-world")
data := &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Target: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
}
aggBits1 := bitfield.NewBitlist(9)
aggBits1.SetBitAt(0, true)
@@ -1026,14 +1073,16 @@ func TestProcessAggregatedAttestation_NoOverlappingBits(t *testing.T) {
AggregationBits: aggBits1,
}
beaconState.CurrentJustifiedCheckpoint.Root = []byte("hello-world")
beaconState.CurrentEpochAttestations = []*pb.PendingAttestation{}
cfc := beaconState.CurrentJustifiedCheckpoint()
cfc.Root = mockRoot[:]
beaconState.SetCurrentJustifiedCheckpoint(cfc)
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{})
committee, err := helpers.BeaconCommitteeFromState(beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
if err != nil {
t.Error(err)
}
attestingIndices1, err := helpers.AttestingIndices(att1.AggregationBits, committee)
attestingIndices1 := attestationutil.AttestingIndices(att1.AggregationBits, committee)
if err != nil {
t.Fatal(err)
}
@@ -1060,7 +1109,7 @@ func TestProcessAggregatedAttestation_NoOverlappingBits(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices2, err := helpers.AttestingIndices(att2.AggregationBits, committee)
attestingIndices2 := attestationutil.AttestingIndices(att2.AggregationBits, committee)
if err != nil {
t.Fatal(err)
}
@@ -1085,7 +1134,7 @@ func TestProcessAggregatedAttestation_NoOverlappingBits(t *testing.T) {
},
}
beaconState.Slot += params.BeaconConfig().MinAttestationInclusionDelay
beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
if _, err := blocks.ProcessAttestations(context.Background(), beaconState, block.Body); err != nil {
t.Error(err)
@@ -1114,9 +1163,11 @@ func TestProcessAttestationsNoVerify_OK(t *testing.T) {
aggBits := bitfield.NewBitlist(3)
aggBits.SetBitAt(1, true)
var mockRoot [32]byte
copy(mockRoot[:], "hello-world")
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Epoch: 0},
},
AggregationBits: aggBits,
@@ -1125,9 +1176,11 @@ func TestProcessAttestationsNoVerify_OK(t *testing.T) {
zeroSig := [96]byte{}
att.Signature = zeroSig[:]
beaconState.Slot += params.BeaconConfig().MinAttestationInclusionDelay
beaconState.CurrentJustifiedCheckpoint.Root = []byte("hello-world")
beaconState.CurrentEpochAttestations = []*pb.PendingAttestation{}
beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
ckp := beaconState.CurrentJustifiedCheckpoint()
copy(ckp.Root, "hello-world")
beaconState.SetCurrentJustifiedCheckpoint(ckp)
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{})
if _, err := blocks.ProcessAttestationNoVerify(context.TODO(), beaconState, att); err != nil {
t.Errorf("Unexpected error: %v", err)
@@ -1135,6 +1188,7 @@ func TestProcessAttestationsNoVerify_OK(t *testing.T) {
}
func TestConvertToIndexed_OK(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, 2*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
@@ -1142,22 +1196,22 @@ func TestConvertToIndexed_OK(t *testing.T) {
}
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 5,
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}
})
tests := []struct {
aggregationBitfield bitfield.Bitlist
wantedAttestingIndices []uint64
}{
{
aggregationBitfield: bitfield.Bitlist{0x07},
wantedAttestingIndices: []uint64{4, 30},
wantedAttestingIndices: []uint64{43, 47},
},
{
aggregationBitfield: bitfield.Bitlist{0x03},
wantedAttestingIndices: []uint64{30},
wantedAttestingIndices: []uint64{47},
},
{
aggregationBitfield: bitfield.Bitlist{0x01},
@@ -1165,8 +1219,10 @@ func TestConvertToIndexed_OK(t *testing.T) {
},
}
var sig [96]byte
copy(sig[:], "signed")
attestation := &ethpb.Attestation{
Signature: []byte("signed"),
Signature: sig[:],
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0},
Target: &ethpb.Checkpoint{Epoch: 0},
@@ -1184,10 +1240,7 @@ func TestConvertToIndexed_OK(t *testing.T) {
if err != nil {
t.Error(err)
}
ia, err := blocks.ConvertToIndexed(context.Background(), attestation, committee)
if err != nil {
t.Errorf("failed to convert attestation to indexed attestation: %v", err)
}
ia := attestationutil.ConvertToIndexed(context.Background(), attestation, committee)
if !reflect.DeepEqual(wanted, ia) {
diff, _ := messagediff.PrettyDiff(ia, wanted)
t.Log(diff)
@@ -1207,7 +1260,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
}
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 5,
Validators: validators,
Fork: &pb.Fork{
@@ -1216,7 +1269,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}
})
tests := []struct {
attestation *ethpb.IndexedAttestation
}{
@@ -1255,8 +1308,10 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
}
for _, tt := range tests {
domain := helpers.Domain(state.Fork, tt.attestation.Data.Target.Epoch, params.BeaconConfig().DomainBeaconAttester)
domain, err := helpers.Domain(state.Fork(), tt.attestation.Data.Target.Epoch, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
t.Fatal(err)
}
root, err := ssz.HashTreeRoot(tt.attestation.Data)
if err != nil {
t.Errorf("Could not find the ssz root: %v", err)
@@ -1286,10 +1341,15 @@ func TestValidateIndexedAttestation_AboveMaxLength(t *testing.T) {
for i := uint64(0); i < params.BeaconConfig().MaxValidatorsPerCommittee+5; i++ {
indexedAtt1.AttestingIndices[i] = i
indexedAtt1.Data = &ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Epoch: i,
},
}
}
want := "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE"
if err := blocks.VerifyIndexedAttestation(context.Background(), &pb.BeaconState{}, indexedAtt1); !strings.Contains(err.Error(), want) {
if err := blocks.VerifyIndexedAttestation(context.Background(), &stateTrie.BeaconState{}, indexedAtt1); !strings.Contains(err.Error(), want) {
t.Errorf("Expected verification to fail return false, received: %v", err)
}
}
@@ -1315,7 +1375,7 @@ func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
},
}
balances := []uint64{0}
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
@@ -1323,14 +1383,14 @@ func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
}
})
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, block.Body)
if err != nil {
t.Fatalf("Expected block deposits to process correctly, received: %v", err)
}
if len(newState.Validators) != 2 {
t.Errorf("Incorrect validator count. Wanted %d, got %d", 2, len(newState.Validators))
if len(newState.Validators()) != 2 {
t.Errorf("Incorrect validator count. Wanted %d, got %d", 2, len(newState.Validators()))
}
}
@@ -1362,12 +1422,12 @@ func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
Deposits: []*ethpb.Deposit{deposit},
},
}
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Eth1Data: &ethpb.Eth1Data{
DepositRoot: []byte{0},
BlockHash: []byte{1},
},
}
})
want := "deposit root did not verify"
if _, err := blocks.ProcessDeposits(context.Background(), beaconState, block.Body); !strings.Contains(err.Error(), want) {
t.Errorf("Expected error: %s, received %v", want, err)
@@ -1393,7 +1453,7 @@ func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
},
}
balances := []uint64{0}
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
@@ -1401,16 +1461,16 @@ func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
}
})
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, block.Body)
if err != nil {
t.Fatalf("Expected block deposits to process correctly, received: %v", err)
}
if newState.Balances[1] != dep[0].Data.Amount {
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
"Expected state validator balances index 0 to equal %d, received %d",
dep[0].Data.Amount,
newState.Balances[1],
newState.Balances()[1],
)
}
}
@@ -1461,20 +1521,20 @@ func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
}
balances := []uint64{0, 50}
root := depositTrie.Root()
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: &ethpb.Eth1Data{
DepositRoot: root[:],
BlockHash: root[:],
},
}
})
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, block.Body)
if err != nil {
t.Fatalf("Process deposit failed: %v", err)
}
if newState.Balances[1] != 1000+50 {
t.Errorf("Expected balance at index 1 to be 1050, received %d", newState.Balances[1])
if newState.Balances()[1] != 1000+50 {
t.Errorf("Expected balance at index 1 to be 1050, received %d", newState.Balances()[1])
}
}
@@ -1493,7 +1553,7 @@ func TestProcessDeposit_AddsNewValidatorDeposit(t *testing.T) {
},
}
balances := []uint64{0}
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
@@ -1501,26 +1561,25 @@ func TestProcessDeposit_AddsNewValidatorDeposit(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
}
})
newState, err := blocks.ProcessDeposit(
beaconState,
dep[0],
stateutils.ValidatorIndexMap(beaconState),
)
if err != nil {
t.Fatalf("Process deposit failed: %v", err)
}
if len(newState.Validators) != 2 {
t.Errorf("Expected validator list to have length 2, received: %v", len(newState.Validators))
if len(newState.Validators()) != 2 {
t.Errorf("Expected validator list to have length 2, received: %v", len(newState.Validators()))
}
if len(newState.Balances) != 2 {
t.Fatalf("Expected validator balances list to have length 2, received: %v", len(newState.Balances))
if len(newState.Balances()) != 2 {
t.Fatalf("Expected validator balances list to have length 2, received: %v", len(newState.Balances()))
}
if newState.Balances[1] != dep[0].Data.Amount {
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
"Expected state validator balances index 1 to equal %d, received %d",
dep[0].Data.Amount,
newState.Balances[1],
newState.Balances()[1],
)
}
}
@@ -1545,7 +1604,7 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
},
}
balances := []uint64{0}
beaconState := &pb.BeaconState{
beaconState, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
@@ -1553,30 +1612,29 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
}
})
newState, err := blocks.ProcessDeposit(
beaconState,
dep[0],
stateutils.ValidatorIndexMap(beaconState),
)
if err != nil {
t.Fatalf("Expected invalid block deposit to be ignored without error, received: %v", err)
}
if newState.Eth1DepositIndex != 1 {
if newState.Eth1DepositIndex() != 1 {
t.Errorf(
"Expected Eth1DepositIndex to be increased by 1 after processing an invalid deposit, received change: %v",
newState.Eth1DepositIndex,
newState.Eth1DepositIndex(),
)
}
if len(newState.Validators) != 1 {
t.Errorf("Expected validator list to have length 1, received: %v", len(newState.Validators))
if len(newState.Validators()) != 1 {
t.Errorf("Expected validator list to have length 1, received: %v", len(newState.Validators()))
}
if len(newState.Balances) != 1 {
t.Errorf("Expected validator balances list to have length 1, received: %v", len(newState.Balances))
if len(newState.Balances()) != 1 {
t.Errorf("Expected validator balances list to have length 1, received: %v", len(newState.Balances()))
}
if newState.Balances[0] != 0 {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances[0])
if newState.Balances()[0] != 0 {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
@@ -1593,9 +1651,9 @@ func TestProcessVoluntaryExits_ValidatorNotActive(t *testing.T) {
ExitEpoch: 0,
},
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
}
})
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
VoluntaryExits: exits,
@@ -1622,10 +1680,10 @@ func TestProcessVoluntaryExits_InvalidExitEpoch(t *testing.T) {
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Slot: 0,
}
})
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
VoluntaryExits: exits,
@@ -1653,10 +1711,10 @@ func TestProcessVoluntaryExits_NotActiveLongEnoughToExit(t *testing.T) {
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Slot: 10,
}
})
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
VoluntaryExits: exits,
@@ -1684,23 +1742,31 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
ActivationEpoch: 0,
},
}
state := &pb.BeaconState{
state, _ := stateTrie.InitializeFromProto(&pb.BeaconState{
Validators: registry,
Fork: &pb.Fork{
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
Slot: params.BeaconConfig().SlotsPerEpoch * 5,
}
state.Slot = state.Slot + (params.BeaconConfig().PersistentCommitteePeriod * params.BeaconConfig().SlotsPerEpoch)
})
state.SetSlot(state.Slot() + (params.BeaconConfig().PersistentCommitteePeriod * params.BeaconConfig().SlotsPerEpoch))
priv := bls.RandKey()
state.Validators[0].PublicKey = priv.PublicKey().Marshal()[:]
val, err := state.ValidatorAtIndex(0)
if err != nil {
t.Fatal(err)
}
val.PublicKey = priv.PublicKey().Marshal()[:]
state.UpdateValidatorAtIndex(0, val)
signingRoot, err := ssz.HashTreeRoot(exits[0].Exit)
if err != nil {
t.Error(err)
}
domain := helpers.Domain(state.Fork, helpers.CurrentEpoch(state), params.BeaconConfig().DomainVoluntaryExit)
domain, err := helpers.Domain(state.Fork(), helpers.CurrentEpoch(state), params.BeaconConfig().DomainVoluntaryExit)
if err != nil {
t.Fatal(err)
}
sig := priv.Sign(signingRoot[:], domain)
exits[0].Signature = sig.Marshal()
block := &ethpb.BeaconBlock{
@@ -1713,9 +1779,9 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
if err != nil {
t.Fatalf("Could not process exits: %v", err)
}
newRegistry := newState.Validators
if newRegistry[0].ExitEpoch != helpers.DelayedActivationExitEpoch(state.Slot/params.BeaconConfig().SlotsPerEpoch) {
newRegistry := newState.Validators()
if newRegistry[0].ExitEpoch != helpers.ActivationExitEpoch(state.Slot()/params.BeaconConfig().SlotsPerEpoch) {
t.Errorf("Expected validator exit epoch to be %d, got %d",
helpers.DelayedActivationExitEpoch(state.Slot/params.BeaconConfig().SlotsPerEpoch), newRegistry[0].ExitEpoch)
helpers.ActivationExitEpoch(state.Slot()/params.BeaconConfig().SlotsPerEpoch), newRegistry[0].ExitEpoch)
}
}
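Most of the churn in this test file is one mechanical pattern: instead of assigning to raw pb.BeaconState fields (beaconState.Slot += ..., beaconState.CurrentJustifiedCheckpoint.Root = ...), the tests now read a copy through a getter, mutate the copy, and write it back through a setter on the stateTrie.BeaconState wrapper. A stripped-down sketch of why the wrapper forces that read-modify-write style; the Checkpoint stand-in, field set, and locking are illustrative only, not Prysm's actual state package:

package main

import (
	"fmt"
	"sync"
)

// Checkpoint is a stand-in for ethpb.Checkpoint.
type Checkpoint struct {
	Epoch uint64
	Root  []byte
}

// BeaconState hides its fields so every read hands out a copy and every write
// goes through a setter that can invalidate cached hash-tree-root leaves.
type BeaconState struct {
	mu   sync.RWMutex
	slot uint64
	cjc  *Checkpoint
}

func (b *BeaconState) Slot() uint64 {
	b.mu.RLock()
	defer b.mu.RUnlock()
	return b.slot
}

func (b *BeaconState) SetSlot(s uint64) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.slot = s
}

// CurrentJustifiedCheckpoint returns a copy, which is why the tests above
// mutate the copy and then call SetCurrentJustifiedCheckpoint to persist it.
func (b *BeaconState) CurrentJustifiedCheckpoint() *Checkpoint {
	b.mu.RLock()
	defer b.mu.RUnlock()
	return &Checkpoint{Epoch: b.cjc.Epoch, Root: append([]byte{}, b.cjc.Root...)}
}

func (b *BeaconState) SetCurrentJustifiedCheckpoint(c *Checkpoint) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.cjc = c
}

func main() {
	st := &BeaconState{cjc: &Checkpoint{}}
	// Old style: beaconState.CurrentJustifiedCheckpoint.Root = []byte("hello-world")
	// New style: read a copy, modify it, write it back.
	cfc := st.CurrentJustifiedCheckpoint()
	cfc.Root = []byte("hello-world")
	st.SetCurrentJustifiedCheckpoint(cfc)
	st.SetSlot(st.Slot() + 1)
	fmt.Println(st.Slot(), string(st.CurrentJustifiedCheckpoint().Root)) // 1 hello-world
}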


@@ -6,6 +6,7 @@ import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -92,9 +93,9 @@ func TestEth1DataHasEnoughSupport(t *testing.T) {
c.SlotsPerEth1VotingPeriod = tt.votingPeriodLength
params.OverrideBeaconConfig(c)
s := &pb.BeaconState{
s, _ := beaconstate.InitializeFromProto(&pb.BeaconState{
Eth1DataVotes: tt.stateVotes,
}
})
result, err := blocks.Eth1DataHasEnoughSupport(s, tt.data)
if err != nil {
t.Fatal(err)


@@ -35,6 +35,7 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/core/state/stateutils:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/params/spectest:go_default_library",
"//shared/testutil:go_default_library",
@@ -68,6 +69,7 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/core/state/stateutils:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/params/spectest:go_default_library",
"//shared/testutil:go_default_library",


@@ -11,6 +11,7 @@ import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
@@ -38,10 +39,14 @@ func runBlockHeaderTest(t *testing.T, config string) {
if err != nil {
t.Fatal(err)
}
preBeaconState := &pb.BeaconState{}
if err := ssz.Unmarshal(preBeaconStateFile, preBeaconState); err != nil {
preBeaconStateBase := &pb.BeaconState{}
if err := ssz.Unmarshal(preBeaconStateFile, preBeaconStateBase); err != nil {
t.Fatalf("Failed to unmarshal: %v", err)
}
preBeaconState, err := beaconstate.InitializeFromProto(preBeaconStateBase)
if err != nil {
t.Fatal(err)
}
// If the post.ssz is not present, it means the test should fail on our end.
postSSZFilepath, err := bazel.Runfile(path.Join(testsFolderPath, folder.Name(), "post.ssz"))
@@ -68,9 +73,8 @@ func runBlockHeaderTest(t *testing.T, config string) {
if err := ssz.Unmarshal(postBeaconStateFile, postBeaconState); err != nil {
t.Fatalf("Failed to unmarshal: %v", err)
}
if !proto.Equal(beaconState, postBeaconState) {
diff, _ := messagediff.PrettyDiff(beaconState, postBeaconState)
if !proto.Equal(beaconState.CloneInnerState(), postBeaconState) {
diff, _ := messagediff.PrettyDiff(beaconState.CloneInnerState(), postBeaconState)
t.Log(diff)
t.Fatal("Post state does not match expected")
}
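Both spec-test runners in this diff now share the same setup: unmarshal the pre.ssz fixture into the raw protobuf, wrap it with beaconstate.InitializeFromProto, run the transition, and finally compare against post.ssz via proto.Equal on CloneInnerState(). A minimal helper (the name loadPreState is mine) capturing the setup half of that flow, built only from calls that already appear in the diff above:

package spectestsketch

import (
	"testing"

	"github.com/prysmaticlabs/go-ssz"
	beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)

// loadPreState mirrors the new pattern: decode the SSZ fixture into the raw
// protobuf, then wrap it in the state trie type the transition functions expect.
func loadPreState(t *testing.T, raw []byte) *beaconstate.BeaconState {
	base := &pb.BeaconState{}
	if err := ssz.Unmarshal(raw, base); err != nil {
		t.Fatalf("Failed to unmarshal: %v", err)
	}
	st, err := beaconstate.InitializeFromProto(base)
	if err != nil {
		t.Fatal(err)
	}
	return st
}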


@@ -12,13 +12,19 @@ import (
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
"gopkg.in/d4l3k/messagediff.v1"
)
func init() {
state.SkipSlotCache.Disable()
}
func runBlockProcessingTest(t *testing.T, config string) {
if err := spectest.SetConfig(config); err != nil {
t.Fatal(err)
@@ -27,14 +33,19 @@ func runBlockProcessingTest(t *testing.T, config string) {
testFolders, testsFolderPath := testutil.TestFolders(t, config, "sanity/blocks/pyspec_tests")
for _, folder := range testFolders {
t.Run(folder.Name(), func(t *testing.T) {
helpers.ClearCache()
preBeaconStateFile, err := testutil.BazelFileBytes(testsFolderPath, folder.Name(), "pre.ssz")
if err != nil {
t.Fatal(err)
}
beaconState := &pb.BeaconState{}
if err := ssz.Unmarshal(preBeaconStateFile, beaconState); err != nil {
beaconStateBase := &pb.BeaconState{}
if err := ssz.Unmarshal(preBeaconStateFile, beaconStateBase); err != nil {
t.Fatalf("Failed to unmarshal: %v", err)
}
beaconState, err := beaconstate.InitializeFromProto(beaconStateBase)
if err != nil {
t.Fatal(err)
}
file, err := testutil.BazelFileBytes(testsFolderPath, folder.Name(), "meta.yaml")
if err != nil {
@@ -87,8 +98,8 @@ func runBlockProcessingTest(t *testing.T, config string) {
t.Fatalf("Failed to unmarshal: %v", err)
}
if !proto.Equal(beaconState, postBeaconState) {
diff, _ := messagediff.PrettyDiff(beaconState, postBeaconState)
if !proto.Equal(beaconState.CloneInnerState(), postBeaconState) {
diff, _ := messagediff.PrettyDiff(beaconState.CloneInnerState(), postBeaconState)
t.Log(diff)
t.Fatal("Post state does not match expected")
}


@@ -8,7 +8,9 @@ go_library(
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/mathutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -27,6 +29,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/params:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",


@@ -13,24 +13,27 @@ import (
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/mathutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
var epochState *pb.BeaconState
// sortableIndices implements the Sort interface to sort newly activated validator indices
// by activation epoch and by index number.
type sortableIndices []uint64
type sortableIndices struct {
indices []uint64
validators []*ethpb.Validator
}
func (s sortableIndices) Len() int { return len(s) }
func (s sortableIndices) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s sortableIndices) Len() int { return len(s.indices) }
func (s sortableIndices) Swap(i, j int) { s.indices[i], s.indices[j] = s.indices[j], s.indices[i] }
func (s sortableIndices) Less(i, j int) bool {
if epochState.Validators[s[i]].ActivationEligibilityEpoch == epochState.Validators[s[j]].ActivationEligibilityEpoch {
return s[i] < s[j]
if s.validators[s.indices[i]].ActivationEligibilityEpoch == s.validators[s.indices[j]].ActivationEligibilityEpoch {
return s.indices[i] < s.indices[j]
}
return epochState.Validators[s[i]].ActivationEligibilityEpoch < epochState.Validators[s[j]].ActivationEligibilityEpoch
return s.validators[s.indices[i]].ActivationEligibilityEpoch < s.validators[s.indices[j]].ActivationEligibilityEpoch
}
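This refactor removes the package-level epochState variable: the validators slice now travels inside the sort value itself, so Less no longer reads a global. A standalone version of that sort.Interface, with a toy Validator type standing in for ethpb.Validator:

package main

import (
	"fmt"
	"sort"
)

// Validator is a stand-in for ethpb.Validator with only the field the sort needs.
type Validator struct {
	ActivationEligibilityEpoch uint64
}

// sortableIndices orders validator indices by activation eligibility epoch,
// breaking ties by index, without any package-level state.
type sortableIndices struct {
	indices    []uint64
	validators []*Validator
}

func (s sortableIndices) Len() int      { return len(s.indices) }
func (s sortableIndices) Swap(i, j int) { s.indices[i], s.indices[j] = s.indices[j], s.indices[i] }
func (s sortableIndices) Less(i, j int) bool {
	a, b := s.validators[s.indices[i]], s.validators[s.indices[j]]
	if a.ActivationEligibilityEpoch == b.ActivationEligibilityEpoch {
		return s.indices[i] < s.indices[j]
	}
	return a.ActivationEligibilityEpoch < b.ActivationEligibilityEpoch
}

func main() {
	vals := []*Validator{
		{ActivationEligibilityEpoch: 5},
		{ActivationEligibilityEpoch: 3},
		{ActivationEligibilityEpoch: 3},
		{ActivationEligibilityEpoch: 9},
	}
	queue := []uint64{0, 1, 2, 3}
	sort.Sort(sortableIndices{indices: queue, validators: vals})
	fmt.Println(queue) // [1 2 0 3]: epoch 3 first (tie broken by index), then 5, then 9
}

Passing the validators slice explicitly is what lets ProcessRegistryUpdates drop the epochState = state assignment that the old slice-based sortableIndices required.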
// AttestingBalance returns the total balance from all the attesting indices.
@@ -42,7 +45,7 @@ func (s sortableIndices) Less(i, j int) bool {
// Spec pseudocode definition:
// def get_attesting_balance(state: BeaconState, attestations: List[PendingAttestation]) -> Gwei:
// return get_total_balance(state, get_unslashed_attesting_indices(state, attestations))
func AttestingBalance(state *pb.BeaconState, atts []*pb.PendingAttestation) (uint64, error) {
func AttestingBalance(state *stateTrie.BeaconState, atts []*pb.PendingAttestation) (uint64, error) {
indices, err := unslashedAttestingIndices(state, atts)
if err != nil {
return 0, errors.Wrap(err, "could not get attesting indices")
@@ -73,14 +76,17 @@ func AttestingBalance(state *pb.BeaconState, atts []*pb.PendingAttestation) (uin
// for index in activation_queue[:get_validator_churn_limit(state)]:
// validator = state.validators[index]
// validator.activation_epoch = compute_activation_exit_epoch(get_current_epoch(state))
func ProcessRegistryUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
func ProcessRegistryUpdates(state *stateTrie.BeaconState) (*stateTrie.BeaconState, error) {
currentEpoch := helpers.CurrentEpoch(state)
vals := state.Validators()
var err error
for idx, validator := range state.Validators {
for idx, validator := range vals {
// Process the validators for activation eligibility.
if helpers.IsEligibleForActivationQueue(validator) {
validator.ActivationEligibilityEpoch = helpers.CurrentEpoch(state) + 1
if err := state.UpdateValidatorAtIndex(uint64(idx), validator); err != nil {
return nil, err
}
}
// Process the validators for ejection.
@@ -96,14 +102,13 @@ func ProcessRegistryUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
// Queue validators eligible for activation and not yet dequeued for activation.
var activationQ []uint64
for idx, validator := range state.Validators {
for idx, validator := range vals {
if helpers.IsEligibleForActivation(state, validator) {
activationQ = append(activationQ, uint64(idx))
}
}
epochState = state
sort.Sort(sortableIndices(activationQ))
sort.Sort(sortableIndices{indices: activationQ, validators: vals})
// Only activate just enough validators according to the activation churn limit.
limit := len(activationQ)
@@ -123,10 +128,15 @@ func ProcessRegistryUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
}
for _, index := range activationQ[:limit] {
validator := state.Validators[index]
validator.ActivationEpoch = helpers.DelayedActivationExitEpoch(currentEpoch)
validator, err := state.ValidatorAtIndex(index)
if err != nil {
return nil, err
}
validator.ActivationEpoch = helpers.ActivationExitEpoch(currentEpoch)
if err := state.UpdateValidatorAtIndex(index, validator); err != nil {
return nil, err
}
}
return state, nil
}
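A hedged driver sketch for the refactored signature, mirroring how the updated tests further down wrap a proto state before calling into epoch processing; registryUpdateExample and base are illustrative names.

func registryUpdateExample(base *pb.BeaconState) (*stateTrie.BeaconState, error) {
	// InitializeFromProto and ProcessRegistryUpdates are the functions shown in this compare.
	st, err := stateTrie.InitializeFromProto(base)
	if err != nil {
		return nil, err
	}
	return ProcessRegistryUpdates(st)
}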
@@ -141,7 +151,7 @@ func ProcessRegistryUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
// penalty_numerator = validator.effective_balance // increment * min(sum(state.slashings) * 3, total_balance)
// penalty = penalty_numerator // total_balance * increment
// decrease_balance(state, ValidatorIndex(index), penalty)
func ProcessSlashings(state *pb.BeaconState) (*pb.BeaconState, error) {
func ProcessSlashings(state *stateTrie.BeaconState) (*stateTrie.BeaconState, error) {
currentEpoch := helpers.CurrentEpoch(state)
totalBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
@@ -152,22 +162,28 @@ func ProcessSlashings(state *pb.BeaconState) (*pb.BeaconState, error) {
exitLength := params.BeaconConfig().EpochsPerSlashingsVector
// Compute the sum of state slashings
slashings := state.Slashings()
totalSlashing := uint64(0)
for _, slashing := range state.Slashings {
for _, slashing := range slashings {
totalSlashing += slashing
}
// Compute slashing for each validator.
for index, validator := range state.Validators {
correctEpoch := (currentEpoch + exitLength/2) == validator.WithdrawableEpoch
if validator.Slashed && correctEpoch {
// A callback is used here so that the slashing logic below is applied to
// every validator uniformly.
err = state.ApplyToEveryValidator(func(idx int, val *ethpb.Validator) (bool, error) {
correctEpoch := (currentEpoch + exitLength/2) == val.WithdrawableEpoch
if val.Slashed && correctEpoch {
minSlashing := mathutil.Min(totalSlashing*3, totalBalance)
increment := params.BeaconConfig().EffectiveBalanceIncrement
penaltyNumerator := validator.EffectiveBalance / increment * minSlashing
penaltyNumerator := val.EffectiveBalance / increment * minSlashing
penalty := penaltyNumerator / totalBalance * increment
state = helpers.DecreaseBalance(state, uint64(index), penalty)
if err := helpers.DecreaseBalance(state, uint64(idx), penalty); err != nil {
return false, err
}
return true, nil
}
}
return false, nil
})
return state, err
}
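A hedged reading of the ApplyToEveryValidator contract used above: the returned bool appears to tell the state wrapper whether the (possibly mutated) validator should be written back, so untouched validators return false. An illustrative helper under that assumption:

func markSlashedWithdrawable(state *stateTrie.BeaconState, epoch uint64) error {
	return state.ApplyToEveryValidator(func(idx int, val *ethpb.Validator) (bool, error) {
		if !val.Slashed {
			return false, nil // nothing changed, no write-back needed
		}
		if val.WithdrawableEpoch < epoch {
			val.WithdrawableEpoch = epoch
		}
		return true, nil // persist the mutated validator
	})
}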
@@ -207,67 +223,97 @@ func ProcessSlashings(state *pb.BeaconState) (*pb.BeaconState, error) {
// # Rotate current/previous epoch attestations
// state.previous_epoch_attestations = state.current_epoch_attestations
// state.current_epoch_attestations = []
func ProcessFinalUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
func ProcessFinalUpdates(state *stateTrie.BeaconState) (*stateTrie.BeaconState, error) {
currentEpoch := helpers.CurrentEpoch(state)
nextEpoch := currentEpoch + 1
// Reset ETH1 data votes.
if (state.Slot+1)%params.BeaconConfig().SlotsPerEth1VotingPeriod == 0 {
state.Eth1DataVotes = []*ethpb.Eth1Data{}
if (state.Slot()+1)%params.BeaconConfig().SlotsPerEth1VotingPeriod == 0 {
if err := state.SetEth1DataVotes([]*ethpb.Eth1Data{}); err != nil {
return nil, err
}
}
bals := state.Balances()
// Update effective balances with hysteresis.
for i, v := range state.Validators {
if v == nil {
return nil, fmt.Errorf("validator %d is nil in state", i)
validatorFunc := func(idx int, val *ethpb.Validator) (bool, error) {
if val == nil {
return false, fmt.Errorf("validator %d is nil in state", idx)
}
if i >= len(state.Balances) {
return nil, fmt.Errorf("validator index exceeds validator length in state %d >= %d", i, len(state.Balances))
if idx >= len(bals) {
return false, fmt.Errorf("validator index exceeds validator length in state %d >= %d", idx, len(state.Balances()))
}
balance := state.Balances[i]
balance := bals[idx]
halfInc := params.BeaconConfig().EffectiveBalanceIncrement / 2
if balance < v.EffectiveBalance || v.EffectiveBalance+3*halfInc < balance {
v.EffectiveBalance = params.BeaconConfig().MaxEffectiveBalance
if v.EffectiveBalance > balance-balance%params.BeaconConfig().EffectiveBalanceIncrement {
v.EffectiveBalance = balance - balance%params.BeaconConfig().EffectiveBalanceIncrement
if balance < val.EffectiveBalance || val.EffectiveBalance+3*halfInc < balance {
val.EffectiveBalance = params.BeaconConfig().MaxEffectiveBalance
if val.EffectiveBalance > balance-balance%params.BeaconConfig().EffectiveBalanceIncrement {
val.EffectiveBalance = balance - balance%params.BeaconConfig().EffectiveBalanceIncrement
}
return true, nil
}
return false, nil
}
if err := state.ApplyToEveryValidator(validatorFunc); err != nil {
return nil, err
}
// Set total slashed balances.
slashedExitLength := params.BeaconConfig().EpochsPerSlashingsVector
slashedEpoch := int(nextEpoch % slashedExitLength)
if len(state.Slashings) != int(slashedExitLength) {
return nil, fmt.Errorf("state slashing length %d different than EpochsPerHistoricalVector %d", len(state.Slashings), slashedExitLength)
slashings := state.Slashings()
if len(slashings) != int(slashedExitLength) {
return nil, fmt.Errorf(
"state slashing length %d different than EpochsPerHistoricalVector %d",
len(slashings),
slashedExitLength,
)
}
if err := state.UpdateSlashingsAtIndex(uint64(slashedEpoch) /* index */, 0 /* value */); err != nil {
return nil, err
}
state.Slashings[slashedEpoch] = 0
// Set RANDAO mix.
randaoMixLength := params.BeaconConfig().EpochsPerHistoricalVector
if len(state.RandaoMixes) != int(randaoMixLength) {
return nil, fmt.Errorf("state randao length %d different than EpochsPerHistoricalVector %d", len(state.RandaoMixes), randaoMixLength)
if state.RandaoMixesLength() != int(randaoMixLength) {
return nil, fmt.Errorf(
"state randao length %d different than EpochsPerHistoricalVector %d",
state.RandaoMixesLength(),
randaoMixLength,
)
}
mix, err := helpers.RandaoMix(state, currentEpoch)
if err != nil {
return nil, err
}
if err := state.UpdateRandaoMixesAtIndex(mix, nextEpoch%randaoMixLength); err != nil {
return nil, err
}
mix := helpers.RandaoMix(state, currentEpoch)
state.RandaoMixes[nextEpoch%randaoMixLength] = mix
// Set historical root accumulator.
epochsPerHistoricalRoot := params.BeaconConfig().SlotsPerHistoricalRoot / params.BeaconConfig().SlotsPerEpoch
if nextEpoch%epochsPerHistoricalRoot == 0 {
historicalBatch := &pb.HistoricalBatch{
BlockRoots: state.BlockRoots,
StateRoots: state.StateRoots,
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
}
batchRoot, err := ssz.HashTreeRoot(historicalBatch)
if err != nil {
return nil, errors.Wrap(err, "could not hash historical batch")
}
state.HistoricalRoots = append(state.HistoricalRoots, batchRoot[:])
if err := state.AppendHistoricalRoots(batchRoot); err != nil {
return nil, err
}
}
// Rotate current and previous epoch attestations.
state.PreviousEpochAttestations = state.CurrentEpochAttestations
state.CurrentEpochAttestations = []*pb.PendingAttestation{}
if err := state.SetPreviousEpochAttestations(state.CurrentEpochAttestations()); err != nil {
return nil, err
}
if err := state.SetCurrentEpochAttestations([]*pb.PendingAttestation{}); err != nil {
return nil, err
}
return state, nil
}
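A short sketch isolating the effective-balance hysteresis applied inside the validator callback above: the effective balance is recomputed only when the actual balance drops below it or climbs more than one and a half increments above it, then capped at MaxEffectiveBalance and rounded down to an increment. nextEffectiveBalance is an illustrative name.

func nextEffectiveBalance(balance, effective uint64) uint64 {
	inc := params.BeaconConfig().EffectiveBalanceIncrement
	halfInc := inc / 2
	if balance < effective || effective+3*halfInc < balance {
		effective = params.BeaconConfig().MaxEffectiveBalance
		if rounded := balance - balance%inc; effective > rounded {
			effective = rounded
		}
	}
	return effective
}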
@@ -281,7 +327,7 @@ func ProcessFinalUpdates(state *pb.BeaconState) (*pb.BeaconState, error) {
// for a in attestations:
// output = output.union(get_attesting_indices(state, a.data, a.aggregation_bits))
// return set(filter(lambda index: not state.validators[index].slashed, output))
func unslashedAttestingIndices(state *pb.BeaconState, atts []*pb.PendingAttestation) ([]uint64, error) {
func unslashedAttestingIndices(state *stateTrie.BeaconState, atts []*pb.PendingAttestation) ([]uint64, error) {
var setIndices []uint64
seen := make(map[uint64]bool)
@@ -290,10 +336,7 @@ func unslashedAttestingIndices(state *pb.BeaconState, atts []*pb.PendingAttestat
if err != nil {
return nil, err
}
attestingIndices, err := helpers.AttestingIndices(att.AggregationBits, committee)
if err != nil {
return nil, errors.Wrap(err, "could not get attester indices")
}
attestingIndices := attestationutil.AttestingIndices(att.AggregationBits, committee)
// Create a set for attesting indices
set := make([]uint64, 0, len(attestingIndices))
for _, index := range attestingIndices {
@@ -308,7 +351,7 @@ func unslashedAttestingIndices(state *pb.BeaconState, atts []*pb.PendingAttestat
sort.Slice(setIndices, func(i, j int) bool { return setIndices[i] < setIndices[j] })
// Remove the slashed validator indices.
for i := 0; i < len(setIndices); i++ {
if state.Validators[setIndices[i]].Slashed {
if v, _ := state.ValidatorAtIndex(setIndices[i]); v != nil && v.Slashed {
setIndices = append(setIndices[:i], setIndices[i+1:]...)
}
}
@@ -327,12 +370,16 @@ func unslashedAttestingIndices(state *pb.BeaconState, atts []*pb.PendingAttestat
// total_balance = get_total_active_balance(state)
// effective_balance = state.validator_registry[index].effective_balance
// return effective_balance * BASE_REWARD_FACTOR // integer_squareroot(total_balance) // BASE_REWARDS_PER_EPOCH
func BaseReward(state *pb.BeaconState, index uint64) (uint64, error) {
func BaseReward(state *stateTrie.BeaconState, index uint64) (uint64, error) {
totalBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
return 0, errors.Wrap(err, "could not calculate active balance")
}
effectiveBalance := state.Validators[index].EffectiveBalance
val, err := state.ValidatorAtIndex(index)
if err != nil {
return 0, err
}
effectiveBalance := val.EffectiveBalance
baseReward := effectiveBalance * params.BeaconConfig().BaseRewardFactor /
mathutil.IntegerSquareRoot(totalBalance) / params.BeaconConfig().BaseRewardsPerEpoch
return baseReward, nil
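A worked check of the BaseReward formula above, assuming mainnet config values (BaseRewardFactor = 64, BaseRewardsPerEpoch = 4) and a registry whose total active balance is a single 32 ETH effective balance; it reproduces the 2862174 figure used in the updated table test below.

// total_balance              = 32_000_000_000 Gwei (one max-effective-balance validator)
// integer_squareroot(total)  = 178885
// 32e9 * 64 / 178885         = 11448696
// 11448696 / 4               = 2862174 Gwei base reward per epoch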

View File

@@ -4,15 +4,20 @@ import (
"testing"
fuzz "github.com/google/gofuzz"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
ethereum_beacon_p2p_v1 "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
func TestFuzzFinalUpdates_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethereum_beacon_p2p_v1.BeaconState{}
base := &ethereum_beacon_p2p_v1.BeaconState{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
_, _ = ProcessFinalUpdates(state)
fuzzer.Fuzz(base)
s, err := beaconstate.InitializeFromProtoUnsafe(base)
if err != nil {
t.Fatal(err)
}
_, _ = ProcessFinalUpdates(s)
}
}

View File

@@ -8,6 +8,7 @@ import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -39,10 +40,14 @@ func TestUnslashedAttestingIndices_CanSortAndFilter(t *testing.T) {
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
state := &pb.BeaconState{
base := &pb.BeaconState{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
indices, err := unslashedAttestingIndices(state, atts)
if err != nil {
@@ -56,7 +61,9 @@ func TestUnslashedAttestingIndices_CanSortAndFilter(t *testing.T) {
// Verify the slashed validator is filtered.
slashedValidator := indices[0]
state.Validators[slashedValidator].Slashed = true
validators = state.Validators()
validators[slashedValidator].Slashed = true
state.SetValidators(validators)
indices, err = unslashedAttestingIndices(state, atts)
if err != nil {
t.Fatal(err)
@@ -87,10 +94,14 @@ func TestUnslashedAttestingIndices_DuplicatedAttestations(t *testing.T) {
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
state := &pb.BeaconState{
base := &pb.BeaconState{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
indices, err := unslashedAttestingIndices(state, atts)
if err != nil {
@@ -105,6 +116,7 @@ func TestUnslashedAttestingIndices_DuplicatedAttestations(t *testing.T) {
}
func TestAttestingBalance_CorrectBalance(t *testing.T) {
helpers.ClearCache()
// Generate 2 attestations.
atts := make([]*pb.PendingAttestation, 2)
for i := 0; i < len(atts); i++ {
@@ -129,13 +141,17 @@ func TestAttestingBalance_CorrectBalance(t *testing.T) {
}
balances[i] = params.BeaconConfig().MaxEffectiveBalance
}
state := &pb.BeaconState{
base := &pb.BeaconState{
Slot: 2,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
Validators: validators,
Balances: balances,
}
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
balance, err := AttestingBalance(state, atts)
if err != nil {
@@ -159,11 +175,15 @@ func TestBaseReward_AccurateRewards(t *testing.T) {
{40 * 1e9, params.BeaconConfig().MaxEffectiveBalance, 2862174},
}
for _, tt := range tests {
state := &pb.BeaconState{
base := &pb.BeaconState{
Validators: []*ethpb.Validator{
{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: tt.b}},
Balances: []uint64{tt.a},
}
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
c, err := BaseReward(state, 0)
if err != nil {
t.Fatal(err)
@@ -176,19 +196,23 @@ func TestBaseReward_AccurateRewards(t *testing.T) {
}
func TestProcessSlashings_NotSlashed(t *testing.T) {
s := &pb.BeaconState{
base := &pb.BeaconState{
Slot: 0,
Validators: []*ethpb.Validator{{Slashed: true}},
Balances: []uint64{params.BeaconConfig().MaxEffectiveBalance},
Slashings: []uint64{0, 1e9},
}
s, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
newState, err := ProcessSlashings(s)
if err != nil {
t.Fatal(err)
}
wanted := params.BeaconConfig().MaxEffectiveBalance
if newState.Balances[0] != wanted {
t.Errorf("Wanted slashed balance: %d, got: %d", wanted, newState.Balances[0])
if newState.Balances()[0] != wanted {
t.Errorf("Wanted slashed balance: %d, got: %d", wanted, newState.Balances()[0])
}
}
@@ -262,16 +286,20 @@ func TestProcessSlashings_SlashedLess(t *testing.T) {
for i, tt := range tests {
t.Run(string(i), func(t *testing.T) {
original := proto.Clone(tt.state)
newState, err := ProcessSlashings(tt.state)
s, err := state.InitializeFromProto(tt.state)
if err != nil {
t.Fatal(err)
}
newState, err := ProcessSlashings(s)
if err != nil {
t.Fatal(err)
}
if newState.Balances[0] != tt.want {
if newState.Balances()[0] != tt.want {
t.Errorf(
"ProcessSlashings({%v}) = newState; newState.Balances[0] = %d; wanted %d",
original,
newState.Balances[0],
newState.Balances()[0],
tt.want,
)
}
@@ -283,48 +311,54 @@ func TestProcessFinalUpdates_CanProcess(t *testing.T) {
s := buildState(params.BeaconConfig().SlotsPerHistoricalRoot-1, params.BeaconConfig().SlotsPerEpoch)
ce := helpers.CurrentEpoch(s)
ne := ce + 1
s.Eth1DataVotes = []*ethpb.Eth1Data{}
s.Balances[0] = 29 * 1e9
s.Slashings[ce] = 0
s.RandaoMixes[ce] = []byte{'A'}
s.SetEth1DataVotes([]*ethpb.Eth1Data{})
balances := s.Balances()
balances[0] = 29 * 1e9
s.SetBalances(balances)
slashings := s.Slashings()
slashings[ce] = 0
s.SetSlashings(slashings)
mixes := s.RandaoMixes()
mixes[ce] = []byte{'A'}
s.SetRandaoMixes(mixes)
newS, err := ProcessFinalUpdates(s)
if err != nil {
t.Fatal(err)
}
// Verify effective balance is correctly updated.
if newS.Validators[0].EffectiveBalance != 29*1e9 {
t.Errorf("effective balance incorrectly updated, got %d", s.Validators[0].EffectiveBalance)
if newS.Validators()[0].EffectiveBalance != 29*1e9 {
t.Errorf("effective balance incorrectly updated, got %d", s.Validators()[0].EffectiveBalance)
}
// Verify slashed balances correctly updated.
if newS.Slashings[ce] != newS.Slashings[ne] {
if newS.Slashings()[ce] != newS.Slashings()[ne] {
t.Errorf("wanted slashed balance %d, got %d",
newS.Slashings[ce],
newS.Slashings[ne])
newS.Slashings()[ce],
newS.Slashings()[ne])
}
// Verify randao is correctly updated in the right position.
if bytes.Equal(newS.RandaoMixes[ne], params.BeaconConfig().ZeroHash[:]) {
if mix, _ := newS.RandaoMixAtIndex(ne); bytes.Equal(mix, params.BeaconConfig().ZeroHash[:]) {
t.Error("latest RANDAO still zero hashes")
}
// Verify historical root accumulator was appended.
if len(newS.HistoricalRoots) != 1 {
t.Errorf("wanted slashed balance %d, got %d", 1, len(newS.HistoricalRoots[ce]))
if len(newS.HistoricalRoots()) != 1 {
t.Errorf("wanted slashed balance %d, got %d", 1, len(newS.HistoricalRoots()[ce]))
}
if newS.CurrentEpochAttestations == nil {
if newS.CurrentEpochAttestations() == nil {
t.Error("nil value stored in current epoch attestations instead of empty slice")
}
}
func TestProcessRegistryUpdates_NoRotation(t *testing.T) {
state := &pb.BeaconState{
base := &pb.BeaconState{
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
Validators: []*ethpb.Validator{
{ExitEpoch: params.BeaconConfig().MaxSeedLookhead},
{ExitEpoch: params.BeaconConfig().MaxSeedLookhead},
{ExitEpoch: params.BeaconConfig().MaxSeedLookahead},
{ExitEpoch: params.BeaconConfig().MaxSeedLookahead},
},
Balances: []uint64{
params.BeaconConfig().MaxEffectiveBalance,
@@ -332,20 +366,24 @@ func TestProcessRegistryUpdates_NoRotation(t *testing.T) {
},
FinalizedCheckpoint: &ethpb.Checkpoint{},
}
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
newState, err := ProcessRegistryUpdates(state)
if err != nil {
t.Fatal(err)
}
for i, validator := range newState.Validators {
if validator.ExitEpoch != params.BeaconConfig().MaxSeedLookhead {
for i, validator := range newState.Validators() {
if validator.ExitEpoch != params.BeaconConfig().MaxSeedLookahead {
t.Errorf("Could not update registry %d, wanted exit slot %d got %d",
i, params.BeaconConfig().MaxSeedLookhead, validator.ExitEpoch)
i, params.BeaconConfig().MaxSeedLookahead, validator.ExitEpoch)
}
}
}
func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
state := &pb.BeaconState{
base := &pb.BeaconState{
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 6},
}
@@ -354,25 +392,26 @@ func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
t.Error(err)
}
for i := 0; i < int(limit)+10; i++ {
state.Validators = append(state.Validators, &ethpb.Validator{
base.Validators = append(base.Validators, &ethpb.Validator{
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
})
}
state, err := state.InitializeFromProto(base)
currentEpoch := helpers.CurrentEpoch(state)
newState, err := ProcessRegistryUpdates(state)
if err != nil {
t.Error(err)
}
for i, validator := range newState.Validators {
for i, validator := range newState.Validators() {
if validator.ActivationEligibilityEpoch != currentEpoch+1 {
t.Errorf("Could not update registry %d, wanted activation eligibility epoch %d got %d",
i, currentEpoch, validator.ActivationEligibilityEpoch)
}
if i < int(limit) && validator.ActivationEpoch != helpers.DelayedActivationExitEpoch(currentEpoch) {
if i < int(limit) && validator.ActivationEpoch != helpers.ActivationExitEpoch(currentEpoch) {
t.Errorf("Could not update registry %d, validators failed to activate: wanted activation epoch %d, got %d",
i, helpers.DelayedActivationExitEpoch(currentEpoch), validator.ActivationEpoch)
i, helpers.ActivationExitEpoch(currentEpoch), validator.ActivationEpoch)
}
if i >= int(limit) && validator.ActivationEpoch != params.BeaconConfig().FarFutureEpoch {
t.Errorf("Could not update registry %d, validators should not have been activated, wanted activation epoch: %d, got %d",
@@ -382,30 +421,34 @@ func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
}
func TestProcessRegistryUpdates_ActivationCompletes(t *testing.T) {
state := &pb.BeaconState{
base := &pb.BeaconState{
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
Validators: []*ethpb.Validator{
{ExitEpoch: params.BeaconConfig().MaxSeedLookhead,
ActivationEpoch: 5 + params.BeaconConfig().MaxSeedLookhead + 1},
{ExitEpoch: params.BeaconConfig().MaxSeedLookhead,
ActivationEpoch: 5 + params.BeaconConfig().MaxSeedLookhead + 1},
{ExitEpoch: params.BeaconConfig().MaxSeedLookahead,
ActivationEpoch: 5 + params.BeaconConfig().MaxSeedLookahead + 1},
{ExitEpoch: params.BeaconConfig().MaxSeedLookahead,
ActivationEpoch: 5 + params.BeaconConfig().MaxSeedLookahead + 1},
},
FinalizedCheckpoint: &ethpb.Checkpoint{},
}
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
newState, err := ProcessRegistryUpdates(state)
if err != nil {
t.Error(err)
}
for i, validator := range newState.Validators {
if validator.ExitEpoch != params.BeaconConfig().MaxSeedLookhead {
for i, validator := range newState.Validators() {
if validator.ExitEpoch != params.BeaconConfig().MaxSeedLookahead {
t.Errorf("Could not update registry %d, wanted exit slot %d got %d",
i, params.BeaconConfig().MaxSeedLookhead, validator.ExitEpoch)
i, params.BeaconConfig().MaxSeedLookahead, validator.ExitEpoch)
}
}
}
func TestProcessRegistryUpdates_ValidatorsEjected(t *testing.T) {
state := &pb.BeaconState{
base := &pb.BeaconState{
Slot: 0,
Validators: []*ethpb.Validator{
{
@@ -419,23 +462,27 @@ func TestProcessRegistryUpdates_ValidatorsEjected(t *testing.T) {
},
FinalizedCheckpoint: &ethpb.Checkpoint{},
}
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
newState, err := ProcessRegistryUpdates(state)
if err != nil {
t.Error(err)
}
for i, validator := range newState.Validators {
if validator.ExitEpoch != params.BeaconConfig().MaxSeedLookhead+1 {
for i, validator := range newState.Validators() {
if validator.ExitEpoch != params.BeaconConfig().MaxSeedLookahead+1 {
t.Errorf("Could not update registry %d, wanted exit slot %d got %d",
i, params.BeaconConfig().MaxSeedLookhead+1, validator.ExitEpoch)
i, params.BeaconConfig().MaxSeedLookahead+1, validator.ExitEpoch)
}
}
}
func TestProcessRegistryUpdates_CanExits(t *testing.T) {
epoch := uint64(5)
exitEpoch := helpers.DelayedActivationExitEpoch(epoch)
exitEpoch := helpers.ActivationExitEpoch(epoch)
minWithdrawalDelay := params.BeaconConfig().MinValidatorWithdrawabilityDelay
state := &pb.BeaconState{
base := &pb.BeaconState{
Slot: epoch * params.BeaconConfig().SlotsPerEpoch,
Validators: []*ethpb.Validator{
{
@@ -447,11 +494,15 @@ func TestProcessRegistryUpdates_CanExits(t *testing.T) {
},
FinalizedCheckpoint: &ethpb.Checkpoint{},
}
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
newState, err := ProcessRegistryUpdates(state)
if err != nil {
t.Fatal(err)
}
for i, validator := range newState.Validators {
for i, validator := range newState.Validators() {
if validator.ExitEpoch != exitEpoch {
t.Errorf("Could not update registry %d, wanted exit slot %d got %d",
i,
@@ -462,7 +513,7 @@ func TestProcessRegistryUpdates_CanExits(t *testing.T) {
}
}
func buildState(slot uint64, validatorCount uint64) *pb.BeaconState {
func buildState(slot uint64, validatorCount uint64) *state.BeaconState {
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
@@ -488,7 +539,7 @@ func buildState(slot uint64, validatorCount uint64) *pb.BeaconState {
for i := 0; i < len(latestRandaoMixes); i++ {
latestRandaoMixes[i] = params.BeaconConfig().ZeroHash[:]
}
return &pb.BeaconState{
s, err := state.InitializeFromProto(&pb.BeaconState{
Slot: slot,
Balances: validatorBalances,
Validators: validators,
@@ -498,5 +549,9 @@ func buildState(slot uint64, validatorCount uint64) *pb.BeaconState {
FinalizedCheckpoint: &ethpb.Checkpoint{},
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{},
})
if err != nil {
panic(err)
}
return s
}

View File

@@ -14,7 +14,9 @@ go_library(
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/mathutil:go_default_library",
"//shared/params:go_default_library",
"//shared/traceutil:go_default_library",
@@ -37,7 +39,9 @@ go_test(
deps = [
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",

View File

@@ -6,7 +6,9 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/traceutil"
"go.opencensus.io/trace"
)
@@ -19,16 +21,17 @@ var Balances *Balance
// it also tracks and updates epoch attesting balances.
func ProcessAttestations(
ctx context.Context,
state *pb.BeaconState,
state *stateTrie.BeaconState,
vp []*Validator,
bp *Balance) ([]*Validator, *Balance, error) {
bp *Balance,
) ([]*Validator, *Balance, error) {
ctx, span := trace.StartSpan(ctx, "precomputeEpoch.ProcessAttestations")
defer span.End()
v := &Validator{}
var err error
for _, a := range append(state.PreviousEpochAttestations, state.CurrentEpochAttestations...) {
for _, a := range append(state.PreviousEpochAttestations(), state.CurrentEpochAttestations()...) {
v.IsCurrentEpochAttester, v.IsCurrentEpochTargetAttester, err = AttestedCurrentEpoch(state, a)
if err != nil {
traceutil.AnnotateError(span, err)
@@ -44,10 +47,7 @@ func ProcessAttestations(
if err != nil {
return nil, nil, err
}
indices, err := helpers.AttestingIndices(a.AggregationBits, committee)
if err != nil {
return nil, nil, err
}
indices := attestationutil.AttestingIndices(a.AggregationBits, committee)
vp = UpdateValidator(vp, v, indices, a, a.Data.Slot)
}
@@ -58,7 +58,7 @@ func ProcessAttestations(
}
// AttestedCurrentEpoch returns true if attestation `a` attested once in current epoch and/or epoch boundary block.
func AttestedCurrentEpoch(s *pb.BeaconState, a *pb.PendingAttestation) (bool, bool, error) {
func AttestedCurrentEpoch(s *stateTrie.BeaconState, a *pb.PendingAttestation) (bool, bool, error) {
currentEpoch := helpers.CurrentEpoch(s)
var votedCurrentEpoch, votedTarget bool
// Did validator vote current epoch.
@@ -76,7 +76,7 @@ func AttestedCurrentEpoch(s *pb.BeaconState, a *pb.PendingAttestation) (bool, bo
}
// AttestedPrevEpoch returns true if attestation `a` attested once in previous epoch and epoch boundary block and/or the same head.
func AttestedPrevEpoch(s *pb.BeaconState, a *pb.PendingAttestation) (bool, bool, bool, error) {
func AttestedPrevEpoch(s *stateTrie.BeaconState, a *pb.PendingAttestation) (bool, bool, bool, error) {
prevEpoch := helpers.PrevEpoch(s)
var votedPrevEpoch, votedTarget, votedHead bool
// Did validator vote previous epoch.
@@ -102,7 +102,7 @@ func AttestedPrevEpoch(s *pb.BeaconState, a *pb.PendingAttestation) (bool, bool,
}
// SameTarget returns true if attestation `a` attested to the same target block in state.
func SameTarget(state *pb.BeaconState, a *pb.PendingAttestation, e uint64) (bool, error) {
func SameTarget(state *stateTrie.BeaconState, a *pb.PendingAttestation, e uint64) (bool, error) {
r, err := helpers.BlockRoot(state, e)
if err != nil {
return false, err
@@ -114,7 +114,7 @@ func SameTarget(state *pb.BeaconState, a *pb.PendingAttestation, e uint64) (bool
}
// SameHead returns true if attestation `a` attested to the same block by attestation slot in state.
func SameHead(state *pb.BeaconState, a *pb.PendingAttestation) (bool, error) {
func SameHead(state *stateTrie.BeaconState, a *pb.PendingAttestation) (bool, error) {
r, err := helpers.BlockRootAtSlot(state, a.Data.Slot)
if err != nil {
return false, err

View File

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -73,12 +74,14 @@ func TestUpdateBalance(t *testing.T) {
func TestSameHead(t *testing.T) {
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot = 1
beaconState.SetSlot(1)
att := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 0}}}
r := []byte{'A'}
beaconState.BlockRoots[0] = r
att.Data.BeaconBlockRoot = r
r := [32]byte{'A'}
br := beaconState.BlockRoots()
br[0] = r[:]
beaconState.SetBlockRoots(br)
att.Data.BeaconBlockRoot = r[:]
same, err := precompute.SameHead(beaconState, &pb.PendingAttestation{Data: att.Data})
if err != nil {
t.Fatal(err)
@@ -86,7 +89,8 @@ func TestSameHead(t *testing.T) {
if !same {
t.Error("head in state does not match head in attestation")
}
att.Data.BeaconBlockRoot = []byte{'B'}
newRoot := [32]byte{'B'}
att.Data.BeaconBlockRoot = newRoot[:]
same, err = precompute.SameHead(beaconState, &pb.PendingAttestation{Data: att.Data})
if err != nil {
t.Fatal(err)
@@ -98,12 +102,14 @@ func TestSameHead(t *testing.T) {
func TestSameTarget(t *testing.T) {
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot = 1
beaconState.SetSlot(1)
att := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 0}}}
r := []byte{'A'}
beaconState.BlockRoots[0] = r
att.Data.Target.Root = r
r := [32]byte{'A'}
br := beaconState.BlockRoots()
br[0] = r[:]
beaconState.SetBlockRoots(br)
att.Data.Target.Root = r[:]
same, err := precompute.SameTarget(beaconState, &pb.PendingAttestation{Data: att.Data}, 0)
if err != nil {
t.Fatal(err)
@@ -111,7 +117,8 @@ func TestSameTarget(t *testing.T) {
if !same {
t.Error("head in state does not match head in attestation")
}
att.Data.Target.Root = []byte{'B'}
newRoot := [32]byte{'B'}
att.Data.Target.Root = newRoot[:]
same, err = precompute.SameTarget(beaconState, &pb.PendingAttestation{Data: att.Data}, 0)
if err != nil {
t.Fatal(err)
@@ -123,13 +130,15 @@ func TestSameTarget(t *testing.T) {
func TestAttestedPrevEpoch(t *testing.T) {
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot = params.BeaconConfig().SlotsPerEpoch
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch)
att := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 0}}}
r := []byte{'A'}
beaconState.BlockRoots[0] = r
att.Data.Target.Root = r
att.Data.BeaconBlockRoot = r
r := [32]byte{'A'}
br := beaconState.BlockRoots()
br[0] = r[:]
beaconState.SetBlockRoots(br)
att.Data.Target.Root = r[:]
att.Data.BeaconBlockRoot = r[:]
votedEpoch, votedTarget, votedHead, err := precompute.AttestedPrevEpoch(beaconState, &pb.PendingAttestation{Data: att.Data})
if err != nil {
t.Fatal(err)
@@ -147,13 +156,16 @@ func TestAttestedPrevEpoch(t *testing.T) {
func TestAttestedCurrentEpoch(t *testing.T) {
beaconState, _ := testutil.DeterministicGenesisState(t, 100)
beaconState.Slot = params.BeaconConfig().SlotsPerEpoch + 1
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch + 1)
att := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 1}}}
r := []byte{'A'}
beaconState.BlockRoots[params.BeaconConfig().SlotsPerEpoch] = r
att.Data.Target.Root = r
att.Data.BeaconBlockRoot = r
r := [32]byte{'A'}
br := beaconState.BlockRoots()
br[params.BeaconConfig().SlotsPerEpoch] = r[:]
beaconState.SetBlockRoots(br)
att.Data.Target.Root = r[:]
att.Data.BeaconBlockRoot = r[:]
votedEpoch, votedTarget, err := precompute.AttestedCurrentEpoch(beaconState, &pb.PendingAttestation{Data: att.Data})
if err != nil {
t.Fatal(err)
@@ -172,7 +184,7 @@ func TestProcessAttestations(t *testing.T) {
validators := uint64(64)
beaconState, _ := testutil.DeterministicGenesisState(t, validators)
beaconState.Slot = params.BeaconConfig().SlotsPerEpoch
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch)
bf := []byte{0xff}
att1 := &ethpb.Attestation{Data: &ethpb.AttestationData{
@@ -181,14 +193,17 @@ func TestProcessAttestations(t *testing.T) {
att2 := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 0}},
AggregationBits: bf}
beaconState.BlockRoots[0] = []byte{'A'}
att1.Data.Target.Root = []byte{'A'}
att1.Data.BeaconBlockRoot = []byte{'A'}
beaconState.BlockRoots[0] = []byte{'B'}
att2.Data.Target.Root = []byte{'A'}
att2.Data.BeaconBlockRoot = []byte{'B'}
beaconState.PreviousEpochAttestations = []*pb.PendingAttestation{{Data: att1.Data, AggregationBits: bf}}
beaconState.CurrentEpochAttestations = []*pb.PendingAttestation{{Data: att2.Data, AggregationBits: bf}}
rt := [32]byte{'A'}
att1.Data.Target.Root = rt[:]
att1.Data.BeaconBlockRoot = rt[:]
br := beaconState.BlockRoots()
newRt := [32]byte{'B'}
br[0] = newRt[:]
beaconState.SetBlockRoots(br)
att2.Data.Target.Root = rt[:]
att2.Data.BeaconBlockRoot = newRt[:]
beaconState.SetPreviousEpochAttestations([]*pb.PendingAttestation{{Data: att1.Data, AggregationBits: bf}})
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{{Data: att2.Data, AggregationBits: bf}})
vp := make([]*precompute.Validator, validators)
for i := 0; i < len(vp); i++ {
@@ -204,7 +219,7 @@ func TestProcessAttestations(t *testing.T) {
if err != nil {
t.Error(err)
}
indices, _ := helpers.AttestingIndices(att1.AggregationBits, committee)
indices := attestationutil.AttestingIndices(att1.AggregationBits, committee)
for _, i := range indices {
if !vp[i].IsPrevEpochAttester {
t.Error("Not a prev epoch attester")
@@ -214,7 +229,7 @@ func TestProcessAttestations(t *testing.T) {
if err != nil {
t.Error(err)
}
indices, _ = helpers.AttestingIndices(att2.AggregationBits, committee)
indices = attestationutil.AttestingIndices(att2.AggregationBits, committee)
for _, i := range indices {
if !vp[i].IsPrevEpochAttester {
t.Error("Not a prev epoch attester")

View File

@@ -4,25 +4,31 @@ import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
)
// ProcessJustificationAndFinalizationPreCompute processes justification and finalization during
// epoch processing. This is where a beacon node can justify and finalize a new epoch.
// Note: this is an optimized version by passing in precomputed total and attesting balances.
func ProcessJustificationAndFinalizationPreCompute(state *pb.BeaconState, p *Balance) (*pb.BeaconState, error) {
if state.Slot <= helpers.StartSlot(2) {
func ProcessJustificationAndFinalizationPreCompute(state *stateTrie.BeaconState, p *Balance) (*stateTrie.BeaconState, error) {
if state.Slot() <= helpers.StartSlot(2) {
return state, nil
}
prevEpoch := helpers.PrevEpoch(state)
currentEpoch := helpers.CurrentEpoch(state)
oldPrevJustifiedCheckpoint := state.PreviousJustifiedCheckpoint
oldCurrJustifiedCheckpoint := state.CurrentJustifiedCheckpoint
oldPrevJustifiedCheckpoint := state.PreviousJustifiedCheckpoint()
oldCurrJustifiedCheckpoint := state.CurrentJustifiedCheckpoint()
// Process justifications
state.PreviousJustifiedCheckpoint = state.CurrentJustifiedCheckpoint
state.JustificationBits.Shift(1)
if err := state.SetPreviousJustifiedCheckpoint(state.CurrentJustifiedCheckpoint()); err != nil {
return nil, err
}
newBits := state.JustificationBits()
newBits.Shift(1)
if err := state.SetJustificationBits(newBits); err != nil {
return nil, err
}
// Note: the spec refers to the bit index position starting at 1 instead of starting at zero.
// We will use that paradigm here for consistency with the godoc spec definition.
@@ -33,8 +39,14 @@ func ProcessJustificationAndFinalizationPreCompute(state *pb.BeaconState, p *Bal
if err != nil {
return nil, errors.Wrapf(err, "could not get block root for previous epoch %d", prevEpoch)
}
state.CurrentJustifiedCheckpoint = &ethpb.Checkpoint{Epoch: prevEpoch, Root: blockRoot}
state.JustificationBits.SetBitAt(1, true)
if err := state.SetCurrentJustifiedCheckpoint(&ethpb.Checkpoint{Epoch: prevEpoch, Root: blockRoot}); err != nil {
return nil, err
}
newBits := state.JustificationBits()
newBits.SetBitAt(1, true)
if err := state.SetJustificationBits(newBits); err != nil {
return nil, err
}
}
// If 2/3 or more of the total balance attested in the current epoch.
@@ -43,31 +55,45 @@ func ProcessJustificationAndFinalizationPreCompute(state *pb.BeaconState, p *Bal
if err != nil {
return nil, errors.Wrapf(err, "could not get block root for current epoch %d", prevEpoch)
}
state.CurrentJustifiedCheckpoint = &ethpb.Checkpoint{Epoch: currentEpoch, Root: blockRoot}
state.JustificationBits.SetBitAt(0, true)
if err := state.SetCurrentJustifiedCheckpoint(&ethpb.Checkpoint{Epoch: currentEpoch, Root: blockRoot}); err != nil {
return nil, err
}
newBits := state.JustificationBits()
newBits.SetBitAt(0, true)
if err := state.SetJustificationBits(newBits); err != nil {
return nil, err
}
}
// Process finalization according to ETH2.0 specifications.
justification := state.JustificationBits.Bytes()[0]
justification := state.JustificationBits().Bytes()[0]
// 2nd/3rd/4th (0b1110) most recent epochs are justified, the 2nd using the 4th as source.
if justification&0x0E == 0x0E && (oldPrevJustifiedCheckpoint.Epoch+3) == currentEpoch {
state.FinalizedCheckpoint = oldPrevJustifiedCheckpoint
if err := state.SetFinalizedCheckpoint(oldPrevJustifiedCheckpoint); err != nil {
return nil, err
}
}
// 2nd/3rd (0b0110) most recent epochs are justified, the 2nd using the 3rd as source.
if justification&0x06 == 0x06 && (oldPrevJustifiedCheckpoint.Epoch+2) == currentEpoch {
state.FinalizedCheckpoint = oldPrevJustifiedCheckpoint
if err := state.SetFinalizedCheckpoint(oldPrevJustifiedCheckpoint); err != nil {
return nil, err
}
}
// 1st/2nd/3rd (0b0111) most recent epochs are justified, the 1st using the 3rd as source.
if justification&0x07 == 0x07 && (oldCurrJustifiedCheckpoint.Epoch+2) == currentEpoch {
state.FinalizedCheckpoint = oldCurrJustifiedCheckpoint
if err := state.SetFinalizedCheckpoint(oldCurrJustifiedCheckpoint); err != nil {
return nil, err
}
}
// The 1st/2nd (0b0011) most recent epochs are justified, the 1st using the 2nd as source
if justification&0x03 == 0x03 && (oldCurrJustifiedCheckpoint.Epoch+1) == currentEpoch {
state.FinalizedCheckpoint = oldCurrJustifiedCheckpoint
if err := state.SetFinalizedCheckpoint(oldCurrJustifiedCheckpoint); err != nil {
return nil, err
}
}
return state, nil
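A hedged sketch of the setter-based bit handling introduced above: JustificationBits() is treated here as handing back a value that must be written back with SetJustificationBits, rather than mutated in place as the old field access allowed. markCurrentEpochJustified is an illustrative helper only.

func markCurrentEpochJustified(state *stateTrie.BeaconState) error {
	bits := state.JustificationBits()
	bits.SetBitAt(0, true)
	return state.SetJustificationBits(bits)
}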

View File

@@ -7,6 +7,7 @@ import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -18,7 +19,7 @@ func TestProcessJustificationAndFinalizationPreCompute_ConsecutiveEpochs(t *test
for i := 0; i < len(blockRoots); i++ {
blockRoots[i] = []byte{byte(i)}
}
state := &pb.BeaconState{
base := &pb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch*2 + 1,
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
@@ -34,30 +35,35 @@ func TestProcessJustificationAndFinalizationPreCompute_ConsecutiveEpochs(t *test
Balances: []uint64{a, a, a, a}, // validator total balance should be 128000000000
BlockRoots: blockRoots,
}
state, err := beaconstate.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
attestedBalance := 4 * e * 3 / 2
b := &precompute.Balance{PrevEpochTargetAttesters: attestedBalance}
newState, err := precompute.ProcessJustificationAndFinalizationPreCompute(state, b)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(newState.CurrentJustifiedCheckpoint.Root, []byte{byte(64)}) {
rt := [32]byte{byte(64)}
if !bytes.Equal(newState.CurrentJustifiedCheckpoint().Root, rt[:]) {
t.Errorf("Wanted current justified root: %v, got: %v",
[]byte{byte(64)}, newState.CurrentJustifiedCheckpoint.Root)
[]byte{byte(64)}, newState.CurrentJustifiedCheckpoint().Root)
}
if newState.CurrentJustifiedCheckpoint.Epoch != 2 {
if newState.CurrentJustifiedCheckpoint().Epoch != 2 {
t.Errorf("Wanted justified epoch: %d, got: %d",
2, newState.CurrentJustifiedCheckpoint.Epoch)
2, newState.CurrentJustifiedCheckpoint().Epoch)
}
if newState.PreviousJustifiedCheckpoint.Epoch != 0 {
if newState.PreviousJustifiedCheckpoint().Epoch != 0 {
t.Errorf("Wanted previous justified epoch: %d, got: %d",
0, newState.PreviousJustifiedCheckpoint.Epoch)
0, newState.PreviousJustifiedCheckpoint().Epoch)
}
if !bytes.Equal(newState.FinalizedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
if !bytes.Equal(newState.FinalizedCheckpoint().Root, params.BeaconConfig().ZeroHash[:]) {
t.Errorf("Wanted current finalized root: %v, got: %v",
params.BeaconConfig().ZeroHash, newState.FinalizedCheckpoint.Root)
params.BeaconConfig().ZeroHash, newState.FinalizedCheckpoint().Root)
}
if newState.FinalizedCheckpoint.Epoch != 0 {
t.Errorf("Wanted finalized epoch: 0, got: %d", newState.FinalizedCheckpoint.Epoch)
if newState.FinalizedCheckpointEpoch() != 0 {
t.Errorf("Wanted finalized epoch: 0, got: %d", newState.FinalizedCheckpointEpoch())
}
}
@@ -68,7 +74,7 @@ func TestProcessJustificationAndFinalizationPreCompute_JustifyCurrentEpoch(t *te
for i := 0; i < len(blockRoots); i++ {
blockRoots[i] = []byte{byte(i)}
}
state := &pb.BeaconState{
base := &pb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch*2 + 1,
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
@@ -84,30 +90,35 @@ func TestProcessJustificationAndFinalizationPreCompute_JustifyCurrentEpoch(t *te
Balances: []uint64{a, a, a, a}, // validator total balance should be 128000000000
BlockRoots: blockRoots,
}
state, err := beaconstate.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
attestedBalance := 4 * e * 3 / 2
b := &precompute.Balance{PrevEpochTargetAttesters: attestedBalance}
newState, err := precompute.ProcessJustificationAndFinalizationPreCompute(state, b)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(newState.CurrentJustifiedCheckpoint.Root, []byte{byte(64)}) {
rt := [32]byte{byte(64)}
if !bytes.Equal(newState.CurrentJustifiedCheckpoint().Root, rt[:]) {
t.Errorf("Wanted current justified root: %v, got: %v",
[]byte{byte(64)}, newState.CurrentJustifiedCheckpoint.Root)
[]byte{byte(64)}, newState.CurrentJustifiedCheckpoint().Root)
}
if newState.CurrentJustifiedCheckpoint.Epoch != 2 {
if newState.CurrentJustifiedCheckpoint().Epoch != 2 {
t.Errorf("Wanted justified epoch: %d, got: %d",
2, newState.CurrentJustifiedCheckpoint.Epoch)
2, newState.CurrentJustifiedCheckpoint().Epoch)
}
if newState.PreviousJustifiedCheckpoint.Epoch != 0 {
if newState.PreviousJustifiedCheckpoint().Epoch != 0 {
t.Errorf("Wanted previous justified epoch: %d, got: %d",
0, newState.PreviousJustifiedCheckpoint.Epoch)
0, newState.PreviousJustifiedCheckpoint().Epoch)
}
if !bytes.Equal(newState.FinalizedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
if !bytes.Equal(newState.FinalizedCheckpoint().Root, params.BeaconConfig().ZeroHash[:]) {
t.Errorf("Wanted current finalized root: %v, got: %v",
params.BeaconConfig().ZeroHash, newState.FinalizedCheckpoint.Root)
params.BeaconConfig().ZeroHash, newState.FinalizedCheckpoint().Root)
}
if newState.FinalizedCheckpoint.Epoch != 0 {
t.Errorf("Wanted finalized epoch: 0, got: %d", newState.FinalizedCheckpoint.Epoch)
if newState.FinalizedCheckpointEpoch() != 0 {
t.Errorf("Wanted finalized epoch: 0, got: %d", newState.FinalizedCheckpointEpoch())
}
}
@@ -118,7 +129,7 @@ func TestProcessJustificationAndFinalizationPreCompute_JustifyPrevEpoch(t *testi
for i := 0; i < len(blockRoots); i++ {
blockRoots[i] = []byte{byte(i)}
}
state := &pb.BeaconState{
base := &pb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch*2 + 1,
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 0,
@@ -133,29 +144,34 @@ func TestProcessJustificationAndFinalizationPreCompute_JustifyPrevEpoch(t *testi
Balances: []uint64{a, a, a, a}, // validator total balance should be 128000000000
BlockRoots: blockRoots, FinalizedCheckpoint: &ethpb.Checkpoint{},
}
state, err := beaconstate.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
attestedBalance := 4 * e * 3 / 2
b := &precompute.Balance{PrevEpochTargetAttesters: attestedBalance}
newState, err := precompute.ProcessJustificationAndFinalizationPreCompute(state, b)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(newState.CurrentJustifiedCheckpoint.Root, []byte{byte(64)}) {
rt := [32]byte{byte(64)}
if !bytes.Equal(newState.CurrentJustifiedCheckpoint().Root, rt[:]) {
t.Errorf("Wanted current justified root: %v, got: %v",
[]byte{byte(64)}, newState.CurrentJustifiedCheckpoint.Root)
[]byte{byte(64)}, newState.CurrentJustifiedCheckpoint().Root)
}
if newState.PreviousJustifiedCheckpoint.Epoch != 0 {
if newState.PreviousJustifiedCheckpoint().Epoch != 0 {
t.Errorf("Wanted previous justified epoch: %d, got: %d",
0, newState.PreviousJustifiedCheckpoint.Epoch)
0, newState.PreviousJustifiedCheckpoint().Epoch)
}
if newState.CurrentJustifiedCheckpoint.Epoch != 2 {
if newState.CurrentJustifiedCheckpoint().Epoch != 2 {
t.Errorf("Wanted justified epoch: %d, got: %d",
2, newState.CurrentJustifiedCheckpoint.Epoch)
2, newState.CurrentJustifiedCheckpoint().Epoch)
}
if !bytes.Equal(newState.FinalizedCheckpoint.Root, params.BeaconConfig().ZeroHash[:]) {
if !bytes.Equal(newState.FinalizedCheckpoint().Root, params.BeaconConfig().ZeroHash[:]) {
t.Errorf("Wanted current finalized root: %v, got: %v",
params.BeaconConfig().ZeroHash, newState.FinalizedCheckpoint.Root)
params.BeaconConfig().ZeroHash, newState.FinalizedCheckpoint().Root)
}
if newState.FinalizedCheckpoint.Epoch != 0 {
t.Errorf("Wanted finalized epoch: 0, got: %d", newState.FinalizedCheckpoint.Epoch)
if newState.FinalizedCheckpointEpoch() != 0 {
t.Errorf("Wanted finalized epoch: 0, got: %d", newState.FinalizedCheckpointEpoch())
}
}

View File

@@ -4,7 +4,7 @@ import (
"context"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/params"
"go.opencensus.io/trace"
)
@@ -12,40 +12,40 @@ import (
// New gets called at the beginning of process epoch cycle to return
// pre computed instances of validators attesting records and total
// balances attested in an epoch.
func New(ctx context.Context, state *pb.BeaconState) ([]*Validator, *Balance) {
func New(ctx context.Context, state *stateTrie.BeaconState) ([]*Validator, *Balance) {
ctx, span := trace.StartSpan(ctx, "precomputeEpoch.New")
defer span.End()
vp := make([]*Validator, len(state.Validators))
vp := make([]*Validator, state.NumValidators())
bp := &Balance{}
currentEpoch := helpers.CurrentEpoch(state)
prevEpoch := helpers.PrevEpoch(state)
for i, v := range state.Validators {
state.ReadFromEveryValidator(func(idx int, val *stateTrie.ReadOnlyValidator) error {
// Was validator withdrawable or slashed
withdrawable := currentEpoch >= v.WithdrawableEpoch
withdrawable := currentEpoch >= val.WithdrawableEpoch()
p := &Validator{
IsSlashed: v.Slashed,
IsSlashed: val.Slashed(),
IsWithdrawableCurrentEpoch: withdrawable,
CurrentEpochEffectiveBalance: v.EffectiveBalance,
CurrentEpochEffectiveBalance: val.EffectiveBalance(),
}
// Was validator active current epoch
if helpers.IsActiveValidator(v, currentEpoch) {
if helpers.IsActiveValidatorUsingTrie(val, currentEpoch) {
p.IsActiveCurrentEpoch = true
bp.CurrentEpoch += v.EffectiveBalance
bp.CurrentEpoch += val.EffectiveBalance()
}
// Was validator active previous epoch
if helpers.IsActiveValidator(v, prevEpoch) {
if helpers.IsActiveValidatorUsingTrie(val, prevEpoch) {
p.IsActivePrevEpoch = true
bp.PrevEpoch += v.EffectiveBalance
bp.PrevEpoch += val.EffectiveBalance()
}
// Set inclusion slot and inclusion distance to be max, they will be compared and replaced
// with the lower values
p.InclusionSlot = params.BeaconConfig().FarFutureEpoch
p.InclusionDistance = params.BeaconConfig().FarFutureEpoch
vp[i] = p
}
vp[idx] = p
return nil
})
return vp, bp
}
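A hedged sketch of one way the refactored precompute pieces in this compare chain together, in the spec's justification-before-rewards order; every call below appears in this compare with the signature used here, while runPrecomputeEpoch itself is illustrative.

func runPrecomputeEpoch(ctx context.Context, st *stateTrie.BeaconState) (*stateTrie.BeaconState, error) {
	vp, bp := New(ctx, st)
	vp, bp, err := ProcessAttestations(ctx, st, vp, bp)
	if err != nil {
		return nil, err
	}
	st, err = ProcessJustificationAndFinalizationPreCompute(st, bp)
	if err != nil {
		return nil, err
	}
	return ProcessRewardsAndPenaltiesPrecompute(st, bp, vp)
}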

View File

@@ -7,13 +7,14 @@ import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
)
func TestNew(t *testing.T) {
ffe := params.BeaconConfig().FarFutureEpoch
s := &pb.BeaconState{
s, err := beaconstate.InitializeFromProto(&pb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch,
// Validator 0 is slashed
// Validator 1 is withdrawable
@@ -25,6 +26,9 @@ func TestNew(t *testing.T) {
{WithdrawableEpoch: ffe, ExitEpoch: ffe, EffectiveBalance: 100},
{WithdrawableEpoch: ffe, ExitEpoch: 1, EffectiveBalance: 100},
},
})
if err != nil {
t.Fatal(err)
}
e := params.BeaconConfig().FarFutureEpoch
v, b := precompute.New(context.Background(), s)

View File

@@ -2,22 +2,28 @@ package precompute
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/mathutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
// ProcessRewardsAndPenaltiesPrecompute processes the rewards and penalties of individual validators.
// This is an optimized version that accepts precomputed validator attesting records and total epoch balances.
func ProcessRewardsAndPenaltiesPrecompute(state *pb.BeaconState, bp *Balance, vp []*Validator) (*pb.BeaconState, error) {
func ProcessRewardsAndPenaltiesPrecompute(
state *stateTrie.BeaconState,
bp *Balance,
vp []*Validator,
) (*stateTrie.BeaconState, error) {
// Can't process rewards and penalties in genesis epoch.
if helpers.CurrentEpoch(state) == 0 {
return state, nil
}
numOfVals := state.NumValidators()
// Guard against an out-of-bounds using validator balance precompute.
if len(vp) != len(state.Validators) || len(vp) != len(state.Balances) {
if len(vp) != numOfVals || len(vp) != state.BalancesLength() {
return state, errors.New("precomputed registries not the same length as state registries")
}
@@ -29,18 +35,34 @@ func ProcessRewardsAndPenaltiesPrecompute(state *pb.BeaconState, bp *Balance, vp
if err != nil {
return nil, errors.Wrap(err, "could not get attestation delta")
}
for i := 0; i < len(state.Validators); i++ {
state = helpers.IncreaseBalance(state, uint64(i), attsRewards[i]+proposerRewards[i])
state = helpers.DecreaseBalance(state, uint64(i), attsPenalties[i])
for i := 0; i < numOfVals; i++ {
vp[i].BeforeEpochTransitionBalance, err = state.BalanceAtIndex(uint64(i))
if err != nil {
return nil, errors.Wrap(err, "could not get validator balance before epoch")
}
if err := helpers.IncreaseBalance(state, uint64(i), attsRewards[i]+proposerRewards[i]); err != nil {
return nil, err
}
if err := helpers.DecreaseBalance(state, uint64(i), attsPenalties[i]); err != nil {
return nil, err
}
vp[i].AfterEpochTransitionBalance, err = state.BalanceAtIndex(uint64(i))
if err != nil {
return nil, errors.Wrap(err, "could not get validator balance after epoch")
}
}
return state, nil
}
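With Before/AfterEpochTransitionBalance now captured around the reward and penalty application above, the net effect of a transition on each validator can be read straight from the precompute record; epochDelta is an illustrative helper only.

func epochDelta(v *Validator) int64 {
	// Positive means the validator gained balance over the epoch transition,
	// negative means penalties outweighed rewards.
	return int64(v.AfterEpochTransitionBalance) - int64(v.BeforeEpochTransitionBalance)
}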
// This computes the rewards and penalties differences for individual validators based on the
// voting records.
func attestationDeltas(state *pb.BeaconState, bp *Balance, vp []*Validator) ([]uint64, []uint64, error) {
rewards := make([]uint64, len(state.Validators))
penalties := make([]uint64, len(state.Validators))
func attestationDeltas(state *stateTrie.BeaconState, bp *Balance, vp []*Validator) ([]uint64, []uint64, error) {
numOfVals := state.NumValidators()
rewards := make([]uint64, numOfVals)
penalties := make([]uint64, numOfVals)
for i, v := range vp {
rewards[i], penalties[i] = attestationDelta(state, bp, v)
@@ -48,9 +70,9 @@ func attestationDeltas(state *pb.BeaconState, bp *Balance, vp []*Validator) ([]u
return rewards, penalties, nil
}
func attestationDelta(state *pb.BeaconState, bp *Balance, v *Validator) (uint64, uint64) {
func attestationDelta(state *stateTrie.BeaconState, bp *Balance, v *Validator) (uint64, uint64) {
eligible := v.IsActivePrevEpoch || (v.IsSlashed && !v.IsWithdrawableCurrentEpoch)
if !eligible {
if !eligible || bp.CurrentEpoch == 0 {
return 0, 0
}
@@ -84,7 +106,8 @@ func attestationDelta(state *pb.BeaconState, bp *Balance, v *Validator) (uint64,
}
// Process finality delay penalty
finalityDelay := e - state.FinalizedCheckpoint.Epoch
finalizedEpoch := state.FinalizedCheckpointEpoch()
finalityDelay := e - finalizedEpoch
if finalityDelay > params.BeaconConfig().MinEpochsToInactivityPenalty {
p += params.BeaconConfig().BaseRewardsPerEpoch * br
if !v.IsPrevEpochTargetAttester {
@@ -96,8 +119,9 @@ func attestationDelta(state *pb.BeaconState, bp *Balance, v *Validator) (uint64,
// This computes the rewards and penalties differences for individual validators based on the
// proposer inclusion records.
func proposerDeltaPrecompute(state *pb.BeaconState, bp *Balance, vp []*Validator) ([]uint64, error) {
rewards := make([]uint64, len(state.Validators))
func proposerDeltaPrecompute(state *stateTrie.BeaconState, bp *Balance, vp []*Validator) ([]uint64, error) {
numofVals := state.NumValidators()
rewards := make([]uint64, numofVals)
totalBalance := bp.CurrentEpoch

View File

@@ -8,6 +8,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -15,7 +16,7 @@ import (
func TestProcessRewardsAndPenaltiesPrecompute(t *testing.T) {
e := params.BeaconConfig().SlotsPerEpoch
validatorCount := uint64(2048)
state := buildState(e+3, validatorCount)
base := buildState(e+3, validatorCount)
atts := make([]*pb.PendingAttestation, 3)
for i := 0; i < len(atts); i++ {
atts[i] = &pb.PendingAttestation{
@@ -27,10 +28,15 @@ func TestProcessRewardsAndPenaltiesPrecompute(t *testing.T) {
InclusionDelay: 1,
}
}
state.PreviousEpochAttestations = atts
base.PreviousEpochAttestations = atts
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
vp, bp := New(context.Background(), state)
vp, bp, err := ProcessAttestations(context.Background(), state, vp, bp)
vp, bp, err = ProcessAttestations(context.Background(), state, vp, bp)
if err != nil {
t.Fatal(err)
}
@@ -42,38 +48,48 @@ func TestProcessRewardsAndPenaltiesPrecompute(t *testing.T) {
// Indices that voted everything except for head, lost a bit money
wanted := uint64(31999810265)
if state.Balances[4] != wanted {
if state.Balances()[4] != wanted {
t.Errorf("wanted balance: %d, got: %d",
wanted, state.Balances[4])
wanted, state.Balances()[4])
}
// Indices that did not vote, lost more money
wanted = uint64(31999873505)
if state.Balances[0] != wanted {
if state.Balances()[0] != wanted {
t.Errorf("wanted balance: %d, got: %d",
wanted, state.Balances[0])
wanted, state.Balances()[0])
}
}
func TestAttestationDeltaPrecompute(t *testing.T) {
e := params.BeaconConfig().SlotsPerEpoch
validatorCount := uint64(2048)
state := buildState(e+2, validatorCount)
base := buildState(e+2, validatorCount)
atts := make([]*pb.PendingAttestation, 3)
var emptyRoot [32]byte
for i := 0; i < len(atts); i++ {
atts[i] = &pb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{},
Source: &ethpb.Checkpoint{},
Target: &ethpb.Checkpoint{
Root: emptyRoot[:],
},
Source: &ethpb.Checkpoint{
Root: emptyRoot[:],
},
BeaconBlockRoot: emptyRoot[:],
},
AggregationBits: bitfield.Bitlist{0xC0, 0xC0, 0xC0, 0xC0, 0x01},
InclusionDelay: 1,
}
}
state.PreviousEpochAttestations = atts
base.PreviousEpochAttestations = atts
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
vp, bp := New(context.Background(), state)
vp, bp, err := ProcessAttestations(context.Background(), state, vp, bp)
vp, bp, err = ProcessAttestations(context.Background(), state, vp, bp)
if err != nil {
t.Fatal(err)
}
@@ -91,7 +107,7 @@ func TestAttestationDeltaPrecompute(t *testing.T) {
t.Fatal(err)
}
attestedIndices := []uint64{100, 106, 196, 641, 654, 1606}
attestedIndices := []uint64{55, 1339, 1746, 1811, 1569, 1413}
for _, i := range attestedIndices {
base, err := epoch.BaseReward(state, i)
if err != nil {
@@ -104,7 +120,7 @@ func TestAttestationDeltaPrecompute(t *testing.T) {
proposerReward := base / params.BeaconConfig().ProposerRewardQuotient
wanted += (base - proposerReward) * params.BeaconConfig().MinAttestationInclusionDelay
if rewards[i] != wanted {
t.Errorf("Wanted reward balance %d, got %d", wanted, rewards[i])
t.Errorf("Wanted reward balance %d, got %d for validator with index %d", wanted, rewards[i], i)
}
// Since all these validators attested, they shouldn't get penalized.
if penalties[i] != 0 {
@@ -112,7 +128,7 @@ func TestAttestationDeltaPrecompute(t *testing.T) {
}
}
nonAttestedIndices := []uint64{12, 23, 45, 79}
nonAttestedIndices := []uint64{434, 677, 872, 791}
for _, i := range nonAttestedIndices {
base, err := epoch.BaseReward(state, i)
if err != nil {
@@ -130,6 +146,47 @@ func TestAttestationDeltaPrecompute(t *testing.T) {
}
}
func TestAttestationDeltas_ZeroEpoch(t *testing.T) {
e := params.BeaconConfig().SlotsPerEpoch
validatorCount := uint64(2048)
base := buildState(e+2, validatorCount)
atts := make([]*pb.PendingAttestation, 3)
var emptyRoot [32]byte
for i := 0; i < len(atts); i++ {
atts[i] = &pb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{
Root: emptyRoot[:],
},
Source: &ethpb.Checkpoint{
Root: emptyRoot[:],
},
BeaconBlockRoot: emptyRoot[:],
},
AggregationBits: bitfield.Bitlist{0xC0, 0xC0, 0xC0, 0xC0, 0x01},
InclusionDelay: 1,
}
}
base.PreviousEpochAttestations = atts
state, err := state.InitializeFromProto(base)
if err != nil {
t.Fatal(err)
}
vp, bp := New(context.Background(), state)
vp, bp, err = ProcessAttestations(context.Background(), state, vp, bp)
if err != nil {
t.Fatal(err)
}
bp.CurrentEpoch = 0 // Could cause a divide by zero panic.
_, _, err = attestationDeltas(state, bp, vp)
if err != nil {
t.Fatal(err)
}
}
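TestAttestationDeltas_ZeroEpoch above forces bp.CurrentEpoch to zero to make sure the reward and penalty math cannot divide by zero. One common mitigation for that hazard, echoing how the spec clamps total balances to at least one effective-balance increment, is sketched below; this is only an illustration of the guard the test probes, not necessarily what attestationDeltas does internally:
// Hypothetical guard: never let a zero total balance reach a division.
func clampTotalBalance(total uint64) uint64 {
	if total < params.BeaconConfig().EffectiveBalanceIncrement {
		return params.BeaconConfig().EffectiveBalanceIncrement
	}
	return total
}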
func buildState(slot uint64, validatorCount uint64) *pb.BeaconState {
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {


@@ -1,34 +1,40 @@
package precompute
import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/mathutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
// ProcessSlashingsPrecompute processes the slashed validators during epoch processing.
// This is an optimized version by passing in precomputed total epoch balances.
func ProcessSlashingsPrecompute(state *pb.BeaconState, p *Balance) *pb.BeaconState {
func ProcessSlashingsPrecompute(state *stateTrie.BeaconState, p *Balance) error {
currentEpoch := helpers.CurrentEpoch(state)
exitLength := params.BeaconConfig().EpochsPerSlashingsVector
// Compute the sum of state slashings
slashings := state.Slashings()
totalSlashing := uint64(0)
for _, slashing := range state.Slashings {
for _, slashing := range slashings {
totalSlashing += slashing
}
// Compute slashing for each validator.
for index, validator := range state.Validators {
correctEpoch := (currentEpoch + exitLength/2) == validator.WithdrawableEpoch
if validator.Slashed && correctEpoch {
validatorFunc := func(idx int, val *ethpb.Validator) (bool, error) {
correctEpoch := (currentEpoch + exitLength/2) == val.WithdrawableEpoch
if val.Slashed && correctEpoch {
minSlashing := mathutil.Min(totalSlashing*3, p.CurrentEpoch)
increment := params.BeaconConfig().EffectiveBalanceIncrement
penaltyNumerator := validator.EffectiveBalance / increment * minSlashing
penaltyNumerator := val.EffectiveBalance / increment * minSlashing
penalty := penaltyNumerator / p.CurrentEpoch * increment
state = helpers.DecreaseBalance(state, uint64(index), penalty)
if err := helpers.DecreaseBalance(state, uint64(idx), penalty); err != nil {
return false, err
}
return true, nil
}
return false, nil
}
return state
return state.ApplyToEveryValidator(validatorFunc)
}
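Spelled out, the arithmetic inside validatorFunc charges a slashed validator (whose withdrawable epoch sits exactly half a slashings vector ahead of the current epoch) a penalty of effective_balance / increment * min(3 * total_slashing, total_balance) / total_balance * increment, in integer math, so balances shrink in whole increments. A standalone sketch of just that computation, using only the helpers already imported above and a hypothetical function name:
// Hypothetical standalone form of the penalty arithmetic in validatorFunc above.
func slashingPenalty(effectiveBalance, totalSlashing, totalBalance uint64) uint64 {
	increment := params.BeaconConfig().EffectiveBalanceIncrement
	minSlashing := mathutil.Min(totalSlashing*3, totalBalance)
	penaltyNumerator := effectiveBalance / increment * minSlashing
	return penaltyNumerator / totalBalance * increment
}
With mainnet-style values (a 32e9 Gwei effective balance, a 1e9 Gwei increment, 1e9 Gwei of total slashing and 32e9 Gwei of total balance) this works out to 32 * 3e9 / 32e9 * 1e9 = 3e9 Gwei.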


@@ -6,23 +6,29 @@ import (
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
)
func TestProcessSlashingsPrecompute_NotSlashed(t *testing.T) {
s := &pb.BeaconState{
s, err := beaconstate.InitializeFromProto(&pb.BeaconState{
Slot: 0,
Validators: []*ethpb.Validator{{Slashed: true}},
Balances: []uint64{params.BeaconConfig().MaxEffectiveBalance},
Slashings: []uint64{0, 1e9},
})
if err != nil {
t.Fatal(err)
}
bp := &precompute.Balance{CurrentEpoch: params.BeaconConfig().MaxEffectiveBalance}
newState := precompute.ProcessSlashingsPrecompute(s, bp)
if err := precompute.ProcessSlashingsPrecompute(s, bp); err != nil {
t.Fatal(err)
}
wanted := params.BeaconConfig().MaxEffectiveBalance
if newState.Balances[0] != wanted {
t.Errorf("Wanted slashed balance: %d, got: %d", wanted, newState.Balances[0])
if s.Balances()[0] != wanted {
t.Errorf("Wanted slashed balance: %d, got: %d", wanted, s.Balances()[0])
}
}
@@ -107,13 +113,19 @@ func TestProcessSlashingsPrecompute_SlashedLess(t *testing.T) {
bp := &precompute.Balance{CurrentEpoch: ab}
original := proto.Clone(tt.state)
newState := precompute.ProcessSlashingsPrecompute(tt.state, bp)
state, err := beaconstate.InitializeFromProto(tt.state)
if err != nil {
t.Fatal(err)
}
if err := precompute.ProcessSlashingsPrecompute(state, bp); err != nil {
t.Fatal(err)
}
if newState.Balances[0] != tt.want {
if state.Balances()[0] != tt.want {
t.Errorf(
"ProcessSlashings({%v}) = newState; newState.Balances[0] = %d; wanted %d",
original,
newState.Balances[0],
state.Balances()[0],
tt.want,
)
}


@@ -31,6 +31,10 @@ type Validator struct {
InclusionDistance uint64
// ProposerIndex is the index of proposer at slot where this validator's attestation was included.
ProposerIndex uint64
// BeforeEpochTransitionBalance is the validator balance prior to epoch transition.
BeforeEpochTransitionBalance uint64
// AfterEpochTransitionBalance is the validator balance after epoch transition.
AfterEpochTransitionBalance uint64
}
// Balance stores the pre computation of the total participated balances for a given epoch
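The two new fields snapshot each validator's balance on either side of the epoch transition, so callers can report how much a validator gained or lost without re-reading state. A minimal sketch of such a consumer, assuming the record is referenced as precompute.Validator and using a hypothetical helper name:
// Hypothetical helper: signed Gwei change across the epoch transition for one
// precomputed validator record.
func epochTransitionDelta(v *precompute.Validator) int64 {
	return int64(v.AfterEpochTransitionBalance) - int64(v.BeforeEpochTransitionBalance)
}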


@@ -28,6 +28,7 @@ go_test(
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/params/spectest:go_default_library",
"//shared/testutil:go_default_library",
@@ -58,6 +59,7 @@ go_test(
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/params/spectest:go_default_library",
"//shared/testutil:go_default_library",


@@ -5,7 +5,7 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -24,7 +24,7 @@ func runFinalUpdatesTests(t *testing.T, config string) {
}
}
func processFinalUpdatesWrapper(t *testing.T, state *pb.BeaconState) (*pb.BeaconState, error) {
func processFinalUpdatesWrapper(t *testing.T, state *beaconstate.BeaconState) (*beaconstate.BeaconState, error) {
state, err := epoch.ProcessFinalUpdates(state)
if err != nil {
t.Fatalf("could not process final updates: %v", err)


@@ -6,7 +6,7 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -26,7 +26,7 @@ func runJustificationAndFinalizationTests(t *testing.T, config string) {
}
}
func processJustificationAndFinalizationPrecomputeWrapper(t *testing.T, state *pb.BeaconState) (*pb.BeaconState, error) {
func processJustificationAndFinalizationPrecomputeWrapper(t *testing.T, state *state.BeaconState) (*state.BeaconState, error) {
ctx := context.Background()
vp, bp := precompute.New(ctx, state)
_, bp, err := precompute.ProcessAttestations(ctx, state, vp, bp)


@@ -5,7 +5,7 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -24,7 +24,7 @@ func runRegistryUpdatesTests(t *testing.T, config string) {
}
}
func processRegistryUpdatesWrapper(t *testing.T, state *pb.BeaconState) (*pb.BeaconState, error) {
func processRegistryUpdatesWrapper(t *testing.T, state *beaconstate.BeaconState) (*beaconstate.BeaconState, error) {
state, err := epoch.ProcessRegistryUpdates(state)
if err != nil {
t.Fatalf("could not process registry updates: %v", err)


@@ -7,7 +7,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch/precompute"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/params/spectest"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -27,7 +27,7 @@ func runSlashingsTests(t *testing.T, config string) {
}
}
func processSlashingsWrapper(t *testing.T, state *pb.BeaconState) (*pb.BeaconState, error) {
func processSlashingsWrapper(t *testing.T, state *beaconstate.BeaconState) (*beaconstate.BeaconState, error) {
state, err := epoch.ProcessSlashings(state)
if err != nil {
t.Fatalf("could not process slashings: %v", err)
@@ -35,7 +35,7 @@ func processSlashingsWrapper(t *testing.T, state *pb.BeaconState) (*pb.BeaconSta
return state, nil
}
func processSlashingsPrecomputeWrapper(t *testing.T, state *pb.BeaconState) (*pb.BeaconState, error) {
func processSlashingsPrecomputeWrapper(t *testing.T, state *beaconstate.BeaconState) (*beaconstate.BeaconState, error) {
ctx := context.Background()
vp, bp := precompute.New(ctx, state)
_, bp, err := precompute.ProcessAttestations(ctx, state, vp, bp)
@@ -43,6 +43,5 @@ func processSlashingsPrecomputeWrapper(t *testing.T, state *pb.BeaconState) (*pb
t.Fatal(err)
}
state = precompute.ProcessSlashingsPrecompute(state, bp)
return state, nil
return state, precompute.ProcessSlashingsPrecompute(state, bp)
}


@@ -1,37 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["validation.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/core/exit",
visibility = [
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bls:go_default_library",
"//shared/mathutil:go_default_library",
"//shared/params:go_default_library",
"//shared/roughtime:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["validation_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)

Some files were not shown because too many files have changed in this diff.