Compare commits

636 Commits

Author SHA1 Message Date
Preston Van Loon
2d98902eed Require state to exist when updating finalized checkpoint (#3843)
* require state to exist before saving finalized checkpoint

* Add comment

* gaz

* fix test

* fix test
2019-10-23 22:50:33 -07:00
terence tsao
ae1e435231 Guard against deleting genesis and finalized state in DB (#3842)
* Don't delete boundary state

* Lint

* Test

* Feedback

* Batch and better comment

* Fix test

* zzzzzz

* rmStatesOlderThanLastFinalized
2019-10-23 22:39:41 -07:00
terence tsao
d5547355d5 Update READINGS.md (#3832) 2019-10-24 10:11:03 +08:00
Preston Van Loon
544ce2b4ed add peer to span and log (#3838) 2019-10-23 18:35:08 -07:00
terence tsao
9a0fb5dca1 Delete DB states older than finalized checkpoint (#3824) 2019-10-23 08:30:21 -07:00
Preston Van Loon
32271aeae1 Add script to upload github assets (#3822) 2019-10-23 22:21:35 +08:00
Preston Van Loon
2fefe6d14b Update TESTNET.md (#3831) 2019-10-23 17:53:33 +08:00
Preston Van Loon
27bd188ea8 Add a tool to extract genesis.ssz from existing database (#3827)
* Add a tool to extract genesis.ssz from existing database

* close db

* panic on nil
2019-10-22 22:43:41 -07:00
Preston Van Loon
c2e7aa7a39 Update TESTNET.md (#3828) 2019-10-22 19:17:05 -07:00
terence tsao
be5451abef Removed extra save genesis block root (#3829) 2019-10-22 17:49:18 -07:00
Preston Van Loon
e4dafd8475 Minor edit (#3826) 2019-10-22 16:51:32 -07:00
Preston Van Loon
7b8331c607 Add testnet markdown instructions (#3825) 2019-10-22 16:45:27 -07:00
Preston Van Loon
552baf1c21 Revert "Fix Round Robin Test (#3775)" (#3823) 2019-10-22 14:44:52 -07:00
Preston Van Loon
e37e757226 Limit init sync to 15 peers (#3821) 2019-10-22 11:43:07 -07:00
terence tsao
f86b7ac62d Add State Batch Delete for DB (#3820) 2019-10-22 11:13:41 -07:00
Nishant Das
a440c32155 Blacklist Peer if they fail Handshake too many times (#3815)
* add new changes

* add changes to set

* Revert "add changes to set"

This reverts commit 07fd48c15f.

* Revert "Revert "add changes to set""

This reverts commit 6b84a6017e.

* new changes

* add blacklist

* gaz

* add test

* fix visibility

* Update beacon-chain/sync/rpc_status_test.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* preston's review
2019-10-21 17:43:44 -07:00
Preston Van Loon
1138c2cb51 Gateway registers gRPC gateway handlers (#3818) 2019-10-21 17:15:57 -07:00
Nishant Das
c9a7a9c709 Attestation is Decoupled From Proposal (#3800)
* fix

* add correct test

* add helper

* gaz

* add helper

* gaz
2019-10-22 07:15:32 +08:00
terence tsao
97905c3e79 Refactor operation service.go into smaller files (#3808)
* Different files

* Preston's feedback
2019-10-21 09:37:50 -07:00
Preston Van Loon
d8e70fe83c do not use multiple read locks (#3812) 2019-10-20 22:37:39 -07:00
Marius Kjærstad
635e20529a Updated why-combining-sharding-and-casper link in validator/README.md (#3810)
2019-10-20 19:43:30 -07:00
Jim McDonald
053fa5e616 Remove hard-coded config values (#3762)
* Remove hard-coded config values

* Partial reversion to use hard-coded fork version of 0

* Re-add fetching genesis fork version from config
2019-10-19 13:52:34 -07:00
Jim McDonald
5000535907 Waiting condition check prior to waiting log entry (#3802) 2019-10-19 13:35:45 -07:00
Jim McDonald
04113baf9d Generate keys in test (#3807) 2019-10-19 10:29:25 -07:00
terence tsao
1433fab0d4 Use lowest included slot for precompute (#3805)
* Fix attestations

* Fix new

* Testing state transition

* Fix

* Runtime works
2019-10-18 21:47:54 -07:00
shayzluf
23be8419fe Indexed attestations store (#3322) 2019-10-18 20:35:09 -07:00
shayzluf
2437a0e33c Prevent PATH changes from causing bazel rebuild (#3806) 2019-10-19 08:03:40 +05:30
terence tsao
e42af4f11d Check attestations bitfield overlaps (#3804) 2019-10-18 17:41:27 -07:00
terence tsao
a05dca18c7 Add spec tests for process_slashing and process_justification_finalization (#3803)
* Add precompute spec tests

* Gazelle
2019-10-18 09:09:38 -07:00
terence tsao
a41ac6b498 Implement ProcessEpochPrecompute (#3798)
* Implemented ProcessSlashingsPrecompute

* Tests for ProcessSlashingsPrecompute

* Gaz

* Lint

* Feature flags

* Hook in ProcessEpochPrecompute

* Hook it to spec test

* Separate test into its own package to avoid circular dependency

* Lint

* Gazelle

* Preston's feedback

* Nishant's feedback
2019-10-18 16:38:47 +08:00
Preston Van Loon
fb510a3510 Reject blocks from the future in regular sync (#3799)
* reject blocks from the future in syncing

* fix test and gofmt
2019-10-18 11:30:14 +08:00
terence tsao
aedd38092f Process slashing with precompute (#3797)
* Implemented ProcessSlashingsPrecompute

* Tests for ProcessSlashingsPrecompute

* Gaz

* Lint

* Removed comment
2019-10-18 09:10:41 +08:00
terence tsao
89ef6d6648 Process rewards and penalties with precompute (#3793) 2019-10-17 10:54:03 -07:00
Preston Van Loon
fef6b95fed bump fork version (#3795) 2019-10-17 08:50:13 -07:00
Jim McDonald
d4db7a68aa Add note on building standalone binaries (#3758) 2019-10-17 08:42:53 -07:00
terence tsao
921d0a6e7e Process justification with precompute (#3792) 2019-10-17 02:24:13 -07:00
Preston Van Loon
6bf14dedcd Better aggregated attestations pool (#3761)
* WIP of aggregated signatures in DB

* new lines at end

* taking a nap on the plane now

* fix tests

* remove duplication of attestations. so much for that airplane nap lol

* benchmark before flight lands

* gaz

* manual gaz

* fully contained checks

* quick improvement before landing

* new bitlist with fixes

* doesn't need real signatures

* it works, mostly

* print shard too

* some refactoring

* Revert "some refactoring"

This reverts commit 377ce7fbfb.

* Revert "Revert "some refactoring""

This reverts commit b46a458898.

These changes are ok, just need to update the expected values

* fix tests

* lint

* lint

* upstream changes

* fix tests

* what

* resolve TODOs

* gofmt

* revert unrelated pb

* remove debug statement
2019-10-16 23:46:07 -07:00
shayzluf
cde87ae39b slasher todo (#3794)
* add todo

* add issue number
2019-10-17 11:06:55 +05:30
shayzluf
4130c78be7 slasher server (#3596)
* first version of the watchtower api

* service files

* Begin work on grpc server

* More changes to server

* REnames and mock setup

* working test

* merge

* double propose detection test

* nishant review

* todo change

* gaz

* fix service

* gaz

* remove unused import

* gaz

* resolve circular dependency

* resolve circular dependency 2nd try

* remove package

* fix package

* fix test

* added tests

* gaz

* remove status check

* gaz

* remove context

* remove context

* change var name

* moved to rpc dir

* gaz

* remove server code

* gaz

* slasher server

* visibility change

* pb

* service update

* gaz
2019-10-17 08:42:26 +05:30
terence tsao
2d863a1e63 Methods to precompute process_epoch records (#3788) 2019-10-16 17:48:26 -07:00
terence tsao
a62ac97a35 Type for process epoch optimization (#3783) 2019-10-16 11:26:02 -07:00
Preston Van Loon
86a8ec035c Aggregation helper improvements (#3789)
* aggregation helper

* lint
2019-10-16 11:11:58 -07:00
terence tsao
f0944d205d update go_bitfield (#3782) 2019-10-16 08:17:01 -07:00
Jim McDonald
b0eccd24a2 Tidy up logging (#3784) 2019-10-16 06:37:43 -07:00
Jim McDonald
9d441011d7 Remove key duplication (#3763)
* Remove key duplication

* Break out function to allow testing
2019-10-15 11:20:23 -07:00
Nishant Das
f63c12b7b2 change commit (#3781) 2019-10-15 06:37:54 -07:00
Nishant Das
00a5a25323 Fix Round Robin Test (#3775)
* fix round robin test

* add comment
2019-10-15 14:06:44 +08:00
Preston Van Loon
0d1aeeeaf4 Update renovate.json (#3780) 2019-10-14 15:23:20 -07:00
Preston Van Loon
c5d4d5dfce Change renovate to group dependencies (#3776) 2019-10-14 13:56:06 -05:00
Sylvain Laurent
2bd1e54d92 Fix missing parameter in docker command (#3757) 2019-10-12 18:46:10 +09:00
Preston Van Loon
9e6b4d1f29 some fixes for bazel v1 (#3754) 2019-10-12 16:55:56 +09:00
Santiago Rodríguez
1dbb67af81 Improved docker instructions (#3693) 2019-10-12 12:32:12 +09:00
Jim McDonald
aa07843157 Change public key map identifier to byte array (#3716) 2019-10-12 11:22:51 +09:00
terence tsao
707dfca62c Update go-bitfield workspace (#3749) 2019-10-11 12:09:41 +09:00
Preston Van Loon
d4001a8b29 Annotate errors / spans in block processing queue (#3751)
* annotate error in span

* added more annotations and spans to process pending blocks

* use diff

* workspace dep
2019-10-10 17:44:24 +09:00
Nishant Das
964c54f911 Continue Writing to The Stream Despite Failures (#3743)
* don't fail if block doesn't exist

* fix nogo
2019-10-09 14:57:43 +08:00
terence tsao
df80a7d949 update (#3741) 2019-10-09 12:19:52 +09:00
terence tsao
9bf55e53e7 Add context and tracing to attestation pool (#3737)
* Test case for overlapping aggregation bits

* Add ctx and tracing for attestation pool and beyond

* No nil

* Use real ctx

* Use real ctx

* Fix test

* Fix test

* Fix test

* Fix test

* Fixed imports
2019-10-09 06:42:17 +08:00
Jim McDonald
1c4ea5c471 Additional log information for invalid deposits (#3740)
* Additional log information for invalid deposits

* Update field names
2019-10-08 18:28:20 +08:00
Raul Jordan
1a94ef12b9 Ensure Blocks Are Not Duplicated When Saved to DB (#3739)
* dedup

* tests pass when using the fallback
2019-10-08 13:47:48 +09:00
terence tsao
46ecbdc997 Tests for process att with overlapping bits (#3734) 2019-10-08 10:52:07 +09:00
Nishant Das
384fd5336e Use demo config (#3738)
* use demo config

* gaz

* docker dep
2019-10-08 09:00:00 +08:00
Preston Van Loon
4e22f52ab3 Testnet restart and hotfixes (#3736)
* hotfix for round robin, hotfix for grpc recovery

* gofmt

* break

* wrong subtraction

* lint

* testnet fork version update

* ignore relay / DHT protocol not supported error
2019-10-08 07:59:08 +08:00
Raul Jordan
9254ebf3ba new ssz in workspace (#3735) 2019-10-07 21:09:04 +08:00
Nishant Das
cbeedeb5a7 Update Renovate (#3732) 2019-10-07 17:54:41 +09:00
Nishant Das
093c32e229 update repo (#3717) 2019-10-07 15:50:58 +08:00
Preston Van Loon
23764c4640 Abstract verifySlot to helper package (#3731)
* Abstract verifySlot to helper package

* blocks -> slots

* fix test
2019-10-07 16:15:50 +09:00
terence tsao
750bc83369 Clean up feature flag namings (#3715) 2019-10-07 14:11:49 +09:00
Ivan Martinez
14d9a83cda Add Block Generation Util to testutil package (#3674) (#3709) 2019-10-04 16:07:46 -07:00
terence tsao
66dcf2b80d Moved /shared/ test code to a different package (#3714)
* Fixed bytesutil

* Fix featureflag

* Fix hashutil

* Fix interop

* Fix iputils

* Fix mathutils

* Fix messagehandler

* Fix pagination

* Fix params

* Fix sliceutil

* Fix merkletrie
2019-10-04 15:46:49 -07:00
Nishant Das
91cb081b7e fix log (#3713) 2019-10-04 14:54:21 -07:00
Jim McDonald
73ffde869f Tidy up logging (#3711) 2019-10-04 14:11:59 -07:00
Nishant Das
24cbcc552f change to hex (#3712) 2019-10-04 04:36:57 -07:00
Nishant Das
aa819bf5ba Change MultiAddr Conversion Error to Debug (#3702)
* change to debug

* check if IP is nil
2019-10-04 16:05:52 +08:00
terence tsao
273871940c Test code in different packages (#3710)
* Moved a few packagesXXX to test_XXX

* Gaz
2019-10-04 15:41:09 +08:00
Preston Van Loon
20e97bc6c3 remove fluentd timestamp (#3708) 2019-10-03 20:56:10 -07:00
Nishant Das
fddb51fc45 Support Provided Host Addresses (#3707)
* support provided host addresses

* remove log
2019-10-04 11:22:17 +08:00
Nishant Das
50b1d209ab Resubscribe Headers from ETH 1 Chain (#3706)
* resubscribe headers

* Update beacon-chain/powchain/service.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
2019-10-04 10:48:14 +08:00
terence tsao
fb0e504856 Added cov for untested file at chain_info (#3705) 2019-10-03 15:26:24 -07:00
Preston Van Loon
58dbdfb6f5 Remove attestation that fails to verify from queue (#3704) 2019-10-03 13:57:18 -07:00
Preston Van Loon
a6f6bb12fa Add message events to spans for pubsub (#3703) 2019-10-03 09:33:16 -07:00
Preston Van Loon
4bee60826d add flag to enable BLS pubkey cache (#3699) 2019-10-03 08:26:15 -07:00
Nishant Das
6d2ce49c06 Send Status For Already Connected Peers (#3698)
* do not reject if peer is already in status map

* add space

* remove space
2019-10-03 06:53:57 -07:00
Preston Van Loon
7a04ff6368 Add database API for creating backups (#3694)
* Save db backup

* Fix DB backup method

* Add backup db webhook

* gaz

* if err != nil

* more verbose filename

* Don't obliterate everything :)
2019-10-03 17:29:49 +08:00
Jim McDonald
f046c77499 Replace junk test data with real values (#3689) 2019-10-02 22:01:03 -07:00
Preston Van Loon
f39f4336a0 Add log when blocks by range request fails (#3695) 2019-10-03 10:52:00 +08:00
Celeste A.S
1064f6ebaf Bazel dependencies tweaked (#3585)
Inclusion of macOS.
2019-10-02 18:51:29 -07:00
Preston Van Loon
33b746e025 ask for blocks from peers in parallel (#3675) 2019-10-02 15:42:26 -05:00
Nishant Das
4daf62fc28 Fix Validator Activation (#3684) 2019-10-02 13:26:11 -07:00
Ivan Martinez
8bab55d88e Add Block Generation Util to testutil package (#3674)
* Create block generation util in testutil

* Gazelle

* Fix deps

* Fix imports

* Change tests to use config and fix integer division

* Remove logs

* Fix build

* Add test to ensure finalization occurs

* Add check for finalization

* Add comment for incrementing the state

* Fix test

* Fix test

* Fix testutil use

* Fix tests

* Change var name

* Add regression test for bug with large validator count

* Fix bazel test
2019-10-02 08:15:40 -07:00
terence tsao
c632b96454 Validator logging improvements (#3661)
* Starting

* Update logging for service

* Update logging for assignment

* Update logging for attest

* Update logging for propose

* Update logging for balance update

* Final touchup

* Fixed test

* Fixed test

* Feedback

* Fix

* Fix all the tests
2019-10-02 12:18:01 +08:00
Jim McDonald
323ee8dfac SetupInitialDeposits() now returns deposit data roots (#3683) 2019-10-02 10:50:34 +08:00
Preston Van Loon
42a2d5c1ee log buf.String() instead of map[reflect.Type]error (#3681) 2019-10-02 08:51:11 +08:00
kilic
d5e02eaa43 Change BLS Pairing Engine (#3670)
* change bls pairing engine

* fix linter warnings

* curve order

* add back spec test

* use only one dep

* fix test

* remove toBytes

* gaz

* add it back

* fix tests

* imports

* imports

* gaz

* remove hash function

* change naming

* preston's comments

* gaz

* fix test failure

* change back

* revert test changes

* gaz
2019-10-02 08:13:59 +08:00
Raul Jordan
2d9550e55c small fix (#3682) 2019-10-01 14:00:24 -07:00
Raul Jordan
d9c0e65cef Improve Beacon Node Logging UX (#3600)
* info logs beacon node improvements

* prom test fixes

* info logging changes

* wrapped up node info logging

* changed to debug level

* warn logs taken care of

* Terence suggestion

* warn spacing

* better logging in initial sync

* debug level standardized

* complete debug standardization

* participation at epoch end

* fix archive tests

* even more test fixes

* prom test

* ops test

* powtest

* rpc sync test

* rem part

* log formatting
2019-10-01 15:05:17 -05:00
Preston Van Loon
f78d6e66b3 only enable libp2p logs when trace level logging (#3680) 2019-10-01 08:38:21 -07:00
Nishant Das
87f0581742 Add Error to Message validation (#3678)
* change functions

* fixing tests

* fixed all tests

* format

* fix test failures

* change to error
2019-10-01 08:13:04 -07:00
Raul Jordan
3d37a4e038 Optimize Domain Data RPC Request (#3671)
* gaz

* fix broken build

* fix broken test

* fix broken test
2019-10-01 09:36:36 -05:00
Nishant Das
2a5046fbc9 revert (#3679) 2019-10-01 06:54:46 -07:00
terence tsao
98f3efffea Add process block with full attestations test (#3676) 2019-09-30 20:41:51 -07:00
Jim McDonald
628da919a4 Use deterministic method to create test deposits (#3639)
* Use deterministic method to create test deposits

* More descriptive failure messages for tests
2019-10-01 08:56:26 +08:00
terence tsao
944d3b16fd Batch save attestations in state transition (#3672)
* Batch save attestations

* Update test

* Revert config
2019-09-30 16:11:59 -07:00
Raul Jordan
8d215feb25 Large Prysm Performance Improvements (#3622)
* skip bls verification with a feature flag at runtime

* gazelle

* more bls mocks

* block roots efficiency

* db block roots now does not show up on the flame graphs

* save validator latest votes batch

* batch save att

* misc improvements to pprof

* import

* include validator index cache

* error if no filter criteria

* resolved comments

* build fix

* lint

* remove delay global

* attestation and block test fixes

* preston suggestions

* fix db tests

* fix missing broken tests

* tests pass
2019-09-30 15:45:53 -05:00
Alex
6a203dce81 remove roughtime servers hardcode (#3666) 2019-09-30 15:30:45 -05:00
terence tsao
4f1d2868f8 Save head if diff than prev saved head (#3669)
* Save head if it's diff

* New test for process attestation

* New test for process block

* Fixed loggings

* Fixed all the tests
2019-09-30 15:15:56 -05:00
Preston Van Loon
a2a66e7cb7 More instrumentation in state transitions (#3667)
* more instrumentation in state transitions

* gofmt gaz

* more

* more
2019-09-30 11:28:41 -07:00
Nishant Das
8ece8fb44b Expose DB Metrics (#3663)
* add in bolt metrics

* unregister in db teardown

* unregister in Close()

* fix clear db case

* fix test error

* gaz

* remove unregister

* remove gaz
2019-09-30 10:24:47 -05:00
Preston Van Loon
22ddcb253d Add metrics for p2p failures (#3662) 2019-09-29 22:23:19 -07:00
Raul Jordan
23c3138c57 workspace (#3660) 2019-09-29 22:34:33 -05:00
Preston Van Loon
2dd71c076e Bulk update renovate (#3659)
* Update libp2p

* Update com_google_protobuf commit hash to 97b1802

* Update graknlabs_bazel_distribution commit hash to 962f3a7

* Update io_kubernetes_build commit hash to b6d1648

* Update dependency build_bazel_rules_nodejs to v0.38.0

* Update dependency com_github_paulbellamy_ratecounter to v0.2.0

* Update libp2p

* Update dependency com_github_go_stack_stack to v1

* Update dependency com_github_karlseguin_ccache to v2

* Update dependency com_github_rs_cors to v1

* Update dependency io_k8s_client_go to v12

* Update dependency io_k8s_klog to v1

* Update dependency io_k8s_sigs_yaml to v1

* minor build fixes
2019-09-29 21:54:54 -05:00
Preston Van Loon
7c6270143f remove dep on github.com/elastic/gosigar (#3643) 2019-09-29 14:36:15 -07:00
terence tsao
5675038e5d Add active indices functionality to cache (#3629) 2019-09-29 12:10:11 -07:00
Preston Van Loon
571efc11d1 add error spans, interceptors (#3641) 2019-09-29 11:48:55 -07:00
Jim McDonald
1c51b509ad Update abigen command (#3640) 2019-09-29 08:57:23 -07:00
Nishant Das
0e8828abd3 update to new version (#3637) 2019-09-29 22:21:07 +08:00
Preston Van Loon
7fe65bb53b only report reg sync unhealthy after chainstart (#3635) 2019-09-29 14:42:09 +08:00
Preston Van Loon
5a92725329 Fix init sync race condition (#3633)
* fix init sync race condition

* grab subscription before checking
2019-09-28 18:42:44 -07:00
Preston Van Loon
508fac65be make the print the same number of characters (#3626) 2019-09-28 09:41:02 +08:00
terence tsao
4c8269aca3 Part 4 of caching improvement - Use Cache (#3625) 2019-09-27 15:56:08 -07:00
Preston Van Loon
00e68c6cc7 Use demo config for accounts create (#3627) 2019-09-27 15:48:49 -07:00
Preston Van Loon
877f596c54 all deposits must be verified (#3624) 2019-09-27 14:18:11 -07:00
terence tsao
d02e73c5fe Feature flag for new caching scheme (#3619) 2019-09-27 13:14:22 -07:00
Preston Van Loon
707a816f2b bootstrap from finalized checkpoint rather than head slot (#3621) 2019-09-27 12:39:32 -07:00
Preston Van Loon
59b4ade50b Report sync unhealthy in the case that the node is still syncing initially (#3623) 2019-09-27 12:30:28 -07:00
Raul Jordan
24df2d3e44 Skip BLS With a Feature Flag at Runtime (#3618)
* skip bls verification with a feature flag at runtime

* gazelle

* more bls mocks
2019-09-27 13:28:43 -05:00
Ivan Martinez
ee837ecbb9 Reorganize State Transition Functions (#3589) 2019-09-27 09:54:03 -07:00
Raul Jordan
4bd2730c5e Batch Deletions for Blocks and Attestations (#3496)
* batch deletions for blocks and attestations

* test for atts delete

* test for blocks delete

* better naming of args in iface methods

* modify a bit

* convert to batch

* blocks batch delete

* batch fixes

* suspecting deadlock

* blocks batch delete tests pass

* more complex test
2019-09-27 11:11:10 -05:00
Jim McDonald
e1e36e1424 Default genesisTime to now when generating a genesis state. (#3615)
* Default genesisTime to now when generating a genesis state.

* Use roughtime for genesis creation timestamp
2019-09-27 10:49:55 -05:00
terence tsao
14bc8d7637 Part 3 of caching improvement - Update cache (#3617)
* Implemented UpdateCommitteeCache in committee.go

* Implemented test for UpdateCommitteeCache

* Updated test to use mainnet config
2019-09-27 10:43:09 -05:00
Jim McDonald
b089cdd216 Allow overwriting of default bootstrap node (#3616)
* Allow overwriting of default bootstrap node

* Update shared/cmd/flags.go

Co-Authored-By: Nishant Das <nish1993@hotmail.com>

* Provide warning at more suitable time
2019-09-27 20:05:16 +08:00
Preston Van Loon
f681bc6867 Change pk bytes maps to 48 bytes (#3613)
* change pk bytes maps to 48 bytes

* test
2019-09-27 14:07:36 +08:00
Nishant Das
1600217eb1 Ask User Before Deleting Chaindata (#3592) 2019-09-26 21:38:18 -07:00
Nishant Das
ddf6f7d4d9 add new bootnode and contract endpoint (#3612) 2019-09-26 21:05:38 -07:00
Raul Jordan
9dc1674417 resolve queue (#3611) 2019-09-26 22:34:51 -05:00
terence tsao
4a73bc13b5 Part 2 of caching improvement - A cache for shuffled indices (#3607)
* Cache for shuffled indices from committee

* Tests

* Lint
2019-09-26 19:51:39 -07:00
Preston Van Loon
90a02a035b add a few logs at start of initial sync (#3608) 2019-09-26 19:17:43 -07:00
Jim McDonald
6a4b46ab0e Tidy up logging in the validator (#3597)
* Tidy up logging in the validator

* Log full public key when validator first initialised

* Use 'validator' rather than 'pubKey' for traces; use full public key
2019-09-27 09:47:03 +08:00
Preston Van Loon
af1301ddcb update rules go to support go 1.13.1 (#3599)
* update rules go to support go 1.13.1

* gazelle update
2019-09-26 18:06:59 -07:00
Preston Van Loon
156e3ca65a grab read lock (#3601) 2019-09-26 17:14:12 -07:00
terence tsao
d7891fca88 Part 1 of caching improvement - ShuffledIndices function (#3605) 2019-09-26 16:45:40 -07:00
terence tsao
b8bd28cca2 Move SplitOffset to Sliceutil (#3606) 2019-09-26 16:25:55 -07:00
Preston Van Loon
32ffb70a1a handle large ranges of skipped slots (#3602) 2019-09-26 17:13:55 -05:00
Raul Jordan
2690c2080d Implement GetValidatorQueue RPC (#3574)
* changes

* active set change

* helpers for active set changes

* include the helpers in archive service

* table driven tests for helpers

* use from archive

* remove item

* properly use the keys in the response

* test for active set changes

* test passing

* test passing no archive

* archive tests completed

* add ethapis latest commit

* begin implementation get validator queue

* include queue

* finish the queue implementation

* consolidated churn limit

* pending active testing

* pending active test

* tests below churn in response

* pubkey

* below churn test

* only fetches below the churn limit

* exit queue churn clip as needed

* full test for  pending active below churn limits

* pending exit test

* pending exit logic

* pending exit below churn test

* all tests done for queue impl

* revert some bad changes

* bug
2019-09-26 16:46:06 -05:00
Nishant Das
9b008522b8 Refactor Validator Start Routine (#3594)
* make demo default

* make minimal config a flag

* lint

* initialize config at the start

* gaz

* make main method cleaner

* remove interop.go

* fix test

* lint

* gaz

* Update validator/accounts/interop.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* fix docker build

* fix docker build
2019-09-26 13:23:25 -05:00
shayzluf
8f0b131631 Slasher grpc service (#3271) 2019-09-26 09:29:10 -07:00
renovate[bot]
a683f4652f Update dependency com_google_cloud_go to v0.46.3 (#3550) 2019-09-26 10:47:33 -05:00
renovate[bot]
5a533f8e4a Update dependency com_github_spf13_pflag to v1 (#3552) 2019-09-26 10:23:10 -05:00
Nishant Das
82ac56d0e7 Fix Interop Readme (#3591)
* remove warning

* specify using ssz
2019-09-26 09:40:56 -05:00
Nishant Das
73938876b1 better log validator votes (#3590) 2019-09-26 06:32:06 -07:00
Preston Van Loon
0a2dfedf0f Log warning with error instead of returning an error for fork choice votes update (#3587) 2019-09-25 20:39:19 -07:00
terence tsao
3ef681e649 Initial sync no verify block (#3586) 2019-09-25 20:32:00 -07:00
renovate[bot]
f52bac7d06 Update libp2p (#3551) 2019-09-25 21:43:58 -05:00
terence tsao
fb74dae835 Remove dead cache (#3584) 2019-09-25 18:23:15 -07:00
Raul Jordan
3a890e70f7 Implement Active Set Changes RPC Method (#3568)
* changes

* active set change

* helpers for active set changes

* include the helpers in archive service

* table driven tests for helpers

* use from archive

* remove item

* properly use the keys in the response

* test for active set changes

* test passing

* test passing no archive

* archive tests completed

* add ethapis latest commit

* fix test
2019-09-25 18:06:02 -05:00
Preston Van Loon
ef6f2a196e Update ssz and do not try to send a nil block in RPC (#3582)
* update ssz and do not try to send a nil block

* update ssz again
2019-09-25 14:05:23 -05:00
Preston Van Loon
ba4f45b180 BLS pubkey from bytes cache (#3583)
* BLS pubkey from bytes cache

* lint
2019-09-25 13:51:14 -05:00
Preston Van Loon
f6a3fcb778 Update service.go (#3581) 2019-09-25 11:01:55 -07:00
Preston Van Loon
b5984af17c update bazel-go-ethereum, delete unused deps (#3580) 2019-09-25 10:30:25 -07:00
Preston Van Loon
5345ddf686 Initial Sync: Round robin (#3538)
* first pass, step 1 works

* naive from finalized to head

* delete commented code

* checkpoint progress on tests

* passing test

* abstract code slightly

* failure cases

* chkpt

* mostly working, missing a single block and having timeout

* passing tests

* comments

* comments

* gaz

* clarify comments

* progress on a few new cases

* add back bootnode query tool

* bootstrap from DHT

* chunked responses in round robin

* fix tests and deadlines

* add basic counter, time estimation

* hello -> handshakes

* show peers in use during sync

* just one last test failure

* only request blocks starting in the finalized epoch for step 1

* revert that

* comment out test and add better commentary

* move requestBlocks out to pointer receiver

* mathutil

* Update beacon-chain/sync/initial-sync/round_robin.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* PR feedback

* PR feedback
2019-09-25 12:00:04 -05:00
Nishant Das
8ce96428b1 Fixes our Attestation Aggregation Issues (#3579) 2019-09-25 07:39:16 -07:00
Raul Jordan
5398faea44 Do Not Archive Active Indices (#3573)
* dont archive active indices

* eliminate logs
2019-09-25 17:18:37 +08:00
terence tsao
6c892dc376 General indices helpers (#3575)
* Implemented Power of 2 helpers

* Test for power of 2 helpers

* Gazelle

* Fmt

* Implemented MerkleTree

* Test for MerkleTree

* Fixed tests

* Implemented ConcatGeneralizedIndices and GeneralizedIndexLength

* Tests for the above

* Benchmarked copy, it's faster

* Implemented rest of the indices helpers

* Tests for indices helpers

* Delete
2019-09-25 17:05:35 +08:00
Preston Van Loon
7c9ddfeb58 delete ssz server which is no longer needed (#3578) 2019-09-24 22:01:27 -07:00
Nishant Das
3dcaeabb3e Fix Validator Account Creation (#3577)
* print raw tx data

* revert change
2019-09-24 20:16:08 -07:00
terence tsao
2335b5eae7 Merkle tree implementation (#3572) 2019-09-24 17:37:45 -07:00
terence tsao
e64287773c Merkle tree math helpers (#3571) 2019-09-24 09:58:18 -07:00
Nishant Das
0e329fc115 Attestation Server Fix (#3570) 2019-09-24 08:19:37 -07:00
Nishant Das
0db690df75 Chunked Responses (#3528)
* update naming

* replace with updated version

* more changes

* fixed all tests

* build and lint

* regen protos

* fix test

* remove outdated code

* prestons review

* add chunk size

* more fixes to chunked responses

* handle eof

* fix all tests

* abstract into common method

* add comment

* preston's comments

* preston's review

* preston's review

* lint

* add encoding methods

* gaz

* simplify

* simplify

* lint

* change naming

* update

* handle eof separately

* Apply suggestions from code review

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* remove def

* preston's review

* preston's review

* add unit tests

* add delay to fix test
2019-09-24 07:56:50 -07:00
Jim McDonald
41631c2e3a Fix incorrect value in comment (#3558) 2019-09-24 06:52:55 -07:00
Preston Van Loon
07360bcc07 Write failed blocks to disk (#3569)
* write a failed requested block to disk

* write failed blocks from pubsub too

* gaz
2019-09-23 21:46:40 -07:00
terence tsao
cbcbb487ac Lock around validateBeaconBlockPubSub (#3567) 2019-09-23 18:43:07 -07:00
Preston Van Loon
ad47817bcd Only write interop ssz states to disk with flag ON (#3566)
* only write SSZ states to disk with flag on

* lint

* also write blocks
2019-09-23 18:36:12 -05:00
Raul Jordan
305d0299dd Revamp GetValidators to Retrieve Historical Validators By Epoch (#3563)
* archive participation begin implementation

* validator participation compute

* comments

* compute participation common func

* full test for archiving data

* gazelle

* complete tests

* gaz

* properly retrieve the validators

* revert weird change

* historical validator fetching

* resolves issues with current tests

* adding test for old epoch validators

* tests in
2019-09-23 18:00:38 -05:00
Preston Van Loon
5d33514001 Restore bootnode query tool for kaddht and fix bootstrapping (#3565)
* add back bootnode query tool

* bootstrap from DHT
2019-09-23 17:18:01 -05:00
Preston Van Loon
f5aa25821d p2p: Relay support, CIDR whitelist, connection maintenance (#3564)
* add relay dial

* add relay support advertisement, add peer watching to maintain connections to bootstrap nodes and relay nodes

* gofmt

* double space
2019-09-23 14:43:53 -07:00
Raul Jordan
b8bdf71d5b Allow for Fetching RPC Historical Assignments from Archived Data (#3559)
* archive participation begin implementation

* validator participation compute

* comments

* compute participation common func

* full test for archiving data

* gazelle

* complete tests

* gaz

* define archived validator assignment method

* archived assign logic

* need to use compute committee next

* compute archival assignments helper func

* properly compute committee using current shard

* modify the assignments request to take in a query filter item

* more intuitive implementation of list assignments

* utilize the query filter

* complete implementation

* revamp tests

* fixing current tests before adding archive tests

* test now passes using len filtered indices for total size

* include prop index in test

* revert bad change

* use ethapis

* add necessary tests

* comments
2019-09-23 16:04:59 -05:00
terence tsao
9577e2c123 Removed attestation check for votes (#3562) 2019-09-23 15:44:13 -05:00
skillful-alex
b015dc793a HeadFetcher data race fix (#3460)
* HeadFetcher data race fix

* bazel run //:gazelle -- fix

* add the db teardown to test

* add TestChainService_SaveHead_DataRace test

* split race and norace tests

* change testset name

* test CI with 'spectest' tag instead of 'raceon'

* one more test CI with 'spectest' tag instead of 'raceon'

* bazel run //:gazelle -- fix

* set test tag to 'race_on'

* not clone the state
2019-09-23 14:24:42 -05:00
terence tsao
4432c88f73 Update beacon chain server AttestationPool (#3560)
* Run time bug

* Still failing

* Run time working

* Run time working

* Gazelle

* Fixed all the tests

* Revert config

* Revert back test configs

* Revert config

* Tested run time again, everything is good

* Implemented AttestationPoolNoVerify
2019-09-23 13:01:57 -05:00
Nishant Das
b5b10a8d35 Add Back Kademlia DHT to Prysm (#3557)
* serve nodes

* remove testing flag

* add back bootnode

* add dht

* add back dht

* gaz

* fix build

* bootnode works in runtime

* fix all references

* all tests pass

* remove feature flag

* separate out ports

* lint

* fix docker build

* use one error package
2019-09-23 10:24:16 -07:00
Raul Jordan
5294a6c5af Use Archive in Retrieving Validator Balances (#3536) 2019-09-23 09:19:18 -07:00
terence tsao
ab2d4e8ad6 Implement attestation pool in memory (#3542)
* Run time bug

* Still failing

* Run time working

* Run time working

* Gazelle

* Fixed all the tests

* Revert config

* Revert back test configs

* Revert config

* Tested run time again, everything is good
2019-09-23 11:11:44 -05:00
Raul Jordan
64795bd231 Utilize Archived Data in GetValidatorParticipation RPC Server (#3526)
* archive participation begin implementation

* validator participation compute

* comments

* compute participation common func

* full test for archiving data

* gazelle

* complete tests

* gaz

* remove double negative grammar in comment

* use archive in rpc

* uses the archive!

* error if nothing found in archive

* comment

* use head fetcher and root

* tests pass

* archive active set changes appropriately

* archive committees

* archive info

* done with committee info archiving

* archived set changes stored

* fix build

* test for archive balances and active indices

* further abstractions

* only archive epoch end

* tests all pass

* tests pass now

* archive done

* test for activated validators

* tests for exited

* amend message

* use different proto

* finalization fetcher

* gaz

* use root

* use ctx

* use new ethapis

* use proper hash

* match apis compatibility

* match apis

* properly use participation

* fix tests

* use right commit
2019-09-23 10:47:34 -05:00
terence tsao
4e6ed2744d Fixed 2 cosmetic errors (#3543)
* Fixed

* Treehash

* Updated error msg
2019-09-23 08:30:39 -07:00
Nishant Das
41ea8a18a0 Expose Nodes (#3556)
* serve nodes

* remove testing flag
2019-09-23 08:18:58 -07:00
Nishant Das
71098b6ed8 Lookup Bootnode instead of Random Nodes (#3555)
* add workspace change

* change to lookup bootnode
2019-09-23 10:17:11 +05:30
terence tsao
b7853f1fa8 Batch renovate updates (#3554)
* Update graknlabs_bazel_distribution commit hash to d4a7864

* Update io_kubernetes_build commit hash to 24cc8eb

* Update dependency build_bazel_rules_nodejs to v0.37.1

* Update dependency com_google_cloud_go to v0.46.3

* Update libp2p

* Update dependency com_github_spf13_pflag to v1
2019-09-23 08:16:56 +05:30
Nishant Das
762f108ea5 add workspace change (#3545) 2019-09-22 11:14:04 -07:00
terence tsao
041735ef54 Removed dead code (#3544) 2019-09-21 22:04:20 -07:00
terence tsao
315d4f0549 Save aggregated attestations in DB after processing (#3541)
* Added AggregatedAttestation helper

* Implemented `saveNewAttestation` for processing attestation

* Implemented `savesNewBlockAttestations` for processing block

* Tests

* Fix name

* Raul's feedback
2019-09-21 13:57:32 -05:00
terence tsao
2b2ef4f37c Added AggregatedAttestation helper (#3539) 2019-09-21 09:43:18 -07:00
Preston Van Loon
fb8d6a4046 infrequently ping bootnode (#3540) 2019-09-21 09:21:44 -07:00
Preston Van Loon
37596ac188 Is len(map) threadsafe? (#3535) 2019-09-20 13:05:08 -05:00
Preston Van Loon
9fcc6fc201 Wait until fully synced to process pubsub messages (#3514)
* context timeout for pubsub message processing

* add syncing check

* gofmt

* use a global cache

* lint

* fmt

* fix conflicts

* revert change

* gaz
2019-09-20 10:54:32 -07:00
terence tsao
c29a7be0ec Fix justified checkpoint mutation (#3534)
* first version of the watchtower api

* first version

* delete watchtower

* move to message loop

* roughtime

* one time

* fix test

* Fixed

* Fixed

* Revert unused lock
2019-09-20 10:44:28 -07:00
terence tsao
cf2ad1f21c Parent blocks fetching/processing (#3459)
* first version of the watchtower api

* Initial prototype of sync parent fetching/processing

* Another map to track seen block root

* Fixed fmt

* Ready to live test

* Ready to live test

* Separate pending block queue into its own

* first version

* delete watchtower

* move to message loop

* roughtime

* one time

* fix test

* Started testing but peer list empty

* Comment

* Logging

* Stuck at decoding non proto type

* Revert

* First take, need feedback

* Run time panics at hello

* Revert

* use reflect properly

* Fixed subscriber

* instantiate helper

* More reverts

* Revert back tests

* Cont when EOF

* Working

* Clean hello tracker on peer disconnection

* Clean hello tracker on peer disconnection

* Move to validation

* Proper locking

* Proper locking

* Fmt

* Nishant's feedback

* More feedback

* All tests passing

* fix build

* remove log

* gaz

* Added the todo
2019-09-20 10:08:32 -07:00
Raul Jordan
a2aa142b90 Archive Remaining Data at Epoch End in Archiver Service (#3531)
* archive participation begin implementation

* validator participation compute

* comments

* compute participation common func

* full test for archiving data

* gazelle

* complete tests

* gaz

* remove double negative grammar in comment

* use head fetcher and root

* tests pass

* archive active set changes appropriately

* archive committees

* archive info

* done with committee info archiving

* archived set changes stored

* fix build

* test for archive balances and active indices

* further abstractions

* only archive epoch end

* tests all pass

* tests pass now

* archive done

* test for activated validators

* tests for exited

* use ctx
2019-09-20 11:51:06 -05:00
Nishant Das
44e5e5de65 Remove Error Message Type (#3533)
* proto change

* fix test

* fix error resp
2019-09-20 09:13:38 -05:00
Nishant Das
4bc2d628b1 Update Naming to Latest Networking Spec (#3519)
* update naming

* replace with updated version

* more changes

* fixed all tests

* build and lint

* regen protos

* fix test

* remove outdated code

* prestons review

* preston's comments

* preston's review

* preston's review

* lint
2019-09-20 11:57:28 +05:30
terence tsao
ac176a5078 Update validators' votes from incoming block (#3530)
* first version of the watchtower api

* first version

* delete watchtower

* move to message loop

* roughtime

* one time

* fix test

* Update block attestation votes

* Clean up

* Found a bug

* Confirmed to be working in run time

* Confirmed to be working run time

* Raul's feedback

* Tests
2019-09-19 20:34:57 -05:00
Raul Jordan
4ffef61e1d Archive Validator Participation on End of Epoch Event (#3524)
* archive participation begin implementation

* validator participation compute

* comments

* compute participation common func

* full test for archiving data

* gazelle

* complete tests

* gaz

* remove double negative grammar in comment

* use head fetcher and root

* tests pass
2019-09-19 15:59:23 -05:00
Preston Van Loon
cba44e5151 log a warning on unhealthy healthz (#3529) 2019-09-19 20:39:55 +05:30
Nishant Das
8179ed57b9 fix validator (#3527) 2019-09-19 13:30:47 +05:30
Raul Jordan
e8b6951591 Complete ListAttestations EthereumAPIs v1alpha (#3452)
* retrieve attestations by block root as well

* add beacon block root filter

* rem err unimpl

* add changes to list atts filter proto

* utilize the new filter attributes

* add filter types

* utilize filters in the api server impl

* tests for filter

* tests pass

* filter test done

* fix test by using head fetcher instead

* gaz

* no panic

* use new ethapis commit

* elim panic

* res panic

* ensure proto compatibility

* fixed broken test
2019-09-18 20:14:26 -05:00
Preston Van Loon
33ef5f9150 Use hex string private keys for enr calculator (#3525)
* use hex string private keys for enr calculator

* use hex string private keys for enr calculator
2019-09-18 16:27:34 -05:00
terence tsao
495621e99b Sync RPC to support non-proto message (#3512) 2019-09-18 13:48:16 -07:00
Preston Van Loon
e1861bdb31 Clean hello tracker on peer disconnection (#3522)
* Clean hello tracker on peer disconnection

* Clean hello tracker on peer disconnection
2019-09-18 15:02:34 -05:00
terence tsao
f69195f211 Continue on bad attestation (#3523) 2019-09-18 12:22:26 -07:00
Raul Jordan
36e3a9f82a Implement Archival DB Methods (#3521)
* generate archive proto

* archived committee info

* archive methods added to iface definition

* impl

* update iface

* proto comments

* implement first method

* committee info

* save committee info

* participation checked in

* fully implemented

* tests

* test defs

* db impls done
2019-09-18 13:41:47 -05:00
Raul Jordan
6f25e4ce81 Use Start Index Flag in Unencrypted Key Gen (#3506)
* use start idx

* fix tests
2019-09-18 10:15:26 -07:00
Raul Jordan
b919429801 Archived Data Definitions & DB API (#3510)
* generate archive proto

* archived committee info

* archive methods added to iface definition

* impl

* update iface

* proto comments
2019-09-18 12:05:24 -05:00
Nishant Das
26af4496c0 Batch Validator Performance Requests (#3520)
* change proto msg types

* change server and client

* regen protos
2019-09-18 11:47:14 -05:00
Nishant Das
8701ccfe87 update to latest (#3518) 2019-09-18 10:51:49 -05:00
Nishant Das
d9664d3b6b Fix Withdrawal Credentials (#3517) 2019-09-18 07:52:34 -07:00
Raul Jordan
037c01f4d7 Archiver Service Definition (#3507)
* archive flags

* gaz

* create archiver

* register archiver in node

* registering the head updater feed

* add more gazelle

* cancel func

* test for service

* properly utilize the mocks

* lint

* remove extraneous log

* add back write to disk

* gaz
2019-09-18 09:30:02 -05:00
terence tsao
9ab08e6998 Remove beacon rpc service (#3515)
* first version of the watchtower api

* first version

* delete watchtower

* move to message loop

* roughtime

* one time

* fix test

* Fixed test

* Fixed proposer server

* Gaz

* gaz

* Stuck

* Tests passing

* Fixed all the tests
2019-09-18 06:34:50 -07:00
shayzluf
bdb1b472b6 Test process on chainstart (#3516)
* first version of the watchtower api

* first version

* delete watchtower

* move to message loop

* roughtime

* one time

* fix test

* add test to chain start

* fix test

* move logic to mock

* remove unused method

* remove imports

* gaz

* goimports

* goimport
2019-09-18 14:14:25 +05:30
Preston Van Loon
0d318b394e Enable go-ethereum logs for bootnode (#3513)
* enable go-ethereum logs for bootnode

* fix docker imgs
2019-09-17 16:24:08 -07:00
shayzluf
b9f9cf0b2c Handle blocks after chain start (#3486) 2019-09-17 14:14:51 -07:00
Nishant Das
b1b76ac87c Handle Attestations in a Separate Goroutine (#3487)
* move into separate routine

* preston's review

* use opencensus
2019-09-17 11:17:21 -05:00
terence tsao
b63e938cfb Update (#3504) 2019-09-17 21:17:57 +05:30
Nishant Das
31eae719b9 fix config (#3500) 2019-09-17 11:19:58 +05:30
terence tsao
b863004b2a Aggregate attestations before verify and update votes (#3493) 2019-09-16 17:48:03 -07:00
Nishant Das
7eba8da9d2 Save Network Keys in Data Directory (#3488)
* change marshalling

* add networkkeys

* gaz

* fix test

* add new function

* resolve comments, rename to datadir
2019-09-16 17:09:16 -05:00
Raul Jordan
7e7941b0af bls endianness (#3495) 2019-09-16 16:39:45 -05:00
Raul Jordan
49a529388b Resolve Miscellaneous Prysm TODOs (#3465)
* resolve

* resolve

* return

* remove deprecated protos

* rem deprecated pbs

* resolve cache

* resolve md TODO

* node server

* resolve config todo

* resolve even more

* broken build
2019-09-16 15:45:03 -05:00
Raul Jordan
9683a83750 Properly Use Demo Config (#3494) 2019-09-16 12:55:30 -07:00
Nishant Das
c9f48373cb allow rpc requests (#3490) 2019-09-16 12:54:46 -05:00
terence tsao
bf07cfcdab Clean up operation service (#3468)
* Cleaned up operation service

* Fixed all the tests

* Fixed node.go

* Review feedback

* Todo
2019-09-16 12:05:30 -05:00
Nishant Das
bef58620fc Update Renovate (#3489)
* Update libp2p

* Update com_google_protobuf commit hash to 763c358

* Update graknlabs_bazel_distribution commit hash to 1ec7e2d

* Update dependency com_github_coreos_go_semver to v0.3.0

* Update dependency com_github_minio_sha256_simd to v0.1.1

* Update dependency com_github_prometheus_common to v0.7.0

* Update dependency com_github_prometheus_procfs to v0.0.5

* Update dependency com_google_cloud_go to v0.46.2

* Update dependency io_bazel_rules_docker to v0.10.1

* Update libp2p

* Update dependency com_github_beorn7_perks to v1

* Update dependency com_github_grpc_ecosystem_go_grpc_middleware to v1

* change back protobuf
2019-09-16 06:05:16 -07:00
terence tsao
c5b4cf7f7d Fixed attesting indices set (#3469)
* Fixed attesting indices set

* Typo

* Regression test

* Comment

* Validator count for tests
2019-09-15 13:54:55 -07:00
Preston Van Loon
a2685245f2 demo-config flag (#3473)
* democonfig

* 3.2
2019-09-15 14:24:08 -05:00
Nishant Das
d597410d9b MultiLocking in Operations Service (#3470)
* add lock

* add cache

* modify lock retrieval
2019-09-15 13:51:37 -05:00
Preston Van Loon
86d4eb5868 Update main.go (#3472) 2019-09-15 10:47:23 -07:00
Preston Van Loon
c6236df603 Delete outdated k8s (#3455)
* remove outdated k8s directory

* remove k8s deps
2019-09-15 10:28:37 -07:00
terence tsao
9d62e542e5 Clean up validator server (#3466) 2019-09-14 13:31:38 -07:00
Nishant Das
d36061d62f Fix Deterministic Key Generator (#3467) 2019-09-14 10:46:38 -04:00
Raul Jordan
8887ccdd51 Add Consensus Regression Tests (#3464)
* Start working on adding consensus regression tests

* resolve issues

* table driven tests

* always check equality

* revert

* better naming

* better naming of tests

* build

* resolved

* remove unused deps

* fail on failure
2019-09-13 15:46:35 -07:00
Raul Jordan
1e086b63e8 Add Fork Choice Package Docs (#3463) 2019-09-13 13:03:14 -04:00
terence tsao
8c7ef61238 Add back configure validator features (#3456) 2019-09-12 14:02:53 -05:00
Preston Van Loon
0a0d579822 run minimal tests (#3454) 2019-09-12 14:24:35 -04:00
terence tsao
91f824fe10 Clean up unused flags (#3449) 2019-09-12 11:48:34 -04:00
terence tsao
bee3aff6c5 Delete deprecated p2p (#3451) 2019-09-12 11:20:46 -04:00
Nishant Das
b04b542e64 fix panic (#3448) 2019-09-12 14:18:41 +05:30
Nishant Das
3e8a94516d remove bootstrap node (#3447) 2019-09-12 12:46:16 +05:30
Nishant Das
273b917319 Allow Separate Ports for Different Transports (#3414)
* update workspace

* change to new version

* gaz

* set keys

* try more things

* finally fixed all tests

* fix bootnode

* Update beacon-chain/p2p/discovery.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* preston's and raul's review

* add http server

* add tool

* add image

* change comment

* add multiaddr comment

* lint

* cosmetic changes

* fix docker

* remove dep

* preston's requested changes

* new flags

* add support for separate tcp port

* fix refs

* change tcp port
2019-09-12 10:22:27 +05:30
Raul Jordan
e0e3dada7b Abstract Usage of Deposit Cache Into Interface (#3443)
* deposit cache refactor begin

* use interface for has chain started

* use deposit fetcher interface instead

* use moar interfaces

* comment

* gaz

* fix breaking build

* lint

* implement chainstart fetcher

* allow start to work

* fix broken tests
2019-09-11 23:30:04 -05:00
Preston Van Loon
8ff289fe1a Antoine sent me a patch instead of a PR /shrug? (#3446) 2019-09-11 20:51:14 -07:00
Preston Van Loon
14ed36a41e Consensus issue on state transition (#3444)
* Add failing test

* remove glob

* remove extra deps

* Use min config for test

* Set for unslashedAttestingIndices

* add comment

* add minimal and manual tag
2019-09-11 19:15:20 -07:00
Preston Van Loon
ccece73483 Use the raw bytes, not the libp2p protobuf container for secp256k1 private keys (#3445)
* use the raw bytes, not the libp2p protobuf container for secp256k1 private keys

* fix tests
2019-09-11 17:04:35 -07:00
terence tsao
798bbbdc82 Cold start for interop (#3437)
* coldstart flags for validator

* WIP beacon node flags

* wip beacon chain, flag fix in validator, arg fix in validator

* checkpoint

* Added interop service

* working on mock chainstart

* save the state lol

* fix tests

* Save genesis validators

* gaz

* fix validator help flags

* WaitForChainStart actually waits for genesis time

* cold start fixes

* cache

* change back

* allow for genesis state too

* remove logs

* increase mmap size

* dont process if head doesn't exist

* add 10ms tolerance

* enable libp2p debug at debug, fix pubsub

* works with checkpt

* initialize justified and finalized in db

* Removed preloadStatePath from blockchain service

* Clean up

* Write to disk for now post state

* revert b466dd536f

* Builds

* Only RPC test fails now

* use minimal config, no demo config

* clean up branch

* Lint

* resolve lint

* more lint fixes

* lint

* fix viz

* Fixing RPC test

* skip before epoch 2

* RPC time out

* Fixed ordering

* rename

* remove some dbg statements

* ensure index is correct

* fix some panics

* getting closer

* fix tests

* Fix private key

* Fixed RPC test

* Fixed beacon chain build for docker

* Add interop.go to validator go_image

* Fixed docker build

* handle errors

* skip test, skip disconnecting peers

* Fixed docker build

* tolerance for attestation processing

* revert copy

* clearer err message parse

* fix launching from dep contract
2019-09-11 13:38:35 -05:00
Nishant Das
b4975f2b9d Read P2P Peer Key Properly (#3442)
* fix conflict

* fix conflict

* gaz

* fix test

* gaz
2019-09-11 20:28:23 +05:30
shayzluf
1edeb8ec4c Beaconblock over wire (#3436) 2019-09-10 10:24:14 -04:00
Preston Van Loon
3708a8f476 Add tool and script for interop testing (#3417)
* add tool and script for interop testing

* identity

* lint

* merge upstream, fix conflict, update script, add comment

* add comma separated support for --peer=

* remove NUM_VALIDATORS, comma fix

* WIP docker image

* use CI builder

* pr feedback

* whatever antoine says

* ignore git in docker

* jobs=auto

* disable remote cache

* try to cache the golang part

* try to cache the golang part

* nvm

* From Antoine with love

* fix
2019-09-09 17:31:19 -04:00
Raul Jordan
af07c13730 [Interop] Improve RPC Codebase + Start Beacon Chain With Mock ETH1 Values (#3407)
* add main.go

* interop readme

* proper visibility

* standardize and abstract into simpler funcs

* formatting

* no os pkg

* add test

* no panics anywhere, properly and nicely handle errors

* proper comments

* fix broken test

* readme

* comment

* recommend ssz

* install

* tool now works

* README

* build

* readme

* 64 validators

* rem print

* register the no powchain flag

* work on mock eth1 start

* common interface

* getting closer with the interface defs

* only two uses of powchain

* remove powchain dependency

* remove powchain dependency

* common powchain interface

* proper comment in case of flag

* proper args into rpc services

* rename fields

* pass in mock flag into RPC

* conforms to iface

* use client instead of block fetcher iface

* broken tests

* block fetcher

* finalized

* resolved broken build

* fix build

* comment

* fix tests

* tests pass

* resolved confs

* took them out

* rename into smaller interfaces

* resolve some confs

* ensure tests pass

* properly utilize mock instead of localized mock

* res lint

* lint

* finish test for mock eth1data

* run gazelle

* include flag again

* fix broken build

* disable powchain

* dont dial eth1 nodes

* reenable pow

* use smaller interfaces, standardize naming

* abstract mock into its own package

* faulty mock lint

* fix stutter in lint

* rpc tests all passing

* use mocks for operations

* no more mocks in the entire rpc package

* no mock

* viz

* testonly
2019-09-09 17:13:50 -04:00
Preston Van Loon
8d234014a4 Fix broadcast ssz (#3423)
* add two types of encoding/decoding ssz

* fix tests

* lint

* lint
2019-09-08 19:34:52 -07:00
Preston Van Loon
4dad28d1f6 Accept a filepath for bootnode ENR address (#3422)
* accept a filepath for bootnode ENR address

* fix
2019-09-08 19:05:28 -07:00
Ivan Martinez
5e939378d0 Update to spec v0.8.3 (#3355)
* Ignore latest messages in fork choice prior to latest justified

* Make sure Compact Committee Roots isn't changed by process_final_updates

* WIP add attestation bitfields length to match committee length

* Begin work on updating spec tests to 0.8.2

* WIP set up for new spec test structure

* Fix slashings

* Get mainnet tests mostly passing for attestations and attester slashings

* Fix process attestation test

* Undo change

* Complete spec tests for all operations
Still need sanity block tests

* Fix BLS sigs

* Reduce amount of reused code in core/blocks/spectests/

* Fix tests

* Update block sanity tests to 0.8.2

* Update epoch spec tests to 0.8.2

* Clean up all tests and fix shuffling/epoch tests

* WIP update bls tests to 0.8.2

* WIP update bls tests to 0.8.3

* Finish BLS spectest update to 0.8.3

* Fix shuffling spec tests

* Fix more tests

* Update proto ssz spec tests to 0.8.3

* Attempt to fix PrevEpochFFGDataMismatches test

* Goimports

* Fix documentation

* fix test

* Use custom general spec tests

* Reduce code footprint

* Remove unneeded minimal skip

* Fix for comments

* Fix for comments

* Fix test

* Small fixes

* Cleanup block spec tests a bit

* Undo change

* fix validator

* Fix validator tests

* Run gazelle

* Fix error output for epoch spec tests
2019-09-08 12:41:52 -07:00
Preston Van Loon
d94522510f add outfile support (#3421) 2019-09-08 23:38:46 +05:30
Raul Jordan
adc27a0bc2 Update WORKSPACE With Latest SSZ (#3416) 2019-09-08 12:58:22 -04:00
terence tsao
a3c3a72e72 Change flag name to interopXXX (#3418) 2019-09-08 09:29:23 -04:00
terence tsao
4235980511 Clean up post --next (#3411)
* Delete old code

* RPC mock testing

* Fixed BUILD

* Conflict

* Lint

* More lint
2019-09-06 22:39:14 -04:00
Nishant Das
fb20fc7881 change to wrap (#3413) 2019-09-06 14:01:59 -07:00
Nishant Das
171e5007c5 Update Discv5 to the Latest Version (#3392)
* update workspace

* change to new version

* gaz

* set keys

* try more things

* finally fixed all tests

* fix bootnode

* Update beacon-chain/p2p/discovery.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* preston's and raul's review

* add http server

* add tool

* add image

* change comment

* add multiaddr comment

* lint

* cosmetic changes

* fix docker

* remove dep

* preston's requested changes
2019-09-07 00:50:20 +05:30
terence tsao
56a395a297 Add /heads page (#3410) 2019-09-05 20:04:25 -07:00
Marius Kjærstad
b133bced26 Updated how-prysm-works link in README.md (#3412)
2019-09-05 18:29:54 -07:00
terence tsao
14c59b2ff9 Remove deprecated services and --next (#3371)
* Save new validators in DB

* Use info

* Add total validator count

* Fixed tests

* Add new test

* Revert light client config

* Add state metrics back

* Gaz

* Mark old ones as deprecated

* Deprecate not --next services

* Fixed all operation tests

* Fixed node test

* All tests passing locally

* Add deprecated-p2p back, blocked by bootstrap-query

* Revert message proto

* delete deprecated DB items

* delete all other instances of old db

* gaz

* cycle rem

* clear db
2019-09-05 11:04:06 -05:00
terence tsao
75bce9b7e1 Align metrics to interop (#3406) 2019-09-05 08:32:35 -07:00
terence tsao
c383b6a30c Load ssz formatted genesis state (#3408)
* Preload ssz genesis state

* Log
2019-09-04 17:32:38 -05:00
Raul Jordan
75c0b01932 Genesis State Generator + Interop Docs (#3405)
* add main.go

* interop readme

* proper visibility

* standardize and abstract into simpler funcs

* formatting

* no os pkg

* add test

* no panics anywhere, properly and nicely handle errors

* proper comments

* fix broken test

* readme

* comment

* recommend ssz

* install

* tool now works

* README

* build

* readme

* 64 validators

* rem print
2019-09-04 13:47:44 -05:00
Preston Van Loon
b0e6d7215c Tracing: Add additional attributes (#3404)
* Add some attributes for tracing

* gaz
2019-09-03 20:03:09 -07:00
Preston Van Loon
b6e0d700ec PubSub: Check messages received from self, do not double process (#3403)
* Check messages received from self, add them to the store

* tests

* fmt
2019-09-03 17:22:15 -07:00
Preston Van Loon
0a61c379a5 Rename / move logic about updating validator indices (#3402)
* rename, move

* pr feedback
2019-09-03 13:57:08 -07:00
terence tsao
6614816061 Log the slot of the block w/o parent (#3401) 2019-09-03 13:43:59 -07:00
terence tsao
60c048a0ec Remove lock from store struct (#3400) 2019-09-03 13:14:23 -07:00
Preston Van Loon
5ec629af71 remove unnecessary lock (#3399) 2019-09-03 12:53:18 -07:00
Preston Van Loon
399f704bf5 Initial Sync: report healthy before chain started (#3388)
* Return error while syncing

* chainStarted
2019-09-03 11:25:20 -07:00
Preston Van Loon
8f342cc5bb fix some parent context usage, add tracing to p2p handlers (#3395) 2019-09-03 11:06:35 -07:00
Raul Jordan
8ce8717676 Fix Prysm Deposit Formatting (#3394)
* proofs with proper size

* getting to the root of the problem, no pun intended

* add regression test and fix proofs

* debugging the receipt root

* debug

* fixed spec tests

* fixed up proofs!

* tests all pass
2019-09-03 12:47:47 -05:00
Preston Van Loon
90b2a880c6 Add /p2p page (#3391)
* add /p2p page

* fix tests
2019-09-03 11:07:40 -05:00
Preston Van Loon
d23ba8e69d Temporarily ban peer if it fails to connect (#3390)
* temporarily ban peer if it fails to connect

* hotfix for handshake
2019-09-02 18:10:58 -07:00
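The temporary ban described in #3390 is, at its core, a TTL map keyed by peer ID. A minimal sketch of that pattern, with the type names and TTL handling assumed for illustration (the production code uses a cache library rather than a hand-rolled map):

```go
package p2p

import (
	"sync"
	"time"
)

// banList records peers that recently failed to connect, with an expiry.
type banList struct {
	mu    sync.Mutex
	until map[string]time.Time // peer ID -> time the ban lifts
	ttl   time.Duration
}

func newBanList(ttl time.Duration) *banList {
	return &banList{until: make(map[string]time.Time), ttl: ttl}
}

// Ban marks a peer as banned for the configured TTL.
func (b *banList) Ban(peerID string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.until[peerID] = time.Now().Add(b.ttl)
}

// Banned reports whether a peer is still banned, pruning expired entries.
func (b *banList) Banned(peerID string) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	t, ok := b.until[peerID]
	if !ok {
		return false
	}
	if time.Now().After(t) {
		delete(b.until, peerID)
		return false
	}
	return true
}
```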
terence tsao
b52f32d17c Clean up configs (#3389) 2019-09-02 17:13:33 -07:00
Preston Van Loon
b1a102fd1d Return error while syncing (#3386) 2019-09-02 14:36:14 -05:00
Raul Jordan
da630f349f Add Test for Aggregating Large Amount of Attestations (#3358)
* test for verifying large amount of agg sigs

* agg sig could not verify

* 128 fails

* confirmed works for 512

* comprehensive test for handle att

* commented test

* fix up test

* include the proper wait group

* concurrency managed to reproduce verification bug

* concurrent test passes

* revert config changes

* use new db in operations tests

* debugging for the special attestations

* resolve tests

* fmt
2019-09-02 13:49:37 -05:00
Nishant Das
c412dde3bd add flag (#3383) 2019-09-02 11:23:07 -07:00
terence tsao
250e911faa Mega renovate updates (#3382)
* Update dependency build_bazel_rules_nodejs to v0.36.2

* Update dependency com_github_prometheus_procfs to v0.0.4

* Update libp2p

* Update dependency com_github_grpc_ecosystem_go_grpc_prometheus to v1
2019-09-02 10:41:39 -07:00
renovate[bot]
510184c9cc Update libp2p (#3379) 2019-09-02 10:41:15 -07:00
shayzluf
b32c19a004 Slasher db (#3270)
* first version of the watchtower api

* first commit

* remove watchtower

* working version

* fix < 0

* gaz

* Update slasher/db/db.go

* remove clear history

* moved constant to config

* gaz

* feedback changes

* compare uint64

* add constant config

* PruneSlasherStoragePeriod change
2019-09-02 18:36:29 +03:00
Nishant Das
34a163b110 fix logging (#3384) 2019-09-02 06:29:59 -07:00
Nishant Das
876e0ea84d Fix Discv5 in Runtime (#3373)
* fix bug

* remove logs

* fix test

* add locks

* add ttl

* Apply suggestions from code review

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* change to ccache
2019-09-01 15:29:58 -07:00
terence tsao
25dbc5ea85 Add back state metrics (#3369) 2019-09-01 08:37:38 -07:00
Preston Van Loon
a4ac23160a Bootnode: Print private key at debug (#3372)
* print private key at debug

* fix docker deps
2019-08-31 20:05:36 -07:00
terence tsao
146b611dc8 Use roughtime for checking attestation is not from future epoch (#3370) 2019-08-31 13:18:31 -07:00
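The guard this commit describes: derive the current epoch from a roughtime-adjusted clock and reject attestations whose target epoch lies ahead of it. A sketch under those assumptions; the constants are mainnet values hard-coded for illustration, and the exact `roughtime.Now()` signature in `shared/roughtime` should be treated as an assumption:

```go
package sync

import (
	"fmt"
	"time"

	"github.com/prysmaticlabs/prysm/shared/roughtime"
)

// Mainnet timing parameters, hard-coded here for illustration.
const (
	slotsPerEpoch  = 32
	secondsPerSlot = 12
)

// currentEpoch derives the epoch from genesis time using a
// roughtime-adjusted clock instead of the raw system clock.
func currentEpoch(genesis time.Time) uint64 {
	now := roughtime.Now()
	if now.Before(genesis) {
		return 0
	}
	slot := uint64(now.Sub(genesis).Seconds()) / secondsPerSlot
	return slot / slotsPerEpoch
}

// validateTargetEpoch rejects attestations targeting a future epoch.
func validateTargetEpoch(targetEpoch uint64, genesis time.Time) error {
	if targetEpoch > currentEpoch(genesis) {
		return fmt.Errorf("attestation targets future epoch %d", targetEpoch)
	}
	return nil
}
```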
terence tsao
ca2a55874c Save new validator index in DB (#3367) 2019-08-30 21:43:18 -07:00
Preston Van Loon
8e2dcb81ae use roughtime (#3366) 2019-08-30 15:51:15 -07:00
terence tsao
f131585041 Initialize chain info w/o clear db (#3365)
* Initialize chain info upon restart

* Test
2019-08-30 15:24:37 -07:00
terence tsao
9a6410ec15 Lock when process attestation (#3364) 2019-08-30 14:07:42 -07:00
terence tsao
314bc513af Fixed validator pubkey -> index getter (#3361) 2019-08-30 13:50:21 -07:00
Preston Van Loon
95c528f0bc First pass: single peer initial sync (#3363)
* lint

* add requests

* add all new stuff

* comment

* preston's review

* initial commit

* reorder sync so it isn't required to wait until start

* checkpoint

* fix

* improved handler API

* Set up prechain start values

* improved handler API

* ooops

* successful peer handshakes

* successful peer handshakes

* successful peer handshakes

* checkpoint

* chpkt

* handle init after chain start

* emit state initialized feed if existing db state

* merge error

* Done

* Test

* Fixed test

* emit state initialized

* force fork choice update

* wait for genesis time

* sync to current slot

* Use saved head in DB

* gaz

* fix tests

* lint

* lint

* lint

* lint

* Revert "Use saved head in DB"

This reverts commit c5f3404fdf.

* remove db

* lint

* remove unused interfaces from composite

* resolve comments
2019-08-30 15:15:40 -05:00
terence tsao
205fe1baa5 DB: finalized and justified checkpoints can't return nil (#3362) 2019-08-30 10:03:55 -07:00
terence tsao
c425bf2c31 Refactor fork choice start up (#3360)
* Done

* Test

* Fixed test

* emit state initialized

* Fixed existing tests

* Lint

* Lint
2019-08-30 08:58:02 -05:00
skillful-alex
538babb7e9 dont lose keys (#3357) 2019-08-30 10:02:08 +05:30
terence tsao
f0332e1131 Save genesis state in DB (#3359)
* Done

* Test

* Fixed test

* emit state initialized
2019-08-29 15:32:35 -07:00
Nishant Das
1f0aad31d2 Add Hello Tracking (#3342)
* lint

* add requests

* add all new stuff

* comment

* preston's review

* change to send

* remove topic and add lock

* add test

* lint

* change num of peers

* preston's review

* Update beacon-chain/p2p/handshake.go
2019-08-29 22:02:52 +05:30
terence tsao
f49469a820 Fix chain info's pre chain start return values (#3353)
* Set up prechain start values

* ooops
2019-08-29 10:34:26 -05:00
terence tsao
d8fd7e502a Fix GetChainHead for RPC (#3352)
* Fix ChainHeadQuery

* Fixed test
2019-08-29 10:17:21 -05:00
terence tsao
206222c5bc Return cloned state (#3351) 2019-08-29 09:36:43 -05:00
Raul Jordan
816aac82d5 clone read access to head state and block (#3350) 2019-08-28 13:14:00 -07:00
skillful-alex
9e5864fc61 Added roughtime to validator waitToSlotMidpoint (#3344)
* add roughtime to validator waitToSlotMidpoint

* gazelle
2019-08-28 14:59:30 -05:00
terence tsao
5d7c33a8dc Check if slot is greater before process slots (#3349) 2019-08-28 14:24:33 -05:00
terence tsao
d84ae95309 Moved delay att inclusion to fork choice service (#3345) 2019-08-28 10:26:07 -07:00
Raul Jordan
e8f030977a wait for chainstart (#3343) 2019-08-28 12:07:58 -05:00
Raul Jordan
14f77449ce Include Prysm Tool to Generate Unencrypted Keys (#3324)
* next compatible, tests pass

* terence feedback

* skip comment

* fixes

* misc fix

* on block

* parse from unencrypted keys json

* mod val client

* launching unencrypted workssss

* fix broken build

* fix up build

* rem prints

* unencrypted keys file generator

* generate json

* unencrypted keys gen files

* tool done

* function abstractions

* removed docker img stuff

* lint
2019-08-28 11:07:31 -05:00
terence tsao
cbb66dab50 Fix finalized block filtering in sync (#3334) 2019-08-28 08:29:45 -07:00
Preston Van Loon
2ee4f00b81 Add sync/p2p metric for number of messages received by topic (#3341)
* Add going msg metric

* fmt

* rename
2019-08-28 10:14:22 -05:00
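A per-topic message count is the standard Prometheus labelled-counter pattern. A minimal sketch; the metric name and label here are illustrative, not necessarily what Prysm registered:

```go
package sync

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// messagesReceived counts pubsub messages received, partitioned by topic.
var messagesReceived = promauto.NewCounterVec(prometheus.CounterOpts{
	Name: "p2p_messages_received_total", // illustrative name
	Help: "Count of messages received, by pubsub topic.",
}, []string{"topic"})

// recordMessage bumps the counter for the topic a message arrived on.
func recordMessage(topic string) {
	messagesReceived.WithLabelValues(topic).Inc()
}
```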
skillful-alex
7bb5ac0dde do not panic if dv5Listener is not inited (#3339) 2019-08-28 16:29:34 +05:30
terence tsao
9f2c2f0197 Minor runtime fixes (#3335) 2019-08-27 22:19:47 -05:00
terence tsao
323bbe10ed Add checkpoint to state caching (#3333) 2019-08-27 15:01:27 -07:00
terence tsao
3a138b9e77 Update workspace for go-ssz (#3331)
* Update workspace for ssz

* Update WORKSPACE
2019-08-27 10:48:11 -05:00
Nishant Das
ca0f61bf24 Change Ordering of Gossipsub Registration (#3330)
* fix ordering

* Add comment
2019-08-27 10:23:22 -05:00
Nishant Das
701c70ae3b add better logging (#3329) 2019-08-27 06:27:04 -07:00
Raul Jordan
7beafa159d Support Starting Validator Binary from Unencrypted Keys JSON (#3308)
* next compatible, tests pass

* terence feedback

* skip comment

* fixes

* misc fix

* on block

* parse from unencrypted keys json

* mod val client

* launching unencrypted workssss

* fix broken build

* fix up build

* rem prints

* resolve lint

* bls comment

* fix docker deps

* gaz
2019-08-26 16:07:09 -05:00
terence tsao
f188609137 Implement GetHead for RPC (#3326)
* next compatible, tests pass

* terence feedback

* skip comment

* fixes

* misc fix

* on block

* Update RPC service to use chain info

* OOps

* All tests pass, run time with 8 validators work!

* Remove saving genesis validator

* Revert genesis count

* Move redundant headstate

* Comments

* Implemented GetChainHead

* Test works

* Moved mock package

* Fixed visibility for BUILD file

* Conflict
2019-08-26 15:59:17 -05:00
terence tsao
aca775e405 Fork detection tool (#3327) 2019-08-26 13:17:47 -07:00
terence tsao
64d0826469 Update RPC service to use chain info (#3309) 2019-08-26 13:06:16 -07:00
terence tsao
a1020585fd Fix aggregation with new DB (#3323)
* Check legacy when aggregate

* Typo
2019-08-26 14:30:39 -05:00
terence tsao
53d9fca201 Mega renovate updates (#3321)
* Update graknlabs_bazel_distribution commit hash to bd93910

* Update dependency bazel_gazelle to v0.18.2

* Update dependency build_bazel_rules_nodejs to v0.36.1

* Update libp2p

* Update dependency com_github_golang_mock to v1

* Update dependency com_github_gorilla_websocket to v1

* Revert update gazelle

* Update WORKSPACE
2019-08-26 14:25:29 -04:00
Andrei Ivasko
f99e2bd7c9 Benchmark active indices (#3153)
* new branch off master

* bloomfilter + benchmarks done

* cfilter benchmarks

* comments added

* goimports and gofmt added

* linter issues

* bazel run //:gazelle -- fix

* workspace definitions fixed

* fixed tree_test.go

* Update workspace

* final commit

* gazelle

* updated workspace

* workspace

* reverted workspace changes

* workspace newline

* applied git checkout origin/master WORKSPACE
2019-08-26 11:30:28 -05:00
terence tsao
0b5b3865ef Update validators db during epoch boundary (#3307) 2019-08-26 11:02:17 -05:00
Nishant Das
5828278807 Fix Bolt Fatal Crash (#3320)
* add fix and reg test

* nogo

* nogo
2019-08-26 09:00:40 -05:00
Raul Jordan
9ad00ffafb Use New Attestation Receiver Method in RPC (#3287)
* next compatible, tests pass

* terence feedback

* skip comment

* fixes

* misc fix

* on block

* resolved err
2019-08-25 15:45:55 -05:00
terence tsao
6bcb68f862 Fix save the correct head (#3306) 2019-08-24 18:39:40 -07:00
terence tsao
045badc5f3 Fix competing attestation check (#3305) 2019-08-24 16:56:40 -07:00
Nishant Das
919877f301 Ignore Messages From Local Peer (#3299)
* validate message coming into pipeline

* gaz

* add to deprecated p2p

* add new lib

* change lib
2019-08-24 14:41:24 -04:00
terence tsao
122166b317 Fix transition logging (#3303) 2019-08-24 11:51:00 -06:00
terence tsao
8870bcea64 Fix attestation pool clean up for new db (#3304) 2019-08-24 11:36:31 -06:00
Preston Van Loon
06c97256bc p2p --next: Register p2p peer count metrics (#3301) 2019-08-24 10:07:03 -06:00
Nishant Das
9d15196bed Runtime Fixes (#3300) 2019-08-24 07:26:25 -06:00
Nishant Das
111f225177 Remove IsAttCanonical From Operations Service (#3298) 2019-08-24 06:50:43 -06:00
terence tsao
a31057de83 Fixed a few more init beacon node bugs (#3297) 2019-08-23 22:02:34 -06:00
terence tsao
5294caf5e8 Save validators upon chainstart (#3295) 2019-08-23 19:59:09 -06:00
Preston Van Loon
a852d610e2 Add panic handler (#3296) 2019-08-23 19:15:02 -06:00
terence tsao
3b422cb9c6 fixed conflict att log (#3294) 2019-08-23 17:23:19 -06:00
Preston Van Loon
b04bfb87a8 only attempt discv5 listener when no-discovery is not present (#3293) 2019-08-23 17:59:59 -04:00
Preston Van Loon
0353cc533e p2p error logging (#3292) 2019-08-23 15:46:54 -06:00
Preston Van Loon
0c0ec97343 fixes (#3291) 2019-08-23 17:34:03 -04:00
terence tsao
4484558d87 Part 11 of update fork choice - tracing and spans (#3285)
* Add tracing in forkchoice service

* Gazelle
2019-08-23 15:04:06 -05:00
Preston Van Loon
ce65b11801 Beacon attestation pubsub subscriber (#3289)
* beacon attestation subscriber

* register beacon attestation handler

* fix tests
2019-08-23 14:46:04 -05:00
Raul Jordan
2e8a06d6d4 Use New Blockchain Service in RPC Package (#3286)
* new chain service usage via interface

* put in the new chain service in propose blk

* deprecate with new service for canonical block roots

* remove old chain serv absolutely in validator server

* full legacy code compatible in beacon server

* fully compliant

* full deprecation at service level

* no more mock chain serv

* fix beacon server tests

* add changes to prop server

* broken build

* --next compatible

* conditional register of chain service

* proper conversion

* nil deref
2019-08-23 13:53:07 -05:00
terence tsao
02ca2290e1 Added metrics for monitoring processed objects and competing chain (#3283) 2019-08-23 12:18:39 -06:00
terence tsao
15f052c48d Update sync to use chain info for head and finalized check point (#3288)
* Starting

* Fixed all the tests
2019-08-23 12:48:40 -05:00
Nishant Das
74df2aa0c3 Add Recent Blocks RPC Request Handler (#3281)
* add new rpc handler

* gaz

* add it back

* remove ok

* preston's comments
2019-08-23 13:10:25 -04:00
Nishant Das
22f4807e0b Implement GoodBye RPC Handler (#3282)
* add handler

* gaz and addition to main rpc method

* remove todo

* preston's comments

* gaz
2019-08-23 12:53:38 -04:00
Raul Jordan
7f475bee00 no cache tests (#3284) 2019-08-23 09:56:48 -05:00
Nishant Das
ebb0e398d3 Deposit Cache Fix (#3280)
* fix cache

* fix spacing
2019-08-22 22:49:03 -04:00
Raul Jordan
f342224410 Full RPC Package Compliance With New DB Interface (#3275)
* deprecate db

* fix build

* begin integrating new db

* gaz

* use more of the new db

* newest implementation uses head state

* remove more deprecated items

* setup validators in state helper

* fix up some tests with the new db

* resolve broken build

* gaz

* begin ensuring tests pass

* optional idx

* list validator balances passing

* default page size passing

* only two failing

* fixed most tests, found edge case

* allow nil return and add proper tests

* pass tests

* fix head block root problem

* working with the new db

* every ethereumapis method now compliant with both dbs

* pass in db into server

* proposer server all compliant

* validator service fully compliant

* fix broken build, tests pass

* spacing

* compute state root and propose block tests passing with new db

* complete proposer server tests revamp

* validator tests halfway through passing with new db

* more validator server tests

* more than halfway there

* so so close

* all validators tests done

* attester server tests fixing

* use new api

* attester server complete

* complete
2019-08-22 20:39:06 -05:00
terence tsao
c47598514c Part 9 of update fork choice - HeadBlock and HeadState getters (#3279)
* Headblock and headstate getters

* Moved mutex around
2019-08-22 20:13:56 -05:00
terence tsao
0d64f7b80e Part 6 of update fork choice - implement new ReceiveAttestation (#3246)
* Implemented new fork choice service and helpers

* Added rest of the tests

* Lint

* Add back helpers test

* Reformatted to doc, helpers and metrics.go

* include new getter for block

* create block filters from indices

* give every block index a unique bucket

* construct block indices by bucket mmap

* almost done save for the block filters

* include block filters, need a few more small touches for fetching the proper indices by bucket

* full functionality to filter by parent root

* tests pass when using the same logic as attestations

* todo

* proper todo formatting

* first minimum slot range filter

* slot range filters pass

* more filter criteria passing

* tests passing

* add todos

* all block tests pass and work

* rem fmt

* range retrieval test

* fixed test conditions

* instantiate the other buckets

* simplify bucket lookups

* deprecate non map code

* revamp to remove old index prefixes

* create indices from data

* create indices from data

* fetch block roots by slot range

* better abstractions

* simpler abstractions

* roots rename

* comment

* preston feedback

* Fixed existing tests

* allow blocks without parent root

* Cleaned up a few things

* Removed todo

* Lint

* Cleaned up a few things

* A few functions don't need to be exported

* Gaz

* Fixed visibility

* Review feedback

* Review feedback part1

* Raul's feedback, refactored OnBlock and OnAttestation to its own file

* Fixed grammar

* Lint

* Implemented ReceiveAttestation

* Use time.Time

* Implemented ReceiveAttestation

* All tests pass

* Lint

* Oooops

* Typo
2019-08-22 20:00:55 -05:00
Preston Van Loon
b59b3ec09c P2P implement message send (#3278)
* return a stream with send, for reading response

* gofmt

* added sender impl

* fix imports
2019-08-22 19:02:46 -04:00
Raul Jordan
8f01b76366 Integrate DB Refactor Into Ethereum APIs Beacon Chain Server (#3245)
* deprecate db

* fix build

* begin integrating new db

* gaz

* use more of the new db

* newest implementation uses head state

* remove more deprecated items

* setup validators in state helper

* fix up some tests with the new db

* resolve broken build

* gaz

* begin ensuring tests pass

* optional idx

* list validator balances passing

* default page size passing

* only two failing

* fixed most tests, found edge case

* allow nil return and add proper tests

* pass tests

* fix head block root problem

* working with the new db

* every ethereumapis method now compliant with both dbs

* pass in db into server
2019-08-22 15:28:53 -05:00
Sylvain Laurent
4e25f6d78f Fix testnet URL to production link (#3276) 2019-08-22 15:34:42 -04:00
Preston Van Loon
ce28feea45 Regular sync: pubsub subscriber for beacon blocks (#3220)
* add validation

* add block db check in validation

* merge

* in memory caching of seen blocks

* basic block processing

* Update BUILD.bazel

* use new receiveBlockNoPubsub

* fix build

* add TODO issue numbers

* add TODO issue numbers

* lint
2019-08-22 13:11:52 -05:00
terence tsao
2e352cf5ff Add getter for GenesisTime (#3274)
* Add getter for genesis time

* Lint

* Lint
2019-08-22 11:41:05 -05:00
Nishant Das
e0d3e78746 Add Support for Static Peering (#3272)
* add test and support for static peering

* gaz

* remove delay

* add log

* handle all peers
2019-08-22 10:23:16 -05:00
Raul Jordan
bb542d2032 Basic Block/LatestVote Caching in DB Refactor (#3249)
* add block caching layer

* runlock

* lockinggg

* latest votes map

* validator latest vote deletion method added to interface

* seguin cache working

* cache size

* impl interface

* initialize caches at struct layer
2019-08-22 10:04:13 -05:00
Nishant Das
36c9a5665d Implement Proposer Slashing Handler (#3273)
* validate proposer slashing

* add comment

* add handler

* add

* remove

* gaz
2019-08-22 20:01:56 +05:30
Nishant Das
83083b9c65 Fix BLS Aggregation Method (#3269)
* lint

* update to new method

* fix all tests
2019-08-22 11:45:02 +05:30
Nishant Das
c09a6b87c3 Implement Attester Slashing Handler in Sync (#3260)
* add validation

* add test

* add new changes

* fix lint

* Update beacon-chain/sync/validate_attetser_slashing_test.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

* terence's comments

* change key
2019-08-22 11:04:25 +05:30
terence tsao
b91639a32e Deprecate old block chain service (#3268)
* seperate out block chain services

* Fix blockchain service config

* Gazelle

* Fixed tests
2019-08-21 19:14:24 -06:00
Preston Van Loon
4b17711702 fix startup contract check (#3267) 2019-08-21 17:48:29 -06:00
Raul Jordan
de82956088 Include Voluntary Exits Definitions in DB Refactor (#3266)
* new interface methods

* support proposer slashings

* add in the new buckets

* all crud for proposer slashings

* attester slashings complete

* all slashings crud done

* right comment

* deposit contract tests pass

* delete out of scope methods

* conform old beacon DB

* comment

* include interface implementations

* deprecations

* pass lint
2019-08-21 16:32:44 -05:00
Raul Jordan
3ca4d6fd91 Add Deposit Contract Methods to DB Refactor (#3264)
* new interface methods

* support proposer slashings

* add in the new buckets

* all crud for propoer slashings

* attester slashings complete

* all slashings crud done

* right comment

* deposit contract tests pass

* delete out of scope methods

* conform old beacon DB

* comment

* deprecations

* pass lint

* Update deposit_contract.go
2019-08-21 16:11:50 -05:00
Preston Van Loon
01de412956 Minor runtime fixes for --next (#3265)
* some runtime fixes

* fixes

* fixes

* fixes

* fixes

* fixes
2019-08-21 16:58:38 -04:00
Raul Jordan
8fef74ab25 Block Slashings CRUD Methods DB Refactor (#3261)
* new interface methods

* support proposer slashings

* add in the new buckets

* all crud for proposer slashings

* attester slashings complete

* all slashings crud done

* right comment

* delete out of scope methods

* conform old beacon DB
2019-08-21 15:21:04 -05:00
terence tsao
bfbff885fe Part 7 of update fork choice - chain info access (#3263)
* Implemented new fork choice service and helpers

* Added rest of the tests

* Lint

* Add back helpers test

* Add benchmark tests

* Add yaml driven framework tests

* Reformatted to doc, helpers and metrics.go

* include new getter for block

* create block filters from indices

* give every block index a unique bucket

* construct block indices by bucket mmap

* almost done save for the block filters

* include block filters, need a few more small touches for fetching the proper indices by bucket

* full functionality to filter by parent root

* tests pass when using the same logic as attestations

* todo

* proper todo formatting

* first minimum slot range filter

* slot range filters pass

* more filter criteria passing

* tests passing

* add todos

* all block tests pass and work

* rem fmt

* range retrieval test

* fixed test conditions

* Implemented new receive block methods

* Comments

* Remove mark evil block

* instantiate the other buckets

* simplify bucket lookups

* deprecate non map code

* revamp to remove old index prefixes

* create indices from data

* create indices from data

* fetch block roots by slot range

* better abstractions

* simpler abstractions

* roots rename

* comment

* preston feedback

* Fixed existing tests

* allow blocks without parent root

* Cleaned up a few things

* Removed todo

* Lint

* Cleaned up a few things

* A few functions don't need to be exported

* Gaz

* Fixed visibility

* Review feedback

* Review feedback part1

* Raul's feedback, refactored OnBlock and OnAttestation to its own file

* Fixed grammar

* Lint

* Renamed to receive_block.go

* Use time.Time

* Preston's feedback, removed OnTick and Store.time

* Dont have to cast it to kv

* add block caching layer

* runlock

* lockinggg

* Fixed

* Avoid 2 fetches of the same data

* latest votes map

* Gaz

* Test passes

* Lint

* Fixed db set up

* Fixed all the tests

* Gazelle

* Added tests

* Remove todo

* remove kv

* Last clean up

* Last clean up

Last clean up

* Lint

* Preston's feedback

* Starting

* Gazelle
2019-08-21 14:50:27 -05:00
terence tsao
b440891aea Part 5 of update fork choice - implement new ReceiveBlock (#3242)
* Implemented new fork choice service and helpers

* Added rest of the tests

* Lint

* Add back helpers test

* Add benchmark tests

* Add yaml driven framework tests

* Reformatted to doc, helpers and metrics.go

* include new getter for block

* create block filters from indices

* give every block index a unique bucket

* construct block indices by bucket mmap

* almost done save for the block filters

* include block filters, need a few more small touches for fetching the proper indices by bucket

* full functionality to filter by parent root

* tests pass when using the same logic as attestations

* todo

* proper todo formatting

* first minimum slot range filter

* slot range filters pass

* more filter criteria passing

* tests passing

* add todos

* all block tests pass and work

* rem fmt

* range retrieval test

* fixed test conditions

* Implemented new receive block methods

* Comments

* Remove mark evil block

* instantiate the other buckets

* simplify bucket lookups

* deprecate non map code

* revamp to remove old index prefixes

* create indices from data

* create indices from data

* fetch block roots by slot range

* better abstractions

* simpler abstractions

* roots rename

* comment

* preston feedback

* Fixed existing tests

* allow blocks without parent root

* Cleaned up a few things

* Removed todo

* Lint

* Cleaned up a few things

* A few functions don't need to be exported

* Gaz

* Fixed visibility

* Review feedback

* Review feedback part1

* Raul's feedback, refactored OnBlock and OnAttestation to its own file

* Fixed grammar

* Lint

* Renamed to receive_block.go

* Use time.Time

* Preston's feedback, removed OnTick and Store.time

* Dont have to cast it to kv

* add block caching layer

* runlock

* lockinggg

* Fixed

* Avoid 2 fetches of the same data

* latest votes map

* Gaz

* Test passes

* Lint

* Fixed db set up

* Fixed all the tests

* Gazelle

* Added tests

* Remove todo

* remove kv

* Last clean up

* Last clean up

Last clean up

* Lint

* Preston's feedback
2019-08-21 13:40:00 -04:00
Nishant Das
79e57e8e8e Add Flag for SSZ Encoding (#3256)
* add flag and enum

* change help message

* linter

* add flag

* add comment

* one more comment

* fix panic

* preston's comments

* add panic
2019-08-21 12:33:48 -04:00
Preston Van Loon
acb20e269c Add flags to support new database, new sync (#3252) 2019-08-21 10:04:00 -06:00
Nishant Das
0f123ae562 Change log to node's URL (#3255) 2019-08-21 09:30:22 -06:00
Nishant Das
3cb32c3792 Implement Discv5 in Prysm (#3211)
* add discovery

* gaz

* add build options

* add udpPort

* add more changes

* refactor private key

* added discovery loop

* add ttl

* add ttl

* use ip type instead of string

* tests pass

* gaz and new test file

* add test

* add more tests

* add one more test

* adding multiAddr tests

* adding new protocol , listener

* fix keys

* more fixes

* more changes dialing peers works now

* gaz

* add more changes

* add more changes

* gaz

* add new test helpers

* new test

* fixed all tests

* gaz

* reduce sleep

* lint

* new changes

* change formats

* fix all this stuff

* remove discv5 protocol

* remove protocol

* remove port condition,too restrictive

* preston's feedback

* preston's feedback

* close all peers

* gaz

* remove unused func

* Update beacon-chain/p2p/service.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* remove build options

* refactor tests
2019-08-21 11:38:30 +05:30
terence tsao
8fc3c55199 Forkchoice get head tiebreaker (#3253) 2019-08-20 21:26:04 -06:00
Preston Van Loon
e146bc35c0 revert this again (#3254) 2019-08-20 23:10:00 -04:00
terence tsao
6195a0bfa1 Fix GetHeadFromYaml is flakey (#3251)
* Forgot clear cache needs to be within the loop

* space
2019-08-20 19:57:35 -06:00
terence tsao
e330fa5733 Part 3 of fork choice update - yaml tests (#3213) 2019-08-20 16:20:54 -06:00
terence tsao
1c4b7329f2 Part 2 of fork choice update - benchmark tests (#3212) 2019-08-20 16:13:20 -06:00
Raul Jordan
121a277726 add better godocs (#3250) 2019-08-20 18:01:55 -04:00
Raul Jordan
900b550864 add all proper spans to methods (#3248) 2019-08-20 14:35:48 -05:00
Preston Van Loon
3f0d1c1d41 Reg sync beacon blocks (#3218)
* checkpoint

* checkpoint

* varint prefix for ssz

* move the encoding API around a little bit to support reader writer

* add a simple test for the happy path subscribe

* move wait timeout to testutil

* Add inverted topic mapping

* Add varint prefixing to ssz network encoder

* fix spacing

* fix comments

* fix comments

* make anon methods more clear

* clean up log fields

* move topic mapping, reformat TODOs, get ready for brutal team review

* lint

* lint

* lint

* Update beacon-chain/p2p/gossip_topic_mappings.go

Co-Authored-By: Nishant Das <nishdas93@gmail.com>

* PR feedback

* PR feedback

* PR feedback

* PR feedback

* PR feedback

* basic test with a hardcoded fork choice

* updated beacon config to use genesis fork version

* checkpoint on hello handler

* create a testing db method that can be used with the new database interface

* lint

* lint

* PR feedback

* checkpoint

* passing tests

* comments, errors

* comments, errors

* remove swarm

* Add basic sanity test, naive implementation

* add test with real data
2019-08-20 14:06:49 -05:00
terence tsao
01bbc552cd Part 1 of fork choice update - fork choice as a service (#3209) 2019-08-20 12:26:43 -06:00
Raul Jordan
8f967d26d7 Allow Nil Return from DB Methods (#3247)
* allow nil return and add proper tests

* pass tests
2019-08-20 12:24:29 -05:00
Raul Jordan
a7d336a7d0 Misc Blocks DB Improvements (#3244)
* allow string constructions

* fix all other cases with formatting
2019-08-20 10:09:10 -05:00
Raul Jordan
1c8ac6658e Implement Block DB Methods (#3221)
* include new getter for block

* create block filters from indices

* give every block index a unique bucket

* construct block indices by bucket mmap

* almost done save for the block filters

* include block filters, need a few more small touches for fetching the proper indices by bucket

* full functionality to filter by parent root

* tests pass when using the same logic as attestations

* todo

* proper todo formatting

* first minimum slot range filter

* slot range filters pass

* more filter criteria passing

* tests passing

* add todos

* all block tests pass and work

* rem fmt

* range retrieval test

* fixed test conditions

* instantiate the other buckets

* simplify bucket lookups

* deprecate non map code

* revamp to remove old index prefixes

* create indices from data

* create indices from data

* fetch block roots by slot range

* better abstractions

* simpler abstractions

* roots rename

* comment

* preston feedback

* allow blocks without parent root
2019-08-19 19:34:53 -05:00
Preston Van Loon
0b8cbd06b6 Add flag for testing new p2p (#3243)
* refactor a bit to select p2p

* lint

* fix build

* fix build

* fix build

* fix build

* fix build
2019-08-19 17:20:56 -04:00
Nishant Das
b7b62e24ad new changes (#3241) 2019-08-19 12:31:15 -04:00
Preston Van Loon
e88bbaf614 Block networking in sandbox test by default, fix roughtime panic (#3240)
* block networking in sandbox test by default, fix roughtime panic

* Update .bazelrc
2019-08-19 12:13:05 -04:00
terence tsao
6ac0d12f5b Part 4 of update fork choice - mark old functions deprecated (#3215)
* Mark these soon to be deprecated functions as "deprecated"

* Deprecate all

* Marked fork choice reorg test as deprecated

* Marked fork choice reorg test as deprecated

* Gaz
2019-08-19 11:34:25 -04:00
terence tsao
4c1ff2a897 Mega renovate updates (#3239)
* Update dependency build_bazel_rules_nodejs to v0.35.0

* Update io_bazel_rules_k8s commit hash to b815470

* Update dependency build_bazel_rules_nodejs to v0.36.0

* Update dependency com_github_googleapis_gnostic to v0.3.1

* Update dependency com_github_mattn_go_isatty to v0.0.9

* Update dependency com_google_cloud_go to v0.44.3

* Update dependency io_bazel_rules_go to v0.19.3

* Update libp2p

* Update io_bazel_rules_k8s commit hash to b799dd0
2019-08-19 09:57:19 -04:00
Raul Jordan
5f2e0493eb Fix Slice Union Helpers for Variadic Arguments (#3228)
* union changes

* slice util fixes
2019-08-18 22:42:50 -05:00
Nishant Das
16c5d96e6a Change BootNode to use Discv5 instead of Kademlia (#3203)
* add new test

* specify ecdsa keygen

* skip test

* fix ref

* comment again

* fix test and clean up

* gaz

* change to another format

* Apply suggestions from code review

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* fix docker build

* add close
2019-08-19 01:24:20 +05:30
Preston Van Loon
a26ef9b44f Regular sync: pubsub subscriber for voluntary exits (#3227)
* voluntary exit validator & handler skeleton

* pass through && register

* voluntary exits

* voluntary exits

* voluntary exits

* gaz

* lint
2019-08-18 11:33:58 -04:00
Preston Van Loon
b8e550b1e9 Add p2p broadcast implementation (#3226)
* add broadcaster impl

* change API so broadcast returns an error

* change API so broadcast returns an error

* add test for message not mapped

* lint msg

* lint msg
2019-08-18 00:32:39 -04:00
Preston Van Loon
68210eb733 remove the fork version changes since this would be required change for deposits and needs to be more carefully thought about (#3224) 2019-08-17 11:12:05 -04:00
Preston Van Loon
78bf39aff7 sync RPC: Hello handler (#3216)
* checkpoint

* checkpoint

* varint prefix for ssz

* move the encoding API around a little bit to support reader writer

* add a simple test for the happy path subscribe

* move wait timeout to testutil

* Add inverted topic mapping

* Add varint prefixing to ssz network encoder

* fix spacing

* fix comments

* fix comments

* make anon methods more clear

* clean up log fields

* move topic mapping, reformat TODOs, get ready for brutal team review

* lint

* lint

* lint

* Update beacon-chain/p2p/gossip_topic_mappings.go

Co-Authored-By: Nishant Das <nishdas93@gmail.com>

* PR feedback

* PR feedback

* PR feedback

* PR feedback

* PR feedback

* basic test with a hardcoded fork choice

* updated beacon config to use genesis fork version

* checkpoint on hello handler

* create a testing db method that can be used with the new database interface

* lint

* lint

* PR feedback

* checkpoint

* passing tests

* comments, errors

* comments, errors

* remove swarm

* Update WORKSPACE

* Update WORKSPACE

* merge

* lint

* lint

* touch

* touch

* imports
2019-08-16 16:03:11 -04:00
Preston Van Loon
81f868bd48 Regular Sync - First Pass (#3201)
* checkpoint

* checkpoint

* varint prefix for ssz

* move the encoding API around a little bit to support reader writer

* add a simple test for the happy path subscribe

* move wait timeout to testutil

* Add inverted topic mapping

* Add varint prefixing to ssz network encoder

* fix spacing

* fix comments

* fix comments

* make anon methods more clear

* clean up log fields

* move topic mapping, reformat TODOs, get ready for brutal team review

* lint

* lint

* lint

* Update beacon-chain/p2p/gossip_topic_mappings.go

Co-Authored-By: Nishant Das <nishdas93@gmail.com>

* PR feedback

* PR feedback

* PR feedback

* PR feedback

* PR feedback

* Update WORKSPACE

* Update WORKSPACE
2019-08-16 13:13:04 -04:00
Raul Jordan
6ec9d7e6e2 Utilize Indices for Key Lookup and Filtering Attestations in DB (#3202)
* begin indices approach

* use shard bucket

* continue the indices approach

* eliminate the filter checkers in favor of the single loop of root lookups

* elim extraneous println statement

* continue the indices approach

* intersection for multiple filter types works, but is complex, verbose, and nearly unreadable

* remove unused code

* table drive tests for byte slice intersections

* include all table driven tests

* gazelle imports

* better abstractions

* better comments

* variadic approach working

* transform to variadic

* comments

* comments

* separate bucket for indices for faster range scans

* attestation key as hash tree root of data and different indices buckets

* test pass

* default behavior without filter

* appropriate filter criterion errors if criterion does not apply to type

* better abstractions and prune keys on deletion

* better naming

* fix build

* fix build

* rem extraneous code
2019-08-15 19:57:43 -05:00
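The approach these bullets walk through — one bucket per index type, with queries answered by intersecting the root lists each filter criterion yields — can be sketched against bbolt. The bucket layout (packed 32-byte roots under each index key) is an assumption for illustration; Prysm's actual schema may differ:

```go
package db

import (
	bolt "go.etcd.io/bbolt"
)

// rootsByIndex reads the packed list of 32-byte attestation roots stored
// under one index key (e.g. a shard or parent-root index bucket).
func rootsByIndex(tx *bolt.Tx, bucket, key []byte) [][]byte {
	bkt := tx.Bucket(bucket)
	if bkt == nil {
		return nil
	}
	enc := bkt.Get(key)
	roots := make([][]byte, 0, len(enc)/32)
	for i := 0; i+32 <= len(enc); i += 32 {
		roots = append(roots, enc[i:i+32])
	}
	return roots
}

// intersectRoots keeps only roots present in both lists; applying it
// across all filter criteria yields the final set of keys to fetch.
func intersectRoots(a, b [][]byte) [][]byte {
	seen := make(map[string]bool, len(a))
	for _, r := range a {
		seen[string(r)] = true
	}
	var out [][]byte
	for _, r := range b {
		if seen[string(r)] {
			out = append(out, r)
		}
	}
	return out
}
```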
Preston Van Loon
eb192049b8 Create a testing db method that can be used with the new database interface (#3217)
* create a testing db method that can be used with the new database interface

* lint

* lint

* PR feedback

* Update setup_db.go
2019-08-15 17:41:51 -04:00
Raul Jordan
65ee6eb3af Refactor Slice Utils as Variadic Functions (#3206)
* variadic approach working

* transform to variadic

* comments

* all variadic funcs simplified and tests passing
2019-08-14 17:08:53 -05:00
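The variadic refactor lets callers intersect any number of slices in a single call. A sketch of the shape, with the package and function names assumed rather than taken from Prysm's sliceutil:

```go
package sliceutil

// IntersectionUint64 returns the values present in every input slice.
// As a variadic function it handles zero, one, or many slices uniformly.
func IntersectionUint64(slices ...[]uint64) []uint64 {
	if len(slices) == 0 {
		return nil
	}
	counts := make(map[uint64]int)
	for _, s := range slices {
		seen := make(map[uint64]bool)
		for _, v := range s {
			if !seen[v] { // count each value at most once per slice
				counts[v]++
				seen[v] = true
			}
		}
	}
	var out []uint64
	for v, c := range counts {
		if c == len(slices) {
			out = append(out, v)
		}
	}
	return out
}
```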
Preston Van Loon
c0627e29a8 Add varint prefixing to ssz network encoder (#3210)
* Add varint prefixing to ssz network encoder

* fix spacing

* fix comments

* fix comments

* Update varint.go
2019-08-14 16:18:32 -05:00
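Varint length-prefixing lets a stream reader know exactly how many bytes of SSZ payload follow. A minimal sketch using the standard library's binary varint helpers; the real encoder wraps this into its Encode/Decode interface:

```go
package encoder

import (
	"bufio"
	"encoding/binary"
	"errors"
	"io"
)

// writePrefixed writes len(msg) as a uvarint header, then the payload,
// so the reader knows how many SSZ bytes to consume.
func writePrefixed(w io.Writer, msg []byte) error {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, uint64(len(msg)))
	if _, err := w.Write(buf[:n]); err != nil {
		return err
	}
	_, err := w.Write(msg)
	return err
}

// readPrefixed reads the uvarint length, bounds-checks it, then reads
// exactly that many payload bytes.
func readPrefixed(r *bufio.Reader, maxSize uint64) ([]byte, error) {
	size, err := binary.ReadUvarint(r)
	if err != nil {
		return nil, err
	}
	if size > maxSize {
		return nil, errors.New("message length exceeds maximum")
	}
	msg := make([]byte, size)
	if _, err := io.ReadFull(r, msg); err != nil {
		return nil, err
	}
	return msg, nil
}
```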
shayzluf
df65a8d118 watchtower api (#3134) 2019-08-14 12:16:17 -07:00
terence tsao
11ac9585ad Deprecate the old, and add new DB setup util for tests (#3208) 2019-08-14 11:48:28 -07:00
Raul Jordan
d0bdbe5a33 Add Byte Slice Intersection Utils (#3204)
* remove unused code

* table drive tests for byte slice intersections

* include all table driven tests

* gazelle imports

* imports
2019-08-14 10:27:18 -05:00
terence tsao
072bb4be27 Add gossipsub parameter test (#3200)
* added test for parameters

* Fixed test
2019-08-14 08:26:03 -04:00
Preston Van Loon
5b7182cf18 ssz network encoder (with snappy compression) (#3198)
* move to deprecated-p2p

* fix lint

* Add boilerplate p2p

* lint?

* fix imports

* fix lint

* lint

* lint

* lint

* lint

* comment

* skeleton

* checkpoint

* add a new message that should work with ssz

* add ssz fix and test snappy encoder

* clarify todo

* fix viz

* move, no need to be in subpackage

* testing pb

* end nl

* use merged ssz
2019-08-13 21:37:45 -04:00
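With snappy in the pipeline, the encoder compresses the SSZ bytes on write and decompresses on read. A sketch using github.com/golang/snappy's block format; the production encoder's framing details may differ:

```go
package encoder

import (
	"github.com/golang/snappy"
	ssz "github.com/prysmaticlabs/go-ssz"
)

// encodeSnappy SSZ-marshals msg and snappy-compresses the result.
func encodeSnappy(msg interface{}) ([]byte, error) {
	enc, err := ssz.Marshal(msg)
	if err != nil {
		return nil, err
	}
	return snappy.Encode(nil, enc), nil
}

// decodeSnappy decompresses data and SSZ-unmarshals it into out.
func decodeSnappy(data []byte, out interface{}) error {
	dec, err := snappy.Decode(nil, data)
	if err != nil {
		return err
	}
	return ssz.Unmarshal(dec, out)
}
```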
Nishant Das
1eb29a2394 Clean Up In Memory Deposits in DB (#3065)
* lint

* clean up deposits in db

* fix all references

* fixed tests

* lint

* bring it into a separate package

* fix lint

* move test

* fix ref

* fix test

* fix test

* fix test
2019-08-13 19:13:47 -04:00
Preston Van Loon
8ea586a3e6 add new p2p messages for RPC (#3199) 2019-08-13 18:48:25 -04:00
terence tsao
27319a8990 Implement State DB Methods (#3193)
* Added state implementation

* Gaze

* Fixed test

* Fixed build file

* Fixed all tests

* Merged with master

* Added comments to save and get from roots

* Make it explicit signing root

* s/./,

* s/marshalled/marshaled
2019-08-13 18:33:31 -04:00
Preston Van Loon
d2186726a3 New p2p package (#3196) 2019-08-13 14:12:00 -07:00
Preston Van Loon
82efca9b6f Move p2p to deprecated-p2p (#3191)
* move to deprecated-p2p

* fix lint

* lint?

* fix lint

* lint

* lint

* lint

* lint
2019-08-13 14:52:04 -04:00
Preston Van Loon
e31792f999 deprecate node p2p config (#3192) 2019-08-13 14:24:09 -04:00
skillful-alex
4e886a84f9 Added roughtime to IsSlotValid and fixed test TestIsValidBlock_InvalidSlot (#3186)
* add roughtime to IsSlotValid

* gazelle

* gofmt -s
2019-08-13 13:59:11 -04:00
terence tsao
a3ac250ac1 Mark deprecated protobuf p2p msgs (#3194)
* Mark deprecated msgs

* Deprecate all

* Build
2019-08-13 13:33:04 -04:00
Raul Jordan
655f5830f4 Implement Blocks DB Methods (#3195) 2019-08-13 09:49:27 -07:00
Preston Van Loon
856dde497b Move sync to deprecated- prefix (#3190)
* Move sync to deprecated_ prefix

* do not use underscore

* fix
2019-08-13 12:35:34 -04:00
Raul Jordan
8d8849feed Implement Attestations DB Methods (#3183)
* begin db interface

* define the database interface

* interface definition simplifications

* include latest message proto

* modify pbs

* rem kv folder

* add filter interface

* lint

* ctx package is great

* interface getting better

* ctx everywhere...it's everywhere!

* block roots method

* new kv store initialization

* comments

* gaz

* implement interface

* refactor for proper naming conventions

* add todos

* proper comments

* rem unused

* add schema

* implementation simplicity

* has validator latest vote func impl

* retrieve validator latest vote

* has idx

* implement missing validator methods

* missing validator methods and test helpers

* validator index crud tests

* validator tests

* save attestation implementation

* attestation basic methods

* batch  save

* all buckets

* refactor with ok bool

* retrieval by root working

* todo for has attestations

* all tests passing, fmt, imports

* generate key use helper

* most att methods complete

* crud tests passing

* closer and closer to filtering all atts

* default no filter

* filter criteria functioning

* simplified conditional

* filter criteria func

* filter criteria

* filter criteria for atts there

* query filter map strategy

* internal filter api complete

* comments

* complete the passing of all other tests using criteria met

* imports

* fix broken build:

* breaking arg

* import sort groups

* keygen outside tx

* address feedback
2019-08-13 11:04:33 -05:00
Preston Van Loon
fa0ef76561 Lower Jaeger BufferMaxCount (#3188)
* Lower BufferMaxCount

* revert tools/cluster-pk-manager/client/main.go
2019-08-12 22:50:47 -04:00
Raul Jordan
551ed1d335 Internal Filter Criteria Builder API (#3185)
* query filter map strategy

* internal filter api complete

* comments

* terence feedback
2019-08-12 20:55:28 -05:00
Preston Van Loon
0ab969a87d fix panic on invalid bls key (#3184) 2019-08-12 13:58:25 -07:00
Raul Jordan
6bd8ae8f67 Implement Validator DB Methods (#3172)
* begin db interface

* define the database interface

* interface definition simplifications

* include latest message proto

* modify pbs

* rem kv folder

* add filter interface

* lint

* ctx package is great

* interface getting better

* ctx everywhere...it's everywhere!

* block roots method

* new kv store initialization

* comments

* gaz

* implement interface

* refactor for proper naming conventions

* add todos

* proper comments

* rem unused

* add schema

* implementation simplicity

* has validator latest vote func impl

* retrieve validator latest vote

* has idx

* implement missing validator methods

* missing validator methods and test helpers

* validator index crud tests

* validator tests

* all buckets

* refactor with ok bool

* all tests passing, fmt, imports
2019-08-12 14:33:07 -05:00
terence tsao
715b9cd5ba Save head block root for new DB refactor (#3182)
* Save head block root instead of save head state

* Revert state
2019-08-12 12:13:30 -04:00
terence tsao
212f8d6c3f Mega renovate updates (#3181)
* Update io_bazel_rules_k8s commit hash to b815470

* Update io_kubernetes_build commit hash to 9f4571a

* Update dependency com_github_google_go_cmp to v0.3.1

* Update libp2p
2019-08-12 10:33:39 -04:00
Nishant Das
22df351e89 Request Missing Logs (#3173)
* add request for missing logs

* fix formatting

* add metric

* add test

* fix test
2019-08-12 06:59:57 -04:00
renovate[bot]
06950907c8 Update dependency com_google_cloud_go to v0.44.0 (#3178) 2019-08-11 21:55:48 -04:00
Preston Van Loon
4c62b0410f Filter deposits by index and block number (#3171)
* filter deposits by index and block number

* fix test and inverted logic

* nishant feedback

* -1

* Update beacon-chain/db/deposits_test.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* raul feedback: explicit variables

* revert everything :(

* a better fix

* fix +1

* fix test
2019-08-11 15:54:39 -04:00
Preston Van Loon
a938274d57 Return errors in log processing (#3170)
* Return errors in log processing

* error formatting
2019-08-11 10:39:22 -04:00
Raul Jordan
dce9c41094 Define Interface Stubs for New DB Interface (#3164)
* begin db interface

* define the database interface

* interface definition simplifications

* include latest message proto

* modify pbs

* rem kv folder

* add filter interface

* lint

* ctx package is great

* interface getting better

* ctx everywhere...it's everywhere!

* block roots method

* new kv store initialization

* comments

* gaz

* implement interface

* refactor for proper naming conventions

* add todos

* proper comments

* rem unused
2019-08-10 20:50:10 -04:00
Ivan Martinez
bb2d79be85 Aggregate attestations before adding into the DB (#3149)
* Implement Attestation Aggregation before inserting into the DB

* Nearly complete test for aggregating signatures

* Finish tests for aggregating signatures

* gazelle

* Rename tests

* add lock and advance state

* only advance if necessary

* Fix most tests

* Fix more of DB keys and changing keys to data hashes

* Fix a lot of tests and inconsistencies

* fix lock

* gaz

* undo local changes

* fix ref

* fix ref

* Fix some tests

* clear cache

* fix sync for attestations

* finally working across multiple nodes

* gen proto

* lint

* properly wrap error
2019-08-10 16:13:04 -04:00
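Aggregating before insert means merging attestations that share identical AttestationData: OR the aggregation bitfields together and aggregate the BLS signatures. A simplified sketch of the bitfield half using plain byte slices; the signature aggregation itself goes through Prysm's BLS package and is omitted here:

```go
package operations

import "errors"

// orBits merges two equal-length aggregation bitfields into one.
func orBits(a, b []byte) ([]byte, error) {
	if len(a) != len(b) {
		return nil, errors.New("bitfield length mismatch")
	}
	out := make([]byte, len(a))
	for i := range a {
		out[i] = a[i] | b[i]
	}
	return out, nil
}

// overlaps reports whether any validator is set in both bitfields, in
// which case the two signatures cannot simply be aggregated together.
func overlaps(a, b []byte) bool {
	for i := range a {
		if a[i]&b[i] != 0 {
			return true
		}
	}
	return false
}
```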
Preston Van Loon
830a0a4bca Fix mid-epoch assignment requests (#3168)
* fix mid-epoch assignments

* add quick test comment
2019-08-10 15:49:58 -04:00
terence tsao
d153abd992 Remove attestation announcement (#3165) 2019-08-09 16:31:44 -07:00
Raul Jordan
3d63bca127 Define New DB Interface (#3163)
* begin db interface

* define the database interface

* interface definition simplifications

* include latest message proto

* modify pbs

* rem kv folder

* add filter interface

* lint

* ctx package is great

* interface getting better

* ctx everywhere...it's everywhere!

* block roots method
2019-08-09 15:17:18 -04:00
Jean-André Santoni
e1dfe73525 Query a roughtime server to mitigate NTP attacks (#3151) 2019-08-09 07:05:08 -07:00
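The roughtime integration boils down to measuring the offset between an authenticated time source and the local clock, then applying that offset everywhere the node asks for "now". A sketch of the pattern; the server query itself is elided, and this is not Prysm's actual shared/roughtime implementation:

```go
package roughtime

import "time"

// offset is the measured difference between authenticated roughtime
// and the local clock, set once at startup after querying servers.
var offset time.Duration

// Now returns the local clock adjusted by the measured offset, so a
// skewed or NTP-attacked system clock does not mislead slot timing.
func Now() time.Time {
	return time.Now().Add(offset)
}
```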
Preston Van Loon
d860dbbb60 Fix deposits at genesis eth1data (#3161)
* Exclude additional deposits from genesis block

* fix tests

* add test to cover this scenario
2019-08-08 23:26:18 -04:00
terence tsao
32c426ed1b Replaced block and state roots construction to SlotsPerHistoricalRoot (#3160) 2019-08-08 13:12:35 -05:00
terence tsao
b3e29399aa Reset array size to SlotsPerHistoricalRoot (#3158) 2019-08-08 08:03:24 -07:00
terence tsao
2ec8a46cb2 forgot to wrap these two errors (#3156) 2019-08-08 06:55:25 -07:00
Preston Van Loon
ccc7d8d7b7 Update BLS with @protolambda's improvements (#3152)
* Add @protolambda's fork until https://github.com/phoreproject/bls/pull/11

* update workspace
2019-08-07 22:54:33 -04:00
Raul Jordan
4e041c852b utilize newest ssz and fix build (#3155) 2019-08-07 20:24:04 -05:00
Preston Van Loon
cb5c920502 Add quick bls benchmark (#3148) 2019-08-05 17:11:38 -07:00
Nishant Das
9ec54ae432 Optimize Verification Of Signatures in Attestations (#3146)
* add few changes

* add process attestation no verify

* gaz

* add reg test

* revert config

* add new method

* fix test

* preston's review

* preston's review

* space
2019-08-05 10:35:47 -04:00
terence tsao
dec694916b Renovate updates in batch (#3145)
* Update io_bazel_rules_k8s commit hash to 5648b17

* Update dependency build_bazel_rules_nodejs to v0.35.0

* Update libp2p

* Update dependency com_github_urfave_cli to v1
2019-08-05 09:33:40 -04:00
renovate[bot]
64f7569894 Update graknlabs_bazel_distribution commit hash to 8dc6490 (#3138) 2019-08-05 00:37:19 -04:00
renovate[bot]
7d2bb5878f Update dependency io_bazel_rules_docker to v0.9.0 (#3141) 2019-08-04 23:33:51 -04:00
terence tsao
bccd2f95cc Finish error wrapping (#3135) 2019-08-04 15:45:03 -07:00
Preston Van Loon
d97b691f7d stub insertions when flag off (#3131) 2019-08-03 16:28:04 -04:00
terence tsao
d59800210a Skip empty criteria for public key instead of fail (#3130) 2019-08-03 10:23:10 -07:00
terence tsao
7e819990f6 Reject attestation older than finalized epoch (#3124)
* Reject Att older than current finalized epoch

* Fixed receive att test

* Don't use slot for old fork choice target map, use epoch

* Fixed one last conflict
2019-08-03 10:07:57 -07:00
terence tsao
d6b311ab84 Fix ListBlock RPC bugs (#3126) 2019-08-03 09:22:13 -07:00
Nishant Das
ea09a918d8 Fix Sync Service Status (#3128)
* fix bug

* remove log
2019-08-03 10:37:52 -04:00
Nishant Das
b1fcaa03ae Remove Statistical Package (#3129) 2019-08-03 06:47:46 -07:00
terence tsao
0beb919fc0 Switch over to sha256-simd (#3125) 2019-08-02 14:42:59 -07:00
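minio's sha256-simd mirrors the crypto/sha256 API, so the switch is essentially an import swap with call sites left unchanged. A sketch:

```go
package hashutil

import (
	// Drop-in replacement for crypto/sha256 with SIMD acceleration
	// on CPUs that support it; the package is also named sha256.
	"github.com/minio/sha256-simd"
)

// Hash returns the SHA-256 digest of data.
func Hash(data []byte) [32]byte {
	return sha256.Sum256(data)
}
```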
Preston Van Loon
953c59a302 Wrap errors (#3123) 2019-08-01 19:27:38 -07:00
terence tsao
7eca7ba43b Fixed spelling of genesis (#3120) 2019-08-01 16:55:50 -04:00
Preston Van Loon
f019e54ebb Remove metrics with high cardinality (#3121)
* remove metrics with high cardinality

* imports
2019-08-01 12:23:52 -04:00
Nishant Das
474fd20123 Fixes when Fetching Pending Deposits (#3119)
* fix bug and add reg test

* fix test

* add better test
2019-08-01 11:12:54 -04:00
terence tsao
08ac1c3c35 Remove nil for assignment when slot is 0 (#3117) 2019-07-31 17:46:11 -05:00
Nishant Das
b504d3beb8 Attestation Fixes (#3113)
* change to hashTreeRoot

* remove function and run gaz

* fix panic

* remove cache and add fix

* Revert "remove cache and add fix"

This reverts commit 735986a2db.

* add back fix

* comment out

* refactor and reg test

* some more fixes

* fix tests

* todo

* Revert config changes

* fix test

* Apply suggestions from code review

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/rpc/attester_server.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* preston's review
2019-07-31 14:58:03 -04:00
terence tsao
da551f688d Removed extra state root check (#3114) 2019-07-31 12:57:29 -04:00
Nishant Das
57d60d681a Eth1Data VoteCount Fix (#3105)
* change to hashTreeRoot

* remove function and run gaz

* fix panic

* remove cache and add fix

* Revert "remove cache and add fix"

This reverts commit 735986a2db.

* add back fix

* comment out

* refactor and reg test

* Update beacon-chain/rpc/proposer_server_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Terence PR feedback
2019-07-31 11:00:31 -04:00
terence tsao
f70d94675b Fix assignment to advance state up to requested slot (#3112) 2019-07-30 20:47:34 -07:00
Preston Van Loon
3c3e4a2cb5 Add tests for db deposits (#3111)
* add tests for deposits

* fix imports

* gaz

* travis lint
2019-07-30 12:30:07 -04:00
Preston Van Loon
5f4cdd6095 Need to ignore in progress cache while disabled (#3109) 2019-07-30 01:07:18 -04:00
Preston Van Loon
63cf0f07a2 Disable caches, allow toggle via feature flag (#3107) 2019-07-29 20:38:05 -07:00
terence tsao
68f29967f3 Renovate updates in batch (#3103) 2019-07-29 08:23:34 -07:00
Nishant Das
f72f7677b3 Replace Deposit Hash with HashTreeRoot (#3102)
* change to hashTreeRoot

* remove function and run gaz

* fix panic
2019-07-29 08:43:24 -05:00
terence tsao
3fe0933936 Fixed TestRetrieveAttestations_OK test (#3087) 2019-07-27 17:13:00 -05:00
terence tsao
ad82e84503 Updated pseudocode for get_indexed_attestation (#3086) 2019-07-27 14:47:32 -07:00
Nishant Das
fc907261e9 Fix State Root Verification in Sync (#3085)
* remove caches

* use already imported block package
2019-07-28 02:32:13 +05:30
terence tsao
96c32c3865 Implement ListBlocks RPC function (#3084)
* Implemented rpc list blocks

* Tests for list blocks

* lint

* lint
2019-07-27 15:26:28 -05:00
Nishant Das
8cbd1097d7 Fix Signature Verification for Attestations (#3080)
* fix bug and test

* terence's review
2019-07-26 23:18:44 -05:00
terence tsao
956b07f5c1 Implement state transition no verify (#3048)
* Implemented state transition w/o sig verification

* ExecuteStateTransitionNoValidateStateRoot -> ExecuteStateTransitionNoVerify

* Fixed all the tests

* Extra spaces

* Gazelle

* Conflict

* Added start shard cache back

* typos
2019-07-27 00:24:42 +05:30
Raul Jordan
4ebe2fb5b5 Implement AttestationPool and ListAttestations RPC Functions (#3061) 2019-07-26 10:07:20 -07:00
Raul Jordan
40fca7bb2c Update SSZ Dependency With Tree Hash Cache Enabled (#3076)
* utilize new SSZ

* workspace
2019-07-25 23:49:49 -05:00
Ivan Martinez
ed78f1f406 Require Signature Verification in Randao, Attestations and AttesterSlashings (#3075) 2019-07-25 14:42:13 -07:00
terence tsao
b80d9f4f7f Implement GetValidatorParticipation RPC function (#3069) 2019-07-25 14:01:18 -07:00
terence tsao
c1eeeef853 Implement ListValidatorAssignments RPC function (#3067)
* Need to sync latest ethereum API, will do it in master

* Added pagination wrapper

* Implemented ListValidatorAssignments

* Finished tests

* Fixed tests

* Raul's feedback

* Fmt

* Pagination test

* Removed extra loggings
2019-07-25 15:45:31 -04:00
Nishant Das
88b715d8f6 Fix Key in Eth1Data Cache (#3074) 2019-07-25 11:26:19 -07:00
Ivan Martinez
e452b46873 Remove Optional Signature Verification for VoluntaryExit and BlockHeader (#3053)
* Remove verifySignatures from ProposerSlashings

* Remove flag from process transfers

* resolve all conflicts

* fix more references to old pbs

* Fix merge conflicts

* Remove verifySignature flag from ProcessBlockHeader

* fx spectest

* Fix test errors

* Fix tests

* Fix tests

* Goimports

* Fix test finally

* Move test helpers to testutil

* Goimports

* Fix imports

* Add tests for new helpers

* Run gazelle

* Fix tests
2019-07-25 13:53:46 -04:00
Preston Van Loon
59253afb96 compare hex/hash bytes instead of strings (#3072) 2019-07-25 10:41:15 -04:00
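Comparing raw hash bytes avoids hex-encoding both sides just to compare strings. A small before/after sketch of the change this commit describes:

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
)

func main() {
	a := []byte{0xde, 0xad, 0xbe, 0xef}
	b := []byte{0xde, 0xad, 0xbe, 0xef}

	// Before: allocate two hex strings just to compare them.
	slow := hex.EncodeToString(a) == hex.EncodeToString(b)

	// After: compare the bytes directly, no allocation.
	fast := bytes.Equal(a, b)

	fmt.Println(slow, fast) // true true
}
```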
Preston Van Loon
94a73e847f Log errors in regular sync (#3070)
* add a error level logging for regular sync

* add a error level logging for regular sync

* add a error level logging for regular sync
2019-07-24 22:29:40 -04:00
Preston Van Loon
7d47be84ed Template based protobuf parameters for ssz configurations (#3062)
* WIP on build time configuration changes

* add ssz_minimal tests

* split up spec tests into mainnet and minimal, skip any minimal test that are failing without --define ssz=minimal

* lint

* add commentary to ssz_proto_library
2019-07-24 22:03:05 -04:00
Preston Van Loon
f58afa62af fix docker image builds (#3068) 2019-07-24 15:19:44 -04:00
blacktemplar
9f2543267e Ignore deposits with invalid signatures instead of throwing error (#3011)
* invalid deposit signatures now cause the deposit to be ignored rather than returning an error (per spec)

* add appropriate error log for invalid signature in process deposit

* Small comment improvement

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* improved log

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* add three unit tests for skipping deposits with invalid or uncompressed signatures, plus one for successfully processing a deposit with a valid signature (with signature checking enabled)

* fix panics, still test failures

* adapts tests to use testutil

* add forgotten dependencies

* reordering imports according to goimports
2019-07-24 14:55:40 -04:00
Nishant Das
590aaaf370 Fix Batched Block Response (#3012) 2019-07-24 07:53:38 -07:00
terence tsao
17576af752 Implement GetValidators gRPC server (#3054) 2019-07-23 19:36:35 -07:00
Justin Page
9d0e9fa77d Improve sorted indices check in blocks package (#3060) 2019-07-23 18:17:47 -07:00
terence tsao
41e55a6902 Sync with latest eth api definitions (#3059) 2019-07-23 14:33:25 -07:00
Raul Jordan
930e992e85 Include Stubs for All Beacon Chain Server RPC Methods (#3058)
* add most rpc method stubs

* include all stubs

* use unimplemented error code
2019-07-23 15:29:13 -05:00
Raul Jordan
a2caba9956 Optimize Sparse Merkle Trie (#3056)
* calc tree from leaves simpler

* fast generate proof

* align api to be the same

* ensure tests pass

* err condition

* travis

* fix build

* zero hashes work
2019-07-23 14:17:39 -05:00
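The "zero hashes" bullet is the heart of the sparse-trie optimization: compute the tree layer by layer from the leaves, padding odd layers with the precomputed hash of an empty subtree instead of materializing the full sparse tree. A compact sketch of root computation under those assumptions (depth handling and hashing are illustrative, not Prysm's trieutil):

```go
package trie

import "crypto/sha256"

// merkleRoot hashes leaves pairwise up to a fixed depth, using
// zero[h] (the root of an empty subtree of height h) to pad layers
// that contain an odd number of nodes.
func merkleRoot(leaves [][]byte, depth int) [32]byte {
	// Precompute zero hashes: zero[h+1] = H(zero[h] || zero[h]).
	zero := make([][32]byte, depth+1)
	for h := 0; h < depth; h++ {
		zero[h+1] = sha256.Sum256(append(zero[h][:], zero[h][:]...))
	}
	layer := make([][32]byte, len(leaves))
	for i, l := range leaves {
		copy(layer[i][:], l)
	}
	for h := 0; h < depth; h++ {
		if len(layer)%2 == 1 {
			layer = append(layer, zero[h]) // pad with empty-subtree hash
		}
		next := make([][32]byte, len(layer)/2)
		for i := 0; i < len(layer); i += 2 {
			next[i/2] = sha256.Sum256(append(layer[i][:], layer[i+1][:]...))
		}
		layer = next
	}
	if len(layer) == 0 {
		return zero[depth] // no leaves: root of a fully empty tree
	}
	return layer[0]
}
```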
Preston Van Loon
be514076c1 Remove optional verify signatures argument when verifying deposits (#3052)
* remove optional verification of deposit signatures

* use minimal config for easier setup

* progress

* progress

* Fix a few test errors

* Fix more of tests

* fix imports, gazelle

* fix rpc package

* fix blocks package

* fixed state test

* fixed powchain tests

* add comments

* remove todo

* Update beacon-chain/rpc/validator_server_test.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
2019-07-23 22:19:14 +05:30
Raul Jordan
5374350a1c Standardize Flags at Top Level and Remove Deprecated Utils (#3046)
* move shuffling to core

* remove old utils

* move flags to top level

* package lvl comment removal

* fix up references to flags

* revert node.go

* revert p2p_config.go

* revert main.go

* revert validator node.go

* revert validator main.go

* add flags pkg

* viz

* goimports
2019-07-23 08:58:20 -05:00
terence tsao
d42fab070d Implement ListValidatorBalances gRPC server (#3050)
* Implemented ListValidatorBalances in beacon chain server

* Fmt

* Tests

* Spacings

* goimports

* Apply suggestions from code review, thanks Raul!

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* Update beacon-chain/rpc/beacon_chain_server.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
2019-07-22 21:49:47 -05:00
Raul Jordan
b06876d698 Implement Node gRPC Server (#3049)
* add node server

* stub implementations

* node server impl

* gaz

* only missing genesis info now

* fmt imports

* all tests pass

* fmt

* revert change

* punctuation

* use internal err code

* view permission

* using real reflection

* spacing

* lint
2019-07-22 21:19:55 -05:00
Preston Van Loon
6a930ba175 Remove optional verifyTree argument (#3047)
* remove optional verifyTree argument

* remove fmt

* do not provide a default eth1data, return an error instead

* add a test for this new logic

* gaz
2019-07-22 16:47:11 -05:00
Ivan Martinez
1d71398b7c Remove VerifySignatures Flag From ProposerSlashings and Transfers (#3009)
* Remove verifySignatures from ProposerSlashings

* Remove flag from process transfers

* resolve all conflicts

* fix more references to old pbs
2019-07-22 14:25:32 -05:00
Raul Jordan
8cfbf0309d Resolve Proto Lint Issues (#3044)
* skip proto lint

* golang ci lint no -t flag

* regenerate protos to match schema

* move to compatibility folder

* build file compatibility

* foo
2019-07-22 14:10:17 -05:00
Preston Van Loon
d5dcc25472 Nogo fix for mac (#3043) 2019-07-22 07:39:37 -07:00
terence tsao
1b5b8a57e0 Remove unused proto schemas (#3005)
* Update io_kubernetes_build commit hash to 1246899

* Update dependency build_bazel_rules_nodejs to v0.33.1

* Update dependency com_github_hashicorp_golang_lru to v0.5.1

* Update libp2p

* Update io_bazel_rules_k8s commit hash to e68d5d7

* Starting to remove old protos

* Bazel build proto passes

* Fixing pb version

* Cleaned up core package

* Fixing tests

* 6 tests failing

* Update proto bugs

* Fixed incorrect validator ordering proto

* Sync with master

* Update go-ssz commit

* Removed bad copies from v1alpha1 folder

* add json spec json to pb handler

* add nested proto example

* proto/testing test works

* fix refactoring build failures

* use merged ssz

* push latest changes

* used forked json encoding

* used forked json encoding

* fix warning

* fix build issues

* fix test and lint

* fix build

* lint
2019-07-22 10:03:57 -04:00
Preston Van Loon
c8e8e84c60 Renovate updates (#3040)
* Update libp2p

* Update com_github_gogo_protobuf commit hash to dadb625

* Update com_google_protobuf commit hash to 9857d63

* Update graknlabs_bazel_distribution commit hash to 9aec688

* Update io_bazel_rules_k8s commit hash to 68aa778

* Update io_kubernetes_build commit hash to f85734f

* Update dependency bazel_gazelle to v0.18.1

* Update dependency bazel_skylib to v0.9.0

* Update dependency com_github_burntsushi_toml to v0.3.1

* Update dependency com_github_google_go_cmp to v0.3.0

* Update dependency com_github_googleapis_gnostic to v0.3.0

* Update dependency com_google_cloud_go to v0.43.0

* Update dependency in_gopkg_inf_v0 to v0.9.1

* Update dependency io_bazel_rules_docker to v0.8.1

* Update dependency org_golang_x_text to v0.3.2

* Update dependency com_github_davecgh_go_spew to v1

* Update dependency com_github_google_btree to v1

* Update dependency com_github_json_iterator_go to v1

* Update dependency com_github_modern_go_reflect2 to v1

* Update dependency com_github_peterbourgon_diskv to v3

* Update dependency in_gopkg_d4l3k_messagediff_v1 to v1

* Update dependency org_golang_google_grpc to v1

* revert broken updates

* revert gazelle, annoying warnings
2019-07-22 01:56:58 -04:00
skillful-alex
4bb5160817 added forgotten CertFlag flag processing (#2988) 2019-07-21 17:12:31 -04:00
Preston Van Loon
365580706b Ethereum APIs compatibility test (#3010)
* update some k8s deps

* merge

* fix build sizes

* Add test for compatibility for upstream protos

* Add test for compatibility for upstream protos

* Add test for compatibility for upstream protos

* add field name check

* passing test
2019-07-21 16:20:09 -04:00
terence tsao
fd22f73d1f Sync eth v1alpha1 with upstream (#3013)
* Updated protos

* regen
2019-07-21 16:07:31 -04:00
Preston Van Loon
db5549f143 update some k8s deps (#3008)
* update some k8s deps

* merge

* fix build sizes
2019-07-21 15:34:36 -04:00
terence tsao
4a422b13f0 Schema of BeaconBlockHeader has an extra column (#3004) 2019-07-21 10:20:13 -07:00
Ivan Martinez
acf1ebff2d Implement Proposer Signature (#2973)
* Implement Proposer Signatures

* Remove logging flag

* Create verifySignature and verifySigningRoot

* Clean up logs

* Fix

* Remove log

* Fix test
2019-07-21 11:29:35 -04:00
Preston Van Loon
4a4316eb95 Add rule to push docker images (#3006)
* touch readme

* whatever

* Add docker_push to actually push the images

* revert readme

* revert pb.go
2019-07-20 20:40:03 -04:00
Preston Van Loon
cc696d90e3 Docker: use container bundles to upload multiple image tags (#3003)
* use docker tag from environment, if exists

* use container bundles to upload multiple image tags
2019-07-19 21:55:09 -07:00
Preston Van Loon
dfc64121c6 Remove unused feature flags (#3002)
* remove a few feature flags that are no longer needed

* remove other unused flags

* forgot a few more
2019-07-19 21:27:35 -04:00
Preston Van Loon
e744d1a07e Spec freeze updates (#2312)
* Optimize Shuffled Indices Cache (#2728)

* Refactor Deposit Contract Test Setup (#2731)

* add new package

* fix all tests

* lint

* change hash function (#2732)

* Remove Deprecated Validator Protobuf (#2727)

* Remove deprecated validator protos

* Fix to comments

* Fix most of skipped tests (#2735)

* Cache Active Validator Indices, Count, and Balances (#2737)

* Optimize Base Reward Calculation (#2753)

* benchmark process epoch

* revert prof.out

* Add some optimizations

* beware where we use ActiveValidatorIndices...

* revert extra file

* gaz

* quick commit to get feedback

* revert extra file

* started fixing tests

* fixed broken TestProcessCrosslink_NoUpdate

* gaz

* cache randao seed

* fixed all the tests

* fmt and lint

* spacing

* Added todo

* lint

* revert binary file

* started regression test

* basic tests done

* using a fifo for active indices cache

* using a fifo for active count cache

* using a fifo for total balance cache

* using a fifo for active balance cache

* using a fifo for start shard cache

* using a fifo for seed cache

* gaz

* clean up

* fixing tests

* fixed all the core tests

* fixed all the tests!!!

* lint

* comment

* rm'ed commented code

* cache size to 1000 should be good enough

* optimized base reward

* revert binary file

* Added comments to calculate adjusted quotient outside

* removed deprecated configs (#2755)

* Optimize Process Eth1 Data Vote (#2754)

* Cleanup and Docs update (#2756)

* Add graffiti and update generate seed (#2759)

* Benchmark Process Block with Attestations (#2758)

* Tidying up Godoc for Core Package (#2762)

* Clean up Old RPC Endpoints (#2763)

* Update RPC end point for Proposer (#2767)

* add RequestBlock

* run mockgen

* implemented RequestBlock

* updated proto definitions

* updated tests

* updated validator attest tests

* done

* comment

* todo issue

* removed unused proto

* Update attesting indices v0.6 (#2449)

* sort participants slice

* add bitfield functions and tests

* added BitfieldBit test

* add AttestationParticipantsNew

* revert AttestationParticipants to its previous change

* add tests

* remove verifybitfieldnew

* fix tests and remove multiple tests

* remove duplicate test

* change magic number into ceildiv8

* Implement Justification and finalization Processing (#2448)

* Add convert to indexed (#2519)

* sort participants slice

* add bitfield functions and tests

* added BitfieldBit test

* add AttestationParticipantsNew

* revert AttestationParticipants to its previous change

* add tests

* remove verifybitfieldnew

* fix tests and remove multiple tests

* remove duplicate test

* start work

* convert attestation to indexed attestations

* fix test for convert index

* remove calling getter

* add more tests

* remove underscore

* changes name to signature (#2535)

* update registry updates func (#2521)

* update registry updates func

* added tests and moved to epoch processing

* fixed naming issues

* Update Committee Helpers to v0.6.0 (#2398)

* Update Committee Helper Part 2 (#2592)

* Implement Process Slashings for 0.6 (#2523)

* Update Proposer/Attester Slashings and Slashing Helpers (#2603)

* Implement Final Updates 0.6 (#2562)

* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* implemented process_final_updates

* move to the top of the file

* add comment back

* gaz

* Test for process final updates

* fixed tests

* fixed all the tests

* Update Reward Helper v0.6 (#2470)

* added BaseReward

* added rewards helper

* added test for BaseReward

* extra space

* move exported function above

* update to new spec (#2614)

* Update Block Processing Voluntary Exits (#2609)

* Update processEth1Data for v0.6 (#2516)

* Clean up Helper Functions Part 1 (#2612)

* Finalize helper functions for 0.6 (#2632)

* Process Beacon Chain Transfers v0.6 (#2642)

* add transfers

* beacon transfer operations complete

* full cov

* transfer testing

* finished tests

* Update beacon-chain/core/blocks/block_operations_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/core/blocks/block_operations.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Implement Process Crosslink From 0.6 (#2460)

* Process Block Eth1 Data v0.6 (#2645)

* Get attestation data slot v0.6 (#2593)

* attestation.go is ready, tests not

* ready for review

* fixing linter issues

* modified Crosslink and AttestationData proto fields to spec 2.0, marked deprecated fields

* gazelle

* fixed tests

* fixed error

* error msg

* Process Block Deposits v0.6 (#2647)

* imports fixes

* deposits tests pass

* wrapped up gazelle

* spacing

* Implement Crosslink Delta Rewards for 0.6 (#2517)

* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* Starting, I need get_epoch_start_shard

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* need to use changes from latest crosslinks

* added BaseReward and totalActiveBalance

* added test for base reward

* merged master

* all tests passing

* start testing

* done

* add ProcessBlockHeader v0.6 (#2534)

* add ProcessBlockHeader

* function has all its dependencies in place

* arrange the basic ok test

* gazelle and skip test update

* skip wrong sig test

* fmt imports and change requests

* goimports fmt

* map for struct fields to be location independent

* reorder protobuf fields

* added tests

* gazelle fix

* few change requests fixes

* revert changes in types.proto

* revert changes in types

* fix tests

* fix lint

* fmt imports

* fix gazelle

* fix var naming

* pb update

* var naming

* terence change request fixes

* fix test

* Add Process Registry for Epoch Processing (#2668)

* update update-process-registry

* added back the old tests

* fmt

* gaz

* Follow up on process block header v0.6 (#2666)

* Putting Crosslink Delta Back (#2654)

* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* fixed tests

* addressed shay's feedback

* Implement Attestation Delta for v0.6 (#2646)

* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* implemented process_attestation_delta

* comments

* comments

* merged master

* all tests passing

* start testing

* done

* merged master

* fixed tests

* tests, more to come

* tests done

* lint

* spaces over tabs

* addressed shay's feedback

* merged master

* Implement process_rewards_and_penalties for 0.6 (#2665)

* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* starting but need to merge a few things...

* tests

* fmt

* Update Slot Processing and State Transition v0.6 (#2664)

* edit state transition to add slot processing logic, reorder logic

* fix build

* lint

* tests passing

* spacing

* tests pass

* imports

* passing tests

* Implement Process Epoch for 0.6 (#2675)

* can't find process j f functions

* implemented process_epoch

* tests done

* lint

* nishant's feedback

* stupid goland replace

* goimports

* Update CommitteeAssignment (#2693)

* cleaned up skipped tests for core processing (#2697)

* Process Block Attestations v0.6 (#2650)

* attestation.go is ready, tests not

* ready for review

* fixing linter issues

* modified Crosslink and AttestationData proto fields to spec 2.0, marked deprecated fields

* gazelle

* fixed tests

* fixed error

* add att processing:

* process atts

* error msg

* finish process attestations logic

* spacing

* ssz move

* inclusion delay failure passing

* more attestation tests

* more att tests passing

* more tests

* ffg data mismatching test

* ffg tests complete

* gofmt

* fix testing to match attestation updates

* ssz

* lint

* Fixed Skipped Tests for RPC Server (#2712)

* Remove Obsolete Deposit Proto Objects (#2673)

* remove deposit data

* remove deposit input

* fix references

* remove deposit helpers

* fix all refs

* gaz

* regen proto

* fix all tests

* remove deposit data deprecated field

* fix remaining references

* fix all tests

* fix lint

* regen proto

* fix test

* Remove Deprecated Protobuf State Fields (#2713)

* Remove Deprecated Protobuf Crosslink/Slashing/Block Fields (#2714)

* Remove Deprecated Beacon Block Proto Fields (#2717)

* Remove Deprecated Attestation Proto Fields (#2723)

* Cache Shuffled Validator Indices (#2682)

* YAML shuffle tests for v0.6 (#2667)

* new shuffle tests

* added comment for exported function

* fix format and print

* added config files handling

* gazelle fix

* shuffle test debugging

* added shuffle list and benchmark

* hash function addition from nishant code

* gazelle fix

* remove unused function

* few minor changes

* add test to test protos optimization

* test a bigger list

* remove commented code

* small changes

* fix spec test and test indices to pass

* remove empty line

* abstraction of repeated code and comment arrangement

* terence change requests

* fix new test

* add small comment for better readability

* change from unshuffle to shuffle

* comment

* better comment

* fix all tests

* Remove Latest Block (#2721)

* lint

* remove latest block

* lint

* add proto

* fix build

* Fix Deposit Trie (#2686)

* remove deposit data

* remove deposit input

* fix references

* remove deposit helpers

* fix all refs

* gaz

* regen proto

* fix all tests

* remove deposit data deprecated field

* fix remaining references

* fix all tests

* fix lint

* new tests with contract

* gaz

* more tests

* fixed bugs

* new test

* finally fixed it

* gaz

* fix test

* Remove Committee Cache (#2729)

* Benchmark Compute Committee (#2698)

* Fixed Skipped Attestation Tests (#2730)

* Update Deposit Contract (#2648)

* lint

* add new contract

* change version

* remove log

* generating abi and binary files

* fix tests

* update to current version

* new changes

* add new hash function

* save hashed nodes

* add more things

* new method

* add update to trie

* new stuff

* gaz

* more stuff

* finally fixed build

* remove deposit data

* Revert "remove deposit data"

This reverts commit 9085409e91.

* more changes

* lint and gaz

* lint

* Update Shard Helpers for 0.6 (#2497)

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* Update Genesis State Function to v0.6 (#2465)

* add pseudocode

* make changes

* fix all tests

* fix tests

* lint

* regen protos and mocks

* regenerated protos

* started fixing core

* all core tests passing!

* removed shared/forkutils

* started fixing blockchain package

* lint

* updating rpc package

* add back deleted stuff

* add back deleted stuff that was deleted accidentally

* add back protos and mocks

* fix errors

* fix genesis issue

* fix genesis issue for slot ticker

* fix all genesis errors

* fix build files

* temp change for go-ssz

* fix test

* Revert "temp change for go-ssz"

This reverts commit 3411cb9d6d.

* update to latest go-ssz

* unstaged changes

* Update Attester Server RPC Calls (#2773)

* Update config and function parameters to v0.7 (#2791)

* Minor Updates to 0.7 (#2795)

* Refactor Deposit Flow and Cleanup Tests (#2788)

* More WIP on cleaning deposit flow

* Fix tests

* Cleanup and imports

* run gazelle

* Move deposit to block_operations

* gazelle

* Update beacon-chain/core/blocks/block_operations.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Fix docs

* Remove unneeded calculations

* Fix tests

* Fix tests finally (?)

* Optimize Committee Assignment RPC (#2787)

* Update BlockRoot to BlockHash (#2816)

* Fix Final Missing Items in Block Processing v0.6 (#2710)

* override config successfully

* passes processing

* add signing root helper

* blockchain tests pass

* tests blocked by signing root

* lint

* fix references

* fix protos

* proper use of signing root

* only few failing tests now

* fix final test

* tests passing

* lint and imports

* rem unused

* Update beacon-chain/core/blocks/block_operations.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* lint

* Update beacon-chain/attestation/service_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/db/block_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* rename to hash tree root

* rename decode to unmarshal

* fix

* use latest ssz

* all tests passing

* lint

* fmt

* Add Config YAML for Spec Tests (#2818)

* Align Protobuf Type Names (#2825)

* gofmt

* Revert "Align Protobuf Type Names (#2825)" (#2827)

This reverts commit 882d067144.

* Update Domain Related Functions (#2832)

* Add Functions for Compressed and Uncompressed HashG2 With Domain (#2833)

* add tests

* gaz

* lint

* Revert "Add Functions for Compressed and Uncompressed HashG2 With Domain (#2833)" (#2835)

This reverts commit 7fb2ebf3f1.

* Add ConvertToPb to package testutil (#2838)

* Block Processing Bug Fixes (#2836)

* Update types PB with Size Tags (#2840)

* Epoch processing spec tests (#2814)

* Remove Deposit Index (#2851)

* Shuffle tests revisited (#2829)

* first commit

* remove old files, add log

* remove duplicate yaml testing code

* reduce visability

* nishant feedback changes

* skip TestFromYaml_Pass

* added tags to bazel build

* gazelle fix

* remove unused vars

* add back config

* remove config handling

* remove unused var

* gazelle fix

* SSZ compatibility test for protobufs (#2839)

* update workspace spec sha

* remove yamls from branch

* BLS spec tests (#2826) (#2856)

* bls spec tests

* add more bls tests

* use ioutil instead of bazel runfiles

* dont read bytes

* skip tests that overflow uint64

* manually fix input data

* add tests

* lint and gaz

* add all new changes

* some refactoring, cleanup, remove new API methods that only exist for tests

* gaz

* Remove yamls, skip test

* Slot processing spec test (#2813)

* eth1data rpc endpoint (#2733)

* eth1data rpc endpoint

* first version

* comment added

* gazelle fix

* new function to go once over the deposit array

* fix tests

* export DepositContainer

* terence feedback

* move structure declaration

* binary search

* fix block into Block

* preston feedback

* keep slice sorted to remove overhead in retrieval

* merge changes

* feedback

* update to the latest go-ssz

* revert change

* changes to fit new ssz

* revert merge reversion

* go fmt goimports duplicate string

* exception for lint unused doesParentExist

* feedback changes

* latesteth1data to eth1data

* goimports and stop exposing Eth1Data

* revert unneeded change

* remove exposure of DepositContainer

* feedback and fixes

* fix workspace duplicate dependency

* greatest number of deposits at current height

* add count votes function

* change method name

* revert back to latesteth1data

* latesteth1data

* preston feedback

* separate function, add tests, fix bug

* stop exposing voteCountMap

* eth1data comment fix

* preston feedback

* fix tests

* new proto files

* workspace to default version of ssz

* new ssz

* change test size

* marshalled → marshaled

* Attesting Indices Fix (#2862)

* add change

* fix one test

* fix all tests

* add test

* clear cache

* removed old chaintest, simulated backend and state generator (#2863)

* Block Processing Sanity Spec Tests (#2817)

* update PrevEpoch

* add new changes

* shift to blocks package

* add more changes

* new changes

* updated pb with size tags

* add new changes

* fix errors

* uncomment code

* more changes

* add new changes

* rename and lint

* gaz

* more changes

* process slot SigningRoot instead of HashTreeRoot

* ensure yaml generated structs work

* block sanity all passing

* minimal and mainnet all pass

* remove commented code

* fix one test

* fix all tests

* fix again

* no state comparison

* matching spec

* change target viz

* comments gazelle

* clear caches before test cases

* latest attempts

* clean up test format

* remove debugging log, remove yaml

* unskip attestation

* remove skip, check post state, diff state diffs

* handle err

* add bug fixes

* fixed one more bug

* fixed churn limit bug

* change hashProto to HashTreeRoot

* all tests pass :)

* fix all tests

* gaz

* add regression tests

* fix test bug

* Mutation testing fixes for beacon-chain/core/helpers/attestation.go (#2868)

* mutation testing for attestation.go

* new line

* lint

* revert fmt.Errorf deletion

* gofmt

* Add some fixes for mutation testing on blocks.go (#2869)

* Fix sizes

* gaz

* Spec freeze release candidate spectests

* Align Protobuf Type Names  (#2872)

* Removes some deprecated fields from protobuf (#2877)

* search and replace checkpoints

* fix tests, except spec tests

* Update Configs for Freeze (#2876)

* update configs

* updated minimal configs

* almost there

* all tests passing except for spec tests

* better comment for MinGenesisTime

* done, ready for review

* rm seconds per day

* feedback

* Mutation testing fixes for beacon-chain/core/helpers/committee.go (#2870)

* Add some fixes for mutation testing on blocks.go

* working on mutation testing fo committee.go

* gofmt

* goimports

* update readme target

* update latest sha for spec tests

* fix build

* Update State Transition Function (#2867)

* Change Base Reward Factor (#2888)

* Update Freeze Spec Simplification Section - part 1 (#2893)

* finished changes to attesting_indices

* removed index_count <= 2**40 requirement

* lint

* reverted index_count <= 2**40 check

* added short cut len(a) > len(b)

* Update justification bits (#2894)

* updated all the helper pseudocodes (#2895)

* Make Constants Explicit and Minor Cleanups (#2898)

* Rename outdated configs, make constants explicitly declared

* Remove activate_validator, not needed

* Remove GenesisSlot and GenesisEpoch

* Remove unused import

* Move Block Operation Length Checks to ProcessOperations (#2900)

* Move block operation length checks to ProcessOperations

* Write tests for each length check in ProcessOperations

* Remove unneeded test

* Move checks to a new function

* Move duplicate check back into ProcessOperations

* reorder proto fields (#2902)

* Slashing Penalty Calculation Change (#2889)

* lint

* change config val

* add max helper

* changes to slashing and process slashing, add a min function for integers

* gaz

* fix failing tests

* fix test

* fixed all tests

* Change Yaml tag

* lint

* remove gc hack

* fix test

* gaz

* preston's comments

* change failing field

* fix and regen proto

* lint

* Implement Compact Committee Root (#2897)

* add tags

* add function

* add new code

* add function

* add all new changes

* lint

* add tests

* fix tests

* fix more tests

* fix all outstanding tests

* gaz

* Update beacon-chain/core/helpers/committee.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* comment

* Remove deprecated fields from attestation data (#2892)

* fix broken tests

* remove comment

* fixes

* Update Deposit Contract (#2903)

* update to new contract

* fix references

* fix tests

* fix some more tests

* fix local deposit trie

* gaz

* shay's review

* more changes

* update WORKSPACE to use 0.8 spec tests

* Perform Mutesting in Helpers Package (#2912)

* Perform mutesting on validator

* Mutesting in helpers package

* Mutested eth1data

* s/volundary/voluntary (#2914)

* Update BLS Domain (#2916)

* change from integer to byte slice

* add test

* fix func for bytes4

* Fix Spec tests (#2907)

* fix panics

* handle failed transitions

* remove log

* fix to protos

* new changes

* remove i

* change ssz commit

* new changes

* update epoch tests

* fix epoch testing

* fix shuffle tests

* fix test

* Perform Mutesting in Epoch and State Packages (#2913)

* done with updates (#2890)

* Add Max Size Tag for Protobuf Fields (#2908)

* No more space between ssz-size numbers

* Regen pb.go

* Fixed a few incorrect proto fields (#2926)

* Update Validator Workflow (#2906)

* Justification spec tests (#2896)

* update go-ssz

* Fix SSZ Compatibility Test (#2924)

* figuring out how to squeeze in multiple fields in a tag for pb

* Added max tags and regenerated pb.go

* updated to new standard

* New bitfield types in proto (#2915)

* Cast bytes to correct bitfield, failing tests now though

* Add forked gogo/protobuf until https://github.com/gogo/protobuf/pull/582

* remove newline

* use proper override for gogo-protobuf

* fix a few tags

* forgot to include custody bits and Slashable not used

* Update yaml struct to use pb

* Update workspace to use latest ssz

* All tests fail

* Use the latest go-ssz commit

* All pass except for state (too long to test)

* Update test.proto

* Added rest of the tests

* use 1 justification bits

* fix tag test, apply @rauljordan's suggestion

* add IsEmpty and use ssz struct

* delete unused file

* Update zero hash to sha256().digest() (#2932)

* update zero hash

* change zero hash to conform with spec

* goimports

* add test for zero hash

* Revert "Update zero hash to sha256().digest() (#2932)" (#2933)

This reverts commit b926ae0667.

* Fix compress validator (#2936)

* fix compress validator

* update go-ssz

* build without the bytes test

* try minimal

* Block operations spec tests (#2828)

* update PrevEpoch

* debugging proposer slashing tests

* fmt

* add deposit tests

* Added skeleton for attestation minimal test

* remove bazel runfiles thing

* add deposits

* proposer slashing test is done

* comment

* complete test, some failing cases

* sig verify on

* refactor slightly to support mainnet and minimal

* included mainnet stuff

* Add block header tests

* voluntary exit done

* transfer done

* new changes

* fix all tests

* update domain functions

* fmt

* fixed lint

* fixed all the tests

* fixed a transfer bug

* finished attester slashing tests and fixed a few bugs

* started fixing...

* cleaned up exit and proposr slashing tests

* attester slashing passing

* refactored deposit tests

* remove yamls, update ssz

* Added todo for invalid sig

* gazelle

* deposits test done!

* transfer tests done and pass!

* fix attesting indices bug

* temporarily disabled signature verification

* cleaned up most of the block ops, except for att

* update committee AttestingIndices

* oops, I don't know how or why I changed this file

* fixed all the rpc tests

* 6 more failing packages

* test max transfer in state package

* replace hashproto with treehash in package blockchain

* gazelle

* fix test

* fix test again

* fixed transition test, 2 more left

* expect an error in attestation tests

* Handle panic when no votes in aggregate attestation

* clear cache

* Add differ, add logging, tests pass yay

* remove todo, add tag

* fixed TestReceiveBlock_RemovesPendingDeposits

* TestAttestationMinimal/success_since_max_epochs_per_crosslink fails now...

* handle panics

* Transfer tests were disabled in https://github.com/ethereum/eth2.0-specs/pull/1238

* more fixes after merge, updating block_operations.yaml.go to match yaml

* figuring out how to squeeze in multiple fields in a tag for pb

* Added max tags and regenerated pb.go

* updated to new standard

* New bitfield types in proto (#2915)

* Cast bytes to correct bitfield, failing tests now though

* Add forked gogo/protobuf until https://github.com/gogo/protobuf/pull/582

* remove newline

* fix references and test panic

* change to proto objects from custom types

* fix panics in tests

* use proper override for gogo-protobuf

* fix a few tags

* fix tests

* forgot to include custody bits and Slashable not used

* fix tests

* sort again

* Update yaml struct to use pb

* Update workspace to use latest ssz

* All tests fail

* Use the latest go-ssz commit

* All pass except for state (too long to test)

* Update test.proto

* Added rest of the tests

* use 1 justification bits

* minor fixes

* wrong proto.Equal

* fix tag test, apply @rauljordan's suggestion

* add IsEmpty and use ssz struct

* inverted logic

* update zero hash

* change zero hash to conform with spec

* goimports

* add test for zero hash

* Revert "Update zero hash to sha256().digest() (#2932)"

This reverts commit b926ae0667.

* update ssz, fix import, shard big test

* checkpoint

* fix compress validator

* update go-ssz

* missing import

* missing import

* tests now pass

* been a good day

* update test size

* fix lint

* imports and remove unused const

* update bazel jobs flag

* update bazel jobs flag

* satisfy deprecation warning

* Add ssz regression tests for investigation of test failures in PR #2828 (#2935)

* Adding regression tests for investigation

* add another example

* goimports

* add quick comment about test case 0

* Epoch Process Slashings Spec Tests (#2930)

* updated justification bits, tests passing OK

* regen pb.go, clarify bit operations

* justification and finalization tests; failing

* Add wrapper, so we call the correct method

* checkpoint

* Update tar ref

* TestSlashingsMinimal/small_penalty still failing

* Use bigint instead of () and float

* Revert a bad merge from workspace

* Fmt

* add note about https://github.com/ethereum/eth2.0-specs/issues/1284

* improve tests

* gaz

* Perform Mutesting In core/state Package (#2923)

* Perform mutesting on validator

* Mutesting in helpers package

* Mutested eth1data

* Perform mutesting in epoch and state packages

* Fix voluntary exits test

* Fix typo

* Fix comments

* Fix formatting

* Fix error message

* Handle missing errors

* Handle all errors

* Perform Mutesting In State Package

* Fix block roots size

* Remove comment

* Fix error

* add backend service

* Add ssz compatibility tests for signing root (#2931)

* Added tests for signing root

* imports

* fix lint on travis

* fix bes flag

* Final updates spec tests (#2901)

* set up tests

* need to reorder pbs

* playing with tags idea, can revert this commit later

* fixes after merge

* reset caches before test

* all epoch tests pass

* gazelle

* Genesis trigger (#2905)

* genesis change

* integrate changes

* bodyroot

* remove unused code

* HasChainStarted

* added isValidGenesisState to ProcessLog

* state fix

* fix gazelle

* uint64 timestamp

* SetupInitialDeposits adds proof

* remove unneeded parts of test

* deposithash

* merkleproof from spec utils

* Revert "merkleproof from spec utils"

This reverts commit 1b0a124352.

* fix test failures

* chain started and hashtree root in tests

* simple eth2genesistime

* eth2 genesis time

* fix zero time

* add comment

* remove eth1data and feedback

* fix build issues

* main changes: add fields and methods to track active validator count

* gaz

* fix test

* fix more tests

* improve test utils

* shift spec method to state package, improve test setup

* fixed log processing tests

* remove log

* gaz

* fix invalid metric

* use better tag names, not latest

* Remove Block Signing Root (#2945)

* replace with hash tree root

* Revert "replace with hash tree root"

This reverts commit 77d8f16a16.

* replace with signing root instead

* remove one more ref

* Create Test Runner for Genesis State Spec Tests (#2940)

* Start genesis spec tests

* shift spec method to state package, improve test setup

* Add Genesis validity spec test

* Bazel

* fixed log processing tests

* remove log

* gaz

* fix invalid metric

* use json tags

* fix up latest changes

* Fix most of test errors

* Attempts to see what's wrong with genesis validity

* Fix merge

* skip minimal

* fix state test

* new commit

* fix nishant comment

* gaz

* Static check on branch spec-v0.6 (#2946)

* Ran staticcheck and fixed the important complaints

* commit

* commit

* Add Back Eth1Data After Bad Merge (#2953)

* everything passing again

* add skip reason

* cleanup deposit contract slightly

* remove unused chainstart param (#2957)

* fix breakages from #2957 (#2958)

* fix breakages from #2957

* oops

* Fix deposit input data (#2956)

* fix deposit input data

* fix deposit input data

* gaz and build fix

* Add Tests for Genesis Deposits Caching (#2952)

* remove old method and replace with an improved one

* add new files

* gaz

* add test

* added all these tests

* gaz

* Apply suggestions from code review

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* fix merkle proof error

* fix config

* Minor fixes for runtime (#2960)

* Minor fixes for runtime

* use comments

* goimports

* revert beacon-chain/core/state/state.go and fix comment for lint

* fix test too

* Minor runtime fixes (#2961)

* Add support for bundling binaries and fix ARM64 builds (#2970)

* Add support for bundling binaries and fix ARM64 builds

* Fix exports

* ignore manual targets wrt vis check

* fix graknlabs

* update spec tests (#2979)

* hotfix until https://github.com/graknlabs/bazel-distribution/pull/169

* Overflow slashing calculation fix (#2977)

* Runtime Fixes (#2736)

* first batch of fixes

* add log

* more fixes

* another bug fixed

* update deposit contract and other fixes

* remove logs

* new changes

* fixes

* fix build

* remove config

* more fixes

* add more changes

* add back todo

* make compute state root work

* remove commented out and fix condition

* fix commented code

* fix config

* gaz

* remove flag

* remove init

* new fixes

* fix test

* one more fix

* fix all tests

* change back config

* fix one more bug

* remove logging bool

* Only build test targets when running bazel test //...

* Align prysm to spec v0.8.1 (#2978)

* Bazel problem

* Update zero hash representation to be clear (cosmetic)

* Update minor cosmetic fixes

* Fixed lookahead off by 1

* Update randao.go

* update ssz

* test failures fixed

* test fixes

* fix up workspace

* lint

* fixed errs

* Updated pubkey loggings (#2983)

* Fix proposer assignment (#2984)

* add jvm limits

* add jvm limits

* Removed logging from state transition functions (#2980)

* Match spec on proposer index division (#2985)

* Match spec on proposer index division

* gaz

* fixes

* Fix Default Eth1Data (#2982)

* fix bug

* Update beacon-chain/core/state/transition.go

Co-Authored-By: Raul Jordan <raul@prysmaticlabs.com>

* more fixes

* reg test

* Set ejection balance to 1.6 (#2987)

* improve block processing log (#2990)

* Fix HistoricalRootsLimit (#2989)

* fix historical length

* fix genesis state initialization and test

* fix genesis state initialization and test

* fix genesis state initialization and test

* fix genesis state initialization and test

* hack config until https://github.com/prysmaticlabs/prysm/issues/2993

* More Runtime Fixes (#2986)

* local changes

* add val sig

* attester fix

* one more fix

* fixed all tests

* rem validator issue

* fix finality issue (#2994)

* Fix validator prev balance calculation (#2992)

* attester fix

* one more fix

* fixed all tests

* Fix validator prev balances calculation

* go fmt

* 48 bytes

* Fix Querier to handle Deposit Logs Race (#2999)

* fix sync issue

* fix build

* add regression test

* add var for magic number

* Fix eth1data and deposits (#2996)

* Work in progress, eth1data works, deposits are included at the appropriate time, and activation happens at the correct time

* revert blockInfo being public

* get tests to build, not yet passing though

* add tests

* some commentary

* fix comment

* goimports

* fmt and remove unused method

* Update rules go (#2975)

* update rules_go

* Fix some cross compile builds stuff

* add missing deps

* update to 0.19.1

* Update Protobufs to Match Ethereum APIs (#2998)

* add beacon block and attestation files

* add all types

* include all new proto type definitions

* add all proto definitions

* fix all comments to say 48 bytes

* include latest changes

* readd common

* no swag

* add build file

* deps issue

* right package names

* address feedback, maintain parity between upstream ethereumapis

* delete pb

* bad gens

* Update "Testing Prysm" readme section (#3000)

Resolves invalid URL link to golangci-lint.

* Update badge to version 0.8.1

* elaborate on test skip

* revert shared/p2p/options.go

* make travis happy with goimports

* Update beacon-chain/core/blocks/block.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
2019-07-19 19:16:10 -05:00
skillful-alex
d8e24af4c3 best peer assignment fix (#2976) 2019-07-16 11:31:51 -04:00
terence tsao
ddff0f1c51 Mega Renovate Updates (#2971) 2019-07-15 16:01:45 -07:00
renovate[bot]
31fd73d173 Update dependency com_google_cloud_go to v0.41.0 (#2920) 2019-07-08 22:23:15 -04:00
Celeste A. Seberras
dd18f15cd5 Small grammatical fixes (#2925) 2019-07-08 14:17:14 -07:00
renovate[bot]
7511a497d0 Update libp2p (#2921)
* Update libp2p

* Update libp2p

* add event bus repo
2019-07-08 10:16:18 -04:00
renovate[bot]
18e7ced517 Update io_bazel_rules_k8s commit hash to 6057108 (#2919) 2019-07-08 09:53:10 -04:00
Celeste Seberras
b3323bfb57 Formatting and structure changes (#2918) 2019-07-07 15:57:07 -04:00
Preston Van Loon
87894eb12f fix issue #2830 (#2904) 2019-07-04 00:04:18 -04:00
Preston Van Loon
f12fdfda0f Complain about improperly sized tests (#2873) 2019-06-30 09:00:22 -07:00
renovate[bot]
72139c41ea Update io_bazel_rules_k8s commit hash to dda7ab9 (#2843) 2019-06-29 11:09:52 -04:00
renovate[bot]
7b49697dff Update dependency com_github_prometheus_common to v0.6.0 (#2845) 2019-06-29 10:32:20 -04:00
renovate[bot]
5853a399f6 Update dependency com_github_syndtr_goleveldb to v1 (#2847) 2019-06-29 09:54:17 -04:00
Dan
9ac950f480 Added the 'enable-upnp' flag to the list of command line args (#2860)
* Added the 'enable-upnp' flag to the list of supported command line arguments.

If the user specifies this feature flag (adds --enable-upnp as an argument), the beacon-chain and validator services will initialize libp2p with the UPnP options when started.

* Added the new arg to usage.go due to test failure

* Update shared/p2p/service.go

Changed the logging according to Preston's recommendation.

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Code review changes:

1. File formatting.
2. More detailed command line arg description.
2019-06-26 12:59:37 -07:00
Preston Van Loon
9bd6147027 BLS spec tests (#2826)
* bls spec tests

* add more bls tests

* use ioutil instead of bazel runfiles

* dont read bytes

* skip tests that overflow uint64

* manually fix input data

* add tests

* lint and gaz

* add all new changes

* some refactoring, cleanup, remove new API methods that only exist for tests

* gaz

* Remove yamls, skip test
2019-06-25 12:57:47 -04:00
Preston Van Loon
c944b281c8 Revert "Merge renovate updates (#2850)" (#2854) 2019-06-24 14:18:07 -07:00
Preston Van Loon
ebdbc230c3 Merge renovate updates (#2850)
* Update com_github_prysmaticlabs_go_ssz commit hash to 0fdbce2

* Update io_bazel_rules_k8s commit hash to cddc035

* Update dependency build_bazel_rules_nodejs to v0.32.2

* Update dependency com_github_prometheus_common to v0.6.0

* Update dependency com_github_syndtr_goleveldb to v1

* revert ssz breakage

* Update libp2p

* update infra

* update infra

* specify node_modules

* add flag

* clarify in comment

* update rules_docker

* workarounds for python2. see: https://github.com/bazelbuild/rules_docker/issues/842
2019-06-24 15:53:49 -04:00
Preston Van Loon
9460232550 Update allocations.go (#2819) 2019-06-19 15:34:57 -07:00
Preston Van Loon
2c8bddc324 Update kubesec.bzl to new starlark attr.label API (#2823)
* Update kubesec.bzl

* fix rule take 2
2019-06-19 13:01:39 -04:00
Preston Van Loon
cebefde335 Update config.go to fix BLS issue (#2821)
* Update config.go

* fix test
2019-06-18 18:02:01 -04:00
terence tsao
df84615496 Renovate Updates (#2815)
* Update dependency com_github_golang_snappy to v0.0.1

* Update dependency io_bazel_rules_go to v0.18.6

* Update dependency org_uber_go_automaxprocs to v1

* Update dependency com_github_prometheus_client_golang to v1

* Update libp2p

* Update libp2p
2019-06-17 20:26:38 -04:00
Preston Van Loon
132a5f10f2 Update README.md (#2801)
To display build status for master only
2019-06-17 14:47:31 -07:00
Preston Van Loon
c0752f0de5 Create FUNDING.yml (#2800)
Added gitcoin
2019-06-17 01:49:06 -04:00
Preston Van Loon
c30bc3dd97 Revert "Add BoltDB Internal Stats (#2768)" (#2799)
This reverts commit c66186b54a.
2019-06-16 15:50:23 -04:00
Preston Van Loon
dd131561bf automaxprocs (#2770) 2019-06-13 07:53:42 -07:00
Hsien-Tang Kao
3a167e54b5 Properly log and handle HTTP server error (#2685) 2019-06-12 19:58:49 -04:00
Nishant Das
c66186b54a Add BoltDB Internal Stats (#2768)
* lint

* add metrics

* rename file

* add review changes

* remove dumb pattern

* fix naming

* add all the buckets data

* change to seconds

* Update beacon-chain/db/db_metrics.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* preston's comments
2019-06-12 19:07:36 -04:00
Nishant Das
663490ee1f Update Phore Dependency to the latest version (#2792)
* add new changes

* fix typos
2019-06-12 11:06:39 -05:00
Nishant Das
d48e0925d0 Add gRPC Gateway Folder (#2793)
* add gateway proto files

* lint
2019-06-12 10:11:04 -05:00
Preston Van Loon
6611916689 Update codecov to ignore generated code (#2794) 2019-06-12 10:45:17 -04:00
Preston Van Loon
d9ee55013d Use generalized examples in test (#2786) 2019-06-10 14:29:15 -04:00
renovate[bot]
9793de59a6 Update libp2p (#2783)
* Update libp2p

* Update libp2p, remove unused WORKSPACE go_repositories, fix test
2019-06-10 12:17:42 -04:00
terence tsao
81f777cd46 Renovate Updates (#2785)
* Update com_github_atlassian_bazel_tools commit hash to 6fbc36c

* Update dependency com_github_prometheus_procfs to v0.0.2

* Update dependency com_github_spf13_cobra to v0.0.5

* Update dependency com_google_cloud_go to v0.40.0

* Update dependency org_golang_google_api to v0.6.0
2019-06-10 10:41:16 -04:00
Preston Van Loon
fbac09c1f6 Add go maxprocs metric (#2765) 2019-06-07 15:43:22 -04:00
gzuhlwang
78c3166ef2 fix typo (#2764) 2019-06-07 07:58:27 -04:00
terence tsao
85c5672ab3 Mega Renovate Updates (#2751)
* Update dependency com_github_prometheus_client_golang to v0.9.3

* Update dependency com_github_spf13_cobra to v0.0.4

* Update dependency com_google_cloud_go to v0.39.0

* Update dependency io_bazel_rules_go to v0.18.5

* Update com_github_atlassian_bazel_tools commit hash to f04c7c0

* Update com_github_prysmaticlabs_go_ssz commit hash to 2e84733

* Update io_bazel_rules_k8s commit hash to e521766

* Update dependency build_bazel_rules_nodejs to v0.30.2

* Update dependency com_github_prometheus_procfs to v0.0.1

* Update dependency io_opencensus_go to v0.22.0

* Update libp2p

* Update dependency com_github_ghodss_yaml to v1

* Update dependency grpc_ecosystem_grpc_gateway to v1

* Update libp2p

* Revert "Update io_bazel_rules_k8s commit hash to e521766"

This reverts commit b2c5ee219c.

* Revert "Update dependency build_bazel_rules_nodejs to v0.30.2"

This reverts commit 3286af0b46.

* Revert "Update libp2p"

This reverts commit 699fc4489f.

* Revert "Update libp2p"

This reverts commit e1a2372cd0.
2019-06-03 11:21:38 +08:00
Preston Van Loon
9e98e914a1 Add a gRPC gateway (#2604) 2019-06-02 08:33:44 -07:00
Dan
71b5d5beec Adding the BlockTreeBySlots function for getting the Block tree filte… (#2720) 2019-06-01 22:41:17 -07:00
Preston Van Loon
55bedd0745 Move go-ssz to external repo under MIT license (#2722) 2019-05-29 18:04:25 -07:00
Antoine Toulme
932e68571b expose p2p private key for static peering (#2719)
* expose p2p private key for static peering

* Review revisions

* Use testutil.TempDir()

* Use testing.T to report fatal errors
2019-05-29 15:43:23 -04:00
Antoine Toulme
fcc54317a3 Fix static peering (#2725)
* Fix static peering

* Fix import and use a slice to assemble peers to watch for

* Add a check for zero-length

* Fix duplicate import
2019-05-29 15:30:26 -04:00
Nishant Das
3871be006c Update Renovate (#2711) 2019-05-28 06:13:35 -07:00
Preston Van Loon
9ce5de3d95 Add flag to whitelist certain connections (#2716) 2019-05-28 05:54:49 -07:00
Hsien-Tang Kao
0aeaed866e Fix attester server test data race (#2708) 2019-05-27 06:22:51 -07:00
Preston Van Loon
e71e91c9aa update BLS (#2692) 2019-05-25 11:26:03 -04:00
Preston Van Loon
2d7dcfae61 fix joonix log after https://github.com/joonix/log/pull/13 (#2691) 2019-05-24 16:54:03 -04:00
skillful-alex
305fea5bdb fix panic if activeValidatorIndices is zero (#2621)
* fix panic if activeValidatorIndices is zero

* add unit test
2019-05-24 16:34:33 -04:00
LamboshiNakaghini
4fd8ddf8ca update README.md (#2684)
* Update README.md

Added more complete instructions for running testnet on Linux.

* Update README.md
2019-05-24 16:23:20 -04:00
Hsien-Tang Kao
3bf46ff6e8 Update CONTRIBUTING.md (#2679) 2019-05-23 06:30:00 -07:00
shayzluf
5e3931dc44 change rebase into pull (#2669) 2019-05-22 14:53:46 -07:00
Dan
b040ac909e Persistent logs (#2660)
* added file log feature

* moved the logic to one central location (shared/logutil/logutil.go), removed new line chars from file logs

* removed redundant temp file references that went into beacon-chain/BUILD.bazel by mistake

* Update shared/cmd/flags.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

* Update shared/cmd/flags.go

Co-Authored-By: shayzluf <thezluf@gmail.com>

* manually added logutil dep to the go image target

* Manually added the logutil dep to the go image target

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update validator/main.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update validator/main.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* syntax and styling changes required by code reviewers

* Update beacon-chain/main.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update beacon-chain/main.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* changed the return type of 'ConfigurePersistentLogging' from (bool, error) to error, based on a recommendation from code review

* ran goimports in beacon-chain/main.go after tests have failed

* Update beacon-chain/main.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update beacon-chain/main.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update beacon-chain/main.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update shared/logutil/logutil.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update validator/main.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Update validator/main.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>

* Changes requested by code reviewer

* Added a mandatory comment (linter required)  to the 'Fire' event

* Changed the beacon-chain and validator to support only the same format for stdout and file logging, due to complications in the outputs when using different formats.

* Had to run gazelle --fix due to check failure
2019-05-22 09:22:11 -04:00
Preston Van Loon
2617f5c3ac Eth1 bal monitoring followup (#2651)
* Add eth1 balance monitoring

* lint

* lint

* priority

* lint

* use value in alerts

* fix beacon-chain service

* working on stability

* more yaml

* add more alerts to the finality alerts

* add another header to ignore

* extend requirement time for low balance

* remove old flag

* remove extra flag

* feedback to use consistent flag

* PR feedback

* fix image build
2019-05-20 14:05:04 -04:00
Preston Van Loon
3f205e462f add security file (#2662) 2019-05-20 12:55:38 -04:00
Preston Van Loon
46f215b673 Renovate updates (#2661) 2019-05-20 11:10:43 -04:00
Preston Van Loon
40588021d4 Add eth1 balance monitoring alert (#2575)
* Add eth1 balance monitoring

* lint

* lint

* priority

* lint

* use value in alerts

* fix beacon-chain service

* working on stability

* more yaml

* add more alerts to the finality alerts

* add another header to ignore

* extend requirement time for low balance

* remove old flag

* remove extra flag

* feedback to use consistent flag
2019-05-19 10:52:17 -04:00
Raul Jordan
632f6797cd Reverts #2638 #2637 #2630 (#2640)
* Revert "fix nil block (#2638)"

This reverts commit d43ea74244.

* Revert "add to topic mapping (#2637)"

This reverts commit 85ef099360.

* Revert "Reorg to an Announced Finalized Block if On a Different Chain (#2630)"

This reverts commit 08288f0958.
2019-05-17 22:27:44 -04:00
terence tsao
3bad541f3c Revert "remove canonical attestation filtering (#2635)" (#2639) 2019-05-17 22:16:45 -04:00
Raul Jordan
d43ea74244 fix nil block (#2638) 2019-05-17 21:31:28 -04:00
Raul Jordan
85ef099360 add to topic mapping (#2637) 2019-05-17 20:48:28 -04:00
Raul Jordan
50063912a8 Remove Expensive Participation Rate Prometheus Gauge (#2636)
* rem expensive prom gauge

* rem prom
2019-05-17 19:57:13 -04:00
terence tsao
becd06553b remove canonical attestation filtering (#2635) 2019-05-17 19:47:16 -04:00
Preston Van Loon
bdf4590b86 Fix config value, yaml fixes (#2634)
* minor fixes

* go back to 8
2019-05-17 19:27:13 -04:00
Preston Van Loon
208c5dfea6 Revert "Disable libp2p TLS security protocols for now (#2622)" (#2633) 2019-05-17 18:59:01 -04:00
Nishant Das
25ce3a3676 Add Excess Deposit Flag to allow Validator Balances more than 32 ETH (#2625)
* add flags and code

* adding tests

* gaz

* Update shared/featureconfig/config.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
2019-05-17 17:10:34 -04:00
Raul Jordan
08288f0958 Reorg to an Announced Finalized Block if On a Different Chain (#2630)
* proto changes

* spacings

* rem root

* finalized root

* regen

* handle finalized state announcement

* handle finalized announcement

* fixed broken tests

* finalized state switch

* tests passing

* sync service imports

* check interface impl
2019-05-17 16:58:04 -04:00
Preston Van Loon
72d1fa2899 Change a few params in k8s (#2628) 2019-05-17 16:28:53 -04:00
terence tsao
3349fb4cba increase slots per epoch to 16 (#2627) 2019-05-17 14:41:49 -04:00
Preston Van Loon
15cac0c0b1 Disable libp2p TLS security protocols for now (#2622)
* Disable security protocols for now

* Enabling security for test only. See https://github.com/libp2p/go-libp2p-swarm/issues/124

* Fix spacing
2019-05-17 13:29:25 -04:00
terence tsao
b9fe8b172c Filter Canonical Attestation by Default (#2626)
* exclusive of finalized block

* filter canonical attestations by default
2019-05-17 13:17:29 -04:00
Preston Van Loon
dd734f23c3 Handle unmarshal failures (#2624) 2019-05-17 11:12:17 -04:00
Raul Jordan
40fb4b01fa Override Finalized State Announcement Proto (#2623) 2019-05-17 10:59:00 -04:00
Preston Van Loon
d20c3d6cf7 Add better, incremental reputation (#2618)
* add better, incremental reputation

* remove space

* Lint
2019-05-17 22:04:38 +08:00
Raul Jordan
15a48dbd75 New Finalized State Announcement Protobuf (#2619) 2019-05-16 22:06:40 -04:00
Nishant Das
fc4fd7834b fix panic (#2613) 2019-05-16 11:17:26 -04:00
terence tsao
5e4b9c0909 Add Total Vote Count for Block Tree (#2576) 2019-05-16 11:05:27 -04:00
Preston Van Loon
4837629091 Add timestamp metadata to p2p messages (#2611) 2019-05-15 21:28:02 -07:00
Nishant Das
64ce41f9fc Add Check for Goroutines Count (#2608)
* changes

* revert ide

* goimports

* Update shared/cmd/flags.go
2019-05-15 10:38:27 -04:00
Nishant Das
56130404fc Change Logging of Failed Attestations (#2599)
* change logging

* review comments

* gazelle
2019-05-15 09:09:39 +08:00
Andrew
d672a06026 Updated README Deploying with Docker in Windows (#2598)
Updated instructions for running the Beacon Node in Docker on Windows. The process requires a few additional steps to make a local volume available to mount the data dir. If this doesn't happen, the /tmp/prysm-data dir won't be created at runtime, the chaindata will not be stored locally, and the account creation process will not store the key files appropriately for reference when running the validator.
2019-05-14 10:40:52 -04:00
terence tsao
c10c45c4b1 Renovate Renovate Updates (#2587)
* exclusive of finalized block

* Update com_github_atlassian_bazel_tools commit hash to 20cbdb1

* Update io_bazel_rules_k8s commit hash to 7475ba2

* Update dependency build_bazel_rules_nodejs to v0.29.0

* Update dependency com_github_jbenet_goprocess to v0.1.3

* Update dependency com_github_prometheus_common to v0.4.0

* Update dependency io_bazel_rules_go to v0.18.4

* Update dependency org_golang_google_api to v0.5.0

* Update libp2p

* Update prysm_testnet_site commit hash to 0438607

* renovate updates

* fixed duplication
2019-05-14 01:37:12 -04:00
Preston Van Loon
fd4c7ffc07 Add new log formats and set fluentd as default for cluster (#2594)
* Add new log formats and set fluentd as default for cluster

* fix image build
2019-05-13 20:43:04 -04:00
Preston Van Loon
23880351f8 Handle race condition in progress (#2589)
* Handle in progress condition

* Handle in progress condition

* add test too

* space
2019-05-13 14:42:57 -04:00
Nishant Das
e33a6d8aa5 Fix Bitfield Errors (#2588) 2019-05-13 06:43:02 -07:00
Nishant Das
eef35996de Blacklist Keys in Cluster PK Manager (#2536)
* initialize rpc client

* add struct

* gaz

* add keymap

* update proto and add rpc methods

* update proto

* add method

* make changes

* add routine

* gaz

* mockgen

* error fix

* prom metric

* Some improvements

* fixes

* fix and working cluster pk manager

* fix and working cluster pk manager

* fix and working cluster pk manager

* fix and working cluster pk manager

* regen mocks and pb.go

* k8s
2019-05-12 16:38:37 -04:00
Jim McDonald
215e6fc494 Do not panic when the beacon node is shut down. (#2571) 2019-05-12 11:58:07 -04:00
Antoine Toulme
c9ce8b5246 Allow discovery to be removed, and add peers explicitly to peer store (#2557)
* Allow discovery to be removed, and add peers explicitly to peer store

* Changes after code review

* Update shared/cmd/flags.go

Co-Authored-By: Preston Van Loon <preston@prysmaticlabs.com>
2019-05-11 18:02:58 -04:00
Preston Van Loon
9f7f7d6cff Tracing improvements (#2570)
* some improvements

* fix

* gazelle

* disable lostcancel
2019-05-11 17:43:55 -04:00
terence tsao
cf8e474410 Avoid Panic Retrieving Validator Public Key (#2566)
* exclusive of finalized block

* fixed saveValidatorIdx to skip validator not in state

* fixed test

* tests

* comment

* comment

* fixed test

* comment
2019-05-11 17:08:00 -04:00
Nishant Das
678ffa607e Fix Bitfield in Attestations (#2565)
* fix bitfield

* test

* fix reference

* fix tests

* remove test

* fix test

* add new helper

* add test

* fix tests

* fix test

* gaz

* add continue
2019-05-11 16:49:09 -04:00
terence tsao
d34656a76d Add Nil Block Conditions for Block Cache (#2569)
* exclusive of finalized block

* add nil blk conditions
2019-05-11 16:37:06 -04:00
Preston Van Loon
78a76e56fb Add alert manager config (#2564) 2019-05-11 09:21:21 -07:00
Nishant Das
d1fa88ce4b Pool Attestations from Sync (#2559)
* pooling attestations

* add test

* change limit

* comment

* terence's review

* handle zero case

* add metrics

* more metrics
2019-05-11 10:21:26 +08:00
terence tsao
39a3689a57 Implement Block Cache in DB (#2560) 2019-05-10 18:19:46 -07:00
Raul Jordan
94dbac4016 Fix BlockTree RPC Server Response (#2556) 2019-05-10 10:07:43 -07:00
Preston Van Loon
fc1fbf8017 Use a prysm specific DHT protocol (#2558)
* use a prysm specific DHT

* gazelle

* space
2019-05-10 11:56:30 -04:00
Nishant Das
a4d50f097e Fix Logging in Validator Client (#2555) 2019-05-10 06:43:04 -07:00
Preston Van Loon
9a82845c3c Fix lint issues (#2554)
* fix broadcast debug message

* feedback

* imports

* lint
2019-05-10 11:59:30 +08:00
Preston Van Loon
65f4c78750 Only marshal broadcast debug message when actually logging debug (#2553)
* fix broadcast debug message

* feedback
2019-05-09 22:57:47 -04:00
terence tsao
ed8a88337b Can't save attestation target when head is nil (#2530)
* take care nil block

* warn to info

* preston's feedback
2019-05-09 22:34:31 -04:00
terence tsao
13e9bb5020 Filter Canonical Attester for RPC (#2551)
* exclusive of finalized block

* add filter to only include canonical attestation

* comments

* grammer

* gaz

* typo

* fixed existing tests

* added test for IsAttCanonical

* add nil blocks test
2019-05-09 18:53:19 -05:00
Preston Van Loon
991ee7e81b "Super sync" and naive p2p reputation (#2550)
* checkpoint on super sync with reputation

* ensure handling only expected peers msg

* exclusive of finalized block

* skip block saved already

* clean up struct

* remove 2 more fields

* _

* everything builds, but doesnt test yet

* lint

* fix p2p tests

* space

* space

* space

* fmt

* fmt
2019-05-09 16:02:24 -05:00
Raul Jordan
ecef1093eb Fetch Block Tree from Justified Block to Highest Observed Slot via RPC (#2549)
* test block tree req

* tree improvement

* use the right data

* block tree blocked by children func

* rem file

* imports

* add ctx

* imports

* mock

* check expired context

* added block root

* gazelle

* sace
2019-05-09 12:38:05 -05:00
Raul Jordan
c1dfa2677e Prevent Reorgs if Chain Head Does Not Change (#2548)
* revent reorgs if head does not change

* lint

* spacing
2019-05-09 11:42:24 -05:00
Nishant Das
5fc6f2d728 PreChainStart Activation Fix (#2544)
* fix activation

* remove logs

* remove logs

* revert change

* fix test
2019-05-09 11:20:44 -05:00
terence tsao
729c45df67 exclusive of finalized block (#2547) 2019-05-09 08:51:33 -07:00
Raul Jordan
a4128f691b Refactor DB Package to Enable Multiple Blocks/States at Slots (#2540)
* prefixed blocks blocked

* db refactor

* new historical state saving

* builds but tests fail

* more tests pass

* fix tests

* fix tests

* delete buf

* Update beacon-chain/db/block.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* Update beacon-chain/db/block.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* rem unused
2019-05-09 10:42:12 -05:00
Preston Van Loon
7c47db0015 add attestation data req cache (#2542)
* add attestation data req cache

* add tests

* godocs

* fix cache size gauge

* lint

* fix tests

* gazelle

* add more comments
2019-05-08 19:27:29 -05:00
terence tsao
b05f64ff91 enhance forkchoice log (#2537) 2019-05-08 19:00:30 -05:00
Preston Van Loon
8a4f322e2c Check context has not expired before expensive operations (#2541)
* use ctx.Err for potentially expensive RPC methods, use batch for saving attestations

* more

* in sync too

* Update BUILD.bazel

* fix spacing
2019-05-08 18:51:00 -05:00
terence tsao
104966b63d Sync Responds With Canonical Block Lists (#2539)
* first attempt at canonical blk list

* lint

* condition 1

* ctx w/ time out

* added canonical block list tests

* revert

* add to BeaconChainFlags

* dont use map, use proto

* attempt to use proto, take 1

* add run

* like canonical better than head

* removed unused

* Update proto/beacon/p2p/v1/messages.proto

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* protos
2019-05-08 18:23:06 -05:00
terence tsao
fe3fd57600 removed unused doesParentExist (#2538) 2019-05-08 11:37:00 -05:00
Raul Jordan
0bab9f492d Do Not Run Fork Choice on Block Proposals (#2526) 2019-05-07 23:02:52 -07:00
Preston Van Loon
57495bc8fe Revert "Canonical Blocks for Batch Block Request (#2511)" (#2532)
This reverts commit a818564b8d.
2019-05-08 00:52:34 -05:00
Preston Van Loon
e5cb1db5bc Sort list before processing batched blocks (#2531) 2019-05-08 12:27:00 +08:00
Raul Jordan
76881fd1ae Do Not Subscribe to Blocks in Initial Sync (#2524)
* only sub to block batches

* batch sub remove

* tests

* fix lint

* gazelle

* delete old im mem blocks code
2019-05-07 21:12:36 -05:00
terence tsao
7642f950d8 delete failed pending atts (#2528) 2019-05-07 18:46:16 -07:00
terence tsao
eb626e5834 fixed atts verification (#2527) 2019-05-07 15:51:41 -07:00
terence tsao
0f0510096e Update Attestation Target for AttestHead (#2525)
* update attestation target for AttestHead

* fixed test
2019-05-07 17:31:06 -05:00
Nishant Das
1be950f90c fix validator flags (#2518) 2019-05-06 21:49:03 -05:00
813 changed files with 83654 additions and 60701 deletions

24
.bazelrc Normal file

@@ -0,0 +1,24 @@
# Print warnings for tests with inappropriate test size or timeout.
test --test_verbose_timeout_warnings
# Only build test targets when running bazel test //...
test --build_tests_only
test --test_output=errors
# Fix for rules_docker. See: https://github.com/bazelbuild/rules_docker/issues/842
build --host_force_python=PY2
test --host_force_python=PY2
run --host_force_python=PY2
# Networking is blocked for tests by default, add "requires-network" tag to your test if networking
# is required within the sandbox. This flag is no longer experimental after 0.29.0.
# Network sandboxing only works on linux.
--experimental_sandbox_default_allow_network=false
# Use minimal protobufs at runtime
run --define ssz=minimal
# Prevent PATH changes from rebuilding when switching from IDE to command line.
build --incompatible_strict_action_env
test --incompatible_strict_action_env
run --incompatible_strict_action_env


@@ -2,8 +2,7 @@
# across machines, developers, and workspaces.
#
# This config is loaded from https://github.com/bazelbuild/bazel-toolchains/blob/master/bazelrc/latest.bazelrc
build:remote-cache --remote_cache=remotebuildexecution.googleapis.com
build:remote-cache --tls_enabled=true
build:remote-cache --remote_cache=grpcs://remotebuildexecution.googleapis.com
build:remote-cache --remote_timeout=3600
build:remote-cache --auth_enabled=true
build:remote-cache --spawn_strategy=standalone
@@ -11,12 +10,26 @@ build:remote-cache --strategy=Javac=standalone
build:remote-cache --strategy=Closure=standalone
build:remote-cache --strategy=Genrule=standalone
# Build results backend.
build:remote-cache --bes_results_url="https://source.cloud.google.com/results/invocations/"
build:remote-cache --bes_backend=buildeventservice.googleapis.com
build:remote-cache --bes_timeout=60s
build:remote-cache --project_id=prysmaticlabs
build:remote-cache --bes_upload_mode=fully_async
# Prysm specific remote-cache properties.
build:remote-cache --disk_cache=
build:remote-cache --jobs=50
build:remote-cache --host_platform_remote_properties_override='properties:{name:\"cache-silo-key\" value:\"prysm\"}'
build:remote-cache --remote_instance_name=projects/prysmaticlabs/instances/default_instance
build:remote-cache --experimental_remote_download_outputs=minimal
build:remote-cache --experimental_inmemory_jdeps_files
build:remote-cache --experimental_inmemory_dotd_files
# Import workspace options.
import %workspace%/.bazelrc
startup --host_jvm_args=-Xmx1000m --host_jvm_args=-Xms1000m
build --experimental_strict_action_env
build --disk_cache=/tmp/bazelbuilds
build --experimental_multi_threaded_digest
@@ -28,6 +41,8 @@ build --curses=yes --color=yes
build --keep_going
build --test_output=errors
build --flaky_test_attempts=5
build --test_timeout=5,60,-1,-1
build --jobs=50
build --stamp
test --local_test_jobs=2
# Disabled race detection due to unstable test results under the constrained Buildkite environment
# build --features=race


@@ -27,3 +27,7 @@ comment:
layout: "header, diff"
behavior: default
require_changes: no
ignore:
- "**/*.pb.go"
- "**/*_mock.go"

2
.dockerignore Normal file

@@ -0,0 +1,2 @@
bazel-*
.git

1
.github/FUNDING.yml vendored Normal file

@@ -0,0 +1 @@
custom: https://gitcoin.co/grants/24/prysm-by-prysmatic-labs

6
.gitignore vendored

@@ -21,4 +21,8 @@ yarn-error.log
.vscode/
# Ignore password file
password.txt
password.txt
# go dependency
/go.mod
/go.sum


@@ -1,18 +0,0 @@
{
"extends": "solium:recommended",
"plugins": [
"security"
],
"rules": {
"quotes": [
"error",
"double"
],
"security/no-inline-assembly": ["warning"],
"indentation": [
"error",
4
]
}
}


@@ -12,7 +12,7 @@ matrix:
- go get ${gobuild_args} -t ./...
- go get ${gobuild_args} github.com/golangci/golangci-lint/cmd/golangci-lint
script:
- golangci-lint run
- golangci-lint run --skip-dirs ./proto
email: false
after_success:
- wget https://raw.githubusercontent.com/k3rn31p4nic/travis-ci-discord-webhook/master/send.sh

26
.well-known/security.txt Normal file

@@ -0,0 +1,26 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Contact: mailto:security@prysmaticlabs.com
Encryption: openpgp4fpr:0AE0051D647BA3C1A917AF4072E33E4DF1A5036E
Encryption: openpgp4fpr:341396BAFACC28C5082327F889725027FC8EC0D4
Encryption: openpgp4fpr:8B7814F1B221A8E8AA465FC7BDBF744ADE1A0033
Preferred-Languages: en
Canonical: https://github.com/prysmaticlabs/prysm/tree/master/.well-known/security.txt
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAlzi0WgACgkQcuM+TfGl
A241pw/+Ks3Hxx8eGbjRIeuncuK811FkCiofNJS+MY2p4W2/tIrk48DtLRx8/k5L
Dh1QyypZsqUgofrK7PbGVdEin6oEb2jYbTWUarAVTbhlsUdM4YcxwpgmGVslW7+C
Hm8wMasQZhCkFfakzhfKX5hIQoFaFI/OvtVKIQsodP8dAieCDaGmtfq1Bs1LgFqi
KrpeEdC2XbBQs33ADheC5SdGT1mnatP3VX8cOhLsfoPksYgTSpwK0clkoWs1eZOQ
l1ImfW/FJCpSndBWgBR503ZgaU3Ic+5qxmAIuUP4chl0DFRMlPFEM5OWC6JkkCOd
5kKrXGRmrhgtQg+pA3zqJnFItRj7gxPBA/ypxCkKPrLEkRvbdpdZEl5vAlYkeBL6
iKSLHnMswGKldiYxy7ofam5bM3myhYYNFb25boV5pRptrnoUmWOACHioBGQHwWNt
B0XktD0j7+pCCiJyyYxmOnElsk/Y/u4Tv5pYWvfFuxTF2XOg+P/EH64AIFLWgB1U
VnITxhakxqejCBxZkuVCFNSzt+TXG0NS9EIj/UOYBY+wxrBZ62ITjdA16RS/3n3z
DuIDtxOOwUumbOO32+a5zIb+ARmnocYJviI7FuENb01/U6qb+nm9hQI6oIpSCNsv
Pb4O/ZlOx70U/7mt4Xn/dTKH9bnKOOVhOw00KJWFfAce73AVnLA=
=Uhqg
-----END PGP SIGNATURE-----


@@ -3,10 +3,14 @@ load("@com_github_atlassian_bazel_tools//gometalinter:def.bzl", "gometalinter")
load("@com_github_atlassian_bazel_tools//goimports:def.bzl", "goimports")
load("@io_kubernetes_build//defs:run_in_workspace.bzl", "workspace_binary")
load("@io_bazel_rules_go//go:def.bzl", "nogo")
load("@graknlabs_bazel_distribution//common:rules.bzl", "assemble_targz", "assemble_versioned")
load("//tools:binary_targets.bzl", "binary_targets", "determine_targets")
prefix = "github.com/prysmaticlabs/prysm"
exports_files(["genesis.json"])
exports_files([
"LICENSE.md",
])
# gazelle:prefix github.com/prysmaticlabs/prysm
gazelle(
@@ -32,6 +36,24 @@ alias(
],
)
# Protobuf gRPC compiler without gogoproto. Required for gRPC gateway.
alias(
name = "grpc_nogogo_proto_compiler",
actual = "@io_bazel_rules_go//proto:go_grpc",
visibility = [
"//proto:__subpackages__",
],
)
# Protobuf gRPC gateway compiler
alias(
name = "grpc_gateway_proto_compiler",
actual = "@grpc_ecosystem_grpc_gateway//protoc-gen-grpc-gateway:go_gen_grpc_gateway",
visibility = [
"//proto:__subpackages__",
],
)
gometalinter(
name = "gometalinter",
config = "//:.gometalinter.json",
@@ -44,8 +66,8 @@ gometalinter(
goimports(
name = "goimports",
display_diffs = True,
write = False,
prefix = prefix,
write = False,
)
workspace_binary(
@@ -55,6 +77,8 @@ workspace_binary(
nogo(
name = "nogo",
config = "nogo_config.json",
visibility = ["//visibility:public"],
deps = [
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_tool_library",
@@ -68,7 +92,8 @@ nogo(
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_tool_library",
# "@org_golang_x_tools//go/analysis/passes/lostcancel:go_tool_library",
# lost cancel ignore doesn't seem to work when running with coverage
#"@org_golang_x_tools//go/analysis/passes/lostcancel:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_tool_library",
@@ -86,6 +111,35 @@ nogo(
"@org_golang_x_tools//go/analysis/passes/inspect:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_tool_library",
],
visibility = ["//visibility:public"],
config = "nogo_config.json",
)
assemble_versioned(
name = "assemble-versioned-all",
tags = ["manual"],
targets = [
":assemble-{}-{}-targz".format(
pair[0],
pair[1],
)
for pair in binary_targets
],
version_file = "//:VERSION",
)
common_files = {
"//:LICENSE.md": "LICENSE.md",
"//:README.md": "README.md",
}
[assemble_targz(
name = "assemble-{}-{}-targz".format(
pair[0],
pair[1],
),
additional_files = determine_targets(pair, common_files),
output_filename = "prysm-{}-{}".format(
pair[0],
pair[1],
),
tags = ["manual"],
) for pair in binary_targets]


@@ -63,7 +63,7 @@ $ go test <file_you_are_working_on>
Changes that affect multiple files can be tested with ...
```
$ gometalinter && bazel test
$ golangci-lint run && bazel test //...
```
**10. Stage the file or files that you want to commit.**
@@ -88,10 +88,10 @@ You can use the amend flag to include previous commits that have not yet been
$ git fetch prysm
```
**13. Rebase your branch atop of the latest version of Prysm.**
**13. Pull latest version of Prysm.**
```
$ git rebase prysm/master
$ git pull origin master
```
If there are conflicts between your edits and those made by others since you started work Git will ask you to resolve them. To find out which files have conflicts run ...
@@ -115,10 +115,10 @@ The code from the Prysm repo is inserted between <<< and === while the change yo
**14. Push your changes to your fork of the Prysm repo.**
Rebasing a pull request changes the history on your branch, so Git will reject a normal git push after a rebase. Use a force push to move your changes to your fork of the repo.
Use git push to move your changes to your fork of the repo.
```
$ git push myrepo feature-in-progress-branch -f
$ git push myrepo feature-in-progress-branch
```
**15. Check to be sure your fork of the Prysm repo contains your feature branch with the latest edits.**

93
INTEROP.md Normal file

@@ -0,0 +1,93 @@
# Prysm Client Interoperability Guide
This README details how to set up Prysm for interop testing with other Ethereum 2.0 clients.
## Installation & Setup
1. Install [Bazel](https://docs.bazel.build/versions/master/install.html) **(Recommended)**
2. `git clone https://github.com/prysmaticlabs/prysm && cd prysm`
3. `bazel build //...`
## Starting from Genesis
Prysm supports a few ways to quickly launch a beacon node from basic configurations:
- `NumValidators + GenesisTime`: Launches a beacon node by deterministically generating a state from a num-validators flag along with a genesis time **(Recommended)**
- `SSZ Genesis`: Launches a beacon node from a .ssz file containing an SSZ-encoded genesis beacon state
## Generating a Genesis State
To setup the necessary files for these quick starts, Prysm provides a tool to generate a `genesis.ssz` from
a deterministically generated set of validator private keys following the official interop YAML format
[here](https://github.com/ethereum/eth2.0-pm/blob/master/interop/mocked_start).
You can use `bazel run //tools/genesis-state-gen` to create a deterministic genesis state for interop.
### Usage
- **--genesis-time** uint: Unix timestamp used as the genesis time in the generated genesis state (defaults to now)
- **--mainnet-config** bool: Select whether genesis state should be generated with mainnet or minimal (default) params
- **--num-validators** int: Number of validators to deterministically include in the generated genesis state
- **--output-ssz** string: Output filename of the SSZ marshaling of the generated genesis state
The example below creates 64 validator keys, instantiates a genesis state with those 64 validators and a genesis unix timestamp of 1567542540, and finally writes an SSZ-encoded output to ~/Desktop/genesis.ssz. This file can be used to kickstart the beacon chain in the next section.
```
bazel run //tools/genesis-state-gen -- --output-ssz ~/Desktop/genesis.ssz --num-validators 64 --genesis-time 1567542540
```
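If you want to sanity-check the generated file from Go, the minimal sketch below reads `genesis.ssz` back and prints a couple of its fields. It is a hedged example, not part of the tool: it assumes the `Unmarshal` helper from the vendored `go-ssz` library and the `BeaconState` proto in `proto/beacon/p2p/v1`, and the file path is illustrative.
```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	ssz "github.com/prysmaticlabs/go-ssz"
	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)

func main() {
	// Read the SSZ-encoded genesis state produced by genesis-state-gen.
	raw, err := ioutil.ReadFile("genesis.ssz") // illustrative path
	if err != nil {
		log.Fatal(err)
	}

	// Decode it back into a beacon state proto and inspect it.
	state := &pb.BeaconState{}
	if err := ssz.Unmarshal(raw, state); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("genesis time: %d, validators: %d\n",
		state.GenesisTime, len(state.Validators))
}
```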
## Launching a Beacon Node + Validator Client
### Launching from Pure CLI Flags
Open up two terminal windows, run:
```
bazel run //beacon-chain -- \
--no-genesis-delay \
--bootstrap-node= \
--deposit-contract 0xD775140349E6A5D12524C6ccc3d6A1d4519D4029 \
--clear-db \
--interop-num-validators 64 \
--interop-eth1data-votes
```
This will deterministically generate a beacon genesis state and start
the system with 64 validators and the genesis time set to the current unix timestamp.
Wait a bit until your beacon chain starts, and in the other window:
```
bazel run //validator -- --interop-num-validators 64
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.
### Launching from `genesis.ssz`
Assuming you generated a `genesis.ssz` file with 64 validators, open up two terminal windows, run:
```
bazel run //beacon-chain -- \
--no-genesis-delay \
--bootstrap-node= \
--deposit-contract 0xD775140349E6A5D12524C6ccc3d6A1d4519D4029 \
--clear-db \
--interop-genesis-state /path/to/genesis.ssz \
--interop-eth1data-votes
```
Wait a bit until your beacon chain starts, and in the other window:
```
bazel run //validator -- --interop-num-validators 64
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.

197
README.md

@@ -1,122 +1,195 @@
# Prysmatic Labs Ethereum Serenity Implementation
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg)](https://buildkite.com/prysmatic-labs/prysm)
This is the main repository for the Go implementation of the Ethereum 2.0 Serenity [Prysmatic Labs](https://prysmaticlabs.com).
Before you begin, check out our [official documentation portal](https://prysmaticlabs.gitbook.io/prysm/) and join our active chat room on Discord or Gitter below:
# Prysm: Ethereum 'Serenity' 2.0 Go Implementation
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![ETH2.0_Spec_Version 0.8.1](https://img.shields.io/badge/ETH2.0%20Spec%20Version-v0.8.1-blue.svg)](https://github.com/ethereum/eth2.0-specs/commit/452ecf8e27c7852c7854597f2b1bb4a62b80c7ec)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/KSA7rPr)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
Also, read our [Roadmap Reference Implementation Doc](https://github.com/prysmaticlabs/prysm/blob/master/docs/ROADMAP.md). This doc provides a background on the milestones we aim for the project to achieve.
This is the Core repository for Prysm, [Prysmatic Labs](https://prysmaticlabs.com)' [Go](https://golang.org/) implementation of the Ethereum protocol 2.0 (Serenity).
### Need assistance?
A more detailed set of installation and usage instructions as well as explanations of each component are available on our [official documentation portal](https://prysmaticlabs.gitbook.io/prysm/). If you still have questions, feel free to stop by either our [Discord](https://discord.gg/KSA7rPr) or [Gitter](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) and a member of the team or our community will be happy to assist you.
**Interested in what's next?** Be sure to read our [Roadmap Reference Implementation](https://github.com/prysmaticlabs/prysm/blob/master/docs/ROADMAP.md) document. This page outlines the basics of sharding as well as the various short-term milestones that we hope to achieve over the coming year.
### Come join the testnet!
Participation is now open to the public in our testnet release for Ethereum 2.0 phase 0. Visit [prylabs.net](https://prylabs.net) for more information on the project itself or to sign up as a validator on the network.
# Table of Contents
- [Join Our Testnet](#join-our-testnet)
- [Dependencies](#dependencies)
- [Installation](#installation)
- [Run Via Docker](#run-via-docker-recommended)
- [Run Via Bazel](#run-via-bazel)
- [Prysm Main Components](#prysm-main-components)
- [Running an Ethereum 2.0 Beacon Node](#running-an-ethereum-20-beacon-node)
- [Staking ETH: Running a Validator Client](#staking-eth-running-a-validator-client)
- [Testing](#testing)
- [Build Via Docker](#build-via-docker)
- [Build Via Bazel](#build-via-bazel)
- [Running an Ethereum 2.0 Beacon Node](#running-an-ethereum-20-beacon-node)
- [Staking ETH: Running a Validator Client](#staking-eth-running-a-validator-client)
- [Testing Prysm](#testing-prysm)
- [Contributing](#contributing)
- [License](#license)
# Join Our Testnet
## Dependencies
Prysm can be installed either with Docker **(recommended method)** or using our build tool, Bazel. The below instructions include sections for performing both.
You can now participate in our public testnet release for Ethereum 2.0 phase 0. Visit [prylabs.net](https://prylabs.net) 💎 to participate!
**For Docker installations:**
- The latest release of [Docker](https://docs.docker.com/install/)
# Installing Prysm
**For Bazel installations:**
- The latest release of [Bazel](https://docs.bazel.build/versions/master/install.html)
- A modern UNIX operating system (MacOS included)
### Installation Options
You can either choose to run our system via:
- Our latest [release](https://github.com/prysmaticlabs/prysm/releases) **(Easiest)**
- Using Docker **(Recommended)**
- Using Our Build Tool, Bazel
### Fetching via Docker (Recommended)
Docker is a convenient way to run Prysm, as all you need to do is fetch the latest images:
## Installation
### Build via Docker
1. Ensure you are running the most recent version of Docker by issuing the command:
```
docker -v
```
2. To pull the Prysm images from the server, issue the following commands:
```
docker pull gcr.io/prysmaticlabs/prysm/validator:latest
docker pull gcr.io/prysmaticlabs/prysm/beacon-chain:latest
```
This process will also install any related dependencies.
### Build Via Bazel
First, clone our repository:
```
git clone https://github.com/prysmaticlabs/prysm
```
Download the Bazel build tool by Google here and ensure it works by typing:
### Build via Bazel
1. Open a terminal window. Ensure you are running the most recent version of Bazel by issuing the command:
```
bazel version
```
Bazel manages all of the dependencies for you (including go and necessary compilers) so you are all set to build prysm. Then, build both parts of our system: a beacon chain node implementation, and a validator client:
2. Clone this repository and enter the directory:
```
git clone https://github.com/prysmaticlabs/prysm
cd prysm
```
3. Build both the beacon chain node implementation and the validator client:
```
bazel build //beacon-chain:beacon-chain
bazel build //validator:validator
```
Bazel will automatically pull and install any dependencies as well, including Go and necessary compilers.
# Prysm Main Components
Prysm ships with two important components: a beacon node and a validator client. The beacon node is the server that performs the heavy lifting of Ethereum 2.0. A validator client is another piece of software that securely connects to the beacon node and allows you to stake 3.2 Goerli ETH in order to secure the network. You'll be mostly interacting with the validator client to manage your stake.
Another critical component of Ethereum 2.0 is the Validator Deposit Contract, a smart contract deployed on the Ethereum 1.0 chain that allows current holders of ETH to make a one-way transfer into Ethereum 2.0.
### Running an Ethereum 2.0 Beacon Node
With docker:
Note that to build with the appropriate configuration for the Prysm testnet you should run:
```
docker run -v /tmp/prysm-data:/data -p 4000:4000 \
bazel build --define ssz=minimal //beacon-chain:beacon-chain
bazel build --define ssz=minimal //validator:validator
```
The binaries will be created in an architecture-dependent subdirectory of `bazel-bin` and this information is supplied as part of bazel's build process. For example:
```
$ bazel build --define ssz=minimal //beacon-chain:beacon-chain
...
Target //beacon-chain:beacon-chain up-to-date:
bazel-bin/beacon-chain/linux_amd64_stripped/beacon-chain
...
```
Here it can be seen that the beacon chain binary has been created at `bazel-bin/beacon-chain/linux_amd64_stripped/beacon-chain`.
## Running an Ethereum 2.0 Beacon Node
To understand the role that both the beacon node and validator play in Prysm, see [this section of our documentation](https://prysmaticlabs.gitbook.io/prysm/how-prysm-works/overview-technical).
### Running via Docker
**Docker on Linux/Mac:**
To start your beacon node, issue the following command:
```
docker run -v $HOME/prysm-data:/data -p 4000:4000 \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--name beacon-node \
--datadir=/data
```
You can stop the beacon node using `Ctrl+c` or with the following command:
```
docker stop beacon-node
```
Then it can be restarted again with
```
docker start -ai beacon-node
```
If you run into issues you can always delete the container like this:
```
docker rm beacon-node
```
and re-create it again and even reset the chain database adding the parameter `--clear-db` as specified here:
```
docker run -it -v $HOME/prysm-data:/data -p 4000:4000 \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--name beacon-node \
--datadir=/data \
--clear-db
```
To start your beacon node with bazel:
**Docker on Windows:**
1) You will need to share the local drive you wish to mount to the container (e.g. C:).
1. Enter Docker settings (right click the tray icon)
2. Click 'Shared Drives'
3. Select a drive to share
4. Click 'Apply'
2) You will next need to create a directory named ```/tmp/prysm-data/``` within your selected shared Drive. This folder will be used as a local data directory for Beacon Node chain data as well as account and keystore information required by the validator. Docker will **not** create this directory if it does not exist already. For the purposes of these instructions, it is assumed that ```C:``` is your prior-selected shared Drive.
3) To run the beacon node, issue the following command:
```
docker run -it -v c:/tmp/prysm-data:/data -p 4000:4000 gcr.io/prysmaticlabs/prysm/beacon-chain:latest --datadir=/data --clear-db
```
### Running via Bazel
1) To start your Beacon Node with Bazel, issue the following command:
```
bazel run //beacon-chain -- --clear-db --datadir=/tmp/prysm-data
```
This will sync you up with the latest head block in the network, and then you'll have a ready beacon node.
The chain will then be waiting for you to deposit 3.2 Goerli ETH into the Validator Deposit Contract before your validator can become active! Now, you'll need to create a validator client to connect to this node and stake 3.2 Goerli ETH to participate as a validator in Ethereum 2.0's Proof of Stake system.
This will sync up the Beacon Node with the latest head block in the network.
### Staking ETH: Running a Validator Client
Once your beacon node is up, you'll need to attach a validator client as a separate process. Each validator represents 3.2 Goerli ETH being staked in the system, so you can spin up as many as you want to have more at stake in the network
**Activating Your Validator: Depositing 3.2 Goerli ETH**
## Staking ETH: Running a Validator Client
Using your validator deposit data from the previous step, use the instructions in https://alpha.prylabs.net/participate to deposit.
Once your beacon node is up and **completely synced** (otherwise you will lose validator funds since the validator will not be able to operate), the chain will be waiting for you to deposit 3.2 Goerli ETH into the Validator Deposit Contract to activate your validator (discussed in the section below). First though, you will need to create a *validator client* to connect to this node in order to stake and participate. Each validator represents 3.2 Goerli ETH being staked in the system, and it is possible to spin up as many as you desire in order to have more stake in the network.
It'll take a while for the nodes in the network to process your deposit, but once you're active, your validator will begin doing its responsibility! In your validator client, you'll be able to frequently see your validator balance as it goes up. If you ever go offline for a while, you'll start gradually losing your deposit until you get kicked out of the system. Congratulations, you are now running Ethereum 2.0 Phase 0 :).
### Activating Your Validator: Depositing 3.2 Goerli ETH
# Testing
Using your validator deposit data from the previous step, follow the instructions found on https://prylabs.net/participate to make a deposit.
To run the unit tests of our system do:
It will take a while for the nodes in the network to process your deposit, but once your node is active, the validator will begin doing its responsibility. In your validator client, you will be able to frequently see your validator balance as it goes up over time. Note that, should your node ever go offline for a long period, you'll start gradually losing your deposit until you are removed from the system.
### Starting the validator with Bazel
1. Open another terminal window. Enter your Prysm directory and run the validator by issuing the following command:
```
cd prysm
bazel run //validator
```
**Congratulations, you are now running Ethereum 2.0 Phase 0!**
## Testing Prysm
**To run the unit tests of our system**, issue the command:
```
bazel test //...
```
To run our linter, make sure you have [golangci-lint](https://https://github.com/golangci/golangci-lint) installed and then run:
**To run our linter**, make sure you have [golangci-lint](https://github.com/golangci/golangci-lint) installed and then issue the command:
```
golangci-lint run
```
# Contributing
## Contributing
We have put all of our contribution guidelines into [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/master/CONTRIBUTING.md)! Check it out to get started.
![nyancat](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRBSus2ozk_HuGdHMHKWjb1W5CmwwoxmYIjIBmERE1u-WeONpJJXg)
# License
## License
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)

43
TESTNET.md Normal file

@@ -0,0 +1,43 @@
# Testnet
The Prysmatic Labs test network is available for anyone to join. The easiest way to participate is by joining through the website, https://prylabs.net.
## Interop
For developers looking to connect a client other than Prysm to the test network, here is the relevant information for compatibility.
**Spec version** - [v0.8.3](https://github.com/ethereum/eth2.0-specs/tree/v0.8.3)
**ETH 1 Deposit Contract Address** - See https://prylabs.net/contract. This contract is deployed on the [goerli](https://goerli.net/) network.
**Genesis time** - The timestamp of the ETH1 block in which the 64th deposit needed to start ETH2 was included. This is NOT midnight of the next day as required by spec.
### ETH 2 Configuration
Use the [minimal config](https://github.com/ethereum/eth2.0-specs/blob/v0.8.3/configs/minimal.yaml) with the following changes.
| field | value |
|-------|-------|
| MIN_DEPOSIT_AMOUNT | 100 |
| MAX_EFFECTIVE_BALANCE | 3.2 * 1e9 |
| EJECTION_BALANCE | 1.6 * 1e9 |
| EFFECTIVE_BALANCE_INCREMENT | 0.1 * 1e9 |
| ETH1_FOLLOW_DISTANCE | 16 |
| GENESIS_FORK_VERSION | See [latest code](https://github.com/prysmaticlabs/prysm/blob/master/shared/params/config.go#L236) |
These parameters reduce the minimal config to 1/10 of the required ETH.
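For illustration only, here is a hedged Go sketch of applying the overrides above on top of the minimal config. It assumes the `shared/params` package exposes `MinimalSpecConfig` and `OverrideBeaconConfig` with the field names shown; consult the linked code for the authoritative values.
```go
package main

import "github.com/prysmaticlabs/prysm/shared/params"

func main() {
	// Start from the minimal spec config and apply the testnet
	// overrides from the table above. Balances are in Gwei.
	cfg := params.MinimalSpecConfig()
	cfg.MinDepositAmount = 100
	cfg.MaxEffectiveBalance = 3.2 * 1e9       // 3,200,000,000
	cfg.EjectionBalance = 1.6 * 1e9           // 1,600,000,000
	cfg.EffectiveBalanceIncrement = 0.1 * 1e9 // 100,000,000
	cfg.Eth1FollowDistance = 16
	params.OverrideBeaconConfig(cfg)
}
```
A sketch like this would have to run before node startup so that every package reads the overridden values.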
We have a genesis.ssz file available for download [here](https://prysmaticlabs.com/uploads/genesis.ssz)
### Connecting to the network
We have a libp2p bootstrap node available at `/dns4/prylabs.net/tcp/30001/p2p/16Uiu2HAm7Qwe19vz9WzD2Mxn7fXd1vgHHp4iccuyq7TxwRXoAGfc`.
Some of the Prysmatic Labs hosted nodes are behind a libp2p relay, so your libp2p implementation should understand the relay protocol.
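As a rough sketch (not a full client integration), dialing the bootstrap node from Go with go-libp2p might look like the following; it assumes the stock `libp2p.New`, `peer.AddrInfoFromP2pAddr`, and `Connect` APIs.
```go
package main

import (
	"context"
	"log"

	libp2p "github.com/libp2p/go-libp2p"
	peer "github.com/libp2p/go-libp2p-core/peer"
	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	ctx := context.Background()

	// Spin up a bare libp2p host with default options.
	h, err := libp2p.New(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Parse the bootstrap multiaddr published above.
	addr, err := ma.NewMultiaddr(
		"/dns4/prylabs.net/tcp/30001/p2p/16Uiu2HAm7Qwe19vz9WzD2Mxn7fXd1vgHHp4iccuyq7TxwRXoAGfc")
	if err != nil {
		log.Fatal(err)
	}
	info, err := peer.AddrInfoFromP2pAddr(addr)
	if err != nil {
		log.Fatal(err)
	}

	// Dial the bootstrap node.
	if err := h.Connect(ctx, *info); err != nil {
		log.Fatal(err)
	}
	log.Printf("connected to bootstrap node %s", info.ID.Pretty())
}
```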
### Other
Undoubtedly, you will have bugs. Reach out to us on [Discord](https://discord.gg/KSA7rPr) and be sure to capture issues on GitHub at https://github.com/prysmaticlabs/prysm/issues.
If you have instructions for your client, we would love to attempt this on your behalf. Kindly send the instructions via GitHub issue, PR, email to team@prysmaticlabs.com, or Discord.

1
VERSION Normal file

@@ -0,0 +1 @@
0.2.0

WORKSPACE

File diff suppressed because it is too large.

beacon-chain/BUILD.bazel

@@ -1,6 +1,8 @@
load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")
load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library", "go_test")
load("@io_bazel_rules_docker//go:image.bzl", "go_image")
load("@io_bazel_rules_docker//container:container.bzl", "container_push")
load("@io_bazel_rules_docker//container:container.bzl", "container_bundle")
load("//tools:binary_targets.bzl", "binary_targets")
load("@io_bazel_rules_docker//contrib:push-all.bzl", "docker_push")
go_library(
name = "go_default_library",
@@ -11,15 +13,20 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/flags:go_default_library",
"//beacon-chain/node:go_default_library",
"//beacon-chain/utils:go_default_library",
"//shared/cmd:go_default_library",
"//shared/debug:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/logutil:go_default_library",
"//shared/version:go_default_library",
"@com_github_ipfs_go_log//:go_default_library",
"@com_github_joonix_log//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@com_github_whyrusleeping_go_logging//:go_default_library",
"@com_github_x_cray_logrus_prefixed_formatter//:go_default_library",
"@org_uber_go_automaxprocs//:go_default_library",
],
)
@@ -37,27 +44,36 @@ go_image(
tags = ["manual"],
visibility = ["//visibility:private"],
deps = [
"//beacon-chain/flags:go_default_library",
"//beacon-chain/node:go_default_library",
"//beacon-chain/utils:go_default_library",
"//shared/cmd:go_default_library",
"//shared/debug:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/logutil:go_default_library",
"//shared/version:go_default_library",
"@com_github_ipfs_go_log//:go_default_library",
"@com_github_joonix_log//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@com_github_whyrusleeping_go_logging//:go_default_library",
"@com_github_x_cray_logrus_prefixed_formatter//:go_default_library",
"@org_uber_go_automaxprocs//:go_default_library",
],
)
container_push(
name = "push_image",
format = "Docker",
image = ":image",
registry = "gcr.io",
repository = "prysmaticlabs/prysm/beacon-chain",
tag = "latest",
container_bundle(
name = "image_bundle",
images = {
"gcr.io/prysmaticlabs/prysm/beacon-chain:latest": ":image",
"gcr.io/prysmaticlabs/prysm/beacon-chain:{DOCKER_TAG}": ":image",
},
tags = ["manual"],
)
docker_push(
name = "push_images",
bundle = ":image_bundle",
tags = ["manual"],
visibility = ["//visibility:private"],
)
go_binary(
@@ -65,3 +81,23 @@ go_binary(
embed = [":go_default_library"],
visibility = ["//beacon-chain:__subpackages__"],
)
go_test(
name = "go_default_test",
size = "small",
srcs = ["usage_test.go"],
embed = [":go_default_library"],
deps = ["@com_github_urfave_cli//:go_default_library"],
)
[go_binary(
name = "beacon-chain-{}-{}".format(
pair[0],
pair[1],
),
embed = [":go_default_library"],
goarch = pair[1],
goos = pair[0],
tags = ["manual"],
visibility = ["//visibility:public"],
) for pair in binary_targets]

View File

@@ -0,0 +1,39 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["service.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/archiver",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/db:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["service_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)

beacon-chain/archiver/service.go Normal file

@@ -0,0 +1,177 @@
package archiver
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "archiver")
// Service defines the archiver functionality, persisting checkpointed
// beacon chain information to a database backend for historical purposes.
type Service struct {
ctx context.Context
cancel context.CancelFunc
beaconDB db.Database
headFetcher blockchain.HeadFetcher
newHeadNotifier blockchain.NewHeadNotifier
newHeadRootChan chan [32]byte
}
// Config options for the archiver service.
type Config struct {
BeaconDB db.Database
HeadFetcher blockchain.HeadFetcher
NewHeadNotifier blockchain.NewHeadNotifier
}
// NewArchiverService initializes the service from configuration options.
func NewArchiverService(ctx context.Context, cfg *Config) *Service {
ctx, cancel := context.WithCancel(ctx)
return &Service{
ctx: ctx,
cancel: cancel,
beaconDB: cfg.BeaconDB,
headFetcher: cfg.HeadFetcher,
newHeadNotifier: cfg.NewHeadNotifier,
newHeadRootChan: make(chan [32]byte, 1),
}
}
// Start the archiver service event loop.
func (s *Service) Start() {
go s.run(s.ctx)
}
// Stop the archiver service event loop.
func (s *Service) Stop() error {
defer s.cancel()
return nil
}
// Status reports the health of the archiver. Returning nil means the service
// is running correctly without error.
func (s *Service) Status() error {
return nil
}
// We archive committee information pertaining to the head state's epoch.
func (s *Service) archiveCommitteeInfo(ctx context.Context, headState *pb.BeaconState) error {
currentEpoch := helpers.SlotToEpoch(headState.Slot)
committeeCount, err := helpers.CommitteeCount(headState, currentEpoch)
if err != nil {
return errors.Wrap(err, "could not get committee count")
}
seed, err := helpers.Seed(headState, currentEpoch)
if err != nil {
return errors.Wrap(err, "could not generate seed")
}
startShard, err := helpers.StartShard(headState, currentEpoch)
if err != nil {
return errors.Wrap(err, "could not get start shard")
}
proposerIndex, err := helpers.BeaconProposerIndex(headState)
if err != nil {
return errors.Wrap(err, "could not get beacon proposer index")
}
info := &ethpb.ArchivedCommitteeInfo{
Seed: seed[:],
StartShard: startShard,
CommitteeCount: committeeCount,
ProposerIndex: proposerIndex,
}
if err := s.beaconDB.SaveArchivedCommitteeInfo(ctx, currentEpoch, info); err != nil {
return errors.Wrap(err, "could not archive committee info")
}
return nil
}
// We archive active validator set changes that happened during the epoch.
func (s *Service) archiveActiveSetChanges(ctx context.Context, headState *pb.BeaconState) error {
activations := validators.ActivatedValidatorIndices(headState)
slashings := validators.SlashedValidatorIndices(headState)
exited, err := validators.ExitedValidatorIndices(headState)
if err != nil {
return errors.Wrap(err, "could not determine exited validator indices")
}
activeSetChanges := &ethpb.ArchivedActiveSetChanges{
Activated: activations,
Exited: exited,
Slashed: slashings,
}
if err := s.beaconDB.SaveArchivedActiveValidatorChanges(ctx, helpers.CurrentEpoch(headState), activeSetChanges); err != nil {
return errors.Wrap(err, "could not archive active validator set changes")
}
return nil
}
// We compute participation metrics by first retrieving the head state and
// matching validator attestations during the epoch.
func (s *Service) archiveParticipation(ctx context.Context, headState *pb.BeaconState) error {
participation, err := epoch.ComputeValidatorParticipation(headState)
if err != nil {
return errors.Wrap(err, "could not compute participation")
}
return s.beaconDB.SaveArchivedValidatorParticipation(ctx, helpers.SlotToEpoch(headState.Slot), participation)
}
// We archive validator balances and active indices.
func (s *Service) archiveBalances(ctx context.Context, headState *pb.BeaconState) error {
balances := headState.Balances
currentEpoch := helpers.CurrentEpoch(headState)
if err := s.beaconDB.SaveArchivedBalances(ctx, currentEpoch, balances); err != nil {
return errors.Wrap(err, "could not archive balances")
}
return nil
}
func (s *Service) run(ctx context.Context) {
sub := s.newHeadNotifier.HeadUpdatedFeed().Subscribe(s.newHeadRootChan)
defer sub.Unsubscribe()
for {
select {
case r := <-s.newHeadRootChan:
log.WithField("headRoot", fmt.Sprintf("%#x", r)).Debug("New chain head event")
headState := s.headFetcher.HeadState()
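// Only archive when the head slot is the last slot of an epoch.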
if !helpers.IsEpochEnd(headState.Slot) {
continue
}
if err := s.archiveCommitteeInfo(ctx, headState); err != nil {
log.WithError(err).Error("Could not archive committee info")
continue
}
if err := s.archiveActiveSetChanges(ctx, headState); err != nil {
log.WithError(err).Error("Could not archive active validator set changes")
continue
}
if err := s.archiveParticipation(ctx, headState); err != nil {
log.WithError(err).Error("Could not archive validator participation")
continue
}
if err := s.archiveBalances(ctx, headState); err != nil {
log.WithError(err).Error("Could not archive validator balances and active indices")
continue
}
log.WithField(
"epoch",
helpers.CurrentEpoch(headState),
).Debug("Successfully archived beacon chain data during epoch")
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
return
case err := <-sub.Err():
log.WithError(err).Error("Subscription to new chain head notifier failed")
return
}
}
}
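Taken together, the service above is driven entirely by head events. A hypothetical wiring sketch follows; the `chainService` value standing in for whatever implements `blockchain.HeadFetcher` and `blockchain.NewHeadNotifier` is an assumption, not taken from the Prysm node code.
```
// Hypothetical registration sketch under the assumptions above.
svc := archiver.NewArchiverService(ctx, &archiver.Config{
	BeaconDB:        beaconDB,
	HeadFetcher:     chainService,    // assumed to implement blockchain.HeadFetcher
	NewHeadNotifier: chainService,    // assumed to implement blockchain.NewHeadNotifier
})
svc.Start()
// ... on shutdown:
if err := svc.Stop(); err != nil {
	log.Error(err)
}
```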

View File

@@ -0,0 +1,311 @@
package archiver
import (
"context"
"fmt"
"io/ioutil"
"reflect"
"testing"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/go-bitfield"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
dbutil "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/sirupsen/logrus"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func init() {
logrus.SetLevel(logrus.DebugLevel)
logrus.SetOutput(ioutil.Discard)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
}
func TestArchiverService_ReceivesNewChainHeadEvent(t *testing.T) {
hook := logTest.NewGlobal()
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: &pb.BeaconState{Slot: 1},
}
headRoot := [32]byte{1, 2, 3}
triggerNewHeadEvent(t, svc, headRoot)
testutil.AssertLogsContain(t, hook, fmt.Sprintf("%#x", headRoot))
testutil.AssertLogsContain(t, hook, "New chain head event")
}
func TestArchiverService_OnlyArchiveAtEpochEnd(t *testing.T) {
hook := logTest.NewGlobal()
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
// The head state is NOT an epoch end.
svc.headFetcher = &mock.ChainService{
State: &pb.BeaconState{Slot: params.BeaconConfig().SlotsPerEpoch - 3},
}
triggerNewHeadEvent(t, svc, [32]byte{})
// The context should have been canceled.
if svc.ctx.Err() != context.Canceled {
t.Error("context was not canceled")
}
testutil.AssertLogsContain(t, hook, "New chain head event")
// The service should ONLY log any archival logs if we receive a
// head slot that is an epoch end.
testutil.AssertLogsDoNotContain(t, hook, "Successfully archived")
}
func TestArchiverService_ComputesAndSavesParticipation(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: headState,
}
triggerNewHeadEvent(t, svc, [32]byte{})
attestedBalance := uint64(1)
currentEpoch := helpers.CurrentEpoch(headState)
wanted := &ethpb.ValidatorParticipation{
VotedEther: attestedBalance,
EligibleEther: validatorCount * params.BeaconConfig().MaxEffectiveBalance,
GlobalParticipationRate: float32(attestedBalance) / float32(validatorCount*params.BeaconConfig().MaxEffectiveBalance),
}
retrieved, err := svc.beaconDB.ArchivedValidatorParticipation(svc.ctx, currentEpoch)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(wanted, retrieved) {
t.Errorf("Wanted participation for epoch %d %v, retrieved %v", currentEpoch, wanted, retrieved)
}
testutil.AssertLogsContain(t, hook, "Successfully archived")
}
func TestArchiverService_SavesIndicesAndBalances(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: headState,
}
triggerNewHeadEvent(t, svc, [32]byte{})
retrieved, err := svc.beaconDB.ArchivedBalances(svc.ctx, helpers.CurrentEpoch(headState))
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(headState.Balances, retrieved) {
t.Errorf(
"Wanted balances for epoch %d %v, retrieved %v",
helpers.CurrentEpoch(headState),
headState.Balances,
retrieved,
)
}
testutil.AssertLogsContain(t, hook, "Successfully archived")
}
func TestArchiverService_SavesCommitteeInfo(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: headState,
}
triggerNewHeadEvent(t, svc, [32]byte{})
currentEpoch := helpers.CurrentEpoch(headState)
startShard, err := helpers.StartShard(headState, currentEpoch)
if err != nil {
t.Fatal(err)
}
committeeCount, err := helpers.CommitteeCount(headState, currentEpoch)
if err != nil {
t.Fatal(err)
}
seed, err := helpers.Seed(headState, currentEpoch)
if err != nil {
t.Fatal(err)
}
propIdx, err := helpers.BeaconProposerIndex(headState)
if err != nil {
t.Fatal(err)
}
wanted := &ethpb.ArchivedCommitteeInfo{
Seed: seed[:],
StartShard: startShard,
CommitteeCount: committeeCount,
ProposerIndex: propIdx,
}
retrieved, err := svc.beaconDB.ArchivedCommitteeInfo(svc.ctx, helpers.CurrentEpoch(headState))
if err != nil {
t.Fatal(err)
}
if !proto.Equal(wanted, retrieved) {
t.Errorf(
"Wanted committee info for epoch %d %v, retrieved %v",
helpers.CurrentEpoch(headState),
wanted,
retrieved,
)
}
testutil.AssertLogsContain(t, hook, "Successfully archived")
}
func TestArchiverService_SavesActivatedValidatorChanges(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: headState,
}
currentEpoch := helpers.CurrentEpoch(headState)
delayedActEpoch := helpers.DelayedActivationExitEpoch(currentEpoch)
headState.Validators[4].ActivationEpoch = delayedActEpoch
headState.Validators[5].ActivationEpoch = delayedActEpoch
triggerNewHeadEvent(t, svc, [32]byte{})
retrieved, err := beaconDB.ArchivedActiveValidatorChanges(svc.ctx, currentEpoch)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(retrieved.Activated, []uint64{4, 5}) {
t.Errorf("Wanted indices 4 5 activated, received %v", retrieved.Activated)
}
testutil.AssertLogsContain(t, hook, "Successfully archived")
}
func TestArchiverService_SavesSlashedValidatorChanges(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: headState,
}
currentEpoch := helpers.CurrentEpoch(headState)
headState.Validators[95].Slashed = true
headState.Validators[96].Slashed = true
triggerNewHeadEvent(t, svc, [32]byte{})
retrieved, err := beaconDB.ArchivedActiveValidatorChanges(svc.ctx, currentEpoch)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(retrieved.Slashed, []uint64{95, 96}) {
t.Errorf("Wanted indices 95, 96 slashed, received %v", retrieved.Slashed)
}
testutil.AssertLogsContain(t, hook, "Successfully archived")
}
func TestArchiverService_SavesExitedValidatorChanges(t *testing.T) {
hook := logTest.NewGlobal()
validatorCount := uint64(100)
headState := setupState(t, validatorCount)
svc, beaconDB := setupService(t)
defer dbutil.TeardownDB(t, beaconDB)
svc.headFetcher = &mock.ChainService{
State: headState,
}
currentEpoch := helpers.CurrentEpoch(headState)
headState.Validators[95].ExitEpoch = currentEpoch + 1
headState.Validators[95].WithdrawableEpoch = currentEpoch + 1 + params.BeaconConfig().MinValidatorWithdrawabilityDelay
triggerNewHeadEvent(t, svc, [32]byte{})
retrieved, err := beaconDB.ArchivedActiveValidatorChanges(svc.ctx, currentEpoch)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(retrieved.Exited, []uint64{95}) {
t.Errorf("Wanted indices 95 exited, received %v", retrieved.Exited)
}
testutil.AssertLogsContain(t, hook, "Successfully archived")
}
func setupState(t *testing.T, validatorCount uint64) *pb.BeaconState {
validators := make([]*ethpb.Validator, validatorCount)
balances := make([]uint64, validatorCount)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
}
balances[i] = params.BeaconConfig().MaxEffectiveBalance
}
atts := []*pb.PendingAttestation{{Data: &ethpb.AttestationData{Crosslink: &ethpb.Crosslink{Shard: 0}, Target: &ethpb.Checkpoint{}}}}
var crosslinks []*ethpb.Crosslink
for i := uint64(0); i < params.BeaconConfig().ShardCount; i++ {
crosslinks = append(crosslinks, &ethpb.Crosslink{
StartEpoch: 0,
DataRoot: []byte{'A'},
})
}
// We initialize a head state that has attestations from participated
// validators in a simulated fashion.
return &pb.BeaconState{
Slot: (2 * params.BeaconConfig().SlotsPerEpoch) - 1,
Validators: validators,
Balances: balances,
BlockRoots: make([][]byte, 128),
Slashings: []uint64{0, 1e9, 1e9},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
ActiveIndexRoots: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
CompactCommitteesRoots: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
CurrentCrosslinks: crosslinks,
CurrentEpochAttestations: atts,
FinalizedCheckpoint: &ethpb.Checkpoint{},
JustificationBits: bitfield.Bitvector4{0x00},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{},
}
}
func setupService(t *testing.T) (*Service, db.Database) {
beaconDB := dbutil.SetupDB(t)
ctx, cancel := context.WithCancel(context.Background())
return &Service{
beaconDB: beaconDB,
ctx: ctx,
cancel: cancel,
newHeadRootChan: make(chan [32]byte, 0),
newHeadNotifier: &mock.ChainService{},
}, beaconDB
}
func triggerNewHeadEvent(t *testing.T, svc *Service, headRoot [32]byte) {
exitRoutine := make(chan bool)
go func() {
svc.run(svc.ctx)
<-exitRoutine
}()
svc.newHeadRootChan <- headRoot
if err := svc.Stop(); err != nil {
t.Fatal(err)
}
exitRoutine <- true
// The context should have been canceled.
if svc.ctx.Err() != context.Canceled {
t.Error("context was not canceled")
}
}

beacon-chain/attestation/BUILD.bazel

@@ -1,45 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"service.go",
"vote_metrics.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/attestation",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bitutil:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/event:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/messagehandler:go_default_library",
"//shared/params:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["service_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/internal:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)

beacon-chain/attestation/service.go

@@ -1,333 +0,0 @@
// Package attestation defines the life-cycle and status of single and aggregated attestation.
package attestation
import (
"context"
"fmt"
"sort"
"sync"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bitutil"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/event"
"github.com/prysmaticlabs/prysm/shared/hashutil"
handler "github.com/prysmaticlabs/prysm/shared/messagehandler"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "attestation")
var committeeCache = cache.NewCommitteesCache()
// TargetHandler provides an interface for fetching latest attestation targets
// and updating attestations in batches.
type TargetHandler interface {
LatestAttestationTarget(state *pb.BeaconState, validatorIndex uint64) (*pb.AttestationTarget, error)
BatchUpdateLatestAttestation(ctx context.Context, atts []*pb.Attestation) error
}
type attestationStore struct {
sync.RWMutex
m map[[48]byte]*pb.Attestation
}
// Service represents a service that handles the internal
// logic of managing single and aggregated attestation.
type Service struct {
ctx context.Context
cancel context.CancelFunc
beaconDB *db.BeaconDB
incomingFeed *event.Feed
incomingChan chan *pb.Attestation
// store is the mapping of each individual
// validator's public key to its latest attestation.
store attestationStore
}
// Config options for the service.
type Config struct {
BeaconDB *db.BeaconDB
}
// NewAttestationService instantiates a new service instance that will
// be registered into a running beacon node.
func NewAttestationService(ctx context.Context, cfg *Config) *Service {
ctx, cancel := context.WithCancel(ctx)
return &Service{
ctx: ctx,
cancel: cancel,
beaconDB: cfg.BeaconDB,
incomingFeed: new(event.Feed),
incomingChan: make(chan *pb.Attestation, params.BeaconConfig().DefaultBufferSize),
store: attestationStore{m: make(map[[48]byte]*pb.Attestation)},
}
}
// Start an attestation service's main event loop.
func (a *Service) Start() {
log.Info("Starting service")
go a.attestationPool()
}
// Stop the Attestation service's main event loop and associated goroutines.
func (a *Service) Stop() error {
defer a.cancel()
log.Info("Stopping service")
return nil
}
// Status always returns nil.
// TODO(1201): Add service health checks.
func (a *Service) Status() error {
return nil
}
// IncomingAttestationFeed returns a feed that any service can send incoming p2p attestations into.
// The attestation service will subscribe to this feed in order to relay incoming attestations.
func (a *Service) IncomingAttestationFeed() *event.Feed {
return a.incomingFeed
}
// LatestAttestationTarget returns the target block that the validator index attested to;
// the attestation with the highest slot number in the attestation pool is used.
//
// Spec pseudocode definition:
// Let `get_latest_attestation_target(store: Store, validator_index: ValidatorIndex) ->
// BeaconBlock` be the target block in the attestation
// `get_latest_attestation(store, validator_index)`.
func (a *Service) LatestAttestationTarget(beaconState *pb.BeaconState, index uint64) (*pb.AttestationTarget, error) {
if index >= uint64(len(beaconState.ValidatorRegistry)) {
return nil, fmt.Errorf("invalid validator index %d", index)
}
validator := beaconState.ValidatorRegistry[index]
pubKey := bytesutil.ToBytes48(validator.Pubkey)
a.store.RLock()
defer a.store.RUnlock()
if _, exists := a.store.m[pubKey]; !exists {
return nil, nil
}
attestation := a.store.m[pubKey]
if attestation == nil {
return nil, nil
}
targetRoot := bytesutil.ToBytes32(attestation.Data.BeaconBlockRootHash32)
if !a.beaconDB.HasBlock(targetRoot) {
return nil, nil
}
return a.beaconDB.AttestationTarget(targetRoot)
}
// attestationPool takes a newly received attestation from the sync service
// and updates the attestation pool.
func (a *Service) attestationPool() {
incomingSub := a.incomingFeed.Subscribe(a.incomingChan)
defer incomingSub.Unsubscribe()
for {
select {
case <-a.ctx.Done():
log.Debug("Attestation pool closed, exiting goroutine")
return
// Listen for a newly received incoming attestation from the sync service.
case attestations := <-a.incomingChan:
handler.SafelyHandleMessage(a.ctx, a.handleAttestation, attestations)
}
}
}
func (a *Service) handleAttestation(ctx context.Context, msg proto.Message) error {
attestation := msg.(*pb.Attestation)
if err := a.UpdateLatestAttestation(ctx, attestation); err != nil {
return fmt.Errorf("could not update attestation pool: %v", err)
}
return nil
}
// UpdateLatestAttestation takes a new attestation and checks whether the attesters
// who submitted it have already been noted in the attestation pool with a higher
// slot number. If not, it updates the attestation pool, mapping each attester's
// public key to the attestation.
func (a *Service) UpdateLatestAttestation(ctx context.Context, attestation *pb.Attestation) error {
totalAttestationSeen.Inc()
// Potential improvement, instead of getting the state,
// we could get a mapping of validator index to public key.
beaconState, err := a.beaconDB.HeadState(ctx)
if err != nil {
return err
}
head, err := a.beaconDB.ChainHead()
if err != nil {
return err
}
headRoot, err := hashutil.HashBeaconBlock(head)
if err != nil {
return err
}
return a.updateAttestation(ctx, headRoot, beaconState, attestation)
}
// BatchUpdateLatestAttestation updates multiple attestations and adds them into the attestation store
// if they are valid.
func (a *Service) BatchUpdateLatestAttestation(ctx context.Context, attestations []*pb.Attestation) error {
if attestations == nil {
return nil
}
// Potential improvement, instead of getting the state,
// we could get a mapping of validator index to public key.
beaconState, err := a.beaconDB.HeadState(ctx)
if err != nil {
return err
}
head, err := a.beaconDB.ChainHead()
if err != nil {
return err
}
headRoot, err := hashutil.HashBeaconBlock(head)
if err != nil {
return err
}
attestations = a.sortAttestations(attestations)
for _, attestation := range attestations {
if err := a.updateAttestation(ctx, headRoot, beaconState, attestation); err != nil {
return err
}
}
return nil
}
// InsertAttestationIntoStore locks the store, inserts the attestation, then
// unlocks the store again. This method may be used by external services
// in testing to populate the attestation store.
func (a *Service) InsertAttestationIntoStore(pubkey [48]byte, att *pb.Attestation) {
a.store.Lock()
defer a.store.Unlock()
a.store.m[pubkey] = att
}
func (a *Service) updateAttestation(ctx context.Context, headRoot [32]byte, beaconState *pb.BeaconState,
attestation *pb.Attestation) error {
totalAttestationSeen.Inc()
slot := attestation.Data.Slot
var committee []uint64
var cachedCommittees *cache.CommitteesInSlot
var err error
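// Advance the state through any skipped slots so that committees for the
// attestation's slot can be computed.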
for beaconState.Slot < slot {
beaconState, err = state.ExecuteStateTransition(
ctx, beaconState, nil /* block */, headRoot, &state.TransitionConfig{},
)
if err != nil {
return fmt.Errorf("could not execute head transition: %v", err)
}
}
cachedCommittees, err = committeeCache.CommitteesInfoBySlot(slot)
if err != nil {
return err
}
if cachedCommittees == nil {
crosslinkCommittees, err := helpers.CrosslinkCommitteesAtSlot(beaconState, slot, false /* registryChange */)
if err != nil {
return err
}
cachedCommittees = helpers.ToCommitteeCache(slot, crosslinkCommittees)
if err := committeeCache.AddCommittees(cachedCommittees); err != nil {
return err
}
}
// Find committee for shard.
for _, v := range cachedCommittees.Committees {
if v.Shard == attestation.Data.Shard {
committee = v.Committee
break
}
}
log.WithFields(logrus.Fields{
"attestationSlot": attestation.Data.Slot - params.BeaconConfig().GenesisSlot,
"attestationShard": attestation.Data.Shard,
"committeesShard": cachedCommittees.Committees[0].Shard,
"committeesList": cachedCommittees.Committees[0].Committee,
"lengthOfCommittees": len(cachedCommittees.Committees),
}).Debug("Updating latest attestation")
// The participation bitfield from attestation is represented in bytes,
// here we multiply by 8 to get an accurate validator count in bits.
bitfield := attestation.AggregationBitfield
totalBits := len(bitfield) * 8
// Check each bit of the participation bitfield to find out which
// attesters have submitted new attestations.
// This has O(n) run time and could be optimized down the line.
for i := 0; i < totalBits; i++ {
bitSet, err := bitutil.CheckBit(bitfield, i)
if err != nil {
return err
}
if !bitSet {
continue
}
if i >= len(committee) {
log.Errorf("Bitfield points to an invalid index in the committee: bitfield %08b", bitfield)
continue
}
if int(committee[i]) >= len(beaconState.ValidatorRegistry) {
log.Errorf("Index doesn't exist in validator registry: index %d", committee[i])
continue
}
// If the attestation came from this attester, we use the slot committee to find the
// validator's actual index.
pubkey := bytesutil.ToBytes48(beaconState.ValidatorRegistry[committee[i]].Pubkey)
newAttestationSlot := attestation.Data.Slot
currentAttestationSlot := uint64(0)
a.store.Lock()
if _, exists := a.store.m[pubkey]; exists {
currentAttestationSlot = a.store.m[pubkey].Data.Slot
}
// If the attestation is newer than this attester's one in pool.
if newAttestationSlot > currentAttestationSlot {
a.store.m[pubkey] = attestation
log.WithFields(
logrus.Fields{
"attestationSlot": attestation.Data.Slot - params.BeaconConfig().GenesisSlot,
"justifiedEpoch": attestation.Data.JustifiedEpoch - params.BeaconConfig().GenesisEpoch,
},
).Debug("Attestation store updated")
blockRoot := bytesutil.ToBytes32(attestation.Data.BeaconBlockRootHash32)
votedBlock, err := a.beaconDB.Block(blockRoot)
if err != nil {
a.store.Unlock()
return err
}
reportVoteMetrics(committee[i], votedBlock)
}
// Unlock explicitly at the end of each iteration rather than deferring: a
// deferred unlock would hold the lock across loop iterations and deadlock
// on the next Lock call.
a.store.Unlock()
}
return nil
}
// sortAttestations sorts attestations by their slot number in ascending order.
func (a *Service) sortAttestations(attestations []*pb.Attestation) []*pb.Attestation {
sort.SliceStable(attestations, func(i, j int) bool {
return attestations[i].Data.Slot < attestations[j].Data.Slot
})
return attestations
}

beacon-chain/attestation/service_test.go

@@ -1,446 +0,0 @@
package attestation
import (
"bytes"
"context"
"fmt"
"reflect"
"strings"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/internal"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/sirupsen/logrus"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func init() {
logrus.SetLevel(logrus.DebugLevel)
}
var _ = TargetHandler(&Service{})
func TestUpdateLatestAttestation_UpdatesLatest(t *testing.T) {
beaconDB := internal.SetupDB(t)
defer internal.TeardownDB(t, beaconDB)
ctx := context.Background()
var validators []*pb.Validator
for i := 0; i < 64; i++ {
validators = append(validators, &pb.Validator{
Pubkey: []byte{byte(i)},
ActivationEpoch: params.BeaconConfig().GenesisEpoch,
ExitEpoch: params.BeaconConfig().GenesisEpoch + 10,
})
}
beaconState := &pb.BeaconState{
Slot: params.BeaconConfig().GenesisSlot + 1,
ValidatorRegistry: validators,
}
block := &pb.BeaconBlock{
Slot: params.BeaconConfig().GenesisSlot + 1,
}
if err := beaconDB.SaveBlock(block); err != nil {
t.Fatal(err)
}
if err := beaconDB.UpdateChainHead(ctx, block, beaconState); err != nil {
t.Fatal(err)
}
service := NewAttestationService(context.Background(), &Config{BeaconDB: beaconDB})
attestation := &pb.Attestation{
AggregationBitfield: []byte{0x80},
Data: &pb.AttestationData{
Slot: params.BeaconConfig().GenesisSlot + 1,
Shard: 1,
},
}
if err := service.UpdateLatestAttestation(ctx, attestation); err != nil {
t.Fatalf("could not update latest attestation: %v", err)
}
pubkey := bytesutil.ToBytes48([]byte{byte(3)})
if service.store.m[pubkey].Data.Slot !=
attestation.Data.Slot {
t.Errorf("Incorrect slot stored, wanted: %d, got: %d",
attestation.Data.Slot, service.store.m[pubkey].Data.Slot)
}
beaconState = &pb.BeaconState{
Slot: params.BeaconConfig().GenesisSlot + 36,
ValidatorRegistry: validators,
}
if err := beaconDB.UpdateChainHead(ctx, block, beaconState); err != nil {
t.Fatalf("could not save state: %v", err)
}
attestation.Data.Slot = params.BeaconConfig().GenesisSlot + 36
attestation.Data.Shard = 36
if err := service.UpdateLatestAttestation(ctx, attestation); err != nil {
t.Fatalf("could not update latest attestation: %v", err)
}
if service.store.m[pubkey].Data.Slot !=
attestation.Data.Slot {
t.Errorf("Incorrect slot stored, wanted: %d, got: %d",
attestation.Data.Slot, service.store.m[pubkey].Data.Slot)
}
}
func TestAttestationPool_UpdatesAttestationPool(t *testing.T) {
beaconDB := internal.SetupDB(t)
defer internal.TeardownDB(t, beaconDB)
ctx := context.Background()
var validators []*pb.Validator
for i := 0; i < 64; i++ {
validators = append(validators, &pb.Validator{
Pubkey: []byte{byte(i)},
ActivationEpoch: params.BeaconConfig().GenesisEpoch,
ExitEpoch: params.BeaconConfig().GenesisEpoch + 10,
})
}
beaconState := &pb.BeaconState{
Slot: params.BeaconConfig().GenesisSlot + 1,
ValidatorRegistry: validators,
}
block := &pb.BeaconBlock{
Slot: params.BeaconConfig().GenesisSlot + 1,
}
if err := beaconDB.SaveBlock(block); err != nil {
t.Fatal(err)
}
if err := beaconDB.UpdateChainHead(ctx, block, beaconState); err != nil {
t.Fatal(err)
}
service := NewAttestationService(context.Background(), &Config{BeaconDB: beaconDB})
attestation := &pb.Attestation{
AggregationBitfield: []byte{0x80},
Data: &pb.AttestationData{
Slot: params.BeaconConfig().GenesisSlot + 1,
Shard: 1,
},
}
if err := service.handleAttestation(context.Background(), attestation); err != nil {
t.Error(err)
}
}
func TestLatestAttestationTarget_CantGetAttestation(t *testing.T) {
beaconDB := internal.SetupDB(t)
defer internal.TeardownDB(t, beaconDB)
ctx := context.Background()
if err := beaconDB.SaveState(ctx, &pb.BeaconState{
ValidatorRegistry: []*pb.Validator{{}},
}); err != nil {
t.Fatalf("could not save state: %v", err)
}
service := NewAttestationService(context.Background(), &Config{BeaconDB: beaconDB})
headState, err := beaconDB.HeadState(ctx)
if err != nil {
t.Fatal(err)
}
index := uint64(100)
want := fmt.Sprintf("invalid validator index %d", index)
if _, err := service.LatestAttestationTarget(headState, index); !strings.Contains(err.Error(), want) {
t.Errorf("Wanted error to contain %s, received %v", want, err)
}
}
func TestLatestAttestationTarget_ReturnsLatestAttestedBlock(t *testing.T) {
beaconDB := internal.SetupDB(t)
defer internal.TeardownDB(t, beaconDB)
ctx := context.Background()
pubKey := []byte{'A'}
if err := beaconDB.SaveState(ctx, &pb.BeaconState{
ValidatorRegistry: []*pb.Validator{{Pubkey: pubKey}},
}); err != nil {
t.Fatalf("could not save state: %v", err)
}
block := &pb.BeaconBlock{Slot: 999}
if err := beaconDB.SaveBlock(block); err != nil {
t.Fatalf("could not save block: %v", err)
}
blockRoot, err := hashutil.HashBeaconBlock(block)
if err != nil {
t.Fatalf("could not hash block: %v", err)
}
if err := beaconDB.SaveAttestationTarget(ctx, &pb.AttestationTarget{
Slot: block.Slot,
BlockRoot: blockRoot[:],
ParentRoot: []byte{},
}); err != nil {
t.Fatalf("could not save att target: %v", err)
}
service := NewAttestationService(context.Background(), &Config{BeaconDB: beaconDB})
attestation := &pb.Attestation{
Data: &pb.AttestationData{
BeaconBlockRootHash32: blockRoot[:],
}}
pubKey48 := bytesutil.ToBytes48(pubKey)
service.store.m[pubKey48] = attestation
headState, err := beaconDB.HeadState(ctx)
if err != nil {
t.Fatal(err)
}
latestAttestedTarget, err := service.LatestAttestationTarget(headState, 0)
if err != nil {
t.Fatalf("Could not get latest attestation: %v", err)
}
if !bytes.Equal(blockRoot[:], latestAttestedTarget.BlockRoot) {
t.Errorf("Wanted: %v, got: %v", blockRoot[:], latestAttestedTarget.BlockRoot)
}
}
func TestUpdateLatestAttestation_CacheEnabledAndMiss(t *testing.T) {
beaconDB := internal.SetupDB(t)
defer internal.TeardownDB(t, beaconDB)
ctx := context.Background()
var validators []*pb.Validator
for i := 0; i < 64; i++ {
validators = append(validators, &pb.Validator{
Pubkey: []byte{byte(i)},
ActivationEpoch: params.BeaconConfig().GenesisEpoch,
ExitEpoch: params.BeaconConfig().GenesisEpoch + 10,
})
}
beaconState := &pb.BeaconState{
Slot: params.BeaconConfig().GenesisSlot + 1,
ValidatorRegistry: validators,
}
block := &pb.BeaconBlock{
Slot: params.BeaconConfig().GenesisSlot + 1,
}
if err := beaconDB.SaveBlock(block); err != nil {
t.Fatal(err)
}
if err := beaconDB.UpdateChainHead(ctx, block, beaconState); err != nil {
t.Fatal(err)
}
service := NewAttestationService(context.Background(), &Config{BeaconDB: beaconDB})
attestation := &pb.Attestation{
AggregationBitfield: []byte{0x80},
Data: &pb.AttestationData{
Slot: params.BeaconConfig().GenesisSlot + 1,
Shard: 1,
},
}
if err := service.UpdateLatestAttestation(ctx, attestation); err != nil {
t.Fatalf("could not update latest attestation: %v", err)
}
pubkey := bytesutil.ToBytes48([]byte{byte(3)})
if service.store.m[pubkey].Data.Slot !=
attestation.Data.Slot {
t.Errorf("Incorrect slot stored, wanted: %d, got: %d",
attestation.Data.Slot, service.store.m[pubkey].Data.Slot)
}
attestation.Data.Slot = params.BeaconConfig().GenesisSlot + 36
attestation.Data.Shard = 36
beaconState = &pb.BeaconState{
Slot: params.BeaconConfig().GenesisSlot + 36,
ValidatorRegistry: validators,
}
if err := beaconDB.UpdateChainHead(ctx, block, beaconState); err != nil {
t.Fatalf("could not save state: %v", err)
}
if err := service.UpdateLatestAttestation(ctx, attestation); err != nil {
t.Fatalf("could not update latest attestation: %v", err)
}
if service.store.m[pubkey].Data.Slot !=
attestation.Data.Slot {
t.Errorf("Incorrect slot stored, wanted: %d, got: %d",
attestation.Data.Slot, service.store.m[pubkey].Data.Slot)
}
// Verify the committee for attestation's data slot was cached.
fetchedCommittees, err := committeeCache.CommitteesInfoBySlot(attestation.Data.Slot)
if err != nil {
t.Fatal(err)
}
wantedCommittee := []uint64{38}
if !reflect.DeepEqual(wantedCommittee, fetchedCommittees.Committees[0].Committee) {
t.Errorf(
"Result indices was an unexpected value. Wanted %d, got %d",
wantedCommittee,
fetchedCommittees.Committees[0].Committee,
)
}
}
func TestUpdateLatestAttestation_CacheEnabledAndHit(t *testing.T) {
var validators []*pb.Validator
for i := 0; i < 64; i++ {
validators = append(validators, &pb.Validator{
Pubkey: []byte{byte(i)},
ActivationEpoch: params.BeaconConfig().GenesisEpoch,
ExitEpoch: params.BeaconConfig().GenesisEpoch + 10,
})
}
beaconDB := internal.SetupDB(t)
defer internal.TeardownDB(t, beaconDB)
ctx := context.Background()
beaconState := &pb.BeaconState{
Slot: params.BeaconConfig().GenesisSlot + 2,
ValidatorRegistry: validators,
}
block := &pb.BeaconBlock{
Slot: params.BeaconConfig().GenesisSlot + 2,
}
if err := beaconDB.SaveBlock(block); err != nil {
t.Fatal(err)
}
if err := beaconDB.UpdateChainHead(ctx, block, beaconState); err != nil {
t.Fatal(err)
}
service := NewAttestationService(context.Background(), &Config{BeaconDB: beaconDB})
slot := params.BeaconConfig().GenesisSlot + 2
shard := uint64(3)
index := uint64(4)
attestation := &pb.Attestation{
AggregationBitfield: []byte{0x80},
Data: &pb.AttestationData{
Slot: slot,
Shard: shard,
},
}
csInSlot := &cache.CommitteesInSlot{
Slot: slot,
Committees: []*cache.CommitteeInfo{
{Shard: shard, Committee: []uint64{index, 999}},
}}
if err := committeeCache.AddCommittees(csInSlot); err != nil {
t.Fatal(err)
}
if err := service.UpdateLatestAttestation(ctx, attestation); err != nil {
t.Fatalf("could not update latest attestation: %v", err)
}
pubkey := bytesutil.ToBytes48([]byte{byte(index)})
if err := service.UpdateLatestAttestation(ctx, attestation); err != nil {
t.Fatalf("could not update latest attestation: %v", err)
}
if service.store.m[pubkey].Data.Slot !=
attestation.Data.Slot {
t.Errorf("Incorrect slot stored, wanted: %d, got: %d",
attestation.Data.Slot, service.store.m[pubkey].Data.Slot)
}
}
func TestUpdateLatestAttestation_InvalidIndex(t *testing.T) {
beaconDB := internal.SetupDB(t)
hook := logTest.NewGlobal()
defer internal.TeardownDB(t, beaconDB)
ctx := context.Background()
var validators []*pb.Validator
for i := 0; i < 64; i++ {
validators = append(validators, &pb.Validator{
Pubkey: []byte{byte(i)},
ActivationEpoch: params.BeaconConfig().GenesisEpoch,
ExitEpoch: params.BeaconConfig().GenesisEpoch + 10,
})
}
beaconState := &pb.BeaconState{
Slot: params.BeaconConfig().GenesisSlot + 1,
ValidatorRegistry: validators,
}
block := &pb.BeaconBlock{
Slot: params.BeaconConfig().GenesisSlot + 1,
}
if err := beaconDB.SaveBlock(block); err != nil {
t.Fatal(err)
}
if err := beaconDB.UpdateChainHead(ctx, block, beaconState); err != nil {
t.Fatal(err)
}
service := NewAttestationService(context.Background(), &Config{BeaconDB: beaconDB})
attestation := &pb.Attestation{
AggregationBitfield: []byte{0xC0},
Data: &pb.AttestationData{
Slot: params.BeaconConfig().GenesisSlot + 1,
Shard: 1,
},
}
if err := service.UpdateLatestAttestation(ctx, attestation); err != nil {
t.Fatalf("could not update latest attestation: %v", err)
}
testutil.AssertLogsContain(t, hook, "Bitfield points to an invalid index in the committee")
}
func TestUpdateLatestAttestation_BatchUpdate(t *testing.T) {
beaconDB := internal.SetupDB(t)
defer internal.TeardownDB(t, beaconDB)
ctx := context.Background()
var validators []*pb.Validator
for i := 0; i < 64; i++ {
validators = append(validators, &pb.Validator{
Pubkey: []byte{byte(i)},
ActivationEpoch: params.BeaconConfig().GenesisEpoch,
ExitEpoch: params.BeaconConfig().GenesisEpoch + 10,
})
}
beaconState := &pb.BeaconState{
Slot: params.BeaconConfig().GenesisSlot + 1,
ValidatorRegistry: validators,
}
block := &pb.BeaconBlock{
Slot: params.BeaconConfig().GenesisSlot + 1,
}
if err := beaconDB.SaveBlock(block); err != nil {
t.Fatal(err)
}
if err := beaconDB.UpdateChainHead(ctx, block, beaconState); err != nil {
t.Fatal(err)
}
service := NewAttestationService(context.Background(), &Config{BeaconDB: beaconDB})
attestations := make([]*pb.Attestation, 0)
for i := 0; i < 10; i++ {
attestations = append(attestations, &pb.Attestation{
AggregationBitfield: []byte{0x80},
Data: &pb.AttestationData{
Slot: params.BeaconConfig().GenesisSlot + 1,
Shard: 1,
},
})
}
if err := service.BatchUpdateLatestAttestation(ctx, attestations); err != nil {
t.Fatalf("could not update latest attestation: %v", err)
}
}

beacon-chain/attestation/vote_metrics.go

@@ -1,34 +0,0 @@
package attestation
import (
"strconv"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
)
var (
validatorLastVoteGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "validators_last_vote",
Help: "Votes of validators, updated when there's a new attestation",
}, []string{
"validatorIndex",
})
totalAttestationSeen = promauto.NewGauge(prometheus.GaugeOpts{
Name: "total_seen_attestations",
Help: "Total number of attestations seen by the validators",
})
)
func reportVoteMetrics(index uint64, block *pb.BeaconBlock) {
// Don't update vote metrics if the incoming block is nil.
if block == nil {
return
}
s := params.BeaconConfig().GenesisSlot
validatorLastVoteGauge.WithLabelValues(
"v" + strconv.Itoa(int(index))).Set(float64(block.Slot - s))
}

beacon-chain/blockchain/BUILD.bazel

@@ -3,73 +3,118 @@ load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"block_processing.go",
"fork_choice.go",
"chain_info.go",
"info.go",
"log.go",
"metrics.go",
"receive_attestation.go",
"receive_block.go",
"service.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/attestation:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/blockchain/forkchoice:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/operations:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/beacon/rpc/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/event:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/p2p:go_default_library",
"//shared/params:go_default_library",
"//shared/traceutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
go_test(
test_suite(
name = "go_default_test",
tests = [
":go_raceoff_test",
":go_raceon_test",
],
)
go_test(
name = "go_raceoff_test",
size = "medium",
srcs = [
"block_processing_test.go",
"fork_choice_reorg_test.go",
"fork_choice_test.go",
"chain_info_test.go",
"receive_attestation_test.go",
"receive_block_test.go",
"service_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/attestation:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/internal:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/beacon/rpc/v1:go_default_library",
"//shared/bls:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/event:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/forkutil:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/p2p:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"//shared/trieutil:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_x_net//context:go_default_library",
],
)
go_test(
name = "go_raceon_test",
srcs = [
"chain_info_norace_test.go",
"service_norace_test.go",
],
embed = [":go_default_library"],
race = "on",
tags = ["race_on"],
deps = [
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/event:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_x_net//context:go_default_library",
],
)

beacon-chain/blockchain/block_processing.go

@@ -1,343 +0,0 @@
package blockchain
import (
"bytes"
"context"
"errors"
"fmt"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
pbrpc "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/event"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// BlockReceiver interface defines the methods in the blockchain service which
// directly receives a new block from other services and applies the full processing pipeline.
type BlockReceiver interface {
CanonicalBlockFeed() *event.Feed
ReceiveBlock(ctx context.Context, block *pb.BeaconBlock) (*pb.BeaconState, error)
IsCanonical(slot uint64, hash []byte) bool
CanonicalBlock(slot uint64) (*pb.BeaconBlock, error)
RecentCanonicalRoots(count uint64) []*pbrpc.BlockRoot
}
// BlockProcessor defines a common interface for methods useful for directly applying state transitions
// to beacon blocks and generating a new beacon state from the Ethereum 2.0 core primitives.
type BlockProcessor interface {
VerifyBlockValidity(ctx context.Context, block *pb.BeaconBlock, beaconState *pb.BeaconState) error
ApplyBlockStateTransition(ctx context.Context, block *pb.BeaconBlock, beaconState *pb.BeaconState) (*pb.BeaconState, error)
CleanupBlockOperations(ctx context.Context, block *pb.BeaconBlock) error
}
// BlockFailedProcessingErr represents a block failing a state transition function.
type BlockFailedProcessingErr struct {
err error
}
func (b *BlockFailedProcessingErr) Error() string {
return fmt.Sprintf("block failed processing: %v", b.err)
}
// ReceiveBlock defines the operations that are performed on any block that is
// received from the p2p layer or RPC. It performs the following actions:
// 1. Verify a block passes pre-processing conditions
// 2. Save and broadcast the block via p2p to other peers
// 3. Apply the block state transition function and account for skip slots.
// 4. Process and clean up any block operations, such as attestations and deposits, which would need to be
// either included or flushed from the beacon node's runtime.
func (c *ChainService) ReceiveBlock(ctx context.Context, block *pb.BeaconBlock) (*pb.BeaconState, error) {
c.receiveBlockLock.Lock()
defer c.receiveBlockLock.Unlock()
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlock")
defer span.End()
parentRoot := bytesutil.ToBytes32(block.ParentRootHash32)
parent, err := c.beaconDB.Block(parentRoot)
if err != nil {
return nil, fmt.Errorf("failed to get parent block: %v", err)
}
if parent == nil {
return nil, errors.New("parent does not exist in DB")
}
beaconState, err := c.beaconDB.HistoricalStateFromSlot(ctx, parent.Slot)
if err != nil {
return nil, fmt.Errorf("could not retrieve beacon state: %v", err)
}
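// Remember the pre-transition latest block so the optional state root check
// below can recompute the state hash against it.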
saveLatestBlock := beaconState.LatestBlock
blockRoot, err := hashutil.HashBeaconBlock(block)
if err != nil {
return nil, errors.New("could not hash beacon block")
}
// We first verify the block's basic validity conditions.
if err := c.VerifyBlockValidity(ctx, block, beaconState); err != nil {
return beaconState, fmt.Errorf("block with slot %d is not ready for processing: %v", block.Slot, err)
}
// We save the block to the DB and broadcast it to our peers.
if err := c.SaveAndBroadcastBlock(ctx, block); err != nil {
return beaconState, fmt.Errorf(
"could not save and broadcast beacon block with slot %d: %v",
block.Slot-params.BeaconConfig().GenesisSlot, err,
)
}
log.WithField("slotNumber", block.Slot-params.BeaconConfig().GenesisSlot).Info(
"Executing state transition")
// We then apply the block state transition accordingly to obtain the resulting beacon state.
beaconState, err = c.ApplyBlockStateTransition(ctx, block, beaconState)
if err != nil {
switch err.(type) {
case *BlockFailedProcessingErr:
// If the block fails processing, we mark it as blacklisted and delete it from our DB.
c.beaconDB.MarkEvilBlockHash(blockRoot)
if err := c.beaconDB.DeleteBlock(block); err != nil {
return nil, fmt.Errorf("could not delete bad block from db: %v", err)
}
return beaconState, err
default:
return beaconState, fmt.Errorf("could not apply block state transition: %v", err)
}
}
log.WithFields(logrus.Fields{
"slotNumber": block.Slot - params.BeaconConfig().GenesisSlot,
"currentEpoch": helpers.SlotToEpoch(block.Slot) - params.BeaconConfig().GenesisEpoch,
}).Info("State transition complete")
// Check state root
if featureconfig.FeatureConfig().EnableCheckBlockStateRoot {
// Calc state hash with previous block
beaconState.LatestBlock = saveLatestBlock
stateRoot, err := hashutil.HashProto(beaconState)
if err != nil {
return nil, fmt.Errorf("could not hash beacon state: %v", err)
}
beaconState.LatestBlock = block
if !bytes.Equal(block.StateRootHash32, stateRoot[:]) {
return nil, fmt.Errorf("beacon state root is not equal to block state root: %#x != %#x", stateRoot, block.StateRootHash32)
}
}
// We process the block's contained deposits, attestations, and other operations
// that may need to be stored or deleted from the beacon node's persistent storage.
if err := c.CleanupBlockOperations(ctx, block); err != nil {
return beaconState, fmt.Errorf("could not process block deposits, attestations, and other operations: %v", err)
}
log.WithField("slot", block.Slot-params.BeaconConfig().GenesisSlot).Info("Finished processing beacon block")
return beaconState, nil
}
// ApplyBlockStateTransition runs the Ethereum 2.0 state transition function
// to produce a new beacon state and also accounts for skip slots occurring.
//
// def apply_block_state_transition(block):
// # process skipped slots
// while (state.slot < block.slot - 1):
// state = slot_state_transition(state, block=None)
//
// # process slot with block
// state = slot_state_transition(state, block)
//
// # check state root
// if block.state_root == hash(state):
// return state, error
// else:
// return nil, error # or throw or whatever
//
func (c *ChainService) ApplyBlockStateTransition(
ctx context.Context, block *pb.BeaconBlock, beaconState *pb.BeaconState,
) (*pb.BeaconState, error) {
// Retrieve the last processed beacon block's hash root.
headRoot, err := c.ChainHeadRoot()
if err != nil {
return beaconState, fmt.Errorf("could not retrieve chain head root: %v", err)
}
// Check for skipped slots.
numSkippedSlots := 0
for beaconState.Slot < block.Slot-1 {
beaconState, err = c.runStateTransition(ctx, headRoot, nil, beaconState)
if err != nil {
return beaconState, err
}
numSkippedSlots++
}
if numSkippedSlots > 0 {
log.Warnf("Processed %d skipped slots", numSkippedSlots)
}
beaconState, err = c.runStateTransition(ctx, headRoot, block, beaconState)
if err != nil {
return beaconState, err
}
return beaconState, nil
}
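// Worked example (a sketch of the skip-slot handling above): if the state is at
// slot 10 and a block arrives for slot 13, the loop runs two empty slot
// transitions (advancing the state through slots 11 and 12 with block=nil)
// before the final transition that processes the block itself at slot 13.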
// VerifyBlockValidity cross-checks the block against the pre-processing conditions from
// Ethereum 2.0, namely:
// The parent block with root block.parent_root has been processed and accepted.
// The node has processed its state up to slot block.slot - 1.
// The Ethereum 1.0 block pointed to by the state.processed_pow_receipt_root has been processed and accepted.
// The node's local clock time is greater than or equal to state.genesis_time + block.slot * SECONDS_PER_SLOT.
func (c *ChainService) VerifyBlockValidity(
ctx context.Context,
block *pb.BeaconBlock,
beaconState *pb.BeaconState,
) error {
if block.Slot == params.BeaconConfig().GenesisSlot {
return fmt.Errorf("cannot process a genesis block: received block with slot %d",
block.Slot-params.BeaconConfig().GenesisSlot)
}
powBlockFetcher := c.web3Service.Client().BlockByHash
if err := b.IsValidBlock(ctx, beaconState, block,
c.beaconDB.HasBlock, powBlockFetcher, c.genesisTime); err != nil {
return fmt.Errorf("block does not fulfill pre-processing conditions %v", err)
}
return nil
}
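// For illustration, the local clock condition listed above could be checked as
// follows. This is a hedged sketch: genesisTime and secondsPerSlot are assumed
// local values, not necessarily the exact fields IsValidBlock reads.
//
//   earliest := genesisTime.Add(time.Duration(block.Slot*secondsPerSlot) * time.Second)
//   ok := !time.Now().Before(earliest)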
// SaveAndBroadcastBlock stores the block in persistent storage and then broadcasts it to
// peers via p2p. Blocks which have already been saved are not processed again via p2p, which is why
// the order of operations is important in this function to prevent infinite p2p loops.
func (c *ChainService) SaveAndBroadcastBlock(ctx context.Context, block *pb.BeaconBlock) error {
blockRoot, err := hashutil.HashBeaconBlock(block)
if err != nil {
return fmt.Errorf("could not tree hash incoming block: %v", err)
}
if err := c.beaconDB.SaveBlock(block); err != nil {
return fmt.Errorf("failed to save block: %v", err)
}
if err := c.beaconDB.SaveAttestationTarget(ctx, &pb.AttestationTarget{
Slot: block.Slot,
BlockRoot: blockRoot[:],
ParentRoot: block.ParentRootHash32,
}); err != nil {
return fmt.Errorf("failed to save attestation target: %v", err)
}
// Announce the new block to the network.
c.p2p.Broadcast(ctx, &pb.BeaconBlockAnnounce{
Hash: blockRoot[:],
SlotNumber: block.Slot,
})
return nil
}
// CleanupBlockOperations processes and cleans up any block operations relevant to the beacon node,
// such as attestations, exits, and deposits. We update the latest seen attestation by validator
// in the local node's runtime, remove pending deposits which have been included in the block
// from our node's local cache, and process validator exits.
func (c *ChainService) CleanupBlockOperations(ctx context.Context, block *pb.BeaconBlock) error {
// Forward processed block to operation pool to remove individual operation from DB.
if c.opsPoolService.IncomingProcessedBlockFeed().Send(block) == 0 {
log.Error("Sent processed block to no subscribers")
}
if err := c.attsService.BatchUpdateLatestAttestation(ctx, block.Body.Attestations); err != nil {
return fmt.Errorf("failed to update latest attestation for store: %v", err)
}
// Remove pending deposits from the deposit queue.
for _, dep := range block.Body.Deposits {
c.beaconDB.RemovePendingDeposit(ctx, dep)
}
return nil
}
// runStateTransition executes the Ethereum 2.0 core state transition for the beacon chain and
// updates important checkpoints and local persistent data during epoch transitions. It serves as a wrapper
// around the more low-level, core state transition function primitive.
func (c *ChainService) runStateTransition(
ctx context.Context,
headRoot [32]byte,
block *pb.BeaconBlock,
beaconState *pb.BeaconState,
) (*pb.BeaconState, error) {
newState, err := state.ExecuteStateTransition(
ctx,
beaconState,
block,
headRoot,
&state.TransitionConfig{
VerifySignatures: false, // We disable signature verification for now.
Logging: true, // We enable logging in this state transition call.
},
)
if err != nil {
return beaconState, &BlockFailedProcessingErr{err}
}
log.WithField(
"slotsSinceGenesis", newState.Slot-params.BeaconConfig().GenesisSlot,
).Info("Slot transition successfully processed")
if block != nil {
log.WithField(
"slotsSinceGenesis", newState.Slot-params.BeaconConfig().GenesisSlot,
).Info("Block transition successfully processed")
// Save Historical States.
if err := c.beaconDB.SaveHistoricalState(ctx, beaconState); err != nil {
return nil, fmt.Errorf("could not save historical state: %v", err)
}
}
if helpers.IsEpochEnd(newState.Slot) {
// Save activated validators of this epoch to public key -> index DB.
if err := c.saveValidatorIdx(newState); err != nil {
return newState, fmt.Errorf("could not save validator index: %v", err)
}
// Delete exited validators of this epoch to public key -> index DB.
if err := c.deleteValidatorIdx(newState); err != nil {
return newState, fmt.Errorf("could not delete validator index: %v", err)
}
// Update FFG checkpoints in DB.
if err := c.updateFFGCheckPts(ctx, newState); err != nil {
return newState, fmt.Errorf("could not update FFG checkpts: %v", err)
}
log.WithField(
"SlotsSinceGenesis", newState.Slot-params.BeaconConfig().GenesisSlot,
).Info("Epoch transition successfully processed")
}
return newState, nil
}
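// Worked example (a sketch, assuming helpers.IsEpochEnd returns true for the
// final slot of an epoch and SlotsPerEpoch = 64): slots 63, 127, 191, ... trigger
// the epoch-end branch above, so validator index bookkeeping and FFG checkpoint
// updates run once per epoch rather than on every slot.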
// saveValidatorIdx saves the validators' public key to index mappings in the DB; these
// validators were activated in the current epoch. After saving, the current epoch key
// is deleted from the ActivatedValidators mapping.
func (c *ChainService) saveValidatorIdx(state *pb.BeaconState) error {
activatedValidators := validators.ActivatedValFromEpoch(helpers.CurrentEpoch(state) + 1)
for _, idx := range activatedValidators {
pubKey := state.ValidatorRegistry[idx].Pubkey
if err := c.beaconDB.SaveValidatorIndex(pubKey, int(idx)); err != nil {
return fmt.Errorf("could not save validator index: %v", err)
}
}
validators.DeleteActivatedVal(helpers.CurrentEpoch(state))
return nil
}
// deleteValidatorIdx deletes the validators' public key to index mappings in the DB; these
// validators exited in the current epoch. After deleting, the current epoch key
// is deleted from the ExitedValidators mapping.
func (c *ChainService) deleteValidatorIdx(state *pb.BeaconState) error {
exitedValidators := validators.ExitedValFromEpoch(helpers.CurrentEpoch(state) + 1)
for _, idx := range exitedValidators {
pubKey := state.ValidatorRegistry[idx].Pubkey
if err := c.beaconDB.DeleteValidatorIndex(pubKey); err != nil {
return fmt.Errorf("could not delete validator index: %v", err)
}
}
validators.DeleteExitedVal(helpers.CurrentEpoch(state))
return nil
}


@@ -1,840 +0,0 @@
package blockchain
import (
"context"
"encoding/binary"
"math/big"
"strings"
"testing"
"time"
"github.com/prysmaticlabs/prysm/beacon-chain/attestation"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
v "github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/beacon-chain/internal"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/prysmaticlabs/prysm/shared/trieutil"
logTest "github.com/sirupsen/logrus/hooks/test"
)
// Ensure ChainService implements interfaces.
var _ = BlockProcessor(&ChainService{})
func initBlockStateRoot(t *testing.T, block *pb.BeaconBlock, chainService *ChainService) {
parentRoot := bytesutil.ToBytes32(block.ParentRootHash32)
parent, err := chainService.beaconDB.Block(parentRoot)
if err != nil {
t.Fatal(err)
}
beaconState, err := chainService.beaconDB.HistoricalStateFromSlot(context.Background(), parent.Slot)
if err != nil {
t.Fatalf("Unable to retrieve state %v", err)
}
saveLatestBlock := beaconState.LatestBlock
computedState, err := chainService.ApplyBlockStateTransition(context.Background(), block, beaconState)
if err != nil {
t.Fatalf("could not apply block state transition: %v", err)
}
computedState.LatestBlock = saveLatestBlock
stateRoot, err := hashutil.HashProto(computedState)
if err != nil {
t.Fatalf("could not tree hash state: %v", err)
}
block.StateRootHash32 = stateRoot[:]
t.Logf("state root after block: %#x", stateRoot)
}
func TestReceiveBlock_FaultyPOWChain(t *testing.T) {
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
chainService := setupBeaconChain(t, db, nil)
unixTime := uint64(time.Now().Unix())
deposits, _ := setupInitialDeposits(t, 100)
if err := db.InitializeState(context.Background(), unixTime, deposits, &pb.Eth1Data{}); err != nil {
t.Fatalf("Could not initialize beacon state to disk: %v", err)
}
if err := SetSlotInState(chainService, 1); err != nil {
t.Fatal(err)
}
parentBlock := &pb.BeaconBlock{
Slot: 1,
}
parentRoot, err := hashutil.HashBeaconBlock(parentBlock)
if err != nil {
t.Fatalf("Unable to tree hash block %v", err)
}
if err := chainService.beaconDB.SaveBlock(parentBlock); err != nil {
t.Fatalf("Unable to save block %v", err)
}
block := &pb.BeaconBlock{
Slot: 2,
ParentRootHash32: parentRoot[:],
Eth1Data: &pb.Eth1Data{
DepositRootHash32: []byte("a"),
BlockHash32: []byte("b"),
},
}
if err := chainService.beaconDB.SaveBlock(block); err != nil {
t.Fatal(err)
}
if _, err := chainService.ReceiveBlock(context.Background(), block); err == nil {
t.Errorf("Expected receive block to fail, received nil: %v", err)
}
}
func TestReceiveBlock_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db, nil)
deposits, privKeys := setupInitialDeposits(t, 100)
eth1Data := &pb.Eth1Data{
DepositRootHash32: []byte{},
BlockHash32: []byte{},
}
beaconState, err := state.GenesisBeaconState(deposits, 0, eth1Data)
if err != nil {
t.Fatalf("Can't generate genesis state: %v", err)
}
stateRoot, err := hashutil.HashProto(beaconState)
if err != nil {
t.Fatalf("Could not tree hash state: %v", err)
}
if err := db.SaveHistoricalState(ctx, beaconState); err != nil {
t.Fatal(err)
}
genesis := b.NewGenesisBlock([]byte{})
if err := chainService.beaconDB.SaveBlock(genesis); err != nil {
t.Fatalf("Could not save block to db: %v", err)
}
parentHash, err := hashutil.HashBeaconBlock(genesis)
if err != nil {
t.Fatalf("Unable to get tree hash root of canonical head: %v", err)
}
if err := chainService.beaconDB.UpdateChainHead(ctx, genesis, beaconState); err != nil {
t.Fatal(err)
}
beaconState.Slot++
randaoReveal := createRandaoReveal(t, beaconState, privKeys)
block := &pb.BeaconBlock{
Slot: beaconState.Slot,
StateRootHash32: stateRoot[:],
ParentRootHash32: parentHash[:],
RandaoReveal: randaoReveal,
Eth1Data: &pb.Eth1Data{
DepositRootHash32: []byte("a"),
BlockHash32: []byte("b"),
},
Body: &pb.BeaconBlockBody{
Attestations: nil,
},
}
initBlockStateRoot(t, block, chainService)
if err := chainService.beaconDB.SaveJustifiedBlock(block); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveFinalizedBlock(block); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveBlock(block); err != nil {
t.Fatal(err)
}
if _, err := chainService.ReceiveBlock(context.Background(), block); err != nil {
t.Errorf("Block failed processing: %v", err)
}
testutil.AssertLogsContain(t, hook, "Finished processing beacon block")
}
func TestReceiveBlock_UsesParentBlockState(t *testing.T) {
hook := logTest.NewGlobal()
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db, nil)
deposits, _ := setupInitialDeposits(t, 100)
eth1Data := &pb.Eth1Data{
DepositRootHash32: []byte{},
BlockHash32: []byte{},
}
beaconState, err := state.GenesisBeaconState(deposits, 0, eth1Data)
if err != nil {
t.Fatalf("Can't generate genesis state: %v", err)
}
if err := chainService.beaconDB.SaveHistoricalState(ctx, beaconState); err != nil {
t.Fatal(err)
}
stateRoot, err := hashutil.HashProto(beaconState)
if err != nil {
t.Fatalf("Could not tree hash state: %v", err)
}
parentHash, genesisBlock := setupGenesisBlock(t, chainService)
if err := chainService.beaconDB.UpdateChainHead(ctx, genesisBlock, beaconState); err != nil {
t.Fatal(err)
}
// We ensure the block uses the correct parent state when its ancestor is not at block.Slot-1.
block := &pb.BeaconBlock{
Slot: beaconState.Slot + 4,
StateRootHash32: stateRoot[:],
ParentRootHash32: parentHash[:],
RandaoReveal: []byte{},
Eth1Data: &pb.Eth1Data{
DepositRootHash32: []byte("a"),
BlockHash32: []byte("b"),
},
Body: &pb.BeaconBlockBody{
Attestations: nil,
},
}
initBlockStateRoot(t, block, chainService)
if err := chainService.beaconDB.SaveBlock(block); err != nil {
t.Fatal(err)
}
if _, err := chainService.ReceiveBlock(context.Background(), block); err != nil {
t.Errorf("Block failed processing: %v", err)
}
testutil.AssertLogsContain(t, hook, "Finished processing beacon block")
}
func TestReceiveBlock_DeletesBadBlock(t *testing.T) {
featureconfig.InitFeatureConfig(&featureconfig.FeatureFlagConfig{
EnableCheckBlockStateRoot: false,
})
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db, nil)
deposits, _ := setupInitialDeposits(t, 100)
eth1Data := &pb.Eth1Data{
DepositRootHash32: []byte{},
BlockHash32: []byte{},
}
beaconState, err := state.GenesisBeaconState(deposits, 0, eth1Data)
if err != nil {
t.Fatalf("Can't generate genesis state: %v", err)
}
stateRoot, err := hashutil.HashProto(beaconState)
if err != nil {
t.Fatalf("Could not tree hash state: %v", err)
}
if err := chainService.beaconDB.SaveHistoricalState(ctx, beaconState); err != nil {
t.Fatal(err)
}
parentHash, genesisBlock := setupGenesisBlock(t, chainService)
if err := chainService.beaconDB.UpdateChainHead(ctx, genesisBlock, beaconState); err != nil {
t.Fatal(err)
}
beaconState.Slot++
block := &pb.BeaconBlock{
Slot: beaconState.Slot,
StateRootHash32: stateRoot[:],
ParentRootHash32: parentHash[:],
RandaoReveal: []byte{},
Eth1Data: &pb.Eth1Data{
DepositRootHash32: []byte("a"),
BlockHash32: []byte("b"),
},
Body: &pb.BeaconBlockBody{
Attestations: []*pb.Attestation{
{
Data: &pb.AttestationData{
JustifiedEpoch: params.BeaconConfig().GenesisSlot * 100,
},
},
},
},
}
blockRoot, err := hashutil.HashBeaconBlock(block)
if err != nil {
t.Fatal(err)
}
_, err = chainService.ReceiveBlock(context.Background(), block)
switch err.(type) {
case *BlockFailedProcessingErr:
t.Log("Block failed processing as expected")
default:
t.Errorf("Unexpected block processing error: %v", err)
}
savedBlock, err := db.Block(blockRoot)
if err != nil {
t.Fatal(err)
}
if savedBlock != nil {
t.Errorf("Expected bad block to have been deleted, received: %v", savedBlock)
}
// We also verify the block has been blacklisted.
if !db.IsEvilBlockHash(blockRoot) {
t.Error("Expected block root to have been blacklisted")
}
featureconfig.InitFeatureConfig(&featureconfig.FeatureFlagConfig{
EnableCheckBlockStateRoot: true,
})
}
func TestReceiveBlock_CheckBlockStateRoot_GoodState(t *testing.T) {
hook := logTest.NewGlobal()
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
ctx := context.Background()
attsService := attestation.NewAttestationService(
context.Background(),
&attestation.Config{BeaconDB: db})
chainService := setupBeaconChain(t, db, attsService)
deposits, privKeys := setupInitialDeposits(t, 100)
eth1Data := &pb.Eth1Data{
DepositRootHash32: []byte{},
BlockHash32: []byte{},
}
beaconState, err := state.GenesisBeaconState(deposits, 0, eth1Data)
if err != nil {
t.Fatalf("Can't generate genesis state: %v", err)
}
if err := chainService.beaconDB.SaveHistoricalState(ctx, beaconState); err != nil {
t.Fatal(err)
}
parentHash, genesisBlock := setupGenesisBlock(t, chainService)
beaconState.Slot++
if err := chainService.beaconDB.UpdateChainHead(ctx, genesisBlock, beaconState); err != nil {
t.Fatal(err)
}
beaconState.Slot++
goodStateBlock := &pb.BeaconBlock{
Slot: beaconState.Slot,
ParentRootHash32: parentHash[:],
RandaoReveal: createRandaoReveal(t, beaconState, privKeys),
Body: &pb.BeaconBlockBody{},
}
beaconState.Slot--
initBlockStateRoot(t, goodStateBlock, chainService)
if err := chainService.beaconDB.SaveBlock(goodStateBlock); err != nil {
t.Fatal(err)
}
_, err = chainService.ReceiveBlock(context.Background(), goodStateBlock)
if err != nil {
t.Fatalf("error exists for good block %v", err)
}
testutil.AssertLogsContain(t, hook, "Executing state transition")
}
func TestReceiveBlock_CheckBlockStateRoot_BadState(t *testing.T) {
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
chainService := setupBeaconChain(t, db, nil)
deposits, privKeys := setupInitialDeposits(t, 100)
ctx := context.Background()
eth1Data := &pb.Eth1Data{
DepositRootHash32: []byte{},
BlockHash32: []byte{},
}
beaconState, err := state.GenesisBeaconState(deposits, 0, eth1Data)
if err != nil {
t.Fatalf("Can't generate genesis state: %v", err)
}
if err := chainService.beaconDB.SaveHistoricalState(ctx, beaconState); err != nil {
t.Fatal(err)
}
parentHash, genesisBlock := setupGenesisBlock(t, chainService)
beaconState.Slot++
if err := chainService.beaconDB.UpdateChainHead(ctx, genesisBlock, beaconState); err != nil {
t.Fatal(err)
}
beaconState.Slot++
invalidStateBlock := &pb.BeaconBlock{
Slot: beaconState.Slot,
StateRootHash32: []byte{'b', 'a', 'd', ' ', 'h', 'a', 's', 'h'},
ParentRootHash32: parentHash[:],
RandaoReveal: createRandaoReveal(t, beaconState, privKeys),
Body: &pb.BeaconBlockBody{},
}
beaconState.Slot--
_, err = chainService.ReceiveBlock(context.Background(), invalidStateBlock)
if err == nil {
t.Fatal("no error for wrong block state root")
}
if !strings.Contains(err.Error(), "beacon state root is not equal to block state root: ") {
t.Fatal(err)
}
}
func TestReceiveBlock_RemovesPendingDeposits(t *testing.T) {
hook := logTest.NewGlobal()
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
ctx := context.Background()
attsService := attestation.NewAttestationService(
context.Background(),
&attestation.Config{BeaconDB: db})
chainService := setupBeaconChain(t, db, attsService)
deposits, privKeys := setupInitialDeposits(t, 100)
eth1Data := &pb.Eth1Data{
DepositRootHash32: []byte{},
BlockHash32: []byte{},
}
beaconState, err := state.GenesisBeaconState(deposits, 0, eth1Data)
if err != nil {
t.Fatalf("Can't generate genesis state: %v", err)
}
if err := chainService.beaconDB.SaveJustifiedState(beaconState); err != nil {
t.Fatal(err)
}
if err := db.SaveFinalizedState(beaconState); err != nil {
t.Fatal(err)
}
stateRoot, err := hashutil.HashProto(beaconState)
if err != nil {
t.Fatalf("Could not tree hash state: %v", err)
}
parentHash, genesisBlock := setupGenesisBlock(t, chainService)
beaconState.Slot++
if err := chainService.beaconDB.UpdateChainHead(ctx, genesisBlock, beaconState); err != nil {
t.Fatal(err)
}
currentSlot := params.BeaconConfig().GenesisSlot
randaoReveal := createRandaoReveal(t, beaconState, privKeys)
pendingDeposits := []*pb.Deposit{
createPreChainStartDeposit(t, []byte{'F'}, beaconState.DepositIndex),
}
pendingDepositsData := make([][]byte, len(pendingDeposits))
for i, pd := range pendingDeposits {
pendingDepositsData[i] = pd.DepositData
}
depositTrie, err := trieutil.GenerateTrieFromItems(pendingDepositsData, int(params.BeaconConfig().DepositContractTreeDepth))
if err != nil {
t.Fatalf("Could not generate deposit trie: %v", err)
}
for i := range pendingDeposits {
pendingDeposits[i].MerkleTreeIndex = 0
proof, err := depositTrie.MerkleProof(int(pendingDeposits[i].MerkleTreeIndex))
if err != nil {
t.Fatalf("Could not generate proof: %v", err)
}
pendingDeposits[i].MerkleProofHash32S = proof
}
depositRoot := depositTrie.Root()
beaconState.LatestEth1Data.DepositRootHash32 = depositRoot[:]
if err := db.SaveHistoricalState(context.Background(), beaconState); err != nil {
t.Fatal(err)
}
block := &pb.BeaconBlock{
Slot: currentSlot + 1,
StateRootHash32: stateRoot[:],
ParentRootHash32: parentHash[:],
RandaoReveal: randaoReveal,
Eth1Data: &pb.Eth1Data{
DepositRootHash32: []byte("a"),
BlockHash32: []byte("b"),
},
Body: &pb.BeaconBlockBody{
Deposits: pendingDeposits,
},
}
beaconState.Slot--
beaconState.DepositIndex = 0
if err := chainService.beaconDB.SaveState(ctx, beaconState); err != nil {
t.Fatal(err)
}
initBlockStateRoot(t, block, chainService)
blockRoot, err := hashutil.HashBeaconBlock(block)
if err != nil {
log.Fatalf("could not hash block: %v", err)
}
if err := chainService.beaconDB.SaveJustifiedBlock(block); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveFinalizedBlock(block); err != nil {
t.Fatal(err)
}
for _, dep := range pendingDeposits {
db.InsertPendingDeposit(chainService.ctx, dep, big.NewInt(0))
}
if len(db.PendingDeposits(chainService.ctx, nil)) != len(pendingDeposits) || len(pendingDeposits) == 0 {
t.Fatalf("Expected %d pending deposits", len(pendingDeposits))
}
beaconState.Slot--
if err := chainService.beaconDB.SaveState(ctx, beaconState); err != nil {
t.Fatal(err)
}
if err := db.SaveHistoricalState(context.Background(), beaconState); err != nil {
t.Fatal(err)
}
computedState, err := chainService.ReceiveBlock(context.Background(), block)
if err != nil {
t.Fatal(err)
}
for i := 0; i < len(beaconState.ValidatorRegistry); i++ {
pubKey := bytesutil.ToBytes48(beaconState.ValidatorRegistry[i].Pubkey)
attsService.InsertAttestationIntoStore(pubKey, &pb.Attestation{
Data: &pb.AttestationData{
BeaconBlockRootHash32: blockRoot[:],
}},
)
}
if err := chainService.ApplyForkChoiceRule(context.Background(), block, computedState); err != nil {
t.Fatal(err)
}
if len(db.PendingDeposits(chainService.ctx, nil)) != 0 {
t.Fatalf("Expected 0 pending deposits, but there are %+v", db.PendingDeposits(chainService.ctx, nil))
}
testutil.AssertLogsContain(t, hook, "Executing state transition")
}
// Scenario graph: http://bit.ly/2K1k2KZ
//
//digraph G {
// rankdir=LR;
// node [shape="none"];
//
// subgraph blocks {
// rankdir=LR;
// node [shape="box"];
// a->b;
// b->c;
// c->e;
// c->f;
// f->g;
// e->h;
// }
//
// { rank=same; 1; a;}
// { rank=same; 2; b;}
// { rank=same; 3; c;}
// { rank=same; 5; e;}
// { rank=same; 6; f;}
// { rank=same; 7; g;}
// { rank=same; 8; h;}
//
// 1->2->3->4->5->6->7->8->9[arrowhead=none];
//}
func TestReceiveBlock_OnChainSplit(t *testing.T) {
// The scenario to test is that we think that the canonical head is block H
// and then we receive block G. We don't have block F, so we request it. Then
// we process F, then G. The expected behavior is that we load the historical
// state from slot 3 where the common ancestor block C is present.
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db, nil)
deposits, privKeys := setupInitialDeposits(t, 100)
eth1Data := &pb.Eth1Data{
DepositRootHash32: []byte{},
BlockHash32: []byte{},
}
beaconState, err := state.GenesisBeaconState(deposits, 0, eth1Data)
if err != nil {
t.Fatalf("Can't generate genesis state: %v", err)
}
stateRoot, err := hashutil.HashProto(beaconState)
if err != nil {
t.Fatalf("Could not tree hash state: %v", err)
}
parentHash, genesisBlock := setupGenesisBlock(t, chainService)
if err := db.UpdateChainHead(ctx, genesisBlock, beaconState); err != nil {
t.Fatal(err)
}
if err := db.SaveFinalizedState(beaconState); err != nil {
t.Fatal(err)
}
genesisSlot := params.BeaconConfig().GenesisSlot
// Top chain slots (see graph)
blockSlots := []uint64{1, 2, 3, 5, 8}
for _, slot := range blockSlots {
block := &pb.BeaconBlock{
Slot: genesisSlot + slot,
StateRootHash32: stateRoot[:],
ParentRootHash32: parentHash[:],
RandaoReveal: createRandaoReveal(t, beaconState, privKeys),
Body: &pb.BeaconBlockBody{},
}
initBlockStateRoot(t, block, chainService)
computedState, err := chainService.ReceiveBlock(ctx, block)
if err != nil {
t.Fatal(err)
}
stateRoot, err = hashutil.HashProto(computedState)
if err != nil {
t.Fatal(err)
}
if err = db.SaveBlock(block); err != nil {
t.Fatal(err)
}
if err = db.UpdateChainHead(ctx, block, computedState); err != nil {
t.Fatal(err)
}
parentHash, err = hashutil.HashBeaconBlock(block)
if err != nil {
t.Fatal(err)
}
}
// Common ancestor is block at slot 3
commonAncestor, err := db.BlockBySlot(ctx, genesisSlot+3)
if err != nil {
t.Fatal(err)
}
parentHash, err = hashutil.HashBeaconBlock(commonAncestor)
if err != nil {
t.Fatal(err)
}
beaconState, err = db.HistoricalStateFromSlot(ctx, commonAncestor.Slot)
if err != nil {
t.Fatal(err)
}
stateRoot, err = hashutil.HashProto(beaconState)
if err != nil {
t.Fatal(err)
}
// Then we receive the block `f` from slot 6
blockF := &pb.BeaconBlock{
Slot: genesisSlot + 6,
ParentRootHash32: parentHash[:],
StateRootHash32: stateRoot[:],
RandaoReveal: createRandaoReveal(t, beaconState, privKeys),
Body: &pb.BeaconBlockBody{},
}
initBlockStateRoot(t, blockF, chainService)
computedState, err := chainService.ReceiveBlock(ctx, blockF)
if err != nil {
t.Fatal(err)
}
stateRoot, err = hashutil.HashProto(computedState)
if err != nil {
t.Fatal(err)
}
if err := db.SaveBlock(blockF); err != nil {
t.Fatal(err)
}
parentHash, err = hashutil.HashBeaconBlock(blockF)
if err != nil {
t.Fatal(err)
}
// Then we apply block `g` from slot 7
blockG := &pb.BeaconBlock{
Slot: genesisSlot + 7,
ParentRootHash32: parentHash[:],
StateRootHash32: stateRoot[:],
RandaoReveal: createRandaoReveal(t, computedState, privKeys),
Body: &pb.BeaconBlockBody{},
}
initBlockStateRoot(t, blockG, chainService)
computedState, err = chainService.ReceiveBlock(ctx, blockG)
if err != nil {
t.Fatal(err)
}
if computedState.Slot != blockG.Slot {
t.Errorf("Unexpect state slot %d, wanted %d", computedState.Slot, blockG.Slot)
}
}
func TestIsBlockReadyForProcessing_ValidBlock(t *testing.T) {
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db, nil)
unixTime := uint64(time.Now().Unix())
deposits, privKeys := setupInitialDeposits(t, 100)
if err := db.InitializeState(context.Background(), unixTime, deposits, &pb.Eth1Data{}); err != nil {
t.Fatalf("Could not initialize beacon state to disk: %v", err)
}
beaconState, err := db.HeadState(ctx)
if err != nil {
t.Fatalf("Can't get genesis state: %v", err)
}
block := &pb.BeaconBlock{
ParentRootHash32: []byte{'a'},
}
if err := chainService.VerifyBlockValidity(ctx, block, beaconState); err == nil {
t.Fatal("block processing succeeded despite block having no parent saved")
}
beaconState.Slot = params.BeaconConfig().GenesisSlot + 10
stateRoot, err := hashutil.HashProto(beaconState)
if err != nil {
t.Fatalf("Could not tree hash state: %v", err)
}
genesis := b.NewGenesisBlock([]byte{})
if err := chainService.beaconDB.SaveBlock(genesis); err != nil {
t.Fatalf("cannot save block: %v", err)
}
parentRoot, err := hashutil.HashBeaconBlock(genesis)
if err != nil {
t.Fatalf("unable to get root of canonical head: %v", err)
}
beaconState.LatestEth1Data = &pb.Eth1Data{
DepositRootHash32: []byte{2},
BlockHash32: []byte{3},
}
beaconState.Slot = params.BeaconConfig().GenesisSlot
currentSlot := params.BeaconConfig().GenesisSlot + 1
attestationSlot := params.BeaconConfig().GenesisSlot
randaoReveal := createRandaoReveal(t, beaconState, privKeys)
block2 := &pb.BeaconBlock{
Slot: currentSlot,
StateRootHash32: stateRoot[:],
ParentRootHash32: parentRoot[:],
RandaoReveal: randaoReveal,
Eth1Data: &pb.Eth1Data{
DepositRootHash32: []byte("a"),
BlockHash32: []byte("b"),
},
Body: &pb.BeaconBlockBody{
Attestations: []*pb.Attestation{{
AggregationBitfield: []byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
Data: &pb.AttestationData{
Slot: attestationSlot,
JustifiedBlockRootHash32: parentRoot[:],
},
}},
},
}
if err := chainService.VerifyBlockValidity(ctx, block2, beaconState); err != nil {
t.Fatalf("block processing failed despite being a valid block: %v", err)
}
}
func TestDeleteValidatorIdx_DeleteWorks(t *testing.T) {
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
epoch := uint64(2)
v.InsertActivatedVal(epoch+1, []uint64{0, 1, 2})
v.InsertExitedVal(epoch+1, []uint64{0, 2})
var validators []*pb.Validator
for i := 0; i < 3; i++ {
pubKeyBuf := make([]byte, params.BeaconConfig().BLSPubkeyLength)
binary.PutUvarint(pubKeyBuf, uint64(i))
validators = append(validators, &pb.Validator{
Pubkey: pubKeyBuf,
})
}
state := &pb.BeaconState{
ValidatorRegistry: validators,
Slot: epoch * params.BeaconConfig().SlotsPerEpoch,
}
chainService := setupBeaconChain(t, db, nil)
if err := chainService.saveValidatorIdx(state); err != nil {
t.Fatalf("Could not save validator idx: %v", err)
}
if err := chainService.deleteValidatorIdx(state); err != nil {
t.Fatalf("Could not delete validator idx: %v", err)
}
wantedIdx := uint64(1)
idx, err := chainService.beaconDB.ValidatorIndex(validators[wantedIdx].Pubkey)
if err != nil {
t.Fatalf("Could not get validator index: %v", err)
}
if wantedIdx != idx {
t.Errorf("Wanted: %d, got: %d", wantedIdx, idx)
}
wantedIdx = uint64(2)
if chainService.beaconDB.HasValidator(validators[wantedIdx].Pubkey) {
t.Errorf("Validator index %d should have been deleted", wantedIdx)
}
if v.ExitedValFromEpoch(epoch) != nil {
t.Errorf("Activated validators mapping for epoch %d still there", epoch)
}
}
func TestSaveValidatorIdx_SaveRetrieveWorks(t *testing.T) {
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
epoch := uint64(1)
v.InsertActivatedVal(epoch+1, []uint64{0, 1, 2})
var validators []*pb.Validator
for i := 0; i < 3; i++ {
pubKeyBuf := make([]byte, params.BeaconConfig().BLSPubkeyLength)
binary.PutUvarint(pubKeyBuf, uint64(i))
validators = append(validators, &pb.Validator{
Pubkey: pubKeyBuf,
})
}
state := &pb.BeaconState{
ValidatorRegistry: validators,
Slot: epoch * params.BeaconConfig().SlotsPerEpoch,
}
chainService := setupBeaconChain(t, db, nil)
if err := chainService.saveValidatorIdx(state); err != nil {
t.Fatalf("Could not save validator idx: %v", err)
}
wantedIdx := uint64(2)
idx, err := chainService.beaconDB.ValidatorIndex(validators[wantedIdx].Pubkey)
if err != nil {
t.Fatalf("Could not get validator index: %v", err)
}
if wantedIdx != idx {
t.Errorf("Wanted: %d, got: %d", wantedIdx, idx)
}
if v.ActivatedValFromEpoch(epoch) != nil {
t.Errorf("Activated validators mapping for epoch %d still there", epoch)
}
}


@@ -0,0 +1,114 @@
package blockchain
import (
"time"
"github.com/gogo/protobuf/proto"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which
// directly retrieves chain info related data.
type ChainInfoFetcher interface {
HeadFetcher
CanonicalRootFetcher
FinalizationFetcher
}
// GenesisTimeFetcher retrieves the Eth2 genesis timestamp.
type GenesisTimeFetcher interface {
GenesisTime() time.Time
}
// HeadFetcher defines a common interface for methods in blockchain service which
// directly retrieves head related data.
type HeadFetcher interface {
HeadSlot() uint64
HeadRoot() []byte
HeadBlock() *ethpb.BeaconBlock
HeadState() *pb.BeaconState
}
// CanonicalRootFetcher defines a common interface for methods in blockchain service which
// directly retrieves canonical roots related data.
type CanonicalRootFetcher interface {
CanonicalRoot(slot uint64) []byte
}
// ForkFetcher retrieves the current fork information of the Ethereum beacon chain.
type ForkFetcher interface {
CurrentFork() *pb.Fork
}
// FinalizationFetcher defines a common interface for methods in blockchain service which
// directly retrieves finalization related data.
type FinalizationFetcher interface {
FinalizedCheckpt() *ethpb.Checkpoint
}
// FinalizedCheckpt returns the latest finalized checkpoint tracked by the fork choice service.
func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
cp := s.forkChoiceStore.FinalizedCheckpt()
if cp != nil {
return cp
}
return &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
}
// HeadSlot returns the slot of the head of the chain.
func (s *Service) HeadSlot() uint64 {
s.headLock.RLock()
defer s.headLock.RUnlock()
return s.headSlot
}
// HeadRoot returns the root of the head of the chain.
func (s *Service) HeadRoot() []byte {
s.headLock.RLock()
defer s.headLock.RUnlock()
root := s.canonicalRoots[s.headSlot]
if len(root) != 0 {
return root
}
return params.BeaconConfig().ZeroHash[:]
}
// HeadBlock returns the head block of the chain.
func (s *Service) HeadBlock() *ethpb.BeaconBlock {
s.headLock.RLock()
defer s.headLock.RUnlock()
return proto.Clone(s.headBlock).(*ethpb.BeaconBlock)
}
// HeadState returns the head state of the chain.
func (s *Service) HeadState() *pb.BeaconState {
s.headLock.RLock()
defer s.headLock.RUnlock()
return proto.Clone(s.headState).(*pb.BeaconState)
}
// CanonicalRoot returns the canonical root of a given slot.
func (s *Service) CanonicalRoot(slot uint64) []byte {
s.headLock.RLock()
defer s.headLock.RUnlock()
return s.canonicalRoots[slot]
}
// GenesisTime returns the genesis time of the beacon chain.
func (s *Service) GenesisTime() time.Time {
return s.genesisTime
}
// CurrentFork retrieves the latest fork information of the beacon chain.
func (s *Service) CurrentFork() *pb.Fork {
s.headLock.RLock()
defer s.headLock.RUnlock()
return proto.Clone(s.headState.Fork).(*pb.Fork)
}
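// Illustrative sketch (hypothetical consumer; BeaconServer is an assumed name,
// not a type in this package): callers can depend on the narrow HeadFetcher
// interface rather than the concrete Service, which keeps them easy to mock:
//
//   type BeaconServer struct {
//       headFetcher HeadFetcher
//   }
//
//   func (bs *BeaconServer) currentHeadSlot() uint64 {
//       return bs.headFetcher.HeadSlot()
//   }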


@@ -0,0 +1,77 @@
package blockchain
import (
"context"
"testing"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)
func TestHeadSlot_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
}
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
[32]byte{},
)
}()
s.HeadSlot()
}
func TestHeadRoot_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
}
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
[32]byte{},
)
}()
s.HeadRoot()
}
func TestHeadBlock_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
}
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
[32]byte{},
)
}()
s.HeadBlock()
}
func TestHeadState_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
}
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
[32]byte{},
)
}()
s.HeadState()
}


@@ -0,0 +1,109 @@
package blockchain
import (
"bytes"
"context"
"reflect"
"testing"
"time"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
)
// Ensure Service implements chain info interface.
var _ = ChainInfoFetcher(&Service{})
var _ = GenesisTimeFetcher(&Service{})
var _ = ForkFetcher(&Service{})
func TestFinalizedCheckpt_Nil(t *testing.T) {
c := setupBeaconChain(t, nil)
if !bytes.Equal(c.FinalizedCheckpt().Root, params.BeaconConfig().ZeroHash[:]) {
t.Error("Incorrect pre chain start value")
}
}
func TestHeadRoot_Nil(t *testing.T) {
c := setupBeaconChain(t, nil)
if !bytes.Equal(c.HeadRoot(), params.BeaconConfig().ZeroHash[:]) {
t.Error("Incorrect pre chain start value")
}
}
func TestFinalizedCheckpt_CanRetrieve(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
c := setupBeaconChain(t, db)
if err := c.forkChoiceStore.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
t.Fatal(err)
}
if c.FinalizedCheckpt().Epoch != 0 {
t.Errorf("Finalized epoch at genesis should be 0, got: %d", c.FinalizedCheckpt().Epoch)
}
}
func TestHeadSlot_CanRetrieve(t *testing.T) {
c := &Service{}
c.headSlot = 100
if c.HeadSlot() != 100 {
t.Errorf("Wanted head slot: %d, got: %d", 100, c.HeadSlot())
}
}
func TestHeadRoot_CanRetrieve(t *testing.T) {
c := &Service{canonicalRoots: make(map[uint64][]byte)}
c.headSlot = 100
c.canonicalRoots[c.headSlot] = []byte{'A'}
if !bytes.Equal([]byte{'A'}, c.HeadRoot()) {
t.Errorf("Wanted head root: %v, got: %d", []byte{'A'}, c.HeadRoot())
}
}
func TestHeadBlock_CanRetrieve(t *testing.T) {
b := &ethpb.BeaconBlock{Slot: 1}
c := &Service{headBlock: b}
if !reflect.DeepEqual(b, c.HeadBlock()) {
t.Error("incorrect head block received")
}
}
func TestHeadState_CanRetrieve(t *testing.T) {
s := &pb.BeaconState{Slot: 2}
c := &Service{headState: s}
if !reflect.DeepEqual(s, c.HeadState()) {
t.Error("incorrect head state received")
}
}
func TestGenesisTime_CanRetrieve(t *testing.T) {
c := &Service{genesisTime: time.Unix(999, 0)}
wanted := time.Unix(999, 0)
if c.GenesisTime() != wanted {
t.Error("Did not get wanted genesis time")
}
}
func TestCurrentFork_CanRetrieve(t *testing.T) {
f := &pb.Fork{Epoch: 999}
s := &pb.BeaconState{Fork: f}
c := &Service{headState: s}
if !reflect.DeepEqual(c.CurrentFork(), f) {
t.Error("Recieved incorrect fork version")
}
}
func TestCanonicalRoot_CanRetrieve(t *testing.T) {
c := &Service{canonicalRoots: make(map[uint64][]byte)}
slot := uint64(123)
r := []byte{'B'}
c.canonicalRoots[slot] = r
if !bytes.Equal(r, c.CanonicalRoot(slot)) {
t.Errorf("Wanted head root: %v, got: %d", []byte{'A'}, c.CanonicalRoot(slot))
}
}


@@ -1,470 +0,0 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var (
reorgCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "reorg_counter",
Help: "The number of chain reorganization events that have happened in the fork choice rule",
})
)
var blkAncestorCache = cache.NewBlockAncestorCache()
// ForkChoice interface defines the methods for applying fork choice rule
// operations to the blockchain.
type ForkChoice interface {
ApplyForkChoiceRule(ctx context.Context, block *pb.BeaconBlock, computedState *pb.BeaconState) error
}
// updateFFGCheckPts checks whether the existing FFG checkpoints saved in the DB
// are older than the ones just processed in the state. If they are older, we update
// the DB with the latest FFG checkpoints, both justified and finalized.
func (c *ChainService) updateFFGCheckPts(ctx context.Context, state *pb.BeaconState) error {
lastJustifiedSlot := helpers.StartSlot(state.JustifiedEpoch)
savedJustifiedBlock, err := c.beaconDB.JustifiedBlock()
if err != nil {
return err
}
// If the last processed justified slot in the state is greater than
// the slot of the justified block saved in the DB.
if lastJustifiedSlot > savedJustifiedBlock.Slot {
// Retrieve the new justified block from DB using the new justified slot and save it.
newJustifiedBlock, err := c.beaconDB.BlockBySlot(ctx, lastJustifiedSlot)
if err != nil {
return err
}
// If the new justified slot is a skip slot in the DB, we keep walking back through its
// ancestors until we find a block.
lastAvailBlkSlot := lastJustifiedSlot
for newJustifiedBlock == nil {
log.WithField("slot", lastAvailBlkSlot-params.BeaconConfig().GenesisSlot).Debug("Missing block in DB, looking one slot back")
lastAvailBlkSlot--
newJustifiedBlock, err = c.beaconDB.BlockBySlot(ctx, lastAvailBlkSlot)
if err != nil {
return err
}
}
// Fetch justified state from historical states db.
newJustifiedState, err := c.beaconDB.HistoricalStateFromSlot(ctx, newJustifiedBlock.Slot)
if err != nil {
return err
}
if err := c.beaconDB.SaveJustifiedBlock(newJustifiedBlock); err != nil {
return err
}
if err := c.beaconDB.SaveJustifiedState(newJustifiedState); err != nil {
return err
}
}
lastFinalizedSlot := helpers.StartSlot(state.FinalizedEpoch)
savedFinalizedBlock, err := c.beaconDB.FinalizedBlock()
// If the last processed finalized slot in the state is greater than
// the slot of the finalized block saved in the DB.
if err != nil {
return err
}
if lastFinalizedSlot > savedFinalizedBlock.Slot {
// Retrieve the new finalized block from DB using the new finalized slot and save it.
newFinalizedBlock, err := c.beaconDB.BlockBySlot(ctx, lastFinalizedSlot)
if err != nil {
return err
}
// If the new finalized slot is a skip slot in the DB, we keep walking back through its
// ancestors until we find a block.
lastAvailBlkSlot := lastFinalizedSlot
for newFinalizedBlock == nil {
log.WithField("slot", lastAvailBlkSlot-params.BeaconConfig().GenesisSlot).Debug("Missing block in DB, looking one slot back")
lastAvailBlkSlot--
newFinalizedBlock, err = c.beaconDB.BlockBySlot(ctx, lastAvailBlkSlot)
if err != nil {
return err
}
}
// Retrieve the new finalized state using the new finalized slot and
// save it.
newFinalizedState, err := c.beaconDB.HistoricalStateFromSlot(ctx, lastFinalizedSlot)
if err != nil {
return err
}
if err := c.beaconDB.SaveFinalizedBlock(newFinalizedBlock); err != nil {
return err
}
if err := c.beaconDB.SaveFinalizedState(newFinalizedState); err != nil {
return err
}
}
return nil
}
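// Worked example (a sketch of the skip-slot backtracking above): if the state
// justifies an epoch whose start slot S holds no block (a skip slot), BlockBySlot
// returns nil for S, so the loop walks back to S-1, S-2, ... until a block is
// found; that block and its historical state become the saved justified pair.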
// ApplyForkChoiceRule determines the current beacon chain head using LMD
// GHOST as a block-vote weighted function to select a canonical head in
// Ethereum Serenity. The inputs are the recently processed block and its
// associated state.
func (c *ChainService) ApplyForkChoiceRule(
ctx context.Context,
block *pb.BeaconBlock,
postState *pb.BeaconState,
) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ApplyForkChoiceRule")
defer span.End()
log.Info("Applying LMD-GHOST Fork Choice Rule")
justifiedState, err := c.beaconDB.JustifiedState()
if err != nil {
return fmt.Errorf("could not retrieve justified state: %v", err)
}
attestationTargets, err := c.attestationTargets(justifiedState)
if err != nil {
return fmt.Errorf("could not retrieve attestation target: %v", err)
}
justifiedHead, err := c.beaconDB.JustifiedBlock()
if err != nil {
return fmt.Errorf("could not retrieve justified head: %v", err)
}
newHead, err := c.lmdGhost(ctx, justifiedHead, justifiedState, attestationTargets)
if err != nil {
return fmt.Errorf("could not run fork choice: %v", err)
}
newHeadRoot, err := hashutil.HashBeaconBlock(newHead)
if err != nil {
return fmt.Errorf("could not hash head block: %v", err)
}
c.canonicalBlocksLock.Lock()
defer c.canonicalBlocksLock.Unlock()
c.canonicalBlocks[newHead.Slot] = newHeadRoot[:]
currentHead, err := c.beaconDB.ChainHead()
if err != nil {
return fmt.Errorf("could not retrieve chain head: %v", err)
}
isDescendant, err := c.isDescendant(currentHead, newHead)
if err != nil {
return fmt.Errorf("could not check if block is descendant: %v", err)
}
newState := postState
if !isDescendant {
log.Warnf("Reorg happened, last head at slot %d, new head block at slot %d",
currentHead.Slot-params.BeaconConfig().GenesisSlot, newHead.Slot-params.BeaconConfig().GenesisSlot)
// Only regenerate head state if there was a reorg.
newState, err = c.beaconDB.HistoricalStateFromSlot(ctx, newHead.Slot)
if err != nil {
return fmt.Errorf("could not gen state: %v", err)
}
for revertedSlot := currentHead.Slot; revertedSlot > newHead.Slot; revertedSlot-- {
delete(c.canonicalBlocks, revertedSlot)
}
reorgCount.Inc()
}
// If we receive forked blocks.
if newHead.Slot != newState.Slot {
newState, err = c.beaconDB.HistoricalStateFromSlot(ctx, newHead.Slot)
if err != nil {
return fmt.Errorf("could not gen state: %v", err)
}
}
if err := c.beaconDB.UpdateChainHead(ctx, newHead, newState); err != nil {
return fmt.Errorf("failed to update chain: %v", err)
}
h, err := hashutil.HashBeaconBlock(newHead)
if err != nil {
return fmt.Errorf("could not hash head: %v", err)
}
log.WithFields(logrus.Fields{
"headRoot": fmt.Sprintf("%#x", bytesutil.Trunc(h[:])),
"headSlot": newHead.Slot - params.BeaconConfig().GenesisSlot,
"stateSlot": newState.Slot - params.BeaconConfig().GenesisSlot,
}).Info("Chain head block and state updated")
return nil
}
// lmdGhost applies the Latest Message Driven, Greediest Heaviest Observed Sub-Tree
// fork-choice rule defined in the Ethereum Serenity specification for the beacon chain.
//
// Spec pseudocode definition:
// def lmd_ghost(store: Store, start_state: BeaconState, start_block: BeaconBlock) -> BeaconBlock:
// """
// Execute the LMD-GHOST algorithm to find the head ``BeaconBlock``.
// """
// validators = start_state.validator_registry
// active_validator_indices = get_active_validator_indices(validators, slot_to_epoch(start_state.slot))
// attestation_targets = [
// (validator_index, get_latest_attestation_target(store, validator_index))
// for validator_index in active_validator_indices
// ]
//
// def get_vote_count(block: BeaconBlock) -> int:
// return sum(
// get_effective_balance(start_state.validator_balances[validator_index]) // FORK_CHOICE_BALANCE_INCREMENT
// for validator_index, target in attestation_targets
// if get_ancestor(store, target, block.slot) == block
// )
//
// head = start_block
// while 1:
// children = get_children(store, head)
// if len(children) == 0:
// return head
// head = max(children, key=get_vote_count)
func (c *ChainService) lmdGhost(
ctx context.Context,
startBlock *pb.BeaconBlock,
startState *pb.BeaconState,
voteTargets map[uint64]*pb.AttestationTarget,
) (*pb.BeaconBlock, error) {
highestSlot := c.beaconDB.HighestBlockSlot()
head := startBlock
for {
children, err := c.blockChildren(ctx, head, highestSlot)
if err != nil {
return nil, fmt.Errorf("could not fetch block children: %v", err)
}
if len(children) == 0 {
return head, nil
}
maxChild := children[0]
maxChildVotes, err := VoteCount(maxChild, startState, voteTargets, c.beaconDB)
if err != nil {
return nil, fmt.Errorf("unable to determine vote count for block: %v", err)
}
for i := 1; i < len(children); i++ {
candidateChildVotes, err := VoteCount(children[i], startState, voteTargets, c.beaconDB)
if err != nil {
return nil, fmt.Errorf("unable to determine vote count for block: %v", err)
}
maxChildRoot, err := hashutil.HashBeaconBlock(maxChild)
if err != nil {
return nil, err
}
candidateChildRoot, err := hashutil.HashBeaconBlock(children[i])
if err != nil {
return nil, err
}
if candidateChildVotes > maxChildVotes ||
(candidateChildVotes == maxChildVotes && bytesutil.LowerThan(maxChildRoot[:], candidateChildRoot[:])) {
maxChild = children[i]
}
}
head = maxChild
}
}
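// Worked example (a sketch of the tie-breaking above): if two children of the
// current head accumulate the same vote count, the child with the
// lexicographically higher block root wins, since bytesutil.LowerThan reports
// that the incumbent's root is lower than the candidate's. This keeps head
// selection deterministic across nodes that observe the same attestations.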
// blockChildren returns the child blocks of the given block up to a given
// highest slot.
//
// ex:
// /- C - E
// A - B - D - F
// \- G
// Input: B. Output: [C, D, G]
//
// Spec pseudocode definition:
// get_children(store: Store, block: BeaconBlock) -> List[BeaconBlock]
// returns the child blocks of the given block.
func (c *ChainService) blockChildren(ctx context.Context, block *pb.BeaconBlock, highestSlot uint64) ([]*pb.BeaconBlock, error) {
var children []*pb.BeaconBlock
currentRoot, err := hashutil.HashBeaconBlock(block)
if err != nil {
return nil, fmt.Errorf("could not tree hash incoming block: %v", err)
}
startSlot := block.Slot + 1
for i := startSlot; i <= highestSlot; i++ {
block, err := c.beaconDB.BlockBySlot(ctx, i)
if err != nil {
return nil, fmt.Errorf("could not get block by slot: %v", err)
}
// Continue if there's a skip block.
if block == nil {
continue
}
parentRoot := bytesutil.ToBytes32(block.ParentRootHash32)
if currentRoot == parentRoot {
children = append(children, block)
}
}
return children, nil
}
// isDescendant checks if the new head block is a descendant block of the current head.
func (c *ChainService) isDescendant(currentHead *pb.BeaconBlock, newHead *pb.BeaconBlock) (bool, error) {
currentHeadRoot, err := hashutil.HashBeaconBlock(currentHead)
if err != nil {
return false, err
}
for newHead.Slot > currentHead.Slot {
if bytesutil.ToBytes32(newHead.ParentRootHash32) == currentHeadRoot {
return true, nil
}
newHead, err = c.beaconDB.Block(bytesutil.ToBytes32(newHead.ParentRootHash32))
if err != nil {
return false, err
}
if newHead == nil {
return false, nil
}
}
return false, nil
}
// attestationTargets retrieves the list of attestation targets since last finalized epoch,
// each attestation target consists of validator index and its attestation target (i.e. the block
// which the validator attested to).
func (c *ChainService) attestationTargets(state *pb.BeaconState) (map[uint64]*pb.AttestationTarget, error) {
indices := helpers.ActiveValidatorIndices(state.ValidatorRegistry, helpers.CurrentEpoch(state))
attestationTargets := make(map[uint64]*pb.AttestationTarget)
for i, index := range indices {
target, err := c.attsService.LatestAttestationTarget(state, index)
if err != nil {
return nil, fmt.Errorf("could not retrieve attestation target: %v", err)
}
if target == nil {
continue
}
attestationTargets[uint64(i)] = target
}
return attestationTargets, nil
}
// VoteCount determines the number of votes on a beacon block by counting the number
// of target blocks that have such beacon block as a common ancestor.
//
// Spec pseudocode definition:
// def get_vote_count(block: BeaconBlock) -> int:
// return sum(
// get_effective_balance(start_state.validator_balances[validator_index]) // FORK_CHOICE_BALANCE_INCREMENT
// for validator_index, target in attestation_targets
// if get_ancestor(store, target, block.slot) == block
// )
func VoteCount(block *pb.BeaconBlock, state *pb.BeaconState, targets map[uint64]*pb.AttestationTarget, beaconDB *db.BeaconDB) (int, error) {
balances := 0
var ancestorRoot []byte
var err error
blockRoot, err := hashutil.HashBeaconBlock(block)
if err != nil {
return 0, err
}
for validatorIndex, target := range targets {
ancestorRoot, err = cachedAncestor(target, block.Slot, beaconDB)
if err != nil {
return 0, err
}
// This covers the following case: we start at B5 and want to process B6 and B7.
// B6 can be processed; B7 cannot, because it points to a
// block older than the current block B5.
// B4 - B5 - B6
// \ - - - - - B7
if ancestorRoot == nil {
continue
}
if bytes.Equal(blockRoot[:], ancestorRoot) {
balances += int(helpers.EffectiveBalance(state, validatorIndex))
}
}
return balances, nil
}
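// Worked example (a sketch, assuming two active validators each with a 32 ETH
// effective balance): if both validators' latest attestation targets have this
// block as their ancestor at block.Slot, VoteCount returns the sum of their
// effective balances, e.g. 2 * 32 ETH = 64,000,000,000 Gwei of voting weight.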
// BlockAncestor obtains the ancestor of a block at a certain slot.
//
// Spec pseudocode definition:
// def get_ancestor(store: Store, block: BeaconBlock, slot: Slot) -> BeaconBlock:
// """
// Get the ancestor of ``block`` with slot number ``slot``; return ``None`` if not found.
// """
// if block.slot == slot:
// return block
// elif block.slot < slot:
// return None
// else:
// return get_ancestor(store, store.get_parent(block), slot)
func BlockAncestor(targetBlock *pb.AttestationTarget, slot uint64, beaconDB *db.BeaconDB) ([]byte, error) {
if targetBlock.Slot == slot {
return targetBlock.BlockRoot[:], nil
}
if targetBlock.Slot < slot {
return nil, nil
}
parentRoot := bytesutil.ToBytes32(targetBlock.ParentRoot)
parent, err := beaconDB.Block(parentRoot)
if err != nil {
return nil, fmt.Errorf("could not get parent block: %v", err)
}
if parent == nil {
return nil, fmt.Errorf("parent block does not exist: %v", err)
}
newTarget := &pb.AttestationTarget{
Slot: parent.Slot,
BlockRoot: parentRoot[:],
ParentRoot: parent.ParentRootHash32,
}
return BlockAncestor(newTarget, slot, beaconDB)
}
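// Worked example (a sketch of the recursion above): given a target at slot 8
// whose ancestor chain is 8 -> 7 -> 5 (slot 6 skipped), BlockAncestor(target, 5)
// first recurses to the parent at slot 7, then to the parent at slot 5, where
// targetBlock.Slot == slot and that block's root is returned. Asking for slot 6
// would step past it to slot 5 and return nil, since no ancestor sits exactly
// at slot 6.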
// cachedAncestor retrieves the cached ancestor target from the block ancestor cache;
// if it's not there, it looks up the block tree to get it and caches it.
func cachedAncestor(target *pb.AttestationTarget, height uint64, beaconDB *db.BeaconDB) ([]byte, error) {
// Check whether the ancestor of the given block at the requested height was cached.
cachedAncestorInfo, err := blkAncestorCache.AncestorBySlot(target.BlockRoot, height)
if err != nil {
return nil, err
}
if cachedAncestorInfo != nil {
return cachedAncestorInfo.Target.BlockRoot, nil
}
ancestorRoot, err := BlockAncestor(target, height, beaconDB)
if err != nil {
return nil, err
}
ancestor, err := beaconDB.Block(bytesutil.ToBytes32(ancestorRoot))
if err != nil {
return nil, err
}
if ancestor == nil {
return nil, nil
}
ancestorTarget := &pb.AttestationTarget{
Slot: ancestor.Slot,
BlockRoot: ancestorRoot,
ParentRoot: ancestor.ParentRootHash32,
}
if err := blkAncestorCache.AddBlockAncestor(&cache.AncestorInfo{
Height: height,
Hash: target.BlockRoot,
Target: ancestorTarget,
}); err != nil {
return nil, err
}
return ancestorRoot, nil
}


@@ -1,225 +0,0 @@
package blockchain
import (
"context"
"fmt"
"testing"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/internal"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
)
type mockAttestationHandler struct {
targets map[uint64]*pb.AttestationTarget
}
func (m *mockAttestationHandler) LatestAttestationTarget(beaconState *pb.BeaconState, idx uint64) (*pb.AttestationTarget, error) {
return m.targets[idx], nil
}
func (m *mockAttestationHandler) BatchUpdateLatestAttestation(ctx context.Context, atts []*pb.Attestation) error {
return nil
}
func TestApplyForkChoice_ChainSplitReorg(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := internal.SetupDB(t)
defer internal.TeardownDB(t, beaconDB)
ctx := context.Background()
deposits, _ := setupInitialDeposits(t, 100)
eth1Data := &pb.Eth1Data{
DepositRootHash32: []byte{},
BlockHash32: []byte{},
}
justifiedState, err := state.GenesisBeaconState(deposits, 0, eth1Data)
if err != nil {
t.Fatalf("Can't generate genesis state: %v", err)
}
chainService := setupBeaconChain(t, beaconDB, nil)
// Construct a forked chain that looks as follows:
// /------B1 ----B3 ----- B5 (current head)
// B0 --B2 -------------B4
blocks, roots := constructForkedChain(t, justifiedState)
// We then setup a canonical chain of the following blocks:
// B0->B1->B3->B5.
if err := chainService.beaconDB.SaveBlock(blocks[0]); err != nil {
t.Fatal(err)
}
justifiedState.LatestBlock = blocks[0]
if err := chainService.beaconDB.SaveJustifiedState(justifiedState); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveJustifiedBlock(blocks[0]); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.UpdateChainHead(ctx, blocks[0], justifiedState); err != nil {
t.Fatal(err)
}
canonicalBlockIndices := []int{1, 3, 5}
postState := proto.Clone(justifiedState).(*pb.BeaconState)
for _, canonicalIndex := range canonicalBlockIndices {
postState, err = chainService.ApplyBlockStateTransition(ctx, blocks[canonicalIndex], postState)
if err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveBlock(blocks[canonicalIndex]); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.UpdateChainHead(ctx, blocks[canonicalIndex], postState); err != nil {
t.Fatal(err)
}
}
chainHead, err := chainService.beaconDB.ChainHead()
if err != nil {
t.Fatal(err)
}
if chainHead.Slot != justifiedState.Slot+5 {
t.Errorf(
"Expected chain head with slot %d, received %d",
justifiedState.Slot+5-params.BeaconConfig().GenesisSlot,
chainHead.Slot-params.BeaconConfig().GenesisSlot,
)
}
// We then save forked blocks and their historical states (but do not update chain head).
// The fork is from B0->B2->B4.
forkedBlockIndices := []int{2, 4}
forkState := proto.Clone(justifiedState).(*pb.BeaconState)
for _, forkIndex := range forkedBlockIndices {
forkState, err = chainService.ApplyBlockStateTransition(ctx, blocks[forkIndex], forkState)
if err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveBlock(blocks[forkIndex]); err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveHistoricalState(ctx, forkState); err != nil {
t.Fatal(err)
}
}
// Give the block from the forked chain, B4, the most votes.
voteTargets := make(map[uint64]*pb.AttestationTarget)
voteTargets[0] = &pb.AttestationTarget{
Slot: blocks[5].Slot,
BlockRoot: roots[5][:],
ParentRoot: blocks[5].ParentRootHash32,
}
for i := 1; i < len(deposits); i++ {
voteTargets[uint64(i)] = &pb.AttestationTarget{
Slot: blocks[4].Slot,
BlockRoot: roots[4][:],
ParentRoot: blocks[4].ParentRootHash32,
}
}
attHandler := &mockAttestationHandler{
targets: voteTargets,
}
chainService.attsService = attHandler
block4State, err := chainService.beaconDB.HistoricalStateFromSlot(ctx, blocks[4].Slot)
if err != nil {
t.Fatal(err)
}
// Applying the fork choice rule should reorg to B4 successfully.
if err := chainService.ApplyForkChoiceRule(ctx, blocks[4], block4State); err != nil {
t.Fatal(err)
}
newHead, err := chainService.beaconDB.ChainHead()
if err != nil {
t.Fatal(err)
}
if !proto.Equal(newHead, blocks[4]) {
t.Errorf(
"Expected chain head %v, received %v",
blocks[4],
newHead,
)
}
want := fmt.Sprintf(
"Reorg happened, last head at slot %d, new head block at slot %d",
blocks[5].Slot-params.BeaconConfig().GenesisSlot, blocks[4].Slot-params.BeaconConfig().GenesisSlot,
)
testutil.AssertLogsContain(t, hook, want)
}
func constructForkedChain(t *testing.T, beaconState *pb.BeaconState) ([]*pb.BeaconBlock, [][32]byte) {
// Construct the following chain:
// /------B1 ----B3 ----- B5 (current head)
// B0 --B2 -------------B4
blocks := make([]*pb.BeaconBlock, 6)
roots := make([][32]byte, 6)
var err error
blocks[0] = &pb.BeaconBlock{
Slot: beaconState.Slot,
ParentRootHash32: []byte{'A'},
Body: &pb.BeaconBlockBody{},
}
roots[0], err = hashutil.HashBeaconBlock(blocks[0])
if err != nil {
t.Fatalf("Could not hash block: %v", err)
}
blocks[1] = &pb.BeaconBlock{
Slot: beaconState.Slot + 2,
ParentRootHash32: roots[0][:],
Body: &pb.BeaconBlockBody{},
}
roots[1], err = hashutil.HashBeaconBlock(blocks[1])
if err != nil {
t.Fatalf("Could not hash block: %v", err)
}
blocks[2] = &pb.BeaconBlock{
Slot: beaconState.Slot + 1,
ParentRootHash32: roots[0][:],
Body: &pb.BeaconBlockBody{},
}
roots[2], err = hashutil.HashBeaconBlock(blocks[2])
if err != nil {
t.Fatalf("Could not hash block: %v", err)
}
blocks[3] = &pb.BeaconBlock{
Slot: beaconState.Slot + 3,
ParentRootHash32: roots[1][:],
Body: &pb.BeaconBlockBody{},
}
roots[3], err = hashutil.HashBeaconBlock(blocks[3])
if err != nil {
t.Fatalf("Could not hash block: %v", err)
}
blocks[4] = &pb.BeaconBlock{
Slot: beaconState.Slot + 4,
ParentRootHash32: roots[2][:],
Body: &pb.BeaconBlockBody{},
}
roots[4], err = hashutil.HashBeaconBlock(blocks[4])
if err != nil {
t.Fatalf("Could not hash block: %v", err)
}
blocks[5] = &pb.BeaconBlock{
Slot: beaconState.Slot + 5,
ParentRootHash32: roots[3][:],
Body: &pb.BeaconBlockBody{},
}
roots[5], err = hashutil.HashBeaconBlock(blocks[5])
if err != nil {
t.Fatalf("Could not hash block: %v", err)
}
return blocks, roots
}
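To make the vote split in the reorg test above concrete: with 100 deposits, validator 0 votes for B5 while validators 1 through 99 vote for B4, so the B0->B2->B4 branch carries 99 of 100 latest votes and fork choice reorgs to B4. A minimal, self-contained tally (illustrative only, not Prysm code):
package main
import "fmt"
func main() {
// Latest-vote counts from the test above: 1 vote for B5, 99 for B4.
votes := map[string]int{"B5": 1, "B4": 99}
head := "B5" // current canonical head before applying fork choice
if votes["B4"] > votes[head] {
head = "B4" // the heavier branch wins and a reorg is logged
}
fmt.Println("new head:", head) // new head: B4
}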

(File diff suppressed because it is too large.)


@@ -0,0 +1,68 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"log.go",
"metrics.go",
"process_attestation.go",
"process_block.go",
"service.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/forkchoice",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"benchmark_test.go",
"lmd_ghost_yaml_test.go",
"process_attestation_test.go",
"process_block_test.go",
"service_test.go",
"tree_test.go",
],
data = ["lmd_ghost_test.yaml"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@in_gopkg_yaml_v2//:go_default_library",
],
)
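Assuming a standard Prysm checkout, the targets declared above can be exercised directly with Bazel (illustrative invocations):
bazel build //beacon-chain/blockchain/forkchoice:go_default_library
bazel test //beacon-chain/blockchain/forkchoice:go_default_test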


@@ -0,0 +1,178 @@
package forkchoice
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
func BenchmarkForkChoiceTree1(b *testing.B) {
ctx := context.Background()
db := testDB.SetupDB(b)
defer testDB.TeardownDB(b, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
if err != nil {
b.Fatal(err)
}
// Benchmark fork choice with 1024 validators
validators := make([]*ethpb.Validator, 1024)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
b.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
b.Fatal(err)
}
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
b.Fatal(err)
}
// Spread the votes across the 3 leaf nodes
for i := 0; i < len(validators); i++ {
switch {
case i < 256:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[1]}); err != nil {
b.Fatal(err)
}
case i > 768:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[7]}); err != nil {
b.Fatal(err)
}
default:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[8]}); err != nil {
b.Fatal(err)
}
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := store.Head(ctx)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkForkChoiceTree2(b *testing.B) {
ctx := context.Background()
db := testDB.SetupDB(b)
defer testDB.TeardownDB(b, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree2(db)
if err != nil {
b.Fatal(err)
}
// Benchmark fork choice with 1024 validators
validators := make([]*ethpb.Validator, 1024)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
b.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
b.Fatal(err)
}
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
b.Fatal(err)
}
// Spread the votes evenly across the leaf nodes (indices 8 to 15)
nodeIndex := 8
for i := 0; i < len(validators); i++ {
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[nodeIndex]}); err != nil {
b.Fatal(err)
}
if i%155 == 0 {
nodeIndex++
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := store.Head(ctx)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkForkChoiceTree3(b *testing.B) {
ctx := context.Background()
db := testDB.SetupDB(b)
defer testDB.TeardownDB(b, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree3(db)
if err != nil {
b.Fatal(err)
}
// Benchmark fork choice with 1024 validators
validators := make([]*ethpb.Validator, 1024)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
b.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
b.Fatal(err)
}
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
b.Fatal(err)
}
// All validators vote on the same head
for i := 0; i < len(validators); i++ {
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[len(roots)-1]}); err != nil {
b.Fatal(err)
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := store.Head(ctx)
if err != nil {
b.Fatal(err)
}
}
}
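These benchmarks can also be run with the plain Go toolchain using standard flags (shown as an example invocation; adjust the package path to your checkout):
go test -run=NONE -bench=BenchmarkForkChoiceTree -benchmem ./beacon-chain/blockchain/forkchoice/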


@@ -0,0 +1,9 @@
/*
Package forkchoice implements the Latest Message Driven GHOST (Greedy Heaviest-Observed
Sub-Tree) algorithm as the Ethereum Serenity beacon chain fork choice rule. The algorithm is designed to
properly detect the canonical chain based on validator votes even in the presence of high network
latency, network partitions, and many conflicting blocks. To read more about fork choice, read the
official accompanying document:
https://github.com/ethereum/eth2.0-specs/blob/v0.8.3/specs/core/0_fork-choice.md
*/
package forkchoice
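As a rough illustration of the rule described above, LMD-GHOST starts at the justified block and greedily descends to the child whose subtree carries the most latest-vote weight. The sketch below uses hypothetical types and is not the package's actual implementation:
package forkchoice
// node is a hypothetical tree node used only for illustration.
type node struct {
children []*node
weight   uint64 // sum of latest-vote balances in this subtree
}
// head walks from the justified block down to a leaf, always taking the
// heaviest child, which is the essence of the GHOST fork choice rule.
func head(justified *node) *node {
cur := justified
for len(cur.children) > 0 {
best := cur.children[0]
for _, c := range cur.children[1:] {
if c.weight > best.weight {
best = c
}
}
cur = best
}
return cur
}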


@@ -0,0 +1,59 @@
test_cases:
# GHOST chooses b3 with the heaviest weight
- blocks:
- id: 'b0'
parent: 'b0'
- id: 'b1'
parent: 'b0'
- id: 'b2'
parent: 'b1'
- id: 'b3'
parent: 'b1'
weights:
b0: 0
b1: 0
b2: 5
b3: 10
head: 'b3'
# GHOST chooses b1 with the heaviest weight
- blocks:
- id: 'b0'
parent: 'b0'
- id: 'b1'
parent: 'b0'
- id: 'b2'
parent: 'b0'
- id: 'b3'
parent: 'b0'
weights:
b1: 5
b2: 4
b3: 3
head: 'b1'
# Children with equal weights; GHOST chooses b3 because it is lexicographically higher than b2
- blocks:
- id: 'b0'
parent: 'b0'
- id: 'b1'
parent: 'b0'
- id: 'b2'
parent: 'b0'
- id: 'b3'
parent: 'b0'
weights:
b1: 5
b2: 6
b3: 6
head: 'b3'
# Children with equal weights; GHOST chooses b2 because it is lexicographically higher than b1
- blocks:
- id: 'b0'
parent: 'b0'
- id: 'b1'
parent: 'b0'
- id: 'b2'
parent: 'b0'
weights:
b1: 0
b2: 0
head: 'b2'
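The two tie cases above exercise a deterministic tie-break: when children carry equal weight, the lexicographically higher identifier (in practice, the block root) wins. A minimal sketch of such a comparison, using a hypothetical helper rather than the package's actual code:
import "bytes"
// higherRoot returns the lexicographically higher of two roots; shown here
// only to illustrate how ties between equally weighted children can be broken.
func higherRoot(a, b []byte) []byte {
if bytes.Compare(a, b) >= 0 {
return a
}
return b
}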


@@ -0,0 +1,140 @@
package forkchoice
import (
"bytes"
"context"
"io/ioutil"
"path/filepath"
"strconv"
"testing"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"gopkg.in/yaml.v2"
)
type Config struct {
TestCases []struct {
Blocks []struct {
ID string `yaml:"id"`
Parent string `yaml:"parent"`
} `yaml:"blocks"`
Weights map[string]int `yaml:"weights"`
Head string `yaml:"head"`
} `yaml:"test_cases"`
}
func TestGetHeadFromYaml(t *testing.T) {
ctx := context.Background()
filename, _ := filepath.Abs("./lmd_ghost_test.yaml")
yamlFile, err := ioutil.ReadFile(filename)
if err != nil {
t.Fatal(err)
}
var c *Config
if err := yaml.Unmarshal(yamlFile, &c); err != nil {
t.Fatal(err)
}
for _, test := range c.TestCases {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
blksRoot := make(map[int][]byte)
// Construct block tree from yaml.
for _, blk := range test.Blocks {
// genesis block condition
if blk.ID == blk.Parent {
b := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
if err := db.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
if err != nil {
t.Fatal(err)
}
blksRoot[0] = root[:]
} else {
slot, err := strconv.Atoi(blk.ID[1:])
if err != nil {
t.Fatal(err)
}
parentSlot, err := strconv.Atoi(blk.Parent[1:])
if err != nil {
t.Fatal(err)
}
b := &ethpb.BeaconBlock{Slot: uint64(slot), ParentRoot: blksRoot[parentSlot]}
if err := db.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
if err != nil {
t.Fatal(err)
}
blksRoot[slot] = root[:]
}
}
// Assign validator votes to the blocks as weights.
count := 0
for blk, votes := range test.Weights {
slot, err := strconv.Atoi(blk[1:])
if err != nil {
t.Fatal(err)
}
max := count + votes
for i := count; i < max; i++ {
if err := db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: blksRoot[slot]}); err != nil {
t.Fatal(err)
}
count++
}
}
store := NewForkChoiceService(ctx, db)
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
t.Fatal(err)
}
store.justifiedCheckpt.Root = blksRoot[0]
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(blksRoot[0])); err != nil {
t.Fatal(err)
}
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
t.Fatal(err)
}
head, err := store.Head(ctx)
if err != nil {
t.Fatal(err)
}
headSlot, err := strconv.Atoi(test.Head[1:])
if err != nil {
t.Fatal(err)
}
wantedHead := blksRoot[headSlot]
if !bytes.Equal(head, wantedHead) {
t.Errorf("wanted root %#x, got root %#x", wantedHead, head)
}
helpers.ClearAllCaches()
testDB.TeardownDB(t, db)
}
}


@@ -0,0 +1,40 @@
package forkchoice
import (
"fmt"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "forkchoice")
// logEpochData logs epoch-related data at the epoch boundary.
func logEpochData(beaconState *pb.BeaconState) {
log.WithFields(logrus.Fields{
"epoch": helpers.CurrentEpoch(beaconState),
"finalizedEpoch": beaconState.FinalizedCheckpoint.Epoch,
"justifiedEpoch": beaconState.CurrentJustifiedCheckpoint.Epoch,
"previousJustifiedEpoch": beaconState.PreviousJustifiedCheckpoint.Epoch,
}).Info("Starting next epoch")
activeVals, err := helpers.ActiveValidatorIndices(beaconState, helpers.CurrentEpoch(beaconState))
if err != nil {
log.WithError(err).Error("Could not get active validator indices")
return
}
log.WithFields(logrus.Fields{
"totalValidators": len(beaconState.Validators),
"activeValidators": len(activeVals),
"averageBalance": fmt.Sprintf("%.5f ETH", averageBalance(beaconState.Balances)),
}).Info("Validator registry information")
}
func averageBalance(balances []uint64) float64 {
total := uint64(0)
for i := 0; i < len(balances); i++ {
total += balances[i]
}
return float64(total) / float64(len(balances)) / float64(params.BeaconConfig().GweiPerEth)
}
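For example, with two balances of 32e9 and 31e9 Gwei and GweiPerEth = 1e9, averageBalance returns (32e9 + 31e9) / 2 / 1e9 = 31.50000 ETH, matching the %.5f format used above.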


@@ -0,0 +1,92 @@
package forkchoice
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
var (
beaconFinalizedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_finalized_epoch",
Help: "Last finalized epoch of the processed state",
})
beaconFinalizedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_finalized_root",
Help: "Last finalized root of the processed state",
})
beaconCurrentJustifiedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_current_justified_epoch",
Help: "Current justified epoch of the processed state",
})
beaconCurrentJustifiedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_current_justified_root",
Help: "Current justified root of the processed state",
})
beaconPrevJustifiedEpoch = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_previous_justified_epoch",
Help: "Previous justified epoch of the processed state",
})
beaconPrevJustifiedRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_previous_justified_root",
Help: "Previous justified root of the processed state",
})
activeValidatorsGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "state_active_validators",
Help: "Total number of active validators",
})
slashedValidatorsGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "state_slashed_validators",
Help: "Total slashed validators",
})
withdrawnValidatorsGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "state_withdrawn_validators",
Help: "Total withdrawn validators",
})
totalValidatorsGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_current_validators",
Help: "Number of status=pending|active|exited|withdrawable validators in current epoch",
})
)
func reportEpochMetrics(state *pb.BeaconState) {
currentEpoch := state.Slot / params.BeaconConfig().SlotsPerEpoch
// Validator counts
var active float64
var slashed float64
var withdrawn float64
for _, v := range state.Validators {
if v.ActivationEpoch <= currentEpoch && currentEpoch < v.ExitEpoch {
active++
}
if v.Slashed {
slashed++
}
if currentEpoch >= v.ExitEpoch {
withdrawn++
}
}
activeValidatorsGauge.Set(active)
slashedValidatorsGauge.Set(slashed)
withdrawnValidatorsGauge.Set(withdrawn)
totalValidatorsGauge.Set(float64(len(state.Validators)))
// Current justified epoch and root
if state.CurrentJustifiedCheckpoint != nil {
beaconCurrentJustifiedEpoch.Set(float64(state.CurrentJustifiedCheckpoint.Epoch))
beaconCurrentJustifiedRoot.Set(float64(bytesutil.ToLowInt64(state.CurrentJustifiedCheckpoint.Root)))
}
// Previous justified epoch and root
if state.PreviousJustifiedCheckpoint != nil {
beaconPrevJustifiedEpoch.Set(float64(state.PreviousJustifiedCheckpoint.Epoch))
beaconPrevJustifiedRoot.Set(float64(bytesutil.ToLowInt64(state.PreviousJustifiedCheckpoint.Root)))
}
// Finalized epoch and root
if state.FinalizedCheckpoint != nil {
beaconFinalizedEpoch.Set(float64(state.FinalizedCheckpoint.Epoch))
beaconFinalizedRoot.Set(float64(bytesutil.ToLowInt64(state.FinalizedCheckpoint.Root)))
}
}
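Prometheus gauges hold float64 values, so a 32-byte checkpoint root cannot be exported whole; bytesutil.ToLowInt64 folds the root down to its low eight bytes instead. Roughly equivalent behavior (an assumption for illustration, not the library's source):
import "encoding/binary"
// toLowInt64 interprets the first eight bytes of b as a little-endian
// integer, which is enough to tell checkpoint roots apart on a dashboard.
// Assumes len(b) >= 8.
func toLowInt64(b []byte) int64 {
return int64(binary.LittleEndian.Uint64(b[:8]))
}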


@@ -0,0 +1,308 @@
package forkchoice
import (
"context"
"fmt"
"time"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// OnAttestation is called whenever an attestation is received. It updates the validators' latest votes
// as well as the fork choice store struct.
//
// Spec pseudocode definition:
// def on_attestation(store: Store, attestation: Attestation) -> None:
// target = attestation.data.target
//
// # Cannot calculate the current shuffling if have not seen the target
// assert target.root in store.blocks
//
// # Attestations cannot be from future epochs. If they are, delay consideration until the epoch arrives
// base_state = store.block_states[target.root].copy()
// assert store.time >= base_state.genesis_time + compute_start_slot_of_epoch(target.epoch) * SECONDS_PER_SLOT
//
// # Store target checkpoint state if not yet seen
// if target not in store.checkpoint_states:
// process_slots(base_state, compute_start_slot_of_epoch(target.epoch))
// store.checkpoint_states[target] = base_state
// target_state = store.checkpoint_states[target]
//
// # Attestations can only affect the fork choice of subsequent slots.
// # Delay consideration in the fork choice until their slot is in the past.
// attestation_slot = get_attestation_data_slot(target_state, attestation.data)
// assert store.time >= (attestation_slot + 1) * SECONDS_PER_SLOT
//
// # Get state at the `target` to validate attestation and calculate the committees
// indexed_attestation = get_indexed_attestation(target_state, attestation)
// assert is_valid_indexed_attestation(target_state, indexed_attestation)
//
// # Update latest messages
// for i in indexed_attestation.custody_bit_0_indices + indexed_attestation.custody_bit_1_indices:
// if i not in store.latest_messages or target.epoch > store.latest_messages[i].epoch:
// store.latest_messages[i] = LatestMessage(epoch=target.epoch, root=attestation.data.beacon_block_root)
func (s *Store) OnAttestation(ctx context.Context, a *ethpb.Attestation) (uint64, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.onAttestation")
defer span.End()
tgt := proto.Clone(a.Data.Target).(*ethpb.Checkpoint)
tgtSlot := helpers.StartSlot(tgt.Epoch)
// Verify beacon node has seen the target block before.
if !s.db.HasBlock(ctx, bytesutil.ToBytes32(tgt.Root)) {
return 0, fmt.Errorf("target root %#x does not exist in db", bytesutil.Trunc(tgt.Root))
}
// Verify attestation target has had a valid pre state produced by the target block.
baseState, err := s.verifyAttPreState(ctx, tgt)
if err != nil {
return 0, err
}
// Verify the attestation is not from a future epoch.
if err := helpers.VerifySlotTime(baseState.GenesisTime, tgtSlot); err != nil {
return 0, errors.Wrap(err, "could not verify attestation target slot")
}
// Store target checkpoint state if not yet seen.
baseState, err = s.saveCheckpointState(ctx, baseState, tgt)
if err != nil {
return 0, err
}
// Delay attestation processing until the subsequent slot.
if err := s.waitForAttInclDelay(ctx, a, baseState); err != nil {
return 0, err
}
// Verify attestations can only affect the fork choice of subsequent slots.
if err := s.verifyAttSlotTime(ctx, baseState, a.Data); err != nil {
return 0, err
}
s.attsQueueLock.Lock()
defer s.attsQueueLock.Unlock()
atts := make([]*ethpb.Attestation, 0, len(s.attsQueue))
for root, a := range s.attsQueue {
log := log.WithFields(logrus.Fields{
"AggregatedBitfield": fmt.Sprintf("%08b", a.AggregationBits),
"Root": fmt.Sprintf("%#x", root),
})
log.Debug("Updating latest votes")
// Use the target state to validate the attestation and calculate the committees.
indexedAtt, err := s.verifyAttestation(ctx, baseState, a)
if err != nil {
log.WithError(err).Warn("Removing attestation from queue.")
delete(s.attsQueue, root)
continue
}
// Update every validator's latest vote.
if err := s.updateAttVotes(ctx, indexedAtt, tgt.Root, tgt.Epoch); err != nil {
return 0, err
}
// Mark the attestation as seen so we don't update votes again when it appears in a block.
if err := s.setSeenAtt(a); err != nil {
return 0, err
}
delete(s.attsQueue, root)
att, err := s.aggregatedAttestations(ctx, a)
if err != nil {
return 0, err
}
atts = append(atts, att...)
}
if err := s.db.SaveAttestations(ctx, atts); err != nil {
return 0, err
}
return tgtSlot, nil
}
// verifyAttPreState validates input attested check point has a valid pre-state.
func (s *Store) verifyAttPreState(ctx context.Context, c *ethpb.Checkpoint) (*pb.BeaconState, error) {
baseState, err := s.db.State(ctx, bytesutil.ToBytes32(c.Root))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", helpers.StartSlot(c.Epoch))
}
if baseState == nil {
return nil, fmt.Errorf("pre state of target block %d does not exist", helpers.StartSlot(c.Epoch))
}
return baseState, nil
}
// saveCheckpointState saves and returns the processed state with the associated check point.
func (s *Store) saveCheckpointState(ctx context.Context, baseState *pb.BeaconState, c *ethpb.Checkpoint) (*pb.BeaconState, error) {
s.checkpointStateLock.Lock()
defer s.checkpointStateLock.Unlock()
cachedState, err := s.checkpointState.StateByCheckpoint(c)
if err != nil {
return nil, errors.Wrap(err, "could not get cached checkpoint state")
}
if cachedState != nil {
return cachedState, nil
}
// Advance slots only when it's higher than current state slot.
if helpers.StartSlot(c.Epoch) > baseState.Slot {
stateCopy := proto.Clone(baseState).(*pb.BeaconState)
baseState, err = state.ProcessSlots(ctx, stateCopy, helpers.StartSlot(c.Epoch))
if err != nil {
return nil, errors.Wrapf(err, "could not process slots up to %d", helpers.StartSlot(c.Epoch))
}
}
if err := s.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: c,
State: baseState,
}); err != nil {
return nil, errors.Wrap(err, "could not saved checkpoint state to cache")
}
return baseState, nil
}
// waitForAttInclDelay waits until the next slot because an attestation can only affect
// the fork choice of subsequent slots. This delays attestation inclusion in fork choice
// until the attested slot is in the past.
func (s *Store) waitForAttInclDelay(ctx context.Context, a *ethpb.Attestation, targetState *pb.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.forkchoice.waitForAttInclDelay")
defer span.End()
slot, err := helpers.AttestationDataSlot(targetState, a.Data)
if err != nil {
return errors.Wrap(err, "could not get attestation slot")
}
nextSlot := slot + 1
duration := time.Duration(nextSlot*params.BeaconConfig().SecondsPerSlot) * time.Second
timeToInclude := time.Unix(int64(targetState.GenesisTime), 0).Add(duration)
if err := s.aggregateAttestation(ctx, a); err != nil {
return errors.Wrap(err, "could not aggregate attestation")
}
time.Sleep(time.Until(timeToInclude))
return nil
}
// aggregateAttestation aggregates the attestations in the pending queue.
func (s *Store) aggregateAttestation(ctx context.Context, att *ethpb.Attestation) error {
s.attsQueueLock.Lock()
defer s.attsQueueLock.Unlock()
root, err := ssz.HashTreeRoot(att.Data)
if err != nil {
return err
}
if a, ok := s.attsQueue[root]; ok {
a, err := helpers.AggregateAttestation(a, att)
if err != nil {
return err
}
s.attsQueue[root] = a
return nil
}
s.attsQueue[root] = proto.Clone(att).(*ethpb.Attestation)
return nil
}
// verifyAttSlotTime validates input attestation is not from the future.
func (s *Store) verifyAttSlotTime(ctx context.Context, baseState *pb.BeaconState, d *ethpb.AttestationData) error {
aSlot, err := helpers.AttestationDataSlot(baseState, d)
if err != nil {
return errors.Wrap(err, "could not get attestation slot")
}
return helpers.VerifySlotTime(baseState.GenesisTime, aSlot+1)
}
// verifyAttestation validates input attestation is valid.
func (s *Store) verifyAttestation(ctx context.Context, baseState *pb.BeaconState, a *ethpb.Attestation) (*ethpb.IndexedAttestation, error) {
indexedAtt, err := blocks.ConvertToIndexed(ctx, baseState, a)
if err != nil {
return nil, errors.Wrap(err, "could not convert attestation to indexed attestation")
}
if err := blocks.VerifyIndexedAttestation(ctx, baseState, indexedAtt); err != nil {
return nil, errors.Wrap(err, "could not verify indexed attestation")
}
return indexedAtt, nil
}
// updateAttVotes updates validator's latest votes based on the incoming attestation.
func (s *Store) updateAttVotes(
ctx context.Context,
indexedAtt *ethpb.IndexedAttestation,
tgtRoot []byte,
tgtEpoch uint64) error {
indices := append(indexedAtt.CustodyBit_0Indices, indexedAtt.CustodyBit_1Indices...)
newVoteIndices := make([]uint64, 0, len(indices))
newVotes := make([]*pb.ValidatorLatestVote, 0, len(indices))
for _, i := range indices {
vote, err := s.db.ValidatorLatestVote(ctx, i)
if err != nil {
return errors.Wrapf(err, "could not get latest vote for validator %d", i)
}
if vote == nil || tgtEpoch > vote.Epoch {
newVotes = append(newVotes, &pb.ValidatorLatestVote{
Epoch: tgtEpoch,
Root: tgtRoot,
})
newVoteIndices = append(newVoteIndices, i)
}
}
return s.db.SaveValidatorLatestVotes(ctx, newVoteIndices, newVotes)
}
// setSeenAtt sets the attestation hash in seen attestation map to true.
func (s *Store) setSeenAtt(a *ethpb.Attestation) error {
s.seenAttsLock.Lock()
defer s.seenAttsLock.Unlock()
r, err := hashutil.HashProto(a)
if err != nil {
return err
}
s.seenAtts[r] = true
return nil
}
// aggregatedAttestations returns the aggregated attestations after checking the ones saved in the db.
func (s *Store) aggregatedAttestations(ctx context.Context, att *ethpb.Attestation) ([]*ethpb.Attestation, error) {
r, err := ssz.HashTreeRoot(att.Data)
if err != nil {
return nil, err
}
saved, err := s.db.AttestationsByDataRoot(ctx, r)
if err != nil {
return nil, err
}
if saved == nil {
return []*ethpb.Attestation{att}, nil
}
aggregated, err := helpers.AggregateAttestations(append(saved, att))
if err != nil {
return nil, err
}
return aggregated, nil
}
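Note that the pending queue in aggregateAttestation is keyed by the hash tree root of att.Data: attestations with identical data but different aggregation bits land on the same key and are merged, while attestations for different data never collide. A self-contained toy model of that behavior (illustrative only):
package main
import "fmt"
func main() {
// Toy queue: key stands in for HashTreeRoot(att.Data),
// value for the OR-combined aggregation bitfield.
queue := map[string]byte{}
add := func(dataKey string, bits byte) {
queue[dataKey] |= bits // same data merges; different data stays separate
}
add("data-A", 0b001)
add("data-A", 0b010)
add("data-B", 0b100)
fmt.Printf("%03b %03b\n", queue["data-A"], queue["data-B"]) // 011 100
}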


@@ -0,0 +1,272 @@
package forkchoice
import (
"bytes"
"context"
"reflect"
"strings"
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
func TestStore_OnAttestation(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
_, err := blockTree1(db)
if err != nil {
t.Fatal(err)
}
BlkWithOutState := &ethpb.BeaconBlock{Slot: 0}
if err := db.SaveBlock(ctx, BlkWithOutState); err != nil {
t.Fatal(err)
}
BlkWithOutStateRoot, _ := ssz.SigningRoot(BlkWithOutState)
BlkWithStateBadAtt := &ethpb.BeaconBlock{Slot: 1}
if err := db.SaveBlock(ctx, BlkWithStateBadAtt); err != nil {
t.Fatal(err)
}
BlkWithStateBadAttRoot, _ := ssz.SigningRoot(BlkWithStateBadAtt)
if err := store.db.SaveState(ctx, &pb.BeaconState{}, BlkWithStateBadAttRoot); err != nil {
t.Fatal(err)
}
BlkWithValidState := &ethpb.BeaconBlock{Slot: 2}
if err := db.SaveBlock(ctx, BlkWithValidState); err != nil {
t.Fatal(err)
}
BlkWithValidStateRoot, _ := ssz.SigningRoot(BlkWithValidState)
if err := store.db.SaveState(ctx, &pb.BeaconState{
Fork: &pb.Fork{
Epoch: 0,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
ActiveIndexRoots: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
}, BlkWithValidStateRoot); err != nil {
t.Fatal(err)
}
tests := []struct {
name string
a *ethpb.Attestation
s *pb.BeaconState
wantErr bool
wantErrString string
}{
{
name: "attestation's target root not in db",
a: &ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: []byte{'A'}}}},
s: &pb.BeaconState{},
wantErr: true,
wantErrString: "target root 0x41 does not exist in db",
},
{
name: "no pre state for attestations's target block",
a: &ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}},
s: &pb.BeaconState{},
wantErr: true,
wantErrString: "pre state of target block 0 does not exist",
},
{
name: "process attestation from future epoch",
a: &ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Epoch: params.BeaconConfig().FarFutureEpoch,
Root: BlkWithStateBadAttRoot[:]}}},
s: &pb.BeaconState{},
wantErr: true,
wantErrString: "could not process slot from the future",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
t.Fatal(err)
}
_, err := store.OnAttestation(ctx, tt.a)
if tt.wantErr {
if !strings.Contains(err.Error(), tt.wantErrString) {
t.Errorf("Store.OnAttestation() error = %v, wantErr = %v", err, tt.wantErrString)
}
} else if err != nil {
t.Error(err)
}
})
}
}
func TestStore_SaveCheckpointState(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseDemoBeaconConfig()
store := NewForkChoiceService(ctx, db)
crosslinks := make([]*ethpb.Crosslink, params.BeaconConfig().ShardCount)
for i := 0; i < len(crosslinks); i++ {
crosslinks[i] = &ethpb.Crosslink{
ParentRoot: make([]byte, 32),
DataRoot: make([]byte, 32),
}
}
s := &pb.BeaconState{
Fork: &pb.Fork{
Epoch: 0,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
ActiveIndexRoots: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
StateRoots: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
BlockRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
LatestBlockHeader: &ethpb.BeaconBlockHeader{},
JustificationBits: []byte{0},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{},
CurrentCrosslinks: crosslinks,
CompactCommitteesRoots: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
Slashings: make([]uint64, params.BeaconConfig().EpochsPerSlashingsVector),
FinalizedCheckpoint: &ethpb.Checkpoint{},
}
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
t.Fatal(err)
}
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'A'}}
s1, err := store.saveCheckpointState(ctx, s, cp1)
if err != nil {
t.Fatal(err)
}
if s1.Slot != 1*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot)
}
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: []byte{'B'}}
s2, err := store.saveCheckpointState(ctx, s, cp2)
if err != nil {
t.Fatal(err)
}
if s2.Slot != 2*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot)
}
s1, err = store.saveCheckpointState(ctx, nil, cp1)
if err != nil {
t.Fatal(err)
}
if s1.Slot != 1*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot)
}
s1, err = store.checkpointState.StateByCheckpoint(cp1)
if err != nil {
t.Fatal(err)
}
if s1.Slot != 1*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot)
}
s2, err = store.checkpointState.StateByCheckpoint(cp2)
if err != nil {
t.Fatal(err)
}
if s2.Slot != 2*params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Wanted state slot: %d, got: %d", 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot)
}
s.Slot = params.BeaconConfig().SlotsPerEpoch + 1
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
t.Fatal(err)
}
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'C'}}
s3, err := store.saveCheckpointState(ctx, s, cp3)
if err != nil {
t.Fatal(err)
}
if s3.Slot != s.Slot {
t.Errorf("Wanted state slot: %d, got: %d", s.Slot, s3.Slot)
}
}
func TestStore_AggregateAttestation(t *testing.T) {
_, _, privKeys := testutil.SetupInitialDeposits(t, 100)
f := &pb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
Epoch: 0,
}
domain := helpers.Domain(f, 0, params.BeaconConfig().DomainAttestation)
sig := privKeys[0].Sign([]byte{}, domain)
store := &Store{attsQueue: make(map[[32]byte]*ethpb.Attestation)}
b1 := bitfield.NewBitlist(8)
b1.SetBitAt(0, true)
a := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: b1, Signature: sig.Marshal()}
if err := store.aggregateAttestation(context.Background(), a); err != nil {
t.Fatal(err)
}
r, _ := ssz.HashTreeRoot(a.Data)
if !bytes.Equal(store.attsQueue[r].AggregationBits, b1) {
t.Error("Received incorrect aggregation bitfield")
}
b2 := bitfield.NewBitlist(8)
b2.SetBitAt(1, true)
a = &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: b2, Signature: sig.Marshal()}
if err := store.aggregateAttestation(context.Background(), a); err != nil {
t.Fatal(err)
}
if !bytes.Equal(store.attsQueue[r].AggregationBits, []byte{3, 1}) {
t.Error("Received incorrect aggregation bitfield")
}
b3 := bitfield.NewBitlist(8)
b3.SetBitAt(7, true)
a = &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: b3, Signature: sig.Marshal()}
if err := store.aggregateAttestation(context.Background(), a); err != nil {
t.Fatal(err)
}
if !bytes.Equal(store.attsQueue[r].AggregationBits, []byte{131, 1}) {
t.Error("Received incorrect aggregation bitfield")
}
}
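For reference, the expected byte slices above follow from go-bitfield's length-delimited encoding: a Bitlist of length 8 occupies two bytes, a data byte plus a trailing marker byte whose single set bit records the list length. Setting bits 0 and 1 yields 0b00000011 = 3, hence []byte{3, 1}; additionally setting bit 7 yields 0b10000011 = 131, hence []byte{131, 1}.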
func TestStore_ReturnAggregatedAttestation(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
a1 := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0x02}}
err := store.db.SaveAttestation(ctx, a1)
if err != nil {
t.Fatal(err)
}
a2 := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0x03}}
saved, err := store.aggregatedAttestations(ctx, a2)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a2}, saved) {
t.Error("did not retrieve saved attestation")
}
}


@@ -0,0 +1,401 @@
package forkchoice
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// OnBlock is called when a gossip block is received. It runs regular state transition on the block and
// updates the fork choice store.
//
// Spec pseudocode definition:
// def on_block(store: Store, block: BeaconBlock) -> None:
// # Make a copy of the state to avoid mutability issues
// assert block.parent_root in store.block_states
// pre_state = store.block_states[block.parent_root].copy()
// # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
// assert store.time >= pre_state.genesis_time + block.slot * SECONDS_PER_SLOT
// # Add new block to the store
// store.blocks[signing_root(block)] = block
// # Check block is a descendant of the finalized block
// assert (
// get_ancestor(store, signing_root(block), store.blocks[store.finalized_checkpoint.root].slot) ==
// store.finalized_checkpoint.root
// )
// # Check that block is later than the finalized epoch slot
// assert block.slot > compute_start_slot_of_epoch(store.finalized_checkpoint.epoch)
// # Check the block is valid and compute the post-state
// state = state_transition(pre_state, block)
// # Add new state for this block to the store
// store.block_states[signing_root(block)] = state
//
// # Update justified checkpoint
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// store.justified_checkpoint = state.current_justified_checkpoint
//
// # Update finalized checkpoint
// if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
// store.finalized_checkpoint = state.finalized_checkpoint
func (s *Store) OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.onBlock")
defer span.End()
// Retrieve incoming block's pre state.
preState, err := s.getBlockPreState(ctx, b)
if err != nil {
return err
}
preStateValidatorCount := len(preState.Validators)
root, err := ssz.SigningRoot(b)
if err != nil {
return errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
log.WithFields(logrus.Fields{
"slot": b.Slot,
"root": fmt.Sprintf("0x%s...", hex.EncodeToString(root[:])[:8]),
}).Info("Executing state transition on block")
postState, err := state.ExecuteStateTransition(ctx, preState, b)
if err != nil {
return errors.Wrap(err, "could not execute state transition")
}
if err := s.updateBlockAttestationsVotes(ctx, b.Body.Attestations); err != nil {
return errors.Wrap(err, "could not update votes for attestations in block")
}
if err := s.db.SaveBlock(ctx, b); err != nil {
return errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
if err := s.db.SaveState(ctx, postState, root); err != nil {
return errors.Wrap(err, "could not save state")
}
// Update justified checkpoint.
if postState.CurrentJustifiedCheckpoint.Epoch > s.JustifiedCheckpt().Epoch {
s.justifiedCheckpt = postState.CurrentJustifiedCheckpoint
if err := s.db.SaveJustifiedCheckpoint(ctx, postState.CurrentJustifiedCheckpoint); err != nil {
return errors.Wrap(err, "could not save justified checkpoint")
}
}
// Update finalized checkpoint.
// Prune the block cache and helper caches on every new finalized epoch.
if postState.FinalizedCheckpoint.Epoch > s.finalizedCheckpt.Epoch {
s.clearSeenAtts()
helpers.ClearAllCaches()
if err := s.db.SaveFinalizedCheckpoint(ctx, postState.FinalizedCheckpoint); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
startSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch + 1)
endSlot := helpers.StartSlot(postState.FinalizedCheckpoint.Epoch+1) - 1 // Inclusive
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot+params.BeaconConfig().SlotsPerEpoch)
}
s.finalizedCheckpt = postState.FinalizedCheckpoint
}
// Update validator indices in database as needed.
if err := s.saveNewValidators(ctx, preStateValidatorCount, postState); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
// Save the unseen attestations from block to db.
if err := s.saveNewBlockAttestations(ctx, b.Body.Attestations); err != nil {
return errors.Wrap(err, "could not save attestations")
}
// Epoch boundary bookkeeping such as logging epoch summaries.
if helpers.IsEpochStart(postState.Slot) {
logEpochData(postState)
reportEpochMetrics(postState)
// Update committee shuffled indices at the end of every epoch
if featureconfig.Get().EnableNewCache {
if err := helpers.UpdateCommitteeCache(postState); err != nil {
return err
}
}
}
return nil
}
// OnBlockNoVerifyStateTransition is called when an initial sync block is received.
// It runs the state transition on the block without any BLS verification. The skipped BLS verification
// includes the proposer signature, randao, and the attestations' aggregated signatures.
func (s *Store) OnBlockNoVerifyStateTransition(ctx context.Context, b *ethpb.BeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.onBlock")
defer span.End()
// Retrieve incoming block's pre state.
preState, err := s.getBlockPreState(ctx, b)
if err != nil {
return err
}
preStateValidatorCount := len(preState.Validators)
log.WithField("slot", b.Slot).Debug("Executing state transition on block")
postState, err := state.ExecuteStateTransitionNoVerify(ctx, preState, b)
if err != nil {
return errors.Wrap(err, "could not execute state transition")
}
if err := s.db.SaveBlock(ctx, b); err != nil {
return errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
root, err := ssz.SigningRoot(b)
if err != nil {
return errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
if err := s.db.SaveState(ctx, postState, root); err != nil {
return errors.Wrap(err, "could not save state")
}
// Update justified checkpoint.
if postState.CurrentJustifiedCheckpoint.Epoch > s.JustifiedCheckpt().Epoch {
s.justifiedCheckpt = postState.CurrentJustifiedCheckpoint
if err := s.db.SaveJustifiedCheckpoint(ctx, postState.CurrentJustifiedCheckpoint); err != nil {
return errors.Wrap(err, "could not save justified checkpoint")
}
}
// Update finalized checkpoint.
// Prune the block cache and helper caches on every new finalized epoch.
if postState.FinalizedCheckpoint.Epoch > s.finalizedCheckpt.Epoch {
s.clearSeenAtts()
helpers.ClearAllCaches()
startSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch + 1)
endSlot := helpers.StartSlot(postState.FinalizedCheckpoint.Epoch+1) - 1 // Inclusive
if err := s.rmStatesOlderThanLastFinalized(ctx, startSlot, endSlot); err != nil {
return errors.Wrapf(err, "could not delete states prior to finalized check point, range: %d, %d",
startSlot, endSlot+params.BeaconConfig().SlotsPerEpoch)
}
s.finalizedCheckpt = postState.FinalizedCheckpoint
if err := s.db.SaveFinalizedCheckpoint(ctx, postState.FinalizedCheckpoint); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
}
// Update validator indices in database as needed.
if err := s.saveNewValidators(ctx, preStateValidatorCount, postState); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
// Save the unseen attestations from block to db.
if err := s.saveNewBlockAttestations(ctx, b.Body.Attestations); err != nil {
return errors.Wrap(err, "could not save attestations")
}
// Epoch boundary bookkeeping such as logging epoch summaries.
if helpers.IsEpochStart(postState.Slot) {
reportEpochMetrics(postState)
}
return nil
}
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block
// to retrieve the state in DB. It verifies the pre state's validity and the incoming block
// is in the correct time window.
func (s *Store) getBlockPreState(ctx context.Context, b *ethpb.BeaconBlock) (*pb.BeaconState, error) {
// Verify incoming block has a valid pre state.
preState, err := s.verifyBlkPreState(ctx, b)
if err != nil {
return nil, err
}
// Verify block slot time is not from the future.
if err := helpers.VerifySlotTime(preState.GenesisTime, b.Slot); err != nil {
return nil, err
}
// Verify block is a descendant of a finalized block.
if err := s.verifyBlkDescendant(ctx, bytesutil.ToBytes32(b.ParentRoot), b.Slot); err != nil {
return nil, err
}
// Verify block is later than the finalized epoch slot.
if err := s.verifyBlkFinalizedSlot(b); err != nil {
return nil, err
}
return preState, nil
}
// updateBlockAttestationsVotes checks the attestations in a block and filters out the seen ones;
// the unseen ones are passed to updateBlockAttestationVote to update fork choice votes.
func (s *Store) updateBlockAttestationsVotes(ctx context.Context, atts []*ethpb.Attestation) error {
s.seenAttsLock.Lock()
defer s.seenAttsLock.Unlock()
for _, att := range atts {
// If we have not seen the attestation yet
r, err := hashutil.HashProto(att)
if err != nil {
return err
}
if s.seenAtts[r] {
continue
}
if err := s.updateBlockAttestationVote(ctx, att); err != nil {
log.WithError(err).Warn("Attestation failed to update vote")
}
s.seenAtts[r] = true
}
return nil
}
// updateBlockAttestationVote checks the attestation to update validators' latest votes.
func (s *Store) updateBlockAttestationVote(ctx context.Context, att *ethpb.Attestation) error {
tgt := att.Data.Target
baseState, err := s.db.State(ctx, bytesutil.ToBytes32(tgt.Root))
if err != nil {
return errors.Wrap(err, "could not get state for attestation tgt root")
}
indexedAtt, err := blocks.ConvertToIndexed(ctx, baseState, att)
if err != nil {
return errors.Wrap(err, "could not convert attestation to indexed attestation")
}
for _, i := range append(indexedAtt.CustodyBit_0Indices, indexedAtt.CustodyBit_1Indices...) {
vote, err := s.db.ValidatorLatestVote(ctx, i)
if err != nil {
return errors.Wrapf(err, "could not get latest vote for validator %d", i)
}
if vote == nil || tgt.Epoch > vote.Epoch {
if err := s.db.SaveValidatorLatestVote(ctx, i, &pb.ValidatorLatestVote{
Epoch: tgt.Epoch,
Root: tgt.Root,
}); err != nil {
return errors.Wrapf(err, "could not save latest vote for validator %d", i)
}
}
}
return nil
}
// verifyBlkPreState validates input block has a valid pre-state.
func (s *Store) verifyBlkPreState(ctx context.Context, b *ethpb.BeaconBlock) (*pb.BeaconState, error) {
preState, err := s.db.State(ctx, bytesutil.ToBytes32(b.ParentRoot))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot)
}
if preState == nil {
return nil, fmt.Errorf("pre state of slot %d does not exist", b.Slot)
}
return preState, nil
}
// verifyBlkDescendant validates input block root is a descendant of the
// current finalized block root.
func (s *Store) verifyBlkDescendant(ctx context.Context, root [32]byte, slot uint64) error {
finalizedBlk, err := s.db.Block(ctx, bytesutil.ToBytes32(s.finalizedCheckpt.Root))
if err != nil {
return errors.Wrap(err, "could not get finalized block")
}
if finalizedBlk == nil {
return errors.New("finalized block does not exist in db")
}
bFinalizedRoot, err := s.ancestor(ctx, root[:], finalizedBlk.Slot)
if err != nil {
return errors.Wrap(err, "could not get finalized block root")
}
if !bytes.Equal(bFinalizedRoot, s.finalizedCheckpt.Root) {
return fmt.Errorf("block from slot %d is not a descendent of the current finalized block", slot)
}
return nil
}
// verifyBlkFinalizedSlot validates that the input block's slot is strictly later than the
// current finalized slot.
func (s *Store) verifyBlkFinalizedSlot(b *ethpb.BeaconBlock) error {
finalizedSlot := helpers.StartSlot(s.finalizedCheckpt.Epoch)
if finalizedSlot >= b.Slot {
return fmt.Errorf("block is equal or earlier than finalized block, slot %d < slot %d", b.Slot, finalizedSlot)
}
return nil
}
// saveNewValidators saves newly added validator indices from the state to the db. Does nothing if the
// validator count has not changed.
func (s *Store) saveNewValidators(ctx context.Context, preStateValidatorCount int, postState *pb.BeaconState) error {
postStateValidatorCount := len(postState.Validators)
if preStateValidatorCount != postStateValidatorCount {
for i := preStateValidatorCount; i < postStateValidatorCount; i++ {
pubKey := postState.Validators[i].PublicKey
if err := s.db.SaveValidatorIndex(ctx, bytesutil.ToBytes48(pubKey), uint64(i)); err != nil {
return errors.Wrapf(err, "could not save activated validator: %d", i)
}
log.WithFields(logrus.Fields{
"index": i,
"pubKey": hex.EncodeToString(bytesutil.Trunc(pubKey)),
"totalValidatorCount": i + 1,
}).Info("New validator index saved in DB")
}
}
return nil
}
// saveNewBlockAttestations saves the new attestations in block to DB.
func (s *Store) saveNewBlockAttestations(ctx context.Context, atts []*ethpb.Attestation) error {
attestations := make([]*ethpb.Attestation, 0, len(atts))
for _, att := range atts {
aggregated, err := s.aggregatedAttestations(ctx, att)
if err != nil {
continue
}
attestations = append(attestations, aggregated...)
}
if err := s.db.SaveAttestations(ctx, attestations); err != nil {
return err
}
return nil
}
// clearSeenAtts clears the seen attestations map; it is called upon each new finalization.
func (s *Store) clearSeenAtts() {
s.seenAttsLock.Lock()
defer s.seenAttsLock.Unlock()
s.seenAtts = make(map[[32]byte]bool)
}
// rmStatesOlderThanLastFinalized deletes the states in the db within the given slot range, up to the last finalized checkpoint.
func (s *Store) rmStatesOlderThanLastFinalized(ctx context.Context, startSlot uint64, endSlot uint64) error {
ctx, span := trace.StartSpan(ctx, "forkchoice.rmStatesBySlots")
defer span.End()
// Do not remove genesis state or finalized state at epoch boundary.
if startSlot%params.BeaconConfig().SlotsPerEpoch == 0 {
startSlot++
}
filter := filters.NewFilter().SetStartSlot(startSlot).SetEndSlot(endSlot)
roots, err := s.db.BlockRoots(ctx, filter)
if err != nil {
return err
}
if err := s.db.DeleteStates(ctx, roots); err != nil {
return err
}
return nil
}
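To make the pruning window concrete, assuming the minimal config's SlotsPerEpoch of 8: when finality advances from epoch 1 to epoch 5, the callers above compute startSlot = StartSlot(1+1) = 16 and endSlot = StartSlot(5+1) - 1 = 47; since 16 falls on an epoch boundary, the guard bumps startSlot to 17 so the finalized boundary state survives the deletion.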


@@ -0,0 +1,345 @@
package forkchoice
import (
"context"
"reflect"
"strings"
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
func TestStore_OnBlock(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
if err != nil {
t.Fatal(err)
}
randomParentRoot := []byte{'a'}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, bytesutil.ToBytes32(randomParentRoot)); err != nil {
t.Fatal(err)
}
randomParentRoot2 := roots[1]
if err := store.db.SaveState(ctx, &pb.BeaconState{}, bytesutil.ToBytes32(randomParentRoot2)); err != nil {
t.Fatal(err)
}
validGenesisRoot := []byte{'g'}
if err := store.db.SaveState(ctx, &pb.BeaconState{}, bytesutil.ToBytes32(validGenesisRoot)); err != nil {
t.Fatal(err)
}
tests := []struct {
name string
blk *ethpb.BeaconBlock
s *pb.BeaconState
time uint64
wantErrString string
}{
{
name: "parent block root does not have a state",
blk: &ethpb.BeaconBlock{},
s: &pb.BeaconState{},
wantErrString: "pre state of slot 0 does not exist",
},
{
name: "block is from the feature",
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot, Slot: params.BeaconConfig().FarFutureEpoch},
s: &pb.BeaconState{},
wantErrString: "could not process slot from the future",
},
{
name: "could not get finalized block",
blk: &ethpb.BeaconBlock{ParentRoot: randomParentRoot},
s: &pb.BeaconState{},
wantErrString: "block from slot 0 is not a descendent of the current finalized block",
},
{
name: "same slot as finalized block",
blk: &ethpb.BeaconBlock{Slot: 0, ParentRoot: randomParentRoot2},
s: &pb.BeaconState{},
wantErrString: "block is equal or earlier than finalized block, slot 0 < slot 0",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := store.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
t.Fatal(err)
}
store.finalizedCheckpt.Root = roots[0]
err := store.OnBlock(ctx, tt.blk)
if !strings.Contains(err.Error(), tt.wantErrString) {
t.Errorf("Store.OnBlock() error = %v, wantErr = %v", err, tt.wantErrString)
}
})
}
}
func TestStore_SaveNewValidators(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
preCount := 2 // validators 0 and 1
s := &pb.BeaconState{Validators: []*ethpb.Validator{
{PublicKey: []byte{0}}, {PublicKey: []byte{1}},
{PublicKey: []byte{2}}, {PublicKey: []byte{3}},
}}
if err := store.saveNewValidators(ctx, preCount, s); err != nil {
t.Fatal(err)
}
if !db.HasValidatorIndex(ctx, bytesutil.ToBytes48([]byte{2})) {
t.Error("Wanted validator saved in db")
}
if !db.HasValidatorIndex(ctx, bytesutil.ToBytes48([]byte{3})) {
t.Error("Wanted validator saved in db")
}
if db.HasValidatorIndex(ctx, bytesutil.ToBytes48([]byte{1})) {
t.Error("validator not suppose to be saved in db")
}
}
func TestStore_UpdateBlockAttestationVote(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
deposits, _, _ := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, uint64(0), &ethpb.Eth1Data{})
if err != nil {
t.Fatal(err)
}
store := NewForkChoiceService(ctx, db)
r := [32]byte{'A'}
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: r[:]},
Crosslink: &ethpb.Crosslink{
Shard: 0,
StartEpoch: 0,
},
},
AggregationBits: []byte{255},
CustodyBits: []byte{255},
}
if err := store.db.SaveState(ctx, beaconState, r); err != nil {
t.Fatal(err)
}
indices, err := blocks.ConvertToIndexed(ctx, beaconState, att)
if err != nil {
t.Fatal(err)
}
var attestedIndices []uint64
for _, k := range append(indices.CustodyBit_0Indices, indices.CustodyBit_1Indices...) {
attestedIndices = append(attestedIndices, k)
}
if err := store.updateBlockAttestationVote(ctx, att); err != nil {
t.Fatal(err)
}
for _, i := range attestedIndices {
v, err := store.db.ValidatorLatestVote(ctx, i)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(v.Root, r[:]) {
t.Error("Attested roots don't match")
}
}
}
func TestStore_UpdateBlockAttestationsVote(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
deposits, _, _ := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, uint64(0), &ethpb.Eth1Data{})
if err != nil {
t.Fatal(err)
}
store := NewForkChoiceService(ctx, db)
r := [32]byte{'A'}
atts := make([]*ethpb.Attestation, 5)
hashes := make([][32]byte, 5)
for i := 0; i < len(atts); i++ {
atts[i] = &ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: r[:]},
Crosslink: &ethpb.Crosslink{
Shard: uint64(i),
StartEpoch: 0,
},
},
AggregationBits: []byte{255},
CustodyBits: []byte{255},
}
h, _ := hashutil.HashProto(atts[i])
hashes[i] = h
}
if err := store.db.SaveState(ctx, beaconState, r); err != nil {
t.Fatal(err)
}
if err := store.updateBlockAttestationsVotes(ctx, atts); err != nil {
t.Fatal(err)
}
for _, h := range hashes {
if !store.seenAtts[h] {
t.Error("Seen attestation did not get recorded")
}
}
}
func TestStore_SavesNewBlockAttestations(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
a1 := &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b101}, CustodyBits: bitfield.NewBitlist(2)}
a2 := &ethpb.Attestation{Data: &ethpb.AttestationData{BeaconBlockRoot: []byte{'A'}}, AggregationBits: bitfield.Bitlist{0b110}, CustodyBits: bitfield.NewBitlist(2)}
r1, _ := ssz.HashTreeRoot(a1.Data)
r2, _ := ssz.HashTreeRoot(a2.Data)
if err := store.saveNewBlockAttestations(ctx, []*ethpb.Attestation{a1, a2}); err != nil {
t.Fatal(err)
}
saved, err := store.db.AttestationsByDataRoot(ctx, r1)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a1}, saved) {
t.Error("did not retrieve saved attestation")
}
saved, err = store.db.AttestationsByDataRoot(ctx, r2)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a2}, saved) {
t.Error("did not retrieve saved attestation")
}
a1 = &ethpb.Attestation{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b111}, CustodyBits: bitfield.NewBitlist(2)}
a2 = &ethpb.Attestation{Data: &ethpb.AttestationData{BeaconBlockRoot: []byte{'A'}}, AggregationBits: bitfield.Bitlist{0b111}, CustodyBits: bitfield.NewBitlist(2)}
if err := store.saveNewBlockAttestations(ctx, []*ethpb.Attestation{a1, a2}); err != nil {
t.Fatal(err)
}
saved, err = store.db.AttestationsByDataRoot(ctx, r1)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a1}, saved) {
t.Error("did not retrieve saved attestation")
}
saved, err = store.db.AttestationsByDataRoot(ctx, r2)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual([]*ethpb.Attestation{a2}, saved) {
t.Error("did not retrieve saved attestation")
}
}
func TestRemoveStateSinceLastFinalized(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
store := NewForkChoiceService(ctx, db)
// Save 100 blocks in the DB, each with a corresponding state.
numBlocks := 100
totalBlocks := make([]*ethpb.BeaconBlock, numBlocks)
blockRoots := make([][32]byte, 0)
for i := 0; i < len(totalBlocks); i++ {
totalBlocks[i] = &ethpb.BeaconBlock{
Slot: uint64(i),
}
r, err := ssz.SigningRoot(totalBlocks[i])
if err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, &pb.BeaconState{Slot: uint64(i)}, r); err != nil {
t.Fatal(err)
}
if err := store.db.SaveBlock(ctx, totalBlocks[i]); err != nil {
t.Fatal(err)
}
blockRoots = append(blockRoots, r)
}
// New finalized epoch: 1
finalizedEpoch := uint64(1)
endSlot := helpers.StartSlot(finalizedEpoch+1) - 1 // Inclusive
if err := store.rmStatesOlderThanLastFinalized(ctx, 0, endSlot); err != nil {
t.Fatal(err)
}
for _, r := range blockRoots {
s, err := store.db.State(ctx, r)
if err != nil {
t.Fatal(err)
}
// Also verifies the genesis state didn't get deleted.
if s != nil && s.Slot != 0 && s.Slot < endSlot {
t.Errorf("State with slot %d should not be in DB", s.Slot)
}
}
// New finalized epoch: 5
newFinalizedEpoch := uint64(5)
endSlot = helpers.StartSlot(newFinalizedEpoch+1) - 1 // Inclusive
if err := store.rmStatesOlderThanLastFinalized(ctx, helpers.StartSlot(finalizedEpoch+1), endSlot); err != nil {
t.Fatal(err)
}
for _, r := range blockRoots {
s, err := store.db.State(ctx, r)
if err != nil {
t.Fatal(err)
}
// Also verifies boundary states didn't get deleted.
if s != nil {
isBoundary := s.Slot%params.BeaconConfig().SlotsPerEpoch == 0
if !isBoundary && s.Slot < endSlot {
t.Errorf("State with slot %d should not be in DB", s.Slot)
}
}
}
}


@@ -0,0 +1,255 @@
package forkchoice
import (
"bytes"
"context"
"sync"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"go.opencensus.io/trace"
)
// ForkChoicer defines a common interface for methods useful for directly applying fork choice
// to beacon blocks to compute head.
type ForkChoicer interface {
Head(ctx context.Context) ([]byte, error)
OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error
OnBlockNoVerifyStateTransition(ctx context.Context, b *ethpb.BeaconBlock) error
OnAttestation(ctx context.Context, a *ethpb.Attestation) (uint64, error)
GenesisStore(ctx context.Context, justifiedCheckpoint *ethpb.Checkpoint, finalizedCheckpoint *ethpb.Checkpoint) error
FinalizedCheckpt() *ethpb.Checkpoint
}
// Store represents a service struct that handles the forkchoice
// logic of managing the full PoS beacon chain.
type Store struct {
ctx context.Context
cancel context.CancelFunc
db db.Database
justifiedCheckpt *ethpb.Checkpoint
finalizedCheckpt *ethpb.Checkpoint
checkpointState *cache.CheckpointStateCache
checkpointStateLock sync.Mutex
attsQueue map[[32]byte]*ethpb.Attestation
attsQueueLock sync.Mutex
seenAtts map[[32]byte]bool
seenAttsLock sync.Mutex
}
// NewForkChoiceService instantiates a new service instance that will
// be registered into a running beacon node.
func NewForkChoiceService(ctx context.Context, db db.Database) *Store {
ctx, cancel := context.WithCancel(ctx)
return &Store{
ctx: ctx,
cancel: cancel,
db: db,
checkpointState: cache.NewCheckpointStateCache(),
attsQueue: make(map[[32]byte]*ethpb.Attestation),
seenAtts: make(map[[32]byte]bool),
}
}
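// A minimal usage sketch (hypothetical database handle and checkpoints; the
// real wiring happens in the blockchain service shown later in this diff):
//
//	store := NewForkChoiceService(ctx, beaconDB)
//	if err := store.GenesisStore(ctx, justifiedCpt, finalizedCpt); err != nil {
//		log.Fatal(err)
//	}
//	head, err := store.Head(ctx)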
// GenesisStore initializes the store struct before beacon chain
// starts to advance.
//
// Spec pseudocode definition:
// def get_genesis_store(genesis_state: BeaconState) -> Store:
// genesis_block = BeaconBlock(state_root=hash_tree_root(genesis_state))
// root = signing_root(genesis_block)
// justified_checkpoint = Checkpoint(epoch=GENESIS_EPOCH, root=root)
// finalized_checkpoint = Checkpoint(epoch=GENESIS_EPOCH, root=root)
// return Store(
// time=genesis_state.genesis_time,
// justified_checkpoint=justified_checkpoint,
// finalized_checkpoint=finalized_checkpoint,
// blocks={root: genesis_block},
// block_states={root: genesis_state.copy()},
// checkpoint_states={justified_checkpoint: genesis_state.copy()},
// )
func (s *Store) GenesisStore(
ctx context.Context,
justifiedCheckpoint *ethpb.Checkpoint,
finalizedCheckpoint *ethpb.Checkpoint) error {
s.justifiedCheckpt = proto.Clone(justifiedCheckpoint).(*ethpb.Checkpoint)
s.finalizedCheckpt = proto.Clone(finalizedCheckpoint).(*ethpb.Checkpoint)
justifiedState, err := s.db.State(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if err != nil {
return errors.Wrap(err, "could not retrieve last justified state")
}
if err := s.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: s.justifiedCheckpt,
State: justifiedState,
}); err != nil {
return errors.Wrap(err, "could not save genesis state in check point cache")
}
return nil
}
// ancestor returns the root of the ancestor block at the requested slot, walking back from the input block root.
//
// Spec pseudocode definition:
// def get_ancestor(store: Store, root: Hash, slot: Slot) -> Hash:
// block = store.blocks[root]
// if block.slot > slot:
// return get_ancestor(store, block.parent_root, slot)
// elif block.slot == slot:
// return root
// else:
// return Bytes32() # root is older than queried slot: no results.
func (s *Store) ancestor(ctx context.Context, root []byte, slot uint64) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.ancestor")
defer span.End()
b, err := s.db.Block(ctx, bytesutil.ToBytes32(root))
if err != nil {
return nil, errors.Wrap(err, "could not get ancestor block")
}
// If we don't have the ancestor in the DB, simply return nil so the rest of the fork choice
// operation can proceed. This is not an error condition.
if b == nil || b.Slot < slot {
return nil, nil
}
if b.Slot == slot {
return root, nil
}
return s.ancestor(ctx, b.ParentRoot, slot)
}
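// As a worked example (hypothetical chain mirroring the blockTree1 test
// fixture later in this diff): given B0(slot 0) <- B3(slot 3) <- B4(slot 4),
// ancestor(ctx, r4, 3) recurses from B4 to its parent B3, whose slot matches,
// and returns r3. ancestor(ctx, r4, 2) recurses past B3 down to B0, whose
// slot 0 is below the queried slot 2, so it returns nil, nil.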
// latestAttestingBalance returns the total attesting balance in favor of the block at the input block root.
//
// Spec pseudocode definition:
// def get_latest_attesting_balance(store: Store, root: Hash) -> Gwei:
// state = store.checkpoint_states[store.justified_checkpoint]
// active_indices = get_active_validator_indices(state, get_current_epoch(state))
// return Gwei(sum(
// state.validators[i].effective_balance for i in active_indices
// if (i in store.latest_messages
// and get_ancestor(store, store.latest_messages[i].root, store.blocks[root].slot) == root)
// ))
func (s *Store) latestAttestingBalance(ctx context.Context, root []byte) (uint64, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.latestAttestingBalance")
defer span.End()
lastJustifiedState, err := s.checkpointState.StateByCheckpoint(s.JustifiedCheckpt())
if err != nil {
return 0, errors.Wrap(err, "could not retrieve cached state via last justified check point")
}
if lastJustifiedState == nil {
// errors.Wrapf(nil, ...) returns nil, which would silently swallow this failure,
// so construct a fresh error instead.
return 0, errors.Errorf("could not get justified state at epoch %d", s.JustifiedCheckpt().Epoch)
}
lastJustifiedEpoch := helpers.CurrentEpoch(lastJustifiedState)
activeIndices, err := helpers.ActiveValidatorIndices(lastJustifiedState, lastJustifiedEpoch)
if err != nil {
return 0, errors.Wrap(err, "could not get active indices for last justified checkpoint")
}
wantedBlk, err := s.db.Block(ctx, bytesutil.ToBytes32(root))
if err != nil {
return 0, errors.Wrap(err, "could not get target block")
}
balances := uint64(0)
for _, i := range activeIndices {
vote, err := s.db.ValidatorLatestVote(ctx, i)
if err != nil {
return 0, errors.Wrapf(err, "could not get validator %d's latest vote", i)
}
if vote == nil {
continue
}
wantedRoot, err := s.ancestor(ctx, vote.Root, wantedBlk.Slot)
if err != nil {
return 0, errors.Wrapf(err, "could not get ancestor root for slot %d", wantedBlk.Slot)
}
if bytes.Equal(wantedRoot, root) {
balances += lastJustifiedState.Validators[i].EffectiveBalance
}
}
return balances, nil
}
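// As a worked example (hypothetical numbers matching the test fixture later
// in this diff): with 100 active validators at 1e9 Gwei effective balance
// each, and 34 of them whose latest vote has the target block as its
// ancestor at that block's slot, this returns 34 * 1e9 Gwei.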
// Head returns the head of the beacon chain.
//
// Spec pseudocode definition:
// def get_head(store: Store) -> Hash:
// # Execute the LMD-GHOST fork choice
// head = store.justified_checkpoint.root
// justified_slot = compute_start_slot_of_epoch(store.justified_checkpoint.epoch)
// while True:
// children = [
// root for root in store.blocks.keys()
// if store.blocks[root].parent_root == head and store.blocks[root].slot > justified_slot
// ]
// if len(children) == 0:
// return head
// # Sort by latest attesting balance with ties broken lexicographically
// head = max(children, key=lambda root: (get_latest_attesting_balance(store, root), root))
func (s *Store) Head(ctx context.Context) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "forkchoice.head")
defer span.End()
head := s.JustifiedCheckpt().Root
for {
startSlot := s.JustifiedCheckpt().Epoch * params.BeaconConfig().SlotsPerEpoch
filter := filters.NewFilter().SetParentRoot(head).SetStartSlot(startSlot)
children, err := s.db.BlockRoots(ctx, filter)
if err != nil {
return nil, errors.Wrap(err, "could not retrieve children info")
}
if len(children) == 0 {
return head, nil
}
// If a block has only one child, we don't need to look anything up to
// know that this child is the best child.
head = children[0][:]
if len(children) > 1 {
highest, err := s.latestAttestingBalance(ctx, head)
if err != nil {
return nil, errors.Wrap(err, "could not get latest balance")
}
for _, child := range children[1:] {
balance, err := s.latestAttestingBalance(ctx, child[:])
if err != nil {
return nil, errors.Wrap(err, "could not get latest balance")
}
// When there's a tie, it's broken lexicographically in favor of the higher root.
if balance > highest ||
balance == highest && bytes.Compare(child[:], head) > 0 {
highest = balance
head = child[:]
}
}
}
}
}
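// As a worked example of the walk (hypothetical votes on the blockTree1 test
// fixture later in this diff): starting at justified root B0 with children
// {B1: 33 votes, B3: 67 votes}, the loop descends into B3, follows the
// single-child link to B4, picks the heavier child B6 (34 votes) over B5
// (33 votes), and returns the childless B8 as head.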
// JustifiedCheckpt returns the latest justified checkpoint from the fork choice store.
func (s *Store) JustifiedCheckpt() *ethpb.Checkpoint {
return proto.Clone(s.justifiedCheckpt).(*ethpb.Checkpoint)
}
// FinalizedCheckpt returns the latest finalized checkpoint from the fork choice store.
func (s *Store) FinalizedCheckpt() *ethpb.Checkpoint {
return proto.Clone(s.finalizedCheckpt).(*ethpb.Checkpoint)
}


@@ -0,0 +1,346 @@
package forkchoice
import (
"bytes"
"context"
"reflect"
"testing"
"time"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
func TestStore_GenesisStoreOk(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
genesisTime := time.Unix(9999, 0)
genesisState := &pb.BeaconState{GenesisTime: uint64(genesisTime.Unix())}
genesisStateRoot, err := ssz.HashTreeRoot(genesisState)
if err != nil {
t.Fatal(err)
}
genesisBlk := blocks.NewGenesisBlock(genesisStateRoot[:])
genesisBlkRoot, err := ssz.SigningRoot(genesisBlk)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, genesisState, genesisBlkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: genesisBlkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(store.justifiedCheckpt, checkPoint) {
t.Error("Justified check point from genesis store did not match")
}
if !reflect.DeepEqual(store.finalizedCheckpt, checkPoint) {
t.Error("Finalized check point from genesis store did not match")
}
cachedState, err := store.checkpointState.StateByCheckpoint(checkPoint)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(cachedState, genesisState) {
t.Error("Incorrect genesis state cached")
}
}
func TestStore_AncestorOk(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
if err != nil {
t.Fatal(err)
}
type args struct {
root []byte
slot uint64
}
// /- B1
// B0 /- B5 - B7
// \- B3 - B4 - B6 - B8
tests := []struct {
args *args
want []byte
}{
{args: &args{roots[1], 0}, want: roots[0]},
{args: &args{roots[8], 0}, want: roots[0]},
{args: &args{roots[8], 4}, want: roots[4]},
{args: &args{roots[7], 4}, want: roots[4]},
{args: &args{roots[7], 0}, want: roots[0]},
}
for _, tt := range tests {
got, err := store.ancestor(ctx, tt.args.root, tt.args.slot)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("Store.ancestor(ctx, ) = %v, want %v", got, tt.want)
}
}
}
func TestStore_AncestorNotPartOfTheChain(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
if err != nil {
t.Fatal(err)
}
// /- B1
// B0 /- B5 - B7
// \- B3 - B4 - B6 - B8
root, err := store.ancestor(ctx, roots[8], 1)
if err != nil {
t.Fatal(err)
}
if root != nil {
t.Error("block at slot 1 is not part of the chain")
}
root, err = store.ancestor(ctx, roots[8], 2)
if err != nil {
t.Fatal(err)
}
if root != nil {
t.Error("block at slot 2 is not part of the chain")
}
}
func TestStore_LatestAttestingBalance(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
if err != nil {
t.Fatal(err)
}
validators := make([]*ethpb.Validator, 100)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
stateRoot, err := ssz.HashTreeRoot(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.SigningRoot(b)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, s, blkRoot); err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
// /- B1 (33 votes)
// B0 /- B5 - B7 (33 votes)
// \- B3 - B4 - B6 - B8 (34 votes)
for i := 0; i < len(validators); i++ {
switch {
case i < 33:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[1]}); err != nil {
t.Fatal(err)
}
case i > 66:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[7]}); err != nil {
t.Fatal(err)
}
default:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[8]}); err != nil {
t.Fatal(err)
}
}
}
tests := []struct {
root []byte
want uint64
}{
{root: roots[0], want: 100 * 1e9},
{root: roots[1], want: 33 * 1e9},
{root: roots[3], want: 67 * 1e9},
{root: roots[4], want: 67 * 1e9},
{root: roots[7], want: 33 * 1e9},
{root: roots[8], want: 34 * 1e9},
}
for _, tt := range tests {
got, err := store.latestAttestingBalance(ctx, tt.root)
if err != nil {
t.Fatal(err)
}
if got != tt.want {
t.Errorf("Store.latestAttestingBalance(ctx, ) = %v, want %v", got, tt.want)
}
}
}
func TestStore_ChildrenBlocksFromParentRoot(t *testing.T) {
helpers.ClearAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
if err != nil {
t.Fatal(err)
}
filter := filters.NewFilter().SetParentRoot(roots[0]).SetStartSlot(0)
children, err := store.db.BlockRoots(ctx, filter)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(children, [][32]byte{bytesutil.ToBytes32(roots[1]), bytesutil.ToBytes32(roots[3])}) {
t.Error("Did not receive correct children roots")
}
filter = filters.NewFilter().SetParentRoot(roots[0]).SetStartSlot(2)
children, err = store.db.BlockRoots(ctx, filter)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(children, [][32]byte{bytesutil.ToBytes32(roots[3])}) {
t.Error("Did not receive correct children roots")
}
}
func TestStore_GetHead(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
store := NewForkChoiceService(ctx, db)
roots, err := blockTree1(db)
if err != nil {
t.Fatal(err)
}
validators := make([]*ethpb.Validator, 100)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{ExitEpoch: 2, EffectiveBalance: 1e9}
}
s := &pb.BeaconState{Validators: validators}
stateRoot, err := ssz.HashTreeRoot(s)
if err != nil {
t.Fatal(err)
}
b := blocks.NewGenesisBlock(stateRoot[:])
blkRoot, err := ssz.SigningRoot(b)
if err != nil {
t.Fatal(err)
}
checkPoint := &ethpb.Checkpoint{Root: blkRoot[:]}
if err := store.GenesisStore(ctx, checkPoint, checkPoint); err != nil {
t.Fatal(err)
}
if err := store.db.SaveState(ctx, s, bytesutil.ToBytes32(roots[0])); err != nil {
t.Fatal(err)
}
store.justifiedCheckpt.Root = roots[0]
if err := store.checkpointState.AddCheckpointState(&cache.CheckpointState{
Checkpoint: store.justifiedCheckpt,
State: s,
}); err != nil {
t.Fatal(err)
}
// /- B1 (33 votes)
// B0 /- B5 - B7 (33 votes)
// \- B3 - B4 - B6 - B8 (34 votes)
for i := 0; i < len(validators); i++ {
switch {
case i < 33:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[1]}); err != nil {
t.Fatal(err)
}
case i > 66:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[7]}); err != nil {
t.Fatal(err)
}
default:
if err := store.db.SaveValidatorLatestVote(ctx, uint64(i), &pb.ValidatorLatestVote{Root: roots[8]}); err != nil {
t.Fatal(err)
}
}
}
// Default head is B8
head, err := store.Head(ctx)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(head, roots[8]) {
t.Error("Incorrect head")
}
// 1 validator switches vote to B7 to gain 34%, enough to switch head
if err := store.db.SaveValidatorLatestVote(ctx, 50, &pb.ValidatorLatestVote{Root: roots[7]}); err != nil {
t.Fatal(err)
}
head, err = store.Head(ctx)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(head, roots[7]) {
t.Error("Incorrect head")
}
// 18 validators switch their votes to B1 to reach 51%, enough to switch head
for i := 0; i < 18; i++ {
idx := 50 + uint64(i)
if err := store.db.SaveValidatorLatestVote(ctx, idx, &pb.ValidatorLatestVote{Root: roots[1]}); err != nil {
t.Fatal(err)
}
}
head, err = store.Head(ctx)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(head, roots[1]) {
t.Log(head)
t.Error("Incorrect head")
}
}


@@ -0,0 +1,144 @@
package forkchoice
import (
"context"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
// blockTree1 constructs the following tree:
// /- B1
// B0 /- B5 - B7
// \- B3 - B4 - B6 - B8
// (B2 is intentionally absent, so index 2 of the returned roots slice is nil)
func blockTree1(db db.Database) ([][]byte, error) {
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
r0, _ := ssz.SigningRoot(b0)
b1 := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r0[:]}
r1, _ := ssz.SigningRoot(b1)
b3 := &ethpb.BeaconBlock{Slot: 3, ParentRoot: r0[:]}
r3, _ := ssz.SigningRoot(b3)
b4 := &ethpb.BeaconBlock{Slot: 4, ParentRoot: r3[:]}
r4, _ := ssz.SigningRoot(b4)
b5 := &ethpb.BeaconBlock{Slot: 5, ParentRoot: r4[:]}
r5, _ := ssz.SigningRoot(b5)
b6 := &ethpb.BeaconBlock{Slot: 6, ParentRoot: r4[:]}
r6, _ := ssz.SigningRoot(b6)
b7 := &ethpb.BeaconBlock{Slot: 7, ParentRoot: r5[:]}
r7, _ := ssz.SigningRoot(b7)
b8 := &ethpb.BeaconBlock{Slot: 8, ParentRoot: r6[:]}
r8, _ := ssz.SigningRoot(b8)
for _, b := range []*ethpb.BeaconBlock{b0, b1, b3, b4, b5, b6, b7, b8} {
if err := db.SaveBlock(context.Background(), b); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, bytesutil.ToBytes32(b.ParentRoot)); err != nil {
return nil, err
}
}
return [][]byte{r0[:], r1[:], nil, r3[:], r4[:], r5[:], r6[:], r7[:], r8[:]}, nil
}
// blockTree2 constructs the following tree:
// Scenario graph: shorturl.at/loyP6
//
// digraph G {
// rankdir=LR;
// node [shape="none"];
//
// subgraph blocks {
// rankdir=LR;
// node [shape="box"];
// a->b;
// a->c;
// b->d;
// b->e;
// c->f;
// c->g;
// d->h
// d->i
// d->j
// d->k
// h->l
// h->m
// g->n
// g->o
// e->p
// }
// }
func blockTree2(db db.Database) ([][]byte, error) {
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
r0, _ := ssz.SigningRoot(b0)
b1 := &ethpb.BeaconBlock{Slot: 1, ParentRoot: r0[:]}
r1, _ := ssz.SigningRoot(b1)
b2 := &ethpb.BeaconBlock{Slot: 2, ParentRoot: r0[:]}
r2, _ := ssz.SigningRoot(b2)
b3 := &ethpb.BeaconBlock{Slot: 3, ParentRoot: r1[:]}
r3, _ := ssz.SigningRoot(b3)
b4 := &ethpb.BeaconBlock{Slot: 4, ParentRoot: r1[:]}
r4, _ := ssz.SigningRoot(b4)
b5 := &ethpb.BeaconBlock{Slot: 5, ParentRoot: r2[:]}
r5, _ := ssz.SigningRoot(b5)
b6 := &ethpb.BeaconBlock{Slot: 6, ParentRoot: r2[:]}
r6, _ := ssz.SigningRoot(b6)
b7 := &ethpb.BeaconBlock{Slot: 7, ParentRoot: r3[:]}
r7, _ := ssz.SigningRoot(b7)
b8 := &ethpb.BeaconBlock{Slot: 8, ParentRoot: r3[:]}
r8, _ := ssz.SigningRoot(b8)
b9 := &ethpb.BeaconBlock{Slot: 9, ParentRoot: r3[:]}
r9, _ := ssz.SigningRoot(b9)
b10 := &ethpb.BeaconBlock{Slot: 10, ParentRoot: r3[:]}
r10, _ := ssz.SigningRoot(b10)
b11 := &ethpb.BeaconBlock{Slot: 11, ParentRoot: r4[:]}
r11, _ := ssz.SigningRoot(b11)
b12 := &ethpb.BeaconBlock{Slot: 12, ParentRoot: r6[:]}
r12, _ := ssz.SigningRoot(b12)
b13 := &ethpb.BeaconBlock{Slot: 13, ParentRoot: r6[:]}
r13, _ := ssz.SigningRoot(b13)
b14 := &ethpb.BeaconBlock{Slot: 14, ParentRoot: r7[:]}
r14, _ := ssz.SigningRoot(b14)
b15 := &ethpb.BeaconBlock{Slot: 15, ParentRoot: r7[:]}
r15, _ := ssz.SigningRoot(b15)
for _, b := range []*ethpb.BeaconBlock{b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, b10, b11, b12, b13, b14, b15} {
if err := db.SaveBlock(context.Background(), b); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, bytesutil.ToBytes32(b.ParentRoot)); err != nil {
return nil, err
}
}
return [][]byte{r0[:], r1[:], r2[:], r3[:], r4[:], r5[:], r6[:], r7[:], r8[:], r9[:], r10[:], r11[:], r12[:], r13[:], r14[:], r15[:]}, nil
}
// blockTree3 constructs a tree that is 512 blocks in a row:
// B0 - B1 - B2 - B3 - .... - B511
func blockTree3(db db.Database) ([][]byte, error) {
blkCount := 512
roots := make([][]byte, 0, blkCount)
blks := make([]*ethpb.BeaconBlock, 0, blkCount)
b0 := &ethpb.BeaconBlock{Slot: 0, ParentRoot: []byte{'g'}}
r0, _ := ssz.SigningRoot(b0)
roots = append(roots, r0[:])
blks = append(blks, b0)
for i := 1; i < blkCount; i++ {
b := &ethpb.BeaconBlock{Slot: uint64(i), ParentRoot: roots[len(roots)-1]}
r, _ := ssz.SigningRoot(b)
roots = append(roots, r[:])
blks = append(blks, b)
}
for _, b := range blks {
if err := db.SaveBlock(context.Background(), b); err != nil {
return nil, err
}
if err := db.SaveState(context.Background(), &pb.BeaconState{}, bytesutil.ToBytes32(b.ParentRoot)); err != nil {
return nil, err
}
}
return roots, nil
}


@@ -0,0 +1,57 @@
package blockchain
import (
"bytes"
"encoding/hex"
"fmt"
"net/http"
"sort"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/sirupsen/logrus"
)
const latestSlotCount = 10
// HeadsHandler is a handler that serves the /heads page of the metrics endpoint.
func (s *Service) HeadsHandler(w http.ResponseWriter, _ *http.Request) {
// Render into the buffer first so that no partial output reaches the
// response if rendering fails midway; previously the buffer was allocated
// but never written to, and the header was set after the body.
buf := new(bytes.Buffer)
if _, err := fmt.Fprintf(buf, "\n %s\t%s\t", "Head slot", "Head root"); err != nil {
logrus.WithError(err).Error("Failed to render chain heads page")
return
}
if _, err := fmt.Fprintf(buf, "\n %s\t%s\t", "---------", "---------"); err != nil {
logrus.WithError(err).Error("Failed to render chain heads page")
return
}
slots := s.latestHeadSlots()
for _, slot := range slots {
r := hex.EncodeToString(bytesutil.Trunc(s.canonicalRoots[uint64(slot)]))
if _, err := fmt.Fprintf(buf, "\n %d\t\t%s\t", slot, r); err != nil {
logrus.WithError(err).Error("Failed to render chain heads page")
return
}
}
w.WriteHeader(http.StatusOK)
if _, err := w.Write(buf.Bytes()); err != nil {
log.WithError(err).Error("Failed to render chain heads page")
}
}
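// The rendered page is a small tab-separated table, for example (hypothetical
// slots and truncated roots):
//
//	 Head slot	Head root
//	 ---------	---------
//	 12		6b502c58a1b2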
// latestHeadSlots returns up to latestSlotCount of the most recent head slots, sorted in ascending order.
func (s *Service) latestHeadSlots() []int {
slots := make([]int, 0, len(s.canonicalRoots))
for k := range s.canonicalRoots {
slots = append(slots, int(k))
}
sort.Ints(slots)
if len(slots) > latestSlotCount {
return slots[len(slots)-latestSlotCount:]
}
return slots
}
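// For example (hypothetical contents): with canonicalRoots keyed by slots 0
// through 12, this returns [3 4 ... 12], keeping only the latest
// latestSlotCount (10) slots.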


@@ -0,0 +1,17 @@
package blockchain
import (
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "blockchain")
// logStateTransitionData logs state-transition-related data for every slot.
func logStateTransitionData(b *ethpb.BeaconBlock, r []byte) {
log.WithFields(logrus.Fields{
"slot": b.Slot,
"attestations": len(b.Body.Attestations),
"deposits": len(b.Body.Deposits),
}).Info("Finished applying state transition")
}


@@ -0,0 +1,56 @@
package blockchain
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
var (
beaconSlot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_slot",
Help: "Latest slot of the beacon chain state",
})
beaconHeadSlot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_head_slot",
Help: "Slot of the head block of the beacon chain",
})
beaconHeadRoot = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacon_head_root",
Help: "Root of the head block of the beacon chain, it returns the lowest 8 bytes interpreted as little endian",
})
competingAtts = promauto.NewCounter(prometheus.CounterOpts{
Name: "competing_attestations",
Help: "The # of attestations received and processed from a competing chain",
})
competingBlks = promauto.NewCounter(prometheus.CounterOpts{
Name: "competing_blocks",
Help: "The # of blocks received and processed from a competing chain",
})
processedBlkNoPubsub = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_no_pubsub_block_counter",
Help: "The # of blocks processed without pubsub, these usually are blocks from regular sync",
})
processedBlkNoPubsubForkchoice = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_no_pubsub_forkchoice_block_counter",
Help: "The # of blocks processed without pubsub and fork choice, these are blocks from initial sync",
})
processedBlk = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_block_counter",
Help: "The # of total blocks processed in the blockchain service, with fork choice and pubsub",
})
processedAttNoPubsub = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_no_pubsub_attestation_counter",
Help: "The # of attestations processed without pubsub, these usually are attestations from regular sync",
})
processedAtt = promauto.NewCounter(prometheus.CounterOpts{
Name: "processed_attestation_counter",
Help: "The # of attestations processed with pubsub and fork choice, these usually are attestations from RPC",
})
)
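// These counters surface on the beacon node's Prometheus metrics endpoint; a
// hypothetical PromQL query for the block-processing rate would be:
//
//	rate(processed_block_counter[5m])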
func (s *Service) reportSlotMetrics(currentSlot uint64) {
beaconSlot.Set(float64(currentSlot))
beaconHeadSlot.Set(float64(s.HeadSlot()))
beaconHeadRoot.Set(float64(bytesutil.ToLowInt64(s.HeadRoot())))
}


@@ -0,0 +1,112 @@
package blockchain
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// AttestationReceiver defines the methods of the chain service for receiving and processing new attestations.
type AttestationReceiver interface {
ReceiveAttestation(ctx context.Context, att *ethpb.Attestation) error
ReceiveAttestationNoPubsub(ctx context.Context, att *ethpb.Attestation) error
}
// ReceiveAttestation defines the operations that are performed on an
// attestation received from regular sync. The operations consist of:
// 1. Gossip attestation to other peers
// 2. Validate attestation, update validator's latest vote
// 3. Apply fork choice to the processed attestation
// 4. Save latest head info
func (s *Service) ReceiveAttestation(ctx context.Context, att *ethpb.Attestation) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveAttestation")
defer span.End()
// Broadcast the new attestation to the network.
if err := s.p2p.Broadcast(ctx, att); err != nil {
return errors.Wrap(err, "could not broadcast attestation")
}
attDataRoot, err := ssz.HashTreeRoot(att.Data)
if err != nil {
log.WithError(err).Error("Failed to hash attestation")
}
log.WithFields(logrus.Fields{
"attRoot": fmt.Sprintf("%#x", attDataRoot),
"blockRoot": fmt.Sprintf("%#x", att.Data.BeaconBlockRoot),
}).Debug("Broadcasting attestation")
if err := s.ReceiveAttestationNoPubsub(ctx, att); err != nil {
return err
}
processedAtt.Inc()
return nil
}
// ReceiveAttestationNoPubsub defines the operations that are performed on an
// attestation received from regular sync, without rebroadcasting it. The operations consist of:
// 1. Validate attestation, update validator's latest vote
// 2. Apply fork choice to the processed attestation
// 3. Save latest head info
func (s *Service) ReceiveAttestationNoPubsub(ctx context.Context, att *ethpb.Attestation) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveAttestationNoPubsub")
defer span.End()
// Update forkchoice store for the new attestation
attSlot, err := s.forkChoiceStore.OnAttestation(ctx, att)
if err != nil {
return errors.Wrap(err, "could not process attestation from fork choice service")
}
// Run fork choice for head block after updating fork choice store.
headRoot, err := s.forkChoiceStore.Head(ctx)
if err != nil {
return errors.Wrap(err, "could not get head from fork choice service")
}
// Only save head if it's different than the current head.
if !bytes.Equal(headRoot, s.HeadRoot()) {
headBlk, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(headRoot))
if err != nil {
return errors.Wrap(err, "could not compute state from block head")
}
if err := s.saveHead(ctx, headBlk, bytesutil.ToBytes32(headRoot)); err != nil {
return errors.Wrap(err, "could not save head")
}
}
// Skip checking for competing attestation's target roots at epoch boundary.
if !helpers.IsEpochStart(attSlot) {
s.headLock.RLock()
defer s.headLock.RUnlock()
targetRoot, err := helpers.BlockRoot(s.headState, att.Data.Target.Epoch)
if err != nil {
return errors.Wrapf(err, "could not get target root for epoch %d", att.Data.Target.Epoch)
}
isCompetingAtts(targetRoot, att.Data.Target.Root[:])
}
processedAttNoPubsub.Inc()
return nil
}
// isCompetingAtts checks whether the attestation is from a competing chain; if so, it emits a warning and updates metrics.
func isCompetingAtts(headTargetRoot []byte, attTargetRoot []byte) {
if !bytes.Equal(attTargetRoot, headTargetRoot) {
log.WithFields(logrus.Fields{
"attTargetRoot": hex.EncodeToString(attTargetRoot),
"headTargetRoot": hex.EncodeToString(headTargetRoot),
}).Warn("target heads different from new attestation")
competingAtts.Inc()
}
}


@@ -0,0 +1,116 @@
package blockchain
import (
"testing"
"github.com/prysmaticlabs/go-ssz"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
"golang.org/x/net/context"
)
func TestReceiveAttestation_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
r, _ := ssz.SigningRoot(&ethpb.BeaconBlock{})
chainService.forkChoiceStore = &store{headRoot: r[:]}
b := &ethpb.BeaconBlock{}
if err := chainService.beaconDB.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
if err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveState(ctx, &pb.BeaconState{}, root); err != nil {
t.Fatal(err)
}
a := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: root[:]},
Crosslink: &ethpb.Crosslink{},
}}
if err := chainService.ReceiveAttestation(ctx, a); err != nil {
t.Fatal(err)
}
testutil.AssertLogsContain(t, hook, "Saved new head info")
testutil.AssertLogsContain(t, hook, "Broadcasting attestation")
}
func TestReceiveAttestation_SameHead(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
r, _ := ssz.SigningRoot(&ethpb.BeaconBlock{})
chainService.forkChoiceStore = &store{headRoot: r[:]}
chainService.canonicalRoots[0] = r[:]
b := &ethpb.BeaconBlock{}
if err := chainService.beaconDB.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
if err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveState(ctx, &pb.BeaconState{}, root); err != nil {
t.Fatal(err)
}
a := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: root[:]},
Crosslink: &ethpb.Crosslink{},
}}
if err := chainService.ReceiveAttestation(ctx, a); err != nil {
t.Fatal(err)
}
testutil.AssertLogsDoNotContain(t, hook, "Saved new head info")
testutil.AssertLogsContain(t, hook, "Broadcasting attestation")
}
func TestReceiveAttestationNoPubsub_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
r, _ := ssz.SigningRoot(&ethpb.BeaconBlock{})
chainService.forkChoiceStore = &store{headRoot: r[:]}
b := &ethpb.BeaconBlock{}
if err := chainService.beaconDB.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
root, err := ssz.SigningRoot(b)
if err != nil {
t.Fatal(err)
}
if err := chainService.beaconDB.SaveState(ctx, &pb.BeaconState{}, root); err != nil {
t.Fatal(err)
}
a := &ethpb.Attestation{Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: root[:]},
Crosslink: &ethpb.Crosslink{},
}}
if err := chainService.ReceiveAttestationNoPubsub(ctx, a); err != nil {
t.Fatal(err)
}
testutil.AssertLogsContain(t, hook, "Saved new head info")
testutil.AssertLogsDoNotContain(t, hook, "Broadcasting attestation")
}


@@ -0,0 +1,223 @@
package blockchain
import (
"bytes"
"context"
"encoding/hex"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-ssz"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/traceutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// BlockReceiver defines the methods of the chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block *ethpb.BeaconBlock) error
ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.BeaconBlock) error
ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.BeaconBlock) error
ReceiveBlockNoVerify(ctx context.Context, block *ethpb.BeaconBlock) error
}
// ReceiveBlock defines the operations that are performed on a block
// received from the RPC service. The operations consist of:
// 1. Gossip block to other peers
// 2. Validate block, apply state transition and update check points
// 3. Apply fork choice to the processed block
// 4. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block *ethpb.BeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlock")
defer span.End()
root, err := ssz.SigningRoot(block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
// Broadcast the new block to the network.
if err := s.p2p.Broadcast(ctx, block); err != nil {
return errors.Wrap(err, "could not broadcast block")
}
log.WithFields(logrus.Fields{
"blockRoot": hex.EncodeToString(root[:]),
}).Debug("Broadcasting block")
if err := s.ReceiveBlockNoPubsub(ctx, block); err != nil {
return err
}
processedBlk.Inc()
return nil
}
// ReceiveBlockNoPubsub defines the operations (minus pubsub)
// that are performed on a block received from the regular sync service. The operations consist of:
// 1. Validate block, apply state transition and update check points
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.BeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlockNoPubsub")
defer span.End()
// Apply state transition on the new block.
if err := s.forkChoiceStore.OnBlock(ctx, block); err != nil {
err := errors.Wrap(err, "could not process block from fork choice service")
traceutil.AnnotateError(span, err)
return err
}
root, err := ssz.SigningRoot(block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
// Run fork choice after applying state transition on the new block.
headRoot, err := s.forkChoiceStore.Head(ctx)
if err != nil {
return errors.Wrap(err, "could not get head from fork choice service")
}
headBlk, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(headRoot))
if err != nil {
return errors.Wrap(err, "could not compute state from block head")
}
// Only save head if it's different than the current head.
if !bytes.Equal(headRoot, s.HeadRoot()) {
if err := s.saveHead(ctx, headBlk, bytesutil.ToBytes32(headRoot)); err != nil {
return errors.Wrap(err, "could not save head")
}
}
// Remove block's contained deposits, attestations, and other operations from persistent storage.
if err := s.cleanupBlockOperations(ctx, block); err != nil {
return errors.Wrap(err, "could not clean up block deposits, attestations, and other operations")
}
// Reports on block and fork choice metrics.
s.reportSlotMetrics(block.Slot)
// Log if block is a competing block.
isCompetingBlock(root[:], block.Slot, headRoot, headBlk.Slot)
// Log state transition data.
logStateTransitionData(block, root[:])
processedBlkNoPubsub.Inc()
// We write the latest saved head root to a feed for consumption by other services.
s.headUpdatedFeed.Send(bytesutil.ToBytes32(headRoot))
return nil
}
// ReceiveBlockNoPubsubForkchoice defines the operations (minus pubsub and fork choice)
// that are performed on a block received from the initial sync service. The operations consist of:
// 1. Validate block, apply state transition and update check points
// 2. Save latest head info
func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.BeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlockNoForkchoice")
defer span.End()
// Apply state transition on the incoming newly received block.
if err := s.forkChoiceStore.OnBlock(ctx, block); err != nil {
err := errors.Wrap(err, "could not process block from fork choice service")
traceutil.AnnotateError(span, err)
return err
}
root, err := ssz.SigningRoot(block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
if !bytes.Equal(root[:], s.HeadRoot()) {
if err := s.saveHead(ctx, block, root); err != nil {
return errors.Wrap(err, "could not save head")
}
}
// Remove block's contained deposits, attestations, and other operations from persistent storage.
if err := s.cleanupBlockOperations(ctx, block); err != nil {
return errors.Wrap(err, "could not clean up block deposits, attestations, and other operations")
}
// Reports on block and fork choice metrics.
s.reportSlotMetrics(block.Slot)
// Log state transition data.
logStateTransitionData(block, root[:])
// We write the latest saved head root to a feed for consumption by other services.
s.headUpdatedFeed.Send(root)
processedBlkNoPubsubForkchoice.Inc()
return nil
}
// ReceiveBlockNoVerify runs the state transition on an input block without verifying the block's BLS contents.
// Depending on the security model, this is the "minimal" work a node can do to sync the chain.
// It simulates light-client behavior and assumes 100% trust in the syncing peer.
func (s *Service) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.BeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.ReceiveBlockNoVerify")
defer span.End()
// Apply state transition on the incoming newly received block without verifying its BLS contents.
if err := s.forkChoiceStore.OnBlockNoVerifyStateTransition(ctx, block); err != nil {
return errors.Wrap(err, "could not process block from fork choice service")
}
root, err := ssz.SigningRoot(block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
if !bytes.Equal(root[:], s.HeadRoot()) {
if err := s.saveHead(ctx, block, root); err != nil {
err := errors.Wrap(err, "could not save head")
traceutil.AnnotateError(span, err)
return err
}
}
// Reports on block and fork choice metrics.
s.reportSlotMetrics(block.Slot)
// Log state transition data.
log.WithFields(logrus.Fields{
"slot": block.Slot,
"attestations": len(block.Body.Attestations),
"deposits": len(block.Body.Deposits),
}).Debug("Finished applying state transition")
// We write the latest saved head root to a feed for consumption by other services.
s.headUpdatedFeed.Send(root)
return nil
}
// cleanupBlockOperations processes and cleans up any block operations relevant to the beacon node,
// such as attestations, exits, and deposits. We update each validator's latest seen attestation
// in the local node's runtime, clean up and remove pending deposits which have been included in
// the block from our node's local cache, and process validator exits.
func (s *Service) cleanupBlockOperations(ctx context.Context, block *ethpb.BeaconBlock) error {
// Forward processed block to operation pool to remove individual operation from DB.
if s.opsPoolService.IncomingProcessedBlockFeed().Send(block) == 0 {
log.Error("Sent processed block to no subscribers")
}
// Remove pending deposits from the deposit queue.
for _, dep := range block.Body.Deposits {
s.depositCache.RemovePendingDeposit(ctx, dep)
}
return nil
}
// isCompetingBlock checks whether the block is from a competing chain; if so, it emits a warning and updates metrics.
func isCompetingBlock(root []byte, slot uint64, headRoot []byte, headSlot uint64) {
if !bytes.Equal(root[:], headRoot) {
log.WithFields(logrus.Fields{
"blkSlot": slot,
"blkRoot": hex.EncodeToString(root[:]),
"headSlot": headSlot,
"headRoot": hex.EncodeToString(headRoot),
}).Warn("Calculated head diffs from new block")
competingBlks.Inc()
}
}


@@ -0,0 +1,275 @@
package blockchain
import (
"bytes"
"context"
"reflect"
"testing"
"github.com/prysmaticlabs/go-ssz"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestReceiveBlock_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
deposits, _, privKeys := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, 0, &ethpb.Eth1Data{})
if err != nil {
t.Fatal(err)
}
beaconState.Eth1DepositIndex = 100
stateRoot, err := ssz.HashTreeRoot(beaconState)
if err != nil {
t.Fatal(err)
}
genesis := b.NewGenesisBlock(stateRoot[:])
bodyRoot, err := ssz.HashTreeRoot(genesis.Body)
if err != nil {
t.Fatal(err)
}
genesisBlkRoot, err := ssz.SigningRoot(genesis)
if err != nil {
t.Fatal(err)
}
cp := &ethpb.Checkpoint{Root: genesisBlkRoot[:]}
if err := chainService.forkChoiceStore.GenesisStore(ctx, cp, cp); err != nil {
t.Fatal(err)
}
beaconState.LatestBlockHeader = &ethpb.BeaconBlockHeader{
Slot: genesis.Slot,
ParentRoot: genesis.ParentRoot,
BodyRoot: bodyRoot[:],
StateRoot: genesis.StateRoot,
}
if err := chainService.beaconDB.SaveBlock(ctx, genesis); err != nil {
t.Fatalf("Could not save block to db: %v", err)
}
parentRoot, err := ssz.SigningRoot(genesis)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, beaconState, parentRoot); err != nil {
t.Fatal(err)
}
slot := beaconState.Slot + 1
epoch := helpers.SlotToEpoch(slot)
beaconState.Slot++
randaoReveal, err := testutil.CreateRandaoReveal(beaconState, epoch, privKeys)
if err != nil {
t.Fatal(err)
}
beaconState.Slot--
block := &ethpb.BeaconBlock{
Slot: slot,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBody{
Eth1Data: &ethpb.Eth1Data{
DepositCount: uint64(len(deposits)),
DepositRoot: []byte("a"),
BlockHash: []byte("b"),
},
RandaoReveal: randaoReveal[:],
Attestations: nil,
},
}
stateRootCandidate, err := state.ExecuteStateTransitionNoVerify(context.Background(), beaconState, block)
if err != nil {
t.Fatal(err)
}
stateRoot, err = ssz.HashTreeRoot(stateRootCandidate)
if err != nil {
t.Fatal(err)
}
block.StateRoot = stateRoot[:]
block, err = testutil.SignBlock(beaconState, block, privKeys)
if err != nil {
t.Error(err)
}
if err := chainService.beaconDB.SaveBlock(ctx, block); err != nil {
t.Fatal(err)
}
if err := chainService.ReceiveBlock(context.Background(), block); err != nil {
t.Errorf("Block failed processing: %v", err)
}
testutil.AssertLogsContain(t, hook, "Finished applying state transition")
}
func TestReceiveBlockNoPubsub_CanSaveHeadInfo(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
headBlk := &ethpb.BeaconBlock{Slot: 100}
if err := db.SaveBlock(ctx, headBlk); err != nil {
t.Fatal(err)
}
r, err := ssz.SigningRoot(headBlk)
if err != nil {
t.Fatal(err)
}
chainService.forkChoiceStore = &store{headRoot: r[:]}
if err := chainService.ReceiveBlockNoPubsub(ctx, &ethpb.BeaconBlock{
Slot: 1,
Body: &ethpb.BeaconBlockBody{}}); err != nil {
t.Fatal(err)
}
if !bytes.Equal(r[:], chainService.HeadRoot()) {
t.Error("Incorrect head root saved")
}
if !reflect.DeepEqual(headBlk, chainService.HeadBlock()) {
t.Error("Incorrect head block saved")
}
testutil.AssertLogsContain(t, hook, "Saved new head info")
}
func TestReceiveBlockNoPubsub_SameHead(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
headBlk := &ethpb.BeaconBlock{}
if err := db.SaveBlock(ctx, headBlk); err != nil {
t.Fatal(err)
}
newBlk := &ethpb.BeaconBlock{
Slot: 1,
Body: &ethpb.BeaconBlockBody{}}
newRoot, _ := ssz.SigningRoot(newBlk)
if err := db.SaveBlock(ctx, newBlk); err != nil {
t.Fatal(err)
}
chainService.forkChoiceStore = &store{headRoot: newRoot[:]}
chainService.canonicalRoots[0] = newRoot[:]
if err := chainService.ReceiveBlockNoPubsub(ctx, newBlk); err != nil {
t.Fatal(err)
}
testutil.AssertLogsDoNotContain(t, hook, "Saved new head info")
}
func TestReceiveBlockNoPubsubForkchoice_ProcessCorrectly(t *testing.T) {
hook := logTest.NewGlobal()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
chainService := setupBeaconChain(t, db)
deposits, _, privKeys := testutil.SetupInitialDeposits(t, 100)
beaconState, err := state.GenesisBeaconState(deposits, 0, &ethpb.Eth1Data{})
if err != nil {
t.Fatal(err)
}
beaconState.Eth1DepositIndex = 100
stateRoot, err := ssz.HashTreeRoot(beaconState)
if err != nil {
t.Fatal(err)
}
genesis := b.NewGenesisBlock(stateRoot[:])
bodyRoot, err := ssz.HashTreeRoot(genesis.Body)
if err != nil {
t.Fatal(err)
}
if err := chainService.forkChoiceStore.GenesisStore(ctx, &ethpb.Checkpoint{}, &ethpb.Checkpoint{}); err != nil {
t.Fatal(err)
}
beaconState.LatestBlockHeader = &ethpb.BeaconBlockHeader{
Slot: genesis.Slot,
ParentRoot: genesis.ParentRoot,
BodyRoot: bodyRoot[:],
StateRoot: genesis.StateRoot,
}
if err := chainService.beaconDB.SaveBlock(ctx, genesis); err != nil {
t.Fatalf("Could not save block to db: %v", err)
}
parentRoot, err := ssz.SigningRoot(genesis)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, beaconState, parentRoot); err != nil {
t.Fatal(err)
}
slot := beaconState.Slot + 1
epoch := helpers.SlotToEpoch(slot)
beaconState.Slot++
randaoReveal, err := testutil.CreateRandaoReveal(beaconState, epoch, privKeys)
if err != nil {
t.Fatal(err)
}
beaconState.Slot--
block := &ethpb.BeaconBlock{
Slot: slot,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBody{
Eth1Data: &ethpb.Eth1Data{
DepositCount: uint64(len(deposits)),
DepositRoot: []byte("a"),
BlockHash: []byte("b"),
},
RandaoReveal: randaoReveal[:],
Attestations: nil,
},
}
stateRootCandidate, err := state.ExecuteStateTransitionNoVerify(context.Background(), beaconState, block)
if err != nil {
t.Fatal(err)
}
stateRoot, err = ssz.HashTreeRoot(stateRootCandidate)
if err != nil {
t.Fatal(err)
}
block.StateRoot = stateRoot[:]
block, err = testutil.SignBlock(beaconState, block, privKeys)
if err != nil {
t.Error(err)
}
if err := chainService.beaconDB.SaveBlock(ctx, block); err != nil {
t.Fatal(err)
}
if err := chainService.ReceiveBlockNoPubsubForkchoice(context.Background(), block); err != nil {
t.Errorf("Block failed processing: %v", err)
}
testutil.AssertLogsContain(t, hook, "Finished applying state transition")
testutil.AssertLogsDoNotContain(t, hook, "Finished fork choice")
}


@@ -4,107 +4,136 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"sort"
"runtime"
"sync"
"time"
"github.com/prysmaticlabs/prysm/beacon-chain/attestation"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/forkchoice"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/operations"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
pbrpc "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/event"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/p2p"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var log = logrus.WithField("prefix", "blockchain")
// ChainFeeds interface defines the methods of the ChainService which provide
// information feeds.
// ChainFeeds interface defines the methods of the Service which provide state-related
// information feeds to consumers.
type ChainFeeds interface {
StateInitializedFeed() *event.Feed
}
// ChainService represents a service that handles the internal
// NewHeadNotifier defines a struct which can notify many consumers of a new,
// canonical chain head event occurring in the node.
type NewHeadNotifier interface {
HeadUpdatedFeed() *event.Feed
}
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type ChainService struct {
type Service struct {
ctx context.Context
cancel context.CancelFunc
beaconDB *db.BeaconDB
web3Service *powchain.Web3Service
attsService attestation.TargetHandler
beaconDB db.Database
depositCache *depositcache.DepositCache
chainStartFetcher powchain.ChainStartFetcher
opsPoolService operations.OperationFeeds
forkChoiceStore forkchoice.ForkChoicer
chainStartChan chan time.Time
canonicalBlockFeed *event.Feed
genesisTime time.Time
finalizedEpoch uint64
stateInitializedFeed *event.Feed
headUpdatedFeed *event.Feed
p2p p2p.Broadcaster
canonicalBlocks map[uint64][]byte
canonicalBlocksLock sync.RWMutex
receiveBlockLock sync.Mutex
maxRoutines int64
headSlot uint64
headBlock *ethpb.BeaconBlock
headState *pb.BeaconState
canonicalRoots map[uint64][]byte
headLock sync.RWMutex
}
// Config options for the service.
type Config struct {
BeaconBlockBuf int
Web3Service *powchain.Web3Service
AttsService attestation.TargetHandler
BeaconDB *db.BeaconDB
OpsPoolService operations.OperationFeeds
DevMode bool
P2p p2p.Broadcaster
BeaconBlockBuf int
ChainStartFetcher powchain.ChainStartFetcher
BeaconDB db.Database
DepositCache *depositcache.DepositCache
OpsPoolService operations.OperationFeeds
P2p p2p.Broadcaster
MaxRoutines int64
}
// NewChainService instantiates a new service instance that will
// NewService instantiates a new block service instance that will
// be registered into a running beacon node.
func NewChainService(ctx context.Context, cfg *Config) (*ChainService, error) {
func NewService(ctx context.Context, cfg *Config) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
return &ChainService{
store := forkchoice.NewForkChoiceService(ctx, cfg.BeaconDB)
return &Service{
ctx: ctx,
cancel: cancel,
beaconDB: cfg.BeaconDB,
web3Service: cfg.Web3Service,
depositCache: cfg.DepositCache,
chainStartFetcher: cfg.ChainStartFetcher,
opsPoolService: cfg.OpsPoolService,
attsService: cfg.AttsService,
canonicalBlockFeed: new(event.Feed),
forkChoiceStore: store,
chainStartChan: make(chan time.Time),
stateInitializedFeed: new(event.Feed),
headUpdatedFeed: new(event.Feed),
p2p: cfg.P2p,
canonicalBlocks: make(map[uint64][]byte),
canonicalRoots: make(map[uint64][]byte),
maxRoutines: cfg.MaxRoutines,
}, nil
}
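// A minimal wiring sketch (hypothetical dependencies; the real setup lives in
// the beacon node's service registry):
//
//	svc, err := NewService(ctx, &Config{
//		BeaconDB:          database,
//		DepositCache:      deposits,
//		ChainStartFetcher: powchainService,
//		OpsPoolService:    opsPool,
//		P2p:               broadcaster,
//		MaxRoutines:       10,
//	})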
// Start a blockchain service's main event loop.
func (c *ChainService) Start() {
beaconState, err := c.beaconDB.HeadState(c.ctx)
func (s *Service) Start() {
ctx := context.TODO()
beaconState, err := s.beaconDB.HeadState(ctx)
if err != nil {
log.Fatalf("Could not fetch beacon state: %v", err)
}
// If the chain has already been initialized, simply start the block processing routine.
if beaconState != nil {
log.Info("Beacon chain data already exists, starting service")
c.genesisTime = time.Unix(int64(beaconState.GenesisTime), 0)
c.finalizedEpoch = beaconState.FinalizedEpoch
log.Info("Blockchain data already exists in DB, initializing...")
s.genesisTime = time.Unix(int64(beaconState.GenesisTime), 0)
if err := s.initializeChainInfo(ctx); err != nil {
log.Fatalf("Could not set up chain info: %v", err)
}
justifiedCheckpoint, err := s.beaconDB.JustifiedCheckpoint(ctx)
if err != nil {
log.Fatalf("Could not get justified checkpoint: %v", err)
}
finalizedCheckpoint, err := s.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
log.Fatalf("Could not get finalized checkpoint: %v", err)
}
if err := s.forkChoiceStore.GenesisStore(ctx, justifiedCheckpoint, finalizedCheckpoint); err != nil {
log.Fatalf("Could not start fork choice service: %v", err)
}
s.stateInitializedFeed.Send(s.genesisTime)
} else {
log.Info("Waiting for ChainStart log from the Validator Deposit Contract to start the beacon chain...")
if c.web3Service == nil {
log.Info("Waiting to reach the validator deposit threshold to start the beacon chain...")
if s.chainStartFetcher == nil {
log.Fatal("Not configured web3Service for POW chain")
return // return need for TestStartUninitializedChainWithoutConfigPOWChain.
}
subChainStart := c.web3Service.ChainStartFeed().Subscribe(c.chainStartChan)
subChainStart := s.chainStartFetcher.ChainStartFeed().Subscribe(s.chainStartChan)
go func() {
genesisTime := <-c.chainStartChan
c.processChainStartTime(genesisTime, subChainStart)
genesisTime := <-s.chainStartChan
s.processChainStartTime(ctx, genesisTime, subChainStart)
return
}()
}
@@ -112,171 +141,180 @@ func (c *ChainService) Start() {
// processChainStartTime initializes a series of deposits from the ChainStart deposits in the eth1
// deposit contract, initializes the beacon chain's state, and kicks off the beacon chain.
func (c *ChainService) processChainStartTime(genesisTime time.Time, chainStartSub event.Subscription) {
initialDepositsData := c.web3Service.ChainStartDeposits()
initialDeposits := make([]*pb.Deposit, len(initialDepositsData))
for i := range initialDepositsData {
initialDeposits[i] = &pb.Deposit{DepositData: initialDepositsData[i]}
}
beaconState, err := c.initializeBeaconChain(genesisTime, initialDeposits, c.web3Service.ChainStartETH1Data())
if err != nil {
func (s *Service) processChainStartTime(ctx context.Context, genesisTime time.Time, chainStartSub event.Subscription) {
initialDeposits := s.chainStartFetcher.ChainStartDeposits()
if err := s.initializeBeaconChain(ctx, genesisTime, initialDeposits, s.chainStartFetcher.ChainStartEth1Data()); err != nil {
log.Fatalf("Could not initialize beacon chain: %v", err)
}
c.finalizedEpoch = beaconState.FinalizedEpoch
c.stateInitializedFeed.Send(genesisTime)
s.stateInitializedFeed.Send(genesisTime)
chainStartSub.Unsubscribe()
}
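For tests of this flow, the only behavior processChainStartTime needs from its fetcher is the trio of calls visible above; a stub inferred from those call sites (the real powchain.ChainStartFetcher interface may carry more methods) could look like:
// Stub ChainStartFetcher, inferred from the calls in Start and
// processChainStartTime; this is a sketch, not code from this change.
type fakeChainStarter struct {
	feed *event.Feed
}

func (f *fakeChainStarter) ChainStartDeposits() []*ethpb.Deposit { return []*ethpb.Deposit{} }
func (f *fakeChainStarter) ChainStartEth1Data() *ethpb.Eth1Data  { return &ethpb.Eth1Data{} }
func (f *fakeChainStarter) ChainStartFeed() *event.Feed          { return f.feed }

// Sending a time on the feed unblocks the goroutine in Start that
// reads chainStartChan and triggers processChainStartTime:
//   starter.feed.Send(time.Now())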
// initializes the state and genesis block of the beacon chain to persistent storage
// based on a genesis timestamp value obtained from the ChainStart event emitted
// by the ETH1.0 Deposit Contract and the POWChain service of the node.
func (c *ChainService) initializeBeaconChain(genesisTime time.Time, deposits []*pb.Deposit,
eth1data *pb.Eth1Data) (*pb.BeaconState, error) {
ctx, span := trace.StartSpan(context.Background(), "beacon-chain.ChainService.initializeBeaconChain")
func (s *Service) initializeBeaconChain(
ctx context.Context,
genesisTime time.Time,
deposits []*ethpb.Deposit,
eth1data *ethpb.Eth1Data) error {
_, span := trace.StartSpan(context.Background(), "beacon-chain.Service.initializeBeaconChain")
defer span.End()
log.Info("ChainStart time reached, starting the beacon chain!")
c.genesisTime = genesisTime
log.Info("Genesis time reached, starting the beacon chain")
s.genesisTime = genesisTime
unixTime := uint64(genesisTime.Unix())
if err := c.beaconDB.InitializeState(c.ctx, unixTime, deposits, eth1data); err != nil {
return nil, fmt.Errorf("could not initialize beacon state to disk: %v", err)
}
beaconState, err := c.beaconDB.HeadState(c.ctx)
genesisState, err := state.GenesisBeaconState(deposits, unixTime, eth1data)
if err != nil {
return nil, fmt.Errorf("could not attempt fetch beacon state: %v", err)
return errors.Wrap(err, "could not initialize genesis state")
}
stateRoot, err := hashutil.HashProto(beaconState)
if err != nil {
return nil, fmt.Errorf("could not hash beacon state: %v", err)
}
genBlock := b.NewGenesisBlock(stateRoot[:])
genBlockRoot, err := hashutil.HashBeaconBlock(genBlock)
if err != nil {
return nil, fmt.Errorf("could not hash beacon block: %v", err)
if err := s.saveGenesisData(ctx, genesisState); err != nil {
return errors.Wrap(err, "could not save genesis data")
}
// TODO(#2011): Remove this in state caching.
beaconState.LatestBlock = genBlock
// Update committee shuffled indices for genesis epoch.
if featureconfig.Get().EnableNewCache {
if err := helpers.UpdateCommitteeCache(genesisState); err != nil {
return err
}
}
if err := c.beaconDB.SaveBlock(genBlock); err != nil {
return nil, fmt.Errorf("could not save genesis block to disk: %v", err)
}
if err := c.beaconDB.SaveAttestationTarget(ctx, &pb.AttestationTarget{
Slot: genBlock.Slot,
BlockRoot: genBlockRoot[:],
ParentRoot: genBlock.ParentRootHash32,
}); err != nil {
return nil, fmt.Errorf("failed to save attestation target: %v", err)
}
if err := c.beaconDB.UpdateChainHead(ctx, genBlock, beaconState); err != nil {
return nil, fmt.Errorf("could not set chain head, %v", err)
}
if err := c.beaconDB.SaveJustifiedBlock(genBlock); err != nil {
return nil, fmt.Errorf("could not save genesis block as justified block: %v", err)
}
if err := c.beaconDB.SaveFinalizedBlock(genBlock); err != nil {
return nil, fmt.Errorf("could not save genesis block as finalized block: %v", err)
}
if err := c.beaconDB.SaveJustifiedState(beaconState); err != nil {
return nil, fmt.Errorf("could not save genesis state as justified state: %v", err)
}
if err := c.beaconDB.SaveFinalizedState(beaconState); err != nil {
return nil, fmt.Errorf("could not save genesis state as finalized state: %v", err)
}
return beaconState, nil
return nil
}
// Stop the blockchain service's main event loop and associated goroutines.
func (c *ChainService) Stop() error {
defer c.cancel()
log.Info("Stopping service")
func (s *Service) Stop() error {
defer s.cancel()
return nil
}
// Status always returns nil.
// TODO(1202): Add service health checks.
func (c *ChainService) Status() error {
// Status always returns nil unless there is an error condition that causes
// this service to be unhealthy.
func (s *Service) Status() error {
if runtime.NumGoroutine() > int(s.maxRoutines) {
return fmt.Errorf("too many goroutines %d", runtime.NumGoroutine())
}
return nil
}
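Since Status now reports unhealthiness once the goroutine count exceeds MaxRoutines, a supervising loop can poll it; a minimal sketch, with the polling interval as an assumption:
// Poll the service health; Status returns an error once
// runtime.NumGoroutine exceeds the configured MaxRoutines.
ticker := time.NewTicker(30 * time.Second)
defer ticker.Stop()
for range ticker.C {
	if err := chainService.Status(); err != nil {
		log.WithError(err).Warn("Blockchain service unhealthy")
	}
}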
// CanonicalBlockFeed returns a channel that is written to
// whenever a new block is determined to be canonical in the chain.
func (c *ChainService) CanonicalBlockFeed() *event.Feed {
return c.canonicalBlockFeed
}
// StateInitializedFeed returns a feed that is written to
// when the beacon state is first initialized.
func (c *ChainService) StateInitializedFeed() *event.Feed {
return c.stateInitializedFeed
func (s *Service) StateInitializedFeed() *event.Feed {
return s.stateInitializedFeed
}
// ChainHeadRoot returns the hash root of the last beacon block processed by the
// block chain service.
func (c *ChainService) ChainHeadRoot() ([32]byte, error) {
head, err := c.beaconDB.ChainHead()
// HeadUpdatedFeed is a feed containing the head block root and
// is written to when a new head block is saved to the DB.
func (s *Service) HeadUpdatedFeed() *event.Feed {
return s.headUpdatedFeed
}
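Both accessors return plain event.Feed values. The state-initialized feed is sent the genesis time (see the stateInitializedFeed.Send calls above), so a subscriber sketch looks like this; the consumer-side names are hypothetical:
// Subscribe to the state-initialized feed; its payload is the
// genesis time, matching stateInitializedFeed.Send(genesisTime).
stateChan := make(chan time.Time, 1)
sub := chainService.StateInitializedFeed().Subscribe(stateChan)
defer sub.Unsubscribe()

select {
case genesis := <-stateChan:
	log.WithField("genesisTime", genesis).Info("Beacon state initialized")
case err := <-sub.Err():
	log.WithError(err).Error("State feed subscription error")
}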
// This gets called to update the canonical root mapping and save the new head info.
func (s *Service) saveHead(ctx context.Context, b *ethpb.BeaconBlock, r [32]byte) error {
s.headLock.Lock()
defer s.headLock.Unlock()
s.headSlot = b.Slot
s.canonicalRoots[b.Slot] = r[:]
if err := s.beaconDB.SaveHeadBlockRoot(ctx, r); err != nil {
return errors.Wrap(err, "could not save head root in DB")
}
s.headBlock = b
headState, err := s.beaconDB.State(ctx, r)
if err != nil {
return [32]byte{}, fmt.Errorf("could not retrieve chain head: %v", err)
return errors.Wrap(err, "could not retrieve head state in DB")
}
s.headState = headState
root, err := hashutil.HashBeaconBlock(head)
if err != nil {
return [32]byte{}, fmt.Errorf("could not tree hash parent block: %v", err)
}
return root, nil
log.WithFields(logrus.Fields{
"slot": b.Slot,
"headRoot": fmt.Sprintf("%#x", r),
}).Debug("Saved new head info")
return nil
}
// IsCanonical returns true if the input block hash of the corresponding slot
// is part of the canonical chain. False otherwise.
func (c *ChainService) IsCanonical(slot uint64, hash []byte) bool {
c.canonicalBlocksLock.RLock()
defer c.canonicalBlocksLock.RUnlock()
if canonicalHash, ok := c.canonicalBlocks[slot]; ok {
return bytes.Equal(canonicalHash, hash)
}
return false
}
// CanonicalBlock returns canonical block of a given slot, it returns nil
// if there's no canonical block saved of a given slot.
func (c *ChainService) CanonicalBlock(slot uint64) (*pb.BeaconBlock, error) {
c.canonicalBlocksLock.RLock()
defer c.canonicalBlocksLock.RUnlock()
root, exists := c.canonicalBlocks[slot]
if !exists {
return nil, nil
}
return c.beaconDB.Block(bytesutil.ToBytes32(root))
}
// RecentCanonicalRoots returns the latest block slot and root of the canonical block chain,
// the block slots and roots are sorted and in descending order. Input count determines
// the number of block slots and roots to return.
func (c *ChainService) RecentCanonicalRoots(count uint64) []*pbrpc.BlockRoot {
c.canonicalBlocksLock.RLock()
defer c.canonicalBlocksLock.RUnlock()
var slots []int
for s := range c.canonicalBlocks {
slots = append(slots, int(s))
}
// Return all the canonical blocks if the input count is greater than
// the depth of the block tree.
totalRoots := uint64(len(slots))
if count > totalRoots {
count = totalRoots
}
sort.Sort(sort.Reverse(sort.IntSlice(slots)))
blockRoots := make([]*pbrpc.BlockRoot, count)
for i := 0; i < int(count); i++ {
slot := uint64(slots[i])
blockRoots[i] = &pbrpc.BlockRoot{
Slot: slot,
Root: c.canonicalBlocks[slot],
// This gets called when the beacon chain is first initialized to save validator indices and pubkeys in the db
func (s *Service) saveGenesisValidators(ctx context.Context, state *pb.BeaconState) error {
for i, v := range state.Validators {
if err := s.beaconDB.SaveValidatorIndex(ctx, bytesutil.ToBytes48(v.PublicKey), uint64(i)); err != nil {
return errors.Wrapf(err, "could not save validator index: %d", i)
}
}
return blockRoots
return nil
}
// This gets called when the beacon chain is first initialized to save genesis data (state, block, and more) in the db
func (s *Service) saveGenesisData(ctx context.Context, genesisState *pb.BeaconState) error {
s.headLock.Lock()
defer s.headLock.Unlock()
stateRoot, err := ssz.HashTreeRoot(genesisState)
if err != nil {
return errors.Wrap(err, "could not tree hash genesis state")
}
genesisBlk := blocks.NewGenesisBlock(stateRoot[:])
genesisBlkRoot, err := ssz.SigningRoot(genesisBlk)
if err != nil {
return errors.Wrap(err, "could not get genesis block root")
}
if err := s.beaconDB.SaveBlock(ctx, genesisBlk); err != nil {
return errors.Wrap(err, "could not save genesis block")
}
if err := s.beaconDB.SaveHeadBlockRoot(ctx, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could not save head block root")
}
if err := s.beaconDB.SaveGenesisBlockRoot(ctx, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could save genesis block root")
}
if err := s.beaconDB.SaveState(ctx, genesisState, genesisBlkRoot); err != nil {
return errors.Wrap(err, "could not save genesis state")
}
if err := s.saveGenesisValidators(ctx, genesisState); err != nil {
return errors.Wrap(err, "could not save genesis validators")
}
genesisCheckpoint := &ethpb.Checkpoint{Root: genesisBlkRoot[:]}
if err := s.forkChoiceStore.GenesisStore(ctx, genesisCheckpoint, genesisCheckpoint); err != nil {
return errors.Wrap(err, "Could not start fork choice service: %v")
}
s.headBlock = genesisBlk
s.headState = genesisState
s.canonicalRoots[genesisState.Slot] = genesisBlkRoot[:]
return nil
}
// This gets called to initialize chain info variables using the finalized checkpoint stored in the DB
func (s *Service) initializeChainInfo(ctx context.Context) error {
s.headLock.Lock()
defer s.headLock.Unlock()
finalized, err := s.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint from db")
}
if finalized == nil {
// This should never happen. At chain start, the finalized checkpoint
// would be the genesis state and block.
return errors.New("no finalized epoch in the database")
}
s.headState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(finalized.Root))
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}
s.headBlock, err = s.beaconDB.Block(ctx, bytesutil.ToBytes32(finalized.Root))
if err != nil {
return errors.Wrap(err, "could not get finalized block from db")
}
s.headSlot = s.headState.Slot
s.canonicalRoots[s.headSlot] = finalized.Root
return nil
}

View File

@@ -0,0 +1,37 @@
package blockchain
import (
"context"
"io/ioutil"
"testing"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/sirupsen/logrus"
)
func init() {
logrus.SetLevel(logrus.DebugLevel)
logrus.SetOutput(ioutil.Discard)
}
func TestChainService_SaveHead_DataRace(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
s := &Service{
beaconDB: db,
canonicalRoots: make(map[uint64][]byte),
}
go func() {
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 777},
[32]byte{},
)
}()
s.saveHead(
context.Background(),
&ethpb.BeaconBlock{Slot: 888},
[32]byte{},
)
}

View File

@@ -1,9 +1,9 @@
package blockchain
import (
"bytes"
"context"
"crypto/rand"
"encoding/binary"
"encoding/hex"
"errors"
"io/ioutil"
"math/big"
@@ -11,40 +11,63 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
gethTypes "github.com/ethereum/go-ethereum/core/types"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/prysm/beacon-chain/attestation"
ssz "github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/internal"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
pbrpc "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
"github.com/prysmaticlabs/prysm/shared/bls"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/event"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/forkutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/p2p"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
"github.com/sirupsen/logrus"
logTest "github.com/sirupsen/logrus/hooks/test"
)
// Ensure ChainService implements interfaces.
var _ = ChainFeeds(&ChainService{})
// Ensure Service implements interfaces.
var _ = ChainFeeds(&Service{})
var _ = NewHeadNotifier(&Service{})
func init() {
logrus.SetLevel(logrus.DebugLevel)
logrus.SetOutput(ioutil.Discard)
featureconfig.InitFeatureConfig(&featureconfig.FeatureFlagConfig{
EnableCrosslinks: true,
EnableCheckBlockStateRoot: true,
})
}
type store struct {
headRoot []byte
}
func (s *store) OnBlock(ctx context.Context, b *ethpb.BeaconBlock) error {
return nil
}
func (s *store) OnBlockNoVerifyStateTransition(ctx context.Context, b *ethpb.BeaconBlock) error {
return nil
}
func (s *store) OnAttestation(ctx context.Context, a *ethpb.Attestation) (uint64, error) {
return 0, nil
}
func (s *store) GenesisStore(ctx context.Context, justifiedCheckpoint *ethpb.Checkpoint, finalizedCheckpoint *ethpb.Checkpoint) error {
return nil
}
func (s *store) FinalizedCheckpt() *ethpb.Checkpoint {
return nil
}
func (s *store) Head(ctx context.Context) ([]byte, error) {
return s.headRoot, nil
}
type mockOperationService struct{}
@@ -152,81 +175,32 @@ type mockBroadcaster struct {
broadcastCalled bool
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) {
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil
}
var _ = p2p.Broadcaster(&mockBroadcaster{})
func setupInitialDeposits(t *testing.T, numDeposits int) ([]*pb.Deposit, []*bls.SecretKey) {
privKeys := make([]*bls.SecretKey, numDeposits)
deposits := make([]*pb.Deposit, numDeposits)
for i := 0; i < len(deposits); i++ {
priv, err := bls.RandKey(rand.Reader)
if err != nil {
t.Fatal(err)
}
depositInput := &pb.DepositInput{
Pubkey: priv.PublicKey().Marshal(),
}
balance := params.BeaconConfig().MaxDepositAmount
depositData, err := helpers.EncodeDepositData(depositInput, balance, time.Now().Unix())
if err != nil {
t.Fatalf("Cannot encode data: %v", err)
}
deposits[i] = &pb.Deposit{
DepositData: depositData,
MerkleTreeIndex: uint64(i),
}
privKeys[i] = priv
}
return deposits, privKeys
}
func createPreChainStartDeposit(t *testing.T, pk []byte, index uint64) *pb.Deposit {
depositInput := &pb.DepositInput{Pubkey: pk}
balance := params.BeaconConfig().MaxDepositAmount
depositData, err := helpers.EncodeDepositData(depositInput, balance, time.Now().Unix())
if err != nil {
t.Fatalf("Cannot encode data: %v", err)
}
return &pb.Deposit{DepositData: depositData, MerkleTreeIndex: index}
}
func createRandaoReveal(t *testing.T, beaconState *pb.BeaconState, privKeys []*bls.SecretKey) []byte {
// We fetch the proposer's index as that is whom the RANDAO will be verified against.
proposerIdx, err := helpers.BeaconProposerIndex(beaconState, beaconState.Slot)
if err != nil {
t.Fatal(err)
}
epoch := helpers.SlotToEpoch(beaconState.Slot)
buf := make([]byte, 32)
binary.LittleEndian.PutUint64(buf, epoch)
domain := forkutil.DomainVersion(beaconState.Fork, epoch, params.BeaconConfig().DomainRandao)
// We make the previous validator's index sign the message instead of the proposer.
epochSignature := privKeys[proposerIdx].Sign(buf, domain)
return epochSignature.Marshal()
}
func setupGenesisBlock(t *testing.T, cs *ChainService) ([32]byte, *pb.BeaconBlock) {
func setupGenesisBlock(t *testing.T, cs *Service) ([32]byte, *ethpb.BeaconBlock) {
genesis := b.NewGenesisBlock([]byte{})
if err := cs.beaconDB.SaveBlock(genesis); err != nil {
if err := cs.beaconDB.SaveBlock(context.Background(), genesis); err != nil {
t.Fatalf("could not save block to db: %v", err)
}
parentHash, err := hashutil.HashBeaconBlock(genesis)
parentHash, err := ssz.SigningRoot(genesis)
if err != nil {
t.Fatalf("unable to get tree hash root of canonical head: %v", err)
}
return parentHash, genesis
}
func setupBeaconChain(t *testing.T, beaconDB *db.BeaconDB, attsService *attestation.Service) *ChainService {
func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
endpoint := "ws://127.0.0.1"
ctx := context.Background()
var web3Service *powchain.Web3Service
var web3Service *powchain.Service
var err error
client := &mockClient{}
web3Service, err = powchain.NewWeb3Service(ctx, &powchain.Web3ServiceConfig{
web3Service, err = powchain.NewService(ctx, &powchain.Web3ServiceConfig{
Endpoint: endpoint,
DepositContract: common.Address{},
Reader: client,
@@ -238,17 +212,17 @@ func setupBeaconChain(t *testing.T, beaconDB *db.BeaconDB, attsService *attestat
}
cfg := &Config{
BeaconBlockBuf: 0,
BeaconDB: beaconDB,
Web3Service: web3Service,
OpsPoolService: &mockOperationService{},
AttsService: attsService,
P2p: &mockBroadcaster{},
BeaconBlockBuf: 0,
BeaconDB: beaconDB,
DepositCache: depositcache.NewDepositCache(),
ChainStartFetcher: web3Service,
OpsPoolService: &mockOperationService{},
P2p: &mockBroadcaster{},
}
if err != nil {
t.Fatalf("could not register blockchain service: %v", err)
}
chainService, err := NewChainService(ctx, cfg)
chainService, err := NewService(ctx, cfg)
if err != nil {
t.Fatalf("unable to setup chain service: %v", err)
}
@@ -256,21 +230,12 @@ func setupBeaconChain(t *testing.T, beaconDB *db.BeaconDB, attsService *attestat
return chainService
}
func SetSlotInState(service *ChainService, slot uint64) error {
bState, err := service.beaconDB.HeadState(context.Background())
if err != nil {
return err
}
bState.Slot = slot
return service.beaconDB.SaveState(context.Background(), bState)
}
func TestChainStartStop_Uninitialized(t *testing.T) {
helpers.ClearAllCaches()
hook := logTest.NewGlobal()
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
chainService := setupBeaconChain(t, db, nil)
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
chainService := setupBeaconChain(t, db)
// Test the start function.
genesisChan := make(chan time.Time, 0)
@@ -291,7 +256,7 @@ func TestChainStartStop_Uninitialized(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if beaconState == nil || beaconState.Slot != params.BeaconConfig().GenesisSlot {
if beaconState == nil || beaconState.Slot != 0 {
t.Error("Expected canonical state feed to send a state with genesis block")
}
if err := chainService.Stop(); err != nil {
@@ -301,23 +266,39 @@ func TestChainStartStop_Uninitialized(t *testing.T) {
if chainService.ctx.Err() != context.Canceled {
t.Error("Context was not canceled")
}
testutil.AssertLogsContain(t, hook, "Waiting for ChainStart log from the Validator Deposit Contract to start the beacon chain...")
testutil.AssertLogsContain(t, hook, "ChainStart time reached, starting the beacon chain!")
testutil.AssertLogsContain(t, hook, "Waiting")
testutil.AssertLogsContain(t, hook, "Genesis time reached")
}
func TestChainStartStop_Initialized(t *testing.T) {
hook := logTest.NewGlobal()
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
chainService := setupBeaconChain(t, db, nil)
chainService := setupBeaconChain(t, db)
unixTime := uint64(time.Now().Unix())
deposits, _ := setupInitialDeposits(t, 100)
if err := db.InitializeState(context.Background(), unixTime, deposits, &pb.Eth1Data{}); err != nil {
t.Fatalf("Could not initialize beacon state to disk: %v", err)
genesisBlk := b.NewGenesisBlock([]byte{})
blkRoot, err := ssz.SigningRoot(genesisBlk)
if err != nil {
t.Fatal(err)
}
setupGenesisBlock(t, chainService)
if err := db.SaveBlock(ctx, genesisBlk); err != nil {
t.Fatal(err)
}
if err := db.SaveHeadBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveGenesisBlockRoot(ctx, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveState(ctx, &pb.BeaconState{Slot: 1}, blkRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}); err != nil {
t.Fatal(err)
}
// Test the start function.
chainService.Start()
@@ -329,98 +310,80 @@ func TestChainStartStop_Initialized(t *testing.T) {
if chainService.ctx.Err() != context.Canceled {
t.Error("context was not canceled")
}
testutil.AssertLogsContain(t, hook, "Beacon chain data already exists, starting service")
testutil.AssertLogsContain(t, hook, "data already exists")
}
func TestRecentCanonicalRoots_CanFilter(t *testing.T) {
service := setupBeaconChain(t, nil, nil)
blks := map[uint64][]byte{
1: {'A'},
50: {'E'},
2: {'B'},
99: {'F'},
30: {'D'},
3: {'C'},
}
service.canonicalBlocks = blks
func TestChainService_InitializeBeaconChain(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
want := []*pbrpc.BlockRoot{{Slot: 99, Root: []byte{'F'}}}
roots := service.RecentCanonicalRoots(1)
if !reflect.DeepEqual(want, roots) {
t.Log("Incorrect block roots received")
bc := setupBeaconChain(t, db)
// Set up 10 deposits pre chain start for validators to register
count := uint64(10)
deposits, _, _ := testutil.SetupInitialDeposits(t, count)
if err := bc.initializeBeaconChain(ctx, time.Unix(0, 0), deposits, &ethpb.Eth1Data{}); err != nil {
t.Fatal(err)
}
want = []*pbrpc.BlockRoot{
{Slot: 99, Root: []byte{'F'}},
{Slot: 50, Root: []byte{'E'}},
{Slot: 30, Root: []byte{'D'}},
}
roots = service.RecentCanonicalRoots(3)
if !reflect.DeepEqual(want, roots) {
t.Log("Incorrect block roots received")
s, err := bc.beaconDB.State(ctx, bytesutil.ToBytes32(bc.canonicalRoots[0]))
if err != nil {
t.Fatal(err)
}
want = []*pbrpc.BlockRoot{
{Slot: 99, Root: []byte{'F'}},
{Slot: 50, Root: []byte{'E'}},
{Slot: 30, Root: []byte{'D'}},
{Slot: 3, Root: []byte{'C'}},
{Slot: 2, Root: []byte{'B'}},
{Slot: 1, Root: []byte{'A'}},
}
roots = service.RecentCanonicalRoots(100)
if !reflect.DeepEqual(want, roots) {
t.Log("Incorrect block roots received")
for _, v := range s.Validators {
if !db.HasValidatorIndex(ctx, bytesutil.ToBytes48(v.PublicKey)) {
t.Errorf("Validator %s missing from db", hex.EncodeToString(v.PublicKey))
}
}
if bc.HeadState() == nil {
t.Error("Head state can't be nil after initializing the beacon chain")
}
if bc.HeadBlock() == nil {
t.Error("Head block can't be nil after initializing the beacon chain")
}
if bc.CanonicalRoot(0) == nil {
t.Error("Canonical root for slot 0 can't be nil after initialize beacon chain")
}
}
func TestCanonicalBlock_CanGet(t *testing.T) {
db := internal.SetupDB(t)
defer internal.TeardownDB(t, db)
service := setupBeaconChain(t, db, nil)
func TestChainService_InitializeChainInfo(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
ctx := context.Background()
blk1 := &pb.BeaconBlock{Slot: 500}
blk1Root, err := hashutil.HashBeaconBlock(blk1)
if err != nil {
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := &ethpb.BeaconBlock{Slot: finalizedSlot}
headState := &pb.BeaconState{Slot: finalizedSlot}
headRoot, _ := ssz.SigningRoot(headBlock)
if err := db.SaveState(ctx, headState, headRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveBlock(blk1); err != nil {
if err := db.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{
Epoch: helpers.SlotToEpoch(finalizedSlot),
Root: headRoot[:],
}); err != nil {
t.Fatal(err)
}
blk2 := &pb.BeaconBlock{Slot: 600}
blk2Root, _ := hashutil.HashBeaconBlock(blk2)
if err != nil {
if err := db.SaveBlock(ctx, headBlock); err != nil {
t.Fatal(err)
}
if err := db.SaveBlock(blk2); err != nil {
c := &Service{beaconDB: db, canonicalRoots: make(map[uint64][]byte)}
if err := c.initializeChainInfo(ctx); err != nil {
t.Fatal(err)
}
cMap := map[uint64][]byte{
blk1.Slot: blk1Root[:],
blk2.Slot: blk2Root[:],
700: {'A'},
if !reflect.DeepEqual(c.HeadBlock(), headBlock) {
t.Error("head block incorrect")
}
service.canonicalBlocks = cMap
blk1Db, err := service.CanonicalBlock(blk1.Slot)
if err != nil {
t.Fatal(err)
if !reflect.DeepEqual(c.HeadState(), headState) {
t.Error("head block incorrect")
}
if !reflect.DeepEqual(blk1, blk1Db) {
t.Error("block 1 don't match")
if headBlock.Slot != c.HeadSlot() {
t.Error("head slot incorrect")
}
blk2Db, err := service.CanonicalBlock(blk2.Slot)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(blk2, blk2Db) {
t.Error("block 2 don't match")
}
blk3Db, err := service.CanonicalBlock(999)
if err != nil {
t.Fatal(err)
}
if blk3Db != nil {
t.Error("block 3 is suppose to be nil")
if !bytes.Equal(headRoot[:], c.HeadRoot()) {
t.Error("head slot incorrect")
}
}

View File

@@ -1,32 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["state_generator.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/stategenerator",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["state_generator_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/chaintest/backend:go_default_library",
"//beacon-chain/db:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/params:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
],
)

View File

@@ -1,177 +0,0 @@
package stategenerator
import (
"context"
"fmt"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var log = logrus.WithField("prefix", "stategenerator")
// GenerateStateFromBlock generates state from the last finalized state to the input slot.
// Ex:
// 1A - 2B(finalized) - 3C - 4 - 5D - 6 - 7F (letters mean there's a block).
// Input: slot 6.
// Output: the resulting state of the state transition function after applying blocks C and D,
// along with skipped slots 4 and 6.
func GenerateStateFromBlock(ctx context.Context, db *db.BeaconDB, slot uint64) (*pb.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.stategenerator.GenerateStateFromBlock")
defer span.End()
fState, err := db.HistoricalStateFromSlot(ctx, slot)
if err != nil {
return nil, err
}
// return finalized state if it's the same as input slot.
if fState.Slot == slot {
return fState, nil
}
// input slot can't be smaller than last finalized state's slot.
if fState.Slot > slot {
return nil, fmt.Errorf(
"requested slot %d < current slot %d in the finalized beacon state",
slot-params.BeaconConfig().GenesisSlot,
fState.Slot-params.BeaconConfig().GenesisSlot,
)
}
if fState.LatestBlock == nil {
return nil, fmt.Errorf("latest head in state is nil %v", err)
}
fRoot, err := hashutil.HashBeaconBlock(fState.LatestBlock)
if err != nil {
return nil, fmt.Errorf("unable to get block root %v", err)
}
// from input slot, retrieve its corresponding block and call that the most recent block.
mostRecentBlock, err := db.BlockBySlot(ctx, slot)
if err != nil {
return nil, err
}
// if the most recent block is a skip block, we get its parent block.
// ex:
// 1A - 2B - 3C - 4 - 5 (letters mean there's a block).
// input slot is 5, but slots 4 and 5 are skipped, we get block C from slot 3.
lastSlot := slot
for mostRecentBlock == nil {
lastSlot--
mostRecentBlock, err = db.BlockBySlot(ctx, lastSlot)
if err != nil {
return nil, err
}
}
// retrieve the block list to recompute state of the input slot.
blocks, err := blocksSinceFinalized(ctx, db, mostRecentBlock, fRoot)
if err != nil {
return nil, fmt.Errorf("unable to look up block ancestors %v", err)
}
log.Infof("Recompute state starting last finalized slot %d and ending slot %d",
fState.Slot-params.BeaconConfig().GenesisSlot, slot-params.BeaconConfig().GenesisSlot)
postState := fState
root := fRoot
// this recomputes state up to the last available block.
// ex: 1A - 2B (finalized) - 3C - 4 - 5 - 6C - 7 - 8 (C is the last block).
// input slot 8, this recomputes state to slot 6.
for i := len(blocks); i > 0; i-- {
block := blocks[i-1]
if block.Slot <= postState.Slot {
continue
}
// running state transitions for skipped slots.
for block.Slot != postState.Slot+1 {
postState, err = state.ExecuteStateTransition(
ctx,
postState,
nil,
root,
&state.TransitionConfig{
VerifySignatures: false,
Logging: false,
},
)
if err != nil {
return nil, fmt.Errorf("could not execute state transition %v", err)
}
}
postState, err = state.ExecuteStateTransition(
ctx,
postState,
block,
root,
&state.TransitionConfig{
VerifySignatures: false,
Logging: false,
},
)
if err != nil {
return nil, fmt.Errorf("could not execute state transition %v", err)
}
root, err = hashutil.HashBeaconBlock(block)
if err != nil {
return nil, fmt.Errorf("unable to get block root %v", err)
}
}
// this recomputes state from the last block to the last slot if there are skipped slots after.
// ex: 1A - 2B (finalized) - 3C - 4 - 5 - 6C - 7 - 8 (7 and 8 are skipped slots).
// input slot 8, this recomputes state from 6C to 8.
for i := postState.Slot; i < slot; i++ {
postState, err = state.ExecuteStateTransition(
ctx,
postState,
nil,
root,
&state.TransitionConfig{
VerifySignatures: false,
Logging: false,
},
)
if err != nil {
return nil, fmt.Errorf("could not execute state transition %v", err)
}
}
log.Infof("Finished recompute state with slot %d and finalized epoch %d",
postState.Slot-params.BeaconConfig().GenesisSlot, postState.FinalizedEpoch-params.BeaconConfig().GenesisEpoch)
return postState, nil
}
// blocksSinceFinalized returns the list of linked blocks between
// the input block and the last finalized block in the db.
// The input block is also returned in the list.
// Ex:
// A -> B(finalized) -> C -> D -> E.
// Input: E, output: [E, D, C, B].
func blocksSinceFinalized(ctx context.Context, db *db.BeaconDB, block *pb.BeaconBlock,
finalizedBlockRoot [32]byte) ([]*pb.BeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.stategenerator.blocksSinceFinalized")
defer span.End()
blockAncestors := make([]*pb.BeaconBlock, 0)
blockAncestors = append(blockAncestors, block)
parentRoot := bytesutil.ToBytes32(block.ParentRootHash32)
// looking up ancestors, until the finalized block.
for parentRoot != finalizedBlockRoot {
retblock, err := db.Block(parentRoot)
if err != nil {
return nil, err
}
blockAncestors = append(blockAncestors, retblock)
parentRoot = bytesutil.ToBytes32(retblock.ParentRootHash32)
}
return blockAncestors, nil
}

View File

@@ -1,189 +0,0 @@
package stategenerator_test
import (
"context"
"strings"
"testing"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain/stategenerator"
"github.com/prysmaticlabs/prysm/beacon-chain/chaintest/backend"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
)
func init() {
featureconfig.InitFeatureConfig(&featureconfig.FeatureFlagConfig{
CacheTreeHash: false,
})
}
func TestGenerateState_OK(t *testing.T) {
b, err := backend.NewSimulatedBackend()
if err != nil {
t.Fatalf("Could not create a new simulated backend %v", err)
}
privKeys, err := b.SetupBackend(100)
if err != nil {
t.Fatalf("Could not set up backend %v", err)
}
beaconDb := b.DB()
defer b.Shutdown()
defer db.TeardownDB(beaconDb)
ctx := context.Background()
slotLimit := uint64(30)
// Run the simulated chain for 30 slots, to get a state that we can save as finalized.
for i := uint64(0); i < slotLimit; i++ {
if err := b.GenerateBlockAndAdvanceChain(&backend.SimulatedObjects{}, privKeys); err != nil {
t.Fatalf("Could not generate block and transition state successfully %v for slot %d", err, b.State().Slot+1)
}
inMemBlocks := b.InMemoryBlocks()
if err := beaconDb.SaveBlock(inMemBlocks[len(inMemBlocks)-1]); err != nil {
t.Fatalf("Unable to save block %v", err)
}
if err := beaconDb.UpdateChainHead(ctx, inMemBlocks[len(inMemBlocks)-1], b.State()); err != nil {
t.Fatalf("Unable to save block %v", err)
}
if err := beaconDb.SaveFinalizedBlock(inMemBlocks[len(inMemBlocks)-1]); err != nil {
t.Fatalf("Unable to save finalized state: %v", err)
}
}
if err := beaconDb.SaveFinalizedState(b.State()); err != nil {
t.Fatalf("Unable to save finalized state: %v", err)
}
// Run the chain for another 30 slots so that we can have this at the current head.
for i := uint64(0); i < slotLimit; i++ {
if err := b.GenerateBlockAndAdvanceChain(&backend.SimulatedObjects{}, privKeys); err != nil {
t.Fatalf("Could not generate block and transition state successfully %v for slot %d", err, b.State().Slot+1)
}
inMemBlocks := b.InMemoryBlocks()
if err := beaconDb.SaveBlock(inMemBlocks[len(inMemBlocks)-1]); err != nil {
t.Fatalf("Unable to save block %v", err)
}
if err := beaconDb.UpdateChainHead(ctx, inMemBlocks[len(inMemBlocks)-1], b.State()); err != nil {
t.Fatalf("Unable to save block %v", err)
}
}
// Ran 30 slots to save finalized slot then ran another 30 slots.
slotToGenerateTill := params.BeaconConfig().GenesisSlot + slotLimit*2
newState, err := stategenerator.GenerateStateFromBlock(context.Background(), beaconDb, slotToGenerateTill)
if err != nil {
t.Fatalf("Unable to generate new state from previous finalized state %v", err)
}
if newState.Slot != b.State().Slot {
t.Fatalf("The generated state and the current state do not have the same slot, expected: %d but got %d",
b.State().Slot, newState.Slot)
}
if !proto.Equal(newState, b.State()) {
t.Error("Generated and saved states are unequal")
}
}
func TestGenerateState_WithNilBlocksOK(t *testing.T) {
b, err := backend.NewSimulatedBackend()
if err != nil {
t.Fatalf("Could not create a new simulated backend %v", err)
}
privKeys, err := b.SetupBackend(100)
if err != nil {
t.Fatalf("Could not set up backend %v", err)
}
beaconDb := b.DB()
defer b.Shutdown()
defer db.TeardownDB(beaconDb)
ctx := context.Background()
slotLimit := uint64(30)
// Run the simulated chain for 30 slots, to get a state that we can save as finalized.
for i := uint64(0); i < slotLimit; i++ {
if err := b.GenerateBlockAndAdvanceChain(&backend.SimulatedObjects{}, privKeys); err != nil {
t.Fatalf("Could not generate block and transition state successfully %v for slot %d", err, b.State().Slot+1)
}
inMemBlocks := b.InMemoryBlocks()
if err := beaconDb.SaveBlock(inMemBlocks[len(inMemBlocks)-1]); err != nil {
t.Fatalf("Unable to save block %v", err)
}
if err := beaconDb.UpdateChainHead(ctx, inMemBlocks[len(inMemBlocks)-1], b.State()); err != nil {
t.Fatalf("Unable to save block %v", err)
}
if err := beaconDb.SaveFinalizedBlock(inMemBlocks[len(inMemBlocks)-1]); err != nil {
t.Fatalf("Unable to save finalized state: %v", err)
}
}
if err := beaconDb.SaveFinalizedState(b.State()); err != nil {
t.Fatalf("Unable to save finalized state")
}
slotsWithNil := uint64(10)
// Run the chain for 10 slots with nil blocks.
for i := uint64(0); i < slotsWithNil; i++ {
if err := b.GenerateNilBlockAndAdvanceChain(); err != nil {
t.Fatalf("Could not generate block and transition state successfully %v for slot %d", err, b.State().Slot+1)
}
}
for i := uint64(0); i < slotLimit-slotsWithNil; i++ {
if err := b.GenerateBlockAndAdvanceChain(&backend.SimulatedObjects{}, privKeys); err != nil {
t.Fatalf("Could not generate block and transition state successfully %v for slot %d", err, b.State().Slot+1)
}
inMemBlocks := b.InMemoryBlocks()
if err := beaconDb.SaveBlock(inMemBlocks[len(inMemBlocks)-1]); err != nil {
t.Fatalf("Unable to save block %v", err)
}
if err := beaconDb.UpdateChainHead(ctx, inMemBlocks[len(inMemBlocks)-1], b.State()); err != nil {
t.Fatalf("Unable to save block %v", err)
}
}
// Ran 30 slots to save finalized slot then ran another 10 slots w/o blocks and 20 slots w/ blocks.
slotToGenerateTill := params.BeaconConfig().GenesisSlot + slotLimit*2
newState, err := stategenerator.GenerateStateFromBlock(context.Background(), beaconDb, slotToGenerateTill)
if err != nil {
t.Fatalf("Unable to generate new state from previous finalized state %v", err)
}
if newState.Slot != b.State().Slot {
t.Fatalf("The generated state and the current state do not have the same slot, expected: %d but got %d",
b.State().Slot, newState.Slot)
}
if !proto.Equal(newState, b.State()) {
t.Error("generated and saved states are unequal")
}
}
func TestGenerateState_NilLatestFinalizedBlock(t *testing.T) {
b, err := backend.NewSimulatedBackend()
if err != nil {
t.Fatalf("Could not create a new simulated backend %v", err)
}
beaconDB := b.DB()
defer b.Shutdown()
defer db.TeardownDB(beaconDB)
beaconState := &pb.BeaconState{
Slot: params.BeaconConfig().GenesisSlot + params.BeaconConfig().SlotsPerEpoch*4,
}
if err := beaconDB.SaveFinalizedState(beaconState); err != nil {
t.Fatalf("Unable to save finalized state")
}
if err := beaconDB.SaveHistoricalState(context.Background(), beaconState); err != nil {
t.Fatalf("Unable to save finalized state")
}
slot := params.BeaconConfig().GenesisSlot + 1 + params.BeaconConfig().SlotsPerEpoch*4
want := "latest head in state is nil"
if _, err := stategenerator.GenerateStateFromBlock(context.Background(), beaconDB, slot); !strings.Contains(err.Error(), want) {
t.Errorf("Expected %v, received %v", want, err)
}
}

View File

@@ -0,0 +1,14 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
testonly = True,
srcs = ["mock.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//proto/beacon/p2p/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/event:go_default_library",
],
)

View File

@@ -0,0 +1,102 @@
package testing
import (
"context"
"time"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/event"
)
// ChainService defines the mock interface for testing
type ChainService struct {
State *pb.BeaconState
Root []byte
Block *ethpb.BeaconBlock
FinalizedCheckPoint *ethpb.Checkpoint
StateFeed *event.Feed
BlocksReceived []*ethpb.BeaconBlock
Genesis time.Time
}
// ReceiveBlock mocks ReceiveBlock method in chain service.
func (ms *ChainService) ReceiveBlock(ctx context.Context, block *ethpb.BeaconBlock) error {
return nil
}
// ReceiveBlockNoVerify mocks ReceiveBlockNoVerify method in chain service.
func (ms *ChainService) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.BeaconBlock) error {
return nil
}
// ReceiveBlockNoPubsub mocks ReceiveBlockNoPubsub method in chain service.
func (ms *ChainService) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.BeaconBlock) error {
return nil
}
// ReceiveBlockNoPubsubForkchoice mocks ReceiveBlockNoPubsubForkchoice method in chain service.
func (ms *ChainService) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.BeaconBlock) error {
if ms.State == nil {
ms.State = &pb.BeaconState{}
}
ms.State.Slot = block.Slot
ms.BlocksReceived = append(ms.BlocksReceived, block)
return nil
}
// HeadSlot mocks HeadSlot method in chain service.
func (ms *ChainService) HeadSlot() uint64 {
return ms.State.Slot
}
// HeadRoot mocks HeadRoot method in chain service.
func (ms *ChainService) HeadRoot() []byte {
return ms.Root
}
// HeadBlock mocks HeadBlock method in chain service.
func (ms *ChainService) HeadBlock() *ethpb.BeaconBlock {
return ms.Block
}
// HeadState mocks HeadState method in chain service.
func (ms *ChainService) HeadState() *pb.BeaconState {
return ms.State
}
// FinalizedCheckpt mocks FinalizedCheckpt method in chain service.
func (ms *ChainService) FinalizedCheckpt() *ethpb.Checkpoint {
return ms.FinalizedCheckPoint
}
// ReceiveAttestation mocks ReceiveAttestation method in chain service.
func (ms *ChainService) ReceiveAttestation(context.Context, *ethpb.Attestation) error {
return nil
}
// ReceiveAttestationNoPubsub mocks ReceiveAttestationNoPubsub method in chain service.
func (ms *ChainService) ReceiveAttestationNoPubsub(context.Context, *ethpb.Attestation) error {
return nil
}
// StateInitializedFeed mocks the same method in the chain service.
func (ms *ChainService) StateInitializedFeed() *event.Feed {
if ms.StateFeed != nil {
return ms.StateFeed
}
ms.StateFeed = new(event.Feed)
return ms.StateFeed
}
// HeadUpdatedFeed mocks the same method in the chain service.
func (ms *ChainService) HeadUpdatedFeed() *event.Feed {
return new(event.Feed)
}
// GenesisTime mocks the same method in the chain service.
func (ms *ChainService) GenesisTime() time.Time {
return ms.Genesis
}
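A hedged example of how another package's test might lean on this mock; the asserting test body is illustrative only:
// Using the blockchain testing mock from a sync or RPC test,
// imported as: mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
chain := &mock.ChainService{
	State: &pb.BeaconState{Slot: 10},
	Root:  []byte{'A'},
}
if chain.HeadSlot() != 10 {
	t.Error("unexpected head slot from mock")
}
// ReceiveBlockNoPubsubForkchoice records the block and advances
// State.Slot, so tests can assert on BlocksReceived afterwards.
if err := chain.ReceiveBlockNoPubsubForkchoice(ctx, &ethpb.BeaconBlock{Slot: 11}); err != nil {
	t.Fatal(err)
}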

View File

@@ -3,14 +3,26 @@ load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"block.go",
"active_count.go",
"active_indices.go",
"attestation_data.go",
"checkpoint_state.go",
"committee.go",
"common.go",
"eth1_data.go",
"shuffled_indices.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/cache",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//proto/beacon/p2p/v1:go_default_library",
"//proto/beacon/rpc/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"//shared/sliceutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@io_k8s_client_go//tools/cache:go_default_library",
@@ -19,10 +31,27 @@ go_library(
go_test(
name = "go_default_test",
size = "small",
srcs = [
"block_test.go",
"active_count_test.go",
"active_indices_test.go",
"attestation_data_test.go",
"benchmarks_test.go",
"checkpoint_state_test.go",
"committee_test.go",
"eth1_data_test.go",
"feature_flag_test.go",
"shuffled_indices_test.go",
],
embed = [":go_default_library"],
deps = ["//proto/beacon/p2p/v1:go_default_library"],
race = "on",
deps = [
"//proto/beacon/p2p/v1:go_default_library",
"//proto/beacon/rpc/v1:go_default_library",
"//proto/eth/v1alpha1:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
],
)

beacon-chain/cache/active_count.go
View File

@@ -0,0 +1,98 @@
package cache
import (
"errors"
"strconv"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/shared/params"
"k8s.io/client-go/tools/cache"
)
var (
// ErrNotActiveCountInfo will be returned when a cache object is not a pointer to
// an ActiveCountByEpoch struct.
ErrNotActiveCountInfo = errors.New("object is not an active count object")
// maxActiveCountListSize defines the max number of active counts the cache can hold.
maxActiveCountListSize = 1000
// Metrics.
activeCountCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "active_validator_count_cache_miss",
Help: "The number of active validator count requests that aren't present in the cache.",
})
activeCountCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "active_validator_count_cache_hit",
Help: "The number of active validator count requests that are present in the cache.",
})
)
// ActiveCountByEpoch defines the active validator count per epoch.
type ActiveCountByEpoch struct {
Epoch uint64
ActiveCount uint64
}
// ActiveCountCache is a struct with 1 queue for looking up active count by epoch.
type ActiveCountCache struct {
activeCountCache *cache.FIFO
lock sync.RWMutex
}
// activeCountKeyFn takes the epoch as the key for the active count of a given epoch.
func activeCountKeyFn(obj interface{}) (string, error) {
aInfo, ok := obj.(*ActiveCountByEpoch)
if !ok {
return "", ErrNotActiveCountInfo
}
return strconv.Itoa(int(aInfo.Epoch)), nil
}
// NewActiveCountCache creates a new active count cache for storing/accessing active validator count.
func NewActiveCountCache() *ActiveCountCache {
return &ActiveCountCache{
activeCountCache: cache.NewFIFO(activeCountKeyFn),
}
}
// ActiveCountInEpoch fetches the active validator count by epoch. Returns the count if it
// exists in the cache. Otherwise it returns FarFutureEpoch as a cache-miss sentinel.
func (c *ActiveCountCache) ActiveCountInEpoch(epoch uint64) (uint64, error) {
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.activeCountCache.GetByKey(strconv.Itoa(int(epoch)))
if err != nil {
return params.BeaconConfig().FarFutureEpoch, err
}
if exists {
activeCountCacheHit.Inc()
} else {
activeCountCacheMiss.Inc()
return params.BeaconConfig().FarFutureEpoch, nil
}
aInfo, ok := obj.(*ActiveCountByEpoch)
if !ok {
return params.BeaconConfig().FarFutureEpoch, ErrNotActiveCountInfo
}
return aInfo.ActiveCount, nil
}
// AddActiveCount adds an ActiveCountByEpoch object to the cache. This method also trims the least
// recently added ActiveCountByEpoch object if the cache size has reached the max cache size limit.
func (c *ActiveCountCache) AddActiveCount(activeCount *ActiveCountByEpoch) error {
c.lock.Lock()
defer c.lock.Unlock()
if err := c.activeCountCache.AddIfNotPresent(activeCount); err != nil {
return err
}
trim(c.activeCountCache, maxActiveCountListSize)
return nil
}
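A read-through usage sketch for this cache, mirroring the tests that follow: a miss is signaled by the FarFutureEpoch sentinel rather than an error, so callers compare against that value before falling back to a recomputation (computeActiveCount below is a hypothetical helper, not part of this change):
// Read-through pattern for ActiveCountCache.
countCache := cache.NewActiveCountCache()
count, err := countCache.ActiveCountInEpoch(epoch)
if err != nil {
	return 0, err
}
if count == params.BeaconConfig().FarFutureEpoch { // cache miss
	count = computeActiveCount(beaconState, epoch) // hypothetical fallback
	if err := countCache.AddActiveCount(&cache.ActiveCountByEpoch{
		Epoch:       epoch,
		ActiveCount: count,
	}); err != nil {
		return 0, err
	}
}
return count, nil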

beacon-chain/cache/active_count_test.go
View File

@@ -0,0 +1,83 @@
package cache
import (
"reflect"
"strconv"
"testing"
"github.com/prysmaticlabs/prysm/shared/params"
)
func TestActiveCountKeyFn_OK(t *testing.T) {
aInfo := &ActiveCountByEpoch{
Epoch: 999,
ActiveCount: 10,
}
key, err := activeCountKeyFn(aInfo)
if err != nil {
t.Fatal(err)
}
if key != strconv.Itoa(int(aInfo.Epoch)) {
t.Errorf("Incorrect hash key: %s, expected %s", key, strconv.Itoa(int(aInfo.Epoch)))
}
}
func TestActiveCountKeyFn_InvalidObj(t *testing.T) {
_, err := activeCountKeyFn("bad")
if err != ErrNotActiveCountInfo {
t.Errorf("Expected error %v, got %v", ErrNotActiveCountInfo, err)
}
}
func TestActiveCountCache_ActiveCountByEpoch(t *testing.T) {
cache := NewActiveCountCache()
aInfo := &ActiveCountByEpoch{
Epoch: 99,
ActiveCount: 11,
}
activeCount, err := cache.ActiveCountInEpoch(aInfo.Epoch)
if err != nil {
t.Fatal(err)
}
if activeCount != params.BeaconConfig().FarFutureEpoch {
t.Error("Expected active count not to exist in empty cache")
}
if err := cache.AddActiveCount(aInfo); err != nil {
t.Fatal(err)
}
activeCount, err = cache.ActiveCountInEpoch(aInfo.Epoch)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(activeCount, aInfo.ActiveCount) {
t.Errorf(
"Expected fetched active count to be %v, got %v",
aInfo.ActiveCount,
activeCount,
)
}
}
func TestActiveCount_MaxSize(t *testing.T) {
cache := NewActiveCountCache()
for i := uint64(0); i < 1001; i++ {
aInfo := &ActiveCountByEpoch{
Epoch: i,
}
if err := cache.AddActiveCount(aInfo); err != nil {
t.Fatal(err)
}
}
if len(cache.activeCountCache.ListKeys()) != maxActiveCountListSize {
t.Errorf(
"Expected hash cache key size to be %d, got %d",
maxActiveCountListSize,
len(cache.activeCountCache.ListKeys()),
)
}
}

beacon-chain/cache/active_indices.go
View File

@@ -0,0 +1,102 @@
package cache
import (
"errors"
"strconv"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"k8s.io/client-go/tools/cache"
)
var (
// ErrNotActiveIndicesInfo will be returned when a cache object is not a pointer to
// an ActiveIndicesByEpoch struct.
ErrNotActiveIndicesInfo = errors.New("object is not an active indices list")
// maxActiveIndicesListSize defines the max number of active indices lists the cache can hold.
maxActiveIndicesListSize = 4
// Metrics.
activeIndicesCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "active_validator_indices_cache_miss",
Help: "The number of active validator indices requests that aren't present in the cache.",
})
activeIndicesCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "active_validator_indices_cache_hit",
Help: "The number of active validator indices requests that are present in the cache.",
})
)
// ActiveIndicesByEpoch defines the active validator indices per epoch.
type ActiveIndicesByEpoch struct {
Epoch uint64
ActiveIndices []uint64
}
// ActiveIndicesCache is a struct with 1 queue for looking up active indices by epoch.
type ActiveIndicesCache struct {
activeIndicesCache *cache.FIFO
lock sync.RWMutex
}
// activeIndicesKeyFn takes the epoch as the key for the active indices of a given epoch.
func activeIndicesKeyFn(obj interface{}) (string, error) {
aInfo, ok := obj.(*ActiveIndicesByEpoch)
if !ok {
return "", ErrNotActiveIndicesInfo
}
return strconv.Itoa(int(aInfo.Epoch)), nil
}
// NewActiveIndicesCache creates a new active indices cache for storing/accessing active validator indices.
func NewActiveIndicesCache() *ActiveIndicesCache {
return &ActiveIndicesCache{
activeIndicesCache: cache.NewFIFO(activeIndicesKeyFn),
}
}
// ActiveIndicesInEpoch fetches the active validator indices by epoch. Returns the indices if they
// exist in the cache. Otherwise it returns nil.
func (c *ActiveIndicesCache) ActiveIndicesInEpoch(epoch uint64) ([]uint64, error) {
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.activeIndicesCache.GetByKey(strconv.Itoa(int(epoch)))
if err != nil {
return nil, err
}
if exists {
activeIndicesCacheHit.Inc()
} else {
activeIndicesCacheMiss.Inc()
return nil, nil
}
aInfo, ok := obj.(*ActiveIndicesByEpoch)
if !ok {
return nil, ErrNotActiveIndicesInfo
}
return aInfo.ActiveIndices, nil
}
// AddActiveIndicesList adds an ActiveIndicesByEpoch object to the cache. This method also trims the least
// recently added ActiveIndicesByEpoch object if the cache size has reached the max cache size limit.
func (c *ActiveIndicesCache) AddActiveIndicesList(activeIndices *ActiveIndicesByEpoch) error {
c.lock.Lock()
defer c.lock.Unlock()
if err := c.activeIndicesCache.AddIfNotPresent(activeIndices); err != nil {
return err
}
trim(c.activeIndicesCache, maxActiveIndicesListSize)
return nil
}
// ActiveIndicesKeys returns the keys of the active indices cache.
func (c *ActiveIndicesCache) ActiveIndicesKeys() []string {
return c.activeIndicesCache.ListKeys()
}

View File

@@ -0,0 +1,82 @@
package cache
import (
"reflect"
"strconv"
"testing"
)
func TestActiveIndicesKeyFn_OK(t *testing.T) {
aInfo := &ActiveIndicesByEpoch{
Epoch: 999,
ActiveIndices: []uint64{1, 2, 3, 4, 5},
}
key, err := activeIndicesKeyFn(aInfo)
if err != nil {
t.Fatal(err)
}
if key != strconv.Itoa(int(aInfo.Epoch)) {
t.Errorf("Incorrect hash key: %s, expected %s", key, strconv.Itoa(int(aInfo.Epoch)))
}
}
func TestActiveIndicesKeyFn_InvalidObj(t *testing.T) {
_, err := activeIndicesKeyFn("bad")
if err != ErrNotActiveIndicesInfo {
t.Errorf("Expected error %v, got %v", ErrNotActiveIndicesInfo, err)
}
}
func TestActiveIndicesCache_ActiveIndicesByEpoch(t *testing.T) {
cache := NewActiveIndicesCache()
aInfo := &ActiveIndicesByEpoch{
Epoch: 99,
ActiveIndices: []uint64{1, 2, 3, 4},
}
activeIndices, err := cache.ActiveIndicesInEpoch(aInfo.Epoch)
if err != nil {
t.Fatal(err)
}
if activeIndices != nil {
t.Error("Expected active indices not to exist in empty cache")
}
if err := cache.AddActiveIndicesList(aInfo); err != nil {
t.Fatal(err)
}
activeIndices, err = cache.ActiveIndicesInEpoch(aInfo.Epoch)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(activeIndices, aInfo.ActiveIndices) {
t.Errorf(
"Expected fetched active indices to be %v, got %v",
aInfo.ActiveIndices,
activeIndices,
)
}
}
func TestActiveIndices_MaxSize(t *testing.T) {
cache := NewActiveIndicesCache()
for i := uint64(0); i < 100; i++ {
aInfo := &ActiveIndicesByEpoch{
Epoch: i,
}
if err := cache.AddActiveIndicesList(aInfo); err != nil {
t.Fatal(err)
}
}
if len(cache.activeIndicesCache.ListKeys()) != maxActiveIndicesListSize {
t.Errorf(
"Expected hash cache key size to be %d, got %d",
maxActiveIndicesListSize,
len(cache.activeIndicesCache.ListKeys()),
)
}
}

beacon-chain/cache/attestation_data.go
View File

@@ -0,0 +1,190 @@
package cache
import (
"context"
"errors"
"fmt"
"math"
"sync"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
pb "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"k8s.io/client-go/tools/cache"
)
var (
// Delay parameters
minDelay = float64(10) // 10 nanoseconds
maxDelay = float64(100000000) // 0.1 second
delayFactor = 1.1
// Metrics
attestationCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "attestation_cache_miss",
Help: "The number of attestation data requests that aren't present in the cache.",
})
attestationCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "attestation_cache_hit",
Help: "The number of attestation data requests that are present in the cache.",
})
attestationCacheSize = promauto.NewGauge(prometheus.GaugeOpts{
Name: "attestation_cache_size",
Help: "The number of attestation data in the attestations cache",
})
)
// ErrAlreadyInProgress appears when attempting to mark a cache as in progress while it is
// already in progress. The client should handle this error and wait for the in progress
// data to resolve via Get.
var ErrAlreadyInProgress = errors.New("already in progress")
// AttestationCache is used to store the cached results of an AttestationData request.
type AttestationCache struct {
cache *cache.FIFO
lock sync.RWMutex
inProgress map[string]bool
}
// NewAttestationCache initializes the map and underlying cache.
func NewAttestationCache() *AttestationCache {
return &AttestationCache{
cache: cache.NewFIFO(wrapperToKey),
inProgress: make(map[string]bool),
}
}
// Get waits for any in progress calculation to complete before returning a
// cached response, if any.
func (c *AttestationCache) Get(ctx context.Context, req *pb.AttestationRequest) (*ethpb.AttestationData, error) {
if !featureconfig.Get().EnableAttestationCache {
// Return a miss result if cache is not enabled.
attestationCacheMiss.Inc()
return nil, nil
}
if req == nil {
return nil, errors.New("nil attestation data request")
}
s, e := reqToKey(req)
if e != nil {
return nil, e
}
delay := minDelay
// Another identical request may be in progress already. Let's wait until
// any in progress request resolves or our timeout is exceeded.
for {
if ctx.Err() != nil {
return nil, ctx.Err()
}
c.lock.RLock()
if !c.inProgress[s] {
c.lock.RUnlock()
break
}
c.lock.RUnlock()
// This increasing backoff reduces CPU usage while waiting for the
// in-progress boolean to flip to false.
time.Sleep(time.Duration(delay) * time.Nanosecond)
delay *= delayFactor
delay = math.Min(delay, maxDelay)
}
item, exists, err := c.cache.GetByKey(s)
if err != nil {
return nil, err
}
if exists && item != nil && item.(*attestationReqResWrapper).res != nil {
attestationCacheHit.Inc()
return item.(*attestationReqResWrapper).res, nil
}
attestationCacheMiss.Inc()
return nil, nil
}
// MarkInProgress marks a request as in progress so that any other identical
// requests will block on Get until MarkNotInProgress is called.
func (c *AttestationCache) MarkInProgress(req *pb.AttestationRequest) error {
if !featureconfig.Get().EnableAttestationCache {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
s, e := reqToKey(req)
if e != nil {
return e
}
if c.inProgress[s] {
return ErrAlreadyInProgress
}
c.inProgress[s] = true
return nil
}
// MarkNotInProgress releases the in-progress marker for a given request so
// that waiting callers may proceed. This should be called after Put.
func (c *AttestationCache) MarkNotInProgress(req *pb.AttestationRequest) error {
if !featureconfig.Get().EnableAttestationCache {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
s, e := reqToKey(req)
if e != nil {
return e
}
delete(c.inProgress, s)
return nil
}
// Put the response in the cache.
func (c *AttestationCache) Put(ctx context.Context, req *pb.AttestationRequest, res *ethpb.AttestationData) error {
if !featureconfig.Get().EnableAttestationCache {
return nil
}
data := &attestationReqResWrapper{
req,
res,
}
if err := c.cache.AddIfNotPresent(data); err != nil {
return err
}
trim(c.cache, maxCacheSize)
attestationCacheSize.Set(float64(len(c.cache.List())))
return nil
}
func wrapperToKey(i interface{}) (string, error) {
w := i.(*attestationReqResWrapper)
if w == nil {
return "", errors.New("nil wrapper")
}
if w.req == nil {
return "", errors.New("nil wrapper.request")
}
return reqToKey(w.req)
}
func reqToKey(req *pb.AttestationRequest) (string, error) {
return fmt.Sprintf("%d-%d", req.Shard, req.Slot), nil
}
type attestationReqResWrapper struct {
req *pb.AttestationRequest
res *ethpb.AttestationData
}
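
The cache above implements a single-flight pattern: the first caller marks a request in progress, computes, and stores the result, while identical concurrent requests spin in Get with increasing backoff until the result lands. A sketch of the intended call sequence, assuming the feature flag is enabled as in feature_flag_test.go further down; computeAttestationData is a hypothetical stand-in for the real work:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
	pb "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
	ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
	"github.com/prysmaticlabs/prysm/shared/featureconfig"
)

// computeAttestationData is a hypothetical stand-in for the expensive
// request the cache deduplicates.
func computeAttestationData(req *pb.AttestationRequest) *ethpb.AttestationData {
	return &ethpb.AttestationData{Target: &ethpb.Checkpoint{Epoch: 5}}
}

func fetch(ctx context.Context, c *cache.AttestationCache, req *pb.AttestationRequest) (*ethpb.AttestationData, error) {
	// Get blocks while an identical request is marked in progress.
	if res, err := c.Get(ctx, req); err != nil || res != nil {
		return res, err
	}
	if err := c.MarkInProgress(req); err != nil {
		if err == cache.ErrAlreadyInProgress {
			// Another caller won the race; wait on its result instead.
			return c.Get(ctx, req)
		}
		return nil, err
	}
	// Release the in-progress marker after Put so waiters see the result.
	defer func() {
		if err := c.MarkNotInProgress(req); err != nil {
			log.Println(err)
		}
	}()
	res := computeAttestationData(req)
	if err := c.Put(ctx, req, res); err != nil {
		return nil, err
	}
	return res, nil
}

func main() {
	featureconfig.Init(&featureconfig.Flag{EnableAttestationCache: true})
	res, err := fetch(context.Background(), cache.NewAttestationCache(), &pb.AttestationRequest{Shard: 0, Slot: 1})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(res.Target.Epoch) // 5
}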


@@ -0,0 +1,55 @@
package cache_test
import (
"context"
"testing"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
pb "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)
func TestAttestationCache_RoundTrip(t *testing.T) {
ctx := context.Background()
c := cache.NewAttestationCache()
req := &pb.AttestationRequest{
Shard: 0,
Slot: 1,
}
response, err := c.Get(ctx, req)
if err != nil {
t.Error(err)
}
if response != nil {
t.Errorf("Empty cache returned an object: %v", response)
}
if err := c.MarkInProgress(req); err != nil {
t.Error(err)
}
res := &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Epoch: 5},
}
if err = c.Put(ctx, req, res); err != nil {
t.Error(err)
}
if err := c.MarkNotInProgress(req); err != nil {
t.Error(err)
}
response, err = c.Get(ctx, req)
if err != nil {
t.Error(err)
}
if !proto.Equal(response, res) {
t.Error("Expected equal protos to return from cache")
}
}

beacon-chain/cache/benchmarks_test.go

@@ -0,0 +1,45 @@
package cache
import (
"testing"
)
var indices300k = createIndices(300000)
var epoch = uint64(1)
func createIndices(count int) *ActiveIndicesByEpoch {
indices := make([]uint64, 0, count)
for i := 0; i < count; i++ {
indices = append(indices, uint64(i))
}
return &ActiveIndicesByEpoch{
Epoch: epoch,
ActiveIndices: indices,
}
}
func BenchmarkCachingAddRetrieve(b *testing.B) {
c := NewActiveIndicesCache()
b.Run("ADD300K", func(b *testing.B) {
b.N = 10
b.ResetTimer()
for i := 0; i < b.N; i++ {
if err := c.AddActiveIndicesList(indices300k); err != nil {
b.Fatal(err)
}
}
})
b.Run("RETR300K", func(b *testing.B) {
b.N = 10
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := c.ActiveIndicesInEpoch(epoch); err != nil {
b.Fatal(err)
}
}
})
}


@@ -1,104 +0,0 @@
package cache
import (
"errors"
"strconv"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"k8s.io/client-go/tools/cache"
)
var (
// ErrNotAncestorCacheObj will be returned when a cache object is not a pointer to
// block ancestor cache obj.
ErrNotAncestorCacheObj = errors.New("object is not an ancestor object for cache")
// Metrics
ancestorBlockCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "ancestor_block_cache_miss",
Help: "The number of ancestor block requests that aren't present in the cache.",
})
ancestorBlockCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "ancestor_block_cache_hit",
Help: "The number of ancestor block requests that are present in the cache.",
})
ancestorBlockCacheSize = promauto.NewGauge(prometheus.GaugeOpts{
Name: "ancestor_block_cache_size",
Help: "The number of ancestor blocks in the ancestorBlock cache",
})
)
// AncestorInfo defines the cached ancestor block object for height.
type AncestorInfo struct {
Height uint64
Hash []byte
Target *pb.AttestationTarget
}
// AncestorBlockCache structs with 1 queue for looking up block ancestor by height.
type AncestorBlockCache struct {
ancestorBlockCache *cache.FIFO
lock sync.RWMutex
}
// heightKeyFn takes the string representation of the block hash + height as the key
// for the ancestor of a given block (AncestorInfo).
func heightKeyFn(obj interface{}) (string, error) {
aInfo, ok := obj.(*AncestorInfo)
if !ok {
return "", ErrNotAncestorCacheObj
}
return string(aInfo.Hash) + strconv.Itoa(int(aInfo.Height)), nil
}
// NewBlockAncestorCache creates a new block ancestor cache for storing/accessing block ancestor
// from memory.
func NewBlockAncestorCache() *AncestorBlockCache {
return &AncestorBlockCache{
ancestorBlockCache: cache.NewFIFO(heightKeyFn),
}
}
// AncestorBySlot fetches a block's ancestor by height. Returns a reference to
// the ancestor block if it exists; otherwise returns nil, nil.
func (a *AncestorBlockCache) AncestorBySlot(blockHash []byte, height uint64) (*AncestorInfo, error) {
a.lock.RLock()
defer a.lock.RUnlock()
obj, exists, err := a.ancestorBlockCache.GetByKey(string(blockHash) + strconv.Itoa(int(height)))
if err != nil {
return nil, err
}
if exists {
ancestorBlockCacheHit.Inc()
} else {
ancestorBlockCacheMiss.Inc()
return nil, nil
}
aInfo, ok := obj.(*AncestorInfo)
if !ok {
return nil, ErrNotAncestorCacheObj
}
return aInfo, nil
}
// AddBlockAncestor adds a block ancestor object to the cache. This method also trims the least
// recently added ancestor if the cache size has reached the max cache size limit.
func (a *AncestorBlockCache) AddBlockAncestor(ancestorInfo *AncestorInfo) error {
a.lock.Lock()
defer a.lock.Unlock()
if err := a.ancestorBlockCache.AddIfNotPresent(ancestorInfo); err != nil {
return err
}
trim(a.ancestorBlockCache, maxCacheSize)
ancestorBlockCacheSize.Set(float64(len(a.ancestorBlockCache.ListKeys())))
return nil
}


@@ -1,111 +0,0 @@
package cache
import (
"reflect"
"strconv"
"testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
func TestHeightHeightFn_OK(t *testing.T) {
height := uint64(999)
hash := []byte{'A'}
aInfo := &AncestorInfo{
Height: height,
Hash: hash,
Target: &pb.AttestationTarget{
Slot: height,
BlockRoot: hash,
},
}
key, err := heightKeyFn(aInfo)
if err != nil {
t.Fatal(err)
}
strHeightKey := string(aInfo.Target.BlockRoot) + strconv.Itoa(int(aInfo.Target.Slot))
if key != strHeightKey {
t.Errorf("Incorrect hash key: %s, expected %s", key, strHeightKey)
}
}
func TestHeightKeyFn_InvalidObj(t *testing.T) {
_, err := heightKeyFn("bad")
if err != ErrNotAncestorCacheObj {
t.Errorf("Expected error %v, got %v", ErrNotAncestorCacheObj, err)
}
}
func TestAncestorCache_AncestorInfoByHeight(t *testing.T) {
cache := NewBlockAncestorCache()
height := uint64(123)
hash := []byte{'B'}
aInfo := &AncestorInfo{
Height: height,
Hash: hash,
Target: &pb.AttestationTarget{
Slot: height,
BlockRoot: hash,
},
}
fetchedInfo, err := cache.AncestorBySlot(hash, height)
if err != nil {
t.Fatal(err)
}
if fetchedInfo != nil {
t.Error("Expected ancestor info not to exist in empty cache")
}
if err := cache.AddBlockAncestor(aInfo); err != nil {
t.Fatal(err)
}
fetchedInfo, err = cache.AncestorBySlot(hash, height)
if err != nil {
t.Fatal(err)
}
if fetchedInfo == nil {
t.Error("Expected ancestor info to exist")
}
if fetchedInfo.Height != height {
t.Errorf(
"Expected fetched slot number to be %d, got %d",
aInfo.Target.Slot,
fetchedInfo.Target.Slot,
)
}
if !reflect.DeepEqual(fetchedInfo.Target, aInfo.Target) {
t.Errorf(
"Expected fetched info committee to be %v, got %v",
aInfo.Target,
fetchedInfo.Target,
)
}
}
func TestBlockAncestor_maxSize(t *testing.T) {
cache := NewBlockAncestorCache()
for i := 0; i < maxCacheSize+10; i++ {
aInfo := &AncestorInfo{
Height: uint64(i),
Target: &pb.AttestationTarget{
Slot: uint64(i),
},
}
if err := cache.AddBlockAncestor(aInfo); err != nil {
t.Fatal(err)
}
}
if len(cache.ancestorBlockCache.ListKeys()) != maxCacheSize {
t.Errorf(
"Expected hash cache key size to be %d, got %d",
maxCacheSize,
len(cache.ancestorBlockCache.ListKeys()),
)
}
}

beacon-chain/cache/checkpoint_state.go

@@ -0,0 +1,114 @@
package cache
import (
"errors"
"sync"
"github.com/gogo/protobuf/proto"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"k8s.io/client-go/tools/cache"
)
var (
// ErrNotCheckpointState will be returned when a cache object is not a pointer to
// a CheckpointState struct.
ErrNotCheckpointState = errors.New("object is not a checkpoint state struct")
// maxCheckpointStateSize defines the max number of entries the checkpoint-to-state cache can contain.
maxCheckpointStateSize = 4
// Metrics.
checkpointStateMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "check_point_statecache_miss",
Help: "The number of check point state requests that aren't present in the cache.",
})
checkpointStateHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "check_point_state_cache_hit",
Help: "The number of check point state requests that are present in the cache.",
})
)
// CheckpointState defines a checkpoint along with the beacon state processed up to it.
type CheckpointState struct {
Checkpoint *ethpb.Checkpoint
State *pb.BeaconState
}
// CheckpointStateCache is a struct with 1 queue for looking up state by checkpoint.
type CheckpointStateCache struct {
cache *cache.FIFO
lock sync.RWMutex
}
// checkpointState hashes the checkpoint proto to serve as the key for its cached state.
func checkpointState(obj interface{}) (string, error) {
info, ok := obj.(*CheckpointState)
if !ok {
return "", ErrNotCheckpointState
}
h, err := hashutil.HashProto(info.Checkpoint)
if err != nil {
return "", err
}
return string(h[:]), nil
}
// NewCheckpointStateCache creates a new checkpoint state cache for storing/accessing processed state.
func NewCheckpointStateCache() *CheckpointStateCache {
return &CheckpointStateCache{
cache: cache.NewFIFO(checkpointState),
}
}
// StateByCheckpoint fetches the state associated with a checkpoint. Returns a copy of
// the cached state if it exists; otherwise returns nil, nil.
func (c *CheckpointStateCache) StateByCheckpoint(cp *ethpb.Checkpoint) (*pb.BeaconState, error) {
c.lock.RLock()
defer c.lock.RUnlock()
h, err := hashutil.HashProto(cp)
if err != nil {
return nil, err
}
obj, exists, err := c.cache.GetByKey(string(h[:]))
if err != nil {
return nil, err
}
if exists {
checkpointStateHit.Inc()
} else {
checkpointStateMiss.Inc()
return nil, nil
}
info, ok := obj.(*CheckpointState)
if !ok {
return nil, ErrNotCheckpointState
}
return proto.Clone(info.State).(*pb.BeaconState), nil
}
// AddCheckpointState adds a CheckpointState object to the cache. This method also trims the least
// recently added CheckpointState object if the cache size has reached the max cache size limit.
func (c *CheckpointStateCache) AddCheckpointState(cp *CheckpointState) error {
c.lock.Lock()
defer c.lock.Unlock()
if err := c.cache.AddIfNotPresent(cp); err != nil {
return err
}
trim(c.cache, maxCheckpointStateSize)
return nil
}
// CheckpointStateKeys returns the keys of the state in cache.
func (c *CheckpointStateCache) CheckpointStateKeys() []string {
return c.cache.ListKeys()
}
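
A minimal sketch of the add-then-lookup flow, using only the exported API above; the checkpoint and slot values are illustrative. Note that StateByCheckpoint hands back a proto.Clone of the cached state, so callers may mutate the result without corrupting the cache:

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
	pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
	ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)

func main() {
	c := cache.NewCheckpointStateCache()
	cp := &ethpb.Checkpoint{Epoch: 2, Root: []byte{'A'}}
	if err := c.AddCheckpointState(&cache.CheckpointState{
		Checkpoint: cp,
		State:      &pb.BeaconState{Slot: 64},
	}); err != nil {
		panic(err)
	}
	// The lookup key is the hash of the checkpoint proto, so an equal
	// checkpoint value hits the cache; the returned state is a clone.
	st, err := c.StateByCheckpoint(cp)
	if err != nil {
		panic(err)
	}
	fmt.Println(st.Slot) // 64
}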


@@ -0,0 +1,110 @@
package cache
import (
"reflect"
"testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/hashutil"
)
func TestCheckpointStateCacheKeyFn_OK(t *testing.T) {
cp := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'A'}}
info := &CheckpointState{
Checkpoint: cp,
State: &pb.BeaconState{Slot: 64},
}
key, err := checkpointState(info)
if err != nil {
t.Fatal(err)
}
wantedKey, err := hashutil.HashProto(cp)
if err != nil {
t.Fatal(err)
}
if key != string(wantedKey[:]) {
t.Errorf("Incorrect hash key: %s, expected %s", key, string(wantedKey[:]))
}
}
func TestCheckpointStateCacheKeyFn_InvalidObj(t *testing.T) {
_, err := checkpointState("bad")
if err != ErrNotCheckpointState {
t.Errorf("Expected error %v, got %v", ErrNotCheckpointState, err)
}
}
func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
cache := NewCheckpointStateCache()
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'A'}}
info1 := &CheckpointState{
Checkpoint: cp1,
State: &pb.BeaconState{Slot: 64},
}
state, err := cache.StateByCheckpoint(cp1)
if err != nil {
t.Fatal(err)
}
if state != nil {
t.Error("Expected state not to exist in empty cache")
}
if err := cache.AddCheckpointState(info1); err != nil {
t.Fatal(err)
}
state, err = cache.StateByCheckpoint(cp1)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(state, info1.State) {
t.Error("incorrectly cached state")
}
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: []byte{'B'}}
info2 := &CheckpointState{
Checkpoint: cp2,
State: &pb.BeaconState{Slot: 128},
}
if err := cache.AddCheckpointState(info2); err != nil {
t.Fatal(err)
}
state, err = cache.StateByCheckpoint(cp2)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(state, info2.State) {
t.Error("incorrectly cached state")
}
state, err = cache.StateByCheckpoint(cp1)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(state, info1.State) {
t.Error("incorrectly cached state")
}
}
func TestCheckpointStateCache__MaxSize(t *testing.T) {
c := NewCheckpointStateCache()
for i := 0; i < maxCheckpointStateSize+100; i++ {
info := &CheckpointState{
Checkpoint: &ethpb.Checkpoint{Epoch: uint64(i)},
State: &pb.BeaconState{Slot: uint64(i)},
}
if err := c.AddCheckpointState(info); err != nil {
t.Fatal(err)
}
}
if len(c.cache.ListKeys()) != maxCheckpointStateSize {
t.Errorf(
"Expected hash cache key size to be %d, got %d",
maxCheckpointStateSize,
len(c.cache.ListKeys()),
)
}
}


@@ -8,120 +8,212 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/sliceutil"
"k8s.io/client-go/tools/cache"
)
var (
// ErrNotACommitteeInfo will be returned when a cache object is not a pointer to
// a committeeInfo struct.
ErrNotACommitteeInfo = errors.New("object is not an committee info")
// ErrNotCommittee will be returned when a cache object is not a pointer to
// a Committee struct.
ErrNotCommittee = errors.New("object is not a committee struct")
// maxCacheSize is 4x of the epoch length for additional cache padding.
// Requests should be only accessing committees within defined epoch length.
maxCacheSize = int(4 * params.BeaconConfig().SlotsPerEpoch)
// maxShuffledIndicesSize defines the max number of shuffled indices list can cache.
// 3 for previous, current epoch and next epoch.
maxShuffledIndicesSize = 3
// Metrics
committeeCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
// CommitteeCacheMiss tracks the number of committee requests that aren't present in the cache.
CommitteeCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "committee_cache_miss",
Help: "The number of committee requests that aren't present in the cache.",
})
committeeCacheHit = promauto.NewCounter(prometheus.CounterOpts{
// CommitteeCacheHit tracks the number of committee requests that are in the cache.
CommitteeCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "committee_cache_hit",
Help: "The number of committee requests that are present in the cache.",
})
committeeCacheSize = promauto.NewGauge(prometheus.GaugeOpts{
Name: "committee_cache_size",
Help: "The number of committees in the committee cache",
})
)
// CommitteeInfo defines the validator committee of slot and shard combinations.
type CommitteeInfo struct {
Committee []uint64
Shard uint64
// Committee defines the committee per epoch and shard.
type Committee struct {
StartShard uint64
CommitteeCount uint64
Epoch uint64
Committee []uint64
}
// CommitteesInSlot specifies how many CommitteeInfos are in a given slot.
type CommitteesInSlot struct {
Slot uint64
Committees []*CommitteeInfo
// CommitteeCache is a struct with 1 queue for looking up shuffled indices list by epoch and shard.
type CommitteeCache struct {
CommitteeCache *cache.FIFO
lock sync.RWMutex
}
// CommitteesCache structs with 1 queue for looking up committees by slot.
type CommitteesCache struct {
committeesCache *cache.FIFO
lock sync.RWMutex
}
// slotKeyFn takes the string representation of the slot number as the key
// for the committees of a given slot (CommitteesInSlot).
func slotKeyFn(obj interface{}) (string, error) {
cInfo, ok := obj.(*CommitteesInSlot)
// committeeKeyFn takes the epoch as the key to retrieve shuffled indices of a committee in a given epoch.
func committeeKeyFn(obj interface{}) (string, error) {
info, ok := obj.(*Committee)
if !ok {
return "", ErrNotACommitteeInfo
return "", ErrNotCommittee
}
return strconv.Itoa(int(cInfo.Slot)), nil
return strconv.Itoa(int(info.Epoch)), nil
}
// NewCommitteesCache creates a new committee cache for storing/accessing blockInfo from
// memory.
func NewCommitteesCache() *CommitteesCache {
return &CommitteesCache{
committeesCache: cache.NewFIFO(slotKeyFn),
// NewCommitteeCache creates a new committee cache for storing/accessing shuffled indices of a committee.
func NewCommitteeCache() *CommitteeCache {
return &CommitteeCache{
CommitteeCache: cache.NewFIFO(committeeKeyFn),
}
}
// CommitteesInfoBySlot fetches CommitteesInSlot by slot. Returns true with a
// reference to the committees info, if exists. Otherwise returns false, nil.
func (c *CommitteesCache) CommitteesInfoBySlot(slot uint64) (*CommitteesInSlot, error) {
// ShuffledIndices fetches the shuffled indices by epoch and shard. Each list of indices
// represents one committee. Returns the list if it exists for the epoch and shard; otherwise returns nil, nil.
func (c *CommitteeCache) ShuffledIndices(epoch uint64, shard uint64) ([]uint64, error) {
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.committeesCache.GetByKey(strconv.Itoa(int(slot)))
obj, exists, err := c.CommitteeCache.GetByKey(strconv.Itoa(int(epoch)))
if err != nil {
return nil, err
}
if exists {
committeeCacheHit.Inc()
CommitteeCacheHit.Inc()
} else {
committeeCacheMiss.Inc()
CommitteeCacheMiss.Inc()
return nil, nil
}
cInfo, ok := obj.(*CommitteesInSlot)
item, ok := obj.(*Committee)
if !ok {
return nil, ErrNotACommitteeInfo
return nil, ErrNotCommittee
}
return cInfo, nil
start, end := startEndIndices(item, shard)
return item.Committee[start:end], nil
}
// AddCommittees adds CommitteesInSlot object to the cache. This method also trims the least
// recently added committeeInfo object if the cache size has ready the max cache size limit.
func (c *CommitteesCache) AddCommittees(committees *CommitteesInSlot) error {
// AddCommitteeShuffledList adds a Committee shuffled list object to the cache. This
// method also trims the least recently added list if the cache size has reached the max cache size limit.
func (c *CommitteeCache) AddCommitteeShuffledList(committee *Committee) error {
c.lock.Lock()
defer c.lock.Unlock()
if err := c.committeesCache.AddIfNotPresent(committees); err != nil {
if err := c.CommitteeCache.AddIfNotPresent(committee); err != nil {
return err
}
trim(c.committeesCache, maxCacheSize)
committeeCacheSize.Set(float64(len(c.committeesCache.ListKeys())))
trim(c.CommitteeCache, maxShuffledIndicesSize)
return nil
}
// trim the FIFO queue to the maxSize.
func trim(queue *cache.FIFO, maxSize int) {
for s := len(queue.ListKeys()); s > maxSize; s-- {
// #nosec G104 popProcessNoopFunc never returns an error
_, _ = queue.Pop(popProcessNoopFunc)
// Epochs returns the epochs stored in the committee cache. These are the keys to the cache.
func (c *CommitteeCache) Epochs() ([]uint64, error) {
c.lock.RLock()
defer c.lock.RUnlock()
epochs := make([]uint64, len(c.CommitteeCache.ListKeys()))
for i, s := range c.CommitteeCache.ListKeys() {
epoch, err := strconv.Atoi(s)
if err != nil {
return nil, err
}
epochs[i] = uint64(epoch)
}
return epochs, nil
}
// popProcessNoopFunc is a no-op function that never returns an error.
func popProcessNoopFunc(obj interface{}) error {
return nil
// EpochInCache returns true if an input epoch is part of keys in cache.
func (c *CommitteeCache) EpochInCache(wantedEpoch uint64) (bool, error) {
c.lock.RLock()
defer c.lock.RUnlock()
for _, s := range c.CommitteeCache.ListKeys() {
epoch, err := strconv.Atoi(s)
if err != nil {
return false, err
}
if wantedEpoch == uint64(epoch) {
return true, nil
}
}
return false, nil
}
// CommitteeCount returns the total number of committees in a given epoch as stored in cache.
func (c *CommitteeCache) CommitteeCount(epoch uint64) (uint64, bool, error) {
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.CommitteeCache.GetByKey(strconv.Itoa(int(epoch)))
if err != nil {
return 0, false, err
}
if exists {
CommitteeCacheHit.Inc()
} else {
CommitteeCacheMiss.Inc()
return 0, false, nil
}
item, ok := obj.(*Committee)
if !ok {
return 0, false, ErrNotCommittee
}
return item.CommitteeCount, true, nil
}
// StartShard returns the start shard number in a given epoch as stored in cache.
func (c *CommitteeCache) StartShard(epoch uint64) (uint64, bool, error) {
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.CommitteeCache.GetByKey(strconv.Itoa(int(epoch)))
if err != nil {
return 0, false, err
}
if exists {
CommitteeCacheHit.Inc()
} else {
CommitteeCacheMiss.Inc()
return 0, false, nil
}
item, ok := obj.(*Committee)
if !ok {
return 0, false, ErrNotCommittee
}
return item.StartShard, true, nil
}
// ActiveIndices returns the active indices of a given epoch stored in cache.
func (c *CommitteeCache) ActiveIndices(epoch uint64) ([]uint64, error) {
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.CommitteeCache.GetByKey(strconv.Itoa(int(epoch)))
if err != nil {
return nil, err
}
if exists {
CommitteeCacheHit.Inc()
} else {
CommitteeCacheMiss.Inc()
return nil, nil
}
item, ok := obj.(*Committee)
if !ok {
return nil, ErrNotCommittee
}
return item.Committee, nil
}
func startEndIndices(c *Committee, wantedShard uint64) (uint64, uint64) {
shardCount := params.BeaconConfig().ShardCount
currentShard := (wantedShard + shardCount - c.StartShard) % shardCount
validatorCount := uint64(len(c.Committee))
start := sliceutil.SplitOffset(validatorCount, c.CommitteeCount, currentShard)
end := sliceutil.SplitOffset(validatorCount, c.CommitteeCount, currentShard+1)
return start, end
}
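
To make the slicing arithmetic in startEndIndices concrete, a self-contained sketch with a hypothetical shard count of 8 and a local splitOffset assumed to behave like sliceutil.SplitOffset (size*index/chunks); none of these constants come from the real config:

package main

import "fmt"

const shardCount = 8 // illustrative only; mainnet config uses a larger value

// splitOffset mirrors the assumed behavior of sliceutil.SplitOffset.
func splitOffset(size, chunks, index uint64) uint64 {
	return size * index / chunks
}

func main() {
	committee := []uint64{10, 11, 12, 13, 14, 15} // flat shuffled indices
	startShard, committeeCount := uint64(1), uint64(3)
	wantedShard := uint64(2)

	// Offset of the wanted shard relative to the epoch's start shard.
	currentShard := (wantedShard + shardCount - startShard) % shardCount // 1
	start := splitOffset(uint64(len(committee)), committeeCount, currentShard)   // 2
	end := splitOffset(uint64(len(committee)), committeeCount, currentShard+1)   // 4
	fmt.Println(committee[start:end]) // [12 13]
}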


@@ -6,91 +6,234 @@ import (
"testing"
)
func TestSlotKeyFn_OK(t *testing.T) {
cInfo := &CommitteesInSlot{
Slot: 999,
Committees: []*CommitteeInfo{
{Shard: 1, Committee: []uint64{1, 2, 3}},
{Shard: 1, Committee: []uint64{4, 5, 6}},
},
func TestCommitteeKeyFn_OK(t *testing.T) {
item := &Committee{
Epoch: 999,
CommitteeCount: 1,
Committee: []uint64{1, 2, 3, 4, 5},
}
key, err := slotKeyFn(cInfo)
key, err := committeeKeyFn(item)
if err != nil {
t.Fatal(err)
}
strSlot := strconv.Itoa(int(cInfo.Slot))
if key != strSlot {
t.Errorf("Incorrect hash key: %s, expected %s", key, strSlot)
if key != strconv.Itoa(int(item.Epoch)) {
t.Errorf("Incorrect hash key: %s, expected %s", key, strconv.Itoa(int(item.Epoch)))
}
}
func TestSlotKeyFn_InvalidObj(t *testing.T) {
_, err := slotKeyFn("bad")
if err != ErrNotACommitteeInfo {
t.Errorf("Expected error %v, got %v", ErrNotACommitteeInfo, err)
func TestCommitteeKeyFn_InvalidObj(t *testing.T) {
_, err := committeeKeyFn("bad")
if err != ErrNotCommittee {
t.Errorf("Expected error %v, got %v", ErrNotCommittee, err)
}
}
func TestCommitteesCache_CommitteesInfoBySlot(t *testing.T) {
cache := NewCommitteesCache()
func TestCommitteeCache_CommitteesByEpoch(t *testing.T) {
cache := NewCommitteeCache()
cInfo := &CommitteesInSlot{
Slot: 123,
Committees: []*CommitteeInfo{{Shard: 456}},
item := &Committee{
Epoch: 1,
Committee: []uint64{1, 2, 3, 4, 5, 6},
CommitteeCount: 3,
StartShard: 1,
}
fetchedInfo, err := cache.CommitteesInfoBySlot(cInfo.Slot)
epoch := uint64(1)
startShard := uint64(1)
indices, err := cache.ShuffledIndices(epoch, startShard)
if err != nil {
t.Fatal(err)
}
if fetchedInfo != nil {
t.Error("Expected committees info not to exist in empty cache")
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
}
if err := cache.AddCommittees(cInfo); err != nil {
if err := cache.AddCommitteeShuffledList(item); err != nil {
t.Fatal(err)
}
fetchedInfo, err = cache.CommitteesInfoBySlot(cInfo.Slot)
wantedShard := uint64(2)
indices, err = cache.ShuffledIndices(epoch, wantedShard)
if err != nil {
t.Fatal(err)
}
if fetchedInfo == nil {
t.Error("Expected committee info to exist")
}
if fetchedInfo.Slot != cInfo.Slot {
start, end := startEndIndices(item, wantedShard)
if !reflect.DeepEqual(indices, item.Committee[start:end]) {
t.Errorf(
"Expected fetched slot number to be %d, got %d",
cInfo.Slot,
fetchedInfo.Slot,
)
}
if !reflect.DeepEqual(fetchedInfo.Committees, cInfo.Committees) {
t.Errorf(
"Expected fetched info committee to be %v, got %v",
cInfo.Committees,
fetchedInfo.Committees,
"Expected fetched active indices to be %v, got %v",
indices,
item.Committee[start:end],
)
}
}
func TestBlockCache_maxSize(t *testing.T) {
cache := NewCommitteesCache()
for i := 0; i < maxCacheSize+10; i++ {
cInfo := &CommitteesInSlot{
Slot: uint64(i),
}
if err := cache.AddCommittees(cInfo); err != nil {
t.Fatal(err)
}
func TestCommitteeCache_CanRotate(t *testing.T) {
cache := NewCommitteeCache()
item1 := &Committee{Epoch: 1}
if err := cache.AddCommitteeShuffledList(item1); err != nil {
t.Fatal(err)
}
item2 := &Committee{Epoch: 2}
if err := cache.AddCommitteeShuffledList(item2); err != nil {
t.Fatal(err)
}
epochs, err := cache.Epochs()
if err != nil {
t.Fatal(err)
}
wanted := item1.Epoch + item2.Epoch
if sum(epochs) != wanted {
t.Errorf("Wanted: %v, got: %v", wanted, sum(epochs))
}
if len(cache.committeesCache.ListKeys()) != maxCacheSize {
t.Errorf(
"Expected hash cache key size to be %d, got %d",
maxCacheSize,
len(cache.committeesCache.ListKeys()),
)
item3 := &Committee{Epoch: 4}
if err := cache.AddCommitteeShuffledList(item3); err != nil {
t.Fatal(err)
}
epochs, err = cache.Epochs()
if err != nil {
t.Fatal(err)
}
wanted = item1.Epoch + item2.Epoch + item3.Epoch
if sum(epochs) != wanted {
t.Errorf("Wanted: %v, got: %v", wanted, sum(epochs))
}
item4 := &Committee{Epoch: 6}
if err := cache.AddCommitteeShuffledList(item4); err != nil {
t.Fatal(err)
}
epochs, err = cache.Epochs()
if err != nil {
t.Fatal(err)
}
wanted = item2.Epoch + item3.Epoch + item4.Epoch
if sum(epochs) != wanted {
t.Errorf("Wanted: %v, got: %v", wanted, sum(epochs))
}
}
func TestCommitteeCache_EpochInCache(t *testing.T) {
cache := NewCommitteeCache()
if err := cache.AddCommitteeShuffledList(&Committee{Epoch: 1}); err != nil {
t.Fatal(err)
}
if err := cache.AddCommitteeShuffledList(&Committee{Epoch: 2}); err != nil {
t.Fatal(err)
}
if err := cache.AddCommitteeShuffledList(&Committee{Epoch: 99}); err != nil {
t.Fatal(err)
}
if err := cache.AddCommitteeShuffledList(&Committee{Epoch: 100}); err != nil {
t.Fatal(err)
}
inCache, err := cache.EpochInCache(1)
if err != nil {
t.Fatal(err)
}
if inCache {
t.Error("Epoch shouldn't be in cache")
}
inCache, err = cache.EpochInCache(100)
if err != nil {
t.Fatal(err)
}
if !inCache {
t.Error("Epoch should be in cache")
}
}
func TestCommitteeCache_CommitteesCount(t *testing.T) {
cache := NewCommitteeCache()
committeeCount := uint64(3)
epoch := uint64(10)
item := &Committee{Epoch: epoch, CommitteeCount: committeeCount}
_, exists, err := cache.CommitteeCount(1)
if err != nil {
t.Fatal(err)
}
if exists {
t.Error("Expected committee count not to exist in empty cache")
}
if err := cache.AddCommitteeShuffledList(item); err != nil {
t.Fatal(err)
}
count, exists, err := cache.CommitteeCount(epoch)
if err != nil {
t.Fatal(err)
}
if !exists {
t.Error("Expected committee count to be in cache")
}
if count != committeeCount {
t.Errorf("wanted: %d, got: %d", committeeCount, count)
}
}
func TestCommitteeCache_StartShard(t *testing.T) {
cache := NewCommitteeCache()
startShard := uint64(7)
epoch := uint64(3)
item := &Committee{Epoch: epoch, StartShard: startShard}
_, exists, err := cache.StartShard(1)
if err != nil {
t.Fatal(err)
}
if exists {
t.Error("Expected start shard not to exist in empty cache")
}
if err := cache.AddCommitteeShuffledList(item); err != nil {
t.Fatal(err)
}
shard, exists, err := cache.StartShard(epoch)
if err != nil {
t.Fatal(err)
}
if !exists {
t.Error("Expected start shard to be in cache")
}
if shard != startShard {
t.Errorf("wanted: %d, got: %d", startShard, shard)
}
}
func TestCommitteeCache_ActiveIndices(t *testing.T) {
cache := NewCommitteeCache()
item := &Committee{Epoch: 1, Committee: []uint64{1, 2, 3, 4, 5, 6}}
indices, err := cache.ActiveIndices(1)
if err != nil {
t.Fatal(err)
}
if indices != nil {
t.Error("Expected committee count not to exist in empty cache")
}
if err := cache.AddCommitteeShuffledList(item); err != nil {
t.Fatal(err)
}
indices, err = cache.ActiveIndices(1)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(indices, item.Committee) {
t.Error("Did not receive correct active indices from cache")
}
}
func sum(values []uint64) uint64 {
sum := uint64(0)
for _, v := range values {
sum = v + sum
}
return sum
}

beacon-chain/cache/common.go

@@ -0,0 +1,25 @@
package cache
import (
"github.com/prysmaticlabs/prysm/shared/params"
"k8s.io/client-go/tools/cache"
)
var (
// maxCacheSize is 4x of the epoch length for additional cache padding.
// Requests should be only accessing committees within defined epoch length.
maxCacheSize = int(4 * params.BeaconConfig().SlotsPerEpoch)
)
// trim the FIFO queue to the maxSize.
func trim(queue *cache.FIFO, maxSize int) {
for s := len(queue.ListKeys()); s > maxSize; s-- {
// #nosec G104 popProcessNoopFunc never returns an error
_, _ = queue.Pop(popProcessNoopFunc)
}
}
// popProcessNoopFunc is a no-op function that never returns an error.
func popProcessNoopFunc(obj interface{}) error {
return nil
}
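
A small self-contained sketch of what trim does to the underlying k8s.io/client-go FIFO: Pop drains the oldest entries first, so trimming evicts the least recently added items. The string key function here is illustrative only:

package main

import (
	"fmt"

	k8scache "k8s.io/client-go/tools/cache"
)

func main() {
	q := k8scache.NewFIFO(func(obj interface{}) (string, error) {
		return obj.(string), nil
	})
	for _, k := range []string{"a", "b", "c", "d"} {
		_ = q.Add(k)
	}
	// Trim to two entries, mirroring trim(queue, maxSize) above.
	for len(q.ListKeys()) > 2 {
		_, _ = q.Pop(func(obj interface{}) error { return nil })
	}
	fmt.Println(q.ListKeys()) // the two most recently added keys remain
}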


@@ -0,0 +1,34 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"deposits_cache.go",
"pending_deposits.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//proto/eth/v1alpha1:go_default_library",
"//shared/hashutil:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"deposits_test.go",
"pending_deposits_test.go",
],
embed = [":go_default_library"],
deps = [
"//proto/eth/v1alpha1:go_default_library",
"//shared/bytesutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)


@@ -0,0 +1,159 @@
package depositcache
import (
"bytes"
"context"
"encoding/hex"
"math/big"
"sort"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var (
historicalDepositsCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacondb_all_deposits",
Help: "The number of total deposits in the beaconDB in-memory database",
})
)
// DepositFetcher defines an interface for retrieving deposit information from a store.
type DepositFetcher interface {
AllDeposits(ctx context.Context, beforeBlk *big.Int) []*ethpb.Deposit
DepositByPubkey(ctx context.Context, pubKey []byte) (*ethpb.Deposit, *big.Int)
DepositsNumberAndRootAtHeight(ctx context.Context, blockHeight *big.Int) (uint64, [32]byte)
}
// DepositCache stores all in-memory deposit objects. It holds all the
// deposit-related data required by the beacon node.
type DepositCache struct {
// Beacon chain deposits in memory.
pendingDeposits []*DepositContainer
deposits []*DepositContainer
depositsLock sync.RWMutex
chainStartDeposits []*ethpb.Deposit
chainstartPubkeys map[string]bool
chainstartPubkeysLock sync.RWMutex
}
// DepositContainer object for holding the deposit and a reference to the block in
// which the deposit transaction was included in the proof of work chain.
type DepositContainer struct {
Deposit *ethpb.Deposit
Block *big.Int
Index int
depositRoot [32]byte
}
// NewDepositCache instantiates a new deposit cache
func NewDepositCache() *DepositCache {
return &DepositCache{
pendingDeposits: []*DepositContainer{},
deposits: []*DepositContainer{},
chainstartPubkeys: make(map[string]bool),
chainStartDeposits: make([]*ethpb.Deposit, 0),
}
}
// InsertDeposit into the database. If deposit or block number are nil
// then this method does nothing.
func (dc *DepositCache) InsertDeposit(ctx context.Context, d *ethpb.Deposit, blockNum *big.Int, index int, depositRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.InsertDeposit")
defer span.End()
if d == nil || blockNum == nil {
log.WithFields(log.Fields{
"block": blockNum,
"deposit": d,
"index": index,
"deposit root": hex.EncodeToString(depositRoot[:]),
}).Warn("Ignoring nil deposit insertion")
return
}
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
// Keep the slice sorted on insertion in order to avoid costly sorting on retrieval.
heightIdx := sort.Search(len(dc.deposits), func(i int) bool { return dc.deposits[i].Index >= index })
newDeposits := append([]*DepositContainer{{Deposit: d, Block: blockNum, depositRoot: depositRoot, Index: index}}, dc.deposits[heightIdx:]...)
dc.deposits = append(dc.deposits[:heightIdx], newDeposits...)
historicalDepositsCount.Inc()
}
// MarkPubkeyForChainstart sets the pubkey deposit status to true.
func (dc *DepositCache) MarkPubkeyForChainstart(ctx context.Context, pubkey string) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.MarkPubkeyForChainstart")
defer span.End()
dc.chainstartPubkeysLock.Lock()
defer dc.chainstartPubkeysLock.Unlock()
dc.chainstartPubkeys[pubkey] = true
}
// PubkeyInChainstart returns whether the given pubkey has been marked as deposited for chainstart.
func (dc *DepositCache) PubkeyInChainstart(ctx context.Context, pubkey string) bool {
ctx, span := trace.StartSpan(ctx, "BeaconDB.PubkeyInChainstart")
defer span.End()
dc.chainstartPubkeysLock.Lock()
defer dc.chainstartPubkeysLock.Unlock()
if dc.chainstartPubkeys != nil {
return dc.chainstartPubkeys[pubkey]
}
dc.chainstartPubkeys = make(map[string]bool)
return false
}
// AllDeposits returns a list of all historical deposits until the given block number
// (inclusive). If no block is specified then this method returns all historical deposits.
func (dc *DepositCache) AllDeposits(ctx context.Context, beforeBlk *big.Int) []*ethpb.Deposit {
ctx, span := trace.StartSpan(ctx, "BeaconDB.AllDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var deposits []*ethpb.Deposit
for _, ctnr := range dc.deposits {
if beforeBlk == nil || beforeBlk.Cmp(ctnr.Block) > -1 {
deposits = append(deposits, ctnr.Deposit)
}
}
return deposits
}
// DepositsNumberAndRootAtHeight returns the number of deposits made up to and including the given
// block height, along with the root that corresponds to the latest deposit at that height.
func (dc *DepositCache) DepositsNumberAndRootAtHeight(ctx context.Context, blockHeight *big.Int) (uint64, [32]byte) {
ctx, span := trace.StartSpan(ctx, "Beacondb.DepositsNumberAndRootAtHeight")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
heightIdx := sort.Search(len(dc.deposits), func(i int) bool { return dc.deposits[i].Block.Cmp(blockHeight) > 0 })
// Return the deposit root of the empty trie if the wanted height predates the
// earliest deposit.
if heightIdx == 0 {
return 0, [32]byte{}
}
return uint64(heightIdx), dc.deposits[heightIdx-1].depositRoot
}
// DepositByPubkey looks through historical deposits and finds one which contains
// a certain public key within its deposit data.
func (dc *DepositCache) DepositByPubkey(ctx context.Context, pubKey []byte) (*ethpb.Deposit, *big.Int) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.DepositByPubkey")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var deposit *ethpb.Deposit
var blockNum *big.Int
for _, ctnr := range dc.deposits {
if bytes.Equal(ctnr.Deposit.Data.PublicKey, pubKey) {
deposit = ctnr.Deposit
blockNum = ctnr.Block
break
}
}
return deposit, blockNum
}
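
A brief usage sketch of the deposit cache above, with placeholder deposits and block numbers; it relies only on the exported methods in this file:

package main

import (
	"context"
	"fmt"
	"math/big"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
	ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)

func main() {
	ctx := context.Background()
	dc := depositcache.NewDepositCache()

	// Deposits may arrive out of order; InsertDeposit keeps the slice
	// sorted by index so later reads need no sort.
	dc.InsertDeposit(ctx, &ethpb.Deposit{}, big.NewInt(12), 2, [32]byte{2})
	dc.InsertDeposit(ctx, &ethpb.Deposit{}, big.NewInt(10), 0, [32]byte{0})
	dc.InsertDeposit(ctx, &ethpb.Deposit{}, big.NewInt(11), 1, [32]byte{1})

	// All deposits included at or before block 11.
	fmt.Println(len(dc.AllDeposits(ctx, big.NewInt(11)))) // 2

	// Count and latest deposit root as of block 11.
	n, root := dc.DepositsNumberAndRootAtHeight(ctx, big.NewInt(11))
	fmt.Println(n, root[0]) // 2 1
}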


@@ -0,0 +1,288 @@
package depositcache
import (
"bytes"
"context"
"math/big"
"testing"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
logTest "github.com/sirupsen/logrus/hooks/test"
)
const nilDepositErr = "Ignoring nil deposit insertion"
var _ = DepositFetcher(&DepositCache{})
func TestBeaconDB_InsertDeposit_LogsOnNilDepositInsertion(t *testing.T) {
hook := logTest.NewGlobal()
dc := DepositCache{}
dc.InsertDeposit(context.Background(), nil, big.NewInt(1), 0, [32]byte{})
if len(dc.deposits) != 0 {
t.Fatal("Number of deposits changed")
}
if hook.LastEntry().Message != nilDepositErr {
t.Errorf("Did not log correct message, wanted \"Ignoring nil deposit insertion\", got \"%s\"", hook.LastEntry().Message)
}
}
func TestBeaconDB_InsertDeposit_LogsOnNilBlockNumberInsertion(t *testing.T) {
hook := logTest.NewGlobal()
dc := DepositCache{}
dc.InsertDeposit(context.Background(), &ethpb.Deposit{}, nil, 0, [32]byte{})
if len(dc.deposits) != 0 {
t.Fatal("Number of deposits changed")
}
if hook.LastEntry().Message != nilDepositErr {
t.Errorf("Did not log correct message, wanted \"Ignoring nil deposit insertion\", got \"%s\"", hook.LastEntry().Message)
}
}
func TestBeaconDB_InsertDeposit_MaintainsSortedOrderByIndex(t *testing.T) {
dc := DepositCache{}
insertions := []struct {
blkNum *big.Int
deposit *ethpb.Deposit
index int
}{
{
blkNum: big.NewInt(0),
deposit: &ethpb.Deposit{},
index: 0,
},
{
blkNum: big.NewInt(0),
deposit: &ethpb.Deposit{},
index: 3,
},
{
blkNum: big.NewInt(0),
deposit: &ethpb.Deposit{},
index: 1,
},
{
blkNum: big.NewInt(0),
deposit: &ethpb.Deposit{},
index: 4,
},
}
for _, ins := range insertions {
dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{})
}
expectedIndices := []int{0, 1, 3, 4}
for i, ei := range expectedIndices {
if dc.deposits[i].Index != ei {
t.Errorf("dc.deposits[%d].Index = %d, wanted %d", i, dc.deposits[i].Index, ei)
}
}
}
func TestBeaconDB_AllDeposits_ReturnsAllDeposits(t *testing.T) {
dc := DepositCache{}
deposits := []*DepositContainer{
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
},
}
dc.deposits = deposits
d := dc.AllDeposits(context.Background(), nil)
if len(d) != len(deposits) {
t.Errorf("Return the wrong number of deposits (%d) wanted %d", len(d), len(deposits))
}
}
func TestBeaconDB_AllDeposits_FiltersDepositUpToAndIncludingBlockNumber(t *testing.T) {
dc := DepositCache{}
deposits := []*DepositContainer{
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
},
}
dc.deposits = deposits
d := dc.AllDeposits(context.Background(), big.NewInt(11))
expected := 5
if len(d) != expected {
t.Errorf("Return the wrong number of deposits (%d) wanted %d", len(d), expected)
}
}
func TestBeaconDB_DepositsNumberAndRootAtHeight_ReturnsAppropriateCountAndRoot(t *testing.T) {
dc := DepositCache{}
dc.deposits = []*DepositContainer{
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
depositRoot: bytesutil.ToBytes32([]byte("root")),
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{},
},
}
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(11))
if int(n) != 5 {
t.Errorf("Returned unexpected deposits number %d wanted %d", n, 5)
}
if root != bytesutil.ToBytes32([]byte("root")) {
t.Errorf("Returned unexpected root: %v", root)
}
}
func TestBeaconDB_DepositsNumberAndRootAtHeight_ReturnsEmptyTrieIfBlockHeightLessThanOldestDeposit(t *testing.T) {
dc := DepositCache{}
dc.deposits = []*DepositContainer{
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{},
depositRoot: bytesutil.ToBytes32([]byte("root")),
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{},
depositRoot: bytesutil.ToBytes32([]byte("root")),
},
}
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(2))
if int(n) != 0 {
t.Errorf("Returned unexpected deposits number %d wanted %d", n, 0)
}
if root != [32]byte{} {
t.Errorf("Returned unexpected root: %v", root)
}
}
func TestBeaconDB_DepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
dc := DepositCache{}
dc.deposits = []*DepositContainer{
{
Block: big.NewInt(9),
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: []byte("pk0"),
},
},
},
{
Block: big.NewInt(10),
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: []byte("pk1"),
},
},
},
{
Block: big.NewInt(11),
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: []byte("pk1"),
},
},
},
{
Block: big.NewInt(12),
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: []byte("pk2"),
},
},
},
}
dep, blkNum := dc.DepositByPubkey(context.Background(), []byte("pk1"))
if !bytes.Equal(dep.Data.PublicKey, []byte("pk1")) {
t.Error("Returned wrong deposit")
}
if blkNum.Cmp(big.NewInt(10)) != 0 {
t.Errorf("Returned wrong block number %v", blkNum)
}
}


@@ -0,0 +1,163 @@
package depositcache
import (
"context"
"math/big"
"sort"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/hashutil"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var (
pendingDepositsCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacondb_pending_deposits",
Help: "The number of pending deposits in the beaconDB in-memory database",
})
)
// PendingDepositsFetcher defines an interface for retrieving deposits
// which have not yet been included in the chain.
type PendingDepositsFetcher interface {
PendingContainers(ctx context.Context, beforeBlk *big.Int) []*DepositContainer
}
// InsertPendingDeposit into the database. If deposit or block number are nil
// then this method does nothing.
func (dc *DepositCache) InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum *big.Int, index int, depositRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.InsertPendingDeposit")
defer span.End()
if d == nil || blockNum == nil {
log.WithFields(log.Fields{
"block": blockNum,
"deposit": d,
}).Debug("Ignoring nil deposit insertion")
return
}
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
dc.pendingDeposits = append(dc.pendingDeposits, &DepositContainer{Deposit: d, Block: blockNum, Index: index, depositRoot: depositRoot})
pendingDepositsCount.Inc()
span.AddAttributes(trace.Int64Attribute("count", int64(len(dc.pendingDeposits))))
}
// PendingDeposits returns a list of deposits until the given block number
// (inclusive). If no block is specified then this method returns all pending
// deposits.
func (dc *DepositCache) PendingDeposits(ctx context.Context, beforeBlk *big.Int) []*ethpb.Deposit {
ctx, span := trace.StartSpan(ctx, "DepositsCache.PendingDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var depositCntrs []*DepositContainer
for _, ctnr := range dc.pendingDeposits {
if beforeBlk == nil || beforeBlk.Cmp(ctnr.Block) > -1 {
depositCntrs = append(depositCntrs, ctnr)
}
}
// Sort the deposits by Merkle index.
sort.SliceStable(depositCntrs, func(i, j int) bool {
return depositCntrs[i].Index < depositCntrs[j].Index
})
var deposits []*ethpb.Deposit
for _, dep := range depositCntrs {
deposits = append(deposits, dep.Deposit)
}
span.AddAttributes(trace.Int64Attribute("count", int64(len(deposits))))
return deposits
}
// PendingContainers returns a list of deposit containers until the given block number
// (inclusive).
func (dc *DepositCache) PendingContainers(ctx context.Context, beforeBlk *big.Int) []*DepositContainer {
ctx, span := trace.StartSpan(ctx, "DepositsCache.PendingDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var depositCntrs []*DepositContainer
for _, ctnr := range dc.pendingDeposits {
if beforeBlk == nil || beforeBlk.Cmp(ctnr.Block) > -1 {
depositCntrs = append(depositCntrs, ctnr)
}
}
// Sort the deposits by Merkle index.
sort.SliceStable(depositCntrs, func(i, j int) bool {
return depositCntrs[i].Index < depositCntrs[j].Index
})
span.AddAttributes(trace.Int64Attribute("count", int64(len(depositCntrs))))
return depositCntrs
}
// RemovePendingDeposit from the database. The deposit to remove is matched by its
// hashed proto. This method does nothing if the deposit ptr is nil.
func (dc *DepositCache) RemovePendingDeposit(ctx context.Context, d *ethpb.Deposit) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.RemovePendingDeposit")
defer span.End()
if d == nil {
log.Debug("Ignoring nil deposit removal")
return
}
depRoot, err := hashutil.HashProto(d)
if err != nil {
log.Errorf("Could not remove deposit %v", err)
return
}
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
idx := -1
for i, ctnr := range dc.pendingDeposits {
hash, err := hashutil.HashProto(ctnr.Deposit)
if err != nil {
log.Errorf("Could not hash deposit %v", err)
continue
}
if hash == depRoot {
idx = i
break
}
}
if idx >= 0 {
dc.pendingDeposits = append(dc.pendingDeposits[:idx], dc.pendingDeposits[idx+1:]...)
pendingDepositsCount.Dec()
}
}
// PrunePendingDeposits removes any deposit which is older than the given deposit merkle tree index.
func (dc *DepositCache) PrunePendingDeposits(ctx context.Context, merkleTreeIndex int) {
ctx, span := trace.StartSpan(ctx, "DepositsCache.PrunePendingDeposits")
defer span.End()
if merkleTreeIndex == 0 {
log.Debug("Ignoring 0 deposit removal")
return
}
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
var cleanDeposits []*DepositContainer
for _, dp := range dc.pendingDeposits {
if dp.Index >= merkleTreeIndex {
cleanDeposits = append(cleanDeposits, dp)
}
}
dc.pendingDeposits = cleanDeposits
pendingDepositsCount.Set(float64(len(dc.pendingDeposits)))
}
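
A sketch of the pending-deposit lifecycle implied by this file: insert as deposits are observed, read those up to a block, and prune once they are included in the chain. Deposit values are placeholders:

package main

import (
	"context"
	"fmt"
	"math/big"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
	ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)

func main() {
	ctx := context.Background()
	dc := depositcache.NewDepositCache()

	dc.InsertPendingDeposit(ctx, &ethpb.Deposit{}, big.NewInt(10), 0, [32]byte{})
	dc.InsertPendingDeposit(ctx, &ethpb.Deposit{}, big.NewInt(11), 1, [32]byte{})
	dc.InsertPendingDeposit(ctx, &ethpb.Deposit{}, big.NewInt(12), 2, [32]byte{})

	// Pending deposits up to and including block 11, sorted by Merkle index.
	fmt.Println(len(dc.PendingDeposits(ctx, big.NewInt(11)))) // 2

	// Once deposits below index 2 are included in the chain, prune them.
	dc.PrunePendingDeposits(ctx, 2)
	fmt.Println(len(dc.PendingDeposits(ctx, nil))) // 1
}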


@@ -0,0 +1,162 @@
package depositcache
import (
"context"
"math/big"
"reflect"
"testing"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/prysm/proto/eth/v1alpha1"
)
var _ = PendingDepositsFetcher(&DepositCache{})
func TestInsertPendingDeposit_OK(t *testing.T) {
dc := DepositCache{}
dc.InsertPendingDeposit(context.Background(), &ethpb.Deposit{}, big.NewInt(111), 100, [32]byte{})
if len(dc.pendingDeposits) != 1 {
t.Error("Deposit not inserted")
}
}
func TestInsertPendingDeposit_ignoresNilDeposit(t *testing.T) {
dc := DepositCache{}
dc.InsertPendingDeposit(context.Background(), nil /*deposit*/, nil /*blockNum*/, 0, [32]byte{})
if len(dc.pendingDeposits) > 0 {
t.Error("Unexpected deposit insertion")
}
}
func TestRemovePendingDeposit_OK(t *testing.T) {
db := DepositCache{}
depToRemove := &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}
otherDep := &ethpb.Deposit{Proof: [][]byte{[]byte("B")}}
db.pendingDeposits = []*DepositContainer{
{Deposit: depToRemove, Index: 1},
{Deposit: otherDep, Index: 5},
}
db.RemovePendingDeposit(context.Background(), depToRemove)
if len(db.pendingDeposits) != 1 || !proto.Equal(db.pendingDeposits[0].Deposit, otherDep) {
t.Error("Failed to remove deposit")
}
}
func TestRemovePendingDeposit_IgnoresNilDeposit(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*DepositContainer{{Deposit: &ethpb.Deposit{}}}
dc.RemovePendingDeposit(context.Background(), nil /*deposit*/)
if len(dc.pendingDeposits) != 1 {
t.Errorf("Deposit unexpectedly removed")
}
}
func TestPendingDeposit_RoundTrip(t *testing.T) {
dc := DepositCache{}
dep := &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}
dc.InsertPendingDeposit(context.Background(), dep, big.NewInt(111), 100, [32]byte{})
dc.RemovePendingDeposit(context.Background(), dep)
if len(dc.pendingDeposits) != 0 {
t.Error("Failed to insert & delete a pending deposit")
}
}
func TestPendingDeposits_OK(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*DepositContainer{
{Block: big.NewInt(2), Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}},
{Block: big.NewInt(4), Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("B")}}},
{Block: big.NewInt(6), Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("c")}}},
}
deposits := dc.PendingDeposits(context.Background(), big.NewInt(4))
expected := []*ethpb.Deposit{
{Proof: [][]byte{[]byte("A")}},
{Proof: [][]byte{[]byte("B")}},
}
if !reflect.DeepEqual(deposits, expected) {
t.Errorf("Unexpected deposits. got=%+v want=%+v", deposits, expected)
}
all := dc.PendingDeposits(context.Background(), nil)
if len(all) != len(dc.pendingDeposits) {
t.Error("PendingDeposits(ctx, nil) did not return all deposits")
}
}
func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*DepositContainer{
{Block: big.NewInt(2), Index: 2},
{Block: big.NewInt(4), Index: 4},
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 0)
expected := []*DepositContainer{
{Block: big.NewInt(2), Index: 2},
{Block: big.NewInt(4), Index: 4},
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
}
if !reflect.DeepEqual(dc.pendingDeposits, expected) {
t.Errorf("Unexpected deposits. got=%+v want=%+v", dc.pendingDeposits, expected)
}
}
func TestPrunePendingDeposits_OK(t *testing.T) {
dc := DepositCache{}
dc.pendingDeposits = []*DepositContainer{
{Block: big.NewInt(2), Index: 2},
{Block: big.NewInt(4), Index: 4},
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 6)
expected := []*DepositContainer{
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
}
if !reflect.DeepEqual(dc.pendingDeposits, expected) {
t.Errorf("Unexpected deposits. got=%+v want=%+v", dc.pendingDeposits, expected)
}
dc.pendingDeposits = []*DepositContainer{
{Block: big.NewInt(2), Index: 2},
{Block: big.NewInt(4), Index: 4},
{Block: big.NewInt(6), Index: 6},
{Block: big.NewInt(8), Index: 8},
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 10)
expected = []*DepositContainer{
{Block: big.NewInt(10), Index: 10},
{Block: big.NewInt(12), Index: 12},
}
if !reflect.DeepEqual(dc.pendingDeposits, expected) {
t.Errorf("Unexpected deposits. got=%+v want=%+v", dc.pendingDeposits, expected)
}
}

beacon-chain/cache/eth1_data.go

@@ -0,0 +1,136 @@
package cache
import (
"errors"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"k8s.io/client-go/tools/cache"
)
var (
// ErrNotEth1DataVote will be returned when a cache object is not a pointer to
// an Eth1DataVote struct.
ErrNotEth1DataVote = errors.New("object is not an eth1 data vote obj")
// maxEth1DataVoteSize defines the max number of eth1 data votes the cache can hold.
maxEth1DataVoteSize = 1000
// Metrics.
eth1DataVoteCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "eth1_data_vote_cache_miss",
Help: "The number of eth1 data vote count requests that aren't present in the cache.",
})
eth1DataVoteCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "eth1_data_vote_cache_hit",
Help: "The number of eth1 data vote count requests that are present in the cache.",
})
)
// Eth1DataVote defines the struct which keeps track of the vote count of individual deposit root.
type Eth1DataVote struct {
Eth1DataHash [32]byte
VoteCount uint64
}
// Eth1DataVoteCache is a struct with 1 queue for looking up eth1 data vote count by deposit root.
type Eth1DataVoteCache struct {
eth1DataVoteCache *cache.FIFO
lock sync.RWMutex
}
// eth1DataVoteKeyFn takes the eth1data hash as the key for the eth1 data vote count of a given eth1data object.
func eth1DataVoteKeyFn(obj interface{}) (string, error) {
eInfo, ok := obj.(*Eth1DataVote)
if !ok {
return "", ErrNotEth1DataVote
}
return string(eInfo.Eth1DataHash[:]), nil
}
// NewEth1DataVoteCache creates a new eth1 data vote count cache for storing/accessing Eth1DataVote.
func NewEth1DataVoteCache() *Eth1DataVoteCache {
return &Eth1DataVoteCache{
eth1DataVoteCache: cache.NewFIFO(eth1DataVoteKeyFn),
}
}
// Eth1DataVote fetches the eth1 data vote count by the eth1data hash. Returns the vote
// count if it exists; otherwise returns 0, nil.
func (c *Eth1DataVoteCache) Eth1DataVote(eth1DataHash [32]byte) (uint64, error) {
if !featureconfig.Get().EnableEth1DataVoteCache {
// Return a miss result if cache is not enabled.
eth1DataVoteCacheMiss.Inc()
return 0, nil
}
c.lock.RLock()
defer c.lock.RUnlock()
obj, exists, err := c.eth1DataVoteCache.GetByKey(string(eth1DataHash[:]))
if err != nil {
return 0, err
}
if exists {
eth1DataVoteCacheHit.Inc()
} else {
eth1DataVoteCacheMiss.Inc()
return 0, nil
}
eInfo, ok := obj.(*Eth1DataVote)
if !ok {
return 0, ErrNotEth1DataVote
}
return eInfo.VoteCount, nil
}
// AddEth1DataVote adds an eth1 data vote object to the cache. This method also trims the least
// recently added Eth1DataVote object if the cache size has reached the max cache size limit.
func (c *Eth1DataVoteCache) AddEth1DataVote(eth1DataVote *Eth1DataVote) error {
if !featureconfig.Get().EnableEth1DataVoteCache {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
if err := c.eth1DataVoteCache.Add(eth1DataVote); err != nil {
return err
}
trim(c.eth1DataVoteCache, maxEth1DataVoteSize)
return nil
}
// IncrementEth1DataVote increments the existing eth1 data object's vote count by 1,
// and returns the vote count.
func (c *Eth1DataVoteCache) IncrementEth1DataVote(eth1DataHash [32]byte) (uint64, error) {
if !featureconfig.Get().EnableEth1DataVoteCache {
return 0, nil
}
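// A full write lock is needed here (rather than a read lock) because the
// vote count is mutated and re-added to the cache below.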
c.lock.Lock()
defer c.lock.Unlock()
obj, exists, err := c.eth1DataVoteCache.GetByKey(string(eth1DataHash[:]))
if err != nil {
return 0, err
}
if !exists {
return 0, errors.New("eth1 data vote object does not exist")
}
eth1DataVoteCacheHit.Inc()
eInfo, _ := obj.(*Eth1DataVote)
eInfo.VoteCount++
if err := c.eth1DataVoteCache.Add(eInfo); err != nil {
return 0, err
}
return eInfo.VoteCount, nil
}
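For orientation, here is a minimal sketch of how a caller might drive this cache when tallying votes. The helper name and flow are hypothetical (not part of the package), and it assumes the `EnableEth1DataVoteCache` feature flag is on, since the methods above are no-ops otherwise:

```go
package cache

// recordEth1DataVote is a hypothetical helper illustrating the intended
// read-then-write flow: look up the hash, seed an entry on a miss, and
// increment the stored count on a hit.
func recordEth1DataVote(c *Eth1DataVoteCache, hash [32]byte) (uint64, error) {
	count, err := c.Eth1DataVote(hash)
	if err != nil {
		return 0, err
	}
	if count == 0 {
		// First sighting of this eth1 data hash: seed an initial vote.
		if err := c.AddEth1DataVote(&Eth1DataVote{Eth1DataHash: hash, VoteCount: 1}); err != nil {
			return 0, err
		}
		return 1, nil
	}
	// Known hash: bump the stored count by one.
	return c.IncrementEth1DataVote(hash)
}
```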

110
beacon-chain/cache/eth1_data_test.go vendored Normal file
View File

@@ -0,0 +1,110 @@
package cache
import (
"strconv"
"testing"
)
func TestEth1DataVoteKeyFn_OK(t *testing.T) {
eInfo := &Eth1DataVote{
VoteCount: 44,
Eth1DataHash: [32]byte{'A'},
}
key, err := eth1DataVoteKeyFn(eInfo)
if err != nil {
t.Fatal(err)
}
if key != string(eInfo.Eth1DataHash[:]) {
t.Errorf("Incorrect hash key: %s, expected %s", key, string(eInfo.Eth1DataHash[:]))
}
}
func TestEth1DataVoteKeyFn_InvalidObj(t *testing.T) {
_, err := eth1DataVoteKeyFn("bad")
if err != ErrNotEth1DataVote {
t.Errorf("Expected error %v, got %v", ErrNotEth1DataVote, err)
}
}
func TestEth1DataVoteCache_CanAdd(t *testing.T) {
cache := NewEth1DataVoteCache()
eInfo := &Eth1DataVote{
VoteCount: 55,
Eth1DataHash: [32]byte{'B'},
}
count, err := cache.Eth1DataVote(eInfo.Eth1DataHash)
if err != nil {
t.Fatal(err)
}
if count != 0 {
t.Error("Expected seed not to exist in empty cache")
}
if err := cache.AddEth1DataVote(eInfo); err != nil {
t.Fatal(err)
}
count, err = cache.Eth1DataVote(eInfo.Eth1DataHash)
if err != nil {
t.Fatal(err)
}
if count != eInfo.VoteCount {
t.Errorf(
"Expected vote count to be %d, got %d",
eInfo.VoteCount,
count,
)
}
}
func TestEth1DataVoteCache_CanIncrement(t *testing.T) {
cache := NewEth1DataVoteCache()
eInfo := &Eth1DataVote{
VoteCount: 55,
Eth1DataHash: [32]byte{'B'},
}
if err := cache.AddEth1DataVote(eInfo); err != nil {
t.Fatal(err)
}
_, err := cache.IncrementEth1DataVote(eInfo.Eth1DataHash)
if err != nil {
t.Fatal(err)
}
_, _ = cache.IncrementEth1DataVote(eInfo.Eth1DataHash)
count, _ := cache.IncrementEth1DataVote(eInfo.Eth1DataHash)
if count != 58 {
t.Errorf(
"Expected vote count to be %d, got %d",
58,
count,
)
}
}
func TestEth1Data_MaxSize(t *testing.T) {
cache := NewEth1DataVoteCache()
for i := 0; i < maxEth1DataVoteSize+1; i++ {
var hash [32]byte
copy(hash[:], []byte(strconv.Itoa(i)))
eInfo := &Eth1DataVote{
Eth1DataHash: hash,
}
if err := cache.AddEth1DataVote(eInfo); err != nil {
t.Fatal(err)
}
}
if len(cache.eth1DataVoteCache.ListKeys()) != maxEth1DataVoteSize {
t.Errorf(
"Expected hash cache key size to be %d, got %d",
maxEth1DataVoteSize,
len(cache.eth1DataVoteCache.ListKeys()),
)
}
}

10
beacon-chain/cache/feature_flag_test.go vendored Normal file
View File

@@ -0,0 +1,10 @@
package cache
import "github.com/prysmaticlabs/prysm/shared/featureconfig"
func init() {
featureconfig.Init(&featureconfig.Flag{
EnableAttestationCache: true,
EnableEth1DataVoteCache: true,
})
}

99
beacon-chain/cache/shuffled_indices.go vendored Normal file
View File

@@ -0,0 +1,99 @@
package cache
import (
"errors"
"strconv"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"k8s.io/client-go/tools/cache"
)
var (
// ErrNotValidatorListInfo will be returned when a cache object is not a pointer to
// an IndicesByIndexSeed struct.
ErrNotValidatorListInfo = errors.New("object is not a shuffled validator list")
// maxShuffledListSize defines the max number of shuffled lists the cache can hold.
maxShuffledListSize = 1000
// Metrics.
shuffledIndicesCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "shuffled_validators_cache_miss",
Help: "The number of shuffled validators requests that aren't present in the cache.",
})
shuffledIndicesCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "shuffled_validators_cache_hit",
Help: "The number of shuffled validators requests that are present in the cache.",
})
)
// IndicesByIndexSeed defines the shuffled validator indices keyed by index and randao seed.
type IndicesByIndexSeed struct {
Index uint64
Seed []byte
ShuffledIndices []uint64
}
// ShuffledIndicesCache is a struct with 1 queue for looking up shuffled validators by seed.
type ShuffledIndicesCache struct {
shuffledIndicesCache *cache.FIFO
lock sync.RWMutex
}
// shuffleKeyFn takes the randao seed and index as the key for the shuffled validators of a given epoch.
func shuffleKeyFn(obj interface{}) (string, error) {
sInfo, ok := obj.(*IndicesByIndexSeed)
if !ok {
return "", ErrNotValidatorListInfo
}
return string(sInfo.Seed) + strconv.Itoa(int(sInfo.Index)), nil
}
// NewShuffledIndicesCache creates a new shuffled validators cache for storing/accessing shuffled validator indices.
func NewShuffledIndicesCache() *ShuffledIndicesCache {
return &ShuffledIndicesCache{
shuffledIndicesCache: cache.NewFIFO(shuffleKeyFn),
}
}
// IndicesByIndexSeed fetches the shuffled indices by index and seed. Returns the shuffled
// indices if they exist. Otherwise returns nil, nil.
func (c *ShuffledIndicesCache) IndicesByIndexSeed(index uint64, seed []byte) ([]uint64, error) {
c.lock.RLock()
defer c.lock.RUnlock()
key := string(seed) + strconv.Itoa(int(index))
obj, exists, err := c.shuffledIndicesCache.GetByKey(key)
if err != nil {
return nil, err
}
if exists {
shuffledIndicesCacheHit.Inc()
} else {
shuffledIndicesCacheMiss.Inc()
return nil, nil
}
cInfo, ok := obj.(*IndicesByIndexSeed)
if !ok {
return nil, ErrNotValidatorListInfo
}
return cInfo.ShuffledIndices, nil
}
// AddShuffledValidatorList adds an IndicesByIndexSeed object to the cache. This method also trims the least
// recently added IndicesByIndexSeed object if the cache size has reached the max cache size limit.
func (c *ShuffledIndicesCache) AddShuffledValidatorList(shuffledIndices *IndicesByIndexSeed) error {
c.lock.Lock()
defer c.lock.Unlock()
if err := c.shuffledIndicesCache.AddIfNotPresent(shuffledIndices); err != nil {
return err
}
trim(c.shuffledIndicesCache, maxShuffledListSize)
return nil
}
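Similarly, a brief hypothetical sketch of the read-through pattern this cache is built for; `computeFn` stands in for the caller's real shuffling routine and is not part of this package:

```go
package cache

// shuffledIndicesWithCache is a hypothetical helper: return the cached
// indices on a hit, otherwise compute them and store the result.
func shuffledIndicesWithCache(
	c *ShuffledIndicesCache,
	index uint64,
	seed []byte,
	computeFn func() []uint64,
) ([]uint64, error) {
	indices, err := c.IndicesByIndexSeed(index, seed)
	if err != nil {
		return nil, err
	}
	if indices != nil {
		// Cache hit.
		return indices, nil
	}
	// Cache miss: compute and store for subsequent lookups.
	indices = computeFn()
	err = c.AddShuffledValidatorList(&IndicesByIndexSeed{
		Index:           index,
		Seed:            seed,
		ShuffledIndices: indices,
	})
	return indices, err
}
```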

85
beacon-chain/cache/shuffled_indices_test.go vendored Normal file
View File

@@ -0,0 +1,85 @@
package cache
import (
"reflect"
"strconv"
"testing"
)
func TestShuffleKeyFn_OK(t *testing.T) {
sInfo := &IndicesByIndexSeed{
Index: 999,
Seed: []byte{'A'},
ShuffledIndices: []uint64{1, 2, 3, 4, 5},
}
key, err := shuffleKeyFn(sInfo)
if err != nil {
t.Fatal(err)
}
if key != string(sInfo.Seed)+strconv.Itoa(int(sInfo.Index)) {
t.Errorf("Incorrect hash key: %s, expected %s", key, string(sInfo.Seed)+strconv.Itoa(int(sInfo.Index)))
}
}
func TestShuffleKeyFn_InvalidObj(t *testing.T) {
_, err := shuffleKeyFn("bad")
if err != ErrNotValidatorListInfo {
t.Errorf("Expected error %v, got %v", ErrNotValidatorListInfo, err)
}
}
func TestShuffledIndicesCache_ShuffledIndicesBySeed2(t *testing.T) {
cache := NewShuffledIndicesCache()
sInfo := &IndicesByIndexSeed{
Index: 99,
Seed: []byte{'A'},
ShuffledIndices: []uint64{1, 2, 3, 4},
}
shuffledIndices, err := cache.IndicesByIndexSeed(sInfo.Index, sInfo.Seed)
if err != nil {
t.Fatal(err)
}
if shuffledIndices != nil {
t.Error("Expected shuffled indices not to exist in empty cache")
}
if err := cache.AddShuffledValidatorList(sInfo); err != nil {
t.Fatal(err)
}
shuffledIndices, err = cache.IndicesByIndexSeed(sInfo.Index, sInfo.Seed)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(shuffledIndices, sInfo.ShuffledIndices) {
t.Errorf(
"Expected fetched info committee to be %v, got %v",
sInfo.ShuffledIndices,
shuffledIndices,
)
}
}
func TestShuffledIndices_MaxSize(t *testing.T) {
cache := NewShuffledIndicesCache()
for i := uint64(0); i < 1001; i++ {
sInfo := &IndicesByIndexSeed{
Index: i,
Seed: []byte{byte(i)},
}
if err := cache.AddShuffledValidatorList(sInfo); err != nil {
t.Fatal(err)
}
}
if len(cache.shuffledIndicesCache.ListKeys()) != maxShuffledListSize {
t.Errorf(
"Expected hash cache key size to be %d, got %d",
maxShuffledListSize,
len(cache.shuffledIndicesCache.ListKeys()),
)
}
}

32
beacon-chain/chaintest/BUILD.bazel
View File

@@ -1,32 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["main.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/chaintest",
visibility = ["//visibility:private"],
deps = [
"//beacon-chain/chaintest/backend:go_default_library",
"//shared/featureconfig:go_default_library",
"@com_github_go_yaml_yaml//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_x_cray_logrus_prefixed_formatter//:go_default_library",
],
)
go_binary(
name = "chaintest",
embed = [":go_default_library"],
visibility = ["//visibility:private"],
)
go_test(
name = "go_default_test",
srcs = ["yaml_test.go"],
data = glob(["tests/**"]),
embed = [":go_default_library"],
deps = [
"//beacon-chain/chaintest/backend:go_default_library",
"//shared/featureconfig:go_default_library",
],
)

238
beacon-chain/chaintest/README.md
View File

@@ -1,238 +0,0 @@
# Ethereum 2.0 E2E Test Suite
This is a suite of end-to-end conformance tests for Prysm's implementation of the Ethereum 2.0 specification. Implementation teams have decided to utilize YAML as a general conformity test format for the current beacon chain's runtime functionality.
The test suite opts for YAML due to wide language support and support for inline comments.
# Testing Format
The testing format follows the official ETH2.0 specification test format defined [here](https://github.com/ethereum/eth2.0-specs/blob/master/specs/test-format.md)
## Stateful Tests
Chain tests check for conformity of a certain client to the beacon chain specification for items such as the fork choice rule and Casper FFG validator rewards & penalties. Stateful tests need to specify a certain configuration of a beacon chain, with items such as the number of validators, in the YAML file. Sample tests with all required fields are shown below.
### State Transition
The most important use case for this test format is to verify the ins and outs of the Ethereum Phase 0 Beacon Chain state advancement. The specification details very strict guidelines for blocks to successfully trigger a state transition, including items such as Casper Proof of Stake slashing conditions of validators, pseudorandomness in the form of RANDAO, and attestation on shard blocks being processed all inside each incoming beacon block. The YAML configuration for this test type allows for configuring a state transition run over N slots, triggering slashing conditions, processing deposits of new validators, and more.
An example state transition test for testing slot and block processing will look as follows:
```yaml
title: Sample Ethereum Serenity State Transition Tests
summary: Testing full state transition block processing
test_suite: prysm
fork: sapphire
version: 1.0
test_cases:
- config:
epoch_length: 64
deposits_for_chain_start: 1000
num_slots: 32 # Testing advancing state to slot < SlotsPerEpoch
results:
slot: 32
num_validators: 1000
- config:
epoch_length: 64
deposits_for_chain_start: 16384
num_slots: 64
deposits:
- slot: 1
amount: 32
merkle_index: 0
pubkey: !!binary |
SlAAbShSkUg7PLiPHZI/rTS1uAvKiieOrifPN6Moso0=
- slot: 15
amount: 32
merkle_index: 1
pubkey: !!binary |
Oklajsjdkaklsdlkajsdjlajslkdjlkasjlkdjlajdsd
- slot: 55
amount: 32
merkle_index: 2
pubkey: !!binary |
LkmqmqoodLKAslkjdkajsdljasdkajlksjdasldjasdd
proposer_slashings:
- slot: 16 # At slot 16, we trigger a proposer slashing
proposer_index: 16385 # We penalize the proposer that was just added from slot 15
proposal_1_shard: 0
proposal_1_slot: 15
proposal_1_root: !!binary |
LkmqmqoodLKAslkjdkajsdljasdkajlksjdasldjasdd
proposal_2_shard: 0
proposal_2_slot: 15
proposal_2_root: !!binary |
LkmqmqoodLKAslkjdkajsdljasdkajlksjdasldjasdd
attester_slashings:
- slot: 59 # At slot 59, we trigger an attester slashing
slashable_vote_data_1_slot: 55
slashable_vote_data_2_slot: 55
slashable_vote_data_1_justified_slot: 0
slashable_vote_data_2_justified_slot: 1
slashable_vote_data_1_custody_0_indices: [16386]
slashable_vote_data_1_custody_1_indices: []
slashable_vote_data_2_custody_0_indices: []
slashable_vote_data_2_custody_1_indices: [16386]
results:
slot: 64
num_validators: 16387
penalized_validators: [16385, 16386] # We test that the validators at indices 16385, 16386 were indeed penalized
- config:
skip_slots: [10, 20]
epoch_length: 64
deposits_for_chain_start: 1000
num_slots: 128 # Testing advancing state's slot == 2*SlotsPerEpoch
deposits:
- slot: 10
amount: 32
merkle_index: 0
pubkey: !!binary |
SlAAbShSkUg7PLiPHZI/rTS1uAvKiieOrifPN6Moso0=
- slot: 20
amount: 32
merkle_index: 1
pubkey: !!binary |
Oklajsjdkaklsdlkajsdjlajslkdjlkasjlkdjlajdsd
results:
slot: 128
num_validators: 1000 # Validator registry should not have grown if slots 10 and 20 were skipped
```
#### Test Configuration Options
The following configuration options are available for state transition tests:
**Config**
- **skip_slots**: `[int]` the slot numbers at which to simulate a proposer not submitting a block (the state transition runs with a nil block at these slots)
- **epoch_length**: `int` the number of slots in an epoch
- **deposits_for_chain_start**: `int` the number of eth deposits needed for the beacon chain to initialize (this simulates an initial validator registry based on this number in the test)
- **num_slots**: `int` the number of times we run a state transition in the test
- **deposits**: `[Deposit Config]` trigger a new validator deposit into the beacon state based on configuration options
- **proposer_slashings**: `[Proposer Slashing Config]` trigger a proposer slashing at a certain slot for a certain proposer index
- **attester_slashings**: `[Casper Slashing Config]` trigger an attester slashing at a certain slot
- **validator_exits**: `[Validator Exit Config]` trigger a voluntary validator exit at a certain slot for a validator index
**Deposit Config**
- **slot**: `int` a slot in which to trigger a deposit during a state transition test
- **amount**: `int` the ETH deposit amount to trigger
- **merkle_index**: `int` the index of the deposit in the validator deposit contract's Merkle trie
- **pubkey**: `!!binary` the public key of the validator in the triggered deposit object
**Proposer Slashing Config**
- **slot**: `int` a slot in which to trigger a proposer slashing during a state transition test
- **proposer_index**: `int` the proposer to penalize
- **proposal_1_shard**: `int` the first proposal data's shard id
- **proposal_1_slot**: `int` the first proposal data's slot
- **proposal_1_root**: `!!binary` the first proposal data's block root
- **proposal_2_shard**: `int` the second proposal data's shard id
- **proposal_2_slot**: `int` the second proposal data's slot
- **proposal_2_root**: `!!binary` the second proposal data's block root
**Casper Slashing Config**
- **slot**: `int` a slot in which to trigger an attester slashing during a state transition test
- **slashable_vote_data_1_slot**: `int` the slot of the attestation data of slashableVoteData1
- **slashable_vote_data_2_slot**: `int` the slot of the attestation data of slashableVoteData2
- **slashable_vote_data_1_justified_slot**: `int` the justified slot of the attestation data of slashableVoteData1
- **slashable_vote_data_2_justified_slot**: `int` the justified slot of the attestation data of slashableVoteData2
- **slashable_vote_data_1_custody_0_indices**: `[int]` the custody indices 0 for slashableVoteData1
- **slashable_vote_data_1_custody_1_indices**: `[int]` the custody indices 1 for slashableVoteData1
- **slashable_vote_data_2_custody_0_indices**: `[int]` the custody indices 0 for slashableVoteData2
- **slashable_vote_data_2_custody_1_indices**: `[int]` the custody indices 1 for slashableVoteData2
**Validator Exit Config**
- **epoch**: `int` the epoch at which a validator wants to voluntarily exit the validator registry
- **validator_index**: `int` the index of the validator in the registry that is exiting
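For example, a hypothetical exit entry in a test configuration would look as follows (values invented for illustration; field names follow the test format structs):

```yaml
validator_exits:
  - epoch: 2
    validator_index: 16385
```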
#### Test Results
The following are **mandatory** fields as they correspond to checks done at the end of the test run.
- **slot**: `int` check the slot of the state resulting from applying N state transitions in the test
- **num_validators**: `int` check the number of validators in the validator registry after applying N state transitions
- **penalized_validators**: `[int]` the list of validator indices we verify were penalized during the test
- **exited_validators**: `[int]` the list of validator indices we verify voluntarily exited the registry during the test
## Stateless Tests
Stateless tests represent simple unit test definitions for important invariants in the ETH2.0 runtime. In particular, these test conformity across clients with respect to items such as Simple Serialize (SSZ), Signature Aggregation (BLS), and Validator Shuffling.
**Simple Serialize**
TODO
**Signature Aggregation**
TODO
**Validator Shuffling**
```yaml
title: Shuffling Algorithm Tests
summary: Test vectors for shuffling a list based upon a seed using `shuffle`
test_suite: shuffle
fork: tchaikovsky
version: 1.0
test_cases:
- input: []
output: []
seed: !!binary ""
- name: boring_list
description: List with a single element, 0
input: [0]
output: [0]
seed: !!binary ""
- input: [255]
output: [255]
seed: !!binary ""
- input: [4, 6, 2, 6, 1, 4, 6, 2, 1, 5]
output: [1, 6, 4, 1, 6, 6, 2, 2, 4, 5]
seed: !!binary ""
- input: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
output: [4, 7, 10, 13, 3, 1, 2, 9, 12, 6, 11, 8, 5]
seed: !!binary ""
- input: [65, 6, 2, 6, 1, 4, 6, 2, 1, 5]
output: [6, 65, 2, 5, 4, 2, 6, 6, 1, 1]
seed: !!binary |
JlAYJ5H2j8g7PLiPHZI/rTS1uAvKiieOrifPN6Moso0=
```
# Using the Runner
First, create a directory containing the YAML files you wish to test (or use the default `./sampletests` directory included with Prysm).
Then, make sure you have the following folder structure for the directory:
```
yourtestdir/
fork-choice-tests/
*.yaml
...
shuffle-tests/
*.yaml
...
state-tests/
*.yaml
...
```
Then, navigate to the test runner's directory and use the go tool as follows:
```bash
go run main.go -tests-dir /path/to/your/testsdir
```
The runner will then start up a simulated backend and run all your specified YAML tests.
```bash
[2018-11-06 15:01:44] INFO ----Running Chain Tests----
[2018-11-06 15:01:44] INFO Running 4 YAML Tests
[2018-11-06 15:01:44] INFO Title: Sample Ethereum 2.0 Beacon Chain Test
[2018-11-06 15:01:44] INFO Summary: Basic, functioning fork choice rule for Ethereum 2.0
[2018-11-06 15:01:44] INFO Test Suite: prysm
[2018-11-06 15:01:44] INFO Test Runs Finished In: 0.000643545 Seconds
```

42
beacon-chain/chaintest/backend/BUILD.bazel
View File

@@ -1,42 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"fork_choice_test_format.go",
"helpers.go",
"shuffle_test_format.go",
"simulated_backend.go",
"state_test_format.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/chaintest/backend",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/utils:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bls:go_default_library",
"//shared/forkutil:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"//shared/sliceutil:go_default_library",
"//shared/trieutil:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["simulated_backend_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/db:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/params:go_default_library",
],
)

51
beacon-chain/chaintest/backend/fork_choice_test_format.go
View File

@@ -1,51 +0,0 @@
package backend
// ForkChoiceTest --
type ForkChoiceTest struct {
Title string
Summary string
TestSuite string `yaml:"test_suite"`
TestCases []*ForkChoiceTestCase `yaml:"test_cases"`
}
// ForkChoiceTestCase --
type ForkChoiceTestCase struct {
Config *ForkChoiceTestConfig `yaml:"config"`
Slots []*ForkChoiceTestSlot `yaml:"slots,flow"`
Results *ForkChoiceTestResult `yaml:"results"`
}
// ForkChoiceTestConfig --
type ForkChoiceTestConfig struct {
ValidatorCount uint64 `yaml:"validator_count"`
CycleLength uint64 `yaml:"cycle_length"`
ShardCount uint64 `yaml:"shard_count"`
MinCommitteeSize uint64 `yaml:"min_committee_size"`
}
// ForkChoiceTestSlot --
type ForkChoiceTestSlot struct {
SlotNumber uint64 `yaml:"slot_number"`
NewBlock *TestBlock `yaml:"new_block"`
Attestations []*TestAttestation `yaml:",flow"`
}
// ForkChoiceTestResult --
type ForkChoiceTestResult struct {
Head string
LastJustifiedBlock string `yaml:"last_justified_block"`
LastFinalizedBlock string `yaml:"last_finalized_block"`
}
// TestBlock --
type TestBlock struct {
ID string `yaml:"ID"`
Parent string `yaml:"parent"`
}
// TestAttestation --
type TestAttestation struct {
Block string `yaml:"block"`
ValidatorRegistry string `yaml:"validators"`
CommitteeSlot uint64 `yaml:"committee_slot"`
}

170
beacon-chain/chaintest/backend/helpers.go
View File

@@ -1,170 +0,0 @@
package backend
import (
"crypto/rand"
"encoding/binary"
"fmt"
"time"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/forkutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/trieutil"
)
// generateSimulatedBlock generates a simulated beacon block to use
// in the next state transition given the current state, the previous
// block root, and the historical deposits.
func generateSimulatedBlock(
beaconState *pb.BeaconState,
prevBlockRoot [32]byte,
historicalDeposits []*pb.Deposit,
simObjects *SimulatedObjects,
privKeys []*bls.SecretKey,
) (*pb.BeaconBlock, [32]byte, error) {
stateRoot, err := hashutil.HashProto(beaconState)
if err != nil {
return nil, [32]byte{}, fmt.Errorf("could not tree hash state: %v", err)
}
proposerIdx, err := helpers.BeaconProposerIndex(beaconState, beaconState.Slot+1)
if err != nil {
return nil, [32]byte{}, err
}
epoch := helpers.SlotToEpoch(beaconState.Slot + 1)
buf := make([]byte, 32)
binary.LittleEndian.PutUint64(buf, epoch)
domain := forkutil.DomainVersion(beaconState.Fork, epoch, params.BeaconConfig().DomainRandao)
// The proposer for the next slot signs the RANDAO reveal for the epoch.
epochSignature := privKeys[proposerIdx].Sign(buf, domain)
block := &pb.BeaconBlock{
Slot: beaconState.Slot + 1,
RandaoReveal: epochSignature.Marshal(),
ParentRootHash32: prevBlockRoot[:],
StateRootHash32: stateRoot[:],
Eth1Data: &pb.Eth1Data{
DepositRootHash32: []byte{1},
BlockHash32: []byte{2},
},
Body: &pb.BeaconBlockBody{
ProposerSlashings: []*pb.ProposerSlashing{},
AttesterSlashings: []*pb.AttesterSlashing{},
Attestations: []*pb.Attestation{},
Deposits: []*pb.Deposit{},
VoluntaryExits: []*pb.VoluntaryExit{},
},
}
if simObjects.simDeposit != nil {
depositInput := &pb.DepositInput{
Pubkey: []byte(simObjects.simDeposit.Pubkey),
WithdrawalCredentialsHash32: make([]byte, 32),
ProofOfPossession: make([]byte, 96),
}
data, err := helpers.EncodeDepositData(depositInput, simObjects.simDeposit.Amount, time.Now().Unix())
if err != nil {
return nil, [32]byte{}, fmt.Errorf("could not encode deposit data: %v", err)
}
// We then update the deposits Merkle trie with the deposit data and return
// its Merkle branch leading up to the root of the trie.
historicalDepositData := make([][]byte, len(historicalDeposits))
for i := range historicalDeposits {
historicalDepositData[i] = historicalDeposits[i].DepositData
}
newTrie, err := trieutil.GenerateTrieFromItems(append(historicalDepositData, data), int(params.BeaconConfig().DepositContractTreeDepth))
if err != nil {
return nil, [32]byte{}, fmt.Errorf("could not regenerate trie: %v", err)
}
proof, err := newTrie.MerkleProof(int(simObjects.simDeposit.MerkleIndex))
if err != nil {
return nil, [32]byte{}, fmt.Errorf("could not generate proof: %v", err)
}
root := newTrie.Root()
block.Eth1Data.DepositRootHash32 = root[:]
block.Body.Deposits = append(block.Body.Deposits, &pb.Deposit{
DepositData: data,
MerkleProofHash32S: proof,
MerkleTreeIndex: simObjects.simDeposit.MerkleIndex,
})
}
if simObjects.simProposerSlashing != nil {
block.Body.ProposerSlashings = append(block.Body.ProposerSlashings, &pb.ProposerSlashing{
ProposerIndex: simObjects.simProposerSlashing.ProposerIndex,
ProposalData_1: &pb.ProposalSignedData{
Slot: simObjects.simProposerSlashing.Proposal1Slot,
Shard: simObjects.simProposerSlashing.Proposal1Shard,
BlockRootHash32: []byte(simObjects.simProposerSlashing.Proposal1Root),
},
ProposalData_2: &pb.ProposalSignedData{
Slot: simObjects.simProposerSlashing.Proposal2Slot,
Shard: simObjects.simProposerSlashing.Proposal2Shard,
BlockRootHash32: []byte(simObjects.simProposerSlashing.Proposal2Root),
},
})
}
if simObjects.simAttesterSlashing != nil {
block.Body.AttesterSlashings = append(block.Body.AttesterSlashings, &pb.AttesterSlashing{
SlashableAttestation_1: &pb.SlashableAttestation{
Data: &pb.AttestationData{
Slot: simObjects.simAttesterSlashing.SlashableAttestation1Slot,
JustifiedEpoch: simObjects.simAttesterSlashing.SlashableAttestation1JustifiedEpoch,
},
CustodyBitfield: []byte(simObjects.simAttesterSlashing.SlashableAttestation1CustodyBitField),
ValidatorIndices: simObjects.simAttesterSlashing.SlashableAttestation1ValidatorIndices,
},
SlashableAttestation_2: &pb.SlashableAttestation{
Data: &pb.AttestationData{
Slot: simObjects.simAttesterSlashing.SlashableAttestation2Slot,
JustifiedEpoch: simObjects.simAttesterSlashing.SlashableAttestation2JustifiedEpoch,
},
CustodyBitfield: []byte(simObjects.simAttesterSlashing.SlashableAttestation2CustodyBitField),
ValidatorIndices: simObjects.simAttesterSlashing.SlashableAttestation2ValidatorIndices,
},
})
}
if simObjects.simValidatorExit != nil {
block.Body.VoluntaryExits = append(block.Body.VoluntaryExits, &pb.VoluntaryExit{
Epoch: simObjects.simValidatorExit.Epoch,
ValidatorIndex: simObjects.simValidatorExit.ValidatorIndex,
})
}
blockRoot, err := hashutil.HashBeaconBlock(block)
if err != nil {
return nil, [32]byte{}, fmt.Errorf("could not tree hash new block: %v", err)
}
return block, blockRoot, nil
}
// generateInitialSimulatedDeposits generates initial deposits for creating a beacon state in the simulated
// backend based on the yaml configuration.
func generateInitialSimulatedDeposits(numDeposits uint64) ([]*pb.Deposit, []*bls.SecretKey, error) {
genesisTime := time.Date(2018, 9, 0, 0, 0, 0, 0, time.UTC).Unix()
deposits := make([]*pb.Deposit, numDeposits)
privKeys := make([]*bls.SecretKey, numDeposits)
for i := 0; i < len(deposits); i++ {
priv, err := bls.RandKey(rand.Reader)
if err != nil {
return nil, nil, fmt.Errorf("could not initialize key: %v", err)
}
depositInput := &pb.DepositInput{
Pubkey: priv.PublicKey().Marshal(),
WithdrawalCredentialsHash32: make([]byte, 32),
ProofOfPossession: make([]byte, 96),
}
depositData, err := helpers.EncodeDepositData(
depositInput,
params.BeaconConfig().MaxDepositAmount,
genesisTime,
)
if err != nil {
return nil, nil, fmt.Errorf("could not encode genesis block deposits: %v", err)
}
deposits[i] = &pb.Deposit{DepositData: depositData, MerkleTreeIndex: uint64(i)}
privKeys[i] = priv
}
return deposits, privKeys, nil
}

18
beacon-chain/chaintest/backend/shuffle_test_format.go
View File

@@ -1,18 +0,0 @@
package backend
// ShuffleTest --
type ShuffleTest struct {
Title string `yaml:"title"`
Summary string `yaml:"summary"`
TestSuite string `yaml:"test_suite"`
Fork string `yaml:"fork"`
Version string `yaml:"version"`
TestCases []*ShuffleTestCase `yaml:"test_cases"`
}
// ShuffleTestCase --
type ShuffleTestCase struct {
Input []uint64 `yaml:"input,flow"`
Output []uint64 `yaml:"output,flow"`
Seed string
}

393
beacon-chain/chaintest/backend/simulated_backend.go
View File

@@ -1,393 +0,0 @@
// Package backend contains utilities for simulating an entire
// ETH 2.0 beacon chain for e2e tests and benchmarking
// purposes.
package backend
import (
"context"
"fmt"
"reflect"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
b "github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/utils"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/sliceutil"
log "github.com/sirupsen/logrus"
)
// SimulatedBackend allows for programmatic advancement
// of an in-memory beacon chain for client test runs
// and other e2e use cases.
type SimulatedBackend struct {
chainService *blockchain.ChainService
beaconDB *db.BeaconDB
state *pb.BeaconState
prevBlockRoots [][32]byte
inMemoryBlocks []*pb.BeaconBlock
historicalDeposits []*pb.Deposit
}
// SimulatedObjects is a container to hold the
// required primitives for generation of a beacon
// block.
type SimulatedObjects struct {
simDeposit *StateTestDeposit
simProposerSlashing *StateTestProposerSlashing
simAttesterSlashing *StateTestAttesterSlashing
simValidatorExit *StateTestValidatorExit
}
// NewSimulatedBackend creates an instance by initializing a chain service
// utilizing a mockDB which will act according to test run parameters specified
// in the common ETH 2.0 client test YAML format.
func NewSimulatedBackend() (*SimulatedBackend, error) {
db, err := db.SetupDB()
if err != nil {
return nil, fmt.Errorf("could not setup simulated backend db: %v", err)
}
cs, err := blockchain.NewChainService(context.Background(), &blockchain.Config{
BeaconDB: db,
})
if err != nil {
return nil, err
}
return &SimulatedBackend{
chainService: cs,
beaconDB: db,
inMemoryBlocks: make([]*pb.BeaconBlock, 0),
historicalDeposits: make([]*pb.Deposit, 0),
}, nil
}
// SetupBackend sets up the simulated backend with simulated deposits, and initializes the
// state and genesis block.
func (sb *SimulatedBackend) SetupBackend(numOfDeposits uint64) ([]*bls.SecretKey, error) {
initialDeposits, privKeys, err := generateInitialSimulatedDeposits(numOfDeposits)
if err != nil {
return nil, fmt.Errorf("could not simulate initial validator deposits: %v", err)
}
if err := sb.setupBeaconStateAndGenesisBlock(initialDeposits); err != nil {
return nil, fmt.Errorf("could not set up beacon state and initialize genesis block %v", err)
}
return privKeys, nil
}
// DB returns the underlying db instance in the simulated
// backend.
func (sb *SimulatedBackend) DB() *db.BeaconDB {
return sb.beaconDB
}
// GenerateBlockAndAdvanceChain generates a simulated block and runs that block through
// the state transition.
func (sb *SimulatedBackend) GenerateBlockAndAdvanceChain(objects *SimulatedObjects, privKeys []*bls.SecretKey) error {
prevBlockRoot := sb.prevBlockRoots[len(sb.prevBlockRoots)-1]
// We generate a new block to pass into the state transition.
newBlock, newBlockRoot, err := generateSimulatedBlock(
sb.state,
prevBlockRoot,
sb.historicalDeposits,
objects,
privKeys,
)
if err != nil {
return fmt.Errorf("could not generate simulated beacon block %v", err)
}
newState := sb.state
newState.LatestEth1Data = newBlock.Eth1Data
newState, err = state.ExecuteStateTransition(
context.Background(),
sb.state,
newBlock,
prevBlockRoot,
state.DefaultConfig(),
)
if err != nil {
return fmt.Errorf("could not execute state transition: %v", err)
}
sb.state = newState
sb.prevBlockRoots = append(sb.prevBlockRoots, newBlockRoot)
sb.inMemoryBlocks = append(sb.inMemoryBlocks, newBlock)
if len(newBlock.Body.Deposits) > 0 {
sb.historicalDeposits = append(sb.historicalDeposits, newBlock.Body.Deposits...)
}
return nil
}
// GenerateNilBlockAndAdvanceChain triggers a state transition with a nil block.
func (sb *SimulatedBackend) GenerateNilBlockAndAdvanceChain() error {
prevBlockRoot := sb.prevBlockRoots[len(sb.prevBlockRoots)-1]
newState, err := state.ExecuteStateTransition(
context.Background(),
sb.state,
nil,
prevBlockRoot,
state.DefaultConfig(),
)
if err != nil {
return fmt.Errorf("could not execute state transition: %v", err)
}
sb.state = newState
return nil
}
// Shutdown closes the db associated with the simulated backend.
func (sb *SimulatedBackend) Shutdown() error {
return sb.beaconDB.Close()
}
// State is a getter to return the current beacon state
// of the backend.
func (sb *SimulatedBackend) State() *pb.BeaconState {
return sb.state
}
// InMemoryBlocks returns the blocks that have been processed by the simulated
// backend.
func (sb *SimulatedBackend) InMemoryBlocks() []*pb.BeaconBlock {
return sb.inMemoryBlocks
}
// RunForkChoiceTest uses a parsed set of chaintests from a YAML file
// according to the ETH 2.0 client chain test specification and runs them
// against the simulated backend.
func (sb *SimulatedBackend) RunForkChoiceTest(testCase *ForkChoiceTestCase) error {
defer db.TeardownDB(sb.beaconDB)
// Utilize the config parameters in the test case to setup
// the DB and set global config parameters accordingly.
// Config parameters include: ValidatorCount, ShardCount,
// CycleLength, MinCommitteeSize, and more based on the YAML
// test language specification.
c := params.BeaconConfig()
c.ShardCount = testCase.Config.ShardCount
c.SlotsPerEpoch = testCase.Config.CycleLength
c.TargetCommitteeSize = testCase.Config.MinCommitteeSize
params.OverrideBeaconConfig(c)
// Then, we create the validators based on the custom test config.
validators := make([]*pb.Validator, testCase.Config.ValidatorCount)
for i := uint64(0); i < testCase.Config.ValidatorCount; i++ {
validators[i] = &pb.Validator{
ExitEpoch: params.BeaconConfig().ActivationExitDelay,
Pubkey: []byte{},
}
}
// TODO(#718): Next step is to update and save the blocks specified
// in the test case into the DB.
//
// Then, we call the updateHead routine and confirm the
// chain's head is the expected result from the test case.
return nil
}
// RunShuffleTest uses the validator set specified in a YAML file, runs the validator shuffle
// algorithm, then compares the output with the expected output from the YAML file.
func (sb *SimulatedBackend) RunShuffleTest(testCase *ShuffleTestCase) error {
defer db.TeardownDB(sb.beaconDB)
seed := common.BytesToHash([]byte(testCase.Seed))
output, err := utils.ShuffleIndices(seed, testCase.Input)
if err != nil {
return err
}
if !reflect.DeepEqual(output, testCase.Output) {
return fmt.Errorf("shuffle result error: expected %v, actual %v", testCase.Output, output)
}
return nil
}
// RunStateTransitionTest advances a beacon chain state transition N
// slots from a genesis state, with a block being processed at every iteration
// of the state transition function.
func (sb *SimulatedBackend) RunStateTransitionTest(testCase *StateTestCase) error {
defer db.TeardownDB(sb.beaconDB)
setTestConfig(testCase)
privKeys, err := sb.initializeStateTest(testCase)
if err != nil {
return fmt.Errorf("could not initialize state test %v", err)
}
averageTimesPerTransition := []time.Duration{}
startSlot := params.BeaconConfig().GenesisSlot
for i := startSlot; i < startSlot+testCase.Config.NumSlots; i++ {
// If the slot is marked as skipped in the configuration options,
// we simply run the state transition with a nil block argument.
if sliceutil.IsInUint64(i, testCase.Config.SkipSlots) {
if err := sb.GenerateNilBlockAndAdvanceChain(); err != nil {
return fmt.Errorf("could not advance the chain with a nil block %v", err)
}
continue
}
simulatedObjects := sb.generateSimulatedObjects(testCase, i)
startTime := time.Now()
if err := sb.GenerateBlockAndAdvanceChain(simulatedObjects, privKeys); err != nil {
return fmt.Errorf("could not generate the block and advance the chain %v", err)
}
endTime := time.Now()
averageTimesPerTransition = append(averageTimesPerTransition, endTime.Sub(startTime))
}
log.Infof(
"with %d initial deposits, each state transition took average time = %v",
testCase.Config.DepositsForChainStart,
averageDuration(averageTimesPerTransition),
)
if err := sb.compareTestCase(testCase); err != nil {
return err
}
return nil
}
// initializeStateTest sets up the environment by generating all the required objects in order
// to proceed with the state test.
func (sb *SimulatedBackend) initializeStateTest(testCase *StateTestCase) ([]*bls.SecretKey, error) {
initialDeposits, privKeys, err := generateInitialSimulatedDeposits(testCase.Config.DepositsForChainStart)
if err != nil {
return nil, fmt.Errorf("could not simulate initial validator deposits: %v", err)
}
if err := sb.setupBeaconStateAndGenesisBlock(initialDeposits); err != nil {
return nil, fmt.Errorf("could not set up beacon state and initialize genesis block %v", err)
}
return privKeys, nil
}
// setupBeaconStateAndGenesisBlock creates the initial beacon state and genesis block in order to
// proceed with the test.
func (sb *SimulatedBackend) setupBeaconStateAndGenesisBlock(initialDeposits []*pb.Deposit) error {
var err error
genesisTime := time.Date(2018, 9, 0, 0, 0, 0, 0, time.UTC).Unix()
sb.state, err = state.GenesisBeaconState(initialDeposits, uint64(genesisTime), nil)
if err != nil {
return fmt.Errorf("could not initialize simulated beacon state: %v", err)
}
sb.historicalDeposits = initialDeposits
// Hash the initial beacon state to derive the genesis state root.
stateRoot, err := hashutil.HashProto(sb.state)
if err != nil {
return fmt.Errorf("could not tree hash state: %v", err)
}
genesisBlock := b.NewGenesisBlock(stateRoot[:])
genesisBlockRoot, err := hashutil.HashBeaconBlock(genesisBlock)
if err != nil {
return fmt.Errorf("could not tree hash genesis block: %v", err)
}
// We now keep track of generated blocks for each state transition in
// a slice.
sb.prevBlockRoots = [][32]byte{genesisBlockRoot}
sb.inMemoryBlocks = append(sb.inMemoryBlocks, genesisBlock)
return nil
}
// generateSimulatedObjects generates the simulated objects depending on the testcase and current slot.
func (sb *SimulatedBackend) generateSimulatedObjects(testCase *StateTestCase, slotNumber uint64) *SimulatedObjects {
// If the slot is not skipped, we check if we are simulating a deposit at the current slot.
var simulatedDeposit *StateTestDeposit
for _, deposit := range testCase.Config.Deposits {
if deposit.Slot == slotNumber {
simulatedDeposit = deposit
break
}
}
var simulatedProposerSlashing *StateTestProposerSlashing
for _, pSlashing := range testCase.Config.ProposerSlashings {
if pSlashing.Slot == slotNumber {
simulatedProposerSlashing = pSlashing
break
}
}
var simulatedAttesterSlashing *StateTestAttesterSlashing
for _, cSlashing := range testCase.Config.AttesterSlashings {
if cSlashing.Slot == slotNumber {
simulatedAttesterSlashing = cSlashing
break
}
}
var simulatedValidatorExit *StateTestValidatorExit
for _, exit := range testCase.Config.ValidatorExits {
if exit.Epoch == slotNumber/params.BeaconConfig().SlotsPerEpoch {
simulatedValidatorExit = exit
break
}
}
return &SimulatedObjects{
simDeposit: simulatedDeposit,
simProposerSlashing: simulatedProposerSlashing,
simAttesterSlashing: simulatedAttesterSlashing,
simValidatorExit: simulatedValidatorExit,
}
}
// compareTestCase compares the state in the simulated backend against the values in the inputted test case. If
// there are any discrepancies it returns an error.
func (sb *SimulatedBackend) compareTestCase(testCase *StateTestCase) error {
if sb.state.Slot != testCase.Results.Slot {
return fmt.Errorf(
"incorrect state slot after %d state transitions without blocks, wanted %d, received %d",
testCase.Config.NumSlots,
sb.state.Slot,
testCase.Results.Slot,
)
}
if len(sb.state.ValidatorRegistry) != testCase.Results.NumValidators {
return fmt.Errorf(
"incorrect num validators after %d state transitions without blocks, wanted %d, received %d",
testCase.Config.NumSlots,
testCase.Results.NumValidators,
len(sb.state.ValidatorRegistry),
)
}
for _, slashed := range testCase.Results.SlashedValidators {
if sb.state.ValidatorRegistry[slashed].SlashedEpoch == params.BeaconConfig().FarFutureEpoch {
return fmt.Errorf(
"expected validator at index %d to have been slashed",
slashed,
)
}
}
for _, exited := range testCase.Results.ExitedValidators {
if sb.state.ValidatorRegistry[exited].StatusFlags != pb.Validator_INITIATED_EXIT {
return fmt.Errorf(
"expected validator at index %d to have exited",
exited,
)
}
}
return nil
}
func setTestConfig(testCase *StateTestCase) {
// We setup the initial configuration for running state
// transition tests below.
c := params.BeaconConfig()
c.SlotsPerEpoch = testCase.Config.SlotsPerEpoch
c.DepositsForChainStart = testCase.Config.DepositsForChainStart
params.OverrideBeaconConfig(c)
}
func averageDuration(times []time.Duration) time.Duration {
// Guard against an empty slice to avoid dividing by zero when no
// transitions were timed (e.g. when every slot is skipped).
if len(times) == 0 {
return 0
}
sum := int64(0)
for _, t := range times {
sum += t.Nanoseconds()
}
return time.Duration(sum / int64(len(times)))
}

85
beacon-chain/chaintest/backend/simulated_backend_test.go
View File

@@ -1,85 +0,0 @@
package backend
import (
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
)
func init() {
featureconfig.InitFeatureConfig(&featureconfig.FeatureFlagConfig{
EnableCrosslinks: true,
})
}
func TestSimulatedBackendStop_ShutsDown(t *testing.T) {
backend, err := NewSimulatedBackend()
if err != nil {
t.Fatalf("Could not create a new simulated backedn %v", err)
}
if err := backend.Shutdown(); err != nil {
t.Errorf("Could not successfully shutdown simulated backend %v", err)
}
db.TeardownDB(backend.beaconDB)
}
func TestGenerateBlockAndAdvanceChain_IncreasesSlot(t *testing.T) {
backend, err := NewSimulatedBackend()
if err != nil {
t.Fatalf("Could not create a new simulated backend %v", err)
}
privKeys, err := backend.SetupBackend(100)
if err != nil {
t.Fatalf("Could not set up backend %v", err)
}
defer backend.Shutdown()
defer db.TeardownDB(backend.beaconDB)
slotLimit := params.BeaconConfig().SlotsPerEpoch + uint64(1)
for i := uint64(0); i < slotLimit; i++ {
if err := backend.GenerateBlockAndAdvanceChain(&SimulatedObjects{}, privKeys); err != nil {
t.Fatalf("Could not generate block and transition state successfully %v for slot %d", err, backend.state.Slot+1)
}
if backend.inMemoryBlocks[len(backend.inMemoryBlocks)-1].Slot != backend.state.Slot {
t.Errorf("In memory Blocks do not have the same last slot as the state, expected %d but got %v",
backend.state.Slot, backend.inMemoryBlocks[len(backend.inMemoryBlocks)-1])
}
}
if backend.state.Slot != params.BeaconConfig().GenesisSlot+uint64(slotLimit) {
t.Errorf("Unequal state slot and expected slot %d %d", backend.state.Slot, slotLimit)
}
}
func TestGenerateNilBlockAndAdvanceChain_IncreasesSlot(t *testing.T) {
backend, err := NewSimulatedBackend()
if err != nil {
t.Fatalf("Could not create a new simulated backedn %v", err)
}
if _, err := backend.SetupBackend(100); err != nil {
t.Fatalf("Could not set up backend %v", err)
}
defer backend.Shutdown()
defer db.TeardownDB(backend.beaconDB)
slotLimit := params.BeaconConfig().SlotsPerEpoch + uint64(1)
for i := uint64(0); i < slotLimit; i++ {
if err := backend.GenerateNilBlockAndAdvanceChain(); err != nil {
t.Fatalf("Could not generate block and transition state successfully %v for slot %d", err, backend.state.Slot+1)
}
}
if backend.state.Slot != params.BeaconConfig().GenesisSlot+uint64(slotLimit) {
t.Errorf("Unequal state slot and expected slot %d %d", backend.state.Slot, slotLimit)
}
}

78
beacon-chain/chaintest/backend/state_test_format.go
View File

@@ -1,78 +0,0 @@
package backend
// StateTest --
type StateTest struct {
Title string
Summary string
Fork string `yaml:"fork"`
Version string `yaml:"version"`
TestSuite string `yaml:"test_suite"`
TestCases []*StateTestCase `yaml:"test_cases"`
}
// StateTestCase --
type StateTestCase struct {
Config *StateTestConfig `yaml:"config"`
Results *StateTestResults `yaml:"results"`
}
// StateTestConfig --
type StateTestConfig struct {
SkipSlots []uint64 `yaml:"skip_slots"`
DepositSlots []uint64 `yaml:"deposit_slots"`
Deposits []*StateTestDeposit `yaml:"deposits"`
ProposerSlashings []*StateTestProposerSlashing `yaml:"proposer_slashings"`
AttesterSlashings []*StateTestAttesterSlashing `yaml:"attester_slashings"`
ValidatorExits []*StateTestValidatorExit `yaml:"validator_exits"`
SlotsPerEpoch uint64 `yaml:"slots_per_epoch"`
ShardCount uint64 `yaml:"shard_count"`
DepositsForChainStart uint64 `yaml:"deposits_for_chain_start"`
NumSlots uint64 `yaml:"num_slots"`
}
// StateTestDeposit --
type StateTestDeposit struct {
Slot uint64 `yaml:"slot"`
Amount uint64 `yaml:"amount"`
MerkleIndex uint64 `yaml:"merkle_index"`
Pubkey string `yaml:"pubkey"`
}
// StateTestProposerSlashing --
type StateTestProposerSlashing struct {
Slot uint64 `yaml:"slot"`
ProposerIndex uint64 `yaml:"proposer_index"`
Proposal1Shard uint64 `yaml:"proposal_1_shard"`
Proposal2Shard uint64 `yaml:"proposal_2_shard"`
Proposal1Slot uint64 `yaml:"proposal_1_slot"`
Proposal2Slot uint64 `yaml:"proposal_2_slot"`
Proposal1Root string `yaml:"proposal_1_root"`
Proposal2Root string `yaml:"proposal_2_root"`
}
// StateTestAttesterSlashing --
type StateTestAttesterSlashing struct {
Slot uint64 `yaml:"slot"`
SlashableAttestation1Slot uint64 `yaml:"slashable_attestation_1_slot"`
SlashableAttestation1JustifiedEpoch uint64 `yaml:"slashable_attestation_1_justified_epoch"`
SlashableAttestation1ValidatorIndices []uint64 `yaml:"slashable_attestation_1_validator_indices"`
SlashableAttestation1CustodyBitField string `yaml:"slashable_attestation_1_custody_bitfield"`
SlashableAttestation2Slot uint64 `yaml:"slashable_attestation_2_slot"`
SlashableAttestation2JustifiedEpoch uint64 `yaml:"slashable_attestation_2_justified_epoch"`
SlashableAttestation2ValidatorIndices []uint64 `yaml:"slashable_attestation_2_validator_indices"`
SlashableAttestation2CustodyBitField string `yaml:"slashable_attestation_2_custody_bitfield"`
}
// StateTestValidatorExit --
type StateTestValidatorExit struct {
Epoch uint64 `yaml:"epoch"`
ValidatorIndex uint64 `yaml:"validator_index"`
}
// StateTestResults --
type StateTestResults struct {
Slot uint64
NumValidators int `yaml:"num_validators"`
SlashedValidators []uint64 `yaml:"slashed_validators"`
ExitedValidators []uint64 `yaml:"exited_validators"`
}

145
beacon-chain/chaintest/main.go
View File

@@ -1,145 +0,0 @@
package main
import (
"flag"
"fmt"
"io/ioutil"
"path"
"time"
"github.com/go-yaml/yaml"
"github.com/prysmaticlabs/prysm/beacon-chain/chaintest/backend"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
log "github.com/sirupsen/logrus"
prefixed "github.com/x-cray/logrus-prefixed-formatter"
)
func init() {
featureconfig.InitFeatureConfig(&featureconfig.FeatureFlagConfig{
EnableCrosslinks: false,
})
}
func readTestsFromYaml(yamlDir string) ([]interface{}, error) {
const forkChoiceTestsFolderName = "fork-choice-tests"
const shuffleTestsFolderName = "shuffle-tests"
const stateTestsFolderName = "state-tests"
var tests []interface{}
dirs, err := ioutil.ReadDir(yamlDir)
if err != nil {
return nil, fmt.Errorf("could not read YAML tests directory: %v", err)
}
for _, dir := range dirs {
files, err := ioutil.ReadDir(path.Join(yamlDir, dir.Name()))
if err != nil {
return nil, fmt.Errorf("could not read YAML tests directory: %v", err)
}
for _, file := range files {
filePath := path.Join(yamlDir, dir.Name(), file.Name())
// #nosec G304
data, err := ioutil.ReadFile(filePath)
if err != nil {
return nil, fmt.Errorf("could not read YAML file: %v", err)
}
switch dir.Name() {
case forkChoiceTestsFolderName:
decoded := &backend.ForkChoiceTest{}
if err := yaml.Unmarshal(data, decoded); err != nil {
return nil, fmt.Errorf("could not unmarshal YAML file into test struct: %v", err)
}
tests = append(tests, decoded)
case shuffleTestsFolderName:
decoded := &backend.ShuffleTest{}
if err := yaml.Unmarshal(data, decoded); err != nil {
return nil, fmt.Errorf("could not unmarshal YAML file into test struct: %v", err)
}
tests = append(tests, decoded)
case stateTestsFolderName:
decoded := &backend.StateTest{}
if err := yaml.Unmarshal(data, decoded); err != nil {
return nil, fmt.Errorf("could not unmarshal YAML file into test struct: %v", err)
}
tests = append(tests, decoded)
}
}
}
return tests, nil
}
func runTests(tests []interface{}, sb *backend.SimulatedBackend) error {
for _, tt := range tests {
switch typedTest := tt.(type) {
case *backend.ForkChoiceTest:
log.Infof("Title: %v", typedTest.Title)
log.Infof("Summary: %v", typedTest.Summary)
log.Infof("Test Suite: %v", typedTest.TestSuite)
for _, testCase := range typedTest.TestCases {
if err := sb.RunForkChoiceTest(testCase); err != nil {
return fmt.Errorf("chain test failed: %v", err)
}
}
log.Info("Test PASSED")
case *backend.ShuffleTest:
log.Infof("Title: %v", typedTest.Title)
log.Infof("Summary: %v", typedTest.Summary)
log.Infof("Test Suite: %v", typedTest.TestSuite)
log.Infof("Fork: %v", typedTest.Fork)
log.Infof("Version: %v", typedTest.Version)
for _, testCase := range typedTest.TestCases {
if err := sb.RunShuffleTest(testCase); err != nil {
return fmt.Errorf("chain test failed: %v", err)
}
}
log.Info("Test PASSED")
case *backend.StateTest:
log.Infof("Title: %v", typedTest.Title)
log.Infof("Summary: %v", typedTest.Summary)
log.Infof("Test Suite: %v", typedTest.TestSuite)
log.Infof("Fork: %v", typedTest.Fork)
log.Infof("Version: %v", typedTest.Version)
for _, testCase := range typedTest.TestCases {
if err := sb.RunStateTransitionTest(testCase); err != nil {
return fmt.Errorf("chain test failed: %v", err)
}
}
log.Info("Test PASSED")
default:
return fmt.Errorf("receive unknown test type: %T", typedTest)
}
log.Info("-----------------------------")
}
return nil
}
func main() {
var yamlDir = flag.String("tests-dir", "", "path to directory of yaml tests")
flag.Parse()
customFormatter := new(prefixed.TextFormatter)
customFormatter.TimestampFormat = "2006-01-02 15:04:05"
customFormatter.FullTimestamp = true
log.SetFormatter(customFormatter)
tests, err := readTestsFromYaml(*yamlDir)
if err != nil {
log.Fatalf("Fail to load tests from yaml: %v", err)
}
sb, err := backend.NewSimulatedBackend()
if err != nil {
log.Fatalf("Could not create backend: %v", err)
}
log.Info("----Running Tests----")
startTime := time.Now()
err = runTests(tests, sb)
if err != nil {
log.Fatalf("Test failed %v", err)
}
endTime := time.Now()
log.Infof("Test Runs Finished In: %v", endTime.Sub(startTime))
}

View File

@@ -1,63 +0,0 @@
# Credits to Danny Ryan (Ethereum Foundation)
---
title: Sample Ethereum 2.0 Beacon Chain Test
summary: Basic, functioning fork choice rule for Ethereum 2.0
test_suite: prysm
test_cases:
- config:
validator_count: 100
cycle_length: 8
shard_count: 64
min_committee_size: 8
slots:
# "slot_number" has a minimum of 1
- slot_number: 1
new_block:
id: A
# "*" is used for the genesis block
parent: "*"
attestations:
- block: A
# the following is a shorthand string for [0, 1, 2, 3, 4, 5]
validators: "0-5"
- slot_number: 2
new_block:
id: B
parent: A
attestations:
- block: B
validators: "0-5"
- slot_number: 3
new_block:
id: C
parent: A
attestations:
# attestation "committee_slot" defaults to the slot during which the attestation occurs
- block: C
validators: "2-7"
# default "committee_slot" can be directly overridden
- block: C
committee_slot: 2
validators: "6, 7"
- slot_number: 4
new_block:
id: D
parent: C
attestations:
- block: D
validators: "1-4"
# slots can be skipped entirely (5 in this case)
- slot_number: 6
new_block:
id: E
parent: D
attestations:
- block: E
validators: "0-4"
- block: B
validators: "5, 6, 7"
results:
head: E
last_justified_block: "*"
last_finalized_block: "*"

View File

@@ -1,44 +0,0 @@
# Credits to Danny Ryan (Ethereum Foundation)
---
title: Shuffling Algorithm Tests
summary: Test vectors for shuffling a list based upon a seed using `shuffle`
test_suite: shuffle
fork: tchaikovsky
version: 1.0
test_cases:
- config:
validator_count: 100
cycle_length: 8
shard_count: 32
min_committee_size: 8
- input: []
output: []
seed: !!binary ""
- name: boring_list
description: List with a single element, 0
input: [0]
output: [0]
seed: !!binary ""
- input: [255]
output: [255]
seed: !!binary ""
- input: [4, 6, 2, 6, 1, 4, 6, 2, 1, 5]
output: [2, 1, 6, 1, 4, 5, 6, 4, 6, 2]
seed: !!binary ""
- input: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
output: [4, 9, 1, 13, 8, 3, 5, 10, 7, 6, 11, 2, 12]
seed: !!binary ""
- input: [65, 6, 2, 6, 1, 4, 6, 2, 1, 5]
output: [6, 1, 2, 2, 6, 6, 1, 5, 65, 4]
seed: !!binary |
JlAYJ5H2j8g7PLiPHZI/rTS1uAvKiieOrifPN6Moso0=
- input: [35, 6, 2, 6, 1, 4, 6, 2, 1, 5, 7, 98, 3, 2, 11]
output: [35, 1, 6, 4, 6, 6, 5, 11, 2, 3, 7, 1, 2, 2, 98]
seed: !!binary |
VGhlIHF1aWNrIGJyb3duIGZveCBqdW1wcyBvdmVyIDEzIGxhenkgZG9ncy4=
- input: [35, 6, 2, 6, 1, 4, 6, 2, 1, 5, 7, 98, 3, 2, 11]
output: [98, 6, 6, 11, 5, 35, 2, 7, 2, 6, 4, 2, 1, 3, 1]
seed: !!binary |
rDTbe23J4UA0yLIurjbJqk49VcavAC0Nysas+l5MlwvLc0B/JqQ=

Some files were not shown because too many files have changed in this diff.