Compare commits

...

47 Commits

Author SHA1 Message Date
kasey
39c33b82ad Switch to lazy state balance cache (#9822)
* quick lazy balance cache proof of concept

* WIP refactoring to use lazy cache

* updating tests to use functional opts

* updating the rest of the tests, all passing

* use mock stategen where possible

reduces the number of test cases that require db setup

* rename test opt method for clear link

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* test assumption that zerohash is in db

* remove unused MockDB (mocking stategen instead)

* fix cache bug, switch to sync.Mutex

* improve test coverage for the state cache

* uncomment failing genesis test for discussion

* gofmt

* remove unused Service struct member

* cleanup unused func input

* combining type declaration in signature

* don't export the state cache constructor

* work around blockchain deps w/ new file

service_test brings in a ton of dependencies that make bazel rules
for blockchain complex, so just sticking these mocks in their own
file simplifies things.

* gofmt

* remove intentionally failing test

this test established that the zero root can't be used to look up the
state, resulting in a change in another PR to update stategen to use the
GenesisState db method instead when the zero root is detected.

* fixed error introduced by develop refresh

* fix import ordering

* appease deepsource

* remove unused function

* godoc comments on new requires/assert

* defensive constructor per terence's PR comment

* more differentiated balance cache metric names

Co-authored-by: kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-19 15:59:26 +00:00
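
For orientation, here is the shape of the new lazy cache, pieced together from fragments visible in the diffs further down this page (the stateByRooter interface in mock_test.go, the justifiedBalances.get call sites in process_block.go and receive_attestation.go, and the balance derivation this commit deletes from head.go). This is a hedged sketch, not the committed implementation: the exact fields and locking granularity inside stateBalanceCache are assumptions.

package blockchain

import (
    "context"
    "sync"

    "github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
    coretime "github.com/prysmaticlabs/prysm/beacon-chain/core/time"
    "github.com/prysmaticlabs/prysm/beacon-chain/state"
)

// stateByRooter is the one stategen method the cache depends on; depending
// on this interface rather than *stategen.State is what lets tests swap in
// a mock without a db (see mock_test.go below).
type stateByRooter interface {
    StateByRoot(ctx context.Context, root [32]byte) (state.BeaconState, error)
}

// stateBalanceCache computes justified balances lazily, on the first get
// for a root, instead of eagerly every time the justified checkpoint moves.
type stateBalanceCache struct {
    sync.Mutex
    stateGen stateByRooter
    root     [32]byte // root the cached balances were derived from
    balances []uint64
}

// get returns the cached balances on a hit; on a miss it loads the state
// for root and rebuilds the active-validator effective-balance slice.
func (c *stateBalanceCache) get(ctx context.Context, root [32]byte) ([]uint64, error) {
    c.Lock()
    defer c.Unlock()
    if root == c.root && c.balances != nil {
        return c.balances, nil // hit
    }
    st, err := c.stateGen.StateByRoot(ctx, root)
    if err != nil {
        return nil, err
    }
    epoch := coretime.CurrentEpoch(st)
    balances := make([]uint64, st.NumValidators())
    if err := st.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
        if helpers.IsActiveValidatorUsingTrie(val, epoch) {
            balances[idx] = val.EffectiveBalance()
        }
        return nil
    }); err != nil {
        return nil, err
    }
    c.root, c.balances = root, balances
    return balances, nil
}

Because get replays a state only when the requested root differs from the cached one, the eager cacheJustifiedStateBalances calls removed below collapse into a single lazy replay at first use, tracked by the stateBalanceCacheHit/Miss counters added in metrics.go.
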
Potuz
905e0f4c1c Monitor metrics (#9921)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-11-19 15:34:28 +00:00
Nishant Das
0ade1f121d Add Balance Field Trie (#9793)
* save stuff

* fix in v1

* clean up more

* fix bugs

* add comments and clean up

* add flag + test

* add tests

* fmt

* radek's review

* gaz

* kasey's review

* gaz and new conditional

* improve naming
2021-11-19 20:01:15 +08:00
Raul Jordan
ee52f8dff3 Implement Validator Standard Key Manager API Delete Keystores (#9886)
* begin

* implement delete and filter export history

* rem deleted code

* delete keystores all tests

* gaz

* test

* double import fix

* test

* surface errors to user

* add in changes

* edit proto

* edit

* del

* tests

* gaz

* slice

* duplicate key found in request
2021-11-19 04:11:54 +00:00
Potuz
50159c2e48 Monitor attestations (#9901)
Log attestation performance on the validator monitor
2021-11-18 22:14:56 -03:00
Preston Van Loon
cae58bbbd8 Unskip v2 end to end check for prior release (#9920) 2021-11-18 23:17:19 +00:00
Raul Jordan
9b37418761 Warn Users In Case Slashing Protection Exports are Empty (#9919)
* export text

* Update cmd/validator/slashing-protection/export.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2021-11-18 20:49:19 +00:00
Raul Jordan
a78cdf86cc Warn Users if Slashing Protection History is Empty (#9909)
* warning in case not found

* better comment

* fatal
2021-11-17 15:39:43 +00:00
Nishant Das
1c4ea75a18 Prevent Reprocessing of a Block From Our Pending Queue (#9904)
* fix bugs

* test

* raul's review
2021-11-17 10:05:50 -05:00
terence tsao
6f4c80531c Add field roots for beacon state v3 (#9914)
* Add field roots for beacon state

* Update BUILD.bazel

* Adding an exception for state v3

* fix deadcode

Co-authored-by: nisdas <nishdas93@gmail.com>
2021-11-17 09:04:49 +00:00
terence tsao
e3246922eb Add merge state type definitions (#9908) 2021-11-16 08:36:13 -08:00
Nishant Das
5962363847 Add in Deposit Map To Cache (#9885)
* add in map + tests

* tes

* raul's review

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-16 02:07:42 +00:00
Potuz
5e8cf9cd28 Monitor exits (#9899)
* Validator monitor process slashings

* Preston's comment

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* add test for tracked index

* Prestons requested changes

* Process voluntary exits in validator monitor

* Address Reviewers' comments

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-15 19:45:18 +00:00
Raul Jordan
b5ca09bce6 Add Network Flags to Slashing Protection Export Command (#9907) 2021-11-15 19:21:54 +00:00
terence tsao
720ee3f2a4 Add merge state protobuf (#9888)
* Add beacon state protobuf

* Update proto/prysm/v1alpha1/beacon_state.proto

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update proto/prysm/v1alpha1/beacon_state.proto

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Regenerate pbs

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-15 18:53:43 +00:00
Potuz
3d139d35f6 Validator monitor process slashings (#9898)
* Validator monitor process slashings

* Preston's comment

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* add test for tracked index

* Prestons requested changes

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2021-11-15 17:39:55 +00:00
terence tsao
ea38969af2 Register start-up state when interop mode (#9900)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-15 16:32:54 +00:00
terence tsao
f753ce81cc Add merge block protobuf (#9887)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-15 15:42:18 +00:00
Nishant Das
9d678b0c47 Clean Up Methods In Prysm (#9903)
* clean up

* go simple
2021-11-15 10:13:52 -05:00
Nishant Das
4a4a7e97df handle canceled contexts (#9893)
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-13 16:03:27 +08:00
Nishant Das
652b1617ed Use Next Slot Cache In More Places (#9884)
* add changes

* terence's review

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-12 23:46:06 +00:00
Nishant Das
42edc4f8dd Improve RNG Documentation (#9892)
* add comments

* comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-12 20:30:09 +00:00
terence tsao
70d5bc448f Add cli override setting for merge (#9891) 2021-11-12 12:01:14 -08:00
Preston Van Loon
d1159308c8 Update security.txt (#9896)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-12 15:55:45 +00:00
Nishant Das
e01298bd08 Remove Superflous Errors From Parameter Registration (#9894) 2021-11-12 15:28:21 +00:00
terence tsao
2bcb62db28 Remove unused import (#9890) 2021-11-11 22:54:48 +00:00
terence tsao
672fb72a7f Update spec tests to v1.1.5 (#9875)
* Update spec tests to v1.1.5

* Ignore `TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH`

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-10 02:41:16 +00:00
terence tsao
7fbd5b06da time pkg: can upgrade to merge helper (#9879)
* Helper: can upgrade to merge

* Typo
2021-11-09 18:36:50 +00:00
Raul Jordan
0fb91437fc Group Slashing Protection History Packages Idiomatically (#9873)
* rename

* gaz

* gaz

* Gaz

* rename

* edit

* gaz

* gaz

* build

* fix

* build

* fix up

* fix

* gaz

* cli import export

* gaz

* flag

* rev

* comm

* package renames

* radek
2021-11-09 16:49:28 +00:00
Raul Jordan
6e731bdedd Implement List Keystores for Standard API (#9863)
* start api

* keystores list

* gaz

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2021-11-08 19:08:17 +00:00
Dan Loewenherz
40eb718ba2 Fix typos: repones -> response, attestion -> attestation (#9868)
* Fix typo: repones -> response

* Fix typo: attestion -> attestation
2021-11-07 11:02:01 -06:00
terence tsao
e1840f7523 Share finalized state at start up (#9843)
* Reuse finalized beacon state at startup

* Better logging for replay

* Update tests

* Fix lint

* Add `WithFinalizedStateAtStartup`

* Update service.go

* Remove unused fields

* Update service_test.go

* Update service_test.go

* Update service_test.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-06 13:45:16 +00:00
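
A hypothetical wiring example for this change, built around the WithFinalizedStateAtStartUp option that appears in the options.go diff below; the stategen Resume accessor and the surrounding plumbing are assumptions for illustration only.

// Sketch of node start-up wiring (names other than the With* options are assumed):
sg := stategen.New(beaconDB)
finalizedState, err := sg.Resume(ctx) // assumed accessor for the last finalized state
if err != nil {
    return err
}
chainService, err := blockchain.NewService(ctx,
    blockchain.WithDatabase(beaconDB),
    blockchain.WithStateGen(sg),
    blockchain.WithFinalizedStateAtStartUp(finalizedState),
)
if err != nil {
    return err
}
chainService.Start()
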
terence tsao
b07e1ba7a4 GenesisState when zero hash tests (#9866) 2021-11-06 06:57:54 +01:00
Raul Jordan
233171d17c [Service Config Revamp] - Sync Service With Functional Options (#9859)
* sync config refactor

* rem

* rem

* testing

* gaz

* next

* fuzz

* build

* fuzz

* rev

* log

* cfg
2021-11-05 19:08:58 +00:00
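
These service-config revamps converge on the functional-options pattern that the blockchain test diffs below also adopt (WithDatabase, WithStateGen, WithForkChoiceStore passed straight to NewService). A condensed sketch of the pattern; WithStateNotifier stands in as one representative option and its parameter type is assumed.

package blockchain

import (
    "context"

    statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
)

// Option mutates a Service during construction; returning an error lets an
// option reject invalid configuration instead of failing later at runtime.
type Option func(s *Service) error

// WithStateNotifier is one representative option (parameter type assumed).
func WithStateNotifier(sn statefeed.Notifier) Option {
    return func(s *Service) error {
        s.cfg.StateNotifier = sn
        return nil
    }
}

// NewService applies each option in order, then fills defaults for anything
// left unset, as the defensive justifiedBalances construction in the
// service.go diff below does.
func NewService(ctx context.Context, opts ...Option) (*Service, error) {
    srv := &Service{cfg: &config{}}
    for _, opt := range opts {
        if err := opt(srv); err != nil {
            return nil, err
        }
    }
    return srv, nil
}

Tests can then assemble only the dependencies a code path needs (see testServiceOptsWithDB and testServiceOptsNoDB in mock_test.go below) instead of overwriting service.cfg after construction.
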
kasey
2b0e132201 fix #9851 using GenesisState when zero hash in stategen StateByRoot/StateByRootInitialSync (#9852)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2021-11-05 18:14:37 +00:00
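
This fix pairs with the "remove intentionally failing test" note in the lazy-cache commit above: the zero root cannot be resolved through stategen's usual state-summary lookup, so it is special-cased to the GenesisState db method. A hedged sketch of that guard; the receiver's beaconDB field and the normal-path helper are assumed names.

// StateByRoot, sketched: a zero block root denotes genesis, whose state is
// served by the dedicated GenesisState accessor rather than by replay.
func (s *State) StateByRoot(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error) {
    if blockRoot == params.BeaconConfig().ZeroHash {
        return s.beaconDB.GenesisState(ctx) // field name assumed
    }
    return s.loadStateByRoot(ctx, blockRoot) // assumed helper for the normal hot/cold path
}
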
Raul Jordan
ad9e5331f5 Update Prysm Web to Use v1.0.1 (#9858)
* update web

* site data update

* fmt
2021-11-04 16:12:32 -04:00
terence tsao
341a2f1ea3 Use math.MaxUint64 (#9857)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-04 18:48:09 +00:00
Raul Jordan
7974fe01cd [Service Revamp] - Powchain Service With Functional Options (#9856)
* begin powchain service refactor

* begin refactor

* powchain passes

* options pkg

* gaz

* rev

* rev

* comments

* move to right place

* bazel powchain

* fix test

* log

* contract addr

* happy path and comments

* gaz

* new service
2021-11-04 14:19:44 -04:00
Raul Jordan
ae56f643eb Rename Web UI Performance Endpoint to Summary (#9855)
* rename endpoint

* rename endpoint
2021-11-04 15:13:16 +00:00
terence tsao
2ea09b621e Simplify should update justified (#9837)
* Simplify should update justified

* Update tests

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-03 22:29:03 +00:00
terence tsao
40fedee137 Use prev epoch source naming correctly (#9840)
* Use prev epoch source correctly

* Update epoch_precompute_test.go

* Update type.go

* Update tests

* Gazelle

* Move version to read only

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-11-03 21:50:41 +00:00
kasey
7cdddcb015 Weak subjectivity verification refactor (#9832)
* weak subjectivity verification refactor

This separates weak subjectivity verification into a
distinct type which does not have a dependency on
blockchain.Service

* remove unused variable

* saving enqueued init blocks before ws verify

* remove TODO, handled in previous commit

* accept suggested comment change

start comment w/ name of function NewWeakSubjectivityVerifier

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* update log w/ Raul's suggested language

* explicit zero value for clarity

* add comments clarifying how we adhere to spec

* more clear TODO per Raul's feedback

* gofmt

Co-authored-by: kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2021-11-03 15:07:46 +00:00
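
The refactor's surface shows up near the end of this compare: service.go gains wsVerifier *WeakSubjectivityVerifier and constructs it via NewWeakSubjectivityVerifier(cfg.WeakSubjectivityCheckpt, cfg.BeaconDB), and receive_block.go calls VerifyWeakSubjectivity(ctx, finalizedEpoch). A sketch of the decoupled type under those signatures; the narrow db interface and the verification body are simplified assumptions.

package blockchain

import (
    "context"
    "errors"

    types "github.com/prysmaticlabs/eth2-types" // import paths assumed for this vintage of the repo
    "github.com/prysmaticlabs/prysm/encoding/bytesutil"
    ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
    "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
)

// blockByRooter is the only database capability the verifier needs, which is
// what removes the dependency on blockchain.Service.
type blockByRooter interface {
    Block(ctx context.Context, root [32]byte) (block.SignedBeaconBlock, error)
}

// WeakSubjectivityVerifier checks, at most once, that a configured weak
// subjectivity checkpoint block is part of the node's finalized chain.
type WeakSubjectivityVerifier struct {
    enabled  bool // false when no ws checkpoint was supplied
    verified bool
    root     [32]byte
    epoch    types.Epoch
    db       blockByRooter
}

func NewWeakSubjectivityVerifier(wsc *ethpb.Checkpoint, db blockByRooter) (*WeakSubjectivityVerifier, error) {
    if wsc == nil || len(wsc.Root) == 0 {
        return &WeakSubjectivityVerifier{enabled: false}, nil
    }
    return &WeakSubjectivityVerifier{
        enabled: true,
        root:    bytesutil.ToBytes32(wsc.Root),
        epoch:   wsc.Epoch,
        db:      db,
    }, nil
}

// VerifyWeakSubjectivity runs once the finalized epoch passes the checkpoint
// epoch, verifying that the checkpoint block exists in our finalized history.
func (v *WeakSubjectivityVerifier) VerifyWeakSubjectivity(ctx context.Context, finalizedEpoch types.Epoch) error {
    if !v.enabled || v.verified || finalizedEpoch <= v.epoch {
        return nil
    }
    blk, err := v.db.Block(ctx, v.root)
    if err != nil {
        return err
    }
    if blk == nil || blk.IsNil() {
        return errors.New("node does not have the weak subjectivity block in db")
    }
    v.verified = true
    return nil
}

Saving enqueued init-sync blocks before verifying (as the receive_block.go diff does) ensures the db lookup can actually see the checkpoint block.
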
Raul Jordan
4440ac199f Empty Genesis Validators Root Check in Slashing Protection Export (#9849)
* test for empty genesis validators root

* precod

* fix test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-03 14:16:57 +00:00
Radosław Kapka
d78428c49e Return proper responses from KeyManagement service (#9846)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-02 15:46:11 +00:00
Raul Jordan
2e45fada34 Validate Password on RPC CreateWallet Request (#9848)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-02 15:20:43 +00:00
Raul Jordan
4c18d291f4 Rename Interop-Cold-Start Package to Deterministic-Genesis (#9841)
* rename interop-cold-start

* rev

* rev

* BUILD

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2021-11-02 14:55:36 +00:00
Preston Van Loon
dfe33b0770 stateutil: Remove duplicated MerkleizeTrieLeaves method (#9847) 2021-11-02 14:31:16 +00:00
294 changed files with 8634 additions and 4123 deletions

View File

@@ -5,21 +5,22 @@ Contact: mailto:security@prysmaticlabs.com
Encryption: openpgp4fpr:0AE0051D647BA3C1A917AF4072E33E4DF1A5036E
Encryption: openpgp4fpr:CD08DE68C60B82D3EE2A3F7D95452A701810FEDB
Encryption: openpgp4fpr:317D6E91058F8F3C2303BA7756313E44581297A6
Encryption: openpgp4fpr:79C59A585E3FD3AFFA00F5C22940A6479DA7C9EC
Preferred-Languages: en
Canonical: https://github.com/prysmaticlabs/prysm/tree/master/.well-known/security.txt
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAl++klgACgkQcuM+TfGl
A27rQw/6A29p1W20J0v+h218p8XWLSUpTIGLnZTxw6KqdyVXMzlsQK0YG4G2s2AB
0LKh7Ae/Di5E0U+Z4AjUW5nc5eaCxK36GMscH9Ah0rgJwNYxEJw7/2o8ZqVT/Ip2
+56rFihRqxFZfaCNKFVuZFaL9jKewV9FKYP38ID6/SnTcrOHiu2AoAlyZGmB03p+
iT57SPRHatygeY4xb/gwcfREFWEv+VHGyBTv8A+6ABZDxyurboCFMERHzFICrbmk
8UdHxxlWZDnHAbAUyAwpERC5znx6IHXQJwF8TMtu6XY6a6axT2XBOyJDF9/mZOz+
kdkz6loX5uxaQBGLtTv6Kqf1yUGANOZ16VhHvWwL209LmHmigIVQ+qSM6c79PsW/
vrsqdz3GBsiMC5Fq2vYgnbgzpfE8Atjn0y7E+j4R7IvwOAE/Ro/b++nqnc4YqhME
P/yTcfGftaCrdSNnQCXeoV9JxpFM5Xy8KV3eexvNKbcgA/9DtgxL5i+s5ZJkUT9A
+qJvoRrRyIym32ghkHgtFJKB3PLCdobeoOVRk6EnMo9zKSiSK2rZEJW8Ccbo515D
W9qUOn3GF7lNVuUFAU/YKEdmDp/AVaViZ7vH+8aq0LC0HBkZ8XlzWnWoArS8sMhw
fX0R9g/HMgrwNte/d0mwim5lJ2Plgv60Bh4grJqwZJeWbU0zi1U=
=uW+X
iQIzBAEBCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAmGOfiYACgkQcuM+TfGl
A24YwRAAiQk3w6yzqSEggrOlNoNn04iu/rWZdn5ihkQgzACXy8XH2D1gdKLChE/X
7e5bUtgE2aCuHryQjwoKxqZakviBJFstVmHgF64rXv2zKhpqA30Mj4fI+T3zn8I+
+FpFV0TTsxNLDx+AcR1eQ1nSayO7ImUDIfOQNDDnSZZy42Bc+F+QIGKB3aH/8bpG
kT+bDTZrXvX+TE1gZTbAtZG8sH8g/zadoWEHIhfXUuYb0kTz+DRzAxoqU4j4Z4ee
1zSfFAgfJwxJP4kWD7s4xkE1sBbCgGBeD6cW/C2lbcfIei+XSizLpHW3jD9dNqh4
fLkmEspSa/LV/iXFq8nFzu/GLww4q+sQZDzzDKZyws54CrATinRitZMhzoIL0bTn
yFZVOGHosFAMEVZ36dl1Aw2+B2W6tr2CVr9c5zfV+kup5/KZH1EmT5nYY/zFwfg2
jYCFB5wmYeiyWZvuprgJXRArgVZLZaJxwWazlPVk4i/4vPvRgvfHqOwHCBe8DXy0
VHPhpewwb/ECYek1KoaNQflgR8iH2GMHkC5RjhGDAt1S0AQDtite5m4ZYt1kvO9E
k/znkv89dduhL9CKDvZvnI+DICwsTrf//4KJ8PM/qaPAJa4GvtiUU/eS/jKBivtv
OP5dZQtX6KPc9ewqqZgn622uHSezoBidgeTkdZsJ6tw2eIu0lsY=
=V7L0
-----END PGP SIGNATURE-----

View File

@@ -225,7 +225,7 @@ filegroup(
url = "https://github.com/eth2-clients/slashing-protection-interchange-tests/archive/b8413ca42dc92308019d0d4db52c87e9e125c4e9.tar.gz",
)
consensus_spec_version = "v1.1.3"
consensus_spec_version = "v1.1.5"
bls_test_version = "v0.1.1"
@@ -241,7 +241,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "e572f8c57e2dbbaeee056a600dc9d08396010dd5134a3a95e43c540470acf6f5",
sha256 = "a7d7173d953494c0dfde432c9fc064c25d46d666b024749b3474ae0cdfc50050",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -257,7 +257,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "7e2f62eaae9fd541690cc61d252556d0c5deb585ca1873aacbeb5b02d06f1362",
sha256 = "f86872061588c0197516b23025d39e9365b4716c112218a618739dc0d6f4666a",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -273,7 +273,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "05cbb89810c8acd6c57c4773ddfd167305cd4539960e9b4d7b69e1a988b35ad2",
sha256 = "7a06975360fd37fbb4694d0e06abb78d2a0835146c1d9b26d33569edff8b98f0",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -288,7 +288,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "0cef67b08448f7eb43bf66c464451c9e7a4852df8ef90555cca6d440e3436882",
sha256 = "87d8089200163340484d61212fbdffbb5d9d03e1244622761dcb91e641a65761",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -362,9 +362,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "54ce527b83d092da01127f2e3816f4d5cfbab69354caba8537f1ea55889b6d7c",
sha256 = "0a3d94428ea28916276694c517b82b364122063fdbf924f54ee9ae0bc500289f",
urls = [
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v1.0.0-beta.4/prysm-web-ui.tar.gz",
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v1.0.1/prysm-web-ui.tar.gz",
],
)

View File

@@ -18,6 +18,7 @@ go_library(
"receive_attestation.go",
"receive_block.go",
"service.go",
"state_balance_cache.go",
"weak_subjectivity_checks.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/blockchain",
@@ -95,6 +96,7 @@ go_test(
"init_test.go",
"log_test.go",
"metrics_test.go",
"mock_test.go",
"process_attestation_test.go",
"process_block_test.go",
"receive_attestation_test.go",
@@ -141,6 +143,7 @@ go_test(
"chain_info_norace_test.go",
"checktags_test.go",
"init_test.go",
"mock_test.go",
"receive_block_test.go",
"service_norace_test.go",
],

View File

@@ -10,7 +10,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
@@ -42,9 +41,6 @@ func (s *Service) updateHead(ctx context.Context, balances []uint64) error {
// ensure head gets its best justified info.
if s.bestJustifiedCheckpt.Epoch > s.justifiedCheckpt.Epoch {
s.justifiedCheckpt = s.bestJustifiedCheckpt
if err := s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root)); err != nil {
return err
}
}
// Get head from the fork choice service.
@@ -273,57 +269,6 @@ func (s *Service) hasHeadState() bool {
return s.head != nil && s.head.state != nil
}
// This caches justified state balances to be used for fork choice.
func (s *Service) cacheJustifiedStateBalances(ctx context.Context, justifiedRoot [32]byte) error {
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
s.clearInitSyncBlocks()
var justifiedState state.BeaconState
var err error
if justifiedRoot == s.genesisRoot {
justifiedState, err = s.cfg.BeaconDB.GenesisState(ctx)
if err != nil {
return err
}
} else {
justifiedState, err = s.cfg.StateGen.StateByRoot(ctx, justifiedRoot)
if err != nil {
return err
}
}
if justifiedState == nil || justifiedState.IsNil() {
return errors.New("justified state can't be nil")
}
epoch := time.CurrentEpoch(justifiedState)
justifiedBalances := make([]uint64, justifiedState.NumValidators())
if err := justifiedState.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
if helpers.IsActiveValidatorUsingTrie(val, epoch) {
justifiedBalances[idx] = val.EffectiveBalance()
} else {
justifiedBalances[idx] = 0
}
return nil
}); err != nil {
return err
}
s.justifiedBalancesLock.Lock()
defer s.justifiedBalancesLock.Unlock()
s.justifiedBalances = justifiedBalances
return nil
}
func (s *Service) getJustifiedBalances() []uint64 {
s.justifiedBalancesLock.RLock()
defer s.justifiedBalancesLock.RUnlock()
return s.justifiedBalances
}
// Notifies a common event feed of a new chain head event. Called right after a new
// chain head is determined, set, and saved to disk.
func (s *Service) notifyNewHeadEvent(

View File

@@ -123,13 +123,15 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
func TestCacheJustifiedStateBalances_CanCache(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
ctx := context.Background()
state, _ := util.DeterministicGenesisState(t, 100)
r := [32]byte{'a'}
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: r[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), state, r))
require.NoError(t, service.cacheJustifiedStateBalances(context.Background(), r))
require.DeepEqual(t, service.getJustifiedBalances(), state.Balances(), "Incorrect justified balances")
balances, err := service.justifiedBalances.get(ctx, r)
require.NoError(t, err)
require.DeepEqual(t, balances, state.Balances(), "Incorrect justified balances")
}
func TestUpdateHead_MissingJustifiedRoot(t *testing.T) {

View File

@@ -25,18 +25,18 @@ func TestService_TreeHandler(t *testing.T) {
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetBalances([]uint64{params.BeaconConfig().GweiPerEth}))
cfg := &config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
[32]byte{'a'},
),
StateGen: stategen.New(beaconDB),
fcs := protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
[32]byte{'a'},
)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
s, err := NewService(ctx)
s, err := NewService(ctx, opts...)
require.NoError(t, err)
s.cfg = cfg
require.NoError(t, s.cfg.ForkChoiceStore.ProcessBlock(ctx, 0, [32]byte{'a'}, [32]byte{'g'}, [32]byte{'c'}, 0, 0))
require.NoError(t, s.cfg.ForkChoiceStore.ProcessBlock(ctx, 1, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'c'}, 0, 0))
s.setHead([32]byte{'a'}, wrapper.WrappedPhase0SignedBeaconBlock(util.NewBeaconBlock()), headState)

View File

@@ -130,6 +130,14 @@ var (
Name: "sync_head_state_hit",
Help: "The number of sync head state requests that are present in the cache.",
})
stateBalanceCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "state_balance_cache_hit",
Help: "Count the number of state balance cache hits.",
})
stateBalanceCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "state_balance_cache_miss",
Help: "Count the number of state balance cache hits.",
})
)
// reportSlotMetrics reports slot related metrics.

View File

@@ -0,0 +1,50 @@
package blockchain
import (
"context"
"errors"
"testing"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
)
func testServiceOptsWithDB(t *testing.T) []Option {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
return []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
}
// warning: only use these opts when you are certain there are no db calls
// in your code path. this is a lightweight way to satisfy the stategen/beacondb
// initialization requirements w/o the overhead of db init.
func testServiceOptsNoDB() []Option {
return []Option{
withStateBalanceCache(satisfactoryStateBalanceCache()),
}
}
type mockStateByRooter struct {
state state.BeaconState
err error
}
var _ stateByRooter = &mockStateByRooter{}
func (m mockStateByRooter) StateByRoot(_ context.Context, _ [32]byte) (state.BeaconState, error) {
return m.state, m.err
}
// returns an instance of the state balance cache that can be used
// to satisfy the requirement for one in NewService, but which will
// always return an error if used.
func satisfactoryStateBalanceCache() *stateBalanceCache {
err := errors.New("satisfactoryStateBalanceCache doesn't perform real caching")
return &stateBalanceCache{stateGen: mockStateByRooter{err: err}}
}

View File

@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
@@ -128,3 +129,18 @@ func WithSlasherAttestationsFeed(f *event.Feed) Option {
return nil
}
}
func withStateBalanceCache(c *stateBalanceCache) Option {
return func(s *Service) error {
s.justifiedBalances = c
return nil
}
}
// WithFinalizedStateAtStartUp stores the finalized state at start up.
func WithFinalizedStateAtStartUp(st state.BeaconState) Option {
return func(s *Service) error {
s.cfg.FinalizedStateAtStartUp = st
return nil
}
}

View File

@@ -24,14 +24,13 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, [32]byte{})),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
_, err = blockTree1(t, beaconDB, []byte{'g'})
require.NoError(t, err)
@@ -131,14 +130,14 @@ func TestStore_OnAttestation_Ok(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
StateGen: stategen.New(beaconDB),
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(time.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
@@ -157,13 +156,12 @@ func TestStore_SaveCheckpointState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
s, err := util.NewBeaconState()
require.NoError(t, err)
@@ -230,13 +228,12 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
epoch := types.Epoch(1)
baseState, _ := util.DeterministicGenesisState(t, 1)
@@ -268,12 +265,10 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, service.verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, 32)}))
@@ -281,12 +276,10 @@ func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, service.verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Epoch: 1}))
@@ -294,12 +287,10 @@ func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
func TestAttEpoch_NotMatch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
nowTime := 2 * uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
err = service.verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, 32)})
@@ -308,12 +299,9 @@ func TestAttEpoch_NotMatch(t *testing.T) {
func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
d := util.HydrateAttestationData(&ethpb.AttestationData{})
assert.ErrorContains(t, "signed beacon block can't be nil", service.verifyBeaconBlock(ctx, d))
@@ -321,12 +309,10 @@ func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b := util.NewBeaconBlock()
b.Block.Slot = 2
@@ -340,12 +326,10 @@ func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
func TestVerifyBeaconBlock_OK(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b := util.NewBeaconBlock()
b.Block.Slot = 2
@@ -361,10 +345,14 @@ func TestVerifyFinalizedConsistency_InconsistentRoot(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
@@ -387,12 +375,10 @@ func TestVerifyFinalizedConsistency_InconsistentRoot(t *testing.T) {
func TestVerifyFinalizedConsistency_OK(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
@@ -415,12 +401,10 @@ func TestVerifyFinalizedConsistency_OK(t *testing.T) {
func TestVerifyFinalizedConsistency_IsCanonical(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32

View File

@@ -151,7 +151,12 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
s.finalizedCheckpt = postState.FinalizedCheckpoint()
}
if err := s.updateHead(ctx, s.getJustifiedBalances()); err != nil {
balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if err != nil {
msg := fmt.Sprintf("could not read balances for state w/ justified checkpoint %#x", s.justifiedCheckpt.Root)
return errors.Wrap(err, msg)
}
if err := s.updateHead(ctx, balances); err != nil {
log.WithError(err).Warn("Could not update head")
}

View File

@@ -135,7 +135,24 @@ func (s *Service) verifyBlkFinalizedSlot(b block.BeaconBlock) error {
// shouldUpdateCurrentJustified prevents the bouncing attack by only updating conflicting justified
// checkpoints in the fork choice during the early slots of the epoch.
// Otherwise, incorporation of a new justified checkpoint is delayed until the next epoch boundary.
// See https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114 for more detailed analysis and discussion.
//
// Spec code:
// def should_update_justified_checkpoint(store: Store, new_justified_checkpoint: Checkpoint) -> bool:
// """
// To address the bouncing attack, only update conflicting justified
// checkpoints in the fork choice if in the early slots of the epoch.
// Otherwise, delay incorporation of new justified checkpoint until next epoch boundary.
//
// See https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114 for more detailed analysis and discussion.
// """
// if compute_slots_since_epoch_start(get_current_slot(store)) < SAFE_SLOTS_TO_UPDATE_JUSTIFIED:
// return True
//
// justified_slot = compute_start_slot_at_epoch(store.justified_checkpoint.epoch)
// if not get_ancestor(store, new_justified_checkpoint.root, justified_slot) == store.justified_checkpoint.root:
// return False
//
// return True
func (s *Service) shouldUpdateCurrentJustified(ctx context.Context, newJustifiedCheckpt *ethpb.Checkpoint) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.shouldUpdateCurrentJustified")
defer span.End()
@@ -143,51 +160,20 @@ func (s *Service) shouldUpdateCurrentJustified(ctx context.Context, newJustified
if slots.SinceEpochStarts(s.CurrentSlot()) < params.BeaconConfig().SafeSlotsToUpdateJustified {
return true, nil
}
var newJustifiedBlockSigned block.SignedBeaconBlock
justifiedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(newJustifiedCheckpt.Root))
var err error
if s.hasInitSyncBlock(justifiedRoot) {
newJustifiedBlockSigned = s.getInitSyncBlock(justifiedRoot)
} else {
newJustifiedBlockSigned, err = s.cfg.BeaconDB.Block(ctx, justifiedRoot)
if err != nil {
return false, err
}
}
if newJustifiedBlockSigned == nil || newJustifiedBlockSigned.IsNil() || newJustifiedBlockSigned.Block().IsNil() {
return false, errors.New("nil new justified block")
}
newJustifiedBlock := newJustifiedBlockSigned.Block()
jSlot, err := slots.EpochStart(s.justifiedCheckpt.Epoch)
if err != nil {
return false, err
}
if newJustifiedBlock.Slot() <= jSlot {
return false, nil
}
var justifiedBlockSigned block.SignedBeaconBlock
cachedJustifiedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if s.hasInitSyncBlock(cachedJustifiedRoot) {
justifiedBlockSigned = s.getInitSyncBlock(cachedJustifiedRoot)
} else {
justifiedBlockSigned, err = s.cfg.BeaconDB.Block(ctx, cachedJustifiedRoot)
if err != nil {
return false, err
}
}
if justifiedBlockSigned == nil || justifiedBlockSigned.IsNil() || justifiedBlockSigned.Block().IsNil() {
return false, errors.New("nil justified block")
}
justifiedBlock := justifiedBlockSigned.Block()
b, err := s.ancestor(ctx, justifiedRoot[:], justifiedBlock.Slot())
justifiedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(newJustifiedCheckpt.Root))
b, err := s.ancestor(ctx, justifiedRoot[:], jSlot)
if err != nil {
return false, err
}
if !bytes.Equal(b, s.justifiedCheckpt.Root) {
return false, nil
}
return true, nil
}
@@ -207,9 +193,6 @@ func (s *Service) updateJustified(ctx context.Context, state state.ReadOnlyBeaco
if canUpdate {
s.prevJustifiedCheckpt = s.justifiedCheckpt
s.justifiedCheckpt = cpt
if err := s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root)); err != nil {
return err
}
}
return nil
@@ -220,12 +203,13 @@ func (s *Service) updateJustified(ctx context.Context, state state.ReadOnlyBeaco
// This method has no defense against the fork choice bouncing attack, which is why it is only recommended for use during initial syncing.
func (s *Service) updateJustifiedInitSync(ctx context.Context, cp *ethpb.Checkpoint) error {
s.prevJustifiedCheckpt = s.justifiedCheckpt
s.justifiedCheckpt = cp
if err := s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root)); err != nil {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, cp); err != nil {
return err
}
s.justifiedCheckpt = cp
return s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, cp)
return nil
}
func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) error {
@@ -344,7 +328,9 @@ func (s *Service) finalizedImpliesNewJustified(ctx context.Context, state state.
if !attestation.CheckPointIsEqual(s.justifiedCheckpt, state.CurrentJustifiedCheckpoint()) {
if state.CurrentJustifiedCheckpoint().Epoch > s.justifiedCheckpt.Epoch {
s.justifiedCheckpt = state.CurrentJustifiedCheckpoint()
return s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
// we don't need to check if the previous justified checkpoint was an ancestor since the new
// finalized checkpoint is overriding it.
return nil
}
// Update justified if store justified is not in chain with finalized check point.
@@ -359,9 +345,6 @@ func (s *Service) finalizedImpliesNewJustified(ctx context.Context, state state.
}
if !bytes.Equal(anc, s.finalizedCheckpt.Root) {
s.justifiedCheckpt = state.CurrentJustifiedCheckpoint()
if err := s.cacheJustifiedStateBalances(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root)); err != nil {
return err
}
}
}
return nil

View File

@@ -35,16 +35,17 @@ import (
func TestStore_OnBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
assert.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(genesis)))
@@ -134,13 +135,12 @@ func TestStore_OnBlockBatch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
@@ -186,14 +186,12 @@ func TestStore_OnBlockBatch(t *testing.T) {
func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
cfg := &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
service.genesisTime = time.Now()
update, err := service.shouldUpdateCurrentJustified(ctx, &ethpb.Checkpoint{Root: make([]byte, 32)})
@@ -221,14 +219,13 @@ func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
func TestShouldUpdateJustified_ReturnFalse(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
params.UseMinimalConfig()
defer params.UseMainnetConfig()
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{})
lastJustifiedBlk := util.NewBeaconBlock()
lastJustifiedBlk.Block.ParentRoot = bytesutil.PadTo([]byte{'G'}, 32)
lastJustifiedRoot, err := lastJustifiedBlk.Block.HashTreeRoot()
@@ -253,13 +250,12 @@ func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
s, err := v1.InitializeFromProto(&ethpb.BeaconState{Slot: 1, GenesisValidatorsRoot: params.BeaconConfig().ZeroHash[:]})
require.NoError(t, err)
@@ -287,13 +283,12 @@ func TestCachedPreState_CanGetFromDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
genesisStateRoot := [32]byte{}
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
@@ -325,10 +320,13 @@ func TestUpdateJustified_CouldUpdateBest(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)}
service, err := NewService(ctx)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(protoarray.New(0, 0, [32]byte{})),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
signedBlock := util.NewBeaconBlock()
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(signedBlock)))
@@ -359,10 +357,12 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
service.finalizedCheckpt = &ethpb.Checkpoint{Root: make([]byte, 32)}
@@ -398,10 +398,12 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
service.finalizedCheckpt = &ethpb.Checkpoint{Root: make([]byte, 32)}
@@ -440,10 +442,12 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
service.cfg.ForkChoiceStore = protoarray.New(0, 0, [32]byte{'A'})
// Set finalized epoch to 1.
service.finalizedCheckpt = &ethpb.Checkpoint{Epoch: 1}
@@ -584,7 +588,8 @@ func TestCurrentSlot_HandlesOverflow(t *testing.T) {
}
func TestAncestorByDB_CtxErr(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
cancel()
@@ -596,10 +601,14 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
@@ -640,10 +649,9 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
func TestAncestor_CanUseForkchoice(t *testing.T) {
ctx := context.Background()
cfg := &config{ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
@@ -680,10 +688,14 @@ func TestAncestor_CanUseDB(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
@@ -718,10 +730,9 @@ func TestAncestor_CanUseDB(t *testing.T) {
func TestEnsureRootNotZeroHashes(t *testing.T) {
ctx := context.Background()
cfg := &config{}
service, err := NewService(ctx)
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
service.genesisRoot = [32]byte{'a'}
r := service.ensureRootNotZeros(params.BeaconConfig().ZeroHash)
@@ -733,6 +744,12 @@ func TestEnsureRootNotZeroHashes(t *testing.T) {
func TestFinalizedImpliesNewJustified(t *testing.T) {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
ctx := context.Background()
type args struct {
cachedCheckPoint *ethpb.Checkpoint
@@ -774,9 +791,8 @@ func TestFinalizedImpliesNewJustified(t *testing.T) {
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(test.args.stateCheckPoint))
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB), ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service.justifiedCheckpt = test.args.cachedCheckPoint
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo(test.want.Root, 32)}))
genesisState, err := util.NewBeaconState()
@@ -813,6 +829,12 @@ func TestVerifyBlkDescendant(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
fcs := protoarray.New(0, 0, [32]byte{'a'})
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
b := util.NewBeaconBlock()
b.Block.Slot = 1
r, err := b.Block.HashTreeRoot()
@@ -867,9 +889,8 @@ func TestVerifyBlkDescendant(t *testing.T) {
},
}
for _, tt := range tests {
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB), ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service.finalizedCheckpt = &ethpb.Checkpoint{
Root: tt.args.finalizedRoot[:],
}
@@ -883,12 +904,10 @@ func TestVerifyBlkDescendant(t *testing.T) {
}
func TestUpdateJustifiedInitSync(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
cfg := &config{BeaconDB: beaconDB}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
gBlk := util.NewBeaconBlock()
gRoot, err := gBlk.Block.HashTreeRoot()
@@ -914,10 +933,9 @@ func TestUpdateJustifiedInitSync(t *testing.T) {
func TestHandleEpochBoundary_BadMetrics(t *testing.T) {
ctx := context.Background()
cfg := &config{}
service, err := NewService(ctx)
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
s, err := util.NewBeaconState()
require.NoError(t, err)
@@ -929,10 +947,9 @@ func TestHandleEpochBoundary_BadMetrics(t *testing.T) {
func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
ctx := context.Background()
cfg := &config{}
service, err := NewService(ctx)
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
s, _ := util.DeterministicGenesisState(t, 1024)
service.head = &head{state: s}
@@ -944,18 +961,18 @@ func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
func TestOnBlock_CanFinalize(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New(0, 0, [32]byte{'a'})
depositCache, err := depositcache.New()
require.NoError(t, err)
cfg := &config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
DepositCache: depositCache,
StateNotifier: &mock.MockStateNotifier{},
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
WithDepositCache(depositCache),
WithStateNotifier(&mock.MockStateNotifier{}),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
gs, keys := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
@@ -989,18 +1006,12 @@ func TestOnBlock_CanFinalize(t *testing.T) {
func TestInsertFinalizedDeposits(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := testServiceOptsWithDB(t)
depositCache, err := depositcache.New()
require.NoError(t, err)
cfg := &config{
BeaconDB: beaconDB,
StateGen: stategen.New(beaconDB),
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
DepositCache: depositCache,
}
service, err := NewService(ctx)
opts = append(opts, WithDepositCache(depositCache))
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
gs, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))

View File

@@ -131,7 +131,13 @@ func (s *Service) processAttestationsRoutine(subscribedToStateEvents chan<- stru
continue
}
s.processAttestations(s.ctx)
if err := s.updateHead(s.ctx, s.getJustifiedBalances()); err != nil {
balances, err := s.justifiedBalances.get(s.ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if err != nil {
log.Errorf("Unable to get justified balances for root %v w/ error %s", s.justifiedCheckpt.Root, err)
continue
}
if err := s.updateHead(s.ctx, balances); err != nil {
log.Warnf("Resolving fork due to new attestation: %v", err)
}
}

View File

@@ -9,9 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
@@ -42,12 +40,10 @@ func TestAttestationCheckPtState_FarFutureSlot(t *testing.T) {
func TestVerifyLMDFFGConsistent_NotOK(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := testServiceOptsWithDB(t)
cfg := &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
@@ -71,12 +67,10 @@ func TestVerifyLMDFFGConsistent_NotOK(t *testing.T) {
func TestVerifyLMDFFGConsistent_OK(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
cfg := &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New(0, 0, [32]byte{})}
service, err := NewService(ctx)
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg = cfg
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
@@ -101,18 +95,12 @@ func TestVerifyLMDFFGConsistent_OK(t *testing.T) {
func TestProcessAttestations_Ok(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := testServiceOptsWithDB(t)
opts = append(opts, WithAttestationPool(attestations.NewPool()))
cfg := &config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(0, 0, [32]byte{}),
StateGen: stategen.New(beaconDB),
AttPool: attestations.NewPool(),
}
service, err := NewService(ctx)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.genesisTime = prysmTime.Now().Add(-1 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
service.cfg = cfg
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(prysmTime.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, service.saveGenesisData(ctx, genesisState))

View File

@@ -101,7 +101,10 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []block.SignedBe
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), s.finalizedCheckpt)
}
if err := s.VerifyWeakSubjectivityRoot(s.ctx); err != nil {
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
if err := s.wsVerifier.VerifyWeakSubjectivity(s.ctx, s.finalizedCheckpt.Epoch); err != nil {
// log.Fatalf will prevent defer from being called
span.End()
// Exit run time if the node failed to verify weak subjectivity checkpoint.

View File

@@ -124,21 +124,16 @@ func TestService_ReceiveBlock(t *testing.T) {
genesisBlockRoot := bytesutil.ToBytes32(nil)
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
cfg := &config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
genesisBlockRoot,
),
AttPool: attestations.NewPool(),
ExitPool: voluntaryexits.NewPool(),
StateNotifier: &blockchainTesting.MockStateNotifier{RecordEvents: true},
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, genesisBlockRoot)),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
}
s, err := NewService(ctx)
s, err := NewService(ctx, opts...)
require.NoError(t, err)
s.cfg = cfg
require.NoError(t, s.saveGenesisData(ctx, genesis))
gBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
@@ -166,21 +161,17 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
beaconDB := testDB.SetupDB(t)
genesisBlockRoot := bytesutil.ToBytes32(nil)
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
cfg := &config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
genesisBlockRoot,
),
AttPool: attestations.NewPool(),
ExitPool: voluntaryexits.NewPool(),
StateNotifier: &blockchainTesting.MockStateNotifier{RecordEvents: true},
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, genesisBlockRoot)),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
}
s, err := NewService(ctx)
s, err := NewService(ctx, opts...)
require.NoError(t, err)
s.cfg = cfg
require.NoError(t, s.saveGenesisData(ctx, genesis))
gBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
@@ -250,19 +241,14 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
beaconDB := testDB.SetupDB(t)
genesisBlockRoot, err := genesis.HashTreeRoot(ctx)
require.NoError(t, err)
cfg := &config{
BeaconDB: beaconDB,
ForkChoiceStore: protoarray.New(
0, // justifiedEpoch
0, // finalizedEpoch
genesisBlockRoot,
),
StateNotifier: &blockchainTesting.MockStateNotifier{RecordEvents: true},
StateGen: stategen.New(beaconDB),
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New(0, 0, genesisBlockRoot)),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
}
s, err := NewService(ctx)
s, err := NewService(ctx, opts...)
require.NoError(t, err)
s.cfg = cfg
err = s.saveGenesisData(ctx, genesis)
require.NoError(t, err)
gBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
@@ -287,9 +273,10 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
}
func TestService_HasInitSyncBlock(t *testing.T) {
s, err := NewService(context.Background())
opts := testServiceOptsNoDB()
opts = append(opts, WithStateNotifier(&blockchainTesting.MockStateNotifier{}))
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.cfg = &config{StateNotifier: &blockchainTesting.MockStateNotifier{}}
r := [32]byte{'a'}
if s.HasInitSyncBlock(r) {
t.Error("Should not have block")
@@ -301,11 +288,10 @@ func TestService_HasInitSyncBlock(t *testing.T) {
}
func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
beaconDB := testDB.SetupDB(t)
opts := testServiceOptsWithDB(t)
hook := logTest.NewGlobal()
s, err := NewService(context.Background())
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.cfg = &config{StateGen: stategen.New(beaconDB)}
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
s.finalizedCheckpt = &ethpb.Checkpoint{}
@@ -315,11 +301,10 @@ func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
}
func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
beaconDB := testDB.SetupDB(t)
hook := logTest.NewGlobal()
s, err := NewService(context.Background())
opts := testServiceOptsWithDB(t)
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.cfg = &config{StateGen: stategen.New(beaconDB)}
s.finalizedCheckpt = &ethpb.Checkpoint{}
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
s.genesisTime = time.Now()
@@ -329,11 +314,10 @@ func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
}
func TestCheckSaveHotStateDB_Overflow(t *testing.T) {
beaconDB := testDB.SetupDB(t)
hook := logTest.NewGlobal()
s, err := NewService(context.Background())
opts := testServiceOptsWithDB(t)
s, err := NewService(context.Background(), opts...)
require.NoError(t, err)
s.cfg = &config{StateGen: stategen.New(beaconDB)}
s.finalizedCheckpt = &ethpb.Checkpoint{Epoch: 10000000}
s.genesisTime = time.Now()


@@ -62,9 +62,9 @@ type Service struct {
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]block.SignedBeaconBlock
initSyncBlocksLock sync.RWMutex
justifiedBalances []uint64
justifiedBalancesLock sync.RWMutex
wsVerified bool
//justifiedBalances []uint64
justifiedBalances *stateBalanceCache
wsVerifier *WeakSubjectivityVerifier
}
// config options for the service.
@@ -84,6 +84,7 @@ type config struct {
StateGen *stategen.State
SlasherAttestationsFeed *event.Feed
WeakSubjectivityCheckpt *ethpb.Checkpoint
FinalizedStateAtStartUp state.BeaconState
}
// NewService instantiates a new block service instance that will
@@ -96,7 +97,6 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
boundaryRoots: [][32]byte{},
checkpointStateCache: cache.NewCheckpointStateCache(),
initSyncBlocks: make(map[[32]byte]block.SignedBeaconBlock),
justifiedBalances: make([]uint64, 0),
cfg: &config{},
}
for _, opt := range opts {
@@ -104,39 +104,23 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
return nil, err
}
}
var err error
if srv.justifiedBalances == nil {
srv.justifiedBalances, err = newStateBalanceCache(srv.cfg.StateGen)
if err != nil {
return nil, err
}
}
srv.wsVerifier, err = NewWeakSubjectivityVerifier(srv.cfg.WeakSubjectivityCheckpt, srv.cfg.BeaconDB)
if err != nil {
return nil, err
}
return srv, nil
}
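The With* option constructors consumed by the opts loop above live in a file not shown in this compare view. A minimal sketch of the functional-options pattern they imply (the Option type and the db.HeadAccessDatabase parameter are assumptions, not confirmed by this diff):
// Sketch only, not part of the diff.
// Option mutates the service's private config during construction and may fail.
type Option func(s *Service) error
// WithDatabase injects the beacon node database into the service config.
func WithDatabase(beaconDB db.HeadAccessDatabase) Option {
    return func(s *Service) error {
        s.cfg.BeaconDB = beaconDB
        return nil
    }
}
Each test below then composes only the dependencies it actually needs instead of filling in a full config literal.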
// Start a blockchain service's main event loop.
func (s *Service) Start() {
// For running initial sync with the state cache, in the event of a restart, we use
// the last finalized checkpoint as the start point to sync instead of the head
// state. This is because we no longer save state every slot during sync.
cp, err := s.cfg.BeaconDB.FinalizedCheckpoint(s.ctx)
if err != nil {
log.Fatalf("Could not fetch finalized cp: %v", err)
}
r := bytesutil.ToBytes32(cp.Root)
// Before the first finalized epoch, the finalized root is defined as zero hashes
// instead of the genesis root hash. In that case we want to use the genesis root
// to retrieve the state.
if r == params.BeaconConfig().ZeroHash {
genesisBlock, err := s.cfg.BeaconDB.GenesisBlock(s.ctx)
if err != nil {
log.Fatalf("Could not fetch finalized cp: %v", err)
}
if genesisBlock != nil && !genesisBlock.IsNil() {
r, err = genesisBlock.Block().HashTreeRoot()
if err != nil {
log.Fatalf("Could not tree hash genesis block: %v", err)
}
}
}
beaconState, err := s.cfg.StateGen.StateByRoot(s.ctx, r)
if err != nil {
log.Fatalf("Could not fetch beacon state by root: %v", err)
}
beaconState := s.cfg.FinalizedStateAtStartUp
// Make sure that attestation processor is subscribed and ready for state initializing event.
attestationProcessorSubscribed := make(chan struct{}, 1)
@@ -172,9 +156,6 @@ func (s *Service) Start() {
// Resume fork choice.
s.justifiedCheckpt = ethpb.CopyCheckpoint(justifiedCheckpoint)
if err := s.cacheJustifiedStateBalances(s.ctx, s.ensureRootNotZeros(bytesutil.ToBytes32(s.justifiedCheckpt.Root))); err != nil {
log.Fatalf("Could not cache justified state balances: %v", err)
}
s.prevJustifiedCheckpt = ethpb.CopyCheckpoint(justifiedCheckpoint)
s.bestJustifiedCheckpt = ethpb.CopyCheckpoint(justifiedCheckpoint)
s.finalizedCheckpt = ethpb.CopyCheckpoint(finalizedCheckpoint)
@@ -196,9 +177,11 @@ func (s *Service) Start() {
}
}
if err := s.VerifyWeakSubjectivityRoot(s.ctx); err != nil {
// not attempting to save initial sync blocks here, because there shouldn't be any until
// after the statefeed.Initialized event is fired (below)
if err := s.wsVerifier.VerifyWeakSubjectivity(s.ctx, s.finalizedCheckpt.Epoch); err != nil {
// Exit run time if the node failed to verify weak subjectivity checkpoint.
log.Fatalf("Could not verify weak subjectivity checkpoint: %v", err)
log.Fatalf("could not verify initial checkpoint provided for chain sync, with err=: %v", err)
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
@@ -359,9 +342,6 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
genesisCheckpoint := genesisState.FinalizedCheckpoint()
s.justifiedCheckpt = ethpb.CopyCheckpoint(genesisCheckpoint)
if err := s.cacheJustifiedStateBalances(ctx, genesisBlkRoot); err != nil {
return err
}
s.prevJustifiedCheckpt = ethpb.CopyCheckpoint(genesisCheckpoint)
s.bestJustifiedCheckpt = ethpb.CopyCheckpoint(genesisCheckpoint)
s.finalizedCheckpt = ethpb.CopyCheckpoint(genesisCheckpoint)
@@ -408,7 +388,7 @@ func (s *Service) initializeChainInfo(ctx context.Context) error {
finalizedRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
var finalizedState state.BeaconState
finalizedState, err = s.cfg.StateGen.Resume(ctx)
finalizedState, err = s.cfg.StateGen.Resume(ctx, s.cfg.FinalizedStateAtStartUp)
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}
@@ -430,7 +410,7 @@ func (s *Service) initializeChainInfo(ctx context.Context) error {
if err != nil {
return errors.Wrap(err, "could not hash head block")
}
finalizedState, err := s.cfg.StateGen.Resume(ctx)
finalizedState, err := s.cfg.StateGen.Resume(ctx, s.cfg.FinalizedStateAtStartUp)
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}


@@ -94,11 +94,12 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
DepositContainers: []*ethpb.DepositContainer{},
})
require.NoError(t, err)
web3Service, err = powchain.NewService(ctx, &powchain.Web3ServiceConfig{
BeaconDB: beaconDB,
HttpEndpoints: []string{endpoint},
DepositContract: common.Address{},
})
web3Service, err = powchain.NewService(
ctx,
powchain.WithDatabase(beaconDB),
powchain.WithHttpEndpoints([]string{endpoint}),
powchain.WithDepositContractAddress(common.Address{}),
)
require.NoError(t, err, "Unable to set up web3 service")
attService, err := attestations.NewService(ctx, &attestations.Config{Pool: attestations.NewPool()})
@@ -107,25 +108,24 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
depositCache, err := depositcache.New()
require.NoError(t, err)
cfg := &config{
BeaconBlockBuf: 0,
BeaconDB: beaconDB,
DepositCache: depositCache,
ChainStartFetcher: web3Service,
P2p: &mockBroadcaster{},
StateNotifier: &mockBeaconNode{},
AttPool: attestations.NewPool(),
StateGen: stategen.New(beaconDB),
ForkChoiceStore: protoarray.New(0, 0, params.BeaconConfig().ZeroHash),
AttService: attService,
stateGen := stategen.New(beaconDB)
// Save a state in stategen for purposes of testing a service stop / shutdown.
require.NoError(t, stateGen.SaveState(ctx, bytesutil.ToBytes32(bState.FinalizedCheckpoint().Root), bState))
opts := []Option{
WithDatabase(beaconDB),
WithDepositCache(depositCache),
WithChainStartFetcher(web3Service),
WithAttestationPool(attestations.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(protoarray.New(0, 0, params.BeaconConfig().ZeroHash)),
WithAttestationService(attService),
WithStateGen(stateGen),
}
// Save a state in stategen for purposes of testing a service stop / shutdown.
require.NoError(t, cfg.StateGen.SaveState(ctx, bytesutil.ToBytes32(bState.FinalizedCheckpoint().Root), bState))
chainService, err := NewService(ctx)
chainService, err := NewService(ctx, opts...)
require.NoError(t, err, "Unable to setup chain service")
chainService.cfg = cfg
chainService.genesisTime = time.Unix(1, 0) // non-zero time
return chainService
@@ -150,7 +150,7 @@ func TestChainStartStop_Initialized(t *testing.T) {
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
chainService.cfg.FinalizedStateAtStartUp = s
// Test the start function.
chainService.Start()
@@ -177,7 +177,7 @@ func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}))
chainService.cfg.FinalizedStateAtStartUp = s
// Test the start function.
chainService.Start()
@@ -248,7 +248,7 @@ func TestChainService_CorrectGenesisRoots(t *testing.T) {
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
chainService.cfg.FinalizedStateAtStartUp = s
// Test the start function.
chainService.Start()
@@ -284,6 +284,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(headBlock)))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
c := &Service{cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)}}
c.cfg.FinalizedStateAtStartUp = headState
require.NoError(t, c.initializeChainInfo(ctx))
headBlk, err := c.HeadBlock(ctx)
require.NoError(t, err)
@@ -381,7 +382,7 @@ func TestChainService_InitializeChainInfo_HeadSync(t *testing.T) {
}))
c := &Service{cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)}}
c.cfg.FinalizedStateAtStartUp = headState
require.NoError(t, c.initializeChainInfo(ctx))
s, err := c.HeadState(ctx)
require.NoError(t, err)


@@ -0,0 +1,82 @@
package blockchain
import (
"context"
"errors"
"sync"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
)
var errNilStateFromStategen = errors.New("justified state can't be nil")
type stateBalanceCache struct {
sync.Mutex
balances []uint64
root [32]byte
stateGen stateByRooter
}
type stateByRooter interface {
StateByRoot(context.Context, [32]byte) (state.BeaconState, error)
}
// newStateBalanceCache exists to remind us that stateBalanceCache needs a stategen
// to avoid nil pointer bugs when updating the cache in the read path (get())
func newStateBalanceCache(sg *stategen.State) (*stateBalanceCache, error) {
if sg == nil {
return nil, errors.New("Can't initialize state balance cache without stategen")
}
return &stateBalanceCache{stateGen: sg}, nil
}
// update is called by get() when the requested root doesn't match
// the previously read value. This cache assumes we only want to cache one
// set of balances for a single root (the current justified root).
//
// warning: this is not thread-safe on its own, relies on get() for locking
func (c *stateBalanceCache) update(ctx context.Context, justifiedRoot [32]byte) ([]uint64, error) {
stateBalanceCacheMiss.Inc()
justifiedState, err := c.stateGen.StateByRoot(ctx, justifiedRoot)
if err != nil {
return nil, err
}
if justifiedState == nil || justifiedState.IsNil() {
return nil, errNilStateFromStategen
}
epoch := time.CurrentEpoch(justifiedState)
justifiedBalances := make([]uint64, justifiedState.NumValidators())
var balanceAccumulator = func(idx int, val state.ReadOnlyValidator) error {
if helpers.IsActiveValidatorUsingTrie(val, epoch) {
justifiedBalances[idx] = val.EffectiveBalance()
} else {
justifiedBalances[idx] = 0
}
return nil
}
if err := justifiedState.ReadFromEveryValidator(balanceAccumulator); err != nil {
return nil, err
}
c.balances = justifiedBalances
c.root = justifiedRoot
return c.balances, nil
}
// get takes an explicit justifiedRoot so it can invalidate the singleton cache key
// when the justified root changes, and takes a context so that the long-running stategen
// read path can connect to the upstream cancellation/timeout chain.
func (c *stateBalanceCache) get(ctx context.Context, justifiedRoot [32]byte) ([]uint64, error) {
c.Lock()
defer c.Unlock()
if justifiedRoot == c.root {
stateBalanceCacheHit.Inc()
return c.balances, nil
}
return c.update(ctx, justifiedRoot)
}
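With update and get in place, callers treat the cache as a read-through lookup keyed by the current justified root. A hypothetical call site, assuming the justifiedBalances field wiring from the Service changes above:
// Sketch only: hits when the root matches the last read, otherwise falls
// back to a stategen lookup under the cache's mutex.
balances, err := s.justifiedBalances.get(ctx, justifiedRoot)
if err != nil {
    return err
}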


@@ -0,0 +1,225 @@
package blockchain
import (
"context"
"encoding/binary"
"errors"
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
v2 "github.com/prysmaticlabs/prysm/beacon-chain/state/v2"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/time/slots"
)
type mockStateByRooter struct {
state state.BeaconState
err error
}
func (m *mockStateByRooter) StateByRoot(context.Context, [32]byte) (state.BeaconState, error) {
return m.state, m.err
}
type testStateOpt func(*ethpb.BeaconStateAltair)
func testStateWithValidators(v []*ethpb.Validator) testStateOpt {
return func(a *ethpb.BeaconStateAltair) {
a.Validators = v
}
}
func testStateWithSlot(slot types.Slot) testStateOpt {
return func(a *ethpb.BeaconStateAltair) {
a.Slot = slot
}
}
func testStateFixture(opts ...testStateOpt) state.BeaconState {
a := &ethpb.BeaconStateAltair{}
for _, o := range opts {
o(a)
}
s, _ := v2.InitializeFromProtoUnsafe(a)
return s
}
func generateTestValidators(count int, opts ...func(*ethpb.Validator)) []*ethpb.Validator {
vs := make([]*ethpb.Validator, count)
var i uint32 = 0
for ; i < uint32(count); i++ {
pk := make([]byte, 48)
binary.LittleEndian.PutUint32(pk, i)
v := &ethpb.Validator{PublicKey: pk}
for _, o := range opts {
o(v)
}
vs[i] = v
}
return vs
}
func oddValidatorsExpired(currentSlot types.Slot) func(*ethpb.Validator) {
return func(v *ethpb.Validator) {
pki := binary.LittleEndian.Uint64(v.PublicKey)
if pki%2 == 0 {
v.ExitEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) + 1)
} else {
v.ExitEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) - 1)
}
}
}
func oddValidatorsQueued(currentSlot types.Slot) func(*ethpb.Validator) {
return func(v *ethpb.Validator) {
v.ExitEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) + 1)
pki := binary.LittleEndian.Uint64(v.PublicKey)
if pki%2 == 0 {
v.ActivationEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) - 1)
} else {
v.ActivationEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) + 1)
}
}
}
func allValidatorsValid(currentSlot types.Slot) func(*ethpb.Validator) {
return func(v *ethpb.Validator) {
v.ActivationEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) - 1)
v.ExitEpoch = types.Epoch(int(slots.ToEpoch(currentSlot)) + 1)
}
}
func balanceIsKeyTimes2(v *ethpb.Validator) {
pki := binary.LittleEndian.Uint64(v.PublicKey)
v.EffectiveBalance = uint64(pki) * 2
}
func testHalfExpiredValidators() ([]*ethpb.Validator, []uint64) {
balances := []uint64{0, 0, 4, 0, 8, 0, 12, 0, 16, 0}
return generateTestValidators(10,
oddValidatorsExpired(types.Slot(99)),
balanceIsKeyTimes2), balances
}
func testHalfQueuedValidators() ([]*ethpb.Validator, []uint64) {
balances := []uint64{0, 0, 4, 0, 8, 0, 12, 0, 16, 0}
return generateTestValidators(10,
oddValidatorsQueued(types.Slot(99)),
balanceIsKeyTimes2), balances
}
func testAllValidValidators() ([]*ethpb.Validator, []uint64) {
balances := []uint64{0, 2, 4, 6, 8, 10, 12, 14, 16, 18}
return generateTestValidators(10,
allValidatorsValid(types.Slot(99)),
balanceIsKeyTimes2), balances
}
func TestStateBalanceCache(t *testing.T) {
type sbcTestCase struct {
err error
root [32]byte
sbc *stateBalanceCache
balances []uint64
name string
}
sentinelCacheMiss := errors.New("Cache missed, as expected!")
sentinelBalances := []uint64{1, 2, 3, 4, 5}
halfExpiredValidators, halfExpiredBalances := testHalfExpiredValidators()
halfQueuedValidators, halfQueuedBalances := testHalfQueuedValidators()
allValidValidators, allValidBalances := testAllValidValidators()
cases := []sbcTestCase{
{
root: bytesutil.ToBytes32([]byte{'A'}),
balances: sentinelBalances,
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
err: sentinelCacheMiss,
},
root: bytesutil.ToBytes32([]byte{'A'}),
balances: sentinelBalances,
},
name: "cache hit",
},
// this works by using a stateByRooter that returns a known error
// so really we're testing the miss by making sure stategen got called
// this also tells us stategen errors are propagated
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
//state: generateTestValidators(1, testWithBadEpoch),
err: sentinelCacheMiss,
},
root: bytesutil.ToBytes32([]byte{'B'}),
},
err: sentinelCacheMiss,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "cache miss",
},
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{},
root: bytesutil.ToBytes32([]byte{'B'}),
},
err: errNilStateFromStategen,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "error for nil state upon cache miss",
},
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
state: testStateFixture(
testStateWithSlot(99),
testStateWithValidators(halfExpiredValidators)),
},
},
balances: halfExpiredBalances,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "test filtering by exit epoch",
},
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
state: testStateFixture(
testStateWithSlot(99),
testStateWithValidators(halfQueuedValidators)),
},
},
balances: halfQueuedBalances,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "test filtering by activation epoch",
},
{
sbc: &stateBalanceCache{
stateGen: &mockStateByRooter{
state: testStateFixture(
testStateWithSlot(99),
testStateWithValidators(allValidValidators)),
},
},
balances: allValidBalances,
root: bytesutil.ToBytes32([]byte{'A'}),
name: "happy path",
},
}
ctx := context.Background()
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
cache := c.sbc
cacheRootStart := cache.root
b, err := cache.get(ctx, c.root)
require.ErrorIs(t, err, c.err)
require.DeepEqual(t, c.balances, b)
if c.err != nil {
// if there was an error somewhere, the root should not have changed (unless it already matched)
require.Equal(t, cacheRootStart, cache.root)
} else {
// when successful, the cache should always end with a root matching the request
require.Equal(t, c.root, cache.root)
}
})
}
}


@@ -4,57 +4,94 @@ import (
"context"
"fmt"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time/slots"
)
// VerifyWeakSubjectivityRoot verifies the weak subjectivity root in the service struct.
var errWSBlockNotFound = errors.New("weak subjectivity root not found in db")
var errWSBlockNotFoundInEpoch = errors.New("weak subjectivity root not found in db within epoch")
type weakSubjectivityDB interface {
HasBlock(ctx context.Context, blockRoot [32]byte) bool
BlockRoots(ctx context.Context, f *filters.QueryFilter) ([][32]byte, error)
}
type WeakSubjectivityVerifier struct {
enabled bool
verified bool
root [32]byte
epoch types.Epoch
slot types.Slot
db weakSubjectivityDB
}
// NewWeakSubjectivityVerifier validates a checkpoint, and if valid, uses it to initialize a weak subjectivity verifier
func NewWeakSubjectivityVerifier(wsc *ethpb.Checkpoint, db weakSubjectivityDB) (*WeakSubjectivityVerifier, error) {
// TODO(7342): Weak subjectivity checks are currently optional. When we require the flag to be specified
// per 7342, a nil checkpoint, zero-root or zero-epoch should all fail validation
// and return an error instead of creating a WeakSubjectivityVerifier that permits any chain history.
if wsc == nil || len(wsc.Root) == 0 || wsc.Epoch == 0 {
return &WeakSubjectivityVerifier{
enabled: false,
}, nil
}
startSlot, err := slots.EpochStart(wsc.Epoch)
if err != nil {
return nil, err
}
return &WeakSubjectivityVerifier{
enabled: true,
verified: false,
root: bytesutil.ToBytes32(wsc.Root),
epoch: wsc.Epoch,
db: db,
slot: startSlot,
}, nil
}
// VerifyWeakSubjectivity verifies the weak subjectivity checkpoint held by the verifier.
// Reference design: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/weak-subjectivity.md#weak-subjectivity-sync-procedure
func (s *Service) VerifyWeakSubjectivityRoot(ctx context.Context) error {
// TODO(7342): Remove the following to fully use weak subjectivity in production.
if s.cfg.WeakSubjectivityCheckpt == nil || len(s.cfg.WeakSubjectivityCheckpt.Root) == 0 || s.cfg.WeakSubjectivityCheckpt.Epoch == 0 {
func (v *WeakSubjectivityVerifier) VerifyWeakSubjectivity(ctx context.Context, finalizedEpoch types.Epoch) error {
if v.verified || !v.enabled {
return nil
}
// Do nothing if the weak subjectivity has previously been verified,
// or weak subjectivity epoch is higher than last finalized epoch.
if s.wsVerified {
return nil
}
if s.cfg.WeakSubjectivityCheckpt.Epoch > s.finalizedCheckpt.Epoch {
// Two conditions are described in the specs:
// IF epoch_number > store.finalized_checkpoint.epoch,
// then ASSERT during block sync that block with root block_root
// is in the sync path at epoch epoch_number. Emit descriptive critical error if this assert fails,
// then exit client process.
// we do not handle this case ^, because we can only verify blocks that have been processed / are currently
// in line for finalization; we don't have the ability to look ahead. So we only satisfy the following:
// IF epoch_number <= store.finalized_checkpoint.epoch,
// then ASSERT that the block in the canonical chain at epoch epoch_number has root block_root.
// Emit descriptive critical error if this assert fails, then exit client process.
if v.epoch > finalizedEpoch {
return nil
}
log.Infof("Performing weak subjectivity check for root %#x in epoch %d", v.root, v.epoch)
r := bytesutil.ToBytes32(s.cfg.WeakSubjectivityCheckpt.Root)
log.Infof("Performing weak subjectivity check for root %#x in epoch %d", r, s.cfg.WeakSubjectivityCheckpt.Epoch)
// Save initial sync cached blocks to DB.
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
// A node should have the weak subjectivity block in the DB.
if !s.cfg.BeaconDB.HasBlock(ctx, r) {
return fmt.Errorf("node does not have root in DB: %#x", r)
}
startSlot, err := slots.EpochStart(s.cfg.WeakSubjectivityCheckpt.Epoch)
if err != nil {
return err
if !v.db.HasBlock(ctx, v.root) {
return errors.Wrap(errWSBlockNotFound, fmt.Sprintf("missing root %#x", v.root))
}
filter := filters.NewFilter().SetStartSlot(v.slot).SetEndSlot(v.slot + params.BeaconConfig().SlotsPerEpoch)
// A node should have the weak subjectivity block corresponding to the correct epoch in the DB.
filter := filters.NewFilter().SetStartSlot(startSlot).SetEndSlot(startSlot + params.BeaconConfig().SlotsPerEpoch)
roots, err := s.cfg.BeaconDB.BlockRoots(ctx, filter)
roots, err := v.db.BlockRoots(ctx, filter)
if err != nil {
return err
return errors.Wrap(err, "error while retrieving block roots to verify weak subjectivity")
}
for _, root := range roots {
if r == root {
if v.root == root {
log.Info("Weak subjectivity check has passed")
s.wsVerified = true
v.verified = true
return nil
}
}
return fmt.Errorf("node does not have root in db corresponding to epoch: %#x %d", r, s.cfg.WeakSubjectivityCheckpt.Epoch)
return errors.Wrap(errWSBlockNotFoundInEpoch, fmt.Sprintf("root=%#x, epoch=%d", v.root, v.epoch))
}
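Taken together, the constructor and VerifyWeakSubjectivity imply the following lifecycle. A rough sketch based on the NewService and Start changes earlier in this diff, not a verbatim excerpt:
// Built once at startup from the user-supplied checkpoint.
wsVerifier, err := NewWeakSubjectivityVerifier(cfg.WeakSubjectivityCheckpt, cfg.BeaconDB)
if err != nil {
    return err
}
// Re-checked as finalization advances: a no-op once verified or while the
// ws epoch is still ahead of the finalized epoch, fatal on a failed check.
if err := wsVerifier.VerifyWeakSubjectivity(ctx, finalizedEpoch); err != nil {
    log.Fatalf("could not verify weak subjectivity checkpoint: %v", err)
}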


@@ -4,6 +4,7 @@ import (
"context"
"testing"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
@@ -23,59 +24,57 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
require.NoError(t, err)
tests := []struct {
wsVerified bool
wantErr bool
wantErr error
checkpt *ethpb.Checkpoint
finalizedEpoch types.Epoch
errString string
name string
}{
{
name: "nil root and epoch",
wantErr: false,
name: "nil root and epoch",
},
{
name: "already verified",
checkpt: &ethpb.Checkpoint{Epoch: 2},
finalizedEpoch: 2,
wsVerified: true,
wantErr: false,
},
{
name: "not yet to verify, ws epoch higher than finalized epoch",
checkpt: &ethpb.Checkpoint{Epoch: 2},
finalizedEpoch: 1,
wantErr: false,
},
{
name: "can't find the block in DB",
checkpt: &ethpb.Checkpoint{Root: bytesutil.PadTo([]byte{'a'}, 32), Epoch: 1},
finalizedEpoch: 3,
wantErr: true,
errString: "node does not have root in DB",
wantErr: errWSBlockNotFound,
},
{
name: "can't find the block corresponds to ws epoch in DB",
checkpt: &ethpb.Checkpoint{Root: r[:], Epoch: 2}, // Root belongs in epoch 1.
finalizedEpoch: 3,
wantErr: true,
errString: "node does not have root in db corresponding to epoch",
wantErr: errWSBlockNotFoundInEpoch,
},
{
name: "can verify and pass",
checkpt: &ethpb.Checkpoint{Root: r[:], Epoch: 1},
finalizedEpoch: 3,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wv, err := NewWeakSubjectivityVerifier(tt.checkpt, beaconDB)
require.NoError(t, err)
s := &Service{
cfg: &config{BeaconDB: beaconDB, WeakSubjectivityCheckpt: tt.checkpt},
wsVerified: tt.wsVerified,
finalizedCheckpt: &ethpb.Checkpoint{Epoch: tt.finalizedEpoch},
wsVerifier: wv,
}
if err := s.VerifyWeakSubjectivityRoot(context.Background()); (err != nil) != tt.wantErr {
require.ErrorContains(t, tt.errString, err)
err = s.wsVerifier.VerifyWeakSubjectivity(context.Background(), s.finalizedCheckpt.Epoch)
if tt.wantErr == nil {
require.NoError(t, err)
} else {
require.Equal(t, true, errors.Is(err, tt.wantErr))
}
})
}


@@ -5,7 +5,6 @@
package depositcache
import (
"bytes"
"context"
"encoding/hex"
"math/big"
@@ -54,6 +53,7 @@ type DepositCache struct {
pendingDeposits []*dbpb.DepositContainer
deposits []*dbpb.DepositContainer
finalizedDeposits *FinalizedDeposits
depositsByKey map[[48]byte][]*dbpb.DepositContainer
depositsLock sync.RWMutex
}
@@ -69,6 +69,7 @@ func New() (*DepositCache, error) {
return &DepositCache{
pendingDeposits: []*dbpb.DepositContainer{},
deposits: []*dbpb.DepositContainer{},
depositsByKey: map[[48]byte][]*ethpb.DepositContainer{},
finalizedDeposits: &FinalizedDeposits{Deposits: finalizedDepositsTrie, MerkleTrieIndex: -1},
}, nil
}
@@ -95,10 +96,15 @@ func (dc *DepositCache) InsertDeposit(ctx context.Context, d *ethpb.Deposit, blo
}
// Keep the slice sorted on insertion in order to avoid costly sorting on retrieval.
heightIdx := sort.Search(len(dc.deposits), func(i int) bool { return dc.deposits[i].Index >= index })
depCtr := &dbpb.DepositContainer{Deposit: d, Eth1BlockHeight: blockNum, DepositRoot: depositRoot[:], Index: index}
newDeposits := append(
[]*dbpb.DepositContainer{{Deposit: d, Eth1BlockHeight: blockNum, DepositRoot: depositRoot[:], Index: index}},
[]*dbpb.DepositContainer{depCtr},
dc.deposits[heightIdx:]...)
dc.deposits = append(dc.deposits[:heightIdx], newDeposits...)
// Append the deposit to our map. In the event no deposits
// exist for the pubkey, it is simply added to the map.
pubkey := bytesutil.ToBytes48(d.Data.PublicKey)
dc.depositsByKey[pubkey] = append(dc.depositsByKey[pubkey], depCtr)
historicalDepositsCount.Inc()
return nil
}
@@ -112,6 +118,13 @@ func (dc *DepositCache) InsertDepositContainers(ctx context.Context, ctrs []*dbp
sort.SliceStable(ctrs, func(i int, j int) bool { return ctrs[i].Index < ctrs[j].Index })
dc.deposits = ctrs
for _, c := range ctrs {
// Use a new value, as the reference
// of c changes in the next iteration.
newPtr := c
pKey := bytesutil.ToBytes48(newPtr.Deposit.Data.PublicKey)
dc.depositsByKey[pKey] = append(dc.depositsByKey[pKey], newPtr)
}
historicalDepositsCount.Add(float64(len(ctrs)))
}
@@ -202,13 +215,15 @@ func (dc *DepositCache) DepositByPubkey(ctx context.Context, pubKey []byte) (*et
var deposit *ethpb.Deposit
var blockNum *big.Int
for _, ctnr := range dc.deposits {
if bytes.Equal(ctnr.Deposit.Data.PublicKey, pubKey) {
deposit = ctnr.Deposit
blockNum = big.NewInt(int64(ctnr.Eth1BlockHeight))
break
}
deps, ok := dc.depositsByKey[bytesutil.ToBytes48(pubKey)]
if !ok || len(deps) == 0 {
return deposit, blockNum
}
// We always return the first deposit if a particular
// validator key has multiple deposits assigned to
// it.
deposit = deps[0].Deposit
blockNum = big.NewInt(int64(deps[0].Eth1BlockHeight))
return deposit, blockNum
}
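Because the map is keyed by the 48-byte pubkey, the rewritten DepositByPubkey is a constant-time lookup with first-deposit-wins semantics rather than a scan over every container. A hypothetical caller, under those semantics:
// Sketch only: a nil deposit (and nil block number) signals no match.
dep, blkNum := dc.DepositByPubkey(ctx, pubKey)
if dep == nil {
    return nil // no deposit recorded for this key
}
_ = blkNum // Eth1 block height of the first deposit inserted for this key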


@@ -44,31 +44,31 @@ func TestInsertDeposit_MaintainsSortedOrderByIndex(t *testing.T) {
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &dbpb.Deposit_Data{PublicKey: []byte{'A'}}},
index: 0,
expectedErr: "",
},
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &dbpb.Deposit_Data{PublicKey: []byte{'B'}}},
index: 3,
expectedErr: "wanted deposit with index 1 to be inserted but received 3",
},
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &dbpb.Deposit_Data{PublicKey: []byte{'C'}}},
index: 1,
expectedErr: "",
},
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &dbpb.Deposit_Data{PublicKey: []byte{'D'}}},
index: 4,
expectedErr: "wanted deposit with index 2 to be inserted but received 4",
},
{
blkNum: 0,
deposit: &ethpb.Deposit{},
deposit: &ethpb.Deposit{Data: &dbpb.Deposit_Data{PublicKey: []byte{'E'}}},
index: 2,
expectedErr: "",
},
@@ -316,8 +316,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.deposits = []*dbpb.DepositContainer{
ctrs := []*dbpb.DepositContainer{
{
Eth1BlockHeight: 9,
Deposit: &ethpb.Deposit{
@@ -359,6 +358,7 @@ func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
},
},
}
dc.InsertDepositContainers(context.Background(), ctrs)
pk1 := bytesutil.PadTo([]byte("pk1"), 48)
dep, blkNum := dc.DepositByPubkey(context.Background(), pk1)
@@ -626,24 +626,28 @@ func TestPruneProofs_Ok(t *testing.T) {
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 0,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 1,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 2,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 3,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
@@ -669,24 +673,26 @@ func TestPruneProofs_SomeAlreadyPruned(t *testing.T) {
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil},
index: 0,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}}, index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
deposit: &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 3,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
@@ -709,24 +715,28 @@ func TestPruneProofs_PruneAllWhenDepositIndexTooBig(t *testing.T) {
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 0,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 1,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 2,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 3,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
@@ -752,24 +762,28 @@ func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 0,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 1,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 2,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof()},
index: 3,
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
@@ -785,6 +799,36 @@ func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestDepositMap_WorksCorrectly(t *testing.T) {
dc, err := New()
require.NoError(t, err)
pk0 := bytesutil.PadTo([]byte("pk0"), 48)
dep, _ := dc.DepositByPubkey(context.Background(), pk0)
var nilDep *ethpb.Deposit
assert.DeepEqual(t, nilDep, dep)
dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 1000}}
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 0, [32]byte{}))
dep, _ = dc.DepositByPubkey(context.Background(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)
dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 10000}}
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 1, [32]byte{}))
// Make sure we have the same deposit returned over here.
dep, _ = dc.DepositByPubkey(context.Background(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)
// Make sure another key doesn't work.
pk1 := bytesutil.PadTo([]byte("pk1"), 48)
dep, _ = dc.DepositByPubkey(context.Background(), pk1)
assert.DeepEqual(t, nilDep, dep)
}
func makeDepositProof() [][]byte {
proof := make([][]byte, int(params.BeaconConfig().DepositContractTreeDepth)+1)
for i := range proof {


@@ -120,26 +120,22 @@ func (c *SkipSlotCache) MarkInProgress(r [32]byte) error {
// MarkNotInProgress will release the lock on a given request. This should be
// called after put.
func (c *SkipSlotCache) MarkNotInProgress(r [32]byte) error {
func (c *SkipSlotCache) MarkNotInProgress(r [32]byte) {
if c.disabled {
return nil
return
}
c.lock.Lock()
defer c.lock.Unlock()
delete(c.inProgress, r)
return nil
}
// Put the response in the cache.
func (c *SkipSlotCache) Put(_ context.Context, r [32]byte, state state.BeaconState) error {
func (c *SkipSlotCache) Put(_ context.Context, r [32]byte, state state.BeaconState) {
if c.disabled {
return nil
return
}
// Copy state so cached value is not mutated.
c.cache.Add(r, state.Copy())
return nil
}


@@ -28,8 +28,8 @@ func TestSkipSlotCache_RoundTrip(t *testing.T) {
})
require.NoError(t, err)
require.NoError(t, c.Put(ctx, r, s))
require.NoError(t, c.MarkNotInProgress(r))
c.Put(ctx, r, s)
c.MarkNotInProgress(r)
res, err := c.Get(ctx, r)
require.NoError(t, err)


@@ -183,12 +183,14 @@ func ProcessEpochParticipation(
}
if has && vals[i].IsActivePrevEpoch {
vals[i].IsPrevEpochAttester = true
vals[i].IsPrevEpochSourceAttester = true
}
has, err = HasValidatorFlag(b, targetIdx)
if err != nil {
return nil, nil, err
}
if has && vals[i].IsActivePrevEpoch {
vals[i].IsPrevEpochAttester = true
vals[i].IsPrevEpochTargetAttester = true
}
has, err = HasValidatorFlag(b, headIdx)
@@ -199,7 +201,7 @@ func ProcessEpochParticipation(
vals[i].IsPrevEpochHeadAttester = true
}
}
bal = precompute.UpdateBalance(vals, bal)
bal = precompute.UpdateBalance(vals, bal, beaconState.Version())
return vals, bal, nil
}
@@ -298,7 +300,7 @@ func attestationDelta(
headWeight := cfg.TimelyHeadWeight
reward, penalty = uint64(0), uint64(0)
// Process source reward / penalty
if val.IsPrevEpochAttester && !val.IsSlashed {
if val.IsPrevEpochSourceAttester && !val.IsSlashed {
if !inactivityLeak {
n := baseReward * srcWeight * (bal.PrevEpochAttested / increment)
reward += n / (activeIncrement * weightDenominator)


@@ -108,6 +108,7 @@ func TestProcessEpochParticipation(t *testing.T) {
CurrentEpochEffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
IsCurrentEpochAttester: true,
IsPrevEpochAttester: true,
IsPrevEpochSourceAttester: true,
}, validators[1])
require.DeepEqual(t, &precompute.Validator{
IsActiveCurrentEpoch: true,
@@ -116,6 +117,7 @@ func TestProcessEpochParticipation(t *testing.T) {
CurrentEpochEffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
IsCurrentEpochAttester: true,
IsPrevEpochAttester: true,
IsPrevEpochSourceAttester: true,
IsCurrentEpochTargetAttester: true,
IsPrevEpochTargetAttester: true,
}, validators[2])
@@ -126,6 +128,7 @@ func TestProcessEpochParticipation(t *testing.T) {
CurrentEpochEffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
IsCurrentEpochAttester: true,
IsPrevEpochAttester: true,
IsPrevEpochSourceAttester: true,
IsCurrentEpochTargetAttester: true,
IsPrevEpochTargetAttester: true,
IsPrevEpochHeadAttester: true,
@@ -180,6 +183,7 @@ func TestProcessEpochParticipation_InactiveValidator(t *testing.T) {
IsActiveCurrentEpoch: false,
IsActivePrevEpoch: true,
IsPrevEpochAttester: true,
IsPrevEpochSourceAttester: true,
IsPrevEpochTargetAttester: true,
IsWithdrawableCurrentEpoch: true,
CurrentEpochEffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
@@ -191,6 +195,7 @@ func TestProcessEpochParticipation_InactiveValidator(t *testing.T) {
CurrentEpochEffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
IsCurrentEpochAttester: true,
IsPrevEpochAttester: true,
IsPrevEpochSourceAttester: true,
IsCurrentEpochTargetAttester: true,
IsPrevEpochTargetAttester: true,
IsPrevEpochHeadAttester: true,


@@ -62,7 +62,7 @@ func ProcessAttesterSlashing(
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify attester slashing")
}
slashableIndices := slashableAttesterIndices(slashing)
slashableIndices := SlashableAttesterIndices(slashing)
sort.SliceStable(slashableIndices, func(i, j int) bool {
return slashableIndices[i] < slashableIndices[j]
})
@@ -152,7 +152,8 @@ func IsSlashableAttestationData(data1, data2 *ethpb.AttestationData) bool {
return isDoubleVote || isSurroundVote
}
func slashableAttesterIndices(slashing *ethpb.AttesterSlashing) []uint64 {
// SlashableAttesterIndices returns the intersection of attester indices from both attestations in this slashing.
func SlashableAttesterIndices(slashing *ethpb.AttesterSlashing) []uint64 {
if slashing == nil || slashing.Attestation_1 == nil || slashing.Attestation_2 == nil {
return nil
}


@@ -245,7 +245,7 @@ func TestFuzzslashableAttesterIndices_10000(t *testing.T) {
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(attesterSlashing)
slashableAttesterIndices(attesterSlashing)
SlashableAttesterIndices(attesterSlashing)
}
}


@@ -24,6 +24,7 @@ go_library(
"//monitoring/tracing:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
@@ -51,6 +52,7 @@ go_test(
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",


@@ -13,6 +13,7 @@ import (
"github.com/prysmaticlabs/prysm/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/runtime/version"
"go.opencensus.io/trace"
)
@@ -64,7 +65,7 @@ func ProcessAttestations(
vp = UpdateValidator(vp, v, indices, a, a.Data.Slot)
}
pBal = UpdateBalance(vp, pBal)
pBal = UpdateBalance(vp, pBal, state.Version())
return vp, pBal, nil
}
@@ -170,7 +171,7 @@ func UpdateValidator(vp []*Validator, record *Validator, indices []uint64, a *et
}
// UpdateBalance updates the precomputed balance store.
func UpdateBalance(vp []*Validator, bBal *Balance) *Balance {
func UpdateBalance(vp []*Validator, bBal *Balance, stateVersion int) *Balance {
for _, v := range vp {
if !v.IsSlashed {
if v.IsCurrentEpochAttester {
@@ -179,7 +180,10 @@ func UpdateBalance(vp []*Validator, bBal *Balance) *Balance {
if v.IsCurrentEpochTargetAttester {
bBal.CurrentEpochTargetAttested += v.CurrentEpochEffectiveBalance
}
if v.IsPrevEpochAttester {
if stateVersion == version.Phase0 && v.IsPrevEpochAttester {
bBal.PrevEpochAttested += v.CurrentEpochEffectiveBalance
}
if stateVersion == version.Altair && v.IsPrevEpochSourceAttester {
bBal.PrevEpochAttested += v.CurrentEpochEffectiveBalance
}
if v.IsPrevEpochTargetAttester {

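The net effect of the new stateVersion parameter is that a different precompute flag credits PrevEpochAttested per fork. Restated in isolation as a condensed sketch of the gate above (no new logic):
// Phase0 keeps the legacy attester flag; Altair uses the dedicated
// source-attester flag introduced in this change.
prevAttested := v.IsPrevEpochAttester
if stateVersion == version.Altair {
    prevAttested = v.IsPrevEpochSourceAttester
}
if !v.IsSlashed && prevAttested {
    bBal.PrevEpochAttested += v.CurrentEpochEffectiveBalance
}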

@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
@@ -65,7 +66,7 @@ func TestUpdateBalance(t *testing.T) {
PrevEpochTargetAttested: 100 * params.BeaconConfig().EffectiveBalanceIncrement,
PrevEpochHeadAttested: 200 * params.BeaconConfig().EffectiveBalanceIncrement,
}
pBal := precompute.UpdateBalance(vp, &precompute.Balance{})
pBal := precompute.UpdateBalance(vp, &precompute.Balance{}, version.Phase0)
assert.DeepEqual(t, wantedPBal, pBal, "Incorrect balance calculations")
}


@@ -20,12 +20,14 @@ type Validator struct {
IsCurrentEpochTargetAttester bool
// IsPrevEpochAttester is true if the validator attested previous epoch.
IsPrevEpochAttester bool
// IsPrevEpochSourceAttester is true if the validator attested to source previous epoch. [Only for Altair]
IsPrevEpochSourceAttester bool
// IsPrevEpochTargetAttester is true if the validator attested previous epoch target.
IsPrevEpochTargetAttester bool
// IsHeadAttester is true if the validator attested head.
IsPrevEpochHeadAttester bool
// CurrentEpochEffectiveBalance is how much effective balance this validator validator has current epoch.
// CurrentEpochEffectiveBalance is how much effective balance this validator has current epoch.
CurrentEpochEffectiveBalance uint64
// InclusionSlot is the slot of when the attestation gets included in the chain.
InclusionSlot types.Slot


@@ -199,7 +199,7 @@ func CommitteeAssignments(
// Each slot in an epoch has a different set of committees. This value is derived from the
// active validator set, which does not change.
numCommitteesPerSlot := SlotCommitteeCount(uint64(len(activeValidatorIndices)))
validatorIndexToCommittee := make(map[types.ValidatorIndex]*CommitteeAssignmentContainer, params.BeaconConfig().SlotsPerEpoch.Mul(numCommitteesPerSlot))
validatorIndexToCommittee := make(map[types.ValidatorIndex]*CommitteeAssignmentContainer, len(activeValidatorIndices))
// Compute all committees for all slots.
for i := types.Slot(0); i < params.BeaconConfig().SlotsPerEpoch; i++ {


@@ -2,9 +2,9 @@ package signing
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/eth2-types"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/crypto/bls"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
eth "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// Domain returns the domain version for BLS private key to sign and verify.


@@ -3,9 +3,9 @@ package signing
import (
"testing"
"github.com/prysmaticlabs/eth2-types"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
eth "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
)


@@ -60,6 +60,16 @@ func CanUpgradeToAltair(slot types.Slot) bool {
return epochStart && altairEpoch
}
// CanUpgradeToMerge returns true if the input `slot` can upgrade to Merge fork.
//
// Spec code:
// If state.slot % SLOTS_PER_EPOCH == 0 and compute_epoch_at_slot(state.slot) == MERGE_FORK_EPOCH
func CanUpgradeToMerge(slot types.Slot) bool {
epochStart := slots.IsEpochStart(slot)
mergeEpoch := slots.ToEpoch(slot) == params.BeaconConfig().MergeForkEpoch
return epochStart && mergeEpoch
}
// CanProcessEpoch checks the eligibility to process epoch.
// The epoch can be processed at the end of the last slot of every epoch.
//


@@ -114,6 +114,40 @@ func TestCanUpgradeToAltair(t *testing.T) {
}
}
func TestCanUpgradeToMerge(t *testing.T) {
bc := params.BeaconConfig()
bc.MergeForkEpoch = 5
params.OverrideBeaconConfig(bc)
tests := []struct {
name string
slot types.Slot
want bool
}{
{
name: "not epoch start",
slot: 1,
want: false,
},
{
name: "not merge epoch",
slot: params.BeaconConfig().SlotsPerEpoch,
want: false,
},
{
name: "merge epoch",
slot: types.Slot(params.BeaconConfig().MergeForkEpoch) * params.BeaconConfig().SlotsPerEpoch,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := CanUpgradeToMerge(tt.slot); got != tt.want {
t.Errorf("CanUpgradeToMerge() = %v, want %v", got, tt.want)
}
})
}
}
func TestCanProcessEpoch_TrueOnEpochsLastSlot(t *testing.T) {
tests := []struct {
slot types.Slot


@@ -214,10 +214,7 @@ func ProcessSlots(ctx context.Context, state state.BeaconState, slot types.Slot)
return nil, err
}
defer func() {
if err := SkipSlotCache.MarkNotInProgress(key); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Failed to mark skip slot no longer in progress")
}
SkipSlotCache.MarkNotInProgress(key)
}()
for state.Slot() < slot {
@@ -225,7 +222,7 @@ func ProcessSlots(ctx context.Context, state state.BeaconState, slot types.Slot)
tracing.AnnotateError(span, ctx.Err())
// Cache last best value.
if highestSlot < state.Slot() {
if err := SkipSlotCache.Put(ctx, key, state); err != nil {
log.WithError(err).Error("Failed to put skip slot cache value")
}
SkipSlotCache.Put(ctx, key, state)
}
@@ -269,10 +266,7 @@ func ProcessSlots(ctx context.Context, state state.BeaconState, slot types.Slot)
}
if highestSlot < state.Slot() {
if err := SkipSlotCache.Put(ctx, key, state); err != nil {
log.WithError(err).Error("Failed to put skip slot cache value")
tracing.AnnotateError(span, err)
}
SkipSlotCache.Put(ctx, key, state)
}
return state, nil


@@ -39,14 +39,6 @@ type ReadOnlyDatabase interface {
StateSummary(ctx context.Context, blockRoot [32]byte) (*ethpb.StateSummary, error)
HasStateSummary(ctx context.Context, blockRoot [32]byte) bool
HighestSlotStatesBelow(ctx context.Context, slot types.Slot) ([]state.ReadOnlyBeaconState, error)
// Slashing operations.
ProposerSlashing(ctx context.Context, slashingRoot [32]byte) (*eth.ProposerSlashing, error)
AttesterSlashing(ctx context.Context, slashingRoot [32]byte) (*eth.AttesterSlashing, error)
HasProposerSlashing(ctx context.Context, slashingRoot [32]byte) bool
HasAttesterSlashing(ctx context.Context, slashingRoot [32]byte) bool
// Block operations.
VoluntaryExit(ctx context.Context, exitRoot [32]byte) (*eth.VoluntaryExit, error)
HasVoluntaryExit(ctx context.Context, exitRoot [32]byte) bool
// Checkpoint operations.
JustifiedCheckpoint(ctx context.Context) (*eth.Checkpoint, error)
FinalizedCheckpoint(ctx context.Context) (*eth.Checkpoint, error)
@@ -75,11 +67,6 @@ type NoHeadAccessDatabase interface {
DeleteStates(ctx context.Context, blockRoots [][32]byte) error
SaveStateSummary(ctx context.Context, summary *ethpb.StateSummary) error
SaveStateSummaries(ctx context.Context, summaries []*ethpb.StateSummary) error
// Slashing operations.
SaveProposerSlashing(ctx context.Context, slashing *eth.ProposerSlashing) error
SaveAttesterSlashing(ctx context.Context, slashing *eth.AttesterSlashing) error
// Block operations.
SaveVoluntaryExit(ctx context.Context, exit *eth.VoluntaryExit) error
// Checkpoint operations.
SaveJustifiedCheckpoint(ctx context.Context, checkpoint *eth.Checkpoint) error
SaveFinalizedCheckpoint(ctx context.Context, checkpoint *eth.Checkpoint) error


@@ -18,10 +18,8 @@ go_library(
"migration_archived_index.go",
"migration_block_slot_index.go",
"migration_state_validators.go",
"operations.go",
"powchain.go",
"schema.go",
"slashings.go",
"state.go",
"state_summary.go",
"state_summary_cache.go",
@@ -88,9 +86,7 @@ go_test(
"migration_archived_index_test.go",
"migration_block_slot_index_test.go",
"migration_state_validators_test.go",
"operations_test.go",
"powchain_test.go",
"slashings_test.go",
"state_summary_test.go",
"state_test.go",
"utils_test.go",


@@ -150,11 +150,7 @@ func (s *Store) BlocksBySlot(ctx context.Context, slot types.Slot) (bool, []bloc
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
keys, err := blockRootsBySlot(ctx, tx, slot)
if err != nil {
return err
}
keys := blockRootsBySlot(ctx, tx, slot)
for i := 0; i < len(keys); i++ {
encoded := bkt.Get(keys[i])
blk, err := unmarshalBlock(ctx, encoded)
@@ -174,11 +170,7 @@ func (s *Store) BlockRootsBySlot(ctx context.Context, slot types.Slot) (bool, []
defer span.End()
blockRoots := make([][32]byte, 0)
err := s.db.View(func(tx *bolt.Tx) error {
keys, err := blockRootsBySlot(ctx, tx, slot)
if err != nil {
return err
}
keys := blockRootsBySlot(ctx, tx, slot)
for i := 0; i < len(keys); i++ {
blockRoots = append(blockRoots, bytesutil.ToBytes32(keys[i]))
}
@@ -519,7 +511,7 @@ func blockRootsBySlotRange(
}
// blockRootsBySlot retrieves the block roots by slot
func blockRootsBySlot(ctx context.Context, tx *bolt.Tx, slot types.Slot) ([][]byte, error) {
func blockRootsBySlot(ctx context.Context, tx *bolt.Tx, slot types.Slot) [][]byte {
ctx, span := trace.StartSpan(ctx, "BeaconDB.blockRootsBySlot")
defer span.End()
@@ -533,7 +525,7 @@ func blockRootsBySlot(ctx context.Context, tx *bolt.Tx, slot types.Slot) ([][]by
roots = append(roots, v[i:i+32])
}
}
return roots, nil
return roots
}
// createBlockIndicesFromBlock takes in a beacon block and returns


@@ -1,78 +0,0 @@
package kv
import (
"context"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
// VoluntaryExit retrieval by signing root.
func (s *Store) VoluntaryExit(ctx context.Context, exitRoot [32]byte) (*ethpb.VoluntaryExit, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.VoluntaryExit")
defer span.End()
enc, err := s.voluntaryExitBytes(ctx, exitRoot)
if err != nil {
return nil, err
}
if len(enc) == 0 {
return nil, nil
}
exit := &ethpb.VoluntaryExit{}
if err := decode(ctx, enc, exit); err != nil {
return nil, err
}
return exit, nil
}
// HasVoluntaryExit verifies if a voluntary exit is stored in the db by its signing root.
func (s *Store) HasVoluntaryExit(ctx context.Context, exitRoot [32]byte) bool {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HasVoluntaryExit")
defer span.End()
enc, err := s.voluntaryExitBytes(ctx, exitRoot)
if err != nil {
panic(err)
}
return len(enc) > 0
}
// SaveVoluntaryExit to the db by its signing root.
func (s *Store) SaveVoluntaryExit(ctx context.Context, exit *ethpb.VoluntaryExit) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveVoluntaryExit")
defer span.End()
exitRoot, err := exit.HashTreeRoot()
if err != nil {
return err
}
enc, err := encode(ctx, exit)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(voluntaryExitsBucket)
return bucket.Put(exitRoot[:], enc)
})
}
func (s *Store) voluntaryExitBytes(ctx context.Context, exitRoot [32]byte) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.voluntaryExitBytes")
defer span.End()
var dst []byte
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(voluntaryExitsBucket)
dst = bkt.Get(exitRoot[:])
return nil
})
return dst, err
}
// deleteVoluntaryExit clears a voluntary exit from the db by its signing root.
func (s *Store) deleteVoluntaryExit(ctx context.Context, exitRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.deleteVoluntaryExit")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(voluntaryExitsBucket)
return bucket.Delete(exitRoot[:])
})
}


@@ -1,31 +0,0 @@
package kv
import (
"context"
"testing"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"google.golang.org/protobuf/proto"
)
func TestStore_VoluntaryExits_CRUD(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
exit := &ethpb.VoluntaryExit{
Epoch: 5,
}
exitRoot, err := exit.HashTreeRoot()
require.NoError(t, err)
retrieved, err := db.VoluntaryExit(ctx, exitRoot)
require.NoError(t, err)
assert.Equal(t, (*ethpb.VoluntaryExit)(nil), retrieved, "Expected nil voluntary exit")
require.NoError(t, db.SaveVoluntaryExit(ctx, exit))
assert.Equal(t, true, db.HasVoluntaryExit(ctx, exitRoot), "Expected voluntary exit to exist in the db")
retrieved, err = db.VoluntaryExit(ctx, exitRoot)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(exit, retrieved), "Wanted %v, received %v", exit, retrieved)
require.NoError(t, db.deleteVoluntaryExit(ctx, exitRoot))
assert.Equal(t, false, db.HasVoluntaryExit(ctx, exitRoot), "Expected voluntary exit to have been deleted from the db")
}


@@ -1,147 +0,0 @@
package kv
import (
"context"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
// ProposerSlashing retrieval by slashing root.
func (s *Store) ProposerSlashing(ctx context.Context, slashingRoot [32]byte) (*ethpb.ProposerSlashing, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.ProposerSlashing")
defer span.End()
enc, err := s.proposerSlashingBytes(ctx, slashingRoot)
if err != nil {
return nil, err
}
if len(enc) == 0 {
return nil, nil
}
proposerSlashing := &ethpb.ProposerSlashing{}
if err := decode(ctx, enc, proposerSlashing); err != nil {
return nil, err
}
return proposerSlashing, nil
}
// HasProposerSlashing verifies if a slashing is stored in the db.
func (s *Store) HasProposerSlashing(ctx context.Context, slashingRoot [32]byte) bool {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HasProposerSlashing")
defer span.End()
enc, err := s.proposerSlashingBytes(ctx, slashingRoot)
if err != nil {
panic(err)
}
return len(enc) > 0
}
// SaveProposerSlashing to the db by its hash tree root.
func (s *Store) SaveProposerSlashing(ctx context.Context, slashing *ethpb.ProposerSlashing) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveProposerSlashing")
defer span.End()
slashingRoot, err := slashing.HashTreeRoot()
if err != nil {
return err
}
enc, err := encode(ctx, slashing)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(proposerSlashingsBucket)
return bucket.Put(slashingRoot[:], enc)
})
}
func (s *Store) proposerSlashingBytes(ctx context.Context, slashingRoot [32]byte) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.proposerSlashingBytes")
defer span.End()
var dst []byte
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(proposerSlashingsBucket)
dst = bkt.Get(slashingRoot[:])
return nil
})
return dst, err
}
// deleteProposerSlashing clears a proposer slashing from the db by its hash tree root.
func (s *Store) deleteProposerSlashing(ctx context.Context, slashingRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.deleteProposerSlashing")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(proposerSlashingsBucket)
return bucket.Delete(slashingRoot[:])
})
}
// AttesterSlashing retrieval by hash tree root.
func (s *Store) AttesterSlashing(ctx context.Context, slashingRoot [32]byte) (*ethpb.AttesterSlashing, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.AttesterSlashing")
defer span.End()
enc, err := s.attesterSlashingBytes(ctx, slashingRoot)
if err != nil {
return nil, err
}
if len(enc) == 0 {
return nil, nil
}
attSlashing := &ethpb.AttesterSlashing{}
if err := decode(ctx, enc, attSlashing); err != nil {
return nil, err
}
return attSlashing, nil
}
// HasAttesterSlashing verifies if a slashing is stored in the db.
func (s *Store) HasAttesterSlashing(ctx context.Context, slashingRoot [32]byte) bool {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HasAttesterSlashing")
defer span.End()
enc, err := s.attesterSlashingBytes(ctx, slashingRoot)
if err != nil {
panic(err)
}
return len(enc) > 0
}
// SaveAttesterSlashing to the db by its hash tree root.
func (s *Store) SaveAttesterSlashing(ctx context.Context, slashing *ethpb.AttesterSlashing) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveAttesterSlashing")
defer span.End()
slashingRoot, err := slashing.HashTreeRoot()
if err != nil {
return err
}
enc, err := encode(ctx, slashing)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(attesterSlashingsBucket)
return bucket.Put(slashingRoot[:], enc)
})
}
func (s *Store) attesterSlashingBytes(ctx context.Context, slashingRoot [32]byte) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.attesterSlashingBytes")
defer span.End()
var dst []byte
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(attesterSlashingsBucket)
dst = bkt.Get(slashingRoot[:])
return nil
})
return dst, err
}
// deleteAttesterSlashing clears an attester slashing from the db by its hash tree root.
func (s *Store) deleteAttesterSlashing(ctx context.Context, slashingRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.deleteAttesterSlashing")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(attesterSlashingsBucket)
return bucket.Delete(slashingRoot[:])
})
}
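
Note that the Has* helpers above panic on a read error rather than returning it. A hedged sketch of a non-panicking variant, built on the same proposerSlashingBytes helper shown above, in case a caller would rather surface the failure (the function name is hypothetical):

package kv

import "context"

// hasProposerSlashingErr is a hypothetical error-returning variant of
// HasProposerSlashing: same existence check, but the read error is
// propagated to the caller instead of panicking.
func (s *Store) hasProposerSlashingErr(ctx context.Context, slashingRoot [32]byte) (bool, error) {
	enc, err := s.proposerSlashingBytes(ctx, slashingRoot)
	if err != nil {
		return false, err
	}
	return len(enc) > 0, nil
}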


@@ -1,67 +0,0 @@
package kv
import (
"context"
"testing"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"google.golang.org/protobuf/proto"
)
func TestStore_ProposerSlashing_CRUD(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
prop := &ethpb.ProposerSlashing{
Header_1: util.HydrateSignedBeaconHeader(&ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ProposerIndex: 5,
},
}),
Header_2: util.HydrateSignedBeaconHeader(&ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ProposerIndex: 5,
},
}),
}
slashingRoot, err := prop.HashTreeRoot()
require.NoError(t, err)
retrieved, err := db.ProposerSlashing(ctx, slashingRoot)
require.NoError(t, err)
assert.Equal(t, (*ethpb.ProposerSlashing)(nil), retrieved, "Expected nil proposer slashing")
require.NoError(t, db.SaveProposerSlashing(ctx, prop))
assert.Equal(t, true, db.HasProposerSlashing(ctx, slashingRoot), "Expected proposer slashing to exist in the db")
retrieved, err = db.ProposerSlashing(ctx, slashingRoot)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(prop, retrieved), "Wanted %v, received %v", prop, retrieved)
require.NoError(t, db.deleteProposerSlashing(ctx, slashingRoot))
assert.Equal(t, false, db.HasProposerSlashing(ctx, slashingRoot), "Expected proposer slashing to have been deleted from the db")
}
func TestStore_AttesterSlashing_CRUD(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
att := &ethpb.AttesterSlashing{
Attestation_1: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Slot: 5,
}}),
Attestation_2: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Slot: 7,
}})}
slashingRoot, err := att.HashTreeRoot()
require.NoError(t, err)
retrieved, err := db.AttesterSlashing(ctx, slashingRoot)
require.NoError(t, err)
assert.Equal(t, (*ethpb.AttesterSlashing)(nil), retrieved, "Expected nil attester slashing")
require.NoError(t, db.SaveAttesterSlashing(ctx, att))
assert.Equal(t, true, db.HasAttesterSlashing(ctx, slashingRoot), "Expected attester slashing to exist in the db")
retrieved, err = db.AttesterSlashing(ctx, slashingRoot)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(att, retrieved), "Wanted %v, received %v", att, retrieved)
require.NoError(t, db.deleteAttesterSlashing(ctx, slashingRoot))
assert.Equal(t, false, db.HasAttesterSlashing(ctx, slashingRoot), "Expected attester slashing to have been deleted from the db")
}


@@ -6,7 +6,7 @@ go_library(
"log.go",
"service.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/interop-cold-start",
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/deterministic-genesis",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/cache/depositcache:go_default_library",


@@ -4,4 +4,4 @@ import (
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "interop-cold-start")
var log = logrus.WithField("prefix", "deterministic-genesis")


@@ -1,4 +1,4 @@
// Package interopcoldstart allows for spinning up a deterministic
// Package interopcoldstart allows for spinning up a deterministic-genesis
// local chain without the need for eth1 deposits, useful for
// local client development and interoperability testing.
package interopcoldstart


@@ -0,0 +1,58 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"metrics.go",
"process_attestation.go",
"process_block.go",
"process_exit.go",
"service.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/monitor",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/block:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"process_attestation_test.go",
"process_block_test.go",
"process_exit_test.go",
"service_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/wrapper:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)


@@ -0,0 +1,7 @@
/*
Package monitor defines a runtime service which receives
notifications triggered by events related to the performance of tracked
validating keys. It then logs and emits metrics so that a user can keep
finely detailed performance measures.
*/
package monitor
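
A minimal sketch of how this service is assembled, mirroring the test helper later in this changeset; Service, ValidatorMonitorConfig, and the two performance maps are defined in this package's service.go, and because those fields are unexported the snippet assumes it lives inside package monitor:

package monitor

import (
	types "github.com/prysmaticlabs/eth2-types"
	"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
)

// newExampleService is a hypothetical constructor showing how the pieces fit:
// a stategen manager for state lookups, plus the set of indices to track.
func newExampleService(sg stategen.StateManager) *Service {
	return &Service{
		config: &ValidatorMonitorConfig{
			StateGen: sg,
			// Tracked validators are kept in a map used purely for membership tests.
			TrackedValidators: map[types.ValidatorIndex]interface{}{
				1: nil,
				2: nil,
			},
		},
		latestPerformance:     map[types.ValidatorIndex]ValidatorLatestPerformance{},
		aggregatedPerformance: map[types.ValidatorIndex]ValidatorAggregatedPerformance{},
	}
}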


@@ -0,0 +1,70 @@
package monitor
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/sirupsen/logrus"
)
var (
log = logrus.WithField("prefix", "monitor")
// TODO: The Prometheus gauge vectors and counters in this package deprecate the
// corresponding gauge vectors and counters in the validator client.
// inclusionSlotGauge used to track attestation inclusion slot
inclusionSlotGauge = promauto.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: "monitor",
Name: "inclusion_slot",
Help: "Attestations inclusion slot",
},
[]string{
"validator_index",
},
)
// timelyHeadCounter used to track attestation timely head flags
timelyHeadCounter = promauto.NewCounterVec(
prometheus.CounterOpts{
Namespace: "monitor",
Name: "timely_head",
Help: "Attestation timely Head flag",
},
[]string{
"validator_index",
},
)
// timelyTargetCounter used to track attestation timely target flags
timelyTargetCounter = promauto.NewCounterVec(
prometheus.CounterOpts{
Namespace: "monitor",
Name: "timely_target",
Help: "Attestation timely Target flag",
},
[]string{
"validator_index",
},
)
// timelySourceCounter used to track attestation timely source flags
timelySourceCounter = promauto.NewCounterVec(
prometheus.CounterOpts{
Namespace: "monitor",
Name: "timely_source",
Help: "Attestation timely Source flag",
},
[]string{
"validator_index",
},
)
// aggregationCounter used to track aggregations
aggregationCounter = promauto.NewCounterVec(
prometheus.CounterOpts{
Namespace: "monitor",
Name: "aggregations",
Help: "Number of aggregation duties performed",
},
[]string{
"validator_index",
},
)
)
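
Every vector above is labeled by validator_index, so each tracked validator gets its own time series. A short sketch of the increment pattern this package uses (the helper name is hypothetical; the same call appears inline in process_attestation.go below):

package monitor

import "fmt"

// recordTimelySource is a hypothetical helper: the validator index becomes
// the label value, creating (or reusing) one counter series per validator.
func recordTimelySource(idx uint64) {
	timelySourceCounter.WithLabelValues(fmt.Sprintf("%d", idx)).Inc()
}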


@@ -0,0 +1,218 @@
package monitor
import (
"context"
"fmt"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/sirupsen/logrus"
)
// updatedPerformanceFromTrackedVal returns true if the validator is tracked and if the
// given slot is different from the last slot this validator attested to.
func (s *Service) updatedPerformanceFromTrackedVal(idx types.ValidatorIndex, slot types.Slot) bool {
if !s.TrackedIndex(idx) {
return false
}
if lp, ok := s.latestPerformance[idx]; ok {
return lp.attestedSlot != slot
}
return false
}
// attestingIndices returns the indices of validators that appear in the
// given aggregated attestation.
func attestingIndices(ctx context.Context, state state.BeaconState, att *ethpb.Attestation) ([]uint64, error) {
committee, err := helpers.BeaconCommitteeFromState(ctx, state, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
return nil, err
}
return attestation.AttestingIndices(att.AggregationBits, committee)
}
// logMessageTimelyFlagsForIndex returns the log message with the basic
// performance indicators for the attestation (head, source, target)
func logMessageTimelyFlagsForIndex(idx types.ValidatorIndex, data *ethpb.AttestationData) logrus.Fields {
return logrus.Fields{
"ValidatorIndex": idx,
"Slot": data.Slot,
"Source": fmt.Sprintf("%#x", bytesutil.Trunc(data.Source.Root)),
"Target": fmt.Sprintf("%#x", bytesutil.Trunc(data.Target.Root)),
"Head": fmt.Sprintf("%#x", bytesutil.Trunc(data.BeaconBlockRoot)),
}
}
// processAttestations logs when one of our tracked validators'
// attestations is included in a block
func (s *Service) processAttestations(ctx context.Context, state state.BeaconState, blk block.BeaconBlock) {
if blk == nil || blk.Body() == nil {
return
}
for _, attestation := range blk.Body().Attestations() {
s.processIncludedAttestation(ctx, state, attestation)
}
}
// processIncludedAttestation logs when one of our tracked validators appears
// in the attesting indices of an attestation that has not been included
// before.
func (s *Service) processIncludedAttestation(ctx context.Context, state state.BeaconState, att *ethpb.Attestation) {
attestingIndices, err := attestingIndices(ctx, state, att)
if err != nil {
log.WithError(err).Error("Could not get attesting indices")
return
}
for _, idx := range attestingIndices {
if s.updatedPerformanceFromTrackedVal(types.ValidatorIndex(idx), att.Data.Slot) {
logFields := logMessageTimelyFlagsForIndex(types.ValidatorIndex(idx), att.Data)
balance, err := state.BalanceAtIndex(types.ValidatorIndex(idx))
if err != nil {
log.WithError(err).Error("Could not get balance")
return
}
aggregatedPerf := s.aggregatedPerformance[types.ValidatorIndex(idx)]
aggregatedPerf.totalAttestedCount++
aggregatedPerf.totalRequestedCount++
latestPerf := s.latestPerformance[types.ValidatorIndex(idx)]
balanceChg := balance - latestPerf.balance
latestPerf.balanceChange = balanceChg
latestPerf.balance = balance
latestPerf.attestedSlot = att.Data.Slot
latestPerf.inclusionSlot = state.Slot()
inclusionSlotGauge.WithLabelValues(fmt.Sprintf("%d", idx)).Set(float64(latestPerf.inclusionSlot))
aggregatedPerf.totalDistance += uint64(latestPerf.inclusionSlot - latestPerf.attestedSlot)
if state.Version() == version.Altair {
targetIdx := params.BeaconConfig().TimelyTargetFlagIndex
sourceIdx := params.BeaconConfig().TimelySourceFlagIndex
headIdx := params.BeaconConfig().TimelyHeadFlagIndex
var participation []byte
if slots.ToEpoch(latestPerf.inclusionSlot) ==
slots.ToEpoch(latestPerf.attestedSlot) {
participation, err = state.CurrentEpochParticipation()
if err != nil {
log.WithError(err).Error("Could not get current epoch participation")
return
}
} else {
participation, err = state.PreviousEpochParticipation()
if err != nil {
log.WithError(err).Error("Could not get previous epoch participation")
return
}
}
flags := participation[idx]
hasFlag, err := altair.HasValidatorFlag(flags, sourceIdx)
if err != nil {
log.WithError(err).Error("Could not get timely Source flag")
return
}
latestPerf.timelySource = hasFlag
hasFlag, err = altair.HasValidatorFlag(flags, headIdx)
if err != nil {
log.WithError(err).Error("Could not get timely Head flag")
return
}
latestPerf.timelyHead = hasFlag
hasFlag, err = altair.HasValidatorFlag(flags, targetIdx)
if err != nil {
log.WithError(err).Error("Could not get timely Target flag")
return
}
latestPerf.timelyTarget = hasFlag
if latestPerf.timelySource {
timelySourceCounter.WithLabelValues(fmt.Sprintf("%d", idx)).Inc()
aggregatedPerf.totalCorrectSource++
}
if latestPerf.timelyHead {
timelyHeadCounter.WithLabelValues(fmt.Sprintf("%d", idx)).Inc()
aggregatedPerf.totalCorrectHead++
}
if latestPerf.timelyTarget {
timelyTargetCounter.WithLabelValues(fmt.Sprintf("%d", idx)).Inc()
aggregatedPerf.totalCorrectTarget++
}
}
logFields["CorrectHead"] = latestPerf.timelyHead
logFields["CorrectSource"] = latestPerf.timelySource
logFields["CorrectTarget"] = latestPerf.timelyTarget
logFields["InclusionSlot"] = latestPerf.inclusionSlot
logFields["NewBalance"] = balance
logFields["BalanceChange"] = balanceChg
s.latestPerformance[types.ValidatorIndex(idx)] = latestPerf
s.aggregatedPerformance[types.ValidatorIndex(idx)] = aggregatedPerf
log.WithFields(logFields).Info("Attestation included")
}
}
}
// processUnaggregatedAttestation logs when the beacon node sees an unaggregated attestation from one of our
// tracked validators
func (s *Service) processUnaggregatedAttestation(ctx context.Context, att *ethpb.Attestation) {
root := bytesutil.ToBytes32(att.Data.BeaconBlockRoot)
state := s.config.StateGen.StateByRootIfCachedNoCopy(root)
if state == nil {
log.Debug("Skipping unaggregated attestation due to state not found in cache")
return
}
attestingIndices, err := attestingIndices(ctx, state, att)
if err != nil {
log.WithError(err).Error("Could not get attesting indices")
return
}
for _, idx := range attestingIndices {
if s.updatedPerformanceFromTrackedVal(types.ValidatorIndex(idx), att.Data.Slot) {
logFields := logMessageTimelyFlagsForIndex(types.ValidatorIndex(idx), att.Data)
log.WithFields(logFields).Info("Processed unaggregated attestation")
}
}
}
// processAggregatedAttestation logs when we see an aggregation performed by one of our
// tracked validators, or an aggregated attestation that includes one of them
func (s *Service) processAggregatedAttestation(ctx context.Context, att *ethpb.AggregateAttestationAndProof) {
if s.TrackedIndex(att.AggregatorIndex) {
log.WithFields(logrus.Fields{
"ValidatorIndex": att.AggregatorIndex,
}).Info("Processed attestation aggregation")
aggregatedPerf := s.aggregatedPerformance[att.AggregatorIndex]
aggregatedPerf.totalAggregations++
s.aggregatedPerformance[att.AggregatorIndex] = aggregatedPerf
aggregationCounter.WithLabelValues(fmt.Sprintf("%d", att.AggregatorIndex)).Inc()
}
var root [32]byte
copy(root[:], att.Aggregate.Data.BeaconBlockRoot)
state := s.config.StateGen.StateByRootIfCachedNoCopy(root)
if state == nil {
log.Debug("Skipping agregated attestation due to state not found in cache")
return
}
attestingIndices, err := attestingIndices(ctx, state, att.Aggregate)
if err != nil {
log.WithError(err).Error("Could not get attesting indices")
return
}
for _, idx := range attestingIndices {
if s.updatedPerformanceFromTrackedVal(types.ValidatorIndex(idx), att.Aggregate.Data.Slot) {
logFields := logMessageTimelyFlagsForIndex(types.ValidatorIndex(idx), att.Aggregate.Data)
log.WithFields(logFields).Info("Processed aggregated attestation")
}
}
}


@@ -0,0 +1,287 @@
package monitor
import (
"bytes"
"context"
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/go-bitfield"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
"github.com/sirupsen/logrus"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func setupService(t *testing.T) *Service {
beaconDB := testDB.SetupDB(t)
trackedVals := map[types.ValidatorIndex]interface{}{
1: nil,
2: nil,
12: nil,
}
latestPerformance := map[types.ValidatorIndex]ValidatorLatestPerformance{
1: {
balance: 32000000000,
},
2: {
balance: 32000000000,
},
12: {
balance: 31900000000,
},
}
aggregatedPerformance := map[types.ValidatorIndex]ValidatorAggregatedPerformance{
1: {},
2: {},
12: {},
}
return &Service{
config: &ValidatorMonitorConfig{
StateGen: stategen.New(beaconDB),
TrackedValidators: trackedVals,
},
latestPerformance: latestPerformance,
aggregatedPerformance: aggregatedPerformance,
}
}
func TestGetAttestingIndices(t *testing.T) {
ctx := context.Background()
beaconState, _ := util.DeterministicGenesisState(t, 256)
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 0,
},
AggregationBits: bitfield.Bitlist{0b11, 0b1},
}
attestingIndices, err := attestingIndices(ctx, beaconState, att)
require.NoError(t, err)
require.DeepEqual(t, attestingIndices, []uint64{0xc, 0x2})
}
func TestProcessIncludedAttestationTwoTracked(t *testing.T) {
hook := logTest.NewGlobal()
s := setupService(t)
state, _ := util.DeterministicGenesisStateAltair(t, 256)
require.NoError(t, state.SetSlot(2))
require.NoError(t, state.SetCurrentParticipationBits(bytes.Repeat([]byte{0xff}, 13)))
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 0,
BeaconBlockRoot: bytesutil.PadTo([]byte("hello-world"), 32),
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("hello-world"), 32),
},
Target: &ethpb.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("hello-world"), 32),
},
},
AggregationBits: bitfield.Bitlist{0b11, 0b1},
}
s.processIncludedAttestation(context.Background(), state, att)
wanted1 := "\"Attestation included\" BalanceChange=0 CorrectHead=true CorrectSource=true CorrectTarget=true Head=0x68656c6c6f2d InclusionSlot=2 NewBalance=32000000000 Slot=1 Source=0x68656c6c6f2d Target=0x68656c6c6f2d ValidatorIndex=2 prefix=monitor"
wanted2 := "\"Attestation included\" BalanceChange=100000000 CorrectHead=true CorrectSource=true CorrectTarget=true Head=0x68656c6c6f2d InclusionSlot=2 NewBalance=32000000000 Slot=1 Source=0x68656c6c6f2d Target=0x68656c6c6f2d ValidatorIndex=12 prefix=monitor"
require.LogsContain(t, hook, wanted1)
require.LogsContain(t, hook, wanted2)
}
func TestProcessUnaggregatedAttestationStateNotCached(t *testing.T) {
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx := context.Background()
s := setupService(t)
state, _ := util.DeterministicGenesisStateAltair(t, 256)
require.NoError(t, state.SetSlot(2))
header := state.LatestBlockHeader()
participation := []byte{0xff, 0xff, 0x01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
require.NoError(t, state.SetCurrentParticipationBits(participation))
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 0,
BeaconBlockRoot: header.GetStateRoot(),
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("hello-world"), 32),
},
Target: &ethpb.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("hello-world"), 32),
},
},
AggregationBits: bitfield.Bitlist{0b11, 0b1},
}
s.processUnaggregatedAttestation(ctx, att)
require.LogsContain(t, hook, "Skipping unaggregated attestation due to state not found in cache")
logrus.SetLevel(logrus.InfoLevel)
}
func TestProcessUnaggregatedAttestationStateCached(t *testing.T) {
ctx := context.Background()
hook := logTest.NewGlobal()
s := setupService(t)
state, _ := util.DeterministicGenesisStateAltair(t, 256)
participation := []byte{0xff, 0xff, 0x01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
require.NoError(t, state.SetCurrentParticipationBits(participation))
root := [32]byte{}
copy(root[:], "hello-world")
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 0,
BeaconBlockRoot: root[:],
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: root[:],
},
Target: &ethpb.Checkpoint{
Epoch: 1,
Root: root[:],
},
},
AggregationBits: bitfield.Bitlist{0b11, 0b1},
}
require.NoError(t, s.config.StateGen.SaveState(ctx, root, state))
s.processUnaggregatedAttestation(context.Background(), att)
wanted1 := "\"Processed unaggregated attestation\" Head=0x68656c6c6f2d Slot=1 Source=0x68656c6c6f2d Target=0x68656c6c6f2d ValidatorIndex=2 prefix=monitor"
wanted2 := "\"Processed unaggregated attestation\" Head=0x68656c6c6f2d Slot=1 Source=0x68656c6c6f2d Target=0x68656c6c6f2d ValidatorIndex=12 prefix=monitor"
require.LogsContain(t, hook, wanted1)
require.LogsContain(t, hook, wanted2)
}
func TestProcessAggregatedAttestationStateNotCached(t *testing.T) {
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx := context.Background()
s := setupService(t)
state, _ := util.DeterministicGenesisStateAltair(t, 256)
require.NoError(t, state.SetSlot(2))
header := state.LatestBlockHeader()
participation := []byte{0xff, 0xff, 0x01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
require.NoError(t, state.SetCurrentParticipationBits(participation))
att := &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 2,
Aggregate: &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 0,
BeaconBlockRoot: header.GetStateRoot(),
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("hello-world"), 32),
},
Target: &ethpb.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("hello-world"), 32),
},
},
AggregationBits: bitfield.Bitlist{0b11, 0b1},
},
}
s.processAggregatedAttestation(ctx, att)
require.LogsContain(t, hook, "\"Processed attestation aggregation\" ValidatorIndex=2 prefix=monitor")
require.LogsContain(t, hook, "Skipping agregated attestation due to state not found in cache")
logrus.SetLevel(logrus.InfoLevel)
}
func TestProcessAggregatedAttestationStateCached(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
s := setupService(t)
state, _ := util.DeterministicGenesisStateAltair(t, 256)
participation := []byte{0xff, 0xff, 0x01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
require.NoError(t, state.SetCurrentParticipationBits(participation))
root := [32]byte{}
copy(root[:], "hello-world")
att := &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 2,
Aggregate: &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 0,
BeaconBlockRoot: root[:],
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: root[:],
},
Target: &ethpb.Checkpoint{
Epoch: 1,
Root: root[:],
},
},
AggregationBits: bitfield.Bitlist{0b10, 0b1},
},
}
require.NoError(t, s.config.StateGen.SaveState(ctx, root, state))
s.processAggregatedAttestation(ctx, att)
require.LogsContain(t, hook, "\"Processed attestation aggregation\" ValidatorIndex=2 prefix=monitor")
require.LogsContain(t, hook, "\"Processed aggregated attestation\" Head=0x68656c6c6f2d Slot=1 Source=0x68656c6c6f2d Target=0x68656c6c6f2d ValidatorIndex=2 prefix=monitor")
require.LogsDoNotContain(t, hook, "\"Processed aggregated attestation\" Head=0x68656c6c6f2d Slot=1 Source=0x68656c6c6f2d Target=0x68656c6c6f2d ValidatorIndex=12 prefix=monitor")
}
func TestProcessAttestations(t *testing.T) {
hook := logTest.NewGlobal()
s := setupService(t)
ctx := context.Background()
state, _ := util.DeterministicGenesisStateAltair(t, 256)
require.NoError(t, state.SetSlot(2))
require.NoError(t, state.SetCurrentParticipationBits(bytes.Repeat([]byte{0xff}, 13)))
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 0,
BeaconBlockRoot: bytesutil.PadTo([]byte("hello-world"), 32),
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: bytesutil.PadTo([]byte("hello-world"), 32),
},
Target: &ethpb.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("hello-world"), 32),
},
},
AggregationBits: bitfield.Bitlist{0b11, 0b1},
}
block := &ethpb.BeaconBlockAltair{
Slot: 2,
Body: &ethpb.BeaconBlockBodyAltair{
Attestations: []*ethpb.Attestation{att},
},
}
wrappedBlock, err := wrapper.WrappedAltairBeaconBlock(block)
require.NoError(t, err)
s.processAttestations(ctx, state, wrappedBlock)
wanted1 := "\"Attestation included\" BalanceChange=0 CorrectHead=true CorrectSource=true CorrectTarget=true Head=0x68656c6c6f2d InclusionSlot=2 NewBalance=32000000000 Slot=1 Source=0x68656c6c6f2d Target=0x68656c6c6f2d ValidatorIndex=2 prefix=monitor"
wanted2 := "\"Attestation included\" BalanceChange=100000000 CorrectHead=true CorrectSource=true CorrectTarget=true Head=0x68656c6c6f2d InclusionSlot=2 NewBalance=32000000000 Slot=1 Source=0x68656c6c6f2d Target=0x68656c6c6f2d ValidatorIndex=12 prefix=monitor"
require.LogsContain(t, hook, wanted1)
require.LogsContain(t, hook, wanted2)
}


@@ -0,0 +1,47 @@
package monitor
import (
"fmt"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/sirupsen/logrus"
)
// processSlashings logs when one of our tracked validators is slashed
func (s *Service) processSlashings(blk block.BeaconBlock) {
for _, slashing := range blk.Body().ProposerSlashings() {
idx := slashing.Header_1.Header.ProposerIndex
if s.TrackedIndex(idx) {
log.WithFields(logrus.Fields{
"ProposerIndex": idx,
"Slot:": blk.Slot(),
"SlashingSlot": slashing.Header_1.Header.Slot,
"Root1": fmt.Sprintf("%#x", bytesutil.Trunc(slashing.Header_1.Header.BodyRoot)),
"Root2": fmt.Sprintf("%#x", bytesutil.Trunc(slashing.Header_2.Header.BodyRoot)),
}).Info("Proposer slashing was included")
}
}
for _, slashing := range blk.Body().AttesterSlashings() {
for _, idx := range blocks.SlashableAttesterIndices(slashing) {
if s.TrackedIndex(types.ValidatorIndex(idx)) {
log.WithFields(logrus.Fields{
"AttesterIndex": idx,
"Slot:": blk.Slot(),
"Slot1": slashing.Attestation_1.Data.Slot,
"Root1": fmt.Sprintf("%#x", bytesutil.Trunc(slashing.Attestation_1.Data.BeaconBlockRoot)),
"SourceEpoch1": slashing.Attestation_1.Data.Source.Epoch,
"TargetEpoch1": slashing.Attestation_1.Data.Target.Epoch,
"Slot2": slashing.Attestation_2.Data.Slot,
"Root2": fmt.Sprintf("%#x", bytesutil.Trunc(slashing.Attestation_2.Data.BeaconBlockRoot)),
"SourceEpoch2": slashing.Attestation_2.Data.Source.Epoch,
"TargetEpoch2": slashing.Attestation_2.Data.Target.Epoch,
}).Info("Attester slashing was included")
}
}
}
}


@@ -0,0 +1,131 @@
package monitor
import (
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestProcessSlashings(t *testing.T) {
tests := []struct {
name string
block *ethpb.BeaconBlock
wantedErr string
}{
{
name: "Proposer slashing a tracked index",
block: &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
ProposerSlashings: []*ethpb.ProposerSlashing{
{
Header_1: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ProposerIndex: 2,
Slot: params.BeaconConfig().SlotsPerEpoch + 1,
},
},
Header_2: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ProposerIndex: 2,
Slot: 0,
},
},
},
},
},
},
wantedErr: "\"Proposer slashing was included\" ProposerIndex=2",
},
{
name: "Proposer slashing an untracked index",
block: &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
ProposerSlashings: []*ethpb.ProposerSlashing{
{
Header_1: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ProposerIndex: 3,
Slot: params.BeaconConfig().SlotsPerEpoch + 4,
},
},
Header_2: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ProposerIndex: 3,
Slot: 0,
},
},
},
},
},
},
wantedErr: "",
},
{
name: "Attester slashing a tracked index",
block: &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: []*ethpb.AttesterSlashing{
{
Attestation_1: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{1, 3, 4},
}),
Attestation_2: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{1, 5, 6},
}),
},
},
},
},
wantedErr: "\"Attester slashing was included\" AttesterIndex=1",
},
{
name: "Attester slashing untracked index",
block: &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: []*ethpb.AttesterSlashing{
{
Attestation_1: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{1, 3, 4},
}),
Attestation_2: util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{3, 5, 6},
}),
},
},
},
},
wantedErr: "",
}}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
hook := logTest.NewGlobal()
s := &Service{
config: &ValidatorMonitorConfig{
TrackedValidators: map[types.ValidatorIndex]interface{}{
1: nil,
2: nil,
},
},
}
s.processSlashings(wrapper.WrappedPhase0BeaconBlock(tt.block))
if tt.wantedErr != "" {
require.LogsContain(t, hook, tt.wantedErr)
} else {
require.LogsDoNotContain(t, hook, "slashing")
}
})
}
}


@@ -0,0 +1,31 @@
package monitor
import (
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/sirupsen/logrus"
)
// processExitsFromBlock logs when one of our tracked validators' voluntary
// exits is included in a block
func (s *Service) processExitsFromBlock(blk block.BeaconBlock) {
for _, exit := range blk.Body().VoluntaryExits() {
idx := exit.Exit.ValidatorIndex
if s.TrackedIndex(idx) {
log.WithFields(logrus.Fields{
"ValidatorIndex": idx,
"Slot": blk.Slot(),
}).Info("Voluntary exit was included")
}
}
}
// processExit logs when one of our tracked validators' voluntary exits is processed
func (s *Service) processExit(exit *ethpb.SignedVoluntaryExit) {
idx := exit.Exit.ValidatorIndex
if s.TrackedIndex(idx) {
log.WithFields(logrus.Fields{
"ValidatorIndex": idx,
}).Info("Voluntary exit was processed")
}
}


@@ -0,0 +1,126 @@
package monitor
import (
"testing"
types "github.com/prysmaticlabs/eth2-types"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestProcessExitsFromBlockTrackedIndices(t *testing.T) {
hook := logTest.NewGlobal()
s := &Service{
config: &ValidatorMonitorConfig{
TrackedValidators: map[types.ValidatorIndex]interface{}{
1: nil,
2: nil,
},
},
}
exits := []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: 3,
Epoch: 1,
},
},
{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: 2,
Epoch: 0,
},
},
}
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
VoluntaryExits: exits,
},
}
s.processExitsFromBlock(wrapper.WrappedPhase0BeaconBlock(block))
require.LogsContain(t, hook, "\"Voluntary exit was included\" Slot=0 ValidatorIndex=2")
}
func TestProcessExitsFromBlockUntrackedIndices(t *testing.T) {
hook := logTest.NewGlobal()
s := &Service{
config: &ValidatorMonitorConfig{
TrackedValidators: map[types.ValidatorIndex]interface{}{
1: nil,
2: nil,
},
},
}
exits := []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: 3,
Epoch: 1,
},
},
{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: 4,
Epoch: 0,
},
},
}
block := &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
VoluntaryExits: exits,
},
}
s.processExitsFromBlock(wrapper.WrappedPhase0BeaconBlock(block))
require.LogsDoNotContain(t, hook, "\"Voluntary exit was included\"")
}
func TestProcessExitP2PTrackedIndices(t *testing.T) {
hook := logTest.NewGlobal()
s := &Service{
config: &ValidatorMonitorConfig{
TrackedValidators: map[types.ValidatorIndex]interface{}{
1: nil,
2: nil,
},
},
}
exit := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: 1,
Epoch: 1,
},
Signature: make([]byte, 96),
}
s.processExit(exit)
require.LogsContain(t, hook, "\"Voluntary exit was processed\" ValidatorIndex=1")
}
func TestProcessExitP2PUntrackedIndices(t *testing.T) {
hook := logTest.NewGlobal()
s := &Service{
config: &ValidatorMonitorConfig{
TrackedValidators: map[types.ValidatorIndex]interface{}{
1: nil,
2: nil,
},
},
}
exit := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: 3,
Epoch: 1,
},
}
s.processExit(exit)
require.LogsDoNotContain(t, hook, "\"Voluntary exit was processed\"")
}


@@ -0,0 +1,52 @@
package monitor
import (
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
)
// ValidatorLatestPerformance keeps track of the latest participation of the validator
type ValidatorLatestPerformance struct {
attestedSlot types.Slot
inclusionSlot types.Slot
timelySource bool
timelyTarget bool
timelyHead bool
balance uint64
balanceChange uint64
}
// ValidatorAggregatedPerformance keeps track of the accumulated performance of
// the validator since launch
type ValidatorAggregatedPerformance struct {
totalAttestedCount uint64
totalRequestedCount uint64
totalDistance uint64
totalCorrectSource uint64
totalCorrectTarget uint64
totalCorrectHead uint64
totalAggregations uint64
}
// ValidatorMonitorConfig contains the list of validator indices that the
// monitor service tracks, as well as the event feed notifier that the
// monitor needs to subscribe to.
type ValidatorMonitorConfig struct {
StateGen stategen.StateManager
TrackedValidators map[types.ValidatorIndex]interface{}
}
// Service is the main structure that tracks validators and reports logs and
// metrics on their performance throughout their lifetime.
type Service struct {
config *ValidatorMonitorConfig
latestPerformance map[types.ValidatorIndex]ValidatorLatestPerformance
aggregatedPerformance map[types.ValidatorIndex]ValidatorAggregatedPerformance
}
// TrackedIndex returns whether the given validator index corresponds to one of the
// validators we follow
func (s *Service) TrackedIndex(idx types.ValidatorIndex) bool {
_, ok := s.config.TrackedValidators[idx]
return ok
}
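
Since TrackedValidators is a map used as a set (the values are always nil), building it from a list of indices is a one-liner loop. A hedged sketch of a hypothetical helper that turns, say, flag-provided indices into the expected map:

package monitor

import types "github.com/prysmaticlabs/eth2-types"

// trackedSetFromIndices is a hypothetical helper: it converts a slice of
// validator indices into the membership map that
// ValidatorMonitorConfig.TrackedValidators expects.
func trackedSetFromIndices(indices []types.ValidatorIndex) map[types.ValidatorIndex]interface{} {
	set := make(map[types.ValidatorIndex]interface{}, len(indices))
	for _, idx := range indices {
		set[idx] = nil
	}
	return set
}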


@@ -0,0 +1,21 @@
package monitor
import (
"testing"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/testing/require"
)
func TestTrackedIndex(t *testing.T) {
s := &Service{
config: &ValidatorMonitorConfig{
TrackedValidators: map[types.ValidatorIndex]interface{}{
1: nil,
2: nil,
},
},
}
require.Equal(t, s.TrackedIndex(types.ValidatorIndex(1)), true)
require.Equal(t, s.TrackedIndex(types.ValidatorIndex(3)), false)
}


@@ -22,10 +22,10 @@ go_library(
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/db/slasherkv:go_default_library",
"//beacon-chain/deterministic-genesis:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/gateway:go_default_library",
"//beacon-chain/interop-cold-start:go_default_library",
"//beacon-chain/node/registration:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
@@ -36,6 +36,7 @@ go_library(
"//beacon-chain/rpc:go_default_library",
"//beacon-chain/rpc/apimiddleware:go_default_library",
"//beacon-chain/slasher:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",
"//beacon-chain/sync/initial-sync:go_default_library",
@@ -44,6 +45,7 @@ go_library(
"//config/features:go_default_library",
"//config/params:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/backup:go_default_library",
"//monitoring/prometheus:go_default_library",
"//monitoring/tracing:go_default_library",
@@ -75,6 +77,7 @@ go_test(
"//config/params:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",


@@ -1,6 +1,7 @@
package node
import (
"github.com/ethereum/go-ethereum/common"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/cmd"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
@@ -95,3 +96,26 @@ func configureInteropConfig(cliCtx *cli.Context) {
params.OverrideBeaconConfig(bCfg)
}
}
func configureExecutionSetting(cliCtx *cli.Context) {
if cliCtx.IsSet(flags.TerminalTotalDifficultyOverride.Name) {
c := params.BeaconConfig()
c.TerminalTotalDifficulty = cliCtx.Uint64(flags.TerminalTotalDifficultyOverride.Name)
params.OverrideBeaconConfig(c)
}
if cliCtx.IsSet(flags.TerminalBlockHashOverride.Name) {
c := params.BeaconConfig()
c.TerminalBlockHash = common.HexToHash(cliCtx.String(flags.TerminalBlockHashOverride.Name))
params.OverrideBeaconConfig(c)
}
if cliCtx.IsSet(flags.TerminalBlockHashActivationEpochOverride.Name) {
c := params.BeaconConfig()
c.TerminalBlockHashActivationEpoch = types.Epoch(cliCtx.Uint64(flags.TerminalBlockHashActivationEpochOverride.Name))
params.OverrideBeaconConfig(c)
}
if cliCtx.IsSet(flags.Coinbase.Name) {
c := params.BeaconConfig()
c.Coinbase = common.HexToAddress(cliCtx.String(flags.Coinbase.Name))
params.OverrideBeaconConfig(c)
}
}


@@ -6,6 +6,7 @@ import (
"strconv"
"testing"
"github.com/ethereum/go-ethereum/common"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/cmd"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
@@ -70,6 +71,29 @@ func TestConfigureProofOfWork(t *testing.T) {
assert.Equal(t, "deposit-contract", params.BeaconConfig().DepositContractAddress)
}
func TestConfigureExecutionSetting(t *testing.T) {
params.SetupTestConfigCleanup(t)
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.Uint64(flags.TerminalTotalDifficultyOverride.Name, 0, "")
set.String(flags.TerminalBlockHashOverride.Name, "", "")
set.Uint64(flags.TerminalBlockHashActivationEpochOverride.Name, 0, "")
set.String(flags.Coinbase.Name, "", "")
require.NoError(t, set.Set(flags.TerminalTotalDifficultyOverride.Name, strconv.Itoa(100)))
require.NoError(t, set.Set(flags.TerminalBlockHashOverride.Name, "0xA"))
require.NoError(t, set.Set(flags.TerminalBlockHashActivationEpochOverride.Name, strconv.Itoa(200)))
require.NoError(t, set.Set(flags.Coinbase.Name, "0xB"))
cliCtx := cli.NewContext(&app, set, nil)
configureExecutionSetting(cliCtx)
assert.Equal(t, uint64(100), params.BeaconConfig().TerminalTotalDifficulty)
assert.Equal(t, common.HexToHash("0xA"), params.BeaconConfig().TerminalBlockHash)
assert.Equal(t, types.Epoch(200), params.BeaconConfig().TerminalBlockHashActivationEpoch)
assert.Equal(t, common.HexToAddress("0xB"), params.BeaconConfig().Coinbase)
}
func TestConfigureNetwork(t *testing.T) {
params.SetupTestConfigCleanup(t)


@@ -24,10 +24,10 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/beacon-chain/db/slasherkv"
interopcoldstart "github.com/prysmaticlabs/prysm/beacon-chain/deterministic-genesis"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/beacon-chain/gateway"
interopcoldstart "github.com/prysmaticlabs/prysm/beacon-chain/interop-cold-start"
"github.com/prysmaticlabs/prysm/beacon-chain/node/registration"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/slashings"
@@ -38,6 +38,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/rpc"
"github.com/prysmaticlabs/prysm/beacon-chain/rpc/apimiddleware"
"github.com/prysmaticlabs/prysm/beacon-chain/slasher"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
regularsync "github.com/prysmaticlabs/prysm/beacon-chain/sync"
initialsync "github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync"
@@ -46,6 +47,7 @@ import (
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/container/slice"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/monitoring/backup"
"github.com/prysmaticlabs/prysm/monitoring/prometheus"
"github.com/prysmaticlabs/prysm/runtime"
@@ -61,6 +63,14 @@ const testSkipPowFlag = "test-skip-pow"
// 128MB max message size when enabling debug endpoints.
const debugGrpcMaxMsgSize = 1 << 27
// Used as a struct to keep cli flag options for configuring services
// for the beacon node. We keep this as a separate struct to not pollute the actual BeaconNode
// struct, as it is merely used to pass down configuration options into the appropriate services.
type serviceFlagOpts struct {
blockchainFlagOpts []blockchain.Option
powchainFlagOpts []powchain.Option
}
// BeaconNode defines a struct that handles the services running a random beacon chain
// full PoS node. It handles the lifecycle of the entire system and registers
// services to a service registry.
@@ -86,7 +96,8 @@ type BeaconNode struct {
collector *bcnodeCollector
slasherBlockHeadersFeed *event.Feed
slasherAttestationsFeed *event.Feed
blockchainFlagOpts []blockchain.Option
finalizedStateAtStartUp state.BeaconState
serviceFlagOpts *serviceFlagOpts
}
// New creates a new node instance, sets up configuration options, and registers
@@ -127,6 +138,7 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
syncCommitteePool: synccommittee.NewPool(),
slasherBlockHeadersFeed: new(event.Feed),
slasherAttestationsFeed: new(event.Feed),
serviceFlagOpts: &serviceFlagOpts{},
}
for _, opt := range opts {
@@ -135,7 +147,7 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
}
}
depositAddress, err := registration.DepositContractAddress()
depositAddress, err := powchain.DepositContractAddress()
if err != nil {
return nil, err
}
@@ -147,7 +159,9 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
return nil, err
}
beacon.startStateGen()
if err := beacon.startStateGen(); err != nil {
return nil, err
}
if err := beacon.registerP2P(cliCtx); err != nil {
return nil, err
@@ -161,7 +175,7 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
return nil, err
}
if err := beacon.registerInteropServices(); err != nil {
if err := beacon.registerDeterministicGenesisService(); err != nil {
return nil, err
}
@@ -420,8 +434,34 @@ func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context) error {
return nil
}
func (b *BeaconNode) startStateGen() {
func (b *BeaconNode) startStateGen() error {
b.stateGen = stategen.New(b.db)
cp, err := b.db.FinalizedCheckpoint(b.ctx)
if err != nil {
return err
}
r := bytesutil.ToBytes32(cp.Root)
// Handle the edge case where the finalized root is all zeros instead of the genesis block root.
if r == params.BeaconConfig().ZeroHash {
genesisBlock, err := b.db.GenesisBlock(b.ctx)
if err != nil {
return err
}
if genesisBlock != nil && !genesisBlock.IsNil() {
r, err = genesisBlock.Block().HashTreeRoot()
if err != nil {
return err
}
}
}
b.finalizedStateAtStartUp, err = b.stateGen.StateByRoot(b.ctx, r)
if err != nil {
return err
}
return nil
}
func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
@@ -488,7 +528,7 @@ func (b *BeaconNode) registerBlockchainService() error {
// skipcq: CRT-D0001
opts := append(
b.blockchainFlagOpts,
b.serviceFlagOpts.blockchainFlagOpts,
blockchain.WithDatabase(b.db),
blockchain.WithDepositCache(b.depositCache),
blockchain.WithChainStartFetcher(web3Service),
@@ -501,6 +541,7 @@ func (b *BeaconNode) registerBlockchainService() error {
blockchain.WithAttestationService(attService),
blockchain.WithStateGen(b.stateGen),
blockchain.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
blockchain.WithFinalizedStateAtStartUp(b.finalizedStateAtStartUp),
)
blockchainService, err := blockchain.NewService(b.ctx, opts...)
if err != nil {
@@ -513,29 +554,27 @@ func (b *BeaconNode) registerPOWChainService() error {
if b.cliCtx.Bool(testSkipPowFlag) {
return b.services.RegisterService(&powchain.Service{})
}
depAddress, endpoints, err := registration.PowchainPreregistration(b.cliCtx)
if err != nil {
return err
}
bs, err := powchain.NewPowchainCollector(b.ctx)
if err != nil {
return err
}
cfg := &powchain.Web3ServiceConfig{
HttpEndpoints: endpoints,
DepositContract: common.HexToAddress(depAddress),
BeaconDB: b.db,
DepositCache: b.depositCache,
StateNotifier: b,
StateGen: b.stateGen,
Eth1HeaderReqLimit: b.cliCtx.Uint64(flags.Eth1HeaderReqLimit.Name),
BeaconNodeStatsUpdater: bs,
depositContractAddr, err := powchain.DepositContractAddress()
if err != nil {
return err
}
web3Service, err := powchain.NewService(b.ctx, cfg)
// skipcq: CRT-D0001
opts := append(
b.serviceFlagOpts.powchainFlagOpts,
powchain.WithDepositContractAddress(common.HexToAddress(depositContractAddr)),
powchain.WithDatabase(b.db),
powchain.WithDepositCache(b.depositCache),
powchain.WithStateNotifier(b),
powchain.WithStateGen(b.stateGen),
powchain.WithBeaconNodeStatsUpdater(bs),
powchain.WithFinalizedStateAtStartup(b.finalizedStateAtStartUp),
)
web3Service, err := powchain.NewService(b.ctx, opts...)
if err != nil {
return errors.Wrap(err, "could not register proof-of-work chain web3Service")
}
@@ -559,24 +598,24 @@ func (b *BeaconNode) registerSyncService() error {
return err
}
rs := regularsync.NewService(b.ctx, &regularsync.Config{
DB: b.db,
P2P: b.fetchP2P(),
Chain: chainService,
InitialSync: initSync,
StateNotifier: b,
BlockNotifier: b,
AttestationNotifier: b,
OperationNotifier: b,
AttPool: b.attestationPool,
ExitPool: b.exitPool,
SlashingPool: b.slashingsPool,
SyncCommsPool: b.syncCommitteePool,
StateGen: b.stateGen,
SlasherAttestationsFeed: b.slasherAttestationsFeed,
SlasherBlockHeadersFeed: b.slasherBlockHeadersFeed,
})
rs := regularsync.NewService(
b.ctx,
regularsync.WithDatabase(b.db),
regularsync.WithP2P(b.fetchP2P()),
regularsync.WithChainService(chainService),
regularsync.WithInitialSync(initSync),
regularsync.WithStateNotifier(b),
regularsync.WithBlockNotifier(b),
regularsync.WithAttestationNotifier(b),
regularsync.WithOperationNotifier(b),
regularsync.WithAttestationPool(b.attestationPool),
regularsync.WithExitPool(b.exitPool),
regularsync.WithSlashingPool(b.slashingsPool),
regularsync.WithSyncCommsPool(b.syncCommitteePool),
regularsync.WithStateGen(b.stateGen),
regularsync.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
regularsync.WithSlasherBlockHeadersFeed(b.slasherBlockHeadersFeed),
)
return b.services.RegisterService(rs)
}
@@ -801,7 +840,7 @@ func (b *BeaconNode) registerGRPCGateway() error {
return b.services.RegisterService(g)
}
func (b *BeaconNode) registerInteropServices() error {
func (b *BeaconNode) registerDeterministicGenesisService() error {
genesisTime := b.cliCtx.Uint64(flags.InteropGenesisTimeFlag.Name)
genesisValidators := b.cliCtx.Uint64(flags.InteropNumValidatorsFlag.Name)
genesisStatePath := b.cliCtx.String(flags.InteropGenesisStateFlag.Name)
@@ -815,6 +854,14 @@ func (b *BeaconNode) registerInteropServices() error {
GenesisPath: genesisStatePath,
})
// Register the genesis state as the start-up state when in interop mode.
// The start-up state gets reused across services.
st, err := b.db.GenesisState(b.ctx)
if err != nil {
return err
}
b.finalizedStateAtStartUp = st
return b.services.RegisterService(svc)
}
return nil


@@ -1,6 +1,9 @@
package node
import "github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
import (
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
)
// Option for beacon node configuration.
type Option func(bn *BeaconNode) error
@@ -8,7 +11,15 @@ type Option func(bn *BeaconNode) error
// WithBlockchainFlagOptions includes functional options for the blockchain service related to CLI flags.
func WithBlockchainFlagOptions(opts []blockchain.Option) Option {
return func(bn *BeaconNode) error {
bn.blockchainFlagOpts = opts
bn.serviceFlagOpts.blockchainFlagOpts = opts
return nil
}
}
// WithPowchainFlagOptions includes functional options for the powchain service related to CLI flags.
func WithPowchainFlagOptions(opts []powchain.Option) Option {
return func(bn *BeaconNode) error {
bn.serviceFlagOpts.powchainFlagOpts = opts
return nil
}
}
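
With both option helpers in place, callers compose them when constructing the node via the New signature shown earlier in this diff. A hedged usage sketch; the option slices would normally be derived from CLI flags, which is elided here:

package main

import (
	"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
	"github.com/prysmaticlabs/prysm/beacon-chain/node"
	"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
	"github.com/urfave/cli/v2"
)

func buildNode(cliCtx *cli.Context) (*node.BeaconNode, error) {
	// Hypothetical: these slices would be populated from parsed CLI flags.
	var blockchainOpts []blockchain.Option
	var powchainOpts []powchain.Option
	return node.New(cliCtx,
		node.WithBlockchainFlagOptions(blockchainOpts),
		node.WithPowchainFlagOptions(powchainOpts),
	)
}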


@@ -5,15 +5,12 @@ go_library(
srcs = [
"log.go",
"p2p.go",
"powchain.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/node/registration",
visibility = ["//beacon-chain/node:__subpackages__"],
deps = [
"//cmd:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/params:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
"@in_gopkg_yaml_v2//:go_default_library",
@@ -22,18 +19,13 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"p2p_test.go",
"powchain_test.go",
],
srcs = ["p2p_test.go"],
embed = [":go_default_library"],
deps = [
"//cmd:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/params:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],
)


@@ -1,45 +0,0 @@
package registration
import (
"errors"
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/urfave/cli/v2"
)
// PowchainPreregistration prepares data for powchain.Service's registration.
func PowchainPreregistration(cliCtx *cli.Context) (depositContractAddress string, endpoints []string, err error) {
depositContractAddress, err = DepositContractAddress()
if err != nil {
return "", nil, err
}
if cliCtx.String(flags.HTTPWeb3ProviderFlag.Name) == "" && len(cliCtx.StringSlice(flags.FallbackWeb3ProviderFlag.Name)) == 0 {
log.Error(
"No ETH1 node specified to run with the beacon node. Please consider running your own Ethereum proof-of-work node for better uptime, security, and decentralization of Ethereum. Visit https://docs.prylabs.network/docs/prysm-usage/setup-eth1 for more information.",
)
log.Error(
"You will need to specify --http-web3provider and/or --fallback-web3provider to attach an eth1 node to the prysm node. Without an eth1 node block proposals for your validator will be affected and the beacon node will not be able to initialize the genesis state.",
)
}
endpoints = []string{cliCtx.String(flags.HTTPWeb3ProviderFlag.Name)}
endpoints = append(endpoints, cliCtx.StringSlice(flags.FallbackWeb3ProviderFlag.Name)...)
return
}
// DepositContractAddress returns the address of the deposit contract.
func DepositContractAddress() (string, error) {
address := params.BeaconConfig().DepositContractAddress
if address == "" {
return "", errors.New("valid deposit contract is required")
}
if !common.IsHexAddress(address) {
return "", errors.New("invalid deposit contract address given: " + address)
}
return address, nil
}


@@ -1,71 +0,0 @@
package registration
import (
"flag"
"testing"
"github.com/prysmaticlabs/prysm/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli/v2"
)
func TestPowchainPreregistration(t *testing.T) {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.String(flags.HTTPWeb3ProviderFlag.Name, "primary", "")
fallback := cli.StringSlice{}
err := fallback.Set("fallback1")
require.NoError(t, err)
err = fallback.Set("fallback2")
require.NoError(t, err)
set.Var(&fallback, flags.FallbackWeb3ProviderFlag.Name, "")
ctx := cli.NewContext(&app, set, nil)
address, endpoints, err := PowchainPreregistration(ctx)
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().DepositContractAddress, address)
assert.DeepEqual(t, []string{"primary", "fallback1", "fallback2"}, endpoints)
}
func TestPowchainPreregistration_EmptyWeb3Provider(t *testing.T) {
hook := logTest.NewGlobal()
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.String(flags.HTTPWeb3ProviderFlag.Name, "", "")
fallback := cli.StringSlice{}
set.Var(&fallback, flags.FallbackWeb3ProviderFlag.Name, "")
ctx := cli.NewContext(&app, set, nil)
_, _, err := PowchainPreregistration(ctx)
require.NoError(t, err)
assert.LogsContain(t, hook, "No ETH1 node specified to run with the beacon node")
}
func TestDepositContractAddress_Ok(t *testing.T) {
address, err := DepositContractAddress()
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().DepositContractAddress, address)
}
func TestDepositContractAddress_EmptyAddress(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.DepositContractAddress = ""
params.OverrideBeaconConfig(config)
_, err := DepositContractAddress()
assert.ErrorContains(t, "valid deposit contract is required", err)
}
func TestDepositContractAddress_NotHexAddress(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.DepositContractAddress = "abc?!"
params.OverrideBeaconConfig(config)
_, err := DepositContractAddress()
assert.ErrorContains(t, "invalid deposit contract address given", err)
}


@@ -103,13 +103,13 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
case strings.Contains(topic, GossipBlockMessage):
return defaultBlockTopicParams(), nil
case strings.Contains(topic, GossipAggregateAndProofMessage):
return defaultAggregateTopicParams(activeValidators)
return defaultAggregateTopicParams(activeValidators), nil
case strings.Contains(topic, GossipAttestationMessage):
return defaultAggregateSubnetTopicParams(activeValidators)
return defaultAggregateSubnetTopicParams(activeValidators), nil
case strings.Contains(topic, GossipSyncCommitteeMessage):
return defaultSyncSubnetTopicParams(activeValidators)
return defaultSyncSubnetTopicParams(activeValidators), nil
case strings.Contains(topic, GossipContributionAndProofMessage):
return defaultSyncContributionTopicParams()
return defaultSyncContributionTopicParams(), nil
case strings.Contains(topic, GossipExitMessage):
return defaultVoluntaryExitTopicParams(), nil
case strings.Contains(topic, GossipProposerSlashingMessage):
@@ -191,19 +191,19 @@ func defaultBlockTopicParams() *pubsub.TopicScoreParams {
}
}
func defaultAggregateTopicParams(activeValidators uint64) (*pubsub.TopicScoreParams, error) {
func defaultAggregateTopicParams(activeValidators uint64) *pubsub.TopicScoreParams {
// Determine the expected message rate for the particular gossip topic.
aggPerSlot := aggregatorsPerSlot(activeValidators)
firstMessageCap, err := decayLimit(scoreDecay(1*oneEpochDuration()), float64(aggPerSlot*2/gossipSubD))
if err != nil {
log.Warnf("skipping initializing topic scoring: %v", err)
return nil, nil
return nil
}
firstMessageWeight := maxFirstDeliveryScore / firstMessageCap
meshThreshold, err := decayThreshold(scoreDecay(1*oneEpochDuration()), float64(aggPerSlot)/dampeningFactor)
if err != nil {
log.Warnf("skipping initializing topic scoring: %v", err)
return nil, nil
return nil
}
meshWeight := -scoreByWeight(aggregateWeight, meshThreshold)
meshCap := 4 * meshThreshold
@@ -230,22 +230,22 @@ func defaultAggregateTopicParams(activeValidators uint64) (*pubsub.TopicScorePar
MeshFailurePenaltyDecay: scoreDecay(1 * oneEpochDuration()),
InvalidMessageDeliveriesWeight: -maxScore() / aggregateWeight,
InvalidMessageDeliveriesDecay: scoreDecay(invalidDecayPeriod),
}, nil
}
}
func defaultSyncContributionTopicParams() (*pubsub.TopicScoreParams, error) {
func defaultSyncContributionTopicParams() *pubsub.TopicScoreParams {
// Determine the expected message rate for the particular gossip topic.
aggPerSlot := params.BeaconConfig().SyncCommitteeSubnetCount * params.BeaconConfig().TargetAggregatorsPerSyncSubcommittee
firstMessageCap, err := decayLimit(scoreDecay(1*oneEpochDuration()), float64(aggPerSlot*2/gossipSubD))
if err != nil {
log.Warnf("skipping initializing topic scoring: %v", err)
return nil, nil
return nil
}
firstMessageWeight := maxFirstDeliveryScore / firstMessageCap
meshThreshold, err := decayThreshold(scoreDecay(1*oneEpochDuration()), float64(aggPerSlot)/dampeningFactor)
if err != nil {
log.Warnf("skipping initializing topic scoring: %v", err)
return nil, nil
return nil
}
meshWeight := -scoreByWeight(syncContributionWeight, meshThreshold)
meshCap := 4 * meshThreshold
@@ -272,23 +272,23 @@ func defaultSyncContributionTopicParams() (*pubsub.TopicScoreParams, error) {
MeshFailurePenaltyDecay: scoreDecay(1 * oneEpochDuration()),
InvalidMessageDeliveriesWeight: -maxScore() / syncContributionWeight,
InvalidMessageDeliveriesDecay: scoreDecay(invalidDecayPeriod),
}, nil
}
}
func defaultAggregateSubnetTopicParams(activeValidators uint64) (*pubsub.TopicScoreParams, error) {
func defaultAggregateSubnetTopicParams(activeValidators uint64) *pubsub.TopicScoreParams {
subnetCount := params.BeaconNetworkConfig().AttestationSubnetCount
// Get weight for each specific subnet.
topicWeight := attestationTotalWeight / float64(subnetCount)
subnetWeight := activeValidators / subnetCount
if subnetWeight == 0 {
log.Warn("Subnet weight is 0, skipping initializing topic scoring")
return nil, nil
return nil
}
// Determine the amount of validators expected in a subnet in a single slot.
numPerSlot := time.Duration(subnetWeight / uint64(params.BeaconConfig().SlotsPerEpoch))
if numPerSlot == 0 {
log.Warn("numPerSlot is 0, skipping initializing topic scoring")
return nil, nil
return nil
}
comsPerSlot := committeeCountPerSlot(activeValidators)
exceedsThreshold := comsPerSlot >= 2*subnetCount/uint64(params.BeaconConfig().SlotsPerEpoch)
@@ -301,20 +301,20 @@ func defaultAggregateSubnetTopicParams(activeValidators uint64) (*pubsub.TopicSc
rate := numPerSlot * 2 / gossipSubD
if rate == 0 {
log.Warn("rate is 0, skipping initializing topic scoring")
return nil, nil
return nil
}
// Determine expected first deliveries based on the message rate.
firstMessageCap, err := decayLimit(scoreDecay(firstDecay*oneEpochDuration()), float64(rate))
if err != nil {
log.Warnf("skipping initializing topic scoring: %v", err)
return nil, nil
return nil
}
firstMessageWeight := maxFirstDeliveryScore / firstMessageCap
// Determine expected mesh deliveries based on message rate applied with a dampening factor.
meshThreshold, err := decayThreshold(scoreDecay(meshDecay*oneEpochDuration()), float64(numPerSlot)/dampeningFactor)
if err != nil {
log.Warnf("skipping initializing topic scoring: %v", err)
return nil, nil
return nil
}
meshWeight := -scoreByWeight(topicWeight, meshThreshold)
meshCap := 4 * meshThreshold
@@ -341,10 +341,10 @@ func defaultAggregateSubnetTopicParams(activeValidators uint64) (*pubsub.TopicSc
MeshFailurePenaltyDecay: scoreDecay(meshDecay * oneEpochDuration()),
InvalidMessageDeliveriesWeight: -maxScore() / topicWeight,
InvalidMessageDeliveriesDecay: scoreDecay(invalidDecayPeriod),
}, nil
}
}
func defaultSyncSubnetTopicParams(activeValidators uint64) (*pubsub.TopicScoreParams, error) {
func defaultSyncSubnetTopicParams(activeValidators uint64) *pubsub.TopicScoreParams {
subnetCount := params.BeaconConfig().SyncCommitteeSubnetCount
// Get weight for each specific subnet.
topicWeight := syncCommitteesTotalWeight / float64(subnetCount)
@@ -356,7 +356,7 @@ func defaultSyncSubnetTopicParams(activeValidators uint64) (*pubsub.TopicScorePa
subnetWeight := activeValidators / subnetCount
if subnetWeight == 0 {
log.Warn("Subnet weight is 0, skipping initializing topic scoring")
return nil, nil
return nil
}
firstDecay := time.Duration(1)
meshDecay := time.Duration(4)
@@ -364,20 +364,20 @@ func defaultSyncSubnetTopicParams(activeValidators uint64) (*pubsub.TopicScorePa
rate := subnetWeight * 2 / gossipSubD
if rate == 0 {
log.Warn("rate is 0, skipping initializing topic scoring")
return nil, nil
return nil
}
// Determine expected first deliveries based on the message rate.
firstMessageCap, err := decayLimit(scoreDecay(firstDecay*oneEpochDuration()), float64(rate))
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
return nil, nil
return nil
}
firstMessageWeight := maxFirstDeliveryScore / firstMessageCap
// Determine expected mesh deliveries based on message rate applied with a dampening factor.
meshThreshold, err := decayThreshold(scoreDecay(meshDecay*oneEpochDuration()), float64(subnetWeight)/dampeningFactor)
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
return nil, nil
return nil
}
meshWeight := -scoreByWeight(topicWeight, meshThreshold)
meshCap := 4 * meshThreshold
@@ -404,7 +404,7 @@ func defaultSyncSubnetTopicParams(activeValidators uint64) (*pubsub.TopicScorePa
MeshFailurePenaltyDecay: scoreDecay(meshDecay * oneEpochDuration()),
InvalidMessageDeliveriesWeight: -maxScore() / topicWeight,
InvalidMessageDeliveriesDecay: scoreDecay(invalidDecayPeriod),
}, nil
}
}
func defaultAttesterSlashingTopicParams() *pubsub.TopicScoreParams {
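
This file's change drops the error from the default*TopicParams constructors: where they previously returned (nil, nil) on a soft failure, they now return a bare nil and topicScoreParams supplies the nil error itself. Callers therefore have to treat a nil *pubsub.TopicScoreParams as "register the topic without scoring" rather than as a fault. A minimal sketch of that contract (toy function under that assumption, not the Prysm code):

package main

import (
	"fmt"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// topicParams mimics the new contract: nil with no error means
// "scoring intentionally skipped", e.g. when there are no active validators.
func topicParams(activeValidators uint64) *pubsub.TopicScoreParams {
	if activeValidators == 0 {
		return nil
	}
	return &pubsub.TopicScoreParams{}
}

func main() {
	if p := topicParams(0); p == nil {
		fmt.Println("nil params: join the topic unscored instead of failing")
	}
}
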


@@ -68,11 +68,9 @@ func TestLoggingParameters(t *testing.T) {
logGossipParameters("testing", &pubsub.TopicScoreParams{})
// Test out actual gossip parameters.
logGossipParameters("testing", defaultBlockTopicParams())
p, err := defaultAggregateSubnetTopicParams(10000)
assert.NoError(t, err)
p := defaultAggregateSubnetTopicParams(10000)
logGossipParameters("testing", p)
p, err = defaultAggregateTopicParams(10000)
assert.NoError(t, err)
p = defaultAggregateTopicParams(10000)
logGossipParameters("testing", p)
logGossipParameters("testing", defaultAttesterSlashingTopicParams())
logGossipParameters("testing", defaultProposerSlashingTopicParams())


@@ -183,8 +183,16 @@ func (s *Service) loop(ctx context.Context) {
for {
select {
case <-decayBadResponsesStats.C:
// Exit early if context is canceled.
if ctx.Err() != nil {
return
}
s.scorers.badResponsesScorer.Decay()
case <-decayBlockProviderStats.C:
// Exit early if context is canceled.
if ctx.Err() != nil {
return
}
s.scorers.blockProviderScorer.Decay()
case <-ctx.Done():
return
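
The guards added above matter because Go's select chooses uniformly among ready cases: a ticker that fired around the same time as cancellation can win over <-ctx.Done(), so re-checking ctx.Err() inside each ticker case guarantees a prompt exit. A runnable sketch of the pattern (standalone toy, not the peer-scorer code):

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	tick := time.NewTicker(10 * time.Millisecond)
	defer tick.Stop()
	cancel()                           // cancel up front to exercise the early-exit path
	time.Sleep(15 * time.Millisecond)  // let the ticker fire so both cases are ready

	for {
		select {
		case <-tick.C:
			// select may pick a ready ticker over ctx.Done(), so re-check
			// cancellation before doing periodic work.
			if ctx.Err() != nil {
				fmt.Println("ticker case: exiting on canceled context")
				return
			}
			// ... periodic decay work would go here ...
		case <-ctx.Done():
			fmt.Println("ctx.Done: exiting")
			return
		}
	}
}
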


@@ -8,6 +8,7 @@ go_library(
"deposit.go",
"log.go",
"log_processing.go",
"options.go",
"prometheus.go",
"provider.go",
"service.go",
@@ -15,6 +16,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/powchain",
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/beacon-chain:__subpackages__",
"//contracts:__subpackages__",
],
deps = [
@@ -94,7 +96,6 @@ go_test(
"//network:go_default_library",
"//network/authorization:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/wrapper:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",


@@ -22,7 +22,7 @@ var endpoint = "http://127.0.0.1"
func setDefaultMocks(service *Service) *Service {
service.eth1DataFetcher = &goodFetcher{}
service.httpLogger = &goodLogger{}
service.cfg.StateNotifier = &goodNotifier{}
service.cfg.stateNotifier = &goodNotifier{}
return service
}
@@ -31,11 +31,11 @@ func TestLatestMainchainInfo_OK(t *testing.T) {
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err, "Unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -69,10 +69,10 @@ func TestLatestMainchainInfo_OK(t *testing.T) {
func TestBlockHashByHeight_ReturnsHash(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -96,10 +96,10 @@ func TestBlockHashByHeight_ReturnsHash(t *testing.T) {
func TestBlockHashByHeight_ReturnsError_WhenNoEth1Client(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -112,10 +112,10 @@ func TestBlockHashByHeight_ReturnsError_WhenNoEth1Client(t *testing.T) {
func TestBlockExists_ValidHash(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -143,10 +143,10 @@ func TestBlockExists_ValidHash(t *testing.T) {
func TestBlockExists_InvalidHash(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -157,10 +157,10 @@ func TestBlockExists_InvalidHash(t *testing.T) {
func TestBlockExists_UsesCachedBlockInfo(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
// nil eth1DataFetcher would panic if cached value not used
web3Service.eth1DataFetcher = nil
@@ -180,10 +180,10 @@ func TestBlockExists_UsesCachedBlockInfo(t *testing.T) {
func TestBlockExistsWithCache_UsesCachedHeaderInfo(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
header := &gethTypes.Header{
@@ -201,10 +201,10 @@ func TestBlockExistsWithCache_UsesCachedHeaderInfo(t *testing.T) {
func TestBlockExistsWithCache_HeaderNotCached(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
exists, height, err := web3Service.BlockExistsWithCache(context.Background(), common.BytesToHash([]byte("hash")))
@@ -217,10 +217,10 @@ func TestService_BlockNumberByTimestamp(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err)
web3Service = setDefaultMocks(web3Service)
web3Service.eth1DataFetcher = &goodFetcher{backend: testAcc.Backend}
@@ -244,10 +244,10 @@ func TestService_BlockNumberByTimestampLessTargetTime(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err)
web3Service = setDefaultMocks(web3Service)
web3Service.eth1DataFetcher = &goodFetcher{backend: testAcc.Backend}
@@ -277,10 +277,10 @@ func TestService_BlockNumberByTimestampMoreTargetTime(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err)
web3Service = setDefaultMocks(web3Service)
web3Service.eth1DataFetcher = &goodFetcher{backend: testAcc.Backend}
@@ -308,10 +308,10 @@ func TestService_BlockNumberByTimestampMoreTargetTime(t *testing.T) {
func TestService_BlockTimeByHeight_ReturnsError_WhenNoEth1Client(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)


@@ -3,11 +3,26 @@ package powchain
import (
"context"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
// DepositContractAddress returns the deposit contract address for the given chain.
func DepositContractAddress() (string, error) {
address := params.BeaconConfig().DepositContractAddress
if address == "" {
return "", errors.New("valid deposit contract is required")
}
if !common.IsHexAddress(address) {
return "", errors.New("invalid deposit contract address given: " + address)
}
return address, nil
}
func (s *Service) processDeposit(ctx context.Context, eth1Data *ethpb.Eth1Data, deposit *ethpb.Deposit) error {
var err error
if err := s.preGenesisState.SetEth1Data(eth1Data); err != nil {


@@ -22,12 +22,39 @@ import (
const pubKeyErr = "could not convert bytes to public key"
func TestDepositContractAddress_EmptyAddress(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.DepositContractAddress = ""
params.OverrideBeaconConfig(config)
_, err := DepositContractAddress()
assert.ErrorContains(t, "valid deposit contract is required", err)
}
func TestDepositContractAddress_NotHexAddress(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.DepositContractAddress = "abc?!"
params.OverrideBeaconConfig(config)
_, err := DepositContractAddress()
assert.ErrorContains(t, "invalid deposit contract address given", err)
}
func TestDepositContractAddress_OK(t *testing.T) {
params.SetupTestConfigCleanup(t)
addr, err := DepositContractAddress()
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().DepositContractAddress, addr)
}
func TestProcessDeposit_OK(t *testing.T) {
beaconDB := testDB.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "Unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -48,10 +75,10 @@ func TestProcessDeposit_OK(t *testing.T) {
func TestProcessDeposit_InvalidMerkleBranch(t *testing.T) {
beaconDB := testDB.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -74,10 +101,10 @@ func TestProcessDeposit_InvalidMerkleBranch(t *testing.T) {
func TestProcessDeposit_InvalidPublicKey(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := testDB.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -110,10 +137,10 @@ func TestProcessDeposit_InvalidPublicKey(t *testing.T) {
func TestProcessDeposit_InvalidSignature(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := testDB.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -145,10 +172,10 @@ func TestProcessDeposit_InvalidSignature(t *testing.T) {
func TestProcessDeposit_UnableToVerify(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := testDB.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
@@ -177,10 +204,10 @@ func TestProcessDeposit_UnableToVerify(t *testing.T) {
func TestProcessDeposit_IncompleteDeposit(t *testing.T) {
beaconDB := testDB.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
require.NoError(t, web3Service.preGenesisState.SetValidators([]*ethpb.Validator{}))
@@ -239,10 +266,10 @@ func TestProcessDeposit_IncompleteDeposit(t *testing.T) {
func TestProcessDeposit_AllDepositedSuccessfully(t *testing.T) {
beaconDB := testDB.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)


@@ -52,7 +52,7 @@ func (s *Service) Eth2GenesisPowchainInfo() (uint64, *big.Int) {
func (s *Service) ProcessETH1Block(ctx context.Context, blkNum *big.Int) error {
query := ethereum.FilterQuery{
Addresses: []common.Address{
s.cfg.DepositContract,
s.cfg.depositContractAddr,
},
FromBlock: blkNum,
ToBlock: blkNum,
@@ -155,7 +155,7 @@ func (s *Service) ProcessDepositLog(ctx context.Context, depositLog gethTypes.Lo
}
// We always store all historical deposits in the DB.
err = s.cfg.DepositCache.InsertDeposit(ctx, deposit, depositLog.BlockNumber, index, s.depositTrie.HashTreeRoot())
err = s.cfg.depositCache.InsertDeposit(ctx, deposit, depositLog.BlockNumber, index, s.depositTrie.HashTreeRoot())
if err != nil {
return errors.Wrap(err, "unable to insert deposit into cache")
}
@@ -172,7 +172,7 @@ func (s *Service) ProcessDepositLog(ctx context.Context, depositLog gethTypes.Lo
validData = false
}
} else {
s.cfg.DepositCache.InsertPendingDeposit(ctx, deposit, depositLog.BlockNumber, index, s.depositTrie.HashTreeRoot())
s.cfg.depositCache.InsertPendingDeposit(ctx, deposit, depositLog.BlockNumber, index, s.depositTrie.HashTreeRoot())
}
if validData {
log.WithFields(logrus.Fields{
@@ -231,7 +231,7 @@ func (s *Service) ProcessChainStart(genesisTime uint64, eth1BlockHash [32]byte,
log.WithFields(logrus.Fields{
"ChainStartTime": chainStartTime,
}).Info("Minimum number of validators reached for beacon-chain to start")
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
s.cfg.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.ChainStarted,
Data: &statefeed.ChainStartedData{
StartTime: chainStartTime,
@@ -288,7 +288,7 @@ func (s *Service) processPastLogs(ctx context.Context) error {
return err
}
batchSize := s.cfg.Eth1HeaderReqLimit
batchSize := s.cfg.eth1HeaderReqLimit
additiveFactor := uint64(float64(batchSize) * additiveFactorMultiplier)
for currentBlockNum < latestFollowHeight {
@@ -301,7 +301,7 @@ func (s *Service) processPastLogs(ctx context.Context) error {
}
query := ethereum.FilterQuery{
Addresses: []common.Address{
s.cfg.DepositContract,
s.cfg.depositContractAddr,
},
FromBlock: big.NewInt(int64(start)),
ToBlock: big.NewInt(int64(end)),
@@ -354,18 +354,18 @@ func (s *Service) processPastLogs(ctx context.Context) error {
}
currentBlockNum = end
if batchSize < s.cfg.Eth1HeaderReqLimit {
if batchSize < s.cfg.eth1HeaderReqLimit {
// update the batchSize with additive increase
batchSize += additiveFactor
if batchSize > s.cfg.Eth1HeaderReqLimit {
batchSize = s.cfg.Eth1HeaderReqLimit
if batchSize > s.cfg.eth1HeaderReqLimit {
batchSize = s.cfg.eth1HeaderReqLimit
}
}
}
s.latestEth1Data.LastRequestedBlock = currentBlockNum
c, err := s.cfg.BeaconDB.FinalizedCheckpoint(ctx)
c, err := s.cfg.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
return err
}
@@ -374,12 +374,12 @@ func (s *Service) processPastLogs(ctx context.Context) error {
if fRoot == params.BeaconConfig().ZeroHash {
return nil
}
fState, err := s.cfg.StateGen.StateByRoot(ctx, fRoot)
fState, err := s.cfg.stateGen.StateByRoot(ctx, fRoot)
if err != nil {
return err
}
if fState != nil && !fState.IsNil() && fState.Eth1DepositIndex() > 0 {
s.cfg.DepositCache.PrunePendingDeposits(ctx, int64(fState.Eth1DepositIndex()))
s.cfg.depositCache.PrunePendingDeposits(ctx, int64(fState.Eth1DepositIndex()))
}
return nil
}
@@ -500,7 +500,7 @@ func (s *Service) savePowchainData(ctx context.Context) error {
ChainstartData: s.chainStartData,
BeaconState: pbState, // I promise not to mutate it!
Trie: s.depositTrie.ToProto(),
DepositContainers: s.cfg.DepositCache.AllDepositContainers(ctx),
DepositContainers: s.cfg.depositCache.AllDepositContainers(ctx),
}
return s.cfg.BeaconDB.SavePowchainData(ctx, eth1Data)
return s.cfg.beaconDB.SavePowchainData(ctx, eth1Data)
}


@@ -33,12 +33,12 @@ func TestProcessDepositLog_OK(t *testing.T) {
depositCache, err := depositcache.New()
require.NoError(t, err)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
DepositCache: depositCache,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
WithDepositCache(depositCache),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
@@ -62,7 +62,7 @@ func TestProcessDepositLog_OK(t *testing.T) {
query := ethereum.FilterQuery{
Addresses: []common.Address{
web3Service.cfg.DepositContract,
web3Service.cfg.depositContractAddr,
},
}
@@ -97,12 +97,12 @@ func TestProcessDepositLog_InsertsPendingDeposit(t *testing.T) {
depositCache, err := depositcache.New()
require.NoError(t, err)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
DepositCache: depositCache,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
WithDepositCache(depositCache),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
@@ -129,7 +129,7 @@ func TestProcessDepositLog_InsertsPendingDeposit(t *testing.T) {
query := ethereum.FilterQuery{
Addresses: []common.Address{
web3Service.cfg.DepositContract,
web3Service.cfg.depositContractAddr,
},
}
@@ -143,7 +143,7 @@ func TestProcessDepositLog_InsertsPendingDeposit(t *testing.T) {
err = web3Service.ProcessDepositLog(context.Background(), logs[1])
require.NoError(t, err)
pendingDeposits := web3Service.cfg.DepositCache.PendingDeposits(context.Background(), nil /*blockNum*/)
pendingDeposits := web3Service.cfg.depositCache.PendingDeposits(context.Background(), nil /*blockNum*/)
require.Equal(t, 2, len(pendingDeposits), "Unexpected number of deposits")
hook.Reset()
@@ -153,11 +153,11 @@ func TestUnpackDepositLogData_OK(t *testing.T) {
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := testDB.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
DepositContract: testAcc.ContractAddr,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
@@ -179,7 +179,7 @@ func TestUnpackDepositLogData_OK(t *testing.T) {
query := ethereum.FilterQuery{
Addresses: []common.Address{
web3Service.cfg.DepositContract,
web3Service.cfg.depositContractAddr,
},
}
@@ -203,12 +203,12 @@ func TestProcessETH2GenesisLog_8DuplicatePubkeys(t *testing.T) {
depositCache, err := depositcache.New()
require.NoError(t, err)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
DepositCache: depositCache,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
WithDepositCache(depositCache),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
@@ -244,7 +244,7 @@ func TestProcessETH2GenesisLog_8DuplicatePubkeys(t *testing.T) {
query := ethereum.FilterQuery{
Addresses: []common.Address{
web3Service.cfg.DepositContract,
web3Service.cfg.depositContractAddr,
},
}
@@ -273,12 +273,12 @@ func TestProcessETH2GenesisLog(t *testing.T) {
depositCache, err := depositcache.New()
require.NoError(t, err)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
DepositCache: depositCache,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
WithDepositCache(depositCache),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
@@ -311,7 +311,7 @@ func TestProcessETH2GenesisLog(t *testing.T) {
query := ethereum.FilterQuery{
Addresses: []common.Address{
web3Service.cfg.DepositContract,
web3Service.cfg.depositContractAddr,
},
}
@@ -321,7 +321,7 @@ func TestProcessETH2GenesisLog(t *testing.T) {
// Set up our subscriber now to listen for the chain started event.
stateChannel := make(chan *feed.Event, 1)
stateSub := web3Service.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
stateSub := web3Service.cfg.stateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
for _, log := range logs {
@@ -359,12 +359,12 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
depositCache, err := depositcache.New()
require.NoError(t, err)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: kvStore,
DepositCache: depositCache,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(kvStore),
WithDepositCache(depositCache),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
@@ -418,7 +418,7 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
// Set up our subscriber now to listen for the chain started event.
stateChannel := make(chan *feed.Event, 1)
stateSub := web3Service.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
stateSub := web3Service.cfg.stateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
err = web3Service.processPastLogs(context.Background())
@@ -452,12 +452,12 @@ func TestProcessETH2GenesisLog_LargePeriodOfNoLogs(t *testing.T) {
depositCache, err := depositcache.New()
require.NoError(t, err)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: kvStore,
DepositCache: depositCache,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(kvStore),
WithDepositCache(depositCache),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
@@ -522,7 +522,7 @@ func TestProcessETH2GenesisLog_LargePeriodOfNoLogs(t *testing.T) {
// Set up our subscriber now to listen for the chain started event.
stateChannel := make(chan *feed.Event, 1)
stateSub := web3Service.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
stateSub := web3Service.cfg.stateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
err = web3Service.processPastLogs(context.Background())
@@ -560,13 +560,12 @@ func TestCheckForChainstart_NoValidator(t *testing.T) {
func newPowchainService(t *testing.T, eth1Backend *contracts.TestAccount, beaconDB db.Database) *Service {
depositCache, err := depositcache.New()
require.NoError(t, err)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: eth1Backend.ContractAddr,
BeaconDB: beaconDB,
DepositCache: depositCache,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(eth1Backend.ContractAddr),
WithDatabase(beaconDB),
WithDepositCache(depositCache),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(eth1Backend.ContractAddr, eth1Backend.Backend)


@@ -0,0 +1,97 @@
package powchain

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
	statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
	"github.com/prysmaticlabs/prysm/beacon-chain/db"
	"github.com/prysmaticlabs/prysm/beacon-chain/state"
	"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
	"github.com/prysmaticlabs/prysm/network"
)

type Option func(s *Service) error

// WithHttpEndpoints deduplicates and parses http endpoints for the powchain service to use,
// and sets the "current" endpoint that will be used first.
func WithHttpEndpoints(endpointStrings []string) Option {
	return func(s *Service) error {
		stringEndpoints := dedupEndpoints(endpointStrings)
		endpoints := make([]network.Endpoint, len(stringEndpoints))
		for i, e := range stringEndpoints {
			endpoints[i] = HttpEndpoint(e)
		}
		// Select first http endpoint in the provided list.
		var currEndpoint network.Endpoint
		if len(endpointStrings) > 0 {
			currEndpoint = endpoints[0]
		}
		s.cfg.httpEndpoints = endpoints
		s.cfg.currHttpEndpoint = currEndpoint
		return nil
	}
}

// WithDepositContractAddress for the deposit contract.
func WithDepositContractAddress(addr common.Address) Option {
	return func(s *Service) error {
		s.cfg.depositContractAddr = addr
		return nil
	}
}

// WithDatabase for the beacon chain database.
func WithDatabase(database db.HeadAccessDatabase) Option {
	return func(s *Service) error {
		s.cfg.beaconDB = database
		return nil
	}
}

// WithDepositCache for caching deposits.
func WithDepositCache(cache *depositcache.DepositCache) Option {
	return func(s *Service) error {
		s.cfg.depositCache = cache
		return nil
	}
}

// WithStateNotifier for subscribing to state changes.
func WithStateNotifier(notifier statefeed.Notifier) Option {
	return func(s *Service) error {
		s.cfg.stateNotifier = notifier
		return nil
	}
}

// WithStateGen to regenerate beacon states from checkpoints.
func WithStateGen(gen *stategen.State) Option {
	return func(s *Service) error {
		s.cfg.stateGen = gen
		return nil
	}
}

// WithEth1HeaderRequestLimit to set the upper limit of eth1 header requests.
func WithEth1HeaderRequestLimit(limit uint64) Option {
	return func(s *Service) error {
		s.cfg.eth1HeaderReqLimit = limit
		return nil
	}
}

// WithBeaconNodeStatsUpdater to set the beacon node stats updater.
func WithBeaconNodeStatsUpdater(updater BeaconNodeStatsUpdater) Option {
	return func(s *Service) error {
		s.cfg.beaconNodeStatsUpdater = updater
		return nil
	}
}

// WithFinalizedStateAtStartup to set the beacon node's finalized state at startup.
func WithFinalizedStateAtStartup(st state.BeaconState) Option {
	return func(s *Service) error {
		s.cfg.finalizedStateAtStartup = st
		return nil
	}
}
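
This new options.go is the heart of the refactor: every former Web3ServiceConfig field becomes an Option closure that NewService applies after setting defaults. A self-contained toy sketch of the same functional-options pattern (illustrative names only, not Prysm code):

package main

import (
	"errors"
	"fmt"
)

// Option mutates a service under construction and may fail fast.
type Option func(s *service) error

type service struct {
	endpoints []string
	reqLimit  uint64
}

// WithEndpoints is a toy counterpart of WithHttpEndpoints.
func WithEndpoints(eps []string) Option {
	return func(s *service) error {
		if len(eps) == 0 {
			return errors.New("at least one endpoint required")
		}
		s.endpoints = eps
		return nil
	}
}

func newService(opts ...Option) (*service, error) {
	// Defaults first; options applied afterwards may override them.
	s := &service{reqLimit: 1000}
	for _, opt := range opts {
		if err := opt(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}

func main() {
	s, err := newService(WithEndpoints([]string{"http://127.0.0.1:8545"}))
	fmt.Println(s.endpoints, err)
}

A nice property of this shape, visible in the NewService hunk below, is that zero-value handling (the old "if config.Eth1HeaderReqLimit == 0" dance) disappears: defaults live in one place and options simply overwrite them.
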


@@ -120,6 +120,20 @@ type RPCClient interface {
BatchCall(b []gethRPC.BatchElem) error
}
// config defines a config struct for dependencies into the service.
type config struct {
depositContractAddr common.Address
beaconDB db.HeadAccessDatabase
depositCache *depositcache.DepositCache
stateNotifier statefeed.Notifier
stateGen *stategen.State
eth1HeaderReqLimit uint64
beaconNodeStatsUpdater BeaconNodeStatsUpdater
httpEndpoints []network.Endpoint
currHttpEndpoint network.Endpoint
finalizedStateAtStartup state.BeaconState
}
// Service fetches important information about the canonical
// Ethereum ETH1.0 chain via a web3 endpoint using an ethclient. The Random
// Beacon Chain requires synchronization with the ETH1.0 chain's current
@@ -130,12 +144,10 @@ type Service struct {
connectedETH1 bool
isRunning bool
processingLock sync.RWMutex
cfg *Web3ServiceConfig
cfg *config
ctx context.Context
cancel context.CancelFunc
headTicker *time.Ticker
httpEndpoints []network.Endpoint
currHttpEndpoint network.Endpoint
httpLogger bind.ContractFilterer
eth1DataFetcher RPCDataFetcher
rpcClient RPCClient
@@ -147,24 +159,10 @@ type Service struct {
lastReceivedMerkleIndex int64 // Keeps track of the last received index to prevent log spam.
runError error
preGenesisState state.BeaconState
bsUpdater BeaconNodeStatsUpdater
}
// Web3ServiceConfig defines a config struct for web3 service to use through its life cycle.
type Web3ServiceConfig struct {
HttpEndpoints []string
DepositContract common.Address
BeaconDB db.HeadAccessDatabase
DepositCache *depositcache.DepositCache
StateNotifier statefeed.Notifier
StateGen *stategen.State
Eth1HeaderReqLimit uint64
BeaconNodeStatsUpdater BeaconNodeStatsUpdater
}
// NewService sets up a new instance with an ethclient when
// given a web3 endpoint as a string in the config.
func NewService(ctx context.Context, config *Web3ServiceConfig) (*Service, error) {
// NewService sets up a new instance with an ethclient when given a web3 endpoint as a string in the config.
func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
_ = cancel // govet fix for lost cancel. Cancel is handled in service.Stop()
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
@@ -177,27 +175,13 @@ func NewService(ctx context.Context, config *Web3ServiceConfig) (*Service, error
return nil, errors.Wrap(err, "could not setup genesis state")
}
if config.Eth1HeaderReqLimit == 0 {
config.Eth1HeaderReqLimit = defaultEth1HeaderReqLimit
}
stringEndpoints := dedupEndpoints(config.HttpEndpoints)
endpoints := make([]network.Endpoint, len(stringEndpoints))
for i, e := range stringEndpoints {
endpoints[i] = HttpEndpoint(e)
}
// Select first http endpoint in the provided list.
var currEndpoint network.Endpoint
if len(config.HttpEndpoints) > 0 {
currEndpoint = endpoints[0]
}
s := &Service{
ctx: ctx,
cancel: cancel,
cfg: config,
httpEndpoints: endpoints,
currHttpEndpoint: currEndpoint,
ctx: ctx,
cancel: cancel,
cfg: &config{
beaconNodeStatsUpdater: &NopBeaconNodeStatsUpdater{},
eth1HeaderReqLimit: defaultEth1HeaderReqLimit,
},
latestEth1Data: &protodb.LatestETH1Data{
BlockHeight: 0,
BlockTime: 0,
@@ -213,26 +197,25 @@ func NewService(ctx context.Context, config *Web3ServiceConfig) (*Service, error
lastReceivedMerkleIndex: -1,
preGenesisState: genState,
headTicker: time.NewTicker(time.Duration(params.BeaconConfig().SecondsPerETH1Block) * time.Second),
// use the nop updater by default, rely on upstream set up to pass in an appropriate impl
bsUpdater: config.BeaconNodeStatsUpdater,
}
if config.BeaconNodeStatsUpdater == nil {
s.bsUpdater = &NopBeaconNodeStatsUpdater{}
for _, opt := range opts {
if err := opt(s); err != nil {
return nil, err
}
}
if err := s.ensureValidPowchainData(ctx); err != nil {
return nil, errors.Wrap(err, "unable to validate powchain data")
}
eth1Data, err := config.BeaconDB.PowchainData(ctx)
eth1Data, err := s.cfg.beaconDB.PowchainData(ctx)
if err != nil {
return nil, errors.Wrap(err, "unable to retrieve eth1 data")
}
if err := s.initializeEth1Data(ctx, eth1Data); err != nil {
return nil, err
}
return s, nil
}
@@ -240,10 +223,10 @@ func NewService(ctx context.Context, config *Web3ServiceConfig) (*Service, error
func (s *Service) Start() {
// If the chain has not started already and we don't have access to eth1 nodes, we will not be
// able to generate the genesis state.
if !s.chainStartData.Chainstarted && s.currHttpEndpoint.Url == "" {
if !s.chainStartData.Chainstarted && s.cfg.currHttpEndpoint.Url == "" {
// check for genesis state before shutting down the node,
// if a genesis state exists, we can continue on.
genState, err := s.cfg.BeaconDB.GenesisState(s.ctx)
genState, err := s.cfg.beaconDB.GenesisState(s.ctx)
if err != nil {
log.Fatal(err)
}
@@ -253,7 +236,7 @@ func (s *Service) Start() {
}
// Exit early if eth1 endpoint is not set.
if s.currHttpEndpoint.Url == "" {
if s.cfg.currHttpEndpoint.Url == "" {
return
}
go func() {
@@ -314,7 +297,7 @@ func (s *Service) Status() error {
func (s *Service) updateBeaconNodeStats() {
bs := clientstats.BeaconNodeStats{}
if len(s.httpEndpoints) > 1 {
if len(s.cfg.httpEndpoints) > 1 {
bs.SyncEth1FallbackConfigured = true
}
if s.IsConnectedToETH1() {
@@ -324,11 +307,11 @@ func (s *Service) updateBeaconNodeStats() {
bs.SyncEth1FallbackConnected = true
}
}
s.bsUpdater.Update(bs)
s.cfg.beaconNodeStatsUpdater.Update(bs)
}
func (s *Service) updateCurrHttpEndpoint(endpoint network.Endpoint) {
s.currHttpEndpoint = endpoint
s.cfg.currHttpEndpoint = endpoint
s.updateBeaconNodeStats()
}
@@ -374,7 +357,7 @@ func (s *Service) AreAllDepositsProcessed() (bool, error) {
return false, errors.Wrap(err, "could not get deposit count")
}
count := bytesutil.FromBytes8(countByte)
deposits := s.cfg.DepositCache.AllDeposits(s.ctx, nil)
deposits := s.cfg.depositCache.AllDeposits(s.ctx, nil)
if count != uint64(len(deposits)) {
return false, nil
}
@@ -392,12 +375,12 @@ func (s *Service) followBlockHeight(_ context.Context) (uint64, error) {
}
func (s *Service) connectToPowChain() error {
httpClient, rpcClient, err := s.dialETH1Nodes(s.currHttpEndpoint)
httpClient, rpcClient, err := s.dialETH1Nodes(s.cfg.currHttpEndpoint)
if err != nil {
return errors.Wrap(err, "could not dial eth1 nodes")
}
depositContractCaller, err := contracts.NewDepositContractCaller(s.cfg.DepositContract, httpClient)
depositContractCaller, err := contracts.NewDepositContractCaller(s.cfg.depositContractAddr, httpClient)
if err != nil {
return errors.Wrap(err, "could not create deposit contract caller")
}
@@ -493,7 +476,7 @@ func (s *Service) waitForConnection() {
s.updateConnectedETH1(true)
s.runError = nil
log.WithFields(logrus.Fields{
"endpoint": logs.MaskCredentialsLogging(s.currHttpEndpoint.Url),
"endpoint": logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url),
}).Info("Connected to eth1 proof-of-work chain")
return
}
@@ -522,7 +505,7 @@ func (s *Service) waitForConnection() {
for {
select {
case <-ticker.C:
log.Debugf("Trying to dial endpoint: %s", logs.MaskCredentialsLogging(s.currHttpEndpoint.Url))
log.Debugf("Trying to dial endpoint: %s", logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url))
errConnect := s.connectToPowChain()
if errConnect != nil {
errorLogger(errConnect, "Could not connect to powchain endpoint")
@@ -541,7 +524,7 @@ func (s *Service) waitForConnection() {
s.updateConnectedETH1(true)
s.runError = nil
log.WithFields(logrus.Fields{
"endpoint": logs.MaskCredentialsLogging(s.currHttpEndpoint.Url),
"endpoint": logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url),
}).Info("Connected to eth1 proof-of-work chain")
return
}
@@ -587,32 +570,29 @@ func (s *Service) initDepositCaches(ctx context.Context, ctrs []*protodb.Deposit
if len(ctrs) == 0 {
return nil
}
s.cfg.DepositCache.InsertDepositContainers(ctx, ctrs)
s.cfg.depositCache.InsertDepositContainers(ctx, ctrs)
if !s.chainStartData.Chainstarted {
// do not add to pending cache
// if no genesis state exists.
validDepositsCount.Add(float64(s.preGenesisState.Eth1DepositIndex()))
return nil
}
genesisState, err := s.cfg.BeaconDB.GenesisState(ctx)
genesisState, err := s.cfg.beaconDB.GenesisState(ctx)
if err != nil {
return err
}
// Default to all deposits post-genesis deposits in
// the event we cannot find a finalized state.
currIndex := genesisState.Eth1DepositIndex()
chkPt, err := s.cfg.BeaconDB.FinalizedCheckpoint(ctx)
chkPt, err := s.cfg.beaconDB.FinalizedCheckpoint(ctx)
if err != nil {
return err
}
rt := bytesutil.ToBytes32(chkPt.Root)
if rt != [32]byte{} {
fState, err := s.cfg.StateGen.StateByRoot(ctx, rt)
if err != nil {
return errors.Wrap(err, "could not get finalized state")
}
fState := s.cfg.finalizedStateAtStartup
if fState == nil || fState.IsNil() {
return errors.Errorf("finalized state with root %#x does not exist in the db", rt)
return errors.Errorf("finalized state with root %#x is nil", rt)
}
// Set deposit index to the one in the current archived state.
currIndex = fState.Eth1DepositIndex()
@@ -621,9 +601,9 @@ func (s *Service) initDepositCaches(ctx context.Context, ctrs []*protodb.Deposit
// accumulates. we finalize them here before we are ready to receive a block.
// Otherwise, the first few blocks will be slower to compute as we will
// hold the lock and be busy finalizing the deposits.
s.cfg.DepositCache.InsertFinalizedDeposits(ctx, int64(currIndex))
s.cfg.depositCache.InsertFinalizedDeposits(ctx, int64(currIndex))
// Deposit proofs are only used during state transition and can be safely removed to save space.
if err = s.cfg.DepositCache.PruneProofs(ctx, int64(currIndex)); err != nil {
if err = s.cfg.depositCache.PruneProofs(ctx, int64(currIndex)); err != nil {
return errors.Wrap(err, "could not prune deposit proofs")
}
}
@@ -632,7 +612,7 @@ func (s *Service) initDepositCaches(ctx context.Context, ctrs []*protodb.Deposit
// is more than the current index in state.
if uint64(len(ctrs)) > currIndex {
for _, c := range ctrs[currIndex:] {
s.cfg.DepositCache.InsertPendingDeposit(ctx, c.Deposit, c.Eth1BlockHeight, c.Index, bytesutil.ToBytes32(c.DepositRoot))
s.cfg.depositCache.InsertPendingDeposit(ctx, c.Deposit, c.Eth1BlockHeight, c.Index, bytesutil.ToBytes32(c.DepositRoot))
}
}
return nil
@@ -918,10 +898,10 @@ func (s *Service) determineEarliestVotingBlock(ctx context.Context, followBlock
// is ready to serve we connect to it again. This method is only
// relevant if we are on our backup endpoint.
func (s *Service) checkDefaultEndpoint() {
primaryEndpoint := s.httpEndpoints[0]
primaryEndpoint := s.cfg.httpEndpoints[0]
// Return early if we are running on our primary
// endpoint.
if s.currHttpEndpoint.Equals(primaryEndpoint) {
if s.cfg.currHttpEndpoint.Equals(primaryEndpoint) {
return
}
@@ -947,11 +927,11 @@ func (s *Service) checkDefaultEndpoint() {
// This is an inefficient way to search for the next endpoint, but given N is expected to be
// small ( < 25), it is fine to search this way.
func (s *Service) fallbackToNextEndpoint() {
currEndpoint := s.currHttpEndpoint
currEndpoint := s.cfg.currHttpEndpoint
currIndex := 0
totalEndpoints := len(s.httpEndpoints)
totalEndpoints := len(s.cfg.httpEndpoints)
for i, endpoint := range s.httpEndpoints {
for i, endpoint := range s.cfg.httpEndpoints {
if endpoint.Equals(currEndpoint) {
currIndex = i
break
@@ -961,9 +941,9 @@ func (s *Service) fallbackToNextEndpoint() {
if nextIndex >= totalEndpoints {
nextIndex = 0
}
s.updateCurrHttpEndpoint(s.httpEndpoints[nextIndex])
s.updateCurrHttpEndpoint(s.cfg.httpEndpoints[nextIndex])
if nextIndex != currIndex {
log.Infof("Falling back to alternative endpoint: %s", logs.MaskCredentialsLogging(s.currHttpEndpoint.Url))
log.Infof("Falling back to alternative endpoint: %s", logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url))
}
}
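
fallbackToNextEndpoint scans for the current endpoint's index and then wraps past the end of the list, which is round-robin rotation. Condensed to arithmetic (a hypothetical helper, not in the diff):

package main

import "fmt"

// nextEndpointIndex is a hypothetical one-liner for the rotation in
// fallbackToNextEndpoint: advance one slot, wrapping at the end of the list.
func nextEndpointIndex(curr, total int) int {
	return (curr + 1) % total
}

func main() {
	curr := 2
	for i := 0; i < 4; i++ {
		curr = nextEndpointIndex(curr, 3) // with 3 endpoints: 0, 1, 2, 0
		fmt.Println(curr)
	}
}
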
@@ -1019,7 +999,7 @@ func (s *Service) validateDepositContainers(ctrs []*protodb.DepositContainer) bo
// validates the current powchain data saved and makes sure that any
// embedded genesis state is correctly accounted for.
func (s *Service) ensureValidPowchainData(ctx context.Context) error {
genState, err := s.cfg.BeaconDB.GenesisState(ctx)
genState, err := s.cfg.beaconDB.GenesisState(ctx)
if err != nil {
return err
}
@@ -1027,7 +1007,7 @@ func (s *Service) ensureValidPowchainData(ctx context.Context) error {
if genState == nil || genState.IsNil() {
return nil
}
eth1Data, err := s.cfg.BeaconDB.PowchainData(ctx)
eth1Data, err := s.cfg.beaconDB.PowchainData(ctx)
if err != nil {
return errors.Wrap(err, "unable to retrieve eth1 data")
}
@@ -1048,9 +1028,9 @@ func (s *Service) ensureValidPowchainData(ctx context.Context) error {
ChainstartData: s.chainStartData,
BeaconState: pbState,
Trie: s.depositTrie.ToProto(),
DepositContainers: s.cfg.DepositCache.AllDepositContainers(ctx),
DepositContainers: s.cfg.depositCache.AllDepositContainers(ctx),
}
return s.cfg.BeaconDB.SavePowchainData(ctx, eth1Data)
return s.cfg.beaconDB.SavePowchainData(ctx, eth1Data)
}
return nil
}
@@ -1077,5 +1057,5 @@ func eth1HeadIsBehind(timestamp uint64) bool {
}
func (s *Service) primaryConnected() bool {
return s.currHttpEndpoint.Equals(s.httpEndpoints[0])
return s.cfg.currHttpEndpoint.Equals(s.cfg.httpEndpoints[0])
}


@@ -27,7 +27,6 @@ import (
"github.com/prysmaticlabs/prysm/network"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
protodb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/wrapper"
"github.com/prysmaticlabs/prysm/testing/assert"
"github.com/prysmaticlabs/prysm/testing/require"
"github.com/prysmaticlabs/prysm/testing/util"
@@ -129,11 +128,11 @@ func TestStart_OK(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.rpcClient = &mockPOW.RPCClient{Backend: testAcc.Backend}
@@ -158,11 +157,11 @@ func TestStart_NoHttpEndpointDefinedFails_WithoutChainStarted(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
s, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{""}, // No endpoint defined!
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
s, err := NewService(context.Background(),
WithHttpEndpoints([]string{""}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err)
// Set custom exit func so test can proceed
log.Logger.ExitFunc = func(i int) {
@@ -201,12 +200,12 @@ func TestStart_NoHttpEndpointDefinedSucceeds_WithGenesisState(t *testing.T) {
require.NoError(t, beaconDB.SaveGenesisBlockRoot(context.Background(), genRoot))
depositCache, err := depositcache.New()
require.NoError(t, err)
s, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{""}, // No endpoint defined!
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
DepositCache: depositCache,
})
s, err := NewService(context.Background(),
WithHttpEndpoints([]string{""}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
WithDepositCache(depositCache),
)
require.NoError(t, err)
wg := new(sync.WaitGroup)
@@ -232,11 +231,11 @@ func TestStart_NoHttpEndpointDefinedSucceeds_WithChainStarted(t *testing.T) {
ChainstartData: &protodb.ChainStartData{Chainstarted: true},
Trie: &protodb.SparseMerkleTrie{},
}))
s, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{""}, // No endpoint defined!
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
s, err := NewService(context.Background(),
WithHttpEndpoints([]string{""}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err)
s.Start()
@@ -249,11 +248,11 @@ func TestStop_OK(t *testing.T) {
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
@@ -274,11 +273,11 @@ func TestService_Eth1Synced(t *testing.T) {
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
@@ -299,11 +298,11 @@ func TestFollowBlock_OK(t *testing.T) {
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
// simulated backend sets eth1 block
@@ -372,10 +371,10 @@ func TestStatus(t *testing.T) {
func TestHandlePanic_OK(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
// nil eth1DataFetcher would panic if cached value not used
web3Service.eth1DataFetcher = nil
@@ -411,11 +410,11 @@ func TestLogTillGenesis_OK(t *testing.T) {
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
require.NoError(t, err)
@@ -438,33 +437,33 @@ func TestLogTillGenesis_OK(t *testing.T) {
func TestInitDepositCache_OK(t *testing.T) {
ctrs := []*protodb.DepositContainer{
{Index: 0, Eth1BlockHeight: 2, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}},
{Index: 1, Eth1BlockHeight: 4, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("B")}}},
{Index: 2, Eth1BlockHeight: 6, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("c")}}},
{Index: 0, Eth1BlockHeight: 2, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("A")}, Data: &ethpb.Deposit_Data{PublicKey: []byte{}}}},
{Index: 1, Eth1BlockHeight: 4, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("B")}, Data: &ethpb.Deposit_Data{PublicKey: []byte{}}}},
{Index: 2, Eth1BlockHeight: 6, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("c")}, Data: &ethpb.Deposit_Data{PublicKey: []byte{}}}},
}
gs, _ := util.DeterministicGenesisState(t, 1)
beaconDB := dbutil.SetupDB(t)
s := &Service{
chainStartData: &protodb.ChainStartData{Chainstarted: false},
preGenesisState: gs,
cfg: &Web3ServiceConfig{BeaconDB: beaconDB},
cfg: &config{beaconDB: beaconDB},
}
var err error
s.cfg.DepositCache, err = depositcache.New()
s.cfg.depositCache, err = depositcache.New()
require.NoError(t, err)
require.NoError(t, s.initDepositCaches(context.Background(), ctrs))
require.Equal(t, 0, len(s.cfg.DepositCache.PendingContainers(context.Background(), nil)))
require.Equal(t, 0, len(s.cfg.depositCache.PendingContainers(context.Background(), nil)))
blockRootA := [32]byte{'a'}
emptyState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveGenesisBlockRoot(context.Background(), blockRootA))
require.NoError(t, s.cfg.BeaconDB.SaveState(context.Background(), emptyState, blockRootA))
require.NoError(t, s.cfg.beaconDB.SaveGenesisBlockRoot(context.Background(), blockRootA))
require.NoError(t, s.cfg.beaconDB.SaveState(context.Background(), emptyState, blockRootA))
s.chainStartData.Chainstarted = true
require.NoError(t, s.initDepositCaches(context.Background(), ctrs))
require.Equal(t, 3, len(s.cfg.DepositCache.PendingContainers(context.Background(), nil)))
require.Equal(t, 3, len(s.cfg.depositCache.PendingContainers(context.Background(), nil)))
}
func TestInitDepositCacheWithFinalization_OK(t *testing.T) {
@@ -508,14 +507,14 @@ func TestInitDepositCacheWithFinalization_OK(t *testing.T) {
s := &Service{
chainStartData: &protodb.ChainStartData{Chainstarted: false},
preGenesisState: gs,
cfg: &Web3ServiceConfig{BeaconDB: beaconDB},
cfg: &config{beaconDB: beaconDB},
}
var err error
s.cfg.DepositCache, err = depositcache.New()
s.cfg.depositCache, err = depositcache.New()
require.NoError(t, err)
require.NoError(t, s.initDepositCaches(context.Background(), ctrs))
require.Equal(t, 0, len(s.cfg.DepositCache.PendingContainers(context.Background(), nil)))
require.Equal(t, 0, len(s.cfg.depositCache.PendingContainers(context.Background(), nil)))
headBlock := util.NewBeaconBlock()
headRoot, err := headBlock.Block.HashTreeRoot()
@@ -524,22 +523,20 @@ func TestInitDepositCacheWithFinalization_OK(t *testing.T) {
emptyState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveGenesisBlockRoot(context.Background(), headRoot))
require.NoError(t, s.cfg.BeaconDB.SaveState(context.Background(), emptyState, headRoot))
require.NoError(t, s.cfg.beaconDB.SaveGenesisBlockRoot(context.Background(), headRoot))
require.NoError(t, s.cfg.beaconDB.SaveState(context.Background(), emptyState, headRoot))
require.NoError(t, stateGen.SaveState(context.Background(), headRoot, emptyState))
s.cfg.StateGen = stateGen
s.cfg.stateGen = stateGen
require.NoError(t, emptyState.SetEth1DepositIndex(2))
ctx := context.Background()
require.NoError(t, stateGen.SaveState(ctx, headRoot, emptyState))
require.NoError(t, beaconDB.SaveState(ctx, emptyState, headRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(headBlock)))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(0), Root: headRoot[:]}))
s.cfg.finalizedStateAtStartup = emptyState
s.chainStartData.Chainstarted = true
require.NoError(t, s.initDepositCaches(context.Background(), ctrs))
deps := s.cfg.DepositCache.NonFinalizedDeposits(context.Background(), nil)
deps := s.cfg.depositCache.NonFinalizedDeposits(context.Background(), nil)
assert.Equal(t, 0, len(deps))
}
@@ -547,11 +544,11 @@ func TestNewService_EarliestVotingBlock(t *testing.T) {
testAcc, err := contracts.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
web3Service, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service.eth1DataFetcher = &goodFetcher{backend: testAcc.Backend}
// simulated backend sets eth1 block
@@ -598,22 +595,21 @@ func TestNewService_Eth1HeaderRequLimit(t *testing.T) {
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := dbutil.SetupDB(t)
s1, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
})
s1, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
assert.Equal(t, defaultEth1HeaderReqLimit, s1.cfg.Eth1HeaderReqLimit, "default eth1 header request limit not set")
s2, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{endpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
Eth1HeaderReqLimit: uint64(150),
})
assert.Equal(t, defaultEth1HeaderReqLimit, s1.cfg.eth1HeaderReqLimit, "default eth1 header request limit not set")
s2, err := NewService(context.Background(),
WithHttpEndpoints([]string{endpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
WithEth1HeaderRequestLimit(uint64(150)),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
assert.Equal(t, uint64(150), s2.cfg.Eth1HeaderReqLimit, "unable to set eth1HeaderRequestLimit")
assert.Equal(t, uint64(150), s2.cfg.eth1HeaderReqLimit, "unable to set eth1HeaderRequestLimit")
}
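The assertions above depend on NewService seeding cfg.eth1HeaderReqLimit with defaultEth1HeaderReqLimit before any options run, so the limit only changes when the option is supplied. Assumed shape of the option:

// Assumed sketch; the real powchain constructor may differ in detail.
func WithEth1HeaderRequestLimit(limit uint64) Option {
    return func(s *Service) error {
        s.cfg.eth1HeaderReqLimit = limit
        return nil
    }
}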
type mockBSUpdater struct {
@@ -635,41 +631,41 @@ func TestServiceFallbackCorrectly(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
mbs := &mockBSUpdater{}
s1, err := NewService(context.Background(), &Web3ServiceConfig{
HttpEndpoints: []string{firstEndpoint},
DepositContract: testAcc.ContractAddr,
BeaconDB: beaconDB,
BeaconNodeStatsUpdater: mbs,
})
s1.bsUpdater = mbs
s1, err := NewService(context.Background(),
WithHttpEndpoints([]string{firstEndpoint}),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(beaconDB),
WithBeaconNodeStatsUpdater(mbs),
)
s1.cfg.beaconNodeStatsUpdater = mbs
require.NoError(t, err)
assert.Equal(t, firstEndpoint, s1.currHttpEndpoint.Url, "Unexpected http endpoint")
assert.Equal(t, firstEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
// Stay at the first endpoint.
s1.fallbackToNextEndpoint()
assert.Equal(t, firstEndpoint, s1.currHttpEndpoint.Url, "Unexpected http endpoint")
assert.Equal(t, firstEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
assert.Equal(t, false, mbs.lastBS.SyncEth1FallbackConfigured, "SyncEth1FallbackConfigured in clientstats update should be false when only 1 endpoint is configured")
s1.httpEndpoints = append(s1.httpEndpoints, network.Endpoint{Url: secondEndpoint})
s1.cfg.httpEndpoints = append(s1.cfg.httpEndpoints, network.Endpoint{Url: secondEndpoint})
s1.fallbackToNextEndpoint()
assert.Equal(t, secondEndpoint, s1.currHttpEndpoint.Url, "Unexpected http endpoint")
assert.Equal(t, secondEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
assert.Equal(t, true, mbs.lastBS.SyncEth1FallbackConfigured, "SyncEth1FallbackConfigured in clientstats update should be true when > 1 endpoint is configured")
thirdEndpoint := "C"
fourthEndpoint := "D"
s1.httpEndpoints = append(s1.httpEndpoints, network.Endpoint{Url: thirdEndpoint}, network.Endpoint{Url: fourthEndpoint})
s1.cfg.httpEndpoints = append(s1.cfg.httpEndpoints, network.Endpoint{Url: thirdEndpoint}, network.Endpoint{Url: fourthEndpoint})
s1.fallbackToNextEndpoint()
assert.Equal(t, thirdEndpoint, s1.currHttpEndpoint.Url, "Unexpected http endpoint")
assert.Equal(t, thirdEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
s1.fallbackToNextEndpoint()
assert.Equal(t, fourthEndpoint, s1.currHttpEndpoint.Url, "Unexpected http endpoint")
assert.Equal(t, fourthEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
// Rollover correctly back to the first endpoint
s1.fallbackToNextEndpoint()
assert.Equal(t, firstEndpoint, s1.currHttpEndpoint.Url, "Unexpected http endpoint")
assert.Equal(t, firstEndpoint, s1.cfg.currHttpEndpoint.Url, "Unexpected http endpoint")
}
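The rollover behavior exercised above suggests a cursor that wraps modulo the endpoint list. A sketch, where currIndex is a hypothetical field tracking the active endpoint (the real service may track the position differently):

// Sketch of round-robin endpoint fallback.
func (s *Service) fallbackToNextEndpoint() {
    total := len(s.cfg.httpEndpoints)
    if total <= 1 {
        return // a single endpoint has nowhere to fall back to
    }
    s.currIndex = (s.currIndex + 1) % total // wraps back to the first endpoint
    s.cfg.currHttpEndpoint = s.cfg.httpEndpoints[s.currIndex]
}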
func TestDedupEndpoints(t *testing.T) {
@@ -697,19 +693,19 @@ func TestService_EnsureConsistentPowchainData(t *testing.T) {
cache, err := depositcache.New()
require.NoError(t, err)
s1, err := NewService(context.Background(), &Web3ServiceConfig{
BeaconDB: beaconDB,
DepositCache: cache,
})
s1, err := NewService(context.Background(),
WithDatabase(beaconDB),
WithDepositCache(cache),
)
require.NoError(t, err)
genState, err := util.NewBeaconState()
require.NoError(t, err)
assert.NoError(t, genState.SetSlot(1000))
require.NoError(t, s1.cfg.BeaconDB.SaveGenesisData(context.Background(), genState))
require.NoError(t, s1.cfg.beaconDB.SaveGenesisData(context.Background(), genState))
require.NoError(t, s1.ensureValidPowchainData(context.Background()))
eth1Data, err := s1.cfg.BeaconDB.PowchainData(context.Background())
eth1Data, err := s1.cfg.beaconDB.PowchainData(context.Background())
assert.NoError(t, err)
assert.NotNil(t, eth1Data)
@@ -721,19 +717,19 @@ func TestService_InitializeCorrectly(t *testing.T) {
cache, err := depositcache.New()
require.NoError(t, err)
s1, err := NewService(context.Background(), &Web3ServiceConfig{
BeaconDB: beaconDB,
DepositCache: cache,
})
s1, err := NewService(context.Background(),
WithDatabase(beaconDB),
WithDepositCache(cache),
)
require.NoError(t, err)
genState, err := util.NewBeaconState()
require.NoError(t, err)
assert.NoError(t, genState.SetSlot(1000))
require.NoError(t, s1.cfg.BeaconDB.SaveGenesisData(context.Background(), genState))
require.NoError(t, s1.cfg.beaconDB.SaveGenesisData(context.Background(), genState))
require.NoError(t, s1.ensureValidPowchainData(context.Background()))
eth1Data, err := s1.cfg.BeaconDB.PowchainData(context.Background())
eth1Data, err := s1.cfg.beaconDB.PowchainData(context.Background())
assert.NoError(t, err)
assert.NoError(t, s1.initializeEth1Data(context.Background(), eth1Data))
@@ -745,25 +741,25 @@ func TestService_EnsureValidPowchainData(t *testing.T) {
cache, err := depositcache.New()
require.NoError(t, err)
s1, err := NewService(context.Background(), &Web3ServiceConfig{
BeaconDB: beaconDB,
DepositCache: cache,
})
s1, err := NewService(context.Background(),
WithDatabase(beaconDB),
WithDepositCache(cache),
)
require.NoError(t, err)
genState, err := util.NewBeaconState()
require.NoError(t, err)
assert.NoError(t, genState.SetSlot(1000))
require.NoError(t, s1.cfg.BeaconDB.SaveGenesisData(context.Background(), genState))
require.NoError(t, s1.cfg.beaconDB.SaveGenesisData(context.Background(), genState))
err = s1.cfg.BeaconDB.SavePowchainData(context.Background(), &protodb.ETH1ChainData{
err = s1.cfg.beaconDB.SavePowchainData(context.Background(), &protodb.ETH1ChainData{
ChainstartData: &protodb.ChainStartData{Chainstarted: true},
DepositContainers: []*protodb.DepositContainer{{Index: 1}},
})
require.NoError(t, err)
require.NoError(t, s1.ensureValidPowchainData(context.Background()))
eth1Data, err := s1.cfg.BeaconDB.PowchainData(context.Background())
eth1Data, err := s1.cfg.beaconDB.PowchainData(context.Background())
assert.NoError(t, err)
assert.NotNil(t, eth1Data)
@@ -775,10 +771,10 @@ func TestService_ValidateDepositContainers(t *testing.T) {
cache, err := depositcache.New()
require.NoError(t, err)
s1, err := NewService(context.Background(), &Web3ServiceConfig{
BeaconDB: beaconDB,
DepositCache: cache,
})
s1, err := NewService(context.Background(),
WithDatabase(beaconDB),
WithDepositCache(cache),
)
require.NoError(t, err)
var tt = []struct {

View File

@@ -104,6 +104,7 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -5,6 +5,7 @@ import (
"encoding/hex"
"testing"
"github.com/ethereum/go-ethereum/common"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
@@ -96,6 +97,10 @@ func TestGetSpec(t *testing.T) {
config.ProportionalSlashingMultiplierAltair = 69
config.InactivityScoreRecoveryRate = 70
config.MinSyncCommitteeParticipants = 71
config.TerminalBlockHash = common.HexToHash("TerminalBlockHash")
config.TerminalBlockHashActivationEpoch = 72
config.TerminalTotalDifficulty = 73
config.Coinbase = common.HexToAddress("Coinbase")
var dbp [4]byte
copy(dbp[:], []byte{'0', '0', '0', '1'})
@@ -125,7 +130,7 @@ func TestGetSpec(t *testing.T) {
resp, err := server.GetSpec(context.Background(), &emptypb.Empty{})
require.NoError(t, err)
assert.Equal(t, 91, len(resp.Data))
assert.Equal(t, 94, len(resp.Data))
for k, v := range resp.Data {
switch k {
case "CONFIG_NAME":
@@ -318,6 +323,14 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x09000000", v)
case "TRANSITION_TOTAL_DIFFICULTY":
assert.Equal(t, "0", v)
case "TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH":
assert.Equal(t, "72", v)
case "TERMINAL_BLOCK_HASH":
assert.Equal(t, common.HexToHash("TerminalBlockHash"), common.HexToHash(v))
case "TERMINAL_TOTAL_DIFFICULTY":
assert.Equal(t, "73", v)
case "COINBASE":
assert.Equal(t, common.HexToAddress("Coinbase"), v)
default:
t.Errorf("Incorrect key: %s", k)
}

View File

@@ -16,6 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/cmd"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
@@ -674,11 +675,23 @@ func (bs *Server) GetValidatorPerformance(
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
}
currSlot := bs.GenesisTimeFetcher.CurrentSlot()
if bs.GenesisTimeFetcher.CurrentSlot() > headState.Slot() {
headState, err = transition.ProcessSlots(ctx, headState, bs.GenesisTimeFetcher.CurrentSlot())
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots: %v", err)
if currSlot > headState.Slot() {
if features.Get().EnableNextSlotStateCache {
headRoot, err := bs.HeadFetcher.HeadRoot(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not retrieve head root: %v", err)
}
headState, err = transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, currSlot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots up to %d: %v", currSlot, err)
}
} else {
headState, err = transition.ProcessSlots(ctx, headState, currSlot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots: %v", err)
}
}
}
var validatorSummary []*precompute.Validator
@@ -788,14 +801,15 @@ func (bs *Server) GetValidatorPerformance(
effectiveBalances = append(effectiveBalances, summary.CurrentEpochEffectiveBalance)
beforeTransitionBalances = append(beforeTransitionBalances, summary.BeforeEpochTransitionBalance)
afterTransitionBalances = append(afterTransitionBalances, summary.AfterEpochTransitionBalance)
correctlyVotedSource = append(correctlyVotedSource, summary.IsPrevEpochAttester)
correctlyVotedTarget = append(correctlyVotedTarget, summary.IsPrevEpochTargetAttester)
correctlyVotedHead = append(correctlyVotedHead, summary.IsPrevEpochHeadAttester)
if headState.Version() == version.Phase0 {
correctlyVotedSource = append(correctlyVotedSource, summary.IsPrevEpochAttester)
inclusionSlots = append(inclusionSlots, summary.InclusionSlot)
inclusionDistances = append(inclusionDistances, summary.InclusionDistance)
} else {
correctlyVotedSource = append(correctlyVotedSource, summary.IsPrevEpochSourceAttester)
inactivityScores = append(inactivityScores, summary.InactivityScore)
}
}
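Both this handler and the validator duties handler later in this diff gate slot advancement behind features.Get().EnableNextSlotStateCache. The idea, sketched with a hypothetical cache accessor: if a state already advanced past the head root is cached, start from it instead of replaying every intermediate slot.

// Sketch only; transition.ProcessSlotsUsingNextSlotCache differs in detail.
// nextSlotCache.get is hypothetical: it returns a state pre-advanced one
// slot past headRoot, or nil on a miss.
func processSlotsUsingNextSlotCache(
    ctx context.Context,
    st state.BeaconState,
    headRoot []byte,
    target types.Slot,
) (state.BeaconState, error) {
    if cached := nextSlotCache.get(headRoot); cached != nil && cached.Slot() <= target {
        st = cached // skip the slots the cache already processed
    }
    return transition.ProcessSlots(ctx, st, target)
}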

View File

@@ -3,6 +3,7 @@ package debug
import (
"context"
"fmt"
"math"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
@@ -61,7 +62,7 @@ func (ds *Server) GetInclusionSlot(ctx context.Context, req *pbrpc.InclusionSlot
return nil, status.Errorf(codes.Internal, "Could not retrieve blocks: %v", err)
}
inclusionSlot := types.Slot(1<<64 - 1)
inclusionSlot := types.Slot(math.MaxUint64)
targetStates := make(map[[32]byte]state.ReadOnlyBeaconState)
for _, blk := range blks {
for _, a := range blk.Block().Body().Attestations() {

View File

@@ -13,6 +13,7 @@ import (
coreTime "github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
beaconState "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/features"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/crypto/rand"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
@@ -124,9 +125,20 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
return nil, err
}
if s.Slot() < epochStartSlot {
s, err = transition.ProcessSlots(ctx, s, epochStartSlot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots up to %d: %v", epochStartSlot, err)
if features.Get().EnableNextSlotStateCache {
headRoot, err := vs.HeadFetcher.HeadRoot(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not retrieve head root: %v", err)
}
s, err = transition.ProcessSlotsUsingNextSlotCache(ctx, s, headRoot, epochStartSlot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots up to %d: %v", epochStartSlot, err)
}
} else {
s, err = transition.ProcessSlots(ctx, s, epochStartSlot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots up to %d: %v", epochStartSlot, err)
}
}
}
committeeAssignments, proposerIndexToSlots, err := helpers.CommitteeAssignments(ctx, s, req.Epoch)

View File

@@ -382,7 +382,7 @@ func (s *Service) logNewClientConnection(ctx context.Context) {
if !s.connectedRPCClients[clientInfo.Addr] {
log.WithFields(logrus.Fields{
"addr": clientInfo.Addr.String(),
}).Infof("New gRPC client connected to beacon node")
}).Infof("NewService gRPC client connected to beacon node")
s.connectedRPCClients[clientInfo.Addr] = true
}
}

View File

@@ -12,6 +12,8 @@ go_library(
"//beacon-chain/state/stateutil:go_default_library",
"//beacon-chain/state/types:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -26,6 +28,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/state/stateutil:go_default_library",
"//beacon-chain/state/types:go_default_library",
"//beacon-chain/state/v1:go_default_library",
"//config/params:go_default_library",

View File

@@ -18,6 +18,7 @@ type FieldTrie struct {
field types.FieldIndex
dataType types.DataType
length uint64
numOfElems int
}
// NewFieldTrie is the constructor for the field trie data structure. It creates the corresponding
@@ -26,31 +27,37 @@ type FieldTrie struct {
func NewFieldTrie(field types.FieldIndex, dataType types.DataType, elements interface{}, length uint64) (*FieldTrie, error) {
if elements == nil {
return &FieldTrie{
field: field,
dataType: dataType,
reference: stateutil.NewRef(1),
RWMutex: new(sync.RWMutex),
length: length,
field: field,
dataType: dataType,
reference: stateutil.NewRef(1),
RWMutex: new(sync.RWMutex),
length: length,
numOfElems: 0,
}, nil
}
fieldRoots, err := fieldConverters(field, []uint64{}, elements, true)
if err != nil {
return nil, err
}
if err := validateElements(field, elements, length); err != nil {
if err := validateElements(field, dataType, elements, length); err != nil {
return nil, err
}
switch dataType {
case types.BasicArray:
fl, err := stateutil.ReturnTrieLayer(fieldRoots, length)
if err != nil {
return nil, err
}
return &FieldTrie{
fieldLayers: stateutil.ReturnTrieLayer(fieldRoots, length),
fieldLayers: fl,
field: field,
dataType: dataType,
reference: stateutil.NewRef(1),
RWMutex: new(sync.RWMutex),
length: length,
numOfElems: reflect.ValueOf(elements).Len(),
}, nil
case types.CompositeArray:
case types.CompositeArray, types.CompressedArray:
return &FieldTrie{
fieldLayers: stateutil.ReturnTrieLayerVariable(fieldRoots, length),
field: field,
@@ -58,6 +65,7 @@ func NewFieldTrie(field types.FieldIndex, dataType types.DataType, elements inte
reference: stateutil.NewRef(1),
RWMutex: new(sync.RWMutex),
length: length,
numOfElems: reflect.ValueOf(elements).Len(),
}, nil
default:
return nil, errors.Errorf("unrecognized data type in field map: %v", reflect.TypeOf(dataType).Name())
@@ -88,13 +96,40 @@ func (f *FieldTrie) RecomputeTrie(indices []uint64, elements interface{}) ([32]b
if err != nil {
return [32]byte{}, err
}
f.numOfElems = reflect.ValueOf(elements).Len()
return fieldRoot, nil
case types.CompositeArray:
fieldRoot, f.fieldLayers, err = stateutil.RecomputeFromLayerVariable(fieldRoots, indices, f.fieldLayers)
if err != nil {
return [32]byte{}, err
}
f.numOfElems = reflect.ValueOf(elements).Len()
return stateutil.AddInMixin(fieldRoot, uint64(len(f.fieldLayers[0])))
case types.CompressedArray:
numOfElems, err := f.field.ElemsInChunk()
if err != nil {
return [32]byte{}, err
}
// We remove the duplicates here in order to prevent
// duplicated insertions into the trie.
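// For example, element indices {0, 1, 2, 3, 5} with 4 elements per chunk
// collapse to chunk indices {0, 1}; only the first root seen for a chunk
// is kept, since every index within a chunk yields the same chunk root.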
newIndices := []uint64{}
indexExists := make(map[uint64]bool)
newRoots := make([][32]byte, 0, len(fieldRoots)/int(numOfElems))
for i, idx := range indices {
startIdx := idx / numOfElems
if indexExists[startIdx] {
continue
}
newIndices = append(newIndices, startIdx)
indexExists[startIdx] = true
newRoots = append(newRoots, fieldRoots[i])
}
fieldRoot, f.fieldLayers, err = stateutil.RecomputeFromLayerVariable(newRoots, newIndices, f.fieldLayers)
if err != nil {
return [32]byte{}, err
}
f.numOfElems = reflect.ValueOf(elements).Len()
return stateutil.AddInMixin(fieldRoot, uint64(f.numOfElems))
default:
return [32]byte{}, errors.Errorf("unrecognized data type in field map: %v", reflect.TypeOf(f.dataType).Name())
}
@@ -105,11 +140,12 @@ func (f *FieldTrie) RecomputeTrie(indices []uint64, elements interface{}) ([32]b
func (f *FieldTrie) CopyTrie() *FieldTrie {
if f.fieldLayers == nil {
return &FieldTrie{
field: f.field,
dataType: f.dataType,
reference: stateutil.NewRef(1),
RWMutex: new(sync.RWMutex),
length: f.length,
field: f.field,
dataType: f.dataType,
reference: stateutil.NewRef(1),
RWMutex: new(sync.RWMutex),
length: f.length,
numOfElems: f.numOfElems,
}
}
dstFieldTrie := make([][]*[32]byte, len(f.fieldLayers))
@@ -124,6 +160,7 @@ func (f *FieldTrie) CopyTrie() *FieldTrie {
reference: stateutil.NewRef(1),
RWMutex: new(sync.RWMutex),
length: f.length,
numOfElems: f.numOfElems,
}
}
@@ -135,6 +172,9 @@ func (f *FieldTrie) TrieRoot() ([32]byte, error) {
case types.CompositeArray:
trieRoot := *f.fieldLayers[len(f.fieldLayers)-1][0]
return stateutil.AddInMixin(trieRoot, uint64(len(f.fieldLayers[0])))
case types.CompressedArray:
trieRoot := *f.fieldLayers[len(f.fieldLayers)-1][0]
return stateutil.AddInMixin(trieRoot, uint64(f.numOfElems))
default:
return [32]byte{}, errors.Errorf("unrecognized data type in field map: %v", reflect.TypeOf(f.dataType).Name())
}
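Both composite branches finish with stateutil.AddInMixin, which presumably implements SSZ's mix-in-length step: the list root is the hash of the vector root concatenated with the length as a 256-bit little-endian integer. A sketch of that rule (requires crypto/sha256 and encoding/binary); note the CompressedArray case mixes in numOfElems, the balance count, rather than the chunk count, which is why the new field is tracked on the trie.

// mixInLength mirrors the SSZ convention for list roots:
// hash(vectorRoot || uint256(length)), length little-endian.
func mixInLength(root [32]byte, length uint64) [32]byte {
    var buf [64]byte
    copy(buf[:32], root[:])
    binary.LittleEndian.PutUint64(buf[32:40], length) // bytes 40..63 stay zero
    return sha256.Sum256(buf[:])
}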

View File

@@ -1,6 +1,7 @@
package fieldtrie
import (
"encoding/binary"
"fmt"
"reflect"
@@ -8,20 +9,37 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/beacon-chain/state/types"
"github.com/prysmaticlabs/prysm/crypto/hash"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/encoding/ssz"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/runtime/version"
)
func (f *FieldTrie) validateIndices(idxs []uint64) error {
length := f.length
if f.dataType == types.CompressedArray {
comLength, err := f.field.ElemsInChunk()
if err != nil {
return err
}
length *= comLength
}
for _, idx := range idxs {
if idx >= f.length {
return errors.Errorf("invalid index for field %s: %d >= length %d", f.field.String(version.Phase0), idx, f.length)
if idx >= length {
return errors.Errorf("invalid index for field %s: %d >= length %d", f.field.String(version.Phase0), idx, length)
}
}
return nil
}
func validateElements(field types.FieldIndex, elements interface{}, length uint64) error {
func validateElements(field types.FieldIndex, dataType types.DataType, elements interface{}, length uint64) error {
if dataType == types.CompressedArray {
comLength, err := field.ElemsInChunk()
if err != nil {
return err
}
length *= comLength
}
val := reflect.ValueOf(elements)
if val.Len() > int(length) {
return errors.Errorf("elements length is larger than expected for field %s: %d > %d", field.String(version.Phase0), val.Len(), length)
@@ -38,21 +56,21 @@ func fieldConverters(field types.FieldIndex, indices []uint64, elements interfac
return nil, errors.Errorf("Wanted type of %v but got %v",
reflect.TypeOf([][]byte{}).Name(), reflect.TypeOf(elements).Name())
}
return stateutil.HandleByteArrays(val, indices, convertAll)
return handleByteArrays(val, indices, convertAll)
case types.Eth1DataVotes:
val, ok := elements.([]*ethpb.Eth1Data)
if !ok {
return nil, errors.Errorf("Wanted type of %v but got %v",
reflect.TypeOf([]*ethpb.Eth1Data{}).Name(), reflect.TypeOf(elements).Name())
}
return HandleEth1DataSlice(val, indices, convertAll)
return handleEth1DataSlice(val, indices, convertAll)
case types.Validators:
val, ok := elements.([]*ethpb.Validator)
if !ok {
return nil, errors.Errorf("Wanted type of %v but got %v",
reflect.TypeOf([]*ethpb.Validator{}).Name(), reflect.TypeOf(elements).Name())
}
return stateutil.HandleValidatorSlice(val, indices, convertAll)
return handleValidatorSlice(val, indices, convertAll)
case types.PreviousEpochAttestations, types.CurrentEpochAttestations:
val, ok := elements.([]*ethpb.PendingAttestation)
if !ok {
@@ -60,13 +78,87 @@ func fieldConverters(field types.FieldIndex, indices []uint64, elements interfac
reflect.TypeOf([]*ethpb.PendingAttestation{}).Name(), reflect.TypeOf(elements).Name())
}
return handlePendingAttestation(val, indices, convertAll)
case types.Balances:
val, ok := elements.([]uint64)
if !ok {
return nil, errors.Errorf("Wanted type of %v but got %v",
reflect.TypeOf([]uint64{}).Name(), reflect.TypeOf(elements).Name())
}
return handleBalanceSlice(val, indices, convertAll)
default:
return [][32]byte{}, errors.Errorf("got unsupported type of %v", reflect.TypeOf(elements).Name())
}
}
// HandleEth1DataSlice processes a list of eth1data and indices into the appropriate roots.
func HandleEth1DataSlice(val []*ethpb.Eth1Data, indices []uint64, convertAll bool) ([][32]byte, error) {
// handleByteArrays computes and returns byte arrays in a slice of root format.
func handleByteArrays(val [][]byte, indices []uint64, convertAll bool) ([][32]byte, error) {
length := len(indices)
if convertAll {
length = len(val)
}
roots := make([][32]byte, 0, length)
rootCreator := func(input []byte) {
newRoot := bytesutil.ToBytes32(input)
roots = append(roots, newRoot)
}
if convertAll {
for i := range val {
rootCreator(val[i])
}
return roots, nil
}
if len(val) > 0 {
for _, idx := range indices {
if idx > uint64(len(val))-1 {
return nil, fmt.Errorf("index %d greater than number of byte arrays %d", idx, len(val))
}
rootCreator(val[idx])
}
}
return roots, nil
}
// handleValidatorSlice returns the validator indices in a slice of root format.
func handleValidatorSlice(val []*ethpb.Validator, indices []uint64, convertAll bool) ([][32]byte, error) {
length := len(indices)
if convertAll {
length = len(val)
}
roots := make([][32]byte, 0, length)
hasher := hash.CustomSHA256Hasher()
rootCreator := func(input *ethpb.Validator) error {
newRoot, err := stateutil.ValidatorRootWithHasher(hasher, input)
if err != nil {
return err
}
roots = append(roots, newRoot)
return nil
}
if convertAll {
for i := range val {
err := rootCreator(val[i])
if err != nil {
return nil, err
}
}
return roots, nil
}
if len(val) > 0 {
for _, idx := range indices {
if idx > uint64(len(val))-1 {
return nil, fmt.Errorf("index %d greater than number of validators %d", idx, len(val))
}
err := rootCreator(val[idx])
if err != nil {
return nil, err
}
}
}
return roots, nil
}
// handleEth1DataSlice processes a list of eth1data and indices into the appropriate roots.
func handleEth1DataSlice(val []*ethpb.Eth1Data, indices []uint64, convertAll bool) ([][32]byte, error) {
length := len(indices)
if convertAll {
length = len(val)
@@ -141,3 +233,48 @@ func handlePendingAttestation(val []*ethpb.PendingAttestation, indices []uint64,
}
return roots, nil
}
func handleBalanceSlice(val []uint64, indices []uint64, convertAll bool) ([][32]byte, error) {
if convertAll {
balancesMarshaling := make([][]byte, 0)
for _, b := range val {
balanceBuf := make([]byte, 8)
binary.LittleEndian.PutUint64(balanceBuf, b)
balancesMarshaling = append(balancesMarshaling, balanceBuf)
}
balancesChunks, err := ssz.PackByChunk(balancesMarshaling)
if err != nil {
return [][32]byte{}, errors.Wrap(err, "could not pack balances into chunks")
}
return balancesChunks, nil
}
if len(val) > 0 {
numOfElems, err := types.Balances.ElemsInChunk()
if err != nil {
return nil, err
}
roots := [][32]byte{}
for _, idx := range indices {
// We map each index to its chunk group: balances are packed
// four uint64 values per 32-byte chunk.
startIdx := idx / numOfElems
startGroup := startIdx * numOfElems
chunk := [32]byte{}
sizeOfElem := len(chunk) / int(numOfElems)
for i, j := 0, startGroup; j < startGroup+numOfElems; i, j = i+sizeOfElem, j+1 {
wantedVal := uint64(0)
// Chunks hold 4 balances each, so a chunk at the edge of the array
// may be only partially filled. For example, with 41 balances,
// 41 % 4 = 1: the final chunk holds one real balance and its
// remaining 3 slots are zero-filled.
if int(j) < len(val) {
wantedVal = val[j]
}
binary.LittleEndian.PutUint64(chunk[i:i+sizeOfElem], wantedVal)
}
roots = append(roots, chunk)
}
return roots, nil
}
return [][32]byte{}, nil
}
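Tying the pieces together: for a balance at element index i, the enclosing chunk is i / 4 and the byte offset inside it is (i % 4) * 8. A worked check against the partial path above:

// Balance index 5: startIdx = 5/4 = 1 (second chunk), startGroup = 4,
// and the value lands at bytes 8..15 of that chunk, i.e. (5 % 4) * 8 = 8.
idx := uint64(5)
chunkIdx := idx / 4         // 1
byteOffset := (idx % 4) * 8 // 8
_, _ = chunkIdx, byteOffset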

View File

@@ -1,8 +1,13 @@
package fieldtrie
import (
"encoding/binary"
"sync"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/beacon-chain/state/types"
"github.com/prysmaticlabs/prysm/config/params"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/testing/assert"
)
@@ -17,7 +22,64 @@ func Test_handlePendingAttestation_OutOfRange(t *testing.T) {
func Test_handleEth1DataSlice_OutOfRange(t *testing.T) {
items := make([]*ethpb.Eth1Data, 1)
indices := []uint64{3}
_, err := HandleEth1DataSlice(items, indices, false)
_, err := handleEth1DataSlice(items, indices, false)
assert.ErrorContains(t, "index 3 greater than number of items in eth1 data slice 1", err)
}
func Test_handleValidatorSlice_OutOfRange(t *testing.T) {
vals := make([]*ethpb.Validator, 1)
indices := []uint64{3}
_, err := handleValidatorSlice(vals, indices, false)
assert.ErrorContains(t, "index 3 greater than number of validators 1", err)
}
func TestBalancesSlice_CorrectRoots_All(t *testing.T) {
balances := []uint64{5, 2929, 34, 1291, 354305}
roots, err := handleBalanceSlice(balances, []uint64{}, true)
assert.NoError(t, err)
root1 := [32]byte{}
binary.LittleEndian.PutUint64(root1[:8], balances[0])
binary.LittleEndian.PutUint64(root1[8:16], balances[1])
binary.LittleEndian.PutUint64(root1[16:24], balances[2])
binary.LittleEndian.PutUint64(root1[24:32], balances[3])
root2 := [32]byte{}
binary.LittleEndian.PutUint64(root2[:8], balances[4])
assert.DeepEqual(t, roots, [][32]byte{root1, root2})
}
func TestBalancesSlice_CorrectRoots_Some(t *testing.T) {
balances := []uint64{5, 2929, 34, 1291, 354305}
roots, err := handleBalanceSlice(balances, []uint64{2, 3}, false)
assert.NoError(t, err)
root1 := [32]byte{}
binary.LittleEndian.PutUint64(root1[:8], balances[0])
binary.LittleEndian.PutUint64(root1[8:16], balances[1])
binary.LittleEndian.PutUint64(root1[16:24], balances[2])
binary.LittleEndian.PutUint64(root1[24:32], balances[3])
// Returns a root for each index (even if duplicated).
assert.DeepEqual(t, roots, [][32]byte{root1, root1})
}
func TestValidateIndices_CompressedField(t *testing.T) {
fakeTrie := &FieldTrie{
RWMutex: new(sync.RWMutex),
reference: stateutil.NewRef(0),
fieldLayers: nil,
field: types.Balances,
dataType: types.CompressedArray,
length: params.BeaconConfig().ValidatorRegistryLimit / 4,
numOfElems: 0,
}
goodIdx := params.BeaconConfig().ValidatorRegistryLimit - 1
assert.NoError(t, fakeTrie.validateIndices([]uint64{goodIdx}))
badIdx := goodIdx + 1
assert.ErrorContains(t, "invalid index for field balances", fakeTrie.validateIndices([]uint64{badIdx}))
}
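The bound in this test falls out of the chunk arithmetic: length is stored in chunks (ValidatorRegistryLimit / 4), and validateIndices multiplies by ElemsInChunk to recover the element-level bound, so any index up to ValidatorRegistryLimit - 1 is valid:

// length is in chunks; 4 uint64 balances fit in one 32-byte chunk.
chunks := params.BeaconConfig().ValidatorRegistryLimit / 4
maxValidIndex := chunks*4 - 1 // == ValidatorRegistryLimit - 1, the goodIdx above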

View File

@@ -17,7 +17,6 @@ type BeaconState interface {
WriteOnlyBeaconState
Copy() BeaconState
HashTreeRoot(ctx context.Context) ([32]byte, error)
Version() int
FutureForkStub
}
@@ -43,6 +42,7 @@ type ReadOnlyBeaconState interface {
FieldReferencesCount() map[string]uint64
MarshalSSZ() ([]byte, error)
IsNil() bool
Version() int
}
// WriteOnlyBeaconState defines a struct which only has write access to beacon state methods.
@@ -148,7 +148,6 @@ type WriteOnlyBlockRoots interface {
// WriteOnlyStateRoots defines a struct which only has write access to state roots methods.
type WriteOnlyStateRoots interface {
SetStateRoots(val [][]byte) error
UpdateStateRootAtIndex(idx uint64, stateRoot [32]byte) error
}

View File

@@ -37,6 +37,15 @@ func (s *State) HasStateInCache(ctx context.Context, blockRoot [32]byte) (bool,
return has, nil
}
// StateByRootIfCachedNoCopy retrieves a state, without copying it, using the input block root, but only if the state is already in the hot state cache.
func (s *State) StateByRootIfCachedNoCopy(blockRoot [32]byte) state.BeaconState {
if !s.hotStateCache.has(blockRoot) {
return nil
}
return s.hotStateCache.getWithoutCopy(blockRoot)
}
// StateByRoot retrieves the state using input block root.
func (s *State) StateByRoot(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "stateGen.StateByRoot")
@@ -44,7 +53,7 @@ func (s *State) StateByRoot(ctx context.Context, blockRoot [32]byte) (state.Beac
// Genesis case. If block root is zero hash, short circuit to use genesis cachedState stored in DB.
if blockRoot == params.BeaconConfig().ZeroHash {
return s.beaconDB.State(ctx, blockRoot)
return s.beaconDB.GenesisState(ctx)
}
return s.loadStateByRoot(ctx, blockRoot)
}
@@ -57,7 +66,7 @@ func (s *State) StateByRoot(ctx context.Context, blockRoot [32]byte) (state.Beac
func (s *State) StateByRootInitialSync(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error) {
// Genesis case. If block root is zero hash, short circuit to use genesis state stored in DB.
if blockRoot == params.BeaconConfig().ZeroHash {
return s.beaconDB.State(ctx, blockRoot)
return s.beaconDB.GenesisState(ctx)
}
// To invalidate cache for parent root because pre state will get mutated.

View File

@@ -16,6 +16,23 @@ import (
"github.com/prysmaticlabs/prysm/testing/util"
)
func TestStateByRoot_GenesisState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := New(beaconDB)
b := util.NewBeaconBlock()
bRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
beaconState, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.beaconDB.SaveState(ctx, beaconState, bRoot))
require.NoError(t, service.beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b)))
require.NoError(t, service.beaconDB.SaveGenesisBlockRoot(ctx, bRoot))
loadedState, err := service.StateByRoot(ctx, params.BeaconConfig().ZeroHash) // Zero hash is genesis state root.
require.NoError(t, err)
require.DeepSSZEqual(t, loadedState.InnerStateUnsafe(), beaconState.InnerStateUnsafe())
}
func TestStateByRoot_ColdState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -39,6 +56,44 @@ func TestStateByRoot_ColdState(t *testing.T) {
require.DeepSSZEqual(t, loadedState.InnerStateUnsafe(), beaconState.InnerStateUnsafe())
}
func TestStateByRootIfCachedNoCopy_HotState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := New(beaconDB)
beaconState, _ := util.DeterministicGenesisState(t, 32)
r := [32]byte{'A'}
require.NoError(t, service.beaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: r[:]}))
service.hotStateCache.put(r, beaconState)
loadedState := service.StateByRootIfCachedNoCopy(r)
require.DeepSSZEqual(t, loadedState.InnerStateUnsafe(), beaconState.InnerStateUnsafe())
}
func TestStateByRootIfCachedNoCopy_ColdState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := New(beaconDB)
service.finalizedInfo.slot = 2
service.slotsPerArchivedPoint = 1
b := util.NewBeaconBlock()
b.Block.Slot = 1
require.NoError(t, beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b)))
bRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
beaconState, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, beaconState.SetSlot(1))
require.NoError(t, service.beaconDB.SaveState(ctx, beaconState, bRoot))
require.NoError(t, service.beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b)))
require.NoError(t, service.beaconDB.SaveGenesisBlockRoot(ctx, bRoot))
loadedState := service.StateByRootIfCachedNoCopy(bRoot)
require.NoError(t, err)
require.Equal(t, loadedState, nil)
}
func TestStateByRoot_HotStateUsingEpochBoundaryCacheNoReplay(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -98,6 +153,23 @@ func TestStateByRoot_HotStateCached(t *testing.T) {
require.DeepSSZEqual(t, loadedState.InnerStateUnsafe(), beaconState.InnerStateUnsafe())
}
func TestStateByRoot_StateByRootInitialSync(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := New(beaconDB)
b := util.NewBeaconBlock()
bRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
beaconState, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.beaconDB.SaveState(ctx, beaconState, bRoot))
require.NoError(t, service.beaconDB.SaveBlock(ctx, wrapper.WrappedPhase0SignedBeaconBlock(b)))
require.NoError(t, service.beaconDB.SaveGenesisBlockRoot(ctx, bRoot))
loadedState, err := service.StateByRootInitialSync(ctx, params.BeaconConfig().ZeroHash) // Zero hash is genesis state root.
require.NoError(t, err)
require.DeepSSZEqual(t, loadedState.InnerStateUnsafe(), beaconState.InnerStateUnsafe())
}
func TestStateByRootInitialSync_UseEpochStateCache(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)

View File

@@ -23,8 +23,13 @@ func NewMockService() *MockStateManager {
}
}
// StateByRootIfCachedNoCopy --
func (m *MockStateManager) StateByRootIfCachedNoCopy(_ [32]byte) state.BeaconState { // skipcq: RVV-B0013
panic("implement me")
}
// Resume --
func (m *MockStateManager) Resume(_ context.Context) (state.BeaconState, error) {
func (m *MockStateManager) Resume(_ context.Context, _ state.BeaconState) (state.BeaconState, error) {
panic("implement me")
}

View File

@@ -3,18 +3,20 @@ package stategen
import (
"context"
"fmt"
"time"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/eth2-types"
"github.com/prysmaticlabs/prysm/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/time"
prysmTime "github.com/prysmaticlabs/prysm/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
"github.com/prysmaticlabs/prysm/runtime/version"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -28,7 +30,13 @@ func (s *State) ReplayBlocks(
ctx, span := trace.StartSpan(ctx, "stateGen.ReplayBlocks")
defer span.End()
var err error
log.Debugf("Replaying state from slot %d till slot %d", state.Slot(), targetSlot)
start := time.Now()
log.WithFields(logrus.Fields{
"startSlot": state.Slot(),
"endSlot": targetSlot,
"diff": targetSlot - state.Slot(),
}).Debug("Replaying state")
// The input block list is sorted in decreasing slots order.
if len(signed) > 0 {
for i := len(signed) - 1; i >= 0; i-- {
@@ -57,6 +65,11 @@ func (s *State) ReplayBlocks(
}
}
duration := time.Since(start)
log.WithFields(logrus.Fields{
"duration": duration,
}).Debug("Replayed state")
return state, nil
}
@@ -188,7 +201,7 @@ func processSlotsStateGen(ctx context.Context, state state.BeaconState, slot typ
if err != nil {
return nil, errors.Wrap(err, "could not process slot")
}
if time.CanProcessEpoch(state) {
if prysmTime.CanProcessEpoch(state) {
switch state.Version() {
case version.Phase0:
state, err = transition.ProcessEpochPrecompute(ctx, state)
@@ -208,7 +221,7 @@ func processSlotsStateGen(ctx context.Context, state state.BeaconState, slot typ
return nil, err
}
if time.CanUpgradeToAltair(state.Slot()) {
if prysmTime.CanUpgradeToAltair(state.Slot()) {
state, err = altair.UpgradeToAltair(ctx, state)
if err != nil {
return nil, err
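The prysmTime.CanUpgradeToAltair gate presumably fires exactly once, at the first slot of the configured Altair fork epoch. A sketch of that check, under that assumption:

// canUpgradeToAltair is a sketch: upgrade only at an epoch boundary whose
// epoch equals the configured Altair fork epoch.
func canUpgradeToAltair(slot types.Slot) bool {
    epochStart := slot%params.BeaconConfig().SlotsPerEpoch == 0
    return epochStart && slots.ToEpoch(slot) == params.BeaconConfig().AltairForkEpoch
}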

View File

@@ -23,7 +23,7 @@ var defaultHotStateDBInterval types.Slot = 128
// StateManager represents a management object that handles the internal
// logic of maintaining both hot and cold states in DB.
type StateManager interface {
Resume(ctx context.Context) (state.BeaconState, error)
Resume(ctx context.Context, fState state.BeaconState) (state.BeaconState, error)
SaveFinalizedState(fSlot types.Slot, fRoot [32]byte, fState state.BeaconState)
MigrateToCold(ctx context.Context, fRoot [32]byte) error
ReplayBlocks(ctx context.Context, state state.BeaconState, signed []block.SignedBeaconBlock, targetSlot types.Slot) (state.BeaconState, error)
@@ -31,6 +31,7 @@ type StateManager interface {
HasState(ctx context.Context, blockRoot [32]byte) (bool, error)
HasStateInCache(ctx context.Context, blockRoot [32]byte) (bool, error)
StateByRoot(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error)
StateByRootIfCachedNoCopy(blockRoot [32]byte) state.BeaconState
StateByRootInitialSync(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error)
StateBySlot(ctx context.Context, slot types.Slot) (state.BeaconState, error)
RecoverStateSummary(ctx context.Context, blockRoot [32]byte) (*ethpb.StateSummary, error)
@@ -84,7 +85,7 @@ func New(beaconDB db.NoHeadAccessDatabase) *State {
}
// Resume resumes a new state management object from previously saved finalized check point in DB.
func (s *State) Resume(ctx context.Context) (state.BeaconState, error) {
func (s *State) Resume(ctx context.Context, fState state.BeaconState) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "stateGen.Resume")
defer span.End()
@@ -97,12 +98,9 @@ func (s *State) Resume(ctx context.Context) (state.BeaconState, error) {
if fRoot == params.BeaconConfig().ZeroHash {
return s.beaconDB.GenesisState(ctx)
}
fState, err := s.StateByRoot(ctx, fRoot)
if err != nil {
return nil, err
}
if fState == nil || fState.IsNil() {
return nil, errors.New("finalized state not found in disk")
return nil, errors.New("finalized state is nil")
}
go func() {
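With the new signature, the caller supplies the finalized state instead of Resume loading it from disk. A hypothetical call site, assuming the node captured the state once at startup (cf. finalizedStateAtStartup earlier in this diff):

// Hypothetical wiring; field names are illustrative.
fState := s.cfg.finalizedStateAtStartup
headState, err := s.cfg.stateGen.Resume(ctx, fState)
if err != nil {
    return err
}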

View File

@@ -28,7 +28,7 @@ func TestResume(t *testing.T) {
require.NoError(t, service.beaconDB.SaveGenesisBlockRoot(ctx, root))
require.NoError(t, service.beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: root[:]}))
resumeState, err := service.Resume(ctx)
resumeState, err := service.Resume(ctx, beaconState)
require.NoError(t, err)
require.DeepSSZEqual(t, beaconState.InnerStateUnsafe(), resumeState.InnerStateUnsafe())
assert.Equal(t, params.BeaconConfig().SlotsPerEpoch, service.finalizedInfo.slot, "Did not get wanted slot")

View File

@@ -3,7 +3,6 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"array_root.go",
"block_header_root.go",
"eth1_root.go",
"participation_bit_root.go",
@@ -49,7 +48,6 @@ go_test(
"state_root_test.go",
"stateutil_test.go",
"trie_helpers_test.go",
"validator_root_test.go",
],
embed = [":go_default_library"],
deps = [

Some files were not shown because too many files have changed in this diff.