Compare commits

...

50 Commits

Author SHA1 Message Date
Potuz
58df1f1ba5 lock before saving the poststate to db (#12612) 2023-07-11 16:10:32 +00:00
Simon
cec32cb996 Add Dynamic Trusted Peer APIs (#12531)
* Add Dynamic Trusted Peer APIs

* Add unit tests for Dynamic Trusted Peer APIs

* Update beacon-chain/p2p/peers/peerdata/store.go

* Update beacon-chain/p2p/peers/peerdata/store_test.go

* Update beacon-chain/p2p/peers/peerdata/store_test.go

* Update beacon-chain/p2p/peers/peerdata/store_test.go

* Update beacon-chain/p2p/peers/status.go

* Update beacon-chain/p2p/peers/status_test.go

* Update beacon-chain/p2p/peers/status_test.go

* Update beacon-chain/rpc/eth/node/handlers.go

* Update beacon-chain/rpc/eth/node/handlers.go

* Update beacon-chain/rpc/eth/node/handlers.go

* Update beacon-chain/rpc/eth/node/handlers.go

* Move trusted peer APIs from rpc/eth/v1/node to rpc/prysm/node; move peersToWatch into the ensurePeerConnections function.

* Update beacon-chain/rpc/prysm/node/server.go

* Update beacon-chain/rpc/prysm/node/server.go

* fix go lint problem

* p2p/watch_peers.go: a trusted peer builds its AddrInfo structure itself instead of using MakePeer().

p2p/service.go: add a connectWithAllTrustedPeers function; before connectWithPeer, add trusted peer info to the peer status.

p2p/peers/status.go: trusted peers are excluded when pruning outdated and disconnected peers (see the sketch after this entry).

* use a read lock for GetTrustedPeers and IsTrustedPeers

---------

Co-authored-by: simon <sunminghui2@huawei.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-07-11 09:26:08 +00:00
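A minimal sketch of the pruning rule described above, assuming a simplified peer record; peerInfo and prunePeers are hypothetical stand-ins, not the actual types in beacon-chain/p2p/peers:

```go
package main

import "fmt"

// peerInfo is a hypothetical simplification of a peer status record,
// used only to illustrate the pruning rule from this commit.
type peerInfo struct {
	id           string
	disconnected bool
	trusted      bool
}

// prunePeers drops outdated/disconnected peers but never trusted ones.
func prunePeers(peers []peerInfo) []peerInfo {
	kept := make([]peerInfo, 0, len(peers))
	for _, p := range peers {
		if p.disconnected && !p.trusted {
			continue // prunable: disconnected and not trusted
		}
		kept = append(kept, p)
	}
	return kept
}

func main() {
	peers := []peerInfo{
		{id: "a", disconnected: true},
		{id: "b", disconnected: true, trusted: true}, // survives the prune
		{id: "c"},
	}
	for _, p := range prunePeers(peers) {
		fmt.Println(p.id) // prints b, then c
	}
}
```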
Nishant Das
d56a530c86 Copy Bytes Alternatively (#12608)
* copy bytes alternatively

* test
2023-07-10 19:47:29 +08:00
Nishant Das
0a68d2d302 Fix Context Cancellation (#12604) 2023-07-08 08:50:04 +00:00
minh-bq
25ebd335cb Fix bls signature batch unit test (#12602)
We randomly observe this failure when running the unit test:

go test -test.v -run=^TestSignatureBatch_AggregateBatch/common_and_uncommon_messages_in_batch_with_multiple_messages
=== RUN   TestSignatureBatch_AggregateBatch
=== RUN   TestSignatureBatch_AggregateBatch/common_and_uncommon_messages_in_batch_with_multiple_messages
    signature_batch_test.go:643: AggregateBatch() Descriptions got = [test signature bls aggregated signature test signature bls aggregated signature test signature bls aggregated signature], want [bls aggregated signature test signature bls aggregated signature test signature bls aggregated signature test signature]
--- FAIL: TestSignatureBatch_AggregateBatch (0.02s)
    --- FAIL: TestSignatureBatch_AggregateBatch/common_and_uncommon_messages_in_batch_with_multiple_messages (0.02s)

The problem is that the signature sort forgets to swap the descriptions when the signatures are swapped, so the parallel slices fall out of sync. This commit swaps the descriptions alongside the signatures (see the sketch after this entry).

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-07-07 14:26:02 -05:00
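This bug class is easy to reproduce with sort.Interface over parallel slices: if Swap reorders only one slice, the others fall out of sync. A minimal sketch with simplified types (batch is hypothetical, not the actual SignatureBatch code):

```go
package main

import (
	"fmt"
	"sort"
)

// batch is a hypothetical stand-in for a signature batch that keeps
// several parallel slices which must stay index-aligned.
type batch struct {
	messages     []string
	descriptions []string
}

func (b batch) Len() int           { return len(b.messages) }
func (b batch) Less(i, j int) bool { return b.messages[i] < b.messages[j] }

// Swap must reorder every parallel slice together; forgetting the
// descriptions slice is exactly the kind of bug this commit fixes.
func (b batch) Swap(i, j int) {
	b.messages[i], b.messages[j] = b.messages[j], b.messages[i]
	b.descriptions[i], b.descriptions[j] = b.descriptions[j], b.descriptions[i]
}

func main() {
	b := batch{
		messages:     []string{"m2", "m1"},
		descriptions: []string{"desc for m2", "desc for m1"},
	}
	sort.Sort(b)
	fmt.Println(b.messages, b.descriptions)
	// [m1 m2] [desc for m1 desc for m2] — still aligned after sorting
}
```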
Sammy Rosso
6a0db800b3 GetValidatorPerformance http endpoint (#12557)
* Add http endpoint for GetValidatorPerformance

* Add tests

* fix up client usage

* Revert changes

* refactor to reuse code

* Move endpoint + move ComputeValidatorPerformance

* Radek's comment change

* Add Bazel file

* Change endpoint path

* Add server for http endpoints

* Fix server

* Create core package

* Gaz

* Add correct error code

* Fix error code in test

* Adding errors

* Fix errors

* Fix default GRPC error

* Change http errors to core ones

* Use error status without helper

* Fix

* Capitalize GRPC error messages

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-07-07 14:49:44 +00:00
Nishant Das
085f90a4f1 Prune Pending Deposits on Finalization (#12598)
* prune them

* Update beacon-chain/blockchain/process_block_helpers.go

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-07-07 11:20:14 +08:00
dependabot[bot]
ecb26e9885 Bump google.golang.org/grpc from 1.40.0 to 1.53.0 (#12595)
* Bump google.golang.org/grpc from 1.40.0 to 1.53.0

Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.40.0 to 1.53.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.40.0...v1.53.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Run gazelle and fix new gRPC API

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-07-06 20:11:43 +00:00
james-prysm
7eb0091936 Checkpoint sync ux (#12584)
* small ux improvement for checkpoint sync

* adding small log for ux update

* gaz
2023-07-06 16:46:55 +00:00
terencechain
f8408b9ec1 feat: add metric for ReceiveBlock (#12597) 2023-07-06 02:57:55 -07:00
Preston Van Loon
d6d5139d68 Clarify sync committee message validation (#12594)
* Clarify sync committee validation and error message

* fix test
2023-07-05 19:20:21 +00:00
terencechain
2e0e29ecbe fix: prune invalid blocks during initial sync (#12591) 2023-07-05 08:33:33 -07:00
Potuz
e9b5e52ee2 Move consensus and execution validation outside of onBlock (#12589)
* Move consensus and execution validation outside of onBlock

* reviews

* fix unit test

* revert version change

* fix tests

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-07-05 21:12:21 +08:00
Nishant Das
2a4441762e Handle Epoch Boundary Misses (#12579)
* add changes

* fix tests

* fix edge case

* fix logging
2023-07-05 09:23:51 +00:00
Nishant Das
401fccc723 Log Finalized Deposit Insertion (#12593)
* add log

* update key

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-07-05 08:07:38 +00:00
Radosław Kapka
c80f88fc07 Rename payloadHash to lastValidHash in setOptimisticToInvalid (#12592) 2023-07-04 17:03:45 +00:00
Kevin Wood
faa0a2c4cf Correct log level for 'Could not send a chunked response' (#12562)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-07-02 22:58:12 +00:00
Nishant Das
c45cb7e188 Optimize Validator Roots Computation (#12585)
* add changes

* change it
2023-07-01 02:23:25 +00:00
0xalex88
0b10263dd5 Increase validator client startup proposer settings deadline (#12533)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-06-30 21:37:39 +00:00
Anukul Sangwan
3bc808352f run ineffassign for all code (#12578)
* run `ineffassign` for all code

* fix reported ineffassign errors

* remove redundant changes

* fix remaining ineffassign errors

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-06-29 15:38:26 +00:00
james-prysm
d0c740f477 Registration Cache used by default and other UX changes for Proposer settings (#12456)
* WIP

* WIP

* adding in migration function

* updating mock validator and gaz

* adding descriptive logs

* fixing mocking

* fixing tests

* fixing mock

* adding changes to handle enable builder settings

* fixing tests and edge case

* reduce cognitive complexity of function

* further reducing cognitive complexity of function

* WIP

* fixing unit test on migration

* adding more tests

* gaz and fix unit test

* fixing deepsource issues

* fixing more deepsource issues missed previously

* removing unused receiver name

* WIP fix to migration logic

* fixing logging info

* reverting migration logic, converting logic to address issues discussed on slack, adding unit tests

* adding test for builder setting only not saved to db

* addressing comment

* fixing flag

* removing accidentally missed deprecated flags

* rolling back mock on pr

* fixing fmt linting

* updating comments based on feedback

* Update config/features/flags.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* fixing based on feedback on PR

* Update config/validator/service/proposer_settings.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update validator/client/runner.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update validator/db/kv/proposer_settings.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* adding additional logs to clear up some steps based on feedback

* fixing log

* deepsource

* adding comments based on review feedback

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-06-29 02:49:21 +00:00
Preston Van Loon
cbe67f1970 Update protobuf and protobuf deps (#12569)
* Update protobuf and protobuf deps

* gazelle

* enforce c++14

* bump to c++17 since practically all modern compilers support it

* update protobuf again to resolve mac issues, bump c++20

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-28 14:50:43 +00:00
Potuz
5bb482e5d6 Remove forkchoice call from notify new payload (#12560)
* Remove forkchoice call from notify new payload

* add unit test
2023-06-28 13:38:24 +00:00
terencechain
83494c5b23 fix: use diff context to update proposer cache background (#12571) 2023-06-27 20:31:54 +00:00
terencechain
a10ffa9c0e Cache next epoch proposers at epoch boundary (#12484)
* Cache next epoch proposers at epoch boundary

* Fix new lines

* Use UpdateProposerIndicesInCache

* don't set state slot

* Update beacon_committee.go

* don't set state slot

* genesis epoch check

* Rm check

* fix: rm logging ctx

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* feat: move update to background

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-06-27 14:41:24 +00:00
Radosław Kapka
e545b57f26 Deflake cloners_test.go (#12566) 2023-06-26 15:43:00 +00:00
Preston Van Loon
c026b9e897 Set blst_modern=true to be the bazel default build (#12564)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-26 15:06:33 +00:00
Preston Van Loon
a19044051f Add hermetic_cc_toolchain for a hermetic cc toolchain (#12135)
* Add bazel-zig-cc for a hermetic cc toolchain

* gazelle

* Remove llvm

* remove wl

* Add new URLs for renamed repo

* gazelle

* Update to v2.0.0-rc1

* bump to rc2

* Some PR feedback

* use v2.0.0 from rc2

* Disable hermetic builds for mac and windows.

* bump bazel version, add darwin hack

* fix

* Add the no-op empty cc toolchain code

* typo and additional copy

* update protobuf and fix vaticle warning

* Revert "update protobuf and fix vaticle warning"

This reverts commit 7bb4b6b564.

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-26 14:31:40 +00:00
Potuz
1ebef16196 use the incoming payload status instead of calling forkchoice (#12559)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-22 18:09:02 +00:00
terencechain
8af634a6a0 feat: aggregate atts using fixed pool of go routines (#12553)
* feat: aggregate atts using fixed pool of go routines

* fix: deepsource complaints

* style: comment

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-06-22 17:48:42 +00:00
Potuz
884ba4959a Remove unneeded helper (#12558)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-06-22 16:54:52 +00:00
terencechain
75e94120b4 fix(aggregator): remove single bit aggregation (#12555) 2023-06-22 09:34:25 -07:00
Sammy Rosso
20f4d21b83 Keymanager API: Add validator voluntary exit endpoint (#12299)
* Initial setup

* Fix + Cleanup

* Add query

* Fix

* Add epoch

* James' review part 1

* James' review part 2

* James' review part 3

* Radek's review

* Gazelle

* Fix cycle

* Start unit test

* fixing part of the test

* Mostly fix test

* Fix tests

* Cleanup

* Handle error

* Remove times

* Fix all tests

* Fix accidental deletion

* Unmarshal epoch

* Add custom_type

* Small fix

* Fix epoch

* Lint fix

* Add test + fix empty query panic

* Add comment

* Fix regex

* Add correct error message

* Change current epoch to use slot

* Return error if incorrect epoch passed

* Remove redundant type conversion

* Fix tests

* gaz

* Remove nodeClient + pass slot

* Remove slot from parameters

* Fix tests

* Fix test attempt 2

* Fix test attempt 2

* Remove nodeClient from ProposeExit

* Fix

* Fix tests

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-06-21 14:06:16 -05:00
Bryce T
c018981951 Add expected withdrawals API (#12519)
* add structs for expected-withdrawals-api

* add server handler

* add tests

* add bazel file

* register api in service

* remove get prefix for endpoint

* fix review comments

* Update beacon-chain/rpc/eth/builder/handlers.go

* use goimports sorting type

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-06-21 14:36:47 +00:00
Radosław Kapka
b92226bedb getAttestationRewards API endpoint (#12480)
* handler

* very much work in progress

* remove Polish

* thinking

* working but differs from LH

* remove old stuff

* review from Potuz

* validator performance beacon server

* Revert "validator performance beacon server"

This reverts commit 42464cc6d3.

* reuse precompute calculations

* todos

* production quality

* add json tags to AttestationRewards

* Potuz's review

* extract vars

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-06-21 13:16:53 +00:00
Potuz
57f97feb84 Track optimistic status on head (#12552) 2023-06-20 08:59:48 -07:00
Sanghee Choi
2bf0560dc7 fix typo (beacon-chain/node/node.go) (#12551) 2023-06-20 08:32:34 +00:00
Radosław Kapka
a40f903f76 Fix TestFieldTrie_NativeState_fieldConvertersNative (#12550) 2023-06-19 13:49:12 +00:00
Sanghee Choi
ba55ae8cea fix typo (CONTRIBUTING.md) (#12548) 2023-06-18 19:24:19 -07:00
Potuz
27aac105d7 disable nil payload ID log on relayer flags (#12465)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-06-16 17:01:57 +00:00
terencechain
115d565f49 fix: late block task wait for initial sync (#12526)
* fix: late block task wait for initial sync

* fix: remove wait for clock
2023-06-16 13:47:19 +00:00
Potuz
019e0b56e2 Do not validate merge transition block after Capella (#12459) 2023-06-16 13:11:07 +00:00
Nishant Das
0efb038984 Fix Fuzz Target For ExecutionPayload (#12541) 2023-06-16 12:41:28 +00:00
Nishant Das
63d81144e9 Fix Uint256 Json Parsing (#12540)
* add stronger checks

* radek's review
2023-06-16 09:43:20 +00:00
james-prysm
6edbfa3128 multiple validator status - optimization (#12487)
* adding optimization

* addressing comments

* adding a test and fixing change in assignments.go

* making some changes based on review of the code

* removing irrelevant test

* changing formatting
2023-06-15 17:20:00 -05:00
Nishant Das
194b3b1c5e Ensure File Does Not Exist (#12536)
* error out

* gaz

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-06-15 21:41:46 +00:00
james-prysm
996ec67229 changing default on bad validators (#12535)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-06-15 16:40:59 +00:00
Nishant Das
c7b2c011d8 fix parsing (#12534) 2023-06-15 11:12:39 -04:00
james-prysm
d15122fae2 Reopening fix for keystore field name change to align with EIP2335 (#12530)
* adding changes

* fixing deepsource
2023-06-14 15:48:30 -05:00
Potuz
3e17dbb532 log the right blocknumber (#12529) 2023-06-14 19:55:33 +00:00
163 changed files with 6461 additions and 1565 deletions

View File

@@ -3,6 +3,7 @@ import %workspace%/build/bazelrc/convenience.bazelrc
import %workspace%/build/bazelrc/correctness.bazelrc
import %workspace%/build/bazelrc/cross.bazelrc
import %workspace%/build/bazelrc/debug.bazelrc
import %workspace%/build/bazelrc/hermetic-cc.bazelrc
import %workspace%/build/bazelrc/performance.bazelrc
# E2E run with debug gotag
@@ -14,7 +15,7 @@ coverage --define=coverage_enabled=1
# Stamp binaries with git information
build --workspace_status_command=./hack/workspace_status.sh
build --define blst_disabled=false
build --define blst_disabled=false --define blst_modern=true
run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true
@@ -27,30 +28,7 @@ build:minimal --@io_bazel_rules_go//go/config:tags=minimal
build:release --compilation_mode=opt
build:release --stamp
# LLVM compiler for building C/C++ dependencies.
build:llvm --define compiler=llvm
build:llvm --copt -fno-sanitize=vptr,function
build:llvm --linkopt -fno-sanitize=vptr,function
# --incompatible_enable_cc_toolchain_resolution not needed after this issue is closed https://github.com/bazelbuild/bazel/issues/7260
build:llvm --incompatible_enable_cc_toolchain_resolution
build:asan --copt -fsanitize=address,undefined
build:asan --copt -fno-omit-frame-pointer
build:asan --linkopt -fsanitize=address,undefined
build:asan --copt -fno-sanitize=vptr,function
build:asan --linkopt -fno-sanitize=vptr,function
build:asan --copt -DADDRESS_SANITIZER=1
build:asan --copt -D__SANITIZE_ADDRESS__
build:asan --linkopt -ldl
build:llvm-asan --config=llvm
build:llvm-asan --config=asan
build:llvm-asan --linkopt -fuse-ld=ld.lld
build:fuzz --@io_bazel_rules_go//go/config:tags=fuzz
# Build binary with cgo symbolizer for debugging / profiling.
build:cgo_symbolizer --config=llvm
build:cgo_symbolizer --copt=-g
build:cgo_symbolizer --define=USE_CGO_SYMBOLIZER=true
build:cgo_symbolizer -c dbg
@@ -59,9 +37,13 @@ build:cgo_symbolizer --define=gotags=cgosymbolizer_enabled
# toolchain build debug configs
#------------------------------
build:debug --sandbox_debug
build:debug --toolchain_resolution_debug
build:debug --toolchain_resolution_debug=".*"
build:debug --verbose_failures
build:debug -s
# Set bazel gotag
build --define gotags=bazel
# Abseil requires c++14 or greater.
build --cxxopt=-std=c++20
build --host_cxxopt=-std=c++20

View File

@@ -1 +1 @@
6.1.0
6.2.1

View File

@@ -3,7 +3,6 @@ load("@com_github_atlassian_bazel_tools//gometalinter:def.bzl", "gometalinter")
load("@com_github_atlassian_bazel_tools//goimports:def.bzl", "goimports")
load("@io_kubernetes_build//defs:run_in_workspace.bzl", "workspace_binary")
load("@io_bazel_rules_go//go:def.bzl", "nogo")
load("@vaticle_bazel_distribution//common:rules.bzl", "assemble_targz", "assemble_versioned")
load("@bazel_skylib//rules:common_settings.bzl", "string_setting")
prefix = "github.com/prysmaticlabs/prysm"

View File

@@ -1,6 +1,6 @@
# Contribution Guidelines
Note: The latest and most up to date documenation can be found on our [docs portal](https://docs.prylabs.network/docs/contribute/contribution-guidelines).
Note: The latest and most up-to-date documentation can be found on our [docs portal](https://docs.prylabs.network/docs/contribute/contribution-guidelines).
Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer?
@@ -10,9 +10,9 @@ You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues)
**1. Set up Prysm following the instructions in README.md.**
**2. Fork the prysm repo.**
**2. Fork the Prysm repo.**
Sign in to your Github account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
Sign in to your GitHub account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
**3. Create a local clone of Prysm.**
@@ -23,7 +23,7 @@ $ git clone https://github.com/prysmaticlabs/prysm.git
$ cd $GOPATH/src/github.com/prysmaticlabs/prysm
```
**4. Link your local clone to the fork on your Github repo.**
**4. Link your local clone to the fork on your GitHub repo.**
```
$ git remote add myprysmrepo https://github.com/<your_github_user_name>/prysm.git
@@ -68,7 +68,7 @@ $ go test <file_you_are_working_on>
$ git add --all
```
This command stages all of the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.
This command stages all the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.
**11. Commit the file or files.**
@@ -96,8 +96,7 @@ If there are conflicts between your edits and those made by others since you sta
$ git status
```
Open those files one at a time and you
will see lines inserted by Git that identify the conflicts:
Open those files one at a time, and you will see lines inserted by Git that identify the conflicts:
```
<<<<<< HEAD
@@ -119,7 +118,7 @@ $ git push myrepo feature-in-progress-branch
**15. Check to be sure your fork of the Prysm repo contains your feature branch with the latest edits.**
Navigate to your fork of the repo on Github. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
Navigate to your fork of the repo on GitHub. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
**16. Create a pull request.**
@@ -151,7 +150,7 @@ pick hash fix a bug
pick hash add a feature
```
Replace the word pick with the word “squash” for every line but the first so you end with ….
Replace the word pick with the word “squash” for every line but the first, so you end with ….
```
pick hash do some work
@@ -178,7 +177,7 @@ We consider two types of contributions to our repo and categorize them as follow
Anyone can become a part-time contributor and help out on implementing Ethereum consensus. The responsibilities of a part-time contributor include:
- Engaging in Gitter conversations, asking the questions on how to begin contributing to the project
- Opening up github issues to express interest in code to implement
- Opening up GitHub issues to express interest in code to implement
- Opening up PRs referencing any open issue in the repo. PRs should include:
- Detailed context of what would be required for merge
- Tests that are consistent with how other tests are written in our implementation
@@ -188,12 +187,12 @@ Anyone can become a part-time contributor and help out on implementing Ethereum
### Core Contributors
Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all of the responsibilities of part-time contributors plus the majority of the following:
Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all the responsibilities of part-time contributors plus the majority of the following:
- Stay up to date on the latest beacon chain specification
- Monitor github issues and PRs to make sure owner, labels, descriptions are correct
- Monitor GitHub issues and PRs to make sure owner, labels, descriptions are correct
- Formulate independent ideas, suggest new work to do, point out improvements to existing approaches
- Participate in code review, ensure code quality is excellent, and have ensure high code coverage
- Participate in code review, ensure code quality is excellent, and ensure high code coverage
- Help with social media presence, write bi-weekly development update
- Represent Prysmatic Labs at events to help spread the word on scalability research and solutions

View File

@@ -16,27 +16,37 @@ load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
HERMETIC_CC_TOOLCHAIN_VERSION = "v2.0.0"
http_archive(
name = "com_grail_bazel_toolchain",
sha256 = "b210fc8e58782ef171f428bfc850ed7179bdd805543ebd1aa144b9c93489134f",
strip_prefix = "bazel-toolchain-83e69ba9e4b4fdad0d1d057fcb87addf77c281c9",
urls = ["https://github.com/grailbio/bazel-toolchain/archive/83e69ba9e4b4fdad0d1d057fcb87addf77c281c9.tar.gz"],
name = "hermetic_cc_toolchain",
sha256 = "57f03a6c29793e8add7bd64186fc8066d23b5ffd06fe9cc6b0b8c499914d3a65",
urls = [
"https://mirror.bazel.build/github.com/uber/hermetic_cc_toolchain/releases/download/{0}/hermetic_cc_toolchain-{0}.tar.gz".format(HERMETIC_CC_TOOLCHAIN_VERSION),
"https://github.com/uber/hermetic_cc_toolchain/releases/download/{0}/hermetic_cc_toolchain-{0}.tar.gz".format(HERMETIC_CC_TOOLCHAIN_VERSION),
],
)
load("@com_grail_bazel_toolchain//toolchain:deps.bzl", "bazel_toolchain_dependencies")
load("@hermetic_cc_toolchain//toolchain:defs.bzl", zig_toolchains = "toolchains")
bazel_toolchain_dependencies()
zig_toolchains()
load("@com_grail_bazel_toolchain//toolchain:rules.bzl", "llvm_toolchain")
llvm_toolchain(
name = "llvm_toolchain",
llvm_version = "13.0.1",
# Register zig sdk toolchains with support for Ubuntu 20.04 (Focal Fossa) which has an EOL date of April, 2025.
# For ubuntu glibc support, see https://launchpad.net/ubuntu/+source/glibc
register_toolchains(
"@zig_sdk//toolchain:linux_amd64_gnu.2.31",
"@zig_sdk//toolchain:linux_arm64_gnu.2.31",
# Hermetic cc toolchain is not yet supported on darwin. Sysroot needs to be provided.
# See https://github.com/uber/hermetic_cc_toolchain#osx-sysroot
# "@zig_sdk//toolchain:darwin_amd64",
# "@zig_sdk//toolchain:darwin_arm64",
# Windows builds are not supported yet.
# "@zig_sdk//toolchain:windows_amd64",
)
load("@llvm_toolchain//:toolchains.bzl", "llvm_register_toolchains")
load("@prysm//tools/cross-toolchain:darwin_cc_hack.bzl", "configure_nonhermetic_darwin")
llvm_register_toolchains()
configure_nonhermetic_darwin()
load("@prysm//tools/cross-toolchain:prysm_toolchains.bzl", "configure_prysm_toolchains")
@@ -311,11 +321,13 @@ http_archive(
url = "https://github.com/bazelbuild/buildtools/archive/f2aed9ee205d62d45c55cfabbfd26342f8526862.zip",
)
git_repository(
http_archive(
name = "com_google_protobuf",
commit = "436bd7880e458532901c58f4d9d1ea23fa7edd52",
remote = "https://github.com/protocolbuffers/protobuf",
shallow_since = "1617835118 -0700",
sha256 = "4e176116949be52b0408dfd24f8925d1eb674a781ae242a75296b17a1c721395",
strip_prefix = "protobuf-23.3",
urls = [
"https://github.com/protocolbuffers/protobuf/archive/v23.3.tar.gz",
],
)
# Group the sources of the library so that CMake rule have access to it

View File

@@ -128,6 +128,7 @@ func TestDownloadWeakSubjectivityCheckpoint(t *testing.T) {
wst, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, epoch)
require.NoError(t, err)
require.NoError(t, wst.SetFork(fork))
// set up checkpoint block
@@ -226,6 +227,7 @@ func TestDownloadBackwardsCompatibleCombined(t *testing.T) {
wst, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, cfg.GenesisEpoch)
require.NoError(t, err)
require.NoError(t, wst.SetFork(fork))
// set up checkpoint block
@@ -399,6 +401,7 @@ func TestDownloadFinalizedData(t *testing.T) {
st, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, epoch)
require.NoError(t, err)
require.NoError(t, st.SetFork(fork))
require.NoError(t, st.SetSlot(slot))

View File

@@ -135,15 +135,14 @@ func (s Uint256) SSZBytes() []byte {
// UnmarshalJSON takes in a byte array and unmarshals the value in Uint256
func (s *Uint256) UnmarshalJSON(t []byte) error {
start := 0
end := len(t)
if t[0] == '"' {
start += 1
if len(t) < 2 {
return errors.Errorf("provided Uint256 json string is too short: %s", string(t))
}
if t[end-1] == '"' {
end -= 1
if t[0] != '"' || t[end-1] != '"' {
return errors.Errorf("provided Uint256 json string is malformed: %s", string(t))
}
return s.UnmarshalText(t[start:end])
return s.UnmarshalText(t[1 : end-1])
}
// UnmarshalText takes in a byte array and unmarshals the text in Uint256
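The hunk above tightens Uint256's JSON parsing: instead of optionally trimming whichever quotes happen to be present, it rejects inputs that are too short or not fully quoted. A minimal sketch of the same validation in isolation (unquote is an illustrative stand-in, not the real method):

```go
package main

import (
	"errors"
	"fmt"
)

// unquote mirrors the stricter checks: a JSON string value must be at
// least two bytes long and must both start and end with a double quote.
func unquote(t []byte) ([]byte, error) {
	if len(t) < 2 {
		return nil, errors.New("json string is too short")
	}
	if t[0] != '"' || t[len(t)-1] != '"' {
		return nil, errors.New("json string is malformed")
	}
	return t[1 : len(t)-1], nil
}

func main() {
	for _, in := range [][]byte{[]byte(`"`), []byte(`"12`), []byte(`"12"`)} {
		v, err := unquote(in)
		fmt.Printf("%q -> %q, err=%v\n", in, v, err)
	}
	// `"`   -> too short
	// `"12` -> malformed (missing closing quote)
	// `"12"`-> 12
}
```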

View File

@@ -1156,6 +1156,14 @@ func TestUint256Unmarshal(t *testing.T) {
require.Equal(t, expected, string(m))
}
func TestUint256Unmarshal_BadData(t *testing.T) {
var bigNum Uint256
assert.ErrorContains(t, "provided Uint256 json string is too short", bigNum.UnmarshalJSON([]byte{'"'}))
assert.ErrorContains(t, "provided Uint256 json string is malformed", bigNum.UnmarshalJSON([]byte{'"', '1', '2'}))
}
func TestUint256UnmarshalNegative(t *testing.T) {
m := "-1"
var value Uint256

View File

@@ -340,7 +340,13 @@ func (s *Service) IsOptimistic(_ context.Context) (bool, error) {
}
s.headLock.RLock()
headRoot := s.head.root
headSlot := s.head.slot
headOptimistic := s.head.optimistic
s.headLock.RUnlock()
// We trust the head package for recent head slots; otherwise fall back to forkchoice.
if headSlot+2 >= s.CurrentSlot() {
return headOptimistic, nil
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
@@ -493,6 +499,13 @@ func (s *Service) Ancestor(ctx context.Context, root []byte, slot primitives.Slo
return ar[:], nil
}
// SetOptimisticToInvalid wraps the corresponding method in forkchoice
func (s *Service) SetOptimisticToInvalid(ctx context.Context, root, parent, lvh [32]byte) ([][32]byte, error) {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
return s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, root, parent, lvh)
}
// SetGenesisTime sets the genesis time of beacon chain.
func (s *Service) SetGenesisTime(t time.Time) {
s.genesisTime = t
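The IsOptimistic change in the first hunk above trusts the cached head's optimistic flag only while the head is at most two slots behind the current slot, and falls back to forkchoice otherwise. A minimal sketch of that guard, with all names hypothetical:

```go
package main

import "fmt"

// isOptimisticSketch mirrors the freshness guard: trust the cached head's
// optimistic flag while the head is at most two slots old, otherwise
// consult forkchoice. All names here are illustrative simplifications.
func isOptimisticSketch(headSlot, currentSlot uint64, headOptimistic bool, forkchoice func() bool) bool {
	if headSlot+2 >= currentSlot {
		return headOptimistic // recent head: trust the cached status
	}
	return forkchoice() // stale head: fall back to forkchoice
}

func main() {
	fromForkchoice := func() bool { return false }
	fmt.Println(isOptimisticSketch(10, 11, true, fromForkchoice)) // true: head is fresh
	fmt.Println(isOptimisticSketch(10, 20, true, fromForkchoice)) // false: forkchoice wins
}
```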

View File

@@ -422,6 +422,12 @@ func TestService_IsOptimistic(t *testing.T) {
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, primitives.Slot(0), c.CurrentSlot())
require.Equal(t, false, opt)
c.SetGenesisTime(time.Now().Add(-time.Second * time.Duration(4*params.BeaconConfig().SecondsPerSlot)))
opt, err = c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
}

View File

@@ -41,13 +41,15 @@ var (
type invalidBlock struct {
invalidAncestorRoots [][32]byte
error
root [32]byte
root [32]byte
lastValidHash [32]byte
}
type invalidBlockError interface {
Error() string
InvalidAncestorRoots() [][32]byte
BlockRoot() [32]byte
LastValidHash() [32]byte
}
// BlockRoot returns the invalid block root.
@@ -55,6 +57,11 @@ func (e invalidBlock) BlockRoot() [32]byte {
return e.root
}
// LastValidHash returns the last valid hash root.
func (e invalidBlock) LastValidHash() [32]byte {
return e.lastValidHash
}
// InvalidAncestorRoots returns an optional list of invalid roots of the invalid block which lead up to the last valid root.
func (e invalidBlock) InvalidAncestorRoots() [][32]byte {
return e.invalidAncestorRoots
@@ -72,6 +79,19 @@ func IsInvalidBlock(e error) bool {
return true
}
// InvalidBlockLVH returns the invalid block last valid hash root. If the error
// doesn't have a last valid hash, [32]byte{} is returned.
func InvalidBlockLVH(e error) [32]byte {
if e == nil {
return [32]byte{}
}
d, ok := e.(invalidBlockError)
if !ok {
return [32]byte{}
}
return d.LastValidHash()
}
// InvalidBlockRoot returns the invalid block root. If the error
// doesn't have an invalid block root, [32]byte{} is returned.
func InvalidBlockRoot(e error) [32]byte {

View File

@@ -154,7 +154,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
var pId [8]byte
copy(pId[:], payloadID[:])
s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pId, arg.headRoot)
} else if hasAttr && payloadID == nil {
} else if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", headPayload.BlockHash()),
"slot": headBlk.Slot(),
@@ -182,21 +182,24 @@ func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, er
// notifyNewPayload signals execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
postStateHeader interfaces.ExecutionData, blk interfaces.ReadOnlySignedBeaconBlock) (bool, error) {
func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
preStateHeader interfaces.ExecutionData, blk interfaces.ReadOnlySignedBeaconBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()
// Execution payload is only supported in Bellatrix and beyond. Pre
// merge blocks are never optimistic
if blocks.IsPreBellatrixVersion(postStateVersion) {
if blk == nil {
return false, errors.New("signed beacon block can't be nil")
}
if preStateVersion < version.Bellatrix {
return true, nil
}
if err := consensusblocks.BeaconBlockIsNil(blk); err != nil {
return false, err
}
body := blk.Block().Body()
enabled, err := blocks.IsExecutionEnabledUsingHeader(postStateHeader, body)
enabled, err := blocks.IsExecutionEnabledUsingHeader(preStateHeader, body)
if err != nil {
return false, errors.Wrap(invalidBlock{error: err}, "could not determine if execution is enabled")
}
@@ -220,35 +223,37 @@ func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
}).Info("Called new payload with optimistic block")
return false, nil
case execution.ErrInvalidPayloadStatus:
newPayloadInvalidNodeCount.Inc()
root, err := blk.Block().HashTreeRoot()
if err != nil {
return false, err
}
invalidRoots, err := s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, root, blk.Block().ParentRoot(), bytesutil.ToBytes32(lastValidHash))
if err != nil {
return false, err
}
if err := s.removeInvalidBlockAndState(ctx, invalidRoots); err != nil {
return false, err
}
log.WithFields(logrus.Fields{
"slot": blk.Block().Slot(),
"blockRoot": fmt.Sprintf("%#x", root),
"invalidChildrenCount": len(invalidRoots),
}).Warn("Pruned invalid blocks")
lvh := bytesutil.ToBytes32(lastValidHash)
return false, invalidBlock{
invalidAncestorRoots: invalidRoots,
error: ErrInvalidPayload,
error: ErrInvalidPayload,
lastValidHash: lvh,
}
case execution.ErrInvalidBlockHashPayloadStatus:
newPayloadInvalidNodeCount.Inc()
return false, ErrInvalidBlockHashPayloadStatus
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
}
// pruneInvalidBlock deals with the event that an invalid block was detected by the execution layer
func (s *Service) pruneInvalidBlock(ctx context.Context, root, parentRoot, lvh [32]byte) error {
newPayloadInvalidNodeCount.Inc()
invalidRoots, err := s.SetOptimisticToInvalid(ctx, root, parentRoot, lvh)
if err != nil {
return err
}
if err := s.removeInvalidBlockAndState(ctx, invalidRoots); err != nil {
return err
}
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", root),
"invalidChildrenCount": len(invalidRoots),
}).Warn("Pruned invalid blocks")
return invalidBlock{
invalidAncestorRoots: invalidRoots,
error: ErrInvalidPayload,
lastValidHash: lvh,
}
}
// getPayloadAttribute returns the payload attributes for the given state and slot.
// The attribute is required to initiate a payload build process in the context of an `engine_forkchoiceUpdated` call.
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot primitives.Slot, headRoot []byte) (bool, payloadattribute.Attributer, primitives.ValidatorIndex) {

View File

@@ -525,11 +525,13 @@ func Test_NotifyNewPayload(t *testing.T) {
{
name: "phase 0 post state",
postState: phase0State,
blk: altairBlk, // same as phase 0 for this test
isValidPayload: true,
},
{
name: "altair post state",
postState: altairState,
blk: altairBlk,
isValidPayload: true,
},
{
@@ -743,6 +745,37 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
require.Equal(t, true, validated)
}
func Test_reportInvalidBlock(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MainnetConfig())
service, tr := minimalTestService(t)
ctx, _, fcs := tr.ctx, tr.db, tr.fcs
jcp := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 0, [32]byte{'A'}, [32]byte{}, [32]byte{'a'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 1, [32]byte{'B'}, [32]byte{'A'}, [32]byte{'b'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'C'}, [32]byte{'B'}, [32]byte{'c'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 3, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'d'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
require.NoError(t, fcs.SetOptimisticToValid(ctx, [32]byte{'A'}))
err = service.pruneInvalidBlock(ctx, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'a'})
require.Equal(t, IsInvalidBlock(err), true)
require.Equal(t, InvalidBlockLVH(err), [32]byte{'a'})
invalidRoots := InvalidAncestorRoots(err)
require.Equal(t, 3, len(invalidRoots))
require.Equal(t, [32]byte{'D'}, invalidRoots[0])
require.Equal(t, [32]byte{'C'}, invalidRoots[1])
require.Equal(t, [32]byte{'B'}, invalidRoots[2])
}
func Test_GetPayloadAttribute(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
ctx := tr.ctx

View File

@@ -47,9 +47,11 @@ func (s *Service) UpdateAndSaveHeadWithBalances(ctx context.Context) error {
// This defines the current chain service's view of head.
type head struct {
root [32]byte // current head root.
block interfaces.ReadOnlySignedBeaconBlock // current head block.
state state.BeaconState // current head state.
root [32]byte // current head root.
block interfaces.ReadOnlySignedBeaconBlock // current head block.
state state.BeaconState // current head state.
slot primitives.Slot // the head block slot number
optimistic bool // optimistic status when the head was saved
}
// This saves head info to the local service cache, it also saves the
@@ -94,6 +96,10 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
return errors.Wrap(err, "could not get old head root")
}
oldHeadRoot := bytesutil.ToBytes32(r)
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(newHeadRoot)
if err != nil {
log.WithError(err).Error("could not check if node is optimistically synced")
}
if headBlock.Block().ParentRoot() != oldHeadRoot {
// A chain re-org occurred, so we fire an event notifying the rest of the services.
commonRoot, forkSlot, err := s.cfg.ForkChoiceStore.CommonAncestor(ctx, oldHeadRoot, newHeadRoot)
@@ -125,10 +131,6 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
reorgDistance.Observe(float64(dis))
reorgDepth.Observe(float64(dep))
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(newHeadRoot)
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.Reorg,
Data: &ethpbv1.EventChainReorg{
@@ -150,7 +152,14 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
}
// Cache the new head info.
if err := s.setHead(newHeadRoot, headBlock, headState); err != nil {
newHead := &head{
root: newHeadRoot,
block: headBlock,
state: headState,
optimistic: isOptimistic,
slot: headBlock.Block().Slot(),
}
if err := s.setHead(newHead); err != nil {
return errors.Wrap(err, "could not set head")
}
@@ -195,20 +204,22 @@ func (s *Service) saveHeadNoDB(ctx context.Context, b interfaces.ReadOnlySignedB
return nil
}
// This sets head view object which is used to track the head slot, root, block and state.
func (s *Service) setHead(root [32]byte, block interfaces.ReadOnlySignedBeaconBlock, state state.BeaconState) error {
// This sets the head view object, which is used to track the head slot, root, block, state, and optimistic status.
func (s *Service) setHead(newHead *head) error {
s.headLock.Lock()
defer s.headLock.Unlock()
// This does a full copy of the block and state.
bCp, err := block.Copy()
bCp, err := newHead.block.Copy()
if err != nil {
return err
}
s.head = &head{
root: root,
block: bCp,
state: state.Copy(),
root: newHead.root,
block: bCp,
state: newHead.state.Copy(),
optimistic: newHead.optimistic,
slot: newHead.slot,
}
return nil
}

View File

@@ -120,7 +120,7 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
fields := logrus.Fields{
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
"parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash())),
"blockNumber": payload.BlockNumber,
"blockNumber": payload.BlockNumber(),
"gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
}
if block.Version() >= version.Capella {

View File

@@ -172,11 +172,15 @@ var (
})
onBlockProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "on_block_processing_milliseconds",
Help: "Total time in milliseconds to complete a call to onBlock()",
Help: "Total time in milliseconds to complete a call to postBlockProcess()",
})
stateTransitionProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "state_transition_processing_milliseconds",
Help: "Total time to call a state transition in onBlock()",
Help: "Total time to call a state transition in validateStateTransition()",
})
chainServiceProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "chain_service_processing_milliseconds",
Help: "Total time to call a chain service in ReceiveBlock()",
})
processAttsElapsedTime = promauto.NewHistogram(
prometheus.HistogramOpts{

View File

@@ -172,3 +172,10 @@ func WithClockSynchronizer(gs *startup.ClockSynchronizer) Option {
return nil
}
}
// WithSyncComplete sets the channel that signals when initial sync has completed.
func WithSyncComplete(c chan struct{}) Option {
return func(s *Service) error {
s.syncComplete = c
return nil
}
}

View File

@@ -22,7 +22,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/monitoring/tracing"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
@@ -40,59 +39,11 @@ const depositDeadline = 20 * time.Second
// This defines size of the upper bound for initial sync block cache.
var initialSyncBlockCacheSize = uint64(2 * params.BeaconConfig().SlotsPerEpoch)
// onBlock is called when a gossip block is received. It runs regular state transition on the block.
// The block's signing root should be computed before calling this method to avoid redundant
// computation in this method and methods it calls into.
//
// Spec pseudocode definition:
//
// def on_block(store: Store, signed_block: ReadOnlySignedBeaconBlock) -> None:
// block = signed_block.message
// # Parent block must be known
// assert block.parent_root in store.block_states
// # Make a copy of the state to avoid mutability issues
// pre_state = copy(store.block_states[block.parent_root])
// # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
// assert get_current_slot(store) >= block.slot
//
// # Check that block is later than the finalized epoch slot (optimization to reduce calls to get_ancestor)
// finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
// assert block.slot > finalized_slot
// # Check block is a descendant of the finalized block at the checkpoint finalized slot
// assert get_ancestor(store, block.parent_root, finalized_slot) == store.finalized_checkpoint.root
//
// # Check the block is valid and compute the post-state
// state = pre_state.copy()
// state_transition(state, signed_block, True)
// # Add new block to the store
// store.blocks[hash_tree_root(block)] = block
// # Add new state for this block to the store
// store.block_states[hash_tree_root(block)] = state
//
// # Update justified checkpoint
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
// store.best_justified_checkpoint = state.current_justified_checkpoint
// if should_update_justified_checkpoint(store, state.current_justified_checkpoint):
// store.justified_checkpoint = state.current_justified_checkpoint
//
// # Update finalized checkpoint
// if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
// store.finalized_checkpoint = state.finalized_checkpoint
//
// # Potentially update justified if different from store
// if store.justified_checkpoint != state.current_justified_checkpoint:
// # Update justified if new justified is later than store justified
// if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// store.justified_checkpoint = state.current_justified_checkpoint
// return
//
// # Update justified if store justified is not in chain with finalized checkpoint
// finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
// ancestor_at_finalized_slot = get_ancestor(store, store.justified_checkpoint.root, finalized_slot)
// if ancestor_at_finalized_slot != store.finalized_checkpoint.root:
// store.justified_checkpoint = state.current_justified_checkpoint
func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) error {
// postBlockProcess is called when a gossip block is received. This function performs
// several duties, most importantly informing the engine if the head was updated,
// saving the new head information to the blockchain package and database, and
// handling attestations, slashings, and similar data included in the block.
func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, postState state.BeaconState, isValidPayload bool) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlock")
defer span.End()
if err := consensusblocks.BeaconBlockIsNil(signed); err != nil {
@@ -101,52 +52,7 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
startTime := time.Now()
b := signed.Block()
preState, err := s.getBlockPreState(ctx, b)
if err != nil {
return err
}
// Verify that the parent block is in forkchoice
if !s.cfg.ForkChoiceStore.HasNode(b.ParentRoot()) {
return ErrNotDescendantOfFinalized
}
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.cfg.ForkChoiceStore.JustifiedCheckpoint().Epoch
currStoreFinalizedEpoch := s.cfg.ForkChoiceStore.FinalizedCheckpoint().Epoch
preStateFinalizedEpoch := preState.FinalizedCheckpoint().Epoch
preStateJustifiedEpoch := preState.CurrentJustifiedCheckpoint().Epoch
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
}
stateTransitionStartTime := time.Now()
postState, err := transition.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
return invalidBlock{error: err}
}
stateTransitionProcessingTime.Observe(float64(time.Since(stateTransitionStartTime).Milliseconds()))
postStateVersion, postStateHeader, err := getStateVersionAndPayload(postState)
if err != nil {
return err
}
isValidPayload, err := s.notifyNewPayload(ctx, postStateVersion, postStateHeader, signed)
if err != nil {
return errors.Wrap(err, "could not validate new payload")
}
if isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, preStateVersion, preStateHeader, signed); err != nil {
return err
}
}
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState); err != nil {
return err
}
if err := s.insertBlockToForkchoiceStore(ctx, signed.Block(), blockRoot, postState); err != nil {
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, postState, blockRoot); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", signed.Block().Slot())
}
if err := s.handleBlockAttestations(ctx, signed.Block(), postState); err != nil {
@@ -160,34 +66,6 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
}
}
// If slasher is configured, forward the attestations in the block via
// an event feed for processing.
if features.Get().EnableSlasher {
// Feed the indexed attestation to slasher if enabled. This action
// is done in the background to avoid adding more load to this critical code path.
go func() {
// Using a different context to prevent timeouts as this operation can be expensive
// and we want to avoid affecting the critical code path.
ctx := context.TODO()
for _, att := range signed.Block().Body().Attestations() {
committee, err := helpers.BeaconCommitteeFromState(ctx, preState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
log.WithError(err).Error("Could not get attestation committee")
tracing.AnnotateError(span, err)
return
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
log.WithError(err).Error("Could not convert to indexed attestation")
tracing.AnnotateError(span, err)
return
}
s.cfg.SlasherAttestationsFeed.Send(indexedAtt)
}
}()
}
justified := s.cfg.ForkChoiceStore.JustifiedCheckpoint()
start := time.Now()
headRoot, err := s.cfg.ForkChoiceStore.Head(ctx)
if err != nil {
@@ -240,50 +118,6 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
},
})
// Save justified check point to db.
postStateJustifiedEpoch := postState.CurrentJustifiedCheckpoint().Epoch
if justified.Epoch > currStoreJustifiedEpoch || (justified.Epoch == postStateJustifiedEpoch && justified.Epoch > preStateJustifiedEpoch) {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{
Epoch: justified.Epoch, Root: justified.Root[:],
}); err != nil {
return err
}
}
// Save finalized check point to db and more.
postStateFinalizedEpoch := postState.FinalizedCheckpoint().Epoch
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
if finalized.Epoch > currStoreFinalizedEpoch || (finalized.Epoch == postStateFinalizedEpoch && finalized.Epoch > preStateFinalizedEpoch) {
if err := s.updateFinalized(ctx, &ethpb.Checkpoint{Epoch: finalized.Epoch, Root: finalized.Root[:]}); err != nil {
return err
}
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(finalized.Root)
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
go func() {
// Send an event regarding the new finalized checkpoint over a common event feed.
stateRoot := signed.Block().StateRoot()
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.FinalizedCheckpoint,
Data: &ethpbv1.EventFinalizedCheckpoint{
Epoch: postState.FinalizedCheckpoint().Epoch,
Block: postState.FinalizedCheckpoint().Root,
State: stateRoot[:],
ExecutionOptimistic: isOptimistic,
},
})
// Use a custom deadline here, since this method runs asynchronously.
// We ignore the parent method's context and instead create a new one
// with a custom deadline, therefore using the background context instead.
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
defer cancel()
if err := s.insertFinalizedDeposits(depCtx, finalized.Root); err != nil {
log.WithError(err).Error("Could not insert finalized deposits.")
}
}()
}
defer reportAttestationInclusion(b)
if err := s.handleEpochBoundary(ctx, postState, blockRoot[:]); err != nil {
return err
@@ -409,7 +243,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.ReadOnlySi
postVersionAndHeaders[i].version,
postVersionAndHeaders[i].header, b)
if err != nil {
return err
return s.handleInvalidExecutionError(ctx, err, blockRoots[i], b.Block().ParentRoot())
}
if isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, preVersionAndHeaders[i].version,
@@ -498,9 +332,20 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
if err := helpers.UpdateCommitteeCache(ctx, copied, coreTime.CurrentEpoch(copied)); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, copied); err != nil {
e := coreTime.CurrentEpoch(copied)
if err := helpers.UpdateProposerIndicesInCache(ctx, copied, e); err != nil {
return err
}
go func() {
// Use a custom deadline here, since this method runs asynchronously.
// We ignore the parent method's context and instead create a new one
// with a custom deadline, therefore using the background context instead.
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := helpers.UpdateProposerIndicesInCache(slotCtx, copied, e+1); err != nil {
log.WithError(err).Warn("Failed to cache next epoch proposers")
}
}()
} else if postState.Slot() >= s.nextEpochBoundarySlot {
s.nextEpochBoundarySlot, err = slots.EpochStart(coreTime.NextEpoch(postState))
if err != nil {
@@ -512,7 +357,7 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
if err := helpers.UpdateCommitteeCache(ctx, postState, coreTime.CurrentEpoch(postState)); err != nil {
return err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, postState); err != nil {
if err := helpers.UpdateProposerIndicesInCache(ctx, postState, coreTime.CurrentEpoch(postState)); err != nil {
return err
}
@@ -524,27 +369,9 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
return err
}
}
return nil
}
// This feeds the block into the fork choice store. It allows the fork choice store
// to gain information on the most current chain.
func (s *Service) insertBlockToForkchoiceStore(ctx context.Context, blk interfaces.ReadOnlyBeaconBlock, root [32]byte, st state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.insertBlockToForkchoiceStore")
defer span.End()
if !s.cfg.ForkChoiceStore.HasNode(blk.ParentRoot()) {
fCheckpoint := st.FinalizedCheckpoint()
jCheckpoint := st.CurrentJustifiedCheckpoint()
if err := s.fillInForkChoiceMissingBlocks(ctx, blk, fCheckpoint, jCheckpoint); err != nil {
return err
}
}
return s.cfg.ForkChoiceStore.InsertNode(ctx, st, root)
}
// This feeds the attestations included in the block into the fork choice store. It allows the fork choice store
// to gain information on the most current chain.
func (s *Service) handleBlockAttestations(ctx context.Context, blk interfaces.ReadOnlyBeaconBlock, st state.BeaconState) error {
@@ -652,18 +479,17 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
// This routine checks if there is a cached proposer payload ID available for the next slot proposer.
// If there is not, it will call forkchoice updated with the correct payload attribute then cache the payload ID.
func (s *Service) runLateBlockTasks() {
_, err := s.clockWaiter.WaitForClock(s.ctx)
if err != nil {
log.WithError(err).Error("runLateBlockTasks encountered an error waiting for initialization")
if err := s.waitForSync(); err != nil {
log.WithError(err).Error("failed to wait for initial sync")
return
}
attThreshold := params.BeaconConfig().SecondsPerSlot / 3
ticker := slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
for {
select {
case <-ticker.C():
s.lateBlockTasks(s.ctx)
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
return
@@ -720,3 +546,20 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
}
// waitForSync blocks until the node is synced to the head.
func (s *Service) waitForSync() error {
select {
case <-s.syncComplete:
return nil
case <-s.ctx.Done():
return errors.New("context closed, exiting goroutine")
}
}
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot [32]byte, parentRoot [32]byte) error {
if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
}
return err
}

View File

@@ -14,6 +14,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v4/math"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
@@ -209,35 +210,44 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, blk interfa
return s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes)
}
// inserts finalized deposits into our finalized deposit trie.
func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) error {
// inserts finalized deposits into our finalized deposit trie; it needs to be
// called in the background.
func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "blockChain.insertFinalizedDeposits")
defer span.End()
startTime := time.Now()
// Update deposit cache.
finalizedState, err := s.cfg.StateGen.StateByRoot(ctx, fRoot)
if err != nil {
return errors.Wrap(err, "could not fetch finalized state")
log.WithError(err).Error("could not fetch finalized state")
return
}
// We update the cache up to the last deposit index in the finalized block's state.
// We can be confident that these deposits will be included in some block
// because the Eth1 follow distance makes such long-range reorgs extremely unlikely.
eth1DepositIndex, err := mathutil.Int(finalizedState.Eth1DepositIndex())
if err != nil {
return errors.Wrap(err, "could not cast eth1 deposit index")
log.WithError(err).Error("could not cast eth1 deposit index")
return
}
// The deposit index in the state is always the index of the next deposit
// to be included (rather than the last one processed). This was most likely
// done as the state cannot represent signed integers.
eth1DepositIndex -= 1
if err = s.cfg.DepositCache.InsertFinalizedDeposits(ctx, int64(eth1DepositIndex)); err != nil {
return err
finalizedEth1DepIdx := eth1DepositIndex - 1
if err = s.cfg.DepositCache.InsertFinalizedDeposits(ctx, int64(finalizedEth1DepIdx)); err != nil {
log.WithError(err).Error("could not insert finalized deposits")
return
}
// Deposit proofs are only used during state transition and can be safely removed to save space.
if err = s.cfg.DepositCache.PruneProofs(ctx, int64(eth1DepositIndex)); err != nil {
return errors.Wrap(err, "could not prune deposit proofs")
if err = s.cfg.DepositCache.PruneProofs(ctx, int64(finalizedEth1DepIdx)); err != nil {
log.WithError(err).Error("could not prune deposit proofs")
}
return nil
// Prune deposits which have already been finalized; the method below prunes all pending deposits up to
// (but not including) the provided eth1 deposit index.
s.cfg.DepositCache.PrunePendingDeposits(ctx, int64(eth1DepositIndex)) // lint:ignore uintcast -- Deposit index should not exceed int64 in your lifetime.
log.WithField("duration", time.Since(startTime).String()).Debug("Finalized deposit insertion completed")
}
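The index bookkeeping above hides an easy off-by-one: the state's Eth1DepositIndex names the next deposit to include, so the last finalized deposit sits one below it, while pending-deposit pruning still takes the unadjusted value. A worked sketch with a deposit index of 8, the same value the tests below use (illustrative only):

package main

import "fmt"

func main() {
	eth1DepositIndex := uint64(8)               // next deposit the state would include
	finalizedEth1DepIdx := eth1DepositIndex - 1 // last deposit actually included: 7
	fmt.Println("finalize trie and prune proofs up to index", int64(finalizedEth1DepIdx))
	fmt.Println("prune pending deposits (non-inclusive) up to index", int64(eth1DepositIndex))
}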
// This ensures that the input root defaults to using genesis root instead of zero hashes. This is needed for handling

View File

@@ -41,103 +41,6 @@ import (
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestStore_OnBlock(t *testing.T) {
service, tr := minimalTestService(t)
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
var genesisStateRoot [32]byte
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
util.SaveBlock(t, ctx, beaconDB, genesis)
validGenesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
ojc := &ethpb.Checkpoint{}
stfcs, root, err := prepareForkchoiceState(ctx, 0, validGenesisRoot, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stfcs, root))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
random := util.NewBeaconBlock()
random.Block.Slot = 1
random.Block.ParentRoot = validGenesisRoot[:]
util.SaveBlock(t, ctx, beaconDB, random)
randomParentRoot, err := random.Block.HashTreeRoot()
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: st.Slot(), Root: randomParentRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), randomParentRoot))
randomParentRoot2 := roots[1]
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: st.Slot(), Root: randomParentRoot2}))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), bytesutil.ToBytes32(randomParentRoot2)))
stfcs, root, err = prepareForkchoiceState(ctx, 2, bytesutil.ToBytes32(randomParentRoot2),
validGenesisRoot, [32]byte{'r'}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stfcs, root))
tests := []struct {
name string
blk *ethpb.SignedBeaconBlock
s state.BeaconState
time uint64
wantErrString string
}{
{
name: "parent block root does not have a state",
blk: util.NewBeaconBlock(),
s: st.Copy(),
wantErrString: "could not reconstruct parent state",
},
{
name: "block is from the future",
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.ParentRoot = randomParentRoot2
b.Block.Slot = params.BeaconConfig().FarFutureSlot
return b
}(),
s: st.Copy(),
wantErrString: "is in the far distant future",
},
{
name: "could not get finalized block",
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.ParentRoot = randomParentRoot[:]
b.Block.Slot = 2
return b
}(),
s: st.Copy(),
wantErrString: "not descendant of finalized checkpoint",
},
{
name: "same slot as finalized block",
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.Slot = 0
b.Block.ParentRoot = randomParentRoot2
return b
}(),
s: st.Copy(),
wantErrString: "block is equal or earlier than finalized block, slot 0 < slot 0",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fRoot := bytesutil.ToBytes32(roots[0])
require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: fRoot}))
root, err := tt.blk.Block.HashTreeRoot()
assert.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(tt.blk)
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
assert.ErrorContains(t, tt.wantErrString, err)
})
}
}
func TestStore_OnBlockBatch(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
@@ -657,7 +560,20 @@ func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, fcs.NewSlot(ctx, i))
require.NoError(t, service.onBlock(ctx, wsb, r))
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := service.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := service.FinalizedCheckpt().Epoch
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, true))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -692,7 +608,20 @@ func TestOnBlock_CanFinalize(t *testing.T) {
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, r))
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := service.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := service.FinalizedCheckpt().Epoch
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, true))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -714,8 +643,7 @@ func TestOnBlock_CanFinalize(t *testing.T) {
func TestOnBlock_NilBlock(t *testing.T) {
service, tr := minimalTestService(t)
err := service.onBlock(tr.ctx, nil, [32]byte{})
err := service.postBlockProcess(tr.ctx, nil, [32]byte{}, nil, true)
require.Equal(t, true, IsInvalidBlock(err))
}
@@ -729,11 +657,11 @@ func TestOnBlock_InvalidSignature(t *testing.T) {
blk, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
blk.Signature = []byte{'a'} // Mutate the signature.
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = service.onBlock(ctx, wsb, r)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
_, err = service.validateStateTransition(ctx, preState, wsb)
require.Equal(t, true, IsInvalidBlock(err))
}
@@ -757,7 +685,13 @@ func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, r))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, false))
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -783,7 +717,7 @@ func TestInsertFinalizedDeposits(t *testing.T) {
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root)))
}
assert.NoError(t, service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'}))
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits := depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 7, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps := depositCache.AllDeposits(ctx, big.NewInt(107))
@@ -792,6 +726,45 @@ func TestInsertFinalizedDeposits(t *testing.T) {
}
}
func TestInsertFinalizedDeposits_PrunePendingDeposits(t *testing.T) {
service, tr := minimalTestService(t)
ctx, depositCache := tr.ctx, tr.dc
gs, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
gs = gs.Copy()
assert.NoError(t, gs.SetEth1Data(&ethpb.Eth1Data{DepositCount: 10}))
assert.NoError(t, gs.SetEth1DepositIndex(8))
assert.NoError(t, service.cfg.StateGen.SaveState(ctx, [32]byte{'m', 'o', 'c', 'k'}, gs))
var zeroSig [96]byte
for i := uint64(0); i < uint64(4*params.BeaconConfig().SlotsPerEpoch); i++ {
root := []byte(strconv.Itoa(int(i)))
assert.NoError(t, depositCache.InsertDeposit(ctx, &ethpb.Deposit{Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.FromBytes48([fieldparams.BLSPubkeyLength]byte{}),
WithdrawalCredentials: params.BeaconConfig().ZeroHash[:],
Amount: 0,
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root)))
depositCache.InsertPendingDeposit(ctx, &ethpb.Deposit{Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.FromBytes48([fieldparams.BLSPubkeyLength]byte{}),
WithdrawalCredentials: params.BeaconConfig().ZeroHash[:],
Amount: 0,
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root))
}
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits := depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 7, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps := depositCache.AllDeposits(ctx, big.NewInt(107))
for _, d := range deps {
assert.DeepEqual(t, [][]byte(nil), d.Proof, "Proofs are not empty")
}
pendingDeps := depositCache.PendingContainers(ctx, nil)
for _, d := range pendingDeps {
assert.DeepEqual(t, true, d.Index >= 8, "Pending deposits were not pruned")
}
}
func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
service, tr := minimalTestService(t)
ctx, depositCache := tr.ctx, tr.dc
@@ -819,7 +792,7 @@ func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
// Insert 3 deposits before hand.
require.NoError(t, depositCache.InsertFinalizedDeposits(ctx, 2))
assert.NoError(t, service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'}))
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits := depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 5, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
@@ -829,7 +802,7 @@ func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
}
// Insert New Finalized State with higher deposit count.
assert.NoError(t, service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k', '2'}))
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k', '2'})
fDeposits = depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 12, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps = depositCache.AllDeposits(ctx, big.NewInt(112))
@@ -1131,19 +1104,35 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
var wg sync.WaitGroup
wg.Add(4)
go func() {
require.NoError(t, service.onBlock(ctx, wsb1, r1))
preState, err := service.getBlockPreState(ctx, wsb1.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb1)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb1, r1, postState, true))
wg.Done()
}()
go func() {
require.NoError(t, service.onBlock(ctx, wsb2, r2))
preState, err := service.getBlockPreState(ctx, wsb2.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb2)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb2, r2, postState, true))
wg.Done()
}()
go func() {
require.NoError(t, service.onBlock(ctx, wsb3, r3))
preState, err := service.getBlockPreState(ctx, wsb3.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb3)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb3, r3, postState, true))
wg.Done()
}()
go func() {
require.NoError(t, service.onBlock(ctx, wsb4, r4))
preState, err := service.getBlockPreState(ctx, wsb4.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb4)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb4, r4, postState, true))
wg.Done()
}()
wg.Wait()
@@ -1211,7 +1200,13 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
}
for i := 6; i < 12; i++ {
@@ -1224,7 +1219,12 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
@@ -1238,7 +1238,12 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
@@ -1255,7 +1260,12 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
firstInvalidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, firstInvalidRoot)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, firstInvalidRoot, postState, false)
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1278,7 +1288,12 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head is the last invalid block imported. The
// store's headroot is the previous head (since the invalid block did
@@ -1301,7 +1316,13 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
require.NoError(t, err)
// Check the newly imported block is head, it justified the right
// checkpoint and the node is no longer optimistic
@@ -1358,7 +1379,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
}
for i := 6; i < 12; i++ {
@@ -1371,7 +1397,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
@@ -1385,7 +1416,13 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
@@ -1402,7 +1439,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
firstInvalidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, firstInvalidRoot)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, firstInvalidRoot, postState, false)
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1425,7 +1467,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, wsb, root)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head and store's headroot are the previous head (since the invalid block did
// not finish importing and it was never imported to forkchoice). Check
@@ -1448,7 +1495,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
require.NoError(t, err)
// Check the newly imported block is head, it justified the right
// checkpoint and the node is no longer optimistic
@@ -1506,7 +1558,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
}
for i := 6; i < 12; i++ {
@@ -1519,7 +1577,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
@@ -1533,7 +1597,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
lastValidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, lastValidRoot)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, lastValidRoot, postState, false)
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1555,7 +1624,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
invalidRoots[i-13], err = b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, invalidRoots[i-13])
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, invalidRoots[i-13], wsb, postState))
err = service.postBlockProcess(ctx, wsb, invalidRoots[i-13], postState, false)
require.NoError(t, err)
}
// Check that we have justified the second epoch
@@ -1576,7 +1650,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, wsb, root)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head and store's headroot are the previous head (since the invalid block did
@@ -1610,7 +1689,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, true))
// Check that the head is still INVALID and the node is still optimistic
require.Equal(t, invalidHeadRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
optimistic, err = service.IsOptimistic(ctx)
@@ -1628,7 +1712,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
require.NoError(t, err)
st, err = service.cfg.StateGen.StateByRoot(ctx, root)
require.NoError(t, err)
@@ -1648,7 +1737,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
require.NoError(t, err)
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
sjc = service.CurrentJustifiedCheckpt()
@@ -1699,7 +1794,12 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
}
for i := 6; i < 12; i++ {
@@ -1712,7 +1812,12 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
require.NoError(t, err)
}
@@ -1726,7 +1831,12 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
lastValidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, lastValidRoot)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, lastValidRoot, postState, false)
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1747,7 +1857,18 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := service.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := service.FinalizedCheckpt().Epoch
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
}
// Check that we have justified the second epoch
jc := service.cfg.ForkChoiceStore.JustifiedCheckpoint()
@@ -1766,7 +1887,11 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, wsb, root)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that the headroot/state are not in DB and restart the node
@@ -1848,7 +1973,12 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
st, err = service.HeadState(ctx)
require.NoError(t, err)

View File

@@ -128,7 +128,13 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, tRoot))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, tRoot, postState, false))
copied, err = service.cfg.StateGen.StateByRoot(ctx, tRoot)
require.NoError(t, err)
require.Equal(t, 2, fcs.NodeCount())
@@ -178,7 +184,13 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, tRoot))
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, tRoot, postState, false))
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.Equal(t, tRoot, service.head.root)

View File

@@ -7,11 +7,18 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/monitoring/tracing"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
@@ -47,15 +54,65 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
}
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := s.FinalizedCheckpt().Epoch
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
}
postState, err := s.validateStateTransition(ctx, preState, blockCopy)
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
isValidPayload, err := s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, blockCopy, blockRoot)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
// The rest of block processing takes a lock on forkchoice.
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
if err := s.savePostStateInfo(ctx, blockRoot, blockCopy, postState); err != nil {
return errors.Wrap(err, "could not save post state info")
}
// Apply state transition on the new block.
if err := s.onBlock(ctx, blockCopy, blockRoot); err != nil {
if err := s.postBlockProcess(ctx, blockCopy, blockRoot, postState, isValidPayload); err != nil {
err := errors.Wrap(err, "could not process block")
tracing.AnnotateError(span, err)
return err
}
if err := s.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch); err != nil {
return errors.Wrap(err, "could not update justified checkpoint")
}
newFinalized, err := s.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
if err != nil {
return errors.Wrap(err, "could not update finalized checkpoint")
}
// Send finalized events and finalized deposits in the background
if newFinalized {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
go s.sendNewFinalizedEvent(ctx, blockCopy, postState, finalized)
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDeposits(depCtx, finalized.Root)
cancel()
}()
}
// If slasher is configured, forward the attestations in the block via an event feed for processing.
if features.Get().EnableSlasher {
go s.sendBlockAttestationsToSlasher(blockCopy, preState)
}
// Handle post-block operations such as pruning exits and BLS messages if the incoming block is the head
if err := s.prunePostBlockOperationPools(ctx, blockCopy, blockRoot); err != nil {
log.WithError(err).Error("Could not prune canonical objects from pool ")
@@ -86,6 +143,8 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
log.WithError(err).Error("Unable to log state transition data")
}
chainServiceProcessingTime.Observe(float64(time.Since(receivedTime).Milliseconds()))
return nil
}
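The finalized-deposit insertion above is detached from the request on purpose: a fresh context with depositDeadline is derived from context.Background() rather than the caller's ctx, which may be cancelled the moment ReceiveBlock returns, and cancel runs as soon as the goroutine finishes so the timer is released. A minimal sketch of that pattern; the 20-second deadline and insertDeposits are stand-ins, not the package's values:

package main

import (
	"context"
	"fmt"
	"time"
)

// insertDeposits is a stand-in for s.insertFinalizedDeposits(ctx, root).
func insertDeposits(ctx context.Context) {
	fmt.Println("still within deadline:", ctx.Err() == nil)
}

func main() {
	depositDeadline := 20 * time.Second // hypothetical value for illustration
	depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
	done := make(chan struct{})
	go func() {
		insertDeposits(depCtx) // keeps running even after the caller returns
		cancel()               // release the timer once the work completes
		close(done)
	}()
	<-done
}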
@@ -226,3 +285,109 @@ func (s *Service) checkSaveHotStateDB(ctx context.Context) error {
return s.cfg.StateGen.DisableSaveHotStateToDB(ctx)
}
// This performs the state transition function and returns the post-state, or an
// error if the block fails to satisfy the consensus rules.
func (s *Service) validateStateTransition(ctx context.Context, preState state.BeaconState, signed interfaces.ReadOnlySignedBeaconBlock) (state.BeaconState, error) {
b := signed.Block()
// Verify that the parent block is in forkchoice
parentRoot := b.ParentRoot()
if !s.InForkchoice(parentRoot) {
return nil, ErrNotDescendantOfFinalized
}
stateTransitionStartTime := time.Now()
postState, err := transition.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
return nil, invalidBlock{error: err}
}
stateTransitionProcessingTime.Observe(float64(time.Since(stateTransitionStartTime).Milliseconds()))
return postState, nil
}
// updateJustificationOnBlock updates the justified checkpoint on DB if the
// incoming block has updated it on forkchoice.
func (s *Service) updateJustificationOnBlock(ctx context.Context, preState, postState state.BeaconState, preJustifiedEpoch primitives.Epoch) error {
justified := s.cfg.ForkChoiceStore.JustifiedCheckpoint()
preStateJustifiedEpoch := preState.CurrentJustifiedCheckpoint().Epoch
postStateJustifiedEpoch := postState.CurrentJustifiedCheckpoint().Epoch
if justified.Epoch > preJustifiedEpoch || (justified.Epoch == postStateJustifiedEpoch && justified.Epoch > preStateJustifiedEpoch) {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{
Epoch: justified.Epoch, Root: justified.Root[:],
}); err != nil {
return err
}
}
return nil
}
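The save predicate above is compact enough to misread: the checkpoint is written either when forkchoice's justified epoch moved past what the store held before the block, or when it equals the post-state's justified epoch and that epoch itself just advanced across the state transition. A sketch evaluating the same predicate for a few hypothetical epoch values:

package main

import "fmt"

// shouldSave mirrors the condition in updateJustificationOnBlock.
func shouldSave(justified, preJustified, preStateJustified, postStateJustified uint64) bool {
	return justified > preJustified ||
		(justified == postStateJustified && justified > preStateJustified)
}

func main() {
	fmt.Println(shouldSave(3, 2, 2, 3)) // true: forkchoice outran the store
	fmt.Println(shouldSave(3, 3, 2, 3)) // true: this block's transition advanced justification
	fmt.Println(shouldSave(3, 3, 3, 3)) // false: nothing moved, skip the DB write
}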
// updateFinalizationOnBlock performs some duties when the incoming block
// changes the finalized checkpoint. It returns true when this has happened.
func (s *Service) updateFinalizationOnBlock(ctx context.Context, preState, postState state.BeaconState, preFinalizedEpoch primitives.Epoch) (bool, error) {
preStateFinalizedEpoch := preState.FinalizedCheckpoint().Epoch
postStateFinalizedEpoch := postState.FinalizedCheckpoint().Epoch
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
if finalized.Epoch > preFinalizedEpoch || (finalized.Epoch == postStateFinalizedEpoch && finalized.Epoch > preStateFinalizedEpoch) {
if err := s.updateFinalized(ctx, &ethpb.Checkpoint{Epoch: finalized.Epoch, Root: finalized.Root[:]}); err != nil {
return true, err
}
return true, nil
}
return false, nil
}
// sendNewFinalizedEvent sends a new finalization checkpoint event over the
// event feed. It needs to be called in the background.
func (s *Service) sendNewFinalizedEvent(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, postState state.BeaconState, finalized *forkchoicetypes.Checkpoint) {
isOptimistic := false
s.headLock.RLock()
if s.head != nil {
isOptimistic = s.head.optimistic
}
s.headLock.RUnlock()
// Send an event regarding the new finalized checkpoint over a common event feed.
stateRoot := signed.Block().StateRoot()
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.FinalizedCheckpoint,
Data: &ethpbv1.EventFinalizedCheckpoint{
Epoch: postState.FinalizedCheckpoint().Epoch,
Block: postState.FinalizedCheckpoint().Root,
State: stateRoot[:],
ExecutionOptimistic: isOptimistic,
},
})
}
// sendBlockAttestationsToSlasher sends the incoming block's attestations to the slasher
func (s *Service) sendBlockAttestationsToSlasher(signed interfaces.ReadOnlySignedBeaconBlock, preState state.BeaconState) {
// Feed the indexed attestation to slasher if enabled. This action
// is done in the background to avoid adding more load to this critical code path.
ctx := context.TODO()
for _, att := range signed.Block().Body().Attestations() {
committee, err := helpers.BeaconCommitteeFromState(ctx, preState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
log.WithError(err).Error("Could not get attestation committee")
return
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
log.WithError(err).Error("Could not convert to indexed attestation")
return
}
s.cfg.SlasherAttestationsFeed.Send(indexedAtt)
}
}
// validateExecutionOnBlock notifies the engine of the incoming block's execution payload and returns true if the payload is valid
func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header interfaces.ExecutionData, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) (bool, error) {
isValidPayload, err := s.notifyNewPayload(ctx, ver, header, signed)
if err != nil {
return false, s.handleInvalidExecutionError(ctx, err, blockRoot, signed.Block().ParentRoot())
}
if signed.Version() < version.Capella && isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, ver, header, signed); err != nil {
return isValidPayload, err
}
}
return isValidPayload, nil
}

View File

@@ -17,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
@@ -60,6 +61,7 @@ type Service struct {
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
}
// config options for the service.
@@ -307,7 +309,13 @@ func (s *Service) initializeHeadFromDB(ctx context.Context) error {
if err != nil {
return errors.Wrap(err, "could not get finalized block")
}
if err := s.setHead(finalizedRoot, finalizedBlock, finalizedState); err != nil {
if err := s.setHead(&head{
finalizedRoot,
finalizedBlock,
finalizedState,
finalizedBlock.Block().Slot(),
false,
}); err != nil {
return errors.Wrap(err, "could not set head")
}
@@ -401,7 +409,7 @@ func (s *Service) initializeBeaconChain(
if err := helpers.UpdateCommitteeCache(ctx, genesisState, 0); err != nil {
return nil, err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, genesisState); err != nil {
if err := helpers.UpdateProposerIndicesInCache(ctx, genesisState, coreTime.CurrentEpoch(genesisState)); err != nil {
return nil, err
}
@@ -439,7 +447,13 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
}
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(s.genesisTime.Unix()))
if err := s.setHead(genesisBlkRoot, genesisBlk, genesisState); err != nil {
if err := s.setHead(&head{
genesisBlkRoot,
genesisBlk,
genesisState,
genesisBlk.Block().Slot(),
false,
}); err != nil {
log.WithError(err).Fatal("Could not set head")
}
return nil

View File

@@ -377,9 +377,7 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
require.NoError(t, err)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, s.insertBlockToForkchoiceStore(ctx, wsb.Block(), r, beaconState))
require.NoError(t, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, r))
assert.Equal(t, false, s.hasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.hasBlock(ctx, r), "Should have block")
@@ -453,9 +451,7 @@ func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
bs := &ethpb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}}
beaconState, err := state_native.InitializeFromProtoPhase0(bs)
require.NoError(b, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(b, err)
require.NoError(b, s.insertBlockToForkchoiceStore(ctx, wsb.Block(), r, beaconState))
require.NoError(b, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, r))
b.ResetTimer()
for i := 0; i < b.N; i++ {

View File

@@ -13,6 +13,15 @@ import (
"go.opencensus.io/trace"
)
// AttDelta contains rewards and penalties for a single attestation.
type AttDelta struct {
HeadReward uint64
SourceReward uint64
SourcePenalty uint64
TargetReward uint64
TargetPenalty uint64
}
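AttDelta replaces the old pair of flat uint64 slices with named per-component rewards and penalties. Callers that still want the aggregate numbers can sum the fields, exactly as the updated tests further down do; a small hedged helper sketch (totals is hypothetical, not part of this package):

// totals collapses an AttDelta back into the old (reward, penalty) pair.
func totals(d *AttDelta) (reward, penalty uint64) {
	reward = d.HeadReward + d.SourceReward + d.TargetReward
	penalty = d.SourcePenalty + d.TargetPenalty
	return reward, penalty
}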
// InitializePrecomputeValidators precomputes each validator's attested balances and the total sum of all validators' attested balances for the epoch.
func InitializePrecomputeValidators(ctx context.Context, beaconState state.BeaconState) ([]*precompute.Validator, *precompute.Balance, error) {
ctx, span := trace.StartSpan(ctx, "altair.InitializePrecomputeValidators")
@@ -226,7 +235,7 @@ func ProcessRewardsAndPenaltiesPrecompute(
return beaconState, errors.New("validator registries not the same length as state's validator registries")
}
attsRewards, attsPenalties, err := AttestationsDelta(beaconState, bal, vals)
attDeltas, err := AttestationsDelta(beaconState, bal, vals)
if err != nil {
return nil, errors.Wrap(err, "could not get attestation delta")
}
@@ -237,11 +246,12 @@ func ProcessRewardsAndPenaltiesPrecompute(
// Compute the post balance of the validator after accounting for the
// attester and proposer rewards and penalties.
balances[i], err = helpers.IncreaseBalanceWithVal(balances[i], attsRewards[i])
delta := attDeltas[i]
balances[i], err = helpers.IncreaseBalanceWithVal(balances[i], delta.HeadReward+delta.SourceReward+delta.TargetReward)
if err != nil {
return nil, err
}
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], attsPenalties[i])
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], delta.SourcePenalty+delta.TargetPenalty)
vals[i].AfterEpochTransitionBalance = balances[i]
}
@@ -255,10 +265,8 @@ func ProcessRewardsAndPenaltiesPrecompute(
// AttestationsDelta computes and returns the reward and penalty deltas for individual validators based on their
// voting records.
func AttestationsDelta(beaconState state.BeaconState, bal *precompute.Balance, vals []*precompute.Validator) (rewards, penalties []uint64, err error) {
numOfVals := beaconState.NumValidators()
rewards = make([]uint64, numOfVals)
penalties = make([]uint64, numOfVals)
func AttestationsDelta(beaconState state.BeaconState, bal *precompute.Balance, vals []*precompute.Validator) ([]*AttDelta, error) {
attDeltas := make([]*AttDelta, len(vals))
cfg := params.BeaconConfig()
prevEpoch := time.PrevEpoch(beaconState)
@@ -272,29 +280,29 @@ func AttestationsDelta(beaconState state.BeaconState, bal *precompute.Balance, v
bias := cfg.InactivityScoreBias
inactivityPenaltyQuotient, err := beaconState.InactivityPenaltyQuotient()
if err != nil {
return nil, nil, err
return nil, err
}
inactivityDenominator := bias * inactivityPenaltyQuotient
for i, v := range vals {
rewards[i], penalties[i], err = attestationDelta(bal, v, baseRewardMultiplier, inactivityDenominator, leak)
attDeltas[i], err = attestationDelta(bal, v, baseRewardMultiplier, inactivityDenominator, leak)
if err != nil {
return nil, nil, err
return nil, err
}
}
return rewards, penalties, nil
return attDeltas, nil
}
func attestationDelta(
bal *precompute.Balance,
val *precompute.Validator,
baseRewardMultiplier, inactivityDenominator uint64,
inactivityLeak bool) (reward, penalty uint64, err error) {
inactivityLeak bool) (*AttDelta, error) {
eligible := val.IsActivePrevEpoch || (val.IsSlashed && !val.IsWithdrawableCurrentEpoch)
// Per spec `ActiveCurrentEpoch` can't be 0 to process attestation delta.
if !eligible || bal.ActiveCurrentEpoch == 0 {
return 0, 0, nil
return &AttDelta{}, nil
}
cfg := params.BeaconConfig()
@@ -307,32 +315,32 @@ func attestationDelta(
srcWeight := cfg.TimelySourceWeight
tgtWeight := cfg.TimelyTargetWeight
headWeight := cfg.TimelyHeadWeight
reward, penalty = uint64(0), uint64(0)
attDelta := &AttDelta{}
// Process source reward / penalty
if val.IsPrevEpochSourceAttester && !val.IsSlashed {
if !inactivityLeak {
n := baseReward * srcWeight * (bal.PrevEpochAttested / increment)
reward += n / (activeIncrement * weightDenominator)
attDelta.SourceReward += n / (activeIncrement * weightDenominator)
}
} else {
penalty += baseReward * srcWeight / weightDenominator
attDelta.SourcePenalty += baseReward * srcWeight / weightDenominator
}
// Process target reward / penalty
if val.IsPrevEpochTargetAttester && !val.IsSlashed {
if !inactivityLeak {
n := baseReward * tgtWeight * (bal.PrevEpochTargetAttested / increment)
reward += n / (activeIncrement * weightDenominator)
attDelta.TargetReward += n / (activeIncrement * weightDenominator)
}
} else {
penalty += baseReward * tgtWeight / weightDenominator
attDelta.TargetPenalty += baseReward * tgtWeight / weightDenominator
}
// Process head reward / penalty
if val.IsPrevEpochHeadAttester && !val.IsSlashed {
if !inactivityLeak {
n := baseReward * headWeight * (bal.PrevEpochHeadAttested / increment)
reward += n / (activeIncrement * weightDenominator)
attDelta.HeadReward += n / (activeIncrement * weightDenominator)
}
}
@@ -341,10 +349,10 @@ func attestationDelta(
if !val.IsPrevEpochTargetAttester || val.IsSlashed {
n, err := math.Mul64(effectiveBalance, val.InactivityScore)
if err != nil {
return 0, 0, err
return &AttDelta{}, err
}
penalty += n / inactivityDenominator
attDelta.TargetPenalty += n / inactivityDenominator
}
return reward, penalty, nil
return attDelta, nil
}
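A worked pass through the source branch makes the integer arithmetic concrete. All numbers below are hypothetical (baseReward 16, TimelySourceWeight 14, WeightDenominator 64, three of four active increments attested, no inactivity leak); the point is the shape of the formula, not mainnet constants:

package main

import "fmt"

func main() {
	baseReward, srcWeight, weightDenominator := uint64(16), uint64(14), uint64(64)
	prevEpochAttestedIncrements, activeIncrements := uint64(3), uint64(4)

	// timely source attester outside an inactivity leak
	n := baseReward * srcWeight * prevEpochAttestedIncrements
	fmt.Println("source reward:", n/(activeIncrements*weightDenominator)) // 672/256 = 2

	// missed (or slashed): flat penalty, applied leak or not
	fmt.Println("source penalty:", baseReward*srcWeight/weightDenominator) // 224/64 = 3
}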

View File

@@ -213,9 +213,16 @@ func TestAttestationsDelta(t *testing.T) {
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
rewards, penalties, err := AttestationsDelta(s, balance, validators)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
rewards := make([]uint64, len(deltas))
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
}
// Reward amount should increase as validator index increases due to setup.
for i := 1; i < len(rewards); i++ {
require.Equal(t, true, rewards[i] > rewards[i-1])
@@ -244,9 +251,16 @@ func TestAttestationsDeltaBellatrix(t *testing.T) {
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
rewards, penalties, err := AttestationsDelta(s, balance, validators)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
rewards := make([]uint64, len(deltas))
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
}
// Reward amount should increase as validator index increases due to setup.
for i := 1; i < len(rewards); i++ {
require.Equal(t, true, rewards[i] > rewards[i-1])
@@ -285,8 +299,15 @@ func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
}
wanted := make([]uint64, s.NumValidators())
rewards, penalties, err := AttestationsDelta(s, balance, validators)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
rewards := make([]uint64, len(deltas))
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
}
for i := range rewards {
wanted[i] += rewards[i]
}

View File

@@ -195,6 +195,7 @@ func IsSyncCommitteeAggregator(sig []byte) (bool, error) {
}
// ValidateSyncMessageTime validates sync message to ensure that the provided slot is valid.
// Spec: [IGNORE] The message's slot is for the current slot (with a MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance), i.e. sync_committee_message.slot == current_slot
func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockDisparity time.Duration) error {
if err := slots.ValidateClock(slot, uint64(genesisTime.Unix())); err != nil {
return err
@@ -223,13 +224,12 @@ func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockD
// Verify sync message slot is within the time range.
if messageTime.Before(lowerBound) || messageTime.After(upperBound) {
syncErr := fmt.Errorf(
"sync message time %v (slot %d) not within allowable range of %v (slot %d) to %v (slot %d)",
"sync message time %v (message slot %d) not within allowable range of %v to %v (current slot %d)",
messageTime,
slot,
lowerBound,
uint64(lowerBound.Unix()-genesisTime.Unix())/params.BeaconConfig().SecondsPerSlot,
upperBound,
uint64(upperBound.Unix()-genesisTime.Unix())/params.BeaconConfig().SecondsPerSlot,
currentSlot,
)
// Wrap error message if sync message is too late.
if messageTime.Before(lowerBound) {

View File

@@ -311,7 +311,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
syncMessageSlot: 16,
genesisTime: prysmTime.Now().Add(-(15 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)),
},
wantedErr: "(slot 16) not within allowable range of",
wantedErr: "(message slot 16) not within allowable range of",
},
{
name: "sync_message.slot == current_slot+CLOCK_DISPARITY",
@@ -327,7 +327,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
syncMessageSlot: 100,
genesisTime: prysmTime.Now().Add(-(100 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second) + params.BeaconNetworkConfig().MaximumGossipClockDisparity + 1000*time.Millisecond),
},
wantedErr: "(slot 100) not within allowable range of",
wantedErr: "(message slot 100) not within allowable range of",
},
{
name: "sync_message.slot == current_slot-CLOCK_DISPARITY",
@@ -343,7 +343,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
syncMessageSlot: 101,
genesisTime: prysmTime.Now().Add(-(100*time.Duration(params.BeaconConfig().SecondsPerSlot)*time.Second + params.BeaconNetworkConfig().MaximumGossipClockDisparity)),
},
wantedErr: "(slot 101) not within allowable range of",
wantedErr: "(message slot 101) not within allowable range of",
},
{
name: "sync_message.slot is well beyond current slot",

View File

@@ -38,6 +38,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -336,20 +336,21 @@ func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState,
}
// UpdateProposerIndicesInCache updates the proposer-indices entry of the committee cache.
func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaconState) error {
// Input state is used to retrieve active validator indices.
// Input epoch is the epoch to retrieve proposer indices for.
func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) error {
// The cache is keyed by the state root at the last slot of (current epoch - 1) (e.g. for epoch 2, the key is the root at slot 63),
// which is why we skip the genesis epoch.
if time.CurrentEpoch(state) <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
if epoch <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
return nil
}
// Use the state root from (current_epoch - 1)
wantedEpoch := time.PrevEpoch(state)
s, err := slots.EpochEnd(wantedEpoch)
s, err := slots.EpochEnd(epoch - 1)
if err != nil {
return err
}
r, err := StateRootAtSlot(state, s)
r, err := state.StateRootAtIndex(uint64(s % params.BeaconConfig().SlotsPerHistoricalRoot))
if err != nil {
return err
}
@@ -366,11 +367,11 @@ func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaco
return nil
}
indices, err := ActiveValidatorIndices(ctx, state, time.CurrentEpoch(state))
indices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return err
}
proposerIndices, err := precomputeProposerIndices(state, indices)
proposerIndices, err := precomputeProposerIndices(state, indices, epoch)
if err != nil {
return err
}
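The key derivation above is plain slot arithmetic: for a target epoch e, the cache key is the state root at the last slot of e - 1, read from the state's root history modulo SlotsPerHistoricalRoot. A sketch with mainnet's 32 slots per epoch, reproducing the slot-63-for-epoch-2 example from the comment (constants assumed, not imported):

package main

import "fmt"

// epochEnd returns the last slot of the given epoch.
func epochEnd(epoch, slotsPerEpoch uint64) uint64 {
	return (epoch+1)*slotsPerEpoch - 1
}

func main() {
	slotsPerEpoch := uint64(32)            // mainnet SlotsPerEpoch
	slotsPerHistoricalRoot := uint64(8192) // mainnet SlotsPerHistoricalRoot
	epoch := uint64(2)
	keySlot := epochEnd(epoch-1, slotsPerEpoch)
	fmt.Println("cache key: state root at slot", keySlot)                   // 63
	fmt.Println("index into root history:", keySlot%slotsPerHistoricalRoot) // 63
}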
@@ -432,11 +433,10 @@ func computeCommittee(
// This computes the proposer indices for the given epoch and returns them as a list,
// where the list index represents the slot number within the epoch.
func precomputeProposerIndices(state state.ReadOnlyBeaconState, activeIndices []primitives.ValidatorIndex) ([]primitives.ValidatorIndex, error) {
func precomputeProposerIndices(state state.ReadOnlyBeaconState, activeIndices []primitives.ValidatorIndex, e primitives.Epoch) ([]primitives.ValidatorIndex, error) {
hashFunc := hash.CustomSHA256Hasher()
proposerIndices := make([]primitives.ValidatorIndex, params.BeaconConfig().SlotsPerEpoch)
e := time.CurrentEpoch(state)
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
return nil, errors.Wrap(err, "could not generate seed")
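
The epoch-keyed flow above reduces to a small helper. A hedged sketch assuming the prysm v4 packages named in the hunk (slots.EpochEnd, StateRootAtIndex, SlotsPerHistoricalRoot); proposerCacheRoot is a hypothetical name, not the library's API:

package sketch

import (
	"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
	"github.com/prysmaticlabs/prysm/v4/config/params"
	"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v4/time/slots"
)

// proposerCacheRoot returns the cache key for an epoch: the state root at the
// last slot of (epoch - 1), read from the state's historical-roots ring buffer.
func proposerCacheRoot(st state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]byte, error) {
	s, err := slots.EpochEnd(epoch - 1)
	if err != nil {
		return nil, err
	}
	return st.StateRootAtIndex(uint64(s % params.BeaconConfig().SlotsPerHistoricalRoot))
}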

View File

@@ -639,7 +639,7 @@ func TestPrecomputeProposerIndices_Ok(t *testing.T) {
indices, err := ActiveValidatorIndices(context.Background(), state, 0)
require.NoError(t, err)
proposerIndices, err := precomputeProposerIndices(state, indices)
proposerIndices, err := precomputeProposerIndices(state, indices, time.CurrentEpoch(state))
require.NoError(t, err)
var wantedProposerIndices []primitives.ValidatorIndex

View File

@@ -17,6 +17,7 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var CommitteeCacheInProgressHit = promauto.NewCounter(prometheus.CounterOpts{
@@ -261,7 +262,7 @@ func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (
}
return proposerIndices[state.Slot()%params.BeaconConfig().SlotsPerEpoch], nil
}
if err := UpdateProposerIndicesInCache(ctx, state); err != nil {
if err := UpdateProposerIndicesInCache(ctx, state, time.CurrentEpoch(state)); err != nil {
return 0, errors.Wrap(err, "could not update committee cache")
}
}
@@ -396,3 +397,22 @@ func isEligibleForActivation(activationEligibilityEpoch, activationEpoch, finali
return activationEligibilityEpoch <= finalizedEpoch &&
activationEpoch == params.BeaconConfig().FarFutureEpoch
}
// LastActivatedValidatorIndex provides the index of the last activated validator in the given state.
func LastActivatedValidatorIndex(ctx context.Context, st state.ReadOnlyBeaconState) (primitives.ValidatorIndex, error) {
_, span := trace.StartSpan(ctx, "helpers.LastActivatedValidatorIndex")
defer span.End()
var lastActivatedValidatorIndex primitives.ValidatorIndex
// Linear search in reverse, because validator statuses are not sorted.
for j := st.NumValidators() - 1; j >= 0; j-- {
val, err := st.ValidatorAtIndexReadOnly(primitives.ValidatorIndex(j))
if err != nil {
return 0, err
}
if IsActiveValidatorUsingTrie(val, time.CurrentEpoch(st)) {
lastActivatedValidatorIndex = primitives.ValidatorIndex(j)
break
}
}
return lastActivatedValidatorIndex, nil
}

View File

@@ -727,3 +727,26 @@ func computeProposerIndexWithValidators(validators []*ethpb.Validator, activeInd
}
}
}
func TestLastActivatedValidatorIndex_OK(t *testing.T) {
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
validators := make([]*ethpb.Validator, 4)
balances := make([]uint64, len(validators))
for i := uint64(0); i < 4; i++ {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, params.BeaconConfig().BLSPubkeyLength),
WithdrawalCredentials: make([]byte, 32),
EffectiveBalance: 32 * 1e9,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
balances[i] = validators[i].EffectiveBalance
}
require.NoError(t, beaconState.SetValidators(validators))
require.NoError(t, beaconState.SetBalances(balances))
index, err := LastActivatedValidatorIndex(context.Background(), beaconState)
require.NoError(t, err)
require.Equal(t, index, primitives.ValidatorIndex(3))
}

View File

@@ -115,28 +115,32 @@ func FuzzExchangeTransitionConfiguration(f *testing.F) {
func FuzzExecutionPayload(f *testing.F) {
logsBloom := [256]byte{'j', 'u', 'n', 'k'}
execData := &engine.ExecutableData{
ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
ReceiptsRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
LogsBloom: logsBloom[:],
Random: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Number: math.MaxUint64,
GasLimit: math.MaxUint64,
GasUsed: math.MaxUint64,
Timestamp: 100,
ExtraData: nil,
BaseFeePerGas: big.NewInt(math.MaxInt),
BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},
execData := &engine.ExecutionPayloadEnvelope{
ExecutionPayload: &engine.ExecutableData{
ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
ReceiptsRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
LogsBloom: logsBloom[:],
Random: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Number: math.MaxUint64,
GasLimit: math.MaxUint64,
GasUsed: math.MaxUint64,
Timestamp: 100,
ExtraData: nil,
BaseFeePerGas: big.NewInt(math.MaxInt),
BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},
Withdrawals: []*types.Withdrawal{},
},
BlockValue: nil,
}
output, err := json.Marshal(execData)
assert.NoError(f, err)
f.Add(output)
f.Fuzz(func(t *testing.T, jsonBlob []byte) {
gethResp := &engine.ExecutableData{}
prysmResp := &pb.ExecutionPayload{}
gethResp := &engine.ExecutionPayloadEnvelope{}
prysmResp := &pb.ExecutionPayloadCapellaWithValue{}
gethErr := json.Unmarshal(jsonBlob, gethResp)
prysmErr := json.Unmarshal(jsonBlob, prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, fmt.Sprintf("geth and prysm unmarshaller return inconsistent errors. %v and %v", gethErr, prysmErr))
@@ -147,10 +151,10 @@ func FuzzExecutionPayload(f *testing.F) {
gethBlob, gethErr := json.Marshal(gethResp)
prysmBlob, prysmErr := json.Marshal(prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, "geth and prysm unmarshaller return inconsistent errors")
newGethResp := &engine.ExecutableData{}
newGethResp := &engine.ExecutionPayloadEnvelope{}
newGethErr := json.Unmarshal(prysmBlob, newGethResp)
assert.NoError(t, newGethErr)
newGethResp2 := &engine.ExecutableData{}
newGethResp2 := &engine.ExecutionPayloadEnvelope{}
newGethErr = json.Unmarshal(gethBlob, newGethResp2)
assert.NoError(t, newGethErr)

View File

@@ -7,7 +7,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
)
func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, payloadHash [32]byte) ([][32]byte, error) {
func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, lastValidHash [32]byte) ([][32]byte, error) {
invalidRoots := make([][32]byte, 0)
node, ok := s.nodeByRoot[root]
if !ok {
@@ -16,7 +16,7 @@ func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, pa
return invalidRoots, errors.Wrap(ErrNilNode, "could not set node to invalid")
}
// return early if the parent is LVH
if node.payloadHash == payloadHash {
if node.payloadHash == lastValidHash {
return invalidRoots, nil
}
} else {
@@ -28,7 +28,7 @@ func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, pa
}
}
firstInvalid := node
for ; firstInvalid.parent != nil && firstInvalid.parent.payloadHash != payloadHash; firstInvalid = firstInvalid.parent {
for ; firstInvalid.parent != nil && firstInvalid.parent.payloadHash != lastValidHash; firstInvalid = firstInvalid.parent {
if ctx.Err() != nil {
return invalidRoots, ctx.Err()
}
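
The rename clarifies the invariant: the loop walks parents until it reaches the node whose payload hash equals the last valid hash (LVH), collecting everything in between as invalid. A self-contained sketch with a hypothetical node type, not the forkchoice store's:

package main

import "fmt"

type node struct {
	parent      *node
	payloadHash [32]byte
}

// invalidChain returns every node from n back to, but excluding, the first
// ancestor whose payload hash equals lastValidHash.
func invalidChain(n *node, lastValidHash [32]byte) []*node {
	out := []*node{}
	for ; n != nil && n.payloadHash != lastValidHash; n = n.parent {
		out = append(out, n)
	}
	return out
}

func main() {
	lvh := [32]byte{1}
	root := &node{payloadHash: lvh}
	child := &node{parent: root, payloadHash: [32]byte{2}}
	grandchild := &node{parent: child, payloadHash: [32]byte{3}}
	fmt.Println(len(invalidChain(grandchild, lvh))) // 2
}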

View File

@@ -230,13 +230,13 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
return nil, err
}
log.Debugln("Registering Determinstic Genesis Service")
if err := beacon.registerDeterminsticGenesisService(); err != nil {
log.Debugln("Registering Deterministic Genesis Service")
if err := beacon.registerDeterministicGenesisService(); err != nil {
return nil, err
}
log.Debugln("Registering Blockchain Service")
if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer); err != nil {
if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer, beacon.initialSyncComplete); err != nil {
return nil, err
}
@@ -590,7 +590,7 @@ func (b *BeaconNode) registerAttestationPool() error {
return b.services.RegisterService(s)
}
func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *startup.ClockSynchronizer) error {
func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *startup.ClockSynchronizer, syncComplete chan struct{}) error {
var web3Service *execution.Service
if err := b.services.FetchService(&web3Service); err != nil {
return err
@@ -621,6 +621,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
blockchain.WithFinalizedStateAtStartUp(b.finalizedStateAtStartUp),
blockchain.WithProposerIdsCache(b.proposerIdsCache),
blockchain.WithClockSynchronizer(gs),
blockchain.WithSyncComplete(syncComplete),
)
blockchainService, err := blockchain.NewService(b.ctx, opts...)
@@ -923,7 +924,7 @@ func (b *BeaconNode) registerGRPCGateway(router *mux.Router) error {
return b.services.RegisterService(g)
}
func (b *BeaconNode) registerDeterminsticGenesisService() error {
func (b *BeaconNode) registerDeterministicGenesisService() error {
genesisTime := b.cliCtx.Uint64(flags.InteropGenesisTimeFlag.Name)
genesisValidators := b.cliCtx.Uint64(flags.InteropNumValidatorsFlag.Name)
@@ -986,7 +987,8 @@ func (b *BeaconNode) registerBuilderService(cliCtx *cli.Context) error {
opts := append(b.serviceFlagOpts.builderOpts,
builder.WithHeadFetcher(chainService),
builder.WithDatabase(b.db))
if cliCtx.Bool(flags.EnableRegistrationCache.Name) {
// Make the registration cache the default.
if !cliCtx.Bool(features.DisableRegistrationCache.Name) {
opts = append(opts, builder.WithRegistrationCache())
}
svc, err := builder.NewService(b.ctx, opts...)

View File

@@ -47,6 +47,7 @@ go_test(
deps = [
"//async:go_default_library",
"//beacon-chain/operations/attestations/kv:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//crypto/bls:go_default_library",

View File

@@ -14,6 +14,7 @@ go_library(
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
@@ -39,8 +40,8 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",

View File

@@ -2,9 +2,12 @@ package kv
import (
"context"
"runtime"
"sync"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
attaggregation "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation/aggregation/attestations"
@@ -23,21 +26,11 @@ func (c *AttCaches) AggregateUnaggregatedAttestations(ctx context.Context) error
if err != nil {
return err
}
return c.aggregateUnaggregatedAttestations(ctx, unaggregatedAtts)
return c.aggregateUnaggregatedAtts(ctx, unaggregatedAtts)
}
// AggregateUnaggregatedAttestationsBySlotIndex aggregates the unaggregated attestations and saves
// newly aggregated attestations in the pool. Unaggregated attestations are filtered by slot and
// committee index.
func (c *AttCaches) AggregateUnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) error {
ctx, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregateUnaggregatedAttestationsBySlotIndex")
defer span.End()
unaggregatedAtts := c.UnaggregatedAttestationsBySlotIndex(ctx, slot, committeeIndex)
return c.aggregateUnaggregatedAttestations(ctx, unaggregatedAtts)
}
func (c *AttCaches) aggregateUnaggregatedAttestations(ctx context.Context, unaggregatedAtts []*ethpb.Attestation) error {
ctx, span := trace.StartSpan(ctx, "operations.attestations.kv.aggregateUnaggregatedAttestations")
func (c *AttCaches) aggregateUnaggregatedAtts(ctx context.Context, unaggregatedAtts []*ethpb.Attestation) error {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.aggregateUnaggregatedAtts")
defer span.End()
attsByDataRoot := make(map[[32]byte][]*ethpb.Attestation, len(unaggregatedAtts))
@@ -52,26 +45,32 @@ func (c *AttCaches) aggregateUnaggregatedAttestations(ctx context.Context, unagg
// Aggregate unaggregated attestations from the pool and save them in the pool.
// Track the unaggregated attestations that could not be aggregated.
leftOverUnaggregatedAtt := make(map[[32]byte]bool)
for _, atts := range attsByDataRoot {
aggregated, err := attaggregation.AggregateDisjointOneBitAtts(atts)
if err != nil {
return errors.Wrap(err, "could not aggregate unaggregated attestations")
}
if aggregated == nil {
return errors.New("could not aggregate unaggregated attestations")
}
if helpers.IsAggregated(aggregated) {
if err := c.SaveAggregatedAttestations([]*ethpb.Attestation{aggregated}); err != nil {
return err
}
} else {
h, err := hashFn(aggregated)
if features.Get().AggregateParallel {
leftOverUnaggregatedAtt = c.aggregateParallel(attsByDataRoot, leftOverUnaggregatedAtt)
} else {
for _, atts := range attsByDataRoot {
aggregated, err := attaggregation.AggregateDisjointOneBitAtts(atts)
if err != nil {
return err
return errors.Wrap(err, "could not aggregate unaggregated attestations")
}
if aggregated == nil {
return errors.New("could not aggregate unaggregated attestations")
}
if helpers.IsAggregated(aggregated) {
if err := c.SaveAggregatedAttestations([]*ethpb.Attestation{aggregated}); err != nil {
return err
}
} else {
h, err := hashFn(aggregated)
if err != nil {
return err
}
leftOverUnaggregatedAtt[h] = true
}
leftOverUnaggregatedAtt[h] = true
}
}
// Remove the unaggregated attestations from the pool that were successfully aggregated.
for _, att := range unaggregatedAtts {
h, err := hashFn(att)
@@ -88,6 +87,58 @@ func (c *AttCaches) aggregateUnaggregatedAttestations(ctx context.Context, unagg
return nil
}
// aggregateParallel aggregates the attestations in `atts` in parallel and saves them in the pool;
// it returns the unaggregated attestations that could not be aggregated.
// Given `n` CPU cores, it creates a channel of size `n` and spawns `n` goroutines to aggregate attestations.
func (c *AttCaches) aggregateParallel(atts map[[32]byte][]*ethpb.Attestation, leftOver map[[32]byte]bool) map[[32]byte]bool {
var leftoverLock sync.Mutex
wg := sync.WaitGroup{}
n := runtime.GOMAXPROCS(0) // defaults to the value of runtime.NumCPU
ch := make(chan []*ethpb.Attestation, n)
wg.Add(n)
for i := 0; i < n; i++ {
go func() {
defer wg.Done()
for as := range ch {
aggregated, err := attaggregation.AggregateDisjointOneBitAtts(as)
if err != nil {
log.WithError(err).Error("could not aggregate unaggregated attestations")
continue
}
if aggregated == nil {
log.Error("nil aggregated attestation")
continue
}
if helpers.IsAggregated(aggregated) {
if err := c.SaveAggregatedAttestations([]*ethpb.Attestation{aggregated}); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
continue
}
} else {
h, err := hashFn(aggregated)
if err != nil {
log.WithError(err).Error("could not hash attestation")
continue
}
leftoverLock.Lock()
leftOver[h] = true
leftoverLock.Unlock()
}
}
}()
}
for _, as := range atts {
ch <- as
}
close(ch)
wg.Wait()
return leftOver
}
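
The structure of aggregateParallel is a standard bounded worker pool: n workers drain a channel of batches, the producer closes the channel when done, and a WaitGroup plus a mutex guard completion and the shared leftover map. The same skeleton in a runnable, generic form (summing batches stands in for aggregation):

package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	n := runtime.GOMAXPROCS(0) // number of workers, defaults to NumCPU
	ch := make(chan []int, n)

	var mu sync.Mutex
	sums := []int{} // shared result, guarded by mu like the leftover map

	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			for batch := range ch { // workers drain until the channel closes
				total := 0
				for _, v := range batch {
					total += v
				}
				mu.Lock()
				sums = append(sums, total)
				mu.Unlock()
			}
		}()
	}

	ch <- []int{1, 2, 3}
	ch <- []int{4, 5}
	close(ch) // signals workers to exit their range loops
	wg.Wait() // results are safe to read only after all workers finish
	fmt.Println(sums) // order is nondeterministic, e.g. [6 9] or [9 6]
}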
// SaveAggregatedAttestation saves an aggregated attestation in cache.
func (c *AttCaches) SaveAggregatedAttestation(att *ethpb.Attestation) error {
if err := helpers.ValidateNilAttestation(att); err != nil {
@@ -165,7 +216,7 @@ func (c *AttCaches) AggregatedAttestations() []*ethpb.Attestation {
// AggregatedAttestationsBySlotIndex returns the aggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.Attestation {
ctx, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndex")
_, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndex")
defer span.End()
atts := make([]*ethpb.Attestation, 0)

View File

@@ -9,7 +9,7 @@ import (
"github.com/pkg/errors"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
@@ -18,6 +18,11 @@ import (
)
func TestKV_Aggregated_AggregateUnaggregatedAttestations(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
AggregateParallel: true,
})
defer resetFn()
cache := NewAttCaches()
priv, err := bls.RandKey()
require.NoError(t, err)
@@ -39,61 +44,6 @@ func TestKV_Aggregated_AggregateUnaggregatedAttestations(t *testing.T) {
require.Equal(t, 1, len(cache.AggregatedAttestationsBySlotIndex(context.Background(), 2, 0)), "Did not aggregate correctly")
}
func TestKV_Aggregated_AggregateUnaggregatedAttestationsBySlotIndex(t *testing.T) {
cache := NewAttCaches()
genData := func(slot primitives.Slot, committeeIndex primitives.CommitteeIndex) *ethpb.AttestationData {
return util.HydrateAttestationData(&ethpb.AttestationData{
Slot: slot,
CommitteeIndex: committeeIndex,
})
}
genSign := func() []byte {
priv, err := bls.RandKey()
require.NoError(t, err)
return priv.Sign([]byte{'a'}).Marshal()
}
atts := []*ethpb.Attestation{
// The first slot.
{AggregationBits: bitfield.Bitlist{0b1001}, Data: genData(1, 2), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1010}, Data: genData(1, 2), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1100}, Data: genData(1, 2), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1001}, Data: genData(1, 3), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1100}, Data: genData(1, 3), Signature: genSign()},
// The second slot.
{AggregationBits: bitfield.Bitlist{0b1001}, Data: genData(2, 3), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1010}, Data: genData(2, 3), Signature: genSign()},
{AggregationBits: bitfield.Bitlist{0b1100}, Data: genData(2, 4), Signature: genSign()},
}
ctx := context.Background()
// Make sure that no error is produced if aggregation is requested on empty unaggregated list.
require.NoError(t, cache.AggregateUnaggregatedAttestationsBySlotIndex(ctx, 1, 2))
require.NoError(t, cache.AggregateUnaggregatedAttestationsBySlotIndex(ctx, 2, 3))
require.Equal(t, 0, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 2)))
require.Equal(t, 0, len(cache.AggregatedAttestationsBySlotIndex(ctx, 1, 2)), "Did not aggregate correctly")
require.Equal(t, 0, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 3)))
require.Equal(t, 0, len(cache.AggregatedAttestationsBySlotIndex(ctx, 1, 3)), "Did not aggregate correctly")
// Persist unaggregated attestations, and aggregate on per slot/committee index base.
require.NoError(t, cache.SaveUnaggregatedAttestations(atts))
require.NoError(t, cache.AggregateUnaggregatedAttestationsBySlotIndex(ctx, 1, 2))
require.NoError(t, cache.AggregateUnaggregatedAttestationsBySlotIndex(ctx, 2, 3))
// Committee attestations at a slot should be aggregated.
require.Equal(t, 0, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 2)))
require.Equal(t, 1, len(cache.AggregatedAttestationsBySlotIndex(ctx, 1, 2)), "Did not aggregate correctly")
// Committee attestations haven't been aggregated.
require.Equal(t, 2, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 3)))
require.Equal(t, 0, len(cache.AggregatedAttestationsBySlotIndex(ctx, 1, 3)), "Did not aggregate correctly")
// Committee at a second slot is aggregated.
require.Equal(t, 0, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 2, 3)))
require.Equal(t, 1, len(cache.AggregatedAttestationsBySlotIndex(ctx, 2, 3)), "Did not aggregate correctly")
// The second committee at second slot is not aggregated.
require.Equal(t, 1, len(cache.UnaggregatedAttestationsBySlotIndex(ctx, 2, 4)))
require.Equal(t, 0, len(cache.AggregatedAttestationsBySlotIndex(ctx, 2, 4)), "Did not aggregate correctly")
}
func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
tests := []struct {
name string

View File

@@ -15,7 +15,6 @@ import (
type Pool interface {
// For Aggregated attestations
AggregateUnaggregatedAttestations(ctx context.Context) error
AggregateUnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) error
SaveAggregatedAttestation(att *ethpb.Attestation) error
SaveAggregatedAttestations(atts []*ethpb.Attestation) error
AggregatedAttestations() []*ethpb.Attestation

View File

@@ -7,6 +7,7 @@ import (
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
attaggregation "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation/aggregation/attestations"
@@ -17,6 +18,11 @@ import (
)
func TestBatchAttestations_Multiple(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
AggregateParallel: true,
})
defer resetFn()
s, err := NewService(context.Background(), &Config{Pool: NewPool()})
require.NoError(t, err)

View File

@@ -108,12 +108,31 @@ func (s *Store) DeletePeerData(pid peer.ID) {
}
// SetTrustedPeers sets our desired trusted peer set.
// Important: it is assumed that the store mutex is locked when calling this method.
func (s *Store) SetTrustedPeers(peers []peer.ID) {
for _, p := range peers {
s.trustedPeers[p] = true
}
}
// GetTrustedPeers gets our desired trusted peer ids.
// Important: it is assumed that the store mutex is locked when calling this method.
func (s *Store) GetTrustedPeers() []peer.ID {
peers := []peer.ID{}
for p := range s.trustedPeers {
peers = append(peers, p)
}
return peers
}
// DeleteTrustedPeers removes peers from trusted peer set.
// Important: it is assumed that the store mutex is locked when calling this method.
func (s *Store) DeleteTrustedPeers(peers []peer.ID) {
for _, p := range peers {
delete(s.trustedPeers, p)
}
}
// Peers returns map of peer data objects.
// Important: it is assumed that the store mutex is locked when calling this method.
func (s *Store) Peers() map[peer.ID]*PeerData {

View File

@@ -96,4 +96,16 @@ func TestStore_TrustedPeers(t *testing.T) {
assert.Equal(t, true, store.IsTrustedPeer(pid1))
assert.Equal(t, true, store.IsTrustedPeer(pid2))
assert.Equal(t, true, store.IsTrustedPeer(pid3))
tPeers = store.GetTrustedPeers()
assert.Equal(t, 3, len(tPeers))
store.DeleteTrustedPeers(tPeers)
tPeers = store.GetTrustedPeers()
assert.Equal(t, 0, len(tPeers))
assert.Equal(t, false, store.IsTrustedPeer(pid1))
assert.Equal(t, false, store.IsTrustedPeer(pid2))
assert.Equal(t, false, store.IsTrustedPeer(pid3))
}

View File

@@ -560,6 +560,9 @@ func (p *Status) Prune() {
notBadPeer := func(pid peer.ID) bool {
return !p.isBad(pid)
}
notTrustedPeer := func(pid peer.ID) bool {
return !p.isTrustedPeers(pid)
}
type peerResp struct {
pid peer.ID
score float64
@@ -567,7 +570,8 @@ func (p *Status) Prune() {
peersToPrune := make([]*peerResp, 0)
// Select disconnected peers with a smaller bad response count.
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerDisconnected && notBadPeer(pid) {
// Do not prune trusted peers: pruning would delete the peer data and unset the trusted status.
if peerData.ConnState == PeerDisconnected && notBadPeer(pid) && notTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
score: p.Scorers().ScoreNoLock(pid),
@@ -608,6 +612,9 @@ func (p *Status) deprecatedPrune() {
notBadPeer := func(peerData *peerdata.PeerData) bool {
return peerData.BadResponses < p.scorers.BadResponsesScorer().Params().Threshold
}
notTrustedPeer := func(pid peer.ID) bool {
return !p.isTrustedPeers(pid)
}
type peerResp struct {
pid peer.ID
badResp int
@@ -615,7 +622,8 @@ func (p *Status) deprecatedPrune() {
peersToPrune := make([]*peerResp, 0)
// Select disconnected peers with a smaller bad response count.
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerDisconnected && notBadPeer(peerData) {
// Do not prune trusted peers: pruning would delete the peer data and unset the trusted status.
if peerData.ConnState == PeerDisconnected && notBadPeer(peerData) && notTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
badResp: peerData.BadResponses,
@@ -912,6 +920,32 @@ func (p *Status) SetTrustedPeers(peers []peer.ID) {
p.store.SetTrustedPeers(peers)
}
// GetTrustedPeers returns a list of all trusted peers' IDs.
func (p *Status) GetTrustedPeers() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
return p.store.GetTrustedPeers()
}
// DeleteTrustedPeers removes peers from the trusted peer set.
func (p *Status) DeleteTrustedPeers(peers []peer.ID) {
p.store.Lock()
defer p.store.Unlock()
p.store.DeleteTrustedPeers(peers)
}
// IsTrustedPeers returns whether the given peer is a trusted peer.
func (p *Status) IsTrustedPeers(pid peer.ID) bool {
p.store.RLock()
defer p.store.RUnlock()
return p.isTrustedPeers(pid)
}
// isTrustedPeers is the lock-free version of IsTrustedPeers.
func (p *Status) isTrustedPeers(pid peer.ID) bool {
return p.store.IsTrustedPeer(pid)
}
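
IsTrustedPeers / isTrustedPeers follow the locked-wrapper-over-lock-free-helper convention used throughout this file. In miniature, with a hypothetical set type:

package main

import (
	"fmt"
	"sync"
)

type set struct {
	mu sync.RWMutex
	m  map[string]bool
}

// Contains takes the read lock and delegates to the lock-free helper,
// mirroring IsTrustedPeers -> isTrustedPeers.
func (s *set) Contains(k string) bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.contains(k)
}

// contains assumes the caller already holds the lock.
func (s *set) contains(k string) bool { return s.m[k] }

func main() {
	s := &set{m: map[string]bool{"peer-a": true}}
	fmt.Println(s.Contains("peer-a"), s.Contains("peer-b")) // true false
}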
// This method assumes the store lock is acquired before executing.
func (p *Status) isfromBadIP(pid peer.ID) bool {

View File

@@ -802,6 +802,11 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
}
}
p.SetTrustedPeers(trustedPeers)
// Assert we have the correct trusted peers.
trustedPeers = p.GetTrustedPeers()
assert.Equal(t, 6, len(trustedPeers))
// Assert that all peers above the max are prunable.
peersToPrune = p.PeersToPrune()
assert.Equal(t, 16, len(peersToPrune))
@@ -812,6 +817,34 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
assert.NotEqual(t, pid.String(), tPid.String())
}
}
// Add more peers to check that trusted peers can be pruned after they are deleted from the trusted peer set.
for i := 0; i < 9; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Delete trusted peers.
p.DeleteTrustedPeers(trustedPeers)
peersToPrune = p.PeersToPrune()
assert.Equal(t, 25, len(peersToPrune))
// Check that trusted peers are pruned.
for _, tPid := range trustedPeers {
pruned := false
for _, pid := range peersToPrune {
if pid.String() == tPid.String() {
pruned = true
}
}
assert.Equal(t, true, pruned)
}
// Assert we have zero trusted peers.
trustedPeers = p.GetTrustedPeers()
assert.Equal(t, 0, len(trustedPeers))
for _, pid := range peersToPrune {
dir, err := p.Direction(pid)
require.NoError(t, err)
@@ -821,8 +854,8 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
// Ensure scores are in ascending order.
currScore := p.Scorers().Score(peersToPrune[0])
for _, pid := range peersToPrune {
score := p.Scorers().BadResponsesScorer().Score(pid)
assert.Equal(t, true, currScore >= score)
score := p.Scorers().Score(pid)
assert.Equal(t, true, currScore <= score)
currScore = score
}
}

View File

@@ -174,9 +174,9 @@ func (s *Service) Start() {
s.awaitStateInitialized()
s.isPreGenesis = false
var peersToWatch []string
var relayNodes []string
if s.cfg.RelayNodeAddr != "" {
peersToWatch = append(peersToWatch, s.cfg.RelayNodeAddr)
relayNodes = append(relayNodes, s.cfg.RelayNodeAddr)
if err := dialRelayNode(s.ctx, s.host, s.cfg.RelayNodeAddr); err != nil {
log.WithError(err).Errorf("Could not dial relay node")
}
@@ -213,8 +213,7 @@ func (s *Service) Start() {
// Set trusted peers for those that are provided as static addresses.
pids := peerIdsFromMultiAddrs(addrs)
s.peers.SetTrustedPeers(pids)
peersToWatch = append(peersToWatch, s.cfg.StaticPeers...)
s.connectWithAllPeers(addrs)
s.connectWithAllTrustedPeers(addrs)
}
// Initialize metadata according to the
// current epoch.
@@ -226,7 +225,7 @@ func (s *Service) Start() {
// Periodic functions.
async.RunEvery(s.ctx, params.BeaconNetworkConfig().TtfbTimeout, func() {
ensurePeerConnections(s.ctx, s.host, peersToWatch...)
ensurePeerConnections(s.ctx, s.host, s.peers, relayNodes...)
})
async.RunEvery(s.ctx, 30*time.Minute, s.Peers().Prune)
async.RunEvery(s.ctx, params.BeaconNetworkConfig().RespTimeout, s.updateMetrics)
@@ -399,6 +398,24 @@ func (s *Service) awaitStateInitialized() {
}
}
func (s *Service) connectWithAllTrustedPeers(multiAddrs []multiaddr.Multiaddr) {
addrInfos, err := peer.AddrInfosFromP2pAddrs(multiAddrs...)
if err != nil {
log.WithError(err).Error("Could not convert to peer address info's from multiaddresses")
return
}
for _, info := range addrInfos {
// Add the peer to the peer status store.
s.peers.Add(nil, info.ID, info.Addrs[0], network.DirUnknown)
// Make each dial non-blocking.
go func(info peer.AddrInfo) {
if err := s.connectWithPeer(s.ctx, info); err != nil {
log.WithError(err).Tracef("Could not connect with peer %s", info.String())
}
}(info)
}
}
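
Note that each dial goroutine receives info as an argument instead of capturing the range variable, which avoids the classic closure-over-loop-variable pitfall in Go versions before 1.22. A minimal standalone illustration:

package main

import (
	"fmt"
	"sync"
)

func main() {
	addrs := []string{"peer-a", "peer-b", "peer-c"}
	var wg sync.WaitGroup
	for _, a := range addrs {
		wg.Add(1)
		go func(a string) { // pass the loop variable explicitly
			defer wg.Done()
			fmt.Println("dialing", a) // each goroutine sees its own copy
		}(a)
	}
	wg.Wait()
}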
func (s *Service) connectWithAllPeers(multiAddrs []multiaddr.Multiaddr) {
addrInfos, err := peer.AddrInfosFromP2pAddrs(multiAddrs...)
if err != nil {

View File

@@ -5,28 +5,52 @@ import (
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers"
)
// ensurePeerConnections will attempt to re-establish a connection to a peer
// if there are currently no connections to that peer.
func ensurePeerConnections(ctx context.Context, h host.Host, peers ...string) {
if len(peers) == 0 {
return
}
for _, p := range peers {
if p == "" {
func ensurePeerConnections(ctx context.Context, h host.Host, peers *peers.Status, relayNodes ...string) {
// Rebuild peersToWatch on every run from the relay nodes and trusted peers.
var peersToWatch []*peer.AddrInfo
// Add relay nodes.
for _, node := range relayNodes {
if node == "" {
continue
}
peerInfo, err := MakePeer(p)
peerInfo, err := MakePeer(node)
if err != nil {
log.WithError(err).Error("Could not make peer")
continue
}
peersToWatch = append(peersToWatch, peerInfo)
}
c := h.Network().ConnsToPeer(peerInfo.ID)
// Add trusted peers.
trustedPeers := peers.GetTrustedPeers()
for _, trustedPeer := range trustedPeers {
maddr, err := peers.Address(trustedPeer)
// Skip trusted peers with invalid or missing addresses.
if err != nil || maddr == nil {
log.WithField("peer", trustedPeers).WithError(err).Error("Could not get peer address")
continue
}
peerInfo := &peer.AddrInfo{ID: trustedPeer}
peerInfo.Addrs = []ma.Multiaddr{maddr}
peersToWatch = append(peersToWatch, peerInfo)
}
if len(peersToWatch) == 0 {
return
}
for _, p := range peersToWatch {
c := h.Network().ConnsToPeer(p.ID)
if len(c) == 0 {
if err := connectWithTimeout(ctx, h, peerInfo); err != nil {
log.WithField("peer", peerInfo.ID).WithField("addrs", peerInfo.Addrs).WithError(err).Errorf("Failed to reconnect to peer")
if err := connectWithTimeout(ctx, h, p); err != nil {
log.WithField("peer", p.ID).WithField("addrs", p.Addrs).WithError(err).Errorf("Failed to reconnect to peer")
continue
}
}

View File

@@ -25,16 +25,19 @@ go_library(
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/rpc/eth/beacon:go_default_library",
"//beacon-chain/rpc/eth/builder:go_default_library",
"//beacon-chain/rpc/eth/debug:go_default_library",
"//beacon-chain/rpc/eth/events:go_default_library",
"//beacon-chain/rpc/eth/node:go_default_library",
"//beacon-chain/rpc/eth/rewards:go_default_library",
"//beacon-chain/rpc/eth/validator:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/prysm/node:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/beacon:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/debug:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/node:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/validator:go_default_library",
"//beacon-chain/rpc/prysm/validator:go_default_library",
"//beacon-chain/slasher:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state/stategen:go_default_library",

View File

@@ -3,6 +3,7 @@ package apimiddleware
import (
"encoding/base64"
"strconv"
"strings"
"github.com/pkg/errors"
)
@@ -17,9 +18,14 @@ func (p *EpochParticipation) UnmarshalJSON(b []byte) error {
if len(b) < 2 {
return errors.New("epoch participation length must be at least 2")
}
if b[0] != '"' || b[len(b)-1] != '"' {
return errors.Errorf("provided epoch participation json string is malformed: %s", string(b))
}
// Remove leading and trailing quotation marks.
decoded, err := base64.StdEncoding.DecodeString(string(b[1 : len(b)-1]))
jsonString := string(b)
jsonString = strings.Trim(jsonString, "\"")
decoded, err := base64.StdEncoding.DecodeString(jsonString)
if err != nil {
return errors.Wrapf(err, "could not decode epoch participation base64 value")
}
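
The stricter parse (require surrounding quotes, then base64-decode the inner value) can be exercised in isolation. A standalone sketch; decodeQuotedBase64 is a hypothetical helper, not the middleware type itself:

package main

import (
	"encoding/base64"
	"errors"
	"fmt"
)

// decodeQuotedBase64 mirrors the validation above: the value must be a JSON
// string (surrounding quotes), and its contents must be valid base64.
func decodeQuotedBase64(b []byte) ([]byte, error) {
	if len(b) < 2 {
		return nil, errors.New("length must be at least 2")
	}
	if b[0] != '"' || b[len(b)-1] != '"' {
		return nil, fmt.Errorf("malformed json string: %s", string(b))
	}
	return base64.StdEncoding.DecodeString(string(b[1 : len(b)-1]))
}

func main() {
	v, err := decodeQuotedBase64([]byte(`"dHJ1ZQ=="`))
	fmt.Println(string(v), err) // true <nil>
	_, err = decodeQuotedBase64([]byte("XdHJ1ZQ==X"))
	fmt.Println(err) // malformed json string: XdHJ1ZQ==X
}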

View File

@@ -23,7 +23,7 @@ func TestUnmarshalEpochParticipation(t *testing.T) {
ep := EpochParticipation{}
err := ep.UnmarshalJSON([]byte(":illegal:"))
require.NotNil(t, err)
assert.ErrorContains(t, "could not decode epoch participation base64 value", err)
assert.ErrorContains(t, "provided epoch participation json string is malformed", err)
})
t.Run("length too small", func(t *testing.T) {
ep := EpochParticipation{}
@@ -36,4 +36,8 @@ func TestUnmarshalEpochParticipation(t *testing.T) {
require.NoError(t, ep.UnmarshalJSON([]byte("null")))
assert.DeepEqual(t, EpochParticipation([]string{}), ep)
})
t.Run("invalid value", func(t *testing.T) {
ep := EpochParticipation{}
require.ErrorContains(t, "provided epoch participation json string is malformed", ep.UnmarshalJSON([]byte("XdHJ1ZQ==X")))
})
}

View File

@@ -0,0 +1,25 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"errors.go",
"validator.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
],
)

View File

@@ -0,0 +1,49 @@
package core
import (
"net/http"
"google.golang.org/grpc/codes"
)
type ErrorReason uint8
const (
Internal = iota
Unavailable
BadRequest
// Add more errors as needed
)
type RpcError struct {
Err error
Reason ErrorReason
}
func ErrorReasonToGRPC(reason ErrorReason) codes.Code {
switch reason {
case Internal:
return codes.Internal
case Unavailable:
return codes.Unavailable
case BadRequest:
return codes.InvalidArgument
// Add more cases for other error reasons as needed
default:
return codes.Internal
}
}
func ErrorReasonToHTTP(reason ErrorReason) int {
switch reason {
case Internal:
return http.StatusInternalServerError
case Unavailable:
return http.StatusServiceUnavailable
case BadRequest:
return http.StatusBadRequest
// Add more cases for other error reasons as needed
default:
return http.StatusInternalServerError
}
}
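
These mappers are meant to sit at the transport boundary: a gRPC handler converts an *RpcError with ErrorReasonToGRPC, an HTTP handler with ErrorReasonToHTTP. A hedged sketch of both call sites, assuming hypothetical helpers living alongside the core package (not the actual Prysm servers):

package core

import (
	"net/http"

	"google.golang.org/grpc/status"
)

// writeHTTPError surfaces an *RpcError on an HTTP response.
func writeHTTPError(w http.ResponseWriter, rpcErr *RpcError) {
	http.Error(w, rpcErr.Err.Error(), ErrorReasonToHTTP(rpcErr.Reason))
}

// grpcStatus surfaces the same *RpcError as a gRPC status error.
func grpcStatus(rpcErr *RpcError) error {
	return status.Error(ErrorReasonToGRPC(rpcErr.Reason), rpcErr.Err.Error())
}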

View File

@@ -0,0 +1,168 @@
package core
import (
"context"
"sort"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
)
func ComputeValidatorPerformance(
ctx context.Context,
req *ethpb.ValidatorPerformanceRequest,
headFetcher blockchain.HeadFetcher,
currSlot primitives.Slot,
) (*ethpb.ValidatorPerformanceResponse, *RpcError) {
headState, err := headFetcher.HeadState(ctx)
if err != nil {
return nil, &RpcError{Err: errors.Wrap(err, "could not get head state"), Reason: Internal}
}
if currSlot > headState.Slot() {
headRoot, err := headFetcher.HeadRoot(ctx)
if err != nil {
return nil, &RpcError{Err: errors.Wrap(err, "could not get head root"), Reason: Internal}
}
headState, err = transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, currSlot)
if err != nil {
return nil, &RpcError{Err: errors.Wrapf(err, "could not process slots up to %d", currSlot), Reason: Internal}
}
}
var validatorSummary []*precompute.Validator
if headState.Version() == version.Phase0 {
vp, bp, err := precompute.New(ctx, headState)
if err != nil {
return nil, &RpcError{Err: err, Reason: Internal}
}
vp, bp, err = precompute.ProcessAttestations(ctx, headState, vp, bp)
if err != nil {
return nil, &RpcError{Err: err, Reason: Internal}
}
headState, err = precompute.ProcessRewardsAndPenaltiesPrecompute(headState, bp, vp, precompute.AttestationsDelta, precompute.ProposersDelta)
if err != nil {
return nil, &RpcError{Err: err, Reason: Internal}
}
validatorSummary = vp
} else if headState.Version() >= version.Altair {
vp, bp, err := altair.InitializePrecomputeValidators(ctx, headState)
if err != nil {
return nil, &RpcError{Err: err, Reason: Internal}
}
vp, bp, err = altair.ProcessEpochParticipation(ctx, headState, bp, vp)
if err != nil {
return nil, &RpcError{Err: err, Reason: Internal}
}
headState, vp, err = altair.ProcessInactivityScores(ctx, headState, vp)
if err != nil {
return nil, &RpcError{Err: err, Reason: Internal}
}
headState, err = altair.ProcessRewardsAndPenaltiesPrecompute(headState, bp, vp)
if err != nil {
return nil, &RpcError{Err: err, Reason: Internal}
}
validatorSummary = vp
} else {
return nil, &RpcError{Err: errors.Wrapf(err, "head state version %d not supported", headState.Version()), Reason: Internal}
}
responseCap := len(req.Indices) + len(req.PublicKeys)
validatorIndices := make([]primitives.ValidatorIndex, 0, responseCap)
missingValidators := make([][]byte, 0, responseCap)
filtered := map[primitives.ValidatorIndex]bool{} // Track filtered validators to prevent duplication in the response.
// Convert the list of validator public keys to validator indices and add to the indices set.
for _, pubKey := range req.PublicKeys {
// Skip empty public key.
if len(pubKey) == 0 {
continue
}
pubkeyBytes := bytesutil.ToBytes48(pubKey)
idx, ok := headState.ValidatorIndexByPubkey(pubkeyBytes)
if !ok {
// Validator index not found, track as missing.
missingValidators = append(missingValidators, pubKey)
continue
}
if !filtered[idx] {
validatorIndices = append(validatorIndices, idx)
filtered[idx] = true
}
}
// Add provided indices to the indices set.
for _, idx := range req.Indices {
if !filtered[idx] {
validatorIndices = append(validatorIndices, idx)
filtered[idx] = true
}
}
// Depending on the indices and public keys given, results might not be sorted.
sort.Slice(validatorIndices, func(i, j int) bool {
return validatorIndices[i] < validatorIndices[j]
})
currentEpoch := coreTime.CurrentEpoch(headState)
responseCap = len(validatorIndices)
pubKeys := make([][]byte, 0, responseCap)
beforeTransitionBalances := make([]uint64, 0, responseCap)
afterTransitionBalances := make([]uint64, 0, responseCap)
effectiveBalances := make([]uint64, 0, responseCap)
correctlyVotedSource := make([]bool, 0, responseCap)
correctlyVotedTarget := make([]bool, 0, responseCap)
correctlyVotedHead := make([]bool, 0, responseCap)
inactivityScores := make([]uint64, 0, responseCap)
// Append performance summaries.
// Also track missing validators using public keys.
for _, idx := range validatorIndices {
val, err := headState.ValidatorAtIndexReadOnly(idx)
if err != nil {
return nil, &RpcError{Err: errors.Wrap(err, "could not get validator"), Reason: Internal}
}
pubKey := val.PublicKey()
if uint64(idx) >= uint64(len(validatorSummary)) {
// Not listed in validator summary yet; treat it as missing.
missingValidators = append(missingValidators, pubKey[:])
continue
}
if !helpers.IsActiveValidatorUsingTrie(val, currentEpoch) {
// Inactive validator; treat it as missing.
missingValidators = append(missingValidators, pubKey[:])
continue
}
summary := validatorSummary[idx]
pubKeys = append(pubKeys, pubKey[:])
effectiveBalances = append(effectiveBalances, summary.CurrentEpochEffectiveBalance)
beforeTransitionBalances = append(beforeTransitionBalances, summary.BeforeEpochTransitionBalance)
afterTransitionBalances = append(afterTransitionBalances, summary.AfterEpochTransitionBalance)
correctlyVotedTarget = append(correctlyVotedTarget, summary.IsPrevEpochTargetAttester)
correctlyVotedHead = append(correctlyVotedHead, summary.IsPrevEpochHeadAttester)
if headState.Version() == version.Phase0 {
correctlyVotedSource = append(correctlyVotedSource, summary.IsPrevEpochAttester)
} else {
correctlyVotedSource = append(correctlyVotedSource, summary.IsPrevEpochSourceAttester)
inactivityScores = append(inactivityScores, summary.InactivityScore)
}
}
return &ethpb.ValidatorPerformanceResponse{
PublicKeys: pubKeys,
CorrectlyVotedSource: correctlyVotedSource,
CorrectlyVotedTarget: correctlyVotedTarget, // In altair, when this is true then the attestation was definitely included.
CorrectlyVotedHead: correctlyVotedHead,
CurrentEffectiveBalances: effectiveBalances,
BalancesBeforeEpochTransition: beforeTransitionBalances,
BalancesAfterEpochTransition: afterTransitionBalances,
MissingValidators: missingValidators,
InactivityScores: inactivityScores, // Only populated in Altair
}, nil
}
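
The index handling above boils down to dedupe-then-sort: the filtered map drops duplicates across public keys and raw indices, and the slice is sorted afterwards. The same pattern in miniature (dedupSorted is a hypothetical helper over plain uint64):

package main

import (
	"fmt"
	"sort"
)

// dedupSorted removes duplicates while preserving each first occurrence,
// then sorts ascending, mirroring the validatorIndices handling above.
func dedupSorted(in []uint64) []uint64 {
	seen := map[uint64]bool{}
	out := make([]uint64, 0, len(in))
	for _, v := range in {
		if !seen[v] {
			seen[v] = true
			out = append(out, v)
		}
	}
	sort.Slice(out, func(i, j int) bool { return out[i] < out[j] })
	return out
}

func main() {
	fmt.Println(dedupSorted([]uint64{7, 3, 7, 1, 3})) // [1 3 7]
}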

View File

@@ -0,0 +1,48 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"handlers.go",
"server.go",
"structs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/builder",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//network:go_default_library",
"//proto/engine/v1:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["handlers_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//network:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
],
)

View File

@@ -0,0 +1,131 @@
package builder
import (
"fmt"
"net/http"
"strconv"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/gorilla/mux"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/network"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
// ExpectedWithdrawals gets the withdrawals computed from the specified state that will be included in the block built on top of that state.
func (s *Server) ExpectedWithdrawals(w http.ResponseWriter, r *http.Request) {
// Retrieve beacon state
stateId := mux.Vars(r)["state_id"]
if stateId == "" {
network.WriteError(w, &network.DefaultErrorJson{
Message: "state_id is required in URL params",
Code: http.StatusBadRequest,
})
return
}
st, err := s.Stater.State(r.Context(), []byte(stateId))
if err != nil {
network.WriteError(w, handleWrapError(err, "could not retrieve state", http.StatusNotFound))
return
}
queryParam := r.URL.Query().Get("proposal_slot")
var proposalSlot primitives.Slot
if queryParam != "" {
pSlot, err := strconv.ParseUint(queryParam, 10, 64)
if err != nil {
network.WriteError(w, handleWrapError(err, "invalid proposal slot value", http.StatusBadRequest))
return
}
proposalSlot = primitives.Slot(pSlot)
} else {
proposalSlot = st.Slot() + 1
}
// Perform sanity checks on proposal slot before computing state
capellaStart, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
if err != nil {
network.WriteError(w, handleWrapError(err, "could not calculate Capella start slot", http.StatusInternalServerError))
return
}
if proposalSlot < capellaStart {
network.WriteError(w, &network.DefaultErrorJson{
Message: "expected withdrawals are not supported before Capella fork",
Code: http.StatusBadRequest,
})
return
}
if proposalSlot <= st.Slot() {
network.WriteError(w, &network.DefaultErrorJson{
Message: fmt.Sprintf("proposal slot must be bigger than state slot. proposal slot: %d, state slot: %d", proposalSlot, st.Slot()),
Code: http.StatusBadRequest,
})
return
}
lookAheadLimit := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().MaxSeedLookahead)))
if st.Slot().Add(lookAheadLimit) <= proposalSlot {
network.WriteError(w, &network.DefaultErrorJson{
Message: fmt.Sprintf("proposal slot cannot be >= %d slots ahead of state slot", lookAheadLimit),
Code: http.StatusBadRequest,
})
return
}
// Get metadata for response
isOptimistic, err := s.OptimisticModeFetcher.IsOptimistic(r.Context())
if err != nil {
network.WriteError(w, handleWrapError(err, "could not get optimistic mode info", http.StatusInternalServerError))
return
}
root, err := helpers.BlockRootAtSlot(st, st.Slot()-1)
if err != nil {
network.WriteError(w, handleWrapError(err, "could not get block root", http.StatusInternalServerError))
return
}
var blockRoot = [32]byte(root)
isFinalized := s.FinalizationFetcher.IsFinalized(r.Context(), blockRoot)
// Advance state forward to proposal slot
st, err = transition.ProcessSlots(r.Context(), st, proposalSlot)
if err != nil {
network.WriteError(w, &network.DefaultErrorJson{
Message: "could not process slots",
Code: http.StatusInternalServerError,
})
return
}
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
network.WriteError(w, &network.DefaultErrorJson{
Message: "could not get expected withdrawals",
Code: http.StatusInternalServerError,
})
return
}
network.WriteJson(w, &ExpectedWithdrawalsResponse{
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
Data: buildExpectedWithdrawalsData(withdrawals),
})
}
func buildExpectedWithdrawalsData(withdrawals []*enginev1.Withdrawal) []*ExpectedWithdrawal {
data := make([]*ExpectedWithdrawal, len(withdrawals))
for i, withdrawal := range withdrawals {
data[i] = &ExpectedWithdrawal{
Address: hexutil.Encode(withdrawal.Address),
Amount: strconv.FormatUint(withdrawal.Amount, 10),
Index: strconv.FormatUint(withdrawal.Index, 10),
ValidatorIndex: strconv.FormatUint(uint64(withdrawal.ValidatorIndex), 10),
}
}
return data
}
func handleWrapError(err error, message string, code int) *network.DefaultErrorJson {
return &network.DefaultErrorJson{
Message: errors.Wrapf(err, message).Error(),
Code: code,
}
}
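
Once the server is registered, the handler answers GET /eth/v1/builder/states/{state_id}/expected_withdrawals. A hedged client sketch; the host, port, and slot value below are assumptions, adjust them to your gateway:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed gateway address and proposal slot; the route comes from the handler above.
	url := "http://localhost:3500/eth/v1/builder/states/head/expected_withdrawals?proposal_slot=6213000"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}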

View File

@@ -0,0 +1,210 @@
package builder
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"strconv"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/gorilla/mux"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/network"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
func TestExpectedWithdrawals_BadRequest(t *testing.T) {
st, err := util.NewBeaconStateCapella()
slotsAhead := 5000
require.NoError(t, err)
capellaSlot, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
require.NoError(t, err)
currentSlot := capellaSlot + primitives.Slot(slotsAhead)
require.NoError(t, st.SetSlot(currentSlot))
mockChainService := &mock.ChainService{Optimistic: true}
testCases := []struct {
name string
path string
urlParams map[string]string
state state.BeaconState
errorMessage string
}{
{
name: "no state_id url params",
path: "/eth/v1/builder/states/{state_id}/expected_withdrawals?proposal_slot" +
strconv.FormatUint(uint64(currentSlot), 10),
urlParams: map[string]string{},
state: nil,
errorMessage: "state_id is required in URL params",
},
{
name: "invalid proposal slot value",
path: "/eth/v1/builder/states/{state_id}/expected_withdrawals?proposal_slot=aaa",
urlParams: map[string]string{"state_id": "head"},
state: st,
errorMessage: "invalid proposal slot value",
},
{
name: "proposal slot < Capella start slot",
path: "/eth/v1/builder/states/{state_id}/expected_withdrawals?proposal_slot=" +
strconv.FormatUint(uint64(capellaSlot)-1, 10),
urlParams: map[string]string{"state_id": "head"},
state: st,
errorMessage: "expected withdrawals are not supported before Capella fork",
},
{
name: "proposal slot == Capella start slot",
path: "/eth/v1/builder/states/{state_id}/expected_withdrawals?proposal_slot=" +
strconv.FormatUint(uint64(capellaSlot), 10),
urlParams: map[string]string{"state_id": "head"},
state: st,
errorMessage: "proposal slot must be bigger than state slot",
},
{
name: "Proposal slot >= 128 slots ahead of state slot",
path: "/eth/v1/builder/states/{state_id}/expected_withdrawals?proposal_slot=" +
strconv.FormatUint(uint64(currentSlot+128), 10),
urlParams: map[string]string{"state_id": "head"},
state: st,
errorMessage: "proposal slot cannot be >= 128 slots ahead of state slot",
},
}
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
s := &Server{
FinalizationFetcher: mockChainService,
OptimisticModeFetcher: mockChainService,
Stater: &testutil.MockStater{BeaconState: testCase.state},
}
request := httptest.NewRequest("GET", testCase.path, nil)
request = mux.SetURLVars(request, testCase.urlParams)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.ExpectedWithdrawals(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, testCase.errorMessage, e.Message)
})
}
}
func TestExpectedWithdrawals(t *testing.T) {
st, err := util.NewBeaconStateCapella()
slotsAhead := 5000
require.NoError(t, err)
capellaSlot, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
require.NoError(t, err)
currentSlot := capellaSlot + primitives.Slot(slotsAhead)
require.NoError(t, st.SetSlot(currentSlot))
mockChainService := &mock.ChainService{Optimistic: true}
t.Run("get correct expected withdrawals", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.MaxValidatorsPerWithdrawalsSweep = 16
params.OverrideBeaconConfig(cfg)
// Update state with updated validator fields
valCount := 17
validators := make([]*eth.Validator, 0, valCount)
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
blsKey, err := bls.RandKey()
require.NoError(t, err)
val := &eth.Validator{
PublicKey: blsKey.PublicKey().Marshal(),
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
}
val.WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
validators = append(validators, val)
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance)
}
epoch := slots.ToEpoch(st.Slot())
// Fully withdrawable now with more than 0 balance
validators[5].WithdrawableEpoch = epoch
// Fully withdrawable now but 0 balance
validators[10].WithdrawableEpoch = epoch
balances[10] = 0
// Partially withdrawable now but fully withdrawable after 1 epoch
validators[14].WithdrawableEpoch = epoch + 1
balances[14] += params.BeaconConfig().MinDepositAmount
// Partially withdrawable
validators[15].WithdrawableEpoch = epoch + 2
balances[15] += params.BeaconConfig().MinDepositAmount
// Above sweep bound
validators[16].WithdrawableEpoch = epoch + 1
balances[16] += params.BeaconConfig().MinDepositAmount
require.NoError(t, st.SetValidators(validators))
require.NoError(t, st.SetBalances(balances))
inactivityScores := make([]uint64, valCount)
for i := range inactivityScores {
inactivityScores[i] = 10
}
require.NoError(t, st.SetInactivityScores(inactivityScores))
s := &Server{
FinalizationFetcher: mockChainService,
OptimisticModeFetcher: mockChainService,
Stater: &testutil.MockStater{BeaconState: st},
}
request := httptest.NewRequest(
"GET", "/eth/v1/builder/states/{state_id}/expected_withdrawals?proposal_slot="+
strconv.FormatUint(uint64(currentSlot+params.BeaconConfig().SlotsPerEpoch), 10), nil)
request = mux.SetURLVars(request, map[string]string{"state_id": "head"})
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.ExpectedWithdrawals(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ExpectedWithdrawalsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
assert.Equal(t, false, resp.Finalized)
assert.Equal(t, 3, len(resp.Data))
expectedWithdrawal1 := &ExpectedWithdrawal{
Index: strconv.FormatUint(0, 10),
ValidatorIndex: strconv.FormatUint(5, 10),
Address: hexutil.Encode(validators[5].WithdrawalCredentials[12:]),
// Decreased due to epoch processing when state advanced forward
Amount: strconv.FormatUint(31998257885, 10),
}
expectedWithdrawal2 := &ExpectedWithdrawal{
Index: strconv.FormatUint(1, 10),
ValidatorIndex: strconv.FormatUint(14, 10),
Address: hexutil.Encode(validators[14].WithdrawalCredentials[12:]),
// MaxEffectiveBalance + MinDepositAmount + decrease after epoch processing
Amount: strconv.FormatUint(32998257885, 10),
}
expectedWithdrawal3 := &ExpectedWithdrawal{
Index: strconv.FormatUint(2, 10),
ValidatorIndex: strconv.FormatUint(15, 10),
Address: hexutil.Encode(validators[15].WithdrawalCredentials[12:]),
// MinDepositAmount, minus the decrease from epoch processing
Amount: strconv.FormatUint(998257885, 10),
}
require.DeepEqual(t, expectedWithdrawal1, resp.Data[0])
require.DeepEqual(t, expectedWithdrawal2, resp.Data[1])
require.DeepEqual(t, expectedWithdrawal3, resp.Data[2])
})
}
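For orientation, the withdrawal rules this test exercises can be sketched in isolation. Below is a minimal, self-contained Go sketch of the consensus-spec conditions (fully withdrawable: ETH1 credentials, withdrawable epoch reached, non-zero balance; partially withdrawable: ETH1 credentials, max effective balance, excess balance), capped by the sweep bound. The constant values and names are illustrative only, not Prysm's implementation.

package main

import "fmt"

// Illustrative constants; real values come from params.BeaconConfig().
const (
	maxEffectiveBalance              = uint64(32_000_000_000) // Gwei
	maxValidatorsPerWithdrawalsSweep = 16
	eth1PrefixByte                   = byte(0x01)
)

type validator struct {
	withdrawalCredentials [32]byte
	withdrawableEpoch     uint64
	effectiveBalance      uint64
}

func hasETH1Credentials(v validator) bool {
	return v.withdrawalCredentials[0] == eth1PrefixByte
}

// expectedWithdrawalIndices returns the validator indices the sweep would pick
// up. Only the first maxValidatorsPerWithdrawalsSweep validators are inspected,
// which is why validator 16 in the test above never appears in the response.
func expectedWithdrawalIndices(vals []validator, balances []uint64, epoch uint64) []int {
	var out []int
	bound := len(vals)
	if bound > maxValidatorsPerWithdrawalsSweep {
		bound = maxValidatorsPerWithdrawalsSweep
	}
	for i := 0; i < bound; i++ {
		v, bal := vals[i], balances[i]
		fully := hasETH1Credentials(v) && v.withdrawableEpoch <= epoch && bal > 0
		partial := hasETH1Credentials(v) && v.effectiveBalance == maxEffectiveBalance && bal > maxEffectiveBalance
		if fully || partial {
			out = append(out, i)
		}
	}
	return out
}

func main() {
	fmt.Println(expectedWithdrawalIndices(nil, nil, 0)) // prints [] for an empty set
}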

View File

@@ -0,0 +1,12 @@
package builder
import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/lookup"
)
type Server struct {
FinalizationFetcher blockchain.FinalizationFetcher
OptimisticModeFetcher blockchain.OptimisticModeFetcher
Stater lookup.Stater
}

View File

@@ -0,0 +1,14 @@
package builder
type ExpectedWithdrawalsResponse struct {
Data []*ExpectedWithdrawal `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type ExpectedWithdrawal struct {
Address string `json:"address" hex:"true"`
Amount string `json:"amount"`
Index string `json:"index"`
ValidatorIndex string `json:"validator_index"`
}
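Given these structs, a response body marshals to roughly the following shape. A minimal sketch that prints one illustrative entry (all values are made up; the address is a placeholder):

package main

import (
	"encoding/json"
	"fmt"
)

// Local mirrors of the structs above, trimmed for the example.
type expectedWithdrawal struct {
	Address        string `json:"address"`
	Amount         string `json:"amount"`
	Index          string `json:"index"`
	ValidatorIndex string `json:"validator_index"`
}

type expectedWithdrawalsResponse struct {
	Data                []expectedWithdrawal `json:"data"`
	ExecutionOptimistic bool                 `json:"execution_optimistic"`
	Finalized           bool                 `json:"finalized"`
}

func main() {
	resp := expectedWithdrawalsResponse{
		Data: []expectedWithdrawal{{
			Address:        "0x0000000000000000000000000000000000000000",
			Amount:         "31998257885",
			Index:          "0",
			ValidatorIndex: "5",
		}},
		ExecutionOptimistic: true,
	}
	b, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Println(string(b))
}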

View File

@@ -13,14 +13,21 @@ go_library(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//network:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_wealdtech_go_bytesutil//:go_default_library",
],
)
@@ -33,6 +40,7 @@ go_test(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen/mock:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -1,6 +1,8 @@
package rewards
import (
"encoding/json"
"fmt"
"net/http"
"strconv"
"strings"
@@ -8,12 +10,19 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/altair"
coreblocks "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/lookup"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/network"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/wealdtech/go-bytesutil"
)
// BlockRewards is an HTTP handler for Beacon API getBlockRewards.
@@ -28,7 +37,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
}
if blk.Version() == version.Phase0 {
errJson := &network.DefaultErrorJson{
Message: "block rewards are not supported for Phase 0 blocks",
Message: "Block rewards are not supported for Phase 0 blocks",
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
@@ -41,7 +50,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
st, err := s.ReplayerBuilder.ReplayerForSlot(blk.Block().Slot()-1).ReplayToSlot(r.Context(), blk.Block().Slot())
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get state").Error(),
Message: "Could not get state: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -52,7 +61,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
initBalance, err := st.BalanceAtIndex(proposerIndex)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get proposer's balance").Error(),
Message: "Could not get proposer's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -61,7 +70,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
st, err = altair.ProcessAttestationsNoVerifySignature(r.Context(), st, blk)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get attestation rewards").Error(),
Message: "Could not get attestation rewards" + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -70,7 +79,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
attBalance, err := st.BalanceAtIndex(proposerIndex)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get proposer's balance").Error(),
Message: "Could not get proposer's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -79,7 +88,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
st, err = coreblocks.ProcessAttesterSlashings(r.Context(), st, blk.Block().Body().AttesterSlashings(), validators.SlashValidator)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get attester slashing rewards").Error(),
Message: "Could not get attester slashing rewards: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -88,7 +97,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
attSlashingsBalance, err := st.BalanceAtIndex(proposerIndex)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get proposer's balance").Error(),
Message: "Could not get proposer's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -97,7 +106,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
st, err = coreblocks.ProcessProposerSlashings(r.Context(), st, blk.Block().Body().ProposerSlashings(), validators.SlashValidator)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get proposer slashing rewards").Error(),
Message: "Could not get proposer slashing rewards" + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -106,7 +115,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
proposerSlashingsBalance, err := st.BalanceAtIndex(proposerIndex)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get proposer's balance").Error(),
Message: "Could not get proposer's balance: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -115,7 +124,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
sa, err := blk.Block().Body().SyncAggregate()
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get sync aggregate").Error(),
Message: "Could not get sync aggregate: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -125,7 +134,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
_, syncCommitteeReward, err = altair.ProcessSyncAggregate(r.Context(), st, sa)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get sync aggregate rewards").Error(),
Message: "Could not get sync aggregate rewards: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -135,7 +144,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(r.Context())
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get optimistic mode info").Error(),
Message: "Could not get optimistic mode info: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -144,7 +153,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
blkRoot, err := blk.Block().HashTreeRoot()
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get block root").Error(),
Message: "Could not get block root: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
@@ -152,7 +161,7 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
}
response := &BlockRewardsResponse{
Data: &BlockRewards{
Data: BlockRewards{
ProposerIndex: strconv.FormatUint(uint64(proposerIndex), 10),
Total: strconv.FormatUint(proposerSlashingsBalance-initBalance+syncCommitteeReward, 10),
Attestations: strconv.FormatUint(attBalance-initBalance, 10),
@@ -166,22 +175,300 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
network.WriteJson(w, response)
}
// AttestationRewards retrieves attestation reward info for validators specified by an array of public keys or validator indices.
// If no array is provided, it returns reward info for every validator.
// TODO: Inclusion delay
func (s *Server) AttestationRewards(w http.ResponseWriter, r *http.Request) {
st, ok := s.attRewardsState(w, r)
if !ok {
return
}
bal, vals, valIndices, ok := attRewardsBalancesAndVals(w, r, st)
if !ok {
return
}
totalRewards, ok := totalAttRewards(w, st, bal, vals, valIndices)
if !ok {
return
}
idealRewards, ok := idealAttRewards(w, st, bal, vals)
if !ok {
return
}
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(r.Context())
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not get optimistic mode info: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
return
}
blkRoot, err := st.LatestBlockHeader().HashTreeRoot()
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not get block root: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
return
}
resp := &AttestationRewardsResponse{
Data: AttestationRewards{
IdealRewards: idealRewards,
TotalRewards: totalRewards,
},
ExecutionOptimistic: optimistic,
Finalized: s.FinalizationFetcher.IsFinalized(r.Context(), blkRoot),
}
network.WriteJson(w, resp)
}
func (s *Server) attRewardsState(w http.ResponseWriter, r *http.Request) (state.BeaconState, bool) {
segments := strings.Split(r.URL.Path, "/")
requestedEpoch, err := strconv.ParseUint(segments[len(segments)-1], 10, 64)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not decode epoch: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, false
}
if primitives.Epoch(requestedEpoch) < params.BeaconConfig().AltairForkEpoch {
errJson := &network.DefaultErrorJson{
Message: "Attestation rewards are not supported for Phase 0",
Code: http.StatusNotFound,
}
network.WriteError(w, errJson)
return nil, false
}
currentEpoch := uint64(slots.ToEpoch(s.TimeFetcher.CurrentSlot()))
if requestedEpoch+1 >= currentEpoch {
errJson := &network.DefaultErrorJson{
Code: http.StatusNotFound,
Message: "Attestation rewards are available after two epoch transitions to ensure all attestations have a chance of inclusion",
}
network.WriteError(w, errJson)
return nil, false
}
nextEpochEnd, err := slots.EpochEnd(primitives.Epoch(requestedEpoch + 1))
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not get next epoch's ending slot: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
return nil, false
}
st, err := s.Stater.StateBySlot(r.Context(), nextEpochEnd)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not get state for epoch's starting slot: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
return nil, false
}
return st, true
}
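// Concretely, the guard above means that with the chain at epoch N, the newest
// epoch whose rewards can be requested is N-2, because the state is read at the
// end of epoch requested+1. A minimal sketch of that bound, assuming the same
// semantics (illustrative; not part of this package):
func newestQueryableEpoch(currentEpoch uint64) (uint64, bool) {
	if currentEpoch < 2 {
		return 0, false // nothing has completed two epoch transitions yet
	}
	return currentEpoch - 2, true
}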
func attRewardsBalancesAndVals(
w http.ResponseWriter,
r *http.Request,
st state.BeaconState,
) (*precompute.Balance, []*precompute.Validator, []primitives.ValidatorIndex, bool) {
allVals, bal, err := altair.InitializePrecomputeValidators(r.Context(), st)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not initialize precompute validators: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
allVals, bal, err = altair.ProcessEpochParticipation(r.Context(), st, bal, allVals)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not process epoch participation: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
var rawValIds []string
if r.Body != http.NoBody {
if err = json.NewDecoder(r.Body).Decode(&rawValIds); err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not decode validators: " + err.Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
}
valIndices := make([]primitives.ValidatorIndex, len(rawValIds))
for i, v := range rawValIds {
index, err := strconv.ParseUint(v, 10, 64)
if err != nil {
pubkey, err := bytesutil.FromHexString(v)
if err != nil || len(pubkey) != fieldparams.BLSPubkeyLength {
errJson := &network.DefaultErrorJson{
Message: fmt.Sprintf("%s is not a validator index or pubkey", v),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
var ok bool
valIndices[i], ok = st.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubkey))
if !ok {
errJson := &network.DefaultErrorJson{
Message: fmt.Sprintf("No validator index found for pubkey %#x", pubkey),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
} else {
if index >= uint64(st.NumValidators()) {
errJson := &network.DefaultErrorJson{
Message: fmt.Sprintf("Validator index %d is too large. Maximum allowed index is %d", index, st.NumValidators()-1),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return nil, nil, nil, false
}
valIndices[i] = primitives.ValidatorIndex(index)
}
}
if len(valIndices) == 0 {
valIndices = make([]primitives.ValidatorIndex, len(allVals))
for i := 0; i < len(allVals); i++ {
valIndices[i] = primitives.ValidatorIndex(i)
}
}
if len(valIndices) == len(allVals) {
return bal, allVals, valIndices, true
} else {
filteredVals := make([]*precompute.Validator, len(valIndices))
for i, valIx := range valIndices {
filteredVals[i] = allVals[valIx]
}
return bal, filteredVals, valIndices, true
}
}
// idealAttRewards returns rewards for hypothetical, perfectly voting validators
// whose effective balances are above EJECTION_BALANCE and match the effective balances of the passed-in validators.
func idealAttRewards(
w http.ResponseWriter,
st state.BeaconState,
bal *precompute.Balance,
vals []*precompute.Validator,
) ([]IdealAttestationReward, bool) {
idealValsCount := uint64(16)
minIdealBalance := uint64(17)
maxIdealBalance := minIdealBalance + idealValsCount - 1
idealRewards := make([]IdealAttestationReward, 0, idealValsCount)
idealVals := make([]*precompute.Validator, 0, idealValsCount)
increment := params.BeaconConfig().EffectiveBalanceIncrement
for i := minIdealBalance; i <= maxIdealBalance; i++ {
for _, v := range vals {
if v.CurrentEpochEffectiveBalance/1e9 == i {
effectiveBalance := i * increment
idealVals = append(idealVals, &precompute.Validator{
IsActivePrevEpoch: true,
IsSlashed: false,
CurrentEpochEffectiveBalance: effectiveBalance,
IsPrevEpochSourceAttester: true,
IsPrevEpochTargetAttester: true,
IsPrevEpochHeadAttester: true,
})
idealRewards = append(idealRewards, IdealAttestationReward{EffectiveBalance: strconv.FormatUint(effectiveBalance, 10)})
break
}
}
}
deltas, err := altair.AttestationsDelta(st, bal, idealVals)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not get attestations delta: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
return nil, false
}
for i, d := range deltas {
idealRewards[i].Head = strconv.FormatUint(d.HeadReward, 10)
if d.SourcePenalty > 0 {
idealRewards[i].Source = fmt.Sprintf("-%s", strconv.FormatUint(d.SourcePenalty, 10))
} else {
idealRewards[i].Source = strconv.FormatUint(d.SourceReward, 10)
}
if d.TargetPenalty > 0 {
idealRewards[i].Target = fmt.Sprintf("-%s", strconv.FormatUint(d.TargetPenalty, 10))
} else {
idealRewards[i].Target = strconv.FormatUint(d.TargetReward, 10)
}
}
return idealRewards, true
}
func totalAttRewards(
w http.ResponseWriter,
st state.BeaconState,
bal *precompute.Balance,
vals []*precompute.Validator,
valIndices []primitives.ValidatorIndex,
) ([]TotalAttestationReward, bool) {
totalRewards := make([]TotalAttestationReward, len(valIndices))
for i, v := range valIndices {
totalRewards[i] = TotalAttestationReward{ValidatorIndex: strconv.FormatUint(uint64(v), 10)}
}
deltas, err := altair.AttestationsDelta(st, bal, vals)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: "Could not get attestations delta: " + err.Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
return nil, false
}
for i, d := range deltas {
totalRewards[i].Head = strconv.FormatUint(d.HeadReward, 10)
if d.SourcePenalty > 0 {
totalRewards[i].Source = fmt.Sprintf("-%s", strconv.FormatUint(d.SourcePenalty, 10))
} else {
totalRewards[i].Source = strconv.FormatUint(d.SourceReward, 10)
}
if d.TargetPenalty > 0 {
totalRewards[i].Target = fmt.Sprintf("-%s", strconv.FormatUint(d.TargetPenalty, 10))
} else {
totalRewards[i].Target = strconv.FormatUint(d.TargetReward, 10)
}
}
return totalRewards, true
}
func handleGetBlockError(blk interfaces.ReadOnlySignedBeaconBlock, err error) *network.DefaultErrorJson {
if errors.Is(err, lookup.BlockIdParseError{}) {
return &network.DefaultErrorJson{
Message: errors.Wrapf(err, "invalid block ID").Error(),
Message: "Invalid block ID: " + err.Error(),
Code: http.StatusBadRequest,
}
}
if err != nil {
return &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not get block from block ID").Error(),
Message: "Could not get block from block ID: " + err.Error(),
Code: http.StatusInternalServerError,
}
}
if err := blocks.BeaconBlockIsNil(blk); err != nil {
return &network.DefaultErrorJson{
Message: errors.Wrapf(err, "could not find requested block").Error(),
Message: "Could not find requested block" + err.Error(),
Code: http.StatusNotFound,
}
}

View File

@@ -4,8 +4,11 @@ import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"strconv"
"strings"
"testing"
"github.com/prysmaticlabs/go-bitfield"
@@ -13,6 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
mockstategen "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stategen/mock"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -32,7 +36,7 @@ import (
func TestBlockRewards(t *testing.T) {
valCount := 64
st, err := util.NewBeaconStateAltair()
st, err := util.NewBeaconStateCapella()
require.NoError(t, err)
require.NoError(t, st.SetSlot(1))
validators := make([]*eth.Validator, 0, valCount)
@@ -193,6 +197,285 @@ func TestBlockRewards(t *testing.T) {
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "block rewards are not supported for Phase 0 blocks", e.Message)
assert.Equal(t, "Block rewards are not supported for Phase 0 blocks", e.Message)
})
}
func TestAttestationRewards(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 1
params.OverrideBeaconConfig(cfg)
valCount := 64
st, err := util.NewBeaconStateCapella()
require.NoError(t, err)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*3-1))
validators := make([]*eth.Validator, 0, valCount)
balances := make([]uint64, 0, valCount)
secretKeys := make([]bls.SecretKey, 0, valCount)
for i := 0; i < valCount; i++ {
blsKey, err := bls.RandKey()
require.NoError(t, err)
secretKeys = append(secretKeys, blsKey)
validators = append(validators, &eth.Validator{
PublicKey: blsKey.PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance / 64 * uint64(i+1),
})
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance/64*uint64(i+1))
}
require.NoError(t, st.SetValidators(validators))
require.NoError(t, st.SetBalances(balances))
require.NoError(t, st.SetInactivityScores(make([]uint64, len(validators))))
participation := make([]byte, len(validators))
for i := range participation {
participation[i] = 0b111
}
require.NoError(t, st.SetCurrentParticipationBits(participation))
require.NoError(t, st.SetPreviousParticipationBits(participation))
currentSlot := params.BeaconConfig().SlotsPerEpoch * 3
mockChainService := &mock.ChainService{Optimistic: true, Slot: &currentSlot}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
params.BeaconConfig().SlotsPerEpoch*3 - 1: st,
}},
TimeFetcher: mockChainService,
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
}
t.Run("ok - ideal rewards", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &AttestationRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 16, len(resp.Data.IdealRewards))
sum := uint64(0)
for _, r := range resp.Data.IdealRewards {
hr, err := strconv.ParseUint(r.Head, 10, 64)
require.NoError(t, err)
sr, err := strconv.ParseUint(r.Source, 10, 64)
require.NoError(t, err)
tr, err := strconv.ParseUint(r.Target, 10, 64)
require.NoError(t, err)
sum += hr + sr + tr
}
assert.Equal(t, uint64(20756849), sum)
})
t.Run("ok - filtered vals", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"
var body bytes.Buffer
pubkey := fmt.Sprintf("%#x", secretKeys[10].PublicKey().Marshal())
valIds, err := json.Marshal([]string{"20", pubkey})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &AttestationRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data.TotalRewards))
sum := uint64(0)
for _, r := range resp.Data.TotalRewards {
hr, err := strconv.ParseUint(r.Head, 10, 64)
require.NoError(t, err)
sr, err := strconv.ParseUint(r.Source, 10, 64)
require.NoError(t, err)
tr, err := strconv.ParseUint(r.Target, 10, 64)
require.NoError(t, err)
sum += hr + sr + tr
}
assert.Equal(t, uint64(794265), sum)
})
t.Run("ok - all vals", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &AttestationRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 64, len(resp.Data.TotalRewards))
sum := uint64(0)
for _, r := range resp.Data.TotalRewards {
hr, err := strconv.ParseUint(r.Head, 10, 64)
require.NoError(t, err)
sr, err := strconv.ParseUint(r.Source, 10, 64)
require.NoError(t, err)
tr, err := strconv.ParseUint(r.Target, 10, 64)
require.NoError(t, err)
sum += hr + sr + tr
}
assert.Equal(t, uint64(54221955), sum)
})
t.Run("ok - penalty", func(t *testing.T) {
st, err := util.NewBeaconStateCapella()
require.NoError(t, err)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*3-1))
validators := make([]*eth.Validator, 0, valCount)
balances := make([]uint64, 0, valCount)
secretKeys := make([]bls.SecretKey, 0, valCount)
for i := 0; i < valCount; i++ {
blsKey, err := bls.RandKey()
require.NoError(t, err)
secretKeys = append(secretKeys, blsKey)
validators = append(validators, &eth.Validator{
PublicKey: blsKey.PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance / 64 * uint64(i),
})
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance/64*uint64(i))
}
validators[63].Slashed = true
require.NoError(t, st.SetValidators(validators))
require.NoError(t, st.SetBalances(balances))
require.NoError(t, st.SetInactivityScores(make([]uint64, len(validators))))
participation := make([]byte, len(validators))
for i := range participation {
participation[i] = 0b111
}
require.NoError(t, st.SetCurrentParticipationBits(participation))
require.NoError(t, st.SetPreviousParticipationBits(participation))
currentSlot := params.BeaconConfig().SlotsPerEpoch * 3
mockChainService := &mock.ChainService{Optimistic: true, Slot: &currentSlot}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
params.BeaconConfig().SlotsPerEpoch*3 - 1: st,
}},
TimeFetcher: mockChainService,
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
}
url := "http://only.the.epoch.number.at.the.end.is.important/1"
var body bytes.Buffer
valIds, err := json.Marshal([]string{"63"})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &AttestationRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, "0", resp.Data.TotalRewards[0].Head)
assert.Equal(t, "-432270", resp.Data.TotalRewards[0].Source)
assert.Equal(t, "-802788", resp.Data.TotalRewards[0].Target)
})
t.Run("invalid validator index/pubkey", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"
var body bytes.Buffer
valIds, err := json.Marshal([]string{"10", "foo"})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "foo is not a validator index or pubkey", e.Message)
})
t.Run("unknown validator pubkey", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"
var body bytes.Buffer
privkey, err := bls.RandKey()
require.NoError(t, err)
pubkey := fmt.Sprintf("%#x", privkey.PublicKey().Marshal())
valIds, err := json.Marshal([]string{"10", pubkey})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "No validator index found for pubkey "+pubkey, e.Message)
})
t.Run("validator index too large", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"
var body bytes.Buffer
valIds, err := json.Marshal([]string{"10", "999"})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, "Validator index 999 is too large. Maximum allowed index is 63", e.Message)
})
t.Run("phase 0", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/0"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusNotFound, e.Code)
assert.Equal(t, "Attestation rewards are not supported for Phase 0", e.Message)
})
t.Run("invalid epoch", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/foo"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "Could not decode epoch"))
})
t.Run("previous epoch", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/2"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusNotFound, e.Code)
assert.Equal(t, "Attestation rewards are available after two epoch transitions to ensure all attestations have a chance of inclusion", e.Message)
})
}

View File

@@ -11,4 +11,8 @@ type Server struct {
OptimisticModeFetcher blockchain.OptimisticModeFetcher
FinalizationFetcher blockchain.FinalizationFetcher
ReplayerBuilder stategen.ReplayerBuilder
// TODO: Init
TimeFetcher blockchain.TimeFetcher
Stater lookup.Stater
HeadFetcher blockchain.HeadFetcher
}

View File

@@ -1,9 +1,9 @@
package rewards
type BlockRewardsResponse struct {
Data *BlockRewards `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data BlockRewards `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type BlockRewards struct {
@@ -14,3 +14,29 @@ type BlockRewards struct {
ProposerSlashings string `json:"proposer_slashings"`
AttesterSlashings string `json:"attester_slashings"`
}
type AttestationRewardsResponse struct {
Data AttestationRewards `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type AttestationRewards struct {
IdealRewards []IdealAttestationReward `json:"ideal_rewards"`
TotalRewards []TotalAttestationReward `json:"total_rewards"`
}
type IdealAttestationReward struct {
EffectiveBalance string `json:"effective_balance"`
Head string `json:"head"`
Target string `json:"target"`
Source string `json:"source"`
}
type TotalAttestationReward struct {
ValidatorIndex string `json:"validator_index"`
Head string `json:"head"`
Target string `json:"target"`
Source string `json:"source"`
InclusionDelay string `json:"inclusion_delay"`
}
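For reference, a response built from these structs marshals to roughly the shape below; penalties are rendered as negative decimal strings, as in the handlers above. A minimal sketch with illustrative values (the total rewards mirror the penalty case from the tests; the ideal reward numbers are made up):

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed mirrors of the structs above for the example.
type idealReward struct {
	EffectiveBalance string `json:"effective_balance"`
	Head             string `json:"head"`
	Target           string `json:"target"`
	Source           string `json:"source"`
}

type totalReward struct {
	ValidatorIndex string `json:"validator_index"`
	Head           string `json:"head"`
	Target         string `json:"target"`
	Source         string `json:"source"`
}

func main() {
	resp := map[string]any{
		"data": map[string]any{
			"ideal_rewards": []idealReward{{EffectiveBalance: "32000000000", Head: "2856", Target: "5511", Source: "2966"}},
			"total_rewards": []totalReward{{ValidatorIndex: "63", Head: "0", Target: "-802788", Source: "-432270"}},
		},
		"execution_optimistic": true,
		"finalized":            false,
	}
	b, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Println(string(b))
}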

View File

@@ -0,0 +1,49 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"handlers.go",
"server.go",
"structs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/node",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/sync:go_default_library",
"//network:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"handlers_test.go",
"server_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//network:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/host/peerstore/test:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
],
)

View File

@@ -0,0 +1,177 @@
package node
import (
"encoding/json"
"io"
"net/http"
"strings"
corenet "github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v4/network"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
// ListTrustedPeer retrieves data about the node's trusted peers.
func (s *Server) ListTrustedPeer(w http.ResponseWriter, r *http.Request) {
peerStatus := s.PeersFetcher.Peers()
allIds := s.PeersFetcher.Peers().GetTrustedPeers()
allPeers := make([]*Peer, 0, len(allIds))
for _, id := range allIds {
p, err := httpPeerInfo(peerStatus, id)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not get peer info").Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
return
}
// Peers added to the trusted set but never connected should also be listed
if p == nil {
p = &Peer{
PeerID: id.String(),
Enr: "",
LastSeenP2PAddress: "",
State: eth.ConnectionState(corenet.NotConnected).String(),
Direction: eth.PeerDirection(corenet.DirUnknown).String(),
}
}
allPeers = append(allPeers, p)
}
response := &PeersResponse{Peers: allPeers}
network.WriteJson(w, response)
}
// AddTrustedPeer adds a new peer to the node's trusted peer set by its multiaddress.
func (s *Server) AddTrustedPeer(w http.ResponseWriter, r *http.Request) {
body, err := io.ReadAll(r.Body)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not read request body").Error(),
Code: http.StatusInternalServerError,
}
network.WriteError(w, errJson)
return
}
var addrRequest *AddrRequest
err = json.Unmarshal(body, &addrRequest)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not decode request body into peer address").Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return
}
info, err := peer.AddrInfoFromString(addrRequest.Addr)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not derive peer info from multiaddress").Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return
}
// Also add the new peer's data to the peer store.
direction, err := s.PeersFetcher.Peers().Direction(info.ID)
if err != nil {
s.PeersFetcher.Peers().Add(nil, info.ID, info.Addrs[0], corenet.DirUnknown)
} else {
s.PeersFetcher.Peers().Add(nil, info.ID, info.Addrs[0], direction)
}
peers := []peer.ID{}
peers = append(peers, info.ID)
s.PeersFetcher.Peers().SetTrustedPeers(peers)
w.WriteHeader(http.StatusOK)
}
// RemoveTrustedPeer removes a peer from the node's trusted peer set but does not close the connection.
func (s *Server) RemoveTrustedPeer(w http.ResponseWriter, r *http.Request) {
segments := strings.Split(r.URL.Path, "/")
id := segments[len(segments)-1]
peerId, err := peer.Decode(id)
if err != nil {
errJson := &network.DefaultErrorJson{
Message: errors.Wrapf(err, "Could not decode peer id").Error(),
Code: http.StatusBadRequest,
}
network.WriteError(w, errJson)
return
}
// If the peer is not a trusted peer, do nothing and return 200.
if !s.PeersFetcher.Peers().IsTrustedPeers(peerId) {
w.WriteHeader(http.StatusOK)
return
}
peers := []peer.ID{}
peers = append(peers, peerId)
s.PeersFetcher.Peers().DeleteTrustedPeers(peers)
w.WriteHeader(http.StatusOK)
}
// httpPeerInfo does the same thing as the peerInfo function in node.go but returns the
// HTTP peer response.
func httpPeerInfo(peerStatus *peers.Status, id peer.ID) (*Peer, error) {
enr, err := peerStatus.ENR(id)
if err != nil {
if errors.Is(err, peerdata.ErrPeerUnknown) {
return nil, nil
}
return nil, errors.Wrap(err, "could not obtain ENR")
}
var serializedEnr string
if enr != nil {
serializedEnr, err = p2p.SerializeENR(enr)
if err != nil {
return nil, errors.Wrap(err, "could not serialize ENR")
}
}
address, err := peerStatus.Address(id)
if err != nil {
if errors.Is(err, peerdata.ErrPeerUnknown) {
return nil, nil
}
return nil, errors.Wrap(err, "could not obtain address")
}
connectionState, err := peerStatus.ConnectionState(id)
if err != nil {
if errors.Is(err, peerdata.ErrPeerUnknown) {
return nil, nil
}
return nil, errors.Wrap(err, "could not obtain connection state")
}
direction, err := peerStatus.Direction(id)
if err != nil {
if errors.Is(err, peerdata.ErrPeerUnknown) {
return nil, nil
}
return nil, errors.Wrap(err, "could not obtain direction")
}
if eth.PeerDirection(direction) == eth.PeerDirection_UNKNOWN {
return nil, nil
}
v1ConnState := eth.ConnectionState(connectionState).String()
v1PeerDirection := eth.PeerDirection(direction).String()
p := Peer{
PeerID: id.String(),
State: v1ConnState,
Direction: v1PeerDirection,
}
if address != nil {
p.LastSeenP2PAddress = address.String()
}
if serializedEnr != "" {
p.Enr = "enr:" + serializedEnr
}
return &p, nil
}
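As a usage sketch, the handlers above can be exercised over plain HTTP once wired into the node's router. The base URL and route paths below are assumptions for illustration only; the real routes are registered elsewhere in the node and may differ.

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	base := "http://localhost:3500" // assumed HTTP gateway address

	// AddTrustedPeer expects a JSON body of the form {"addr": "<multiaddr>"}.
	body := bytes.NewBufferString(`{"addr":"/ip4/127.0.0.1/tcp/30303/p2p/16Uiu2HAm1n583t4huDMMqEUUBuQs6bLts21mxCfX3tiqu9JfHvRJ"}`)
	resp, err := http.Post(base+"/prysm/node/trusted_peers", "application/json", body) // assumed path
	if err != nil {
		panic(err)
	}
	fmt.Println("add:", resp.Status)
	resp.Body.Close()

	// ListTrustedPeer returns the current trusted peer list.
	resp, err = http.Get(base + "/prysm/node/trusted_peers") // assumed path
	if err != nil {
		panic(err)
	}
	fmt.Println("list:", resp.Status)
	resp.Body.Close()
}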

View File

@@ -0,0 +1,250 @@
package node
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"strconv"
"testing"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
corenet "github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
libp2ptest "github.com/libp2p/go-libp2p/p2p/host/peerstore/test"
ma "github.com/multiformats/go-multiaddr"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers"
mockp2p "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v4/network"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
type testIdentity enode.ID
func (_ testIdentity) Verify(_ *enr.Record, _ []byte) error { return nil }
func (id testIdentity) NodeAddr(_ *enr.Record) []byte { return id[:] }
func TestListTrustedPeer(t *testing.T) {
ids := libp2ptest.GeneratePeerIDs(9)
peerFetcher := &mockp2p.MockPeersProvider{}
peerFetcher.ClearPeers()
peerStatus := peerFetcher.Peers()
for i, id := range ids {
if i == len(ids)-1 {
var p2pAddr = "/ip4/127.0.0." + strconv.Itoa(i) + "/udp/12000/p2p/16Uiu2HAm7yD5fhhw1Kihg5pffaGbvKV3k7sqxRGHMZzkb7u9UUxQ"
p2pMultiAddr, err := ma.NewMultiaddr(p2pAddr)
require.NoError(t, err)
peerStatus.Add(nil, id, p2pMultiAddr, corenet.DirUnknown)
continue
}
enrRecord := &enr.Record{}
err := enrRecord.SetSig(testIdentity{1}, []byte{42})
require.NoError(t, err)
enrRecord.Set(enr.IPv4{127, 0, 0, byte(i)})
err = enrRecord.SetSig(testIdentity{}, []byte{})
require.NoError(t, err)
var p2pAddr = "/ip4/127.0.0." + strconv.Itoa(i) + "/udp/12000/p2p/16Uiu2HAm7yD5fhhw1Kihg5pffaGbvKV3k7sqxRGHMZzkb7u9UUxQ"
p2pMultiAddr, err := ma.NewMultiaddr(p2pAddr)
require.NoError(t, err)
var direction corenet.Direction
if i%2 == 0 {
direction = corenet.DirInbound
} else {
direction = corenet.DirOutbound
}
peerStatus.Add(enrRecord, id, p2pMultiAddr, direction)
switch i {
case 0, 1:
peerStatus.SetConnectionState(id, peers.PeerConnecting)
case 2, 3:
peerStatus.SetConnectionState(id, peers.PeerConnected)
case 4, 5:
peerStatus.SetConnectionState(id, peers.PeerDisconnecting)
case 6, 7:
peerStatus.SetConnectionState(id, peers.PeerDisconnected)
default:
t.Fatalf("Failed to set connection state for peer")
}
}
s := Server{PeersFetcher: peerFetcher}
// set all peers as trusted peers
s.PeersFetcher.Peers().SetTrustedPeers(ids)
t.Run("Peer data OK", func(t *testing.T) {
url := "http://anything.is.fine"
request := httptest.NewRequest("GET", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.ListTrustedPeer(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &PeersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
peers := resp.Peers
// Assert the number of trusted peers is correct
assert.Equal(t, 9, len(peers))
for i := 0; i < 9; i++ {
pid, err := peer.Decode(peers[i].PeerID)
require.NoError(t, err)
if pid == ids[8] {
assert.Equal(t, "", peers[i].Enr)
assert.Equal(t, "", peers[i].LastSeenP2PAddress)
assert.Equal(t, "DISCONNECTED", peers[i].State)
assert.Equal(t, "UNKNOWN", peers[i].Direction)
continue
}
expectedEnr, err := peerStatus.ENR(pid)
require.NoError(t, err)
serializeENR, err := p2p.SerializeENR(expectedEnr)
require.NoError(t, err)
assert.Equal(t, "enr:"+serializeENR, peers[i].Enr)
expectedP2PAddr, err := peerStatus.Address(pid)
require.NoError(t, err)
assert.Equal(t, expectedP2PAddr.String(), peers[i].LastSeenP2PAddress)
switch pid {
case ids[0]:
assert.Equal(t, "CONNECTING", peers[i].State)
assert.Equal(t, "INBOUND", peers[i].Direction)
case ids[1]:
assert.Equal(t, "CONNECTING", peers[i].State)
assert.Equal(t, "OUTBOUND", peers[i].Direction)
case ids[2]:
assert.Equal(t, "CONNECTED", peers[i].State)
assert.Equal(t, "INBOUND", peers[i].Direction)
case ids[3]:
assert.Equal(t, "CONNECTED", peers[i].State)
assert.Equal(t, "OUTBOUND", peers[i].Direction)
case ids[4]:
assert.Equal(t, "DISCONNECTING", peers[i].State)
assert.Equal(t, "INBOUND", peers[i].Direction)
case ids[5]:
assert.Equal(t, "DISCONNECTING", peers[i].State)
assert.Equal(t, "OUTBOUND", peers[i].Direction)
case ids[6]:
assert.Equal(t, "DISCONNECTED", peers[i].State)
assert.Equal(t, "INBOUND", peers[i].Direction)
case ids[7]:
assert.Equal(t, "DISCONNECTED", peers[i].State)
assert.Equal(t, "OUTBOUND", peers[i].Direction)
default:
t.Fatalf("Failed to get connection state and direction for peer")
}
}
})
}
func TestListTrustedPeers_NoPeersReturnsEmptyArray(t *testing.T) {
peerFetcher := &mockp2p.MockPeersProvider{}
peerFetcher.ClearPeers()
s := Server{PeersFetcher: peerFetcher}
url := "http://anything.is.fine"
request := httptest.NewRequest("GET", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.ListTrustedPeer(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &PeersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
peers := resp.Peers
assert.Equal(t, 0, len(peers))
}
func TestAddTrustedPeer(t *testing.T) {
peerFetcher := &mockp2p.MockPeersProvider{}
peerFetcher.ClearPeers()
s := Server{PeersFetcher: peerFetcher}
url := "http://anything.is.fine"
addr := &AddrRequest{
Addr: "/ip4/127.0.0.1/tcp/30303/p2p/16Uiu2HAm1n583t4huDMMqEUUBuQs6bLts21mxCfX3tiqu9JfHvRJ",
}
addrJson, err := json.Marshal(addr)
require.NoError(t, err)
var body bytes.Buffer
_, err = body.Write(addrJson)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AddTrustedPeer(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
}
func TestAddTrustedPeer_EmptyBody(t *testing.T) {
peerFetcher := &mockp2p.MockPeersProvider{}
peerFetcher.ClearPeers()
s := Server{PeersFetcher: peerFetcher}
url := "http://anything.is.fine"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AddTrustedPeer(writer, request)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.Equal(t, "Could not decode request body into peer address: unexpected end of JSON input", e.Message)
}
func TestAddTrustedPeer_BadAddress(t *testing.T) {
peerFetcher := &mockp2p.MockPeersProvider{}
peerFetcher.ClearPeers()
s := Server{PeersFetcher: peerFetcher}
url := "http://anything.is.fine"
addr := &AddrRequest{
Addr: "anything/but/not/an/address",
}
addrJson, err := json.Marshal(addr)
require.NoError(t, err)
var body bytes.Buffer
_, err = body.Write(addrJson)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AddTrustedPeer(writer, request)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Could not derive peer info from multiaddress", e.Message)
}
func TestRemoveTrustedPeer(t *testing.T) {
peerFetcher := &mockp2p.MockPeersProvider{}
peerFetcher.ClearPeers()
s := Server{PeersFetcher: peerFetcher}
url := "http://anything.is.fine.but.last.is.important/16Uiu2HAm1n583t4huDMMqEUUBuQs6bLts21mxCfX3tiqu9JfHvRJ"
request := httptest.NewRequest("DELETE", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.RemoveTrustedPeer(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
}
func TestRemoveTrustedPeer_EmptyParameter(t *testing.T) {
peerFetcher := &mockp2p.MockPeersProvider{}
peerFetcher.ClearPeers()
s := Server{PeersFetcher: peerFetcher}
url := "http://anything.is.fine"
request := httptest.NewRequest("DELETE", url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.RemoveTrustedPeer(writer, request)
e := &network.DefaultErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.Equal(t, "Could not decode peer id: failed to parse peer ID: invalid cid: cid too short", e.Message)
}

View File

@@ -0,0 +1,21 @@
package node
import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
)
type Server struct {
SyncChecker sync.Checker
OptimisticModeFetcher blockchain.OptimisticModeFetcher
BeaconDB db.ReadOnlyDatabase
PeersFetcher p2p.PeersProvider
PeerManager p2p.PeerManager
MetadataProvider p2p.MetadataProvider
GenesisTimeFetcher blockchain.TimeFetcher
HeadFetcher blockchain.HeadFetcher
ExecutionChainInfoFetcher execution.ChainInfoFetcher
}

View File

@@ -0,0 +1 @@
package node

View File

@@ -0,0 +1,17 @@
package node
type AddrRequest struct {
Addr string `json:"addr"`
}
type PeersResponse struct {
Peers []*Peer `json:"peers"`
}
type Peer struct {
PeerID string `json:"peer_id"`
Enr string `json:"enr"`
LastSeenP2PAddress string `json:"last_seen_p2p_address"`
State string `json:"state"`
Direction string `json:"direction"`
}

View File

@@ -38,6 +38,7 @@ go_library(
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",

View File

@@ -13,6 +13,7 @@ import (
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/cmd"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -659,153 +660,14 @@ func (bs *Server) GetValidatorPerformance(
ctx context.Context, req *ethpb.ValidatorPerformanceRequest,
) (*ethpb.ValidatorPerformanceResponse, error) {
if bs.SyncChecker.Syncing() {
return nil, status.Errorf(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
headState, err := bs.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
currSlot := bs.GenesisTimeFetcher.CurrentSlot()
if currSlot > headState.Slot() {
headRoot, err := bs.HeadFetcher.HeadRoot(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not retrieve head root: %v", err)
}
headState, err = transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, currSlot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots up to %d: %v", currSlot, err)
}
response, err := core.ComputeValidatorPerformance(ctx, req, bs.HeadFetcher, currSlot)
if err != nil {
return nil, status.Errorf(core.ErrorReasonToGRPC(err.Reason), "Could not compute validator performance: %v", err.Err)
}
var validatorSummary []*precompute.Validator
if headState.Version() == version.Phase0 {
vp, bp, err := precompute.New(ctx, headState)
if err != nil {
return nil, err
}
vp, bp, err = precompute.ProcessAttestations(ctx, headState, vp, bp)
if err != nil {
return nil, err
}
headState, err = precompute.ProcessRewardsAndPenaltiesPrecompute(headState, bp, vp, precompute.AttestationsDelta, precompute.ProposersDelta)
if err != nil {
return nil, err
}
validatorSummary = vp
} else if headState.Version() >= version.Altair {
vp, bp, err := altair.InitializePrecomputeValidators(ctx, headState)
if err != nil {
return nil, err
}
vp, bp, err = altair.ProcessEpochParticipation(ctx, headState, bp, vp)
if err != nil {
return nil, err
}
headState, vp, err = altair.ProcessInactivityScores(ctx, headState, vp)
if err != nil {
return nil, err
}
headState, err = altair.ProcessRewardsAndPenaltiesPrecompute(headState, bp, vp)
if err != nil {
return nil, err
}
validatorSummary = vp
} else {
return nil, status.Errorf(codes.Internal, "Head state version %d not supported", headState.Version())
}
responseCap := len(req.Indices) + len(req.PublicKeys)
validatorIndices := make([]primitives.ValidatorIndex, 0, responseCap)
missingValidators := make([][]byte, 0, responseCap)
filtered := map[primitives.ValidatorIndex]bool{} // Track filtered validators to prevent duplication in the response.
// Convert the list of validator public keys to validator indices and add to the indices set.
for _, pubKey := range req.PublicKeys {
// Skip empty public key.
if len(pubKey) == 0 {
continue
}
pubkeyBytes := bytesutil.ToBytes48(pubKey)
idx, ok := headState.ValidatorIndexByPubkey(pubkeyBytes)
if !ok {
// Validator index not found, track as missing.
missingValidators = append(missingValidators, pubKey)
continue
}
if !filtered[idx] {
validatorIndices = append(validatorIndices, idx)
filtered[idx] = true
}
}
// Add provided indices to the indices set.
for _, idx := range req.Indices {
if !filtered[idx] {
validatorIndices = append(validatorIndices, idx)
filtered[idx] = true
}
}
// Depending on the indices and public keys given, results might not be sorted.
sort.Slice(validatorIndices, func(i, j int) bool {
return validatorIndices[i] < validatorIndices[j]
})
currentEpoch := coreTime.CurrentEpoch(headState)
responseCap = len(validatorIndices)
pubKeys := make([][]byte, 0, responseCap)
beforeTransitionBalances := make([]uint64, 0, responseCap)
afterTransitionBalances := make([]uint64, 0, responseCap)
effectiveBalances := make([]uint64, 0, responseCap)
correctlyVotedSource := make([]bool, 0, responseCap)
correctlyVotedTarget := make([]bool, 0, responseCap)
correctlyVotedHead := make([]bool, 0, responseCap)
inactivityScores := make([]uint64, 0, responseCap)
// Append performance summaries.
// Also track missing validators using public keys.
for _, idx := range validatorIndices {
val, err := headState.ValidatorAtIndexReadOnly(idx)
if err != nil {
return nil, status.Errorf(codes.Internal, "could not get validator: %v", err)
}
pubKey := val.PublicKey()
if uint64(idx) >= uint64(len(validatorSummary)) {
// Not listed in validator summary yet; treat it as missing.
missingValidators = append(missingValidators, pubKey[:])
continue
}
if !helpers.IsActiveValidatorUsingTrie(val, currentEpoch) {
// Inactive validator; treat it as missing.
missingValidators = append(missingValidators, pubKey[:])
continue
}
summary := validatorSummary[idx]
pubKeys = append(pubKeys, pubKey[:])
effectiveBalances = append(effectiveBalances, summary.CurrentEpochEffectiveBalance)
beforeTransitionBalances = append(beforeTransitionBalances, summary.BeforeEpochTransitionBalance)
afterTransitionBalances = append(afterTransitionBalances, summary.AfterEpochTransitionBalance)
correctlyVotedTarget = append(correctlyVotedTarget, summary.IsPrevEpochTargetAttester)
correctlyVotedHead = append(correctlyVotedHead, summary.IsPrevEpochHeadAttester)
if headState.Version() == version.Phase0 {
correctlyVotedSource = append(correctlyVotedSource, summary.IsPrevEpochAttester)
} else {
correctlyVotedSource = append(correctlyVotedSource, summary.IsPrevEpochSourceAttester)
inactivityScores = append(inactivityScores, summary.InactivityScore)
}
}
return &ethpb.ValidatorPerformanceResponse{
PublicKeys: pubKeys,
CorrectlyVotedSource: correctlyVotedSource,
CorrectlyVotedTarget: correctlyVotedTarget, // In altair, when this is true then the attestation was definitely included.
CorrectlyVotedHead: correctlyVotedHead,
CurrentEffectiveBalances: effectiveBalances,
BalancesBeforeEpochTransition: beforeTransitionBalances,
BalancesAfterEpochTransition: afterTransitionBalances,
MissingValidators: missingValidators,
InactivityScores: inactivityScores, // Only populated in Altair
}, nil
return response, nil
}
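// For orientation: a minimal sketch of the reason-to-gRPC translation that
// core.ErrorReasonToGRPC performs above, so the same core error can also be
// mapped onto an HTTP status by another caller. The reason names here are
// illustrative only; the real set lives in beacon-chain/rpc/core.
type errorReason int

const (
	reasonInternal errorReason = iota
	reasonUnavailable
	reasonBadRequest
)

func errorReasonToGRPC(r errorReason) codes.Code {
	switch r {
	case reasonUnavailable:
		return codes.Unavailable
	case reasonBadRequest:
		return codes.InvalidArgument
	default:
		return codes.Internal
	}
}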
// GetIndividualVotes retrieves individual voting status of validators.

View File

@@ -66,11 +66,7 @@ func (vs *Server) SubmitAggregateSelectionProof(ctx context.Context, req *ethpb.
return nil, status.Errorf(codes.InvalidArgument, "Validator is not an aggregator")
}
if err := vs.AttPool.AggregateUnaggregatedAttestationsBySlotIndex(ctx, req.Slot, req.CommitteeIndex); err != nil {
return nil, status.Errorf(codes.Internal, "Could not aggregate unaggregated attestations")
}
aggregatedAtts := vs.AttPool.AggregatedAttestationsBySlotIndex(ctx, req.Slot, req.CommitteeIndex)
// Select the best aggregated attestation (i.e. the one with the most aggregated bits).
if len(aggregatedAtts) == 0 {
aggregatedAtts = vs.AttPool.UnaggregatedAttestationsBySlotIndex(ctx, req.Slot, req.CommitteeIndex)

View File

@@ -146,6 +146,7 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
validatorAssignments := make([]*ethpb.DutiesResponse_Duty, 0, len(req.PublicKeys))
nextValidatorAssignments := make([]*ethpb.DutiesResponse_Duty, 0, len(req.PublicKeys))
for _, pubKey := range req.PublicKeys {
if ctx.Err() != nil {
return nil, status.Errorf(codes.Aborted, "Could not continue fetching assignments: %v", ctx.Err())
@@ -194,7 +195,8 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
vs.ProposerSlotIndexCache.PrunePayloadIDs(epochStartSlot)
} else {
// If the validator isn't in the beacon state, try finding their deposit to determine their status.
vStatus, _ := vs.validatorStatus(ctx, s, pubKey)
// We don't need the lastActiveValidatorFn because the response it feeds is not used here.
vStatus, _ := vs.validatorStatus(ctx, s, pubKey, nil)
assignment.Status = vStatus.Status
}

View File

@@ -23,6 +23,7 @@ import (
)
var errPubkeyDoesNotExist = errors.New("pubkey does not exist")
var errHeadstateDoesNotExist = errors.New("head state does not exist")
var errOptimisticMode = errors.New("the node is currently optimistic and cannot serve validators")
var nonExistentIndex = primitives.ValidatorIndex(^uint64(0))
@@ -46,7 +47,8 @@ func (vs *Server) ValidatorStatus(
if err != nil {
return nil, status.Error(codes.Internal, "Could not get head state")
}
vStatus, _ := vs.validatorStatus(ctx, headState, req.PublicKey)
vStatus, _ := vs.validatorStatus(ctx, headState, req.PublicKey, func() (primitives.ValidatorIndex, error) { return helpers.LastActivatedValidatorIndex(ctx, headState) })
return vStatus, nil
}
@@ -86,8 +88,9 @@ func (vs *Server) MultipleValidatorStatus(
// Fetch statuses from beacon state.
statuses := make([]*ethpb.ValidatorStatusResponse, len(pubKeys))
indices := make([]primitives.ValidatorIndex, len(pubKeys))
lastActivated, hpErr := helpers.LastActivatedValidatorIndex(ctx, headState)
for i, pubKey := range pubKeys {
statuses[i], indices[i] = vs.validatorStatus(ctx, headState, pubKey)
statuses[i], indices[i] = vs.validatorStatus(ctx, headState, pubKey, func() (primitives.ValidatorIndex, error) { return lastActivated, hpErr })
}
return &ethpb.MultipleValidatorStatusResponse{
@@ -223,11 +226,13 @@ func (vs *Server) activationStatus(
}
activeValidatorExists := false
statusResponses := make([]*ethpb.ValidatorActivationResponse_Status, len(pubKeys))
// Only run the calculation of the last activated index once per state.
lastActivated, hpErr := helpers.LastActivatedValidatorIndex(ctx, headState)
for i, pubKey := range pubKeys {
if ctx.Err() != nil {
return false, nil, ctx.Err()
}
vStatus, idx := vs.validatorStatus(ctx, headState, pubKey)
vStatus, idx := vs.validatorStatus(ctx, headState, pubKey, func() (primitives.ValidatorIndex, error) { return lastActivated, hpErr })
if vStatus == nil {
continue
}
@@ -272,6 +277,7 @@ func (vs *Server) validatorStatus(
ctx context.Context,
headState state.ReadOnlyBeaconState,
pubKey []byte,
lastActiveValidatorFn func() (primitives.ValidatorIndex, error),
) (*ethpb.ValidatorStatusResponse, primitives.ValidatorIndex) {
ctx, span := trace.StartSpan(ctx, "ValidatorServer.validatorStatus")
defer span.End()
@@ -340,17 +346,12 @@ func (vs *Server) validatorStatus(
}
}
}
var lastActivatedvalidatorIndex primitives.ValidatorIndex
for j := headState.NumValidators() - 1; j >= 0; j-- {
val, err := headState.ValidatorAtIndexReadOnly(primitives.ValidatorIndex(j))
if err != nil {
return resp, idx
}
if helpers.IsActiveValidatorUsingTrie(val, time.CurrentEpoch(headState)) {
lastActivatedvalidatorIndex = primitives.ValidatorIndex(j)
break
}
if lastActiveValidatorFn == nil {
return resp, idx
}
lastActivatedvalidatorIndex, err := lastActiveValidatorFn()
if err != nil {
return resp, idx
}
// Our position in the activation queue is our validator index minus the above index.
if lastActivatedvalidatorIndex < idx {
@@ -390,7 +391,7 @@ func checkValidatorsAreRecent(headEpoch primitives.Epoch, req *ethpb.DoppelGange
func statusForPubKey(headState state.ReadOnlyBeaconState, pubKey []byte) (ethpb.ValidatorStatus, primitives.ValidatorIndex, error) {
if headState == nil || headState.IsNil() {
return ethpb.ValidatorStatus_UNKNOWN_STATUS, 0, errors.New("head state does not exist")
return ethpb.ValidatorStatus_UNKNOWN_STATUS, 0, errHeadstateDoesNotExist
}
idx, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok || uint64(idx) >= uint64(headState.NumValidators()) {

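The hunks above replace an in-place registry scan with an injected lastActiveValidatorFn, so the expensive last-activated lookup runs at most once per head state and is shared across per-pubkey calls. A minimal sketch of that memoize-and-inject pattern (standalone names, not Prysm's):

package main

import "fmt"

// expensiveScan stands in for helpers.LastActivatedValidatorIndex: a full
// walk over the validator registry that we only want to do once per state.
func expensiveScan() (uint64, error) {
	fmt.Println("scanning registry once")
	return 42, nil
}

// statusFor mimics validatorStatus: it receives the lookup as a closure and
// may skip it entirely (callers can pass nil when the result is unused).
func statusFor(pubkey int, lastActiveFn func() (uint64, error)) {
	if lastActiveFn == nil {
		return
	}
	last, err := lastActiveFn()
	if err != nil {
		return
	}
	fmt.Println("pubkey", pubkey, "last activated index", last)
}

func main() {
	// Compute once, then share the result through a closure, as
	// MultipleValidatorStatus and activationStatus now do.
	last, err := expensiveScan()
	fn := func() (uint64, error) { return last, err }
	for pk := 0; pk < 3; pk++ {
		statusFor(pk, fn)
	}
}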

@@ -0,0 +1,40 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"server.go",
"validator_performance.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/validator",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/sync:go_default_library",
"//consensus-types/primitives:go_default_library",
"//network:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["validator_performance_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)


@@ -0,0 +1,14 @@
package validator
import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
)
// Server defines a server implementation for HTTP endpoints, providing
// access to data relevant to the Ethereum Beacon Chain.
type Server struct {
GenesisTimeFetcher blockchain.TimeFetcher
SyncChecker sync.Checker
HeadFetcher blockchain.HeadFetcher
}


@@ -0,0 +1,78 @@
package validator
import (
"encoding/json"
"net/http"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/network"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
type ValidatorPerformanceRequest struct {
PublicKeys [][]byte `json:"public_keys,omitempty"`
Indices []primitives.ValidatorIndex `json:"indices,omitempty"`
}
type ValidatorPerformanceResponse struct {
PublicKeys [][]byte `json:"public_keys,omitempty"`
CorrectlyVotedSource []bool `json:"correctly_voted_source,omitempty"`
CorrectlyVotedTarget []bool `json:"correctly_voted_target,omitempty"`
CorrectlyVotedHead []bool `json:"correctly_voted_head,omitempty"`
CurrentEffectiveBalances []uint64 `json:"current_effective_balances,omitempty"`
BalancesBeforeEpochTransition []uint64 `json:"balances_before_epoch_transition,omitempty"`
BalancesAfterEpochTransition []uint64 `json:"balances_after_epoch_transition,omitempty"`
MissingValidators [][]byte `json:"missing_validators,omitempty"`
InactivityScores []uint64 `json:"inactivity_scores,omitempty"`
}
// GetValidatorPerformance is an HTTP handler that computes and returns validator performance for the requested public keys or indices.
func (vs *Server) GetValidatorPerformance(w http.ResponseWriter, r *http.Request) {
if vs.SyncChecker.Syncing() {
handleHTTPError(w, "Syncing", http.StatusServiceUnavailable)
return
}
ctx := r.Context()
currSlot := vs.GenesisTimeFetcher.CurrentSlot()
var req ValidatorPerformanceRequest
if r.Body != http.NoBody {
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
handleHTTPError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
}
computed, err := core.ComputeValidatorPerformance(
ctx,
&ethpb.ValidatorPerformanceRequest{
PublicKeys: req.PublicKeys,
Indices: req.Indices,
},
vs.HeadFetcher,
currSlot,
)
if err != nil {
handleHTTPError(w, "Could not compute validator performance: "+err.Err.Error(), core.ErrorReasonToHTTP(err.Reason))
return
}
response := &ValidatorPerformanceResponse{
PublicKeys: computed.PublicKeys,
CorrectlyVotedSource: computed.CorrectlyVotedSource,
CorrectlyVotedTarget: computed.CorrectlyVotedTarget, // In Altair, when this is true, the attestation was definitely included.
CorrectlyVotedHead: computed.CorrectlyVotedHead,
CurrentEffectiveBalances: computed.CurrentEffectiveBalances,
BalancesBeforeEpochTransition: computed.BalancesBeforeEpochTransition,
BalancesAfterEpochTransition: computed.BalancesAfterEpochTransition,
MissingValidators: computed.MissingValidators,
InactivityScores: computed.InactivityScores, // Only populated in Altair
}
network.WriteJson(w, response)
}
func handleHTTPError(w http.ResponseWriter, message string, code int) {
errJson := &network.DefaultErrorJson{
Message: message,
Code: code,
}
network.WriteError(w, errJson)
}
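For reference, a minimal client for this handler; the route is registered later in this diff as /prysm/validators/performance, and the local address is an assumption about where the beacon node's HTTP server listens:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Request two validators by index; public_keys could be used instead.
	body := bytes.NewBufferString(`{"indices": [0, 1]}`)
	// 127.0.0.1:3500 is an assumed local gateway address.
	resp, err := http.Post("http://127.0.0.1:3500/prysm/validators/performance", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(out))
}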


@@ -0,0 +1,453 @@
package validator
import (
"bytes"
"context"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/prysmaticlabs/go-bitfield"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
mockSync "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/initial-sync/testing"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
)
func TestServer_GetValidatorPerformance(t *testing.T) {
t.Run("Syncing", func(t *testing.T) {
vs := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
var buf bytes.Buffer
srv := httptest.NewServer(http.HandlerFunc(vs.GetValidatorPerformance))
req := httptest.NewRequest("POST", "/foo", &buf)
client := &http.Client{}
rawResp, err := client.Post(srv.URL, "application/json", req.Body)
require.NoError(t, err)
require.Equal(t, http.StatusServiceUnavailable, rawResp.StatusCode)
})
t.Run("OK", func(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
publicKeys := [][48]byte{
bytesutil.ToBytes48([]byte{1}),
bytesutil.ToBytes48([]byte{2}),
bytesutil.ToBytes48([]byte{3}),
}
headState, err := util.NewBeaconState()
require.NoError(t, err)
headState = setHeadState(t, headState, publicKeys)
require.NoError(t, headState.SetBalances([]uint64{100, 101, 102}))
offset := int64(headState.Slot().Mul(params.BeaconConfig().SecondsPerSlot))
vs := &Server{
HeadFetcher: &mock.ChainService{
State: headState,
},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
want := &ValidatorPerformanceResponse{
PublicKeys: [][]byte{publicKeys[1][:], publicKeys[2][:]},
CurrentEffectiveBalances: []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
CorrectlyVotedSource: []bool{false, false},
CorrectlyVotedTarget: []bool{false, false},
CorrectlyVotedHead: []bool{false, false},
BalancesBeforeEpochTransition: []uint64{101, 102},
BalancesAfterEpochTransition: []uint64{0, 0},
MissingValidators: [][]byte{publicKeys[0][:]},
}
request := &ValidatorPerformanceRequest{
PublicKeys: [][]byte{publicKeys[0][:], publicKeys[2][:], publicKeys[1][:]},
}
var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(vs.GetValidatorPerformance))
req := httptest.NewRequest("POST", "/foo", &buf)
client := &http.Client{}
rawResp, err := client.Post(srv.URL, "application/json", req.Body)
require.NoError(t, err)
defer func() {
if err := rawResp.Body.Close(); err != nil {
t.Fatal(err)
}
}()
body, err := io.ReadAll(rawResp.Body)
require.NoError(t, err)
response := &ValidatorPerformanceResponse{}
require.NoError(t, json.Unmarshal(body, response))
require.DeepEqual(t, want, response)
})
t.Run("Indices", func(t *testing.T) {
ctx := context.Background()
publicKeys := [][48]byte{
bytesutil.ToBytes48([]byte{1}),
bytesutil.ToBytes48([]byte{2}),
bytesutil.ToBytes48([]byte{3}),
}
headState, err := util.NewBeaconState()
require.NoError(t, err)
headState = setHeadState(t, headState, publicKeys)
offset := int64(headState.Slot().Mul(params.BeaconConfig().SecondsPerSlot))
vs := &Server{
HeadFetcher: &mock.ChainService{
// 10 epochs into the future.
State: headState,
},
SyncChecker: &mockSync.Sync{IsSyncing: false},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
}
c := headState.Copy()
vp, bp, err := precompute.New(ctx, c)
require.NoError(t, err)
vp, bp, err = precompute.ProcessAttestations(ctx, c, vp, bp)
require.NoError(t, err)
_, err = precompute.ProcessRewardsAndPenaltiesPrecompute(c, bp, vp, precompute.AttestationsDelta, precompute.ProposersDelta)
require.NoError(t, err)
extraBal := params.BeaconConfig().MaxEffectiveBalance + params.BeaconConfig().GweiPerEth
want := &ValidatorPerformanceResponse{
PublicKeys: [][]byte{publicKeys[1][:], publicKeys[2][:]},
CurrentEffectiveBalances: []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
CorrectlyVotedSource: []bool{false, false},
CorrectlyVotedTarget: []bool{false, false},
CorrectlyVotedHead: []bool{false, false},
BalancesBeforeEpochTransition: []uint64{extraBal, extraBal + params.BeaconConfig().GweiPerEth},
BalancesAfterEpochTransition: []uint64{vp[1].AfterEpochTransitionBalance, vp[2].AfterEpochTransitionBalance},
MissingValidators: [][]byte{publicKeys[0][:]},
}
request := &ValidatorPerformanceRequest{
Indices: []primitives.ValidatorIndex{2, 1, 0},
}
var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(vs.GetValidatorPerformance))
req := httptest.NewRequest("POST", "/foo", &buf)
client := &http.Client{}
rawResp, err := client.Post(srv.URL, "application/json", req.Body)
require.NoError(t, err)
defer func() {
if err := rawResp.Body.Close(); err != nil {
t.Fatal(err)
}
}()
body, err := io.ReadAll(rawResp.Body)
require.NoError(t, err)
response := &ValidatorPerformanceResponse{}
require.NoError(t, json.Unmarshal(body, response))
require.DeepEqual(t, want, response)
})
t.Run("Indices Pubkeys", func(t *testing.T) {
ctx := context.Background()
publicKeys := [][48]byte{
bytesutil.ToBytes48([]byte{1}),
bytesutil.ToBytes48([]byte{2}),
bytesutil.ToBytes48([]byte{3}),
}
headState, err := util.NewBeaconState()
require.NoError(t, err)
headState = setHeadState(t, headState, publicKeys)
offset := int64(headState.Slot().Mul(params.BeaconConfig().SecondsPerSlot))
vs := &Server{
HeadFetcher: &mock.ChainService{
// 10 epochs into the future.
State: headState,
},
SyncChecker: &mockSync.Sync{IsSyncing: false},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
}
c := headState.Copy()
vp, bp, err := precompute.New(ctx, c)
require.NoError(t, err)
vp, bp, err = precompute.ProcessAttestations(ctx, c, vp, bp)
require.NoError(t, err)
_, err = precompute.ProcessRewardsAndPenaltiesPrecompute(c, bp, vp, precompute.AttestationsDelta, precompute.ProposersDelta)
require.NoError(t, err)
extraBal := params.BeaconConfig().MaxEffectiveBalance + params.BeaconConfig().GweiPerEth
want := &ValidatorPerformanceResponse{
PublicKeys: [][]byte{publicKeys[1][:], publicKeys[2][:]},
CurrentEffectiveBalances: []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
CorrectlyVotedSource: []bool{false, false},
CorrectlyVotedTarget: []bool{false, false},
CorrectlyVotedHead: []bool{false, false},
BalancesBeforeEpochTransition: []uint64{extraBal, extraBal + params.BeaconConfig().GweiPerEth},
BalancesAfterEpochTransition: []uint64{vp[1].AfterEpochTransitionBalance, vp[2].AfterEpochTransitionBalance},
MissingValidators: [][]byte{publicKeys[0][:]},
}
request := &ValidatorPerformanceRequest{
PublicKeys: [][]byte{publicKeys[0][:], publicKeys[2][:]}, Indices: []primitives.ValidatorIndex{1, 2},
}
var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(vs.GetValidatorPerformance))
req := httptest.NewRequest("POST", "/foo", &buf)
client := &http.Client{}
rawResp, err := client.Post(srv.URL, "application/json", req.Body)
require.NoError(t, err)
defer func() {
if err := rawResp.Body.Close(); err != nil {
t.Fatal(err)
}
}()
body, err := io.ReadAll(rawResp.Body)
require.NoError(t, err)
response := &ValidatorPerformanceResponse{}
require.NoError(t, json.Unmarshal(body, response))
require.DeepEqual(t, want, response)
})
t.Run("Altair OK", func(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
publicKeys := [][48]byte{
bytesutil.ToBytes48([]byte{1}),
bytesutil.ToBytes48([]byte{2}),
bytesutil.ToBytes48([]byte{3}),
}
epoch := primitives.Epoch(1)
headState, _ := util.DeterministicGenesisStateAltair(t, 32)
require.NoError(t, headState.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epoch+1))))
headState = setHeadState(t, headState, publicKeys)
require.NoError(t, headState.SetInactivityScores([]uint64{0, 0, 0}))
require.NoError(t, headState.SetBalances([]uint64{100, 101, 102}))
offset := int64(headState.Slot().Mul(params.BeaconConfig().SecondsPerSlot))
vs := &Server{
HeadFetcher: &mock.ChainService{
State: headState,
},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
want := &ValidatorPerformanceResponse{
PublicKeys: [][]byte{publicKeys[1][:], publicKeys[2][:]},
CurrentEffectiveBalances: []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
CorrectlyVotedSource: []bool{false, false},
CorrectlyVotedTarget: []bool{false, false},
CorrectlyVotedHead: []bool{false, false},
BalancesBeforeEpochTransition: []uint64{101, 102},
BalancesAfterEpochTransition: []uint64{0, 0},
MissingValidators: [][]byte{publicKeys[0][:]},
InactivityScores: []uint64{0, 0},
}
request := &ValidatorPerformanceRequest{
PublicKeys: [][]byte{publicKeys[0][:], publicKeys[2][:], publicKeys[1][:]},
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(vs.GetValidatorPerformance))
req := httptest.NewRequest("POST", "/foo", &buf)
client := &http.Client{}
rawResp, err := client.Post(srv.URL, "application/json", req.Body)
require.NoError(t, err)
defer func() {
if err := rawResp.Body.Close(); err != nil {
t.Fatal(err)
}
}()
body, err := io.ReadAll(rawResp.Body)
require.NoError(t, err)
response := &ValidatorPerformanceResponse{}
require.NoError(t, json.Unmarshal(body, response))
require.DeepEqual(t, want, response)
})
t.Run("Bellatrix OK", func(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
publicKeys := [][48]byte{
bytesutil.ToBytes48([]byte{1}),
bytesutil.ToBytes48([]byte{2}),
bytesutil.ToBytes48([]byte{3}),
}
epoch := primitives.Epoch(1)
headState, _ := util.DeterministicGenesisStateBellatrix(t, 32)
require.NoError(t, headState.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epoch+1))))
headState = setHeadState(t, headState, publicKeys)
require.NoError(t, headState.SetInactivityScores([]uint64{0, 0, 0}))
require.NoError(t, headState.SetBalances([]uint64{100, 101, 102}))
offset := int64(headState.Slot().Mul(params.BeaconConfig().SecondsPerSlot))
vs := &Server{
HeadFetcher: &mock.ChainService{
State: headState,
},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
want := &ValidatorPerformanceResponse{
PublicKeys: [][]byte{publicKeys[1][:], publicKeys[2][:]},
CurrentEffectiveBalances: []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
CorrectlyVotedSource: []bool{false, false},
CorrectlyVotedTarget: []bool{false, false},
CorrectlyVotedHead: []bool{false, false},
BalancesBeforeEpochTransition: []uint64{101, 102},
BalancesAfterEpochTransition: []uint64{0, 0},
MissingValidators: [][]byte{publicKeys[0][:]},
InactivityScores: []uint64{0, 0},
}
request := &ValidatorPerformanceRequest{
PublicKeys: [][]byte{publicKeys[0][:], publicKeys[2][:], publicKeys[1][:]},
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(vs.GetValidatorPerformance))
req := httptest.NewRequest("POST", "/foo", &buf)
client := &http.Client{}
rawResp, err := client.Post(srv.URL, "application/json", req.Body)
require.NoError(t, err)
defer func() {
if err := rawResp.Body.Close(); err != nil {
t.Fatal(err)
}
}()
body, err := io.ReadAll(rawResp.Body)
require.NoError(t, err)
response := &ValidatorPerformanceResponse{}
require.NoError(t, json.Unmarshal(body, response))
require.DeepEqual(t, want, response)
})
t.Run("Capella OK", func(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
publicKeys := [][48]byte{
bytesutil.ToBytes48([]byte{1}),
bytesutil.ToBytes48([]byte{2}),
bytesutil.ToBytes48([]byte{3}),
}
epoch := primitives.Epoch(1)
headState, _ := util.DeterministicGenesisStateCapella(t, 32)
require.NoError(t, headState.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epoch+1))))
headState = setHeadState(t, headState, publicKeys)
require.NoError(t, headState.SetInactivityScores([]uint64{0, 0, 0}))
require.NoError(t, headState.SetBalances([]uint64{100, 101, 102}))
offset := int64(headState.Slot().Mul(params.BeaconConfig().SecondsPerSlot))
vs := &Server{
HeadFetcher: &mock.ChainService{
State: headState,
},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
want := &ValidatorPerformanceResponse{
PublicKeys: [][]byte{publicKeys[1][:], publicKeys[2][:]},
CurrentEffectiveBalances: []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
CorrectlyVotedSource: []bool{false, false},
CorrectlyVotedTarget: []bool{false, false},
CorrectlyVotedHead: []bool{false, false},
BalancesBeforeEpochTransition: []uint64{101, 102},
BalancesAfterEpochTransition: []uint64{0, 0},
MissingValidators: [][]byte{publicKeys[0][:]},
InactivityScores: []uint64{0, 0},
}
request := &ValidatorPerformanceRequest{
PublicKeys: [][]byte{publicKeys[0][:], publicKeys[2][:], publicKeys[1][:]},
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(vs.GetValidatorPerformance))
req := httptest.NewRequest("POST", "/foo", &buf)
client := &http.Client{}
rawResp, err := client.Post(srv.URL, "application/json", req.Body)
require.NoError(t, err)
defer func() {
if err := rawResp.Body.Close(); err != nil {
t.Fatal(err)
}
}()
body, err := io.ReadAll(rawResp.Body)
require.NoError(t, err)
response := &ValidatorPerformanceResponse{}
require.NoError(t, json.Unmarshal(body, response))
require.DeepEqual(t, want, response)
})
}
func setHeadState(t *testing.T, headState state.BeaconState, publicKeys [][48]byte) state.BeaconState {
epoch := primitives.Epoch(1)
require.NoError(t, headState.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epoch+1))))
if headState.Version() < version.Altair {
atts := make([]*ethpb.PendingAttestation, 3)
for i := 0; i < len(atts); i++ {
atts[i] = &ethpb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: make([]byte, 32)},
Source: &ethpb.Checkpoint{Root: make([]byte, 32)},
},
AggregationBits: bitfield.Bitlist{},
InclusionDelay: 1,
}
require.NoError(t, headState.AppendPreviousEpochAttestations(atts[i]))
}
}
defaultBal := params.BeaconConfig().MaxEffectiveBalance
extraBal := params.BeaconConfig().MaxEffectiveBalance + params.BeaconConfig().GweiPerEth
balances := []uint64{defaultBal, extraBal, extraBal + params.BeaconConfig().GweiPerEth}
require.NoError(t, headState.SetBalances(balances))
validators := []*ethpb.Validator{
{
PublicKey: publicKeys[0][:],
ActivationEpoch: 5,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
{
PublicKey: publicKeys[1][:],
EffectiveBalance: defaultBal,
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
{
PublicKey: publicKeys[2][:],
EffectiveBalance: defaultBal,
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
require.NoError(t, headState.SetValidators(validators))
return headState
}


@@ -30,16 +30,19 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/beacon"
rpcBuilder "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/builder"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/debug"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/events"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/node"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/rewards"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/validator"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/lookup"
nodeprysm "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/node"
beaconv1alpha1 "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/beacon"
debugv1alpha1 "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/debug"
nodev1alpha1 "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/node"
validatorv1alpha1 "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/validator"
httpserver "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/validator"
slasherservice "github.com/prysmaticlabs/prysm/v4/beacon-chain/slasher"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stategen"
@@ -211,8 +214,19 @@ func (s *Service) Start() {
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
ReplayerBuilder: ch,
TimeFetcher: s.cfg.GenesisTimeFetcher,
Stater: stater,
HeadFetcher: s.cfg.HeadFetcher,
}
s.cfg.Router.HandleFunc("/eth/v1/beacon/rewards/blocks/{block_id}", rewardsServer.BlockRewards)
s.cfg.Router.HandleFunc("/eth/v1/beacon/rewards/attestations/{epoch}", rewardsServer.AttestationRewards)
builderServer := &rpcBuilder.Server{
FinalizationFetcher: s.cfg.FinalizationFetcher,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
Stater: stater,
}
s.cfg.Router.HandleFunc("/eth/v1/builder/states/{state_id}/expected_withdrawals", builderServer.ExpectedWithdrawals)
validatorServer := &validatorv1alpha1.Server{
Ctx: s.ctx,
@@ -294,6 +308,22 @@ func (s *Service) Start() {
ExecutionChainInfoFetcher: s.cfg.ExecutionChainInfoFetcher,
}
nodeServerPrysm := &nodeprysm.Server{
BeaconDB: s.cfg.BeaconDB,
SyncChecker: s.cfg.SyncService,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
PeersFetcher: s.cfg.PeersFetcher,
PeerManager: s.cfg.PeerManager,
MetadataProvider: s.cfg.MetadataProvider,
HeadFetcher: s.cfg.HeadFetcher,
ExecutionChainInfoFetcher: s.cfg.ExecutionChainInfoFetcher,
}
s.cfg.Router.HandleFunc("/prysm/node/trusted_peers", nodeServerPrysm.ListTrustedPeer).Methods("GET")
s.cfg.Router.HandleFunc("/prysm/node/trusted_peers", nodeServerPrysm.AddTrustedPeer).Methods("POST")
s.cfg.Router.HandleFunc("/prysm/node/trusted_peers/{peer_id}", nodeServerPrysm.RemoveTrustedPeer).Methods("Delete")
beaconChainServer := &beaconv1alpha1.Server{
Ctx: s.ctx,
BeaconDB: s.cfg.BeaconDB,
@@ -342,6 +372,12 @@ func (s *Service) Start() {
FinalizationFetcher: s.cfg.FinalizationFetcher,
ForkchoiceFetcher: s.cfg.ForkchoiceFetcher,
}
httpServer := &httpserver.Server{
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
HeadFetcher: s.cfg.HeadFetcher,
SyncChecker: s.cfg.SyncService,
}
s.cfg.Router.HandleFunc("/prysm/validators/performance", httpServer.GetValidatorPerformance)
s.cfg.Router.HandleFunc("/eth/v2/beacon/blocks", beaconChainServerV1.PublishBlockV2)
s.cfg.Router.HandleFunc("/eth/v2/beacon/blinded_blocks", beaconChainServerV1.PublishBlindedBlockV2)
ethpbv1alpha1.RegisterNodeServer(s.grpcServer, nodeServer)


@@ -207,7 +207,7 @@ func handle32ByteArrays(val [][32]byte, indices []uint64, convertAll bool) ([][3
func handleValidatorSlice(val []*ethpb.Validator, indices []uint64, convertAll bool) ([][32]byte, error) {
length := len(indices)
if convertAll {
length = len(val)
return stateutil.OptimizedValidatorRoots(val)
}
roots := make([][32]byte, 0, length)
rootCreator := func(input *ethpb.Validator) error {
@@ -218,15 +218,6 @@ func handleValidatorSlice(val []*ethpb.Validator, indices []uint64, convertAll b
roots = append(roots, newRoot)
return nil
}
if convertAll {
for i := range val {
err := rootCreator(val[i])
if err != nil {
return nil, err
}
}
return roots, nil
}
if len(val) > 0 {
for _, idx := range indices {
if idx > uint64(len(val))-1 {


@@ -251,6 +251,10 @@ func TestFieldTrie_NativeState_fieldConvertersNative(t *testing.T) {
field: types.FieldIndex(9),
indices: []uint64{1},
elements: []*ethpb.Eth1Data{
{
DepositRoot: make([]byte, fieldparams.RootLength),
DepositCount: 2,
},
{
DepositRoot: make([]byte, fieldparams.RootLength),
DepositCount: 1,
@@ -321,11 +325,14 @@ func TestFieldTrie_NativeState_fieldConvertersNative(t *testing.T) {
wantHex: []string{"0x7d7696e7f12593934afcd87a0d38e1a981bee63cb4cf0568ba36a6e0596eeccb"},
},
{
name: "Attestations",
name: "Attestations convertAll false",
args: &args{
field: types.FieldIndex(15),
indices: []uint64{1},
elements: []*ethpb.PendingAttestation{
{
ProposerIndex: 0,
},
{
ProposerIndex: 1,
},
@@ -352,8 +359,12 @@ func TestFieldTrie_NativeState_fieldConvertersNative(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
roots, err := fieldConverters(tt.args.field, tt.args.indices, tt.args.elements, tt.args.convertAll)
if err != nil && tt.errMsg != "" {
require.ErrorContains(t, tt.errMsg, err)
if err != nil {
if tt.errMsg != "" {
require.ErrorContains(t, tt.errMsg, err)
} else {
t.Error("Unexpected error: " + err.Error())
}
} else {
for i, root := range roots {
hex := hexutil.Encode(root[:])


@@ -155,6 +155,20 @@ func (e *epochBoundaryState) put(blockRoot [32]byte, s state.BeaconState) error
func (e *epochBoundaryState) delete(blockRoot [32]byte) error {
e.lock.Lock()
defer e.lock.Unlock()
rInfo, ok, err := e.getByBlockRootLockFree(blockRoot)
if err != nil {
return err
}
if !ok {
return nil
}
slotInfo := &slotRootInfo{
slot: rInfo.state.Slot(),
blockRoot: blockRoot,
}
if err = e.slotRootCache.Delete(slotInfo); err != nil {
return err
}
return e.rootStateCache.Delete(&rootStateInfo{
root: blockRoot,
})

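The new lines keep the slot index in step with the root index: deleting only from rootStateCache would leave slotRootCache pointing at an evicted entry. A simplified sketch of that invariant (illustrative types, not Prysm's caches):

package main

import "fmt"

// Illustrative two-index cache mirroring epochBoundaryState: entries are
// findable by block root and by slot, so a delete must remove both index
// entries or lookups by slot would return a dangling root.
type boundaryCache struct {
	byRoot map[[32]byte]uint64 // root -> slot
	bySlot map[uint64][32]byte // slot -> root
}

func (c *boundaryCache) delete(root [32]byte) {
	slot, ok := c.byRoot[root]
	if !ok {
		return // nothing cached for this root
	}
	delete(c.bySlot, slot) // keep the slot index in sync, as the diff above does
	delete(c.byRoot, root)
}

func main() {
	c := &boundaryCache{byRoot: make(map[[32]byte]uint64), bySlot: make(map[uint64][32]byte)}
	r := [32]byte{'A'}
	c.byRoot[r] = 7
	c.bySlot[7] = r
	c.delete(r)
	fmt.Println(len(c.byRoot), len(c.bySlot)) // 0 0
}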

@@ -36,12 +36,12 @@ func (_ *State) replayBlocks(
var err error
start := time.Now()
log = log.WithFields(logrus.Fields{
rLog := log.WithFields(logrus.Fields{
"startSlot": state.Slot(),
"endSlot": targetSlot,
"diff": targetSlot - state.Slot(),
})
log.Debug("Replaying state")
rLog.Debug("Replaying state")
// The input block list is sorted in decreasing slots order.
if len(signed) > 0 {
for i := len(signed) - 1; i >= 0; i-- {
@@ -71,7 +71,7 @@ func (_ *State) replayBlocks(
}
duration := time.Since(start)
log.WithFields(logrus.Fields{
rLog.WithFields(logrus.Fields{
"duration": duration,
}).Debug("Replayed state")

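The log to rLog rename matters because logrus's WithFields returns a derived entry; writing it back over the shared package-level variable would make every later call in the package inherit the startSlot/endSlot fields. A small sketch of the corrected pattern (hypothetical package, not Prysm's):

package main

import "github.com/sirupsen/logrus"

// Package-level entry, like Prysm's per-package log variable.
var log = logrus.NewEntry(logrus.New())

func replay(start, end int) {
	// Keep the field-scoped entry local; reassigning the package-level
	// log here would make every later call inherit these fields.
	rLog := log.WithFields(logrus.Fields{"startSlot": start, "endSlot": end})
	rLog.Info("Replaying state")
}

func main() {
	replay(1, 2)
	replay(3, 4) // no stale fields carried over from the first call
}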

@@ -2,8 +2,10 @@ package stategen
import (
"context"
"fmt"
"math"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
@@ -79,6 +81,41 @@ func (s *State) saveStateByRoot(ctx context.Context, blockRoot [32]byte, st stat
if err := s.epochBoundaryStateCache.put(blockRoot, st); err != nil {
return err
}
} else {
// Always check that the correct epoch boundary states have been saved
// for the current epoch.
epochStart, err := slots.EpochStart(slots.ToEpoch(st.Slot()))
if err != nil {
return err
}
bRoot, err := helpers.BlockRootAtSlot(st, epochStart)
if err != nil {
return err
}
_, ok, err := s.epochBoundaryStateCache.getByBlockRoot([32]byte(bRoot))
if err != nil {
return err
}
// We only recover the boundary state under this condition:
//
// 1) The boundary state is missing from the cache, which indicates the epoch
// boundary was skipped due to a missed slot; we then recover by saving the
// state at that particular slot here.
if !ok {
// Only recover the state if it is in our hot state cache, otherwise we
// simply skip this step.
if s.hotStateCache.has([32]byte(bRoot)) {
log.WithFields(logrus.Fields{
"slot": epochStart,
"root": fmt.Sprintf("%#x", bRoot),
}).Debug("Recovering state for epoch boundary cache")
hState := s.hotStateCache.get([32]byte(bRoot))
if err := s.epochBoundaryStateCache.put([32]byte(bRoot), hState); err != nil {
return err
}
}
}
}
// On an intermediate slot, save state summary.


@@ -7,6 +7,7 @@ import (
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
@@ -137,6 +138,34 @@ func TestSaveState_NoSaveNotEpochBoundary(t *testing.T) {
require.Equal(t, false, beaconDB.HasState(ctx, r))
}
func TestSaveState_RecoverForEpochBoundary(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := New(beaconDB, doublylinkedtree.New())
beaconState, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch-1))
r := [32]byte{'A'}
boundaryRoot := [32]byte{'B'}
require.NoError(t, beaconState.UpdateBlockRootAtIndex(0, boundaryRoot))
b := util.NewBeaconBlock()
util.SaveBlock(t, ctx, beaconDB, b)
gRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, gRoot))
// Save boundary state to the hot state cache.
boundaryState, _ := util.DeterministicGenesisState(t, 32)
service.hotStateCache.put(boundaryRoot, boundaryState)
require.NoError(t, service.SaveState(ctx, r, beaconState))
rInfo, ok, err := service.epochBoundaryStateCache.getByBlockRoot(boundaryRoot)
assert.NoError(t, err)
assert.Equal(t, true, ok, "state does not exist in cache")
assert.Equal(t, rInfo.root, boundaryRoot, "incorrect root of root state info")
assert.Equal(t, rInfo.state.Slot(), primitives.Slot(0), "incorrect slot of state")
}
func TestSaveState_CanSaveHotStateToDB(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()


@@ -30,7 +30,7 @@ func ValidatorRegistryRoot(vals []*ethpb.Validator) ([32]byte, error) {
}
func validatorRegistryRoot(validators []*ethpb.Validator) ([32]byte, error) {
roots, err := optimizedValidatorRoots(validators)
roots, err := OptimizedValidatorRoots(validators)
if err != nil {
return [32]byte{}, err
}
@@ -51,7 +51,9 @@ func validatorRegistryRoot(validators []*ethpb.Validator) ([32]byte, error) {
return res, nil
}
func optimizedValidatorRoots(validators []*ethpb.Validator) ([][32]byte, error) {
// OptimizedValidatorRoots uses an optimized routine with gohashtree in order to
// derive a list of validator roots from a list of validator objects.
func OptimizedValidatorRoots(validators []*ethpb.Validator) ([][32]byte, error) {
// Exit early if no validators are provided.
if len(validators) == 0 {
return [][32]byte{}, nil

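With optimizedValidatorRoots exported, callers such as handleValidatorSlice can batch-derive every validator root through the gohashtree path. A sketch of a direct call (the stateutil import path is an assumption based on the surrounding build files; byte lengths are chosen to satisfy the SSZ hasher):

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stateutil"
	ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)

func main() {
	vals := []*ethpb.Validator{
		{PublicKey: make([]byte, 48), WithdrawalCredentials: make([]byte, 32)},
		{PublicKey: make([]byte, 48), WithdrawalCredentials: make([]byte, 32)},
	}
	// Derive all validator roots in one optimized batch.
	roots, err := stateutil.OptimizedValidatorRoots(vals)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d roots, first: %#x\n", len(roots), roots[0])
}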

@@ -139,7 +139,7 @@ func (s *Service) writeBlockBatchToStream(ctx context.Context, batch blockBatch,
continue
}
if chunkErr := s.chunkBlockWriter(stream, b); chunkErr != nil {
log.WithError(chunkErr).Error("Could not send a chunked response")
log.WithError(chunkErr).Debug("Could not send a chunked response")
return chunkErr
}
}


@@ -0,0 +1,19 @@
# From Bazel's perspective, this is almost equivalent to always specifying
# --extra_toolchains on every bazel <...> command-line invocation. It also
# means there is no way to disable the toolchain with the command line. This is
# worthwhile if you rely on bazel-hermetic-cc to compile all of your
# targets and tools.
#
# With BAZEL_DO_NOT_DETECT_CPP_TOOLCHAIN=1 Bazel stops detecting the default
# host toolchain. Configuring toolchains is complicated enough, and the
# auto-detection (read: fallback to non-hermetic toolchain) is a footgun best
# avoided. This option is not documented in bazel, so may break. If you intend
# to use the hermetic toolchain exclusively, it won't hurt.
build --action_env BAZEL_DO_NOT_DETECT_CPP_TOOLCHAIN=1
# This snippet instructs Bazel to use the registered "new kinds of toolchains".
# This flag is not needed once this issue is closed: https://github.com/bazelbuild/bazel/issues/7260
build --incompatible_enable_cc_toolchain_resolution
# Add a no-op warning for users still using --config=llvm
build:llvm --unconditional_warning="llvm config is no longer used as clang is now the default compiler"


@@ -202,11 +202,7 @@ var (
Usage: "Sets the maximum number of headers that a deposit log query can fetch.",
Value: uint64(1000),
}
// EnableRegistrationCache a temporary flag for enabling the validator registration cache instead of db.
EnableRegistrationCache = &cli.BoolFlag{
Name: "enable-registration-cache",
Usage: "A temporary flag for enabling the validator registration cache instead of persisting in db. The cache will clear on restart.",
}
// WeakSubjectivityCheckpoint defines the weak subjectivity checkpoint the node must sync through to defend against long range attacks.
WeakSubjectivityCheckpoint = &cli.StringFlag{
Name: "weak-subjectivity-checkpoint",


@@ -58,7 +58,6 @@ var appFlags = []cli.Flag{
flags.InteropGenesisTimeFlag,
flags.SlotsPerArchivedPoint,
flags.EnableDebugRPCEndpoints,
flags.EnableRegistrationCache,
flags.SubscribeToAllSubnets,
flags.HistoricalSlasherNode,
flags.ChainID,


@@ -8,7 +8,9 @@ go_library(
deps = [
"//beacon-chain/node:go_default_library",
"//beacon-chain/sync/genesis:go_default_library",
"//cmd/beacon-chain/sync/checkpoint:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],
)


@@ -4,6 +4,8 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/node"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/genesis"
"github.com/prysmaticlabs/prysm/v4/cmd/beacon-chain/sync/checkpoint"
log "github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
)
@@ -28,6 +30,10 @@ var (
func BeaconNodeOptions(c *cli.Context) (node.Option, error) {
statePath := c.Path(StatePath.Name)
remoteURL := c.String(BeaconAPIURL.Name)
if remoteURL == "" && c.String(checkpoint.RemoteURL.Name) != "" {
log.Infof("using checkpoint sync url %s for value in --%s flag", c.String(checkpoint.RemoteURL.Name), BeaconAPIURL.Name)
remoteURL = c.String(checkpoint.RemoteURL.Name)
}
if remoteURL != "" {
return func(node *node.BeaconNode) error {
var err error


@@ -113,7 +113,6 @@ var appHelpFlagGroups = []flagGroup{
flags.BlockBatchLimit,
flags.BlockBatchLimitBurstFactor,
flags.EnableDebugRPCEndpoints,
flags.EnableRegistrationCache,
flags.SubscribeToAllSubnets,
flags.HistoricalSlasherNode,
flags.ChainID,


@@ -52,11 +52,11 @@ func createKeystore(t *testing.T, path string) (*keymanager.Keystore, string) {
id, err := uuid.NewRandom()
require.NoError(t, err)
keystoreFile := &keymanager.Keystore{
Crypto: cryptoFields,
ID: id.String(),
Pubkey: fmt.Sprintf("%x", validatingKey.PublicKey().Marshal()),
Version: encryptor.Version(),
Name: encryptor.Name(),
Crypto: cryptoFields,
ID: id.String(),
Pubkey: fmt.Sprintf("%x", validatingKey.PublicKey().Marshal()),
Version: encryptor.Version(),
Description: encryptor.Name(),
}
encoded, err := json.MarshalIndent(keystoreFile, "", "\t")
require.NoError(t, err)


@@ -134,7 +134,6 @@ func TestExitAccountsCli_OK_AllPublicKeys(t *testing.T) {
mockNodeClient.EXPECT().
GetGenesis(gomock.Any(), gomock.Any()).
Times(2).
Return(&ethpb.Genesis{GenesisTime: genesisTime}, nil)
mockValidatorClient.EXPECT().


@@ -70,6 +70,7 @@ type Flags struct {
PrepareAllPayloads bool // PrepareAllPayloads informs the engine to prepare a block on every slot.
BuildBlockParallel bool // BuildBlockParallel builds beacon block for proposer in parallel.
AggregateParallel bool // AggregateParallel aggregates attestations in parallel.
// KeystoreImportDebounceInterval specifies the time duration the validator waits to reload new keys if they have
// changed on disk. This feature is for advanced use cases only.
@@ -229,6 +230,10 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
logEnabled(disableBuildBlockParallel)
cfg.BuildBlockParallel = false
}
if ctx.IsSet(aggregateParallel.Name) {
logEnabled(aggregateParallel)
cfg.AggregateParallel = true
}
if ctx.IsSet(disableResourceManager.Name) {
logEnabled(disableResourceManager)
cfg.DisableResourceManager = true

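The AggregateParallel wiring follows the usual feature-flag pattern: the boolean only flips when the operator explicitly sets the flag. A self-contained sketch of that pattern with urfave/cli (simplified config struct, not Prysm's features package):

package main

import (
	"fmt"
	"os"

	"github.com/urfave/cli/v2"
)

var aggregateParallel = &cli.BoolFlag{
	Name:  "aggregate-parallel",
	Usage: "Enables parallel aggregation of attestations",
}

type flags struct{ AggregateParallel bool }

func main() {
	app := &cli.App{
		Flags: []cli.Flag{aggregateParallel},
		Action: func(ctx *cli.Context) error {
			cfg := &flags{}
			// Mirrors ConfigureBeaconChain: only flip the feature on
			// when the operator explicitly set the flag.
			if ctx.IsSet(aggregateParallel.Name) {
				cfg.AggregateParallel = true
			}
			fmt.Println("aggregate parallel:", cfg.AggregateParallel)
			return nil
		},
	}
	if err := app.Run(os.Args); err != nil {
		panic(err)
	}
}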

@@ -32,6 +32,12 @@ var (
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedEnableRegistrationCache = &cli.BoolFlag{
Name: "enable-registration-cache",
Usage: deprecatedUsage,
Hidden: true,
}
)
// Deprecated flags for both the beacon node and validator client.
@@ -41,6 +47,7 @@ var deprecatedFlags = []cli.Flag{
deprecatedEnableReorgLateBlocks,
deprecatedDisableGossipBatchAggregation,
deprecatedBuildBlockParallel,
deprecatedEnableRegistrationCache,
}
// deprecatedBeaconFlags contains flags that are still used by other components


@@ -148,6 +148,17 @@ var (
Name: "disable-resource-manager",
Usage: "Disables running the libp2p resource manager",
}
// DisableRegistrationCache is a flag for disabling the validator registration cache and using the db instead.
DisableRegistrationCache = &cli.BoolFlag{
Name: "disable-registration-cache",
Usage: "A temporary flag for disabling the validator registration cache and using the db instead. Note: registrations do not clear on restart while using the db.",
}
aggregateParallel = &cli.BoolFlag{
Name: "aggregate-parallel",
Usage: "Enables parallel aggregation of attestations",
}
)
// devModeFlags holds list of flags that are set when development mode is on.
@@ -200,6 +211,8 @@ var BeaconChainFlags = append(deprecatedBeaconFlags, append(deprecatedFlags, []c
aggregateSecondInterval,
aggregateThirdInterval,
disableResourceManager,
DisableRegistrationCache,
aggregateParallel,
}...)...)
// E2EBeaconChainFlags contains a list of the beacon chain feature flags to be tested in E2E.


@@ -84,6 +84,15 @@ type ProposerSettings struct {
DefaultConfig *ProposerOption
}
// ShouldBeSaved checks whether the settings are complete enough to be saved into the database.
// Conditions for being saved:
// 1. settings are not nil
// 2. proposeconfig is not nil (this defines specific settings for each validator key); defaultconfig can be nil in this case, falling back to beacon node settings
// 3. defaultconfig is not nil and has at least fee recipient settings (this defines general settings for all validator keys, though keys use proposeconfig settings when available); proposeconfig can be nil in this case
func (settings *ProposerSettings) ShouldBeSaved() bool {
return settings != nil && (settings.ProposeConfig != nil || settings.DefaultConfig != nil && settings.DefaultConfig.FeeRecipientConfig != nil)
}
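Because && binds tighter than || in Go, the expression saves settings whenever per-key config exists, even without a default fee recipient. A tiny truth-table sketch with simplified stand-in types (not the real ProposerSettings):

package main

import "fmt"

// Simplified stand-ins for the real ProposerSettings types.
type feeRecipient struct{}
type option struct{ FeeRecipientConfig *feeRecipient }
type settings struct {
	ProposeConfig map[string]*option
	DefaultConfig *option
}

// Same boolean as ShouldBeSaved: save when ProposeConfig exists, or when
// DefaultConfig exists with a fee recipient.
func shouldBeSaved(s *settings) bool {
	return s != nil && (s.ProposeConfig != nil || s.DefaultConfig != nil && s.DefaultConfig.FeeRecipientConfig != nil)
}

func main() {
	fmt.Println(shouldBeSaved(nil))                                            // false: nil settings
	fmt.Println(shouldBeSaved(&settings{}))                                    // false: nothing to save
	fmt.Println(shouldBeSaved(&settings{ProposeConfig: map[string]*option{}})) // true: per-key config present
	fmt.Println(shouldBeSaved(&settings{DefaultConfig: &option{}}))            // false: default without fee recipient
	fmt.Println(shouldBeSaved(&settings{DefaultConfig: &option{FeeRecipientConfig: &feeRecipient{}}})) // true
}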
// ToPayload converts struct to ProposerSettingsPayload
func (ps *ProposerSettings) ToPayload() *validatorpb.ProposerSettingsPayload {
if ps == nil {
