Compare commits

...

35 Commits

Author SHA1 Message Date
Potuz
ccce257df8 do not range-for for votes 2024-07-08 12:17:51 -03:00
Nishant Das
74b5f6ecf2 Add Aggregated Public Key Method (#14178)
* Add aggregated key method

* Gazelle

* Manu's Review
2024-07-08 09:28:41 +00:00
Nishant Das
aea2a469cc Update Libp2p (#14192) 2024-07-08 06:29:10 +00:00
Khanh Hoa
7a394062e1 refactor: enable errorlint and refactor code (#14110)
* refactor: enable errorlint and refactor code

* revert

* revert

* add bazel

* gofmt

* gofmt

* gofmt

* gofmt

* gci

* lint

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-07-04 22:40:13 +00:00
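For context on what errorlint enforces (visible in the diffs further down, where `switch err { case execution.ErrInvalidPayloadStatus: ... }` becomes `errors.Is`, type assertions become `errors.As`, and `fmt.Errorf("... %v", err)` becomes `%w`): once errors are wrapped, direct comparison and type assertion stop matching. A minimal standalone Go sketch, not code from this PR:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for a sentinel error such as execution.ErrInvalidPayloadStatus.
var errNotFound = errors.New("not found")

// decodeError stands in for a concrete error type such as *DecodeError.
type decodeError struct{ field string }

func (e *decodeError) Error() string { return "decode error on " + e.field }

func lookup() error {
	// Wrapping with %w keeps the original error reachable for errors.Is / errors.As;
	// wrapping with %v would flatten it into a plain string.
	return fmt.Errorf("lookup failed: %w", errNotFound)
}

func main() {
	err := lookup()

	// Flagged by errorlint: breaks as soon as the error is wrapped.
	fmt.Println(err == errNotFound) // false

	// Preferred: errors.Is walks the wrap chain, so this still matches.
	fmt.Println(errors.Is(err, errNotFound)) // true

	// Likewise, errors.As replaces type assertions such as err.(*decodeError).
	wrapped := fmt.Errorf("outer: %w", &decodeError{field: "pubkey"})
	var de *decodeError
	fmt.Println(errors.As(wrapped, &de), de.field) // true pubkey
}
```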
Manu NALEPA
8070fc8ece Fix slasher disk usage leak. (#14151)
* `PruneProposalsAtEpoch`: Test return value in case of nothing to prune.

* `TestStore_PruneAttestations_OK`: Create unique validator indexes.

Before this commit, `attester2` for `j = n` was the same as
`attester1` for `j = n + 1`, resulting in the erasure of many attesters.

This was presumably not the original intent.

* Slasher pruning: Check if the number of pruned items corresponds to the expectation.

Before this commit, if the pruning function removed a superset
of the expected pruned items (even all of the items), the test would still pass.

* Prune items that should be pruned and stop pruning items that should not be pruned.

The first 8 bytes of the `attestation-data-roots` and
`proposal-records` keys correspond respectively to an encoded epoch and
an encoded slot.

The important word in this sentence is "encoded".
Before this commit, the slot/epoch was SSZ-encoded, which means
little-endian encoded.

However:
- `uint64PrefixGreaterThan` uses `bytes.Compare`, which expects
   big-endian encoded values.
- `for k, _ := c.First(); k != nil; k, _ = c.Next()` iterates over the
  keys in big-endian order.

The consequences are:
- Some items that should be pruned are not pruned, causing a disk
  usage leak.
- Some items that should not be pruned are pruned, causing errors like
  the one in https://github.com/prysmaticlabs/prysm/issues/13658.

This commit encodes slots/epochs as big-endian before storing them in
the database keys.

Why was this bug not detected by unit tests before?

The values used before this commit in the
`TestStore_PruneProposalsAtEpoch` and `TestStore_PruneAttestations_OK`
unit tests are `10` and `20`.
Unfortunately, checking whether `littleEndian(20) > littleEndian(10)`
with a comparison that treats its operands as big-endian encoded still
returns the expected result...

Simply replacing `20` with `30` triggers the bug (see the sketch after
this commit entry).

* Make deepsource happy.

* Slasher: Migrate database from little-endian to big-endian.

* Update beacon-chain/slasher/service.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update beacon-chain/db/slasherkv/migrate.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* `TestMigrate`: Fix documentation.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-07-03 13:33:07 +00:00
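Editor's note on the ordering issue described above: `bytes.Compare` and database cursor iteration are lexicographic, i.e. they effectively treat keys as big-endian numbers, so little-endian (SSZ) encodings only sort correctly by accident when the values fit in the low byte(s). A minimal standalone sketch, not the slasher code, and not reproducing the exact `10`/`20`/`30` key layout from the tests:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func le(v uint64) []byte {
	b := make([]byte, 8)
	binary.LittleEndian.PutUint64(b, v) // SSZ-style encoding
	return b
}

func be(v uint64) []byte {
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(b, v) // encoding used after the fix
	return b
}

func main() {
	// Small values land entirely in the first byte, so lexicographic
	// comparison of the little-endian encodings happens to match numeric order.
	fmt.Println(bytes.Compare(le(20), le(10))) // 1: "20 > 10", accidentally correct

	// Once a value spills into the next byte, the two orders diverge.
	fmt.Println(bytes.Compare(le(256), le(1))) // -1: "256 < 1", wrong

	// Big-endian keys always sort in numeric order.
	fmt.Println(bytes.Compare(be(256), be(1))) // 1: correct
}
```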
Preston Van Loon
d6aeaf77b3 Electra: Unskip state-native tests for spectest v1.5.0-alpha.3 (#14174) 2024-07-02 19:33:27 +00:00
Preston Van Loon
40434ac209 Electra: Unskip passing spectests at v1.5.0-alpha.3 (#14175) 2024-07-02 19:25:19 +00:00
Preston Van Loon
0aab919d7c Electra: new process_consolidation_request (#14163)
* Electra: process_consolidation_request

* Enable consolidation_request spectests
2024-07-02 14:43:40 +00:00
Nishant Das
a8ecf5d118 Update to Go v1.22 (#13965)
* Bump go version up

* Update to go 1.22 compatible version

* Fix NoSec declarations

* Skip Gosec in GolangCi

* Avoid Bug In Analyzer

* Add in Gohashtree and Update to 1.22.4

* Fix Go Sum
2024-07-02 04:03:24 +00:00
kevaundray
8c0d6b27d0 use underscore to signify that it is not being used (#14162)
Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-07-01 20:42:55 +00:00
Radosław Kapka
2e6f1de29a EIP-7549: Aggregation gRPC methods (#14115)
* impl

* tests

* generate mock

* review feedback

* gofmt
2024-07-01 18:40:33 +00:00
Gianguido Sorà
657750b803 validator/client: process Sync Committee roles separately (#13995)
* validator/client: process Sync Committee roles separately

In a DV context, to be compatible with the proposed selection endpoint, the VC must push all partial selections to it instead of just one.

Process the sync committee role separately within the RolesAt method, so that partial selections can be pushed to the DV client appropriately, if configured (see the sketch after this commit entry).

* testing/util: re-add erroneously deleted state_test.go

* validator/client: fix tests

* validator/client: always process sync committee validator

Process sync committee duty at slot boundary as well.

* don't fail if validator is marked as sync committee but it is not in the list

ignore the duty and continue

* address code review comments

* fix build

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-07-01 17:18:45 +00:00
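A hypothetical sketch of the duty-handling shape described above (the types and names below are illustrative, not Prysm's `RolesAt` implementation): the sync-committee role is resolved on its own so every partial selection can be forwarded to a DV client, and a validator flagged for sync committee but absent from the committee list is skipped instead of failing the slot.

```go
package main

import "fmt"

type role int

const (
	roleUnknown role = iota
	roleAttester
	roleSyncCommittee
)

type duty struct {
	validatorIndex  uint64
	isAttester      bool
	isSyncCommittee bool
}

// rolesForDuty resolves the sync-committee role separately from the other
// roles, mirroring the idea of the change above; it is not the real API.
func rolesForDuty(d duty, syncCommittee []uint64) []role {
	var roles []role
	if d.isAttester {
		roles = append(roles, roleAttester)
	}
	if d.isSyncCommittee {
		inCommittee := false
		for _, idx := range syncCommittee {
			if idx == d.validatorIndex {
				inCommittee = true
				break
			}
		}
		if inCommittee {
			// In a DV setup this is where each partial selection would be
			// pushed to the DV client, not just a single one.
			roles = append(roles, roleSyncCommittee)
		} else {
			// Flagged as sync committee but not in the list: ignore the duty
			// and continue rather than treating it as a hard error.
			fmt.Printf("validator %d not in sync committee, skipping duty\n", d.validatorIndex)
		}
	}
	if len(roles) == 0 {
		roles = append(roles, roleUnknown)
	}
	return roles
}

func main() {
	fmt.Println(rolesForDuty(duty{validatorIndex: 7, isAttester: true, isSyncCommittee: true}, []uint64{1, 2, 3}))
}
```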
Preston Van Loon
af5eb82217 Electra: get_next_sync_committee_indices. (#14164) 2024-07-01 17:11:09 +00:00
terence
84e7f33fd9 Update to v1.5.0-alpha.3 spec tests (#14112)
* Update to v1.5.0-alpha.3 spec tests

* Fix deposit request

* Remove todo
2024-06-29 23:12:43 +00:00
Brindrajsinh Chauhan
ca83d29eef go version upgrade (#14161) 2024-06-29 23:08:16 +00:00
Afanti
0d49f6c142 chore: fix comments (#14149)
Fix typos in the code comments.
2024-06-29 21:03:15 +00:00
Preston Van Loon
041e81aad2 eip-7251: process_registry_updates (#14005)
* eip-7251: registry_updates

* Fix spectest violation

* More unit tests for process_registry_updates

* Update tests for ProcessRegistryUpdates

* Minor refactorings and benchmark

* Update beacon-chain/core/electra/registry_updates.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Wrap errors from process_registry_updates for easier debugging

* Rewrite process_registry_updates not to use st.Validators()

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-06-29 15:56:28 +00:00
james-prysm
028504ae9a EIP-6110: Supply validator deposits on chain (#13944)
* wip changes

* wip changes refactoring deposit functions

* wip on deposit functions

* wip changes to fix deposit processing

* more wip function logic

* gaz

* wip refactoring deposit changes

* removing circular dependency and other small fixes

* fixing circular dependencies

* fixing validators file

* fixing more tests

* fixing unit tests

* wip deposit packing

* refactoring packing

* changing packing code some more

* fixing packing change

* updating more tests

* removing comment

* fixing transition code for eip6110

* including inserts for validator index cache

* adding tests and placeholder test

* moving deposit related unit tests over

* adding in test for electra proposer packing

* spec test wip

* moving cache saving to the correct spot

* eip-6110: deposit spectests

* adding deposit receipt spec tests

* reverting altair operations in this pr and will be handled separately

* fixing renames but need to refactor process deposits

* gaz

* fixing names

* fixing linting

* fixing unit test

* fixing a test and updating test util

* bal never used

* fixing more tests

* fixing one more test

* addressing feedback

* refactoring process deposits to be part of their own fork instead of in blocks package

* adding new tests to cover functions and removing redundancies

* removing comment based on feedback

* resolving easy deepsource issue

* refactoring for appropriate aliasing and readability

* reverting some changes to simplify diff

* small edit to comment

* fixing linting

* fixing merge changes and review comments

* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer_deposits.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* partial review comments

* addressing slack feedback

* moving function from deposits to validator.go

* adding in batch verification and improving logs

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-06-29 01:53:20 +00:00
james-prysm
91c55c6880 Electra: Properly Calculate Proposer Probabilities for MaxEB (#14010)
* adding small change and tests for max eb spec change

* gaz

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-06-28 22:24:32 +00:00
Radosław Kapka
78cf75a0ed IndexedAtt wrapper for the slasher feed (#14150)
* `IndexedAtt` wrapper for the slasher feed

* test fixes

* fix simulator

* fix e2e

* Revert "Auxiliary commit to revert individual files from 191bbf77accfc2523fa9f909837a2e9dca132afa"

This reverts commit 2b0441a23a0e5f66e50cf36c3bbfbb39d587b17b.

* extract interface from channel
2024-06-28 15:44:19 +00:00
terence
fa370724f1 Increase attestation seen cache exp time to two epochs (#14156) 2024-06-28 00:20:08 +00:00
Barnabas Busa
5edc64d88c fix: update holesky repository layout (#14057)
* fix: update holesky repository layout

* Update holesky-testnet dependency

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2024-06-27 19:36:24 +00:00
Preston Van Loon
7898e65d4e Electra v1.5.0-alpha.3 changes: Move consolidations from beacon block body to the execution payload header. (#14152) 2024-06-27 19:28:30 +00:00
vickkkkkyy
6d63dbe1af removed some internal expired jump targets (#14128) 2024-06-27 19:26:54 +00:00
Preston Van Loon
eab9daf5f5 process_registry_updates without a full copy of the validator set (#14130)
* Avoid copying validator set in ProcessRegistryUpdates

* Fix bug with sortIndices. Thanks to @terencechain for the expert debugging
2024-06-27 17:49:24 +00:00
Nishant Das
4becd7b375 Remove Backoff when Searching for a lot of non-viable peers. Affects peer performance (#14148) 2024-06-27 03:27:00 +00:00
james-prysm
f230a6af58 Finalized validator index cache saving (#14146)
* adding logic for saving validator indices in new cache

* updating index cache for eip6110

* Update setters_misc.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* linting

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-06-26 20:58:28 +00:00
james-prysm
539b981ac3 Web3signer: persistent public keys (#13682)
* WIP

* broken and still wip

* more wip improving saving

* wip

* removing cyclic dependency

* gaz

* fixes

* fixing more tests and how files load

* fixing wallet tests

* fixing test

* updating keymanager tests

* improving how the web3signer keymanager works

* WIP

* updated keymanager to read from file

* gaz

* reuse readkeyfile function and add in duplicate keys check

* adding in locks to increase safety

* refactored how saving keys work, more tests needed:

* fix test

* fix tests

* adding unit tests and cleaning up locks

* fixing tests

* tests were not fixed properly

* removing unneeded files

* Update cmd/validator/accounts/wallet_utils.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/accounts/wallet/wallet.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* review feedback

* updating flags and e2e

* deepsource fix

* resolving feedback

* removing fatal test for now

* addressing manu's feedback

* gofmt

* fixing tests

* fixing unit tests

* more idiomatic feedback

* updating log files

* updating based on preston's suggestion

* improving logs and event triggers

* addressing comments from manu

* truncating was not triggering key file reload

* fixing unit test

* removing wrong dependency

* fix another broken unit test

* fixing bad pathing on file

* handle errors in test

* fixing testdata dependency

* resolving deepsource and comment around logs

* removing unneeded buffer

* reworking ux of web3signer file, unit tests to come

* adding unit tests for file change retries

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* updating based on review feedback

* missed err check

* adding some aliases to make running easier

* Update validator/keymanager/remote-web3signer/log.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's review

* Update validator/keymanager/remote-web3signer/internal/client.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing more review feedback and linting

* fixing tests

* adding log

* adding 1 more test

* improving logs

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-06-26 15:36:32 +00:00
Khanh Hoa
aad29ff9fc refactor: use go ticker instead of timer (#14134)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-06-25 21:28:45 +00:00
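A small standalone illustration of the distinction behind this refactor (standard-library behavior only, not the Prysm code that was changed): `time.Timer` fires once and must be reset for periodic work, while `time.Ticker` fires repeatedly until stopped.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A Timer fires exactly once; reusing it for periodic work means
	// remembering to Reset it after every tick.
	timer := time.NewTimer(10 * time.Millisecond)
	<-timer.C
	fmt.Println("timer fired once")

	// A Ticker keeps firing on its interval until Stop is called, which is
	// the natural primitive for a recurring background task.
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for i := 0; i < 3; i++ {
		<-ticker.C
		fmt.Println("tick", i)
	}
}
```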
james-prysm
4722446caf updating process deposit function to be more readable and fork specific, breaking out components of pr 13944 (#14139) 2024-06-25 18:19:42 +00:00
Radosław Kapka
b8aad84285 EIP-7549: attestation pool (#14121)
* implementation

* test fixes

* Electra tests

* remove aggregator tests

* id comments and tests

* make Id equal to [33]byte
2024-06-25 13:18:07 +00:00
Sammy Rosso
5f0d6074d6 Prysmctl ui bug (#14140)
* Fix ui bug

* Better logging
2024-06-25 13:13:42 +00:00
james-prysm
9d6a2f5390 Remove electra duplicate helpers (#14138)
* removing duplicate helper functions to reduce 6110 size

* linting
2024-06-24 21:22:46 +00:00
Preston Van Loon
490ddbf782 Enable golang.org/x/tools/go/analysis/errorsas static analysis check (#14135) 2024-06-24 17:20:45 +00:00
Radosław Kapka
adc875b20d EIP-7549: p2p and sync (#14085)
* EIP-7549: p2p and sync

* small cleanup

* fuzz fix

* deepsource

* review

* fix ineffectual assignment

* fix pubsub

* update ComputeSubnetForAttestation

* review

* review
2024-06-24 13:57:11 +00:00
352 changed files with 8885 additions and 4943 deletions

View File

@@ -1,4 +1,4 @@
FROM golang:1.21-alpine
FROM golang:1.22-alpine
COPY entrypoint.sh /entrypoint.sh

View File

@@ -16,7 +16,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.21.5'
go-version: '1.22.3'
- id: list
uses: shogo82148/actions-go-fuzz/list@v0
with:
@@ -36,7 +36,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.21.5'
go-version: '1.22.3'
- uses: shogo82148/actions-go-fuzz/run@v0
with:
packages: ${{ matrix.package }}

View File

@@ -28,14 +28,14 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.21
- name: Set up Go 1.22
uses: actions/setup-go@v3
with:
go-version: '1.21.5'
go-version: '1.22.3'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
go install github.com/securego/gosec/v2/cmd/gosec@v2.15.0
go install github.com/securego/gosec/v2/cmd/gosec@v2.19.0
gosec -exclude-generated -exclude=G307 -exclude-dir=crypto/bls/herumi ./...
lint:
@@ -45,10 +45,10 @@ jobs:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.21
- name: Set up Go 1.22
uses: actions/setup-go@v3
with:
go-version: '1.21.5'
go-version: '1.22.3'
id: go
- name: Golangci-lint
@@ -64,7 +64,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: '1.21.5'
go-version: '1.22.3'
id: go
- name: Check out code into the Go module directory

View File

@@ -6,7 +6,7 @@ run:
- proto
- tools/analyzers
timeout: 10m
go: '1.21.5'
go: '1.22.3'
linters:
enable-all: true
@@ -34,7 +34,6 @@ linters:
- dogsled
- dupl
- durationcheck
- errorlint
- exhaustive
- exhaustruct
- forbidigo
@@ -52,6 +51,7 @@ linters:
- gofumpt
- gomnd
- gomoddirectives
- gosec
- inamedparam
- interfacebloat
- ireturn

View File

@@ -224,6 +224,7 @@ nogo(
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/defers:go_default_library",
"@org_golang_x_tools//go/analysis/passes/directive:go_default_library",
"@org_golang_x_tools//go/analysis/passes/errorsas:go_default_library",
# fieldalignment disabled
#"@org_golang_x_tools//go/analysis/passes/fieldalignment:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",

View File

@@ -182,7 +182,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.21.8",
go_version = "1.22.4",
nogo = "@//:nogo",
)
@@ -227,7 +227,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-alpha.2"
consensus_spec_version = "v1.5.0-alpha.3"
bls_test_version = "v0.1.1"
@@ -243,7 +243,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-NNXBa7SZ2sFb68HPNahgu1p0yDBpjuKJuLfRCl7vvoQ=",
integrity = "sha256-+byv+GUOQytex5GgtjBGVoNDseJZbiBdAjEtlgCbjEo=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -259,7 +259,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-7BnlBvGWU92iAB100cMaAXVQhRrqpMQbavgrI+/paCw=",
integrity = "sha256-JJUy/jT1h3kGQkinTuzL7gMOA1+qgmPgJXVrYuH63Cg=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -275,7 +275,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-VCHhcNt+fynf/sHK11qbRBAy608u9T1qAafvAGfxQhA=",
integrity = "sha256-T2VM4Qd0SwgGnTjWxjOX297DqEsovO9Ueij1UEJy48Y=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -290,7 +290,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-a2aCNFyFkYLtf6QSwGOHdx7xXHjA2NNT8x8ZuxB0aes=",
integrity = "sha256-OP9BCBcQ7i+93bwj7ktY8pZ5uWsGjgTe4XTp7BDhX+I=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -332,14 +332,14 @@ http_archive(
filegroup(
name = "configs",
srcs = [
"custom_config_data/config.yaml",
"metadata/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
sha256 = "5f4be6fd088683ea9db45c863b9c5a1884422449e5b59fd2d561d3ba0f73ffd9",
strip_prefix = "holesky-9d9aabf2d4de51334ee5fed6c79a4d55097d1a43",
url = "https://github.com/eth-clients/holesky/archive/9d9aabf2d4de51334ee5fed6c79a4d55097d1a43.tar.gz", # 2024-01-22
integrity = "sha256-b7ZTT+olF+VXEJYNTV5jggNtCkt9dOejm1i2VE+zy+0=",
strip_prefix = "holesky-874c199423ccd180607320c38cbaca05d9a1573a",
url = "https://github.com/eth-clients/holesky/archive/874c199423ccd180607320c38cbaca05d9a1573a.tar.gz", # 2024-06-18
)
http_archive(

View File

@@ -9,22 +9,20 @@ import (
"net/http"
"testing"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
blocktest "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks/testing"
"github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
type testRT struct {

View File

@@ -55,6 +55,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],

View File

@@ -12,6 +12,7 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
@@ -1914,7 +1915,8 @@ func TestEmptyResponseBody(t *testing.T) {
var b []byte
r := &ExecutionPayloadResponse{}
err := json.Unmarshal(b, r)
_, ok := err.(*json.SyntaxError)
var syntaxError *json.SyntaxError
ok := errors.As(err, &syntaxError)
require.Equal(t, true, ok)
})
t.Run("empty object", func(t *testing.T) {

View File

@@ -1,6 +1,7 @@
package server
import (
"errors"
"fmt"
"strings"
)
@@ -15,7 +16,8 @@ type DecodeError struct {
// NewDecodeError wraps an error (either the initial decoding error or another DecodeError).
// The current field that failed decoding must be passed in.
func NewDecodeError(err error, field string) *DecodeError {
de, ok := err.(*DecodeError)
var de *DecodeError
ok := errors.As(err, &de)
if ok {
return &DecodeError{path: append([]string{field}, de.path...), err: de.err}
}

View File

@@ -8,11 +8,10 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v5/container/slice"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v5/container/slice"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/math"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"

View File

@@ -64,6 +64,7 @@ go_library(
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/slasher/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
@@ -138,6 +139,7 @@ go_test(
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
@@ -181,6 +183,7 @@ go_test(
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -6,8 +6,6 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
f "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
@@ -20,6 +18,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which

View File

@@ -74,8 +74,8 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
}
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, arg.attributes)
if err != nil {
switch err {
case execution.ErrAcceptedSyncingPayloadStatus:
switch {
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
forkchoiceUpdatedOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"headSlot": headBlk.Slot(),
@@ -83,7 +83,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash[:])),
}).Info("Called fork choice updated with optimistic block")
return payloadID, nil
case execution.ErrInvalidPayloadStatus:
case errors.Is(err, execution.ErrInvalidPayloadStatus):
forkchoiceUpdatedInvalidNodeCount.Inc()
headRoot := arg.headRoot
if len(lastValidHash) == 0 {
@@ -139,7 +139,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
"newHeadRoot": fmt.Sprintf("%#x", bytesutil.Trunc(r[:])),
}).Warn("Pruned invalid blocks")
return pid, invalidBlock{error: ErrInvalidPayload, root: arg.headRoot, invalidAncestorRoots: invalidRoots}
default:
log.WithError(err).Error(ErrUndefinedExecutionEngineError)
return nil, nil
@@ -231,18 +230,18 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
} else {
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, []common.Hash{}, &common.Hash{} /*empty version hashes and root before Deneb*/)
}
switch err {
case nil:
switch {
case err == nil:
newPayloadValidNodeCount.Inc()
return true, nil
case execution.ErrAcceptedSyncingPayloadStatus:
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
newPayloadOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"slot": blk.Block().Slot(),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic block")
return false, nil
case execution.ErrInvalidPayloadStatus:
case errors.Is(err, execution.ErrInvalidPayloadStatus):
lvh := bytesutil.ToBytes32(lastValidHash)
return false, invalidBlock{
error: ErrInvalidPayload,

View File

@@ -69,7 +69,7 @@ func NewLightClientOptimisticUpdateFromBeaconState(
// assert sum(block.message.body.sync_aggregate.sync_committee_bits) >= MIN_SYNC_COMMITTEE_PARTICIPANTS
syncAggregate, err := block.Block().Body().SyncAggregate()
if err != nil {
return nil, fmt.Errorf("could not get sync aggregate %v", err)
return nil, fmt.Errorf("could not get sync aggregate %w", err)
}
if syncAggregate.SyncCommitteeBits.Count() < params.BeaconConfig().MinSyncCommitteeParticipants {
@@ -85,18 +85,18 @@ func NewLightClientOptimisticUpdateFromBeaconState(
header := state.LatestBlockHeader()
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, fmt.Errorf("could not get state root %v", err)
return nil, fmt.Errorf("could not get state root %w", err)
}
header.StateRoot = stateRoot[:]
headerRoot, err := header.HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get header root %v", err)
return nil, fmt.Errorf("could not get header root %w", err)
}
blockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get block root %v", err)
return nil, fmt.Errorf("could not get block root %w", err)
}
if headerRoot != blockRoot {
@@ -114,14 +114,14 @@ func NewLightClientOptimisticUpdateFromBeaconState(
// attested_header.state_root = hash_tree_root(attested_state)
attestedStateRoot, err := attestedState.HashTreeRoot(ctx)
if err != nil {
return nil, fmt.Errorf("could not get attested state root %v", err)
return nil, fmt.Errorf("could not get attested state root %w", err)
}
attestedHeader.StateRoot = attestedStateRoot[:]
// assert hash_tree_root(attested_header) == block.message.parent_root
attestedHeaderRoot, err := attestedHeader.HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get attested header root %v", err)
return nil, fmt.Errorf("could not get attested header root %w", err)
}
if attestedHeaderRoot != block.Block().ParentRoot() {
@@ -175,13 +175,13 @@ func NewLightClientFinalityUpdateFromBeaconState(
if finalizedBlock.Block().Slot() != 0 {
tempFinalizedHeader, err := finalizedBlock.Header()
if err != nil {
return nil, fmt.Errorf("could not get finalized header %v", err)
return nil, fmt.Errorf("could not get finalized header %w", err)
}
finalizedHeader = migration.V1Alpha1SignedHeaderToV1(tempFinalizedHeader).GetMessage()
finalizedHeaderRoot, err := finalizedHeader.HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get finalized header root %v", err)
return nil, fmt.Errorf("could not get finalized header root %w", err)
}
if finalizedHeaderRoot != bytesutil.ToBytes32(attestedState.FinalizedCheckpoint().Root) {
@@ -204,7 +204,7 @@ func NewLightClientFinalityUpdateFromBeaconState(
var bErr error
finalityBranch, bErr = attestedState.FinalizedRootProof(ctx)
if bErr != nil {
return nil, fmt.Errorf("could not get finalized root proof %v", bErr)
return nil, fmt.Errorf("could not get finalized root proof %w", bErr)
}
} else {
finalizedHeader = &ethpbv1.BeaconBlockHeader{

View File

@@ -6,9 +6,6 @@ import (
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
@@ -32,6 +29,8 @@ import (
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// A custom slot deadline for processing state slots in our cache.

View File

@@ -7,9 +7,6 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -25,6 +22,8 @@ import (
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.

View File

@@ -13,6 +13,7 @@ import (
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -169,13 +170,8 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
// Send finalized events and finalized deposits in the background
if newFinalized {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
go s.sendNewFinalizedEvent(ctx, postState)
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDeposits(depCtx, finalized.Root)
cancel()
}()
// hook to process all post state finalization tasks
s.executePostFinalizationTasks(ctx, postState)
}
// If slasher is configured, forward the attestations in the block via an event feed for processing.
@@ -224,6 +220,19 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
return nil
}
func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedState state.BeaconState) {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
go func() {
finalizedState.SaveValidatorIndices() // used to handle Validator index invariant from EIP6110
s.sendNewFinalizedEvent(ctx, finalizedState)
}()
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDeposits(depCtx, finalized.Root)
cancel()
}()
}
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear, transitioning
// the state, performing batch verification of all collected signatures and then performing the appropriate
// actions for a block post-transition.
@@ -489,7 +498,7 @@ func (s *Service) sendBlockAttestationsToSlasher(signed interfaces.ReadOnlySigne
log.WithError(err).Error("Could not convert to indexed attestation")
return
}
s.cfg.SlasherAttestationsFeed.Send(indexedAtt)
s.cfg.SlasherAttestationsFeed.Send(&types.WrappedIndexedAtt{IndexedAtt: indexedAtt})
}
}

View File

@@ -6,11 +6,14 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
blockchainTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -20,6 +23,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -415,3 +419,81 @@ func Test_sendNewFinalizedEvent(t *testing.T) {
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
}
func Test_executePostFinalizationTasks(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EIP6110ValidatorIndexCache: true,
})
defer resetFn()
logHook := logTest.NewGlobal()
headState, err := util.NewBeaconState()
require.NoError(t, err)
finalizedStRoot, err := headState.HashTreeRoot(context.Background())
require.NoError(t, err)
genesis := util.NewBeaconBlock()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*122 + 1
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = finalizedSlot
headBlock.Block.StateRoot = finalizedStRoot[:]
headBlock.Block.ParentRoot = bytesutil.PadTo(genesisRoot[:], 32)
headRoot, err := headBlock.Block.HashTreeRoot()
require.NoError(t, err)
hexKey := "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
key, err := hexutil.Decode(hexKey)
require.NoError(t, err)
require.NoError(t, headState.SetValidators([]*ethpb.Validator{
{
PublicKey: key,
WithdrawalCredentials: make([]byte, fieldparams.RootLength),
},
}))
require.NoError(t, headState.SetSlot(finalizedSlot))
require.NoError(t, headState.SetFinalizedCheckpoint(&ethpb.Checkpoint{
Epoch: 123,
Root: headRoot[:],
}))
require.NoError(t, headState.SetGenesisValidatorsRoot(params.BeaconConfig().ZeroHash[:]))
s, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
require.Equal(t, true, ok, "event has wrong data type")
assert.Equal(t, primitives.Epoch(123), fc.Epoch)
assert.DeepEqual(t, headRoot[:], fc.Block)
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
// check the cache
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
// check deposit
require.LogsContain(t, logHook, "Finalized deposit insertion completed at index")
}

View File

@@ -11,8 +11,6 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
@@ -44,6 +42,7 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
// Service represents a service that handles the internal
@@ -330,6 +329,8 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
return errors.Wrap(err, "failed to initialize blockchain service")
}
saved.SaveValidatorIndices() // used to handle Validator index invariant from EIP6110
return nil
}
@@ -504,7 +505,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
}
genesisBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
if err != nil || genesisBlk == nil || genesisBlk.IsNil() {
return fmt.Errorf("could not load genesis block: %v", err)
return fmt.Errorf("could not load genesis block: %w", err)
}
genesisBlkRoot, err := genesisBlk.Block().HashTreeRoot()
if err != nil {

View File

@@ -8,9 +8,10 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
@@ -29,6 +30,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/trie"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -203,7 +205,7 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
BlockHash: make([]byte, 32),
})
require.NoError(t, err)
genState, err = blocks.ProcessPreGenesisDeposits(ctx, genState, deposits)
genState, err = altair.ProcessPreGenesisDeposits(ctx, genState, deposits)
require.NoError(t, err)
_, err = bc.initializeBeaconChain(ctx, time.Unix(0, 0), genState, &ethpb.Eth1Data{DepositRoot: hashTreeRoot[:], BlockHash: make([]byte, 32)})
@@ -499,6 +501,66 @@ func TestChainService_EverythingOptimistic(t *testing.T) {
require.Equal(t, true, op)
}
func TestStartFromSavedState_ValidatorIndexCacheUpdated(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableStartOptimistic: true,
EIP6110ValidatorIndexCache: true,
})
defer resetFn()
genesis := util.NewBeaconBlock()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = finalizedSlot
headBlock.Block.ParentRoot = bytesutil.PadTo(genesisRoot[:], 32)
headState, err := util.NewBeaconState()
require.NoError(t, err)
hexKey := "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
key, err := hexutil.Decode(hexKey)
require.NoError(t, err)
hexKey2 := "0x42247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
key2, err := hexutil.Decode(hexKey2)
require.NoError(t, err)
require.NoError(t, headState.SetValidators([]*ethpb.Validator{
{
PublicKey: key,
WithdrawalCredentials: make([]byte, fieldparams.RootLength),
},
{
PublicKey: key2,
WithdrawalCredentials: make([]byte, fieldparams.RootLength),
},
}))
require.NoError(t, headState.SetSlot(finalizedSlot))
require.NoError(t, headState.SetGenesisValidatorsRoot(params.BeaconConfig().ZeroHash[:]))
headRoot, err := headBlock.Block.HashTreeRoot()
require.NoError(t, err)
c, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, c.StartFromSavedState(headState))
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
index2, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key2))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(1), index2) // second index
}
// MockClockSetter satisfies the ClockSetter interface for testing the conditions where blockchain.Service should
// call SetGenesis.
type MockClockSetter struct {

View File

@@ -36,6 +36,7 @@ go_library(
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
@@ -78,6 +79,7 @@ go_test(
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",

View File

@@ -5,11 +5,32 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// ProcessPreGenesisDeposits processes a deposit for the beacon state before chainstart.
func ProcessPreGenesisDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
var err error
beaconState, err = ProcessDeposits(ctx, beaconState, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit")
}
beaconState, err = blocks.ActivateValidatorWithEffectiveBalance(beaconState, deposits)
if err != nil {
return nil, err
}
return beaconState, nil
}
// ProcessDeposits processes validator deposits for beacon state Altair.
func ProcessDeposits(
ctx context.Context,
@@ -33,23 +54,158 @@ func ProcessDeposits(
return beaconState, nil
}
// ProcessDeposit processes validator deposit for beacon state Altair.
// ProcessDeposit takes in a deposit object and inserts it
// into the registry as a new validator or balance change.
// Returns the resulting state, a boolean to indicate whether or not the deposit
// resulted in a new validator entry into the beacon state, and any error.
//
// Spec pseudocode definition:
// def process_deposit(state: BeaconState, deposit: Deposit) -> None:
//
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
//
// apply_deposit(
// state=state,
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// signature=deposit.data.signature,
// )
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, error) {
beaconState, isNewValidator, err := blocks.ProcessDeposit(beaconState, deposit, verifySignature)
if err != nil {
if err := blocks.VerifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, err
}
return nil, errors.Wrapf(err, "could not verify deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
if err := beaconState.SetEth1DepositIndex(beaconState.Eth1DepositIndex() + 1); err != nil {
return nil, err
}
if isNewValidator {
if err := beaconState.AppendInactivityScore(0); err != nil {
return ApplyDeposit(beaconState, deposit.Data, verifySignature)
}
// ApplyDeposit
// Spec pseudocode definition:
// def apply_deposit(state: BeaconState, pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> None:
//
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// deposit_message = DepositMessage(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// amount=amount,
// )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// if bls.Verify(pubkey, signing_root, signature):
// add_validator_to_registry(state, pubkey, withdrawal_credentials, amount)
// else:
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// increase_balance(state, index, amount)
func ApplyDeposit(beaconState state.BeaconState, data *ethpb.Deposit_Data, verifySignature bool) (state.BeaconState, error) {
pubKey := data.PublicKey
amount := data.Amount
withdrawalCredentials := data.WithdrawalCredentials
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
if verifySignature {
valid, err := blocks.IsValidDepositSignature(data)
if err != nil {
return nil, err
}
if !valid {
return beaconState, nil
}
}
if err := AddValidatorToRegistry(beaconState, pubKey, withdrawalCredentials, amount); err != nil {
return nil, err
}
if err := beaconState.AppendPreviousParticipationBits(0); err != nil {
return nil, err
}
if err := beaconState.AppendCurrentParticipationBits(0); err != nil {
} else {
if err := helpers.IncreaseBalance(beaconState, index, amount); err != nil {
return nil, err
}
}
return beaconState, nil
}
// AddValidatorToRegistry updates the beacon state with validator information
// def add_validator_to_registry(state: BeaconState,
//
// pubkey: BLSPubkey,
// withdrawal_credentials: Bytes32,
// amount: uint64) -> None:
// index = get_index_for_new_validator(state)
// validator = get_validator_from_deposit(pubkey, withdrawal_credentials)
// set_or_append_list(state.validators, index, validator)
// set_or_append_list(state.balances, index, 0)
// set_or_append_list(state.previous_epoch_participation, index, ParticipationFlags(0b0000_0000)) // New in Altair
// set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000)) // New in Altair
// set_or_append_list(state.inactivity_scores, index, uint64(0)) // New in Altair
func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdrawalCredentials []byte, amount uint64) error {
val := GetValidatorFromDeposit(pubKey, withdrawalCredentials, amount)
if err := beaconState.AppendValidator(val); err != nil {
return err
}
if err := beaconState.AppendBalance(amount); err != nil {
return err
}
// only active in altair and only when it's a new validator (after append balance)
if beaconState.Version() >= version.Altair {
if err := beaconState.AppendInactivityScore(0); err != nil {
return err
}
if err := beaconState.AppendPreviousParticipationBits(0); err != nil {
return err
}
if err := beaconState.AppendCurrentParticipationBits(0); err != nil {
return err
}
}
return nil
}
// GetValidatorFromDeposit gets a new validator object with provided parameters
//
// def get_validator_from_deposit(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64) -> Validator:
//
// effective_balance = min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
//
// return Validator(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// activation_eligibility_epoch=FAR_FUTURE_EPOCH,
// activation_epoch=FAR_FUTURE_EPOCH,
// exit_epoch=FAR_FUTURE_EPOCH,
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// effective_balance=effective_balance,
// )
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) *ethpb.Validator {
effectiveBalance := amount - (amount % params.BeaconConfig().EffectiveBalanceIncrement)
if params.BeaconConfig().MaxEffectiveBalance < effectiveBalance {
effectiveBalance = params.BeaconConfig().MaxEffectiveBalance
}
return &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: effectiveBalance,
}
}

View File

@@ -30,6 +30,59 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
}
}
func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafeAltair(state)
require.NoError(t, err)
r, err := altair.ProcessPreGenesisDeposits(ctx, s, []*ethpb.Deposit{deposit})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := altair.ProcessPreGenesisDeposits(ctx, s, []*ethpb.Deposit{deposit})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := altair.ProcessDeposit(s, deposit, true)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}

View File

@@ -7,11 +7,14 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/trie"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
@@ -44,36 +47,6 @@ func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
require.Equal(t, 2, len(newState.Validators()), "Incorrect validator count")
}
func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
}
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
beaconState, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
Eth1Data: &ethpb.Eth1Data{
DepositRoot: []byte{0},
BlockHash: []byte{1},
},
})
require.NoError(t, err)
want := "deposit root did not verify"
_, err = altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{deposit})
require.ErrorContains(t, want, err)
}
func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
@@ -245,3 +218,158 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
func TestPreGenesisDeposits_SkipInvalidDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(100)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
for i := range dep {
proof, err := dt.MerkleProof(i)
require.NoError(t, err)
dep[i].Proof = proof
}
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, err := altair.ProcessPreGenesisDeposits(context.Background(), beaconState, dep)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
_, ok := newState.ValidatorIndexByPubkey(bytesutil.ToBytes48(dep[0].Data.PublicKey))
require.Equal(t, false, ok, "bad pubkey should not exist in state")
for i := 1; i < newState.NumValidators(); i++ {
val, err := newState.ValidatorAtIndex(primitives.ValidatorIndex(i))
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalance, val.EffectiveBalance, "unequal effective balance")
require.Equal(t, primitives.Epoch(0), val.ActivationEpoch)
require.Equal(t, primitives.Epoch(0), val.ActivationEligibilityEpoch)
}
if newState.Eth1DepositIndex() != 100 {
t.Errorf(
"Expected Eth1DepositIndex to be increased by 99 after processing an invalid deposit, received change: %v",
newState.Eth1DepositIndex(),
)
}
if len(newState.Validators()) != 100 {
t.Errorf("Expected validator list to have length 100, received: %v", len(newState.Validators()))
}
if len(newState.Balances()) != 100 {
t.Errorf("Expected validator balances list to have length 100, received: %v", len(newState.Balances()))
}
if newState.Balances()[0] != 0 {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
func TestProcessDeposit_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: 1000,
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
}
sr, err := signing.ComputeSigningRoot(deposit.Data, bytesutil.ToBytes(3, 32))
require.NoError(t, err)
sig := sk.Sign(sr[:])
deposit.Data.Signature = sig.Marshal()
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
registry := []*ethpb.Validator{
{
PublicKey: []byte{1, 2, 3},
},
{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: []byte{1},
},
}
balances := []uint64{0, 50}
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: &ethpb.Eth1Data{
DepositRoot: root[:],
BlockHash: root[:],
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposit(beaconState, deposit, true /*verifySignature*/)
require.NoError(t, err, "Process deposit failed")
require.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}
func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, fieldparams.BLSPubkeyLength),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
}
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Deposits: []*ethpb.Deposit{deposit},
},
}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Eth1Data: &ethpb.Eth1Data{
DepositRoot: []byte{0},
BlockHash: []byte{1},
},
})
require.NoError(t, err)
want := "deposit root did not verify"
_, err = altair.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
assert.ErrorContains(t, want, err)
}

View File

@@ -18,6 +18,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/math"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -101,7 +102,8 @@ func NextSyncCommittee(ctx context.Context, s state.BeaconState) (*ethpb.SyncCom
// candidate_index = active_validator_indices[shuffled_index]
// random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
// effective_balance = state.validators[candidate_index].effective_balance
// if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
// # [Modified in Electra:EIP7251]
// if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_byte:
// sync_committee_indices.append(candidate_index)
// i += 1
// return sync_committee_indices
@@ -121,6 +123,11 @@ func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconState) ([]primi
cIndices := make([]primitives.ValidatorIndex, 0, syncCommitteeSize)
hashFunc := hash.CustomSHA256Hasher()
maxEB := cfg.MaxEffectiveBalanceElectra
if s.Version() < version.Electra {
maxEB = cfg.MaxEffectiveBalance
}
for i := primitives.ValidatorIndex(0); uint64(len(cIndices)) < params.BeaconConfig().SyncCommitteeSize; i++ {
if ctx.Err() != nil {
return nil, ctx.Err()
@@ -140,7 +147,7 @@ func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconState) ([]primi
}
effectiveBal := v.EffectiveBalance()
if effectiveBal*maxRandomByte >= cfg.MaxEffectiveBalance*uint64(randomByte) {
if effectiveBal*maxRandomByte >= maxEB*uint64(randomByte) {
cIndices = append(cIndices, cIndex)
}
}
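
The hunk above makes the sync-committee candidate check fork-aware: pre-Electra states keep MAX_EFFECTIVE_BALANCE, while Electra states use MAX_EFFECTIVE_BALANCE_ELECTRA. Below is a hedged, standalone sketch of that selection; the Gwei values and fork ordinals are illustrative assumptions, not imports from Prysm's config or version packages.

// A minimal, self-contained sketch of the version-gated cap in the hunk above.
package main

import "fmt"

const (
	maxEffectiveBalance        = 32_000_000_000    // pre-Electra cap: 32 ETH in Gwei (assumed mainnet value)
	maxEffectiveBalanceElectra = 2_048_000_000_000 // Electra cap: 2048 ETH in Gwei (assumed mainnet value)
	versionAltair              = 1                 // illustrative ordinal; only the ordering matters here
	versionElectra             = 5
)

// maxEBForVersion mirrors the branch added to NextSyncCommitteeIndices:
// pre-Electra states keep the old cap, Electra states use the larger one.
func maxEBForVersion(stateVersion int) uint64 {
	if stateVersion < versionElectra {
		return maxEffectiveBalance
	}
	return maxEffectiveBalanceElectra
}

// acceptCandidate is the sampling condition
// effective_balance * MAX_RANDOM_BYTE >= max_eb * random_byte.
func acceptCandidate(effectiveBalance uint64, randomByte byte, stateVersion int) bool {
	const maxRandomByte = 255
	return effectiveBalance*maxRandomByte >= maxEBForVersion(stateVersion)*uint64(randomByte)
}

func main() {
	// A 32 ETH validator with random byte 200 is accepted pre-Electra but not
	// post-Electra, because the Electra cap makes the right-hand side 64x larger.
	fmt.Println(acceptCandidate(32_000_000_000, 200, versionAltair))  // true
	fmt.Println(acceptCandidate(32_000_000_000, 200, versionElectra)) // false
}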

View File

@@ -13,13 +13,14 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
)
func TestSyncCommitteeIndices_CanGet(t *testing.T) {
getState := func(t *testing.T, count uint64) state.BeaconState {
getState := func(t *testing.T, count uint64, vers int) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
@@ -27,17 +28,28 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
EffectiveBalance: params.BeaconConfig().MinDepositAmount,
}
}
st, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
var st state.BeaconState
var err error
switch vers {
case version.Altair:
st, err = state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
case version.Electra:
st, err = state_native.InitializeFromProtoElectra(&ethpb.BeaconStateElectra{
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
default:
t.Fatal("Unknown version")
}
require.NoError(t, err)
require.NoError(t, st.SetValidators(validators))
return st
}
type args struct {
state state.BeaconState
epoch primitives.Epoch
validatorCount uint64
epoch primitives.Epoch
}
tests := []struct {
name string
@@ -48,32 +60,32 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
{
name: "genesis validator count, epoch 0",
args: args{
state: getState(t, params.BeaconConfig().MinGenesisActiveValidatorCount),
epoch: 0,
validatorCount: params.BeaconConfig().MinGenesisActiveValidatorCount,
epoch: 0,
},
wantErr: false,
},
{
name: "genesis validator count, epoch 100",
args: args{
state: getState(t, params.BeaconConfig().MinGenesisActiveValidatorCount),
epoch: 100,
validatorCount: params.BeaconConfig().MinGenesisActiveValidatorCount,
epoch: 100,
},
wantErr: false,
},
{
name: "less than optimal validator count, epoch 100",
args: args{
state: getState(t, params.BeaconConfig().MaxValidatorsPerCommittee),
epoch: 100,
validatorCount: params.BeaconConfig().MaxValidatorsPerCommittee,
epoch: 100,
},
wantErr: false,
},
{
name: "no active validators, epoch 100",
args: args{
state: getState(t, 0), // Regression test for divide by zero. Issue #13051.
epoch: 100,
validatorCount: 0, // Regression test for divide by zero. Issue #13051.
epoch: 100,
},
wantErr: true,
errString: "no active validator indices",
@@ -81,13 +93,18 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
got, err := altair.NextSyncCommitteeIndices(context.Background(), tt.args.state)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
require.NoError(t, err)
require.Equal(t, int(params.BeaconConfig().SyncCommitteeSize), len(got))
for _, v := range []int{version.Altair, version.Electra} {
t.Run(version.String(v), func(t *testing.T) {
helpers.ClearCache()
st := getState(t, tt.args.validatorCount, v)
got, err := altair.NextSyncCommitteeIndices(context.Background(), st)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
require.NoError(t, err)
require.Equal(t, int(params.BeaconConfig().SyncCommitteeSize), len(got))
}
})
}
})
}

View File

@@ -297,60 +297,6 @@ func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
}
}
func TestFuzzProcessDeposits_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposits := make([]*ethpb.Deposit, 100)
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
for i := range deposits {
fuzzer.Fuzz(deposits[i])
}
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessDeposits(ctx, s, deposits)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
}
}
}
func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessPreGenesisDeposits(ctx, s, []*ethpb.Deposit{deposit})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, _, err := ProcessDeposit(s, deposit, true)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzverifyDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
@@ -360,7 +306,7 @@ func TestFuzzverifyDeposit_10000(t *testing.T) {
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
err = verifyDeposit(s, deposit)
err = VerifyDeposit(s, deposit)
_ = err
}
}

View File

@@ -5,7 +5,6 @@ import (
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -17,24 +16,6 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// ProcessPreGenesisDeposits processes a deposit for the beacon state before chainstart.
func ProcessPreGenesisDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
var err error
beaconState, err = ProcessDeposits(ctx, beaconState, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit")
}
beaconState, err = ActivateValidatorWithEffectiveBalance(beaconState, deposits)
if err != nil {
return nil, err
}
return beaconState, nil
}
// ActivateValidatorWithEffectiveBalance updates validator's effective balance, and if it's above MaxEffectiveBalance, validator becomes active in genesis.
func ActivateValidatorWithEffectiveBalance(beaconState state.BeaconState, deposits []*ethpb.Deposit) (state.BeaconState, error) {
for _, d := range deposits {
@@ -66,38 +47,6 @@ func ActivateValidatorWithEffectiveBalance(beaconState state.BeaconState, deposi
return beaconState, nil
}
// ProcessDeposits is one of the operations performed on each processed
// beacon block to verify queued validators from the Ethereum 1.0 Deposit Contract
// into the beacon chain.
//
// Spec pseudocode definition:
//
// For each deposit in block.body.deposits:
// process_deposit(state, deposit)
func ProcessDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
// Attempt to verify all deposit signatures at once, if this fails then fall back to processing
// individual deposits with signature verification enabled.
batchVerified, err := BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, err
}
for _, d := range deposits {
if d == nil || d.Data == nil {
return nil, errors.New("got a nil deposit in block")
}
beaconState, _, err = ProcessDeposit(beaconState, d, batchVerified)
if err != nil {
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(d.Data.PublicKey))
}
}
return beaconState, nil
}
// BatchVerifyDepositsSignatures batch verifies deposit signatures.
func BatchVerifyDepositsSignatures(ctx context.Context, deposits []*ethpb.Deposit) (bool, error) {
var err error
@@ -114,102 +63,28 @@ func BatchVerifyDepositsSignatures(ctx context.Context, deposits []*ethpb.Deposi
return verified, nil
}
// ProcessDeposit takes in a deposit object and inserts it
// into the registry as a new validator or balance change.
// Returns the resulting state, a boolean to indicate whether or not the deposit
// resulted in a new validator entry into the beacon state, and any error.
// IsValidDepositSignature returns whether deposit_data is valid
// def is_valid_deposit_signature(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> bool:
//
// Spec pseudocode definition:
// def process_deposit(state: BeaconState, deposit: Deposit) -> None:
//
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
//
// pubkey = deposit.data.pubkey
// amount = deposit.data.amount
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// deposit_message = DepositMessage(
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// if not bls.Verify(pubkey, signing_root, deposit.data.signature):
// return
//
// # Add validator and balance entries
// state.validators.append(get_validator_from_deposit(state, deposit))
// state.balances.append(amount)
// else:
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// increase_balance(state, index, amount)
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, bool, error) {
var newValidator bool
if err := verifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, newValidator, err
}
return nil, newValidator, errors.Wrapf(err, "could not verify deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
// deposit_message = DepositMessage( pubkey=pubkey, withdrawal_credentials=withdrawal_credentials, amount=amount, )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// return bls.Verify(pubkey, signing_root, signature)
func IsValidDepositSignature(data *ethpb.Deposit_Data) (bool, error) {
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
if err != nil {
return false, err
}
if err := beaconState.SetEth1DepositIndex(beaconState.Eth1DepositIndex() + 1); err != nil {
return nil, newValidator, err
if err := verifyDepositDataSigningRoot(data, domain); err != nil {
// Ignore this error as in the spec pseudo code.
log.WithError(err).Debug("Skipping deposit: could not verify deposit data signature")
return false, nil
}
pubKey := deposit.Data.PublicKey
amount := deposit.Data.Amount
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
if verifySignature {
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
if err != nil {
return nil, newValidator, err
}
if err := verifyDepositDataSigningRoot(deposit.Data, domain); err != nil {
// Ignore this error as in the spec pseudo code.
log.WithError(err).Debug("Skipping deposit: could not verify deposit data signature")
return beaconState, newValidator, nil
}
}
effectiveBalance := amount - (amount % params.BeaconConfig().EffectiveBalanceIncrement)
if params.BeaconConfig().MaxEffectiveBalance < effectiveBalance {
effectiveBalance = params.BeaconConfig().MaxEffectiveBalance
}
if err := beaconState.AppendValidator(&ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: deposit.Data.WithdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: effectiveBalance,
}); err != nil {
return nil, newValidator, err
}
newValidator = true
if err := beaconState.AppendBalance(amount); err != nil {
return nil, newValidator, err
}
} else if err := helpers.IncreaseBalance(beaconState, index, amount); err != nil {
return nil, newValidator, err
}
return beaconState, newValidator, nil
return true, nil
}
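
For reference, the deposit-processing code above clamps a new validator's effective balance: the deposit amount is rounded down to a multiple of EffectiveBalanceIncrement and capped at MaxEffectiveBalance. The following minimal sketch uses assumed mainnet Gwei values rather than importing params.BeaconConfig().

// A standalone sketch of the effective-balance clamp shown in the deposit
// processing above.
package main

import "fmt"

const (
	effectiveBalanceIncrement = 1_000_000_000  // 1 ETH in Gwei (assumed mainnet value)
	maxEffectiveBalance       = 32_000_000_000 // 32 ETH in Gwei (assumed mainnet value)
)

func effectiveBalanceFromDeposit(amountGwei uint64) uint64 {
	// Round down to a multiple of the increment, then cap.
	eb := amountGwei - (amountGwei % effectiveBalanceIncrement)
	if eb > maxEffectiveBalance {
		eb = maxEffectiveBalance
	}
	return eb
}

func main() {
	fmt.Println(effectiveBalanceFromDeposit(33_500_000_000)) // 32000000000: rounded to 33 ETH, then capped at 32 ETH
	fmt.Println(effectiveBalanceFromDeposit(17_400_000_000)) // 17000000000: rounded down to 17 ETH
}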
func verifyDeposit(beaconState state.ReadOnlyBeaconState, deposit *ethpb.Deposit) error {
// VerifyDeposit verifies the deposit data and signature given the beacon state and deposit information
func VerifyDeposit(beaconState state.ReadOnlyBeaconState, deposit *ethpb.Deposit) error {
// Verify Merkle proof of deposit and deposit trie root.
if deposit == nil || deposit.Data == nil {
return errors.New("received nil deposit or nil deposit data")

View File

@@ -9,59 +9,42 @@ import (
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/trie"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
// Same validator created 3 valid deposits within the same block
dep, _, err := util.DeterministicDepositsAndKeysSameValidator(3)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(dep))
require.NoError(t, err)
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
// 3 deposits from the same validator
Deposits: []*ethpb.Deposit{dep[0], dep[1], dep[2]},
},
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
require.NoError(t, err, "Expected block deposits to process correctly")
assert.Equal(t, 2, len(newState.Validators()), "Incorrect validator count")
}
func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
func TestBatchVerifyDepositsSignatures_Ok(t *testing.T) {
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, fieldparams.BLSPubkeyLength),
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, fieldparams.BLSSignatureLength),
Signature: make([]byte, 96),
},
}
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
require.NoError(t, err)
ok, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, true, ok)
}
func TestVerifyDeposit_MerkleBranchFailsVerification(t *testing.T) {
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
}
leaf, err := deposit.Data.HashTreeRoot()
@@ -74,13 +57,7 @@ func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Deposits: []*ethpb.Deposit{deposit},
},
}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
beaconState, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
Eth1Data: &ethpb.Eth1Data{
DepositRoot: []byte{0},
BlockHash: []byte{1},
@@ -88,312 +65,31 @@ func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
})
require.NoError(t, err)
want := "deposit root did not verify"
_, err = blocks.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
assert.ErrorContains(t, want, err)
err = blocks.VerifyDeposit(beaconState, deposit)
require.ErrorContains(t, want, err)
}
func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(dep))
require.NoError(t, err)
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Deposits: []*ethpb.Deposit{dep[0]},
},
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
require.NoError(t, err, "Expected block deposits to process correctly")
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
"Expected state validator balances index 0 to equal %d, received %d",
dep[0].Data.Amount,
newState.Balances()[1],
)
}
}
func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T) {
func TestIsValidDepositSignature_Ok(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: 1000,
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
depositData := &ethpb.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: 0,
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, fieldparams.BLSSignatureLength),
}
sr, err := signing.ComputeSigningRoot(deposit.Data, bytesutil.ToBytes(3, 32))
dm := &ethpb.DepositMessage{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: make([]byte, 32),
Amount: 0,
}
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(dm, domain)
require.NoError(t, err)
sig := sk.Sign(sr[:])
deposit.Data.Signature = sig.Marshal()
leaf, err := deposit.Data.HashTreeRoot()
depositData.Signature = sig.Marshal()
valid, err := blocks.IsValidDepositSignature(depositData)
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Deposits: []*ethpb.Deposit{deposit},
},
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1, 2, 3},
},
{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: []byte{1},
},
}
balances := []uint64{0, 50}
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: &ethpb.Eth1Data{
DepositRoot: root[:],
BlockHash: root[:],
},
})
require.NoError(t, err)
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
require.NoError(t, err, "Process deposit failed")
assert.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}
func TestProcessDeposit_AddsNewValidatorDeposit(t *testing.T) {
// Similar to TestProcessDeposits_AddsNewValidatorDeposit except that this test directly calls ProcessDeposit
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(dep))
require.NoError(t, err)
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, isNewValidator, err := blocks.ProcessDeposit(beaconState, dep[0], true)
require.NoError(t, err, "Process deposit failed")
assert.Equal(t, true, isNewValidator, "Expected isNewValidator to be true")
assert.Equal(t, 2, len(newState.Validators()), "Expected validator list to have length 2")
assert.Equal(t, 2, len(newState.Balances()), "Expected validator balances list to have length 2")
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
"Expected state validator balances index 1 to equal %d, received %d",
dep[0].Data.Amount,
newState.Balances()[1],
)
}
}
func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
// Same test settings as in TestProcessDeposit_AddsNewValidatorDeposit, except that we use an invalid signature
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, isNewValidator, err := blocks.ProcessDeposit(beaconState, dep[0], true)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
assert.Equal(t, false, isNewValidator, "Expected isNewValidator to be false")
if newState.Eth1DepositIndex() != 1 {
t.Errorf(
"Expected Eth1DepositIndex to be increased by 1 after processing an invalid deposit, received change: %v",
newState.Eth1DepositIndex(),
)
}
if len(newState.Validators()) != 1 {
t.Errorf("Expected validator list to have length 1, received: %v", len(newState.Validators()))
}
if len(newState.Balances()) != 1 {
t.Errorf("Expected validator balances list to have length 1, received: %v", len(newState.Balances()))
}
if newState.Balances()[0] != 0 {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
func TestPreGenesisDeposits_SkipInvalidDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(100)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
for i := range dep {
proof, err := dt.MerkleProof(i)
require.NoError(t, err)
dep[i].Proof = proof
}
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, err := blocks.ProcessPreGenesisDeposits(context.Background(), beaconState, dep)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
_, ok := newState.ValidatorIndexByPubkey(bytesutil.ToBytes48(dep[0].Data.PublicKey))
require.Equal(t, false, ok, "bad pubkey should not exist in state")
for i := 1; i < newState.NumValidators(); i++ {
val, err := newState.ValidatorAtIndex(primitives.ValidatorIndex(i))
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalance, val.EffectiveBalance, "unequal effective balance")
require.Equal(t, primitives.Epoch(0), val.ActivationEpoch)
require.Equal(t, primitives.Epoch(0), val.ActivationEligibilityEpoch)
}
if newState.Eth1DepositIndex() != 100 {
t.Errorf(
"Expected Eth1DepositIndex to be increased by 99 after processing an invalid deposit, received change: %v",
newState.Eth1DepositIndex(),
)
}
if len(newState.Validators()) != 100 {
t.Errorf("Expected validator list to have length 100, received: %v", len(newState.Validators()))
}
if len(newState.Balances()) != 100 {
t.Errorf("Expected validator balances list to have length 100, received: %v", len(newState.Balances()))
}
if newState.Balances()[0] != 0 {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
func TestProcessDeposit_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: 1000,
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
}
sr, err := signing.ComputeSigningRoot(deposit.Data, bytesutil.ToBytes(3, 32))
require.NoError(t, err)
sig := sk.Sign(sr[:])
deposit.Data.Signature = sig.Marshal()
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
registry := []*ethpb.Validator{
{
PublicKey: []byte{1, 2, 3},
},
{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: []byte{1},
},
}
balances := []uint64{0, 50}
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: &ethpb.Eth1Data{
DepositRoot: root[:],
BlockHash: root[:],
},
})
require.NoError(t, err)
newState, isNewValidator, err := blocks.ProcessDeposit(beaconState, deposit, true /*verifySignature*/)
require.NoError(t, err, "Process deposit failed")
assert.Equal(t, false, isNewValidator, "Expected isNewValidator to be false")
assert.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
require.Equal(t, true, valid)
}

View File

@@ -204,12 +204,7 @@ func createAttestationSignatureBatch(
return nil, err
}
indices := ia.GetAttestingIndices()
pubkeys := make([][]byte, len(indices))
for i := 0; i < len(indices); i++ {
pubkeyAtIdx := beaconState.PubkeyAtIndex(primitives.ValidatorIndex(indices[i]))
pubkeys[i] = pubkeyAtIdx[:]
}
aggP, err := bls.AggregatePublicKeys(pubkeys)
aggP, err := beaconState.AggregateKeyFromIndices(indices)
if err != nil {
return nil, err
}
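
The change above collapses the manual pubkey gathering and bls.AggregatePublicKeys call into a single state method. A rough sketch of what such a helper does is below; the pubkeyLookup callback is a hypothetical stand-in for the state's pubkey accessor, and any caching the real method performs is not shown.

// Sketch only: resolve each validator index to its BLS public key and
// aggregate the set, as the removed loop did inline.
package example

import (
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v5/crypto/bls"
)

func aggregateKeyFromIndices(
	pubkeyLookup func(primitives.ValidatorIndex) [48]byte, // hypothetical stand-in for the state's pubkey accessor
	indices []uint64,
) (bls.PublicKey, error) {
	pubkeys := make([][]byte, len(indices))
	for i, idx := range indices {
		pk := pubkeyLookup(primitives.ValidatorIndex(idx))
		pubkeys[i] = pk[:]
	}
	return bls.AggregatePublicKeys(pubkeys)
}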

View File

@@ -10,6 +10,7 @@ go_library(
"effective_balance_updates.go",
"registry_updates.go",
"transition.go",
"transition_no_verify_sig.go",
"upgrade.go",
"validator.go",
"withdrawals.go",
@@ -18,6 +19,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
@@ -27,8 +29,9 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//contracts/deposit:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
@@ -46,8 +49,11 @@ go_test(
srcs = [
"churn_test.go",
"consolidations_test.go",
"deposit_fuzz_test.go",
"deposits_test.go",
"effective_balance_updates_test.go",
"registry_updates_test.go",
"transition_test.go",
"upgrade_test.go",
"validator_test.go",
"withdrawals_test.go",
@@ -61,18 +67,19 @@ go_test(
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls/blst:go_default_library",
"//crypto/bls/common:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/interop:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],

View File

@@ -2,6 +2,7 @@ package electra_test
import (
"context"
"fmt"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
@@ -18,10 +19,17 @@ func createValidatorsWithTotalActiveBalance(totalBal primitives.Gwei) []*eth.Val
num := totalBal / primitives.Gwei(params.BeaconConfig().MinActivationBalance)
vals := make([]*eth.Validator, num)
for i := range vals {
wd := make([]byte, 32)
wd[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wd[31] = byte(i)
vals[i] = &eth.Validator{
ActivationEpoch: primitives.Epoch(0),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MinActivationBalance,
ActivationEpoch: primitives.Epoch(0),
EffectiveBalance: params.BeaconConfig().MinActivationBalance,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
PublicKey: []byte(fmt.Sprintf("val_%d", i)),
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawalCredentials: wd,
}
}
if totalBal%primitives.Gwei(params.BeaconConfig().MinActivationBalance) != 0 {

View File

@@ -1,16 +1,18 @@
package electra
import (
"bytes"
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
@@ -40,7 +42,7 @@ import (
//
// state.pending_consolidations = state.pending_consolidations[next_pending_consolidation:]
func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "electra.ProcessPendingConsolidations")
_, span := trace.StartSpan(ctx, "electra.ProcessPendingConsolidations")
defer span.End()
if st == nil || st.IsNil() {
@@ -68,7 +70,7 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
break
}
if err := SwitchToCompoundingValidator(ctx, st, pc.TargetIndex); err != nil {
if err := SwitchToCompoundingValidator(st, pc.TargetIndex); err != nil {
return err
}
@@ -92,165 +94,159 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
return nil
}
// ProcessConsolidations implements the spec definition below. This method makes mutating calls to
// the beacon state.
// ProcessConsolidationRequests implements the spec definition below. This method makes mutating
// calls to the beacon state.
//
// Spec definition:
// def process_consolidation_request(
// state: BeaconState,
// consolidation_request: ConsolidationRequest
// ) -> None:
// # If the pending consolidations queue is full, consolidation requests are ignored
// if len(state.pending_consolidations) == PENDING_CONSOLIDATIONS_LIMIT:
// return
// # If there is too little available consolidation churn limit, consolidation requests are ignored
// if get_consolidation_churn_limit(state) <= MIN_ACTIVATION_BALANCE:
// return
//
// validator_pubkeys = [v.pubkey for v in state.validators]
// # Verify pubkeys exists
// request_source_pubkey = consolidation_request.source_pubkey
// request_target_pubkey = consolidation_request.target_pubkey
// if request_source_pubkey not in validator_pubkeys:
// return
// if request_target_pubkey not in validator_pubkeys:
// return
// source_index = ValidatorIndex(validator_pubkeys.index(request_source_pubkey))
// target_index = ValidatorIndex(validator_pubkeys.index(request_target_pubkey))
// source_validator = state.validators[source_index]
// target_validator = state.validators[target_index]
//
// def process_consolidation(state: BeaconState, signed_consolidation: SignedConsolidation) -> None:
// # If the pending consolidations queue is full, no consolidations are allowed in the block
// assert len(state.pending_consolidations) < PENDING_CONSOLIDATIONS_LIMIT
// # If there is too little available consolidation churn limit, no consolidations are allowed in the block
// assert get_consolidation_churn_limit(state) > MIN_ACTIVATION_BALANCE
// consolidation = signed_consolidation.message
// # Verify that source != target, so a consolidation cannot be used as an exit.
// assert consolidation.source_index != consolidation.target_index
// if source_index == target_index:
// return
//
// # Verify source withdrawal credentials
// has_correct_credential = has_execution_withdrawal_credential(source_validator)
// is_correct_source_address = (
// source_validator.withdrawal_credentials[12:] == consolidation_request.source_address
// )
// if not (has_correct_credential and is_correct_source_address):
// return
//
// # Verify that target has execution withdrawal credentials
// if not has_execution_withdrawal_credential(target_validator):
// return
//
// source_validator = state.validators[consolidation.source_index]
// target_validator = state.validators[consolidation.target_index]
// # Verify the source and the target are active
// current_epoch = get_current_epoch(state)
// assert is_active_validator(source_validator, current_epoch)
// assert is_active_validator(target_validator, current_epoch)
// if not is_active_validator(source_validator, current_epoch):
// return
// if not is_active_validator(target_validator, current_epoch):
// return
// # Verify exits for source and target have not been initiated
// assert source_validator.exit_epoch == FAR_FUTURE_EPOCH
// assert target_validator.exit_epoch == FAR_FUTURE_EPOCH
// # Consolidations must specify an epoch when they become valid; they are not valid before then
// assert current_epoch >= consolidation.epoch
//
// # Verify the source and the target have Execution layer withdrawal credentials
// assert has_execution_withdrawal_credential(source_validator)
// assert has_execution_withdrawal_credential(target_validator)
// # Verify the same withdrawal address
// assert source_validator.withdrawal_credentials[12:] == target_validator.withdrawal_credentials[12:]
//
// # Verify consolidation is signed by the source and the target
// domain = compute_domain(DOMAIN_CONSOLIDATION, genesis_validators_root=state.genesis_validators_root)
// signing_root = compute_signing_root(consolidation, domain)
// pubkeys = [source_validator.pubkey, target_validator.pubkey]
// assert bls.FastAggregateVerify(pubkeys, signing_root, signed_consolidation.signature)
// if source_validator.exit_epoch != FAR_FUTURE_EPOCH:
// return
// if target_validator.exit_epoch != FAR_FUTURE_EPOCH:
// return
//
// # Initiate source validator exit and append pending consolidation
// source_validator.exit_epoch = compute_consolidation_epoch_and_update_churn(
// state, source_validator.effective_balance)
// state, source_validator.effective_balance
// )
// source_validator.withdrawable_epoch = Epoch(
// source_validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY
// )
// state.pending_consolidations.append(PendingConsolidation(
// source_index=consolidation.source_index,
// target_index=consolidation.target_index
// source_index=source_index,
// target_index=target_index
// ))
func ProcessConsolidations(ctx context.Context, st state.BeaconState, cs []*ethpb.SignedConsolidation) error {
_, span := trace.StartSpan(ctx, "electra.ProcessConsolidations")
defer span.End()
if st == nil || st.IsNil() {
return errors.New("nil state")
func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, reqs []*enginev1.ConsolidationRequest) error {
if len(reqs) == 0 || st == nil {
return nil
}
if len(cs) == 0 {
return nil // Nothing to process.
}
domain, err := signing.ComputeDomain(
params.BeaconConfig().DomainConsolidation,
nil, // Use genesis fork version
st.GenesisValidatorsRoot(),
)
activeBal, err := helpers.TotalActiveBalance(st)
if err != nil {
return err
}
totalBalance, err := helpers.TotalActiveBalance(st)
if err != nil {
return err
churnLimit := helpers.ConsolidationChurnLimit(primitives.Gwei(activeBal))
if churnLimit <= primitives.Gwei(params.BeaconConfig().MinActivationBalance) {
return nil
}
curEpoch := slots.ToEpoch(st.Slot())
ffe := params.BeaconConfig().FarFutureEpoch
minValWithdrawDelay := params.BeaconConfig().MinValidatorWithdrawabilityDelay
pcLimit := params.BeaconConfig().PendingConsolidationsLimit
if helpers.ConsolidationChurnLimit(primitives.Gwei(totalBalance)) <= primitives.Gwei(params.BeaconConfig().MinActivationBalance) {
return errors.New("too little available consolidation churn limit")
}
currentEpoch := slots.ToEpoch(st.Slot())
for _, c := range cs {
if c == nil || c.Message == nil {
return errors.New("nil consolidation")
for _, cr := range reqs {
if ctx.Err() != nil {
return fmt.Errorf("cannot process consolidation requests: %w", ctx.Err())
}
if npc, err := st.NumPendingConsolidations(); err != nil {
return fmt.Errorf("failed to fetch number of pending consolidations: %w", err) // This should never happen.
} else if npc >= pcLimit {
return nil
}
if n, err := st.NumPendingConsolidations(); err != nil {
return err
} else if n >= params.BeaconConfig().PendingConsolidationsLimit {
return errors.New("pending consolidations queue is full")
srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.SourcePubkey))
if !ok {
continue
}
tgtIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.TargetPubkey))
if !ok {
continue
}
if c.Message.SourceIndex == c.Message.TargetIndex {
return errors.New("source and target index are the same")
if srcIdx == tgtIdx {
continue
}
source, err := st.ValidatorAtIndex(c.Message.SourceIndex)
srcV, err := st.ValidatorAtIndex(srcIdx)
if err != nil {
return err
}
target, err := st.ValidatorAtIndex(c.Message.TargetIndex)
if err != nil {
return err
}
if !helpers.IsActiveValidator(source, currentEpoch) {
return errors.New("source is not active")
}
if !helpers.IsActiveValidator(target, currentEpoch) {
return errors.New("target is not active")
}
if source.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return errors.New("source exit epoch has been initiated")
}
if target.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return errors.New("target exit epoch has been initiated")
}
if currentEpoch < c.Message.Epoch {
return errors.New("consolidation is not valid yet")
return fmt.Errorf("failed to fetch source validator: %w", err) // This should never happen.
}
if !helpers.HasExecutionWithdrawalCredentials(source) {
return errors.New("source does not have execution withdrawal credentials")
}
if !helpers.HasExecutionWithdrawalCredentials(target) {
return errors.New("target does not have execution withdrawal credentials")
}
if !helpers.IsSameWithdrawalCredentials(source, target) {
return errors.New("source and target have different withdrawal credentials")
tgtV, err := st.ValidatorAtIndexReadOnly(tgtIdx)
if err != nil {
return fmt.Errorf("failed to fetch target validator: %w", err) // This should never happen.
}
sr, err := signing.ComputeSigningRoot(c.Message, domain)
if err != nil {
return err
// Verify source withdrawal credentials
if !helpers.HasExecutionWithdrawalCredentials(srcV) {
continue
}
sourcePk, err := bls.PublicKeyFromBytes(source.PublicKey)
if err != nil {
return errors.Wrap(err, "could not convert source public key bytes to bls public key")
}
targetPk, err := bls.PublicKeyFromBytes(target.PublicKey)
if err != nil {
return errors.Wrap(err, "could not convert target public key bytes to bls public key")
}
sig, err := bls.SignatureFromBytes(c.Signature)
if err != nil {
return errors.Wrap(err, "could not convert bytes to signature")
}
if !sig.FastAggregateVerify([]bls.PublicKey{sourcePk, targetPk}, sr) {
return errors.New("consolidation signature verification failed")
// Confirm source_validator.withdrawal_credentials[12:] == consolidation_request.source_address
if len(srcV.WithdrawalCredentials) != 32 || len(cr.SourceAddress) != 20 || !bytes.HasSuffix(srcV.WithdrawalCredentials, cr.SourceAddress) {
continue
}
sEE, err := ComputeConsolidationEpochAndUpdateChurn(ctx, st, primitives.Gwei(source.EffectiveBalance))
// Target validator must have their withdrawal credentials set appropriately.
if !helpers.HasExecutionWithdrawalCredentials(tgtV) {
continue
}
// Both validators must be active.
if !helpers.IsActiveValidator(srcV, curEpoch) || !helpers.IsActiveValidatorUsingTrie(tgtV, curEpoch) {
continue
}
// Neither validator is exiting.
if srcV.ExitEpoch != ffe || tgtV.ExitEpoch() != ffe {
continue
}
// Initiate the exit of the source validator.
exitEpoch, err := ComputeConsolidationEpochAndUpdateChurn(ctx, st, primitives.Gwei(srcV.EffectiveBalance))
if err != nil {
return err
return fmt.Errorf("failed to compute consolidaiton epoch: %w", err)
}
source.ExitEpoch = sEE
source.WithdrawableEpoch = sEE + params.BeaconConfig().MinValidatorWithdrawabilityDelay
if err := st.UpdateValidatorAtIndex(c.Message.SourceIndex, source); err != nil {
return err
srcV.ExitEpoch = exitEpoch
srcV.WithdrawableEpoch = exitEpoch + minValWithdrawDelay
if err := st.UpdateValidatorAtIndex(srcIdx, srcV); err != nil {
return fmt.Errorf("failed to update validator: %w", err) // This should never happen.
}
if err := st.AppendPendingConsolidation(c.Message.ToPendingConsolidation()); err != nil {
return err
if err := st.AppendPendingConsolidation(&eth.PendingConsolidation{SourceIndex: srcIdx, TargetIndex: tgtIdx}); err != nil {
return fmt.Errorf("failed to append pending consolidation: %w", err) // This should never happen.
}
}
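
To make the skip conditions above concrete, here is a hedged, self-contained sketch of the source-credential match; the prefix constants are the spec's execution-layer withdrawal prefixes, and the Prysm helpers themselves are not reproduced.

// A request is honored only if the source validator has an execution-layer
// withdrawal prefix (0x01 or 0x02) and the last 20 bytes of its withdrawal
// credentials equal the request's 20-byte source address.
package main

import (
	"bytes"
	"fmt"
)

const (
	eth1AddressWithdrawalPrefix = 0x01
	compoundingWithdrawalPrefix = 0x02
)

func sourceMatchesRequest(withdrawalCredentials, sourceAddress []byte) bool {
	if len(withdrawalCredentials) != 32 || len(sourceAddress) != 20 {
		return false
	}
	hasExecutionCredential := withdrawalCredentials[0] == eth1AddressWithdrawalPrefix ||
		withdrawalCredentials[0] == compoundingWithdrawalPrefix
	// Equivalent to the spec's withdrawal_credentials[12:] == source_address.
	return hasExecutionCredential && bytes.HasSuffix(withdrawalCredentials, sourceAddress)
}

func main() {
	wc := make([]byte, 32)
	wc[0] = eth1AddressWithdrawalPrefix
	addr := make([]byte, 20)
	addr[19] = 0xAA
	copy(wc[12:], addr)
	fmt.Println(sourceMatchesRequest(wc, addr))             // true
	fmt.Println(sourceMatchesRequest(wc, make([]byte, 20))) // false: address mismatch
}

Note that an invalid request is skipped with continue rather than failing the whole block, which is why the test below deliberately places the one valid request last.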

View File

@@ -5,19 +5,14 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/blst"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/common"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/interop"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessPendingConsolidations(t *testing.T) {
@@ -238,203 +233,197 @@ func stateWithActiveBalanceETH(t *testing.T, balETH uint64) state.BeaconState {
return st
}
func TestProcessConsolidations(t *testing.T) {
secretKeys, publicKeys, err := interop.DeterministicallyGenerateKeys(0, 2)
require.NoError(t, err)
genesisValidatorRoot := bytesutil.PadTo([]byte("genesisValidatorRoot"), fieldparams.RootLength)
_ = secretKeys
func TestProcessConsolidationRequests(t *testing.T) {
tests := []struct {
name string
state state.BeaconState
scs []*eth.SignedConsolidation
check func(*testing.T, state.BeaconState)
wantErr string
name string
state state.BeaconState
reqs []*enginev1.ConsolidationRequest
validate func(*testing.T, state.BeaconState)
}{
{
name: "nil state",
scs: make([]*eth.SignedConsolidation, 10),
wantErr: "nil state",
},
{
name: "nil consolidation in slice",
state: stateWithActiveBalanceETH(t, 19_000_000),
scs: []*eth.SignedConsolidation{nil, nil},
wantErr: "nil consolidation",
},
{
name: "state is 100% full of pending consolidations",
name: "one valid request",
state: func() state.BeaconState {
st := stateWithActiveBalanceETH(t, 19_000_000)
pc := make([]*eth.PendingConsolidation, params.BeaconConfig().PendingConsolidationsLimit)
require.NoError(t, st.SetPendingConsolidations(pc))
return st
}(),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{}}},
wantErr: "pending consolidations queue is full",
},
{
name: "state has too little consolidation churn limit available to process a consolidation",
state: func() state.BeaconState {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
return st
}(),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{}}},
wantErr: "too little available consolidation churn limit",
},
{
name: "consolidation with source and target as the same index is rejected",
state: stateWithActiveBalanceETH(t, 19_000_000),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{SourceIndex: 100, TargetIndex: 100}}},
wantErr: "source and target index are the same",
},
{
name: "consolidation with inactive source is rejected",
state: func() state.BeaconState {
st := stateWithActiveBalanceETH(t, 19_000_000)
val, err := st.ValidatorAtIndex(25)
require.NoError(t, err)
val.ActivationEpoch = params.BeaconConfig().FarFutureEpoch
require.NoError(t, st.UpdateValidatorAtIndex(25, val))
return st
}(),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{SourceIndex: 25, TargetIndex: 100}}},
wantErr: "source is not active",
},
{
name: "consolidation with inactive target is rejected",
state: func() state.BeaconState {
st := stateWithActiveBalanceETH(t, 19_000_000)
val, err := st.ValidatorAtIndex(25)
require.NoError(t, err)
val.ActivationEpoch = params.BeaconConfig().FarFutureEpoch
require.NoError(t, st.UpdateValidatorAtIndex(25, val))
return st
}(),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{SourceIndex: 100, TargetIndex: 25}}},
wantErr: "target is not active",
},
{
name: "consolidation with exiting source is rejected",
state: func() state.BeaconState {
st := stateWithActiveBalanceETH(t, 19_000_000)
val, err := st.ValidatorAtIndex(25)
require.NoError(t, err)
val.ExitEpoch = 256
require.NoError(t, st.UpdateValidatorAtIndex(25, val))
return st
}(),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{SourceIndex: 25, TargetIndex: 100}}},
wantErr: "source exit epoch has been initiated",
},
{
name: "consolidation with exiting target is rejected",
state: func() state.BeaconState {
st := stateWithActiveBalanceETH(t, 19_000_000)
val, err := st.ValidatorAtIndex(25)
require.NoError(t, err)
val.ExitEpoch = 256
require.NoError(t, st.UpdateValidatorAtIndex(25, val))
return st
}(),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{SourceIndex: 100, TargetIndex: 25}}},
wantErr: "target exit epoch has been initiated",
},
{
name: "consolidation with future epoch is rejected",
state: stateWithActiveBalanceETH(t, 19_000_000),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{SourceIndex: 100, TargetIndex: 25, Epoch: 55}}},
wantErr: "consolidation is not valid yet",
},
{
name: "source validator without withdrawal credentials is rejected",
state: func() state.BeaconState {
st := stateWithActiveBalanceETH(t, 19_000_000)
val, err := st.ValidatorAtIndex(25)
require.NoError(t, err)
val.WithdrawalCredentials = []byte{}
require.NoError(t, st.UpdateValidatorAtIndex(25, val))
return st
}(),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{SourceIndex: 25, TargetIndex: 100}}},
wantErr: "source does not have execution withdrawal credentials",
},
{
name: "target validator without withdrawal credentials is rejected",
state: func() state.BeaconState {
st := stateWithActiveBalanceETH(t, 19_000_000)
val, err := st.ValidatorAtIndex(25)
require.NoError(t, err)
val.WithdrawalCredentials = []byte{}
require.NoError(t, st.UpdateValidatorAtIndex(25, val))
return st
}(),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{SourceIndex: 100, TargetIndex: 25}}},
wantErr: "target does not have execution withdrawal credentials",
},
{
name: "source and target with different withdrawal credentials is rejected",
state: stateWithActiveBalanceETH(t, 19_000_000),
scs: []*eth.SignedConsolidation{{Message: &eth.Consolidation{SourceIndex: 100, TargetIndex: 25}}},
wantErr: "source and target have different withdrawal credentials",
},
{
name: "consolidation with valid signatures is OK",
state: func() state.BeaconState {
st := stateWithActiveBalanceETH(t, 19_000_000)
require.NoError(t, st.SetGenesisValidatorsRoot(genesisValidatorRoot))
source, err := st.ValidatorAtIndex(100)
require.NoError(t, err)
target, err := st.ValidatorAtIndex(25)
require.NoError(t, err)
source.PublicKey = publicKeys[0].Marshal()
source.WithdrawalCredentials = target.WithdrawalCredentials
require.NoError(t, st.UpdateValidatorAtIndex(100, source))
target.PublicKey = publicKeys[1].Marshal()
require.NoError(t, st.UpdateValidatorAtIndex(25, target))
return st
}(),
scs: func() []*eth.SignedConsolidation {
sc := &eth.SignedConsolidation{Message: &eth.Consolidation{SourceIndex: 100, TargetIndex: 25, Epoch: 8}}
domain, err := signing.ComputeDomain(
params.BeaconConfig().DomainConsolidation,
nil,
genesisValidatorRoot,
)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(sc.Message, domain)
require.NoError(t, err)
sig0 := secretKeys[0].Sign(sr[:])
sig1 := secretKeys[1].Sign(sr[:])
sc.Signature = blst.AggregateSignatures([]common.Signature{sig0, sig1}).Marshal()
return []*eth.SignedConsolidation{sc}
}(),
check: func(t *testing.T, st state.BeaconState) {
source, err := st.ValidatorAtIndex(100)
require.NoError(t, err)
// The consolidated validator is exiting.
require.Equal(t, primitives.Epoch(15), source.ExitEpoch) // 15 = state.Epoch(10) + MIN_SEED_LOOKAHEAD(4) + 1
require.Equal(t, primitives.Epoch(15+params.BeaconConfig().MinValidatorWithdrawabilityDelay), source.WithdrawableEpoch)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := electra.ProcessConsolidations(context.TODO(), tt.state, tt.scs)
if len(tt.wantErr) > 0 {
require.ErrorContains(t, tt.wantErr, err)
} else {
require.NoError(t, err)
}
if tt.check != nil {
tt.check(t, tt.state)
}
})
}
}
func TestProcessConsolidationRequests(t *testing.T) {
tests := []struct {
name string
state state.BeaconState
reqs []*enginev1.ConsolidationRequest
validate func(*testing.T, state.BeaconState)
}{
{
name: "one valid request among invalid requests",
state: func() state.BeaconState {
st := &eth.BeaconStateElectra{
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
}
// Validator scenario setup. See comments in reqs section.
st.Validators[3].WithdrawalCredentials = bytesutil.Bytes32(0)
st.Validators[8].WithdrawalCredentials = bytesutil.Bytes32(0)
st.Validators[9].ActivationEpoch = params.BeaconConfig().FarFutureEpoch
st.Validators[12].ActivationEpoch = params.BeaconConfig().FarFutureEpoch
st.Validators[13].ExitEpoch = 10
st.Validators[16].ExitEpoch = 10
s, err := state_native.InitializeFromProtoElectra(st)
require.NoError(t, err)
return s
}(),
reqs: []*enginev1.ConsolidationRequest{
// Source doesn't have withdrawal credentials.
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(1)),
SourcePubkey: []byte("val_3"),
TargetPubkey: []byte("val_4"),
},
// Source withdrawal credentials don't match the consolidation address.
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(0)), // Should be 5
SourcePubkey: []byte("val_5"),
TargetPubkey: []byte("val_6"),
},
// Target does not have their withdrawal credentials set appropriately.
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(7)),
SourcePubkey: []byte("val_7"),
TargetPubkey: []byte("val_8"),
},
// Source is inactive.
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(9)),
SourcePubkey: []byte("val_9"),
TargetPubkey: []byte("val_10"),
},
// Target is inactive.
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(11)),
SourcePubkey: []byte("val_11"),
TargetPubkey: []byte("val_12"),
},
// Source is exiting.
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(13)),
SourcePubkey: []byte("val_13"),
TargetPubkey: []byte("val_14"),
},
// Target is exiting.
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(15)),
SourcePubkey: []byte("val_15"),
TargetPubkey: []byte("val_16"),
},
// Source doesn't exist
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(0)),
SourcePubkey: []byte("INVALID"),
TargetPubkey: []byte("val_0"),
},
// Target doesn't exist
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(0)),
SourcePubkey: []byte("val_0"),
TargetPubkey: []byte("INVALID"),
},
// Source == target
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(0)),
SourcePubkey: []byte("val_0"),
TargetPubkey: []byte("val_0"),
},
// Valid consolidation request. This should be last to ensure invalid requests do
// not end the processing early.
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(1)),
SourcePubkey: []byte("val_1"),
TargetPubkey: []byte("val_2"),
},
},
validate: func(t *testing.T, st state.BeaconState) {
// Verify a pending consolidation is created.
numPC, err := st.NumPendingConsolidations()
require.NoError(t, err)
require.Equal(t, uint64(1), numPC)
pcs, err := st.PendingConsolidations()
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(1), pcs[0].SourceIndex)
require.Equal(t, primitives.ValidatorIndex(2), pcs[0].TargetIndex)
// Verify the source validator is exiting.
src, err := st.ValidatorAtIndex(1)
require.NoError(t, err)
require.NotEqual(t, params.BeaconConfig().FarFutureEpoch, src.ExitEpoch, "source validator exit epoch not updated")
require.Equal(t, params.BeaconConfig().MinValidatorWithdrawabilityDelay, src.WithdrawableEpoch-src.ExitEpoch, "source validator withdrawable epoch not set correctly")
},
},
{
name: "pending consolidations limit reached",
state: func() state.BeaconState {
st := &eth.BeaconStateElectra{
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
PendingConsolidations: make([]*eth.PendingConsolidation, params.BeaconConfig().PendingConsolidationsLimit),
}
s, err := state_native.InitializeFromProtoElectra(st)
require.NoError(t, err)
return s
}(),
reqs: []*enginev1.ConsolidationRequest{
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(1)),
SourcePubkey: []byte("val_1"),
TargetPubkey: []byte("val_2"),
},
},
validate: func(t *testing.T, st state.BeaconState) {
// Verify no pending consolidation is created.
numPC, err := st.NumPendingConsolidations()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().PendingConsolidationsLimit, numPC)
// Verify the source validator is not exiting.
src, err := st.ValidatorAtIndex(1)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().FarFutureEpoch, src.ExitEpoch, "source validator exit epoch should not be updated")
require.Equal(t, params.BeaconConfig().FarFutureEpoch, src.WithdrawableEpoch, "source validator withdrawable epoch should not be updated")
},
},
{
name: "pending consolidations limit reached during processing",
state: func() state.BeaconState {
st := &eth.BeaconStateElectra{
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
PendingConsolidations: make([]*eth.PendingConsolidation, params.BeaconConfig().PendingConsolidationsLimit-1),
}
s, err := state_native.InitializeFromProtoElectra(st)
require.NoError(t, err)
return s
}(),
reqs: []*enginev1.ConsolidationRequest{
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(1)),
SourcePubkey: []byte("val_1"),
TargetPubkey: []byte("val_2"),
},
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(3)),
SourcePubkey: []byte("val_3"),
TargetPubkey: []byte("val_4"),
},
},
validate: func(t *testing.T, st state.BeaconState) {
// Verify a pending consolidation is created.
numPC, err := st.NumPendingConsolidations()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().PendingConsolidationsLimit, numPC)
// The first consolidation was appended.
pcs, err := st.PendingConsolidations()
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(1), pcs[params.BeaconConfig().PendingConsolidationsLimit-1].SourceIndex)
require.Equal(t, primitives.ValidatorIndex(2), pcs[params.BeaconConfig().PendingConsolidationsLimit-1].TargetIndex)
// Verify the second source validator is not exiting.
src, err := st.ValidatorAtIndex(3)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().FarFutureEpoch, src.ExitEpoch, "source validator exit epoch should not be updated")
require.Equal(t, params.BeaconConfig().FarFutureEpoch, src.WithdrawableEpoch, "source validator withdrawable epoch should not be updated")
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := electra.ProcessConsolidationRequests(context.TODO(), tt.state, tt.reqs)
require.NoError(t, err)
if tt.validate != nil {
tt.validate(t, tt.state)
}
})
}
}
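The cases above lean on two package-level helpers defined outside this hunk, stateWithActiveBalanceETH and createValidatorsWithTotalActiveBalance. A minimal sketch of what the latter has to provide is below; the "val_N" pubkey scheme and the toy address byte are assumptions made so the requests above resolve, not the repository's exact helper, and fmt is assumed to be imported.
func createValidatorsWithTotalActiveBalance(totalBalance uint64) []*eth.Validator {
num := totalBalance / params.BeaconConfig().MinActivationBalance
vals := make([]*eth.Validator, num)
for i := range vals {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i) // toy execution address byte so source addresses and credentials can be matched
vals[i] = &eth.Validator{
PublicKey: []byte(fmt.Sprintf("val_%d", i)), // matches the "val_N" pubkeys in the requests
WithdrawalCredentials: wc,
EffectiveBalance: params.BeaconConfig().MinActivationBalance,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
return vals
}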

View File

@@ -0,0 +1,48 @@
package electra_test
import (
"context"
"testing"
fuzz "github.com/google/gofuzz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestFuzzProcessDeposits_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateElectra{}
deposits := make([]*ethpb.Deposit, 100)
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
for i := range deposits {
fuzzer.Fuzz(deposits[i])
}
s, err := state_native.InitializeFromProtoUnsafeElectra(state)
require.NoError(t, err)
r, err := electra.ProcessDeposits(ctx, s, deposits)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and deposits: %v", r, err, state, deposits)
}
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateElectra{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafeElectra(state)
require.NoError(t, err)
r, err := electra.ProcessDeposit(s, deposit, true)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and deposit: %v", r, err, state, deposit)
}
}
}
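Both fuzzers are deterministic (NewWithSeed(0)) and run as ordinary Go tests, so assuming a standard checkout they can be exercised locally with go test ./beacon-chain/core/electra/ -run TestFuzzProcessDeposit -count=1, which matches both the batch and single-deposit variants.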

View File

@@ -2,15 +2,186 @@ package electra
import (
"context"
"errors"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/contracts/deposit"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// ProcessDeposits is one of the operations performed on each processed
// beacon block to verify queued validators from the Ethereum 1.0 Deposit Contract
// into the beacon chain.
//
// Spec pseudocode definition:
//
// For each deposit in block.body.deposits:
// process_deposit(state, deposit)
func ProcessDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "electra.ProcessDeposits")
defer span.End()
// Attempt to verify all deposit signatures at once, if this fails then fall back to processing
// individual deposits with signature verification enabled.
batchVerified, err := blocks.BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not verify deposit signatures in batch")
}
for _, d := range deposits {
if d == nil || d.Data == nil {
return nil, errors.New("got a nil deposit in block")
}
beaconState, err = ProcessDeposit(beaconState, d, batchVerified)
if err != nil {
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(d.Data.PublicKey))
}
}
return beaconState, nil
}
// ProcessDeposit takes in a deposit object and inserts it
// into the registry as a new validator or balance change.
// Returns the resulting state and any error encountered while processing the deposit.
//
// Spec pseudocode definition:
// def process_deposit(state: BeaconState, deposit: Deposit) -> None:
//
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
//
// apply_deposit(
// state=state,
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// signature=deposit.data.signature,
// )
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, error) {
if err := blocks.VerifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, err
}
return nil, errors.Wrapf(err, "could not verify deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
if err := beaconState.SetEth1DepositIndex(beaconState.Eth1DepositIndex() + 1); err != nil {
return nil, err
}
return ApplyDeposit(beaconState, deposit.Data, verifySignature)
}
// ApplyDeposit
// def apply_deposit(state: BeaconState, pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> None:
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
//
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// if is_valid_deposit_signature(pubkey, withdrawal_credentials, amount, signature):
// add_validator_to_registry(state, pubkey, withdrawal_credentials, amount)
//
// else:
//
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// state.pending_balance_deposits.append(PendingBalanceDeposit(index=index, amount=amount)) # [Modified in Electra:EIP-7251]
// # Check if valid deposit switch to compounding credentials
//
// if ( is_compounding_withdrawal_credential(withdrawal_credentials) and has_eth1_withdrawal_credential(state.validators[index])
//
// and is_valid_deposit_signature(pubkey, withdrawal_credentials, amount, signature)
// ):
// switch_to_compounding_validator(state, index)
func ApplyDeposit(beaconState state.BeaconState, data *ethpb.Deposit_Data, verifySignature bool) (state.BeaconState, error) {
pubKey := data.PublicKey
amount := data.Amount
withdrawalCredentials := data.WithdrawalCredentials
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
if verifySignature {
valid, err := IsValidDepositSignature(data)
if err != nil {
return nil, errors.Wrap(err, "could not verify deposit signature")
}
if !valid {
return beaconState, nil
}
}
if err := AddValidatorToRegistry(beaconState, pubKey, withdrawalCredentials, amount); err != nil {
return nil, errors.Wrap(err, "could not add validator to registry")
}
} else {
// no validation on top-ups (phase0 feature). no validation before state change
if err := beaconState.AppendPendingBalanceDeposit(index, amount); err != nil {
return nil, err
}
val, err := beaconState.ValidatorAtIndex(index)
if err != nil {
return nil, err
}
if helpers.IsCompoundingWithdrawalCredential(withdrawalCredentials) && helpers.HasETH1WithdrawalCredential(val) {
if verifySignature {
valid, err := IsValidDepositSignature(data)
if err != nil {
return nil, errors.Wrap(err, "could not verify deposit signature")
}
if !valid {
return beaconState, nil
}
}
if err := SwitchToCompoundingValidator(beaconState, index); err != nil {
return nil, errors.Wrap(err, "could not switch to compound validator")
}
}
}
return beaconState, nil
}
// IsValidDepositSignature returns whether deposit_data is valid
// def is_valid_deposit_signature(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> bool:
//
// deposit_message = DepositMessage( pubkey=pubkey, withdrawal_credentials=withdrawal_credentials, amount=amount, )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// return bls.Verify(pubkey, signing_root, signature)
func IsValidDepositSignature(data *ethpb.Deposit_Data) (bool, error) {
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
if err != nil {
return false, err
}
if err := verifyDepositDataSigningRoot(data, domain); err != nil {
// Ignore this error as in the spec pseudo code.
log.WithError(err).Debug("Skipping deposit: could not verify deposit data signature")
return false, nil
}
return true, nil
}
func verifyDepositDataSigningRoot(obj *ethpb.Deposit_Data, domain []byte) error {
return deposit.VerifyDepositSignature(obj, domain)
}
// ProcessPendingBalanceDeposits implements the spec definition below. This method mutates the state.
//
// Spec definition:
@@ -54,14 +225,14 @@ func ProcessPendingBalanceDeposits(ctx context.Context, st state.BeaconState, ac
return err
}
for _, deposit := range deposits {
if primitives.Gwei(deposit.Amount) > availableForProcessing {
for _, balanceDeposit := range deposits {
if primitives.Gwei(balanceDeposit.Amount) > availableForProcessing {
break
}
if err := helpers.IncreaseBalance(st, deposit.Index, deposit.Amount); err != nil {
if err := helpers.IncreaseBalance(st, balanceDeposit.Index, balanceDeposit.Amount); err != nil {
return err
}
availableForProcessing -= primitives.Gwei(deposit.Amount)
availableForProcessing -= primitives.Gwei(balanceDeposit.Amount)
nextDepositIndex++
}
@@ -79,9 +250,69 @@ func ProcessPendingBalanceDeposits(ctx context.Context, st state.BeaconState, ac
// ProcessDepositRequests is a function as part of electra to process execution layer deposits
func ProcessDepositRequests(ctx context.Context, beaconState state.BeaconState, requests []*enginev1.DepositRequest) (state.BeaconState, error) {
_, span := trace.StartSpan(ctx, "electra.ProcessDepositRequests")
ctx, span := trace.StartSpan(ctx, "electra.ProcessDepositRequests")
defer span.End()
// TODO: replace with 6110 logic
// return b.ProcessDepositRequests(beaconState, requests)
if len(requests) == 0 {
log.Debug("ProcessDepositRequests: no deposit requests found")
return beaconState, nil
}
deposits := make([]*ethpb.Deposit, 0)
for _, req := range requests {
if req == nil {
return nil, errors.New("got a nil DepositRequest")
}
deposits = append(deposits, &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: req.Pubkey,
WithdrawalCredentials: req.WithdrawalCredentials,
Amount: req.Amount,
Signature: req.Signature,
},
})
}
batchVerified, err := blocks.BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not verify deposit signatures in batch")
}
for _, receipt := range requests {
beaconState, err = processDepositRequest(beaconState, receipt, batchVerified)
if err != nil {
return nil, errors.Wrap(err, "could not apply deposit request")
}
}
return beaconState, nil
}
// processDepositRequest processes the specific deposit receipt
// def process_deposit_request(state: BeaconState, deposit_request: DepositRequest) -> None:
//
// # Set deposit request start index
// if state.deposit_requests_start_index == UNSET_DEPOSIT_REQUEST_START_INDEX:
// state.deposit_requests_start_index = deposit_request.index
//
// apply_deposit(
// state=state,
// pubkey=deposit_request.pubkey,
// withdrawal_credentials=deposit_request.withdrawal_credentials,
// amount=deposit_request.amount,
// signature=deposit_request.signature,
// )
func processDepositRequest(beaconState state.BeaconState, request *enginev1.DepositRequest, verifySignature bool) (state.BeaconState, error) {
requestsStartIndex, err := beaconState.DepositRequestsStartIndex()
if err != nil {
return nil, errors.Wrap(err, "could not get deposit requests start index")
}
if requestsStartIndex == params.BeaconConfig().UnsetDepositRequestsStartIndex {
if err := beaconState.SetDepositRequestsStartIndex(request.Index); err != nil {
return nil, errors.Wrap(err, "could not set deposit requests start index")
}
}
return ApplyDeposit(beaconState, &ethpb.Deposit_Data{
PublicKey: bytesutil.SafeCopyBytes(request.Pubkey),
Amount: request.Amount,
WithdrawalCredentials: bytesutil.SafeCopyBytes(request.WithdrawalCredentials),
Signature: bytesutil.SafeCopyBytes(request.Signature),
}, verifySignature)
}
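As a quick usage sketch of the request-to-deposit conversion above (ctx and st assumed in scope; the byte slices are placeholders rather than valid BLS material, so with signature verification enabled this request would simply be skipped by ApplyDeposit rather than failing the block):
req := &enginev1.DepositRequest{
Pubkey: make([]byte, 48),
WithdrawalCredentials: make([]byte, 32),
Amount: params.BeaconConfig().MinActivationBalance,
Signature: make([]byte, 96),
Index: 0,
}
st, err := ProcessDepositRequests(ctx, st, []*enginev1.DepositRequest{req})
if err != nil {
return nil, errors.Wrap(err, "could not process deposit requests")
}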

View File

@@ -6,11 +6,18 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessPendingBalanceDeposits(t *testing.T) {
@@ -126,3 +133,194 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
})
}
}
func TestProcessDepositRequests(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
sk, err := bls.RandKey()
require.NoError(t, err)
t.Run("empty requests continues", func(t *testing.T) {
newSt, err := electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{})
require.NoError(t, err)
require.DeepEqual(t, newSt, st)
})
t.Run("nil request errors", func(t *testing.T) {
_, err = electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{nil})
require.ErrorContains(t, "got a nil DepositRequest", err)
})
vals := st.Validators()
vals[0].PublicKey = sk.PublicKey().Marshal()
vals[0].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
require.NoError(t, st.SetValidators(vals))
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 2000
require.NoError(t, st.SetBalances(bals))
require.NoError(t, st.SetPendingBalanceDeposits(make([]*eth.PendingBalanceDeposit, 0))) // reset pending balance deposits because the deterministic genesis state already populates them
withdrawalCred := make([]byte, 32)
withdrawalCred[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
depositMessage := &eth.DepositMessage{
PublicKey: sk.PublicKey().Marshal(),
Amount: 1000,
WithdrawalCredentials: withdrawalCred,
}
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(depositMessage, domain)
require.NoError(t, err)
sig := sk.Sign(sr[:])
requests := []*enginev1.DepositRequest{
{
Pubkey: depositMessage.PublicKey,
Index: 0,
WithdrawalCredentials: depositMessage.WithdrawalCredentials,
Amount: depositMessage.Amount,
Signature: sig.Marshal(),
},
}
st, err = electra.ProcessDepositRequests(context.Background(), st, requests)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 2, len(pbd))
require.Equal(t, uint64(1000), pbd[0].Amount)
require.Equal(t, uint64(2000), pbd[1].Amount)
}
func TestProcessDeposit_Electra_Simple(t *testing.T) {
deps, _, err := util.DeterministicDepositsAndKeysSameValidator(3)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(deps))
require.NoError(t, err)
registry := []*eth.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
st, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &eth.Fork{
PreviousVersion: params.BeaconConfig().ElectraForkVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
},
})
require.NoError(t, err)
pdSt, err := electra.ProcessDeposits(context.Background(), st, deps)
require.NoError(t, err)
pbd, err := pdSt.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, pbd[2].Amount)
require.Equal(t, 3, len(pbd))
}
func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
// Same test settings as in TestProcessDeposit_AddsNewValidatorDeposit, except that we use an invalid signature
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &eth.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,
}
registry := []*eth.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &eth.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, err := electra.ProcessDeposit(beaconState, dep[0], true)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
if newState.Eth1DepositIndex() != 1 {
t.Errorf(
"Expected Eth1DepositIndex to be increased by 1 after processing an invalid deposit, received change: %v",
newState.Eth1DepositIndex(),
)
}
if len(newState.Validators()) != 1 {
t.Errorf("Expected validator list to have length 1, received: %v", len(newState.Validators()))
}
if len(newState.Balances()) != 1 {
t.Errorf("Expected validator balances list to have length 1, received: %v", len(newState.Balances()))
}
if newState.Balances()[0] != 0 {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
func TestApplyDeposit_TopUps_WithBadSignature(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 3)
sk, err := bls.RandKey()
require.NoError(t, err)
withdrawalCred := make([]byte, 32)
withdrawalCred[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
topUpAmount := uint64(1234)
depositData := &eth.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: topUpAmount,
WithdrawalCredentials: withdrawalCred,
Signature: make([]byte, fieldparams.BLSSignatureLength),
}
vals := st.Validators()
vals[0].PublicKey = sk.PublicKey().Marshal()
vals[0].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
require.NoError(t, st.SetValidators(vals))
adSt, err := electra.ApplyDeposit(st, depositData, true)
require.NoError(t, err)
pbd, err := adSt.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
require.Equal(t, topUpAmount, pbd[0].Amount)
}
func TestApplyDeposit_Electra_SwitchToCompoundingValidator(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 3)
sk, err := bls.RandKey()
require.NoError(t, err)
withdrawalCred := make([]byte, 32)
withdrawalCred[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
depositData := &eth.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: 1000,
WithdrawalCredentials: withdrawalCred,
Signature: make([]byte, fieldparams.BLSSignatureLength),
}
vals := st.Validators()
vals[0].PublicKey = sk.PublicKey().Marshal()
vals[0].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
require.NoError(t, st.SetValidators(vals))
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 2000
require.NoError(t, st.SetBalances(bals))
sr, err := signing.ComputeSigningRoot(depositData, bytesutil.ToBytes(3, 32))
require.NoError(t, err)
sig := sk.Sign(sr[:])
depositData.Signature = sig.Marshal()
adSt, err := electra.ApplyDeposit(st, depositData, false)
require.NoError(t, err)
pbd, err := adSt.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 2, len(pbd))
require.Equal(t, uint64(1000), pbd[0].Amount)
require.Equal(t, uint64(2000), pbd[1].Amount)
}
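The deposit tests above keep flipping the first byte of the 32-byte withdrawal credentials between the two execution-layer flavors. Both layouts share the same shape; executionAddress below is a placeholder 20-byte address used only for illustration.
var executionAddress [20]byte // placeholder withdrawal address
eth1Cred := make([]byte, 32)
eth1Cred[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte // 0x01 credentials
copy(eth1Cred[12:], executionAddress[:]) // last 20 bytes carry the withdrawal address
compoundingCred := make([]byte, 32)
compoundingCred[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte // 0x02 compounding credentials (EIP-7251)
copy(compoundingCred[12:], executionAddress[:])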

View File

@@ -2,12 +2,18 @@ package electra
import (
"context"
"fmt"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
// ProcessRegistryUpdates rotates validators in and out of active pool.
// the amount to rotate is determined churn limit.
// ProcessRegistryUpdates processes all validators eligible for the activation queue, all validators
// which should be ejected, and all validators which are eligible for activation from the queue.
//
// Spec pseudocode definition:
//
@@ -28,7 +34,72 @@ import (
// for validator in state.validators:
// if is_eligible_for_activation(state, validator):
// validator.activation_epoch = activation_epoch
func ProcessRegistryUpdates(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
// TODO: replace with real implementation
return state, nil
func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) error {
currentEpoch := time.CurrentEpoch(st)
ejectionBal := params.BeaconConfig().EjectionBalance
activationEpoch := helpers.ActivationExitEpoch(currentEpoch)
// To avoid copying the state validator set via st.Validators(), we will perform a read only pass
// over the validator set while collecting validator indices where the validator copy is actually
// necessary, then we will process these operations.
eligibleForActivationQ := make([]primitives.ValidatorIndex, 0)
eligibleForEjection := make([]primitives.ValidatorIndex, 0)
eligibleForActivation := make([]primitives.ValidatorIndex, 0)
if err := st.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
// Collect validators eligible to enter the activation queue.
if helpers.IsEligibleForActivationQueue(val, currentEpoch) {
eligibleForActivationQ = append(eligibleForActivationQ, primitives.ValidatorIndex(idx))
}
// Collect validators to eject.
if val.EffectiveBalance() <= ejectionBal && helpers.IsActiveValidatorUsingTrie(val, currentEpoch) {
eligibleForEjection = append(eligibleForEjection, primitives.ValidatorIndex(idx))
}
// Collect validators eligible for activation and not yet dequeued for activation.
if helpers.IsEligibleForActivationUsingTrie(st, val) {
eligibleForActivation = append(eligibleForActivation, primitives.ValidatorIndex(idx))
}
return nil
}); err != nil {
return fmt.Errorf("failed to read validators: %w", err)
}
// Handle validators eligible to join the activation queue.
for _, idx := range eligibleForActivationQ {
v, err := st.ValidatorAtIndex(idx)
if err != nil {
return err
}
v.ActivationEligibilityEpoch = currentEpoch + 1
if err := st.UpdateValidatorAtIndex(idx, v); err != nil {
return fmt.Errorf("failed to update eligible validator at index %d: %w", idx, err)
}
}
// Handle validator ejections.
for _, idx := range eligibleForEjection {
var err error
// exitQueueEpoch and churn arguments are not used in electra.
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, 0 /*exitQueueEpoch*/, 0 /*churn*/)
if err != nil {
return fmt.Errorf("failed to initiate validator exit at index %d: %w", idx, err)
}
}
for _, idx := range eligibleForActivation {
// Activate all eligible validators.
v, err := st.ValidatorAtIndex(idx)
if err != nil {
return err
}
v.ActivationEpoch = activationEpoch
if err := st.UpdateValidatorAtIndex(idx, v); err != nil {
return fmt.Errorf("failed to activate validator at index %d: %w", idx, err)
}
}
return nil
}
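The activation epoch assigned above follows the usual lookahead rule, compute_activation_exit_epoch(epoch) = epoch + 1 + MAX_SEED_LOOKAHEAD, so for example:
currentEpoch := primitives.Epoch(10)
activationEpoch := helpers.ActivationExitEpoch(currentEpoch) // 10 + 1 + 4 = epoch 15 with the mainnet MAX_SEED_LOOKAHEAD of 4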

View File

@@ -0,0 +1,145 @@
package electra_test
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessRegistryUpdates(t *testing.T) {
finalizedEpoch := primitives.Epoch(4)
tests := []struct {
name string
state state.BeaconState
check func(*testing.T, state.BeaconState)
}{
{
name: "No rotation",
state: func() state.BeaconState {
base := &eth.BeaconStateElectra{
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
Validators: []*eth.Validator{
{ExitEpoch: params.BeaconConfig().MaxSeedLookahead},
{ExitEpoch: params.BeaconConfig().MaxSeedLookahead},
},
Balances: []uint64{
params.BeaconConfig().MaxEffectiveBalance,
params.BeaconConfig().MaxEffectiveBalance,
},
FinalizedCheckpoint: &eth.Checkpoint{Root: make([]byte, fieldparams.RootLength)},
}
st, err := state_native.InitializeFromProtoElectra(base)
require.NoError(t, err)
return st
}(),
check: func(t *testing.T, st state.BeaconState) {
for i, val := range st.Validators() {
require.Equal(t, params.BeaconConfig().MaxSeedLookahead, val.ExitEpoch, "validator updated unexpectedly at index %d", i)
}
},
},
{
name: "Validators are activated",
state: func() state.BeaconState {
base := &eth.BeaconStateElectra{
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedCheckpoint: &eth.Checkpoint{Epoch: finalizedEpoch, Root: make([]byte, fieldparams.RootLength)},
}
for i := uint64(0); i < 10; i++ {
base.Validators = append(base.Validators, &eth.Validator{
ActivationEligibilityEpoch: finalizedEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
})
}
st, err := state_native.InitializeFromProtoElectra(base)
require.NoError(t, err)
return st
}(),
check: func(t *testing.T, st state.BeaconState) {
activationEpoch := helpers.ActivationExitEpoch(5)
// All validators should be activated.
for i, val := range st.Validators() {
require.Equal(t, activationEpoch, val.ActivationEpoch, "failed to update validator at index %d", i)
}
},
},
{
name: "Validators are exited",
state: func() state.BeaconState {
base := &eth.BeaconStateElectra{
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedCheckpoint: &eth.Checkpoint{Epoch: finalizedEpoch, Root: make([]byte, fieldparams.RootLength)},
}
for i := uint64(0); i < 10; i++ {
base.Validators = append(base.Validators, &eth.Validator{
EffectiveBalance: params.BeaconConfig().EjectionBalance - 1,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
})
}
st, err := state_native.InitializeFromProtoElectra(base)
require.NoError(t, err)
return st
}(),
check: func(t *testing.T, st state.BeaconState) {
// All validators should be exited
for i, val := range st.Validators() {
require.NotEqual(t, params.BeaconConfig().FarFutureEpoch, val.ExitEpoch, "failed to update exit epoch on validator %d", i)
require.NotEqual(t, params.BeaconConfig().FarFutureEpoch, val.WithdrawableEpoch, "failed to update withdrawable epoch on validator %d", i)
}
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := electra.ProcessRegistryUpdates(context.TODO(), tt.state)
require.NoError(t, err)
if tt.check != nil {
tt.check(t, tt.state)
}
})
}
}
func Benchmark_ProcessRegistryUpdates_MassEjection(b *testing.B) {
bal := params.BeaconConfig().EjectionBalance - 1
ffe := params.BeaconConfig().FarFutureEpoch
genValidators := func(num uint64) []*eth.Validator {
vals := make([]*eth.Validator, num)
for i := range vals {
vals[i] = &eth.Validator{
EffectiveBalance: bal,
ExitEpoch: ffe,
}
}
return vals
}
st, err := util.NewBeaconStateElectra()
require.NoError(b, err)
for i := 0; i < b.N; i++ {
b.StopTimer()
if err := st.SetValidators(genValidators(100000)); err != nil {
panic(err)
}
b.StartTimer()
if err := electra.ProcessRegistryUpdates(context.TODO(), st); err != nil {
panic(err)
}
}
}

View File

@@ -2,12 +2,15 @@ package electra
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
e "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"go.opencensus.io/trace"
)
@@ -74,8 +77,7 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
return errors.Wrap(err, "could not process rewards and penalties")
}
state, err = ProcessRegistryUpdates(ctx, state)
if err != nil {
if err := ProcessRegistryUpdates(ctx, state); err != nil {
return errors.Wrap(err, "could not process registry updates")
}
@@ -127,3 +129,33 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
return nil
}
// VerifyBlockDepositLength
//
// Spec definition:
//
// # [Modified in Electra:EIP6110]
// # Disable former deposit mechanism once all prior deposits are processed
// eth1_deposit_index_limit = min(state.eth1_data.deposit_count, state.deposit_requests_start_index)
// if state.eth1_deposit_index < eth1_deposit_index_limit:
// assert len(body.deposits) == min(MAX_DEPOSITS, eth1_deposit_index_limit - state.eth1_deposit_index)
// else:
// assert len(body.deposits) == 0
func VerifyBlockDepositLength(body interfaces.ReadOnlyBeaconBlockBody, state state.BeaconState) error {
eth1Data := state.Eth1Data()
requestsStartIndex, err := state.DepositRequestsStartIndex()
if err != nil {
return errors.Wrap(err, "failed to get requests start index")
}
eth1DepositIndexLimit := min(eth1Data.DepositCount, requestsStartIndex)
if state.Eth1DepositIndex() < eth1DepositIndexLimit {
if uint64(len(body.Deposits())) != min(params.BeaconConfig().MaxDeposits, eth1DepositIndexLimit-state.Eth1DepositIndex()) {
return fmt.Errorf("incorrect outstanding deposits in block body, wanted: %d, got: %d", min(params.BeaconConfig().MaxDeposits, eth1DepositIndexLimit-state.Eth1DepositIndex()), len(body.Deposits()))
}
} else {
if len(body.Deposits()) != 0 {
return fmt.Errorf("incorrect outstanding deposits in block body, wanted: %d, got: %d", 0, len(body.Deposits()))
}
}
return nil
}
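For a concrete reading of that check: with eth1_data.deposit_count = 5, deposit_requests_start_index = 3 and eth1_deposit_index = 1, the limit is min(5, 3) = 3 and the block body must carry min(MAX_DEPOSITS, 3 - 1) = 2 deposits; once eth1_deposit_index reaches 3, block bodies must carry no deposits at all.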

View File

@@ -0,0 +1,98 @@
package electra
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
v "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
)
var (
ProcessBLSToExecutionChanges = blocks.ProcessBLSToExecutionChanges
ProcessVoluntaryExits = blocks.ProcessVoluntaryExits
ProcessAttesterSlashings = blocks.ProcessAttesterSlashings
ProcessProposerSlashings = blocks.ProcessProposerSlashings
)
// ProcessOperations
//
// Spec definition:
//
// def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
// # [Modified in Electra:EIP6110]
// # Disable former deposit mechanism once all prior deposits are processed
// eth1_deposit_index_limit = min(state.eth1_data.deposit_count, state.deposit_requests_start_index)
// if state.eth1_deposit_index < eth1_deposit_index_limit:
// assert len(body.deposits) == min(MAX_DEPOSITS, eth1_deposit_index_limit - state.eth1_deposit_index)
// else:
// assert len(body.deposits) == 0
//
// def for_ops(operations: Sequence[Any], fn: Callable[[BeaconState, Any], None]) -> None:
// for operation in operations:
// fn(state, operation)
//
// for_ops(body.proposer_slashings, process_proposer_slashing)
// for_ops(body.attester_slashings, process_attester_slashing)
// for_ops(body.attestations, process_attestation) # [Modified in Electra:EIP7549]
// for_ops(body.deposits, process_deposit) # [Modified in Electra:EIP7251]
// for_ops(body.voluntary_exits, process_voluntary_exit) # [Modified in Electra:EIP7251]
// for_ops(body.bls_to_execution_changes, process_bls_to_execution_change)
// # [New in Electra:EIP7002:EIP7251]
// for_ops(body.execution_payload.withdrawal_requests, process_execution_layer_withdrawal_request)
// for_ops(body.execution_payload.deposit_requests, process_deposit_requests) # [New in Electra:EIP6110]
// for_ops(body.consolidations, process_consolidation) # [New in Electra:EIP7251]
func ProcessOperations(
ctx context.Context,
st state.BeaconState,
block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
// 6110 validations are in VerifyOperationLengths
bb := block.Body()
// Electra extends the altair operations.
st, err := ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), v.SlashValidator)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
}
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), v.SlashValidator)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
st, err = ProcessAttestationsNoVerifySignature(ctx, st, block)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attestation")
}
if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits())
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
st, err = ProcessBLSToExecutionChanges(st, block)
if err != nil {
return nil, errors.Wrap(err, "could not process bls-to-execution changes")
}
// new in electra
e, err := bb.Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution data from block")
}
exe, ok := e.(interfaces.ExecutionDataElectra)
if !ok {
return nil, errors.New("could not cast execution data to electra execution data")
}
st, err = ProcessWithdrawalRequests(ctx, st, exe.WithdrawalRequests())
if err != nil {
return nil, errors.Wrap(err, "could not process execution layer withdrawal requests")
}
st, err = ProcessDepositRequests(ctx, st, exe.DepositRequests()) // TODO: EIP-6110 deposit changes.
if err != nil {
return nil, errors.Wrap(err, "could not process deposit receipts")
}
// TODO: Process consolidations from execution header.
return st, nil
}

View File

@@ -0,0 +1,49 @@
package electra_test
import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestVerifyOperationLengths_Electra(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
s, _ := util.DeterministicGenesisStateElectra(t, 1)
sb, err := consensusblocks.NewSignedBeaconBlock(util.NewBeaconBlockElectra())
require.NoError(t, err)
require.NoError(t, electra.VerifyBlockDepositLength(sb.Block().Body(), s))
})
t.Run("eth1depositIndex less than eth1depositIndexLimit & number of deposits incorrect", func(t *testing.T) {
s, _ := util.DeterministicGenesisStateElectra(t, 1)
sb, err := consensusblocks.NewSignedBeaconBlock(util.NewBeaconBlockElectra())
require.NoError(t, err)
require.NoError(t, s.SetEth1DepositIndex(0))
require.NoError(t, s.SetDepositRequestsStartIndex(1))
err = electra.VerifyBlockDepositLength(sb.Block().Body(), s)
require.ErrorContains(t, "incorrect outstanding deposits in block body", err)
})
t.Run("eth1depositIndex more than eth1depositIndexLimit & number of deposits is not 0", func(t *testing.T) {
s, _ := util.DeterministicGenesisStateElectra(t, 1)
sb, err := consensusblocks.NewSignedBeaconBlock(util.NewBeaconBlockElectra())
require.NoError(t, err)
sb.SetDeposits([]*ethpb.Deposit{
{
Data: &ethpb.Deposit_Data{
PublicKey: []byte{1, 2, 3},
Amount: 1000,
WithdrawalCredentials: make([]byte, common.AddressLength),
Signature: []byte{4, 5, 6},
},
},
})
require.NoError(t, s.SetEth1DepositIndex(1))
require.NoError(t, s.SetDepositRequestsStartIndex(1))
err = electra.VerifyBlockDepositLength(sb.Block().Body(), s)
require.ErrorContains(t, "incorrect outstanding deposits in block body", err)
})
}

View File

@@ -244,25 +244,26 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderElectra{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
DepositRequestsRoot: bytesutil.Bytes32(0), // [New in Electra:EIP6110]
WithdrawalRequestsRoot: bytesutil.Bytes32(0), // [New in Electra:EIP7002]
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
DepositRequestsRoot: bytesutil.Bytes32(0), // [New in Electra:EIP6110]
WithdrawalRequestsRoot: bytesutil.Bytes32(0), // [New in Electra:EIP7002]
ConsolidationRequestsRoot: bytesutil.Bytes32(0), // [New in Electra:EIP7251]
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
@@ -295,14 +296,14 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
}
for _, index := range preActivationIndices {
if err := helpers.QueueEntireBalanceAndResetValidator(post, index); err != nil {
if err := QueueEntireBalanceAndResetValidator(post, index); err != nil {
return nil, errors.Wrap(err, "failed to queue entire balance and reset validator")
}
}
// Ensure early adopters of compounding credentials go through the activation churn
for _, index := range compoundWithdrawalIndices {
if err := helpers.QueueExcessActiveBalance(post, index); err != nil {
if err := QueueExcessActiveBalance(post, index); err != nil {
return nil, errors.Wrap(err, "failed to queue excess active balance")
}
}

View File

@@ -113,23 +113,24 @@ func TestUpgradeToElectra(t *testing.T) {
wdRoot, err := prevHeader.WithdrawalsRoot()
require.NoError(t, err)
wanted := &enginev1.ExecutionPayloadHeaderElectra{
ParentHash: prevHeader.ParentHash(),
FeeRecipient: prevHeader.FeeRecipient(),
StateRoot: prevHeader.StateRoot(),
ReceiptsRoot: prevHeader.ReceiptsRoot(),
LogsBloom: prevHeader.LogsBloom(),
PrevRandao: prevHeader.PrevRandao(),
BlockNumber: prevHeader.BlockNumber(),
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
ExtraData: prevHeader.ExtraData(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
DepositRequestsRoot: bytesutil.Bytes32(0),
WithdrawalRequestsRoot: bytesutil.Bytes32(0),
ParentHash: prevHeader.ParentHash(),
FeeRecipient: prevHeader.FeeRecipient(),
StateRoot: prevHeader.StateRoot(),
ReceiptsRoot: prevHeader.ReceiptsRoot(),
LogsBloom: prevHeader.LogsBloom(),
PrevRandao: prevHeader.PrevRandao(),
BlockNumber: prevHeader.BlockNumber(),
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
ExtraData: prevHeader.ExtraData(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
DepositRequestsRoot: bytesutil.Bytes32(0),
WithdrawalRequestsRoot: bytesutil.Bytes32(0),
ConsolidationRequestsRoot: bytesutil.Bytes32(0),
}
require.DeepEqual(t, wanted, protoHeader)

View File

@@ -1,15 +1,80 @@
package electra
import (
"context"
"errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// AddValidatorToRegistry updates the beacon state with validator information
// def add_validator_to_registry(state: BeaconState,
//
// pubkey: BLSPubkey,
// withdrawal_credentials: Bytes32,
// amount: uint64) -> None:
// index = get_index_for_new_validator(state)
// validator = get_validator_from_deposit(pubkey, withdrawal_credentials)
// set_or_append_list(state.validators, index, validator)
// set_or_append_list(state.balances, index, 0) # [Modified in Electra:EIP7251]
// set_or_append_list(state.previous_epoch_participation, index, ParticipationFlags(0b0000_0000))
// set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000))
// set_or_append_list(state.inactivity_scores, index, uint64(0))
// state.pending_balance_deposits.append(PendingBalanceDeposit(index=index, amount=amount)) # [New in Electra:EIP7251]
func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdrawalCredentials []byte, amount uint64) error {
val := ValidatorFromDeposit(pubKey, withdrawalCredentials)
if err := beaconState.AppendValidator(val); err != nil {
return err
}
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
return errors.New("could not find validator in registry")
}
if err := beaconState.AppendBalance(0); err != nil {
return err
}
if err := beaconState.AppendPendingBalanceDeposit(index, amount); err != nil {
return err
}
if err := beaconState.AppendInactivityScore(0); err != nil {
return err
}
if err := beaconState.AppendPreviousParticipationBits(0); err != nil {
return err
}
return beaconState.AppendCurrentParticipationBits(0)
}
// ValidatorFromDeposit gets a new validator object with provided parameters
//
// def get_validator_from_deposit(pubkey: BLSPubkey, withdrawal_credentials: Bytes32) -> Validator:
//
// return Validator(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// activation_eligibility_epoch=FAR_FUTURE_EPOCH,
// activation_epoch=FAR_FUTURE_EPOCH,
// exit_epoch=FAR_FUTURE_EPOCH,
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// effective_balance=0, # [Modified in Electra:EIP7251]
//
// )
func ValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte) *ethpb.Validator {
return &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: 0, // [Modified in Electra:EIP7251]
}
}
// SwitchToCompoundingValidator
//
// Spec definition:
@@ -19,7 +84,7 @@ import (
// if has_eth1_withdrawal_credential(validator):
// validator.withdrawal_credentials = COMPOUNDING_WITHDRAWAL_PREFIX + validator.withdrawal_credentials[1:]
// queue_excess_active_balance(state, index)
func SwitchToCompoundingValidator(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) error {
func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
v, err := s.ValidatorAtIndex(idx)
if err != nil {
return err
@@ -32,12 +97,12 @@ func SwitchToCompoundingValidator(ctx context.Context, s state.BeaconState, idx
if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
return err
}
return queueExcessActiveBalance(ctx, s, idx)
return QueueExcessActiveBalance(s, idx)
}
return nil
}
// queueExcessActiveBalance
// QueueExcessActiveBalance queues validators with balances above the min activation balance and adds to pending balance deposit.
//
// Spec definition:
//
@@ -49,7 +114,7 @@ func SwitchToCompoundingValidator(ctx context.Context, s state.BeaconState, idx
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=excess_balance)
// )
func queueExcessActiveBalance(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) error {
func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err
@@ -80,7 +145,7 @@ func queueExcessActiveBalance(ctx context.Context, s state.BeaconState, idx prim
// )
//
//nolint:dupword
func QueueEntireBalanceAndResetValidator(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) error {
func QueueEntireBalanceAndResetValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err

View File

@@ -2,17 +2,32 @@ package electra_test
import (
"bytes"
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestAddValidatorToRegistry(t *testing.T) {
st, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{})
require.NoError(t, err)
require.NoError(t, electra.AddValidatorToRegistry(st, make([]byte, fieldparams.BLSPubkeyLength), make([]byte, fieldparams.RootLength), 100))
balances := st.Balances()
require.Equal(t, 1, len(balances))
require.Equal(t, uint64(0), balances[0])
pbds, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbds))
require.Equal(t, uint64(100), pbds[0].Amount)
require.Equal(t, primitives.ValidatorIndex(0), pbds[0].Index)
}
func TestSwitchToCompoundingValidator(t *testing.T) {
s, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{
Validators: []*eth.Validator{
@@ -34,10 +49,10 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
})
// Test that a validator with no withdrawal credentials cannot be switched to compounding.
require.NoError(t, err)
require.ErrorContains(t, "validator has no withdrawal credentials", electra.SwitchToCompoundingValidator(context.TODO(), s, 0))
require.ErrorContains(t, "validator has no withdrawal credentials", electra.SwitchToCompoundingValidator(s, 0))
// Test that a validator with withdrawal credentials can be switched to compounding.
require.NoError(t, electra.SwitchToCompoundingValidator(context.TODO(), s, 1))
require.NoError(t, electra.SwitchToCompoundingValidator(s, 1))
v, err := s.ValidatorAtIndex(1)
require.NoError(t, err)
require.Equal(t, true, bytes.HasPrefix(v.WithdrawalCredentials, []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte}), "withdrawal credentials were not updated")
@@ -50,7 +65,7 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
require.Equal(t, 0, len(pbd), "pending balance deposits should be empty")
// Test that a validator with excess balance can be switched to compounding, excess balance is queued.
require.NoError(t, electra.SwitchToCompoundingValidator(context.TODO(), s, 2))
require.NoError(t, electra.SwitchToCompoundingValidator(s, 2))
b, err = s.BalanceAtIndex(2)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, b, "balance was not changed")
@@ -74,7 +89,7 @@ func TestQueueEntireBalanceAndResetValidator(t *testing.T) {
},
})
require.NoError(t, err)
require.NoError(t, electra.QueueEntireBalanceAndResetValidator(context.TODO(), s, 0))
require.NoError(t, electra.QueueEntireBalanceAndResetValidator(s, 0))
b, err := s.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(0), b, "balance was not changed")
@@ -88,3 +103,57 @@ func TestQueueEntireBalanceAndResetValidator(t *testing.T) {
require.Equal(t, params.BeaconConfig().MinActivationBalance+100_000, pbd[0].Amount, "pending balance deposit amount is incorrect")
require.Equal(t, primitives.ValidatorIndex(0), pbd[0].Index, "pending balance deposit index is incorrect")
}
func TestSwitchToCompoundingValidator_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
vals := st.Validators()
vals[0].WithdrawalCredentials = []byte{params.BeaconConfig().ETH1AddressWithdrawalPrefixByte}
require.NoError(t, st.SetValidators(vals))
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 1010
require.NoError(t, st.SetBalances(bals))
require.NoError(t, electra.SwitchToCompoundingValidator(st, 0))
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1010), pbd[0].Amount) // appends it at the end
val, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, true, bytes.HasPrefix(val.WithdrawalCredentials, []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte}))
}
func TestQueueExcessActiveBalance_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 1000
require.NoError(t, st.SetBalances(bals))
err := electra.QueueExcessActiveBalance(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1000), pbd[0].Amount) // appends it at the end
bals = st.Balances()
require.Equal(t, params.BeaconConfig().MinActivationBalance, bals[0])
}
func TestQueueEntireBalanceAndResetValidator_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
// need to set this balance manually, as after EIP-6110 these balances are 0 by default and pending balance deposits are populated instead
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance - 1000
require.NoError(t, st.SetBalances(bals))
err := electra.QueueEntireBalanceAndResetValidator(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
require.Equal(t, params.BeaconConfig().MinActivationBalance-1000, pbd[0].Amount)
bal, err := st.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(0), bal)
}

View File

@@ -81,7 +81,6 @@ func TestProcessWithdrawRequests(t *testing.T) {
}))
_, err = wantPostSt.ExitEpochAndUpdateChurn(primitives.Gwei(v.EffectiveBalance))
require.NoError(t, err)
require.DeepEqual(t, wantPostSt.Validators(), got.Validators())
webc, err := wantPostSt.ExitBalanceToConsume()
require.NoError(t, err)
gebc, err := got.ExitBalanceToConsume()
@@ -110,10 +109,7 @@ func TestProcessWithdrawRequests(t *testing.T) {
prefix[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
v.WithdrawalCredentials = append(prefix, source...)
require.NoError(t, preSt.SetValidators([]*eth.Validator{v}))
bal, err := preSt.BalanceAtIndex(0)
require.NoError(t, err)
bal += 200
require.NoError(t, preSt.SetBalances([]uint64{bal}))
require.NoError(t, preSt.SetBalances([]uint64{params.BeaconConfig().MinActivationBalance + 200}))
require.NoError(t, preSt.AppendPendingPartialWithdrawal(&eth.PendingPartialWithdrawal{
Index: 0,
Amount: 100,
@@ -168,7 +164,6 @@ func TestProcessWithdrawRequests(t *testing.T) {
require.Equal(t, wece, gece)
_, err = wantPostSt.ExitEpochAndUpdateChurn(primitives.Gwei(100))
require.NoError(t, err)
require.DeepEqual(t, wantPostSt.Validators(), got.Validators())
webc, err := wantPostSt.ExitBalanceToConsume()
require.NoError(t, err)
gebc, err := got.ExitBalanceToConsume()

View File

@@ -2,7 +2,10 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["epoch_processing.go"],
srcs = [
"epoch_processing.go",
"sortable_indices.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epoch",
visibility = [
"//beacon-chain:__subpackages__",
@@ -31,6 +34,7 @@ go_test(
srcs = [
"epoch_processing_fuzz_test.go",
"epoch_processing_test.go",
"sortable_indices_test.go",
],
embed = [":go_default_library"],
deps = [
@@ -47,6 +51,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_google_go_cmp//cmp:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",

View File

@@ -24,27 +24,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// sortableIndices implements the Sort interface to sort newly activated validator indices
// by activation epoch and by index number.
type sortableIndices struct {
indices []primitives.ValidatorIndex
validators []*ethpb.Validator
}
// Len is the number of elements in the collection.
func (s sortableIndices) Len() int { return len(s.indices) }
// Swap swaps the elements with indexes i and j.
func (s sortableIndices) Swap(i, j int) { s.indices[i], s.indices[j] = s.indices[j], s.indices[i] }
// Less reports whether the element with index i must sort before the element with index j.
func (s sortableIndices) Less(i, j int) bool {
if s.validators[s.indices[i]].ActivationEligibilityEpoch == s.validators[s.indices[j]].ActivationEligibilityEpoch {
return s.indices[i] < s.indices[j]
}
return s.validators[s.indices[i]].ActivationEligibilityEpoch < s.validators[s.indices[j]].ActivationEligibilityEpoch
}
// AttestingBalance returns the total balance from all the attesting indices.
//
// WARNING: This method allocates a new copy of the attesting validator indices set and is
@@ -91,55 +70,78 @@ func AttestingBalance(ctx context.Context, state state.ReadOnlyBeaconState, atts
// for index in activation_queue[:get_validator_churn_limit(state)]:
// validator = state.validators[index]
// validator.activation_epoch = compute_activation_exit_epoch(get_current_epoch(state))
func ProcessRegistryUpdates(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(state)
vals := state.Validators()
func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(st)
var err error
ejectionBal := params.BeaconConfig().EjectionBalance
activationEligibilityEpoch := time.CurrentEpoch(state) + 1
for idx, validator := range vals {
// Process the validators for activation eligibility.
if helpers.IsEligibleForActivationQueue(validator, currentEpoch) {
validator.ActivationEligibilityEpoch = activationEligibilityEpoch
if err := state.UpdateValidatorAtIndex(primitives.ValidatorIndex(idx), validator); err != nil {
return nil, err
}
// To avoid copying the state validator set via st.Validators(), we will perform a read only pass
// over the validator set while collecting validator indices where the validator copy is actually
// necessary, then we will process these operations.
eligibleForActivationQ := make([]primitives.ValidatorIndex, 0)
eligibleForActivation := make([]primitives.ValidatorIndex, 0)
eligibleForEjection := make([]primitives.ValidatorIndex, 0)
if err := st.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
// Collect validators eligible to enter the activation queue.
if helpers.IsEligibleForActivationQueue(val, currentEpoch) {
eligibleForActivationQ = append(eligibleForActivationQ, primitives.ValidatorIndex(idx))
}
// Process the validators for ejection.
isActive := helpers.IsActiveValidator(validator, currentEpoch)
belowEjectionBalance := validator.EffectiveBalance <= ejectionBal
// Collect validators to eject.
isActive := helpers.IsActiveValidatorUsingTrie(val, currentEpoch)
belowEjectionBalance := val.EffectiveBalance() <= ejectionBal
if isActive && belowEjectionBalance {
// Here is fine to do a quadratic loop since this should
// barely happen
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(state)
state, _, err = validators.InitiateValidatorExit(ctx, state, primitives.ValidatorIndex(idx), maxExitEpoch, churn)
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
}
eligibleForEjection = append(eligibleForEjection, primitives.ValidatorIndex(idx))
}
// Collect validators eligible for activation and not yet dequeued for activation.
if helpers.IsEligibleForActivationUsingTrie(st, val) {
eligibleForActivation = append(eligibleForActivation, primitives.ValidatorIndex(idx))
}
return nil
}); err != nil {
return st, fmt.Errorf("failed to read validators: %w", err)
}
// Process validators for activation eligibility.
activationEligibilityEpoch := time.CurrentEpoch(st) + 1
for _, idx := range eligibleForActivationQ {
v, err := st.ValidatorAtIndex(idx)
if err != nil {
return nil, err
}
v.ActivationEligibilityEpoch = activationEligibilityEpoch
if err := st.UpdateValidatorAtIndex(idx, v); err != nil {
return nil, err
}
}
// Process validators eligible for ejection.
for _, idx := range eligibleForEjection {
// It is fine to do a quadratic loop here since this should
// rarely happen
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, maxExitEpoch, churn)
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
}
}
// Queue validators eligible for activation and not yet dequeued for activation.
var activationQ []primitives.ValidatorIndex
for idx, validator := range vals {
if helpers.IsEligibleForActivation(state, validator) {
activationQ = append(activationQ, primitives.ValidatorIndex(idx))
}
}
sort.Sort(sortableIndices{indices: activationQ, validators: vals})
sort.Sort(sortableIndices{indices: eligibleForActivation, state: st})
// Only activate just enough validators according to the activation churn limit.
limit := uint64(len(activationQ))
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, state, currentEpoch)
limit := uint64(len(eligibleForActivation))
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, st, currentEpoch)
if err != nil {
return nil, errors.Wrap(err, "could not get active validator count")
}
churnLimit := helpers.ValidatorActivationChurnLimit(activeValidatorCount)
if state.Version() >= version.Deneb {
if st.Version() >= version.Deneb {
churnLimit = helpers.ValidatorActivationChurnLimitDeneb(activeValidatorCount)
}
@@ -149,17 +151,17 @@ func ProcessRegistryUpdates(ctx context.Context, state state.BeaconState) (state
}
activationExitEpoch := helpers.ActivationExitEpoch(currentEpoch)
for _, index := range activationQ[:limit] {
validator, err := state.ValidatorAtIndex(index)
for _, index := range eligibleForActivation[:limit] {
validator, err := st.ValidatorAtIndex(index)
if err != nil {
return nil, err
}
validator.ActivationEpoch = activationExitEpoch
if err := state.UpdateValidatorAtIndex(index, validator); err != nil {
if err := st.UpdateValidatorAtIndex(index, validator); err != nil {
return nil, err
}
}
return state, nil
return st, nil
}
// ProcessSlashings processes the slashed validators during epoch processing,
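The rewrite above avoids copying the whole validator registry: a single read-only pass records the indices that need changes, and only those validators are copied and written back. Below is a self-contained sketch of that collect-then-mutate pattern, using toy types rather than Prysm's state API, purely for illustration.

package main

import "fmt"

type validator struct {
	effectiveBalance uint64
	exited           bool
}

type registry struct{ vals []validator }

// readEach stands in for a read-only iteration such as st.ReadFromEveryValidator.
func (r *registry) readEach(f func(idx int, v validator) error) error {
	for i, v := range r.vals {
		if err := f(i, v); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	r := &registry{vals: []validator{{effectiveBalance: 32}, {effectiveBalance: 10}, {effectiveBalance: 12}}}

	// Pass 1: read-only collection of the indices to eject (balance at or below 16 here).
	var toEject []int
	_ = r.readEach(func(idx int, v validator) error {
		if v.effectiveBalance <= 16 {
			toEject = append(toEject, idx)
		}
		return nil
	})

	// Pass 2: copy and mutate only the collected indices.
	for _, idx := range toEject {
		r.vals[idx].exited = true
	}

	fmt.Println(toEject, r.vals) // [1 2] [{32 false} {10 true} {12 true}]
}

Allocations then scale with the number of affected validators rather than with the size of the registry, which is the stated goal of the refactor.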

View File

@@ -307,14 +307,15 @@ func TestProcessRegistryUpdates_NoRotation(t *testing.T) {
}
func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
finalizedEpoch := primitives.Epoch(4)
base := &ethpb.BeaconState{
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 6, Root: make([]byte, fieldparams.RootLength)},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: finalizedEpoch, Root: make([]byte, fieldparams.RootLength)},
}
limit := helpers.ValidatorActivationChurnLimit(0)
for i := uint64(0); i < limit+10; i++ {
base.Validators = append(base.Validators, &ethpb.Validator{
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEligibilityEpoch: finalizedEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
})
@@ -325,7 +326,6 @@ func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
require.NoError(t, err)
for i, validator := range newState.Validators() {
assert.Equal(t, currentEpoch+1, validator.ActivationEligibilityEpoch, "Could not update registry %d, unexpected activation eligibility epoch", i)
if uint64(i) < limit && validator.ActivationEpoch != helpers.ActivationExitEpoch(currentEpoch) {
t.Errorf("Could not update registry %d, validators failed to activate: wanted activation epoch %d, got %d",
i, helpers.ActivationExitEpoch(currentEpoch), validator.ActivationEpoch)
@@ -338,9 +338,10 @@ func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
}
func TestProcessRegistryUpdates_EligibleToActivate_Cancun(t *testing.T) {
finalizedEpoch := primitives.Epoch(4)
base := &ethpb.BeaconStateDeneb{
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 6, Root: make([]byte, fieldparams.RootLength)},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: finalizedEpoch, Root: make([]byte, fieldparams.RootLength)},
}
cfg := params.BeaconConfig()
cfg.MinPerEpochChurnLimit = 10
@@ -349,7 +350,7 @@ func TestProcessRegistryUpdates_EligibleToActivate_Cancun(t *testing.T) {
for i := uint64(0); i < 10; i++ {
base.Validators = append(base.Validators, &ethpb.Validator{
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEligibilityEpoch: finalizedEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
})
@@ -360,7 +361,6 @@ func TestProcessRegistryUpdates_EligibleToActivate_Cancun(t *testing.T) {
newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
require.NoError(t, err)
for i, validator := range newState.Validators() {
assert.Equal(t, currentEpoch+1, validator.ActivationEligibilityEpoch, "Could not update registry %d, unexpected activation eligibility epoch", i)
// Note: In Deneb, only validator indices before `MaxPerEpochActivationChurnLimit` should be activated.
if uint64(i) < params.BeaconConfig().MaxPerEpochActivationChurnLimit && validator.ActivationEpoch != helpers.ActivationExitEpoch(currentEpoch) {
t.Errorf("Could not update registry %d, validators failed to activate: wanted activation epoch %d, got %d",

View File

@@ -0,0 +1,35 @@
package epoch
import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
// sortableIndices implements the Sort interface to sort newly activated validator indices
// by activation epoch and by index number.
type sortableIndices struct {
indices []primitives.ValidatorIndex
state state.ReadOnlyValidators
}
// Len is the number of elements in the collection.
func (s sortableIndices) Len() int { return len(s.indices) }
// Swap swaps the elements with indexes i and j.
func (s sortableIndices) Swap(i, j int) { s.indices[i], s.indices[j] = s.indices[j], s.indices[i] }
// Less reports whether the element with index i must sort before the element with index j.
func (s sortableIndices) Less(i, j int) bool {
vi, erri := s.state.ValidatorAtIndexReadOnly(s.indices[i])
vj, errj := s.state.ValidatorAtIndexReadOnly(s.indices[j])
if erri != nil || errj != nil {
return false
}
a, b := vi.ActivationEligibilityEpoch(), vj.ActivationEligibilityEpoch()
if a == b {
return s.indices[i] < s.indices[j]
}
return a < b
}

View File

@@ -0,0 +1,53 @@
package epoch
import (
"sort"
"testing"
"github.com/google/go-cmp/cmp"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestSortableIndices(t *testing.T) {
st, err := state_native.InitializeFromProtoPhase0(&eth.BeaconState{
Validators: []*eth.Validator{
{ActivationEligibilityEpoch: 0},
{ActivationEligibilityEpoch: 5},
{ActivationEligibilityEpoch: 4},
{ActivationEligibilityEpoch: 4},
{ActivationEligibilityEpoch: 2},
{ActivationEligibilityEpoch: 1},
},
})
require.NoError(t, err)
s := sortableIndices{
indices: []primitives.ValidatorIndex{
4,
2,
5,
3,
1,
0,
},
state: st,
}
sort.Sort(s)
want := []primitives.ValidatorIndex{
0,
5,
4,
2, // Validators with the same ActivationEligibilityEpoch are sorted by index, ascending.
3,
1,
}
if !cmp.Equal(s.indices, want) {
t.Errorf("Failed to sort indices correctly, wanted %v, got %v", want, s.indices)
}
}

View File

@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -91,6 +92,14 @@ func IsAggregated(attestation ethpb.Att) bool {
//
// return uint64((committees_since_epoch_start + committee_index) % ATTESTATION_SUBNET_COUNT)
func ComputeSubnetForAttestation(activeValCount uint64, att ethpb.Att) uint64 {
if att.Version() >= version.Electra {
committeeIndex := 0
committeeIndices := att.CommitteeBitsVal().BitIndices()
if len(committeeIndices) > 0 {
committeeIndex = committeeIndices[0]
}
return ComputeSubnetFromCommitteeAndSlot(activeValCount, primitives.CommitteeIndex(committeeIndex), att.GetData().Slot)
}
return ComputeSubnetFromCommitteeAndSlot(activeValCount, att.GetData().CommitteeIndex, att.GetData().Slot)
}
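As a rough illustration of the spec formula quoted above, here is a standalone sketch (not Prysm code) that assumes the mainnet constants SLOTS_PER_EPOCH = 32 and ATTESTATION_SUBNET_COUNT = 64. The only Electra-specific difference in the function above is where the committee index comes from: the first set bit of committee_bits instead of AttestationData.CommitteeIndex.

package main

import "fmt"

const (
	slotsPerEpoch          = 32
	attestationSubnetCount = 64
)

// computeSubnet mirrors the spec formula: committees elapsed since the epoch start,
// plus the committee index, modulo the subnet count.
func computeSubnet(committeesPerSlot, slot, committeeIndex uint64) uint64 {
	slotsSinceEpochStart := slot % slotsPerEpoch
	committeesSinceEpochStart := committeesPerSlot * slotsSinceEpochStart
	return (committeesSinceEpochStart + committeeIndex) % attestationSubnetCount
}

func main() {
	// With one committee per slot, slot 34 (two slots into its epoch) and committee index 4,
	// the attestation belongs to subnet (1*2 + 4) % 64 = 6.
	fmt.Println(computeSubnet(1, 34, 4)) // 6
}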

View File

@@ -73,21 +73,37 @@ func TestAttestation_ComputeSubnetForAttestation(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
att := &ethpb.Attestation{
AggregationBits: []byte{'A'},
Data: &ethpb.AttestationData{
Slot: 34,
CommitteeIndex: 4,
BeaconBlockRoot: []byte{'C'},
Source: nil,
Target: nil,
},
Signature: []byte{'B'},
}
valCount, err := helpers.ActiveValidatorCount(context.Background(), state, slots.ToEpoch(att.Data.Slot))
valCount, err := helpers.ActiveValidatorCount(context.Background(), state, slots.ToEpoch(34))
require.NoError(t, err)
sub := helpers.ComputeSubnetForAttestation(valCount, att)
assert.Equal(t, uint64(6), sub, "Did not get correct subnet for attestation")
t.Run("Phase 0", func(t *testing.T) {
att := &ethpb.Attestation{
AggregationBits: []byte{'A'},
Data: &ethpb.AttestationData{
Slot: 34,
CommitteeIndex: 4,
BeaconBlockRoot: []byte{'C'},
},
Signature: []byte{'B'},
}
sub := helpers.ComputeSubnetForAttestation(valCount, att)
assert.Equal(t, uint64(6), sub, "Did not get correct subnet for attestation")
})
t.Run("Electra", func(t *testing.T) {
cb := primitives.NewAttestationCommitteeBits()
cb.SetBitAt(4, true)
att := &ethpb.AttestationElectra{
AggregationBits: []byte{'A'},
CommitteeBits: cb,
Data: &ethpb.AttestationData{
Slot: 34,
BeaconBlockRoot: []byte{'C'},
},
Signature: []byte{'B'},
}
sub := helpers.ComputeSubnetForAttestation(valCount, att)
assert.Equal(t, uint64(6), sub, "Did not get correct subnet for attestation")
})
}
func Test_ValidateAttestationTime(t *testing.T) {

View File

@@ -31,7 +31,7 @@ func IsCurrentPeriodSyncCommittee(st state.BeaconState, valIdx primitives.Valida
return false, err
}
indices, err := syncCommitteeCache.CurrentPeriodIndexPosition(root, valIdx)
if err == cache.ErrNonExistingSyncCommitteeKey {
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
val, err := st.ValidatorAtIndex(valIdx)
if err != nil {
return false, err
@@ -68,7 +68,7 @@ func IsNextPeriodSyncCommittee(
return false, err
}
indices, err := syncCommitteeCache.NextPeriodIndexPosition(root, valIdx)
if err == cache.ErrNonExistingSyncCommitteeKey {
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
val, err := st.ValidatorAtIndex(valIdx)
if err != nil {
return false, err
@@ -95,7 +95,7 @@ func CurrentPeriodSyncSubcommitteeIndices(
return nil, err
}
indices, err := syncCommitteeCache.CurrentPeriodIndexPosition(root, valIdx)
if err == cache.ErrNonExistingSyncCommitteeKey {
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
val, err := st.ValidatorAtIndex(valIdx)
if err != nil {
return nil, err
@@ -129,7 +129,7 @@ func NextPeriodSyncSubcommitteeIndices(
return nil, err
}
indices, err := syncCommitteeCache.NextPeriodIndexPosition(root, valIdx)
if err == cache.ErrNonExistingSyncCommitteeKey {
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
val, err := st.ValidatorAtIndex(valIdx)
if err != nil {
return nil, err
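The switch from == to errors.Is above matters whenever the cache's sentinel error comes back wrapped. A minimal standalone sketch, with a locally defined sentinel standing in for cache.ErrNonExistingSyncCommitteeKey:

package main

import (
	"errors"
	"fmt"
)

var errNonExistingSyncCommitteeKey = errors.New("sync committee key does not exist")

func main() {
	// A caller wraps the sentinel with %w, as is common when adding context.
	wrapped := fmt.Errorf("cache lookup failed: %w", errNonExistingSyncCommitteeKey)

	fmt.Println(wrapped == errNonExistingSyncCommitteeKey)          // false: direct comparison misses wrapped errors
	fmt.Println(errors.Is(wrapped, errNonExistingSyncCommitteeKey)) // true: errors.Is unwraps the chain
}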

View File

@@ -17,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -357,10 +358,10 @@ func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconSt
// candidate_index = indices[compute_shuffled_index(i % total, total, seed)]
// random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
// effective_balance = state.validators[candidate_index].effective_balance
// if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
// if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_byte: #[Modified in Electra:EIP7251]
// return candidate_index
// i += 1
func ComputeProposerIndex(bState state.ReadOnlyValidators, activeIndices []primitives.ValidatorIndex, seed [32]byte) (primitives.ValidatorIndex, error) {
func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []primitives.ValidatorIndex, seed [32]byte) (primitives.ValidatorIndex, error) {
length := uint64(len(activeIndices))
if length == 0 {
return 0, errors.New("empty active indices list")
@@ -385,7 +386,12 @@ func ComputeProposerIndex(bState state.ReadOnlyValidators, activeIndices []primi
}
effectiveBal := v.EffectiveBalance()
if effectiveBal*maxRandomByte >= params.BeaconConfig().MaxEffectiveBalance*uint64(randomByte) {
maxEB := params.BeaconConfig().MaxEffectiveBalance
if bState.Version() >= version.Electra {
maxEB = params.BeaconConfig().MaxEffectiveBalanceElectra
}
if effectiveBal*maxRandomByte >= maxEB*uint64(randomByte) {
return candidateIndex, nil
}
}
@@ -404,11 +410,11 @@ func ComputeProposerIndex(bState state.ReadOnlyValidators, activeIndices []primi
// validator.activation_eligibility_epoch == FAR_FUTURE_EPOCH
// and validator.effective_balance >= MIN_ACTIVATION_BALANCE # [Modified in Electra:EIP7251]
// )
func IsEligibleForActivationQueue(validator *ethpb.Validator, currentEpoch primitives.Epoch) bool {
func IsEligibleForActivationQueue(validator state.ReadOnlyValidator, currentEpoch primitives.Epoch) bool {
if currentEpoch >= params.BeaconConfig().ElectraForkEpoch {
return isEligibleForActivationQueueElectra(validator.ActivationEligibilityEpoch, validator.EffectiveBalance)
return isEligibleForActivationQueueElectra(validator.ActivationEligibilityEpoch(), validator.EffectiveBalance())
}
return isEligibleForActivationQueue(validator.ActivationEligibilityEpoch, validator.EffectiveBalance)
return isEligibleForActivationQueue(validator.ActivationEligibilityEpoch(), validator.EffectiveBalance())
}
// isEligibleForActivationQueue carries out the logic for IsEligibleForActivationQueue
@@ -525,16 +531,16 @@ func HasCompoundingWithdrawalCredential(v interfaces.WithWithdrawalCredentials)
if v == nil {
return false
}
return isCompoundingWithdrawalCredential(v.GetWithdrawalCredentials())
return IsCompoundingWithdrawalCredential(v.GetWithdrawalCredentials())
}
// isCompoundingWithdrawalCredential checks if the credentials are a compounding withdrawal credential.
// IsCompoundingWithdrawalCredential checks if the credentials are a compounding withdrawal credential.
//
// Spec definition:
//
// def is_compounding_withdrawal_credential(withdrawal_credentials: Bytes32) -> bool:
// return withdrawal_credentials[:1] == COMPOUNDING_WITHDRAWAL_PREFIX
func isCompoundingWithdrawalCredential(creds []byte) bool {
func IsCompoundingWithdrawalCredential(creds []byte) bool {
return bytes.HasPrefix(creds, []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte})
}
@@ -548,7 +554,7 @@ func isCompoundingWithdrawalCredential(creds []byte) bool {
// Check if ``validator`` has a 0x01 or 0x02 prefixed withdrawal credential.
// """
// return has_compounding_withdrawal_credential(validator) or has_eth1_withdrawal_credential(validator)
func HasExecutionWithdrawalCredentials(v *ethpb.Validator) bool {
func HasExecutionWithdrawalCredentials(v interfaces.WithWithdrawalCredentials) bool {
if v == nil {
return false
}
@@ -674,68 +680,3 @@ func ValidatorMaxEffectiveBalance(val *ethpb.Validator) uint64 {
}
return params.BeaconConfig().MinActivationBalance
}
// QueueExcessActiveBalance queues validators with balances above the min activation balance and adds to pending balance deposit.
//
// Spec definition:
//
// def queue_excess_active_balance(state: BeaconState, index: ValidatorIndex) -> None:
// balance = state.balances[index]
// if balance > MIN_ACTIVATION_BALANCE:
// excess_balance = balance - MIN_ACTIVATION_BALANCE
// state.balances[index] = MIN_ACTIVATION_BALANCE
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=excess_balance)
// )
func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err
}
if bal > params.BeaconConfig().MinActivationBalance {
excessBalance := bal - params.BeaconConfig().MinActivationBalance
if err := s.UpdateBalancesAtIndex(idx, params.BeaconConfig().MinActivationBalance); err != nil {
return err
}
return s.AppendPendingBalanceDeposit(idx, excessBalance)
}
return nil
}
// QueueEntireBalanceAndResetValidator queues the entire balance and resets the validator. This is used in electra fork logic.
//
// Spec definition:
//
// def queue_entire_balance_and_reset_validator(state: BeaconState, index: ValidatorIndex) -> None:
// balance = state.balances[index]
// validator = state.validators[index]
// state.balances[index] = 0
// validator.effective_balance = 0
// validator.activation_eligibility_epoch = FAR_FUTURE_EPOCH
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=balance)
// )
func QueueEntireBalanceAndResetValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err
}
if err := s.UpdateBalancesAtIndex(idx, 0); err != nil {
return err
}
v, err := s.ValidatorAtIndex(idx)
if err != nil {
return err
}
v.EffectiveBalance = 0
v.ActivationEligibilityEpoch = params.BeaconConfig().FarFutureEpoch
if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
return err
}
return s.AppendPendingBalanceDeposit(idx, bal)
}
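To make the MaxEffectiveBalanceElectra change in ComputeProposerIndex above concrete, here is a minimal standalone sketch of the acceptance check (mainnet Gwei values assumed for MIN_ACTIVATION_BALANCE and MAX_EFFECTIVE_BALANCE_ELECTRA; not Prysm code). A candidate passes roughly with probability effective_balance / MAX_EFFECTIVE_BALANCE_ELECTRA, so post-Electra a 32 ETH validator is accepted about 64 times less often per sampling attempt than a fully consolidated 2048 ETH validator.

package main

import "fmt"

const (
	maxRandomByte              = 255               // MAX_RANDOM_BYTE = 2**8 - 1
	minActivationBalance       = 32_000_000_000    // 32 ETH in Gwei
	maxEffectiveBalanceElectra = 2_048_000_000_000 // 2048 ETH in Gwei
)

// accepted reports whether a candidate with the given effective balance passes the
// post-Electra threshold for the sampled random byte.
func accepted(effectiveBalance uint64, randomByte byte) bool {
	return effectiveBalance*maxRandomByte >= maxEffectiveBalanceElectra*uint64(randomByte)
}

func main() {
	// A 32 ETH validator only passes for random bytes 0..3 (about 32/2048 of the time),
	// while a 2048 ETH validator passes for every possible random byte.
	fmt.Println(accepted(minActivationBalance, 200))       // false
	fmt.Println(accepted(maxEffectiveBalanceElectra, 200)) // true
}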

View File

@@ -18,7 +18,6 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestIsActiveValidator_OK(t *testing.T) {
@@ -598,10 +597,11 @@ func TestComputeProposerIndex(t *testing.T) {
seed [32]byte
}
tests := []struct {
name string
args args
want primitives.ValidatorIndex
wantedErr string
name string
isElectraOrAbove bool
args args
want primitives.ValidatorIndex
wantedErr string
}{
{
name: "all_active_indices",
@@ -683,6 +683,54 @@ func TestComputeProposerIndex(t *testing.T) {
},
want: 7,
},
{
name: "electra_probability_changes",
isElectraOrAbove: true,
args: args{
validators: []*ethpb.Validator{
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
},
indices: []primitives.ValidatorIndex{3},
seed: seed,
},
want: 3,
},
{
name: "electra_probability_changes_all_active",
isElectraOrAbove: true,
args: args{
validators: []*ethpb.Validator{
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance}, // skip this one
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
},
indices: []primitives.ValidatorIndex{0, 1, 2, 3, 4},
seed: seed,
},
want: 4,
},
{
name: "electra_probability_returns_first_validator_with_criteria",
isElectraOrAbove: true,
args: args{
validators: []*ethpb.Validator{
{EffectiveBalance: 1},
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
{EffectiveBalance: 1},
{EffectiveBalance: 1},
{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
},
indices: []primitives.ValidatorIndex{1},
seed: seed,
},
want: 1,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -691,6 +739,11 @@ func TestComputeProposerIndex(t *testing.T) {
bState := &ethpb.BeaconState{Validators: tt.args.validators}
stTrie, err := state_native.InitializeFromProtoUnsafePhase0(bState)
require.NoError(t, err)
if tt.isElectraOrAbove {
stTrie, err = state_native.InitializeFromProtoUnsafeElectra(&ethpb.BeaconStateElectra{Validators: tt.args.validators})
require.NoError(t, err)
}
got, err := helpers.ComputeProposerIndex(stTrie, tt.args.indices, tt.args.seed)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
@@ -744,7 +797,9 @@ func TestIsEligibleForActivationQueue(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
assert.Equal(t, tt.want, helpers.IsEligibleForActivationQueue(tt.validator, tt.currentEpoch), "IsEligibleForActivationQueue()")
v, err := state_native.NewValidator(tt.validator)
assert.NoError(t, err)
assert.Equal(t, tt.want, helpers.IsEligibleForActivationQueue(v, tt.currentEpoch), "IsEligibleForActivationQueue()")
})
}
}
@@ -1120,40 +1175,3 @@ func TestValidatorMaxEffectiveBalance(t *testing.T) {
// Sanity check that MinActivationBalance equals (pre-electra) MaxEffectiveBalance
assert.Equal(t, params.BeaconConfig().MinActivationBalance, params.BeaconConfig().MaxEffectiveBalance)
}
func TestQueueExcessActiveBalance_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 1000
require.NoError(t, st.SetBalances(bals))
err := helpers.QueueExcessActiveBalance(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1000), pbd[0].Amount)
bals = st.Balances()
require.Equal(t, params.BeaconConfig().MinActivationBalance, bals[0])
}
func TestQueueEntireBalanceAndResetValidator_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
val, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalance, val.EffectiveBalance)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 0, len(pbd))
err = helpers.QueueEntireBalanceAndResetValidator(st, 0)
require.NoError(t, err)
pbd, err = st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
val, err = st.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(0), val.EffectiveBalance)
}

View File

@@ -40,7 +40,6 @@ go_library(
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -5,7 +5,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
b "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
@@ -71,7 +70,7 @@ func GenesisBeaconStateBellatrix(ctx context.Context, deposits []*ethpb.Deposit,
return nil, err
}
st, err = b.ProcessPreGenesisDeposits(ctx, st, deposits)
st, err = altair.ProcessPreGenesisDeposits(ctx, st, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process validator deposits")
}

View File

@@ -4,7 +4,7 @@ import (
"context"
"github.com/pkg/errors"
b "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
@@ -69,7 +69,7 @@ func GenesisBeaconState(ctx context.Context, deposits []*ethpb.Deposit, genesisT
return nil, err
}
st, err = b.ProcessPreGenesisDeposits(ctx, st, deposits)
st, err = altair.ProcessPreGenesisDeposits(ctx, st, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process validator deposits")
}
@@ -92,7 +92,7 @@ func PreminedGenesisBeaconState(ctx context.Context, deposits []*ethpb.Deposit,
if err != nil {
return nil, err
}
st, err = b.ProcessPreGenesisDeposits(ctx, st, deposits)
st, err = altair.ProcessPreGenesisDeposits(ctx, st, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process validator deposits")
}

View File

@@ -24,7 +24,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"go.opencensus.io/trace"
@@ -376,14 +375,25 @@ func VerifyOperationLengths(_ context.Context, state state.BeaconState, b interf
if eth1Data == nil {
return nil, errors.New("nil eth1data in state")
}
if state.Eth1DepositIndex() > eth1Data.DepositCount {
return nil, fmt.Errorf("expected state.deposit_index %d <= eth1data.deposit_count %d", state.Eth1DepositIndex(), eth1Data.DepositCount)
}
maxDeposits := math.Min(params.BeaconConfig().MaxDeposits, eth1Data.DepositCount-state.Eth1DepositIndex())
// Verify outstanding deposits are processed up to max number of deposits
if uint64(len(body.Deposits())) != maxDeposits {
return nil, fmt.Errorf("incorrect outstanding deposits in block body, wanted: %d, got: %d",
maxDeposits, len(body.Deposits()))
if state.Version() < version.Electra {
// Deneb specs
// # Verify that outstanding deposits are processed up to the maximum number of deposits
// assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)
if state.Eth1DepositIndex() > eth1Data.DepositCount {
return nil, fmt.Errorf("expected state.deposit_index %d <= eth1data.deposit_count %d", state.Eth1DepositIndex(), eth1Data.DepositCount)
}
maxDeposits := min(params.BeaconConfig().MaxDeposits, eth1Data.DepositCount-state.Eth1DepositIndex())
// Verify outstanding deposits are processed up to max number of deposits
if uint64(len(body.Deposits())) != maxDeposits {
return nil, fmt.Errorf("incorrect outstanding deposits in block body, wanted: %d, got: %d",
maxDeposits, len(body.Deposits()))
}
} else {
// Electra
if err := electra.VerifyBlockDepositLength(body, state); err != nil {
return nil, errors.Wrap(err, "failed to verify block deposit length")
}
}
return state, nil
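The pre-Electra branch above reduces to one arithmetic rule. A minimal sketch (hypothetical helper name, not Prysm code) of how many deposits a block body must carry:

package main

import "fmt"

// expectedDeposits mirrors the Deneb-and-earlier rule checked above: a block body must
// contain exactly min(MAX_DEPOSITS, eth1_data.deposit_count - state.eth1_deposit_index) deposits.
func expectedDeposits(maxDeposits, depositCount, depositIndex uint64) (uint64, error) {
	if depositIndex > depositCount {
		return 0, fmt.Errorf("expected deposit_index %d <= deposit_count %d", depositIndex, depositCount)
	}
	return min(maxDeposits, depositCount-depositIndex), nil
}

func main() {
	// With MAX_DEPOSITS = 16, 100 deposits seen by eth1_data and 90 already processed,
	// a valid pre-Electra block must carry exactly 10 deposits.
	fmt.Println(expectedDeposits(16, 100, 90)) // 10 <nil>
}

Post-Electra the check is delegated to electra.VerifyBlockDepositLength instead.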

View File

@@ -262,24 +262,21 @@ func ProcessOperationsNoVerifyAttsSigs(
}
var err error
switch beaconBlock.Version() {
case version.Phase0:
if beaconBlock.Version() == version.Phase0 {
state, err = phase0Operations(ctx, state, beaconBlock)
if err != nil {
return nil, err
}
case version.Altair, version.Bellatrix, version.Capella, version.Deneb:
} else if beaconBlock.Version() < version.Electra {
state, err = altairOperations(ctx, state, beaconBlock)
if err != nil {
return nil, err
}
case version.Electra:
state, err = electraOperations(ctx, state, beaconBlock)
} else {
state, err = electra.ProcessOperations(ctx, state, beaconBlock)
if err != nil {
return nil, err
}
default:
return nil, errors.New("block does not have correct version")
}
return state, nil
@@ -394,73 +391,6 @@ func VerifyBlobCommitmentCount(blk interfaces.ReadOnlyBeaconBlock) error {
return nil
}
// electraOperations
//
// Spec definition:
//
// def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
// # [Modified in Electra:EIP6110]
// # Disable former deposit mechanism once all prior deposits are processed
// eth1_deposit_index_limit = min(state.eth1_data.deposit_count, state.deposit_requests_start_index)
// if state.eth1_deposit_index < eth1_deposit_index_limit:
// assert len(body.deposits) == min(MAX_DEPOSITS, eth1_deposit_index_limit - state.eth1_deposit_index)
// else:
// assert len(body.deposits) == 0
//
// def for_ops(operations: Sequence[Any], fn: Callable[[BeaconState, Any], None]) -> None:
// for operation in operations:
// fn(state, operation)
//
// for_ops(body.proposer_slashings, process_proposer_slashing)
// for_ops(body.attester_slashings, process_attester_slashing)
// for_ops(body.attestations, process_attestation) # [Modified in Electra:EIP7549]
// for_ops(body.deposits, process_deposit) # [Modified in Electra:EIP7251]
// for_ops(body.voluntary_exits, process_voluntary_exit) # [Modified in Electra:EIP7251]
// for_ops(body.bls_to_execution_changes, process_bls_to_execution_change)
// # [New in Electra:EIP7002:EIP7251]
// for_ops(body.execution_payload.withdrawal_requests, process_execution_layer_withdrawal_request)
// for_ops(body.execution_payload.deposit_requests, process_deposit_requests) # [New in Electra:EIP6110]
// for_ops(body.consolidations, process_consolidation) # [New in Electra:EIP7251]
func electraOperations(
ctx context.Context,
st state.BeaconState,
block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
// 6110 validations are in VerifyOperationLengths
// Electra extends the altair operations.
st, err := altairOperations(ctx, st, block)
if err != nil {
return nil, err
}
b := block.Body()
bod, ok := b.(interfaces.ROBlockBodyElectra)
if !ok {
return nil, errors.New("could not cast block body to electra block body")
}
e, err := bod.Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution data from block")
}
exe, ok := e.(interfaces.ExecutionDataElectra)
if !ok {
return nil, errors.New("could not cast execution data to electra execution data")
}
st, err = electra.ProcessWithdrawalRequests(ctx, st, exe.WithdrawalRequests())
if err != nil {
return nil, errors.Wrap(err, "could not process execution layer withdrawal requests")
}
st, err = electra.ProcessDepositRequests(ctx, st, exe.DepositRequests()) // TODO: EIP-6110 deposit changes.
if err != nil {
return nil, errors.Wrap(err, "could not process deposit receipts")
}
if err := electra.ProcessConsolidations(ctx, st, bod.Consolidations()); err != nil {
return nil, errors.Wrap(err, "could not process consolidations")
}
return st, nil
}
// This calls altair block operations.
func altairOperations(
ctx context.Context,
@@ -505,7 +435,7 @@ func phase0Operations(
if err != nil {
return nil, errors.Wrap(err, "could not process block attestations")
}
if _, err := b.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process deposits")
}
return b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())

View File

@@ -161,6 +161,7 @@ type SlasherDatabase interface {
) ([]*ethpb.HighestAttestation, error)
DatabasePath() string
ClearDB() error
Migrate(ctx context.Context, headEpoch, maxPruningEpoch primitives.Epoch, batchSize int) error
}
// Database interface with full access.

View File

@@ -132,16 +132,6 @@ var blockTests = []struct {
b.Block.Slot = slot
if root != nil {
b.Block.ParentRoot = root
b.Block.Body.Consolidations = []*ethpb.SignedConsolidation{
{
Message: &ethpb.Consolidation{
SourceIndex: 1,
TargetIndex: 2,
Epoch: 3,
},
Signature: make([]byte, 96),
},
}
}
return blocks.NewSignedBeaconBlock(b)
},
@@ -153,16 +143,6 @@ var blockTests = []struct {
b.Message.Slot = slot
if root != nil {
b.Message.ParentRoot = root
b.Message.Body.Consolidations = []*ethpb.SignedConsolidation{
{
Message: &ethpb.Consolidation{
SourceIndex: 1,
TargetIndex: 2,
Epoch: 3,
},
Signature: make([]byte, 96),
},
}
}
return blocks.NewSignedBeaconBlock(b)
}},

View File

@@ -138,19 +138,20 @@ func TestState_CanSaveRetrieve(t *testing.T) {
require.NoError(t, err)
require.NoError(t, st.SetSlot(100))
p, err := blocks.WrappedExecutionPayloadHeaderElectra(&enginev1.ExecutionPayloadHeaderElectra{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: []byte("foo"),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),
WithdrawalsRoot: make([]byte, 32),
DepositRequestsRoot: make([]byte, 32),
WithdrawalRequestsRoot: make([]byte, 32),
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: []byte("foo"),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),
WithdrawalsRoot: make([]byte, 32),
DepositRequestsRoot: make([]byte, 32),
WithdrawalRequestsRoot: make([]byte, 32),
ConsolidationRequestsRoot: make([]byte, 32),
})
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(p))

View File

@@ -6,6 +6,7 @@ go_library(
"kv.go",
"log.go",
"metrics.go",
"migrate.go",
"pruning.go",
"schema.go",
"slasher.go",
@@ -37,6 +38,7 @@ go_test(
name = "go_default_test",
srcs = [
"kv_test.go",
"migrate_test.go",
"pruning_test.go",
"slasher_test.go",
"slasherkv_test.go",

View File

@@ -0,0 +1,231 @@
package slasherkv
import (
"context"
"encoding/binary"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
bolt "go.etcd.io/bbolt"
)
// Migrate, along with its corresponding usage and tests, can be removed entirely once Electra is on mainnet.
// Previously, the first 8 bytes of keys in the `attestation-data-roots` and `proposal-records` buckets
// were stored as little-endian encoded epochs and slots respectively. This was the source of
// https://github.com/prysmaticlabs/prysm/issues/14142 and potentially
// https://github.com/prysmaticlabs/prysm/issues/13658.
// To solve these issues, we decided to store the first 8 bytes of keys as big-endian.
// See https://github.com/prysmaticlabs/prysm/pull/14151.
// However, to preserve backward compatibility, we need to migrate the existing data.
// The strategy is simple: if, for one of these bucket keys, we detect an epoch (resp. slot)
// higher than the current epoch (resp. slot), then we consider that the data is stored as little-endian.
// We create a new entry with the same value, but with the epoch (resp. slot) part of the key stored as big-endian.
// We start from the highest key and iterate down until we reach the current epoch (resp. slot).
func (s *Store) Migrate(ctx context.Context, headEpoch, maxPruningEpoch primitives.Epoch, batchSize int) error {
// Migrate attestations.
log.Info("Starting migration of attestations. This may take a while.")
start := time.Now()
if err := s.migrateAttestations(ctx, headEpoch, maxPruningEpoch, batchSize); err != nil {
return errors.Wrap(err, "migrate attestations")
}
log.WithField("duration", time.Since(start)).Info("Migration of attestations completed successfully")
// Migrate proposals.
log.Info("Starting migration of proposals. This may take a while.")
start = time.Now()
if err := s.migrateProposals(ctx, headEpoch, maxPruningEpoch, batchSize); err != nil {
return errors.Wrap(err, "migrate proposals")
}
log.WithField("duration", time.Since(start)).Info("Migration of proposals completed successfully")
return nil
}
func (s *Store) migrateAttestations(ctx context.Context, headEpoch, maxPruningEpoch primitives.Epoch, batchSize int) error {
done := false
var epochLittleEndian uint64
for !done {
count := 0
if err := s.db.Update(func(tx *bolt.Tx) error {
signingRootsBkt := tx.Bucket(attestationDataRootsBucket)
attRecordsBkt := tx.Bucket(attestationRecordsBucket)
// We begin the migration by iterating from the last item in the bucket.
c := signingRootsBkt.Cursor()
for k, v := c.Last(); k != nil; k, v = c.Prev() {
if count >= batchSize {
log.WithField("epoch", epochLittleEndian).Info("Migrated attestations")
return nil
}
// Check if the context is done.
if ctx.Err() != nil {
return ctx.Err()
}
// Extract the epoch encoded in the first 8 bytes of the key.
encodedEpoch := k[:8]
// Convert it to an uint64, considering it is stored as big-endian.
epochBigEndian := binary.BigEndian.Uint64(encodedEpoch)
// If the epoch is smaller than or equal to the head epoch, we are done.
if epochBigEndian <= uint64(headEpoch) {
break
}
// Otherwise, we consider that the epoch is stored as little-endian.
epochLittleEndian = binary.LittleEndian.Uint64(encodedEpoch)
// Increment the count of migrated items.
count++
// If the epoch is still higher than the current epoch, then it is an issue.
// This should never happen.
if epochLittleEndian > uint64(headEpoch) {
log.WithFields(logrus.Fields{
"epochLittleEndian": epochLittleEndian,
"epochBigEndian": epochBigEndian,
"headEpoch": headEpoch,
}).Error("Epoch is higher than the current epoch both if stored as little-endian or as big-endian")
continue
}
epoch := primitives.Epoch(epochLittleEndian)
if err := signingRootsBkt.Delete(k); err != nil {
return err
}
// We don't bother migrating data that is going to be pruned by the pruning routine.
if epoch <= maxPruningEpoch {
if err := attRecordsBkt.Delete(v); err != nil {
return err
}
continue
}
// Create a new key with the epoch stored as big-endian.
newK := make([]byte, 8)
binary.BigEndian.PutUint64(newK, uint64(epoch))
newK = append(newK, k[8:]...)
// Store the same value with the new key.
if err := signingRootsBkt.Put(newK, v); err != nil {
return err
}
}
done = true
return nil
}); err != nil {
return err
}
}
return nil
}
func (s *Store) migrateProposals(ctx context.Context, headEpoch, maxPruningEpoch primitives.Epoch, batchSize int) error {
done := false
for !done {
count := 0
// Compute the max pruning slot.
maxPruningSlot, err := slots.EpochEnd(maxPruningEpoch)
if err != nil {
return errors.Wrap(err, "compute max pruning slot")
}
// Compute the head slot.
headSlot, err := slots.EpochEnd(headEpoch)
if err != nil {
return errors.Wrap(err, "compute head slot")
}
if err := s.db.Update(func(tx *bolt.Tx) error {
proposalBkt := tx.Bucket(proposalRecordsBucket)
// We begin the migration by iterating from the last item in the bucket.
c := proposalBkt.Cursor()
for k, v := c.Last(); k != nil; k, v = c.Prev() {
if count >= batchSize {
return nil
}
// Check if the context is done.
if ctx.Err() != nil {
return ctx.Err()
}
// Extract the slot encoded in the first 8 bytes of the key.
encodedSlot := k[:8]
// Convert it to an uint64, considering it is stored as big-endian.
slotBigEndian := binary.BigEndian.Uint64(encodedSlot)
// If the slot is smaller than or equal to the head slot, we are done.
if slotBigEndian <= uint64(headSlot) {
break
}
// Otherwise, we consider that the slot is stored as little-endian.
slotLittleEndian := binary.LittleEndian.Uint64(encodedSlot)
// If the slot is still higher than the current slot, then it is an issue.
// This should never happen.
if slotLittleEndian > uint64(headSlot) {
log.WithFields(logrus.Fields{
"slotLittleEndian": slotLittleEndian,
"slotBigEndian": slotBigEndian,
"headSlot": headSlot,
}).Error("Slot is higher than the current slot both if stored as little-endian or as big-endian")
continue
}
slot := primitives.Slot(slotLittleEndian)
if err := proposalBkt.Delete(k); err != nil {
return err
}
// We don't bother migrating data that is going to be pruned by the pruning routine.
if slot <= maxPruningSlot {
continue
}
// Create a new key with the slot stored as big-endian.
newK := make([]byte, 8)
binary.BigEndian.PutUint64(newK, uint64(slot))
newK = append(newK, k[8:]...)
// Store the same value with the new key.
if err := proposalBkt.Put(newK, v); err != nil {
return err
}
}
done = true
return nil
}); err != nil {
return err
}
}
return nil
}
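The migration exists because bbolt orders bucket keys by raw byte comparison (bytes.Compare behaves the same way), and that ordering agrees with numeric order only for big-endian encodings. Below is a small standalone example of the failure mode and of the detection heuristic used above: for realistic epoch values, a little-endian key reinterpreted as big-endian decodes to an implausibly large number.

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	a, b := make([]byte, 8), make([]byte, 8)

	// Little-endian keys break lexicographic ordering once values cross a byte boundary.
	binary.LittleEndian.PutUint64(a, 255)
	binary.LittleEndian.PutUint64(b, 256)
	fmt.Println(bytes.Compare(a, b)) // 1: 255 sorts *after* 256

	// Big-endian keys preserve numeric order under byte comparison.
	binary.BigEndian.PutUint64(a, 255)
	binary.BigEndian.PutUint64(b, 256)
	fmt.Println(bytes.Compare(a, b)) // -1: 255 sorts before 256

	// Detection heuristic: a legacy little-endian epoch such as 60000, reinterpreted as
	// big-endian, decodes to roughly 6.98e18, far above any plausible head epoch.
	binary.LittleEndian.PutUint64(a, 60000)
	fmt.Println(binary.BigEndian.Uint64(a) > 60000) // true
}

Because legacy little-endian keys decode to huge big-endian values, they all sit at the top of the bucket, which is why both migration loops walk the cursor from the last key downwards and stop at the first prefix at or below the head epoch (or head slot).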

View File

@@ -0,0 +1,239 @@
package slasherkv
import (
"context"
"encoding/binary"
"testing"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
bolt "go.etcd.io/bbolt"
)
type endianness int
const (
bigEndian endianness = iota
littleEndian
)
func encodeEpochLittleEndian(epoch primitives.Epoch) []byte {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, uint64(epoch))
return buf
}
func createAttestation(signingRootBucket, attRecordsBucket *bolt.Bucket, epoch primitives.Epoch, encoding endianness) error {
// Encode the target epoch.
var key []byte
if encoding == bigEndian {
key = encodeTargetEpoch(epoch)
} else {
key = encodeEpochLittleEndian(epoch)
}
// Encode the validator index.
encodedValidatorIndex := encodeValidatorIndex(primitives.ValidatorIndex(epoch))
// Write the attestation to the database.
key = append(key, encodedValidatorIndex...)
if err := signingRootBucket.Put(key, encodedValidatorIndex); err != nil {
return err
}
err := attRecordsBucket.Put(encodedValidatorIndex, []byte("dummy"))
return err
}
func createProposal(proposalBucket *bolt.Bucket, epoch primitives.Epoch, encoding endianness) error {
// Get the slot for the epoch.
slot := primitives.Slot(epoch) * params.BeaconConfig().SlotsPerEpoch
// Encode the slot.
key := make([]byte, 8)
if encoding == bigEndian {
binary.BigEndian.PutUint64(key, uint64(slot))
} else {
binary.LittleEndian.PutUint64(key, uint64(slot))
}
// Encode the validator index.
encodedValidatorIndex := encodeValidatorIndex(primitives.ValidatorIndex(slot))
// Write the proposal to the database.
key = append(key, encodedValidatorIndex...)
err := proposalBucket.Put(key, []byte("dummy"))
return err
}
func TestMigrate(t *testing.T) {
const (
headEpoch = primitives.Epoch(65000)
maxPruningEpoch = primitives.Epoch(60000)
batchSize = 3
)
/*
State of the DB before migration:
=================================
LE: Little-endian encoding
BE: Big-endian encoding
Attestations:
-------------
59000 (LE), 59100 (LE), 59200 (BE), 59300 (LE), 59400 (LE), 59500 (LE), 59600 (LE), 59700 (LE), 59800 (LE), 59900 (LE),
60000 (LE), 60100 (LE), 60200 (LE), 60300 (LE), 60400 (BE), 60500 (LE), 60600 (LE), 60700 (LE), 60800 (LE), 60900 (LE)
Proposals:
----------
59000*32 (LE), 59100*32 (LE), 59200*32 (BE), 59300*32 (LE), 59400*32 (LE), 59500*32 (LE), 59600*32 (LE), 59700*32 (LE), 59800*32 (LE), 59900*32 (LE),
60000*32 (LE), 60100*32 (LE), 60200*32 (LE), 60300*32 (LE), 60400*32 (BE), 60500*32 (LE), 60600*32 (LE), 60700*32 (LE), 60800*32 (LE), 60900*32 (LE)
State of the DB after migration:
================================
Attestations:
-------------
59200 (BE), 60100 (BE), 60200 (BE), 60300 (BE), 60400 (BE), 60500 (BE), 60600 (BE), 60700 (BE), 60800 (BE), 60900 (BE)
Proposals:
----------
59200*32 (BE), 60100*32 (BE), 60200*32 (BE), 60300*32 (BE), 60400*32 (BE), 60500*32 (BE), 60600*32 (BE), 60700*32 (BE), 60800*32 (BE), 60900*32 (BE)
*/
beforeLittleEndianEpochs := []primitives.Epoch{
59000, 59100, 59300, 59400, 59500, 59600, 59700, 59800, 59900,
60000, 60100, 60200, 60300, 60500, 60600, 60700, 60800, 60900,
}
beforeBigEndianEpochs := []primitives.Epoch{59200, 60400}
afterBigEndianEpochs := []primitives.Epoch{
59200, 60100, 60200, 60300, 60400, 60500, 60600, 60700, 60800, 60900,
}
// Create a new context.
ctx := context.Background()
// Setup a test database.
beaconDB := setupDB(t)
// Write attestations and proposals to the database.
err := beaconDB.db.Update(func(tx *bolt.Tx) error {
signingRootsBkt := tx.Bucket(attestationDataRootsBucket)
attRecordsBkt := tx.Bucket(attestationRecordsBucket)
proposalBkt := tx.Bucket(proposalRecordsBucket)
// Create attestations with little-endian encoding.
for _, epoch := range beforeLittleEndianEpochs {
if err := createAttestation(signingRootsBkt, attRecordsBkt, epoch, littleEndian); err != nil {
return err
}
}
// Create attestations with big-endian encoding.
for _, epoch := range beforeBigEndianEpochs {
if err := createAttestation(signingRootsBkt, attRecordsBkt, epoch, bigEndian); err != nil {
return err
}
}
// Create proposals with little-endian encoding.
for _, epoch := range beforeLittleEndianEpochs {
if err := createProposal(proposalBkt, epoch, littleEndian); err != nil {
return err
}
}
// Create proposals with big-endian encoding.
for _, epoch := range beforeBigEndianEpochs {
if err := createProposal(proposalBkt, epoch, bigEndian); err != nil {
return err
}
}
return nil
})
require.NoError(t, err)
// Migrate the database.
err = beaconDB.Migrate(ctx, headEpoch, maxPruningEpoch, batchSize)
require.NoError(t, err)
// Check the state of the database after migration.
err = beaconDB.db.View(func(tx *bolt.Tx) error {
signingRootsBkt := tx.Bucket(attestationDataRootsBucket)
attRecordsBkt := tx.Bucket(attestationRecordsBucket)
proposalBkt := tx.Bucket(proposalRecordsBucket)
// Check that all the expected attestations are in the store.
for _, epoch := range afterBigEndianEpochs {
// Check if the attestation exists.
key := encodeTargetEpoch(epoch)
encodedValidatorIndex := encodeValidatorIndex(primitives.ValidatorIndex(epoch))
key = append(key, encodedValidatorIndex...)
// Check the signing root bucket.
indexedAtt := signingRootsBkt.Get(key)
require.DeepSSZEqual(t, encodedValidatorIndex, indexedAtt)
// Check the attestation records bucket.
attestationRecord := attRecordsBkt.Get(encodedValidatorIndex)
require.DeepSSZEqual(t, []byte("dummy"), attestationRecord)
}
// Check only the expected attestations are in the store.
c := signingRootsBkt.Cursor()
count := 0
for k, _ := c.First(); k != nil; k, _ = c.Next() {
count++
}
require.Equal(t, len(afterBigEndianEpochs), count)
c = attRecordsBkt.Cursor()
count = 0
for k, _ := c.First(); k != nil; k, _ = c.Next() {
count++
}
require.Equal(t, len(afterBigEndianEpochs), count)
// Check that all the expected proposals are in the store.
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
for _, epoch := range afterBigEndianEpochs {
// Check if the proposal exists.
slot := primitives.Slot(epoch) * slotsPerEpoch
key := make([]byte, 8)
binary.BigEndian.PutUint64(key, uint64(slot))
encodedValidatorIndex := encodeValidatorIndex(primitives.ValidatorIndex(slot))
key = append(key, encodedValidatorIndex...)
// Check the proposal bucket.
proposal := proposalBkt.Get(key)
require.DeepEqual(t, []byte("dummy"), proposal)
}
// Check only the expected proposals are in the store.
c = proposalBkt.Cursor()
count = 0
for k, _ := c.First(); k != nil; k, _ = c.Next() {
count++
}
require.Equal(t, len(afterBigEndianEpochs), count)
return nil
})
require.NoError(t, err)
}

View File

@@ -5,7 +5,6 @@ import (
"context"
"encoding/binary"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
bolt "go.etcd.io/bbolt"
@@ -17,7 +16,8 @@ func (s *Store) PruneAttestationsAtEpoch(
_ context.Context, maxEpoch primitives.Epoch,
) (numPruned uint, err error) {
// We can prune everything less than the current epoch - history length.
encodedEndPruneEpoch := fssz.MarshalUint64([]byte{}, uint64(maxEpoch))
encodedEndPruneEpoch := make([]byte, 8)
binary.BigEndian.PutUint64(encodedEndPruneEpoch, uint64(maxEpoch))
// We retrieve the lowest stored epoch in the attestations bucket.
var lowestEpoch primitives.Epoch
@@ -30,7 +30,7 @@ func (s *Store) PruneAttestationsAtEpoch(
return nil
}
hasData = true
lowestEpoch = primitives.Epoch(binary.LittleEndian.Uint64(k))
lowestEpoch = primitives.Epoch(binary.BigEndian.Uint64(k))
return nil
}); err != nil {
return
@@ -92,7 +92,8 @@ func (s *Store) PruneProposalsAtEpoch(
if err != nil {
return
}
encodedEndPruneSlot := fssz.MarshalUint64([]byte{}, uint64(endPruneSlot))
encodedEndPruneSlot := make([]byte, 8)
binary.BigEndian.PutUint64(encodedEndPruneSlot, uint64(endPruneSlot))
// We retrieve the lowest stored slot in the proposals bucket.
var lowestSlot primitives.Slot
@@ -157,7 +158,7 @@ func (s *Store) PruneProposalsAtEpoch(
}
func slotFromProposalKey(key []byte) primitives.Slot {
return primitives.Slot(binary.LittleEndian.Uint64(key[:8]))
return primitives.Slot(binary.BigEndian.Uint64(key[:8]))
}
func uint64PrefixGreaterThan(key, lessThan []byte) bool {

View File

@@ -23,9 +23,9 @@ func TestStore_PruneProposalsAtEpoch(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := setupDB(t)
// With a current epoch of 20 and a history length of 10, we should be pruning
// everything before epoch (20 - 10) = 10.
currentEpoch := primitives.Epoch(20)
// With a current epoch of 30 and a history length of 10, we should be pruning
// everything before epoch (30 - 10) = 20.
currentEpoch := primitives.Epoch(30)
historyLength := primitives.Epoch(10)
pruningLimitEpoch := currentEpoch - historyLength
@@ -34,16 +34,14 @@ func TestStore_PruneProposalsAtEpoch(t *testing.T) {
err = beaconDB.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(proposalRecordsBucket)
key, err := keyForValidatorProposal(lowestStoredSlot+1, 0 /* proposer index */)
if err != nil {
return err
}
key := keyForValidatorProposal(lowestStoredSlot+1, 0 /* proposer index */)
return bkt.Put(key, []byte("hi"))
})
require.NoError(t, err)
_, err = beaconDB.PruneProposalsAtEpoch(ctx, pruningLimitEpoch)
numPruned, err := beaconDB.PruneProposalsAtEpoch(ctx, pruningLimitEpoch)
require.NoError(t, err)
require.Equal(t, uint(0), numPruned)
expectedLog := fmt.Sprintf(
"Lowest slot %d is > pruning slot %d, nothing to prune", lowestStoredSlot+1, lowestStoredSlot,
)
@@ -59,12 +57,14 @@ func TestStore_PruneProposalsAtEpoch(t *testing.T) {
params.OverrideBeaconConfig(config)
historyLength := primitives.Epoch(10)
currentEpoch := primitives.Epoch(20)
currentEpoch := primitives.Epoch(30)
pruningLimitEpoch := currentEpoch - historyLength
// We create proposals from genesis to the current epoch, with 2 proposals
// at each slot to ensure the entire pruning logic works correctly.
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
expectedNumPruned := 2 * uint(pruningLimitEpoch+1) * uint(slotsPerEpoch)
proposals := make([]*slashertypes.SignedBlockHeaderWrapper, 0, uint64(currentEpoch)*uint64(slotsPerEpoch)*2)
for i := primitives.Epoch(0); i < currentEpoch; i++ {
startSlot, err := slots.EpochStart(i)
@@ -81,8 +81,9 @@ func TestStore_PruneProposalsAtEpoch(t *testing.T) {
require.NoError(t, beaconDB.SaveBlockProposals(ctx, proposals))
// We expect pruning to complete without issue and to log progress properly.
_, err := beaconDB.PruneProposalsAtEpoch(ctx, pruningLimitEpoch)
actualNumPruned, err := beaconDB.PruneProposalsAtEpoch(ctx, pruningLimitEpoch)
require.NoError(t, err)
require.Equal(t, expectedNumPruned, actualNumPruned)
// Everything before the pruning limit epoch should be deleted.
for i := primitives.Epoch(0); i < pruningLimitEpoch; i++ {
@@ -93,14 +94,8 @@ func TestStore_PruneProposalsAtEpoch(t *testing.T) {
endSlot, err := slots.EpochStart(i + 1)
require.NoError(t, err)
for j := startSlot; j < endSlot; j++ {
prop1Key, err := keyForValidatorProposal(j, 0)
if err != nil {
return err
}
prop2Key, err := keyForValidatorProposal(j, 1)
if err != nil {
return err
}
prop1Key := keyForValidatorProposal(j, 0)
prop2Key := keyForValidatorProposal(j, 1)
if bkt.Get(prop1Key) != nil {
return fmt.Errorf("proposal still exists for epoch %d, validator 0", j)
}
@@ -124,9 +119,9 @@ func TestStore_PruneAttestations_OK(t *testing.T) {
hook := logTest.NewGlobal()
beaconDB := setupDB(t)
// With a current epoch of 20 and a history length of 10, we should be pruning
// everything before epoch (20 - 10) = 10.
currentEpoch := primitives.Epoch(20)
// With a current epoch of 30 and a history length of 10, we should be pruning
// everything before epoch (30 - 10) = 20.
currentEpoch := primitives.Epoch(30)
historyLength := primitives.Epoch(10)
pruningLimitEpoch := currentEpoch - historyLength
@@ -134,14 +129,16 @@ func TestStore_PruneAttestations_OK(t *testing.T) {
err := beaconDB.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(attestationDataRootsBucket)
encIdx := encodeValidatorIndex(primitives.ValidatorIndex(0))
encodedTargetEpoch := encodeTargetEpoch(lowestStoredEpoch + 1)
key := append(encodedTargetEpoch, encIdx...)
key := append(
encodeTargetEpoch(lowestStoredEpoch+1),
encodeValidatorIndex(primitives.ValidatorIndex(0))...,
)
return bkt.Put(key, []byte("hi"))
})
require.NoError(t, err)
_, err = beaconDB.PruneAttestationsAtEpoch(ctx, pruningLimitEpoch)
numPruned, err := beaconDB.PruneAttestationsAtEpoch(ctx, pruningLimitEpoch)
require.Equal(t, uint(0), numPruned)
require.NoError(t, err)
expectedLog := fmt.Sprintf(
"Lowest epoch %d is > pruning epoch %d, nothing to prune", lowestStoredEpoch+1, lowestStoredEpoch,
@@ -158,12 +155,14 @@ func TestStore_PruneAttestations_OK(t *testing.T) {
params.OverrideBeaconConfig(config)
historyLength := primitives.Epoch(10)
currentEpoch := primitives.Epoch(20)
currentEpoch := primitives.Epoch(30)
pruningLimitEpoch := currentEpoch - historyLength
// We create attestations from genesis to the current epoch, with 2 attestations
// at each slot to ensure the entire pruning logic works correctly.
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
expectedNumPruned := 2 * uint(pruningLimitEpoch+1) * uint(slotsPerEpoch)
attestations := make([]*slashertypes.IndexedAttestationWrapper, 0, uint64(currentEpoch)*uint64(slotsPerEpoch)*2)
for i := primitives.Epoch(0); i < currentEpoch; i++ {
startSlot, err := slots.EpochStart(i)
@@ -171,8 +170,8 @@ func TestStore_PruneAttestations_OK(t *testing.T) {
endSlot, err := slots.EpochStart(i + 1)
require.NoError(t, err)
for j := startSlot; j < endSlot; j++ {
attester1 := uint64(j + 10)
attester2 := uint64(j + 11)
attester1 := uint64(2 * j)
attester2 := uint64(2*j + 1)
target := i
var source primitives.Epoch
if i > 0 {
@@ -187,8 +186,9 @@ func TestStore_PruneAttestations_OK(t *testing.T) {
require.NoError(t, beaconDB.SaveAttestationRecordsForValidators(ctx, attestations))
// We expect pruning to complete without issue.
_, err := beaconDB.PruneAttestationsAtEpoch(ctx, pruningLimitEpoch)
actualNumPruned, err := beaconDB.PruneAttestationsAtEpoch(ctx, pruningLimitEpoch)
require.NoError(t, err)
require.Equal(t, expectedNumPruned, actualNumPruned)
// Everything before the pruning limit epoch should be deleted.
for i := primitives.Epoch(0); i < pruningLimitEpoch; i++ {

View File

@@ -484,15 +484,11 @@ func (s *Store) CheckDoubleBlockProposals(
for _, incomingProposal := range incomingProposals {
// Build the key corresponding to this slot + validator index combination
key, err := keyForValidatorProposal(
key := keyForValidatorProposal(
incomingProposal.SignedBeaconBlockHeader.Header.Slot,
incomingProposal.SignedBeaconBlockHeader.Header.ProposerIndex,
)
if err != nil {
return err
}
// Retrieve the existing proposal record from the database
encExistingProposalWrapper := bkt.Get(key)
@@ -531,11 +527,9 @@ func (s *Store) BlockProposalForValidator(
_, span := trace.StartSpan(ctx, "BeaconDB.BlockProposalForValidator")
defer span.End()
var record *slashertypes.SignedBlockHeaderWrapper
key, err := keyForValidatorProposal(slot, validatorIdx)
if err != nil {
return nil, err
}
err = s.db.View(func(tx *bolt.Tx) error {
key := keyForValidatorProposal(slot, validatorIdx)
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(proposalRecordsBucket)
encProposal := bkt.Get(key)
if encProposal == nil {
@@ -567,13 +561,10 @@ func (s *Store) SaveBlockProposals(
// Loop over all proposals to encode keys and proposals themselves.
for i, proposal := range proposals {
// Encode the key for this proposal.
key, err := keyForValidatorProposal(
key := keyForValidatorProposal(
proposal.SignedBeaconBlockHeader.Header.Slot,
proposal.SignedBeaconBlockHeader.Header.ProposerIndex,
)
if err != nil {
return err
}
// Encode the proposal itself.
enc, err := encodeProposalRecord(proposal)
@@ -657,13 +648,11 @@ func suffixForAttestationRecordsKey(key, encodedValidatorIndex []byte) bool {
}
// keyForValidatorProposal returns a disk key for a validator proposal, consisting of slot+validatorIndex as a byte slice.
func keyForValidatorProposal(slot primitives.Slot, proposerIndex primitives.ValidatorIndex) ([]byte, error) {
encSlot, err := slot.MarshalSSZ()
if err != nil {
return nil, err
}
func keyForValidatorProposal(slot primitives.Slot, proposerIndex primitives.ValidatorIndex) []byte {
encSlot := make([]byte, 8)
binary.BigEndian.PutUint64(encSlot, uint64(slot))
encValidatorIdx := encodeValidatorIndex(proposerIndex)
return append(encSlot, encValidatorIdx...), nil
return append(encSlot, encValidatorIdx...)
}
func encodeSlasherChunk(chunk []uint16) ([]byte, error) {
@@ -780,10 +769,10 @@ func decodeProposalRecord(encoded []byte) (*slashertypes.SignedBlockHeaderWrappe
}, nil
}
// Encodes an epoch into little-endian bytes.
// Encodes an epoch into big-endian bytes.
func encodeTargetEpoch(epoch primitives.Epoch) []byte {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, uint64(epoch))
binary.BigEndian.PutUint64(buf, uint64(epoch))
return buf
}
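For reference, the bucket keys touched above are a fixed-width slot/epoch prefix followed by an encoded validator index. A simplified sketch of that layout (the real encodeValidatorIndex uses a different width; an 8-byte big-endian index is used here purely for illustration):

package main

import (
	"encoding/binary"
	"fmt"
)

// proposalKey mimics keyForValidatorProposal: a big-endian slot prefix, then an
// index suffix. Only the prefix participates in the pruning comparisons.
func proposalKey(slot, proposerIndex uint64) []byte {
	key := make([]byte, 8, 16)
	binary.BigEndian.PutUint64(key, slot)
	idx := make([]byte, 8)
	binary.BigEndian.PutUint64(idx, proposerIndex)
	return append(key, idx...)
}

// slotFromKey mirrors slotFromProposalKey after the fix: decode the first
// 8 bytes as big-endian.
func slotFromKey(key []byte) uint64 {
	return binary.BigEndian.Uint64(key[:8])
}

func main() {
	k := proposalKey(642, 7)
	fmt.Println(slotFromKey(k)) // 642
	// Because the prefix is big-endian, keys for later slots are byte-wise
	// greater, so a cursor walk can stop at the encoded end-prune slot.
}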

View File

@@ -27,7 +27,7 @@ go_library(
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",

View File

@@ -5,7 +5,7 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/config/params"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -28,7 +28,8 @@ func (s *Service) processDeposit(ctx context.Context, eth1Data *ethpb.Eth1Data,
if err := s.preGenesisState.SetEth1Data(eth1Data); err != nil {
return err
}
beaconState, err := blocks.ProcessPreGenesisDeposits(ctx, s.preGenesisState, []*ethpb.Deposit{deposit})
// preGenesisState is always a genesis (phase 0) state, so the state version does not need to be checked here for post-Electra deposit processing
beaconState, err := altair.ProcessPreGenesisDeposits(ctx, s.preGenesisState, []*ethpb.Deposit{deposit})
if err != nil {
return errors.Wrap(err, "could not process pre-genesis deposits")
}

View File

@@ -647,14 +647,15 @@ func handleRPCError(err error) error {
if isTimeout(err) {
return ErrHTTPTimeout
}
e, ok := err.(gethRPC.Error)
var e gethRPC.Error
ok := errors.As(err, &e)
if !ok {
if strings.Contains(err.Error(), "401 Unauthorized") {
log.Error("HTTP authentication to your execution client is not working. Please ensure " +
"you are setting a correct value for the --jwt-secret flag in Prysm, or use an IPC connection if on " +
"the same machine. Please see our documentation for more information on authenticating connections " +
"here https://docs.prylabs.network/docs/execution-node/authentication")
return fmt.Errorf("could not authenticate connection to execution client: %v", err)
return fmt.Errorf("could not authenticate connection to execution client: %w", err)
}
return errors.Wrapf(err, "got an unexpected error in JSON-RPC response")
}
@@ -689,7 +690,8 @@ func handleRPCError(err error) error {
case -32000:
errServerErrorCount.Inc()
// Only -32000 status codes are data errors in the RPC specification.
errWithData, ok := err.(gethRPC.DataError)
var errWithData gethRPC.DataError
ok := errors.As(err, &errWithData)
if !ok {
return errors.Wrapf(err, "got an unexpected error in JSON-RPC response")
}
@@ -708,7 +710,8 @@ type httpTimeoutError interface {
}
func isTimeout(e error) bool {
t, ok := e.(httpTimeoutError)
var t httpTimeoutError
ok := errors.As(e, &t)
return ok && t.Timeout()
}
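The handleRPCError changes above replace direct type assertions with errors.As so that wrapped errors are still recognized. A minimal sketch (independent of the go-ethereum types) of why that matters:

package main

import (
	"errors"
	"fmt"
)

type rpcError struct{ code int }

func (e *rpcError) Error() string { return fmt.Sprintf("rpc error %d", e.code) }

func main() {
	// Errors returned through several layers are typically wrapped with %w.
	wrapped := fmt.Errorf("request failed: %w", &rpcError{code: -32000})

	// A plain type assertion only inspects the outermost error and misses it.
	if _, ok := wrapped.(*rpcError); !ok {
		fmt.Println("type assertion does not see the wrapped error")
	}

	// errors.As walks the wrap chain and finds the underlying value.
	var target *rpcError
	if errors.As(wrapped, &target) {
		fmt.Println("errors.As finds code", target.code)
	}
}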

View File

@@ -267,11 +267,12 @@ func (f *ForkChoice) updateBalances() error {
newBalances := f.justifiedBalances
zHash := params.BeaconConfig().ZeroHash
for index, vote := range f.votes {
for index := 0; index < len(f.votes); index++ {
// Skip if validator has been slashed
if f.store.slashedIndices[primitives.ValidatorIndex(index)] {
continue
}
vote := f.votes[index]
// Skip if the validator has never voted for the current root and next root (i.e. if the
// votes are the zero hash, aka the genesis block); there's nothing to compute.
if vote.currentRoot == zHash && vote.nextRoot == zHash {

View File

@@ -572,7 +572,7 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
if b.GenesisInitializer != nil {
if err := b.GenesisInitializer.Initialize(b.ctx, d); err != nil {
if err == db.ErrExistingGenesisState {
if errors.Is(err, db.ErrExistingGenesisState) {
return errors.Errorf("Genesis state flag specified but a genesis state "+
"exists already. Run again with --%s and/or ensure you are using the "+
"appropriate testnet flag to load the given genesis state.", cmd.ClearDB.Name)

View File

@@ -51,7 +51,7 @@ func (bc *bcnodeCollector) Collect(ch chan<- prometheus.Metric) {
func (bc *bcnodeCollector) getCurrentDbBytes() (float64, error) {
fs, err := os.Stat(bc.dbPath)
if err != nil {
return 0, fmt.Errorf("could not collect database file size for prometheus, path=%s, err=%s", bc.dbPath, err)
return 0, fmt.Errorf("could not collect database file size for prometheus, path=%s, err=%w", bc.dbPath, err)
}
return float64(fs.Size()), nil
}

View File

@@ -1,5 +1 @@
package registration
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "registration")

View File

@@ -7,6 +7,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/cmd"
"github.com/prysmaticlabs/prysm/v5/config/params"
log "github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
"gopkg.in/yaml.v2"
)
@@ -50,7 +51,8 @@ func readbootNodes(fileName string) ([]string, error) {
listNodes := make([]string, 0)
err = yaml.UnmarshalStrict(fileContent, &listNodes)
if err != nil {
if _, ok := err.(*yaml.TypeError); !ok {
var typeError *yaml.TypeError
if !errors.As(err, &typeError) {
return nil, err
} else {
log.WithError(err).Error("There were some issues parsing the bootnodes from a yaml file.")

View File

@@ -21,12 +21,13 @@ go_library(
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -16,9 +16,10 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//runtime/version:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
@@ -39,14 +40,15 @@ go_test(
embed = [":go_default_library"],
deps = [
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -9,7 +9,9 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
attaggregation "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation/aggregation/attestations"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -32,28 +34,28 @@ func (c *AttCaches) aggregateUnaggregatedAtts(ctx context.Context, unaggregatedA
_, span := trace.StartSpan(ctx, "operations.attestations.kv.aggregateUnaggregatedAtts")
defer span.End()
attsByDataRoot := make(map[[32]byte][]ethpb.Att, len(unaggregatedAtts))
attsByVerAndDataRoot := make(map[attestation.Id][]ethpb.Att, len(unaggregatedAtts))
for _, att := range unaggregatedAtts {
attDataRoot, err := att.GetData().HashTreeRoot()
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return err
return errors.Wrap(err, "could not create attestation ID")
}
attsByDataRoot[attDataRoot] = append(attsByDataRoot[attDataRoot], att)
attsByVerAndDataRoot[id] = append(attsByVerAndDataRoot[id], att)
}
// Aggregate unaggregated attestations from the pool and save them in the pool.
// Track the unaggregated attestations that aren't able to aggregate.
leftOverUnaggregatedAtt := make(map[[32]byte]bool)
leftOverUnaggregatedAtt := make(map[attestation.Id]bool)
leftOverUnaggregatedAtt = c.aggregateParallel(attsByDataRoot, leftOverUnaggregatedAtt)
leftOverUnaggregatedAtt = c.aggregateParallel(attsByVerAndDataRoot, leftOverUnaggregatedAtt)
// Remove the unaggregated attestations from the pool that were successfully aggregated.
for _, att := range unaggregatedAtts {
h, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return err
return errors.Wrap(err, "could not create attestation ID")
}
if leftOverUnaggregatedAtt[h] {
if leftOverUnaggregatedAtt[id] {
continue
}
if err := c.DeleteUnaggregatedAttestation(att); err != nil {
@@ -66,7 +68,7 @@ func (c *AttCaches) aggregateUnaggregatedAtts(ctx context.Context, unaggregatedA
// aggregateParallel aggregates the attestations in `atts` in parallel and saves them in the pool;
// it returns the unaggregated attestations that could not be aggregated.
// Given `n` CPU cores, it creates a channel of size `n` and spawns `n` goroutines to aggregate attestations.
func (c *AttCaches) aggregateParallel(atts map[[32]byte][]ethpb.Att, leftOver map[[32]byte]bool) map[[32]byte]bool {
func (c *AttCaches) aggregateParallel(atts map[attestation.Id][]ethpb.Att, leftOver map[attestation.Id]bool) map[attestation.Id]bool {
var leftoverLock sync.Mutex
wg := sync.WaitGroup{}
@@ -92,13 +94,13 @@ func (c *AttCaches) aggregateParallel(atts map[[32]byte][]ethpb.Att, leftOver ma
continue
}
} else {
h, err := hashFn(aggregated)
id, err := attestation.NewId(aggregated, attestation.Full)
if err != nil {
log.WithError(err).Error("could not hash attestation")
log.WithError(err).Error("Could not create attestation ID")
continue
}
leftoverLock.Lock()
leftOver[h] = true
leftOver[id] = true
leftoverLock.Unlock()
}
}
@@ -139,17 +141,18 @@ func (c *AttCaches) SaveAggregatedAttestation(att ethpb.Att) error {
return nil
}
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
copiedAtt := att.Copy()
c.aggregatedAttLock.Lock()
defer c.aggregatedAttLock.Unlock()
atts, ok := c.aggregatedAtt[r]
atts, ok := c.aggregatedAtt[id]
if !ok {
atts := []ethpb.Att{copiedAtt}
c.aggregatedAtt[r] = atts
c.aggregatedAtt[id] = atts
return nil
}
@@ -157,7 +160,7 @@ func (c *AttCaches) SaveAggregatedAttestation(att ethpb.Att) error {
if err != nil {
return err
}
c.aggregatedAtt[r] = atts
c.aggregatedAtt[id] = atts
return nil
}
@@ -191,17 +194,56 @@ func (c *AttCaches) AggregatedAttestations() []ethpb.Att {
// AggregatedAttestationsBySlotIndex returns the aggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att {
func (c *AttCaches) AggregatedAttestationsBySlotIndex(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.Attestation {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndex")
defer span.End()
atts := make([]ethpb.Att, 0)
atts := make([]*ethpb.Attestation, 0)
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
for _, a := range c.aggregatedAtt {
if slot == a[0].GetData().Slot && committeeIndex == a[0].GetData().CommitteeIndex {
atts = append(atts, a...)
for _, as := range c.aggregatedAtt {
if as[0].Version() == version.Phase0 && slot == as[0].GetData().Slot && committeeIndex == as[0].GetData().CommitteeIndex {
for _, a := range as {
att, ok := a.(*ethpb.Attestation)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
}
return atts
}
// AggregatedAttestationsBySlotIndexElectra returns the aggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) AggregatedAttestationsBySlotIndexElectra(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.AttestationElectra {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndexElectra")
defer span.End()
atts := make([]*ethpb.AttestationElectra, 0)
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
for _, as := range c.aggregatedAtt {
if as[0].Version() == version.Electra && slot == as[0].GetData().Slot && as[0].CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
for _, a := range as {
att, ok := a.(*ethpb.AttestationElectra)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
}
@@ -216,18 +258,19 @@ func (c *AttCaches) DeleteAggregatedAttestation(att ethpb.Att) error {
if !helpers.IsAggregated(att) {
return errors.New("attestation is not aggregated")
}
r, err := hashFn(att.GetData())
if err != nil {
return errors.Wrap(err, "could not tree hash attestation data")
}
if err := c.insertSeenBit(att); err != nil {
return err
}
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
}
c.aggregatedAttLock.Lock()
defer c.aggregatedAttLock.Unlock()
attList, ok := c.aggregatedAtt[r]
attList, ok := c.aggregatedAtt[id]
if !ok {
return nil
}
@@ -241,9 +284,9 @@ func (c *AttCaches) DeleteAggregatedAttestation(att ethpb.Att) error {
}
}
if len(filtered) == 0 {
delete(c.aggregatedAtt, r)
delete(c.aggregatedAtt, id)
} else {
c.aggregatedAtt[r] = filtered
c.aggregatedAtt[id] = filtered
}
return nil
@@ -254,14 +297,15 @@ func (c *AttCaches) HasAggregatedAttestation(att ethpb.Att) (bool, error) {
if err := helpers.ValidateNilAttestation(att); err != nil {
return false, err
}
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return false, errors.Wrap(err, "could not tree hash attestation")
return false, errors.Wrap(err, "could not create attestation ID")
}
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
if atts, ok := c.aggregatedAtt[r]; ok {
if atts, ok := c.aggregatedAtt[id]; ok {
for _, a := range atts {
if c, err := a.GetAggregationBits().Contains(att.GetAggregationBits()); err != nil {
return false, err
@@ -273,7 +317,7 @@ func (c *AttCaches) HasAggregatedAttestation(att ethpb.Att) (bool, error) {
c.blockAttLock.RLock()
defer c.blockAttLock.RUnlock()
if atts, ok := c.blockAtt[r]; ok {
if atts, ok := c.blockAtt[id]; ok {
for _, a := range atts {
if c, err := a.GetAggregationBits().Contains(att.GetAggregationBits()); err != nil {
return false, err

View File

@@ -7,10 +7,11 @@ import (
c "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -69,7 +70,7 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
}),
AggregationBits: bitfield.Bitlist{0b10111},
},
wantErrString: "could not tree hash attestation: --.BeaconBlockRoot (" + fssz.ErrBytesLength.Error() + ")",
wantErrString: "could not create attestation ID",
},
{
name: "already seen",
@@ -92,15 +93,13 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
count: 1,
},
}
r, err := hashFn(util.HydrateAttestationData(&ethpb.AttestationData{
Slot: 100,
}))
id, err := attestation.NewId(util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100}}), attestation.Data)
require.NoError(t, err)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cache := NewAttCaches()
cache.seenAtt.Set(string(r[:]), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
cache.seenAtt.Set(id.String(), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
assert.Equal(t, 0, len(cache.unAggregatedAtt), "Invalid start pool, atts: %d", len(cache.unAggregatedAtt))
err := cache.SaveAggregatedAttestation(tt.att)
@@ -230,7 +229,7 @@ func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
},
}
err := cache.DeleteAggregatedAttestation(att)
wantErr := "could not tree hash attestation data: --.BeaconBlockRoot (" + fssz.ErrBytesLength.Error() + ")"
wantErr := "could not create attestation ID"
assert.ErrorContains(t, wantErr, err)
})
@@ -500,3 +499,49 @@ func TestKV_Aggregated_DuplicateAggregatedAttestations(t *testing.T) {
assert.DeepSSZEqual(t, att2, returned[0], "Did not receive correct aggregated atts")
assert.Equal(t, 1, len(returned), "Did not receive correct aggregated atts")
}
func TestKV_Aggregated_AggregatedAttestationsBySlotIndex(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b1011}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveAggregatedAttestation(att))
}
ctx := context.Background()
returned := cache.AggregatedAttestationsBySlotIndex(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.Attestation{att1}, returned)
returned = cache.AggregatedAttestationsBySlotIndex(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.Attestation{att2}, returned)
returned = cache.AggregatedAttestationsBySlotIndex(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.Attestation{att3}, returned)
}
func TestKV_Aggregated_AggregatedAttestationsBySlotIndexElectra(t *testing.T) {
cache := NewAttCaches()
committeeBits := primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att1 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1011}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(2, true)
att2 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att3 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}, CommitteeBits: committeeBits})
atts := []*ethpb.AttestationElectra{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveAggregatedAttestation(att))
}
ctx := context.Background()
returned := cache.AggregatedAttestationsBySlotIndexElectra(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att1}, returned)
returned = cache.AggregatedAttestationsBySlotIndexElectra(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att2}, returned)
returned = cache.AggregatedAttestationsBySlotIndexElectra(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att3}, returned)
}

View File

@@ -3,6 +3,7 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
// SaveBlockAttestation saves a block attestation in the cache.
@@ -10,14 +11,15 @@ func (c *AttCaches) SaveBlockAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.blockAttLock.Lock()
defer c.blockAttLock.Unlock()
atts, ok := c.blockAtt[r]
atts, ok := c.blockAtt[id]
if !ok {
atts = make([]ethpb.Att, 0, 1)
}
@@ -31,7 +33,7 @@ func (c *AttCaches) SaveBlockAttestation(att ethpb.Att) error {
}
}
c.blockAtt[r] = append(atts, att.Copy())
c.blockAtt[id] = append(atts, att.Copy())
return nil
}
@@ -54,14 +56,15 @@ func (c *AttCaches) DeleteBlockAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.blockAttLock.Lock()
defer c.blockAttLock.Unlock()
delete(c.blockAtt, r)
delete(c.blockAtt, id)
return nil
}

View File

@@ -3,6 +3,7 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
// SaveForkchoiceAttestation saves a forkchoice attestation in the cache.
@@ -10,14 +11,15 @@ func (c *AttCaches) SaveForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
r, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.forkchoiceAttLock.Lock()
defer c.forkchoiceAttLock.Unlock()
c.forkchoiceAtt[r] = att.Copy()
c.forkchoiceAtt[id] = att
return nil
}
@@ -51,14 +53,15 @@ func (c *AttCaches) DeleteForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
r, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.forkchoiceAttLock.Lock()
defer c.forkchoiceAttLock.Unlock()
delete(c.forkchoiceAtt, r)
delete(c.forkchoiceAtt, id)
return nil
}

View File

@@ -9,24 +9,22 @@ import (
"github.com/patrickmn/go-cache"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
var hashFn = hash.Proto
// AttCaches defines the caches used to satisfy attestation pool interface.
// These caches are KV stores for various attestations,
// such as unaggregated, aggregated, or attestations within a block.
type AttCaches struct {
aggregatedAttLock sync.RWMutex
aggregatedAtt map[[32]byte][]ethpb.Att
aggregatedAtt map[attestation.Id][]ethpb.Att
unAggregateAttLock sync.RWMutex
unAggregatedAtt map[[32]byte]ethpb.Att
unAggregatedAtt map[attestation.Id]ethpb.Att
forkchoiceAttLock sync.RWMutex
forkchoiceAtt map[[32]byte]ethpb.Att
forkchoiceAtt map[attestation.Id]ethpb.Att
blockAttLock sync.RWMutex
blockAtt map[[32]byte][]ethpb.Att
blockAtt map[attestation.Id][]ethpb.Att
seenAtt *cache.Cache
}
@@ -34,12 +32,12 @@ type AttCaches struct {
// various kind of attestations.
func NewAttCaches() *AttCaches {
secsInEpoch := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
c := cache.New(secsInEpoch*time.Second, 2*secsInEpoch*time.Second)
c := cache.New(2*secsInEpoch*time.Second, 2*secsInEpoch*time.Second)
pool := &AttCaches{
unAggregatedAtt: make(map[[32]byte]ethpb.Att),
aggregatedAtt: make(map[[32]byte][]ethpb.Att),
forkchoiceAtt: make(map[[32]byte]ethpb.Att),
blockAtt: make(map[[32]byte][]ethpb.Att),
unAggregatedAtt: make(map[attestation.Id]ethpb.Att),
aggregatedAtt: make(map[attestation.Id][]ethpb.Att),
forkchoiceAtt: make(map[attestation.Id]ethpb.Att),
blockAtt: make(map[attestation.Id][]ethpb.Att),
seenAtt: c,
}
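The cache rework above swaps the [32]byte hash keys for attestation.Id values. A simplified illustration of the idea (the struct below is hypothetical and stands in for Prysm's actual attestation.Id): a small comparable struct combining the attestation version with its data root can be used directly as a Go map key, so Phase0 and Electra attestations with identical data no longer collide.

package main

import "fmt"

// attID is an illustrative stand-in for attestation.Id: comparable, so it can
// key a map, and it distinguishes otherwise-identical data by version.
type attID struct {
	version  int
	dataRoot [32]byte
}

func main() {
	pool := make(map[attID][]string)

	var root [32]byte
	copy(root[:], "same-attestation-data-root")

	phase0 := attID{version: 0, dataRoot: root}
	electra := attID{version: 5, dataRoot: root}

	pool[phase0] = append(pool[phase0], "phase0 att")
	pool[electra] = append(pool[electra], "electra att")

	fmt.Println(len(pool)) // 2: same data root, different versions, separate entries
}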

View File

@@ -1,21 +1,20 @@
package kv
import (
"fmt"
"github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
func (c *AttCaches) insertSeenBit(att ethpb.Att) error {
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return err
return errors.Wrap(err, "could not create attestation ID")
}
v, ok := c.seenAtt.Get(string(r[:]))
v, ok := c.seenAtt.Get(id.String())
if ok {
seenBits, ok := v.([]bitfield.Bitlist)
if !ok {
@@ -24,7 +23,7 @@ func (c *AttCaches) insertSeenBit(att ethpb.Att) error {
alreadyExists := false
for _, bit := range seenBits {
if c, err := bit.Contains(att.GetAggregationBits()); err != nil {
return fmt.Errorf("failed to check seen bits on attestation when inserting bit: %w", err)
return err
} else if c {
alreadyExists = true
break
@@ -33,21 +32,21 @@ func (c *AttCaches) insertSeenBit(att ethpb.Att) error {
if !alreadyExists {
seenBits = append(seenBits, att.GetAggregationBits())
}
c.seenAtt.Set(string(r[:]), seenBits, cache.DefaultExpiration /* one epoch */)
c.seenAtt.Set(id.String(), seenBits, cache.DefaultExpiration /* one epoch */)
return nil
}
c.seenAtt.Set(string(r[:]), []bitfield.Bitlist{att.GetAggregationBits()}, cache.DefaultExpiration /* one epoch */)
c.seenAtt.Set(id.String(), []bitfield.Bitlist{att.GetAggregationBits()}, cache.DefaultExpiration /* one epoch */)
return nil
}
func (c *AttCaches) hasSeenBit(att ethpb.Att) (bool, error) {
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return false, err
return false, errors.Wrap(err, "could not create attestation ID")
}
v, ok := c.seenAtt.Get(string(r[:]))
v, ok := c.seenAtt.Get(id.String())
if ok {
seenBits, ok := v.([]bitfield.Bitlist)
if !ok {
@@ -55,7 +54,7 @@ func (c *AttCaches) hasSeenBit(att ethpb.Att) (bool, error) {
}
for _, bit := range seenBits {
if c, err := bit.Contains(att.GetAggregationBits()); err != nil {
return false, fmt.Errorf("failed to check seen bits on attestation when reading bit: %w", err)
return false, err
} else if c {
return true, nil
}
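The seen-bit helpers above decide whether an incoming attestation adds anything new by checking bitlist containment. A standalone sketch of that check, using only the go-bitfield dependency already imported in this file:

package main

import (
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

func main() {
	// Aggregation bits already recorded for an attestation data root.
	seen := bitfield.NewBitlist(8)
	seen.SetBitAt(1, true)
	seen.SetBitAt(3, true)

	// An incoming attestation whose bits are fully covered is redundant.
	incoming := bitfield.NewBitlist(8)
	incoming.SetBitAt(3, true)
	covered, err := seen.Contains(incoming)
	if err != nil {
		panic(err)
	}
	fmt.Println("already seen:", covered) // true

	// A new bit means the attestation still carries information.
	incoming.SetBitAt(5, true)
	covered, err = seen.Contains(incoming)
	if err != nil {
		panic(err)
	}
	fmt.Println("already seen:", covered) // false
}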

View File

@@ -5,6 +5,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
@@ -39,18 +40,18 @@ func TestAttCaches_hasSeenBit(t *testing.T) {
func TestAttCaches_insertSeenBitDuplicates(t *testing.T) {
c := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000011}})
r, err := hashFn(att1.Data)
id, err := attestation.NewId(att1, attestation.Data)
require.NoError(t, err)
require.NoError(t, c.insertSeenBit(att1))
require.Equal(t, 1, c.seenAtt.ItemCount())
_, expirationTime1, ok := c.seenAtt.GetWithExpiration(string(r[:]))
_, expirationTime1, ok := c.seenAtt.GetWithExpiration(id.String())
require.Equal(t, true, ok)
// Make sure that duplicates are not inserted, but expiration time gets updated.
require.NoError(t, c.insertSeenBit(att1))
require.Equal(t, 1, c.seenAtt.ItemCount())
_, expirationprysmTime, ok := c.seenAtt.GetWithExpiration(string(r[:]))
_, expirationprysmTime, ok := c.seenAtt.GetWithExpiration(id.String())
require.Equal(t, true, ok)
require.Equal(t, true, expirationprysmTime.After(expirationTime1), "Expiration time is not updated")
}

View File

@@ -7,6 +7,8 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"go.opencensus.io/trace"
)
@@ -27,13 +29,14 @@ func (c *AttCaches) SaveUnaggregatedAttestation(att ethpb.Att) error {
return nil
}
r, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.unAggregateAttLock.Lock()
defer c.unAggregateAttLock.Unlock()
c.unAggregatedAtt[r] = att.Copy()
c.unAggregatedAtt[id] = att
return nil
}
@@ -69,19 +72,56 @@ func (c *AttCaches) UnaggregatedAttestations() ([]ethpb.Att, error) {
// UnaggregatedAttestationsBySlotIndex returns the unaggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) UnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att {
func (c *AttCaches) UnaggregatedAttestationsBySlotIndex(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.Attestation {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.UnaggregatedAttestationsBySlotIndex")
defer span.End()
atts := make([]ethpb.Att, 0)
atts := make([]*ethpb.Attestation, 0)
c.unAggregateAttLock.RLock()
defer c.unAggregateAttLock.RUnlock()
unAggregatedAtts := c.unAggregatedAtt
for _, a := range unAggregatedAtts {
if slot == a.GetData().Slot && committeeIndex == a.GetData().CommitteeIndex {
atts = append(atts, a)
if a.Version() == version.Phase0 && slot == a.GetData().Slot && committeeIndex == a.GetData().CommitteeIndex {
att, ok := a.(*ethpb.Attestation)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
return atts
}
// UnaggregatedAttestationsBySlotIndexElectra returns the unaggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) UnaggregatedAttestationsBySlotIndexElectra(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.AttestationElectra {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.UnaggregatedAttestationsBySlotIndexElectra")
defer span.End()
atts := make([]*ethpb.AttestationElectra, 0)
c.unAggregateAttLock.RLock()
defer c.unAggregateAttLock.RUnlock()
unAggregatedAtts := c.unAggregatedAtt
for _, a := range unAggregatedAtts {
if a.Version() == version.Electra && slot == a.GetData().Slot && a.CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
att, ok := a.(*ethpb.AttestationElectra)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
@@ -101,14 +141,14 @@ func (c *AttCaches) DeleteUnaggregatedAttestation(att ethpb.Att) error {
return err
}
r, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.unAggregateAttLock.Lock()
defer c.unAggregateAttLock.Unlock()
delete(c.unAggregatedAtt, r)
delete(c.unAggregatedAtt, id)
return nil
}

View File

@@ -7,10 +7,11 @@ import (
"testing"
c "github.com/patrickmn/go-cache"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -39,7 +40,7 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
BeaconBlockRoot: []byte{0b0},
},
},
wantErrString: fssz.ErrBytesLength.Error(),
wantErrString: "could not create attestation ID",
},
{
name: "normal save",
@@ -57,13 +58,13 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
count: 0,
},
}
r, err := hashFn(util.HydrateAttestationData(&ethpb.AttestationData{Slot: 100}))
id, err := attestation.NewId(util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100}}), attestation.Data)
require.NoError(t, err)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cache := NewAttCaches()
cache.seenAtt.Set(string(r[:]), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
cache.seenAtt.Set(id.String(), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
assert.Equal(t, 0, len(cache.unAggregatedAtt), "Invalid start pool, atts: %d", len(cache.unAggregatedAtt))
if tt.att != nil && tt.att.GetSignature() == nil {
@@ -246,9 +247,35 @@ func TestKV_Unaggregated_UnaggregatedAttestationsBySlotIndex(t *testing.T) {
}
ctx := context.Background()
returned := cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 1)
assert.DeepEqual(t, []ethpb.Att{att1}, returned)
assert.DeepEqual(t, []*ethpb.Attestation{att1}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 2)
assert.DeepEqual(t, []ethpb.Att{att2}, returned)
assert.DeepEqual(t, []*ethpb.Attestation{att2}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndex(ctx, 2, 1)
assert.DeepEqual(t, []ethpb.Att{att3}, returned)
assert.DeepEqual(t, []*ethpb.Attestation{att3}, returned)
}
func TestKV_Unaggregated_UnaggregatedAttestationsBySlotIndexElectra(t *testing.T) {
cache := NewAttCaches()
committeeBits := primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att1 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b101}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(2, true)
att2 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b110}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att3 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110}, CommitteeBits: committeeBits})
atts := []*ethpb.AttestationElectra{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveUnaggregatedAttestation(att))
}
ctx := context.Background()
returned := cache.UnaggregatedAttestationsBySlotIndexElectra(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att1}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndexElectra(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att2}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndexElectra(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att3}, returned)
}

View File

@@ -18,7 +18,8 @@ type Pool interface {
SaveAggregatedAttestation(att ethpb.Att) error
SaveAggregatedAttestations(atts []ethpb.Att) error
AggregatedAttestations() []ethpb.Att
AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att
AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.Attestation
AggregatedAttestationsBySlotIndexElectra(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.AttestationElectra
DeleteAggregatedAttestation(att ethpb.Att) error
HasAggregatedAttestation(att ethpb.Att) (bool, error)
AggregatedAttestationCount() int
@@ -26,7 +27,8 @@ type Pool interface {
SaveUnaggregatedAttestation(att ethpb.Att) error
SaveUnaggregatedAttestations(atts []ethpb.Att) error
UnaggregatedAttestations() ([]ethpb.Att, error)
UnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att
UnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.Attestation
UnaggregatedAttestationsBySlotIndexElectra(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.AttestationElectra
DeleteUnaggregatedAttestation(att ethpb.Att) error
DeleteSeenUnaggregatedAttestations() (int, error)
UnaggregatedAttestationCount() int

View File

@@ -3,14 +3,14 @@ package attestations
import (
"bytes"
"context"
"errors"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
attaggregation "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation/aggregation/attestations"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
@@ -67,7 +67,7 @@ func (s *Service) batchForkChoiceAtts(ctx context.Context) error {
atts := append(s.cfg.Pool.AggregatedAttestations(), s.cfg.Pool.BlockAttestations()...)
atts = append(atts, s.cfg.Pool.ForkchoiceAttestations()...)
attsByDataRoot := make(map[[32]byte][]ethpb.Att, len(atts))
attsByVerAndDataRoot := make(map[attestation.Id][]ethpb.Att, len(atts))
// Consolidate attestations by aggregating them by similar data root.
for _, att := range atts {
@@ -79,14 +79,14 @@ func (s *Service) batchForkChoiceAtts(ctx context.Context) error {
continue
}
attDataRoot, err := att.GetData().HashTreeRoot()
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return err
return errors.Wrap(err, "could not create attestation ID")
}
attsByDataRoot[attDataRoot] = append(attsByDataRoot[attDataRoot], att)
attsByVerAndDataRoot[id] = append(attsByVerAndDataRoot[id], att)
}
for _, atts := range attsByDataRoot {
for _, atts := range attsByVerAndDataRoot {
if err := s.aggregateAndSaveForkChoiceAtts(atts); err != nil {
return err
}
@@ -119,12 +119,12 @@ func (s *Service) aggregateAndSaveForkChoiceAtts(atts []ethpb.Att) error {
// This checks whether the attestation has previously been aggregated for fork choice;
// it returns true if so, false otherwise.
func (s *Service) seen(att ethpb.Att) (bool, error) {
attRoot, err := hash.Proto(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return false, err
return false, errors.Wrap(err, "could not create attestation ID")
}
incomingBits := att.GetAggregationBits()
savedBits, ok := s.forkChoiceProcessedRoots.Get(attRoot)
savedBits, ok := s.forkChoiceProcessedAtts.Get(id)
if ok {
savedBitlist, ok := savedBits.(bitfield.Bitlist)
if !ok {
@@ -149,6 +149,6 @@ func (s *Service) seen(att ethpb.Att) (bool, error) {
}
}
s.forkChoiceProcessedRoots.Add(attRoot, incomingBits)
s.forkChoiceProcessedAtts.Add(id, incomingBits)
return false, nil
}

View File

@@ -13,16 +13,16 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
)
var forkChoiceProcessedRootsSize = 1 << 16
var forkChoiceProcessedAttsSize = 1 << 16
// Service of attestation pool operations.
type Service struct {
cfg *Config
ctx context.Context
cancel context.CancelFunc
err error
forkChoiceProcessedRoots *lru.Cache
genesisTime uint64
cfg *Config
ctx context.Context
cancel context.CancelFunc
err error
forkChoiceProcessedAtts *lru.Cache
genesisTime uint64
}
// Config options for the service.
@@ -35,7 +35,7 @@ type Config struct {
// NewService instantiates a new attestation pool service instance that will
// be registered into a running beacon node.
func NewService(ctx context.Context, cfg *Config) (*Service, error) {
cache := lruwrpr.New(forkChoiceProcessedRootsSize)
cache := lruwrpr.New(forkChoiceProcessedAttsSize)
if cfg.pruneInterval == 0 {
// Prune expired attestations from the pool every slot interval.
@@ -44,10 +44,10 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
return &Service{
cfg: cfg,
ctx: ctx,
cancel: cancel,
forkChoiceProcessedRoots: cache,
cfg: cfg,
ctx: ctx,
cancel: cancel,
forkChoiceProcessedAtts: cache,
}, nil
}

View File

@@ -112,8 +112,6 @@ type HistoricalSummaryCreator struct{}
type BlobIdentifierCreator struct{}
type PendingBalanceDepositCreator struct{}
type PendingPartialWithdrawalCreator struct{}
type ConsolidationCreator struct{}
type SignedConsolidationCreator struct{}
type PendingConsolidationCreator struct{}
type StatusCreator struct{}
type BeaconBlocksByRangeRequestCreator struct{}
@@ -287,10 +285,6 @@ func (PendingBalanceDepositCreator) Create() MarshalerProtoMessage {
func (PendingPartialWithdrawalCreator) Create() MarshalerProtoMessage {
return &ethpb.PendingPartialWithdrawal{}
}
func (ConsolidationCreator) Create() MarshalerProtoMessage { return &ethpb.Consolidation{} }
func (SignedConsolidationCreator) Create() MarshalerProtoMessage {
return &ethpb.SignedConsolidation{}
}
func (PendingConsolidationCreator) Create() MarshalerProtoMessage {
return &ethpb.PendingConsolidation{}
}
@@ -405,8 +399,6 @@ var creators = []MarshalerProtoCreator{
BlobIdentifierCreator{},
PendingBalanceDepositCreator{},
PendingPartialWithdrawalCreator{},
ConsolidationCreator{},
SignedConsolidationCreator{},
PendingConsolidationCreator{},
StatusCreator{},
BeaconBlocksByRangeRequestCreator{},
@@ -553,26 +545,6 @@ func TestSszNetworkEncoder_RoundTrip_SignedVoluntaryExit(t *testing.T) {
assertProtoMessagesEqual(t, decoded, msg)
}
func TestSszNetworkEncoder_RoundTrip_Consolidation(t *testing.T) {
e := &encoder.SszNetworkEncoder{}
buf := new(bytes.Buffer)
data := []byte("\xc800")
msg := &ethpb.Consolidation{}
if err := proto.Unmarshal(data, msg); err != nil {
t.Logf("Failed to unmarshal: %v", err)
return
}
_, err := e.EncodeGossip(buf, msg)
require.NoError(t, err)
decoded := &ethpb.Consolidation{}
require.NoError(t, e.DecodeGossip(buf.Bytes(), decoded))
assertProtoMessagesEqual(t, decoded, msg)
}
func TestSszNetworkEncoder_RoundTrip(t *testing.T) {
e := &encoder.SszNetworkEncoder{}
testRoundTripWithLength(t, e)

View File

@@ -27,7 +27,8 @@ var gossipTopicMappings = map[string]proto.Message{
// GossipTopicMappings returns the assigned data type for a topic,
// versioned by epoch.
func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
if topic == BlockSubnetTopicFormat {
switch topic {
case BlockSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.SignedBeaconBlockElectra{}
}
@@ -43,8 +44,25 @@ func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
if epoch >= params.BeaconConfig().AltairForkEpoch {
return &ethpb.SignedBeaconBlockAltair{}
}
return gossipTopicMappings[topic]
case AttestationSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.AttestationElectra{}
}
return gossipTopicMappings[topic]
case AttesterSlashingSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.AttesterSlashingElectra{}
}
return gossipTopicMappings[topic]
case AggregateAndProofSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.SignedAggregateAttestationAndProofElectra{}
}
return gossipTopicMappings[topic]
default:
return gossipTopicMappings[topic]
}
return gossipTopicMappings[topic]
}
// AllTopics returns all topics stored in our
@@ -75,4 +93,7 @@ func init() {
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockDeneb{})] = BlockSubnetTopicFormat
// Specially handle Electra objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockElectra{})] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttestationElectra{})] = AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttesterSlashingElectra{})] = AttesterSlashingSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedAggregateAttestationAndProofElectra{})] = AggregateAndProofSubnetTopicFormat
}
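The switch added to GossipTopicMappings keys the wire type off the configured fork epochs. A compact restatement of the pattern (plain Go, not Prysm's actual types or constants):

package main

import "fmt"

type epoch uint64

const electraFork epoch = 500 // illustrative value, mirroring the test config

// attestationTypeFor picks the attestation wire type for a given epoch,
// falling back to the pre-fork default.
func attestationTypeFor(e epoch) string {
	if e >= electraFork {
		return "AttestationElectra"
	}
	return "Attestation"
}

func main() {
	fmt.Println(attestationTypeFor(0))   // Attestation
	fmt.Println(attestationTypeFor(499)) // Attestation
	fmt.Println(attestationTypeFor(500)) // AttestationElectra
}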

View File

@@ -22,20 +22,20 @@ func TestMappingHasNoDuplicates(t *testing.T) {
}
}
func TestGossipTopicMappings_CorrectBlockType(t *testing.T) {
func TestGossipTopicMappings_CorrectType(t *testing.T) {
params.SetupTestConfigCleanup(t)
bCfg := params.BeaconConfig().Copy()
altairForkEpoch := primitives.Epoch(100)
BellatrixForkEpoch := primitives.Epoch(200)
CapellaForkEpoch := primitives.Epoch(300)
DenebForkEpoch := primitives.Epoch(400)
ElectraForkEpoch := primitives.Epoch(500)
bellatrixForkEpoch := primitives.Epoch(200)
capellaForkEpoch := primitives.Epoch(300)
denebForkEpoch := primitives.Epoch(400)
electraForkEpoch := primitives.Epoch(500)
bCfg.AltairForkEpoch = altairForkEpoch
bCfg.BellatrixForkEpoch = BellatrixForkEpoch
bCfg.CapellaForkEpoch = CapellaForkEpoch
bCfg.DenebForkEpoch = DenebForkEpoch
bCfg.ElectraForkEpoch = ElectraForkEpoch
bCfg.BellatrixForkEpoch = bellatrixForkEpoch
bCfg.CapellaForkEpoch = capellaForkEpoch
bCfg.DenebForkEpoch = denebForkEpoch
bCfg.ElectraForkEpoch = electraForkEpoch
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.AltairForkVersion)] = primitives.Epoch(100)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.BellatrixForkVersion)] = primitives.Epoch(200)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.CapellaForkVersion)] = primitives.Epoch(300)
@@ -47,29 +47,83 @@ func TestGossipTopicMappings_CorrectBlockType(t *testing.T) {
pMessage := GossipTopicMappings(BlockSubnetTopicFormat, 0)
_, ok := pMessage.(*ethpb.SignedBeaconBlock)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, 0)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, 0)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, 0)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Altair Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockAltair)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Bellatrix Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, BellatrixForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockBellatrix)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Capella Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, CapellaForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockCapella)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Deneb Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, DenebForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockDeneb)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Electra Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, ElectraForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.AttestationElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashingElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProofElectra)
assert.Equal(t, true, ok)
}

View File

@@ -138,7 +138,7 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
}
s.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnecting)
if err := reqFunc(context.TODO(), conn.RemotePeer()); err != nil && err != io.EOF {
if err := reqFunc(context.TODO(), conn.RemotePeer()); err != nil && !errors.Is(err, io.EOF) {
log.WithError(err).Trace("Handshake failed")
disconnectFromPeer()
return

View File

@@ -2,14 +2,10 @@ package p2p
import (
"context"
"runtime"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
)
const backOffCounter = 50
// filterNodes wraps an iterator such that Next only returns nodes for which
// the 'check' function returns true. This custom implementation also
// checks for context deadlines so that in the event the parent context has
@@ -28,21 +24,13 @@ type filterIter struct {
// Next looks up for the next valid node according to our
// filter criteria.
func (f *filterIter) Next() bool {
lookupCounter := 0
for f.Iterator.Next() {
// Do not excessively perform lookups if we constantly receive non-viable peers.
if lookupCounter > backOffCounter {
lookupCounter = 0
runtime.Gosched()
time.Sleep(pollingPeriod)
}
if f.Context.Err() != nil {
return false
}
if f.check(f.Node()) {
return true
}
lookupCounter++
}
return false
}

View File

@@ -18,7 +18,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/features"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)

Some files were not shown because too many files have changed in this diff.